Journal of Electronics, Electromedical Engineering, and Medical Informatics
http://jeeemi.org/index.php/jeeemi
The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed scientific journal that publishes research results within the Journal's focus areas. The Journal is published by the Department of Electromedical Engineering, Health Polytechnic of Surabaya, Ministry of Health, Indonesia. The Journal's role is to facilitate contact between research centers and industry. The Editors aim to publish high-quality scientific papers presenting the work of significant scientific teams, experienced and well-established authors, as well as postgraduate students and beginning researchers. All articles undergo anonymous review by at least two independent expert reviewers prior to publication on the Journal's website.

Publisher: Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA
Language: en-US
ISSN: 2656-8632

Authors who publish with this journal agree to the following terms:

1. Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/) that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., posting it to an institutional repository or publishing it in a book), with an acknowledgement of its initial publication in this journal.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their websites) prior to and during the submission process, as this can lead to productive exchanges, as well as earlier and greater citation of published work (see The Effect of Open Access, http://opcit.eprints.org/oacitation-biblio.html).

Multispectral Classification based on H₂O and H₂O with NaOH Using Image Segmentation and Ensemble Learning EfficientNetV2, ResNet50, MobileNetV3
http://jeeemi.org/index.php/jeeemi/article/view/1016
Multispectral imaging has become a promising approach to liquid classification, particularly for distinguishing visually similar but subtly spectrally distinct solutions, such as pure water (H₂O) and water mixed with sodium hydroxide (H₂O with NaOH). This study proposes a classification system based on image segmentation and deep learning, utilizing three leading Convolutional Neural Network (CNN) architectures: ResNet50, EfficientNetV2, and MobileNetV3. Before classification, each multispectral image was processed through color segmentation in HSV space to highlight the dominant spectral response, especially in the hue range of 110-170. The models were trained using a data augmentation scheme and optimized with the Adam algorithm, a batch size of 32, and a sigmoid activation function. The dataset consists of 807 images, including 295 H₂O images and 512 H₂O with NaOH images, divided into training (64%), validation (16%), and testing (20%) sets. Experimental results show that ResNet50 achieves the highest performance, with an accuracy of 93.83% and an F1-score of 93.67%, particularly in identifying alkaline pollution. EfficientNetV2 achieved the lowest loss (0.2001) and exhibited balanced performance across classes, while MobileNetV3, despite being a lightweight model, remained competitive with a recall of 0.97 on the H₂O with NaOH class. Further evaluation with Grad-CAM reveals that all models focus on the most critical spectral areas of the segmented images. These findings support the effectiveness of combining color-based segmentation and CNNs in the spectral classification of liquids. This research is expected to serve as a stepping stone toward an efficient and accurate automatic liquid classification system for both laboratory and industrial applications.

Authors: Melinda Melinda, Yunidar Yunidar, Zulhelmi Zulhelmi, Arya Suyanda, Lailatul Qadri Zakaria, W.K Wong
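As an illustration of the HSV color-segmentation step described above, here is a minimal Python/OpenCV sketch of masking in the stated hue range of 110-170; the file names and the saturation/value bounds are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Load one multispectral frame (file name is a placeholder).
image = cv2.imread("sample_frame.png")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Keep pixels whose hue falls in the 110-170 range reported in the abstract.
# Saturation/value lower bounds here are illustrative assumptions.
lower = np.array([110, 40, 40])
upper = np.array([170, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Apply the mask so only the dominant spectral region feeds the CNN.
segmented = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite("segmented.png", segmented)
```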
Copyright (c) 2025 Melinda Melinda, Yunidar Yunidar, Zulhelmi Zulhelmi, Arya Suyanda, Lailatul Qadri Zakaria, W.K Wong
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-10 | Vol. 7 No. 4 | pp. 1045-1059 | DOI: 10.35882/jeeemi.v7i4.1016

Unified Deep Architectures for Real-Time Object Detection and Semantic Reasoning in Autonomous Vehicles
http://jeeemi.org/index.php/jeeemi/article/view/813
The development of autonomous vehicles (AVs) has revolutionized the transportation industry, promising to boost mobility, lessen traffic, and increase road safety. However, the complexity of the driving environment and the requirement for real-time processing of vast amounts of sensor data present serious difficulties for AV systems. Researchers have investigated various computer vision approaches, such as object detection, lane detection, and traffic sign recognition, to overcome these issues. This research presents an integrated approach to autonomous vehicle perception, combining real-time object detection, semantic segmentation, and classification within a unified deep learning architecture. Our approach leverages the strengths of existing frameworks, including MultiNet's real-time semantic reasoning capabilities, the fast-encoding methods of PointPillars for identifying objects in point clouds, and a reliable one-stage monocular 3D object detection system. The proposed model improves computational efficiency and accuracy by utilizing a shared encoder and task-specific decoders that perform classification, detection, and segmentation concurrently. The architecture is evaluated against challenging datasets, illustrating outstanding achievements in terms of speed and accuracy, suitable for real-time applications in autonomous driving. This integration promises significant advancements in the perception systems of autonomous vehicles, providing in-depth knowledge of the vehicle's environment through efficient deep learning techniques. Our model uses YOLOv8 and MultiNet and, during training, achieved 93.5% accuracy, 92.7% precision, 82.1% recall, and 72.9% mAP.

Authors: Vishal Aher, Satish Jondhale, Balasaheb Agarkar, Sachin Chaudhari
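To make the shared-encoder, task-specific-decoder idea concrete, here is a minimal PyTorch sketch; layer sizes and head designs are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    """One backbone feeds three task-specific heads (sizes are illustrative)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(                # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(             # image-level classification head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )
        self.detector = nn.Conv2d(64, 5, 1)          # per-cell box head: 4 coords + objectness
        self.segmenter = nn.Sequential(              # dense segmentation head
            nn.ConvTranspose2d(64, 32, 4, stride=4), nn.Conv2d(32, num_classes, 1)
        )

    def forward(self, x):
        feats = self.encoder(x)                      # computed once, shared by all tasks
        return self.classifier(feats), self.detector(feats), self.segmenter(feats)

cls, det, seg = SharedEncoderMultiTask()(torch.randn(1, 3, 128, 128))
```

Sharing the encoder is what buys the computational efficiency: features are computed once per frame and reused by every decoder.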
Copyright (c) 2025 Vishal Aher, Satish Jondhale, Balasaheb Agarkar, Sachin Chaudhari
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-10 | Vol. 7 No. 4 | pp. 1060-1073 | DOI: 10.35882/jeeemi.v7i4.813

Secure Image Transmission using Quantum-Resilient and Gate Network for Latent-Key Generation
http://jeeemi.org/index.php/jeeemi/article/view/1156
Recently, deep learning-based techniques have undergone rapid development, yielding promising results in various fields. As everyday tasks grow more complex, however, securing arbitrary-resolution JPEG image data requires more than deep learning alone. To overcome this, our research introduces a pioneering synergistic framework for a quantum-resistant deep learning technique, which is expected to provide next-generation robust security in dynamic-resolution, multi-JPEG-image joint compression-encryption. Our proposed framework features dual-parallel processing through a dynamic gate network, utilizing a convolutional neural network for specialization detailing and quantum-inspired transformations. These transformations leverage Riemann zeta functions for deep feature extraction, integrated with a chaotic sequence and dynamic iterations to generate a latent-fused chaotic key for joint image compression and encryption. Furthermore, the authenticity of an encrypted image is bound by a secure pattern derived from a random transform variance that anchors the cryptographic operations. The bound data are then transmitted through a Synergic Curve Key Exchange Engine fused with the well-known Chen attractor to generate non-invertible keys for transmission. Experimentally, image reconstruction quality measured by the structural similarity index metric was 98.82 ± 1.12. Security validation incorporates several metrics, including an entropy analysis quantifying resistance against differential and statistical attacks, with a yield of 7.9980 ± 0.0015. In conclusion, the implementation uniquely combines a latent-fused chaotic key with improved key-space analysis for discrete cosine transform quantization and authenticated encryption, establishing an adversarial-resistant pipeline that simultaneously compresses data and validates integrity through pattern-bound authentication.

Authors: Malige Gangappa, Balla V V Satyanarayana, Dheeraj A
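The chaotic-sequence key idea can be illustrated with a simple logistic-map sketch in Python; this is a generic toy construction under assumed parameters, not the paper's latent-fused, zeta-function-based scheme.

```python
import numpy as np

def chaotic_key(seed: float, n: int, r: float = 3.99) -> np.ndarray:
    """Generate an n-byte key stream from a logistic map (illustrative scheme)."""
    x = seed
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)           # logistic map iteration, chaotic for r near 4
        out[i] = int(x * 256) % 256     # quantize the orbit to one byte
    return out

payload = np.frombuffer(b"example JPEG block", dtype=np.uint8)
key = chaotic_key(seed=0.4170, n=payload.size)
cipher = payload ^ key                  # XOR stream encryption (toy illustration)
assert bytes(cipher ^ key) == payload.tobytes()  # decryption recovers the data
```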
Copyright (c) 2025 Balla V V Satyanarayana, Malige Gangappa, Dheeraj A
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-06 | Vol. 7 No. 4 | pp. 1178-1198 | DOI: 10.35882/jeeemi.v7i4.1156

Automatic Target Recognition using Unmanned Aerial Vehicle Images with Proposed YOLOv8-SR and Enhanced Deep Super-Resolution Network
http://jeeemi.org/index.php/jeeemi/article/view/888
Modern surveillance necessitates automatic target recognition (ATR) to identify targets quickly and accurately in multiclass classification for unmanned aerial vehicles (UAVs), covering pedestrians, people, bicycles, cars, vans, trucks, tricycles, buses, and motors. The inadequate recognition rate in UAV target detection stems largely from the poor resolution of photos recorded from the UAVs' distinctive perspective. The VisDrone dataset used for image analysis consists of a total of 10,209 UAV photos. This research work presents a comprehensive framework specifically for multiclass target classification using VisDrone UAV imagery. YOLOv8-SR, which stands for "You Only Look Once Version 8 with Super-Resolution," is a developed model that builds on the YOLOv8s model with the Enhanced Deep Super-Resolution Network (EDSR). YOLOv8-SR uses the EDSR to convert low-resolution images to high-resolution images, allowing it to estimate pixel values for better processing. The high-resolution images generated by the EDSR model achieved a Peak Signal-to-Noise Ratio (PSNR) of 25.32 and a Structural Similarity Index (SSIM) of 0.781. The YOLOv8-SR model's precision is 63.44%, recall is 46.64%, F1-score is 52.69%, mean average precision (mAP@50) is 51.58%, and mAP@50-95 is 50.67% over the range of confidence thresholds. The investigation fundamentally transforms the precision and effectiveness of ATR, indicating a future in which ingenuity overcomes obstacles once considered insurmountable. This development is characterized by the use of an enhanced deep super-resolution network to produce super-resolution images from low-resolution inputs. The YOLOv8-SR model, a sophisticated version of the YOLOv8s framework, is key to this breakthrough. By amalgamating the EDSR methodology with the advanced YOLOv8-SR framework, the system generates high-resolution images abundant in detail, markedly exceeding the informational quality of their low-resolution versions.

Authors: Gangeshwar Mishra, Rohit Tanwar, Prinima Gupta
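A minimal sketch of the super-resolve-then-detect pipeline follows, assuming the OpenCV contrib `dnn_superres` EDSR model and the `ultralytics` YOLOv8 package; the weight files and image path are placeholders, and this is not the authors' trained YOLOv8-SR.

```python
import cv2
from ultralytics import YOLO  # pip install ultralytics opencv-contrib-python

# Upscale a low-resolution UAV frame with a pretrained EDSR model
# (EDSR_x4.pb is a standard OpenCV contrib model file; path is a placeholder).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")
sr.setModel("edsr", 4)
low_res = cv2.imread("uav_frame.jpg")
high_res = sr.upsample(low_res)

# Run detection on the super-resolved frame; yolov8s.pt is the small YOLOv8 checkpoint.
detector = YOLO("yolov8s.pt")
results = detector.predict(high_res, conf=0.25)
for box in results[0].boxes:
    print(int(box.cls), float(box.conf))  # class id and confidence per detection
```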
Copyright (c) 2025 Gangeshwar Mishra, Rohit Tanwar, Prinima Gupta
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-15 | Vol. 7 No. 4 | pp. 1240-1258 | DOI: 10.35882/jeeemi.v7i4.888

Heart Disease Classification Using Random Forest and Fox Algorithm as Hyperparameter Tuning
http://jeeemi.org/index.php/jeeemi/article/view/932
Heart disease remains the leading cause of death worldwide, making early and accurate diagnosis crucial for reducing mortality and improving patient outcomes. Traditional diagnostic approaches often suffer from subjectivity, delay, and high costs. Therefore, an effective and automated classification system is necessary to assist medical professionals in making more accurate and timely decisions. This study aims to develop a heart disease classification model using Random Forest (RF), optimized through the FOX algorithm for hyperparameter tuning, to improve predictive performance and reliability. The main contribution of this research lies in the integration of the FOX metaheuristic optimization algorithm with the RF classifier. FOX, inspired by fox hunting behavior, balances exploration and exploitation in searching for the optimal hyperparameters. The proposed RF-FOX model is evaluated on the UCI Heart Disease dataset consisting of 303 instances and 13 features. Several preprocessing steps were conducted, including label encoding, outlier removal, missing value imputation, normalization, and class balancing using SMOTE-NC. FOX was used to optimize six RF hyperparameters across a defined search space. The experimental results demonstrate that the RF-FOX model achieved superior performance compared to standard RF and other hybrid optimization methods. With a training accuracy of 100% and testing accuracy of 97.83%, the model also attained precision (97.83%), recall (97.88%), and F1-score (97.89%). It significantly outperformed the RF-GS, RF-RS, RF-PSO, RF-BA, and RF-FA models in all evaluation metrics. In conclusion, the RF-FOX model proves highly effective for heart disease classification, providing enhanced accuracy, reduced misclassification, and clinical applicability. This approach not only optimizes classifier performance but also supports medical decision-making with interpretable and reliable outcomes. Future work may involve validating the model on more diverse datasets to further ensure its generalizability and robustness.

Authors: Afidatul Masbakhah, Umu Sa'adah, Mohamad Muslikh
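The following Python sketch shows the skeleton of metaheuristic hyperparameter tuning for an RF classifier; the FOX-specific exploration/exploitation update rules are omitted (a random-sampling stand-in is used), and the dataset, search ranges, and population size are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in dataset, not UCI Heart Disease
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)

def sample_params():
    # Two of the six tuned hyperparameters, with assumed search ranges.
    return {"n_estimators": int(rng.integers(50, 500)),
            "max_depth": int(rng.integers(2, 20))}

# Population-based search skeleton: score candidates, keep the best.
# The real FOX moves (fox-hunting-inspired position updates) would replace resampling.
best_score, best_params = -np.inf, None
population = [sample_params() for _ in range(10)]
for _ in range(5):                      # iterations of the metaheuristic
    for params in population:
        score = cross_val_score(RandomForestClassifier(**params, random_state=0),
                                X, y, cv=3).mean()
        if score > best_score:
            best_score, best_params = score, params
    population = [sample_params() for _ in range(10)]  # FOX update step goes here

print(best_params, round(best_score, 4))
```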
Copyright (c) 2025 Afidatul Masbakhah, Umu Sa'adah, Mohamad Muslikh
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-08-01 | Vol. 7 No. 4 | pp. 964-976 | DOI: 10.35882/jeeemi.v7i4.932

Hybrid CNN–ViT Model for Breast Cancer Classification in Mammograms: A Three-Phase Deep Learning Framework
http://jeeemi.org/index.php/jeeemi/article/view/920
Breast cancer is one of the leading causes of death among women worldwide. Early and accurate detection plays a vital role in improving survival rates and guiding effective treatment. In this study, we propose a deep learning-based model for automatic breast cancer detection using mammogram images. The model is divided into three phases: preprocessing, segmentation, and classification. The first two phases, image enhancement and segmentation, were developed and validated in our previous works. Both phases were designed in a robust manner using learning networks; the use of VGG-16 in preprocessing and U-Net in segmentation helps enhance the overall classification performance. In this paper, we focus on the classification phase and introduce a novel hybrid deep learning-based model that combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). This model captures both fine-grained image details and the broader global context, making it highly effective for distinguishing between benign and malignant breast tumors. We also include attention-based feature fusion and Grad-CAM visualizations to make predictions more explainable for clinical use and reference. The model was tested on multiple benchmark datasets (DDSM, INbreast, and MIAS) and a combination of all three, achieving excellent results, including 100% accuracy on MIAS and over 99% accuracy on the other datasets. Compared to recent deep learning models, our method outperforms existing approaches in both accuracy and reliability. This research offers a promising step toward supporting radiologists with intelligent tools that can improve the speed and accuracy of breast cancer diagnosis.

Authors: Vandana Saini, Meenu Khurana, Rama Krishna Challa
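A minimal PyTorch sketch of the CNN+ViT hybrid idea follows: a convolutional branch for fine local detail, a transformer branch over patch tokens for global context, and concatenation before the classifier. All dimensions and depths are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HybridCNNViT(nn.Module):
    """Concatenate CNN (local detail) and ViT-style (global context) features."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(4))          # local features
        vit_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4,
                                               batch_first=True)
        self.patch_embed = nn.Conv2d(1, 64, kernel_size=16, stride=16)
        self.vit = nn.TransformerEncoder(vit_layer, num_layers=2)  # global context
        self.head = nn.Linear(32 * 16 + 64, 2)    # benign vs malignant

    def forward(self, x):
        local = self.cnn(x).flatten(1)                           # (B, 32*4*4)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, 64)
        global_ctx = self.vit(tokens).mean(dim=1)                # (B, 64) pooled tokens
        return self.head(torch.cat([local, global_ctx], dim=1))

logits = HybridCNNViT()(torch.randn(2, 1, 224, 224))  # grayscale mammogram batch
```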
Copyright (c) 2025 Vandana Saini, Meenu Khurana, Rama Krishna Challa
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-08-07 | Vol. 7 No. 4 | pp. 977-990 | DOI: 10.35882/jeeemi.v7i4.920

A Reproducible Workflow for Liver Volume Segmentation and 3D Model Generation Using Open-Source Tools
http://jeeemi.org/index.php/jeeemi/article/view/1086
Complex liver resections related to hepatic tumors represent a major surgical challenge that requires precise preoperative planning supported by reliable three-dimensional (3D) anatomical models. In this context, accurate volumetric segmentation of the liver is a critical prerequisite to ensure the fidelity of printed models and to optimize surgical decision-making. This study compares different segmentation techniques integrated into open-source software to identify the most suitable approach for clinical application in resource-limited settings. Three semi-automatic methods (region growing, thresholding, and contour interpolation) were tested using the 3D Slicer platform and compared with a proprietary automatic method (Hepatic VCAR, GE Healthcare) and a manual segmentation reference, considered the gold standard. Ten anonymized abdominal CT volumes from the Medical Segmentation Decathlon dataset, encompassing various hepatic pathologies, were used to assess and compare the performance of each technique. Evaluation metrics included the Dice similarity coefficient (Dice), Hausdorff distance (HD), root mean square error (RMS), standard deviation (SD), and colorimetric surface discrepancy maps, enabling both quantitative and qualitative analysis of segmentation accuracy. Among the tested methods, the semi-automatic region growing approach demonstrated the highest agreement with manual segmentation (Dice = 0.935 ± 0.013; HD = 4.32 ± 0.48 mm), surpassing both the other semi-automatic techniques and the automatic proprietary method. These results suggest that the region growing method implemented in 3D Slicer offers a reliable, accurate, and reproducible workflow for generating 3D liver models, particularly in surgical environments with limited access to advanced commercial solutions. The proposed methodology can potentially improve surgical planning, enhance training through realistic patient-specific models, and facilitate broader adoption of 3D printing in hepatobiliary surgery worldwide.

Authors: Badreddine Labakoum, Hamid El Malali, Amr Farhan, Azeddine Mouhsen, Aissam Lyazidi
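For reference, the Dice similarity coefficient used above has a standard definition, 2|A∩B| / (|A| + |B|); a minimal NumPy sketch with toy masks:

```python
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks standing in for an automatic liver segmentation and its manual reference.
auto = np.zeros((64, 64), dtype=np.uint8); auto[10:50, 10:50] = 1
manual = np.zeros((64, 64), dtype=np.uint8); manual[12:52, 12:52] = 1
print(round(dice_coefficient(auto, manual), 3))
```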
Copyright (c) 2025 Badreddine Labakoum, Hamid El Malali, Amr Farhan, Azeddine Mouhsen, Aissam Lyazidi
https://creativecommons.org/licenses/by-sa/4.0
2025-09-012025-09-01741028104410.35882/jeeemi.v7i4.1086BRU-SOAT: Brain Tissue Segmentation via Deep Learning based Sailfish Optimization and Dual Attention Segnet
http://jeeemi.org/index.php/jeeemi/article/view/795
Automated segmentation of brain tissue into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from magnetic resonance imaging (MRI) plays a crucial role in diagnosing neurological disorders such as Alzheimer's disease, epilepsy, and multiple sclerosis. A key challenge in brain tissue segmentation (BTS) is accurately distinguishing boundaries between GM, WM, and CSF due to intensity overlaps and noise in MRI images. To overcome these challenges, we propose a novel deep learning-based BRU-SOAT model for BTS using the BrainWeb dataset. Initially, brain MRI images undergo skull stripping to remove skull regions, followed by preprocessing with a Contrast Stretching Adaptive Wiener (CSAW) filter to improve image quality and reduce noise. The preprocessed images are fed into ResEfficientNet for fine feature extraction. After feature extraction, Sailfish Optimization (SFO) is employed to select the most relevant features while eliminating irrelevant ones. A Dual Attention SegNet (DAS-Net) then segments GM, CSF, and WM with high precision. The proposed BRU-SOAT model is assessed on its precision, F1-score, specificity, recall, accuracy, Jaccard Index, and Dice Index. The proposed BRU-SOAT model achieved a segmentation accuracy of 99.17% for brain tissue segmentation. Moreover, the proposed DAS-Net outperformed fuzzy c-means clustering, fuzzy consensus clustering, and U-Net methods, achieving 98.50% (CSF), 98.63% (GM), and 99.15% (WM), indicating improved segmentation accuracy. In conclusion, the BRU-SOAT model provides a robust and highly accurate framework for automated brain tissue segmentation, supporting improved clinical diagnosis and neuroimaging analysis.

Authors: Athur Shaik Ali Gousia Banu, Sumit Hazra
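As a rough approximation of the contrast-stretch-plus-adaptive-Wiener preprocessing idea (not the authors' CSAW filter), this Python sketch chains percentile contrast stretching with SciPy's standard Wiener filter; the percentiles and window size are assumptions.

```python
import numpy as np
from scipy.signal import wiener

def contrast_stretch_wiener(img: np.ndarray) -> np.ndarray:
    """Chain simple percentile contrast stretching with a Wiener filter.
    An approximation of the preprocessing idea, not the authors' CSAW filter."""
    lo, hi = np.percentile(img, (2, 98))             # stretch the central intensity range
    stretched = np.clip((img - lo) / (hi - lo + 1e-8), 0, 1)
    return wiener(stretched, mysize=5)               # adaptive noise suppression

slice_2d = np.random.rand(128, 128).astype(np.float64)  # stand-in MRI slice
print(contrast_stretch_wiener(slice_2d).shape)
```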
Copyright (c) 2025 Athur Shaik Ali Gousia Banu, Sumit Hazra
https://creativecommons.org/licenses/by-sa/4.0
http://jeeemi.org/index.php/jeeemi/article/view/1030
Gallbladder diseases, including gallstones, carcinoma, and adenomyomatosis, may cause severe complications if not identified correctly and in a timely manner. However, ultrasound image interpretation relies heavily on operator experience and may suffer from subjectivity and inconsistency. This study develops an automated and optimized classification model for gallbladder disease using ultrasound images, aiming to improve diagnostic reliability and efficiency. A key outcome of this research is a thorough assessment of how feature selection combined with hyperparameter tuning influences the accuracy of classical machine learning models operating on features extracted by a CNN. The proposed pipeline enhances diagnostic accuracy while remaining computationally efficient. The method involves extracting deep features from ultrasound images using a pre-trained VGG16 CNN model. The features are subsequently reduced using the SelectKBest method through Univariate Feature Selection. Multiple popular classification models, specifically SVM, Random Forest, KNN, and Logistic Regression, were tested using both default settings and hyperparameters adjusted through grid search. A complete evaluation of model performance was conducted on the test set, employing key performance indicators including accuracy, recall, precision, F1-score, and the area under the ROC curve. Evaluation results suggest that the SVM approach, combined with selected features and hyperparameter tuning, achieved the highest performance: 99.35% accuracy, 99.32% precision, 99.35% recall, and 99.33% F1-score, with a relatively short computation time of 18.4 seconds. In conclusion, feature selection and hyperparameter tuning significantly enhance classification performance, making the proposed method a promising candidate for clinical decision support in gallbladder disease diagnosis using ultrasound imaging.

Authors: Ryan Adhitama Putra, Gede Angga Pradipta, Putu Desiana Wulaning Ayu
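A minimal scikit-learn sketch of the SelectKBest-plus-grid-search stage follows, starting from features assumed to be already extracted by a pretrained VGG16 (random placeholders here); the candidate values of k, C, and the kernels are assumptions, not the paper's grid.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-in for deep features already extracted with a pretrained VGG16
# (e.g., 512-dim pooled activations); values here are random placeholders.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 512))
labels = rng.integers(0, 3, size=200)   # e.g., stones / carcinoma / adenomyomatosis

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),          # univariate feature selection
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {
    "select__k": [64, 128, 256],                 # assumed candidate feature counts
    "svm__C": [0.1, 1, 10],
    "svm__kernel": ["rbf", "linear"],
}, cv=3)
grid.fit(features, labels)
print(grid.best_params_, round(grid.best_score_, 3))
```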
Copyright (c) 2025 Ryan Adhitama Putra, Gede Angga Pradipta, Putu Desiana Wulaning Ayu
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-24 | Vol. 7 No. 4 | pp. 1089-1111 | DOI: 10.35882/jeeemi.v7i4.1030

Optimized EEG-Based Depression Detection and Severity Staging Using GAN-Augmented Neuro-Fuzzy and Deep Learning Models
http://jeeemi.org/index.php/jeeemi/article/view/1107
Detecting depression and identifying its severity remain challenging tasks, especially in diverse environments where fair and reliable outcomes are expected. This study addresses the problem with advanced machine learning models that achieve high accuracy and explainability, making the approach suitable for real-world depression screening and stage evaluation through EEG-based detection and staging. We optimized EEG-based depression detection through channel selection combined with machine-learning models. Channel selection was performed with Recursive Feature Elimination (RFE), whereby 11 major channels were identified, and the MLP classifier achieved 98.7% accuracy supported by AI explainability, outpacing XGBoost and LGBM by 5.2 to 8.2% across multiple datasets (n = 184 to 382) while demonstrating strong generalization (precision = 1.000, recall = 0.966). This makes the MLP a trustworthy BCI tool for real-world depression screening. We also examined assigning depression stages (Mild/Moderate/Severe) from EEG data with models trained with and without GAN-based augmentation (198 to 5,000 samples). CNNs performed well on Moderate-stage classification, while ANFIS maintained a firm accuracy of 98.34% with consistent metrics (precision/recall = 0.98) and AI explainability. GAN augmentation improved the classification of severe cases by 15%, indicating a productive marriage of neuro-fuzzy systems and synthetic data for precise stage determination. This is an important contribution to BCI research, offering a data-efficient and scalable framework for EEG-based depression diagnosis and severity evaluation, bridging competitive modeling and clinical applicability. This work lays down a pathway for the design of accessible and automated depression screening aids in both high-resource and low-resource settings.

Authors: Sudhir Dhekane, Anand Khandare
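A minimal scikit-learn sketch of the RFE-then-MLP workflow follows; the feature matrix, channel count, ranking estimator, and MLP size are illustrative assumptions (RFE needs a coefficient-exposing estimator, so a linear model ranks channels here).

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in EEG feature matrix: one column per channel-level feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))          # 32 candidate channels (placeholder data)
y = rng.integers(0, 2, size=200)        # depressed vs control labels

# Rank channels with RFE down to the 11 reported in the abstract.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=11)
X_reduced = rfe.fit_transform(X, y)
print("selected channels:", np.flatnonzero(rfe.support_))

# Train the MLP classifier on the reduced channel set.
clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64,),
                                                    max_iter=500, random_state=0))
clf.fit(X_reduced, y)
print("train accuracy:", round(clf.score(X_reduced, y), 3))
```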
Copyright (c) 2025 Sudhir Dhekane, Anand Khandare
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-29 | Vol. 7 No. 4 | pp. 1112-1129 | DOI: 10.35882/jeeemi.v7i4.1107

MCRNET-RS: Multi-Class Retinal Disease Classification using Deep Learning-based Residual Network-Rescaled
http://jeeemi.org/index.php/jeeemi/article/view/925
Retinal diseases are a major cause of vision impairment, leading to partial or complete blindness if undiagnosed. Early detection and accurate classification of these conditions are crucial for effective treatment and vision preservation. However, conventional diagnostic techniques are time-consuming and require professional assistance. Additionally, existing deep-learning models struggle with feature extraction and classification accuracy because of differences in image quality and disease severity. To overcome these challenges, a novel deep learning (DL)-based MCRNET-RS approach is proposed for multi-class retinal disease classification using fundus images. The gathered fundus images are pre-processed using a Savitzky-Golay Filter (SGF) to enhance and preserve essential structural details. The DL-based Residual Network-Rescaled (ResNet-RS) is used for hierarchical feature extraction toward accurate retinal disease classification. A multi-layer perceptron (MLP) is used to classify retinal diseases such as Diabetic Neuropathy (DN), Branch Retinal Vein Occlusion (BRVO), Diabetic Retinopathy (DR), Healthy, Macular Hole (MH), Myopia (MYA), Optic Disc Cupping (ODC), Age-Related Macular Degeneration (ARMD), Optic Disc Pit (ODP), and Tilted Superior Lateral Nerve (TSLN). The effectiveness of the proposed MCRNET-RS method was assessed using precision, recall, specificity, F1-score, and accuracy. The proposed MCRNET-RS approach achieves an overall accuracy of 98.17% and an F1-score of 95.99% for retinal disease classification. The proposed approach improved the total accuracy by 3.27%, 4.48%, and 4.28% compared to EyeDeep-Net, Two I/P VGG16, and IDL-MRDD, respectively. These results confirm that the proposed MCRNET-RS framework provides a strong, scalable, and highly accurate solution for automated retinal disease classification, thereby supporting early diagnosis and effective clinical decision-making.

Authors: Mohana Suganthi N, Arun M
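The Savitzky-Golay preprocessing step can be sketched with SciPy by filtering along each image axis in turn; the window length and polynomial order are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def savgol_smooth_image(img: np.ndarray, window: int = 7, order: int = 2) -> np.ndarray:
    """Apply a Savitzky-Golay filter along rows then columns to smooth an
    image while preserving structural detail (parameters are assumptions)."""
    rows = savgol_filter(img, window_length=window, polyorder=order, axis=0)
    return savgol_filter(rows, window_length=window, polyorder=order, axis=1)

fundus = np.random.rand(256, 256)       # stand-in fundus image channel
print(savgol_smooth_image(fundus).shape)
```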
Copyright (c) 2025 Mohana Suganthi N, Arun M
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-28 | Vol. 7 No. 4 | pp. 1130-1143 | DOI: 10.35882/jeeemi.v7i4.925

A PSO-SVM-Based Approach for Classifying ECG and EEG Biosignals in Seizure Detection
http://jeeemi.org/index.php/jeeemi/article/view/1159
Early identification of epileptic activity is essential for clinical analysis and preventing advancement of the disease. Despite the development of neurological diagnostic techniques, the current analysis of epileptic seizures still relies on visual interpretation of electroencephalogram (EEG) signals. Neurology specialists perform this examination manually to detect patterns, a process that is both challenging and time-consuming. Biomedical signals, such as EEG and electrocardiogram (ECG), are important tools for studying human brain disorders, particularly epilepsy. This paper aims to develop a system that automatically detects epileptic seizures using the discrete wavelet transform (DWT), particle swarm optimization (PSO), and a support vector machine (SVM), thereby relieving clinicians of this challenging task. The approach has three steps. First, we introduce a method that uses a four-level DWT to extract important information from EEG and ECG signals by decomposing them into useful features. Second, we optimize the SVM classifier parameters using the PSO algorithm. Finally, we classify the extracted features using the optimized SVM. The system achieves an average accuracy of 97.92%, a recall of 100%, a specificity of 96.15%, and an AUC of 0.96. Our findings demonstrate the success of this method, showing that the PSO-optimized SVM performs significantly better in classification. In addition, our findings demonstrate the importance of using ECG signals as supplemental data. One implication of our work is the potential for creating wearable, real-time, customized seizure warning systems. In the future, these systems will be deployed on embedded platforms in real time and validated using larger datasets.

Authors: Lahcen Zougagh, Hamid Bouyghf, Mohammed Nahid
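A minimal sketch of the four-level DWT feature-extraction step using PyWavelets follows; the wavelet family and the per-band statistics are assumptions, not the paper's exact feature set.

```python
import numpy as np
import pywt  # pip install PyWavelets

def dwt_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4):
    """Four-level DWT decomposition; summarize each sub-band with simple
    statistics as classifier features (wavelet choice is an assumption)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:                  # [cA4, cD4, cD3, cD2, cD1]
        feats += [band.mean(), band.std(), np.abs(band).max()]
    return np.array(feats)

eeg_segment = np.random.randn(1024)     # stand-in EEG epoch
print(dwt_features(eeg_segment).shape)  # 5 sub-bands x 3 stats = 15 features
```

The resulting feature vector would then feed the PSO-tuned SVM described in the abstract.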
Copyright (c) 2025 Lahcen Zougagh, Hamid Bouyghf, Mohammed Nahid
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-28 | Vol. 7 No. 4 | pp. 1144-1157 | DOI: 10.35882/jeeemi.v7i4.1159

COV-TViT: An Improved Diagnostic System for COVID Pneumonitis Utilizing Transfer Learning and Vision Transformer on X-Ray Images
http://jeeemi.org/index.php/jeeemi/article/view/1037
COVID is a contagious lung ailment that remains a highly infectious respiratory disease with global health implications. Traditional diagnostic methods, such as RT-PCR, though widely used, are often constrained by high costs, limited accessibility, and delayed results. In contrast, radiology has proven advantageous for identifying lung abnormalities, and chest X-rays are the most preferred radiological method due to their non-invasive nature. To address these limitations, this study aims to develop an efficient, automated diagnostic system leveraging radiological imaging, specifically X-rays, which are cost-effective and widely available. The primary contribution of this research is the introduction of COV-TViT, a novel deep learning framework that integrates transfer learning with a Vision Transformer (ViT) architecture for the accurate detection of COVID pneumonitis. The proposed method is evaluated using the COVID-QU-Ex dataset, which comprises a balanced set of X-ray images from COVID-positive and healthy individuals. Methodologically, the system employs pre-trained convolutional neural networks (CNNs), specifically VGG16 and VGG19 (Visual Geometry Group), for transfer learning, followed by fine-tuning to enhance feature extraction. The ViT model, known for its self-attention mechanism, is then applied to capture complex spatial dependencies in the X-ray images, enabling robust classification. Experimental results demonstrate that COV-TViT achieves a classification accuracy of 98.96% and an F1-score of 96.21%, outperforming traditional CNN-based transfer learning models in several scenarios. These findings underscore the model's potential for high-precision COVID pneumonitis detection. The proposed approach significantly transforms classification tasks by using self-attention mechanisms to extract features and learn representations. Overall, the proposed COV-TViT diagnostic system can be advantageous in the fundamental identification of COVID pneumonitis.

Authors: Sunil Kumar, Amar Pal Yadav, Neha Nandal, Vishal Awasthi, Luxmi Sapra, Prachi Chhabra
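A minimal Keras sketch of the transfer-learning stage follows: an ImageNet-pretrained VGG16 backbone is frozen and topped with a small binary head. The head size and training settings are assumptions, and the ViT stage of the authors' full pipeline is not shown.

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16

# Freeze an ImageNet-pretrained VGG16 backbone and add a small binary head
# (COVID vs. normal). Head size and training settings are assumptions.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                   # transfer learning: reuse frozen features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# Fine-tuning would later unfreeze the top VGG blocks with a lower learning rate,
# before the ViT's self-attention stage in the authors' full pipeline.
```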
Copyright (c) 2025 Sunil Kumar, Amar Pal Yadav, Neha Nandal, Vishal Awasthi, Luxmi Sapra, Prachi Chhabra
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-06 | Vol. 7 No. 4 | pp. 1158-1177 | DOI: 10.35882/jeeemi.v7i4.1037

Multi-Modal Graph-Aware Transformer with Contrastive Fusion for Brain Tumor Segmentation
http://jeeemi.org/index.php/jeeemi/article/view/993
Accurate segmentation of brain tumors in MRI images is critical for early diagnosis, surgical planning, and effective treatment strategies. Traditional deep learning models such as U-Net, Attention U-Net, and Swin-UNet have demonstrated commendable success in tumor segmentation by leveraging Convolutional Neural Networks (CNNs) and transformer-based encoders. However, these models often fall short in effectively capturing complex inter-modality interactions and long-range spatial dependencies, particularly in tumor regions with diffuse or poorly defined boundaries. Additionally, they suffer from limited generalization capabilities and demand substantial computational resources. To overcome these limitations, a novel approach named Graph-Aware Transformer with Contrastive Fusion (GAT-CF) is introduced. This model enhances segmentation performance by integrating the spatial attention mechanisms of transformers with graph-based relational reasoning across multiple MRI modalities, namely T1, T2, FLAIR, and T1CE. The graph-aware structure models inter-slice and intra-slice relationships more effectively, promoting better structural understanding of tumor regions. Furthermore, a multi-modal contrastive learning strategy is employed to align semantic features and distinguish complementary modality-specific information, thereby improving the model's discriminative power. The fusion of these techniques facilitates improved contextual understanding and more accurate boundary delineation in complex tumor regions. When evaluated on the BraTS2021 dataset, the proposed GAT-CF model achieved a Dice score of 99.1% and an IoU of 98.4%, surpassing the performance of state-of-the-art architectures like Swin-UNet and SegResNet. It also demonstrated superior accuracy in detecting enhancing-tumor voxels and core tumor regions, highlighting its robustness, precision, and potential for clinical adoption in neuroimaging applications.

Authors: Rini Chowdhury, Prashant Kumar, R. Suganthi, V. Ammu, R. Evance Leethial, C. Roopa
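The multi-modal contrastive alignment idea can be illustrated with a generic InfoNCE loss between embeddings of two modalities of the same slice; this is a stand-in sketch with an assumed temperature, not the paper's exact contrastive fusion objective.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE loss aligning embeddings of two MRI modalities
    (e.g., T1 and FLAIR) from the same slice; matching slices are positives,
    all other pairs in the batch are negatives."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau          # cosine similarities between all pairs
    targets = torch.arange(z_a.size(0))   # matching slices sit on the diagonal
    return F.cross_entropy(logits, targets)

t1_emb = torch.randn(8, 128)     # batch of T1 slice embeddings (placeholder)
flair_emb = torch.randn(8, 128)  # matching FLAIR embeddings
print(info_nce(t1_emb, flair_emb).item())
```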
Copyright (c) 2025 Rini Chowdhury, R. Suganthi, V. Ammu, R. Evance Leethial, C. Roopa
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-15 | Vol. 7 No. 4 | pp. 1226-1239 | DOI: 10.35882/jeeemi.v7i4.993

DR-FEDPAM: Detection of Diabetic Retinopathy using Federated Proximal Averaging Model
http://jeeemi.org/index.php/jeeemi/article/view/915
Diabetic retinopathy (DR) is an eye condition caused by damage to the blood vessels of the retina due to high blood sugar levels, commonly associated with diabetes. Without proper treatment, it can lead to visual impairment or blindness. Traditional machine learning (ML) approaches for detecting diabetic retinopathy rely on centralized data aggregation, which raises significant privacy concerns and often encounters regulatory challenges. To address these issues, the DR-FEDPAM model is proposed for the detection of diabetic retinopathy. Initially, the images are preprocessed using a Median Filter (MeF) and Gaussian Star Filter (GaSF) to reduce noise and enhance image quality. The preprocessed images are then input into a federated proximal model. Federated Learning (FL) enables multiple local models to train on distributed devices without sharing raw data. After the local models process the data, their parameters are aggregated through a Global Federated Averaging (GFA) model. This global model combines the parameters from all local models to produce a unified model that classifies each image as either normal or diabetic retinopathy. The model's performance is evaluated using precision (PR), F1-score (F1), specificity (SP), recall (RE), and accuracy (AC). DR-FEDPAM achieves a balanced trade-off with 7.8 million parameters, 1.7 FLOPs, and an average inference time of 13.9 ms. The model improves overall accuracy by 5.44%, 1.89%, and 4.43% compared to AlexNet, ResNet, and APSO, respectively. Experimental results show that the proposed method achieves an accuracy of 98.36% in detecting DR.

Authors: Gaya Nair P, Lanitha B
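The global aggregation step rests on federated averaging: the server forms a weighted average of client model parameters without ever seeing raw images. A minimal PyTorch sketch, with toy linear models standing in for the hospital-local networks and a uniform weighting assumption:

```python
import torch

def federated_average(state_dicts, weights=None):
    """Weighted parameter averaging across client models (the FedAvg idea
    behind the global aggregation step; uniform weighting is an assumption)."""
    n = len(state_dicts)
    weights = weights or [1.0 / n] * n
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return avg

# Three hospital-local models of identical architecture (toy linear layers).
clients = [torch.nn.Linear(4, 2) for _ in range(3)]
global_state = federated_average([c.state_dict() for c in clients])
global_model = torch.nn.Linear(4, 2)
global_model.load_state_dict(global_state)  # unified model, no raw data shared
```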
Copyright (c) 2025 Gaya Nair P, Lanitha B
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-16 | Vol. 7 No. 4 | pp. 1259-1271 | DOI: 10.35882/jeeemi.v7i4.915

A Mattress-Integrated ECG System for Home Detection of Obstructive Sleep Apnea Through HRV Analysis Using Wavelet Transform and XGBoost Classification
http://jeeemi.org/index.php/jeeemi/article/view/1022
Obstructive Sleep Apnea (OSA) is a potentially life-threatening sleep disorder that often remains undiagnosed due to the complexity of conventional diagnostic methods such as polysomnography (PSG). Currently, there is a lack of accessible, non-invasive diagnostic solutions suitable for home use. This study proposes a novel approach to automated OSA detection using single-lead electrocardiogram (ECG) signals acquired through non-contact conductive fabric electrodes embedded in a mattress, enabling unobtrusive monitoring during sleep. The main contributions of the proposed study are a mattress-embedded contactless ECG monitoring system, eliminating the discomfort of traditional electrodes, and an advanced signal processing framework integrating wavelet decomposition with machine learning for precise OSA identification. ECG signals from 35 subjects (30 male, 5 female, aged 27-63 years) diagnosed with OSA were obtained from the PhysioNet Apnea-ECG database, originally sampled at 100 Hz and up-sampled to 250 Hz for consistency with experimental recordings from healthy volunteers tested in various sleep positions. Signals were recorded non-invasively during sleep in various body positions and processed using the Discrete Wavelet Transform (DWT) up to the third level of decomposition. The processing of ECG signals involved Heart Rate Variability (HRV) analysis, applied to extract information in the time domain, the frequency domain, and non-linear properties. By analyzing HRV on the respiratory sinus arrhythmia spectrum, the respiration signal was obtained as ECG-derived respiration (EDR). Feature selection was performed using ANOVA, resulting in a set of key features including respiratory rate, SD2, SDNN, LF/HF ratio, and pNN50. These features were classified using the XGBoost algorithm to determine the presence of OSA. The proposed system achieved a detection accuracy of 96.7%, demonstrating its potential for reliable home-based OSA diagnosis. This method improves comfort through non-contact sensing and supports early intervention by delivering timely alerts for high-risk patients.

Authors: Nada Fitrieyatul Hikmah, Rachmad Setiawan, Rima Amalia, Zain Budi Syulthoni, Dwi Oktavianto Wahyu Nugroho, Mu'afa Ali Syakir
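Two of the time-domain HRV features named above, SDNN and pNN50, have standard definitions over RR intervals; a minimal NumPy sketch with a synthetic RR series (RMSSD is included as another standard time-domain measure, not one from the paper's feature list):

```python
import numpy as np

def hrv_time_features(rr_ms: np.ndarray) -> dict:
    """Time-domain HRV features computed from RR intervals in milliseconds
    (standard definitions)."""
    diffs = np.diff(rr_ms)
    return {
        "SDNN": rr_ms.std(ddof=1),                          # overall variability
        "pNN50": 100.0 * np.mean(np.abs(diffs) > 50.0),     # % successive diffs > 50 ms
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),              # short-term variability
    }

# Stand-in RR series around 850 ms with mild variability.
rr = 850 + 30 * np.random.randn(300)
print({k: round(v, 2) for k, v in hrv_time_features(rr).items()})
```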
Copyright (c) 2025 Nada Fitrieyatul Hikmah, Rachmad Setiawan, Rima Amalia, Zain Budi Syulthoni, Dwi Oktavianto Wahyu Nugroho, Mu’afa Ali Syakir
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-16 | Vol. 7 No. 4 | pp. 1272-1288 | DOI: 10.35882/jeeemi.v7i4.1022

Optimizing Medical Logistics Networks: A Hybrid Bat-ALNS Approach for Multi-Depot VRPTW and Simultaneous Pickup-Delivery
http://jeeemi.org/index.php/jeeemi/article/view/1054
This paper tackles the multi-depot heterogeneous-fleet vehicle-routing problem with time windows and simultaneous pickup and delivery (MDHF-VRPTW-SPD), a variant that mirrors the growing complexity of modern healthcare logistics. The primary purpose of this study is to model this complex routing problem as a mixed-integer linear program and to develop and validate a novel hybrid metaheuristic, B-ALNS, capable of delivering robust, high-quality solutions. The proposed B-ALNS combines a discrete Bat Algorithm with Adaptive Large Neighborhood Search, where the bat component supplies frequency-guided diversification, while ALNS adaptively selects destroy and repair operators and exploits elite memory for focused intensification. Extensive experiments were conducted on twenty new benchmark instances (ranging from 48 to 288 customers), derived from Cordeau's data and enriched with pickups and a four-class fleet. Results show that B-ALNS attains a mean cost 1.15% lower than a standalone discrete BA and 2.78% lower than a simple LNS, achieving the best average cost on 17/20 instances and the global best solution in 85% of test instances. Statistical tests further confirm the superiority of the hybrid B-ALNS: a Friedman test and Wilcoxon signed-rank comparisons give p-values of 0.0013 versus BA and 0.0002 versus LNS, respectively. Although B-ALNS trades speed for quality (182.65 seconds average runtime versus 54.04 seconds for BA and 11.61 seconds for LNS), it produces markedly more robust solutions, with the lowest cost standard deviation and consistently balanced routes. These results demonstrate that the hybrid B-ALNS delivers statistically significant, high-quality solutions within tactical planning times, offering a practical decision-support tool for secure, cold-chain-compliant healthcare logistics.

Authors: Anass Taha, Said Elatar, Salim El Bazzi Mohamed, Abdelouahed Ait Ider, Lotfi Najdi
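The ALNS half of the hybrid can be sketched as a destroy/repair loop whose operator weights adapt to success; the Python skeleton below uses a trivial toy routing instance and omits the bat-guided diversification, acceptance criteria, and elite memory of the actual B-ALNS.

```python
import random

def alns(initial, cost, destroy_ops, repair_ops, iters=1000, seed=0):
    """Skeleton of Adaptive Large Neighborhood Search: destroy/repair pairs
    are drawn with weights that adapt to their success (generic scheme,
    not the paper's B-ALNS with bat-guided diversification)."""
    rng = random.Random(seed)
    best = current = initial
    w_d, w_r = [1.0] * len(destroy_ops), [1.0] * len(repair_ops)
    for _ in range(iters):
        i = rng.choices(range(len(destroy_ops)), weights=w_d)[0]
        j = rng.choices(range(len(repair_ops)), weights=w_r)[0]
        candidate = repair_ops[j](destroy_ops[i](current, rng), rng)
        if cost(candidate) < cost(current):      # accept improving moves only
            current = candidate
            w_d[i] += 0.5; w_r[j] += 0.5         # reward successful operators
        if cost(current) < cost(best):
            best = current
    return best

# Toy instance: order a list of "customers" to minimize adjacent Manhattan distance.
pts = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]
cost = lambda tour: sum(abs(tour[k][0] - tour[k+1][0]) + abs(tour[k][1] - tour[k+1][1])
                        for k in range(len(tour) - 1))
destroy = lambda t, rng: [p for p in t if p != rng.choice(t)]   # drop one node
repair = lambda t, rng: t + [p for p in pts if p not in t]      # reinsert at end
print(alns(pts, cost, [destroy], [repair]))
```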
Copyright (c) 2025 Anass Taha, Said Elatar , Salim El Bazzi Mohamed , Abdelouahed Ait Ider , Lotfi Najdi
https://creativecommons.org/licenses/by-sa/4.0
Vol. 7 No. 4 | pp. 991-1011 | DOI: 10.35882/jeeemi.v7i4.1054

MedProtect: Protecting Electronic Patient Data Using Interpolation-Based Medical Image Steganography
http://jeeemi.org/index.php/jeeemi/article/view/977
Electronic Patient Records (EPRs) represent critical elements of digital healthcare systems, as they contain confidential and sensitive medical information essential for patient care and clinical decision-making. Due to their sensitive nature, EPRs frequently face threats from unauthorized intrusions, security breaches, and malicious attacks. Safeguarding such information has emerged as an urgent concern in medical data security. Steganography offers a compelling solution by hiding confidential data within conventional carrier objects such as medical imagery. Unlike traditional cryptographic methods that merely alter the data representation, steganography conceals the existence of the information itself, thereby providing discretion, security, and resilience against unauthorized disclosure. However, embedding patient information inside medical images introduces a new challenge: the method must maintain the image's visual fidelity to prevent compromising diagnostic precision, while ensuring reversibility for complete restoration of both the original imagery and the concealed information. To address these challenges, this research proposes MedProtect, a reversible steganographic framework customized for medical applications. The MedProtect procedure integrates pixel interpolation techniques and center-folding-based data transformation to embed sensitive records into medical imagery. This combination ensures accurate recovery of the original image while maintaining the quality of the resulting image. To quantify the performance of MedProtect, this study evaluates two well-established image quality metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). The results show that the framework achieves PSNR values of 48.190 to 53.808 dB and SSIM scores between 0.9956 and 0.9980. These outcomes demonstrate the high level of visual fidelity and imperceptibility achieved by the proposed method, underscoring its effectiveness as a secure approach for protecting electronic patient records within medical imaging systems.

Authors: Aditya Rizki Muhammad, Irsyad Fikriansyah Ramadhan, Ntivuguruzwa Jean De La Croix, Tohari Ahmad, Dieudonne Uwizeye, Evelyne Kantarama
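PSNR, the fidelity metric reported above, has a standard definition from the mean squared error between cover and stego images; a minimal NumPy sketch with a toy LSB-style perturbation standing in for the embedding step:

```python
import numpy as np

def psnr(original: np.ndarray, stego: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between cover and stego images (dB)."""
    mse = np.mean((original.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
stego = cover.copy()
stego[::7, ::7] ^= 1                 # toy LSB-style perturbation from embedding
print(round(psnr(cover, stego), 2))  # high PSNR => near-imperceptible changes
```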
Copyright (c) 2025 Aditya Rizki Muhammad, Irsyad Fikriansyah Ramadhan, Ntivuguruzwa Jean De La Croix, Tohari Ahmad, Dieudonne Uwizeye, Evelyne Kantarama
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-09-01 | Vol. 7 No. 4 | pp. 1012-1027 | DOI: 10.35882/jeeemi.v7i4.977

Energy Conservation Clustering through Agent Nodes and Clusters (EECANC) for Wearable Health Monitoring and Smart Building Automation in Smart Hospitals using Wireless Sensor Networks
http://jeeemi.org/index.php/jeeemi/article/view/1082
Wireless Sensor Networks (WSNs) play a vital role in enabling real-time patient monitoring, medical device tracking, and automated management of building operations in smart hospitals. Wearable health sensors and hospital automation systems produce a constant flow of data, resulting in elevated energy usage and network congestion. This study introduces an advanced framework named Energy Conservation via Clustering by Agent Nodes and Clusters (EECANC), designed to improve energy efficiency, extend network longevity, and facilitate smart building automation in hospitals. The EECANC protocol amalgamates wearable medical monitoring (oxygen saturation, body temperature, heart rate, and motion tracking) with intelligent hospital building automation (HVAC regulation, lighting management, and security surveillance) through a hierarchical WSN-based clustering system. By reducing routing and data redundancy, cluster heads (CHs) and agent nodes (ANs) reduce redundant transmissions and extend the life of sensor batteries. EECANC limits direct interaction with the hospital's Smart Building Management System, thereby reducing emergency response times and improving energy efficiency throughout the hospital. The efficiency of EECANC was proven by comparing its performance with other existing clustering protocols, including EECAS, ECRRS, EA-DB-CRP, and IEE-LEACH. The protocol achieved a successful packet delivery rate of 83.33% to the base station, matching EECAS (83.33%) and exceeding ECRRS (48.45%), EA-DB-CRP (54.37%), and IEE-LEACH (59.13%). The system demonstrated better energy utilization, resulting in longer network longevity and lower transmission costs, especially during high-traffic medical events. It is clear from the first- and last-node death times that EECANC is the most energy-efficient protocol, significantly better than the other available methods. The EECANC model supports hospital automation, enhances patient safety, and promotes sustainability, providing a cost-effective and energy-efficient solution for future smart healthcare facilities.

Authors: Sulalah Qais Mirkar, Shilpa Shinde
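The energy-aware cluster-head election underlying such protocols can be illustrated with a simple Python sketch that promotes the highest-residual-energy nodes to CHs; the election rule and CH ratio here are illustrative assumptions, not EECANC's actual CH/agent-node logic.

```python
import random

def elect_cluster_heads(nodes, ratio=0.1):
    """Pick the highest-residual-energy nodes as cluster heads (CHs), the
    energy-aware selection idea behind hierarchical WSN clustering;
    the exact CH/agent-node election rules here are illustrative."""
    k = max(1, int(len(nodes) * ratio))
    return sorted(nodes, key=lambda n: n["energy"], reverse=True)[:k]

random.seed(0)
sensors = [{"id": i, "energy": random.uniform(0.2, 2.0)} for i in range(50)]
chs = elect_cluster_heads(sensors)
print([n["id"] for n in chs])   # CHs aggregate member data before base-station relay
```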
Copyright (c) 2025 Sulalah Qais Mirkar, Shilpa Shinde
https://creativecommons.org/licenses/by-sa/4.0
Published 2025-10-15 | Vol. 7 No. 4 | pp. 1199-1225 | DOI: 10.35882/jeeemi.v7i4.1082