Journal of Electronics, Electromedical Engineering, and Medical Informatics https://jeeemi.org/index.php/jeeemi <p>The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed scientific journal aimed at publishing research results within the Journal's focus areas. The Journal is published by the Department of Electromedical Engineering, Health Polytechnic of Surabaya, Ministry of Health, Indonesia. The role of the Journal is to facilitate contact between research centers and industry. The aspiration of the Editors is to publish high-quality scientific professional papers presenting the work of significant scientific teams, experienced and well-established authors, as well as postgraduate students and beginning researchers. All articles are subject to an anonymous review process by at least two independent expert reviewers prior to publication on the Journal of Electronics, Electromedical Engineering, and Medical Informatics website.</p> Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA en-US Journal of Electronics, Electromedical Engineering, and Medical Informatics 2656-8632 <p><strong>Authors who publish with this journal agree to the following terms:</strong></p> <ol> <li class="show">Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International <a title="CC BY SA" href="https://creativecommons.org/licenses/by-sa/4.0/" target="_blank" rel="noopener">(CC BY-SA 4.0)</a>&nbsp;license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li> <li class="show">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an
acknowledgement of its initial publication in this journal.</li> <li class="show">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See&nbsp;<a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li> </ol> Multispectral Classification based on H₂O and H₂O with NaOH Using Image Segmentation and Ensemble Learning EfficientNetV2, ResNet50, MobileNetV3 https://jeeemi.org/index.php/jeeemi/article/view/1016 <p>Multispectral imaging has become a promising approach in liquid classification, particularly in distinguishing visually similar but subtly spectrally distinct solutions, such as pure water (H₂O) and water mixed with sodium hydroxide (H₂O with NaOH). This study proposes a classification system based on image segmentation and deep learning, utilizing three leading Convolutional Neural Network (CNN) architectures: ResNet50, EfficientNetV2, and MobileNetV3. Before classification, each multispectral image was processed through color segmentation in HSV space to highlight the dominant spectral components, especially in the hue range of 110 to 170. The model was trained using a data augmentation scheme and optimized with the Adam algorithm, a batch size of 32, and a sigmoid activation function. The dataset consists of 807 images, including 295 H₂O images and 512 H₂O with NaOH images, which were divided into training (64%), validation (16%), and testing (20%) data. Experimental results show that ResNet50 achieves the highest performance, with an accuracy of 93.83% and an F1-score of 93.67%, particularly in identifying alkaline pollution.
EfficientNetV2 achieved the lowest loss (0.2001) and exhibited balanced performance across classes, while MobileNetV3, despite being a lightweight model, remained competitive with a recall of 0.97 in the H₂O with NaOH class. Further evaluation with Grad-CAM reveals that all models focus on the most critical spectral areas of the segmentation results. These findings support the effectiveness of combining color-based segmentation and CNNs in the spectral classification of liquids. This research is expected to serve as a stepping stone in the development of an efficient and accurate automatic liquid classification system for both laboratory and industrial applications.</p> Melinda Melinda Yunidar Yunidar Zulhelmi Zulhelmi Arya Suyanda Lailatul Qadri Zakaria W.K Wong Copyright (c) 2025 Melinda Melinda, Yunidar Yunidar, Zulhelmi Zulhelmi, Arya Suyanda, Lailatul Qadri Zakaria, W.K Wong https://creativecommons.org/licenses/by-sa/4.0 2025-09-10 2025-09-10 7 4 1045 1059 10.35882/jeeemi.v7i4.1016 Unified Deep Architectures for Real-Time Object Detection and Semantic Reasoning in Autonomous Vehicles https://jeeemi.org/index.php/jeeemi/article/view/813 <p>The development of autonomous vehicles (AVs) has revolutionized the transportation industry, promising to boost mobility, lessen traffic, and increase safety on roads. However, the complexity of the driving environment and the requirement for real-time processing of vast amounts of sensor data present serious difficulties for AV systems. Various computer vision approaches, such as object detection, lane detection, and traffic sign recognition, have been investigated by researchers to overcome these issues. This research presents an integrated approach to autonomous vehicle perception, combining real-time object detection, semantic segmentation, and classification within a unified deep learning architecture.
Our approach leverages the strengths of existing frameworks, including MultiNet’s real-time semantic reasoning capabilities, the fast-encoding methods of PointPillars for identifying objects from point clouds, and a reliable one-stage monocular 3D object detection system. The proposed model aims to improve computational efficiency and accuracy by utilizing a shared encoder and task-specific decoders that perform classification, detection, and segmentation concurrently. The architecture is evaluated on challenging datasets, demonstrating strong speed and accuracy suitable for real-time applications in autonomous driving. This integration promises significant advancements in the perception systems of autonomous vehicles, providing in-depth knowledge of the vehicle’s environment through efficient deep learning techniques. Our model uses YOLOv8 and MultiNet; during training it achieved 93.5% accuracy, 92.7% precision, 82.1% recall, and 72.9% mAP.</p> Vishal Aher Satish Jondhale Balasaheb Agarkar Sachin Chaudhari Copyright (c) 2025 Vishal Aher, Satish Jondhale, Balasaheb Agarkar, Sachin Chaudhari https://creativecommons.org/licenses/by-sa/4.0 2025-09-10 2025-09-10 7 4 1060 1073 10.35882/jeeemi.v7i4.813 Heart Disease Classification Using Random Forest and Fox Algorithm as Hyperparameter Tuning https://jeeemi.org/index.php/jeeemi/article/view/932 <p>Heart disease remains the leading cause of death worldwide, making early and accurate diagnosis crucial for reducing mortality and improving patient outcomes. Traditional diagnostic approaches often suffer from subjectivity, delay, and high costs. Therefore, an effective and automated classification system is necessary to assist medical professionals in making more accurate and timely decisions.
This study aims to develop a heart disease classification model using Random Forest, optimized through the FOX algorithm for hyperparameter tuning, to improve predictive performance and reliability. The main contribution of this research lies in the integration of the FOX metaheuristic optimization algorithm with the RF classifier. FOX, inspired by fox hunting behavior, balances exploration and exploitation in searching for the optimal hyperparameters. The proposed RF-FOX model is evaluated on the UCI Heart Disease dataset consisting of 303 instances and 13 features. Several preprocessing steps were conducted, including label encoding, outlier removal, missing value imputation, normalization, and class balancing using SMOTE-NC. FOX was used to optimize six RF hyperparameters across a defined search space. The experimental results demonstrate that the RF-FOX model achieved superior performance compared to standard RF and other hybrid optimization methods. With a training accuracy of 100% and testing accuracy of 97.83%, the model also attained precision (97.83%), recall (97.88%), and F1-score (97.89%). It significantly outperformed RF-GS, RF-RS, RF-PSO, RF-BA, and RF-FA models in all evaluation metrics. In conclusion, the RF-FOX model proves highly effective for heart disease classification, providing enhanced accuracy, reduced misclassification, and clinical applicability. This approach not only optimizes classifier performance but also supports medical decision-making with interpretable and reliable outcomes. 
Future work may involve validating the model on more diverse datasets to further ensure its generalizability and robustness.</p> Afidatul Masbakhah Umu Sa'adah Mohamad Muslikh Copyright (c) 2025 Afidatul Masbakhah, Umu Sa'adah, Mohamad Muslikh https://creativecommons.org/licenses/by-sa/4.0 2025-08-01 2025-08-01 7 4 964 976 10.35882/jeeemi.v7i4.932 Hybrid CNN–ViT Model for Breast Cancer Classification in Mammograms: A Three-Phase Deep Learning Framework https://jeeemi.org/index.php/jeeemi/article/view/920 <p>Breast cancer is one of the leading causes of death among women worldwide. Early and accurate detection plays a vital role in improving survival rates and guiding effective treatment. In this study, we propose a deep learning-based model for automatic breast cancer detection using mammogram images. The model is divided into three phases: preprocessing, segmentation, and classification. The first two phases, image enhancement and segmentation, were developed and validated in our previous works. Both phases were designed in a robust manner using learning networks; the use of VGG-16 in preprocessing and U-Net in segmentation helps enhance the overall classification performance. In this paper, we focus on the classification phase and introduce a novel hybrid deep learning-based model that combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). This model captures both fine-grained image details and the broader global context, making it highly effective for distinguishing between benign and malignant breast tumors. We also include attention-based feature fusion and Grad-CAM visualizations to make predictions more explainable for clinical use and reference. The model was tested on multiple benchmark datasets, DDSM, INbreast, and MIAS, and a combination of all three datasets, and achieved excellent results, including 100% accuracy on MIAS and over 99% accuracy on the other datasets.
Compared to recent deep learning models, our method outperforms existing approaches in both accuracy and reliability. This research offers a promising step toward supporting radiologists with intelligent tools that can improve the speed and accuracy of breast cancer diagnosis.</p> Vandana Saini Meenu Khurana Rama Krishna Challa Copyright (c) 2025 Vandana Saini, Meenu Khurana, Rama Krishna Challa https://creativecommons.org/licenses/by-sa/4.0 2025-08-07 2025-08-07 7 4 977 990 10.35882/jeeemi.v7i4.920 A Reproducible Workflow for Liver Volume Segmentation and 3D Model Generation Using Open-Source Tools https://jeeemi.org/index.php/jeeemi/article/view/1086 <p>Complex liver resections related to hepatic tumors represent a major surgical challenge that requires precise preoperative planning supported by reliable three-dimensional (3D) anatomical models. In this context, accurate volumetric segmentation of the liver is a critical prerequisite to ensure the fidelity of printed models and to optimize surgical decision-making. This study compares different segmentation techniques integrated into open-source software to identify the most suitable approach for clinical application in resource-limited settings. Three semi-automatic methods, region growing, thresholding, and contour interpolation, were tested using the 3D Slicer platform and compared with a proprietary automatic method (Hepatic VCAR, GE Healthcare) and a manual segmentation reference, considered the gold standard. Ten anonymized abdominal CT volumes from the Medical Segmentation Decathlon dataset, encompassing various hepatic pathologies, were used to assess and compare the performance of each technique. Evaluation metrics included the Dice similarity coefficient (Dice), Hausdorff distance (HD), root mean square error (RMS), standard deviation (SD), and colorimetric surface discrepancy maps, enabling both quantitative and qualitative analysis of segmentation accuracy. 
Among the tested methods, the semi-automatic region growing approach demonstrated the highest agreement with manual segmentation (Dice = 0.935 ± 0.013; HD = 4.32 ± 0.48 mm), surpassing both the other semi-automatic techniques and the automatic proprietary method. These results suggest that the region growing method implemented in 3D Slicer offers a reliable, accurate, and reproducible workflow for generating 3D liver models, particularly in surgical environments with limited access to advanced commercial solutions. The proposed methodology can potentially improve surgical planning, enhance training through realistic patient-specific models, and facilitate broader adoption of 3D printing in hepatobiliary surgery worldwide.</p> Badreddine Labakoum Hamid El Malali Amr Farhan Azeddine Mouhsen Aissam Lyazidi Copyright (c) 2025 Badreddine Labakoum, Hamid El Malali, Amr Farhan, Azeddine Mouhsen, Aissam Lyazidi https://creativecommons.org/licenses/by-sa/4.0 2025-09-01 2025-09-01 7 4 1028 1044 10.35882/jeeemi.v7i4.1086 BRU-SOAT: Brain Tissue Segmentation via Deep Learning based Sailfish Optimization and Dual Attention SegNet https://jeeemi.org/index.php/jeeemi/article/view/795 <p>Automated segmentation of brain tissue into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from magnetic resonance imaging (MRI) plays a crucial role in diagnosing neurological disorders such as Alzheimer’s disease, epilepsy, and multiple sclerosis. A key challenge in brain tissue segmentation (BTS) is accurately distinguishing the boundaries between GM, WM, and CSF due to intensity overlaps and noise in MRI images. To overcome these challenges, we propose a novel deep learning-based BRU-SOAT model for BTS using the BrainWeb dataset. Initially, brain MRI images undergo skull stripping to remove the skull region, followed by preprocessing with a Contrast Stretching Adaptive Wiener (CSAW) filter to improve image quality and reduce noise.
The pre-processed images are fed into ResEfficientNet for fine feature extraction. After feature extraction, Sailfish Optimization (SFO) is employed to select the most relevant features while eliminating irrelevant ones. A Dual Attention SegNet (DAS-Net) then segments GM, CSF, and WM with high precision. The proposed BRU-SOAT model is assessed based on its precision, F1-score, specificity, recall, accuracy, Jaccard Index, and Dice Index. The proposed BRU-SOAT model achieved a segmentation accuracy of 99.17% for brain tissue segmentation. Moreover, the proposed DAS-Net outperformed fuzzy c-means clustering, fuzzy consensus clustering, and U-Net methods, achieving 98.50% (CSF), 98.63% (GM), and 99.15% (WM), indicating improved segmentation accuracy. In conclusion, the BRU-SOAT model provides a robust and highly accurate framework for automated brain tissue segmentation, supporting improved clinical diagnosis and neuroimaging analysis.</p> Athur Shaik Ali Gousia Banu Sumit Hazra Copyright (c) 2025 Athur Shaik Ali Gousia Banu, Sumit Hazra https://creativecommons.org/licenses/by-sa/4.0 2025-09-16 2025-09-16 7 4 1074 1088 10.35882/jeeemi.v7i4.795 Gallbladder Disease Classification from Ultrasound Images Using CNN Feature Extraction and Machine Learning Optimization https://jeeemi.org/index.php/jeeemi/article/view/1030 <p>Gallbladder diseases, including gallstones, carcinoma, and adenomyomatosis, may cause severe complications if not identified correctly and in a timely manner. However, ultrasound image interpretation relies heavily on operator experience and may suffer from subjectivity and inconsistency. This study develops an automated and optimized classification model for gallbladder disease using ultrasound images, aiming to improve diagnostic reliability and efficiency.
A key outcome of this research is a thorough assessment of how feature selection combined with hyperparameter tuning influences the accuracy of classical machine learning models that use features extracted via a CNN. The proposed pipeline enhances diagnostic accuracy while remaining computationally efficient. The method involves extracting deep features from ultrasound images using a pre-trained VGG16 CNN model. The features are subsequently reduced using the SelectKBest method through Univariate Feature Selection. Multiple popular classification models, specifically SVM, Random Forest, KNN, and Logistic Regression, were tested using both original settings and hyperparameters adjusted through grid search. A complete evaluation of model performance was conducted using the test set, employing key performance indicators including overall prediction correctness (accuracy), actual positive rate (recall), positive prediction accuracy (precision), F1-score, and the area under the ROC curve (AUC). Evaluation results suggest that the SVM approach, combined with selected features and hyperparameter tuning, achieved the highest performance: 99.35% accuracy, 99.32% precision, 99.35% recall, and 99.33% F1-score, with a relatively short computation time of 18.4 seconds.
In conclusion, feature selection and hyperparameter tuning significantly enhance classification performance, making the proposed method a promising candidate for clinical decision support in gallbladder disease diagnosis using ultrasound imaging.</p> Ryan Adhitama Putra Gede Angga Pradipta Putu Desiana Wulaning Ayu Copyright (c) 2025 Ryan Adhitama Putra, Gede Angga Pradipta, Putu Desiana Wulaning Ayu https://creativecommons.org/licenses/by-sa/4.0 2025-09-24 2025-09-24 7 4 1089 1111 10.35882/jeeemi.v7i4.1030 Optimizing Medical Logistics Networks: A Hybrid Bat-ALNS Approach for Multi-Depot VRPTW and Simultaneous Pickup-Delivery https://jeeemi.org/index.php/jeeemi/article/view/1054 <p>This paper tackles the multi-depot heterogeneous-fleet vehicle-routing problem with time windows and simultaneous pickup and delivery (MDHF-VRPTW-SPD), a variant that mirrors the growing complexity of modern healthcare logistics. The primary purpose of this study is to model this complex routing problem as a mixed-integer linear program and to develop and validate a novel hybrid metaheuristic, B-ALNS, capable of delivering robust, high-quality solutions. The proposed B-ALNS combines a discrete Bat Algorithm with Adaptive Large Neighborhood Search, where the bat component supplies frequency-guided diversification, while ALNS adaptively selects destroy and repair operators and exploits elite memory for focused intensification. Extensive experiments were conducted on twenty new benchmark instances (ranging from 48 to 288 customers), derived from Cordeau’s data and enriched with pickups and a four-class fleet. Results show that B-ALNS attains a mean cost 1.15% lower than a standalone discrete BA and 2.78% lower than a simple LNS, achieving the best average cost on 17/20 instances and the global best solution in 85% of test instances.
Statistical tests further confirm the superiority of the hybrid B-ALNS: a Friedman test and Wilcoxon signed-rank comparisons give p-values of 0.0013 versus BA and 0.0002 versus LNS, respectively. Although B-ALNS trades speed for quality (182.65 seconds average runtime versus 54.04 seconds for BA and 11.61 seconds for LNS), it produces markedly more robust solutions, with the lowest cost standard deviation and consistently balanced routes. These results demonstrate that the hybrid B-ALNS delivers statistically significant, high-quality solutions within tactical planning times, offering a practical decision-support tool for secure, cold-chain-compliant healthcare logistics.</p> Anass Taha Said Elatar Salim El Bazzi Mohamed Abdelouahed Ait Ider Lotfi Najdi Copyright (c) 2025 Anass Taha, Said Elatar , Salim El Bazzi Mohamed , Abdelouahed Ait Ider , Lotfi Najdi https://creativecommons.org/licenses/by-sa/4.0 7 4 991 1011 10.35882/jeeemi.v7i4.1054 MedProtect: Protecting Electronic Patient Data Using Interpolation-Based Medical Image Steganography https://jeeemi.org/index.php/jeeemi/article/view/977 <p>Electronic Patient Records (EPRs) represent critical elements of digital healthcare systems, as they contain confidential and sensitive medical information essential for patient care and clinical decision-making. Due to their sensitive nature, EPRs frequently face threats from unauthorized intrusions, security breaches, and malicious attacks. Safeguarding such information has emerged as an urgent concern in medical data security. Steganography offers a compelling solution by hiding confidential data within conventional carrier objects such as medical imagery. Unlike traditional cryptographic methods that merely alter the data representation, steganography conceals the existence of the information itself, thereby providing discretion, security, and resilience against unauthorized disclosure.
However, embedding patient information inside medical images introduces a new challenge. The method must maintain the image's visual fidelity to avoid compromising diagnostic precision, while ensuring reversibility for complete restoration of both the original imagery and the concealed information. To address these challenges, this research proposes MedProtect, a reversible steganographic framework customized for medical applications. The MedProtect procedure integrates pixel interpolation techniques and center-folding-based data transformation to embed sensitive records into medical imagery. This combination ensures accurate recovery of both the hidden data and the original image while maintaining the quality of the resulting image. To evaluate the performance of MedProtect, this study uses two well-established image quality metrics, Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). The results show that the framework achieves PSNR values of 48.190 to 53.808 dB and SSIM scores between 0.9956 and 0.9980. These outcomes demonstrate the high level of visual fidelity and imperceptibility achieved by the proposed method, underscoring its effectiveness as a secure approach for protecting electronic patient records within medical imaging systems.</p> Aditya Rizki Muhammad Irsyad Fikriansyah Ramadhan Ntivuguruzwa Jean De La Croix Tohari Ahmad Dieudonne Uwizeye Evelyne Kantarama Copyright (c) 2025 Aditya Rizki Muhammad, Irsyad Fikriansyah Ramadhan, Ntivuguruzwa Jean De La Croix, Tohari Ahmad, Dieudonne Uwizeye, Evelyne Kantarama https://creativecommons.org/licenses/by-sa/4.0 2025-09-01 2025-09-01 7 4 1012 1027 10.35882/jeeemi.v7i4.977
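Several of the abstracts above (notably MedProtect) report image fidelity via PSNR. As a minimal illustration of how that metric is computed, the sketch below implements PSNR in plain Python over toy pixel lists; it is not the authors' code, and the function name and sample values are purely illustrative, assuming 8-bit grayscale intensities with a peak value of 255.

```python
import math

def psnr(original, stego, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length sequences
    of pixel intensities (higher = closer to the original image)."""
    mse = sum((o - s) ** 2 for o, s in zip(original, stego)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy cover "image" and a stego copy with a single least-significant bit flipped.
cover = [120, 121, 119, 122, 120, 121, 118, 120]
stego = list(cover)
stego[3] ^= 1  # flip one LSB, as a low-distortion embedding would
print(round(psnr(cover, stego), 2))
```

Even this single-bit change yields a PSNR well above the 48 to 53.8 dB range reported for MedProtect, which is expected: real embedding modifies many pixels, driving the mean squared error up and the PSNR down.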