https://jeeemi.org/index.php/jeeemi/issue/feed
Journal of Electronics, Electromedical Engineering, and Medical Informatics
2026-04-02T21:37:45+07:00
Dr. Triwiyanto (editorial.jeeemi@gmail.com)
Open Journal Systems
<p>The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed scientific journal that publishes research results within the Journal's focus areas. The Journal is published by the Department of Electromedical Engineering, Health Polytechnic of Surabaya, Ministry of Health, Indonesia. The Journal's role is to facilitate contact between research centers and industry. The Editors aspire to publish high-quality scientific papers presenting the work of established research teams and experienced authors as well as postgraduate students and beginning researchers. All articles undergo anonymous review by at least two independent expert reviewers before publication on the Journal of Electronics, Electromedical Engineering, and Medical Informatics website.</p>

https://jeeemi.org/index.php/jeeemi/article/view/1410
Optimized Recurrent Neural Network Based on Improved Bacterial Colony Optimization for Predicting Osteoporosis Diseases
2026-02-08T06:16:52+07:00
Sivasakthi B (senthilsubarna@gmail.com), Preetha K (kpreethasudhakar@gmail.com), Selvanayagi D (selvasubhika@gmail.com)
<p>Osteoporosis is a silent disease that often goes undetected until significant fragility fractures occur; despite its high prevalence, its screening rate remains low. In predictive healthcare analytics, the Elman recurrent neural network (ERNN) has been widely used as a learning technique. Traditional learning algorithms have limitations such as slow convergence rates and local minima that prevent gradient descent from finding the global minimum of the error function. The main goal of this study is to precisely estimate each individual's risk of developing osteoporosis.
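(Editorial aside: the Elman recurrence, and the weights and biases that a metaheuristic such as IBCO would tune in place of gradient descent, can be sketched in a few lines. The sketch below is illustrative only; all dimensions and values are invented, and it is not the authors' implementation.)

```python
import numpy as np

def elman_forward(x_seq, W_in, W_rec, W_out, b_h, b_o):
    """One forward pass of a minimal Elman RNN.

    The context (hidden) state from the previous time step is fed back
    alongside the current input. These weight matrices and biases are
    exactly the parameters a population-based optimizer would search over.
    """
    h = np.zeros(W_rec.shape[0])                    # context units start at zero
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)     # Elman recurrent update
    z = W_out @ h + b_o
    return 1.0 / (1.0 + np.exp(-z))                 # sigmoid risk score in (0, 1)

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                  # invented dimensions
W_in = rng.normal(size=(n_hid, n_in))
W_rec = 0.1 * rng.normal(size=(n_hid, n_hid))
W_out = rng.normal(size=(1, n_hid))
risk = elman_forward(rng.normal(size=(5, n_in)), W_in, W_rec, W_out,
                     np.zeros(n_hid), np.zeros(1))
```

A metaheuristic would flatten `(W_in, W_rec, W_out, b_h, b_o)` into one candidate vector and evaluate each candidate by the prediction error of this forward pass.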
These predictions are essential for prompt diagnosis and treatment, which significantly influence patient outcomes. Hence, the present research develops a more efficient prediction method based on an optimized Elman recurrent neural network (ERNN) for predicting osteoporosis diseases. The proposed method, IBCO-ERNN, uses improved bacterial colony optimization (IBCO) to optimize the ERNN's weights and biases. The IBCO approach incorporates an iterative local search (ILS) algorithm to enhance the convergence rate and avoid the local-optima problem of conventional BCO. Optimizing the ERNN's weights and biases with IBCO improves both convergence speed and detection rate. The effectiveness of IBCO-ERNN is evaluated using four osteoporosis datasets: Femoral neck, Lumbar spine, Femoral and Spine, and BMD. The proposed IBCO-ERNN produced accuracies of 95.61%, 96.26%, 97.26%, and 97.54% for the Femoral neck, Lumbar spine, Femoral, and Spine datasets, respectively. The experimental findings demonstrate that, compared with other predictors, the proposed IBCO-ERNN achieves respectable accuracy and rapid convergence.</p>
2026-02-06T17:52:16+07:00
Copyright (c) 2026 Sivasakthi B, Preetha K, Selvanayagi D

https://jeeemi.org/index.php/jeeemi/article/view/1380
A Multimodal Explainable-AI Approach for Deep-Learning-based Epileptic Seizure Detection
2026-02-04T18:16:24+07:00
Ashwini Patil (patil.ashwini.03@gmail.com), Megharani Patil (megharani.patil@thakureducation.org)
<p>Epilepsy carries a high risk of sudden death and increased premature mortality, highlighting the importance of automatic seizure detection to support faster diagnosis and treatment. The opacity of existing deep learning models limits their real-world application in diagnosing epileptic seizures, underscoring the need for more transparent and explainable systems.
Limited research is available on Explainable Artificial Intelligence (XAI)-based epileptic seizure detection, and existing studies provide only visual explanations of model behaviour. Additionally, these studies lack validation of the XAI outputs using quantitative measures. Thus, this research aims to develop an explainable epileptic seizure detection model that addresses the limitations of existing black-box deep learning approaches. It proposes a novel Hybrid Transformer-DenseNet121-XAI (HTD-MXAI) integrated model for detecting epileptic seizures from EEG data. The proposed model leverages advanced deep learning architectures, namely the Transformer and DenseNet121, for automatic feature extraction, while simultaneously extracting handcrafted features from the time, frequency, and spatial domains. XAI techniques, namely Attention Weights, Saliency Maps, and SHapley Additive exPlanations (SHAP), are integrated with the proposed model to provide multimodal explainability for its decision-making process. The results demonstrate that the proposed model outperforms state-of-the-art models for seizure detection. It achieves an overall (aggregated across subjects) accuracy of 99.14%, sensitivity of 98.49%, and specificity of 99.68% on the CHB-MIT dataset. A faithfulness score of 40.94% and a completeness score of 1.00 indicate that the explanations provided by the XAI methods for the model's predictions are highly reliable.
In conclusion, the proposed model offers a promising solution to key constraints in epileptic seizure detection: the limited interpretability of black-box models, the lack of multimodal explainability, and the absence of quantitative validation of XAI techniques.</p>
2026-02-04T18:15:24+07:00
Copyright (c) 2026 Ashwini Patil, Megharani Patil

https://jeeemi.org/index.php/jeeemi/article/view/1262
Hybrid Separable Conv-ViT–CheXNet with Explainable Localization for Pneumonia Diagnosis
2026-02-21T10:10:10+07:00
Khushboo Trivedi (khushboo.trivedi21305@paruluniversity.ac.in), Chintan Bhupeshbhai Thacker (chintan.thacker19435@paruluniversity.ac.in)
<p>This research presents a robust, interpretable, and computationally efficient deep learning framework for multiclass pneumonia classification from chest X-ray images, with a strong emphasis on diagnostic accuracy, model transparency, and real-time applicability in clinical settings. We propose SCViT-CheXNet, a novel hybrid architecture that integrates a Separable Convolution Vision Transformer (SCViT) with a simplified CheXNet backbone based on DenseNet121 to achieve efficient spatial feature extraction, hierarchical representation learning, and faster model convergence. The use of separable convolution significantly reduces computational complexity while preserving discriminative feature learning, and the transformer module effectively captures long-range dependencies in radiographic patterns. To address the critical issue of class imbalance inherent in medical imaging datasets, an Auxiliary Classifier Deep Convolutional Generative Adversarial Network (ADCGAN) is employed to generate synthetic samples for underrepresented pneumonia categories, thereby enhancing data diversity and improving model generalization. The proposed framework is extensively evaluated on two benchmark datasets: Dataset-1, consisting of Normal, Viral, Bacterial, and Fungal Pneumonia cases, and Dataset-2, comprising Normal, Viral Pneumonia, COVID-19, and Lung Opacity classes.
Model interpretability is ensured through Gradient-weighted Class Activation Mapping (Grad-CAM), which enables visualization of disease-specific regions in chest X-ray images and validates the clinical relevance of the learned representations. Experimental results demonstrate that SCViT-CheXNet consistently outperforms existing convolutional neural network and transformer-based approaches, achieving 99% accuracy, precision, recall, and F1-score across both datasets. The synergistic integration of separable convolution, transformer-based feature modeling, and GAN-driven data augmentation results in a lightweight yet highly accurate and interpretable diagnostic system. Overall, the SCViT-CheXNet framework shows strong potential for deployment in automated pneumonia and COVID-19 screening systems, offering reliable support for real-time clinical decision-making and contributing to improved patient outcomes.</p>
2026-02-21T10:09:32+07:00
Copyright (c) 2026 Khushboo Trivedi, Chintan Bhupeshbhai Thacker

https://jeeemi.org/index.php/jeeemi/article/view/1464
Impact of Optimizer Algorithm on NasNetMobile Model for Eight-class Retinal Disease Classification from OCT Images
2026-03-02T04:47:45+07:00
Madhumithaa Selvarajan (vtd1152@veltech.edu.in), Masoodhu Banu N. M (drmasoodhubanu@veltech.edu.in)
<p>Artificial intelligence (AI) is an emerging technology that plays a vital role in various fields, including medicine. Ophthalmology was among the earliest fields to adopt AI for diagnosing retinal diseases. Many imaging techniques are available, but Optical Coherence Tomography (OCT) is particularly useful for early-stage diagnosis. OCT is a non-invasive imaging method that offers high-resolution visualization of the retinal structure, aiding the ophthalmologist in differentiating between a normal and an abnormal retina. Automated OCT-based retinal disease classification using deep learning (DL) is important for early disease detection.
Most DL models achieve high performance, but the influence of the optimizer on model behaviour, convergence, and explainability remains underexplored. To bridge this gap, this study evaluates the performance and convergence of five optimizers, namely RMSprop, AdamW, Adam, Nadam, and SGD, on the NasNetMobile model. The model was trained on the OCT-8 dataset, which comprises seven diseased retinal classes and one normal class of Optical Coherence Tomography (OCT) images. The seven diseases are Age-related Macular Degeneration (AMD), choroidal neovascularization (CNV), central serous retinopathy (CSR), diabetic macular edema (DME), diabetic retinopathy (DR), drusen (DRUSEN), and macular hole (MH). The study also analyzes convergence behaviour and explainability through the early-stopping regularization technique and Grad-CAM XAI, respectively. The model achieved accuracies of 71%, 93%, 96%, 97%, and 97% with the five optimizers, respectively. Compared with the other optimizers, SGD achieved high accuracy within 22 epochs, indicating better generalization. Grad-CAM XAI highlights the disease-relevant regions across the different retinal diseases. This framework emphasizes the significance of selecting an appropriate optimizer for robust retinal disease classification using a DL model trained on OCT images.</p>
2026-03-02T04:46:25+07:00
Copyright (c) 2026 Madhumithaa Selvarajan, Masoodhu Banu N. M

https://jeeemi.org/index.php/jeeemi/article/view/1403
MK-TripNet: A Deep Learning Framework for Real-Time Multi-Class Lung Sound Classification
2026-04-01T10:49:01+07:00
Widya Surya Erini (widyasuryaelini@gmail.com), Gracia Putri Thomas (graciaputrithomas@gmail.com), Giulia Salzano Badia (giuliabadia29@gmail.com), Arief Rahadian (dr.ariefrahadian@gmail.com), Sofyan Budi Raharjo (so_arjopulmo@yahoo.com), Sari Ayu Wulandari (sari.wulandari@dsn.dinus.ac.id)
<p>Respiratory diseases such as asthma, pneumonia, and Chronic Obstructive Pulmonary Disease (COPD) remain major global health challenges, particularly in resource-limited settings where access to pulmonary specialists and early diagnostic tools is limited. Automatic lung sound classification has emerged as a promising non-invasive screening approach; however, existing methods often rely on single-scale feature extraction, conventional loss functions, and offline analysis, which limit their discriminative capability and real-time applicability. The aim of this study is to develop and evaluate a deep learning framework for real-time multi-class lung sound classification that improves discriminative representation and temporal sensitivity. To address these limitations, this study proposes MK-TripNet, a novel deep learning architecture designed to integrate multi-scale feature extraction, discriminative embedding learning, and real-time inference within a unified framework. The main contribution of this work is the unified integration of a Multi-Kernel convolutional architecture, Triplet Loss-based embedding learning, and Sliding Window segmentation within a single end-to-end framework, enabling accurate segment-level lung sound classification in real-time scenarios. Unlike prior approaches, the proposed method simultaneously captures fine-grained temporal patterns and broader spectral characteristics while explicitly maximizing inter-class separability in the embedding space.
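(Editorial aside: two of the components named above, Sliding Window segmentation and the Triplet Loss, are easy to make concrete. The sketch below is illustrative only; the window length, hop, margin, and embedding values are invented and this is not the authors' implementation.)

```python
import numpy as np

def sliding_windows(signal, win, hop):
    """Segment a 1-D signal into overlapping fixed-length windows."""
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, hop)])

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedding vectors: pull the same-class
    (anchor, positive) pair together and push the different-class
    (anchor, negative) pair at least `margin` further apart."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

wins = sliding_windows(np.arange(10), win=4, hop=2)   # 4 windows of length 4
loss = triplet_loss(np.array([0.0, 0.0]),             # anchor embedding
                    np.array([0.1, 0.0]),             # same-class neighbour
                    np.array([1.0, 1.0]))             # different-class sample
```

For a well-separated triplet like the one above, the hinge clips the loss to zero; during training the loss is averaged over mined triplets of window embeddings.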
The proposed model was evaluated using a newly constructed dataset comprising 1,409 lung sound segments obtained from primary digital stethoscope recordings and publicly available respiratory sound databases. Experimental results demonstrate that MK-TripNet consistently outperforms several strong baseline models, including CNN-BiGRU, CNN-BiGRU-UMAP, and VGGish-Triplet, achieving an accuracy of 89.1%, an F1-score of 0.89, and a recall of 0.88. Ablation studies further confirm that the combined use of Multi-Kernel convolution, Triplet Loss, and Sliding Window segmentation yields the most robust and generalizable performance. These findings highlight the clinical potential of MK-TripNet for real-time digital auscultation and point-of-care respiratory screening, particularly in resource-limited and telemedicine settings.</p>
2026-03-30T07:21:23+07:00
Copyright (c) 2026 Widya Surya Erini, Gracia Putri Thomas, Giulia Salzano Badia, Arief Rahadian, Sofyan Budi Raharjo, Sari Ayu Wulandari

https://jeeemi.org/index.php/jeeemi/article/view/1541
Design and Statistical Evaluation of an AI-Enabled IoT-Based Non-Invasive Biosensing System for Diabetes Risk Screening
2026-03-30T08:29:11+07:00
Prachi C. Kamble (prachikamble@ternaengg.ac.in), Lakshamappa Ragha (lkragha@gmail.com), Yogesh Pingle (yogesh.pingle@vcet.edu.in)
<p>Early identification of diabetes risk remains a significant challenge due to the invasive nature, recurring cost, and limited accessibility of conventional biochemical diagnostic tests. These limitations restrict continuous monitoring and hinder large-scale population screening, particularly in remote and resource-limited settings. The aim of this study is to design and statistically evaluate an AI-enabled IoT-based non-invasive biosensing system for diabetes risk screening, focusing on system-level engineering design, data integration, and performance validation rather than clinical diagnosis.
In this study, the term "non-invasive" refers exclusively to externally measurable, surface-level physiological and breath-based signals that do not require skin penetration, blood sampling, or subdermal sensor implantation. The main contributions of this work include the development of a wearable IoT-based non-invasive biosensing framework, the integration of multi-modal physiological and breath-based biomarkers for risk assessment, the implementation of an ensemble machine learning model for diabetes risk classification, and comprehensive statistical validation using agreement, reliability, and calibration metrics. The proposed DiaAssist system acquires physiological parameters such as heart rate, blood pressure, oxygen saturation, body temperature, physical activity indicators, and breath acetone (a volatile organic compound) through a wearable IoT platform with edge-level preprocessing. Fused physiological and demographic features are processed using an ensemble learning framework to generate individualized diabetes risk scores. Performance evaluation was conducted on a single-center observational dataset comprising 625 records using paired statistical tests, agreement analysis, and calibration assessment. The optimized model achieved an accuracy of 99.7%, an area under the receiver operating characteristic curve of 1.000, a Cohen's Kappa coefficient of 0.993, a Matthews correlation coefficient of 0.993, and a Brier score of 0.045, demonstrating strong classification reliability and probabilistic calibration. The results confirm that combining IoT-based non-invasive biosensing with ensemble machine learning enables accurate and reliable screening for diabetes risk. The proposed system provides a scalable, cost-effective, and engineering-oriented solution suitable for remote monitoring and preventive healthcare applications.</p>
2026-03-30T08:29:11+07:00
Copyright (c) 2026 Prachi C. Kamble, Lakshamappa Ragha, Yogesh Pingle

https://jeeemi.org/index.php/jeeemi/article/view/1474
Multipoint Wrist Pulse Acquisition and Analysis by Combining HRV with Morphological Timing Features for Quantitative Identification of Ayurvedic Doshas
2026-03-31T22:45:23+07:00
Devendra Patel (yogkshem@gmail.com), Mitul Patel (patelmitul4388@gmail.com)
<p>Nadi Pariksha, the traditional Ayurvedic method of wrist pulse examination, posits that three adjacent radial artery locations corresponding to Vata, Pitta, and Kapha (V-P-K) reflect distinct physiological states. While recent sensor-based systems have attempted to digitize wrist pulse acquisition, many have emphasized hardware design or classification performance without rigorously validating physiological differences between pulse sites within the same individual. This study presents a quantitative evaluation of the multi-point principle of Nadi Pariksha using synchronized multi-site photoplethysmography (PPG) combined with integrated cardiovascular signal analysis. Pulse waveforms were simultaneously acquired from 39 participants, including 32 healthy individuals and 7 clinically characterized subjects, at the three classical radial artery locations. Morphological timing features and time-domain heart rate variability (HRV) metrics were extracted to characterize vascular dynamics and autonomic regulation. Within-subject statistical analysis demonstrated significant spatial differentiation across the pulse sites. Crest time decreased from 0.204 s at the Kapha site to 0.175 s at the Vata site (a 14.2% reduction), while systolic width decreased from 0.140 s to 0.109 s (a 22.1% reduction) (p ≤ 0.004). Non-parametric analysis confirmed significant differences in crest time (H = 9.15, p = 0.010), pulse width (H = 8.43, p = 0.015), systolic amplitude, systolic area, and HRV variability (SDNN: H = 6.33, p = 0.041), with moderate-to-large effect sizes (η² = 0.12–0.20).
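(Editorial aside: the two time-domain quantities featured in this analysis, SDNN and crest time, are straightforward to compute. The sketch below is illustrative only; the RR intervals, sampling rate, and toy waveform are invented, not taken from the study.)

```python
import numpy as np

def sdnn(rr_ms):
    """SDNN: sample standard deviation of normal-to-normal (RR)
    intervals, in milliseconds (time-domain HRV metric)."""
    return float(np.std(rr_ms, ddof=1))

def crest_time(pulse, fs):
    """Crest time: seconds from waveform onset (taken here as the
    window start) to the systolic peak of one PPG beat."""
    return int(np.argmax(pulse)) / fs

rr = [812, 790, 845, 805, 830]            # invented RR intervals (ms)
fs = 100.0                                # assumed sampling rate (Hz)
# Toy beat: ramp up to a peak at sample 17, then decay.
pulse = np.concatenate([np.linspace(0.0, 1.0, 18),
                        np.linspace(1.0, 0.0, 30)[1:]])
```

Here `sdnn(rr)` is about 21.5 ms and `crest_time(pulse, fs)` is 0.17 s; in the study these values would be computed per site and compared within subjects.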
Clinically characterized cases exhibited deviations from this baseline pattern, including a 62% reduction in crest time gradient and a 72% increase in stiffness index in diabetes, and a 55% reduction in gradient with a 25% decrease in HRV during acute infection. Given the limited clinical sample (n = 7), these findings are interpreted as preliminary. Overall, the results provide quantitative within-subject evidence supporting the physiological distinctiveness of the V-P-K pulse locations and contribute toward the development of standardized, sensor-based Nadi Pariksha.</p>
2026-03-31T22:06:45+07:00
Copyright (c) 2026 Devendra Patel, Mitul Patel

https://jeeemi.org/index.php/jeeemi/article/view/1089
A Hybrid Deep Ensemble Model for Precise Liver and Tumor Segmentation Using U-Net and W-Net Architectures
2026-04-02T20:04:05+07:00
B. Sravani (bukkesravani1992@gmail.com), M. Sunil Kumar (sunil.malchi@mbu.asia)
<p>Identifying the liver and hepatic tumors on computed tomography (CT) scans is essential for early diagnosis, treatment planning, and surgery in hepatocellular carcinoma. However, automated segmentation remains challenging due to the non-homogeneous appearance of tumors, blurry boundaries, small annotated datasets, and high inter-slice variability. Existing single deep learning models suffer from prediction variance and poor generalization in complex clinical conditions. The primary goal of this study is to develop an effective, highly accurate segmentation model that improves the accuracy, consistency, and explainability of liver and tumor borders in CT images. This paper proposes an original hybrid deep ensemble model that leverages the advantages of U-Net and W-Net. The primary contribution is the combination of U-Net's strong spatial localization ability with W-Net's reconstruction-driven unsupervised learning ability, minimizing variance and maximizing generalization.
In addition, soft probability fusion, uncertainty modelling, and entropy-based confidence estimation are introduced to improve reliability and clinical interpretability. CT images are preprocessed by intensity normalization and resizing to 256×256. U-Net and W-Net are trained separately, and their pixel-wise probability maps are soft-averaged and thresholded. The ensemble is evaluated on benchmark liver CT datasets using the Dice coefficient, accuracy, precision, recall, F1-score, Intersection over Union (IoU), ROC-AUC, and statistical significance tests. The experimental results show that the proposed ensemble, with an accuracy of 95.4%, a precision of 94.3%, a recall of 93.9%, an F1-score of 94.1%, an IoU of 89.8%, and an average ROC-AUC of 0.9615, outperforms the individual U-Net and W-Net models by a substantial margin. Statistical tests confirm that the improvements are significant (p < 0.01). In summary, the proposed deep ensemble can segment the liver and tumor accurately, reliably, and efficiently, showing strong potential for clinical use and subsequent extension to multi-organ and multi-modal medical imaging.</p>
2026-04-02T20:01:28+07:00
Copyright (c) 2026 B. Sravani, M. Sunil Kumar

https://jeeemi.org/index.php/jeeemi/article/view/1343
Ensemble Voting Method to Enhance the Performance of a Dental Caries Detection System using Convolutional Neural Network
2026-04-02T21:37:45+07:00
Putri Rizkiah (rizkiah@mhs.usk.ac.id), Maulisa Oktiana (maulisaoktiana@usk.ac.id), Khairun Saddami (khairun.saddami@usk.ac.id), Maya Fitria (mayafitria@usk.ac.id), Fitri Arnia (f.arnia@usk.ac.id), Hubbul Walidainy (hwalidainy@usk.ac.id), Yunida Yunida (yunida@usk.ac.id)
<p>Individual classification models for caries detection still face significant challenges, including limited accuracy and unstable predictions, which can hinder diagnosis, delay clinical decisions, and increase the risks associated with patient care.
To overcome these limitations, this study proposes an ensemble voting method that combines five deep learning models: ResNet-152, MobileNetV2, InceptionV3, NASNetMobile, and EfficientNet-B5. This approach aims to enhance the accuracy and stability of caries detection by leveraging the complementary strengths of the individual models while mitigating their weaknesses. Each model was trained and tested on the same dataset of dental images, categorized into caries and regular (non-caries) classes. Their predictions were aggregated using hard and soft voting techniques. The ensemble's performance was evaluated using accuracy, precision, recall, and F1-score. Ensemble voting demonstrates a notable improvement in classification performance over the individual models: both hard and soft voting achieve excellent classification performance and consistently outperform the best individual models. Accuracy increased from 0.8485 (EfficientNet-B5) to 0.8864 and 0.8712, increases of 4.46% and 2.68%, respectively. Precision increased from 0.8182 (MobileNetV2) to 0.8493 and 0.8551, increases of 3.81% and 4.52%. For recall, EfficientNet-B5 ranked highest among the individual models with a score of 0.9242; hard voting increased it by 1.64% to 0.9394, while soft voting decreased it slightly by 3.28% to 0.8939. The F1-score of EfficientNet-B5 is 0.8592; hard and soft voting increased it by 3.83% and 1.73% to 0.8921 and 0.8741, respectively. The proposed ensemble thus improves the F1-score by 3.83% compared with the best individual model. The ensemble voting method effectively leverages the complementary strengths of each deep learning model to improve the stability and accuracy of fast, reliable early dental caries detection.</p>
2026-04-02T21:37:45+07:00
Copyright (c) 2026 Putri Rizkiah, Maulisa Oktiana, Khairun Saddami, Maya Fitria, Fitri Arnia, Hubbul Walidainy, Yunida Yunida
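(Editorial aside: for a binary caries/non-caries task, the hard and soft voting rules described in the last abstract reduce to a few lines. The sketch below is illustrative, with invented model outputs, and is not the authors' code.)

```python
import numpy as np

def hard_vote(labels):
    """Majority vote over binary per-model labels, shape (n_models, n_samples).
    With an odd number of models, the strict majority is unambiguous."""
    return (labels.mean(axis=0) > 0.5).astype(int)

def soft_vote(probs, threshold=0.5):
    """Average each model's predicted P(caries) and threshold the mean,
    shape (n_models, n_samples)."""
    return (probs.mean(axis=0) >= threshold).astype(int)

# Invented outputs from 5 models on 2 test images.
labels = np.array([[1, 0], [1, 1], [0, 0], [1, 0], [0, 1]])
probs = np.array([[0.9, 0.2], [0.8, 0.6], [0.4, 0.3],
                  [0.7, 0.1], [0.3, 0.55]])
print(hard_vote(labels), soft_vote(probs))   # -> [1 0] [1 0]
```

Soft voting uses the models' confidence, so a single very confident model can sway the result, whereas hard voting counts each model equally; this is why the two schemes can rank differently on recall versus precision, as in the reported results.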