Journal of Electronics, Electromedical Engineering, and Medical Informatics http://jeeemi.org/index.php/jeeemi <p>The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed scientific journal that publishes research results within the Journal's focus areas. The Journal is published by the Department of Electromedical Engineering, Health Polytechnic of Surabaya, Ministry of Health, Indonesia. The role of the Journal is to facilitate contact between research centers and industry. The Editors aspire to publish high-quality scientific papers presenting the work of established research teams and experienced authors as well as postgraduate students and early-career researchers. All articles undergo anonymous review by at least two independent expert reviewers prior to publication on the Journal of Electronics, Electromedical Engineering, and Medical Informatics website.</p> Department of Electromedical Engineering, POLTEKKES KEMENKES SURABAYA en-US Journal of Electronics, Electromedical Engineering, and Medical Informatics 2656-8632 <p><strong>Authors who publish with this journal agree to the following terms:</strong></p> <ol> <li class="show">Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International <a title="CC BY SA" href="https://creativecommons.org/licenses/by-sa/4.0/" target="_blank" rel="noopener">(CC BY-SA 4.0)</a> license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.</li> <li class="show">Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.</li> <li class="show">Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See <a href="http://opcit.eprints.org/oacitation-biblio.html" target="_new">The Effect of Open Access</a>).</li> </ol> Rule-Based Adaptive Chatbot on WhatsApp for Visual, Auditory, and Kinesthetic Learning Style Detection http://jeeemi.org/index.php/jeeemi/article/view/1215 <p>Adapting learning methods to individual learning styles remains a major challenge in digital education due to the static nature of traditional questionnaires and the absence of adaptive feedback mechanisms. This study aimed to develop a rule-based adaptive WhatsApp chatbot capable of automatically identifying users’ learning styles (visual, auditory, and kinesthetic) through a weighted questionnaire enhanced with probabilistic refinement. The proposed system introduces an adaptive decision framework that dynamically manages conversation flow using score dominance evaluation, early termination, and selective question expansion. Bayesian posterior probability estimation is employed to strengthen decision confidence in borderline cases, ensuring consistent and interpretable results even when user responses are ambiguous.
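The abstract does not specify the likelihood model behind the Bayesian refinement; a minimal sketch of the idea, assuming the weighted questionnaire scores are turned into softmax-style likelihoods over the three styles (the function names and the 0.5 confidence cutoff are illustrative, not the paper's):

```python
# Hypothetical sketch of Bayesian posterior refinement for borderline cases.
# The exact likelihood model is not given in the abstract; here the weighted
# questionnaire scores act as evidence via a softmax-style transformation.
import math

def posterior_over_styles(scores, prior=None):
    """Return P(style | scores) for the visual/auditory/kinesthetic classes."""
    styles = ("visual", "auditory", "kinesthetic")
    prior = prior or {s: 1.0 / 3.0 for s in styles}        # uniform prior
    likelihood = {s: math.exp(scores[s]) for s in styles}  # assumed evidence model
    unnorm = {s: prior[s] * likelihood[s] for s in styles}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Borderline example: visual and kinesthetic scores nearly tie.
post = posterior_over_styles({"visual": 2.1, "auditory": 0.8, "kinesthetic": 2.0})
dominant = max(post, key=post.get)
confident = post[dominant] >= 0.5  # assumed confidence threshold
```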
The chatbot was implemented using WhatsApp-web.js and MongoDB, supported by session validation and activity log monitoring to ensure operational reliability and data integrity. System validation involved white-box testing using Cyclomatic Complexity to verify logical accuracy and 20-fold cross-validation using a Support Vector Machine (SVM) to evaluate classification performance. The adaptive model achieved an accuracy of 80.2% and an AUC of 0.902, supported by balanced precision (0.738), recall (0.662), and F1-score (0.698) values. These results demonstrate stable discriminative capability and confirm that the adaptive scoring mechanism effectively reduces redundant questioning, lowers cognitive load, and improves interaction efficiency without compromising reliability. In conclusion, the study successfully achieved its objective of developing an adaptive, efficient, and mathematically transparent learning style detection system. The findings confirm that adaptive rule-based logic reinforced by probabilistic reasoning can significantly enhance the efficiency and reliability of digital learning assessments. Future research will extend this framework by incorporating multimodal behavioral indicators and personalized learning content to further strengthen adaptive learning support.</p> Muhammad Rahulil Yuni Yamasari Ricky Eka Putra I Made Suartana Anita Qoiriah Copyright (c) 2025 Muhammad Rahulil, Yuni Yamasari, Ricky Eka Putra, I Made Suartana, Anita Qoiriah https://creativecommons.org/licenses/by-sa/4.0 2025-11-28 2025-11-28 8 1 16 31 10.35882/jeeemi.v8i1.1215 CVAE-ADS: A Deep Learning Framework for Traffic Accident Detection and Video Summarization http://jeeemi.org/index.php/jeeemi/article/view/1139 <p>Because road traffic and the volume of surveillance video are growing rapidly, manual monitoring to identify accidents is becoming increasingly difficult and prone to human error. This underscores the urgent need for robust, automated systems capable of identifying accidents and reducing the burden of summarizing long videos. To address this issue, we propose CVAE-ADS, an unsupervised approach that not only detects anomalies but also summarizes traffic videos through keyframes. This method operates in two phases. In the first stage, anomalies in traffic video are detected using a Convolutional Variational Autoencoder, which is trained on normal frames and identifies anomalies based on reconstruction errors. The second stage clusters the detected anomalous frames in the latent space and then selects representative keyframes to form a summary video. We tested the method on two benchmark datasets, namely, the IITH Accident Dataset and a subset of UCF-Crime. The findings show that the proposed approach achieves high accident-detection accuracy, with AUCs of 90.61 and 87.95 on IITH and UCF-Crime, respectively, together with low reconstruction errors and Equal Error Rates. For summarization, the method achieves substantial frame reduction and produces summaries of high visual quality with a diverse set of keyframes. It reaches a reduction rate of up to 85% with 92.5% coverage on the IITH dataset and an 80% reduction rate with 90% coverage on the accident subset of the UCF-Crime dataset.
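A rough sketch of the reconstruction-error detection rule described in the CVAE-ADS entry above; the trained CVAE is stubbed out with a placeholder, and the mean-plus-k-sigma threshold is an assumption, not the paper's calibration:

```python
# Sketch of anomaly detection by reconstruction error: frames the model
# reconstructs poorly (relative to normal traffic) are flagged as anomalous.
import numpy as np

def reconstruction_errors(frames, reconstruct):
    """Per-frame mean squared reconstruction error."""
    return np.array([np.mean((f - reconstruct(f)) ** 2) for f in frames])

def detect_anomalies(frames, reconstruct, normal_errors, k=3.0):
    """Flag frames whose error exceeds mean + k*std of errors on normal data."""
    threshold = normal_errors.mean() + k * normal_errors.std()
    return reconstruction_errors(frames, reconstruct) > threshold

# Toy usage with a near-identity stand-in for the trained CVAE:
frames = [np.random.rand(64, 64) for _ in range(10)]
fake_cvae = lambda f: f + 0.01
normal_errors = reconstruction_errors(frames, fake_cvae)
flags = detect_anomalies(frames, fake_cvae, normal_errors)
```

The paper's second stage would then cluster the flagged frames in the CVAE latent space and keep one representative frame per cluster as a keyframe.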
CVAE-ADS offers a lightweight solution for continuous traffic monitoring that uses limited computational resources to detect accidents in real time and summarize the corresponding video footage.</p> Ankita Chauhan Sudhir Vegad Copyright (c) 2026 Ankita Chauhan and Sudhir Vegad https://creativecommons.org/licenses/by-sa/4.0 2026-01-01 2026-01-01 8 1 185 205 10.35882/jeeemi.v8i1.1139 BTISS-WNET: Deep Learning-based Brain Tissue Segmentation using Spatio Temporal WNET http://jeeemi.org/index.php/jeeemi/article/view/808 <p>Brain tissue segmentation (BTISS) from magnetic resonance imaging (MRI) is a critical process in neuroimaging, aiding in the analysis of brain morphology and facilitating accurate diagnosis and treatment of neurological disorders. A major challenge in BTISS is intensity inhomogeneity, which arises from variations in the magnetic field during image acquisition. This results in non-uniform intensities within the same tissue class, particularly affecting white matter (WM) segmentation. To address this problem, we propose an efficient deep learning-based framework, BTISS-WNET, for accurate segmentation of brain tissues. The main contribution of this work is the integration of a spatio-temporal segmentation strategy with advanced pre-processing and feature extraction to overcome intensity inconsistency and improve tissue differentiation. The process begins with skull stripping to eliminate non-brain tissues, followed by the Empirical Wavelet Transform (EWT) for noise reduction and edge enhancement. Data augmentation techniques, including random rotation and flipping, are applied to improve model generalization. The preprocessed images are fed into Res-GoogleNet (RGNet) to extract deep semantic features. Finally, a Spatio-Temporal WNet is used for precise WM segmentation, leveraging spatial and temporal dependencies for improved boundary delineation. The proposed BTISS-WNET model achieves a segmentation accuracy of 99.32% for white matter. It also demonstrates accuracy improvements of 1.76%, 18.23%, and 16.02% over DDSeg, BISON, and HMRF-WOA, respectively. In conclusion, BTISS-WNET provides a robust and high-accuracy framework for WM segmentation in MRI images, with promising applications in clinical neuroimaging. Future work will focus on validating the model using real clinical datasets and extending it to multi-tissue and multi-modal MRI segmentation.</p> Athur Shaik Ali Gousia Banu Sumit Hazra Copyright (c) 2025 Athur Shaik Ali Gousia Banu, Sumit Hazra, Razia Alangir Banu https://creativecommons.org/licenses/by-sa/4.0 2025-11-26 2025-11-26 8 1 1 15 10.35882/jeeemi.v8i1.808 Deep Learning Based Ovarian Cancer Classification Using EfficientNetB2 with Attention Mechanism http://jeeemi.org/index.php/jeeemi/article/view/1216 <p>Ovarian cancer is a gynecological malignancy comprising multiple histopathological subtypes. Traditional diagnostic tools like histopathology and CA-125 tests suffer from limitations, including inter-observer variability, low specificity, and time-consuming procedures, often leading to delayed or incorrect diagnoses. Conventional machine learning models, such as K-Nearest Neighbors (KNN) and Support Vector Machine (SVM), have been applied but often struggle with high-dimensional image data and fail to extract deep morphological features.
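For the BTISS-WNET entry above, the abstract names random rotation and flipping as the augmentation steps; a minimal sketch, where the angle range and flip probability are assumptions rather than the paper's settings:

```python
# Illustrative random rotation/flip augmentation for image-mask pairs, as
# mentioned in the BTISS-WNET entry; parameters here are not the paper's.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment(image, mask, max_angle=15.0):
    angle = rng.uniform(-max_angle, max_angle)
    image = rotate(image, angle, reshape=False, order=1)  # bilinear for intensities
    mask = rotate(mask, angle, reshape=False, order=0)    # nearest keeps labels intact
    if rng.random() < 0.5:                                # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image, mask

img, msk = augment(np.random.rand(128, 128), np.zeros((128, 128), dtype=np.uint8))
```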
This study proposes a deep learning-based framework to classify ovarian cancer subtypes from histopathological images, aiming to enhance diagnostic accuracy and clinical decision-making. Initially, deep learning was applied using pre-trained architectures such as VGG-16, Xception, and EfficientNetB2. However, the standout innovation in this study is the integration of EfficientNetB2 with the Convolutional Block Attention Module (CBAM). The attention mechanism allows the model to focus on the most informative regions of the image, thereby improving diagnostic precision. The proposed system was trained and validated on a diverse, well-structured dataset, achieving high accuracy and strong generalization capability. EfficientNetB2 with CBAM outperformed other models, achieving a 91% accuracy rate compared to 52% for VGG-16, 72% for Xception, and 82% for the baseline EfficientNetB2 model. This attention-enhanced, scalable AI model demonstrates strong potential for clinical application. It provides faster and more efficient classification of ovarian cancer subtypes compared to conventional approaches. The framework has the potential to improve survival outcomes for patients with ovarian cancer. The proposed system demonstrates a significant improvement in ovarian cancer subtype classification (High-Grade Serous Carcinoma, Low-Grade Serous Carcinoma, Clear-Cell, Endometrioid, and Mucinous Carcinoma). It provides a practical tool for aiding early diagnosis and treatment planning, with potential for integration into clinical workflows.</p> Jayashri Kolekar Chhaya Pawar Amol Pande Chandrashekhar Raut Copyright (c) 2025 Jayashri Kolekar, Chhaya Pawar, Amol Pande, Chandrashekhar Raut https://creativecommons.org/licenses/by-sa/4.0 2025-11-28 2025-11-28 8 1 32 52 10.35882/jeeemi.v8i1.1216 DCRNet: Hybrid Deep Learning Architecture for Forecasting of Blood Glucose http://jeeemi.org/index.php/jeeemi/article/view/1245 <p>Maintaining blood glucose (BG) levels within the euglycemic range is essential for patients with type 1 diabetes (T1D) to prevent both hypoglycemia and hyperglycemia. BG concentration often changes due to unannounced carbohydrate intake during meals or inappropriate insulin dosing. Timely forecasting of BG can help take appropriate actions in advance to keep BG within the euglycemic range. Recent studies indicate that deep learning techniques have demonstrated improved performance in this field; nevertheless, such approaches still often struggle to predict future BG levels precisely. To address these challenges, this paper introduces a novel hybrid deep learning architecture called DCRNet. This architecture incorporates a dilated convolution layer that effectively detects multi-scale patterns while minimizing the parameter count. Additionally, it utilizes Long Short-Term Memory (LSTM) to handle contextual dependencies and maintain the temporal order of the extracted features. DCRNet predicts future BG levels for short-term durations (15, 30, and 60 minutes) using information on glucose, meals, and insulin dosages. The proposed architecture’s performance is evaluated on 11 simulated subjects from the UVA/Padova T1D Mellitus simulator and 12 actual subjects from the OhioT1DM dataset. In contrast to previous works, the proposed architecture achieves root mean square errors (RMSEs) of 3.42, 6.45, and 17.73 mg/dL for simulated subjects and 12.57, 20.72, and 34.41 mg/dL for actual subjects, for prediction horizons (PH) of 15, 30, and 60 minutes, respectively.
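The exact DCRNet topology is not given in the abstract; a minimal Keras sketch of the dilated-convolution-plus-LSTM pattern it describes, where the layer widths, dilation rates, and input window are placeholders:

```python
# Sketch of a dilated Conv1D stack feeding an LSTM, per the DCRNet description:
# growing dilations widen the receptive field with few parameters, and the
# LSTM preserves temporal order. All sizes below are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(history_len=24, n_features=3):  # glucose, meal, insulin channels
    inp = keras.Input(shape=(history_len, n_features))
    x = inp
    for d in (1, 2, 4):  # multi-scale patterns via increasing dilation
        x = layers.Conv1D(32, kernel_size=3, dilation_rate=d,
                          padding="causal", activation="relu")(x)
    x = layers.LSTM(64)(x)
    out = layers.Dense(1)(x)  # BG estimate at the chosen prediction horizon
    return keras.Model(inp, out)

model = build_model()
model.compile(optimizer="adam", loss="mse")
```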
The proposed architecture is also evaluated using the mean absolute error (MAE), yielding 2.11, 4.47, and 11.78 mg/dL for simulated subjects and 7.9, 14.13, and 25.5 mg/dL for actual subjects at the 15-, 30-, and 60-minute PHs. The experimental findings validate that the proposed architecture, which uses a dilated convolutional LSTM, outperforms other recent state-of-the-art models.</p> Ketan Lad Maulin Joshi Copyright (c) 2025 Ketan Lad, Maulin Joshi https://creativecommons.org/licenses/by-sa/4.0 2025-12-07 2025-12-07 8 1 53 68 10.35882/jeeemi.v8i1.1245 A Neuro-Physiological Diffusion Model for Accurate EEG-Based Psychiatric Disorder Classification http://jeeemi.org/index.php/jeeemi/article/view/1131 <p>Identification of psychiatric conditions such as depression, schizophrenia, anxiety, and obsessive-compulsive disorder (OCD) from Electroencephalography (EEG) data remains a significant challenge due to the complexity of neurophysiological patterns. While Generative Adversarial Networks (GANs) have been explored to augment EEG datasets and enhance classifier performance, they often suffer from limitations, including training instability, mode collapse, and the generation of physiologically implausible EEG samples. These shortcomings hinder their applicability in high-stakes clinical decision-making, where reliability and physiological coherence are critical. This study aims to address the above-mentioned challenges by proposing a novel Neuro-Physiologically Constrained Diffusion Framework (NPC-DiffEEG). This framework leverages the strengths of conditional diffusion models while integrating domain-specific neurophysiological constraints, ensuring that generated EEG signals preserve key properties, such as frequency band structures and inter-channel connectivity patterns, both of which are essential for accurate mental disorder classification. The NPC-DiffEEG-generated data is combined with real EEG features and processed using a multi-task attention-based transformer, enabling the model to learn robust, cross-disorder representations. Extensive experiments conducted on a publicly available multi-disorder EEG dataset demonstrate that NPC-DiffEEG significantly outperforms traditional GAN-based augmentation approaches. The model achieves an impressive average classification accuracy of 96.8%, along with superior F1-scores and AUC values across all disorder categories. Furthermore, integrating attention-based disorder attribution not only enhances interpretability but also reduces overfitting, thereby improving generalizability to unseen subjects. This innovative approach marks a substantial advancement in EEG-based classification of psychiatric disorders, bridging the gap between synthetic data generation and clinically reliable decision-support systems.</p> Pradeep Gopal Abbinayaa M S Subashini Mathivanan Nagaraj N Nasiya Niwaz Banu Gowri Thumbur Copyright (c) 2025 Gowri Thumbur, Pradeep Gopal, Abbinayaa M, S Subashini, Mathivanan Nagaraj, N Nasiya Niwaz Banu https://creativecommons.org/licenses/by-sa/4.0 2025-12-07 2025-12-07 8 1 69 83 10.35882/jeeemi.v8i1.1131 Comparative Analysis of YOLO11 and Mask R-CNN for Automated Glaucoma Detection http://jeeemi.org/index.php/jeeemi/article/view/1266 <p>Glaucoma is a progressive optic neuropathy and a major cause of irreversible blindness. Early detection is crucial, yet current practice depends on manual estimation of the vertical Cup-to-Disc Ratio (vCDR), which is subjective and inefficient.
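As a rough illustration of the vCDR computation at the heart of this screening pipeline (binary cup and disc masks are assumed; the 0.7 decision threshold is the one stated later in this entry):

```python
# Sketch of the vertical Cup-to-Disc Ratio (vCDR): the ratio of the vertical
# extents of the predicted optic cup and optic disc segmentation masks.
import numpy as np

def vertical_extent(mask):
    rows = np.where(mask.any(axis=1))[0]  # rows containing the structure
    return 0 if rows.size == 0 else rows[-1] - rows[0] + 1

def vcdr(cup_mask, disc_mask):
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else 0.0

def glaucoma_suspect(cup_mask, disc_mask, threshold=0.7):  # threshold from the study
    return vcdr(cup_mask, disc_mask) >= threshold
```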
Automated fundus image analysis provides scalable solutions but is challenged by low optic cup contrast, dataset variability, and the need for clinically interpretable outcomes. This study aimed to develop and evaluate an automated glaucoma screening pipeline based on optic disc (OD) and optic cup (OC) segmentation, comparing a single-stage model (YOLO11-Segmentation) with a two-stage model (Mask R-CNN with ResNet50-FPN) and validating screening decisions using the vCDR at a threshold of 0.7. The contributions are fourfold: establishing a benchmark comparison of YOLO11 and Mask R-CNN across three datasets (REFUGE, ORIGA, G1020); linking segmentation accuracy to vCDR-based screening; analyzing precision–recall trade-offs between the models; and providing a reproducible baseline for future studies. The pipeline employed standardized preprocessing (optic nerve head cropping, resizing to 1024×1024, conservative augmentation). YOLO11 was trained for 200 epochs, and Mask R-CNN for 75 epochs. Evaluation metrics included Dice, Intersection over Union (IoU), mean absolute error (MAE), correlation, and classification performance. Results showed that Mask R-CNN achieved higher disc Dice (0.947 in G1020, 0.938 in REFUGE) and recall (0.880 in REFUGE), while YOLO11 attained stronger vCDR correlation (r = 0.900 in ORIGA) and perfect precision (1.000 in G1020). Overall accuracy exceeded 0.92 in REFUGE and G1020. In conclusion, YOLO11 favored conservative screening with fewer false positives, while Mask R-CNN improved sensitivity. These complementary strengths highlight the importance of model selection by screening context and suggest future research on hybrid frameworks and multimodal integration.</p> Muhammad Naufaldi Fayyadh Triando Hamonangan Saragih Andi Farmadi Muhammad Itqan Mazdadi Rudy Herteno Vugar Abdullayev Copyright (c) 2025 Muhammad Naufaldi Fayyadh, Triando Hamonangan Saragih, Andi Farmadi, Muhammad Itqan Mazdadi, Rudy Herteno, Vugar Abdullayev https://creativecommons.org/licenses/by-sa/4.0 2025-12-08 2025-12-08 8 1 84 104 10.35882/jeeemi.v8i1.1266 Graph-Theoretic Analysis of Electroencephalography Functional Connectivity Using Phase Lag Index for Detection of Ictal States http://jeeemi.org/index.php/jeeemi/article/view/1230 <p>Epileptic disorders are characterized by the misfiring of neurons and affect 50 million people worldwide, who face physical challenges in their daily lives. The ionic activity of the brain can be detected as electrical activity at the scalp using a non-invasive bio-potential measurement technique known as electroencephalography (EEG). Manual interpretation of brainwaves is a time-consuming, expert-intensive task. In recent years, AI has achieved remarkable results, but at the cost of large datasets and high processing power. We used publicly available online datasets from the Children’s Hospital Boston (CHB) in collaboration with the Massachusetts Institute of Technology (MIT). The recordings consist of 23 bipolar channels and include pre-processed epochs of both normal and pre-labeled seizure (ictal) states. Functional connectivity networks were built using the Phase Lag Index (PLI), which captures consistent phase synchronization while minimizing volume-conduction artifacts. Graph-theory-based features were then used to detect the brain's seizure state.
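A minimal sketch of the PLI computation just described, using the Hilbert transform for instantaneous phase; the epoch layout (channels × samples) is an assumption:

```python
# Phase Lag Index between two signals: the asymmetry of the phase-difference
# distribution. Zero-lag (in-phase) samples contribute nothing, which is why
# PLI suppresses volume-conduction artifacts.
import numpy as np
from scipy.signal import hilbert

def pli(x, y):
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phase_diff))))

def pli_matrix(epoch):
    """Functional connectivity matrix for an epoch shaped (channels, samples)."""
    n = epoch.shape[0]
    m = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            m[i, j] = m[j, i] = pli(epoch[i], epoch[j])
    return m
```

Graph features such as degree centrality and the clustering coefficient are then computed on this matrix after thresholding it into a binary adjacency graph.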
A significant increase in the values of graph-theoretical features, such as degree centrality and the clustering coefficient, was observed, along with the formation of hyper-connected hubs and disrupted brain communication in the ictal state. Statistical tests (t-tests, ANOVA, Mann-Whitney U) across multiple PLI thresholds confirmed consistent significant differences (p-value &lt; 0.05) between normal and ictal conditions. This study provides a graph-theory-based method that is computationally efficient, interpretable, and suitable for real-time seizure detection. Given their effectiveness, the clustering coefficient and degree centrality can serve as useful biomarkers for biomedical applications.</p> Ghansyamkumar Rathod Hardik Modi Copyright (c) 2025 Ghansyamkumar Rathod, Hardik Modi https://creativecommons.org/licenses/by-sa/4.0 2025-12-09 2025-12-09 8 1 105 118 10.35882/jeeemi.v8i1.1230 Enhancing Deep Learning Model Using Whale Optimization Algorithm on Brain Tumor MRI http://jeeemi.org/index.php/jeeemi/article/view/941 <p>The increasing prevalence of brain cancer has emerged as a significant global health issue, with brain neoplasms, particularly gliomas, presenting considerable diagnostic and therapeutic obstacles. The timely and precise identification of such tumors is crucial for improving patient outcomes. This investigation explores the advancement of Convolutional Neural Networks (CNNs) for detecting brain tumors using MRI data, incorporating the Whale Optimization Algorithm (WOA) for the automated tuning of hyperparameters. Moreover, two callbacks, ReduceLROnPlateau and early stopping, were utilized to augment training efficacy and model resilience. The proposed model exhibited exceptional performance across all tumor categories. Specifically, the precision, recall, and F1-scores for Glioma were recorded as 0.997, 0.980, and 0.988, respectively; for meningioma, as 0.983, 0.986, and 0.984; for no tumors, as 0.998, 0.998, and 0.998; and for pituitary, as 0.997, 0.997, and 0.997. The mean performance metrics attained were 0.994 for precision, 0.990 for recall, and 0.992 for F1-score. The overall accuracy of the model was determined to be 0.991. Notably, incorporating callbacks within the CNN architecture improved accuracy to 0.994. Furthermore, when combined with the WOA, the CNN-WOA model achieved a maximum accuracy of 0.996. This advancement highlights the effectiveness of integrating adaptive learning methodologies with metaheuristic optimization techniques. The findings suggest that the model sustains high classification accuracy across diverse tumor types and exhibits stability and robustness throughout training. The combination of callbacks and the Whale Optimization Algorithm significantly bolsters CNN performance in classifying brain tumors. These advancements contribute to the development of more reliable diagnostic instruments in medical imaging.</p> Winarno Winarno Agus Harjoko Copyright (c) 2025 Winarno Winarno, Agus Harjoko https://creativecommons.org/licenses/by-sa/4.0 2025-12-18 2025-12-18 8 1 136 151 10.35882/jeeemi.v8i1.941 CIT-LieDetect: A Robust Deep Learning Framework for EEG-Based Deception Detection Using Concealed Information Test http://jeeemi.org/index.php/jeeemi/article/view/1300 <p>Deception detection with electroencephalography (EEG) remains an open problem owing to inter-individual variability in brain activity and the neural dynamics of deceptive responses.
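For the brain tumor MRI entry above, ReduceLROnPlateau and early stopping are standard Keras callbacks; a minimal usage sketch, where the monitored metric, factor, and patience values are assumptions rather than the study's settings:

```python
# Standard Keras training callbacks as named in the WOA study: shrink the
# learning rate when validation loss plateaus, and stop training early.
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3, min_lr=1e-6),
    EarlyStopping(monitor="val_loss", patience=8, restore_best_weights=True),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=callbacks)
```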
Traditional methods fail to generalize consistently, and as a result, research has shifted towards exploring sophisticated deep learning methods for Concealed Information Tests (CIT). The objective of the present study is to categorize subjects as guilty or innocent based on EEG measurements and rigorously test model performance in terms of accuracy, sensitivity, and specificity. To achieve this, experiments were conducted on two EEG datasets: the LieWaves dataset, consisting of 27 subjects recorded with five channels (AF3, T7, Pz, T8, AF4), and the CIT dataset, comprising 79 subjects recorded with 16 channels (Fp1, Fp2, F3, F4, C3, C4, Cz, P3, P4, Pz, O1, O2, T3/T7, T4/T8, T5/P7, T6/P8). Preprocessing involved a band-pass filter for noise reduction, followed by feature extraction using the Discrete Wavelet Transform (DWT) and the Fast Fourier Transform (FFT). Three models were evaluated: FBC-EEGNet, InceptionTime-light, and their ensemble. Results indicate that InceptionTime-light achieved the highest accuracy of 79.2% on the CIT dataset, surpassing FBC-EEGNet (70.8%). On the LieWaves dataset, FBC-EEGNet achieved superior performance, with 71.6% accuracy, compared with InceptionTime-light (65.93%). In terms of specificity, FBC-EEGNet reached 93.7% on the CIT dataset, while InceptionTime-light demonstrated balanced performance with 62.5% sensitivity and 87.5% specificity. Notably, the ensemble model provided stable and generalizable outcomes, yielding 70.8% accuracy, 62.5% sensitivity, and 75% specificity on the CIT dataset, confirming its robustness across subject groups. In conclusion, FBC-EEGNet is effective for maximizing specificity, InceptionTime-light achieves higher accuracy, and the ensemble model delivers a balanced trade-off. The implications of this work are to advance reliable EEG-based deception detection and to set the stage for future research on explainable and interpretable models, validated on larger and more diverse datasets.</p> Tanmayi Nagale Anand Khandare Copyright (c) 2025 Tanmayi Nagale, Anand Khandare https://creativecommons.org/licenses/by-sa/4.0 2025-12-31 2025-12-31 8 1 152 167 10.35882/jeeemi.v8i1.1300 Deep Learning-Based Lung Sound Classification Using Mel-Spectrogram Features for Early Detection of Respiratory Diseases http://jeeemi.org/index.php/jeeemi/article/view/1256 <p>Respiratory diseases such as asthma, chronic obstructive pulmonary disease, and pneumonia remain among the leading causes of death globally. Traditional diagnostic approaches, including auscultation, rely heavily on the subjective expertise of medical practitioners and the quality of the instruments used. Recent advancements in artificial intelligence offer promising alternatives for automated lung sound analysis. However, audio is unstructured data that must be converted into a representation suitable for AI algorithms. Another significant challenge lies in the imbalanced class distribution within available datasets, which can adversely affect classification performance and model reliability. This study applied several comprehensive preprocessing techniques, including random undersampling to address data imbalance, resampling audio to 4000 Hz for standardization, and standardizing audio duration to 2.7 seconds for consistency. Feature extraction was then performed using the Mel-spectrogram method, converting audio signals into image representations to serve as input for classification algorithms based on deep learning architectures.
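A minimal sketch of the audio preprocessing just described using librosa; the 4000 Hz rate and 2.7 s duration come from the abstract, while the mel-band count and other parameters are assumptions:

```python
# Resample a lung-sound clip to 4000 Hz, pad/trim it to 2.7 s, and convert it
# to a dB-scaled Mel-spectrogram "image" for a CNN classifier.
import numpy as np
import librosa

def lung_sound_to_mel(path, sr=4000, duration=2.7, n_mels=64):
    y, _ = librosa.load(path, sr=sr)                      # resample on load
    target = int(sr * duration)
    y = np.pad(y, (0, max(0, target - len(y))))[:target]  # standardize length
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)           # log-scaled image
```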
To determine optimal performance characteristics, various Convolutional Neural Network (CNN) architectures were systematically evaluated, including LeNet-5, AlexNet, VGG-16, VGG-19, ResNet-50, and ResNet-152. VGG-16 achieved the highest classification accuracy of the tested models at 75.5%, demonstrating superior performance in respiratory sound classification tasks. This study demonstrates the potential of AI-based lung sound classification systems as a complementary diagnostic tool for healthcare professionals and the general public in supporting early identification of respiratory abnormalities and diseases. The findings suggest that automated lung sound analysis could enhance diagnostic accessibility and provide valuable support for clinical decision-making in respiratory healthcare applications.</p> Midfai Yabani Mohammad Reza Faisal Fatma Indriani Dodon Turianto Nugrahadi Dwi Kartini Kenji Satou Copyright (c) 2026 Midfai Yabani, Mohammad Reza Faisal, Fatma Indriani, Dodon Turianto Nugrahadi, Dwi Kartini, Kenji Satou https://creativecommons.org/licenses/by-sa/4.0 2026-01-03 2026-01-03 8 1 168 184 10.35882/jeeemi.v8i1.1256 Medical Image Segmentation Using a Global Context-Aware and Progressive Channel-Split Fusion U-Net with Integrated Attention Mechanisms http://jeeemi.org/index.php/jeeemi/article/view/1371 <p>Medical image segmentation serves as a key component in Computer-Aided Diagnosis (CAD) systems across various imaging modalities. However, the task remains challenging because many images have low contrast and high lesion variability, and many clinical environments require efficient models. This study proposes CFCSE-Net, a U-Net-based model that builds upon X-UNet as a baseline for the CFGC and CSPF modules. This model incorporates a modified CFGC module with added Ghost Modules in the encoder, a CSPF module in the decoder, and Enhanced Parallel Attention (EPA) in the skip connections. The main contribution of this paper is the design of a lightweight architecture that combines multi-scale feature extraction with an attention mechanism to maintain low model complexity and increase segmentation accuracy. We train and evaluate CFCSE-Net on four public datasets: Kvasir-SEG, CVC-ClinicDB, BUSI (resized to 256 × 256 pixels), and PH2 (resized to 320 × 320 pixels), with data augmentation applied. We report segmentation performance as the mean ± standard deviation of IoU, DSC, and accuracy across three random seeds. CFCSE-Net achieves 79.78% ± 1.99 IoU, 87.21% ± 1.72 DSC, and 96.70% ± 0.59 accuracy on Kvasir-SEG, 88.11% ± 0.86 IoU, 93.42% ± 0.55 DSC, and 99.04% ± 0.09 accuracy on CVC-ClinicDB, 69.33% ± 2.66 IoU, 78.80% ± 2.65 DSC, and 96.30% ± 0.51 accuracy on BUSI, and 92.27% ± 0.52 IoU, 95.92% ± 0.30 DSC, and 98.06% ± 0.16 accuracy on PH2. Despite its strong performance, the model remains compact with 909,901 parameters and low computational cost, requiring 3.24 GFLOPs for 256 × 256 inputs and 5.07 GFLOPs for 320 × 320 inputs.
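The IoU and DSC figures reported for CFCSE-Net follow the standard overlap definitions; a minimal NumPy version for binary masks:

```python
# Intersection over Union and Dice Similarity Coefficient for binary masks.
import numpy as np

def iou(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def dice(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```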
These results show that CFCSE-Net maintains stable performance on polyp, breast ultrasound, and skin lesion segmentation while remaining compact enough for CAD systems on hardware with limited computational resources.</p> Alfath Roziq Widhayaka Heri Prasetyo Copyright (c) 2026 Alfath Roziq Widhayaka, Heri Prasetyo https://creativecommons.org/licenses/by-sa/4.0 2026-01-09 2026-01-09 8 1 206 221 10.35882/jeeemi.v8i1.1371 HALF-MAFUNET: A Lightweight Architecture Based on Multi-Scale Adaptive Fusion for Medical Image Segmentation http://jeeemi.org/index.php/jeeemi/article/view/1357 <p>Medical image segmentation is a critical component in computer-aided diagnosis systems, but many deep learning models still require large numbers of parameters and heavy computation. Classical CNN-based architectures such as U-Net and its variants achieve good accuracy but are often too heavy for real-world deployment. Meanwhile, modern Transformer-based or Mamba-based models capture long-range information but typically increase model complexity. Because of these limitations, there is still a need for a lightweight segmentation model that can provide a good balance between accuracy and efficiency across different types of medical images. This paper proposes Half-MAFUNet, a lightweight architecture based on multi-scale adaptive fusion and designed as a simplified version of MAFUNet. The main contribution of this work is combining the efficient encoder structure of Half-UNet with advanced fusion and attention mechanisms. Half-MAFUNet integrates Hierarchy Aware Mamba (HAM) for global feature modelling, Multi-Scale Adaptive Fusion (MAF) to combine global and local information, and two attention modules, Adaptive Channel Attention (ACA) and Adaptive Spatial Attention (ASA), to refine skip connections. In addition, this model incorporates Channel Atrous Spatial Pyramid Pooling (CASPP) to capture multi-scale receptive fields efficiently without increasing computational cost. Together, these components create a compact architecture that maintains strong representational power. The model is trained and evaluated on three public datasets: CVC-ClinicDB for colorectal polyp segmentation, BUSI for breast tumor segmentation, and ISIC-2018 for skin lesion segmentation. All images are resized to 256×256 pixels and processed using geometric and intensity-based augmentations. Half-MAFUNet achieves competitive performance, obtaining a mean IoU of around 84–85% and Dice/F1-scores of around 90–92% across datasets, while using significantly fewer parameters and GFLOPs compared to U-Net, Att-UNet, UNeXt, MALUNet, LightM-UNet, VM-UNet, and UD-Mamba. These results show that Half-MAFUNet provides accurate and efficient medical image segmentation, making it suitable for real-world deployment on devices with limited computational resources.</p> Abiaz Fazel Maula Sandy Heri Prasetyo Copyright (c) 2026 Abiaz Fazel Maula Sandy, Heri Prasetyo https://creativecommons.org/licenses/by-sa/4.0 2026-01-12 2026-01-12 8 1 222 239 10.35882/jeeemi.v8i1.1357 Improving the Segmentation of Colorectal Cancer from Histopathological Images Using a Hybrid Deep Learning Pipeline: A Case Study http://jeeemi.org/index.php/jeeemi/article/view/1158 <p>Early and precise diagnosis of colorectal cancer plays a crucial role in enhancing patient outcomes. Although histopathological assessment remains the reference standard for diagnosis, it is often lengthy and subject to variability between pathologists.
This study aims to develop and evaluate a hybrid deep learning-based approach for the automated segmentation of Hematoxylin and Eosin-stained colorectal histopathology images. The work investigates how preprocessing strategies and architectural design choices influence the model’s ability to identify meaningful tissue patterns while preserving computational efficiency. Furthermore, it demonstrates the integration of a deep learning-based segmentation module into colorectal cancer diagnostic workflows. Several deep learning–based segmentation models with varying architectural configurations were trained and evaluated using a publicly available dataset of hematoxylin and eosin-stained endoscopic biopsy histopathology images. Preprocessing procedures were applied to generate computationally efficient image representations, thereby improving training stability and overall segmentation performance. The best-performing configuration achieved a segmentation accuracy of 0.97, reflecting consistent and reliable performance across samples. It accurately delineated cancerous tissue boundaries and effectively distinguished benign from malignant regions, demonstrating sensitivity to fine morphological details relevant to diagnosis. Strong agreement between predicted and expert-annotated regions confirmed the model’s reliability and alignment with expert assessments. Minimal overfitting was observed, indicating stable training behavior and robust generalization across different colorectal tissue types. In comparative evaluations, the model maintained high accuracy across all cancer categories and outperformed existing state-of-the-art approaches. Overall, these findings demonstrate the model’s robustness, efficiency, and adaptability, confirming that careful architectural and preprocessing optimization can substantially enhance segmentation quality and diagnostic reliability. The proposed approach can support pathologists by providing accurate tissue segmentation, streamlining diagnostic procedures, and improving clinical decision-making. This study underscores the value of optimized deep learning models as intelligent decision-support tools for efficient and consistent colorectal cancer diagnosis.</p> Fahima Idiri Farid MEZIANE Hakim BOUCHAL Copyright (c) 2026 Fahima Idiri, Farid MEZIANE, Hakim BOUCHAL https://creativecommons.org/licenses/by-sa/4.0 2026-01-13 2026-01-13 8 1 240 256 10.35882/jeeemi.v8i1.1158 Impact of Different Kernels on Breast Cancer Severity Prediction Using Support Vector Machine http://jeeemi.org/index.php/jeeemi/article/view/960 <p>Breast cancer poses a critical global health challenge and continues to be one of the most prevalent causes of cancer-related deaths among women worldwide. Accurate and early classification of cancer severity is essential for improving treatment outcomes and guiding clinical decision-making, since timely intervention can significantly reduce mortality rates and enhance patient survival. This study evaluates the performance of Support Vector Machine (SVM) models using the Linear, Polynomial, Radial Basis Function (RBF), and Sigmoid kernel functions for breast cancer severity prediction. The impact of feature selection was also examined, using the Random Forest algorithm to select the top features based on Mean Decrease Accuracy (MDA), which serves to reduce redundancy, improve interpretability, and enhance model efficiency.
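A rough scikit-learn sketch of the pipeline just described, using permutation importance as a stand-in for Mean Decrease Accuracy; the dataset, top-k, and hyperparameters are placeholders, not the study's settings:

```python
# Rank features with a Random Forest (permutation importance approximates
# Mean Decrease Accuracy), keep the top ones, then compare SVM kernels.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:10]  # keep the 10 best features

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    acc = cross_val_score(SVC(kernel=kernel), X[:, top], y, cv=5).mean()
    print(f"{kernel}: {acc:.4f}")
```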
Experimental results show that the RBF kernel consistently outperformed the other kernels, especially in terms of sensitivity, a critical metric in medical diagnostics that reflects the model's ability to identify positive cases correctly. Without feature selection, the RBF kernel achieved an accuracy of 0.9744, a sensitivity of 0.9772, a precision of 0.9722, and an AUC of 0.9968, indicating strong performance across all evaluation metrics. After applying feature selection, the RBF kernel achieved an accuracy of 0.9754, a sensitivity of 0.9770, a precision of 0.9742, and an AUC of 0.9975, demonstrating enhanced generalization and reduced overfitting and highlighting the benefits of targeted feature reduction. While the Polynomial kernel yielded the highest precision (up to 0.9799), its lower sensitivity (as low as 0.9237) indicates a greater risk of false negatives, which is particularly concerning in cancer detection. These findings underscore the importance of optimizing both the kernel function and feature selection. The RBF kernel, when combined with targeted feature selection, offers the most balanced and sensitive model, making it highly suitable for breast cancer classification tasks where diagnostic accuracy is vital.</p> Kunti Mahmudah Sugiyarto Surono Rusmining Rusmining Fatma Indriani Copyright (c) 2026 Kunti Mahmudah, Sugiyarto Surono, Rusmining Rusmining, Fatma Indriani https://creativecommons.org/licenses/by-sa/4.0 2026-01-13 2026-01-13 8 1 257 269 10.35882/jeeemi.v8i1.960 EPR-Stego: Quality-Preserving Steganographic Framework for Securing Electronic Patient Records http://jeeemi.org/index.php/jeeemi/article/view/1172 <p>Secure medical data transmission is a fundamental requirement in telemedicine, where information is often exchanged over public networks. Protecting patient confidentiality and ensuring data integrity are crucial, particularly when sensitive medical records are involved. Steganography, an information-hiding technique, offers a promising solution by embedding confidential data within medical images. This approach not only safeguards privacy but also supports authentication processes, ensuring that patient information remains secure during transmission. This study introduces EPR-Stego, a novel steganographic framework designed specifically for embedding electronic patient record (EPR) data in medical images. The key innovation of EPR-Stego lies in its mathematical strategy to minimize pixel intensity differences between neighboring pixels. By reducing usable pixel variations, the framework generates a stego image that is visually indistinguishable from the original, thereby enhancing imperceptibility while preserving diagnostic quality. Additionally, the method produces a key table, required by the recipient to accurately extract the embedded data, which further strengthens security against unauthorized access. The design of EPR-Stego aims to prevent attackers from easily detecting the presence of hidden medical information, mitigating the risk of targeted breaches. Experimental evaluations demonstrate its effectiveness, with the proposed approach achieving Peak Signal-to-Noise Ratio (PSNR) values between 51.71 dB and 75.59 dB, and Structural Similarity Index Measure (SSIM) scores reaching up to 0.99. These metrics confirm that the stego images maintain high visual fidelity and diagnostic reliability. Overall, EPR-Stego outperforms several existing techniques, offering a robust and secure solution for medical data transmission.
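The PSNR and SSIM figures above are standard image-fidelity metrics; a minimal check with scikit-image (random stand-in images, not medical data, and a toy bit-flip in place of the actual embedding):

```python
# Compare a cover image against its stego version: PSNR rises as distortion
# falls, and SSIM approaches 1.0 for structurally identical images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in image
stego = cover.copy()
stego[::50, ::50] ^= 1  # toy change standing in for the embedding step

psnr = peak_signal_noise_ratio(cover, stego)  # in dB; higher is better
ssim = structural_similarity(cover, stego)    # 1.0 means identical structure
```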
By combining imperceptibility, security, and quality preservation, the framework addresses the pressing need for reliable protection of patient information in telemedicine environments.</p> Wardatul Amalia Safitri Hammuda Arsyad Ntivuguruzwa Jean De La Croix Tohari Ahmad Jennifer Batamuliza Ahmad Hoirul Basori Copyright (c) 2025 Wardatul Amalia Safitri, Hammuda Arsyad, Ntivuguruzwa Jean De La Croix, Tohari Ahmad, Jennifer Batamuliza, Ahmad Hoirul Basori https://creativecommons.org/licenses/by-sa/4.0 2025-12-18 2025-12-18 8 1 119 135 10.35882/jeeemi.v8i1.1172