http://jeeemi.org/index.php/jeeemi/issue/feed
Journal of Electronics, Electromedical Engineering, and Medical Informatics
2026-05-16T11:02:22+07:00
Dr. Triwiyanto (editorial.jeeemi@gmail.com)
Open Journal Systems
<p>The Journal of Electronics, Electromedical Engineering, and Medical Informatics (JEEEMI) is a peer-reviewed scientific journal that publishes research results within the Journal's focus areas. The Journal is published by the Department of Electromedical Engineering, Health Polytechnic of Surabaya, Ministry of Health, Indonesia. Its role is to facilitate contact between research centers and industry. The Editors aspire to publish high-quality scientific papers presenting the work of established research teams and experienced authors as well as postgraduate students and early-career researchers. All articles undergo anonymous review by at least two independent expert reviewers prior to publication on the Journal's website.</p>

http://jeeemi.org/index.php/jeeemi/article/view/1441
HAREN: A Hybrid Attention Residual Ensemble Network for PCOS Classification and Prediction
2026-05-09T17:47:05+07:00
Pragati Patil (phdscholar21010@kpgu.ac.in), Nandini Chaudhari (director@kpgu.ac.in)
<p>Polycystic Ovary Syndrome (PCOS) is one of the most prevalent endocrine disorders affecting women of reproductive age and a leading cause of infertility. Ultrasound imaging is widely used for PCOS diagnosis; however, visual assessment of ovarian morphology is highly subjective, time-consuming, and dependent on clinical expertise. Variations in ultrasound image quality, highly similar visual patterns between PCOS and non-PCOS images, and image noise further increase the risk of misdiagnosis. These problems motivate an accurate, automatic, computer-assisted PCOS diagnostic system.
This research aims to create a deep learning-based automatic PCOS diagnostic system that detects and classifies Polycystic Ovary Syndrome from grayscale ultrasound ovarian images. In addition to high classification accuracy, the proposed framework incorporates an explicit explainability pipeline that highlights diagnostically relevant ovarian regions, such as follicular distributions and stromal patterns, thereby supporting clinically interpretable decision making. The proposed HAREN framework addresses the limitations of single-backbone models and attention-augmented variants, such as vanilla ResNet50 and ResNet50 with hybrid attention, by leveraging ensemble learning and residual feature fusion. HAREN combines three architecturally diverse and complementary pretrained CNN backbones (ResNet50, DenseNet121, and EfficientNetB0) to enhance feature diversity. In addition, a novel hybrid attention mechanism combining channel, spatial, and cross-scale attention is introduced to emphasize diagnostically relevant ovarian regions. A residual fusion strategy is employed to preserve discriminative features and stabilize training, and the explainability pipeline supports Grad-CAM-based visual interpretation. The network first converts the grayscale ultrasound ovarian images to RGB, then extracts salient features with the backbones, which are augmented with the attention mechanisms. Trained with categorical cross-entropy loss, the network was evaluated using comprehensive performance metrics on 11,784 ultrasound images (6,784 PCOS and 5,000 non-PCOS). HAREN achieved 99.33% accuracy, 98.96% precision, 98.97% recall, 98.96% F1-score, and an AUC of 99.93%, outperforming conventional models.
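The channel and spatial attention with residual fusion described above can be illustrated with a minimal, framework-agnostic NumPy sketch. This is not the authors' implementation: the gate parameterization, the reduction size, and the omission of the cross-scale branch are all simplifying assumptions made here for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hybrid_attention(x, w1, w2, w_spatial):
    """Toy channel + spatial attention with residual fusion over a
    feature map x of shape (C, H, W). w1 and w2 parameterize a tiny
    channel-gate MLP; w_spatial weights pooled channel statistics."""
    c, h, w = x.shape
    # Channel attention: global average pool -> small MLP -> sigmoid gate
    squeezed = x.mean(axis=(1, 2))                          # (C,)
    gate_c = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (C,)
    ca = x * gate_c[:, None, None]
    # Spatial attention: channel-wise mean/max stats -> sigmoid map (H, W)
    stats = np.stack([ca.mean(axis=0), ca.max(axis=0)])     # (2, H, W)
    gate_s = sigmoid((w_spatial[:, None, None] * stats).sum(axis=0))
    sa = ca * gate_s
    # Residual fusion: the original features are preserved alongside
    # the attended ones, which stabilizes training in deeper stacks
    return x + sa

# Usage with random weights on a 4-channel 8x8 feature map
rng = np.random.default_rng(0)
x = rng.random((4, 8, 8))
out = hybrid_attention(x, rng.random((2, 4)), rng.random((4, 2)), rng.random(2))
print(out.shape)  # (4, 8, 8): shape is preserved by the residual path
```

The residual return `x + sa` is what lets the attended branch act as a refinement rather than a replacement of the backbone features.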
Overall, HAREN delivers an accurate, reliable, and interpretable solution for automated PCOS detection, demonstrating strong potential for clinical decision support systems.</p>
2026-05-07T00:00:00+07:00
Copyright (c) 2026 Pragati Patil, Nandini Chaudhari

http://jeeemi.org/index.php/jeeemi/article/view/1137
Predicting the Severity of Thyroid Nodules with YOLOv8 and CA+LSR Architecture
2026-05-16T11:02:22+07:00
Kalpana Devi (kalpanadevisrit@gmail.com), Vidhya S (vidyasivasubramaniamsrv@gmail.com), Therasa M (therasamic@gmail.com), Praveena A (praveenaayyasamy@gmail.com), Ramesh Kumar M (maestro.ramesh@gmail.com), Kalaivani E (kalaivanieswaran27@gmail.com)
<p>The rise in thyroid cancer has significantly increased the burden on radiologists, who must accurately diagnose thyroid nodules using sonography. Addressing this challenge requires a precise and efficient automatic computer-aided diagnosis system. A retrospective analysis was conducted on a dataset of 200 ultrasound images from 161 patients (84 benign and 77 malignant) at Wenzhou Central Hospital. This study presents an enhanced version of the You Only Look Once version 8 (YOLOv8) neural network, specifically designed to improve the accuracy of thyroid nodule diagnosis. YOLO localizes the relevant elements of the input images or frames in a single pass, and the article discusses the benefits this brings to the task. The proposed network incorporates a Coordinate Attention (CA) module and a Label Smoothing Regularization (LSR) module, which facilitate the extraction of positional information and enhance overall performance. The improved network identifies lesion areas and classifies nodule types with high accuracy, achieving a mean average precision (mAP) of 90% with an average inference time of 8 milliseconds on the test dataset.
The ablation experiment revealed that incorporating the CA and LSR modules adds 1.2 milliseconds of computational time per image while providing a significant 4.1% improvement in mean average precision (mAP). Compared with state-of-the-art networks, the enhanced YOLOv8 network performed exceptionally well in diagnosing benign and malignant thyroid nodules, even with a limited dataset. Furthermore, its high accuracy and efficiency suggest potential applicability to other sonographic diagnostic tasks, aiding radiologists in improving diagnostic accuracy and patient outcomes.</p>
2026-05-16T11:02:22+07:00
Copyright (c) 2026 Kalpana Devi, Vidhya S, Therasa M, Praveena A, Ramesh Kumar M, Kalaivani E
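The Label Smoothing Regularization module referenced in the abstract softens hard one-hot targets so the classifier is penalized for over-confident predictions. The sketch below shows the standard LSR transform on binary benign/malignant targets; the smoothing factor ε = 0.1 is a common default assumed here, not the paper's reported setting.

```python
import numpy as np

def smooth_labels(one_hot, epsilon=0.1):
    """Label Smoothing Regularization: mix each hard 0/1 target with a
    uniform distribution over the k classes, yielding targets of
    (1 - epsilon) + epsilon/k for the true class and epsilon/k otherwise."""
    k = one_hot.shape[-1]  # number of classes
    return one_hot * (1.0 - epsilon) + epsilon / k

# Binary benign/malignant targets, as in the thyroid-nodule setting
hard = np.eye(2)
soft = smooth_labels(hard, epsilon=0.1)
print(soft)  # [[0.95 0.05]
             #  [0.05 0.95]]
```

Training the classification head against `soft` instead of `hard` targets discourages extreme logits, which is especially useful on small datasets such as the 200-image cohort described above.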