Medical Image Segmentation Using a Global Context-Aware and Progressive Channel-Split Fusion U-Net with Integrated Attention Mechanisms

Keywords: Deep Learning, Medical image segmentation, Efficient neural architectures, CFCSE-Net

Abstract

Medical image segmentation is a key component of Computer-Aided Diagnosis (CAD) systems across various imaging modalities. The task remains challenging, however, because many images exhibit low contrast and high lesion variability, and many clinical environments require computationally efficient models. This study proposes CFCSE-Net, a U-Net-based model that builds on X-UNet as the baseline for its CFGC and CSPF modules. The model incorporates a modified CFGC module with added Ghost Modules in the encoder, a CSPF module in the decoder, and Enhanced Parallel Attention (EPA) in the skip connections. The main contribution of this paper is a lightweight architecture that combines multi-scale feature extraction with attention mechanisms to increase segmentation accuracy while keeping model complexity low. We train and evaluate CFCSE-Net on four public datasets: Kvasir-SEG, CVC-ClinicDB, BUSI (resized to 256 × 256 pixels), and PH2 (resized to 320 × 320 pixels), with data augmentation applied. We report segmentation performance as the mean ± standard deviation of Intersection over Union (IoU), Dice Similarity Coefficient (DSC), and accuracy across three random seeds. CFCSE-Net achieves 79.78% ± 1.99 IoU, 87.21% ± 1.72 DSC, and 96.70% ± 0.59 accuracy on Kvasir-SEG; 88.11% ± 0.86 IoU, 93.42% ± 0.55 DSC, and 99.04% ± 0.09 accuracy on CVC-ClinicDB; 69.33% ± 2.66 IoU, 78.80% ± 2.65 DSC, and 96.30% ± 0.51 accuracy on BUSI; and 92.27% ± 0.52 IoU, 95.92% ± 0.30 DSC, and 98.06% ± 0.16 accuracy on PH2. Despite this strong performance, the model remains compact, with 909,901 parameters and a low computational cost of 3.24 GFLOPs for 256 × 256 inputs and 5.07 GFLOPs for 320 × 320 inputs. These results show that CFCSE-Net maintains stable performance on polyp, breast ultrasound, and skin lesion segmentation while remaining compact enough for CAD systems deployed on hardware with limited computational resources.
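
The abstract states that Ghost Modules are added inside the CFGC encoder blocks to keep the parameter count low, but it does not describe their exact configuration. The following is only a minimal PyTorch sketch of a generic Ghost Module as introduced in the GhostNet paper by Han et al.; the class name, channel ratio, and kernel sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Generic Ghost Module sketch (Han et al., GhostNet): a primary
    convolution produces a few 'intrinsic' feature maps, and cheap
    depthwise convolutions derive additional 'ghost' maps from them."""

    def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_channels = (out_channels + ratio - 1) // ratio  # ceil(out / ratio)
        new_channels = init_channels * (ratio - 1)
        self.out_channels = out_channels

        # Primary convolution: ordinary conv producing the intrinsic maps.
        self.primary = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise conv generating ghost maps at low cost.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_channels, new_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(new_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary(x)
        ghost = self.cheap(intrinsic)
        out = torch.cat([intrinsic, ghost], dim=1)
        return out[:, :self.out_channels, :, :]  # trim to the requested width
```

Replacing ordinary convolutions with blocks of this kind is what lets an encoder grow its channel width while spending far fewer parameters and FLOPs, which is consistent with the compact footprint reported above.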
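For readers unfamiliar with the reported metrics, IoU, DSC, and pixel accuracy for binary segmentation masks can be computed as in the generic sketch below. This is not the authors' evaluation code; the function name and the small epsilon guarding against division by zero are illustrative.

```python
import numpy as np

def iou_dsc_accuracy(pred, target, eps=1e-7):
    """Compute IoU, DSC (Dice), and pixel accuracy for one binary mask pair.

    pred, target: NumPy arrays of the same shape containing 0/1 or boolean values.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)

    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()

    iou = inter / (union + eps)                       # intersection over union
    dsc = 2 * inter / (pred.sum() + target.sum() + eps)  # Dice similarity coefficient
    acc = (pred == target).mean()                     # fraction of correct pixels
    return iou, dsc, acc
```

In practice, per-image scores like these are averaged over the test set; the abstract additionally reports the mean ± standard deviation of those averages across three random seeds.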

References

J. Zhang et al., “Advances in attention mechanisms for medical image segmentation,” Computer Science Review, vol. 56, p. 100721, May 2025, doi: 10.1016/j.cosrev.2024.100721.

G. Du, X. Cao, J. Liang, X. Chen, and Y. Zhan, “Medical Image Segmentation based on U-Net: A Review,” J. Imaging Sci. Technol., vol. 64, no. 2, pp. 020508-1–020508-12, Mar. 2020, doi: 10.2352/J.ImagingSci.Technol.2020.64.2.020508.

P.-H. Conze, G. Andrade-Miranda, V. K. Singh, V. Jaouen, and D. Visvikis, “Current and Emerging Trends in Medical Image Segmentation With Deep Learning,” IEEE Trans. Radiat. Plasma Med. Sci., vol. 7, no. 6, pp. 545–569, July 2023, doi: 10.1109/TRPMS.2023.3265863.

X. Shu, J. Wang, A. Zhang, J. Shi, and X.-J. Wu, “CSCA U-Net: A channel and space compound attention CNN for medical image segmentation,” Artificial Intelligence in Medicine, vol. 150, p. 102800, Apr. 2024, doi: 10.1016/j.artmed.2024.102800.

Y. Zhang, Q. Liao, L. Ding, and J. Zhang, “Bridging 2D and 3D segmentation networks for computation-efficient volumetric medical image segmentation: An empirical study of 2.5D solutions,” Computerized Medical Imaging and Graphics, vol. 99, p. 102088, July 2022, doi: 10.1016/j.compmedimag.2022.102088.

N. M. Ali, S. S. Oyelere, N. Jitani, R. Sarmah, and S. Andrew, “Hybrid intelligence in medical image segmentation,” Sci Rep, vol. 15, no. 1, p. 41200, Nov. 2025, doi: 10.1038/s41598-025-24990-w.

L. Alzubaidi et al., “Review of deep learning: concepts, CNN architectures, challenges, applications, future directions,” J Big Data, vol. 8, no. 1, p. 53, Mar. 2021, doi: 10.1186/s40537-021-00444-8.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” 2015, arXiv. doi: 10.48550/ARXIV.1505.04597.

Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: A Nested U-Net Architecture for Medical Image Segmentation,” 2018, arXiv. doi: 10.48550/ARXIV.1807.10165.

H. Lu, Y. She, J. Tie, and S. Xu, “Half-UNet: A Simplified U-Net Architecture for Medical Image Segmentation,” Front. Neuroinform., vol. 16, p. 911679, June 2022, doi: 10.3389/fninf.2022.911679.

K. Han, Y. Wang, Q. Tian, J. Guo, C. Xu, and C. Xu, “GhostNet: More Features From Cheap Operations,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA: IEEE, June 2020, pp. 1577–1586. doi: 10.1109/CVPR42600.2020.00165.

L. Lan, P. Cai, L. Jiang, X. Liu, Y. Li, and Y. Zhang, “BRAU-Net++: U-Shaped Hybrid CNN-Transformer Network for Medical Image Segmentation,” 2024, arXiv. doi: 10.48550/ARXIV.2401.00722.

M. A. Fathur Rohman, H. Prasetyo, E. P. Yudha, and C.-H. Hsia, “Improving Accuracy and Efficiency of Medical Image Segmentation Using One-Point-Five U-Net Architecture with Integrated Attention and Multi-Scale Mechanisms,” j.electron.electromedical.eng.med.inform, vol. 7, no. 3, pp. 869–880, July 2025, doi: 10.35882/jeeemi.v7i3.949.

X. Xie et al., “CANet: Context aware network with dual-stream pyramid for medical image segmentation,” Biomedical Signal Processing and Control, vol. 81, p. 104437, Mar. 2023, doi: 10.1016/j.bspc.2022.104437.

S. Xu et al., “X-UNet: A novel global context-aware collaborative fusion U-shaped network with progressive feature fusion of codec for medical image segmentation,” Neural Networks, vol. 192, p. 107943, Dec. 2025, doi: 10.1016/j.neunet.2025.107943.

D. Jha et al., “Kvasir-SEG: A Segmented Polyp Dataset,” 2019, arXiv. doi: 10.48550/ARXIV.1911.07069.

J. Bernal, F. J. Sánchez, G. Fernández-Esparrach, D. Gil, C. Rodríguez, and F. Vilariño, “WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians,” Computerized Medical Imaging and Graphics, vol. 43, pp. 99–111, July 2015, doi: 10.1016/j.compmedimag.2015.02.007.

W. Al-Dhabyani, M. Gomaa, H. Khaled, and A. Fahmy, “Dataset of breast ultrasound images,” Data in Brief, vol. 28, p. 104863, Feb. 2020, doi: 10.1016/j.dib.2019.104863.

T. Mendonca, P. M. Ferreira, J. S. Marques, A. R. S. Marcal, and J. Rozeira, “PH2 - A dermoscopic image database for research and benchmarking,” in 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka: IEEE, July 2013, pp. 5437–5440. doi: 10.1109/EMBC.2013.6610779.

M. A. Fathur Rohman, H. Prasetyo, H. M. Akbar, and A. D. Afan Firdaus, “ACMU-Net: An Efficient Architecture Based on ConvMixer and Attention Mechanism for Colorectal Polyp Segmentation,” in 2024 IEEE International Conference on Smart Mechatronics (ICSMech), Yogyakarta, Indonesia: IEEE, Nov. 2024, pp. 279–284. doi: 10.1109/ICSMech62936.2024.10812309.

E. Goceri, “Medical image data augmentation: techniques, comparisons and interpretations,” Artif Intell Rev, vol. 56, no. 11, pp. 12561–12605, Nov. 2023, doi: 10.1007/s10462-023-10453-z.

L. Lu, Q. Xiong, D. Chu, and B. Xu, “MixDehazeNet: Mix Structure Block for Image Dehazing Network,” 2023, arXiv. doi: 10.48550/ARXIV.2305.17654.

R. Andonie, “Hyperparameter optimization in learning systems,” J Membr Comput, vol. 1, no. 4, pp. 279–291, Dec. 2019, doi: 10.1007/s41965-019-00023-0.

Y. Yuan and Y. Cheng, “Medical image segmentation with UNet-based multi-scale context fusion,” Sci Rep, vol. 14, no. 1, p. 15687, Oct. 2024, doi: 10.1038/s41598-024-66585-x.

H. Al Jowair, M. Alsulaiman, and G. Muhammad, “Multi parallel U-net encoder network for effective polyp image segmentation,” Image and Vision Computing, vol. 137, p. 104767, Sept. 2023, doi: 10.1016/j.imavis.2023.104767.

M. Tan and Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” 2019, arXiv. doi: 10.48550/ARXIV.1905.11946.

S. Wang, L. Li, and X. Zhuang, “AttU-NET: Attention U-Net for Brain Tumor Segmentation,” in Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, A. Crimi and S. Bakas, Eds., Lecture Notes in Computer Science, vol. 12963, Cham: Springer International Publishing, 2022, pp. 302–311. doi: 10.1007/978-3-031-09002-8_27.

J. Chen et al., “TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers,” Medical Image Analysis, vol. 97, p. 103280, Oct. 2024, doi: 10.1016/j.media.2024.103280.

H. Wang, P. Cao, J. Wang, and O. R. Zaiane, “UCTransNet: Rethinking the Skip Connections in U-Net from a Channel-wise Perspective with Transformer,” 2021, arXiv. doi: 10.48550/ARXIV.2109.04335.

B. Chen, Y. Liu, Z. Zhang, G. Lu, and A. W. K. Kong, “TransAttUnet: Multi-level Attention-guided U-Net with Transformer for Medical Image Segmentation,” 2021, arXiv. doi: 10.48550/ARXIV.2107.05274.

Y. Chen, X. Zhang, L. Peng, Y. He, F. Sun, and H. Sun, “Medical image segmentation network based on multi-scale frequency domain filter,” Neural Networks, vol. 175, p. 106280, July 2024, doi: 10.1016/j.neunet.2024.106280.

Published
2026-01-09

How to Cite
A. R. Widhayaka and H. Prasetyo, “Medical Image Segmentation Using a Global Context-Aware and Progressive Channel-Split Fusion U-Net with Integrated Attention Mechanisms,” j.electron.electromedical.eng.med.inform, vol. 8, no. 1, pp. 206–221, Jan. 2026.

Section
Medical Engineering