Multi-Stage CNN: U-Net and Xcep-Dense of Glaucoma Detection in Retinal Images

  • Anita Desiani Mathematics Department, Mathematics and Science Faculty, Universitas Sriwijaya, Inderalaya, 30862, Indonesia https://orcid.org/0000-0003-4198-6809
  • Sigit Priyanta Department of Computer Science and Electronics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta, 55281, Indonesia https://orcid.org/0000-0002-1673-8582
  • Indri Ramayanti Department of Parasitology, Faculty of Medicine, Universitas Muhammadiyah Palembang, 30166, Indonesia https://orcid.org/0000-0001-7301-3361
  • Bambang Suprihatin Mathematics Department, Mathematics and Science Faculty, Universitas Sriwijaya, Inderalaya, 30862, Indonesia
  • Muhammat Rio Halim Mathematics Department, Mathematics and Science Faculty, Universitas Sriwijaya, Inderalaya, 30862, Indonesia
  • Dite Geovani Mathematics Department, Mathematics and Science Faculty, Universitas Sriwijaya, Inderalaya, 30862, Indonesia
  • Ira Rayani Mathematics Department, Mathematics and Science Faculty, Universitas Sriwijaya, Inderalaya, 30862, Indonesia https://orcid.org/0009-0000-8377-8683
Keywords: Glaucoma, Classification, Segmentation, U-Net, Xcep-Dense

Abstract

Glaucoma is a chronic eye disease in which damage to the optic nerve causes vision loss that can progress to blindness. Glaucoma can be detected by classifying retinal images, but several previous studies classified glaucoma without performing segmentation first. Segmentation is needed to extract the optic disc and optic cup features from retinal images that are used to detect glaucoma. This study proposes a two-stage approach to glaucoma detection: a segmentation stage and a classification stage. Segmentation is carried out with the U-Net architecture, and classification with a new architecture, Xcep-Dense, which combines the Xception and DenseNet architectures. At the segmentation stage, accuracy, recall, precision, and F1-score are all above 90%, Cohen’s kappa is above 85%, and loss is below 20%. At the classification stage, accuracy and specificity are above 85%, sensitivity and F1-score are above 80%, and Cohen’s kappa is above 70%. The masks predicted at the segmentation stage are visually very close to the ground truth. These performance results indicate that the proposed method is feasible for detecting glaucoma.
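To make the two-stage design described in the abstract concrete, the minimal Keras sketch below pairs a small U-Net segmenter with a classifier that mixes Xception-style separable convolutions and DenseNet-style dense connectivity. It is an illustration only: the layer counts, filter sizes, input resolutions, and the mini_unet, dense_block, and xcep_dense_sketch names are assumptions made for this sketch, not the authors' published Xcep-Dense configuration.

from tensorflow.keras import layers, models

def mini_unet(input_shape=(128, 128, 3)):
    # Small encoder-decoder with skip connections that outputs a binary
    # optic disc/cup mask (stage 1: segmentation).
    inp = layers.Input(input_shape)
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return models.Model(inp, out, name="mini_unet")

def dense_block(x, growth=32, n_layers=4):
    # DenseNet-style connectivity: every layer receives the concatenation of
    # all previous feature maps; each layer uses an Xception-style
    # depthwise-separable convolution.
    for _ in range(n_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.SeparableConv2D(growth, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])
    return x

def xcep_dense_sketch(input_shape=(128, 128, 1), n_classes=2):
    # Hybrid classifier over the segmented mask (stage 2: classification).
    inp = layers.Input(input_shape)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = dense_block(x)
    x = layers.MaxPooling2D()(x)
    x = dense_block(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out, name="xcep_dense_sketch")

# Stage 1 predicts disc/cup masks from fundus images; stage 2 classifies the
# predicted masks as glaucoma or normal.
segmenter = mini_unet()
classifier = xcep_dense_sketch()

In this sketch the classifier consumes the predicted single-channel mask directly; in practice the segmented disc/cup region could also be cropped from the original fundus image before classification.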


Published: 2023-08-29

How to Cite: [1] A. Desiani, “Multi-Stage CNN: U-Net and Xcep-Dense of Glaucoma Detection in Retinal Images”, j.electron.electromedical.eng.med.inform, vol. 5, no. 4, pp. 211-222, Aug. 2023.

Section: Electronics