Simple Data Augmentation and U-Net CNN for Nuclei Binary Segmentation on Pap Smear Images
Abstract
Pap smear images consist of cytoplasm and nuclei. In a Pap smear image, the nuclei are the most critical cell components and undergo significant changes in cervical cancer disorders. To help women avoid cervical cancer, early detection of nuclear abnormalities can be done in various ways, one of which is separating the nuclei from the non-nucleus parts through image segmentation. In this study, segmentation that separates the nuclei from the other parts of the Pap smear image is carried out by applying the U-Net CNN architecture. The amount of Pap smear image data is limited, and this limited data can cause overfitting of the U-Net CNN model, whereas U-Net CNN needs a large amount of training data to achieve good performance. One technique to increase the amount of data is augmentation; simple augmentation techniques are flipping and rotation. The result of applying the U-Net CNN architecture with augmentation is a binary image consisting of two parts, namely the background and the nuclei. The performance of the combined U-Net CNN and augmentation technique is evaluated using accuracy, sensitivity, specificity, and F1-score. The accuracy, sensitivity, and F1-score values of the method are greater than 90%, while the specificity is still below 80%. These performance results show that U-Net CNN combined with augmentation is excellent at detecting nuclei, compared to detecting non-nucleus regions, in Pap smear images.
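The flip-and-rotation augmentation and the four evaluation metrics mentioned in the abstract can be sketched as follows. This is a minimal illustration with NumPy only, not the authors' implementation; the function names `augment` and `binary_metrics` are hypothetical, and the metrics assume binary masks where 1 marks nuclei pixels.

```python
import numpy as np

def augment(image, mask):
    """Generate simple flipped and rotated copies of an image/mask pair."""
    pairs = [(image, mask)]
    pairs.append((np.fliplr(image), np.fliplr(mask)))  # horizontal flip
    pairs.append((np.flipud(image), np.flipud(mask)))  # vertical flip
    for k in (1, 2, 3):                                # 90/180/270 degree rotations
        pairs.append((np.rot90(image, k), np.rot90(mask, k)))
    return pairs

def binary_metrics(pred, truth):
    """Accuracy, sensitivity, specificity, and F1-score for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # nuclei pixels correctly detected
    tn = np.sum(~pred & ~truth)    # background pixels correctly detected
    fp = np.sum(pred & ~truth)     # background predicted as nuclei
    fn = np.sum(~pred & truth)     # nuclei missed
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, sensitivity, specificity, f1
```

Note that applying both flips and the three rotations to each image multiplies the training set six-fold, which is one common way to counter overfitting when only a small Pap smear dataset is available.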
Copyright (c) 2024 Anita Desiani, Irmeilyana, Des Alwine Zayanti, Yadi Utama, Muhammad Arhami, Azhar Kholiq Affandi, Muhammad Aditya Sasongko, Indri Ramayanti

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.