Computational Analysis of Medical Image Generation Using Generative Adversarial Networks (GANs)
Abstract
The limited availability of diverse, high-quality medical images constitutes a significant obstacle to training reliable deep-learning models for clinical use. Traditional data-augmentation methods produce inadequate medical images, resulting in poor model performance and weak generalization. This research evaluates four GAN architectures, DCGAN, cGAN, CycleGAN, and SRGAN, through performance testing on five key medical imaging datasets: Diabetic Retinopathy, Pneumonia, Brain Tumor, Skin Cancer, and Leukemia. The main contribution is an extensive evaluation of these GAN models on three criteria: image generation quality, training losses, and computational resource utilization. DCGAN generated stable, high-quality synthetic images, with generator losses ranging from 0.59 (Pneumonia) to 6.24 (Skin Cancer) and discriminator losses between 0.29 and 6.25. CycleGAN showed the best convergence for Diabetic Retinopathy, with generator and discriminator losses of 2.403 and 2.02, and for Leukemia, with losses of 3.325 and 3.129. SRGAN produced high-definition images with a generator loss of 6.253 and a discriminator loss of 6.119 on the Skin Cancer dataset, but failed to preserve crucial medical features in grayscale images. cGAN exhibited stable loss behavior across all datasets. DCGAN required the fewest computational resources, training in 4 to 7 hours with 0.9M to 1.4M parameters. SRGAN required 7 to 10 hours and 1.7M to 2.3M parameters, and CycleGAN required comparable resources. DCGAN was therefore identified as the most suitable model for synthetic medical image generation, offering the best balance of output quality and resource efficiency.
The findings indicate that augmenting medical datasets with DCGAN-generated images is a viable approach to strengthening AI-based diagnostic systems in healthcare.
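The generator and discriminator losses compared above are conventionally the standard binary cross-entropy adversarial losses (the non-saturating GAN formulation); the paper does not spell out its exact loss definitions, so the following is a minimal NumPy sketch, using hypothetical discriminator outputs, of how such values are typically computed:

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy of predicted probabilities p against labels y."""
    eps = 1e-7  # avoid log(0)
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical discriminator outputs for one batch:
d_real = np.array([0.9, 0.8, 0.95])   # D(x) on real images
d_fake = np.array([0.2, 0.1, 0.15])   # D(G(z)) on generated images

# Discriminator loss: classify real as 1 and fake as 0.
d_loss = bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

# Non-saturating generator loss: push D(G(z)) toward 1.
g_loss = bce(d_fake, np.ones_like(d_fake))

print(round(d_loss, 3), round(g_loss, 3))  # → 0.29 1.936
```

Lower discriminator loss with a high generator loss (as in the early-training regime sketched here) indicates the discriminator is winning; the balanced loss pairs reported for CycleGAN suggest closer equilibrium between the two networks.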
Copyright (c) 2025 Shrina Patel, Ashwin Makwana

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).