Grad-CAM based Visualization for Interpretable Lung Cancer Categorization using Deep CNN Models
Abstract
The Grad-CAM (Gradient-weighted Class Activation Mapping) technique has emerged as a crucial tool for explaining deep learning models, particularly convolutional neural networks (CNNs), by visually highlighting the regions of input images that contribute most to a model's predictions. In the context of lung cancer histopathological image classification, this approach provides insight into the decision-making process of models such as InceptionV3, XceptionNet, and VGG19. These CNN architectures, renowned for their high performance in image classification tasks, can be leveraged for automated diagnosis of lung cancer from histopathological images. By applying Grad-CAM to these models, heatmaps can be generated that reveal the areas of the tissue samples most influential in classifying the images as lung adenocarcinoma, squamous cell carcinoma, or benign patches. This technique allows visualization of the network's focus on specific regions, such as cancerous cells or abnormal tissue structures, which may otherwise be difficult to interpret. Using pre-trained models fine-tuned for the task, the Grad-CAM method computes the gradients of the target class with respect to the final convolutional layer, generating a heatmap that can be overlaid on the input image. The results of Grad-CAM for InceptionV3, XceptionNet, and VGG19 offer distinct insights, as each model has unique characteristics: InceptionV3 focuses on multi-scale features, XceptionNet captures deeper patterns with depthwise separable convolutions, and VGG19 emphasizes simpler, more global attributes. By juxtaposing the heatmaps generated by each architecture, one can assess each model's focus areas, facilitating better comprehension of and confidence in the model's predictions, which is crucial for clinical applications. Ultimately, the Grad-CAM approach not only increases model transparency but also improves the interpretability of lung cancer diagnosis in histopathological image classification.
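The core Grad-CAM computation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes the feature maps of the final convolutional layer and the gradients of the target class score with respect to those maps have already been extracted from a framework such as TensorFlow or PyTorch, and it shows only the combination step (channel weights via global average pooling of gradients, weighted sum, ReLU, normalization).

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Combine last-conv-layer activations and class-score gradients
    into a Grad-CAM heatmap.

    activations: array of shape (H, W, K) — feature maps A^k
    gradients:   array of shape (H, W, K) — d(score_class)/d(A^k)
    returns:     array of shape (H, W) in [0, 1]
    """
    # alpha_k: global-average-pool the gradients over spatial dims
    weights = gradients.mean(axis=(0, 1))                      # shape (K,)
    # Weighted sum of feature maps across channels
    cam = np.tensordot(activations, weights, axes=([2], [0]))  # shape (H, W)
    # ReLU: keep only features with positive influence on the class
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for overlay on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting heatmap would be resized to the input image's resolution and alpha-blended over the histopathology slide; the spatial shape `(H, W)` is small (e.g., 7x7 for VGG19 on 224x224 inputs) because it mirrors the final convolutional layer, not the input.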
Copyright (c) 2025 Rashmi Mothkur, Pullagura Soubhagyalakshmi, Swetha C. B.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).