Authors :
Jayachitra L.; Dr. S. Subatra Devi
Volume/Issue :
Volume 10 - 2025, Issue 7 - July
Google Scholar :
https://tinyurl.com/y5d5matc
Scribd :
https://tinyurl.com/yahxkv8r
DOI :
https://doi.org/10.38124/ijisrt/25jul1461
Note : A published paper may take 4–5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Note : Google Scholar may take 30–40 days to display the article.
Abstract :
Breast cancer remains one of the most prevalent diseases affecting women worldwide. Accurate prognosis plays a vital role in guiding treatment decisions and improving survival rates. In recent years, Convolutional Neural Networks (CNNs) have gained significant attention for their ability to automate diagnostic and prognostic tasks. This paper reviews recent CNN-based models developed for breast cancer prognosis, particularly those integrating multi-modal data such as clinical, imaging, and molecular profiles. We explore key trends in model design, data fusion strategies, and common datasets used in research. Although CNNs show promising results, challenges such as limited interpretability and poor generalization remain. To address these, we suggest future research directions involving attention-based data fusion and explainable CNN architectures, with the goal of enhancing clinical adoption and reliability.
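The abstract highlights attention-based data fusion over clinical, imaging, and molecular inputs as a key direction. As a purely illustrative aid (not a model taken from the reviewed papers), the following minimal PyTorch sketch shows one common form of this idea: a small CNN image branch and a clinical-feature branch produce embeddings of equal width, and learned softmax weights decide how much each modality contributes to the fused representation. The class name, layer sizes, and the ten-feature clinical input are all hypothetical assumptions.

```python
import torch
import torch.nn as nn

class AttentionFusionCNN(nn.Module):
    """Illustrative sketch of attention-based multi-modal fusion.
    Layer sizes and feature counts are hypothetical, not from the paper."""

    def __init__(self, num_clinical_features: int = 10, num_classes: int = 2):
        super().__init__()
        # Image branch: two conv blocks, then global pooling to a 32-d embedding.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Clinical branch: a small MLP projecting tabular features to the same width.
        self.clinical_branch = nn.Sequential(
            nn.Linear(num_clinical_features, 32), nn.ReLU(),
        )
        # Attention head: softmax weights over the two modality embeddings.
        self.attention = nn.Sequential(nn.Linear(64, 2), nn.Softmax(dim=1))
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, image: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        img_emb = self.image_branch(image)           # (batch, 32)
        clin_emb = self.clinical_branch(clinical)    # (batch, 32)
        weights = self.attention(torch.cat([img_emb, clin_emb], dim=1))  # (batch, 2)
        fused = weights[:, 0:1] * img_emb + weights[:, 1:2] * clin_emb
        return self.classifier(fused)

# Usage with dummy data: 64x64 grayscale patches plus 10 clinical features.
model = AttentionFusionCNN()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 10))
print(logits.shape)  # torch.Size([4, 2])
```

Weighting whole modality embeddings is the simplest attention variant; models in the literature often attend over spatial locations or feature channels instead, but the fusion principle is the same.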
Keywords :
Breast Cancer Prognosis, Convolutional Neural Networks (CNN), Multi-Modal Learning, Deep Learning, Medical Imaging, Attention-Based Fusion, Interpretability.
References :
- World Health Organization, “Breast cancer,” WHO, 2021. [Online]. Available: https://www.who.int/news-room/fact-sheets/detail/breast-cancer
- G. Litjens et al., “A survey on deep learning in medical image analysis,” Med. Image Anal., vol. 42, pp. 60–88, Dec. 2017.
- A. Esteva et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017.
- S. U. Kandan, M. M. Alketbi, and Z. Al Aghbari, “Multi-input CNN: A deep learning-based approach for predicting breast cancer prognosis using multi-modal data,” Discover Data, vol. 3, no. 2, Feb. 2025. [Online]. Available: https://link.springer.com/article/10.1007/s44248-025-00021-x
- A. Maigari, Z. Zainol, and C. Xinying, “Multi-modal stacked ensemble model for breast cancer prognosis prediction,” Stat. Optim. Inf. Comput., vol. 13, no. 3, pp. 1013–1034, Oct. 2024. [Online]. Available: https://doi.org/10.19139/soic-2310-5070-2100
- N. U. H. Shah et al., “Deep multi-modal breast cancer detection network,” arXiv preprint, arXiv:2504.16954, Apr. 2025. [Online]. Available: https://arxiv.org/abs/2504.16954
- Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, Nov. 1998.
- J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 3431–3440.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Adv. Neural Inf. Process. Syst., vol. 25, 2012, pp. 1097–1105.
- K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint, arXiv:1409.1556, 2014.
- S. Minaee et al., “Deep learning-based mammography classification for breast cancer screening: A review,” Phys. Med., vol. 83, pp. 231–241, Jan. 2021.
- M. Heath et al., “The digital database for screening mammography,” in Proc. 5th Int. Workshop Digit. Mammogr., 2000, pp. 212–218.
- F. Spanhol, L. Oliveira, C. Petitjean, and L. Heutte, “A dataset for breast cancer histopathological image classification,” IEEE Trans. Biomed. Eng., vol. 63, no. 7, pp. 1455–1462, Jul. 2016.
- C. Curtis et al., “The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups,” Nature, vol. 486, no. 7403, pp. 346–352, 2012.
- The Cancer Genome Atlas Network, “Comprehensive molecular portraits of human breast tumours,” Nature, vol. 490, no. 7418, pp. 61–70, 2012.
- A. Haque, M. Neubert, and N. Demirci, “Benchmarking deep learning models for breast cancer prognosis prediction,” IEEE Access, vol. 9, pp. 103795–103805, 2021.
- E. Tjoa and C. Guan, “A survey on explainable artificial intelligence (XAI): Toward medical XAI,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 11, pp. 4793–4813, Nov. 2021.
- R. R. Selvaraju et al., “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 618–626.
- A. Momeni et al., “Cross-dataset generalization in breast cancer classification using deep learning,” Comput. Biol. Med., vol. 136, p. 104706, 2021.
- A. Holzinger et al., “What do we need to build explainable AI systems for the medical domain?” arXiv preprint, arXiv:1712.09923, 2017.
- S. Saha et al., “Breast cancer prognosis through the use of multi-modal classifiers: Current state of the art and the way forward,” Brief. Funct. Genomics, vol. 23, no. 1, pp. 1–15, 2024.