Authors :
Dr. P. Maragathavalli; Dinesh V.; Yuvraj R.; Swaminathan S.
Volume/Issue :
Volume 11 - 2026, Issue 4 - April
Google Scholar :
https://tinyurl.com/55vwmnuh
Scribd :
https://tinyurl.com/ya9e567h
DOI :
https://doi.org/10.38124/ijisrt/26apr1344
Abstract :
Valvular heart diseases are among the leading causes of cardiovascular complications worldwide, and early
automated detection plays a critical role in improving patient outcomes. Traditional auscultation methods depend heavily
on physician expertise and often fail to detect subtle cardiac abnormalities. Automated analysis of Phonocardiogram (PCG)
signals provides a scalable and objective approach for cardiac screening.
This project, CardioXAI — Improved Detection of Valvular Cardiac Abnormalities using Phonocardiogram Signals
by Explainable AI, develops a hybrid deep learning framework that analyzes PCG recordings and provides interpretable
diagnostic insights. The system classifies recordings into five conditions — Normal, Aortic Stenosis (AS), Mitral
Regurgitation (MR), Mitral Stenosis (MS), and Mitral Valve Prolapse (MVP) — using a Dual-Branch EfficientNetB0
architecture that fuses 3-channel RGB spectrogram features with 19 handcrafted acoustic features.
The system integrates two Explainable AI methods: Grad-CAM for spatial time-frequency localization on
spectrograms, and SHAP for feature-level attribution. Beyond detection, the system provides cardiac phase localization (S1,
Systole, S2, Diastole), an anatomical heart valve diagram highlighting the affected valve, severity grading, and automated
clinical report generation. A novel cross-dataset validation experiment on the PhysioNet 2016 Heart Sound Challenge
dataset quantifies domain shift using Jaccard similarity and KS statistics, providing evidence of model generalization across
different recording devices.
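The two domain-shift measures named above can be sketched as follows. This is a hedged illustration of the general technique, not the paper's protocol: the synthetic `src`/`tgt` arrays stand in for one acoustic feature computed on the training corpus and on the PhysioNet 2016 recordings, and the bin count is an arbitrary choice.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical stand-ins for one acoustic feature (e.g. spectral centroid)
# measured on two corpora recorded with different devices.
src = rng.normal(0.0, 1.0, 1000)   # training-dataset feature values
tgt = rng.normal(0.4, 1.2, 1000)   # cross-dataset feature values

# KS statistic: maximum gap between the two empirical CDFs (0 = identical).
ks_stat, p_value = ks_2samp(src, tgt)

# Jaccard similarity of the occupied histogram bins (1 = identical support).
bins = np.histogram_bin_edges(np.concatenate([src, tgt]), bins=30)
occ_src = np.histogram(src, bins)[0] > 0
occ_tgt = np.histogram(tgt, bins)[0] > 0
jaccard = (occ_src & occ_tgt).sum() / (occ_src | occ_tgt).sum()

print(f"KS={ks_stat:.3f}, Jaccard={jaccard:.3f}")
```

A small KS statistic together with a high Jaccard similarity indicates the two datasets occupy similar feature distributions, i.e. limited domain shift between recording devices.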
Keywords :
Phonocardiogram, Explainable AI, EfficientNetB0, Grad-CAM, SHAP, Valvular Heart Disease, Deep Learning, Transfer Learning, CirCor 2022, PhysioNet 2016.
References :
- S. K. Padhy, A. Mohapatra, and S. Patra, "X-CBNet: An Explainable Effective Deep Learning Framework Based on Spectrograms for Predicting Valvular Disorder using PCG Signals," Journal of Transformative Technologies and Sustainable Development, vol. 9, no. 1, pp. 1–18, Oct. 2025.
- H. Alquran, Y. Al-Issa, M. Alsalatie, and S. Tawalbeh, "Deep learning models for segmenting phonocardiogram signals: a comparative study," PLOS ONE, vol. 20, no. 4, pp. 1–12, Apr. 2025.
- B. Althaph and N. P. Challa, "Explainable attention-based deep learning for classification and interpretation of heart murmurs using phonocardiograms," Scientific Reports, vol. 15, no. 1, pp. 1–23, Jan. 2025.
- M. Bahreini, R. Barati, and A. Kamali, "Cardiac sound classification using a hybrid approach: MFCC-based feature fusion and CNN deep features," EURASIP Journal on Audio, Speech, and Music Processing, vol. 2025, no. 1, pp. 1–13, Jan. 2025.
- Z. Ren et al., "A comprehensive survey on heart sound analysis in the deep learning era," Frontiers in Artificial Intelligence, pp. 3–29, Sep. 2024.
- M. Tan and Q. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," in Proc. 36th ICML, 2019.
- R. R. Selvaraju et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization," in Proc. IEEE ICCV, 2017.
- S. M. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," in Advances in Neural Information Processing Systems (NeurIPS), 2017.
- J. Oliveira et al., "The CirCor DigiScope Phonocardiogram Dataset," IEEE Journal of Biomedical and Health Informatics, vol. 26, no. 6, pp. 2524–2535, Jun. 2022.
- C. Liu et al., "An open access database for the evaluation of heart sound algorithms," Physiological Measurement, vol. 37, no. 12, pp. 2181–2213, 2016.