Cross-Species Disease Detection Model Using Domain Adaptation


Authors : Jeewon Kim

Volume/Issue : Volume 10 - 2025, Issue 9 - September


Google Scholar : https://tinyurl.com/jzayw2nr

Scribd : https://tinyurl.com/yshf59dw

DOI : https://doi.org/10.38124/ijisrt/25sep693



Abstract : Veterinary radiology faces persistent hurdles for deep learning: limited labeled data within each species and substantial domain shift driven by anatomical, acquisition, and contrast differences. We investigate a domain adaptation framework that transfers a pneumonia detector trained on canine chest radiographs to feline radiographs, enabling accurate, data-efficient cross-species diagnosis without requiring large labeled target datasets. The approach integrates adversarial distribution alignment with optional semi-supervised fine-tuning and supports deployment practices such as probability calibration and visual explanations.
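For context, the adversarial alignment component described in the abstract corresponds to domain-adversarial training of neural networks (DANN) [4], [10]. The sketch below illustrates that idea under the assumption of a PyTorch implementation; the tiny backbone, the module and function names (GradReverse, DANN, dann_step), and the fixed loss weight are illustrative placeholders, not the paper's released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; reverses and scales gradients on backward,
    # so the feature extractor is trained to fool the domain discriminator.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DANN(nn.Module):
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        # Placeholder backbone for single-channel radiographs; a real system
        # would use an ImageNet-pretrained CNN (see refs [6], [7]).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.label_head = nn.Linear(feat_dim, n_classes)  # pneumonia vs. normal
        self.domain_head = nn.Linear(feat_dim, 2)         # canine vs. feline

    def forward(self, x, lambd=1.0):
        f = self.features(x)
        # Domain logits see gradient-reversed features; label logits do not.
        return self.label_head(f), self.domain_head(grad_reverse(f, lambd))

def dann_step(model, opt, x_src, y_src, x_tgt, lambd=0.3):
    # One training step: supervised loss on the labeled source (canine) batch
    # plus an adversarial domain-classification loss on the combined source and
    # unlabeled target (feline) batches. lambd trades alignment off against
    # label accuracy.
    opt.zero_grad()
    cls_src, dom_src = model(x_src, lambd)
    _, dom_tgt = model(x_tgt, lambd)
    dom_labels = torch.cat(
        [torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long().to(dom_src.device)
    loss = F.cross_entropy(cls_src, y_src) \
         + F.cross_entropy(torch.cat([dom_src, dom_tgt]), dom_labels)
    loss.backward()
    opt.step()
    return loss.item()

In practice the domain-loss weight is typically ramped up over training as in [4], and deployment steps such as temperature-scaling calibration [14] and Grad-CAM visual explanations [9], [15] would be applied on top of the adapted classifier.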

References :

  1. T. Banzato, M. Wodzinski, S. Burti, V. L. Osti, V. Rossoni, M. Atzori, and A. Zotti, “Automatic classification of canine thoracic radiographs using deep learning,” Scientific Reports, vol. 11, Art. no. 3964, 2021. doi:10.1038/s41598-021-83515-3.
  2. L. Dumortier, F. Guépin, M.-L. Delignette-Muller, C. Boulocher, and T. Grenier, “Deep learning in veterinary medicine, an approach based on CNN to detect pulmonary abnormalities from lateral thoracic radiographs in cats,” Scientific Reports, vol. 12, Art. no. 11418, 2022. doi:10.1038/s41598-022-14993-2.
  3. W. Celniak, M. Wodzinski, A. Jurgas, S. Burti, A. Zotti, M. Atzori, H. Müller, and T. Banzato, “Improving the classification of veterinary thoracic radiographs through inter-species and inter-pathology self-supervised pre-training of deep learning models,” Scientific Reports, vol. 13, Art. no. 19518, 2023. doi:10.1038/s41598-023-46345-z.
  4. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-Adversarial Training of Neural Networks,” Journal of Machine Learning Research, vol. 17, no. 59, pp. 1–35, 2016.
  5. E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial Discriminative Domain Adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 7167–7176.
  6. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778.
  7. G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 4700–4708.
  8. J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund, B. Haghgoo, R. Ball, K. Shpanskaya, J. Seekins, D. A. Mong, S. S. Halabi, J. K. Sandberg, R. Jones, D. B. Larson, C. P. Langlotz, B. N. Patel, M. P. Lungren, and A. Y. Ng, “CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison,” in Proc. AAAI Conf. on Artificial Intelligence, vol. 33, 2019, pp. 590–597. doi:10.1609/aaai.v33i01.3301590.
  9. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 618–626.
  10. Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-Adversarial Training of Neural Networks,” Journal of Machine Learning Research, vol. 17, no. 59, pp. 1–35, 2016.
  11. E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial Discriminative Domain Adaptation,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 7167–7176.
  12. M. Long, Z. Cao, J. Wang, and M. I. Jordan, “Conditional Adversarial Domain Adaptation,” in Advances in Neural Information Processing Systems (NeurIPS), 2018, pp. 1640–1650.
  13. J. Irvin, P. Rajpurkar, M. Ko, Y. Yu, S. Ciurea-Ilcus, C. Chute, H. Marklund, B. Haghgoo, R. L. Ball, K. Shpanskaya, et al., “CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison,” in Proc. AAAI Conf. Artificial Intelligence, vol. 33, 2019, pp. 590–597.
  14. C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On Calibration of Modern Neural Networks,” in Proc. Int. Conf. Machine Learning (ICML), 2017, pp. 1321–1330.
  15. R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 618–626.
  16. T. Banzato, M. Wodzinski, S. Burti, V. L. Osti, V. Rossoni, M. Atzori, and A. Zotti, “Automatic classification of canine thoracic radiographs using deep learning,” Scientific Reports, vol. 11, art. 3964, 2021.
  17. L. Dumortier, F. Guépin, M.-L. Delignette-Muller, C. Boulocher, and T. Grenier, “Deep learning in veterinary medicine: a CNN approach to detect pulmonary abnormalities from lateral thoracic radiographs in cats,” Scientific Reports, vol. 12, art. 11418, 2022.
  18. C. A. F. Dueñas, S. M. G. Camacho, M. F. M. Gómez, et al., “Canine thoracic radiographic images as an educational dataset for distance learning and research on vertebral heart score
  19. C. A. F. Dueñas, S. M. G. Camacho, M. F. M. Gómez, et al., “Radiographic Dataset for VHS determination learning process (Canine latero-lateral thoracic radiographs),” Mendeley Data, v1, 2020. doi:10.17632/ktx4cj55pn.1. CC BY 4.0. (156 patients; PNG images). Available: https://data.mendeley.com/datasets/ktx4cj55pn/1.
  20. University of Illinois College of Veterinary Medicine, “Feline Thorax Example 1 (3-view radiographs),” 2019. Available: https://vetmed.illinois.edu/imaging anatomy/feline/thorax/ex01/thor01-f%20.html.
  21. University of Illinois College of Veterinary Medicine, “Feline Thorax Example 3 (3-view radiographs),” 2019. Available: https://vetmed.illinois.edu/imaging anatomy/feline/thorax/ex03/thor03-f%20.html.
  22. J. Irvin, P. Rajpurkar, M. Ko, et al., “CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison,” in Proc. AAAI, 2019, pp. 590–597. Available: https://ojs.aaai.org/index.php/AAAI/article/view/3834.
  23. A. E. W. Johnson, T. J. Pollard, S. J. Berkowitz, et al., “MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports,” Scientific Data, vol. 6, art. 317, 2019. Available: https://www.nature.com/articles/s41597-019-0322-0.
  24. H. Q. Nguyen, K. Lam, L. T. Le, et al., “VinDr-CXR: An open dataset of chest X-rays with radiologist annotations,” Scientific Data, vol. 9, art. 429, 2022. Available: https://www.nature.com/articles/s41597-022-01498-w.
  25. Google, “Colab Paid Services and Pricing,” 2025. Available: https://colab.research.google.com/signup.
  26. Google, “Making the Most of your Colab Subscription (Pro/Pro+ guide),” 2025. Available: https://colab.research.google.com/notebooks/pro.ipynb.
  27. Google, “Colab FAQ (GPUs/TPUs, limits, runtimes),” 2025. Available: https://research.google.com/colaboratory/faq.html.
