Authors :
Utsha Sarker; Lalit Vaishnav; Archy Biswas; Harsh; Ikram Ali; Priyanshu Agarwal
Volume/Issue :
Volume 11 - 2026, Issue 3 - March
Google Scholar :
https://tinyurl.com/yap35y9x
Scribd :
https://tinyurl.com/3dmpm3vp
DOI :
https://doi.org/10.38124/ijisrt/26mar1828
Note : A published paper may take 4-5 working days from the publication date to appear in PlumX Metrics, Semantic Scholar, and ResearchGate.
Abstract :
Fruit freshness evaluation has become an important task in food safety and supply chain management, yet traditional manual inspection is subjective, time-consuming, and inconsistent. Recent developments in deep
learning have opened the door to automated fruit quality analysis, but most top-performing models are
computationally expensive and hard to interpret, which limits their use in the real world.
In this work, we propose an efficient fruit freshness detection framework that transfers knowledge from a high-capacity
teacher network (ResNet-based) to a lightweight student model via knowledge distillation. To improve transparency,
Gradient-weighted Class Activation Mapping (Grad-CAM) is used to visualise the discriminative regions that drive
the model's predictions, increasing trust and interpretability in practical use. Experimental results show that the distilled
student model performs comparably to the teacher network (accuracy and F1-score within a 1-3%
margin) while substantially reducing model size and inference latency, which is consistent
with recent studies on fruit classification under computational constraints [1], [2] and on efficient
knowledge distillation approaches. In addition, Grad-CAM visualisations highlight relevant freshness indicators such
as discoloration and texture variation, in line with research on explainable AI for fruit quality assessment [3].
The proposed framework offers a good balance among accuracy, efficiency, and
interpretability, making it suitable for real-world deployment in smart agriculture and food monitoring
systems.
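To make the teacher-student transfer concrete, the following is a rough sketch (not the authors' code) of the standard Hinton-style distillation objective the abstract alludes to: a weighted sum of the usual cross-entropy on hard labels and a temperature-softened KL term against the teacher's outputs. The temperature `T` and mixing weight `alpha` are hypothetical illustration values, not reported hyperparameters.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields softer probabilities.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    """alpha * CE(hard label) + (1 - alpha) * T^2 * KL(teacher || student)."""
    # Cross-entropy against the ground-truth class at T = 1.
    p_student = softmax(student_logits)
    ce = -math.log(p_student[hard_label])
    # KL divergence between temperature-softened distributions;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label cross-entropy remains, which is how the student can approach teacher-level accuracy at a fraction of the size.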
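The Grad-CAM step can likewise be sketched in a few lines. Given the activations of a convolutional layer and the gradients of the class score with respect to them, each channel is weighted by its spatially averaged gradient and the weighted sum is passed through a ReLU, keeping only regions that positively influence the prediction (e.g. a bruised or discolored patch). This toy version operates on nested lists and is an illustration of the published Grad-CAM formula, not the paper's pipeline.

```python
def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: K channel maps, each H x W (nested lists).
    Returns ReLU(sum_k alpha_k * A_k), where alpha_k is the spatial
    mean of the gradient map for channel k."""
    num_channels = len(feature_maps)
    height, width = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: global-average-pooled gradient per channel.
    alphas = [sum(sum(row) for row in g) / (height * width) for g in gradients]
    # Weighted sum of activation maps over channels.
    cam = [[0.0] * width for _ in range(height)]
    for k in range(num_channels):
        for i in range(height):
            for j in range(width):
                cam[i][j] += alphas[k] * feature_maps[k][i][j]
    # ReLU: discard regions with negative influence on the class score.
    return [[max(0.0, v) for v in row] for row in cam]
```

Upsampled to the input resolution and overlaid on the fruit image, the resulting map yields the freshness-indicator heatmaps described above.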
Keywords :
Fruit Freshness Detection, Deep Learning, Knowledge Distillation, Grad-CAM, Explainable AI, Lightweight Model.
References :
- Y. Gao, Y. Sun, Z. Li, and Y. Chen, “Retrieval-Augmented Generation for Large Language Models: A Survey,” arXiv preprint arXiv:2312.10997, 2023.
- C. Sharma, “Retrieval-Augmented Generation: A Comprehensive Survey of Architectures, Enhancements, and Robustness Frontiers,” arXiv preprint, 2025.
- A. Brown, M. Roman, and B. Devereux, “A Systematic Literature Review of Retrieval-Augmented Generation: Techniques, Metrics, and Challenges,” arXiv preprint, 2025.
- A. Gan, H. Li, and J. Zhang, “Retrieval Augmented Generation Evaluation in the Era of Large Language Models: A Comprehensive Survey,” arXiv preprint, 2025.
- Z. Li, Y. Gao, and X. Wang, “Retrieval-Augmented Generation for Educational Applications: A Survey,” Computers & Education: Artificial Intelligence, 2025.
- P. Omrani, A. Khosravi, and M. Rahmani, “Hybrid Retrieval-Augmented Generation Approach for LLM Query Response Enhancement,” in Proc. IEEE Int. Conf. on Intelligent Computing and Wireless Communications (ICWC), 2024.
- B. Zhan, Y. Liu, and H. Chen, “RARoK: Retrieval-Augmented Reasoning on Knowledge for Medical Question Answering,” in Proc. IEEE Int. Conf. on Bioinformatics and Biomedicine (BIBM), 2024.
- Y. Morales-Martínez, J. Pérez, and L. Gómez, “Application of Retrieval-Augmented Generation Systems in Software Engineering Education,” Int. J. Combinatorial Optimization Problems and Informatics, 2025.
- R. Yang, “RAGVA: Engineering Retrieval-Augmented Generation Applications,” Information and Software Technology, 2025.
- P. Jiang, “Comparative Study of Retrieval-Augmented Generation and Chain-of-Thought Reasoning in Large Language Models,” Engineering Applications of Artificial Intelligence, 2025.
- Y. Zhao, X. Liu, and K. Wang, “ReCode: Improving LLM-Based Code Repair with Fine-Grained Retrieval-Augmented Generation,” arXiv preprint, 2025.
- S. Kumar, R. Patel, and A. Singh, “Robust Implementation of Retrieval-Augmented Generation via Computing-in-Memory,” in Proc. ACM/IEEE Design Automation Conf., 2025.
- E. Karakurt, “Retrieval-Augmented Generation and Large Language Models: Trends and Challenges,” Applied Sciences, vol. 15, no. 3, 2025.
- M. Klesel, T. Müller, and S. Wagner, “Retrieval-Augmented Generation: Concepts and Applications,” Springer, 2025.
- E. Karakurt, “Retrieval-Augmented Generation and Large Language Models: A Bibliometric Analysis,” Preprints, 2025.
- Y. Gao, H. Sun, and Z. Li, “LLM-Based Retrieval-Augmented Generation for 6G Wireless Networks,” 2025.
- D. He, Q. Wang, and L. Zhang, “Dynamic Retrieval-Augmented Generation of Ontologies (DRAGON-AI),” Journal of Biomedical Semantics, 2024.
- H. Wang, Y. Liu, and X. Chen, “Retrieval-Augmented Generation with Conflicting Evidence,” in Findings of ACL, 2025.
- Q. Leng, Z. Zhao, and Y. Li, “On the Performance of Long-Context Retrieval-Augmented Generation in Large Language Models,” 2024.
- A. Leto, M. Rossi, and F. Bianchi, “Toward Optimal Search and Retrieval for RAG Systems,” 2024.
- P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-T. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
- O. Ram, Y. Levine, B. Efrat, D. Chen, and O. Levy, “In-Context Retrieval-Augmented Language Models,” Transactions of the Association for Computational Linguistics (TACL), 2023.
- K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston, “Retrieval Augmentation Reduces Hallucination in Conversation,” 2021.
- Y. Luan, J. Eisenstein, K. Toutanova, and M. Collins, “Sparse, Dense, and Attentional Representations for Text Retrieval,” TACL, 2021.
- W. Shi, S. Zhou, and Z. Chen, “Retrieval-Augmented Language Models in Natural Language Processing,” in Proc. NAACL, 2024.