


Retrieval-Augmented Large Language Models for Robust Context-Aware Natural Language Understanding


Authors : Utsha Sarker; Archy Biswas; Ikram Ali; Lalit Vaishnav; Harsh; Priyanshu Agarwal

Volume/Issue : Volume 11 - 2026, Issue 3 - March


Google Scholar : https://tinyurl.com/2nddcuau

Scribd : https://tinyurl.com/yvx8scma

DOI : https://doi.org/10.38124/ijisrt/26mar1756



Abstract : Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language understanding; however, they still suffer from limitations such as outdated knowledge, a lack of domain-specific awareness, and hallucination of incorrect information. These problems arise because LLMs rely mainly on parametric knowledge stored during training, which is neither dynamically updated nor verified. To address these challenges, this paper introduces an improved Retrieval-Augmented Generation (RAG) framework that combines a context-aware retrieval mechanism with a gating-based prompt augmentation strategy. The proposed approach selectively filters and ranks retrieved documents through a context-awareness gate before injecting them into the LLM, improving relevance and reducing noise in the generated responses. We validate the proposed method on benchmark datasets such as SQuAD, as well as domain-specific question answering and dialogue datasets, comparing against baselines including vanilla LLMs and standard RAG pipelines. Experimental results show that our method substantially outperforms traditional approaches in Exact Match (EM), F1-score, and fact consistency. These findings are consistent with recent studies showing the value of RAG in enhancing factual grounding and reducing hallucinations in LLMs [1]. Contributions: We propose a novel context-aware RAG architecture with a retrieval filtering mechanism; we design an improved prompt integration strategy for stronger knowledge grounding; and we empirically demonstrate better performance on several NLP benchmarks.
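The abstract describes a context-awareness gate that filters and ranks retrieved documents before they are injected into the LLM prompt. The paper does not publish its implementation, so the following is only a minimal illustrative sketch: it uses a simple bag-of-words cosine similarity as a stand-in for whatever relevance scorer the authors actually use, with a hypothetical `threshold` and `top_k` as the gating parameters.

```python
from collections import Counter
from math import sqrt

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts
    (a placeholder for a learned embedding-based scorer)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def context_aware_gate(query: str, docs: list[str],
                       threshold: float = 0.2, top_k: int = 3) -> list[str]:
    """Score each retrieved document against the query, drop
    low-relevance documents (the 'gate'), and return the top_k
    survivors ranked by score."""
    scored = [(cosine_sim(query, d), d) for d in docs]
    kept = [(s, d) for s, d in scored if s >= threshold]
    kept.sort(key=lambda x: x[0], reverse=True)
    return [d for _, d in kept[:top_k]]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject only the gated documents into the LLM prompt."""
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

In a real pipeline the similarity function would be replaced by a dense retriever's relevance score, but the control flow (score, gate, rank, then augment the prompt) matches the mechanism the abstract outlines.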

Keywords : Retrieval-Augmented Generation (RAG), Large Language Models (LLMs), Context-Aware Natural Language Understanding, Contextual Retrieval, Knowledge-Intensive NLP.
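The Exact Match (EM) and F1 metrics used in the evaluation are the standard SQuAD-style token-overlap measures. As a reference point (the paper's own evaluation scripts are not published), a minimal sketch of how they are conventionally computed:

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, strip punctuation
    and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> int:
    """1 if the normalized prediction equals the normalized answer."""
    return int(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between prediction and gold answer."""
    p, g = normalize(pred).split(), normalize(gold).split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```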

References :

  1. Y. Gao, Y. Sun, Z. Li, and Y. Chen, “Retrieval-Augmented Generation for Large Language Models: A Survey,” arXiv preprint arXiv:2312.10997, 2023.
  2. C. Sharma, “Retrieval-Augmented Generation: A Comprehensive Survey of Architectures, Enhancements, and Robustness Frontiers,” arXiv preprint, 2025.
  3. A. Brown, M. Roman, and B. Devereux, “A Systematic Literature Review of Retrieval-Augmented Generation: Techniques, Metrics, and Challenges,” arXiv preprint, 2025.
  4. A. Gan, H. Li, and J. Zhang, “Retrieval Augmented Generation Evaluation in the Era of Large Language Models: A Comprehensive Survey,” arXiv preprint, 2025.
  5. Z. Li, Y. Gao, and X. Wang, “Retrieval-Augmented Generation for Educational Applications: A Survey,” Computers & Education: Artificial Intelligence, 2025.
  6. P. Omrani, A. Khosravi, and M. Rahmani, “Hybrid Retrieval-Augmented Generation Approach for LLM Query Response Enhancement,” in Proc. IEEE Int. Conf. on Intelligent Computing and Wireless Communications (ICWC), 2024.
  7. B. Zhan, Y. Liu, and H. Chen, “RARoK: Retrieval-Augmented Reasoning on Knowledge for Medical Question Answering,” in Proc. IEEE Int. Conf. on Bioinformatics and Biomedicine (BIBM), 2024.
  8. Y. Morales-Martínez, J. Pérez, and L. Gómez, “Application of Retrieval-Augmented Generation Systems in Software Engineering Education,” Int. J. Combinatorial Optimization Problems and Informatics, 2025.
  9. R. Yang, “RAGVA: Engineering Retrieval-Augmented Generation Applications,” Information and Software Technology, 2025.
  10. P. Jiang, “Comparative Study of Retrieval-Augmented Generation and Chain-of-Thought Reasoning in Large Language Models,” Engineering Applications of Artificial Intelligence, 2025.
  11. Y. Zhao, X. Liu, and K. Wang, “ReCode: Improving LLM-Based Code Repair with Fine-Grained Retrieval-Augmented Generation,” arXiv preprint, 2025.
  12. S. Kumar, R. Patel, and A. Singh, “Robust Implementation of Retrieval-Augmented Generation via Computing-in-Memory,” in Proc. ACM/IEEE Design Automation Conf., 2025.
  13. E. Karakurt, “Retrieval-Augmented Generation and Large Language Models: Trends and Challenges,” Applied Sciences, vol. 15, no. 3, 2025.
  14. M. Klesel, T. Müller, and S. Wagner, “Retrieval-Augmented Generation: Concepts and Applications,” Springer, 2025.
  15. E. Karakurt, “Retrieval-Augmented Generation and Large Language Models: A Bibliometric Analysis,” Preprints, 2025.
  16. Y. Gao, H. Sun, and Z. Li, “LLM-Based Retrieval-Augmented Generation for 6G Wireless Networks,” 2025.
  17. D. He, Q. Wang, and L. Zhang, “Dynamic Retrieval-Augmented Generation of Ontologies (DRAGON-AI),” Journal of Biomedical Semantics, 2024.
  18. H. Wang, Y. Liu, and X. Chen, “Retrieval-Augmented Generation with Conflicting Evidence,” in Findings of ACL, 2025.
  19. Q. Leng, Z. Zhao, and Y. Li, “On the Performance of Long-Context Retrieval-Augmented Generation in Large Language Models,” 2024.
  20. A. Leto, M. Rossi, and F. Bianchi, “Toward Optimal Search and Retrieval for RAG Systems,” 2024.
  21. P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-T. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
  22. O. Ram, Y. Levine, B. Efrat, D. Chen, and O. Levy, “In-Context Retrieval-Augmented Language Models,” Transactions of the Association for Computational Linguistics (TACL), 2023.
  23. K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston, “Retrieval Augmentation Reduces Hallucination in Conversation,” 2021.
  24. Y. Luan, J. Eisenstein, K. Toutanova, and M. Collins, “Sparse, Dense, and Attentional Representations for Text Retrieval,” TACL, 2021.
  25. W. Shi, S. Zhou, and Z. Chen, “Retrieval-Augmented Language Models in Natural Language Processing,” in Proc. NAACL, 2024.

