A SYSTEMATIC REVIEW OF OPTIMIZED RETRIEVAL-AUGMENTED GENERATION FRAMEWORKS FOR REDUCING HALLUCINATIONS IN LARGE LANGUAGE MODELS
Jaipal Kumar, Dr. Shashank Swami
Department of Computer Science & Engineering, Vikrant University, Gwalior, Madhya Pradesh
Abstract
Large Language Models (LLMs) have made substantial progress in natural language processing, yet they frequently produce hallucinated or factually inaccurate content. This systematic review examines optimized Retrieval-Augmented Generation (RAG) frameworks and how they improve the reliability and factual accuracy of LLM outputs. The paper analyzes RAG system architectures, retrieval methods, and optimization techniques such as embedding tuning, query expansion, and reranking. It also surveys evaluation approaches based on metrics such as factual consistency and faithfulness. The findings indicate that incorporating external knowledge retrieval substantially reduces hallucinations and improves contextual grounding. Although challenges remain in scalability and knowledge management, optimized RAG frameworks offer a promising path toward trustworthy, evidence-based generative AI systems.
Keywords: Large Language Models; Retrieval-Augmented Generation; Hallucination Reduction; AI Reliability.
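To make the pipeline stages named above concrete, the following is a minimal, illustrative sketch of a retrieve-then-rerank-then-generate loop. It is not drawn from any framework reviewed in the paper: the bag-of-words "embedding", the synonym-based query expansion, and the overlap-based reranker are toy stand-ins for the learned encoders, expansion models, and cross-encoders that real RAG systems use.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a word-count vector (stand-in for a learned encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def expand_query(query, synonyms):
    """Query expansion: append known synonyms to widen retrieval recall."""
    extra = [s for w in query.lower().split() for s in synonyms.get(w, [])]
    return query + " " + " ".join(extra) if extra else query

def retrieve(query, corpus, k=2):
    """First stage: rank the corpus by embedding similarity, keep top-k."""
    qv = embed(query)
    return sorted(corpus, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]

def rerank(query, docs):
    """Second stage: reorder candidates by exact-term overlap (a cheap reranker)."""
    qt = set(query.lower().split())
    return sorted(docs, key=lambda d: len(qt & set(d.lower().split())), reverse=True)

def generate(query, docs):
    """Grounded 'generation': the answer quotes only the top retrieved passage."""
    return f"According to the retrieved context: {docs[0]}"
```

A usage example: retrieving from a three-document toy corpus, reranking, and producing a grounded answer. The grounding step is what limits hallucination in this sketch, since the output can only restate retrieved evidence.

```python
corpus = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "RAG grounds model outputs in retrieved documents.",
]
query = "Where is the Eiffel Tower?"
top = rerank(query, retrieve(query, corpus, k=2))
print(generate(query, top))
```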
Journal Name: EPRA International Journal of Research & Development (IJRD)
Published on: 2026-02-19
Vol: 11
Issue: 2
Month: February
Year: 2026