Authors :
Sivarama Krishna Akhil Koduri
Volume/Issue :
Volume 11 - 2026, Issue 1 - January
Google Scholar :
https://tinyurl.com/57rr98rb
Scribd :
https://tinyurl.com/56pc3k7w
DOI :
https://doi.org/10.38124/ijisrt/26jan155
Abstract :
Retrieval-Augmented Generation (RAG) has established itself as the standard for reducing hallucinations in Large
Language Models (LLMs) by grounding generation in external knowledge. However, conventional RAG implementations
rely on static vector stores, limiting their utility in dynamic environments where information evolves rapidly. This reliance
on fixed knowledge bases restricts adaptability and long-term scalability. This paper synthesizes recent literature on RAG
system design, specifically focusing on mechanisms for continuous learning. Building on frameworks by Zheng et al. and
Zhang et al., we analyze architectures that support continuous memory addition, deletion, consolidation, and re-weighting.
These mechanisms shift RAG from static retrieval toward incremental learning, mirroring biological memory processes.
Our analysis demonstrates that dynamic memory architectures outperform static systems in adaptability, robustness to
distribution shifts, and long-term retention. We conclude that dynamic memory updating is not merely an optimization but
a fundamental architectural requirement for sustaining lifelong learning in RAG systems.
Keywords :
Retrieval-Augmented Generation, Dynamic Memory Updating, Lifelong Learning, Continual Learning, Large Language Models, Memory-Augmented Systems, Adaptive AI Agents.
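The abstract above discusses memory-update mechanisms only at a high level. As a minimal sketch of how the four operations it names (addition, deletion, consolidation, and re-weighting) could coexist in a single store, the Python fragment below is illustrative only: the DynamicMemoryStore class, its method names, and the decay and merge heuristics are assumptions made for exposition and do not reproduce any specific architecture from the surveyed papers.

```python
# Illustrative sketch of a dynamic RAG memory store (not from the surveyed papers).
# Embeddings are plain float lists; in practice they would come from an encoder.
import math
import time
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    text: str
    embedding: list           # dense vector for the stored chunk
    weight: float = 1.0       # relevance weight, adjusted over time
    created: float = field(default_factory=time.time)
    hits: int = 0             # how often this entry was retrieved since last re-weighting

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class DynamicMemoryStore:
    """Supports the four update operations named in the abstract:
    addition, deletion, consolidation, and re-weighting."""

    def __init__(self, merge_threshold=0.95, decay=0.99):
        self.entries = []
        self.merge_threshold = merge_threshold  # similarity above which entries are merged
        self.decay = decay                      # per-update multiplicative weight decay

    def add(self, text, embedding):
        # Addition: append new knowledge as it arrives.
        self.entries.append(MemoryEntry(text, embedding))

    def delete(self, min_weight=0.1):
        # Deletion: forget entries whose weight has decayed below a floor.
        self.entries = [e for e in self.entries if e.weight >= min_weight]

    def consolidate(self):
        # Consolidation: merge near-duplicate entries, keeping the heavier one.
        kept = []
        for e in sorted(self.entries, key=lambda x: -x.weight):
            if all(cosine(e.embedding, k.embedding) < self.merge_threshold for k in kept):
                kept.append(e)
            else:
                # Fold the duplicate's weight into its closest surviving neighbour.
                nearest = max(kept, key=lambda k: cosine(e.embedding, k.embedding))
                nearest.weight += e.weight
        self.entries = kept

    def reweight(self):
        # Re-weighting: decay all weights, then boost frequently retrieved entries.
        for e in self.entries:
            e.weight = e.weight * self.decay + 0.05 * e.hits
            e.hits = 0

    def retrieve(self, query_embedding, k=3):
        # Rank by similarity scaled by learned weight, and record usage.
        scored = sorted(self.entries,
                        key=lambda e: cosine(query_embedding, e.embedding) * e.weight,
                        reverse=True)
        for e in scored[:k]:
            e.hits += 1
        return [e.text for e in scored[:k]]
```

In this sketch, consolidation folds the weight of a near-duplicate into its surviving neighbour, and re-weighting combines exponential decay with a retrieval-frequency boost; a real system would replace these placeholder heuristics with the update policies analyzed in the frameworks cited below.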
References :
- Y. Long, K. Chen, L. Jin, and M. Shang, "DRAE: Dynamic Retrieval-Augmented Expert Networks for Lifelong Learning and Task Adaptation in Robotics," in Proc. 63rd Annu. Meeting Assoc. Comput. Linguistics, Jul. 2025, pp. 23098--23141.
- X. Jiang et al., "Long term memory: The foundation of AI self-evolution," arXiv preprint arXiv:2410.15665, 2024.
- J. Zheng et al., "Lifelong learning of large language model based agents: A roadmap," arXiv preprint arXiv:2501.07278, 2025.
- Q. Qin et al., "Towards adaptive memory-based optimization for enhanced retrieval-augmented generation," in Findings of the Assoc. for Comput. Linguistics: ACL 2025, Jul. 2025, pp. 7991--8004.
- B. J. Gutiérrez et al., "From RAG to memory: Non-parametric continual learning for large language models," arXiv preprint arXiv:2502.14802, 2025.
- D. Zhang et al., "Memory in Large Language Models: Mechanisms, Evaluation and Evolution," arXiv preprint arXiv:2509.18868, 2025.
- D. Zhang et al., "Conversational Agents: From RAG to LTM," in Proc. 2025 Annu. Int. ACM SIGIR Conf. on R&D in Inf. Retr. in the Asia Pacific Region, Dec. 2025, pp. 447--452.
- J. Zheng, S. Qiu, C. Shi, and Q. Ma, "Towards lifelong learning of large language models: A survey," ACM Comput. Surv., vol. 57, no. 8, pp. 1--35, 2025.
- X. Liang et al., "SAGE: Self-evolving Agents with Reflective and Memory-augmented Abilities," Neurocomputing, p. 130470, 2025.
- K. Mao et al., "RAG-studio: Towards in-domain adaptation of retrieval augmented generation through self-alignment," in Findings of the Assoc. for Comput. Linguistics: EMNLP 2024, Nov. 2024, pp. 725--735.
- Y. Wang et al., "Towards lifespan cognitive systems," arXiv preprint arXiv:2409.13265, 2024.
- Y. Hu et al., "Memory in the Age of AI Agents," arXiv preprint arXiv:2512.13564, 2025.
- Y. Fan et al., "Research on the online update method for retrieval-augmented generation (RAG) model with incremental learning," arXiv preprint arXiv:2501.07063, 2025.
- A. S. Mohammed, "Dynamic Data: Achieving Timely Updates in Vector Stores," Libertatem Media Private Limited, 2024.
- M. Lei et al., "RoboMemory: A Brain-inspired Multi-memory Agentic Framework for Lifelong Learning in Physical Embodied Systems," in NeurIPS 2025 Workshop on Space in Vision, Language, and Embodied AI, 2025.
- S. Jeong et al., "Adaptive-RAG: Learning to adapt retrieval-augmented large language models through question complexity," arXiv preprint arXiv:2403.14403, 2024.
- R. Shan, "LearnRAG: Implementing Retrieval-Augmented Generation for Adaptive Learning Systems," in 2025 Int. Conf. on AI in Inf. and Commun. (ICAIIC), Feb. 2025, pp. 0224--0229.
- H. Qian et al., "MemoRAG: Boosting long context processing with global memory-enhanced retrieval augmentation," in Proc. ACM Web Conf. 2025, Apr. 2025, pp. 2366--2377.
- L. Gruia and B. Ionescu, "Continual Learning for Generative AI Systems: Retrieval-Augmentation, Graph Reasoning, and Multimodal Integration," 2025.
- J. Wang et al., "ComoRAG: A cognitive-inspired memory-organized RAG for stateful long narrative reasoning," arXiv preprint arXiv:2508.10419, 2025.
- M. Mao et al., "Multi-RAG: A Multimodal Retrieval-Augmented Generation System for Adaptive Video Understanding," arXiv preprint arXiv:2505.23990, 2025.
- S. Siriwardhana et al., "Improving the domain adaptation of retrieval augmented generation (RAG) models for open domain question answering," Trans. Assoc. Comput. Linguistics, vol. 11, pp. 1--17, 2023.
- M. Walker, "SCMS Whitepaper-Complete Document Sparse Contextual Memory Scaffolding," SCMS Whitepaper, Oct. 2025.
- H. Choi and J. Jeong, "A Conceptual Framework for a Latest Information-Maintaining Method Using Retrieval-Augmented Generation and a Large Language Model in Smart Manufacturing," Machines, vol. 13, no. 2, p. 94, 2025.
- Z. Ke, Y. Ming, and S. Joty, "Adaptation of Large Language Models," in Proc. 2025 Annu. Conf. NAACL: HLT, May 2025, pp. 30--37.