Multi-Source Knowledge Pruning for Retrieval-Augmented Generation: A Benchmark and Empirical Study

Abstract

Retrieval-augmented generation (RAG) is increasingly recognized as an effective approach for mitigating the hallucinations of large language models (LLMs) by integrating external knowledge. Despite numerous efforts, most studies focus on a single type of external knowledge source, whereas real-world applications typically draw on diverse knowledge from multiple sources — a setting that remains underexplored. The main obstacle is the lack of a suitable dataset containing multiple knowledge sources, together with a preliminary exploration of the associated issues. To address these challenges, we standardize a benchmark dataset that combines structured and unstructured knowledge across diverse and complementary domains. Based on this dataset, we further develop a plug-and-play RAG framework, PruningRAG, whose main characteristic is the use of multi-granularity pruning strategies to optimize the integration of relevant information while minimizing misleading context. PruningRAG consistently improves performance across various existing RAG variants, demonstrating its robustness and broad applicability. Building on the standardized dataset and PruningRAG, we also report a series of experimental results and insightful findings. Our dataset and code are publicly available at https://github.com/USTCAGI/PruningRAG, with the aim of advancing future research in the RAG community.

Publication
In Proceedings of the 34th ACM International Conference on Information and Knowledge Management
Shuo Yu
Master Student
Mingyue Cheng
Associate Researcher
Jie Ouyang
Master Student
Yucong Luo
Master Student
Qi Liu
Professor
Enhong Chen
Professor