arxiv:2412.19291

RAG with Differential Privacy

Published on Dec 26, 2024
Abstract

Differentially private token generation offers a privacy-preserving approach for retrieval-augmented generation systems handling personal data.

AI-generated summary

Retrieval-Augmented Generation (RAG) has emerged as the dominant technique for providing Large Language Models (LLMs) with fresh and relevant context, mitigating the risk of hallucinations and improving the overall quality of responses in environments with large and fast-moving knowledge bases. However, the integration of external documents into the generation process raises significant privacy concerns. Indeed, once documents are added to a prompt, it is not possible to guarantee that a response will not inadvertently expose confidential data, leading to potential privacy breaches and ethical dilemmas. This paper explores a practical solution to this problem suitable for general knowledge extraction from personal data. It shows that differentially private token generation is a viable approach to private RAG.
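To make the core idea concrete, differentially private token generation can be sketched with the exponential mechanism: instead of greedily picking the highest-scoring next token, one samples a token with probability proportional to exp(ε·u/2Δ), where u is the token's utility (here, its logit) and Δ its sensitivity. The sketch below is a minimal illustration under assumed values, not the paper's actual algorithm; the toy logits, sensitivity, and epsilon are hypothetical.

```python
# Minimal sketch of DP token selection via the exponential mechanism.
# NOT the paper's exact method; logits, sensitivity, and epsilon below
# are illustrative assumptions only.
import math
import random

def dp_select_token(logits, epsilon, sensitivity=1.0, rng=random):
    """Sample a token index with the exponential mechanism.

    Each logit is treated as the utility score; the probability of
    choosing index i is proportional to exp(epsilon * u_i / (2 * sensitivity)).
    """
    scale = epsilon / (2.0 * sensitivity)
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    weights = [math.exp(scale * (u - m)) for u in logits]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(logits) - 1

# Toy next-token logits over a 4-token vocabulary.
logits = [2.0, 0.5, 0.1, -1.0]
token = dp_select_token(logits, epsilon=1.0)
print(token)
```

Smaller ε flattens the sampling distribution (more privacy, less utility); as ε grows, selection approaches ordinary greedy decoding. In a full private RAG pipeline this choice would be applied at every decoding step, with the privacy budget accounted across the whole generated response.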


