Google’s NotebookLM is an experimental project that was released last year. It allows users to upload files and analyze them with a large language model (LLM).
However, it is vulnerable to Prompt Injection, meaning that uploaded files can manipulate the chat conversation and control what the user sees in responses.
There is currently no known solution to these kinds of attacks, so users can’t implicitly trust responses from large language model applications when untrusted data is involved.