I am using LlamaIndex to perform retrieval-augmented generation (RAG).
Currently, I can retrieve and answer questions using the following minimal five-line example (from the LlamaIndex starter tutorial):
from llama_index import VectorStoreIndex, SimpleDirectoryReader

# Load every file under ./data and build an in-memory vector index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# query() returns a Response object containing the synthesized answer
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
This returns an answer, but I would like to display the retrieved context (e.g., the document chunks or sources) before the answer.
The desired output format would look something like:
Here's my retrieved context:
[x]
[y]
[z]
And here's my answer:
[answer]
What is the simplest reproducible way to modify the five-line example to achieve this?
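For reference, here is a sketch of the kind of modification I have in mind. It assumes the Response object returned by query() exposes the retrieved chunks via a source_nodes attribute (a list of scored nodes), which I believe is the case in llama_index, though I have not checked every release:

from llama_index import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

# Assumption: response.source_nodes holds the retrieved chunks as
# NodeWithScore objects, each wrapping the underlying text node.
print("Here's my retrieved context:")
for source_node in response.source_nodes:
    print(source_node.node.get_text())

print("And here's my answer:")
print(response)

If there is a built-in helper for this (I have seen get_formatted_sources() mentioned on the Response object), an answer using that would be even better.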