I have Ollama installed locally and I can run ollama run tinyllama from the command prompt, and I can ask the LLM questions there as well. But when I run the code below in Python, it does not return a response; it only prints "Empty Response". Below is the code:
from llama_index.llms.ollama import Ollama
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Load the documents to index from the ./data directory
documents = SimpleDirectoryReader("data").load_data()
print(f"Loaded {len(documents)} documents.")

# bge-base embedding model
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

# Ollama LLM (tinyllama is already pulled locally)
Settings.llm = Ollama(model="tinyllama:latest", request_timeout=360.0)

# Build the vector index and a query engine over it
index = VectorStoreIndex.from_documents(
    documents,
    embed_model=Settings.embed_model,
)
query_engine = index.as_query_engine()
print("Query engine created.")

query = "What is the capital of France?"
print(f"Executing query: {query}")
response = query_engine.query(query)
print(response)
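
For reference, a minimal check that bypasses the index and calls the Ollama LLM directly (a sketch, reusing the same model name and timeout as above) can help narrow down where the empty response comes from:

# Sketch: query the Ollama LLM directly, bypassing retrieval,
# to check whether the model itself returns any text.
from llama_index.llms.ollama import Ollama

llm = Ollama(model="tinyllama:latest", request_timeout=360.0)
print(llm.complete("What is the capital of France?"))

If this direct call also prints nothing, the problem is between llama_index and the Ollama server (for example, the server not reachable at the default address) rather than in the retrieval pipeline; if it answers, the issue is more likely in the indexing or query step.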