I have Ollama installed locally, and I can run ollama run tinyllama from the command prompt and ask the LLM questions there. But when I run the code in Python, it does not return an answer; it only prints "Empty Response". Below is the code:

from llama_index.llms.ollama import Ollama
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

documents = SimpleDirectoryReader("data").load_data()
print(f"Loaded {len(documents)} documents.")

# Use the BAAI bge-base embedding model from Hugging Face
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-base-en-v1.5")

# Use tinyllama served by the local Ollama instance as the LLM
Settings.llm = Ollama(model="tinyllama:latest", request_timeout=360.0)

index = VectorStoreIndex.from_documents(
    documents,
    embed_model=Settings.embed_model
)

query_engine = index.as_query_engine()
print("Query engine created.")

query = "What is the capital of France?"
print(f"Executing query: {query}")
response = query_engine.query(query) 

print(response)
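
Since the same model answers fine from the command prompt, one way to narrow this down is to test the Ollama LLM directly, bypassing the index, and then to check whether the retriever actually found any source nodes (str(response) renders as "Empty Response" when the underlying response text is None). A rough sketch continuing from the code above; it assumes Ollama is serving on its default local endpoint, and the prompt is only illustrative:

# Sanity check 1: call the LLM directly, with no index involved.
completion = Settings.llm.complete("What is the capital of France?")
print(repr(completion.text))  # empty text here means the Ollama call itself is failing

# Sanity check 2: inspect what the query engine retrieved and synthesized.
print(f"Retrieved {len(response.source_nodes)} source nodes.")
print(repr(response.response))  # None here is what prints as "Empty Response"

If the direct completion is also empty, the problem is between llama_index and the Ollama server (endpoint, model name, or timeout) rather than the index; if it returns text but zero source nodes were retrieved, the retrieval step is where to look.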
