
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: required key [messages] not found, please reformat your input and try again.

Below is the code snippet:

def get_response(llm, vectorstore, question):
    ## create prompt / template
    prompt_template = """
    Human: Please use the given context to provide concise answer to the question.
    If you don't know the answer, just say that you don't know, don't try to make up an answer.
    <context>
    {context}
    </context>
    Question: {question}
    Assistant:
    """
    # Create the PromptTemplate instance
    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "question"]
    )
    
    # Use the vectorstore retriever to fetch the relevant documents for the question
    retriever = vectorstore.as_retriever(
        search_type="similarity", search_kwargs={"k": 5}
    )
    
    # Create the RetrievalQA instance
    qa = RetrievalQA.from_chain_type(
        llm=llm,
        chain_type="stuff",
        retriever=retriever,
        return_source_documents=True,
        chain_type_kwargs={"prompt": PROMPT}  # Pass the custom prompt template
    )
    
    # Retrieve relevant documents based on the question
    context = retriever.get_relevant_documents(question)
    
    # Use the qa object to get a response with the context and question
    response = qa({"context": context, "question": question})
    
    # Return the generated answer
    return response['result']


I have tried many things, such as reformatting the inputs, updating the library versions, and verifying that all model API keys are correct, but I am still not getting a response from the retriever (from the vector store). Please help me resolve this issue.

asked Mar 30 at 18:53 by DIVYANSH TRIVEDI

1 Answer


In simple terms, the stack trace shows that your request body is not formatted correctly.

Specifically, it is missing the "messages" key. Different foundation models expect different request formats, so it is important to match the format required by your chosen model. Anthropic Claude 3 models on Bedrock, for example, only accept the Messages API request format and reject the legacy text-completion ("prompt") format with exactly this ValidationException.
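To make the difference concrete, here is a sketch of the two request shapes for Anthropic models on Bedrock. The prompt text is illustrative; the second shape follows the Bedrock-hosted Anthropic Claude Messages API, whose "messages" key is the one the validator reports as missing:

```python
import json

# Legacy Text Completions body: accepted by Claude 2.x text models only.
# Sending this shape to a Claude 3 model yields the ValidationException above.
legacy_body = {
    "prompt": "\n\nHuman: What is RAG?\n\nAssistant:",
    "max_tokens_to_sample": 256,
}

# Messages API body: required by Claude 3 models. Note the "messages" key.
messages_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "What is RAG?"}]}
    ],
}

# Either body would then be serialized and sent, e.g.:
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example id
#       body=json.dumps(messages_body),
#   )
```

A library such as LangChain builds one of these bodies for you, so getting this error usually means the wrapper class in use emits the legacy shape against a model that demands the Messages shape.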

You don't specify which foundation model you're using, but you can view examples of the request formats required for each model here: Bedrock Runtime Code Examples.
