We are facing performance issues with a customized LLM built for multiple-choice question answering in the medical field using Google Cloud Platform (GCP). Here’s what we did:
RAG Setup: We used ~13 textbooks as reference material, split them into chunks, and stored them in a datastore.
Querying: We utilized the Gemini Vertex AI Agent Builder to query the data store and retrieve inference snippets.
The performance is quite inconsistent. When we inspect the query the agent generates, it often fails to capture the specifics of the question. For example:
Question: A man has a 2-day history of ringing in his ears. Pure tone audiometry shows sensorineural hearing loss of 45 dB. The expected beneficial effect of the drug causing this symptom is most likely due to which of the following actions?
The agent's query turns into this:
{
"query": "What would cause the symptoms in the given context?"
}
This overly generic query prevents the agent from retrieving meaningful or specific information.
So our problem is:
1. How can we improve query generation so the agent better understands the context of the question?
2. Are there specific tweaks we can apply to the Gemini agent or our RAG setup to enhance performance?
Any suggestions or insights would be greatly appreciated!
Our prompt:
- Steps
- 1. **Understand the Question**: Carefully read and comprehend the medical question presented.
- 2. **Evaluate Options**: For each of the four options (A, B, C, D), assess its relevance and accuracy in relation to the question.
- 3. **Search Knowledge Base**: Extract the key information from the question, then invoke ${TOOL:searcher} against the knowledge base to find information relevant to the question and the options.
- 4. **Answer and Reasoning**:
- If the knowledge base contains relevant information, use it to reason through the question and choose the most suitable option based on that reasoning.
- If the knowledge base does not contain relevant information, answer the question using general medical knowledge.
- Output Format
- A final conclusion indicating the chosen option as 'A', 'B', 'C', or 'D' with a concise reasoning section explaining the thought process.
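For illustration, this is the kind of specific query we would like the agent to emit. It can even be approximated deterministically from the question text: a hedged sketch, not our actual tool code, where the `build_search_query` helper and the stopword list are invented for this example:

```python
# Illustrative sketch: instead of letting the agent paraphrase the question
# into something generic, build the retrieval query directly from the
# question's domain-bearing terms plus the answer options.
import re

# Small hand-picked stopword list for this example only.
STOPWORDS = {
    "an", "the", "of", "is", "in", "to", "has", "with", "which",
    "following", "most", "likely", "due", "this", "would", "what",
}

def build_search_query(question: str, options: list[str]) -> str:
    """Keep specific terms from the question and append the options,
    so the datastore query stays concrete instead of generic."""
    tokens = re.findall(r"[A-Za-z][A-Za-z-]+", question.lower())
    # Drop stopwords and deduplicate while preserving order.
    seen, kept = set(), []
    for t in tokens:
        if t not in STOPWORDS and t not in seen:
            seen.add(t)
            kept.append(t)
    return " ".join(kept + [o.lower() for o in options])

q = ("A man has a 2-day history of ringing in his ears. Pure tone audiometry "
     "shows sensorineural hearing loss of 45 dB. The expected beneficial "
     "effect of the drug causing this symptom is most likely due to which of "
     "the following actions?")
print(build_search_query(q, ["aspirin", "furosemide"]))
```

A query like this keeps "sensorineural hearing loss", "audiometry", and the drug options intact, which is exactly what the agent's paraphrase ("What would cause the symptoms in the given context?") throws away.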
asked Nov 22, 2024 at 18:29 by Xinyi (edited Dec 20, 2024 at 12:55 by VLAZ)
1 Answer
Building AI agents with Vertex AI Agent Builder using the step-by-step guide, I was able to create an agent that helps customers answer travel-related queries. I also attached Datastores; for the case where a location does not exist, the agent's response was "I'm sorry, I can't provide information about Wakanda. It is a fictional country from the Marvel universe."
While this is factually correct, rather than simply stating "I can't provide information" and ending the conversation, it would be more helpful if the agent suggested similar places. That approach could lead users to actually book a trip through the agent.
For the agent to recommend similar places, you can provide more information to it through Datastores. A Datastore acts as an additional knowledge base the agent falls back on when it cannot answer a user's question from its built-in knowledge.
In your case, you can change the agent's Goal and Instructions, as well as its tools, so that the Data store holds your medical MCQ reference material and the instructions force the agent to search with the specifics of each question.
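One concrete way to tighten the instructions is to add a dedicated query-rewriting step before retrieval, with a prompt template that forbids generic paraphrases. A hedged sketch of such a template (the `make_rewrite_prompt` helper is invented here; `call_model` would be whatever Gemini invocation your agent uses):

```python
# Sketch of a query-rewriting prompt for the retrieval step.
# The template instructs the model to keep clinical specifics rather than
# collapsing them into phrases like "the symptoms" or "the drug".

REWRITE_PROMPT = """Rewrite the following multiple-choice medical question as a
search query for a textbook index. Keep every clinical finding, measurement,
drug name, and mechanism mentioned. Do not generalize specifics away into
phrases like "the symptoms" or "the given context".

Question: {question}
Options: {options}

Search query:"""

def make_rewrite_prompt(question: str, options: list[str]) -> str:
    """Fill the rewriting template with the MCQ text and its options."""
    return REWRITE_PROMPT.format(question=question, options=", ".join(options))

print(make_rewrite_prompt(
    "A man has tinnitus after starting a drug. Which mechanism explains "
    "the drug's beneficial effect?",
    ["COX inhibition", "loop diuresis"],
))
```

The rewritten query from this step, rather than the raw question, is then passed to the Datastore tool, so retrieval runs on "tinnitus", drug names, and mechanisms instead of a generic paraphrase.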