
I’ve been using Ollama with the Mistral model in our Retrieval-Augmented Generation (RAG) setup for the past three months. It worked well overall, but it occasionally struggled with arithmetic calculations.

Recently, we implemented Mistral chaining with Mathstral, and after installation we initially achieved accurate and reliable results across all prompts in our RAG. However, after 24 hours of testing, the model began hallucinating, failing to extract accurate information, and producing incorrect arithmetic calculations. Could anyone clarify whether this is a known issue with Mathstral or something new? Additionally, please provide guidance on how we can resolve this problem.

Thank you!

Tried: Implemented Mistral chaining with Mathstral, and after installation, we initially achieved accurate and reliable results across all prompts in our RAG. However, after 24 hours of testing, the model began hallucinating, failing to extract accurate information, and producing incorrect arithmetic calculations.
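For reference, here is a minimal sketch of the kind of Mistral/Mathstral chaining described above, using the official `ollama` Python client. The routing heuristic (`choose_model`) and the prompt wording are my own assumptions, not the asker's actual code; the model tags `mistral` and `mathstral` assume the names in the Ollama model library.

```python
import re

# Hypothetical router: send arithmetic-style questions to Mathstral,
# everything else to Mistral. The regex is a simple illustrative heuristic.
MATH_PATTERN = re.compile(
    r"\d\s*[-+*/%]\s*\d|\b(sum|average|total|calculate|percent)\b",
    re.IGNORECASE,
)

def choose_model(question: str) -> str:
    """Return 'mathstral' for arithmetic-looking prompts, else 'mistral'."""
    return "mathstral" if MATH_PATTERN.search(question) else "mistral"

def answer(question: str, context: str) -> str:
    """Answer a RAG question, routing to the model chosen above.

    Requires a running Ollama server with both models pulled
    (e.g. `ollama pull mistral` / `ollama pull mathstral`).
    """
    import ollama  # pip install ollama

    resp = ollama.chat(
        model=choose_model(question),
        messages=[
            # Constrain answers to the retrieved context to limit hallucination.
            {"role": "system", "content": f"Answer only from this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp["message"]["content"]
```

If results degrade over time as described, it is worth checking whether the retrieved context itself changed, rather than assuming the model weights are at fault; routing each query fresh (as above) avoids accumulating a long shared chat history.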

Expecting: The model should extract the correct data from the context, use it to answer the prompts or questions asked, and perform correct mathematical calculations.


Tags: python-3.x · Issue with Mistral and Mathstral Integration in RAG · Stack Overflow