I have Ollama version 0.5.13 installed on my university's HPC cluster.
Because I do not have sudo access, I use a custom script that runs ollama for me. I am reproducing it below:
# Set the custom libraries
export LD_PRELOAD=/home/******/conda_pkgs/libstdcxx-ng-11.2.0-h1234567_1/lib/libstd$
export LD_LIBRARY_PATH=/home/******/conda_pkgs/libgcc-ng-11.2.0-h1234567_1/lib:/hom$
# Launch ollama with your custom loader
/home/******/glibc-2.27_install/lib/ld-linux-x86-64.so.2 \
--library-path "$LD_LIBRARY_PATH" \
/home/ak2530/bin/ollama.bin "$@"
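Before assuming the runner itself is missing, I have been trying to confirm that the binary resolves its shared libraries under the custom loader at all. The lines below are only a rough diagnostic sketch reusing the loader and library path from the wrapper above; the masked and truncated paths are placeholders from that script, not verified locations.
# Sketch: ask the custom dynamic loader to list the shared libraries
# ollama.bin would load, without actually running it. Paths are the
# (masked/truncated) values from the wrapper above and are assumptions.
/home/******/glibc-2.27_install/lib/ld-linux-x86-64.so.2 \
    --library-path "$LD_LIBRARY_PATH" \
    --list /home/ak2530/bin/ollama.bin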
However, after running ollama serve in one terminal, when I try to run ollama run deepseek-coder in another terminal, I get this error:
Error: llama runner process has terminated: exit status 127
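As far as I know, exit status 127 is the conventional "command not found" status, which would also cover a missing dynamic loader or an unresolved shared library when the runner subprocess is spawned. A quick illustration, not specific to Ollama:
# Sketch: any shell reports 127 when the command it tries to exec does not exist
sh -c 'this-command-does-not-exist'   # fails to launch
echo $?                               # prints 127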
From what I can understand, the runner for Ollama cannot be found. I have tried searching for files named "runner" in my home directory, but without success. Moreover, there should apparently be JSON files in ~/.ollama/models/manifests, but that folder is empty. How do I find this runner file for Ollama and get it to run the models?
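In case it is useful, this is roughly how I have been checking the model store and turning on verbose logging. OLLAMA_MODELS and OLLAMA_DEBUG are documented Ollama environment variables; the path below is just the default location, so this is a sketch of what I ran rather than my exact setup.
# Sketch, assuming the default model store location
export OLLAMA_MODELS="$HOME/.ollama/models"   # where ollama serve keeps pulled models
export OLLAMA_DEBUG=1                         # verbose server logs before restarting ollama serve
ls "$OLLAMA_MODELS/manifests"                 # should contain per-model JSON manifests after a pull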