
I have Ollama version 0.5.13 installed on my university's HPC cluster.

Because I lack sudo access, I use a custom wrapper script to run Ollama. I am reproducing it below:

# Set the custom libraries
export LD_PRELOAD=/home/******/conda_pkgs/libstdcxx-ng-11.2.0-h1234567_1/lib/libstd$
export LD_LIBRARY_PATH=/home/******/conda_pkgs/libgcc-ng-11.2.0-h1234567_1/lib:/hom$

# Launch ollama with your custom loader
/home/******/glibc-2.27_install/lib/ld-linux-x86-64.so.2 \
  --library-path "$LD_LIBRARY_PATH" \
  /home/ak2530/bin/ollama.bin "$@"

However, after running ollama serve in one terminal, when I try to run ollama run deepseek-coder in another terminal, I get the error:

Error: llama runner process has terminated: exit status 127

From what I understand, the runner process for Ollama cannot be found. I have searched for files named "runner" in my home directory, without success. Moreover, there should apparently be JSON files in ~/.ollama/models/manifests, but that folder is empty. How do I find this runner file for Ollama so that it can run the models?
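For context on what the error code itself signals: a minimal sketch, assuming a POSIX-compatible shell, showing that exit status 127 is the standard convention for "command not found" (or a binary whose loader/shared libraries cannot be resolved). This suggests the spawned runner subprocess never actually executes, rather than crashing mid-run:

```shell
# Exit status 127 is the POSIX shell convention for "command not found",
# which also covers a binary whose interpreter or shared libraries fail
# to load. The path below is deliberately nonexistent for illustration.
/definitely/not/a/real/binary 2>/dev/null
status=$?
echo "exit status: $status"
```

Since the wrapper launches the main binary through a custom ld-linux loader, a subprocess spawned by ollama serve would not inherit that loader invocation, which may explain why the runner fails with 127 while the server itself starts fine.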

Tags: large-language-model | Trouble finding runner for Ollama 0.5.13 | Stack Overflow