I'm trying to get an Ollama image to run on Docker using my AMD GPU for inference. The machine I have available runs Windows and has an RX 7900 XTX. I have installed Ollama and the HIP SDK driver for Windows with the recommended video driver, and I can run LLM models on it locally.
But the project also requires other people to be able to code their own agents and use the LLM on the main machine as the backend, which is why I need Docker.
All the documentation I can find for this use case covers either AMD on Linux with ROCm (which I believe is the same as, or comes with, the HIP SDK) or NVIDIA on Windows. Every GitHub repo, even AMD's, only has commands for Linux. So is it even possible to do this on Windows (with WSL)?
FYI: I'm a complete beginner with both Docker and Linux-based systems, so I'm at a loss.
If you know the steps to configure this on either system, please post them.
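For context, the end goal is just that the other machines can reach the Ollama HTTP API on the main machine over the network, something like the sketch below (192.168.1.50 is a placeholder for the main machine's IP, and llama3 stands in for whatever model is actually pulled):

# Hypothetical client call from a teammate's PC to the main machine.
# 192.168.1.50 and llama3 are placeholders, not values from my setup.
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'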
These are the commands I've tried so far and the errors they produce:
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name testellama ollama/ollama
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: WSL environment detected but no adapters were found: unknown.
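As far as I can tell, --gpus=all goes through the NVIDIA container toolkit, which would explain why it complains about nvidia-container-cli and missing adapters on an AMD card. The Linux-only examples I keep finding use device passthrough with the ROCm image instead, roughly like this (taken from the Linux instructions, so I'm not sure any of it applies under WSL):

# Linux-style ROCm passthrough as shown in the Ollama docs; /dev/kfd and /dev/dri
# are device nodes on a native Linux host, which seems to be the whole problem here.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm

The second attempt below was my try at adapting that pattern: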
docker run -it --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --group-add video rocm/rocm-terminal -v ollama:/root/.ollama -p 11434:11434 --name ConteinerOllama ollama/ollama:rocm
This gives me a device not found error.
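I'm guessing the problem is that those device nodes simply don't exist under WSL. Something like this inside the WSL distro should show whether they are there at all (just a sanity check I put together, not from any documentation):

# Sanity check: list the device nodes the --device flags refer to.
# If /dev/kfd or /dev/dri is missing inside WSL, a device-not-found
# error from the container would follow.
ls -l /dev/kfd /dev/dri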