
I'm trying to get an Ollama image to run on Docker using my AMD GPU for inference. The machine I have available runs Windows and has an RX 7900 XTX. I have installed Ollama and the HIP SDK driver for Windows with the recommended video driver, and I can run LLM models on it locally.
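For reference, this is how I confirmed the native Windows install works (llama3 is just an example; any model you've pulled locally should do):

ollama list
ollama run llama3 "Say hello"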

But the project requires other people to also be able to code their agents and use the LLM on the main machine as the backend, which requires Docker.
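The plan is that their agents would just call the Ollama HTTP API on this machine, something along these lines (11434 is Ollama's default port; the host IP and model name are placeholders):

curl http://<main-machine-ip>:11434/api/generate -d '{"model": "llama3", "prompt": "Hello", "stream": false}'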

Every piece of documentation for this use case covers either AMD on Linux with ROCm (which I believe is the same as, or comes with, the HIP SDK) or NVIDIA on Windows. Every GitHub repo, even AMD's own, only has commands for Linux. So is it even possible to do this on Windows (with WSL)?

FYI: I'm a complete beginner with both Docker and Linux-based systems, so I'm at a loss.

If you know the steps to configure it on either setup, please post them.

These are the commands I've tried so far, and the errors:

docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name testellama ollama/ollama

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: WSL environment detected but no adapters were found: unknown.
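For comparison, the AMD command that the Linux guides (including Ollama's own Docker documentation) give looks like this; as far as I can tell it assumes the host actually exposes the /dev/kfd and /dev/dri device nodes:

docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm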
docker run -it --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --group-add video -v ollama:/root/.ollama -p 11434:11434 --name ConteinerOllama ollama/ollama:rocm

This one gives me a "device not found" error.
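If it helps with diagnosing, here is a quick way to check whether those device nodes even exist inside the WSL distro (from what I've read, WSL presents the GPU as /dev/dxg rather than /dev/kfd, which may be why the mapping fails):

ls -l /dev/kfd /dev/dri /dev/dxg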
