I want to set up TensorFlow and PyTorch in one virtual environment with the same CUDA version. However, I cannot find a CUDA version that supports both: for TensorFlow 2.10 I selected CUDA 11.2, but I didn't find that CUDA version in the list of versions supported by PyTorch; I could only find CUDA 11.1 in the PyTorch list. Detailed information is listed below.
To find the CUDA version for TensorFlow: https://www.tensorflow.org/install/source_windows#tested_build_configurations
To find the CUDA version for PyTorch: https://elenacliu-pytorch-cuda-driver.streamlit.app/
Will there be any problems if I install two different CUDA versions when I want to run code on the GPU? For example, after I create a virtual environment with "conda create --name myenv python=3.10", I want to run TensorFlow code for project 1 and PyTorch code for project 2.
Do I need to modify the CUDA_PATH system variable every time before I run the code, i.e., set CUDA_PATH to CUDA 11.1 when I need to use PyTorch, and set CUDA_PATH to CUDA 11.2 when I need to use TensorFlow?
I see there is also the option of installing CUDA 11.0, which is compatible with TF 2.4 and PyTorch 1.7. But there is a problem: it does not support CUDA compute capability sm_86. Would that mean losing access to newer features?
1 Answer
There are indeed no pre-built PyTorch binaries for CUDA 11.2. If you really want to stay on that CUDA version, you have two choices, I think:
- Use the PyTorch binaries compiled with CUDA 11.1, which should work just fine (see the sketch after this list)
- Build PyTorch from source, as described here
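A minimal sketch of the first option, written as a generic shell session. The version numbers and the torch_stable.html index URL are examples based on PyTorch's previous-versions instructions, so check them against your own setup before copying; also note that the older PyTorch wheels built against CUDA 11.1 only ship for Python 3.9 and below, so the environment here uses 3.9 instead of the 3.10 from the question:

    # one conda env for both frameworks (Python 3.9: the cu111 PyTorch wheels
    # do not exist for Python 3.10)
    conda create --name myenv python=3.9
    conda activate myenv

    # TensorFlow 2.10 -- on Windows it still expects a system-wide CUDA 11.2
    # + cuDNN 8.1 install, per the tested-build-configurations page above
    pip install tensorflow==2.10.0

    # PyTorch built against CUDA 11.1; the wheel bundles its own CUDA runtime
    # libraries, so it does not rely on CUDA_PATH
    pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html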
I'm basically repeating what is said in this PyTorch thread; you can read it for more details.
I would not try to keep multiple versions of CUDA around and manually "hot-swap" them by tinkering with the CUDA paths. (Opinion here.) From experience, it can work, but it is also very error-prone and will eventually lead to problems.
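If you want to convince yourself that the two runtimes coexist without touching CUDA_PATH, a quick check from inside the activated environment (assuming an install along the lines of the sketch above) could look like this:

    # which CUDA the installed torch wheel was built against, and whether it sees the GPU
    python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

    # whether TensorFlow finds the GPU through the system CUDA 11.2 / cuDNN install
    python -c "import tensorflow as tf; print(tf.__version__, tf.config.list_physical_devices('GPU'))"

If torch.cuda.is_available() returns True and TensorFlow lists a GPU device, the two frameworks are each using their own CUDA libraries side by side.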