I'm facing an issue with num_workers while training my model using PyTorch.

  1. If I set num_workers = 0, the training starts, but the model runs on the CPU instead of the GPU: although CUDA is active, GPU utilization stays at 0%.
  2. If I set num_workers to a value greater than 0 (e.g., 1, 2, 3, 4...), the training gets stuck during the loading stage. (A simplified sketch of the kind of setup I mean follows this list.)
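
To make the symptoms easier to follow, here is a stripped-down sketch of the kind of training loop I mean. ToyDataset and the tiny nn.Linear model are placeholders rather than my actual code; the point is only where the device placement and the num_workers argument come in.

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Placeholder dataset -- stands in for my real data pipeline.
class ToyDataset(Dataset):
    def __init__(self, n=1024):
        self.x = torch.randn(n, 10)
        self.y = torch.randint(0, 2, (n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(10, 2).to(device)          # placeholder model, moved to the GPU
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# num_workers=0 -> training runs, but GPU utilization stays at 0%
# num_workers>0 -> hangs at the data-loading stage (Windows + Jupyter)
loader = DataLoader(ToyDataset(), batch_size=64, shuffle=True, num_workers=0)

for x, y in loader:
    x, y = x.to(device), y.to(device)        # batches moved to the same device
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```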

Here are the relevant details:

  1. I am using PyTorch with CUDA enabled (a quick check of what I mean by that is shown after this list).
  2. My system has a GPU (with CUDA support), but the GPU is not utilized when num_workers = 0.
  3. When I increase num_workers, the training process gets stuck right at the beginning.
  4. Windows OS
  5. Jupyter notebook
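
For context, this is the sort of check I have in mind when I say CUDA is active (a hypothetical snippet, not copied from my notebook): PyTorch sees the GPU, yet utilization never moves off 0%.

```python
import torch

print(torch.__version__)                   # PyTorch build in use
print(torch.cuda.is_available())           # True on my machine
print(torch.version.cuda)                  # CUDA version the build was compiled against
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # name of the detected GPU
```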

Has anyone experienced this before? Could it be an issue with how the DataLoader is interacting with the GPU? Any suggestions on how to fix this?

Thanks in advance!

Expecting a solution
