Memory keeps increasing in PyTorch training loop, even with empty_cache()

I have a PyTorch training script, and I'm getting an out-of-memory error after a few epochs even though I'm calling torch.cuda.empty_cache(). The GPU memory just keeps going up and I can't figure out why.

Here's basically what I'm doing:

import torch
from torch.utils.data import Dataset, DataLoader
import numpy as np

class CustomDataset(Dataset):
    def __init__(self, data_paths):
        self.data_paths = data_paths
    
    def __len__(self):
        return len(self.data_paths)
    
    def __getitem__(self, idx):
        image = np.load(self.data_paths[idx]['image']).astype(np.float32)
        label = np.load(self.data_paths[idx]['label']).astype(np.int64)
        
        image = torch.tensor(image).cuda()
        label = torch.tensor(label).cuda()
        
        return image, label

data_paths = [{'image': f'img_{i}.npy', 'label': f'label_{i}.npy'} for i in range(10000)]
dataset = CustomDataset(data_paths)
dataloader = DataLoader(dataset, batch_size=32, num_workers=4, pin_memory=True)

for epoch in range(10):
    for batch in dataloader:
        images, labels = batch  
        
        output = images.mean()  
        loss = output.sum()
        loss.backward()
        
        del images, labels, loss, output
        torch.cuda.empty_cache()

Even after deleting everything and calling empty_cache(), the VRAM just keeps going up and I don't understand why. This doesn't happen on CPU. If I run nvidia-smi, the memory usage increases after every batch until it crashes.

I tried:

  • Calling del on everything after every batch
  • Setting num_workers=0 (didn't help)
  • Using .detach() before moving tensors to GPU
  • Checked whether the issue is in my model, but even without the model, just loading the data already makes the memory increase

Anyone seen this before? Is there something about DataLoader and cuda() that could be causing this?

Would appreciate any ideas. I'm out of things to try.

  • Comment from Karl (Feb 25 at 0:55): What version of pytorch are you using? With current versions your dataset code throws the error RuntimeError: cannot pin 'torch.cuda.FloatTensor' only dense CPU tensors can be pinned

1 Answer

The issue is that you're moving tensors to CUDA inside __getitem__(), which doesn't mix well with a multi-worker DataLoader. When num_workers > 0, PyTorch spawns separate worker processes to load data, and each worker that touches CUDA initializes its own CUDA context and allocates its own GPU memory. Those allocations aren't released the way you'd expect, so usage keeps climbing. (On recent PyTorch versions this setup won't even run: pin_memory=True with CUDA tensors coming out of the dataset raises RuntimeError: cannot pin 'torch.cuda.FloatTensor', as the comment above points out.)

A better approach is to keep everything on the CPU inside __getitem__() and only move tensors to the GPU in the training loop. Change __getitem__() so it returns CPU tensors:

def __getitem__(self, idx):
    image = np.load(self.data_paths[idx]['image']).astype(np.float32)
    label = np.load(self.data_paths[idx]['label']).astype(np.int64)

    # Return CPU tensors only; torch.from_numpy avoids an extra copy,
    # and the GPU transfer happens later in the main process.
    return torch.from_numpy(image), torch.from_numpy(label)

And move them to CUDA in the training loop:

for batch in dataloader:
    images, labels = batch
    # With pin_memory=True in the DataLoader, non_blocking=True lets the
    # host-to-device copy overlap with computation.
    images = images.cuda(non_blocking=True)
    labels = labels.cuda(non_blocking=True)

This should already solve most of the issue.
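
Putting the two pieces together, here's a minimal, self-contained sketch of the fixed pattern. The dummy in-memory arrays are only an assumption standing in for the img_*.npy / label_*.npy files from your question, so the snippet runs without any files on disk:

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class CPUOnlyDataset(Dataset):
    def __init__(self, n_samples=256, shape=(3, 64, 64)):
        # Dummy data in place of np.load(); everything stays on the CPU here.
        self.images = [np.random.rand(*shape).astype(np.float32) for _ in range(n_samples)]
        self.labels = [np.random.randint(0, 10, size=(1,)).astype(np.int64) for _ in range(n_samples)]

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # Return CPU tensors; DataLoader workers never touch CUDA.
        return torch.from_numpy(self.images[idx]), torch.from_numpy(self.labels[idx])

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
loader = DataLoader(CPUOnlyDataset(), batch_size=32, num_workers=4, pin_memory=True)

for images, labels in loader:
    # The transfer happens once per batch, in the main process.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    # ... forward pass, loss, backward, optimizer step ...

(If you're on Windows or macOS, wrap the loader setup and the loop in an if __name__ == "__main__": guard, since workers are started with spawn there.)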

If the memory still increases, try setting persistent_workers=True in the DataLoader; it keeps the worker processes alive across epochs instead of tearing them down and re-spawning them every epoch:

dataloader = DataLoader(dataset, batch_size=32, num_workers=4,
                        pin_memory=True, persistent_workers=True)

If that doesn't work, test with num_workers=0. If the leak stops, it's almost certainly the worker processes holding onto tensors. A quick way to compare the two settings is to watch PyTorch's allocator counters, as in the sketch below.
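
This diagnostic loop is a hypothetical sketch (the run_check name and the printing interval are my own); it assumes the CPU-only dataset from above and simply reports how much GPU memory is in use as batches stream through. memory_allocated() is what live tensors occupy, while memory_reserved() is what the caching allocator holds and is roughly what nvidia-smi shows. With no leak, both should plateau after the first few batches:

import torch
from torch.utils.data import DataLoader

def run_check(dataset, num_workers):
    loader = DataLoader(dataset, batch_size=32, num_workers=num_workers, pin_memory=True)
    for step, (images, labels) in enumerate(loader):
        images = images.cuda(non_blocking=True)
        labels = labels.cuda(non_blocking=True)
        if step % 50 == 0:
            print(f"workers={num_workers} step={step} "
                  f"allocated={torch.cuda.memory_allocated() / 1e6:.1f} MB "
                  f"reserved={torch.cuda.memory_reserved() / 1e6:.1f} MB")

run_check(dataset, num_workers=0)
run_check(dataset, num_workers=4)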

As a last resort, manually force garbage collection after each batch:

import gc
gc.collect()                # free unreachable Python objects, including reference cycles
torch.cuda.empty_cache()    # release cached blocks back to the driver so nvidia-smi reflects actual usage

But the core problem here is that CUDA tensors shouldn't be created inside __getitem__(), especially with multiprocessing. Move the transfer into the main training loop, and that should fix the issue.
