
How To Run Pytorch On Gpu By Default?

I want to run PyTorch using CUDA. I call model.cuda() and create tensors with torch.cuda.LongTensor(). Do I have to create tensors with .cuda explicitly if I have already called model.cuda()?

Solution 1:

I do not think you can specify that you want to use CUDA tensors by default. However, you should have a look at the official PyTorch examples.

In the imagenet training/testing script, they use a wrapper over the model called DataParallel. This wrapper has two advantages:

  • it handles the data parallelism over multiple GPUs
  • it handles the casting of CPU tensors to CUDA tensors

As you can see at line 164 of that script, you don't have to manually cast your inputs/targets to CUDA.
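The wrapping described above can be sketched as follows. This is a minimal illustration, not the ImageNet script itself: the tiny nn.Linear model and the tensor sizes are hypothetical stand-ins, and the CUDA step is guarded so the sketch also runs on CPU-only machines.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model (the ImageNet example uses a real CNN).
model = nn.Linear(10, 2)

if torch.cuda.is_available():
    # DataParallel replicates the model across visible GPUs and scatters
    # each input batch to them; CPU inputs are moved for you.
    model = nn.DataParallel(model).cuda()

inputs = torch.randn(4, 10)   # plain CPU tensors, no explicit .cuda()
outputs = model(inputs)
print(outputs.shape)
```

On a single-GPU or CPU-only machine the forward pass behaves identically; DataParallel only pays off when several GPUs are visible.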

Note that if you have multiple GPUs and want to use only one, launch any Python/PyTorch script with the CUDA_VISIBLE_DEVICES prefix. For instance: CUDA_VISIBLE_DEVICES=0 python main.py.

Solution 2:

Yes. You can set the default tensor type to cuda with:

torch.set_default_tensor_type('torch.cuda.FloatTensor')
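To show the effect of that call, here is a minimal sketch. The switch is guarded with torch.cuda.is_available() (an assumption added here so the snippet also runs on machines without a GPU); after the call, newly created float tensors are allocated on the GPU without any explicit .cuda().

```python
import torch

# Switch the default tensor type to CUDA floats when a GPU is present,
# so subsequent tensor constructors allocate on the GPU by default.
if torch.cuda.is_available():
    torch.set_default_tensor_type('torch.cuda.FloatTensor')

x = torch.ones(3)        # on the GPU if the default was switched
print(x.device.type)     # 'cuda' with a GPU available, 'cpu' otherwise
```

Note that this changes the default for float tensors only; integer tensors and tensors created with an explicit device argument are unaffected.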

Do I have to create tensors using .cuda explicitly if I have used model.cuda()?

Yes. You need to move not only your model's [parameter] tensors to CUDA, but also the tensors holding the data features and targets (and any other tensors the model uses).
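The point above can be sketched as follows. The model, sizes, and loss are hypothetical, and the device is chosen dynamically (an assumption added here) so the snippet runs with or without a GPU; the key line is that features and targets get the same .to(device) treatment as the model.

```python
import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)            # model parameters moved
features = torch.randn(8, 4).to(device)       # data must be moved too
targets = torch.randint(0, 2, (8,)).to(device)

# Forward pass and loss both run on the chosen device.
loss = nn.CrossEntropyLoss()(model(features), targets)
print(loss.item())
```

If the features or targets are left on the CPU while the model lives on the GPU, the forward pass raises a device-mismatch error, which is exactly the situation this answer is warning about.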
