pytorch-multi-gpu

model = nn.DataParallel(model.cuda(1), device_ids=[1,2,3,4,5])  # parameters must already sit on device_ids[0] (GPU 1)

criteria = nn.Loss()  # placeholder for a concrete loss, e.g. nn.CrossEntropyLoss()
# where the criterion lives barely matters:
#   i.   criteria.cuda(1): 20G-21G of GPU memory
#   ii.  criteria.cuda():  18.5G-12.7G
#   iii. no .cuda() call:  16.5G-12.7G
# all three take almost the same time per batch

data = data.cuda(1)

label = label.cuda(1)

out = model(data)
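
Putting the pieces together, a minimal sketch of the whole step (the model, loss, and tensor shapes are hypothetical placeholders; Variable matches the 0.x-era API used throughout this post):

import torch
import torch.nn as nn
from torch.autograd import Variable

# hypothetical toy model standing in for the real one
model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))

# replicas run on GPUs 1-5; parameters must already sit on device_ids[0] (GPU 1)
model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])
criteria = nn.CrossEntropyLoss()  # assumed loss; its placement barely matters (see above)

data = Variable(torch.rand(32, 100)).cuda(1)            # inputs on device_ids[0]
label = Variable(torch.LongTensor(32).zero_()).cuda(1)  # targets on device_ids[0]

out = model(data)  # scattered across GPUs 1-5, gathered back on GPU 1
loss = criteria(out, label)
loss.backward()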

or, equivalently, move the wrapper (which moves the inner module to GPU 1, matching device_ids[0]):

model = nn.DataParallel(model, device_ids=[1,2,3,4,5]).cuda(1)

or, with the default device list (all visible GPUs, so devices[0] = GPU 0):

generator = nn.DataParallel(Generator()).cuda()

z = Variable(torch.rand(batch_size, 100)).cuda()
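
For completeness, a self-contained sketch around that call (this Generator architecture is made up; the real one is in reference 1 below):

import torch
import torch.nn as nn
from torch.autograd import Variable

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        # toy MLP generator: 100-d noise -> 784-d (28x28) image
        self.net = nn.Sequential(
            nn.Linear(100, 256),
            nn.ReLU(),
            nn.Linear(256, 784),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

batch_size = 64
generator = nn.DataParallel(Generator()).cuda()   # default device_ids = all visible GPUs
z = Variable(torch.rand(batch_size, 100)).cuda()  # plain .cuda() -> GPU 0, which is devices[0] here
fake = generator(z)

Since no device_ids are given, devices[0] defaults to GPU 0, so the plain .cuda() calls on the module and the input are consistent with each other.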

note:

the original unwrapped module is still accessible as model.module
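
This matters when saving or loading weights; a sketch (the checkpoint filename is hypothetical):

# save the underlying module rather than the DataParallel wrapper, so the
# checkpoint can later be loaded into a model that is not wrapped at all
torch.save(model.module.state_dict(), 'checkpoint.pth')

# load back through the wrapper
model.module.load_state_dict(torch.load('checkpoint.pth'))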

------------------- errors -------------------

1.

data = data.cuda()

RuntimeError: Assertion `THCTensor_(checkGPU)(state, 4, input, target, output, total_weight)' failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /b/wheel/pytorch-src/torch/lib/THCUNN/generic/SpatialClassNLLCriterion.cu:46
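
Cause: with device_ids=[1,2,3,4,5] the gathered output lives on GPU 1, but a bare .cuda() puts data and label on the default GPU 0, so the criterion sees tensors on two different GPUs. The fix is to move inputs and targets to device_ids[0] explicitly:

data = data.cuda(1)    # device_ids[0], not the default GPU 0
label = label.cuda(1)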

2. 

nn.DataParallel(model.cuda(), device_ids=[1,2,3,4,5])

    result = self.forward(*input, **kwargs)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
    return replicate(module, device_ids)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
    param_copies = Broadcast(devices)(*params)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 18, in forward
    outputs = comm.broadcast_coalesced(inputs, self.target_gpus)
  File "/anaconda3/lib/python3.6/site-packages/torch/cuda/comm.py", line 52, in broadcast_coalesced
    raise RuntimeError('all tensors must be on devices[0]')
RuntimeError: all tensors must be on devices[0]
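
Cause: model.cuda() with no argument puts the parameters on GPU 0, while devices[0] here is GPU 1; DataParallel broadcasts parameters from devices[0], so they must already live there. Fix:

model = nn.DataParallel(model.cuda(1), device_ids=[1, 2, 3, 4, 5])  # .cuda(1) matches device_ids[0]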

3. 

nn.DataParallel(model, device_ids=[1,2,3,4,5])

    out = model(data, train_seqs.index(name))
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
    replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 65, in replicate
    return replicate(module, device_ids)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
    param_copies = Broadcast(devices)(*params)
  File "/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 14, in forward
    raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
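
Cause: the model was never moved to a GPU at all, and Broadcast cannot replicate CPU parameters. Move the module (or the wrapper, which moves it along) to device_ids[0] first:

model = nn.DataParallel(model, device_ids=[1, 2, 3, 4, 5]).cuda(1)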

------------------- references -------------------

1. https://github.com/GunhoChoi/Kind_PyTorch_Tutorial/blob/master/09_GAN_LayerName_MultiGPU/GAN_LayerName_MultiGPU.py
2. http://pytorch.org/docs/master/nn.html#dataparallel