How to use PyTorch multiprocessing?
Question
I'm trying to use python's multiprocessing Pool method in pytorch to process an image. Here's the code:
from multiprocessing import Process, Pool
import torch
from torch.autograd import Variable
import numpy as np
from scipy.ndimage import zoom

def get_pred(args):
    img = args[0]
    scale = args[1]
    scales = args[2]
    img_scale = zoom(img.numpy(),
                     (1., 1., scale, scale),
                     order=1,
                     prefilter=False,
                     mode='nearest')
    # feed input data
    input_img = Variable(torch.from_numpy(img_scale),
                         volatile=True).cuda()
    return input_img

scales = [1, 2, 3, 4, 5]
scale_list = []
for scale in scales:
    scale_list.append([img, scale, scales])
multi_pool = Pool(processes=5)
predictions = multi_pool.map(get_pred, scale_list)
multi_pool.close()
multi_pool.join()
I get this error:

RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method

at this line:

predictions = multi_pool.map(get_pred, scale_list)

Can anyone tell me what I'm doing wrong?
Answer

As stated in the pytorch documentation, the best practice for handling multiprocessing is to use torch.multiprocessing instead of multiprocessing.
Be aware that sharing CUDA tensors between processes is supported only in Python 3, with either spawn or forkserver as the start method.
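For context, the error comes from the default start method: on Linux, Python's multiprocessing forks the parent process, and a forked child inherits CUDA state that CUDA refuses to re-initialize. A minimal, CPU-only sketch (standard library only) showing how to inspect the current start method and obtain a spawn context without touching the process-wide default:

```python
import multiprocessing as mp

# On Linux the default start method is 'fork'; a forked child inherits the
# parent's CUDA state, which CUDA cannot re-initialize -- hence the error.
print(mp.get_start_method())

# get_context returns an isolated context, so code can use 'spawn'
# without changing the global default for the rest of the program.
ctx = mp.get_context('spawn')
print(ctx.get_start_method())  # 'spawn'
```

A pool created from this context (ctx.Pool(...)) behaves like a regular multiprocessing.Pool but starts its workers with spawn.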
Without touching your code, a workaround for the error you got is replacing
from multiprocessing import Process, Pool

with:

from torch.multiprocessing import Pool, Process, set_start_method
try:
    set_start_method('spawn')
except RuntimeError:
    pass
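Putting it together, the asker's script could be reworked roughly as follows. This is a sketch, not the answerer's exact code: the img tensor here is a hypothetical random input (the original is never shown), .cuda() is applied only when a GPU is actually available, and the deprecated Variable(..., volatile=True) is dropped since plain tensors (optionally under torch.no_grad()) serve the same purpose in modern PyTorch:

```python
import torch
from scipy.ndimage import zoom
from torch.multiprocessing import Pool, set_start_method

def get_pred(args):
    img, scale, scales = args
    # resize the (N, C, H, W) tensor along its two spatial dimensions
    img_scale = zoom(img.numpy(),
                     (1., 1., scale, scale),
                     order=1,
                     prefilter=False,
                     mode='nearest')
    input_img = torch.from_numpy(img_scale)
    if torch.cuda.is_available():
        # CUDA work is safe here because the worker was spawned, not forked
        input_img = input_img.cuda()
    return input_img

if __name__ == '__main__':
    # must run before any worker process is created
    try:
        set_start_method('spawn')
    except RuntimeError:
        pass  # start method was already set

    img = torch.randn(1, 3, 8, 8)  # hypothetical input batch
    scales = [1, 2, 3]
    scale_list = [[img, scale, scales] for scale in scales]
    with Pool(processes=3) as multi_pool:
        predictions = multi_pool.map(get_pred, scale_list)
    for pred in predictions:
        print(pred.shape)
```

The if __name__ == '__main__' guard matters with spawn: each worker re-imports the main module, and unguarded top-level code would run again in every child.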