ziming-liu
12/19/2019 - 9:07 AM

PyTorch distributed multi-GPU training: multiple GPUs and multiple concurrent jobs on a single machine

Can you try specifying a different master_addr and master_port in torch.distributed.launch?

CUDA_VISIBLE_DEVICES=${GPU_ID} python -m torch.distributed.launch --nproc_per_node=$NGPUS --master_addr 127.0.0.2 --master_port 29501 tools/train_net.py
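
The reason this works is that the master_addr/master_port pair identifies the rendezvous endpoint each launch binds to, so two jobs started with the same pair on one machine will collide. Below is a minimal sketch of running two independent jobs side by side; the GPU IDs, port numbers, and the tools/train_net.py path are illustrative, and it assumes a 4-GPU machine and a training script that accepts the --local_rank argument that torch.distributed.launch passes to each worker process:

# Job 1: GPUs 0,1 with the default loopback address, port 29501
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_addr 127.0.0.1 --master_port 29501 tools/train_net.py &

# Job 2: GPUs 2,3 with a different address/port so its rendezvous does not collide with job 1
CUDA_VISIBLE_DEVICES=2,3 python -m torch.distributed.launch --nproc_per_node=2 --master_addr 127.0.0.2 --master_port 29502 tools/train_net.py &

Changing only --master_port is usually enough; a second loopback address such as 127.0.0.2 (valid on Linux, where the whole 127.0.0.0/8 range is loopback) is an extra safeguard.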