Saved from https://github.com/facebookresearch/maskrcnn-benchmark/issues/241
Can you try specifying a different `master_addr` and `master_port` in `torch.distributed.launch`?

```shell
CUDA_VISIBLE_DEVICES=${GPU_ID} python -m torch.distributed.launch \
    --nproc_per_node=$NGPUS \
    --master_addr 127.0.0.2 \
    --master_port 29501 \
    tools/train_net.py
```
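If the default port (29500) is taken by another running job, any currently unused port works for `--master_port`. One way to avoid guessing is to ask the OS for a free port before launching; this is a minimal sketch (the `find_free_port` helper is illustrative, not part of PyTorch):

```python
import socket

def find_free_port() -> int:
    # Binding to port 0 makes the OS assign an unused ephemeral port;
    # we read it back and release the socket so the launcher can use it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

if __name__ == "__main__":
    port = find_free_port()
    print(port)  # pass this value to --master_port
```

Note there is a small race between releasing the port and the launcher re-binding it, so this is a convenience, not a guarantee.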