It appears that it is no longer possible to train a network with shared weights across multiple GPUs. This worked in rc3. Was this functionality deliberately sacrificed in the upgrade to NCCL? If so, it's a bit of a shame for us at least, as we can't upgrade past rc3.
./build/tools/caffe train --solver=solver_referencing_net_with_shared_weights.prototxt
If compiled with USE_NCCL, this will trigger "Layer-wise reduce is not supported for nets with shared weights." (from parallel.cpp).
Otherwise, this will fail with "Multi-GPU execution not available - rebuild with USE_NCCL" (from caffe.cpp).
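For context, "shared weights" here means layers that share parameter blobs by giving them the same param name. A minimal, purely illustrative net snippet of that kind (layer, blob, and param names are made up; both bottoms would need the same trailing shape for the shared weight to fit):

layer {
  name: "branch_a"
  type: "InnerProduct"
  bottom: "data_a"
  top: "branch_a"
  param { name: "shared_fc_w" }   # sharing happens via the param name
  param { name: "shared_fc_b" }
  inner_product_param { num_output: 64 weight_filler { type: "xavier" } }
}
layer {
  name: "branch_b"
  type: "InnerProduct"
  bottom: "data_b"
  top: "branch_b"
  param { name: "shared_fc_w" }   # same names -> same underlying blobs
  param { name: "shared_fc_b" }
  inner_product_param { num_output: 64 }
}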
Setting layer_wise_reduce: false
in the solver specification should resolve this. The issue is that, with weight sharing, the order in which gradients are accumulated does not necessarily respect the topological ordering of the layer graph that the parallel implementation relies on to overlap communication with computation. The error is raised to keep you from accidentally computing the wrong gradients.
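For example, a solver prototxt along these lines (the net path is just a placeholder for your existing definition):

net: "net_with_shared_weights.prototxt"
layer_wise_reduce: false   # reduce gradients once, at the end of backward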
Reducing at the end of backward ensures correctness for shared weights at the cost of efficiency. Parallel training can still give a speed-up in this case depending on the architecture.
@cypof, please confirm.
Yes, layer_wise_reduce is only an optimization. It often doesn't make a huge difference, so it's definitely still worth running multi-GPU without it.
Problem sorted, thank you!
@cypof this should likely be documented, for instance in docs/multigpu.md
My problem was that USE_NCCL was not uncommented in Makefile.config.
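For anyone hitting the same thing: NCCL support is enabled by uncommenting the corresponding line in Makefile.config (it ships commented out in Makefile.config.example) and rebuilding Caffe, roughly:

# In Makefile.config, change "# USE_NCCL := 1" to:
USE_NCCL := 1
# then rebuild, e.g.:
# make clean && make all -j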