Caffe: No multi-GPU capability with shared weights

Created on 15 Apr 2017  ·  5 Comments  ·  Source: BVLC/caffe

Issue summary

It appears that it is no longer possible to train a network with shared weights across multiple GPUs. This worked in rc3. Was this functionality deliberately sacrificed in the upgrade to NCCL? If so, it's a bit of a shame for us at least, as it means we can't upgrade past rc3.

Steps to reproduce

./build/tools/caffe train --solver=solver_referencing_net_with_shared_weights.prototxt

If compiled with USE_NCCL, this will trigger "Layer-wise reduce is not supported for nets with shared weights." (from parallel.cpp).

Otherwise, this will fail with "Multi-GPU execution not available - rebuild with USE_NCCL" (from caffe.cpp).
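
For reference, a minimal net fragment with shared weights looks like the following; in Caffe, two layers share a blob when their param entries carry the same name. All layer and blob names here are illustrative, not taken from the original report, and a real training net would also need labels and a loss layer.

name: "shared_weights_example"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 64 } }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "data"
  top: "ip1"
  param { name: "shared_w" }  # weights shared under this name
  param { name: "shared_b" }  # biases shared under this name
  inner_product_param { num_output: 64 }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { name: "shared_w" }  # same name, same underlying blob
  param { name: "shared_b" }
  inner_product_param { num_output: 64 }
}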

Labels: documentation

All 5 comments

Setting layer_wise_reduce: false in the solver specification should resolve this. The issue is that with weight sharing, the order in which gradients are computed does not necessarily respect the topological ordering of the layer graph that the parallel implementation follows to overlap communication with computation. The error is raised to prevent silently computing incorrect gradients.

Reducing once at the end of backward ensures correctness for shared weights at the cost of some efficiency. Parallel training can still give a speed-up in this case, depending on the architecture. A sketch of the solver change is below.
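
For concreteness, here is a minimal solver sketch; every field except layer_wise_reduce is a placeholder, not taken from the original report:

# solver.prototxt -- all values illustrative except layer_wise_reduce
net: "train_val.prototxt"
base_lr: 0.01
lr_policy: "fixed"
max_iter: 10000
solver_mode: GPU
# Reduce gradients once at the end of backward instead of layer by layer;
# this is what makes shared weights safe under multi-GPU training.
layer_wise_reduce: false

Multi-GPU training is then launched as before, e.g. ./build/tools/caffe train --solver=solver.prototxt --gpu=0,1.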

@cypof, please confirm.

Yes, layer_wise_reduce is only an optimization. It often doesn't make a huge difference, so multi-GPU training is definitely still worthwhile without it.

Problem sorted, thank you!

@cypof this should likely be documented, for instance in docs/multigpu.md

My problem was that NCCL was not uncommented in my Makefile.config.
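
For anyone hitting the second error above ("Multi-GPU execution not available"), the corresponding line in Makefile.config (shipped commented out in Makefile.config.example) needs to be uncommented before rebuilding:

# Makefile.config: enable NCCL for multi-GPU training (requires NCCL to be installed)
USE_NCCL := 1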

