Low Precision Decentralized SGD

Overview

As the name suggests, low precision decentralized SGD combines decentralized training and quantized training. It follows the framework of decentralized SGD: each worker does not need to aggregate data globally, but only exchanges data with a few other workers, namely its peers. Thus this algorithm has communication overhead similar to that of decentralized SGD. The latency complexity and bandwidth complexity of low precision decentralized SGD are both $\mathcal{O}(n_{\text{peers}})$, where $n_{\text{peers}}$ is the number of peers. This is consistent with our analysis of decentralized SGD, where we considered the special case in which each worker has only one peer.

With communication compression, low precision decentralized SGD reduces the communication overhead even further. Note that the data exchanged between workers are not the compressed local models themselves, but the compressed differences of the local models between two successive iterations. In this way, low precision decentralized SGD achieves the same convergence rate as decentralized SGD, as well as its full precision centralized counterparts. A detailed proof can be found in this paper.

Benefiting from both decentralization and communication compression, low precision decentralized SGD is particularly useful in scenarios with high communication latency and low network bandwidth.

Algorithm

Assume the number of workers is $n$, and the model parameters on worker $i$ are $\mathbf{x}^{(i)}$, $i \in \{1, 2, \dots, n\}$. Each worker stores model replicas of its connected peers and is able to send data to or receive data from its peers. At each iteration $t$, the algorithm repeats the following steps on each worker $i$ (a minimal code sketch of one iteration is given after the list):

  1. Calculate the stochastic gradient $\mathbf{g}_t^{(i)}$ on worker $i$.
  2. Update the local model using the local stochastic gradient and the weighted average of its connected peers' replicas:
     $$\mathbf{x}_{t+1/2}^{(i)} = \sum_{j \in \text{peers}(i) \cup \{i\}} W_{ij} \, \mathbf{x}_t^{(j)} - \gamma \mathbf{g}_t^{(i)},$$
     where $W_{ij}$ is the averaging weight between workers $i$ and $j$, and $\gamma$ is the learning rate.
  3. Compute the difference $\mathbf{z}_t^{(i)} = \mathbf{x}_{t+1/2}^{(i)} - \mathbf{x}_t^{(i)}$, and quantize it into $Q(\mathbf{z}_t^{(i)})$ with a quantization function $Q(\cdot)$.
  4. Update the local model with the compressed difference:
     $$\mathbf{x}_{t+1}^{(i)} = \mathbf{x}_t^{(i)} + Q(\mathbf{z}_t^{(i)}).$$
  5. Send $Q(\mathbf{z}_t^{(i)})$ to its connected peers, and update its connected peers' replicas with the compressed differences it received:
     $$\mathbf{x}_{t+1}^{(j)} = \mathbf{x}_t^{(j)} + Q(\mathbf{z}_t^{(j)}), \quad \forall j \in \text{peers}(i).$$
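
The sketch below illustrates one iteration of these steps on a single worker. The helpers `quantize`, `dequantize`, `send_to_peers`, and `recv_from_peers`, as well as the `peer_replicas` and `peer_weights` dictionaries, are illustrative placeholders (for example, an 8-bit min-max quantizer like the one described below), not Bagua's internal API; the sketch is meant to clarify the update rule, not to reproduce Bagua's implementation.

```python
def low_precision_decentralized_step(x, grad, peer_replicas, peer_weights, lr,
                                     quantize, dequantize,
                                     send_to_peers, recv_from_peers):
    """One iteration of low precision decentralized SGD on one worker (illustrative sketch).

    x             -- this worker's parameters as a flat tensor
    grad          -- stochastic gradient for the local mini-batch
    peer_replicas -- dict: peer id -> locally stored replica of that peer's parameters
    peer_weights  -- dict: peer id -> averaging weight; the remaining weight goes to x itself
    """
    # Step 2: weighted average of the local model and the peer replicas, then a gradient step.
    self_weight = 1.0 - sum(peer_weights.values())
    x_half = self_weight * x
    for j, w in peer_weights.items():
        x_half = x_half + w * peer_replicas[j]
    x_half = x_half - lr * grad

    # Step 3: quantize the difference between the averaged model and the previous local model.
    z = x_half - x
    z_q = quantize(z)

    # Step 4: apply the *decompressed* difference locally, so this worker and its peers
    # apply exactly the same (quantized) update to this worker's model.
    x_new = x + dequantize(z_q)

    # Step 5: send the compressed difference to the peers, and apply the compressed
    # differences received from them to the stored peer replicas.
    send_to_peers(z_q)
    for j, z_j_q in recv_from_peers().items():
        peer_replicas[j] = peer_replicas[j] + dequantize(z_j_q)

    return x_new
```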

The quantization function $Q(\cdot)$ calculates the minimum and maximum values of its input, splits the range between them into 256 evenly spaced intervals, and then represents each element of the input by an 8-bit integer indicating which interval the original element falls in.
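
A minimal PyTorch sketch of such a min-max 8-bit quantizer is shown below. The names `quantize_8bit` and `dequantize_8bit` are illustrative, not Bagua's internal API, and details such as the exact rounding and interval convention in Bagua's implementation may differ.

```python
import torch

def quantize_8bit(x: torch.Tensor):
    """Map each element of x to one of 256 evenly spaced intervals
    between its minimum and maximum value (illustrative sketch)."""
    lo, hi = x.min(), x.max()
    width = (hi - lo) / 256                       # 256 evenly spaced intervals
    if width == 0:                                # guard against a constant tensor
        width = torch.tensor(1.0, device=x.device)
    levels = torch.clamp(((x - lo) / width).floor(), 0, 255).to(torch.uint8)
    return levels, lo, width

def dequantize_8bit(levels: torch.Tensor, lo: torch.Tensor, width: torch.Tensor) -> torch.Tensor:
    """Reconstruct an approximation of the original tensor,
    mapping each level to the midpoint of its interval."""
    return (levels.to(torch.float32) + 0.5) * width + lo
```

Since each element is transmitted as a single byte plus two scalars per tensor, the communication volume is roughly a quarter of that of 32-bit floating point tensors.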

Each worker stores model replicas of its connected peers. Once the peers of a worker are determined, they should not be changed during the whole training process.

Example usage

A complete example of running low precision decentralized SGD can be found in the Bagua examples with the `--algorithm low_precision_decentralized` command line argument.

You need to initialize the Bagua algorithm with (see the API documentation for further customization):

```python
from bagua.torch_api.algorithms import decentralized
algorithm = decentralized.LowPrecisionDecentralizedAlgorithm()
```

Then decorate your model with:

```python
model = model.with_bagua([optimizer], algorithm)
```
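
For context, here is a minimal sketch of how these two calls typically fit into a Bagua training script. The model, optimizer, data, and training loop are placeholders, and details such as process group initialization and the distributed launcher invocation may differ from your setup; see the Bagua examples for a complete, tested script.

```python
import torch
import bagua.torch_api as bagua
from bagua.torch_api.algorithms import decentralized

def main():
    # Bind this process to its GPU and join the Bagua process group
    # (assumes the script is started with Bagua's distributed launcher).
    torch.cuda.set_device(bagua.get_local_rank())
    bagua.init_process_group()

    # Placeholder model, optimizer, and algorithm; replace with your own.
    model = torch.nn.Linear(128, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    algorithm = decentralized.LowPrecisionDecentralizedAlgorithm()

    # Wrap the model so Bagua handles the low precision decentralized communication.
    model = model.with_bagua([optimizer], algorithm)

    for _ in range(10):
        inputs = torch.randn(32, 128).cuda()
        targets = torch.randint(0, 10, (32,)).cuda()
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()

if __name__ == "__main__":
    main()
```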