BPPSA: Scaling Back-propagation by Parallel Scan Algorithm

Part of Proceedings of Machine Learning and Systems 2 (MLSys 2020)


Authors

Shang Wang, Yifan Bai, Gennady Pekhimenko

Abstract

In an era when the performance of a single compute device plateaus, software must be designed to scale on massively parallel systems for better runtime performance. However, in the context of training deep learning models, the commonly used back-propagation (BP) algorithm imposes a strong sequential dependency on the gradient computation. Under model parallelism, BP has a theoretical step complexity of Θ(n), where n is the number of compute devices into which the model is partitioned, which hinders its scalability in a parallel computing environment.
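This sequential dependency can be made concrete with a small sketch (illustrative only, not the paper's implementation; the function and variable names are placeholders): under model parallelism, the gradient at each stage can only be computed after the gradient of the stage that follows it, so the backward pass takes Θ(n) dependent steps.

```python
# Minimal sketch of BP's sequential dependency (assumed shapes and names).
import numpy as np

def backward_sequential(jacobians_T, grad_loss):
    """jacobians_T: transposed Jacobians of the stages, ordered last-to-first.
    grad_loss: gradient of the loss w.r.t. the final output."""
    grads = []
    g = grad_loss
    for J_T in jacobians_T:   # each step depends on the result of the previous one
        g = J_T @ g           # g_i = J_{i+1}^T @ g_{i+1}
        grads.append(g)
    return grads              # gradients at every stage, computed strictly in order
```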

Scan is a primitive operation that performs an in-order aggregation over a sequence of values and returns the partial result at each step. Parallel algorithms (e.g., the Blelloch scan) have been developed to scale the scan operation on massively parallel systems. In this work, to improve the scalability of BP, we reformulate BP as a scan operation, which we then scale with our modified version of the Blelloch scan algorithm at a theoretical step complexity of Θ(log n). We evaluate our approach on training a vanilla Recurrent Neural Network with synthetic datasets, and demonstrate up to a 2.75x speedup in overall training time and an 8.8x speedup on the backward pass alone.
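As a rough illustration of the underlying primitive (not the paper's modified algorithm), the sketch below shows a work-efficient Blelloch exclusive scan for an associative operator with an identity element: when the inner loops run concurrently on a parallel machine, the up-sweep and down-sweep phases each take Θ(log n) steps. The function name, the power-of-two length assumption, and the addition example are assumptions for demonstration; BPPSA adapts this scheme so that the values being combined come from the transposed Jacobians and gradients of the backward pass.

```python
def blelloch_exclusive_scan(values, op, identity):
    """Exclusive scan: output[k] = values[0] op ... op values[k-1]."""
    n = len(values)                       # assumed to be a power of two here
    x = list(values)
    # Up-sweep (reduce) phase: build partial aggregates in a binary tree.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):      # conceptually parallel across i
            x[i + 2 * d - 1] = op(x[i + d - 1], x[i + 2 * d - 1])
        d *= 2
    # Down-sweep phase: push prefixes back down the tree.
    x[n - 1] = identity
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):      # conceptually parallel across i
            t = x[i + d - 1]
            x[i + d - 1] = x[i + 2 * d - 1]
            x[i + 2 * d - 1] = op(x[i + 2 * d - 1], t)
        d //= 2
    return x

# Example: exclusive prefix sums of [1, 2, 3, 4, 5, 6, 7, 8].
print(blelloch_exclusive_scan(list(range(1, 9)), lambda a, b: a + b, 0))
# -> [0, 1, 3, 6, 10, 15, 21, 28]
```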