Part of Proceedings of Machine Learning and Systems 3 (MLSys 2021)
Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher Aberger, Christopher De Sa
Pipeline parallelism in neural network training allows a model to be partitioned spatially across accelerators, which can lead to higher overall hardware utilization. Unfortunately, to preserve the statistical efficiency of sequential training, existing pipeline-parallel training techniques sacrifice hardware efficiency by decreasing pipeline utilization or incurring extra memory costs. In this paper, we investigate to what extent these sacrifices are necessary on the emerging class of dataflow hardware accelerators. We devise PipeMare, a simple yet robust training method that tolerates asynchronous updates during pipeline-parallel execution without sacrificing utilization or memory, enabling efficient use of fine-grained pipeline parallelism. Concretely, when tested on ResNet and Transformer networks, asynchrony enables PipeMare to use up to 2.7x less memory or achieve 14.3x higher pipeline utilization, with similar model quality, compared to state-of-the-art synchronous pipeline-parallel training techniques.
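To make the asynchrony concrete: in fine-grained pipeline parallelism, the gradient applied to an early pipeline stage at step t was computed with weights that are several steps stale, and the staleness grows with pipeline depth. The sketch below is only an illustration of that delayed-gradient update pattern, not the authors' implementation; the toy least-squares problem, the delay `tau`, and the learning rate are all assumptions chosen for the example.

```python
# Illustrative sketch (not the PipeMare code): SGD where the applied gradient
# is `tau` steps stale, mimicking the asynchrony of pipeline-parallel training.
import numpy as np

rng = np.random.default_rng(0)

# Toy least-squares problem: minimize 0.5 * ||A w - b||^2 over w.
A = rng.normal(size=(64, 8))
b = rng.normal(size=64)

def loss(w):
    return 0.5 * np.sum((A @ w - b) ** 2)

def loss_grad(w):
    return A.T @ (A @ w - b)

def train(tau, lr=5e-4, steps=4000):
    """Gradient descent where the gradient applied at step t was computed
    with the weights from step t - tau (tau = 0 recovers synchronous training)."""
    w = np.zeros(A.shape[1])
    history = [w.copy()]                     # past weights, modeling pipeline delay
    for t in range(steps):
        stale_w = history[max(0, t - tau)]   # weights the gradient was computed with
        w = w - lr * loss_grad(stale_w)
        history.append(w.copy())
    return loss(w)

print("final loss, synchronous   (tau=0): %.4f" % train(tau=0))
print("final loss, asynchronous  (tau=8): %.4f" % train(tau=8))
```

With a small enough step size both runs converge, but as the delay grows the stale gradients can destabilize training or degrade statistical efficiency; this is the trade-off that synchronous pipeline schemes avoid by paying in pipeline bubbles or extra weight copies, and that PipeMare instead tolerates directly.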