Pixelated Butterfly: Fast Machine Learning with Sparsity | Beidi Chen | Stanford MLSys Seminar, Episode 49
Episode 49 of the Stanford MLSys Seminar Series

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models

Speaker: Beidi Chen

Abstract:
Overparameterized neural networks generalize well but are expensive to train. Ideally, one would like to reduce their computational cost while retaining their generalization benefits. Sparse model training is a simple and promising approach to achieve this, but existing methods struggle with accuracy loss, slow training runtime, or difficulty in sparsifying all model components. The core problem is that searching for a sparsity mask over a discrete set of sparse matrices is difficult and expensive. To address this, our main insight is to optimize over a continuous superset of sparse matrices with a fixed structure known as products of butterfly matrices. As butterfly matrices are not hardware-efficient, we propose simple variants of butterfly (block and flat) to take advantage of modern hardware.
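As a rough illustration of the structure the abstract refers to (not the speaker's or the paper's code), the sketch below builds the sparsity masks of the butterfly factors in NumPy and shows that each factor has only 2n nonzeros while their product can be fully dense; the helper name butterfly_factor_mask is made up for this example.

```python
# Illustrative sketch, assuming n is a power of 2.
# A butterfly matrix of size n is a product of log2(n) sparse "butterfly factors".
import numpy as np

def butterfly_factor_mask(n: int, block_size: int) -> np.ndarray:
    """Sparsity mask of one butterfly factor: block-diagonal with block_size x block_size
    blocks, each block a 2x2 grid of diagonal (block_size/2)-sized sub-blocks."""
    mask = np.zeros((n, n), dtype=bool)
    half = block_size // 2
    for start in range(0, n, block_size):
        for i in range(half):
            r, c = start + i, start + i
            mask[r, c] = True                # top-left diagonal sub-block
            mask[r, c + half] = True         # top-right diagonal sub-block
            mask[r + half, c] = True         # bottom-left diagonal sub-block
            mask[r + half, c + half] = True  # bottom-right diagonal sub-block
    return mask

n = 8
factors = [butterfly_factor_mask(n, bs) for bs in (8, 4, 2)]  # log2(n) = 3 factors
for f in factors:
    print(f.astype(int), "\n")  # each factor has only 2n nonzeros

# The product of the factors can be fully dense, which is why products of
# butterfly matrices form an expressive superset of many structured linear maps.
product = np.linalg.multi_dot([f.astype(float) for f in factors])
print((product != 0).astype(int))
```

The block and flat variants mentioned in the abstract replace the scattered diagonal sub-blocks above with contiguous dense blocks, which map better onto GPU-friendly block-sparse kernels.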