Memory-Efficient Training of Binarized Neural Networks on the Edge
Time: Wednesday, July 13th, 1:52pm - 2:15pm PDT
Location: 3000, Level 3
Event Type: Research Manuscript
Topic Area: ML Algorithms and Applications
Description: To enable memory-efficient binarized neural network (BNN) training on the edge, we focus on memory-efficient floating-point (FP) encodings for the momentum values. If the FP format is not chosen properly, updates to the momentum values can be lost and training accuracy degraded. Based on this insight, we develop a method to find FP encodings that are more memory-efficient than the standard ones. In our experiments, the total memory usage of BNN training is reduced by factors of 2.47x, 2.43x, and 2.04x, depending on the BNN model, with minimal accuracy cost (less than 1%) compared to the standard 32-bit FP encoding.
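To illustrate why the choice of FP format matters, the sketch below models a reduced-precision encoding with configurable exponent and mantissa widths and shows how a small momentum update can be rounded away when the mantissa is too narrow. This is a hypothetical illustration of the failure mode the abstract describes, not the paper's actual encoding-search method; `quantize_fp` and its parameters are assumptions for demonstration.

```python
import math

def quantize_fp(x, exp_bits=5, man_bits=2):
    """Round x to a hypothetical low-bit FP format with the given
    exponent and mantissa widths. Illustrative model only; not the
    encoding used in the paper."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    x = abs(x)
    e = math.floor(math.log2(x))
    # Clamp the exponent to the representable range.
    e_max = 2 ** (exp_bits - 1) - 1
    e_min = -(2 ** (exp_bits - 1))
    e = max(e_min, min(e_max, e))
    # Round the significand to man_bits fractional bits.
    m = round(x / 2.0 ** e * 2 ** man_bits) / 2 ** man_bits
    return sign * m * 2.0 ** e

# Momentum update: m <- beta * m + (1 - beta) * g
beta, m, g = 0.9, 0.5, 1e-3
updated = beta * m + (1 - beta) * g   # 0.4501: a tiny change from 0.45

# With only 2 mantissa bits the update rounds to the same value as
# the un-updated momentum, so the gradient's contribution is lost.
lost = quantize_fp(updated, man_bits=2) == quantize_fp(beta * m, man_bits=2)

# A wider mantissa preserves the update.
kept = quantize_fp(updated, man_bits=10) != quantize_fp(beta * m, man_bits=10)
```

In this toy model, the 2-bit mantissa silently drops the `1e-4` contribution of the gradient, which is the kind of lost momentum update the work's encoding search is designed to avoid.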