A Time-to-first-spike Coding and Conversion Aware Training for Energy-Efficient Deep Spiking Neural Network Processor Design
Time: Tuesday, July 12th, 4:18pm - 4:42pm PDT
Location: 3002, Level 3
Event Type: Research Manuscript
AI/ML Design: Circuits and Architecture
Description: In this paper, we present hardware-friendly temporal coding and training schemes for the energy-efficient implementation of a deep spiking neural network (SNN) processor. First, we develop a time-to-first-spike coding scheme that enables lightweight logarithmic computation by exploiting spike-time information. To support this coding scheme, we propose a conversion-aware training technique that minimizes ANN-to-SNN conversion error by adjusting the activation function during training. A hardware implementation in a 28nm CMOS process shows that the proposed SNN processor achieves significant improvements in throughput and inference energy when running VGG-16 on CIFAR-10/100 and Tiny-ImageNet.
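The abstract's core idea, that time-to-first-spike (TTFS) coding turns multiplications into lightweight logarithmic (shift-based) computation, can be illustrated with a minimal sketch. The mapping below (spike time t ≈ -log2(a), so a decoded value is 2^-t and a weight multiply becomes a binary shift) is a common TTFS formulation assumed for illustration; the paper's exact coding, time window `t_max`, and activation adjustment may differ.

```python
import numpy as np

def ttfs_encode(a, t_max=8):
    """Encode an activation a in (0, 1] as a spike time:
    larger values spike earlier, t ~ ceil(-log2(a)).
    Assumed mapping, not necessarily the paper's exact scheme."""
    a = np.clip(a, 2.0 ** -t_max, 1.0)
    return int(np.minimum(np.ceil(-np.log2(a)), t_max))

def ttfs_decode(t):
    """A spike at time t represents the value 2^-t."""
    return 2.0 ** -t

def ttfs_weighted_sum(int_weights, spike_times):
    """Multiplier-free accumulation: w * 2^-t is just a right
    shift of an integer weight by the spike time, so a dot
    product reduces to shift-and-add operations."""
    acc = 0
    for w, t in zip(int_weights, spike_times):
        acc += (w >> t) if w >= 0 else -((-w) >> t)
    return acc

def conversion_aware_act(a, t_max=8):
    """Sketch of a conversion-aware activation: during ANN
    training, clip and snap activations onto the SNN-representable
    grid {2^-t}, so conversion introduces no representation error."""
    a = np.clip(a, 0.0, 1.0)
    if a <= 0.0:
        return 0.0
    return ttfs_decode(ttfs_encode(a, t_max))
```

For example, `ttfs_weighted_sum([8, 4], [1, 2])` computes 8·2⁻¹ + 4·2⁻² = 5 using only shifts and adds, which is the kind of arithmetic simplification that motivates TTFS hardware.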