Towards Resilient Analog In-Memory Deep Learning via Data Layout Re-Organization
Time: Wednesday, July 13th, 4:42pm - 5:06pm PDT
Location: 3007, Level 3
Event Type: Research Manuscript
Manufacturing Test and Reliability
Description: Processing in-memory paves the way for neural network inference engines. An emerging challenge is to develop the software/hardware interface that automatically compiles deep learning models onto in-memory computing platforms. In this paper, we observe that the data layout organization of a deep neural network (DNN) model directly impacts the model's classification accuracy. This is because the \emph{resistive parasitics} within a crossbar introduce a dependency between the \emph{matrix data} and the \emph{precision} of the analog computation.
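The data dependency described above can be illustrated with a toy sketch. The model below is an assumption for illustration only (it is not the paper's method): a first-order IR-drop approximation in which the current already collected on a bitline drops the voltage seen by cells further down the wire, so two conductance layouts that encode the same weight matrix can yield different analog errors. The function `crossbar_mvm`, the parameter `r_wire`, and the two differential encodings are all hypothetical.

```python
import numpy as np

def crossbar_mvm(G, v, r_wire=1e-3):
    """Toy analog crossbar matrix-vector product with a first-order
    IR-drop model: each bitline collects current from top to bottom,
    and the accumulated current reduces the voltage seen by cells
    further down the wire. A crude sketch, not a circuit solver."""
    n_rows, n_cols = G.shape
    out = np.zeros(n_cols)
    for j in range(n_cols):              # one bitline per column
        acc = 0.0                        # current collected so far
        for i in range(n_rows):
            v_eff = v[i] - r_wire * acc  # upstream IR drop attenuates this cell
            acc += G[i, j] * v_eff
        out[j] = acc
    return out

rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(64, 16))   # signed weight matrix
v = rng.uniform(0.0, 1.0, size=64)          # input activations (voltages)
ideal = v @ W

# Two differential encodings of the SAME weights onto non-negative
# conductance pairs; without parasitics both are exact.
Gp_a, Gn_a = np.maximum(W, 0.0), np.maximum(-W, 0.0)  # minimal conductances
Gp_b, Gn_b = W + 1.0, np.ones_like(W)                 # same weights, larger bias

out_a = crossbar_mvm(Gp_a, v) - crossbar_mvm(Gn_a, v)
out_b = crossbar_mvm(Gp_b, v) - crossbar_mvm(Gn_b, v)

err_a = np.linalg.norm(out_a - ideal)
err_b = np.linalg.norm(out_b - ideal)
print(f"analog error, encoding A: {err_a:.4f}")
print(f"analog error, encoding B: {err_b:.4f}")
```

Both encodings represent the same matrix, yet under the parasitic model their errors differ, which is the data/precision dependency that motivates re-organizing the layout before mapping a DNN onto the crossbars.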