
HW/SW Codesign for In-Memory Computing Architectures: Adventure from Emerging Technology to Intelligent Computing Systems
Time: Monday, July 11th, 1:30pm - 5pm PDT
Location: 3002, Level 3
Event Type: Tutorial
Topics: Design
Description

Breakthroughs in deep learning continuously fuel innovations that substantially enhance our daily lives. However, DNNs largely overwhelm conventional computing systems, which are severely bottlenecked by the data movement between processing units and memory. As a result, novel and intelligent computing systems become more and more inevitable in order to improve or even replace the von Neumann principles that have remained unchanged for decades.

This tutorial provides a comprehensive overview of the major shortcomings of modern architectures and the ever-increasing necessity for novel designs that fundamentally reduce memory latency and energy by enabling data processing near the memory, or even inside the memory itself. The tutorial also discusses in detail the great promise of recent emerging beyond-CMOS devices such as the Ferroelectric Field-Effect Transistor (FeFET) and Resistive Random-Access Memory (ReRAM). It bridges the gap between the latest innovations in the underlying technology and the recent breakthroughs in computer architectures, and demonstrates how HW/SW codesign is key to realizing efficient, yet reliable, in-memory and near-memory computing.

The first part (given by Hussam Amrouch, University of Stuttgart) will focus on the emerging ferroelectric FeFET technology and its great potential for building efficient in-memory computing architectures. It will also explain how abstracted reliability models can be developed and later employed to realize HW/SW codesign for robust in-memory computing. Further, it will discuss how compact Logic-in-Memory can be built using FeFET technology, and how that outstandingly synergizes with novel brain-inspired hyperdimensional computing algorithms.
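The synergy mentioned above comes from the fact that hyperdimensional computing operates on long binary vectors with simple bitwise primitives, which map naturally onto in-memory logic. The sketch below (an illustrative toy, not material from the tutorial itself; the dimensionality and function names are our own choices) shows the two core operations, binding via XOR and bundling via majority vote:

```python
import random

D = 10_000  # hypervector dimensionality (illustrative choice)

def rand_hv(rng):
    """Random binary hypervector."""
    return [rng.randint(0, 1) for _ in range(D)]

def bind(a, b):
    """Binding via element-wise XOR -- maps directly onto in-memory bitwise logic."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Bundling via element-wise majority vote across several hypervectors."""
    half = len(hvs) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*hvs)]

def hamming(a, b):
    """Normalized Hamming distance; similar hypervectors are close to 0."""
    return sum(x != y for x, y in zip(a, b)) / D

rng = random.Random(0)
a, b = rand_hv(rng), rand_hv(rng)
c = bind(a, b)
# XOR is its own inverse, so unbinding with one operand recovers the other
recovered = bind(c, a)
```

Because every step is an independent bitwise operation over very wide words, a Logic-in-Memory array can execute binding and bundling in a massively parallel fashion without moving the hypervectors out of memory.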

The second part (given by Onur Mutlu, ETH Zurich) will focus on two promising novel directions: 1) processing using memory, which exploits the analog operational properties of memory chips to perform massively parallel operations in memory with low-cost changes, and 2) processing near memory, which integrates sophisticated additional processing capability into memory controllers, the logic layer of 3D-stacked memory technologies, or memory chips, giving near-memory logic high memory bandwidth and low memory latency.
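As a concrete flavor of "processing using memory": Ambit-style DRAM designs activate three rows simultaneously so that each bitline settles to the majority of the three cells, and bulk AND/OR fall out of that majority by fixing one control row to all-zeros or all-ones. The toy model below (our own illustration, using Python integers as bit-vectors) checks that identity:

```python
def maj3(a, b, c):
    """Bitwise 3-input majority, as computed by triple-row activation in DRAM."""
    return (a & b) | (b & c) | (a & c)

def bulk_and(a, b):
    """Control row held at all-zeros: MAJ(a, b, 0) = a AND b."""
    return maj3(a, b, 0)

def bulk_or(a, b, width=8):
    """Control row held at all-ones: MAJ(a, b, 1...1) = a OR b."""
    ones = (1 << width) - 1
    return maj3(a, b, ones)

a, b = 0b1100_1010, 0b1010_0110
and_result = bulk_and(a, b)
or_result = bulk_or(a, b)
```

The point of the analogy is that one row activation computes this majority across an entire DRAM row (thousands of bits) at once, which is where the massive parallelism comes from.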

The third part (given by Jian-Jia Chen, TU Dortmund) will cover how novel neural network models such as Binary Neural Networks (BNNs) and Spiking Neural Networks (SNNs) can be proactively trained and constructed in the presence of errors stemming from the underlying emerging technology. In addition, it will discuss how convolutional neural networks (CNNs) can be executed with operation-unit (OU) based computing on memory crossbars.
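To make the error-aware BNN idea concrete: BNNs store weights as {-1, +1}, and device non-idealities in emerging memories can flip stored weights, so robustness can be assessed (and trained for) by injecting such flips. The sketch below is our own minimal illustration, not the tutorial's method; the flip probability and helper names are assumptions:

```python
import random

def binarize(weights):
    """Map real-valued weights to {-1, +1}, as done in BNNs."""
    return [1 if w >= 0 else -1 for w in weights]

def inject_flips(w_bin, p, rng):
    """Model device errors: each stored weight flips sign with probability p."""
    return [-w if rng.random() < p else w for w in w_bin]

def bnn_dot(w_bin, x_bin):
    """Binary dot product (realized as XNOR + popcount in hardware)."""
    return sum(w * x for w, x in zip(w_bin, x_bin))

rng = random.Random(0)
w = binarize([0.3, -1.2, 0.7, 0.1, -0.4, 0.9, -0.2, 0.5])
x = [1, -1, 1, 1, -1, 1, -1, 1]
clean = bnn_dot(w, x)                       # error-free output
noisy = bnn_dot(inject_flips(w, 0.3, rng), x)  # output under injected weight flips
```

Repeating such fault injection during training lets the network learn weight configurations whose outputs stay on the correct side of the decision threshold despite device errors.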