Approximate Computing: from Emerging Computational Paradigm to System Design and Applications
Time: Monday, July 11th, 10:30am - 12pm PDT
Location: 3005, Level 3
Event Type:
Description: Computing systems are conventionally designed to operate as accurately as possible. However, this approach faces severe technology challenges, such as power dissipation, circuit reliability, and performance. Many pervasive computing applications (such as machine learning, pattern recognition, digital signal processing, communication, robotics, and multimedia) are inherently error-tolerant or error-resilient. Approximate computing has been proposed for highly energy-efficient systems targeting these emerging error-tolerant applications: it processes data approximately (inexactly) to save power and achieve high performance, while keeping results at a level acceptable for subsequent use. This tutorial starts with the motivation for approximate computing and then reviews current techniques for approximate hardware design. It will cover the following topics:

1) Exploiting Approximate Computing for Efficient and Reliable Convolutional Neural Networks: Technology evolution addresses the demand for faster computers. Despite the achieved speed-ups in memory and computation performance, the workload of DNN applications is still hard to fit into embedded devices. The Approximate Computing (AxC) paradigm aims to solve this problem by reducing the precision of hardware/software components, leading to efficient DNN implementations. The literature has mainly exploited AxC to achieve energy efficiency or improve performance; however, AxC also has an impact on the robustness and reliability of DNNs. In this context, we will provide a brief introduction to existing and recent AxC solutions for achieving efficient and reliable DNNs.
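To make the precision-reduction idea concrete, the sketch below shows uniform symmetric quantization, one common way to lower the precision of DNN weights so that cheaper integer arithmetic can be used. This is an illustrative example, not code from the tutorial; the function names and the 8-bit setting are assumptions.

```python
def quantize(weights, bits=8):
    """Uniform symmetric quantization: map floats to signed integers.

    Reducing weight precision shrinks the memory footprint and lets
    hardware use cheaper integer arithmetic, at the cost of a small,
    usually tolerable, accuracy loss (at most half a quantization
    step per weight).
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit signed
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

w = [0.12, -0.5, 0.31, -0.07]
q, s = quantize(w, bits=8)
w_hat = dequantize(q, s)   # close to w, but representable in 8 bits
```

Each recovered weight differs from the original by at most half the quantization step `s`, which error-tolerant workloads such as DNN inference can typically absorb.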

2) Adaptive Approximation for Energy-Efficient Machine Learning: Information and Computing Technologies (ICT) account for a growing share of global energy consumption, and the ever-pervasive machine learning, and especially deep learning, algorithms play a notable role in it. Consequently, managing the energy consumption of these systems has become a top priority. In this talk, we first briefly present self-awareness concepts as one efficient basis for adaptivity in modern systems. Next, after a glance at various fundamental approximate computing methods, we present adaptive approximation as a key to energy-aware machine learning. We present several solutions demonstrating various adaptive approximation methods and their benefits, and use these examples to draw conclusions and project a path forward.
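The adaptive-approximation idea of topic 2 can be sketched as a self-aware loop: try the cheapest approximation level first and fall back to more accurate (and more expensive) levels only when a quality monitor reports the error target is missed. All names here are hypothetical, and the example uses input subsampling as its approximation knob purely for illustration.

```python
def mean_exact(xs):
    """Reference (fully accurate, most expensive) computation."""
    return sum(xs) / len(xs)

def mean_approx(xs, stride):
    """Approximate the mean by subsampling every `stride`-th element."""
    sub = xs[::stride]
    return sum(sub) / len(sub)

def adaptive_mean(xs, target_error, strides=(8, 4, 2, 1)):
    """Pick the cheapest approximation level meeting the error target.

    For illustration the monitor compares against the exact answer;
    a real self-aware system would use a cheap quality proxy instead,
    so that monitoring does not erase the energy savings.
    """
    ref = mean_exact(xs)                # stand-in for a quality monitor
    for stride in strides:              # cheapest (most approximate) first
        est = mean_approx(xs, stride)
        if abs(est - ref) <= target_error:
            return est, stride          # first level within budget
    return ref, 1                       # fall back to exact computation

est, stride = adaptive_mean(list(range(100)), target_error=0.5)
```

Loosening `target_error` lets the loop settle on a larger stride, trading accuracy for less work, which is the essence of energy-aware adaptive approximation.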
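The "approximate (inexact) processing to save power" described in the tutorial overview is often realized directly in arithmetic hardware. A classic example is the lower-part OR adder (LOA), sketched below in simplified form as a bit-level model: the low bits are combined with a carry-free OR, and only the upper bits use exact addition, shortening the carry chain for a bounded error. This is an illustrative model, not hardware from the tutorial.

```python
def loa_add(a, b, k=4, width=16):
    """Simplified lower-part OR adder (LOA), modeled at the bit level.

    The low k bits are approximated with a bitwise OR (no carry
    propagation); only the upper bits use an exact adder. This
    shortens the critical carry chain, saving energy and delay,
    in exchange for a small, bounded error in the low bits.
    """
    mask = (1 << k) - 1
    low = (a | b) & mask                  # approximate lower part, no carry
    high = ((a >> k) + (b >> k)) << k     # exact upper part
    return (low | high) & ((1 << width) - 1)

print(loa_add(1000, 2000))  # exact here: no carry crosses the low part
print(loa_add(7, 9))        # approximate: exact sum is 16
```

When no carry would cross out of the low k bits and the operands' low bits do not overlap, the result is exact; otherwise the error stays confined to the low-order bits, which error-resilient applications can tolerate.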