Approximate Computing, Fiction or Reality?
Time: Thursday, July 14th, 10:30am - 12pm PDT
Location: 3001, Level 3
Event Type: Research Panel
Description: Approximate Computing is a design paradigm postulating that accuracy of result is just another metric that can be traded off. Traditionally, we played with metrics such as area and delay: “I will accept a larger implementation, as long as it is fast”. And we are used to synthesis tools for hardware, and compilation tools for software, that, given one input specification, can generate hundreds of different implementation versions, varying in energy efficiency, delay, memory usage, etc. But playing with the accuracy of the output was never on the table. What if it were? What if I could automatically tell my design tools: “and this is what I am prepared to pay in terms of accuracy loss”? Then we can say: “give me a less accurate implementation, as long as it’s _both_ smaller and faster”. That sounds great; but what challenges does AC pose, and hence how far are we from seeing Approximate Computing everywhere? Are there enough significant applications that can tolerate a loss in accuracy? Are synthesis tools mature enough to generate designs that efficiently trade off accuracy, as if it were any other metric? Should we consider AC just in software, before we commit to generating inaccurate hardware? Let’s ask our panelists for their views on the subject.
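To picture the trade-off the description appeals to, here is a toy software sketch using loop perforation, one well-known approximate-computing technique; the function names and the 5% "accuracy budget" are illustrative choices, not anything specific to the panel:

```python
# Toy illustration of approximate computing in software:
# loop perforation skips iterations to trade accuracy for less work.

def exact_mean(xs):
    # The "accurate" baseline: touches every element.
    return sum(xs) / len(xs)

def perforated_mean(xs, stride=4):
    # Process only every `stride`-th element: roughly stride-times
    # less work, at the cost of some accuracy on the result.
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = [float(i % 100) for i in range(10_000)]
exact = exact_mean(data)
approx = perforated_mean(data, stride=4)

# The user-declared accuracy budget: accept any result within 5%
# of the exact one, in exchange for ~4x fewer additions.
assert abs(approx - exact) / exact < 0.05
```

A synthesis or compilation tool embracing AC would, in effect, apply transformations like this automatically, exploring implementations along the accuracy/cost frontier instead of leaving the choice of `stride` to the programmer.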