

IEEE ISSCC Education • February 13, 2021 • United States
Presentation digest / paper:
Please vote for this presentation on the following link:
http://submissions.mirasmart.com/ISSCC2021/Rating/EducationalSession.aspx?esi=1AYYNuQba
Abstract:
Deep neural networks are used across a wide range of applications. Custom hardware optimized for this domain offers significant performance and power advantages over general-purpose processors. However, achieving high TOPS/W and/or TOPS/mm² while also meeting scalability and programmability requirements is challenging. This tutorial presents design approaches for striking the right balance between efficiency, scalability, and flexibility across different neural networks and emerging models. It surveys (i) circuit and architecture techniques for designing efficient compute units, memory hierarchies, and interconnect topologies, (ii) compiler approaches for effectively tiling computations, and (iii) neural-network optimizations for efficient execution on the target hardware.
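To illustrate the tiling idea in (ii), here is a minimal sketch of a tiled matrix multiply, the core computation of fully connected and (after lowering) convolutional layers. The function name and tile size are hypothetical, not from the tutorial; the point is only that partitioning the loops keeps each working set small enough for a fast on-chip buffer, which is what an accelerator compiler does when mapping a layer onto a memory hierarchy.

```python
import numpy as np

def tiled_matmul(a, b, tile=32):
    """Multiply a (M x K) by b (K x N) one tile at a time.

    Tiling bounds the working set of each inner step to roughly
    3 * tile * tile elements, so it can reside in a small fast
    memory instead of streaming from DRAM on every access.
    `tile` is a hypothetical tile size chosen for illustration.
    """
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # Accumulate one output tile from one pair of input
                # tiles; slicing handles ragged edges automatically.
                out[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return out
```

In a real compiler flow, the tile size would be chosen to match the accelerator's buffer capacity, and the loop order would be picked to maximize reuse of whichever operand is most expensive to fetch.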