A 40nm 5.6TOPS/W 239GOPS/mm² Self-Attention Processor with Sign Random Projection-based Approximation

Seong Hoon Seo, Soosung Kim, Sung Jun Jung, Sangwoo Kwon, Hyunseung Lee, Jae W. Lee

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

The Transformer architecture is one of the most remarkable recent breakthroughs in neural networks, achieving state-of-the-art (SOTA) performance on various natural language processing (NLP) and computer vision tasks. Self-attention is the key enabling operation for Transformer-based models. However, its computational complexity, which is quadratic in the sequence length, makes this operation the major performance bottleneck for those models. Thus, we propose a novel self-attention accelerator that skips most of the computation by utilizing an approximate candidate selection algorithm. Implemented in a 40nm CMOS technology, our 5.64 mm² chip operates at 100-600 MHz while consuming 48.3-685 mW, achieving an energy efficiency of 0.354-5.61 TOPS/W and an area efficiency of 239 GOPS/mm².
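The abstract only describes the approach at a high level; as a rough illustration of how sign random projection (SRP) can pre-select attention candidates, the Python sketch below hashes queries and keys with a shared random projection and computes exact attention scores only for the keys whose hash codes are closest in Hamming distance. The hash length, keep ratio, and all function and parameter names here are illustrative assumptions, not the paper's actual hardware design.

```python
import numpy as np

def srp_hash(X, R):
    """Sign random projection: map each row of X to a bit vector via sign(X @ R)."""
    return (X @ R) > 0  # (n, n_bits) boolean hash codes

def approximate_attention(Q, K, V, n_bits=64, keep_ratio=0.25, rng=None):
    """Sketch of SRP-based candidate selection for self-attention.

    Hypothetical parameters (n_bits, keep_ratio) chosen for illustration only.
    For each query, only the keys whose SRP hashes are closest in Hamming
    distance (a proxy for angular similarity) are scored exactly; the rest of
    the quadratic score matrix is skipped.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = Q.shape
    R = rng.standard_normal((d, n_bits))      # shared random projection matrix
    qh, kh = srp_hash(Q, R), srp_hash(K, R)   # hash queries and keys

    out = np.zeros_like(V)
    k_keep = max(1, int(keep_ratio * n))      # candidate keys kept per query
    for i in range(n):
        # Hamming distance between hash codes approximates angular distance.
        ham = np.count_nonzero(qh[i] ^ kh, axis=1)
        cand = np.argpartition(ham, k_keep - 1)[:k_keep]
        # Exact scaled dot-product attention over the selected candidates only.
        scores = Q[i] @ K[cand].T / np.sqrt(d)
        w = np.exp(scores - scores.max())
        out[i] = (w / w.sum()) @ V[cand]
    return out
```

Because the hash comparison is cheap bitwise work, most of the expensive dot products and softmax terms for low-similarity keys can be skipped, which is the source of the computation savings the abstract refers to.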

Original language: English
Title of host publication: ESSCIRC 2022 - IEEE 48th European Solid State Circuits Conference, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 85-88
Number of pages: 4
ISBN (Electronic): 9781665484947
DOIs
State: Published - 2022
Event: 48th IEEE European Solid State Circuits Conference, ESSCIRC 2022 - Milan, Italy
Duration: 19 Sep 2022 - 22 Sep 2022

Publication series

Name: ESSCIRC 2022 - IEEE 48th European Solid State Circuits Conference, Proceedings

Conference

Conference: 48th IEEE European Solid State Circuits Conference, ESSCIRC 2022
Country/Territory: Italy
City: Milan
Period: 19/09/22 - 22/09/22
