In:
Proceedings of the National Academy of Sciences, Vol. 113, No. 41 (2016-10-11), pp. 11441-11446
Abstract:
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively > 6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Type of Medium:
Online Resource
ISSN:
0027-8424, 1091-6490
DOI:
10.1073/pnas.1604850113
Language:
English
Publisher:
Proceedings of the National Academy of Sciences
Publication Date:
2016