SCIENCE CHINA Information Sciences, Volume 61, Issue 6: 060421(2018) https://doi.org/10.1007/s11432-017-9303-0

Neuromorphic vision chips

Nanjian WU1,2,3,*
  • Received: Oct 7, 2017
  • Accepted: Nov 13, 2017
  • Published: Feb 2, 2018


This paper reviews the progress of neuromorphic vision chip research over the past decades. It focuses on two kinds of neuromorphic vision chips: frame-driven (FD) and event-driven (ED) vision chips. FD and ED vision chips differ greatly in system architecture, image sensing, image information coding, image processing algorithms, and design methodology. Vision chips can overcome the serial data transmission and processing bottlenecks of traditional image processing systems, performing high-speed image capture and real-time image processing. This paper selects one typical chip from each of the two kinds, and introduces their architectures, image sensing schemes, image processing processors, and system operation. The FD neuromorphic reconfigurable vision chip comprises a high-speed image sensor, a processing element (PE) array, and a self-organizing map (SOM) neural network. The FD vision chip has advantages in image resolution, static object detection, time-multiplexed image processing, and chip area. The ED neuromorphic vision chip system is based on an address-event representation (AER) image sensor and an event-driven multi-kernel convolution network. The ED vision chip has advantages in fast sensing, low communication bandwidth, brain-like processing, and high energy efficiency. Finally, this paper discusses the architecture and challenges of future neuromorphic vision chips and indicates that a reconfigurable vision chip integrating left- and right-brain functions in three-dimensional (3D) large-scale integration (LSI) technology is becoming a trend in vision chip research.
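To make the ED processing model concrete, the following is a minimal sketch of event-driven convolution over an AER event stream: each incoming event adds a kernel into a membrane-potential map centered at its pixel address, and any pixel crossing a firing threshold emits an output event and resets. The function name, the event tuple format `(x, y, polarity)`, and all parameter values are illustrative assumptions, not the actual circuit or data format of the chips reviewed here.

```python
import numpy as np

def event_driven_convolution(events, kernel, shape, threshold=1.0):
    """Illustrative event-driven convolution (hypothetical sketch).

    events    : iterable of AER events (x, y, polarity), polarity in {+1, -1}
    kernel    : 2D convolution kernel (odd-sized)
    shape     : (height, width) of the neuron/pixel array
    threshold : membrane potential at which a neuron fires and resets
    """
    h, w = shape
    kh, kw = kernel.shape
    oy, ox = kh // 2, kw // 2          # kernel half-sizes
    potential = np.zeros((h, w))        # integrate-and-fire membrane map
    out_events = []
    for (x, y, pol) in events:
        # Add the kernel centered at (x, y), clipped to the array bounds.
        y0, y1 = max(y - oy, 0), min(y + oy + 1, h)
        x0, x1 = max(x - ox, 0), min(x + ox + 1, w)
        ky0, kx0 = y0 - (y - oy), x0 - (x - ox)
        potential[y0:y1, x0:x1] += pol * kernel[ky0:ky0 + (y1 - y0),
                                                kx0:kx0 + (x1 - x0)]
        # Fire-and-reset: pixels at or above threshold emit output events.
        for fy, fx in np.argwhere(potential >= threshold):
            out_events.append((int(fx), int(fy), +1))
            potential[fy, fx] = 0.0
    return out_events
```

Note the key contrast with FD processing: computation happens only when an event arrives, so a static scene produces no events and consumes essentially no processing bandwidth, which is the source of the low-bandwidth and energy-efficiency advantages cited above.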


This work was supported by National Natural Science Foundation of China (Grant Nos. 61234003, 61434004, 61504141), Brain Project of Beijing (Grant No. Z161100000216129), and CAS Interdisciplinary Project (Grant No. KJZD-EW-L11-04). The author would like to thank all members in the research group for their collaborations.

Copyright 2019 Science China Press Co., Ltd. (《中国科学》杂志社有限责任公司). All rights reserved.