
SCIENTIA SINICA Informationis, Volume 49, Issue 3: 334-341 (2019) https://doi.org/10.1360/N112018-00278

Domain-specific architectures driven by deep learning

More info
  • Received: Oct 18, 2018
  • Accepted: Dec 12, 2018
  • Published: Mar 19, 2019

Abstract

Deep learning (DL) is one of the most exciting advances in the field of artificial intelligence (AI), and its new computational demands are driving research on new architectures. This paper first identifies the essential requirements of DL by analyzing the stages and tasks of AI development, and then discusses domain-specific architectures (DSAs) for DL from three perspectives: criteria for evaluating computational structures, the numerical foundations of computation, and potential research directions for DL DSAs. The Kullback-Leibler divergence is adopted as the criterion for the complexity and accuracy of DL computation architectures. Posit is employed as a new number system with which to rebuild both DL computation and scientific computation, establishing a late-mover advantage for digital chips. Finally, it is concluded that DL DSAs are one of the critical areas of DSA research.
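For reference, the criterion mentioned above can be stated with the standard definition of the Kullback-Leibler divergence; reading $P$ as a full-precision reference distribution and $Q$ as a reduced-precision or reduced-complexity approximation is our interpretation of the abstract, not a formula quoted from the paper:

```latex
% Kullback-Leibler divergence of an approximation Q from a reference P over
% a discrete domain X; D_KL >= 0, with equality iff P = Q, so a smaller
% value indicates a more faithful low-complexity approximation.
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x \in \mathcal{X}} P(x) \log \frac{P(x)}{Q(x)}
```

The abstract also proposes Posit as a replacement number system. The sketch below is a minimal, illustrative decoder for an n-bit posit, written in Python from the published format definition (sign, regime, exponent, fraction) of Gustafson and Yonemoto (2017); the function name and the parameter defaults n=8, es=1 are our own choices, not taken from this paper:

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float.

    Illustrative only: follows the format of Gustafson & Yonemoto (2017),
    value = (-1)^s * useed^k * 2^e * (1 + f), where useed = 2^(2^es).
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):            # pattern 100...0 is NaR ("not a real")
        return float("nan")
    sign = (bits >> (n - 1)) & 1
    if sign:                            # negative posits are two's-complemented
        bits = (-bits) & mask
    payload = bits & ((1 << (n - 1)) - 1)
    # Regime: run of identical bits after the sign, ended by an opposite bit.
    first = (payload >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((payload >> i) & 1) == first:
        run, i = run + 1, i - 1
    k = run - 1 if first else -run      # regime value
    rem = max(i, 0)                     # bits left after regime + terminator
    e_bits = min(es, rem)
    e = (payload >> (rem - e_bits)) & ((1 << e_bits) - 1) if e_bits else 0
    e <<= es - e_bits                   # truncated exponent bits read as zeros
    f_bits = rem - e_bits
    f = payload & ((1 << f_bits) - 1) if f_bits else 0
    frac = 1.0 + (f / (1 << f_bits) if f_bits else 0.0)
    useed = 1 << (1 << es)              # useed = 2^(2^es)
    value = float(useed) ** k * (1 << e) * frac
    return -value if sign else value

# With n=8, es=1: 0b01000000 -> 1.0, 0b01111111 -> 4096.0 (maxpos),
# and 0b00000001 -> 1/4096 (minpos).
print(decode_posit(0b01000000), decode_posit(0b01111111), decode_posit(0b00000001))
```

With es=1, an 8-bit posit spans roughly ±1/4096 to ±4096, with precision concentrated near ±1.0 and tapering toward the extremes; this tapered accuracy around the magnitudes typical of normalized DL tensors is the property that makes posits attractive for low-precision DL computation.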


Acknowledgment

We thank Prof. John L. GUSTAFSON for creating Figure 4 specifically for this paper.


