SCIENCE CHINA Information Sciences, Volume 59, Issue 5: 052103(2016) https://doi.org/10.1007/s11432-016-5536-6

Learning capability of the truncated greedy algorithm

  • Received: Sep 30, 2015
  • Accepted: Nov 6, 2015
  • Published: Apr 8, 2016


The pure greedy algorithm (PGA), orthogonal greedy algorithm (OGA), and relaxed greedy algorithm (RGA) are three widely used greedy-type algorithms in both nonlinear approximation and supervised learning. In this paper, we apply another variant of greedy-type algorithm, called the truncated greedy algorithm (TGA), in the realm of supervised learning and study its learning performance. We rigorously prove that TGA is better than PGA in the sense that TGA achieves a faster learning rate than PGA. Furthermore, in some special cases, we also prove that TGA outperforms OGA and RGA. All these theoretical assertions are verified by both toy simulations and real-data experiments.
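For readers unfamiliar with the baseline, the PGA mentioned above coincides with classical matching pursuit: at each step, select the dictionary atom most correlated with the current residual and subtract its projection. The following is a minimal toy sketch of that baseline only (the function name, the toy dictionary `D`, and the target `y` are illustrative choices, not from the paper; TGA modifies the greedy step in a way this sketch does not capture):

```python
import numpy as np

def pure_greedy(y, D, steps):
    """Pure greedy algorithm (matching pursuit).

    D: matrix whose columns are unit-norm dictionary atoms.
    y: target vector to approximate.
    """
    approx = np.zeros_like(y)
    for _ in range(steps):
        r = y - approx                 # current residual
        corr = D.T @ r                 # inner products <r, g> for every atom g
        k = np.argmax(np.abs(corr))    # greedy selection of the best atom
        approx += corr[k] * D[:, k]    # add the residual's projection onto it
    return approx

# Toy usage: three unit-norm atoms in R^2; PGA recovers y up to a small residual.
D = np.array([[1.0, 0.0, np.sqrt(0.5)],
              [0.0, 1.0, np.sqrt(0.5)]])
y = np.array([2.0, 1.0])
residual_norm = np.linalg.norm(y - pure_greedy(y, D, 50))
```

Since `y` lies in the span of the dictionary here, the residual norm shrinks toward zero as the number of greedy steps grows.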


This work was supported by the National Basic Research Program of China (Grant No. 2013CB329404) and the National Natural Science Foundation of China (Grant Nos. 61502342, 11401462).



Copyright 2019 Science China Press Co., Ltd. All rights reserved.