
SCIENCE CHINA Information Sciences, Volume 61, Issue 4: 048105 (2018). https://doi.org/10.1007/s11432-017-9225-2

Distribution-dependent concentration inequalities for tighter generalization bounds

  • Received: Apr 9, 2017
  • Accepted: Aug 21, 2017
  • Published: Mar 16, 2018

Abstract

There is no abstract available for this article.


