SCIENTIA SINICA Informationis, Volume 49, Issue 7: 900-910(2019) https://doi.org/10.1360/N112018-00085

## Sparse recovery of decaying signals based on $\ell^{0}$ regularization via PDASC

• Accepted: Jul 23, 2018
• Published: Apr 9, 2019

### Abstract

One of the main tasks in sparse recovery is to develop and analyze computationally tractable algorithms for obtaining sparse solutions of underdetermined linear systems. Jiao et al. (2015) proposed a primal-dual active set with continuation (PDASC) algorithm for the $\ell^0$-regularized least-squares problem, established finite-step global convergence, and derived a sharp estimation error bound under a certain restricted isometry property (RIP) condition. In this paper, we relax the condition on the RIP constant so that it is independent of the sparsity level $T$ for a class of signals with a strong decay property. Furthermore, we propose a novel data-driven rule for selecting the regularization parameter. Several numerical examples are presented to verify the efficiency and accuracy of the PDASC algorithm and the data-driven parameter choice rule.
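As a concrete illustration of the setting, the following sketch generates a $T$-sparse signal whose sorted nonzero magnitudes decay geometrically (a "strong decay" property) and forms the underdetermined measurements $y = \Psi x + \eta$. All names, problem sizes, the decay ratio, and the noise level are illustrative assumptions, not the paper's constants.

```python
import numpy as np

def decaying_sparse_signal(n, T, gamma=2.0, rng=None):
    """T-sparse signal in R^n whose sorted nonzero magnitudes decay by a
    fixed ratio gamma > 1; gamma and the base magnitude are illustrative
    choices, not the paper's constants."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(n)
    support = rng.choice(n, size=T, replace=False)
    magnitudes = 10.0 * gamma ** (-np.arange(T))   # 10, 10/gamma, 10/gamma^2, ...
    x[support] = rng.choice([-1.0, 1.0], size=T) * magnitudes
    return x

# underdetermined measurement model y = Psi x + eta with m < n
rng = np.random.default_rng(1)
m, n, T = 64, 256, 5
Psi = rng.standard_normal((m, n)) / np.sqrt(m)   # column-normalized Gaussian matrix
x = decaying_sparse_signal(n, T, gamma=2.0, rng=rng)
y = Psi @ x + 0.01 * rng.standard_normal(m)      # small additive noise eta
```

The decay ratio `gamma` controls how strongly the nonzero magnitudes separate; the theory in the paper concerns exactly such signals, for which the RIP requirement can be made independent of $T$.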

### References

[1] Candes E J, Romberg J, Tao T. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inform Theor, 2006, 52: 489-509 CrossRef Google Scholar

[2] Donoho D L. Compressed sensing. IEEE Trans Inform Theor, 2006, 52: 1289-1306 CrossRef Google Scholar

[3] Tropp J A. Just relax: convex programming methods for identifying sparse signals in noise. IEEE Trans Inform Theor, 2006, 52: 1030-1051 CrossRef Google Scholar

[4] Zhang H, Wang Y, Chang X Y, et al. $L_{1/2}$ regularization. Sci China Inf Sci, 2010, 40: 412--422. Google Scholar

[5] Zhang H, Zhang H. Approximate message passing algorithm for $L_{1/2}$ regularization. Sci China Inf Sci, 2017, 47: 58--72. Google Scholar

[6] Chen S S, Donoho D L, Saunders M A. Atomic Decomposition by Basis Pursuit. SIAM J Sci Comput, 1998, 20: 33-61 CrossRef Google Scholar

[7] Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B Stat Methodol, 1996, 58: 267--288, doi: 10.2307/2346178. Google Scholar

[8] Meinshausen N, Bühlmann P. High-dimensional graphs and variable selection with the Lasso. Ann Statist, 2006, 34: 1436-1462 CrossRef Google Scholar

[9] Grasmair M, Scherzer O, Haltmeier M. Necessary and sufficient conditions for linear convergence of $\ell^1$-regularization. Comm Pure Appl Math, 2011, 64: 161-182 CrossRef Google Scholar

[10] Tropp J A, Wright S J. Computational Methods for Sparse Solution of Linear Inverse Problems. Proc IEEE, 2010, 98: 948-958 CrossRef Google Scholar

[11] Rish I, Grabarnik G Y. Sparse Modeling: Theory, Algorithms, and Applications. Boca Raton: CRC Press, 2014. Google Scholar

[12] Fan J, Li R. Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties. J Am Statistical Association, 2001, 96: 1348-1360 CrossRef Google Scholar

[13] Zhang C H, Zhang T. A General Theory of Concave Regularization for High-Dimensional Sparse Estimation Problems. Statist Sci, 2012, 27: 576-593 CrossRef Google Scholar

[14] Elsener A, van de Geer S. Sharp oracle inequalities for stationary points of nonconvex penalized M-estimators. 2018. arXiv Google Scholar

[15] Zeng J S, Fang J, Xu Z B. Sparse SAR imaging based on $L_{1/2}$ regularization. Sci China Inf Sci, 2012, 55: 1755-1775 CrossRef Google Scholar

[16] Zeng J, Xu Z, Zhang B. Accelerated regularization based SAR imaging via BCR and reduced Newton skills. Signal Processing, 2013, 93: 1831-1844 CrossRef Google Scholar

[17] Zeng J S, Lin S B, Wang Y. $L_{1/2}$ Regularization: Convergence of Iterative Half Thresholding Algorithm. IEEE Trans Signal Process, 2014, 62: 2317-2329 CrossRef ADS arXiv Google Scholar

[18] Huang J, Jiao Y, Jin B, et al. A unified primal dual active set algorithm for nonconvex sparse recovery. 2018. arXiv Google Scholar

[19] Fan J Q, Lv J C. A selective overview of variable selection in high dimensional feature space. Stat Sin, 2010, 20: 101--148. Google Scholar

[20] Chang X Y, Xu Z B, Zhang H, et al. Robust regularization theory based on $L_{q}$ $(0<q<1)$ regularization. Google Scholar

[21] Geman S, Geman D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans Pattern Anal Mach Intell, 1984, 6: 721--741. Google Scholar

[22] Leclerc Y G. Constructing simple stable descriptions for image partitioning. Int J Comput Vision, 1989, 3: 73-102 CrossRef Google Scholar

[23] Yan M, Yang Y, Osher S. Robust 1-bit Compressive Sensing Using Adaptive Outlier Pursuit. IEEE Trans Signal Process, 2012, 60: 3868-3875 CrossRef ADS Google Scholar

[24] Zhang Y, Dong B, Lu Z. $\ell_0$ Minimization for wavelet frame based image restoration. Math Comp, 2013, 82: 995-1015 CrossRef Google Scholar

[25] Blumensath T, Davies M E. Iterative Thresholding for Sparse Approximations. J Fourier Anal Appl, 2008, 14: 629-654 CrossRef Google Scholar

[26] Lu Z, Zhang Y. Sparse Approximation via Penalty Decomposition Methods. SIAM J Optim, 2013, 23: 2448-2478 CrossRef Google Scholar

[27] Tseng P. Convergence of a Block Coordinate Descent Method for Nondifferentiable Minimization. J Optimization Theor Appl, 2001, 109: 475-494 CrossRef Google Scholar

[28] Ito K, Kunisch K. A variational approach to sparsity optimization based on Lagrange multiplier theory. Inverse Problems, 2014, 30: 015001 CrossRef ADS Google Scholar

[29] Jiao Y, Jin B, Lu X. A primal dual active set algorithm for a class of nonconvex sparsity optimization. 2013. arXiv Google Scholar

[30] Hale E T, Yin W, Zhang Y. Fixed-Point Continuation for $\ell_1$-Minimization: Methodology and Convergence. SIAM J Optim, 2008, 19: 1107-1130 CrossRef Google Scholar

[31] Fan Q, Jiao Y, Lu X. A Primal Dual Active Set Algorithm With Continuation for Compressed Sensing. IEEE Trans Signal Process, 2014, 62: 6276-6285 CrossRef ADS arXiv Google Scholar

[32] Jiao Y, Jin B, Lu X. A primal dual active set with continuation algorithm for the $\ell^0$-regularized optimization problem. Appl Comput Harmonic Anal, 2015, 39: 400-426 CrossRef Google Scholar

[33] Proakis J G, Manolakis D G. Digital Signal Processing: Principles, Algorithms and Applications. New Jersey: Prentice Hall, 1996. Google Scholar

[34] Candes E J, Tao T. Decoding by Linear Programming. IEEE Trans Inform Theor, 2005, 51: 4203-4215 CrossRef Google Scholar

[35] Vershynin R. Introduction to the non-asymptotic analysis of random matrices. 2010. arXiv Google Scholar

[36] Huang S, Zhu J. Recovery of sparse signals using OMP and its variants: convergence analysis based on RIP. Inverse Problems, 2011, 27: 035003 CrossRef ADS Google Scholar

[37] Mo Q, Shen Y. A Remark on the Restricted Isometry Property in Orthogonal Matching Pursuit. IEEE Trans Inform Theor, 2012, 58: 3654-3656 CrossRef Google Scholar

[38] Ito K, Jin B. Inverse Problems: Tikhonov Theory and Algorithms. Singapore: World Scientific, 2014. Google Scholar

[39] Schwarz G. Estimating the Dimension of a Model. Ann Statist, 1978, 6: 461-464 CrossRef Google Scholar

[40] Wang H, Li R, Tsai C L. Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika, 2007, 94: 553-568 CrossRef PubMed Google Scholar

[41] Chen J, Chen Z. Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 2008, 95: 759-771 CrossRef Google Scholar

[42] Wang H, Li B, Leng C. Shrinkage tuning parameter selection with a diverging number of parameters. J R Stat Soc Ser B Stat Methodol, 2009, 71: 671-683 CrossRef Google Scholar

[43] Wang L, Kim Y, Li R. Calibrating non-convex penalized regression in ultra-high dimension. Ann Statist, 2013, 41: 2505-2536 CrossRef PubMed Google Scholar

[44] Shi Y Y, Jiao Y L, Yan L, et al. A modified BIC tuning parameter selector for SICA-penalized Cox regression models with diverging dimensionality. J Math, 2017, 37: 723--730. Google Scholar

[45] Zou H, Hastie T, Tibshirani R. On the "degrees of freedom" of the lasso. Ann Statist, 2007, 35: 2173-2192 CrossRef Google Scholar

[46] Becker S, Bobin J, Candès E J. NESTA: A Fast and Accurate First-Order Method for Sparse Recovery. SIAM J Imag Sci, 2011, 4: 1-39 CrossRef Google Scholar

[47] Blumensath T, Davies M E. Stagewise Weak Gradient Pursuits. IEEE Trans Signal Process, 2009, 57: 4333-4346 CrossRef ADS Google Scholar

[48] Blumensath T. Accelerated iterative hard thresholding. Signal Processing, 2012, 92: 752-756 CrossRef Google Scholar

• Algorithm 1 PDASC
Require: $\lambda_0 = \frac{1}{2}\|\Psi^t y\|^2_{\infty}$, $\mathcal{A}(\lambda_0) = \emptyset$, $x(\lambda_0) = 0$, $d(\lambda_0) = \Psi^t y$, decay factor $\mu \in (0,1)$.
for $k = 1, 2, \ldots$
    Set $\lambda_k = \mu\lambda_{k-1}$, $\mathcal{A}_0 = \mathcal{A}(\lambda_{k-1})$, $(x^0, d^0) = (x(\lambda_{k-1}), d(\lambda_{k-1}))$;
    for $j = 1, 2, \ldots$
        Define the active set $\mathcal{A}_j$ and the inactive set $\mathcal{I}_j$;
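A minimal runnable sketch of the continuation scheme above. The inner active-set rule $\mathcal{A} = \{i : |x_i + d_i| > \sqrt{2\lambda}\}$ is the standard $\ell^0$ hard-threshold rule from Jiao et al. (2015) [32], stated here as an assumption since the algorithm fragment above is truncated; the parameter values (`mu`, loop counts) are illustrative, not the paper's recommendations.

```python
import numpy as np

def pdasc(Psi, y, mu=0.7, n_lambda=50, max_inner=5):
    """Sketch of PDASC (primal-dual active set with continuation).

    Starts from lambda_0 = 0.5 * ||Psi^t y||_inf^2 with an empty active
    set, decreases lambda geometrically (lambda_k = mu * lambda_{k-1}),
    and at each lambda alternates active-set selection with a restricted
    least-squares primal update and a dual residual update.
    """
    m, n = Psi.shape
    lam = 0.5 * np.max(np.abs(Psi.T @ y)) ** 2   # lambda_0
    x = np.zeros(n)
    d = Psi.T @ y                                # dual variable d = Psi^t (y - Psi x)
    A = np.zeros(n, dtype=bool)                  # active set, empty at lambda_0
    for _ in range(n_lambda):
        lam *= mu                                # continuation step
        for _ in range(max_inner):
            # assumed l0 hard-threshold rule for the active set (Jiao et al. 2015)
            A_new = np.abs(x + d) > np.sqrt(2.0 * lam)
            if A_new.sum() >= m:                 # guard: active set too large
                return x
            if np.array_equal(A_new, A):
                break                            # active set settled at this lambda
            A = A_new
            x = np.zeros(n)
            if A.any():
                # primal update: least squares restricted to active columns
                x[A] = np.linalg.lstsq(Psi[:, A], y, rcond=None)[0]
            # dual update; d vanishes on the active set by construction
            d = Psi.T @ (y - Psi @ x)
            d[A] = 0.0
    return x
```

For a signal with strongly decaying nonzeros (e.g. magnitudes 10, 5, 2.5 and noiseless Gaussian measurements with $m = 40$, $n = 100$), the continuation path picks up the large entries first and the smaller ones as $\lambda$ decreases, recovering the support exactly.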


Copyright 2019 Science China Press Co., Ltd.