
Article

Keywords:
signal segmentation; denoising; sparsity; piecewise-polynomial signal model; convex optimization
Summary:
A method is presented for segmenting a one-dimensional signal whose independent segments are modeled as polynomials and which is corrupted by additive noise. The method is based on sparse modeling; its main part is formulated as a convex optimization problem and is solved by a proximal splitting algorithm. Experiments on simulated and real data show that the method reliably finds breakpoints in the signal, but requires careful tuning of the regularization and internal parameters. Finally, potential extensions are discussed.
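
As a hedged illustration of the general recipe described in the summary (a sparsity-promoting convex program solved by a proximal splitting algorithm), the sketch below applies it to the closely related l1 trend-filtering problem [17], min_x 0.5*||x - y||^2 + lam*||D x||_1 with D the second-difference operator, solved with the Chambolle-Pock primal-dual algorithm [6]. This is not the authors' piecewise-polynomial formulation; the function names, regularization value, and detection threshold are illustrative assumptions.

import numpy as np

def second_diff_matrix(n):
    """Second-order difference operator D of shape (n-2, n); D @ x is zero
    wherever x is locally a degree-1 polynomial, so nonzeros of D @ x mark
    candidate breakpoints."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def l1_trend_filter(y, lam, n_iter=3000):
    """Solve min_x 0.5*||x - y||^2 + lam*||D x||_1 with the Chambolle-Pock
    primal-dual algorithm (step sizes chosen so that tau*sigma*||D||^2 <= 1)."""
    n = y.size
    D = second_diff_matrix(n)
    tau = sigma = 0.25           # ||D||^2 <= 16, hence tau*sigma*16 <= 1
    x = y.copy()
    x_bar = y.copy()
    u = np.zeros(n - 2)          # dual variable
    for _ in range(n_iter):
        # prox of the conjugate of lam*||.||_1 = projection onto [-lam, lam]
        u = np.clip(u + sigma * (D @ x_bar), -lam, lam)
        # prox of 0.5*||. - y||^2
        x_new = (x - tau * (D.T @ u) + tau * y) / (1.0 + tau)
        x_bar = 2.0 * x_new - x  # over-relaxation with theta = 1
        x = x_new
    return x, D @ x

# Usage: denoise a noisy piecewise-linear signal and list candidate breakpoints.
# Both lam and the detection threshold are illustrative and must be tuned,
# echoing the summary's remark about careful tuning of regularization parameters.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.where(t < 0.5, 2.0 * t, 1.0 - (t - 0.5))
noisy = clean + 0.05 * rng.standard_normal(t.size)
x_hat, d2 = l1_trend_filter(noisy, lam=10.0)
print("candidate breakpoint indices:", np.nonzero(np.abs(d2) > 1e-3)[0])
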
References:
[1] Angelosante, D., Giannakis, G. B.: Group Lassoing change-points in piecewise-constant AR processes. EURASIP J. Advances Signal Process. 2012 (2012), 1, 1-16. DOI 10.1186/1687-6180-2012-70
[2] Bleakley, K., Vert, J. P.: The group fused Lasso for multiple change-point detection. Technical Report.
[3] Bruckstein, A. M., Donoho, D. L., Elad, M.: From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51 (2009), 1, 34-81. DOI 10.1137/060657704 | MR 2481111
[4] Candes, E. J., Wakin, M. B.: An introduction to compressive sampling. IEEE Signal Process. Magazine 25 (2008), 2, 21-30. DOI 10.1109/msp.2007.914731
[5] Candes, E. J., Wakin, M. B., Boyd, S. P.: Enhancing sparsity by reweighted $\ell_1$ minimization. J. Fourier Analysis Appl. 14 (2008), 877-905. DOI 10.1007/s00041-008-9045-x | MR 2461611
[6] Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vision 40 (2011), 1, 120-145. DOI 10.1007/s10851-010-0251-1 | MR 2782122
[7] Chartrand, R.: Shrinkage mappings and their induced penalty functions. In: IEEE International Conference on Acoustics, Speech, and Signal Processing 2014. DOI 10.1109/icassp.2014.6853752
[8] Combettes, P., Pesquet, J.: Proximal splitting methods in signal processing. In: Fixed-Point Algorithms for Inverse Problems in Science and Engineering 2011, pp. 185-212. DOI 10.1007/978-1-4419-9569-8_10 | MR 2858838
[9] Condat, L.: A generic proximal algorithm for convex optimization - application to total variation minimization. IEEE Signal Process. Lett. 21 (2014), 8, 985-989. DOI 10.1109/lsp.2014.2322123
[10] Donoho, D. L., Elad, M.: Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization. Proc. National Acad. Sci. 100 (2003), 5, 2197-2202. DOI 10.1073/pnas.0437847100 | MR 1963681
[11] Elad, M.: Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer 2010. MR 2677506
[12] Fornasier, M., ed.: Theoretical Foundations and Numerical Methods for Sparse Recovery. De Gruyter, Berlin, Boston 2010. DOI 10.1515/9783110226157 | MR 2731598
[13] Frecon, J., Pustelnik, N., Abry, P., Condat, L.: On-the-fly approximation of multivariate total variation minimization. IEEE Trans. Signal Process. 64 (2016), 9, 2355-2364. DOI 10.1109/tsp.2016.2516962 | MR 3480014
[14] Giryes, R., Elad, M., Bruckstein, A. M.: Sparsity based methods for overparameterized variational problems. SIAM J. Imaging Sci. 8 (2015), 3, 2133-2159. DOI 10.1137/140998585 | MR 3402782
[15] GN Nettest: Understanding OTDR. GN Nettest 2000.
[16] Golub, G. H., Van Loan, C. F.: Matrix Computations. Third edition. Johns Hopkins University Press 1996. MR 1417720
[17] Kim, S. J., Koh, K., Boyd, S., Gorinevsky, D.: $\ell_1$ trend filtering. SIAM Rev. 51 (2009), 2, 339-360. DOI 10.1137/070690274 | MR 2505584
[18] Komodakis, N., Pesquet, J.: Playing with duality: An overview of recent primal-dual approaches for solving large-scale optimization problems. IEEE Signal Process. Magazine 32 (2015), 6, 31-54. DOI 10.1109/msp.2014.2377273
[19] Kowalski, M.: Sparse regression using mixed norms. Applied Comput. Harmonic Analysis 27 (2009), 3, 303-324. DOI 10.1016/j.acha.2009.05.006 | MR 2559729
[20] Kowalski, M., Torrésani, B.: Structured Sparsity: from Mixed Norms to Structured Shrinkage. In: SPARS'09 - Signal Processing with Adaptive Sparse Structured Representations (R. Gribonval, ed.), Inria Rennes - Bretagne Atlantique 2009, pp. 1-6.
[21] Kutyniok, G., Lim, W. Q.: Compactly supported shearlets are optimally sparse. J. Approx. Theory 163 (2011), 11, 1564-1589. DOI 10.1016/j.jat.2011.06.005 | MR 2832720
[22] Levin, D.: Between moving least-squares and moving least-$\ell_1$. BIT Numerical Math. 55 (2015), 3, 781-796. DOI 10.1007/s10543-014-0522-0 | MR 3401813
[23] Neubauer, J., Veselý, V.: Change point detection by sparse parameter estimation. Informatica 22 (2011), 1, 149-164. DOI 10.1049/pbce065e_ch7 | MR 2885664
[24] Novosadová, M., Rajmic, P.: Piecewise-polynomial curve fitting using group sparsity. In: Proc. 8th International Congress on Ultra Modern Telecommunications and Control Systems, Lisbon 2016, pp. 317-322. DOI 10.1109/icumt.2016.7765379
[25] Novosadová, M., Rajmic, P.: Piecewise-polynomial signal segmentation using reweighted convex optimization. In: Proc. 40th International Conference on Telecommunications and Signal Processing (TSP), Barcelona 2017, pp. 769-774. DOI 10.1109/tsp.2017.8076092
[26] Šorel, M., Šroubek, F.: Fast convolutional sparse coding using matrix inversion lemma. Digital Signal Process. 55 (2016), 44-51. DOI 10.1016/j.dsp.2016.04.012
[27] Perraudin, N., Shuman, D. I., Puy, G., Vandergheynst, P.: Unlocbox: A Matlab convex optimization toolbox using proximal splitting methods (2014).
[28] Pock, T.: Fast Total Variation for Computer Vision. Dissertation Thesis, Graz University of Technology 2008.
[29] Rajmic, P.: Exact risk analysis of wavelet spectrum thresholding rules. In: Proc. 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2003), Vol. 2, 2003, pp. 455-458. DOI 10.1109/icecs.2003.1301820
[30] Rajmic, P., Novosadová, M.: On the limitation of convex optimization for sparse signal segmentation. In: Proc. 39th International Conference on Telecommunications and Signal Processing, Brno University of Technology 2016, pp. 550-554. DOI 10.1109/tsp.2016.7760941
[31] Selesnick, I. W., Arnold, S., Dantham, V. R.: Polynomial smoothing of time series with additive step discontinuities. IEEE Trans. Signal Process. 60 (2012), 12, 6305-6318. DOI 10.1109/tsp.2012.2214219 | MR 3006421
[32] Shem-Tov, S., Rosman, G., Adiv, G., Kimmel, R., Bruckstein, A. M.: On globally optimal local modeling: From moving least squares to over-parametrization. In: Innovations for Shape Analysis (Mathematics and Visualization), Springer 2012, pp. 379-405. DOI 10.1007/978-3-642-34141-0_17 | MR 3075841
[33] Starck, J. L., Candes, E. J., Donoho, D. L.: The curvelet transform for image denoising. IEEE Trans. Image Process. 11 (2002), 6, 670-684. DOI 10.1109/tip.2002.1014998 | MR 1929403
[34] Tibshirani, R.: Regression shrinkage and selection via the LASSO. J. Royal Statist. Soc. Ser. B (Methodological) 58 (1996), 1, 267-288. MR 1379242
[35] Tibshirani, R. J.: Adaptive piecewise polynomial estimation via trend filtering. Ann. Statist. 42 (2014), 1, 285-323. DOI 10.1214/13-aos1189 | MR 3189487
[36] Tropp, J.: Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50 (2004), 2231-2242. DOI 10.1109/tit.2004.834793 | MR 2097044
[37] Zhang, B., Geng, J., Lai, L.: Multiple change-points estimation in linear regression models via sparse group lasso. IEEE Trans. Signal Process. 63 (2015), 9, 2209-2224. DOI 10.1109/tsp.2015.2411220 | MR 3331995