|289||IGPM289.pdf November 2008|
|TITLE||Instance Optimal Decoding by Thresholding in Compressed Sensing|
|AUTHORS||Albert Cohen, Wolfgang Dahmen, Ronald DeVore|
|ABSTRACT||Compressed Sensing seeks to capture a discrete signal x ∈ R^N with a small number n of linear measurements. The information captured about x from such measurements is given by the vector y = Φx ∈ R^n, where Φ is an n × N matrix. The best matrices, from the viewpoint of capturing sparse or compressible signals, are generated by random processes, e.g. their entries are given by i.i.d. Bernoulli or Gaussian random variables. The information y holds about x is extracted by a decoder ∆ mapping R^n into R^N. Typical decoders are based on ℓ1-minimization and greedy pursuit. The present paper studies the performance of decoders based on thresholding. For quite general random families of matrices Φ, decoders ∆ are constructed which are instance-optimal in probability, by which we mean the following. If x is any vector in R^N, then with high probability applying ∆ to y = Φx gives a vector x̄ := ∆(y) such that ‖x − x̄‖ ≤ C0 σ_k(x)_{ℓ2} for all k ≤ an/log N, provided a is sufficiently small (depending on the probability of failure). Here σ_k(x)_{ℓ2} is the error that results when x is approximated by the k-sparse vector which equals x in its k largest coordinates and is otherwise zero. It is also shown that results of this type continue to hold even if the measurement vector y is corrupted by additive noise: y = Φx + e, where e is some noise vector. In this case σ_k(x)_{ℓ2} is replaced by σ_k(x)_{ℓ2} + ‖e‖_{ℓ2}.|
|KEYWORDS||Compressed sensing, best k-term approximation, instance optimal decoders, thresholding, noisy measurements, random matrices.|
|PUBLICATION||8th International Conference on Harmonic Analysis and Partial Differential Equations,
June 16-20, 2008, El Escorial, Madrid, Spain / Patricio Cifuentes ... ed.
Contemporary Mathematics 505, 28 pp. (2010)|
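The notions in the abstract can be illustrated with a short sketch. The code below computes the best k-term approximation error σ_k(x)_{ℓ2} exactly as defined there (the ℓ2 distance from x to the k-sparse vector keeping its k largest coordinates), and pairs it with a deliberately simplified one-step thresholding decoder: correlate y with the columns of Φ, keep the k largest correlations as the support, and least-squares fit on that support. This toy decoder and the specific dimensions are assumptions for illustration; they are not the paper's construction or its constants.

```python
import numpy as np

def sigma_k(x, k):
    """Best k-term approximation error sigma_k(x)_{l2}: the l2 distance from x
    to the k-sparse vector equal to x in its k largest-magnitude coordinates."""
    x = np.asarray(x, dtype=float)
    idx = np.argsort(np.abs(x))[::-1]       # coordinates sorted by magnitude
    xk = np.zeros_like(x)
    xk[idx[:k]] = x[idx[:k]]                # keep the k largest, zero the rest
    return float(np.linalg.norm(x - xk))

def threshold_decode(Phi, y, k):
    """A minimal one-step thresholding decoder (an assumed simplification,
    not the paper's decoder): pick the support from the k largest column
    correlations, then solve least squares on that support."""
    corr = Phi.T @ y
    support = np.argsort(np.abs(corr))[::-1][:k]
    x_hat = np.zeros(Phi.shape[1])
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x_hat[support] = coef
    return x_hat

# Demo: recover a k-sparse signal from n random Gaussian measurements.
rng = np.random.default_rng(0)
n, N, k = 128, 256, 3
Phi = rng.standard_normal((n, N)) / np.sqrt(n)   # i.i.d. Gaussian matrix
x = np.zeros(N)
x[[7, 42, 100]] = [10.0, -10.0, 10.0]            # k-sparse test signal
x_bar = threshold_decode(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_bar))                 # near zero for this sparse x
```

Since x here is exactly k-sparse, σ_k(x)_{ℓ2} = 0, so an instance-optimal bound of the form ‖x − x̄‖ ≤ C0 σ_k(x)_{ℓ2} forces exact recovery; for a merely compressible x the decoder's error is controlled by how well x is approximated by its k largest coordinates.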