PREPRINT | IGPM Preprint No. 425 (IGPM425.pdf), RWTH Publication No. 479396, 2015 |
TITLE | Data Assimilation in Reduced Modeling |
AUTHORS | Peter Binev, Albert Cohen, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, Przemyslaw Wojtaszczyk |
ABSTRACT | This paper considers the problem of optimal recovery of an element u of a Hilbert space H from measurements of the form ℓ_j(u), j = 1, …, m, where the ℓ_j are known linear functionals on H. Problems of this type are well studied [18] and are usually treated under the assumption that u belongs to a prescribed model class, typically a known compact subset of H. Motivated by reduced modeling for solving parametric partial differential equations, this paper considers another setting, where the additional information about u is in the form of how well u can be approximated by a certain known subspace V_n of H of dimension n, or more generally, in the form of how well u can be approximated by each of a sequence of nested subspaces V_0 ⊂ V_1 ⊂ ⋯ ⊂ V_n with each V_k of dimension k. A recovery algorithm for the one-space formulation was proposed in [16]; that algorithm is proven, in the present paper, to be optimal. It is also shown that the one-space recovery problem has a simple formulation if certain favorable bases are chosen to represent V_n and the measurements. The major contribution of the present paper is to analyze the multi-space case. It is shown that, in this multi-space case, the set of all u that satisfy the given information can be described as the intersection of a family of known ellipsoids in H. It follows that a near optimal recovery algorithm in the multi-space problem is provided by identifying any point in this intersection. It is easy to see that the accuracy of recovery of u in the multi-space setting can be much better than in the one-space problem. Two iterative algorithms based on alternating projections are proposed for recovery in the multi-space problem, and one of them is analyzed in detail. This analysis includes an a posteriori estimate for the performance of the iterates. These a posteriori estimates can serve both as a stopping criterion in the algorithm and as a method for deriving convergence rates. Since the limit of the algorithm is a point in the intersection of the aforementioned ellipsoids, it provides a near optimal recovery for u. |
KEYWORDS | optimal recovery, reduced modeling, greedy algorithms |
DOI | 10.1137/15M1025384 |
PUBLICATION | SIAM/ASA Journal on Uncertainty Quantification, Volume 5, Issue 1, Pages 1–29
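
The abstract notes that the one-space recovery problem has a simple formulation once suitable bases are chosen for V_n and the measurements. The snippet below is a minimal sketch of that one-space recovery in a finite-dimensional Euclidean setting (H = R^N), using the well-known saddle-point system for this constrained least-norm problem; the matrices L and V, the function name recover_one_space, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def recover_one_space(L, V, y):
    """One-space optimal recovery sketch in H = R^N (Euclidean inner product).

    L : (m, N) array -- rows are the representers of the measurement
                        functionals, so the data are y = L @ u
    V : (N, n) array -- columns span the approximation space V_n
    y : (m,) array   -- observed measurements

    Returns u_star = V @ c + L.T @ a, where (a, c) solve the saddle-point system
        [ L L^T   L V ] [a]   [y]
        [ (L V)^T  0  ] [c] = [0],
    i.e. the measurement-consistent element closest to V_n.
    """
    m, n = L.shape[0], V.shape[1]
    A = L @ L.T                      # Gram matrix of the representers (m x m)
    B = L @ V                        # measurements of the V_n basis   (m x n)
    K = np.block([[A, B], [B.T, np.zeros((n, n))]])
    rhs = np.concatenate([y, np.zeros(n)])
    sol = np.linalg.solve(K, rhs)
    a, c = sol[:m], sol[m:]
    return V @ c + L.T @ a

# Toy usage: recover a vector that is close to, but not in, V_n.
rng = np.random.default_rng(0)
N, n, m = 50, 5, 12
V = np.linalg.qr(rng.standard_normal((N, n)))[0]   # orthonormal basis of V_n
L = rng.standard_normal((m, N))                    # measurement functionals
u = V @ rng.standard_normal(n) + 1e-2 * rng.standard_normal(N)
u_star = recover_one_space(L, V, L @ u)
print(np.linalg.norm(u - u_star))   # error is controlled by dist(u, V_n)
```

The recovery error of this sketch is governed by how well u is approximated by V_n and by the angle between V_n and the measurement space, which is the quantity the paper's optimality analysis makes precise; the multi-space algorithms based on alternating projections are not reproduced here.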