On near-optimality of linear estimation
Abstract
We consider the following estimation problem: given a noisy indirect observation $\omega=Ax+\xi$, we want to recover a linear image $Bx$ of a signal $x$ known to belong to a given set $X$. Under some assumptions on $X$ (satisfied, e.g., when $X$ is the intersection of $K$ concentric ellipsoids/elliptic cylinders, or the unit ball of the spectral norm in the space of matrices) and on the norm $\|\cdot\|$ used to measure the recovery error (satisfied, e.g., by the $\|\cdot\|_p$-norms, $1\leq p\leq 2$, on $\mathbb{R}^m$ and by the nuclear norm on the space of matrices), and {\em without imposing any restriction on the mappings $A$ and $B$,} we build an estimate which is {\em linear in the observation} and near-optimal among all (linear and nonlinear) estimates in terms of its worst-case, over $x\in X$, expected $\|\cdot\|$-loss.
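To make the setting concrete, a linear estimate and the worst-case risk criterion described above can be sketched as follows (our notation: $H$ denotes the matrix of the linear estimate, which is not named in the abstract):
\[
\widehat{w}_H(\omega) = H^T\omega,
\qquad
\mathrm{Risk}(H) \;=\; \sup_{x\in X}\; \mathbf{E}_{\xi}\,\bigl\| Bx - H^T(Ax+\xi) \bigr\|,
\]
and near-optimality means that $\mathrm{Risk}(H)$ for the proposed $H$ is within a moderate factor of the minimax risk $\inf_{\widehat{w}}\sup_{x\in X}\mathbf{E}_{\xi}\|Bx-\widehat{w}(Ax+\xi)\|$, where the infimum is over all, not necessarily linear, estimates $\widehat{w}(\cdot)$.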
These results form an essential extension of the classical results (cf., e.g., Pinsker 1980 and Donoho, Liu and MacGibbon 1990), which, in the case of the Euclidean norm $\|\cdot\|$ and diagonal matrices $A$ and $B$, impose more restrictive assumptions on the signal set $X$.
The proposed estimator can be built in a computationally efficient way; moreover, all theoretical constructions and proofs rely heavily on tools of convex optimization.