**Abstract**: Let A be an n×m matrix with m>n, and suppose that the underdetermined linear system x = As admits a sparse solution s_0 whose l0-norm is less than spark(A)/2. By a well-known uniqueness theorem, such a sparse solution is unique. Suppose now that we have somehow obtained a solution \hat{s} as an estimate of s_0, and that \hat{s} is only "approximately sparse," that is, many of its components are very small and nearly zero, but not mathematically equal to zero. Is such a solution necessarily close to the true sparsest solution? More generally, is it possible to construct an upper bound on the l2-norm estimation error without knowing s_0? The answer is positive, and in this paper we construct such a bound based on the minimal singular values of submatrices of A. We will also state a tight bound, which is more complicated but, besides being tight, enables us to study the case of random dictionaries and to obtain probabilistic upper bounds. We will also study the noisy case, that is, x = As + n. Moreover, we will see that as the l0-norm of s_0 grows, \hat{s} must be approximately sparse to a better degree in order to obtain a predetermined guarantee on the maximum l2-norm estimation error. This can be seen as an explanation of the fact that the estimation quality of sparse recovery algorithms degrades as the l0-norm of s_0 grows.
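
The uniqueness condition invoked above can be illustrated numerically. The following sketch (not part of the paper; the matrix A and vector s0 are illustrative assumptions) computes spark(A) by brute force for a small dictionary and checks whether a candidate solution satisfies the l0-norm < spark(A)/2 condition:

```python
# Hedged sketch: brute-force spark(A) for a small matrix, and a check of the
# uniqueness condition ||s_0||_0 < spark(A)/2 stated in the abstract.
# The matrix A and vector s0 below are illustrative, not from the paper.
from itertools import combinations

import numpy as np


def spark(A, tol=1e-10):
    """Smallest number of columns of A that are linearly dependent."""
    n, m = A.shape
    for k in range(1, m + 1):
        for cols in combinations(range(m), k):
            # A set of k columns is dependent iff its submatrix has rank < k.
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return m + 1  # all columns independent (only possible when m <= n)


# Example: a 2x4 underdetermined dictionary (n=2, m=4).
A = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 3.0]])
s0 = np.array([0.0, 0.0, 0.0, 1.0])  # candidate sparse solution, ||s0||_0 = 1

print(spark(A))                              # -> 3 (any 3 columns in R^2 are dependent)
print(np.count_nonzero(s0) < spark(A) / 2)   # -> True: uniqueness condition holds
```

Brute force is exponential in m, so this is only feasible for toy dictionaries; computing spark exactly is NP-hard in general, which is one motivation for the singular-value-based bounds the paper develops.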