Hi everybody,

I have been in contact with several persons from the Signal Processing community, either in medical imaging (one of my former students is now in charge of a medical imaging research department) or in other fields like image recognition and "blind" deconvolution (when you do not even know the transfer function). There are a number of well-known problems which are relevant to our discussion. Imaging can be seen as a linear operator on the data space, which gives some output O = K I, where K is the operator kernel. In finite space (discrete representation), K can be represented by a matrix. Note that the operator K depends not only on the dirty beam, but also on additional constraints like the support of the image, its positivity (if any), etc.

a) Because our images are support-limited, their Fourier transforms are NOT support-limited. So to recover them, some extrapolation is required...

b) In all cases, one cannot expect to recover the "true" brightness distribution, but only a "regularized" (see below) one (i.e. you will NEVER know whether your apparently smooth image is in fact composed of a large number of point sources until you observe with adequate resolution).

c) Some regularisation must be applied. "Regularisation" stands for any method applying some sort of smoothness constraint. MEM (and all its variants) is an example of (Bayesian) a priori regularisation. CLEAN is an example of a posteriori regularisation, because of the convolution with the Clean beam. Positivity is NOT a regularisation principle: it is a constraint on the data space. It is this requirement for a regularisation method which drives the need for TAPERED distributions.

d) Most deconvolution techniques are guaranteed to converge if the convolution kernel is positive-definite, i.e. in our language, if the UV coverage is complete... Alas, that includes the short spacings also... Getting complete UV coverage minimizes the number of zeroes in K.

e) In all cases, one cannot expect to recover the "true" brightness distribution, but only the "regularized" one (i.e. you will NEVER know whether your apparently smooth image is in fact composed of a large number of point sources until you observe with adequate resolution).

f) If the kernel is not positive-definite, some of the Fourier components will be ill-constrained, and thereby poorly recovered in the deconvolution. However, here, positivity (and support information as well) does help a lot. The key point is whether the ratio of the highest eigenvalue to the lowest (non-zero) eigenvalue of the operator K stays reasonable or not. Any mode (i.e. structure in the image plane) corresponding to a very small eigenvalue will be poorly recovered. One can actually compute the effective noise level on any mode from the initial noise distribution and the eigenvalue analysis. Some regularisation methods actually limit the reconstruction by neglecting all the small eigenvalues (and hence ignoring the corresponding modes). This is similar to a Singular Value Decomposition (although it uses very different methods, because the matrix K is huge...). It is even possible to see which modes are actually "uncertain". What happens is that most of the poorly constrained modes are highly unphysical (the simplest example is the stripes which CLEAN sometimes produces). Hence, they limit the dynamic range and image fidelity, but not the physical interpretation...
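To make the eigenvalue argument of point f) a bit more concrete, here is a small numpy sketch. It is purely a toy illustration (the grid size, the random UV sampling and the function name gridded_operator are all made up for the example), not how WIPE or any real package works. It builds the support-restricted operator K = S F^H W F S^T explicitly on a tiny grid and looks at its spectrum; since K is Hermitian positive semi-definite here, the singular values are the eigenvalues:

import numpy as np

def gridded_operator(weights, support):
    """Toy support-restricted imaging operator K = S F^H W F S^T.

    weights : (N, N) array of gridded UV weights (0 where a cell is unsampled)
    support : (N, N) boolean mask of the pixels where the source may live
    Returns K as a (P, P) complex matrix, P = number of support pixels.
    """
    N = weights.shape[0]
    idx = np.flatnonzero(support.ravel())
    K = np.zeros((idx.size, idx.size), dtype=complex)
    for j, pix in enumerate(idx):
        delta = np.zeros(N * N)
        delta[pix] = 1.0
        vis = np.fft.fft2(delta.reshape(N, N))   # F: image -> gridded visibilities
        dirty = np.fft.ifft2(weights * vis)      # W then F^H: back to a dirty image
        K[:, j] = dirty.ravel()[idx]             # keep only the support pixels
    return K

# purely invented example: 16x16 grid, ~40% of UV cells sampled, compact support
N = 16
rng = np.random.default_rng(0)
weights = (rng.random((N, N)) < 0.4).astype(float)
y, x = np.mgrid[:N, :N]
support = (x - N / 2) ** 2 + (y - N / 2) ** 2 < (N / 3) ** 2

s = np.linalg.svd(gridded_operator(weights, support), compute_uv=False)
s_nz = s[s > 1e-10 * s[0]]                       # drop numerically-zero modes
print("ratio of highest to lowest non-zero eigenvalue:", s[0] / s_nz[-1])

# noise on the mode with eigenvalue s_i is amplified by roughly s[0]/s_i relative
# to the best-constrained mode; an SVD-like regularisation simply drops the worst
# modes, e.g. everything below 5% of the largest eigenvalue:
print("modes kept by a 5% cut:", int((s > 0.05 * s[0]).sum()), "of", s.size)

With only a few hundred support pixels this brute-force construction is trivial; on a real image the matrix K is far too large to build explicitly, which is why the actual methods proceed very differently.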
g) Note that even a "complete" (i.e. no holes) UV coverage may have a wide range of weights across the UV cells. The argument of Ed Fomalont applies here: we should measure the uniformity not by comparing 0 to some other number, but by comparing all the numbers in between. I think the ratio between the highest and the smallest non-zero eigenvalue is a fair measure of the quality of the imaging. Alas, this ratio depends not only on the dirty beam, but also (and not surprisingly) on the support of the image...

h) Tapered distributions (seen as a few extra points beyond the uniform UV coverage) most likely give better eigenvalues than a purely uniform UV coverage. I can't prove it, but it looks intuitive: a few constraints on how the extrapolation must go will give a better-defined operator than no constraints at all. The result will be a less sensitive observation before deconvolution, for sure, but also a FAR LOWER noise amplification factor in the deconvolution. In the end, the deconvolved image may be less noisy with a tapered distribution... (A way to play with this on the toy operator above is sketched at the end of this message.)

Now a couple of comments on medical imaging:

i) Most of medical imaging involves a "filled" aperture, or tomography, which is somewhat different from Fourier synthesis.

j) Most medical imaging is actually very poor. I guess the image fidelity hardly ever exceeds 3 to 5, but this is sufficient for their purpose, which is typically to distinguish good tissue from bad tissue (a binary operation in some way...).

I don't know whether this information is helpful or not. From my own experience with WIPE, which can produce an upper bound on the error map, I found that with current mm arrays the error maps are discouragingly large, unless a heavy taper is used. But I found the display of the highest error modes very useful to pinpoint possible artefacts in the reconstruction. I have not tried WIPE on ALMA-like UV coverages, because of computation limits. I can get in touch with the experts in Toulouse: perhaps they could work from the gridded UV data, which would be faster.

Stephane
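PS: to experiment with the intuition of point h), the same toy operator can be used to compare a purely uniform coverage with one that adds a few tapered (low-weight) cells beyond it. This reuses gridded_operator and support from the sketch earlier in this message; the coverage radii and the 0.2 weight are arbitrary, and the numbers it prints are purely illustrative, they prove nothing about a real array:

import numpy as np

# assumes gridded_operator() and support from the sketch earlier in this message
# UV radius on the FFT grid (the grid wraps, DC sits in cell [0, 0])
y, x = np.mgrid[:16, :16]
r2 = np.minimum(x, 16 - x) ** 2 + np.minimum(y, 16 - y) ** 2

uniform = (r2 < 5.0 ** 2).astype(float)          # filled coverage out to r = 5
extras = (r2 >= 5.0 ** 2) & (r2 < 7.0 ** 2)      # a ring of cells beyond that
tapered = uniform + 0.2 * extras                 # same core plus low-weight extras

for label, w in [("uniform only", uniform), ("uniform + tapered extras", tapered)]:
    s = np.linalg.svd(gridded_operator(w, support), compute_uv=False)
    s_nz = s[s > 1e-10 * s[0]]
    print(label, ": highest / lowest non-zero eigenvalue =", s[0] / s_nz[-1])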