[alma-config] Re: UV sampling metric

John Conway jconway at oso.chalmers.se
Mon Feb 7 09:08:05 EST 2000


Hi,

RE: The fraction of uv cells filled used as a uv plane metric.

My view is, I think, similar to Mark Holdaway's as expressed
in the email included below (it's way down at the bottom
of this message). While it is useful to
define uv plane (or dirty beam) metrics for various purposes,
I don't think any of them can ever be the 'magic bullet' that
fully characterises the likely imaging performance of an array
once one applies non-linear deconvolution algorithms.
Only a small fraction of ALMA images will be dirty images;
for the rest we must deconvolve. These deconvolution algorithms are
of necessity non-linear; they have to be in order to create unmeasured
spatial frequencies (think of the 1D signal-processing case: one
has to pass a sine wave through a non-linear device to get
harmonics). The deconvolution algorithms therefore generate new spatial
frequencies which are 'interpolations' or 'extrapolations',
depending on whether they lie amongst the measured uv points or
beyond the edge of the uv coverage. This is a messy and hard-to-define
process, and some may not like it, but we have to live with it, because
that's what non-linear deconvolution does(!), and >95% of ALMA images
will pass through a deconvolution algorithm.
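
To make the 1D harmonics analogy concrete, here is a minimal numpy
sketch (my illustration, not part of the original argument): clipping,
a simple non-linearity, puts power at frequencies the input tone never
contained.

  import numpy as np

  # A pure tone passed through a non-linear device (here, symmetric
  # clipping) acquires power at odd harmonics of the input frequency.
  t = np.arange(1024) / 1024.0
  x = np.sin(2 * np.pi * 8 * t)       # tone in FFT bin 8
  y = np.clip(x, -0.5, 0.5)           # the non-linearity
  spec = np.abs(np.fft.rfft(y))
  print(np.argsort(spec)[-3:][::-1])  # strongest bins: 8, 24, 40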


Now, one reason I think the idea of uniform sampling
is attractive to some is the analogy
with Nyquist sampling of signals. One can think of an image
being non-zero only within a square of angular size \theta.
One can therefore think of the visibility as a function
of u,v being a bandlimited signal with bandwidth \theta/2. If one
then evenly samples the uv plane with super-Nyquist cell spacing of
less than 1/\theta, i.e. with one or more points per 'uv cell',
then the argument goes that one obtains
complete information, and the true image can be recovered with a linear
filtering method. The problem with this argument, of course, is
that the Nyquist criterion says we can completely characterise a
signal of bandwidth B by sampling evenly with \Delta t = 1/(2B)
*and* having an *infinite* number of samples. In 1D this
latter point is often forgotten because the number of samples
is usually very large. The strict consequence of not having an infinite
sequence of samples is that the sampled signal is no longer
bandlimited, so the Nyquist criterion no longer applies.
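
A small numpy illustration of that last point (my sketch, not from the
original): a tone whose full-length transform is a clean spike becomes,
once the record is truncated, a spectrum that leaks power far outside
the original band.

  import numpy as np

  # A bandlimited tone observed over a finite window is no longer
  # bandlimited: truncation convolves its spectrum with a sinc, so
  # power appears far from the tone's frequency.
  n = 4096
  tone = np.sin(2 * np.pi * 100 * np.arange(n) / n)  # exactly bin 100
  window = np.zeros(n); window[:256] = 1.0           # finite record
  spec_full = np.abs(np.fft.rfft(tone))
  spec_trunc = np.abs(np.fft.rfft(tone * window))
  far = np.r_[0:50, 150:len(spec_trunc)]             # >50 bins away
  print(spec_full[far].sum() / spec_full.sum())      # ~0
  print(spec_trunc[far].sum() / spec_trunc.sum())    # clearly non-zero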


In the radio interferometry case we certainly do not sample
out to infinite u and v, and we do not have an infinite number
of uv points, so the direct Nyquist analogy does not apply.
One therefore cannot use linear methods to reconstruct the original
image, because in order to remove artifacts one must
extrapolate beyond the edge of the uv plane. To see this,
consider the uv coverage which is 'perfect' by the uv cell
occupancy argument, in which all cells within a circular region
are sampled by one uv point. The dirty map will be the true
image convolved with the FT of a circular top hat, i.e. J_{1}(r)/r,
where J_{1}(r) is a Bessel function. The dynamic range of
this dirty image is about 10:1, and it is a pretty bad reconstruction.
One can taper to reduce the sidelobes, but this loses a lot of
sensitivity. If one does not want to lose sensitivity, the
only way to improve the image is to try to estimate the uv cells
beyond the uv coverage edge (i.e. extrapolate, that horrible
word again, but there is no choice: the problem is clearly not one of
incomplete coverage within the circular boundary!). Only non-linear
algorithms can do the needed extrapolation. These
algorithms (MEM, CLEAN, whatever) utilize whatever a priori
information they have (limited support, positivity, etc.) in doing this
extrapolation. How successful the algorithm is in extrapolating
is a complex function of the algorithm used, the image, and
the uv coverage.
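
The ~10:1 figure is easy to check numerically; a short scipy sketch
(mine, not from the original email):

  import numpy as np
  from scipy.special import j1

  # The dirty beam of a uniformly filled disc of uv coverage is the
  # Airy pattern 2 J1(r)/r.  Its first sidelobe is ~13% of the peak,
  # i.e. a dirty-image dynamic range of order 10:1.
  r = np.linspace(1e-6, 20, 200000)
  beam = 2 * j1(r) / r                 # normalised: beam -> 1 as r -> 0
  first_null = r[np.argmax(beam < 0)]  # first zero crossing, r ~ 3.83
  print(np.abs(beam[r > first_null]).max())  # ~0.132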


Leonia and I (see http://www.oso.chalmers.se/~jconway/ALMA/SIMULATIONS/)
and Mark (and I guess others) have done imaging
simulations comparing uniform-coverage arrays with arrays which
have a 'natural taper', i.e. decreasing uv point density with
uv radius. In all cases the tapered arrays perform better (provided,
and this is an important point, both arrays have the same
short-spacing coverage). This occurs not just in the case where
the input images are point-dominated but in most typical images.
My view of this is that the low-density
'outlier' points constrain the extrapolation of the densely (almost
every 1/\theta) sampled inner part. From the image-domain point of view
one can simply say it is much better to start with a low-sidelobe
beam than with the J_{1}(r)/r beam.
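
The image-domain version of the claim can be illustrated with a toy
FFT comparison (my sketch; the grid and widths are arbitrary values,
not an ALMA configuration):

  import numpy as np

  # Dirty-beam peak sidelobe for a sharp-edged (top-hat) uv coverage
  # versus a Gaussian-tapered coverage of comparable resolution.
  n = 512
  u = np.fft.fftfreq(n)[:, None]
  v = np.fft.fftfreq(n)[None, :]
  r2 = u**2 + v**2
  tophat = (r2 < 0.15**2).astype(float)   # filled disc, abrupt edge
  gauss = np.exp(-r2 / (2 * 0.10**2))     # smoothly tapered edge

  def peak_sidelobe(coverage, blank=5):
      beam = np.abs(np.fft.fftshift(np.fft.ifft2(coverage)))
      beam /= beam.max()
      c = n // 2
      beam[c-blank:c+blank+1, c-blank:c+blank+1] = 0  # blank main lobe
      return beam.max()

  print(peak_sidelobe(tophat))  # ~0.1: the J1(r)/r sidelobes
  print(peak_sidelobe(gauss))   # orders of magnitude lower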



My personal belief is that image quality will depend on three quantities
of the uv coverage, in decreasing order of importance:

1) The range of baselines sampled. (This was one reason that rings looked
promising early on; they tend to give more short spacings, and this is
still the best argument for rings in my view. However, the competing
geometries can be made competitive by adding more short spacings.)

2) A naturally tapered edge to the uv coverage, to aid extrapolation (how
tapered it should be is a question left to be decided).

3) 'Uniform' sampling of the uv plane. This is, strictly speaking,
incompatible with feature 2! Still, it is probably the case that
ideally the central core of the uv coverage should be sampled
with one uv point approximately every 1/\theta or less, since this
is the scale on which the visibilities change.

I think having a metric to evaluate point 3 is a useful tool in
our arsenal when designing arrays (a sketch of such a metric is given
below). The trade-off between the above
factors will depend on the image type and the array; it is possible
that in some cases point 3 has more importance. However, I don't
believe that a metric evaluating point 3 alone can be
used to give a complete evaluation of array performance;
only imaging simulations run through the deconvolution algorithms
can do that.
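
For concreteness, a minimal sketch of the kind of cell-occupancy
metric under discussion (my own illustration; the actual definitions
used in this thread may differ in detail):

  import numpy as np

  # Fraction of uv cells of side 1/theta (within |u|,|v| < umax) that
  # contain at least one visibility sample.  u, v, umax in
  # wavelengths; theta is the field/source size in radians.
  def cell_occupancy(u, v, umax, theta):
      cell = 1.0 / theta
      nbins = int(np.ceil(2 * umax / cell))
      edges = np.linspace(-umax, umax, nbins + 1)
      counts, _, _ = np.histogram2d(u, v, bins=[edges, edges])
      return (counts > 0).mean()

  # e.g. a hypothetical Gaussian-tapered snapshot coverage:
  rng = np.random.default_rng(0)
  u = rng.normal(scale=3e5, size=2000)  # baselines in wavelengths
  v = rng.normal(scale=3e5, size=2000)
  print(cell_occupancy(u, v, umax=1e6, theta=1e-4))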


   John.


P.S. Note that because the deficiencies of the uniform array arise from
its abrupt uv edge, and hence large sidelobes, the question of
its performance in mosaicing versus non-mosaicing cases is, I think,
second order; mosaicing only helps us fill the uv plane more densely,
not extrapolate or reduce the sharpness of the uv coverage edge.

On Sun, 6 Feb 2000, Mark Holdaway wrote:
> 
> I think there is a general lack of understanding on the relationship
> between complete Fourier Plane coverage and our imaging algorithms
> and the quality of images they produce.  Keto's "most uniform coverage"
> and Woody's arguments for "complete coverage" are conceptually very nice,
> but do not necessarily lead to better images given the imaging algorithms
> we use today.  
> 
> It has been estimated that between 25% and 75% of ALMA observations
> will be mosaiced, depending on who you talk to.  While the complete
> Fourier plane coverage is even more compelling for mosaicing observations
> (because you have no support constraint -- ie, the field is full of
> emission, so you need something like complete Fourier plane coverage),
> it is still unclear that these configurations will produce superior
> images for mosaicing.
> 
> One thing which I found helped the uniform coverage arrays was a
> taper in the Fourier plane -- which made a more gaussian coverage,
> and the PSF sidelobes were reduced, providing improved imaging -- but
> to get the better imaging, you need to throw away a LOT of sensitivity and
> resolution in the taper.
> 
> 
> 	-Mark
> 
> 



