[alma-config] Re: UV sampling metric

Bryan Butler bbutler at aoc.nrao.edu
Mon Feb 7 14:05:49 EST 2000



all,

i was a bit confused by some parts of john's email from today,
and have some thoughts on a few of the points in it.

>Only a small fraction of ALMA images will be dirty images, 
>for the rest we must deconvolve. 

and, later...

> and >95% of ALMA images will pass through a deconvolution algorithm.

it's not clear to me that these statements are correct (that only a 
small fraction of ALMA images will be dirty images).  it seems to me 
that with the proper design, we might actually be in a situation where 
a large fraction of ALMA images would be dirty (or quasi-dirty, or 
"directly reconstructed" or whatever you want to call them) images.

>The strict consequence of not having an infinite
>sequence of samples is that the sampled signal is no longer
>bandlimited, so the Nyquist criterion no longer applies.

yes, strictly this is true, but it was my understanding that the signal 
processing folk had shown that in many (if not most) cases, you can do 
things which essentially make up for this.  of course, one of the 
favorites is to taper (with an attendant sensitivity loss), but there are 
tricks like equalizing filters (which also suffer some sensitivity loss,
though maybe not as severe as tapering?), if i recall correctly.
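
to put a rough number on that sensitivity loss, here is a little numpy
sketch (mine, not anything from john's mail - the uniformly filled u-v
disk and the taper widths are just illustrative) which computes the
point-source sensitivity of a gaussian taper relative to natural
weighting, assuming equal-noise visibilities:

    import numpy as np

    # sensitivity penalty of a gaussian u-v taper relative to natural
    # weighting.  for N visibilities with equal noise and imaging weights
    # w_i, the point-source sensitivity relative to natural weighting is
    #   eta = sum(w) / sqrt(N * sum(w**2)),
    # which is 1 for constant w and < 1 for any taper.
    rng = np.random.default_rng(0)
    N = 100_000
    u_max = 1.0
    r = u_max * np.sqrt(rng.uniform(size=N))     # uniformly filled u-v disk

    for fwhm in (0.25, 0.5, 1.0, 2.0):           # taper FWHM in units of u_max
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        w = np.exp(-0.5 * (r / sigma) ** 2)      # gaussian taper weights
        eta = w.sum() / np.sqrt(N * (w ** 2).sum())
        print(f"taper FWHM = {fwhm:4.2f} * u_max -> sensitivity factor {eta:5.3f}")

(for these made-up numbers a gentle taper costs only a few percent,
while a taper much narrower than the coverage throws away most of the
point-source sensitivity.)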

>consider the uv coverage which is 'perfect' by the uv cell 
>occupancy argument, in which all cells within a circular region 
>are each sampled by one uv point. The dirty map will be the true
>image convolved with the FT of a circular top hat, i.e. J_{1}(r)/r
>where J_{1}(r) is a Bessel function. The dynamic range of
>this dirty image is about 10:1 and it's a pretty bad reconstruction.

i don't agree with this argument (i don't think that the truncation 
of the u-v samples means that the DR is limited to 10:1).  let me see 
if i can explain my conceptual understanding of this.
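
just to pin down the numbers in the quoted argument, here is a quick
scipy sketch (mine; i use the normalized form 2 J1(r)/r so the peak is
1, and the sample radii are arbitrary).  the near-in sidelobe of the FT
of a filled circular disk is indeed about 13%, but it falls off quickly
away from the source, which is the part i think matters:

    import numpy as np
    from scipy.special import j1

    # response of a uniformly filled circular u-v disk (the "top hat"
    # above), normalized so the peak is 1:  B(r) = 2 J1(r) / r,
    # first null at r ~ 3.83.
    r = np.linspace(1e-6, 60.0, 200_001)
    beam = 2.0 * j1(r) / r

    # peak sidelobe beyond a given radius: the near-in sidelobe is ~13%,
    # but the envelope drops roughly as r**-1.5, so far from a source the
    # contamination is much smaller than the near-in "10:1" number.
    for rr in (3.9, 15.0, 45.0):
        psl = np.max(np.abs(beam[r > rr]))
        print(f"max |B| beyond r = {rr:4.1f} : {psl:.4f}")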

consider the case of "perfect uniform sampling", where u-v samples are 
exactly evenly spaced delta fns (your sampling function is a "bed of 
nails" or "comb" fn) out to some maximum spacing Umax (ignore the 
broadening of the delta fns by the antenna aperture voltage pattern for 
now).  in the case that Umax -> infinity, the dirty image (as 
traditionally defined) is just the true sky brightness convolved with 
another comb function, whose spacing is proportional to 1 over the 
spacing of the u-v samples.  in this case, we can recover the sky 
brightness distribution on all scales larger than the image plane comb 
spacing exactly.  
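
here is a tiny 1-d fft sketch of that statement (mine; the array sizes
and the single gaussian "source" are arbitrary).  sampling the
visibilities on a regular comb makes the dirty image exactly periodic -
i.e., the true sky convolved with an image plane comb - with each
replica scaled down by the sampling fraction:

    import numpy as np

    # 1-d illustration: sampling the visibilities on a regular grid of
    # spacing du makes the dirty image equal to the true sky convolved
    # with a comb of spacing 1/du (periodic replicas).
    N = 4096
    x = np.arange(N)
    sky = np.exp(-0.5 * ((x - 300) / 5.0) ** 2)   # one compact "source"

    vis = np.fft.fft(sky)                         # "true" visibilities
    k = 8                                         # keep every k-th u-v sample
    comb = np.zeros(N)
    comb[::k] = 1.0
    dirty = np.fft.ifft(vis * comb).real          # dirty image

    # the dirty image is periodic with period N/k = 512 pixels (the
    # image plane comb), and each replica carries 1/k of the true peak
    period = N // k
    err = np.max(np.abs(dirty - np.roll(dirty, period)))
    print("periodicity error        :", err)
    print("replica peak / true peak :", dirty.max() / sky.max())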

now, what happens when Umax is finite?  well, the little delta 
functions in the image space comb function each get replaced by a 
little Lambda function (J1[x]/[x/2] is a Lambda function).  these 
little Lambda fns have a characteristic width which is proportional to 
1 over Umax, i.e., in most configurations, their width is much smaller 
than the spacing between them.  now, a reconstructed image will have 
artifacts which are due to this.  consider a reconstructed image with a 
pixel spacing which is the same as the image plane comb spacing.  the 
maximum "contamination" of an adjacent pixel would be the value of the 
Lambda function evaluated at that adjacent pixel.  since the 
characteristic width of the Lambda fns is much smaller than the pixel 
spacing, the contamination will be very small (<< 0.1).  so it doesn't 
seem to me that the DR will be limited to 10:1.

in fact, it seems to 
me that as long as you are willing to live with a penalty in minimum 
recoverable image plane spatial scale (you can't get structure all the 
way down to the scale of the spacing between the image plane comb 
locations, but rather only down to the scale which is the convolution of
that with a few times the characteristic width of the Lambda fn), then 
you can get very good image reconstruction again.  an analog is antenna
holography, where they oversample by 10% or so (which would be 
equivalent in our case to accepting a resolution which is 10% worse 
than that defined by the maximum spacing), but the reconstruction is 
very good.  the achievable DR is a function of the spatial scale in 
this case, but i imagine that it is much better than 10:1 on all 
scales.  a practical example is that we make images with untapered VLA 
data (which is most certainly not complete in the u-v plane) which have 
DR >> 10:1...
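
to put a number on that "contamination" of the adjacent comb position,
here is a short scipy sketch (mine; M, the number of u-v cells across
the disk radius, is just a free parameter) which evaluates the
2 J1(x)/x response one comb spacing away:

    import numpy as np
    from scipy.special import j1

    # for a filled circular u-v disk of radius u_max the point response
    # is B(theta) = 2 J1(2*pi*u_max*theta) / (2*pi*u_max*theta), B(0) = 1.
    # the image plane comb spacing is 1/du, so with M = u_max/du cells
    # across the disk radius, the adjacent comb position sits at
    # argument x = 2*pi*M, deep in the sidelobes.
    def beam(x):
        return 2.0 * j1(x) / x

    for M in (5, 10, 20, 50):        # u-v cells across the disk radius
        leak = abs(beam(2.0 * np.pi * M))
        print(f"M = {M:3d}: |B| one comb spacing away ~ {leak:.2e}")

even for quite modest M the leakage is well below 1%, comfortably
consistent with the "<< 0.1" above.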

tapering in the u-v plane simply replaces the little Lambda fns with
a different function.  e.g., for a gaussian u-v taper, just replace
the little Lambda fn with a little gaussian fn.  the tradeoff in
different tapers is between their width and their sidelobe level 
(to first order - another factor is the sidelobe level dropoff).  
pillbox tapering (resulting in Lambda fns) gives the narrowest width,
but bad sidelobes.  gaussian tapering gives relatively small 
"sidelobes", but at the expense of a wide "central lobe" (and slow
rolloff of the "sidelobes").  there is a whole industry built up around 
the study of tapers - you can go as deep as you want...
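
as a concrete (1-d) illustration of that width-versus-sidelobe
tradeoff, here is a little numpy sketch (mine; the aperture size, the
zero padding and the particular gaussian width are arbitrary) comparing
a pillbox with a gaussian taper:

    import numpy as np

    # 1-d stand-in for the radial case: main lobe width versus peak
    # sidelobe level for a pillbox and for a gaussian taper.
    M = 256                                  # samples across the "aperture"
    N = 16 * M                               # zero-padded transform size
    u = np.arange(M) - (M - 1) / 2.0

    def beam_stats(taper):
        w = np.zeros(N)
        w[:M] = taper
        b = np.abs(np.fft.fft(w))[: N // 2]  # one side of the symmetric beam
        b /= b[0]
        half_width = np.count_nonzero(b >= 0.5)   # ~FWHM/2 in image pixels
        i = 1
        # walk out to the edge of the main lobe (first local minimum)
        while i + 1 < b.size and b[i + 1] < b[i]:
            i += 1
        return 2 * half_width, b[i:].max()        # ~FWHM, peak sidelobe

    for name, taper in [("pillbox ", np.ones(M)),
                        ("gaussian", np.exp(-0.5 * (u / (M / 6.0)) ** 2))]:
        fwhm, psl = beam_stats(taper)
        print(f"{name}: FWHM ~ {fwhm:3d} pixels, peak sidelobe ~ {psl:.3f}")

with these numbers the pillbox gives the narrow main lobe and a ~22%
first sidelobe, while the gaussian gives a main lobe a couple of times
wider with the sidelobes pushed way down.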

part of the problem with these things is that current deconvolution
algorithms want the pixel size to be very small compared to the true
resolution (the image plane comb spacing), and so the spacing between
pixels gets small enough that pixels start to bump into the sidelobes
of the response fns of adjacent pixels.  it seems to me that with
enough intelligence we could get around this (see later points).
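
here is roughly what i mean, using the same 2 J1(x)/x response as above
(mine again; M = 20 u-v cells across the disk radius is arbitrary):
pixels only a few percent of a comb spacing apart sit well inside each
other's response, while pixels a full comb spacing apart do not:

    import numpy as np
    from scipy.special import j1

    # overlap between the responses of pixels separated by a fraction of
    # the image plane comb spacing, for a filled u-v disk with M cells
    # across the radius (the argument at one comb spacing is 2*pi*M).
    def beam(x):
        return 2.0 * j1(x) / x

    M = 20
    x_comb = 2.0 * np.pi * M
    for frac in (0.02, 0.05, 0.1, 1.0):   # offset / comb spacing
        leak = abs(beam(frac * x_comb))
        print(f"offset = {frac:4.2f} * comb spacing: |B| ~ {leak:.3f}")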

>One can taper to reduce sidelobes but this loses a lot of 
>sensitivity. If one does not want to lose sensitivity the 
>only way to improve the image is to try to estimate the uv cells
>beyond the uv coverage edge (i.e. extrapolate, that horrible 
>word again - but there is no choice, the problem is clearly not one of
>incomplete coverage within the circular boundary!). Only non-linear 
>algorithms can do the needed extrapolation. These algorithms
>(MEM, CLEAN, whatever) utilize whatever 'a priori' information of
>limited support, positivity etc. in doing this extrapolation.
>How successful the algorithm is in extrapolating 
>is a complex function of the algorithm used, the image and 
>the uv coverage.

i think that it's a bit strong to state that the *only* way to improve 
the image is to attempt extrapolation.  while in effect this may be 
what current imaging algorithms are doing (though i'm not even really 
sure of that), it is not clear to me that it is the only way to 
do it.  

my feeling is that we would need a better imaging formulation to handle 
the case of nearly complete u-v coverage, one which follows more 
traditional signal processing techniques.  one of our current problems 
is that we are tainted (in some sense) by the past.  radio 
interferometry has always been about sparse sampling of the u-v plane, 
and in that case it turns out that non-linear deconvolutions work very 
well.  maybe it is time to try to get entirely beyond that stage with 
ALMA?

basically, i'm having a very hard time imagining how complete u-v 
coverage could *not* be the absolute best that you could do.  this 
discussion of extrapolation and interpolation is a bit of a red herring,
i think.  the interpolation is a relatively well understood problem in
signal processing.  the extrapolation, it seems to me, is a byproduct 
of using the wrong image creation algorithms and software.  


	-bryan




