[alma-config] Re: UV sampling metric

Mark Holdaway mholdawa at cv3.cv.nrao.edu
Mon Feb 7 15:17:35 EST 2000


John said:
> > and >95% of ALMA images will pass through a deconvolution algorithm.
> 

Bryan replied:
> it's not clear to me that these statements are correct (that only a 
> small fraction of ALMA images will be dirty images).  it seems to me 
> that with the proper design, we might actually be in a situation where 
> a large fraction of ALMA images would be dirty (or quasi-dirty, or 
> "directly reconstructed" or whatever you want to call them) images.

Mark asserts:
True: but to accomplish this, we need an intrinsically tapered
(u,v) sampling envelope or, equivalently, minimum-sidelobe PSFs.
Not many are arguing against this, but it does argue against
"uniform" coverage.

John said:
> >consider the uv coverage which is 'perfect' by the uv cell 
> >occupancy argument, in which all cells within a circular region 
> >are sampled by one uv point. The dirty map will be the true
> >image convolved with the FT of a circular top hat, i.e. J_{1}(r)/r
> >where J_{1}(r) is a Bessel function. The dynamic range of
> >this dirty image is about 10:1 and it's a pretty bad reconstruction.
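
For reference, the number John quotes can be checked directly (a sketch I added; the normalization 2*J1(x)/x is the standard Airy pattern for uniformly filled circular coverage):

```python
# Sketch: PSF of uniformly filled circular (u,v) coverage is the Airy
# pattern 2*J1(x)/x; its first sidelobe is ~13% of the peak, i.e. the
# dirty image is good to roughly 10:1.
import numpy as np
from scipy.special import j1

x = np.linspace(1e-6, 20.0, 20000)
psf = 2.0 * j1(x) / x                      # normalized so psf -> 1 as x -> 0
first_null = x[np.argmax(psf < 0)]         # first zero crossing (~3.83)
peak_sidelobe = np.max(np.abs(psf[x > first_null]))
print(f"first null at x = {first_null:.2f}")
print(f"peak sidelobe = {peak_sidelobe:.3f}")   # ~0.13 of the peak
```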

Bryan replied:
> i don't agree with this argument, (i don't think that the truncation 
> of the u-v samples means that the DR is limited to 10:1).  let me see 
> if i can explain my conceptual understanding of this.
> 
> consider the case of "perfect uniform sampling", where u-v samples are 
> exactly evenly spaced delta fns (your sampling function is a "bed of 
> nails" or "comb" fn) out to some maximum spacing Umax (ignore the 
> broadening of the delta fns by the antenna aperture voltage pattern for 
> now).  in the case that Umax -> infinity, then the dirty image (as 
> traditionally defined) is just the true sky brightness convolved with 
> another comb function, whose spacing is proportional to 1 over the 
> spacing of the u-v samples.  in this case, we can recover the sky 
> brightness distribution on all scales larger than the image plane comb 
> spacing exactly.  
> 
> now, what happens when Umax is finite?  well, the little delta 
> functions in the image space comb function each get replaced by a 
> little Lambda function (J1[x]/[x/2] is a Lambda function).  these 
> little Lambda fns have a characteristic width which is proportional to 
> 1 over Umax, i.e., in most configurations, their width is much smaller 
> than the spacing between them.  now, a reconstructed image will have 
> artifacts which are due to this.  consider a reconstructed image with a 
> pixel spacing which is the same as the image plane comb spacing.  the 
> maximum "contamination" of an adjacent pixel would be the value of the 
> Lambda function evaluated at that adjacent pixel.  since the 
> characteristic width of the Lambda fns is much smaller than the pixel 
> spacing, the contamination will be very small (<< .1).  so it doesn't 
> seem to me that the DR will be limited to 10:1. 
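
Bryan's leakage estimate is easy to make concrete. A sketch (the cell count N = 100 and the convention that the PSF is Lambda(2*pi*Umax*theta) are my assumptions): with gridded coverage of spacing du out to Umax = N*du, the image-plane comb spacing is 1/du, so the adjacent comb tooth sits at Lambda argument 2*pi*N.

```python
# Sketch (assumed geometry): leakage of the Lambda-function PSF at the
# adjacent image-plane comb location, vs. its own first sidelobe.
import numpy as np
from scipy.special import j1

def lam(x):
    """Lambda function 2*J1(x)/x (unity at x = 0)."""
    return 1.0 if x == 0 else 2.0 * j1(x) / x

N = 100                                     # u-v cells from origin to Umax (assumed)
leak_adjacent_comb = abs(lam(2.0 * np.pi * N))
leak_first_sidelobe = abs(lam(5.136))       # first sidelobe of 2*J1(x)/x
print(f"leak at the adjacent comb tooth: {leak_adjacent_comb:.1e}")   # << 0.1
print(f"first sidelobe of the PSF:       {leak_first_sidelobe:.3f}")  # ~0.13
```

The adjacent-comb leakage really is tiny, as Bryan says; the ~13% figure is the sidelobe of the PSF itself, which is where Mark's objection below comes in.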

Mark asserts:
The Lambda function is the PSF, no?
Its characteristic width is set by the oversampling of the beam in
image space, typically 3 pixels, so you also have to contend with the
sidelobes.  The first sidelobe is something like 15%.
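
A sketch of Mark's point (the 3-pixel oversampling figure is his; evaluating the PSF one pixel off the peak is my illustration): at realistic pixel scales the Lambda function is nowhere near negligible at the adjacent pixel.

```python
# Sketch (assumed numbers): with the main lobe spanning ~3 pixels, the
# Lambda-function PSF at the *adjacent pixel* is a large fraction of the peak.
from scipy.special import j1

first_null = 3.8317                  # first zero of 2*J1(x)/x
x_adj = first_null / 3.0             # adjacent pixel, ~3 pixels across the lobe
leak = 2.0 * j1(x_adj) / x_adj
print(f"adjacent-pixel response: {leak:.2f} of the peak")   # ~0.8
```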

Bryan further asserts:
> in fact, it seems to 
> me that as long as you are willing to live with a penalty in minimum 
> recoverable image plane spatial scale (you can't get structure all the 
> way down to the scale of the spacing between the image plane comb 
> locations, but rather only down to the scale which is the convolution of
> that with a few times the characteristic width of the Lambda fn), then 
> you can get very good image reconstruction again.  

Mark retorts:
I think that means tapering the resolution to the extent that the
positive and negative sidelobes blur together, meaning you've thrown
away something like half your resolution, 3/4 of your baselines, and
half your sensitivity.
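
Mark's three fractions follow from the same back-of-envelope scaling (a sketch; the factor-of-two taper is his premise):

```python
# Sketch (assumed back-of-envelope): cost of tapering uniformly filled
# circular (u,v) coverage down to half its radius to wash out the sidelobes.
resolution_kept = 0.5                      # Umax -> Umax/2: half the resolution
baselines_kept = resolution_kept ** 2      # filled uv area scales as Umax^2
sensitivity_kept = baselines_kept ** 0.5   # noise goes as 1/sqrt(Ndata)
print(baselines_kept, sensitivity_kept)    # 0.25 0.5
```

i.e. keeping 1/4 of the baselines discards 3/4 of them, and sqrt(1/4) leaves half the point-source sensitivity.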

Bryan hypothesizes:
> basically, i'm having a very hard time imagining how complete u-v 
> coverage could *not* be the absolute best that you could do.  This 
> discussion of extrapolation and interpolation is a bit of a red herring,
> i think.  the interpolation is a relatively well understood problem in
> signal processing.  the extrapolation, it seems to me, is a byproduct 
> of using the wrong image creation algorithms and software.  

Mark philosophizes:
We are in a bind.  True, the ALMA will spawn a variety of imaging
algorithms that will improve the imaging and calibration; but we
cannot design an array based on the characteristics of some unknown
algorithm which meets some abstract criteria.  On the other hand, it is
no better to design the ALMA to work well only for our current,
inferior algorithms.  How far are you willing to guess we will progress?





