[alma-config] Re: UV sampling metric

John Conway jconway at oso.chalmers.se
Tue Feb 8 05:38:35 EST 2000


Hi,

 I think this is a very useful discussion. I'm surprised
at the variety of views, but each viewpoint helps in
understanding the full picture; the discussion gets to
the heart of a number of matters which are worth
settling before we decide on an array
style or actual array design.

                John

            
Comments on some of Bryan's comments


>> From Bryan
>> 
> >Only a small fraction of ALMA images will be dirty images, 
> >for the rest we must deconvolve. 
> 
> and, later...
> 
> > and >95% of ALMA image will pass through a deconvolution algorithm.
> 
> it's not clear to me that these statements are correct (that only a 
> small fraction of ALMA images will be dirty images).  it seems to me 
> that with the proper design, we might actually be in a situation where 
> a large fraction of ALMA images would be dirty (or quasi-dirty, or 
> "directly reconstructed" or whatever you want to call them) images.
>

OK, maybe >5% of ALMA images will be dirty images, and it's worth
thinking about this case, but then again I'm not so sure so many will
be made without deconvolution.
The problem is this: if you optimise your sidelobes to a maximum of
3%-5%, you might think you can make 20:1 or 30:1 dynamic range dirty
maps, but this is just the response to a single point; once you
convolve with many similar-brightness points, or have resolved
structure, the achievable dynamic range gets much lower. Also, as Mark
has pointed out, to achieve very low sidelobes for 'dirty imaging'
either a natural taper to the uv coverage or a data weighting taper
(losing a lot of sensitivity) is needed. As Mark says, 'dirty imaging'
applications therefore, if anything, argue against uniform aperture
coverage and for naturally tapered ones.
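
To put rough numbers on this, here is a minimal toy sketch (1-D, with
an invented PSF that has a ~4% sidelobe level, so all values are
illustrative only, not an ALMA simulation): a single point gives a
dirty-map dynamic range of a few tens to one, but as equal-brightness
points are added the overlapping sidelobes quickly pull it down.

# Toy 1-D illustration: dirty-map dynamic range vs number of sources.
# The PSF (narrow main lobe + ~4% sidelobe ripple) is invented for the sketch.
import numpy as np

rng = np.random.default_rng(1)
npix = 4096
x = np.arange(npix) - npix // 2

main = np.sinc(x / 3.0) ** 2                             # toy main lobe
sidelobes = 0.04 * np.cos(2 * np.pi * x / 25.0) * np.exp(-np.abs(x) / 800.0)
psf = main + np.where(np.abs(x) > 6, sidelobes, 0.0)     # ~4% sidelobe level

def dirty_dynamic_range(nsrc):
    """Peak flux over worst off-source residual in the toy dirty map."""
    sky = np.zeros(npix)
    sky[rng.integers(200, npix - 200, nsrc)] = 1.0       # equal-flux points
    dirty = np.convolve(sky, psf, mode="same")
    off_source = dirty.copy()
    for p in np.flatnonzero(sky):                        # blank the main lobes
        off_source[max(0, p - 6):p + 7] = 0.0
    return dirty.max() / np.abs(off_source).max()

for n in (1, 5, 20, 100):
    print(f"{n:4d} sources -> dirty-map dynamic range ~ {dirty_dynamic_range(n):6.1f}:1")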



> >The strict consequence of not  having  an infinite
> >sequence of samples is that the sampled signal is no-longer 
> >bandlimited so the Nyquist criteria non-longer applies.
> 
> now, what happens when Umax is finite?  well, the little delta 
> functions in the image space comb function each get replaced by a 
> little Lambda function (J1[x]/[x/2] is a Lambda function).  these 
> little Lambda fns have a characteristic width which is proportional to 
> 1 over Umax, i.e., in most configurations, their width is much smaller 
> than the spacing between them.  now, a reconstructed image will have 
> artifacts which are due to this.  consider a reconstructed image with a 
> pixel spacing which is the same as the image plane comb spacing.  the 
> maximum "contamination" of an adjacent pixel would be the value of the 
> Lambda function evaluated at that adjacent pixel.  since the 
> characteristic width of the Lambda fns is much smaller than the pixel 
> spacing, the contamination will be very small (<< .1).  so it doesn't 
> seem to me that the DR will be limited to 10:1.  

I have to think more about the above argument in detail.
However, again Mark has made the point that the FT of a circular
aperture is the PSF (the lambda function), which, if one properly
samples without aliasing, always has a main lobe of width 2-3
pixels and then unavoidable sidelobes of 15% a further 2 or 3 pixels out.
Let's have a reality check here: if one simulates a uniformly covered
aperture in IMAGR, what do you get? Well, you get just what you
expect - a PSF which is a lambda function and has large 15% sidelobes
(see the simulation at
http://www.oso.chalmers.se/~jconway/ALMA/SIMULATIONS/). Why has it got
big sidelobes? Because the uv coverage has a sharp edge. Obtaining an
image without such sidelobes obviously means generating an image whose
FT does not abruptly change at the uv coverage edge from a non-zero to
a zero value.
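
For reference, the '15%' number can be checked directly from the
lambda (Airy-amplitude) function itself; here is a minimal sketch in
Python (scipy is used only for the Bessel function):

# First sidelobe of the dirty beam of a uniformly filled circular uv
# coverage, i.e. the lambda function 2*J1(z)/z.
import numpy as np
from scipy.special import j1

z = np.linspace(1e-6, 20.0, 200000)
beam = 2.0 * j1(z) / z                       # PSF cut for a filled uv disc
beyond_null = z > 3.9                        # first null is at z ~ 3.83
first_sidelobe = np.abs(beam[beyond_null]).max()
print(f"first sidelobe level ~ {100 * first_sidelobe:.1f}% of the peak")
# prints ~13%, i.e. the large ~15%-level sidelobes discussed above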

>
> From Bryan 
> 
> i think that it's a bit strong to make the statement that the *only* 
> way to improve the image is to attempt extrapolation.  while in effect 
> this may be what current imaging algorithms are doing (but i'm not even 
> really sure of that), it is not clear to me that it is the only way to 
> do it.  
> 

We are all agreed that if one sampled every cell out to infinity at
regular intervals of 1/theta, one could uniquely reconstruct an image
of size theta with infinite resolution. In the proposed 'perfect'
uniformly filled array with 'complete information' one samples all
the cells out to the uv edge at Umax with a regular spacing of 1/theta
- we can all agree that in this case the *only* information that is
missing is that beyond the uv edge. When one
makes a dirty image from this we find it has bad sidelobes which we
want to reduce. We have two choices. One is the linear process of
tapering, which reweights the data and thus loses sensitivity.
However, if we want to retain full sensitivity we must keep the natural
weights of each cell - so we cannot reweight. There is then only one
option, which is to estimate the uv cells which we didn't measure
(it is the lack of information about these cells, after all, that is
causing the large sidelobes). To the process of estimating the uv
cells beyond the uv coverage edge I give the name 'extrapolation';
maybe others could suggest a better name for this process? QED -
reducing sidelobes without reweighting and losing sensitivity requires
extrapolation.
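
A minimal 1-D sketch of the two options, under toy assumptions (uv
cells sampled uniformly out to Umax with equal natural weights; the
Gaussian taper width is an arbitrary choice): keeping natural weights
keeps full sensitivity and the sharp-edge sidelobes, while tapering
the weights suppresses the sidelobes at a clear sensitivity cost.

# 1-D toy: sidelobe level and point-source sensitivity for natural
# weighting of a sharp-edged coverage versus a Gaussian taper.
import numpy as np

nuv = 512
u = np.linspace(-1.0, 1.0, nuv)              # uv samples in units of Umax
theta = np.linspace(0.0, 6.0, 6001)          # image angle in units of 1/Umax

def sidelobe_and_sensitivity(w):
    beam = (w[None, :] * np.cos(2.0 * np.pi * np.outer(theta, u))).sum(axis=1)
    beam /= beam.max()
    neg = beam < 0.0
    first_null = np.argmax(neg) if neg.any() else beam.size - 1
    sidelobe = np.abs(beam[first_null:]).max()
    # sensitivity of the weighted sum relative to natural (equal) weights
    sens = w.sum() / np.sqrt(len(w) * (w ** 2).sum())
    return sidelobe, sens

for name, w in [("natural weights, sharp edge", np.ones(nuv)),
                ("Gaussian taper, sigma = 0.3 Umax", np.exp(-(u / 0.3) ** 2 / 2.0))]:
    sl, sens = sidelobe_and_sensitivity(w)
    print(f"{name:32s}: peak sidelobe {100 * sl:5.1f}%,"
          f" relative sensitivity {100 * sens:5.1f}%")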

Another way of looking at this: if one has a uv coverage with a sharp
edge then one gets a PSF which is a lambda function and has bad
sidelobes. We want to give the astronomer a final image without these
sidelobes; it is clear therefore that this final image must have an FT
which does not suddenly go from a non-zero value within the uv
coverage edge to a zero value beyond it (or else it would be the dirty
map!). The final best-estimate image must instead have a non-zero
value beyond the Fourier-plane edge - hence, if one does not taper,
the price of removing the near-in sidelobes must be extrapolation into
unmeasured parts of the uv plane.


> my feeling is that we would need a better imaging formulation to handle 
> the case of nearly complete u-v coverage, which followed more 
> traditional signal processing techniques.  one of our current problems 
> is that we are tainted (in some sense) by the past.  radio 
> interferometry has always been about sparse sampling of the u-v plane, 
> and in that case it turns out that non-linear deconvolutions work very 
> well.  maybe it is time to try to get entirely beyond that stage with 
> ALMA?

I have just finished lecturing a course on Image Processing where
we discuss image restoration techniques in a number of fields.
The problem of estimating the Fourier transform beyond a sharp
edge (superresolution) is a very difficult one in general.
The only linear algorithm I know of tries to do the needed
extrapolation using just the analyticity of the signal, i.e. knowing
all the derivatives of the function near the edge one can use a Taylor
series to extrapolate; in practice, however, this needs extraordinarily
high SNR to be effective at all. Non-linear algorithms like MEM
which enforce a priori criteria do best at extrapolation, but
even these can usually extrapolate reliably only 30% or so.
Nobody would argue that the algorithms we have in radio astronomy
are optimum, but there is a fundamental limit to what any such
algorithm can achieve; beyond a certain point we are asking them to do
the impossible by guessing the unknown. Experience from other fields
shows the limitations for superresolution are around the 30%-50% mark;
we cannot expect a 'white knight' algorithm from another field to give
us a magic solution. Given this situation it seems reasonable to design
the array to help the needed extrapolation by giving the uv plane
pattern a few outlier points in addition to a well-sampled core of the
uv coverage (which is another way of looking at a tapered uv coverage).
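
As a rough illustration of the SNR problem (a toy model, not a real
imaging algorithm): take a smooth Gaussian 'visibility', fit a
low-order polynomial to the last few measured cells as a crude
stand-in for the Taylor-series idea, and extrapolate ~30% beyond the
edge. Even small amounts of noise destroy the extrapolation; all
numbers below are arbitrary illustrative choices.

# Toy demonstration that analyticity-based extrapolation needs very high SNR.
import numpy as np

rng = np.random.default_rng(0)
du = 0.02
u_in = np.arange(0.0, 1.0 + du / 2, du)          # measured uv cells, edge at u = 1
u_out = np.arange(1.0 + du, 1.3 + du / 2, du)    # the ~30% extrapolation region
true = lambda u: np.exp(-(u / 0.6) ** 2)         # toy visibility function

def worst_extrapolation_error(snr):
    # per-cell noise rms = (peak visibility) / SNR
    noise = 0.0 if np.isinf(snr) else rng.normal(0.0, 1.0 / snr, u_in.size)
    vis = true(u_in) + noise
    # 4th-order polynomial through the last 8 cells, evaluated beyond the edge
    coeff = np.polyfit(u_in[-8:] - 1.0, vis[-8:], 4)
    return np.abs(np.polyval(coeff, u_out - 1.0) - true(u_out)).max()

for snr in (np.inf, 1000.0, 100.0):
    print(f"SNR {snr:>6}: worst error beyond the edge = "
          f"{worst_extrapolation_error(snr):8.3f}  (true edge value {true(1.0):.3f})")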


> basically, i'm having a very hard time imagining how complete u-v 
> coverage could *not* be the absolute best that you could do.  This 
> discussion of extrapolation and interpolation is a bit of a red herring,
> i think.  the interpolation is a relatively well understood problem in
> signal processing.  the extrapolation, it seems to me, is a byproduct 
> of using the wrong image creation algorithms and software.  
> 
>

I understand your frustration; somehow a perfectly uniform coverage
seems intuitively right! However, the problem is that uniform coverage
without tapering gives you an estimate of the true sky distribution
convolved with a lambda beam. The astronomers I know are better at
getting astrophysics from maps which are estimates of the true
brightness convolved with a Gaussian beam. The Gaussian beam has an FT
which does not have a sharp edge in the uv plane, hence the
intelligible images we must give to astronomers must have undergone a
process of extrapolation. The questions of interpolation and
extrapolation are not red herrings; I believe on the contrary that
they are fundamental. ANY array has incomplete information to give the
astronomer what he wants - it is incomplete internally, at the edge,
or both - and as I said in my last message, to get that nice image one
nearly always must apply non-linear algorithms which generate new
spatial frequencies.
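
A small numerical check of this, under simple assumptions: take a
Gaussian restoring beam with FWHM matched to the main lobe of the
filled-disc dirty beam (roughly 0.7/Umax). Its Fourier transform at
the coverage edge is still a sizeable fraction of its peak, so a
Gaussian-beam image necessarily contains spatial frequencies beyond
the measured edge.

# FT of a matched Gaussian restoring beam, evaluated at the uv coverage edge.
import numpy as np

umax = 1.0                                   # uv coverage edge (arbitrary units)
fwhm = 0.705 / umax                          # ~ main-lobe FWHM of the filled-disc beam
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

# FT of the beam exp(-theta^2 / (2 sigma^2)) as a function of uv radius u
beam_ft = lambda u: np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * u ** 2)

print(f"Gaussian restoring beam FT at the uv edge: "
      f"{100.0 * beam_ft(umax):.0f}% of its zero-spacing value")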


Now I'm not at all saying that having even uv coverage at 1/theta is
not a desirable property - it certainly is! The problem is that there
are other desirable properties, like having a natural taper
to the uv coverage. Unfortunately, for a finite number of
points we cannot have both desirable properties at the same
time - we must compromise. My feeling is that a uv coverage
with an almost uniformly sampled core with uv points
at spacing 1/theta (where theta is the expected field of view
set by the source size or primary beam), and with perhaps 1/4
of the points as outliers to soften the edge of the uv coverage,
might give the best results (a toy comparison is sketched below).
Sampling the core at 1/theta makes intuitive sense because this is
the scale on which the visibility varies in the uv plane.
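
Here is the toy comparison (not a proposed configuration; N, the core
radius and the outlier fall-off are arbitrary illustrative choices):
the dirty-beam sidelobes of a sharp-edged uniform disc of uv points
versus the same number of points with ~3/4 in a uniform core and ~1/4
as outliers whose density falls off smoothly, softening the edge.

# Toy comparison of dirty-beam sidelobes: sharp-edged uniform uv disc
# versus a uniform core plus ~1/4 outliers that soften the coverage edge.
import numpy as np

rng = np.random.default_rng(3)
N = 2000

def uniform_disc_radii(n, rmax):
    return rmax * np.sqrt(rng.uniform(0.0, 1.0, n))      # uniform areal density

def peak_sidelobe(r):
    """Peak sidelobe of a cut through the dirty beam of a (statistically)
    circularly symmetric coverage with uv radii r (angle in 1/Umax units)."""
    phi = rng.uniform(0.0, 2.0 * np.pi, r.size)
    u = r * np.cos(phi)
    theta = np.linspace(0.0, 5.0, 2500)
    beam = np.cos(2.0 * np.pi * np.outer(theta, u)).sum(axis=1)
    beam /= beam.max()
    neg = beam < 0.0
    first_null = np.argmax(neg) if neg.any() else beam.size - 1
    return np.abs(beam[first_null:]).max()

r_sharp = uniform_disc_radii(N, 1.0)                     # sharp edge at Umax
r_core = uniform_disc_radii(3 * N // 4, 0.7)             # uniform core
r_outl = 0.7 + rng.exponential(0.12, N // 4)             # smoothly decaying outliers
r_soft = np.concatenate([r_core, r_outl])

print(f"sharp-edged uniform disc : peak sidelobe ~ {100 * peak_sidelobe(r_sharp):.1f}%")
print(f"core + 1/4 outliers      : peak sidelobe ~ {100 * peak_sidelobe(r_soft):.1f}%")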


Coming to practicalities, I would argue that the arrays
corresponding to 'D' in the NRAO strawman, in which the telescopes
are almost touching, can't have anything other
than complete uv coverage anyway, so the design issue here
is moot. We can certainly tell NSF that we have complete uv
coverage for this array (which will be the main one used for
mosaicing), and in many cases we will have high
enough SNR to do Gaussian tapering and get the type of images
that Martin Ryle would approve of. For the larger arrays
I expect the problems of uv extrapolation to be more important,
so it is good to design these with some degree of taper. For
a continuous close-pack - Archimedean spiral - log spiral design
(memo 283), for instance, the array naturally evolves from high uv
cell occupancy and a sharp edge for small arrays to highly tapered
coverage for the larger arrays. Yet another reason for having
naturally tapered arrays, especially in fixed array designs
(as pointed out by Mark), is that tapering to achieve an
exact resolution (for line ratio studies etc.) then loses
much less sensitivity.

 John.




