[alma-config] Addressing Frederic's Concerns

David Woody dwoody at ovro.caltech.edu
Tue Oct 4 15:00:18 EDT 2005


Hi All

It is amusing that some discussions never end.
Let me repeat my two cents worth from a few years ago
and add some recent information.

1.  Nyquist

1.1  "Nyquist" as used in these discussions refers to perfect recovery
of perfectly uniformly sampled data with infinite signal to noise.

1.2  The generalized sampling theorem says only that the
average sampling density needs to exceed the Nyquist rate,
again assuming infinite signal to noise.  There are specific
linear reconstruction algorithms for recovering periodic but
non-uniform coverage, e.g. closely spaced pairs of samples
with the pairs themselves uniformly spaced at wider than
the "Nyquist" interval.

1.3  But we always have noise and then reconstruction becomes
complex.  For perfectly uniform coverage and the same noise
at each sample the noise in the recovered image is easy to determine,
but otherwise extensive analysis and/or simulation is required.
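To make 1.1 concrete, here is a minimal numerical sketch (assuming Python with numpy; the test signal and sample grid are invented for the example) of the perfect-recovery claim: a noiseless band-limited signal, sampled uniformly at the Nyquist rate, is recovered essentially exactly by sinc interpolation.

```python
import numpy as np

# A band-limited test signal: np.sinc is sin(pi x)/(pi x), whose spectrum
# is flat over |f| <= 0.5, so unit sample spacing is exactly the Nyquist rate.
signal = lambda t: np.sinc(t - 0.25)

n = np.arange(-200, 201)          # uniform, noiseless samples at T = 1
samples = signal(n)

def sinc_reconstruct(t):
    """Whittaker-Shannon interpolation from the uniform samples."""
    return np.sum(samples * np.sinc(t - n))

# Evaluate off the sample grid: recovery is exact up to truncation error.
err = abs(sinc_reconstruct(0.3) - signal(0.3))
print("reconstruction error:", err)
```

With noise added to the samples, the same interpolation no longer gives a clean answer, which is the point of 1.3.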

2.  The successful imaging algorithms are essentially model fitting.
The best S/N in model fitting is obtained when the UV coverage matches
the UV density of what is being imaged.  This is where we have to
make choices depending upon what you are looking for.  Gaussian
UV coverage is about as unprejudiced as you can get: everyone
loves Gaussian profiles, and they have nice mathematical properties.
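One of those nice properties can be checked in a few lines (a sketch with an invented 64-antenna layout, assuming numpy): if antenna positions are drawn from a 2-D Gaussian, the resulting baseline (UV) distribution is again Gaussian, with sqrt(2) times the spread, because each baseline is the difference of two independent Gaussian positions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 100.0                                   # illustrative position spread (m)
ants = rng.normal(0.0, sigma, size=(64, 2))     # 64 Gaussian-distributed antennas

# All pairwise baselines (u, v) = x_i - x_j for i < j
i, j = np.triu_indices(len(ants), k=1)
baselines = ants[i] - ants[j]

# Difference of two independent N(0, sigma^2) variables is N(0, 2 sigma^2),
# so the measured UV spread should come out near sqrt(2) * sigma ~ 141 m.
print("UV spread:", baselines.std(), "expected ~", np.sqrt(2) * sigma)
```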

3.  The performance of CLEAN type imaging is often dominated by
sidelobe levels and so the sidelobe levels are an excellent measure
of how well these algorithms will work.  If you are interested in the
arcane details of this, I wrote ALMA memos 389 and 390 on the
subject and there are similar memos put out at the same time
by other authors.
Note that I started in the UV coverage school and ended
up in the sidelobe school; I flip-flopped.
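The sidelobe criterion can be sketched numerically as well (illustrative geometry and gridding only; this is not the code from memos 389 and 390): grid the (u,v) points, Fourier transform to get the dirty beam, and read off the peak sidelobe outside the main lobe.

```python
import numpy as np

rng = np.random.default_rng(1)
ants = rng.normal(0.0, 100.0, size=(64, 2))                  # Gaussian layout (m)
i, j = np.triu_indices(len(ants), k=1)
uv = np.concatenate([ants[i] - ants[j], ants[j] - ants[i]])  # Hermitian pairs

# Grid the UV samples onto an N x N grid with 4 m cells
N, cell = 256, 4.0
grid = np.zeros((N, N))
iu = np.round(uv[:, 0] / cell).astype(int) + N // 2
iv = np.round(uv[:, 1] / cell).astype(int) + N // 2
ok = (iu >= 0) & (iu < N) & (iv >= 0) & (iv < N)
np.add.at(grid, (iv[ok], iu[ok]), 1.0)

# The dirty beam (PSF) is the Fourier transform of the UV coverage
beam = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))))
beam /= beam.max()

# Peak sidelobe: largest response outside a small box around the main lobe
c = N // 2
mask = np.ones_like(beam, dtype=bool)
mask[c - 5:c + 6, c - 5:c + 6] = False
peak = beam[mask].max()
print("peak sidelobe level:", peak)
```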

4.  Since the last discussion several more arrays have been built
(SZA) or are partially built (ATA & CARMA).
Time does fly.  These arrays used a combination of
Gaussian coverage with cutoffs (Boone's code) and
my sidelobe minimization algorithm (written in MathCad)
to design their configurations.
The physical constraints of the land and pad costs limit
your options and using both criteria helps prevent major
mistakes.
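As a toy illustration of the sidelobe-minimization idea (this is NOT the MathCad code; the layout, step sizes, and iteration count are all invented), here is a crude hill-climb that nudges one antenna at a time and keeps the move whenever the peak sidelobe of the dirty beam drops:

```python
import numpy as np

def peak_sidelobe(ants, N=128, cell=8.0, guard=4):
    """Peak sidelobe of the dirty beam for a 2-D antenna layout (metres)."""
    i, j = np.triu_indices(len(ants), k=1)
    uv = np.concatenate([ants[i] - ants[j], ants[j] - ants[i]])
    grid = np.zeros((N, N))
    iu = np.round(uv[:, 0] / cell).astype(int) + N // 2
    iv = np.round(uv[:, 1] / cell).astype(int) + N // 2
    ok = (iu >= 0) & (iu < N) & (iv >= 0) & (iv < N)
    np.add.at(grid, (iv[ok], iu[ok]), 1.0)
    beam = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))))
    beam /= beam.max()
    c = N // 2
    beam[c - guard:c + guard + 1, c - guard:c + guard + 1] = 0.0  # blank main lobe
    return beam.max()

rng = np.random.default_rng(3)
ants = rng.normal(0.0, 150.0, size=(16, 2))    # random 16-antenna starting layout
start = peak_sidelobe(ants)
best = start
for _ in range(200):                           # crude random hill-climbing
    trial = ants.copy()
    trial[rng.integers(len(ants))] += rng.normal(0.0, 20.0, size=2)
    s = peak_sidelobe(trial)
    if s < best:
        ants, best = trial, s
print("peak sidelobe:", start, "->", best)
```

The real design codes also have to fold in the terrain and pad-cost constraints mentioned above; the point here is only the shape of the loop.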

The SZA is interesting because it is a dedicated
science array with a well defined goal of imaging SZ
clusters over large patches of the sky, i.e. catalogue
type mosaicing.
The model of a cluster is well defined,
and the inner six antennas are in a tight Gaussian cluster
(if you can use 15 baselines to define a Gaussian).
But because of the problem of point source contamination,
two antennas are put out at several times the inner
cluster diameter just for point source identification.
Although this is a specialized measurement, I think a
lot of ALMA science will be similar, that is covering
many fields looking for a particular kind of object but
needing to remove strong point sources that get in the way.

Note that the SMA has also been operational for several
years.  Its configurations were based upon uniform
UV coverage Reuleaux triangles, but it has never actually
been in such a configuration.  The practicalities of
operating arrays with down antennas, immediate science
goals, etc. make these ideal discrete configurations
untenable.

5.  The continuous reconfiguration of ALMA is an
added complication but as per the above SMA
experience, essential for maximizing science output.

Cheers
David


----- Original Message ----- 
From: "Mark Holdaway" <mholdawa at nrao.edu>
To: "Frederic Boone" <frederic_boone at yahoo.fr>; <alma-config at nrao.edu>
Sent: Tuesday, October 04, 2005 10:05 AM
Subject: [alma-config] Addressing Frederic's Concerns


> Frederic,
>
> Well, this Fourier plane coverage and array configuration style is
> based on a decision we made several years ago which includes
> both image quality measures and operational considerations.
> For this discussion, I will not dwell on the operational issues
> (incremental reconfiguration and incremental resolution).
>
> The array configuration style that optimizes the sampling over
> some range of the (u,v) plane is a ring or Reuleaux triangle.
> HOWEVER, those arrays are markedly inferior at producing
> good images, because they produce beams with large sidelobes.
> Tapering to reduce the sidelobes results in significant sensitivity
> loss and only modestly improved images.
>
> We made a conscious decision to build an array which produces
> non-optimal (from a Nyquist point of view) (u,v) coverage because,
> given the imaging algorithms we have used for the last 30 years,
> this coverage produces better images.  The inner part of the (u,v)
> plane is sampled pretty well, and it is holes in the inner part
> that are the most damaging.  Holes in the outer coverage get larger
> and larger as the (u,v) sampling density gets less and less, but
> these holes don't matter much.  This drop-off in coverage naturally
> leads to a tapered beam with excellent sidelobe properties, which
> results in superior images.
>
> (It is true that next week you may produce an imaging algorithm
> which produces superior images from Nyquist sampled, untapered
> data, but I don't think we can afford to design an array that
> relies upon this as-yet-unwritten algorithm.  That said, I do just
> that below.  I think the difference is that I can see how the
> BELOW-mentioned algorithm will work, and also that algorithm is a
> second-order effect, while the beam sidelobe issue is a first-order
> effect.)
>
> ON THE OTHER HAND, we are still left with the problem that
> mosaicing, or imaging arbitrarily complex objects which fill the
> field, will gradually break down as we go to higher and higher resolution.
>
> I think the answer to that issue is multi-faceted, though this issue
> has never been studied in depth for ALMA:
>    1) At high resolution, most fields will become simpler, as the
>       emission that fills the beam will tend to be low brightness
>       features which will drop below the noise level.  This is like
>       VLBI, where there may be emission which fills the beam, but it
>       can't be seen.  (On the other hand, there will be some sources,
>       like 3C48 in VLBI, with complex structure that observers wish
>       they could image better -- so YES, the design decision which has
>       been made will have some negative consequences.)
>    2) We need to develop a new class of algorithms along the lines of
>       multi-scale.  Let's say we do mosaicing at moderate resolution
>       where we have complete coverage (either in a smaller
>       configuration, or tapered to the point in the (u,v) plane where
>       we have essentially complete coverage).  We make a good image.
>       THEN, we image at full resolution, using this medium resolution
>       image as a starting model.  Instead of imaging at full resolution
>       and full complexity, much of the complexity has already been
>       imaged, and we are only imaging increments away from that smooth
>       but complex image.  SO, many regions in this image will be
>       consistent with being filled with noise, and for the high
>       resolution step will be consistent with being masked -- and the
>       consequence is that we will have fewer pixels to solve for.
>       This is similar to imaging a smaller region of the beam, in
>       which case we use larger cells in the (u,v) plane, and hence we
>       are closer to Nyquist sampled.  OR, we are solving for fewer
>       pixels with the same number of visibilities, and the problem
>       becomes better determined.  Presumably, this sort of imaging
>       strategy for imaging complex structure at high resolution will
>       be explored as ALMA starts making wonderful images with a few
>       dozen antennas -- we'll have some early Nature article which
>       squeezes a top-notch image out of ALMA even though we don't
>       have all the antennas.
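A minimal numerical sketch of the masking step described in point 2 (synthetic sky, invented noise level and threshold; not an ALMA pipeline): a moderate-resolution image lets you flag which pixels are consistent with noise, leaving far fewer unknowns for the full-resolution solve.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
true_sky = np.zeros((N, N))
true_sky[40:60, 40:60] = 1.0                    # one extended emission patch

# Stage 1: a moderate-resolution image with well-understood noise
noise_sigma = 0.1
lowres = true_sky + rng.normal(0.0, noise_sigma, size=(N, N))

# Pixels below 5 sigma are consistent with noise and get masked out
keep = lowres > 5 * noise_sigma

# Stage 2: the full-resolution solve only has the unmasked pixels as unknowns
free_pixels = int(keep.sum())
print(free_pixels, "free pixels out of", N * N)
```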
>
>            -Mark
>
>>I fully agree with you, Mark, and I am concerned about
>>the uv coverage.  Going from 64 to 50 antennas we lose
>>40% of the samples, and this issue becomes more
>>critical.
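(The arithmetic behind that figure, as a quick check: an n-antenna array has n(n-1)/2 instantaneous baselines.)

```python
def n_baselines(n):
    """Distinct antenna pairs, i.e. instantaneous baselines."""
    return n * (n - 1) // 2

loss = 1 - n_baselines(50) / n_baselines(64)
print(n_baselines(64), n_baselines(50), f"{loss:.1%}")  # 2016 1225 39.2%
```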
>>
>>In the design proposed in the memo draft, it seems that
>>in a significant part of the uv-region covered by the
>>configurations greater than 2 km the sampling is far
>>from Nyquist (at least 4 times worse), whereas in
>>principle it could be close to Nyquist for
>>configurations up to 3.5 km.  In other words, it seems
>>that the sampling could be improved by optimizing the
>>configurations for distributions that fall off less
>>rapidly.
>>Is mosaicing of extended sources with the 3.5 km
>>configurations part of the scientific requirements?
>>If yes, does the design presented in the draft meet
>>the requirement?
>>
>>Cheers,
>>Frederic.
>
> _______________________________________________
> Alma-config mailing list
> Alma-config at listmgr.cv.nrao.edu
> http://listmgr.cv.nrao.edu/mailman/listinfo/alma-config
> 




