[alma-config] Comments on Memos 389 and 390 (fwd)

David Woody dwoody at benton.ovro.caltech.edu
Mon Oct 1 16:29:28 EDT 2001


Hi John

You have done an excellent job of summarizing the configuration
issues.  

I particularly like the idea of sequentially applying the Boone
UV optimization followed by a peak sidelobe optimization.  This
is similar to my sequential optimization (near-in to far-out) but
probably much more computationally efficient.
The Boone algorithm gives excellent near-in sidelobes, as it should,
but, as you pointed out, it doesn't drive the far sidelobe peaks
down because it doesn't work on the small-scale features that are
correlated or coherent across the UV-plane.  Applying Kogan's
algorithm would reduce the far sidelobes without any measurable
change to the fit to the desired UV distribution achieved by Boone's
optimization. 
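
To make the second stage concrete, here is a toy Python/numpy sketch
of a peak-sidelobe-reduction step on a zenith snapshot.  It is my
own illustration rather than Kogan's actual implementation (Kogan
uses the analytic gradient; this uses a crude finite difference),
and the antenna count, grid, and step sizes are invented for the
example: the worst sidelobe of the dirty beam is located and every
antenna is nudged down the gradient of that sidelobe's height.

import numpy as np

def snapshot_psf(xy, l, m):
    # Zenith-snapshot dirty beam for antenna positions xy (N x 2, in
    # wavelengths) at direction cosines l, m (equal-shape arrays).
    i, j = np.triu_indices(len(xy), k=1)
    u = xy[i, 0] - xy[j, 0]              # baseline u components
    v = xy[i, 1] - xy[j, 1]              # baseline v components
    arg = 2.0 * np.pi * (l[..., None] * u + m[..., None] * v)
    return np.cos(arg).mean(axis=-1)     # unity at l = m = 0

def worst_sidelobe(xy, l, m, outside_main):
    # Height and flattened grid index of the largest |PSF| value
    # outside the main lobe.
    b = np.abs(snapshot_psf(xy, l, m))
    b[~outside_main] = 0.0
    k = int(np.argmax(b))
    return b.flat[k], k

def sidelobe_step(xy, l, m, outside_main, step=200.0, eps=0.01):
    # One peak-reduction iteration: nudge every antenna down the
    # finite-difference gradient of the current worst sidelobe.  The
    # step size (in wavelengths) is ad hoc; small steps leave the
    # Boone-optimized UV distribution essentially untouched.
    peak, k = worst_sidelobe(xy, l, m, outside_main)
    grad = np.zeros_like(xy)
    for a in range(len(xy)):
        for c in range(2):
            trial = xy.copy()
            trial[a, c] += eps
            moved = np.abs(snapshot_psf(trial, l, m)).flat[k]
            grad[a, c] = (moved - peak) / eps
    return xy - step * grad, peak

# Toy run: 12 antennas thrown at random into a +/-500-wavelength box.
rng = np.random.default_rng(0)
xy = rng.uniform(-500.0, 500.0, size=(12, 2))
axis = np.linspace(-0.02, 0.02, 101)
L, M = np.meshgrid(axis, axis)
outside = np.hypot(L, M) > 0.003         # crude main-lobe exclusion
for _ in range(15):
    xy, _ = sidelobe_step(xy, L, M, outside)
print("peak sidelobe now:", worst_sidelobe(xy, L, M, outside)[0])

Iterating a step like this after a Boone-style UV fit should knock
the far sidelobe peaks down while barely moving the antennas.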

Starting with an initial grand scheme is a good idea.  I only started
with pseudo-random configurations to test the predictions of
memo #389 and to ensure that the results were not prejudiced 
by the initial starting point.  My experience with the 64-antenna
ALMA configurations and the 16-antenna CARMA array shows that
optimization of configurations that started from good initial
guesses or grand schemes produced only minor shifts in the
antennas, i.e. the grand scheme survived.  Even grand schemes
based on 5-armed stars survived surprisingly well, i.e. the arms
clearly remained, with peak sidelobe levels similar to those of
the pseudo-random configurations.

I agree that the snapshot beams give you most of the information,
and good snapshot configurations should also produce good
long-track PSFs if they have been optimized.  Optimization produces
configurations with circular or elliptical distributions with nearly
random baseline lengths, and Earth rotation just improves the
coverage without adding unnecessary redundant baselines.
There is a danger that optimizing just for long tracks could
produce configurations that leave large holes in the snapshot
coverage, e.g. a 1-D array that does very well for 12-hour tracks
on polar sources.  Fortunately the optimizers we have tried do not
usually drive the array in this direction.  I have noticed that my
peak minimization algorithm will at times produce high-ellipticity
configurations.
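
For reference, the ellipticity I watch for can be quantified
straight from the antenna positions; a minimal sketch (the function
name and the principal-axis ratio are my own choices, just one
possible measure):

import numpy as np

def configuration_ellipticity(xy):
    # Ratio of major to minor principal-axis spreads of the antenna
    # positions xy (N x 2); 1.0 is round, larger values flag the
    # elongated configurations mentioned above.
    centered = xy - xy.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))
    return float(np.sqrt(eigvals.max() / eigvals.min()))

rng = np.random.default_rng(3)
demo = rng.normal(size=(64, 2)) * [300.0, 150.0]   # 2:1 elongated array
print(configuration_ellipticity(demo))             # roughly 2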

The reuse of pads is an important parameter, but I am not sure
how to arrive at an objective measure of what level of reuse is
appropriate.  50% reuse for a factor of two in magnification
seems too high.  If you had configurations which uniformly
distributed antennas within a circle of a given radius, then only
25% of the pads for the larger configuration would fall within the
extent of the smaller configuration (the enclosed area scales as
the radius squared, so half the radius contains (1/2)^2 = 25% of
the pads).  50% reuse is possible for centrally condensed
configurations, essentially by only adding new pads beyond the
outermost pads of the smaller configuration.  The reason I favor
minimizing reuse concerns combining configurations for better
imaging over a wider range of scale sizes.  The baselines that are
in common get twice the integration time and produce larger
sidelobes when the data are given their natural (integration time)
weighting.  I would prefer no reuse, but clearly this is an
inefficient use of resources.  Applying weighting to keep the
sidelobes down will decrease the sensitivity.  This can be traded
against the number of pads, reconfiguration time, etc.  Note that
the reconfiguration time can be directly traded against the
sensitivity if we have an estimate of the amount of time we will
actually be combining configurations.  If no combining of
configurations is anticipated, then maximum reuse is what we want.
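
As a rough illustration of that sensitivity trade, here is a toy
calculation of my own (not from either memo), assuming equal-noise
visibility samples, duplicated baselines that carry exactly twice
the integration time, and a forced return to equal weights:

import numpy as np

def equal_weight_efficiency(f):
    # Point-source sensitivity relative to natural weighting when a
    # fraction f of the unique baselines carries twice the integration
    # time (shared pads observed in both configurations) but is forced
    # back to equal weight to suppress the extra sidelobes.  With
    # equal-noise samples, natural SNR ~ sqrt(sum 1/sigma_k^2) and
    # equal-weight SNR ~ N / sqrt(sum sigma_k^2), which reduces to:
    return 1.0 / np.sqrt((1.0 - f / 2.0) * (1.0 + f))

for f in (0.0, 0.25, 0.5, 1.0):
    print(f, round(equal_weight_efficiency(f), 3))
# f = 0.5 (half the baselines duplicated) costs roughly 6% in
# sensitivity; f = 0 and f = 1 recover natural weighting and cost
# nothing.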

But in fact, if we are talking about snapshots or short tracks, then
data redundancy between configurations can be avoided by
scheduling the observations in the two configurations at
different hour angles, which will happen naturally if the time lapse
between configurations is long enough.  Thus reuse and redundant
data are only an issue for long tracks requiring multiple configurations.

Configurations that continuously evolve from one to the next seem
very desirable from an operational point of view, but it will be very
difficult or impossible to ensure the best possible PSF sidelobes, etc.
for all of the intermediate configurations between the optimized ones. 

Random thought:
One approach would be to start with a grand scheme for all of the
pads for all configurations and optimize this as if you had antennas
on all of the pads.  This will ensure minimum redundancy and
correlations for the total array.  Then start removing antennas to
arrive at any particular magnification array.  For each magnification
or effective Gaussian UV width you can explore which set of 64
(or 60) pads is best to use.
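
One way to play with this (purely a sketch with invented numbers;
the master pad list, the 120-pad count, and the Rayleigh target
below are assumptions for the illustration, not anything from the
memos): score a candidate subset of pads by how well its
baseline-length distribution matches the Rayleigh distribution you
would get from a Gaussian antenna distribution of the desired
width, and greedily delete pads from the grand scheme until 64
remain.

import numpy as np

def baseline_lengths(xy):
    i, j = np.triu_indices(len(xy), k=1)
    return np.hypot(xy[i, 0] - xy[j, 0], xy[i, 1] - xy[j, 1])

def uv_mismatch(xy, sigma_ant, edges):
    # Squared difference between the subset's normalized
    # baseline-length histogram and the Rayleigh density expected for
    # antennas drawn from a 2-D Gaussian of rms width sigma_ant
    # (baseline scale sigma_ant*sqrt(2)).
    hist, _ = np.histogram(baseline_lengths(xy), bins=edges, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    s = np.sqrt(2.0) * sigma_ant
    target = centers / s**2 * np.exp(-centers**2 / (2.0 * s**2))
    return float(np.sum((hist - target) ** 2))

def greedy_pad_selection(pads, n_keep, sigma_ant, edges):
    # Put an "antenna" on every pad of the grand scheme, then greedily
    # drop the pad whose removal best matches the target UV width,
    # until n_keep pads remain.
    keep = list(range(len(pads)))
    while len(keep) > n_keep:
        scores = [uv_mismatch(pads[[k for k in keep if k != drop]],
                              sigma_ant, edges)
                  for drop in keep]
        keep.pop(int(np.argmin(scores)))
    return np.asarray(keep)

# Toy numbers: 120 master pads in a 2 km box, keep 64, 300 m rms width.
rng = np.random.default_rng(1)
pads = rng.uniform(-1000.0, 1000.0, size=(120, 2))
edges = np.linspace(0.0, 3000.0, 31)
chosen = greedy_pad_selection(pads, 64, sigma_ant=300.0, edges=edges)
print(len(chosen), "pads kept")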

I am still trying to digest your P.S. about mosaicing.  I think the
discussion about how to define the PSF for full-field imaging in
the last half of section II of memo #389 may be related to this
issue.  That discussion is very terse and may be hard to follow,
i.e. poorly written.  The conclusion I came to is that for a
Gaussian primary beam you need to use an effective primary beam
that is sqrt(2) larger to evaluate the PSF for full-field imaging.
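
My reading, and it is only a guess at where the factor comes from
rather than the memo's derivation, is that this is the usual
Gaussian-convolution factor: convolving a Gaussian primary beam
with itself gives a Gaussian sqrt(2) wider.  A quick numerical
check:

import numpy as np

# Convolve a Gaussian "primary beam" with itself and measure the rms
# width of the result; it grows by sqrt(2).
x = np.linspace(-50.0, 50.0, 2001)
sigma = 3.0
beam = np.exp(-x**2 / (2.0 * sigma**2))
conv = np.convolve(beam, beam, mode="same")
rms = np.sqrt(np.sum(x**2 * conv) / np.sum(conv))
print(rms / sigma)    # approximately sqrt(2) = 1.414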

Cheers
David

****************************************
| Owens Valley Radio Observatory
| P.O. Box 968, 100 Leighton Lane                         
| Big Pine, CA 93513, USA                                  
| Phone 760-938-2075 ext. 111, FAX 760-938-2075
| dwoody at caltech.edu
****************************************



