[alma-config] History
John Conway
jconway at oso.chalmers.se
Thu Feb 10 06:42:42 EST 2000
Hi,
I get the feeling that the discussion about uniform/complete
uv coverage is trailing off a bit as we pause for needed
thought and simulation. However, as I have said in earlier emails,
it's worth getting to the bottom of this one since it's
so fundamental to our problem of choosing a uv coverage for
ALMA. Given this I'll, with trepidation, make a few
more comments - this time of a historical nature.
(I apologize in advance if any of what I say is inaccurate; it has
been reconstructed by me from secondary sources, and those who are older
and were around at the time may have corrections. I also note three people
on the mailing list who have been through Cambridge, England, or are still
there, and my Jodrell Bank origins might bias me(!) - still, I say
mainly nice things.)
Taking a historical view may be useful in understanding the
uv coverage debate. In an earlier email Bryan noted:
> my feeling is that we would need a better imaging formulation to handle
> the case of nearly complete u-v coverage, which followed more
> traditional signal processing techniques. one of our current problems
> is that we are tainted (in some sense) by the past. radio
> interferometry has always been about sparse sampling of the u-v plane,
> and in that case it turns out that non-linear deconvolutions work very
> well. maybe it is time to try to get entirely beyond that stage with
> ALMA?
However, if one goes back to the very beginning of aperture synthesis,
one sees that the development originally ran from arrays with
'complete out to uvmax uv coverage' towards somewhat sparser arrays;
it is interesting to consider why this happened.
RYLE'S APERTURE SYNTHESIS
-------------------------
Pioneering interferometric imaging was conducted in the 50s by
groups in Australia and at Jodrell Bank; however, it was in the 60s
that aperture synthesis came to full flower at Cambridge
under Martin Ryle. In his 'supersynthesis' technique, using an E-W baseline,
Earth rotation and moving the antennas, it was possible to measure
all uv spacings out to uvmax at intervals of a dish diameter
or less, so that the ringlobes were effectively beyond the edge
of the antenna primary beam. This was a very beautiful creation
(and the main reason Ryle later got the Nobel prize). Taking the direct
Fourier transform gave the dirty map; however, these maps had quite severe
sidelobes due to the sharp edge of the uv coverage at uvmax.
These, however, could be very effectively removed
by 'apodising', i.e. applying a gaussian taper to the data. The
resulting image had a well defined resolution and was a
unique inversion of the data; one could be absolutely certain
that the source really looked like this at the defined resolution
(again very beautiful). The only downside was
that for general imaging applications the taper caused you to lose about
a factor of two in both sensitivity and resolution (for simple model
fitting one could make more use of the longer baselines, but that
is not a case of general imaging but of applying the 'a priori' information
that the source has a simple form).
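To make that trade-off concrete, here is a small numerical sketch (a 1-D toy in Python with numpy; the array sizes and taper width are my own illustrative choices, not any real instrument): a filled uv coverage out to uvmax is a top-hat in u, whose transform rings strongly because of the sharp edge, while a gaussian taper suppresses the ringing at the cost of down-weighting the long baselines.

```python
import numpy as np

# A 1-D toy (illustrative only): a filled uv coverage out to uvmax is a
# top-hat in u; the corresponding beam (its Fourier transform) rings at
# roughly the 20% level in 1-D because of the sharp edge. Apodising with
# a gaussian taper suppresses the ringing, at the cost of down-weighting
# the long baselines (sensitivity) and broadening the beam (resolution).

n = 4096
u = np.fft.fftfreq(n)          # uv coordinate (arbitrary units)
uvmax = 0.1

tophat = (np.abs(u) <= uvmax).astype(float)          # complete coverage to uvmax
taper = tophat * np.exp(-(u / (uvmax / 2.0)) ** 2)   # gaussian-apodised coverage

def beam(coverage):
    """Normalised beam corresponding to a uv weighting function."""
    b = np.abs(np.fft.fft(coverage))
    return b / b.max()

def peak_sidelobe(b):
    """Walk down the main lobe to its first minimum, then take the max beyond."""
    half = b[: len(b) // 2]
    i = 1
    while i < len(half) - 1 and half[i] <= half[i - 1]:
        i += 1
    return half[i:].max()

print("sharp edge:", peak_sidelobe(beam(tophat)))   # large sinc-like ringing
print("apodised  :", peak_sidelobe(beam(taper)))    # far smaller sidelobes
```

The tapered beam's main lobe is visibly wider than the top-hat's, which is the 1-D analogue of the factor-of-two resolution loss mentioned above.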
CLEAN
-----
In those days of innocence in the 1960s I guess nobody had even thought of
those diabolical tools CLEAN and MEM (or maybe they had, but computers
were in their infancy), so apodisation to remove sidelobes, with
its side effects, was I guess thought of as the price of doing business
(if there had been a simple, perfectly unique method of removing the
sidelobes on a perfectly regular E-W array without losing sensitivity,
Ryle - being a very clever chap -
would have invented it). Then, I guess, some damn foreigners started
building interferometers, and these people did not have the
purity of vision of Ryle. In particular, at WSRT people started
applying an iterative technique to remove sidelobes called CLEAN
(Hogbom 1974; since Hogbom was I think Swedish, I guess we can claim
this as a Swedish invention). Astronomers started using this technique
and it proved a great success. Here was an alternative to apodisation in
which you could apparently keep all your sensitivity and resolution
- you could 'have your cake and eat it'.
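For concreteness, here is a sketch of the Hogbom (1974) CLEAN loop in 1-D Python/numpy (the toy sky, beam and parameter values are my own illustrative choices, not anything from a real reduction package): find the peak of the residual, subtract a gain-scaled, shifted copy of the dirty beam, record a delta-function component, and repeat.

```python
import numpy as np

def hogbom_clean(dirty, dbeam, gain=0.1, niter=500, threshold=1e-3):
    """Hogbom CLEAN in 1-D; dbeam is the dirty beam, peak 1 at its centre."""
    residual = dirty.copy()
    components = np.zeros_like(dirty)
    centre = len(dbeam) // 2
    for _ in range(niter):
        p = int(np.argmax(np.abs(residual)))   # locate peak of residual
        if abs(residual[p]) < threshold:
            break
        flux = gain * residual[p]              # take only a fraction (loop gain)
        components[p] += flux                  # record a delta component
        lo = max(0, p - centre)                # subtract the shifted dirty beam
        hi = min(len(dirty), p - centre + len(dbeam))
        residual[lo:hi] -= flux * dbeam[lo - (p - centre): hi - (p - centre)]
    return components, residual

# Toy example: two point sources observed with a sinc (sidelobe-heavy) beam.
n = 256
dbeam = np.sinc((np.arange(n) - n // 2) / 16.0)   # dirty beam, peak 1 at centre
sky = np.zeros(n)
sky[100], sky[130] = 1.0, 0.4
dirty = np.convolve(sky, dbeam, mode="same")      # dirty map = sky (*) dirty beam
comps, resid = hogbom_clean(dirty, dbeam)
```

In a real reduction the components are finally restored with a clean (gaussian) beam and the residuals added back; the key point for the discussion below is that the component model implicitly assigns values to uv spacings that were never measured.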
However, as I understand it, Ryle never accepted the use of CLEAN
(..nor, I think, self-cal, but that's another story..). He saw more
clearly than anyone else, perhaps, what CLEAN was really doing: CLEAN
did not give a unique image reconstruction, it was choosing one
of an infinity of possible solutions which fitted the data (and
cheekily not even telling us the criteria on which it was making its
choice!). Put in exactly equivalent terms, CLEAN was estimating the FT of
the true image at uv spacings that had not been measured. In this
sense it was guessing new extrapolated/interpolated data to add to the
actual data that had been collected. Ryle, I think, must have
considered it preferable
to effectively delete a significant fraction of the real data collected,
via apodisation, rather than add this estimated/invented data
to the measured data to make a nice synthesised beam shape.
You could certainly see Ryle's point: he had invented a unique method
of imaging sources without any guesswork, and this silly algorithm was
muddying the beautiful invention of aperture synthesis. Meanwhile,
astronomers at WSRT, in Australia and in VLBI were using CLEAN quite
effectively to make astrophysically useful maps without apodisation.
Despite this success, through the 1970s CLEAN (or any other
deconvolution method, until later in the
80s, after the Ryle era, when Steve Gull promoted MEM as at least a
well-defined deconvolution process) was I think not really accepted
at Cambridge. Even in the early 1980s, when I was a PhD student at
Jodrell Bank, apodisation was still known as 'Cambridge Clean'.
The rest of the world, however, accepted in effect a 'Faustian
Bargain': in return for using deconvolution instead of apodisation
one got all the resolution and sensitivity of the array,
but at the expense of adding some uncertainty to the reconstruction.
This is the bargain most of us have lived with since. Sometimes
we push it too far, as in the case of sparse VLBI arrays in
the 70s and 80s, in which the uncertainty generated by applying
deconvolution to sparse arrays reached levels which affected the
astrophysics. However, despite such cases, I think the Faustian
bargain has usually been worth it.
THE VLA
--------
The next generation of interferometer
after those of the 60s and 70s (some Ts and rings as well as E-W)
was the VLA, and this certainly departed significantly from Ryle's
complete uv coverage supersynthesis array
concept. I'm afraid that I really am not sure of all the arguments
that caused it to be designed as it was. It certainly does
not have complete uv coverage out to uvmax, but is heavily
tapered. I think the main arguments were just based on
synthesised beam shapes (from what Ron Ekers said in Toronto),
and people expected to do long tracks and 'dirty imaging'.
However, at least in the later stages of its design, the
designers must have been conscious of developments with CLEAN etc.
In any case, in terms of how it has actually been -used- from 1980
onwards, the VLA has employed deconvolution algorithms for reducing
virtually all observations. The VLA is therefore the premier
example of the 'Faustian Bargain': having a sparse,
incomplete, tapered uv coverage and then using deconvolution
to estimate which of the many possible images that fit the
data is the best estimate. It has now been running for nearly
20 years and I would submit that it has been
very successful - I don't think there are many astrophysical
questions which have been affected by the uncertainties
introduced by deconvolution.
APPLICATION TO ALMA
--------------------
Of course, in contrast to the VLA, one can argue that
ALMA images will be more complex; then again, ALMA
has 6 times as many baselines. I think therefore
it's worth taking the VLA's practical experience as a guide in
designing ALMA. But, as Dave and Bryan have suggested, it is
also worth taking stock and wondering whether going back to arrays
with more complete uv coverage and (after apodisation) uniqueness
(i.e. going back to a 'Ryle-ist' philosophy) is worth it.
My feeling is that, except in cases of high SNR and when one
is only interested in the smoothest structures (D-array),
the increases in reliability are
not worth the costs in sensitivity/resolution which come from
apodisation. If, of course, one tries to use such uniform
arrays without apodisation and wants to remove the 15%
sidelobes that exist, one must then apply
an algorithm which estimates the image FT beyond the uv edge - and
hence the main argument of uniqueness for having 'complete uv coverage'
(really 'complete uv coverage out to uvmax') is then subverted.
All the above history/philosophy is worth thinking about to put the
uv coverage question into perspective; however,
as I noted in previous emails, maybe it's all somewhat
second order in practice. For ALMA, whatever the
design geometry, the arrays equivalent in size to 'NRAO baseline D'
will inevitably have complete uv coverage even in just a snapshot,
because the antennas are so densely packed. The ALMA array
of size C, whatever the design, will have almost complete coverage
after a full track (with a factor of 1.5 variation in uv cell occupancy
depending on whether it is a uniform array or one with 1/3 of the uv points
placed as outliers for a tapered array). For high SNR observations in these
two arrays one can, as an option, apply apodisation and get a unique image
reconstruction, a la Ryle. As one goes to larger arrays, sensitivity
becomes more of a critical issue, arguing against apodisation. In addition,
a complete filling of all uv spacings with cell
size equal to the antenna diameter becomes impossible anyway, hence
these arrays should ideally have a heavy taper. The loss in resolution
for an array of given maximum baseline is irrelevant if there exists
a bigger array. The exception, as Mark has noted - if there is limited
real estate - is the very largest array, in which we choose a ring/loop
because it gives the highest resolution from a limited area
(not really because of its magic uniformity properties).
John.