[alma-config] Addressing Frederic's Concerns

Frederic Boone frederic_boone at yahoo.fr
Wed Oct 5 22:20:23 EDT 2005


Dear all,

There seems to be a consensus already on the design
presented in the memo draft, so it is probably not a
good idea to continue this discussion; it would only
overload your mailboxes unnecessarily. However, I
still find that my work is not well understood, and
there are some points raised in your mails that I do
not agree with, so I feel I should try one last time
to explain my opinion before giving up. If you want
to reply, please don't hesitate: I will read your
mails carefully and perhaps I will be convinced, but
even if I am not, I will not reply again.

To avoid a huge text I propose to first discuss the
main arguments raised in your answers. I will reply
in more detail to your individual mails separately
(so this will still load your boxes a bit, but it
will be the last time).

Reading your answers, I understand there are three
main arguments for keeping the design with highly
centrally condensed configurations despite the
reduction in the number of antennas:

1. "Less tapered distributions of uv-samples are worse
for imaging, in particular with usual deconvolution
methods like CLEAN".

2. "Less centrally condensed configurations cannot
support continuous reconfiguration".

3. "Less centrally condensed configurations require
more pads".


About the 1st argument:
-------------------

Then why not taper even more (e.g. to 20dB) by keeping
even more antennas in the center?

I guess we all agree that tapering is good AND
sampling is good too. But better tapering implies
less sampling: there is a trade-off.
The fact that there is a trade-off does not come from
any assumption about the deconvolution method used.
The fact that sampling is also good (not only
tapering) comes from the fundamental point that, in
the presence of noise, the information is local in
the uv-plane: if we miss some information in one
region of the uv-plane, we cannot fully recover it
from the information contained in other regions.
 
I guess we all use the CLEAN method very frequently,
and it turns out that the data we use it on are far
from Gaussian-distributed, simply because the current
interferometers cannot achieve Gaussian
distributions. Instead they try to sample the
uv-plane as fully as possible (e.g. PdB, SMA, but
also the VLA). So I do not agree with the first
argument above: my opinion is that CLEAN and all
deconvolution methods work better when the sampling
is better, and they do not require a Gaussian
distribution of samples (it is of course better if we
can have both good tapering AND good sampling, and in
the ideal case of low tapering with
better-than-Nyquist sampling we do not need any
deconvolution method at all, just an FFT...).
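
To make that last point concrete, here is a minimal
sketch (in Python, with illustrative names and grid
sizes of my own, not taken from the memo or from my
paper) of what "just an FFT" means: when the uv-plane
is densely sampled, gridding the visibilities and
inverse-transforming already gives an image close to
the sky convolved with a compact, well-behaved beam,
so there is little left for deconvolution to do.

import numpy as np

def dirty_image(u, v, vis, cell, npix=256):
    # Grid visibilities (in wavelengths) on a regular uv grid,
    # cell-averaging duplicates, then inverse-FFT to an image.
    grid = np.zeros((npix, npix), dtype=complex)
    wgt = np.zeros((npix, npix))
    iu = np.round(u / cell).astype(int) + npix // 2
    iv = np.round(v / cell).astype(int) + npix // 2
    ok = (iu >= 0) & (iu < npix) & (iv >= 0) & (iv < npix)
    np.add.at(grid, (iv[ok], iu[ok]), vis[ok])
    np.add.at(wgt, (iv[ok], iu[ok]), 1.0)
    grid[wgt > 0] /= wgt[wgt > 0]
    # With near-Nyquist coverage most cells are filled and this
    # is already close to the true (tapered) sky image.
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid))).real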

Also, as is well known in radio astronomy (and as
Mark reminded us): the further the sample spacing is
from the Nyquist interval, the harder it is to
reconstruct extended images. So the Nyquist interval
can be used as a measure of how good the sampling is
and of how easy it will be to produce images,
whatever the deconvolution method used.
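
As a reminder of the order of magnitude involved
(these are my own back-of-the-envelope numbers, not
values from the memo): a source of angular extent
theta makes the visibility function vary on uv scales
of about 1/theta, so the Nyquist interval is about
1/(2*theta) in wavelengths, i.e. lambda/(2*theta) in
metres of baseline.

import numpy as np

def nyquist_baseline_spacing_m(wavelength_m, source_size_arcsec):
    # Nyquist interval in the uv-plane, in metres of baseline,
    # for a source of the given angular extent.
    theta_rad = source_size_arcsec * np.pi / (180.0 * 3600.0)
    return wavelength_m / (2.0 * theta_rad)

# e.g. a 20" source observed at 1.3 mm (illustrative numbers only)
print(nyquist_baseline_spacing_m(1.3e-3, 20.0))  # about 6.7 m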

If we all agree there is a trade-off between sampling
and tapering, then why not try to find the optimal
solution to this trade-off? Why not try to quantify
things? This is what I tried to do (details in the
paper), and the results for 50 antennas are as
follows: it is possible to have close-to-Nyquist
sampling up to 3.5 km, and a Gaussian distribution
tapered at 15 dB with Nyquist sampling up to 0.6 km.
Between these two configuration sizes the level of
tapering/sampling depends on the source size and, to
some extent, on the deconvolution method (equations
in the paper). The maximum loss of sensitivity due to
the high level of tapering, in the case where the
configurations are optimized for Nyquist sampling,
will occur for the most extended configuration
(3.5 km) and will be <10% (I insist that this is the
maximum loss of sensitivity if the configurations are
optimized for Nyquist sampling; it will be much lower
if they are optimized for sub-Nyquist sampling).
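
For those who want to check the order of magnitude of
such a sensitivity loss, the standard
weighting-efficiency formula can be used: if Gaussian
taper weights w_i are applied to equal-noise
visibilities, the point-source sensitivity retained
is (sum w_i)/sqrt(N * sum w_i^2). The sketch below is
not the calculation of the paper; the toy uv
distribution in it is an assumption of mine, for
illustration only.

import numpy as np

def taper_efficiency(uv_radius, taper_fwhm):
    # Fraction of point-source sensitivity retained when Gaussian
    # taper weights are applied to equal-noise visibilities.
    w = np.exp(-4.0 * np.log(2.0) * (uv_radius / taper_fwhm) ** 2)
    return w.sum() / np.sqrt(len(w) * (w ** 2).sum())

# Toy "well-sampled" case: 50 antennas, uv radii roughly uniform in
# area out to 3.5 km (an assumption, not a configuration from the paper).
rng = np.random.default_rng(0)
r = 3500.0 * np.sqrt(rng.uniform(size=50 * 49 // 2))
print(1.0 - taper_efficiency(r, taper_fwhm=3500.0))  # loss for this toy case only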

Then why not relax the tapering constraint? If
imaging of extended sources is an important
scientific driver for ALMA, won't the errors caused
by poor sampling outweigh the loss of sensitivity due
to less tapering? If you say that sampling at twice
the Nyquist interval does not make a big difference
with respect to Nyquist sampling, then we could still
have a reasonable tapering (implying typically less
than 5% sensitivity loss) and 2*Nyquist sampling up
to 3.5 km. So why fix the tapering level to 15 dB,
which implies sample spacings greater than 3*Nyquist
over 40% of the uv-plane? Why keep the same
distribution of antennas all the way up to 3.5 km
instead of adapting it in order to improve the
sampling (again, not necessarily to Nyquist if that
is not required)?
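
To see where a factor like 3*Nyquist can come from,
here is a crude estimate of my own (not the
calculation of the paper), under two assumptions:
that the local sample spacing scales as
1/sqrt(local sample density), and that the same
number of samples spread uniformly over the same disk
is taken as the reference spacing. For a Gaussian
density truncated at 15 dB, the spacing at the edge
of the disk is then about 3 times that reference.

import numpy as np

def edge_spacing_ratio(taper_db):
    # Ratio of the local sample spacing at the edge of the disk to
    # the spacing of the same samples spread uniformly over the disk,
    # for a Gaussian sample density truncated at taper_db.
    a = taper_db / 10.0 * np.log(10.0)      # density drops by taper_db at the edge
    mean_density = (1.0 - np.exp(-a)) / a   # disk-averaged density (peak = 1)
    edge_density = np.exp(-a)
    return np.sqrt(mean_density / edge_density)

print(edge_spacing_ratio(15.0))  # about 3.0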


About the 2nd argument:
-----------------

For the spiral, the reconfiguration scenario is
already written into the design, and that is a good
thing, but it does not imply that other designs
cannot be continuously reconfigured.

It is clear that, if optimized for better sampling,
the configurations will be less centrally condensed
and there will be more antennas at the edges of the
configurations, but this does not mean we will have
to deal with rings or Reuleaux triangles (see e.g.
the configurations at
http://aramis.obspm.fr/~boone/arraydesign/alma/index.html).
Reading your emails gives the impression that the
configurations can only be either very centrally
condensed or rings (or triangles), nothing else. But
these are two extreme cases, and it turns out that
with 50 antennas and configurations in the range
1--3.5 km we are right in the middle of these
extremes. So we would get configurations that are not
far from filled disks, with a few more antennas at
the edge for the largest configurations (depending on
the sampling we want).

Once the discrete set of configurations is optimized,
it is possible to reconfigure the array in many ways.
The array will be in the optimized configurations
only for a limited time, but the intermediate
configurations close to the optimized ones will have
properties close to optimal, and the others will have
different properties that could also be useful
(low-elevation sources, compact sources, etc.).

So, instead of argument 2, one should say "the
configurations will not keep the same properties over
continuous reconfiguration". But such a scheme is
useful and not so difficult to implement (with my
algorithm, for example...).
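
Just to illustrate the idea (this is not the
algorithm I refer to above, only a minimal sketch
with made-up pad coordinates): antennas can be moved
one at a time from the pads of one optimized
configuration to those of the next, reusing the
shared pads, so that the array between moves is
always a usable intermediate configuration.

import numpy as np

def continuous_moves(current_pads, target_pads):
    # Yield one antenna move at a time (from_pad, to_pad), reusing
    # the pads shared by the two configurations; after each move the
    # array is a valid intermediate configuration.
    current = {tuple(p) for p in current_pads}
    target = {tuple(p) for p in target_pads}
    to_vacate = sorted(current - target)   # pads only in the old configuration
    to_fill = sorted(target - current)     # pads only in the new configuration
    for src in to_vacate:
        # move each antenna to the nearest still-empty target pad
        dst = min(to_fill, key=lambda p: (p[0] - src[0]) ** 2 + (p[1] - src[1]) ** 2)
        to_fill.remove(dst)
        yield src, dst

# Two small made-up pad layouts (coordinates in metres)
a = np.array([[0, 0], [100, 0], [0, 100], [100, 100]])
b = np.array([[0, 0], [100, 0], [200, 0], [300, 0]])
for move in continuous_moves(a, b):
    print(move)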

Finally, reconfiguring 50 antennas implies fewer
antenna moves than reconfiguring 64. We could
therefore introduce some modulation in the
reconfiguration pace.

About the 3rd argument:
-------------------

The number of optimal configurations is indeed
limited by the number of stations. But again, we are
not talking about rings or triangles: many pads can
be re-used from one configuration to the next. I
think I proved it is possible to optimize
configurations for Gaussian distributions truncated
at higher and higher levels while keeping the number
of pads as low as for the spiral.


Conclusion
----------

After carefully reading the memo draft and your
answers to my previous mails, I am still convinced
that better sampling than the one proposed would
significantly improve the imaging capabilities of
ALMA, whatever the deconvolution method used, and I
am convinced this is feasible in terms of the number
of pads and of array operation.

Cheers,
Frederic.
