[alma-config] Organising Simulations
John Conway
jconway at oso.chalmers.se
Fri Jan 21 03:54:55 EST 2000
Hi,
Just some comments on the whole 'simulation' question.
As we discussed in the telecon, and as the email discussion about
algorithms indicates, this is a very complicated area. We set
up some subgroups at the telecon on test images, metrics etc.,
and that's good to do. However, my view expressed at the telecon
is that we should not wait until all these issues are
finally decided before starting (and reporting, and discussing)
simulations, or we will not make progress. Also, only by doing the
simulations and making some initial comparisons between arrays can we
understand what the real decisions about test images,
metrics etc. actually are.
Different views have been expressed on the way to organise
the simulations. Some, I think, would prefer to have a set
procedure laid down as soon as possible which everyone can follow,
so that images from different arrays can be compared on as fair
a basis as possible. That's a very good goal, but one which is
difficult to define right now. I think therefore it would be helpful
to think of the problem as having two phases, 'Development'
and 'Evaluation'. The Development phase would be a free interchange of
ideas to allow us to design our array concepts, and along the
way settle the issue of metrics, test images etc. This phase would
last up until the face-to-face Tucson meeting and perhaps a little
beyond. Out of this I would expect 2 or 3 candidate designs to
emerge which would go on to the Evaluation stage, where each
design is evaluated against a library of images with standard
metrics etc. This evaluation phase would lead up to the PDR
to which we can submit reports evaluating the different designs
with common criteria. Some more details of the character of
the two phases are given below:
I DEVELOPMENT
-------------
In this phase people actively developing arrays do simulations
and report them to the rest of the group. People use whatever
test images have been submitted by members of the group, or
are available on the central 'images' web area.
To cut down somewhat on parameter space we should, as Morita-san
says, try to limit the simulations in a few ways, i.e. perhaps just
do CLEAN and 'standard' MEM, and snapshots and long tracks at a few
declinations.
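To illustrate how much the snapshot-versus-long-track choice alone changes what a design is tested on, the sketch below computes uv points for a single baseline over a range of hour angles, using the standard interferometric relations. This is a minimal illustration only, with made-up antenna coordinates and function names of my own invention, not part of any agreed simulation tool.

```python
import math

def baseline_to_uv(Lx, Ly, Lz, hour_angle_h, dec_deg, wavelength_m=1.3e-3):
    """uv point (in wavelengths) for one equatorial baseline vector
    (Lx, Ly, Lz in metres) at a given hour angle (hours) and source
    declination (degrees), using the standard relations."""
    H = math.radians(hour_angle_h * 15.0)  # hour angle in radians
    d = math.radians(dec_deg)
    u = (Lx * math.sin(H) + Ly * math.cos(H)) / wavelength_m
    v = (-Lx * math.sin(d) * math.cos(H)
         + Ly * math.sin(d) * math.sin(H)
         + Lz * math.cos(d)) / wavelength_m
    return u, v

def uv_track(baseline, dec_deg, h_start, h_end, n=64):
    """Sample the uv track of one baseline between two hour angles."""
    step = (h_end - h_start) / (n - 1)
    return [baseline_to_uv(*baseline, h_start + i * step, dec_deg)
            for i in range(n)]

# Hypothetical 100 m east-west baseline: a snapshot covers essentially
# one uv point, while a 6-hour track sweeps out an arc.
bl = (0.0, 100.0, 0.0)
snapshot = uv_track(bl, dec_deg=-30.0, h_start=0.0, h_end=0.1, n=2)
long_track = uv_track(bl, dec_deg=-30.0, h_start=-3.0, h_end=3.0, n=64)
```

Running this for each baseline of a candidate array, at the few agreed declinations, gives the uv coverage that CLEAN or MEM would then be fed.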
The point we should all realise during this phase is that
for any given array concept one or two simulated images
prove nothing about the overall superiority of one style of design
over another. Since the final adjudication is left to the more
rigorous Evaluation phase, in which we will all agree the protocol,
we can all relax a bit and just play around and comment on each others
designs.
It would also be good, I think, if when we put results on the
web people gave access to their antenna position files, so other
people can do their own simulations and comment.
This is important because, life being what it is, authors
of array concepts are more likely to see the advantages of
their own particular concept than its faults(!)
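Sharing positions only works if everyone can parse everyone else's files. A minimal sketch of what that could look like, assuming a simple whitespace-separated 'x y z' format in metres (the format and function names here are my assumptions, not anything the group has agreed):

```python
import itertools
import math

def read_antenna_positions(path):
    """Parse a plain-text antenna file: one 'x y z' triple (metres)
    per line; '#' starts a comment. The format is an assumption --
    whatever convention the group actually agrees on would go here."""
    positions = []
    with open(path) as f:
        for line in f:
            line = line.split('#', 1)[0].strip()
            if not line:
                continue
            x, y, z = (float(tok) for tok in line.split()[:3])
            positions.append((x, y, z))
    return positions

def baselines(positions):
    """All pairwise baseline vectors and their lengths for an array."""
    out = []
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(positions, 2):
        dx, dy, dz = x2 - x1, y2 - y1, z2 - z1
        out.append(((dx, dy, dz), math.sqrt(dx*dx + dy*dy + dz*dz)))
    return out
```

With the position file public, anyone can feed it into their own simulator and check, for instance, the baseline-length distribution of a proposed design.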
Once a deficiency is reported back to the author he/she can try to
improve the array design in light of these problems. It is of course not
a sin to change an array in response to simulations, the only
alternative is to design an array 'blind'. There might be what I think
is a false worry about 'fine-tuning' an array to the set
of test images, but if one looks at several quite different
test images, it will be impossible to fine-tune the
array to all of them.
During the Development stage we could exchange information via this
email exploder and our web pages. It's perhaps not so
useful during this development stage to have detailed memos sent to
the general ALMA Memo series COMPARING designs and claiming superiority
over other designs BEFORE a fixed protocol has been devised.
I think it will slow things down, and create different 'camps'
inhibiting the free flow of ideas. Besides this I suspect that
many people on the ALMA memo distribution list are not as fascinated by
the configuration question as we are and will get tired
of seeing configuration memos! It might be OK to send quick memos
reporting to the general community the performance of a
particular array (i.e. my web page minus the ring comparison),
but the forum of the ALMA memo series is perhaps not the best
place to discuss intercomparisons, at least not
until after Tucson when a protocol for comparisons has been
agreed. Before that the results of any intercomparisons can be
more effectively distributed privately within this
configurations group.
II EVALUATION
-------------
At the end of the development phase, about the time of the
Tucson meeting, I expect that we will have a few designs worked out
(fitted into the terrain); we will also have a final library of images
and some idea of the right metrics/algorithms to use for
final evaluation. We can therefore work out the protocol for
the evaluation phase at the Tucson meeting, and after
that go home and evaluate the competing designs. We can
also discuss how this work will be done. The simplest way
would be for each of the array authors to apply the protocol
to their design and report the results in a standard format.
For 'quality control' authors could also spot check the protocol
applied to other people's designs.
What do people think? Given that we cannot start doing the
exact evaluation of arrays right now (because the designs
are still being created, and because we don't have common
criteria for comparison), the two-phase approach above
seems to me sensible.
John.