[mmaimcal] Use Case Comments

Mark Holdaway mholdawa at nrao.edu
Wed Dec 13 11:16:14 EST 2000


I have provided some extensive comments on the three Use Cases
we are to look at.  In some cases, the comments bring things
into sharper focus with simple and useful suggestions.  In
some cases, the comments fill out more of the shape of a complex
computation which should be made (but which might not be made in
the early implementations).  And in some cases, I probably cloud
the issue with excessive attention to detail at a time when we
might need more simplicity and clarity.  It is not for me to
decide which is which, though.

With all that in mind:

------------------------------------------------------------------

Single Field Setup:

Just as lines can be specified by name, perhaps the standard
continuum frequencies could be specified by name.  (There SHOULD
be standard continuum frequencies at which we know things about the
system, such as the D terms, or at which we know the sensitivity is
optimum under the typical atmospheric conditions accessible to
experiments in that band.)

Optimum Calibrator:  there is usually a tradeoff between distance and
calibrator strength, and I would advocate picking the calibrator which
minimizes the quantity (vt/2 + d), where d is the distance between the
lines of sight to the target and calibrator sources at the typical
altitude of the turbulence, v is the atmospheric velocity, and t is the
cycle time.  Since the target integration may be unspecified, we could
probably just use the fraction of the cycle spent detecting the
calibrator, slewing to/from the calibrator, and settling down/setting
up.  A bright source requires less integration time, but will be
farther away and hence will increase d and the slew part of t.  To
calculate d, we may need to know more about the height of the turbulent
layer.  More advanced algorithms could use the wind direction or
information on the thickness of the turbulent layer to optimize the
calibrator choice (i.e., pick a calibrator "upstream" from the target
source, and also calculate a gain interpolation delay).
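As a very rough sketch of what such a selection metric might look like
in code (the wind speed, turbulence height, and candidate numbers below
are placeholder assumptions, not system values):

# Hypothetical sketch: rank candidate calibrators by the quantity
# (v*t/2 + d) discussed above, using the calibration overhead per
# cycle as t.  All names and numbers are illustrative assumptions.

import math

def effective_separation_m(sep_deg, turbulence_height_m=1000.0):
    """Distance d between the target and calibrator lines of sight
    at the assumed height of the turbulent layer (small-angle approx.)."""
    return turbulence_height_m * math.radians(sep_deg)

def calibrator_cost(sep_deg, overhead_s, wind_speed_ms=10.0,
                    turbulence_height_m=1000.0):
    """Cost ~ v*t/2 + d, with t the non-target part of the cycle
    (calibrator integration + slews + setup)."""
    d = effective_separation_m(sep_deg, turbulence_height_m)
    return wind_speed_ms * overhead_s / 2.0 + d

# candidates: (name, separation from target [deg], overhead per cycle [s])
candidates = [("J1000+100", 1.2, 20.0),   # faint but close
              ("J1030+050", 4.5, 8.0)]    # bright but farther away
best = min(candidates, key=lambda c: calibrator_cost(c[1], c[2]))
print("preferred calibrator:", best[0])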

Phase cal observing frequency (how often, not GHz):  given the required
RMS phase error and the phase structure function, we can calculate the
cal frequency.
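To make that concrete, here is a minimal sketch that inverts an assumed
power-law phase structure function (the exponent, the 300 m
normalization, and the use of v*t/2 + d as the effective scale are all
assumptions for illustration):

def rms_phase_deg(rho_m, sigma_300=30.0, alpha=0.6):
    """Assumed root structure function: rms phase (deg) on scale rho (m),
    normalized to sigma_300 degrees at 300 m."""
    return sigma_300 * (rho_m / 300.0) ** alpha

def max_cycle_time_s(required_rms_deg, sep_m, wind_speed_ms=10.0,
                     sigma_300=30.0, alpha=0.6):
    """Largest cycle time t such that rms_phase(v*t/2 + sep) stays at or
    below the required rms."""
    rho_max = 300.0 * (required_rms_deg / sigma_300) ** (1.0 / alpha)
    if rho_max <= sep_m:
        return 0.0     # calibrator too far away for this requirement
    return 2.0 * (rho_max - sep_m) / wind_speed_ms

print(max_cycle_time_s(required_rms_deg=20.0, sep_m=50.0))   # ~20 s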

BTW: we should make a distinction between two types of observations in the
proposal form: are they SENSITIVITY DRIVEN or FIDELITY DRIVEN?  There is no
point spending half your time calibrating to get really low phase noise if
you are otherwise sensitivity limited and should actually spend more time
integrating on source.  On the other hand, if you have a bright source or
are trying to make very accurate positional measurements (for example),
you may rather spend the time getting more accurate phases and not worry
that your time on source is not so high.  In the case of a SENSITIVITY
DRIVEN observation, the system can do a global sensitivity optimization to
figure out how often to calibrate and how long to sit on the calibrator
(i.e., the system will determine the optimum rms phase, given the
calibrator and the observing conditions).  FIDELITY DRIVEN observations
must defer to the wisdom of the observer in her/his specification of
parameters such as the required phase error.
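A toy version of that global sensitivity optimization might look like
the following, trading the coherence loss from residual phase noise
against the time lost to calibration (the structure function, overhead,
and wind numbers are again assumptions, and real life adds elevation,
opacity, and so on):

import math

def efficiency(t_cycle_s, t_overhead_s=15.0, sep_m=50.0,
               wind_speed_ms=10.0, sigma_300=30.0, alpha=0.6):
    """Relative point-source sensitivity for a given calibration cycle."""
    if t_cycle_s <= t_overhead_s:
        return 0.0
    rho = wind_speed_ms * t_cycle_s / 2.0 + sep_m
    rms_rad = math.radians(sigma_300 * (rho / 300.0) ** alpha)
    coherence = math.exp(-rms_rad ** 2 / 2.0)                  # decorrelation
    duty = math.sqrt((t_cycle_s - t_overhead_s) / t_cycle_s)   # time on source
    return coherence * duty

best = max(range(20, 601, 5), key=efficiency)
print("optimum cycle time ~", best, "s")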

Integration time: for long baselines, time-average smearing must be
taken into account here.  There should be a system default for how much
point-source amplitude loss at the primary beam half-power point is
tolerable, which could be overridden by user information.  Also, for
mosaicing, the default might be stricter, i.e., apply the criterion to a
point source at the half-width at zero intensity.
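A crude rule-of-thumb sketch of that limit (the worst-case fringe rate
and the small-angle amplitude-loss formula are standard but rough
approximations; the 1% loss and the example numbers are placeholders):

import math

OMEGA_EARTH = 7.27e-5          # Earth rotation rate [rad/s]

def max_integration_time_s(baseline_m, dish_diameter_m, freq_ghz,
                           max_amp_loss=0.01):
    """Integration time keeping the smearing loss of a point source at
    the primary-beam half-power radius below max_amp_loss."""
    lam = 0.3 / freq_ghz                        # wavelength [m]
    offset = 1.2 * lam / dish_diameter_m / 2.0  # half-power radius [rad]
    # worst-case phase ramp rate: 2*pi * (B/lambda) * offset * omega_E
    rate = 2.0 * math.pi * (baseline_m / lam) * offset * OMEGA_EARTH
    dphi_max = math.sqrt(24.0 * max_amp_loss)   # loss ~ dphi^2 / 24
    return dphi_max / rate

# e.g. a 10 km baseline with 12 m dishes at 230 GHz
print(round(max_integration_time_s(10000.0, 12.0, 230.0), 2), "s")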

Map cell size: it doesn't do any harm to calculate it here, but it needs
to be recalculated in the pipeline (for example, what if you are adding
data from another configuration -- how is that handled in the pipeline
anyway?).
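For reference, the calculation itself is trivial; the point is just
that the pipeline must redo it once the actual uv coverage (possibly
from more than one configuration) is known.  A minimal sketch, with an
assumed oversampling factor:

import math

def map_cell_arcsec(max_baseline_m, freq_ghz, oversample=4.0):
    """Cell size as a fraction of the rough synthesized beam lambda/B."""
    lam = 0.3 / freq_ghz                 # wavelength [m]
    beam_rad = lam / max_baseline_m      # ~ synthesized beam FWHM [rad]
    return math.degrees(beam_rad) * 3600.0 / oversample

print(round(map_cell_arcsec(3000.0, 230.0), 4), "arcsec")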


--------------------------------------------------------------------------

Single Field Observing

Doppler info:  doesn't this need to get recalculated throughout the
observations?

Will the focus need to be re-determined each time?  Especially at the
lower frequencies, it seems that a lookup table may be sufficient much
of the time.  I know the focus measurement won't take long to do.

It is possible that the optimal phase calibrator will change during an
observation.  No reobserving of the potential phase cals is required
(i.e., it's not like we are looking for cal sources to flare up!), just
monitoring of the atmosphere and paying attention to the delta AZ and
delta EL needed to get back and forth.  It certainly simplifies things
to require a given SB to keep the same phase calibrator, though.

My recollection, based on old MMA calculations, was that the typical
phase cal sources were around 100 mJy, with some as low as 25 mJy if
they happened to be very close; and that you wanted stronger sources,
say > 300 mJy, for pointing sources.  Now, for super-ALMA equipped with
64 deluxe model 12-m antennas, those numbers will both be smaller.  So,
often the optimal phase calibrator will not be a pointing calibrator.

Post-amble: if pointing is of concern, there should be a final
observation of a pointing source.  You can't retrofit the data for the
pointing corrections, but they can help you flag antennas that were
badly mispointed.

In addition to termination based on high phase rms, there should also be
termination based on high pointing errors and high opacity.  The phase rms
is the most important, however, as it is the most highly variable of the
limiting factors.

Multiple targets:  I would suggest an advanced survey mode Use Case in
which one gives 100 sources and the observing system selects which ones
to do based on some criteria (i.e., observe the brightest first, or at
random, or whatever), doing a first pass at a traveling-salesman
ordering, and figuring out which calibrators to use.  The survey mode
may very well invoke the single field mode with multiple targets for a
single calibrator; certainly it would invoke multiple targets for a
single pointing calibrator even if each target had its own phase
calibrator.
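The traveling-salesman pass could start out as simple as a greedy
nearest-neighbour ordering; the sketch below is purely illustrative
(the coordinates and source list are made up, and a real scheduler
would fold in priorities, rise/set times, and calibrator sharing):

import math

def angular_sep(p, q):
    """Approximate separation (deg) between two (lon, lat) positions in deg."""
    dlon = math.radians(p[0] - q[0]) * math.cos(math.radians(0.5 * (p[1] + q[1])))
    dlat = math.radians(p[1] - q[1])
    return math.degrees(math.hypot(dlon, dlat))

def greedy_order(targets, start):
    """targets: dict name -> (lon, lat) in deg.  Returns a visiting order."""
    remaining = dict(targets)
    order, here = [], start
    while remaining:
        name = min(remaining, key=lambda n: angular_sep(remaining[n], here))
        order.append(name)
        here = remaining.pop(name)
    return order

targets = {"src%02d" % i: (10.0 * i % 360.0, 5.0 * i - 40.0) for i in range(10)}
print(greedy_order(targets, start=(0.0, 0.0)))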

---------------------------------------------------------------------------

Single Field Reduction

Automated flagging (a rough sketch follows this list) should be
implemented for
	- large pointing excursions between pointing calibrations
	  (system default as a fraction of the beamwidth, as a function
	   of frequency, with a user-specified override?)
	- rms phase
	- individual antenna phase excursions
	- OTHER (i.e., large system temperature...)
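The sketch: per-scan, per-antenna checks against thresholds.  The
threshold values, the data layout, and the field names are all invented
for illustration; they are not existing pipeline defaults.

def flag_antennas(scans, beam_arcsec, max_point_frac=0.1,
                  max_rms_phase_deg=40.0, max_tsys_k=300.0):
    """scans: list of dicts with keys 'antenna', 'pointing_err_arcsec',
    'rms_phase_deg', 'tsys_k'.  Returns a list of (antenna, reason) flags."""
    flags = []
    for s in scans:
        if s["pointing_err_arcsec"] > max_point_frac * beam_arcsec:
            flags.append((s["antenna"], "pointing excursion"))
        elif s["rms_phase_deg"] > max_rms_phase_deg:
            flags.append((s["antenna"], "phase rms"))
        elif s["tsys_k"] > max_tsys_k:
            flags.append((s["antenna"], "high Tsys"))
    return flags

scans = [{"antenna": 3, "pointing_err_arcsec": 4.0,
          "rms_phase_deg": 25.0, "tsys_k": 150.0}]
print(flag_antennas(scans, beam_arcsec=25.0))   # -> pointing excursion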

Advanced Deconvolution (may be too rich for the present plans)
	- low resolution image can be used to generate a mask
	  for high resolution deconvolution.  In the most extended
	  arrays, a mask (i.e., a glorified CLEAN box) will be important
	  in constraining the deconvolution
	- For continuum (especially mosaicing): given a catalog
	  of known continuum point sources, we can solve for their
	  fluxes (fit to the long baseline visibilities) and shapes;
	  then subtract this model from the corrected visibilities and
	  image the residuals (presumably the source of interest).
	  This process should have a user-settable ON/OFF switch.
	- For spectral line: determine line-free channels, fit continuum
	  to the spectral visibilities and subtract (sketched below, after
	  this list).  This procedure will not work as well for
	  wide-bandwidth, wide-field imaging, and some analytical error
	  analysis would be required up front.
	- observer should be able to provide some detailed directions for
	  deconvolution;  the system defaults should be complete, but the
	  user should be able to select deconvolution algorithm and
	  specify things (such as: do the continuum subtraction or not;
	  remove background point sources).
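The spectral-line continuum subtraction mentioned above, in its
simplest per-visibility form (toy data; as noted, this degrades for
wide-bandwidth, wide-field imaging):

import numpy as np

def subtract_continuum(vis, line_free, order=1):
    """vis: complex array [nchan]; line_free: boolean mask of line-free
    channels.  Returns continuum-subtracted visibilities."""
    chan = np.arange(vis.size)
    coeff_re = np.polyfit(chan[line_free], vis.real[line_free], order)
    coeff_im = np.polyfit(chan[line_free], vis.imag[line_free], order)
    continuum = np.polyval(coeff_re, chan) + 1j * np.polyval(coeff_im, chan)
    return vis - continuum

# toy spectrum: flat continuum plus a "line" in channels 40-59
vis = np.full(128, 1.0 + 0.2j, dtype=complex)
vis[40:60] += 0.5
mask = np.ones(128, dtype=bool)
mask[30:70] = False
print(abs(subtract_continuum(vis, mask)[45]))   # ~0.5, the line alone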
======> It seems to me that we are going to have something like one person
working full time on implementing new pipeline reduction ideas for the
lifetime of ALMA, working on things like algorithms to automatically
generate a mask or automatically determine line-free channels.


----------------------------------------------------------------------------



	-Mark




