[Difx-users] On delay modelling

Adam Deller adeller at astro.swin.edu.au
Sun Apr 23 20:40:10 EDT 2017


Hi Mugundhan,

On 21 April 2017 at 15:30, Mugundhan vijayaraghavan <
v.vaishnav151190 at gmail.com> wrote:

> Dear All,
>
> I have a few queries about how delay modelling is carried out in VLBI for
> compensating the same.
>
> 1.) The geometric delay is calculated as tau_g = *b.s*/c, where *b* is the
> baseline vector and *s* is the source unit vector. Let's say I have two
> antennas located about 100 km apart. How does standard VLBI delay
> modelling software calculate this delay? Based on some preliminary reading
> I understood that the baseline vectors are first calculated referenced to
> the Earth centre. If this is done, are delays estimated assuming the Earth
> centre to be the phase reference? How is this Earth-centred reference
> then transformed to the celestial frame? Both *b* and *s* must
> be in the same coordinate system for the dot product to make sense,
> right?
>

VLBI delay modelling is very complicated, involving considerably more than
just a *b.s* operation.  Other propagation effects are taken into account
too, the length of the baseline *b* changes with time due to tidal
forces and what-not, and the whole system is wobbling around due to the
changing Earth orientation.

But stripping it back to the minimum: yes, the Earth centre is usually used
as the reference.  Look up the International Terrestrial Reference Frame
(ITRF) to see the definition of the axes.  Then you obviously need to know
the *time* (and the Earth orientation parameters) to figure out where the
unit vector \hat{s} towards the source is pointing in this reference
frame.  For each telescope, we then compute the
station-based delay from the telescope back to the geocentre at the desired
instant of time, and each telescope's data stream is delayed by the
computed amount (rather than shifting only one data stream by the
difference between \tau_a and \tau_b).  That's what it means to use the
geocentre as the reference.
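
To make that slightly more concrete, here is a bare-bones Python/numpy
sketch of just the geometry (my own toy illustration, not what CALC or any
correlator actually does: a single GMST rotation stands in for the full
precession/nutation/polar-motion machinery, the station coordinates and the
instant of time are made up, the sign convention is arbitrary, and all
propagation and relativistic terms are ignored):

import numpy as np

C = 299792458.0  # speed of light, m/s

def source_unit_vector(ra_rad, dec_rad):
    # Unit vector toward the source in the celestial (equatorial) frame
    return np.array([np.cos(dec_rad) * np.cos(ra_rad),
                     np.cos(dec_rad) * np.sin(ra_rad),
                     np.sin(dec_rad)])

def itrf_to_celestial(r_itrf, gmst_rad):
    # Toy frame rotation: a single rotation about z by GMST.  A real model
    # also applies precession, nutation, polar motion and UT1-UTC.
    c, s = np.cos(gmst_rad), np.sin(gmst_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ r_itrf

def station_delay(r_itrf, gmst_rad, s_hat):
    # Time by which the plane wavefront from direction s_hat reaches the
    # station before it reaches the geocentre (sign conventions vary)
    return np.dot(itrf_to_celestial(r_itrf, gmst_rad), s_hat) / C

# Two made-up stations roughly 100 km apart (Earth-fixed coordinates, metres)
ant_a = np.array([6371000.0, 0.0, 0.0])
ant_b = np.array([6370000.0, 100000.0, 0.0])
s_hat = source_unit_vector(np.deg2rad(80.0), np.deg2rad(20.0))
gmst = np.deg2rad(123.4)   # stands in for "the desired instant of time"

tau_a = station_delay(ant_a, gmst, s_hat)
tau_b = station_delay(ant_b, gmst, s_hat)
print(tau_a, tau_b, tau_a - tau_b)   # per-station delays; difference ~ b.s/c

It only shows where *b*, *s*, the time and the geocentre enter; a real
model needs all the extra terms mentioned above to reach picosecond-level
accuracy.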


>
> 2.) In some books/articles I find references to the RA and Dec of a baseline.
> What does this physically mean? I'm not able to visualize it clearly.
> Any help will be greatly appreciated!
>

Like I said above, it makes more sense to figure out where the source unit
vector is pointing relative to a telescope coordinate system.  You can
equivalently rotate the telescope coordinates and keep the source unit
vector fixed, but that is (I think) less intuitive.
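
For what it's worth, the "RA and Dec of a baseline" in the older textbooks
is simply the point on the celestial sphere toward which the baseline
vector points once it has been rotated into the celestial frame, treated as
if it were a source position.  A tiny illustrative sketch (made-up numbers,
same toy-model caveats as above):

import numpy as np

def baseline_ra_dec(b_celestial):
    # Treat the baseline vector (already rotated into the celestial frame,
    # metres) like a source direction and read off its RA and Dec in degrees
    b_hat = b_celestial / np.linalg.norm(b_celestial)
    dec = np.degrees(np.arcsin(b_hat[2]))
    ra = np.degrees(np.arctan2(b_hat[1], b_hat[0])) % 360.0
    return ra, dec

# e.g. a ~100 km, mostly east-west baseline at some instant (made-up numbers)
print(baseline_ra_dec(np.array([-1000.0, 99000.0, 12000.0])))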


>
> 3.) In the complete delay model, tau_m, which is the sum of the geometric
> delay + clock delay + ionospheric/atmospheric delay + fixed delays due to analog
> components, the fastest-varying component will be the geometric delay; once
> this is compensated, any excess time-varying delay contributed by the other
> quantities will be seen as a residual fringe. Now, for the
> clock delay, is this estimated using the Allan deviation of the clock being
> used? Let's say my clock loses 10^-9 seconds in 30 minutes, and I sample
> my signal at 16 MHz, which is ~62.5 ns: will I be able to integrate the
> data without any degradation due to the clock for up to 30 minutes?
>
>
The sampling time is irrelevant.  It's the sky frequency that determines
the visibility phase.  Your signal might only be 16 MHz wide, but if you
were observing at 100 GHz then a change of 1 nanosecond translates to 100
turns of phase.  With the drift in your example (1 ns over 30 minutes)
the fringe would wind a full turn roughly every 18 seconds at 100 GHz,
so you could only integrate coherently for a few seconds.  Normally
VLBI clock drifts are monitored to a
level of at worst a few ns/day or so. If they are unknown then a test
correlation is performed to determine the clock offset and drift, and then
the observation is recorrelated having applied the best available clock
model.
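
The arithmetic is just phi = 2*pi*nu_sky*delta_tau.  A quick
back-of-the-envelope sketch using the numbers from your example (keeping
the accumulated phase under about a radian is only a rule of thumb, not a
hard limit):

import numpy as np

nu_sky = 100e9             # sky (observing) frequency, Hz
drift = 1e-9 / 1800.0      # clock rate error: 1 ns lost over 30 minutes, s/s

# Residual fringe phase after integrating for t seconds:
#   phi(t) = 2 * pi * nu_sky * drift * t   (radians)
turns_per_second = nu_sky * drift
print("fringe winds %.3g turns per second" % turns_per_second)   # ~0.056

# Keep the accumulated phase under roughly one radian:
t_max = 1.0 / (2.0 * np.pi * turns_per_second)
print("coherent integration limited to ~%.1f s at 100 GHz" % t_max)  # ~2.9 s

# The same clock at a 1.4 GHz sky frequency is far more forgiving:
print("~%.0f s at 1.4 GHz" % (1.0 / (2.0 * np.pi * 1.4e9 * drift)))  # ~200 s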


> 4.) There is also an associated baseline velocity component which will
> lead to a time difference between the wavefronts received at the two
> antennas. Is this baseline velocity the same as the orbital velocity of
> the Earth? Or is it modelled differently?
>
>
By delay tracking to the geocentre, this effect is naturally taken into
account. When you use the geocentre, you are automatically forced to
account for the rotation of the reference frame between the time the signal
is received at the antenna and the time that it would pass through the
geocentre.  So you've corrected for the velocity of both of the stations,
rather than their difference.  The process is known as retarded baseline
correction.
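
If it helps to see it written down, here is a crude illustration of the
retarded-baseline idea (again my own toy model, not production code:
uniform rotation about the z axis only, a made-up station, and a far-field
plane wave).  The station-to-geocentre delay is iterated so that the
station position is evaluated at the instant the wavefront actually passes
the station rather than the instant it passes the geocentre:

import numpy as np

C = 299792458.0
OMEGA = 7.2921150e-5   # Earth rotation rate, rad/s

def station_celestial(r_itrf, gmst0_rad, dt):
    # Station position in the celestial frame dt seconds after the reference
    # instant (toy model: uniform rotation about the z axis only)
    ang = gmst0_rad + OMEGA * dt
    c, s = np.cos(ang), np.sin(ang)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]) @ r_itrf

def retarded_station_delay(r_itrf, gmst0_rad, s_hat, n_iter=4):
    # Delay between the wavefront passing the station and passing the
    # geocentre, with the station position evaluated at the epoch the wave
    # actually reaches it (t_geo - tau) rather than at t_geo
    tau = 0.0
    for _ in range(n_iter):   # converges after a couple of iterations
        r = station_celestial(r_itrf, gmst0_rad, -tau)
        tau = np.dot(r, s_hat) / C
    return tau

s_hat = np.array([0.0, 1.0, 0.0])         # source on the celestial equator
ant = np.array([6371000.0, 0.0, 0.0])     # made-up equatorial station
naive = np.dot(station_celestial(ant, 0.3, 0.0), s_hat) / C
print(naive, retarded_station_delay(ant, 0.3, s_hat))
# The two differ by ~(s.v_station)*tau/c, around 9 ns here -- enormous
# compared with the ~picosecond accuracy that full delay models aim for.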

Unfortunately the documentation for VLBI delay packages is not extensive.
You can look up CALC (https://lupus.gsfc.nasa.gov/software_calc_solve.htm)
or VTD (http://astrogeo.org/vtd/), but neither has an excellent explanation
of the theory.

Cheers,
Adam


> I would greatly appreciate it if the experts here could clarify my doubts.
> Kindly point me to references that may help clarify these doubts
> too!
>
> Thanking you,
> With best regards,
>
> Mugundhan V.
>
>


-- 
!=============================================================!
Dr. Adam Deller
ARC Future Fellow, Senior Lecturer
Centre for Astrophysics & Supercomputing
Swinburne University of Technology
John St, Hawthorn VIC 3122 Australia
phone: +61 3 9214 5307
fax: +61 3 9214 8797

office days (usually): Mon-Thu
!=============================================================!

