[evla-sw-discuss] CBE lag frame store block size

Bill Sahr bsahr at nrao.edu
Tue Jun 20 18:00:28 EDT 2006


While modifying the correlator backend (CBE) code to handle the
writing of lag frame sets (in support of the Real Time Data Display
software), Martin realized that a buffer in the CBE (the lag frame
blocks) interacts with the integration time set for the LTA on the
baseline board.  He explains the issue below.  It seems clear,
as Martin states, that the lag frame blocks must be sized dynamically
w.r.t. at least the LTA integration time and, likely, the
integration time specified in the control script/observe file.
What is less clear is the criterion to use for the sizing.  Should
the lag frame blocks be sized such that the data is passed along
from the lag frame blocks to other CBE processes for further
processing (sorting, longer-term integration, FFT, etc.) a few times
per integration, a few times per scan, or ... ?

Bill

******

The correlator back end accepts lag frames from one or more correlator
baseline boards as they arrive, and copies their data into an array
known as the "lag frame store". The lag frame store is sub-divided into
equal-sized blocks, which form the basis of further processing of lag
frames after their arrival in the lag frame store. Only after a lag
frame store block has been filled do the lag frames in that block become
available for sorting (i.e., assembly into lag sets) and data processing
(e.g., integration and Fourier transform application). Thus, the size of
blocks may strongly affect the latency of lag set processing.
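To make the block mechanism concrete, here is a minimal C++ sketch
(not the actual CBE code; the names LagFrame, LagFrameStore, and
framesPerBlock are mine) of a store that hands a block to the sorting
and processing stages only once the block is full:

    #include <cstddef>
    #include <utility>
    #include <vector>

    struct LagFrame {                        // hypothetical frame record
        long timestamp;                      // dump time tag from the baseline board
        std::vector<float> lags;             // raw lag values
    };

    class LagFrameStore {
    public:
        explicit LagFrameStore(std::size_t framesPerBlock)
            : framesPerBlock_(framesPerBlock) { current_.reserve(framesPerBlock_); }

        // Copy an arriving frame into the block being filled.  A block becomes
        // visible to the sorting/processing stages only when it is complete,
        // at which point a pointer to it is returned; otherwise nullptr.
        const std::vector<LagFrame>* add(LagFrame frame) {
            current_.push_back(std::move(frame));
            if (current_.size() < framesPerBlock_)
                return nullptr;
            completed_ = std::move(current_);
            current_.clear();
            current_.reserve(framesPerBlock_);
            return &completed_;              // sort into lag sets, integrate, FFT
        }

    private:
        std::size_t framesPerBlock_;         // fixed per store (compile-time in the current CBE)
        std::vector<LagFrame> current_;      // block being filled
        std::vector<LagFrame> completed_;    // most recently completed block
    };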

For example, assume that lag frames are arriving from a single baseline
board for only a single data product in a CBE data processing node.
Assume further that the baseline board dumps a lag frame every 1 msec
and that the lag frame store block size is 1000 frames; one block will
then be filled every second. That is, there will be a delay of at least one
second from the time at which the first lag frame of a block arrives to
the start of its processing. If the baseline board dumped a single lag
frame only once every 16 sec into another, similarly configured lag
frame store, the processing delay for the first lag frame in a block
would be about 4.4 hours. I want to emphasize that these figures
represent lag set latencies in the back end prior to integration by the
back end.
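For reference, both latency figures fall out of a one-line calculation
(block size times dump interval); a trivial C++ check using the fixed
1000-frame block size from the examples above:

    #include <cstdio>

    int main() {
        const double framesPerBlock = 1000.0;               // compile-time block size

        // 1 msec dumps: the block fills in 1000 * 0.001 s = 1 s.
        std::printf("1 msec dumps: %.1f s\n", framesPerBlock * 0.001);

        // 16 sec dumps: 1000 * 16 s = 16000 s, i.e. about 4.4 hours.
        std::printf("16 sec dumps: %.1f h\n", framesPerBlock * 16.0 / 3600.0);
        return 0;
    }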

Clearly, the size of the lag frame store blocks should adapt to the rate
at which lag frames are expected to arrive if back end latencies are to
be kept "short". The block size used in the above examples (1000 frames)
comes from the existing CBE software, in which it is fixed at
compile time. Although I can fairly easily add support for variable-sized
blocks, doing so raises a question; namely, what is the maximum
acceptable delay for the creation of correlator back end output from the
time at which the first lag frame of an integration period arrives at
the back end? Obviously, the lower limit is somewhat greater than the
back end integration time. If the back end integration time isn't too
short then matching the block size to the integration time (or less)
should be feasible, depending on the value of "too short". Are there
other possible acceptable (greater) values for the latency of back end
data products? The time interval of a sub-scan or scan, for example?
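One possible sizing rule, sketched here only to make the question
concrete (the function and its parameters are hypothetical, not
existing CBE code), would derive the block size from the expected dump
interval and whatever latency budget is chosen (integration, sub-scan,
or scan):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>

    // Hypothetical sizing rule: choose a block that fills within the latency
    // budget (back end integration time, sub-scan, scan, ...), given the
    // expected interval between lag frame dumps.
    std::size_t framesPerBlock(double dumpIntervalSec, double maxLatencySec)
    {
        long long frames = std::llround(maxLatencySec / dumpIntervalSec);
        return static_cast<std::size_t>(std::max<long long>(1, frames));
    }

    // With the figures from the examples above:
    //   framesPerBlock(0.001, 1.0)  -> 1000  (block fills once per second)
    //   framesPerBlock(16.0, 16.0)  -> 1     (each frame processed as it arrives)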

-- 
Martin


