[Gb-ccb] suggested changes to ccb library
Martin Shepherd
mcs at astro.caltech.edu
Tue Aug 12 15:08:39 EDT 2003
On Tue, 12 Aug 2003, Brian Mason wrote:
>...
> 1ms is plenty of accuracy for the scan start. My recollection, however,
> is that 1ms is the *minimum* integration time; we've not always
> planned to use that (cf. my draft use cases), i.e., so short an
> integration would not always make sense.
My recollection was that in practice 1ms integrations were going to be
the normal integration time, and that other times were mainly going to
be used for debugging.
Regardless, the 100ms that you mention below is still quite a short
time, particularly given the 1PPS synchronization of scan starts and
the potential delays in ethernet transmission of start-scan commands,
so I'm still not sure why waiting 100ms for the end of an integration
is such a problem.
> However, the innermost hardware loop in the old FPGA design was
> such that the FPGA was "blind to the world" while it added samples up
> into integrations, i.e., you couldn't interrupt it, so if a long
Actually, the device driver could interrupt it whenever it wanted
(otherwise, how would your intra-scan workaround have worked?). There
was no stopping clause in the state machine, but there was a reset bit
that simply reset all of the gates in the state machine.
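
To make that concrete, here is a rough sketch of what such a
driver-side abort amounts to. This is not the actual gb-ccb driver
code; the register name and bit position are invented for
illustration, and it assumes a memory-mapped control register:

#include <stdint.h>

#define CCB_CTRL_RESET  (1u << 0)   /* hypothetical reset bit */

/* Abort whatever integration is in progress, however long it is. */
static void ccb_abort_integration(volatile uint32_t *ctrl_reg)
{
    *ctrl_reg |= CCB_CTRL_RESET;    /* assert reset on all gates */
    *ctrl_reg &= ~CCB_CTRL_RESET;   /* release it; the state machine
                                       restarts from its idle state */
}

The point is that there is nothing for software to wait on: asserting
the reset bit takes effect regardless of where the accumulation loop
happens to be.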
> integration time happened to be set (for a good reason, or by
> accident), then you were stuck there adding up samples for
> potentially a very long time. There was also a cal diode delay which,
> depending on where you were in the state machine, might also get
> added in before the scan interrupt.
No. As mentioned above, the start-scan bit of the control register
directly hit the reset lines on all of the gates in the state machine,
and thus reset the state machine without any regard for what it might
be doing at the time. There was no waiting at the hardware level. All
timing at this level was imposed by the device driver, usually to
avoid race conditions.
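
Again purely as an illustrative sketch, with invented names, the
start-scan path amounts to something like the following; the write
takes effect in hardware immediately, and any settling delay (e.g. for
the cal diode) is one that the driver chooses to impose afterwards:

#include <stdint.h>
#include <unistd.h>                      /* usleep(), for the illustrative delay */

#define CCB_CTRL_START_SCAN  (1u << 1)   /* hypothetical start-scan bit */

static void ccb_start_scan(volatile uint32_t *ctrl_reg, unsigned settle_us)
{
    *ctrl_reg = CCB_CTRL_START_SCAN;     /* hits the reset lines of every gate,
                                            regardless of the current state */
    if (settle_us > 0)
        usleep(settle_us);               /* driver-imposed delay, e.g. to let the
                                            cal diode settle and avoid a race with
                                            the next integration */
}
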
> Melinda & I invented the fake pre-YGOR-scan CCB-intra-scan in the
> *manager* to a) set the cal diode rise & fall delays to zero; b) set
> the integration time very small; both so that the scan would start
> within a small time (I believe 1 ms or so) after the 1pps scan-start
> signal. Again cf. my draft use cases, sections 2.7 and 3.3.
Clearly it makes no sense to work around what might be arbitrary
constraints in the hardware/driver, while the design of the latter
remains fluid. Even after we have a concrete design, if there is a
feature at the hardware/driver level that makes life difficult in the
manager, then first we should see if it can be fixed at the lower
level before figuring out a complicated workaround at the higher
levels. In the new hardware scheme, virtually all of the backend
hardware is implemented in an FPGA, and can thus be reprogrammed as
needed.
> For a typical point-source radiometry measurement we would do 100 ms
> integrations, in order to have enough statistics in a single on-source
> scan (one scan = 1 to 10 seconds in length, roughly) to estimate the
> noise level in the data. For on-the-fly mapping we would have
> integrations of 1 ms to many tens of ms, depending on the slew speed.
Understood.
Martin