[Gb-ccb] CCB Integrations
Martin Shepherd
mcs at astro.caltech.edu
Thu Aug 21 23:33:23 EDT 2003
On Thu, 21 Aug 2003, Brian Mason wrote:
> Martin Shepherd writes:
> > On Thu, 21 Aug 2003, Brian Mason wrote:
> > > for configurable manager level integration. I do have one question
> > > however: is sample_dt now a dummy variable always having a value of 1?
> > > (with units of 100 ns)
> >
> > No. This parameter doesn't only set the A/D sample time. It also sets
> > the interval between steps in the phase-switch state-machine. As such,
it makes sense to continue interpreting it as documented, and simply
> > take the fact that integration up to this interval is now performed
> > digitally instead of by an analog integrator, as an internal hardware
> > implementation detail that only the hardware needs to know about.
>
> Right. By digitally, I take it you mean "in digital electronics
> internal to the ADC".
Actually, it is "in digital electronics internal to the FPGA", not
internal to the ADC. The FPGA will read out the ADCs every 100ns and
add the results to registers within the FPGA. Once sample_dt of these
100ns ADC samples have been accumulated, the contents of those
registers will be transferred to the main integration registers, and
the registers will be cleared. In other words, I will effectively be
using the 10MSPS ADC, in combination with part of the FPGA, to
simulate a 40KSPS (25us) ADC.
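To make the two-stage accumulation concrete, here is a rough C sketch
of the scheme described above. The names (adc_tick, pre_accum,
main_accum) and the channel count are illustrative placeholders of
mine, not the actual firmware design:

    #include <stdint.h>

    #define NUM_CHANNELS 8   /* hypothetical number of detector channels */

    /*
     * One 100ns tick of the two-stage accumulation. On every tick the
     * sample from each ADC is added to a small pre-integration register.
     * Once sample_dt ticks have been accumulated, the partial sums are
     * handed off to the main integration registers and cleared.
     */
    void adc_tick(const int16_t sample[NUM_CHANNELS],
                  int32_t pre_accum[NUM_CHANNELS],
                  int64_t main_accum[NUM_CHANNELS],
                  unsigned *tick, unsigned sample_dt)
    {
        for (int i = 0; i < NUM_CHANNELS; i++)
            pre_accum[i] += sample[i];        /* 10MSPS accumulation */

        if (++*tick == sample_dt) {           /* one simulated slow-ADC sample */
            for (int i = 0; i < NUM_CHANNELS; i++) {
                main_accum[i] += pre_accum[i];
                pre_accum[i] = 0;
            }
            *tick = 0;
        }
    }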
> My question really is, is it correct to say that
> in principle one could set sample_dt to something longer than 1 (times
> 100 ns)?
I think that you misread what I was saying. The value of sample_dt
will nominally be 25, as before. This value still refers to the number
of 100ns clock cycles that make up the interval between steps in the
phase-switch state-machine. It happens that this is also the number of
10MSPS ADC samples integrated within this interval.
If you are asking whether the sample time of the 10MSPS ADC will be
configurable, the answer is no. I can't see any benefit to making this
variable. 100ns is still the basic granularity of the system timing.
The design of the analog electronics relies on this.
Note, however, that whereas in the old scheme the nominal value of
sample_dt was both the optimal value and the lowest value that the
ADC could accommodate, in the new scheme sample_dt could be reduced
below 25, provided that the phase switches could switch faster than
once every 25us (50us when switching alternate switches in each step).
> A follow on question: Is 12 bits the effective (not nominal)
> resolution of the 10 MHz digital ADC?
Yes, it is the effective resolution. The nominal resolution of the ADC
is 14 bits.
Note that the 12-bit effective resolution of the ADC covers a
symmetric range of negative and positive voltages, whereas the
square-law detectors only output positive voltages. Thus if I end up
not offsetting the output of the square-law detectors, the actual
effective resolution will be 11 bits, not 12. I haven't yet decided
whether such offsetting is practical or advisable, so I am hedging my
bets and basing dynamic-range calculations on 11 bits of resolution,
and overflow calculations on 12 bits. As such, while the predicted
performance that I will be advertising will meet the required
specifications, it may turn out during commissioning tweaks that
better performance can be garnered.
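For reference, the kind of arithmetic involved is sketched below in C.
The one-second integration length is just an example of mine, not a
CCB specification:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const int overflow_bits = 12;  /* symmetric effective range of the ADC   */
        const int dynamic_bits  = 11;  /* positive half only, with no offsetting */

        /* Example only: samples accumulated in a 1-second integration at
         * one 10MSPS ADC sample every 100ns. */
        const double samples = 1.0 / 100e-9;

        /* Accumulator width needed so that a worst-case full-scale sum
         * cannot overflow: sample bits plus log2 of the number of samples. */
        const double accum_bits = overflow_bits + ceil(log2(samples));

        printf("dynamic range per sample: %.0f levels\n", pow(2.0, dynamic_bits));
        printf("accumulator width needed: %.0f bits\n", accum_bits);
        return 0;
    }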
The reasons that I am not clear on whether offsetting is a good idea
are the following:
1. Can one generate a stable offset voltage that won't add noise or
instabilities? I think that the answer is yes, provided that the
offset voltage is derived from the reference voltages that are
used by the ADCs.
2. Is the minimum system temperature sufficiently well known and
stable that we can accurately set the offset voltage to this,
and leave it that way? I don't know the answer to this, but
I think that we would be forced to assume an offset that was
somewhat less than the predicted minimum system temperature,
and thus lose some of the potential increase in resolution.
3. There are indications in the ADC datasheet that the noise
performance of the ADC is best around its central zero point, and
worst at the positive and negative extremes of the range. Thus if
one were to offset the detected signal such that the minimum system
temperature was translated to the most negative end of the ADC
range, the weakest signals would end up being measured with the
worst sensitivity possible.
Thus, if offsetting at all, it would make most sense to move the
minimum signal to the zero-point of the ADC, rather than to the most
negative extreme, and only use a bit or so of the negative half of
the ADC's input voltage range to accommodate variations in the
minimum system temperature. This would still be better than not
offsetting at all, since the bits otherwise wasted on covering the
range of detector voltages between zero and the voltage corresponding
to the minimum system temperature would then cover part of the actual
signal range. However, I estimate that the improvement in the
effective resolution would only be on the order of 13% for the 1cm
receiver, and 20% for the 3mm receiver.
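To illustrate the kind of estimate involved, the short C sketch below
computes the fractional resolution gain under a simple model in which
the offset signal is rescaled to fill the positive half of the ADC
range. Both the model and the numbers are placeholders of mine for
illustration; they are not the actual receiver figures behind the 13%
and 20% estimates:

    #include <stdio.h>

    int main(void)
    {
        /* Placeholder detector voltages, for illustration only. */
        const double v_max    = 1.00;  /* top of the required signal range          */
        const double v_offset = 0.12;  /* offset, somewhat below the minimum signal */

        /* If the offset signal is rescaled so that (v_max - v_offset) fills
         * the positive half of the ADC range, each ADC step then spans a
         * smaller slice of the real signal range. */
        const double gain = v_max / (v_max - v_offset);

        printf("fractional improvement in effective resolution: %.0f%%\n",
               (gain - 1.0) * 100.0);
        return 0;
    }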
Martin