[evla-sw-discuss] MIB to screen communications
Barry Clark
bclark at aoc.nrao.edu
Thu May 30 18:59:15 EDT 2002
I've been looking at the communications between the MIB and the screens
package. All of the possible ways of doing it seem to me to have
significant disadvantages.
1. VT100 screen implementation, a la VLBA. This doesn't fit well with
Rich Moeser's discovery and hierarchy scheme, which is regarded
as highly desirable. Also, we would not consider this for logging monitor
data, meaning that we would have to have a disjoint set of software for
reading the monitor log.
2. HTTP, XML, and other ASCII-based implementations. Very verbose,
so we might not want to store these things in the monitor log. Also,
this is not very easily adaptable to the sort of monitor data version
control that we have found valuable on the VLBA. (On the VLBA, when
(not "if" but "when") we change the format of an entry in the monitor
log, we write a routine that converts the old version to the new, and
use this routine so that the software dealing with monitor data
extraction (or, in this case, with real-time data display) has been
working with the new form well before entries actually get logged in
the new form. Stacking these routines means very old monitor data
remains readable with no additional maintenance effort. There is a
sketch of this stacking after the list.)
3. RMI. In the standard invocation this goes over TCP sockets, which
makes things slow and makes recovery from a network glitch difficult. I
don't know if this could be implemented over a UDP socket, though just
serializing the object and broadcasting it could obviously be done
(wastes a lot of space retransmitting all the methods). Not clear
to me how this would do as an archive format. Storing a Java serialized
object in the monitor archive is probably pretty inefficient. The
format updating mentioned above can be done, but, as usual with Java,
it requires a lot of boring typing that has to be done exactly right.
4. XDR. This was developed for RPC, so rpcgen will put together a
program to stuff your data into a message, blast it off to somebody
else, decode it at the other end, execute the command, and ship the
answer back. One doesn't have to use the whole schmear, and can put
a program together oneself to pack and unpack data, but it does get
to be work. It is also not very efficient. To simplify conversions,
they force all data items to occupy a multiple of four bytes in the
eXternal representation. Since we will have a lot of stuff which has
natural lengths of two bytes (e.g., A/Ds), that stuff will get expanded
(there is a sketch of this after the list). We might well have to write
the underlying conversion routines for Nucleus, since it doesn't come
with RPC, but that is not a real big deal.
5. Straight binary with byte reordering. This is commonly done for
all the stuff in the various network protocol headers. Macros and
definitions for bytes, shorts, and ints are given, e.g., in Sun's
/usr/include/sys/byteorder.h. We'd have to extend these to floats.
This is the most compact form, but it's a little non-trivial to
use. We can't just slap a structure definition on top of the data
stream, as we do with the VLBA, because we want to accommodate a larger
variety of machines, with different structure alignment requirements
as well as different endianness. (A sketch of this sort of packing
follows the list.)
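
To make the version-stacking idea in item 2 concrete, here is a rough
sketch in C (all the names are made up for illustration, not taken from
any existing VLBA code): each format change gets one small converter,
and an old record is brought up to date by running it forward through
the chain.

    /* Hypothetical sketch of stacked monitor-record format conversions.
     * Each time the record format changes, one converter gets added;
     * an old record is brought up to date by running forward through
     * the chain, so only the newest layout needs ongoing maintenance. */
    #define CURRENT_VERSION 3

    typedef struct {
        int  version;    /* format version of this record           */
        char data[256];  /* packed monitor data, layout per version  */
    } MonRecord;

    static void convert_v1_to_v2(MonRecord *r) { /* rearrange fields */ r->version = 2; }
    static void convert_v2_to_v3(MonRecord *r) { /* rearrange fields */ r->version = 3; }

    /* Bring any record up to the current format before handing it to
     * the extraction or display software.                            */
    static void update_record(MonRecord *r)
    {
        if (r->version == 1) convert_v1_to_v2(r);
        if (r->version == 2) convert_v2_to_v3(r);
        /* r->version is now CURRENT_VERSION. */
    }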
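
On the four-byte padding in item 4, a small sketch (assuming the
standard Sun RPC/XDR library, which under Nucleus we would have to
supply ourselves): a two-byte A/D reading still costs four bytes in the
external representation.

    /* Sketch: packing a 16-bit A/D reading with the XDR routines.
     * xdr_short() still emits four bytes on the wire, which is where
     * the expansion of two-byte quantities comes from.               */
    #include <rpc/rpc.h>
    #include <stdio.h>

    int main(void)
    {
        char  buf[64];
        XDR   xdrs;
        short adc = 0x1234;      /* a two-byte A/D reading */

        xdrmem_create(&xdrs, buf, sizeof(buf), XDR_ENCODE);
        if (!xdr_short(&xdrs, &adc))
            return 1;

        /* Prints 4, not 2: every item occupies a multiple of 4 bytes. */
        printf("encoded length = %u bytes\n", xdr_getpos(&xdrs));
        return 0;
    }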
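
And for item 5, a rough sketch of what hand-packing a monitor point
might look like (the record layout here is invented for illustration):
pack field by field into a byte buffer in network byte order instead of
overlaying a structure, so alignment and padding never enter into it,
and handle floats by reinterpreting the IEEE bits as a 32-bit integer.

    /* Sketch: hand-packing a monitor point in network byte order.
     * No structure is overlaid on the stream, so host alignment rules
     * and endianness don't matter; only the byte offsets do.          */
    #include <arpa/inet.h>   /* htons, htonl */
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical monitor point: 16-bit point id, 16-bit raw A/D value,
     * 32-bit timestamp, 32-bit IEEE float calibrated value.             */
    static size_t pack_monitor_point(unsigned char *buf, uint16_t id,
                                     uint16_t raw, uint32_t time, float value)
    {
        uint16_t id_n   = htons(id);
        uint16_t raw_n  = htons(raw);
        uint32_t time_n = htonl(time);
        uint32_t bits;

        memcpy(&bits, &value, sizeof bits);  /* IEEE-754 bits as a 32-bit int */
        bits = htonl(bits);                  /* the "extend to floats" part   */

        memcpy(buf + 0, &id_n,   2);
        memcpy(buf + 2, &raw_n,  2);
        memcpy(buf + 4, &time_n, 4);
        memcpy(buf + 8, &bits,   4);
        return 12;                           /* bytes written */
    }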
My personal tendency would be to take approach 5, because it uses almost
no fancy software, and there are therefore fewer bushes behind which
problems can lie in ambush. Anybody out there have any other approaches,
or a compelling argument for another one of these?