[evla-sw-discuss] Central server

Sonja Vrcic sonja.vrcic at nrc.gc.ca
Mon May 12 17:17:33 EDT 2008


Here is an attempt to list the files that need to be stored on the 
common WIDAR server (or common EVLA server):

FS1 - CMIB software and the configuration files loaded during 
initialization (e.g., Station Board tick delays and clock-edge 
selection).

FS2 - Test tools. The software test tools (Test Builder, Test Executor, 
GUIs, RTDD) are downloaded and launched from a web page using Java Web 
Start. We need a central repository for the executable versions of the 
software tools used for testing, including the web page used to launch 
the applications.
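A Java Web Start deployment of this kind is driven by a JNLP descriptor served alongside the jars from the launch web page. As a rough sketch only (the codebase URL, jar name, and main class below are placeholders, not the actual WIDAR deployment):

```
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical JNLP descriptor; codebase, jar, and class names are placeholders. -->
<jnlp spec="1.0+" codebase="http://widar-server.example/widar/" href="test-executor.jnlp">
  <information>
    <title>WIDAR Test Executor</title>
    <vendor>NRC-HIA / DRAO</vendor>
  </information>
  <resources>
    <j2se version="1.5+"/>
    <jar href="test-executor.jar" main="true"/>
  </resources>
  <application-desc main-class="TestExecutor"/>
</jnlp>
```

The point for the central server discussion is that the codebase URL baked into such descriptors is one more thing that must resolve to a single, agreed-upon location.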

FS3 - Configuration files. We will need an "official" repository for 
"proven" configuration files so that test cases can be repeated over 
and over (regression testing). "Golden files" could be stored here. In 
addition, users should be able to store configuration files on their 
own workstations and laptops. If the Test Builder and Test Executor do 
not allow the user to choose between the local machine and the central 
server, then each user should have a subdirectory on the central 
server.

FS4 - Output files. Users should be able to store Station Board 
output (and, in the future, Baseline Board output) on the central 
server or on the local machine. IntelligentDiff and RTDD should be able 
to read files stored on the local machine and on the EVLA server. (RTDD 
can be used to display information stored in the files.)
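Since IntelligentDiff works on files rather than streaming data (as Kevin explains below), the core operation is a file-against-golden-file comparison. A minimal illustrative stand-in, not the actual IntelligentDiff implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Sketch of a file-based regression check: a Station Board output file
// is compared byte-for-byte against a stored "golden" file.
public class GoldenDiff {

    /** Returns true if the test file's contents match the golden file exactly. */
    public static boolean matchesGolden(Path testFile, Path goldenFile)
            throws IOException {
        byte[] test = Files.readAllBytes(testFile);
        byte[] golden = Files.readAllBytes(goldenFile);
        return Arrays.equals(test, golden);
    }
}
```

Because both arguments are just paths, the same comparison works whether the files sit on a local disk or on an NFS-mounted central directory, which is exactly why the mount point must look the same everywhere.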

It seems to me that at least two servers (machines) are needed: one for 
FS1 and the other for FS2, FS3 and FS4.

Configuration files for regression tests and manufacturing tests could 
be stored in the software repository (under version control). To use 
configuration files that are under version control, a user would check 
out the files and store them either on the central server or on the 
local machine. In addition, users will create a number of test cases 
that may be stored on the common server but not in the software 
repository.
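If the tools are to read configuration files from either a local path or the central server, one simple approach (assuming the server exposes the shared directory over HTTP, as the Java Web Start setup already requires) is to accept either a plain path or a URL. The class and method names here are hypothetical, just to illustrate the idea:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: lets a tool read a configuration file from either
// the local machine (plain path) or the central server (http/file URL).
public class ConfigSource {

    /** Opens an input stream for either a local path or an http/file URL. */
    public static InputStream open(String location) throws IOException {
        if (location.startsWith("http://") || location.startsWith("file:")) {
            return new URL(location).openStream();
        }
        return Files.newInputStream(Paths.get(location));
    }
}
```

With something like this in the Test Builder and Test Executor, the "local machine vs. central server" choice mentioned under FS3 becomes a property of the location string rather than of the tool.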

Sonja

Kevin Ryan wrote:
> I should explain, for those who may not know, that IntelligentDiff 
> works with files - not streaming data like Dave's RTDDs - (test data 
> files are compared against so-called 'golden' files). So the Station 
> Board outputs have to be written to disk one way or the other.
>
> Kevin
>
> On May 6, 2008, at 5:16 PM, Bill Sahr wrote:
>
>> While I want to think more about the issues Kevin has raised, one
>> point does confuse me.  Kevin, you speak of CMIBs writing data
>> products to a mounted NFS filesystem.  I thought the whole point
>> of the CMIB User Tasks in combination with Station Board Output
>> Listener tasks was to avoid having CMIBs write data products
>> directly to an NFS filesystem.  Has this concept been abandoned?
>>
>> Bill
>>
>> Kevin Ryan wrote:
>>> Hi Michael,
>>>
>>> Well, I thought I was thinking long term. :)
>>>
>>> Maybe I was asking for the wrong thing.   We want to be able to point
>>> to a directory (say /home/widar) and know that no matter what process
>>> is accessing that directory, be it a GUI on someone's laptop far away
>>> (via URL), or a CMIB writing data products via NFS to a file, it will
>>> be the same location for everyone and everything.
>>>
>>> What brought this up is the Test Executor GUIs that have directory
>>> path fields.  Those paths are relative to the CMIB's boot server
>>> (CMIBs don't have their own disks); the problem arises when we add
>>> more boot servers.  If they all mounted the same /home/widar
>>> directory, then it would not matter.
>>>
>>> I'm pretty sure this is something that will be required (or at least
>>> be handy) even on the final system for things like property and
>>> configuration files and perhaps temporary CMIB data products for
>>> maintenance/testing.  It would be nice to keep other things like the
>>> web start applications in this directory also.
>>>
>>> Does this make any more sense?  I should have said central
>>> 'directory' rather than 'server'.  How the James Gang implements this
>>> directory would be up to them.
>>>
>>> Kevin
>>>
>>> On May 6, 2008, at 2:31 PM, Michael Rupen wrote:
>>>
>>>> Hi Kevin,
>>>>    are you thinking of the 4-station PTC or the 10-station system
>>>> or beyond?
>>>> In the long run it's not obvious to me that many of these files
>>>> should reside
>>>> on a single central file system, though initially this may be useful.
>>>>
>>>>            Michael
>>>>
>>>> On Tue, 6 May 2008, Kevin Ryan wrote:
>>>>
>>>>> Hi gang,
>>>>>
>>>>> This is more of a James question but I think that many on this list
>>>>> might be interested in the outcome.
>>>>>
>>>>> Work on the WIDAR correlator TestExecutor software suite has gotten
>>>>> me thinking about how (or if) we will implement a 'Central Server'.
>>>>> We want configuration files (the ones that set up the various
>>>>> correlator modes), data output files such as those generated by the
>>>>> Station Boards during OTS testing, property files and the Java Web
>>>>> Start GUI application files all to reside on one central file system
>>>>> that is mounted by all processors in the correlator including each of
>>>>> the 256 CMIBs.  ... I think.
>>>>>
>>>>> Presently, the Java Web Start application files and the system
>>>>> property files reside on 'filehost'(?) at '/home/asg/www/widar'.
>>>>> This translates to the URL 'http://www.aoc.nrao.edu/asg/widar/' used
>>>>> by Java for access to these files.
>>>>>
>>>>> Also, presently, the CMIBs access a directory on 'cmibhost' (the CMIB
>>>>> bootserver in Bruce & Kevin's office) called '/opt/widar' where
>>>>> configuration files are kept for now.  When the system goes online
>>>>> there will be more than one CMIB bootserver which will cause
>>>>> confusion since there will no longer be a single central '/opt/widar'
>>>>> directory.
>>>>>
>>>>> It would be nice if we could combine the '/home/asg' and '/opt/widar'
>>>>> areas on 'filehost' and 'cmibhost' into one area common to (and
>>>>> mounted on) all processors in the correlator.  This machine must also
>>>>> be a web server.
>>>>>
>>>>> I don't think this machine should be one of the online units like
>>>>> MCCC.  I don't even know if it has to reside in the correlator room,
>>>>> but keep in mind that there will be occasions where CMIBs will be
>>>>> writing output data products to it via NFS.
>>>>>
>>>>> Suggestions James or anyone?  Is this something that the mirrored (or
>>>>> whatever it's called) 'filehost' at the VLA could do?  Does the EVLA
>>>>> already have something like this running out there that we could
>>>>> share?
>>>>>
>>>>> Kevin
>>>>>
>>>>> _______________________________________________
>>>>> evla-sw-discuss mailing list
>>>>> evla-sw-discuss at listmgr.cv.nrao.edu
>>>>> http://listmgr.cv.nrao.edu/mailman/listinfo/evla-sw-discuss
>>>>>
>

-- 
Sonja Vrcic
Software Engineer
National Research Council
Herzberg Institute of Astrophysics
Dominion Radio Astrophysical Observatory,
Penticton, BC, Canada
Tel: (250) 490-4309 / (250) 493-2277 ext. 309
Sonja.Vrcic at nrc-cnrc.gc.ca
http://www.drao-ofr.hia-iha.nrc-cnrc.gc.ca/