Information on the pipeline software setup

Lindsey Davis wrote on 04/10/2012 08:00 PM:

Jeff left for Europe this morning. He asked me to get back to you on
the infrastructure software setup.

Currently the infrastructure software does the following. Given the UID of
the parent observing unit set status entity it:

1. pulls all the relevant project / status entities out of the Archive
2. creates a pipeline processing tree in memory
3. constructs the pipeline processing request(s) from information in the
   project / status entities and standard intents / procedure files
4. creates a processing directory structure on disk
5. saves the status entities and the pipeline processing request to disk
6. downloads the ASDMs to disk

At this point the pipeline software is ready to execute in CASA.

The infrastructure software depends on the ACS software packaging and build
system but does not depend on the ACS manager or CORBA. It also depends on
the ALMA data models (Enumerations, APDM, ASDM, ASDMBinaries) and other
ICD software. Because of the ACS / data model environment dependencies
the infrastructure software is 32 bit. The best setup to support this would
be a 32 bit machine with access to the Lustre file system. This is the setup
used in Chile.

What is needed is the following:

1. The ACS tarball tagged ACS-10_1_0-pre-20120312-TEMP installed in the
usual way on the infrastructure machine. The account should be set up
to put the ACS software path in the user path by default. ACS people
should be able to help with this.

2. The following pieces of ARCHIVE software from the HEAD built against
the above ACS tarball. This is new ACS CORBA / Manager software for
reading from the Archive.

o ARCHIVE/Database
o ARCHIVE/DataPacker
o ARCHIVE/AsdmHandler

3. The ICD directory from the HEAD except for

o ICD/HLA/APDM
    o use tag TRUNK-R9APDMChanges-201203-BeforeMerge
o ICD/PIPELINE
    o use tag PipelineDeployment-2012-04-SCO-KEEP

built against the above. There are currently minor build issues with
ICD/OBOPS which don't affect the infrastructure. The reason for the
special APDM tag is that the APDM software is out of step
with the contents of the Archive and will be until the migration in May.

4. PIPELINE/Science
    o use tag PipelineDeployment-2012-04-SCO-KEEP

built against the above

5. Definition of 3 environment variables

o $SCIPIPE_ROOTDIR
    o This should be set to the root of the pipeline processing directory
      structure on Lustre. All pipeline requests will generate a
      pipeline processing sub-directory tree beneath this root.

o $SCIPIPE_LOGDIR
    o This should be set to the root directory for the infrastructure logs
      which should be parallel to or above the processing root directory.
      Each infrastructure run creates a log. These are not user logs. They
      are debugging logs and should be cleaned out by the system after a
      reasonable period. These logs are quite small, not like the observing
      system logs.

o $SCIPIPE_SCRIPTDIR
    o This is the home directory for the intents and script XML files. Standard
      intents and procedure files are kept here and I will provide sample
      ones to start with. This directory should be separate from the
      processing and log directory trees. Eventually these will be auto-
      generated and / or stored in the Archive but for the time being
      live here.
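A minimal sanity check for these three variables can be written as follows. The variable names come from the setup notes above; the check itself is just an illustrative sketch, not part of the delivered software:

```python
import os

# The three variables the infrastructure (and heuristics) side expects.
REQUIRED = ("SCIPIPE_ROOTDIR", "SCIPIPE_LOGDIR", "SCIPIPE_SCRIPTDIR")

def check_pipeline_env(environ=os.environ):
    """Return the pipeline directory settings, or raise if any are unset."""
    missing = [v for v in REQUIRED if v not in environ]
    if missing:
        raise EnvironmentError(
            "missing pipeline variables: %s" % ", ".join(missing))
    return {v: environ[v] for v in REQUIRED}
```

Running this once at the start of an infrastructure session gives a clearer error than a failure deep inside a pipeline run.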

6. A properly defined archiveConfig.properties file is needed in the standard
   place, as described on the Twiki linked below.
 
You can find more details about this here

http://almasw.hq.eso.org/almasw/bin/view/PIPELINE/ChileMarch2012

--------------------------------

On the heuristics side of the installation (64 bit machine) you need:

1. The same 3 environment variables, $SCIPIPE_ROOTDIR, $SCIPIPE_LOGDIR,
and $SCIPIPE_SCRIPTDIR, as were defined for the infrastructure, pointing
to the same directory tree

2. The heuristics scripts downloaded with the appropriate tag

3. An additional environment variable, $SCIPIPE_HEURISTICS, to tell you
where the heuristics scripts are installed. This should look like

<yourpath>/PIPELINE/Heuristics/src

e.g. on my machine

/export/home/skye/working/acs101/HEURISTICS/PIPELINE/Heuristics/src

4. In your CASA ipythonrc file:

execute sys.path.insert(0, os.path.expandvars("$SCIPIPE_HEURISTICS"))

There are other ways to do this, but the above works.
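One such alternative, outside of ipythonrc (e.g. from a startup script), is sketched below. It assumes $SCIPIPE_HEURISTICS is already defined as in step 3; the function itself is illustrative, not part of the delivered software:

```python
import os
import sys

def add_heuristics_to_path(environ=os.environ, path=sys.path):
    """Prepend $SCIPIPE_HEURISTICS to the import path if it is a directory.

    Returns the resolved heuristics directory (empty string if unset).
    """
    heuristics = environ.get("SCIPIPE_HEURISTICS", "")
    if os.path.isdir(heuristics) and heuristics not in path:
        path.insert(0, heuristics)
    return heuristics
```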

5. casa-stable 18527

There is a newer stable, and the demo seemed to work with it, but it
is very much in flux, so it would be safer to use 18527. The tagged
version of the pipeline software is all referenced to 18527, with
modifications to match the new system ongoing (filler, flagging,
channel bandpass, etc.).

6. Template intents and procedure files. These go in $SCIPIPE_SCRIPTDIR.
See the attached file intents.tar. The ones of interest
are *_if and *_sd for interferometry and single dish.

-----------------------------------

With respect to the heuristics deployment there was no issue giving
demos at SCO because I used the same account on the infrastructure
and reduction machines although the environment setup was different.

However when some users tried this from their accounts there were
permission issues on the heuristics code. Just a matter of group
read and execute access, but the error messages were confusing,
e.g. "heuristics not found" rather than "no permission".

Just a warning.

-----------------------------------

Here are the specs for a good demo project. It is a GRB project with
relatively small ASDMs which download quickly (minutes in Chile) and
run in a relatively short time, 10-15 minutes or so (again in Chile).

1. observing unit set status uid. This is the triggering observing unit
set status uid

uid://A001/X74/X29

2. the ASDM of interest

uid://A002/X30a93d/X43e

There is another ASDM with this data set, and it is downloaded and
listed in the pipeline processing request by the infrastructure, as
it should be. However, I delete it from the pipeline processing request
because it was not certified OK by Eric Villard and it would double the
time of the demo.

3. the parent observing project uid

uid://A001/X3b/X234

4. the parent project status uid

uid://A001/X3b/X237

5. the containing observing unit set status uid (NOT THE TRIGGER!)

uid://A001/X3b/X238

All this should be traceable in the project tracker.

George Kosugi or Takeshi Nakazato can send Dirk Muders the corresponding
information for the single dish project.


Let me know if you have questions or issues.


                                                                Lindsey

-- DirkPetry - 12 Apr 2012

  • intents.tar: File by Lindsey Davis containing template intents and procedure files