Client¶
The provided command line client daqOcmCtl can interact with daqOcmServer, which is described in the following sections.
Environment Variables¶
$OCM_REQUEST_EP
    Specifies the default OCM request/reply endpoint, e.g. zpb.rr://127.0.0.1:12345/.
$OCM_PUBLISH_EP
    Specifies the default OCM publish endpoint, e.g. zpb.ps://127.0.0.1:12345/.
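A minimal sketch of how these variables can be used to avoid repeating endpoint options on every invocation (the endpoint addresses are placeholders; std.getstate is documented below):

export OCM_REQUEST_EP="zpb.rr://127.0.0.1:12081/"
export OCM_PUBLISH_EP="zpb.ps://127.0.0.1:12082/"

# Subsequent invocations pick up the default endpoints from the environment.
daqOcmCtl std.getstate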
Command Line Arguments¶
Exhaustive command line help is available under the option --help. The following list enumerates a subset of common commands.
Synopsis:
daqOcmCtl [options] <command> [options] <command-args>...
Standard interface commands:

std.init
    Sends the Init() command.
std.enable
    Sends the Enable() command.
std.disable
    Sends the Disable() command.
std.exit
    Sends the Exit() command.
std.setloglevel <logger> <level>
    Sends the SetLogLevel() command with provided logger and level.
std.getstate
    Sends the GetState() command.
std.getstatus
    Sends the GetStatus() command.
std.getversion
    Sends the GetVersion() command.
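For example, a typical standard-interface sequence could look like the following sketch (the logger name passed to std.setloglevel is a placeholder):

daqOcmCtl std.getversion
daqOcmCtl std.getstate
daqOcmCtl std.setloglevel <logger> DEBUG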
Data Acquisition commands:

daq.start [options] <primary-sources> <metadata-sources>
    Sends the StartDaq() command with provided arguments.
daq.stop [options] <id>
    Sends the StopDaq() command with provided arguments.
daq.forcestop [options] <id>
    Sends the ForceStopDaq() command with provided arguments.
daq.abort [options] <id>
    Sends the AbortDaq() command with provided arguments.
daq.forceabort [options] <id>
    Sends the ForceAbortDaq() command with provided arguments.
daq.getstatus <id>
    Sends the GetDaqStatus() command to query the status of the Data Acquisition identified by <id>.
daq.awaitstate <id> <state> <substate> <timeout>
    Sends the AwaitDaqState() command with provided arguments.

    <id>
        Data Acquisition identifier.
    <state>
        Data Acquisition state to await.
    <substate>
        Data Acquisition substate to await.
    <timeout>
        Time in seconds to wait for the state to be reached, or until it can no longer be reached.
daq.updatekeywords <id> <keywords>
    Sends the UpdateKeywords() command with provided arguments.
daq.getactivelist
    Sends the GetActiveList() command.
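A hedged sketch of a complete Data Acquisition session follows; the source arguments, identifier and state names are placeholders, and the exact option syntax should be confirmed with daqOcmCtl --help:

# Start a Data Acquisition with placeholder data sources.
daqOcmCtl daq.start "<primary-sources>" "<metadata-sources>"

# Query and await its state using the identifier of the new Data Acquisition.
daqOcmCtl daq.getstatus <id>
daqOcmCtl daq.awaitstate <id> <state> <substate> 60

# Stop it when done, or abort it instead.
daqOcmCtl daq.stop <id>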
Server¶
The main OCM application is daqOcmServer, which implements all the Data Acquisition control and coordination features. The interface to control Data Acquisitions is covered in section The Data Acquisition Process, whereas the much simpler application state control is described in this section.
Changed in version 2.0.0: daqOcmServer interacts with daqDpmServer to execute the merge process that creates the final Data Product. daqOcmServer stores and loads relevant Data Acquisition state in its Workspace to be able to continue after an application restart.
State Machine¶
The daqOcmServer state machine is shown in Fig. 14 with states and transitions described below.
States

- On
    Application is running.
- Off
    Application is not running.
- NotOperational
    Composite state meaning that daqOcmServer is running and able to accept StdCmds requests, but is not yet fully operational. For daqOcmServer this means in particular that the OcmDaqControl interface is not registered and won't accept any requests.
    - NotReady
        This is the first non-transitional state. The current implementation has already loaded the configuration and registered the stdif.StdCmds interface at this point.
    - Ready
        Has no particular meaning for daqOcmServer.
- Operational
    In the transition to Operational daqOcmServer registers the OcmDaqControl interface and is ready to perform Data Acquisitions.
    - Idle
        Indicates that there are no active Data Acquisitions.
    - Active
        Indicates that there is at least one active Data Acquisition.

Note

Active does not mean that daqOcmServer is busy and cannot handle additional requests. It simply means that at least one Data Acquisition is not yet finished.
Since merging is not yet implemented, the definition of active extends only up to the point where the Data Acquisition is stopped or aborted.
Transitions

- Init
    Triggered by Init() request.
- Enable
    Triggered by Enable() request.
- Disable
    Triggered by Disable() request.
- Stop
    Triggered by Stop() request.

    Note

    The behaviour is currently unspecified if this request is issued while OCM is in state Active.
- AnyDaqActive
    Internal event that is created when any Data Acquisition becomes active.
- AllDaqInactive
    Internal event that is created when all Data Acquisitions are inactive.
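The externally triggered transitions correspond to the standard interface client commands, so the state machine can be driven from the command line, for instance (a sketch, assuming the request endpoint is configured):

# Drive daqOcmServer towards Operational and check the result.
daqOcmCtl std.init
daqOcmCtl std.enable
daqOcmCtl std.getstate

# Return to NotOperational.
daqOcmCtl std.disable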
MAL URI Paths¶
The following tables summarize the request/reply service paths and topic paths for pub/sub.
Request/reply services:

| URI Path | Root URI Configuration | Description |
|---|---|---|
|  |  | Standard control interface |
|  |  | Data Acquisition control interface |

Pub/sub topics:

| Topic Type | URI Path | Root URI Configuration | Description |
|---|---|---|---|
|  |  |  | Standard interface status topic providing information on OCM overall state. Same information is provided with the command. |
|  |  |  | Data Acquisition status topic |
Command Line Arguments¶
Command line argument help is available under the option --help.
--proc-name ARG | -n ARG
    (string) [default: ocm] Process instance name.
--config ARG | -c ARG
    (string) [default: config/daqOcmServer/config.yaml] Config Path to the application configuration file, e.g. --config ocs/ocm.yaml (see Configuration File for configuration file content).
--log-level ARG | -l ARG
    (enum) [default: INFO] Log level to use. One of ERROR, WARNING, STATE, EVENT, ACTION, INFO, DEBUG, TRACE.
--db-host ARG | -d ARG
    (string) [default: 127.0.0.1:6379] Redis database host address.
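As an illustration, a server instance could be started along these lines (the configuration path and Redis address are placeholders):

daqOcmServer --proc-name ocm \
             --config config/daqOcmServer/config.yaml \
             --log-level DEBUG \
             --db-host 127.0.0.1:6379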
Environment Variables¶
$CFGPATH
    Used to resolve Config Path configuration file paths.
$DATAROOT
    Specifies the default root path used as output directory for e.g. OCM FITS files and other state storage. The data root can be overridden by the configuration key cfg.dataroot.
Configuration File¶
The configuration file is currently based on YAML. This section describes what the configuration parameters are and how to set them.
If a configuration parameter can be provided via command line, configuration file and environment variable the precedence order (high to low priority) is:
1. Command line value
2. Configuration file value
3. Environment variable value
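For example, the data root can come both from the environment and from the configuration file; following the order above, the configuration file value wins (paths are placeholders):

# Environment variable provides a default data root ...
export DATAROOT=/data/default

# ... but a cfg.dataroot entry in the configuration file overrides it:
# cfg.dataroot: "/data/override"
daqOcmServer -c config.yaml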
Enumeration of parameters:
cfg.instrument_id
    (string) ESO designated instrument ID. This value is also used as the source for the FITS keyword INSTRUME.
cfg.dataroot
    (string) [default: $DATAROOT] Absolute path to a writable directory where OCM will store files persistently. These are mainly FITS files produced as part of a Data Acquisition. If the directory does not exist OCM will attempt to create it, including parent directories, and set permissions to 0774 (ug+rwx o+r).
cfg.workspace
    (string) [default: dpm] Workspace used by daqOcmServer to store Data Acquisition state persistently and later restore that state when starting up (see section Workspace for details).
    Absolute paths are used as-is (recommended).
    Relative paths are resolved relative to cfg.dataroot.
    New in version 2.0.0.
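    For instance, assuming cfg.dataroot is /data (placeholder paths), the workspace path resolves as sketched below:

    # cfg.workspace: "ocm"      -> /data/ocm   (relative, resolved under cfg.dataroot)
    # cfg.workspace: "/var/ocm" -> /var/ocm    (absolute, used as-is)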
cfg.daq.stale.acquiring_hours
    (integer) [default: 14] Parameter used to control when to archive (discard) stale Data Acquisitions from the Workspace during startup.
    Specifically it controls when a Data Acquisition in state daqif.State.StateAcquiring is automatically archived when recovered from the Workspace because it is considered stale (time duration from time of creation to the time it is recovered).
    New in version 2.0.0.
cfg.daq.stale.merging_hours
    (integer) [default: 48] Parameter used to control when to archive (discard) stale Data Acquisitions from the Workspace during startup.
    Specifically it controls when a Data Acquisition in state daqif.State.StateMerging is automatically archived when recovered from the Workspace because it is considered stale (time duration from time of creation to the time it is recovered).
    New in version 2.0.0.
cfg.log.properties
    (string) Config Path to a log4cplus log configuration file.
cfg.sm.scxml
    (string) [default: config/daqOcmServer/sm.xml] Config Path to the SCXML model. This should be left at the default, which is provided during installation of daqOcmServer.
cfg.req.endpoint
    (string) [default: zpb.rr://127.0.0.1:12081/] MAL server request root endpoint on which to accept requests. Trailing slashes are optional, e.g. "zpb.rr://127.0.0.1:12345/" or "zpb.rr://127.0.0.1:12345".
cfg.pub.endpoint
    (string) [default: zpb.ps://127.0.0.1:12082/] MAL server publish root endpoint from which topics are published. Trailing slashes are optional, e.g. "zpb.ps://127.0.0.1:12345/" or "zpb.ps://127.0.0.1:12345".
cfg.db.endpoint
    (string) [default: 127.0.0.1:6379] Redis database endpoint address.
cfg.db.prefix
    (string) [default: process-name] Optional Redis database prefix that is prepended to all database keys in the form {prefix}.{key} (i.e. the separator . is automatically inserted). By default the process instance name is used as prefix.
    Example: "instrument.ocm"
    New in version 2.0.0.
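    With a prefix configured, the stored keys can be inspected with redis-cli, for instance (a sketch; the prefix value is illustrative):

    redis-cli -h 127.0.0.1 -p 6379 --scan --pattern "instrument.ocm.*"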
cfg.db.timeout_sec
    (integer) [default: 2] Timeout in seconds to use when communicating with the Redis server.
cfg.dpm.req.endpoint
    (string) daqDpmServer request endpoint without service name.
    Example: "zpb.rr://127.0.0.1:12345"
    New in version 2.0.0.
cfg.dpm.pub.endpoint
    (string) daqDpmServer publish endpoint without service name.
    Example: "zpb.ps://127.0.0.1:12345"
    New in version 2.0.0.
cfg.dpm.timeout_sec
    (integer) [default: 5] MAL timeout used when sending requests to daqDpmServer.
    New in version 2.0.0.
Full example:
cfg.instrument_id: "TEST"
cfg.dataroot: "/absolute/output/path"
cfg.sm.scxml: "config/daqOcmServer/sm.xml"
cfg.req.endpoint: "zpb.rr://127.0.0.1:12340/"
cfg.pub.endpoint: "zpb.ps://127.0.0.1:12341/"
cfg.db.endpoint: "127.0.0.1:6379"
cfg.db.timeout_sec: 2
cfg.log.properties: "log.properties"
# Relative paths are relative to dataroot,
# absolute paths are absolute.
cfg.workspace: "ocm"
# Stale DAQ configuration (determines when they are automatically
# archived at startup)
cfg.daq.stale.acquiring_hours: 18
cfg.daq.stale.merging_hours: 720
# DPM communication configuration
cfg.dpm.req.endpoint: "zpb.rr://127.0.0.1:12350/"
cfg.dpm.pub.endpoint: "zpb.ps://127.0.0.1:12351/"
cfg.dpm.timeout_sec: 5
Workspace¶
New in version 2.0.0.
The daqOcmServer workspace is the designated file system area used to store Data Acquisition state information persistently. The workspace location is controlled with the cfg.workspace parameter and will be initialized automatically if the directory does not exist. To protect against accidental misconfiguration, daqOcmServer will refuse to use the directory if it has unexpected file contents.
Note
When daqOcmServer is not running it is safe to delete the complete workspace. Be aware that if there are Data Acquisitions in progress this information will be lost.
The information stored in the workspace is:

- The list of known Data Acquisitions.
- For each Data Acquisition, its status, which contains the same information as published via daqif.DaqStatus.
- For each Data Acquisition, its context, which contains the information necessary to create the Data Product Specification. This includes, for example, the data sources and the FITS keywords provided to daqOcmServer.

When daqOcmServer starts up it will load the stored information so that it is possible to continue the process. To avoid recovering completely obsolete Data Acquisitions there are two configuration parameters that are used to discard them, depending on whether the Data Acquisition was last known to be in state daqif.State.StateAcquiring or daqif.State.StateMerging:

- cfg.daq.stale.acquiring_hours
- cfg.daq.stale.merging_hours
Important
As offline changes are not reflected in the persistent state it may happen that the recovered state is inaccurate. This is always a risk and currently daqOcmServer does not actively try to correct this.
The structure is as follows:
/
    Workspace root as configured via configuration file, environment variable or command line.
/list.json
    List of Data Acquisitions, as an array of Data Acquisition identifiers.
/in-progress/
    Root directory containing files related to each Data Acquisition.
/in-progress/{id}-status.json
    Contains persistent status for each Data Acquisition (where {id} is the Data Acquisition identifier).
/in-progress/{id}-context.json
    Contains persistent context for each Data Acquisition (where {id} is the Data Acquisition identifier).
/archive/
    When a Data Acquisition is completed (transitions to state daqif.DaqState.StateCompleted) the in-progress files are moved here.

    Note

    Files in this directory are safe to delete. An operational procedure is foreseen to specify when this should be done.
The following shows an example of files and directories in the workspace with an in-progress Data Acquisition.
.
├── archive/
└── in-progress/
├── TEST.2021-05-18T14:49:03.905-context.json
└── TEST.2021-05-18T14:49:03.905-status.json
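Once archived Data Acquisitions are no longer needed, the archive directory can be cleaned up, for instance as follows (the workspace path is a placeholder; when to do this should follow the operational procedure mentioned above):

# Remove archived Data Acquisition files.
rm -rf /absolute/output/path/ocm/archive/*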