Installation¶
This is a step-by-step guide on how to install the RTC Toolkit from scratch.
Machine Preparation¶
Install a real or virtual machine according to the Linux Installation Document.
Note
Use the default for the ELT_ROLE setting: ELT_ROLE=ELTDEV
Note
If you want to use a graphical display and not only a text-based terminal, i.e. enable the display manager at system start, then also add the setting: ELT_DM=yes
Note
This version of the RTC Toolkit requires a machine with at least 8GB of RAM to compile.
ELT Development Environment¶
The RTC Toolkit is written and tested for the ELT Development Environment, which provides a common set of development tools and dependencies. Before installing the RTC Toolkit, you should become familiar with the tools available on this platform as described in the Guide to Developing Software for the EELT document.
Note
Version 3.9.0 of the ELT Development Environment shall be used with version 2.0.0 of the RTC Toolkit. This is the ELT Development Environment based on CentOS 8.2.
Install RTC Toolkit and Dependencies¶
The RTC Toolkit itself and its dependencies are now provided as RPMs, and can be installed using yum as root.
Install the dependencies, i.e. the RAD and Roadrunner packages:
# yum -y install elt-rad-devel-4.0.0 elt-roadrunner*-0.2.0
Note
Starting from RTC Toolkit version 2, the elt-etr RPM (v3.1.1) is installed together with ELT Development Environment (v3.9.0).
Install the RTC Toolkit packages:
# yum -y install elt-rtctk*-2.0.0*
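To confirm which toolkit packages were actually installed, you can query the RPM database (an optional sanity check, not a required step):
$ rpm -qa | grep elt-rtctk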
Account Configuration¶
Log in as your user.
Create the directories for the installation and data areas:
$ cd <the location for introot>
$ getTemplate -d introot INTROOT
$ cd <the location for dataroot>
$ getTemplate -d dataroot DATAROOT
This will create one installation sub-directory tree underneath the top level directory called INTROOT and another sub-directory tree called DATAROOT. These directories are primarily used for end user development. RTC Toolkit will normally not be installed into these locations, unless the toolkit is also being built from source.
The environment must contain the definitions of the relevant environment variables such as INTROOT, PREFIX, etc. These environment variables will be automatically defined by means of an Lmod file private.lua, which in turn uses the system modulefile definitions in /elt/System/modulefiles/introot.lua to set up additional environment variables such as LD_LIBRARY_PATH and PYTHONPATH.
In your home directory create the private.lua file:
$ mkdir $HOME/modulefiles
$ vi $HOME/modulefiles/private.lua
Add the following initial content to the private.lua file:
local introot = "<introot location>"    -- put actual introot location here
local dataroot = "<dataroot location>"  -- put actual dataroot location here
setenv("INTROOT", introot)
setenv("PREFIX", introot)
setenv("DATAROOT", dataroot)
load("rtctk")
load("introot")
Set the RTC_LOGS environment variable, which points to a folder where human-readable log files shall be written.
For example, if the location for the human-readable logs was chosen to be /data/logsink, RTC_LOGS is configured by appending the following to the private.lua file:
setenv("RTC_LOGS", "/data/logsink")
Please make sure to create the folder in the appropriate location and set the directory’s write permissions so that the user account, e.g. eltdev, is able to create and modify files there. Following our example above, this could be accomplished with the following commands:
$ mkdir /data/logsink
$ chmod u+rwx /data/logsink
Save the private.lua file.
The following is a complete minimal example of the file contents assuming that the user account is eltdev, the INTROOT and DATAROOT directories are in the home directory, and that the human readable logs should go to /data/logsink:
local introot = "/home/eltdev/INTROOT"
local dataroot = "/home/eltdev/DATAROOT"
setenv("INTROOT", introot)
setenv("PREFIX", introot)
setenv("DATAROOT", dataroot)
load("rtctk")
load("introot")
setenv("RTC_LOGS", "/data/logsink")
Update the environment by logging out and back in again.
Important
Log out and then in again so that the modulefiles directory becomes known to the environment and the newly created private.lua is loaded. This is only needed when the modulefiles directory and private.lua are created for the first time.
The private.lua file is loaded by default upon login. If additional .lua files (with different names) are added to $HOME/modulefiles, they can be made known to the environment by running the following command:
$ module load <module name>
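For example, assuming a hypothetical file $HOME/modulefiles/extras.lua had been created, it would be loaded with:
$ module load extras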
You can check which Lmod modules are available after login with:
$ module avail
The output should look similar to the following (the exact set of available/loaded modules might change with the software versions, but private and introot should be loaded as a minimum):
---------------------------------- /home/<user>/modulefiles -------------------------------
   private (L)

----------------------------------- /elt/System/modulefiles -------------------------------
   ciisrv/ciisrv (L)   doxygen (L)   introot (L)            opentracing (L)
   clang               ecsif (L)     jdk/java-openjdk (L)   python/3.7 (L)
   consul (L)          eltdev (L)    mal/mal (L)            shiboken2 (L)
   cpl (L)             fastdds (L)   msgpack (L)            slalib_c (L)
   cut (L)             gcc/9 (L,D)   nix/2.3 (L)            roadrunner (L)
   czmq (L)            gcc/11        nomad (L)

---------------------------- /usr/share/lmod/lmod/modulefiles/Core ------------------------
   lmod   settarg

  Where:
   D:  Default Module
   L:  Module is loaded
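In addition to module avail, a quick way to confirm that private.lua was loaded and that the environment variables it defines are set (a simple sanity check, shown here using the example values from above):
$ module list
$ echo $INTROOT
$ echo $DATAROOT
$ echo $RTC_LOGS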
Note
For more information about Lmod refer to section 3.2 Environmental Modules System (Lmod) in the Guide to Developing Software for the EELT.
Configuration of Required Services¶
The required CII services need to be configured the first time after a fresh installation of the development environment.
CII postinstall¶
Configuring the CII is done by running the cii-postinstall tool, which must be run under the root account to succeed. In the following example the tool is run with the role_ownserver role, which prepares a simple configuration where the CII services are run on the local host:
# /elt/ciisrv/postinstall/cii-postinstall role_ownserver
Such a configuration is only appropriate for development and basic testing. See the CII Services Management documentation for details about using alternative roles for more advanced configurations where the CII services are run on dedicated remote servers.
Important
After cii-postinstall completes, log out from any user accounts and then in again so that CII_LOGS is available in the environment.
Note
The cii-postinstall tool must run successfully to prepare the CII_LOGS environment variable and the default directory location /var/log/elt. These are needed to avoid spurious error messages such as the following when using the RTC Toolkit:
!! Error resolving logsink directory: CII_LOGS environment variable not set or empty
!! Make sure CII_LOGS environment variable is correctly defined and you have appropriate permissions
!! Only console logging will be enabled
The CII_LOGS environment variable points to a folder where CII-formatted log files (following the naming convention *_cii.log) shall be written. CII-formatted logs are intended for further processing, e.g. by sending them to the central logging system or similar.
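After logging back in, a quick way to confirm that the variable is defined and that the log directory is accessible (a simple sanity check, not part of the official procedure):
$ echo $CII_LOGS
$ ls -ld "$CII_LOGS"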
Metadata Instances¶
Certain additional metadata instances specific to the RTC Toolkit need to be added to the CII configuration. This can be done by running the freshly installed rtctkConfigTool binary as follows:
$ rtctkConfigTool init-metadata
It is only necessary to run the above command once. Running it multiple times will simply overwrite any modifications that might have been made to the metadata instances.
Startup of Required Services¶
The RTC Toolkit makes use of a number of ELT supplied services, in particular those provided by CII and Nomad.
Before making use of these services they must be started appropriately, and when they are no longer required they should be shut down.
CII Service startup¶
At the time of writing, CII services must be started as root. Since release 3.5 the ELT DevEnv provides the appropriate sudo permissions to allow this to be done from the eltdev account.
To start and stop the services use the following commands:
$ sudo cii-services start all
$ sudo cii-services stop all
The shutdown of the CII services using the command shown above can be performed when you no longer wish to exercise RTC Toolkit components. In general, the CII services should be left running.
Note
The complete set of CII services can be resource intensive. In cases where memory and CPU capacity is limited, for example in smaller development virtual machines, it may be useful to bring up only the minimal services needed by the RTC Toolkit with the following command:
$ sudo cii-services start redis configService elasticsearch
Checking the status of the running services does not require root privileges and can be done using the following command (example output is also indicated):
$ cii-services info
CII Services Tool (20220303)
Possible names: minio logKibana traceJaeger redis elasticsearch logLogstash configService logFilebeat, or a prefix thereof, or 'all'
Collecting information........................
Installations on this host:
ElasticSearch |active:yes |install:/usr/share/elasticsearch/bin/elasticsearch
Redis |active:no |install:/opt/anaconda3/bin/redis-server
MinIO |active:no |install:/usr/local/bin/minio
ConfigService |active:yes |install:/elt/ciisrv/bin/config-service |ini:
TelemService |active:no |install:
Filebeat |active:no |install:/usr/share/filebeat/bin/filebeat
Logstash |active:no |install:/usr/share/logstash/bin/logstash
Kibana |active:no |install:/usr/share/kibana/bin/kibana
Jaeger |active:no |install:/opt/opentracing/jaeger-1.17.1-linux-amd64/jaeger-all-in-one
AlarmServices |active:no |install:
AlarmPlugin |active:no
AlarmConsumer |active:no
AlarmConverter |active:no
AlarmSupervisor |active:no
ConfigClient |ini:
Addresses of Services:
IntCfg / Local-DB |access:no |file:/localdb
IntCfg / ElasticSearch |access:yes |host:ciielastichost(IP:192.168.5.101)
IntCfg / MinIO |access:no |host:ciihdfshost(IP:192.168.5.101)
IntCfg / Service |access:yes |host:ciiconfservicehost(IP:192.168.5.101) |host2:ciiconfservicehost2(IP:192.168.5.101)
OLDB / Redis |access:yes |host:192.168.5.101(IP:192.168.5.101)
OLDB / MinIO |access:no |host:ciihdfshost(IP:192.168.5.101)
Telem / Service |access:no |host:ciiarchivehost(IP:127.0.0.1)
Statistics of Services:
IntCfg / Service |NrOfRequests: 6984143 |UpTime: 32058 minutes
OLDB / Redis |total_connections_received:2600 |rejected_connections:0
Telemetry / Service
Consult the CII documentation for details of the commands and their output. See: CII Docs.
Nomad Agent Startup¶
Nomad is used as the mechanism to start and monitor processes. The usage of Nomad within the ELT is not yet fully defined; thus, the RTC Toolkit examples provide just a preview of how Nomad could be used.
Before starting any process using Nomad, a Nomad “agent” must be running. The ELT Development Environment comes with a simple single-host Nomad agent configuration file (/opt/nomad/etc/nomad.d/nomad.hcl), which allows the agent to be run as a service under the eltdev user. This means that all jobs executed by Nomad will run under the eltdev user. This might change in the future. See ICS_HW for some pointers on the use of Nomad in the ICS.
To check if the Nomad agent service is running, use the following command:
$ systemctl status nomad
The Nomad agent service can be started using the following command (as the eltdev user):
$ systemctl start nomad
To verify the agent is running, use the following command as any user:
$ nomad job status
Note
If the agent is not running, the following error will be displayed:
Error querying jobs: Get "http://127.0.0.1:4646/v1/jobs": dial tcp 127.0.0.1:4646:
connect: connection refused
The Nomad agent can be stopped by executing:
$ systemctl stop nomad
The Nomad agent can also be run as a normal process under any user, using the provided configuration file (option -config). Before running it as a normal user, please make sure that Nomad has not already been started as a service. To run it as a normal process, execute the following command:
$ nomad agent -config /opt/nomad/etc/nomad.d/nomad.hcl
Processes started under Nomad inherit the environment of the user running the Nomad agent, so you may wish to run the Nomad agent as your normal development user during development to ensure that any processes started have exactly the same environment as when you execute them from the command line.
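As an illustration of this environment inheritance, the following sketch submits a small batch job that simply prints its environment, so it can be compared with the environment of your interactive shell. This is a hypothetical example, not part of the toolkit; it assumes the raw_exec driver is enabled in the agent configuration and that the datacenter is named dc1.
$ cat > env-check.nomad <<'EOF'
job "env-check" {
  datacenters = ["dc1"]
  type        = "batch"
  group "env" {
    task "printenv" {
      driver = "raw_exec"
      config {
        command = "/usr/bin/env"
      }
    }
  }
}
EOF
$ nomad job run env-check.nomad
$ nomad job status env-check          # note the allocation ID
$ nomad alloc logs <alloc-id> printenv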
Install from Source Code - Optional¶
The RTC Toolkit and its dependencies can also be installed from source code tarballs. This step is optional, since the software is already available as RPMs, as mentioned in the previous sections. We do not expect this to be the typical procedure for installing the toolkit, and it should be reserved for cases where one cannot install from RPMs.
Important
In contrast to the RPM installation, the rtctk.lua file is neither provided nor needed when installing the software from pure source code. Therefore the file should not be loaded in the private.lua file, i.e. remove or comment out the following line in private.lua:
load("rtctk")
Make sure that the account used to build the software is configured as indicated in the Account Configuration section before continuing, i.e. all previous sections are applicable, except for the RPM installation step.
Note
If the RTC Toolkit RPMs have not been installed at all, then the following command can only be run after completing the installation from source described below:
$ rtctkConfigTool init-metadata
Installation of Dependencies¶
To be able to compile and use the RTC Toolkit the following dependencies need to be built and installed from source code first:
RAD - Application Framework
Roadrunner/numapp - NUMA affinity
Roadrunner/perfc - Performance Counter
Roadrunner/ipcq - Interprocess Queue
All other dependencies should already be preinstalled as part of the ELT Development Environment.
Download, unpack and build the RAD component.
Download the tarball for RAD v4.0.0 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
$ cd rad-*
$ waf configure build install
$ cd ..
Download, unpack and build Roadrunner components (numapp, perfc, ipcq).
Download the tarball for Roadrunner v0.2.0 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
$ cd roadrunner-*/numapp
$ waf configure build install
$ cd ../perfc
$ waf configure build install
$ cd ../ipcq
$ waf configure build install
$ cd ../..
Installation of the RTC Toolkit¶
Download the tarball for RTC Toolkit version 2.0.0 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
$ cd rtctk-*/
$ waf configure build install
Note
There are some optional dependencies in the toolkit. These may be indicated as “not found” during the configure step. For example, cuBLAS is optional and only needed if building on a machine that has GPUs. If there are any mandatory dependencies that cannot be found then the configure step will fail as expected.
Documentation (manuals and API reference) can be generated by invoking:
$ waf build --with-docs
The result can be viewed by opening the respective index.html files under build/doc/.
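To locate the generated entry points, a simple way (assuming the build directory layout above) is:
$ find build/doc -name index.html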
Verify Correct Installation¶
To verify the build of the RTC Toolkit, the unit tests can be run by invoking:
$ cd rtctk-*/
$ waf test --alltests
Note
This should be run after the cii-postinstall tool has been run successfully to avoid some spurious warnings.
For a more comprehensive verification that the RTC Toolkit was installed correctly, run the end-to-end integration test as follows:
$ cd test/_examples/exampleEndToEnd
$ etr -v
The integration test is expected to terminate successfully after running for about one minute.