Installation¶
This is a step-by-step guide on how to install the RTC Toolkit from scratch.
Machine Preparation¶
Install a real or virtual machine according to the Linux Installation Document.
Note
Use the default for the ELT_ROLE setting: ELT_ROLE=BASE
Note
This version of the RTC Toolkit requires a machine with at least 8GB of RAM to compile.
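A simple way to confirm that the machine meets this requirement is to inspect the available memory (a generic Linux check, not specific to the RTC Toolkit):
free -h
The total column of the Mem row should report at least 8G.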
ELT Development Environment¶
The RTC Toolkit is written and tested for the ELT Development Environment, which provides a common set of development tools and dependencies. Before installing the RTC Toolkit, you should become familiar with the tools available on this platform and make any necessary adjustments to the environment according to the Guide to Developing Software for the EELT document.
Note
Version 3.6.0 of the ELT Development Environment shall be used with version 1.0.0 of the RTC Toolkit. This version of the ELT Development Environment is based on CentOS 8.2.
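A simple proxy check of the underlying operating system base can be done with a generic Linux command (the DevEnv release itself is recorded by its own installation procedure):
cat /etc/os-release
The output should report a CentOS 8 release.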
Account configuration¶
Log in as your user.
Create the directories for the installation area:
cd <the location for introot>
getTemplate -d introot INTROOT
This will create an installation sub-directory tree underneath the top level directory called INTROOT.
The environment must contain the definitions of the relevant environment variables such as INTROOT, PREFIX, etc. These environment variables will be automatically defined by means of an Lmod file private.lua, which in turn uses the system modulefile definitions in /elt/System/modulefiles/introot.lua to set up additional environment variables such as LD_LIBRARY_PATH and PYTHONPATH.
In your home directory create the private.lua file:
mkdir $HOME/modulefiles
vi $HOME/modulefiles/private.lua
Add the following initial content to the private.lua file:
local introot = "<introot location>" -- put actual introot location here
setenv("INTROOT", introot)
setenv("PREFIX", introot)
load("introot")
The use of FastDDS relies on QoS settings from a file pointed to by the FASTRTPS_DEFAULT_PROFILES_FILE environment variable. The RTC Toolkit provides a default (example) QoS file that can be used, which is installed in $INTROOT/resource/config/rtctk/RTCTK_DEFAULT_FASTDDS_QOS_PROFILES.xml.
It is best to also set FASTRTPS_DEFAULT_PROFILES_FILE by appending the following to the private.lua file:
setenv("FASTRTPS_DEFAULT_PROFILES_FILE", introot .. "/resource/config/rtctk/RTCTK_DEFAULT_FASTDDS_QOS_PROFILES.xml")
Set the CII_LOGS environment variable to avoid spurious error messages such as the following:
!! Error resolving logsink directory: CII_LOGS environment variable not set or empty
!! Make sure CII_LOGS environment variable is correctly defined and you have appropriate permissions
!! Only console logging will be enabled
This is done by appending the following to the private.lua file:
setenv("CII_LOGS", introot .. "/logsink")
Save the private.lua file.
The following is a complete minimal example of the file contents assuming that the user account is eltdev and the INTROOT directory is in the home directory:
local introot = "/home/eltdev/INTROOT" setenv("INTROOT", introot) setenv("PREFIX", introot) load("introot") setenv("FASTRTPS_DEFAULT_PROFILES_FILE", introot .. "/resource/config/rtctk/RTCTK_DEFAULT_FASTDDS_QOS_PROFILES.xml") setenv("CII_LOGS", introot .. "/logsink")
Update the environment by logging out and back in again.
Important
Log out and then in again so that the modulefiles directory becomes known to the environment and the newly created private.lua is loaded. This is needed only the first time the modulefiles directory and private.lua are created.
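As a quick sanity check after logging back in, the key variables can be inspected (assuming the private.lua contents shown above):
echo $INTROOT
echo $PREFIX
echo $CII_LOGS
Each command should print the corresponding path configured in private.lua.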
The private.lua file is loaded by default upon login. If additional .lua files (with different names) are added to $HOME/modulefiles, they can be made known to the environment by running the following command:
module load <module name>
You can check which Lmod modules are available after login with:
module avail
The output should look similar to the following (the exact set of available/loaded modules might change with the software versions, but private and introot should be loaded as a minimum):
---------------------------------- /home/<user>/modulefiles -------------------------------
   private (L)

----------------------------------- /elt/System/modulefiles -------------------------------
   ciisrv/ciisrv (L)   doxygen (L)    introot (L)            opentracing (L)
   clang               ecsif (L)      jdk/java-openjdk (L)   python/3.7 (L)
   consul (L)          eltdev (L)     mal/mal (L)            shiboken2 (L)
   cpl (L)             fastdds (L)    msgpack (L)            slalib_c (L)
   cut (L)             gcc/9 (L,D)    nix/2.3 (L)
   czmq (L)            gcc/11         nomad (L)

---------------------------- /usr/share/lmod/lmod/modulefiles/Core ------------------------
   lmod    settarg

  Where:
   D:  Default Module
   L:  Module is loaded
Note
For more information about Lmod refer to section 3.2 Environmental Modules System (Lmod) in the Guide to Developing Software for the EELT.
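In addition to module avail, Lmod can also list only the modules that are currently loaded:
module list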
Installation of Dependencies¶
To compile and use the RTC Toolkit, the following dependencies must first be built and installed from source code:
ETR - Integration Test Runner
RAD - Application Framework
Roadrunner/ipcq - Interprocess Queue
Roadrunner/numapp - NUMA affinity
All other dependencies should already be preinstalled as part of the ELT Development Environment.
Download, unpack and build the RAD component.
Download the tarball for RAD v4.0.0-pre2 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
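The exact archive name depends on the artifact downloaded from Gitlab; as a sketch, unpacking a gzipped tarball could look like this (the file name is a placeholder):
tar -xzf <rad tarball>.tar.gz
The same pattern applies to the ETR and Roadrunner tarballs below.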
cd rad
waf configure build install
cd ..
Download, unpack and build the ETR component.
Download the tarball for ETR v3.1.0 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
cd etr
waf configure build install
cd ..
Download, unpack and build the Roadrunner components (numapp, ipcq).
Download the tarball for Roadrunner v0.1.0 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
cd roadrunner/numapp
waf configure build install
cd ../ipcq
waf configure build install
cd ../..
Installation of the RTC Toolkit¶
Download the tarball for RTC Toolkit version 1.0.0 from ESO Gitlab. Unpack it and then execute the steps below to build and install the software.
cd rtctk/
waf configure build install
Documentation (manuals and API Doxygen) can be generated by invoking:
waf build --with-docs
The result can be viewed by opening the respective index.html file under build/doc/.
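For example, a manual can then be opened in the default browser as follows (the exact sub-directory under build/doc/ depends on the document being viewed, so the path is a placeholder):
xdg-open build/doc/<manual>/index.html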
Configuration of Required Services¶
After a fresh installation of the development environment, the required CII services need to be configured once.
CII postinstall¶
Configuring the CII is done by running the cii-postinstall tool.
This must be run under the root account to succeed.
In the following example the tool is run with the role_ownserver role, which prepares a simple configuration where the CII services are run on the local host:
/elt/ciisrv/postinstall/cii-postinstall role_ownserver
Such a configuration is only appropriate for development and basic testing. See the CII Services Management documentation for details about using alternative roles for more advanced configurations where the CII services are run on dedicated remote servers.
Metadata Instances¶
Certain additional metadata instances specific to the RTC Toolkit need to be added to the CII configuration. This can be done by running the freshly installed rtctkConfigTool binary as follows:
rtctkConfigTool --init-metadata
It is only necessary to run the above command once. Running it multiple times will simply overwrite any modifications that might have been made to the metadata instances.
Startup of Required Services¶
The RTC Toolkit makes use of a number of ELT-supplied services, in particular those provided by CII and Nomad.
These services must be started before they are used, and should be shut down when they are no longer required.
CII Service startup¶
At the time of writing, the CII services must be started as root. Since release 3.5, the ELT DevEnv has provided the appropriate sudo permissions to allow this to be done as the eltdev user.
To start and stop the services use the following commands:
sudo cii-services start all
sudo cii-services stop all
The CII services can be shut down using the command shown above when you no longer wish to exercise RTC Toolkit components; in general, however, the CII services should be left running.
Note
The complete set of CII services can be resource intensive. In cases where memory and CPU capacity are limited, for example in smaller development virtual machines, it may be useful to bring up only the minimal services needed by the RTC Toolkit with the following command:
sudo cii-services start redis configService elasticsearch
Checking the status of the running services does not require root privileges and can be done using the following command:
eltdev % cii-services info
CII Services Tool (20210706)
Collecting information........
Collecting information........................
Installations on this host:
ElasticSearch |active:yes |install:/usr/share/elasticsearch/bin/elasticsearch
Redis |active:yes |install:/opt/anaconda3/bin/redis-server
MinIO |active:no |install:/usr/local/bin/minio
ConfigService |active:yes |install:/elt/ciisrv/bin/config-service |ini:
TelemService |active:no |install:
Filebeat |active:yes |install:/usr/share/filebeat/bin/filebeat
Logstash |active:yes |install:/usr/share/logstash/bin/logstash
Kibana |active:yes |install:/usr/share/kibana/bin/kibana
Consult the CII documentation for details of the commands and their output. See: CII Docs.
Nomad agent startup¶
Nomad is used as the mechanism to start and monitor processes. The usage of Nomad within the ELT is not yet fully defined, thus the RTC Toolkit provides only preview examples of how Nomad could be used.
Before starting any process using Nomad, a Nomad “agent” must be running. The ELT Development Environment comes with a simple single-host Nomad agent configuration file (/opt/nomad/etc/nomad.d/nomad.hcl) which allows the agent to be run as a service under the eltdev user. This means that all jobs executed by Nomad will run under the eltdev user. This might change in the future. See ICS_HW for some pointers on the use of Nomad in the ICS.
To check if the Nomad agent service is running, use the following command:
systemctl status nomad
The Nomad agent service (running as the eltdev user) can be started using the following command:
systemctl start nomad
To verify the agent is running, use the following command as any user:
nomad job status
No running jobs
If the agent is not running, the following error will be displayed:
Error querying jobs: Get "http://127.0.0.1:4646/v1/jobs": dial tcp 127.0.0.1:4646: connect: connection refused
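As an additional check, the agent's HTTP API can be probed directly (a sketch, assuming the default agent address of 127.0.0.1:4646):
curl http://127.0.0.1:4646/v1/agent/health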
The Nomad agent can be stopped by executing:
systemctl stop nomad
The Nomad agent can also be run as a normal process under any user, using the provided configuration file (the -config option). Before running it as a normal user, please make sure that Nomad has not already been started as a service. To run it as a normal process, execute the following command:
nomad agent -config /opt/nomad/etc/nomad.d/nomad.hcl
Processes started under Nomad inherit the environment of the user running the Nomad agent, so you may wish to run the Nomad agent as your normal development user during development to ensure that any processes started have exactly the same environment as when you execute them from the command line.
Verify Correct Installation¶
To verify the build of the RTC Toolkit, it is possible to run the unit tests by invoking:
cd rtctk/
waf test --alltests
These tests should be run after the cii-postinstall tool has completed successfully, to avoid some spurious warnings.
For a more comprehensive verification that the RTC Toolkit was installed correctly, run the end-to-end integration test as follows:
cd test/_examples/exampleEndToEnd
etr -v
The integration test is expected to terminate successfully after running for about one minute.