Runtime Configuration Repository¶
The purpose of the Runtime Configuration Repository is to make runtime configuration parameters available to SRTC components. The parameters are a subset of the configuration parameters from the Persistent Configuration Repository that pertain to the currently active deployment.
SRTC components must read their configuration parameters from the Runtime Configuration Repository during the initialisation activity, when the Init command is received.
Individual SRTC components do not normally need to create datapoints, since these are typically created by the RTC Supervisor before the component is initialised. If a datapoint does need to be created by a component, this should also occur during the initialisation activity.
Certain computed results can also be written to the repository as dynamic datapoints while the component is running. Such dynamic datapoints should be set up in the Persistent Configuration Repository if they require default initial values. This allows the default values to be adjusted and handled in the same manner as any other static datapoint value, meaning that the datapoint is created by the RTC Supervisor when populating the Runtime Configuration Repository during initialisation. Otherwise the datapoint has to be created by the component during the initialisation activity.
Datapoints should typically not be deleted from the Runtime Configuration Repository by SRTC components directly. Cleanup of the Runtime Configuration Repository is performed externally. Nevertheless, it is possible to delete a datapoint if needed.
Data Access¶
This section provides an overview of the API, which should be sufficient to become familiar with it and start using it. Refer to the API reference documentation for technical details.
Access to the Runtime Configuration Repository should always be performed through an instance of RuntimeRepoIf. The RTC Toolkit framework will automatically prepare such an instance when it is requested from the ServiceContainer, as long as it is correctly configured in the Service Discovery with the runtime_repo_endpoint datapoint.
Two URI schemes are supported for runtime_repo_endpoint:
- cii.oldb
  This is for the fully fledged Runtime Configuration Repository implemented on top of OLDB, e.g. cii.oldb:///rtc/repo.
- file
  For development and testing. This should point to the directory containing the YAML files for the repository on the local file system, e.g. file:/home/eltdev/repo.
The ServiceContainer is itself passed to the constructor of the user derived RunnableStateMachineLogic class, called BusinessLogic, and is accessible in user code through the attribute m_services. The following is an example of how to retrieve the RuntimeRepoIf within the Initialising method of BusinessLogic:
void BusinessLogic::Initialising(componentFramework::StopToken st) {
  auto repository = m_services.Get<RuntimeRepoIf>();
  // Can now access the datapoints with repository ...
}
Datapoint Paths¶
Configuration parameters are stored in a tree hierarchy as leaf nodes. This can also be described as a folder structure, similar to a file system. The nodes from the root of the tree to a particular leaf node form the components of a path. By adding the ‘/’ character as the path separator between each component, this forms a datapoint path string.
The canonical structure for the path is as follows:
/<component>/{static,dynamic}/<parameter>
Where <component> will typically be the SRTC component instance name and <parameter> can represent a hierarchical sub-path with multiple sub-folders, if a deeper hierarchy of configuration parameters is desired for a particular SRTC component beyond the basic grouping into static and dynamic parameters.
Path components must contain only lowercase alphanumeric characters or the underscore, i.e. characters from the set [a-z0-9_].
Note
The canonical path structure is a suggested convention to follow. Reusable components delivered by the RTC Toolkit follow this convention. However, it is not enforced by RuntimeRepoIf, except for making sure that the path components only contain characters from the accepted character set. Therefore the user is technically free to choose any path structure desired.
The paths are handled in the API with the DataPointPath class, which is responsible for checking the syntactical structure of the path. DataPointPath objects are passed to the methods of RuntimeRepoIf to identify the specific datapoint to operate on.
Datapoint Creation¶
Datapoints need to be created before they can be used.
Attempting to write a datapoint that does not exist will throw an exception.
A datapoint can be created with the CreateDataPoint
method as follows:
RuntimeRepoIf& repo = ...
DataPointPath path = "/mycomp/static/param1";
repo.CreateDataPoint<int32_t>(path);
The datapoint type to use is explicitly passed as a template parameter to the method. The currently supported types are indicated in the Supported Data Types section. There are alternative method signatures that allow the type to be specified as an additional argument instead, or a default value to be given. These can be used as follows:
RuntimeRepoIf& repo = ...
DataPointPath path = "/mycomp/static/param1";
// Pass type as argument:
repo.CreateDataPoint(path, typeid(int32_t));
// With a default value:
int32_t default_value = 123;
repo.CreateDataPoint(path, default_value);
Datapoint Reading¶
Reading a datapoint can be done with the GetDataPoint method, or the ReadDataPoint method to update an existing variable in place, which may be useful for large vectors and matrices. In addition, it is possible to pass a gsl::span or MatrixSpan to the ReadDataPoint method instead of a reference to a container for numerical vectors and matrices. This may be useful in situations where ownership of the buffer must be kept by the caller, but the data buffer cannot be instantiated as a standard container object. For example:
RuntimeRepoIf& repo = ...
int32_t param1 = repo.GetDataPoint<int32_t>("/mycomp/static/param1"_dppath);
std::vector<float> param2;
repo.ReadDataPoint("/mycomp/static/param2"_dppath, param2);
auto buffer1 = ...;
gsl::span<double> vector_span(buffer1);
repo.ReadDataPoint("/mycomp/static/param3"_dppath, vector_span);
auto buffer2 = ...;
MatrixSpan<double> matrix_span(buffer2);
repo.ReadDataPoint("/mycomp/static/param3"_dppath, matrix_span);
Note that the _dppath suffix is added to the string representations of the datapoint paths in the example above. This is a shorthand to construct a DataPointPath object from a null terminated character string.
The GetDataPoint and ReadDataPoint methods are blocking, and will only return to the caller once the data has been received. In certain situations it may be better to avoid blocking. To support this, the API provides the SendReadRequest method, which takes a ReadRequest object as the input argument and returns a Response object that can be used to eventually synchronise.
The ReadRequest class represents a read request for one or more datapoints to be fetched from the Runtime Configuration Repository. The Response object is effectively a future providing a Wait method that will block until the request has been completed. An optional timeout threshold can be given to the Wait method.
This pattern allows a read request to be sent without blocking; some other work can then be performed while the request completes in the background, and finally the Wait method can be called to synchronise with the background request. Ideally, by the time the Wait method is called, the request that was initially sent will have completed and the Wait method will return immediately without blocking. Otherwise it will block for as long as is necessary for the request to complete.
Any processing that requires some or all of the datapoints sent as part of the read request must be performed only after the Wait method has been invoked and has returned without a timeout condition. Otherwise there will be a race condition between the read request and the processing code.
The following example code shows how a ReadRequest object is prepared, how the request is sent and how the response is handled.
RuntimeRepoIf& repo = ...
int32_t param1;
std::vector<float> param2;
// An example callback lambda function that will force all negative values to zero.
auto handler = [](std::vector<float>& buffer) {
  for (auto& value: buffer) {
    if (value < 0.0) {
      value = 0.0;
    }
  }
};
ReadRequest request;
request.Add("/mycomp/static/param1"_dppath, param1);
request.Add("/mycomp/static/param2"_dppath, param2, handler);
auto response = repo.SendReadRequest(request);
// Other processing not requiring param1 or param2 can happen here ...
response.Wait();
// Any processing requiring access to param1 or param2 goes here ...
As can be seen, the Add method is used to add all the needed datapoints to the request. Two alternative invocations are shown, one with a callback handler function and one without. The optional callback allows the data of a single datapoint to be processed asynchronously as soon as it arrives. The callback is executed in a different thread than the one that invoked the SendReadRequest method. Therefore care should be taken when accessing any global variables, to avoid race conditions.
Warning
Only the data explicitly passed to the callback’s argument should be accessed within the callback, since it is the only datapoint guaranteed to have been delivered when the callback is executed. No other data buffers for any other datapoints should be accessed within the callback function. Any such attempt will result in race conditions and likely data corruption.
In addition, the datapoint buffers that were added to the request with the Add method must not be accessed outside of a callback function once SendReadRequest has been called. Only after Wait returns successfully can the buffers for all these datapoints be accessed.
Datapoint Writing¶
Writing a new value to a datapoint can be done with the SetDataPoint method, or the WriteDataPoint method to pass a reference to the data instead, which is more efficient for large vectors or matrices. Similarly to the read methods, it is possible to pass a gsl::span or MatrixSpan to the WriteDataPoint method instead of a reference to a container for numerical vectors and matrices. This may be useful in situations where the data buffer cannot easily be instantiated as a standard container object. For example:
RuntimeRepoIf& repo = ...
repo.SetDataPoint<int32_t>("/mycomp/static/param1"_dppath, 123);
std::vector<float> value2 = {1.2, 3.4, 5.6, 7.8};
repo.WriteDataPoint("/mycomp/static/param2"_dppath, value2);
auto buffer1 = ...;
gsl::span<double> vector_span(buffer1);
repo.WriteDataPoint("/mycomp/static/param3"_dppath, vector_span);
auto buffer2 = ...;
MatrixSpan<double> matrix_span(buffer2);
repo.WriteDataPoint("/mycomp/static/param3"_dppath, matrix_span);
The SetDataPoint and WriteDataPoint methods are blocking, and will only return to the caller once the data has been sent to the repository.
Similar to the reading methods, a non-blocking option exists with the SendWriteRequest method. It works in an analogous manner to the SendReadRequest method described in the previous Datapoint Reading section. The SendWriteRequest method accepts a WriteRequest object and returns a Response object. All datapoints that should be updated must be added to the WriteRequest object with the Add method. The Wait method of the Response object should be called to synchronise with the request completion. The call to Wait will block until the datapoints have been successfully sent to the repository. The Wait method can optionally take a timeout argument.
Warning
The buffers of the datapoints added to the request with the Add method must not be modified after SendWriteRequest has been called. Only after the Wait method returns successfully can the datapoint buffers be modified. Modifying the contents before a successful invocation of Wait will result in race conditions and possible data corruption.
The following is an example of using SendWriteRequest:
RuntimeRepoIf& repo = ...
int32_t param1 = ...
std::vector<float> param2 = ...
WriteRequest request;
request.Add("/mycomp/static/param1"_dppath, param1);
request.Add("/mycomp/static/param2"_dppath, param2);
auto response = repo.SendWriteRequest(request);
// Other processing can happen here, but param1 and param2 must not be
// changed ...
response.Wait();
// param1 and param2 can be modified again after the Wait call here ...
Datapoint Querying¶
To check the data type of a datapoint one can use the GetDataPointType method as follows:
RuntimeRepoIf& repo = ...
auto& type = repo.GetDataPointType("/mycomp/static/param1"_dppath);
This will return the std::type_info object corresponding to the data type, as one would get from the typeid operator. See the possible C++ types in the Supported Data Types section.
The data size, or more specifically the number of elements for a datapoint, is retrieved with the GetDataPointSize method. Note that this will always return the value 1 for basic types such as int32_t or float. For strings the number of characters is returned, i.e. the length of the string. For vectors and matrices the total number of elements is returned. The following is an example of using GetDataPointSize:
RuntimeRepoIf& repo = ...
size_t size = repo.GetDataPointSize("/mycomp/static/param1"_dppath);
It may be necessary to check for the existence of a datapoint. This can be achieved with the DataPointExists method, which will return true if the datapoint exists and false otherwise. For example:
RuntimeRepoIf& repo = ...
if (repo.DataPointExists("/mycomp/static/param1"_dppath)) {
  // Can operate on the datapoint here ...
}
There is also a mechanism to query the names of available datapoint paths using the GetChildren method. This method takes a datapoint path and lists all the child nodes under the path. Specifically, it returns a pair of lists: the first contains all the datapoints found within the path and the second contains all the child paths, i.e. sub-folders. The GetChildren method thus provides a general mechanism to traverse the datapoint path hierarchy. The following shows an example of traversing and printing all available datapoint paths:
void traverse(RuntimeRepoIf& repo, DataPointPath path) {
  auto [datapoints, child_paths] = repo.GetChildren(path);
  for (auto& dp_path: datapoints) {
    std::cout << dp_path << std::endl;
  }
  for (auto& child_path: child_paths) {
    traverse(repo, child_path);
  }
}
RuntimeRepoIf& repo = ...
traverse(repo, "/"_dppath);
Datapoint Deletion¶
Existing datapoints are deleted with the DeleteDataPoint method.
For example:
RuntimeRepoIf& repo = ...
repo.DeleteDataPoint("/mycomp/static/param1"_dppath);
Commandline/Graphical Manipulation¶
Manipulating the datapoints in the Runtime Configuration Repository can be performed with the rtctkConfigTool command line tool. See section Configuration Tool for details. This way of accessing the datapoints works for either the OLDB backed Runtime Configuration Repository implementation or the file based implementation.
If using the OLDB backed Runtime Configuration Repository, it is also possible to view and manipulate the datapoints with the oldb-gui tool. Refer to the CII OLDB documentation for further details about using this tool.
Note
The oldb-gui is a default GUI tool provided by CII and currently has limited support for visualising and manipulating large vectors or matrices.
Datapoint Subscription¶
A simple API is available to support subscription to datapoints and registering callbacks for new data updates or datapoint removal notifications.
Note
Only the OLDB backed Runtime Configuration Repository supports subscription to datapoints, i.e. when using the cii.oldb URI scheme. The file based implementation does not currently support this capability and will throw NotImplementedException exceptions if used.
Subscribing¶
To subscribe to new data updates for a single datapoint one can use the Subscribe method. The following needs to be provided to this method:
- A datapoint path to identify the datapoint being subscribed to.
- A reference to a buffer object that will be filled with the received data.
- A callback function (or generalised functor) that will be called when new data is received.
void Callback(const DataPointPath& path, std::vector<float>& buffer) {
  ...
}
RuntimeRepoIf& repo = ...
std::vector<float> buffer;
repo.Subscribe("/mycomp/static/param1"_dppath, buffer, Callback);
Notice that the callback function declaration always takes two arguments in this case. The first is the datapoint path, which allows one to know which datapoint is being dealt with if the same callback is reused in multiple subscriptions. The second argument is a reference to the buffer that was originally passed to the Subscribe method. When a new data value is received, the buffer object is modified in place. This possibly means memory reallocation when dealing with std::string, std::vector or MatrixBuffer. After the buffer has been updated the callback is invoked.
It is possible to use gsl::span instead of std::vector as the type of the buffer argument for numerical vectors. Similarly MatrixSpan can be used instead of MatrixBuffer for numerical matrices. This can be useful to fill buffers where full ownership or more control of the buffer must be retained by the caller of Subscribe. The following code shows an example of using MatrixSpan:
void Callback(const DataPointPath& path, MatrixSpan<float>& buffer) {
  ...
}
RuntimeRepoIf& repo = ...
float buffer[100];
MatrixSpan<float> span(10, 10, buffer);
repo.Subscribe("/mycomp/static/param1"_dppath, span, Callback);
Warning
The callback is executed in a different thread than the one calling the Subscribe method. Care must be taken when accessing the buffer to avoid race conditions and data corruption. This is applicable from the moment the subscription is initiated, e.g. by calling Subscribe, until the datapoint has been successfully unsubscribed from. If other threads must also access the buffer, e.g. the thread that originally called Subscribe, it is up to the user to implement appropriate synchronisation mechanisms.
Only callbacks for new data updates can be registered with the Subscribe method. For registering datapoint removal notifications one must use the SendSubscribeRequest method instead. In addition, SendSubscribeRequest is more appropriate for subscribing to multiple datapoints at once, or if the subscription itself should occur asynchronously in the background. In this case, one prepares a SubscribeRequest object and passes it to SendSubscribeRequest. The following is an example of subscribing to two datapoints and also registering a removal notification callback for one of them:
void OnRemoved(const DataPointPath& path) {
  ...
}
void OnNewMatrix(const DataPointPath& path, MatrixBuffer<double>& buffer) {
  ...
}
void OnNewVector(const DataPointPath& path, gsl::span<int32_t>& buffer) {
  ...
}
RuntimeRepoIf& repo = ...
MatrixBuffer<double> matrix;
int32_t buffer[100];
gsl::span<int32_t> span(buffer);
SubscribeRequest request;
request.AddRemoveHandler("/mycomp/dynamic/matrix"_dppath, OnRemoved);
request.AddNewValueHandler("/mycomp/dynamic/matrix"_dppath, matrix, OnNewMatrix);
request.AddNewValueHandler("/mycomp/static/vector"_dppath, span, OnNewVector);
auto response = repo.SendSubscribeRequest(request);
// ... can do some other work here ...
response.Wait();
As can be seen from this example, callbacks for multiple datapoints are registered by adding them to the request with the AddNewValueHandler and AddRemoveHandler methods. AddNewValueHandler is used to register new data update callbacks. AddRemoveHandler is used to register datapoint removal notifications. Once the request has been sent, the caller must eventually invoke the Wait method on the response object to synchronise. The Wait method will block until the request has completed. There is also a version of the Wait method that allows setting a timeout. However, in that case the Wait method must be invoked repeatedly until it eventually succeeds.
Warning
The callbacks registered with SendSubscribeRequest are executed in different threads than the one that called SendSubscribeRequest. It is the user's responsibility to implement synchronisation mechanisms to avoid race conditions between these threads. There is also no guarantee of the ordering of the callbacks if multiple datapoints are updated simultaneously.
Unsubscribing¶
To unsubscribe callback handlers for a datapoint one can use the Unsubscribe method. This method will unsubscribe all callback handlers for a given datapoint path, i.e. all new data update callbacks and also all datapoint removal callbacks. To remove only one or the other type of callback, the SendUnsubscribeRequest method should be used instead.
The following is an example of unsubscribing from a datapoint:
RuntimeRepoIf& repo = ...
repo.Unsubscribe("/mycomp/static/param1"_dppath);
It is possible to unsubscribe multiple callbacks by using the SendUnsubscribeRequest method. In this case an UnsubscribeRequest object is prepared and passed to SendUnsubscribeRequest. This method also executes asynchronously, similarly to SendSubscribeRequest, and the Wait method must eventually be invoked on the returned response object to synchronise.
The following is an example of unsubscribing from multiple datapoints:
RuntimeRepoIf& repo = ...
UnsubscribeRequest request;
request.AddRemoveHandler("/mycomp/dynamic/matrix"_dppath);
request.AddNewValueHandler("/mycomp/dynamic/matrix"_dppath);
request.AddNewValueHandler("/mycomp/static/vector"_dppath);
auto response = repo.SendUnsubscribeRequest(request);
Supported Data Types¶
The following is a table of currently supported data types for the Runtime Configuration Repository:
C++ Type | Internal Type Name
---|---
int32_t | RtcInt32
int64_t | RtcInt64
float | RtcFloat
std::string | RtcString
std::vector<bool> | RtcVectorBool
std::vector<int32_t> | RtcVectorInt32
std::vector<float> | RtcVectorFloat
std::vector<std::string> | RtcVectorString
MatrixBuffer<bool> | RtcMatrixBool
MatrixBuffer<double> | RtcMatrixDouble
MatrixBuffer<std::string> | RtcMatrixString

The internal names follow the pattern Rtc<Type> for scalars, RtcVector<Type> for vectors and RtcMatrix<Type> for matrices. Refer to the API reference documentation for the complete list of supported combinations.
The indicated C++ type should be used for declaring the data variables in source code and when
identifying the type to use in methods like CreateDataPoint
.
The internal type name is the corresponding string representation of the type.
It is stored as metadata inside the CII configuration service to identify the exact type of a
datapoint and also used inside the YAML files when using the file based repository adapters.
The rtctkConfigTool
uses the internal type name for its command line arguments.
Note
Support for unsigned integer types is pending and will be available in a future release.
Composite Datapoints¶
A Composite Datapoint is the combination of multiple datapoints from a common root path that has been standardised. This section documents these standard Composite Datapoints.
NUMA Policies¶
This Composite Datapoint defines optional NUMA policies for CPU affinity, scheduling and memory policy.
The parameters are indicated in the following table relative to the Composite Datapoint root path.
For example, if the root path is /mycomp/static/thread_policy, the CPU parameter setting cpu_affinity would be /mycomp/static/thread_policy/cpu_affinity.
Configuration Path | Type | Description
---|---|---
cpu_affinity | RtcString | Sets the optional CPU affinity for the thread. This is a mask of CPUs on which the thread is allowed to be scheduled.
scheduler_policy | RtcString | Specifies the optional scheduling policy to apply to the thread, with the priority as given in scheduler_priority.
scheduler_priority | RtcInt32 | Indicates the priority or nice-value of the thread.
memory_policy_mode | RtcString | Specifies the optional memory allocation policy to apply to the thread. If provided, it must be given together with memory_policy_nodes.
memory_policy_nodes | RtcString | Indicates a mask of NUMA nodes to which the memory policy is applied.

The allowed values for scheduler_policy and memory_policy_mode map onto Linux system API equivalents; for example, the value Fifo corresponds to the SCHED_FIFO scheduling policy and the value Bind corresponds to the MPOL_BIND memory policy. Refer to the API reference documentation for the complete value tables.
The following is an example of the configuration parameters in a YAML file for NUMA policies, assuming /mycomp/static/example as the root path (this is only applicable if using the file scheme URI):
mycomp:
  static:
    # Configure NUMA policies at Composite Datapoint "example"
    example:
      cpu_affinity:
        type: RtcString
        value: "1-4"
      scheduler_policy:
        type: RtcString
        value: Fifo
      scheduler_priority:
        type: RtcInt32
        value: 10
      memory_policy_mode:
        type: RtcString
        value: Bind
      memory_policy_nodes:
        type: RtcString
        value: "1-4"
File Format¶
Here we describe the directory layout and YAML format for the underlying files of the Runtime Configuration Repository when using the file based implementation, i.e. when using the file URI scheme. For the fully fledged Runtime Configuration Repository backed by OLDB, the underlying data storage format should be treated as opaque; the rtctkConfigTool should be used to manipulate the configuration datapoints in this case (see section Configuration Tool).
The file system directory that contains the repository's YAML files is called the base path. It is given by the file scheme URI path encoded in the runtime_repo_endpoint configuration parameter in the Service Discovery. Any DataPointPath is then relative to this base path.
The first component encoded in a DataPointPath is treated as the name of a YAML file. Typically this corresponds to an SRTC component instance name. Therefore if a DataPointPath starts with /mycomp/..., the corresponding YAML file will be mycomp.yaml. Assuming further that runtime_repo_endpoint was set to file:/home/eltdev/repo, the base path is /home/eltdev/repo in this case, and the complete file system path for the YAML file would be /home/eltdev/repo/mycomp.yaml.
The remaining components encoded in a DataPointPath are treated as dictionary keys for a hierarchy of dictionaries within the YAML file.
Each datapoint within the YAML file is itself a dictionary with the following keys, where type and value are mandatory:
Key Name | Description
---|---
type | Indicates the type of the data stored in value, using the internal type name from the Supported Data Types section.
value | Stores the actual value of the datapoint. If a matrix is being stored then value lists the matrix elements, and the nrows and ncols keys must also be provided.
nrows | The number of rows in the matrix. This key is only applicable to matrices and should not be used for any other datapoint types.
ncols | The number of columns in the matrix. This key is only applicable to matrices and should not be used for any other datapoint types.
Note
Using a file URI in a value key is only applicable to numerical vectors and matrices, i.e. the element type must be a boolean, integer or floating-point number. Trying to use a URI for any other type will cause an exception to be thrown.
The following sections show examples of the YAML corresponding to a datapoint path, for various categories of datapoint type.
Scalar Types¶
Assume we have a basic type, such as an int32_t with value 123, that should be stored in the datapoint path /mycomp/static/param1. The contents of mycomp.yaml should be as follows:
static:
  param1:
    type: RtcInt32
    value: 123
Another example, for a floating-point number float with value 5.32 stored in the datapoint path /mycomp/static/subdir/param2, is:
static:
  subdir:
    param2:
      type: RtcFloat
      value: 5.32
A string with value xy and z and datapoint path /mycomp/static/param3 should be stored as follows:
static:
  param3:
    type: RtcString
    value: "xy and z"
Vector Types¶
Assume a numerical vector, e.g. std::vector<int32_t>, with value [1, 2, 3, 4] stored in the datapoint path /mydatatask/static/param1. The contents of the mydatatask.yaml file should be as follows:
static:
  param1:
    type: RtcVectorInt32
    value: [1, 2, 3, 4]
An alternative format for the list in the YAML file for the above example is the following:
static:
  param1:
    type: RtcVectorInt32
    value:
      - 1
      - 2
      - 3
      - 4
This alternative format may be particularly useful for a vector of strings. For example:
static:
  param1:
    type: RtcVectorString
    value:
      - foo
      - bar
      - baz
For large numerical vectors it is more convenient to store the data in a FITS file and reference the FITS file from the YAML with a file scheme URI. The data must be stored in the FITS primary array as a 1-D image. It can be stored as either a 1⨯N or N⨯1 pixel image. The following is an example of the YAML using such a reference to a FITS file:
static:
  param1:
    type: RtcVectorFloat
    value: file:/home/eltdev/repo/mydatatask.static.param1.fits
As seen in the example above, the name of the FITS file in the URI is equivalent to the datapoint path with the '/' path separator character replaced with '.' and the .fits suffix appended. This naming convention is applied automatically by the Runtime Configuration Repository when it writes to datapoints and stores the actual data in FITS files. If a user is creating the YAML file manually and using the FITS file reference feature, the name of the FITS file does not have to follow any particular convention; the user is free to choose any name for the FITS file.
Matrix Types¶
Assume we have a matrix of type MatrixBuffer<double> stored in the datapoint path /mydatatask/static/param1, with 2 rows and 3 columns holding the values 1 2 3 in the first row and 4 5 6 in the second. The corresponding mydatatask.yaml should look as follows:
static:
  param1:
    type: RtcMatrixDouble
    value: [1, 2, 3, 4, 5, 6]
    nrows: 2
    ncols: 3
Similarly to vectors, the entries in value can use an alternative format as follows:
static:
  param1:
    type: RtcMatrixDouble
    value:
      - 1
      - 2
      - 3
      - 4
      - 5
      - 6
    nrows: 2
    ncols: 3
For large matrices it is more convenient to store the data in a FITS file and refer to it from the YAML file. This is done in the same manner as for vectors. The following is an example of this for the matrix case:
static:
  param1:
    type: RtcMatrixDouble
    value: file:/home/eltdev/repo/mydatatask.static.param1.fits
    nrows: 2
    ncols: 3
FITS Writing Threshold¶
The Runtime Configuration Repository will store small vectors and matrices directly in the YAML file, while large vectors and matrices, above a certain threshold, will be stored in FITS files instead. The threshold can be controlled at runtime by setting the /fits_write_threshold datapoint. It takes a 64-bit integer and indicates the number of elements above which a vector or matrix will be stored in a FITS file.
Note
Changing the setting will have no effect on existing vector and matrix datapoints already stored in the repository. Only new values written to the repository will take the threshold value under consideration.
By default the Runtime Configuration Repository uses a threshold value of 16 elements. The easiest way to change the threshold is with rtctkConfigTool. As an example, to change the threshold so that data is always written to FITS files, one can use the following command:
$ rtctkConfigTool --runtime-repo-endpoint file:<repository> \
    set runtime /fits_write_threshold 0 --type RtcInt64
The <repository> tag in the above command should be replaced with the appropriate file system path where the repository is actually located.
Limitations and Known Issues¶
File Based Repository¶
The current implementation of the Runtime Configuration Repository is only a simple file based version using YAML and FITS files. This has the following implications:
- The performance may currently be limited for large vectors and matrices, or for large numbers of requests.
- Datapoint subscription is not currently supported.
- The repository is not distributed and therefore can typically only be used to run components locally on a single machine.
Note
If the Runtime Configuration Repository location is configured to point to a NFS mounted file system that is shared between all the machines where the SRTC components will be run, this can allow running SRTC components in a distributed manner correctly. However, NFS version 3 or newer must be used in this case. Older versions do not allow correct file based locking to be implemented and will lead to race conditions.
Locking¶
The file based Runtime Configuration Repository uses a file lock for synchronisation. If for any reason a process dies while still holding the lock, this will block all other SRTC components trying to read from the Runtime Configuration Repository. The solution is to simply delete the write.lock file found in the repository's directory, where the top level YAML files are located.
Aliasing of Types¶
Due to a lack of native support in the underlying OLDB, the RtcVectorBool and RtcMatrixBool types are effectively aliased to std::vector<int32_t> under the hood when using the fully fledged OLDB based implementation, i.e. the cii.oldb URI scheme. The aliasing is transparent for code using RuntimeRepoIf. However, the underlying storage requirements will be significantly higher than expected, and can therefore also exhibit lower performance than expected. Similarly, RtcMatrixString is transparently aliased to std::vector<std::string>; the additional storage requirements in this case are negligible though. This should be resolved in future versions of the RTC Toolkit and CII OLDB.
Span Performance¶
Due to current limitations in the CII OLDB API, the usage of gsl::span or MatrixSpan will not necessarily result in performance improvements, since an extra memory copy is performed under the hood anyway. End developers are nevertheless encouraged to use gsl::span and MatrixSpan when appropriate. Future versions of the CII OLDB and RTC Toolkit are expected to significantly improve the performance by eliminating the unnecessary memory copying.
Subscription Deadlocks¶
There is currently a subtle problem with the CII subscription mechanism where in certain cases lock order inversion can occur if using locking in the subscription callback handlers, and could lead to a deadlock. This is expected to be fixed in future versions of CII and RTC Toolkit.