3 VERIFICATION
If you are already familiar with the VLT Common Software, simply browse section 3.4 to get familiar with any new tool introduced by this version.
This section describes how to set up a simple application environment in order to exercise some of the most important features of the VLT Common Software. The purpose of this activity is twofold:
The detailed information on how to configure each module is provided by the appropriate User Manual.
The scope of this section is limited to a WS and a LCU in a one-environment-per-node configuration. For the configuration and verification of Drivers and Motor Control and for more environments on a WS, please refer to the User Manuals.
The following sections assume that:
· you are familiar with configuring UNIX and, as appropriate, VxWorks utilities and that node names, IP addresses, etc. are already correctly configured on your WS.
· you have one WS with CCSLite and one LCU and you set up an environment on each. In this chapter <wsenv> and <lcuenv> will be used to represent the name of the two environments.
Generally speaking, in order to verify the correct file setup, configuration activities should be carried out under a username different from the one used for installation. The database is already created with a user called "ccsuser": add this user to your system and use it for all configuration activities described in the present chapter. Once you are more familiar with CCS, LCC and CCSLite, other users can be defined.
3.1 UNDERSTANDING ENVIRONMENTS
The VLTSW is based on the concept of environments where processes can run and exchange messages with other processes, either in the same environment or in a remote environment, on the same machine or in other machines.
CCS-Lite (or QSEMU) environment (on WS)
an environment providing database and communication facilities, used to build WS applications. On each WS there can be one or more WS environments.
LCC environment (on LCU)
provides database and communication facilities (lcu-Qserver) to build real-time applications. There can be at most one environment per LCU.
An environment is uniquely identified by its name. Remember that environment names are limited to 7 characters and the first letter must be "w" for a WS environment and "l" for an LCU environment.
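As a quick illustration, the naming rule can be checked in plain shell. The helper function below is our own sketch, not a VLT tool:

```shell
# Sketch of the naming rule: an environment name has at most 7
# characters and must start with "w" (WS) or "l" (LCU).
check_env_name() {
    case "$1" in
        w*|l*) ;;            # first letter must be w or l
        *) return 1 ;;
    esac
    [ ${#1} -le 7 ]          # at most 7 characters
}

check_env_name wte49 && echo "wte49: valid name"
check_env_name lenvtoolong || echo "lenvtoolong: invalid (too long)"
```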
REMARK: Real VLT environment names must follow the conventions defined by the applicable version of the "VLT LAN's Specification". In the following examples, generic names built from the machine node name are used.
From the communication point of view, each environment is identified by the node on which it is running and a TCP/IP port number. The same port number can be used on different nodes for the same type of environment. Currently we use:
3.1.1 Configuring Environments
To be able to communicate, environments need to know about each other.
1. there is a TCP/IP channel (environment name and port number) for each environment. The node-number pair must be unique in the system, so the same port number can be used for all the LCC environments. TCP/IP channels are defined in the file /etc/services.
2. (for CCS-Lite only) each CCS-Lite (QSEMU) environment needs to know where the other QSEMU or LCC environments are located; they can be either local or remote to the WS. $VLTDATA/config/CcsEnvList provides such a mapping.
3. each LCU needs to know from which host it has to boot and which the other environments are: CCSs and QSEMUs (an LCU should not talk directly to other LCUs). This is defined in the $VLTDATA/ENVIRONMENTS/<lcuenv>/bootScript file.
If LCU(s) need to communicate with more WS environments than the assigned boot environment, then the file "$VLTDATA/config/lqs.boot" must be created and edited. The standard file in "$VLTROOT/vw/bin/$CPU/lqs.boot" can be taken as template.
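Putting points 1 to 3 together, a CcsEnvList on the boot WS might look like the following sketch (entries follow the examples given in section 3.5; names are examples, adapt them to your hosts):

```
wte49   te49/vltdata/ENVIRONMENTS/wte49
wte35   te35
lte49   te49
```

The first entry describes a local WS environment, the second a WS environment on another host, the third an LCU environment and its node.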
each QSEMU environment shall have a $VLTDATA/ENVIRONMENTS/<wsenv> directory for the database files (dbl/ and DB/), snapshots, the database image and the process configuration table (CcsEnvTable)
each LCC environment shall have a $VLTDATA/ENVIRONMENTS/<lcuenv> directory for the database files (dbl/ and DB/) and the boot files (bootScript, devicesFile, ...)
In addition to the basic environment configuration, other configuration tables are required by applications like CCS Log System and CCS Scan System.
The verification process will guide you through a first setup of these files.
REMARKS:
the VLT Software Environments Common Configuration User Manual provides detailed documentation about environments and configuration tools. The panel layouts, as well as several examples of expected output, are also given there.
3.2 THE APPLICATION EXAMPLE
To allow you to exercise the CCSLite and LCC software, a simple example, consisting of an LCU application and a WS application, is provided.
Each application is implemented as a VLT software module and also provides an example of use of the VLT Programming Standards. Please note:
3.2.1 The LCU Application
The LCU application consists of one process (lcuapp) able to treat the following incoming commands:
The implementation of the LCU application is in ~/VLTSW/example/lcuapp and is generated during the installation. If needed, it can be regenerated as follows:
3.2.2 The WS Application
The WS application consists of two programs:
wsappShowValue, which continuously displays an item from the local database until a "q" character is typed in.
wsappSetValue, which works with a companion partner, named lcuapp, in another environment. It prompts the user for input:
The implementation of the WS application is in ~/VLTSW/example/wsapp and is generated during the installation. If needed, it can be regenerated as follows:
3.3 INSTALLING AND CONFIGURING ACC DATABASE
The ACC database contains all the information (IP address, machine type, node name, etc.) used by the configuration tools and it uses mSQL as database engine (see the ~/VLTSW/tcltk/msql-<version>/doc/License for copyright).
Except for the differences explicitly indicated, the procedure is the same for HP, SUN and Linux. This installation is done as "vltmgr". By default, "vltmgr" is also the username authorized to write to the database. For more information, please refer to the ACC and mSQL documentation.
You need only one database running for all machines. The environment variable ACC_HOST tells the VLT Software on which host the database daemon is running. On every machine, including the one where the daemon runs, define:
in the /etc/pecs/releases/000/etc/locality/apps-`hostname`.env and install the mSQL software on that <host> only.
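For illustration, the definition would be a line of the following form (the exact assignment syntax should match the other entries already present in your apps-`hostname`.env file; <host> stands for the actual host name):

```
ACC_HOST=<host>
```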
The mSQL code is generated and installed in $TCLTK_ROOT during the buildTcltk step. That generation is sufficient to correctly build the Sequencer. The following actions are needed to create and start the database and to load it with the configuration data:
(you should not need to edit this file. If you edit it while the daemon is running (see below), use the utility msqladmin reload to make the changes effective)
#msql 1111/tcp # Mini SQL database server
Test the installation by querying the database from a VLT utility:
Remember to include the definition of ACC_HOST among the environment variables defined at the login.
The utility vccFastShow is also available; it is much quicker for big databases.
Note: an example of the file accData.sql can be found on the web:
http://www.eso.org/projects/vlt/sw-dev/ under the paragraph:
"VLT Common Software: templates to help the installation of a VLTSW machine".
3.3.1 Automatic Startup of Database Daemon at boot Time
As "root" do the following steps to have the database daemon always active after a reboot of the machine:
HP:
# cp /home/vltmgr/VLTSW/PRODUCTS/pecs/templates/msql /sbin/init.d/msql
# ln -s /sbin/init.d/msql S910msql
# ln -s /sbin/init.d/msql K089msql
SUN:
# cp /home/vltmgr/VLTSW/PRODUCTS/pecs/templates/msql /etc/init.d/msql
# ln -s /etc/init.d/msql S91msql
# ln -s /etc/init.d/msql K08msql
Linux:
cp <VLTSW>/PRODUCTS/pecs/templates/msql /etc/init.d/msql
ln -s /etc/init.d/msql S91msql
ln -s /etc/init.d/msql K08msql
3.4 CONFIGURE AND VERIFY
This section guides you through the configuration of one CCSLite environment and one LCU. It is assumed that both VxWorks and CCSlite are installed.
If VxWorks is not installed, please skip everything concerning LCU.
3.4.1 Configure the LCU TCP/IP Channels
In order to have communication between the LCU and the WS (Engineering User Interface), a TCP/IP channel (environment name and port number) is required for each environment. TCP/IP channels are defined in the file /etc/services (you need to be root to edit it). Add to this file one line for each LCU and for each WS environment. E.g.:
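With the generic environment names used in this chapter and the port conventions of section 3.5 (WS ports in the range 2001-2999, port 2160 for LCU environments), the added lines could look like this (names are examples):

```
wte49           2001/tcp        # WS environment
lte49           2160/tcp        # LCU environment
```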
/etc/services

3.4.2 Configure the LOGGING System
The Logging System is not automatically started with the environment because it is independent of it. The Logging System is installed as follows:
Be careful that the ownership of the files logFile and logAuto should be vltmgr:vlt, with permissions 775.
b. enable logging from remote hosts (Linux only):
Edit /etc/sysconfig/syslog so that the SYSLOGD_OPTIONS line reads:
SYSLOGD_OPTIONS="-r -m 0"
d. enable user vlt to use crontab by adding "vlt" to:
HP: /var/adm/cron/cron.allow
SUN and Linux: /etc/cron.d/cron.allow
Note: an example of /etc/syslog.conf for the 3 supported platforms can be found on the web:
http://www.eso.org/projects/vlt/sw-dev/ under the paragraph:
"VLT Common Software: templates to help the installation of a VLTSW machine".
3.4.3 Data Flow System (DFS) logging system configuration (if applicable)
In order for the DFS logging system to keep track of VCSOLAC behaviour/performance, another entry must be added to the /etc/syslog.conf file on all the instrument workstations involved: this entry sends "local3.debug" messages to the OLAS machine (for Paranal's UTs, a machine in the group wu{1,2,3,4}dhs).
For example add the following line:
in the /etc/syslog.conf file on the UT2 instrument workstations to send messages to the OLAS machine wu2dhs.
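Based on the facility and destination named above, such an entry follows standard syslog.conf syntax (a facility.level selector, then the remote host prefixed with "@"); for the UT2 case it would look like:

```
local3.debug    @wu2dhs
```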
3.4.4 Configure the WS Environment
Each CCSlite environment is defined by a name that shall be unique in the network. Hereafter <wsenv> is used to name the CCSlite (or QSEMU) environment you are configuring. Substitute each occurrence of <wsenv> with the real name.
1. as vltmgr, define the environment as the "local" environment, i.e., the one to access data from when environment name is not specified:
Do this for all windows you may have open. This definition is needed from now on, and it should be added to the other ones defining the environment (e.g., added to /etc/pecs/releases/000/etc/locality/apps-`hostname`.env).
Note that, to support WSs with multiple LAN interfaces corresponding to different hosts, the format of the CcsEnvList has been slightly modified. The new format of the CcsEnvList is fully supported by vccmake.
· enter the actual environment name and press Return (or, depending on the keyboard, Enter). Fields are filled with default values, normally appropriate.
The default database is already configured to support the verification tests. The CcsEnvTable file defines the processes that shall automatically start at database startup.
You can use some CCS tools to inspect the database structure or to monitor the running processes (like CcsPerfMon).
3.4.5 Verify the WS Environment
The easiest way is to use ccsei to interact with the WS environment. ccsei is the interactive utility to exchange messages with local and remote applications, to read/write in databases, to monitor the logs, to call other development and debugging tools. The same functions are available for both LCU and WS environments.
If the WS environment has been properly started, all ccsei utilities should work. Invoke the main menu first:
1. from the ccsei main menu, click "CCS Log Monitor". The "VLT Log Monitor" tool is started, with the "MONITOR" option enabled by default. In another xterm, create a log:
3. write "cmdManager" in the "Process" field, click "Help" and select "On Commands". The "Help On Commands" panel is presented.
4. <double-click> on the "VERSION" command: the command format is displayed and the command name also appears after the prompt of the Command Window (1<wsenv>(Pro)> VERSION).
5. click after the (Pro) > VERSION and press the <Enter> key to send such a command to the cmdManager process. The "Replies" field should give the version number of the currently running cmdManager.
8. click "CCS Database Monitor" in the main menu. The "ccsei Database Monitor" window is presented. Click "CCS Database Monitor" again to have a second window. The first will be used for a continuous monitor of a variable, the second to change the variable content.
9. in both windows: click "Browse", then select "PARAMS", "SCALARS", "scalar_int32", "Accept" to move the point name to the main menu, "Dismiss" to close the point browser. Both should display the same value.
11. In the second window, click on the "DB Value" field, type another value and press <Enter>. The first window should show the new value in the field labeled "Data Quality and Values:" (a little delay is normal).
More about the ccsei can be found in the "CCS User Manual".
3.4.6 Configure the LCU Files on the WS
Working with the LCC requires that some files be set up on the WS acting as boot node.
Each LCU environment is defined by a name that shall be unique in the network. Hereafter:
Substitute each occurrence of <lcuenv> and <lcunode> as appropriate.
· the LCU is booted and the remote login from the WS to the LCU is possible (no one is locking the LCU shell).
· the "vx" username is defined and the LCU can execute a remote shell to it (either ~vx/.Xauthority or ~vx/.rhosts correctly set)
· enter the actual LCU environment name (<lcuenv>) and press Return (or, depending on the keyboard, Enter). The LCU host (<lcunode>) and the Boot-env (<wsenv> or <qsemuenv>) are taken from the database.
· the "Config..." button invokes the "vccConfigLCU" panel to change configuration options. The default values are enough for the validation test, but you can set whatever is needed by your applications. Click "WriteFiles" when ready to regenerate the target files. Click "Continue" to overwrite the current files. Then go back to the vccEnv panel.
· click "Start" to reboot the LCU. The log of the reboot process allows you to follow it. At the end of the boot, the console should display the message:
LCC INITIALISATION SUCCESSFUL.
REMARK: to be able to display the boot log, the LCU Configuration tool locks the connection to the LCU for about 120 seconds. After this time the connection is released (vccConfigLcu: connection closed). If the boot process takes longer, part of the log may not be displayed. In such a case, check the LCU console.
3.4.7 Make the LCU known to the CCSLite environment
2. configure reporting node for LCU log activity. You should copy a template file from the $VLTROOT under your WS environment directory. Execute the following:
$ cp $VLTROOT/include/logLCU.config.template \
$VLTDATA/ENVIRONMENTS/<wsenv>/logLCU.config
and modify it according to your configuration.
"@(#) LCC ....<version>... ....<date>....."
5. from the ccsei main menu, start a "CCS Log Monitor", click on "MONITOR", then create a log from the LCU
REMARK: a delay of a few seconds is normal because, in order not to overload the network, logs are transmitted by the LCU to the upper level only periodically.
3.4.8 Verify the LCU environment
ccsei can also be used to verify the newly created LCU environment.
3. send the command LCCGVER (either by typing it after the prompt or by selecting it from the menu); press <Enter>. The "Replies" field should display the version number:
4. click "CCS Database Monitor" in the main menu. The "ccsei Database Monitor" window is presented. Click "CCS Database Monitor" again to have a second window. The first will be used for a continuous monitor of a variable, the second to change the variable content.
5. continue as in 3.4.5, but from the [+] menus select <lcuenv> as environment. Notice that the selection of the database point using the "Browse" facility is slower, because the system queries the database structure from the running LCU.
3.4.9 Interact with an LCU Application
The lcuapp is used to test and demonstrate how ccsei works.
1. from the ccsei main menu, click "Log Monitor". The "VLT Log Monitor" tool is started. Enable the monitoring by clicking the "MONITOR" option.
2. make the lcuapp known to ccsei. Add one line containing lcuapp to the file $VLTDATA/ENVIRONMENTS/<lcuenv>/PROCESSES
Two logs should appear in the LCC Log/Inspect Window telling that lcuapp has started and is waiting for commands.
4. start a "CCS Database Monitor" window in monitoring mode (check "Move to list" and "Activate monitor" ) on the <lcuenv>:PARAMS:SCALARS.scalar_int32 variable.
5. from a "CCS Command Window", select <lcuenv> as environment and then lcuapp as process, then send a SETVAL <nn> message to lcuapp to change the value in the LCC database. <nn> is an integer; the valid range is 0-100.
Repeat for different values. If <nn> is out of range, an error and a different log should be received.
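Step 2 above (registering lcuapp in the PROCESSES file) can also be done from the shell, e.g. (substitute <lcuenv> as usual):

```shell
echo lcuapp >> $VLTDATA/ENVIRONMENTS/<lcuenv>/PROCESSES
```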
3.4.10 Verify Cooperation between WS and LCU Applications
The scenario we are going to use now is the same as the one used to test the lccei, but a WS application is used to send commands to the LCU companion.
At this point you are already an expert in configuring both WS and LCU so set things up as follows:
At this point you can run the WS application, which sends the number you type to lcuapp, to be stored, if in range, in the LCC database.
If in range, the number you type will be displayed by the Monitor window. Repeating for different values of <nn> you can experiment with several cases, including logs, errors, replies, etc. Type a "q" to stop both lcuapp and wsappSetValue.
3.4.11 WS Environment Shutdown
To terminate the verification, close the environment properly using the vccEnv "Stop" or:
These actions complete the verification of CCSLite and LCC installation.
3.5 SUMMARY: LIST OF FILES NEEDED FOR THE CONFIGURATION OF ENVIRONMENTS (LCU AND WS)
- $VLTDATA/msql/accData.sql: update the tables "station" and "prog_environment" (see also template on the web at http://www.eso.org/projects/vlt/sw-dev/vlttemplate/accData.sql);
- /etc/services: write a pair "wsenvname port_number/tcp"; by default port numbers should be in the range 2001-2999;
- $VLTDATA/config/CcsEnvList: write
1. wte49 te49/vltdata/ENVIRONMENTS/wte49 (if the environment is local, wte49 is the name of the ws environment)
2. wte35 te35 (if the environment is on a different workstation, wte35 is the name of the environment while te35 is the host the environment belongs to)
Note: if a WS environment on one machine wants to communicate with a WS environment on another machine, the accounts running the environments must exist with the same UID/GID on both machines for the communication to work properly.
- $VLTDATA/msql/accData.sql: update the tables "station", "prog_environment" and lcu_progenv (see also template on the web at http://www.eso.org/projects/vlt/sw-dev/vlttemplate/accData.sql);
- /etc/services: write a pair "lcuenvname port_number/tcp"; from a communication point of view, each environment is identified by the node on which it is running and a TCP/IP port number. The same number can be used on different nodes for the same type of environment. Currently we use the value 2160 for LCU environments;
- $VLTDATA/config/CcsEnvList: write a pair "lcuenvname lcunodename" for every LCU environment that has to be known on that WS host;
- $VLTDATA/ENVIRONMENTS/<wsenvname>/logLCU.config: use the utility logCreateLcuConfig to prepare the file, which will contain pairs "lcuenvname wsenvname", specifying which LCUs have a given WS environment as logging reporting node;
- ~vx/.rhosts: the file .rhosts of the user vx must contain all lcu nodes (hostnames) that have to work with the system (vx must be able to remsh to those hosts). For Linux, the permission of the file should be 644.
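As an illustration, a minimal ~vx/.rhosts listing two LCU nodes could look like this (one host name per line; the node names are placeholders):

```
lcunode1
lcunode2
```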
3.6 OTHER TESTS
The verification procedure has the purpose of checking that the basic features work correctly and of getting you acquainted with the VLT Software.
During installation you may have decided to install many more modules than the ones the verification procedure has tested. The User Manual of each software module contains additional tests. Please refer to that documentation to configure and test drivers, INS software, etc., as appropriate to your installation.
3.7 CONFIGURING WORKSTATION STARTUP
As appropriate, the startup of the standard environment, qsemu, logManager, etc. can be part of the WS startup process. The automatic startup of the ACC database server has been described in 3.3.1, the automatic start of the WS environment is described in the CCSlite installation procedure.
3.8 REPORT TO ESO
You are kindly requested to provide by mail, fax or e-mail (see 1.6 for addresses) the list of the products you have installed and the computer configuration (type, OS, etc.).
Please contribute to improving the quality of this manual, especially the troubleshooting list (see 4): add to your report any suggestions you may have to improve the installation procedure, or any mistakes you may have made.
Problems or errors in the installation procedure should be reported using the SPR form (see 1.6).
Note 1: The last value of the line, "1000000", is the size that the logFile can reach before being backed up. This value can be reduced or increased according to the type of operations.