
Frequently Asked Questions


OPUS Applications


What are the advantages of using OPUS?

OPUS is designed to help you distribute processing over a local network of machines and to allow you to start up a number of separate instances of any task. The central objective is throughput: performing the analysis of a number of independent datasets robustly and efficiently.

A useful feature of this system is the ability to monitor the status of both datasets and processes.


What kind of processes can be run in the pipeline?

Any process which can be run from a shell script: a simple shell script itself, or an executable invoked from a shell script. Of course, those executables can be written in any language including IRAF and IDL.

To take full advantage of the OPUS environment, tasks can be written as internal pollers built with the OAPI.

The default size of the "PROCESS" field in a process status entry is nine (9) characters, which effectively limits process names to that length. However, the OAPI supports changes to the process status entry structure, including the size of this field.


Can a script read from STDIN (standard input)?

No.

The script should get its arguments from the command line, from environment variables, or from the process resource file. All keywords prefixed with ENV. in the process resource file become environment variables, so these values are easily accessible.
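
For example (the keyword name and value here are hypothetical, assuming the usual KEYWORD = value resource file syntax), a resource file entry such as

   ENV.CAL_DIR = /data/calfiles

appears in the task's shell as the variable CAL_DIR:

   #!/bin/csh
   # CAL_DIR was set by OPUS from the ENV.CAL_DIR resource entry;
   # the ENV. prefix is not part of the variable name.
   echo "using calibration files in $CAL_DIR"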


Can I use command line arguments for my tasks?

Yes.

This is a standard way to get information into your task. OPUS pipeline tasks run in the background, but command line arguments can be specified in the process resource file. This is a convenient way to use the same task for slightly different functions.
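
As a sketch, the same csh task might run a different branch depending on a flag supplied on its command line (the flag value here is purely illustrative):

   #!/bin/csh
   # The first command line argument (if given) selects the processing mode.
   set mode = ""
   if ($#argv >= 1) set mode = "$1"
   if ("$mode" == "quicklook") then
      echo "performing quick-look processing"
   else
      echo "performing full calibration"
   endif
   exit 0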


How are the values of environment variables set?

The environment variables used by the OPUS managers (the Observation Manager and the Process Manager) are not the same as those used by the pipeline processes. Pipeline processes must first have their environment variables set by your opus_login.csh file.

Keywords in the process resource file that are prefixed with ENV. are defined, with their values, as environment variables. Note that the ENV. prefix does not become part of the variable name. These environment variables are available only in the shell in which your task is run.

There are additional environment variables defined by the OPUS system as a task is started in response to an event. First, all tasks have access to the EVENT_TYPE variable that takes on one of the following three possible values:

   EVENT_FILE_EVENT, EVENT_OSF_EVENT, or EVENT_TIME_EVENT
Each of these values corresponds to the type of trigger that caused the event. The number of items in the event is placed in the EVENT_NUM variable. Unless you have configured your application to handle more than one item per event, EVENT_NUM will be 1.

Tasks that are triggered by a file event additionally have access to the EVENT_NAME variable:

   EVENT_NAME      The filename which triggers the event.

If EVENT_NUM is greater than 1, then additional environment variables are defined of the form EVENT_NAME1, EVENT_NAME2, etc.

Tasks that are triggered by an OSF have access to all the information in that OSF:

   OSF_DATASET     The name of the exposure that triggered the task.
   OSF_DATA_ID     The type of the exposure (by default, a 3 character descriptor).
   OSF_DCF_NUM     An arbitrary sequence number.
   OSF_START_TIME  The time the exposure started in the pipeline.
As with file events, if EVENT_NUM is greater than 1, additional environment variables are defined with a number appended (e.g., OSF_DATA_ID1) for each item in the event.

Time events have no event-related environment variables defined other than EVENT_TYPE.

You can use the values of these environment variables as command line arguments to the tasks you write, or in the bodies of the tasks themselves. See the path file section for more details on the relationship between path file variables and the environment variables from process resource files.
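
A minimal csh sketch of a task body that branches on these variables (the echo commands stand in for real processing steps):

   #!/bin/csh
   # Dispatch on the type of trigger that started this task.
   switch ($EVENT_TYPE)
   case EVENT_FILE_EVENT:
      echo "triggered by file $EVENT_NAME"
      breaksw
   case EVENT_OSF_EVENT:
      echo "processing dataset $OSF_DATASET (data id $OSF_DATA_ID)"
      breaksw
   case EVENT_TIME_EVENT:
      echo "timed wakeup; only EVENT_TYPE is defined"
      breaksw
   endsw
   exit 0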


What is the difference between an external and an internal poller?

External polling processes are programs or scripts that have no knowledge of how the OPUS blackboard works. These processes are invoked through the OPUS task XPOLL (eXternal POLLer). Most of the sample pipeline applications are external pollers. The g2f task is the only exception; it was implemented using the OAPI.

The OPUS system uses information in the process resource file to decide when to activate a process. In the case of external pollers, xpoll responds to an event by spawning its associated process, which in turn reports back to xpoll, through an exit status code, how successfully it processed the event. xpoll maps that code to specific keyword values in the process resource file, and the OPUS system is informed of the disposition of the event. External pollers are started by the OPUS system each time work is required; they then exit, to be started again later when more work is needed.
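
From the task's point of view the contract is simple: exit with a distinct status code for each disposition, and let the resource file (not shown here) map each code to an outcome. A sketch in csh, where do_step is a hypothetical processing command and the specific codes are illustrative rather than OPUS-defined:

   #!/bin/csh
   # Run the processing step and report the outcome via the exit status.
   do_step $OSF_DATASET
   if ($status == 0) then
      exit 0    # mapped by the resource file to a success disposition
   else
      exit 1    # mapped by the resource file to an error disposition
   endif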

Internal polling processes, like g2f, are programs written with knowledge of how the OPUS blackboard works. They are typically processes with some significant start-up overhead (e.g. database reading, etc.). The process is written to perform the start-up overhead and then enter a polling loop to wait for pipeline events. The process stays active as it polls for and processes events. Internal pollers are built using the OAPI to communicate with the OPUS system, and can respond to a reinitialization command.


How do I add a processing step to a pipeline?

Three things are required: a new script or OAPI application for that step, a corresponding process resource file, and an update to the pipeline.stage file.


Are there any limitations on naming a new task?

Yes.

The name of the task is used in constructing the process status entry, and that file has a limited number of characters to hold the task name. The default limit is nine (9) characters; however, this limit can be changed through the OAPI.


What are some of the common gotchas I should beware of when I write my OPUS tasks?

Besides the traditional dangers of memory leaks and unclosed files, you should be attuned to the possibility that many copies of your task might be running simultaneously. Thus it is important to open files for reading only when possible (in C, use 'r', not 'r+'), to expect collisions when updating databases, and always to terminate with a known status.

Status messages to the standard output device will automatically be kept in a process log file. It is extraordinarily useful to write to the log file both wisely and often to document the actions taken by a pipeline task.
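
For example, a single csh line is enough to document a step (the message text is illustrative):

   echo "`date`: recalibration of $OSF_DATASET complete"

The message goes to standard output and is therefore captured in the process log file.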

Also keep in mind that an external polling process has access to process resource file keywords and values through its environment only for those keywords prefixed with ENV.


What kind of message reporting does OPUS provide?

The severity of the OPUS messages reported to any of the log files can be selected with the environment variable MSG_REPORT_LEVEL. This allows the user to specify which types of messages should be reported. Ordinarily the number of 'Debug' messages can be quite large, and during normal operations 'Informational' messages (and those more severe) will be sufficient. The user can set the current report level to one of the following values: MSG_ALL, MSG_DIAG, MSG_INFO, MSG_WARN, MSG_ERROR, MSG_FATAL, MSG_NONE. The report levels are cumulative, which means, for example, that the MSG_WARN level will receive MSG_WARN, MSG_ERROR, and MSG_FATAL messages. The default level for message reporting is MSG_INFO.

Note that when the report level is set to MSG_NONE, no log files are produced.
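
For example, to report only warnings and more severe messages, set the variable in csh before the pipeline processes start (opus_login.csh is one natural place):

   setenv MSG_REPORT_LEVEL MSG_WARN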

