** NOTE **
At the delivery of this OPR, the TIME_str_2_YYYYDDDHHMMSS function is used only by the CVTFOF Process in the OMS Data Receipt Pipeline. The CVTFOF Process uses the function to format time strings to be inserted into Subset File Header Records.
setenv OPUS_REMOTE_SHELL `which ssh`
setenv OPUS_REMOTE_COPY `which scp`
Your accounts and systems must be set up for ssh before making this change. OPUS expects the commands named by OPUS_REMOTE_SHELL and OPUS_REMOTE_COPY to be rsh-compatible.
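A quick, purely illustrative sanity check is to confirm that the commands named by these variables run without prompting for a password (the node name odoalp and the test file path below are only examples):

echo test > /tmp/opus_copy_test
$OPUS_REMOTE_COPY /tmp/opus_copy_test odoalp:/tmp/
$OPUS_REMOTE_SHELL odoalp hostname

If either command stops to ask for a password, the ssh setup is not yet complete.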
whereas with this delivery it might be named:
pipect-39ec73a1-odoalp-0020aa01.log
With this change, it is no longer necessary to append a random number to the process start time in order to reduce the chance of identical log file names when more than one instance of a process is started at the same time. However, process log file generation must now be delayed until later in the process startup sequence, so less pre-startup logging is captured in the file (e.g., the output of the user's UNIX login script will no longer appear in process log files). The format of the log information generated by the OPUS tasks that run before and after the process itself was also improved. A representative example of the new formatting follows:
ODCL: +++ odcl_run_process.csh started +++
ODCL:
ODCL: Pipeline Software Release   OPUS 12.0A   OPUS 12.0   SHARE 2.2A   SHARE 2.2   ****** *** 24 Jul 2000 *********
ODCL:
ODCL: Input parameters:
ODCL: PROCESS_NAME = pipect
ODCL: PATH_FILE = /info/devcl/pipe/wmiller/pipe/unix//g2f.path
ODCL: TIME_STAMP = 39ec9b11
ODCL: PASSWORD? No
ODCL:
ODCL: Fetching TASK line from process resource file...(odcl_get_resource_command)
ODCL:
ODCL:
ODCL: Task to be run: xpoll (/project/devcl/odosw1/wmiller/build1/bin/axp_unix//xpoll)
ODCL:
ODCL:
ODCL: Creating process PSTAT...(odcl_create_psffile)
ODCL: [...logging generated by odcl_create_psffile appears here...]
ODCL:
ODCL: Running the process...
ODCL: [...logging generated by the process appears here...]
ODCL:
ODCL: Process exited. Cleaning up...(odcl_cleanup)
ODCL: [...logging generated by odcl_cleanup appears here...]
No changes were made to VMS logging or process log file names, although the Motif PMG was updated to support the new log file names under UNIX.
There is every indication that the system works and does not have major problems; however, use of the CORBA blackboards and Java managers is at the user's risk.
System Description:
The current OPUS system not only stores blackboard content on the file system, it also uses the file system services as a de facto OPUS server: requests for changes to and queries of blackboard content are made by each pipeline process, and the managers, directly to the file system. There is no centralized OPUS server that coordinates blackboard activity.
There are many advantages to such a system, but there are disadvantages too. One of the most troublesome is that file I/O scales with the number of concurrent OPUS processes and managers. This can undermine operating system robustness by overloading the file system. Because the file system functionality is generic, it cannot exploit intrinsic OPUS behavior that would significantly reduce the amount of disk activity required of the system.
For example, much blackboard activity is read-only, so caching is a good way to reduce the number of physical reads off disk. Of course, most file systems incorporate some sort of caching, but they do not maximize the benefit specifically to the OPUS domain. An OPUS-aware caching blackboard server could do so, however.
Once a server is added to the OPUS system, it becomes a requirement that all blackboard activity be funneled through that server. Actually, this is not a new requirement: the network file system already fulfills it for the file system-based blackboards. The difference here is that the server becomes part of the OPUS system, hence the need for a distributed object architecture like CORBA. Another benefit of bringing the server into the OPUS system is that it can be augmented at will, unlike the vendor-controlled file system.
One such major enhancement made to the server-side capability of OPUS is the introduction of an event channel over which blackboard changes are pushed to the new Java managers. Direct access to the OPUS file system is no longer required to run the managers because the servers cater to their information needs in a location-transparent way. Moreover, the manner in which this information is distributed places far less load on the file system.
The introduction of OPUS servers does not invalidate the present system, however. Any reasonable implementation of a caching server includes a persistent store to guard against information loss in the event of failure and to allow for server shutdown. Since OPUS already has an excellent mechanism for maintaining persistent state, it is advantageous to reuse that infrastructure in the server design. A notable side effect of doing so is that a backward-compatible system can be devised trivially. If the servers use the file system-based blackboards as their persistent store, and the format of the PSTATs and OSFs and their locations remain the same, then the user is free to switch between the two systems. That is the approach taken here.
The internal structure of the OAPI and its external configuration were designed for just such an extension. The file OPUS_DEFINITIONS_DIR:opus.env includes the key BB_TYPE for selecting the blackboard implementation type. Only the value FILE has been supported to date; this delivery adds the value CORBA, which switches the system to the CORBA-based caching approach described above. Note, however, that by eliminating the requirement that the managers have access to the OPUS file system, it becomes necessary to always run the servers; the distinction between BB_TYPE FILE and CORBA is whether or not the pipeline applications will also use the CORBA servers.
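For illustration, selecting the new implementation amounts to a single entry in OPUS_DEFINITIONS_DIR:opus.env:

BB_TYPE = CORBA

while the traditional behavior is retained with:

BB_TYPE = FILE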
If the traditional file system-based blackboard system is used (BB_TYPE = FILE in opus.env), then the servers are used exclusively by the managers. As such, the servers do not witness every change that happens on the blackboards (in fact, they are aware only of the manager-initiated changes, since the pipeline applications make changes through the file system). On the other hand, the managers still expect to see events posted so that they can update their displays (they are decoupled from any knowledge of which blackboard implementation is in use by OPUS).
The blackboard servers support the Java managers in this case by polling the entire blackboard contents at regular intervals, just like the Motif managers. Unlike the Motif managers, the Java managers do not perform the polling; the servers do. Also unlike the Motif managers, the servers generate events based on consecutive polling results instead of sending the entire list to the Java managers. Note that although the Java managers function identically in either case, the information they receive is potentially less complete when BB_TYPE = FILE because of the discrete polling, just as with the Motif managers.
The CORBA blackboards and Java managers are not supported on the VMS platform; the Motif managers and BB_TYPE = FILE must be used there. However, the Motif managers can still be run on UNIX systems, and they are compatible with the Java managers, even when run simultaneously in the same environment. Note that the default node size in the PSTAT was increased to 20 characters on UNIX, so in order to use the Motif managers at all, this setting must be reverted to the original 6-character limit (see the release notes for OPR 42874).
System Usage:
The CORBA blackboard system includes three distinct servers: a user context server (opus_env_server), a blackboard server (opus_bb_server) and an event channel (opus_event_service). The Java managers rely on opus_env_server to access user context information and to start pipeline processes. Only one instance of opus_env_server runs per OPUS user at any given time. In contrast, one instance of opus_bb_server and one of opus_event_server run per blackboard (i.e., a pair for the PSTAT blackboard, and a pair per OSF blackboard). However, the user never starts any of these servers directly: opus_env_server is started as a consequence of running the utility opus_server_monitor (if it is not already running), and the blackboard and event channel servers are started automatically as needed by the managers or pipeline applications.
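In practice, then, the only server-related command a user runs directly is the monitor utility. The invocation below is only a sketch, shown without arguments; check your installation for any options opus_server_monitor requires:

opus_server_monitor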
Configuration Procedure:
These instructions must be followed before the Java managers or the CORBA blackboards can be used. There are additional steps that must be taken for the managers (consult the release notes for the Java managers). Of course, OPUS itself also must be configured properly.
(odocluster1)  /store/devcl/odocache/wmiller/ACE_wrappers/ace
(acdsdops)     /store/opscl/odocache/wmiller/ACE+TAO
(Solaris)      /data/artichoke3/wmiller/ACE_wrappers/ace
Caveats & Notes:
The osf_delete task can only delete a single OSF per invocation.
A help file can be found on the OPUS HELP page.
In OPUS, it is not necessary for FNDMRG to always be running, since receiving Q/S splits is relatively rare. Operations could either have it running all the time, or only bring it up when a Q/S split has been identified by the operator.
These environment variables need to be available to FNDMRG and merge_reqd.pl when they run (place them in opus_login.csh or someplace similar):
ARCH_SERVER - the archive database server (currently CATLOG)
ARCH_DB     - the archive database (currently dadsops)
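For example, the following lines could be added to opus_login.csh (the values shown are the current operational ones quoted above):

setenv ARCH_SERVER CATLOG
setenv ARCH_DB dadsops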
Automatic merging is performed by the MRGSTI and MRGNIC processes for STIS and NICMOS, respectively. The merging occurs in a temporary directory (OPUS_TEMP_DIR resource), and the resultant files are moved back into the operational directory holding EDT files for this instrument (SIS_DIR resource). The merged files will overwrite the dataset whose OSF has the corresponding higher DCF number. The OSF for this dataset is then pushed down the pipeline. The OSF and EDT set of the dataset with the corresponding lower DCF number will remain in the pipeline, and will have to be cleaned by hand.
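For reference, the two resources involved take the usual keyword = value form, presumably in the MRGSTI and MRGNIC process resource files; the directory names below are invented for illustration only:

OPUS_TEMP_DIR = /data/ops/merge_tmp
SIS_DIR = /data/ops/stis_edt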
The database entry flagging the split will now serve as an indicator to any later OTFR request that this exposure needs to be merged on-the-fly.
The following are also obsolete, but as I don't see them under 11.1, I include them here only for completeness :)
The osfdel process deletes completed OSFs from the blackboard. The trigger criteria in the osfdel process resource file determine when an OSF is ready for deletion. This process can only be run as part of an OPUS pipeline. The resource file as configured deletes all OSFs that have a "c" in the "CL" column.
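As a sketch only, the configured behavior corresponds to an OSF trigger on the CL column along the lines shown below; the exact trigger keywords should be taken from the delivered osfdel resource file, not from this note:

OSF_TRIGGER1.CL = c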
The cleandata executable allows OPUS users to delete data related to an OSF from either the command line or a pipeline process. The process resource files in the current path are used to determine the location and names of the data files to delete for each OSF. The default is to read ALL the process resource files found in the current path. This behavior can be changed by using the keywords SYSTEM_GROUPINGnn and CLASS_GROUPINGnn, where nn is a number between 00 and 99. When these keywords are present, only process resource files with matching SYSTEM and CLASS keyword values will be selected from the path. Note that the wildcard "*" can be used to match any value.
Three new optional process resource file keywords, OUTPATH_FILTER, DATA_DELETE_FILTERnn, and DATA_DELETE_LOCATIONnn, are introduced with this delivery to describe the data files produced by a task. OUTPATH_FILTER and DATA_DELETE_FILTER describe the filenames of related data files; OUTPATH and DATA_DELETE_LOCATION describe the locations of related files. See the help file on cleandata or the OPUS FAQ for more information. My suggestion is to wait for the archive rework delivery to incorporate this process into the operational OPUS pipeline.
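By way of illustration, the grouping and filter keywords might be combined in a process resource file as follows; every value here is invented for the example, and the authoritative syntax is in the cleandata help file:

SYSTEM_GROUPING00 = g2f
CLASS_GROUPING00 = *
OUTPATH_FILTER = *.fits
DATA_DELETE_FILTER00 = *_tmp.fits
DATA_DELETE_LOCATION00 = /data/ops/temp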
This delivery also resolves the problem of displaying the help webpages in a browser. This problem was restricted to non-Windows platforms.