HP OpenView IT/Operations Administrator’s Reference Management Server on HP-UX Edition 3 B6941-90001 HP OpenView IT/Operations Version A.05.
Legal Notices Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be held liable for errors contained herein or direct, indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use of this material. Warranty.
for DOD agencies, and subparagraphs (c) (1) and (c) (2) of the Commercial Computer Software Restricted Rights clause at FAR 52.227-19 for other agencies. HEWLETT-PACKARD COMPANY 3404 E. Harmony Road Fort Collins, CO 80525 U.S.A. Use of this manual and flexible disk(s), tape cartridge(s), or CD-ROM(s) supplied for this pack is restricted to this product only. Additional copies of the programs may be made for security and back-up purposes only.
X Window System is a trademark of the Massachusetts Institute of Technology. OSF/Motif is a trademark of the Open Software Foundation, Inc. in the U.S. and other countries. Windows NT™ is a U.S. trademark of Microsoft Corporation. Windows® and MS Windows® are U.S. registered trademarks of Microsoft Corp. Oracle®, SQL*Net®, and SQL*Plus® are registered U.S. trademarks of Oracle Corporation, Redwood City, California.
Printing History The manual printing date and part number indicate its current edition. The printing date will change when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The manual part number will change when extensive changes are made. Manual updates may be issued between editions to correct errors or document product changes. To ensure that you receive the updated or new editions, you should subscribe to the appropriate product support service.
In This Book This guide is for the person who installs ITO on the managed nodes, and is responsible for administering and troubleshooting the ITO system. It covers agent installation, first-time configuration, agent de-installation, tuning, and troubleshooting. The guide assumes that the reader has a sound knowledge of HP-UX system and network administration and troubleshooting.
Conventions The following typographical conventions are used in this manual. Each entry describes what the font type represents, followed by an example.
• Book or manual titles, and man page names. Example: Refer to the HP OpenView IT/Operations Administrator’s Reference and the opc(1M) manpage for more information.
• Provides emphasis. Example: You must follow these steps.
• Specifies a variable that you must supply when entering a command. Example: At the prompt type: rlogin your_name where you supply your login name.
• Keycap: Keyboard keys. Example: Press Return.
• [Button]: Buttons on the user interface. Example: Click [Operator]. Click on the [Apply] button.
• Menu Items: A menu name followed by a colon (:) means that you select the menu, then the item. When the item is followed by an arrow (->), a cascading menu follows.
The IT/Operations Documentation Map ITO provides a set of manuals and online help which aim to assist you in using ITO and improve your understanding of the underlying concepts. This section illustrates what information is available and where you can find it. HP OpenView IT/Operations Printed Manuals This section provides an overview of the printed manuals and their contents. The HP OpenView IT/Operations Concepts Guide provides you with an understanding of ITO on two levels.
Managing Your Networks with HP OpenView Network Node Manager is for administrators and operators. It describes the basic functionality of HP OpenView Network Node Manager which is an embedded part of ITO. The HP OpenView ServiceNavigator Concepts and Configuration Guide provides information for administrators who are responsible for installing, configuring, maintaining, and troubleshooting the HP OpenView ServiceNavigator. It also contains a high-level overview of the concepts behind service management.
ServiceNavigator concepts and tasks for the ITO operator, as well as reference and troubleshooting information. The HP OpenView IT/Operations Man Pages are available online for ITO.
See the HP OpenView IT/Operations Installation Guide for the Management Server for general installation instructions using swinstall. The manuals are installed into the following directory on the management server: /opt/OV/doc//OpC/ Alternatively, you can download the manuals from the following web site: http://ovweb.external.hp.com/lpe/doc_serv Or, view them in HTML format at: http://docs.hp.com ITO DynaText Library The ITO DynaText Library is a collection of ITO manuals in online format based on DynaText.
Using the Online Help System The ITO Motif GUI Online Help System ITO's Motif GUI online information consists of two separate volumes, one for operators and one for administrators. In the operator's volume, you will find the HP OpenView IT/Operations Quick Start describing the main operator windows.
You can also get context sensitive help in the Message Browser and Message Source Templates window. After selecting Help: On Context from the menu, the cursor changes into a question mark which you can then position over the area on which you want help. When you click the mouse button, the required help page is displayed in its help window.
Contents

1. Prerequisites for Installing ITO Agent Software
Managed Node Requirements  29
Hardware Requirements  29
Software Requirements  30

2. Installing ITO Agents on the Managed Nodes
Overview

File Tree Layout on AIX Managed Nodes  118
Standalone System or NFS Cluster Server on AIX  118
NFS Cluster Client on AIX  118
ITO Default Operator on AIX  119
System Resources Adapted by ITO on AIX  119
File Tree Layout on DEC Alpha NT Managed Nodes

System Resources adapted by ITO on Novell NetWare  134
File Tree Layout on Olivetti UNIX Managed Nodes  135
Standalone Systems or NFS Cluster Servers on Olivetti UNIX  135
NFS Cluster Clients on Olivetti UNIX  136
The ITO Default Operator on Olivetti UNIX  136
System Resources Adapted by ITO on Olivetti UNIX

System Resources Adapted by ITO on Sequent DYNIX/ptx  150
File Tree Layout for Silicon Graphics IRIX
Standalone Systems or NFS Cluster Servers on SGI IRIX
NFS Cluster Client on SGI IRIX
The ITO Default Operator on SGI IRIX
System Resources Adapted by ITO on SGI IRIX

Manually De-installing ITO Software from OS/2 Managed Nodes  175
Manually De-installing ITO Software from Solaris, NCR, and SINIX Managed Nodes  176
Manually De-installing ITO Software from Windows NT Managed Nodes  176
Manually De-activating the ITO Agent on an NFS Cluster Client  176
Managing ITO Agent Software

Database Reports
Reports for Administrators
Reports for Operators
Long-term Reports
Report Security

Application Message Interception  326
Server Message Stream Interface API  326
How ITO Starts ITO Applications and Broadcasts on Managed Nodes  327
SMS Integration  328
EMS Integration

Secure Networking  369
The RPC Client/Server Connection  369
Processes and Ports  370
Restrictions and Recommendations  371

10. Tuning, Troubleshooting, Security, and Maintenance
Performance Tuning

Changing the Hostname/IP Address of the Management Server  425
Changing the Hostname/IP Address of a Managed Node  431
ITO Security  435
System Security  435
Network Security  437
Port Security

How MC/ServiceGuard Works  503
Example 1: MC/ServiceGuard Package Switchover  503
Example 2: MC/ServiceGuard Local Network Switching  505
MC/ServiceGuard Redundant Data and Control Subnets  506
MC/ServiceGuard and IP addresses  508
Portable IP Addresses
1 Prerequisites for Installing ITO Agent Software 27
Prerequisites for Installing ITO Agent Software This chapter lists all supported agents and describes the hardware and software prerequisites for each type of supported agent. This information is provided in order to help you select the correct agent platforms to use as ITO managed nodes. Check the minimum requirements thoroughly for each agent platform that you expect to install as a managed node. NOTE In this section, ITO managed nodes are also referred to as ITO agents.
Prerequisites for Installing ITO Agent Software Managed Node Requirements Managed Node Requirements To prepare for the installation of ITO on the managed nodes, make sure that the chosen managed nodes satisfy the following hardware and software requirements. This section is split into the following sections: • Hardware requirements • Software requirements Hardware Requirements This section explains what hardware requirements exist for given agent platforms.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ The ITO agent must be installed on an HPFS partition: FAT partitions are not supported for ITO Agent installation and operation. ❏ Additional swap space: none ❏ Additional RAM: 4MB UNIX Hardware Requirements The UNIX systems you select as managed nodes must meet the following hardware requirements: ❏ 10 MB disk space free (about 20 MB is required during software installation).
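As a quick pre-installation check (an illustration only, not taken from the original manual), you can verify free disk space on a prospective UNIX managed node with standard commands; the file system to check depends on where the agent file tree is installed on that platform:

df -k /opt        # most SVR4-style systems: free space in KB
bdf /opt          # HP-UX equivalent of df
df -k /usr/lpp    # AIX file tree used by the ITO agent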
Prerequisites for Installing ITO Agent Software Managed Node Requirements ITO Supported Agent Platforms and Operating System (OS) Versions Table 1-1 on page 31 lists the specific versions of the various agent operating systems that are supported by ITO.

Table 1-1 Supported ITO-Agent Operating System Versions

Operating System | Platform | Supported OS Versions | Supported Communication Type (a)
AIX | IBM RS/6000, BULL DPX/20 | 4.1, 4.2, 4.3 | DCE
DataCenter/OSx SVR4 | Pyramid | 1.

OS/2 Warp | Intel 486 or higher | 3.0, 4.0 | DCE
SCO OpenServer | Intel 486 or higher | 3.2 (v4.0, v4.2, v5.0.0, v5.0.1, v5.0.2, v5.0.3, v5.0.4, v5.0.5) | NCS
SCO UnixWare | Intel 486 or higher | 2.1 | DCE
SINIX/Reliant | Siemens-Nixdorf | 5.43, 5.44 | NCS/DCE
Solaris | Sun SPARCstation | 2.5, 2.5.1, 2.6, 7 | NCS
Windows NT | Intel 486 or higher | 3.51, 4.
Prerequisites for Installing ITO Agent Software Managed Node Requirements Communication Software ITO can use two mechanisms to communicate between the management server and the client nodes: the Distributed Computing Environment (DCE) and the Network Computing System (NCS). Processes running on the ITO management server communicate using DCE by default; however, processes on the agents can communicate with the management server using either DCE or NCS.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ Operating system. For the supported OS versions, see Table 1-1 on page 31. ❏ DCE RPC: • DCE RPC. • For AIX 4.1, it is recommended you install the libc_r.a patch. It can be found on CD-ROM 5765-393, (titled, AIX V4 Update CD-ROM). To install, login as root and run: smitty update_all. • The following filesets must be installed on the AIX 4.1 or 4.2 DCE RPC managed node: dce.client.core.rte 2.1.0.6 dce.client.core.rte.rpc 2.1.0.0 dce.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ Operating System. For the supported OS versions, see Table 1-1 on page 31. ❏ Basic networking services • OSFCLINET4xx Basic Networking Services ❏ DCE Runtime Kit • DCERTS20x DCE Runtime Services V2.0 NOTE ITO supports DCE versions supplied with the Digital Unix operating system. However, although the Digital Unix operating system includes DCE, DCE has to be installed separately as an optional product.
Prerequisites for Installing ITO Agent Software Managed Node Requirements Software Requirements for HP-UX 11.x Managed Nodes The following software must be installed on HP-UX 11.x managed nodes: ❏ Operating system. For the supported OS versions, see Table 1-1 on page 31. ❏ DCE RPC version 1.7 or higher on HP-UX 11.x managed nodes. (SD-package: DCE-Core.DCE-CORE-RUN) ❏ DCE/9000 Kernel Thread Support (SD-package for HP-UX 11.x DCE-KT-Tools) ❏ Internet Services (SD-package: InternetSrvcs.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ Operating System. For the supported OS versions, see Table 1-1 on page 31. ❏ If only the Multi-User operating environment is installed, then the networking package, WIN-TCP, must also be installed. ❏ NCS Version 1.5.1 (package NckNidl) or StarPRO DCE Executive from NCR UNIX SVR4. If neither NCS nor StarPRO DCE are found on the managed node, ITO installs llbd and lb_admin during the ITO agent software installation.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ NetBasic must be installed on NetWare depot servers. NetBasic runtime version 6.00j (Build 4.127) or higher is required for the NetWare depot server(s) (the systems which are used for the ITO agent software installation). See “Installation Tips for Novell NetWare Managed Nodes” on page 75 for details on how to get and install NetBasic.
Prerequisites for Installing ITO Agent Software Managed Node Requirements TCP/IP (or System View Agent on OS/2 Warp 4.0) includes two SNMP daemons, snmpd and mib_2. Both must be running when you install the agent software. They ensure that the management server is able to determine the node type of the managed node. If you want to use MIB variable monitoring, both daemons must continue to run after the installation. • DCE Runtime 1.0.2 or 2.
Prerequisites for Installing ITO Agent Software Managed Node Requirements Software Requirements for SCO UnixWare Managed Nodes The following software must be installed on SCO UnixWare managed nodes: ❏ Operating System. For the supported OS versions, see Table 1-1 on page 31. ❏ UnixWare Networking Support Utilities: • nsu 2.1 ❏ UnixWare internet utilities software: • inet 2.1 ❏ DCEcore 1.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ On IRIX 5.3, NCS 1.5.1 package netls_eoe.sw or gr_ncs.sw. On IRIX 6.2, NCS 1.5.1 package license_eoe.sw.netls.server. If neither NCS nor DCE are found on the managed node, ITO installs llbd and lb_admin during ITO software installation. ❏ On IRIX 5.3, package eoe1.sw.svr4net with System V compatible networking must be installed. On IRIX 6.2, package eoe.sw.svr4net with System V compatible networking must be installed.
Prerequisites for Installing ITO Agent Software Managed Node Requirements ❏ Operating System. For the supported OS versions, see Table 1-1 on page 31. ❏ NCS version 1.5.1 or DCE RPC. If neither NCS nor DCE are found on the managed node, ITO installs llbd and lb_admin during the ITO agent software installation. ❏ ARPA/Berkeley Services. ❏ The MIB monitoring functionality of ITO requires the snmpd of the HP OpenView platform, or SNMP-based, MIB-I (RFC 1156) or MIB-II (RFC1158) compliant agent software.
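As an illustration only (not part of the original manual), on a UNIX managed node you can check before installation whether an SNMP agent and a local location broker are already running; the exact process names vary slightly between platforms:

ps -ef | grep snmpd     # SNMP agent required for MIB variable monitoring
ps -ef | grep llbd      # NCS local location broker, installed by ITO if missing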
2 Installing ITO Agents on the Managed Nodes 43
Installing ITO Agents on the Managed Nodes This chapter describes how to install the ITO agent software on the various supported managed nodes, and includes numerous tips for different operating systems. The installation procedures assume that you have already installed and configured the database and ITO on the management server, as described in the HP OpenView IT/Operations Installation Guide for the Management Server.
Installing ITO Agents on the Managed Nodes Overview Overview This section contains important information about installing and de-installing ITO agent software on managed nodes with various operating systems. This section includes: ❏ installation tips ❏ steps for installing the ITO agent software on managed nodes ❏ automatic installation or update procedures ❏ automatic de-installation procedures for managed nodes Make sure that the kernel parameters are set correctly on UNIX systems.
Installing ITO Agents on the Managed Nodes Overview

Operating System | Tool
SCO UnixWare | sysadm
SINIX | sysadm
Solaris | admintool

NCR UNIX SVR4 and SGI have no automated tools. Windows NT system parameters cannot be changed. Table 2-2 on page 46 gives values for kernel parameters on HP-UX managed nodes. Other agent platforms generally require similar values.

Table 2-2 Important Kernel Parameters for Managed Nodes

Parameter | Description | Minimum Value
nfile | Maximum number of open files.
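A minimal sketch (not from the original manual) of how you might inspect such a kernel parameter on an HP-UX managed node, assuming kmtune(1M) is available on your HP-UX release; SAM remains the usual supported way to change the value:

kmtune | grep nfile    # list the currently configured value of the nfile parameter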
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes General Installation Tips for Managed Nodes ❏ When possible, install the latest ITO agent software version on all managed nodes. This will enable the latest ITO features to be used on those nodes. ❏ The names bin, conf, distrib, unknown and mgmt_sv may not be used for managed nodes. These names are used internally by ITO, and therefore must not be used as the name of any system.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ Identify managed nodes having more than one IP address, and specify the most appropriate address (for example, the IP address of a fast network connection) in the ITO configuration. Verify that all other IP addresses of that managed node are also known at the management server. Otherwise, the messages from the multiple IP address systems might not be forwarded by ITO.
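As a hedged illustration (the host name below is hypothetical, not from the manual), you can compare what the name service and the local hosts file on the management server return for a multihomed node, to make sure all of its addresses are known:

nslookup mynode.example.com      # addresses returned by the name service
grep mynode /etc/hosts           # addresses registered locally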
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Use a symbolic link. For example, for HP-UX 10.x: ln -s /mt1/OV /opt/OV • Mount a dedicated volume. For example, for AIX: mount /dev/hd4 /usr/lpp/OV Note that for HP-UX systems (versions below 10.00), /etc/update(1M) does not support installation on NFS-mounted file systems.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Installation Tips to be Performed on the Management Server ❏ If you want to stop the configuration and script/program distribution, for example, if the configuration is invalid, clean the /distrib directory. This should only be done in an emergency and only after the ITO management server processes have been stopped.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ The non-default log directory on UNIX systems is erased during de-installation of ITO. Note the following rules about this directory: • Do not use the same directory for more than one managed node; this could be a potential problem in cluster environments, or in cases where the directory is NFS-mounted across several systems. • Do not use the same log directory for ITO and other applications.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes This assumption is valid for all platforms that support NFS operations, regardless of special support for diskless nodes. For example, NCR UNIX does not support diskless configurations but you can make a cluster of NCR workstations that share common ITO agent code.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes If the file system which hosts the /usr/lpp file tree is too small to install ITO Agents, create a symbolic link before installing ITO. For example: if /bigdisk is a local file system with enough free space: mkdir -p /bigdisk/OV ln -s /bigdisk/OV /usr/lpp/OV In a cluster environment, you must check that /bigdisk is also accessible from all cluster clients, and that it is mounted from all client nodes.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Install the Agent on the Managed Node: Use the following instructions to install the ITO AIX agent on an AIX system that will become an ITO managed node: 1. Copy the ITO agent package to a temporary directory on the managed node. On the management server, this agent package is located in: /var/opt/OV/share/databases/OpC/mgd_node/vendor/ibm/\ rs6000/aix/A.05.00/RPC_DCE_[TCP|UDP]/opc_pkg.
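For illustration only: one way to copy the package from the management server, assuming remote-copy access to the managed node is already configured. The node name aixnode01 is hypothetical, the DCE TCP variant of the path is chosen as an example, and the .Z suffix on the package file is an assumption based on the suffix shown for other platforms in this chapter:

rcp /var/opt/OV/share/databases/OpC/mgd_node/vendor/ibm/rs6000/aix/A.05.00/RPC_DCE_TCP/opc_pkg.Z \
    aixnode01:/tmp/opc_pkg.Z     # or transfer the file with ftp in binary mode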
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes NOTE Use the opcactivate command with the -mode option to activate: hacmp cluster server/client for ITO agents on AIX HACMP systems. See also “Installation Prerequisites for AIX HACMP Agents” on page 58 for ITO agents on AIX Cluster-Client systems after the ITO agent software package has been installed on the AIX Cluster Server system.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes NOTE Do not check [Force Update] otherwise the management server will re-install the agent. If the agent is pre-installed on the node, the management server will activate the node, and install the selected components. Note that if the agent software is not pre-installed, this action will install the agent. 4.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Pre-installation tasks • Problems with IP aliases in AIX OS • Installing AIX HACMP agents ITO Agents in the HACMP Environment Each node in an HACMP cluster has its own ITO agent and must be accessible on a fixed IP address, which represents the node in the ITO Node Bank. This IP address must always remain bound to the same node.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Note that the status of the icon representing the node in Node Bank window does not change color immediately when the node in the HACMP cluster goes down: it will change color only after ITO has determined that it cannot contact the control agent on that node. Installation Prerequisites for AIX HACMP Agents The following software versions are supported: • AIX 4.2 / 4.3 (DCE agents) • HACMP 4.2.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Problems with IP Aliases in AIX OS One very important consequence of setting the IP alias on the interface is that HACMP no longer works correctly. This is true for all events that deal with IP addresses, such as acquire service address, acquire takeover address, swap adapter, and so on.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes
if [ $? -ne 0 ]; then
    INTERFACE=`/usr/sbin/cluster/utilities/clgetif -a $BOOT_IP`
fi
if [ "$INTERFACE" != "" ]; then
    # IP has changed, set IP alias again on interface with SERVICE_IP
    /usr/sbin/ifconfig $INTERFACE $ALIAS_IP alias
fi
The ALIAS_IP variable should contain the same IP address that was used for the installation of the ITO agent.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • The installation script checks if the IP address which is used for the ITO installation is tied to the boot, service, or standby interfaces, and issues a warning if this is the case. However, the installation proceeds nonetheless. • If you select automatic start of ITO agents, the file /etc/inittab is also updated so that the clinit entry remains the last one - as is required by HACMP.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Machine Type: DEC Alpha • OS name: Windows NT If SNMP services are not running on the Windows NT node ITO cannot detect the Machine Type and OS Name. In this case, enter the appropriate values and continue with the installation. Manual Installation: DEC Alpha NT Agent For instructions on manually installing the DEC Alpha NT agent, see “Manual Installation: Windows NT Agent” on page 111.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes To add /var/adm/messages, and /usr/adm/lplog to the managed node, add the following lines to the /etc/syslog.conf file: kern.debug /var/adm/messages lpr.debug /usr/adm/lplog To add /var/adm/sialogr to the managed node, enter: touch /var/adm/sialogr Installation Tips for DYNIX/ptx Managed Nodes ❏ The ITO Agent software is installed on the /opt file tree.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes the package is installed once on a depot node in the remote LAN. Subsequent agent installations then get the package from the local depot.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Figure 2-2 Using HP SD-UX Remote Software Depot to Install ITO on HP-UX 10.x and 11.x Managed Nodes (diagram: the ITO management server transfers the ITO agent package manually across the Wide Area Network (WAN) to an SD depot on ITO Node 1 in the Local Area Network (LAN) (1); agent installations on ITO Node 2 through ITO Node N are then triggered remotely from that local depot (2)) Creating a Software Depot on a Remote Node To create an HP-UX 10.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes swcopy -d -s /tmp/opc_pkg -x source_type=tape -x \ enforce_dependencies=false OVOPC-AGT @ /depot1 If the SD depot does not exist, it is created automatically. ❏ To obtain a compressed depot, you must first create a temporary, uncompressed depot.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ready to become an ITO managed node when it is later connected to the network. This may be useful if many workstations are prepared in some central location, or if one wants to avoid the root connection over the network that is necessary for a standard agent installation.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes c. Install the agent on the node: swinstall -x source_type=tape -s\ //opc_pkg OVOPC-AGT If appropriate, install the ANS package on the node, too: swinstall -x source_type=tape -s\ //nsp_pkg ITOAgentNSP NOTE For cluster nodes, use swcluster, instead of swinstall, on the cluster server. d. Examine the node’s logfile /var/adm/sw/swagent.log.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Activate the Node Using the Command Line You can activate the agent on the node over the net (without the GUI and without root access) by using the following command-line steps: 1. After manually installing the agent on the node, enter: opcactivate -cs \ -cn See also Chapter 8, “ITO Language Support,” on page 333 for more information about codesets.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes 4. Use the command /opt/OV/bin/OpC/opcragt -status to verify that the Control, Message, and Action Agents are all running on the managed node. Installation Tips for IRIX Managed Nodes ❏ The ITO agent software is installed on the /opt file tree. If the file system that hosts the /opt file tree is too small for the installation of ITO Agents, create a symbolic link before installing ITO.
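The symbolic-link workaround is the same one the manual shows for other SVR4-style platforms; as a sketch, assuming /bigdisk is a local file system with enough free space:

mkdir -p /bigdisk/OV
ln -s /bigdisk/OV /opt/OV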
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes configured in RESLVCNF.NET.SYS on the managed node) or, if no name server is running, the management server name must be locally registered in HOSTS.NET.SYS. IP address resolution via Network Directory (NSDIR.NET.SYS) or Probe (and Probe Proxy) is not supported.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • If the Add/Modify Node window has been used to select the Automatic Update of System Resource Files option for the managed node, SYSSTART.PUB.SYS is created or updated, (unless it already contains a pre-existing ITO entry). It contains the start sequence for the job stream OPCSTRTJ.BIN.OVOPC, used for starting the Local Location Broker (llbd) and the ITO agents.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes where for example, is Maestro's mstream. If there is no entry for ITO in SYSSTART.PUB.SYS, the automatic software installation will insert an entry for ITO in SYSSTART.PUB.SYS where the major parts look like this: comment ... OperationsCenter OPCSTRTJ.BIN.OVOPC ❏ The executable library, SNMPXL.NET.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Installation Tips for NCR UNIX SVR4 Managed Nodes ❏ The system name uname -s must not be set to any of the names AIX, Solaris, HP-UX, SCO, DYNIX/ptx, OSF/1, Digital UNIX, Reliant UNIX, SINIX, IRIX, Olivetti, or UnixWare. ❏ If the Multi-User version of UNIX is installed, ITO can be installed only after networking package WIN-TCP from NCR UNIX SVR4 is first installed.
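As an informal pre-check (not a step from the original manual), both conditions can be confirmed on the NCR system before starting the installation; the grep pattern is only an example and the package abbreviation may differ on your system:

uname -s                      # must not report one of the reserved OS names listed above
pkginfo | grep -i win-tcp     # verify that the WIN-TCP networking package is installed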
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Installation Tips for Novell NetWare Managed Nodes The process for installing the ITO agent software on Novell NetWare managed nodes differs from the standard installation process used for other platforms; the NetWare agent installation is semi-automated and NetWare-server-based.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Figure 2-3 Installing the ITO Novell NetWare Agent Package ITO Management Server NetWare Depot Server 1. Admin GUI 1st part: - add NetWare managed nodes - run Actions->Install for all managed nodes; select Agent Software (ping only) 6.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes a. Add your Novell NetWare managed nodes to the ITO Node Bank. b. Open the Install / Update ITO Software and Configuration window, and add the Novell NetWare managed nodes where you want to install the ITO agent software. Select [Agent Software] and click on [OK]. This sends the ping command to the nodes. Note that the agent software package is not automatically copied to the NetWare depot server.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • It is recommended that the depot server runs ftp so that the ITO agent package can be easily transferred from the ITO management server to the depot server. • NetBasic, from the HiTecSoft company, must be installed on the NetWare depot server. Note that the NetBasic components bundled with NetWare 4.11 are not sufficient because some .NLMs (such as NETMODUL.NLM) which are required for installation are missing.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes This number does not allow you to use the NetBasic Integrated Developer Environment. You can not develop or compile your own NetBasic script programs. e. After all required NetBasic .NLMs have been successfully installed on the depot server, the Windows 95 or Windows NT 4.0 system is no longer needed. 4. Unzip opc_pkg.z, enter: load unzip sys:/tmp/opc_pkg.Z Note that this assumes that opc_pkg.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Prerequisites for Installing the ITO Agent Software The ITO agent software can be installed using bindery mode or NetWare Directory Services (NDS). It is recommended to use NDS because bindery mode may become obsolete with future releases of Novell NetWare.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Exit Installation immediately exits the procedure. b. Select the Install HP IT/Operation Agent for NetWare 4.x option and respond to the prompts. c. Enter the name of the ITO management server. d. Enter the IP address of the ITO management server. e. Specify whether you want the name and IP address of the management server added to the SYS:/ETC/HOSTS. f. Decide whether you want to use NDS or proceed in bindery mode.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Systems running Novell NetWare 3.x or Novell NetWare 5.x are also listed but cannot be selected. If a Novell NetWare 5.x file server is accidentally selected, the installation procedure reports the NetWare version as 3.x and does not allow selection. The NetWare depot server is listed; note that it can also be an ITO agent for the NetWare server.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes reboot. If the string is found you are notified that in order to run the ITO agent for the NetWare server, TCP/IP must be invoked. If there is no such string the NetWare server may use the configurator, INETCFG.NLM, to set network parameters. Inspect the AUTOEXEC.NCF file for inclusion of the string INITSYS on a separate line to determine if this method is used. If this is the case the file SYS:ETC/NETINFO.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes also checked to make sure that it is running. If it isn’t, all standard locations are checked for the load xconsole command. The configuration file of the primary IO Engine is not searched.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Updates system configuration files The Installation procedure updates the OPCINFO file and writes the ITO start command (OPCAGT.NCF) to the AUTOEXEC.NCF. On NetWare SFT III systems, OPCAGT.NCF is added to SYS:SYSTEM/MSAUTO.NCF. The SYS:/ETC/HOSTS file is updated with the IP address of the ITO management server if you agreed to add the ITO management server to the SYS:/ETC/HOSTS file.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ NetWare Directory Services (NDS) If you use NDS to install the ITO agent software, the installation process creates the file SYS:/OPT/OV/BIN/OPC/INSTALL/NDSINFO on each managed node. This file contains information about the position of the managed node in the NDS directory tree so that the ITO agent .NLMs can log in to NDS when they are started. The ITO default operator opc_op is also inserted.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ Note that PATH cannot be changed during runtime on Novell NetWare managed nodes. All actions, monitors, and commands must be either fully qualified or must reside in PATH. PATH must be set before the ITO agents are started. ❏ Unsupported ITO Agent Functionality Due to specifics of the NetWare platform, a subset of the ITO agent functionality is not supported or is implemented in a slightly different way.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes KILL: Stops all ITO agent processes (equivalent to opcagt -kill) The console user interface is implemented with the standard NWSNUT services so that the standard NetWare console look-and-feel is achieved. Installation Tips for Olivetti UNIX Managed Nodes ❏ The ITO Agent software is installed on the /opt file tree.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Installation Tips for OS/2 Managed Nodes Both standard and manual agent installation are supported on OS/2 managed nodes. Standard OS/2 Agent Installation ❏ During the installation, the installation script checks that sufficient disk space is available on the disk entered in the [Install Onto Drive] field of the Node Advanced Options window in the ITO GUI.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ Note that PATH cannot be changed during runtime on OS/2 managed nodes. All actions, monitors, and commands must be either fully qualified or must reside in PATH. PATH must be set before the ITO agents are started. Manual OS/2 Agent Installation In some situations, it may be desirable to install the OS/2 agent software without using the management server.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes opcinst.cmd /TAPEDIR: /DRIVE: /MGMT_SERVER: See Table 2-3 on page 91 for a list of available command line options or type opcinst.cmd /help for help. • using a response file (a text file that contains default answers): opcinst.cmd See Table 2-3 on page 91 for a list of available response file tokens.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes

Option | Response File Token | Possible Values | Value Type
/UPDATE: | UPDATE_CONFIG | YES, NO (default) | const
/TAPEDIR: | INSTALLATION_TMP_DIR | any | drive:dir
/TAPESIZE: (a) | N/A | any | bytes

a. Used for remote installation only.

Installation Tips for Pyramid DataCenter/OSx Managed Nodes ❏ The ITO Agent software is installed on the /opt file tree.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ Make sure that the entry root is not contained in the /etc/ftpusers file; Otherwise the installation of ITO agents to the managed nodes will fail. Installation Tips for SCO OpenServer Managed Nodes ❏ The ITO agent software is installed on the /opt file tree. An empty /opt file tree is created during installation of the SCO OpenServer operating system. By default, this file tree is positioned on the root file system.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes To add the logfile, edit the file /etc/syslog.conf and add the following line: kern,mark.debug /var/adm/messages To activate your changes, enter: touch /var/adm/messages Then restart the syslog daemon; see the man page syslog(1M) for details. ❏ An entry for the user root must not be present in the file /etc/ftpusers. Otherwise the installation of ITO agents will fail.
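A quick way to check for the problematic entry (an illustration, not a step from the original procedure):

grep root /etc/ftpusers    # if this prints a line, remove the root entry before installing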
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ❏ If you want to configure the Domain Name Server (DNS) on a SINIX managed node, in addition to editing the /etc/resolv.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes mkdir -p /bigdisk/OV ln -s /bigdisk/OV /opt/OV In a cluster environment, you must check that /bigdisk is also accessible from all cluster clients, and that it is also mounted from all client nodes. For example, the local file system /bigdisk on the cluster client must be mounted to the exported file system /bigdisk on the cluster server.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes tar xvf nsp_pkg 3.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Activate the agent on the managed node: 1. After manually installing the agent on the node, enter: /opt/OV/bin/OpC/install/opcactivate \ -cs -cn The agent then attempts to send messages to the management server. For more information about codesets, see Chapter 8, “ITO Language Support,” on page 333.
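As an additional, informal check (not part of the original procedure), you can confirm on the managed node that the agent processes were started after activation:

ps -ef | grep opc    # the control, message, and action agent processes should be listed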
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes showrev -p Installation Tips for Windows NT Systems This section explains how to install the ITO agent package on Windows NT systems. There are four installation procedures that you can use depending on the network configuration as described in Table 2-4 on page 99: NOTE In this manual, a Windows NT installation server is a primary or backup domain controller with the ITO agent package installed.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Use the ftp re-installation (described on page 109) to install or upgrade the NT agent package on:
• a primary or backup domain controller for the second time
• a primary or backup domain controller that does not give administrative rights to the HP ITO account of a domain with an installation server
• a stand alone system
Use the manual installation (described on page 111) to install or upgrade the NT agent package on:
• an NT system that is not yet connected to the network.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes Figure 2-5 Installing the ITO Windows NT Agent Package (diagram: the ITO management server performs ftp installation to a stand-alone Windows NT system and to a primary or backup domain controller in Windows NT Domain 1 and Domain 2; standard installation is then used within each domain; the domains are linked by a TRUST relationship, and one installation path is possible only if Domain 2 gives administrative rights to the HP ITO account in Domain 1) Installation Requirements
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Schedule services must not be disabled. • SNMP services must be running for ITO to automatically identify the node as an NT system. This is helpful, but not absolutely necessary for a successful installation. ❏ Requirements for a Windows NT Installation Server • All Windows NT node requirements as listed above. • Additional four MB of space must be free on an NTFS-formatted local disk that is available to the node.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes ftp Agent Package Installation This procedure uses ftp to install the agent package from the ITO management server to a Windows NT primary or backup domain controller that does not currently have the agent running. This type of installation must be done at least once; it requires ftp services and one manual step on the NT system.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Installation Drive: enter the letter of an NTFS drive with 10 megabytes of disk space for the agent software. If the drive that you specify does not have enough space, or if you leave this field blank, ITO will search the available local drives for an NTFS drive that has enough free space. • Installation Server: leave this field blank.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes 10. Under Target Nodes, Select Nodes in list requiring update, then click [Get Map Selection]; the node name will appear in the window. 11. Under components, select [Agent Software], then click [OK]. The installation will begin. A new shell will open and start the installation script. When prompted for the “as user” password, give the password of the NT system administrator.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes You can also verify the installation by checking the NT services window and looking for the entry HP ITO Agent, which should be running, and the HP ITO installation service, which will not be running. (This service runs only when you want to install the agent on another NT system.) NOTE The next steps must be performed at the ITO management server. 17.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes workstation’s domain. This is recommended because the process of creating an installation server automatically installs the HP ITO account on the domain controller, where it will have the necessary rights throughout the domain. (If the HP ITO account does not have administrative rights throughout the domain, you will have to manually assign them on each workstation where you install the agent.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes • Installation Server: enter the name of a Windows NT domain controller that has been set up as an installation server (and is in the same domain, or has administrative rights for the HP ITO account in this domain). This example uses the system ntserver.com. • If Service Pack 1 or 2 is installed on your Windows NT version 3.51 or 4.0 managed node, change the communication type from DCE RPC (UDP) to DCE RPC (TCP). 6.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes NOTE If you are installing the ITO agent software on a domain controller, do not let ITO create a password for you, but specify your own. You will need this password again when installing on another domain controller. When installing the agent on another domain controller, use the password of the HP ITO account on the domain controller where you first installed the agent software.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes If an installation server is already available, and you want to re-install or upgrade ITO agent software on additional Windows NT nodes, see “Standard Agent Package Installation” on page 106. 1. Check the “Installation Requirements” on page 101. Make sure that your systems meet all the listed requirements. 2. Select Window:NodeBank from any sub-map to display the ITO Node Bank window. 3.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes 8. Under components, select [Agent Software], then click [OK]. The installation will begin. A new shell will open and start the installation script. When prompted for the Administrator password, give the password of the NT system administrator. When prompted for the HP_ITO password you can either specify a password, or simply press Enter and ITO will create a password for you.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes To install the NT agent on an NT PC that will become an ITO managed node: 1. Copy the files listed below from: /var/opt/OV/share/databases/OpC/mgd_node/vendor/ms/\ [intel | alpha]/nt/A.05.00/RPC_DCE_TCP/ on the ITO management server, to the C:\temp directory of the managed node: • opc_pkg.Z (rename this file to opc_pkg.zip) • opc_pre.bat • unzip.exe • unzip.txt • opcsetup.inf • opc_inst.bat • nsp_pkg.
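For illustration only (the server name is hypothetical), the files listed above could be pulled from the management server with the Windows NT command-line ftp client, using binary mode; the nsp_pkg file from the list is fetched in the same way if you need it:

C:\temp> ftp itoserver.example.com
ftp> binary
ftp> cd /var/opt/OV/share/databases/OpC/mgd_node/vendor/ms/intel/nt/A.05.00/RPC_DCE_TCP
ftp> mget opc_pkg.Z opc_pre.bat unzip.exe unzip.txt opcsetup.inf opc_inst.bat
ftp> bye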
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes NOTE If the password line is left in its default state (empty) a random password is generated. If you want to use a specific password, it needs to be encrypted on the ITO management server with the opcpwcrpt tool, which resides in /opt/OV/bin/OpC/install. If you are installing the ITO agent software on a domain controller, do not let ITO create a password for you, but specify your own.
Installing ITO Agents on the Managed Nodes General Installation Tips for Managed Nodes a. /opt/OV/bin/OpC/opcsw -installed b. /opt/OV/bin/OpC/opchbp -start The HP ITO Account The standard installation of the ITO agent package on a Windows NT managed node installs the HP ITO account by default as a member of the administrators group and consequently gives the account all those user rights that are available under Windows NT.
3 File Tree Layouts on the Managed-Node Platforms 115
File Tree Layouts on the Managed-Node Platforms This chapter provides file trees to show the directory structures on all Managed Node platforms supported by ITO. These are as follows: ❏ AIX ❏ DEC Alpha NT ❏ Digital UNIX (OSF/1) ❏ HP-UX 10.x/11.
File Tree Layouts on the Managed-Node Platforms ❏ NFS cluster clients and server systems, where appropriate For detailed information about the directory contents, see the opc(5) page. Note that all man pages reside on the management server.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on AIX Managed Nodes The ITO Software on AIX managed nodes is organized in the following way: Figure 3-1 ITO Software on AIX Managed Nodes (file tree diagram rooted at /usr/lpp/OV and /var/lpp/OV) /usr/lpp/OPC and /lpp/OpC are used by the installp utility for software maintenance Standalone System or NFS Cluster Server on AIX The cluster se
File Tree Layouts on the Managed-Node Platforms File Tree Layout on AIX Managed Nodes ITO Default Operator on AIX The ITO default operator, opc_op, owns /home/opc_op as home directory. By default, the operator uses the Korn Shell (/bin/ksh) and is not allowed to log into the system directly (* entry in /etc/passwd).
File Tree Layouts on the Managed-Node Platforms File Tree Layout on DEC Alpha NT Managed Nodes Figure 3-2 ITO Software on DEC Alpha NT Managed Nodes (file tree diagram rooted at \usr\OV) ITO Default Operator on DEC Alpha NT Managed Nodes Information concerning default ITO operators for DEC Alph
File Tree Layouts on the Managed-Node Platforms File Tree Layout on DEC Alpha NT Manged Nodes System Resources Adapted by ITO on DEC Alpha NT Managed Nodes Information concerning adapted system resources for DEC Alpha NT is the same as the information concerning adapted system resources for Windows NT on intel and is described in “System Resources Adapted by ITO on Windows NT” on page 161.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Digital UNIX Managed Nodes The ITO software on Digital UNIX managed nodes is arranged as follows: Figure 3-3 ITO Software on Digital UNIX Managed Nodes (file tree diagram rooted at /usr/opt/OV and /var/opt/OV) Standalone Systems or NFS Cluster Servers on Digital UNIX In general, standalone systems are
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Digital UNIX Managed Nodes NFS Clients on Digital UNIX Digital UNIX cluster clients are those Digital UNIX systems that have the /usr/opt or /usr file system NFS mounted. Their cluster server is the system to which /usr/opt or /usr is mounted and must also be a system running Digital UNIX.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Digital UNIX Managed Nodes System Resources Adapted by ITO on Digital UNIX ITO makes changes in the following system resource files during installation: ❏ /etc/passwd and /etc/shadow (if present), Protected Password Database (if present) - entry for the default ITO operator ❏ /etc/group - group entry for the default ITO operator ❏ /sbin/init.d/opcagt - ITO startup/shutdown script ❏ /sbin/rc0.d - file K01opcagt created ❏ /sbin/rc2.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on HP-UX 10.x and 11.x Managed Nodes File Tree Layout on HP-UX 10.x and 11.x Managed Nodes The ITO software on HP-UX 10.x and 11.x managed nodes is organized in the following way: Figure 3-4 ITO Software on HP-UX 10.x and 11.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on HP-UX 10.x and 11.x Managed Nodes The file system on the NFS cluster server consists of a private root directory, and one or more shared root directories that are used by the cluster clients. Each of the shared roots contains the part of the operating system that can be shared by the cluster clients. The cluster server exports the file system shown in Figure 3-5 to the cluster clients. NOTE You can configure cluster clients for HP-UX 10.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on HP-UX 10.x and 11.x Managed Nodes The ITO Default Operator on HP-UX 10.x and 11.x The ITO default operator, opc_op, owns /home/opc_op as home directory. By default, the operator uses the Korn Shell (/usr/bin/ksh) and is not allowed to log into the system directly (a * entry is made for the password in /etc/passwd).
File Tree Layouts on the Managed-Node Platforms File Tree Layout on MPE/iX Managed Nodes Figure 3-6 ITO Software on MPE/iX Managed Nodes (diagram of the OVOPC account and its groups) During installation, ITO creates the accounts OVOPC and OVOPR. The group PUB.OVOPC is not used by ITO. ITO Default Operator on MPE/iX The default operator, MGR.OVOPR, on MPE/iX is assigned the dummy group PUB.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on MPE/iX Managed Nodes hostname is truncated after the first dot (.), and the first part of the ARPA hostname becomes the NS node name for the vt3k operation. This mechanism assumes that the truncated name identifies a node in the same NS domain as the management server, since a fully-qualified NS node name is unavailable.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on MPE/iX Managed Nodes Figure 3-7 ARPA to NS Node Name Mapping
#ARPA                    NS node name             Comment
#----------------------------------------------------------------------
hpbbli                   smarty                   #different node names
                                                  #but same domain
hpsgmx18.sgp.hp.com      hpsgmx18.sgp.hpcom       #same node names, but
                                                  #Managed Node belongs to
                                                  #different domain as
                                                  #management server
topaz.sgp.hp.com         nstopaz.mis.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on NCR UNIX SVR4 Managed Nodes Figure 3-8 ITO Software on NCR UNIX SVR4 Stand-alone Systems (file tree diagram rooted at /opt/OV and /var/opt/OV) The directory /var/sadm/pkg/OPC is used by the pkgadd utility for software maintenance.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on NCR UNIX SVR4 Managed Nodes The ITO Default Operator on NCR UNIX SVR4 The ITO default operator, opc_op, owns /home/opc_op as home directory. By default, the operator uses the Bourne Shell (/bin/sh) and is locked until the passwd(1M) command is executed. User opc_op belongs to the group opcgrp.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Novell NetWare Managed Nodes Figure 3-9 ITO Software on Novell NetWare Managed Nodes (file tree diagram rooted at SYS:OPT/OV and SYS:VAR/OPT/OV) During installation, ITO creates the opc_op account which has the same security level as the user ADMIN.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Novell NetWare Managed Nodes

Field | Entry
Description | OPC_OP is a special user with rights equivalent to NetWare system administrator ADMIN
Home Directory | Not set
Login Shell | NetWare deals with login scripts; user OPC_OP does not have any login script assigned

System Resources adapted by ITO on Novell NetWare During agent software installation, ITO modifies the AUTOEXEC.NCF file. ITO agent start up command OPCAGT.NCF is added.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Olivetti UNIX Managed Nodes The ITO software on Olivetti UNIX managed nodes is based on the typical SVR4 platforms as follows: Figure 3-10 ITO Software on Olivetti UNIX Managed Nodes (file tree diagram rooted at /opt/OV and /var/opt/OV) Standalone Systems or NFS Cluster Servers on Olivetti UNIX In ge
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Olivetti UNIX Managed Nodes NFS Cluster Clients on Olivetti UNIX Olivetti UNIX cluster clients are those Olivetti UNIX systems that have the /opt file system NFS mounted. Their cluster server is the system to which /opt is mounted and must also be a system running Olivetti UNIX.
File Tree Layouts on the Managed-Node Platforms File Tree Layout on Olivetti UNIX Managed Nodes System Resources Adapted by ITO on Olivetti UNIX ITO makes changes in the following system resource files during installation: ❏ /etc/passwd and /etc/shadow (if present), Protected Password Database (if present) - entry for the default ITO operator ❏ /etc/group - group entry for the default ITO operator ❏ /etc/init.d/opcagt - ITO startup/shutdown script ❏ /etc/rc0.d - file K09opcagt created ❏ /etc/rc1.
File Tree Layout on OS/2 Managed Nodes

Figure 3-11 ITO Software on OS/2 Managed Nodes (file tree under \opt\OV and \var\opt\OV)

ITO Default Operator on OS/2 Managed Nodes

OS/2 does not support a user concept, so no ITO default operator exists on OS/2 managed nodes.
File Tree Layout on Pyramid DataCenter/OSx Managed Nodes

The ITO software on Pyramid DataCenter/OSx managed nodes is based on the typical SVR4 platforms, as follows:

Figure 3-12 ITO Software on Pyramid DataCenter/OSx Managed Nodes (file tree under /opt/OV and /var/opt/OV)

Standalone Systems or NFS Cl

NFS Cluster Clients on Pyramid DataCenter/OSx

Pyramid DataCenter/OSx cluster clients are those Pyramid DataCenter/OSx systems that have the /opt file system NFS mounted. Their cluster server is the system from which /opt is mounted, and it must also be a system running Pyramid DataCenter/OSx.

Field        Entry
Group-ID     177 or higher
Users        opc_op
Description  ITO default operator group

System Resources Adapted by ITO on Pyramid DataCenter/OSx

ITO makes changes to the following system resource files during installation:
❏ /etc/passwd and /etc/shadow (if present), and the Protected Password Database (if present) - entry for the default ITO operator
❏ /etc/group - group entry for the default ITO operat
File Tree Layout on SCO OpenServer Managed Nodes

The ITO software on SCO OpenServer managed nodes is based on the typical SVR4 platforms, as follows:

Figure 3-13 ITO Software on SCO OpenServer Managed Nodes (file tree under /opt/OV and /var/opt/OV)

Standalone Systems or NFS Cluster Servers on SCO OpenServer

NFS Cluster Clients on SCO OpenServer

SCO OpenServer cluster clients are those SCO OpenServer systems that have the /opt file system NFS mounted. Their cluster server is the system from which /opt is mounted, and it must also be a system running SCO OpenServer.

Field        Entry
Group-ID     77 or higher
Users        opc_op
Description  ITO default operator group

System Resources Adapted by ITO on SCO OpenServer

ITO makes changes to the following system resource files during installation:
❏ /etc/passwd and /etc/shadow (if present), and the Protected Password Database (if present) - entry for the default ITO operator
❏ /etc/group - group entry for the default ITO operator
❏ /etc/init.
File Tree Layout on SCO UnixWare Managed Nodes

The ITO software on SCO UnixWare managed nodes is based on the typical SVR4 platforms, as follows:

Figure 3-14 ITO Software on SCO UnixWare Managed Nodes (file tree under /opt/OV and /var/opt/OV)

Standalone Systems or NFS Cluster Servers on SCO UnixWare

In general, standal

NFS Cluster Clients on SCO UnixWare

SCO UnixWare cluster clients are those SCO UnixWare systems that have the /opt file system NFS mounted. Their cluster server is the system from which /opt is mounted, and it must also be a system running SCO UnixWare.

The ITO Default Operator on SCO UnixWare

The ITO default operator opc_op and the group opcgrp are created if they do not already exist.

System Resources Adapted by ITO on SCO UnixWare

ITO makes changes to the following system resource files during installation:
❏ /etc/passwd and /etc/shadow (if present), and the Protected Password Database (if present) - entry for the default ITO operator
❏ /etc/group - group entry for the default ITO operator
❏ /etc/init.d/opcagt - ITO startup/shutdown script
❏ /etc/rc0.d - file K09opcagt created
❏ /etc/rc1.
File Tree Layout on Sequent DYNIX/ptx Managed Nodes

The ITO software on Sequent DYNIX/ptx managed nodes is based on the typical SVR4 platforms, as follows:

Figure 3-15 ITO Software on Sequent DYNIX/ptx Managed Nodes (file tree under /opt/OV and /var/opt/OV)

Standalone Systems or NFS Cluster Servers on Seq

NFS Cluster Clients on DYNIX/ptx

Sequent DYNIX/ptx cluster clients are those DYNIX/ptx systems that have the /opt file system NFS mounted. Their cluster server is the system from which /opt is mounted, and it must also be a system running Sequent DYNIX/ptx.

System Resources Adapted by ITO on Sequent DYNIX/ptx

ITO makes changes to the following system resource files during installation:
❏ /etc/passwd and /etc/shadow (if present), and the Protected Password Database (if present) - entry for the default ITO operator
❏ /etc/group - group entry for the default ITO operator
❏ /etc/init.d/opcagt - ITO startup/shutdown script
❏ /etc/rc0.d - file K07opcagt created
❏ /etc/rc2.
File Tree Layout for Silicon Graphics IRIX

The file tree used by the ITO managed-node software on IRIX is similar to other SVR4 platforms and is organized in the following way:

Figure 3-16 ITO Software on SGI IRIX Managed Nodes (file tree under /opt/OV and /var/opt/OV)

Standalone Systems or NFS Cluster Servers on SG

mount <cluster server>:/opt /opt

The ITO Default Operator on SGI IRIX

The ITO default operator opc_op and the group opcgrp are created if they do not already exist.

❏ /etc/group - group entry for the default ITO operator
❏ /etc/init.d/opcagt - ITO startup/shutdown script
❏ /etc/rc0.d - file K09opcagt created
❏ /etc/rc2.d - file S89opcagt is created
❏ /etc/exports - on cluster server only; entry for export of the /opt directory
❏ /etc/fstab - on cluster client only; entry to mount the /opt directory
❏ /etc/init.d/grad_nck - NCS startup/shutdown script
❏ /etc/rc0.
File Tree Layout on SINIX Managed Nodes

The ITO software on SINIX managed nodes is based on the typical SVR4 platforms, as follows:

Figure 3-17 ITO Software on SINIX Managed Nodes (file tree under /opt/OV and /var/opt/OV)

Standalone Systems or NFS Cluster Servers on SINIX

In general, standalone systems are treated a

NFS Cluster Clients on SINIX

SINIX cluster clients are those SINIX systems that have the /opt file system NFS mounted. Their cluster server is the system from which /opt is mounted, and it must also be a system running SINIX.

The ITO Default Operator on SINIX

The ITO default operator opc_op and the group opcgrp are created if they do not already exist.

System Resources Adapted by ITO on SINIX

ITO makes changes to the following system resource files during installation:
❏ /etc/passwd and /etc/shadow (if present), and the Protected Password Database (if present) - entry for the default ITO operator
❏ /etc/group - group entry for the default ITO operator
❏ /etc/init.d/opcagt - ITO startup/shutdown script
❏ /etc/rc0.d - file K09opcagt created
❏ /etc/rc1.
File Tree Layout on Solaris Managed Nodes

The ITO software on Solaris managed nodes is organized in the following way:

Figure 3-18 ITO Software on Solaris Managed Nodes (file tree under /opt/OV and /var/opt/OV)

The path /var/sadm/pkg/OPC is used by the pkgadd utility for software maintenance.

NFS Cluster Client on Solaris

In addition to the general rule for determining cluster clients described in the section “Installation Tips for UNIX Managed Nodes” on page 50, there is also one specific rule for Solaris: Solaris cluster clients (both with and without disks) are those Solaris systems that have either a /usr or /opt file system NFS mounted.
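To check in advance whether a particular Solaris system falls under this rule, list its NFS-mounted file systems and look for /usr or /opt. A minimal check using the standard Solaris df(1M) file-system-type option:

   df -F nfs

If /usr or /opt appears in the output, treat the system as an NFS cluster client.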
❏ /etc/rc3.d/S76ncs - file created (if not already present)
❏ /etc/rc0.d/K52ncs - file created (if not already present)
❏ /etc/rc2.
File Tree Layout on Windows NT Managed Nodes

Figure 3-19 ITO Software on Windows NT Managed Nodes (file tree under \usr\OV)

During installation, ITO creates the HP ITO account which has all rights and privileges that are required for t

ITO Default Operator on Windows NT

Table 3-18 ITO User Accounts on Windows NT Managed Nodes

Field               Entry (agent account)                          Entry (operator account)
User Name           HP ITO account                                 opc_op
Encrypted Password  Defined during installation                    Same as HP ITO account (a)
Group               administrator (b) or domain administrator (c)  users or domain users
Description         HP ITO agent account                           HP ITO operator account
Login Shell         None                                           None

a.
4 Software Maintenance on Managed Nodes
Software Maintenance on Managed Nodes This chapter provides important information about installing and de-installing ITO software on managed nodes with various operating systems. The installation, de-installation, and updating of ITO software is referred to as “software maintenance”. This chapter includes: ❏ Installing and upgrading the ITO agent software using the GUI. ❏ De-installation of the ITO agent software using the GUI. ❏ Checking installed agent software packages on the management server.
Overview

ITO software installation, update, and de-installation (software maintenance) uses functionality provided by the ITO administrator GUI and is performed using the inst.sh(1M) script.
Software Maintenance on Managed Nodes Overview Figure 4-1 Adding a Managed Node to the Node Bank Window For detailed information about how to set the managed node attributes, refer to the online help. Select the Automatic (De-)Installation option, and the ITO software is automatically installed onto the managed node when you invoke the installation for this system in the Install/Update ITO Software and Configuration window.
Installing or Updating ITO Software Automatically

To install the ITO agent software on the managed node automatically, use the Install/Update ITO Software and Configuration window and select the Actions:Agents->Install/Update SW & Config… item in the menu bar.
Software Maintenance on Managed Nodes Installing or Updating ITO Software Automatically With the Force Update checkbox unselected (default), only the differences between the previous configuration and the new configuration are distributed to the managed nodes. This reduces the amount of data being transferred, consequently reducing the load on the network. 2. After clicking on the [OK] button, an additional terminal window opens, running the installation script, inst.sh(1M).
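Once inst.sh(1M) has finished, the agent status on the newly installed node can be checked centrally from the management server. A sketch, assuming that the remote-agent tool opcragt(1M) accepts the -status option and a node name (the node name below is a placeholder):

   /opt/OV/bin/OpC/opcragt -status mynode.example.com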
This is the same logfile for both installation and removal operations. Only one record is written for each package.

SCO UnixWare       /tmp/pkgadd.log
Sequent DYNIX/ptx  /tmp/pkgadd.log
SGI IRIX           /tmp/inst.log
SINIX              /tmp/pkgadd.log
Solaris            /tmp/pkgadd.log
Windows NT         c:\temp\inst.
Software Maintenance on Managed Nodes Installing or Updating ITO Software Automatically NOTE Manual activation of the ITO agent software on NFS Cluster Client Nodes is only supported for HP-UX 10.x/11.x, AIX, Solaris, NCR and SINIX managed node with ITO version A.05.00 and higher. In addition, only homogeneous NFS Clusters are supported and the cluster server and cluster client systems must have the same OS. To manually install the ITO agent on an NFS Cluster-Client managed node: 1.
Software Maintenance on Managed Nodes Installing or Updating ITO Software Automatically NOTE For Windows NT managed nodes running Service Pack 1 or 2, the communication type must be changed from DCE UDP to DCE TCP to avoid problems. If you decide to change the communication type, you must update the ITO agent software: 1. Ensure that your managed nodes meet the software requirements described in Chapter 1 of the HP OpenView IT/Operations Administrator’s Reference.
Software Maintenance on Managed Nodes Installing or Updating ITO Software Automatically a. In the ITO Application Bank window, select Actions: Add ITO Application. b. Enter a name in the Application Name field, and enter the following in the Application Call field: /opt/OV/bin/OpC/utils/opcnode -chg_commtype \ comm_type=COMM_DCE_UDP node_list=”$OPC_NODES” You can also choose COMM_DCE_TCP instead of COMM_DCE_UDP. Note, however, that COMM_DCE_UDP is recommended. c.
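The same utility can also be run directly from a shell on the management server rather than through an ITO application. A sketch, with placeholder node names and the DCE TCP type that the Windows NT note above calls for:

   /opt/OV/bin/OpC/utils/opcnode -chg_commtype \
       comm_type=COMM_DCE_TCP node_list="ntnode1.example.com ntnode2.example.com"

As with the application variant, update the ITO agent software on the affected nodes afterwards so that the new communication type takes effect.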
De-installing ITO Software from Managed Nodes

You can choose either of the following methods to de-install ITO software from the managed nodes:
❏ De-install only the ITO software from the managed node.
❏ Remove the node and de-install the ITO software.
ITO software is automatically de-installed from managed nodes if they are configured with the Automatic (De-)Installation option.
DEC Alpha NT            c:\temp\inst.log
Digital UNIX (OSF/1)    /var/adm/smlogs/setld.log
HP-UX 10.x and 11.x     /var/adm/sw/swagent.log and /var/adm/sw/swremove.log
MPE/iX                  No special logfile available.
NCR UNIX SVR4           /tmp/pkgrm.log
Novell NetWare          SYS:DEPOINST.ITO/ITOINST on the NetWare depot server
Olivetti UNIX           /tmp/pkgrm.log
OS/2                    No special logfile available.
Pyramid DataCenter/OSx  /tmp/pkgrm.
Software Maintenance on Managed Nodes De-installing ITO Software from Managed Nodes Manually De-installing ITO Software from AIX Managed Nodes 1. Stop all ITO agents running on the managed node. 2. To de-install the ITO agent software from AIX managed nodes, enter: installp -ug OPC NOTE Manually de-installing the ITO agent software from AIX managed nodes is only supported with ITO version A.05.00 and higher.
Software Maintenance on Managed Nodes De-installing ITO Software from Managed Nodes If the de-installation fails, stop all ITO agents and remove the directories \var\opt and \opt\OV from the managed nodes. Manually edit the startup command STARTUP. CMD to remove ITO-related information. Manually De-installing ITO Software from Solaris, NCR, and SINIX Managed Nodes 1. Stop all ITO agents running on the managed node. 2.
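On these pkgadd-based platforms the ITO agent software is registered as the package OPC (compare /var/sadm/pkg/OPC in Chapter 3), so the de-installation typically looks like the following sketch. Run it as root on the managed node; the opcagt location and the -kill option are assumptions that should be verified against the opcagt(1M) man page, and pkginfo is used first to confirm the registered package name:

   /opt/OV/bin/OpC/opcagt -kill     # stop all ITO agent processes (step 1)
   pkginfo | grep -i opc            # confirm the registered package name
   pkgrm OPC                        # remove the ITO agent package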
Software Maintenance on Managed Nodes De-installing ITO Software from Managed Nodes UNIX (other) /opt/OV/bin/OpC/install/opcdeactivate For detailed information about the opcdeactivate command, see the opcactivate(1m) man page. All man pages reside on the ITO management server. NOTE Manual de-activation of the ITO agent software on NFS Cluster Client Nodes is only supported for HP-UX 10.x/11.x, AIX, Solaris, NCR and SINIX managed node with ITO version A.05.00 and higher.
Managing ITO Agent Software

Frequently, managed nodes (even of the same architecture) do not run the same OS versions.
Software Maintenance on Managed Nodes Managing ITO Agent Software Where: Is one of the following values: • dec/alpha/unix • hp/s700/hp-ux • hp/s700/hp-ux10 • hp/s800/hp-ux • hp/s800/hp-ux10 • hp/pa-risc/hp-ux11 • hp/s900/mpe-ix • ibm/intel/os2 • ibm/rs6000/aix • ms/alpha/nt • ms/intel/nt • ncr/3000/unix • novell/intel/nw • olivetti/intel/unix • pyramid/mips/unix • sco/intel/unix • sco/intel/uw • sequent/intel/dynix • sgi/mips/irix • sni/mips/sinix • sun/sparc/solaris Is
Software Maintenance on Managed Nodes Managing ITO Agent Software NOTE Do not use swremove to de-install an ITO agent package that you no longer require. Running swremove is only useful if you wish to de-install all ITO agent packages of a particular architecture. In addition, remove the managed nodes from the ITO Node Bank before doing a complete de-installation of all managed nodes of a given architecture. Otherwise, the managed nodes can no longer be easily removed using the administrator GUI.
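Before running swremove, it can be helpful to confirm which ITO agent packages are actually installed on the management server. A sketch using SD-UX swlist(1M); the grep pattern is only an assumption and may need adjusting to the product names used by your installation:

   swlist -l product | grep -i ito

Compare the output with the platform directories listed above before deciding whether a complete de-installation of an architecture is really intended.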
Debugging Software (De-)Installation on Managed Nodes

ITO provides facilities for debugging the (de-)installation of the ITO software on the managed nodes. These tools help developers when testing ITO installation scripts for new platforms, and assist users in examining errors that occur during the installation of the ITO agent software.
NOTE The syntax of the file inst_debug.conf is not checked. Be careful when editing this file, because syntax errors will cause the installation process to abort.

To disable debugging, remove the file /var/opt/OV/share/tmp/OpC/mgmt_sv/inst_debug.conf.

For a detailed description of the (de-)installation debug facilities and examples of the file inst_debug.conf, see the man page inst_debug(5M).
5 Configuring ITO
Configuring ITO This chapter describes ITO’s preconfigured elements. It also describes how to distribute the ITO configuration to managed nodes, and how to integrate applications into ITO. In addition to this chapter, you should also read the HP OpenView IT/Operations Concepts Guide, to gain a fuller understanding of the elements and the windows you can use to review or customize these preconfigured elements.
Preconfigured Elements

This section describes all the preconfigured elements provided by ITO, including:
❏ applications
❏ database reports
❏ ITO message interception
❏ ITO users
❏ logfile encapsulation
❏ managed nodes
❏ the message browser
❏ message groups
❏ message ownership
❏ MPE/iX console message interception
❏ monitored objects
❏ SNMP event interception
❏ template groups
❏ templates for external interfaces
Note also the configuration tips in this section.
Configuring ITO Preconfigured Elements Message Groups The Message Group Bank window displays the default Message Groups provided with ITO. For more information on individual message groups, see Table 5-1 on page 186. Table 5-1 ITO Default Message Groups Message Group... Description Backup Messages relating to backup/restore/archiving functionality (for example, fbackup(1), HP OpenView Omniback II, HP OmniStorage, Turbo-Store).
Configuring ITO Preconfigured Elements Message Group... Description Performance Messages related to hardware (CPU, disk, process) and software (for example, HP OpenView PerfView) malfunctions. SNMP Messages generated by SNMP traps. Security Messages related to security violations or attempts to break into a system. You can add, modify, or delete message groups with the Message Group Bank window on the ITO GUI, while working as ITO administrator.
Configuring ITO Preconfigured Elements Table 5-2 Message Severity Levels Severity Level... NOTE is color coded... and means that...
Configuring ITO Preconfigured Elements Figure 5-1 Message Attributes and Values The additional message attributes that appear in the Message Browser headline are shown in Figure 5-1 on page 189 and described in the following list: S Owned/Marked Message State A flag in this column indicates either that a user has taken note (Marked) or ownership (Owned) of a message or that the message is a notification message.
Configuring ITO Preconfigured Elements U Unmatched Message An Unmatched Message does not match any of the filters defined for a message source. Filters are sets of conditions which configure ITO to accept or suppress messages. These messages require your special attention because they can represent problems for which no preconfigured action exists. In general, you should inform the ITO administrator of unmatched messages so that they can improve the corresponding message, or suppress conditions.
Configuring ITO Preconfigured Elements Indicates if annotations exist for this message. You can review annotations for procedures used to resolve similar problems by using the History Browser window. E Escalations Indicates if the message has been escalated to (or from) another ITO server.
Configuring ITO Preconfigured Elements been forced to take charge of a message in order to carry out actions associated with that message. In addition, ITO provides different ways to configure the way message ownership is displayed and enforced. Ownership Display Modes There are two ownership-display modes in ITO: ❏ Status propagation ❏ No Status propagation (Default) If the display mode is set to No Status propagation, a message’s severity color changes when it is owned or marked.
Configuring ITO Preconfigured Elements Enforced Ownership of messages is no longer optional: it is enforced. Informational The concept of ownership is replaced with that of marking/unmarking. A “marked” message indicates that an operator has taken note of a message. In optional mode, the owner of a message has exclusive read-write access to the message: all other users who have this message in their browser have only limited access to it.
Configuring ITO Preconfigured Elements Table 5-3 ITO Default Template Groups Template Group Description Default Default template groups delivered with ITO AIX Templates for AIX agent AIX with HACMP Templates for AIX agents running HACMP DYNIX/ptx Templates for DYNIX/ptx agent Digital UNIX Templates for Digital UNIX agent ECS Agent Event correlation templates for the ITO agent a ECS Management Server Event correlation templates for ITO management server a HP-UX 10.x Templates for HP-UX 10.
Configuring ITO Preconfigured Elements Template Group Description SCO OpenServer Templates for SCO OpenServer agent SCO UnixWare Templates for SCO UnixWare agent SINIX 5.43 Templates for SINIX 5.43 or earlier agent SINIX 5.44 Templates for SINIX 5.44 or later agent SMS (Windows NT Templates for Windows NT Systems Management Server Solaris Templates for Solaris agent Windows NT Templates for Windows NT agent a. See Table 5-12 on page 236 for more information on supported platforms for ECS.
Configuring ITO Preconfigured Elements Enter your user name and password in the User Login dialog box which subsequently appears. See Table 5-4, “ITO User Names and Passwords” for a list of default user names and passwords for all preconfigured users.
Configuring ITO Preconfigured Elements The ITO Administrators ITO supports only one ITO administrator, whose responsibility it is to set up and maintain the ITO software: multiple template administrators may be configured using the Add User window to manage message-source templates. The ITO administrator’s login name, opc_adm, cannot be modified. Template administrators are set up by the ITO administrator in the GUI: their administrative responsibility is limited to template management.
Configuring ITO Preconfigured Elements Message Group opc_op netop itop OpC ✓ ✓ OS ✓ ✓ Output ✓ ✓ Performance ✓ ✓ SNMP ✓ Security ✓ ✓ ✓ ✓ It is important to remember that although the various operators may have the same message group icon in their respective Message Groups window, the messages each operator receives and the nodes those messages come from are not necessarily the same: the responsibility matrix chosen by the administrator for a given operator determines which node group
Configuring ITO Preconfigured Elements Application Groups opc_op netop itop ✓ ✓ SNMP Data Tools ✓ UN*X Tools ✓ The applications and application groups assigned by default to the ITO users reflect the responsibility given to them by the administrator. Table 5-7 on page 198 and Table 5-8 on page 199 show you at a glance which applications and applications groups are assigned by default to each user.
Configuring ITO Preconfigured Elements Applications opc_op netop Telnet (xterm) ✓ Test IP ✓ Virtual Terminal ✓ itop ✓ UNIX Access to the Managed Node for ITO Users By default, the UNIX user cannot log into the managed node directly; this is the result of an asterisk (*) in the password field of /etc/passwd.
Configuring ITO Preconfigured Elements Applications ITO provides the following applications and application groups in the administrator’s default Application Bank window: Table 5-9 Administrator’s Applications and Application Groups Name Application Broadcast ✓ ITO Status ✓ Application Group Jovw ✓ MPE Tools ✓ Net Activity ✓ Net Config ✓ Net Diag ✓ NetWare Config ✓ NetWare Performance ✓ NetWare Tools ✓ NNM Tools ✓ NT Tools ✓ OS/2 Tools ✓ OV Services ✓ Performance ✓ Physica
Configuring ITO Preconfigured Elements Name Application Application Group Tools ✓ UN*X Tools ✓ Virtual Terminal ✓ Broadcast Broadcast is an ITO application that allows you to issue the same command on multiple systems in parallel. ❏ UNIX: Default user: opc_op. Default password: none required, because application is started via the ITO action agent. NOTE If the default user has been changed by the operator, you must supply a password. ❏ MPE/iX: Default user: MGR.OVOPR.
Configuring ITO Preconfigured Elements Disk Space ITO shows the current disk usage: ❏ UNIX: Command issued: opcdf (This is a script calling bdf on HP-UX, and df on Solaris, AIX, NCR UNIX SVR4, SGI IRIX, SCO OpenServer, SCO UnixWare, Digital UNIX (OSF/1), DYNIX/ptx, Olivetti UNIX, Pyramid DataCenter/OSx, and SINIX/Reliant.) Default user: opc_op. NOTE If the default user has been changed by the operator, you must supply a password. ❏ MPE/iX: Command issued: discfree d Default user: MGR.OVOPR.
Configuring ITO Preconfigured Elements NOTE If the default user has been changed by the operator, you must supply a password. Jovw Applications This group contains the following applications: ❏ Highlight in IP Map Starts jovw with the submap of the selected node. ❏ Jovw Starts jovw to get network view. ❏ OVlaunch With the ovlaunch command you can start the JMib Browser and Jovw. MIB Browser This is xnmbrowser, the standard HP OpenView MIB Browser.
Configuring ITO Preconfigured Elements NOTE OV Services and OV Applications are always started as user opc_op. PerfView Double-clicking the Performance symbol in the Application Bank window displays the following underlying symbols: ❏ Start Glance ❏ Start PerfView Physical Terminal The script defined as the Physical Terminal command in the Managed Node Configuration window is called when starting the physical terminal application. ❏ UNIX: Default user: root. Default password: none configured.
Configuring ITO Preconfigured Elements NOTE If the default user has been changed by the operator, you must supply a password. ❏ MPE/iX: Command issued: listspf Default user: MGR.OVOPR. Default password: none required, because application is started via the ITO action agent. NOTE If the default user has been changed by the operator, you must supply a password. ❏ Windows NT Print status is unavailable for Windows NT managed nodes.
Configuring ITO Preconfigured Elements Command issued: itodiag.exe /processes Default user: HP ITO account.
Configuring ITO Preconfigured Elements Start in window (input/output) System Management Interface Tool (SMIT) (AIX) ITO can start the SMIT (System Management Interface Tool) Xuser interface on AIX systems. Command issued: smit Default user: root (user must be root!) Default password: none required, because application is started via the ITO action agent. NOTE If the default user has been changed by the operator, you must supply a password.
Configuring ITO Preconfigured Elements NOTE IBM OS/2 telnet does not require a user name, only the password associated with a given user name. To use virtual terminal, click: Customized startup, and enter the password along with a dummy user name. Refer to “Virtual Terminal PC” on page 225 for information about a Virtual Terminal on a Windows NT managed node.
Configuring ITO Preconfigured Elements memory Returns the following memory information: • Total paging-file size (NT swap file) • Available paging-file • physical location of the page file and its limits (minimum, maximum) network Returns network information. drives Returns the information listed below for each drive: DRIVE Returns current drive letter. NAME Returns any name that is assigned to that drive. TYPE Returns one of these four types of drive: REMOVABLE (i.e., a floppy drive) REMOTE (i.e.
Configuring ITO Preconfigured Elements • Priority (higher number -> higher priority) and other information. cpuload Returns CPU load information for each processor on the system. Processor time Returns the percentage of elapsed time that a processor is busy executing a non-idle thread. This can be regarded as the fraction of the time spent doing useful work. Each processor is assigned an idle thread in the idle process which consumes those unproductive processor cycles not used by any other threads.
Configuring ITO Preconfigured Elements Description of Values Returned: Refer to the User Configurable Parameters for this application. ITO Install Log This application returns the contents of the ITO installation log from the selected Windows NT node. Default: cmd.exe /c “type c:\temp\inst.log” User Configurable Parameters: None. Installed Software This application returns the names of the software that has been entered in the registry on the selected Windows NT node.
Configuring ITO Preconfigured Elements NOTE For a full description of the NT registry refer to the Windows NT documentation. Description of Values Returned: Refer to the User Configurable Parameters for this application, and to the Windows NT documentation. Job Status This application returns a list of the scheduled jobs entered by the at function. If the schedule service has not been started, the message “The service has not been started” will be returned.
Configuring ITO Preconfigured Elements Opens The number of open resources associated with the connection. Idle time Time since this connection was last used. Local Users This application prints the name of the user who is locally logged onto the selected Windows NT node. If you need more information about the users and sessions, use the Show Users application. Default: itouser.exe /local User Configurable Parameters: See “Show Users” on page 223.
Configuring ITO Preconfigured Elements PerfMon Objs This application returns all of the performance objects that are defined on the selected Windows NT node. A non-English NT installation will return the objects in both the local language and the default language (US English). This application is used mostly by the administrator to make the configuration of threshold monitors on Windows NT systems easier. Default: opcprfls.
Configuring ITO Preconfigured Elements Process Kill This application kills all processes that are running under the configured name on the Selected Windows NT node. If the user does not have the rights to kill the process, an error will be returned. Default: itokill.exe User Configurable Parameters: NOTE /pid Kill process with id /name Kill all processes with name /f Forced kill without notification. /l List all processes.
Configuring ITO Preconfigured Elements /r Automatic reboot after shutdown. If this option is not set, the system will only shutdown, but can only be restarted manually. /f Force system shutdown. Processes are not allowed to delay the shutdown for local user interaction (e.g., to ask if data should be saved). Without this option, the shutdown might not occur because of processes running on the system. /w Pop up a notification window.This allows the local user to cancel the shutdown process.
Configuring ITO Preconfigured Elements To scan registry for pattern: /scan /initkey lm|cu|cr|us /key [/view] To enumerate a registry tree (thereby printing out registry keys to the set depth: emum uses a config file that verifies keys that should not be processed): /enum /initkey lm|cu|cr|us /key [/view] To execute a registration script: /file /initkey lm|cu|cr|us /initkey lm|cu|cr|us Define initial registry key: lm = KEY_LOCAL_MACHINE cu = KEY_CURRENT_USER cr =
Configuring ITO Preconfigured Elements The configuration file name is itoreg.cfg. Example of exclusion of specific registry keys used for the display of the installed software: Exclusions = { Classes; Program Groups; Secure; Windows 3.1 Migration Status; Description; } Server Config This application displays settings for the Server service for the selected Windows NT node. Default: net.exe config server User Configurable Parameters: For a full description of net.
Configuring ITO Preconfigured Elements Server Stats This application displays in-depth statistics about the Server service for the selected Windows NT node. Default: net.exe statistics server User Configurable Parameters: For a full description of net.exe, refer to the Windows NT documentation. Description of Values Returned: For a full description of net.exe refer to the Windows NT documentation. Shares This application lists the external connections that are available on the selected Windows NT node.
Configuring ITO Preconfigured Elements Show Drivers This application lists all drivers that are present on the selected Windows NT node. Default: itomserv.exe /list d User Configurable Parameters: see “Show Services” on page 221 Description of Values Returned: NAME True name of the service. If you wish to perform actions on the service, this is the name that should be used. DISPLAY Description of the service, this is the name that is normally seen when working with the control panel.
Configuring ITO Preconfigured Elements /list s | d | a Print a list of installed services: s List all NT system services. d List all NT device drivers. a List all installed services.
Configuring ITO Preconfigured Elements Show Users This application displays information about local users and sessions on the selected Windows NT Node. Default: itouser.exe /u User Configurable Parameters: /u Returns user information for the system. This includes the name of the current user, the domain this user is logged into, and the server that validated the log-on. /s Returns full session information for the system.
Configuring ITO Preconfigured Elements User Configurable Parameters: see “Show Services” on page 221 TCP/IP Status This application displays protocol statistics and current active TCP/IPnetwork connections for the selected Windows NT node Default: netstat.exe User Configurable Parameters: Refer to the Windows NT documentation. Description of Values Returned: Proto The protocol that is used for the connection. Local Address The local machine name and port number.
Configuring ITO Preconfigured Elements Network The type of network that is providing the connection, (e.g., Microsoft Windows Network, or 3rd party NFS software). Virtual Terminal PC This application opens a terminal with command-line capabilities to the target Windows NT system. All output is redirected to the Virtual Terminal on the management server. Default: opcvterm.
Configuring ITO Preconfigured Elements memory usage, adapters and network interfaces, disks and disk controllers, volumes, queues, users, connections, open files, and installed software. For print servers, NMA 2.1 or later provides additional queue information that is not available for servers running the older version of NMA. NMA 2.
Configuring ITO Preconfigured Elements NMA monitoring is enabled by configuring the NMA configuration files NWTREND.INI and TRAPTARG.CFG on the NetWare server. Configuration of these files is not part of the ITO configuration and distribution framework. In addition to the monitors provided by NMA, ITO users can also create their own ITO templates to monitor any integer MIB variables supported by NMA. This allows ITO users to monitor NetWare server variables not monitored internally by the NMA.
Configuring ITO Preconfigured Elements • File KWrites • Free Redir Area • KPackets Recvd #min • KPackets Sent #min • Memory Monitor • Packets Recvd #min • Packets Sent #min • Queue Wait Time • Ready Queue Jobs • Ready Jobs (avg. KB) • Total Packets Recvd • Total Packets Sent • Trend Graph • Volume Free Space Applications from this bank execute as user root on the server and make SNMP GET calls to collect performance data from the NetWare server.
Configuring ITO Preconfigured Elements Default: itodown.ncf Bound Protocols. Lists all the protocols bound to each network board in a server. Default: protocls The number of packets sent and received over each protocol is also listed. By viewing the Bound Protocols object group, you can see which protocols have the most traffic. Cold Boot the NetWare Server (NCF). Stops and restarts the NetWare server. This is done by removing DOS before exiting: Default: itoreset.
Configuring ITO Preconfigured Elements Default: showfile Please note that these applications must be started via the customized-startup application so that additional parameters such as the name of an NLM can be entered. Installed Software (NW). Displays those products that have been installed on the server using PINSTALL: Default: instlsw PINSTALL is a product from Novell used to install software packages such as NMA on NetWare Servers. Load/Unload an arbitrary NLM.
Configuring ITO Preconfigured Elements NetWare Agent Actions. The ITO NetWare agent includes some preconfigured actions. Most of the preconfigured actions are located in the file VENDOR.NLM in the vendor file tree. This is different to the approach usually adopted on Unix-like platforms and on NT, where each action is stored in a separate script or is executable. However, calling conventions for NMA preconfigured actions are the same as for Unix-like platforms.
Configuring ITO Preconfigured Elements Print Server. Displays information about printers and queues attached to print servers: Default: presvinfo Running Software*. Displays currently running NLMs and their memory usage: Default: runsw Queues. Monitors queues, jobs in the queues, and servers attached to the queues: Default: quesinfo Set Parameters*.
Configuring ITO Preconfigured Elements This application requires only the remote console password (which may be different from the opc_op password). For NetWare SFT III servers, add another XCONSOLE application which calls the primary IO Engine rather than the MS Engine as in the default XCONSOLE application. NOTE The user name for the Xconsole application is xconsole.
Configuring ITO Preconfigured Elements Application Command Description List running processes opcps.cmd Displays the status of processes and their threads running on OS/2 managed node. Uses OS/2 native utility PSTAT.EXE. List mounted drives opcdrive.cmd Displays drives (and types) mounted on an OS/2 managed node. Display Free Space opcfree.cmd Displays free space, as well as percentage of utilization, of both local and network drives mounted on an OS/2 managed node.
Configuring ITO Preconfigured Elements The default configuration for loading and unloading DLLs can be changed by adding the following parameters to the \opt\OV\bin\OpC\install\opcinfo file on the OS/2 managed node: • OPC_OS2_MAX_NBR_LOADED_DLLS Specifies the maximum number of DLL that can be loaded simultaneously. The default value is 10 and should be sufficient for most installations. • OPC_OS2_EXTERN_DLL_TIMEOUT Specifies the timeout in seconds after which an unused DLL is unloaded.
Configuring ITO Preconfigured Elements Table 5-12 ITO Event-correlation Runtime: Supported Platforms ITO Management Server ITO Agent HP-UX 10.x ✓ ✓ HP-UX 11.x ✓ ✓ Platform Solaris: 2.51, 2.6, 7 ✓ Windows NT: 3.51, 4.0 ✓ Logfile Encapsulation For detailed information about encapsulated logfiles, refer to the appropriate template in the ITO GUI. Note that the templates are configured to collect information from logfiles that are produced by standard installations.
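A minimal sketch of the corresponding opcinfo entries; the values shown are purely illustrative (in particular, no default for the timeout is quoted above), and the defaults should normally be left unchanged unless DLL loading or unloading causes problems:

   OPC_OS2_MAX_NBR_LOADED_DLLS 15
   OPC_OS2_EXTERN_DLL_TIMEOUT 120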
Configuring ITO Preconfigured Elements Table 5-14 Encapsulated Logfiles on AIX HACMP Managed Nodes Logfile Description /var/adm/cluster.
Configuring ITO Preconfigured Elements Table 5-16 Encapsulated Logfiles on HP-UX 10.x Managed Nodes Logfile Description Template Name /var/adm/sulog su(1); Switch user logfile Su (10.x HP-UX) /var/adm/cron/log cron(1M); Clock daemon logfile Cron (10.x HP-UX) /var/adm/syslog /syslog.log syslogd(1M); Syslog daemon logfile Syslog (10.x HP-UX) /etc/rc.log Messages during system boot up Boot (10.x HP-UX) /var/adm/btmp (binary format) History of bad login attempts Bad Logs (10.
Configuring ITO Preconfigured Elements Table 5-17 Encapsulated Logfiles on NCR UNIX SVR4 Managed Nodes Logfile Description Template Name /var/adm/loginlog History of NCR UNIX SVR4 failed logins Bad Logs (NCR UNIX SVR4) /var/cron/log Cron logfile Cron (NCR UNIX SVR4) /etc/.
Configuring ITO Preconfigured Elements Table 5-19 Encapsulated Logfiles on Pyramid DataCenter/OSx Managed Nodes Logfile Description Template Name /var/cron/log Cron logfile Cron (PYRAMID) /etc/.
Configuring ITO Preconfigured Elements Table 5-21 Encapsulated Logfiles on SCO UnixWare Managed Nodes Logfile Description Template Name /var/cron/log Cron logfile Cron (UnixWare) /var/adm/messagesa OS messages OS Msgs (UnixWare) /var/adm/sulog Switch user logfile Su (UnixWare) /var/adm/wtmpx History of logins Logs (UnixWare) /var/lp/logs/lpsched Printer services Logfile Lp Serv (UnixWare) /var/lp/logs/request s Printer Requests Logfile Lp Req (UnixWare) a.
Configuring ITO Preconfigured Elements Logfile Description Template Name /usr/spool/adm/syslog Syslog daemon logfile Syslog (DYNIX/ptx) /usr/spool/lp/logs/lps ched Printer services logfile Lp Serv (DYNIX/ptx) /usr/spool/lp/remotelp Remote printer services log Rlp Serv (DYNIX/ptx) /usr/spool/lp/logs/req uests Printer requests logfile Lp Req (DYNIX/ptx) Table 5-24 Encapsulated Logfiles on Siemens Nixdorf SINIX/Reliant Managed Nodes Logfile Description Template Name /var/cron/log a Cron l
Configuring ITO Preconfigured Elements Logfile Description Template Name /var/adm/sol_sulog Switch user logfile Su (Solaris) /var/adm/wtmpx History of logins Logins (Solaris) /var/opt/OV/tmp/OpC/ dmesg.
Configuring ITO Preconfigured Elements ❏ HP-UX 10.x and 11.x ❏ Novell NetWare 4.1, 4.11 with NMA 2.1 ❏ Solaris 2.5 and above ❏ Windows NT 3.51 and 4.0 The following kinds of traps can be intercepted: ❏ Well-defined traps, such as system coldstart, network interface up/down, and so forth. ❏ HP OpenView internal traps, for example, those originating from netmon.
Configuring ITO Preconfigured Elements Event Interception on Novell NetWare Managed Nodes There are two preconfigured templates for Novell NetWare: ❏ NetWare NMA 2.1 Threshold Traps ❏ NetWare NMA 2.1 Traps NetWare NMA 2.1 threshold traps can be used to filter traps originating from the NetWare NMA when one of the 25 NMA thresholds is exceeded. NetWare NMA 2.1 traps template filters the 379 traps that can be generated by the NMA module when an important event on the NetWare server occurs.
Configuring ITO Preconfigured Elements For details about the MPE/iX console messages which are intercepted, inspect the MPE/iX console template MPE Cons Msgs in the Message Source Templates window.
Configuring ITO Preconfigured Elements Mapping NMEV Markers Messages from the MPE operating system might contain so-called Node Management Event (NMEV) markers. ITO uses these markers to map MPE/iX console messages to the severity, message group, application, and object fields for ITO messages. NMEV markers have the format NMEV#pcc@aaa, where: p MPE/iX Message Severity mapped to ITO severity; if it is not in the range of 0 to 3, it is an invalid marker and the pattern is treated as normal text.
Configuring ITO Preconfigured Elements MPE/iX Application ID 248 ITO Message Group Application/OS Subsystem 195 Network Network-OSI 196 Network Network-NS 198 Network Network-SNA 200 Output Ciper Devices 206 OS I/O Services 211 Output Native Mode Spooler 212 Output Page Printer 213 Output Device Manager 214 Storage Printer,Tape,Spool 215 Storage Software Resiliency 216 OS Threshold Mgr 217 Storage Store/Restore 218 Job Jobs/Sessions 220 OS Process Manager 221
MPE/iX Application ID   ITO Message Group   Application/OS Subsystem
195                     Network             Network-OSI
196                     Network             Network-NS
198                     Network             Network-SNA
200                     Output              Ciper Devices
206                     OS                  I/O Services
211                     Output              Native Mode Spooler
212                     Output              Page Printer
213                     Output              Device Manager
214                     Storage             Printer,Tape,Spool
215                     Storage             Software Resiliency
216                     OS                  Threshold Mgr
217                     Storage             Store/Restore
218                     Job                 Jobs/Sessions
220                     OS                  Process Manager
221
231                     OS                  System & Error Mgmt
232                     OS                  Label Management
233                     Storage             Magneto-Optic Lib
234                     DTC                 Terminal I/O
235                     DTC                 DCC Surrogate
236                     Storage             Labeled Tape
237                     Security            MPE/iX Security
238                     OS                  Native Language
239                     Hardware            UPS Monitoring
310                     Misc                Console Event

For example, the marker NMEV#200@214 would generate a message with the severity Warning, in the message group Storage, con
Configuring ITO Preconfigured Elements /opt/OV/bin/OpC/opcragt -start Monitored Objects Table 5-30 Object Thresholds on the Management Server Object Description Threshold Polling Interval disk_util Monitors disk space utilization on the root disk 90% 10m distrib_mon Monitors the software distribution process 20% 10m mondbfile Monitors free space on disk, and the remaining space available for Oracle autoextend datafiles 0% 10m proc_util Monitors process table utilization 75% 5m swap_ut
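For example, reusing the marker NMEV#200@214 discussed above, an operator or a job could raise a Warning message in the Storage message group from the MPE/iX console with a TELLOP command such as the following; the message text itself is purely illustrative:

   TELLOP NMEV#200@214 Tape device for nightly store needs attention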
/opt/OV/bin/OpC/opcragt -start

Monitored Objects

Table 5-30 Object Thresholds on the Management Server

Object       Description                                            Threshold  Polling Interval
disk_util    Monitors disk space utilization on the root disk       90%        10m
distrib_mon  Monitors the software distribution process             20%        10m
mondbfile    Monitors free space on disk, and the remaining space
             available for Oracle autoextend datafiles              0%         10m
proc_util    Monitors process table utilization                     75%        5m
swap_ut

Table 5-31 Object Thresholds on the Managed Nodes

Object     Description                                                Threshold  Polling Interval (mins)
cpu_util   Monitors CPU utilization: requires the sar program         95% (a)    2 (b)
disk_util  Monitors disk space utilization on the root disk           90%        10
Inetd      Number of executing instances of inetd (Internet Daemon)   0.
Configuring ITO Preconfigured Elements Object Description Threshold Polling Interval (mins) snmpd_mon_ext External monitor for Snmpd 0.5 schedule action dependent mib2_mon_ext External monitor for Mib_2 0.5 schedule action dependent Multiple external monitorsc Scheduled action that checks which processes are running N/A 5m a. Requires TME NetFinity; see “Software Requirements for OS/2 Managed Nodes” on page 38. b. Used with the scheduled action Multiple external monitors. c.
Configuring ITO Preconfigured Elements Figure 5-2 NT Performance Monitor Syntax NTPerfMon\\LogicalDisk\\% Free Space\\0\\C: Instance Parent Instance Counter Object The language for the command may be either in English, or in the local language defined for the Windows NT system where the template will be used. English should be used if the template is intended for use in more than one system with different languages.
Configuring ITO Preconfigured Elements ❏ A parent instance may or may not exist. If there is no parent instance, simply omit it from the syntax. If there were no parent instance for the example in Figure 5-2 on page 255, the line would look like this: NTPerfMon\\LogicalDisk\\% Free Space\\C: ITO will attempt to locate the objects when the agent is started, or when a new template is assigned to the node. If ITO cannot immediately locate the object, it will wait for two minutes and then search again.
Configuring ITO Preconfigured Elements Table 5-34 Attribute IDs of TME NetFinity MIB Variables Attribute Name Attribute ID Value CPU Utilization 2872344980 Percent Drive C: Space Used 1663545058 Megabytes Used Drive D: Space Used 1663545059 Megabytes Used Drive C: Space Remaining 1663545570 Megabytes Free Drive D: Space Remaining 1663545571 Megabytes Free IP Packets Sent 1314150980 Packets/Sec IP Packets Received with Errors 1314150981 Packets/Sec Locked Memory 1653400672 Megabyte
Configuring ITO Preconfigured Elements Attribute Name Attribute ID Value TCP/IP Interface 0 - Bytes Received 1314140230 Bytes/Sec Thread Count 2872344982 Threads UDP Datagrams Sent 1314150977 Packets/Sec UDP Datagrams Received 1314150978 Packets/Sec Calculating the Value of a MIB Variable MIB variables have the following format and can be calculated by replacing the angle brackets with the desired value: .2.5.11.1.10.1.3.1..6.
Configuring ITO Preconfigured Elements Monitoring MIB Objects from other Communities MIB objects can also be monitored from communities other than public. To do this, add the following line to the opcinfo file on the managed node (see Table 10-3 on page 399 for the location of the opcinfo file on all platforms): SNMP_COMMUNITY where is the community for which the snmpd is configured. If SNMP_COMMUNITY is not set, the default community public is used.
Configuring ITO Preconfigured Elements General Configuration Tips Regarding File Names If you provide actions/cmds/monitor command files for MPE/iX managed nodes on the management server in: /var/opt/OV/share/databases/OpC/mgd_node/\ customer/hp/s900/mpe-ix make sure that the file names are not longer than 8 characters. The characters underscore ( _ ) and dash ( - ) are not allowed. MPE/iX does not distinguish between upper and lower case letters. Only ASCII files are supported.
Configuring ITO Database Reports Database Reports ITO provides preconfigured reports for the administrator and for operators. In addition, customized reports can be created using the report writer supplied with the installed database or any other report-writing tool. The reports may be: • displayed in a window • saved to a file • printed. You may define the printer using the X resource, OpC.
Database Reports

ITO provides preconfigured reports for the administrator and for operators. In addition, customized reports can be created using the report writer supplied with the installed database or any other report-writing tool. The reports may be:
• displayed in a window
• saved to a file
• printed.
You may define the printer using the X resource, OpC.
Configuring ITO Database Reports Report Name Description Operator Overview Short description of all configured operators, including real and logon names, role, rights and responsibilities. Operator Report Detailed report on a selected operator: includes responsibility matrix (node and message groups), available applications, and assigned user profiles.
Configuring ITO Database Reports If no absolute path is specified, the output of all ITO administrator reports is saved by default in the directory of the Unix user that started the ITO administrator session. This directory is defined by $OPC_HOME, if set, $HOME, or /tmp in that order. All files that are created when the administrator saves report output are owned by the administrator’s Unix user, which may but need not be root.
Configuring ITO Database Reports Report Name Description Sel. History Details Detailed report on selected history (acknowledged) Messages Sel. Pending Messages Brief report on selected pending messages Sel. Pending Details Detailed report on selected pending messages a. For more information about the logfiles, see the section “On The ITO Management Server” on page 460. You can define additional reports by customizing the file: /etc/opt/OV/share/conf/OpC/mgmt_sv/reports//\ oper.
Configuring ITO Database Reports opc_report_role, which is a kind of database user profile that may also be used in cases where it is necessary to allow additional database users access to the database in order to create reports using information in the ITO database tables. SQL*Net requires a listener process running on the database node in order to accept net connections. The listener process accepts connection requests from any legal database user.
Configuring ITO Flexible-management Configuration Flexible-management Configuration This section describes the conventions that need to be adhered to when setting up flexible management using the example templates provided in ITO.
Flexible-management Configuration

This section describes the conventions that need to be adhered to when setting up flexible management using the example templates provided in ITO.
Configuring ITO Flexible-management Configuration Template Name Description hierarchy.agt Defines the responsible managers for hierarchical management responsibility switching for all nodes. This template defines two management servers M1 and MC where M1 is configured as the primary manager for all nodes, and MC is configured as an action-allowed manager for all nodes. hierarchy.sv Defines the responsible managers for hierarchical management responsibility switching for regional management servers.
Configuring ITO Flexible-management Configuration A secondary ITO manager of an agent. This management server has permission to take over responsibility and become the primary ITO manager for an agent. • SECONDARYMANAGER • NODE The node name of the SECONDARYMANAGER. • DESCRIPTION A string containing the description of the SECONDARYMANAGER.
Configuring ITO Flexible-management Configuration This is also used to escalate messages from one manager to another. • MSGTARGETMANAGER Management server to which you forward a message. NOTE Always specify the IP address of the target management server as 0.0.0.0. The real IP address is then resolved by the domain name service (DNS). • TIMETEMPLATE The name of the corresponding time template. You can use the variable $OPC_ALWAYS if the time condition is always true.
Configuring ITO Flexible-management Configuration messages are sent to the management server name stored in the primmgr file. If the primmgr file does not exist, messages are sent according to the opcsvinfo file. • DESCRIPTION A string describing the message target rule condition. • SEVERITY A severity level from: Unknown, Normal, Warning, Minor, Major, Critical. • NODELIST A list of nodes. • NODE A node can be specified in different ways, for example: NODE IP 0.0.0.
Configuring ITO Flexible-management Configuration • MSGOPERATION Three types are possible, see Table 5-39 on page 277: • Suppress • Log-only • Inservice Template Syntax You can use the syntax described in the following sections as a basis for configuring flexible management features (for example, the switching of responsibility between managers) in the template files provided.
Configuring ITO Flexible-management Configuration OBJECT | MSGTYPE | MSGCONDTYPE | e severity msgcondtype nodelist node string ipaddress ::= Unknown | Normal | Warning | Critical | Minor | Major ::= Match | Suppress ::= | ::= IP | IP | OTHER ::= “any alphanumeric string” ::= ...
Configuring ITO Flexible-management Configuration
configfile    ::= [TIMETEMPLATES <timetmpls>] [CONDSTATUSVARS <statusvarsdef>] RESPMGRCONFIGS <respmgrconfigs>
Syntax for the declaration of condition status variables:
statusvarsdef ::= CONDSTATUSVAR <string> <bool> <statusvarsdef> | e
Syntax for the Time Template (productions for timetmpls, timetmpldefs, timezonetype, conditions, timetmplconds, timetmplcond, timecondtype, time, weekday, exact_date, day, and date):
timetmpls     ::= TIMETEMPLATE <string> DESCRIPTION <string> ...
Configuring ITO Flexible-management Configuration
nodelist      ::= <node> | <node> <nodelist>
node          ::= IP <ipaddress> | IP <ipaddress> <string> | OTHER <string>
string        ::= “any alphanumeric string”
ipaddress     ::= <n>.<n>.<n>.<n>
NOTE You can replace the <timetemplate> variable with $OPC_ALWAYS to specify that the time condition is always true.
Configuring ITO Flexible-management Configuration
Information. Table 5-39 on page 277 shows the parameters in the template used to define service hours and scheduled outages and gives a brief explanation of their scope:
Table 5-39 Parameters for the Service-Hours Template
Parameter   Description
SUPPRESS    In the context of service hours and scheduled outages: delete messages. Message-related actions triggered by the ITO management server are not started if the SUPPRESS option is defined.
Configuring ITO Flexible-management Configuration
MSGOPERATION TIMETEMPLATE “SLA_cust1” TROUBLETICKET True
MSGOPERATION TIMETEMPLATE “SLA_cust2” NOTIFICATION False
For more information on these and other variables, see “Syntax for Service Hours and Scheduled Outages” on page 274. The Condition-status Variable. Status variables for conditions allow you to enable and disable conditions dynamically.
Configuring ITO Flexible-management Configuration ITO management server is able to calculate the local time of the managed node which sent the message and decide whether or not it is appropriate to act. Service Hours are usually defined in terms of the local time on the managed node. For example, a service provider uses the Service Hours template to tell the ITO management server that managed nodes in various time zones must be supported between 08:00 and 16:00 local time.
Configuring ITO Flexible-management Configuration This string instructs the ITO management server to apply the time frame for service hours defined on the ITO management server (e.g. 08:00 -- 16:00) as a sliding time frame for managed nodes in their respective local time zone. NOTE It is important to ensure that the local time is correctly set on the managed node. The Command-line Interface.
Configuring ITO Flexible-management Configuration • Setting the attribute ACKNONLOCALMGR per message rule forces a direct acknowledge of a notification message on the source management server The template accepts any of the following message attributes in a message condition (for more information on message attributes see the man page opcmom(4)): • OBJECT • APPLICATION • MSGGRP • SEVERITY • NODE • MSGCONDTYPE The administrator can set several parameters to configure message forwarding on the various target
Configuring ITO Flexible-management Configuration
Parameter Name                Default Value  Description
OPC_FORW_CTRL_SWTCH_TO_TT     TRUE           forward control-switched messages to the trouble ticket or notification service
OPC_SEND_ACKN_TO_CTRL_SWTCH   TRUE           send acknowledgements for control-switched messages
OPC_SEND_ANNO_TO_CTRL_SWTCH   TRUE           send annotations for control-switched messages
OPC_SEND_ANT_TO_CTRL_SWTCH    TRUE           send action-related data for control-switched messages
OPC_SEND_ANNO_TO_NOTIF        TRUE           send annotations to the notification service
Configuring ITO Flexible-management Configuration NOTE To correct time differences between the different time resources used by the ITO C-routines and the MPE/iX intrinsics and commands, the TIMEZONE variable must be set on MPE/iX managed nodes. If not, messages can be sent to the wrong management server as they are processed using the incorrect time. For information about setting the TIMEZONE variable for MPE/iX nodes, see Chapter 2 of the HP OpenView IT/Operations Administrator’s Reference.
Configuring ITO Flexible-management Configuration
• To set a time period from the beginning of 1995 up to the end of 1999, use the syntax:
DATE FROM 01/01/1995 TO 12/31/1999
• To set a time on December 31, 1998, from 23:00 to 23:59, use the syntax:
TIME FROM 23:00 TO 23:59 DATE ON 12/31/1998
If you include the day of the week (for example, Monday April 1, 1997), ITO cross-checks the day and date you have entered to make sure that they match the calendar. If they do not match, the action will not be completed correctly.
Configuring ITO Flexible-management Configuration
NOTE At least one of the following parts must be used for the definition; ITO does not interpret a definition that omits all of these parts as “always”.
• Match If the current time is within the defined time period, the time condition is true.
• Suppress If the current time is within the defined time period, the time condition is false.
• TIME FROM
Configuring ITO Flexible-management Configuration Example Templates for Flexible Management This section provides a number of example templates which illustrate a simple implementation of selected flexible management features: • “Management Responsibility Switch” • “Follow-the-Sun Responsibility Switch” • “Message Forwarding between Management Servers” • “Service Hours” • “Scheduled Outage” Management Responsibility Switch # # Configuration file # /etc/opt/OV/share/conf/OpC/mgmt_sv/respmgrs/f887818 # and
Configuring ITO Flexible-management Configuration
ACTIONALLOWMANAGER NODE IP 0.0.0.0 “hpsystem.bbn.hp.com” DESCRIPTION “Boeblingen gateway”
ACTIONALLOWMANAGER NODE IP 0.0.0.0 “$OPC_PRIMARY_MGR” DESCRIPTION “ITO primary manager”
MSGTARGETRULES
MSGTARGETRULE DESCRIPTION “other messages”
MSGTARGETRULECONDS
MSGTARGETMANAGERS
MSGTARGETMANAGER TIMETEMPLATE “shift2” OPCMGR NODE IP 0.0.0.0 “system.aaa.bb.
Configuring ITO Flexible-management Configuration
RESPMGRCONFIGS
RESPMGRCONFIG DESCRIPTION “responsible managers M1”
SECONDARYMANAGERS
SECONDARYMANAGER NODE IP 0.0.0.0 “M1” DESCRIPTION “secondary manager M1”
SECONDARYMANAGER NODE IP 0.0.0.0 “M2” DESCRIPTION “secondary manager M2”
SECONDARYMANAGER NODE IP 0.0.0.0 “M3” DESCRIPTION “secondary manager M3”
ACTIONALLOWMANAGERS
ACTIONALLOWMANAGER NODE IP 0.0.0.0 “M1” DESCRIPTION “action allowed manager M1”
ACTIONALLOWMANAGER NODE IP 0.0.0.
Configuring ITO Flexible-management Configuration
• Inform another server (Treasury) about messages concerning financial and CAD applications
• Inform server (master) about critical messages coming from nodes x1 and x2
TIMETEMPLATES
# none
RESPMGRCONFIGS
RESPMGRCONFIG DESCRIPTION “msg-forwarding target specification”
MSGTARGETRULES
MSGTARGETRULE DESCRIPTION “Database”
MSGTARGETRULECONDS
MSGTARGETRULECOND DESCRIPTION “Database messages” MSGGRP “DATABASE”
MSGTARGETMANAGERS
MSGTARGETMANAGER TIMETEMPLATE “$OPC_
Configuring ITO Flexible-management Configuration Service Hours The following example template defines service hours for a SAP server with the node name saparv01. This node has to be in service on weekdays from 08:00 hours to 16:00 hours.
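A minimal sketch of what such a template could look like, pieced together from the constructs described earlier in this section, follows. The time template name, the TIMETMPLCONDS, TIMETMPLCOND, and WEEKDAY keywords, and the use of INSERVICE with MSGOPERATION are assumptions made purely for illustration; check the example templates provided with ITO for the exact syntax.
  TIMETEMPLATES
    TIMETEMPLATE "SAP_service_hours"
    DESCRIPTION "Weekdays from 08:00 to 16:00"
      TIMETMPLCONDS
        TIMETMPLCOND
          TIME FROM 08:00 TO 16:00
          WEEKDAY FROM Monday TO Friday
  RESPMGRCONFIGS
    RESPMGRCONFIG DESCRIPTION "Service hours for saparv01"
      MSGTARGETRULES
        MSGTARGETRULE DESCRIPTION "Buffer messages that arrive outside service hours"
          MSGTARGETRULECONDS
            MSGTARGETRULECOND DESCRIPTION "Messages from saparv01"
              NODE IP 0.0.0.0 "saparv01"
          MSGOPERATION TIMETEMPLATE "SAP_service_hours" INSERVICE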
Configuring ITO Variables Variables This section lists and defines the variables that can be used with ITO, and gives an output example, where appropriate. Each variable is shown with the required syntax. NOTE It is also often useful to surround the variable with quotes, especially if it may return a value that contains spaces. Environment Variables The variables listed below can be used before starting up ITO.
Configuring ITO Variables <$*> Returns all variables assigned to the trap. Sample output: [1] .1.1 (OctetString): arg1 [2] .1.2 (OctetString): kernighan.c.com <$@> Returns the time the event was received as the number of seconds since the Epoch (Jan 1, 1970) using the time_t representation. Sample output: 859479898 <$1> Returns one or more of the possible trap parameters that are part of an SNMP trap. (<$1> returns the first variable, <$2> returns the second variable, etc.
Configuring ITO Variables <$F> Returns the textual name of the remote pmd’s machine if the event was forwarded. Sample output: kernighan.c.com <$G> Returns the generic trap ID. Sample output: 6 <$MSG_OBJECT> Returns the name of the object associated with the event. This is set in the Message Defaults section of the Add/Modify SNMP Trap window. Note: this returns the default object, not the object set in the conditions window.
Configuring ITO Variables <$x> Returns the date the event was received using the local date representation. Sample output: 03/27/97 Logfile, Console, and ITO Interface Templates The variables listed below can be used in most logfile, Console, and ITO Interface template text entry fields (exceptions are noted). The variables can be used within ITO, or passed to external programs.
Configuring ITO Variables <$OPC_MGMTSV> Returns the name of the current ITO management server. Sample output: richie.c.com The following variables are only available for the MPE/iX console message source template. See “Generating a New NMEV Marker” on page 249 for a description of the format of the NMEV marker and how it is generated. <$NMEV_SEV> Returns the severity of the message as set within the NMEV marker, if the marker is present in the original messages.
Configuring ITO Variables Returns the name of a threshold monitor. This is set in the Monitor Name field of the Add/Modify Monitor window. Sample output: cpu_util <$THRESHOLD> Returns the value set for a monitor threshold. This is set in the Threshold: field on the Add/Modify Monitor window. Sample output: 95.00 <$VALAVG> Returns the average value of all messages reported by the threshold monitor. Sample output: 100.
Configuring ITO Variables duplicate selections will be ignored. Sample output: 85432efa-ab4a-71d0-14d4-0f887a7c0000 a9c730b8-ab4b-71d0-1148-0f887a7c0000 $OPC_MSGIDS_HIST Returns the Message IDs (UUID) of the messages currently selected in the History Message Browser. Sample output: edd93828-a6aa-71d0-0360-0f887a7c0000 ee72729a-a6aa-71d0-0360-0f887a7c0000 $OPC_MSGIDS_PEND Returns the Message IDs (UUID) of the messages currently selected in the Pending Messages Browser.
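As a hedged illustration of how these variables are typically consumed, one of them can be appended to the call of an external program that is configured as an ITO application, so that the program receives the selected message IDs as command-line arguments; the script name and path below are purely hypothetical:
  /opt/local/bin/print_msg_ids.sh $OPC_MSGIDS_HIST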
6 Installing/Updating the ITO Configuration on the Managed Nodes 299
Installing/Updating the ITO Configuration on the Managed Nodes This chapter describes how to install/update the ITO configuration on the managed nodes. In addition to this chapter, you should also read the HP OpenView IT/Operations Concepts Guide, for a fuller understanding of the elements and the windows you can use to review or customize them.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes Configuration Installation/Update on Managed Nodes This section contains information concerning the distribution of the ITO agent configuration within your environment. Script and Program Distribution to Managed Nodes ITO enables you to distribute commonly-used scripts and programs to the managed nodes.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes ❏ novell/intel/nw ❏ olivetti/intel/unix ❏ pyramid/mips/unix ❏ sco/intel/unix ❏ sco/intel/uw ❏ sequent/intel/dynix ❏ sgi/mips/irix ❏ sni/mips/sinix ❏ sun/sparc/solaris 2. If you need a certain binary to be present only on specific systems, transfer the file manually. Furthermore, do not put the file in the default directory on the managed nodes.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes • Select only a few nodes at a time in the IP map, Node Bank, or Node Group Bank window. • In the Node Bank or Node Group Bank window, open the Configure Management Server window by selecting Actions: Server->Configure… This is shown in Figure 6-1. Set a low number in the Parallel Distribution field. For more information, press F1 to see help on this field.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes
Figure 6-1 Configure Management Server Window
5. If identical files for actions|cmds|monitor are found under both of the following directories:
/var/opt/OV/share/databases/OpC/mgd_node/customer/
and:
/var/opt/OV/share/databases/OpC/mgd_node/vendor/
the customer’s file is used in preference.
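For example, a site-specific monitor script intended only for Solaris managed nodes could be placed in the customer tree under one of the platform selectors listed above (sun/sparc/solaris in this case); the monitor subdirectory and the script name are illustrative assumptions:
  /var/opt/OV/share/databases/OpC/mgd_node/customer/sun/sparc/solaris/monitor/check_db.sh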
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes 6. ITO compresses the monitor|actions|cmds binaries. Do not put a file into the following directory, if the same file name already exists with a .
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes If you have configured actions or monitors in your templates, or commands in your Application Bank/Desktop, these binaries must be distributed as described in the following subsection. Figure 6-2 Install/Update ITO Software and Configuration Window Installing/Updating Scripts and Programs on Managed Nodes ITO provides the distribution of commonly-used scripts and programs.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes NOTE To update only the changes in the configuration, do not select the Force Update option; the Force Update option (re-)distributes all files causing an increase in network load. The scripts and programs must be located in the directories on the management server as listed in Table 6-1.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes Table 6-2 Temporary Directories for Distributed Scripts and Programs on Managed Nodes Operating System Managed Node Temporary Directory DEC Alpha Windows NT /usr/OV/tmp/OpC/bin/alpha/actions /usr/OV/tmp/OpC/bin/alpha/cmds /usr/OV/tmp/OpC/bin/alpha/monitor DEC Alpha AXP Digital UNIX /var/opt/OV/tmp/OpC/bin/actions /var/opt/OV/tmp/OpC/bin/cmds /var/opt/OV/tmp/OpC/bin/monitor HP 3000/900
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes Operating System Managed Node Intel 486 or higher Temporary Directory DYNIX/ptx /var/opt/OV/tmp/OpC/bin/actions /var/opt/OV/tmp/OpC/bin/cmds /var/opt/OV/tmp/OpC/bin/monitor Novell NetWare sys:/var/opt/OV/tmp/OpC/bin/actions sys:/var/opt/OV/tmp/OpC/bin/cmds sys:/var/opt/OV/tmp/OpC/bin/monitor OS/2 \var\opt\OV\tmp\OpC\bin\actions \var\opt\OV\tmp\OpC\bin\cmds \var\opt\OV\tmp\OpC\bin\monito
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes Operating System Managed Node Temporary Directory Siemens Nixdorf SINIX /var/opt/OV/tmp/OpC/bin/actions /var/opt/OV/tmp/OpC/bin/cmds /var/opt/OV/tmp/OpC/bin/monitor Silicon Graphics IRIX /var/opt/OV/tmp/OpC/bin/actions /var/opt/OV/tmp/OpC/bin/cmds /var/opt/OV/tmp/OpC/bin/monitor Sun SPARCstation Solaris /var/opt/OV/tmp/OpC/bin/actions /var/opt/OV/tmp/OpC/bin/cmds /var/opt/OV/tmp/OpC/
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes
Managed Node: HP 9000/700, HP 9000/800; HP 3000/900; IBM RS/6000, Bull DPX/20
OS: HP-UX 10.x and 11.x; MPE/iX; AIX
Directory                      Access Rights
/var/opt/OV/bin/OpC/actions    rwxr--r-- (owner: root)
/var/opt/OV/bin/OpC/cmds       rwxr-xr-x (owner: root)
/var/opt/OV/bin/OpC/monitor    rwxr--r-- (owner: root)
ACTIONS.OVOPC                  cap=BA,IA,PM,MR,DS,PH R,X,L,A,W,S:AC
COMMANDS.
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes Managed Node Intel 486 or higher OS Novell NetWare OS/2 SCO OpenServer UnixWare Directory sys:/var/opt/OV/tmp/OpC/bin/\ actions Administrator (full access) sys:/var/opt/OV/tmp/OpC/bin/\ cmds Administrator (full access) sys:/var/opt/OV/tmp/OpC/bin/\ monitor Administrator (full access) \var\opt\OV\bin\OpC\actions rwxa \var\opt\OV\bin\OpC\cmds rwxa \var\opt\OV\bin\OpC\monitor rwxa
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes
Managed Node: NCR System 3xxx/4xxx/5xxx (Intel 486 or higher); Olivetti (INTEL PCs); Pyramid mips_r3000; Siemens Nixdorf
OS: UNIX SVR4; Olivetti UNIX; DataCenter/OSx; SINIX
Directory                      Access Rights
/var/opt/OV/bin/OpC/actions    rwxr--r-- (owner:root)
/var/opt/OV/bin/OpC/cmds       rwxr-xr-x (owner:root)
/var/opt/OV/bin/OpC/monitor    rwxr--r-- (owner:root)
/var/opt/OV/bin/OpC/actions
Installing/Updating the ITO Configuration on the Managed Nodes Configuration Installation/Update on Managed Nodes
Managed Node: Silicon Graphics; Sun SPARCstation
OS: IRIX; Solaris
Directory                      Access Rights
/var/opt/OV/bin/OpC/actions    rwxr--r-- (owner:root)
/var/opt/OV/bin/OpC/cmds       rwxr-xr-x (owner:root)
/var/opt/OV/bin/OpC/monitor    rwxr--r-- (owner:root)
/var/opt/OV/bin/OpC/actions    rwxr--r-- (owner: root)
/var/opt/OV/bin/OpC/cmds       rwxr-xr-x (owner: root)
/var/opt/OV/bin/OpC/monitor    rwxr--r--
7 Integrating Applications into ITO 315
Integrating Applications into ITO This chapter describes how to integrate applications into ITO. The HP OpenView IT/Operations Concepts Guide provides more detailed information on the elements and the windows you can use to carry out the integration. See also the HP OpenView IT/Operations Application Integration Guide available with the HP OpenView IT/Operations Developer’s Toolkit.
Integrating Applications into ITO Integrating Applications into ITO Integrating Applications into ITO ITO allows graphical invocation of applications (“point and click”) by means of the operators’ Application Desktop. A different set of applications can be assigned to each ITO operator to match specific requirements.
Integrating Applications into ITO Integrating Applications into ITO ITO Applications Typically, ITO applications are utilities that provide services of a general nature. When integrated into the Application Desktop, they help build a set of management tools. The application is invoked when the user double-clicks the icon that represents it. Information, such as selected nodes, can be passed as arguments to the applications, and the applications are invoked using the ITO access mechanisms.
Integrating Applications into ITO Integrating Applications into ITO ❏ If you have defined them to do so in the application registration file (ARF), both OV Application and OV Service integrations can cause a daemon to start running when the ITO session is started. ❏ By integrating ITO as an OV Application you integrate a single action as a desktop icon (as defined in the ARF). ❏ By integrating ITO as an OV Service you integrate all actions as menu items (as defined in the ARF).
Integrating Applications into ITO Integrating Applications into ITO 3. In the Add OV Application window enter the following application attributes: Application Name: Ethernet Traffic HP OV Registration Application Name: IP Graphs OV Registration Action Identifier: etherTrafficHP And select [Use Objects selected by Operator]. 4. Click on [OK]. 5. Invoke this application as administrator and as operator: a. As administrator: Log out and log in again, to use this OV Application.
Integrating Applications into ITO Integrating Applications into ITO a. As administrator: Log out and log in again to use this OV Service, click on a node and select one of the menu items in the IP Map under Performance:Network Activity or Configuration:Network Configuration. Copy this OV Service into an operator’s Application Desktop to enable the operator to monitor the IP tables. b.
Integrating Applications into ITO Integrating Applications into ITO ITO has predefined conditions in the opcmsg(1|3) template that allow it to integrate the MeasureWare alarming functionality into ITO. The opcmsg template defines the messages that can come from the MeasureWare agent, together with the operator-initiated action that starts PerfView. To enable the PerfView/MeasureWare integration on an ITO agent, do the following: 1. Assign the opcmsg(1|3) template to all managed nodes. 2.
Integrating Applications into ITO Integrating Applications into ITO Running PerfView 3.0 and PerfView 4.0 in parallel If you upgrade some managed nodes from PerfView 3.0 to the MeasureWare agent, remember to perform either one of the following two steps to avoid receiving redundant PerfView alarms: ❏ unregister MeasureWare agents from the PerfView 3.
Integrating Applications into ITO Integrating Applications into ITO b. Select [No Window] (for example, X Application) from the option button. c. Click on [OK]. 4. Select again the application labeled ITO Status from the Application Bank. 5. Copy this application using Actions:Application->Copy and modify it to become the ITO Agents Stop application: a.
Integrating Applications into ITO Integrating Applications into ITO Integrating Applications as Actions An application or script may be configured to run as an automatic or operator-initiated action, or a scheduled action. An automatic action is triggered by a message received in ITO. An operator-initiated action is merely enabled by a message received in ITO; it is executed by the operator. Operator-initiated actions may also be triggered by the administrator, via the message browser.
Integrating Applications into ITO Integrating Applications into ITO Application Logfile Encapsulation Applications can be monitored by observing their logfiles. Logfile entries can be forwarded into ITO, or suppressed. The message can be restructured and ITO specific attributes can be set up. For more details refer to the Message Source Templates window of the ITO administrator’s GUI. NOTE Most applications running on Windows NT systems use Eventlogs.
Integrating Applications into ITO Integrating Applications into ITO Messages are intercepted before they are added to the ITO database and before they are displayed in the ITO message browsers. For further information, see the documentation available with the HP OpenView IT/Operations Developer’s Toolkit.
Integrating Applications into ITO Integrating Applications into ITO • application configured as No Window (e.g., X Application) During profile execution, stdin, stdout, and stderr are not available, so you should avoid commands that read from standard input or write to standard output/error.
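One common way of keeping a shared login profile from disturbing such applications (a generic shell sketch, not an ITO-specific mechanism) is to guard terminal-oriented commands so that they run only when a terminal is actually attached:
  # Skip terminal-oriented commands when no terminal is attached, as is the
  # case while ITO executes the profile for a No Window application.
  if [ -t 0 ]; then
      echo "Welcome $LOGNAME"
      stty erase '^H'
  fi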
Integrating Applications into ITO Integrating Applications into ITO How to integrate ITO with SMS The ITO/SMS integration has two parts. The first consists of the standard NT application event log template, and the second consists of a specific SMS application event log template and fourteen threshold monitors. This section explains how to set up and install these templates and monitors. 1. Assign the SMS monitors and templates to the appropriate NT servers.
Integrating Applications into ITO Integrating Applications into ITO ITO SMS Monitors NT_DWN_SMS_SITE_CONFIG_MANAGER SMS Service Site Configuration Manager AA none NT_UP_SMS_SITE_CONFIG_MANAGER NT_DWN_SMS_TRAP_FILTER Restart* Trap Filter NT_UP_SMS_TRAP_FILTER none none * OA = Operator Action; AA= Automatic Action The Application Event Log template, NT SMS, must be assigned to any SMS Site Server of the SMS hierarchy, but cannot be assigned to the logon, distribution, or helper servers because duplic
Integrating Applications into ITO Integrating Applications into ITO 3. Distribute the templates (and the agent as well, if it is not already installed). How SMS messages relate to ITO messages When ITO reports SMS messages in the Message Browser, it assigns a Message Group and Message Object that is appropriate to the message. The tables below show how the SMS messages will be mapped in ITO.
Integrating Applications into ITO Integrating Applications into ITO EMS Integration The Event Monitoring Service (EMS) provides a mechanism for monitoring system resources on HP-UX and sending notifications about these system resources when they change in an observable way. EMS has been integrated into ITO so that it is possible to forward EMS notifications to ITO via the opcmsg (3) API.
8 ITO Language Support 333
ITO Language Support This chapter describes the language dependencies of the ITO management server processes, managed node commands and processes, and the ITO GUI. It also describes the languages and LANG settings supported for the various ITO platforms. In addition, you will find information on the character sets supported by ITO.
ITO Language Support Language Support on the Management Server Language Support on the Management Server On the management server, localization considerations impact: ❏ The language used for displaying the status messages of the ITO server and managed nodes in the ITO Motif GUI. ❏ The character set used for internal processing.
ITO Language Support Language Support on the Management Server locale settings used to start the ITO processes must be compatible with the database character set. All input data on the management server must be given in this character set. ITO GUI Considerations ITO uses the setting of the environment variable LANG to determine the language of the message catalog and the GUI. When starting the ITO GUI, the following settings for this variable are supported: ❏ C, *.iso88591, *.roman8 ❏ ja_JP.
ITO Language Support Language Support on the Management Server ITO uses the system-wide X-resources for window titles and icon labels. Table 8-1 System-wide X Resources in a VUE and CDE Environment Resource Description *FontList Font used for window titles. Vuewm*icon*fontList Font used for icon titles. ITO-specific resources are set in one of the files listed below: ❏ English: /opt/OV/lib/X11/app-defaults/C/Opc ❏ Japanese: /opt/OV/lib/X11/app-defaults/ja_JP.
ITO Language Support Language Support on Managed Nodes Language Support on Managed Nodes Language of Messages on Managed Nodes ITO managed-node processes determine the language of their messages by the locale that is set. Therefore, if you want these processes to generate, for example, Japanese messages, you must make sure that the locale, and therefore LANG, is set appropriately before opcagt -start is called. The locale for the ITO agents is set in the system startup script, for example /etc/rc.config.
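For example, on an HP-UX managed node that is expected to generate Japanese messages, the startup sequence would have to export a suitable locale before starting the agents. A sketch of the relevant lines follows; the locale value is one of those listed in Table 8-6, and the exact startup file in which they belong depends on the platform:
  LANG=ja_JP.SJIS
  export LANG
  /opt/OV/bin/OpC/opcagt -start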
ITO Language Support Language Support on Managed Nodes Character Sets for Internal Processing on Managed Nodes The character sets available on platforms supported by ITO can differ from the character set used in the ITO database. Consequently, when a message is generated on a managed node, it must often be converted before it can be sent to the management server and stored in the database. ITO takes care of this conversion.
ITO Language Support Language Support on Managed Nodes Table 8-4 Supported Character Sets on Managed Nodes in a Japanese Environment Platform Character Set HP-UX, AIX, Solaris, Digital UNIX Shift JISa, EUC b, ASCII Windows NT (intel-based) Japanese ANSI Code Page 932 c, ASCII a. For Solaris, Shift JIS is only supported with Solaris version 2.6 and higher. b. 2-byte Extended UNIX Code. c. Code Page 932 is analogous to Shift JIS.
ITO Language Support Language Support on Managed Nodes External Character Set on Managed Nodes All commands provided for ITO managed nodes, such as opcmsg(1M) or opcmon(1M), interpret (the character set of) their command line arguments by the locale setting. This character set may also be different from the database character set and the managed node processing character set. All command input is also converted before it is acted upon by any managed node processes.
ITO Language Support Language Support on Managed Nodes Platform OS/2 LANG LANG variable not available External Character Set ASCII ISO 8859-1 OEM Code Page 437 OEM Code Page 850 NCR UNIX SVR4, SCO UnixWare, SGI IRIX, Sequent DYNIX/ptx, Olivetti UNIX, Pyramid DataCenter/OSx C ASCII .iso8859-1 ISO 8859-1 SCO OpenServer C ASCII .8859 ISO 8859-1 C ASCII .
ITO Language Support Language Support on Managed Nodes Table 8-6 External Character Sets in a Japanese Environment Platform LANG AIX HP-UX 10.x and 11.x Digital UNIX Solaris Windows NT External Character Set C ASCII ja_JP, .IBM-932, Shift JIS .IBM-eucJP EUC C ASCII ja_JP.SJIS Shift JIS ja_JP.eucJP 2-byte EUC C ASCII ja_JP.PCKa Shift JISa ja EUC LANG variable not available ANSI Code page 932, ASCII a. Only with Solaris 2.6 and later.
ITO Language Support Language Support on Managed Nodes Table 8-7 Character sets supported by the Logfile Encapsulator Other Japanese English English Nodes English ASCII NetWare, OS/2 Nodes Japanese Character Set HP-UX, Solaris, AIX, Digital UNIX Nodes English Windows NT Nodes ✔ ✔ ✔ ✔ ✔ ✔ ✔ ISO 8859-1 ✔ ✔ no MPE ROMAN 8 HP-UX American EBCDIC HP-UX Multilingual OEM code page 850 ✔ OEM US code page 437 ✔ Multilingual ANSI code page 1252 ✔ Japanese ANSI code page 932 MPE ✔
ITO Language Support Character Conversion in ITO
Character Conversion in ITO
English Environment
Figure 8-1 ITO Configuration and Related Character Sets in an English Environment (diagram: the database uses ISO 8859-1/WE8ISO8859P1 (Oracle); output to disk, reports, and saved broadcasts use ISO 8859-1; HP-UX nodes use Roman8, ISO 8859-1, or ASCII; MPE nodes use Roman8; Solaris, AIX, NCR UNIX, SGI IRIX, SCO OpenServer, and SCO UnixWare nodes use ISO 8859-1 or ASCII)
ITO Language Support Character Conversion in ITO Management Server: ❏ Local Logfile entries (opcerror), history download, etc., are processed using the ISO 8859-1 character set. ❏ Configuration upload and download is done using ISO 8859-1. No runtime conversion is done on the management server. Conversion is only performed for managed node configuration files if the ITO agents on HP-UX or MPE/iX are running with the processing character set, ROMAN8.
ITO Language Support Character Conversion in ITO Output conversion, before forwarding the message to the management server, is from ROMAN8 to ISO8859-1/WE8ISO8859P1 (the database character set). Tips On HP-UX, it is possible to define different character sets for different managed nodes. It is recommended that you set the character set most frequently used on each managed node. For example, if you mostly monitor logfiles with ROMAN8 characters, you should use ROMAN8 for your managed nodes.
ITO Language Support Character Conversion in ITO
Japanese Environment
Figure 8-2 ITO Configuration and Related Character Sets in a Japanese Environment (diagram: the database, the ITO server and UI (OVwindows), SNMP traps, output to disk, reports, saved broadcasts, and the MoM configuration handled by opccfgupld/opccfgdwn all use Shift JIS on the management server; HP-UX, Solaris, AIX, and Digital UNIX managed nodes use Shift JIS, EUC, or ASCII as their node character set)
ITO Language Support Character Conversion in ITO ❏ Input through user commands is always converted from the external character set to the node character set. ❏ No input conversion is performed for configuration files; configuration files are always in the node character set. ❏ No output conversion is done for local logfiles; the contents of logfiles is always in the node character set. ❏ MIB processing is always performed in the node character set.
ITO Language Support Localized Object Names Localized Object Names Although most of the ITO-specific configuration can be localized, there are some restrictions. Restrictions ❏ Only ASCII characters are supported in node names. ❏ The name of ITO objects, for example, the template name, message group name, or node group name, is used as an internal identifier by ITO and therefore should not be localized. Names are only displayed in the ITO GUI if a label hasn’t been specified.
ITO Language Support Flexible Management in a Japanese Environment Flexible Management in a Japanese Environment If your management server runs with the character set Shift JIS, but your managed nodes are running with the character set EUC, you must perform some extra configuration steps; namely, you have to manually convert the MoM configuration file on the management server from Shift JIS to EUC. Enter: /usr/bin/iconv -f sjis -t euc <input file> > <output file> where <input file> is the name of the original confi
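For example, to convert a Shift JIS configuration file into an EUC version of the same file (both file names here are purely illustrative):
  /usr/bin/iconv -f sjis -t euc allnodes.sjis > allnodes.euc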
9 An Overview of ITO Processes 353
An Overview of ITO Processes This chapter provides a functional overview of ITO: it describes ITO manager and agent processes and subprocesses, and lists files used by ITO. The chapter is divided into sections that describe the following: ❏ Management Server Processes ❏ Managed Node Processes ❏ Secure Networking Figure 9-1 on page 355 provides a functional overview of the major parts of ITO.
An Overview of ITO Processes Understanding ITO Processes Understanding ITO Processes ITO’s agents and managers communicate by means of Remote Procedure Calls (RPCs) based on DCE or NCS, files (=queues), pipes, or signals. These mechanisms apply to communication between the management server and the managed nodes as well as to communication between processes running locally on the management server.
An Overview of ITO Processes Understanding ITO Processes For more information on how the processes communicate with one another and what each process does, see “Management Server Processes” on page 356 and “Managed Node Processes” on page 360.
An Overview of ITO Processes Understanding ITO Processes distribution session. In addition, scripts and programs required for automatic and operator-initiated actions, scheduled actions, monitoring and broadcasting requests, can also be distributed via the distribution manager. The distribution manager also starts a child process, the communication manager, for intermanagement-sever communication.
An Overview of ITO Processes Understanding ITO Processes opcmsgr The message receiver collects all messages from managed nodes; it is an auxiliary process of the message manager, designed to guarantee quick message acceptance. opcmsgr accepts messages from NCS agents only. opcmsgrd Similar to opcmsgr, opcmsgrd accepts messages from NCS, DCE, and Sun-RPC agents. opctss The distribution manager subprocesses (opctss) transfer configuration data to the distribution agent using TCP/IP.
An Overview of ITO Processes Understanding ITO Processes ITO Files on the Management Server The directory /var/opt/OV/share/tmp/OpC/mgmt_sv contains the files listed and explained in Table 9-1 on page 359.
An Overview of ITO Processes Understanding ITO Processes Server File Name File Contents and Function mpimmp/ mpimmq Queue/pipe used by the message manager and messagestream-interfaces to transfer messages from MSI-programs to the message manager msgmgrq/ msgmgrp Queue/pipe between the message receiver and message manager. oareqhdl File used by the Open Agent request handler to store connections to other processes.
An Overview of ITO Processes Understanding ITO Processes opcacta The action agent, opcacta, is responsible for the starting and controlling of automatic and operatorinitiated actions, and scheduled actions (scripts, programs). The action agent is also used for command broadcasting and for applications configured as Window (Input/Output) in the Add/Modify ITO Application window. opcdista The distribution agent requests node-specific configurations from the distribution manager (opcdistm).
An Overview of ITO Processes Understanding ITO Processes and opcmon(3) API can be used (asynchronously) to feed the monitor agent with the current threshold values. Note that opcmona does not immediately begin monitoring when agents are started. Instead, it waits one polling interval, and only then executes the monitor script for the first time. Typically, polling intervals are of the order of 30 seconds to 5 minutes.
An Overview of ITO Processes Understanding ITO Processes Table 9-2 Locating Process-related Files on the Managed Nodes Platform File Location AIX /var/lpp/OV/tmp/OpC DEC Alpha NT \usr\OV\tmp\OpC\ HP-UX 10.x and 11.x Digital UNIX NCR UNIX SVR4 Olivetti UNIX OS/2 Pyramid DataCenter/OSx SCO OpenServer SCO UnixWare Sequent DYNIX/ptx SGI IRIX Solaris /var/opt/OV/tmp/OpC MPE/iX TMP.
An Overview of ITO Processes Understanding ITO Processes Table 9-3 Pipes and Queue Files on the Managed node Agent File Name File Contents and Function actagtp/ actagtq Queue/pipe for pending action requests for the action agent filled by the message agent and the control agent. The action agent polls the queue every 5 seconds. monagtq/ monagtp Queue on UNIX systems between ITO monitor command opcmon(1) respectively ITO monitor API opcmon(3) and monitor agent.
An Overview of ITO Processes Understanding ITO Processes Agent File Name File Contents and Function trace (ASCII) ITO trace logfile. For more information on activating tracing see the section on troubleshooting in the HP OpenView IT/ Operations Administrator’s Reference. aa* Temporary files used by the action agent, for example, to store the action or application output written to stderr and sdtout.
An Overview of ITO Processes Understanding ITO Processes The directories in Table 9-4 on page 365 contain files which are listed in Table 9-5 on page 366.
An Overview of ITO Processes Understanding ITO Processes Process Authentication An important step in the authentication procedure that an ITO RPC process goes through involves the obtaining of a login context. Every secure RPC process has a login context, which it either inherits from its parent process or establishes itself. The login context requires a name (or principal) and a password (or key).
An Overview of ITO Processes Understanding ITO Processes However, this configuration information must be present on both the management server and the managed node. ITO associates two names with the two types of node in its environment, namely: one each for the management server and the managed node. All management server processes then run under the name associated with the management server, and all managed node processes under the identity of the name associated with the managed node.
An Overview of ITO Processes Secure Networking Secure Networking ITO’s concept of securing a network is based on the idea of improving the security of the connection between processes either within a network or across multiple networks as well as through routers and other restrictive devices.
An Overview of ITO Processes Secure Networking or RPCD always runs on UDP 135, a reserved port which must be accessible even through a firewall. Once it has the port number of the RPC server, the RPC client can initiate the RPC call.
An Overview of ITO Processes Secure Networking ❏ The Message Receiver on the server registers TCP/UDP port 1200 in its unique RPCD/LLBD and listens there for ITO traffic. ❏ The Distribution Manager on the server registers TCP/UDP port 1051 in its unique RPCD/LLBD and listens there for ITO traffic. ❏ RPC clients doing lookups in the RPCD/LLBDs find this information and request connections to the Control Agent, Message Receiver and so on at the port numbers listed.
An Overview of ITO Processes Secure Networking 372 Chapter 9
10 Tuning, Troubleshooting, Security, and Maintenance 373
Tuning, Troubleshooting, Security, and Maintenance This chapter contains information for administrators who perform system maintenance, performance tuning, and troubleshooting. It also describes some important security information, and how to change the hostname and IP address of your management server and manged nodes.
Tuning, Troubleshooting, Security, and Maintenance Performance Tuning Performance Tuning In general, you can carry out the following to improve system performance: ❏ Increase the RAM, in order to reduce disk swapping ❏ Upgrade the CPU ❏ Do not use the LAN/9000 logging and tracing commands nettl(1M) and netfmt(1M) unless absolutely necessary ❏ Use different physical disks for the file systems and for swap space ❏ Use high-bandwidth network links between the management server, managed nodes and display stati
Tuning, Troubleshooting, Security, and Maintenance Performance Tuning ❏ Suppress the appearance of the ITO Alarm Severity symbol in the HP OpenView submaps by changing the ITO app-defaults file. Set the line Opc.statusPropOnAllNodes in the file /opt/OV/lib/X11/app-defaults//Opc to False. The default setting is True.
Tuning, Troubleshooting, Security, and Maintenance Performance Tuning 1. Reducing the number of managed nodes for parallel configuration distribution (Configure Management Server window, [Actions: Server: Configure…]). 2. Making sure operators close any View- and History-Browser windows not currently required. This reduces: • the amount of RAM required for the GUI • the time required for updating Browser windows when new messages are intercepted or acknowledged. 3.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Recommended Practices Troubleshooting: Recommended Practices Following these practices helps you isolate, recover from, and often prevent, problems: ❏ Make sure that both the management server and the managed node system meet the hardware, software, and configuration prerequisites. See the HP OpenView IT/Operations Installation Guide for the Management Server for a list of prerequisites.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Tracing Troubleshooting: Tracing ITO provides a tracing facility which helps you to investigate the cause of a problem. For example, if processes or programs abort, performance is greatly reduced, or unexpected results appear. Trace logfiles can provide pointers to where and when the problem occurred. Tracing can be activated for specific management server and/or agent processes by adding a statement to the opcsvinfo and/or opcinfo file.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Tracing Description NAME Name resolution NLS Native Language support OCOMM Open agent communication PERF Performance SEC Security a. Use this option carefully as it provides extensive and detailed information.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Tracing See Table 10-1 “Functional Tracing Areas” for a list of all available areas. MSG and ACTN are enabled by default. NOTE Spaces are not allowed between entries in the lists for OPC_TRACE_AREA and OPC_TRC_PROCS. 3. To receive verbose trace information output, add: OPC_TRACE_TRUNC FALSE OPC_TRACE_TRUNC TRUE is enabled by default. 4.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Tracing Interpreting the Trace File The trace information is written to the following trace logfile: ❏ Management server trace file: /var/opt/OV/share/tmp/OpC/mgmt_sv/trace ❏ HP-UX 10.x and 11.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Characterizing the Problem Troubleshooting: Characterizing the Problem When you encounter a symptom associated with a problem, make a note of all associated information: ❏ Scope: What is affected? • Distinguish between management server and managed node problems. • If you suspect that a problem lies on a managed node, try to duplicate it on a different node, to find out whether it is node-specific.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Characterizing the Problem Debug Information for OS/2 Managed Nodes On OS/2 managed nodes, ITO provides an REXX (OS/2 scripting language) script, \opt\OV\bin\OpC\utils\opcclct.cmd. Run this script when you are in a situation that requires attention of a support engineer.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: General Considerations Troubleshooting: General Considerations Consider the following when troubleshooting ITO: ❏ ITO is an application that is both memory- and swap-space intensive. Problems may occur simply due to the exhaustion of resources. ❏ Communication between the ITO management server processes is based on DCE remote procedure calls, which may cause occasional failures and time-outs of manager/agent communications.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: How ITO Reports Errors Troubleshooting: How ITO Reports Errors This section describes how ITO processes and reports errors during operation. The section is broken down into three areas: ❏ Errors reported via logfiles. ❏ Errors reported via the Message Browser. ❏ The Error Dialog Box in the GUI. ❏ stdout and stderr in the shell. Errors Reported in Logfiles Error messages are written to two different locations: 1.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: How ITO Reports Errors Table 10-2 Errors Reported by the Agent Processes Platform File Name and Location HP-UX 10.x and 11.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: How ITO Reports Errors be that messages relating to serious or critical problems are marked as “X” in the “U” (Unmatched) column in the message browser and, as a consequence, possibly ignored. In addition, such unmatched messages should be reported to the ITO administrator, in order to improve the existing templates by adding appropriate message or suppress conditions.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: How ITO Reports Errors Troubleshooting: How ITO Reports Errors This section describes how ITO processes and reports errors during operation. The section is broken down into the following areas: ❏ Errors reported via logfiles. ❏ Errors reported via the Message Browser. ❏ The Error Dialog Box in the GUI. ❏ stdout and stderr in the shell. Errors Reported in Logfiles Error messages are written to two different locations: 1.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: When you Need More Information Troubleshooting: When you Need More Information Further information to help you troubleshoot is available in: ❏ The HP OpenView IT/Operations Software Release Notes, or the files in the ReleaseNotes directory: /opt/OV/ReleaseNotes ❏ ITO online help. ❏ The documentation set provided with ITO. ❏ HP OpenView documentation for the given platform. ❏ Oracle Database manuals.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Troubleshooting: Specific Problems This section provides problem descriptions and troubleshooting steps in the following areas: ❏ Management Server • Database • ITO Server Processes • ITO GUI • HP-UX and Services ❏ Managed Nodes • Installation Problems: UNIX MPE/iX • Runtime Problems: Platform Independent UNIX HP-UX MPE/iX llbd and dced/rpcd MIB Access ❏ Network File System Security issues and system maintenance are discu
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems ❏ ”Troubleshooting” chapter in the HP OpenView Network Node Manager Reference. ❏ ”Troubleshooting” chapter in HP OpenView Data Management Administrator’s Reference. ❏ Manuals supplied with the database. Oracle-specific Database Problems and Solutions Problem ITO process cannot be started.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Description B ITO connects to Oracle as user opc_op using OS authentication. Oracle allows you to define a prefix for OS authentication users. ITO adds the user opc_op with the assumption that no prefix is used. If you have defined a prefix, ITO will be unable to connect. Solution A Check that the file /etc/oratab exists. Check that /etc/oratab is readable by user opc_op.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Could not connect to Oracle when using SQL*Net, with following error message: Database error: ORA-12158: (Cnct err, can’t get err txt See Servr Msgs & Codes Manual) (OpC50-15) Description ITO connects as user opc_op to the database.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems ITO Server Problems and Solutions Problem The ITO management server status is completely corrupted, even after the ovstop opc and ovstart opc sequence. Description Lots of corrupted messages in the message browser; lots of critical ITO error messages, ITO agents on managed nodes cannot be stopped/started, configuration distribution does not work, and so forth.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Old (no longer interesting/valid) messages are sent to the external trouble ticket system and/or external notification service when restarting the ITO management server after a long down-time.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Ungraceful abort of the ITO GUI, leaving some ovhelp processes still running. Description After ungraceful shutdown of the ITO user interface, ovhelp processes remain running.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Ungraceful abort of the ITO GUI leaving some GUI processes still running Description You receive the error message The user is already logged on. (50-17), when logging on to ITO after the ITO GUI has crashed while users were still logged on.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems 2. Check the entry OPC_INSTALLED_VERSION in the opcinfo file on the managed node. See Table 10-3 on page 399 for the location of the opcinfo file on the various agent platforms. Table 10-3 Location of the opcinfo File on ITO Managed Nodes AIX /usr/lpp/OV/OpC/install/opcinfo DEC Alpha NT \usr\OV\bin\OpC\alpha\install\opcinfo Digital UNIX /usr/opt/OV/bin/OpC/install/opcinfo HP-UX 10.x and 11.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems ITO Installation Problems and Solutions on UNIX Managed Nodes Problem The installation script inst.sh (1M) prompts for a password in an endless loop, even if the correct password has been specified. Description If no .rhosts entry is available for root on the managed node, the ITO installation will prompt for the root password.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Where ni0 is a point-to-point connection (PPL, SLIP, or PPP), and lan0 and lan1 are ethernet interfaces (lo0 is present on every system and represents the loopback interface).
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Using /etc/hosts on : jacko Name: jacko.bbn.hp.com Address: 15.136.123.138 Aliases: jacko Note that this command only returns the first IP-address when using /etc/hosts as the name service. The managed node uses the IP-address of the first network interface card it finds (by scanning the internal network interface list). The order of the network interfaces depends on the interface type installed on the managed node.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems /etc/hosts ---------15.136.120.169 jacko.bbn.hp.com jacko 193.1.1.1 jacko.bbn.hp.com jacko_x.25 and restart ITO. 2. In cases where it is not possible to add host name/IP-address associations (for example, in fire-wall environments), a special ITO configuration file can contain the association (this configuration file must be created manually): /etc/opt/OV/share/conf/OpC/mgmt_sv/opc.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems returns: PING 193.1.4.1: 64 byte packets ----193.1.4.1 PING Statistics---3 packets transmitted, 0 packets received, 100% packet loss after pressing Ctrl-C. This indicates that no connection was possible using address 193.1.4.1. If the following scenario were to exist: Management Server lan1:15.136.120.2 arthur.bbn.hp.com lan0:194.1.1.1 Managed Node jacko.bbn.hp.com lan0:193.1.1.1 lan1:15.136.120.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems ################################################################# # File: opcinfo # Description: ITO Installation Information of Managed Node # Package: HP OpenView IT/Operations ################################################################# OPC_INSTALLED_VERSION A.05.00 OPC_MGMT_SERVER arthur.ashe.tennis.com OPC_INSTALLATION_TIME 10/13/98 13:37:44 OPC_RESOLVE_IP 15.111.222.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Description B No ARPA-to-NS node-name mapping is defined in /etc/opt/OV/share/conf/OpC/mgmt_sv/vt3k.conf and the NS node for the management server is not set, or it belongs to a different domain. Solution B1 Specify a corresponding mapping in vt3k.conf. (See the section “ARPA-to-NS Node-Name Mapping for MPE/iX” on page 128).
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem MPE/iX “request replies” from the ITO management server via X-redirection from MPE/iX managed nodes can fail. Description Starting an X-application from the application desktop (or as an operator-initiated action etc.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem ITO does not work as expected after an OS upgrade. Description Updating the operating system might mean that ITO no longer works as expected. For example, system boot/shutdown files have been modified; the file system layout or the command paths could have been changed; the shared libraries have been modified, etc.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem After an application upgrade, ITO no longer works as expected. Description After the upgrade of installed applications on the managed node, logfile encapsulation, MPE/iX console message interception, and so forth, appear not to work properly. This could be caused by different message patterns, localized logfiles, different path and/or file name of the logfiles, and so forth.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Solution C If you change the password on the managed nodes for default users of an application startup from the ITO Application Desktop, you must adapt the password in the ITO configuration, too. This is only necessary if the application is configured as having a Window (Input/Output), and if no appropriate .rhosts or /etc/hosts.equiv entry is available.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Description C The command or application path is different; for example, /usr/bin/ps (HP-UX 10.x, 11.x). Solution C. Use (hard or symbolic) links or copy the command or application to the appropriate destination. Write a script/program which calls the right command or application, depending on the platform. For example: my_ps.
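A minimal sketch of such a wrapper script follows; the script name, the uname check, and the fallback branch are illustrative assumptions:
  #!/bin/sh
  # my_ps.sh -- call the platform's ps command with a suitable path
  case `uname -s` in
      HP-UX) /usr/bin/ps -ef "$@" ;;
      *)     ps -ef "$@" ;;
  esac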
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem ITO Agents are corrupt, even after running the opcagt -stop; opcagt -start sequence. Description opcagt -status reports that not all ITO agents are up and running; automatic or operator-initiated actions and scheduled actions are not executed, and applications are not started as requested. Actions are not acknowledged, even after a successful run.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Table 10-4 Clean-up and Restart of ITO Agents on HP-UX 10.x/11.x Managed Nodes Task HP-UX 10.x and 11.x Managed Nodes 1. Stop ITO agents, including the control agent: /opt/OV/bin/OpC/opcagt -kill 2. Check that all ITO agents are stopped. a /opt/OV/bin/OpC/opcagt -status 3. Check the list of agent PIDs given by the opcagt -status command. If any PIDs are not stopped, use the kill (1M) command.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Table 10-5 Clean-up and Restart ITO Agents on Other SVR4 Managed Nodes Task 1. Stop ITO agents, including the control agent. Solaris, NCR UNIX SVR4, SCO OpenServer, SCO UnixWare, SGI IRIX, Olivetti UNIX, Pyramid DataCenter/OSx, Digital UNIX, andOS/2 /opt/OV/bin/OpC/opcagt -kill OS/2: use the GUI Digital UNIX: /usr/opt/OV/bin/OpC/opcagt -kill 2. Check that all ITO agents are stopped. opcagt -status 3.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Table 10-6 Clean-up and Restart of ITO Agents on AIX and MPE/iX Managed Nodes Task AIX MPE/iX 1. Stop ITO agents, including the control agent. /usr/lpp/OV/OpC/opcagt -kill opcagt.bin.ovopc -kill 2. Check that all ITO agents are stopped. /usr/lpp/OV/OpC/opcagt -status opcagt.bin.ovopc -status 3. Check again that all ITO agents are stopped using the list of agent PIDs given by the opcagt-status command.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Automatic or operator-initiated action, scheduled action, command broadcast, or application hangs and does not terminate. Description Due to programming errors or requests for user input, automatic and/or operator-initiated actions, or scheduled actions can hang and not finish. Solution Determine the process ID of the endlessly running action using the ps command.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem User’s profile is not executed as expected when broadcasting a command or starting an application. Description The profile of the executing user is executed before starting the command/application on the managed node.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems MPE/iX Managed Node Runtime Problems and Solutions Problem Extremely long time for command broadcasting and application startup. Description The command broadcasting and application startup are done within jobs. When the job limit is reached, the jobs are queued. Non-ITO jobs also increase the number of running/pending jobs.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem ITO agents stop processing due to too many files being open. Description If the permanent file LASTUUID.PUB.HPNCS has not been created, NCS creates a temporary one, which it does not close. Over a period of time, it tries to re-create this file many times. As a result, the number of open file descriptors increases, and the system table used to administer open files becomes overloaded.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Command broadcast or application startup does not terminate. Description Command broadcasting and application startup are done within jobs named OPCAAJOB. If such a job does not terminate, perform the following solution. Solution 1. Check whether a job OPCAAJOB is present; if so, get the job number(s): showjob 2.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem Automatic or operator-initiated action, or scheduled action does not terminate. Description Due to an endless-loop programming error, the automatic or operator-initiated action, or scheduled action does not terminate. Solution Find the programming error in your scripts/programs and restart the ITO agents after you have fixed the problem. opcagt.bin.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Problem ITO agent processes cannot be stopped. Description If you receive a message that some ITO agent processes could not be stopped, or if you find that agent processes are still running although the control agent exited, stop all running ITO agent processes. Solution Stop the processes by executing \opt\OV\bin\OpC\utils\opckill.exe.
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems Accessing the MIB of the Managed Node To grant ITO access to the MIB of the managed node, you must ensure that the get-community-name is set in one of the following ways: ❏ Edit the opcinfo file on the managed node (see Table 10-3 on page 399 for the location of the opcinfo file on all platforms), and add the following line: SNMP_COMMUNITY <community name> where <community name> is the community for which the snmpd is configured.
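For example, if the SNMP daemon on the managed node is configured with the community name public, the opcinfo entry would look like the following; the community name is site specific and shown here only as an illustration.

    SNMP_COMMUNITY public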
Tuning, Troubleshooting, Security, and Maintenance Troubleshooting: Specific Problems NFS Problems and Solutions Problem The logfile encapsulator reports the warning message: Unable to get status of file <filename>. Stale NFS handle. Description The logfile encapsulator can sometimes perceive logfiles set up on NFS as being open, even after they have been removed. This causes an attempted access to fail. Solution Change the read policy so that the logfile is closed between reads.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses Changing Hostnames/IP Addresses Frequently, a node has several IP addresses and hostnames. You may need to change an IP address if a node becomes a member of another subnet.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses Before You Change the Hostname/IP Address of the Management Server 1. Stop all ITO processes on your management server. This includes the manager, agent and user-interface processes running on this system. a. Stop all running ITO user interfaces by selecting Map:Exit. b. When changing the IP address of the management server, stop the ITO agents on your management server. /opt/OV/bin/OpC/opcagt -kill c.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses Where: -force The name service is not consulted, and the database is not checked for duplicate node names. -label <label>
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses 7. Reconfigure the ITO management server system with the new hostname/IP address. For details, see the HP-UX System Manager’s Guide. To change the host name permanently, run the special initialization script /sbin/set_parms. Switching to a name server environment: If moving from a “non-name server” environment to a “name server” environment, make sure the name server has the new hostname/IP address available. 8.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses 7. Restart the netmon process: /opt/OV/bin/ovstart netmon 8. Use the ping command to update OpenView’s “knowledge” of the changed hostname: ping <new hostname> 9. Update the OpenView Topology Database with: /opt/OV/bin/nmdemandpoll 10. Make sure the database is running.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses If you are not operating in a multi-management server environment (see opcmom(4)), perform the following steps on all managed nodes that are configured in the Node Bank and which are running an ITO agent: a. Shut down the ITO agents: /opt/OV/bin/OpC/opcagt -kill b. Update the agent’s opcinfo file with the ITO management server’s new hostname.
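A sketch of steps a and b on an HP-UX managed node follows; the opcinfo path shown is typical for HP-UX 10.x/11.x (see Table 10-3 for your platform), and the entry name OPC_MGMT_SERVER and the hostname newsrv.example.com are assumptions to be checked against the opcinfo file on your system.

    /opt/OV/bin/OpC/opcagt -kill             # a. shut down the ITO agents
    vi /opt/OV/bin/OpC/install/opcinfo       # b. edit the opcinfo file and set, for example:
                                             #    OPC_MGMT_SERVER newsrv.example.com   (assumed entry name)
    /opt/OV/bin/OpC/opcagt -start            # restart the agents afterwards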
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses Changing the Hostname/IP Address of a Managed Node NOTE When changing the hostname of a managed node, you must de-install the ITO agent software from that node before proceeding with the following step. Re-install the ITO agent software when you have finished with this task.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses -force The name service is not consulted, and the database is not checked for duplicate node names. -label <label> Modifies the label of the node to <label>. The new label is displayed in the Node Bank. <old IP address> The IP address of the old node. <new IP address> The IP address of the new (renamed) node. <old node name> The node name of the old node. <new node name> The node name of the new (renamed) node. 4.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses 6. On your management server for all ITO managed nodes whose hostname/IP-address you want to change: a. Use the ping command to update OpenView’s “knowledge” of the changed hostname and IP address: ping <new hostname> b. Update the OpenView Topology Database with: /opt/OV/bin/nmdemandpoll 7. Resynchronize the ITO server processes and GUIs: a.
Tuning, Troubleshooting, Security, and Maintenance Changing Hostnames/IP Addresses 4. Select the managed nodes in the Node Bank window, and click [Get Map Selections] in the Distribute ITO Software and Configuration window. 5. Click [OK]. If you are operating ITO in a distributed management server environment (Manager-of-Manager environment) If you are running ITO in a multi-management server environment (refer to opcmom(4) for more details), perform the following steps: 1.
Tuning, Troubleshooting, Security, and Maintenance ITO Security ITO Security The steps that an administrator needs to carry out to improve system security involve much more than configuring software: in general terms, the administrator needs to look at system security first and then investigate problems that relate to network security. Finally, the administrator needs to investigate the security implications and possibilities that are addressed during the configuration of ITO itself.
Tuning, Troubleshooting, Security, and Maintenance ITO Security General Security Guidelines in ITO A C2-secure or “trusted” system uses a number of techniques to improve security at system level.
Tuning, Troubleshooting, Security, and Maintenance ITO Security Network Security Network security involves the protection of data that is exchanged between the management server and the managed node and is primarily DCE related. ITO addresses the problem of network security by controlling the authenticity of the parties, in this case the RPC client and server, before granting a connection and ensuring the integrity of data passed over the network during the connection.
Tuning, Troubleshooting, Security, and Maintenance ITO Security In addition, all participating nodes must be members of DCE cells that are configured to trust each other. ITO does not require specific DCE configuration. An installed DCE runtime (client part), including shared libraries and the RPC daemon (rpcd/dced), is sufficient. However, these components are necessary on all ITO managed nodes running a DCE ITO agent. The client components also include the necessary client parts for authenticated RPC.
Tuning, Troubleshooting, Security, and Maintenance ITO Security Configuring DCE Nodes to use Authenticated RPCs The DCE names and accounts required by ITO to use authenticated RPCs are set up using opc_sec_register_svr.sh and opc_sec_register.sh. You need to run opc_sec_register_svr.sh once on the ITO management server, and opc_sec_register.sh for each managed node which requires the ITO accounts, and only after you have configured the node (using dce_config) as part of a wider DCE environment.
Tuning, Troubleshooting, Security, and Maintenance ITO Security Or locally on each of the managed nodes: /opt/OV/bin/OpC/install/opc_sec_register.sh These steps can be repeated if necessary. NOTE To undo any of the steps you have carried out using the script opc_sec_register_svr.sh or opc_sec_register.sh , use the -remove option. 5. Use the ITO GUI to select the appropriate security level for the managed node or management server using DCE RPCs. By default, the security level is set to “No Security”.
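For example, to undo the registration on a managed node and then set it up again, you could run the script locally on that node as follows (the path is the one shown above).

    /opt/OV/bin/OpC/install/opc_sec_register.sh -remove   # undo the DCE names and accounts created earlier
    /opt/OV/bin/OpC/install/opc_sec_register.sh           # register the managed node again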
Tuning, Troubleshooting, Security, and Maintenance ITO Security Authentication DCE’s security mechanism allows you to protect the communication between server and managed node using DCE RPC. An important step in the authentication procedure that a DCE RPC process goes through is obtaining a login context. A secure RPC process has a login context, which it either inherits from its parent process or establishes itself.
Tuning, Troubleshooting, Security, and Maintenance ITO Security 3. The RPC client sends the RPC request 4.
Tuning, Troubleshooting, Security, and Maintenance ITO Security example, if the ITO management server ‘garlic.spices.com’ and the managed node ‘basil.herbs.com’ are configured to run with authenticated RPCs the following principals will be created: ❏ opc/opc-mgr/garlic.spices.com ❏ opc/opc-agt/basil.herbs.com In DCE, a name or principal (garlic.spices.com) belongs to a group (opc-mgr), which in turn belongs to an organization (opc).
Tuning, Troubleshooting, Security, and Maintenance ITO Security ❏ Packet-filtering firewalls may lock a range of ports to inbound or outbound traffic. If this is true, then: ❏ ITO’s managed nodes and management server must be configured to restrict all RPC connections to the same range of port numbers as those specified at the firewall. A connection between an RPC server and an RPC client needs at least two ports: one on the server machine and one on the client.
Tuning, Troubleshooting, Security, and Maintenance ITO Security ITO assigns port numbers dynamically to those processes that are granted an RPC connection. The port numbers are configurable and are checked against the range defined in the GUI each time an RPC server registers itself.
Tuning, Troubleshooting, Security, and Maintenance ITO Security defined port range on the management server. Subsequently, you will need to restart the server processes and increase the port range. On the managed node, you need to delete the variable OPC_COMM_PORT_RANGE in the nodeinfo file and restart the agents. You can then configure a bigger range for this managed node and, after a successful distribution, restart the agent.
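As an illustration, a nodeinfo entry restricting the agent to twenty ports might look like the following; the exact value syntax and the range 13000-13019 are assumptions and must be verified against your ITO version and firewall configuration before the configuration is distributed.

    OPC_COMM_PORT_RANGE 13000-13019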
Tuning, Troubleshooting, Security, and Maintenance ITO Security Table 10-10 Ports Required by the ITO Management Server (ITO component and server process, ports required, port type/no): rpc-server (rpcd): 1 port, port 135; message receiver (opcmsgrd): 1 port; distribution manager (opcdistm): 1 port; display manager (opcdispm): 1 port; socket server (opctss): 1-10 ports; request sender (ovoareqsdr): 2*n ports; action manager (opcactm): 2 ports, local; forward manager (opcforwm): 2*m+2 ports, local; opcuiwww: 1 port, port 2531; opcragt: 2*n ports; JAVA GUI
Tuning, Troubleshooting, Security, and Maintenance ITO Security NOTE You need to stop and restart both the management server and the agent processes in order to enable any changes to (or initial configuration of) the port ranges on the ITO management server and the managed node. It is important to remember that the port range applies to both the TCP and UDP protocols.
Tuning, Troubleshooting, Security, and Maintenance ITO Security NOTE Although the allowed port range of given managed nodes may differ if the managed nodes are connected to the ITO management server through a different router, all managed nodes that use the same router must use the same port range.
Tuning, Troubleshooting, Security, and Maintenance ITO Security ITO Security The administrator needs to investigate the security implications and possibilities that are addressed during the configuration of ITO itself. For example, managed nodes will only allow those management servers that they recognize as action-allowed managers to execute operator-initiated actions. ITO security looks at the security-related aspects of application set up and execution, operator-initiated actions, and so on.
Tuning, Troubleshooting, Security, and Maintenance ITO Security File Access and Permissions When a user starts an ITO operator GUI session, the working directory is defined by environment variable $OPC_HOME (if set) or $HOME. If neither $OPC_HOME nor $HOME is set, then /tmp is the default working directory. For more information on common ITO variables, see “Variables” on page 291.
Tuning, Troubleshooting, Security, and Maintenance ITO Security It is neither necessary nor specifically recommended to start the Motif administrator GUI as a UNIX user with root privileges (user ID 0). In addition, when saving the output of database reports on the ITO configuration, the owner of the files that are written is the UNIX user who started ITO. Otherwise, the behavior of the administrator GUI is the same as the operator GUI.
Tuning, Troubleshooting, Security, and Maintenance ITO Security Database Security Security of the database is controlled by the operating system and by the database itself. Users must have an OS logon for either remote or local access to the data. Once a user is logged on, security mechanisms of the database control access to the database and tables.
Tuning, Troubleshooting, Security, and Maintenance ITO Security • an appropriate .rhosts entry or /etc/hosts.equiv functionality must be available, or • the password must be specified interactively. For more information on user accounts, access to files, and general file permissions, see “File Access and Permissions” on page 451. Passwords on DCE Managed Nodes When executed on the management server with the -server option, the ITO utility opc_sec_register_svr.sh
Tuning, Troubleshooting, Security, and Maintenance ITO Security Passwords on UNIX Managed Nodes The ITO default operator opc_op cannot log in to the system via login, telnet, and so on, due to a * entry in the /etc/passwd file. Furthermore, .rhosts entries are not provided. If you want to provide a virtual terminal or application startup (requiring a Window (Input/Output)) for the ITO default operator, set the password or provide .rhosts or /etc/hosts.equiv functionality.
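For example, to allow such an application startup for opc_op without setting a password, an entry of the following form could be placed in the .rhosts file of opc_op on the managed node; the management-server name is illustrative, and whether a user field is required depends on the account under which the connection is initiated.

    mgmtsrv.example.com opc_op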
Tuning, Troubleshooting, Security, and Maintenance ITO Security Passwords on Novell NetWare Managed Nodes The password for the default operator opc_op is not assigned during the installation of the agent software. For security reasons, it is strongly recommended to assign a password to opc_op, using NetWare tools, after the agent software is installed.
Tuning, Troubleshooting, Security, and Maintenance Auditing Auditing ITO distinguishes between different modes and levels of audit control. The mode determines who is permitted to change the level of auditing; the level determines what kind of auditing information is being collected. Your company policy determines which auditing mode, normal or enhanced, is used. Normal audit control is the default mode after installation.
Tuning, Troubleshooting, Security, and Maintenance Auditing Table 10-11 Audit Areas of the Administrator Audit Level Administrator Level Audit Area GUIa APIb CLIc ITO User • logon ✔ ✔ • logoff ✔ ✔ • change password ✔ ✔ • start ✔ ✔ • add/modify/ delete ✔ ✔ • add/modify/delete ✔ ✔ ✔ • add/modify/delete automatic and operator-initiated action ✔ ✔ ✔ • add/modify/delete condition ✔ ✔ ✔ • configuration ✔ ✔ • distribution of actions, monitor, and commands ✔ ✔ • changes to node
Tuning, Troubleshooting, Security, and Maintenance Auditing Administrator Level Audit Area GUIa Database Maintenance ✔ Trouble Ticket ✔ Notification ✔ APIb CLIc a. ITO creates an audit entry when the action is carried out using the GUI. b. ITO creates an audit entry when the action is carried out using an API. Note that an empty cell in this column indicates only that no audit information is collected; it does not mean that no APIs are available. c.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance System Maintenance You perform system maintenance on both the management server and on managed nodes. Management server maintenance is split into the following areas: ❏ System backups ❏ Database ❏ HP OpenView platform ❏ ITO directories and files You can configure scheduled actions to help you with routine system-maintenance tasks. For more information on how to schedule actions, see the HP ITO Administrator’s Guide to Online Information.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance archived. For more information, see the appropriate Oracle documentation. For information on how to set up Archive-log mode in ITO, see “Maintaining the Database” on page 468 and the section on database tasks in the HP ITO Administrator’s Guide to Online Information.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance If you are considering using the automated method to back up your data, you should weigh the following advantages and disadvantages: • There is no need to exit the ITO GUI, although OVW actions, for example starting applications in the Application Desktop window, are not possible for a short time. • ITO server processes, the ITO Operator Web GUI, trouble ticket, and notification services remain fully operational.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance 2. Shut down the database. 3. Set the archive-log parameters in the init.ora file: $ORACLE_HOME/dbs/init${ORACLE_SID}.ora a. Uncomment the following line to start the archive process: log_archive_start = true b.
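As an illustration only, a typical set of archive-log entries in init.ora looks like the following; the destination directory and the file-name format are examples and must match your own archive-log setup, as described in the Oracle documentation.

    log_archive_start = true                            # start the ARCH background process
    log_archive_dest = /u01/oradata/openview/archive    # example archive destination
    log_archive_format = "T%TS%S.ARC"                   # example archive file-name format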
Tuning, Troubleshooting, Security, and Maintenance System Maintenance where <operator> is the name of the operator you want to receive the message, and <text> is the text of the message you want the operator to see. If the -user option is not specified, all operators receive the message. The ovbackup.ovpl Command. The automated backup command ovbackup.ovpl pauses running processes and flushes their data to disk before backing up the NNM databases and the data of integrated applications.
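For illustration, warning the operators and then starting the automated backup could look like this; the command paths are assumptions based on the usual management-server layout, and the available ovbackup.ovpl options vary with the NNM version, so check the delivered documentation.

    /opt/OV/bin/OpC/opcwall "Automated backup starts in 5 minutes"   # warn all operators
    /opt/OV/bin/ovbackup.ovpl                                        # run the automated backup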
Tuning, Troubleshooting, Security, and Maintenance System Maintenance logs not moved by ito_oracle.sh and copies the ITO configuration in the file system that is not backed up by nnm_checkpoint.ovpl. The NNM script nnm_checkpoint.ovpl backs up all operational NNM databases and also backs up the directory $OV_CONF, which includes some ITO configuration files, the NNM database (flat) files, and the NNM configuration files. 4. Call ovresume to resume operation of NNM processes. 5.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance Run all of the restore scripts found in the directory $OV_CONF/ovbackup/restore/operational/, including ito_restore.sh and nnm_restore.ovpl. The ito_restore.sh script restores the Oracle database and asks you to choose between the following restore options: a. to the state of the last backup b. to the most recent state (a roll forward is done based on the off-line redo logs from the backup and the off-line redo logs on the system) 3.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance • Use a full offline backup that was taken with opc_backup with the full option • Restore a full offline backup of the complete system Restoring the database to its state at the time of the last backup requires only data contained in the backup. This means that the restore will work even if you have to re-install ITO. However, the restore is incomplete from an Oracle point of view, since it is not done to the latest state.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance processed only when the ITO processes are next restarted. If corrupt queue files prevent the server processes from being started, remove the queue files. 1. Stop all ITO server processes: /opt/OV/bin/ovstop 2. Remove selected temporary files, or all of them: rm -f /var/opt/OV/share/tmp/OpC/mgmt_sv/* 3.
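Put together with the restart that typically follows, the clean-up looks like the sketch below; the restart command is an assumption (a plain ovstart restarts all OpenView processes), and you can remove individual queue files selectively instead of clearing the whole directory.

    /opt/OV/bin/ovstop                             # 1. stop all ITO server processes
    rm -f /var/opt/OV/share/tmp/OpC/mgmt_sv/*      # 2. remove the temporary/queue files
    /opt/OV/bin/ovstart                            # 3. restart the server processes (assumed follow-up step)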
Tuning, Troubleshooting, Security, and Maintenance System Maintenance ❏ The ITO database files automatically consume the extra disk space required to cope with any growth. If a disk runs out of space, you can use other disks to add additional files for a tablespace. See the Oracle documentation for more information. ❏ Every time a user runs the command connect internal, Oracle adds an audit file to the directory $ORACLE_HOME/rdbms/audit.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance ❏ /etc/csh.login ❏ /etc/rc.config.d/ovoracle ❏ /etc/opt/OV/share/conf/ovdbconf You will need to change the database release entry and check the .profile and .cshrc files of the users that require access to the database, for example, oracle, root, and opc_op. ❏ /etc/tnsnames.ora (if SQL*Net is used) ❏ /etc/listener.ora (if SQL*Net is used) ❏ /etc/sqlnet.ora (if SQL*Net is used) NOTE Upgrading the Oracle database version (7.3.4 to 8.0.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance 2. Shut down the database. 3. Move the selected control file(s) to a directory on the other disk, for example from disk /u01 to disk /u02: mv /u01/oradata/openview/control03.ctl \ /u02/oradata/openview/control03.ctl 4. Modify the control file name(s) in $ORACLE_HOME/dbs/init${ORACLE_SID}.ora, for example, from: control_files = (/u01/oradata/openview/control01.ctl, /u01/oradata/openview/control02.ctl, /u01/oradata/openview/control03.ctl) to: control_files = (/u01/oradata/openview/control01.ctl, /u01/oradata/openview/control02.ctl, /u02/oradata/openview/control03.ctl)
Tuning, Troubleshooting, Security, and Maintenance System Maintenance exit Maintaining the HP OpenView Platform ❏ Erase the trap daemon logfile /var/opt/OV/log/trapd.log if you no longer need the entries. A large trapd.log can reduce the performance of ITO. A backup file /var/opt/OV/log/trapd.log.old is provided. For detailed information about system maintenance in HP OpenView, see the HP OpenView Network Node Manager Administrator’s Reference.
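One way to trim the logfile without losing the most recent entries is to archive it first and then empty it in place; this is only a sketch, and it assumes that the trap daemon keeps writing to the same file name afterwards.

    cd /var/opt/OV/log
    cp trapd.log trapd.log.old     # keep a backup of the current entries
    > trapd.log                    # empty the logfile in place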
Tuning, Troubleshooting, Security, and Maintenance System Maintenance On ITO Managed Nodes You should periodically backup, and then erase, local ITO logfiles (and their backups). ITO uses 90% of the specified log directory size for local message logging and 10% for error/warning logging. ITO also uses an automatic backup mechanism for the logfiles (four on UNIX systems, nine on MPE/iX).
Tuning, Troubleshooting, Security, and Maintenance System Maintenance Table 10-12 Managed Node Directories Containing Runtime Data Operating System on the Managed Node Directories Containing Runtime Data AIX /var/lpp/OV/tmp/OpC /var/lpp/OV/tmp/OpC/bin /var/lpp/OV/tmp/OpC/conf DEC Alpha NT \usr\OV\tmp\OpC\ \usr\OV\tmp\OpC\bin\alpha \usr\OV\tmp\OpC\conf\ HP-UX 10.x / 11.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance Table 10-13 Local Logfiles on HP-UX 10.x, 11.x, and Windows NT Managed Nodes Default logfile path: Windows NT /usr/OV/log/OpC/; HP-UX 10.x and 11.x /var/opt/OV/log/OpC ITO errors/warnings: opcerror, opcerro(1-3) (both platforms) ITO messages: opcmsglg, opcmsgl(1-3) (both platforms) AIX and MPE/iX Managed Nodes The following table describes where local logfiles reside on managed nodes running AIX and MPE/iX.
Tuning, Troubleshooting, Security, and Maintenance System Maintenance Table 10-15 Local Logfiles on Other Managed Nodes Digital UNIX, DYNIX/ptx, NCR UNIX SVR4, Olivetti UNIX, Pyramid DataCenter/OSx, OS/2, SCO OpenServer, SCO UnixWare, SGI IRIX, SINIX/Reliant, and Solaris Logfile Default Logfile path /var/opt/OV/log/OpC ITO errors/warnings opcerror, opcerro(1-3) ITO messages opcmsglg, opcmsg (1-3) 476 Chapter 10
Tuning, Troubleshooting, Security, and Maintenance License Maintenance License Maintenance ITO uses the OVKey license mechanism for the installation and maintenance of the product licenses. The OVKey license technology is based on node-locked licenses, with license passwords kept in a license file rather than on a central license server. One clear and significant advantage of this approach is that you do not need to set up a license server to handle the licenses.
Tuning, Troubleshooting, Security, and Maintenance License Maintenance Table 10-16 License Types for ITO A.05.00 (Management Stations) License Type and Description: ITO Management Server: ITO license with 2 users. Development Kit: Limited management server license with 5 nodes and 1 user; NNM can manage a maximum of 25 objects with this license. Instant-On (a): Same as the ITO management server license; runtime = 120 days. Emergency (a): Same as the ITO management server license.
Tuning, Troubleshooting, Security, and Maintenance License Maintenance • check whether the user has enough licenses for his environment The opclic command accepts the following parameters and usage: opclic { -add [-force] } | { -list } | { -delete } | { -report } | { -help } For more information on what the various opclic parameters do, see Table 10-17 on page 479.
Tuning, Troubleshooting, Security, and Maintenance License Maintenance Command -line Option delete Description Notes delete a specified license password • An ITO management server license may not be removed with the delete option: it can only be removed or replaced with: -add -force report list details of the installed licenses • ITO management server license type: start/end time • ITO managed node licenses [#total #used #free ] • ITO user licenses [#total] • warnin
A ITO Managed Node APIs and Libraries 481
ITO Managed Node APIs and Libraries This chapter provides information about: ❏ ITO APIs on Managed Nodes ❏ ITO APIs for Novell NetWare Managed Nodes ❏ ITO Managed Node Libraries ❏ Include Files on all Managed Nodes ❏ Managed Node Makefiles 482 Appendix A
ITO Managed Node APIs and Libraries ITO APIs on Managed Nodes ITO APIs on Managed Nodes Table A-1 API ITO APIs on Managed Nodes Command Description n/a opcmack(1) Acknowledges an ITO message received from the message agent on the managed node and sent to the appropriate management server. opcmon(3) opcmon(1) Feed the current value of a monitored object into the ITO monitoring agent on the local managed node.
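For illustration, the command-line counterparts listed above might be called as follows on a UNIX managed node; the monitor name disk_util, the value, and the message ID are placeholders, and the exact argument syntax is described on the opcmon(1) and opcmack(1) manpages.

    opcmon disk_util=87                             # feed a value into the ITO monitoring agent
    opcmack 6a7b8c9d-1234-71d6-1f4a-0f887a7c0000    # acknowledge a message by its message ID (placeholder ID)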
ITO Managed Node APIs and Libraries ITO APIs for Novell NetWare Managed Nodes ITO APIs for Novell NetWare Managed Nodes A set of ITO agent APIs is provided for Novell NetWare agents. These APIs provide inter-process communication between ITO agents and the custom NLMs; in particular, the parent/child relationship. See Table A-2 on page 484 for more information about these APIs.
ITO Managed Node APIs and Libraries ITO APIs for Novell NetWare Managed Nodes An additional example is provided in the following file on the management server: /opt/OV/OpC/examples/progs/nwopcnlm.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO Managed Node Libraries Customer applications must be linked to ITO using the libraries and link and compile options given in Table A-3 on page 486. Integration is only supported if this is the case. NOTE ITO C functions are available in a shared library. The related definitions and return values are defined in the ITO include file, opcapi.h.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO Version DCE HP-UX 10.01, 10.10, 10.20 Library ITO A.03.xx libopc_r.sl ITO A.04.xx ITO A.05.00 libopc_r.sl libopc_r.sl (libopc.sl —> libopc_r.sl) (libopc.sl —> libopc_r.sl) Libraries linked to the ITO library. /usr/lib/libdce.1 /usr/lib/libdce.1 /usr/lib/libdce.1 /usr/lib/libc.1 /usr/lib/libc.1 /usr/lib/libc.1 Link and compile options -lopc_r -ldce -lc_r -lopc_r (-ldce -lc_r) -lopc_r Description libopc_r.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO Version libopc78_r.sl ITO A.04.xx libopc_r.sl ITO A.05.00 n/a DCE (libopc78_r.sl —> libopc_r.sl) Libraries linked to the ITO library. /usr/lib/libdce.1 /usr/lib/libdce.1 /usr/lib/libc_r.1 /usr/lib/libc_r.1 Link and compile options -D_REENTRANT -D_REENTRANT -lopc78_r -ldce -lc_r -lopc_r -ldce -lc_r Description This library is not exchangeable with the NCS version. Between ITO A.03.xx and ITO A.04.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO Version libopc.so NCS Libraries linked to the ITO library. ITO A.04.xx ITO A.05.00 libopc.so libopc.so libov.a and libovutil.a are statically linked into libopc.so libov.a and libovutil.a are statically linked into libopc.so /usr/lib/libw.so /usr/lib/libw.so /usr/lib/libnck.a /usr/lib/libnck.a /usr/lib/libsocket.so /usr/lib/libgcc.a /usr/lib/libnsl.so /usr/lib/libsocket.so /usr/lib/libgcc.a /usr/lib/libnsl.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO Version Library ITO A.03.xx libopc_r.o ITO A.04.xx libopc_r.a (AIX 4.x) ITO A.05.00 libopc_r.a libopc_r.o (AIX 3.2) Libraries linked to the ITO library. AIX 3.2: /usr/lib/libdce.a /usr/lib/libpthreads.a /usr/lib/libc_r.a /usr/lpp/OV/lib/libnsp. a /usr/lib/libdce.a /usr/lib/libiconv.a /usr/lib/libiconv.a /usr/lpp/OV/lib/libnsp. a /usr/lib/libdce.a DCE AIX 3.2, 4.1, 4.2, 4.3 AIX 4.x: /usr/lib/libiconv.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO A.04.xx ITO A.05.00 libopc.so libopc.so libopc.so Libraries linked to the ITO library. /usr/lib/libnck.a /usr/lib/libnck.a /usr/lib/libnck.a /usr/lib/libsocket.so /usr/lib/libsocket.so /usr/lib/libsocket.so /usr/lib/libnsl.a /usr/lib/libnsl.a /usr/lib/libnsl.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO A.03.xx ITO A.04.xx ITO A.05.00 n/a libopc_r.so libopc_r.so Libraries linked to the ITO library. n/a thr_cc is used which comes with its own libraries thr_cc is used which comes with its own libraries Link and compile options n/a -lopc_r -lnsp -ldce -lsocket_r -lopc_r -lnsp -ldce -lsocket_r -lresolv_r -lm_r -lc -lresolv_r -lm_r -lc -lnsl_r_i Description n/a Available as patch PHSS_13598. n/a Library libopc.so libopc.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO A.03.xx ITO A.05.00 n/a libopc_r.so libopc_r.so Libraries linked to the ITO library. n/a /usr/lib/libdce.so /usr/lib/libdce.so /usr/lib/libsocket.so /usr/lib/libsocket.so /usr/lib/libnsl.so /usr/lib/libnsl.so /usr/css/lib/libgen.a /usr/css/lib/libgen.a DCE NCS ITO A.04.xx Library Link and compile options n/a -lopc_r -lnsp -lsocket -lnsl -lopc_r -lnsp -lsocket -lnsl Description n/a n/a n/a Library libopc.a libopc.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO A.03.xx ITO A.04.xx ITO A.05.00 Library n/a libopc.so n/a Libraries linked to the ITO library. n/a /usr/lib/libnck.a n/a /usr/lib/libc.a /usr/shlib/libiconv.so DCE NCS /usr/shlib/libcxx.so DCE Intel on Windows NT 3..51, 4.0 DEC Alpha on Windows NT 3..51, 4.0 DEC Alpha Digital UNIX OSF/1 3.2, 4.0, 4.2, 5.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ITO A.03.xx ITO A.04.xx ITO A.05.00 Library n/a opcpblib.lib opcpblib.lib Libraries linked to the ITO library. n/a DCEOS2.LIB DCEOS2.LIB SO32DLL.LIB SO32DLL.LIB TCP32DLL.LIB TCP32DLL.LIB DDE4MBS.LIB DDE4MBS.LIB OS2386.LIB OS2386.LIB OPCNSP.LIB OPCNSP.LIB OPCMEM.LIB OPCMEM.LIB *.LIB files reference some DLLs. *.LIB files reference some DLLs. DCE Intel on OS/2 Warp 3.0, 4.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries Include Files on all Managed Nodes See “Libraries for ITO Integrations” on page 31 for important information about platforms that support both the NCS and the DCE ITO agent. NOTE Table A-4 on page 496 gives the location of the ITO include files on all managed node platforms. Table A-4 ITO Include Files Platform OS Include File HP 9000/700 HP 9000/800 HP-UX 10.x and 11.x /opt/OV/include/opcapi.
ITO Managed Node APIs and Libraries ITO Managed Node Libraries Platform OS Include File Intel 486 or higher Novell NetWare SYS:.opt/OV/include/opcapi.h,op cnwapi.h DEC Alpha NT \usr\OV\include\opcapi.h Intel 486 or higher OS/2 \opt\OV\include\opcapi.h An example of how the API functions are used is available in the file opcapitest.
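As an illustrative example, an application that calls the managed node API could be built on an HP-UX 10.x/11.x DCE node roughly as follows; the library directory /opt/OV/lib and the source name my_app.c are assumptions, while the include-file location and the -lopc_r -ldce -lc_r options are taken from the tables above.

    cc -Ae -I/opt/OV/include -c my_app.c                        # compile against opcapi.h
    cc -o my_app my_app.o -L/opt/OV/lib -lopc_r -ldce -lc_r     # link against the shared ITO library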
ITO Managed Node APIs and Libraries ITO Managed Node Libraries ❏ Makef.solaris ❏ Makef.uxw For Windows NT use the Microsoft Developer Studio 4.2 or higher. See also /opt/OV/OpC/examples/progs/README Management Server Makefile The following makefile is available in the directory /opt/OV/OpC/examples/progs on the management server. ❏ Makef.
B Administration of MC/ServiceGuard 499
Administration of MC/ServiceGuard Overview of HP MC/ServiceGuard Overview of HP MC/ServiceGuard This appendix provides background information for system administrators working with ITO in HP MC/ServiceGuard clusters. It assumes that you are familiar both with MC/ServiceGuard and the general concepts of ITO. For more detailed information about MC/ServiceGuard, see the Managing MC/ServiceGuard manual.
Administration of MC/ServiceGuard Introducing MC/ServiceGuard Introducing MC/ServiceGuard Multi-Computer/ServiceGuard is a powerful hardware and software solution that can switch control from one ITO management server to another if a management server fails. Critical information is stored on shared disks that are also mirrored. Uninterruptible power supplies (UPS) are also included to guarantee continuous operation if a power failure occurs.
Administration of MC/ServiceGuard Introducing MC/ServiceGuard package. If a service fails while a package is running, the package may be halted and restarted on an Adoptive Node. MC/ServiceGuard Daemon A daemon that monitors the state of the SG cluster, all nodes in the cluster, all network resources, and all services. The daemon reacts to failures and transfers control of packages. It also runs the package control script.
Administration of MC/ServiceGuard How MC/ServiceGuard Works How MC/ServiceGuard Works The following examples illustrate scenarios in which MC/ServiceGuard is used to switch control of a package between different cluster servers: Example 1: MC/ServiceGuard Package Switchover The SG cluster shown in Figure B-1 represents a typical scenario: ❏ Node 1 runs the application packages A and B ❏ Node 2 runs the application package C ❏ Node 3 runs the application packages D, E, and F ❏ The nodes are connected
Administration of MC/ServiceGuard How MC/ServiceGuard Works Figure B-1 MC/ServiceGuard Package Switchover: Before the Switch Assume that node 1 fails.
Administration of MC/ServiceGuard How MC/ServiceGuard Works Figure B-2 MC/ServiceGuard Package Switchover: After the Switch Example 2: MC/ServiceGuard Local Network Switching
Administration of MC/ServiceGuard How MC/ServiceGuard Works Assume that the LAN 0 network interface card on node 2 fails: ❏ The standby LAN interface, LAN 1, takes on the identity of LAN 0 on node 2. The subnet and IP addresses are switched to the hardware path associated with LAN 1. The switch is transparent at the TCP/IP level. ❏ MC/ServiceGuard re-routes communications without having to transfer the control of packages between nodes.
Administration of MC/ServiceGuard How MC/ServiceGuard Works Figure B-5 MC/ServiceGuard Redundant Data and Heartbeat Subnets The heartbeat interval is set in the SG cluster configuration file. Heartbeat time-out is the length of time that the SG cluster will wait for a node’s heartbeat before performing a transfer of package.
Administration of MC/ServiceGuard MC/ServiceGuard and IP addresses MC/ServiceGuard and IP addresses One of the many useful features of MC/ServiceGuard is the ability to assign multiple IP addresses to a single LAN interface card. Each primary network interface card has a unique IP address. This address is fixed to the node and is not transferable to another node.
Administration of MC/ServiceGuard MC/ServiceGuard and ITO MC/ServiceGuard and ITO MC/ServiceGuard (SG) provides a mechanism to start and stop applications. This means that products running in an SG environment must provide a package containing information about how to start and/or stop the application. These packages are transferred between the SG cluster nodes if a switch-over occurs. The package is referred to as the ITO SG package in this section.
Administration of MC/ServiceGuard MC/ServiceGuard and ITO Figure B-6 ITO Management Server on MC/ServiceGuard Systems: Conceptual View To reduce the amount of data on the shared disk, only /var/opt/OV/share and /etc/opt/OV/share are installed on the shared disk.
Administration of MC/ServiceGuard Troubleshooting ITO in a ServiceGuard Environment Troubleshooting ITO in a ServiceGuard Environment This chapter describes some of the problems you might encounter when working with ITO SG packages, and provides some specific troubleshooting hints. For more general troubleshooting information, see the troubleshooting section in the Managing MC/ServiceGuard manual.
C ITO Tables and Tablespaces in the Database 513
ITO Tables and Tablespaces in the Database ITO Tables in the Database ITO Tables in the Database See the HP OpenView IT/Operations Reporting and Database Schema for detailed information about the ITO tables in the RDBMS.
ITO Tables and Tablespaces in the Database ITO Tables and Tablespace ITO Tables and Tablespace An Oracle database uses tablespaces to manage the available disk space. You can assign datafiles of a fixed size to tablespaces. The size of the various datafiles assigned to a tablespace determines the size of the tablespace. To increase the size of a tablespace, you must add a datafile of a particular size to the tablespace.
ITO Tables and Tablespaces in the Database ITO Tables and Tablespace Table C-1 ITO Tables and Tablespaces in an Oracle Database Tables/ Description opc_act_messages Tablespace OPC_1 Size SIZE 4M AUTOEXTEND ON NEXT 6M MAXSIZE 500M DEFAULT STORAGE ( INITIAL 2M NEXT 2M PCTINCREASE 0 ) opc_anno_text OPC_2 opc_annotation opc_msg_text DEFAULT STORAGE ( INITIAL 1M NEXT 1M PCTINCREASE 0 ) opc_orig_ msg_text opc_node_names SIZE 5M AUTOEXTEND ON NEXT 6M MAXSIZE 500M OPC_3 SIZE 1M AUTOEXTEND ON NEXT 1M MA
ITO Tables and Tablespaces in the Database ITO Tables and Tablespace Tables/ Description Default tablespace of user opc_op Tablespace OPC_5 Size SIZE 1M AUTOEXTEND ON NEXT 1M MAXSIZE 500M Remarks none DEFAULT STORAGE ( INITIAL 32K NEXT 32K PCTINCREASE 0 ) opc_hist_messages OPC_6 SIZE 4M AUTOEXTEND ON NEXT 1M MAXSIZE 500M DEFAULT STORAGE ( INITIAL 2M NEXT 2M PCTINCREASE 0 ) opc_hist_msg_text OPC_7 SIZE 4M AUTOEXTEND ON NEXT 1M MAXSIZE 500M DEFAULT STORAGE ( INITIAL 2M NEXT 2M PCTINCREASE 0 ) opc_h
ITO Tables and Tablespaces in the Database ITO Tables and Tablespace Tables/ Description opc_hist_ annotation Tablespace OPC_9 opc_hist_anno_ text Size SIZE 4M AUTOEXTEND ON NEXT 1M MAXSIZE 500M DEFAULT STORAGE ( INITIAL 2M NEXT 2M PCTINCREASE 0 ) Temporary data (used for sorting) OPC_TEMP SIZE 1M AUTOEXTEND ON NEXT 1M MAXSIZE 500M Remarks Tables with heavy load. Indexes not on the same disk as table, thus providing extra tablespace.
ITO Tables and Tablespaces in the Database ITO Tables and Tablespace Table C-2 Non-ITO Specific Tablespace Tables/ Description Tablespace Tablespace containing the system tables. SYSTEM Size SIZE 50M Remarks none DEFAULT STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 50 ) Temporary data. TEMP SIZE 2M AUTOEXTEND ON NEXT 1M MAXSIZE 500M none DEFAULT STORAGE ( INITIAL 100K NEXT 100K PCTINCREASE 0 ) Rollback segments (this tablespace is not ITO specific) RBS1 Tablespace for Oracle Tool Tables (e.g.
ITO Tables and Tablespaces in the Database ITO Tables and Tablespace 520 Appendix C
D ITO Man Pages Listing 521
ITO Man Pages Listing This appendix provides a list of each man page for the HP OpenView IT/Operations Developer’s Toolkit. To refer to the man pages, call them from the command line by using: man <manpagename>.
ITO Man Pages Listing Overview of ITO Man Pages Overview of ITO Man Pages You can access the following ITO man pages, either directly at the command line, or by way of the online help: Man Pages in ITO Man Page Summary call_sqlplus.sh(1) Calls SQL*Plus. inst.sh(1M) Install ITO software on managed nodes. inst_debug(5) Debug an installation of the ITO agent software. opc(1|5) Start the ITO GUI.
ITO Man Pages Listing Overview of ITO Man Pages opcadddbf(1M) Add a new datafile to an Oracle tablespace. opcagt(1M) Administer agent processes on a managed node. opcagtreg(1M) Registration tool for subagents. opcagtutil(1M) Parse the agent platform file and perform operation with extracted data. opcaudupl(1M) Upload audit data into the ITO database. opcaudwn(1M) Download audit data into the ITO database. opccfgdwn(1M) Download configuration data from the database to flat files.
ITO Man Pages Listing Overview of ITO Man Pages opcdbsetup(1M) Create the tables in the ITO database. opcdbupgr(1M) Upgrade the ITO database from a previous version to the current version of ITO. opcdcode(1M) View ITO encrypted template files. opcgetmsgids(1m) Get message IDs to an original message ID. opchbp(1M) Switch heartbeat polling of managed nodes on/off. opchistdwn(1M) Download ITO history messages to a file. opchistupl(1M) Upload history messages into ITO database.
ITO Man Pages Listing Overview of ITO Man Pages opcsvreg(1M) Registration tool for server configuration files. opcsvskm(1M) Secret-key management tool on the management server. opcsw(1M) Set the software status flag in the ITO database. opctmpldwn(1M) Download and encrypt ITO message source templates. opcupgrade(1M) Upgrade an earlier version of ITO to the current version (A.05.00). opcwall(1) Send a message to the currently logged in ITO users. ovtrap2opc(1M) Convert trapd.
ITO Man Pages Listing Overview of ITO Man Pages opc_comif_close(3) Close an instance of the communication queue interface. opc_comif_freedata(3) Free data that was allocated by opc_comif_read(). opc_comif_open(3) Open an instance of the communication queue interface. opc_comif_read(3) Read information from a queue. opc_comif_read_request(3) Read information from a queue. opc_comif_write(3) Write information into a queue. opc_comif_write_request(3) Write information into a queue.
ITO Man Pages Listing Overview of ITO Man Pages opcmsg_api(3) Functions to manage ITO messages. opcmsggrp_api(3) Functions to manage ITO message groups. opcmsgregrpcond_api(3) Functions to create and modify ITO message regroup conditions. opcnode_api(3) Functions to configure ITO managed nodes. opcnodegrp_api(3) Functions to configure ITO node groups. opcnodehier_api(3) Functions to configure ITO node hierarchies. opcprofile_api(3) Functions to configure ITO user profiles.
Master Index This index contains references to three ITO manuals. All page numbers are prefaced with a two letter abbreviation indicating the manual that contains the reference. For example, the index entry security, CG:67, AR:349, AR:365, shows that information about security can be found in the Concepts Guide on page 67, and also on pages 349 and 365 in the Administrator’s Reference.