Veritas Cluster Server Installation Guide: Linux for IBM Power, 5.0
Veritas Cluster Server Installation Guide

The software described in this book is furnished under a license agreement and may be used only in accordance with the terms of the agreement.

Product version: 5.0 RU3
Document version: 5.0RU3.0

Legal Notice
Copyright © 2009 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo, Veritas and Veritas Storage Foundation are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries.
Symantec Corporation
350 Ellis Street
Mountain View, CA 94043
http://www.symantec.
Technical Support Symantec Technical Support maintains support centers globally. Technical Support’s primary role is to respond to specific queries about product features and functionality. The Technical Support group also creates content for our online Knowledge Base. The Technical Support group works collaboratively with the other functional areas within Symantec to answer your questions in a timely fashion.
■ Version and patch level
■ Network topology
■ Router, gateway, and IP address information
■ Problem description:
■ Error messages and log files
■ Troubleshooting that was performed before contacting Symantec
■ Recent software configuration changes and network changes

Licensing and registration
If your Symantec product requires registration or a license key, access our technical support Web page at the following URL: www.symantec.
Maintenance agreement resources
If you want to contact Symantec regarding an existing maintenance agreement, please contact the maintenance agreement administration team for your region as follows:
Asia-Pacific and Japan: customercare_apac@symantec.com
Europe, Middle-East, and Africa: semea@symantec.com
North America and Latin America: supportsolutions@symantec.
Contents

Technical Support

Chapter 1  Introducing Veritas Cluster Server
    About Veritas Cluster Server
    About VCS basics
        About multiple nodes
        About shared storage
    Creating authentication broker accounts on root broker system
    Creating encrypted files for the security infrastructure
        Preparing the installation system for the security infrastructure
    Performing preinstallation tasks
    Obtaining VCS license keys
    Adding VCS users
    Configuring SMTP email notification
    Configuring SNMP trap notification
    Configuring global clusters
    Installing VCS RPMs
    Creating VCS configuration files
    Verifying LLT, GAB, and cluster operation
        Verifying LLT
        Verifying GAB
        Verifying the cluster
        Verifying the cluster nodes

Chapter 7  Adding and removing cluster nodes
    Bringing up the existing node
    Installing the VCS software manually when adding a node to a single node cluster
    Configuring LLT
    Configuring GAB when adding a node to a single node cluster
    Starting LLT and GAB
Chapter 1 Introducing Veritas Cluster Server This chapter includes the following topics: ■ About Veritas Cluster Server ■ About VCS basics ■ About VCS features ■ About VCS optional components About Veritas Cluster Server Veritas™ Cluster Server by Symantec is a high-availability solution for cluster configurations. Veritas Cluster Server (VCS) monitors systems and application services, and restarts services when hardware or software fails.
About VCS basics

Figure 1-1 illustrates a typical VCS configuration of four nodes that are connected to shared storage.

Figure 1-1  Example of a four-node VCS cluster

Client workstations receive service over the public network from applications running on VCS nodes. VCS monitors the nodes and their services.
Figure 1-2  Two examples of shared storage configurations: fully shared storage and distributed shared storage

About LLT and GAB

VCS uses two components, LLT and GAB, to share data over private networks among systems. These components provide the performance and reliability that VCS requires. LLT (Low Latency Transport) provides fast, kernel-to-kernel communications, and monitors network connections.
Figure 1-3 illustrates a two-node VCS cluster where the nodes galaxy and nebula have two private network connections.

Figure 1-3  Two Ethernet connections connecting two nodes

About preexisting network partitions

A preexisting network partition refers to a failure in the communication channels that occurs while the systems are down and VCS cannot respond.
Introducing Veritas Cluster Server About VCS features About VCS features You can use the Veritas Installation Assessment Service to assess your setup for VCS installation. See “Veritas Installation Assessment Service” on page 17. VCS offers the following features that you can configure during VCS configuration: VCS notifications See “About VCS notifications” on page 17. VCS global clusters See “About global clusters” on page 17. I/O fencing See “About I/O fencing” on page 18.
Introducing Veritas Cluster Server About VCS optional components About I/O fencing I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split brain condition. See the Veritas Cluster Server User's Guide. The fencing operation determines the following: ■ The nodes that must retain access to the shared storage ■ The nodes that must be ejected from the cluster This decision prevents possible data corruption.
Figure 1-4  Typical VCS setup with optional components: a Symantec Product Authentication Service root broker, an optional VCS Management Console management server, and VCS clusters

About Symantec Product Authentication Service (AT)

VCS uses Symantec Product Authentication Service (AT) to provide secure communication between cluster nodes and clients.
Introducing Veritas Cluster Server About VCS optional components See “Preparing to configure the clusters in secure mode” on page 29. About Cluster Manager (Java Console) Cluster Manager (Java Console) offers complete administration capabilities for your cluster. Use the different views in the Java Console to monitor clusters and VCS objects, including service groups, systems, resources, and resource types. You can perform many administrative operations using the Java Console.
Introducing Veritas Cluster Server About VCS optional components configurations for Windows, Linux, and Solaris clusters. VCS Simulator also enables creating and testing global clusters. You can administer VCS Simulator from the Java Console or from the command line.
Chapter 2 Planning to install VCS This chapter includes the following topics: ■ About planning to install VCS ■ Hardware requirements ■ Supported operating systems ■ Supported software About planning to install VCS Every node where you want to install VCS must meet the hardware and software requirements. For the latest information on updates, patches, and software issues, read the following Veritas Technical Support TechNote: http://entsupport.symantec.
Table 2-1  Hardware requirements for a VCS cluster

VCS nodes: From 1 to 32 Linux PPC systems running the supported Linux PPC operating system version.
DVD drive: One drive in a system that can communicate to all the nodes in the cluster.
Disks: Typical VCS configurations require that shared disks support the applications that migrate between systems in the cluster.
Note: If you do not have enough free space in /var, use the installvcs command with the -tmppath option. Make sure that the specified tmppath file system has the required free space.

Supported operating systems

VCS operates on the Linux operating systems and kernels distributed by Red Hat and SUSE. Table 2-3 lists the supported operating system versions for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES).
Required Linux RPMs for VCS

Make sure you have installed the following operating system-specific RPMs on the systems where you want to install or upgrade VCS. VCS supports any updates made to the following RPMs, provided the RPMs maintain ABI compatibility.

Table 2-4 lists the RPMs that VCS requires for a given Linux operating system.

Table 2-4  Required RPMs
RHEL 5: glibc-2.5-34.ppc.rpm, glibc-2.5-34.ppc64.
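As a quick sanity check before installation, a small helper can compare the required package names from Table 2-4 against the installed-package list. This is an illustrative sketch, not part of the product; the helper name is ours, and the package names you pass should come from the table for your distribution.

```shell
# check_rpms: print the install status of each required package name,
# given the list of installed packages on stdin (one name per line,
# as produced by `rpm -qa`). Illustrative helper only.
check_rpms() {
  required="$1"
  installed=$(cat)
  for pkg in $required; do
    # A required package counts as installed if any installed package
    # name starts with it (version suffixes vary between updates).
    if printf '%s\n' "$installed" | grep -q "^$pkg"; then
      echo "$pkg: installed"
    else
      echo "$pkg: MISSING"
    fi
  done
}
```

Typical use on a node: `rpm -qa | check_rpms "glibc"`, substituting the RPM names from the table for your operating system.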
Supported software

Veritas Cluster Server supports the previous and next versions of Storage Foundation to facilitate product upgrades, when available. VCS supports the following volume managers and file systems:
■ ext2, ext3, reiserfs, NFS, NFSv4, and bind on LVM2, Veritas Volume Manager (VxVM) 5.0, and raw disks.
■ Veritas Volume Manager (VxVM) with Veritas File System (VxFS)
■ VxVM: VRTSvxvm-common-5.0.33.00-RU3_SLES10, VRTSvxvm-platform-5.0.33.
Chapter 3 Preparing to install VCS This chapter includes the following topics: ■ About preparing to install VCS ■ Preparing to configure the clusters in secure mode ■ Performing preinstallation tasks About preparing to install VCS Before you perform the preinstallation tasks, make sure you reviewed the installation requirements, set up the basic hardware, and planned your VCS setup.
Preparing to install VCS Preparing to configure the clusters in secure mode ■ The system clocks of the root broker and authentication brokers must be in sync. The installvcs program provides the following configuration modes: Automatic mode The root broker system must allow rsh or ssh passwordless login to use this mode. Semi-automatic mode This mode requires encrypted files (BLOB files) from the AT administrator to configure a cluster in secure mode.
Figure 3-1  Workflow to configure a VCS cluster in secure mode

The workflow is: review AT concepts and gather the required information; install the root broker on a stable system; on the root broker system, create authentication broker identities for each node; then select a mode (automatic, semiautomatic, or manual) to configure the cluster in secure mode. For the semiautomatic mode, on the root broker system create an encrypted file (BLOB) for each node and copy the encrypted files to the installation system.
Table 3-1  Preparatory tasks to configure a cluster in secure mode

Decide one of the following configuration modes to set up a cluster in secure mode (performed by the VCS administrator):
■ Automatic mode
■ Semi-automatic mode
■ Manual mode
Install the root broker on a stable system in the enterprise (performed by the AT administrator). See "Installing the root broker for the security infrastructure" on page 33.
Preparing to install VCS Preparing to configure the clusters in secure mode Installing the root broker for the security infrastructure Install the root broker only if you plan to use AT to configure the cluster in secure mode. The root broker administrator must install and configure the root broker before you configure the Authentication Service for VCS. Symantec recommends that you install the root broker on a stable system that is outside the cluster.
Preparing to install VCS Preparing to configure the clusters in secure mode 9 Enter y when the installer prompts you to configure the Symantec Product Authentication Service. 10 Press the Enter key to start the Authentication Server processes. Do you want to start Symantec Product Authentication Service processes now? [y,n,q] y 11 Enter an encryption key. Make sure that you enter a minimum of five characters.
Preparing to install VCS Preparing to configure the clusters in secure mode ■ If the output displays the following error, then the account for the given authentication broker is not created on this root broker: "Failed To Get Attributes For Principal" Proceed to step 3. 3 Create a principal account for each authentication broker in the cluster. For example: venus> # vssat addprpl --pdrtype root --domain \ root@venus.symantecexample.
Preparing to install VCS Preparing to configure the clusters in secure mode identity The value for the authentication broker identity, which you provided to create authentication broker principal on the root broker system. This is the value for the --prplname option of the addprpl command. See “Creating authentication broker accounts on root broker system” on page 34.
Preparing to install VCS Preparing to configure the clusters in secure mode Note that for security purposes, the command to create the output file for the encrypted file deletes the input file. 5 For each node in the cluster, create the output file for the encrypted file from the root broker system using the following command. RootBroker> # vssat createpkg \ --in /path/to/blob/input/file.txt \ --out /path/to/encrypted/blob/file.
Manual mode: Do the following:
■ Copy the root_hash file that you fetched to the system from where you plan to install VCS. Note the path of the root hash file that you copied to the installation system.
■ Gather the root broker information such as name, fully qualified domain name, domain, and port from the AT administrator.
Table 3-2  Preinstallation tasks (continued)

Review basic instructions to optimize LLT media speeds. See "Optimizing LLT media speed settings on private NICs" on page 48.
Review guidelines to help you set the LLT interconnects. See "Guidelines for setting the media speed of the LLT interconnects" on page 48.
Mount the product disc. See "Mounting the product disc" on page 49.
You can install only the Symantec software products for which you have purchased a license. The enclosed software discs might include other products for which you have not purchased a license.

Setting up the private network

VCS requires you to set up a private network between the systems that form a cluster. You can use either NICs or aggregated interfaces to set up the private network. You can use network switches instead of hubs.
Figure 3-3  Private network setup with crossed links

To set up the private network

1 Install the required network interface cards (NICs). Create aggregated interfaces if you want to use them to set up the private network.
2 Connect the VCS private NICs on each system.
3 Use crossover Ethernet cables, switches, or independent hubs for each VCS communication network.
Preparing to install VCS Performing preinstallation tasks ■ 4 The systems can access the shared storage. Test the network connections. Temporarily assign network addresses and use telnet or ping to verify communications. LLT uses its own protocol, and does not use TCP/IP. So, you must ensure that the private network connections are used only for LLT communication and not for TCP/IP traffic.
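The temporary-address check described in step 4 can be scripted. The sketch below pings each peer address once; the helper name and the addresses in the usage note are illustrative, and telnet can be substituted where ICMP is filtered.

```shell
# check_links: report whether each temporarily assigned private-link
# address answers a single ping. Illustrative helper for the
# network-connection test; unreachable peers are flagged so you can
# recheck the cabling for that link.
check_links() {
  for ip in "$@"; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip unreachable"
    fi
  done
}
```

Typical use with two private links: `check_links 10.0.0.2 10.0.1.2`.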
To configure persistent interface names for network devices

1 Navigate to the hotplug file in the /etc/sysconfig directory:
# cd /etc/sysconfig
2 Open the hotplug file in an editor.
3 Set HOTPLUG_PCI_QUEUE_NIC_EVENTS to yes:
HOTPLUG_PCI_QUEUE_NIC_EVENTS=yes
4 Run the command:
ifconfig -a
5 Make sure that the interface name to MAC address mapping remains the same across reboots.
collisions:0 txqueuelen:1000
RX bytes:35401016 (33.7 Mb) TX bytes:999899 (976.4 Kb)
Base address:0xdce0 Memory:fcf20000-fcf40000

If a file named /etc/sysconfig/network/ifcfg-eth-id-00:02:B3:DB:38:FE does not exist, do the following:
■ Create the file.
■ If the file /etc/sysconfig/network/ifcfg-eth0 exists, copy its contents into /etc/sysconfig/network/ifcfg-eth-id-00:02:B3:DB:38:FE.
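The copy step can be expressed as a small helper. The function name is ours; on a live system the arguments would be the existing interface file (for example /etc/sysconfig/network/ifcfg-eth0) and the NIC's MAC address as reported by ifconfig.

```shell
# copy_ifcfg: create the MAC-keyed interface config file from an
# existing ifcfg file, following the naming convention above.
# Illustrative helper, not a product command.
copy_ifcfg() {
  src="$1"   # existing ifcfg file
  mac="$2"   # MAC address of the NIC
  dir=$(dirname "$src")
  cp "$src" "$dir/ifcfg-eth-id-$mac"
}
```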
where you run the installvcs program. This privilege lets you issue ssh or rsh commands on all systems in the cluster. If ssh is used to communicate between systems, it must be configured so that it operates without requests for passwords or passphrases. Similarly, rsh must be configured so that it does not prompt for passwords. If system communication is not possible between systems using ssh or rsh, you have recourse.
4 When the command prompts, enter a passphrase and confirm it.
5 Change the permissions of the .ssh directory by typing:
# chmod 755 ~/.ssh
6 The file ~/.ssh/id_dsa.pub contains a line that begins with ssh-dss and ends with the name of the system on which it was created. Copy this line to the /root/.ssh/authorized_keys2 file on all systems where you plan to install VCS.
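The copy in step 6 can be done with a small helper that also avoids duplicate entries if the step is repeated. The function name is ours, not a product command; on a real node the second argument would be /root/.ssh/authorized_keys2.

```shell
# append_pubkey: append a public key line to an authorized_keys2 file
# unless the identical line is already present. Illustrative helper
# for step 6 above.
append_pubkey() {
  key="$1"    # the ssh-dss ... line from id_dsa.pub
  akfile="$2" # path to authorized_keys2 on the target system
  touch "$akfile"
  # -x: match whole line, -F: fixed string (keys contain no regex intent)
  grep -qxF "$key" "$akfile" || printf '%s\n' "$key" >> "$akfile"
}
```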
Preparing to install VCS Performing preinstallation tasks See also the Veritas Cluster Server User's Guide for a description of I/O fencing. Setting the PATH variable Installation commands as well as other commands reside in the /sbin, /usr/sbin, /opt/VRTS/bin, and /opt/VRTSvcs/bin directories. Add these directories to your PATH environment variable.
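For example, the directories can be appended in one line; the startup-file location named in the comment depends on the root user's shell and is an assumption, not a product requirement:

```shell
# Append the VCS command directories to PATH for the current session.
# Add the same line to the root user's shell startup file (for
# example /root/.bash_profile) so the change persists across logins.
export PATH="$PATH:/sbin:/usr/sbin:/opt/VRTS/bin:/opt/VRTSvcs/bin"
```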
Setting the kernel.panic tunable

By default, the kernel.panic tunable is set to zero, so the kernel does not reboot automatically if a node panics. To ensure that the node reboots automatically after it panics, this tunable must be set to a nonzero value.

To set the kernel.panic tunable

1 Set the kernel.panic tunable to a desired value (the number of seconds to wait before rebooting) in the /etc/sysctl.conf file. For example, kernel.panic = 10.
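The edit in step 1 can be scripted. This is a sketch with an illustrative function name; on a live node you would pass /etc/sysctl.conf as the path and then run `sysctl -p` (as root) to apply the value without a reboot.

```shell
# set_kernel_panic: record a nonzero kernel.panic value (seconds to
# wait before automatically rebooting after a panic) in a sysctl
# configuration file. Illustrative helper only.
set_kernel_panic() {
  conf="$1"  # path to sysctl.conf
  secs="$2"  # nonzero delay in seconds, e.g. 10
  printf 'kernel.panic = %s\n' "$secs" >> "$conf"
}
```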
Preparing to install VCS Performing preinstallation tasks Mounting the product disc You must have superuser (root) privileges to load the VCS software. To mount the product disc 1 Log in as superuser on a system where you want to install VCS. The system from which you install VCS need not be part of the cluster. The systems must be in the same subnet. 2 Insert the product disc with the VCS software into a drive that is connected to the system. The disc is automatically mounted.
Preparing to install VCS Performing preinstallation tasks To check the systems 1 Navigate to the folder that contains the installvcs program. See “Mounting the product disc” on page 49. 2 Start the pre-installation check: # ./installvcs -precheck galaxy nebula The program proceeds in a noninteractive mode to examine the systems for licenses, RPMs, disk space, and system-to-system communications.
Chapter 4 Installing and configuring VCS This chapter includes the following topics: ■ About installing and configuring VCS ■ Getting your VCS installation and configuration information ready ■ About the VCS installation program ■ Installing and configuring VCS 5.0 RU3 ■ Verifying and updating licenses on the system ■ Accessing the VCS documentation About installing and configuring VCS You can install Veritas Cluster Server on clusters of up to 32 systems.
Getting your VCS installation and configuration information ready

The VCS installation and configuration program prompts you for information about certain VCS components.
■ To configure VCS clusters in secure mode (optional), you need:
For automatic mode (default):
■ The name of the Root Broker system. Example: east. See "About Symantec Product Authentication Service (AT)" on page 19.
■ Access to the Root Broker system without use of a password.
For semiautomatic mode using encrypted files:
The path for the encrypted files that you get from the Root Broker administrator.
The domain-based address of the SMTP server: the SMTP server sends notification emails about the events within the cluster. Example: smtp.symantecexample.com
The email address of each SMTP recipient to be notified: Example: john@symantecexample.com
The minimum severity of events for SMTP email notification: Events have four levels of severity: I=Information, W=Warning, E=Error, and S=SevereError.
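The severity levels behave as an ordered threshold: a recipient configured at a given minimum receives events at that level and anything more severe. A small sketch of the comparison (the helper names are ours, not part of VCS):

```shell
# sev_rank: map a severity letter to its rank. Ordering: I < W < E < S.
sev_rank() {
  case "$1" in
    I) echo 0 ;;  # Information
    W) echo 1 ;;  # Warning
    E) echo 2 ;;  # Error
    S) echo 3 ;;  # SevereError
  esac
}

# should_notify: succeed when the event severity ($2) is at or above
# the recipient's configured minimum severity ($1).
should_notify() {
  [ "$(sev_rank "$2")" -ge "$(sev_rank "$1")" ]
}
```

So a recipient configured with minimum severity E receives Error and SevereError events, but not Warning or Information events.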
Installing and configuring VCS About the VCS installation program ■ VRTSvcsmn — Manual pages for VCS commands About the VCS installation program You can access the installvcs program from the command line or through the Veritas product installer.
Table 4-1  installvcs optional features (continued)

Perform secure installations using the values that are stored in a configuration file. See "Installing VCS with a response file where ssh or rsh are disabled" on page 171.
Perform automated installations using the values that are stored in a configuration file. See "Performing automated VCS installations" on page 164.
Installing and configuring VCS About the VCS installation program installvcs [ system1 system2... ] [ options ] Table 4-2 lists the installvcs command options. Table 4-2 installvcs options Option and Syntax Description -configure Configure VCS after using -installonly option to install VCS. See “Configuring VCS using configure option” on page 59. -enckeyfile encryption_key_file See the -responsefile and the -encrypt options.
Installing and configuring VCS About the VCS installation program Table 4-2 installvcs options (continued) Option and Syntax Description -nooptionalpkgs Specifies that the optional product RPMs such as man pages and documentation need not be installed. -nostart Bypass starting VCS after completing installation and configuration. -pkgpath pkg_path Specifies that pkg_path contains all RPMs that the installvcs program is about to install on all systems.
Installing and configuring VCS About the VCS installation program Table 4-2 installvcs options (continued) Option and Syntax Description -rsh Specifies that rsh and rcp are to be used for communication between systems instead of ssh and scp. This option requires that systems be preconfigured such that rsh commands between systems execute without prompting for passwords or confirmations -security Enable or disable Symantec Product Authentication Service in a VCS cluster that is running.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 cluster configuration. The installvcs program prompts for cluster information, and creates VCS configuration files without performing installation. See “Configuring the basic cluster” on page 67. The -configure option can be used to reconfigure a VCS cluster. VCS must not be running on systems when this reconfiguration is performed. If you manually edited the main.cf file, you need to reformat the main.cf file.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 Table 4-3 Installation and configuration tasks Task Reference License and install VCS ■ See “Starting the software installation” on page 61. ■ See “Specifying systems for installation” on page 62. ■ See “Licensing VCS” on page 63. ■ See “Choosing VCS RPMs for installation” on page 64. See “Choosing to install VCS RPMs or configure VCS” on page 65. ■ See “Installing VCS RPMs” on page 75.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 Note: The system from where you install VCS must run the same Linux distribution as the target systems. To install VCS using the product installer 1 Confirm that you are logged in as the superuser and mounted the product disc. 2 Start the installer. # ./installer The installer starts the product installation program with a copyright message and specifies the directory where the logs are created.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 To specify system names for installation 1 Enter the names of the systems where you want to install VCS. Enter the system names separated by spaces on which to install VCS: galaxy nebula For a single node installation, enter one name for the system. See “Creating a single-node cluster using the installer program” on page 139. 2 Review the output as the installer verifies the systems you specify.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 3 Enter keys for additional product features. Do you want to enter another license key for galaxy? [y,n,q,?] (n) y Enter a VCS license key for galaxy: [?] XXXX-XXXX-XXXX-XXXX-XXX XXXX-XXXX-XXXX-XXXX-XXX successfully registered on galaxy Do you want to enter another license key for galaxy? [y,n,q,?] (n) 4 Review the output as the installer registers the license key on the other nodes.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 To install VCS RPMs 1 Review the output as the installer checks the RPMs that are already installed. 2 Choose the VCS RPMs that you want to install. Select the RPMs to be installed on all systems? [1-3,q,?] (3) 2 Based on what RPMs you want to install, enter one of the following: 1 Installs only the required VCS RPMs. 2 Installs all the VCS RPMs. You must choose this option to configure any optional VCS feature.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 To install VCS packages now and configure VCS later 1 If you do not want to configure VCS now, enter n at the prompt. Are you ready to configure VCS? [y,n,q] (y) n The utility checks for the required file system space and makes sure that any processes that are running do not conflict with the installation.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 To configure VCS using the installvcs program 1 Confirm that you are logged in as the superuser and mounted the product disc. 2 Navigate to the folder that contains the installvcs program. # cd /dvdrom/cluster_server 3 Start the installvcs program. # ./installvcs -configure The installer begins with a copyright message and specifies the directory where the logs are created.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 To configure the cluster 1 Review the configuration instructions that the installer presents. 2 Enter the unique cluster name and cluster ID. Enter the unique cluster name: [?] clus1 Enter the unique Cluster ID number between 0-65535: [b,?] 7 3 Review the NICs available on the first system as the installer discovers and reports them. The private heartbeats can either use NIC or aggregated interfaces.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 5 Choose whether to use the same NIC details to configure private heartbeat links on other systems. Are you using the same NICs for private heartbeat links on all systems? [y,n,q,b,?] (y) If you want to use the NIC details that you entered for galaxy, make sure the same NICs are available on each system. Then, enter y at the prompt. If the NIC device names are different on some of the systems, enter n.
Option 1. Automatic configuration: Enter the name of the Root Broker system when prompted. Requires remote access to the Root Broker. Review the output as the installer verifies communication with the Root Broker system, checks the vxatd process and version, and checks the security domain.
Option 2. Semiautomatic configuration: Enter the path of the encrypted file (BLOB file) for each node when prompted.
Option 3.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 See Veritas Cluster Server User's Guide for more information. Adding VCS users If you have enabled Symantec Product Authentication Service, you do not need to add VCS users now. Otherwise, on systems operating under an English locale, you can add VCS users at this time. To add VCS users 1 Review the required information to add VCS users. 2 Reset the password for the Admin user, if necessary.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 Refer to the Veritas Cluster Server User’s Guide for more information. To configure SMTP email notification 1 Review the required information to configure the SMTP email notification. 2 Specify whether you want to configure the SMTP notification. Do you want to configure SMTP notification? [y,n,q] (y) y If you do not want to configure the SMTP notification, you can skip to the next configuration option.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 ■ If you do not want to add, answer n. Would you like to add another SMTP recipient? [y,n,q,b] (n) 5 Verify and confirm the SMTP notification information. SMTP Address: smtp.example.com Recipient: ozzie@example.com receives email for Warning or higher events Recipient: harriet@example.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 Enter the SNMP console system name: [b,?] saturn ■ Enter the minimum security level of messages to be sent to each console. Enter the minimum severity of events for which SNMP traps should be sent to saturn [I=Information, W=Warning, E=Error, S=SevereError]: [b,?] E 4 Add more SNMP consoles, if necessary. ■ If you want to add another SNMP console, enter y and provide the required information at the prompt.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 information to the VCS configuration file. You must perform additional configuration tasks to set up a global cluster. See Veritas Cluster Server User's Guide for instructions to set up VCS global clusters. Note: If you installed a HA/DR license to set up replicated data cluster or campus cluster, skip this installer option. To configure the global cluster option 1 Review the required information to configure the global cluster option.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 for installation are not met, the utility stops and indicates the actions that are required to proceed with the process. Review the output as the installer uninstalls any previous versions and installs the VCS 5.0 RU3 RPMs. Creating VCS configuration files After you install the RPMs and provide the configuration information, the installer continues to create configuration files and copies them to each system.
Installing and configuring VCS
Installing and configuring VCS 5.0 RU3

CPI WARNING V-9-122-1021 No PERSISTENT_NAME set for NIC with MAC
address 00:11:43:33:17:28 (present name eth0), though config file exists!
CPI WARNING V-9-122-1022 No config file for NIC with MAC address
00:11:43:33:17:29 (present name eth1) found!
CPI WARNING V-9-122-1022 No config file for NIC with MAC address
00:04:23:ac:25:1f (present name eth3) found!
PERSISTENT_NAME is not set for all the NICs.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 Table 4-4 File description File Description summary file ■ Lists the RPMs that are installed on each system. ■ Describes the cluster and its configured resources. ■ Provides the information for managing the cluster. log file Details the entire installation. response file Contains the configuration information that can be used to perform secure or unattended installations on other systems.
Installing and configuring VCS
Installing and configuring VCS 5.0 RU3

Figure 4-2  Client communication with LDAP servers

VCS client and VCS node (authentication broker):
1. When a user runs HA commands, AT initiates user authentication with the authentication broker.
2. Authentication broker on VCS node performs an LDAP bind operation with the LDAP directory.
3.
4. AT issues the credentials to the user to proceed with the command.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 ■ Distinguished name for the user container (for example, UserBaseDN=ou=people,dc=comp,dc=com) ■ Distinguished name for the group container (for example, GroupBaseDN=ou=group,dc=comp,dc=com) Installing the Java Console You can administer VCS using the VCS Java-based graphical user interface, Java Console. After VCS has been installed, install the Java Console on a Windows system or Linux system.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 Note: Symantec recommends using Pentium III, 400MHz, 256MB RAM, and 800x600 display resolution. The version of the Java™ 2 Runtime Environment (JRE) requires 32 megabytes of RAM. This version is supported on the Intel Pentium platforms that run the Linux kernel v 2.2.12 and glibc v2.1.2-11 (or later).
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 3 Go to \windows\VCSWindowsInstallers\ClusterManager. 4 Open the language folder of your choice, for example EN. 5 Double-click setup.exe. 6 The Veritas Cluster Manager Install Wizard guides you through the installation process. Installing VCS Simulator You can administer VCS Simulator from the Java Console or from the command line. Review the software requirements for VCS Simulator.
Installing and configuring VCS Installing and configuring VCS 5.0 RU3 To install VCS Simulator on Windows systems 1 Insert the VCS installation disc into a drive. 2 Navigate to the path of the Simulator installer file: \your_platform_architecture\cluster_server\windows\ VCSWindowsInstallers\Simulator 3 Double-click the installer file. 4 Read the information in the Welcome screen and click Next.
Installing and configuring VCS
Verifying and updating licenses on the system

Verifying the cluster after installation
If you used the installvcs program and chose to configure and start VCS, then VCS and all of its components are properly configured and can start correctly. You must still verify that your cluster operates properly after the installation.
See "About verifying the VCS installation" on page 105.
Installing and configuring VCS
Verifying and updating licenses on the system

Reserved = 0
Mode = VCS

Updating product licenses using vxlicinst
You can use the vxlicinst command to add the VCS license key on each node. If VCS is already installed and configured and you use a demo license, you can replace the demo license.
See "Replacing a VCS demo license with a permanent license" on page 85.
To update product licenses
◆ On each node, enter the license key using the command:
# cd /opt/VRTS/bin
# ./vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXX
Installing and configuring VCS Accessing the VCS documentation 4 Make sure demo licenses are replaced on all cluster nodes before starting VCS. # cd /opt/VRTS/bin # ./vxlicrep 5 Start VCS on each node: # hastart Accessing the VCS documentation The software disc contains the documentation for VCS in Portable Document Format (PDF) in the cluster_server/docs directory.
Chapter 5 Configuring VCS clusters for data integrity This chapter includes the following topics: ■ About configuring VCS clusters for data integrity ■ About I/O fencing components ■ About setting up disk-based I/O fencing ■ Preparing to configure disk-based I/O fencing ■ Setting up disk-based I/O fencing manually About configuring VCS clusters for data integrity When a node fails, VCS takes corrective action and configures its components to reflect the altered membership.
Configuring VCS clusters for data integrity About I/O fencing components If a system is so busy that it appears to stop responding, the other nodes could declare it as dead. This declaration may also occur for the nodes that use the hardware that supports a "break" and "resume" function. When a node drops to PROM level with a break and subsequently resumes operations, the other nodes may declare the system dead. They can declare it dead even if the system later returns and begins write operations.
Configuring VCS clusters for data integrity
About setting up disk-based I/O fencing

Disks that act as coordination points are called coordinator disks. Coordinator disks are three standard disks or LUNs that are set aside for I/O fencing during cluster reconfiguration. Coordinator disks do not serve any other storage purpose in the VCS configuration. You can configure coordinator disks to use the Veritas Volume Manager Dynamic Multipathing (DMP) feature.
Configuring VCS clusters for data integrity
About setting up disk-based I/O fencing

Figure 5-1  Workflow to configure disk-based I/O fencing

Preparing to set up I/O fencing:
■ Initialize disks as VxVM disks
■ Identify disks to use as coordinator disks
■ Check shared disks for I/O fencing compliance

Setting up I/O fencing:
■ Set up coordinator disk group
■ Create I/O fencing configuration files
■ Modify VCS configuration to use I/O fencing
■ Verify I/O fencing configuration

See "Preparing to configure disk-based I/O fencing".
Configuring VCS clusters for data integrity About setting up disk-based I/O fencing For the latest information on supported hardware visit the following URL: http://entsupport.symantec.com/docs/283161 ■ Each of the coordinator disks must use a physically separate disk or LUN. Symantec recommends using the smallest possible LUNs for coordinator disks. ■ Each of the coordinator disks should exist on a different disk array, if possible. ■ The coordinator disks must support SCSI-3 persistent reservations.
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing /etc/vxfentab When you run the vxfen startup file to start I/O fencing, the script creates this /etc/vxfentab file on each node with a list of all paths to each coordinator disk. The startup script uses the contents of the /etc/vxfendg and /etc/vxfenmode files. Thus any time a system is rebooted, the fencing driver reinitializes the vxfentab file with the current list of all paths to the coordinator disks.
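As an illustrative sketch only (the disk group name and keyword values are assumptions that must match your own setup), the fencing files described above typically look like this after a disk-based configuration:

```
# /etc/vxfendg -- name of the coordinator disk group
vxfencoorddg

# /etc/vxfenmode -- fencing mode and SCSI-3 disk policy
vxfen_mode=scsi3
scsi3_disk_policy=dmp
```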
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing Refer to the installation guide that comes with the Storage Foundation product that you use. Perform the following preparatory tasks to configure I/O fencing: Initialize disks as VxVM disks See “Initializing disks as VxVM disks” on page 93. Identify disks to use as coordinator disks See “Identifying disks to use as coordinator disks” on page 95.
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing 3 Verify that the ASL for the disk array is installed on each of the nodes. Run the following command on each node and examine the output to verify the installation of ASL. The following output is a sample: # vxddladm listsupport all LIBNAME VID PID =========================================================================== libvx3par.so libvxCLARiiON.so libvxcscovrts.so libvxemc.so libvxhds.so libvxhds9980.
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing 4 Scan all disk drives and their attributes, update the VxVM device list, and reconfigure DMP with the new devices. Type: # vxdisk scandisks See the Veritas Volume Manager documentation for details on how to add and configure disks. 5 To initialize the disks as VxVM disks, use one of the following methods: ■ Use the interactive vxdiskadm utility to initialize the disks as VxVM disks.
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing command option verifies that the same serial number for the LUN is returned on all paths to the LUN. Make sure to test the disks that serve as coordinator disks. The vxfentsthdw utility has additional options suitable for testing many disks. Review the options for testing the disk groups (-g) and the disks that are listed in a file (-f). You can also test disks without destroying data using the -r option.
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing The same serial number information should appear when you enter the equivalent command on node B using the /dev/sdy path.
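The comparison the utility performs can be pictured with a small stand-alone sketch. Here get_serial is a hypothetical stand-in for the per-node serial-number query, not a VCS command, and the canned serial values are illustrative:

```shell
#!/bin/sh
# Compare the LUN serial number as seen from two nodes/paths.
get_serial() {
    case "$1" in
        nodeA_sdz) echo "32415B81-8742-0048" ;;   # serial via /dev/sdz on node A
        nodeB_sdy) echo "32415B81-8742-0048" ;;   # serial via /dev/sdy on node B
    esac
}

a=$(get_serial nodeA_sdz)
b=$(get_serial nodeB_sdy)
if [ "$a" = "$b" ]; then
    echo "serial numbers match: both paths see the same LUN"
else
    echo "serial numbers differ: paths see different LUNs"
fi
```

If the serials differ, the two device paths do not point at the same LUN and the disk must not be used as a coordinator disk.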
Configuring VCS clusters for data integrity Preparing to configure disk-based I/O fencing ■ If you use rsh for communication: # /opt/VRTSvcs/vxfen/bin/vxfentsthdw -n 3 The script warns that the tests overwrite data on the disks. After you review the overview and the warning, confirm to continue the process and enter the node names. Warning: The tests overwrite and destroy data on the disks unless you use the -r option.
Configuring VCS clusters for data integrity Setting up disk-based I/O fencing manually Setting up disk-based I/O fencing manually Make sure you completed the preparatory tasks before you set up I/O fencing. Tasks that are involved in setting up I/O fencing include: Table 5-1 Tasks to set up I/O fencing manually Action Description Setting up coordinator disk groups See “Setting up coordinator disk groups” on page 99.
Configuring VCS clusters for data integrity Setting up disk-based I/O fencing manually 3 Deport the coordinator disk group: # vxdg deport vxfencoorddg 4 Import the disk group with the -t option to avoid automatically importing it when the nodes restart: # vxdg -t import vxfencoorddg 5 Deport the disk group.
Configuring VCS clusters for data integrity Setting up disk-based I/O fencing manually # cp /etc/vxfen.d/vxfenmode_scsi3_raw /etc/vxfenmode 3 To check the updated /etc/vxfenmode configuration, enter the following command on one of the nodes. For example: # more /etc/vxfenmode Modifying VCS configuration to use I/O fencing After you add coordinator disks and configure I/O fencing, add the UseFence = SCSI3 cluster attribute to the VCS configuration file /etc/VRTSvcs/conf/config/main.cf.
Configuring VCS clusters for data integrity
Setting up disk-based I/O fencing manually

6 Save and close the file.
7 Verify the syntax of the file /etc/VRTSvcs/conf/config/main.cf:
# hacf -verify /etc/VRTSvcs/conf/config
8 Using rcp or another utility, copy the VCS configuration file from a node (for example, galaxy) to the remaining cluster nodes. For example, on each remaining node, enter:
# rcp galaxy:/etc/VRTSvcs/conf/config/main.cf \
/etc/VRTSvcs/conf/config/
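With the attribute added, the start of main.cf resembles the following sketch (the cluster name matches the sample configuration elsewhere in this guide; other cluster attributes vary by site):

```
include "types.cf"

cluster vcs02 (
        UseFence = SCSI3
        )
```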
Configuring VCS clusters for data integrity
Setting up disk-based I/O fencing manually

To verify I/O fencing configuration
◆ On one of the nodes, type:
# vxfenadm -d

I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:
 * 0 (galaxy)
   1 (nebula)
RFSM State Information:
   node 0 in state 8 (running)
   node 1 in state 8 (running)

Removing permissions for communication
Make sure you completed the installation tasks before you remove the permissions for communication.
Chapter 6
Verifying the VCS installation
This chapter includes the following topics:
■ About verifying the VCS installation
■ About the LLT and GAB configuration files
■ About the VCS configuration file main.cf
Verifying the VCS installation About the LLT and GAB configuration files The file llthosts is a database that contains one entry per system. This file links the LLT system ID (in the first column) with the LLT host name. This file is identical on each node in the cluster. For example, the file /etc/llthosts contains the entries that resemble: 0 1 ■ galaxy nebula The /etc/llttab file The file llttab contains the information that is derived during installation and used by the utility lltconfig(1M).
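As a hedged illustration (not a VCS tool), the format rules above — one "ID hostname" pair per line, IDs unique and in the LLT-supported range 0-31 — can be sanity-checked with a few lines of shell:

```shell
#!/bin/sh
# Validate an llthosts-style file: two fields per line, a numeric ID
# in the range 0-31, and no duplicate IDs.
check_llthosts() {
    awk 'NF != 2 || $1 !~ /^[0-9]+$/ || $1+0 > 31 { bad = 1 }
         seen[$1]++ { bad = 1 }        # pattern is true on a repeated ID
         END { exit bad }' "$1"
}

printf '0 galaxy\n1 nebula\n' > /tmp/llthosts.sample
check_llthosts /tmp/llthosts.sample && echo "llthosts format OK"
```

A duplicate ID makes the check fail, which matches the note below that LLT fails to operate if any systems share the same ID.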
Verifying the VCS installation About the VCS configuration file main.cf Note: The use of the -c -x option for /sbin/gabconfig is not recommended. About the VCS configuration file main.cf The VCS configuration file /etc/VRTSvcs/conf/config/main.cf is created during the installation process. See “Sample main.cf file for VCS clusters” on page 108. See “Sample main.cf file for global clusters” on page 110. The main.cf file contains the minimum information that defines the cluster and its nodes.
Verifying the VCS installation About the VCS configuration file main.cf Refer to the Veritas Cluster Server User's Guide to review the configuration concepts, and descriptions of main.cf and types.cf files for Linux for IBM Power systems. Sample main.cf file for VCS clusters The following sample main.cf file is for a secure cluster that is managed locally by the Cluster Management Console. include "types.
Verifying the VCS installation About the VCS configuration file main.cf NIC csgnic ( Device = eth0 NetworkHosts = { "192.168.1.17", "192.168.1.18" } ) NotifierMngr ntfr ( SnmpConsoles = { "saturn" = Error, "jupiter" = SevereError } SmtpServer = "smtp.example.com" SmtpRecipients = { "ozzie@example.com" = Warning, "harriet@example.
Verifying the VCS installation
About the VCS configuration file main.cf

SystemList = { galaxy = 0, nebula = 1 }
Parallel = 1
OnlineRetryLimit = 3
OnlineRetryInterval = 120
)
Phantom phantom_vxss (
)
ProcessOnOnly vxatd (
IgnoreArgs = 1
PathName = "/opt/VRTSat/bin/vxatd"
)
// resource dependency tree
//
// group VxSS
// {
//     Phantom phantom_vxss
//     ProcessOnOnly vxatd
// }

Sample main.cf file for global clusters
Verifying the VCS installation About the VCS configuration file main.cf . . In the following main.cf file example, bold text highlights global cluster specific entries. include "types.cf" cluster vcs03 ( ClusterAddress = "10.182.13.
Verifying the VCS installation About the VCS configuration file main.cf Device = eth0 ) NotifierMngr ntfr ( SnmpConsoles = { vcslab4079 = SevereError } SmtpServer = "smtp.veritas.com" SmtpRecipients = { "johndoe@veritas.
Verifying the VCS installation
Verifying the LLT, GAB, and VCS configuration files

PathName = "/opt/VRTSat/bin/vxatd"
)
// resource dependency tree
//
// group VxSS
// {
//     Phantom phantom_vxss
//     ProcessOnOnly vxatd
// }

Verifying the LLT, GAB, and VCS configuration files
Make sure that the LLT, GAB, and VCS configuration files contain the information you provided during VCS installation and configuration.
Verifying the VCS installation Verifying LLT, GAB, and cluster operation 3 Verify LLT operation. See “Verifying LLT” on page 114. 4 Verify GAB operation. See “Verifying GAB” on page 116. 5 Verify the cluster operation. See “Verifying the cluster” on page 117. Verifying LLT Use the lltstat command to verify that links are active for LLT. If LLT is configured correctly, this command shows all the nodes in the cluster.
Verifying the VCS installation Verifying LLT, GAB, and cluster operation 5 To view additional information about LLT, run the lltstat -nvv command on each node.
Verifying the VCS installation Verifying LLT, GAB, and cluster operation However, the output in the example shows different details for the node nebula. The private network connection is possibly broken or the information in the /etc/llttab file may be incorrect. 6 To obtain information about the ports open for LLT, type lltstat -p on any node.
Verifying the VCS installation
Verifying LLT, GAB, and cluster operation

To verify GAB
1 To verify that GAB operates, type the following command on each node:
/sbin/gabconfig -a
2 Review the output of the command:
■ If GAB operates, the following GAB port membership information is returned:

GAB Port Memberships
===================================
Port a gen a36e0003 membership 01
Port h gen fd570002 membership 01

■ If GAB does not operate, the command does not return any GAB port membership information.
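The check in step 2 can also be scripted against saved output. This is an illustrative sketch that parses canned `gabconfig -a` text — it is not a VCS utility, and port h is assumed to be the VCS engine port as described in this chapter:

```shell
#!/bin/sh
# Verify that both GAB port a (membership) and port h (VCS engine)
# appear in captured `gabconfig -a` output.
out='GAB Port Memberships
===================================
Port a gen a36e0003 membership 01
Port h gen fd570002 membership 01'

missing=0
for port in a h; do
    if ! echo "$out" | grep -q "^Port $port gen"; then
        echo "GAB port $port has not formed"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "GAB ports a and h are up"
```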
Verifying the VCS installation
Verifying LLT, GAB, and cluster operation

To verify the cluster
1 To verify the status of the cluster, type the following command:
hastatus -summary
The output resembles:

-- SYSTEM STATE
-- System          State      Frozen
A  galaxy          RUNNING    0
A  nebula          RUNNING    0

-- GROUP STATE
-- Group             System   Probed  AutoDisabled  State
B  ClusterService    galaxy   Y       N             ONLINE
B  ClusterService    nebula   Y       N             OFFLINE

2 Review the command output for the following information:
■ The system state
Verifying the VCS installation Verifying LLT, GAB, and cluster operation 119 The example shows the output when the command is run on the node galaxy. The list continues with similar information for nebula (not shown) and any other nodes in the cluster.
Verifying the VCS installation Verifying LLT, GAB, and cluster operation #System Attribute Value galaxy Limits galaxy LinkHbStatus eth1 UP eth2 UP galaxy LoadTimeCounter 0 galaxy LoadTimeThreshold 600 galaxy LoadWarningLevel 80 galaxy NoAutoDisable 0 galaxy NodeId 0 galaxy OnGrpCnt 1 galaxy ShutdownTimeout 120 galaxy SourceFile ./main.cf galaxy SysInfo Linux:galaxy,#1 SMP Mon Dec 12 18:32:25 UTC 2005,2.6.5-7.
Chapter 7 Adding and removing cluster nodes This chapter includes the following topics: ■ About adding and removing nodes ■ Adding a node to a cluster ■ Removing a node from a cluster About adding and removing nodes After you install VCS and create a cluster, you can add and remove nodes from the cluster. You can create a cluster of up to 32 nodes. Adding a node to a cluster The system you add to the cluster must meet the hardware and software requirements.
Adding and removing cluster nodes
Adding a node to a cluster

Table 7-1  Tasks that are involved in adding a node to a cluster (continued)

Task                                        Reference
Install the software manually.              See "Preparing for a manual installation
                                            when adding a node" on page 123.
                                            See "Installing VCS RPMs for a manual
                                            installation" on page 124.
Add a license key.                          See "Adding a license key" on page 125.
For a cluster that is running in secure     See "Setting up the node to run in secure
mode, set up the new node to run in         mode" on page 126.
secure mode.
Adding and removing cluster nodes Adding a node to a cluster Figure 7-1 Adding a node to a two-node cluster using two switches Public network Private network New node: saturn To set up the hardware 1 Connect the VCS private Ethernet controllers. Perform the following tasks as necessary: ■ When you add nodes to a two-node cluster, use independent switches or hubs for the private network connections.
Adding and removing cluster nodes Adding a node to a cluster See “Mounting the product disc” on page 49. To prepare for installation ◆ Depending on the OS distribution, replace the dist in the command with rhel5 or sles10. Replace the arch in the command with ppc64. # cd /mnt/cdrom/dist_arch/cluster_server/rpms Installing VCS RPMs for a manual installation VCS has both required and optional RPMs. Install the required RPMs first. All RPMs are installed in the /opt directory.
Adding and removing cluster nodes Adding a node to a cluster # rpm -i VRTScutil-5.0.33.00-RU3_GENERIC.noarch.rpm # rpm -i VRTSatClient-4.3.28.0-0.ppc.rpm # rpm -i VRTSatServer-4.3.28.0-0.ppc.rpm ■ SLES10/ppc64, required RPMS # rpm -i VRTSvlic-3.02.33.4-0.ppc64.rpm # rpm -i VRTSperl-5.10.0.1-SLES10.ppc64.rpm # rpm -i VRTSspt-5.5.00.0-GA.noarch.rpm # rpm -i VRTSllt-5.0.33.00-RU3_SLES10.ppc64.rpm # rpm -i VRTSgab-5.0.33.00-RU3_SLES10.ppc64.rpm # rpm -i VRTSvxfen-5.0.33.00-RU3_SLES10.ppc64.
Adding and removing cluster nodes Adding a node to a cluster Setting up the node to run in secure mode You must follow this procedure only if you are adding a node to a cluster that is running in secure mode. If you are adding a node to a cluster that is not running in a secure mode, proceed with configuring LLT and GAB. See “Configuring LLT and GAB” on page 128. Table 7-2 uses the following information for the following command examples.
Adding and removing cluster nodes Adding a node to a cluster # vssat deletecred --domain type:domainname \ --prplname prplname For example: # vssat deletecred --domain vx:root@RB2.brokers.example.com \ --prplname saturn.nodes.example.com Configuring the authentication broker on node saturn Configure a new authentication broker (AB) on node saturn. This AB belongs to root broker RB1. To configure the authentication broker on node saturn 1 Create a principal for node saturn on root broker RB1.
Adding and removing cluster nodes Adding a node to a cluster 4 Configure AB on node saturn to talk to RB1. # vxatd -o -a -n prplname -p password -x vx -y domainname -q \ rootbroker -z 2821 -h roothash_file_path For example: # vxatd -o -a -n saturn.nodes.example.com -p flurbdicate \ -x vx -y root@RB1.brokers.example.com -q RB1 \ -z 2821 -h roothash_file_path 5 Verify that AB is configured properly. # vssat showbrokermode The command should return 1, indicating the mode to be AB.
Adding and removing cluster nodes Adding a node to a cluster ■ If the file on one of the existing nodes resembles: 0 galaxy 1 nebula ■ Update the file for all nodes, including the new one, resembling: 0 galaxy 1 nebula 2 saturn 2 Create the file /etc/llttab on the new node, making sure that line beginning "set-node" specifies the new node. The file /etc/llttab on an existing node can serve as a guide.
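For example, /etc/llttab on the new node saturn might resemble the following sketch. The cluster ID and interface names are assumptions for illustration — they must match the values already used on the existing nodes, with only the set-node line differing:

```
set-node saturn
set-cluster 2
link eth1 eth1 - ether - -
link eth2 eth2 - ether - -
```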
Adding and removing cluster nodes Adding a node to a cluster The -n flag indicates to VCS the number of nodes that must be ready to form a cluster before VCS starts. 2 On the new node, run the command, to configure GAB: # /sbin/gabconfig -c To verify GAB 1 On the new node, run the command: # /sbin/gabconfig -a The output should indicate that port a membership shows all nodes including the new node.
Adding and removing cluster nodes Removing a node from a cluster 3 Stop VCS on the new node: # hastop -sys saturn 4 Copy the main.cf file from an existing node to your new node: # rcp /etc/VRTSvcs/conf/config/main.cf \ saturn:/etc/VRTSvcs/conf/config/ 5 Start VCS on the new node: # hastart 6 If necessary, modify any new system attributes. 7 Enter the command: # haconf -dump -makero Starting VCS and verifying the cluster Start VCS after adding the new node to the cluster and verify the cluster.
Adding and removing cluster nodes
Removing a node from a cluster

Table 7-3  Tasks that are involved in removing a node

Task                                            Reference
■ Back up the configuration file.               See "Verifying the status of nodes and
■ Check the status of the nodes and the         service groups" on page 132.
  service groups.
■ Switch or remove any VCS service groups       See "Deleting the departing node from the
  on the node departing the cluster.            VCS configuration" on page 133.
■ Delete the node from VCS configuration.
Adding and removing cluster nodes Removing a node from a cluster To verify the status of the nodes and the service groups 1 Make a backup copy of the current configuration file, main.cf. # cp -p /etc/VRTSvcs/conf/config/main.cf\ /etc/VRTSvcs/conf/config/main.cf.goodcopy 2 Check the status of the systems and the service groups.
Adding and removing cluster nodes Removing a node from a cluster To remove or switch service groups from the departing node 1 Switch failover service groups from the departing node. You can switch grp3 from node saturn to node nebula. # hagrp -switch grp3 -to nebula 2 Check for any dependencies involving any service groups that run on the departing node; for example, grp4 runs only on the departing node.
Adding and removing cluster nodes Removing a node from a cluster 6 Delete the departing node from the SystemList of service groups grp3 and grp4. # hagrp -modify grp3 SystemList -delete saturn # hagrp -modify grp4 SystemList -delete saturn 7 For the service groups that run only on the departing node, delete the resources from the group before you delete the group.
Adding and removing cluster nodes Removing a node from a cluster Modifying configuration files on each remaining node Perform the following tasks on each of the remaining nodes of the cluster. To modify the configuration files on a remaining node 1 If necessary, modify the /etc/gabtab file. No change is required to this file if the /sbin/gabconfig command has only the argument -c. Symantec recommends using the -nN option, where N is the number of cluster systems.
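The companion step of dropping the departing node's entry from /etc/llthosts can be sketched as follows. This operates on a scratch copy for illustration; on a real node you would edit /etc/llthosts itself, and "saturn" stands in for your departing node's name:

```shell
#!/bin/sh
# Remove the departing node (saturn) from a copy of an llthosts file.
printf '0 galaxy\n1 nebula\n2 saturn\n' > /tmp/llthosts.work
sed -i '/[[:space:]]saturn$/d' /tmp/llthosts.work
cat /tmp/llthosts.work
```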
Adding and removing cluster nodes Removing a node from a cluster Unloading LLT and GAB and removing VCS on the departing node Perform the tasks on the node that is departing the cluster. If you have configured VCS as part of the Storage Foundation and High Availability products, you may have to delete other dependent RPMs before you can delete all of the following ones. To stop LLT and GAB and remove VCS 1 If you had configured I/O fencing in enabled mode, then stop I/O fencing. # /etc/init.
Adding and removing cluster nodes Removing a node from a cluster # rpm -e VRTSvlic # rpm -e VRTSperl # rpm -e VRTSpbx # rpm -e VRTSicsco # rpm -e VRTSatServer # rpm -e VRTSatClient 5 Remove the LLT and GAB configuration files.
Chapter 8 Installing VCS on a single node This chapter includes the following topics: ■ About installing VCS on a single node ■ Creating a single-node cluster using the installer program ■ Creating a single-node cluster manually ■ Adding a node to a single-node cluster About installing VCS on a single node You can install VCS 5.0 RU3 on a single node. You can subsequently add another node to the single-node cluster to form a multinode cluster.
Installing VCS on a single node
Creating a single-node cluster using the installer program

Table 8-1  Tasks to create a single-node cluster using the installer

Task                                        Reference
Prepare for installation.                   See "Preparing for a single node
                                            installation" on page 140.
Install the VCS software on the system      See "Starting the installer for the single
using the installer.                        node cluster" on page 140.
Installing VCS on a single node Creating a single-node cluster manually Answer y if you plan to incorporate the single node cluster into a multi-node cluster in the future. Continue with the installation. See “Licensing VCS” on page 63. Creating a single-node cluster manually Table 8-2 specifies the tasks that you need to perform to install VCS on a single node.
Installing VCS on a single node Creating a single-node cluster manually ■ See “Preparing for a manual installation when adding a node” on page 123. ■ See “Installing VCS RPMs for a manual installation” on page 124. ■ See “Adding a license key” on page 125. Renaming the LLT and GAB startup files You may need the LLT and GAB startup files to upgrade the single-node cluster to a multiple-node cluster at a later time. To rename the LLT and GAB startup files ◆ Rename the LLT and GAB startup files.
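The guide does not show the exact rename commands here; one hedged way to park the startup files is sketched below. The .old suffix is an assumption, and /tmp/initd.demo stands in for /etc/init.d so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Park the llt and gab init scripts so they are not started at boot.
d=/tmp/initd.demo
mkdir -p "$d"
touch "$d/llt" "$d/gab"     # stand-ins for the real startup files
for f in llt gab; do
    mv "$d/$f" "$d/$f.old"
done
ls "$d"
```

Keeping the renamed copies (rather than deleting them) is what lets you restore LLT and GAB later when you expand to a multiple-node cluster.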
Installing VCS on a single node Adding a node to a single-node cluster Adding a node to a single-node cluster All nodes in the new cluster must run the same version of VCS. The example procedure refers to the existing single-node VCS node as Node A. The node that is to join Node A to form a multiple-node cluster is Node B. Table 8-3 specifies the activities that you need to perform to add nodes to a single-node cluster.
Installing VCS on a single node Adding a node to a single-node cluster Setting up a node to join the single-node cluster The new node to join the existing single node running VCS must run the same version of operating system and patch level. To set up a node to join the single-node cluster 1 2 Do one of the following tasks: ■ If VCS is not currently running on Node B, proceed to step 2.
Installing VCS on a single node Adding a node to a single-node cluster To install and configure Ethernet cards for private network 1 Shut down VCS on Node A. # hastop -local 2 Shut down the node to get to the OK prompt: # shutdown -r now 3 Install the Ethernet card on Node A. If you want to use aggregated interface to set up private network, configure aggregated interface. 4 Install the Ethernet card on Node B.
Installing VCS on a single node Adding a node to a single-node cluster 5 Freeze the service groups. # hagrp -freeze group -persistent Repeat this command for each service group in step 4. 6 Make the configuration read-only. # haconf -dump -makero 7 Stop VCS on Node A. # hastop -local -force 8 Edit the VCS system configuration file /etc/sysconfig/vcs, and remove the "-onenode" option. Change the line: ONENODE=yes To: ONENODE=no 9 Rename the GAB and LLT startup files so they can be used.
Installing VCS on a single node
Adding a node to a single-node cluster

LLT handles the following tasks:
■ Traffic distribution
■ Heartbeat traffic
Configure LLT as described in the following sections.

Setting up /etc/llthosts
The file llthosts(4M) is a database. This file contains one entry per system that links the LLT system ID (in the first column) with the LLT host name. You must create an identical file on each node in the cluster.
Installing VCS on a single node
Adding a node to a single-node cluster

Table 8-4  LLT directives

Directive   Description
set-node    Assigns the system ID or symbolic name. The system ID number must be
            unique for each system in the cluster, and must be in the range 0-31.
            The symbolic name corresponds to the system ID in the /etc/llthosts
            file. Note that LLT fails to operate if any systems share the same ID.
link        Attaches LLT to a network interface.
Installing VCS on a single node Adding a node to a single-node cluster Configuring GAB when adding a node to a single node cluster VCS uses the Group Membership Services/Atomic Broadcast (GAB) protocol for cluster membership and reliable cluster communications. GAB has two major functions. It handles the following tasks: ■ Cluster membership ■ Cluster communications To configure GAB, use vi or another editor to set up an /etc/gabtab configuration file on each node in the cluster.
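For a two-node cluster the resulting /etc/gabtab holds a single gabconfig line. This hedged sketch derives the seed count N from an llthosts-style file and writes the line to a scratch path (on a real node you would write /etc/gabtab directly):

```shell
#!/bin/sh
# Build the gabtab seed line from the number of llthosts entries.
printf '0 galaxy\n1 nebula\n' > /tmp/llthosts.demo
n=$(grep -c . /tmp/llthosts.demo)       # count non-empty entries
echo "/sbin/gabconfig -c -n$n" > /tmp/gabtab.demo
cat /tmp/gabtab.demo
```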
Installing VCS on a single node Adding a node to a single-node cluster To reconfigure VCS on existing nodes 1 On Node A, create the files /etc/llttab, /etc/llthosts, and /etc/gabtab. Use the files that are created on Node B as a guide, customizing the /etc/llttab for Node A. 2 Start LLT on Node A. # /etc/init.d/llt start 3 Start GAB on Node A. # /etc/init.d/gab start 4 Check the membership of the cluster. # gabconfig -a 5 Start VCS on Node A.
Installing VCS on a single node Adding a node to a single-node cluster To verify the nodes' configuration 1 On Node B, check the cluster membership. # gabconfig -a 2 Start the VCS on Node B. # hastart 3 Verify that VCS is up on both nodes. # hastatus 4 List the service groups. # hagrp -list 5 Unfreeze the service groups. # hagrp -unfreeze group -persistent 6 Implement the new two-node configuration.
Installing VCS on a single node Adding a node to a single-node cluster
Chapter 9 Uninstalling VCS This chapter includes the following topics: ■ About the uninstallvcs program ■ Preparing to uninstall VCS ■ Uninstalling VCS 5.0 RU3 About the uninstallvcs program You can uninstall VCS from all nodes in the cluster or from specific nodes in the cluster using the uninstallvcs program. The uninstallvcs program does not automatically uninstall VCS enterprise agents, but offers uninstallation if proper RPMs dependencies on VRTSvcs are found.
Uninstalling VCS Uninstalling VCS 5.0 RU3 ■ If you have manually edited any of the VCS configuration files, you need to reformat them. Uninstalling VCS 5.0 RU3 You must meet the following conditions to use the uninstallvcs program to uninstall VCS on all nodes in the cluster at one time: ■ Make sure that the communication exists between systems. By default, the uninstaller uses ssh. ■ Make sure you can execute ssh or rsh commands as superuser on all nodes in the cluster.
3 Enter the names of the systems from which you want to uninstall VCS.
The program performs system verification checks and asks to stop all running VCS processes.
4 Enter y to stop all the VCS processes.
The program proceeds with uninstalling the software.
5 Answer the prompt to proceed with uninstalling the software. Select one of the following:
■ To uninstall VCS on all nodes, press Enter.
■ To uninstall VCS only on specific nodes, enter n.
■ The uninstallvcs program is not available in /opt/VRTS/install.
Appendix A
Advanced VCS installation topics

This appendix includes the following topics:
■ Using the UDP layer for LLT
■ Performing automated VCS installations
■ Installing VCS with a response file where ssh or rsh are disabled

Using the UDP layer for LLT
VCS 5.0 RU3 provides the option of using LLT over the UDP (User Datagram Protocol) layer for clusters using wide-area networks and routers. UDP makes LLT packets routable and thus able to span longer distances more economically.
■ Make sure that the LLT private links are on different physical networks. If they are not, make sure that the links are on separate subnets. Set the broadcast address in /etc/llttab explicitly depending on the subnet for each link.
See “Broadcast address in the /etc/llttab file” on page 158.
■ Make sure that each NIC has a configured IP address before you configure LLT.
■ See “Sample configuration: direct-attached links” on page 161.
■ See “Sample configuration: links crossing IP routers” on page 163.
Table A-1 describes the fields of the link command that are shown in the /etc/llttab file examples. Note that some of the fields differ from the command for standard LLT links.
Table A-2 Field description for set-addr command in /etc/llttab

Field          Description
node-id        The ID of the cluster node; for example, 0.
link tag-name  The string that LLT uses to identify the link; for example, link1, link2, ....
address        The IP address assigned to the link for the peer node.
For example, with the following interfaces:
■ For the first network interface:
IP address=192.168.30.1, Broadcast address=192.168.30.255, Netmask=255.255.255.0
■ For the second network interface:
IP address=192.168.31.1, Broadcast address=192.168.31.255, Netmask=255.255.255.0

Configuring the broadcast address for LLT
For nodes on different subnets, set the broadcast address in /etc/llttab depending on the subnet that the links are on.
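For reference, the broadcast address of a link can be derived from its IP address and netmask with a per-octet bitwise calculation. The sketch below is generic IPv4 arithmetic, not a VCS utility; it simply reproduces the interface values in the example above.

```shell
# Sketch: derive an IPv4 broadcast address from an IP address and
# netmask. Per octet: broadcast = (ip AND mask) OR (NOT mask).
broadcast() {
  ip=$1 mask=$2
  # Split the dotted quads using IFS-based word splitting.
  oldIFS=$IFS
  IFS=.
  set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
  set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
  IFS=$oldIFS
  # (255 - m) is the bitwise complement of an 8-bit mask octet.
  echo "$(( (i1 & m1) | (255 - m1) )).$(( (i2 & m2) | (255 - m2) )).$(( (i3 & m3) | (255 - m3) )).$(( (i4 & m4) | (255 - m4) ))"
}

broadcast 192.168.30.1 255.255.255.0   # prints 192.168.30.255
broadcast 192.168.31.1 255.255.255.0   # prints 192.168.31.255
```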
Figure A-1 A typical configuration of direct-attached links that use LLT over UDP
[Figure: Node0 and Node1 connected through switches. Node0 endpoints: eth1, UDP port 50001, IP 192.1.3.1, link tag link2; eth2, UDP port 50000, IP 192.1.2.1, link tag link1. Node1 endpoints: eth1, 192.1.3.2, link tag link2; eth2, 192.1.2.2, link tag link1.]
The configuration that the /etc/llttab file for Node 0 represents has directly attached crossover links.
link link1 udp - udp 50000 - 192.1.2.2 192.1.2.255
link link2 udp - udp 50001 - 192.1.3.2 192.1.3.255

Sample configuration: links crossing IP routers
Figure A-2 depicts a typical configuration of links crossing an IP router employing LLT over UDP. The illustration shows two nodes of a four-node cluster.
set-addr 3 link1 192.1.7.3
set-addr 3 link2 192.1.8.3
#disable LLT broadcasts
set-bcasthb 0
set-arp 0

The /etc/llttab file on Node 0 resembles:

set-node Node0
set-cluster 1
link link1 udp - udp 50000 - 192.1.1.1
link link2 udp - udp 50001 - 192.1.2.1
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 1 link1 192.1.3.1
set-addr 1 link2 192.1.4.
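Putting the fragments above together, the sketch below assembles a complete, hypothetical /etc/llttab for Node 1 of the routed example. The local link addresses and the set-addr entries for Node 0 follow the numbering in the samples above, but treat every address as an illustrative placeholder for your own addressing plan. The file is written to a temporary path so the sketch is safe to run anywhere.

```shell
# Sketch: Node 1's /etc/llttab when LLT-over-UDP links cross IP
# routers. Broadcasts do not cross routers, so each peer's per-link
# address is listed explicitly with set-addr and LLT broadcasts are
# disabled. All addresses below are illustrative placeholders.
LLTTAB="${LLTTAB:-/tmp/llttab.node1-routed}"

cat > "$LLTTAB" <<'EOF'
set-node Node1
set-cluster 1
link link1 udp - udp 50000 - 192.1.3.1 -
link link2 udp - udp 50001 - 192.1.4.1 -
#set address of each link for all peer nodes in the cluster
#format: set-addr node-id link tag-name address
set-addr 0 link1 192.1.1.1
set-addr 0 link2 192.1.2.1
#disable LLT broadcasts
set-bcasthb 0
set-arp 0
EOF

cat "$LLTTAB"
```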
Performing automated VCS installations

To perform automated installation
1 Navigate to the folder containing the installvcs program.
# cd /mnt/cdrom/cluster_server
2 Start the installation from one of the cluster systems where you have copied the response file.
# ./installvcs -responsefile /tmp/response_file
Where /tmp/response_file is the response file’s full path name.
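A response file is a Perl fragment that assigns the $CPI::CFG variables described in Table A-3. The sketch below writes a minimal, hypothetical example; the system names, cluster name, and cluster ID are placeholders, and the exact set of variables a given installation needs depends on the options you select.

```shell
# Sketch: write a minimal, hypothetical VCS response file.
# System names, cluster name, and cluster ID are placeholders only.
RESPFILE="${RESPFILE:-/tmp/response_file.example}"

# The heredoc is quoted so the $CPI::CFG variables are written
# literally (they are Perl, not shell, variables).
cat > "$RESPFILE" <<'EOF'
$CPI::CFG{SYSTEMS}=["sysA", "sysB"];
$CPI::CFG{VCS_CLUSTERNAME}="example_clus";
$CPI::CFG{VCS_CLUSTERID}=7;
EOF

cat "$RESPFILE"
# The file would then be passed to the installer, for example:
#   ./installvcs -responsefile /tmp/response_file.example
```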
$CPI::CFG{CMC_SERVICE_PASSWORD}="U2FsdVkX18v...n0hTSWwodThc+rX";
$CPI::CFG{ENCRYPTED}="U2FsdGVkX1+k2DHcnW7b6...
Table A-3 Response file variables (continued)

$CPI::CFG{SYSTEMS}
List of systems on which the product is to be installed, uninstalled, or configured.
List or scalar: list
Optional or required: required

$CPI::CFG{SYSTEMSCFG}
List of systems to be recognized in configuration if secure environment prevents all systems from being installed at once.
$CPI::CFG{OPT}{PKGPATH}
Defines a location, typically an NFS mount, from which all remote systems can install product depots. The location must be accessible from all target systems.
$CPI::CFG{KEYS}{SYSTEM}
List of keys to be registered on the system.
List or scalar: list
Optional or required: optional

$CPI::CFG{OPT_LOGPATH}
Specifies the location where the log files are to be copied. The default location is /opt/VRTS/install/logs.
$CPI::CFG{VCS_SMTPRSEV}
Defines the minimum severity level of messages (Information, Warning, Error, SevereError) that listed SMTP recipients are to receive. Note that the ordering of severity levels must match that of the addresses of SMTP recipients.
$CPI::CFG{OPT}{UNINSTALL}
List of systems where VCS must be uninstalled.
List or scalar: scalar
Optional or required: optional

Installing VCS with a response file where ssh or rsh are disabled
In secure enterprise environments, ssh or rsh communication is not allowed between systems.
5 After the installation is complete, review the installer report.
The installer stores the response file in the /opt/VRTS/install/logs/installvcs-universaluniqueidentifier/.response directory, where universaluniqueidentifier is a variable that uniquely identifies the file.
Index

A
about
  global clusters 17
adding
  users 71
adding node
  to a one-node cluster 143
attributes
  UseFence 101

C
cables
  cross-over Ethernet 123
cluster
  creating a single-node cluster
    installer 139
    manual 141
  four-node configuration 14
  removing a node from 131
  verifying 84
  verifying operation 117
Cluster Management Console 20
Cluster Manager
  installing Java Console 80
cold start
  running VCS 16
commands
  gabconfig 116, 149
  hastart 131
  hastatus 117
  hasys 118
  lltconfig 105
  lltstat 114
  vxdisksetup (initializing
F
fibre channel 23

G
GAB
  description 15
  manual configuration 149
  port membership information 116
  verifying 116
gabconfig command 116, 149
  -a (verifying GAB) 116
gabtab file
  creating 149
  verifying after installation 105
global clusters 17
  configuration 74

H
hardware
  configuration 14
  configuring network and storage 23
hastart 131
hastatus -summary command 117
hasys -display command 118
hubs 40
  independent 123

I
I/O fencing
  checking disks 95
  setting up 99
  shared storage 95
installation required
llttab file
  verifying after installation 105

M
MAC addresses 40
main.
starting installation
  installvcs program 62
  Veritas product installer 62
starting VCS 77
storage
  fully shared vs. distributed 14
  shared 14
switches 40
Symantec Product Authentication Service 19, 33, 69
system communication using rsh ssh 44
system state attribute value 117

U
uninstalling
  prerequisites 153
  VCS 153
uninstallvcs 153

V
variables
  MANPATH 47
  PATH 47
VCS
  basics 13
  command directory path variable 113
  configuration files
    main.