Cluster Platform 4500/3 User’s Guide
A product from the SunTone™ Platforms portfolio
Sun Microsystems, Inc.
901 San Antonio Road
Palo Alto, CA 94303-4900 U.S.A.
650-960-1300
Part No. 816-0445-11
July 2001, Revision A
Send comments about this document to: docfeedback@sun.com
Copyright 2001 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, CA 94303-4900 U.S.A. All rights reserved. This product or document is distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.
Contents

Preface ix
    Related Documentation ix
    Typographic Conventions x
    Shell Prompts xi
    Accessing Sun Documentation Online xi
    Ordering Sun Documentation xi
    Sun Welcomes Your Comments xii

1. Introduction 1
    Tools for Reading Online Documentation
        Netscape Navigator Browser
        AnswerBook2 Server
        Solaris Documentation

2. The Cluster Platform 4500/3 System
    Connectivity 9
    Network Administration 9
    Miscellaneous Hardware 9
    Software Components 10
    Cluster Platform Component Location 13
    Power and Heating Requirements 15
    Cabling the System 16
    Customizing the Cluster Platform 4500/3 18
    ▼ Customizing the Terminal Concentrator 18
    ▼ Starting the Cluster Console 38
    ▼ Installing the Software Stack on Both Cluster Nodes 42
    Cluster Platform 4500/3 Recovery 43
    Before You Begin Recovery 43
    Recovery CD-ROMs
    ▼ Installing the Recovery CD 46

A. Laptop Settings to Access Monitor Mode
Figures

FIGURE 2-1 Ethernet Address on the Disk Array Pull Tab 7
FIGURE 2-2 Cluster Platforms I/O Board Placement 10
FIGURE 2-3 Cluster Platform Interconnections and NAFO 12
FIGURE 2-4 Cluster Platform Rack Placement 14
FIGURE 2-5 Cluster Platform Internal Cabling 17
FIGURE 2-6 Cluster Control Panel Window 39
FIGURE 2-7 Cluster Nodes Console Windows 40
FIGURE 2-8 Cluster Console Window 41
Tables

TABLE 2-1 Ethernet IP Address Worksheet 8
TABLE 2-2 Cluster Platform Rack Components
TABLE 2-3 Power and Heat Requirements for Cluster Platforms
TABLE 2-4 Cluster Platform Cables 16
TABLE C-1 Boot Disk to Server Connections 65
TABLE C-2 Disk Array to Hub Connections
TABLE C-3 FC-AL Hub to Server Connections
TABLE C-4 Management Server Connections 66
TABLE C-5 Terminal Concentrator to Management Server and Cluster Nodes Connections
TABLE C-6 Node to Node Connections 67
TABLE C-7
Preface

The Cluster Platform 4500/3 provides self-sustained building blocks, integrated through Sun Cluster technology, to support highly available applications.

Related Documentation

Application                               Title                                                                   Part Number
Installation                              Sun Enterprise 6500/5500/4500 Systems Installation Guide                805-2631
                                          Sun StorEdge T3 Disk Tray Installation, Operation, and Service Manual   805-1062
                                          Sun StorEdge Component Manager 2.1 Release Notes                        806-4814
Server cabinet (expansion rack) storage   Sun StorEdge Expansion Cabinet Installation and Service Manual          805-3067
Expansion cabinet                         Sun StorEdge Expansion Cabinet Installation and Service Manual          805-3067
                                          Sun StorEdge FC-100 Hub Installation and Service Manual                 805-0315
Configuring data services                 Sun Cluster 3.0 Data Services Installation and Configuration Guide      806-1421
Development                               Sun Cluster 3.0
Shell Prompts

Shell                                     Prompt
C shell                                   machine_name%
C shell superuser                         machine_name#
Bourne shell and Korn shell               $
Bourne shell and Korn shell superuser     #

Accessing Sun Documentation Online

A broad selection of Sun system documentation is located at:
http://www.sun.com/products-n-solutions/hardware/docs
A complete set of Solaris documentation and many other titles are located at:
http://docs.sun.com

Ordering Sun Documentation

Fatbrain.com, an Internet professional bookstore, stocks select product documentation from Sun Microsystems, Inc.
Sun Welcomes Your Comments

Sun is interested in improving its documentation and welcomes your comments and suggestions. You can email your comments to Sun at:
docfeedback@sun.com
Please include the part number (816-0445-11) of your document in the subject line of your email.
CHAPTER 1 Introduction This manual describes how to customize your Cluster Platform 4500/3 system. It is intended for the Solaris System Administrator who is familiar with the Solaris™ operating environment, Solstice DiskSuite™ software, and Sun Cluster software. For specific information about configuring disksets, disk volumes, file systems, and data services, refer to the Sun Cluster 3.0 documentation.
Netscape Navigator Browser Use the Netscape browser to read documentation provided as HTML files, view the output from an AnswerBook2 server, or read Sun product documentation at: http://docs.sun.com. The Netscape browser can be downloaded from the following location: http://www.netscape.com AnswerBook2 Server The AnswerBook2 server software processes sets of online manuals into content that you can access, search, and view through the Netscape Navigator browser.
Solaris Documentation AnswerBook™ documentation about the Solaris 8 10/00 Operating Environment is included on the Solaris™ 8 Documentation CD.
CHAPTER 2

The Cluster Platform 4500/3 System

The Cluster Platform 4500/3 system provides self-sustained platforms, integrated through the Sun Cluster technology, to support highly available applications. This two-node cluster system with shared, mirrored FC-AL storage can be used to implement a highly available file server, web server, mail server, or Oracle® database server. Sun Cluster 3.0 provides global file systems, global devices, and scalable services.
Your Integrated System Your system includes a two-node cluster with shared, mirrored storage, a terminal concentrator, and a management server. The Sun StorEdge T3 arrays are connected to two FC-AL hubs. An Ethernet hub provides connection to the management server and Sun StorEdge T3 arrays. These components are cabled to provide redundant cluster interconnect between nodes, and to provide access to shared storage and production networks.
Note – This document does not provide information to support items 6 through 9. For specific implementation details, refer to the Sun Cluster 3.0 documentation.

The Ethernet address for each cluster node is located on the Customer Information, System Record sheet. Use the serial number on the information sheet and on the back of each node to correctly identify the Ethernet address. (See FIGURE 2-4 for the placement of each cluster node.)
TABLE 2-1 provides a worksheet to assist with networking information. You will be referred back to the information you place in this table when you customize the cluster configuration.

TABLE 2-1  Ethernet IP Address Worksheet

Network Device                               Ethernet Address    IP Address    Node Name
Netra™ T1 AC200
Terminal concentrator
Sun Enterprise 4500 system No. 1 (node 1)
Sun Enterprise 4500 system No. 2 (node 2)
Sun StorEdge T3 array No. 1 (node 1)
Sun StorEdge T3 array No. 2 (node 2)
■ Two FC-100 FC-AL hubs, with three installed GBICs in each hub

Connectivity

■ Cluster interconnect: The cluster connectivity uses Ethernet patch cables (no Ethernet switch required), with redundant qfe 100BASE-T ports (qfe0 and qfe4) on two separate SBus controllers to avoid a controller single point of failure.

■ Public networks: The cluster nodes’ main network connection is implemented using the on-board hme0 (100BASE-T) primary port, with hme1 (100BASE-T) as the failover port (a hedged example of creating the corresponding failover group follows this list).
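Adapter failover on the public network is handled by Sun Cluster’s Public Network Management (PNM). The sketch below shows how a NAFO backup group covering the primary and failover adapters might be created on each node once the cluster software is installed; the group name nafo0 is an assumption, and the exact options should be confirmed against the Sun Cluster 3.0 documentation:

# pnmset -c nafo0 -o create hme0 hme1    (create backup group nafo0 with hme0 active and hme1 as standby)
# pnmstat -l                             (list NAFO groups and their status)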
FIGURE 2-2  Cluster Platforms I/O Board Placement

Software Components

The Cluster Platform 4500/3 software packages include the following:
■ Solaris 8 10/00 operating environment
■ Sun Cluster 3.0 software with patches, and Sun Cluster 3.0 installation CDs
■ Solstice DiskSuite 4.2.1
■ UNIX® File System (UFS)
■ Sun StorEdge Component Manager 2.1

Note – A collection of software patches is applied to each cluster node.
Note – See “Ethernet IP Address Worksheet” on page 8 to specify the appropriate information for your network environment.
Production Network sc3sconfl1-ms Ser. A Management Server 8:0:20:c2:1b:3c 129.153.47.38 eri0 qfe0 T3 No. 2 mirror sc3sconf1-T3 1 sc3sconf1-T3 2 0 10BT sc3sconf1-tc 1 Terminal 2 Concentrator 3 0:50:bd:bb:a:4:0 129.153.47.62 sc3sconf1-n1 Cluster A node 2 8:0:20:d1:e1:4 129.153.47.82 T3 No. 1 data FC-AL hub 0 0123456 1 FC-AL hub 1 0123456 hme0 hme1 c1 c3 qfe4 sc3sconf1-n0 A hme0 Cluster hme1 node 1 c1 8:0:20:d1:e6:f6 c3 129.153.47.
Cluster Platform Component Location

FIGURE 2-4 shows how the Cluster Platform 4500/3 is arranged in the expansion cabinet. TABLE 2-2 lists the rack components and the quantities required.

Note – The rack arrangement complies with weight distribution, cooling, EMI, and power requirements.
FIGURE 2-4  Cluster Platform Rack Placement (top to bottom in the expansion cabinet: Ethernet hub; FC-AL hub No. 1 and hub No. 2; management server; D130 boot disks No. 4 through No. 1; air baffle; system No. 2, cluster node 2; system No. 1, cluster node 1; air baffle; disk array No. 2, mirror; disk array No. 1, data)
TABLE 2-2  Cluster Platform Rack Components

Component                          Quantity
Sun StorEdge expansion cabinet     1
Ethernet hub                       1
Netra T1 AC200 management server   1
Netra st D130 boot disks           4
Air baffle                         2
Sun Enterprise cluster node        2
Sun StorEdge T3 array              2
Terminal concentrator              1

Power and Heating Requirements

The Cluster Platform 4500/3 hardware should have two dedicated AC breaker panels. The cabinet should not share these breaker panels with other, unrelated equipment.
Cabling the System

The Cluster Platform 4500/3 is shipped with the servers, hubs, and arrays already connected in the cabinet. You should not need to cable the system. Refer to FIGURE 2-5 and the tables in Appendix C when servicing the cables. This section describes how the Cluster Platform 4500/3 components are cabled when shipped from the factory. The integrated platform provides FC-AL cables connected to the on-board GBICs on the I/O board.
FIGURE 2-5  Cluster Platform Internal Cabling (legend: SCSI cable, DB-25/RJ-45 serial cable, null Ethernet cable, F100 cable, serial cable, RJ-45 Ethernet cable)
Customizing the Cluster Platform 4500/3

When the Cluster Platform 4500/3 is shipped from the factory, the Netra T1 AC200 is preloaded with all of the necessary software to install the cluster nodes with the Solaris operating environment and Sun Cluster 3.0 software. Because all of the cables are connected and labeled in the factory, configuring the terminal concentrator first enables the cluster administrator to easily configure the cluster.
c. From a terminal window on the Sun workstation, enter the following command:

# /usr/bin/tip hardwire

Note – If the port is busy, refer to “Troubleshooting the Cluster Platform 4500/3 Installation” on page 69 in Appendix D.

3. Configure the terminal concentrator device:
■ Power on the terminal concentrator.
■ Within 5 seconds after power-on, press and release the TEST button.

The terminal concentrator undergoes a series of diagnostic tests that take approximately 60 seconds to complete.
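The tip hardwire command relies on the hardwire entry in /etc/remote pointing at the workstation serial port that is cabled to the terminal concentrator. The following is the stock Solaris entry, shown as a sketch that assumes serial port B (/dev/term/b); adjust the dv= device to match your cabling:

hardwire:\
        :dv=/dev/term/b:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D: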
4. Modify the default IP address that will be used in your network. Use the addr command to modify the network configuration of the terminal concentrator. Use the addr -d command to verify the network configuration: monitor:: addr Enter Internet address [0.0.0.0]:: 192.212.87.62 Internet address: 192.212.87.62 Enter Subnet mask [255.255.0.0]:: 192.255.255.0 Subnet mask: 192.255.255.0 Enter Preferred load host Internet address []:: 0.0.0.0 Preferred load host address: 0.0.0.
6. Terminate your tip session by entering ~ . (tilde and period). Power-cycle the terminal concentrator to enable the IP address changes and wait at least two minutes for the terminal concentrator to activate its network. monitor:: ~ . Note – Double-check to ensure that the Ethernet hub is connected to the local area network. An RJ-45 cable must connect the Ethernet hub to the administration network backbone. a. Disconnect the RJ-45 serial cable (Part No.
8. To access the terminal concentrator, include the default router in the terminal concentrator configuration, and telnet to the terminal concentrator:

# telnet 192.212.87.62
Trying 192.212.87.62...
Connected to 192.212.87.62.
Escape character is ’^]’.
cli
Enter Annex port name or number: cli
Annex Command Line Interpreter * Copyright 1991 Xylogics, Inc.
annex: su
Password: 192.212.87.62 (the password defaults to the assigned IP address)
annex# edit config.annex
The terminal concentrator opens an editing session and displays the config.annex file.

9. Type the following information into the config.annex file; replace the following variable with the IP address obtained from your network administrator.

%gateway
net default gateway 148.212.87.248 metric 1 active

Ctrl-W: save and exit    Ctrl-X: exit    Ctrl-F: page down    Ctrl-B: page up

10. Enter the w command to save changes and exit the config.annex file.

11.
12. From the Sun workstation, access the terminal concentrator:

# telnet 192.212.87.62
Trying 192.212.87.62...
Connected to 192.212.87.62.
Escape character is ’^]’
Rotaries Defined:
cli
Enter Annex port name or number:

Port designations follow:
■ Port 1 = management server
■ Port 2 = cluster node 1
■ Port 3 = cluster node 2

a. Enter the command /usr/openwin/bin/xhost 192.212.87.38 to allow your windows manager to display screens from remote systems.
b.
Note – Because the management server is not provided with a monitor, it is only accessible over the network from another Sun workstation. When executing commands on the management server that require a local display, verify that the DISPLAY shell environment variable is set to the local hostname.

14. Choose a specific localization. At this time, only the English and U.S.A. locales are supported. Select a supported locale.

Select a Locale
0. English (C - 7-bit ASCII)
1. Canada-English (ISO8859-1)
2. Thai
3.
15. Select the appropriate terminal emulation:

What type of terminal are you using?
1) ANSI Standard CRT
2) DEC VT100
3) PC Console
4) Sun Command Tool
5) Sun Workstation
6) X Terminal Emulator (xterms)
7) Other
Type the number of your choice and press Return: 2

After you select the terminal emulation, network connectivity is acknowledged: The eri0 interface on the management server is intended for connectivity to the production network.
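Once the remaining network questions in this procedure have been answered and the management server is running, the eri0 production interface can be checked from its console with standard Solaris commands; a minimal sketch:

# ifconfig eri0     (confirm the interface is up with the assigned IP address and netmask)
# netstat -rn       (confirm the default route points at the router entered later in this procedure)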
16. Select Dynamic Host Configuration Protocol (DHCP) services. Because the management server must have a fixed IP address and name recognized by outside clients, DHCP is not supported: On this screen you must specify whether or not this system should use DHCP for network interface configuration. Choose Yes if DHCP is to be used, or No if the interfaces are to be configured manually.
18. Enter the name of the management server. Consult your local network administrator to obtain the appropriate host name. The following management server name is an example. On this screen you must enter your host name, which identifies this system on the network. The name must be unique within your domain; creating a duplicate host name will cause problems on the network after you install Solaris. A host name must be at least two characters; it can contain letters, digits, and minus signs (-).
20. Deselect IPv6 support. Currently, only version 4 of the IP software is supported. Verify that IPv6 support is disabled. On this screen you should specify whether or not IPv6, the next generation Internet Protocol, will be enabled on this machine. Enabling IPv6 will have no effect if this machine is not on a network that provides IPv6 service. IPv4 service will not be affected if IPv6 is enabled. > To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
22. Deselect and confirm Kerberos security. Only standard UNIX security is currently supported. Verify that Kerberos security is disabled.

Specify Yes if the system will use the Kerberos security mechanism. Specify No if this system will use standard UNIX security.

Configure Kerberos Security
---------------------------
[ ] Yes
[X] No

> Confirm the following information. If it is correct, press F2; to change any information, press F4.

Configure Kerberos Security: No

23.
Note – The two cluster nodes will be automatically configured to not support any naming services. This default configuration avoids the need to rely on external services. On this screen you must provide name service information. Select the name service that will be used by this system, or None if your system will either not use a name service at all, or if it will use a name service not listed here. > To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
25. Select a netmask. Consult your network administrator to specify the netmask of your subnet. The following shows an example of a netmask:

On this screen you must specify the netmask of your subnet. A default netmask is shown; do not accept the default unless you are sure it is correct for your subnet. A netmask must contain four sets of numbers separated by periods (for example 255.255.255.0).

Netmask: 192.255.255.0
26. Select the appropriate time zone and region. Select the time zone and region to reflect your environment:

On this screen you must specify your default time zone. You can specify a time zone in three ways: select one of the geographic regions from the list, select other - offset from GMT, or select other - specify time zone file.

> To make a selection, use the arrow keys to highlight the option and press Return to mark it [X].
27. Set the date and time, and confirm all information.

> Accept the default date and time or enter new values.
Date and time: 2000-12-21 11:47
Year (4 digits) : 2000
Month (1-12) : 12
Day (1-31) : 21
Hour (0-23) : 11
Minute (0-59) : 47

> Confirm the following information. If it is correct, press F2; to change any information, press F4.
System part of a subnet: Yes
Netmask: 255.255.255.0
Time zone: US/Pacific
Date and time: 2000-12-21 11:47:00

28. Select a secure root password.
Note – Use TABLE 2-1 on page 8 as a reference to input data for Step 29 through Step 36. The variables shown in Step 29 through Step 32 are sample node names and parameters. 29. Add the router name and IP address: Enter the Management Server’s Default Router (Gateway) IP Address... 192.145.23.248 30. Add the cluster environment name: Enter the Cluster Environment Name (node names will follow)... sc3sconf1 31.
34. When prompted to confirm the variables, type y if all of the variables are correct. Type n if any of the variables are not correct, and re-enter the correct variables. Enter 99 to quit the update mode once all variables are displayed correctly.

Option    Variable Setting
------    ------------------------------------------
1)        Management Server’s Default Router= 192.212.87.
…
99)
35. Error messages indicate that the devices did not start completely. Any error messages that you receive at this point are normal.

Boot device: /pci@1f,0/pci@1/scsi@8/disk@0,0 File and args:
SunOS Release 5.8 Version Generic_108528-06 64-bit
Copyright 1983-2000 Sun Microsystems, Inc. All rights reserved.
Hostname: unknown
metainit: unknown: there are no existing databases
Configuring /dev and /devices
Configuring the /dev directory (compatibility devices)
The system is coming up. Please wait.
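The metainit message in the output above only reports that no Solstice DiskSuite state database replicas exist yet on the freshly installed system. Once the management server is up, the DiskSuite state can be inspected with the standard utilities; a minimal sketch (output is empty or an error until replicas and metadevices are created):

# metadb -i        (list state database replicas and their status flags)
# metastat -p      (print any configured metadevices in md.tab format)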
36. When the management server reboots, log in as root user to start the terminal concentrator customization.

sc3sconf1-ms console login: root
Password:
Last login: Thu Jan 4 15:40:24 on console
Jan 4 15:51:14 sc3sconf1-ms login: ROOT LOGIN /dev/console
Sun Microsystems Inc. SunOS 5.8
FIGURE 2-6 Cluster Control Panel Window 2. In the Cluster Control Panel window, double-click the Cluster Console (console mode) icon to display a Cluster Console window for each cluster node (see FIGURE 2-7). Note – Before you use an editor in a Cluster Console window, verify that the TERM shell environment value is set and exported to a value of vt220. FIGURE 2-8 shows the terminal emulation in the Cluster Console window.
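A minimal way to satisfy the TERM requirement in the note above, from a Bourne or Korn shell in each console window (C shell users would use setenv TERM vt220 instead):

# TERM=vt220; export TERM
# echo $TERM       (should print vt220)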
FIGURE 2-7  Cluster Nodes Console Windows

3. To enter text into both node windows simultaneously, click the cursor in the Cluster Console window and enter the text. The text does not display in the Cluster Console window, but it displays in both node windows. For example, the /etc/hosts file can be edited on both cluster nodes simultaneously (a sketch of such an edit follows). This ensures that both nodes maintain identical file modifications.
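As a concrete illustration of the simultaneous edit described above, entries such as the following might be appended to /etc/hosts on both nodes at once; the host names follow this guide’s sc3sconf1 examples, and the IP addresses are placeholders for the values you recorded in TABLE 2-1:

192.212.87.38   sc3sconf1-ms    # management server
192.212.87.81   sc3sconf1-n0    # cluster node 1
192.212.87.82   sc3sconf1-n1    # cluster node 2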
FIGURE 2-8  Cluster Console Window

▼ Installing the Software Stack on Both Cluster Nodes

1. Use the ccp(1M) Cluster Console window to enter the following command into both nodes simultaneously:

{0} ok setenv auto-boot? true
{0} ok boot net - install

Note – You must include spaces around the dash (-) character in the “boot net - install” string.

The Solaris operating environment, Solstice DiskSuite, and Sun Cluster 3.0 are automatically installed.
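Before entering the boot command above, the OpenBoot PROM settings on each node can be double-checked from the same console windows; printenv is a standard OpenBoot command, and the values shown are only what this procedure expects:

{0} ok printenv auto-boot?     (should report true after the setenv above)
{0} ok printenv boot-device    (note the current value in case it must be restored later)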
3. Configure the Sun StorEdge T3 array shared disk storage, using the Solstice DiskSuite software. Solstice DiskSuite configuration involves creating disksets, metadevices, and file systems. (Refer to the included Solstice DiskSuite documentation; a hedged command sketch follows this list.)
4. Select a quorum device to satisfy failure fencing requirements for the cluster. (Refer to the included Sun Cluster 3.0 documentation.)
5. Install and configure the highly available application for the cluster environment. (Refer to the Sun Cluster 3.0 documentation.)
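Diskset layout is site-specific and is documented in the Solstice DiskSuite and Sun Cluster 3.0 manuals; the following is only a minimal sketch of the general command flow for steps 3 and 4, using a hypothetical diskset name (webds) and hypothetical DID device names. Substitute the devices that scdidadm -L reports for your shared T3 arrays:

# scdidadm -L                                        (list DID device mappings for the shared storage)
# metaset -s webds -a -h sc3sconf1-n0 sc3sconf1-n1   (create the diskset with both nodes as hosts)
# metaset -s webds -a /dev/did/rdsk/d4 /dev/did/rdsk/d7    (add one disk from each T3 array)
# metainit -s webds d10 1 1 /dev/did/rdsk/d4s0       (submirror on the data array)
# metainit -s webds d20 1 1 /dev/did/rdsk/d7s0       (submirror on the mirror array)
# metainit -s webds d30 -m d10                       (create the mirror with the first submirror)
# metattach -s webds d30 d20                         (attach the second submirror)
# newfs /dev/md/webds/rdsk/d30                       (create a UFS file system on the mirror)

A quorum device (step 4) can then be assigned from one of the shared DID devices with the interactive scsetup(1M) utility, which runs scconf(1M) on your behalf.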
■ Ethernet addresses Note – Because the recovery results in a generic, unconfigured system, you must restore all site-specific configuration files from backup. If your site uses special backup software such as VERITAS NetBackup, you must reinstall and configure that software.
3. This character forces access to the telnet prompt. Enter the Stop-A command, as follows: telnet> send brk 4. Boot the system from the CD-ROM: ok boot cdrom The system boots from the CD-ROM and prompts you for the mini-root location (a minimized version of the Solaris operating environment). This procedure takes approximately 15 minutes. Standard Cluster Environment Recovery Utility ... Starting the Standard Cluster Recovery Utility ...
5. Select a CD-ROM drive from the menu. Once the Cluster Platform 4500/3 recovery utility has placed a copy of the Solaris operating environment onto a suitable disk slice, the system reboots. You are prompted to specify the CD-ROM drive. Completing this process takes approximately 15 minutes. Standard Cluster Environment Recovery Utility V. 1.
7. Install the second data CD. When all files are copied from the first data CD, you are prompted to remove CD 1 and mount CD 2 on the CD-ROM drive. After CD 2 is mounted, press the Return key. The software and patch files are copied from CD 2 onto the management server boot disk. When all files are copied from both CDs, the system automatically shuts down. You must reboot the system. This process takes approximately 20 minutes.

Please place Recovery CD #2 in the CD-ROM drive. Press <Return> when mounted.
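After the automatic shutdown, the management server can be booted from its restored boot disk; a minimal sketch, assuming the default disk devalias points at the recovered boot device:

ok boot disk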
APPENDIX A Laptop Settings to Access Monitor Mode This appendix provides the settings you must use to access monitor mode using a laptop computer. These are different from those used by a Sun workstation. ▼ To Access the Terminal Concentrator from a Laptop 1. Provide monitor mode connectivity into the terminal concentrator using a laptop computer running the Microsoft Windows 98 Operating System. a.
FIGURE A-1  Terminal Concentrator Monitor Mode Connectivity Using a Laptop Computer (the laptop’s serial COM1 port connects through a customer-supplied DB-9/DB-25 female/female adapter and the DB-25/RJ-45 serial cable, Part No. 5121A, to port 1 of the terminal concentrator)

2. Click Start ➤ Programs ➤ Accessories ➤ Communications ➤ HyperTerminal to open the HyperTerminal folder.
3. In the HyperTerminal folder, double-click the HyperTerm icon to display the Connection Description window.
4.
9. In the StandardConfig Properties window, click the Settings option, and select VT100 for the Emulation option. At this point, the HyperTerminal window provides monitor mode access to the terminal concentrator. Note – To set up the terminal concentrator, see “Customizing the Terminal Concentrator” on page 18.
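The HyperTerminal session reaches the terminal concentrator’s monitor mode only if the COM1 port properties match the concentrator’s console port settings. The values below are the conventional settings for the terminal concentrator serial port; treat them as an assumption and confirm them against the terminal concentrator documentation:

Bits per second: 9600
Data bits:       8
Parity:          None
Stop bits:       1
Flow control:    None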
APPENDIX B Console Output for the Automatic Software Install on the Cluster Nodes The first portion of the installation installs the Solaris operating environment and configuration files, Solstice DiskSuite software, and patches on cluster node 1. Note – Disregard the message, WARNING: Failed to register application "DiskSuite Tool" with solstice launcher. Solstice Launcher application is not installed.
CODE EXAMPLE B-1 Solaris Software, Solstice DiskSuite, and Patches Configuring /dev and /devices Using RPC Bootparams for network configuration information.
CODE EXAMPLE B-1 Solaris Software, Solstice DiskSuite, and Patches Using finish script: patch_finish Executing SolStart preinstall phase... Executing begin script "install_begin"... Begin script install_begin execution completed.
CODE EXAMPLE B-1 Solaris Software, Solstice DiskSuite, and Patches Configuring disk (c0t3d0) - Creating Solaris disk label (VTOC) Configuring disk (c2t2d0) - Creating Solaris disk label (VTOC) Configuring disk (c2t3d0) - Creating Solaris disk label (VTOC) Creating and checking UFS file systems - Creating / (c0t2d0s3) - Creating /globaldevices (c0t2d0s7) Beginning Solaris software installation Starting software installation ======== ----Long list of packages at this spot---======== Completed software i
CODE EXAMPLE B-1 Solaris Software, Solstice DiskSuite, and Patches Finish script patch_finish execution completed. Executing JumpStart postinstall phase... Executing finish script "Drivers/sc3sconf1-n0.driver"... ROOT_DIR is set to /a. FINISH_DIR is set to /tmp/install_config/Finish. FILES_DIR is set to /tmp/install_config/Files. Management Server "MStest01" (192.153.47.38) at 8:0:20:c2:1b:3c Variable Settings (Final) ------------------------------------------Management Server’s Default Router= "192.
CODE EXAMPLE B-1 Solaris Software, Solstice DiskSuite, and Patches ================ sc3sconf1-n0.driver: Starting finish script: add_sds.fin ================ =========================== Installing Package: SUNWmdg =========================== Processing package instance from Solstice DiskSuite Tool (sparc) 4.2.1,REV=1999.11.04.18.29 Copyright 2000 Sun Microsystems, Inc. All rights reserved. ## Executing checkinstall script. Using as the package base directory.
CODE EXAMPLE B-1 Solaris Software, Solstice DiskSuite, and Patches Copyright 2000 Sun Microsystems, Inc. All rights reserved. ## Executing checkinstall script. Using as the package base directory. ## Processing package information. ## Processing system information. 8 package pathnames are already properly installed. ## Verifying disk space requirements. Installing Solstice DiskSuite Japanese localization as ## Executing preinstall script. ## Installing part 1 of 1.
CODE EXAMPLE B-2 Solstice DiskSuite Configuration Boot device: disk2:d File and args: SunOS Release 5.8 Version Generic_108528-05 64-bit Copyright 1983-2000 Sun Microsystems, Inc. All rights reserved. configuring IPv4 interfaces: hme0. Hostname: sc3sconf1-n1 Configuring /dev and /devices Configuring the /dev directory (compatibility devices) The system is coming up. Please wait. checking ufs filesystems /dev/rdsk/c0t2d0s7: is clean.
CODE EXAMPLE B-2 Solstice DiskSuite Configuration ============================================================ PLEASE WAIT: Setting up system for root and swap mirroring.
CODE EXAMPLE B-2 Solstice DiskSuite Configuration Setting the node ID for "sc3sconf1-n1" ... done (id=1) Checking for global devices global file system ... done Checking device to use for global devices file system ... done Updating vfstab ... done Verifying that NTP is configured ... done Installing a default NTP configuration ... done Please complete the NTP configuration after scinstall has finished. Verifying that "cluster" is set for "hosts" in nsswitch.conf ...
CODE EXAMPLE B-2 Solstice DiskSuite Configuration rebooting... Resetting.. Software Power ON screen not found. Can’t open input device. Keyboard not present. Using ttya for input and output. 8-slot Sun Enterprise E4500/E5500, No Keyboard OpenBoot 3.2.28, 8192 MB memory installed, Serial #15070773. Copyright 2000 Sun Microsystems, Inc. All rights reserved CODE EXAMPLE B-3 Metadevice Mirror Attachment and Cluster Node Booting removing /etc/rc2.d/S94n0-sds-mirror script...
CODE EXAMPLE B-3 Metadevice Mirror Attachment and Cluster Node Booting Configuring /dev and /devices Configuring the /dev directory (compatibility devices) Configuring DID devices did instance 1 created. did subpath /dev/rdsk/c0t2d0s2 created for instance 1. did instance 2 created. did subpath /dev/rdsk/c0t3d0s2 created for instance 2. did instance 3 created. did subpath /dev/rdsk/c0t6d0s2 created for instance 3. did instance 4 created. did subpath /dev/rdsk/c1t0d0s2 created for instance 4.
CODE EXAMPLE B-3 Metadevice Mirror Attachment and Cluster Node Booting constructed Apr 20 14:36:57 sc3sconf1-n1 cl_runtime: NOTICE: clcomm: Path sc3sconf1-n1:qfe4 tn1:qfe4 being constructed Apr 20 14:37:38 sc3sconf1-n1 qfe: SUNW,qfe0: 100 Mbps full duplex link up - internal transceiver Apr 20 14:37:38 sc3sconf1-n1 qfe: SUNW,qfe4: 100 Mbps full duplex link up - internal transceiver Apr 20 14:37:47 sc3sconf1-n1 cl_runtime: WARNING: Path sc3sconf1n1:qfe0 - tn1:qfe0 initiation encountered errors, errno = 62.
APPENDIX C Connections Within the Cluster Platform 4500/3 This appendix describes the preinstalled arrangement of some cables within the Cluster Platform 4500/3. This information is provided to assist in restoring the hardware to its original configuration after service. Cables that connect the servers to their storage components are run between the I/O system boards on the Sun Enterprise 4500 system cluster nodes to the FC-AL hubs or disk arrays.
TABLE C-2  Disk Array to Hub Connections

From Device                 From Location    To Device         To Location    Cable Length/Type
Disk array No. 1 (data)     F100 No. 1       FC-AL hub No. 1   Port No. 3     5m/F100 fiber optic
Disk array No. 2 (mirror)   F100 No. 2       FC-AL hub No. 2   Port No. 3     5m/F100 fiber optic
Disk array No. 1 (data)     10BASE-T         Ethernet hub      Port No. 0     RJ-45
Disk array No. 2 (mirror)   10BASE-T         Ethernet hub      Port No. 1     RJ-45

TABLE C-3  FC-AL Hub to Server Connections

From Device    From Location    To Device    To Location    Cable Length/Type
FC-AL No.
TABLE C-5  Terminal Concentrator to Management Server and Cluster Nodes Connections

From Device             From Location    To Device               To Location      Cable Length/Type
Terminal concentrator   Serial port 1    Management server       Serial port A    Serial cable
Terminal concentrator   Serial port 2    System No. 1 (node 1)   Serial port A    RJ-45/DB-25 serial
Terminal concentrator   Serial port 3    System No. 2 (node 2)
TABLE C-8  Ethernet Hub Connections

From Device    From Location    To Device                   To Location      Cable Length/Type
Ethernet hub   Port No. 0       Disk array No. 1 (node 1)   10BASE-T port    RJ-45
Ethernet hub   Port No. 1       Disk array No. 2 (node 2)   10BASE-T port    RJ-45
Ethernet hub   Port No. 5       Administration network                       RJ-45
Ethernet hub   Port No.
APPENDIX D Troubleshooting the Cluster Platform 4500/3 Installation This appendix provides steps for troubleshooting the Cluster Platform 4500/3 installation.