Installing and Managing HP-UX Virtual Partitions (vPars) Second Edition Manufacturing Part Number : T1335-90012 June 2002 United States © Copyright 2002 Hewlett-Packard Company. All rights reserved.
Legal Notices The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be held liable for errors contained herein or direct, indirect, special, incidental or consequential damages in connection with the furnishing, performance, or use of this material.
iCOD and iCOD CPU Agent Software are products of Hewlett-Packard Company, and all are protected by copyright. Copyright 1979, 1980, 1983, 1985-93 Regents of the University of California. This software is based in part on the Fourth Berkeley Software Distribution under license from the Regents of the University of California. Copyright 1988 Carnegie Mellon University Copyright 1990-1995 Cornell University Copyright 1985, 1986, 1988 Massachusetts Institute of Technology.
First Edition: November 2001, T1335-90001 (vPars version A.01.01 on HP-UX)

IMPORTANT: New information may have been developed after the time of this printing. For the most current information, check the Hewlett-Packard documentation web site at the following URL: http://docs.hp.com
Contents 1. Introduction Information on This Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Intended Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . How This Book is Organized . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Where to Get the Latest Version of This Document . . . . . . . . . . . . . . . . . . . . . . . . . . What Is vPars? . . . . . . . . .
Contents Monitor Dump Analysis Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Crash Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Crash User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . System-wide Stable Storage and Setboot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ignite-UX Network Recovery. .
Contents Removing the vPars Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 From a Single Virtual Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 From the Entire Hard Partition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 5. Monitor and Shell Commands Manpages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Contents Adding or Removing Hardware Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Adding and Removing CPU Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . CPU Allocation Syntax In Brief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Adding a CPU as a Bound CPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Removing a Bound CPU . . . . . . . . . . . . . . . . . . . . . . .
Tables Table 5-1. Virtual Partition States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Figures Figure 1-1. vPars Conceptual Diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Figure 1-2. A Superdome Cabinet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Figure 2-1. Server without vPars . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Figure 2-2. Software Stack of Server without vPars . . . . . . . . . . . . . . . . . . . . . . . . . 35 Figure 2-3.
What is New in this Release

This section covers the major points of what is new in this release:

• Superdome systems are now supported. For more information, please see the following:
— “Supported Environments” on page 22
— “nPartitions” on page 23
— “Logs on a nPartition Server” on page 43
— “shutdown and reboot commands” on page 28
— “Performing nPartition Operations” on page 122

• iCOD is now supported in a vPars environment. Please see “iCOD (Instant Capacity on Demand) and PPU (Pay Per Use)” on page 25.
1 Introduction

This chapter covers:

• Information on This Document
• What Is vPars?
• Why Use vPars?
• Supported Environments
• Product Interaction
• Ordering Information
Introduction Information on This Document Information on This Document Intended Audience This document is written for system administrators to help them learn and manage the product HP-UX Virtual Partitions (vPars). How This Book is Organized The first section outlines what is new and changed in this release. Chapters 1, 2, and 3 cover conceptual material needed to understand and plan your vPars environment. Chapters 4 and 5 cover installation and common tasks.
What Is vPars?

The vPars (Virtual Partitions) product allows you to run multiple instances of HP-UX simultaneously on one hard partition by dividing the hard partition further into virtual partitions. Each virtual partition is assigned its own subset of hardware, runs a separate instance of HP-UX, and hosts its own set of applications. Because each HP-UX instance is isolated from all other instances, vPars provides application and OS (Operating System) fault isolation.
Introduction What Is vPars? NOTE Virtual Partitions, nPartitions, and Hard Partitions Defined In this document, we have redefined the terms virtual partitions, nPartitions, and hard partitions: A complex is the entire partitionable server, including both cabinets, all cells, I/O chassis, cables, and power and utility components. A cabinet is the Superdome’s hardware "box", which contains the cells, Guardian Service Processor (GSP), internal I/O chassis, I/O fans, cabinet fans, and power supplies.
Introduction What Is vPars? A virtual partition is a software partition of a hard partition where each virtual partition contains an instance of HP-UX. Though a hard partition can contain multiple virtual partitions, the inverse is not true. A virtual partition cannot span a hard partition boundary. Product Features • A single hard partition can be divided into multiple virtual partitions. • Each virtual partition runs its own instance of HP-UX.
Introduction What Is vPars? Why Use vPars? The following explains some of the advantages of using vPars: vPars Increases Server Utilization and Isolates OS and Application Faults In certain environments one entire server is dedicated to a single application. When the demand for that application is not at peak, such as during non-business hours, the server is underutilized. If many servers are configured this way, you have many servers that are being underutilized.
Introduction What Is vPars? Two virtual partitions that have different CPU utilization peak times can have processors moved between them. For example, a transaction server used primarily during business hours could have floating CPUs reassigned overnight to a report server. Such reassignments can be automated, for example, via cron.
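As a sketch of how such a cron-driven reassignment might look (the partition names, CPU counts, and schedule below are hypothetical, and the cpu:: add/delete syntax should be checked against the vparmodify (1M) manpage), root's crontab could contain entries such as:

```
# Move two unbound CPUs to the report server each evening,
# and back to the transaction server each morning.
0 20 * * * /usr/sbin/vparmodify -p trans1 -d cpu::2 && /usr/sbin/vparmodify -p report1 -a cpu::2
0 7 * * * /usr/sbin/vparmodify -p report1 -d cpu::2 && /usr/sbin/vparmodify -p trans1 -a cpu::2
```

Because unbound CPUs can be migrated while the partitions are running, neither partition needs to be rebooted for the reassignment.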
Supported Environments

Hardware

• rp5470/L3000
  Required minimum firmware version: 41.02.
• rp7400/N4000
  Required minimum firmware version: 41.02.
• Superdome
  Required minimum firmware version: PDC 35.3 (Superdome SMS Software version 1.2).

NOTE: Updating Firmware
• rp5470/L3000 and rp7400/N4000
  Installing firmware patches on these servers requires additional steps in a vPars environment.
Introduction Supported Environments http://docs.hp.com/hpux/11i/index.html#Virtual%20Partitions Operating Systems • All virtual partitions must run HP-UX 11i (December 2000 Release or later) in 64-bit mode on PA-RISC platforms. HP Product Interaction • nPartitions To use parmgr, you need to have Partition Manager version B.11.11.01.05 or later. For more information, see “Installing and Removing vPars-related Bundles” on page 81. Only one vPars monitor is booted per nPartition.
Introduction Supported Environments information in both the vPars partition database and the nPartition complex profile, see “Using Primary and Alternate Paths with nPartitions” on page 115. If there is a pending reboot for reconfiguration (RFR) for the involved nPartition, no virtual partitions will be rebooted until all the virtual partitions within the given nPartition are shut down and the involved vPars monitor is rebooted.
Introduction Supported Environments In a vPars environment, if the LPMC (Low Priority Machine Check) monitor of the Support Tools deactivates a processor, it does not automatically replace the failing processor with an iCOD processor. The processor replacement must be performed manually using the icod_modify command. For more information, see the manual titled Instant Capacity on Demand (iCOD) User’s Guide for version B.05.00 at http://docs.hp.com. CAUTION CPU Expert Tool is not supported on vPars servers.
• PCI OLAR (On-Line Addition and Replacement)
OLAR for PCI slots works the same on a vPars server as it does on a non-vPars server. However, you can execute OLAR functions only on the PCI slots that the virtual partition owns.
• MC/ServiceGuard
MC/ServiceGuard is supported with vPars.
Introduction Supported Environments When Ignite-UX reports the Total Number of CPUs for a partition, it includes unassigned unbound CPUs in the count. For information on bound and unbound CPUs, see “Bound and Unbound CPUs” on page 48. For example, if you have three virtual partitions, each with one bound CPU, and two unbound CPUs not assigned to any of the partitions, this is a total of five CPUs in the server. Ignite-UX will report three CPUs (one bound and two unbound CPUs) for each partition.
• shutdown and reboot commands
In a virtual partition, the shutdown and reboot commands shut down and reboot a virtual partition and not the entire hard partition. Also, if a virtual partition is not set for autoboot using the autoboot attribute (see the vparmodify (1M) manpage), the -r and -R options of the shutdown or reboot commands will only shut down the virtual partition; the virtual partition will not reboot.
Introduction Supported Environments • System-wide stable storage and the setboot command On a non-vPars server, the setboot command allows you to read from and write to the system-wide stable storage of non-volatile memory. However, on a vPars server, the setboot command does not affect the stable storage. Instead, it reads from and writes to only the partition database. For more information see “System-wide Stable Storage and Setboot” on page 58.
Introduction Supported Environments bound CPUs. Further, disabling interrupts on a bound CPU does not convert the CPU into an unbound CPU. For more information see the intctl (1M) manpage.
Ordering Information

To obtain information on ordering the vPars product or on how to download the free version, go to HP's Software Depot web site at http://www.software.hp.com

Features of Free Versus Purchased Product

The free product has the following limitations not present in the purchased product:
• the maximum number of virtual partitions is two
• the first partition created must have only one CPU dedicated to it.
2 How vPars Works

This chapter covers:

• Partitioning Using vPars
• vPars Monitor and vPars Partition Database
• vPars Boot Sequence
• Virtual Consoles
• Security
Partitioning Using vPars

To understand how vPars works, compare it to a server not using vPars. Figure 2-1 shows a 4-way HP-UX server. Without vPars, all hardware resources are dedicated to one instance of HP-UX and the applications that are running on this one instance.

Figure 2-1 Server without vPars
[Block diagram: processors 0 through 3, memory, and two host PCI bridges, each bridge with a SCSI and a LAN device.]
Figure 2-2 shows the software stack where all applications run on top of the single OS instance:

Figure 2-2 Software Stack of Server without vPars
[Stack diagram: Application 1 and Application 2 running on HP-UX 11i, which runs on the Hardware / Firmware.]

Using vPars, you can allocate a server’s resources into two or more virtual partitions, each with a subset of the hardware.
CPUs, its own LAN connection, and a sufficient subset of memory to run HP-UX and the applications intended to be hosted on that virtual partition.

Figure 2-3 Server Block Diagram with 2 Virtual Partitions
[Block diagram: one partition with processors 0 and 1, a subset of memory, and Host PCI Bridge 4 (SCSI, LAN); the other partition with processors 2 and 3, a subset of memory, and Host PCI Bridge 5 (SCSI, LAN).]
How vPars Works Partitioning Using vPars Each application can run on top of separate OS instances. Instead of a single OS instance owning all the hardware, the vPars monitor manages the virtual partitions and their OS instances as well as the assignment of hardware resources to each virtual partition.
How vPars Works Partitioning Using vPars The commands for the vPars monitor are shown in the section “Using Monitor Commands” on page 102; however, most of the vPars operations are performed using vPars commands at the UNIX shell level. For more information on the commands, see the chapter “Monitor and Shell Commands” on page 97. vPars Partition Database At the heart of the vPars monitor is the partition database. The partition database contains partition configuration information.
How vPars Works Boot Sequence Boot Sequence NOTE This section describes a manual boot sequence to help explain how vPars impacts the boot process, but you can continue to use an autoboot sequence to boot all partitions. See “Autobooting the Monitor and All Virtual Partitions” on page 127. Boot Sequence: Quick Reference On a server without vPars, a simplified boot sequence is: 1. ISL (Initial System Loader) 2. hpux (secondary system loader) 3.
How vPars Works Boot Sequence However, in a server with vPars, at the ISL prompt, the secondary system loader hpux loads the vPars monitor /stand/vpmon: ISL> hpux /stand/vpmon The monitor loads the partition database (the default is /stand/vpdb) from the same disk that /stand/vpmon was booted. The monitor internally creates (but does not boot) each virtual partition according to the resource assignments in the partition database.
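From the monitor prompt that follows, the virtual partitions can then be booted. For example (vpar1 is a placeholder partition name; check the monitor command set in the chapter “Monitor and Shell Commands”):

```
ISL> hpux /stand/vpmon      # load the monitor and the partition database
MON> vparload -all          # boot all of the virtual partitions, or ...
MON> vparload -p vpar1      # ... boot a single partition by name
```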
How vPars Works Virtual Consoles Virtual Consoles HP-UX servers have a special terminal or window called a console that allows special control and displays system error messages. Because servers have a limited number of PCI slots, you may not want to allocate one serial port for use as a console port for each partition you create. With vPars, each virtual partition has its own virtual console. For each partition, its console I/O is sent to its vcn (Virtual CoNsole) driver.
How vPars Works Virtual Consoles For rp7400/N4000 and rp5470/L3000 servers, the pause can be from ten to twenty seconds. For Superdome and other nPartition-able servers, the switchover pause can be minutes, depending on the amount of memory owned by the virtual partition that owns the hardware console port.
How vPars Works Virtual Consoles — From a running partition, reset the partition that owns the hardware console port by executing vparreset -p target_partition -h, where target_partition is the partition that owns the hardware console port.
How vPars Works Virtual Consoles Also, for a given nPartition, the Virtual Front Panel (VFP) of the nPartition’s console displays an OS heartbeat whenever at least one virtual partition within the nPartition is up.
How vPars Works Security Security You should be aware of the following security concerns: Chapter 2 • The vPars commands (as described in “Monitor and Shell Commands” on page 97) are restricted to root access, but the commands work on any of the virtual partitions, regardless of which partition the commands are executed from. Therefore, a user on one partition can affect another virtual partition by targeting the virtual partition in a vPars command.
3 Managing Virtual Partitions

This chapter covers:

• CPU Allocation
• Bound and Unbound CPUs
• Memory Allocation
• I/O Allocation
• When to Shutdown All Virtual Partitions
• Monitor Crash Dump
• Crash Processing
• System-Wide Stable Storage and Setboot
• Ignite-UX Network Recovery
• Expert Recovery
Managing Virtual Partitions CPU Allocation CPU Allocation Bound and Unbound CPUs With vPars, there are two types of CPUs: bound and unbound. A bound CPU is a CPU that is assigned to and handles I/O interrupts for a virtual partition. Every virtual partition must have at least one bound CPU to handle its I/O interrupts. CPUs that are not assigned to any virtual partition or that are assigned to a virtual partition but do not handle its I/O interrupts are unbound CPUs.
Managing Virtual Partitions CPU Allocation If your applications are CPU intensive (and not I/O intensive), use unbound CPUs so that you can easily adjust the number of CPUs via dynamic CPU migration as the demand on the virtual partition changes. Unbound CPUs provide greater flexibility of movement between virtual partitions because they can be added and removed without needing to bring down the affected partitions.
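For example, assuming a running partition named vpar2 that already has a bound CPU for its I/O interrupts, unbound CPUs could be added and later removed without taking the partition down (a sketch; see the vparmodify (1M) manpage for the authoritative syntax):

```
# vparmodify -p vpar2 -a cpu::1     # add one unbound CPU to the running partition
# vparmodify -p vpar2 -d cpu::1     # later, remove one unbound CPU again
```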
Managing Virtual Partitions Memory Allocation Memory Allocation Memory Assignment By Size To allocate a subset of physical memory to a virtual partition, you specify the desired amount of memory (a size) to allocate to a partition using the mem:: parameter in the vparcreate or vparmodify commands.
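For example, assuming a hypothetical partition named vpar1, the mem:: parameter takes a size in MB (a sketch; note that memory assignments can generally be changed only while the target partition is down):

```
# vparcreate -p vpar1 -a mem::1024    # create vpar1 with 1024 MB of memory
# vparmodify -p vpar1 -a mem::512     # later, add another 512 MB
```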
Managing Virtual Partitions I/O Allocation I/O Allocation When planning or performing I/O allocation, note the following: • When you are planning your I/O to virtual partition assignments, note that only one virtual partition may own any hardware at or below the LBA (Local Bus Adapter) level. In other words, hardware at or below the LBA level must be in the same partition.
• You can change the attributes of an I/O path only when the virtual partition is down.
• For information on supported I/O interfaces and configurations, see the document HP-UX Virtual Partitions Ordering and Configuration Guide, available at: http://docs.hp.com/hpux/11i/index.html#Virtual%20Partitions
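As an illustration of LBA-level assignment (the partition name is hypothetical, the hardware paths are taken from the example server in Chapter 4, and the BOOT attribute keyword should be checked against the vparcreate (1M) manpage), a partition could be given two LBAs and a boot disk like this:

```
# vparcreate -p vpar1 -a io:0/0 -a io:0/2 -a io:0/0/2/0.6.0:BOOT
```

Everything at or below LBAs 0/0 and 0/2 then belongs to vpar1, and the disk at 0/0/2/0.6.0 is marked as its boot device.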
When to Shutdown All Virtual Partitions

The only times you need to shutdown all the virtual partitions within a hard partition are when:

• a hardware change or problem requires the hard partition to be down.
• you need to reconfigure a nPartition or modify nPartition settings.
• the entire hard partition hangs. This might be a problem with the monitor.
Managing Virtual Partitions Monitor Crash Dump Monitor Crash Dump If a virtual partition crashes, a vPars monitor dump is created in addition to the kernel dump. If the monitor panics, a monitor dump is created, but no kernel dumps are created. Please contact your HP Support Representative for help on monitor and kernel crash dump analysis. Directory Location and Filenames When a virtual partition crashes, the monitor dump file is initially written to a pre-existing file /stand/vpmon.dmp.
Managing Virtual Partitions Monitor Crash Dump NOTE TOC and Kernel Dumps: If a TOC (transfer of control) for the entire hard partition is generated either through a Ctrl-B TC command or by an OS of a virtual partition, a kernel dump will not automatically be saved to /var/adm/crash for those partitions that have not previously had a kernel dump occur. You can save their dumps to /var/adm/crash by performing the following on each of those virtual partitions: Step 1.
Crash Processing

Crash processing for a virtual partition’s OS is similar to the crash processing on a non-vPars OS: the OS is quiesced, portions of memory are written to disk, and in the case of vPars, resources are released to the monitor. After the monitor dump is written to disk, you can let the crash processing continue or you can enter the crash user interface:

• To let the crash processing continue, do nothing.
6. Launch partition 0 (vpar1) for crash processing
7. Launch partition 1 (vpar2) for crash processing
Enter number (1-7):

The menu choices mean:
1. displays memory from a given address for a given number of 32-bit words. For example:
Enter Address: 0x1000 4
0x00001000 0x00000000 0x1200a000 0xaa400000 0x00000000
Enter Address: quit
2. continues with the default crash handling
3. allows you to choose an alternate device to which the monitor dump is written.
Managing Virtual Partitions System-wide Stable Storage and Setboot System-wide Stable Storage and Setboot On a vPars hard partition, the setboot command does not read from or write to stable storage. Instead, the setboot command reads from and writes to the vPars partition database, affecting only the entries of the virtual partition from which the setboot command was run.
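For example, run from within a virtual partition (the boot path shown is illustrative), setboot behaves as usual on the surface but reads and writes only that partition's database entries:

```
# setboot                    # display this partition's primary/alternate boot paths
# setboot -p 0/0/2/0.6.0     # set the primary boot path (written to the vPars database)
```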
Ignite-UX Network Recovery

For information on Ignite-UX, see the manual Ignite-UX Administration Guide.

Making an Archive of a Virtual Partition

make_tape_recovery is not supported for a vPars hard partition. You need to use make_net_recovery. make_net_recovery works the same for making archives of both non-vPars and vPars hard partitions.
Managing Virtual Partitions Ignite-UX Network Recovery 3. Set the kernel path of the target partition to use the boot kernel /stand/WINSTALL: winona1# vparmodify -p winona2 -b /stand/WINSTALL Ignite-UX modifies the LIF area to boot the WINSTALL kernel as part of its recovery process. However, because vPars uses the vPars database instead of the LIF area to boot a virtual partition, this change needs to be done to the vPars database. 4. Set the TERM environment variable to hpterm.
Managing Virtual Partitions Ignite-UX Network Recovery 1. From the BCH prompt, boot the hard partition using the Ignite-UX server (assume the Ignite server’s IP is 15.xx.yy.zz): BCH> bo lan.15.xx.yy.zz install interact with IPL? N 2. From the Ignite-UX window, select "Install HP-UX". 3. Enter the network data using the data for the virtual partition that owns the boot disk that is set as the primary path within system-wide stable storage. 4. Select Recovery Archive Configuration -> Go 5.
Managing Virtual Partitions Expert Recovery Expert Recovery When you are performing Expert Recovery, you need to remember the following: 62 • You can no longer read from or write to system-wide stable storage using setboot. See “System-wide Stable Storage and Setboot” on page 58. • mkboot modifies the LIF area, but vPars does not use the LIF area to boot a virtual partition. See “mkboot and LIF files” on page 29 and “Simulating the AUTO File on a Virtual Partition” on page 138.
4 Planning Your Virtual Partitions and Installing vPars

This chapter covers:

• Example Hard Partition
• Planning Your Virtual Partitions
• Installing vPars
• Removing vPars
Example Hard Partition

For all examples used in this chapter, we will use the following rp7400 hardware configuration:

Figure 4-1 Example rp7400
[Figure: block diagram of the example rp7400 hardware configuration.]
full ioscan output

# ioscan
H/W Path      Class      Description
===================================================
              root
0             ioa        System Bus Adapter (803)
0/0           ba         Local PCI Bus Adapter (782)
0/0/0/0       lan        HP PCI 10/100Base-TX Core
0/0/1/0       ext_bus    SCSI C895 Fast Wide LVD
0/0/1/0.7     target
0/0/1/0.7.0   ctl        Initiator
0/0/2/0       ext_bus    SCSI C875 Ultra Wide Single-Ended
0/0/2/0.6     target
0/0/2/0.6.0   disk       SEAGATE ST39102LC
0/0/2/0.7     target
0/0/2/0.7.
0/10
0/12
1
1/0
1/2
1/2/0/0
1/2/0/0.0
1/2/0/0.0.0
1/2/0/0.7
1/2/0/0.7.0
1/2/0/1
1/2/0/1.7
1/2/0/1.7.0
1/4
1/4/0/0
1/4/0/0.5
1/4/0/0.5.0
1/4/0/0.7
1/4/0/0.7.0
1/4/0/1
1/4/0/1.7
1/4/0/1.7.
101           processor  Processor
104           pbc        Bus Converter
105           processor  Processor
108           pbc        Bus Converter
109           processor  Processor
192           memory     Memory
Planning Your Virtual Partitions and Installing vPars Planning Your Virtual Partitions Planning Your Virtual Partitions Before you install vPars, you should have a plan of how you want to create virtual partitions within your server.
Recommended Number of Virtual Partitions

For performance reasons, HP recommends the following for the rp5470/L3000 and the rp7400/N4000 when using vPars:

Server           Number of Partitions
rp5470/L3000     up to 2 partitions
rp7400/N4000     up to 4 partitions

For the latest information, please see the document HP-UX Virtual Partitions Ordering and Configuration Guide available at: http://docs.hp.com/hpux/11i/index.html#Virtual%20Partitions
Planning Your Virtual Partitions and Installing vPars Planning Your Virtual Partitions • The maximum number of virtual partitions can be limited by the total size of the kernels in memory for all the virtual partitions. In general terms, the sum of the size of the kernels must be < 2 GB. If you use the defaults of the dynamic tunables, you will not run into the 2 GB limit. However, if you have adjusted the dynamic tunables, it is possible to run beyond the 2 GB boundary.
Planning Your Virtual Partitions and Installing vPars Planning Your Virtual Partitions Virtual Partitions on nPartitions If you are using vPars on a complex, you may want to distinguish the names of your virtual partitions from the names of your nPartitions to avoid confusion.
For this example, winona1 will have two bound CPUs, winona2 will have two bound CPUs where the hardware paths will be 41 and 45, and winona3 will have one bound CPU.

Partition Name   Bound CPUs
winona1          total = 2, min = 2
winona2          total = 2, min = 2, paths = 41,45
winona3          total = 1, min = 1

Unbound CPUs are assigned in quantity.
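Assuming the cpu resource syntax of vparcreate (1M), where cpu::num sets the total count, cpu:::min sets the minimum (bound) count, and cpu:hw_path binds a specific processor, the bound-CPU plan above might be expressed as follows. This is a sketch of the CPU options only; the complete creation commands also need memory and I/O options:

```
# vparcreate -p winona1 -a cpu::2 -a cpu:::2
# vparcreate -p winona2 -a cpu::2 -a cpu:::2 -a cpu:41 -a cpu:45
# vparcreate -p winona3 -a cpu::1 -a cpu:::1
```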
Assigning I/O at the LBA Level

For our example server, the ioscan output shows the LBAs as:

#ioscan -k | grep "Bus Adapter"
H/W Path   Class   Description
===========================================================
0/0        ba      Local PCI Bus Adapter (782)
0/1        ba      Local PCI Bus Adapter (782)
0/2        ba      Local PCI Bus Adapter (782)
0/4        ba      Local PCI Bus Adapter (782)
0/5        ba      Local PCI Bus Adapter (782)
0/8        ba      Local PCI Bus Adapter (782)
0/10
When we create the virtual partitions, we will create winona1 first.

CAUTION: One of the virtual partitions must own the LBA that contains the physical hardware console port. (In this example, the console port is owned by winona1.)

Choosing the Boot and Lan Paths

Using the full ioscan output, we chose the following boot disk path and note the LAN card path:

Partition Name   Boot Path
winona1          0/0/2/0.6.0
winona2          0/8/0/0.5.0
winona3          1/4/0/0.
Planning Your Virtual Partitions and Installing vPars Planning Your Virtual Partitions autoboot attribute to MANUAL using the vparmodify command.
Planning Your Virtual Partitions and Installing vPars Installation Installation You can install vPars on an existing HP-UX installation directly from a depot or CD or by using an Ignite-UX server. Related Information For information on the installation of HP-UX, see the manual "HP-UX 11i Installation and Update Guide". For information on swinstall and software depots, see the manual "Software Distributor Administration Guide for HP-UX".
Planning Your Virtual Partitions and Installing vPars Installation • If you are using a hardwired HP terminal or a LAN-based terminal emulator of type “hpterm”, set the GSP terminal-type setting to hpterm. • If you are using a LAN-based terminal emulator of type “dtterm” or “xterm”, set the GSP terminal-type setting to vt100. How to Set the GSP Terminal Type Step 4. Access the GSP through the lan console, the remote-modem port, or a physically connected terminal. Step 5.
Planning Your Virtual Partitions and Installing vPars Installation You will see a message indicating the command execution will take a few seconds and then a message indicating that your settings have been updated. The virtual partitions that you create will use this terminal-type setting for their virtual console displays. TIP If you get a garbled display, you can press Ctrl-L to refresh the display.
Planning Your Virtual Partitions and Installing vPars Installation Updating the Ignite-UX Server CAUTION If you are using Ignite-UX versions B.3.4.XX (September 2001), B.3.5.XX (December 2001), or B.3.6.XX (March 2002), in addition to adding the vPars bundles to your Ignite server, you need to replace the existing file /opt/ignite/boot/WINSTALL with a vPars-compatible WINSTALL file using the script named WINSTALL_script.
Planning Your Virtual Partitions and Installing vPars Installation Go to http://www.software.hp.com Select Enhancement Releases Select HP-UX Virtual Partitions Follow the instructions on the web page for obtaining the files WINSTALL and WINSTALL_script. 2. Run the script WINSTALL_script to copy the WINSTALL file to the correct location on your Ignite-UX server. NOTE The WINSTALL_script saves a copy of the original WINSTALL file. To restore the original WINSTALL file, execute the WINSTALL_script again.
vPars-related Bundles

Products related to this release of vPars (all of which are on the vPars CD) are:

Bundle Name   Description
VPARMGR       vPars GUI (vparmgr)
B6826AA       Partition Manager for nPartitions (parmgr)
B6191AAE      Online Diagnostics
B9073AA       iCOD

Installing and Removing vPars-related Bundles

VPARMGR
The vPars GUI (vparmgr) is not automatically installed when vPars is installed.
Planning Your Virtual Partitions and Installing vPars Installation # swinstall -s /cdrom B68266AA To remove the Partition Manager product: # /usr/sbin/swremove PartitionManager Note that the PartitionManager product can be removed only after the vPars product is removed from a virtual partition. B6191AAE (Online Diagnostics) Online Diagnostics is not automatically installed when vPars is installed.
Planning Your Virtual Partitions and Installing vPars Installation PHKL_25218: S700_800 11.11 kernel patch See the iCOD web page at http://software.hp.com for the latest patch information for iCOD. To install the iCOD bundle using the vPars CD: # swinstall -s /cdrom -x autoreboot=true B9073AA To remove the iCOD bundle: # /usr/sbin/swremove B9073AA NOTE The iCOD product B9073AA should not be selected for installation unless you are already participating in the iCOD program.
Planning Your Virtual Partitions and Installing vPars Installation Select Enhancement Releases Select HP-UX Quality Packs JFS (Journal File System) To avoid hangs on vxfs file systems, please install kernel patch PHKL_27121 on the operating systems of each virtual partition. This patch is available from the IT Resource Center web site at http://itrc.hp.com.
Planning Your Virtual Partitions and Installing vPars Installation NOTE The server must be in standalone mode for the patches to take effect, so please do not skip this step. 3. Install the firmware patch as you would in a non-vPars environment. The firmware patch will reboot your server. 4. After the firmware installation has completed, you can boot the monitor and virtual partitions as you normally would.
Installing vPars Using Ignite-UX

NOTE: Before installing vPars, read the sections “Setting Up the Ignite-UX Server” on page 78 and “Updating the Ignite-UX Server” on page 79.

1. Boot your hard partition using the Ignite-UX server. If your Ignite server’s IP address is 15.xx.yy.zz:

BCH> bo lan.15.xx.yy.zz install
Interact with IPL: n

2.
6. Interrupt the boot process as your hard partition comes back up to reach the ISL prompt.

BCH> bo pri
Interact with IPL: y

7. At the ISL prompt, boot the monitor and the initial virtual partition. In our example, the command is:

ISL> hpux /stand/vpmon vparload -p winona1

8.
c. Enter the boot disk path, LAN info, hostname, and IP of the target partition into the Ignite-UX interface and install HP-UX, desired patches, the Quality Pack bundle, the vPars bundle, and the desired vPars-related bundles. As a result of this process, the virtual partition will automatically reboot.

TIP: If you get a garbled display, you can press Ctrl-L to refresh the display.
Installing vPars Using Software Distributor

1. For the root disk of each virtual partition, use Software Distributor to install HP-UX, desired patches, the Quality Pack bundle, the vPars software bundle, and the desired vPars-related bundles.

2. Boot the disk that is intended to be the boot disk of the first virtual partition into the normal (non-vPars) HP-UX environment.
BCH> bo pri
Interact with IPL: y

7. At the ISL prompt, boot the monitor and all the virtual partitions. In our example, the command is:

ISL> hpux /stand/vpmon -a

Your hard partition should now be booted with all virtual partitions up.
Updating to the Latest Version of vPars

To update from a previous version of vPars, perform the following:

1. Shut down all the virtual partitions.

2. Reboot the server into standalone mode. This consists of the following:

a. At the MON> prompt, type reboot

b. If needed, interrupt the boot sequence at the BCH> prompt and boot /stand/vmunix instead of /stand/vpmon. For example:

BCH> bo pri
Interact with IPL? y
. . .
6. On each virtual partition, repeat Step 3 to install the new vPars bundle on each boot disk of each virtual partition (you do not need to reboot the hard partition). Because the boot disk used to boot in standalone mode in Step 2 already has the new vPars bundle (it was installed during Step 3), you can skip this step for the boot disk at the primary path.
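Assembled end to end, the update procedure above looks roughly like the following transcript. The depot path is hypothetical, and the bundle name VirtualPartition is taken from the swremove example later in this manual:

```text
winonaN# shutdown -hy 0                  # Step 1: repeat in every running partition
MON> reboot                              # Step 2a
BCH> bo pri                              # Step 2b: boot standalone (non-vPars) HP-UX
Interact with IPL? y
ISL> hpux /stand/vmunix
# Step 3: install the new vPars bundle (depot path is hypothetical)
winona1# swinstall -s /var/depots/vpars VirtualPartition
# Steps 4-6: boot the monitor and partitions, then repeat the swinstall
# on the boot disk of each remaining virtual partition
```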
Applying a vPars Sub-System Patch

The vPars sub-system patch includes the vPars monitor, commands, and daemons. To apply a vPars patch to an existing version, perform the following:

1. Shut down all the virtual partitions.

2. Reboot the server into standalone mode. This consists of the following:

a. At the MON> prompt, type reboot

b.
6. On each virtual partition, repeat Step 3 to install the vPars sub-system patch on each boot disk of each virtual partition. No reboot of the virtual partition is required. Because the boot disk used to boot in standalone mode in Step 2 already has the new vPars patch (it was installed during Step 3), you can skip this step for the boot disk at the primary path.
Removing the vPars Product

From a Single Virtual Partition

To remove the vPars product, execute the swremove command from the target virtual partition. For example, to remove the vPars product from the partition winona3:

winona3# /usr/sbin/swremove -x autoreboot=true \
VirtualPartition

The product will be removed, and the virtual partition will be shut down.
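After the partition comes back up, you can confirm the product is gone with a generic SD-UX query; this check is a suggestion, not from the manual:

```text
winona3# swlist -l bundle | grep -i virtualpartition    # no output once removed
```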
From the Entire Hard Partition

1. Remove the vPars product from each virtual partition one by one.

2. After you have removed vPars from the last virtual partition, you will be at the monitor prompt. At this point, you can type reboot to reboot the hard partition.

MON> reboot

NOTE: To uninstall vparmgr or other vPars-related bundles, see “Installing and Removing vPars-related Bundles” on page 81.
5 Monitor and Shell Commands

This chapter covers:

• vPars Manpages
• Booting the Monitor
• Accessing the Monitor Prompt
• Monitor Commands
• Creating a Virtual Partition
• Booting a Virtual Partition
• Using Primary and Alternate Boot Paths
• Shutting Down or Rebooting a Virtual Partition
• Shutting Down or Rebooting the Hard Partition
• Removing a Virtual Partition
• Autobooting the Monitor and All Virtual Partitions
• Obtaining Monitor and Hardware Resource Information
Manpages

The purpose of this document is to describe vPars concepts and how to perform common vPars tasks. For detailed information on the vPars commands, including descriptions, syntax, all the command-line options, and the required state of a virtual partition for each command, see the vPars manpages.
Notes on Examples in this Chapter

Syntax of Example Commands

The example commands at the UNIX shell level in the following sections use the following syntax, where the shell prompt consists of the hostname of the current virtual partition and the hash sign (#).
Booting the vPars Monitor

To boot the vPars monitor, at the ISL prompt specify /stand/vpmon:

ISL> hpux /stand/vpmon

With no arguments to /stand/vpmon, the monitor will load and go into interactive mode with the following prompt:

MON>

The following options are available when booting the monitor:

-a   boots all virtual partitions that have the autoboot attribute set. For more information, see vparmodify (1M).
Accessing the Monitor Prompt

You can reach the monitor prompt in the following ways:

• From the ISL prompt, you can boot the monitor into interactive mode (see “Booting the vPars Monitor” on page 100).
• After shutting down all virtual partitions, you will arrive at the monitor prompt on the console (see “Shutting Down or Rebooting the Hard Partition (rebooting the vPars monitor)” on page 120).
Using Monitor Commands

You can use the following monitor commands at the monitor prompt for booting and basic troubleshooting. However, most vPars operations should be performed using the vPars shell commands. Note the following for the monitor commands:

• Unless specifically stated, all operations occur only on the boot disk from which the monitor was booted. Usually, this is the boot disk of the primary path entry in system-wide stable storage.
Note: This command can be used only when the monitor /stand/vpmon is booted and the default partition database (/stand/vpdb) does not exist, the alternate partition database as specified in the -p option of /stand/vpmon does not exist, or the database file read is corrupt. For information on when the monitor is booted, see “Boot Sequence” on page 39. For more information on the -p option, see “Booting the vPars Monitor” on page 100.
To boot the partition winona2 into single-user mode:

MON> vparload -p winona2 -o "-is"

To boot the partition winona2 using the kernel /stand/vmunix.other:

MON> vparload -p winona2 -b /stand/vmunix.other

To boot the partition winona2 using the disk device at 0/8/0/0.2.0:

MON> vparload -p winona2 -B 0/8/0/0.2.0

Note: -b kernelpath allows you to change the target kernel for only the next boot of partition_name.
• reboot reboots the entire hard partition. Other hard partitions are not affected.

NOTE: You should shut down each virtual partition (using the Unix shutdown command) prior to executing the monitor reboot command. A confirmation prompt is provided, but if you confirm the reboot while any virtual partitions are running, the reboot brings the running partitions down ungracefully.
• getauto displays the contents of the AUTO file in the LIF area. Example:

MON> getauto
hpux /stand/vpmon

• log displays the contents, including warning and error messages, of the monitor log. The monitor log holds up to 16 KB of information in a circular log buffer. The information is displayed in chronological order.

• ls [-alniFH] [directory] lists the contents of directory. This command is similar to the Unix ls command.
• scan lists all hardware discovered by the monitor and indicates which virtual partition owns each device.

• toddriftreset resets the drift of the real-time clock. Use this command if you reset the real-time clock of the hard partition at the BCH prompt. For brief information, see “Real-time clock” on page 28.

• vparinfo [partition_name] This command is for HP internal use only.
Using the Monitor Commands at the ISL Prompt

You can specify any of the monitor commands either at the monitor prompt (MON>) or at the ISL prompt (ISL>). If you are at the ISL prompt, use the desired command as the argument to the monitor /stand/vpmon.
Creating a Virtual Partition

You can create a virtual partition using the vparcreate command.

NOTE: When you create a virtual partition, the vPars monitor assumes you will boot and use the partition. Therefore, when a virtual partition is created, even if it is down and not being used, the resources assigned to it cannot be used by any other partition. Also, when using vPars, the physical hardware console port must be owned by a partition.
Resource or Attribute                           vparcreate Option
all hardware where the path begins with 0/8     -a io:0/8
all hardware where the path begins with 1/10    -a io:1/10
hardware at 0/8/0/0.5.0 as the boot disk        -a io:0/8/0/0.5.0:boot

The resulting vparcreate command line is:

winona1# vparcreate -p winona2 -a cpu::3 -a cpu:::2:4 -a cpu:41 -a cpu:45 -a mem::1280 -a io:0/8 -a io:1/10 -a io:0/8/0/0.5.0:boot
winona1# vparstatus -p winona2 -v
Booting a Virtual Partition

To boot a single virtual partition, use either the monitor command vparload or the shell command vparboot.
Setting and Booting a Virtual Partition Using Primary and Alternate Boot Paths

You can set the primary and alternate boot paths of a virtual partition by using the HP-UX setboot command or the vPars command vparmodify with the BOOT and ALTBOOT attributes. For more information on how setboot works on a vPars server, see “System-wide Stable Storage and Setboot” on page 58.
winona1# vparcreate -p winona2 -a io:0/8/0/0.5.0:BOOT -a io:0/8/0/0.2.0:ALTBOOT

Using vparmodify

If the virtual partitions are already created, you can specify the primary or alternate boot paths with the BOOT and ALTBOOT attributes within the vparmodify command. To set the primary boot path:

winona1# vparmodify -p winona2 -a io:0/8/0/0.5.0:BOOT
• You cannot specify pri or alt at the monitor prompt. However, because the primary boot path is the default, you can boot winona2 using the primary path with the following command:

MON> vparload -p winona2

If you want to boot winona2 using the alternate boot path, you can specify the hardware address for the alternate boot path.
and its nPartition showed the nPartition’s alternate path to be 2/0/14/0/0.6.0:

winona2# parstatus -V -p0
[Partition]
Partition Number       : 0
Partition Name         : npar0
Status                 : active
IP address             : 0.0.0.0
Primary Boot Path      : 0/0/6/0/0.5.0
Alternate Boot Path    : 2/0/14/0/0.6.0
HA Alternate Boot Path : 0/0/6/0/0.5.0
. . .
Primary Boot Path      : 0/0/6/0/0.5.0
Alternate Boot Path    : 2/0/14/0/0.6.0
HA Alternate Boot Path : 0/0/6/0/0.5.0

Changing the nPartition’s Path (Complex Profile Data)

To change the nPartition’s alternate path to 0/0/6/0/0.4.0, run the command:

winona2# parmodify -p0 -t 0/0/6/0/0.4.0
Command succeeded.
Shutting Down or Rebooting a Virtual Partition

A virtual partition can be gracefully shut down or rebooted via the HP-UX command shutdown. To ensure that the partition database is synchronized (see “vPars Partition Database” on page 38), execute the vparstatus command prior to executing the shutdown command.
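For example, a graceful reboot of one virtual partition, using the chapter's running example partition name, looks like this:

```text
winona2# vparstatus          # run first so the partition database is synchronized
winona2# shutdown -ry 0      # then gracefully reboot this virtual partition
```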
There is no command to shut down the monitor. The monitor command reboot (see “Using Monitor Commands” on page 102) applies to the entire hard partition, causing the hard partition to reboot. For more information on how to shut down or reboot the hard partition gracefully, see “Shutting Down or Rebooting the Hard Partition (rebooting the vPars monitor)” on page 120.
Shutting Down or Rebooting the Hard Partition (rebooting the vPars monitor)

To halt or reboot the hard partition gracefully, you need to do the following:

1. Log into every virtual partition that is running and gracefully shut down the partition via the HP-UX command shutdown. There is no command that shuts down all the virtual partitions at the same time.
c. To power off the cells assigned to the nPartition, access the GSP using Ctrl-B. You can then go to the Command Menu and use the command PE to power off the cells.
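Putting the steps above together, a graceful halt of the entire hard partition can be sketched as follows, using this manual's running example partition names:

```text
winona3# shutdown -hy 0     # repeat in every running virtual partition
winona2# shutdown -hy 0
winona1# shutdown -hy 0     # after the last partition halts, the console
                            # drops to the monitor prompt
MON> reboot                 # reboots the entire hard partition
```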
Performing nPartition Operations

You can perform nPartition operations in a vPars environment, keeping in mind the following:

• If you make an nPartition change where a Reboot for Reconfiguration is required, all the virtual partitions within the nPartition need to be shut down and the monitor rebooted in order for the reconfiguration to take effect.
partition monitor is rebooted.

Shutdown at 16:09 (in 0 minutes)

At this point, the nPartition is in the Boot-Is-Blocked (BIB) state. The virtual partition winona1 remains down until all the virtual partitions have been shut down and the monitor rebooted. Note also that once the nPartition is in the BIB state, vparstatus shows the following message:

Note: A profile change is pending. The hard partition must be rebooted to complete it.

3.
Cab/                    ------ Processor ------   Cache Size
Cell Slot  Cell State   #  Speed    State         Inst     Data
---------  -----------  -  -------  -----------   -------  -----
0/0        Active       0  552 MHz  Active        512 KB   1 MB
                        1  552 MHz  Idle          512 KB   1 MB
                        2  552 MHz  Idle          512 KB   1 MB
                        3  552 MHz  Idle          512 KB   1 MB
0/1        Idle         0  552 MHz  Idle          512 KB   1 MB
                        1  552 MHz  Idle          512 KB   1 MB
                        2  552 MHz  Idle          512 KB   1 MB
                        3  552 MHz  Idle          512 KB   1 MB
0/2        Idle         0  552 MHz  Idle          512 KB   1 MB
                        1  552 MHz  Idle          512 KB   1 MB
                        2  552 MHz  Idle          512 KB   1 MB
VFP: Virtual Front Panel
CM:  Command Menu
CL:  Console Logs
SL:  Show chassis Logs
HE:  Help
X:   Exit Connection

3. At the GSP prompt, enter the Command Menu:

GSP> cm

Enter HE to get a list of available commands.

GSP:CM>

4. From the GSP Command Menu, perform the desired hard partition commands.
Removing a Virtual Partition

To remove a virtual partition, use vparremove. vparremove purges the virtual partition from the vPars partition database. Any resources dedicated to the virtual partition are then free to allocate to a different virtual partition. You need to shut down the virtual partition before attempting removal. If the target virtual partition is running, vparremove will fail.

Example

To remove a virtual partition named winona2:

1.
Autobooting the Monitor and All Virtual Partitions

You can set up the monitor and all virtual partitions to boot automatically at power up. To do this, make sure the following four conditions are met:

1. The hard partition’s primary and alternate boot paths point to the boot disks of different virtual partitions. For example, to set the primary and alternate boot paths:

BCH> pa pri 0/0/2/0.6.0
BCH> pa alt 0/8/0/0.5.0

2.
winona1# vparmodify -p winona1 -B auto
winona1# vparmodify -p winona2 -B auto
winona1# vparmodify -p winona3 -B auto

NOTE: For Superdome and other nPartition-able servers, you must use the boot device path "path flags" to set automatic booting past the BCH for an nPartition. See the manual HP System Partitions Guide for more information, including the proper configuration of paths for an nPartition.
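Typically (not shown in this excerpt) the LIF AUTO file on the monitor's boot disk is also set so that an unattended BCH autoboot launches the monitor with -a. A sketch using mkboot, with a hypothetical device file:

```text
# Set the AUTO file so an autoboot launches the monitor with -a
# (the device file c0t6d0 is hypothetical)
winona1# mkboot -a "hpux /stand/vpmon -a" /dev/rdsk/c0t6d0
```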
Obtaining Monitor and Hardware Resource Information

The monitor and the partition database that it has loaded maintain information about the virtual partitions. Using vparstatus, you can obtain this information, which includes the current state of the virtual partitions and their resources. See the manpage vparstatus (1M).
[Virtual Partition Resource Summary]
                                CPU      Num CPU    IO    Memory (MB)
                                Min/Max  Bound/     devs  # Ranges/   Total
Virtual Partition Name                   Unbound          Total MB    MB
==============================  =======  ========   ====  =========   =====
winona1                         2/ 8     2  0       2     0/ 0         640
winona2                         2/ 8     2  1       2     0/ 0        1280
winona3                         1/ 8     1  0       2     0/ 0        1280

• To see the current state of winona2:

winona1# vparstatus -p winona2 -v | grep -E "Name|State"
Name: winona2
State: Up

•
[Available I/O devices (path)]: 1.2
[Unbound memory (Base/Range)] (bytes) (MB): 0x40000000/256
[Available memory (MB)]: 256

• On an nPartition-able system, if the nPartition has a pending RFR, the vparstatus output also shows the following message:

Note: A profile change is pending. The hard partition must be rebooted to complete it.
Resetting a Hung Virtual Partition

Just as it is occasionally necessary to issue a hard reset (RS) or a soft reset (TOC) for a non-vPars OS instance, it is occasionally necessary to issue similar resets for a vPars OS instance.

Hard Reset

On a non-vPars hard partition, a hard reset cold boots the hard partition.
To simulate a soft reset on only one virtual partition, from a running partition, use vparreset with the -t (for TOC) option. For example, if winona2 is hung, we can execute vparreset from the running partition winona1:

winona1# vparreset -p winona2 -t

The target virtual partition either shuts down or reboots according to the setting of the autoboot attribute of that virtual partition.
Booting a Virtual Partition into Single-User Mode

It is occasionally necessary to boot HP-UX into single-user mode to diagnose issues with networking or other components. On a non-vPars server, you do this by using the -is option at the ISL prompt:

ISL> hpux -is

On a vPars server, you can boot a virtual partition into single-user mode either at the monitor prompt or at the shell prompt of a running partition.
winona1# vparstatus -p winona2 -v | grep -E "Name|State"
Name: winona2
State: down

After you have entered single-user mode, if you want to turn autoboot back on, the command is:

winona1# vparmodify -p winona2 -B auto
Using Other Boot Options

In the same way you can boot a virtual partition into single-user mode (see “Booting a Virtual Partition into Single-User Mode” on page 134), you can boot a partition using other boot options.
Overriding Quorum

In LVM, when the root disk is mirrored, the server can activate the root volume group, which contains the OS instance, only when the majority of the physical volumes in the root volume group are present at boot time. This is called establishing a quorum. Sometimes you may want to boot an OS instance regardless of whether a quorum is established. You can override the quorum requirement by using the -lq option.
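Following the pattern of the single-user examples earlier in this chapter, the -lq option can be passed as a boot option string; this is a sketch using the chapter's example partition name:

```text
winona1# vparboot -p winona2 -o "-lq"    # boot winona2 without requiring quorum
```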
Simulating the AUTO File on a Virtual Partition

On a non-vPars server, the LIF’s AUTO file on the boot disk can contain a boot string that includes boot options, such as -lq for booting without quorum, or a boot kernel path, such as /stand/vmunix.other for booting an alternate kernel. The AUTO file can be changed either through LIF shell commands or mkboot.
Modifying Attributes of a Virtual Partition

You can change a virtual partition’s name and its resource attributes via the vparmodify command. When using vparmodify to change attributes, the partition can be running, and the changes take effect immediately. See the manpage vparmodify (1M) for more information on attributes.
Adding or Removing Hardware Resources

You can assign resources to a virtual partition at creation time via arguments to the vparcreate command, but if a partition already exists, you can only add or remove resources via the vparmodify command. All resources are managed the same way, except for CPUs. For information on managing CPUs, see “Adding and Removing CPU Resources” on page 141.
Adding and Removing CPU Resources

CPU Allocation Syntax In Brief

To understand how to assign CPUs, you need to understand the command syntax. Below is a brief explanation of the CPU allocation syntax for the vparcreate command. For complete information, see the vparcreate (1M), vparmodify (1M), and vparresources (5) manpages.
With the -m option, the number used with the -m is an absolute number. For example, -m cpu::3 represents an absolute number of three total CPUs; in this case, it sets the total number of CPUs (bound plus unbound) to three. With the -a option (as well as the -d option), the number used with the -a is a relative number of CPUs (relative to the number of CPUs already assigned to the virtual partition).
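As a worked illustration of the absolute/relative distinction (assuming, hypothetically, that winona2 currently has two CPUs assigned):

```text
winona1# vparmodify -p winona2 -m cpu::3   # absolute: total is now exactly three
winona1# vparmodify -p winona2 -a cpu::1   # relative: adds one, for a total of four
winona1# vparmodify -p winona2 -d cpu::1   # relative: removes one, back to three
```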
winona1# vparcreate -p winona2 -a cpu::2 -a cpu:::2 -a cpu:41

• If you want to specify multiple processors, use the -a cpu:hw_path option for each hardware path. For example, if you want to specify the CPU at hardware path 41 and the CPU at hardware path 45, the command is:

winona1# vparcreate -p winona2 -a cpu::2 -a cpu:::2 -a cpu:41 -a cpu:45

Note that because there are two paths specified, min must be greater than or equal to two.
Example

• If you have two bound CPUs and want to remove the bound CPU at hardware path 41 (and do not want to add any unbound CPUs), delete the hardware path 41, modify min to one, and modify the total number to one:

# vparmodify -p winona2 -d cpu:41 -m cpu:::1 -m cpu::1

NOTE: If you delete only hw_path and leave total as two and min as two, you will still have two bound CPUs.
winona1# vparmodify -p winona2 -d cpu::1

• Because you can dynamically migrate unbound CPUs, you can migrate an unbound CPU from one partition to another while both partitions are running.
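Such a migration is simply a delete from one partition followed by an add to the other; a sketch using the chapter's example partition names:

```text
winona1# vparmodify -p winona1 -d cpu::1   # release one unbound CPU from winona1
winona1# vparmodify -p winona2 -a cpu::1   # give it to winona2; both stay running
```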
Using an Alternate Partition Database File

By default, the local copy of the vPars partition database is kept in the file /stand/vpdb on the boot disk of each virtual partition within a hard partition. However, you can create, edit, and delete virtual partitions in an alternate partition database file by using the -D filename option in the vPars command string, where filename is the name of the alternate partition database file.
You could create an alternate partition database where the configuration is:

Partition Name  Bound CPUs          Unbound CPUs           Memory   I/O Paths (LBAs)  Boot Path    LAN           Autoboot
winsim1         total = 4, min = 4  no CPUs are available  1600 MB  0/0 0/4 1/2       0/0/2/0.6.0  0/0/0/0       AUTO
winsim2         total = 4, min = 4                         1600 MB  0/8 1/10          0/8/0/0.5.0  1/10/0/0/4/0  AUTO

To create and boot using an alternate partition database, perform the following:

1.
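Step 1 (creating the two partitions in the alternate database) is cut off in this copy of the manual. The following is only a hedged sketch consistent with the -D usage and the table above; the database filename /stand/vpdb.alt, the option list, and the assignment of LBAs to partitions (inferred from the boot and LAN paths) are assumptions, not the manual's original commands:

```text
winona1# vparcreate -D /stand/vpdb.alt -p winsim1 -a cpu::4 -a cpu:::4 \
         -a mem::1600 -a io:0/0 -a io:0/4 -a io:1/2 -a io:0/0/2/0.6.0:boot
winona1# vparcreate -D /stand/vpdb.alt -p winsim2 -a cpu::4 -a cpu:::4 \
         -a mem::1600 -a io:0/8 -a io:1/10 -a io:0/8/0/0.5.0:boot
```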
2. Shut down all the virtual partitions and reboot the server:

winona3# vparstatus ; shutdown -hy 0
winona2# vparstatus ; shutdown -hy 0
winona1# vparstatus ; shutdown -hy 0
MON> reboot

3. Interrupt the boot process and boot the monitor /stand/vpmon, specifying the -D alternate-partition-database option and the -a autoboot option:

BCH> bo pri
Interact with IPL: y
ISL> hpux /stand/vpmon -D /stand/vpdb.
• filename must reside in /stand when the server boots because the vPars monitor can only traverse HFS file systems of the boot disk.
• Be careful when creating partitions using the -D option. Fewer checks on configuration are performed. It is possible to create a partition configuration that is not valid.
• All LVM rules still apply.
6 Virtual Partition Manager

This chapter provides an overview of the Virtual Partition Manager (vparmgr), which provides a GUI to the vPars commands. This chapter includes:

• About the Virtual Partition Manager
• Starting the Virtual Partition Manager
• Using the vPars Graphical User Interface (GUI)
• Stopping the Virtual Partition Manager

For more detailed information, see the Virtual Partition Manager online help.
About the Virtual Partition Manager (vparmgr)

The Virtual Partition Manager (vparmgr) provides an easy-to-use graphical interface to the vPars command utilities.
After vPars is installed and running, you must boot at least one virtual partition to an HP-UX kernel. You can then start the Virtual Partition Manager in that virtual partition by executing the vparmgr command:

/opt/vparmgr/bin/vparmgr [-h]
/opt/vparmgr/bin/vparmgr -t create
/opt/vparmgr/bin/vparmgr -t modify|par_details -p vp_name

With no arguments, the vparmgr graphical user interface is launched.
Using the vPars Graphical User Interface (GUI)

When the vparmgr GUI starts, it displays the virtual partition status screen.

Figure 6-1 vPars GUI Status Screen

This displays the status of all of the virtual partitions and available resources on the system.
Stopping the Virtual Partition Manager

To exit vparmgr, click the Exit button on the virtual partition status screen.
A LBA Hardware Path to Physical I/O Slot Correspondence

This appendix contains simplified I/O block diagrams for the rp5470/L3000, rp7400/N4000, and Superdome servers. These diagrams can be used to help determine which LBAs correspond to which physical I/O slots. For more information, see the hardware manuals for these servers.
rp5470/L3000 I/O Block Diagram

Figure A-1 rp5470 / L3000 I/O Block Diagram
[figure: block diagram mapping LBA hardware paths (for example 0/10, 0/12, 0/8, 0/9, 0/3, 0/1, 0/5, 0/2, 0/4) to hot-plug PCI slots 1-12 and core I/O: GSP/console, LAN 10/100BT, SCSI (2x dual LVD)]
rp7400/N4000 I/O Block Diagram

Figure A-2 rp7400 / N4000 I/O Block Diagram
[figure: block diagram mapping LBA hardware paths (0/2, 0/10, 0/8, 0/12, 0/4, 1/0, 1/8, 1/2, 1/4) to 4x PCI slots 1-12 on the I/O backplanes of cabinet 00]
Superdome I/O Block Diagram

Figure A-3 Superdome I/O Block Diagram
[figure: Superdome I/O block diagram]
B Problem with Adding Unbound CPUs to a Virtual Partition Unbound CPUs allow you to easily adjust processing power between virtual partitions. But a corner case can occur where you will not be able to add specific unbound CPU(s) without rebooting the target partition. This appendix discusses when this situation can occur and how to work around it.
Symptoms

When attempting to add an unbound CPU, you may see the following error message:

One or more unbound CPUs were not available when virtual partition was booted. You must shutdown the partition to add them.

This means that the unbound CPU cannot be dynamically added to the virtual partition.
Cause

When a virtual partition boots, the HP-UX kernel creates a table of the existing unbound CPUs available at the time the virtual partition is booted. If there is not an existing entry in the table for a specific CPU, that CPU cannot be added to the partition.
Note that the entries for the unbound CPUs are only entries for unbound CPUs that can potentially be added to the partition.
Paths of Bound CPU(s): x01 x02 x05 x06 x07 x08
Unbound CPU Kernel Entries: x06 x07 x08 x06 x07 x08 (none)
Paths of Unbound CPUs:
When vpar3 boots again, its kernel will create the correct entries for the unbound CPUs, which are now at x03 and x04.
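The table lookup described above can be sketched as a simple membership check. This is a hypothetical illustration, not a real kernel interface; the CPU paths follow the example in this appendix:

```shell
# Unbound CPU paths recorded in a partition's kernel table at boot time
# (hypothetical values from the example: x06-x08 were unbound at boot).
boot_table="x06 x07 x08"

# Path of the unbound CPU we now try to add (currently unbound: x03, x04).
request="x03"

# The add succeeds only if the requested path has an entry in the boot table.
found=no
for path in $boot_table; do
  [ "$path" = "$request" ] && found=yes
done
echo "$found"
```

Because x03 has no entry in the table built at boot, the check fails, which corresponds to the error shown in the Symptoms section; only a reboot rebuilds the table with the current unbound paths.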
C Calculating the Size of Kernels in Memory

One requirement of vPars is that the sum of the sizes of the kernels running in memory within a hard partition must be less than 2 GB. This requirement limits only the maximum number of virtual partitions that can be created. If you use the default values of the dynamic tunables, you will not run into the 2 GB limit. However, if you have adjusted the dynamic tunables, you can perform the calculations described in this appendix to ensure that you meet this criterion.
Calculating the Size of a Kernel

To calculate the size of the kernel, perform the following steps using the kernel file (for example, /stand/vmunix) on the target OS:

Step 1. Get the ending address:

    # nm /stand/vmunix | grep "ABS|_end"
    [10828] |212937784| 0|NOTYP|GLOB |0| ABS|_end

The ending address is the second number: 212937784

Step 2.
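The arithmetic behind Step 1 can be scripted. The sketch below assumes that the second pipe-delimited field of the nm output is the ending address in bytes, as in the sample line above, and that this address approximates the kernel's memory footprint; the exact rule is defined by the remaining steps of this procedure:

```shell
# Sample nm output line matching the manual's example (hypothetical formatting).
nm_line='[10828] |212937784| 0|NOTYP|GLOB |0| ABS|_end'

# Extract the second pipe-delimited field: the ending address in bytes.
end_addr=$(printf '%s\n' "$nm_line" | awk -F'|' '{gsub(/ /, "", $2); print $2}')

# Convert to MB (1 MB = 1048576 bytes), rounding up.
size_mb=$(( (end_addr + 1048575) / 1048576 ))
echo "${size_mb} MB"
```

On a live system you would pipe the output of `nm /stand/vmunix | grep "ABS|_end"` into the same extraction instead of using a sample line.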
Examples of Using the Calculations

Changing Dynamic Tunables

If you have already migrated to a vPars server and are adjusting the dynamic tunables of a kernel, check that there is an available memory range under the 2 GB boundary to accommodate the adjusted kernel. Perform this check after adjusting the dynamic tunables but before rebooting the partition.
For example, if we calculated the size of the kernel of the first OS to be 64 MB and that of the second OS to be 128 MB, the sum is 192 MB. Because 192 MB is below the 2 GB limit, we have met the criterion and can migrate the OSs from the multiple non-vPars servers to the single vPars server.
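The check in this example can be scripted as follows. This is a sketch; the per-kernel sizes are the 64 MB and 128 MB values from the example above:

```shell
# Kernel sizes (in MB) calculated for each OS to be migrated.
kernel_sizes_mb="64 128"

# The vPars requirement: the sum must stay under 2 GB (2048 MB).
limit_mb=2048

total_mb=0
for size in $kernel_sizes_mb; do
  total_mb=$((total_mb + size))
done

if [ "$total_mb" -lt "$limit_mb" ]; then
  echo "OK: ${total_mb} MB is under the ${limit_mb} MB limit"
else
  echo "FAIL: ${total_mb} MB exceeds the ${limit_mb} MB limit"
fi
```

For this example the check prints that 192 MB is under the 2048 MB limit, so the migration meets the criterion.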
Glossary

bound CPU  A CPU that cannot be migrated to or from virtual partitions while the involved virtual partitions are running. Bound CPUs can handle I/O interrupts.

dynamic CPU migration  The vPars ability to add or remove floater CPUs while a virtual partition is running.

hard partition  Any isolated hardware environment, such as an rp7400 server or an nPartition within a Superdome complex.
Index

Symbols
/stand filesystem, 29, 80

A
adding CPU resources to a partition, 141
adding resources to a partition, 140
alternate partition database files, 146
application fault isolation, 20
attributes, 139
AUTO LIF file, 138
Autoboot, 74, 127

B
BCH.

Software Stack, computer with vPars, 37
Software Stack, computer without vPars, 35
firmware, 22, 84
flexibility
    dynamic CPU allocation, 20
    independent operating system instances, 20
floater CPUs. See unbound CPUs
free product, 31

G
getauto, 106
glance, 26
GSP.

crash dump files, 54
vpmon file, 38, 39, 53, 54
monitor prompt
    toggling past, 42

N
NO_HW (ioscan output), 29
nPartitions, 18

O
OLAR.

UPS.