HP-UX vPars and Integrity VM V6.3 Administrator Guide Abstract This document is intended for system and network administrators responsible for installing, configuring, and managing vPars and Integrity Virtual Machines. Administrators are expected to have an in-depth knowledge of HP-UX operating system concepts, commands, and configuration.
© Copyright 2012, 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents HP secure development lifecycle....................................................................15 1 Introduction.............................................................................................17 1.1 HP-UX Virtualization Continuum...........................................................................................17 1.1.1 HP-UX Virtual Partitions................................................................................................17 1.1.
3.4 Reserving VSP devices.......................................................................................................40 3.5 Configuring storage space for diagnostic data......................................................................40 3.6 VSP kernel tunables...........................................................................................................41 3.7 Running applications on VSP..............................................................................................
6 Storage devices........................................................................................63 6.1 Storage goals...................................................................................................................63 6.1.1 Storage utilization.......................................................................................................63 6.1.2 Storage availability....................................................................................................63 6.1.
6.5.2.3 Modifying storage devices..................................................................96 6.6 Troubleshooting Storage related problems............................................................................99 7 NPIV with vPars and Integrity VM.............................................................101 7.1 Benefits of NPIV...............................................................................................................101 7.2 Dependencies and prerequisites..
9 Administering VMs.................................................................................133 9.1 Specifying VM attributes...................................................................................................133 9.1.1 VM name................................................................................................................135 9.1.2 Reserved resources...................................................................................................135 9.1.3 Virtual CPUs...
11.4.8.2 DIO devices...................................................................................................175 11.4.8.3 AVIO LAN devices..........................................................................................187 11.4.8.4 AVIO storage devices......................................................................................191 11.4.9 Time taken for CRA on a VSP..................................................................................192 11.4.
13.12.2 Integrity VM virtual iLO Remote Console limitations...................................................230 13.13 Guest configuration files................................................................................................231 13.14 Managing dynamic memory from the VSP.......................................................................231 13.14.1 Configuring a VM to use dynamic memory...............................................................233 13.14.1.
15 Support and other resources...................................................................261 15.1 Contacting HP ..............................................................................................................261 15.1.1 Before you contact HP..............................................................................................261 15.1.2 HP contact information............................................................................................261 15.1.
Index.......................................................................................................
Figures 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Product evolution..............................................................................................................18 HP-UX vPars and Integrity VM V6 framework.......................................................................19 Layers in a VSP................................................................................................................37 Upgrade procedure.....................................................................
31 32 33 34 35 36 37 38 39 40 Options to the hpvmconsole command..........................................................................228 Dynamic memory control command options......................................................................232 Dynamic memory characteristics......................................................................................234 Options to the hpvmmgmt command................................................................................
HP secure development lifecycle Starting with the HP-UX 11i v3 March 2013 update release, the HP secure development lifecycle provides the ability to authenticate HP-UX software. Software delivered through this release has been digitally signed using HP's private key. You can now verify the authenticity of the software delivered through this release before installing the products. To verify the software signatures in a signed depot, the following products must be installed on your system: • B.11.31.
1 Introduction With the increased demand for Information Technology in recent years, data centers have seen rapid growth in IT infrastructure (servers, storage, networking) deployment. However, this sprawl has left data centers with underutilized server hardware. At the same time, these data centers face increasing demand for new applications, which in turn drives demand for more servers to satisfy their customers.
• Improved performance and productivity. • Better manageability (rich CLI and GUI). • High Availability through the Serviceguard Integrity Virtual Server Tool Kit. • Virtual iLO remote console for each vPar and Integrity VM instance. With vPars and Integrity VM Version 6, the vPar solution is purely software based, unlike the earlier vPar technologies, which were vPar-monitor based or firmware based.
• Support for Cluster DSF as disk backing store. • Support for Veritas DMP nodes as disk backing store. • Support for online migration of VMs configured with Veritas CVM volumes as backing stores. 1.4 vPars and Integrity VM V6 architecture Figure 2 (page 19) shows the vPars and Integrity VM V6 architecture. The sub-systems are explained in the following sections.
1.4.2 Overview of Integrity VM Integrity VM instances are abstractions of real physical machines. The guest operating system runs on the VM as it would run on a physical Integrity server, with minimal modifications. The environment of the VM is virtualized and managed by the Virtual Machine Monitor (VMM) sub-system that resides on the VSP. Each VM runs an instance of HP-UX (OpenVMS operating system is not supported). Applications running within a VM guest run the same as when run on HP-UX natively.
enabling access to I/O hardware technology without requiring the support of either vPars or Integrity VM. 1.4.4.1 Overview of AVIO storage To provide the flexibility required to meet a variety of data center needs, the vPar or VM storage subsystem consists of three storage architectures: shared I/O, attached I/O, and NPIV. For more information about shared I/O, attached I/O, and NPIV, see Chapter 6 (page 63) and Chapter 7 (page 101). 1.4.4.1.
1.4.4.3 Direct I/O-Networking The direct I/O networking feature supported in vPars and Integrity VM Version 6 allows administrators to assign ports (or functions) of a NIC directly to a vPar or VM, giving the vPar or VM direct and exclusive access to the port on the NIC. NIC ports that are configured to be used for DIO cannot be shared and cannot be used to back a vswitch. Before a NIC port or card can be assigned to a vPar or VM, you must first add it to the DIO pool.
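As a sketch of this workflow, the following sequence places a NIC port in the DIO pool before it is assigned to a vPar or VM. The hardware path shown is illustrative, and the exact option syntax should be verified against the hpvmhwmgmt(1M) manpage on your system:

```
# ioscan -kfnC lan                  (identify the hardware path of the NIC port)
# hpvmhwmgmt -p dio -a 0/0/0/4/0/0  (add the port to the DIO pool; path is an example)
# hpvmhwmgmt -p dio -l              (list the devices currently in the DIO pool)
```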
The following are the HP-UX OEs: • HP-UX 11i v3 Base OE (BOE) The BOE provides an integrated HP-UX operating environment for customers who require less complex installation. The Base OE includes the entire original Foundation Operating Environment (FOE), offering complete HP-UX functionality including security, networking, web functionality, and software management applications.
Insight Manager (HP SIM) as part of the HP Matrix OE. For more information, see the HP Integrity Virtual Server Manager 6.3 User Guide. • HP Integrity VM Providers—To manage virtual environments with Virtual Server Manager or any Matrix OE components, install the appropriate provider software from the operating system media or the VirtualBase bundle. • HP-UX GUID Manager (GUIDMgr)—A client-server based product that allocates and manages unique World Wide Names for NPIV Host Bus Adapters.
Table 2 Integrity VM commands (continued) Command Description hpvmnet(1M) Describes how to create and modify virtual networks. hpvmnvram(1M) Displays, creates, edits, and removes vPar or VM EFI variables in NVRAM files from a VSP. hpvmpubapi(3) Describes several new public APIs. hpvmremove(1M) Describes how to remove a VM. hpvmresources(5) Describes how to specify the storage and network devices used by VMs. hpvmresume(1M) Describes how to resume a VM.
Table 4 VSP commands in vPars (continued) Command Description vparreset(1M) Resets a vPar. Simulates, at the vPar level, the hard reset, soft reset (Transfer Of Control, TOC), power off, or graceful shutdown operations. When compared with the earlier versions of vPars, the vparreset operation closely matches with the operation of physical hardware. vparstatus(1M) Displays information about one or more vPars.
Table 5 Chapters in this manual (continued) Chapter Read if... Chapter 12 (page 195) You need to move vPars or VMs from one system to another. Chapter 13 (page 217) You need to manage existing vPars, VMs, and resources using the CLI. Chapter 14 (page 255) You need to manage existing vPars, VMs, and resources using the GUI. Chapter 15 (page 261) You need information about HP support. Appendix A (page 267) You encounter problems related to creating VMs, storage, and NPIV.
2 Installing HP-UX vPars and Integrity VM This chapter describes the requirements and procedure for installing the vPars and Integrity VM product and a guest operating system. 2.1 Installation requirements for VSP Before installing the vPars and Integrity VM product on the VSP, ensure that the following software bundles are installed on the VSP: • HP-UX 11i v3 March 2014 OE. OR HP-UX 11i v3 March 2013 (AR1303) plus AR1403 Feature11i patches.
# swremove HPSIM-HP-UX • HP-UX Virtual Partitions bundle v5.x or earlier: # swlist -l bundle | grep VirtualPartition If present, uninstall it: # swremove VirtualPartition To install the HP-UX vPars and Integrity VM software: Mount the installation media, if you have it (for example, /depot/path). If you are installing from the network, identify the VSP and path name that correspond to the software distribution depot that contains the BB068AA and VirtualBase bundles (for example, my.server.example.
The following output must be displayed: hpvminfo: Running on an HPVM host. • Enter the swlist command: # swlist |grep -e "BB068AA" -e "VirtualBase" Check the version numbers. BB068AA VirtualBase • B.06.30 B.06.30 HP-UX vPars & Integrity VM v6 Base Virtualization Software Check whether the configuration file /etc/rc.config.d/hpvmconf was created. If you face any issues during the verification, it indicates the installation was not successful. In such cases, contact HP Support for help. 2.
Continue? Enter Y or N:Y
hpvmnvram: Adding boot option 'LanBoot:0xB27A4F72629B:master' (0xB27A4F72629B) ..
NOTE: To get the MAC address, you can use hpvmnvram -P guest1 -d.
3. List all the boot options in the VM named guest1:
# hpvmnvram
Boot Order
==========
1
2
4.
1. Start the VM from the VSP administrator account using the hpvmstart command. # hpvmstart -P guest1 (C) Copyright 2000 - 2012 Hewlett-Packard Development Company, L.P. ....
Delete Boot Option(s) Change Boot Order Manage BootNext setting Set Auto Boot TimeOut Select Active Console Output Devices Select Active Console Input Devices Select Active Standard Error Devices Cold Reset Exit 6. The EFI Boot Maintenance Manager is displayed. Select Add a Boot Option. EFI Boot Maintenance Manager ver 1.10 [14.62] Add a Boot Option.
2. Verify that neither of these bundles is installed:
# swlist BB068AA VirtualBase
# Initializing...
# Contacting target "foo"...
ERROR: Software "BB068AA" was not found on host "foo:/".
ERROR: Software "VirtualBase" was not found on host "foo:/".
These errors must be displayed. For more information about using Ignite-UX golden images, see the Ignite-UX Administration Guide. 2.6.
2.6.
3 Configuring VSP VSP is the manageability platform for vPars and VMs, running the standard HP-UX 11i v3 OE. VSP has a controlled environment tuned for supporting the vPars and Integrity VM V6 product functionality. DO NOT install any application on the VSP that is CPU, memory, or I/O intensive in nature. Running such applications on the VSP can cause unpredictable behavior. The product startup scripts located at /sbin/init.d/ configure the VSP resources (cores and memory) during every reboot of VSP.
3.1.1 VSP pool The CPU cores in the VSP pool run normal VSP processes. In addition, the cores run special threads to service I/O requests for vPars and Integrity VM. These cores cannot be used for vPars and VM guest configurations. By default, there are no cores in the VSP pool. When you configure the first reserved vPar or when you start the first non-reserved vPar, a single core is added to this pool. If vPar configurations exist (for example, after upgrading an existing vPars V6.
3.3 VSP memory On startup, the HP-UX vPars and Integrity VM product reserves a significant portion of the free system memory available on the VSP for the vPars and Integrity VM memory pool. This memory is used to support the memory requirements of the various vPars and VM guests on the VSP. The remaining available memory in the VSP is sufficient for the optimal functioning of the vPars and Integrity VM product on the VSP.
boot on the VSP. The memory used by HP-UX to boot depends on the size of the system, including total memory, number of cores, and the I/O devices on the system. This equation indicates the following:
Overall VSP memory overhead = amount of memory HP-UX requires to boot + free memory remaining in the VSP for optimal functioning of the VSP
VSP memory overhead = ~1500 MB + 8.
# hpvmstatus -P -V | egrep -i "Overhead memory" 3.6 VSP kernel tunables Upon installation of the vPars and Integrity VM product, tunables are modified to the values listed in Table 8 (page 41). NOTE: The tunable values are set to enable optimal functioning of the product. Hence, DO NOT change any of these tunables unless otherwise specified by HP.
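Although these tunables must not be modified, you can inspect the values the product has set with the standard kctune(1M) utility. The tunable name below is an illustrative example, not necessarily one from Table 8:

```
# kctune                    (display all kernel tunables and their current values)
# kctune -v filecache_max   (verbose detail for a single tunable; name is an example)
```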
3.7.2 Applications not recommended DO NOT run other applications on the VSP regardless of whether Integrity VM guests or vPars are running. Examples of applications that should not be run on the VSP are: Oracle, Workload Manager (WLM), HP SIM, and so forth. HP-UX vPars and Integrity VM V6 installation modifies kernel parameters, making the system unsuitable for running applications. HP also does not recommend configuring the VSP as an Ignite-UX server. 3.7.
4 Upgrading the VSP from earlier versions of Integrity VM This chapter describes how to upgrade the VSP from an older version. You must know the following before upgrading to a newer version: • vPars and Integrity VM V6 supports guests running HP-UX 11i v3 and HP-UX 11i v2 (starting from V6.1.5). It does not support OpenVMS guests. • Integrity VM V4.3 and earlier versions used VIO interfaces and legacy DSFs for mass storage. These are not supported on vPars and Integrity VM V6 and later releases.
Figure 4 Upgrade procedure
1. Study current HP-UX 11i v2 to 11i v3 upgrade documentation.
2. Analyze the HP-UX 11i v2 based Integrity VM Server using tools.
3. Decide whether to perform a cold install or an update_ux update.
4. Upgrade hardware requiring new firmware or replace obsolete adapters.
5. Final check: ensure all guests boot, and then back up both server and guests.
6. Perform cold install or update_ux.
7. Upgrade HP-UX to 11i v3 and then install new layered products, including Integrity VM.
8. C
• HP-UX 11i v3 Installation and Update Guide available at http://www.hp.com/go/hpux-core-docs-11iv3
• HP-UX 11i Version 3 Release Notes available at http://www.hp.com/go/hpux-core-docs-11iv3
• Serviceguard Specific Documentation available at http://www.hp.
• HP supported SCSI disk enclosures and arrays. • HP supported Fibre Channel disk enclosures and arrays. The msv2v3check command creates the log file /var/adm/msv2v3check/mmddyy_hhmm that contains all notes, warnings, and error messages from an invocation of msv2v3check, where mmddyy_hhmm represents the month, day, year, hour, and minute the msv2v3check utility is started.
NOTE: Many software subsystems require upgrades on the 11i v2 Integrity VM server before upgrading to HP-UX 11i v3. Integrity VM must be upgraded to V3.0 or V3.5 before beginning the HP-UX upgrade. Other layered products, such as Serviceguard, must be upgraded before upgrading the operating system to 11i v3. Analyze each layered product for the required upgrades. Remove the older HP Integrity Virtual Machines Manager product before upgrading to vPars and Integrity VM Version 6.3.
1. Choose the system disks that are to be used for the 11i v3 VSP and mark them as reserved disks:
# hpvmdevmgmt -a rdev:device_name
2. Back up and collect all relevant configuration from the 11i v2 VSP.
3. Back up the /var/opt/hpvm directory, so that you can easily restore it to the 11i v3 system after the cold-install.
NOTE: DRD can be used to clone an HP-UX system image to an inactive disk for recovery. For information about DRD, see the Dynamic Root Disk documentation available at http://www.hp.
# swinstall -s /dev/dvd Update-UX
# update-ux -s /dev/dvd HPUX11i-VSE-OE BB068AA
NOTE: There is an update-ux option, -p, which can be used to preview an update task by first running the session through the analysis phase. If you are updating from the VSE-OE depot, specify the following:
# swinstall -s my.server.example.com:/OEdepot/path Update-UX
# update-ux -s my.server.example.com:/OEdepot/path HPUX11i-VSE-OE BB068AA
6. Remove any layered products that might block the Integrity VM installation.
• Mass storage issues The vPars and Integrity VM V6.3 release supports the use of both legacy and agile devices within guests. It is not necessary to convert guests to use strictly agile devices. If, however, problems occur with guests using multipath solutions that are based on legacy devices, change the backing device to use the equivalent agile device. For information about mass storage compatibility issues, see the documentation available at: HP-UX 11i v3 Manuals.
/dev/vg00/lvol4 10485760 21152 10382856 0% /home
/dev/fspd1 7535914 7535914 0 100% /dvdrom
3. Run the update-ux command on the V4.3 VSP:
VSP -> update-ux -s /dvdrom
======= Mon Dec 17 21:14:05 PDT 2012 BEGIN update-ux
NOTE: Output is logged to '/var/adm/sw/update-ux.log'
* Obtaining some information from the source depot.
* Copying an SD agent from the source depot
* Installing the Update-UX product
Current update-ux version: 11.31.22
Source depot update-ux version: 11.31.
Figure 5 Upgrading a VSP using the Online Guest Migration process
1. Migrate guests online to a similar VSP.
2. Update software components (requires VSP reboot).
3. Migrate guests online back to the same VSP.
4.3 Rolling back to the earlier installed version of Integrity VM If you must roll back to a previous version of Integrity VM, this section provides the information needed to perform the rollback.
5 CPU and Memory 5.1 Configuring CPU resources for VM guests 5.1.1 Processor virtualization VM guests are configured with virtual processors. A vCPU is a virtualized schedulable entity. Virtual processors are mapped to physical CPU cores as part of VM guest scheduling. For the purpose of this discussion, the term “physical CPU” refers to a processing entity on which a software thread can be scheduled.
5.1.3 Dynamically changing the entitlements While you cannot add or remove CPUs to and from a VM guest dynamically, you can change the vCPU entitlement of the vCPUs that are already configured. You can use the hpvmmodify command to change the entitlement. 5.1.4 Transforming VM guest to a vPar For better guest performance, you can transform a VM guest offline to a vPar. Use the hpvmmodify -x vm_type=vpar command to transform a VM guest to a vPar.
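A hedged sketch of the two operations described above; the guest name and entitlement percentage are illustrative, and the exact option behavior should be confirmed in hpvmmodify(1M):

```
# hpvmmodify -P guest1 -e 25            (change the vCPU entitlement of the guest to 25%)
# hpvmstatus -P guest1 -V               (verify the new entitlement)
# hpvmmodify -P guest1 -x vm_type=vpar  (offline only: transform the VM guest to a vPar)
```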
5.2 Configuring CPU resources for vPars The CPU resource configured for vPar is the physical CPU on the VSP. The physical CPUs allotted to a vPar are dedicated to that vPar alone and are not shared with either the VSP or any other vPar or VM guest running on the VSP. Hence, the concept of entitlement does not apply to vPar cores. You can specify a maximum of (total VSP cores –1) for a single vPar. The following example shows a VSP with 16 cores, of which 4 CPUs are reserved for 4 1-CPU vPars.
System Configuration ===================== Locality Domain Count: 4 Processor Count : 15 Domain -----0 1 2 3 Processors ---------0 2 4 6 8 10 12 14 16 18 20 24 26 28 30 5.2.1 Online CPU migration vPars V6.1 and later supports online migration of CPUs. This means you can add and delete CPU from a live vPar without having to reboot it. During addition, free CPUs from the vPar and Integrity VM guest pool are added to the vPar.
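The online CPU migration described above is driven from the VSP with the vPars CLI. A hedged sketch follows; the vPar name and CPU count are illustrative, and the exact option syntax should be confirmed in vparmodify(1M) for your release:

```
# vparstatus -p vpar1               (check the current CPU count of the running vPar)
# vparmodify -p vpar1 -m cpu::3     (set the CPU count of the live vPar to 3)
```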
recoverable, local MCAs caused by a CPU in an individual vPar are isolated to that vPar and do not impact other running vPars. The vPar OS first tries to automatically recover from such MCAs without bringing down the vPar (APR supported by HP-UX). If that is not possible, the individual vPar goes through a crash dump and is rebooted to recover from the error. Diagnostic dump files known as tombstones are generated. These files must be sent to HP for analysis.
5.4 Handling faulty CPU On VSP with HP System Fault Management (HP SFM) software installed, if a faulty CPU is encountered, a CPU deletion request is raised on the host kernel. If the CPU identified for deletion happens to be a non-Monarch CPU of the running vPar, then it can be dynamically deleted from the vPar. If the CPU is the Monarch CPU of the vPar, then the CPU cannot be deleted.
• The upper limit of the memory required for the guest must be available on the VSP. • At run time, memory cannot be increased beyond the upper limit. To increase the limit, you must shut down the VM guest and specify the required upper limit. • If the Integrity VM guest is migrated online, the target must have the upper limit of specified memory available. NOTE: Dynamic memory is not applicable for vPar.
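A sketch of configuring dynamic memory on a VM guest. The option names follow the ram_dyn_* attribute family used by Integrity VM; the guest name and sizes are illustrative, so verify the exact attributes and units in hpvmmodify(1M):

```
# hpvmmodify -P guest1 -x ram_dyn_type=driver  (enable the guest dynamic memory driver)
# hpvmmodify -P guest1 -x ram_dyn_min=1024     (lower bound for dynamic memory, in MB)
# hpvmmodify -P guest1 -x ram_dyn_max=8192     (upper limit; cannot be raised at run time)
```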
NOTE: The VSP attempts to obtain memory from the vPar and Integrity VM memory pool, based on the most favorable NUMA characteristics of the vPar. There are no manual controls to change memory selection. When memory is to be deleted from a live vPar: • The HP-UX kernel in the vPar selects the memory pages to evacuate, and moves the contents to other available free pages and then frees those memory pages.
following table lists the recommended minimal amount of memory that must be configured as base memory for some typical memory sizes.
Total Guest Memory    Minimum Base Memory
1 GB to 3 GB          1 GB
4 GB to 8 GB          1/2 of total memory
9 GB to 16 GB         4 GB
Over 16 GB            1/4 of total memory
WARNING! It is mandatory that the base and floating memory guidelines specified are adhered to. If the proportion of base to floating memory is too low, the vPar could experience a panic or hang.
maximum amount of memory that can be added to or deleted from a vPar in a single operation is 16,320 MB. Memory is always migrated (either add or delete operation) in multiples of 64 MB granules. Hence, if a memory migration operation is initiated where the requested memory is not a whole multiple of 64 MB, the actual memory considered for the operation is rounded down to the previous granule size. For example, if a request is made for deletion of 100 MB of memory, only 64 MB will be deleted.
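The round-down behavior can be expressed as a simple integer calculation, sketched here in POSIX shell:

```shell
# A memory migration request is rounded down to a whole multiple of
# the 64 MB granule size before the operation is attempted.
granule=64                  # granule size in MB
requested=100               # requested deletion in MB (example from the text)
migrated=$(( requested / granule * granule ))
echo "$migrated"            # -> 64
```

For a request of 100 MB, integer division by 64 yields 1 granule, so only 64 MB is migrated, matching the example above.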
6 Storage devices This chapter describes vPar and Integrity VM storage and explains how to configure and use vPar and Integrity VM guest storage. The way you configure and manage vPar and VM guest storage affects the way vPar and VM guest perform. To benefit most, learn how the VSP makes storage available to vPars and VM guests. 6.
6.1.5 Storage configurability VSP administrators expect the vPars and VM guests to be as easily configurable as HP Integrity servers. The vPar and VM guest storage subsystem allows for easy changes to the storage devices through vPars and Integrity VM commands. Using these commands, the VSP administrator dynamically adds, deletes, and modifies storage devices on VMs and vPars. Guest administrators can change some storage, limited in scope by the VSP administrator, using the virtual console. 6.
among vPars and VM guests. In attached I/O, only the storage adapter is virtualized. Therefore, only the VSP physical storage adapters are shared. To provide the vPar or VM guest with complete control over attached devices, the vPar and VM guest storage subsystem translates I/O requests from the guest device drivers into I/O requests that can be completed by the VSP storage subsystem on behalf of vPar or VM guests.
Table 12 Virtual DVD-ROM types
Virtual DVD type          Backing storage device
Virtual DVD               Disk in a VSP physical DVD drive
Virtual FileDVD           ISO file on a VSP VxFS file system
Virtual NullDVD (empty)   VSP physical DVD drive or VxFS directory
6.3.2.2 Attached devices vPars and Integrity VM supports a suite of attached devices on HP-UX 11i v2 and HP-UX 11i v3 guests to complete data backups from a vPar or VM guest.
• All the VSP storage available for use by a vPar or VM guest must meet support requirements for the Integrity server and OS version that comprises the VSP. If the physical storage is not supported by the VSP, it is not supported for use by a vPar or VM guest. • All the VSP storage available for use by a vPar or VM guest must be connected with a supported adapter and driver type. For more information about the list of supported types, see the HP-UX vPars and Integrity VM Release Notes at http://www.hp.
• Performance of different software layers differs. • The interfaces to each software layer are different, allowing Integrity VM different ways to send I/O through the layers. For example, whole disks can achieve higher throughput rates than logical volumes and file systems. • The I/O layer might have features to help performance increase beyond a lower layer.
The way the virtual media I/O gets to the physical storage backing is also an important consideration. As shown in Figure 6 (page 67), all virtual I/O goes through a general VSP I/O services layer that routes the virtual I/O to the correct VSP interface driver. The interface driver then controls the physical I/O adapter to issue virtual I/O to the physical storage device.
Figure 8 Sub-LUN storage allocation example (a LUN divided into logical volumes and files; one logical volume is allocated to the VM)
The VM is allocated a logical volume from the LUN for a Virtual LvDisk.
• The logical volume that has been allocated is labeled 1.
• The parts of the disk that cannot be allocated are labeled 2.
Figure 9 Bad multipath virtual media allocation (Guest 1 and Guest 2 access the same physical storage on the VSP through different device files, /dev/rdsk/c6t2d1 and /dev/rdsk/c11t2d1)
Also, the same storage resource, virtual or attached, cannot be simultaneously shared between VMs, unless otherwise specifically exempted. Figure 10 (page 72) shows a Virtual LvDisk being shared across VMs, which is not supported. 6.
Figure 10 Bad virtual device allocation (a single Virtual LvDisk shared simultaneously by Guest 1 and Guest 2, which is not supported)
As these examples illustrate, it is important to know where storage is allocated from, to avoid damaging data on vPars, VMs, or even the VSP. Management utilities such as HP SMH allow you to track disk devices, volume groups, logical volumes, and file systems.
Some devices must be restricted to use by the VSP and to each guest (for example, boot devices and swap devices). Devices can be restricted using the hpvmdevmgmt command. For more information about sharing and restricting devices, see Section 13.19.2.4 (page 251). Any alternate boot device for a vPar or VM guest must be set with the same care that you would use on a physical system.
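For example, a VSP boot or swap disk can be marked restricted and the restricted list reviewed with hpvmdevmgmt; the device file below is illustrative:

```
# hpvmdevmgmt -a rdev:/dev/rdisk/disk5   (restrict a VSP disk, such as a boot or swap device)
# hpvmdevmgmt -l rdev                    (list the devices currently marked restricted)
```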
6.4.1.7 Dynamic addition of storage adapters Starting with V6.3, vPars and Integrity VM storage adapters can be dynamically added to a running vPar or VM guest. This is in addition to the existing ability to add new LUNs behind an existing virtual adapter. This capability is available with both HPVM AVIO Storage adapters and HPVM NPIV Storage adapters. In the case of AVIO Storage adapters, the feature allows addition of storage capacity without guest downtime.
To create empty files for virtual disks, use the hpvmdevmgmt command (see Section 13.19 (page 248)). To create ISO files from physical CD or DVD media for use in virtual DVDs, use the mkisofs or the dd utility. NPIV brings in ease of storage provisioning because storage presentation does not have to be a two-step process (first, presenting the LUNs to the VSP and then assigning each one to the vPar and VM guest).
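A hedged sketch of the file-backed workflows mentioned above, using the hpvmdevmgmt, mkisofs, and dd utilities; all paths and sizes are illustrative:

```
# hpvmdevmgmt -S 10G /var/opt/hpvm/guests/guest1/disk0   (create an empty 10 GB file for a virtual disk)
# mkisofs -o /var/opt/hpvm/ISO-images/depot.iso /var/depots/mydepot   (build an ISO file from a directory)
# dd if=/dev/rdisk/disk4 of=/var/opt/hpvm/ISO-images/media.iso bs=64k (copy physical CD/DVD media to an ISO file)
```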
• device is one of the following: disk, dvd, tape, changer, burner, or hba
• pcibus is an integer from 0-7. It represents the PCI bus number for the virtual device.
• pcislot is an integer from 0-7. pcislot, also referred to as the pcidevice, represents the PCI slot number for the virtual device. A PCI function number is not specified. It is implicitly zero because the virtual storage adapter supports only a single channel.
• target is an integer from 0–127 for AVIO.
store provided as a virtual DVD is always read-only. Attached devices do not consider file permissions when backing up data. More than one VSP system file might point to the same VSP storage entity. For example, if multiple paths to storage are present on the VSP, more than one disk system file can point to the same disk. Different VSP system files change how I/O is routed to the vPar or VM storage resource, but the system files point to the same storage entity.
To prevent virtual media conflicts that can result in data damage, a proper accounting of how the VSP whole disks are allocated for use by Virtual Disks needs to be done, as described in Section 6.4.1.4 (page 69). The following is the Virtual Disk resource statement form: disk:avio_stor::disk:/dev/rdisk/diskX where /dev/rdisk/diskX is an HP-UX esdisk character device file. These device files can be located for a VSP LUN using the ioscan command.
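Putting these pieces together, a whole-disk backing store might be located and attached as follows. The disk instance and guest name are illustrative, and the hpvmstatus option shown should be verified against hpvmstatus(1M):

```
# ioscan -funNC disk                  (list agile-view disk devices and their device files)
# hpvmmodify -P guest1 -a disk:avio_stor::disk:/dev/rdisk/disk10
# hpvmstatus -P guest1 -d             (confirm the new virtual device mapping)
```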
Before using cDSF as a backing store, confirm whether the VSP is part of a cluster DSF group:
# hostname
hpidm01-3
# cmsetdsfgroup -q
bones
hpidm01-3
#
6.4.2.3.2 Virtual LvDisks
A Virtual LvDisk is an emulated AVIO disk whose virtual media is provided by a raw VSP logical volume. To specify a VSP logical volume, use a character device file. The character device file is owned by either LVM or VxVM. Virtual LvDisks cannot be shared simultaneously across active vPars and VM guests.
LV Size (Mbytes)            8192
Current LE                  2048
Allocated PE                2048
Used PV                     1

LV Name                     /dev/lvrackA/disk2
LV Status                   available/syncd
LV Size (Mbytes)            8192
Current LE                  2048
Allocated PE                2048
Used PV                     1

LV Name                     /dev/lvrackA/disk3
LV Status                   available/syncd
LV Size (Mbytes)            8192
Current LE                  2048
Allocated PE                2048
Used PV                     1

LV Name                     /dev/lvrackA/disk4
LV Status                   available/syncd
LV Size (Mbytes)            8192
Current LE                  2048
Allocated PE                2048
Used PV                     1

--- Physical volumes ---
PV Name                     /dev/disk/d
PV Status
Total PE
Free PE
Autoswitch
sd disk01-02   vxvm_2-01   ENABLED   2048000   0   -        -   -

v  vxvm_3      fsgen       ENABLED   2048000   -   ACTIVE   -   -
pl vxvm_3-01   vxvm_3      ENABLED   2048000   -   ACTIVE   -   -
sd disk01-03   vxvm_3-01   ENABLED   2048000   0   -        -   -

v  vxvm_4      fsgen       ENABLED   2048000   -   ACTIVE   -   -
pl vxvm_4-01   vxvm_4      ENABLED   2048000   -   ACTIVE   -   -
sd disk01-04   vxvm_4-01   ENABLED   2048000   0   -        -   -

To use VxVM, the Virtual LvDisk resource statement form is disk:avio_stor::lv:/dev/vx/rdsk/VxvmTest1/vxvm_2.
guest at a time must be given a particular Virtual DVD resource. Virtual DVD resources can be changed dynamically between active vPars and VM guests (see Section 6.5 (page 92)). Because the Virtual DVDs are read only, they do not require management to prevent conflicts writing to the device. However, to prevent sensitive information from being accessed by the wrong vPar or VM guest, ensure you know which vPar or VM guest currently owns the device before you load a CD or DVD.
CAUTION: If the Virtual DVD drive of the guest is backed by a CD or DVD-ROM in the VSP that is either an enclosure DVD-ROM or is assigned via vMedia, then the following exceptions apply: • A vPar or VM guest configured with a virtual DVD that is backed by such a CD or DVD device in the VSP fails to start up if the device is disconnected when the vPar or VM is being started.
A Virtual FileDVD reverts to its original resource statement when the guest shuts down or reboots. Therefore, after you install a guest from multiple CDs or DVDs, you must reload the Virtual FileDVD when the guest reboots to complete the installation. Stop the automatic EFI reboot and insert the CD or DVD using the appropriate IN and EJ commands. When the media is loaded, you can proceed with the installation.
to use from the virtual console. The file directory must be a locally mounted VxFS file system. NFS file systems are not supported. ISO files that are world writable are not listed and are not available from the virtual console.
# ls -l /var/opt/hpvm/ISO-images/hpux
total 26409104
-rw-r--r--   1 root   sys   3774611456 Jul 11 :59  0505-FOE.iso
-rw-r--r--   1 root   sys   4285267968 Jul 11 17:05 0512-FOE.iso
-rw-r--r--   1 root   sys   3149987840 Jul 11 18:42 0603-FOE-D1.
-rw-r--r--   1 root   sys
Multipath solutions are not available for attached devices on the VSP. Multipath products are not supported in the vPar or VM guest. Manage attached devices to prevent the wrong vPars and VM guests from viewing sensitive information. You can find the vPars or VM guests that are currently using attached devices using the hpvmstatus command. 6.4.2.4 Attached device support Attached devices allow sharing of tapes, changers, and burners among multiple guests and the host, support for USB 2.
# ioscan -m lun /dev/rtape/tape5_BEST
Class  I  Lun H/W Path        Driver  S/W State  H/W Type  Health  Description
======================================================================
tape   5  64000/0xfa00/0x1    estape  CLAIMED    DEVICE    online  HP
          0/5/0/0/0/0.0x500110a0008b9de2.
Example 2 Example of sharing a tape device using a single initiator (single lunpath):
# hpvmmodify -P guest1 -a tape:avio_stor::attach_path:0/5/0/0/0/0.0x500110a0008b9de2.0x0
# hpvmmodify -P guest2 -a tape:avio_stor::attach_path:0/5/0/0/0/0.0x500110a0008b9de2.0x0
# hpvmdevmgmt -l gdev:0/5/0/0/0/0.0x500110a0008b9de2.0x0
0/5/0/0/0/0.0x500110a0008b9de2.0x0,lunpath1:CONFIG=gdev,EXIST=YES,SHARE=NO,DEVTYPE=ATTACHPATHLUN,AGILE_DSF=/dev/rtape/tape5_BESTn:guest1,guest2:0x01.0x00.0x03.
Table 13 Patch dependencies for AVIO attached devices
Patch Number  HP-UX Version  VSP  Guest  Notes
PHKL_38604    11i v3         Yes  Yes    Hard¹ dependency for guest, and soft² dependency for VSP.
PHKL_38605    11i v3         Yes  No     Soft dependency on VSP.
PHKL_38750    11i v3         Yes  Yes    Recommended patch.
1 Enforced during swinstall.
2 Required only if attached devices are configured. No enforcement using swinstall.
6.4.2.
NOTE: Creating the backing-store files of the guest on an NFS client system (that is, the VSP) can take significantly longer than creating the backing-store files locally on the NFS server. Therefore, create the backing-store files of the guest directly on the NFS server, if possible. 6.4.2.
HP-UX tgt ID = Addr(Target Id) % 16
HP-UX lun ID = Addr(Target Id) / 16

Note the following example:

# ioscan -fne
        PciBus
        | PciDev
        | | PCIFtn
        | | | (Addr(Target Id) % 16) <-> HP-UX tgt ID
        | | | | (Addr(Target Id) / 16) <-> HP-UX lun ID
        | | | | |
        V V V V V
disk 49 0/0/2/0.6.
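The two formulas can be checked with ordinary shell arithmetic. The address value 22 below is a hypothetical example; substitute the Addr(Target Id) value reported for your device:

```shell
# Hypothetical AVIO target address (decimal)
addr=22
tgt=$((addr % 16))   # HP-UX tgt ID
lun=$((addr / 16))   # HP-UX lun ID
echo "tgt=$tgt lun=$lun"   # prints: tgt=6 lun=1
```
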
6.5 Using vPars and Integrity VM storage The following sections describe the roles of individuals accessing virtual storage, the commands they use, and some examples of using vPars and Integrity VM storage. 6.5.1 Storage roles This section describes the roles of individuals in working with vPars or VM guests storage. Each role has different responsibilities in using vPars or VM guests storage. The roles might be played by one or more individuals depending on security requirements and skill sets.
The virtual console commands are available from the vMP Main Menu, using the hpvmconsole command or by pressing Ctrl+B if you are already connected. The virtual console commands eject (ej) and insert (in) allow you to control the DVD device. Both commands provide submenus for displaying devices that are removable. Selecting options through the submenus completes the ejection or insertion process.
6.5.1.3 Guest user The guest user runs applications on a guest OS. Access is provided and limited by the guest administrator. There are no Integrity VM storage requirements for application users of the guest OS. There are no Integrity VM storage commands for application users in the guest OS. The guest users use Integrity VM storage on the guest OS the same way as they normally use storage on an HP Integrity server.
NOTE: You can achieve higher guest performance for HP-UX 11i v3 guests older than the March 2011 release by configuring as many AVIO storage adapters as the number of virtual CPUs in the guest. The pcibus, pcislot, and aviotgt portions must be explicitly specified for each device.
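As a sketch of that guideline, a hypothetical 4-vCPU guest named guest1 can be given four AVIO storage adapters by placing each disk on a distinct pcibus,pcislot pair. The guest name and device files are illustrative:

```shell
# Each distinct pcibus,pcislot pair below creates a separate virtual AVIO adapter
hpvmmodify -P guest1 -a disk:avio_stor:0,1,0:disk:/dev/rdisk/disk20
hpvmmodify -P guest1 -a disk:avio_stor:0,2,0:disk:/dev/rdisk/disk21
hpvmmodify -P guest1 -a disk:avio_stor:0,3,0:disk:/dev/rdisk/disk22
hpvmmodify -P guest1 -a disk:avio_stor:0,4,0:disk:/dev/rdisk/disk23
```
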
host# hpvmmodify -P guest1 -d disk:avio_stor:0,5,0:disk:/dev/rdisk/disk11 6.5.2.3 Modifying storage devices The VSP administrator or the guest administrator can modify a vPar or VM guest storage device. The VSP administrator can use the hpvmstatus and hpvmmodify commands to change the virtual media of virtual devices. The guest administrator uses the virtual console to change the virtual media of virtual DVDs. All attached devices are modified using physical VSP access.
# diskinfo /dev/rdisk/disk7
SCSI describe of /dev/rdisk/disk7:
             vendor: HP
         product id: Virtual DVD
               type: CD-ROM
               size: 665600 Kbytes
   bytes per sector: 2048

vMP> ej

Ejectable Guest Devices
Num  Hw-path (Bus,Slot,Tgt)  Gdev  Pstore  Path
------------------------------------------------------------------------
[1]  0/0/1/0.7.
If the VSP administrator sets up a Virtual FileDVD for the vPar and VM guest, the virtual console options to eject and insert are used to select among the ISO files provided in the file directory for the Virtual FileDVD. The eject command changes the Virtual FileDVD into a Virtual NullDVD device. The VSP administrator can add ISO files to and remove them from the file system directory for the Virtual FileDVD.
For attached devices, modifications are made physically on the device. The guest OS supplies commands for loading and unloading tapes using media changers. But loading new media into the media changer, changing tapes in standalone drives, and changing discs with CD or DVD burners are accomplished manually. This process requires cooperation between the VSP administrator and the guest administrator. 6.
7 NPIV with vPars and Integrity VM NPIV allows you to create multiple virtual Fibre Channel ports (vFCs) over one physical Fibre Channel port (pFC) on a VSP. To identify a virtual port, you must create the virtual port with a unique World Wide Name (WWN), just like the unique embedded WWN by which a physical port is identified. Using the NPIV feature, you can allocate the vFC instances created over a physical port as resources to vPar and VM guests.
7.3 NPIV — supported limits Table 14 (page 102) lists the supported limits associated with NPIV in vPars and Integrity VM V6.3 on 11i v3 vPars and VM guests. Table 14 NPIV supported limits in vPars and Integrity VM V6.
N_Port Port World Wide Name      = 0x5001438002344784
Switch Port World Wide Name      = 0x200800051e0351f4
Switch Node World Wide Name      = 0x100000051e0351f4
N_Port Symbolic Port Name        = porti3_fcd0
N_Port Symbolic Node Name        = porti3_HP-UX_B.11.31
Driver state                     = ONLINE
Hardware Path is                 = 0/2/0/0/0/0
Maximum Frame Size               = 2048
Driver-Firmware Dump Available   = NO
Driver-Firmware Dump Timestamp   = N/A
TYPE                             = PFC
NPIV Supported                   = YES
Driver Version                   = @(#) fcd B.11.31.
vWWN A valid (64 bit), unique (virtual) Node WWN that is assigned to the NPIV HBA. This is analogous to the unique Node WWN that is associated with physical HBAs. storage The physical storage type in the host. For NPIV, this is npiv. device The physical device in the host corresponding to the virtual device. For NPIV, this corresponds to the device special file for the physical port on which the virtual NPIV instance is created.
Example 5 Create an NPIV HBA manually specifying WWNs
Add an NPIV HBA created on /dev/fcd1 using a virtual port WWN of 0x50060b00006499b9 and virtual node WWN of 0x50060b00006499ba to the vPar named vPar1. Obtain the port and node WWNs from your storage administrator or other source.
vparmodify -P vPar1 -a hba:avio_stor:,,0x50060b00006499b9,0x50060b00006499ba:npiv:/dev/fcd1
In the resource string, you can skip the bus and slot numbers for an NPIV HBA.
7.4.3.4 Configuring storage for a vPar or VM guest with NPIV HBAs You can assign storage for a vPar or VM guest with an NPIV HBA either before or after it starts up. In both cases, the guest boots if a non-NPIV boot device is configured. If it does not have a boot device, the guest boot halts at EFI. To configure storage for a guest with an NPIV HBA: 1. Start the guest.
Example 8 Enumeration being enabled for all NPIV devices - - - - - - - - - - Prior Console Output - - - - - - - - - FPSWA.EFI start successful. EFI Boot Manager ver 1.10 [14.62] [Build: Tue Oct 2 03:33:06 2012] Loading device drivers EFI Boot Manager ver 1.10 [14.62] [Build: Tue Oct 2 03:33:06 2012] Please select a boot option EFI Shell [Built-in] Boot option maintenance menu Use ^ and v to change option(s). Use Enter to select an option - - - - - - - - - - - - Live Console - - - - - - - - - - - Loading.
Example 9 Identifying NPIV HBAs and devices in a vPar
# ioscan -kfNd gvsd
Class    I  H/W Path  Driver  S/W State  H/W Type   Description
===============================================================
ext_bus  0  0/0/0/0   gvsd    CLAIMED    INTERFACE  HPVM AVIO Stor Adapter
ext_bus  1  0/0/4/0   gvsd    CLAIMED    INTERFACE  HPVM NPIV Stor Adapter
ext_bus  4  0/1/3/0   gvsd    CLAIMED    INTERFACE  HPVM NPIV Stor Adapter
NOTE: The ioscan output listing the NPIV devices in the guest is the same as a similar listing of SAN LUNs in a native host.
Figure 11 Multi-pathing with NPIV devices
[Figure: an NPIV disk in an Integrity VM or virtual partition, reached through virtual FC ports (vFC1–vFC3) that are mapped to physical FC ports (pFC1–pFC4) on two HBAs in the Virtualization Services Platform (VSP), across the SAN fabric to a SAN LUN]
NOTE: Having multiple paths to an NPIV device through the same physical HBA port on the VSP does not provide the benefits of multi-pathing, because all paths use the same physical port for I/O traffic and therefore provide no redundancy.
7.6 Troubleshooting NPIV storage problems For more information about troubleshooting NPIV storage problems, see Section A.2.2 (page 268).
8 Creating virtual and direct I/O networks vPars and Integrity VM support two types of networking I/O: AVIO and DIO. With AVIO networking, the I/O device drivers for the devices in the guest operating system are virtualization aware, eliminating some of the virtualization overhead.
8.1 Introduction to AVIO network configuration The guest virtual network configuration provides flexibility in network configuration, allowing you to provide high availability, performance, and security to the vPars or VM guests running on the VSP. The virtual network configuration consists of the following components: • VSP pNIC – the physical network adapter, which might be configured with APA. (For more information about APA, see the HP Auto Port Aggregation (APA) Support Guide.
where:
-c indicates the creation of a vswitch.
-S vswitch-name specifies the name of the vswitch.
-n nic-id specifies the network interface on the VSP that the new vswitch uses. For example, -n 0 indicates lan0. Network interfaces are displayed by the nwmgr command. If you do not include the -n option, a local vswitch is created, as described in Section 8.2.1.1.1 (page 115).
The hpvmnet command also allows you to view and manage the vswitches on the VSP.
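For example, a vswitch named hostnet (an illustrative name) backed by lan0 can be created and then started, assuming the hpvmnet -b (boot) option:

```shell
# Create a vswitch named hostnet backed by the VSP network interface lan0
hpvmnet -c -S hostnet -n 0

# Boot (start) the new vswitch, then display its state
hpvmnet -b -S hostnet
hpvmnet -S hostnet
```
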
Table 15 Options to the hpvmnet command (continued) Option Description -V Enables verbose mode, displaying detailed information about one or all vswitches. -v Displays the version number of the hpvmnet command in addition to the vswitch information. -C Changes the specified vswitch. If used with the -N option, the changes are made to the cloned vswitch. You must include either the -S or -s option. -N new-vswitch-name Creates a new vswitch based on the existing vswitch.
NOTE: The Cisco switch for HP BladeSystem c-Class Server Blades has a protocol error that causes it to respond to every MAC address. Because MAC addresses are unique, Integrity VM verifies that the generated guest virtual MAC address is unique. If one of these bad switches is on your network, the Integrity VM verification fails. The hpvmcreate command might fail with the following messages: hpvmcreate: hpvmcreate: WARNING (host): Failed after 3 attempts.
lan1            UP     0x00306E4A92EF  iexgbe  10GBASE-KR

# hpvmnet
Name     Number State Mode   NamePPA MAC Address    IP Address
======== ====== ===== ====== ======= ============== ===============
localnet 1      Up    Shared         N/A            N/A
hostnet  296    Up    Shared lan0    0x00306e4a93e6

If lan0 goes down, enter the following command to swap to use lan1:
# hpvmnet -C -S hostnet -n 1
# hpvmnet
Name     Number State Mode   NamePPA MAC Address    IP Address
======== ====== ===== ====== ======= ============== ===============
localnet 1      Up    Shared
hostnet  296    Up    Shared lan1
The following example uses the hpvmnet command to halt the vswitch and then to delete it. Both commands require you to confirm the action.
# hpvmnet -S clan1 -h
hpvmnet: Halt the vswitch 'clan1'? [n/y]: y
# hpvmnet -S clan1 -d
hpvmnet: Remove the vswitch 'clan1'? [n/y] y
The default (if you press Enter) is to not perform the operation; to proceed, enter y.
You must restart a vswitch after the following events: • The MAC address corresponding to the LAN number being used by the virtual switch is changed on the VSP (either by swapping the network adapter associated with the vswitch or associating the vswitch with a different network adapter). • The way the network adapter accepts and passes on packets to the next network layer is changed. This can occur as a result of using the ifconfig or lanadmin command to set the checksum offloading (CKO) to on or off.
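After any of these events, the vswitch can be restarted by halting it and booting it again (hostnet is an illustrative vswitch name):

```shell
# Halt the vswitch, then boot it so it picks up the new adapter state
hpvmnet -h -S hostnet
hpvmnet -b -S hostnet
```
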
# hpvmclone -P vm-name -N clone-vm-name -a network:adapter-type:[hardware-address]:vswitch:vswitch-name
The vNIC specified with this command is added to the new VM.
• To modify an existing VM:
# hpvmmodify -P vm-name -a network:adapter-type:[hardware-address]:vswitch:vswitch-name
The -a option adds the specified vNIC to the VM. As with virtual storage devices, use the -a rsrc option to associate a guest virtual network device with a vswitch.
# hpvmmodify -P host1 -a network:avio_lan::vswitch:clan0 NOTE: Never directly modify the guest configuration files. Always use the Integrity VM commands to modify the virtual devices and VMs. Failure to follow this procedure results in unexpected problems when guests are started. The virtual network entry in the guest configuration file includes the guest information on the left side of the equal sign (=), and VSP information on the right.
The following sections describe the Port-based VLAN feature, Guest-based VLAN feature, and VLAN-backed vswitch feature. NOTE: All three features are supported on the AVIO network. 8.4.1 Port-based VLANs Figure 13 (page 121) shows a basic VM VLAN that allows guests on different VSP systems to communicate.
Ports on a vswitch that are configured for the same VLAN ID can communicate with each other. Ports on a vswitch that are configured for different VLAN IDs are isolated from each other. Ports that do not have a VLAN ID assigned cannot communicate with ports that have a VLAN ID assigned, but can communicate with other ports that have no VLAN ID assigned. The port IDs for a vswitch can range from 0 to 511.
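Port VLAN IDs are assigned with the -u option of the hpvmnet command. A minimal sketch, assuming a vswitch named vmlan4 and using illustrative port and VLAN numbers:

```shell
# Assign untagged VLAN ID 102 to port 8 of the vswitch vmlan4
hpvmnet -S vmlan4 -u portid:8:vlanid:102

# Clear the VLAN ID from the port
hpvmnet -S vmlan4 -u portid:8:vlanid:none
```
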
• Port number. • State of the port. Table 16 (page 123) lists the possible VLAN port states. Table 16 VLAN port states State Description Active The port is active and is allocated to a running guest. No other guests with the same vNIC with the same vswitch and port can start. Down The port is inactive and is allocated to a running guest. No other guests with the same vNIC with the same vswitch and port can start.
To view information about a specific VLAN port, include the -p option to the hpvmnet command.
To create multiple tagged VLAN IDs on a port:
# hpvmnet -S vmlan4 -i portid:8:vlanid:103,104
# hpvmnet -S vmlan4 -p 8
Vswitch Name        : vmlan4
Max Number of Ports : 512
Port Number         : 8
Port State          : Reserved
Active VM           :
Untagged VlanId     : 102
Reserved VMs        : vm4
Adapter             : avio_lan
Tagged VLANs        : 103, 104
8.4.3 Configuring VLANs on virtual switches
The VLAN-backed vswitch (VBVsw) feature enables a virtual switch to be backed by a physical network device with HP-UX VLAN (IEEE 802.1Q) configured.
8.4.3.
To set the kernel tunable, enter the following: # kctune dlpi_max_ub_promisc=16 8.4.4 Configuring VLANs on physical switches When communicating with a remote VSP or guest over the network, you might need to configure VLANs on the physical switches. The physical switch ports that are used must be configured specifically to allow the relevant VLANs. If the remote host is VLAN aware, you must configure VLAN interfaces on the host for the relevant VLANs.
• DLKM operations in the vPar or VM guest. • Interrupt migrations in the vPar or VM guest and on the VSP. • Running vPars or VM guests with DIO as Serviceguard nodes or Serviceguard packages. • Support for HP-UX network providers. • Support for direct I/O networking functionality with the HP APA product. 8.5.
# hpvmmodify -P vm -d lan:dio:[b,d,macaddr]:hwpath:hwpath ◦ Replace a direct I/O function in a vPar or VM guest: # hpvmmodify -P vpar -m lan:dio:b,d,macAddr:hwpath:new-hwpath ◦ Modify the MAC address: # hpvmmodify -P vpar -m lan:dio:b,d,new-macAddr:hwpath:hwpath • The hpvmstatus command allows you to: ◦ View vPar and VM guest configurations. The direct I/O network functions are included in the #NETs count. # hpvmstatus ◦ View specific vPar or VM I/O details: # hpvmstatus -P vm -d NOTE: output.
conflicts at vPar or VM guest boot time, because those functions will not appear to be in use until the vPars and VM guests are booted.
# hpvmnet
Name     Number State Mode   NamePPA MAC Address    IPv4 Address
======== ====== ===== ====== ======= ============== ===============
localnet 1      Up    Shared         N/A            N/A
hpnet    2      Up    Shared lan0    0x1cc1de40d040 15.43.212.199
priv_net 3      Up    Shared lan1    0x1cc1de40d044

# hpvmhwmgmt -l -p dio | grep 0/0/0/3/0/0/7
0/0/0/3/0/0/7  lan  host  HP PCIe 2-p 10GbE Built-  device
# hpvmhwmgmt -p dio -a 0/0/0/3/0/0/7
hpvmhwmgmt: Sibling path '0/0/0/3/0/0/0' (lan0) is being used as vswitch 'hpnet'.
Error 1: The sibling DLA function: '0/0/0/4/0/0/0' of function: '0/0/0/4/0/0/1' is in use by another guest. vparboot: Unable to continue. NOTE: Trunking software such as APA is supported on DIO interfaces in the guest. For more information about APA, see the HP Auto Port Aggregation (APA) Support Guide. For the syntax and complete list of options for these commands, see the appropriate manpages. 8.
9 Administering VMs After installing the vPars and Integrity VM product, you can create VMs and virtual resources for the VMs to use. NOTE: The Integrity VM commands can be used to configure and manage both vPars and VM. They support overall product features. HP recommends using Integrity VM commands over vPar commands for managing vPars or VM. 9.1 Specifying VM attributes When you create a new VM, you specify its attributes. Later, you can change the VM attributes.
Table 17 Attributes of a VM (continued)
VM attribute: Virtual devices
Description: You can allocate virtual network switches and virtual storage devices to the VM. The VSP presents devices to the VM as virtual devices. The VM network consists of vNICs and vswitches.
Command option: -a rsrc
Default value: If you do not specify this attribute when you create the VM, it will not have access to network and storage devices.
Table 17 Attributes of a VM (continued)
VM attribute: Resource reservations
Description: Enable or disable resource reservation. For more information about resource reservation, see Section 5.3 (page 57).
Command option: -x resources_reserved=[true | false]
Default value: If not specified, resources will not be reserved when the VM is off.
VM attribute: User with administrator or operator privileges
Description: Specify user accounts that will have administrator or operator privileges to the VM.
Command option: -u [+]user[:admin|oper]
# hpvmcreate -P host1 -e 20 Alternatively, you can use the -E option to specify the entitlement as the number of CPU clock cycles per second to be guaranteed to each vCPU on the VM. For more information about VM entitlement, see Section 5.1.2 (page 53). 9.1.5 Guest memory allocation Use the -r amount option to specify the amount of virtual memory to be allocated to the guest. If you do not specify the memory allocation, the default is 2 GB.
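For example, the following sketch creates a VM with 4 GB of guest memory; the VM name is illustrative, and -r takes the amount in megabytes:

```shell
# 4096 MB = 4 GB of guest memory; omitting -r gives the 2 GB default
hpvmcreate -P host1 -r 4096
```
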
Example 14 Create a VM with virtual network interface backed by a DIO function
Add the DIO function “0/0/0/4/0/0/0” to the direct I/O pool using the hpvmhwmgmt command:
# hpvmhwmgmt -p dio -a 0/0/0/4/0/0/0
Create a VM named Oslo in the local system specifying memory of 2 GB, 2 CPUs, and virtual network interface backed by a DIO function “0/0/0/4/0/0/0”:
# hpvmcreate -P Oslo -r 2048 -c 2 -a lan:dio::hwpath:0/0/0/4/0/0/0
For more information about configuring VM guests with DIO functions, see Section 8.
9.1.9 Sizing guidelines The sizing guidelines for Integrity VMs Version 4.0 and later are different from that of earlier releases due to several factors, including the change of VSP operating system to HP-UX 11i v3. The formulas used to calculate VM capacity are outlined in the white paper Hardware Consolidation with Integrity Virtual Machines. The sizing information and related calculations are updated in revisions to this white paper dated September 2008 or later.
Table 20 Options to the hpvmcreate command Option Description -P vm-name VM name. You must specify a name when you create or modify the VM. You cannot modify this characteristic. -O os_type[:version] Specifies the type and version of the operating system. If you do not specify the operating system type, it is set to UNKNOWN.
Table 20 Options to the hpvmcreate command (continued) Option -F Description Suppresses all resource conflict checks and associated warning messages (force mode). This option is primarily intended for use by scripts and other non-interactive applications. Note that you will not receive notification about any potential resource problems for a VM created with the -F option. NOTE: The -F option is deprecated in Integrity VM commands. This option must be used only if instructed by HP Support.
information about running VMs under Serviceguard, see HP Serviceguard Toolkit for Integrity Virtual Servers User Guide at http://www.hp.com/go/hpux-serviceguard-docs. 9.3 Starting VMs To start the VM, run the hpvmstart command. You can specify either the VM name or the VM number (listed in the hpvmstatus display under VM #). The hpvmstart command syntax is: # hpvmstart {-P vm-name | -p vm_number} [-F | -s | -Q] Table 21 (page 141) lists the options that can be used with the hpvmstart command.
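For example, the VM named host1 can be started by name, or by the VM number shown in the hpvmstatus VM # column (13 is illustrative):

```shell
# Start the VM by name
hpvmstart -P host1

# Or start it by VM number
hpvmstart -p 13
```
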
# hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  Type OS Type   State    #VCPUs #Devs #Nets Memory
==================== ===== ==== ========= ======== ====== ===== ===== =======
config1              1     SH   HPUX      Off      1      5     1     512 MB
config2              2     SH   HPUX      Off      1      7     1     1 GB
guest1               5     SH   HPUX      On (OS)  1      5     1     1 GB
host1                13    SH   UNKNOWN   On (EFI) 1      0     0     2 GB
For more information about using the hpvmstatus command, see Chapter 13 (page 217).
Table 22 Options to the hpvmmodify command Option Description -P vm-name Specifies the name of the VM. You must specify either the -P option or the -p option. -p vm_number Specifies the number of the VM. To determine the VM number, enter the hpvmstatus command. -F Suppresses all resource conflict checks and associated warning messages (force mode). Use force mode for troubleshooting purposes only. NOTE: The -F option is deprecated in Integrity VM commands.
Table 22 Options to the hpvmmodify command (continued) Option Description -m rsrc Modifies an existing I/O resource for a VM. The resource is specified as described. You must specify the hardware address of the device to modify. The physical device portion of the rsrc specifies a new physical device that replaces the one in use. -d rsrc Deletes a virtual resource. -r amount Modifies the amount of memory available to this VM.
Table 22 Options to the hpvmmodify command (continued) Option Description -j [0|1] Specifies whether the VM is a distributed guest (that is, managed by Serviceguard) and can be failed over to another cluster member running Integrity VM. Do not specify this option. This option is used internally by Integrity VM.
/vmm_config.next): Allocated 860 bytes at 0x6000000140000000 locked SAL RAM: 00000000ffaa0000 (4KB) locked ESI RAM: 00000000ffaa1000 (4KB) locked PAL RAM: 00000000ffaa4000 (4KB) locked Min Save State: 00000000ffaa5000 (1KB) RAM alignment: 40000000 Memory base low : 6000000100000000 Memory base FW : 6000000140000000 Loading boot image Image initial IP=102000 GP=62C000 Initialize guest memory mapping tables Starting event polling thread Starting thread initialization Daemonizing....
Table 23 Options to the hpvmclone command (continued) Option Description -e percent[:max_percent] | -E cycles[:max_cycles] Specifies the CPU entitlement of the VM in CPU cycles. To specify the percentage of CPU power, enter the following option: -e percent[:max_percent] To specify the clock cycles, enter one of the following options: -E cycles[:max_cycles]M (for megahertz) -E cycles[:max_cycles]G (for gigahertz) -l vm_label Specifies a descriptive label for this VM.
Table 23 Options to the hpvmclone command (continued) Option Description -S amount Specifies that the cloned guest must share the same virtual LAN (VLAN) ports as the source guest. By default, the hpvmclone command allocates VLAN ports that are different from those allocated to the guest that is the source of the clone operation. For more information about using VLANS on VMs, see Section 8.4 (page 120). -g group[:{admin|oper}] Specifies a group authorization.
host2                3     SH   UNKNOWN   Off      1      1     1     1 GB
host3                4     SH   HPUX      Off      1      1     1     2 GB
You can create a clone of host3 by entering the following command.
Table 24 Options to the hpvmstop command (continued) Option Description -a Specifies all the VMs that are running. You must also specify the -F option. -h Performs a hard stop on the VM, similar to a power failure. This is the default. -g Performs a graceful shutdown on the VM. -F Forces the command to act without requiring confirmation. NOTE: The -F option is deprecated in Integrity VM commands. This option must be used only if instructed by HP Support.
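For example, the options above can be combined to shut down the guest OS in host1 gracefully, or to hard-stop it without a confirmation prompt (host1 is an illustrative VM name):

```shell
# Graceful shutdown of the guest OS
hpvmstop -P host1 -g

# Hard stop, similar to a power failure, without asking for confirmation
hpvmstop -P host1 -h -Q
```
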
Table 25 Options to the hpvmremove command Option Description -P vm-name Specifies the name of the VM. You must include either the -P or -p option. -p vm_number Specifies the number of the VM. To view the VM number, run the hpvmstatus command. -F Forces the command to act regardless of errors. NOTE: The -F option is deprecated in Integrity VM commands. This option must be used only if instructed by HP Support. -Q Performs the command without requiring user input to confirm.
10 Administering vPars To create vPars, you must run appropriate commands from the VSP or use the HP-UX Integrity Virtual Server Manager, the GUI application, which you can access from the Tools page in HP SMH installed on the VSP. This chapter discusses the various tasks that you can perform from the VSP using the commands. For more information about the tasks that you can perform using the GUI, see HP-UX Integrity Virtual Server Manager Help that comes with the GUI application.
Table 26 Attributes of a vPar (continued)
vPar attribute: Memory
Description: The memory is specified in megabytes. The minimum amount of memory you allocate to a vPar must be the total of the following:
• The amount of memory required by the operating environment in the vPar.
Command option: -a mem::mem_size[:{b|f}]
Default value: If you do not specify this attribute when you create a vPar, the default memory allocated is 2 GB. For more information, see Table 35 (page 240).
Table 26 Attributes of a vPar (continued)
vPar attribute: Virtual iLO Remote Console
Description: You can access the Virtual iLO Remote Console of the vPar using telnet or ssh. This attribute is the IP address that is used to connect to the Virtual iLO Remote Console of the vPar. You must specify the address in IPv4 dot-decimal notation.
Command option: -K console_ip
where:
-a   add (used with vparcreate or vparmodify).
-m   modify (used with vparmodify).
min  the minimum number of CPUs that must remain assigned to the partition.
max  the maximum number of CPUs that can be assigned to the vPar.
NOTE: The vPar can be either UP or DOWN when setting the min or max value. Hence, a reboot is not necessary when you modify the min and max value. When the partition is UP, the CPU count can only be adjusted if the HP-UX OS on the vPar is running.
• Base memory – This can be used by vPar kernel for critical data structures. You can add, but cannot delete base memory from a live vPar. • Floating memory – This is typically used for user applications. You can either add or delete floating memory from a live vPar.
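Using the mem resource syntax from Table 26, base and floating memory can be added to a live vPar as sketched below; the vPar name and sizes are illustrative:

```shell
# Add 1024 MB of floating memory to the running vPar Oslo
vparmodify -P Oslo -a mem::1024:f

# Add 512 MB of base memory (base memory cannot be deleted while the vPar is up)
vparmodify -P Oslo -a mem::512:b
```
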
Example 26 Create a vPar with virtual network interface backed by a DIO function Add the DIO function “0/0/0/4/0/0/0” to the direct I/O pool using the hpvmhwmgmt command: # hpvmhwmgmt -p dio -a 0/0/0/4/0/0/0 Create a vPar named Oslo in the local system, specifying memory of 2 GB, 2 CPUs, and virtual network interface backed by a DIO function “0/0/0/4/0/0/0”.
10.3.3 Modifying vPar name and number The vPar must be in the DOWN run state to modify the name. You can modify the name of a vPar using the vparmodify -P command to add a name that does not exist in the current vPar database. The vPar number cannot be modified. The only way you can get a different number is to delete the current vPar and create a new one. When you create a new vPar, you can specify the vPar number with the -p option. 10.
NOTE: This command functions only when the guest OS is running, and only if the guest OS is capable of responding to the graceful shutdown request. This command only initiates the graceful shutdown operation; it does not subsequently report failure if the OS fails to shut down gracefully. The preferred method for stopping a vPar is to log in to it, stop all the applications, and then run the /etc/shutdown -h command.
you can either power down the vPar (vparreset command with the -d option) or shut down the vPar (vparreset command with the -g option).

CAUTION: When the vparremove command is used accidentally, serious consequences can occur. Hence, the -f (force) option is required with the command.

To remove a vPar named Oslo, run the following command:

# vparremove -p Oslo -f

10.7 Deactivating a vPar configuration
You can deactivate a vPar to remove or deallocate resources from it, while maintaining its configuration settings.
11 PCI OLR support on VSPs
Online Addition, Replacement, and Removal of PCI I/O devices (PCI OLARD) is an important value proposition of HP Integrity Superdome 2 (SD2) platforms. The OLR functionality provides assurance of continued system availability even when potential problems are identified with active I/O resources. On SD2 platforms configured as a VSP with versions earlier than HP-UX vPars and Integrity VM 6.
taken for an I/O card replacement in a vPars and Integrity VM environment versus an SD server configured as native HP-UX.
11.4.1 CRA on a VSP
On a standalone SD2 server, before an online replacement of an I/O card, a CRA of all the system resources that are impacted by the unavailability of the card in question is performed.
VM guest and the VSP, and found to have no system critical impact, does the olrad command proceed with the next steps. For more information about how a resource analysis is performed on mass storage components of a system, see the white paper Critical Resource Analysis. All the scenarios described in the white paper are applicable to NPIV devices seen within a vPar or VM guest.
With vPars and Integrity VM V6.3, the olrad CRA request issued for the physical NIC on the VSP initiates a parallel CRA check in each vPar and VM guest associated with the vswitch. The LAN CRA module in the vPar or VM performs usage analysis and reports any potential impacts from the LAN subsystem perspective. Some of the usage scenarios determined by LAN CRA include a NIC port configured with a VLAN and an IP address, a NIC port connected to the network, and so on.
11.4.6 CRA logs
The CRA infrastructure collates the detailed analysis logs from all the subsystem CRA modules and returns the combined logs at the location /var/adm/cra.log on the VSP. When the olrad command is invoked on a VSP, the CRA log on the VSP contains relevant entries under the following scope: HPVM NPIV Guest-wise analysis for each vPar or VM guest that has an NPIV resource impacted by the OLRAD operation.
All vPars and VM guests running on a V6.3 VSP must have the V6.3 VirtualBase bundle installed to take advantage of the PCI OLR capability on the VSP.

The guest has more than 32 NPIV vHBAs that are backed by the I/O card that is considered for replacement.
PCI OLR is not supported on guests having more than 32 devices backed by the I/O card that is considered for replacement.
Could not resume the driver in one of the guests.
Error: post_replace:/usr/sbin/olrad.d/hpvmdio driver script Failed !
A PCI OLR resume operation failed on the VSP in at least one of the guests. To recover the state of the card or device on the VSP and the guests, HP recommends performing a PCI OLR suspend followed by a PCI OLR resume operation on the same device on the VSP.
11.4.8.
Example 29 Configuration

The VSP has two active VM guests configured with NPIV vHBAs.

hpux-atc-sd2-001par1# hpvmstatus
[Virtual Machines]
Virtual Machine Name VM #  Type OS Type State  #VCPUs #Devs #Nets Memory
==================== ====  ==== ======= ====== ====== ===== ===== =======
Dscvr_G01               1  SH   HPUX    On(OS)     32     2     1  256 GB
Dscvr_G02               2  SH   HPUX    On(OS)      2     1     1    8 GB

The VSP has two dual port FC cards in PCI OLR capable slots.
dscvr_g02# ioscan -kfNd gvsd
Class    I  H/W Path  Driver  S/W State  H/W Type   Description
================================================================
ext_bus  0  0/0/0/0   gvsd    CLAIMED    INTERFACE  HPVM NPIV Stor Adapter

dscvr_g02# hpvmdevinfo
Device  Bus,Device,Target  Backing Store Type  Host Device Name  Virtual Machine Device Name
======  =================  ==================  ================  ===========================
hba     [0,0]              npiv                /dev/fcd4         /dev/gvsd0

dscvr_g02# setboot
Primary bootpath : 0/0/0/0.0x21530002ac000d2c.
Example 30 Configuration

The configuration is the same as in the previous example, with the exception that the VM guest dscvr_g02 is shut down.
hpux-atc-sd2-001par1# ioscan -kfNC fc
Class I  H/W Path        Driver  S/W State  H/W Type   Description
===================================================================
fc    1  41/0/2/0/0/0/0  fcd     CLAIMED    INTERFACE  HP AH401A 8Gb Dual Port PCIe Fibre Channel Adapter (FC Port 1)
fc    9  41/0/2/0/0/0/0.
11.4.8.2 DIO devices
Example 31 Configuration

An SD2 system configured as a VSP running vPars and Integrity VM 6.3 with two active guests; the system has a dual-ported NIC supporting DLA.
Critical Resource Analysis(CRA) in progress...
[NOTE: The CRA may take a few minutes to complete on large configurations. It is recommended not to disrupt this operation.]
CRA REPORT SUMMARY: CRA detected DATA CRITICAL usages.
Detailed CRA report is available in /var/adm/cra.log file.

The criticality reported by CRA in this case is CRA_DATA_CRITICAL. For more information about the CRA, see the CRA log file /var/adm/cra.log on the VSP.
# olrad -f -r 10-0-1-0-2-3 Activity: Start of Prepare Replace Target slot: 10-0-1-0-2-3 Critical Resource Analysis(CRA) in progress... [NOTE: The CRA may take a few minutes to complete on large configurations. It is recommended not to disrupt this operation.] CRA REPORT SUMMARY: CRA detected DATA CRITICAL usages. Detailed CRA report is available in /var/adm/cra.log file.
# ioscan -kfnC hpvmdio
Class    I   H/W Path        Driver   S/W State  H/W Type   Description
=======  ==  ==============  =======  =========  =========  ========================
hpvmdio  0   42/0/0/2/0/0/0  hpvmdio  CLAIMED    INTERFACE  HP AM225-60001 PCIe 2-p 10GbE-SFP+ Adapter
             /dev/hpvmdio0
hpvmdio  1   42/0/0/2/0/0/1  hpvmdio  CLAIMED    INTERFACE  HP AM225-60001 PCIe 2-p 10GbE-SFP+ Adapter
             /dev/hpvmdio1

Further, the ioscan output inside the guest will also show the DLA NIC port in CLAIMED state, indicating that the NIC port is successfully resumed.
Example 32 Configuration

An SD2 system configured as a VSP running vPars and Integrity VM 6.3, with two active guests; the system has a dual-ported NIC supporting FLA, with each port of the FLA NIC assigned to a different active guest.
Name  Mtu   Network   Address         Ipkts  Ierrs  Opkts  Oerrs  Coll
lan1  1500  15.0.0.0  15.213.153.220  24     0      24     0      0

If you want to replace the NIC with another card of the same model, you must first run the olrad(1M) command with the -C option, which reports the resource usage and its criticality. In this example, because the FLA NIC has an IP address configured, the criticality reported by CRA is CRA_DATA_CRITICAL. For more information about the CRA details, see the CRA log file /var/adm/cra.log on the VSP.
Example 33 Configuration

An SD2 system configured as a VSP running vPars and Integrity VM 6.3 with two active guests; the system has two dual-ported NICs, one supporting DLA and the other supporting FLA. The DLA and FLA ports are further configured in APA mode for redundancy.
ClassInstance  State  AddressI        System   Type         Interface
=============  =====  ==============  =======  ===========  =========
lan8           UP     0x2E1B47D73CA1  iexgbe   10GBASE-SR   lan900
lan900         UP     0x2E1B47D73CA1  hp_apa   hp_apa
lan20          UP     0xBA3462833C28  iocxgbe  10GBASE-SFP  lan900

# nwmgr -S apa -I 900 -v
lan900 current values:
    Mode              = LAN_MONITOR
    Parent PPA        = APA
    State             = Up
    Membership        = 8,20
    Active Port(s)    = 8
    Ready Port(s)     = 20
    Not Ready Port(s) = -
    Connected Port(s) = 20
    Polling Interval  = 10000000

If you want to re
# olrad -r 10-0-1-0-2-3 Activity : Start of Prepare Replace Target slot : 10-0-1-0-2-3 Critical Resource Analysis(CRA) in progress... [NOTE: The CRA may take a few minutes to complete on large configurations. It is recommended not to disrupt this operation.] CRA REPORT SUMMARY: CRA returned WARNING. Detailed CRA report is available in /var/adm/cra.log file.
Example 34 Configuration

An SD2 system configured as a VSP running vPars and Integrity VM 6.3 with two active guests; the system has a Combo card supporting NIC (FLA) and FC functions.
Name  Mtu    Network       Address        Ipkts  Ierrs  Opkts  Oerrs  Coll
lan3  1500   15.213.200.0  15.213.202.60  1060   0      171    0      0
lo0   32808  127.0.0.0     127.0.0.1      242    0      242    0      0

If you want to replace the NIC with another card of the same model, you must first run the olrad(1M) command with the -C option, which reports the resource usage and its criticality.
In this scenario, where the CRA has returned DATA CRITICAL, if you choose to run the Pre Replace option of the olrad(1M) command (the olrad -r option), the operation fails with a CRA_DATA_CRITICAL error, because performing this operation would render the VM guest to which the FLA NIC ports are assigned inaccessible over the network. To verify that the FLA NIC is successfully suspended, the following options of the olrad(1M) and ioscan(1M) commands can be used.
Example 35 Configuration The VSP has two guests with the following networking configuration.
CRA REPORT SUMMARY: CRA detected DATA CRITICAL usages.
Detailed CRA report is available in /var/adm/cra.log file.

# cat /var/adm/cra.log
ANALYSIS SCOPE: HPVM AVIO NETWORKING
This report provides details of any HPVM networking related usages for a set of h/w paths in the system.
RESULT: DATA-CRITICAL resource usage detected.
Example 36 Configuration

In this scenario, the two guests evolution and president use the datalan vswitch, which is backed by a NIC (lan18) in an olrad-capable PCI slot. This example shows the behavior of olrad -C when no VNIC is configured with an IP address.
Example 37 Configuration

In this scenario, the two guests evolution and president use the datavlan vswitch, which is backed by a NIC (lan18) in an olrad-capable PCI slot. A VNIC is configured with an IP address. This example shows the behavior of suspend (olrad -r and olrad -f -r) and resume of a card.
Example 38 Configuration The VSP has an active VM guest configured with legacy AVIO backing stores.
11.4.9 Time taken for CRA on a VSP
The default timeout value for each guest OS to complete CRA requests issued to it, as part of host PCI OLR operations initiated using olrad(1M), is two minutes. For guests with large and active I/O configurations, this may be insufficient. Administrators can configure the timeout value by defining the parameter OLR_GUEST_RESP_TIMEOUT in /etc/rc.config.d/hpvmconf; the timeout value must be specified in milliseconds.
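As a sketch of the conversion, assuming the parameter is set as a plain shell variable in hpvmconf (the file name comes from the text above; the 5-minute figure is an arbitrary example):

```shell
# Convert a 5-minute guest CRA timeout to milliseconds and emit the
# hpvmconf line; append the output to /etc/rc.config.d/hpvmconf on the VSP.
timeout_min=5
timeout_ms=$((timeout_min * 60 * 1000))
printf 'OLR_GUEST_RESP_TIMEOUT=%d\n' "$timeout_ms"
# prints OLR_GUEST_RESP_TIMEOUT=300000
```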
• The CRA on the VSP cannot determine that a vPar or VM guest is in the middle of a recovery boot process. Hence, HP recommends not attempting a PCI OLRAD operation on the VSP if any of the vPars or VM guests are in the middle of a recovery boot. Retry the operation after the recovery boot is complete and the guest is back in a stable state (that is, either shutdown or the recovery boot has completed).
12 Migrating VMs and vPars You can migrate either an offline vPar or VM, or a live online VM running a guest operating system and applications from a source VSP system to a target VSP system, using the hpvmmigrate command. 12.1 Introduction to migration vPars and Integrity VM V6.3 allows the following types of migration: • To migrate a VM or vPar from one VSP system to another, use the hpvmmigrate command.
Figure 16 Symmetric VSPs configured for guest migration

[Figure 16 shows two symmetric VSPs: VSP A (source) and VSP B (destination). Each VSP runs hpvmmigrate over the HP VM API with OS authentication, connects to the same LAN (LAN a), transfers the guest configuration via SSH, and reaches shared SAN storage through matching FC ports (FC Port x) and an FC switch. The SAN holds the HP-UX OS disks of vPar 1, VM 1, and VM 2.]

The VM or vPar migration environment includes a source machine and a target machine.
Figure 17 (page 197) shows moving a guest online from a source VSP to a target VSP.
and features when they are needed. This is especially true for workloads with well-understood cyclic resource requirements (for example, month-end processing). • Balancing VSP workloads — You might want to segregate VMs to balance the workload on VSPs. For example, you might want to separate VMs whose workloads peak simultaneously. Perhaps you want to group workloads together that have similar special resource requirements.
If a target VSP contains multiple DIO-capable functions with the same label, offline migration might pick a DIO-capable function that is used by another vPar or VM. In such cases, the vPar or VM that is migrated offline cannot power on if another vPar or VM assigned the same DIO-capable function is already running.
guests can be migrated while ON and running. You can use the -o option with VMs to migrate an online guest, which involves copying all the configuration information of the VM and transferring the active guest memory and virtual CPU state. Omit the -o option to migrate the configuration information of the offline VM or vPar, and optionally local disk contents to the target VSP.
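The two modes described above might be invoked as follows; the guest and host names are placeholders, and the option letters (-o, -P, -h, -C, -m) are the ones documented in Table 27.

```shell
# Online migration: copies the configuration plus the active guest
# memory and virtual CPU state to the target VSP.
hpvmmigrate -o -P guest1 -h target-vsp

# Offline migration: configuration only (omit -o); a -C before the
# first -m option would also physically copy local disk contents.
hpvmmigrate -P guest2 -h target-vsp
```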
Table 27 Options to the hpvmmigrate command (continued) Option Description -c number-vcpus For offline migrations, specifies the number of virtual CPUs for which this VM will be configured on the target. -C For offline migrations, physically copies the storage device specified with the -m option to the target VSP during the migration process. If specified before the first -m option, it applies to all -m options that specify an appropriate type of storage.
Table 27 Options to the hpvmmigrate command (continued) Option Description -k Creates the VM configuration on the target VSP and marks it Not Runnable, but does not change the VM on the source VSP. This is used primarily to distribute VM configurations for Serviceguard. -l new-vm-label Specifies a descriptive label for the VM, which can be useful in identifying a specific VM in the verbose display of the hpvmstatus command.
Table 27 Options to the hpvmmigrate command (continued) Option Description -s Indicates that the migration must not occur, but the hpvmmigrate command must check whether or not the migration is possible. Because VMs and their hosts are dynamic, a successful -s trial does not always guarantee a subsequent successful migration. The hpvmmigrate command with the -o, -s, and -h options (but without a -p or -P option) verifies host connectivity, licensing, and CPU compatibility for online migration.
called host2, and the private network of the target VSP is called host2-hpvm-migr (that is, host2-hpvm-migr is an alias for the private network defined in /etc/hosts).
NOTE: The hpvmmigrate command does not check whether you are using a private network to migrate your guest. Using a private network is important for security, and to maintain the performance of the public network at your site.
NOTE: A transient network error might cause the vswitch connectivity check of the hpvmmigrate command to report a failure. If the connectivity check fails, retry the migration by rerunning the hpvmmigrate command. If the network connectivity check of the hpvmmigrate command continues to fail, verify the vswitch and network configuration, and test connectivity with the nwmgr command.
you are certain that the source and target vswitches are connected to the same subnet. Otherwise, your guest will lose network connectivity after migrating. For online migration, in addition to sharing the same LAN segment for normal guest connectivity, the VSPs must be connected with a private 1 GbE (or faster) network for efficient VSP-to-VSP communications and for secure guest memory transfer.
After configuring the /etc/ntp.conf file of the guest, assuming NTP is already enabled (that is, the XNTPD variable in /etc/rc.config.d/netdaemons is set to 1, as in export XNTPD=1), you can run the following commands on an HP-UX guest to sync its time with the VSP and restart the xntpd daemon:

/sbin/init.d/xntpd stop
/usr/sbin/ntpdate -b
/sbin/init.d/xntpd start
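A minimal guest /etc/ntp.conf for this setup could look like the following sketch; the server address and drift-file path are assumptions, not taken from this guide.

```shell
# Hypothetical /etc/ntp.conf fragment on the guest -- point the guest
# at the VSP as its NTP server (placeholder address).
server 10.0.0.1          # replace with your VSP's address or hostname
driftfile /etc/ntp.drift # assumed drift-file location
```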
Table 28 Itanium processor families (continued)
Family  Model  Series
32      1      Itanium 9100
32      2      Itanium 9300
33      0      Itanium 9500

You can look up the processor Family as shown in the following example output from the machinfo -v command. (As more processor families and models are added, more specific capability requirements might be necessary.)
Assign private network IP addresses to those interfaces by editing the /etc/hosts, /etc/nsswitch.conf, and /etc/rc.config.d/netconf files on each host. Private (non-routable) IP addresses in the range 10.0.0.0 to 10.255.255.255 are good choices. (See the chapter on Network Addressing for assistance with subnetwork configuration in the current version of the HP-UX LAN Administrator's Guide.)
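For example, the private-network aliases might be recorded like this; the addresses and interface names are illustrative, following the host-hpvm-migr naming convention used in this chapter.

```shell
# Illustrative /etc/hosts entries on both VSPs -- private, non-routable
# addresses dedicated to hpvmmigrate traffic.
10.1.1.1   host1-hpvm-migr   # private interface on the source VSP
10.1.1.2   host2-hpvm-migr   # private interface on the target VSP
```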
NOTE: Because Integrity VM disables the TSO and CKO capabilities on the IP address of the LAN interface (resulting in poorer than expected VM Host data-transfer performance), HP recommends that you dedicate a LAN interface solely for OVMM data transfer to improve data transfer time. That is, to receive the best performance on host-to-remote data transfers on a LAN interface, do not configure a vswitch over it. 12.3.2.
Instead of using secsetup, SSH keys can be generated manually on the systems by using the ssh-keygen command. The ssh-keygen command generates, manages, and converts authentication keys for SSH. For information about manual SSH key generation, see the ssh-keygen command HP-UX manpage. 12.3.3.1 Troubleshooting SSH key setup If SSH is installed on both the source and the target system, you can run the ssh command on the source host to establish a connection with the target host without providing a password.
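A manual key setup might look like the following sketch; the key type, file locations, and target host name are assumptions, not taken from this guide.

```shell
# On the source VSP: generate a passphrase-less RSA key pair for root
# (key type and paths are illustrative).
ssh-keygen -t rsa -N "" -f "$HOME/.ssh/id_rsa"

# Append the public key to root's authorized_keys on the target VSP.
cat "$HOME/.ssh/id_rsa.pub" | \
    ssh root@target-vsp 'cat >> ~/.ssh/authorized_keys'

# Verify that a connection now succeeds without a password prompt.
ssh root@target-vsp hostname
```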
12.3.4 VM requirements and setup Online VM Migration is supported on HP-UX 11i v2 and HP-UX 11i v3 guests. All memory sizes and virtual CPU configurations for the current version of Integrity VM are supported. As with all guest OS installations, the guest kit must be installed. NOTE: With Integrity VM V6.3, if VirtualBase B.06.30 is installed on the guest, the guest kit need not be installed. 12.3.4.
Offline or Online migration can also be retried by adjusting the following hpvmmigrate timeout parameters in the /etc/rc.config.d/hpvmconf file. • HPVMMIGRATE_CONNECT_TIMEOUT—Specifies the timeout value used to check whether the target host is reachable or not. The default is 1000 milliseconds. • HPVMMIGRATE_SSHCONNECT_TIMEOUT—Specifies the timeout value used for ssh connection. The default is 30000 milliseconds.
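Both parameters live in the same hpvmconf file; a sketch of raising them (the doubled values are arbitrary examples, not recommendations):

```shell
# Hypothetical /etc/rc.config.d/hpvmconf fragment -- double the defaults.
HPVMMIGRATE_CONNECT_TIMEOUT=2000        # milliseconds (default 1000)
HPVMMIGRATE_SSHCONNECT_TIMEOUT=60000    # milliseconds (default 30000)
```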
◦ The number of pFCs on the target host.
◦ The number of active NPIV HBAs that each of them already has.
◦ The FC connectivity of the pFCs to the FC fabric (that is, to which physical switch and fabric they are connected).
For each guest NPIV HBA, an HBA port on the target is selected based on the following criteria:
◦ An attempt is made to distribute the NPIV HBAs of the guest, first across eligible HBA cards, and then across eligible HBA ports on the target.
• Whole disk backing stores consisting of SAN LUNs
• Ejected file-backed DVDs
• SLVM volumes
• NFS-mounted backing stores
• NPIV backing stores
• Cluster DSF
• DMP Nodes

File backing stores that are not NFS-mounted, and attached devices, are not supported for online guest migration. The following conditions are mandatory when migrating a vPar or VM with a cDSF as a backing store:
• The source and the destination must belong to the same Cluster DSF group.
# kctune mdep_reduce_rse_size=1

After enabling the tunable, the VSP must be restarted for the tunable to take effect.

2. VSP running on Itanium 9300 processor
   a. Installation requirements for VSP:
      • HP-UX 11i v3 March 2014 (AR1403) OE, OR
      • HP-UX 11i v3 March 2013 (AR1303) OE, OR
      • HP-UX 11i v3 September 2012 (AR1209) OE with AR1303 Feature11i patches
   b. vPars and Integrity VM V6.3, or vPars and Integrity VM V6.2 with PHSS_43648 (PK2)

WARNING! those listed.

3.
13 Managing vPars and VMs using CLI To manage a vPar or VM guest, connect to the vPar or VM guest using a remote connection, and use the operating system administration procedures appropriate to the guest OS. vPars and Integrity VM provides utilities for managing vPars and VM guests from the VSP and from inside the vPar and VM guest. This chapter describes how to manage vPars and VM guests using Integrity VM commands and utilities. 13.
Table 29 Options to the hpvmstatus command (continued) Option Description -R Displays the resource reservation settings of the VMs. -L Displays the changes from the current configuration. -i When used with the -P option, prints statistics collected by the monitor. -C Displays whether the guests prefer clm, ilm, or none. -A Displays the guest configuration differences between the next start and the last start guest configurations.
[Virtual CPU Details] Number Virtual CPUs : 1 Minimum Virtual CPUs : 1 Maximum Virtual CPUs : 32 Percent Entitlement : 10.0% Maximum Entitlement : 100.
Maximum vcpus for an OpenVMS virtual machine     = 7
Maximum available vcpus for a VM                 = 6
Available CPU cores for a virtual partition      = 6
Available entitlement for a 1 way virtual machine = 1330 Mhz
Available entitlement for a 2 way virtual machine = 1330 Mhz
Available entitlement for a 3 way virtual machine = 1330 Mhz
Available entitlement for a 4 way virtual machine = 1330 Mhz
Available entitlement for a 5 way virtual machine = 1330 Mhz
Available entitlement for a 6 way virtual machine = 1330 Mhz

Specific
NOTE: When creating a vPar using the hpvmcreate command, resource reservations and AutoBoot are not set by default, as they are when using the vparcreate command. The following two commands are functionally equivalent:

vparcreate -P vparName

hpvmcreate -P vparName -B auto -x vm_type=vpar -x resources_reserved=true

13.5 Transformation between VM and vPar
A VM can be transformed into a vPar by setting its vm_type attribute to vpar using the hpvmmodify command.
Memory During type conversion, the base and floating memory values for vPar or VM guest are as follows: Case 1: Memory parameters for shared or VM guest. For any memory modification for shared guest, the entire memory is considered as base memory. Therefore, you cannot specify base and floating memory values. However, when an offline vPar is converted to a VM guest, the base and floating memory configuration values are retained until the values are modified. Case 2: Memory parameters for vPar.
... Total number of operable system cores = 8 CPU cores allocated for VSP = 1 CPU cores allocated for vPars and VMs = 7 ... ... Total memory allocated for vPars and VMs = 27392 Mbytes Memory in use by vPars and VMs = 1600 Mbytes Available memory for vPars and VMs = 25792 Mbytes Available memory for 6 (max avail.) CPU VM = 25088 Mbytes Available memory for 6 (max avail.) CPU vPar = 25664 Mbytes ...
information in the configuration file of the guest, it is automatically updated to reflect the current operating system. 13.8 Creating VM labels The -l option of the hpvmcreate or hpvmmodify command specifies the label of the VM. The VM label is a descriptive label unique to a VM or vPar. The label can be useful in identifying a specific VM in the output displayed by the hpvmstatus -V command.
100 * guest memory size / available host memory + 2 (if the guest resources can fit into available CLM of the cell and processors) A rough estimate of the processor weight calculation is: (minimum guest cpu entitlement * number of virtual processors) / (100 * number of host processors) Guests are expected to start in order of highest weight to lowest. You can adjust the order by setting the sched_preference attribute. If a guest fails to start for any reason, the sequence continues with the next guest.
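Plugging assumed numbers into the rough estimate above makes the weight concrete: a 4 GB guest on a host with 32 GB of available memory whose resources fit into the available CLM (all figures are illustrative, not from the manual).

```shell
# Rough boot-order memory weight: 100 * guest_mem / host_mem, plus 2
# when the guest fits into available CLM. Values in MB; shell integer
# arithmetic truncates the quotient.
guest_mem=4096
host_mem=32768
mem_weight=$(( 100 * guest_mem / host_mem + 2 ))
echo "$mem_weight"
# prints 14
```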
The following command creates the VM named testme with the administrator named testme1: # hpvmcreate -P testme -u testme1:admin Guest operators and administrators need access to the hpvmconsole command to control the VM. If you do not want the same users to have access to the VSP, you can restrict use of the hpvmconsole command to only guest console access by creating a restricted account for that purpose. To do so: 1. Use the useradd command and set up an /etc/passwd entry for each guest on the VSP.
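Step 1 could be sketched as follows; the home directory, the hpvmconsole path, and the account name are assumptions and should match your installation.

```shell
# Hypothetical restricted, console-only account for guest1: the login
# shell is set to hpvmconsole so the user lands directly on the guest
# console and cannot get a general-purpose shell on the VSP.
useradd -d /var/opt/hpvm/guests/guest1 \
        -c 'guest1 console' \
        -s /opt/hpvm/bin/hpvmconsole guest1adm
```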
[host1] vMP> The virtual console interface displays raw characters for the CL and CO commands, including the attempts of the guest to query the console terminal for its type and characteristics. As a result, the terminal answers those queries, which can cause the terminal setup communication to interfere with the virtual console commands. Interactive users can clear the screen. However, this situation can be a problem for noninteractive or scripted use of the console. 13.10.
(Use Ctrl-B to return to vMP main menu)

-------------------------- Prior Console Output --------------------------
EFI Boot manager ver 1.10 [14.62] [Build: Fri Aug 4 11:37:36 2006]
Please select a boot option

    EFI Shell [Built-in]
    Boot option maintenance menu

Use ^ and v to change option(s). Use Enter to select an option
Loading : EFI Shell [Built-in]
EFI Shell version 1.10 [14.
13.12 Using the virtual iLO Remote Console The vPars and Integrity VM virtual iLO Remote Console allows you access to the guest console by logging into a specific IP address. You can assign each guest a virtual iLO Remote Console IP address with which the end user can connect using either telnet or SSH. After login authentication, the guest console is immediately available. The user is no longer required to know the VSP machine IP address or guest name.
[Remote Console]
Remote Console Ip Address: 16.92.81.68
Remote Console Net Mask:   255.255.252.0

When users connect to the virtual iLO Remote Console IP address, they must log in using the standard telnet or ssh system authentication. After authenticating, the users receive immediate access to the guest console:

# ssh -l guest1admin 16.92.81.
The virtual iLO Remote Console uses the SSH server host keys of the host system. If the guest is migrated to another host system (using OVMM), these host keys change. When an end user does an SSH connection, an error message is displayed. The end user must manually delete the local copy of the host key. For additional information, see ssh(1). • Guest Administrator accounts are not migrated during OVMM.
Table 32 Dynamic memory control command options Keyword value pair Description dynamic_memory_control={1|0} Specifies whether a privileged user on the guest (such as root) can change the dynamic memory values while the guest is running. To disable guest-side dynamic memory control, specify 0 (zero). If the guest is not active, the only effect is the modification of the guest configuration file. On the running guest, the change takes effect immediately.
13.14.1 Configuring a VM to use dynamic memory By default, dynamic memory is enabled. To configure a VM to use dynamic memory, use the hpvmcreate, hpvmmodify, or hpvmclone command.
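For example, dynamic memory boundaries might be set with keywords like the following. The dynamic_memory_control keyword is documented in Table 32; the ram_dyn_min and ram_dyn_max names and the sizes used here are assumptions based on the ram_dyn attributes mentioned later in this chapter, and should be verified against hpvmmodify(1M).

```shell
# Sketch: configure guest1 for dynamic memory between 2 GB and 6 GB,
# with guest-side dynamic memory control enabled.
hpvmmodify -P guest1 \
    -x dynamic_memory_control=1 \
    -x ram_dyn_min=2048M \
    -x ram_dyn_max=6144M
```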
Memory chunksize : 65536 KB
Driver Mode(s)   : STARTED ENABLED
AMR state        : DISABLED
.
.
.

Table 33 (page 234) lists the dynamic memory characteristics displayed by the hpvmstatus and hpvmmgmt commands. Table 33 Dynamic memory characteristics Characteristic Setting Description Type none No dynamic memory support. any Dynamic memory is configured on the host, but the dynamic memory subsystem on the guest has not started and reported the implementation type.
Table 33 Dynamic memory characteristics (continued) Characteristic Setting Description Memory chunksize value The allocation chunk size used by dynamic memory when increasing and decreasing guest memory (as described in Section 13.14.3.3 (page 237)). Driver mode(s) started Dynamic memory can change guest memory size. enabled Control that overrides started. guestctl Guest-side control is enabled.
Table 34 Options to the hpvmmgmt command -l type Specifies the type of data for which you want to view more information. For type, enter ram. -l type -t interval Allows you to continually watch and check the dynamic ram values. For the interval, specify the number of seconds between fetches of live data. -t interval Allows the hpvmmgmt command to continuously refetch the requested type of data using the value specified for the interval parameter.
(ram_dyn_max) in increments of the chunk size (64 MB). Use the -x option with the hpvmmgmt command: # hpvmmgmt -x ram_target=memory_size For example, to change the guest memory size to 4 GB, enter the following command: # hpvmmgmt -x ram_target=4096M Attempting to increase memory from 2103 MB to 4096 MB. Successfully began to change ram_target to 4096 MB. 13.14.3 Troubleshooting dynamic memory problems This section describes how to solve problems in the use of dynamic memory. 13.14.3.
Maximum memory      : 6144 MB
Current memory      : 2103 MB
Comfortable minimum : 27 MB
Boot memory         : 6135 MB
Free memory         : 0 MB
Available memory    : 286 MB
Memory pressure     : 100
Memory chunksize    : 65536 KB
Driver Mode(s)      : STARTED ENABLED
.
.
.

An indication of this problem is a small or zero amount of free memory and a large memory pressure value (100). If these indicators are present, use the hpvmmodify command on the VSP to increase the memory size of the VM. The VM then boots normally. 13.14.3.
13.14.3.6 Upgrading the VirtualBase software when upgrading Integrity VM The dynamic memory software has two components— the VSP support and the HP-UX guest support. These two components must be at the same version level for dynamic memory to function. When you upgrade Integrity VM, you must also install the new VirtualBase kit on the guest. (You must also upgrade the guest operating system if it is no longer supported.) During this upgrade process, dynamic memory might not function.
from the VM. It does not support manual dynamic memory operations from the VSP that would cause the VM to shrink below its entitlement. 13.14.4.3 Viewing automatic memory reallocation You can view automatic memory reallocation parameters and status for each VM by using the standard Integrity VM commands.
Table 35 Options to vpar and hpvm commands (continued)

Command     Option       Description
hpvmmodify  -a mem::     Increment the base memory by the specified amount for the given vPar.
            -a mem:::b
            -d mem::     Decrement the base memory by the specified amount for the given vPar.
            -d mem:::b
            -m mem::     Modify the base memory with the specified amount for the given vPar.
# vparcreate -p -a mem:::f ...
# vparmodify -p -a mem:::f ...

Alternatively,

# hpvmcreate -x vm_type=vpar -P -a mem:::f ...
# hpvmmodify -P -a mem:::f ...

• Both base and floating memory can be added when the partition is up or down. But, to delete base memory, the partition must be down.
• Floating memory can be added or deleted when the partition is up or down.
However, if any memory operation was performed on the VM guest using the hpvmmodify -r option, the total memory will be treated as base memory when the guest is transformed to a vPar. • Base and floating memory of a partition are updated according to the following rules when the hpvmmodify -r option is used to modify the total partition memory. # hpvmmodify -P -r ◦ If the specified amount of memory is greater than the current total memory, then floating memory is incremented.
Total Memory (MB):    2048
Floating Memory (MB): 0
.......

The overall memory available in the guest pool for memory allocation can be viewed with the following vparstatus command:

# vparstatus -A
........
[Available Memory]: 411968 Mbytes
........

Now, the vpar1 guest is booted.

# vparboot -p 1
(C) Copyright 2000 - 2012 Hewlett-Packard Development Company, L.P.
Mapping vPar/VM memory: 2048MB
......
===  =====  =======  ===  ===  ========  ===========
1    vpar1  1/512    1    2    10240     4096

At this point, you will notice that the overall memory available in the guest pool is further reduced, because some of the memory is added online to the vpar1 guest.

# vparstatus -A
......
[Available Memory]: 401600 Mbytes
.......

Now, 4 GB of floating memory is removed from the same guest using the vparmodify command.
SHVM0007     7  SH  HPUX  On (OS)  1  1  1     2 GB
vPar0003     3  VP  HPUX  On (OS)  1  1  1  2048 MB
vPar0001     1  VP  HPUX  On (OS)  1  1  1  2048 MB
vPar0002     2  VP  HPUX  On (OS)  1  1  1  2048 MB
SHVM0006     6  SH  HPUX  On (OS)  1  1  1     2 GB

# vparstatus
[Virtual Partition]
Num  Name                        RunState      State
===  ==========================  ============  =========
4    vPar0004                    UP            Active
3    vPar0003                    UP            Active
1    vPar0001                    UP            Active
2    vPar0002                    UP            Active

[Virtual Partition Resource Summary]
Virtual Partition                CPU      Num  Num  Total MB  Floating MB
Num  Name                        Min/Max  CP
Now, we can use the vparmodify command again to delete 4 CPUs from the online vPar:

# vparmodify -p 1 -d cpu::4
vparmodify: A CPU OLAD operation has been initiated for this vPar.
Please check vparstatus output or syslog for completion status.

# vparstatus -p 1 -v
[Virtual Partition Details]
Number:   1
Name:     vPar0001
RunState: UP
State:    Active
…
[CPU OL* Details]
Operation: CPU change
CPU Count: 1
Status:    PASS
…

As seen in the output, the vPar vPar0001 is now running with one CPU.
• Previous Dynamic I/O operations or PCI OLR operations are in progress.
• The target of the operation is an Integrity VM guest that is being, or has been, suspended.
• The target of the operation is an Integrity VM guest that is being migrated.

NOTE: Dynamic addition of a DMP device as a backing store is not supported.
in guest A. The other path must not be used as a backing store by guest A, by any other guest, or by the VSP.
• Overlapping physical storage allocated for different backing store types. If a guest uses a logical volume (for example, rlvol1) as a backing store device, the disks used by the volume group that contains the logical volume (for example, /dev/vg01) cannot be used as backing stores. You can use the ioscan command to detect these conflicts.
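The ioscan-based check above is manual, but the core of a conflict test can be sketched in portable shell. Everything below is a hypothetical illustration, not an HP tool: the device lists are hard-coded examples, where on a real VSP they would come from the guest configurations and from vgdisplay -v output for the volume group.

```shell
#!/bin/sh
# Sketch: report any device that appears both as an existing backing
# store and among the physical disks behind a volume group.
overlap() {
    # $1 = backing-store device list, $2 = volume-group disk list
    for d in $2; do
        for b in $1; do
            if [ "$d" = "$b" ]; then
                # this disk is claimed twice -- a backing-store conflict
                printf '%s\n' "$d"
            fi
        done
    done
}

# Example: disk7 backs a guest directly AND belongs to the volume group.
overlap "/dev/rdisk/disk4 /dev/rdisk/disk7" "/dev/rdisk/disk7 /dev/rdisk/disk9"
# prints /dev/rdisk/disk7
```

Any device printed by the check must be removed from one of the two roles before the guest configuration is safe.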
Table 36 (page 250) lists the options that can be used with the hpvmdevmgmt command.

Table 36 Options to the hpvmdevmgmt command

Option                                                       Description
-l {server|rdev|gdev}:entry_name:attr:attr_name=attr_value   Lists an entry. To list all entries, enter the following command:
                                                             # hpvmdevmgmt -l all
-v                                                           Displays the version number of the hpvmdevmgmt output format. The version number is followed by the display specified by other options.
-V                                                           Increases the amount of information displayed (verbose mode).
# hpvmdevmgmt -m gdev:/var/opt/hpvm/ISO-images/hpux/:attr:SHARE=YES
# hpvmmodify -P host1 -a dvd:avio_stor::null:/var/opt/hpvm/ISO-images/hpux/
# hpvmmodify -P host2 -a dvd:svio_stor::null:/var/opt/hpvm/ISO-images/hpux/

Virtual DVDs and virtual network devices can be shared. DVDs are not shareable unless you specify otherwise. Sharing of virtual devices or hardware backing stores must be planned carefully to prevent data corruption.
13.19.3 Inspecting and editing the repair script

The hpvmdevmgmt -r report and repair-script function might identify one or more new pathnames for disks whose old pathnames no longer exist. The repair script performs that reassignment using the hpvmdevmgmt -n command. In general, you must inspect and edit the script before running it for the following reasons:
• All replace commands (hpvmdevmgmt -n) in the script are commented out.
Table 37 Attributes changed dynamically (continued)

Attribute                          vPars     VMs
Storage                            Yes       Yes
• Adding or removing storage to or from a vPar/VM.
NOTE: Depending on the type of storage being used, additional steps may be required. See Section 6.4.1.5 (page 73).

Migration                          1. No     1. Yes
1. Migrating online.               2. Yes    2. Yes
2. Migrating offline.

NOTE: Before you add or remove memory, networking, or storage from a vPar or a VM, ensure you know whether further action is required on the vPar or VM.
Use ^ and v to change option(s). Use Enter to select an option
Loading: EFI Shell [Built-in]
EFI Shell version 1.10
14 Managing vPars and VMs using GUI

There are multiple user-friendly GUI tools to manage vPars and VMs. This chapter describes how you can manage vPars or VM guests using GUI tools such as VSMgr and HP Matrix OE.

14.1 Managing VMs with VSMgr

HP Integrity Virtual Server Manager is the GUI that you can use from your browser to manage Integrity VM resources.
For more information about HP Infrastructure Orchestration and CloudSystem Matrix for HP-UX, see www.hp.com/go/cloudsystem. 14.2.2 Managing vPars and Integrity VMs from HP Matrix Operating Environment Logical Server Management A logical server is a set of configuration information that you create, activate, and move across physical servers and VMs.
For information about creating SLVM volume groups, see SLVM Online Volume Reconfiguration white paper at http://www.hp.com/go/hpux-LVM-VxVM-docs. 2. Add SLVM volume groups into the device database using the hpvmdevmgmt command. For each SLVM volume group you add to the device management database, set the device attribute VIRTPTYPE to container_volume_SLVM, with the PRESERVE=YES attribute setting. For example: # hpvmdevmgmt -a gdev:/dev/slvm_v22:attr:VIRTPTYPE=container_volume_SLVM,PRESERVE=YES 3.
cluster is reconfigured, or the VSP system is rebooted. You must ensure that all SLVM volume groups are activated after a VSP reboot or Serviceguard cluster reconfiguration.

14.4 Matrix OE troubleshooting

This section lists some common CLI commands that help when troubleshooting issues encountered when using vPars and Integrity VM with Matrix OE.

14.4.1 Adding and removing devices

Most VSP devices are added to the vPars and Integrity VM device database automatically.
# hpvmmodify -P vmname -x runnable_status={enabled|disabled}
# hpvmmodify -P vmname -x modify_status={enabled|disabled}
# hpvmmodify -P vmname -x visible_status={enabled|disabled}
# hpvmmodify -P vmname -x register_status={enabled|disabled}

CAUTION: HP does not recommend using any of the earlier options except with extreme caution. Integrity VM commands ensure that the VM is registered on only one VSP at a time.
15 Support and other resources

15.1 Contacting HP

15.1.1 Before you contact HP

Be sure to have the following information available before you contact HP:
• Technical support registration number (if applicable)
• Product serial number
• Product model name and number
• Product identification number
• Applicable error message
• Add-on boards or hardware
• Third-party hardware or software
• Operating system type and revision level
integrated with HP Systems Insight Manager. A dedicated server is recommended to host both HP Systems Insight Manager and HP Insight Remote Support Advanced. Details for both versions are available at: http://www.hp.com/go/insightremotesupport To download the software, go to Software Depot: http://www.software.hp.com Select Insight Remote Support from the menu on the right. NOTE: HP recommends using Insight Remote Support on the VSP system.
%, $, or #       A percent sign represents the C shell system prompt. A dollar sign represents the system prompt for the Bourne, Korn, and POSIX shells. A number sign represents the superuser prompt.
audit(5)         A manpage. The manpage name is audit, and it is located in Section 5.
Command          A command name or qualified command phrase.
Computer output  Text displayed by the computer.
Ctrl+x           A key sequence.
16 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, please send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A Troubleshooting

A.1 Creating VMs

A.1.1 Configuration error on starting the VM

When you start the VM, the following message is displayed:

Configuration error: Device does not show up in guest.

If this message is observed:
• Verify that the path name to the file-backing store is correct and that the physical storage device is mounted.
• Verify that the size of the physical storage device is divisible by 512 bytes (for a disk device) or 2048 bytes (for a DVD device).
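The divisibility check in the second bullet can be scripted. This is a minimal sketch with assumed example sizes, not an HP-supplied tool:

```shell
#!/bin/sh
# Sketch: a backing store must be a whole number of sectors --
# 512 bytes for a disk device, 2048 bytes for a DVD device.
aligned() {
    # $1 = size in bytes, $2 = required sector size
    if [ $(($1 % $2)) -eq 0 ]; then echo yes; else echo no; fi
}

aligned 1048576 512     # yes: 1 MiB is a whole number of 512-byte sectors
aligned 1048576 2048    # yes: also usable as a DVD backing store
aligned 1000000 2048    # no: not a multiple of 2048 bytes
```

On a real VSP, the size argument would come from the actual device or file size rather than a hard-coded value.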
If you are loading the VSP drivers, the devices must show up in ioscan with device files after the VSP reboot.

Commands that operate on attachable storage devices appear to hang

Accessing some attachable devices involves multiple system calls, which together take a noticeable amount of time to complete. Commands such as hpvmcreate(1M) and hpvmmodify(1M) that operate on such devices may appear to hang; such commands usually complete in about a minute.
Online migration of guests configured with NPIV HBAs fails; error messages indicate “data put failure” and “invalid target” Online migration of a guest configured with NPIV HBAs fails with the following message: Target: dynamic IO data put failure - status 4 tag 0 length 0 depth 0 And, the target VSP syslog contains an error message from the host virtual storage driver similar to the following: HVSD: HPVM online migration error: invalid target id 0x207000c0ffda4ee1 under hba port 0x5001438002a30063 for VM
Redefining pNICs for HP-UX guests Changing the hardware address of a vswitch has the same effect as moving a network adapter from one hardware slot to another on an HP Integrity system. Similar to other HP-UX systems, the guest file /etc/rc.config.d/netconf must be modified so that INTERFACE_NAME[0] reflects the new LAN PPA assigned by the HP-UX network driver on the first guest reboot after modification. At the first reboot, the LAN interfaces configuration fails, as follows: Configure LAN interfaces .....
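As a hypothetical illustration of the netconf edit described above, the following sketch rewrites INTERFACE_NAME[0] in a scratch copy of the file. The lan0-to-lan2 renaming and the file contents are assumed examples; on a real guest you would apply the same substitution to /etc/rc.config.d/netconf itself.

```shell
#!/bin/sh
# Sketch: update INTERFACE_NAME[0] to the new LAN PPA in a scratch copy
# of netconf (values here are illustrative).
tmp=./netconf.example
printf 'INTERFACE_NAME[0]="lan0"\nIP_ADDRESS[0]="192.1.2.205"\n' > "$tmp"

# Rewrite the interface name to the PPA assigned after the vswitch change.
sed 's/^INTERFACE_NAME\[0\]="lan0"/INTERFACE_NAME[0]="lan2"/' "$tmp" > "$tmp.new"
head -1 "$tmp.new"    # prints: INTERFACE_NAME[0]="lan2"
rm -f "$tmp" "$tmp.new"
```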
Name      Number  State  Mode    PPA     MAC Address     IP Address
========  ======  =====  ======  ======  ==============  =============
localnet  21      Up     Shared  N/A     N/A
vmlan0    22      Up     Shared  lan0    0x00306ea72c0d  15.13.114.205
vmlan4    23      Up     Shared  lan4    0x00127942fce3  192.1.2.205
vmlan900  24      Up     Shared  lan900  0x00306e39815a  192.1.4.205

VLAN-Backed vswitches

To enable the VLAN-backed vswitch (VBVsw) feature, PHNE_40215 or a superseding patch must be installed on the VSP.
Use the hpvmdevinfo command to display the hardware device mapping between vPar or VM and the VSP.
A.5.2 Integrity VM and vPar CLI commands experience poor performance when there are numerous devices on the VSP

Commands that modify the vPar or VM configuration, such as vparmodify, hpvmmodify, hpvmcreate, and hpvmclone, experience slow performance when there are numerous devices available on the VSP or configured in the vPar or VM configurations. When you have a large number of devices, it is more than likely that the majority of those devices are storage devices.
After early initialization, control is passed to boot stage and the vPar takes responsibility for its resources. After this stage, a TC command will not produce a vm.core on the VSP. Relevant state information is captured in the crash dump generated by the HP-UX OS in the vPar, as part of handling the TC command. Note that HP-UX crash dump configuration must be done on the vPar to ensure that the dump is captured.
B Reporting problems with vPars and Integrity VM You can report vPars and Integrity VM defects through your support channel. Follow these instructions to collect data to submit with your problem report. 1. Run the hpvmcollect command on the VSP to gather information about the guest before modifying any guest. Preserve the state of the VSP and the vPar and VM guest to best match the environment when the VSP failed.
Table 39 Options to the hpvmcollect command on the VSP (continued) Option Description -f Forces an archive to be overwritten, if it exists, rather than renamed with an appended time stamp. -h Displays the help message for the hpvmcollect command. -l Leaves the collected information in a directory rather than in an archive file. The directory name follows the same naming convention as the archive name. -g Deletes old guest memory dump data as part of data collection.
Collecting system info .............................................. OK Collecting lan info ................................................. OK Running lanshow ..................................................... NO Collecting installed sw info ........................................ OK Collecting command logs ............................................. OK Collecting messages from vmm ........................................ OK Collecting lv info ..................................................
B.1.2 Using the hpvmcollect command on vPars or VMs To use the hpvmcollect command on the vPar and VM guest, you must first install the vPar and VM guest VirtualBase software on the vPar and VM guest (if it is not already installed) as described in Section 2.6.2 (page 35). Table 40 (page 278) lists the options that can be used with the hpvmcollect command on the guest. Table 40 Options to the hpvmcollect command on guests Option Description -c Includes the latest crash dump directory in the archive.
The collection is "//hpvmcollect_archive/host1_Sep.29.05_122453PST.tar"

B.1.3 Recommendations for using the hpvmcollect command

HP recommends that the hpvmcollect command always be used with the -a and -c options together. If required, the -n option can be used to include multiple crash dumps. Using these options ensures that all system data is collected along with the related crash dumps.

B.
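A small wrapper can enforce the recommendation above. This sketch only composes and prints the command line rather than executing it; the -P guestname form, the guest name, and the dump count are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: build the recommended hpvmcollect invocation -- always pair
# -a with -c, and add -n only when more than one crash dump is wanted.
collect_cmd() {
    # $1 = guest name (assumed -P form), $2 = number of crash dumps
    cmd="hpvmcollect -a -c"
    if [ "$2" -gt 1 ]; then cmd="$cmd -n $2"; fi
    echo "$cmd -P $1"
}

collect_cmd host1 2    # prints: hpvmcollect -a -c -n 2 -P host1
collect_cmd host1 1    # prints: hpvmcollect -a -c -P host1
```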
C Sample script for adding multiple devices

The following example provides a script that enables you to specify multiple storage devices at once for a guest.

#!/bin/ksh
# ---------------------------------------------------------------------------------------
# HP Integrity VM example script.
#
# SUMMARY:
#
#   Add disks to an Integrity VM (guest) in 'batch mode' with hpvmmodify, using AVIO.
#
# SYNOPSIS
#
#   ./thisscript [-a] -P guestname -f disklistfile [-N #] [-n #] [-t #] [-qT] [-F flags]
#   or
#   .
#   -q              Quiet mode - no display of hpvmmodify command that will run
#
#   -t targetmax    Max target value to use for -a disk:avio_stor:[b,d,targetmax]...
#                   Valid values:
#                     0        - special case: script will use full 0-127 range
#                     15...127 - script will use specified max
#                     1...14   - not valid for this script, since 0-14 is the
#                                normal default range for target values if -t
#                                is not specified.
# setup BUS,DEV,TGT for next call
TGT=$TGT+1
if [ $TGT -gt $WRKTGT ]
then
    TGT=0
    DEV=$DEV+1
fi
# Skip b,d of 0,3
if [ $BUS -eq 0 ] && [ $DEV -eq $DEVSKIP ]
then
    DEV=$DEV+1
fi
if [ $DEV -gt $DEVMAX ]
then
    DEV=0
    BUS=$BUS+1
fi
if [ $BUS -gt $BUSMAX ]
then
    # NOTE: should not be here, but error out just in case.
    echo "ERROR: Max supported bus value exceeded, no more room for another adaptor.
typeset -i XN
ADDFLAG=0
AUTOBDT=0
QUIET=0
USERDISKCNT=0
USERTGT=0
XN=$XNDEFAULT

#
# Get cmd line options
#
while getopts :aF:f:HhN:n:P:qTt: option
do
    case $option in
    a)  # add flag - do actual call to hpvmmodify
        ADDFLAG=1
        a=$a+1
        ;;
    F)  # hpvmmodify flags
        FLAGS=$OPTARG
        F=$F+1
        ;;
    f)  # disklist file
        DISKLISTFILE=$OPTARG
        f=$f+1
        ;;
    H)  # Help
        usage
        exit 0
        ;;
    h)  # help
        usage
        exit 0
        ;;
    N)  # number of disks to add from the disklistfile
        USERDISKCNT=$OPTARG
        N=$N+1
        ;;
    n)  # number of disks to add at a time
        XN=$OPTARG
        n=$
    exit 1
fi
if [ ! -s "$DISKLISTFILE" ]
then
    echo "ERROR: Disklist file: $DISKLISTFILE is a zero-length file."
    exit 1
fi
GUESTSTATUS="`hpvmstatus -P $GUESTNAME -M 2> /dev/null`"
if [ -z "$GUESTSTATUS" ]
then
    echo "ERROR: Could not find guest: $GUESTNAME"
    exit 1
fi
if [ $t -eq 1 ]
then
    if [ $USERTGT -gt 0 ] && [ $USERTGT -lt 15 ]
    then
        echo "ERROR: User specified target max (-t $USERTGT) must be 0 or in range 15...127.
ADDRSRC="-a disk:avio_stor:$BDT:disk:$DISK"
ADDCMD="$ADDCMD $ADDRSRC"
DISKIDX=$DISKIDX+1
CMDIDX=$CMDIDX+1
# Run hpvmmodify if at the add multiplier (-n) or at the last disk
if [ $CMDIDX -eq $XN ] || [ $DISKIDX -eq $DISKCNT ]
then
    # Do the hpvmmodify
    if [ $QUIET -eq 0 ]
    then
        echo "Calling: $TIMECMD $ADDCMD"
    fi
    if [ $ADDFLAG -eq 1 ]    # check for -a flag
    then
        $TIMECMD $ADDCMD
        RETVAL=$?
        if [ $RETVAL -ne 0 ]
        then
            typeset -i FINALCNT
            FINALCNT=$DISKIDX-$XN
            echo "ERROR - hpvmmodify failed.
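The bus/device/target advance logic at the heart of the sample script can be exercised on its own. This POSIX sh sketch uses small illustrative limits in place of the script's real maxima (the real script uses ksh typeset -i arithmetic and a 0-127 target range):

```shell
#!/bin/sh
# Standalone sketch of the script's bus/device/target advance logic.
# WRKTGT, DEVMAX, BUSMAX, DEVSKIP are illustrative stand-ins.
WRKTGT=1 DEVMAX=1 BUSMAX=1 DEVSKIP=3
BUS=0 DEV=0 TGT=0
i=0
while [ $i -lt 5 ]; do
    echo "$BUS,$DEV,$TGT"        # b,d,t used for the next disk
    TGT=$((TGT + 1))
    if [ $TGT -gt $WRKTGT ]; then TGT=0; DEV=$((DEV + 1)); fi
    # skip the reserved b,d of 0,3 just as the script does
    if [ $BUS -eq 0 ] && [ $DEV -eq $DEVSKIP ]; then DEV=$((DEV + 1)); fi
    if [ $DEV -gt $DEVMAX ]; then DEV=0; BUS=$((BUS + 1)); fi
    i=$((i + 1))
done
```

With these limits, targets cycle fastest, then devices, then the bus, producing 0,0,0 / 0,0,1 / 0,1,0 / 0,1,1 / 1,0,0.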
Glossary

This glossary defines the terms and abbreviations as they are used in the Integrity VM product documentation.

Accelerated Virtual Input/Output
    See AVIO.
adoptive node
    The cluster member where the package starts after it fails over.
APA
    Auto Port Aggregation. An HP-UX software product that creates link aggregates, often called "trunks," which provide a logical grouping of two or more physical ports into a single "fat pipe".
cluster
    Two or more systems configured together to host workloads. Users are unaware that more than one system is hosting the workload.
cluster member
    A cluster node that is actively participating in the Serviceguard cluster.
cluster node
    A system (VSP or guest) configured to be a part of a Serviceguard cluster.
CRA
    Critical Resources Analysis.
Deconfigured
    The term used to describe the health of a resource that has been marked as unusable by the Health Repository.
host administrator
    The system administrator. This level of privilege provides control of the VSP system and its resources, as well as creating and managing vPars/VMs.
host name
    The name of a system or partition that is running an OS instance.
host OS
    The operating system that is running on the host machine.
HP Matrix OE
    HP Matrix Operating Environment.
HP SIM
    HP System Insight Manager.
HP SMH
    System Management Homepage.
Ignite-UX
    The HP-UX Ignite server product.
PMAN
    Platform Manager. See VSP.
pNIC
    Physical network interface card.
primary node
    The cluster member on which a failed-over package was originally running.
redundancy
    A method of providing high availability that uses multiple copies of storage or network units to ensure services are always available (for example, disk mirroring).
restricted device
    A physical device that can be accessed only by the VSP system. For example, the VSP boot device should be a restricted device.
virtual machine package
    A virtual machine that is configured as a Serviceguard package.
virtual network
    A LAN that is shared by the virtual machines running on the same VSP or in the same Serviceguard cluster.
virtual switch
    See vswitch.
Virtualization Services Platform
    See VSP.
VM
    See virtual machine.
vNIC
    Virtual network interface card (NIC). The network interface that is accessed by guest applications.
vPar
    Virtual partition. A partition that is created and managed from the VSP.