Front cover
IBM BladeCenter JS23 and JS43 Implementation Guide
Featuring installation techniques for the IBM AIX, IBM i, and Linux operating systems
Showing Live Partition Mobility scenarios
Detailed coverage of AMS, IVM, and power management
Alex Zanetti de Lima, Kerry Anders, Nahman Cohen, Steven Strain, Vasfi Gucer
International Technical Support Organization IBM BladeCenter JS23 and JS43 Implementation Guide May 2009 SG24-7740-00
Note: Before using this information and the product it supports, read the information in “Notices” on page xxv. First Edition (May 2009) This edition applies to IBM BladeCenter JS23, IBM BladeCenter JS43, IBM AIX Version 6.1, IBM i 6.1, Red Hat Enterprise Linux for POWER Version 5.3, and SUSE Linux Enterprise Server 11 for POWER. © Copyright International Business Machines Corporation 2009. All rights reserved. Note to U.S. Government Users Restricted Rights: Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions.
Java, JRE, Power Management, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Excel, Microsoft, Windows Server, Windows Vista, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Preface This IBM® Redbooks® publication provides a detailed technical guide for configuring and using the IBM BladeCenter® JS23 and IBM BladeCenter JS43 servers. These IBM Power Blade servers feature the latest IBM POWER6™ processor technology. This book teaches you how to set up the latest Power Blade servers to run AIX®, i, and Linux® operating systems in the IBM BladeCenter architecture.
Nahman Cohen has been an IT manager at Memorex Telex Israel for 10 years. He has 18 years of experience in the network support and hardware fields. His areas of expertise include Windows®, Linux, Sun™ Solaris™, and networks. Steve Strain is a Software Engineer/Advisory Education Specialist for the Rochester Support Center in Rochester, MN. He is responsible for developing and delivering education based on IBM i and the POWER platforms.
Lab systems setup in support of this project required the availability of multiple IBM BladeCenter chassis, POWER blade servers, plus various adapters and access to disk storage subsystems. We are very thankful for the lab systems support we received from Ned Gamble and Erich J. Hauptli. Finally, the team would like to acknowledge the support for this project provided by Scott Vetter, ITSO System p Team Leader. Our book editor, Wade Wallace, also contributed to our production and review efforts.
2455 South Road, Poughkeepsie, NY 12601-5400
Part 1 The IBM BladeCenter JS23 and JS43 servers
This part provides general and technical descriptions of the BladeCenter products covered in this publication.
Chapter 1. Introduction to IBM BladeCenter
This chapter provides an introduction to IBM BladeCenter and the JS23 and JS43 blade servers, discusses the business benefits of blade servers in general, and has the following sections:
“Highlights of BladeCenter” on page 4
“IBM BladeCenter is the right choice, open, easy and green” on page 6
1.1 Highlights of BladeCenter Blade servers are thin servers that insert into a single rack-mounted chassis, which supplies shared power, cooling, and networking infrastructure. Each server is an independent server with its own processors, memory, storage, network controllers, operating system, and applications. Blade server design is optimized to minimize physical space.
most flexible and cost-efficient solutions for UNIX®, i and Linux deployments available in the market. Further enhanced by its ability to be installed in the same chassis with other IBM BladeCenter blade servers, the JS23 and JS43 can deliver the rapid return on investment that clients and businesses demand.
servers into a single chassis, leveraging the management, space and power savings provided by IBM BladeCenter solutions. Large or small enterprises can now consolidate their older i5/OS applications into a centralized BladeCenter environment with a choice of BladeCenter chassis and blade configurations to fit their needs. Simplify, cut costs, boost productivity, go green.
– Extract the most from your third-party management solutions by utilizing the BladeCenter Open Fabric Manager. It is collaborative, enabling you to harness the power of the industry to deliver innovation that matters. – Get flexibility from a myriad of solutions created by Blade.org.
Chapter 2. General description
The newest release of the IBM BladeCenter POWER6 processor-based blade family consists of two new models: the JS23 and JS43 Express blade servers. This chapter provides an overview of these two new blade servers and has the following sections.
2.1 Overview of the JS23 and JS43 Express blade servers The newest release of the IBM BladeCenter POWER6 processor-based blade family consists of two new models: the JS23 and JS43 Express blade servers. This new family allows processor scalability, starting with a two-processor (4-core, single-wide) blade, with the ability to upgrade to a four-processor (8-core) blade through the addition of a second blade unit, making it a double-wide package. The new blades continue to support the AIX, IBM i, and Linux operating systems.
Table 2-1 on page 11 provides a general overview of the processor features of the IBM BladeCenter JS23 and JS43.
Table 2-1 Processor features
Component: Microprocessor
JS23: Two dual-core (4-way) 64-bit POWER6 microprocessors; 4.2 GHz
JS43: Two additional dual-core (total 8-way) 64-bit POWER6 microprocessors; 4.2 GHz
Table 2-3 Storage features
Component: Storage
JS23: Support for one internal small-form-factor (SFF) Serial Attached SCSI (SAS) drive or Solid State Drive (SSD) in the base unit
JS43: Support for one additional internal SFF SAS drive or SSD in the expansion unit, for a total of two drives
Table 2-4 on page 12 provides a general overview of the virtualization features of the IBM BladeCenter JS23 and JS43.
Table 2-6 Environment considerations
Component: Environment
Electrical input: 12 V dc
Air temperature:
– Blade server on: 10° to 35°C (50° to 95°F) at altitude 0 to 914 m (3000 ft)
– Blade server on: 10° to 32°C (50° to 90°F) at altitude 914 m to 2133 m (3000 ft to 7000 ft)
– Blade server off: -40° to 60°C (-40° to 140°F)
Humidity:
– Blade server on: 8% to 80%
– Blade server off: 8% to 80%
Table 2-7 on page 13 provides a general overview of the physical characteristics of the IBM BladeCenter JS23 and JS43.
Table 2-8 on page 14 provides information on supported I/O options for the IBM BladeCenter JS23 and JS43.
Table 2-8 Supported I/O options
Component: I/O adapter card options
Up to two PCIe High Speed adapters on the JS43; only one supported on the JS23
Up to two PCIe CIOv adapters on the JS43; only one on the JS23
Table 2-9 on page 15 and Table 2-10 on page 15 provide a general overview of the integrated functions of the IBM BladeCenter JS23 and JS43.
Table 2-9 Integrated Functions
Component: Integrated functions
JS23: Two 1 Gb Ethernet controllers connected to the BladeCenter chassis fabric through the 5-port integrated Ethernet switch
JS43: Two additional 1 Gb Ethernet controllers, connecting directly to BladeCenter Ethernet switch modules
Expansion card interface
The baseboard management controller (BMC) is a flexible service processor with Intelligent Platform Management Interface (IPMI) firmware and SOL support
PCI-attached ATI™ RN 50 graphics
Table 2-11 on page 16 provides information on supported operating systems for the IBM BladeCenter JS23 and JS43.
Table 2-11 Supported operating systems
Component: Operating system
Linux: SLES 10 SP2 or later versions
Red Hat: RHEL 5.2 or later versions
Red Hat: RHEL 4.6 or later versions
AIX: 5.3.S, 6.1.F
IBM i: 6.1
– The JS23 blade server supports one 2.5 inch hard disk drive.The JS43 blade server can support up to two 2.5 inch hard disk drives. The disk drives can be either the small-form-factor (SFF) Serial Attached SCSI (SAS) or the Solid state drive (SSD). IBM Director – IBM Director is a workgroup-hardware-management tool that you can use to centrally manage the JS23 blade server and JS43 blade server, including updating the JS23 and JS43 firmware.
– Automatic service processor reset and reload recovery for service processor errors – Automatic server recovery and restart that provides automatic reboot after boot hangs or detection of checkstop conditions – Automatic server restart (ASR) – Built-in monitoring for temperature, voltage, hard disk drives, and flash drives – Checkstop analysis – Customer-upgradeable basic input/output system (BIOS) code (firmware code) – Degraded boot support (memory and microprocessors) – Extended Error Handling (EEH) for
integrated L2 cache soldered directly to the system planar board. Additionally, there is a 32 MB L3 cache that is integrated into each of the DCM modules. The JS23 is contained in a single-wide package. Table 2-12 shows the JS23 configuration options.
Table 2-12 JS23 standard configuration
7778-23X: Processor / L2/L3 / Memory / Ethernet / Disk
#7778-23X: 2-socket, 4-core, 4.2 GHz
2.4.2 Processor features The key processor features are as follows: The BladeCenter JS23 blade provides support for a 2-socket, 4-core, POWER6 4.2 GHz processor implementation. Each processor is directly mounted to the system planar board, providing multi-processing capability. Each processor core includes a 64 KB instruction cache, a 64 KB data cache, and 4 MB of L2 cache. Each DCM contains a 32 MB L3 cache. Table 2-13 shows the supported processor on a BladeCenter JS23 blade.
2.4.5 Internal disk Table 2-15 provides a list of supported disks on a BladeCenter JS23 blade. Disk drives are not required on the base offering.
Table 2-15 BladeCenter JS23 disk support
Feature Description
#8237 73 GB SAS 10K SFF hard disk drive
#8236 146 GB SAS 10K SFF hard disk drive
#8274 300 GB SAS 10K SFF hard disk drive
#8273 69 GB Solid State Disk (SSD)
2.5 Physical specifications of the BladeCenter JS43
In this section we discuss the physical specifications of the BladeCenter JS43.
Figure 2-2 on page 22 shows the physical layout of the JS43 blade Multiple Expansion Unit (MPE), including memory slots, disk, and the expansion option connectors. The MPE stacks on top of the single-wide JS23, making a double-wide blade. Each section has its own processors, memory, disk, and adapter cards. The layout identifies memory DIMM locations 9-12 and 13-16, the disk drive (SAS or SSD), and the expansion adapter card connectors.
Figure 2-2 JS43 Multiple Expansion Unit (MPE)
Table 2-17 BladeCenter JS43 processor support
Feature Description
#7778-23X plus #8446: IBM BladeCenter JS43, 8-core, 64-bit, 4.2 GHz
2.5.3 Memory features The integrated memory controller supports sixteen pluggable registered DIMMs, which must be installed in pairs. The minimum memory that can be installed is 4 GB (2x2 GB) and the maximum is 128 GB (8x16 GB). All the memory features support memory scrubbing, error correction, chipkill, and bit steering.
drives or the SSD disk units can be RAIDed; however, the drives must be of the same type. It is also preferred to have drives of the same capacity, but RAID can be performed using dissimilar capacities. If differing capacities are used, you will only have the effective capacity of the smaller drive.
Table 2-21 BladeCenter support Chassis Number of JS23 blades Number of JS43 Blades BladeCenter S chassis 6 3 BladeCenter H chassis 14 7 BladeCenter HT chassis 12 6 2.6.1 BladeCenter H IBM BladeCenter H delivers high performance, extreme reliability, and ultimate flexibility to even the most demanding IT environments.
Figure 2-3 and Figure 2-4 on page 27 display the front and rear view of an IBM BladeCenter H. Figure 2-3 Front view of BladeCenter H The key features on the front of the BladeCenter H are: A media tray at the front right, with a DVD drive, two USB v2.0 ports, and a system status LED panel. One pair of 2,900-watt power modules. An additional power module option (containing two 2,900 W power modules) is available.
Figure 2-4 Rear view of BladeCenter H The key features on the rear of the BladeCenter H are: Two hot-swap blower modules as standard Two hot-swap management module bays—with one management module as standard Four traditional fabric switch modules Four high-speed fabric switch modules The BladeCenter H chassis allows for either 14 single-slot blade servers or seven double-slot blade servers. However, you can mix different blade server models in one chassis to meet your requirements.
provides clients with easy remote management and connectivity to the BladeCenter H chassis for their critical applications. BladeCenter H does not ship standard with any I/O modules. You choose these I/O modules based on your connectivity needs. An Ethernet Switch Module (ESM) or Passthrough Module will be required in I/O module bays 1 and 2, to enable the use of both Ethernet ports on a blade server.
Feature Specification
Switch module standard: None (in standard chassis offerings)
Power supply: 2900 W AC
Number of power supplies (standard/maximum): 2 / 4 (a)
Number of blowers (standard/maximum): 2 / 2
Dimensions: Height 15.75 inch (400 mm), Width 17.40 inch (422 mm), Depth 28.00 inch (711 mm)
a. Four power supplies are required to use high-speed bays 7 to 10, and any blade server in slots 8 to 14.
2.6.2 BladeCenter S The BladeCenter S chassis is a robust and flexible physical platform.
Figure 2-5 BladeCenter S front view
The key features on the rear of the BladeCenter S are: Four hot-swap blower modules as standard. One hot-swap management-module bay with one management module as standard. Four I/O bays for standard switch modules (bays 1, 3, and 4 can be used for installing I/O modules, bay 2 is reserved for future use). One pair of 950/1450-watt power modules. An additional power module option (configured in pairs of two 950/1450 W feature 4548 power modules) is available.
depend on the I/O Expansion Card installed in the blade servers. Bay 2 is reserved for future use. The chassis does not ship with any storage modules. The BladeCenter S chassis uses either 100 to 127 v or 200 to 240 v AC power and can be attached to standard office power outlets. The BladeCenter S chassis ships standard with: One advanced management module Four blower modules Two power supply modules (one pair of 950/1450-watt power modules) Two 2.
Feature Specification Number of blowers (standard/maximum) 4/4 Dimensions Height: 12.00 inch (306.3 mm) Width: 17.50 inch (440 mm) Depth: 28.90 inch (733.4 mm) 2.6.3 BladeCenter HT The IBM BladeCenter HT is a 12-server blade chassis designed for high-density server installations, typically for telecommunications use. It offers high performance with the support of 10 Gb Ethernet installations.
Four hot-swap power-module bays (two power modules standard) New serial port for direct serial connection to installed blades Compliance with the NEBS 3 and ETSI core network specifications Figure 2-7 and Figure 2-8 on page 35 show the front and rear view of the IBM BladeCenter HT.
Figure 2-8 IBM BladeCenter HT rear view Table 2-24 lists the features of the IBM BladeCenter HT. Table 2-24 BladeCenter HT specifications Feature Specification Machine type 8740-1RY (DC) 8750-1RY (AC) Rack dimension 12U x 27.8 inches (706 mm) DVD/CD standard drive None Diskette drive None Number of blade slots 12 (30mm blade servers) Number of switch module slots 4 Chapter 2.
Feature Specification Number of high-speed switch module slots 4 Switch modules (std/max) None Number of power supplies (standard/maximum) 2 / 4a Number of blowers (standard/maximum) 4/4 Dimensions Height: 21.00 inch (528 mm) Width: 17.50 inch (440 mm) Depth: 27.8 inch (706 mm) a. Four power supplies are required to use the high-speed bays 7 to 10, and any blade servers in slots 7 to 12. The BladeCenter HT chassis allows for either 12 single-slot blade servers or six double-slot blade servers.
2.6.4 Number of IBM BladeCenter JS23 and JS43 Express in Supported Blade Center Chassis IBM BladeCenter JS23 and JS43 Express have their own power consumption characteristics. The amount of power requirements for each type of blade dictates the number of blades supported in each Blade Center chassis.
The table in this section summarizes how many JS23 and JS43 blades each chassis supports per power domain: the BladeCenter S (total of six slots, 110 VAC or 220 VAC power supplies, with two or four power supplies installed), the BladeCenter H (total of 14 slots, seven in each power domain, PD1 and PD2), and the BladeCenter HT (total of 12 slots, six in each power domain). Counts are given for three modes: Fully Redundant without Performance Reduction, Redundant with Performance Reduction, and Basic Power Mode (Maximum Power Capacity), and range from one JS23 per domain in the most constrained configuration up to three JS43 plus one JS23 per domain.
– Integration into leading workgroup and enterprise systems-management environments. – Ease of use, training, and setup. IBM Director also provides an extensible platform that supports advanced server tools that are designed to reduce the total cost of managing and supporting networked systems.
Chapter 3. Technical description of the hardware architecture
IBM BladeCenter JS23 Express is a single-wide blade, while the IBM BladeCenter JS43 Express is a double-wide blade consisting of the JS23's Base planar and a Multiple Expansion Unit (MPE) planar. The MPE planar design is similar to the Base planar, but with reduced functions. In this chapter we present the technical details of the JS23's Base planar, highlighting the differences from the MPE planar as appropriate.
“Systems management” on page 64
3.1 POWER6 processor The POWER6 processor capitalizes on the enhancements brought by the POWER5 processor. Two of the enhancements of the POWER6 processor are the ability to do processor instruction retry and alternate processor recovery. This significantly reduces exposure to both hard (logic) and soft (transient) errors in the processor core. Processor instruction retry Soft failures in the processor core are transient errors.
POWER6 processor modules on IBM BladeCenter JS23 Express and JS43 Express IBM BladeCenter JS23 Express comes with two POWER6 processor modules (4-way), and IBM BladeCenter JS43 Express comes with two additional POWER6 modules (total 8-way). Each POWER6 module is a 4-way Dual Core Module (DCM), containing two 64-bit, 2-core POWER6 processors (4.2 GHz) and one 32 MB L3 cache. Figure 3-1 shows a high-level view of the POWER6 module present in the JS23 and JS43 Express servers.
supports operands in other data types, including signed or unsigned binary fixed-point data, and signed or unsigned decimal data. DFP instructions are provided to perform arithmetic, compare, test, quantum-adjustment, conversion, and format operations on operands held in FPRs or FPR pairs. Arithmetic instructions These instructions perform addition, subtraction, multiplication, and division operations.
Enhanced SMT features To improve SMT performance for various workloads and provide robust quality of service, POWER6 provides two features: Dynamic resource balancing The objective of dynamic resource balancing is to ensure that the two threads executing on the same processor flow smoothly through the system.
existing integer and floating-point units and enables highly parallel operations, up to 16 operations in a single clock cycle. By leveraging AltiVec technology, developers can optimize applications to deliver acceleration in performance-driven, high-bandwidth computing. The AltiVec technology is not comparable to the IBM POWER6 processor implementation, which uses the Simultaneous Multithreading functionality. 3.
power savings mode is enabled, the firmware of the system continuously monitors the utilization of the system, and adjusts the CPU clock speed and voltage to provide enough power to run the current workload. The less the system is utilized, the more power savings are achieved. In addition, you can specify whether you want to favor performance or favor power when enabling dynamic power savings mode. With favor performance, the peak frequency of the processors may be greater than 100%.
3.4.1 Thermal Power Management Device (TPMD) The implementation of performance-aware power and thermal management for POWER6 processor-based systems is called the EnergyScale architecture, which meets a number of basic requirements for system-level power. IBM BladeCenter JS23 and JS43 Express implementation uses an integrated circuit called Thermal Power Management™ Device (TPMD), placed on the management card. On IBM BladeCenter JS43 Express there is only one TPMD processor, located in the Base planar.
Note: The IBM BladeCenter JS43 Express has two Service Processors, one in the Base planar, and one in the MPE planar. The Service Processor located in the MPE planar has only I/O functions, and does not provide redundancy nor backup support to the FSP in the Base planar. 3.6 Management Card The Management Card provides a mean for making the Anchor system information chip pluggable. Management Card’s plug is located on Base planar, just below the DIMMs (see Figure 3-4 on page 54).
3.7.1 Memory description of IBM BladeCenter JS23 and JS43 Express IBM BladeCenter JS23 Express has two memory channels per POWER6 processor module (4 channel total), and each memory channel connects to a memory buffer chip. This same configuration is present on the MPE planar of a IBM BladeCenter JS43 Express, for a total of 8 channels. Each memory buffer chip connects to two Registered DDIMs, giving a total of 8 DIMMs in the IBM BladeCenter JS23 Express, and 16 DIMMs in the BladeCenter JS43 Express.
2. DDIMs are to be installed in pairs. First filling BusA then BusB of each planar, as shown above: a. Base planar (P1): (C1, C3), (C6, C8), (C2, C4), (C5, C7). b. MPE planar (P2): (C1, C3), (C6, C8), (C2, C4), (C5, C7). Important: Both IBM BladeCenter JS23 and JS43 require a minimum of 4GB (2 x 2GB DIMM), and we recommend to plug them in slots P1-C1 and P1-C3 (BusA), as shown in Figure 3-3 on page 52 3. Both DDIMs in a pair must be of same size, speed, and technology.
3.7.3 Memory RAS IBM BladeCenter JS23 and JS43 Express supports Memory Scrubbing, ECC, Chipkill Correction and Bit Steering. You can find more details about these and other POWER Systems RAS technologies in the following white papers: IBM POWER Systems: Designed for Reliability. http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=SA&subtype= WH&htmlfid=POW03019USEN&attachment=POW03019USEN.PDF&appna me=STGE_PO_PO_USEN_WH IBM POWER Systems: Designed for Availability. http://www-01.ibm.
Compatible with PCI at the software layers 3.8.2 I/O Expansion Cards IBM BladeCenter JS43 Express have two HSDC 450-pin connectors, one in each planar, and two CIOv 160-pin connectors, one in each planar as well. Figure 3-4 on page 54 shows how the HSDC and CIOv cards fit together inside the Base planar of an IBM BladeCenter JS23 Express. Note: IBM BladeCenter JS23 and JS43 Express supports only Combined Form Factor (CFFEe) High Speed Daughter Cards.
FRU Name Feature Supported OS Mellanox 4X Infiniband Dual Port DDR Expansion Card 8258 AIX, Linux Qlogic 8Gb FChannel 8271 Linux Table 3-2 Supported CIOv PCI-e Expansion Cards FRU Name Feature Supported OS Emulex 8Gb Fibre Channel Expansion card 8240 AIX, Linux, IBM i QLogic 4Gb FC Expansion Card 8241 AIX, Linux, IBM i Qlogic 8Gb Fibre Channel Expansion card 8242 AIX, Linux, IBM i 3Gb SAS Passthrough Expansion Card 8246 AIX, Linux, IBM i 3.8.
3.8.4 Integrated Virtual Ethernet (IVE) IVE is the name given to the collection of hardware components (including the Host Ethernet Adapter (HEA), the software, and the hypervisor functions that provide the integrated Ethernet adapter ports with hardware assisted virtualization capabilities. The IVE was developed to meet general market requirements for better performance and better virtualization for Ethernet.
3.8.6 Serial Attached SCSI (SAS) storage subsystem IBM BladeCenter JS23 and JS43 Express uses an embedded SAS controller that operates at 32-bit PCI-X at 133MHz. Note: The SAS Drive in the JS23 Base planar is not hotpluggable. On IBM BladeCenter JS23 Express there are four SAS ports. Two of them are wired to the SAS hard drive, and the other two go to the CIOv PCI-e connector, connecting to the Blade Center SAS Switch bay 3 and bay 4, when a SAS paddle card is used in the CIOv connector.
RAID support IBM BladeCenter JS23 Express has no RAID available. IBM BladeCenter JS43 Express has support for RAID functions when there are more than one SAS disk installed in the system. If there is only one drive then there is no RAID function. For two drives in the IBM BladeCenter JS43 Express, the supported RAID functions are: RAID 0 Striping. RAID 1 Mirroring. The drives on the Base Planar and MPE planar can be either rotating hard drives (HDD) or solid state drives (SSD).
Before you can create a RAID array, you must reformat the hard disk drives so that the sector size of the drives changes from 512 MB to 528 MB. If you later decide to remove the hard disk drives, delete the RAID array before you remove the drives. If you decide to delete the RAID array and reuse the hard disk drives, you must reformat the drives so that the sector size of the drives changes from 528 MB to 512 MB.
via the BladeCenter Mid planar. Each DSM has 2 SAS expanders with each expander connecting to the 6 DASD, one DSM connects to the primary ports of the DASD while the other expander connects to the secondary port of the DASD. The A side expander of each DSM is wired to NSSM in switch bay 3 while the B side expander is wired to the NSSM in switch bay 4. Figure 3-5 on page 60 shows the supported SAS topology for the IBM BladeCenter JS23 and JS43 Express on the BCS.
Blade 1 SAS Ctrl Blade 2 SAS Ctrl Blade ... SAS Ctrl Blade 10 SAS Ctrl Blade 11 SAS Ctrl Blade 12 SAS Ctrl External SAS Ports External SAS Ports x4 DS3200 x4 BAY 3 BAY 4 x4 x4 NSSM SAS Switch NSSM SAS Switch x4 x4 SAS Tape x4 DS3200 EXP3000 x4 External SAS Ports Figure 3-6 IBM BladeCenter JS23 and JS43 Express BCH and BCHT SAS Topology 3.
Table 3-3 PowerVM editions for IBM BladeCenter JS23 and JS43 Express Description Standard Edition Enterprise Edition Maximum LPARs 10 / core 10 / core Virtual I/O server YES YES Integrated Virtualization Manager YES YES Shared Dedicated Capacity YES YES Live Partition Mobility NO YES Active Memory Sharing NO YES 3.
IBM periodically releases maintenance packages for the AIX 5L operating system. These packages are available on CD-ROM, or you can download them from: http://www.ibm.com/eserver/support/fixes/fixcentral/main/pseries/aix The Web page provides information about how to obtain the CD-ROM. You can also get individual operating system fixes and information about obtaining AIX 5L service at this site. In AIX 5L V5.
Many of the features described in this document are operating system dependent and might not be available on Linux. For more information, visit: http://www.ibm.com/systems/p/software/whitepapers/linux_overview.html 3.10.3 IBM i IBM i 6.1 is supported on both IBM BladeCenter JS23 and JS43 Express. It uses IBM PowerVM Standard Edition, which includes the POWER Hypervisor™, Micro Partitioning, and Virtual I/O server with Integrated Virtualization Manager (IVM).
The BladeCenter Web interface allows the following:
A System Administrator can easily and effectively manage up to 14 blade servers from an integrated interface.
Power the IBM BladeCenter JS23 and JS43 Express on or off.
Control over all blade servers and input/output (I/O) modules that are attached to the BladeCenter chassis, even with a mixed environment.
Manage other BladeCenter resources such as I/O modules and retrieval of system health information.
In addition to a new web interface and the ability to install the IBM Director server on AIX, Active Energy Manager leverages Director 6.
Monitoring synchronization across the cluster
Monitoring and automated response
Automatic security configuration
Management of node groups (static and dynamic)
Diagnostics tools
For more information about CSM, visit:
http://www-03.ibm.com/systems/clusters/software/csm/
https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html
Part 2 System configuration and management
Having covered the basic system information and architecture in Part 1, we expand on that to include how to get the BladeCenter JS23 and JS43 up and running the supported operating systems, and several other management-oriented topics.
Chapter 4. System planning and configuration using VIOS with IVM
This chapter describes how to perform basic system planning prior to, and configuration after, you install the Virtual Input/Output Server (VIOS). The configuration can be done by using the command line interface (CLI) or the user interface (UI). The Web browser-based UI is an integral part of the Integrated Virtualization Manager (IVM) and is included in the VIOS.
This chapter has the following sections:
“Planning considerations” on page 73
“VIOS system management using IVM” on page 83
“First VIOS login” on page 86
“First IVM connection” on page 93
“VIOS network management and setup” on page 100
“VIOS Storage Configuration and Management” on page 121
“Partition configuration for Virtual I/O Client (VIOC)” on page 144
“Console access and activating a partition” on page 166
4.1 Planning considerations When planning your system environment for an IBM BladeCenter JS23 or JS43, a complete overview of the BladeCenter, blades, network, and storage should be reviewed. Crafting an overall solution will help to eliminate expensive rework. 4.1.1 General considerations We’ll start with the general considerations.
The figure here shows, for the six blade bays of a BladeCenter S, how each blade's I/O expansion cards map to the I/O expansion bays: CFFv, StFF, SFF, and CIOv cards route to I/O expansion bays 1, 3, and 4, while CFFh high-speed daughter cards route to I/O expansion bay 2.
The companion figure shows the same mapping for the 14 blade bays of a BladeCenter H, where CFFv, StFF, SFF, and CIOv cards route to I/O expansion bays 1 through 4, and CFFh high-speed daughter cards route to the high-speed I/O expansion bays 7 and 8.
required if you desire to use advanced operations available under PowerVM Enterprise Edition, such as Live Partition Mobility (LPM) and Active Memory™ Sharing (AMS). The decision regarding whether to use a shared processor pool or dedicated processors should be made prior to configuring an LPAR. Changing from one mode to the other with the IVM UI requires the deletion of the LPAR and the creation of a new one; with the VIOS CLI, the chsyscfg command can be used instead.
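As a minimal sketch of that CLI alternative, assuming the target partition (LPAR ID 2 here, an illustrative value) is shut down first and that your VIOS level accepts the proc_mode attribute with chsyscfg:

$ chsyscfg -r prof -i "lpar_id=2,proc_mode=ded,desired_procs=2"
$ lssyscfg -r lpar --filter "\"lpar_ids=2\"" -F name,state

The lssyscfg call simply confirms the partition name and state after the change.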
Combined Form Factor horizontal (CFFh) I/O expansion cards CIOv adapter card ports always connect to bays three and four of a BladeCenter chassis when installed in an IBM BladeCenter JS23 or JS43. Figure 4-3 on page 77 shows an Active SAS Pass-through “paddle” expansion card in CIOv form factor. A QLogic 4 Gb Fibre Channel HBA, and QLogic and Emulex 8 Gb Fibre Channel HBAs, are also available in the same form factor.
Figure 4-4 Qlogic Ethernet and 4 Gb Fibre Channel “combo card” (CFFh)
Together with an installed Qlogic Ethernet and 4 Gb Fibre Channel combo card, it is also possible to install the CIOv I/O expansion card. Using a BladeCenter H with a JS23 combination gives, in addition to the two onboard network ports, six more I/O ports. These six additional ports are four Fibre Channel ports and two 1 Gb Ethernet ports.
When JS23/43 blades with CFFh cards are installed in a BladeCenter H or HT, the cards connect to the high-speed bays 7, 8, 9, and 10, depending on the ports on the card. These module bays have a horizontal orientation. (The standard module bays have a vertical orientation.) When JS23/43 blades with a supported CFFh card are installed in a BladeCenter S, the cards are connected to bay 2. Some CFFh cards utilize the high-speed bays but use standard modules for connectivity.
JS23/JS43 storage There are currently four different types of storage available: Internal 73GB or 146GB SAS Hard Disk Drive (HDD) disk storage Internal 73GB SAS Solid State Drive (SSD) storage External SAS/SATA disk storage External Fibre Channel storage There is not a hardware initiator or TOE card available for the IBM BladeCenter JS23 or JS43 for iSCSI storage system attachment. Software initiators are available for AIX and Linux (no VIOS support).
IBM Total Storage DS4000™ series IBM Total Storage DS3000™ series IBM Total Storage N™ series The Virtual I/O Server data sheet provides an overview of supported storage subsystems and the failover driver that is supported with the subsystem. The data sheet can be found at: http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html Verify that your intended operating system supports these storage subsystems.
which components supported by the blade are supported by the Virtual IO Server as well. The data sheet can be found at: http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html 5. Check the support matrix of the storage subsystem of your choice. In the case of Fibre Channel attached storage, verify the SAN switch support matrix. The following list points to the matrices of IBM storage products. The DS8000 interoperability matrix can be found at: http://www.ibm.
The cabling is described in the product documentation of the storage subsystem. Verify which failover drivers are supported by the storage subsystem. In the product documentation, check the recommended zoning configuration. 8. Use the Virtual I/O Server data sheet again to check which failover drivers are included in the Virtual I/O Server and which failover drivers can be installed. Note: The System Storage™ Interoperation Center (SSIC) helps to identify supported storage environments.
You can use either interface to create, delete, and update the logical partitions and perform dynamic operations on LPARs (DLPAR) including the VIOS itself. 4.2.1 VIOS installation considerations The Virtual I/O Server installation is performed like a native install of AIX.
Figure 4-6 IVM navigation and work areas The login to the UI is described in 4.4.1, “Connecting to IVM” on page 93 4.2.3 VIOS/IVM command line interface The command line interface (CLI) requires more experience to master than the GUI, but it offers more possibilities for tuning the partition’s definitions. It can also be automated through the use of scripts.
and IVM command help by using the --help flag. Detailed command help can be shown using the man command. Note: Not all IVM commands will be displayed using the help command. For a complete listing of these commands, refer to Virtual I/O Server and Integrated Virtualization Manager Command Reference, which is available from: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphcg/iphcg.pdf
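For example, both help forms look like the following for the lssyscfg command (any other VIOS or IVM command name can be substituted):

$ lssyscfg --help
$ man lssyscfg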
[Accept (a)] | Decline (d) | View Terms (v) After you enter a, enter the license -accept command as shown in Example 4-2. Example 4-2 The license command $ license -accept The status of the license can be verified by using the license command with no flags, as shown in Example 4-3. Example 4-3 The license status $ license The license has been accepted en_US Apr 2 2009, 12:33:16 10(padmin) 4.3.3 Initial network setup IVM requires a valid network configuration to be accessed by a Web browser.
Ethernet adapters:
ent0      Available        Logical Host Ethernet Port (lp-hea)
ent1      Available        Logical Host Ethernet Port (lp-hea)
ent2      Available        Virtual I/O Ethernet Adapter (l-lan)
ent3      Available        Virtual I/O Ethernet Adapter (l-lan)
ent4      Available        Virtual I/O Ethernet Adapter (l-lan)
ent5      Available        Virtual I/O Ethernet Adapter (l-lan)
ent6      Available 05-20  Gigabit Ethernet-SX PCI-X Adapter (14106703)
ent7      Available 05-21  Gigabit Ethernet-SX PCI-X Adapter (14106703)
ibmvmc0   Available        Virtual Management Channel
$
Set Date and TimeZone
Change Passwords
Set System Security
VIOS TCP/IP Configuration
Install and Update Software
Storage Management
Devices
Electronic Service Agent

Esc+1=Help    Esc+2=Refresh    Esc+3=Cancel    F8=Image
F9=Shell      F10=Exit         Enter=Do

By selecting VIOS TCP/IP Configuration, you will be presented with a list of available network interfaces, as shown in Example 4-7.
Select the desired interface. On the next screen, shown in Example 4-8, you enter the TCP/IP configuration by pressing the Enter key. This completes the initial TCP/IP configuration of the VIOS. Example 4-8 cfgassist TCP/IP interface configuration entry page VIOS TCP/IP Configuration Type or select values in entry fields. Press Enter AFTER making all desired changes.
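As an alternative to the cfgassist menus, the same initial setup can be performed with a single VIOS mktcpip command. A minimal sketch, using the illustrative host name and addresses that appear in the surrounding examples:

$ mktcpip -hostname js23-vios -inetaddr 172.16.1.200 -interface en6 \
-netmask 255.255.255.0 -gateway 172.16.1.1 -nsrvaddr 172.16.1.199 \
-nsrvdomain customer.com -start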
Example 4-9 lstcpip command sample output
$ lstcpip -interfaces
Name Address Netmask State MAC
The output lists the interfaces en1, en3, en4, en5, et1, et3, et4, et5, en6, and et6; the configured interface shows address 172.16.1.200 with netmask 255.255.255.0.
IPv4 address = 172.16.1.200
Network Mask = 255.255.255.0
State = detach
attributes:
en1
State = down
attributes:
en2
State = down
attributes:
en3
State = down
attributes:
en4
State = down
attributes:
en5
State = down
attributes:
et0
State = detach
attributes:
et1
State = down
attributes:
et2
State = down
attributes:
et3
State = down
attributes:
et4
State = down
attributes:
et5
State = down
attributes:
en6
IPv4 address = 172.16.1.200
Network Mask = 255.255.255.
attributes:
en7
State = down
attributes:
en8
State = down
attributes:
et7
State = down
attributes:
et8
State = down
Static Routes:
Route 1:
hopcount = 0
default gateway = 172.16.1.1
DNS information:
nameserver 172.16.1.199
domain customer.com
To remove all or part of the TCP/IP configuration, use the rmtcpip command.
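A minimal sketch of both forms of rmtcpip (the -f flag skips the confirmation prompt):

$ rmtcpip -f -interface en6
$ rmtcpip -f -all

The first form unconfigures a single interface; the second removes the entire TCP/IP configuration of the VIOS.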
A Welcome window that contains the login and password prompts opens, as shown in Figure 4-7. The default user ID is padmin, and the password is the one you defined during the VIOS installation. Figure 4-7 The Welcome window The first connection to the IVM UI will display the guided setup window as shown in Figure 4-8 on page 95. Expanding the sections on the window provides additional information about configuration and management tasks, with links directly to some of the functions.
Figure 4-8 Guided Setup window 4.4.2 Verify and modify VIOS partition memory and processors After the initial installation of the VIOS, there is only one LPAR, the VIOS, on the system with the following characteristics: The ID is 1. The name is equal to the system’s serial number. The state is Running. The allocated memory is between 1GB and one-eighth of the installed system memory.
Figure 4-9 View/Modify Partitions window Administrators can change properties of the VIOS LPAR, including memory or processing units allocation by using the IVM UI. From the View/Modify Partitions window, click the link in the Name column that corresponds to ID 1 (The VIOS will always be ID or LPAR 1). The Partition Properties window will be displayed in a new window, as shown in Figure 4-10 on page 97. The name of the VIOS can be changed from the General tab, if desired.
Figure 4-10 Partition Properties, General tab Figure 4-11 shows the Memory tab.
Figure 4-11 Partition Properties, Memory tab The default memory configuration for the VIOS LPAR is 1/8 of system memory, with a minimum value of 1 GB. You may need to increase memory values if it defaulted to 1 GB and you are using additional expansion cards or combinations of expansion cards and EtherChannel configurations, or you plan to have an LPAR supporting an IBM i partition. The Assigned memory value should not be reduced below the default minimum of 1 GB.
Figure 4-12 Partition Properties, Processing tab Processing unit allocations for the VIOS are recommended to remain at the install defaults. But you should monitor utilization and adjust the Assigned amount, as required. The Virtual Processor default settings should not be changed. The lshwres and chsyscfg commands are used to display and change memory and processor values, as shown in Example 4-10.
$ lshwres -r proc --level lpar --filter "\"lpar_ids=1\"" -F curr_proc_units
0.40
$ chsyscfg -r prof -i "lpar_id=1,desired_proc_units=0.5"
$ lshwres -r proc --level lpar --filter "\"lpar_ids=1\"" -F curr_proc_units
0.50
$ lshwres -r proc --level lpar --filter "\"lpar_ids=1\"" -F curr_procs
4
$ chsyscfg -r prof -i "lpar_id=1,desired_procs=3"
$ lshwres -r proc --level lpar --filter "\"lpar_ids=1\"" -F curr_procs
3
The Ethernet tabs are discussed in 4.5.2, “Virtual Ethernet Adapters and SEA” on page 103.
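Memory can be displayed and changed with the same command pair. A minimal sketch with illustrative values (memory amounts are in MB):

$ lshwres -r mem --level lpar --filter "\"lpar_ids=1\"" -F curr_mem
1024
$ chsyscfg -r prof -i "lpar_id=1,desired_mem=2048"
$ lshwres -r mem --level lpar --filter "\"lpar_ids=1\"" -F curr_mem
2048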
done during the LPAR creation process. Refer to 3.8.4, “Integrated Virtual Ethernet (IVE)” on page 56 for additional technical details about the HEA. You configure the HEA port mode by selecting View/Modify Host Ethernet Adapters from the navigation area. This displays the UI window, as shown in Figure 4-13 on page 101. Figure 4-13 View/Modify Host Ethernet Adapters window All four HEA ports on a JS43 are shown. The default configuration is port sharing with 14 logical connections available per port pair.
Figure 4-14 HEA Port Properties You can display a list of connected partitions (if any) and MAC addresses by selecting the Connected Partitions tab, as shown in Figure 4-15 on page 103.
Figure 4-15 HEA Port Properties, Connected Partitions 4.5.2 Virtual Ethernet Adapters and SEA Virtual adapters exist in the hypervisor and allow LPARs to communicate with each other without the need for a physical network. They can be created for each partition by the hypervisor. Four virtual Ethernet adapters are created by default on the VIOS, and two each for every logical partition. Additional virtual adapters can be created on both the VIOS and logical partitions.
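These adapters can also be listed from the CLI with the same lshwres form used later in this chapter. A sketch whose output reflects the default slot and VLAN assignments shown in this section:

$ lshwres -r virtualio --rsubtype eth --level lpar --filter "\"lpar_ids=1\"" -F slot_num,port_vlan_id
3,1
4,2
5,3
6,4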
allows the port to operate in promiscuous mode. When this mode is enabled, there is only one logical port available and it is assigned to the VIOS LPAR. Figure 4-16 HEA port setting for Ethernet bridging Physical Ethernet ports on expansion cards do not require configuration prior to being used in an SEA environment. The SEA adapter is configured by selecting the View/Modify Virtual Ethernet link in the navigation area.
Figure 4-17 View/Modify Virtual Ethernet showing Initialize Virtual Ethernet option Figure 4-18 on page 106 shows the four virtual Ethernet adapters that are created by default on the VIOS. Chapter 4.
Figure 4-18 View/Modify Virtual Ethernet window Use the Virtual Ethernet Bridge tab to display the virtual to physical options for creating an SEA, as shown in Figure 4-19 on page 107. The drop-down box in the Physical Adapter column lists the adapters that are available for creating the SEA. Notes: A physical adapter can only be used to create one SEA in combination with a virtual adapter.
Figure 4-19 View/Modify Virtual Ethernet Bridge tab Figure 4-20 on page 108 shows a physical adapter selection.
Figure 4-20 Physical adapter selection for SEA creation Figure 4-21 on page 109 indicates the successful creation of the SEA.
Figure 4-21 Successful SEA creation result 4.5.3 Physical adapters With the IBM BladeCenter JS23 or JS43, you have the option to assign physical hardware adapters to an LPAR. From a network perspective, only Ethernet expansion cards can be reassigned to an LPAR. The HEA adapter ports cannot be assigned to a logical partition. Note: When using IBM i and shared memory partitions, the resources must be purely virtual.
Figure 4-22 View/Modify Physical Adapters window By default, all physical adapters are owned by the VIOS LPAR. By using the Modify Partition Assignment button, you can change the assigned partition. In the example shown in Figure 4-23 on page 111, the Gigabit Ethernet expansion card ports are being reassigned to partition 2.
Figure 4-23 Physical Adapter assignment to new partition Figure 4-24 on page 112 shows the change in partition ownership.
Figure 4-24 View/Modify Physical Adapter window showing change of ownership of Gigabit Ethernet Adapter Example 4-11 shows the changes in adapter availability in an AIX logical partition, starting with the original virtual Ethernet adapter through the addition of the two physical ports from an IBM BladeCenter JS23 or JS43 expansion card.
ent2 Available 01-21 Gigabit Ethernet-SX PCI-X Adapter (14106703)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Available Virtual SCSI Client Adapter
#
Note: When removing a physical adapter from an LPAR, you may have to remove a PCI bus device with the rmdev command from the LPAR’s command line. The IVM interface will display an error message with text indicating the device that must be removed before the change in LPAR assignment can be performed.
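A minimal sketch of that cleanup on the AIX client (the device names are illustrative; the IVM error text identifies the actual device to remove):

# lsdev -l ent1 -F parent
pci0
# rmdev -dl pci0 -R

The -R flag removes the named device and its child devices recursively, and -d deletes their definitions from the configuration database.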
http://www.ibm.com/developerworks/power/library/l-bladenetconf/index.html?ca=drs
VLAN configuration of BladeCenter Ethernet switch modules or other Ethernet switches external to the BladeCenter is not covered in this document. Creating new VIOS virtual Ethernet adapters The four default virtual adapters that are created by the VIO Server during installation cannot be modified for VLAN tagging use. Therefore, new virtual adapters must be created using the CLI with the desired VLAN information.
lpar_name=js23-vios,lpar_id=1,slot_num=6,state=1,ieee_virtual_eth=0,port_vlan_id=4,addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=067E5E2D8C06

From the View/Modify Virtual Ethernet view in IVM, as shown in Figure 4-25, the four default VIO Server Ethernet adapters are displayed.

Figure 4-25 Default VIO Server virtual Ethernet Adapters shown by IVM

Note: Figure 4-25 shows additional partitions. Partition creation is not covered until 4.
Example 4-14 Using chhwres command to create new VIOS virtual Ethernet adapter
$ chhwres -r virtualio --rsubtype eth -o a --id 1 -s 15 -a port_vlan_id=555,ieee_virtual_eth=1,\"addl_vlan_ids=20,30,40\",is_trunk=1,trunk_priority=1

The flags and their attributes are:
-r virtualio --rsubtype eth    type of hardware resource to change
-o a                           perform an add operation
--id 1                         the LPAR ID number
-s 15                          slot number to use
-a                             attributes to add:
– port_vlan_id=555             PVID
– ieee_virtual_eth=1           turns on IEEE 802.1Q VLAN tagging
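The virtual Ethernet adapter definitions can also be listed from the CLI with the lshwres command; the invocation below is a sketch using the standard IVM listing form:

$ lshwres -r virtualio --rsubtype eth --level lpar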
lpar_name=js23-vios,lpar_id=1,slot_num=3,state=1,ieee_virtual_eth=0,port_vlan_id=1,addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=067E5E2D8C03
lpar_name=js23-vios,lpar_id=1,slot_num=4,state=1,ieee_virtual_eth=0,port_vlan_id=2,addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=067E5E2D8C04
lpar_name=js23-vios,lpar_id=1,slot_num=5,state=1,ieee_virtual_eth=0,port_vlan_id=3,addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=067E5E2D8C05
lpar_name=js23-vios,lpar_id=1,slot_num=6,state=1,ieee_virtual_eth=0,port_vlan_id=4,addl_vlan_ids=none,is_trunk=1,trunk_priority=1,is_required=0,mac_addr=067E5E2D8C06
We now create a SEA or bridge between this new virtual adapter and a physical Ethernet port, in this case an HEA adapter, by first clicking the Virtual Ethernet Bridge tab. From the virtual Ethernet list we choose 555(20,30,40) and map it to ent1, as shown in Figure 4-27 on page 118. Click OK to complete the assignment and the creation of the SEA.

Figure 4-27 Creating a SEA using an IEEE 802.1Q virtual Ethernet adapter
ent5    Available    Virtual I/O Ethernet Adapter (l-lan)
ent6    Available    Gigabit Ethernet-SX PCI-X Adapter
ent7    Available    Gigabit Ethernet-SX PCI-X Adapter
ent8    Available    Shared Ethernet Adapter
ent9    Available    Virtual I/O Ethernet Adapter (l-lan)
ent10   Available    Shared Ethernet Adapter

With the successful creation of the SEA, we can use the entstat command on the VIO Server to get additional details of the components of the SEA, as shown in Example 4-18.
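As a sketch of that verification (assuming ent10 is the newly created SEA from the listing above), entstat can be invoked against the SEA device; the -all flag includes the statistics of the underlying virtual and physical adapters that make up the SEA:

$ entstat -all ent10 | more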
Figure 4-28 VIOC adapter to VIOS virtual Ethernet mapping

VIO Client verification and configuration
If the partition is not active, the new adapter will be discovered upon activation of the LPAR. If the partition is already active, you may need to take additional steps, such as running the cfgmgr command in AIX. IBM i LPARs with Autoconfig enabled will automatically configure the new adapter.
VLAN details of ent1 can be displayed using the entstat command on the VIO Client (assumes an AIX client) as shown in Example 4-20 on page 121.

Example 4-20 entstat command from VIO Client showing details of new virtual Ethernet
# entstat -d ent1 |grep VLAN
Invalid VLAN ID Packets: 0
Port VLAN ID: 20
VLAN Tag IDs: None

In this AIX LPAR example, the interface en1 on VLAN 20 can now be configured with the desired TCP/IP properties.
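As an illustration, that configuration could be done from the AIX command line with the mktcpip command; the host name and addresses below are hypothetical:

# mktcpip -h aixlpar1 -a 192.168.20.10 -m 255.255.255.0 -i en1 -g 192.168.20.1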
To work with VIOS storage, click View/Modify Virtual Storage in the navigation area of the IVM as shown in Figure 4-29.

Figure 4-29 View and modify virtual storage

4.6.1 Physical volumes
Physical volumes are the hard drives that are available to the VIOS. They can be installed locally in the IBM BladeCenter JS23 or JS43 blades, SAS drives available from an IBM BladeCenter S chassis, or LUNs available from a Fibre Channel storage area network subsystem. A physical volume is shown as hdisk0, hdisk1, and so on.
in Figure 4-30 on page 123. This displays the list of the physical volumes available to the VIOS. Figure 4-30 Physical volumes shown in IVM Similar information can be retrieved on the Virtual I/O Server CLI by using the lsdev and lspv commands. Example 4-21 shows the output of the lsdev -type disk command.
hdisk8     Available    IBM MPIO FC 1750
hdisk9     Available    IBM MPIO FC 1750
hdisk10    Available    IBM MPIO FC 1750
hdisk11    Available    IBM MPIO FC 1750
hdisk12    Available    IBM MPIO FC 1750
hdisk13    Available    IBM MPIO FC 1750
hdisk14    Available    IBM MPIO FC 1750

Example 4-22 shows the output of the lspv -size command.
Creating a new storage pool To create a new storage pool, click the Storage Pools tab from the View/Modify Virtual Storage window. Figure 4-31 on page 125 shows a list of all available storage pools. Figure 4-31 Storage pools shown in IVM Click Create Storage Pool... to create a new storage pool. A dialog opens that guides you through the setup of the storage pool. Specify a name (for example, SP-Media-Lib) that will be used for the storage pool.
Figure 4-32 Create new storage pool Figure 4-33 shows the new storage pool. Figure 4-33 Newly created storage pool shown in IVM Deleting or reducing a storage pool To delete or reduce a storage pool, start from the Storage Pool tab in the Modify Virtual Storage window. Select the storage pool you want to delete or reduce. Click Reduce from the More Tasks drop-down box as shown in Figure 4-34. A dialog opens that guides you through the modification of the storage pool.
Figure 4-34 Reduce or delete a storage pool Select the physical volumes that you want to remove from the storage pool. The storage pool will be deleted when all physical volumes that are assigned to the storage pool are removed. Click OK, as shown in Figure 4-35. Figure 4-35 Delete storage pool 4.6.3 Virtual disks Virtual disks are created in storage pools. After they are assigned to a logical partition, they are seen as virtual SCSI disk drives by the LPAR.
You can create virtual disks from the View/Modify Virtual Storage window by selecting the Virtual Disks tab, as described in the following section. The Create Partition Wizard, as described in 4.7.2, “Partition name and environment” on page 145, can also be used to create virtual disks. Both methods require free space in a storage pool. Creating virtual disks To create a logical volume, a storage pool must be available. Refer to 4.6.
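Storage pools and virtual disks can also be created from the VIOS CLI. The following is a minimal sketch; the pool name, backing disk, size, and virtual disk name are all examples:

$ mksp -f SP-AIX-Disks hdisk5
$ mkbdsp -sp SP-AIX-Disks 10G -bd vdisk_aix1

mksp creates a logical volume based storage pool on hdisk5, and mkbdsp carves a 10 GB virtual disk named vdisk_aix1 out of that pool.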
Figure 4-37 Virtual disk settings

The newly created virtual disk appears in the list, as shown in Figure 4-38.

Figure 4-38 The newly created virtual disk

The size of the virtual disk can be extended, as described in the following section.

Extending a virtual disk
You can extend a virtual disk as long as enough free space is available in the storage pool. To extend a virtual disk, select the check box of the virtual disk you plan to extend. Then select Extend from the More Tasks...
Figure 4-39 Extend virtual disk

Specify the amount of space by which the virtual disk will be extended, then click OK, as shown in Figure 4-40. If the storage pool does not have enough free space, it can be extended from the Storage Pools tab.

Note: When you attempt to extend a virtual disk on a running partition, a warning message will be generated, alerting the administrator. To continue, select the Force extend on running partition check box and click the OK button again.
Figure 4-41 Extended virtual disk The next section explains how to delete a virtual disk. Deleting virtual disks A virtual disk that is assigned to a partition must have that assignment removed before the virtual disk can be deleted. Note: When you attempt to delete a virtual disk on a running partition, a warning message will be generated, alerting the administrator. To continue, select the Force device removal from a running partition check box and click the OK button again.
Figure 4-42 Delete virtual disk Confirm the deletion of the virtual disk by clicking OK, as shown in Figure 4-43. Figure 4-43 Confirm deletion of the virtual disk The virtual disk will be deleted and the occupied space in the storage pool will become available. 4.6.4 Optical and Tape devices Optical devices are CD or DVD drives.
Virtual optical devices

Physical tape devices must be Serial Attached SCSI (SAS).

Physical optical devices
Physical optical devices are the CD or DVD drives installed in the media tray of an IBM BladeCenter. Each type of BladeCenter chassis is delivered with a CD or a DVD drive. The other physical optical device that can be used is remote media. An ISO image or a CD or DVD in your laptop or desktop can be assigned to the blade.
over the remote control interface of the Advanced Management Module (AMM) in the BladeCenter chassis. Note: The remote control function for the IBM BladeCenter JS23 or JS43 is only available to the blade slot that has the media tray assignment. To change the assignment of a physical optical device, select the check box of the device to be changed and click Modify partition assignment. A dialog opens that guides you through the assignment change.
Figure 4-45 Change physical optical device assignment

Virtual optical devices
Virtual optical devices were introduced with Virtual I/O Server V1.5. Together with the Media Library of a Virtual I/O Server, this device is able to virtualize CD or DVD images that are stored in the VIOS media library to one or more logical partitions. Before a virtual optical device can be used, you must configure a media library.

Creating a media library
To set up a media library, a storage pool must be available. Refer to 4.
Figure 4-46 Create media library 2. Select an available storage pool and the amount of storage space that will be used from this storage pool to create the media library, and then click OK as shown in Figure 4-47 on page 137.
Figure 4-47 Media library size and storage pool

Depending on the size of the media library, the creation time will vary. After the media library is successfully created, the current view in the View/Modify Virtual Storage window will change, showing Media Library options. The size of the media library can be increased at any time by clicking the Extend Library button.
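The media library can also be created and populated from the VIOS CLI with the mkrep and mkvopt commands; a sketch, with the pool, size, and media name assumed:

$ mkrep -sp SP-Media-Lib -size 10G
$ mkvopt -name AIX61_DVD1 -dev cd0 -ro

mkrep creates the virtual media repository in the given storage pool, and mkvopt copies the disc in the physical optical drive cd0 into the library as read-only media.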
To add new media in the media library, click Add Media... as shown in Figure 4-48 on page 138.

Figure 4-48 Add media to media library

There are four options to create new media:
Upload media
Add existing file
Import from physical optical device
Create blank media

The Upload media option allows you to transfer files or ISO images from a workstation directly to the media library. There is a limitation in the file size of 2 GB for this option.
Note: Our testing revealed that the local CD or DVD drive in the media tray of the BladeCenter chassis is a faster option compared to the remote media option with a physical CD or DVD drive.

The Create blank media option allows you to create blank media that may be written to from an LPAR. Figure 4-49 shows an example that uses Import from physical optical device to create the new media. Click OK to start the copy task.

Note: Do not use spaces in the name of the new media.
Figure 4-50 Performing task Click the Monitor Task link from the Navigation area to verify the completion of the task. Monitor Tasks contains a list of events and the status, either running, successful, or failed. Note: An alternative way to monitor the process of creating new media is to review the list under the Optical Devices tab, as shown in Figure 4-51 on page 141. If your new media is not listed here, click the Refresh button.
Figure 4-51 Newly created media with the copy operation in progress Modifying media assignment to virtual optical devices in logical partitions Media can be assigned from the Optical/Tape tab in the View/Modify Virtual Storage window, when using the Create Partition wizard or from the Partition Properties window. The next step will be to modify the partition assignment of the media in the media library.
assigned to two LPARs. Select the check box for the desired media and click the Modify partition assignment button.

Figure 4-52 Modify partition assignment

As shown in Figure 4-53 on page 143, no LPARs are assigned to the media AIX6.1_install_disk_1. Next, LPARs JS23DMlpar4 and JS23 DPlpar5 will be assigned the same media by selecting the check box next to the logical partitions. Choose the Media type Read only or Read/Write and click OK. Only Read only media can be assigned to more than one LPAR.
Figure 4-53 Modify media partition assignment Click OK to return to the view of the optical devices. Notice that the updated table shown in Figure 4-54 on page 144 now contains the LPARs JS23DMlpar4 and JS23 DPlpar5 in the Assigned Partition column as assigned partitions for the media AIX6.1_install_disk_1.
Figure 4-54 New assigned media to partitions

Media can be removed from a partition by following the same procedure and deselecting the media that is assigned to the partition.

4.7 Partition configuration for Virtual I/O Client (VIOC)
With networking and storage defined, you can now create additional VIOC LPARs for the installation of additional supported operating systems.

4.7.
– IBM BladeCenter JS23 or JS43 should be at the latest system firmware.
All I/O must be virtual to the LPAR:
– SEA adapters are required. No HEA logical ports can be assigned.
– No virtual optical drives can be assigned.
– No physical adapters can be assigned.
SAN storage properly configured for sharing between the two Virtual I/O Servers.
Processor compatibility modes between source and target systems.
Memory region sizes must match between source and target systems.
Figure 4-55 View/Modify Partition 4.7.3 Partition name When the wizard starts, a new window will open as shown in Figure 4-56 on page 147. This gives you the opportunity to change the Partition ID number, provide a Partition name, and select an operating system environment. Select the Next button for the memory step.
Figure 4-56 Create Partition: Name

4.7.4 Partition Memory
Figure 4-57 on page 148 shows how to assign memory to the partition. The two memory options are dedicated and shared. In this section we will only discuss dedicated memory. Shared memory is covered in Chapter 5, “Active Memory Sharing configuration using IVM” on page 177. Total system memory and the current memory available for a new partition are summarized under the memory mode selection section.
Figure 4-57 Create Partitions: Memory 4.7.5 Partition Processors On the Create Partition: Processors window you have the option of assigning dedicated or shared processors. In shared mode, for each virtual processor, 0.1 processing units will be assigned. In dedicated mode, each assigned processor uses one physical processor. Available processor resources are displayed on the window and, as with dedicated memory resources, they cannot be over-committed.
Figure 4-58 Create Partition: Processors Note: After an LPAR is created, the processor mode cannot be changed from shared to dedicated or dedicated to shared from IVM, only from the VIOS CLI using the chsyscfg command. 4.7.6 Partition Ethernet The Create Partition: Ethernet window displays the choices for assigning network connectivity.
Figure 4-59 shows the first three options. The selection in this example is virtual Ethernet adapter 1 on the logical partition assigned to a SEA adapter. Note that you also have an opportunity at this time to create additional virtual Ethernet adapters for the logical partition.

Figure 4-59 Create Partition: Ethernet

Note: HEA logical ports and physical adapter assignments cannot be used on logical partitions that will be considered for Partition Mobility.

4.7.
Figure 4-60 Create Partition: Storage Type In this example we are using physical volumes. Click the option Assign existing virtual disks and physical volumes, and then click Next. Figure 4-61 on page 152 shows the available physical volumes. Note that no virtual disks have been defined for this example, so the table under Available Virtual Disks is empty. Select one or more available hdisks, then click the Next button.
Figure 4-61 Logical Partition: Storage

4.7.8 Optical and tape devices
Optical devices, both physical and virtual, and physical tape devices can be assigned to an LPAR. With an IBM BladeCenter JS23 or JS43, the physical optical device must be available to the BladeCenter slot that you are working with through the media tray assignment before assignment to an LPAR can be made. Virtual Optical Devices are not dependent on the media tray assignment.
present and select a virtual optical device. If a virtual optical device is not desired, uncheck the selection box.

Figure 4-62 Create Partition: Optical

If unassigned physical adapters are available on the system, the next window will provide the opportunity to assign them to the LPAR being created. If no physical adapter resources are available, you will be directed to the summary window. Click the Next button to proceed to the Physical Adapters window (if available) or the Summary window.

4.7.
Figure 4-63 Create Partition: Physical Adapters 4.7.10 Partition Summary The final window of the Create Partition wizard is the Create Partition: Summary, as shown in Figure 4-64 on page 155. All of the previous selections can be reviewed on this window and edited if required by using the Back button.
Figure 4-64 Create Partition: Summary After your review is done and any needed adjustments have been made, click the Finish button to complete the logical partition creation. Figure 4-65 on page 156 of the View/Modify Partitions window shows the new logical partition that was created. Chapter 4.
Figure 4-65 View/Modify Partition showing new partition

4.7.11 Partition properties changes and DLPAR operations
The IVM UI provides quick access to change an LPAR’s properties and perform Dynamic LPAR (DLPAR) operations on an active LPAR. The IBM BladeCenter JS23 or JS43 has the capability to perform DLPAR operations on memory, processors, and real or virtual I/O adapters.
Figure 4-66 Partition Properties General tab DLPAR capabilities can be retrieved by clicking the Retrieve Capabilities button. Figure 4-67 on page 158 shows the DLPAR capabilities of the IBM BladeCenter JS23 or JS43. IBM i LPARs have a different Partition Properties General tab view. See 7.3, “Creating an IBM i 6.1 partition” on page 271 for more information.
Figure 4-67 DLPAR retrieved capabilities

Selecting the Memory tab will display current and pending memory values for the LPAR, as shown in Figure 4-68 on page 159. In addition, if a shared memory pool has been configured, you will have the option to change between dedicated and shared memory. The change between dedicated and shared can only be done on an inactive LPAR. An active LPAR can have its Assigned memory value changed between the range of the minimum and maximum values as a DLPAR operation.
Figure 4-68 Partition Properties Memory tab

The Processing tab is used to change the processing units, virtual processors, partition priority weighting, and processor compatibility mode for LPARs using a shared processor pool, as shown in Figure 4-69. When changing the processor compatibility mode, a partition shutdown and restart is required for an active LPAR to make the change. If the LPAR is already inactive, an activation is required before the current value will be updated.
Figure 4-69 Partition Properties, Processing tab for shared pool

Partitions using dedicated processors will display the window as shown in Figure 4-70 on page 161. This example shows the LPAR in an inactive state, where the minimum, assigned, and maximum values can be changed. In an active LPAR, only the assigned value can be altered as a DLPAR operation. This window also allows changing the mode of sharing idle processors.
The processor compatibility mode can also be changed when using dedicated processors.

Figure 4-70 Partition Properties, Processing tab for dedicated processors

The Ethernet tab in Partition Properties allows the addition or removal of Ethernet adapters, as shown in Figure 4-71 on page 162.
Note: Before you can DLPAR remove Ethernet adapters from an active AIX LPAR, first use the rmdev command to remove the devices from the LPAR. HEA virtual ports require the removal of the Logical Host Ethernet Adapter (l-hea) and the Logical Host Ethernet Port (lp-hea). Virtual Ethernet adapters can be removed by deleting the Virtual I/O Ethernet Adapter (l-lan). Physical Ethernet adapters require the deletion of the adapter (ent) and the parent. The parent can be determined by the lsdev command.
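A sketch of that sequence for a physical Ethernet adapter, using hypothetical device names on the AIX LPAR:

# lsdev -l ent2 -F parent
pci0
# rmdev -dl ent2
# rmdev -dl pci0

The first command returns the parent device of the adapter; the adapter and then its parent are removed before the DLPAR operation is performed in IVM.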
Note: Partitions that are configured for shared memory or IBM i partitions cannot own HEAs. Therefore, the Host Ethernet Adapter section of this window will not be shown when viewing the properties of these types of LPARs. The Storage tab can be used to add or remove storage devices, either physical volumes or virtual disks, as shown in Figure 4-72.
Optical device assignments, both physical and virtual, and physical tape assignments can be managed from the Optical/Tape Devices tab shown in Figure 4-73 on page 164.

Figure 4-73 Partition Properties Optical/Tape Devices tab

Additional virtual optical devices can be created, and the media that is mounted to an existing virtual optical device can be changed in this window. Creating virtual optical media is covered in “Virtual optical devices” on page 135.
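Media can also be loaded into or unloaded from a virtual optical device on the VIOS CLI; a sketch, with the virtual target device and media names assumed:

$ loadopt -disk AIX61_DVD1 -vtd vtopt0
$ unloadopt -vtd vtopt0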
Figure 4-74 Partition Properties, changing the current mounted media Physical adapters that are not assigned to an LPAR or any physical adapters that are already assigned to the selected LPAR will be displayed when the Physical Adapters tab is clicked. Figure 4-75 on page 166 shows a Gigabit Ethernet-SX PCI-X Adapter available for assignment to this LPAR. Note: Partitions that are configured for shared memory or IBM i partitions cannot use physical adapters.
Figure 4-75 Partition Properties, Physical Adapters tab Note: The Partition Properties window for the VIOS partition does not have the Storage and Optical Devices tabs. 4.8 Console access and activating a partition The following sections discuss basic access to a partition and partition management functions.
4.8.1 Opening a virtual terminal Accessing a partition virtual terminal from the VIOS can be done in two different ways. However, only one virtual terminal to an LPAR can be open at a time. Note: These methods are not available for IBM i. In the case of IBM i, the Operations Console (LAN) is the only supported system console. The first method from the IVM UI is shown in Figure 4-76 on page 167. From the View/Modify Partitions view, select the check box for the desired LPAR.
Figure 4-77 shows a successful connection to the LPAR virtual terminal.

Figure 4-77 Virtual Terminal started from IVM UI

The second method to start a virtual terminal is from the VIOS command line. From the command line prompt, issue the mkvt command as shown in Example 4-23.

Example 4-23 Creating a virtual terminal from the command line
$ mkvt -id 4

Specify the partition number to which you want to connect after the -id flag.
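Because only one virtual terminal per LPAR can be open at a time, a session left open elsewhere can be force-closed with the rmvt command before a new one is created (LPAR ID 4 assumed):

$ rmvt -id 4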
Figure 4-78 Activating a partition The next window shows the current state of the partition and asks you to confirm activation by clicking OK, as shown in Figure 4-79 on page 170.
Figure 4-79 Confirm partition activation

When the LPAR activation starts, the message Performing Task - Please Wait will briefly appear, then the IVM UI will return to the View/Modify Partitions window.

Activating from the CLI
The chsysstate command is used to start a partition from the command line by either the LPAR number or name. Example 4-24 shows LPAR 4 being activated from the CLI.
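A minimal sketch of such an activation command, assuming LPAR ID 4:

$ chsysstate -r lpar -o on --id 4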
The lsrefcode command can be used to monitor the status codes as the LPAR becomes active. Example 4-25 shows the lsrefcode being used with both LPAR number and name for LPAR 4.
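A sketch of the lsrefcode usage by LPAR ID (the output will vary as the partition boots):

$ lsrefcode -r lpar --filter lpar_ids=4 -F refcode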
4.8.3 Shutdown a VIO Client partition
The shutdown of a partition can be initiated from the UI or the CLI. The shutdown process can interact with the operating system on an LPAR, or can be immediate without notifying the operating system. The following options are available for a partition shutdown:
Operating System (recommended)
Delayed
Immediate

The Operating System shutdown option is available only if the RMC connection is active. It is the recommended method.
Figure 4-80 Shutdown an LPAR The Shutdown partitions window, as shown in Figure 4-81 on page 174, will be displayed.
Figure 4-81 Partition shutdown options From this window, choose the shutdown type option. The partition can also be restarted after the shutdown by selecting the restart check box option. Click OK and the partition will be shut down. Note: The Operating System option will be disabled if RMC is not active between the LPAR and VIOS. The Delayed option will be selected by default.
The corresponding CLI shutdown options to use with the -o flag are:
osshutdown (Operating System)
shutdown (Delayed, white button shutdown)
shutdown --immed (Immediate)

4.8.4 Shutdown the VIO Server
The VIO Server is shut down in a similar manner to a VIO Client LPAR. Both the UI and CLI can be used.

Shutdown from the UI
When selecting the VIOS partition to be shut down, a warning is presented stating that shutting down the IVM partition will shut down all partitions and the entire system.
Shutdown using the CLI
The shutdown command to use from the CLI or console session is shown in Example 4-28 and has two options. To automatically restart after the shutdown, use the -restart flag; to suppress the warning message and confirmation, add the -force option.

Example 4-28 VIOS shutdown command
$ shutdown -restart
Shutting down the VIO Server could affect Client Partitions.
5 Chapter 5. Active Memory Sharing configuration using IVM

Active Memory Sharing is an IBM PowerVM advanced memory virtualization technology that provides system memory virtualization capabilities to IBM Power Systems, allowing multiple logical partitions to share a common pool of physical memory. This chapter describes how to configure Active Memory Sharing (AMS) using the IVM UI and, at a high level, some of the planning considerations that apply.
“Active Memory Sharing summary” on page 209
5.1 Planning considerations
Active Memory Sharing is an IBM PowerVM advanced memory virtualization technology that provides system memory virtualization capabilities to IBM Power Systems, allowing multiple logical partitions to share a common pool of physical memory. In shared memory mode, the system automatically decides the optimal distribution of the physical memory among logical partitions and adjusts the memory assignment based on the demand for memory pages.
Virtual Input/Output Server 2.1.1
Only virtual I/O; no physical adapters or logical ports from an HEA allowed
Only shared processor mode; no dedicated processors
AIX 6.1 TL3
IBM i 6.1 plus latest cumulative PTF package + SI32798
SUSE Linux Enterprise Server 11

5.1.
development environments and workloads that do not have sustained load requirements.

5.1.3 Paging devices
Considerations for Active Memory Sharing paging devices are similar to those for operating system paging devices. Active Memory Sharing paging operations will typically be 4K in size. Write and read caches should be enabled. Striped disk configurations should be used when possible, with a 4K stripe size.
Table 5-1 Estimated additional VIOS CPU entitlement per shared memory LPAR

Paging rate   Internal storage   Entry level storage   Mid range storage   High end storage
Light         0.005              0.01                  0.02                0.02
Moderate      0.01               0.02                  0.04                0.08
Heavy         0.02               0.04                  0.08                0.16

Shared memory partition
Shared memory partitions will also require additional CPU entitlement compared to dedicated memory partitions running the same workload.
Assigning a memory weight. The IVM UI allows three values, low, medium, and high, with a default of medium.
Paging device configuration: the higher the subscription ratio, the higher the need for optimized paging devices.
CMM configuration determines the page loaning policy.
Application load and loaning policy, none to aggressive, should be evaluated for acceptable performance. CMM is set at the OS level; therefore, a mix of loaning levels can exist in the same system.
Since a common paging storage pool is required, the first step is to create a storage pool that can be assigned as the paging storage pool.
1. To create a common paging storage pool, start in the navigation area of the IVM UI and click View/Modify Virtual Storage as shown in Figure 5-1.

Figure 5-1 Start Active Memory Sharing configuration with View/Modify Virtual Storage

2. The next window will begin the storage pool creation process. Select Create Storage Pool as shown in Figure 5-2 on page 185.
Figure 5-2 Starting the storage pool creation

3. The next window will prompt for the name of the storage pool; the name must be a valid volume group name, for example, no spaces are allowed and the name cannot exceed 15 characters. Use the choice of Logical volume based for the storage pool type. Next, select the physical volume or volumes desired to create the pool, as shown in Figure 5-3 on page 186. When the entries are made, click the OK button to complete the storage pool creation process.
Figure 5-3 Naming the storage pool and selecting backing devices Figure 5-4 on page 187 shows the newly created storage pool.
Figure 5-4 Storage pool list with new pool for paging added

Note: A new designation of “Paging” will be added to the name field of the storage pool list when the shared memory pool is created.

With the paging storage pool created, we are ready to define the shared memory pool. From the IVM UI, click View/Modify Shared Memory Pool. Figure 5-5 on page 188 shows the shared memory pool configuration page. The first items to note are the current memory available and the reserved firmware memory values.
Figure 5-5 Defining a shared memory pool

4. Clicking the Define Shared Memory Pool button will open the dialog for input of the desired memory pool size and the storage pool to be used for the paging storage pool. When these values have been entered and selected from the drop-down box, as shown in Figure 5-6 on page 189, click the OK button.

Note: When IVM creates the shared memory pool, the value provided for the Assigned memory of the pool will also be used for the maximum value of the pool.
Figure 5-6 Shared memory pool configuration values

5. After clicking the OK button, the screen will refresh and indicate that the shared memory pool has been defined, as shown in Figure 5-7 on page 190.
Figure 5-7 Shared memory pool defined state 6. Click the Apply button to create the shared memory pool and the assignment of the paging storage pool as shown in Figure 5-8 on page 191.
Figure 5-8 Shared memory pool information after creation

Now that we have created a shared memory pool, we can create LPARs that use shared memory. As these LPARs are created, Active Memory Sharing will subdivide the paging storage pool through the use of logical volumes to accommodate each LPAR. The recommended method, however, is to provide dedicated physical devices for each LPAR using shared memory as hypervisor paging devices. The next section details how these dedicated paging devices are created.
1. Click the View/Modify Shared Memory Pool link in the IVM navigation area.
2. Then click Paging Space Devices - Advanced to expand the section as shown in Figure 5-9 on page 192.

Figure 5-9 Creating dedicated paging devices for LPARS using shared memory

3. Next, click the Add button to display the devices that are available for selection.
Figure 5-10 Dedicated device selection for shared memory use

4. Figure 5-11 on page 194 shows the selected device now defined as a paging device. The Apply button must be clicked to complete the process.
Figure 5-11 Dedicated device defined to paging space devices

Note: As LPARs are created that use shared memory, they will be assigned to the smallest dedicated device available that will meet the memory size requirement.

5.2.3 Creating shared memory LPARs
Creating LPARs that use shared memory instead of dedicated memory uses the same wizard and process as detailed in 4.4.2, “Verify and modify VIOS partition memory and processors” on page 95.
ports from an HEA, dedicated processors, and physical adapters are no longer available.
1. The LPAR wizard is started by clicking the View/Modify Partitions link on the IVM UI, and then clicking the Create Partition button. Figure 5-12 shows the first window of the wizard where the partition ID, partition name, and operating system environment are set. Enter the required information and click Next.

Figure 5-12 Creating a shared memory partition name

2.
Figure 5-13 Selecting memory mode and amount for a shared memory partition

3. The next step will be the selection of the number of shared (virtual) processors, as shown in Figure 5-14 on page 197. Notice that the dedicated processor option cannot be selected. Use the drop-down box to select the number of assigned processors and click the Next button.
Figure 5-14 Selecting the number of processors in a shared memory partition

4. The next configuration step is Ethernet selection. As shown in Figure 5-15 on page 198, the only options are virtual Ethernet adapters. In this example we are using an existing Shared Ethernet Adapter (SEA). Click the Next button to continue to the storage options.
Figure 5-15 Ethernet selection for a shared memory partition

The storage selection options for a shared memory LPAR are the same as for a dedicated memory LPAR. Virtual disks can be created from an existing storage pool. Existing virtual disks or physical volumes can be selected. There is also the None option if you do not desire to assign storage at this time.
5. In Figure 5-16 on page 199 we chose the Assign existing virtual disks and physical volumes option.
Figure 5-16 Storage selection for a shared memory partition Figure 5-17 on page 200 shows the available selection of virtual disks (none in this example) and physical volumes that have not been assigned and are available. 6. In this example we chose hdisk4. Click the Next button to continue to the optical and tape options.
Figure 5-17 Storage selection for a shared memory partition Two of the options shown in Figure 5-18 on page 201, physical optical devices and physical tape devices, will virtualize the physical device to the LPAR through the VIOS. Selecting these options does not imply a direct physical connection from the LPAR being created to the device. The virtual optical device is selected by default and can have media from the virtual media library assigned at this time. 7.
Figure 5-18 Optical and tape selections for a shared memory partition

The summary page as shown in Figure 5-19 on page 202 lists all of the selections made when stepping through the Create partition wizard.
8. The Back button can be used to revise any choices. Once the selections have been reviewed, click the Finish button to complete the creation of the shared memory partition.
Figure 5-19 Summary of selections for a shared memory partition Figure 5-20 on page 203 shows the View/Modify Partitions view with the new shared memory partition.
Figure 5-20 View/Modify Partition window showing newly created shared memory partition Figure 5-21 on page 204 shows the details of the shared memory pool indicating the new shared memory partition and the creation of lv00 in the paging storage pool supporting the partition Sharedmemlpar3.
Figure 5-21 Shared memory pool with paging space assignments in paging pool

5.2.4 Shared memory partitions and dedicated paging devices
During the creation of the shared memory pool you have the option to create dedicated paging devices for shared memory partitions, as detailed in 5.2.2, “Creating dedicated paging devices for partitions” on page 191. These dedicated devices, if available and of adequate size, will be assigned by default to a shared memory partition when it is created.
Figure 5-22 Shared memory pool view showing both types of paging devices

A new shared memory partition, Sharedmemlpar4, was created with a logical memory value of 25GB. Figure 5-23 on page 206 shows this new partition and the assignment of hdisk2 as its dedicated paging device. Although the paging storage pool had over 39GB available, the default is to use dedicated paging devices when available. In this case the available hdisk2 with a size of 30GB was assigned to the partition Sharedmemlpar4.
Figure 5-23 Shared memory pool view showing assigned dedicated paging device

Changing the maximum memory values of a shared memory partition can also cause a change from a paging pool logical volume to a dedicated paging device. Figure 5-24 on page 207 shows the inactive partition Sharedmemlpar3, which had its maximum memory value changed from 10GB to 15GB. When this change was made, the paging space changed from the 10GB logical volume lv00 in the pool AMS_Page_Pool to the 30GB hdisk6.
Figure 5-24 Partition memory properties showing maximum memory and paging space changes Figure 5-25 on page 208 shows the shared memory pool indicating the changes to the paging device used for the partition Sharedmemlpar3 when the maximum memory values were changed.
Figure 5-25 Shared memory pool after partition maximum memory values changed

5.2.5 Active Memory Sharing DLPAR operations
Dynamic logical partition (DLPAR) operations can be performed on both the shared memory pool and shared memory partition logical memory assignments. The assigned memory in the shared memory pool can be changed dynamically up to its maximum value, and the memory pool maximum value can be dynamically increased up to the available limits of the physical memory minus firmware requirements.
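As an illustration, the logical memory of a running shared memory partition could be increased from the VIOS CLI with the chhwres command; the LPAR ID and quantity (in MB) below are examples:

$ chhwres -r mem -o a --id 5 -q 1024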
5.3 Active Memory Sharing summary
Active Memory Sharing provides the ability to better utilize the memory and CPU resources available on an IBM BladeCenter JS23 or JS43. However, successful implementation requires a complete understanding of current or planned workloads and the proper matching of those workloads in the right combinations. Improper matching will result in contention for memory resources and excessive paging by the VIO Server in an attempt to service the partitions’ memory needs.
6 Chapter 6. IBM AIX V6.1 installation

IBM AIX can be installed natively on IBM BladeCenter JS23 and JS43 Express or in a client partition of IBM PowerVM. This chapter describes in detail the installation on a logical partition and has the following sections:
“Create a virtual media library” on page 212
“Prepare the PowerVM client partition” on page 218
“Install AIX 6.1 in a logical partition of the Virtual IO Server” on page 231

© Copyright IBM Corp. 2009. All rights reserved.
6.1 Install IBM AIX 6.1 in a PowerVM client partition
This section assumes that you have already installed VIOS 1.5.2.1 or any later version (the latest version is V2.1.1) on the blade and performed the initial configuration. In case this was not done, go to 4.2, “VIOS system management using IVM” on page 83. To install IBM AIX 6.1 in a client partition, you must first create the client partition with the IVM before you can start with the installation of AIX.
2. Specify the name of the storage pool and select the physical volumes that will be assigned to this storage pool. Figure 6-2 shows that we used the name STG-Pool-Media1. The type of the volume group is logical volume based. This allows the space of the media library to be increased when needed. Physical volume hdisk3 is assigned to this pool. Click OK.

Figure 6-2 Media library - select the physical volume

3. The storage pool was created. Now select the Optical Devices tab. See Figure 6-3.
4. Click Create Library. See Figure 6-4. Figure 6-4 Media library - create library 5. Specify the storage pool that will contain the logical volume with the media library and the size of the media library. We used the volume group created in step 1 on page 212. The initial size was set to hold the AIX 6.1 DVD with a size of approximately 3.6 GB. See Figure 6-5. Click OK.
6. It takes a moment to create the library volume and file. After that is done, return to the panel shown in Figure 6-6. Click Add Media to create an image from the AIX DVD.

Figure 6-6 Media library - add media

7. The add media dialog starts and guides you through the process of adding media to the library. Click Import from physical optical device to get the list of available physical optical devices that you can use to import the media. Specify the media type of the new media.
You may look at existing media files in /var/vio/VMLibrary. The last step on this page is the specification of the optical device that contains the CD or DVD to copy into the library. Figure 6-8 shows the optical device that is located in the media tray of the IBM BladeCenter H chassis. The remote media optical device uses the location code U78A5.001.WIH01AA-P1-T1-L1-L1. We used the internal optical device of the BladeCenter chassis to copy the data from the IBM AIX 6.1 DVD.
This function can be reached with Monitor Task before you close your browser window or from the main window’s left-hand navigation under Service Management → Monitor Task. See Figure 6-9. Figure 6-9 Media library - performing task 9. After closing the browser window of the add media dialog, you return to the view shown in Figure 6-10. The new media is already listed here. Clicking Refresh updates the size information during the copy operation.
6.1.2 Prepare the PowerVM client partition Perform the following steps to create a client partition with the Integrated Virtualization Manager (IVM) of the Virtual I/O Server. 1. Use your preferred Web browser and enter the host name or IP address of the IVM. That is the address configured in 4.3.3, “Initial network setup” on page 87. A Web page comes up that allows you to log in. Use the default account that was created during setup when you had not yet created you own account.
2. Depending on the setup of your IVM, you will be at the Guided Setup or on the View/Modify Partitions page. Figure 6-12 shows the usual page that you see after logon when the IVM is fully configured. Figure 6-12 View/Modify Partitions page after logon Chapter 6. IBM AIX V6.
3. Verify that you have your storage available to the VIOS. Click View/Modify Virtual Storage in the left menu under Virtual Storage Management. See Figure 6-13.
4. On the View/Modify Virtual Storage page, click the Physical Volumes tab to see a list of hard drives available to the VIOS. Verify that the expected drives are available. See Figure 6-14.

Figure 6-14 Available physical volumes

Figure 6-15 shows that there are four physical volumes available. They are all located on a DS4800. HDISK0 and HDISK1 are used for the VIOS itself. HDISK2 will be used for AIX client partitions that will be created in the next steps.
5. Specify the name and the type of the partition. The name is used to identify the partition, especially when partition mobility is later used. Using a host name might be an option here. In Figure 6-16 we chose the host name as partition name. The type can be either AIX/Linux or i5/OS. Choose the type according to the OS you plan to install. We chose AIX/Linux for this AIX partition. Click Next to proceed.
6. Define the amount of memory that will be assigned to the partition. In Figure 6-17 we chose 1 GB. Click Next to proceed. Figure 6-17 Create partition - configure the amount of memory
7. Choose the number of CPUs that will be used by the partition. You have to decide whether to use dedicated or shared CPUs. When a dedicated CPU is used, load cannot be moved to other currently free CPUs, which may lead to a performance issue. In Figure 6-18 you see that we configured two CPUs and shared processor mode. Click Next to proceed.

Figure 6-18 Create partition - CPU configuration

8.
As shown in Figure 6-19, we chose one virtual Ethernet adapter. Click Next to proceed. Figure 6-19 Create partition - ethernet network 9. Set up the storage type you plan to use. There are three different options available. You may use volume group or file-based storage. In addition there is an option to use a dedicated physical volume for the partition.
As shown in Figure 6-20, select Assign existing virtual disks and physical volumes. Click Next to proceed.
10.Select the physical volume or volumes that need to be available to the partition. Figure 6-21 shows the selection of hdisk1. Click Next to proceed.

Figure 6-21 Create partition - select physical volumes

11.In the optical section of the partition creation process you can define the CD-ROM drives that will be used by the partition. Two options are possible:
– Physical drive attached to the partition
– Virtual drive attached to the partition
There might be multiple physical CDROM drives available.
is the CDROM drive that is provided via the Remote Control Web interface of the Advanced Management Module.

Note: When you attach the media tray of the BladeCenter chassis to a blade that is already up and running, you may have to issue cfgdev on the command line of the Virtual IO Server to get it recognized by VIOS.

Virtual CDROM drives are used to mount CDs that are placed in the media library. See 4.6.2, “Storage pools” on page 124 and 4.6.4, “Optical and Tape devices” on page 132.
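A sketch of that rescan, run as padmin on the VIOS; the lsdev command then lists the optical devices that were discovered:

$ cfgdev
$ lsdev -type optical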
12.Change the selected media from none to AIX-6.1 and click OK. See Figure 6-23.

Figure 6-23 Create partition - modify current media of virtual optical device

13.Click Next to see an overview of the settings of the new partition. See Figure 6-24.

Figure 6-24 Create partition - virtual optical device
14.Verify your settings and click Finish to create a partition with the settings you defined. See Figure 6-25.
15.The new partition will be listed under View/Modify Partitions, as shown in Figure 6-26. Figure 6-26 Newly created AIX/Linux partition The preparation of the partition is done. Proceed with the installation of AIX in the newly created partition. 6.1.3 Install AIX 6.1 in a logical partition of the Virtual IO Server The previous sections described how to prepare the media library that contains the AIX 6.
1. To activate the partition, click the check box of the partition and click Activate. See Figure 6-27. Figure 6-27 Activate a partition 2. Confirm the activation of the partition with OK as shown in Figure 6-28.
3. The status of the partition has changed to running. Select Open Terminal from the More Tasks drop-down list box to open a terminal connected to the selected partition. See Figure 6-29. Figure 6-29 Open a virtual terminal to the partition
4. Authenticate on the Virtual IO Server to get the virtual terminal connected. You may use the account padmin with the default password padmin here in case you have not yet created your own account. After the authentication is done, a message will be shown that the terminal has connected, as shown in Figure 6-30.

Figure 6-30 Virtual terminal connection

On the virtual terminal you will see the POST of the partition, with the option to enter the SMS menu. No change is required at this stage.
AIX Version 6.1
Starting NODE#000 physical CPU#001 as logical CPU#001... done.
Starting NODE#000 physical CPU#002 as logical CPU#002... done.
Starting NODE#000 physical CPU#003 as logical CPU#003... done.
Preserving 126407 bytes of symbol table [/usr/lib/drivers/hd_pin]
Preserving 199549 bytes of symbol table [/usr/lib/drivers/hd_pin_bot]

6. Define the current virtual terminal as the system console by entering 1. Press Enter to proceed; see Example 6-2.
88 Help ? >>> Choice [1]: 8. Modify required settings such as language or time zone and proceed with the installation by entering 1 followed by Enter, as shown in Example 6-4.
ibm3164     vt100      wyse100
ibmpc       vt320      wyse350
88  Help ?
+-----------------------Messages-----------------------
| If the next screen is unreadable, press Break (Ctrl-c)
| to return to this screen.
>>> Choice []:

11.Select Show Installed License Agreements and press Enter to read the license agreement; see Example 6-6.

Example 6-6 License agreement menu
Software License Agreements
Move cursor to desired item and press Enter.
13.Navigate through the licenses. When you have finished reading, press F3 twice. You return to the Software License Agreements panel. Select Accept License Agreements and press Enter; see Example 6-8.

Example 6-8 License agreement menu
Software License Agreements
Move cursor to desired item and press Enter.
Show Installed License Agreements
Accept License Agreements
F1=Help Esc+9=Shell F2=Refresh Esc+0=Exit F3=Cancel Enter=Do Esc+8=Image

14.
n=Find Next

16.The installation assistant will guide you through the first administrative tasks, such as setting a root password or configuring the network connection. Proceed with the setup as described in the AIX documentation. To complete this task and get to a login prompt, use ESC+0 or F10. You may start the installation assistant again at any time by using the install_assist command after logging in as root. The installation assistant is shown in Example 6-11.
7 Chapter 7. IBM i V6.1 installation This chapter explains the installation process of the IBM i V6.1 Operating System on an IBM BladeCenter JS23/JS43 Express server installed in a BladeCenter S chassis using the disks provided in the disk storage modules. For the IBM BladeCenter JS23/JS43 in a BladeCenter H chassis, the installation process is similar to the information provided here, except that the storage is provided from a SAN environment.
7.1 Preparing for installation There are important considerations for setting up and using IBM i 6.1 client logical partitions on IBM Power servers or the IBM BladeCenter JS23 or JS43 Express server. On Power blades, you use the Integrated Virtualization Manager (IVM) to manage partitions. A client logical partition is a partition that uses some of the I/O resources of another partition. When the IBM i 6.
Figure 7-1 IBM i 6.1 installation process
7.1.2 Hardware environments This section describes an example IBM BladeCenter chassis and IBM BladeCenter JS23/JS43 Express server configuration with recommended firmware levels. Note: The disk configurations are dependent on the I/O requirements. For example, two SAS disk drives will not be enough with mirroring and backup to the media library. For performance reasons it is recommended to install IBM i to disk units other than the internal disks of the JS23/JS43.
IBM BladeCenter JS23 Express
1 JS23 Express server
4 GB memory
1 QLogic Ethernet and 4 GB Fibre Channel Expansion Card (CFFh)
1 SAS disk drive

IBM BladeCenter JS43 Express
1 JS43 Express server
4 GB memory
1 QLogic Ethernet and 4 GB Fibre Channel Expansion Card (CFFh)
1 SAS disk drive

Table 7-1 lists the minimum and required features needed to manage an IBM BladeCenter JS23 Express system with the IBM i 6.1 Operating System.
Feature   Description                                        Notes
8241      Qlogic 4 GB Fibre Channel Expansion card (CIOv)    Option for SAN connection in Bay 3 or 4 of an H or S chassis
8242      Qlogic 8 GB Fibre Channel Expansion card (CIOv)    Option for SAN connection in Bay 3 or 4 of an H or S chassis
8271      Qlogic 8 GB Fibre Channel Expansion card (CFFh)

Table 7-2 on page 246 lists the minimum and required features needed to manage an IBM BladeCenter JS43 Express system with the IBM i 6.1 Operating System.
Feature   Description                                        Notes
8241      Qlogic 4 GB Fibre Channel Expansion card (CIOv)    Option for SAN connection in Bay 3 or 4 of an H or S chassis
8242      Qlogic 8 GB Fibre Channel Expansion card (CIOv)    Option for SAN connection in Bay 3 or 4 of an H or S chassis
8271      Qlogic 8 GB Fibre Channel Expansion card (CFFh)

For more information on supported devices on a BladeCenter JS23/JS43 server, refer to the following site: http://www.ibm.com/systems/power/hardware/blades/ibmi.
2. Click the down arrow in the Product family box and select the corresponding product: IBM BladeCenter JS23, BladeCenter JS43, BladeCenter S, or BladeCenter H. 3. Click the down arrow button in the Operating system box and select IBM i 6.1, as shown in Figure 7-2. Then click the Go button to activate the search. Figure 7-2 on page 248 provides an example of the search options when using the support web site to locate updates. Figure 7-2 Firmware information and download 4.
Figure 7-3 on page 250 shows an example of the available firmware and BIOS updates. Scroll the list to find the update you need or tailor the results using the Refine results option.
Figure 7-3 Example: Partial list of available downloads by type
7.1.4 VIO Server software environments
VIO Server is part of IBM PowerVM Editions (formerly Advanced POWER Virtualization). It is required in the IBM i 6.1 for IBM BladeCenter JS23/JS43 Express environment. At minimum, VIOS level 1.5 is required for IBM i. It is recommended to use version 2.1 or later. Work with your local sales channel to ensure that PowerVM (Standard or Enterprise Edition) and the latest fix pack are part of the BladeCenter JS23/JS43 order.
6.1 LAN console This IP address on the LAN is used to allow the 5250 console to connect to the VIOS using the IBM System i Access for Windows software. 6.1 production interface This IP address on the external LAN is used to provide 5250 production network access. This address will be configured after 6.1 is installed using LAN console. It is recommended that the 6.1 LAN console and production network interface use two separate Virtual Ethernet adapters in the 6.1 partition.
To provide access to a SAS drive in the BladeCenter S chassis to the partition, at least one SAS I/O module must be installed in the BladeCenter S chassis. An SAS expansion adapter (CIOv) also must be installed in each IBM BladeCenter JS23 or IBM BladeCenter JS43 Express server. A single SAS I/O module provides access to both Disk Storage Modules (DSM) and all 12 disks. The physical connection to tape drives is owned and managed by VIOS. The IBM i does not have direct access to the tape.
7.1.8 Disk configuration in BladeCenter S
To use a pre-defined configuration on a BladeCenter JS23/JS43 server, you must establish a connection to the SAS Module, as shown in Figure 7-4 on page 254, using a browser window directly connected to the SAS Module. An alternative that is more intuitive for clients is the SCM GUI.

Figure 7-4 SAS Connection module login

1. Enter the User ID and Password of the account that has access to the SAS module and click Login.
Figure 7-5 SAS connection module welcome

2. Select Zoning. In the example shown in Figure 7-6 on page 256, Predefined Config 09 is selected and active. Notice that for our BladeCenter JS23/JS43 installed in slot 4, Zone Group ID 37 is configured. Remember the Zone Group ID for the following window to examine the corresponding hard disk drives.
3. Click Basic Zone Permission Table.
Figure 7-6 SAS connection module zone groups

Figure 7-7 on page 257 shows the definition and setup window for the actual configuration. In this configuration, three disks from SAS module 1 and three disks from SAS module 2 are defined for Predefined Config 09. Individual User Defined Configs are provided for specific configurations. For more detailed information about this topic, refer to: Implementing the IBM BladeCenter S Chassis, SG24-7682.
Figure 7-7 SAS connection module zoning

4. To verify the configuration in the SAS module configuration menus, log on to the IBM BladeCenter Advanced Management Module. Under Storage Tasks, select Configuration as shown in Figure 7-8 on page 258.
Figure 7-8 AMM SAS configuration zone

5. Click Predefined Config 09 to proceed. Figure 7-9 on page 259 shows the current configuration. Select the blade in the upper rectangle to highlight the disks assigned to that blade.
Figure 7-9 AMM SAS configuration zone 9

For detailed information, refer to Implementing the IBM BladeCenter S Chassis, SG24-7682 and IBM BladeCenter Products and Technology, SG24-7523.

7.1.9 Individual BladeCenter S disk configuration
If one of the eight predefined SAS I/O module disk configurations does not match the target configuration, four user-predefined configurations are available for individual use.
The IBM Storage Configuration Manager (SCM) may be used to create an individual configuration if you are not familiar with the SAS I/O module command line interface. The SCM software can be downloaded from:
https://www-304.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5502070&brandind=5000008

7.2 IBM System Access for Windows V6R1
System i Access for Windows fully integrates the power of the IBM i 6.
http://www-03.ibm.com/systems/i/software/access/windows/v6r1pcreq.html For more information about the IBM System i Access for Windows V6R1, see: http://www.ibm.com/systems/i/software/access/index.html To obtain the IBM System i Access for Windows software, go to the following address: http://www.ibm.com/systems/i/software/access/caorder.html Note: When the IBM i Access for Windows connection is first established, the console PC must be on the same subnet as the 6.1 partition.
Figure 7-11 IBM System i Access for Windows welcome screen 3. The License Agreement shown in Figure 7-12 on page 263 appears. You can select I accept the terms in the license agreement. Click Next to continue.
Figure 7-12 IBM System i Access for Windows License Agreement 4. IBM System i Access for Windows can be installed at a different location, as shown in Figure 7-13 on page 264. To store the software at a different location, click Change... and choose a new location. Or, accept the predefined path and click Next to continue.
Figure 7-13 IBM System i Access for Windows install location 5. Depending on the native language, a selection can be made in the following window, as shown in Figure 7-14 on page 265. Normally, the same language is chosen as the language of the IBM i 6.1 operating system. Click Next to continue.
Figure 7-14 IBM System i Access for Windows Primary language 6. Depending on the complexity of functions required, several choices are available, as shown in Figure 7-15 on page 266. The normal case is a complete installation. Experienced administrators can select the custom installation to save disk space or to install selected functions only. Click Next to continue.
Figure 7-15 IBM System i Access for Windows Setup Type 7. Select Complete and click Next. 8. Some features require a license agreement to use their functionality, as shown in Figure 7-16 on page 267. Ask your service representative for a valid license key. Click Next to continue.
Figure 7-16 IBM System i Access for Windows Restricted Features 9. The installation starts automatically after you select Next in the previous menu. Figure 7-17 on page 268 shows the progress of the installation process.
Figure 7-17 IBM System i Access for Windows installation progress 10.Figure 7-18 on page 269 indicates the installation process was successful. Click Finish to continue.
Figure 7-18 IBM System i Access for Windows installation completed 11.To finalize the IBM i Access for Windows installation, a reboot is required, as indicated in Figure 7-19. Figure 7-19 IBM System i Access for Windows Reboot 12.Click Yes to reboot the system.
After the console PC is successfully rebooted, the information screen shown in Figure 7-20 is displayed. The Welcome window provides additional information about the software just installed. (For some information, the administration PC needs a connection to the Internet.)
7.3 Creating an IBM i 6.1 partition Using Integrated Virtualization Manager (IVM) to create an IBM i 6.1 partition is similar to using the HMC. IVM uses a number of defaults that simplify partition creation. For example, because IBM i 6.1 partitions cannot own physical hardware on an IVM-managed system such as a BladeCenter JS23/JS43, those screens are omitted from the creation wizard. Other screens, such as those that relate to shared processor pool and memory settings, are simplified as well.
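Although this chapter uses the IVM Web interface, the same attributes can also be supplied from the VIOS/IVM command line interface described in 4.2.3. The following mksyscfg invocation is only a minimal sketch: the partition name and every resource value shown are illustrative assumptions, not values from our configuration.

mksyscfg -r lpar -i "name=IBMI61,lpar_env=os400, \
min_mem=1024,desired_mem=4096,max_mem=8192, \
proc_mode=shared,min_procs=1,desired_procs=2,max_procs=2, \
min_proc_units=0.1,desired_proc_units=1.0,max_proc_units=2.0"

The lpar_env=os400 attribute is what identifies the partition as IBM i rather than AIX or Linux.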
7.3.2 VIO Server configuration For a detailed explanation of how to set up and configure the VIOS partition to use the Integrated Virtualization Manager (IVM), refer to Chapter 4, “System planning and configuration using VIOS with IVM” on page 71. 7.3.3 Creating an IBM i partition This section provides a brief explanation of how to create an IBM i 6.1 partition. It is assumed you have previously configured disk space (LUNs) for this partition's use. To create an IBM i 6.
2. Click Create Partition. The next available Partition ID is preselected. a. Enter a name for the partition in the Partition name field. b. Select IBM i or i5/OS in the Environment field. c. Click Next to continue. Figure 7-22 on page 273 shows an example of the partition ID, name, and environment fields. The ID will be filled in by the wizard using the next available number. You can change this to an unused number if you desire. Figure 7-22 Partition id, name and environment options 3.
Figure 7-23 Partition memory definition panel 4. Select the desired processor configuration. Click Next to continue. Figure 7-24 on page 274 is an example of the processor selection panel. In this example the blade server had 8 processors total. There are other partitions created which also use some processor capacity. In the Assigned processors field you will choose how many processor units to assign to this partition. For example, using shared if you choose 1 as shown in the graphic, you will have .
of the HEA ports prior to creating this partition. For more information on bridging the HEA ports see 4.5.1, “Host Ethernet Adapters” on page 100. Figure 7-25 Partition ethernet selection 6. Select Assign existing virtual disks and physical volumes. 7. Click Next to continue. Figure 7-26 on page 275 shows an example of the selection for disk units to use in the partition. You can use virtual disks or physical disks. For an IBM i partition it is recommended to use physical volumes.
Figure 7-27 on page 276 shows an example of available disk units. For this partition, we selected hdisk8 and hdisk9, which are LUNs created in a storage subsystem that have been assigned to this JS43. Depending on your configuration, you may also have virtual disks available. If so, they would be listed under the Available Virtual Disks section. Figure 7-27 Disk selection 10.Also depending on the installation preparation in the Optical devices menu, you can select either: a.
Figure 7-28 on page 277 provides an example of the optical selection panel. If the media tray for the BladeCenter has been assigned to the blade server you are creating the partition on, the device will be available. Under the Physical Optical Devices area is the checkbox to select cd0. Figure 7-28 Partition optical selections 11.Review the summary of your definition and click Finish to create the IBM i 6.1 partition. 7.3.
the Properties task. The first tab of the properties box is the General tab. Here you can view the fields for the load-source adapter and the console adapter. The selections should be the virtual adapters when in the blade environment. Also note that the IPL source will be set to D which uses the Alternate restart adapter. Figure 7-29 Load Source and Console Identification fields Figure 7-30 on page 279 provides an example of the Memory tab.
Figure 7-30 Partition memory allocation Figure 7-31 on page 279 provides an example of the Processing tab. You can adjust the partition processor allocations by changing the values and clicking OK. Like the memory, you can adjust the minimum and maximum values to create a range of processor units to stay within when performing dynamic allocation. Figure 7-31 Partition processing properties tab Figure 7-32 on page 280 shows an example of a modified set of values for processing units.
Figure 7-32 Processing units value change 7.3.5 IBM i 6.1 install media preparation There are two general methods for installing IBM i Licensed Internal Code (LIC) and the 6.1 operating system on a BladeCenter JS23/JS43 blade in an IBM BladeCenter chassis. You can use the CD/DVD drive in the IBM BladeCenter chassis Media module attached to the IBM i 6.1 partition, or you can create virtual optical media devices.
1. In the active Windows session, select Start → All Programs → IBM System i Access for Windows → Operations Console. The window shown in Figure 7-33 on page 281 should appear. Select the Connection drop-down menu to continue. Figure 7-33 on page 281 shows an example of the System i Operations Console panel. Figure 7-33 IBM System i Operations Console You will reach the window shown in Figure 7-34 on page 281. Figure 7-34 IBM System i operator console
2. Click New Connection to continue. You reach the Operations Console Configuration wizard Welcome screen, as shown in Figure 7-35 on page 282. A connection to the Internet is required to reach the InfoCenter services. Figure 7-35 IBM System i Operations Console Welcome Click Next to continue. You may also see a dialog box appear asking you to confirm that the prerequisites for Operations Console have been met. Clicking the Help button will provide the needed information.
Figure 7-36 IBM System i Operations Console - choose a configuration Click Next to continue. 4. The System i service hostname must be defined first to establish a connection to the BladeCenter JS23/JS43 blade; see Figure 7-37 on page 284.
Figure 7-37 IBM System i Operations Console - enter the Service host name Enter the service host name and click Next. The System i service host name (interface name) is the name that identifies the service connection on your network that is used for service tools, which includes an Operations Console local console on a network (LAN) configuration. This is assigned by your system or network administrator and must be resolved through DNS.
Note: Choose a service host name that is related to the IBM i 6.1 partition name created in Integrated Virtualization Manager (IVM) so that you can more easily remember which partition is meant. The service host name and service TCP/IP address are stored automatically in the host file of the IBM System i Access for Windows console PC.
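For illustration only, the resulting hosts file entry might look like the following; both the address and the name here are hypothetical values, and on a Windows console PC the file is typically C:\WINDOWS\system32\drivers\etc\hosts:

192.168.1.152    IBMI61LAN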
Figure 7-39 IBM System i Operations Console - enter the Service TCP/IP Address Click Next to continue. 7.
Figure 7-40 Specify Interface Information 8. Modify the required fields to match the actual implementation. In our hardware scenario, a gateway was implemented. Two important fields are System serial number and Target partition, as shown in Figure 7-43 on page 289. System serial number: This is the BladeCenter JS23/JS43 unique system number. To find the System serial number, use the Integrated Virtualization Manager (IVM) console and look under System Properties. An example is shown in Figure 7-41 on page 288.
Figure 7-41 System Properties - Serial number Target partition: This is the partition ID of the IBM i 6.1 partition. Partition ID 1 is predefined for VIOS; use IVM to verify. If no other partition has been created at this time, the IBM i 6.1 partition ID is 2. The partition ID can be found by looking at the View/Modify partition panel. Next to the partition name is the ID field, as shown in Figure 7-42 on page 288. In our example the partition ID is 3.
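The same information is available from the VIOS/IVM command line with lssyscfg; this is a sketch, and the output shown is illustrative rather than captured from our system:

$ lssyscfg -r lpar -F lpar_id,name
1,VIOS-JS43
3,IBMI61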
Figure 7-43 IBM System i Operations Console - enter System serial number Enter values and click Next to continue. 9. The next window that appears requests a Service tool device ID to authenticate the communication between the LAN console PC and the IBM i partition, as shown in Figure 7-44 on page 290.
Figure 7-44 IBM System i Operations Console - enter Service tools device ID Enter the Service tool device ID and click Next to continue. 10.Figure 7-45 on page 291 shows the final window that is displayed after you define the recommended information for an IBM System i Operations Console.
Figure 7-45 IBM System i Operations Console - finalizing the setup Click Finish to save the configuration information. The configuration window will close immediately and you will return to the initial window with the predefined console definitions for a BladeCenter JS23/JS43 blade, as shown in Figure 7-46 on page 291. Figure 7-46 IBM System i Operations Console To connect the IBM System i Operations Console to the IBM i 6.
connect icon. Once the session establishes the connection, the partition can be activated. Partition activation is discussed in the next section. Figure 7-47 Connect console session 7.3.7 IBM i 6.1 IPL types The IPL type determines which copy of programs your system uses during the initial program load (IPL). IPL type A: Use IPL type A when directed for special work, such as applying fixes (PTFs) and diagnostic work. IPL type B: Use the B copy of Licensed Internal Code during and after the IPL.
Note: Typically after installation of PTFs you will run the partition on the B side. This value is changed on the General tab of the partition properties. After the prerequisites are completed, the steps required to install 6.1 on a BladeCenter JS23/JS43 are essentially the same as on any other supported system: 1. Place the IBM i 6.1 installation media in the DVD drive in the BladeCenter media tray, which at this point should be assigned to your BladeCenter JS23/JS43.
5. Depending on the native language, a selection can be made in the following screen as shown in Figure 7-49 on page 294. Normally the same language will be chosen as the language for the IBM i 6.1 operating system. Language feature 2924 enables the English environment. Figure 7-49 Confirm Language setup Press Enter to continue. The next screen displays several options, as shown in Figure 7-50 on page 295. To install the Licensed Internal Code, type 1 and press Enter.
Figure 7-50 Install LIC 6. Now select the target install device. Move the cursor to the target device, type 1 and press Enter; see Figure 7-51 on page 296.
Figure 7-51 Select source disk 7. Confirm the previous selection of the Load Source Device by pressing F10; see Figure 7-52 on page 297.
Figure 7-52 Confirm source device 8. The Install Licensed Internal Code (LIC) menu appears on the console, as shown in Figure 7-53 on page 298. Type 2 for Install Licensed Internal Code and Initialize system, then press Enter to continue.
Figure 7-53 Select options 9. The Confirmation screen appears as shown in Figure 7-54 on page 299. This procedure causes existing data on the disk assigned to this logical partition to be lost. Press F10 to continue or press F12 to Cancel and return to the previous screen.
Figure 7-54 Confirm definition After you confirm the definition, you reach the Initialize the Disk status screen as shown in Figure 7-55 on page 300. Depending on the predefined size of the virtual disk, this procedure can take 60 minutes or more.
Figure 7-55 Initialize disk 10.Next, the Install Licensed Internal Code status display appears on the console, as shown in Figure 7-56 on page 301. It will remain on the console for approximately 30 minutes. Once the LIC installation completes, the logical partition is automatically restarted and IPLs to DST to complete the Licensed Internal Code installation.
Figure 7-56 Install LIC status 11.The Disk Configuration Attention Report display might appear on the console. Figure 7-57 on page 302 shows the report for a new disk configuration. Press F10 to accept the action to define a new disk configuration. Note: If the Disk Unit Not Formatted For Optimal Performance Attention Report appears on the console, then further actions should be performed as described in InfoCenter: http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp?topic=/rzahc/rzahcdiskw.
Figure 7-57 Attention Report After the Licensed Internal Code installation is complete, you will see the screen shown in Figure 7-58 on page 303. At this time, it is recommended that you complete disk unit configuration before installing the operating system. When completing disk configuration, you will be adding additional units and possibly starting mirroring on the disk units. See the following link for assistance with performing disk configuration. Not all steps will need to be performed. http://publib.boulder.ibm.
Figure 7-58 Install the operating system 7.4 Installing the IBM i 6.1 Operating System From the IPL or Install the System screen, the installation process of the operating system can be continued without interruption. If you use the virtual optical device method, with the two IBM i 6.1 DVDs previously copied to virtual optical media, the only action necessary is to assign the virtual optical device with the IBM i DVD 1 content to the IBM i partition. 1.
Figure 7-59 Select install device Type 2 and press Enter to continue. 2. The Confirm Install of the Operating System screen is displayed on the console screen, as shown in Figure 7-60 on page 305. Press Enter to continue the installation process.
Figure 7-60 Confirm installation 3. The Select a Language Group screen displays the primary language preselection, as shown in Figure 7-61 on page 306. This value should match the Language feature number that is printed on the installation media. The following URL provides the Language feature codes: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/index.jsp?topic=/rzahc/rzahcnlvfeaturecodes.
Figure 7-61 Select language feature 4. Type your choice and press Enter to continue. The Confirm Language Feature Selection screen appears on the console, as shown in Figure 7-62 on page 307. Press Enter to confirm and continue.
Figure 7-62 LIC install confirm language 5. The Licensed Internal Code IPL in Progress screen appears on the console, as shown in Figure 7-63 on page 308. No administrator action is required.
Figure 7-63 IPL in progress The Install the Operating System screen appears on the console, as shown in Figure 7-64 on page 309. 6. Change the date and time values to the appropriate settings. You must use the 24-hour clock format to set the current time.
Figure 7-64 Set date and time 7. Figure 7-65 on page 310 shows an example of a status display in the operator console during the installation process. No further action is required. Note that the display will be blank for a while between Installation Stage 4 and 5.
Figure 7-65 Installation status 8. When the Sign On screen is displayed, as shown in Figure 7-66 on page 311, the base installation of the IBM i 6.1 Operating System is finished.
Figure 7-66 Sign On screen At this stage, the IBM i 6.1 system is ready to use. Information about installing libraries or Licensed Program Products and system configuration is beyond the scope of this book. For detailed software installation information, refer to the following Web site: http://publib.boulder.ibm.com/infocenter/systems/scope/i5os/index.jsp?topic=/rzam8/rzam81.htm 7.4.1 IBM i 6.
For IBM i 6.1, IBM i5/OS, or OS/400® Operating Systems, fixes are available. To obtain an IBM i 6.1 fix overview for downloading:
– Select System i in the Product family field.
– Select IBM i, i5/OS, and OS/400 in the Product field.
– Select one of the following options in the Ordering option field:
  – Groups, Hyper, Cumulative fixes
  – Individual fixes
  – Search for fixes
– Select, for example, V6R1 in the OS level field to obtain fixes for the actual IBM i Operating System version.
The IBM Systems Navigator for i provides a graphical interface to manage a BladeCenter JS23/JS43 server or Power Systems, as shown in Figure 7-67. Figure 7-67 IBM Systems Navigator for i More detailed information about the IBM Systems Director Navigator for i functionality can be found at: http://www.ibm.com/systems/i/software/navigator/index.html or in Managing OS/400 with Operations Navigator V5R1 Volume 1: Overview and More, SG24-6226.
7.5 IBM i 6.1 Backup/Restore There are two different methods to perform a backup or restore of an IBM i partition. Important: The virtualized DVD-ROM drive in the chassis cannot be used for IBM i 6.1 backups, because it is not writable. One method is to use file-backed space provided as a virtual optical device. Once the file has been created, it can be written to any BCH or BCS supported SAS tape device. Another method is to use a virtual tape device backed by a SAS tape drive that is virtualized by VIOS.
IBM i 6.1 restore - virtual optical device Performing a restore follows the same 2-stage process in reverse: 1. The virtual media image file is restored from the SAS tape drive onto VIOS disk using the VIOS command restore. The image file is then mounted on the correct virtual optical drive assigned to the IBM i 6.1 partition and becomes available as a volume from which to restore. 2. A standard IBM i 6.1 restore is performed from the volume using a restore command or BRMS.
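As a sketch of the VIOS side of this process, assuming the image file is named ibmi_save.iso in /var/vio/VMLibrary and the SAS tape drive appears as /dev/rmt0 (all three names are assumptions), the AIX backup and restore commands can be run from the root shell reached with oem_setup_env:

# write the media image file to tape by name
find /var/vio/VMLibrary/ibmi_save.iso -print | backup -ivqf /dev/rmt0
# later, read the image file back onto VIOS disk
restore -xvqf /dev/rmt0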
2. A standard 6.1 save command or BRMS is used to perform a save on the tape device (tap0x). If autocfg is on, the tape device will configure as a 3580 model 004. IBM i 6.1 restore - virtual tape device Performing a restore follows the same 2-stage process. 1. Ensure the virtual tape device is assigned to the partition you are performing the restore on. To change or view the assignment, use the View/Modify Virtual Storage task, then select the Optical/tape tab.
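For illustration, such a save and the matching restore might look like the following CL commands, where the library name MYLIB is hypothetical and TAP01 is the device name IBM i typically assigns when autocfg is on:

SAVLIB LIB(MYLIB) DEV(TAP01)
RSTLIB SAVLIB(MYLIB) DEV(TAP01)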
Figure 7-69 IVM Create Storage Pool 3. Enter a name for the storage pool (in our case the internal disk in the BladeCenter S disk module was used), or in a SAN environment, a predefined LUN. Click OK to continue. 4. To create the virtual media library click the Optical Devices Tab and select Create Library. 5. Select the name of the new storage pool and enter an appropriate size for the media library. Select OK to continue. 6.
Figure 7-70 IVM Create blank media 7. Select Create blank media and enter a meaningful Media Device name and an appropriate size for the new volume. Ensure the media type is set for read/write. Click OK to continue. 8. The new virtual optical device should be listed in the Virtual Optical device list, as shown in Figure 7-71 on page 319.
Figure 7-71 IVM Virtual optical device created To assign the newly created virtual optical device to the IBM i 6.1 partition, select the virtual optical device and click Modify partition assignment, as shown in Figure 7-72 on page 320.
Figure 7-72 Virtual optical device assign to partition 9. Select the IBM i 6.1 partition and click OK to continue. Figure 7-73 on page 321 shows the IVM Virtual Storage Management window with the current assignment of the virtual optical device to the partition.
Figure 7-73 IVM Virtual optical device assignment done After the virtual media is mounted in the correct virtual optical device, it becomes available in the IBM i 6.1 partition. The IBM i 6.1 Operating System will not use the device name of the virtual optical device given in Integrated Virtualization Manager. On an IBM i 6.1 screen, execute the command WRKOPTVOL; the screen shown in Figure 7-74 on page 322 should appear. The virtual optical device will be identified with a time stamp volume ID.
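Once the volume is listed by WRKOPTVOL, a standard save can target the virtual optical device. As a sketch, with a hypothetical library name and the typical optical device name OPT01:

SAVLIB LIB(MYLIB) DEV(OPT01)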
Figure 7-74 Virtual optical device check device 7.5.2 Creating Virtual Media Library using IVM This section describes the process to create a virtual media library using IVM. This library is created using the IVM options and is located in the /var/vio/VMLibrary directory. Once the library has been created, you can add files such as ISO images to perform installations of partitions. 1. To begin, a storage pool needs to be created to contain the virtual optical library.
Figure 7-75 Create Storage Pool option 5. Provide a storage pool name. 6. Select the option Logical Volume Based for storage pool type. 7. Select one of the available hdisk resources to create the storage pool on. Figure 7-76 on page 323 provides an example of the storage pool name, size and hdisk selection. Figure 7-76 Selecting storage pool name, size and resource Now that the storage pool has been created the virtual media library can be created using the new storage pool. 8.
10.Expand the section Virtual Optical Media. 11.Click on Create Library. Figure 7-77 on page 324 shows an example of the Create Library option. Figure 7-77 Create Media Library 12.Define the media library size. Figure 7-78 on page 324 shows an example of the storage pool name field. Select the correct storage pool to contain your virtual media library. Figure 7-78 Select storage pool name 13.Click OK to finish. 7.5.
For example: A new installation of IBM i OS in an IBM i partition. Create ISO image files of the installation media. – Load the IBM i SLIC media in your PC CDROM – Using Record Now or another burning program, create an ISO image of the CD. Usually this is performed using a backup function. The next few graphics provide an example of using Record Now to create an ISO image of your media. Figure 7-79 on page 325 shows the option to Save Image.
Figure 7-80 Select the output destination folder Figure 7-81 on page 326 shows an example of the destination folder. Select the Save as Type option and ensure the type is set for ISO. It is not the default, so it should be changed. Figure 7-81 Change file type to .iso
Copy the ISO image file to the JS23/JS43 using FTP. The file will be copied to the /home/padmin directory. Make sure to use image mode when copying the file with FTP; this transfers the file in binary format. Move (mv) the .iso file from /home/padmin to /var/vio/VMLibrary. You will need to use oem_setup_env to escape the VIOS restricted shell environment to be able to use the mv command. It is also recommended to change the file name so it is easier to identify the files.
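A sketch of the transfer and move, with a hypothetical VIOS address and image file name:

C:\> ftp 192.168.1.100
ftp> binary
ftp> put i_base_01.iso
ftp> quit

$ oem_setup_env
# mv /home/padmin/i_base_01.iso /var/vio/VMLibrary/ibmi61_dvd1.iso

Renaming the file during the move, as suggested above, makes the image easier to identify in the media library later.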
Figure 7-82 Blade Task - Remote Control 4. Once the task Remote Control has been selected, use the Start Remote Control button. This will invoke a Java™ window. Figure 7-83 on page 328 shows an example of the Start Remote Control button. Figure 7-83 Start Remote Control 5. Once the Java interface has started, select the Remote Drive option. Figure 7-84 on page 328 shows an example of the Java interface for remote control.
6. After selecting the Remote Drive option, you will see the Remote Disk window appear. Select the CD ROM and/or Select Image option. You can use either or both. Figure 7-85 on page 329 provides an example of the Select Image option. Figure 7-85 Select image option 7. Select the Add button. You will then be able to browse for the specific file you want to add as shown in Figure 7-86 on page 329. Figure 7-86 Browse and select file
After the file has been added it will appear under the Selected Resources list. Figure 7-87 on page 330 provides an example of this view. Figure 7-87 File added to Selected Resources list 8. To add the CDROM, select the CDROM listed and click the Add button. It will then be listed under the Selected Resources list. 9. After all selections have been made click on the Mount all button. This will add your resources to the AMM and make them available to the blade that has the media tray selected.
Figure 7-89 on page 331 shows an example of the resources added using the above process. Figure 7-89 New physical optical devices 7.5.5 IBM Tivoli Storage Manager Starting with Integrated Virtualization Manager V1.4, you can install and configure the IBM Tivoli® Storage Manager (TSM) client on the Virtual I/O Server (VIOS). With IBM Tivoli Storage Manager, you can protect your data from failures and other errors by storing backup and disaster recovery data in a hierarchy of offline storage.
Providing details of configuring and using the IBM Tivoli Storage Manager client and server is beyond the scope of this book. For detailed information about how to configure and manage the VIO Server as an IBM TSM client, refer to: http://publib.boulder.ibm.com/infocenter/systems/scope/hw/topic/iphb1/iphb1tivagents.htm?resultof=”tivoli” For more technical information about integrating IBM Tivoli Storage Manager, refer to PowerVM Virtualization on IBM System p Managing and Monitoring, SG24-7590. 7.5.
6. Execute the command PWRDWNSYS in the command line, then use F4 to prompt for options as shown in Figure 7-90 on page 333. Change the Controlled end delay time to 300. Press Enter when ready to power down the partition. Figure 7-90 IBM i power down partition 7. Confirm the shutdown action by pressing F16. 8. This process can take a while. Check the Integrated Virtualization Manager (IVM) window for the message Not Activated in the State column of the IBM i partition. Start an IBM i 6.
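For reference, the prompted power-down in step 6 corresponds to a single CL command. This sketch uses the 300-second controlled end delay from the text; the restart option shown is an assumption:

PWRDWNSYS OPTION(*CNTRLD) DELAY(300) RESTART(*NO)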
8 Chapter 8. Red Hat Enterprise V5.3 Linux installation This chapter describes the procedures to install Red Hat Enterprise Linux V5.3 on a JS23 BladeCenter. We discuss the following topics: “Supported Red Hat operating systems” on page 336 “Linux LPAR installation using DVD” on page 337 “Linux network installation (detailed)” on page 341 “Native Red Hat Enterprise Linux 5.3 installation” on page 353 “Red Hat Enterprise Linux 5.
8.1 Supported Red Hat operating systems Red Hat Enterprise Linux for POWER Version 4.6 or later and Red Hat Enterprise Linux for POWER Version 5.1 or later support installation on a JS23. This chapter specifically covers installing Red Hat Enterprise Linux for POWER Version 5.3 with a DVD and over the network on a PowerVM logical partition (LPAR). 8.1.1 Considerations and prerequisites There are some system configuration considerations and prerequisites prior to installing Red Hat Enterprise Linux 5.
In addition, ensure there is enough unpartitioned disk space or have one or more partitions that can be deleted to free up disk space for the Linux installation. The Red Hat Recommended Partitioning Scheme is available at: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.3/html/Installation_Guide/ch11s03.html 8.2 Linux LPAR installation using DVD With PowerVM installed and the system partitioned into LPARs using the PowerVM LPAR considerations and Red Hat Enterprise Linux 5.
Important: The other option is to press the MT button on the blade to assign the media tray to the blade. Make sure no other blade in the BladeCenter is using the media tray before pressing this button. A blade's MT light is on if the media tray is assigned to it. 4. Double-check that your blade bay owns the media tray by opening the AMM window and selecting Monitors → System Status. The right window will show a “check mark” in the MT column of your blade bay location.
c. Click the drop-down arrow to the right of the More Tasks field and select Open terminal window. Important: Make sure the latest Java Runtime Environment (JRE™) is installed on the native system to run the IVM terminal. At the time of this publication, the recommended JRE is Sun’s JRE 1.4.2_19, or higher. Note: Even though this section covers installation via the Integrated Virtualization Manager (IVM) console, there are other console options available on the JS23.
Figure 8-5 SMS menu a. Select 1 = SMS Menu by pressing the number 1 on the keyboard. Tip: Press the number next to the desired system function to select and navigate through the SMS menu. b. Select option 5. Select Boot Options. c. Choose option 1. Select Install/Boot Device. d. Pick 3. CD/DVD. e. Select 6. USB. f. Finally, select 1. USB CD-ROM. g. Choose 2. Normal Mode Boot. h. Pick 1.Yes to exit the SMS menu. i. At the boot: prompt press the Enter key.
using this media, we highly recommend running the media check. Once the media check is complete, Anaconda will assist with the completion of the install. More detailed installation instructions are available here: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.3/html/Installation_Guide/pt-install-info-ppc.html 8.
b. Click Activate, as shown in Figure 8-6. Figure 8-6 Activating an IVM partition c. Click the drop-down arrow to the right of the More Tasks field (Figure 8-7) and select Open terminal window. Figure 8-7 Opening a terminal window from the IVM The console is a pop-up and it will ask you to authenticate with the PowerVM User ID and password. 2. The SMS menu appears in the IVM terminal (Figure 8-8 on page 343).
Figure 8-8 SMS menu a. Select 1 = SMS Menu by pressing the 1 key on the keyboard. Tip: Press the number next to the desired system function to select and navigate through the SMS menu. b. Choose 5. Select Boot Options. c. Choose 1. Select Install/Boot Device. d. Choose 6. Network. e. Pick a specific network port. f. Choose 2. Normal Mode Boot. g. Pick 1.Yes to exit the SMS menu.
You will notice the packet count value increasing. After the complete boot image is uploaded, the system boots off of it to show the Red Hat Enterprise Linux 5.3 welcome screen, shown in Figure 8-9. Figure 8-9 Red Hat Enterprise Linux 5.3 welcome screen 3. Select the language to use during the install process. In this example we are using English. Then press the Tab key to move to the OK button and then press the Space bar to confirm.
Figure 8-10 Select network device Note: This step appears only when running Anaconda on machines with more than one network card. The Identify option can be used to find the physical port for the selected interface by flashing the LED lights of the corresponding physical port for a number of seconds. 6. To configure DHCP, select either IPv4 or IPv6 support and then Dynamic IP configuration (DHCP) from the TCP/IP window. Then select OK. See Figure 8-11 on page 346 for more details and skip steps 7 and 8.
Figure 8-11 TCP/IP configuration panel 7. In the next panel, configure the LPAR's IPv4 address, subnet mask, gateway, and name server. An example configuration is shown in Figure 8-12. Figure 8-12 TCP/IP configuration of IP address, gateway, and name server 8. In the NFS Setup window in Figure 8-13 on page 347, enter the IP address of the NFS server and, in the field directly below that, enter the NFS directory that contains the Red Hat Enterprise Linux 5.3 install image. 9.
Figure 8-13 NFS server configuration window panel 10.In this step it is possible to start a Virtual Network Computing (VNC) server and continue the installation from Anaconda's graphical interface, but for this example we'll continue with the text mode interface, as shown in Figure 8-14. Figure 8-14 Select between VNC or text installation modes panel 11.Approximately one minute later the Welcome to Red Hat Enterprise Linux Server message panel appears. Select OK. 12.
Figure 8-15 Installation number panel Note: If you skip entering the Installation number, then you will only have the basic packages to select from later on. In this case, a warning will be presented and you’ll need to select Skip to proceed. 13.Select the disk partitioning type for this installation. In this scenario, we have selected the option Remove all partitions on selected drives and create a default layout.
Figure 8-16 Select Partitioning Type panel 14.A warning appears asking if the selection is OK. Press Yes to confirm. 15.Select Yes to review the suggested disk partition layout. 16.Review the allocated size for swap, ext3 file system, and /boot, as shown in Figure 8-17 on page 350. Press OK to confirm.
Figure 8-17 Review Partitioning panel Note: This configuration can only be edited by a graphical installer such as Virtual Network Computing (VNC). This cannot be done from the IVM terminal, so only the default values selected by the Anaconda Installer are allowed. 17.Press OK on the Network Configuration panel. The default is fine because this was already set up in Figure 8-12 on page 346. 18.Press OK for the Miscellaneous Network Setting window. The gateway and primary DNS are already configured. 19.
Figure 8-18 Select additional packages panel Note: These packages can be installed later using yum from the command line if you skip this step during the installation. 23.Press OK to allow the installation to begin. The next window has two progress bars: one for the package currently being installed and another detailing the overall progress of the installation. Figure 8-19 Installation progress window
24.Press Reboot after the Install Complete window appears, as shown in Figure 8-20. Figure 8-20 Installation complete panel Note: If the LPAR does not automatically boot from the intended hard disk (boot device) after reboot, try this: a. Shut down and reactivate the LPAR from the IVM. b. Enter the SMS Menu. c. Select 5. Select Boot Options → 1. Select Install/Boot Device → 5. Hard Drive → 9. List All Devices. d. Choose the appropriate hard disk with the Linux image from the given list. e. Select 2.
25.During boot the Setup Agent window appears (Figure 8-21). You can modify any of the fields if desired or press Exit to finish booting the LPAR. Figure 8-21 Setup Agent panel The Red Hat Enterprise Linux 5.3 login prompt appears, as shown in Figure 8-22. The installation is complete. Figure 8-22 Finished Red Hat Enterprise Linux 5.3 installation 8.4 Native Red Hat Enterprise Linux 5.3 installation A native Red Hat Enterprise Linux 5.
graphical display (via Blade Center’s KVM), as an alternative. See Appendix A, “Consoles, SMS, and Open Firmware” on page 493 for more information. Use the SOL console to display the SMS menu and the Anaconda options during the installation. The resource allocation of processors, I/O adapters, memory, and storage devices in a native environment is fixed. Virtualization functions and features are not available. 8.5 Red Hat Enterprise Linux 5.
firewall --enabled --port=22:tcp
authconfig --enableshadow --enablemd5
selinux --enforcing
timezone --utc America/New_York
bootloader --location=partition --driveorder=sda --append="console=hvc0 rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --all --drives=sda
#part prepboot --fstype "PPC PReP Boot" --size=4 --ondisk=sda
#part /boot --f
-sysreport

Notice that all of the partition information is commented out with a # symbol. This section needs to be uncommented and edited to support the partition schemes of systems that will use the automated Kickstart install process. The automated Kickstart process will not work without these edits. 8.5.2 Create Kickstart file using Kickstart Configurator In this section, we use the Kickstart Configurator tool with a graphical interface to demonstrate how to create a basic Kickstart text file.
rectangle in Figure 0-1. It is also important to define a root password to enable SSH login after installation. This password is encrypted in the configuration file. Figure 0-1 Kickstart main window with Basic Configuration panel (©2008 Red Hat, Inc.)
3. In the Installation Method panel (shown in Figure 0-2), all the basic parameters for a network installation using NFS are shown. Figure 0-2 Installation Method panel (©2008 Red Hat, Inc.)
4. The next editable panel is the Partition Information panel, shown in Figure 0-3. Press Add to create a partition. The tool will help you select the mount point, file system type, and partition size. Figure 0-3 Partition Information panel (©2008 Red Hat, Inc.)
5. In the Network Configuration panel, click Add Network Device to add the devices you are installing from. If you need to go back and make changes to this setup, click Edit Network Device (see Figure 0-4). Figure 0-4 Kickstart Configurator Network Configuration panel (©2008 Red Hat, Inc.)
6. The next panel is the Authentication panel. In this configuration, we use the default settings. 7. Figure 0-5 shows the Firewall Configuration panel. As an example, it is good to enable SSH and to trust interface eth1 at the very minimum to access the system later using the network. Figure 0-5 Firewall Configuration panel (©2008 Red Hat, Inc.)
8. Figure 0-6 shows the Package Selection panel. It is not possible to select individual packages from this panel. However, you can add individual packages to the %packages section of the Kickstart file after saving it. Note: If you see the message “Package selection is disabled due to problems downloading package information” in the Package Selection panel, it means you have no repositories defined.
Example: 0-1 Basic Kickstart configuration file
#platform=IBM pSeries
# System authorization information
auth --useshadow --enablemd5
# System bootloader configuration
bootloader --location=mbr
# Clear the Master Boot Record
zerombr
# Partition clearing information
clearpart --all --initlabel
# Use text mode install
text
# Firewall configuration
firewall --enabled --trust=eth0,eth1
# Run the Setup Agent on first boot
firstboot --disable
# System keyboard
keyboard us
# System language
lang en_US
# Installat
@office
@graphical-internet

11.Manually adjust the Kickstart configuration file that you have created with a text editor if desired. Note: If you have not defined any disk partition options or you were unsure of your disk partition layout, we recommend that you manually edit the Kickstart file to include the following information after the #Partition clearing information section:
#Disk partitioning information
autopart
This option will automatically create disk partitions.
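If you prefer explicit partitions over autopart, the commented section of the sample file shown earlier can be uncommented and adapted instead. The following is only a sketch: the sda drive name mirrors that sample, and all sizes are assumptions to adjust for your disks:

#Disk partitioning information
clearpart --all --drives=sda
part prepboot --fstype "PPC PReP Boot" --size=4 --ondisk=sda
part /boot --fstype ext3 --size=200 --ondisk=sda
part swap --size=2048 --ondisk=sda
part / --fstype ext3 --size=1 --grow --ondisk=sda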
1 = SMS Menu                          5 = Default Boot List
8 = Open Firmware Prompt              6 = Stored Boot List

Memory    Keyboard    Network    SCSI    Speaker    ok
0 > _
Figure 8-23 Open Firmware prompt
2. Type the following command in the Open Firmware prompt to start automated installation. For example, if the configuration file is served using NFS:
boot net ks=nfs://192.168.1.254/ks.cfg ksdevice=eth1 ip=dhcp
Press the Enter key and the process will begin. The automated Red Hat Enterprise Linux installation is now complete.
9 Chapter 9. SUSE Linux Enterprise Server V11 installation This chapter describes the procedures to install SUSE Linux Enterprise Server (SLES) V11 on a JS43 BladeCenter.
9.1 Supported operating systems SUSE Linux Enterprise Server 10 Service Pack 1 (SLES 10 SP1) for POWER or later supports installation on a JS43. This chapter specifically covers installing SUSE Linux Enterprise Server 11 for POWER with a DVD and over the network on a PowerVM LPAR. 9.2 Considerations and prerequisites There are some system configuration considerations and prerequisites prior to installing SLES 11 on a JS43 partition. They are covered here. 9.2.
In addition, the SLES 11 installation guide suggests having at least 1.5 GB of hard disk space or one or more hard disk partitions that can be deleted to free up the minimum disk space for the Linux installation. Tip: We recommend 10 GB or more total hard disk space for each PowerVM LPAR. The Novell Web site has additional installation preparation information for SLES 11 available at: http://www.novell.com/documentation/sles11/index.html 9.
Figure 9-1 Start Remote Console panel 3. Press Refresh. Note: The other option is to press the MT button on the blade to assign the media tray to the blade. Important: Make sure no other blade in the BladeCenter is using the media tray before pressing this button. The blade's MT light is on if the media tray is assigned to it.
Figure 9-3 Activating an IVM partition c. Click the drop-down arrow to the right of the More Tasks field and select Open terminal window. Important: Make sure the latest Java Runtime Environment (JRE) is installed on the native system to run the IVM terminal. At the time of this publication, the recommended JRE is Sun’s JRE 1.6.0_13, or higher. Figure 9-4 Opening a terminal window from the IVM The console is a pop-up and it will ask you to authenticate with the PowerVM User ID and password. 7.
Figure 9-5 SMS Menu a. Select 1 = SMS Menu by pressing the number 1 on the keyboard. Note: Press the number next to the desired system function to navigate through the SMS menu. b. Select option 5. Select Boot Options. c. Choose option 1. Select Install/Boot Device. d. Pick 3. CD/DVD. e. Select 6. USB. f. Finally, select 1. USB CD-ROM. See Figure 9-6 on page 372.
g. Choose 2. Normal Mode Boot. See Figure 9-7 on page 373. Figure 9-7 Select Mode Boot h. Pick 1.Yes to exit the SMS menu. i. At the Linux boot: prompt, type install (see Figure 9-8 on page 373), then press Enter to confirm. The LPAR will start reading from the DVD, which can take a couple of minutes. Figure 9-8 Select installation type 8.
9.4 Linux network installation (detailed) This section describes a Network File System (NFS) installation on a PowerVM LPAR using an external Storage Area Network (SAN) device.
Important: Make sure the latest Java Runtime Environment (JRE) is installed on the native system to run the IVM terminal. At the time of this publication, the recommended JRE is Sun’s JRE 1.6.0_13, or higher. Figure 9-10 Opening a terminal window from the IVM The console is a pop-up and it will ask you to authenticate with the PowerVM User ID and password. 2. The firmware boot panel appears in the IVM terminal.
Figure 9-11 SMS Menu a. Select 1 = SMS Menu by pressing the number 1 on the keyboard. Note: Press the number next to the desired system function to select and navigate through the SMS menu. b. Choose 5. Select Boot Options. c. Choose 1. Select Install/Boot Device. d. Choose 6. Network. e. Pick 1. BOOTP. f. Choose a network port. g. Choose 2. Normal Mode Boot. h. Pick 1.Yes to exit the SMS menu.
Figure 9-12 Main Menu Tip: Press the number next to the desired configuration option and then the Enter key to select it in the Main Menu window. The Enter key alone will move you back to the previous option window. 2. Choose 2) Kernel Modules (Hardware Drivers), as shown in Figure 9-13 on page 377. Figure 9-13 Expert 3. Choose 1) Load ppc Modules, as shown in Figure 9-14 on page 378.
Figure 9-14 Load ppc Modules 4. Select each individual module to pre-install based on your LPAR’s network configuration. Press the number next to the module name and then the Enter key, then press the Enter key again to confirm. Tip: Use the up/down scroll bar on the IVM terminal to navigate the module list. The most commonly used modules are 5) e1000 : Intel PRO/1000, 15) ehea : EHEA and IBMVETH. 5. Press the Enter key after you have finished loading the modules to go back to the main menu. 6.
– LPAR’s IP address
– LPAR’s netmask
– LPAR’s gateway
– LPAR’s name server
– The NFS server’s IP address
– The directory on the NFS server that contains the SLES 11 image
Figure 9-16 shows a sample configuration. Figure 9-16 Static network configuration example
The LPAR begins reading from the SLES 11 image directory and then displays the Yet another Setup Tool (YaST) Welcome panel, as shown in Figure 9-17 on page 380. Figure 9-17 YaST Welcome panel Tip: Navigate the YaST tool by using the Tab key to move between sections, the up/down arrow keys to move within a specific window section, the space bar to check a “( )” entry with an “x,” the Enter key to confirm a selection with square brackets “[ ]” around it, and the Delete key to erase entries. 10.
Figure 9-18 Installation Mode 13.Configure your clock and time zone information, as shown in Figure 9-19 on page 381. Figure 9-19 Clock and Time Zone
14.The Installation Settings window provides the Keyboard layout, Partitioning information, Software installation options, and the install Language configuration. Select the [Change...] option to edit any of these fields. Select [Accept] when these settings are complete, as shown in Figure 9-20 on page 382. Figure 9-20 Installation Settings 15.Select [I Agree] to the AGFA Monotype Corporation License Agreement, as shown in Figure 9-21 on page 383.
Figure 9-21 AGFA License Agreement 16.Choose [Install] to start the installation, as shown in Figure 9-22 on page 383. Figure 9-22 Confirm Installation
The YaST window refreshes to the installation progress bars, as shown in Figure 9-23. The top status bar shows the progress YaST has made installing a specific package and the bottom is the progress of the entire installation. The system will reboot after the installation completes. Figure 9-23 YaST installation progress window Note: If the LPAR does not automatically boot from the intended hard disk (boot device) after reboot, try this: Shut down and reactivate the LPAR from the IVM.
Figure 9-24 Confirm hardware detection window 18.Boot the system. See Figure 9-25 on page 385. Figure 9-25 Reboot now
19.Enter the root user’s password. Press [Next] to confirm, as shown in Figure 9-26 on page 386. Figure 9-26 root User Password 20.Provide the hostname and the domain. Press [Next] to confirm. See Figure 9-27 on page 387.
Figure 9-27 Hostname and Domain Name 21.Select Use Following Configuration in the Network Configuration window (Figure 9-28 on page 388) and verify that the Firewall is marked as enabled. Press the Tab key to [Change...] to change the Secure Shell (SSH) port settings to open.
Figure 9-28 Change network configuration a. Select Firewall as shown in Figure 9-29.
b. Scroll to Allowed Services. c. Find and highlight SSH in the new window, as shown in Figure 9-30 on page 389. Finally, press Enter to confirm. Figure 9-30 Services to allow list and selecting SSH service d. Press the Tab key to highlight [Add] and then press Enter to confirm. e. SSH will appear in the Allowed Service list, as shown in Figure 9-31 on page 390. Press [Next] to confirm.
Figure 9-31 Allowed Service Secure Shell Server (SSH) f. Now the Firewall section of the Network Configuration window (Figure 9-32) shows “SSH port is open.”
22.Test the Internet connection, if desired. 23.Change the Certification Authority (CA) Installation setting, if desired. Select [Next] to confirm the changes. 24.Select the user authentication method appropriate for this LPAR and select [Next]. See Figure 9-33 on page 391. Figure 9-33 User Authentication Method 25.Create a local user and select [Next]. See Figure 9-34 on page 392.
Figure 9-34 New Local User 26.YaST will write the configuration settings and then display the Release Notes. Choose [Next] after reading the release notes. 27.Configure Hardware (Printers) if desired, then confirm the described configuration with [Next]. 28.YaST displays the Installation Completed window (Figure 9-35). Select Clone This System for Autoyast (see “SLES 11 automated installation” on page 395 for more information) if desired and then select [Finish].
Figure 9-35 Installation completed window 29.Log in to the system with the new user, as shown in Figure 9-36 on page 394.
Figure 9-36 Login screen 9.5 Native SLES 11 installation A native SLES 11 installation of a JS43 blade follows a similar process to those given in the VIOS LPAR installation sections. However, there are some key differences: In a native installation, the IVM terminal is no longer available to complete the Linux installation, but you can use the Serial Over LAN (SOL) console as an alternative. See Appendix A, “Consoles, SMS, and Open Firmware” on page 493 for more information.
9.6 SLES 11 automated installation SuSE has an automated installation functionality known as Autoyast to install multiple systems in parallel. The system administrator performs an Autoyast automated installation by creating a single file containing answers to all the questions normally asked during a SuSE installation. This file resides on a single server system and multiple clients can read it during installation.
1 = SMS Menu                          5 = Default Boot List
8 = Open Firmware Prompt              6 = Stored Boot List

Memory    Keyboard    Network    SCSI    Speaker    ok
0 > _
Figure 9-37 Open Firmware prompt
2. Type the following command in the Open Firmware prompt to start automated installation. For example, if the profile is served using NFS:
boot net autoyast=nfs://193.200.1.80/home/autoinst.xml install=nfs://192.168.1.
10 Chapter 10. JS23 and JS43 power management using EnergyScale technology The EnergyScale technology described in 3.4, “IBM EnergyScale technology” on page 47 can be used by the BladeCenter Advanced Management Module and Active Energy Manager (AEM) to monitor and control power usage of the IBM BladeCenter JS23 and JS43 blades. This chapter describes how to use the BladeCenter AMM and Active Energy Manager extension of IBM Systems Director to utilize these features.
10.1 Power management through the AMM The IBM BladeCenter Advanced Management Module (AMM) provides a Web-based and command line user interface to monitor and control individual blades and switch modules installed in the BladeCenter. The AMM also collects historical or trend data for individual components in the IBM BladeCenter. This data can be reviewed from the user interface. The information can also be collected by the Active Energy Manager extension for IBM Systems Director.
Figure 10-1 BladeCenter Power Domain Summary Scrolling the page down below the Blade Chassis Power Summary will provide access to the acoustical settings for the chassis, power consumption history and links to view the thermal and power trending history for some of the chassis components. An example of the options is shown in Figure 10-2 on page 400 and Figure 10-3 on page 400.
Figure 10-2 Additional power settings Figure 10-3 Chassis thermal and trending options Selecting the Power Management Policy link (number 1 as shown above) will allow the user to select three different management policies. Figure 10-4 on page 401 shows an example of this option. There are three different selections that can be applied to manage the power domain. As mentioned above, in the BCH there are two power domains. Each domain can set this policy separately and they do not need to match.
The Power Module Redundancy option is used when only one AC source is present. One AC source in this case means the electrical grid. For example, the BCH has two line cord inputs. Each is capable of connecting to its own AC power source. If the two line cords attach to the same power grid, it is considered a single AC source. It is possible to have a data center wired so that each AC line cord of the BCH could be plugged into a separate power grid or AC source.
capabilities or collect power trend data appear as a link to a module-specific detail view. Figure 10-5 on page 402 provides an example of this selection. Figure 10-5 Power Domain Details Selecting a component such as a blade will allow you to set some of the power management options. Figure 10-6 on page 403 shows the options available for a blade that is capable of power management. In this panel you can see what the blade power capabilities are.
Figure 10-6 Blade power configuration settings Power capping is used to allow the user to allocate less power and cooling to a system. This can help save on datacenter infrastructure costs, and then potentially allow more servers to be put into an existing infrastructure. To enable the Power Capping option, use the pull-down menu and select Enable. Then you will be able to set a cap level using the Maximum Power Limit range box. This value will limit the power usage to the value specified.
Figure 10-7 Bladeserver trend data 10.1.2 Using the AMM CLI UI for blade power management Similar to the Web UI, the CLI can be used to display power domain and specific module information. The AMM CLI can be accessed by either a Telnet or SSH connection to the IP address of the AMM. The login is completed by using the same user ID and password that is used for the Web UI.
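As a sketch, assuming the factory-default USERID account and a hypothetical AMM address, an SSH login lands at the system> prompt used in the examples that follow:

$ ssh USERID@192.168.70.125
USERID@192.168.70.125's password:
system>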
-pme: power management and capping enabling for blades (off, on). Note: the blade must be powered on before enabling capping.
-ps: power saver mode for blades (off, on). Note: the blade must be powered on before enabling power saver mode.
-pt: power trending data (1, 6, 12, or 24 hours)
-tt: thermal trending data (1, 6, 12, or 24 hours)
Example 10-2 shows the fuelg command used from the system> prompt with no flags to display the BladeCenter Power Domain information.
Example 10-3 The env command used to set a persistent target
system> env -T blade[4]
OK
system:blade[4]>
Example 10-4 shows the fuelg command with no other parameters being used to display the capabilities, current settings, and power consumption values of the blade in BladeCenter slot 4.
-ps on
PM Capability: Dynamic Power Measurement with capping and power saver mode
Effective CPU Speed: 3440 MHz
Maximum CPU Speed: 3800 MHz
-pcap 256 (min: 256, max: 282)
Maximum Power: 139
Minimum Power: 139
Average Power: 139
Power trend data for the last hour was reviewed using the fuelg -pt 1 command shown in Example 10-6.
most instances multiple paths to the same options in AEM. The AEM redbook that will be created shortly after this publication will cover these options in greater detail. The following information and examples assume that IBM Systems Director and the Active Energy Manager extension have been installed and configured. Complete planning, installation, configuring, and usage information of IBM Systems Director can be found in: www.redbooks.ibm.com/redpieces/abstracts/sg247694.html.
Figure 10-8 Director menu options Once AEM has been selected you will have the options available in Figure 10-9 on page 410. In this example we have four resources that can be managed by AEM. One of the resources is the BCH chassis and the other three are bladeservers within the chassis. Note: When a JS43 is present in the chassis, the AMM may have problems reporting the JS43 BladeServer to AEM. To correct this issue be sure that the AMM firmware level is at BPET48F or higher.
Figure 10-9 Active Energy Manager options 10.2.2 AEM Energy Properties Using the check box you can select the resource to work with. Figure 10-10 on page 411 shows an example of selecting the BladeCenter Chassis and then using the Actions button to select the Properties option as displayed in Figure 10-11 on page 411.
Figure 10-10 Select resource Figure 10-11 Actions options The tabs in the properties view show information about the selected resource. Clicking the Active Energy tab allows you to view the data available about the chassis, as shown in Figure 10-12 on page 412.
Figure 10-12 Properties - Active Energy tab Using the Edit tab you can modify the energy price and metering values. This data can then be used to estimate the cost of the power used by the chassis. Figure 10-13 on page 412 shows an example of the values available to edit.
10.2.3 BladeCenter Energy Properties In this next section we look at the energy management options available on the JS23/JS43. Using AEM you can configure power capping and power saver mode, and view trend data for the bladeserver. Enabling Power Capping To enable power capping on the bladeserver, use AEM and select the desired blade resource. Using the Actions button select Energy, then Manage Power, and finally Power Capping, as shown in Figure 10-14 on page 413.
to save your settings. An example of the power capping options is shown in Figure 10-15 on page 414. Figure 10-15 Power Capping options Figure 10-16 on page 414 shows an example of the power capping features enabled for the bladeserver. Figure 10-16 Power capping enabled Enabling Power Savings To enable power savings on the bladeserver, use AEM and select the desired blade resource. Using the Actions button select Energy, then Manage Power, and finally Power Savings, as shown in Figure 10-17 on page 415.
Figure 10-17 Power Savings option The power savings options are as follows:
No power savings - choose this option to have no power savings. The processor runs at high speed.
Static power savings - choose this option to reduce power usage by lowering processor speed. This option saves energy while maintaining reasonable processor performance.
Dynamic power savings - choose this option to automatically balance power usage and processor performance.
Figure 10-18 Power Savings options Viewing BladeServer JS23/JS43 Trend Data Using AEM you can view trend data for the JS23/JS43. Trend data provides details on power usage, capping values, and informational events. This data can be charted for the last hour up to the last year, in different intervals. Figure 10-19 on page 416 shows an example of selecting the Trend Data details.
Figure 10-20 Trend Data display In the trend data panel you can view various power details. Use the pull-down menu to change the time period, or click the Custom Settings link to change the values. Click Refresh Trend Data to see your changes. Scrolling down in the trend data display shows information on environmental data such as temperature. Chart data can be modified as well using the Options link.
Figure 10-21 Trend data chart options Information events, indicated by an icon, display details about the event when you mouse over the icon. In the example in Figure 10-22 on page 418 you can see that a mode change was made on a resource. Figure 10-22 Information event details Trend data may also be exported to your Director Server file system. Use the export option and save the file in your preferred location.
page 419 provides an example of this option. The file is then viewable using a spreadsheet program such as Excel®. Figure 10-23 Export data Energy Cost Calculator Active Energy Manager has a calculator that can help determine the cost of energy for the monitored resource. Use the options Energy, then Energy Cost Calculator, to use this function. Figure 10-24 on page 420 shows the option to select.
Figure 10-24 Energy calculator option Use the cost properties link to set the energy cost, currency type, and other values. Click OK to save the properties. Figure 10-25 on page 420 displays an example of the properties options.
Select the Calculate Energy Cost button to see the data. Figure 10-26 on page 421 shows an example of the data displayed. Figure 10-26 Calculated energy cost 10.2.4 Creating Power Policies AEM supports the creation and application of power policies to manage energy across a group of systems. This feature allows you to create an energy policy and deploy the policy across a group or individual supported systems with minimal effort. While IBM Systems Director is running, the power policies will be enforced.
Figure 10-27 Work with power policies Selecting the option to Work with power policies brings up the screen shown in Figure 10-28 on page 422. From this screen you can view policies, launch a wizard to create policies, and edit and delete policies. You will use this same interface to apply and remove policies once they have been created. To begin, define a target or group of targets for the power policy to act on. Use the Browse button to begin the target selection.
Figure 10-29 Group Select Figure 10-30 Select targets Once your targets are added to the Selected box, click OK to complete your target selection. Figure 10-31 on page 424 provides an example of the targets added to the Selected box.
Figure 10-31 Selected targets added Once the targets have been defined you can begin to create a power policy by clicking on the Create Policy button as shown in Figure 10-28 on page 422. Clicking on the Create Policy button will start a wizard that will help you select the options for your policy. There are three different policy types that can be created. They are Group Power Capping, System Power Capping and System Power Savings. Within the policy you can select to turn on or turn off the feature.
Figure 10-32 Power policy wizard welcome In the next screen you provide a name and description for the policy you are creating. Figure 10-33 on page 425 provides an example of this screen. The Name field is required; the Description field is optional, but it is a good idea to describe what the policy is used for. Click Next to continue.
Figure 10-34 Power policy type Select the Group Power Capping settings by entering the value in watts, or use the pull-down to change the value to a percentage. Set the value you wish to cap the group at in the Power Cap Value field. Figure 10-35 on page 426 shows an example of this screen with values for our group. Click Next to continue. Figure 10-35 Power policy settings The final screen of the wizard provides a summary of your selections.
Figure 10-36 Power policy summary Now that the policy has been created, it can be selected for action. In the next graphic you can see the policy we created with the wizard in the last few screens as well as a few other policies we created to take action on. Figure 10-37 on page 427 provides an example of a few power policies available for actions.
Figure 10-38 Apply power policy In the next screen you can select when to apply the policy. Figure 10-39 on page 428 shows the apply now options. Figure 10-39 Run now - policy apply option You also have the option of scheduling when to run the power policy. This feature is used to apply a power policy unattended.
Figure 10-40 on page 429 provides an example of the settings to schedule a policy. Figure 10-40 Policy schedule options You can also set the system to send you an e-mail when the policy is applied; set your contact information on the Notification tab. Figure 10-41 on page 430 shows an example of the Notification tab.
Figure 10-41 Notification tab The Options tab will allow you to set which time base to use, either management server or local system time. You also have the option to allow the policy action to fail if the system is not available or run when the system becomes available. Figure 10-42 on page 431 shows an example of these settings.
Figure 10-42 Policy options tab Active Energy Manager can also be controlled through the command line interface. Many of the CLI commands are useful for IBM BladeCenter management. Information about the smcli interface can be found here:
http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/aem_410/frb0_main.html
Information about the IBM Systems Director command line interface can be found here:
http://publib.boulder.ibm.com/infocenter/systems/index.jsp?topic=/director.cli_6.1/fqm0_r_cli_smcli.
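As a small, hedged illustration only (we have not verified the full AEM subcommand set, so treat the command name as an assumption to check against the links above), the Director smcli is invoked from the management server's shell; for example, listing the installed command bundles:

smcli lsbundle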
11 Chapter 11. Performing Live Partition Mobility This chapter discusses the requirements and configuration procedures to perform Live Partition Mobility between IBM BladeCenter JS23 and JS43 blades. We cover the following in this chapter: “Requirements” on page 434 “Preparation” on page 438 “Migrating the LPAR” on page 448 Additional information on Live Partition Mobility architecture, mechanisms, and advanced topics can be found in the Redbook IBM PowerVM Live Partition Mobility, SG24-7460.
11.1 Requirements Partition mobility places certain demands on hardware, software, network, and storage configurations. These considerations need to be reviewed early in the setup of an IBM BladeCenter JS23 or JS43 to avoid reconfiguration and rework. 11.1.1 Hardware The IBM BladeCenter JS23 or JS43 requires a Fibre Channel HBA expansion card for SAN connectivity.
Figure 11-1 Management Partition Updates view From the CLI use the ioslevel command to display the VIOS version and fixpack level, as shown in Example 11-1. In this example the VIOS version is 2.1.1.0 and has not had any fixpacks installed.
Example 11-1 ioslevel command
$ ioslevel
2.1.1.0
An example of a previous release with a fixpack installed is shown in Example 11-2.
Example 11-2 ioslevel command showing fixpack installed
$ ioslevel
2.1.0.10-FP-20.1
11.1.4 PowerVM Enterprise PowerVM Enterprise Edition is an optional feature on an IBM BladeCenter JS23 or JS43 and is required to enable Partition Mobility. To determine whether this capability is available, use the lssyscfg command. Example 11-3 shows the lssyscfg command returning a value of 1 to indicate active or live partition mobility capability.
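A sketch of such a query follows; the attribute name active_lpar_mobility_capable is an assumption on our part, so check the lssyscfg documentation for your VIOS level:

$ lssyscfg -r sys -F active_lpar_mobility_capable
1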
Figure 11-2 PowerVM Enterprise key entry 11.1.5 LPAR OS versions The running operating system in the mobile partition must be AIX or Linux. The currently supported operating systems for Live Partition Mobility are:
AIX 5L V5.3 with 5300-07 Technology Level or later
AIX V6.1 or later
Red Hat Enterprise Linux Version 5.1 or later
SUSE Linux Enterprise Server 10 (SLES 10) Service Pack 1 or later
11.2 Preparation This section describes the settings and configurations that must be verified and possibly changed to prepare the local and remote VIOS systems and partitions for partition mobility. 11.2.1 VIOS (source and target) requirements We’ll start with VIOS (source and target) considerations. Memory region size The memory region size is the smallest block of memory that can be assigned to or changed in an LPAR.
Figure 11-3 Memory region size Storage and hdisk reserve policy Only physical volumes (LUNs) visible to the VIOS as an hdisk assigned to an LPAR can be used in mobile partitions. The same physical volumes must also be visible to both the local and remote VIOS systems. The reserve policy of the hdisk must be changed from the default single_path to no_reserve. The reserve policy must be changed on the hdisk from both VIOS systems.
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve
hdisk1 changed
$ lsdev -dev hdisk1 -attr | grep reserve
reserve_policy no_reserve Reserve Policy True
Note: The reserve policy cannot be changed on the source VIOS when the disks are assigned to an LPAR. The command will fail with the following message: Some error messages may contain invalid information for the Virtual I/O Server environment.
Figure 11-4 hdisk reserve policy not set correctly When the validation process is run, an error message similar to Figure 11-5 on page 442 will be displayed. This problem can be resolved by performing the following steps:
1. Shut down the mobile LPAR on the local VIOS if it is running.
2. Modify the mobile LPAR hdisk assignments on the local VIOS to none.
3. Use the chdev command to change the hdisk reserve policy to no_reserve.
4.
Figure 11-5 Partition Migration validation error message for target storage 11.2.2 Networking The mobile LPAR's external network communication must be through a Shared Ethernet Adapter (SEA). Logical ports on a Host Ethernet Adapter (HEA) or physical adapters assigned to the LPAR cannot be used and must be removed if assigned. SEA adapter creation is covered in 4.5.2, “Virtual Ethernet Adapters and SEA” on page 103.
VIOS-Neptune,active
Phobes - RHEL5-U2,inactive
Mars - AIX 6.1,active
Note: Linux partitions must have the Dynamic Reconfiguration Tools package for HMC- or IVM-managed servers installed from the Service and Productivity tools Web site at: https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html Service and Productivity tools are discussed in Appendix D, “Service and productivity tools for Linux” on page 545.
POWER6 - This mode is possible for both POWER6 and POWER6+ processor-based servers. It indicates that the operating environment for the partition is using all the standard capabilities of the POWER6 processor.
POWER6+ - This mode is possible for POWER6+ processor-based servers. It indicates that the operating environment for the partition is using all the standard capabilities of the POWER6+ processor.
POWER6 Enhanced - This mode is possible for POWER6 processor-based servers.
Figure 11-6 Processor compatibility mode on JS23/JS43 JS12 and JS22 blades used POWER6 technology and can be configured for the processor compatibility modes as shown in Figure 11-7 on page 446.
Figure 11-7 Processor compatibility mode on JS12/JS22 The requirement is that the source and target blades have the ability to match processor compatibility modes. Currently, for POWER6 based blades, the only common processor compatibility mode is POWER6. An LPAR running in POWER6 mode on a JS12 could migrate to a JS23 or JS43. If the JS12 LPAR was running in POWER6 Enhanced mode, migration to a JS23 or JS43 would not be possible without first changing the mode on the JS12 to POWER6.
Figure 11-8 Change the processor compatibility mode on JS23/JS43 Virtual optical devices All virtual optical devices must be removed from the mobile partition before a successful validation and migration can occur. The example shown in Figure 11-9 on page 448 indicates that the virtual device vtopt0 is still assigned to the mobile partition. The device can be removed by unchecking the box and clicking OK.
Figure 11-9 Virtual optical device to be removed 11.3 Migrating the LPAR The following sections describe how to use the IVM UI and CLI to validate, migrate, and check the status of a mobile LPAR. 11.3.1 Using the IVM UI Let us first see how we can perform an LPAR migration with IVM. Validate The migration process is started by first selecting View/Modify Partitions from the Navigation area.
Figure 11-10 Partition Migrate option The Migrate Partition view will open with the mobile partition name appended to the window name. Enter the remote or target IVM-controlled system IP address, remote user ID and password as shown in Figure 11-11 on page 450. Click Validate to start the validation process. Note: The Partition Migration view requests the Remote IVM or HMC IP address. At the time of this publication, IVM to HMC migrations are not supported.
Figure 11-11 Partition Mobility validation
At the end of the successful validation process, the Migrate Partition window will be updated similar to Figure 11-12. Figure 11-12 Partition Migration validation success
Figure 11-13 shows the results of the validation process that discovered a problem that would prevent a migration. This error message was generated because of a virtual SCSI assignment that could not be migrated. In this example the problem was due to a virtual optical device that had an assignment to the mobile partition. Another example is shown in Figure 11-4 on page 441, where the validation process could not find the required storage on the remote system.
Migrate With a successful completion of the validation process the migrate step can be started. Click Migrate to begin the migration process. As part of the migration process, a validate is run again and at the end of this step a Migrate Status view will display, as shown in Figure 11-14. Figure 11-14 Migrate Status view
The Migrate Status view can be accessed directly from the View/Modify Partitions window. Check the mobile partition box, then select Status under the Mobility section of the More Tasks drop-down box, as shown in Figure 11-15. Also note in this same figure that the state of the mobile partition has changed from Running to Migrating - Running.
Figure 11-16 shows the View/Modify Partitions view on the remote IVM, indicating migration has started. Note: The mobile partition will retain the same LPAR ID number if available on the remote system, otherwise it will be assigned the first available ID number. Figure 11-16 Remote IVM indicating migration in progress
At the end of the migration process the State of the mobile partition changes from Migrating - Running to Running as shown in Figure 11-17 on the formerly remote system. On the original local system the mobile partition is removed from the View/Modify Partition view. Figure 11-17 Partition migration complete to remote system 11.3.2 From the command line The IVM migrlpar command is used to validate and migrate the mobile partition from one IVM-managed system to another.
[VIOSE01042034-0418] The partition cannot be migrated because the virtual SCSI server adapter has a resource assignment that cannot be migrated. The -o flag or operation has the following options:
s - stop
m - validate and migrate
r - recover
v - validate
The -t flag in Example 11-6 on page 456 specifies the remote managed system. The -t flag requires the managed system name; the IP address of the remote system is supplied with the --ip flag. Note: The system name is not the same as the host name.
[1] 24076366
$ lssyscfg -r lpar -F name,state
VIOS-Neptune,Running
Phobes - RHEL5-U2,Running
Mars - AIX 6.1,Migrating - Running
Example 11-9 lslparmigr command used to check migrating partition status
$ migrlpar -o m -t Server-7998-61X-SN7157008 --ip 172.16.1.100 --id 5 &
[1] 24228082
$ lslparmigr -r lpar
name=VIOS-Neptune,lpar_id=1,migration_state=Not Migrating
name=Phobes - RHEL5-U2,lpar_id=2,migration_state=Not Migrating
name=Mars - AIX 6.
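Before committing to a migration, the same parameters can be used for a validation-only run with the -o v operation; a sketch reusing the values from Example 11-9 (the system name, IP address, and LPAR ID are from that example and must be replaced with your own):

$ migrlpar -o v -t Server-7998-61X-SN7157008 --ip 172.16.1.100 --id 5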
12 Chapter 12. System maintenance and diagnostics This chapter discusses methods and best practices related to some important IBM BladeCenter JS23 and JS43 Express maintenance topics, such as: “Firmware updates” on page 460. “System diagnostics” on page 472
12.1 Firmware updates IBM periodically makes firmware updates available for you to install on the IBM BladeCenter JS23 and JS43 Express, the management module, or expansion cards in the blade server. IBM BladeCenter JS23 and JS43 Express have a large firmware image, making it impossible to perform firmware updates through the Advanced Management Module.
3. Copy the new firmware image file to your system, into the /tmp/fwupdate directory (or /home/padmin/fw on a Virtual I/O Server). You should create this directory if it doesn't exist; to do that, type mkdir /tmp/fwupdate (or mkdir fw on a VIO Server). 4. Log on to the AIX or Linux system as root, or log on to the Virtual I/O Server/IVM alpha partition as padmin. Important: Updates from within an LPAR are not supported. You need to be logged in to the VIOS instead. 5.
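Applying the downloaded image is typically done with the update_flash utility on AIX and Linux, or with the ldfware command on the VIOS; the following is only a sketch - the image file name is an illustrative placeholder, and the readme that accompanies the firmware image should always be followed:

# AIX (on Linux, update_flash is usually found in /usr/sbin)
/usr/lpp/diagnostics/bin/update_flash -f /tmp/fwupdate/firmware_image.img
# Virtual I/O Server, logged in as padmin
ldfware -file /home/padmin/fw/firmware_image.img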
12.1.2 Starting the firmware image from the TEMP side Before running firmware updates, you need to make sure the BladeCenter server is using the firmware located in the TEMP side. Note: Usually the IBM BladeCenter JS23 and JS43 Express are configured to use the TEMP side, leaving the firmware image in the PERM side as a backup. It is possible to verify which side is being used, and change between firmware sides, from within the SMS menu and the Advanced Management Module (AMM).
Figure 12-1 Select BladeCenter boot mode main page 3. Select the desired JS23 or JS43 blade server. 4. Select Temporary to force the system to use the firmware image from the TEMP side, as shown in Figure 12-2 on page 464, then click Save.
Figure 12-2 Firmware selection page 5. Restart the blade server. Click Blade Tasks → Power/Restart. Select the desired BladeCenter server in the list, then choose Restart Blade in the Available Options combobox. Finally, click Perform Action. Figure 12-3 on page 465 shows the Blade Power/Restart page.
Figure 12-3 Blade Power / Restart 6. Verify that the system starts using the firmware image from the TEMP side. This can be done by running steps 1 and 2 again (see Figure 12-1 on page 463). Configure to use the TEMP side through the SMS menu 1. Boot your blade server and hit 1 to enter the SMS menu, as shown in Figure 12-4 on page 466.
Figure 12-4 POST welcome screen (the console fills with the word IBM during POST)
Figure 12-5 SMS main menu Important: If your SMS menu does not provide option number 6, you are probably inside an LPAR. You cannot run firmware updates on IBM BladeCenter JS23 and JS43 Express blade servers from within an LPAR. 3. Figure 12-6 on page 468 shows the SMS Boot Side Option Menu. In the upper left corner you can find the level of firmware being used, and just above options 1 and 2 you can find the firmware side being used.
Figure 12-6 SMS firmware boot side options 4. Press X → 1 to restart the system, as shown in Figure 12-7.
12.1.3 Verify current firmware level Before doing firmware updates, you must know which firmware level you are running on your IBM BladeCenter JS23 or JS43 Express. There are several ways to get this information:
Get the firmware level through the AMM.
Get the firmware level through the SMS menu.
Get the firmware level through the lsmcode command for Linux and AIX, or lsfware for the Virtual I/O Server.
Get firmware level using the AMM From within the AMM, click Monitors → Firmware VPD.
Get firmware level using the SMS menu 1. Boot your blade server and hit 1 to enter the SMS menu, as shown in Figure 12-4 on page 466. Note: Pay attention to the welcome screen shown in Figure 12-4 on page 466. It has a short time out, and if you miss it you'll need to reboot the machine. 2. Figure 12-9 shows the SMS main menu. In the upper left corner you can find the current firmware level. Figure 12-9 Firmware level inside the SMS main menu
XXX - the release level. Changes in the release level mean major updates to the firmware code.
YYY.ZZZ - the service pack level and the last disruptive service pack level. Values for the service pack and last disruptive service pack are only unique within a release level.
A firmware installation is always disruptive if:
The new firmware release level is different from the current firmware release level.
The new firmware service pack level and the last disruptive service pack level have the same value.
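For completeness, the running level can also be checked from the operating system using the commands named in 12.1.3; a sketch (the -c flag is the AIX form of lsmcode, and output formats vary by OS level):

# AIX (on Linux, plain lsmcode prints the firmware level)
lsmcode -c
# Virtual I/O Server, logged in as padmin
lsfware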
12.2 System diagnostics POWER6 processor-based systems contain specialized hardware detection circuits for detecting erroneous hardware operations, and include extensive hardware and firmware recovery logic. IBM hardware error checkers have these distinct attributes:
Continuous monitoring of system operations to detect potential calculation errors.
Attempted isolation of physical faults based on runtime detection of each unique failure.
If the Service Processor detects a problem during POST, an error code is logged in the AMM event log. Error codes are also logged in the Linux syslog or AIX diagnostics log, if possible. See “Checkpoint code (progress code)” on page 479 for more details. Light Path and Front Panel diagnostics IBM BladeCenter JS23 and JS43 Express come with Light Path technology, which helps in identifying Customer Replaceable Units (CRUs) with problems.
Table 12-1 Description of Front Panel buttons and LEDs Callout Description 1 Keyboard/Video selection button. 2 Media Tray selection button. 3 Information LED. 4 Error LED. 5 Power Control button. 6 Nonmaskable Interrupt (NMI) reset button 7 Sleep LED. Not used in the IBM BladeCenter JS23 and JS43 Express. 8 Power-on LED. 9 Activity LED. When lit (green), it indicates that there is activity on the hard disk drive or network. 10 Location LED.
Figure 12-11 AMM BladeCenter LEDs control and status page Light Path Light Path diagnostics is a system of LEDs on the control panel and on your system board (IBM BladeCenter JS43 Express has Light Path LEDs on both boards). When a hardware error occurs, LEDs are lit throughout the blade server. LEDs are available for many components, such as: Battery. SAS HDD (or SSD) disks, on both Base and MPE planars. Management card on Base planar only. Memory modules on both Base and MPE planars.
Note: We recommend that you see the BladeCenter JS23 and BladeCenter JS43 Type 7778 Problem Determination and Service Guide, Part Number: 44R5339. There you will find more detailed information on how to perform diagnostics using the Light Path technology, and also how to act when some well-known types of problems arise. Figure 12-12, Figure 12-13 on page 477, and Table 12-2 on page 477 show all Light Path LEDs available on your IBM BladeCenter JS23 and JS43 Express boards.
Figure 12-13 LEDs on the IBM BladeCenter JS43 Express MPE planar Table 12-2 Lightpath LED description.
Diagnostic utilities for the AIX operating system AIX provides many diagnostic and maintenance functions, such as: Automatic error log analysis. Firmware updates, format disk, and RAID Manager. For more information on how to perform diagnostics on your IBM BladeCenter JS23 and JS43 Express using AIX, please see http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp?topic=/iphau/working01.
diagnostic aids and productivity tools overview” on page 546 for more details on the IBM Installation Toolkit for Linux. 12.2.2 Reference codes Reference codes are diagnostic aids that help you determine the source of a hardware or operating system problem. IBM BladeCenter JS23 and JS43 Express produce many types of reference codes: Progress codes - 8-digit status codes generated by the Power-on Self-test (POST). They are used to show progress when powering on the blade server.
A checkpoint might have an associated location code as part of the message. The location code provides information that identifies the failing component when there is a hang condition. System reference code (SRC) System reference codes are used to identify both hardware and software problems in IBM BladeCenter JS23 and JS43 Express. These errors can originate in hardware, in firmware, or in the operating system. The SRC identifies the component that generated the error code and describes the error.
Advanced Management Module User’s Guide
ftp://ftp.software.ibm.com/systems/support/intellistation/44r5375.pdf
Advanced Management Module Installation Guide
ftp://ftp.software.ibm.com/systems/support/system_x/44r5269.
3. Select the desired blade server. The reference codes will be shown for the chosen blade server, as in Figure 12-15 on page 482. The Advanced Management Module can display the last 32 reference codes. You can manually refresh the list to update it. Figure 12-15 Power-on checkpoints inside AMM web interface Using the AMM to view log messages You can use the AMM web interface to view log messages generated by the blade servers within a BladeCenter chassis. Once inside the AMM, click Monitors → Event Log.
Figure 12-16 AMM event log interface Service Advisor The Service Advisor enables the BladeCenter to automatically send hardware and firmware serviceability messages to IBM. Every time a log event with the Call Home flag enabled occurs, the AMM's Service Advisor will send a message with the event log message, BladeCenter unit inventory, and status to IBM Support. This Call Home feature comes disabled by default. You need to accept the Service Advisor Terms and Conditions before enabling it.
in the normal Product Activity Logs (PAL) or System Activity Logs (SAL). Most errors the IBM i partition will encounter are going to be related to storage or configuration. Any true hardware errors will be reported to the VIOS partition and repaired using VIOS options. In this section we will outline where to collect error data and configuration information related to an IBM i virtual partition.
Figure 12-18 More Tasks - Reference Codes Once the reference codes option is selected, a new window will appear that displays the list of codes for the partition selected. In Figure 12-19 on page 486 the codes listed are from the last IPL. Everything is normal with no errors at this time. Selecting any reference code will display the additional words to the right of the panel in the details section.
Figure 12-19 Reference Code list - normal IPL Now let’s look at an error condition in the IBM i partition. For this scenario we will assume the partition was running with no problems. Something happened that caused the partition to hang. Users report that the partition is not responding. There are many ways to troubleshoot and report problems. It is not the intent of this section to provide procedures beyond collecting data and contacting your next level of support.
One of the places to look for errors will be in IVM. Looking at the View/Modify partitions screen we notice an error condition on the IBM i partition. In Figure 12-20 on page 487 notice that the Attention Indicator is next to the partition and in the reference code column there is a code listed. Normally we expect to see 00000000 in the reference code column if everything is running ok.
Figure 12-21 Reference Code list - error condition Using the start-of-call procedures, this reference code information would be used to complete the Problem Summary Form. This information would be used by service and support to troubleshoot the error and provide assistance in resolving the problem. Depending on your skill level, you may be able to navigate through the various Information Center pages to troubleshoot this error further. Another source for error information would be the AMM.
Figure 12-22 AMM Event Log The event log can be filtered to view only events specific to the blade server or other components. Figure 12-23 on page 489 shows an example of the filter options. Figure 12-23 Event log filter In the list of events you will see the error log information. Figure 12-24 on page 490 provides an example of the data in the AMM event log. This data should be similar to the data shown on the partition reference code screen as viewed
from IVM we looked at earlier. This data can also be saved by scrolling to the bottom of the event log and using the Save Log as Text File button. This data could then be supplied to service and support for further assistance in error determination. Figure 12-24 Event log data details As mentioned above, it is not the intention of this book to explain troubleshooting processes for an IBM i partition.
Part 3 Appendixes In this part of the book we provide additional technical support information: Appendix A, “Consoles, SMS, and Open Firmware” on page 493 Appendix B, “SUSE Linux Enterprise Server AutoYaST” on page 521 Appendix C, “Additional Linux installation configuration options” on page 535 Appendix D, “Service and productivity tools for Linux” on page 545
A Appendix A. Consoles, SMS, and Open Firmware This appendix briefly covers the methods to gain access to the console, use the System Management Services (SMS) menu to select the console to use, and use the Open Firmware prompt to choose fiber channel host bus adapter settings. This Appendix has the following sections: “Consoles of the IBM BladeCenter JS23 and JS43” on page 494 “System Management Services menu” on page 501 “Open Firmware interface” on page 509
Consoles of the IBM BladeCenter JS23 and JS43 Like the previous JS12 and JS22 BladeCenter servers, the IBM BladeCenter JS23 and JS43 blades have a graphics adapter. This graphics adapter makes it possible to use the KVM switch that is built into the Advanced Management Module to gain access to the console of the blade. An alternative method to gain access to the console is to use Serial Over LAN (SOL). You can use either the graphical console or the SOL console during POST.
Figure A-1 JS23/JS43 Control Panel Pressing the keyboard/video select button switches the console to the blade on which the button was pressed. Only one blade in a chassis has the keyboard/video select button lit. Note: Be sure that you are using the keyboard, video, and mouse connected to the active Advanced Management Module. Only one management module is active at one time. You will recognize this by looking at the management modules’ LEDs.
Use the key combination as follows: 1. Press and hold the Shift key. 2. Press Num Lock twice. 3. Release the Shift key. 4. Press the bay number - one of 1-14, depending on the chassis you are using. 5. Press Enter. Using remote control to access the graphical console Remote control is a feature of the management module installed in a BladeCenter chassis. It allows you to connect over an IP connection to the management module and open a browser window that has the graphical console redirected.
Figure A-2 AMM login panel 2. If prompted, select the time-out parameter that defines how much idle time can pass before the session is closed. Click Continue; Figure A-3. Our example has been modified to show no time-out.
Figure A-3 Select time-out parameter 3. After successful login you will see the status page of the AMM. This page gives a short overview of the health of the chassis and the blades. Click Remote Control in the menu under Blade Tasks, as shown in Figure A-4 on page 499. Verify that there is no remote control session in progress by observing the remote control status. The Refresh button allows you to refresh the status. Then scroll down to Start Remote Control.
Figure A-4 Blade Remote Control options 4. Click Start Remote Control as shown in Figure A-5. A new window will open with the remote control Java applet. Be sure that there are no popup blockers running or configure them to allow the popup windows from the AMM. It may take some time for the window to appear and load the applet. Figure A-5 Start remote control The remote control Java applet will start in a new window. Figure A-6 shows remote control with remote media and remote console.
Figure A-6 Remote control - remote console and remote disk Serial Over LAN Serial over LAN (SOL) provides a means to manage servers remotely by using a command-line interface (CLI) over a Telnet or secure shell (SSH) connection. SOL is required to manage servers that do not have KVM support. SOL provides console redirection for both BIOS and the blade server operating system. The SOL feature redirects server serial-connection data over a LAN without the need for special cabling.
You can establish up to 20 separate Web-interface, Telnet, or SSH sessions with a BladeCenter management module. For a BladeCenter unit, this enables you to have 14 simultaneous SOL sessions active (one for each of up to 14 blade servers) with 6 additional command-line interface sessions available for BladeCenter unit management.
as the active console if you do not select the SOL console as the active console. The SOL session cannot be used at this time to access the SMS menu to perform configuration tasks. To switch from the physical console to an SOL console, you have to enter the SMS menu over the physical console or Remote Control. See “Graphical console” on page 494 for the available consoles and how to use them. To enter the SMS menu, the blade has to go through the POST.
Figure A-8 Power/Restart blade options Note: The Restart Blade option will perform a power off and a power on of your selected blade. The operating system will not shut down properly. Use this option only when there is no operating system running or the blade is in POST, SMS, or Open Firmware prompt. The blade will perform the requested action. 4. Refresh this Web page to see a status change. Now use the console of your choice to work with the blade.
default account is USERID with password PASSW0RD. See Example A-1 on page 505. Note: Remember that the 0 in PASSW0RD is a zero. Help is available via the command help or help {command}. Every command may be executed with one of these options to show the online help for the command:
env -h
env -help
env ?
This example uses the command env to show available options to get help. The Management Module Command-line Interface Reference Guide can be found online at: http://www-304.ibm.
Example: A-1 Use of the power command
login as: USERID
Using keyboard-interactive authentication.
password:
Hostname: moon.ibm.com
Static IP address: 172.16.0.225
Burned-in MAC address: 00:14:5E:DF:AB:28
DHCP: Disabled - Use static IP configuration.
Last login: Friday June 20 2008 17:37 from 9.3.4.
You may exit from the SOL session and return to the Advanced Management Module CLI by using the key combination ESC+[. This key combination can be defined in the AMM Web interface.
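For reference, an SOL session to a particular blade is opened from the AMM CLI with the console command; a sketch (the slot number is illustrative):

system> console -T system:blade[4]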
You need to enter the SMS menu over the physical console to change the active console, in this case as described in the next steps. Figure A-10 Physical console shown with remote control - select active console After a console is chosen as active console, either by the user or automatically, the system will show the Power On Self Test (POST). IBM BladeCenter JS23 and JS43 Power On Self Test (POST) As with previous JS2x blades, there are no System Reference Codes (SRC) shown on a console during POST.
(6) Use Stored Boot list
(8) Enter Open Firmware Prompt
The stored boot list used to load the operating system will be the default. Press 1 to enter the SMS menu. Figure A-11 JS23/JS43 SMS Select the active console using the System Management Services menu When the blade is going through the POST, you can enter the System Management Services menu. To change the current active console in the SMS menu, press 5 to select the console. See Example A-2.
2. Identify the World Wide Port Name and/or World Wide Node Name.
3. Set the connection type.
4. Set the transfer rate.
5. Query available targets.
This appendix is split into a section about the QLogic host bus adapter and the Emulex host bus adapter. We start with a description of how to get access to the Open Firmware prompt. Get access to the firmware prompt Use a console of the JS23/JS43 and power on or restart the blade.
8 = Open Firmware Prompt          6 = Stored Boot List
Memory Keyboard Network SCSI Speaker ok
After entering the Open Firmware prompt, you see the command prompt shown in Example A-4.
Example: A-4 Open Firmware command prompt
0 >
Note: You may leave the System Management Services menu from the main menu with 0 to get to the Open Firmware prompt. Boot settings are stored in the NVRAM of the system. Open Firmware allows you to verify them with the printenv command.
command show-devs on the Open Firmware prompt, as shown in Example A-6. The output of the command is shortened to show only the information that is important for the explanation in this section.
Example: A-6 show-devs example output
0 > show-devs
00000208dda0:
00000208eb98:
. . .
The examples in this section were created using a CFFh combo card with the firmware 4.00.24 and FCode 1.25. Identify your fiber channel host bus adapter as described in Example A-6 on page 512. The device tree in your system may differ from the example shown here. With this information you can build the command to select the device. Enter the command: " /pci@800000020000204/fibre-channel@0" select-dev to select the first host adapter port.
Firmware version 4.00.24 ok
0 >
If an Optical Pass Through Module is used, it is necessary to change the transfer rate, which is set per default to Auto Negotiation on the 4 Gbps host bus adapter, to a fixed value of 2 Gbps. The Optical Pass Through Module can only handle transfer rates up to 2 Gbps. Auto Negotiation will not work with 4 Gbps host bus adapters. To change the transfer rate, verify the current settings of the HBA first. Use the command show-settings as shown in Example A-10 on page 514.
Depending on your fiber channel targets and the connectivity that you use to connect to them, you may wish to change the connection type to loop or to point-to-point. Use the command set-connection-mode to make the change, as shown in Example A-12. The command returns the current setting and lets you change to a new one. The possible options are shown. Select the corresponding number and press Enter.
When no changes are made, the boot process can be started by leaving the Open Firmware prompt with the commands as shown in Example A-14.
Example: A-14 Leave Open Firmware prompt
1 > dev /packages/gui
1 > obe
Emulex host bus adapter This section describes how to:
1. Retrieve the World Wide Node Name.
2. Identify the FCode level.
3. Set the link speed.
4. Set the connection mode.
The examples in this section were created using an Emulex CFFv with the FCode 3.10.a0.
Example: A-16 Display the World Wide Node and Port Name of an Emulex CFFv HBA
0 > host-wwpn/wwnn
Host_WWPN 10000000 c9660936
Host_WWNN 20000000 c9660936
ok
0 >
The installed FCode level on the HBA can be shown with the command check-vpd or .fcode, as shown in Example A-17.
Example: A-17 Display FCode version of an Emulex CFFv HBA
0 > check-vpd
!!! LP1105-BCv Fcode, Copyright (c) 2000-2008 Emulex !!!
Version 3.10a0
ok
0 > .fcode
Fcode driver version 3.
4. 4 Gb/s Link Speed
-- Only Enter to QUIT
Enter a Selection:
Enter the number of your choice and press Enter, as shown in Example A-20. The NVRAM of the HBA will be updated.
Example: A-20 Changed link speed in NVRAM of the Emulex CFFv HBA
Enter a Selection: 2
Flash data structure updated.
can see that the topology is set to Point to Point. The set commands return nothing.
Example: A-21 Display connection topology of an Emulex CFFv HBA
1 > .topology
Point to Point - Current Mode Manual Topology
ok
1 >
Remember that the described commands require that you have an HBA port selected and that they only have effect on the selected HBA port. You need to perform the necessary actions on both HBA ports. To leave the Open Firmware prompt and restart the blade, use the command reset-all.
B Appendix B. SUSE Linux Enterprise Server AutoYaST This appendix describes the SUSE AutoYaST tool to perform automated installations of SUSE Linux Enterprise Server 11. We discuss the following topics: “AutoYaST introduction” on page 522 “AutoYaST profile creation methods” on page 522 “Create an AutoYaST profile using YaST Control Center” on page 522
AutoYaST introduction The AutoYaST configuration tool allows a system administrator to install SUSE Linux Enterprise Server (SLES) on a large number of systems in parallel using an automated process. The AutoYaST profile is a file written using the Extensible Markup Language (XML). It contains responses to all the system configuration questions typically asked during a manual installation. This file is configurable to accommodate the installation of systems with homogeneous and heterogeneous hardware.
Note: This YaST tool can run in graphical or text mode. A mouse can navigate through the graphical version of the tool while the text mode version requires Tab, Enter, Up/Down Arrow, and Space bar keys to navigate. Otherwise, there is no difference between the two modes and the same configuration options in both will result in the same XML file. There are a lot of optional settings, but some are mandatory settings or dependencies.
Figure B-1 YaST Control Center in graphics mode 524 IBM BladeCenter JS23 and JS43 Implementation Guide
Figure B-2 YaST Control Center in text mode
Navigating the YaST graphical interface 1. Start the YaST application, which opens a window as shown in Figure B-3. Launch the Autoinstallation applet from the Miscellaneous section of YaST.
2. After the selection, the main AutoYaST configuration window opens as shown in Figure B-4. Figure B-4 Main AutoYaST menu (SLES 11)
3. Clone the configuration of the installation server by selecting Tools → Create Reference Profile, as shown in Figure B-5.
4. A second window opens, as shown in Figure B-6. In addition to the default resources such as boot loader, partitioning, and software selection, it is possible to add other aspects of your system to the profile by checking items in the Select Additional Resources section. When ready, click Create so YaST can collect the system information and create the AutoYaST profile. Figure B-6 Selecting additional resources 5.
Figure B-7 AutoYaST software selection b. Hardware - Configures Partitioning, Sound, Printer, and Graphics Card and Monitor, if necessary. The Partitioning settings are critical for this configuration to work, so verify that they match your hard disk environment and that each partition meets the minimum SuSE partition size requirements. c. System - Sets the general system information such as language configuration, time zone, other locale-related settings, logging, and run-level information in this option.
iv. Remove any static IP configurations on the next panel and press Add. Some selections are already configured, such as Device Type: Ethernet. Type, for example, ehea, as module name for the adapter and click Next. v. In the Host name and name server section, choose DHCP for the Hostname and Domain Name (Global) and also choose DHCP for Name servers and the domain search list. vi. Click OK → Next. Interface eth0 is ready now. To create interface eth1, repeat the steps.
Figure B-8 Configure the root user iv. Highlight root and its row again and press Edit. v. Add the root user password. This password is saved encrypted in the XML file. Press Accept when finished. vi. Click Finish to return to the AutoYaST main menu. g. Misc - Allows you to add complete configuration files, or to add special scripts to run before and after the installation. 6. Remember to save the edits with File → Save.
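To give a feel for the XML that AutoYaST generates, the following is a hand-written sketch of a partitioning fragment; the element names are typical AutoYaST tags, but the values are illustrative and not copied from any generated profile:

<partitioning config:type="list">
  <drive>
    <device>/dev/sda</device>
    <partitions config:type="list">
      <partition>
        <format config:type="boolean">true</format>
        <mount>/</mount>
        <size>max</size>
      </partition>
    </partitions>
  </drive>
</partitioning>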
C Appendix C. Additional Linux installation configuration options This appendix describes some of the other options to install Linux natively or on an LPAR. We cover the following configurations: “Basic preparations for a Linux network installation” on page 536 “Virtual optical device setup and installation” on page 544
Basic preparations for a Linux network installation This section provides all the basic information to set up services for a Linux network installation. In principle, this is not bound to a specific operating system or distribution that runs on the infrastructure server to provide the necessary services. Nevertheless, all descriptions in this section are based on general Linux services, commands, and parameters.
Configuring a BOOTP or DHCP service DHCP is an extension to the original BOOTP specification. As a result, you can use DHCP to provide the BOOTP information for booting using the network. The standard DHCP daemon is called dhcpd, but there are other DHCP daemons. Note: The directory you use for the configuration files depends on the distribution. The following directories are possible examples:
/etc/
/etc/sysconfig/
/etc/default/
/etc/xinet.
option routers 172.16.1.1;
subnet 172.16.1.0 netmask 255.255.255.0 {
    option broadcast-address 172.16.1.255;
    range dynamic-bootp 172.16.1.68 172.16.1.80;
    default-lease-time 444;
    next-server 172.16.1.197;
}
host JS23 {
    hardware ethernet 00:1a:64:44:21:53;
    fixed-address 172.16.1.79;
    filename "install";
}
}
You can find the start and stop scripts of Linux services in the /etc/init.d/ directory. To start the standard DHCP daemon, use the /etc/init.d/dhcpd start command.
Configuring a Trivial File Transfer Protocol service You can use TFTP to provide a bootable image during a network installation. There are several implementations of TFTP daemons available. The standard TFTP daemon is called tftpd. In general, the xinetd or inetd super daemons are used to create a TFTP daemon. You can also run a TFTP daemon without one of the super daemons.
5. Finally, scroll down to [Finish] and press the Enter key. Example C-2 shows a TFTP daemon configuration for xinetd stored in /etc/xinet.d/tftpd.
Example: C-2 Configuring a TFTP daemon in the /etc/xinet.d/tftp file on SLES11
# default: off
# description: tftp service is provided primarily for booting or when a \
# router need an upgrade. Most sites run this only on machines acting as
# "boot servers".
service tftp
{
    socket_type = dgram
    protocol = udp
    wait = yes
    user = root
    server = /usr/sbin/in.
2. Then enter:
cp /mnt/suseboot/inst64 /tftpboot/install
Copying the Red Hat Enterprise Linux 5 install kernel To copy the Red Hat Enterprise Linux 5 install kernel, use the following procedure: 1. Mount the Red Hat Enterprise Linux 5.2 DVD1 on the system running the tftp server. For example, on a system running Red Hat Enterprise Linux 5, type:
mount /dev/cdrom /mnt
2. Then enter:
cp /mnt/images/netboot/ppc64.
Figure C-2 Initial setup of SLES NFS installation server 4. Then click [Next]. 5. Leave the defaults for Host Wild Card and Options. 6. Click [Next]. With this, an NFS server serving /install is set up automatically. 7. Click Add to configure an installation source. 8. As Source Name, enter the desired name for this installation source, for example, sles11. This creates a subdirectory sles11 under /install. 9.
Figure C-3 Source configuration window 11. If you chose the Read CD or DVD Medium option given in Figure C-3, you will be prompted to insert the first DVD. 12. Insert SLES11 DVD1 into the BladeCenter media tray and press [Continue]. The data from DVD1 is copied to the /install/sles11/CD1 directory. Note: If you used the CD option instead of a DVD, you will be prompted for the other CDs at this step. 13. Select [Finish] after all the data is copied. The installation server is now ready.
umount /mnt/
2. Make sure the export directory is exported via an NFS entry in /etc/exports. For example:
/install/RHEL5.2 *(ro,async,no_root_squash)
3. Then restart the NFS daemon with:
/sbin/service nfs start
/sbin/service nfs reload
Virtual optical device setup and installation This installation option uses the virtual optical device on the Integrated Virtualization Manager (IVM) to perform a CD/DVD installation of a Linux operating system image. The Linux image is stored in the IVM’s virtual media library.
D Appendix D. Service and productivity tools for Linux This appendix describes how to install IBM service diagnostic aids and productivity tools for the Linux operating system running on BladeCenter or IVM-managed servers for the JS23 BladeCenter.
IBM service diagnostic aids and productivity tools overview The IBM service diagnostic and productivity packages for Linux on POWER architecture provide the latest system diagnostic information such as reliability, availability, and serviceability (RAS) functionality as well as the ability to modify logical partition (LPAR) profiles with hotplug, Dynamic Logical Partitioning (DLPAR), and Live Partition Migration capabilities.
Figure: package selection decision flow - after successfully installing Linux on a JS23 or JS43, choose the package set for Linux running on BladeCenter servers or on IVM-managed servers, then select the service and productivity tools for the appropriate SUSE or Red Hat Linux version.
Install tools on Red Hat Enterprise Linux 5/SLES 11 running on BladeCenter servers This section describes the steps to configure a JS23 BladeCenter running on a BladeCenter server with the service aids and productivity tools. These steps are applicable for systems running a native Red Hat Enterprise Linux 5/SLES 11 (or later) installation environment. 1. Use a Web browser to connect to https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html 2.
Figure D-3 OS level selection tabs example 4. Click and save each of the packages under the Package downloads column. At the time of this publication the packages were:
Figure D-4 Available packages for Red Hat on BladeCenter servers
Tool - Basic Information
Platform Enablement Library - A library that allows applications to access certain functionality provided by platform firmware.
Hardware Inventory - Provides Vital Product Data (VPD) about hardware components to higher-level serviceability tools.
Service Log - Creates a database to store system-generated events that may require service.
Error Log Analysis - Provides automatic analysis and notification of errors reported by the platform firmware.
Tip: Click the links under the Tool name column for the latest detailed description of each tool.
5. Use a transfer protocol such as FTP or SCP to send each *.rpm package to the target system, or save these rpm packages to a CD or DVD and mount the device (see the CD/DVD tip below).
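Once the packages are on the target system, they are installed with rpm; a sketch, assuming the files were copied to /tmp/ibmtools (an illustrative path):

cd /tmp/ibmtools
rpm -Uvh *.rpm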
Tip2: We recommend placing these rpms in a yum repository to quickly update or install these tools on a large number of machines. Install tools on Red Hat Enterprise Linux 5/SLES 11 running on IVM-managed servers This section describes the steps to configure a JS23 BladeCenter LPAR running on an IVM-managed server with the service aids and productivity tools. 1. Use a Web browser to connect to https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html 2.
Figure D-5 Available packages for Red Hat/SuSE Linux on IVM-managed server 552 IBM BladeCenter JS23 and JS43 Implementation Guide
Tool Basic Information Platform Enablement Library A library that allows application to access certain functionality provided by platform firmware. SRC Manages daemons on the systems. RSCT utilities RSC packages provide the Resource Monitoring and Control (RMC) functions and infrastructure needed to monitor and manage one or more Linux systems RSCT core See description above. CSM core CSM packages provide for the exchange of host-based authentication security keys.
Error Log Analysis - Provides automatic analysis and notification of errors reported by the platform firmware.
PCI Hotplug Tools - Allows PCI devices to be added, removed, or replaced while the system is in operation.
Dynamic Reconfiguration Tool - Allows the addition and removal of processors and I/O slots from a running partition.
Inventory Scout - Surveys one or more systems for hardware and software information.
Figure D-6 DLPAR and Live Partition mobility services are enabled See Chapter 4, “System planning and configuration using VIOS with IVM” on page 71 for more information on IVM options and functions. 9. Installation of the service aids and productivity tools is complete. Tip: We recommend placing these rpms in a yum repository to quickly update or install these tools on a large number of machines.
Abbreviations and acronyms
ABR  Automatic BIOS recovery
AC  alternating current
ACL  access control list
AES  Advanced Encryption Standard
AMD  Advanced Micro Devices
AMM  Advanced Management Module
API  application programming interface
APV  Advanced Power Virtualization
ARP  Address Resolution Protocol
CCSP  Cisco Certified Security Professional
CD-ROM  compact disc read only memory
CDP  Cisco Discovery Protocol
CE  Conformité Européene
CLI  command-line interface
DVMRP  Distance Vector Multicast Routing Protocol
DVS  Digital Video Surveillance
ECC  error checking and correcting
EDA  Electronic Design Automation
EIGRP  Enhanced Interior Gateway Routing Protocol
EMC  electromagnetic compatibility
EMEA  Europe, Middle East, Africa
HSDC  high speed daughter card
HSFF  high-speed form factor
HSIBPM  high-speed InfiniBand pass-thru module
HSIBSM  high speed InfiniBand switch module
HSRP  Hot Standby Routing Protocol
HT  Hyper-Threading
ISL  Inter-Switch Link
ISMP  Integrated System Management Processor
ISP  Internet service provider
IT  information technology
ITS  IBM Integrated Technology Services
ITSO  International Technical Support Organization
IVM  Integrated Virtualization Manager
KB  kilobyte
MVR  Multicast VLAN registration
NAT  Network Address Translation
NDCLA  Non-Disruptive Code Load Activation
NEBS  Network Equipment Building System
NGN  next-generation network
NIC  network interface card
NMI  non-maskable interrupt
RAS  remote access services; row address strobe
RDAC  Redundant Disk Array Controller
RDC  Remote Desktop Connection
RDIMM  registered DIMM
RDM  Remote Deployment Manager
RDMA  Remote Direct Memory Access
RETAIN  Remote Electronic Technical Assistance Information Network
RHEL  Red Hat Enterprise Linux
SIP  source IP
SLB  Server Load Balancing
SLES  SUSE Linux Enterprise Server
SMI-S  Storage Management Initiative Specification
SMP  symmetric multiprocessing
SMS  System Management Services
SNMP  Simple Network Management Protocol
USB  universal serial bus
UTF  Universal Telco Frame
UTP  unshielded twisted pair
VBS  Virtual Blade Switch
VGA  video graphics array
VIOS  Virtual I/O Server
VLAN  virtual LAN
VLP  very low profile
VM  virtual machine
VMPS  VLAN Membership Policy Server
VNC  Virtual Network Computing
VOIP  Voice over Internet Protocol
VPD  vital product data
VPN  virtual private network
VQP  VLAN Query Protocol
VRRP  virtual router redundancy protocol
VSAN  Virtual Storage Area Network
VT  Virtualization Technology
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on page 568. Note that some of the documents referenced here may be available in softcopy only.
IBM BladeCenter Products and Technology, SG24-7523
IBM System i and System p, SG24-7487
IBM System Storage DS4000 and Storage Manager V10.
IBM Systems Director Active Energy Manager Version 3.1.1 is an IBM Director extension. For more information about the IBM Active Energy Manager, see:
http://www.ibm.com/systems/management/director/extensions/actengmrg.html
IBM periodically releases maintenance packages for the AIX 5L operating system. These packages are available on CD-ROM, or you can download them from the following Web site:
http://www.ibm.com/eserver/support/fixes/fixcentral/main/pseries/aix
In AIX 5L V5.
http://publib.boulder.ibm.com/infocenter/iseries/v1r3s/en_US/info/iphb1/iphb1.pdf
The BladeCenter Interoperability Guide can be found at:
https://www-304.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5073016&brandind=5000020
The Virtual I/O Server data sheet gives an overview of supported storage subsystems and the failover driver that is supported with each subsystem. The data sheet can be found at:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.
http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/supportresources?taskind=3&brandind=5000033&familyind=5329743
The SAN switch interoperability matrix can be found at:
http://www-03.ibm.com/systems/storage/san/index.html
The System Storage Interoperation Center (SSIC) helps to identify a supported storage environment. You can find this Web-based tool at:
http://www-03.ibm.
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/topic/rzahc/rzahcswinstallprocess.htm
Use the IBM i recommended fixes Web site to get a list of the latest recommended PTFs:
http://www-912.ibm.com/s_dir/slkbase.nsf/recommendedfixes
For the primary Web site for downloading fixes for all operating systems and applications, refer to:
http://www-912.ibm.com/eserver/support/fixes
More detailed information about the IBM Systems Director Navigator for i functionality can be found at:
http://www.ibm.
Yet another Setup Tool (YaST) will assist with the completion of a SLES installation. More detailed installation instructions are available here:
http://www.novell.com/documentation/sles10/sles_admin/index.html?page=/documentation/sles10/sles_admin/data/sles_admin.html
The link to the Virtual I/O Server download site is also available here:
http://techsupport.services.ibm.