Cisco UCS B420 M3 Blade Server
CISCO SYSTEMS, 170 WEST TASMAN DR., SAN JOSE, CA 95134, WWW.CISCO.COM
PUBLICATION HISTORY: REV B.
CONTENTS
OVERVIEW . . . 3
DETAILED VIEWS . . . 4
  Blade Server Front View . . . 4
BASE SERVER STANDARD CAPABILITIES and FEATURES . . . 5
CONFIGURING the SERVER . . . 7
OVERVIEW

Designed for enterprise performance and scalability, the Cisco® UCS B420 M3 Blade Server combines the advantages of 4-socket computing with the cost-effective Intel® Xeon® E5-4600 v2 and E5-4600 series processor families, for demanding virtualization and database workloads. With industry-leading compute density, I/O bandwidth, and memory footprint, the UCS B420 M3 is a balanced, high-performance platform that complements the UCS blade server portfolio.
DETAILED VIEWS

Blade Server Front View

Figure 2 is a detailed front view of the Cisco UCS B420 M3 Blade Server.
BASE SERVER STANDARD CAPABILITIES and FEATURES

Table 1 lists the capabilities and features of the base server. Details about how to configure the server for a particular feature or capability (for example, number of processors, disk drives, or amount of memory) are provided in CONFIGURING the SERVER on page 7.

NOTE: The B420 M3 blade server requires UCS Manager (UCSM) to operate as part of the UCS system.
■ The B420 M3 with E5-4600 CPUs requires UCSM 2.
Table 1 Capabilities and Features (continued)

Capability/Feature: Video
Description: The Cisco Integrated Management Controller (CIMC) provides video using the Matrox G200e video/graphics controller:
■ Integrated 2D graphics core with hardware acceleration
■ DDR2/3 memory interface supports up to 512 MB of addressable memory (8 MB is allocated by default to video memory)
■ Supports display resolutions up to 1920 x 1200 16bpp @ 60 Hz
■ High-speed integrated
CONFIGURING the SERVER

Follow these steps to configure the Cisco UCS B420 M3 Blade Server:
■ STEP 1 VERIFY SERVER SKU, page 8
■ STEP 2 CHOOSE CPU(S), page 9
■ STEP 3 CHOOSE MEMORY, page 11
■ STEP 4 CHOOSE DISK DRIVES (OPTIONAL), page 15
■ STEP 5 CHOOSE ADAPTERS, page 17
■ STEP 6 ORDER A TRUSTED PLATFORM MODULE (OPTIONAL), page 21
■ STEP 7 ORDER CISCO FLEXIBLE FLASH SECURE DIGITAL CARDS, page 22
■ STEP 8 ORDER INTERNAL USB 2.0 DRIVE (OPTIONAL)
STEP 1 VERIFY SERVER SKU

Verify the product ID (PID) of the server as shown in Table 2.

Table 2 PID of the Base UCS B420 M3 Blade Server
Product ID (PID): UCSB-B420-M3
Description: UCS B420 M3 Blade Server with no CPU, memory, HDD, SSD, mLOM, or adapter card

The base Cisco UCS B420 M3 blade server does not include the following components.
STEP 2 CHOOSE CPU(S)

The standard CPU features are:
■ Intel Xeon processor E5-4600 v2 or E5-4600 series processor family CPUs
■ Core counts of up to 12
■ Cache sizes of up to 30 MB

Select CPUs

The supported Intel Xeon E5-4600 v2 and E5-4600 series CPUs on the UCS B420 M3 are listed in Table 3.
Supported Configurations

(1) Two-CPU Configuration
■ Choose two identical CPUs from any one of the rows of Table 3. CPUs 1 and 2 will be populated.

(2) Four-CPU Configuration
■ Choose four identical CPUs from any one of the rows of Table 3.
■ The system will run at the lowest CPU or DIMM clock speed. For example, when using 1600-MHz DIMMs with an E5-4603 CPU (which can only support up to 1066-MHz DIMMs), the system will run at the lower speed of 1066 MHz.
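The lowest-common-speed rule above reduces to a one-line calculation. The following is an illustrative sketch only (not Cisco tooling); the E5-4650 entry is an assumed example of a 1600-MHz-capable CPU:

```python
# Hypothetical sketch: the effective memory clock is the minimum of what the
# CPU supports and what the installed DIMMs support.
CPU_MAX_DIMM_MHZ = {
    "E5-4603": 1066,   # supports up to 1066-MHz DIMMs (from this section)
    "E5-4650": 1600,   # assumed example of a 1600-MHz-capable CPU
}

def effective_memory_speed(cpu_model: str, dimm_mhz: int) -> int:
    """Return the clock speed the memory subsystem actually runs at."""
    return min(CPU_MAX_DIMM_MHZ[cpu_model], dimm_mhz)

print(effective_memory_speed("E5-4603", 1600))  # -> 1066
```

So pairing 1600-MHz DIMMs with the E5-4603 yields 1066 MHz, as the example above states.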
STEP 3 CHOOSE MEMORY

The standard memory features are:
■ DDR3 ECC registered DIMMs (RDIMMs) or load-reduced DIMMs (LRDIMMs)
  — Clock speed: 1600 MHz
  — Ranks per DIMM: up to 4
  — Operational voltage: dual (1.5 V or 1.35 V); default = 1.5 V
  — Registered
■ Memory is organized with four memory channels per CPU, with up to three DIMMs per channel (DPC), as shown in Figure 3.
Choose DIMMs and Memory Mirroring

Select the memory configuration and whether or not you want the memory mirroring option. The supported memory DIMMs and the mirroring option are listed in Table 4.

When memory mirroring is enabled, the memory subsystem simultaneously writes identical data to two adjacent channels. If a memory read from one of the channels returns incorrect data due to an uncorrectable memory error, the system automatically retrieves the data from the other channel.
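The mirroring behavior described above can be sketched as follows. This is a hypothetical illustration of the concept, not the actual memory-subsystem implementation:

```python
# Hypothetical sketch: with mirroring, every write lands on two adjacent
# channels, so a read can fall back to the mirror copy when one channel
# returns an uncorrectable error.
class MirroredChannels:
    def __init__(self) -> None:
        self.primary: dict[int, int] = {}
        self.mirror: dict[int, int] = {}

    def write(self, addr: int, value: int) -> None:
        # Identical data is written to both channels simultaneously.
        self.primary[addr] = value
        self.mirror[addr] = value

    def read(self, addr: int, primary_ok: bool = True) -> int:
        # On an uncorrectable error in the primary channel, the subsystem
        # transparently returns the mirrored copy instead.
        return self.primary[addr] if primary_ok else self.mirror[addr]

m = MirroredChannels()
m.write(0x10, 42)
print(m.read(0x10, primary_ok=False))  # -> 42
```

Note that mirroring trades capacity for resilience, since each datum occupies two channels.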
■ When mixing different densities of 1600-MHz RDIMMs within a channel, memory will run at 1.5 V (performance mode) only.
■ You cannot mix RDIMMs with LRDIMMs.
■ DIMMs default to performance mode (1.5 V). To run DIMMs in power-savings mode (1.35 V), change the server BIOS settings.
■ To optimize memory performance:
  — Configure DIMMs identically for each CPU
  — Fill banks equally across the CPUs
Table 5 DIMM Speeds for Systems Shipping with E5-4600 v2 Series CPUs (continued) For Systems Shipping with E5-4600 v2 Series CPUs 1866 DIMM (1.5V only) 1DPC NA 1333 NA 1333 NA 1600 NA 1600 NA 1866 NA 1866 2DPC NA 1333 NA 1333 NA 1600 NA 1600 NA 1866 NA 1866 3 DPC NA 1066 NA 1066 NA 1066 NA 16 GB 1333 NA 1066 NA 1333 8 GB 1066 For more information regarding memory, see DIMM and CPU Layout on page 37.
STEP 4 CHOOSE DISK DRIVES (OPTIONAL)

The UCS B420 M3 can be ordered with or without drives. The B420 M3 provides:
■ Four hot-plug 2.5-inch SFF drive bays
■ An embedded LSI 2208R RAID controller to provide RAID 0/1/5/10

NOTE: The UCS B420 M3 blade server meets the external storage target and switch certifications as described in the following link: http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.
Supported Configurations
■ Select 1, 2, 3, or 4 of the drives listed in Table 6.
■ When creating a RAID volume, mixing different capacity drives causes the system to use the lowest-capacity drive.
■ Mixing of drive types is supported, but performance may be impacted. RAID volumes should use the same media type.
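The lowest-capacity-drive rule translates into a simple usable-capacity estimate. The following is a minimal sketch assuming idealized RAID arithmetic, not actual LSI 2208R controller behavior:

```python
# Hypothetical sketch: when drives of different capacities are mixed in one
# RAID volume, every member is sized at the smallest drive's capacity.
def raid_usable_gb(drive_sizes_gb: list[int], level: int) -> int:
    """Approximate usable capacity for the RAID levels offered (0/1/5/10)."""
    n = len(drive_sizes_gb)
    per_drive = min(drive_sizes_gb)   # mixed sizes truncate to the smallest
    if level == 0:
        return per_drive * n          # striping, no redundancy
    if level in (1, 10):
        return per_drive * n // 2     # mirrored pairs
    if level == 5:
        return per_drive * (n - 1)    # one drive's worth of parity
    raise ValueError("unsupported RAID level")

print(raid_usable_gb([600, 600, 900, 900], 5))  # -> 1800
```

In the example, the two 900-GB drives contribute only 600 GB each, which is why same-capacity (and same-media-type) drives are recommended per volume.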
STEP 5 CHOOSE ADAPTERS

The adapter offerings are:
■ Cisco Virtual Interface Cards (VICs)
Cisco-developed Virtual Interface Cards (VICs) provide flexibility to create multiple NIC and HBA devices. The VICs also support UCS Fabric Extender technologies.
■ Converged Network Adapters (CNAs)
Emulex and QLogic Converged Network Adapters (CNAs) consolidate Ethernet and storage (FC) traffic on the Cisco Unified Fabric.
Table 7 Supported Adapters (continued)

Product ID (PID): PID Description
UCSB-F-FIO-785M: Cisco UCS 785 GB MLC Fusion ioDrive2
UCSB-F-FIO-365M: Cisco UCS 365 GB MLC Fusion ioDrive2
UCSB-F-LSI-400S: LSI 400 GB SLC WarpDrive

Notes:
1. Do not mix Fusion io storage accelerator families. That is, do not mix “MP” or “MS” (ioMemory3) with “M” (ioDrive2) family cards.
Table 9 Supported Adapter1 Combinations (4-CPU Configuration) (continued)

Adapter Slot 1 | Adapter Slot 2 | Adapter Slot 3
Not populated | Not populated | Emulex or QLogic adapter
Not populated | Emulex or QLogic adapter | Emulex or QLogic adapter
VIC 1240 | Port Expander Card | VIC 1280
VIC 1240 | Not populated | Cisco UCS Storage Accelerator3
VIC 1240 | Cisco UCS Storage Accelerator3 | Cisco UCS Storage Accelerator3
Not populated | Cisco UCS Storage Accelerator | VIC 1280
VIC 1240 | VIC 1280
Table 10 Supported Adapter1 Combinations (2-CPU Configuration) (continued) Adapter Slot 1 Adapter Slot 22 Adapter Slot 3 Not populated Not populated Emulex or QLogic adapter VIC 1240 Port Expander Card VIC 1280 20 Gb Figure 16 on page 49 160 Gb Not populated Cisco UCS Storage Accelerator Figure 18 on page 50 40 Gb Figure 19 on page 51 VIC 1240 Total Available Bandwidth 20 Gb Figure 28 on page 57 80 Gb Figure 30 on page 59 20 Gb Figure 39 on page 66 Network I/O not suppo
STEP 6 ORDER A TRUSTED PLATFORM MODULE (OPTIONAL)

Trusted Platform Module (TPM) is a computer chip (microcontroller) that can securely store artifacts used to authenticate the platform (server). These artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to store platform measurements that help ensure that the platform remains trustworthy.
STEP 7 ORDER CISCO FLEXIBLE FLASH SECURE DIGITAL CARDS

Dual SDHC flash card sockets are provided on the front left side of the server. Mirroring of two SDHC cards is supported with UCS Manager 2.2x and later. The SDHC card ordering information is listed in Table 12.
STEP 8 ORDER INTERNAL USB 2.0 DRIVE (OPTIONAL)

You may order one optional internal USB 2.0 drive. The USB drive ordering information is listed in Table 13.

Table 13 USB 2.0 Drive
Product ID (PID): UCS-USBFLSH-S-4GB
Description: 4GB Flash USB Drive (shorter length) for all M3 servers

NOTE: A clearance of 0.950 inches (24.1 mm) is required for the USB device to be inserted and removed (see the following figure). See Figure 5 on page 35 for the location of the USB connector.
STEP 9 ORDER FLASH-BACKED WRITE CACHE (OPTIONAL)

You may order an optional 1 GB flash-backed write cache, which backs up the data written to the RAID controller write cache in the event of a power failure. The flash-backed write cache consists of a 1 GB memory module and a supercapacitor power backup module that connects to the motherboard with a cable. The ordering information is shown in Table 14.
STEP 10 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE

Several operating systems and value-added software programs are available. Select as desired from Table 15.
Table 15 OSs and Value-Added Software (for 4-CPU servers) (continued)

Product ID (PID): Description
SLES-HGC-2S-5A: SUSE Linux GEO Clustering for HA (1-2 CPU); 5yr Support Reqd
SLES-SAP-2S-1G-1A: SLES for SAP Applications (1-2 CPU, 1 Phys); 1yr Support Reqd
SLES-SAP-2S-1G-3A: SLES for SAP Applications (1-2 CPU, 1 Phys); 3yr Support Reqd
SLES-SAP-2S-1G-5A: SLES for SAP Applications (1-2 CPU, 1 Phys); 5yr Support Reqd
SLES-SAP-2S-UG-1A: SLES for SAP Applications (1-2 CPU, Unl Vrt)
Table 15 OSs and Value-Added Software (for 4-CPU servers) (continued)

Product ID (PID): Description
CIMC-SUP-BASE-K9: IMC Supervisor One-time Site Installation License
CIMC-SUP-TERM: Acceptance of Cisco IMC Supervisor License Terms

VMware 5
VMW-VS5-STD-1A: VMware vSphere 5 Standard for 1 Processor, 1 Year, Support Rqd
VMW-VS5-STD-2A: VMware vSphere 5 Standard for 1 Processor, 2 Year, Support Rqd
VMW-VS5-STD-3A: VMware vSphere 5 Standard for 1 Processor, 3 Year, Support Rqd
STEP 11 CHOOSE OPERATING SYSTEM MEDIA KIT (OPTIONAL)

Choose the optional operating system media listed in Table 16.
STEP 12 CHOOSE SERVICE and SUPPORT LEVEL

A variety of service options are available, as described in this section.

Unified Computing Warranty, No Contract

If you have noncritical implementations and choose to have no service contract, the following coverage is supplied:
■ Three-year parts coverage.
■ Next business day (NBD) onsite parts replacement eight hours a day, five days a week.
■ 90-day software warranty on media.
SMARTnet for UCS Hardware Only Service

For faster parts replacement than is provided with the standard Cisco Unified Computing System warranty, Cisco offers the Cisco SMARTnet for UCS Hardware Only Service. You can choose from two levels of advanced onsite parts replacement, with coverage in as little as four hours. SMARTnet for UCS Hardware Only Service also provides around-the-clock remote access to Cisco support professionals who can determine whether a return materials authorization (RMA) is required.
See Table 19.
You can choose a service listed in Table 21.
Table 22 Drive Retention Service Options

Service Description: SMARTnet for UCS Service with Drive Retention (Service Program Name: UCS DR)
■ GSP UCSD7, Service Level 24x7x4 Onsite, PID CON-UCSD7-B420M3
■ GSP UCSD5, Service Level 8x5xNBD Onsite, PID CON-UCSD5-B420M3

Service Description: SMARTnet for UCS HW ONLY+Drive Retention (Service Program Name: UCS HW+DR)
■ GSP UCWD7, Service Level 24x7x4 Onsite, PID CON-UCWD7-B420M3
■ GSP UCWD5, Service Level 8x5xNBD Onsite, PID CON-UCWD5-B420M3

For more service and support information, see the following URL: http://www.cisco.
STEP 13 CHOOSE LOCAL KVM I/O CABLE* (OPTIONAL)

The local KVM I/O cable ships with every UCS 5100 Series blade chassis accessory kit. The cable provides a connection into the server, with a DB9 serial connector, a VGA connector for a monitor, and dual USB ports for a keyboard and mouse. With this cable, you can create a direct connection to the operating system and the BIOS running on the server. The local KVM I/O cable ordering information is listed in Table 23.
SUPPLEMENTAL MATERIAL

System Board

A top view of the UCS B420 M3 system board is shown in Figure 5.
7 Diagnostics button (factory use only)
8 1-GB Transportable Flash Module (TFM) for flash-backed write cache
15 Supercap module for flash-backed write cache
16 Adapter slot 3

Notes:
1. The B420 M3 motherboard labels this slot “mLOM”
2. The B420 M3 motherboard labels this slot “mezz 1”
3.
DIMM and CPU Layout

Memory is organized as shown in Figure 6.
Each CPU controls four memory channels and 12 DIMM slots, as follows:
■ CPU1: Channels A, B, C, and D
  — Bank 0 - A0, B0, C0, and D0 (blue DIMM slots)
  — Bank 1 - A1, B1, C1, and D1 (black DIMM slots)
  — Bank 2 - A2, B2, C2, and D2 (white DIMM slots)
■ CPU2: Channels E, F, G, and H
  — Bank 0 - E0, F0, G0, and H0 (blue DIMM slots)
  — Bank 1 - E1, F1, G1, and H1 (black DIMM slots)
  — Bank 2 - E2, F2, G2, and H2 (white DIMM slots)
■ CPU3: Channels I, J, K, and L
  — Bank 0 - I
Table 24 DIMM Population Order per CPU (continued)

8 DIMMs per CPU:
  Populate CPU 1 Slots: A0, B0, C0, D0, A1, B1, C1, D1
  Populate CPU 2 Slots: E0, F0, G0, H0, E1, F1, G1, H1
  Populate CPU 3 Slots: I0, J0, K0, L0, I1, J1, K1, L1
  Populate CPU 4 Slots: M0, N0, O0, P0, M1, N1, O1, P1
9 DIMMs per CPU:
  Populate CPU 1 Slots: A0, B0, C0, A1, B1, C1, A2, B2, C2
  Populate CPU 2 Slots: E0, F0, G0, E1, F1, G1, E2, F2, G2
  Populate CPU 3 Slots: I0, J0, K0, I1, J1, K1, I2, J2, K2
  Populate CPU 4 Slots: M0, N0, O0, M1, N1, O1, M2, N2, O2
10 DIMMs per CPU: Not recommended for performance reasons
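The slot naming used in this table follows a regular pattern: each CPU owns four channel letters, and each channel has three banks (0 = blue, 1 = black, 2 = white). The sketch below is for illustration only, not a Cisco utility:

```python
# Hypothetical sketch of the DIMM slot naming: channel letter + bank number.
CPU_CHANNELS = {1: "ABCD", 2: "EFGH", 3: "IJKL", 4: "MNOP"}

def slots_for_cpu(cpu: int) -> list[str]:
    """List all 12 DIMM slots owned by a CPU, bank-major (bank 0 first)."""
    return [f"{ch}{bank}" for bank in range(3) for ch in CPU_CHANNELS[cpu]]

print(slots_for_cpu(1))
# -> ['A0', 'B0', 'C0', 'D0', 'A1', 'B1', 'C1', 'D1', 'A2', 'B2', 'C2', 'D2']
```

Four CPUs times 12 slots gives the server's 48 DIMM slots.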
DIMM Physical Layout

The overall DIMM and CPU physical layout is shown in Figure 7.
Figure 8 shows how channels are physically laid out on the blade server. The DIMM slots are contiguous to their associated CPU.
Network Connectivity

This section shows how the supported adapter card configurations for the B420 M3 connect to the Fabric Extender modules in the 5108 blade server chassis.

There are three configurable adapter slots on the B420 M3. One slot supports only the VIC 1240 adapter, and two additional slots accommodate Cisco, Emulex, or QLogic adapters, as well as Cisco UCS Storage Accelerator adapters. Table 9 on page 18 and Table 10 on page 19 show supported adapter configurations.
Figure 9 shows the configuration for maximum bandwidth, where the following ports are routed to Fabric Extender Modules A and B inside the 5108 blade server chassis:
■ Two 2 x 10G KR ports from the VIC 1240 adapter
■ Two 2 x 10G KR ports from the Port Expander
■ Two 4 x 10G KR ports from the VIC 1280 adapter

The resulting aggregate bandwidth is 160 Gb (80 Gb to each Fabric Extender).
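The 160-Gb figure is simply the 10G KR lane count times 10 Gb, split evenly between the two Fabric Extenders. A minimal sketch of the arithmetic, with lane counts taken from the bullets above (illustrative only):

```python
# Hypothetical sketch: aggregate bandwidth = active 10G KR lanes x 10 Gb,
# divided evenly between Fabric Extenders A and B (Figure 9 configuration).
LANES_10G = {
    "VIC 1240": 4,       # two 2 x 10G KR port groups
    "Port Expander": 4,  # two more 2 x 10G KR groups passed through
    "VIC 1280": 8,       # two 4 x 10G KR port groups
}

total_gb = 10 * sum(LANES_10G.values())
per_fex_gb = total_gb // 2
print(total_gb, per_fex_gb)  # -> 160 80
```

The same arithmetic applies to the lower-lane-count configurations in the figures that follow.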
VIC 1240 and Port Expander

Adapter slot 1 is dedicated to the VIC 1240 adapter, and no other adapter card can be installed in this slot. There are two groups of four ports on the VIC 1240:
■ Two ports of the first group and two ports of the second group are wired through the UCS 5108 Blade Server chassis to Fabric Extender A and Fabric Extender B.
■ The other two ports of each group are wired to adapter slot 2.
Connectivity Using the Cisco UCS 2208XP Fabric Extender

The connectivity options shown in Figure 11 through Figure 21 are summarized in Table 30.
In Figure 11, two ports from the VIC 1240 adapter are channeled to 2208XP Fabric Extender A and two are channeled to 2208XP Fabric Extender B. The result is 20 Gb of bandwidth to each Fabric Extender.

Figure 11 VIC 1240 (adapter slots 2 and 3 empty)

In Figure 12, two ports from the VIC 1240 are channeled to 2208XP Fabric Extender A and two are channeled to 2208XP Fabric Extender B. Adapter slot 2 is empty.
In Figure 13, four ports from the VIC 1280 are channeled to 2208XP Fabric Extender A and four are channeled to 2208XP Fabric Extender B. The VIC 1240 slot is empty and adapter slot 2 is empty. The result is 40 Gb of bandwidth to each Fabric Extender. This is not supported in 2-CPU configurations.
In Figure 15, two ports from the VIC 1240 are channeled to 2208XP Fabric Extender A and two are channeled to 2208XP Fabric Extender B. The Port Expander Card installed in adapter slot 2 acts as a pass-through device, channeling two ports to each of the Fabric Extenders. Adapter slot 3 is empty. The result is 40 Gb of bandwidth to each Fabric Extender.

Figure 15 VIC 1240 and Port Expander in Adapter Slot 2 (adapter slot 3 empty)

In Figure 16, there is no VIC 1240 installed.
Figure 16 One Emulex or QLogic Adapter Installed in Adapter Slot 3 (other two slots empty)

In Figure 17, there is no VIC 1240 installed. In this case, two Emulex or QLogic adapters are installed in adapter slots 2 and 3. Ports A and B of each adapter connect to the Fabric Extenders, providing 20 Gb to each Fabric Extender. This is not supported in 2-CPU configurations.
In Figure 18, two ports from the VIC 1240 are channeled to 2208XP Fabric Extender A and two are channeled to 2208XP Fabric Extender B. The Port Expander Card installed in adapter slot 2 acts as a pass-through device, channeling two ports to each of the Fabric Extenders. In addition, the VIC 1280 channels four ports to each Fabric Extender. The result is 80 Gb of bandwidth to each Fabric Extender.
In Figure 19, two ports from the VIC 1240 adapter are channeled to 2208XP Fabric Extender A and two are channeled to 2208XP Fabric Extender B. The result is 20 Gb of bandwidth to each Fabric Extender. A Cisco UCS Storage Accelerator adapter is installed in slot 2, but provides no network connectivity.
In Figure 21, four ports from the VIC 1280 are channeled to 2208XP Fabric Extender A and four are channeled to 2208XP Fabric Extender B. The VIC 1240 slot is empty and adapter slot 2 contains a Cisco UCS Storage Accelerator (which has no network connectivity). The result is 40 Gb of bandwidth to each Fabric Extender. This configuration is not supported for 2-CPU systems.
In Figure 22, two ports from the VIC 1240 adapter are channeled to 2208XP Fabric Extender A and two are channeled to 2208XP Fabric Extender B. The result is 20 Gb of bandwidth to each Fabric Extender. Four ports from the VIC 1280 are channeled to 2208XP Fabric Extender A and four are channeled to 2208XP Fabric Extender B. The result is 40 Gb of bandwidth to each Fabric Extender. The total bandwidth for the VIC 1240 and VIC 1280 together is 120 Gb.
Connectivity Using the Cisco UCS 2204XP Fabric Extender

The connectivity options shown in Figure 23 through Figure 30 are shown in Table 31.
In Figure 23, one port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled to 2204XP Fabric Extender B. The result is 10 Gb of bandwidth to each Fabric Extender.

Figure 23 VIC 1240 (adapter slots 2 and 3 empty)

In Figure 24, one port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled to 2204XP Fabric Extender B. Adapter slot 2 is empty.
In Figure 25, two ports from the VIC 1280 are channeled to 2204XP Fabric Extender A and two are channeled to 2204XP Fabric Extender B. The VIC 1240 slot is empty and adapter slot 2 is empty. The result is 20 Gb of bandwidth to each Fabric Extender. This is not supported in 2-CPU configurations.
In Figure 27, one port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled to 2204XP Fabric Extender B. The Port Expander Card installed in adapter slot 2 acts as a pass-through device, channeling one port to each of the Fabric Extenders. Adapter slot 3 is empty. The result is 20 Gb of bandwidth to each Fabric Extender.
In Figure 29, there is no VIC 1240. Two Emulex or QLogic adapters are installed, one in each of the adapter slots. Ports A and B of each adapter card connect to the Fabric Extenders, providing 20 Gb to each Fabric Extender. This configuration is not supported for 2-CPU systems.
In Figure 30, one port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled to 2204XP Fabric Extender B. The Port Expander Card installed in adapter slot 2 acts as a pass-through device, channeling one port to each of the Fabric Extenders. In addition, the VIC 1280 channels two ports to each Fabric Extender. The result is 40 Gb of bandwidth to each Fabric Extender.
In Figure 31, one port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled to 2204XP Fabric Extender B. The result is 10 Gb of bandwidth to each Fabric Extender. A Cisco UCS Storage Accelerator adapter is installed in slot 2, but provides no network connectivity.
In Figure 33, two ports from the VIC 1280 are channeled to 2204XP Fabric Extender A and two are channeled to 2204XP Fabric Extender B. The VIC 1240 slot is empty and adapter slot 2 contains a Cisco UCS Storage Accelerator (which has no network connectivity). The result is 20 Gb of bandwidth to each Fabric Extender. This configuration is not supported for 2-CPU systems.
In Figure 34, one port from the VIC 1240 is channeled to 2204XP Fabric Extender A and one is channeled to 2204XP Fabric Extender B. The VIC 1280 installed in adapter slot 3 channels two ports to each of the Fabric Extenders. Adapter slot 2 contains a Cisco UCS Storage Accelerator (which has no network connectivity). This configuration is not supported for 2-CPU systems. The result is 30 Gb of bandwidth to each Fabric Extender.
Connectivity Using the Cisco UCS 2104XP Fabric Extender

The connectivity options shown in Figure 35 through Figure 41 are shown in Table 32.
In Figure 35, one port from the VIC 1240 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B. The result is 10 Gb of bandwidth to each Fabric Extender.

Figure 35 VIC 1240 (adapter slots 2 and 3 empty)

In Figure 36, one port from the VIC 1240 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B. The VIC 1280 installed in adapter slot 3 channels one port to each of the Fabric Extenders.
In Figure 37, one port from the VIC 1280 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B. The VIC 1240 slot is empty and adapter slot 2 is empty. The result is 10 Gb of bandwidth to each Fabric Extender. This is not supported for 2-CPU configurations.

Figure 37 VIC 1280 (VIC 1240 and adapter slot 2 are empty)

In Figure 38, one port from the VIC 1240 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B.
In Figure 39, there is no VIC 1240. In this case, an Emulex or QLogic adapter is installed in adapter slot 3. Ports A and B of the adapter card connect to the Fabric Extenders, providing 10 Gb per port.

Figure 39 One Emulex or QLogic Adapter (VIC 1240 slot and adapter slot 2 are empty)

In Figure 40, one port from the VIC 1240 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B. The result is 10 Gb of bandwidth to each Fabric Extender.
In Figure 41, one port from the VIC 1240 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B. The result is 10 Gb of bandwidth to each Fabric Extender. Cisco UCS Storage Accelerators (which provide no network connectivity) are installed in slots 2 and 3. This configuration is not supported for 2-CPU systems.
In Figure 43, one port from the VIC 1240 is channeled to 2104XP Fabric Extender A and one is channeled to 2104XP Fabric Extender B. The VIC 1280 installed in adapter slot 3 channels one port to each of the Fabric Extenders. Adapter slot 2 contains a Cisco UCS Storage Accelerator (which has no network connectivity). This configuration is not supported for 2-CPU systems. The result is 20 Gb of bandwidth to each Fabric Extender.
TECHNICAL SPECIFICATIONS

Dimensions and Weight

Table 33 UCS B420 M3 Dimensions and Weight
Parameter: Value
Height: 1.95 in. (50 mm)
Width: 16.5 in. (419 mm)
Depth: 24.4 in. (620 mm)
Weight: Base server weight (no CPUs, no memory, no adapter cards, no USB, 3 baffles, no SD cards, no HDDs, 4 HDD fillers, no SuperCap, no TFM) = 20.7 lbs (9.