Realize new workload migration and consolidation possibilities

Plan the new system
Determine I/O types
This is more than simply counting the number of I/O ports or cards used in the current environment. Many of the currently shipping I/O cards are significantly faster than those supported by the legacy cell-based servers, and vPars v6.1 can allocate the individual ports of a multi-port card to different vPars and even share physical ports, so you will probably need fewer I/O cards on the new system than you have on the old one.
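As a minimal sketch, the existing I/O can be inventoried with standard HP-UX commands; fc and lan are the usual ioscan device classes for Fibre Channel and LAN adapters, and the port count shown here is approximate:

    # On the existing cell-based server: list Fibre Channel HBAs
    # with hardware paths and device files.
    ioscan -fnC fc

    # List LAN interfaces the same way.
    ioscan -fnC lan

    # Approximate count of claimed FC ports; this is the starting
    # point for estimating how far NPIV sharing can reduce the count.
    ioscan -fC fc | grep -c CLAIMED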
Determine server type
This is mostly a sizing consideration. The choices include the Integrity-based c-Class i2 server blades, the rx2800 i2, and the Superdome 2.
Extract data from the existing configuration
Using “vparstatus -v”, determine the resources used by the current configuration. Convert these resources to their equivalents for new vPars and record what you need.
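For example, the verbose output (and the list of unassigned resources) can be captured to files for later reference; the file names below are arbitrary:

    # On the existing server, capture the verbose resource layout
    # (CPUs, memory, I/O assignments) of all virtual partitions.
    vparstatus -v > /tmp/vpar_layout.txt

    # Also capture the resources not assigned to any vPar.
    vparstatus -A > /tmp/vpar_available.txt

A few rules to consider are: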
• Assume a one-for-one replacement of processor cores in the virtual partitions. Add a minimum of one core for the VSP (for vPar management tasks) to get your total requirement.
• Assume that the operating system and key layered software (the “operating environment”) will take approximately 36 GB of disk space. Contact your application vendors to determine whether your application requirements have changed.
• If your current vPars are running 11i v3, allocate the same amount of memory to the new ones as the existing ones have (unless you are memory limited). If your current vPars are running 11i v2, keep in mind that these will need to be updated to 11i v3, so add at least 1 GB of memory to each of them; the memory requirements for 11i v3 are higher than those for 11i v2.
• Consider how much growth you want to provide for (both in terms of additional vPars and in terms of making the existing ones bigger).
• Allocate an additional 20 percent of the total memory of the vPars being migrated (plus any new vPars you plan to create) to the VSP. For example, if a server will run 6 vPars with a total of 30 GB of memory between them, allocate an additional 6 GB of memory for the VSP. (The sizing sketch after this list applies these rules.)
• If you are currently running on a cell-based server, assume that you can use NPIV to share Fibre Channel ports without performance impact (the new Fibre Channel HBAs are much faster than the old ones). Unless you have I/O-bound workloads, assume that you can share a Fibre Channel port eight ways without problems. Bear in mind that if High Availability (HA) is needed, you will need paths available over more than one physical port. Also consider that if you use Virtual Connect, the physical HBA may already be divided by the Virtual Connect configuration.
• If you are on a cell-based server, assume that you can use a virtual switch to share a single LAN connection approximately eight ways. Again, bear in mind any HA requirements you may have. If you need to dedicate a physical LAN port to a single vPar for performance reasons, consider DIO, which not only dedicates the hardware but also reduces the load on the VSP.
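The sketch below encodes these sizing rules in a small POSIX shell script. It is illustrative only: the add_vpar helper is hypothetical, the per-vPar figures are taken from table 2, and a real plan should come from your recorded vparstatus data.

    #!/usr/bin/sh
    # Hypothetical sizing helper applying the rules in this section.
    TOTAL_CORES=0
    TOTAL_MEM_GB=0
    TOTAL_DISK_GB=0

    # add_vpar <cores> <mem_GB> <os>   (os is 11iv2 or 11iv3)
    add_vpar()
    {
        cores=$1 mem=$2 os=$3
        # 11i v2 vPars must be updated to 11i v3: add 1 GB of memory.
        [ "$os" = "11iv2" ] && mem=$((mem + 1))
        TOTAL_CORES=$((TOTAL_CORES + cores))
        TOTAL_MEM_GB=$((TOTAL_MEM_GB + mem))
        # Roughly 36 GB of disk per operating environment.
        TOTAL_DISK_GB=$((TOTAL_DISK_GB + 36))
    }

    add_vpar 3 6 11iv3      # db1
    add_vpar 2 4 11iv2      # app1 (will be updated to 11i v3)

    # At least one core for the VSP itself.
    TOTAL_CORES=$((TOTAL_CORES + 1))
    # 20 percent of the vPars' total memory for the VSP, rounded up.
    VSP_MEM_GB=$(((TOTAL_MEM_GB + 4) / 5))

    echo "Cores needed:      $TOTAL_CORES"
    echo "vPar memory (GB):  $TOTAL_MEM_GB"
    echo "VSP memory (GB):   $VSP_MEM_GB"
    echo "OS disk (GB):      $TOTAL_DISK_GB"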
A sample record of decisions is presented in table 2.
Table 2. Example vPar requirements log
vPar name | Cores | RAM  | # FC ports (NPIV) | Non-NPIV storage: vPar view | Non-NPIV storage: backing store (file, disk, or lvol) | Internal-only virtual LAN | Virtual LAN vswitches     | # DIO LANs (physical)
db1       | 3     | 6 GB | 2                 | -                           | -                                                      | recovLAN                  | heartbeat, dbLAN, mgmtLAN | 0
app1      | 2     | 4 GB | 1                 | -                           | -                                                      | recovLAN                  | dbLAN, appLAN, mgmtLAN    | 0