
4 Dynamic workload movement with CloudSystem Matrix
This chapter explains how you can configure cross-technology logical servers that can be managed
with Matrix recovery management.
The HP Matrix Operating Environment facilitates the fluid movement of workloads between dissimilar
servers within a site and across sites. Workloads can be moved between physical servers and
virtual machines and between dissimilar physical servers.
A major trend in IT data center management is the push toward greater efficiency in the use of
computing, network, and storage resources by treating them as a shared pool from which the
resource requirements of various applications, departments, and organizations are met. Central
to this concept of a converged infrastructure is the ability to rapidly and automatically create,
move, and remove workloads on demand.
In a typical converged infrastructure implementation, a customer may use HP CloudSystem Matrix
to run the workloads and the HP Matrix Operating Environment running on a Central Management
Server (CMS) to create, move, and remove the workloads as needed. The workload, which includes
the operating system (OS) that the user application runs on, can run directly on a blade or it can
run in a virtual machine managed by a hypervisor running on the blade, for example, VMware
ESX. The blades may also have different hardware configurations or contain different versions of
hardware and firmware.
The capabilities of the HP Matrix Operating Environment discussed in this chapter allow fluid
movement of workloads in this type of heterogeneous environment. These capabilities include:
• Tools that allow the workload OS to be prepared as a portable system image that can run in
different server environments.
• Fine-grained user control over the set of specific physical servers and virtual machine hosts
on which the HP Matrix Operating Environment can run the workload.
The fluid, two-way movement of workloads across dissimilar servers described in this chapter is
different from the movement enabled by traditional migration tools. Those tools are oriented towards
enabling a one-way, permanent or semi-permanent migration between physical and virtual servers
or between dissimilar physical servers. Such a migration typically requires manual intervention
and a relatively long time to complete.
The importance of the ability to fluidly move a workload from a physical server to a virtual machine
and back can be understood from the following examples:
• You want to move your online workload running on a physical server during daily off-peak
hours to a virtual machine host, to free up the physical server to run a batch workload. When
the off-peak period is over, the batch workload is retired and the online workload is moved
back to its original execution environment.
While the online workload is "parked" on a virtual machine host, it has minimal resource
requirements; hence it has minimal impact on other workloads that may be running on that
host. Because this pattern repeats daily, the physical-to-virtual and virtual-to-physical moves
must be achieved quickly (in minutes, rather than hours) and automatically.
• You have two data centers located at two different sites. The production workloads run on
physical servers at one site and are configured to be failed over to the other (recovery) site
in case of a disaster. The recovery site is equipped with a set of servers configured as virtual
machine hosts. In this use case, planned or unplanned failovers require physical-to-virtual and
virtual-to-physical moves across sites.
The configuration of the recovery site as a set of virtual machine hosts may be driven by the
needs of test and development activities carried out at that site, or by a need to reduce the
cost of disaster recovery by running the workloads on virtual machines hosted by a smaller
set of servers. Recovery time objectives require that the moves be achieved quickly and
automatically, as in the previous example.