Using HP Global Workload Manager with Serviceguard
Using this model for the two-node active/active cluster shown in Figure 2 (captioned "Failover with adjustment to resource allocation"), you would define the workloads as shown in the table below. This assumes that each package can run on either node and that a different policy can be applied on each node.
The set of nodes on which a package can run is explicitly specified in the package definition using the NODE_NAME list, as illustrated in the excerpt following the table. The policies associated with each workload enable the division of resources on each node. Often you will specify the same policy for a package's workloads regardless of node. If, however, the secondary nodes differ in configuration, you can instead apply a different policy depending upon the total resources available on the target node.
Package Name    Workload on Node1    Workload on Node2
Pkg A           Pkg.A.Node1          Pkg.A.Node2
Pkg B           Pkg.B.Node1          Pkg.B.Node2
Pkg C           Pkg.C.Node1          Pkg.C.Node2
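For reference, the NODE_NAME list mentioned above lives in the Serviceguard package configuration file. The fragment below is a minimal sketch for Pkg A only; the package name pkgA and the node names node1 and node2 are placeholders, and a real package file contains many additional attributes.

    # Fragment of a Serviceguard package configuration for Pkg A (illustrative only).
    # pkgA, node1, and node2 are placeholder names.
    PACKAGE_NAME    pkgA
    NODE_NAME       node1
    NODE_NAME       node2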
In some cases, you may wish to allocate resources to applications or users that are not part of a 
package. In such a case, you must additionally define a workload to represent each of those 
nonpackage resource consumers. How detailed you must be in your workload definition depends 
upon how and where you want to enable the sharing of resources. 
Sharing Resources Only Between Hosts
If you are using nPartitions or virtual partitions to separate workloads (packages) of differing priorities, you can simplify the configuration by treating each node within a server as a workload. Consider the scenario depicted in the next figure and described earlier. This scenario includes one or more primary servers on which the applications normally run. The secondary server is configured as a shared resource domain made up of all seven vPars. To provide this capability with gWLM, each of vPars 0 through 5 is configured as a workload with a policy that ensures the application receives the desired level of resources whenever it is present. Typically, this is accomplished with an OwnBorrow policy that specifies the number of cores the application requires and a minimum allocation. When the application is not present (because it is executing on the primary server), all resources above the minimum in the policy can be shared with other workloads on the server. vPar 6 is configured as a workload with a policy that provides a minimum resource allocation but allows it to borrow additional cores when they are available.
In most circumstances the applications will be running on the primary server, and all extra resources will be loaned to the low-priority work on vPar 6. When App 3 fails over to vPar 2, gWLM detects the need for additional CPU resources and automatically transfers cores from vPar 6 to vPar 2, ensuring that the application obtains the resources it needs.
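The borrowing behavior described above can be pictured with a small allocation model. The Python sketch below is illustrative only; it is not gWLM's arbitration algorithm, and the core counts, minimums, and the allocate() helper are hypothetical values chosen to mirror the scenario: while the applications run on the primary server, vPar 6 borrows the spare cores, and when App 3 fails over to vPar 2, cores shift back to vPar 2.

    # Illustrative model of OwnBorrow-style sharing in one shared resource domain.
    # Not gWLM's actual algorithm; all names and numbers are hypothetical.

    def allocate(workloads, total_cores):
        """Give each workload its minimum, let active owners reclaim cores up to
        their owned share, then lend whatever is left to borrowers (vPar 6)."""
        alloc = {name: w["min"] for name, w in workloads.items()}
        spare = total_cores - sum(alloc.values())

        # Active owners reclaim cores up to the owned count in their policy.
        for name, w in workloads.items():
            if w.get("active") and w.get("owned", 0) > alloc[name]:
                take = min(w["owned"] - alloc[name], spare)
                alloc[name] += take
                spare -= take

        # Remaining cores are loaned to workloads allowed to borrow.
        for name, w in workloads.items():
            if w.get("borrow") and spare > 0:
                alloc[name] += spare
                spare = 0
        return alloc

    # Hypothetical 16-core secondary server: application vPars 0-5 plus vPar 6.
    vpars = {f"vpar{i}": {"min": 1, "owned": 3, "active": False} for i in range(6)}
    vpars["vpar6"] = {"min": 2, "borrow": True}

    print(allocate(vpars, 16))        # applications idle: vPar 6 borrows the spare cores
    vpars["vpar2"]["active"] = True   # App 3 fails over to vPar 2
    print(allocate(vpars, 16))        # cores move from vPar 6 back to vPar 2

In the actual deployment, gWLM performs this reallocation automatically as it detects demand; the sketch simply makes the arithmetic of minimums, owned cores, and borrowing visible.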