BEA WebLogic Server Tuning Guide

3.18 Logical Network Partitioning
One important difference between the classic TCP/IP stack and the TCP/IP V6 stack is that in the TCP/IP V6 stack
there is only one manager process (typically $ZZTCP) with which all interfaces are associated, whereas in classic
TCP/IP, multiple TCP/IP processes can be configured, each with its own set of interfaces. One consequence of the
TCP/IP V6 scheme is that a process using the TCP/IP V6 stack as its provider can use any of the interfaces, so
there is no application isolation.
To address this, a new feature was introduced in the TCP/IP V6 stack shipped with the G06.22 RVU: the network
can now be logically partitioned by configuring multiple socket providers (TCP6SAM processes), each associated
with a different subset of interfaces, much as in the classic TCP/IP stack. Alias addresses can also be created in
the logical partitions.
This feature lets WebLogic Server administrators partition the network interfaces so that different members of a
WebLogic Server cluster use different network partitions, improving manageability.
Refer to the TCP/IP V6 Configuration Manual in the NonStop Technical Library, which describes logical network
partitioning in detail.
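The defining property of a logical network partition is that each interface belongs to exactly one TCP6SAM
provider. The short shell sketch below illustrates and checks that property for a made-up provider-to-interface
mapping; the provider names ($ZSAM1, $ZSAM2) and interface names (LAN01 through LAN04) are assumptions for
illustration only, and actual partitions are configured through SCF as described in the TCP/IP V6 Configuration
Manual.

```shell
# Conceptual sketch only: the provider ($ZSAM1, $ZSAM2) and interface
# (LAN01..LAN04) names are assumptions, not real configuration.
# Each interface must belong to exactly one logical partition; this
# check reports any interface claimed by more than one provider.
partitions='$ZSAM1 LAN01
$ZSAM1 LAN02
$ZSAM2 LAN03
$ZSAM2 LAN04'

dups=$(printf '%s\n' "$partitions" | awk '{print $2}' | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "partitions are disjoint"
else
  echo "interface(s) in more than one partition: $dups"
fi
```

If an interface were listed under two providers, the check would report it, signaling that the partitions overlap
and application isolation is lost.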
4.0 Fault Tolerance Considerations
The nodemanager launch script (nodemanager.sh), provided with the WebLogic Server Toolkit for 8.1 SP3, has
been enhanced to let the administrator specify the processor selection per managed server (along with the load-
balancing schemes). This is helpful for placing the managed servers of different replication groups on non-
overlapping processor lists, so that at least one of the primary and backup copies of replicated state (HTTP
sessions, stateful session bean state, entity bean state, and EJB handles) is always available.
For example, assume that a cluster is defined with two replication groups, Group1 with servers ms1, ms2, and ms3,
and Group2 with servers ms4, ms5, and ms6.
When the cluster is started on a four-CPU system, nodemanager.sh can be configured such that servers ms1,
ms2, and ms3 are started in CPUs 0 or 1, and ms4, ms5, and ms6 are started in CPUs 2 or 3.
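The placement above can be pictured as a round-robin assignment of each group's servers over that group's own
CPU list. The shell sketch below is a hypothetical illustration of that assignment (the server names and CPU lists
come from the example; nodemanager.sh's actual option syntax is described in the Toolkit documentation):

```shell
# Hypothetical illustration only: nodemanager.sh's real option syntax is
# documented in the WebLogic Server Toolkit. place_servers round-robins
# a replication group's managed servers over that group's CPU list.
place_servers() {
  servers="$1"; cpus="$2"
  set -- $cpus
  n=$#
  i=0
  for s in $servers; do
    cpu=$(echo $cpus | cut -d' ' -f$(( i % n + 1 )))   # next CPU, round-robin
    echo "$s -> CPU $cpu"
    i=$(( i + 1 ))
  done
}

place_servers "ms1 ms2 ms3" "0 1"   # Group1 restricted to CPUs 0 and 1
place_servers "ms4 ms5 ms6" "2 3"   # Group2 restricted to CPUs 2 and 3
```

Because the two CPU lists do not overlap, losing any single CPU can take down servers from only one
replication group.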
Whenever WebLogic Server needs to choose a secondary server, it gives precedence to a server in a different
replication group. Therefore, the secondary state remains available when a CPU is lost: ms1, ms2, and ms3 will
choose a server in CPUs 2 or 3 and, similarly, ms4, ms5, and ms6 will choose a server in CPUs 0 or 1. So even if
a processor with a WebLogic Server instance fails, a secondary server is always available in one of the remaining
processors.
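The selection rule described above can be sketched as follows. This is a greatly simplified illustration of the
preference for a secondary in a different replication group, not WebLogic Server's actual selection algorithm; the
server, group, and CPU data are taken from the example.

```shell
# Greatly simplified sketch, not WebLogic Server's actual algorithm.
# Each line of $cluster is: server replication-group cpu (from the example).
cluster='ms1 Group1 0
ms2 Group1 1
ms3 Group1 0
ms4 Group2 2
ms5 Group2 3
ms6 Group2 2'

pick_secondary() {
  # look up the primary's replication group
  pgroup=$(printf '%s\n' "$cluster" | awk -v p="$1" '$1 == p { print $2 }')
  # prefer the first server whose replication group differs from the primary's
  printf '%s\n' "$cluster" | awk -v g="$pgroup" '$2 != g { print $1; exit }'
}

echo "secondary for ms1: $(pick_secondary ms1)"   # a Group2 server (ms4)
echo "secondary for ms5: $(pick_secondary ms5)"   # a Group1 server (ms1)
```

With the non-overlapping CPU lists above, the chosen secondary always runs on a CPU outside the primary's
list, which is what keeps the session state available after a processor failure.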
5.0 Performance Considerations
5.1 Number of WebLogic Server Instances per CPU
During early implementation efforts, it is desirable to set the number of instances per CPU to one if a full J2EE
application (both Web and EJB components) is deployed on the WebLogic Server instances. JSP/servlet handling
is very CPU-intensive, so reducing the number of WebLogic Server instances per CPU will improve results.
Adding more instances per CPU might be beneficial if the deployed applications are database-intensive, but even
so, throughput might suffer with more than two very active instances per CPU.