Veritas Storage Foundation 5.0.1 Cluster File System Administrator's Guide Extracts for the HP Serviceguard Storage Management Suite on HP-UX 11i v3

Cluster Communication - LLT
LLT provides kernel-to-kernel communications and monitors network communications. The
LLT files /etc/llttab and /etc/llthosts can be configured to set system IDs within a
cluster, set cluster IDs for multiple clusters, and tune network parameters such as heartbeat
frequency. LLT is implemented so events such as cluster membership changes are reflected
quickly, which in turn enables fast responses.
LLT is configured automatically when Serviceguard is installed, and it is configured again
automatically each time the CFS package is started.
See the llttab(4) and llthosts(4) manual pages.
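Although LLT is configured automatically, the format of the two files is simple. The following is
a minimal sketch for a two-node cluster; the node names, cluster ID, and network interface names
are illustrative assumptions, not values taken from this guide:

    # /etc/llthosts: maps each LLT node ID to a host name
    0 node1
    1 node2

    # /etc/llttab: local node name, cluster ID, and heartbeat links
    set-node node1
    set-cluster 101
    link lan1 /dev/lan:1 - ether - -
    link lan2 /dev/lan:2 - ether - -

Each link line defines one private heartbeat network; configuring two links keeps the nodes
connected if a single network fails.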
Volume Manager Cluster Functionality Overview
The Veritas Cluster Volume Manager (CVM) component of the Veritas Volume Manager by
Symantec (VxVM) allows multiple hosts to concurrently access and manage a given set of logical
devices under VxVM control. A VxVM cluster is a set of hosts sharing a set of devices; each host
is a node in the cluster. The nodes are connected across a network. If one node fails, other nodes
can still access the devices. CVM presents the same logical view of device configurations and
changes on all nodes.
You configure CVM shared storage after HP Serviceguard sets up a cluster configuration.
See “Cluster File System Administration” (page 25).
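As a hedged sketch of that step, a shared disk group and a volume within it might be created
from the CVM master node along the following lines; the disk group name, disk name, and volume
size are illustrative:

    # On the CVM master node: initialize a cluster-shareable (shared) disk group
    vxdg -s init cfsdg c4t1d0

    # Create a 2 GB volume in the shared disk group
    vxassist -g cfsdg make vol1 2g

Because the disk group is created with the -s (shared) option, every node sees the same view of
it, consistent with the single logical view that CVM presents.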
Cluster File System Overview
With respect to each shared file system, a cluster includes one primary node and up to 7 secondary
nodes. The primary and secondary designation of nodes is specific to each file system, not the
hardware. The same cluster node can be primary for one shared file system while, at the same
time, being secondary for another shared file system. Distributing file system primary node
designations across the cluster to balance the load is a recommended administrative policy.
See “Distributing Load on a Cluster” (page 18).
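The primary for a particular file system can be checked, and reassigned as part of such a
load-balancing policy, with the fsclustadm command. A brief sketch, where the mount point
/mnt/cfs1 is an illustrative example:

    # Show which node is currently primary for this shared file system
    fsclustadm -v showprimary /mnt/cfs1

    # Run on the node that is to take over as primary for this file system
    fsclustadm -v setprimary /mnt/cfs1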
For CVM, a single cluster node is the master node for all shared disk groups and shared volumes
in the cluster.
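One way to check whether the local node is the CVM master is the vxdctl command:

    # Report cluster status and whether this node is the CVM master or a slave
    vxdctl -c mode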
Number of parallel fsck threads to run during recovery is tunable
In previous releases, the number of parallel fsck threads that could be active during recovery
was fixed at 4. For example, if a node failed over 12 file systems, log replay ran on only 4 of
them at a time, so replay for all 12 file systems could not complete together. The limit was set
at 4 because parallel replay of a large number of file systems can put memory pressure on
systems with less memory. On larger systems, however, restricting log replay to 4 parallel
processes is unnecessary.
In this release, the value is tuned automatically according to the amount of physical memory
available on the system. See “Setting the number of parallel fsck threads” (page 24) to set the
value within a specific range.
Setting the number of parallel fsck threads
This section describes how to set the number of parallel fsck threads.
To set the number of parallel fsck threads: On all nodes in the cluster, edit the /opt/VRTSvcs/
bin/CFSfsckd/CFSfsckd.env file and set FSCKD_OPTS="-n N", where N is the number of
parallel fsck threads that the system runs. The value of N must be between 4 and 128.
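For example, to allow up to 16 parallel fsck threads during recovery, the relevant line in
/opt/VRTSvcs/bin/CFSfsckd/CFSfsckd.env on every node would read as follows; the value
16 is only an illustration:

    FSCKD_OPTS="-n 16"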
Cluster and Shared Mounts
A VxFS file system that is mounted with the -o cluster mount option is called a cluster or shared
mount, as opposed to a non-shared or local mount. A file system mounted in shared mode must
be on a VxVM shared volume in a cluster environment.