How to achieve over 2 TB/hr network backup with Integrity entry-class servers running HP-UX 11i v3

Introduction
Network backup (also known as remote backup) is widely used in environments where servers with
anywhere from a few megabytes to tens of gigabytes of data need backup and recovery, but the cost of
a direct SAN connection is not justified. Network-based backup has become feasible for servers with
larger attached data volumes primarily because of advances in network speed and performance. The
slower (10/100Base-T) network speeds available in the late 1990s made network-based backup
impractical for servers with data volumes exceeding tens of gigabytes. At that time, backing up one
terabyte of data per hour was possible only on very large servers with 20 or more locally connected
tape devices.
With the introduction of 1-gigabit (1GbE) and 10-gigabit (10GbE) Ethernet network adapters,
servers with backup data volumes exceeding tens of gigabytes are now practical candidates for
network-based backup strategies. This advance in network technology, coupled with faster tape
devices, virtual tape libraries, and faster servers with greater overall I/O throughput, makes it
possible to deploy small-footprint backup servers capable of backing up 1 to 2 terabytes of
network-delivered client data per hour.
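A rough back-of-the-envelope calculation shows why such rates are attainable. The sketch below converts nominal per-device ratings into hourly throughput; both figures are approximate vendor ratings used here as assumptions, not measurements from this study, and achievable rates are lower once protocol, filesystem, and backup-application overheads are included.

    # Back-of-the-envelope throughput estimate (nominal ratings, not measured results).
    GBE_LINK_MBPS = 125      # 1GbE theoretical maximum, roughly 125 MB/s per link
    LTO4_NATIVE_MBPS = 120   # LTO-4 native (uncompressed) rating, roughly 120 MB/s

    def tb_per_hour(mb_per_sec):
        """Convert a sustained MB/s rate into TB per hour (1 TB = 1,000,000 MB)."""
        return mb_per_sec * 3600 / 1_000_000

    # Several 1GbE client links and LTO-4 drives streaming near their ratings
    # already put the aggregate in the 1.8 to 2.2 TB/hr range.
    print("4 x 1GbE links  : %.2f TB/hr" % tb_per_hour(4 * GBE_LINK_MBPS))     # ~1.80
    print("5 x LTO-4 drives: %.2f TB/hr" % tb_per_hour(5 * LTO4_NATIVE_MBPS))  # ~2.16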
This white paper documents how HP configured standard-equipped HP Integrity rx3600 and rx6600
entry-class servers running HP-UX 11i v3 to deliver superior network backup services, readily
achieving backup performance in the two-terabyte-per-hour range. It also presents the best practices
for configuring and enabling these servers to perform at this level.
Objectives
HP set out to characterize the backup and restore performance of entry-class Integrity servers
(primarily the rx3600 and rx6600) running HP-UX 11i v3 and using the HP Data Protector utility.
HP performed this characterization with 1-gigabit Ethernet (1GbE) and 4-gigabit Fibre Channel
(4Gb FC) adapters, the technologies most commonly deployed in data centers today.
The “Best practices” section of this white paper documents how the network backup server, operating
system (OS), and backup utility are tuned for maximum backup and restore efficiency. No specific
recommendations are provided for tuning the client systems, because those systems should remain
tuned for their primary application services.
Testing methodology
The testing methodology used for this performance characterization and tuning effort is much the
same as that used to analyze any backup server environment, and it can serve as a pattern for similar
backup characterization, performance testing, and tuning efforts.
The first step is to understand the configuration of the backup environment. To build this
understanding, HP recommends creating a topology diagram that shows the backup server, clients,
data storage, backup devices, and the interconnections within the LAN and SAN.
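One lightweight way to keep such a diagram reproducible and reviewable is to record the topology in a machine-readable form. The sketch below, written against a purely hypothetical environment (the node names and link speeds are placeholders, not the configuration tested here), emits a Graphviz DOT description that can be rendered with the dot utility.

    # Hypothetical backup topology expressed as (node, node, link speed) tuples;
    # all names and speeds are illustrative placeholders.
    links = [
        ("client1", "backup_server", "1GbE"),
        ("client2", "backup_server", "1GbE"),
        ("backup_server", "fc_switch", "4Gb FC"),
        ("fc_switch", "lto4_drive1", "4Gb FC"),
        ("fc_switch", "lto4_drive2", "4Gb FC"),
    ]

    # Emit Graphviz DOT text; render with: dot -Tpng topology.dot -o topology.png
    print("graph backup_topology {")
    for a, b, speed in links:
        print('    %s -- %s [label="%s"];' % (a, b, speed))
    print("}")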
Once the environment topology is known, the next step is to identify potential throughput limitations
in the configuration and to perform stress and limits testing to discover the maximum throughput
achievable at each point. For example, in Figure 1 the key throughput bottlenecks to stress and
characterize are the 1GbE connections between the clients and the backup server, the LTO-4 tape
device throughput, and the 4Gb FC port throughput. In addition, the Data Protector backup utility
should be tuned appropriately for the environment to obtain optimal performance.
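The reasoning behind the limits testing can be summarized simply: the component class with the smallest aggregate throughput bounds the end-to-end backup rate. The sketch below illustrates this with assumed nominal ratings (the device counts and per-device rates are hypothetical examples); in practice, the measured results of the stress and limits tests should replace these figures.

    # Identify the component class whose aggregate throughput bounds the backup rate.
    def slowest_stage(stages):
        """Return (name, aggregate MB/s) of the slowest component class.

        stages maps a component name to (device count, per-device MB/s).
        """
        name = min(stages, key=lambda k: stages[k][0] * stages[k][1])
        count, rate = stages[name]
        return name, count * rate

    # Hypothetical counts and nominal per-device rates, for illustration only.
    name, mbps = slowest_stage({
        "1GbE client links": (4, 125),   # clients to backup server
        "4Gb FC ports":      (2, 400),   # backup server to SAN
        "LTO-4 tape drives": (5, 120),   # native (uncompressed) rating
    })
    print("Expected bottleneck: %s at %d MB/s (~%.1f TB/hr)"
          % (name, mbps, mbps * 3600 / 1_000_000))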