------------ ---------- ---------- ------------ -----------------
31:0x006000       72544     524288 64:0x000002  /dev/vg00/lvol2
                        ----------
                            524288
Compressed dumps
Even with the selective dump feature, a Superdome equipped with 256 GB of RAM would take hours
to write the dump to the dump devices. The bottleneck when copying system memory to disk is the
I/O path. This could be alleviated by dumping to multiple disks in parallel, but the system
firmware (IODC) is not designed to permit multiple simultaneous I/O requests. Thus the only
remaining approach is to limit the amount of I/O that has to be done.
As a remedy, a feature called compressed dumps is available as of the HP-UX Itanium release
11i v2 (i.e. UX 11.23) and, via patches, for UX 11i v1 (i.e. UX 11.11). The data is compressed
(using the LZO algorithm) before being written out to the dump device. When the system crashes,
the dump subsystem assigns one processor to perform the writes to the dump device(s) and
another four processors to perform the compression in parallel.
The dump compression feature is targeted at large-memory systems. The following requirements
must be met (a quick check is shown below):
Systems:       Superdome, Keystone, Matterhorn and Prelude
OS:            PA-RISC: UX 11i v1 (11.11) + patch
               Itanium: UX 11i v2 (11.23)
Configuration: at least 2 GB RAM
               at least 5 processors (1 writer + 4 compressors)
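Whether a given system satisfies the processor and memory requirements can be verified with
standard commands; the lines below are only a sketch, and the exact output differs between
releases and platforms:
# ioscan -kC processor | grep -c processor
# dmesg | grep -i physical
# machinfo
The first command counts the configured processors, the second reports the installed physical
memory in KB from the boot messages, and machinfo(1) prints a CPU and memory summary on
Itanium systems.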
The compression option is turned ON by default, but this is just a hint to the kernel. At the
time of a system crash, the dump subsystem examines the state of the system and its resources
to determine whether compression is possible at all. Depending on the resources available, the
system then decides dynamically whether to dump compressed or not.
Other situations can cause the dump subsystem to decide against a compressed dump, e.g. a
recursive panic or a memory allocation failure. All of these are logged on the system console
while the crash dump is taken and are flagged in the kernel.
HP cannot guarantee a specific compression factor. Compression ratios always depend on the
type of data being compressed, in particular on how random it is. With the default selective
dump configuration the dump should speed up by at least a factor of 3; more typically,
customers will experience a factor of 7.
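To put these factors into perspective, consider a back-of-the-envelope calculation; the dump
size and throughput figures are merely assumptions for illustration, not measured values.
Suppose a selective dump comprises 30 GB and the single dump path sustains about 50 MB/s:
writing it uncompressed takes roughly 30 * 1024 MB / 50 MB/s = 614 s, i.e. a good 10 minutes.
With a compression factor of 7 only about 4.3 GB have to cross the I/O path, cutting the dump
to roughly 90 seconds, provided the four compressing processors can keep the writing
processor busy.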
The crashconf(1M) command was enhanced so that dump compression can be configured and verified:
# crashconf -c on
# crashconf -v
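The first command turns compression on; presumably the complementary argument
(crashconf -c off) disables it again, but consult crashconf(1M) on the release in question.
In the verbose listing of crashconf -v, look for the status line that reports the compression
setting, e.g. a line along the lines of "Dump compressed: ON" (the exact wording may vary
between releases).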