In write-through mode, the controller does not mirror the write data because the data is written to the disk before command completion is posted, so mirroring is not required. You can set conditions that cause the controller to change from write-back
caching to write-through caching.
In both caching strategies, active-active failover of the controllers is enabled.
You can enable and disable the write-back cache for each volume. By default, volume write-back cache is
enabled. Because controller cache is backed by supercapacitor technology, if the system loses power, data
is not lost. For most applications, this is the correct setting. However, because back-end bandwidth is used to mirror the cache, write-through cache can deliver much better performance when you are writing large chunks of sequential data (as in video editing, telemetry acquisition, or data logging). Therefore, you might want to experiment with disabling write-back cache. You might see large performance gains (as much as 70 percent) if you are writing data under the following circumstances:
• Sequential writes
• Large I/Os in relation to the chunk size
• Deep queue depth
If you are doing random access to this volume, leave the write-back cache enabled.
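As a rough illustration of these criteria, the following Python sketch scores a workload description against them. It is a minimal model, not logic the controller runs: the Workload fields, the suggest_write_through helper, and the thresholds (at least one chunk per I/O for "large", a queue depth of 8 for "deep") are assumptions made for this example.

# Hypothetical heuristic for deciding whether to try write-through cache on a
# volume. Field names and thresholds are illustrative assumptions, not values
# used by the MSA controller firmware.
from dataclasses import dataclass

@dataclass
class Workload:
    sequential: bool     # writes target consecutive LBAs
    io_size_kb: int      # typical write size per I/O
    queue_depth: int     # outstanding I/Os kept in flight
    chunk_size_kb: int   # chunk size chosen when the vdisk was created

def suggest_write_through(w: Workload) -> bool:
    """Return True when the workload matches the cases where disabling
    write-back cache may improve throughput: sequential writes, large I/Os
    relative to the chunk size, and a deep queue."""
    large_io = w.io_size_kb >= w.chunk_size_kb   # assumption: "large" means at least one chunk
    deep_queue = w.queue_depth >= 8              # assumption: arbitrary cutoff for "deep"
    return w.sequential and large_io and deep_queue

# Example: a video-capture style stream of 1 MB sequential writes.
print(suggest_write_through(Workload(sequential=True, io_size_kb=1024,
                                     queue_depth=32, chunk_size_kb=64)))   # True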
Optimizing read-ahead caching
CAUTION: Only change read-ahead cache settings if you fully understand how the host operating
system, application, and adapter move data so that you can adjust the settings accordingly.
You can optimize a volume for sequential reads or streaming data by changing its read-ahead cache
settings. Read ahead is triggered by two back-to-back accesses to consecutive LBA ranges, whether
forward (increasing LBAs) or reverse (decreasing LBAs).
You can change the amount of data read in advance after two back-to-back reads are made. Increasing
the read-ahead cache size can greatly improve performance for multiple sequential read streams; however,
it will likely decrease random read performance.
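To make the trigger concrete, here is a minimal sketch of the access pattern described above: two back-to-back reads to consecutive LBA ranges, in either direction. The function name and its parameters are invented for the illustration and do not represent the controller's actual detection logic.

# Simplified model of the read-ahead trigger: two back-to-back reads whose LBA
# ranges are consecutive, either forward (increasing LBAs) or reverse
# (decreasing LBAs). Illustration only, not the controller firmware.
def is_sequential_pair(prev_lba: int, prev_blocks: int,
                       curr_lba: int, curr_blocks: int) -> bool:
    forward = curr_lba == prev_lba + prev_blocks    # current read starts where the last one ended
    reverse = curr_lba + curr_blocks == prev_lba    # current read ends where the last one started
    return forward or reverse

# Two 128-block reads in a forward sequence would trigger read ahead.
print(is_sequential_pair(prev_lba=0, prev_blocks=128, curr_lba=128, curr_blocks=128))    # True
# A jump to an unrelated LBA range would not.
print(is_sequential_pair(prev_lba=0, prev_blocks=128, curr_lba=4096, curr_blocks=128))   # False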
• The Default option works well for most applications: it sets one chunk for the first access in a sequential read and one stripe for all subsequent accesses. The size of the chunk is based on the chunk size used when you created the vdisk (the default is 64 KB). Non-RAID and RAID-1 vdisks are considered to have a stripe size of 64 KB. (A worked example of this sizing rule follows this list.)
• Specific size options let you select a fixed amount of data to read ahead for all accesses.
• The Maximum option lets the controller dynamically calculate the maximum read-ahead cache size for the volume. For example, if a single volume exists, this setting enables the controller to use nearly half the memory for read-ahead cache. Use Maximum only when disk latencies must be absorbed by cache.
• The Disabled option turns off read-ahead cache. This is useful if the host triggers read ahead for what are actually random accesses. This can happen if the host breaks up random I/O into two smaller reads, triggering read ahead.
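Here is a small worked example of the Default sizing rule described in the first option above. Treating the stripe as the chunk size multiplied by the number of data disks is an assumption made for the illustration, as is the default_read_ahead_kb helper itself; non-RAID and RAID-1 vdisks are modeled with the fixed 64 KB stripe noted above.

# Worked example of the Default read-ahead rule: one chunk for the first access
# in a detected sequential read, one stripe for every subsequent access.
# Assumption: stripe size = chunk size x number of data disks; non-RAID and
# RAID-1 vdisks are treated as having a 64 KB stripe.
def default_read_ahead_kb(access_number: int, chunk_kb: int = 64,
                          data_disks: int = 1, mirrored_or_nonraid: bool = False) -> int:
    stripe_kb = 64 if mirrored_or_nonraid else chunk_kb * data_disks
    return chunk_kb if access_number == 1 else stripe_kb

# RAID-5 vdisk built from five disks (four data disks) with the default 64 KB chunk:
print(default_read_ahead_kb(1, chunk_kb=64, data_disks=4))   # 64  (first access: one chunk)
print(default_read_ahead_kb(2, chunk_kb=64, data_disks=4))   # 256 (later accesses: one stripe)

# RAID-1 vdisk: the stripe is treated as 64 KB regardless of chunk size.
print(default_read_ahead_kb(2, mirrored_or_nonraid=True))    # 64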
You can also change the optimization mode. The standard read-ahead caching mode works well for
typical applications where accesses are a combination of sequential and random; this mode is the
default. For an application that is strictly sequential and requires extremely low latency, you can use Super
Sequential mode. This mode makes more room for read-ahead data by allowing the controller to discard
cache contents that the host has already read.
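As a loose illustration of the difference between the two modes, the sketch below contrasts their eviction behavior: the standard mode keeps a block cached after the host reads it, while a strictly sequential mode can drop the block as soon as the host has consumed it, freeing room for more read-ahead data. The ReadAheadCache class and its methods are invented for this example.

# Loose illustration of the Super Sequential idea: once the host has consumed a
# read-ahead block it can be discarded immediately, leaving cache space free for
# the next read-ahead. Names and structure are invented for this sketch.
from collections import OrderedDict

class ReadAheadCache:
    def __init__(self, capacity_blocks: int, discard_after_read: bool):
        self.capacity = capacity_blocks
        self.discard_after_read = discard_after_read   # True models Super Sequential behavior
        self.blocks = OrderedDict()                    # lba -> data, oldest first

    def insert_read_ahead(self, lba: int, data: bytes) -> None:
        while len(self.blocks) >= self.capacity:       # evict the oldest block when full
            self.blocks.popitem(last=False)
        self.blocks[lba] = data

    def host_read(self, lba: int):
        data = self.blocks.get(lba)
        if data is not None and self.discard_after_read:
            del self.blocks[lba]                       # free the slot for more read-ahead data
        return data

# Super Sequential-style cache: a block is dropped as soon as the host reads it.
cache = ReadAheadCache(capacity_blocks=2, discard_after_read=True)
cache.insert_read_ahead(100, b"data")
cache.host_read(100)            # returns the data and immediately frees the slot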