The controller disables read-ahead when it detects non-sequential read activity. HP Smart Array
adaptive read-ahead caching thus avoids the drawback of fixed read-ahead schemes, which increase sequential
read performance but degrade random read performance.
Write-back caching
HP Smart Array controllers use a write-back caching scheme that lets host applications continue without
waiting for write operations to complete to the disk. A controller without a write-back cache returns
completion status to the OS only after it writes the data to the drives. A controller with write-back caching can
“post” write data to high-speed cache memory and immediately return “back” completion status to the OS.
The write operation completes in microseconds rather than milliseconds. The controller writes data from the
controller’s write cache to disk later, at an optimal time for the controller.
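The difference can be sketched in a few lines of Python; the class names and the 5 ms disk latency are illustrative assumptions, not controller firmware.

    import time

    def disk_write(lba, data):
        time.sleep(0.005)            # stand-in for a ~5 ms disk write

    class WriteThroughController:
        def write(self, lba, data):
            disk_write(lba, data)    # wait for the drive to finish...
            return "complete"        # ...then report completion (milliseconds)

    class WriteBackController:
        def __init__(self):
            self.cache = {}          # posted writes, keyed by logical block address

        def write(self, lba, data):
            self.cache[lba] = data   # "post" the data to cache memory
            return "complete"        # report completion immediately (microseconds)

        def flush(self):
            # Later, at a time convenient for the controller, destage to disk.
            for lba, data in sorted(self.cache.items()):
                disk_write(lba, data)
            self.cache.clear()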
While write data remains in the cache, subsequent reads to the same disk location are served from the
cache; this is a “read cache hit.” Subsequent writes to the same disk location replace the data held in cache.
Read cache hits improve bandwidth and latency for applications that frequently write and read the same area
of the disk.
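A standalone sketch of this behavior, assuming a simple dictionary keyed by logical block address (illustrative only, not the controller implementation):

    write_cache = {}                 # posted write data, keyed by LBA
    disk = {}                        # stand-in for the physical drives

    def post_write(lba, data):
        write_cache[lba] = data      # a newer write replaces the cached data

    def read(lba):
        if lba in write_cache:       # "read cache hit": served from cache memory
            return write_cache[lba]
        return disk.get(lba)         # otherwise the read goes to the drives

    post_write(100, b"old")
    post_write(100, b"new")          # replaces the data held in cache for LBA 100
    assert read(100) == b"new"       # hit is served from cache, not from disk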
The write cache will typically fill up and remain full most of the time in high-workload environments. The
controller uses this opportunity to analyze the pending write commands to improve their efficiency. The
controller can use write coalescing that combines small writes to adjacent logical blocks into a single larger
write for quicker execution. The controller can also perform command reordering, rearranging the execution
order of the writes in the cache to reduce the overall disk latency. With larger amounts of write cache
memory, the Smart Array controller can store and analyze a larger number of pending write commands,
increasing the opportunities for write coalescing and command reordering while delivering better overall
performance.
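The core of command reordering and write coalescing can be sketched as follows; the function and the queue format are illustrative assumptions, not the firmware's data structures.

    def coalesce(pending):
        """pending: list of (lba, block_count) write commands.

        Sort by LBA (command reordering), then merge commands that touch
        adjacent logical blocks into a single larger write (coalescing)."""
        merged = []
        for lba, count in sorted(pending):
            if merged and lba == merged[-1][0] + merged[-1][1]:
                merged[-1] = (merged[-1][0], merged[-1][1] + count)
            else:
                merged.append((lba, count))
        return merged

    # Six small scattered writes become two larger sequential writes.
    print(coalesce([(40, 8), (0, 8), (8, 8), (48, 8), (16, 8), (56, 8)]))
    # -> [(0, 24), (40, 24)]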
Logical drives in RAID 5 and RAID 6 configurations gain higher write performance by combining adjacent
write requests to form a full stripe of data (“full-stripe write”). A write operation for RAID 5 or RAID 6
normally requires extra disk reads to compute the parity data. But if all the data required for a full stripe is
available in the cache, the controller does not require the extra disk reads. This improves write bandwidth
for sequential writes to a logical drive in a RAID 5 or RAID 6 configuration.
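As a rough illustration, assuming RAID 5 with XOR parity (the helper names and stripe layout are hypothetical): with a full stripe in cache the new parity comes entirely from cached data, while a partial-stripe write first needs the old data and old parity read back from disk.

    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together (RAID 5 parity)."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    def full_stripe_parity(cached_data_strips):
        # All data strips are in the write cache: no disk reads required.
        return xor_blocks(cached_data_strips)

    def partial_stripe_parity(old_data, old_parity, new_data):
        # Read-modify-write: old data and old parity must be read from disk first.
        return xor_blocks([old_data, old_parity, new_data])

    strips = [bytes([i] * 4) for i in (1, 2, 3)]    # three cached data strips
    print(full_stripe_parity(strips).hex())         # parity computed from cache alone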
Error checking and correcting (ECC) DRAM technology protects the data while it is in cache. Smart Array
battery-backed or flash-backed cache backup mechanisms protect the cache data against a server crash
and power loss. The controller disables caching when a battery-backed or flash-backed cache is an option
but is not installed. You can override this behavior, but doing so
opens a window for possible data loss. Disk drives provide an option to enable write caching that is not
battery backed. We advise against enabling disk drive write cache because a power failure could result in
data loss.
Cache width
Present generation Smart Array controllers support 256 MiB, 512 MiB, and 1 GiB cache modules. The 512
MiB and 1 GiB modules use a 72-bit wide (64 bits data + 8 bits parity) cache instead of the 40-bit wide
(32 bits data + 8 bits parity) cache used in the 256 MiB modules. This doubles the bandwidth for moving
cache data to and from the storage system, further increasing overall array performance.
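A back-of-envelope check of that claim (the absolute transfer rate is not stated here; only the ratio of the data-path widths matters):

    narrow_data_bits = 32   # 256 MiB module: 40-bit cache (32 data + 8 parity)
    wide_data_bits = 64     # 512 MiB / 1 GiB modules: 72-bit cache (64 data + 8 parity)
    # At the same transfer rate, the wider data path moves twice the cache data.
    print(wide_data_bits / narrow_data_bits)   # 2.0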
For more information on Smart Array cache modules, see the “Data Availability” section later in this paper.