HP EVA Updating Product Software Guide (xcs10001000) (5697-2423, December 2012)

Table 4 HP Command View EVAPerf virtual disk statistics (continued)
Counter             Description

Write Req/s
    The number of completed write requests per second to a virtual disk received from all
    hosts. Write requests may include transfers from a source array to this array for data
    replication and host data written to snapshot or snapclone volumes.

Write MB/s
    The rate at which data is written to the virtual disk by all hosts; includes transfers
    from the source array to the destination array.

Write Latency (ms)
    The average time it takes to complete a write request (from initiation to receipt of
    write completion).

Flush MB/s
    The rate at which data is written to a physical disk for the associated virtual disk.
    The sum of flush counters for all virtual disks on both controllers is the rate at
    which data is written to the physical drives, and is equal to the total host write
    data. Data written to the destination array is included. Host writes to snapshots and
    snapclones are included in the flush statistics, but data flow for internal snapshot
    and snapclone normalization and copy-before-write activity is not included.

Mirror MB/s
    The rate at which data travels across the mirror port to complete read and write
    requests to a virtual disk. This data is not related to the physical disk mirroring
    for Vraid1 redundancy. Write data is always copied through the mirror port when cache
    mirroring is enabled for redundancy. In active/active controllers, this counter
    includes read data from the owning controller that must be returned to the requesting
    host through the proxy controller. Reported mirror traffic is always outbound from
    the referenced controller to the other controller.

Prefetch MB/s
    The rate at which data is read from the physical disk to cache in anticipation of
    subsequent reads when a sequential data stream is detected. A sequential data stream
    may be created by host I/O or other I/O activity that occurs because of a DR initial
    copy or DR full copy.
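As a quick illustration of how these counters relate (the sample values below are hypothetical, not output from a real array), the average write size for a virtual disk can be derived from Write MB/s and Write Req/s:

```shell
# Hypothetical sample counters from an EVAPerf virtual disk report.
write_req_s=1200        # Write Req/s
write_mb_s=75           # Write MB/s

# Average write size in KB = (Write MB/s * 1024) / (Write Req/s)
awk -v req="$write_req_s" -v mb="$write_mb_s" \
    'BEGIN { printf "avg write size: %.1f KB\n", (mb * 1024) / req }'
```

A similar cross-check follows from the Flush MB/s description: the sum of the flush counters for all virtual disks on both controllers should roughly equal the total host write rate.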
Managing host I/O timeouts for an online upgrade
The defaults for host operating parameters, such as LUN timeout and queue depth, ensure proper
operation with the array. These values are appropriate for most array operations, including online
controller software upgrades. In general, host LUN timeouts of 60 seconds or more are sufficient
for an online upgrade. In most situations you will not need to change these settings to perform an
online controller software upgrade.
If any host timeout values have been changed to less than the default (typically 60 seconds), you
must reset them to their original defaults. The following sections summarize the steps and commands
for checking and changing timeout values for each supported operating system. See the operating
system documentation for more information.
IMPORTANT: Depending on your operating system, changing timeout values may require a
reboot of your system. To minimize disruption of normal operations, plan and schedule reboots
in advance. In a cluster environment, reboot one node at a time.
HP-UX
CAUTION: Because HP-UX supports boot across Fibre Channel SAN, any change to default SCSI
timeouts on the HP-UX host may cause corruption and make the system unrecoverable.
Default timeout values
Sdisk timeout: 30 seconds
LVM lvol timeout: 0 seconds (a value of 0 means no timeout; requests are retried forever)
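On an HP-UX host, the current timeout values can be inspected, and the defaults restored if they were changed, with standard LVM commands (a sketch only; the volume group, logical volume, and device file names below are hypothetical, and per the caution above no timeout should be changed without understanding the impact):

```shell
# Check the current I/O timeout on a logical volume (hypothetical VG/LV names).
lvdisplay /dev/vg01/lvol1 | grep -i "IO Timeout"

# Restore the lvol default: 0 means no timeout (requests are retried forever).
lvchange -t 0 /dev/vg01/lvol1

# Check the I/O timeout on the underlying physical volume (hypothetical device;
# the sdisk default is 30 seconds).
pvdisplay /dev/dsk/c5t0d2 | grep -i "IO Timeout"
```

These commands require an HP-UX host with LVM; consult the HP-UX lvchange(1M) and pvchange(1M) manpages before modifying any timeout.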