When the status of a disk group in the Performance tier becomes critical (CRIT), the system automatically drains data from
that disk group to disk groups that use spinning disks in other tiers, provided that those disk groups can contain the data on the
degraded disk group. This occurs because similar wear across the SSDs is likely, so more failures may be imminent.
If a system has only one class of disk, no tiering occurs. However, automated tiered storage rebalancing occurs when a disk
group in a different tier is added or removed.
NOTE: Tiers are automatically set up within a single virtual pool, but tiers do not span virtual pools.
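The drain decision described above amounts to a capacity check across the remaining tiers. The following Python sketch models that check; the DiskGroup fields, tier labels, status strings, and function name are illustrative assumptions, not the storage system's actual interfaces.

from dataclasses import dataclass

@dataclass
class DiskGroup:
    name: str
    tier: str     # for example "Performance", "Standard", or "Archive"
    media: str    # "SSD" or "spinning"
    status: str   # for example "OK" or "CRIT"
    used: int     # bytes of data stored in the group
    free: int     # bytes of free capacity in the group

def can_drain(degraded: DiskGroup, groups: list[DiskGroup]) -> bool:
    # Drain only if spinning-disk groups in other tiers can absorb
    # all of the data held by the degraded disk group.
    targets = [g for g in groups
               if g is not degraded
               and g.media == "spinning"
               and g.tier != degraded.tier]
    return sum(g.free for g in targets) >= degraded.used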
Volume tier affinity
Volume tier affinity is a setting that enables a storage administrator to define QoS (Quality of Service) preferences for volumes
in a storage environment.
The three volume tier affinity settings are:
No Affinity: This setting uses the highest performing tiers available first, and uses the Archive tier only when space in the
other tiers is exhausted. Volume data moves into higher performing tiers based on the frequency of access and the available
space in the tiers.
Performance: This setting prioritizes volume data to the higher tiers of service. If no space is available there, space in lower
performing tiers is used. Volume data moves into higher performing tiers based on the frequency of access and the available
space in the tiers.
NOTE: The Performance affinity setting does not require an SSD tier and uses the highest performance tier available.
Archive: This setting prioritizes volume data to the lowest tier of service. Volume data can still move to higher performing
tiers based on the frequency of access and the available space in the tiers.
NOTE:
Volume tier affinity is not the same as pinning: it does not restrict data to a given tier or capacity. Data
on a volume with Archive affinity can still be promoted to a performance tier when the host application
places that data in demand.
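As a guide to how these settings behave, the following Python sketch maps each affinity to the order in which tiers are tried when placing volume data. The Affinity enumeration and tier labels are assumptions made for illustration only.

from enum import Enum

TIERS_HIGH_TO_LOW = ["Performance", "Standard", "Archive"]

class Affinity(Enum):
    NO_AFFINITY = "no-affinity"
    PERFORMANCE = "performance"
    ARCHIVE = "archive"

def tier_preference(affinity: Affinity) -> list[str]:
    # Order in which tiers are tried when placing volume data.
    if affinity is Affinity.ARCHIVE:
        # Lowest tier first; data spills upward only when it is full.
        return list(reversed(TIERS_HIGH_TO_LOW))
    # No Affinity and Performance both fill from the top down; the
    # difference between them lies in the promotion and demotion
    # thresholds described later in this section.
    return list(TIERS_HIGH_TO_LOW)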
Volume tier affinity strategies
Volume tier affinity acts as a guide to the system on where to place data for a given volume in the available tiers.
The standard strategy is to prefer the highest spinning-disk tiers for new sequential writes and the highest tier available
(including SSD) for new random writes. As the host application accesses the data, the data is moved to the most appropriate
tier based on demand. Frequently accessed data is promoted toward the highest performance tier, and infrequently accessed
data is demoted to the lower, spinning-disk tiers. The standard strategy is followed for data on volumes set to No Affinity, as
the sketch below illustrates.
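A minimal sketch of that initial placement decision, assuming a hypothetical free_space(tier) helper and the tier labels used earlier:

SPINNING_TIERS_HIGH_TO_LOW = ["Standard", "Archive"]
ALL_TIERS_HIGH_TO_LOW = ["Performance", "Standard", "Archive"]  # Performance tier is SSD-based

def place_new_write(is_sequential: bool, free_space) -> str:
    # Sequential writes prefer the highest spinning-disk tier; random
    # writes may use the highest tier available, including SSD.
    candidates = (SPINNING_TIERS_HIGH_TO_LOW if is_sequential
                  else ALL_TIERS_HIGH_TO_LOW)
    for tier in candidates:
        if free_space(tier) > 0:
            return tier
    raise RuntimeError("no capacity available in any tier")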
For data on volumes set to the Performance affinity, the standard strategy is followed for all new writes. However, subsequent
access to that data is measured against a lower threshold for promotion, which makes it more likely for that data to be
available on the higher performance tiers. At the SSD tier, frequently accessed data with the Performance affinity receives
preferential treatment: data with the Archive or No Affinity setting is demoted out of the SSD tier to make room for it. The
Performance affinity is useful for volume data that you want to ensure receives priority treatment for promotion to, and
retention in, your highest performance tier.
For volumes that are set to the Archive affinity, all new writes are initially placed in the Archive tier. If no space is available in
the Archive tier, new writes are placed on the next higher tier available. Subsequent access to that data enables its promotion
to the performance tiers as it is accessed more often. However, the data is measured against a lower threshold for demotion:
it is moved out of the highest performance SSD tier whenever frequently accessed data needs to be promoted up from a lower
tier. The sketch below models both threshold adjustments.
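The numeric thresholds and the AccessStats shape below are invented for illustration; the guide states only that the Performance affinity lowers the bar for promotion and the Archive affinity lowers the bar for demotion.

from dataclasses import dataclass

@dataclass
class AccessStats:
    accesses_per_hour: float

PROMOTE_THRESHOLD = 100.0  # default rate at or above which data moves up a tier
DEMOTE_THRESHOLD = 10.0    # default rate below which data moves down a tier

def should_promote(stats: AccessStats, affinity: str) -> bool:
    threshold = PROMOTE_THRESHOLD
    if affinity == "performance":
        threshold *= 0.5   # Performance data clears the bar sooner
    return stats.accesses_per_hour >= threshold

def should_demote(stats: AccessStats, affinity: str) -> bool:
    threshold = DEMOTE_THRESHOLD
    if affinity == "archive":
        threshold *= 2.0   # Archive data is demoted more readily
    return stats.accesses_per_hour < threshold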
About initiators, hosts, and host groups
An initiator represents an external port to which the storage system is connected. The external port may be a port in an I/O
adapter such as an FC HBA in a server.
The controllers automatically discover initiators that have sent a SCSI INQUIRY command or a REPORT LUNS command to the
storage system, which typically happens when a host boots up or rescans for devices. When the command is received, the
system saves the initiator ID. You can also manually create entries for initiators; for example, you might want to define an
initiator before a controller port is physically connected through a switch to a host.
You can assign a nickname to an initiator to make it easier to recognize during volume mapping. For a named initiator, you can
also select a profile specific to that initiator's operating system. A maximum of 512 names can be assigned. The sketch below
models this discovery and naming behavior.
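The class and method names in this Python sketch are hypothetical; only the behavior (auto-registering an initiator when it sends an INQUIRY or REPORT LUNS command, and the 512-name limit) comes from this guide.

MAX_NICKNAMES = 512

class InitiatorRegistry:
    def __init__(self):
        self.initiators = {}  # initiator ID -> nickname, or None if unnamed

    def on_scsi_command(self, initiator_id: str, opcode: str):
        # Auto-discover an initiator the first time it sends an
        # INQUIRY or REPORT LUNS command to the storage system.
        if opcode in ("INQUIRY", "REPORT LUNS") and initiator_id not in self.initiators:
            self.initiators[initiator_id] = None

    def set_nickname(self, initiator_id: str, nickname: str):
        # Manually create or name an initiator, subject to the 512-name limit.
        named = sum(1 for n in self.initiators.values() if n is not None)
        if self.initiators.get(initiator_id) is None and named >= MAX_NICKNAMES:
            raise ValueError("maximum of 512 initiator names reached")
        self.initiators[initiator_id] = nickname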