Administrator Guide
Table of Contents
- Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide
- Contents
- Getting started
- New user setup
- Configure and provision a new storage system
- Using the PowerVault Manager interface
- System concepts
- About virtual and linear storage
- About disk groups
- About RAID levels
- About ADAPT
- About SSDs
- About SSD read cache
- About spares
- About pools
- About volumes and volume groups
- About volume cache options
- About thin provisioning
- About automated tiered storage
- About initiators, hosts, and host groups
- About volume mapping
- About operating with a single controller
- About snapshots
- About copying volumes
- About reconstruction
- About quick rebuild
- About performance statistics
- About firmware updates
- About managed logs
- About SupportAssist
- About CloudIQ
- About configuring DNS settings
- About replicating virtual volumes
- About the Full Disk Encryption feature
- About data protection with a single controller
- Working in the Home topic
- Guided setup
- Provisioning disk groups and pools
- Attaching hosts and volumes in the Host Setup wizard
- Overall system status
- Configuring system settings
- Managing scheduled tasks
- Working in the System topic
- Viewing system components
- Systems Settings panel
- Resetting host ports
- Rescanning disk channels
- Clearing disk metadata
- Updating firmware
- Changing FDE settings
- Configuring advanced settings
- Changing disk settings
- Changing system cache settings
- Configuring partner firmware update
- Configuring system utilities
- Using maintenance mode
- Restarting or shutting down controllers
- Working in the Hosts topic
- Working in the Pools topic
- Working in the Volumes topic
- Viewing volumes
- Creating a virtual volume
- Creating a linear volume
- Modifying a volume
- Copying a volume or snapshot
- Abort a volume copy
- Adding volumes to a volume group
- Removing volumes from a volume group
- Renaming a volume group
- Remove volume groups
- Rolling back a virtual volume
- Deleting volumes and snapshots
- Creating snapshots
- Resetting a snapshot
- Creating a replication set from the Volumes topic
- Initiating or scheduling a replication from the Volumes topic
- Manage replication schedules from the Volumes topic
- Working in the Mappings topic
- Working in the Replications topic
- About replicating virtual volumes in the Replications topic
- Replication prerequisites
- Replication process
- Creating a virtual pool for replication
- Setting up snapshot space management in the context of replication
- Replication and empty allocated pages
- Disaster recovery
- Accessing the data while keeping the replication set intact
- Accessing the data from the backup system as if it were the primary system
- Disaster recovery procedures
- Viewing replications
- Querying a peer connection
- Creating a peer connection
- Modifying a peer connection
- Deleting a peer connection
- Creating a replication set from the Replications topic
- Modifying a replication set
- Deleting a replication set
- Initiating or scheduling a replication from the Replications topic
- Stopping a replication
- Suspending a replication
- Resuming a replication
- Manage replication schedules from the Replications topic
- Working in the Performance topic
- Working in the banner and footer
- Banner and footer overview
- Viewing system information
- Viewing certificate information
- Viewing connection information
- Viewing system date and time information
- Viewing user information
- Viewing health information
- Viewing event information
- Viewing capacity information
- Viewing host information
- Viewing tier information
- Viewing recent system activity
- Other management interfaces
- SNMP reference
- Using FTP and SFTP
- Using SMI-S
- Using SLP
- Administering a log-collection system
- Best practices
- System configuration limits
- Glossary of terms
When the status of a disk group in the Performance tier becomes critical (CRIT), the system automatically drains data from
that disk group to disk groups that use spinning disks in other tiers, provided that those disk groups can contain the data on the
degraded disk group. This occurs because similar wear across the SSDs is likely, so more failures may be imminent.
If a system has only one class of disk, no tiering occurs. However, automated tiered storage rebalancing occurs when a disk
group in a different tier is added or removed.
NOTE: Tiers are automatically set up within a single virtual pool, but tiers do not span virtual pools.
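As a rough mental model, the drain is a capacity check followed by a migration: it proceeds only when the spinning-disk tiers have enough free space to absorb the degraded disk group's data. The Python sketch below illustrates that check; the class, field, and status names are simplified assumptions, not an actual PowerVault Manager or CLI interface.

```python
# Conceptual sketch of the drain decision for a critical SSD disk group.
# The class and field names are illustrative assumptions, not an actual interface.

from dataclasses import dataclass

@dataclass
class DiskGroup:
    name: str
    tier: str          # "Performance" (SSD), "Standard", or "Archive"
    status: str        # for example "FTOL" (fault tolerant) or "CRIT" (critical)
    used_gib: float    # capacity currently holding data
    free_gib: float    # unused capacity

def can_drain(critical: DiskGroup, others: list[DiskGroup]) -> bool:
    """Drain only if spinning-disk disk groups in other tiers can hold the data."""
    if critical.status != "CRIT" or critical.tier != "Performance":
        return False
    spare = sum(dg.free_gib for dg in others
                if dg.tier != "Performance" and dg.status != "CRIT")
    return spare >= critical.used_gib

# Example: an 800 GiB critical SSD disk group fits in 1,500 GiB of free HDD space.
ssd = DiskGroup("dgA01", "Performance", "CRIT", used_gib=800, free_gib=100)
hdd = DiskGroup("dgA02", "Standard", "FTOL", used_gib=2000, free_gib=1500)
print(can_drain(ssd, [hdd]))   # True
```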
Volume tier affinity
Volume tier affinity is a setting that enables a storage administrator to define QoS (Quality of Service) preferences for volumes
in a storage environment.
The three volume tier affinity settings are:
● No Affinity – This setting uses the highest-performing tiers available first and uses the Archive tier only when space is
exhausted in the other tiers. Volume data moves into higher performing tiers based on the frequency of access and available
space in the tiers.
● Performance – This setting prioritizes volume data to the higher tiers of service. If no space is available, lower performing
tier space is used. Volume data moves into higher performing tiers based on the frequency of access and available space in
the tiers.
NOTE: The Performance affinity setting does not require an SSD tier and uses the highest performance tier available.
● Archive – This setting prioritizes the volume data to the lowest tier of service. Volume data can move to higher performing
tiers based on the frequency of access and available space in the tiers.
NOTE:
Volume tier affinity is not the same as pinning, and it does not restrict data to a given tier or capacity. Data
on a volume with Archive affinity can still be promoted to a performance tier when that data is in demand by the
host application.
Volume tier affinity strategies
Volume tier affinity acts as a guide to the system on where to place data for a given volume in the available tiers.
The standard strategy is to prefer the highest spinning disk tiers for new sequential writes and the highest tier available
(including SSD) for new random writes. As the host application accesses the data, it is moved to the most appropriate tier
based on demand. Frequently accessed data is promoted up towards the highest performance tier and infrequently accessed
data is demoted to the lower spinning disk-based tiers. The standard strategy is followed for data on volumes set to No Affinity.
For data on volumes set to the Performance affinity, the standard strategy is followed for all new writes. However, subsequent
access to that data has a lower threshold for promotion, which makes it more likely for that data to be available on the higher
performance tiers. Frequently accessed data with Performance affinity receives preferential treatment at the SSD tier: data
with Archive or No Affinity settings is demoted out of the SSD tier to make room for it. The Performance affinity is useful for
volume data that you want to receive priority treatment for promotion to, and retention in, your highest performance tier.
For volumes that are set to the Archive affinity, all new writes are initially placed in the archive tier. If no space is available in the
archive tier, new writes are placed on the next higher tier available. Subsequent access to that data allows it to be promoted to
the performance tiers as it is accessed more often. However, the data has a lower threshold for demotion: it is moved out of the
highest performance SSD tier when there is a need to promote frequently accessed data up from a lower tier.
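The strategies above amount to affinity-dependent rules for initial placement plus a bias in how readily data is promoted or demoted afterward. The following Python sketch is a conceptual model only: the tier names follow the Performance/Standard/Archive convention used in this guide, but the bias values, function names, and placement rules are simplified assumptions and are not taken from the firmware or any management interface.

```python
# Conceptual model of how volume tier affinity biases initial placement and the
# later promotion/demotion of data between tiers. All names, values, and functions
# are illustrative assumptions; they do not represent actual ME4 firmware behavior.

TIERS = ["Performance", "Standard", "Archive"]   # fastest (SSD) to slowest

# Affinity biases the measured access rate before data is ranked for placement:
# Performance data looks "hotter" (promoted sooner, demoted last), Archive data
# looks "colder" (promoted later, demoted first), No Affinity is unbiased.
AFFINITY_BIAS = {"No Affinity": 1.0, "Performance": 2.0, "Archive": 0.5}

def effective_heat(access_rate: float, affinity: str) -> float:
    """Return the biased access rate used to rank data across the tiers."""
    return access_rate * AFFINITY_BIAS[affinity]

def initial_tier(affinity: str, io_pattern: str) -> str:
    """Pick the tier for a new write, following the strategies described above."""
    if affinity == "Archive":
        return "Archive"          # Archive affinity: start in the archive tier
    if io_pattern == "sequential":
        return "Standard"         # standard strategy: highest spinning-disk tier
    return TIERS[0]               # random writes: highest tier available (SSD)

# Example: identical access rates rank differently once affinity is applied.
print(initial_tier("Archive", "random"))                                    # Archive
print(initial_tier("Performance", "sequential"))                            # Standard
print(effective_heat(4.0, "Performance") > effective_heat(4.0, "Archive"))  # True
```

In this simplified model, the lower promotion threshold for Performance affinity and the lower demotion threshold for Archive affinity both reduce to a single multiplicative bias on the measured access rate.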
About initiators, hosts, and host groups
An initiator represents an external port to which the storage system is connected. The external port may be a port in an I/O
adapter such as an FC HBA in a server.
The controllers automatically discover initiators that have sent an inquiry command or a report luns command to the
storage system, which typically happens when a host boots up or rescans for devices. When the command is received, the
system saves the initiator ID. You can also manually create entries for initiators. For example, you might want to define an
initiator before a controller port is physically connected through a switch to a host.
You can assign a nickname to an initiator to make it easy to recognize for volume mapping. For a named initiator, you can also
select a profile specific to the operating system for that initiator. A maximum of 512 names can be assigned.
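To make that flow concrete, the sketch below models automatic discovery, manual creation, and the 512-name limit as a small registry. It is a hedged illustration only: the class, method, and default profile names are assumptions and do not correspond to the PowerVault Manager or CLI.

```python
# Illustrative model of initiator discovery and nicknaming; the class and method
# names are hypothetical and are not part of any PowerVault Manager interface.

MAX_NICKNAMES = 512   # the system allows at most 512 initiator names

class InitiatorRegistry:
    def __init__(self):
        # initiator ID (FC WWPN or iSCSI IQN) -> {"nickname": ..., "profile": ...}
        self.initiators = {}

    def on_scsi_command(self, initiator_id: str, opcode: str) -> None:
        """Auto-discover an initiator when it sends INQUIRY or REPORT LUNS."""
        if opcode in ("INQUIRY", "REPORT LUNS"):
            self.initiators.setdefault(initiator_id,
                                       {"nickname": None, "profile": "Standard"})

    def create_manually(self, initiator_id: str) -> None:
        """Pre-define an initiator before its port is physically connected."""
        self.initiators.setdefault(initiator_id,
                                   {"nickname": None, "profile": "Standard"})

    def set_nickname(self, initiator_id: str, nickname: str,
                     profile: str = "Standard") -> None:
        """Assign a nickname and OS-specific profile, enforcing the 512-name limit."""
        entry = self.initiators.setdefault(initiator_id,
                                           {"nickname": None, "profile": profile})
        named = sum(1 for i in self.initiators.values() if i["nickname"])
        if entry["nickname"] is None and named >= MAX_NICKNAMES:
            raise RuntimeError("maximum of 512 initiator names already assigned")
        entry["nickname"], entry["profile"] = nickname, profile

# A host boot triggers discovery; the administrator then names the initiator.
registry = InitiatorRegistry()
registry.on_scsi_command("10:00:00:90:fa:12:34:56", "REPORT LUNS")
registry.set_nickname("10:00:00:90:fa:12:34:56", "host1-port0")
```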