Administrator Guide
Table of Contents
- Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide
- Contents
- Getting started
- New user setup
- Configure and provision a new storage system
- Using the PowerVault Manager interface
- System concepts
- About virtual and linear storage
- About disk groups
- About RAID levels
- About ADAPT
- About SSDs
- About SSD read cache
- About spares
- About pools
- About volumes and volume groups
- About volume cache options
- About thin provisioning
- About automated tiered storage
- About initiators, hosts, and host groups
- About volume mapping
- About operating with a single controller
- About snapshots
- About copying volumes
- About reconstruction
- About quick rebuild
- About performance statistics
- About firmware updates
- About managed logs
- About SupportAssist
- About CloudIQ
- About configuring DNS settings
- About replicating virtual volumes
- About the Full Disk Encryption feature
- About data protection with a single controller
- Working in the Home topic
- Guided setup
- Provisioning disk groups and pools
- Attaching hosts and volumes in the Host Setup wizard
- Overall system status
- Configuring system settings
- Managing scheduled tasks
- Working in the System topic
- Viewing system components
- Systems Settings panel
- Resetting host ports
- Rescanning disk channels
- Clearing disk metadata
- Updating firmware
- Changing FDE settings
- Configuring advanced settings
- Changing disk settings
- Changing system cache settings
- Configuring partner firmware update
- Configuring system utilities
- Using maintenance mode
- Restarting or shutting down controllers
- Working in the Hosts topic
- Working in the Pools topic
- Working in the Volumes topic
- Viewing volumes
- Creating a virtual volume
- Creating a linear volume
- Modifying a volume
- Copying a volume or snapshot
- Aborting a volume copy
- Adding volumes to a volume group
- Removing volumes from a volume group
- Renaming a volume group
- Removing volume groups
- Rolling back a virtual volume
- Deleting volumes and snapshots
- Creating snapshots
- Resetting a snapshot
- Creating a replication set from the Volumes topic
- Initiating or scheduling a replication from the Volumes topic
- Managing replication schedules from the Volumes topic
- Working in the Mappings topic
- Working in the Replications topic
- About replicating virtual volumes in the Replications topic
- Replication prerequisites
- Replication process
- Creating a virtual pool for replication
- Setting up snapshot space management in the context of replication
- Replication and empty allocated pages
- Disaster recovery
- Accessing the data while keeping the replication set intact
- Accessing the data from the backup system as if it were the primary system
- Disaster recovery procedures
- Viewing replications
- Querying a peer connection
- Creating a peer connection
- Modifying a peer connection
- Deleting a peer connection
- Creating a replication set from the Replications topic
- Modifying a replication set
- Deleting a replication set
- Initiating or scheduling a replication from the Replications topic
- Stopping a replication
- Suspending a replication
- Resuming a replication
- Managing replication schedules from the Replications topic
- Working in the Performance topic
- Working in the banner and footer
- Banner and footer overview
- Viewing system information
- Viewing certificate information
- Viewing connection information
- Viewing system date and time information
- Viewing user information
- Viewing health information
- Viewing event information
- Viewing capacity information
- Viewing host information
- Viewing tier information
- Viewing recent system activity
- Other management interfaces
- SNMP reference
- Using FTP and SFTP
- Using SMI-S
- Using SLP
- Administering a log-collection system
- Best practices
- System configuration limits
- Glossary of terms
data disks and the one disk providing parity is the parity disk. In reality, the parity is distributed among all the disks, but
conceiving of it in this way helps with the example.
Note that the number of data disks is a power of two (2, 4, or 8). The controller uses a 512-KB stripe unit size when
the number of data disks is a power of two, so a 4-MB page is distributed evenly across whole stripes. This is ideal for
performance.
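The alignment arithmetic behind this can be checked with a short sketch (plain Python, illustrative only; the 512-KB stripe unit and 4-MB page size come from the example above, not from any PowerVault interface):

```python
# Illustrative arithmetic only: how a 4-MB page divides across full stripes
# when the data-disk count is a power of two (512-KB stripe unit per disk).
STRIPE_UNIT_KB = 512
PAGE_KB = 4 * 1024  # one 4-MB page

for data_disks in (2, 4, 8):
    full_stripe_kb = data_disks * STRIPE_UNIT_KB
    full_stripes, remainder_kb = divmod(PAGE_KB, full_stripe_kb)
    # remainder_kb is always 0 here: every page push is a full-stripe
    # write, so no read-modify-write is needed.
    print(f"{data_disks} data disks: {full_stripes} full stripes, "
          f"{remainder_kb} KB left over")
```

With 2, 4, and 8 data disks the page maps to 4, 2, and 1 full stripes respectively, with no partial stripe remaining.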
● Example 2: Consider a RAID-5 disk group with six disks. The equivalent of five disks now provides usable capacity. Assume
the controller again uses a 512-KB stripe unit. When a 4-MB page is pushed to the disk group, the page fills one stripe
completely, but for the remaining data the controller must read old data and old parity from two of the disks and combine
them with the new data in order to calculate new parity. This is known as a read-modify-write, and it is a performance killer
with sequential workloads: in essence, every page pushed to the disk group results in a read-modify-write.
To mitigate this issue, the controllers use a 64-KB stripe unit when a RAID-5 or RAID-6 disk group is not created with a
power-of-two number of data disks. This results in many more full-stripe writes, but at the cost of many more I/O transactions
per disk to push the same 4-MB page.
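The trade-off can be sketched numerically (values assumed from the six-disk RAID-5 example above: five data disks, 4-MB pages; this is illustrative arithmetic, not a PowerVault API):

```python
# How a 4-MB page breaks down over full stripes for a 5-data-disk
# RAID-5 group, comparing the 512-KB and 64-KB stripe units.
DATA_DISKS = 5
PAGE_KB = 4 * 1024

def stripe_breakdown(stripe_unit_kb):
    """Return (full-stripe size in KB, full stripes per page, leftover KB)."""
    full_stripe_kb = DATA_DISKS * stripe_unit_kb
    full_stripes, partial_kb = divmod(PAGE_KB, full_stripe_kb)
    return full_stripe_kb, full_stripes, partial_kb

# 512-KB unit: one full stripe plus a 1536-KB partial write, which
# forces a read-modify-write for every page.
print(stripe_breakdown(512))  # (2560, 1, 1536)
# 64-KB unit: twelve full-stripe writes and a much smaller partial,
# at the cost of more, smaller I/O transactions per disk.
print(stripe_breakdown(64))   # (320, 12, 256)
```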
The following table shows recommended disk counts for RAID-6 and RAID-5 disk groups. Each entry specifies the total number
of disks and the equivalent numbers of data and parity disks in the disk group. Note that parity is actually distributed among all
the disks.
Table 41. Recommended disk group sizes

RAID level   Total disks   Data disks (equivalent)   Parity disks (equivalent)
RAID 6       4             2                         2
RAID 6       6             4                         2
RAID 6       10            8                         2
RAID 5       3             2                         1
RAID 5       5             4                         1
RAID 5       9             8                         1
To ensure best performance with sequential workloads and RAID-5 and RAID-6 disk groups, use a power-of-two number of data disks.
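The rule behind the table can be expressed as a short check (a hypothetical helper for illustration, not part of PowerVault Manager): subtract the RAID level's parity disks from the total, and the count is recommended when the remaining data-disk count is a power of two.

```python
# Hypothetical helper mirroring Table 41; not a PowerVault API.
PARITY_DISKS = {"RAID5": 1, "RAID6": 2}

def is_recommended(raid_level, total_disks):
    data_disks = total_disks - PARITY_DISKS[raid_level]
    # Power-of-two test via a bit trick; the table lists 2, 4, and 8
    # equivalent data disks.
    return data_disks >= 2 and (data_disks & (data_disks - 1)) == 0

print([n for n in range(3, 17) if is_recommended("RAID5", n)])  # [3, 5, 9]
print([n for n in range(4, 17) if is_recommended("RAID6", n)])  # [4, 6, 10]
```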
Disk groups in a pool
For better efficiency and performance, use similar disk groups in a pool.
● Disk count balance: For example, with 20 disks, it is better to have two 8+2 RAID-6 disk groups than one 10+2 RAID-6 disk
group and one 6+2 RAID-6 disk group.
● RAID balance: It is better to have two RAID-5 disk groups than one RAID-5 disk group and one RAID-6 disk group.
● Write-rate balance: Because of wide striping, tiers and pools are only as fast as their slowest disk group.
● All disks in a tier should be the same type. For example, use all 10K disks or all 15K disks in the Standard tier.
Create more small disk groups instead of fewer large disk groups.
● Each disk group has a write queue depth limit of 100. Creating more disk groups therefore lets write-intensive applications
sustain higher aggregate queue depths while staying within latency requirements.
● Using smaller disk groups costs more raw capacity. For less performance-sensitive applications, such as archiving, larger
disk groups are desirable.
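The slowest-group effect mentioned above can be sketched numerically (the MB/s figures below are invented for illustration, not measured PowerVault rates):

```python
# Illustrative only: with wide striping, pages are spread evenly across a
# pool's disk groups, so the slowest group paces the whole pool.
def pool_write_rate(group_rates_mb_s):
    # The pool drains no faster than every group matching the slowest one.
    return len(group_rates_mb_s) * min(group_rates_mb_s)

print(pool_write_rate([500, 500]))  # two balanced groups -> 1000
print(pool_write_rate([700, 300]))  # same hardware total, slower pool -> 600
```

This is why two similar 8+2 groups outperform a mismatched 10+2 and 6+2 pair built from the same 20 disks.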
Tier setup
In general, it is best to have two tiers instead of three. The highest tier fills nearly to capacity before a lower tier is
used: the highest tier must be 95% full before the controller evicts cold pages to a lower tier to make room for incoming
writes. Typically, use tiers of SSDs and 10K/15K disks, or SSDs and 7K disks. An exception may be if you need both SSDs and
faster spinning disks to hit a price-for-performance target but cannot meet your capacity needs without the 7K disks; this
should be rare.