Administrator Guide
Table of Contents
- Dell EMC PowerVault ME4 Series Storage System Administrator’s Guide
- Contents
- Getting started
- New user setup
- Configure and provision a new storage system
- Using the PowerVault Manager interface
- System concepts
- About virtual and linear storage
- About disk groups
- About RAID levels
- About ADAPT
- About SSDs
- About SSD read cache
- About spares
- About pools
- About volumes and volume groups
- About volume cache options
- About thin provisioning
- About automated tiered storage
- About initiators, hosts, and host groups
- About volume mapping
- About operating with a single controller
- About snapshots
- About copying volumes
- About reconstruction
- About quick rebuild
- About performance statistics
- About firmware updates
- About managed logs
- About SupportAssist
- About CloudIQ
- About configuring DNS settings
- About replicating virtual volumes
- About the Full Disk Encryption feature
- About data protection with a single controller
- Working in the Home topic
- Guided setup
- Provisioning disk groups and pools
- Attaching hosts and volumes in the Host Setup wizard
- Overall system status
- Configuring system settings
- Managing scheduled tasks
- Working in the System topic
- Viewing system components
- System Settings panel
- Resetting host ports
- Rescanning disk channels
- Clearing disk metadata
- Updating firmware
- Changing FDE settings
- Configuring advanced settings
- Changing disk settings
- Changing system cache settings
- Configuring partner firmware update
- Configuring system utilities
- Using maintenance mode
- Restarting or shutting down controllers
- Working in the Hosts topic
- Working in the Pools topic
- Working in the Volumes topic
- Viewing volumes
- Creating a virtual volume
- Creating a linear volume
- Modifying a volume
- Copying a volume or snapshot
- Aborting a volume copy
- Adding volumes to a volume group
- Removing volumes from a volume group
- Renaming a volume group
- Removing volume groups
- Rolling back a virtual volume
- Deleting volumes and snapshots
- Creating snapshots
- Resetting a snapshot
- Creating a replication set from the Volumes topic
- Initiating or scheduling a replication from the Volumes topic
- Managing replication schedules from the Volumes topic
- Working in the Mappings topic
- Working in the Replications topic
- About replicating virtual volumes in the Replications topic
- Replication prerequisites
- Replication process
- Creating a virtual pool for replication
- Setting up snapshot space management in the context of replication
- Replication and empty allocated pages
- Disaster recovery
- Accessing the data while keeping the replication set intact
- Accessing the data from the backup system as if it were the primary system
- Disaster recovery procedures
- Viewing replications
- Querying a peer connection
- Creating a peer connection
- Modifying a peer connection
- Deleting a peer connection
- Creating a replication set from the Replications topic
- Modifying a replication set
- Deleting a replication set
- Initiating or scheduling a replication from the Replications topic
- Stopping a replication
- Suspending a replication
- Resuming a replication
- Managing replication schedules from the Replications topic
- Working in the Performance topic
- Working in the banner and footer
- Banner and footer overview
- Viewing system information
- Viewing certificate information
- Viewing connection information
- Viewing system date and time information
- Viewing user information
- Viewing health information
- Viewing event information
- Viewing capacity information
- Viewing host information
- Viewing tier information
- Viewing recent system activity
- Other management interfaces
- SNMP reference
- Using FTP and SFTP
- Using SMI-S
- Using SLP
- Administering a log-collection system
- Best practices
- System configuration limits
- Glossary of terms
For ease of management, you can group 1 to 128 initiators that represent a server into a host. You can also group 1 to 256 hosts
into a host group. This enables you to perform mapping operations for all initiators in a host, or all initiators and hosts in a
group, instead of for each initiator or host individually. An initiator must have a nickname to be added to a host, and an initiator
can be a member of only one host. A host can be a member of only one group. A host cannot have the same name as another
host, but can have the same name as any initiator. A host group cannot have the same name as another host group, but can
have the same name as any host. A maximum of 32 host groups can exist.
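The membership and naming rules above amount to a small set of constraints. The following Python sketch models them conceptually; the class names and checks are illustrative only and are not part of PowerVault Manager or its interfaces.

```python
# Conceptual model of the grouping rules; illustrative only.

class Host:
    """Groups up to 128 named initiators that represent one server."""

    def __init__(self, name):
        self.name = name
        self.initiators = {}  # nickname -> initiator IQN/WWN
        self.group = None     # a host belongs to at most one host group

    def add_initiator(self, initiator_id, nickname):
        if not nickname:
            raise ValueError("an initiator needs a nickname to join a host")
        if len(self.initiators) >= 128:
            raise ValueError("a host can contain at most 128 initiators")
        self.initiators[nickname] = initiator_id


class HostGroup:
    """Groups up to 256 hosts; at most 32 host groups may exist."""

    def __init__(self, name):
        self.name = name
        self.hosts = []

    def add_host(self, host):
        if host.group is not None:
            raise ValueError("a host can be a member of only one group")
        if len(self.hosts) >= 256:
            raise ValueError("a host group can contain at most 256 hosts")
        host.group = self
        self.hosts.append(host)
```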
A storage system with iSCSI ports can be protected from unauthorized access via iSCSI by enabling Challenge Handshake
Authentication Protocol (CHAP). CHAP authentication occurs during an attempt by a host to log in to the system. This
authentication requires an identifier for the host and a shared secret between the host and the system. Optionally, the storage
system can also be required to authenticate itself to the host. This is called mutual CHAP. Steps involved in enabling CHAP
include:
● Decide on host node names (identifiers) and secrets. The host node name is its iSCSI Qualified Name (IQN). A secret must have 12–16 characters.
● Define CHAP entries in the storage system.
● Enable CHAP on the storage system. This applies to all iSCSI hosts, in order to avoid security exposures. Any current host connections will be terminated when CHAP is enabled and must be re-established using a CHAP login.
● Define the CHAP secret in the host iSCSI initiator.
● Establish a new connection to the storage system using CHAP. The host should then be visible to the system, along with the ports through which connections were made.
If it becomes necessary to add more hosts after CHAP is enabled, additional CHAP node names and secrets can be added. If a
host attempts to log in to the storage system, it will become visible to the system, even if the full login is not successful due
to incompatible CHAP definitions. This information may be useful in configuring CHAP entries for new hosts, and it becomes visible when an iSCSI discovery session is established, because the storage system does not require discovery sessions to be authenticated. CHAP authentication must succeed for normal sessions to move to the full feature phase.
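For reference, CHAP itself is the challenge-response scheme defined in RFC 1994: the initiator proves knowledge of the shared secret by hashing it together with a challenge from the target, so the secret never crosses the wire. The Python sketch below illustrates that computation and the 12–16 character secret rule described above; it is a conceptual illustration, not code from the storage system.

```python
import hashlib
import os

def validate_chap_secret(secret: str) -> None:
    # The storage system requires CHAP secrets of 12-16 characters.
    if not 12 <= len(secret) <= 16:
        raise ValueError("CHAP secret must have 12-16 characters")

def chap_response(identifier: int, secret: str, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(identifier byte || secret || challenge).
    return hashlib.md5(bytes([identifier]) + secret.encode() + challenge).digest()

# One exchange: the target issues a challenge, the initiator answers,
# and the target verifies against its stored CHAP entry for that IQN.
secret = "0123456789abcdef"      # hypothetical 16-character shared secret
validate_chap_secret(secret)
challenge = os.urandom(16)       # random challenge from the target
response = chap_response(1, secret, challenge)
assert response == chap_response(1, secret, challenge)  # target-side check
```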
About volume mapping
Mappings between a volume and one or more initiators, hosts, or host groups enable those hosts to view and access the volume.
There are two types of maps that can be created: default maps and explicit maps. Default maps enable all hosts to see the
volume using a specified LUN and access permissions. Default mapping applies to any host that has not been explicitly mapped
using different settings. Explicit maps override a volume's default map for specific hosts.
The advantage of a default mapping is that all connected hosts can discover the volume with no additional work by the
administrator. The disadvantage is that all connected hosts can discover the volume with no restrictions. Therefore, default mapping
is not recommended for specialized volumes that require restricted access.
If multiple hosts mount a volume without being cooperatively managed, volume data is at risk for corruption. To control access
by specific hosts, you can create an explicit mapping. An explicit mapping can use a different access mode, LUN, and port
settings to allow or prevent access by a host to a volume. If there is a default mapping, the explicit mapping overrides it.
When a volume is created, it is not mapped by default. You can create default or explicit mappings for it. You can change the
default mapping of a volume, and create, modify, or delete explicit mappings. A mapping can specify read-write, read-only, or no
access through one or more controller host ports to a volume. When a mapping specifies no access, the volume is masked.
For example, a payroll volume could be mapped with read-write access for the Human Resources host and be masked for all
other hosts. An engineering volume could be mapped with read-write access for the Engineering host and read-only access for
other departments' hosts.
A LUN identifies a mapped volume to a host. Both controllers share a set of LUNs, and any unused LUN can be assigned to a
mapping. However, each LUN is generally only used once as a default LUN. For example, if LUN 5 is the default for Volume1, no
other volume in the storage system can use LUN 5 on the same port as its default LUN. For explicit mappings, the rules differ:
LUNs used in default mappings can be reused in explicit mappings for other volumes and other hosts.
NOTE: When an explicit mapping is deleted, the volume's default mapping takes effect. Though default mappings can be used for specific installations, using explicit mappings with hosts and host groups is recommended for most installations.
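The override behavior can be pictured as a two-level lookup: resolve a host's explicit mapping first and fall back to the volume's default mapping. The following Python sketch models that resolution using the payroll example above; the class and host names are illustrative, not a PowerVault Manager API.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Mapping:
    lun: Optional[int]   # None when the volume is masked (no access)
    access: str          # "read-write", "read-only", or "no-access"

@dataclass
class Volume:
    name: str
    default_map: Optional[Mapping] = None
    explicit_maps: Dict[str, Mapping] = field(default_factory=dict)

    def resolve(self, host: str) -> Optional[Mapping]:
        # An explicit mapping overrides the default for that host;
        # any other host falls back to the volume's default mapping.
        return self.explicit_maps.get(host, self.default_map)

payroll = Volume("Payroll")
payroll.default_map = Mapping(lun=None, access="no-access")            # masked
payroll.explicit_maps["HR-host"] = Mapping(lun=5, access="read-write")

assert payroll.resolve("HR-host").access == "read-write"
assert payroll.resolve("Engineering-host").access == "no-access"
```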
The storage system uses Unified LUN Presentation (ULP), which can expose all LUNs through all host ports on both controllers.
The interconnect information is managed in the controller firmware. ULP appears to the host as an active-active storage system
where the host can choose any available path to access a LUN regardless of disk group ownership. When ULP is in use, the
controllers' operating redundancy mode is shown as Active-Active ULP. ULP uses the Asymmetric Logical Unit Access (ALUA)
extensions defined in SPC-3 by the T10 Technical Committee of INCITS to negotiate paths with ALUA-aware host systems.
Host systems that are not ALUA-aware see all paths as equal.
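As an illustration of what ULP looks like from the host side, the short sketch below rotates I/O across every discovered path, which is effectively how an ALUA-unaware host treats the equal paths that ULP presents. The port names are hypothetical.

```python
import itertools

# Hypothetical host ports: two per controller, all presenting the
# same LUN under ULP, so an ALUA-unaware host treats them as equal.
paths = ["A0", "A1", "B0", "B1"]

round_robin = itertools.cycle(paths)
for _ in range(4):
    print("issuing I/O via port", next(round_robin))
```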