
Table 12. Drawer loss protection requirements for different RAID levels (continued)

RAID Level 5
RAID Level 5 requires a minimum of 3 physical disks. Place all the physical disks of a RAID Level 5 disk group in different drawers. Drawer loss protection cannot be achieved for RAID Level 5 if more than one physical disk is placed in the same drawer.

RAID Level 1 and RAID Level 10
RAID Level 1 requires a minimum of 2 physical disks. Make sure that each physical disk in a replicated pair is located in a different drawer. Because only the two disks of a pair must be in different drawers, more than two physical disks of the disk group can be located in the same drawer. For example, if you create a RAID Level 1 disk group with six physical disks (three replicated pairs), you can achieve drawer loss protection for the disk group with only two drawers, as shown in this example of a 6-physical-disk RAID Level 1 disk group:
Replicated pair 1 = physical disk in enclosure 1, drawer 0, slot 0, and physical disk in enclosure 0, drawer 1, slot 0
Replicated pair 2 = physical disk in enclosure 1, drawer 0, slot 1, and physical disk in enclosure 1, drawer 1, slot 1
Replicated pair 3 = physical disk in enclosure 1, drawer 0, slot 2, and physical disk in enclosure 2, drawer 1, slot 2
RAID Level 10 requires a minimum of 4 physical disks. Make sure that each physical disk in a replicated pair is located in a different drawer.

RAID Level 0
Drawer loss protection cannot be achieved because a RAID Level 0 disk group does not have consistency.
NOTE: If you create a disk group using the Automatic physical disk selection method, MD Storage Manager attempts to
choose physical disks that provide drawer loss protection. If you create a disk group by using the Manual physical disk
selection method, you must use the criteria that are specified in the previous table.
If a disk group already has a Degraded status due to a failed physical disk when a drawer fails, drawer loss protection does not protect the
disk group. The data on the virtual disks becomes inaccessible.
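The placement criteria in Table 12 can be restated as a short check. The following Python sketch is illustrative only and is not part of MD Storage Manager; the DiskLocation class and the has_drawer_loss_protection function are hypothetical names used here to express the rules for RAID Level 5, RAID Level 1/10, and RAID Level 0, using the six-disk RAID Level 1 example from the table.

```python
# Illustrative sketch of the Table 12 criteria; not MD Storage Manager code.
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class DiskLocation:
    enclosure: int
    drawer: int
    slot: int

    @property
    def physical_drawer(self) -> tuple[int, int]:
        # A drawer is identified by its enclosure and its drawer number.
        return (self.enclosure, self.drawer)


def has_drawer_loss_protection(
    raid_level: str,
    disks: list[DiskLocation],
    replicated_pairs: list[tuple[DiskLocation, DiskLocation]] | None = None,
) -> bool:
    """Apply the drawer loss protection criteria listed in Table 12."""
    if raid_level == "RAID 0":
        # A RAID Level 0 disk group has no consistency, so a drawer failure always loses data.
        return False
    if raid_level == "RAID 5":
        # Minimum of 3 physical disks; every physical disk must be in a different drawer.
        drawers = [d.physical_drawer for d in disks]
        return len(disks) >= 3 and len(drawers) == len(set(drawers))
    if raid_level in ("RAID 1", "RAID 10"):
        # Each physical disk of every replicated pair must be in a different drawer;
        # disks that belong to different pairs may share a drawer.
        if not replicated_pairs:
            raise ValueError("RAID 1/10 checks need the replicated pairs")
        return all(a.physical_drawer != b.physical_drawer for a, b in replicated_pairs)
    raise ValueError(f"unhandled RAID level: {raid_level}")


# The six-disk RAID Level 1 example from the table: three replicated pairs,
# each split between drawer 0 and drawer 1.
pairs = [
    (DiskLocation(1, 0, 0), DiskLocation(0, 1, 0)),
    (DiskLocation(1, 0, 1), DiskLocation(1, 1, 1)),
    (DiskLocation(1, 0, 2), DiskLocation(2, 1, 2)),
]
print(has_drawer_loss_protection("RAID 1", [d for p in pairs for d in p], pairs))  # True
```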
Host-to-virtual disk mapping
After you create virtual disks, you must map them to the host(s) connected to the array.
Follow these guidelines when you configure host-to-virtual disk mapping:
• Each virtual disk in the storage array can be mapped to only one host or host group.
• Host-to-virtual disk mappings are shared between controllers in the storage array.
• A host group or host must use a unique LUN to access a virtual disk.
• Each host has its own LUN address space. MD Storage Manager permits the same LUN to be used by different hosts or host groups to access virtual disks in a storage array. (The sketch after this list restates these two LUN rules.)
• Not all operating systems have the same number of LUNs available.
You can define the mappings on the Host Mappings tab in the AMW. See Using The Host Mappings Tab.
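To make the LUN guidelines above concrete, the following Python sketch, which is illustrative only and not MD Storage Manager code, models a mapping table in which each virtual disk is mapped to exactly one host or host group and each host or host group has its own LUN address space. The MappingTable class and its method names are hypothetical.

```python
# Illustrative model of the mapping guidelines above; not MD Storage Manager code.


class MappingTable:
    def __init__(self) -> None:
        self._disk_to_owner: dict[str, str] = {}           # virtual disk -> host or host group
        self._lun_in_use: dict[tuple[str, int], str] = {}  # (host or host group, LUN) -> virtual disk

    def map_virtual_disk(self, virtual_disk: str, host_or_group: str, lun: int) -> None:
        # A virtual disk can be mapped to only one host or host group.
        if virtual_disk in self._disk_to_owner:
            raise ValueError(f"{virtual_disk} is already mapped to {self._disk_to_owner[virtual_disk]}")
        # A LUN must be unique within the address space of that host or host group.
        if (host_or_group, lun) in self._lun_in_use:
            raise ValueError(f"LUN {lun} is already in use on {host_or_group}")
        self._disk_to_owner[virtual_disk] = host_or_group
        self._lun_in_use[(host_or_group, lun)] = virtual_disk


mappings = MappingTable()
mappings.map_virtual_disk("vd_01", "host_a", 0)
mappings.map_virtual_disk("vd_02", "host_b", 0)    # allowed: host_b has its own LUN address space
# mappings.map_virtual_disk("vd_01", "host_b", 1)  # rejected: vd_01 is already mapped to host_a
```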
Creating host-to-virtual disk mappings
Follow these guidelines when you define the mappings:
• An access virtual disk mapping is not required for an out-of-band storage array. If your storage array is managed by using an out-of-band connection, and an access virtual disk mapping is assigned to the Default Group, an access virtual disk mapping is assigned to every host created from the Default Group.
• Most hosts can have 256 LUNs mapped per storage partition. The LUN numbering is from 0 through 255. If your operating system restricts LUNs to 127, and you try to map a virtual disk to a LUN that is greater than or equal to 127, the host cannot access it. (The sketch after these guidelines illustrates this check.)
• An initial mapping of the host group or host must be created by using the Storage Partitioning Wizard before you define additional mappings. See Storage Partitioning.
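The LUN numbering guideline above reduces to a simple range check. This Python sketch is illustrative only; the host_can_access_lun function is a hypothetical name, and the 127-LUN limit is used only as an example of an operating-system restriction.

```python
# Illustrative check of the LUN numbering guideline above; not MD Storage Manager code.

ARRAY_LUNS_PER_PARTITION = 256   # LUN numbers 0 through 255 per storage partition


def host_can_access_lun(lun: int, os_lun_limit: int = ARRAY_LUNS_PER_PARTITION) -> bool:
    """Return True if a host whose operating system allows os_lun_limit LUNs can reach this LUN."""
    if not 0 <= lun < ARRAY_LUNS_PER_PARTITION:
        return False                 # outside the array's 0-255 numbering
    return lun < os_lun_limit        # an OS limited to 127 LUNs cannot access LUN 127 or greater


print(host_can_access_lun(126, os_lun_limit=127))  # True
print(host_can_access_lun(127, os_lun_limit=127))  # False: the host cannot access this mapping
```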
To create host-to-virtual disk mappings:
1. In the AMW, select the Host Mappings tab.
2. In the object tree, select: