Storage Manager 2019 R1 Administrator’s Guide
Preface

About This Guide

This guide describes how to use Storage Manager to manage and monitor your storage infrastructure. For information about installing and configuring required Storage Manager components, see the Storage Manager Installation Guide.

How to Find Information

To Find: A description of a field or option in the user interface
Action: In Storage Manager, click Help. In Unisphere, select Help from the ? drop-down menu.
• Contains in-depth feature configuration and usage information.

Unisphere and Unisphere Central for SC Series Administrator’s Guide
• Contains instructions and information for managing storage devices using Unisphere and Unisphere Central for SC Series.

Storage Manager Release Notes
• Provides information about Storage Manager releases, including new features and enhancements, open issues, and resolved issues.
Provides information about FS8600 appliance hardware, system component replacement, and system troubleshooting. The target audience for this document is Dell installers and certified business partners who perform FS8600 appliance hardware service.
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2019 - 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
1 Storage Manager Overview

Storage Manager allows you to monitor, manage, and analyze Storage Centers, FluidFS clusters, and PS Series Groups from a centralized management console.
• The Storage Manager Data Collector stores data and alerts it gathers from Storage Centers in an external database or an embedded database.
• Some functions of the Data Collector are managed by the web application Unisphere Central.
Component Requirements

Operating system
Any of the following 64-bit operating systems with the latest service packs:
• Windows Server 2012 R2
• Windows Server 2016
• Windows Server 2019
NOTE: Windows Server Core is not supported.

Windows User Group
Administrators

CPU
64-bit (x64) microprocessor with two or more cores. The Data Collector requires a microprocessor with four cores for environments that have 100,000 or more Active Directory members or groups.
Component Requirements

Operating system
• Windows Server 2019
• SUSE Linux Enterprise 12
• Red Hat Enterprise Linux 7.1
• Red Hat Enterprise Linux 7.2
• Red Hat Enterprise Linux 7.3
• Red Hat Enterprise Linux 7.4
• Red Hat Enterprise Linux 7.6
• Oracle Linux 7.0
• Oracle Linux 7.3
• Oracle Linux 7.6
NOTE: Windows Server Core is not supported.

CPU
64-bit (x64) microprocessor with two or more cores

Software
Microsoft .NET Framework 4.
Data Collector Ports

The following tables list the default ports that are used by the Storage Manager Data Collector:

Inbound Data Collector Ports

NOTE: Configure the firewall rules on the server on which the Data Collector is installed to enable inbound connections on the inbound Data Collector ports.
Outbound Ports

The Storage Manager Client and Unisphere Central initiate connections to the following port:

Port: 3033
Protocol: TCP
Name: Web Server Port
Purpose: Communicating with the Storage Manager Data Collector

Server Agent Ports

The following tables list the ports used by the Storage Manager Server Agent.

Inbound Server Agent Port

The Server Agent accepts connections on the following port.
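When troubleshooting client connection problems, a quick TCP connect test from the client machine can confirm whether a Data Collector or Server Agent port accepts connections before firewall rules are revisited. The following Python sketch is an illustrative helper, not part of Storage Manager; the host name in the usage comment is a placeholder.

```python
import socket

def tcp_port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace the placeholder host name with your Data Collector):
#   tcp_port_open("dc.example.local", 3033)  -> True when reachable
```

A result of False indicates either that nothing is listening on the port or that a firewall between the client and the server is blocking the connection.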
Storage Manager Features Storage Manager provides the following features. Storage Management Features Storage Manager provides the following storage management features. Storage Center Management Storage Manager allows you to centrally manage multiple Storage Centers. For each Storage Center, you can configure volumes, snapshot profiles, and storage profiles. You can also present configured storage to servers by defining server objects and mapping volumes to them.
Related concepts Managing Virtual Volumes With Storage Manager Disaster Recovery Features Storage Manager allows you to plan and implement a disaster recovery strategy for your Storage Center volumes. Remote Storage Centers and Quality of Service Storage Centers can be connected to each other by Fibre Channel or iSCSI to allow data to be copied between them.
Monitoring and Reporting Features Storage Manager provides the following reporting and monitoring features. Threshold Alerts The Threshold Alerts feature provides centralized administration and monitoring of threshold alert definitions. The types of usage metrics that can be monitored are I/O, storage, and replication usage. The Storage Manager Data Collector collects the usage data from the managed Storage Centers.
Storage Manager Client Overview

The Storage Manager Client is a Windows-based program that allows you to connect to the Storage Manager Data Collector and centrally manage your Storage Centers, PS Series Groups, and FluidFS clusters.

Figure 1. Storage Manager Client Window

The left pane, which is composed of the View pane and Views, can be resized by dragging the right border to the left or right. The following table describes the primary elements of the Storage Manager Client.
Callout Client Elements Description 4 Right pane Displays management and monitoring options for the view that is selected in the views pane.
2 Getting Started Start the Storage Manager Client and connect to the Data Collector. When you are finished, review the next steps for suggestions on how to proceed. For instructions on setting up a new Storage Center, see Storage Center Deployment. Topics: • • Use the Client to Connect to the Data Collector Next Steps Use the Client to Connect to the Data Collector Start the Storage Manager Client and use it to connect to the Data Collector. By default, you can log on as a local Storage Manager user.
4. Type the user name and password in the User Name and Password fields.
5. Specify your credentials.
• If you want to log on as a local Storage Manager user, Active Directory user, or OpenLDAP user, type the user name and password in the User Name and Password fields.
• For OpenLDAP, the user name format is supported (example: user).
• For Active Directory, the user name (example: user), User Principal Name (example: user@domain), and NetBIOS ID (example: domain\user) user name formats are supported.
Add Storage Centers to Storage Manager Use the Storage Manager Client to add Storage Centers to Storage Manager. Related concepts Adding and Organizing Storage Centers Configure Storage Center Volumes After you have added Storage Centers to the Data Collector or connected directly to a single Storage Center, you can create and manage volumes on the Storage Centers. You can also manage snapshot profiles and storage profiles on the Storage Centers.
Set up Remote Storage Centers and Replication QoS

If you want to protect your data by replicating volumes from one Storage Center to another, set up connectivity between your Storage Centers. Create Replication Quality of Service (QoS) definitions on each Storage Center to control how much bandwidth is used to transmit data to remote Storage Centers.
3 Storage Center Overview

How Storage Virtualization Works

Storage Center virtualizes storage by grouping disks into pools of storage called Storage Types, which hold small chunks (pages) of data. Block-level storage is allocated for use by defining volumes and mapping them to servers. The storage type and storage profile associated with the volume determine how a volume uses storage. Storage Center combines the following features to provide virtualized storage.
• Virtually – All disk space is allocated into tiers. The fastest disks reside in Tier 1 and slower drives with lower performance reside in Tier 3. Data that is accessed frequently remains in Tier 1, and data that has not been accessed for the last 12 progression cycles is gradually migrated to Tiers 2 and 3. Data is promoted to a higher tier after three days of consistent activity. Disk tiering is shown when you select a Storage Type.
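The demotion behavior described above, where data untouched for 12 progression cycles gradually moves to lower tiers, can be sketched conceptually. The following Python fragment is only an illustration of that rule, not Storage Center's actual data progression algorithm:

```python
# Conceptual model of the demotion rule only: pages idle for 12 or more
# progression cycles move down one tier (Tier 1 -> Tier 2 -> Tier 3).
IDLE_CYCLES_FOR_DEMOTION = 12
LOWEST_TIER = 3

def run_progression_cycle(pages):
    """Advance one cycle; demote pages that have been idle long enough.

    Each page is a dict with 'tier' (1-3) and an 'idle_cycles' counter.
    """
    for page in pages:
        page["idle_cycles"] += 1
        if (page["idle_cycles"] >= IDLE_CYCLES_FOR_DEMOTION
                and page["tier"] < LOWEST_TIER):
            page["tier"] += 1
            page["idle_cycles"] = 0  # restart the idle count in the new tier

def record_access(page):
    """Accessing a page resets its idle counter."""
    page["idle_cycles"] = 0
```

For example, a Tier 1 page that has been idle for 11 cycles is demoted to Tier 2 on the next cycle, while an actively accessed page keeps its counter at zero and stays in place.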
When Storage Center consumes a spare drive, a feature called Drive Spare Rightsizing allows Storage Center to modify the size of a larger-capacity spare drive to match the capacity of the drive being replaced in the tier. After the size of the drive is modified in this manner, the drive cannot be returned to its original size. Drive Spare Rightsizing is enabled by default for all controllers running Storage Center version 7.2.11 and later.
• Dual redundant: Dual redundant is the recommended redundancy level for all tiers. It is enforced for 3 TB HDDs and higher and for 18 TB SSDs and higher. Dual-redundant tiers can contain any of the following types of RAID storage:
• RAID 10 Dual-Mirror (data is written simultaneously to three separate drives)
• RAID 6-6 (4 data segments, 2 parity segments for each stripe)
• RAID 6-10 (8 data segments, 2 parity segments for each stripe)
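The data and parity counts above determine how much raw capacity remains usable. As a rough illustration (the drive counts and sizes below are hypothetical), the usable fraction per stripe follows directly from the data-to-total segment ratio:

```python
# Usable fraction of raw capacity per stripe for the RAID levels above.
# RAID 10 Dual-Mirror keeps three copies of the data, RAID 6-6 stripes
# hold 4 data + 2 parity segments, and RAID 6-10 stripes hold
# 8 data + 2 parity segments.
raid_efficiency = {
    "RAID 10 Dual-Mirror": 1 / 3,
    "RAID 6-6": 4 / 6,
    "RAID 6-10": 8 / 10,
}

raw_tb = 24 * 4  # hypothetical example: 24 drives of 4 TB = 96 TB raw
for level, eff in raid_efficiency.items():
    print(f"{level}: about {raw_tb * eff:.1f} TB usable of {raw_tb} TB raw")
```

These figures ignore spare drives and metadata overhead; they only show why wider RAID 6 stripes yield more usable space than triple mirroring at the same redundancy level.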
Conservation Mode

A Storage Center enters Conservation mode when free space becomes critically low. Immediate action is necessary to avoid entering Emergency mode.

NOTE: Because of Conservation mode’s proximity to the emergency threshold, do not use it as a tool to manage storage or to plan adding disks to the Storage Center.

In Conservation mode, Storage Center responds with the following actions:
• Generates a Conservation mode alert.
• Prevents new volume creation.
Related concepts
Configuring Threshold Definitions

Related tasks
Empty the Recycle Bin
Apply a Storage Profile to One or More Volumes

Storage Center Operation Modes

Storage Center operates in four modes: Install, Pre-Production, Production, and Maintenance.

Install: Storage Center is in Install mode before the setup wizard for the Storage Center has been completed. Once setup is complete, Storage Center switches to Pre-Production mode.
Recommended (All Tiers) The Recommended storage profile is available only when data progression is licensed. Cost and performance are optimized when all volumes use the Recommended storage profile. The Recommended profile allows automatic data progression between and across all storage tiers based on data type and usage. When a volume uses the Recommended profile, all new data is written to Tier 1 RAID level 10 storage.
If Tier 1 fills to within 95% of capacity, Storage Center creates a space-management snapshot and moves it immediately to Tier 2 to free up space on Tier 1. The space-management snapshot is moved immediately and does not wait for a scheduled data progression. Space-management snapshots are marked as Created On Demand and cannot be modified manually or used to create View volumes. Space-management snapshots coalesce into the next scheduled or manual snapshot.
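The 95% trigger amounts to a simple inequality. The sketch below illustrates only the rule as stated above, with made-up capacity figures:

```python
# Illustrative check of the Tier 1 threshold described above
# (the capacity numbers used here are hypothetical).
TIER1_THRESHOLD = 0.95

def needs_space_management_snapshot(tier1_used_gb, tier1_capacity_gb):
    """Return True once Tier 1 usage reaches 95% of its capacity."""
    return tier1_used_gb / tier1_capacity_gb >= TIER1_THRESHOLD

print(needs_space_management_snapshot(9400, 10000))  # False: 94% used
print(needs_space_management_snapshot(9500, 10000))  # True: 95% used
```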
• Hardware Tab
• IO Usage Tab
• Charting Tab

Summary Tab

The Summary tab displays a customizable dashboard that summarizes Storage Center information. The Summary tab is displayed by default when a Storage Center is selected from the Storage navigation tree.

NOTE: Disk Space details and graph are displayed in Storage Center version 7.4.10 and later.

Figure 4.
Storage Tab

The Storage tab of the Storage view allows you to view and manage storage on the Storage Center. This tab is made up of two elements: the navigation pane and the right pane.

Figure 5. Storage Tab

Callout 1: Navigation pane
Callout 2: Right pane

Navigation Pane

The Storage tab navigation pane shows the following nodes:
• Storage Center: Shows a summary of current and historical storage usage on the selected Storage Center.
Related concepts
Adding and Organizing Storage Centers
Managing Storage Center Settings
Managing Volumes
Managing Snapshot Profiles
Managing Servers on a Storage Center
Managing Storage Profiles
Managing QoS Profiles

Hardware Tab

The Hardware tab of the Storage view displays status information for the Storage Center hardware and allows you to perform hardware-related tasks.

Figure 6.
IO Usage Tab The IO Usage tab of the Storage view displays historical performance statistics for the selected Storage Center and associated storage objects. This tab is visible only when connected to the Storage Center through the Data Collector. Figure 7. Storage View IO Usage Tab Related concepts Viewing Historical IO Performance Charting Tab The Charting tab of the Storage view displays real-time IO performance statistics for the selected storage object. Figure 8.
Alerts Tab The Alerts tab displays alerts for the Storage Center. Figure 9. Alerts Tab Logs Tab The Logs tab displays logs from the Storage Center. Figure 10.
4 Storage Center Deployment Discover and Configure Uninitialized SCv2000 Series Storage Systems When setting up the system, use the Discover and Configure Uninitialized Storage Centers wizard to find and configure new SCv2000 series storage systems. The wizard helps set up a Storage Center to make it ready for volume creation. NOTE: Make sure that you are running the latest version of the Dell Storage Manager Client.
Select a Storage Center to Initialize The next page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of uninitialized Storage Centers discovered by the wizard. Steps 1. Select the Storage Center to initialize. 2. (Optional) To blink the indicator light for the selected Storage Center, click Enable Storage Center Indicator. You can use the indicator light to verify that you have selected the correct Storage Center. 3. Click Next. 4.
Set Administrator Information

The Set Administrator Information page allows you to set a new password and an email address for the Admin user.

Steps
1. Enter a new password for the default Storage Center administrator user in the New Admin Password and Confirm Password fields.
2. Enter the email address of the default Storage Center administrator user in the Admin Email Address field.
3. Click Next.
• For a Fibre Channel or SAS storage system, the Confirm Configuration page appears.
• If any of the Storage Center front-end ports are down, the Storage Center Front-End Ports Down dialog box opens. Select the ports that are not connected to the storage network, then click OK. 2. When all of the Storage Center setup tasks are complete, click Next.
c) (Optional) In the Backup SMTP Mail Server field, enter the IP address or fully qualified domain name of a backup SMTP mail server and click OK. d) Click Test Server to verify connectivity to the SMTP server. e) If the SMTP server requires emails to contain a MAIL FROM address, specify an email address in the Sender Email Address field. f) (Optional) In the Common Subject Line field, enter a subject line to use for all emails sent by the Storage Center. 3. Click Next.
Provide Site Contact Information If the storage system is running Storage Center 7.3 or later, specify the site contact information. Steps 1. Select the Enable Onsite Address checkbox. 2. Type a shipping address where replacement Storage Center components can be sent. 3. Click Next. The Confirm Enable SupportAssist dialog box opens. 4. Click Yes. Validate the SupportAssist Connection If the storage system is running Storage Center 7.3 or later, the Validate SupportAssist Connection page opens.
About this task If Storage Center is unable to check for an update, the Unable to Check for Update page appears. Steps 1. Click Use Update Utility server and setup configuration. The Configure Update Utility dialog box appears. 2. In the Update Utility Host or IP Address field, type the host name or IP address of the Storage Center Update Utility. 3. In the Update Utility Port field, type the port of the Storage Center Update Utility.
• • SCv2080 SC4020 Steps 1. Configure the fault domain and ports (embedded fault domain 1 or Flex Port Domain 1). NOTE: The Flex Port feature allows both Storage Center system management traffic and iSCSI traffic to use the same physical network ports. However, for environments where the Storage Center system management ports are mixed with network traffic from other devices, separate the iSCSI traffic from management traffic using VLANs.
• The client must be connected to a Storage Manager Data Collector. Steps 1. Click the Storage view. 2. In the Storage pane, click Storage Centers. 3. In the Summary tab, click Discover and Configure Uninitialized Storage Centers . The Discover and Configure Uninitialized Storage Centers wizard opens. Select a Storage Center to Initialize The next page of the Discover and Configure Uninitialized Storage Centers wizard provides a list of uninitialized Storage Centers discovered by the wizard. Steps 1.
Customer Installation Authorization If the storage system is running Storage Center 7.3 or later, customer authorization is required. Steps 1. Type the customer name and title. 2. Click Next. Set System Information The Set System Information page allows you to enter Storage Center and storage controller configuration information to use when connecting to the Storage Center using Storage Manager. Steps 1. Type a descriptive name for the Storage Center in the Storage Center Name field. 2.
NOTE: After you click the Apply Configuration button, the configuration cannot be changed until after the Storage Center is fully deployed. Deploy the Storage Center The Storage Center sets up the controller using the information provided on the previous pages. Steps 1. The Storage Center performs system setup tasks. The Deploy Storage Center page displays the status of these tasks. To learn more about the initialization process, click More information about Initialization.
3. For Redundant Storage Types, you must select a redundancy level for each tier unless the drive type or size requires a specific redundancy level.
• Single Redundant: Single-redundant tiers can contain any of the following types of RAID storage:
• RAID 10 (each drive is mirrored)
• RAID 5-5 (striped across 5 drives)
• RAID 5-9 (striped across 9 drives)
• Dual Redundant: Dual redundant is the recommended redundancy level for all tiers.
3. Enter network information for the fault domain and its ports. NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 1 are in the same subnet. 4. Click Next. 5. On the Set IPv4 Addresses for iSCSI Fault Domain 2 page, enter network information for the fault domain and its ports. Then click Next. NOTE: Make sure that all the IP addresses for iSCSI Fault Domain 2 are in the same subnet. 6. Click Next. 7. Review the fault domain information. 8.
2. Select Use NTP Server and type the host name or IPv4 address of the NTP server, or select Set Current Time and set the time and date manually. 3. Click Next. Configure SMTP Server Settings If you have an SMTP server, configure the SMTP email settings to receive information from the Storage Center about errors, warnings, and events. Steps 1. By default, the Enable SMTP Email checkbox is selected and enabled.
Provide Contact Information Specify contact information for technical support to use when sending support-related communications from SupportAssist. Steps 1. Specify the contact information. 2. (Storage Center 7.2 or earlier) To receive SupportAssist email messages, select the Send me emails from SupportAssist when issues arise, including hardware failure notifications check box. 3. Select the preferred contact method, language, and available times. 4. (Storage Center 7.
• The Setup SupportAssist Proxy Settings dialog box appears if the Storage Center cannot connect to the SupportAssist Update Server. If the site does not have direct access to the Internet but uses a web proxy, configure the proxy settings: 1. Select Enabled. 2. Enter the proxy settings. 3. Click OK. The Storage Center attempts to contact the SupportAssist Update Server to check for updates. Complete Configuration and Perform Next Steps The Storage Center is now configured.
Open the Discover and Configure Uninitialized Storage Centers Wizard from the Storage Manager Client Open the wizard from the Storage Manager Client to discover and configure a Storage Center. Prerequisites • • • The Storage Manager Client must be running on a system with a 64-bit operating system. The Storage Manager Client must be run using Windows Administrator privileges. The client must be connected to a Storage Manager Data Collector. Steps 1. Click the Storage view. 2.
Deploy the Storage Center Using the Direct Connect Method Use the direct connect method to manually deploy the Storage Center when it is not discoverable. Steps 1. Use an Ethernet cable to connect the computer running the Storage Manager Client to the management port of the top controller. 2. Cable the bottom controller to the management network switch. 3. Click Discover and Configure Uninitialized Storage Centers. The Discover and Configure Uninitialized Storage Centers wizard opens. 4.
NOTE: The storage controller IPv4 addresses and management IPv4 address must be within the same subnet.

c) Type the subnet mask of the management network in the Subnet Mask field.
d) Type the gateway address of the management network in the Gateway IPv4 Address field.
e) Type the domain name of the management network in the Domain Name field.
f) Type the DNS server addresses of the management network in the DNS Server and Secondary DNS Server fields.
4. Click Next.
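The same-subnet requirement in the note above can be checked before entering addresses in the wizard. The following Python sketch uses the standard ipaddress module; the addresses shown are made-up examples:

```python
import ipaddress

def all_in_same_subnet(addresses, netmask):
    """Return True when every IPv4 address falls in one subnet for the mask."""
    networks = {
        ipaddress.ip_network(f"{addr}/{netmask}", strict=False)
        for addr in addresses
    }
    return len(networks) == 1

# Hypothetical controller and management addresses:
addrs = ["10.10.8.21", "10.10.8.22", "10.10.8.23"]
print(all_in_same_subnet(addrs, "255.255.255.0"))                    # True
print(all_in_same_subnet(addrs + ["10.10.9.24"], "255.255.255.0"))   # False
```

The same check applies to the iSCSI fault domain pages earlier in this chapter, which also require all port addresses for a fault domain to be in one subnet.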
3. In the Timeout field, type the amount of time in seconds after which the Storage Center should stop attempting to reconnect to the key management server after a failure. 4. To add alternate key management servers, type the host name or IP address of another key management server in the Alternate Hostnames area, and then click Add. 5. If the key management server requires a user name to validate the Storage Center certificate, enter the name in the Username field. 6.
Configure Fibre Channel Ports Create a Fibre Channel fault domain to group FC ports for failover purposes. Steps 1. On the first Configure Fibre Channel Fault Tolerance page, select a transport mode: Virtual Port or Legacy. 2.
• If you are setting up SAS back-end ports, the Configure Back-End Ports page opens.
• If you are not setting up SAS back-end ports, the Inherit Settings or Time Settings page opens.

Configure SAS Ports

For a Storage Center with SAS front-end ports, the Review Fault Domains page displays information about the fault domains that were created by the Storage Center.

Prerequisites
• One port from each controller within the same fault domain must be cabled.
2. Alternatively, if you have an SMTP server, configure the SMTP server settings. a) In the Recipient Email Address field, enter the email address where the information will be sent. b) In the SMTP Mail Server field, enter the IP address or fully qualified domain name of the SMTP mail server. c) (Optional) In the Backup SMTP Mail Server field, enter the IP address or fully qualified domain name of a backup SMTP mail server and click OK. d) Click Test Server to verify connectivity to the SMTP server.
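The Test Server button performs a connectivity check against the SMTP server. The same kind of check can be scripted in advance; the sketch below is an illustrative helper with a placeholder host name, not a Storage Manager feature:

```python
import smtplib

def smtp_reachable(host, port=25, timeout=5.0):
    """Return True if an SMTP server at host:port answers commands."""
    try:
        with smtplib.SMTP(host, port, timeout=timeout) as server:
            code, _banner = server.noop()  # harmless no-op after the greeting
            return code == 250
    except (OSError, smtplib.SMTPException):
        return False

# Example (placeholder host name from your environment):
#   smtp_reachable("mail.example.local")  -> True when the server responds
```

If this check fails for both the primary and backup mail servers, the Storage Center will not be able to deliver its alert emails either.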
Provide Site Contact Information If the storage system is running Storage Center 7.3 or later, specify the site contact information. Steps 1. Select the Enable Onsite Address checkbox. 2. Type a shipping address where replacement Storage Center components can be sent. 3. Click Next. The Confirm Enable SupportAssist dialog box opens. 4. Click Yes. Validate the SupportAssist Connection If the storage system is running Storage Center 7.3 or later, the Validate SupportAssist Connection page opens.
2. In the Storage pane, select Dell Storage or Storage Centers.
3. In the Summary tab, click Add Storage Center. The Add Storage Center wizard opens.
4. Select Add a new Storage Center to the Data Collector, then click Next.
5. Specify Storage Center login information.
• Hostname or IP Address – Type the host name or IP address of a Storage Center controller. For a dual-controller Storage Center, enter the IP address or host name of the management controller.
3. Click Change to open a dialog box for selecting the disks to assign to the folder. 4. Click Next. The Create Storage Type page opens. 5. Select the redundancy level from the drop-down menu for each disk tier. 6. (Optional) Select the datapage size from the Datapage Size drop-down menu. 7. Click Next. The Add Controller page opens. Add a Controller (Configure Storage Center Wizard) Configure the second controller for systems with two controllers.
Enter Key Management Server Settings Specify key management server settings, such as hostname and port. Steps 1. In the Hostname field, type the host name or IP address of the key management server. 2. In the Port field, type the number of a port with open communication with the key management server. 3. In the Timeout field, type the amount of time in seconds after which the Storage Center should stop attempting to reconnect to the key management server after a failure. 4.
6. (Optional) To change the fault domain setup, select from the following options:
• Click Create Fault Domain to create a new fault domain.
• Click Edit Fault Domain to edit the current fault domain.
• Click Remove to delete a fault domain.
7. Click Next.
• If you are setting up iSCSI fault domains, the Configure iSCSI Fault Domain page opens.
• If you are setting up SAS back-end ports but not iSCSI fault domains, the Configure Back-End Ports page opens.
Configure Back-End Ports (Configure Storage Center Wizard) Select SAS ports to configure for connecting to enclosures. Steps 1. On the Configure Back-End Ports page, select the SAS ports to configure. 2. Click Next. The Inherit Settings or Time Settings page opens. Inherit Settings Use the Inherit Settings page to copy settings from a Storage Center that is already configured. Prerequisites You must be connected through a Data Collector. Steps 1. Select the Storage Center from which to copy settings. 2.
Accept the SupportAssist Collection Agreement
Use the Accept SupportAssist Collection Agreement page to accept the terms of the agreement and enable SupportAssist.
Steps
1. To allow SupportAssist to collect diagnostic data and send this information to technical support, select the By checking this box, you accept the above terms and turn on SupportAssist checkbox.
2. Click Next.
Complete Configuration and Perform Next Steps The Storage Center is now configured. The Configuration Complete page provides links to a Storage Manager Client tutorial and wizards to perform the next setup tasks. Steps 1. (Optional) Click one of the Next Steps to configure a localhost, configure a VMware host, or create a volume. When you have completed the step, you are returned to the Configuration Complete page. After you finish the Next Steps, continue to Step 2. 2. Click Finish to exit the wizard.
Steps
1. Click the Storage view.
2. In the Storage pane, select Dell Storage or Storage Centers.
3. In the Summary tab, click Add Storage Center. The Add Storage Center wizard opens.
4. Select Add a new Storage Center to the Data Collector, then click Next.
5. Specify Storage Center login information.
• Hostname or IP Address – Type the host name or IP address of a Storage Center controller. For a dual-controller Storage Center, enter the IP address or host name of the management controller.
d) Type the gateway address of the management network in the Gateway IPv4 Address field. e) Type the domain name of the management network in the Domain Name field. f) Type the DNS server addresses of the management network in the DNS Server and Secondary DNS Server fields. 4. Click Next. Submit the Storage Center License Use the Submit Storage Center License page to type the name and title of the approving customer and to select the Storage Center license file. Steps 1. Click Browse.
• If one or more of the system setup tasks fails, click Troubleshoot Initialization Error to learn how to resolve the issue.
• If the Configuring Disks task fails, click View Disks to see the status of the drives detected by the Storage Center.
• If any of the Storage Center front-end ports are down, the Storage Center Front-End Ports Down dialog box opens. Select the ports that are not connected to the storage network, then click OK.
2. When all of the Storage Center setup tasks are complete, click Next.
4. Drive Addition is selected by default. Leave this option selected. 5. Click Next. Configure Ports Use the Configure Fault Tolerance pages to configure the front-end and back-end ports of the system. Steps 1. Select Configure Fault Domains next to Fibre Channel or iSCSI to set up fault domains for those ports. If the system has both Fibre Channel and iSCSI ports, select Configure Fault Domains next to both port types. 2.
a) In the Name field, type a name for the fault domain.
b) (Optional) In the Notes field, type notes for the fault domain.
c) In the Target IPv4 Address field, type an IP address to assign to the iSCSI control port.
d) In the Subnet Mask field, type the subnet mask for the IP address.
e) In the Gateway IPv4 Address field, type the IP address for the iSCSI network default gateway.
f) In the Ports table, select the iSCSI ports to add to the fault domain.
c) (Optional) In the Backup SMTP Mail Server field, enter the IP address or fully qualified domain name of a backup SMTP mail server and click OK. d) Click Test Server to verify connectivity to the SMTP server. e) If the SMTP server requires emails to contain a MAIL FROM address, specify an email address in the Sender Email Address field. f) (Optional) In the Common Subject Line field, enter a subject line to use for all emails sent by the Storage Center. 3. Click Next.
Provide Site Contact Information If the storage system is running Storage Center 7.3 or later, specify the site contact information. Steps 1. Select the Enable Onsite Address checkbox. 2. Type a shipping address where replacement Storage Center components can be sent. 3. Click Next. The Confirm Enable SupportAssist dialog box opens. 4. Click Yes. Validate the SupportAssist Connection If the storage system is running Storage Center 7.3 or later, the Validate SupportAssist Connection page opens.
• The Storage Manager Client must be run by a Storage Manager Client user with the Administrator privilege.
• The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator or Volume Manager privilege.
• On a Storage Center with Fibre Channel IO ports, configure the Fibre Channel zoning.
Steps
1. On the Configuration Complete page of the Discover and Configure Storage Center wizard, click Configure this host to access a Storage Center.
Set Up a VMware vCenter Host from Initial Setup
Configure a VMware vCenter host to access block-level storage on the Storage Center.
Prerequisites
• Client must be running on a system with a 64-bit operating system.
• The Storage Manager Client must be run by a Storage Manager Client user with the Administrator privilege.
• The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator or Volume Manager privilege.
5 Storage Center Administration
Adding and Organizing Storage Centers
Adding and organizing Storage Centers can only be done using Storage Manager connected to a Data Collector. Note the following restrictions about Storage Manager user accounts:
• An individual Storage Manager user can view and manage only the Storage Centers that have been mapped to his or her account. In other words, the Storage Centers that are visible to one Storage Manager user are not necessarily visible to another user.
Add a Storage Center
Add a Storage Center to Storage Manager to manage and monitor the Storage Center from the Storage Manager Client.
Prerequisites
• You must have the username and password for a Storage Center user account.
• The first time a Storage Center is added to Storage Manager, you must specify a Storage Center user account that has the Administrator privilege.
• If no Storage Centers are mapped to another user, the dialog box allows you to enter a new Storage Center.
4. (Conditional) If the dialog box is displaying a list of Storage Centers, select a Storage Center from the list or add a new one.
• To add a Storage Center that does not appear in the list, ensure that the Add a new Storage Center to the Data Collector check box is selected, then click Next.
2. In the Storage pane, select the Storage Center.
3. In the Summary tab, click Reconnect to Storage Center. The Reconnect to Storage Center dialog box appears.
4. Enter Storage Center logon information.
• Hostname or IP Address: Enter the host name or IP address of a Storage Center controller. For a dual-controller Storage Center, enter the IP address or host name of the management controller.
• User Name and Password: Enter the user name and password for a Storage Center user.
The Select Folder dialog box opens.
4. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
5. Select the folder to which to move the Storage Center.
6. Click OK.
Rename a Storage Center Folder
Use the Edit Settings dialog box to change the name of a Storage Center folder.
Steps
1. Click the Storage view.
2. In the Storage pane, select the Storage Center folder you want to modify.
3.
Managing Volumes A Storage Center volume is a logical unit of storage that servers can access over a network. You can allocate more logical space to a volume than is physically available on the Storage Center. Attributes That Determine Volume Behavior When a volume is created, attributes are associated with the volume to control its behavior. Attribute Description Storage Type Specifies the disk folder, tier redundancy, and data page size of the storage used by the volume.
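Because a volume's logical size can exceed the physical capacity behind it (thin provisioning, as described above), it is useful to track how far logical allocation outruns physical capacity. A minimal illustrative sketch; the sizes are hypothetical examples, not taken from any real array:

```python
def oversubscription_ratio(logical_gb, physical_gb):
    """Ratio of logical space promised to servers vs. physical capacity.
    A ratio above 1.0 means more space has been allocated than
    physically exists, so consumption should be monitored."""
    return sum(logical_gb) / physical_gb

# Hypothetical volumes: three 2 TB volumes on 4 TB of physical disk
ratio = oversubscription_ratio([2048, 2048, 2048], 4096)
print(f"{ratio:.2f}")  # 1.50
```

A ratio well above 1.0 is normal for thin-provisioned systems, but it signals that physical consumption needs ongoing monitoring.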
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select Volumes. 4. In the right pane, click Create Volume. The Create Volume dialog box opens. 5. In the Name field, type a name for the volume. 6. In the Size field, type a size for the volume in kilobytes (KB), megabytes (MB), gigabytes (GB), or terabytes (TB).
8. Click Next. (Optional) The Storage Options page appears. If no options are available, the wizard will not display this page.
9. In the Storage Options page, specify the options for the volume. Storage options vary based on the features the Storage Center supports.
• If more than one Storage Type is defined on the Storage Center, select the Storage Type to provide storage from the Storage Type drop-down menu.
• If Chargeback is enabled, select the department to charge for storage costs associated with the volume by clicking Change across from Chargeback Department. • If Data Reduction is enabled on the Storage Center, select Compression or Deduplication with Compression to enable Data Reduction on the volume. • To use specific disk tiers and RAID levels for volume data, select the appropriate Storage Profile from the Storage Profile drop-down menu.
9. In the Storage Options page, specify the storage options for the volumes.
• To schedule snapshot creation and expiration for the volume, apply one or more Snapshot Profiles by clicking Change across from Snapshot Profiles.
• To map the volume to a server, click Change across from Server.
• If Chargeback is enabled, select the department to charge for storage costs associated with the volume by clicking Change across from Chargeback Department.
3. In the right pane, click the Volumes node.
4. Click Edit Multiple Volumes. The Edit Multiple Volumes wizard opens.
5. Select the volumes you want to edit.
6. Click Next.
7. Modify the volume settings as needed. For more information on the volume settings, click Help.
8. Click Next.
9. Review the changes.
10. Click Finish. The Edit Multiple Volumes wizard modifies the volumes, then displays a results page.
11. Click Finish.
Rename a Volume
A volume can be renamed without affecting its availability.
Expand a Volume Expand the size of a volume if more space is needed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to expand. 4. In the right pane, click Expand Volume. The Expand Volume dialog box opens. 5. Type a new size for the volume, then click OK.
Assign Snapshot Profiles to Multiple Volumes
Snapshot Profiles can be assigned to multiple volumes in one operation.
Steps
1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the Volumes node.
4. In the right pane, select the volumes that you want to modify.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the right pane, click Edit Settings. The Edit Volume dialog box appears. 3. Select the Import to lowest tier checkbox. Associate a Chargeback Department with a Volume If Chargeback is enabled, you can assign a Chargeback Department to the volume to make sure the department is charged for the storage used by the volume. Steps 1. Click the Storage view. 2.
Configure Related View Volume Maximums for a Volume For a given volume, you can configure the maximum number of view volumes, including the original volume, that can be created for volumes that share the same snapshot. You can also configure the maximum combined size for these volumes. Prerequisites Consult with technical support before changing these limits. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
Create a Mirroring Volume
A mirroring volume is a copy of a volume that is dynamically updated to match the source volume. The source and destination volumes are continuously synchronized.
Steps
1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
2. Click the Storage tab.
3. In the Storage tab navigation pane, select a volume.
4. In the right pane, select Local Copy > Mirror Volume. The Mirror Volume dialog box opens.
5.
View Copy/Mirror/Migrate Information The Summary tab displays information for any copy, mirror, or migrate relationship involving the selected volume. Copy and migrate information is displayed in the Summary tab only during the copy or migrate operation. Prerequisites The volume must be in a copy, mirror, or migrate relationship. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Storage tab navigation pane, select a volume.
Migrating Volumes With Live Migrate
Live Migration moves a volume from one Storage Center to another Storage Center with no downtime.
Live Migration Requirements
To create Live Migrations, the requirements listed in the following table must be met:
Requirement: Storage Center version — The source and destination Storage Centers must be running version 7.1 or later.
NOTE: Dell recommends that both Storage Centers run the same version of Storage Center software.
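The version requirement above reduces to a simple comparison against the 7.1 minimum. A sketch of that check; the version strings are hypothetical examples:

```python
def meets_live_migration_requirement(source_version, destination_version,
                                     minimum=(7, 1)):
    """Both the source and destination Storage Centers must run at
    least the minimum version (7.1 for Live Migration)."""
    def parse(version):
        # "7.3.2" -> (7, 3, 2); tuple comparison handles the rest
        return tuple(int(part) for part in version.split("."))
    return (parse(source_version) >= minimum
            and parse(destination_version) >= minimum)

print(meets_live_migration_requirement("7.3.2", "7.1.10"))  # True
print(meets_live_migration_requirement("7.3.2", "6.7.5"))   # False
```

Per the note above, matching versions on both systems are recommended even when both satisfy the minimum.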
Live Migration Before Swap Role In the following diagram, the source Storage Center is on the left and the destination Storage Center is on the right. Figure 12. Example of Live Migration Configuration Before Swap Role 1. Server 2. Server I/O request to destination volume (forwarded to source Storage Center by destination Storage Center) 3. Source volume 4. Replication over Fibre Channel or iSCSI 5. Destination volume Live Migration After Swap Role In the following diagram, a role swap has occurred.
Live Migration After Complete In the following diagram, the Live Migration is complete. The server sends I/O requests only to the migrated volume. Figure 14. Example of Live Migration Configuration After Complete 1. Server 2. Old destination volume 3. Migrated volume 4. Server I/O request to migrated volume over Fibre Channel or iSCSI Creating a Live Migration Create a Live Migration to move a volume to another Storage Center without any down time.
Live Migration begins to migrate the volume to the destination Storage Center. Create a Live Migration for Multiple Volumes Use Live Migration to move multiple volumes from one Storage Center to another Storage Center with limited or no downtime. Prerequisites • • The volumes to be migrated must be mapped to a server. The volumes cannot be part of a replication, Live Volume, or Live Migration. Steps 1.
Cancel a Live Migration Source Storage Center Swap Cancel a swap of the source Storage Center to keep the current source and destination Storage Center. Prerequisites The Live Migration must be in the Swapping state. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Migrations tab, select the Live Migration whose swap you want to cancel, and then click Cancel Swap of Source Storage Center. The Cancel Swap of Source Storage Center dialog box opens. 3. Click OK. The swap is cancelled.
Steps 1. Click the Replications & Live Volumes view. 2. On the Live Migrations tab, select the Live Migration you want to modify, and then click Edit Settings. The Edit Live Migration dialog box opens. 3. Select or clear the Deduplication checkbox, then click OK. Change the Source Replication QoS Node for a Live Migration Select a different QoS node to change how the Live Migration uses bandwidth. Prerequisites The Live Migration must be in either the Syncing or the Ready to be Swapped state. Steps 1.
• Show In IO Usage Tab - Displays the source volume in the IO Usage tab. View the Destination Volume of a Live Migration View more information about the destination volume of a Live Migration in the Storage tab or IO Usage tab. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Migrations tab, select the Live Migration whose destination volume you want to view. 3.
Move a Volume Folder Use the Edit Settings dialog box to move a volume folder. Folders can be nested in other folders. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the right pane, click Edit Settings. The Edit Settings dialog box opens. 4. In the Parent field, select the appropriate parent folder. 5. Click OK.
View Snapshots on a Volume Click the Snapshots tab to see information about snapshots, such as freeze time, expiration time, size, and description. You can also view the snapshots on a volume in a tree view. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume. 4. In the right pane, click the Snapshots tab. 5.
• To add a volume QoS profile to be applied to the volume, click Change across from Volume QoS Profile. When the list of defined QoS profiles opens, select a profile, then click OK. You can also apply the default QoS profile to a volume.
• To add a group QoS profile to be applied to the volume, click Change across from Group QoS Profile. When the list of defined QoS profiles opens, select a profile, then click OK.
8. Map the recovery volume to the server from which the data will be accessed.
Expire a Snapshot Manually
If you no longer need a snapshot and you do not want to wait for it to be expired based on the snapshot profile, you can expire it manually.
Steps
1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the volume you want to modify.
4. In the right pane, click the Snapshots tab.
5.
7. (Optional) Click Advanced Mapping to configure LUN settings, restrict mapping paths, or present the volume as read-only. 8. Click Finish. Unmap a Volume from a Server Unmap a volume from a server if the server no longer needs to access the volume. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume you want to unmap from a server. 4.
Demote a Mapping from a Server Cluster to an Individual Server If a volume is mapped to a server cluster, you can demote the mapping so that it is mapped to one of the servers that belongs to the cluster. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the volume. 4. In the right pane, click the Mappings tab. 5.
• To assign the next unused LUN for the server, select the Use next available LUN checkbox.
• To make the volume bootable, select the Map volume using LUN 0 checkbox.
10. Click OK.
Specify Which Controller Processes IO for a Volume/Server Mapping
For dual-controller Storage Centers, you can manually specify which controller processes IO for a volume/server mapping. By default, the Storage Center automatically chooses a controller.
Steps
1.
Deleting Volumes and Volume Folders Delete volumes and volume folders when they are no longer needed. NOTE: For user interface reference information, click Help. Delete a Volume A deleted volume is moved to the Recycle Bin by default. Prerequisites Delete all associated replications, Live Volumes, or Live Migrations before deleting a volume. CAUTION: You can recover a deleted volume that has been moved to the Recycle Bin. However, a deleted volume cannot be recovered after the Recycle Bin is emptied.
Delete a Volume Folder
A volume folder must be empty before it can be deleted. If the deleted volumes from the folder are in the Recycle Bin, the volume folder is not considered empty and cannot be deleted.
Steps
1. Click the Storage view.
2. In the Storage pane, select a Storage Center.
3. Click the Storage tab.
4. In the Storage tab navigation pane, select the volume folder you want to delete.
5. In the right pane, click Delete. The Delete dialog box opens.
6. Click OK to delete the folder.
Supported Hardware Platforms
The following controller series support Data Reduction:
• SCv3000 Series (Supports Compression only)
• SC4020
• SC5020
• SC5020F
• SC7020
• SC7020F
• SC8000
• SC9000
Compression
Compression reduces the amount of space used by a volume by encoding data. Compression runs daily with Data Progression. To change the time at which compression runs, reschedule Data Progression. Compression does not run with an on-demand Data Progression.
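Data Reduction savings are reported as the difference between the space the eligible data would occupy unreduced and the space that data actually uses on disk after reduction. A minimal sketch of that calculation; the byte counts are hypothetical examples:

```python
def data_reduction_savings(eligible_bytes, reduced_bytes):
    """Space saved by Data Reduction: the size of the eligible data
    compared to the space that data occupies after reduction.
    Returns (saved_bytes, savings_percent)."""
    saved = eligible_bytes - reduced_bytes
    percent = 100.0 * saved / eligible_bytes
    return saved, percent

# Hypothetical volume: 500 GiB eligible, 200 GiB used after reduction
saved, percent = data_reduction_savings(500 * 2**30, 200 * 2**30)
print(f"saved {saved / 2**30:.0f} GiB ({percent:.0f}%)")  # saved 300 GiB (60%)
```

Actual savings depend on how compressible and deduplicable the volume's data is, so results vary per volume.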
About this task
NOTE: The amount of space saved by Data Reduction is determined by the amount of data eligible for Data Reduction on the volume compared to the total amount of space used by that data on disk after Data Reduction.
Steps
1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
2. Click the Storage tab.
3. In the Storage tab navigation pane, select a volume.
4. In the right pane, click Edit Settings. The Edit Volume dialog box opens.
5.
Change the Default Data Reduction Profile The default Data Reduction profile determines the type of data reduction that is applied to new volumes. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Preferences tab. 4. From the Data Reduction Profile drop-down list, select the default profile for new volumes.
5. Click OK.
Disable Data Reduction for a Volume
Disabling Data Reduction on a volume permanently uncompresses the reduced data, starting with the next Data Progression cycle.
Steps
1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the volume you want to modify.
4. In the right pane, click Edit Settings. The Edit Volume dialog box opens.
5.
Consistent Snapshot Profile Non-Consistent Snapshot Profile Number of volumes limited based on storage controller.
3. In the Storage tab navigation pane, select the Snapshot Profile. 4. In the right pane, click Apply to Volumes. The Apply to Volumes dialog box opens. 5. Select the volumes to which you want to apply the snapshot profile. To select individual volumes in a volume folder, expand the folder and select each volume individually. 6. (Optional) To remove existing snapshot profiles from the selected volumes, select Replace existing Snapshot Profiles. 7. Click OK.
2. Click the Storage tab.
3. In the Storage tab navigation pane, select the Snapshot Profile that you want to modify.
4. In the right pane, click Edit Settings. The Edit Snapshot Profile dialog box opens.
5. In the Name field, type a new name for the Snapshot Profile.
6. Click OK.
Modify Rules for a Snapshot Profile
Snapshot Profile rules determine when snapshots are created and expired.
Steps
1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
2.
4. Make sure the snapshot profile is not in use by any volumes. 5. In the right pane, click Delete. The Delete dialog box opens. 6. Click OK. Managing Expiration Rules for Remote Snapshots By default, snapshot profiles applied to remote volumes have the same rules for expiration as for local volumes. However, you can specify different expiration rules for remote volumes if needed. NOTE: For user interface reference information, click Help.
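Expiration rules like those above can be reasoned about as a simple schedule: a snapshot created at time t under a rule with expiration period e is eligible to expire at t + e. A minimal sketch using only the standard library; the intervals are hypothetical examples, not product defaults:

```python
from datetime import datetime, timedelta

def active_snapshots(start, now, creation_interval, expiration):
    """Snapshots created every `creation_interval` since `start`
    that have not yet passed their `expiration` period at `now`."""
    snaps = []
    t = start
    while t <= now:
        if t + expiration > now:  # not yet eligible to expire
            snaps.append(t)
        t += creation_interval
    return snaps

# Hypothetical rule: create hourly, expire after 4 hours
start = datetime(2019, 1, 1, 0, 0)
now = datetime(2019, 1, 1, 10, 0)
snaps = active_snapshots(start, now, timedelta(hours=1), timedelta(hours=4))
print([s.hour for s in snaps])  # [7, 8, 9, 10]
```

Applying a different expiration period to remote volumes simply changes `expiration` in this model while the creation schedule stays the same.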
Create a Storage Profile (Storage Center 7.2.1 and Earlier) Create a storage profile to specify custom RAID level and tier settings that can be applied to one or more volumes. Prerequisites In the Storage Center User Volume Defaults, the Allow Storage Profile selection checkbox must be selected. About this task NOTE: SCv2000 series controllers cannot create storage profiles. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2.
Apply a Storage Profile to One or More Volumes Apply a storage profile to a volume to specify the RAID level and tiers used by the volume. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the storage profile to apply to a volume. 4. In the right pane, click Apply to Volumes. The Apply to Volumes dialog box opens. 5.
Related concepts Managing Storage Profiles Managing Snapshot Profiles Create a QoS Profile QoS profiles include a set of attributes that control the QoS behavior for any volume to which it is applied. Prerequisites • • To enable users to set QoS profiles for a Storage Center, the Allow QoS Profile Selection option must be selected on the Storage Center Preferences settings.
4. Click OK. Apply a QoS Profile to a Volume Apply a previously defined QoS profile to a volume. Prerequisites The QoS profile must already exist. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Expand the QoS Profiles navigation tree. Right-click the name of the QoS profile. 3. Select Apply to Volumes. The Apply to Volumes dialog box opens. 4. Select the checkbox next to each volume to which you want to apply the QoS profile. 5.
6. From the iSCSI Network Type drop-down menu, select the speed of the iSCSI network. 7. Click Finish. A confirmation dialog box appears. 8. Click OK. PS Series Storage Array Import Requirements A PS Series storage array must meet the following requirements to import data to a Storage Center storage system. Component Requirement PS Series Firmware Version 6.0.
Import Data from an External Device (Offline)
Importing data from an external device copies data from the external device to a new destination volume in Storage Center. Complete the following task to import data from an external device.
Prerequisites
• An external device must be connected to the Storage Center.
• The destination volume must be unmapped from the server.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. From the External Devices node in the Storage tab navigation pane, select an external device. 4. Click Online Import from External Device. The Online Import from External Device dialog box opens. 5. Modify the Destination Volume Attributes as needed. NOTE: For more information, click Help. 6.
6 Storage Center Server Administration Storage Manager allows you to allocate storage on a Storage Center to the servers in your SAN environment. Servers that are connected to Storage Centers can also be registered to Storage Manager to streamline storage management. To present storage to a server, a server object must be added to the Storage Center.
Managing Servers Centrally Using Storage Manager
Servers that are registered to Storage Manager are managed from the Servers view. Registered servers are centrally managed regardless of the Storage Centers to which they are connected.
Figure 16. Servers View
The following additional features are available for servers that are registered to Storage Manager:
• Storage Manager gathers operating system and connectivity information from registered servers.
• SAS – Directly connect the controller to a server using SAS ports configured as front-end connections. 2. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 3. Click the Storage tab. 4. Select Servers in the Storage tab navigation pane. 5. In the right pane, click Create Server. The Create Server dialog box appears. Figure 17. Create Server Dialog Box 6. Configure the server attributes. The server attributes are described in the online help.
• Fibre Channel – Configure Fibre Channel zoning to allow the server HBAs and Storage Center HBAs to communicate.
• SAS – Directly connect the controller to a server using SAS ports configured as front-end connections.
2. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
3. Click the Storage tab.
4. Select the server that hosts the virtual server in the Storage tab navigation pane.
5. In the right pane, click Create Virtual Server.
Figure 19. Create Server Cluster Dialog Box 5. Configure the server cluster attributes. The server attributes are described in the online help. a) Enter a name for the server in the Name field. b) To add the server cluster to a server folder, click Change, select a folder, and click OK. c) From the Operating System drop-down menu, select the operating system for the cluster. NOTE: All servers in a server cluster must be running the same operating system.
6. The Host Setup Successful page displays the best practices that were set by the wizard and best practices that were not set. Make a note of any best practices that were not set by the wizard. It is recommended that these updates be applied manually before starting IO to the Storage Center. 7. (Optional) Place a check next to Create a Volume for this host to create a volume after finishing host setup. 8. Click Finish.
2. Click the Storage tab. 3. In the Storage tab, click Servers. 4. Click Create Server from a VMware vSphere or vCenter. The Set Up VMware Host on Storage Center wizard appears. 5. Enter the IP address or hostname, the user name, and the password. Then click Next.
• If the Storage Center has iSCSI ports and the host is not connected to any interface, the Log into Storage Center via iSCSI page appears. Select the target fault domains, and then click Log In.
3. Select the server to remove from the server cluster in the Storage tab navigation pane. 4. In the right pane, click Remove Server from Cluster. The Remove Server from Cluster dialog box opens. 5. Click OK. Convert a Physical Server to a Virtual Server If you migrated a physical server to a virtual machine, change the physical server object to a virtual server object and select the host physical server. Steps 1.
5. Select the operating system for the server from the Operating System drop-down list. 6. Click OK. Move a Server to a Different Server Folder For convenience, server objects can be organized by folders. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Select the server in the Storage tab navigation pane. 4. In the right pane, click Edit Settings. The Edit Server Settings dialog box opens. 5.
Figure 20. Remove HBAs from Server 5. Select the checkboxes of the HBAs to remove from the server. 6. Click OK. If the HBAs are used by one or more mapped volumes, a confirmation dialog box opens.
• To keep the HBAs, click Cancel.
• To remove the HBAs, click OK. Removing the HBAs might cause the server to lose visibility of the mapped volumes.
Mapping Volumes to Servers Map a volume to a server to allow the server to use the volume for storage.
Create a Volume and Map it to a Server If a server requires additional storage and you do not want to use an existing volume, you can create and map a volume to the server in a single operation. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Select the server to which to map a new volume in the Storage tab navigation pane. 4. In the right pane, click Create Volume. The Create Volume dialog box opens. 5.
4. In the right pane, click Create Multiple Volumes. The Create Multiple Volumes dialog box opens. 5. In the Volume Count field, type the number of volumes to create. 6. Type a name for the volume in the Name field. 7. Select a unit of storage from the drop-down menu and enter the size for the volume in the Size field. The available storage units are bytes, kilobytes (KB), megabytes (MB), gigabytes (GB), and terabytes (TB). 8. In the Volume Folder pane, select the parent folder for the volume. 9.
Create a Server Folder Create a server folder to group servers together. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the Servers node. 4. In the right pane, click Create Server Folder. The Create Server Folder dialog box opens. 5. Type a name for the folder in the Name field. 6. (Optional) Type information about the server folder in the Notes field. 7.
3. Select the server to delete in the Storage tab navigation pane. 4. In the right pane, click Delete. The Delete dialog box opens. 5. Click OK. Delete a Server Folder Delete a server folder if it is no longer needed. Prerequisites The server folder must be empty. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Select the server folder to delete in the Storage tab navigation pane. 4.
IPMI Support for NAS Appliances The Dell NAS appliances include Intelligent Platform Management Interface (IPMI) cards. Storage Manager communicates with the IPMI card to retrieve fan speed, temperature, voltage, and power supply information. The IPMI card also allows Storage Manager to clear the System Event Log (SEL), power off the server, and reset the server. The IPMI card must be properly configured to allow Storage Manager to communicate with it.
The Register Server dialog box opens. 4. In the Host or IP Address field, enter the host name or IP address of a vCenter Server. 5. Type the user name and password of an administrator on the vCenter Server in the User Name and User Password fields. 6. Select a parent folder for the server in the Folder navigation tree. 7. Configure automatic management settings for the Storage Centers to which the server is connected.
7. Select a parent folder for the new folder in the Parent navigation tree. 8. Click OK. Rename a Server Folder Select a different name for a server folder. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the server folder to rename. 4. In the right pane, click Edit Settings. The Edit Server Folder Settings dialog box opens. 5.
5. Click OK. Delete a Registered Server Remove a registered server from the Servers view if you no longer want to manage it from Storage Manager. If Auto Manage Storage Centers is enabled for the server, deleting it removes the HBAs from the corresponding Storage Center server objects. Steps 1. Click the Servers view. 2. In the Servers pane, select the server. 3. In the right pane, click Delete. The Delete Objects dialog box appears. 4. Click OK.
Retrieve Current Information from All Servers Trigger Storage Manager to refresh the data that is displayed for all servers. If Auto Manage Storage Centers is enabled for one or more servers, this action adds corresponding server objects to the associated Storage Centers. Steps 1. Click the Servers view. 2. Select the root Servers folder in the Servers pane. The Summary tab for all servers opens. 3. In the right pane, click Update Information on Servers. The Update Information on Servers dialog box opens.
When the Allow Automated Update Information check box is selected, the information that is displayed for all registered servers is updated every 30 minutes. 6. Click OK. Configure Reporting Settings for All Registered Servers You can specify the number of days for which data is gathered for all servers. Steps 1. Click the Servers view. 2. In the Servers pane, click Servers Properties. The Edit Settings dialog box opens. 3.
10. If you want to specify a custom LUN, restrict mapping paths, configure multipathing, or make the volume read-only, click Advanced Mapping. 11. To configure settings for the Storage Center volume that will be created, click Volume Settings. In the Volume Settings dialog box that appears, modify the options as needed, then click OK. a) Select the folder in which to create the volume from the Volume Folder drop-down menu. b) Type notes in the Notes field as needed.
11. If more than one Storage Type is defined on the Storage Center, select the Storage Type to provide storage from the Storage Type drop-down menu. 12. Click OK. Create a Datastore and Map it to VMware ESX Server You can create a datastore, map it to a VMware ESX environment, and mount it to the cluster in one operation. Steps 1. Click the Servers view. 2. In the Servers pane, select the VMware ESXi cluster or host on which to create the datastore. 3. In the right pane, click Create Datastore.
Related concepts Creating Server Volumes and Datastores Expand a Datastore Expand a VMware datastore if it is running out of space. Steps 1. Click the Servers view. 2. Select the datastore in the Servers pane. 3. In the right pane, click Expand Datastore. The Expand Datastore dialog box appears. 4. In the New Size field, type a new size for the datastore. 5. Click OK. Delete a Volume or Datastore Delete a volume or datastore if it is no longer needed by the server.
7. Select the operating system of the server from the Server Operating System field. 8. Click Finish. Manually Mapping a Windows Server to a Storage Center Server If the WWNs of a server are not correctly associated with the appropriate Storage Center server objects, you can manually create the mappings.
Managing NAS Appliances Powered by Windows Storage Server The Servers view displays operating system and HBA connectivity information about Dell NAS appliances powered by Windows Storage Server. If the IPMI card is correctly configured, you can view hardware status, clear the system event log, and control the power. View Operating System Information about a Windows-Based NAS Appliance The Summary tab displays information about the NAS server software and hardware. Steps 1. Click the Servers view. 2.
Steps 1. Click the Servers view. 2. In the Servers pane, select a Windows-based NAS appliance. The Summary tab appears. 3. Click the IPMI tab. 4. Click Clear SEL. The Clear SEL dialog box appears. 5. Click OK. The system event log is cleared. Shut Down a Windows-Based NAS Appliance If the IPMI card is configured correctly, you can remotely shut down a Windows-based NAS appliance. Prerequisites • The IPMI card in the appliance must be configured.
Data_Collector_Server: The host name or IP address of the Data Collector server.
Web_Server_Port: The web server port of the Data Collector server. The default is 3033.
2. If a certificate warning appears, acknowledge the warning to continue to the Data Collector website. 3. Click Download (.msi) in the Server Agent Installer row and save the installer to the Windows server or virtual machine.
Related tasks Register a Windows-Based Server Install the Server Agent on a Full Installation of Windows Server Install the Server Agent and register it to the Data Collector. Prerequisites
• The Server Agent must be downloaded.
• The server must meet the requirements listed in Server Agent Requirements.
• The server must have network connectivity to the Storage Manager Data Collector.
• The firewall on the server must allow TCP port 27355 inbound and TCP port 8080 outbound.
Manage the Server Agent with Server Agent Manager Use the Server Agent Manager to manage and configure the Server Agent service. Figure 21. Server Agent Manager Dialog Box The following list describes the objects in the Server Agent window.
Callout 1: Minimize/Close
Callout 2: Status Message Area
Callout 3: Control Buttons
Callout 4: Version and Port
Callout 5: Commands
Start the Server Agent Manager Under normal conditions, the Server Agent Manager is minimized to the Windows system tray.
Modify the Connection to the Data Collector If the Data Collector port, host name, or IP address has changed, use the Server Agent Manager to update the information. Steps 1. In Server Agent Manager, click Properties. The Properties dialog box appears. 2. Specify the address and port of the Storage Manager Data Collector. • • Host/IP Address: Enter the host name or IP address of the Data Collector. Web Services Port: Enter the Legacy Web Service Port of the Data Collector. The default is 8080. 3.
7 Managing Virtual Volumes With Storage Manager VVols is VMware’s storage management and integration framework, which is designed to deliver a more efficient operational model for attached storage. This framework encapsulates the files that make up a virtual machine (VM) and natively stores them as objects on an array. The VVols architecture enables granular storage capabilities to be advertised by the underlying storage.
The external database is expected to be deployed in a highly available manner including redundant switching connectivity. Lab Experimentation Use of VVols In a preproduction lab environment, a user could experiment with VVols and choose to purge all data on the array and restart with the intention of redeploying another VVols lab environment for experimentation purposes. The proper steps for purging data in a LAB environment only are: 1. Using VMware vCenter — Delete all respective VVols VMs 2.
You must use Storage Manager (connected to a Data Collector Manager) to create storage containers. Setting Up VVols Operations on Storage Manager To set up and run operations for virtual volumes (VVols) in Storage Manager, you must:
• Register VMware vCenter Server in Storage Manager.
• Register VMware vCenter Server in Storage Center, either by using the Auto Manage Storage Centers option in Storage Manager or by manually adding the vCenter Server in Storage Center.
VASA Provider Restrictions The following restrictions apply to the VASA provider:
• The Storage Manager VASA provider can be registered to only one vCenter Server.
• All ESXi and vCenter Server requests to the VASA provider are mapped to a single Storage Manager user.
• The VASA provider does not support user-defined storage profiles. Only default system-defined storage profiles can be used in VM Storage Policies.
3. Right-click the icon for the vCenter Server, and select Edit Settings. The Edit VMware vCenter Server Settings dialog box opens. 4. Click Unregister VASA Provider. 5. Click OK. Using Storage Manager Certificates With VASA Provider When you run the Register VASA Provider wizard, the URL of the VASA provider is automatically generated. This URL identifies the host on which the Data Collector is installed. The host is identified as either an IP address or Fully-Qualified Domain Name (FQDN).
IP Change Action Required NOTE: Failure to unregister the VASA Provider before making changes in the name lookup service results in initialization errors on vCenter for certain services and causes VASA registration to fail. Managing Storage Containers A storage container is a pool of storage that is used in a VMware environment that supports VVols. Storage containers can be created using the following methods:
• From the Storage view in the Navigation pane of Storage Manager, select Volumes.
These options are presented as checkboxes on the Create Storage Container wizard. NOTE: Even if the Compression Allowed and Deduplication Allowed checkboxes are selected, selecting the None profile option results in no action being taken. You can also select the Default Data Reduction Profile, if one has been specified using the User Preferences.
Table 4.
Old Checkbox Value: Deduplication Disabled. New Checkbox Value: Deduplication Enabled. Expected Behavior: The Data Reduction Profile of existing volumes remains unchanged. Clone/Fast Clone of a VM to the same storage container follows the rules of Table 5 (Expected Behavior for New VM Creation with Deduplication) and does not fail. New volumes are created with the Data Reduction Profile according to Table 5 (Expected Behavior for New VM Creation with Deduplication).
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the navigation pane, select Volumes. 4. In the right pane, click Create Storage Container. The Create Storage Container dialog box opens. 5. Specify general information about the storage container: a) In the Name field, type the name of the storage container.
View Storage Container Information Use the Volumes node in the Storage tab to display information about storage containers and virtual volumes (VVols). Storage containers appear in the Storage Center Storage tab along with volumes. To view information about a storage container, click the name of the storage container. When viewing information about a storage container, you can select the Summary, Volumes, Charts, and Historical Usage tabs.
d) If more than one storage type is defined on the Storage Center, select the storage type to provide storage from the Storage Type drop-down menu. e) Specify whether to allow compression by selecting or clearing the Compression Allowed checkbox. f) Specify whether to allow deduplication by selecting or clearing the Deduplication Allowed checkbox. g) Specify whether to allow encryption by selecting or clearing the Use Encryption checkbox.
If the host contains VVols, the Storage view for that host includes the following details about the protocol endpoints:
• Device ID
• Connectivity status
• Server HBA
• Mapped Via
• LUN Used
• Read Only (Yes or No)
Managing Virtual Volumes With Storage Manager 181
8 PS Series Storage Array Administration PS Series storage arrays optimize resources by automating performance and network load balancing. Additionally, PS Series storage arrays offer all-inclusive array management software, host software, and free firmware updates. To manage PS Series storage arrays using Dell Storage Manager, the storage arrays must be running PS Series firmware version 7.0 or later.
Callout Description Containers for storage resources (disk space, processing power, and network bandwidth). A pool can have one or more members assigned to it. A group can provide both block and file access to storage data. Access to block-level storage requires direct iSCSI access to PS Series arrays (iSCSI initiator). Access to file storage requires the FS Series NAS appliance using NFS or SMB protocols and the Dell FluidFS scale-out file system.
NOTE: If you specify a PS Series group user account with Pool administrator or Volume administrator permissions, access to the PS Series group from Storage Manager is restricted based on the PS Series group user account permissions. You cannot add a PS Series group to Storage Manager using a user account with read-only account permissions. 6. Click Finish. Reconnect to a PS Series Group If Storage Manager cannot communicate with a PS Series group, Storage Manager marks the PS Series group as down.
Organizing PS Series Groups Use folders to organize PS Series groups in Storage Manager. Create a PS Group Folder Use folders to group and organize PS Series groups. Steps 1. Click the Storage view. 2. In the Storage pane, select the PS Groups node. 3. In the Summary tab, click Create Folder. The Create Folder dialog box opens. 4. In the Name field, type a name for the folder. 5. In the Parent field, select the PS Groups node or a parent folder. 6. Click OK.
Steps 1. Click the Storage view. 2. In the Storage pane, select the PS Group folder to delete. 3. In the Summary tab, click Delete. The Delete PS Group Folders dialog box opens. 4. Click OK. Remove a PS Series Group Remove a PS Series group when you no longer want to manage it from Storage Manager.
Figure 23. PS Series Volumes Table 10. PS Series Volumes
Callout 1, PS Series group: Storage area network (SAN) comprising one or more PS Series arrays connected to an IP network. Arrays are high-performance (physical) block storage devices.
Callout 2, PS Series members: Each PS Series array is a member in the group and is assigned to a storage pool.
Callout 3, PS Series storage pools: Containers for storage resources (disk space, processing power, and network bandwidth).
Thin provisioning allocates space based on how much is actually used, while giving the impression that the entire volume size is available. (For example, a volume presented as 100 GB can consume only 20 GB of actual storage, while the rest remains available for other uses within the storage pool.) An offline volume cannot be accessed by the iSCSI initiator until it has been set online. For each volume, the group generates an iSCSI target name, which you cannot modify.
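The thin-provisioning accounting described above can be sketched as a few lines of arithmetic. This is an illustration only; the function and field names are not part of the PS Series firmware or any API:

```python
def thin_provisioning_report(reported_size_gb, in_use_gb, max_in_use_pct):
    """Illustrate thin-provisioning accounting for a volume.

    Hosts see the full reported size, but the storage pool only consumes
    the in-use space; the remainder stays available to other volumes.
    """
    in_use_pct = 100.0 * in_use_gb / reported_size_gb
    return {
        "reported_size_gb": reported_size_gb,        # what the iSCSI initiator sees
        "pool_consumed_gb": in_use_gb,               # what the pool actually allocates
        "available_to_pool_gb": reported_size_gb - in_use_gb,
        "in_use_pct": in_use_pct,
        # optional behavior: set the volume offline past the configured maximum
        "set_offline": in_use_pct > max_in_use_pct,
    }

# The 100 GB / 20 GB example from the text, with a 90% maximum in-use limit:
report = thin_provisioning_report(reported_size_gb=100, in_use_gb=20, max_in_use_pct=90)
print(report["in_use_pct"], report["set_offline"])  # 20.0 False
```

The same arithmetic explains the Maximum In-Use Space setting in the volume creation steps: once in-use space crosses the configured percentage, the volume can be taken offline automatically.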
• In the Maximum In-Use Space field, type the maximum in-use space percentage of the volume.
• To set the volume offline when the maximum in-use space is exceeded, select the Set offline when maximum in-use space is exceeded checkbox.
11. Click OK. Modify a Volume You can rename, move, or expand a volume after it has been created. You can also modify advanced volume attributes if needed. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4.
7. (Optional) In the Notes field, type a description for the folder. 8. Click OK. Edit a Volume Folder Edit a volume folder to change its settings. Prerequisites To use volume folders in Storage Manager, the PS Series group members must be running PS Series firmware version 8.0 or later. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node. 5.
7. Click OK. Move Multiple Volumes to a Folder Multiple volumes can be organized by moving a selection of volumes to a volume folder. Prerequisites To use volume folders in Storage Manager, the PS Series group members must be running PS Series firmware version 8.0 or later. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4.
Modify Volume Access Settings The read-write permission for a volume can be set to read-only or read-write. In addition, access to the volume from multiple initiators with different IQNs can be enabled or disabled. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume. 5. In the Summary tab, click Set Access Type. The Set Access Type dialog box opens. 6.
Add Access Policies to a Volume To control volume access for individual servers, add one or more access policies to a volume. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume. 5. In the right pane, click Add Access Policies. The Add Access Policies to Volume dialog box opens. 6. In the Access Policies area, select the access policies to apply to the volume. 7.
Restore a Volume from the Recycle Bin If you need to access a recently deleted volume, you can restore the volume from the recycle bin. About this task A volume in the recycle bin is permanently deleted at the date and time listed in the Purge Time column. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node and expand the Recycle Bin node. 5.
Creating a snapshot does not prevent access to a volume, and the snapshot is instantly available to authorized iSCSI initiators. Similar to volumes, snapshots appear on the network as iSCSI targets, and can be set online and accessed by hosts with iSCSI initiators. You can create a snapshot of a volume at the current time, or you can set up schedules to automatically create snapshots on a regular basis. If you accidentally delete data, you can set a snapshot online and retrieve the data.
3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Volumes node and select a volume that contains a snapshot. 5. From the Snapshots tab, select a snapshot to modify. 6. Click Edit Settings. The Modify Snapshot Properties dialog box opens. 7. In the Name field, type a name for the snapshot. 8. (Optional) In the Description field, type a description for the snapshot. 9.
2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select a volume that contains a snapshot. 5. From the Snapshots tab, select a snapshot to restore. 6. Click Restore Volume. The Restore Volume dialog box opens. 7. To set the volume online after it is restored, select the Set volume online after restore is complete checkbox. 8. Click OK. Delete a Snapshot Delete a snapshot when you no longer need it.
• To repeat the replication over a set amount of time, select Repeat Interval, then select how often to start the replication and the start and end times. 13. In the Replica Settings field, type the maximum number of replications the schedule can initiate. Create a Daily Replication Schedule A daily replication schedule determines how often a PS Series group replicates data to the destination volume at a set time or interval on specified days. Steps 1. Click the Storage view. 2.
Edit a Replication Schedule After creating a replication schedule, edit it to change how often the schedule initiates replications. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume. The volume must be the source of a replication relationship. 5. From the Schedules tab, select the replication schedule to edit. 6. Click Edit. The Edit Schedule dialog box appears. 7.
About Access Policies In earlier versions of the PS Series firmware, security protection was accomplished by individually configuring an access control record for each volume to which you wanted to secure access. Each volume supported up to 16 different access control records, which together constituted an access control list (ACL). However, this approach did not work well when large numbers of volumes were present.
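Conceptually, a record in an access policy grants access when every criterion it specifies (CHAP account, initiator IQN, IP address) matches the connecting initiator, and a volume is accessible if any of its associated policies matches. A minimal sketch of that matching logic follows; the class and field names are illustrative, not part of the PS Series firmware:

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    """One access policy: every criterion that is configured must match."""
    name: str
    iqns: set = field(default_factory=set)        # allowed iSCSI initiator names
    ips: set = field(default_factory=set)         # allowed initiator IP addresses
    chap_users: set = field(default_factory=set)  # allowed CHAP account names

    def allows(self, iqn, ip, chap_user=None):
        # A criterion left empty is not checked; configured criteria must all match.
        if self.iqns and iqn not in self.iqns:
            return False
        if self.ips and ip not in self.ips:
            return False
        if self.chap_users and chap_user not in self.chap_users:
            return False
        return True

def initiator_may_connect(policies, iqn, ip, chap_user=None):
    """A volume is accessible if any associated policy allows the initiator."""
    return any(p.allows(iqn, ip, chap_user) for p in policies)

policy = AccessPolicy("web-servers", iqns={"iqn.1991-05.com.example:web01"})
print(initiator_may_connect([policy], "iqn.1991-05.com.example:web01", "10.0.0.5"))  # True
```

Reusing one policy object across many volumes is what removes the per-volume ACL duplication that the older access control records required.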
Modify Target Authentication A PS Series group automatically enables target authentication using a default user name and password. If needed, you can change these credentials. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the Access node. 5. In the right pane, click Modify Target Authentication. The Modify Target Authentication dialog box opens. 6.
Edit an Access Policy Group After an access policy group is created, you can edit the settings of the access policy group. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Access node and select an access policy group. 5. In the right pane, click Edit Settings. The Edit Access Policy Group dialog box opens. 6. In the Name field, type a name for the access policy group. 7.
4. In the Storage tab navigation pane, expand the Access node and select the access policy group to delete. 5. In the right pane, click Delete. The Delete Access Policy Group dialog box opens. 6. Click OK. Create an Access Policy Access policies associate one or more authentication methods to available volumes. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the Access node. 5.
9. In the iSCSI Initiator field, type the iSCSI initiator name of a computer to which you want to provide access to a volume. 10. In the text box in the IPv4 Addresses area, type the IPv4 addresses of the iSCSI initiators to which you want to provide access and then click + Add. You can enter a single IP address or a range of IP addresses. IP addresses can also be entered in a comma-separated list. To remove an IPv4 address from the IPv4 Address area, select the address and click – Remove. 11. Click OK.
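Step 10 accepts a single address, a range of addresses, or a comma-separated list. The expansion of such input can be sketched with Python's standard ipaddress module; the start-end range form assumed here is illustrative, not necessarily the exact syntax the dialog accepts:

```python
import ipaddress

def expand_ipv4_entries(text):
    """Expand input like '10.0.0.5, 10.0.0.10-10.0.0.12' into IPv4 addresses."""
    addresses = []
    for entry in text.split(","):
        entry = entry.strip()
        if not entry:
            continue
        if "-" in entry:  # assumed range form: start-end, inclusive
            start_s, end_s = entry.split("-", 1)
            start = ipaddress.IPv4Address(start_s.strip())
            end = ipaddress.IPv4Address(end_s.strip())
            # IPv4Address converts to/from an integer, so a range is a simple loop
            addresses.extend(ipaddress.IPv4Address(n) for n in range(int(start), int(end) + 1))
        else:
            addresses.append(ipaddress.IPv4Address(entry))
    return addresses

ips = expand_ipv4_entries("10.0.0.5, 10.0.0.10-10.0.0.12")
print([str(ip) for ip in ips])
# ['10.0.0.5', '10.0.0.10', '10.0.0.11', '10.0.0.12']
```

Using ipaddress rather than string handling also rejects malformed entries early, since IPv4Address raises ValueError for invalid input.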
Remove Volumes From an Access Policy You can select the volumes that you want to unassociate from an access policy. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4. In the Storage tab navigation pane, expand the Access node and select an access policy. 5. In the right pane, click Remove Volumes. The Remove Volumes from Access Policy dialog box opens. 6. Select the checkboxes of the volumes to unassociate from the access policy. 7. Click OK.
View Audit Logs You can view audit logs for the last day, last 3 days, last 5 days, last week, last month, or a specified period of time. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Monitoring tab. 4. In the Monitoring tab navigation pane, select the Audit Logs node. 5. Select the date range of the audit log data to display. View Outbound Replications You can view outbound replications for a PS Series group. Steps 1. Click the Storage view. 2.
3. Click the Alerts tab. Information about the PS Series group alerts is displayed in the right pane.
9 Storage Center Maintenance Managing Storage Center Settings This section describes how to configure general Storage Center settings.
5. In the Name field, type a new name for the controller. 6. Click OK. Change the Operation Mode of a Storage Center Before performing maintenance or installing software updates, change the Operation Mode of a Storage Center to Maintenance. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the General tab. 4.
Modifying Storage Center Network Settings The shared management IP, controller management interfaces, and iDRAC can be managed using Storage Manager. NOTE: For user interface reference information, click Help. Modify the Storage Center Network Settings In a dual-controller Storage Center, the shared management IP address is hosted by the leader under normal circumstances. If the leader fails, the peer takes over the management IP, allowing management access when the normal leader is down.
5. Modify the DNS settings. a) In the DNS Server field, type the IP address of a DNS server on the network. b) (Optional) In the Secondary DNS Server field, type the IP address of a backup DNS server on the network. c) In the Domain Name field, type the name of the domain to which the Storage Center belongs. 6. Click OK. Modify iDRAC Interface Settings for a Controller The iDRAC interface provides out-of-band management for the controller. Configure the iDRAC interface settings when you reach the Configuration Complete screen. Steps 1.
Set Default Data Reduction Settings for New Volumes The default data reduction settings are used when a new volume is created unless the user changes them. You can prevent the default data reduction settings from being changed during volume creation by clearing the Allow Data Reduction Selection checkbox. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings.
Allow or Disallow Advanced Volume Mapping Settings Advanced volume mapping options include LUN configuration, mapping path options, and making the volume read-only. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Preferences tab. 4.
Set Default Volume QoS Profile Specify the default Volume QoS Profiles to be used for new volumes. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Preferences tab. 4. In the Quality of Service Profile area, click Change. The Select Volume QoS Profile to Apply dialog box opens, which shows all QoS profiles that have been defined.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Storage tab. 4. In the Data Progression Start Time field, select or type the time at which Data Progression starts running daily. 5. From the Data Progression Max Run Time drop-down menu, select the maximum time period that Data Progression is allowed to run. 6. Click OK.
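The Data Progression start time and maximum run time together define a daily window in which Data Progression is allowed to run. The arithmetic can be sketched as follows; the 19:00 start and 7-hour cap are example values, not defaults:

```python
from datetime import datetime, timedelta

def data_progression_window(start_hhmm, max_run_hours):
    """Compute today's window in which Data Progression may run.

    start_hhmm is an (hour, minute) tuple; the window simply extends
    max_run_hours past the start time, possibly into the next day.
    """
    now = datetime.now().replace(second=0, microsecond=0)
    start = now.replace(hour=start_hhmm[0], minute=start_hhmm[1])
    end = start + timedelta(hours=max_run_hours)
    return start, end

# Example: a 19:00 start with a 7-hour cap ends at 02:00 the next day.
start, end = data_progression_window((19, 0), 7)
print(start.strftime("%H:%M"), end.strftime("%H:%M"))  # 19:00 02:00
```

Choosing the window so it ends before business hours is the usual reason for setting both values together rather than only the start time.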
The Select Storage Center dialog box opens. 6. Select the checkbox for each Storage Center to which you want to apply the settings. 7. Click OK. Configuring Storage Center Secure Console Settings The secure console allows support personnel to access the Storage Center console without connecting through the serial port. NOTE: Do not modify the secure console configuration without the assistance of technical support.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Secure Console tab. 4. Select the Apply these settings to other Storage Centers checkbox. 5. Click Apply. The Select Storage Center dialog box opens. 6. Select the checkbox for each Storage Center to which you want to apply the settings. 7. Click OK.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the SMTP Server tab. 4. Select the Apply these settings to other Storage Centers checkbox. 5. Click Apply. The Select Storage Center dialog box opens. 6. Select the checkbox for each Storage Center to which you want to apply the settings. 7. Click OK.
c) From the Type drop-down menu, select the type of the SNMP trap request or SNMP inform request to use. d) In the Port field, type the port number of the network management system. e) If SNMPv1 Trap, SNMPv2 Trap, or SNMPv2 Inform is selected from the Type drop-down menu, type a password in the Community String field. f) If SNMPv3 Trap or SNMPv3 Inform is selected from the Type drop-down menu, select a user from the SNMP v3 User drop-down menu. g) Click OK. 9.
Apply Date and Time Settings to Multiple Storage Centers Date and time settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3.
Modify an Access Filter for a Storage Center Modify an access filter to change the users or IP addresses it allows. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the IP Filtering tab. 4. Select the access filter that you want to modify, then click Modify Filter. The Modify IP Filter dialog box opens. 5.
c) Click OK. The confirmation dialog box closes. d) Click OK. The Show Access Violations dialog box closes. 6. Click OK. Apply Access Filtering Settings to Multiple Storage Centers Access filtering settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1.
Managing Storage Center Users and Groups Storage Center users have access to folders, volumes, views, and commands depending on their privilege level and the user groups to which they belong. User accounts can be created locally and/or exist externally in a directory service. User Privilege Levels Each user is assigned a single privilege level. Storage Center has three levels of user privilege. Table 11.
• Administrator – When selected, the local user has full access to the Storage Center.
• Volume Manager – When selected, the local user has read and write access to volumes, servers, and disks in the folders associated with the assigned user groups.
• Reporter – When selected, the local user has read-only access to volumes, servers, and disks in the folders associated with the assigned user groups.
7.
The Edit Local User Settings dialog box opens. 5. From the Privilege drop-down menu, select the privilege level to assign to the user.
• Administrator – When selected, the local user has full access to the Storage Center.
• Volume Manager – When selected, the local user has read and write access to the folders associated with the assigned user groups.
• Reporter – When selected, the local user has read-only access to the folders associated with the assigned user groups.
6. Click OK.
4. On the Local Users subtab, select the user, then click Edit Settings. The Edit Local User Settings dialog box opens. 5. In the Allow User to Log In field, enable or disable access for the local user.
• To enable access, select the Enabled checkbox.
• To disable access, clear the Enabled checkbox.
6. Click OK. The local user Edit Settings dialog box closes. 7. Click OK.
Modify Descriptive Information About a Local Storage Center User The descriptive information about a local user includes the user's real name, department, title, location, telephone numbers, email addresses, and notes. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4.
Restore a Deleted Local Storage Center User A new password must be provided when restoring a deleted user. If you are restoring a deleted user with the Volume Manager or Reporter privilege, the user must be added to one or more local user groups. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4.
The wizard advances to the next page. 7. Add disk folders to the local user group. a) Select the disk folder(s) you want to add to the local user group, then click Next. The wizard advances to the next page. b) In the Name field, type a name for the local user group, then click Finish. 8. Click OK. Manage User Membership for a Local Storage Center User Group Local Storage Center users and directory users that have been individually granted access can be added to local Storage Center user groups. Steps 1.
Manage Folder Access Granted by a Local Storage Center User Group The folders that are associated with a local Storage Center user group determine the access that is granted by the user group. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4.
• (Active Directory only) Joining the controller to the domain requires credentials from a directory service user who is an administrator and who has sufficient privileges to create a computer record in the directory.
• (Active Directory only) To join the controller to the domain, forward and reverse DNS records for the Storage Center must be created in the domain.
  • For a single-controller Storage Center system, create DNS records for the controller IP address.
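The DNS requirement above can be illustrated with a hypothetical single-controller system. Assuming a controller named sc1 in the example.com domain at 10.10.10.40 (the names and addresses are illustrative only, not from this guide), the forward and reverse records in BIND zone-file syntax would look like:

```
; Forward zone file for example.com
sc1              IN  A    10.10.10.40

; Reverse zone file for 10.10.10.in-addr.arpa
40               IN  PTR  sc1.example.com.
```

For a dual-controller system, the equivalent pair of records would instead be created for the management IP address.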
• Password Renew Rate (Days): Number of days before the keytab is regenerated. The default value is 0, which equates to a password renew rate of 14 days. 11. Click Next. The Join Domain page opens. 12. Type the user name and password of a domain administrator. 13. Click Next. The Summary page opens. 14. If you want to change any setting, click Back to return to the previous page. 15. Click Finish. 16. Click OK.
(Commas and plus signs are escaped, except for the commas separating the RDNs.)
• In the Storage Center Hostname field, type the fully qualified domain name (FQDN) of the Storage Center.
  • For a single-controller Storage Center system, this is the fully qualified host name for the controller IP address.
  • For a dual-controller Storage Center system, this is the fully qualified host name for the management IP address.
• In the LDAP Domain field, type the LDAP domain to search.
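To illustrate the escaping rule noted above: in a distinguished name, a comma that is part of an attribute value is escaped with a backslash, while the commas that separate RDNs are left alone. A hypothetical entry:

```
CN=Smith\, John,OU=Sales,DC=example,DC=com
```

Here the first comma belongs to the CN value "Smith, John" and is escaped; the remaining commas separate the RDNs.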
4. On the Directory Users subtab, click Actions > Grant Access to Directory User. The Grant Access to Directory User dialog box opens. 5. In the User Principal Name field, type the directory user name assigned to the user. The following formats are supported: • • username@domain domain\username 6. In the Distinguished Name field, type the distinguished name for the user. Example: CN=Firstname Lastname,CN=Users,DC=example,DC=com 7.
The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4. On the Directory Users subtab, select the user, then click Edit Settings. The Edit Settings dialog box opens. 5. From the Session Timeout drop-down menu, select the maximum length of time that the user can be idle while logged in to the Storage Center before the connection is terminated. 6. Click OK. The Edit Settings dialog box closes. 7. Click OK.
Configure Preferences for a Directory Service User By default, each Storage Center user inherits the default user preferences. If necessary, the preferences can be individually customized for a user. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4.
6. Click OK. Restore a Deleted Directory Service User If you are restoring a deleted user with the Volume Manager or Reporter privilege, the user must be added to one or more local user groups. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3. Click the Users and User Groups tab. 4.
• Volume Manager: When selected, directory users in the group have read and write access to the folders associated with the assigned user groups.
• Reporter: When selected, directory users in the group have read-only access to the folders associated with the assigned user groups.
8. (Volume Manager and Reporter only) Add one or more local user groups to the directory user group. a) In the Local User Groups area, click Change. The Select Local User Groups dialog box opens.
e) Click OK. The Select Local User Groups dialog box closes. 6. Click OK. The Edit Settings dialog box closes. 7. Click OK. Delete a Directory User Group Delete a directory user group if you no longer want to allow access to the directory users that belong to the group. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Summary tab, click Edit Settings. The Edit Storage Center Settings dialog box opens. 3.
• To set the number of days before password expiration at which the expiration warning message is issued, type a value in the Expiration Warning Time field. To disable the expiration warning message, type 0.
• To specify the password expiration warning message that a user receives, type a warning message in the Expiration Warning Message field. The expiration warning message is blank if this field is left empty.
6. Click OK.
Managing Front-End I/O Ports Front-end ports connect a Storage Center directly to a server using SAS connections, or to the Ethernet networks and Fibre Channel (FC) fabrics that contain servers that use storage. iSCSI, FC, or SAS I/O ports can be designated for use as front-end ports.
connectivity if one of the controllers fails. For optimal performance, the primary ports should be evenly distributed across both controllers. When possible, front-end connections should be made to separate controller I/O cards to improve redundancy. About Fault Domains and Ports Fault domains group front-end ports that are connected to the same transport media, such as a Fibre Channel fabric or Ethernet network.
CAUTION: For iSCSI only, servers initiate I/O to iSCSI ports through the control port of the fault domain. If an iSCSI port moves to a different fault domain, its control port changes. This change disrupts any service initiated through the previous control port. If an iSCSI port moves to a different fault domain, you must reconfigure the server-side iSCSI initiators before service can be resumed.
2. Click the Summary tab. 3. In the banner message, click Rebalance Ports. The Rebalance Ports dialog box appears to display progress, and closes when the rebalance operation is complete. Managing Front-End I/O Port Hardware Front-end FC and iSCSI ports can be renamed and monitored with threshold definitions. iSCSI ports can be assigned a network configuration and tested for network connectivity. For a Storage Center in virtual port mode, the Hardware tab displays a virtual port for each physical port.
3. In the Hardware tab navigation pane, expand Controllers → controller name → IO Ports → transport type → physical IO port, then select the virtual IO port. 4. In the right pane, click Edit Settings. The Edit Settings dialog box appears. 5. From the Preferred Parent drop-down menu, select the WWN of the physical IO port that should host the virtual port when possible. 6. Click OK.
Set Threshold Alert Definitions for a Front-End IO Port Configure one or more Threshold Alert Definitions for an IO port if you want to be notified when an IO port reaches specific bandwidth or latency thresholds. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select Fault Domains, then click the Front End Ports subtab. 4. Double-click the IO port.
Unconfigure Front-End I/O Ports On SCv2000 series and SCv3000 series storage systems, unconfigure I/O ports that are not connected to the storage network and are not intended for use. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select Fault Domains, then click the Front End Ports subtab. 4. In the right pane, select a down I/O port and click Unconfigure Port.
Remove Port from Fault Domain Remove a port from a fault domain if the port is no longer needed or if you want to move it to a different fault domain. An error occurs if the port being removed is the last port in the fault domain. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the Storage tab navigation pane, expand Fault Domains > iSCSI, then select a fault domain. 3. Click Edit Settings. 4. Click Remove Ports from Fault Domain. 5.
The Convert to Virtual Port Mode dialog box opens. 5. In the Domain field of each fault domain you want to convert, type a new IP address to use as the primary port for each iSCSI fault domain. 6. Click OK. Grouping Fibre Channel I/O Ports Using Fault Domains Front-end ports are categorized into fault domains that identify allowed port movement when a controller reboots or a port fails.
Delete a Fibre Channel Fault Domain Delete a Fibre Channel fault domain if all ports have been removed and it is no longer needed. Prerequisites
• The Storage Center Fibre Channel front-end I/O ports must be configured for legacy mode. In virtual port mode, fault domains cannot be deleted.
• The fault domain must contain no FC ports.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
Types of iSCSI Fault Domains When a Storage Center meets the multi-VLAN tagging requirements, two types of iSCSI fault domains can be created.
• Physical – The first fault domain configured for a given set of iSCSI ports.
  • Physical fault domains do not require a VLAN ID, but can be configured to use a VLAN ID.
  • Physical fault domains support iSCSI replication to and from remote Storage Centers.
9. In the Ports table, select the iSCSI ports to add to the fault domain. All iSCSI ports in the fault domain should be connected to the same Ethernet network. If creating a physical fault domain, physical ports appear in the list only if they are not assigned to any fault domain yet. 10. Click OK. Next steps (Optional) Configure VLANs for the iSCSI ports in the fault domain by creating a virtual fault domain for each VLAN. Base the virtual fault domains on the physical fault domain.
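On the server side, each iSCSI VLAN generally needs a matching tagged interface. The following sketch assembles and prints (rather than runs, since they require root) the Linux iproute2 commands for a hypothetical parent interface, VLAN ID, and address; none of these values come from this guide:

```shell
# Sketch: assemble commands for a tagged VLAN interface carrying iSCSI traffic.
# PARENT_IF, VLAN_ID, and the address below are hypothetical placeholders.
PARENT_IF=eth0
VLAN_ID=100
VLAN_IF="${PARENT_IF}.${VLAN_ID}"
echo "ip link add link ${PARENT_IF} name ${VLAN_IF} type vlan id ${VLAN_ID}"
echo "ip addr add 192.0.2.20/24 dev ${VLAN_IF}"
echo "ip link set ${VLAN_IF} up"
```

The VLAN ID on the server-side interface must match the VLAN ID assigned to the corresponding virtual fault domain.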
Modifying iSCSI Fault Domains Modify an iSCSI fault domain to change its name, modify network settings for iSCSI ports in the domain, add or remove iSCSI ports, or delete the fault domain. NOTE: For user interface reference information, click Help. Rename an iSCSI Fault Domain The fault domain name allows administrators to identify the fault domain. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
7. (Optional) To assign a priority level to the VLAN, type a value from 0-7 in the Class of Service Priority field. 0 is best effort, 1 is the lowest priority, and 7 is the highest priority. 8. Click OK. Related concepts iSCSI VLAN Tagging Support Modify the MTU for an iSCSI Fault Domain The Maximum Transmission Unit (MTU) specifies the largest packet size supported by the iSCSI network. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
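Before raising the MTU of a fault domain (for example, to 9000 for jumbo frames), it can be worth confirming that every switch in the path actually passes jumbo frames. This is a sketch only, using the standard Linux ping utility with a hypothetical iSCSI port address; the command is printed rather than executed:

```shell
# Sketch: verify end-to-end jumbo-frame support before raising the MTU.
# ISCSI_PORT_IP is a hypothetical placeholder address.
ISCSI_PORT_IP=192.0.2.50
MTU=9000
PAYLOAD=$((MTU - 28))   # subtract 20-byte IP header + 8-byte ICMP header
# -M do sets the don't-fragment bit so oversized frames fail instead of fragmenting.
echo "ping -c 4 -M do -s ${PAYLOAD} ${ISCSI_PORT_IP}"
```

If the ping fails at this payload size but succeeds at smaller sizes, a device in the path does not support the larger MTU.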
Modify Digest Settings for an iSCSI Fault Domain The iSCSI digest settings determine whether iSCSI error detection processing is performed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Fault Domains, then expand iSCSI and click the fault domain. 4. In the right pane, click Edit Settings. The Edit Fault Domain Settings dialog box opens. 5.
8. Click OK to close the Add Ports to Fault Domain dialog box. 9. Click OK. Related concepts iSCSI VLAN Tagging Support Related tasks Set or Modify the IP Address and Gateway for a Single iSCSI Port Test Network Connectivity for an iSCSI Port in a Fault Domain Test connectivity for an iSCSI physical or virtual I/O port by pinging a port or host on the network. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Fault Domains→ iSCSI, then select the fault domain. 4. In the right pane, click Delete. The Delete Fault Domain dialog box appears. 5. Click OK.
• The Storage Center iSCSI ports must be configured for virtual port mode.
• For each Storage Center iSCSI control port and virtual port, a unique public IP address and TCP port pair must be reserved on the router that performs NAT.
• The router that performs NAT between the Storage Center and the public network must be configured to forward connections destined for each public IP address and port pair to the appropriate Storage Center private iSCSI IP address and appropriate port (by default, TCP 3260).
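As an illustration of the forwarding requirement, a Linux router performing the NAT might use a DNAT rule of the following shape. This is a sketch only: all addresses are hypothetical, and the equivalent rule would be configured on whatever device actually performs NAT. The rule is assembled and printed rather than applied:

```shell
# Sketch: forward one reserved public IP:port pair to a Storage Center
# private iSCSI port address (all addresses are hypothetical placeholders).
PUBLIC_IP=203.0.113.10
PUBLIC_PORT=3260
PRIVATE_IP=10.10.10.50
RULE="iptables -t nat -A PREROUTING -d ${PUBLIC_IP} -p tcp --dport ${PUBLIC_PORT} -j DNAT --to-destination ${PRIVATE_IP}:3260"
echo "$RULE"
```

A distinct public IP address and port pair, and therefore a distinct rule, is needed for each control port and each virtual port.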
• To add port forwarding information for an iSCSI port, click Add.
• To modify port forwarding information for an iSCSI port, select the port, then click Edit.
• To delete port forwarding information for an iSCSI port, select the port, then click Remove.
6. In the Public Networks/Initiators area, add or modify iSCSI initiator IP addresses or subnets that require port forwarding to reach the Storage Center because they are separated from the Storage Center by a router performing NAT.
Modify CHAP Settings for a Server in an iSCSI Fault Domain Modify CHAP settings for a server to change one or more shared secrets for the server. About this task NOTE: Changing CHAP settings will cause existing iSCSI connections between SAN systems using the selected fault domain to be lost. You will need to use the Configure iSCSI Connection wizard to reestablish the lost connections after changing CHAP settings. Steps 1.
Grouping SAS I/O Ports Using Fault Domains Front-end ports are categorized into fault domains that identify allowed port movement when a controller reboots or a port fails. Ports that belong to the same fault domain can fail over to each other because they have connectivity to the same resources. NOTE: Fault domains cannot be added or modified on SCv2000 or SCv3000 series storage systems. Storage Center creates and manages fault domains on these systems.
• If Self-Encrypting Drives is not licensed, disks will be treated as unsecured drives, but may be upgraded to Secure Data status if a license is purchased in the future. Storage Center Disk Management For SC7020, SC5020, and SCv3000 storage systems, Storage Center manages disks automatically. When configuring a storage system, Storage Center organizes the disks into folders based on the function of each disk. FIPS-certified Self-Encrypting Drives (SEDs) are managed in a separate folder from other disks.
8. Click OK. Related tasks Create Secure Data Disk Folder Delete Disk Folder Delete a disk folder if all disks have been released from the folder and the folder is not needed. Prerequisites The disk folder does not contain disks. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Disks, then select a disk folder. The Disk Folder view is displayed. 4. Click Delete.
6. In the Unassigned Disks pane, select the disks to be assigned. 7. To schedule a RAID rebalance, select one of the following options:
• To start a RAID rebalance after creating the disk folder, select Perform RAID rebalance immediately.
• To schedule a RAID rebalance for a later time, select Schedule RAID rebalance, then select a date and time.
8. To skip the RAID rebalance, select I will start RAID rebalance later. NOTE: To use all available space, perform a RAID rebalance. 9. Click OK.
Cancel Releasing a Disk After releasing a disk, the data remains on the disk until the RAID rebalance is complete. Cancel releasing a disk if the RAID rebalance has not completed and the data is still on the disk. Canceling the release reassigns the disk to the disk folder to which it was previously assigned. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
Storage Center restores the disk and adds it to a disk folder. Replace a Failed Disk The Replace Failed Disk wizard identifies a disk and provides steps to replace the disk. Prerequisites The disk must be down. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand the enclosure and select Disks. The Disks view is displayed. 4.
When replicating from a Secure Data volume to a non-Secure Data folder, that volume is no longer secure after it leaves the Secure Data folder. When replicating a non-Secure Data volume to a Secure Data folder, that volume is not secure until it replicates to the Secure Data folder and Data Progression runs. Configure Key Server Before managing SEDs in a Secure Data folder, configure communication between Storage Center and the key management server.
7. Click OK. Rekey a Disk Folder Perform an on-demand rekey of a Secure Disk folder. Prerequisites The disk or disk folder must be enabled as Secure Disk. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Click the Disks node. The Disks view is displayed. 4. Right-click the name of a Secure Disk folder and select Rekey Disk Folder. The Rekey Disk Folder dialog box opens. 5. Click OK.
Create Secure Data Disk Folder A Secure Data folder can contain only SEDs that are FIPS certified. If the Storage Center is licensed for Self-Encrypting Drives and unmanaged SEDs are found, the Create Disk Folder dialog box shows the Secure Data folder option. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. Click the Disks node. The Disks view is displayed. 4. Click Create Disk Folder.
Managing RAID Modifying tier redundancy, or adding or removing disks can cause data to be unevenly distributed across disks. A RAID rebalance redistributes data over disks in a disk folder. Rebalance RAID Rebalancing RAID redistributes data over the disks according to the Storage Type. Rebalance the RAID after releasing a disk from a disk folder, when a disk fails, or after adding a disk. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view.
Check the Status of a RAID Rebalance The RAID Rebalance displays the status of an in-progress RAID rebalance and indicates whether a rebalance is needed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select Disks. 4. Click Rebalance RAID. The RAID Rebalance dialog box shows the status of a RAID rebalance. 5. Click OK.
8. Click OK. Modify Tier Redundancy Modify tier redundancy to change the redundancy level for each tier in a Storage Type. After modifying tier redundancy, a RAID rebalance is required to move data to the new RAID levels. About this task NOTE: Do not modify tier redundancy if there is insufficient space in the tier for a RAID rebalance. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3.
The Enclosure view is displayed. 4. Click Add Enclosure. The Add New Enclosure wizard opens. 5. Confirm the details of your current install, and click Next to validate the cabling. If the cabling is wrong, an error message is displayed. You can proceed to the next step once the error is corrected and validated. 6. If prompted, select the enclosure type and click Next. 7. Follow the instructions to insert disks into the new enclosure and turn on the enclosure. Click Next when finished. 8.
• Available only if data has been released from all disks in the selected enclosure and the situation allows the replacement of an enclosure Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, click Enclosure. The Enclosure view is displayed. 4. Select the enclosure you want to replace and click Replace Enclosure. The Replace Enclosure wizard opens. 5.
6. Click OK. Delete an Enclosure Delete an enclosure if it will be physically removed from the Storage Center. Prerequisites
• All data must be moved off the enclosure by releasing the disks and rebalancing RAID.
• The enclosure must be down.
Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosure. The Enclosure view is displayed. 4.
The Enclosure view is displayed. 4. Under the selected enclosure, click Cooling Fan Sensors. The Cooling Fan Sensors view is displayed. 5. In the right pane, select the cooling fan, then click Request Swap Clear. Clear the Swap Status for an Enclosure I/O Module Clear the swap status for an enclosure I/O module to acknowledge that it has been replaced. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
Clear the Under Voltage Status for a Power Supply Clear the under voltage status for an enclosure power supply to acknowledge that you are aware of it. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand Enclosures, then select and expand an enclosure. The Enclosure view is displayed. 4. Under the selected enclosure, click Power Supplies.
Either the Enclosure or Controller view is displayed. 4. Under the selected enclosure or controller, click Fan Sensors. The Fan Sensors view is displayed. 5. In the right pane, select the failed sensor and click Replace Failed Cooling Fan Sensor. The Replace Failed Cooling Fan Sensor wizard opens. 6. Refer to the graphic in the wizard to locate the failed cooling fan sensor. Click Next. 7. Follow the instructions to remove the power supply from the enclosure. Click Next. 8.
• The new controller must have a Hardware Serial Number (HSN) and Eth 1 IP address assigned to it before starting this procedure. To see the new controller information, run the following command from the serial console: controller show Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, click Controllers. The Controllers view is displayed. 4. Click Add Controller.
• If the indicator light is on, click Indicator Off to disable the indicator light. Replace a Failed Cooling Fan Sensor This step-by-step wizard guides you through replacing a failed cooling fan sensor in the Storage Center without a controller outage. Prerequisites This wizard is only available for the SCv2000 series and SCv3000 series Storage Centers. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3.
• Whether to delete the configuration for a removed I/O card
The wizard guides you through the following actions:
• Associating I/O cards with existing port configurations
• Indicating which I/O cards are new hardware
• Deleting configurations for I/O cards that have been removed
Before using the wizard, you should be aware of the following:
• Changes should be performed by a certified installer or with the assistance of technical support.
8. Click Finish. Add a UPS to a Storage Center An uninterruptible power supply (UPS) provides power redundancy to a Storage Center. When a UPS is added to a Storage Center, the status of the UPS is displayed in Storage Manager. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. In the right pane, select Actions > UPS > Create UPS. The Create UPS dialog box opens. 3. In the IPv4 Address field, type the IP address of the UPS. 4.
The Update Storage Center dialog opens. This dialog displays details of the installation process and refreshes those details every 30 seconds. Update progress is also displayed as a blue message bar in the Summary tab and in the update status column of the Storage Center details. If the update fails, click Retry to restart the interrupted process. 7. Click OK. If the update is service affecting, the connection to the Storage Center will be lost.
3. From the first drop-down menu, select Shut Down. 4. Click OK. 5. After the controllers have shut down, shut down the disk enclosures by physically turning off the power supplies. Next steps After the outage is complete, see the Owner’s Manual for your controller for instructions on how to start the controllers in the proper order.
5. From the drop-down menu, select Restart. 6. Click OK. Reset a Controller to Factory Default Reset a controller to apply the factory default settings, erase all data stored on the controller, and erase all data on the drives. Prerequisites The Storage Center must be an SCv2000 or SCv3000 series storage system. About this task CAUTION: Resetting the controller to factory defaults erases all information on the controller and all data on the drives. Steps 1.
Close a FRU Ticket Close a FRU ticket if the FRU ticket is not needed. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Alerts tab. 3. Select a FRU ticket. 4. Click Close FRU Ticket. The Close FRU Ticket dialog opens. 5. Click OK.
10 Viewing Storage Center Information Viewing Summary Information Storage Center summary plugins provide summary information for individual Storage Centers. The summary plugins can also be used to compare multiple Storage Centers. Storage Center Summary Plugins The following plugins can be configured to display on the Summary tab and the Comparison tab:
• System Status – Displays a summary of disk space and alerts for a Storage Center.
View Summary Plugins for a Storage Center Use the Summary tab to view the summary plugins that are currently enabled. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Summary tab.
2. In the Storage pane, select a Storage Center folder or the Storage Centers node. 3. Click the Summary tab. Figure 26. Storage Centers Summary Tab Use a Summary Plugin to Compare Storage Centers Storage Center summary information can be compared using the summary plugins. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center folder or the Storage Centers node. 3. Click the Comparison tab. 4.
Using the Current Alerts Plugin Using the Replication Validation Plugin Using the Top 10 Fastest Growing Volumes Plugin Using the Current Threshold Alerts Plugin Using the Status Plugin The Status plugin displays Storage Center disk space information and the status of alerts.
• Use the top part of the Status plugin to view disk space usage on the Storage Center, the current alert threshold, and data savings information.
Storage Alert Threshold—Remaining disk space percentage that causes a storage alert to occur. System Data Efficiency Ratio—Ratio that indicates the efficiency of compression, deduplication, RAID, and Thin Provisioning. Alert Information The top portion of the Status plugin displays information about the alerts for a Storage Center. The alert icons indicate the highest active alert level.
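As a rough sketch of how the Storage Alert Threshold works, the check below computes the remaining free-space percentage and compares it to the threshold. All numbers are assumed values for illustration, not taken from any particular system.

```shell
# Illustrative sketch (assumed values): a storage alert occurs when the
# remaining free-space percentage falls to or below the configured threshold.
total_gb=1000          # total disk space (assumed)
used_gb=920            # used disk space (assumed)
threshold_pct=10       # Storage Alert Threshold (assumed)

free_pct=$(( (total_gb - used_gb) * 100 / total_gb ))
echo "free: ${free_pct}%"
if [ "$free_pct" -le "$threshold_pct" ]; then
  echo "storage alert"
fi
```

With these assumed values the remaining space is 8 percent, which is below the 10 percent threshold, so an alert would be raised.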
Using the Storage Summary Plugin The Storage Summary plugin displays current storage usage in numerical and bar chart formats and historical storage usage in line chart format. Use the bar chart to compare the amount of used disk space to the amount of available disk space on a Storage Center. Use the line chart to compare the historical used disk space to the historical available disk space and alert threshold.
Return to the Normal View of the Chart or Graph If you have changed the zoom level of the chart or graph, you can return to the normal view. Steps 1. Click and hold the right or left mouse button on the chart or graph. 2. Drag the mouse to the left to return to the normal zoom level. Save a Chart or Graph as a PNG Image Save the chart or graph as an image if you want to use it elsewhere, such as in a document or an email. Steps 1. Right-click the chart or graph and select Save As.
Save the Graph as a PNG Image Save the graph as an image if you want to use it elsewhere, such as in a document or an email. Steps 1. Right-click the graph and select Save As. The Save dialog box appears. 2. Select a location to save the image and enter a name for the image in the File name field. 3. Click Save to save the graph. Print the Graph Print the graph if you want a paper copy. Steps 1. Right-click the graph and select Print. The Page Setup dialog box appears. 2.
Using the Replication Validation Plugin The Replication Validation plugin displays a table that lists replications and corresponding statuses. Use this plugin to monitor the status of replications from the current Storage Center to a destination Storage Center.
Related concepts Configuring Threshold Definitions Update the List of Threshold Alerts Refresh the list of threshold alerts to see an updated list of alerts. About this task Click Refresh to update the list of alerts. Viewing Detailed Storage Usage Information Detailed storage usage information is available for each Storage Type that is configured for a Storage Center. View Storage Usage by Tier and RAID Type Storage usage by tier and RAID type is displayed for each Storage Type. Steps 1.
View Storage Usage by Volumes Storage usage by volume is displayed for each Storage Type. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, expand Storage Type, then select the individual storage type you want to examine. 4. Click the Volumes subtab to view storage usage by volume. Figure 31.
Figure 32. Storage Type Historical Usage Tab 5. (Optional) Change the time span of the graph by clicking Last Week, Last Month, Last Year, or Custom. View a Data Progression Pressure Report For each storage type, the data progression pressure report displays how space is allocated, consumed, and scheduled to move across different RAID types and storage tiers. Use the data progression pressure report to make decisions about the types of disks to add to a Storage Center. Steps 1.
Figure 33. Storage Type Pressure Report Tab The data progression pressure report displays the following information for each tier:
• RAID Level – RAID level in the storage tier.
• Disk Track – Type of tracking, either Fast or Standard.
• Chart – Bar chart displaying allocated space and space used.
• Disk Allocated – Space reserved for volumes.
• Disk Used – The amount of space in use by volumes.
Viewing Historical IO Performance The IO Usage tab is used to view and monitor historical IO performance statistics for a Storage Center and associated storage objects. The Comparison View on the IO Usage tab is used to display and compare historical IO usage data from multiple storage objects. Using the IO Usage Tab Use the IO Usage tab to view historical IO usage data for a Storage Center or associated storage object, and to compare IO usage data from multiple storage objects.
2. Click the IO Usage tab. 3. Click one of the following buttons to change the period of IO usage data to display:
• Last Day: Displays the past 24 hours of IO usage data.
• Last 3 Days: Displays the past 72 hours of IO usage data.
• Last 5 Days: Displays the past 120 hours of IO usage data.
• Last Week: Displays the past 168 hours of IO usage data.
• Last Month: Displays IO usage data for the past month.
Viewing Current IO Performance The Charting tab is used to view and monitor current IO performance statistics for a Storage Center and associated storage objects. The Comparison View on the Charting tab is used to display and compare IO usage data from multiple storage objects. Using the Charting Tab Use the Charting tab to view current IO usage data for a Storage Center or associated storage object and compare IO usage data for multiple storage objects.
The Most Active Report tab is displayed only if the selected storage object is one of the following container objects:
• Volumes or a volume folder
• Servers or a server folder
• Remote Storage Centers
• Disks or disk speed folder
5. To refresh the IO usage data, click Refresh on the Charting navigation pane. 6. To stop collecting IO usage data from the Storage Center, click the Stop button. To resume collecting IO usage data, click the Start button.
Configuring Chart Options User Settings affect the charts on the Summary, IO Usage, and Charting tabs, and the Chart Settings affect the charts on the IO Usage and Charting tabs. Related concepts Configuring User Settings for Charts Configuring Chart Settings Configuring User Settings for Charts Modify the User Settings for your user account to display alerts on the charts and change the chart colors. NOTE: For user interface reference information, click Help.
Display Data Point Sliders on Charts Chart sliders display specific data for a selected data point. When chart sliders are enabled, a table displays the specific data values for the selected data point. Steps 1. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box appears. 2. Click the General tab. 3. Under Charting Options, select the Show sliders on charts check box. 4. Click OK.
Configure the Storage Center Data Gathering Schedule You can configure the intervals at which Storage Manager gathers IO Usage, Replication Usage, and Storage Usage data from managed Storage Centers. Steps 1. In the top pane of the Storage Manager Client, click Edit Data Collector Settings. The Edit Data Collector Settings dialog box appears. 2. Click the Schedules tab. 3. Click Edit. The Schedules dialog box opens. 4.
Figure 34. Save Storage Usage Dialog Box 4. Specify the storage usage data to export by selecting or clearing the check boxes in the Storage Center Storage Usage, Volume Storage Usage, and Server Storage Usage areas of the dialog box. By default, all of the storage usage data is selected to be exported. 5.
Figure 35. Save IO Usage Data Dialog Box 4. Specify the type of I/O usage data to export by selecting one of the following radio buttons:
• Save ’Most Active Report’ IO Usage Information
• Save Chart IO Usage Information
5. If you selected the Save ’Most Active Report’ IO Usage Information radio button, select the check boxes of the I/O usage data to export:
• Volume Most Active – Exports I/O usage data for the volumes.
• Server Most Active – Exports I/O usage data for the servers.
Monitoring Storage Center Hardware Use the Hardware tab of the Storage view to monitor Storage Center hardware. Figure 36. Hardware Tab Related concepts Monitoring a Storage Center Controller Monitoring a Storage Center Disk Enclosure Monitoring SSD Endurance Viewing UPS Status Managing Disk Enclosures Shutting Down and Restarting a Storage Center Monitoring a Storage Center Controller The Hardware tab displays status information for the controller(s) in a Storage Center.
View Summary Information for a Controller The controller node on the Hardware tab displays summary information for the controller, including name, version, status, and network settings. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select the controller. The right pane displays controller summary information.
View Fan Status for a Controller The Fan Sensors node on the Hardware tab displays summary and status information for fans in the controller. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand the Controllers node, expand the node for a specific controller, then click Fan Sensors. The right pane displays summary and status information for the fans in the controller.
View Summary Information for All Enclosures in a Storage Center The Enclosures node on the Hardware tab displays summary information for all disk enclosures in a Storage Center. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select Enclosures. 4. Use the tabs in the right pane to view summary information for the enclosures and enclosure components.
Locate a Disk in the Enclosure Diagram The Hardware tab shows the location of a disk selected from the Disks tab in the right pane. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand the Enclosures node, then the node for a specific enclosure. 4. Click the Disks node. The right pane displays the disks in the enclosure in the Disks tab. 5.
View Power Supply Status for an Enclosure The Power Supplies node on the Hardware tab displays power supply status for the enclosure. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, expand the Enclosures node, then the node for a specific enclosure. 4. Click Power Supplies.
View the Current Endurance Level for All SSDs in a Disk Folder If a disk folder contains SSDs, the summary table displays the percentage of wear life remaining for each SSD and a corresponding endurance chart. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Storage tab. 3. In the Storage tab navigation pane, select the disk folder. 4. On the Disks subtab, locate the Endurance and Endurance Chart columns in the table.
View Summary Information for All UPS Units that Serve the Storage Center The UPS node on the Hardware tab displays summary information for the UPS units that provide backup power for the Storage Center. Prerequisites A UPS unit must have been configured for the Storage Center. Steps 1. If the Storage Manager Client is connected to a Data Collector, select a Storage Center from the Storage view. 2. Click the Hardware tab. 3. In the Hardware tab navigation pane, select UPS.
11 SMI-S SMI-S Provider The SMI-S Provider is included with the Data Collector. You can configure SMI-S during the initial Data Collector installation, or after installation by modifying the Data Collector properties. When SMI-S is enabled and configured, the Data Collector automatically installs and manages the SMI-S Provider; no additional installation is required. NOTE: The Storage Manager Data Collector must be installed in a Microsoft Windows environment. SMI-S is not supported on a Virtual Appliance.
Setting Up SMI-S To set up SMI-S, enable SMI-S for the Data Collector, then add the required SMI-S user. HTTPS is the default protocol for the SMI-S provider. Steps 1. SMI-S Prerequisites 2. Enable SMI-S for the Data Collector SMI-S Prerequisites Complete the following prerequisite tasks before configuring SMI-S on the Storage Manager Data Collector. Steps 1. Make sure that a user for the SMI-S Provider is created on the Data Collector. 2.
6. Select the Enabled checkbox. 7. Click OK. The Data Collector Restart dialog box opens. 8. Click Yes. The Data Collector service stops and restarts. Using the Dell SMI-S Provider with Microsoft SCVMM Complete the following tasks to discover the Dell SMI-S provider using System Center Virtual Machine Manager (SCVMM) 2012 or SCVMM 2016: Steps 1. SCVMM Prerequisites 2. Limitations for SCVMM 2012 3. Modify the SCVMM 2012 Management Server Registry to Allow HTTPS 4.
Volume Names SCVMM 2012 does not allow spaces or special characters such as underscores or dashes in volume names. However, volumes that have been created prior to discovery can include spaces in their names. When creating LUNs using SCVMM 2012, do not include spaces in volume names. Storage Center Controller Failover In the event of a Storage Center controller failover, some operations may appear to fail in SCVMM due to timeouts.
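The SCVMM 2012 volume-name limitation described above can be pre-checked before creating LUNs. The helper below is a hypothetical convenience for scripting, not part of SCVMM or Storage Manager; it rejects names containing spaces, underscores, or dashes.

```shell
# Hypothetical helper: check a proposed volume name for characters that
# SCVMM 2012 rejects (spaces, underscores, dashes). Returns non-zero if
# the name contains any of them.
is_valid_scvmm_volume_name() {
  printf '%s' "$1" | grep -q '[ _-]' && return 1
  return 0
}

is_valid_scvmm_volume_name "Vol01"  && echo "Vol01: ok"
is_valid_scvmm_volume_name "Vol 01" || echo "Vol 01: invalid"
```

A name such as `Vol01` passes, while `Vol 01`, `Vol_01`, and `Vol-01` are flagged as invalid.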
Prepare the SCVMM 2012 Server for Indications If you are using the Dell SMI-S Provider with SCVMM 2012 running on Windows Server 2012 or later, configure the SCVMM server to accept SMI-S indications. Steps 1. Make sure the Windows Standards-Based Storage Management feature is installed. 2. In Windows PowerShell, run the following command to open the required ports: netsh advfirewall firewall add rule name="CIM-XML" dir=in protocol=TCP localport=5990 action=allow 3.
e) When you have finished selecting storage pools, click Next. 8. Confirm all settings on the Summary Page and click Finish. 9. Verify the newly discovered storage information. NOTE: It can take several minutes for SCVMM to discover storage pools. Use the Jobs view to monitor the discovery process. a) On the Home tab of the Fabric workspace, click Fabric Resources.
12 FluidFS Administration How FS8600 Scale-Out NAS Works Dell FS8600 scale-out NAS leverages the Dell Fluid File System (FluidFS) and Storage Centers to present file storage to Microsoft Windows, UNIX, and Linux clients. The FluidFS cluster supports Windows, UNIX, and Linux clients running on dedicated servers or on virtual systems that use Hyper-V or VMware virtualization. The Storage Centers present a certain amount of capacity (NAS pool) to the FluidFS cluster.
• Backup power supplies – Each NAS controller contains a backup power supply that provides backup battery power in the event of a power failure.
• FluidFS cluster – One to six FS8600 scale-out NAS appliances configured as a FluidFS cluster.
• Storage Center – Up to eight Storage Centers that provide the NAS storage capacity.
• Storage Manager – Multisystem management software and user interface required for managing the FluidFS cluster and Storage Center(s).
• Client authentication – Controls access to files using local and remote client authentication, including LDAP, Active Directory, and NIS.
• Quota rules – Control client space usage.
• File security style – Choice of file security mode for a NAS volume (UNIX, Windows, or Mixed).
Storage Center features:
• Data progression – Automatic migration of inactive data to less-expensive drives.
Internal Backup Power Supply Each NAS controller is equipped with an internal backup power supply (BPS) that protects data during a power failure. The BPS provides continuous power to the NAS controllers for a minimum of 5 minutes in case of a power failure and has sufficient battery power to allow the NAS controllers to safely shut down. In addition, the BPS provides enough time for the NAS controllers to write all data from the cache to nonvolatile internal storage.
Figure 39. FS8600 Architecture Storage Center The Storage Center provides the FS8600 scale-out NAS storage capacity; the FS8600 cannot be used as a standalone NAS appliance. Storage Centers eliminate the need to have separate storage capacity for block and file storage. In addition, Storage Center features, such as Dynamic Capacity and Data Progression, are automatically applied to NAS volumes. SAN Network The FS8600 shares a back-end infrastructure with the Storage Center.
Data Caching and Redundancy New and modified files are first written to the cache, and then cache data is immediately mirrored to the peer NAS controller (mirroring mode). Data caching provides high performance, while cache mirroring between peer NAS controllers ensures data redundancy. Cache data is ultimately transferred to permanent storage asynchronously through optimized data-placement schemes.
Scenario: Dual-NAS controller failure in a multiple NAS appliance cluster, separate NAS appliances. System status: Available, degraded. Data integrity: Unaffected. Comments:
• Peer NAS controller enters journaling mode
• Failed NAS controller can be replaced while keeping the file system online
Ports Used by the FluidFS Cluster You might need to adjust your firewall settings to allow traffic on the network ports used by the FluidFS cluster.
2. If the Storage Manager Client welcome page opens, click Log in to a Storage Center or Data Collector. 3. In the User Name field, type the DSM Data Collector user name. 4. In the Password field, type the DSM Data Collector password. 5. In the Host/IP field, type the host name or IP address of the server that hosts the Data Collector. If the Data Collector and Client are installed on the same system, you can type localhost instead. 6.
Connect to the FluidFS Cluster CLI Using SSH Key Authentication You can grant trust to a specific machine and user by performing an SSH key exchange. Steps 1. Generate an RSA SSH key. NOTE: The following example uses the ssh-keygen utility. The steps to generate an RSA SSH key can vary by operating system. See the documentation for the respective operating system for more information. a) Log in to a UNIX/Linux workstation for which you want to use SSH key authentication.
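The key-generation step can be sketched as follows. The file path and key parameters are assumptions for illustration (as noted above, the exact flags can vary by operating system); the generated `.pub` file is what you register with the FluidFS cluster.

```shell
# Sketch only: generate an RSA key pair to use for FluidFS SSH key
# authentication. A temporary directory is used here for illustration;
# in practice you would keep the key under ~/.ssh on the workstation.
KEY_DIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -f "$KEY_DIR/fluidfs_id_rsa" -N "" -q

# The .pub file is the public key you register with the FluidFS cluster.
cat "$KEY_DIR/fluidfs_id_rsa.pub"
```

After the public key is registered on the cluster, the workstation user can connect to the FluidFS cluster CLI without entering a password.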
Add a Secured Management Subnet The subnet on which you enable secured management must exist prior to enabling the secured management feature. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity, and then click the Management Network tab. 4. In the Management Network panel, click Edit Settings. The Modify Administrative Network dialog box opens. 5.
Change the VLAN Tag for the Secured Management Subnet When a VLAN spans multiple switches, the VLAN tag is used to specify which ports and interfaces to send broadcast packets to. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity, and then click the Management Network tab. 4. In the Management Network panel, click Edit Settings. The Modify Administrative Network dialog box opens. 5.
About this task After enabling secured management, if you are connected to Storage Manager through the secured management subnet, your management session is temporarily interrupted while the change takes effect. During this time, the following message is displayed in Storage Manager: Communication with the cluster was interrupted in process of issuing a command that performs modification to the cluster. After the change takes effect, your management session will resume automatically.
4. In the Name field, type the new name for the FluidFS cluster. 5. Click OK. Accept the End-User License Agreement You must accept the end-user license agreement (EULA) before using the system. The EULA is initially accepted during deployment, and the EULA approver name and title can be changed at any time. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the License tab. 5.
Managing the FTP Server The FluidFS cluster includes an FTP server that provides a storage location for the following types of system files:
• Diagnostic results files
• License file
• SNMP MIBs and traps
• Service pack files
• Other files for technical support use
Access the FTP Server About this task The FTP server can be accessed at: ftp://fluidfs_administrator_user_name@client_vip_or_name:44421/ Example: ftp://Administrator@172.22.69.
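As a sketch, the FTP access URL can be assembled from your own values and used with a standard FTP client such as curl. The administrator name and client VIP below are assumptions; substitute your own.

```shell
# Assumed values - substitute your FluidFS administrator name and the
# client VIP (or DNS name) of your cluster.
ADMIN_USER="Administrator"
CLIENT_VIP="nas-cluster.example.com"

# Port 44421 is the FluidFS FTP server port noted in this section.
FTP_URL="ftp://${ADMIN_USER}@${CLIENT_VIP}:44421/"
echo "$FTP_URL"

# For example, list the top-level directories (prompts for the password):
#   curl --user "$ADMIN_USER" "ftp://${CLIENT_VIP}:44421/"
```

Any FTP client that supports explicit ports can use the same URL form.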
3. In the File System view, select Cluster Maintenance. 4. Click the SNMP tab, and click the Download MIB File link. 5. Use the browser dialog box to begin the download process. 6. Click . Optionally, you can also download the SNMP MIBs and traps from: ftp://fluidfs_administrator_user_name@client_vip_or_name:44421/mibs/ Enable or Disable SNMP Traps Enable or disable SNMP traps by category (NAS Volumes, Access Control, Performance & Connectivity, Hardware, System, or Auditing).
Change the SNMP Trap System Location or Contact Change the system location or contact person for FluidFS cluster-generated SNMP traps. By default, the SNMP trap system location and contact person are unknown. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the SNMP tab. 5. In the SNMP Trap panel, click Modify SNMP Trap. The Modify SNMP Trap Settings dialog box opens. 6.
8. From the Scanning Mode drop-down list, select Normal or Intensive. 9. Click OK. Managing the Operation Mode The FluidFS cluster has three operation modes:
• Normal – System is serving clients using SMB and NFS protocols and operating in mirroring mode.
• Write-Through – System is serving clients using SMB and NFS protocols, but is forced to operate in journaling mode. This mode of operation might have an impact on write performance.
Assign or Unassign a Client to a NAS Controller You can permanently assign one or more clients to a particular NAS controller. For effective load balancing, do not manually assign clients to NAS controllers, unless specifically directed to do so by Dell Technical Support. Assigning a client to a NAS controller disconnects the client’s connection. Clients will then automatically reconnect to the assigned NAS controller. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. In the Filters panel, click Rebalance. The Rebalance Clients dialog box opens. 5. Click OK. Shutting Down and Restarting NAS Controllers In some cases, you must temporarily shut down a FluidFS cluster or reboot a NAS controller. Shut Down the FluidFS Cluster In some cases, you might need to temporarily shut down all NAS controllers in a FluidFS cluster.
Reboot a NAS Controller Only one NAS controller can be rebooted in a NAS appliance at a time. Rebooting a NAS controller disconnects client connections while clients are being transferred to other NAS controllers. Clients will then automatically reconnect to the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Appliances panel, select a controller. 4. Click Reboot. The Reboot dialog box opens. 5. Click OK.
3. In the toolbar, click Actions → Storage Centers → Validate Storage Connections. The Validate Storage Connections dialog box opens. 4. Click OK. FluidFS Networking This section contains information about managing the FluidFS cluster networking configuration. These tasks are performed using the Storage Manager Client. Managing the Default Gateway The default gateway enables client access across subnets. Only one default gateway can be defined for each type of IP address (IPv4 or IPv6).
View DNS Servers and Suffixes View the current DNS servers providing name resolution services for the FluidFS cluster and the associated DNS suffixes. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. The DNS panel displays the DNS servers and suffixes. Add or Remove DNS Servers and Suffixes Add one or more DNS servers to provide name resolution services for the FluidFS cluster and add associated DNS suffixes.
Figure 40. Routed Network The solution is to define, in addition to a default gateway, a specific gateway for certain subnets by configuring static routes. To configure these routes, you must describe each subnet in your network and identify the most suitable gateway to access that subnet. Static routes do not have to be designated for the entire network—a default gateway is most suitable when performance is not an issue. You can select when and where to use static routes to best meet performance needs.
5. In the Static Route panel, click Configure Default Gateway. The Configure Default Gateway dialog box opens. 6. In the Default Gateway IPvn Address field, type the gateway IP address through which to access the subnet (for example, 192.0.2.25). 7. Click OK. Delete a Static Route Delete a static route to send traffic for a subnet through the default gateway instead of a specific gateway. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Change the Prefix for a Client Network Change the prefix for a client network. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. Click the Client Network tab. 5. In the Client Network panel, select a client network and then click Edit Settings. The Edit Client Network Settings dialog box opens. 6. In the Prefix Length field, type a prefix for the client network. 7. Click OK.
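As background for the Prefix Length field, a CIDR prefix length corresponds to a conventional dotted-quad netmask. The helper below is a generic illustration of that mapping, not a FluidFS command.

```shell
# Convert a CIDR prefix length to a dotted-quad netmask (illustrative helper).
prefix_to_netmask() {
  prefix=$1
  # Build a 32-bit mask with the top `prefix` bits set.
  mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8) & 255 ))  $((  mask        & 255 ))
}

prefix_to_netmask 24   # 255.255.255.0
prefix_to_netmask 22   # 255.255.252.0
```

For example, entering a prefix length of 24 for a client network is equivalent to the traditional netmask 255.255.255.0.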
2. Click the File System tab. 3. In the File System view, select Cluster Connectivity. 4. Click the Client Network tab. 5. In the Client Network panel, click Edit Settings. The Edit Client Network Settings dialog box opens. 6. In the NAS Controllers IP Addresses field, select a NAS controller and then click Edit Settings. The Edit Controller IP Address dialog box opens. 7. In the IP Address field, type an IP address for the NAS controller. 8. Click OK.
5. The Client Interface panel displays the bonding mode. Change the Client Network Bonding Mode Change the bonding mode (Adaptive Load Balancing or Link Aggregation Control Protocol) of the client network interface to match your environment. Prerequisites
• If you have ALB, use one client VIP per client port in the FluidFS cluster.
• If you have LACP, use one client VIP per NAS controller in the FluidFS cluster.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Viewing the Fibre Channel WWNs Storage Manager displays the NAS controller World Wide Names (WWNs) needed for updating fabric zoning on your Fibre Channel switch. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware view, expand Appliances → NAS appliance ID → NAS controller ID, then select Interfaces. The WWNs for the NAS controller are displayed in the right pane in the Fibre Channel list.
5. Click OK. 6. To remove a fabric: a) In the iSCSI Fabrics panel, select the appliance and then click Delete. The Delete dialog box opens. b) Click OK. Change the VLAN Tag for an iSCSI Fabric Change the VLAN tag for an iSCSI fabric. When a VLAN spans multiple switches, the VLAN tag specifies which ports and interfaces to send broadcast packets to. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the NAS Pool tab. 3. Click the Network tab. 4.
NOTE: • Local and external users can be used simultaneously. • If you configure Active Directory and either NIS or LDAP, you can set up mappings between the Windows users in Active Directory and the UNIX and Linux users in LDAP or NIS to allow one set of credentials to be used for both types of data access. Default Administrative Accounts The FluidFS cluster has the following built-in administrative accounts, each of which serves a particular purpose.
5. In the Local Support Access panel, click Modify Local Support Access Settings. The Modify Local Support Access Settings dialog box opens. 6. Enable or disable SupportAssist: • • To enable SupportAssist, select the Support Account (“support”) checkbox. To disable SupportAssist, clear the Support Account (“support”) checkbox. 7. Click OK. Change the Support Account Password Change the support account password to a new, strong password after each troubleshooting session is concluded. Steps 1.
• Local Group – nobody_group – Accommodates the nobody account
• Local Group – Local Users – Accommodates local user accounts
• Local Group – Users – BUILTIN domain group fully compatible with the Windows Users group
• Local Group – Backup Operators – BUILTIN domain group fully compatible with the Windows Backup Operators group
Managing Administrator Accounts You can create local FluidFS administrators and grant FluidFS administrator privileges to remote users (AD/LDAP/NIS).
3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. In the Local Users panel, click Create. The Create Local User dialog box opens. 6. Select a user to become an administrator:
a) In the File System view, select Cluster Maintenance.
b) Click the Mail & Administrators tab.
c) In the Administrators panel, click Grant Administration Privilege. The Grant Administration Privilege dialog box opens.
d) Click Select User. The Select User dialog box opens.
2. Click the File System tab. 3. In the File System view, select Cluster Maintenance. 4. Click the Mail & Administrators tab. 5. In the Administrators panel, select an administrator and click Edit Settings. The Modify Mail Settings dialog box opens. 6. In the Email Address field, type an email address for the administrator. 7. Click OK. Change an Administrator Password You can change the password for a local administrator account only.
About this task To manage local users and groups, connect to the FluidFS cluster by using the client VIP address in the address bar of Windows Explorer. Log in with the administrator account and then connect to MMC. Steps 1. Select Start → Run. 2. Type mmc and click OK. The Console 1 - [Console Root] window opens. 3. Select File → Add/Remove Snap-in. 4. Select Local Users and Groups and click Add. 5.
Change the Primary Local Group to Which a Local User Is Assigned The primary group to which a local user belongs determines the quota for the user. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab. 5. Select a local user and click Edit Settings. The Edit Settings dialog box opens. 6. From the Primary Local Group drop-down list, select the group to assign the local user to. 7.
Set the Password Policy for a Local User
When password expiration is enabled, local users are forced to change their passwords after the specified number of days.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Client Accessibility.
4. Click the Local Users and Groups tab.
5. Select a user in the Local Users area, and then click Edit Settings. The Edit Local User Settings dialog box opens.
6.
Managing Local Groups Create local groups to apply quota rules to multiple users. You can assign local users, remote users, remote user groups, and external computers to one or more local groups. The primary group to which a user belongs determines the quota for the user. View Local Groups View the current local groups. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Local Users and Groups tab.
f) Select a remote user group from the search results. g) Click OK. 10. In the External Computers area, select the external computer account that should be assigned to the local group: a) Click Add. The Select Computer Accounts dialog box opens. b) From the Domain drop-down list, select the domain to which the external computer account is assigned. c) In the Computer Account field, type either the full name of the external computer account or the beginning of the external computer account name.
9. To remove users or groups from the local group, select a user or group in the relevant area (Local Users, External Users, or External Groups) and click Remove.
10. To assign external computers to the local group:
a) In the External Computers area, select the external computer that should be assigned to the local group.
b) Click Add. The Select Computer Accounts dialog box opens.
c) From the Domain drop-down list, select the domain to which the external computer account is assigned.
• Before joining the FluidFS cluster to the domain, a computer object must be created by the OU admin for the FluidFS cluster; privileges to administer are provided in the OU.
• The FluidFS cluster computer object name, and the NetBIOS name used when joining it, must match.
• When creating the FluidFS cluster computer object, in the User or Group field under permissions to join it to the domain, select the OU admin account.
• Then, the FluidFS cluster can be joined using the OU admin credentials.
Disable Active Directory Authentication
Remove the FluidFS cluster from an Active Directory domain if you no longer need the FluidFS cluster to communicate with the directory service.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Client Accessibility.
4. Click the Directory Services tab.
5. Click Leave. The Leave Active Directory Domain dialog box opens.
6. Click OK.
View Open Files
You can view up to 1,000 open files.
Steps
1.
3. In the File System view, select Client Accessibility.
4. Click the Directory Services tab.
5. In the NFS User Repository (NIS or LDAP) area, click Edit Settings. The Edit Active Directory Settings dialog box opens.
6. Select the LDAP radio button.
7. In the Filtered Branches field, type the LDAP name to be used for searching and then click Add.
8. To use LDAP on Active Directory extended schema:
a) For the Extended Schema field, select Enabled.
9.
5. Click Edit Settings in the NFS User Repository section. The Edit External User Database dialog box opens. 6. In the Base DN field, type an LDAP base distinguished name. The name is usually in this format: dc=domain, dc=com. 7. Click OK. Add or Remove LDAP Servers At least one LDAP server must be configured. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5.
Enable or Disable TLS Encryption for the LDAP Connection Enable TLS encryption for the connection from the FluidFS cluster to the LDAP server to avoid sending data in plain text. To validate the certificate used by the LDAP server, you must export the LDAP SSL certificate and upload it to the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Accessibility. 4. Click the Directory Services tab. 5.
7. In the NIS Domain Name field, type a NIS domain name.
8. In the NIS Servers text field, type the host name or IP address of a NIS server and click Add. Repeat this step for any additional NIS servers.
9. NIS servers are listed in descending order of preference:
• To increase the order of preference for a NIS server, select a NIS server and click Up.
• To decrease the order of preference for a NIS server, select a NIS server and click Down.
10. Click OK.
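The Up and Down buttons in step 9 simply reorder a preference list: the FluidFS cluster tries NIS servers from the top of the list down. A minimal sketch of that reordering (function names are illustrative, not part of any FluidFS API):

```python
# Sketch of the NIS server preference list managed by the Up/Down buttons.
def move_up(servers: list, index: int) -> list:
    """Raise the preference of the server at `index` by one position."""
    servers = servers[:]  # work on a copy
    if index > 0:
        servers[index - 1], servers[index] = servers[index], servers[index - 1]
    return servers

def move_down(servers: list, index: int) -> list:
    """Lower the preference of the server at `index` by one position."""
    servers = servers[:]
    if index < len(servers) - 1:
        servers[index + 1], servers[index] = servers[index], servers[index + 1]
    return servers

nis = ["nis1.example.com", "nis2.example.com", "nis3.example.com"]
print(move_up(nis, 2))  # ['nis1.example.com', 'nis3.example.com', 'nis2.example.com']
```

Moving the first entry up, or the last entry down, leaves the list unchanged, which matches the behavior of the dialog's buttons at the list boundaries.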
Managing User Mappings Between Windows and UNIX/Linux Users
You can define mappings between Windows users in Active Directory and UNIX/Linux users in LDAP or NIS. The mapping ensures that a Windows user inherits the UNIX/Linux user permissions and a UNIX/Linux user inherits the Windows user permissions, depending on the direction of the mapping and the NAS volume security style.
User Mapping Policies
The user mapping policies include automatic mapping and mapping rules.
5. Click Edit Settings. The Create Manual Mapping dialog box opens. 6. Select a mapping rule. 7. Click OK. Managing User Mapping Rules Manage mapping rules between specific users. Mapping rules override automatic mapping. Create a User Mapping Rule Create a mapping rule between a specific Windows user in Active Directory and the identical UNIX/Linux user in LDAP or NIS. Mapping rules override automatic mapping. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
6. Select the direction of the user mapping:
• The two users will have identical file access permissions (via any protocol)
• Map NFS user to SMB user
• Map SMB user to NFS user
7. Click OK.
Delete a User Mapping Rule
Delete a mapping rule between a specific Windows user in Active Directory and the identical UNIX/Linux user in LDAP or NIS.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Client Accessibility.
4.
2. Click the Summary tab. The NAS Pool Status panel displays the configured size of the NAS pool. Expand the Size of the NAS Pool You can increase the size of the NAS pool as your NAS storage space requirements increase, without affecting the services to the clients. However, you cannot decrease the size of the NAS pool. Prerequisites The Storage Centers must have enough capacity to allocate more storage space to the FluidFS cluster.
2. Click the Summary tab.
3. In the Summary panel, click Edit NAS Pool Settings. The Edit NAS Pool Settings dialog box opens.
4. Enable or disable the NAS pool used space alert:
• To enable the NAS pool used space alert, select the Used Space Alert checkbox.
• To disable the NAS pool used space alert, clear the Used Space Alert checkbox.
5.
Antivirus – SMB shares are isolated to their tenant. If any shares have antivirus enabled, they utilize the virus scanners that are defined at the clusterwide level. File Access Notifications – File access notifications are set at a clusterwide level in FluidFS. If multitenancy is in use, only one tenant can utilize the external audit server feature. Separation of file access notifications between different tenants requires multiple FluidFS clusters.
8. Select a user and domain from the User and Domain drop-down lists.
9. Click OK.
Multitenancy – Tenant Administration Access
A tenant administrator manages the content of his or her tenants. A tenant can be managed by multiple tenant administrators, and a tenant administrator can manage multiple tenants. A tenant administrator can create or delete tenants, delegate administration per tenant, and view the space consumption of all tenants.
About this task
This procedure grants tenant administrator access to a user.
7. Click OK.
NOTE: Users must be added to the administrators list before they can be made a tenant administrator or a volume administrator. Only the following users can be administrators:
• Users in the Active Directory domain or UNIX domain of the default tenant
• Local users of the default tenant or any other tenant
Create a New Tenant
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, select Tenants.
4. Click Create Tenant.
Create Tenant – Step 4 Steps 1. In the Create Tenant window, click Limits. NOTE: Setting any of these limits is optional. 2. Select the Restrict Tenant Capacity Enabled checkbox. 3. Type a tenant capacity limit in gigabytes (GB). 4. Select the Restrict Number of NAS Volumes in Tenant Enabled checkbox. 5. Type the maximum number of NAS volumes for this tenant. 6. Select the Restrict Number of NFS Exports in Tenant Enabled checkbox. 7. Type the maximum number of NFS exports for this tenant. 8.
Managing NAS Volumes A NAS volume is a subset of the NAS pool in which you create SMB shares and/or NFS exports to make storage space available to clients. NAS volumes have specific management policies controlling their space allocation, data protection, security style, and so on. You can either create one large NAS volume consuming the entire NAS pool or divide the NAS pool into multiple NAS volumes. In either case you can create, resize, or delete these NAS volumes.
Managing NAS Volume Space FluidFS maintains file metadata in i-node objects. FluidFS i-nodes are 4 KB in size (before metadata replication) and can contain up to 3.5 KB of file data. When a new virtual volume is created, a portion of it is allocated as i-node area. When a new file is created and there are no free i-nodes left, an additional portion of the volume is allocated to the i-node area.
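The i-node figures above lend themselves to simple back-of-the-envelope arithmetic: any file up to 3.5 KB lives entirely inside its 4 KB i-node, and each file consumes one i-node in the i-node area. A minimal sketch, assuming the sizes stated in the text (helper names are illustrative only, before metadata replication):

```python
# Figures from the text: 4 KB per i-node, up to 3.5 KB of inline file data.
INODE_SIZE = 4 * 1024        # bytes per FluidFS i-node
INLINE_DATA_LIMIT = 3584     # 3.5 KB of file data can live inside the i-node

def fits_inline(file_size: int) -> bool:
    """True if a file's data fits entirely inside its i-node."""
    return file_size <= INLINE_DATA_LIMIT

def inode_area(file_count: int) -> int:
    """Raw i-node area, in bytes, needed for `file_count` files."""
    return file_count * INODE_SIZE

print(fits_inline(2048))      # True: a 2 KB file is stored inline
print(inode_area(1_000_000))  # 4096000000: roughly 4 GB of i-node area for a million files
```

This is why volumes with very many small files consume proportionally more i-node area, and why FluidFS grows the i-node area on demand when free i-nodes run out.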
Example 1
Create NAS volumes based on departments. The administrator breaks up storage and management into functional groups. In this example, the departmental requirements are different and support the design to create NAS volumes along department lines.
Advantages:
• The NAS volumes are easier to manage because they are set up logically.
• The NAS volumes are created to match the exact needs of the department.
View the Storage Profile for the NAS Cluster or Pool View the Storage Center Storage Profiles configured for the NAS cluster or pool. A unique Storage Profile can be configured for each Storage Center that provides storage for the FluidFS cluster. Steps In the Storage view, select a FluidFS cluster. The Storage Profile for each Storage Center appears in the Storage Subsystems area.
4. In the NAS Volumes panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens.
5. Click the Data Protection tab.
6. Enable or disable a user’s access to snapshot contents:
• To enable a user’s access to a NAS volume snapshot, select the Access to Snapshot Contents checkbox.
• To disable a user’s access to a NAS volume snapshot, clear the Access to Snapshot Contents checkbox.
7. Click OK.
Change Access Time Granularity for a NAS Volume Change the access time granularity settings of a NAS volume to change the interval at which file-access timestamps are updated. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. In the NAS Volumes panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5. Click Advanced Settings. 6.
3. In the NAS Pool Advanced Status area, click Edit Space Reclaiming Settings.
4. To enable SCSI Unmap, select the Enable SCSI Unmap (TRIM) checkbox.
5. Click OK.
Enable or Disable a NAS Volume Used Space Alert
You can enable an alert that is triggered when a specified percentage of the NAS volume space has been used.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, expand NAS Volumes and then select a NAS volume.
4.
7. If a NAS volume snapshot space consumption threshold alert is enabled, in the Snapshot Space Threshold field, type a number (from 0 to 100) to specify the percentage of used NAS volume snapshot space that triggers an alert. 8. Click OK. Results NOTE: Snapshot space is not available for NAS volumes with files processed by data reduction. Delete a NAS Volume After deleting a NAS volume, the storage space used by the deleted volume is reclaimed by the NAS pool.
6. Click OK. Change the Parent Folder for a NAS Volume Folder Change the parent folder for a NAS volume folder. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. Click Edit Settings. The Edit NAS Volume Folder Settings dialog box opens. 5. In the Parent Folder area, select a parent folder. 6. Click OK.
• The volumes have the same permissions on folders (including the root directory) as the base volumes.
• The volumes have the same security style and access time granularity definitions as the base volumes.
• No SMB shares, NFS exports, or snapshot schedules are defined.
Delete a NAS Volume Clone Delete a NAS volume clone if it is no longer used. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and then select a NAS volume. 4. Click the Snapshots & Clones tab and then select a clone. 5. Click Delete. The Delete dialog box opens. 6. Click OK. Managing SMB Shares Server Message Block (SMB) shares provide an effective way of sharing files across a Windows network with authorized clients.
Create an SMB Share Create an SMB share to share a directory in a NAS volume using the SMB protocol. When an SMB share is created, default values are applied for some settings. To change the defaults, you must modify the SMB share. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select SMB Shares. 4. In the SMB Shares panel, click Create SMB share. The Select NAS Volume dialog box opens. 5.
Set Share-Level Permissions for an SMB Share Administrators can set initial permissions for an SMB share without having to log in to the share using Windows and setting the folder security properties. About this task This procedure grants users share-level permission (full control, modify, or read) for an SMB share. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select SMB Shares. 4.
Enable or Disable SMB Message Signing
To help prevent attacks that modify SMB packets in transit, the SMB protocol supports the digital signing of SMB packets. The SMB2 protocol 3.1.1 dialect adds pre-authentication integrity, cipher negotiation, the AES-128-GCM cipher, and cluster dialect fencing. Pre-authentication integrity improves protection against an attacker tampering with SMB2 connection establishment and message authentication. The cipher can be negotiated during connection establishment.
7. Click Apply Filter/Refresh. Disconnect an SMB Connection To disconnect a particular SMB connection: Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, select Client Activity. 4. Click the Sessions tab. 5. In the Sessions Display Filter panel, use the All Protocols drop-down list to display the SMB and NFS connections. 6. Right-click on a connection and click Disconnect. The Disconnect dialog box opens. 7. Click OK.
d) In the SMB Shares panel, click Edit SMB Home Share Settings. The Set SMB Home Share dialog box opens. e) Select the Enabled checkbox for the SMB Home Share option. f) Click Change in the NAS Volume area. The Select NAS Volume dialog box opens. g) Select the NAS volume on which the SMB home shares are located and click OK. h) In the Initial path field, specify a folder that is the root of all the users’ folders (for example, /users).
Change the Owner of an SMB Share Using an Active Directory Domain Account The Active Directory domain account must have its primary group set as the Domain Admins group to change the owner of an SMB share. These steps might vary slightly depending on which version of Windows you are using. Steps 1. Open Windows Explorer and in the address bar type: \\client_vip_or_name. A list of all SMB shares is displayed. 2. Right-click the required SMB share (folder) and select Properties.
when some special SIDs are used inside an ACL (for example, a creator-owner ACE), the mapping can be inaccurate. Some applications require NFS clients to see either the exact mapping or a more permissive mapping; otherwise, the NFS applications might refuse to attempt operations that appear to be denied. FluidFS version 5 or later provides an option that causes all objects with SMB ACLs to be presented to NFS clients with UNIX mode 777 (for display only).
Audit SACL Access Set Audit SACL (System Access Control List) Access to enable the type of auditing to be performed when an object (a file or directory with SACL entries) is accessed. If SACL access is not enabled for a NAS volume, then even if a file or directory has SACL entries, the access does not generate an auditing event. Generated events for a NAS volume can be limited to successes, failures, or both. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Option 3 - Map the Share as a Network Drive
Map the share as a network drive.
Steps
1. Open Windows Explorer and choose Tools → Map Network Drive. The Map Network Drive dialog box opens.
2. From the Drive drop-down list, select any available drive.
3. Either type the path to the SMB share that you want to connect to in the Folder field or browse to the SMB share: \\client_vip_or_name\smb_share_name
4. Click Finish.
Option 4 - Network
Connect to the share using the Windows Network.
Configuring Branch Cache Branch cache must be properly configured on each client that supports branch cache on the branch office site. About this task On Windows 7 or 8, set the appropriate group policies: Computer Configuration > Policies > Administrative Templates > Network > Turn on BranchCache > Enabled. On Windows 8.1, you can also configure branch cache using PowerShell cmdlets such as Enable-BCHostedClient -ServerNames hosted_cache_server_name. Branch cache is disabled by default.
NFS v4 Implementation
Before implementing NFSv4, note the following, and refer to the respective documentation for your NFSv4 clients:
• User and Group identification — NFSv4 users and groups are identified by a user@domain string (rather than the traditional UID/GID numbers). The NFSv4 server (FluidFS) and clients must be configured to use the same external Network Information Service (NIS) or LDAP domain, which ensures consistent mapping of identities.
• To browse to an existing directory to share, click Select Folder. The Select Folder dialog box appears and displays the top-level folders for the NAS volume. Locate the folder to share, select the folder, and click OK.
• To drill down to a particular folder and view the subfolders, double-click the folder name.
• To view the parent folders of a particular folder, click Up.
5. In the middle area, select the check boxes for one or more authentication methods (UNIX Style, Kerberos v5, Kerberos v5 Integrity, or Kerberos v5 Privacy) that clients are allowed to use to access an NFS export. These options are described in the online help. 6. Click OK. Change the Client Access Permissions for an NFS Export Change the permissions for clients accessing an NFS export. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3.
Delete an NFS Export If you delete an NFS export, the data in the shared directory is no longer shared but it is not removed. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select NFS Exports. 4. In the right pane, select an NFS export and click Delete. The Delete dialog box appears. 5. Click OK. View or Select the Latest NFS Version Supported NFS v4 is enabled or disabled on a systemwide basis.
Additional Documentation For more information about configuring namespace aggregation, see: Using Dell FluidFS Global Namespace Using FTP File Transfer Protocol (FTP) is used to exchange files between computer accounts, transfer files between an account and a desktop computer, or to access online software archives. FTP is disabled by default. Administrators can enable or disable FTP support, and specify the landing directory (volume, path) on a per-system basis.
Local file system symbolic links are available in NTFS starting with Windows Vista and Windows Server 2008, but symbolic links over SMB are available only with SMB2.
Limitations on Using Symbolic Links
When using symbolic links, note the following limitations:
• SMB1, FTP, and NFS do not support symbolic links.
• Symbolic links are limited to 2,000 bytes.
• User and directory quotas do not apply to symbolic links.
• FluidFS space counting does not count symbolic link data as regular file data.
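The 2,000-byte limit above applies to the size of the link itself (its target path). A minimal sketch of a pre-flight check, assuming the limit is measured in bytes of the UTF-8-encoded target (the function name is illustrative, not a FluidFS API):

```python
SYMLINK_LIMIT_BYTES = 2000  # limit stated in the list above

def symlink_target_ok(target: str) -> bool:
    """True if the symbolic link target fits within the 2,000-byte limit."""
    return len(target.encode("utf-8")) <= SYMLINK_LIMIT_BYTES

print(symlink_target_ok("\\\\cluster\\share\\projects\\2019\\report.docx"))  # True
print(symlink_target_ok("a" * 2001))                                         # False
```

Note that non-ASCII characters occupy more than one byte in UTF-8, so a target of fewer than 2,000 characters can still exceed the byte limit.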
FluidFS v6.0 or later: The distributed dictionary service detects when it reaches almost full capacity and doubles in size (depending on available system storage).
FluidFS v5.0 or earlier: The dictionary size is static and limits the amount of unique data referenced by the optimization engine.
Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume panel, click Edit Settings. The Edit NAS Volume Settings dialog box opens. 5. Click Data Reduction. 6. Select the Data Reduction Enabled checkbox. 7. For the Data Reduction Method field, select the type of data reduction (Deduplication or Deduplication and Compression) to perform.
Disable Data Reduction on a NAS Volume By default, after disabling data reduction on a NAS volume, data remains in its reduced state during subsequent read operations. You have the option to enable rehydrate-on-read when disabling data reduction, which causes a rehydration (the reversal of data reduction) of data on subsequent read operations. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4.
• If the file is virus-free, the FluidFS cluster permits client access. The FluidFS cluster does not scan that file again, providing it remains unmodified since the last check.
• If the file is infected, the FluidFS cluster denies client access. The client does not know that the file is infected. Therefore:
  • A file access returns a system-specific file not found state for a missing file, depending on the client's computer.
  • An access denial might be interpreted as a file permissions problem.
Figure 41.
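The scan-on-access behavior described above can be sketched as a small decision routine: a clean file is remembered and not rescanned until it changes, while an infected file is denied. Keying the cache on (path, modification time) is an assumption made here for illustration; it is not FluidFS's internal design.

```python
# Illustrative sketch of antivirus scan-on-access, per the bullets above.
clean_cache: set = set()

def on_access(path: str, mtime: float, scan) -> bool:
    """Return True to permit client access, False to deny it."""
    key = (path, mtime)     # a modification changes mtime, invalidating the verdict
    if key in clean_cache:
        return True         # known clean and unmodified; skip rescanning
    if scan(path):          # scan() returns True when the file is infected
        return False        # deny access; the client sees only a generic error
    clean_cache.add(key)
    return True

print(on_access("/vol/docs/a.txt", 100.0, lambda p: False))  # True (clean)
print(on_access("/vol/docs/b.txt", 100.0, lambda p: True))   # False (infected)
```

This also shows why a denied access can look like a permissions problem to the client: the deny carries no virus-specific information.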
Dedicated FluidFS Snapshot Profiles
For FluidFS deployments, Storage Manager creates a dedicated FluidFS Snapshot Profile that is automatically assigned to FluidFS LUNs (storage volumes). The profile setting defaults to Daily, and the retention policy is to delete snapshots after 25 hours.
Creating On-Demand Snapshots
Create a NAS volume snapshot to take an immediate point-in-time copy of the data.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3.
Change the Snapshot Frequency for a Snapshot Schedule Change how often to create snapshots for a snapshot schedule. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume Status panel, click the Snapshots & Clones tab. 5. Select a snapshot schedule and click Edit Settings. The Edit Snapshot Schedule dialog box opens. 6.
Modifying and Deleting Snapshots Manage snapshots that were created on demand or by a schedule. Rename a Snapshot To rename a snapshot: Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. In the NAS Volume Status panel, click the Snapshots & Clones tab. 5. Select a snapshot and click Edit Settings. The Edit Snapshot Settings dialog box opens. 6. In the Name field, type a new name for the snapshot.
Snapshots retain the same security style as the active file system. Therefore, even when using snapshots, clients can access only their own files based on existing permissions. The data available when accessing a specific snapshot is at the level of the specific share and its subdirectories, ensuring that users cannot access other parts of the file system.
View Available Snapshots
View snapshots available for restoring data.
Steps
1. In the Storage view, select a FluidFS cluster.
2.
Option 2 – Restore Files Using Windows Only Snapshots integrate into the Shadow Copies and previous versions features of Windows. This restore option allows clients to restore a file using previous versions. Steps 1. Right-click the file and then select Properties. 2. Click the Previous Versions tab. A list displays the available previous versions of the file. 3. Select the version to restore and then click Restore. Disabling Self-Restore Steps 1. In the Storage view, select a FluidFS cluster. 2.
Table 16. Backup and Restore Applications
Application                   Supported Version
CommVault Simpana             11.x
Dell Quest NetVault           10.x, 11.x
EMC Networker                 9.x
IBM Tivoli Storage Manager    6.3
Symantec BackupExec           2014, 2015
Symantec NetBackup            7.x
Refer to the application documentation for the minimal revision/service pack supporting Dell FluidFS systems. Table 17. Supported Tape Libraries lists the supported tape libraries for 2-way NDMP backup (Fibre Channel connections only). Table 17.
Variable Name   Description   Default
d specifies that node/dir format file history will be generated.
f specifies that file-based file history will be generated.
y specifies that the default file history type (which is the node/dir format) will be generated.
n specifies that no file history will be generated.
DIRECT   Specifies whether the restore is a Direct Access Retrieval. Valid values are Y and N.
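The value sets above can be captured in a small validation sketch, of the kind a DMA-side script might run before starting a session. Only DIRECT is named in this excerpt; attributing the d/f/y/n values to the standard NDMP file-history variable HIST is an assumption here, and the function name is illustrative.

```python
# Assumption: the d/f/y/n values belong to the NDMP HIST (file history) variable.
HIST_VALUES = {
    "d": "node/dir format file history",
    "f": "file-based file history",
    "y": "default file history type (node/dir format)",
    "n": "no file history",
}
DIRECT_VALUES = {"Y", "N"}  # Direct Access Retrieval on restore

def validate_ndmp_env(env: dict) -> list:
    """Return a list of error strings for unsupported variable values."""
    errors = []
    if "HIST" in env and env["HIST"] not in HIST_VALUES:
        errors.append(f"HIST={env['HIST']!r} not in {sorted(HIST_VALUES)}")
    if "DIRECT" in env and env["DIRECT"] not in DIRECT_VALUES:
        errors.append(f"DIRECT={env['DIRECT']!r} must be Y or N")
    return errors

print(validate_ndmp_env({"HIST": "y", "DIRECT": "Y"}))  # []
print(validate_ndmp_env({"HIST": "x"}))                 # one error reported
```

Checking values up front avoids starting a backup session whose file-history or DAR settings the NDMP server would reject or silently ignore.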
Both supported backup types (dump and tar) support incremental backup. The algorithm for traversing the backup target directory is the same. However, because inode-based file history generation has different requirements to support DAR, the backup data stream generated is different: • • dump: Each directory visited will be backed up and a file history entry will be generated. It does not matter whether the directory has changed.
3. The NDMP server copies the NAS volume data to the DMA server. 4. After receiving the data, the DMA server moves the data to a storage device, such as a local disk or tape device. 5. After the backup completes, the NDMP server deletes the temporary snapshots. NDMP Environment Variables NDMP environment variables control the behavior of the NDMP server for each backup and restore session.
Environment Variable Description Used In Default Value Backup -1 Backup N During restore, if this variable is set to Y and the backup data stream was generated with this variable set to Y, the NDMP server will handle deleting files and directories that are deleted between incremental backups. Setting this variable to Y requires additional processing time and increases the backup data stream size (the size of the increase depends on the number of elements in the backup data set).
Change the NDMP Password A user name and password are required when configuring an NDMP server in the DMA. The default password is randomized and must be changed prior to using NDMP. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Cluster Connectivity. 4. Click the Backup tab. 5. In the NDMP pane, click Change Backup User Password. The Change Backup User Password dialog box opens. 6. In the Password field, type an NDMP password.
• NDMP user name and password (default user name is backup_user)
• Port that the NDMP server monitors for incoming connections (default port is 10000)
(Optional) In addition, some DMA servers require more information, such as the host name of the FluidFS cluster, OS type, product name, and vendor name:
• Host name of the FluidFS cluster, which uses the following format: controller_number.fluidfs_cluster
Viewing NDMP Jobs and Events All NDMP jobs and events can be viewed using Storage Manager. View Active NDMP Jobs View all NDMP backup and restore operations being processed by the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, click Cluster Connectivity. 4. Select Backup. The NDMP Sessions area displays the NDMP jobs.
Replication Scenarios     Description
Online data migration     Minimizes downtime associated with data migration
Disaster recovery         Mirrors data to remote locations for failover during a disaster
Configuring replication is a three-step process:
• Add a replication partnership between two FluidFS clusters.
• Add replication for a NAS volume.
• Run replication on demand or schedule replication.
How Replication Works
Replication leverages snapshots.
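Because replication leverages snapshots, each run only needs to send what changed since the last replicated snapshot. The sketch below illustrates that general snapshot-delta idea under a simplifying assumption (snapshots modeled as block maps); it is not a description of FluidFS internals.

```python
# Illustrative snapshot-delta computation: transfer only blocks that are
# new or changed relative to the previously replicated snapshot.
def delta_blocks(prev: dict, curr: dict) -> dict:
    """Blocks present or changed in `curr` relative to `prev` (block_id -> data)."""
    return {b: d for b, d in curr.items() if prev.get(b) != d}

snap1 = {0: b"aaaa", 1: b"bbbb"}                 # last replicated snapshot
snap2 = {0: b"aaaa", 1: b"BBBB", 2: b"cccc"}     # block 1 changed, block 2 is new
print(sorted(delta_blocks(snap1, snap2)))        # [1, 2]
```

An initial replication (no previous snapshot) degenerates to sending everything, which is why the first run of a replication policy typically takes the longest.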
Figure 45.
After a partner relationship is established, replication between the partners can be bidirectional. One system could hold target NAS volumes for the other system as well as source NAS volumes to replicate to that other system. A replication policy can be set up to run according to a set schedule or on demand. Replication management flows through a secure SSH tunnel from system to system over the client network.
3. In the File System view, click Replications. 4. Click the Remote Clusters tab, select a remote cluster, and then click Edit Settings. The Edit Remote NAS Cluster Settings dialog box opens. 5. Click Add. The Add Tenants Mapping for Replication dialog box opens. 6. Select a tenant from the Local FluidFS Cluster drop-down list. 7. Select a tenant from the Remote FluidFS Cluster drop-down list. 8. Click OK.
3. In the File System view, click Replications. 4. Click the Replication QoS Nodes tab. 5. Click Create QoS Node. The Create Replication QoS Node dialog box opens. 6. Type a name and choose the bandwidth limit for the node in KB/s. 7. Click OK. The Edit Replication QoS Schedule dialog box opens. 8. Drag the mouse to select an area, right-click on it, and choose the percentage of the bandwidth limit to allow in these day and hour combinations. 9. Click OK.
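The QoS schedule in steps 7 and 8 combines the node's bandwidth limit (in KB/s) with a percentage per day-and-hour cell. The effective limit for a given hour is simply the product of the two, as this sketch shows (the schedule structure and function name are assumptions for illustration):

```python
# Effective replication bandwidth for one cell of the QoS schedule.
def effective_limit_kbps(node_limit_kbps: int, pct: int) -> int:
    """Bandwidth allowed during a day/hour cell set to `pct` percent."""
    return node_limit_kbps * pct // 100

# Hypothetical schedule: throttle to 25% during business hours, 100% at night.
schedule = {("Mon", 9): 25, ("Mon", 22): 100}
print(effective_limit_kbps(10_000, schedule[("Mon", 9)]))   # 2500 KB/s at 09:00
print(effective_limit_kbps(10_000, schedule[("Mon", 22)]))  # 10000 KB/s at 22:00
```

This is why a single QoS node can serve both purposes: one base limit, scaled down during the hours when client traffic needs the bandwidth.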
8. Click OK. Single Port Replication With single-port replication, communication for all involved components uses only one port. The single port infrastructure supports communication over IPv4 and IPv6, and is opened on all controller IPs and client VIPs.
Add Replication for a NAS Volume Adding replication creates a replication relationship between a source NAS volume and a target NAS volume. After adding replication, you can set up a replication policy to run according to a set schedule or on demand. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System view, expand NAS Volumes and select a NAS volume. 4. Click Create Replication. The Create Replication wizard starts.
6. Click OK.
Schedule Replication
After a replication is created, you can schedule replication for a NAS volume to run regularly. You can schedule replication only from the source FluidFS cluster.
Steps
1. In the Storage view, select a FluidFS cluster.
2. Click the File System tab.
3. In the File System view, expand NAS Volumes and select a NAS volume.
4. Click the Replication tab.
5. In the Replication Schedules area, click Create. The Create Replication Schedule dialog box opens.
6.
Pause Replication When you pause replication, any replication operations for the NAS volume that are in progress are suspended. While replication is paused, scheduled replications do not take place. If you require multiple replications to be paused, perform the following steps for each replication. You can pause replication only from the source FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3.
Results You can search for specific replication events by typing search text in the box at the bottom of the Replications panel. Recovering an Individual NAS Volume You can access or restore data from a target NAS volume if needed. Promote a Target NAS Volume Promoting a target NAS volume to a recovery NAS volume makes the target NAS volume writable, and clients can manually fail over to it. This operation can be performed regardless of whether the source NAS volume is available.
Managing the DNS Configuration for Single NAS Volume Failover For single NAS volume failover, it is important that the environment is set up to properly migrate clients of the NAS volumes you are failing over, without disrupting the clients of other NAS volumes you are not failing over. When a NAS volume is failed over from one FluidFS cluster to another, the IP addresses that are used to access it change from Cluster A’s IP addresses to Cluster B’s IP addresses. You can facilitate this change using DNS.
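One way to implement the DNS change is a dynamic update that moves the NAS volume's host name from Cluster A's client VIP to Cluster B's client VIP. The sketch below builds an `nsupdate` batch file; all names and addresses are placeholders, and your DNS server must accept (authenticated) dynamic updates:

```shell
#!/bin/sh
# Sketch of a DNS cutover for a failed-over NAS volume. All names and
# addresses are placeholders for your environment.
ZONE="example.local"
HOST="nasvol1.example.local"    # name clients use to reach the NAS volume
NEW_VIP="192.0.2.20"            # client VIP on the target cluster (Cluster B)
TTL=60                          # short TTL so clients pick up the change quickly

# Build the batch; applying it would be: nsupdate -k /path/to/key update.txt
{
    echo "zone ${ZONE}"
    echo "update delete ${HOST} A"
    echo "update add ${HOST} ${TTL} A ${NEW_VIP}"
    echo "send"
} > update.txt
cat update.txt
```

Keeping the TTL short before a planned failover shortens the window during which clients still resolve the old VIP.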
Phase 2 — Cluster A fails and clients request failover to target Cluster B If Cluster A stops responding because of an unexpected failure, fail over to Cluster B. Steps 1. From Cluster B, promote the target volumes in Cluster B. This transforms the original target volumes (B1, B2, ..., Bn) to standalone NAS volumes and makes them writable. 2. Delete the replication policies for the original source volumes (A1, A2, ..., An). 3.
8. From Cluster A, restore the users and groups configuration from Cluster B. This restores the Cluster A users and groups configuration to Cluster B settings. NOTE: If the system configuration restore fails, manually set the system back to the original settings (use the settings for Cluster A that you recorded earlier). 9. Start using Cluster A to serve client requests.
FluidFS Monitoring This section contains information about monitoring the FluidFS cluster. These tasks are performed using the Storage Manager Client. Monitoring NAS Appliance Hardware Storage Manager displays an interactive, graphical representation of the front and rear views of NAS appliances.
3. In the Hardware view, expand Appliances and select an appliance ID. 4. Select Fans. The status of each fan is displayed. View the Status of the Power Supplies View the status of the power supplies in a NAS appliance. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Hardware tab. 3. In the Hardware view, expand Appliances and select an appliance ID. 4. Select Power Supply. The status of each power supply is displayed.
Viewing FluidFS Cluster Storage Usage Storage Manager displays a line chart that shows storage usage over time for a FluidFS cluster, including total capacity, unused reserved space, unused unreserved space, and used space. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. The Summary view displays the FluidFS cluster storage usage.
FluidFS Maintenance This section contains information about performing FluidFS cluster maintenance operations. These tasks are performed using the Storage Manager Client. Connecting Multiple Data Collectors to the Same Cluster You can have multiple data collectors connected to the same FluidFS cluster. About this task To designate the Primary data collector and/or whether it receives events: Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the Summary tab. 3.
Remove a FluidFS Cluster From Storage Manager Remove a FluidFS cluster if you no longer want to manage it using Storage Manager. For example, you might want to move the FluidFS cluster to another Storage Manager Data Collector. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the Summary tab. 3. In the right pane, click Delete. The Delete dialog box appears. 4. Click OK.
2. Click the Summary tab. 3. In the right pane, click Move. The Select Folder dialog box appears. 4. Select a parent folder. 5. Click OK. Delete a FluidFS Cluster Folder Delete a FluidFS cluster folder if it is not being used. Prerequisites The folder must be empty. Steps 1. In the Storage view, select a FluidFS cluster folder. 2. Click the Summary tab. 3. Click Delete. The Delete dialog box opens. 4. Click OK.
a) Select a NAS controller and click Edit Settings. The Edit Controller IP Address dialog box opens. b) In the IP Address field, type an IP address for the NAS controller. c) Click OK. d) Repeat the preceding steps for each NAS controller. e) To specify a VLAN tag, type a VLAN tag in the VLAN Tag field. When a VLAN spans multiple switches, the VLAN tag is used to specify to which ports and interfaces to send broadcast packets. f) Click Next. 8.
NOTE: Due to the complexity and precise timing required, schedule a maintenance window to add the NAS appliance(s). Steps 1. (Directly cabled internal network only) If the FluidFS cluster contains a single NAS appliance, with a direct connection on the internal network, re-cable the internal network as follows.
a) Cable the new NAS appliance(s) to the internal switch.
b) Remove just one of the internal cables from the original NAS appliance.
• seconds, then click Refresh to update the Connectivity Report. When the iSCSI logins are complete and the Connectivity Report has been refreshed, the status for each FluidFS cluster iSCSI initiator shows Up. For Fibre Channel NAS appliances, when the Connectivity Report initially appears, the FluidFS cluster HBAs show the status Not Found/Disconnected. You must record the WWNs and manually update fabric zoning on the Fibre Channel switch. Then, click Refresh to update the Connectivity Report.
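When recording the WWNs for fabric zoning, a pattern match over a saved copy of the Connectivity Report can pull them out reliably. The report text below is invented for illustration; only the WWN format (eight colon-separated hex pairs) is assumed:

```shell
#!/bin/sh
# Extract WWN-style identifiers from a saved Connectivity Report.
# The sample report text is illustrative, not real output.
cat > report.txt <<'EOF'
HBA 1 WWN: 50:00:d3:10:00:12:34:56 Status: Not Found/Disconnected
HBA 2 WWN: 50:00:d3:10:00:12:34:57 Status: Not Found/Disconnected
EOF

# Eight colon-separated hex pairs: xx:xx:xx:xx:xx:xx:xx:xx
grep -Eo '([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}' report.txt
```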
Attach a NAS Controller Attach a new NAS controller when replacing an existing NAS controller. After it is attached, the new NAS controller inherits the FluidFS cluster configuration settings of the existing NAS controller. Prerequisites Verify that the NAS controller being attached is in standby mode and powered on. A NAS controller is on and in standby mode if the power LED is flashing green at around two flashes per second. Steps 1. In the Storage view, select a FluidFS cluster. 2.
Managing Service Packs The FluidFS cluster uses a service pack methodology to upgrade the FluidFS software. Service packs are cumulative, meaning that each service pack includes all fixes and enhancements provided in earlier service packs. View the Update History View a list of service pack updates that have been installed on the FluidFS cluster. Steps 1. In the Storage view, select a FluidFS cluster. 2. Click the File System tab. 3. In the File System tab navigation pane, select Cluster Maintenance. 4.
Prerequisites
• Contact technical support to make service packs available for download to the FluidFS cluster.
• The Storage Manager Data Collector must have enough disk space to store the service pack.
• If there is not enough space to store the service pack, a message will be displayed shortly after the download starts.
• You can delete old service packs to free up space if needed.
• Installing a service pack causes the NAS controllers to reboot during the installation process.
Managing Firmware Updates Firmware is automatically updated on NAS controllers during service pack updates and after a failed NAS controller is replaced. After a firmware update is complete, the NAS controller reboots. It is important that you do not remove a NAS controller when a firmware update is in progress. Doing so corrupts the firmware. A firmware update is in progress if both the rear power-on LED and cache active/off-load LED repeatedly blink amber 5 times and then blink green 5 times.
Restore the NAS Volume Configuration When you restore a NAS volume configuration, it overwrites and replaces the existing configuration. Clients that are connected to the FluidFS cluster are disconnected. Clients will then automatically reconnect to the FluidFS cluster. Steps 1. Ensure the .clusterConfig folder has been copied to the root folder of the NAS volume on which the NAS volume configuration will be restored.
3. Click the File System tab and select Client Accessibility. 4. In the right pane, click the Local Users and Groups tab. 5. Click Restore. The Restore Local Users from Replication Source dialog box appears. 6. From the Backup Source drop-down menu, select the backup from which to restore local users. 7. Click OK. Restoring Local Groups Restoring the local groups configuration provides an effective way to restore all local groups without having to manually reconfigure them.
Reinstalling FluidFS from the Internal Storage Device Each NAS controller contains an internal storage device from which you can reinstall the FluidFS factory image. If you experience general system instability or a failure to boot, you might have to reinstall the image on one or more NAS controllers. Prerequisites • • If the NAS controller is still an active member in the FluidFS cluster, you must first detach it.
4. In the VAAI area, click Edit Settings. The Modify VAAI Settings dialog box appears.
5. To enable VAAI, select the VAAI Enabled checkbox.
6. To disable VAAI, clear the VAAI Enabled checkbox.
7. Click OK.
Installation Instructions The FS Series VAAI plugin supports ESXi versions 5.5, 6.0, and 6.5. Prerequisites NOTE: The FS Series VAAI plugin should be installed on each relevant ESXi host and requires a reboot. Steps 1.
To verify that an FS Series datastore has VAAI enabled, use the command vmkfstools -P in the ESXi host console. The following example illustrates the query and output for a datastore named FSseries_Datastore residing on an FS Series v4 or later system:
~ # vmkfstools -Ph /vmfs/volumes/FSseries_Datastore/
NFS-1.00 file system spanning 1 partitions
File system label (if any): FSseries_Datastore
Mode: public
Capacity 200 GB, 178.
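On hosts with many datastores, the same check can be scripted by searching the `vmkfstools -Ph` output. The exact field label can differ between ESXi builds, so both the sample output and the pattern below are assumptions to adapt to what your host actually prints:

```shell
#!/bin/sh
# Search saved `vmkfstools -Ph` output for a VAAI indication. The sample
# output is abbreviated and the 'NAS VAAI Supported' label is an assumption;
# compare against real output on your ESXi build.
cat > vmkfstools.out <<'EOF'
NFS-1.00 file system spanning 1 partitions
File system label (if any): FSseries_Datastore
Mode: public
NAS VAAI Supported: YES
EOF

if grep -qi 'VAAI.*YES' vmkfstools.out; then
    echo "VAAI enabled"
else
    echo "VAAI not detected"
fi
```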
• To change the maximum number of events to display, select the maximum number of events (100, 500, or 1000) from the Max Count drop-down menu.
• To filter the events based on severity, select a severity from the Severity Above drop-down menu. Options available are Inform, Warning, Error, and Exception.
View Details About an Event in the Event Log View detailed information for an event contained in the Event Log. Steps 1. Click the Storage view and select a FluidFS cluster. 2. Click the Events tab. 3.
Run Diagnostics on a FluidFS Cluster FluidFS diagnostics can be run while the FluidFS cluster is online and serving data. About this task The following FluidFS diagnostic options are available:
• File System: Collects information on the core file system activities, resource consumption, and status.
• General System: Collects general information about the FluidFS cluster status and settings.
• FTP: Collects information for FTP.
Run Embedded System Diagnostics on a NAS Controller The embedded system diagnostics (also known as Enhanced Pre-boot System Assessment (ePSA) diagnostics) provide a set of options for particular device groups or devices. Prerequisites Connect a monitor to a NAS controller VGA port and connect a keyboard to one of the NAS controller USB ports.
BMC Network Configuration Procedure Follow this procedure to configure the BMC network. Steps 1. In the Storage view, select the FluidFS cluster that you want to configure. 2. Click the File System tab. 3. In the File System panel, select Cluster Connectivity, and then click the Management Network tab. 4. In the BMC area, click Modify BMC Network Settings. The Modify BMC Network Settings dialog box opens. 5. Enter the controller IP address.
For Active Directory users, the Primary Group setting is not mandatory, and if it is not defined, the used space is not accounted to any group. For group quotas to be effective with Active Directory users, their primary group must be assigned. Workaround To set up the primary group for an Active Directory user:
1. Open the Active Directory management tool.
2. Right-click on the user and select Properties.
3. Select the Member Of tab.
4. The group you need must be listed.
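You can confirm whether a primary group is already assigned by querying the user's `primaryGroupID` attribute in Active Directory. The sketch echoes the `ldapsearch` command it would run; the server, base DN, and user name are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: query an AD user's primaryGroupID attribute to confirm
# a primary group is set. All names are placeholders for your environment.
DC="dc01.example.local"
BASE="dc=example,dc=local"
ADUSER="jsmith"

run() { echo "would run: $*"; }    # swap 'echo' for real execution

run ldapsearch -H "ldap://${DC}" -b "$BASE" "(sAMAccountName=${ADUSER})" primaryGroupID
```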
Cause
• There are many snapshot creation/deletion requests currently being processed.
• Another snapshot request for the NAS volume is currently being executed.
• The total number of snapshots has reached the system limit.
• The wrong IP address was specified in the backup job.
Workaround
• For a manual request failure, retry taking or deleting the snapshot after a minute or two.
• If the request originated from the snapshot scheduler, wait another cycle or two.
Workaround Check the permissions on the file/folder and set the required permissions. Access to SMB Shares Unavailable After Microsoft Update Description After performing an update to Microsoft Windows 10 version 1903 or Microsoft Windows Server version 1903, Windows clients using SMB 3.1.1 lose access to SMB shares. Accessing an SMB share after the Microsoft Windows update causes one or more of the following error messages: • In FluidFS while accessing the SMB share: Windows cannot access "\\servernam
SMB Client Clock Skew Description SMB client clock skew errors. Cause The client clock must be within 5 minutes of the Active Directory clock. Workaround Configure the client to synchronize its clock with the Active Directory server (as an NTP server) to avoid clock skew errors. SMB Client Disconnect on File Read Description The SMB client is disconnected on file read. Cause Extreme SMB workload during NAS controller failover. Workaround The client needs to reconnect and open the file again.
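The 5-minute tolerance reflects the default Kerberos clock-skew limit. As a minimal sketch of the comparison (timestamps are plain epoch seconds; a real check would read both clocks with a tool such as `ntpdate -q` or `w32tm`):

```shell
#!/bin/sh
# Decide whether a client clock is within the default 5-minute (300-second)
# Kerberos tolerance of the Active Directory clock. Inputs are epoch seconds.
within_tolerance() {
    client=$1
    server=$2
    skew=$(( client - server ))
    if [ "$skew" -lt 0 ]; then skew=$(( 0 - skew )); fi
    if [ "$skew" -le 300 ]; then
        echo "OK (skew ${skew}s)"
    else
        echo "SKEW (${skew}s)"
    fi
}

within_tolerance 1700000100 1700000000    # prints: OK (skew 100s)
within_tolerance 1700000000 1700000500    # prints: SKEW (500s)
```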
SMB Maximum Connections Reached Description The maximum number of SMB connections per NAS controller has been reached. Cause Each NAS appliance is limited to a certain number of connections. Workaround • • • If the system is in an optimal state (all NAS controllers are online) and the number of SMB clients accessing one of the NAS controllers reaches the maximum, consider adding another NAS appliance.
SMB Write to Read Only NAS Volume Description A client tries to modify a file on a read-only NAS volume. Cause A NAS volume is set to read-only when it is the target of a replication. The most frequent reason for this event is either:
• The client meant to access the target system for read purposes, but also tried to modify a file by mistake.
• The client accessed the wrong system due to similarity in name/IP address.
Workaround
• Check the file systems related to the NFS export through Storage Manager.
• If the issue is due to the directory, check the spelling in your command and try to run the mount command on both directories.
NFS Export Does Not Exist Description Attempted to mount an export that does not exist. Cause This failure is commonly caused by spelling mistakes on the client system or when accessing the wrong server. Workaround 1.
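A quick way to rule out spelling mistakes is to list the server's exports with `showmount -e` and match the path exactly. Since that requires a live server, the sketch checks against captured output; the export names are placeholders:

```shell
#!/bin/sh
# Verify an export path appears in saved `showmount -e <server>` output.
# The export names below are placeholders.
cat > exports.txt <<'EOF'
Export list for nascluster.example.local:
/nasvol1/export1 *
/nasvol2/data    10.0.0.0/24
EOF

EXPORT="/nasvol1/export1"
if grep -q "^${EXPORT}[[:space:]]" exports.txt; then
    echo "export found: ${EXPORT}"
else
    echo "export not found: ${EXPORT} (check spelling)"
fi
```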
NFS Mount Fails Due to Netgroup Failure Description This event is issued when a client fails to mount an NFS export because the required netgroup information cannot be attained. Cause This error is usually the outcome of a communication error between the FluidFS cluster and the NIS/LDAP server. It can be a result of a network issue, directory server overload, or a software malfunction.
NFS Write To Read-Only NAS Volume Description A client tries to modify a file on a read-only NAS volume. Cause A NAS volume is set to read-only when it is the target of a replication. The most frequent reason for this event is either:
• The client meant to access the target system for read purposes, but also tried to modify a file by mistake.
• The client accessed the wrong system due to similarity in name/IP address.
Workaround
Problematic SMB Access From a UNIX/Linux Client Description A UNIX/Linux client is trying to mount a FluidFS cluster SMB share using SMB (using /etc/fstab or directly using smbmount). Cause A UNIX/Linux client is trying to access the file system using the smbclient command, for example: smbclient /// -U user%password -c ls Workaround It is recommended that you use the NFS protocol interfaces to access the FluidFS cluster file system from UNIX/Linux clients.
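The equivalent NFS access from a UNIX/Linux client is a standard NFS mount of the corresponding export. The sketch echoes the commands it would run; the server, export, and mount-point names are placeholders:

```shell
#!/bin/sh
# Dry-run sketch: access the FluidFS file system over NFS instead of SMB.
# All names are placeholders for your environment.
SERVER="nascluster.example.local"
EXPORT="/nasvol1/export1"
MNT="/mnt/nasvol1"

run() { echo "would run: $*"; }    # swap 'echo' for real execution as root

run mkdir -p "$MNT"
run mount -t nfs "${SERVER}:${EXPORT}" "$MNT"
```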
Cause Flow control is not enabled on the switch(es) connected to a FluidFS cluster controller. Workaround See the switch vendor's documentation to enable flow control on the switch(es). Troubleshoot Replication Issues This section contains probable causes of and solutions to common replication problems. Replication Configuration Error Description Replication between the source and target NAS volumes fails because the source and target FluidFS cluster topologies are incompatible.
Replication Target Volume is Detached Description Replication between the source NAS volume and the target NAS volume fails because the target NAS volume is detached from the source NAS volume. Cause Replication fails because the target NAS volume was previously detached from the source NAS volume. Workaround Perform the detach action on the source NAS volume. If required, reattach both NAS volumes in a replication relation.
Workaround Check whether the FluidFS cluster is down in the source system. If the FluidFS cluster is down, you must start the file system on the source FluidFS cluster. The replication continues automatically when the file system starts. Replication Source is Not Optimal Description Replication between the source and the target NAS volumes fails because the file system of the source NAS volume is not optimal. Cause Replication fails because the file system of the source is not optimal.
• If the file system has not stopped, you must let it continue stopping. The file system reaches a 10-minute timeout, flushes its cache to local storage, and continues the shutdown process. NAS Volume Security Violation Description NAS volume security violation. Cause Selecting a security style for a NAS volume dictates the dominant protocol to be used to set permissions on files in the NAS volume: NFS for UNIX security style NAS volumes and SMB for NTFS security style NAS volumes.
13 Remote Storage Centers and Replication QoS Connecting to Remote Storage Centers A remote Storage Center is a Storage Center that is configured to communicate with the local Storage Center over the Fibre Channel and/or iSCSI transport protocols. Storage Centers can be connected to each other using Fibre Channel, iSCSI, or both. Once connected, volumes can be replicated from one Storage Center to the other, or Live Volumes can be created using both Storage Centers.
• a. Click the Storage tab. b. In the Storage tab navigation pane, select Remote Storage Centers. c. In the right pane, click Configure iSCSI Connection. The Configure iSCSI Connection wizard opens. From a PS Group, select Actions > Replication > Configure iSCSI Connection. The Configure iSCSI Connection wizard opens. 4. Select the Storage Center or PS Group for which you want to configure an iSCSI connection, then click Next. The wizard advances to the next page. 5.
2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the remote Storage Center. 5. In the right pane, click Configure iSCSI Connection. The Configure iSCSI Connection wizard appears. 6. Clear the check box for each iSCSI port that you want to remove from the connection. If you remove all iSCSI ports, the remote Storage Center is disconnected from the local Storage Center. 7. When you are done, click Finish.
Change the Link Speed for a QoS Definition Use the Edit Settings dialog box to change the link speed for a QoS Definition. Steps 1. Click the Replications & Live Volumes view. 2. Click the QoS Nodes tab, then select the QoS definition. 3. In the right pane, click Edit Settings. The Edit Replication QoS dialog box appears. 4. In the Link Speed field, specify the speed of the link in megabits per second (Mbps) or gigabits per second (Gbps). 5. Click OK.
3. In the right pane, click Delete. The Delete Objects dialog box appears. 4. Click OK.
14 Storage Center Replications and Live Volumes A replication copies volume data from one Storage Center to another Storage Center to safeguard data against local or regional data threats. A Live Volume is a replicating volume that can be mapped and active on a source and destination Storage Center at the same time. To perform replications, a Remote Instant Replay (Replication) license must be applied to the source and destination Storage Centers.
For asynchronous replication, you can enable the following options: • • Replicate Active Snapshot: Attempts to keep the Active Snapshots (current, unfrozen volume data) of the source and destination volumes synchronized, which could require more bandwidth. Data that is written to the source volume is queued for delivery to the destination volume. If the local Storage Center or site fails before the write is delivered, it is possible that writes will not be delivered to the destination volume.
Requirement | Description
QoS Definition | A quality of service (QoS) definition must be set up for the replication on the source Storage Center.
Related concepts Connecting to Remote Storage Centers Related tasks Add a Storage Center Creating and Managing Replication QoS Definitions Replication Behavior When a Destination Volume Fails When the destination volume becomes unavailable, each replication type behaves slightly differently.
Disaster Recovery Limitations for Volumes Associated with Multiple Replications The following disaster recovery limitations apply to volumes that are associated with multiple replications.
• Activating disaster recovery for a volume removes other cascade mode replications associated with the volume.
• Restoring a replication removes all other associated mixed mode replications.
Replications that are removed by disaster recovery must be manually recreated.
6. Click Next. The wizard advances to the next page. 7. (Optional) To modify replication attributes for an individual simulated replication, select it, then click Edit Settings. 8. Click Finish. Use the Replications tab on the Replications & Live Volumes view to monitor the simulated replication(s). Related concepts Replication Types Convert a Simulated Replication to a Real Replication If you are satisfied with the outcome of a simulated replication, you can convert it to a real replication.
• If a QoS definition has not been created, the Create Replication QoS wizard appears. Use this wizard to create a QoS definition before you configure replication. NOTE: If the volume is a replication destination, Replication QoS settings are enforced. If the volume is a Live Volume secondary, the Replication QoS settings are not enforced. 6. Select a remote storage system to which you want to replicate the volume, then click Next. • • The wizard advances to the next page.
Migrating Volumes to Another Storage Center Migrating a volume to another Storage Center moves the data on that volume to a volume on another Storage Center. Using the following steps to migrate a volume to another Storage Center does not require a Remote Instant Replay (Replication) license or a Live Volume license. SCv2000 series storage systems running Storage Center 7.2 or earlier and all other storage systems running Storage Center 7.
Modifying Replications Modify a replication if you want to enable or disable replication options, convert it to a Live Volume, or delete it. Change the Type for a Replication A replication can be changed from synchronous to asynchronous or asynchronous to synchronous with no service interruption. Prerequisites The source and destination Storage Centers must be running version 6.5 or later. Steps 1. Click the Replications & Live Volumes view. 2.
2. On the Replications tab, select the replication, then click Edit Settings. The Edit Replication Settings dialog box appears. 3. Select or clear the Deduplication check box, then click OK. Select a Different QoS Definition for a Replication Select a different QoS definition for a replication to change how the replication uses bandwidth. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the replication, then click Edit Settings.
Resume a Paused Live Volume Resume a Live Volume to allow volume data to be copied to the secondary Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select the paused replication, then click Resume. The Resuming Replication dialog box opens. 3. Click OK. Convert a Replication to a Live Volume If servers at both the local and remote site need to write to a volume that is currently being replicated, you can convert a replication to a Live Volume.
Filter Replications by Source Storage Center To reduce the number of replications that are displayed on the Replications & Live Volumes view, you can filter the replications by source Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. In the Source Storage Centers pane, hide replications that originate from one or more Storage Centers by clearing the corresponding check boxes. 3.
View IO/sec and MB/sec Charts for a Replication When a replication is selected, the IO Reports subtab displays the Replication IO/Sec and Replication MB/Sec charts. About this task The charts contain performance data for the replication of a volume from the primary Storage Center to the secondary Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. On the Replications tab, select a replication. 3. In the bottom pane, click the IO Reports tab.
Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume. 5. Click Replicate Volume. 6. Select a remote storage system from the table. 7. Click Next. If a remote iSCSI connection is not configured, the Configure iSCSI Connection wizard opens. For instructions on setting up a remote iSCSI connection, see Configure an iSCSI Connection for Remote Storage Systems. 8.
3. Click the Replications & Live Volumes tab. 4. From the center of the Replications tab (default) navigation pane, select the Snapshots subtab. 5. Click Edit Settings. 6. Type the amount of time for the replica to remain in the PS Series Group. 7. Click OK. Delete a PS Group Replication Delete a PS Group replication when it is no longer needed. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Group. 3. Click the Replications & Live Volumes tab. 4.
Managing Replication Schedules Replication schedules set when replications from a PS Series group run on a daily, hourly, or one-time basis. They also determine the number of snapshots the destination storage system retains for the replication. Create an Hourly Replication Schedule An hourly replication schedule determines how often a PS Series group replicates data to the destination volume at a set time or interval each day. Steps 1. Click the Storage view. 2.
• To repeat the replication over a set amount of time, select Repeat Interval, then select how often to start replication and the start and end times. 14. From the Replica Settings field, type the maximum number of replications the schedule can initiate. Schedule a Replication to Run Once Create a schedule for one replication to replicate the volume at a future date and time. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Series group. 3. Click the Storage tab. 4.
• To enable the replication schedule, select the Enable Schedule checkbox.
• To disable the replication schedule, clear the Enable Schedule checkbox.
7. Click OK. Delete a Replication Schedule Delete a replication schedule to prevent it from initiating replications after the schedule is no longer needed. Steps 1. Click the Storage view. 2. In the Storage pane, select a PS Group. 3. Click the Storage tab. 4. From the Storage tab navigation pane, select a volume.
Requirement | Description
QoS definitions | Quality of Service (QoS) definitions must be defined on the primary and secondary Storage Centers.
Server | MPIO must be enabled on the server to prevent I/O interruption.
Live Volume Types Live Volumes can be created using asynchronous replication or synchronous replication. Storage Center version 7.3 and later provides support for ALUA optimization of Live Volumes. Live Volume ALUA allows the Storage Center to report path priority to servers for Live Volumes.
Live Volume Before Swap Role In the following diagram, the primary Storage Center is on the left and the secondary Storage Center is on the right. Figure 47. Example Live Volume Configuration
1. Server
2. Server IO request to primary volume over Fibre Channel or iSCSI
3. Primary volume
4. Live Volume replication over Fibre Channel or iSCSI
5. Secondary volume
6. Server IO request to secondary volume (forwarded to primary Storage Center by secondary Storage Center)
7.
Swap Role Limit | Description
Min Time As Primary Before Swap (Minutes) | Specifies the number of minutes that must pass before the roles can be swapped.
Min Secondary Percent Before Swap (%) | Specifies the minimum percentage of IO that must take place on the secondary volume before the roles can be swapped.
Triggering an Automatic Swap Role For an automatic swap role to occur, the following events must take place. Steps 1. The Automatically Swap Roles feature must be enabled for the Live Volume. 2.
1. The primary Storage Center fails. Figure 49. Step One 2. The secondary Storage Center cannot communicate with the primary Storage Center. 3. The secondary Storage Center communicates with the tiebreaker and receives permission to activate the secondary Live Volume. 4. The secondary Storage Center activates the secondary Live Volume. Figure 50. Step Four NOTE: When the primary Storage Center recovers, Storage Center prevents the Live Volume from coming online.
1. The primary Storage Center recovers from the failure. Figure 51. Step One 2. The primary Storage Center recognizes that the secondary Live Volume is active as the primary Live Volume. 3. The Live Volume on the secondary Storage Center becomes the primary Live Volume. 4. The Live Volume on the primary Storage Center becomes the secondary Live Volume. Figure 52.
Managed Replications for Live Volumes A managed replication allows you to replicate a primary Live Volume to a third Storage Center, protecting against data loss in the event that the site where the primary and secondary Storage Centers are located goes down. When a Live Volume swap role occurs, the managed replication follows the primary volume to the other Storage Center.
Managed Replication After Live Volume Swap Role In the following diagram, a swap role has occurred so the secondary Storage Center is on the left and the primary Storage Center is located on the right. The managed replication has moved to follow the primary volume. Figure 54. Live Volume with Managed Replication Example Configuration After Swap Role
1. Server
2. Server IO request to secondary volume (forwarded to primary Storage Center by secondary Storage Center)
3. Secondary volume (Live Volume)
4.
4. In the Storage tab navigation tree, select the volume. 5. In the right pane, click Convert to Live Volume.
• If one or more QoS definitions exist, the Convert to Live Volume wizard appears.
• If a Quality of Service (QoS) definition has not been created, the Create Replication QoS wizard appears. Use this wizard to create a QoS definition before you configure a Live Volume.
a) (Optional) If you want to add a managed replication or modify a Live Volume before it is created, select it, then click Edit Settings. b) Click Finish. The Live Volumes are created and they begin to replicate to the secondary Storage Center. Related concepts Live Volume Requirements Live Volume Types Managed Replications for Live Volumes Modifying Live Volumes Modify a Live Volume if you want to change replication attributes, Live Volume attributes, convert it to a replication, or delete it.
Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3. In the Sync Mode area, select High Availability or High Consistency. 4. Click OK.

Related concepts
Synchronous Replication
Synchronous Replication Modes

Add a Managed Replication to a Live Volume
Add a managed replication to a Live Volume to replicate the primary volume to a third Storage Center.
Enable or Disable Deduplication for a Live Volume Deduplication reduces the amount of data transferred and enhances the storage efficiency of the remote Storage Center by copying only the changed portions of the snapshot history on the source volume, rather than all data captured in each snapshot. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3.
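The transfer saving described above can be illustrated with a rough accounting model. This is our own simplification, not the product's actual bookkeeping: each snapshot is represented as the total data it captures plus the portion that changed since the previous snapshot, and deduplication sends only the changed portions.

```python
def replication_bytes(snapshots, deduplication=True):
    """Estimate bytes sent to the remote Storage Center for a snapshot history.

    `snapshots` is a list of (total_bytes, changed_bytes) pairs: the full data
    captured by each snapshot and the portion that changed since the previous
    one. Illustrative model only.
    """
    if deduplication:
        # Only the changed portions of the snapshot history are copied.
        return sum(changed for _, changed in snapshots)
    # Without deduplication, all data captured in each snapshot is sent.
    return sum(total for total, _ in snapshots)

# First snapshot is all new data; later snapshots change only a little.
history = [(100_000, 100_000), (100_000, 5_000), (100_000, 2_500)]
replication_bytes(history)                       # 107,500 bytes with deduplication
replication_bytes(history, deduplication=False)  # 300,000 bytes without
```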
Allow a Live Volume to Automatically Swap Roles Live Volumes can be configured to swap primary and secondary volumes automatically when certain conditions are met to avoid situations in which the secondary volume receives more IO than the primary volume. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Edit Settings. The Edit Live Volume dialog box appears. 3. Select the Automatically Swap Roles check box. 4.
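The condition behind automatic swap can be sketched as a simple rule. Note this is an assumption for illustration: the conditions Storage Center actually evaluates are configurable and more involved than this; the threshold names and defaults below are invented.

```python
def should_swap(primary_io: int, secondary_io: int,
                min_total_io: int = 1000, min_secondary_pct: float = 60.0) -> bool:
    """Illustrative swap rule: swap roles when traffic is meaningful and the
    secondary volume is receiving most of the IO. The real product's criteria
    differ; min_total_io and min_secondary_pct are hypothetical parameters."""
    total = primary_io + secondary_io
    if total < min_total_io:
        return False  # too little traffic to justify a swap
    return (secondary_io / total) * 100 >= min_secondary_pct

should_swap(100, 900)   # True: 90% of the IO hits the secondary volume
should_swap(900, 100)   # False: the primary is already serving most IO
```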
Set Threshold Alert Definitions for a Live Volume Configure one or more Threshold Alert Definitions for a Live Volume if you want to be notified when specific thresholds are reached, such as the amount of replication data waiting to be transferred or the percentage of replication data that has been transferred. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Set Threshold Alert Definitions.
Live Volume to Delete   Failed Over   Active Live Volume   Visible to Storage Manager
Primary                 No            Primary              Primary only
Primary                 Yes           Secondary            Primary and secondary
Secondary               No            Primary              Secondary only
Secondary               Yes           Secondary            Secondary only

Steps 1. Click the Replications & Live Volumes view. 2. Click the Live Volumes tab, then select a Live Volume. 3. Click Force Delete. The Force Delete dialog box appears. 4. Select the Storage Center that will retain the volume device ID.
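The force-delete outcomes described above can be encoded as a small lookup function. The string labels are our own shorthand for the document's table values; this is a sketch of the documented behavior, not product code.

```python
def force_delete_outcome(delete_target: str, failed_over: bool):
    """Return (active_live_volume, visible_to) after a force delete.

    delete_target is "primary" or "secondary"; failed_over says whether the
    Live Volume had failed over. Mirrors the outcomes documented above.
    """
    if delete_target == "primary":
        if failed_over:
            return ("secondary", "primary and secondary")
        return ("primary", "primary only")
    if delete_target == "secondary":
        if failed_over:
            return ("secondary", "secondary only")
        return ("primary", "secondary only")
    raise ValueError(f"unknown delete target: {delete_target}")
```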
Modifying Live Volumes with Automatic Failover The following tasks apply to Live Volumes with Automatic Failover. Update to the Local Tiebreaker Updating to the local tiebreaker configures the Data Collector that Storage Manager is connected to as the tiebreaker. Storage Manager provides the option to update to the local tiebreaker when the current Data Collector is not configured as the tiebreaker.
ALUA is not automatically enabled
Live Volume ALUA is not automatically enabled under the following circumstances:
• Swapping Roles. However, if ALUA is enabled on one or more of the systems, that status is reported and persists.
• Existing Live Volumes after system upgrades. Use the ALUA optimization wizard to enable Live Volume ALUA.
NOTE: Certain Windows Server 2016 versions can support non-optimized path reporting properly. For more information and best practice guidelines for configuring MPIO on Microsoft Windows Server 2016, refer to the Dell EMC SC Series Storage and Microsoft Multipath I/O white paper located on the Dell support site. 6. Click Finish. The results of the ALUA optimization process are displayed. 7. Click OK.
View the Replication Managed by a Live Volume A managed replication replicates a Live Volume primary volume to a third Storage Center. Steps 1. Click the Replications & Live Volumes view. 2. On the Live Volumes tab, select the Live Volume, then click Managed Replication. The Replications tab opens and selects the managed replication.
15 Storage Center DR Preparation and Activation

How Disaster Recovery Works
Disaster recovery (DR) is the process of activating a replicated destination volume when the source site fails. When the source site comes back online, the source volume can be restored based on the volume at the DR site. The following diagrams illustrate each step in the DR process. Although this example shows a replication, DR can also be used for a Live Volume.
Step 3: An Administrator Activates Disaster Recovery
An administrator activates DR to make the data in the destination volume accessible. When DR is activated, Storage Manager brings the destination volume online and maps it to a server at the DR site. The server sends IO to the activated DR volume for the duration of the source site outage.

Figure 57. Replication When DR is Activated

1. Source volume (down)
2. Replication over Fibre Channel or iSCSI (down)
3. Destination volume (activated)
4.
original is placed in the recycle bin so that it can be retrieved if necessary. During this time, the activated DR volume continues to accept IO.

Figure 59. Activated DR Volume Replicating Back to the Source Site

1. Source volume being recovered
2. Replication over Fibre Channel or iSCSI
3. Destination volume (activated)
4. Server mapping to activated DR volume
5. Server at source site (not mapped)
6.
Step 5C: The Source Volume is Activated
Storage Manager prompts the administrator to deactivate and unmap the destination volume. The source volume resumes replicating to the destination volume, and the source volume is activated and mapped to the server at the source site.

Figure 61. Recovered Source Volume is Activated

1. Recovered and activated source volume
2. Replication over Fibre Channel or iSCSI
3. Destination volume (deactivated)
4.
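The DR walk-through above follows a fixed sequence, which can be sketched as a tiny state machine. The state names here are our own labels for the documented steps, not terminology from Storage Manager.

```python
# Hypothetical model of the DR lifecycle: normal replication, source failure,
# DR activation at the recovery site, restore back to the source, and the
# source volume being reactivated.
TRANSITIONS = {
    "replicating":      {"source_failed"},
    "source_failed":    {"dr_activated"},      # administrator activates DR
    "dr_activated":     {"restoring"},         # Restore/Restart DR Volumes begins
    "restoring":        {"source_activated"},  # destination deactivated, source mapped
    "source_activated": {"replicating"},       # normal replication resumes
}

def advance(state: str, next_state: str) -> str:
    """Move to next_state if the documented sequence allows it."""
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state!r} to {next_state!r}")
    return next_state

state = "replicating"
for step in ("source_failed", "dr_activated", "restoring", "source_activated"):
    state = advance(state, step)
```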
Save Replication Restore Points for One or More Storage Centers Save replication restore points after creating replications or Live Volumes. Storage Manager automatically saves restore points for replications and Live Volumes. Steps 1. Click the Replications & Live Volumes view. 2. In the Actions pane, click Save Restore Points. The Save Restore Points dialog box opens. 3. Select the check boxes for Storage Centers for which you want to save restore points, then click OK.
Predefining Disaster Recovery Settings for Replications Predefining DR for a replication restore point is an optional step that configures DR activation settings for a replication restore point ahead of time, so that the DR site is ready if the destination volume needs to be activated. If you do not intend to access data from a destination site, you do not need to predefine DR settings. DR settings cannot be predefined for Live Volume restore points.
Steps 1. Click the Replications & Live Volumes view. 2. Click the Restore Points tab, then click Test Activate Disaster Recovery. The Test Activate Disaster Recovery wizard appears. 3. Select the source/destination Storage Center pair for which you want to test-activate DR, then click Next. The wizard advances to the next page. 4. In the Available Restore Points pane, select the restore points that you want to test, then click Next. The wizard advances to the next page. 5.
3. Right-click the restore point, then select Test Activate Disaster Recovery. The Test Activate Disaster Recovery dialog box appears. If the restore point corresponds to a synchronous replication, the dialog box displays additional information about the state of the replication:
• The Sync Data Status field displays the synchronization status for the replication at the time the restore point was validated.
Activating Disaster Recovery
Activate DR when a volume or site becomes unavailable. When DR is activated, a view volume of the original destination volume (replication) or secondary volume (Live Volume) is brought online and mapped to a server at the DR site. Before DR can be activated for a volume, at least one snapshot must have been replicated to the DR site.
Activate Disaster Recovery for Multiple Restore Points If a pair of Storage Centers host multiple replications and/or Live Volumes, disaster recovery can be activated for all of the corresponding restore points simultaneously. Prerequisites Save and validate restore points. Steps 1. Click the Replications & Live Volumes view. 2. Click the Restore Points tab, then click Activate Disaster Recovery. The Activate Disaster Recovery wizard appears. 3.
c) Select a server to map the recovery volume to by clicking Change next to the Server label. • A server is required for each restore point. • Click Advanced Mapping to configure LUN settings, restrict mapping paths, or present the volume as read-only. • This option is not available if the Preserve Live Volume check box is selected. d) Choose which snapshot will be used for the activated volume.
• If a replication is managed by the Live Volume, it moves to follow the newly promoted primary volume.
• Fewer volume settings are available because the existing Live Volume settings are used.
If Preserve Live Volume is not selected, Storage Manager deletes the Live Volume, creates a view volume, and maps it to a server. During the restore process, the Live Volume is recreated. If a replication is managed by the Live Volume, the managed replication is removed later during the restore process. 5.
Activating Disaster Recovery for PS Series Group Replications
After replicating a volume from a Storage Center to a PS Group, the destination volume must be activated on the destination PS Group. After it is activated, it can be mapped to a server.
Prerequisites
• The source volume must have at least one snapshot.
• Both storage systems must be managed by the Data Collector.
About this task NOTE: Activating the destination volume is not required for PS Group to Storage Center replications.
a) Select the restore point that you want to modify, then click Edit Settings. The Restore/Restart DR Volumes dialog box appears.
b) Modify the replication settings as needed, then click OK. These settings are described in the online help.
7. When you are done, click Finish.
• Storage Manager restarts the replications.
• Use the Recovery Progress tab to monitor the recovery.
Restoring a Live Volume and a Managed Replication After a failover of a Live Volume with a Managed Replication, Storage Manager creates a new managed replication for the secondary Live Volume. When the original primary Live Volume system is brought back online and the Live Volume is not restored, there will be two managed replications for the Live Volume.
Restore a Failed Volume for a Single Restore Point If a single volume failed, you can use the corresponding restore point to restore the volume. Steps 1. Click the Replications & Live Volumes view. 2. Click the Restore Points tab. 3. Right-click the restore point that corresponds to the failed volume, then select Restore/Restart DR Volumes. The Restore/Restart DR Volumes dialog box appears. 4. (Storage Center 6.5 and later, Live Volume only) Choose a recovery method.
16 Remote Data Collector Remote Data Collector Management The Storage Manager Client can connect to the primary Data Collector or the remote Data Collector. In the event that the primary Data Collector is unavailable and you need to access Storage Manager disaster recovery options, use the Storage Manager Client to connect to the remote Data Collector.
Storage Manager Virtual Appliance Requirements
The Storage Manager Virtual Appliance has the following requirements:
• VMware ESXi host: version 6.0 and later
• VMware vCenter Server: version 6.0 and later
• Datastore size: 55 GB
• CPU: 64-bit (x64) microprocessor with two or more cores. The Data Collector requires a microprocessor with four cores for environments that have 100,000 or more Active Directory members or groups.
c) Click OK.
7. Click Next. The Data Collector page is displayed.
8. Select the Configure as Remote Data Collector radio button.
a) Type the host name or IP address of the Primary Data Collector in the Server field.
b) Type the web server service port number of the Primary Data Collector in the Web Server Service Port field.
c) Type the user name of the administrator user on the Primary Data Collector in the User Name field.
d)
12. Click Next. The Review details page is displayed. 13. Confirm the details for the Storage Manager Virtual Appliance and click Next. The License agreements page is displayed. 14. Select the I accept all license agreements checkbox. 15. Click Next. The Configuration page is displayed. 16. Select the size of the Storage Manager Virtual Appliance deployment configuration.
2. Log in to Storage Manager using the following temporary user:
• User name: config
• Password: dell
The Getting Started page of the Data Collector Initial Setup wizard is displayed.
3. Click Next.
4. Select the Configure as Remote Data Collector radio button.
5. Specify the following information about the Primary Data Collector:
a) Type the hostname or IP address of the Primary Data Collector in the Server field.
Reconnect a Remote Data Collector to a Storage Center If the remote Data Collector loses connectivity to a Storage Center, make sure that the remote Data Collector is using the correct host name or IP address for the Storage Center. Steps 1. Use the Storage Manager Client to connect to the remote Data Collector. 2. On the Primary Data Collector tab, locate the down Storage Center, then click Reconnect to Storage Center. The Reconnect to Storage Center dialog box appears. 3.
Figure 66. Storage Manager Client Welcome Screen

The login screen appears.
3. Complete the following fields:
• User Name – Type the name of a Storage Manager user.
• Password – Type the password for the user.
• Host/IP – Type the host name or IP address of the server that is hosting the remote Data Collector.
• Web Server Port – If you changed the Web Server Port during installation, type the updated port number.
4. Click Log In.
b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/
c) Press Enter. The Unisphere Central login page is displayed.
d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password fields.
e) Click Log In.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
3.
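The address format shown in the login step above can be assembled programmatically, for example when scripting a health check against several Data Collectors. Port 3033 is the default given in the text; the helper name is our own.

```python
def unisphere_url(host: str, port: int = 3033) -> str:
    """Build the Unisphere Central address in the documented format:
    https://data_collector_host_name_or_IP_address:3033/"""
    return f"https://{host}:{port}/"

unisphere_url("dc1.example.com")        # 'https://dc1.example.com:3033/'
unisphere_url("10.0.0.5", port=8443)    # non-default port example
```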
Use a Remote Data Collector to Test Activate Disaster Recovery
Testing disaster recovery functions the same way for primary and remote Data Collectors.
Steps 1. Use the Storage Manager Client to connect to the remote Data Collector. 2. Click the Restore Points tab. 3. Click Test Activate Disaster Recovery.

Related concepts
Test Activating Disaster Recovery

Use a Remote Data Collector to Activate Disaster Recovery
Activating disaster recovery functions the same way for primary and remote Data Collectors.
17 Storage Replication Adapter for VMware SRM Where to Find Dell SRA Deployment Instructions This chapter provides overview information about using SRM on Storage Centers through Storage Manager and the Dell Storage Replication Adapter (SRA). For complete information on installing and configuring VMware vCenter Site Recovery Manager, including downloading and installing Storage Replication Adapters, refer to the SRM documentation provided by VMware.
Requirement and description:

Storage Center Configuration
• VMware vSphere server objects must be created on both the source and destination Storage Centers.
• Replication QoS Nodes must be defined on the source and destination Storage Centers.

Storage Manager Users
Three users are required:
• To install SRM – A Storage Manager user that can access all Storage Centers at the protected and recovery sites.
Figure 68. SRA Configuration with a Single Data Collector

1. Protected site
2. Recovery site
3. VMware SRM server at protected site
4. VMware SRM server at recovery site
5. Primary Data Collector at recovery site
6. Storage Center at protected site
7. Storage Center at recovery site

In a configuration with only one Storage Manager Data Collector, locate the Data Collector at the Recovery Site.
In a configuration with a Storage Manager Remote Data Collector, locate the Remote Data Collector on the Recovery Site. This configuration allows DR activation from the remote site when the Protected Site goes down. By design, the Storage Manager Remote Data Collector is connected to the same Storage Centers as the Storage Manager Primary Data Collector. Selecting the Snapshot Type to Use for SRM 5.x and 6.
18 Threshold Alerts Configuring Threshold Definitions Threshold definitions monitor the usage metrics of storage objects and generate alerts if the user-defined thresholds are crossed. The types of usage metrics that can be monitored are I/O usage, storage, and replication. Storage Manager collects the usage metric data from managed Storage Centers. By default, Storage Manager collects I/O usage and replication metric data every 15 minutes and storage usage metric data daily at 12 AM.
7. Select the type of usage metric to monitor from the Alert Definition drop-down menu. 8. (Optional) To assign the threshold definition to all of the storage objects that are of the type specified in the Alert Object Type, select the All Objects check box. The All Objects setting cannot be modified after the threshold definition is created. 9.
Edit an Existing Threshold Definition Edit a threshold definition to change the name, notification settings, or schedule settings. Steps 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Select the threshold definition to edit and click Edit Settings in the bottom pane. The Edit Threshold Definition dialog box opens. 4. To change the name of the threshold definition, enter a new name in the Name field. 5.
Assigning Storage Objects to Threshold Definitions You can add or remove the storage objects that are monitored by threshold definitions. Assign Storage Objects to a Threshold Definition Add storage objects to a threshold definition to monitor the storage objects. About this task Storage objects cannot be added to a threshold definition that has the All Objects checkbox selected. Steps 1. Click Threshold Alerts in the view pane to display the Threshold Alerts window. 2. Click the Definitions tab. 3.
• Remote Storage Centers – Select the remote Storage Center for which to display the assigned threshold definitions, and click the Threshold Alerts tab in the right pane.
• Disks – Select the disk for which to display the assigned threshold definitions, and click the Threshold Alerts tab in the right pane.
• Storage Profiles – Select the storage profile for which to display the assigned threshold definitions, and click the Threshold Alerts tab in the right pane.
5.
Related tasks
Setting Up Threshold Definitions

Viewing Threshold Alerts for Threshold Definitions
Use the Definitions tab to view the current threshold alerts and historical threshold alerts for a threshold definition.

View the Current Threshold Alerts for a Threshold Definition
When a threshold definition is selected on the Definitions tab, the Current Threshold Alerts subtab displays the active alerts for the definition. Steps 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3.
Filter Threshold Alerts by Storage Center
By default, alerts are displayed for all managed Storage Centers. Steps 1. Click the Threshold Alerts view. 2. Click the Alerts tab. 3. Use the Filters pane to filter threshold alerts by Storage Center.
• To hide threshold alerts for a single Storage Center, clear the checkbox for the Storage Center.
• To display threshold alerts for a Storage Center that is deselected, select the checkbox for the Storage Center.
5. Right-click on the selected alerts and select Delete. The Delete Alerts dialog box opens. 6. Click OK. Configuring Volume Advisor Movement Recommendations Volume Advisor can recommend moving a volume to a different Storage Center to improve performance and/or alleviate high storage usage for a Storage Center. Volume Advisor is configured using threshold definitions, which generate recommendations along with threshold alerts when error thresholds are exceeded.
Requirement and description:

Candidate Storage Center configuration
• The Storage Center must have a server object that matches the server to which the original volume is mapped.
• The Storage Center must be less than 80% full when including the size of the volume to be moved.
• The combined original volume IO/sec and Storage Center front-end IO/sec must be below a predefined threshold.
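The candidate requirements above translate naturally into a filter. This sketch uses plain dictionaries with field names we invented for illustration (not the product's data model), and `io_threshold` stands in for the predefined IO limit, whose actual value the document does not give.

```python
def is_candidate(sc: dict, volume: dict, io_threshold: int = 100_000) -> bool:
    """Check the documented Volume Advisor candidate conditions.

    sc:     {"servers": set of server names, "used_bytes": int,
             "capacity_bytes": int, "front_end_iops": int}   (assumed fields)
    volume: {"server": str, "size_bytes": int, "iops": int}  (assumed fields)
    """
    # The Storage Center must have a server object matching the mapped server.
    if volume["server"] not in sc["servers"]:
        return False
    # Must be less than 80% full including the size of the volume to be moved.
    projected_used = sc["used_bytes"] + volume["size_bytes"]
    if projected_used / sc["capacity_bytes"] >= 0.80:
        return False
    # Combined volume IO/sec and front-end IO/sec must stay below the threshold.
    return sc["front_end_iops"] + volume["iops"] < io_threshold

sc = {"servers": {"esx1"}, "used_bytes": 60, "capacity_bytes": 100,
      "front_end_iops": 40_000}
vol = {"server": "esx1", "size_bytes": 10, "iops": 5_000}
is_candidate(sc, vol)  # True: 70% full after the move, 45,000 combined IO/sec
```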
Figure 71. Recommended Storage Center Dialog Box Creating Threshold Definitions to Recommend Volume Movement Create a threshold definition to recommend volume movement based on the rate of Storage Center front-end IO, volume latency, Storage Center controller CPU usage, or percentage of storage used for a Storage Center.
Create a Threshold Definition to Monitor Latency for a Volume When latency for a volume exceeds the value set for the error threshold, Storage Manager triggers a threshold alert with a volume movement recommendation. Steps 1. Click the Threshold Alerts view. 2. Click the Definitions tab. 3. Click Create Threshold Definition. The Create Threshold Definition dialog box appears. 4. In the Name field, type a name for the threshold definition. 5. Configure the threshold definition to monitor volume latency.
a) In the Error Setting field, type the CPU usage percentage that must be exceeded. b) Next to the Error Setting field, in the Iterations before email field, type the number of times the threshold must be exceeded to trigger the alert. 8. Select the Recommend Storage Center check box. 9. Configure the other options as needed. These options are described in the online help. 10. When you are finished, click OK.
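The Iterations before email setting described in these steps can be sketched as follows. We assume the iterations must be consecutive collection intervals; the product may count them differently, so treat this as an illustration of the idea rather than its exact behavior.

```python
def alert_triggered(samples, error_threshold, iterations_before_email):
    """Return True once the error threshold has been exceeded on the required
    number of consecutive samples (an assumption about how the iteration
    count is applied)."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > error_threshold else 0
        if streak >= iterations_before_email:
            return True
    return False

# CPU usage samples (%); threshold 90, three iterations before the alert fires.
alert_triggered([70, 95, 96, 97], error_threshold=90, iterations_before_email=3)  # True
```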
Center front-end IO, Storage Center controller CPU usage, or the percentage of storage used for a Storage Center, move the volume(s) manually. Steps 1. Click the Threshold Alerts view. 2. Click the Alerts tab. 3. In the Current Threshold Alerts pane, locate the threshold alert that contains the volume movement recommendation. Alerts that contain recommendations display Yes in the Recommend column. 4. Right-click the threshold alert, then select Recommend Storage Center.
Steps 1. In the Recommend Storage Center dialog box, click Live Migrate the volume to the recommended Storage Center. The Create Live Migration dialog box opens. 2. (Optional) Modify Live Migration default settings.
• In the Replication Attributes area, configure options that determine how replication behaves.
• In the Destination Volume Attributes area, configure storage options for the destination volume and map the destination volume to a server.
Configuring Email Notifications for Threshold Alerts
Storage Manager can be configured to send an email notification when a threshold alert is exceeded. To receive email notifications for threshold alerts: 1. Configure the SMTP server settings on the Data Collector. 2. Add an email address to your user account. 3. Configure your user account settings to send an email notification when a threshold alert is exceeded. NOTE: Storage Manager can send only one threshold alert email for every 24-hour period.
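The one-email-per-24-hours limit noted above amounts to a simple rate check. How Storage Manager enforces it internally is not documented here; this elapsed-time check is just one plausible sketch.

```python
def may_send_alert_email(hours_since_last_email, period_hours: int = 24) -> bool:
    """Allow at most one threshold-alert email per period.

    hours_since_last_email is None when no email has been sent yet.
    Illustrative assumption, not the product's implementation."""
    return hours_since_last_email is None or hours_since_last_email >= period_hours

may_send_alert_email(None)  # True: nothing sent yet
may_send_alert_email(3)     # False: last email was only 3 hours ago
```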
4. To send a test message to the email address, click Test Email and click OK. Verify that the test message is sent to the specified email address. 5. Click OK.

Related tasks
Configure SMTP Server Settings

Configure Email Notification Settings for Your User Account
Make sure that Storage Manager is configured to send email notifications to your account for the events that you want to monitor.
Prerequisites
• The SMTP server settings must be configured for the Data Collector. If these settings are not configured, the Data Collector is not able to send emails.
• An email address must be configured for your user account.
Create a Threshold Query Create a threshold query to test threshold definition settings against historical data. New queries can be run immediately or saved for future use. Steps 1. Click the Threshold Alerts view. 2. Click the Queries tab. 3. Perform the following steps in the Save Query Filter Values pane: a) Click New. If the New button is grayed out, skip to step b. b) Type a name for the query in the Name field. c) To make the query available to other Storage Manager users, select the Public checkbox.
6. Click Save Results. The Save Results dialog box opens. 7. Select the radio button for the type of file to export. 8. Click Browse to specify the file name and location to save the file. 9. Click OK.

Related tasks
Create a Threshold Definition
Create a Threshold Query

Edit a Saved Threshold Query
Modify a saved threshold query if you want to change the filter settings. Steps 1. Click the Threshold Alerts view. 2. Click the Queries tab.
19 Storage Center Reports Chargeback Reports The information displayed in a Chargeback report includes a sum of charges to each department and the cost/storage savings realized by using a Storage Center as compared to a legacy SAN. The Chargeback reports are in PDF format and present the same data that can be viewed on the Chargeback view. The following tabs are available for Chargeback reports: • • Chargeback: Displays the sum of all charges to each department for the selected Chargeback run.
Displaying Reports The Reports view can display Storage Center Automated reports and Chargeback reports. View a Storage Center Automated Report The content of Storage Center reports are configured in the Data Collector automated reports settings. Steps 1. Click the Reports view. The Automated Reports tab is displayed. 2. To display reports for an individual Storage Center, click the plus sign (+) next to the Storage Center in the Reports pane.
Figure 72. Chargeback Reports

3. Select the report to view in the Reports pane, or double-click the report to view it in the Automated Reports tab.

Related concepts
Chargeback Reports
Configuring Automated Report Generation
Viewing Chargeback Runs

Working with Reports
You can update the list of reports and use the report options to navigate, print, save, and delete reports.

Update the List of Reports
Update the list of reports to display new reports that were automatically or manually generated. Steps 1.
Print a Report
Perform the following steps to print a report:
Steps 1. Click the Reports view. 2. Select the report to print from the Reports pane. 3. Click (Print). The Print dialog box opens. 4. Select the printer to use from the Name drop-down menu. NOTE: For best results, print reports using the Landscape orientation. 5. Click OK.

Save a Report
Perform the following steps to save a report:
Steps 1. Click the Reports view. 2. Select the report to save from the Reports pane. 3. Click (Save).
Set Up Automated Reports for All Storage Centers
Configure automated report settings on the Data Collector if you want to use the same report settings for all managed Storage Centers. Configure the global settings first, and then customize report settings for individual Storage Centers as needed. Steps 1. In the top pane of Storage Manager, click Edit Data Collector Settings. The Edit Data Collector Settings page is displayed. 2. Click the Automated Reports tab. 3.
7. Set the Automated Report Options a) To export the reports to a public directory, select the Store report in public directory checkbox and enter the full path to the directory in the Directory field. NOTE: The directory must be located on the same server as the Data Collector. NOTE: Automated reports cannot be saved to a public directory when using a Virtual Appliance.
Steps 1. Configure the SMTP server settings for the Data Collector. 2. Add an email address to your user account. 3. Configure email notification settings for your user account. Configure SMTP Server Settings The SMTP server settings must be configured to allow Storage Manager to send notification emails. Steps 1. Connect to the Data Collector. a) Open a web browser.
Configure Email Notification Settings for Your User Account Make sure that Storage Manager is configured to send email notifications to your account for the events that you want to monitor. Prerequisites • • The SMTP server settings must be configured for the Data Collector. If these settings are not configured, the Data Collector is not able to send emails. An email address must be configured for your user account. Steps 1. In the top pane of the Storage Manager Client, click Edit User Settings.
20 Storage Center Chargeback Configure Chargeback or Modify Chargeback Settings The Chargeback settings specify how to charge for storage consumption, how to assign base storage costs, and how to generate reports. During the initial setup of Chargeback settings, the Default Department drop-down menu is empty because the departments do not exist yet. Steps 1. Click the Chargeback view. 2. Click Edit Chargeback Settings in the Actions pane. The Edit Chargeback Settings wizard appears. Figure 73.
For example, if the selected location is United States, the currency unit is dollars ($). NOTE: If selecting a location causes characters to be displayed incorrectly, download the appropriate Windows language pack. 9. To specify a department that unassigned volumes will be assigned to when Chargeback is run, select the Use Default Department check box and select the department from the Default Department drop-down menu. 10.
Assign Storage Costs for Storage Center Disk Tiers If the Edit Chargeback Settings wizard displays this page, assign storage cost for each Storage Center disk tier. Steps 1. For each storage tier, select the unit of storage on which to base the storage cost from the per drop-down menu. 2. For each storage tier, enter an amount to charge per unit of storage in the Cost field. Figure 75. Storage Costs Per Storage Center Disk Tiers 3. Click Finish to save the Chargeback settings.
Figure 76. Add Department Dialog Box

5. Enter the name of the department in the Name field.
6. Enter the base price for storage in the Base Price field.
7. Enter the percentage to apply to the global cost of storage in the Multiplier Percent field.
• To apply a discount to the cost of storage, enter the percentage by which to decrease the global cost and select Discount from the drop-down menu.
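The Multiplier Percent adjustment described in step 7 can be sketched numerically. The text above only shows the Discount case; we assume a "Premium" counterpart that increases the cost by the same mechanism, so treat that branch as an illustrative assumption.

```python
def department_rate(global_cost: float, multiplier_percent: float, adjustment: str) -> float:
    """Apply a department's Multiplier Percent to the global storage cost.

    adjustment "Discount" decreases the global cost by the percentage;
    "Premium" (assumed counterpart) increases it by the percentage.
    """
    factor = multiplier_percent / 100.0
    if adjustment == "Discount":
        return global_cost * (1 - factor)
    if adjustment == "Premium":
        return global_cost * (1 + factor)
    raise ValueError(f"unknown adjustment: {adjustment}")

department_rate(2.00, 10, "Discount")  # a 10% discount on a $2.00/unit cost
department_rate(2.00, 25, "Premium")   # a 25% premium on the same cost
```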
Managing Department Line Items You can add, edit, or remove line-item expenses. Add a Department Line Item A line item is a fixed cost that is not tied to storage usage. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department to which you want to add the line item from the list of departments on the Chargeback pane. Information about the selected department appears on the Department tab. 4. Click Add Line Item. The Add Line Item dialog box appears. Figure 77.
Delete a Department Line Item Delete a line item if you no longer want to charge the department for it. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department that contains the line item that you want to delete from the list of departments on the Chargeback pane. 4. Select the line item you want to delete from the Department Line Items pane. 5. Click Delete or right-click on the line item and select Delete. The Delete Objects dialog box appears. 6.
Assign Volume Folders to a Department in the Chargeback View Use the Chargeback view to assign multiple volume folders to a department simultaneously. Steps 1. Click the Chargeback view. 2. Click the Departments tab. 3. Select the department to which you want to assign the volume folder from the list of departments on the Chargeback pane. Information about the selected department appears on the Department tab. 4. Click Add Volume Folders. The Add Volume Folders dialog box appears. Figure 80.
Assign a Volume/Volume Folder to a Department in the Storage View Use the Storage view to assign volumes and volume folders to a department one at a time. Steps 1. Click the Storage view. 2. In the Storage pane, select a Storage Center. 3. Click the Storage tab. 4. In the Storage tab navigation pane, select the volume or volume folder. 5. In the right pane, click Edit Settings. A dialog box appears. 6. Next to Chargeback Department, click Change. The Add Chargeback Department dialog box appears. 7.
Storage Manager performs the Chargeback run and creates a Manual Run entry in the Runs folder on the Chargeback pane. Viewing Chargeback Runs Use the Chargeback Runs tab in the Chargeback view to view scheduled and manual Chargeback runs. Each Chargeback run is displayed in the Chargeback pane. The Chargeback run names indicate the type of Chargeback run (Manual Run, Day Ending, Week Ending, Month Ending, or Quarter 1–4 Ending) and the date of the run.
View Cost and Storage Savings Realized by Using Data Instant Snapshots for a Chargeback Run The Data Instant Snapshot Savings subtab shows the estimated cost and storage space savings realized by using a Storage Center with Data Instant Snapshots as compared to legacy SAN point-in-time copies. These savings are achieved because Data Instant Snapshots allocates space for a snapshot only when data is written and saves only the delta between snapshots; a legacy SAN allocates space for every point-in-time copy.
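The savings arise because only changed data is stored per snapshot. A rough illustration of the comparison (the function and numbers are hypothetical, not the exact formula Storage Manager uses):

```python
def snapshot_savings_gb(volume_gb, delta_gb_per_snapshot):
    """Compare legacy full point-in-time copies against delta-based snapshots."""
    legacy = volume_gb * len(delta_gb_per_snapshot)   # a full copy per point in time
    snapshots = sum(delta_gb_per_snapshot)            # only the changed data is stored
    return legacy - snapshots

# A 1024 GB volume with three snapshots that each changed about 20 GB:
print(snapshot_savings_gb(1024, [20, 18, 22]))  # 3012
```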
Save the Chart as a PNG Image Save the chart as an image if you want to use it elsewhere, such as in a document or an email. Steps 1. Right-click the chart and select Save As. The Save dialog box appears. 2. Select a location to save the image and enter a name for the image in the File name field. 3. Click Save to save the chart. Print the Chart Print the chart if you want a paper copy. Steps 1. Right-click the chart and select Print. The Page Setup dialog box appears. 2.
8. Click OK.
21 Storage Center Monitoring Storage Alerts Alerts represent current issues on the storage system; they clear automatically when the condition that caused them is corrected. Indications warn you about a condition on the storage system that might require direct user intervention to correct. Status Levels for Alerts and Indications Status levels indicate the severity of storage system alerts and indications. Table 22.
Figure 82. Alerts Tab Display Storage Alerts on the Monitoring View Alerts for managed storage systems can be displayed on the Storage Alerts tab. Steps 1. Click the Monitoring view. 2. Click the Storage Alerts tab. 3. Select the check boxes of the storage systems to display and clear the check boxes of the storage systems to hide. The Storage Alerts tab displays alerts for the selected storage systems. 4. To display indications, select the Show Indications check box. 5.
Select the Date Range of Storage Alerts to Display You can view storage alerts for the last day, last 3 days, last 5 days, or last week, or specify a custom time period. Steps 1. Click the Monitoring view. 2. Click the Storage Alerts tab. 3. Select the date range of the storage alerts to display by clicking one of the following: • Last Day: Displays the past 24 hours of storage alerts. • Last 3 Days: Displays the past 72 hours of storage alerts.
2. Click the Storage Alerts tab. 3. Select the Storage Center alerts to acknowledge, then click Acknowledge. The Acknowledge Alert dialog box opens. NOTE: The option to acknowledge an alert will not appear if the alert has already been acknowledged. 4. Click OK to acknowledge the Storage Center alerts displayed in the Acknowledge Alert dialog box.
Event Name Description SMI-S Server Error Error installing, starting, or running the SMI-S server Storage Center Down A Storage Center is no longer able to communicate with the Data Collector Threshold Alerts One or more Threshold Alerts have been triggered Viewing Data Collector Events Use the Events tab in the Monitoring view to display events collected by the Data Collector. About this task Figure 83.
• To hide events for all of the Storage Centers, click Unselect All. • To display events for all of the Storage Centers, click Select All. 4. Use the PS Groups pane to filter alerts by PS Series group. • To hide events for a single PS Series group, clear the check box for the group. • To display events for a PS Series group that is deselected, select the check box for the group. • To hide events for all of the PS Series groups, click Unselect All.
NOTE: By default, when a search reaches the bottom of the list and Find Next is clicked, the search wraps around to the first match in the list. When a search reaches the top of the list and Find Previous is clicked, the search wraps around to the last match in the list. Configuring Email Alerts for Storage Manager Events Storage Manager can be configured to send automated reports when monitored events occur. About this task To configure Storage Manager to send automated reports by email: Steps 1.
2. Type an email address for the user account in the Email Address field. 3. Select the format for emails from the Email Format drop-down menu. 4. To send a test message to the email address, click Test Email and click OK. Verify that the test message is sent to the specified email address. 5. Click OK.
Figure 84. Send Logs to Data Collector Send Storage Center Logs to the Data Collector Modify the Storage Center to forward logs to Storage Manager. Prerequisites • UDP port 514 must be open on the Storage Manager Data Collector server to receive logs from Storage Centers. • The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Click the Storage view. 2.
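Before changing the Storage Center settings, you may want to confirm that a UDP datagram can reach the Data Collector server on port 514. A minimal sketch using Python's standard library (the host name is a placeholder; UDP gives no delivery confirmation, so confirm arrival in the Storage Logs tab or with a packet capture on the server):

```python
import socket

def send_test_syslog(host, port=514, message="Storage Manager syslog test"):
    """Send one syslog-style datagram over UDP.

    <134> encodes facility local0 (16) and severity informational (6):
    16 * 8 + 6 = 134.
    """
    payload = f"<134>{message}".encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

# send_test_syslog("data-collector.example.local")  # placeholder host name
```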
Send Storage Center Logs to the Data Collector and a Syslog Server If you want to send the logs to the Data Collector and one or more syslog servers, configure the Data Collector to forward the log messages to the appropriate servers. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with the Administrator privilege. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center for which you want to configure alert forwarding. 3.
Apply Log Settings to Multiple Storage Centers Log settings that are assigned to a single Storage Center can be applied to other Storage Centers. Prerequisites The Storage Center must be added to Storage Manager using a Storage Center user with Administrator privileges. Steps 1. Click the Storage view. 2. In the Storage pane, select the Storage Center that has the log settings you want to apply to other Storage Centers. 3. In the Summary tab, click Edit Settings. The Edit Settings dialog box appears. 4.
Filter Storage Logs by Storage System By default, storage logs are displayed for all managed storage systems. Steps 1. Click the Monitoring view. 2. Click the Storage Logs tab. 3. Use the Storage Centers pane to filter logs by Storage Center. • To hide logs for a single Storage Center, clear the check box for the Storage Center. • To display logs for a Storage Center that is deselected, select the check box for the Storage Center. • To hide logs for all of the Storage Centers, click Unselect All.
Search for Events in the Storage Logs Use the Search field to search the list of log events. Steps 1. Click the Monitoring view. 2. Click the Storage Logs tab. 3. Enter the text to search for in the Search field. 4. To make the search case sensitive, select the Match Case check box. 5. To prevent the search from wrapping, clear the Wrap check box. 6. To only match whole words or phrases within the logs, select the Full Match check box. 7.
3. Select the check boxes of the PS Series groups to display and clear the check boxes of the PS Series groups to hide. The Audit Logs tab displays user account activity for the PS Series groups. 4. To refresh the log data for the selected PS Series groups, click Refresh on the Audit Logs tab. Filter Audit Logs by PS Series Group By default, audit logs are displayed for all managed PS Series groups. Steps 1. Click the Monitoring view. 2. Click the Audit Logs tab. 3.
6. To only match whole words or phrases within the audit logs, select the Full Match check box. 7. To highlight all of the matches of the search, select the Highlight check box. 8. Click Find Next or Find Previous to search for the text. If a match is found, the first log entry with matching text is selected from the list of audit logs. If a match is not found, an Error dialog box appears and it displays the text that could not be found.
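The wrap, case, and match behavior described for log searches can be modeled as follows (a sketch of the search semantics, not Storage Manager code; Full Match is simplified here to comparing the whole entry, whereas the UI matches whole words or phrases):

```python
def find_next(entries, query, start=-1, wrap=True, match_case=False, full_match=False):
    """Return the index of the next entry matching query, wrapping past the
    end of the list when wrap is True; None if nothing matches."""
    def matches(text):
        a, b = (text, query) if match_case else (text.lower(), query.lower())
        return a == b if full_match else b in a

    n = len(entries)
    candidates = range(start + 1, start + 1 + n) if wrap else range(start + 1, n)
    for i in candidates:
        if matches(entries[i % n]):
            return i % n
    return None

logs = ["Disk alert", "Login failed", "disk rebuild"]
print(find_next(logs, "disk", start=0))              # 2
print(find_next(logs, "disk", start=2))              # 0 (wrapped to the top)
print(find_next(logs, "disk", start=2, wrap=False))  # None
```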
Configure Data Collection Schedules Configure the interval at which the Data Collector collects monitoring data from Storage Centers. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed.
22 Data Collector Management The Storage Manager Data Collector is a service that collects reporting data and alerts from managed Storage Centers. When you access the Data Collector using a web browser, the Data Collector management program Unisphere Central for SC Series opens. Unisphere Central manages most functions of the Data Collector service.
a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2.
6. Click Edit. The Edit Advanced Settings dialog box opens. 7. Type the maximum amount of memory to allocate to the Data Collector in the Maximum Server Memory Usage box. 8. Click OK. The Data Collector Restart dialog box opens. 9. Click Yes. The Data Collector service stops and restarts. Set the Maximum Memory for a Data Collector on a Virtual Appliance Use the Edit Settings dialog box in the vSphere Web Client to set the maximum amount of memory to allocate to a Data Collector on a Virtual Appliance.
Configure a Custom SSL Certificate Configure a custom SSL certificate to avoid certificate errors when connecting to the Data Collector website. An SSL certificate is also required to communicate with a directory service using LDAP with the StartTLS extension or the LDAPS protocol. Prerequisites • The custom certificate must be signed by a Certificate Authority (CA) that is trusted by the hosts in your network.
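Once the certificate is installed, you can verify from a client host that the Data Collector presents a certificate chaining to a trusted CA. A sketch using Python's standard library (the helper name is illustrative; the call raises ssl.SSLError, such as CERTIFICATE_VERIFY_FAILED, if the chain is not trusted):

```python
import socket
import ssl

def check_data_collector_cert(host, port=3033):
    """Open a TLS connection to the Data Collector web port and validate the
    presented certificate against this host's system trust store."""
    context = ssl.create_default_context()  # verifies chain and host name
    with socket.create_connection((host, port), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.getpeercert()  # parsed subject, issuer, validity dates
```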
c) Press Enter. The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the General tab, and then click the Security subtab. 5.
Change Data Collector Data Source Change the data source if you want to use a different database to store Storage Manager data. About this task The Change Data Source option re-configures an existing primary Data Collector to use a new database. CAUTION: To prevent data corruption, make sure that another Data Collector is not using the new database. Steps 1. Connect to the Data Collector. a) Open a web browser.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the General tab, and then click the Database subtab. 5. Click Change Connection. The Change Data Connection dialog box opens. 6. Type the host name or IP address of the database server in the Database Server field. 7. Type the port number of the database server in the Database Port field. 8.
https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Environment tab, and then click the Server Agent subtab. 5. Click Edit. The Server Agent dialog box opens. 6. Select the Periodically Update Usage Data checkbox. When selected, server usage data is updated every 30 minutes. 7.
The following table lists the available Storage Center reports related to volumes, servers, and disks: Report Type Description Automated Reports Generates a report for the following: • Storage Center Summary: Displays information about storage space and the number of storage objects on the Storage Center. • Disk Class: Displays information about storage space on each disk class. • Disk Power On Time: Displays information about how long each disk has been powered on. Automated Table Reports
Testing Automated Reports Settings You can manually generate reports to test the configured automated report settings without waiting for the reports to be generated automatically. By default, Storage Manager generates reports into a folder named for the day when the report was generated. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter.
c) To change how often storage usage data is collected, select a period of time from the Storage Usage drop-down menu. If Daily is selected from the Storage Usage drop-down menu, the time of the day that storage usage data is collected can be selected from the Storage Usage Time drop-down menu. d) To change the number of days after which an alert is expired, set the number of days in the Alert Lifetime field.
8. To modify the number of days after which a log file is expired, change the period of time in the Log File Lifetime field. 9. Click OK. Clear Debug Logs Clear the debug log files to delete all Storage Manager debug log files. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed.
Configuring Virtual Appliance Settings Use the Virtual Appliance tab to configure network, proxy server, and time settings for a Virtual Appliance. Configure Network Settings for a Virtual Appliance Use the Network Configuration dialog box to configure network settings and enable or disable SSH on the Virtual Appliance. Steps 1. Connect to the Data Collector. a) Open a web browser.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Virtual Appliance tab, and then click the Time subtab. 5. Click Edit. The Time Configuration dialog box opens. 6. Select a time zone for the Virtual Appliance from the Timezone drop-down menu. 7.
d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Users & System tab, and then select the Storage Centers subtab. 5. Select the Storage Center for which you want to clear all data. 6.
The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Users & System tab, then select the PS Groups subtab. 5. Select the PS Series group to delete. 6.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Users & System tab, then select the FluidFS Clusters subtab. 5. Select the FluidFS cluster to delete. 6. Click (Delete System). A confirmation dialog box is displayed. 7. Click Yes.
Configure Virtual Appliance Settings Use the Configuration menu in the Storage Manager Virtual Appliance CLI to change network and partition settings for the Storage Manager Virtual Appliance. Configure an NTP Server A network time protocol (NTP) server provides the time and date to the Storage Manager Virtual Appliance. Prerequisites The NTP server must be accessible from the Storage Manager Virtual Appliance. Steps 1.
8. To add a new DNS server, type the IP address of one or more DNS servers. If there are multiple IP addresses, separate them with a comma, and then press Enter. 9. Press 1 to confirm the changes and press Enter. 10. Press Enter to complete the configuration. Enable SSH for the Virtual Appliance Use the Storage Manager Virtual Appliance console to enable SSH communication with the Storage Manager Virtual Appliance. Steps 1.
The server expands the disk size. 6. Launch the console for the Storage Manager Virtual Appliance. 7. Log in to the Storage Manager Virtual Appliance. 8. Press 2 and Enter to display the Configuration menu. 9. Press 6 and Enter to resize a partition. 10. Select which partition to resize. • Press 1 and Enter to select the Data Collector partition. • Press 2 and Enter to select the database partition. The Storage Manager Virtual Appliance expands the partition to the available size of the disk.
View the Hosts Table Use the Storage Manager Virtual Appliance CLI to view the hosts table for the Storage Manager Virtual Appliance. About this task The hosts table shows network information for the Storage Manager Virtual Appliance. Steps 1. Using the VMware vSphere Client, launch the console for the Storage Manager Virtual Appliance. 2. Log in to the Storage Manager Virtual Appliance CLI. 3. Press 3 and Enter to display the Diagnostics menu. 4. Press 4 and Enter.
Uninstalling the Data Collector On the server that hosts the Data Collector, use the Windows Programs and Features control panel item to uninstall the Storage Manager Data Collector application. Deleting Old Data Collector Databases Delete the old Data Collector database if you have migrated the database to a different database server or if you have removed the Data Collector from your environment.
23 Storage Manager User Management Storage Manager User Privileges The Data Collector controls user access to Storage Manager functions and associated Storage Centers based on the privileges assigned to users: Reporter, Volume Manager, or Administrator. The following tables define Storage Manager user level privileges with the following categories. NOTE: Storage Manager user privileges and Storage Center user privileges share the same names but they are not the same.
Configuring an External Directory Service Before users can be authenticated with an external directory service, the Data Collector must be configured to use the directory service. Configure the Data Collector to Use a Directory Service Configure the Data Collector to use an Active Directory or OpenLDAP directory service. Prerequisites • An Active Directory or OpenLDAP directory service must be deployed in your network environment.
7. (Optional) Manually configure the directory service settings. a) From the Type drop-down menu, select Active Directory or OpenLDAP. b) In the Directory Servers field, type the fully qualified domain name (FQDN) of each directory server on a separate line. NOTE: To verify that the Data Collector can communicate with the specified directory server(s) using the selected protocol, click Test. c) In the Base DN field, type the base Distinguished Name for the LDAP server.
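For Active Directory, the Base DN is conventionally derived from the domain's DNS name. A small illustrative helper (the domain name is a placeholder; confirm the actual Base DN with your directory administrator):

```python
def base_dn_from_domain(domain):
    """Derive the conventional Base DN for a directory domain,
    e.g. 'storage.example.com' -> 'DC=storage,DC=example,DC=com'."""
    return ",".join(f"DC={label}" for label in domain.split("."))

print(base_dn_from_domain("storage.example.com"))  # DC=storage,DC=example,DC=com
```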
Scan for Domains in Local and Trusted Forests If domains are added or removed from the local forest, or if two-way forest trusts between the local forest and one or more remote forests are added or removed, use the Data Collector to scan for domains. Prerequisites The Data Collector must be configured to authenticate users with an Active Directory directory service and Kerberos. NOTE: Authentication attempts for Active Directory users may fail while a rescan operation is in progress. Steps 1.
3. Click Data Collector. The Data Collector view is displayed. 4. Click the Users & System tab and then select the Users & User Groups subtab. 5. Select the Storage Manager user group to which you want to add directory groups. 6. Click Add Directory User Groups. The Add Directory User Groups dialog box opens. 7. (Multi-domain environments only) From the Domain drop-down menu, select the domain that contains the directory groups to which you want to grant access. 8.
11. When you are finished, click OK. The Add Directory Users dialog box closes, and the directory users that are associated with the selected Storage Manager user group appear on the User Groups subtab. Related tasks Configure the Data Collector to Use a Directory Service Revoke Access for Directory Service Users and Groups To revoke access to Storage Manager for a directory service user or group, remove the directory group or user from Storage Manager user groups.
3. Click Data Collector. The Data Collector view is displayed. 4. Click the Users & System tab and then select the Users & User Groups subtab. 5. Click the User Groups tab. 6. Select the Storage Manager user group to which the directory group is added. 7. Click the Users subtab. 8. Select the directory service group user for which you want to revoke access, then click Delete User. The Delete Directory User dialog box opens. 9. Click Yes.
Create a User Create a user account to allow a person access to Storage Manager. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2.
6. Enter the email address of the user in the Email Address field. 7. Click OK. Change the Privileges Assigned to a User You can change the privileges for a user account by changing the user role. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed.
7. Click OK. Force the User to Change the Password You can force a user to change the password the next time he or she logs in. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed.
Set Storage Center Mappings for a Reporter User Storage Center mappings can be set only for users that have Reporter privileges. Users that have Administrator or Volume Manager privileges manage their own Storage Center mappings using Unisphere Central. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter.
Delete a Storage Center Mapping for a User Remove a Storage Center map from a user account to prevent the user from viewing and managing the Storage Center. Steps 1. Connect to the Data Collector. a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed.
7. Click Yes. Related tasks Configure Local Storage Manager User Password Requirements Managing Local User Password Requirements Manage the password expiration and complexity requirements for Unisphere from the Data Collector view. Configure Local Storage Manager User Password Requirements Set local user password requirements to increase the complexity of local user passwords and improve the security of Storage Manager. Steps 1. Connect to the Data Collector. a) Open a web browser.
2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Users & System tab, then select the Password Configuration subtab. 5. Click Edit. The Password Configuration dialog box opens. 6. Select the Storage Centers to which to apply the password requirements. 7. Click OK.
a) Open a web browser. b) Type the address of the Data Collector in the web browser using the following format: https://data_collector_host_name_or_IP_address:3033/ c) Press Enter. The Unisphere Central login page is displayed. d) Type the user name and password of a Data Collector user with Administrator privileges in the User Name and Password field. e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed.
Change the Preferred Language The preferred language for a Storage Manager user determines the language displayed in automated reports and email alerts from the Data Collector. Reports displayed in the UI and generated by a user request will not use the preferred language. Steps 1. In the top pane of the Storage Manager Client, click Edit User Settings. The Edit User Settings dialog box appears. 2. From the Preferred Language drop-down menu, select a language. 3. Click OK.
• Automatic – The units that are most appropriate for the displayed values are automatically selected. • Always show in MB – All storage units are displayed in megabytes. • Always show in GB – All storage units are displayed in gigabytes. • Always show in TB – All storage units are displayed in terabytes. 3. Click OK. Change the Warning Percentage Threshold The warning percentage threshold specifies the utilization percentage at which storage objects indicate a warning. Steps 1.
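The effect of the storage-unit setting above can be sketched as a formatting rule (binary units are an assumption here; Storage Manager may use decimal units):

```python
def format_storage(num_bytes, unit="Automatic"):
    """Format a byte count in a fixed unit (MB, GB, TB), or pick the most
    readable unit automatically, mimicking the display-unit setting."""
    scales = {"MB": 2**20, "GB": 2**30, "TB": 2**40}
    if unit == "Automatic":
        # Choose the largest unit the value reaches; fall back to MB.
        unit = next((u for u in ("TB", "GB", "MB") if num_bytes >= scales[u]), "MB")
    return f"{num_bytes / scales[unit]:.2f} {unit}"

print(format_storage(1536 * 2**30))        # 1.50 TB
print(format_storage(1536 * 2**30, "GB"))  # 1536.00 GB
```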
24 SupportAssist Management Data Types that Can Be Sent Using SupportAssist Storage Manager can send reports, Storage Center data, and FluidFS cluster data to technical support. The following table summarizes the types of data that can be sent using SupportAssist.
Configure SupportAssist Settings for a Single Storage Center Modify SupportAssist Settings for a single Storage Center. Steps 1. Click the Storage view. 2. In the Storage view navigation pane, select a Storage Center. 3. In the top pane of the Storage Manager Client, click Edit Settings. The Edit Data Collector Settings dialog box opens. 4. If you are connected to a Data Collector, select a Storage Center from the drop-down list in the left navigation pane of Unisphere Central. 5. Click Summary.
The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Monitoring tab, and then click the SupportAssist subtab. 5. Click Send SupportAssist Data Now. The Send SupportAssist Data Now dialog box opens. 6. In the Storage Centers area, select the checkboxes of the Storage Centers for which you want to send SupportAssist data to technical support. 7. In the Reports area, select the checkboxes of the Storage Center reports to send. 8.
e) Click Log In. 2. If a Storage Center is selected from the drop-down list in Unisphere Central, click (Home). The Unisphere Central Home page is displayed. 3. Click Data Collector. The Data Collector view is displayed. 4. Click the Monitoring tab, and then click the SupportAssist subtab. 5. Click Export Historical Data. The Export Historical Data dialog box opens. 6. In the Storage Center table, select the Storage Center for which you want to export data. 7.
Save SupportAssist Data to the USB Flash Drive Use the Send SupportAssist Information to USB dialog box to save data to the USB flash drive. Prerequisites • Prepare the USB flash drive according to Prepare the USB Flash Drive. • Storage Center must recognize the USB flash drive. • SupportAssist must be turned off. Steps 1. Click the Storage view. 2. From the Storage navigation pane, select the Storage Center for which to save SupportAssist data. 3. In the Summary tab, click Edit Settings.
Edit SupportAssist Contact Information Use the Storage Center settings to edit SupportAssist contact information. Steps 1. Click the Storage view. 2. In the Storage view navigation pane, select a Storage Center. 3. In the right pane, click Edit Settings. The Edit Storage Center Settings dialog box opens. 4. Click the SupportAssist tab. 5. Click Edit SupportAssist Contact Information. The Edit SupportAssist Contact Information dialog box opens. 6.
8. If the proxy server requires authentication, type the user name and password for the proxy server in the User Name and Password fields. 9. Click OK. CloudIQ CloudIQ provides storage monitoring and proactive service, giving you information tailored to your needs, access to near real-time analytics, and the ability to monitor storage systems from anywhere at any time.