DR4100 Best Practice Guide Part 1: Setup, Replication and Networking
Dell Data Protection Group
April 2014
A Dell Technical White Paper
Revisions
Date          Description
April 2014    Initial release
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
© 2014 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Table of contents
Revisions
Executive summary
1 Administration guides
2 Best practice guides
3 Case studies
4 System setup
5 Replication setup and planning
6 Networking
A Resources
Executive summary
This document contains some of the best practices for deploying, configuring, and maintaining a Dell DR4x00 backup and deduplication appliance in a production environment. Following these best practices can help ensure the product is optimally configured for a given environment. Note: This guide is not a replacement for the administrator's guide; in most cases it provides only a high-level description of the problem and solution.
1 Administration guides
- Dell DR Series System Administrator Guide
- Dell DR4100 Systems Owner's Manual
- Dell DR Series System Command Line Reference Guide
2 Best practice guides
- Best Practices for Setting up NetVault Backup Native Virtual Tape Library (nVTL)
- Best Practices for Setting up NetVault SmartDisk
- DR4X00 Disk Backup Appliance Setup Guide for CommVault Simpana 10
- Setting up EMC Networker on the Dell DR4X00 Disk Backup Appliance Through CIFS
- Setting up EMC Networker on the Dell DR4X00 Disk Backup Appliance Through NFS
- Setting up Veeam on the Dell DR4X00 Disk Backup Appliance
- Setting up vRanger on the Dell DR4X00 Disk Backup Appliance
- Setup Guide for S...
3 Case studies
- Haggar Case Study
- CGD Case Study
- TAGAL Steel Case Study
- DCIG DR4X00 and EqualLogic Better Together
- ESG DR4X00 Lab Validation
- Pacific BioSciences Case Study
4 System setup
4.1 Hardware
When the DR ships to a customer's site and is powered on for the first time, the system still needs to complete a background initialization (init) of the RAID. During this background init, the system may seem sluggish or write slower than expected. This will resolve itself within 24 hours.
4.2 Expansion Shelves
The proper boot procedure for the DR appliance is to first power on the DR expansion shelf/shelves and then power on the DR appliance.
To allow multiple groups to log on to the DR appliance using Active Directory, do the following:
- Create a new global group in Active Directory.
- Add each group that should be allowed to access the DR appliance to this global group.
- Add the new global group to the DR using the following command from the CLI:
authenticate --add --login_group "domain\group"
Users that are part of the selected AD group will be able to log on to the CLI and GUI to administer the device.
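For example, assuming a hypothetical domain and group name (placeholders, not values from this guide), the command would look like this:
Example
administrator@DR1 > authenticate --add --login_group "mydomain\DR_Admins"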
4.4.1.1 Scenario 1: Separate data to be replicated vs. data not to be replicated
Robert has Exchange data that is required to have two copies, with one copy maintained offsite. Robert also has VM data, which is not required to have two copies.
• Recommendation: Robert should have the following two containers:
- Container 1: for the Exchange data, so that it can be replicated off site each week.
- Container 2: for the local VM data, so that it is not replicated and does not take up valuable WAN bandwidth.
For NFS shares, it is recommended that the root user be mapped to nobody and that NFS shares are further locked down by the IP or DNS name of the machines that are allowed to connect to that container.
4.4.3 Marker Support
Many DMAs add metadata into the backup stream to enable them to find, validate, and restore data they wrote into the file. This metadata makes the data appear unique to dedupe-enabled storage. In order to properly dedupe the data, the markers need to be removed before the stream is processed.
5 Replication setup and planning The DR appliance provides robust replication capabilities to provide a complete backup solution for multi-site environments. With WAN optimized replication, only unique data is transferred to reduce network traffic and improve recovery times. Replication can also be scheduled to occur during non-peak periods, and prioritizes ingest data over replication to ensure optimal backup windows.
Since any data transferred during replication has already been compressed and deduplicated, typically reducing it by roughly 85%-90% of the original size, start by multiplying the original data size by 15% to estimate the amount of data to be replicated. For example, to transfer two terabytes of data, break it down into megabytes by multiplying the value by 1048576. To convert 2TB to MB the formula would be: 2TB * 1048576 = 2097152 MB.
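Continuing the example (the 15% retention figure and the 10 MB/s sustained WAN throughput below are illustrative assumptions; substitute measured values for your environment):
2,097,152 MB * 0.15 ≈ 314,573 MB of post-deduplication data to replicate.
314,573 MB / 10 MB/s ≈ 31,457 seconds, or about 8.7 hours of transfer time.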
When multiple containers are being replicated between the same DR appliances, the replication engine round-robins the requests across the containers. In this situation the containers may not stay in sync with each other if there are large amounts of data waiting to be transferred.
5.5 Replication Encryption
For the best balance of performance and security, set encryption to 128-bit, which is a good fit for most environments.
For example, if video conferencing requires 1 MB/s on a 10 MB/s link, scale back the available replication bandwidth to 9 MB/s so that both workloads have the bandwidth they need.
5.8 Domain Access
In addition to data, NFS and CIFS security information is also replicated between DR appliances. This allows user or group accounts with appropriate permissions to access any DR appliance joined to the same domain/forest. Access to a given DR appliance is denied when it is not joined to the same domain/forest.
6 Networking
The DR appliance provides many networking capabilities designed to further improve ingest and recovery speeds in any environment. One such feature is secure separation, which allows network optimization by preventing unnecessary traffic on the production network: backup, management, and replication traffic can be routed to separate network interfaces.
6.2 Network Interface Card Bonding
Network interface card (NIC) bonding provides additional throughput and/or failover functionality in the event a link is lost. The DR4100 supports two bonding modes: dynamic link aggregation and adaptive load balancing (802.3ad and ALB). Each of these modes has its own advantages and disadvantages that should be considered before choosing a mode. Dynamic link aggregation (Mode 4 or 802.3ad) creates aggregation groups whose member links must share the same speed and duplex settings, and it requires a switch that supports 802.3ad dynamic link aggregation.
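As an illustration only, a bond can be created in either mode with the network --create_bond command that appears later in this guide. The interface names below are placeholders, and the mode value assumes the firmware accepts 802.3ad as a literal; check the Dell DR Series System Command Line Reference Guide for the exact values supported by your firmware.
Example
administrator@DR1 > network --create_bond --bondif bond1 --dhcp --nwif eth2,eth3 --mode 802.3ad --restart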
Note: Always ensure that data source systems (i.e., systems that send data to the DR4100) are located on the same subnet as the DR4100. Failure to do so will result in all traffic being sent to the first interface in the bond, because adaptive load balancing cannot properly load balance when data sources are located on a remote subnet. This is a result of ALB's use of the Address Resolution Protocol (ARP), which is subnet-specific, and the fact that routers do not forward ARP broadcasts and updates.
Example
administrator@DR1 > network --factory_reset --auto_bonding_speed 1G
Warning: This will stop all system operation and will reset the network configuration to factory settings and will require a system reboot. Existing configuration will be lost.
Password required to proceed. Please enter the administrator password:
administrator@DR1 > system --reboot
Note: When creating a bond ensure that the interfaces to bond have active links.
6.3.1.1 Scenario 1: Leverage separate interfaces for management, replication and backup traffic
Sarah, a network administrator, has an office located in Boston and another located in New York (see Figure 3). At each office she has a DR appliance and wishes to configure separate interfaces for management, backup and replication traffic. Her desired configuration is as follows:
• Use bond0 for management.
• Use a dedicated 1GB interface for replication traffic.
• Use a bonded 2x 10GB interface for backup traffic.
6.3.1.2 Configure DR1
1. Display the original configuration of DR1 using the following command:
network --show
Example
administrator@DR1 > network --show
Automatic bonding speed: 10G
Device : bond0
Enabled : yes
Link : yes
Boot protocol : dhcp
IP Addr : 10.250.243.132
Netmask : 255.255.252.0
Gateway : 10.250.240.
Example
administrator@DR1 > network --factory_reset --auto_bonding_speed 1G
WARNING: This will stop all system operation and will reset the network configuration to factory settings and will require a system reboot. Existing configuration will be lost.
Password required to proceed. Please enter the administrator password:
Resetting network configuration, please wait....
3. Break bond0 to create the following configuration:
• Bond0 with a single 1GB interface to be used for management.
• A single 1GB interface to be used for replication traffic.
• A bonded 2x 10GB interface to be used for backup traffic.
a. Delete eth1 from bond0 using the following command:
network --delete --member <interface>
Example
administrator@DR1 > network --delete --member eth1
Interface delete successful. Please restart networking for the changes to take effect.
b. Create a dedicated replication interface on eth1 using the following command:
network --create_eth --nwif <interface> --static --ip <ip> --netmask <netmask> --name <name> --restart
Example
administrator@DR1 > network --create_eth --nwif eth1 --static --ip 10.250.243.222 --netmask 255.255.252.0 --name DR1-replication --restart
WARNING: During network restart a loss of connection may occur and a relogin may be necessary.
Password required to proceed. Please enter the administrator password:
Interface operation successful. Network restart will now be done. Restarting network...
Example
administrator@DR1 > system --backup_traffic --add --type CIFS --interface bond1
WARNING: This operation requires Windows access server restart. Do you want to continue (yes/no) [n]? y
Successfully added application CIFS. Restarting Windows Access Server... Done.
administrator@DR1 > system --backup_traffic --add --type NFS --interface bond1
Do you want to continue (yes/no) [n]? y
Successfully added application NFS. Restarting file system ... done.
c.
Example
administrator@DR2 > replication --add --name backup --role source --peer DR2-replication --peer_name backup-from-DR1
Enter password for administrator@DR2-replication:
Replication entry created successfully.
Replication Container : backup
Replication Role : Source
Replication Target : DR2-replication.ocarina.local
Replication Target IP : 10.250.243.220
Replication Target Mgmt Name :
Replication Target Mgmt IP : 10.250.243.220
Replication Local Data Name :
Replication Local Data IP : 10.250.243.
6.3.1.4 Scenario 2: Leverage one bonded interface for management, replication and OST traffic and another for backup traffic Robert has a DR4100 with firmware 2.1, 2x 10GB interfaces and 2x 1GB interfaces. He wishes to use the 1GB interfaces for replication, management and OST traffic. He wants to use the 10GB interfaces for backup traffic only. Robert will need to do the following to accomplish his goals: • Verify that all interfaces are detected by the DR appliance.
1. Set bond0 to 1GB using the following command:
network --factory_reset [--auto_bonding_speed <1G|10G>]
Example
administrator@DR1 > network --factory_reset --auto_bonding_speed 1G
Warning: This will stop all system operation and will reset the network configuration to factory settings and will require a system reboot. Existing configuration will be lost.
One or more of these interfaces 'eth0,eth1' are in use by an application. Factory reset cannot be done while interfaces are in use by an application.
network --create_bond --bondif <bond name> --dhcp --nwif <interface list> --mode <ALB | 802.3ad> --restart
Example
administrator@DR1 > network --create_bond --bondif bond1 --dhcp --nwif eth2,eth3 --mode ALB --restart
Shutting down interface bond0: [ OK ]
Shutting down interface bond1: [ OK ]
Shutting down loopback interface: [ OK ]
Bringing up loopback interface: [ OK ]
Bringing up interface bond0: Determining IP information for bond0... done. [ OK ]
Bringing up interface bond1: Determining IP information for bond1.
6.3.1.5 Scenario 3: Replication between sites with dedicated interfaces Daniel has two sites, each with a DR appliance that he wishes to configure as a replication pair over dedicated links.
1. Use the following command to assign a static IP to eth2 on DR1:
network --create_eth --nwif <interface> --static --ip <ip> --netmask <netmask>
Example
administrator@DR1 > network --create_eth --nwif eth2 --static --ip 172.20.20.2 --netmask 255.255.255.0
2. Use the following command to assign a static IP to eth3 on DR1:
network --create_eth --nwif <interface> --static --ip <ip> --netmask <netmask>
Example
administrator@DR1 > network --create_eth --nwif eth3 --static --ip 172.
Example
administrator@DR2 > network --create_eth --nwif eth2 --static --ip 172.20.21.2 --netmask 255.255.255.0
administrator@DR2 > network --create_eth --nwif eth3 --static --ip 172.20.23.2 --netmask 255.255.255.0 --restart
administrator@DR2 > network --show
6. Ensure connectivity of DR2 eth2 and eth3 by pinging the gateway using the following command:
network --ping --destination <ip> --interface <interface>
Example
administrator@DR2 > network --ping --destination 172.20.21.
Example
administrator@DR1 > network --route --add --network 172.20.21.2 --netmask 255.255.255.0 --gateway 172.20.21.1 --interface eth2
administrator@DR1 > network --route --add --network 172.20.23.2 --netmask 255.255.255.0 --gateway 172.20.23.1 --interface eth3
administrator@DR1 > network --show --routes
Destination     Gateway        Mask             Interface
172.20.21.0     172.20.20.1    255.255.255.0    eth2
172.20.23.0     172.20.22.1    255.255.255.0    eth3
9.
6.3.1.6 Scenario 4: Multiple appliance replication
Jose has one DR4100 appliance located at his Seattle site and two DR4100s located at his Lansing site. He wants to replicate data from his Seattle site (Seattle1) to his Lansing site (Lansing1). He would also like to replicate data backed up at Lansing1 to the second Lansing appliance, Lansing2. Jose will need to do the following to accomplish his goals:
• Create bonds on the appropriate interfaces on all three appliances.
• Add replication to the designated interfaces.
1. On Lansing1 create a bond on the 1GB ports using the following command:
network --create_bond --bondif <bond name> --static --nwif <interface list> --mode <ALB | 802.3ad> --mtu <512-9000> --ip <ip> --netmask <netmask> --restart
Example
administrator@Lansing1 > network --create_bond --bondif bond1 --static --nwif eth2,eth3 --mode ALB --mtu <512-9000> --ip <ip> --netmask <netmask> --restart
2.
6. Establish replication from Seattle1 to Lansing1 using the following command:
replication --add --name <container name> --role <role> --peer <peer> --replication_traffic <interface> --encryption <encryption type>
Example
administrator@Seattle1 > replication --add --name backup --role source --peer <Lansing1 replication IP> --replication_traffic <interface> --encryption aes256
7.
6.3.1.7 Scenario 5: Backup to different IPs on a single DR appliance
Michelle is in Los Angeles and has a NetBackup media server that she wishes to back up to her DR appliance via different IP addresses. Her appliance is running an older firmware and needs to be upgraded to 2.1. Michelle will need to do the following to accomplish her goals:
• Upgrade the DR firmware to 2.1.
• Set the media server to use the two interfaces of the DR appliance.
On the DR appliance:
1. Upgrade the DR appliance to the latest 2.1 firmware.
2. After the upgrade, two additional 1GB interfaces will appear. The 10GB interfaces will be bonded as bond0. Use the following command to view the available network interfaces:
network --show
3. Break bond0 and release eth0 from bond0 using the following commands:
network --delete --member <interface>
network --restart
Example
administrator@DR > network --delete --member eth0
network --restart
4. Create a dedicated interface, with a static IP address and a DNS name, for each backup target (see the sketch below).
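A sketch of step 4, assuming illustrative IP addresses, netmask, and interface names (dr-cifs and dr-cifs2 are placeholder DNS names; substitute values appropriate for your network). The syntax follows the network --create_eth examples shown earlier in this guide:
Example
administrator@DR > network --create_eth --nwif eth0 --static --ip 10.250.243.10 --netmask 255.255.252.0 --name dr-cifs --restart
administrator@DR > network --create_eth --nwif eth1 --static --ip 10.250.243.11 --netmask 255.255.252.0 --name dr-cifs2 --restart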
On the NetBackup Media server:
1. Check connectivity between the Media server and the newly created interface on the DR using the interface's DNS name (dr-cifs.local).
2. Create a storage unit on the Media server with the UNC path to the DR's CIFS (or NFS if using an NFS Media server) container. Use the DNS name of the newly created interface in the UNC path.
3. Create a second storage unit with the UNC path to the DR's second container.
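For example, the two storage unit paths might look like the following (container names and the second DNS name are placeholders; the first DNS name corresponds to the dr-cifs interface referenced above):
\\dr-cifs.local\container1
\\dr-cifs2.local\container2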
6.4 Troubleshooting
Follow the steps below to troubleshoot connectivity problems between source and target DRs.
a. Issue the following command to troubleshoot connectivity to the target:
replication --troubleshoot --peer <target ip>
Example
administrator@DR1 > replication --troubleshoot --peer 10.250.243.222
Testing connection to port 9904... Connected!
Testing connection to port 9911... Connected!
Testing connection to port 9915... Connected!
Testing connection to port 9916...
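If any of the ports fail to connect, basic reachability can also be checked with the ping option shown earlier in this guide. The destination below is the same peer address used in the example above, and the interface name is illustrative:
Example
administrator@DR1 > network --ping --destination 10.250.243.222 --interface eth1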
A Resources
DR Series Manuals
http://www.dell.com/support/Manuals/us/en/19/product/powervault-dr4100
Dell Support
http://support.dell.com
Dell TechCenter
http://en.community.dell.