HP X9000 Remote Replication Application Note
for X9000 File Serving Software 6.0 or later

Abstract

This document describes the HP X9000 remote replication feature, and is intended for storage administrators and Windows administrators. Familiarity with X9000 systems and replication technologies is required.
© Copyright 2011 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents

1 Using remote replication.........................................4
  Overview.........................................................4
    Replication modes..............................................4
    Replication sources
1 Using remote replication

X9000 remote replication is a file-based solution that replicates changes in a source file system on one cluster to a target file system on either the same cluster (intra-cluster replication) or a second cluster (inter-cluster replication). Remote replication is asynchronous and runs in parallel across the cluster. Both files and directories can be replicated, and no special configuration of segments is needed.
Replication and X9000 file system snapshots

File system snapshots are presented in a .snapshot directory under the root of the snapped directory. Continuous replication replicates only the current version of a file and skips snapshots completely. Run-once replication allows individual snapshots to be replicated. Snapshots on the source are sparse, but are fully populated when they are replicated. Be sure to plan file system sizes on the target cluster accordingly.
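Because sparse source snapshots arrive fully populated on the target, the target file system may need considerably more capacity than the source footprint suggests. The sketch below illustrates the arithmetic only; the figures and the function name are assumed example values, not X9000 measurements.

```python
# Illustrative sizing estimate: sparse snapshots on the source are
# replicated as fully populated copies on the target. All inputs are
# hypothetical example values.

def target_capacity_gb(live_data_gb, snapshot_count, avg_full_snapshot_gb):
    """Live data plus one fully populated copy per replicated snapshot."""
    return live_data_gb + snapshot_count * avg_full_snapshot_gb

# Example: a 500 GB live file system with 4 hourly snapshots that are
# sparse on the source but roughly 500 GB each once populated.
print(target_capacity_gb(500, 4, 500))  # 2500
```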
replication and watches for errors and failures. The replication software uses file-level checksums at the source and target to confirm that replication was successful. When a replication task is started, the source system sets up a replication stream from each file serving node to the target file serving nodes. Each node replicates data only from the segments it owns; a node cannot replicate data from segments owned by other nodes.
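The file-level checksum comparison described above can be sketched as follows. This is a generic illustration using SHA-256 from the Python standard library, not the X9000 implementation; the function names and paths are hypothetical.

```python
import hashlib

def file_checksum(path, chunk_size=1 << 20):
    """Compute a SHA-256 digest of a file, reading in fixed-size chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def replicated_ok(source_path, target_path):
    """Replication is confirmed when source and target digests match."""
    return file_checksum(source_path) == file_checksum(target_path)
```

Comparing digests rather than file contents keeps the verification traffic between clusters small, which matters when the source and target communicate over a shared user network.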
directories and file systems are not hidden or protected from local access or sharing by the target system.
3. Select the servers and corresponding NICs that will handle replication traffic on the target cluster.
4. Select the network to be used for replication traffic. The traffic can go across any available network.

The following examples show how to create a remote replication export using the GUI or the CLI.
Complete the dialog box:
• Path: Enter the path to the target directory.
• Export to cluster: Select an already registered cluster to receive the export, or register a new cluster. To register a cluster, click New to open the Add Remote Cluster dialog box, and then enter the DNS name for the Fusion Manager on the source cluster.
• Server assignments: Select the file serving nodes that will manage replication traffic on the export. Also select the network to be used for replication traffic.
You can also modify or delete exports from this panel. Select the export and then click Modify or Delete.

Example 1: Continuous replication of a complete file system

On the source cluster, select the file system to be replicated from the Filesystems panel (ifs1 in this case). In the lower Navigator, select Active Tasks > Remote Replication. Click New on the Remote Replication Tasks panel to open the New Remote Replication Task dialog box.
Example 2: Run-once replication of a directory

Run-once replications are configured on the New Remote Replication Task dialog box. For this replication, select Run Once as the Type. Specify the directory to be replicated from the source cluster and then complete the target-side information in the same manner as example 1. When you click OK, directory /ifs2/src1 on cluster ibrix01 will be replicated to /ifs2/target2 on cluster ibrix02.
When you click OK, the selected snapshot will be replicated from snap tree /ifs1 on cluster ibrix01 to /ifs1/target3 on cluster ibrix02.

Monitoring and controlling replication tasks

You can view replication tasks on the source cluster GUI. Select the file system being replicated, and then select Active Tasks > Remote Replication from the lower Navigator. To control the task, click Stop, Pause, or Resume as needed. Select Overall Status in the Navigator to see a status report.
Select Server Tasks in the Navigator to see the servers running the task.

Using the CLI for the examples

Registering clusters for remote replication

Use the ibrix_cluster command to register the source and target clusters with each other.
handle replication requests. (The default server assignment is to use all servers that have the file system mounted.) The following command creates the export for example 1:

[root@ibrix02a ibfs1]# ibrix_crr_export -f ifs1 -p target1 -C ibrix01 -P
Command succeeded!

In the command:
• The -f option specifies the file system to export as a target for remote replication.
• The -p option specifies an exported directory under the exported file system.
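To keep the option roles straight, the helper below assembles the export command string from named parameters. The function is hypothetical and only mirrors the options documented above; it does not run the command.

```python
def build_crr_export(file_system, export_dir, source_cluster):
    """Assemble an ibrix_crr_export command line.

    -f : file system to export as a remote replication target
    -p : exported directory under the exported file system
    -C : source cluster registered to replicate into this export
    -P : carried over verbatim from the documented example
    """
    return (f"ibrix_crr_export -f {file_system} "
            f"-p {export_dir} -C {source_cluster} -P")

print(build_crr_export("ifs1", "target1", "ibrix01"))
# ibrix_crr_export -f ifs1 -p target1 -C ibrix01 -P
```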
CfrExport: ifs1 (target1)
================================
FILE SYSTEM         : ifs1
DIRECTORY EXPORTED  : target1
CLUSTER EXPORTED TO : ibrix01

ID  HOST      NIC    IP
--  --------  -----  ------------
3   ibrix02a  bond1  10.10.125.37
4   ibrix02b  bond1  10.10.125.39

Example 1: Continuous replication of a complete file system

Start the replication from the source cluster:

[root@ibrix01a ~]# ibrix_crr -s -f ifs1 -C ibrix02 -F ifs1 -X target1
Submitted CRR operation to background.
2, but defining a specific snapshot directory as the replication source (for example, /ibfs1/.snapshot/2011-06-02T030000_hourly).

# ibrix_crr -s -f ifs2 -o -S src1/.snapshot/2011-09-22T053000_hourly -C ibrix02 -F ifs2 -X target3
Submitted CRR operation to background.
ID of submitted task: crr-7
Command succeeded!

If you use this method, note the following:
• This approach is prone to user error because of the complexity of the path to the source, especially when large numbers of snapshots are present.
Run-once replication is the only type available when replicating to the same cluster and same file system. If you have multiple file systems, you can choose run-once or continuous replication between file systems. On the CLI, use the ibrix_crr command to start a run-once replication task between directories in a file system:

[root@ibrix01a ifs1]# ibrix_crr -s -f ifs1 -o -S src3 -P target3
Submitted CRR operation to background.
Start a continuous replication task between two file systems:

[root@ibrix01a ifs1]# ibrix_crr -s -f ifs1 -F ifs2 -P target3
Submitted CRR operation to background.
ID of submitted task: crr-10
Command succeeded!

The source of a continuous replication task must be the root of a file system.

Monitoring system load

You can monitor system load in real time on the X9000 GUI dashboard.
Network failures

This case assumes that the source file serving nodes communicate with the target file serving nodes over the user network, and the source file serving nodes communicate locally with each other over the cluster network. If a source file serving node loses both the cluster and user networks, the node will fail over to its HA backup node, which will take over its segments and responsibilities. Complete the steps in "Failed source file serving node" (page 17).
A Log files

Remote replication writes to several log files in the /usr/local/ibrix/log directory on each node. The names of the files start with ibrc.

[root@ibrix01a log]# pwd
/usr/local/ibrix/log
[root@ibrix01a log]# ls ibrc*
ibrcfrd.dbg  ibrcfrd.err  ibrcfrd.info  ibrcfrworker.log  ibrcud.dbg  ibrcud.err  ibrcud.info

The following log files are created on each target file serving node:

[root@ibrix02a log]# pwd
/usr/local/ibrix/log
[root@ibrix02a log]# ls ibrc*
ibrcfrd.dbg  ibrcfrd.err  ibrcfrd.
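When troubleshooting, the error logs (ibrc*.err) are the usual first stop. The sketch below collects the last few lines from each error log; it is a generic illustration assuming the log directory layout shown above, and the function name is hypothetical.

```python
import glob
import os

def collect_error_lines(log_dir="/usr/local/ibrix/log", max_lines=20):
    """Gather the tail of each ibrc*.err log file, tagged by file name."""
    lines = []
    for path in sorted(glob.glob(os.path.join(log_dir, "ibrc*.err"))):
        with open(path, "r", errors="replace") as f:
            tail = f.readlines()[-max_lines:]
        lines.extend(f"{os.path.basename(path)}: {l.rstrip()}" for l in tail)
    return lines
```

Run on each file serving node; an empty result simply means none of the error logs contain entries.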
B CLI commands in the X9000 5.x and 6.x releases

The replication commands changed in the X9000 6.x release. The following table maps commands used in the X9000 5.x release to the commands used in the 6.x release.

5.x CLI command            6.x CLI command
ibrix_cfrjob               ibrix_crr
ibrix_exportcfr            ibrix_crr_export
ibrix_exportcfrpreference  ibrix_crr_nic

The command options have also changed. See the HP X9000 File Serving Software CLI Reference Guide or the man pages for more information.
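For scripts that still reference the 5.x command names, the table above can be captured as a simple lookup. The dictionary restates only the mappings documented here; the helper function is hypothetical.

```python
# 5.x -> 6.x replication command names, as documented in this appendix.
CRR_COMMAND_MAP = {
    "ibrix_cfrjob": "ibrix_crr",
    "ibrix_exportcfr": "ibrix_crr_export",
    "ibrix_exportcfrpreference": "ibrix_crr_nic",
}

def to_6x(command):
    """Translate a 5.x replication command name to its 6.x equivalent.

    Commands outside the mapping are returned unchanged; note that
    their options may still differ between releases.
    """
    return CRR_COMMAND_MAP.get(command, command)

print(to_6x("ibrix_cfrjob"))  # ibrix_crr
```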