Administrator Guide
Table Of Contents
- Dell FluidFS V3 NAS Solutions For PowerVault NX3500, NX3600, And NX3610 Administrator's Guide
- Introduction
- How PowerVault FluidFS NAS Works
- FluidFS Terminology
- Key Features Of PowerVault FluidFS Systems
- Overview Of PowerVault FluidFS Systems
- PowerVault FluidFS Architecture
- Data Caching And Redundancy
- File Metadata Protection
- High Availability And Load Balancing
- Ports Used by the FluidFS System
- Other Information You May Need
- Upgrading to FluidFS Version 3
- FluidFS Manager User Interface Overview
- FluidFS 3.0 System Management
- Connecting to the FluidFS Cluster
- Managing Secured Management
- Adding a Secured Management Subnet
- Changing the Netmask for the Secured Management Subnet
- Changing the VLAN ID for the Secured Management Subnet
- Changing the VIP for the Secured Management Subnet
- Changing the NAS Controller IP Addresses for the Secured Management Subnet
- Deleting the Secured Management Subnet
- Enabling Secured Management
- Disabling Secured Management
- Managing the FluidFS Cluster Name
- Managing Licensing
- Managing the System Time
- Managing the FTP Server
- Managing SNMP
- Managing the Health Scan Throttling Mode
- Managing the Operation Mode
- Managing Client Connections
- Displaying the Distribution of Clients between NAS Controllers
- Viewing Clients Assigned to a NAS Controller
- Assigning a Client to a NAS Controller
- Unassigning a Client from a NAS Controller
- Manually Migrating Clients to another NAS Controller
- Failing Back Clients to Their Assigned NAS Controller
- Rebalancing Client Connections across NAS Controllers
- Shutting Down and Restarting NAS Controllers
- Managing NAS Appliance and NAS Controller
- FluidFS 3.0 Networking
- Managing the Default Gateway
- Managing DNS Servers and Suffixes
- Managing Static Routes
- Managing the Internal Network
- Managing the Client Networks
- Viewing the Client Networks
- Creating a Client Network
- Changing the Netmask for a Client Network
- Changing the VLAN Tag for a Client Network
- Changing the Client VIPs for a Client Network
- Changing the NAS Controller IP Addresses for a Client Network
- Deleting a Client Network
- Viewing the Client Network MTU
- Changing the Client Network MTU
- Viewing the Client Network Bonding Mode
- Changing the Client Network Bonding Mode
- Managing SAN Fabrics
- FluidFS 3.0 Account Management And Authentication
- Account Management and Authentication
- Default Administrative Accounts
- Default Local User and Local Group Accounts
- Managing Administrator Accounts
- Managing Local Users
- Managing Password Age and Expiration
- Managing Local Groups
- Managing Active Directory
- Managing LDAP
- Managing NIS
- Managing User Mappings between Windows and UNIX/Linux Users
- FluidFS 3.0 NAS Volumes, Shares, and Exports
- Managing the NAS Pool
- Managing NAS Volumes
- File Security Styles
- Thin and Thick Provisioning for NAS Volumes
- Choosing a Strategy for NAS Volume Creation
- Example NAS Volume Creation Scenarios
- NAS Volumes Storage Space Terminology
- Configuring NAS Volumes
- Cloning a NAS Volume
- NAS Volume Clone Defaults
- NAS Volume Clone Restrictions
- Managing NAS Volume Clones
- Managing CIFS Shares
- Managing NFS Exports
- Managing Quota Rules
- Viewing Quota Rules for a NAS Volume
- Setting the Default Quota per User
- Setting the Default Quota per Group
- Adding a Quota Rule for a Specific User
- Adding a Quota Rule for Each User in a Specific Group
- Adding a Quota Rule for an Entire Group
- Changing the Soft Quota or Hard Quota for a User or Group
- Enabling or Disabling the Soft Quota or Hard Quota for a User or Group
- Deleting a User or Group Quota Rule
- Managing Data Reduction
- FluidFS 3.0 Data Protection
- FluidFS 3.0 Monitoring
- FluidFS 3.0 Maintenance
- Troubleshooting
- Getting Help

Dell recommends that you maintain a table to track which DNS entries are used to access each NAS
volume. This helps when performing failover and setting up group policies.
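Such a tracking table can be kept in any form; as a minimal sketch, the mapping could be held as structured data so the DNS updates needed during a failover can be derived mechanically. All volume names, DNS entries, and VIP addresses below are hypothetical examples, not values from any real cluster.

```python
# A minimal sketch of a DNS-entry tracking table for NAS volumes.
# Each row maps a NAS volume to the DNS entry clients use, plus the
# client VIPs on the primary cluster and on the DR (backup) cluster.
dns_table = [
    {"volume": "Marketing", "dns_entry": "marketing.example.com",
     "primary_vips": ["10.0.1.10"], "dr_vips": ["10.1.1.10"]},
    {"volume": "Sales", "dns_entry": "sales.example.com",
     "primary_vips": ["10.0.1.11"], "dr_vips": ["10.1.1.11"]},
]

def failover_targets(table):
    """Return the DNS updates needed to point clients at the DR cluster."""
    return {row["dns_entry"]: row["dr_vips"] for row in table}

print(failover_targets(dns_table))
```

During a failover, the returned mapping tells the administrator which DNS records to repoint; the actual DNS change is made in the site's DNS server, not by FluidFS.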
Setting Up and Performing Disaster Recovery
This section contains a high-level overview of setting up and performing disaster recovery. In these instructions, Cluster A is the source FluidFS cluster containing the data to be backed up, and Cluster B is the target FluidFS cluster that backs up the data from source cluster A.
Prerequisites
• Cluster B is installed, but has no NAS volumes configured.
• Cluster A and Cluster B have the same NAS appliance count. For example, if Cluster A has two NAS
appliances, Cluster B must have two NAS appliances.
• Cluster A and Cluster B are at the same FluidFS version.
• Cluster B has different network settings (client, SAN, internal, and so on) from source Cluster A; however, Cluster A and Cluster B must be able to communicate with each other so that replication operations can occur.
• Cluster B has enough space to replicate all data from Cluster A.
Phase 1 — Build Replication Partnership Between Source Cluster A And Backup Cluster B
1. Log on to cluster A.
2. Set up a replication partnership between source cluster A and backup cluster B.
For more information on setting up replication partners, see Adding a Replication Partnership.
3. Create a replication policy for all the source volumes in cluster A to target volumes in cluster B.
NOTE: A replication policy is a one-to-one match on a volume basis, for example:
Source volume A1 (cluster A) to target volume B1 (cluster B)
Source volume A2 (cluster A) to target volume B2 (cluster B)
…
Source volume An (cluster A) to target volume Bn (cluster B)
NOTE: FluidFS v2 supports automatically generating the target volume when the replication policy is added. For FluidFS v1.0, you must create the target volumes in cluster B yourself and make sure that each volume is large enough to accommodate the data of its corresponding source volume in cluster A.
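The one-to-one pairing described in the note above can be sketched as a simple mapping. The volume names are hypothetical, and FluidFS itself establishes the pairing through its replication policy, not through a script like this.

```python
# A minimal sketch of the one-to-one volume pairing used by a
# replication policy: each source volume maps to exactly one target.
source_volumes = ["A1", "A2", "A3"]   # volumes on cluster A (hypothetical)
target_volumes = ["B1", "B2", "B3"]   # matching volumes on cluster B

def build_replication_policy(sources, targets):
    """Pair each source volume with exactly one target volume."""
    if len(sources) != len(targets):
        raise ValueError("replication is a one-to-one match per volume")
    return dict(zip(sources, targets))

policy = build_replication_policy(source_volumes, target_volumes)
print(policy)  # {'A1': 'B1', 'A2': 'B2', 'A3': 'B3'}
```

The length check mirrors the requirement that every source volume has its own target; a many-to-one or one-to-many pairing is not valid.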
4. Start the replication scheduler and verify that at least one successful replication has occurred for each source volume in cluster A.
If a replication fails, fix the problems encountered and restart the replication process. This ensures that every source volume in cluster A has at least one successful replication copy in cluster B. Set up a regular replication schedule so that the target volumes in cluster B always hold the most up-to-date replication copy of cluster A.
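The "retry until every volume has one good copy" logic in step 4 can be sketched as follows. Here `replicate` is a hypothetical stand-in for triggering replication of a single volume and reporting whether it succeeded; in practice the administrator checks replication status through FluidFS Manager.

```python
# A hedged sketch of ensuring each source volume has at least one
# successful replication, retrying failed volumes a bounded number
# of times.
def ensure_initial_replication(volumes, replicate, max_attempts=3):
    """Retry replication per volume until it succeeds or attempts run out."""
    pending = list(volumes)
    for _ in range(max_attempts):
        pending = [v for v in pending if not replicate(v)]
        if not pending:
            return True          # every volume replicated at least once
    return False                 # some volumes still have no good copy

# Example: volume "A2" fails on its first attempt, then succeeds.
attempts = {}
def fake_replicate(vol):
    attempts[vol] = attempts.get(vol, 0) + 1
    return vol != "A2" or attempts[vol] > 1

print(ensure_initial_replication(["A1", "A2"], fake_replicate))  # True
```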
CAUTION: A replication restore is not a complete bare metal restore (BMR); settings such as network configuration (client, SAN, and internal) cannot be backed up and restored using the replication method. Record all cluster A settings (for use when restoring cluster A), including network configuration and cluster-wide settings such as volume names and alert settings. If the system restore operation fails to restore these settings, you can manually restore the cluster A settings to their original values.
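One way to keep the record the caution above calls for is a small machine-readable settings snapshot. The field names and values below are illustrative placeholders only, not an actual FluidFS export format.

```python
# A hedged sketch of recording cluster A settings that replication
# does not carry over, saved as JSON for re-entry after a restore.
import json

cluster_a_settings = {
    "cluster_name": "ClusterA",                       # hypothetical values
    "client_network": {"vips": ["10.0.1.10"], "netmask": "255.255.255.0"},
    "volumes": [{"name": "A1"}, {"name": "A2"}],
    "alert_settings": {"email": "admin@example.com"},
}

# Save the record so settings can be re-entered manually if the
# system restore operation fails to restore them.
with open("cluster_a_settings.json", "w") as f:
    json.dump(cluster_a_settings, f, indent=2)

with open("cluster_a_settings.json") as f:
    print(json.load(f)["cluster_name"])  # ClusterA
```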