Instruction Manual
Table Of Contents
- Dell FluidFS V3 NAS Solutions For PowerVault NX3500, NX3600, And NX3610 Administrator's Guide
- Introduction
  - How PowerVault FluidFS NAS Works
  - FluidFS Terminology
  - Key Features Of PowerVault FluidFS Systems
  - Overview Of PowerVault FluidFS Systems
    - PowerVault FluidFS Architecture
    - Data Caching And Redundancy
    - File Metadata Protection
    - High Availability And Load Balancing
  - Ports Used by the FluidFS System
  - Other Information You May Need
- Upgrading to FluidFS Version 3
- FluidFS Manager User Interface Overview
- FluidFS 3.0 System Management
  - Connecting to the FluidFS Cluster
  - Managing Secured Management
    - Adding a Secured Management Subnet
    - Changing the Netmask for the Secured Management Subnet
    - Changing the VLAN ID for the Secured Management Subnet
    - Changing the VIP for the Secured Management Subnet
    - Changing the NAS Controller IP Addresses for the Secured Management Subnet
    - Deleting the Secured Management Subnet
    - Enabling Secured Management
    - Disabling Secured Management
  - Managing the FluidFS Cluster Name
  - Managing Licensing
  - Managing the System Time
  - Managing the FTP Server
  - Managing SNMP
  - Managing the Health Scan Throttling Mode
  - Managing the Operation Mode
  - Managing Client Connections
    - Displaying the Distribution of Clients between NAS Controllers
    - Viewing Clients Assigned to a NAS Controller
    - Assigning a Client to a NAS Controller
    - Unassigning a Client from a NAS Controller
    - Manually Migrating Clients to another NAS Controller
    - Failing Back Clients to Their Assigned NAS Controller
    - Rebalancing Client Connections across NAS Controllers
  - Shutting Down and Restarting NAS Controllers
  - Managing NAS Appliance and NAS Controller
- FluidFS 3.0 Networking
  - Managing the Default Gateway
  - Managing DNS Servers and Suffixes
  - Managing Static Routes
  - Managing the Internal Network
  - Managing the Client Networks
    - Viewing the Client Networks
    - Creating a Client Network
    - Changing the Netmask for a Client Network
    - Changing the VLAN Tag for a Client Network
    - Changing the Client VIPs for a Client Network
    - Changing the NAS Controller IP Addresses for a Client Network
    - Deleting a Client Network
    - Viewing the Client Network MTU
    - Changing the Client Network MTU
    - Viewing the Client Network Bonding Mode
    - Changing the Client Network Bonding Mode
  - Managing SAN Fabrics
- FluidFS 3.0 Account Management And Authentication
  - Account Management and Authentication
  - Default Administrative Accounts
  - Default Local User and Local Group Accounts
  - Managing Administrator Accounts
  - Managing Local Users
  - Managing Password Age and Expiration
  - Managing Local Groups
  - Managing Active Directory
  - Managing LDAP
  - Managing NIS
  - Managing User Mappings between Windows and UNIX/Linux Users
- FluidFS 3.0 NAS Volumes, Shares, and Exports
  - Managing the NAS Pool
  - Managing NAS Volumes
    - File Security Styles
    - Thin and Thick Provisioning for NAS Volumes
    - Choosing a Strategy for NAS Volume Creation
    - Example NAS Volume Creation Scenarios
    - NAS Volumes Storage Space Terminology
    - Configuring NAS Volumes
    - Cloning a NAS Volume
    - NAS Volume Clone Defaults
    - NAS Volume Clone Restrictions
    - Managing NAS Volume Clones
  - Managing CIFS Shares
  - Managing NFS Exports
  - Managing Quota Rules
    - Viewing Quota Rules for a NAS Volume
    - Setting the Default Quota per User
    - Setting the Default Quota per Group
    - Adding a Quota Rule for a Specific User
    - Adding a Quota Rule for Each User in a Specific Group
    - Adding a Quota Rule for an Entire Group
    - Changing the Soft Quota or Hard Quota for a User or Group
    - Enabling or Disabling the Soft Quota or Hard Quota for a User or Group
    - Deleting a User or Group Quota Rule
  - Managing Data Reduction
- FluidFS 3.0 Data Protection
- FluidFS 3.0 Monitoring
- FluidFS 3.0 Maintenance
- Troubleshooting
- Getting Help

Troubleshooting RX And TX Pause Warning Messages
Description: The following warning messages may be displayed when the NAS Manager reports connectivity in a Not Optimal state:
Rx_pause for eth(x) on node 1 is off.
Tx_pause for eth(x) on node 1 is off.
Cause: Flow control is not enabled on the switch(es) connected to a NAS cluster solution controller.
Workaround: See the switch vendor's documentation to enable flow control on the switch(es).
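To confirm from a Linux host which end of the link has pause frames turned off, you can read the NIC's pause parameters before and after changing the switch configuration. The following is a minimal sketch, not part of the FluidFS tooling: it assumes a Linux host with the ethtool utility installed, and the interface name eth0 is a placeholder.

    # Illustrative check of a NIC's pause (flow control) settings on a Linux host.
    # Assumes the ethtool utility is installed; "eth0" is a placeholder interface name.
    import subprocess

    def pause_settings(interface):
        """Return the pause parameters reported by 'ethtool --show-pause <interface>'."""
        output = subprocess.run(
            ["ethtool", "--show-pause", interface],
            capture_output=True, text=True, check=True,
        ).stdout
        settings = {}
        for line in output.splitlines():
            key, sep, value = line.partition(":")
            if sep:
                settings[key.strip()] = value.strip()
        return settings

    if __name__ == "__main__":
        status = pause_settings("eth0")
        print("RX pause:", status.get("RX", "unknown"))
        print("TX pause:", status.get("TX", "unknown"))

After flow control is enabled on the switch, both RX and TX pause should report "on" for the interfaces named in the warning messages.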
Troubleshooting Replication Issues
Replication Configuration Error
Description: Replication between the source and destination NAS volumes fails because the source and destination systems' topologies are incompatible.
Cause: The source and destination systems are incompatible for replication purposes.
Workaround: Upgrade the NAS cluster solution that is down, and verify that both the source and destination have the same number of NAS controllers.
NOTE: You cannot replicate between a four-node NAS cluster and a two-node NAS cluster.
Replication Destination Cluster Is Busy
Description: Replication between the source NAS volume and the destination NAS volume fails because the destination cluster is not available to serve the required replication.
Cause: The replication task fails because the destination cluster is not available to serve the required replication.
Workaround: Administrators must verify the replication status on the destination system.
Replication Destination FS Is Busy
Description: Replication between the source NAS volume and the destination NAS volume fails.
Cause: The replication task fails because the destination cluster is temporarily unavailable to serve the required replication.
Workaround: The replication continues automatically when the file system releases some of its resources. Administrators must verify that the replication continues automatically after a period of time (approximately an hour).
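Because the workaround is simply to confirm that the replication resumes on its own within about an hour, that check can be scripted. The sketch below is an illustration only: the status query passed to wait_for_replication is a hypothetical placeholder for however you check replication state in your environment (for example, through the NAS Manager), not a FluidFS API.

    # Minimal sketch: poll a replication-status check until it reports progress,
    # or give up after roughly an hour. The status callable is a hypothetical
    # placeholder and must be replaced with a real query against the destination system.
    import time

    POLL_INTERVAL = 300   # seconds between checks (5 minutes)
    DEADLINE = 3600       # stop waiting after roughly an hour

    def wait_for_replication(check_status, poll_interval=POLL_INTERVAL, deadline=DEADLINE):
        """Poll check_status() until it reports 'running' or the deadline passes.

        check_status is whatever query you use against the destination system;
        it should return a short state string such as 'running' or 'suspended'.
        """
        start = time.monotonic()
        while True:
            if check_status() == "running":
                return True        # replication resumed on its own
            if time.monotonic() - start >= deadline:
                return False       # still stalled; investigate the destination cluster
            time.sleep(poll_interval)

    if __name__ == "__main__":
        # Stand-in status query so the sketch runs as-is; replace with a real check.
        resumed = wait_for_replication(lambda: "running")
        print("resumed" if resumed else "still stalled after an hour; check the destination system")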
Replication Destination Is Down
Description: Replication between the source NAS volume and the destination NAS volume fails.
Cause: The replication task fails because the file system of the destination NAS volume is down.
Workaround: Administrators must check whether the file system on the destination system is down using the monitoring section of the NAS Manager. If the NAS cluster solution file system is not responding, administrators must start the file system on the destination cluster. The replication continues automatically after the file system starts.