Administrator Guide

VIPs) on the client network that allow clients to access the FluidFS cluster as a single entity. The client VIP also enables load
balancing between NAS controllers, and ensures failover in the event of a NAS controller failure.
If client access to the FluidFS cluster is not through a router (in other words, a flat network), define one client VIP per NAS
controller. If clients access the FluidFS cluster through a router, define a client VIP for each client interface port per NAS controller.
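The VIP sizing rule above can be sketched as a small calculation. This is an illustrative sketch only, not a FluidFS tool; the function name and parameters are assumptions.

```python
# Hypothetical helper illustrating the client VIP sizing rule:
# flat network  -> one VIP per NAS controller
# routed network -> one VIP per client interface port per controller
def client_vip_count(num_controllers: int,
                     ports_per_controller: int,
                     routed: bool) -> int:
    """Return the suggested number of client VIPs for the topology."""
    if routed:
        return num_controllers * ports_per_controller
    return num_controllers

# Example: two NAS controllers with four client ports each
print(client_vip_count(2, 4, routed=False))  # flat network -> 2
print(client_vip_count(2, 4, routed=True))   # routed network -> 8
```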
Data Caching and Redundancy
New and modied les are rst written to the cache, and then cache data is immediately mirrored to the peer NAS controller
(mirroring mode). Data caching provides high performance, while cache mirroring between peer NAS controllers ensures data
redundancy. Cache data is ultimately transferred to permanent storage asynchronously through optimized data-placement schemes.
When cache mirroring is not possible, such as during a single NAS controller failure or when the BPS battery status is low, NAS
controllers write directly to storage (journaling mode).
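The mirroring/journaling decision described above can be sketched in a few lines. This is a hedged sketch of the write path, assuming a simple model of controller state; the class and attribute names are hypothetical, not the FluidFS implementation.

```python
# Hypothetical model of the write path: mirror to the peer cache when
# possible, otherwise fall back to writing the journal on stable storage.
class NasController:
    def __init__(self, peer_available: bool = True, battery_ok: bool = True):
        self.cache = []          # local in-memory write cache
        self.peer_cache = []     # stands in for the peer controller's cache
        self.journal = []        # on-disk journal used in journaling mode
        self.peer_available = peer_available
        self.battery_ok = battery_ok

    def write(self, block) -> str:
        if self.peer_available and self.battery_ok:
            # Mirroring mode: cache locally and mirror to the peer
            # immediately, so data survives a single-controller failure.
            self.cache.append(block)
            self.peer_cache.append(block)
            return "mirroring"
        # Journaling mode: the peer is down or the BPS battery is low,
        # so the write goes directly to stable storage.
        self.journal.append(block)
        return "journaling"

c = NasController()
print(c.write("blockA"))                          # mirroring
print(NasController(peer_available=False).write("blockB"))  # journaling
```

The asynchronous flush from cache to permanent storage is omitted here; the sketch only shows the mode selection.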
File Metadata Protection
The FluidFS cluster has several built-in measures to store and protect file metadata (which includes information such as name,
owner, permissions, date created, date modified, and a soft link to the file's storage location).
All metadata updates are recorded constantly to storage to avoid potential corruption or data loss in the event of a power failure.
Metadata is replicated on two separate volumes.
Metadata is managed through a separate caching scheme.
Checksums protect the metadata and directory structure. A background process continuously checks and fixes incorrect
checksums.
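The checksum-and-repair behavior described above can be illustrated with a minimal scrub loop. This is a sketch under stated assumptions: the record layout, the use of CRC32, and repair from the replica volume are illustrative, not the actual FluidFS on-disk format or algorithm.

```python
# Hypothetical background-scrub pass over checksum-protected metadata
# records; repairs a corrupted record from its replica on the second
# metadata volume.
import zlib

def scrub(records) -> int:
    """Verify each record against its stored checksum; on mismatch,
    restore the data from the replica and recompute the checksum.
    Returns the number of records repaired."""
    fixed = 0
    for rec in records:
        if zlib.crc32(rec["data"]) != rec["checksum"]:
            rec["data"] = rec["replica"]              # restore from replica
            rec["checksum"] = zlib.crc32(rec["data"])
            fixed += 1
    return fixed

good = b"owner=alice;mode=0644"
records = [
    {"data": good, "checksum": zlib.crc32(good), "replica": good},
    {"data": b"garbled!", "checksum": zlib.crc32(good), "replica": good},
]
print(scrub(records))  # 1 record repaired
```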
Load Balancing and High Availability
For availability and performance, client connections are load balanced across the available NAS controllers. Both NAS controllers in a
NAS appliance operate simultaneously. If one NAS controller in a NAS appliance fails, clients fail over automatically to the peer NAS
controller. When failover occurs, some SMB clients reconnect automatically to the peer NAS controller; in other cases, an SMB
application might fail and you must restart it. NFS clients experience a temporary pause during failover, but client network traffic
resumes automatically.
Failure Scenarios
The FluidFS cluster can tolerate a single NAS controller failure without impact to data availability and without data loss. If one NAS
controller in a NAS appliance becomes unavailable (for example, because the NAS controller failed, is turned off, or is disconnected
from the network), the NAS appliance status is degraded. Although the FluidFS cluster is still operational and data is available to
clients, you cannot perform most configuration modifications, and performance might decrease because data is no longer cached.
The impact to data availability and data integrity of a multiple NAS controller failure depends on the circumstances of the failure
scenario. Detach a failed NAS controller as soon as possible, so that it can be safely taken offline for service. Data access remains
intact as long as one of the NAS controllers in each NAS appliance in a FluidFS cluster is functional.
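The availability rule stated above (data access continues as long as every NAS appliance retains at least one functional controller) reduces to a simple predicate. The data structure below is a hypothetical model for illustration only.

```python
# Hypothetical model: a cluster is a list of NAS appliances, each a list
# of controller health flags (True = functional).
def cluster_available(appliances) -> bool:
    """Data access remains intact as long as at least one NAS controller
    in each NAS appliance is functional."""
    return all(any(controllers) for controllers in appliances)

# Two appliances, one failed controller in each: still available (degraded).
print(cluster_available([[True, False], [False, True]]))   # True
# Both controllers in one appliance failed: data unavailable.
print(cluster_available([[False, False], [True, True]]))   # False
```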
The following table summarizes the impact to data availability and data integrity of various failure scenarios.
Scenario: Single NAS controller failure
    System Status: Available, degraded
    Data Integrity: Unaffected
    Comments: Peer NAS controller enters journaling mode. The failed NAS controller can be replaced while keeping the file
    system online.
Scenario: Sequential dual-NAS controller failure in a single NAS appliance cluster
    System Status: Unavailable
    Data Integrity: Unaffected
    Comments: Sequential failure assumes enough time is available between NAS controller failures to write all data from
FS8x00 Scale-Out NAS with FluidFS Overview