Table Of Contents
- Executive Summary (updated May 2011)
- 1. Introduction
- 2. Dell NFS Storage Solution Technical Overview
- 3. NFS Storage Solution with High Availability
- 4. Evaluation
- 5. Performance Benchmark Results (updated May 2011)
- 6. Comparison of the NSS Solution Offerings
- 7. Conclusion
- 8. References
- Appendix A: NSS-HA Recipe (updated May 2011)
- A.1. Pre-install preparation
- A.2. Server side hardware set-up
- A.3. Initial software configuration on each PowerEdge R710
- A.4. Performance tuning on the server
- A.5. Storage hardware set-up
- A.6. Storage Configuration
- A.7. NSS HA Cluster setup
- A.8. Quick test of HA set-up
- A.9. Useful commands and references
- A.10. Performance tuning on clients (updated May 2011)
- A.11. Example scripts and configuration files
- Appendix B: Medium to Large Configuration Upgrade
- Appendix C: Benchmarks and Test Tools
Dell HPC NFS Storage Solution - High Availability Configurations
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
15.15.10.1 active.hpc.com active
15.15.10.2 passive.hpc.com passive
13) Set up password-less ssh between the active and passive servers.
active> ssh-keygen -t rsa
active> ssh-copy-id -i ~/.ssh/id_rsa.pub passive
passive> ssh-keygen -t rsa
passive> ssh-copy-id -i ~/.ssh/id_rsa.pub active
14) Configure the IPoIB ib0 address or 10GbE address for the public network.
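The recipe does not show the interface file itself; for illustration only (the device name and addresses below are placeholders, not values from this solution), a static ifcfg file for the public-network interface could look like:

```shell
# /etc/sysconfig/network-scripts/ifcfg-ib0 -- example only; substitute
# the site's actual public-network addressing (ib0 for IPoIB, or the
# 10GbE ethX device)
DEVICE=ib0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.10.10.1
NETMASK=255.255.255.0
```

Restart the networking services after editing the file so the address takes effect.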
15) To work around an issue that can impact cluster service failover when all SAS links from the server to the storage fail, two RPMs need to be updated on both the active and the passive server. These RPMs can be obtained from Red Hat Network.
device-mapper-multipath-0.4.7-42.el5_6.2.x86_64.rpm
kpartx-0.4.7-42.el5_6.2.x86_64.rpm
Update both the active and the passive server:
# rpm -Uvh kpartx-0.4.7-42.el5_6.2.x86_64.rpm device-mapper-multipath-0.4.7-42.el5_6.2.x86_64.rpm
References:
http://rhn.redhat.com/errata/RHBA-2011-0379.html
https://bugzilla.redhat.com/show_bug.cgi?id=677821
https://bugzilla.redhat.com/show_bug.cgi?id=683447
A.4. Performance tuning on the server
1) If the clients access the NFS server via 10GbE, configure the MTU on the 10GbE device to be 8192
for both the active and the passive server. Note that the switches need to be configured to support
large MTU as well.
On the server, if no MTU value is specified in /etc/sysconfig/network-scripts/ifcfg-ethX (where ifcfg-ethX is the 10GbE network interface), append it:
echo "MTU=8192" >> /etc/sysconfig/network-scripts/ifcfg-ethX
Otherwise, change the existing value to 8192 in /etc/sysconfig/network-scripts/ifcfg-ethX.
Restart the networking services.
service network restart
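The append-or-replace logic above can be sketched as a short script. This is a sketch run against a scratch copy of an ifcfg file; on the real servers, IFCFG would point at the actual 10GbE interface file under /etc/sysconfig/network-scripts.

```shell
# Sketch of the MTU update logic, demonstrated on a scratch copy.
# On the servers, set IFCFG=/etc/sysconfig/network-scripts/ifcfg-ethX.
IFCFG=./ifcfg-ethX.example
printf 'DEVICE=ethX\nONBOOT=yes\nMTU=1500\n' > "$IFCFG"   # sample file with an old MTU

if grep -q '^MTU=' "$IFCFG"; then
    # An MTU line already exists: replace the old value with 8192
    sed -i 's/^MTU=.*/MTU=8192/' "$IFCFG"
else
    # No MTU line yet: append one
    echo 'MTU=8192' >> "$IFCFG"
fi
grep '^MTU' "$IFCFG"    # prints: MTU=8192
```

Running the script a second time leaves the file unchanged, so it is safe to apply on both the active and the passive server regardless of the file's starting state.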
2) On both the active and the passive server, change the number of NFS threads from a default of 8 to
256.
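On Red Hat Enterprise Linux 5 the nfsd thread count is typically controlled by RPCNFSDCOUNT in /etc/sysconfig/nfs; assuming that mechanism (the recipe's exact command may differ), the change amounts to:

```shell
# /etc/sysconfig/nfs -- raise the NFS server thread count from the
# default of 8 to 256
RPCNFSDCOUNT=256
```

followed by a restart of the NFS service (service nfs restart) on both servers for the new thread count to take effect.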