White Papers
Table Of Contents
- Executive Summary (updated May 2011)
- 1. Introduction
- 2. Dell NFS Storage Solution Technical Overview
- 3. NFS Storage Solution with High Availability
- 4. Evaluation
- 5. Performance Benchmark Results (updated May 2011)
- 6. Comparison of the NSS Solution Offerings
- 7. Conclusion
- 8. References
- Appendix A: NSS-HA Recipe (updated May 2011)
- A.1. Pre-install preparation
- A.2. Server side hardware set-up
- A.3. Initial software configuration on each PowerEdge R710
- A.4. Performance tuning on the server
- A.5. Storage hardware set-up
- A.6. Storage Configuration
- A.7. NSS HA Cluster setup
- A.8. Quick test of HA set-up
- A.9. Useful commands and references
- A.10. Performance tuning on clients (updated May 2011)
- A.11. Example scripts and configuration files
- Appendix B: Medium to Large Configuration Upgrade
- Appendix C: Benchmarks and Test Tools
Dell HPC NFS Storage Solution - High Availability Configurations
On the server that is running the service, check that the resource IP is assigned. The
interface to the public network should have two IP addresses: the statically assigned
address and the floating service IP address.
[root@active ~]# ip addr show ib0
9: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast qlen 256
    link/infiniband 80:00:00:48:fe:80:00:00:00:00:00:00:00:02:c9:03:00:07:7f:a7 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
    inet 10.10.10.201/24 brd 10.10.10.255 scope global ib0
    inet 10.10.10.200/24 scope global secondary ib0
    inet6 fe80::202:c903:7:7fa7/64 scope link
       valid_lft forever preferred_lft forever
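The check above can be scripted for convenience. A minimal sketch, assuming the floating service IP is 10.10.10.200 as in the listing above; the helper name is hypothetical and not part of the recipe:

```shell
# Hypothetical helper: report whether the floating service IP is plumbed
# on the public interface. Reads `ip addr show ib0` output on stdin.
SERVICE_IP="10.10.10.200"
check_service_ip() {
    if grep -q "inet ${SERVICE_IP}/"; then
        echo "service IP ${SERVICE_IP} present"
    else
        echo "service IP ${SERVICE_IP} missing"
    fi
}

# Illustrative input copied from the listing above:
printf '    inet 10.10.10.201/24 brd 10.10.10.255 scope global ib0\n    inet 10.10.10.200/24 scope global secondary ib0\n' | check_service_ip
```

On a live server this would be fed real output, e.g. `ip addr show ib0 | check_service_ip`, on the node currently hosting the cluster service.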
15) Update the SELinux policy. This is needed to allow a cluster server to fence the other cluster
member and take ownership of the cluster service.
Use the SELinux policy Type Enforcement file (.te) provided in Section A.11 to build a policy
module. Install the policy module on both servers.
# checkmodule -M -m NSSHApolicy.te -o NSSHApolicy.mod
# semodule_package -o NSSHApolicy.pp -m NSSHApolicy.mod
# semodule -i NSSHApolicy.pp
Alternatively, an SELinux policy can be generated from logs of denied operations. Check
/var/log/audit/audit.log for denied operations. If there are none relating to the cluster,
test fencing as described in the “Quick test of HA set-up” section and then follow the steps below.
# grep avc /var/log/audit/audit.log | audit2allow -M NSSHApolicy
Install the module on both servers.
# semodule -i NSSHApolicy.pp
Reference: https://bugzilla.redhat.com/show_bug.cgi?id=588902
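Before generating a policy with audit2allow, it can help to confirm that the audit log actually contains AVC denial records, since those records are exactly what the `grep avc` filter feeds to the generator. A minimal sketch; the helper name and the sample log line are illustrative, not taken from a real system:

```shell
# Count AVC denial records on stdin -- the same records that
# `grep avc /var/log/audit/audit.log | audit2allow` consumes.
count_avc_denials() {
    grep -c 'avc:  denied'
}

# Hypothetical audit.log excerpt, for illustration only:
sample='type=AVC msg=audit(1300000000.123:456): avc:  denied  { execute } for pid=1234 comm="fence_ipmilan"'
printf '%s\n' "$sample" | count_avc_denials
```

If the count is zero, trigger a fence operation first (see the “Quick test of HA set-up” section) so that the denials appear in the log before running audit2allow.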
16) On both servers, turn off CLVM.
# chkconfig clvmd off; service clvmd stop
17) On both servers, turn off GFS.
# chkconfig gfs off; service gfs stop;
# chkconfig gfs2 off; service gfs2 stop
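Steps 16 and 17 repeat the same disable-and-stop pattern for each service. A small generator can emit the exact command lines to paste on both servers; the helper name is an assumption, not part of the recipe:

```shell
# Emit "chkconfig <svc> off; service <svc> stop" for each service name
# given, matching the per-service commands in steps 16 and 17.
disable_cmds() {
    for svc in "$@"; do
        printf 'chkconfig %s off; service %s stop\n' "$svc" "$svc"
    done
}

disable_cmds clvmd gfs gfs2
```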
18) Launch the luci web GUI (https://active:8084) to see the latest cluster configuration. An example
is shown in Figure 19.
NOTE: There are several known issues that have been worked around in the example
cluster.conf file. If any changes are saved via the web GUI, note these fixes and make
sure they are preserved.
a) If changes are saved using the luci web GUI, edit the cluster.conf file manually on one
server
- Change file system type fstype back to “xfs”
<fs device="/dev/DATA_VG/DATA_LV" force_fsck="0" force_unmount="1"
fstype="xfs" mountpoint="/mnt/xfs_data" name="XFS"