Global File System
Red Hat Global File System 5
This book provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System) for Red Hat Enterprise Linux 5.2.
Global File System: Red Hat Global File System
Copyright © 2008 Red Hat, Inc.

This material may only be distributed subject to the terms and conditions set forth in the Open Publication License, V1.0 or later, with the restrictions noted below (the latest version of the OPL is presently available at http://www.opencontent.org/openpub/). Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.
Introduction
1. Audience
2. Related Documentation
3. Document Conventions
4. Feedback
Introduction

The Global File System Configuration and Administration document provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). A GFS file system can be implemented in a standalone system or as part of a cluster configuration. For information about Red Hat Cluster Suite, refer to Red Hat Cluster Suite Overview and Configuring and Managing a Red Hat Cluster.
• Using GNBD with Global File System — Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.
• Linux Virtual Server Administration — Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS).
• Red Hat Cluster Suite Release Notes — Provides information about the current release of Red Hat Cluster Suite.
Additionally, the manual uses different strategies to draw your attention to pieces of information. In order of how critical the information is to you, these items are marked as follows:

Note
A note is typically information that you need to understand the behavior of the system.

Tip
A tip is typically an alternative way of performing a task.

Important
Important information is necessary, but possibly unexpected, such as a configuration change that will not persist after a reboot.
Bugzilla component: Documentation-cluster
Book identifier: Global_File_System(EN)-5.2 (2008-05-21T15:10)

By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
Chapter 1. GFS Overview The Red Hat GFS file system is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). A GFS file system can be implemented in a standalone system or as part of a cluster configuration. When implemented as a cluster file system, GFS employs distributed metadata and multiple journals. A GFS file system can be created on an LVM logical volume.
1. New and Changed Features

This section lists new and changed features included with the initial release of Red Hat Enterprise Linux 5.

• GULM (Grand Unified Lock Manager) is not supported in Red Hat Enterprise Linux 5. If your GFS file systems use the GULM lock manager, you must convert the file systems to use the DLM lock manager. This is a two-part process.
  • While running Red Hat Enterprise Linux 4, convert your GFS file systems to use the DLM lock manager.
Economy and Performance

The GFS SAN configuration in Figure 1.1, “GFS with a SAN” provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 125 GFS nodes.

Figure 1.1. GFS with a SAN
Figure 1.2. GFS and GNBD with a SAN

Figure 1.3, “GFS and GNBD with Directly Connected Storage” shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client. Application failover can be fully automated with Red Hat Cluster Suite.
Figure 1.3. GFS and GNBD with Directly Connected Storage

3. GFS Software Components

Table 1.1, “GFS Software Subsystem Components” summarizes the GFS software components.

gfs.ko
Kernel module that implements the GFS file system and is loaded on GFS cluster nodes.

lock_dlm.ko
A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko, and communicates with the DLM lock manager in Red Hat Cluster Suite.
4. Before Setting Up GFS

Before you install and set up GFS, note the following key characteristics of your GFS file systems:

GFS nodes
Determine which nodes in the Red Hat Cluster Suite will mount the GFS file systems.

Number of file systems
Determine how many GFS file systems to create initially. (More file systems can be added later.)

File system name
Determine a unique name for each file system. Each file system name is required in the form of a parameter variable.
Chapter 2. Getting Started

This chapter describes procedures for initial setup of GFS and contains the following sections:

• Section 1, “Prerequisite Tasks”
• Section 2, “Initial Setup Tasks”

1. Prerequisite Tasks

Before setting up Red Hat GFS, make sure that you have noted the key characteristics of the GFS nodes (refer to Section 4, “Before Setting Up GFS”). Also, make sure that the clocks on the GFS nodes are synchronized.
2. Create GFS file systems on logical volumes created in Step 1. Choose a unique name for each file system. For more information about creating a GFS file system, refer to Section 1, “Creating a File System”.
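The initial setup can be sketched as the following command sequence; the volume group, cluster name, file system name, and mount point (vg01, alpha, gfs1, /mnt/gfs1) are illustrative placeholders, not values from this guide:

```shell
# Step 1 (assumed done beforehand): create a logical volume for the file system.
lvcreate -L 10G -n gfslv vg01

# Step 2: create a GFS file system on the volume, using the lock_dlm locking
# protocol, a hypothetical cluster named alpha, and eight journals (one per node).
gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 8 /dev/vg01/gfslv

# Step 3: mount the file system on each node that should access it.
mkdir -p /mnt/gfs1
mount /dev/vg01/gfslv /mnt/gfs1
```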
Chapter 3. Managing GFS
Number
Specifies the number of journals to be created by the gfs_mkfs command; one journal is required for each node that mounts the file system. (More journals than are needed can be specified at creation time to allow for future expansion.)

BlockDevice
Specifies a volume.

Examples

In these examples, lock_dlm is the locking protocol that the file system uses, since this is a clustered file system. The cluster name is alpha, and the file system name is mydata1. The file system contains eight journals and is created on /dev/vg01/lvol0.

gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0

In the following example, a second file system, mydata2, is created on /dev/vg01/lvol1.
gfs_mkfs -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1

mkfs -t gfs -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1

Complete Options

Table 3.1, “Command Options: gfs_mkfs” describes the gfs_mkfs command options.

-b BlockSize
Sets the file system block size to BlockSize. Default block size is 4096 bytes.

-D
Enables debugging output.

-h
Help. Displays available options.
-t LockTableName
Used in a clustered file system. This parameter has two parts separated by a colon (no spaces), as follows: ClusterName:FSName. ClusterName is the name of the Red Hat cluster for which the GFS file system is being created. The cluster name is set in the /etc/cluster/cluster.conf file via the Cluster Configuration Tool and displayed in the Cluster Status Tool in the Red Hat Cluster Suite cluster management GUI.

2. Mounting a File System
-o acl
GFS-specific option to allow manipulating file ACLs.

BlockDevice
Specifies the block device where the GFS file system resides.

MountPoint
Specifies the directory where the GFS file system should be mounted.

Example

In this example, the GFS file system on /dev/vg01/lvol0 is mounted on the /mydata1 directory.

mount /dev/vg01/lvol0 /mydata1

Complete Usage

mount BlockDevice MountPoint -o option

The -o option argument consists of GFS-specific options (refer to Table 3.2, “GFS-Specific Mount Options”).
By default, using lock_nolock automatically turns on the localcaching and localflocks flags. Caution: This option should not be used when GFS file systems are shared.

ignore_local_fs
Tells GFS that it is running as a local file system. GFS can then turn on selected optimization capabilities that are not available when running in cluster mode. Caution: This option should not be used when GFS file systems are shared.
3. Unmounting a File System

The GFS file system can be unmounted the same way as any Linux file system — by using the umount command.

Note
The umount command is a Linux system command. Information about this command can be found in the Linux umount command man pages.

Usage

umount MountPoint

MountPoint
Specifies the directory where the GFS file system is currently mounted.

4. Displaying GFS Tunable Parameters
gfs_tool gettune MountPoint

MountPoint
Specifies the directory where the GFS file system is mounted.

Examples

In this example, all GFS tunable parameters for the file system on the mount point /mnt/gfs are displayed.
statfs_fast = 0

5. GFS Quota Management

File-system quotas are used to limit the amount of file system space a user or group can use. A user or group does not have a quota limit until one is set. GFS keeps track of the space used by each user and group even when there are no limits in place. GFS updates quota information in a transactional way so system crashes do not require quota usages to be reconstructed.
gfs_quota warn -g Group -l Size -f MountPoint

User
A user ID to limit or warn. It can be either a user name from the password file or the UID number.

Group
A group ID to limit or warn. It can be either a group name from the group file or the GID number.

Size
Specifies the new value to limit or warn. By default, the value is in units of megabytes. The additional -k, -s and -b flags change the units to kilobytes, sectors, and file system blocks, respectively.
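As a sketch of these options, the following commands set a hypothetical hard limit and warn limit; the user name, group name, sizes, and mount point are placeholders:

```shell
# Set a 1024-megabyte hard limit for a hypothetical user "bert"
# on the file system mounted at /mnt/gfs.
gfs_quota limit -u bert -l 1024 -f /mnt/gfs

# Set a warn limit of 50 kilobytes for a hypothetical group "users";
# the -k flag changes the units from megabytes to kilobytes.
gfs_quota warn -g users -l 50 -k -f /mnt/gfs
```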
gfs_quota get -u User -f MountPoint

Displaying Quota Limits for a Group

gfs_quota get -g Group -f MountPoint

Displaying Entire Quota File

gfs_quota list -f MountPoint

User
A user ID to display information about a specific user. It can be either a user name from the password file or the UID number.

Group
A group ID to display information about a specific group. It can be either a group name from the group file or the GID number.
The hard limit set for the user or group. This value is zero if no limit has been set.

Value
The actual amount of disk space used by the user or group.

Comments
When displaying quota information, the gfs_quota command does not resolve UIDs and GIDs into names if the -n option is added to the command line. Space allocated to GFS's hidden files can be left out of displayed values for the root UID and GID by adding the -d option to the command line.

Synchronizing Quotas
You can use the gfs_quota sync command to synchronize the quota information from a node to the on-disk quota file between the automatic updates performed by GFS.

Usage

Synchronizing Quota Information

gfs_quota sync -f MountPoint

MountPoint
Specifies the GFS file system to which the actions apply.

Tuning the Time Between Synchronizations

gfs_tool settune MountPoint quota_quantum Seconds

MountPoint
Specifies the GFS file system to which the actions apply.
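For example, assuming a file system mounted at /mnt/gfs (a placeholder), the following commands force an immediate synchronization and then shorten the automatic synchronization interval:

```shell
# Write this node's in-memory quota changes to the on-disk quota file now.
gfs_quota sync -f /mnt/gfs

# Lower the time between automatic quota synchronizations to 60 seconds.
gfs_tool settune /mnt/gfs quota_quantum 60
```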
Enforcement is done by changing a tunable parameter, quota_enforce, with the gfs_tool command. The quota_enforce parameter must be disabled or enabled on each node where quota enforcement should be disabled/enabled. Each time the file system is mounted, enforcement is enabled by default. (Disabling is not persistent across unmounts.)
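A minimal sketch of toggling enforcement on one node, assuming a file system mounted at the placeholder mount point /mnt/gfs:

```shell
# Disable quota enforcement on this node (not persistent across unmounts).
gfs_tool settune /mnt/gfs quota_enforce 0

# Re-enable quota enforcement (the default state after each mount).
gfs_tool settune /mnt/gfs quota_enforce 1
```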
Disabling/Enabling Quota Accounting

Quota accounting can be disabled by setting the quota_account tunable parameter to 0. This must be done on each node and after each mount. (The 0 setting is not persistent across unmounts.) Quota accounting can be enabled by setting the quota_account tunable parameter to 1. To see the current values of the GFS tunable parameters, including quota_account, you can use the gfs_tool gettune command, as described in Section 4, “Displaying GFS Tunable Parameters”.
# gfs_tool settune /gfs quota_account 1
# gfs_quota init -f /gfs

6. Growing a File System

The gfs_grow command is used to expand a GFS file system after the device where the file system resides has been expanded. Running a gfs_grow command on an existing GFS file system fills all spare space between the current end of the file system and the end of the device with a newly initialized GFS file system extension.
After running the gfs_grow command, you can run a df MountPoint command on the file system to check that the new space is now available in the file system.

Examples

In this example, the underlying logical volume for the file system on the /mnt/gfs directory is extended, and then the file system is expanded.

[root@tng3-1 ~]# lvextend -L35G /dev/gfsvg/gfslv
Extending logical volume gfslv to 35.
7. Adding Journals to a File System

The gfs_jadd command is used to add journals to a GFS file system after the device where the file system resides has been expanded. Running a gfs_jadd command on a GFS file system uses space between the current end of the file system and the end of the device where the file system resides. When the fill operation is completed, the journal index is updated.
Complete Usage

gfs_jadd [Options] {MountPoint | Device} [MountPoint | Device]

MountPoint
Specifies the directory where the GFS file system is mounted.

Device
Specifies the device node of the file system.

Table 3.4, “GFS-specific Options Available When Adding Journals” describes the GFS-specific options that can be used when adding journals to a GFS file system.

-h
Help. Displays short usage message.
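As an illustration, adding a single journal for a newly added node might look like the following; the -j journal-count flag and the mount point /mnt/gfs are assumptions for this sketch:

```shell
# Add one journal to the GFS file system mounted at /mnt/gfs.
gfs_jadd -j 1 /mnt/gfs
```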
8. Direct I/O

Direct I/O is a feature of the file system whereby file reads and writes go directly from the applications to the storage device, bypassing the operating system read and write caches. Direct I/O is used only by applications (such as databases) that manage their own caches. An application invokes direct I/O by opening a file with the O_DIRECT flag. Alternatively, GFS can attach a direct I/O attribute to a file, in which case direct I/O is used regardless of how the file is opened.
File
Specifies the file where the directio flag is assigned.

Example

In this example, the command sets the directio flag on the file named datafile in directory /mnt/gfs.

gfs_tool setflag directio /mnt/gfs/datafile

The following command checks whether the directio flag is set for /mnt/gfs/datafile. The output has been elided to show only the relevant information.

[root@tng3-1 gfs]# gfs_tool stat /mnt/gfs/datafile
mh_magic = 0x01161970
...
Flags:
  directio

8.3. GFS Directory Attribute
Directory
Specifies the directory where the inherit_directio flag is set.

Example

In this example, the command sets the inherit_directio flag on the directory named /mnt/gfs/data.

gfs_tool setflag inherit_directio /mnt/gfs/data

This command displays the flags that have been set for the /mnt/gfs/data directory. The full output has been truncated.

[root@tng3-1 gfs]# gfs_tool stat /mnt/gfs/data
...
Flags:
  inherit_directio

9. Data Journaling
gfs_tool setflag inherit_jdata Directory
gfs_tool clearflag inherit_jdata Directory

Setting and Clearing the jdata Flag

gfs_tool setflag jdata File
gfs_tool clearflag jdata File

Directory
Specifies the directory where the flag is set or cleared.

File
Specifies the zero-length file where the flag is set or cleared.

Examples

This example shows setting the inherit_jdata flag on a directory.
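The elided example presumably resembles this sketch; the directory and file paths are placeholders:

```shell
# Enable data journaling for all files subsequently created in this directory.
gfs_tool setflag inherit_jdata /mnt/gfs/data

# Enable data journaling on an existing zero-length file.
gfs_tool setflag jdata /mnt/gfs/datafile
```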
Each file inode and directory inode has three time stamps associated with it:

• ctime — The last time the inode status was changed
• mtime — The last time the file (or directory) data was modified
• atime — The last time the file (or directory) data was accessed

If atime updates are enabled, as they are by default on GFS and other Linux file systems, then every time a file is read, its inode needs to be updated.
10.2. Tune GFS atime Quantum

When atime updates are enabled, GFS (by default) only updates them once an hour. The time quantum is a tunable parameter that can be adjusted using the gfs_tool command. Each GFS node updates the access time based on the difference between its system time and the time recorded in the inode. It is required that system clocks of all GFS nodes in a cluster be synchronized.
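A sketch of adjusting the quantum on one node; the mount point and interval are placeholders, and atime_quantum is assumed to be the tunable's name:

```shell
# Update access times at most once a day (86400 seconds) rather than
# once an hour; repeat on each node that mounts the file system.
gfs_tool settune /mnt/gfs atime_quantum 86400
```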
11. Suspending Activity on a File System

Usage

Start Suspension

gfs_tool freeze MountPoint

End Suspension

gfs_tool unfreeze MountPoint

MountPoint
Specifies the file system.

Examples

This example suspends writes to file system /gfs.

gfs_tool freeze /gfs

This example ends suspension of writes to file system /gfs.

gfs_tool unfreeze /gfs

12. Displaying Extended GFS Information and Statistics

You can use the gfs_tool command to gather a variety of details about GFS.
MountPoint
Specifies the file system to which the action applies.

Example

This example reports extended file system usage about file system /mnt/gfs.

Displaying GFS Counters
The number of gfs_glock structures that currently exist in gfs.

locks held
The number of existing gfs_glock structures that are not in the UNLOCKED state.

freeze count
A freeze count greater than 0 means the file system is frozen. A freeze count of 0 means the file system is not frozen. Each gfs_tool freeze command increments this count. Each gfs_tool unfreeze command decrements this count.

incore inodes
The number of gfs_inode structures that currently exist in gfs.
glocks reclaimed
The number of glocks which have been reclaimed.

glock dq calls
The number of glocks released since the file system was mounted.

glock prefetch calls
The number of glock prefetch calls.

lm_lock calls
The number of times the lock manager has been contacted to obtain a lock.

lm_unlock calls
The number of times the lock manager has been contacted to release a lock.

lm callbacks
The number of times the lock manager has been contacted to change a lock state.
MountPoint
Specifies the file system to which the action applies.

Example

This example reports statistics about the file system mounted at /mnt/gfs.
Displaying Extended Status

Note
The information that the gfs_tool stat command displays reflects internal file system information. This information is intended for development purposes only.

Usage

gfs_tool stat File

File
Specifies the file from which to get information.

Example

This example reports extended file status about file /gfs/datafile.
di_entries = 0
no_formal_ino =
no_addr = 0
di_eattr = 0
di_reserved = 00 00 00 00 ... (all zeros)

13. Repairing a File System

When nodes fail with the file system mounted, file system journaling allows fast recovery. However, if a storage device loses power or is physically disconnected, file system corruption may occur.
Usage other command options. Usage gfs_fsck -y BlockDevice -y The -y flag causes all questions to be answered with yes. With the -y flag specified, the gfs_fsck command does not prompt you for an answer before making changes. BlockDevice Specifies the block device where the GFS file system resides. Example In this example, the GFS file system residing on block device /dev/gfsvg/gfslv is repaired. All queries to repair are automatically answered with yes.
Starting pass1c
Looking for inodes containing ea blocks...
Pass1c complete
Starting pass2
Checking directory inodes.
Pass2 complete
Starting pass3
Marking root inode connected
Checking directory linkage.
Pass3 complete
Starting pass4
Checking inode reference counts.
Pass4 complete
Starting pass5
...
Updating Resource Group 92
Pass5 complete
Writing changes to disk
Syncing the device.
Freeing buffers.

14. Context-Dependent Path Names
Variable
Specifies a special reserved name from a list of values (refer to Table 3.5, “CDPN Variable Values”) to represent one of multiple existing files or directories. This string is not the name of an actual file or directory itself. (The real files or directories must be created in a separate step using names that correlate with the type of variable used.)
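As an illustrative sketch (the @hostname variable and all paths here are assumptions, not an example from this guide), a CDPN link is created with ln -s, and the real per-node targets are created in a separate step:

```shell
# Create a context-dependent link; on each node, "log" resolves to the
# directory whose name matches that node's hostname.
cd /mnt/gfs
ln -s @hostname log

# Separately create the real directories, named after each cluster node
# (hypothetical hostnames shown).
mkdir n01 n02 n03
```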