HP StoreAll OS User Guide

Abstract

This guide describes how to configure and manage StoreAll software file systems and how to use NFS, SMB, FTP, and HTTP to access file system data. The guide also describes the following file system features: quotas, remote replication, snapshots, data retention and validation, data tiering, and file allocation. The guide is intended for system administrators managing 9300 Storage Gateway, 9320 Storage, X9720 Storage, 9730 Storage, 8800 Storage, and 8200 Storage.
© Copyright 2009, 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
1 Using StoreAll software file systems

File system operations

The following diagram highlights the operating principles of the StoreAll file system. The topology in the diagram reflects the architecture of the HP 9320, which uses a building block of server pairs (known as couplets) with SAS-attached storage. In the diagram:
• There are four file serving nodes, SS1–SS4. These nodes are also called segment servers.
(Specifically, a segment need not be a complete, rooted directory tree.) Segments can be any size, and different segments can be different sizes. The location of files and directories within particular segments in the file space is independent of their respective and relative locations in the namespace. For example, a directory (Dir1) can be located on one segment, while the files contained in that directory (File1 and File2) are resident on other segments.
The segment server initiating the operation can read files directly from the segment across the SAN; this is called a SAN READ. The segment server initiating the operation routes writes over the IP network to the segment server owning the segment. That server then writes data to the segment. All reads and writes must be routed over the IP network between the segment servers. Step 7 assumed that the server had to go to a segment to read a file.
• Data tiering. This feature allows you to set a preferred tier where newly created files will be stored. You can then create a tiering policy to move files from initial storage, based on file attributes such as modification time, access time, file size, or file type. See “Using data tiering” (page 377).
• File allocation. This feature allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client.
2 Creating and mounting file systems

This chapter describes how to create file systems and mount or unmount them.

Creating a file system

You can create a file system using the New Filesystem Wizard provided with the StoreAll Management Console, or you can use CLI commands. The New Filesystem Wizard also allows you to create an NFS export or an SMB share for the file system.
One of the following messages might appear:
• The requested operation encountered one or more failures.
• Creating filesystem was successful.
In either case, the creation of the file system continues, but the browser is no longer required to display the wizard. If the first message displays, the wizard will close automatically. If the second message displays, you must click Cancel to close the wizard.
3. On the Configure Options dialog box, enter a name for the file system. To mount the file system, select Mount Filesystem and specify a mountpoint. See “Managing mountpoints and mount/unmount operations” (page 25) for more information on mountpoints.
4. To enable StoreAll Express Query on the file system, select Enable Express Query on the Express Query dialog box. Express Query is a database used to record metadata state changes occurring on the file system. Express Query cannot be turned off while Auditing is enabled. Select the time for Express Query maintenance tasks to run daily. The Express Query maintenance tasks will run daily as long as Express Query is enabled. See “Express Query” (page 294) for more information.
5. Use the WORM/DATA Retention dialog box to enable data retention and data validation.
You can configure the following:
• Enable Data Retention. Select this option to enable data retention. For more information, see “Managing data retention” (page 274).
◦ Retention Mode. Select one of the following modes:

Table 1 Retention Modes

Enterprise: The expiration date of the retention period can be extended to a later date.
Relaxed: The expiration date of the retention period can be moved in (shortened) or extended to a later date.
WORM (non-retained) files can be deleted at any time; WORM-retained files can be deleted only after the file's retention period has expired. To manage only WORM-retained files, set the default retention period to a non-zero value. WORM-retained files then use this period by default; however, you can assign a different retention period if desired. To manage both WORM (non-retained) and WORM-retained files, uncheck Set Default Retention Period. The default retention period is then set to 0 seconds.
which is enabled along with Express Query. The Filesystem Metadata Cleaner task runs daily at 1:30 a.m. local time by default.
• Schedule Data Validation. Select this option to schedule periodic validation scans on the file system. Use the default schedule, or click Modify to open the Data Validation Scan Schedule dialog box and configure your own schedule.
6. Use the Auditing Options dialog box to enable auditing and select the events you want to log.
7. Use the Default File Shares dialog box to create an NFS export and/or an SMB share at the root of the file system. The default settings are used. See “Using NFS” (page 58) and “Using SMB” (page 86) for more information. 8. Review the Summary to ensure that the file system is configured properly. If necessary, you can return to a dialog box and make any corrections.
The Data Retention tab allows you to change the data retention configuration. The file system must be unmounted. See “Configuring data retention on existing file systems” (page 276) for more information. NOTE: Data retention cannot be enabled on a file system created on StoreAll software 5.6 or earlier versions until the file system is upgraded.
File limit for directories

The maximum number of files in a directory depends on the length of the file names and on the names themselves. The maximum size of a directory is approximately 4 GB (double indirect blocks). An average file name length of eight characters allows about 12 million entries. However, because directories are hashed, it is unlikely that a directory can actually contain this number of entries: files with a similar naming pattern are hashed into the same bucket.
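As a back-of-envelope check of the figures quoted above (an illustration only; the actual per-entry overhead depends on StoreAll's directory implementation), dividing the approximate 4 GB directory limit by the approximate 12 million entries gives the implied average space per entry:

```shell
# Back-of-envelope check of the directory figures quoted above:
# a ~4 GB directory limit divided by ~12 million entries implies
# the average space consumed per entry when names average 8 chars.
dir_limit_bytes=$((4 * 1024 * 1024 * 1024))
max_entries=12000000
per_entry=$((dir_limit_bytes / max_entries))
echo "implied bytes per directory entry: ${per_entry}"
```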
Mounting a file system

IMPORTANT: Keep in mind:
• Mount options do not persist unless they are set at the mountpoint. Mount options that are not set at the mountpoint are reset to match the mount options on the mountpoint when the file system is rebooted or remounted.
• The ibrix_fs -i and ibrix_mountpoint -l commands display only the mount options for the mountpoint.
• The mount command displays the noatime option. Ignore the noatime option; it is no longer used.
Select the mount options that apply to your configuration:
• atime: Update the inode access time when a file is accessed.
• nodiratime: Do not update the directory inode access time when the directory is accessed.
• nodquotstatfs: Disable file system reporting based on directory tree quota limits.
• path: For StoreAll clients only, mount on the specified subdirectory path of the file system instead of the root.
• remount: Remount a file system without taking it offline.
Mounting and unmounting file systems locally on StoreAll clients

On both Linux and Windows StoreAll clients, you can locally override a mount. For example, if the Fusion Manager configuration database has a file system marked as mounted for a particular client, that client can locally unmount the file system.

Linux StoreAll clients

To mount a file system locally, use the following command on the StoreAll Linux client:
ibrix_lwmount -f FSNAME -m MOUNTPOINT -o mountpath=/PATHNAME
To remove a client access entry, select the affected file system, and then select Client Exports from the lower Navigator. Select the access entry from the Client Exports display, and click Delete. To manage access entries using the CLI, see the ibrix_exportfs command in the HP StoreAll OS CLI Reference Guide.

Using Export Control

When Export Control is enabled on a file system, by default, StoreAll clients have no access to the file system.
3 Configuring quotas

Quotas can be assigned to individual users or groups, or to a directory tree. Individual quotas limit the amount of storage or the number of files that a user or group can use in a file system. Directory tree quotas limit the amount of storage and the number of files that can be created within a specific directory tree of a file system. Note the following:
• By default, quotas are enabled when you create and mount a file system using the GUI.
To view the current quotas configuration, select the file system and then select Quotas from the lower Navigator. The Quota Summary panel specifies whether quotas are enabled and lists the grace periods for blocks and inodes. From the Quota Summary panel, you can select one of the following options to manage quotas:
• Quotas Wizard: Use the wizard to create, modify, or delete user, group, or directory quotas for a file system.
• Modify: Enable or disable quotas on the file system and set grace periods.
Setting quotas for users, groups, and directories

Before configuring quotas, the quota feature must be enabled on the file system and the file system must be mounted.

NOTE: For the purpose of setting quotas, no UID or GID can exceed 2,147,483,647. Setting user quotas to zero removes the quotas.

The Quota Management Wizard can be used to create, modify, or delete quotas for users, groups, and directories in the selected file system. Click Quotas Wizard on the Quota Summary panel to open the wizard.
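Because quota UIDs and GIDs must stay within the limit noted above, a quick range check before assigning quotas can catch out-of-range IDs. This is a sketch only; MAX_ID and id_in_range are illustrative names, not part of the StoreAll CLI:

```shell
# Sanity-check numeric IDs against the documented quota limit of
# 2,147,483,647 before assigning quotas. The helper below is a
# hypothetical convenience, not a StoreAll command.
MAX_ID=2147483647
id_in_range() {
    # true (exit 0) if the numeric ID is within the supported range
    [ "$1" -ge 0 ] && [ "$1" -le "$MAX_ID" ]
}
id_in_range 505 && echo "505 is usable for quotas"
id_in_range 4294967295 || echo "4294967295 exceeds the limit"
```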
User Quotas

The User Quotas dialog box is used to create, modify, or delete quotas for users. To add a user quota, enter the required information and click Add. Users having quotas are listed in the table at the bottom of the dialog box. To modify quotas for a user, check the box preceding that user. You can then adjust the quotas as needed. To delete quotas for a user, check the box and click Delete.

Group Quotas

The Group Quotas dialog box is used to create, modify, or delete quotas for groups. To add a group quota, enter the required information for the group. Groups having quotas are listed in the table at the bottom of the dialog box. To modify quotas for a group, check the box preceding that group. You can then adjust the quotas as needed. To delete quotas for a group, check the box and click Delete.

Directory Quotas

The Directory Quotas dialog box is used to create, modify, or delete quotas for directories. To add a directory quota, enter the required information and click Add.
Summary

The Summary lists the quotas configuration you specified. Click Back if any changes are needed, or click Finish to complete the wizard. To configure quotas using the CLI, use the ibrix_edquota command. See the HP StoreAll OS CLI Reference Guide for more information.

Using a quotas file

Quota limits can be imported into the cluster from the quotas file, and existing quotas can be exported to the file. See “Format of the quotas file” (page 36) for the format of the file.
Exporting quotas to a file

Select the file system, select Quotas from the lower Navigator, and then click Export on the Quota Summary panel.

Format of the quotas file

The quotas file contains a line for each user, group, or directory tree assigned a quota. When you add quota entries, the lines must use one of the following formats. The “A” format specifies a user or group ID. The “B” format specifies a user or group name, or a directory tree that has already been assigned an identifier name.
NOTE: When a quotas file is imported, the quotas are stored in a different, internal format. When a quotas file is exported, it contains lines using the internal format. However, when adding entries, you must use the A, B, or C format.
The Task Summary panel displays the progress of the scan. If necessary, select Stop to stop the scan. To run an online quota check from the CLI, use the ibrix_onlinequotacheck command. See the HP StoreAll OS CLI Reference Guide for more information.

Configuring email notifications for quota events

If you would like to be notified when certain quota events occur, you can set up email notification for those events. On the GUI, select Email Configuration.
Moving directories

After moving a directory into or out of a directory containing quotas, run the ibrix_onlinequotacheck command as follows:
• After moving a directory from a directory tree with quotas (the source) to a directory without quotas (the destination), take these steps:
1. Run ibrix_onlinequotacheck in DTREE_CREATE mode on the source directory tree to remove the usage information for the moved directory.
4 Maintaining file systems

This chapter describes how to extend a file system, rebalance segments, delete a file system or file system component, and check or repair a file system. The chapter also includes file system troubleshooting information.

Best practices for file system performance

It is important to monitor the space used in the segments making up the file system.
Viewing physical volume information

The following command lists detailed information about physical volumes:
ibrix_pv -i
For each physical volume, the output includes the following information:
# ibrix_pv -i
PV_NAME  SIZE(MB)  VG_NAME  LUN_GROUP  LV_NAME  FILESYSTEM  SEGNUM  USED%  SEGOWNER  DEVICE ON SEGOWNER
d1       3,072     ivg1                ilv1     ifs1        1       99     vm3       /dev/sdb
d2       3,072     ivg2                ilv2     ifs1        2       99     vm2       /dev/sdc
The following command provides h
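Following the best practice above of monitoring segment usage, output in this format can be filtered with standard tools to flag physical volumes whose segments are nearly full. This is a sketch: it assumes the column layout shown above and runs against a captured sample rather than a live cluster, where the input would be piped from ibrix_pv -i:

```shell
# Flag physical volumes whose USED% (field 7 in the data rows of
# the sample layout above) is at or above a threshold. Here a
# captured two-row sample stands in for live `ibrix_pv -i` output.
threshold=90
sample='d1 3,072 ivg1 ilv1 ifs1 1 99 vm3 /dev/sdb
d2 3,072 ivg2 ilv2 ifs1 2 99 vm2 /dev/sdc'
echo "$sample" | awk -v t="$threshold" '$7 >= t { print $1, $7 "% used" }'
```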
FREE(MB): Free (unallocated) space, in MB, available on this volume group.
USED%: Percentage of total space in the volume group allocated to logical volumes.
FS_NAME: File system to which this logical volume belongs.
PV_NAME: Name of the physical volume used to create this volume group.
SIZE (MB): Size, in MB, of the physical volume used to create this volume group.
LV_NAME: Names of logical volumes created from this volume group.
GEN: Number of times the structure of the file system has changed (for example, new segments were added).
NUM_SEGS: Number of file system segments.

To view detailed information about file systems, use the ibrix_fs -i command. To view information for all file systems, omit the -f FSLIST argument.
ibrix_fs -i [-f FSLIST]
The following fields are reported by ibrix_fs -i.
Total Segments: Number of segments.
Audit Report Schedule: Time the audit report is run each day (for example, Daily at 02:00 AM).
Audit Report Expiration Policy: The expiration limit for audit reports (for example, 45 days).
File replicas: NA.
Dir replicas: NA.
Mount Options: Possible root segment inodes. This value is used internally.
Root Segment Hint: Current root segment number, if known. This value is used internally.
Root Segment Replica(s) Hint: Possible segment numbers for root segment replicas.
• .audit
• .webdav

1 There are a few exceptions in the .archiving directory. Some files in this directory are created for user consumption in certain subdirectories of .archiving (described in various places in this user guide, for example validation summary outputs 1-0.sum, and audit log reports), and those specific files can be deleted if desired, but other files should not be deleted.
On the CLI, use the ibrix_fs command to extend a file system. Segments are added to the file serving nodes in a round-robin manner. If tiering rules are defined for the file system, the -t option is required. Avoid expanding a file system while a tiering job is running. The expansion takes priority and the tiering job is terminated.
IMPORTANT: Rebalancing is a storage- and file-system-intensive process which, in some circumstances, can take days to complete. Rebalancing tasks are best run at a time when clients are not generating significant load.

How rebalancing works

During a rebalance operation on a file system, files are moved from source segments to destination segments.
The Rebalance All dialog box allows you to rebalance all segments in the file system or in the selected tier. The Rebalance Advanced dialog box allows you to select the source and destination segments for the rebalance operation.
Rebalancing segments from the CLI

To rebalance all segments, use the following command. Include the -a option to run the rebalance operation in analytical mode.
Viewing the status of rebalance tasks

Use the following commands to view status for jobs on all file systems or only on the file systems specified in FSLIST:
ibrix_rebalance -l [-f FSLIST]
ibrix_rebalance -i [-f FSLIST]
The first command reports summary information. The second command lists jobs by task ID and file system and indicates whether the job is running or stopped. Jobs that are in the analysis (Coordinator) phase are listed separately from those in the implementation (Worker) phase.
For example, to delete segments ilv1 and ilv2:
ibrix_lv -d -s ilv1,ilv2
To delete volume groups:
ibrix_vg -d -g VGLIST
For example, to delete volume groups ivg1 and ivg2:
ibrix_vg -d -g ivg1,ivg2
To delete physical volumes:
ibrix_pv -d -p PVLIST [-h HOSTLIST]
For example, to delete physical volumes d1, d2, and d3:
ibrix_pv -d -p d[1-3]

Deleting file serving nodes and StoreAll clients

Before deleting a file serving node, unmount all file systems from it and migrate any segments that it owns to a different node.
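The deletions above follow a dependency order: segments first, then volume groups, then physical volumes. The sequence can be sketched as a small dry-run script; the DRYRUN switch and run helper are illustrative conveniences (the ibrix_* tools exist only on StoreAll nodes, so by default the commands are printed, not executed):

```shell
#!/bin/sh
# Sketch of the teardown order documented above: segments, then
# volume groups, then physical volumes. With DRYRUN=1 (the default
# here) each command is printed instead of executed, since the
# ibrix_* tools are available only on StoreAll nodes.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi
}
run ibrix_lv -d -s ilv1,ilv2       # 1. delete the segments (logical volumes)
run ibrix_vg -d -g ivg1,ivg2       # 2. delete the now-empty volume groups
run ibrix_pv -d -p 'd[1-3]'        # 3. delete the underlying physical volumes
```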
NOTE: During an ibrix_fsck run, an INFSCK flag is set on the file system to protect it. If an error occurs during the job, you must explicitly clear the INFSCK flag (see “Clearing the INFSCK flag on a file system” (page 52)), or you will be unable to mount the file system.

Analyzing the integrity of a file system on all segments

Observe the following requirements when executing ibrix_fsck:
• Turn off automated failover by executing the following command:
ibrix_server -m -U [-h SERVERNAME]
Troubleshooting file systems

Deleting file systems that are enabled for data retention

When you delete a file system that is enabled for data validation and has a scheduled validation scan, the corresponding scheduled validation task is not deleted.
filter = [ "a|^/dev/sd.*|", "r|^.*|" ]
Contact HP Support if you need assistance.

Cannot mount on a StoreAll client

Verify the following:
• The file system is mounted and functioning on the file serving nodes.
• The mountpoint exists on the StoreAll client. If not, create the mountpoint locally on the client.
• Software management services have been started on the StoreAll client (see “Starting and stopping processes” in the administrator guide for your system).
1. Identify the file serving node that owns the segment. This information is reported on the Filesystem Segments panel on the Management Console.
2. Run phase 0 and phase 1 of the ibrix_fsck command to verify the issue with the segment. You can run the command on the file system or specify the segment name using the -s LVNAME parameter:
ibrix_fsck -p 0 -f FSNAME [-s LVNAME] [-c]
ibrix_fsck -p 1 -f FSNAME [-s LVNAME] [-c]
host_name ...................... ib50-87 <-- Verify owner of segment ref_counter .................... 1038 state_flags .................... SEGMENT_LOCAL SEGMENT_PREFERED SEGMENT_DHB
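Metadata formatted like the dump above (a label, dot leaders, then the value) can be parsed with standard tools to pull out the owning host. This is a sketch against a captured sample of that output, not a live query:

```shell
# Extract the owning host from segment metadata formatted like the
# dump above; awk takes the last field of the host_name line. The
# sample below is a captured fragment, not live cluster output.
dump='host_name ...................... ib50-87
ref_counter .................... 1038
state_flags .................... SEGMENT_LOCAL SEGMENT_PREFERED SEGMENT_DHB'
owner=$(echo "$dump" | awk '/^host_name/ { print $NF }')
echo "segment owner: $owner"
```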
Segment evacuation task might fail after an upgrade After upgrading the StoreAll operating system, the segment evacuation task might fail with the message Completed with error. This could occur in a rare situation when the file system only has two segments and both the source and destination segments contain the file and its replica. To resolve this issue, add another segment and restart the evacuation process.
5 Using NFS

To allow NFS clients to access a StoreAll file system, the file system must be exported. You can export a file system using the GUI or CLI. By default, StoreAll file systems and directories follow POSIX semantics and file names are case-sensitive for Linux/NFS users. If you prefer to use Windows semantics for Linux/NFS users, you can make a file system or subdirectory case-insensitive.

NOTE: The latest release of NFS supported by the current version of the StoreAll software is NFS version 3.
3. On the Settings window, enter or select the following:
• The clients allowed to access the share
• The permission level for the clients
• The privilege level for the clients
• Whether or not the export should be available from a backup server
Click Next.
4. On the Advanced Settings window, you can select additional options for the NFS share. When finished making selections, click Next.
5. On the Host Servers window, select the servers that will host the NFS share. By default, the share is hosted by all servers that have mounted the file system. Click Next. The Summary window shows the configuration of the share. You can go back and revise the configuration if necessary. When you click Finish, the export is created and appears on the File Shares and Object Store panel. NOTE: To export (or unexport) a file system from the CLI, use the ibrix_exportfs command.
files with the same name but different case might be confusing, and Windows users may be able to access only one of the files.

CAUTION: The case insensitivity feature breaks POSIX semantics and can cause problems for Linux utilities and applications.

Before enabling the case-insensitive feature, be sure the following requirements are met:
• The file system or directory must be created using StoreAll OS 6.0 or later.
• The file system must be mounted.
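The POSIX behavior described above can be demonstrated with ordinary shell commands: on a case-sensitive file system, names that differ only in case are distinct files, which is exactly the pair a case-insensitive Windows client cannot tell apart. This is a local illustration only, not run against a StoreAll share:

```shell
# On a POSIX (case-sensitive) file system, File1 and file1 coexist
# as two distinct directory entries; a case-insensitive client sees
# a naming collision instead.
demo_dir=$(mktemp -d)
touch "$demo_dir/File1" "$demo_dir/file1"
count=$(ls -1 "$demo_dir" | wc -l)
echo "distinct entries: $count"
rm -rf "$demo_dir"
```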
To set case insensitivity from the CLI, use the ibrix_caseinsensitive command. To view task information related to case insensitivity, use the ibrix_task command. See the HP StoreAll OS CLI Reference Guide for more information.

Case insensitivity and operations affecting directories

A newly created directory retains the case-insensitive setting of its parent directory. When you use commands and utilities that create a new directory, that directory has the case-insensitive setting of its parent.
6 Configuring authentication for SMB, FTP, and HTTP

Overview

StoreAll software supports several services for authenticating users accessing shares on StoreAll file systems:
• Active Directory: Active Directory has an ID mapping submode, which can be configured as a secondary lookup.
• LDAP
• Local Users and Groups: Local Users and Groups can be used with Active Directory or LDAP.

NOTE: As of StoreAll version 6.5, Active Directory and LDAP can be used together.
sh /opt/likewise/bin/gen_ldap-lwtools.sh ldap-conf.conf -n
If the configuration looks correct, run the command with added security by removing all temporary files:
sh /opt/likewise/bin/gen_ldap-lwtools.sh ldap-conf.conf -rm
If you need to run the script over SSL/TLS, provide certificate details in the command as follows:
sh /opt/likewise/bin/gen_ldap-lwtools.sh ldap-conf.
Required attributes for templates

The following nonvirtual attributes are required; each is listed with its value and a description.
VERSION (value: any arbitrary string): Helps identify the configuration version uploaded. Potentially used for reports, audit history, and troubleshooting.
LDAPServerHost (value: host name or IP): An FQDN or IP. Typically, it is a front-ended switch or an IP LDAP proxy/balancer name/address for multiple backend high-availability LDAP servers.
IMPORTANT: If the user’s primary group in AD is not resolved to a GID number from either Active Directory or LDAP, the user will be denied access to StoreAll.

Configuring NIS

If you plan to use NIS ID mapping when configuring authentication, you must first configure NIS on each applicable StoreAll node. You will need the NIS server domain and IP address to complete this configuration. Complete the following steps on each StoreAll node:
1. Run the setup command as root:
[root@ib14-1~] #setup
7. Verify that NIS has been configured by running the following command to list the users from the NIS passwd map:
[root@ib14-1 ~]# ypcat passwd
nisuser4:$1$OyVY5RuU$TKUm8QXneWDLXLEIwBU0d/:505:506::/home/nisuser4:/bin/bash
nisuser3:$1$aiAW4uQm$SoTLUjlO1yChbLQpqzEI8.
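Each line that ypcat passwd returns uses the standard colon-separated passwd layout, so the UID and GID numbers that NIS ID mapping relies on can be pulled out with cut. This is shown against one of the sample lines above rather than a live NIS map:

```shell
# Extract the UID (field 3) and GID (field 4) from a passwd-format
# NIS map entry; these numeric IDs are what ID mapping resolves.
entry='nisuser4:$1$OyVY5RuU$TKUm8QXneWDLXLEIwBU0d/:505:506::/home/nisuser4:/bin/bash'
uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
echo "nisuser4 uid=$uid gid=$gid"
```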
NOTE: CIFS in the GUI has not been rebranded to SMB yet. CIFS is just a different name for SMB.

The wizard displays the window that corresponds to the option you selected:
• Active Directory—See “Active Directory” (page 69).
• LDAP—See “LDAP” (page 74).
• LDAP—ID Mapping. See “ID Mapping Name Service Selection” (page 70).
• Local Groups—See “Local Groups” (page 76).
• Local Users—See “Local Users” (page 77).
• Share Administrators—See “Windows Share Administrators” (page 79).
Linux static user mapping is optional. Do one of the following:

IMPORTANT: See “ID Mapping Name Service Selection” (page 70) for information about enabling Linux Static User Mapping. You can return to the wizard to modify settings if it is not enabled at the first pass and is later required.

• If you do not want to enable Linux Static User Mapping, leave it set to the default value of None.
1st Name Service: Enter the required name service in this field. Supported name services are NIS, FILES, and LDAP.
2nd Name Service: If more than one name service is required, you can enter an additional name service in this field. Supported name services are NIS, FILES, and LDAP.
3rd Name Service: If more than two name services are required, you can enter an additional name service in this field. Supported name services are NIS, FILES, and LDAP.
LDAP ID Mapping

If LDAP ID mapping is enabled and the system cannot locate a UID/GID in Active Directory, it searches for the UID/GID in LDAP. On the LDAP ID Mapping window, specify the search parameters:
LDAP Server Host: Enter the server name or IP address of the LDAP server host.
Port: Enter the LDAP server port (TCP port 389 for unencrypted or TLS encrypted; 636 for SSL encrypted).
Base of Search: Enter the LDAP base for searches.
NIS ID Mapping

When case-insensitive match is enabled, the NIS service will search for a matching case name through the Max Entries limit and use that name in associating a UID or GID. A case-insensitive match will be used if no case-matching name is found.
Max Entries: Enter the number of case-insensitive entries to look through to find a case-sensitive match when RequireNamecaseMatch and FirstCaseIndependentMatch are disabled. If a case-matching entry is found, it will be used (optional).
LDAP

To configure LDAP as the primary authentication mechanism for SMB shares, enter the server name or IP address of the LDAP server host and the password for the LDAP user account.

NOTE: If LDAP is the primary authentication service, Windows clients such as Explorer or MMC plug-ins cannot be used to add new users.
Enter the following information in the remaining fields:
Bind DN: Enter the LDAP user account used to authenticate to the LDAP server to read data, such as cn=hp9000-readonly-user,dc=entx,dc=net. This account must have privileges to read the entire directory. Write credentials are not required.
Write OU: Enter the OU (organizational unit) on the LDAP server to which configuration entries can be written. This OU must be pre-provisioned on the remote LDAP server.
When finished, click OK to return to the LDAP window in the wizard. Click Next on the LDAP window to continue.

Local Groups

NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as Explorer or MMC plug-ins cannot be used to add new users.

Specify local groups allowed to access shares. On the Local Groups page, enter the group name and, optionally, the GID and RID. If you do not assign a GID and RID, they are generated automatically.
Local Users NOTE: If Local Users and Groups is the primary authentication service, Windows clients such as Explorer or MMC plug-ins cannot be used to add new users. Specify local users allowed to access shares. On the Local Users page, enter a user name and password. Click Add to add the user to the Local Users list. When naming local users, you should be aware of the following: • User names must be unique. The new name cannot already be used by another user or group.
Local Users dialog box To provide account information for the user, click Advanced, which opens the Local Users dialog box.
Enter the following information for the user and click OK when finished. UID Enter the UID for this account. If you do not enter a value, the system will assign a UID. RID Enter the RID for this account. (The RID is the last n digits of the SID, ranging from 2000 to 40000000.) If you do not enter a value, the system will assign a RID. User Info Enter any other information you want to record for the user. Home Directory Enter the user's home directory. The default is /home/username.
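The RID/SID relationship described above can be illustrated in shell. This is an informal sketch only: the SID base is taken from the lw-find-user-by-id example output shown later in this guide, and the composition rule shown is the general Windows convention of appending the RID as the final SID component.

```shell
# The account SID is the domain SID base with the RID appended as the
# final hyphen-separated component. The base below is taken from the
# lw-find-user-by-id example output elsewhere in this guide.
SID_BASE="S-1-5-21-3681183244-3700010909-334885885"
rid=27276
sid="$SID_BASE-$rid"
echo "$sid"         # prints "S-1-5-21-3681183244-3700010909-334885885-27276"
echo "${sid##*-}"   # strips everything up to the last hyphen: prints "27276"
```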
To add an Active Directory or LDAP share administrator, enter the administrator name (such as domain\user1 or domain\group1) and click Add to add the administrator to the Windows Share Administrators list. To add an existing Local User as a share administrator, select the user and click Add. Summary The Summary page shows the authentication configuration. You can go back and revise the configuration if necessary.
You cannot change the UID or RID for a Local User account. If it is necessary to change a UID or RID, first delete the account and then recreate it with the new UID or RID. The Local Users and Local Groups panels allow you to delete the selected user or group. Provider stacking Provider Stacking is the ability to support sequential access to all of the currently supported authentication providers (Local, Active Directory, and LDAP), in any order that you specify.
NOTE: The following command is displayed on two lines, but the command should be entered on one line: sh /usr/local/ibrix/bin/set_provider_loadorder \ -c "Ldap" "ActiveDirectory" "Local" When configuring Provider Stacking, be aware of the following: • The StoreAll Management Console does not allow configuring multiple providers simultaneously, which affects how provider configurations are handled. For example, if you configure the Active Directory provider first and then configure the LDAP pr
Configuring delegated users for an Active Directory domain If you are adding delegated users to an Active Directory (AD) domain and the DNS host names do not exceed 15 characters (the NetBIOS limit), the following are the minimum required permissions that must be assigned to each user or group: • Create Computer Objects • Delete Computer Objects • Read and Write Account Restrictions • Validated Write to DNS host name • Validated write to SPN (service principal name) • Reset Password NOTE: If the
4. On the Permissions window of the wizard: 1. Select General in the Show these permissions box. 2. In the Permissions box, select the following: • Reset password • Read and write account restrictions • Validated write to DNS host name • Validated write to service principal name NOTE: You will need to scroll within the Permissions box to select the “Validated write…” options. 5. Click Next to complete the wizard and view a summary of the changes made.
6. Click Finish to close the wizard. NOTE: If the host is already in Active Directory and you are unable to join users due to an ACCESS_DENIED error, delete the host from Active Directory and attempt to join delegated users with these credentials again. 7. If your configuration requires computer objects to be located in a defined structure, you must move the delegated user account to the appropriate Organizational Unit (OU=, DC=).
7 Using SMB The SMB server implementation allows you to create file shares for data stored on the cluster. The SMB server provides a true Windows experience for Windows clients. A user accessing a file share on a StoreAll system will see the same behavior as on a Windows server. IMPORTANT: SMB and StoreAll Windows clients cannot be used together because of incompatible AD user to UID mapping. You can use either SMB or StoreAll Windows clients, but not both at the same time.
appropriate server. Select CIFS in the lower Navigator to display the CIFS panel, which shows SMB activity statistics on the server. You can start, stop, or restart the SMB service by clicking the appropriate button. NOTE: Click CIFS Settings to configure SMB signing on this server. See “Configuring SMB signing ” (page 94) for more information.
Everyone user may have more access rights than necessary. The administrator should set ACLs on the SMB share to ensure that users have only the appropriate access rights. Alternatively, permissions can be set more restrictively on the directory exporting the SMB share. • When the cluster and Windows clients are not joined in a domain, local users are not visible when you attempt to add ACLs on files and folders in an SMB share.
◦ SE_RESTORE_PRIVILEGE ◦ SE_TAKE_OWNERSHIP_PRIVILEGE See the Microsoft documentation for more information about these privileges. Creating SMB shares with the Management Console Use the Add New File Share Wizard to configure SMB shares. You can then view or modify the configuration as necessary. To create an SMB share: 1. Select File Shares and Object Store from the Navigator to open the File Shares panel. 2. Click Add to start the Add New File Share and Object Store Wizard. 3.
4. On the Permissions window, specify permissions for users and groups allowed to access the share. Click Add to open the New User/Group Permission Entry dialog box, where you can configure permissions for a new user or group. The completed entries appear in the User/Group Entries list on the Permissions window when you click OK on the dialog box. Click Next to continue.
5. • To modify permissions for an existing user or group, select the appropriate entry from the User/Group Entries list and click Modify. You can then change the permissions as necessary. • To delete a user or group entry, select the entry from the User/Group Entries list and click Delete. On the Client Filtering window, specify the IP addresses or ranges that should be allowed or denied access to the share.
Click Add to open the New Client IP Address Entry dialog box, where you can allow or deny access to a specific IP address or a range of addresses. Enter a single IP address, or include a bitmask to specify entire subnets of IP addresses, such as 10.10.3.2/25. The valid range for the bitmask is 1-32. The completed entry appears on the Client IP Filters list on the Client Filtering page. • To modify an existing client filter, select that entry from the Client IP Filters list and click Modify.
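The address-plus-bitmask matching used by client filtering can be sketched as a small shell function. This is an illustration only; the StoreAll server performs this filtering internally, and the helper below is not part of the product.

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Print "match" if address $1 falls inside CIDR $2 (e.g. 10.10.3.0/25),
# otherwise print "nomatch". The bitmask keeps the top N bits.
in_cidr() {
  local addr net bits mask
  addr=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  if [ $(( addr & mask )) -eq $(( net & mask )) ]; then
    echo match
  else
    echo nomatch
  fi
}

in_cidr 10.10.3.2 10.10.3.0/25     # prints "match"
in_cidr 10.10.3.200 10.10.3.0/25   # prints "nomatch"
```

A /25 mask keeps the top 25 bits, so 10.10.3.2 falls inside 10.10.3.0/25 while 10.10.3.200 (host bits ≥ 128) does not.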
6. On the Advanced Settings window, you can set the following options: • Access Based Enumeration: When enabled (on), users can only see the files and folders to which they have access on the file share. • Dir Create Mode: Enter the default mode for directories created in the share. The range of values is 0000-0777. • File Create Mode: Enter the default mode for files created in the share. The range of values is 0000-0777. Click Next to continue. 7.
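The effect of the octal Dir Create Mode and File Create Mode values can be seen locally with standard Linux tools. This demonstrates only how mode bits behave; it does not interact with the SMB server itself.

```shell
# Create a directory and a file with explicit modes, then read the
# resulting permission bits back. 0755 gives the owner full access and
# everyone else read/execute; 0644 gives the owner read/write and
# everyone else read-only.
tmp=$(mktemp -d)
mkdir -m 0755 "$tmp/share_dir"
dmode=$(stat -c %a "$tmp/share_dir")
install -m 0644 /dev/null "$tmp/newfile"
fmode=$(stat -c %a "$tmp/newfile")
echo "dir=$dmode file=$fmode"   # prints "dir=755 file=644"
rm -rf "$tmp"
```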
8. The Summary window shows the configuration for the CIFS share. You can go back and revise the configuration if necessary. When you click Finish, the share is created and appears on the File Shares panel. Configuring SMB signing The SMB signing feature specifies whether clients must support SMB signing to access SMB shares. You can apply the setting to all servers, or to a specific server.
When configuring SMB signing, note the following: • SMB2 is always enabled. • Use the Required check box to specify whether SMB signing (with either SMB1 or SMB2) is required. • The Disabled check box applies only to SMB1. Use this check box to enable or disable SMB signing with SMB1. You should also be aware of the following: • The File Share Settings dialog box does not display whether SMB signing is currently enabled or disabled.
On the CIFS Shares panel, click Add or Modify to open the File Shares wizard, where you can create a new share or modify the selected share. Click Delete to remove the selected share. Click CIFS Settings to configure global file share settings; see “Configuring SMB signing ” (page 94) for more information. You can also view SMB shares for a specific file system. Select that file system on the GUI, and then select CIFS Shares from the lower Navigator.
NOTE: You cannot create an SMB share with a name containing an exclamation point (!) or a number sign (#) or both. Use the -A ALLOWCLIENTIPSLIST or -E DENYCLIENTIPSLIST options to list client IP addresses allowed or denied access to the share. Use commas to separate the IP addresses, and enclose the list in quotes. You can include an optional bitmask to specify entire subnets of IP addresses (for example, ibrix_cifs -A "192.186.0.1,102.186.0.2/16").
Shell: /bin/sh
Home dir: /home/local/IB/testuser1
Logon restriction: NO
Do a reverse lookup with the UID by entering the following command:
[root@ibrix01a ~]# /opt/likewise/bin/lw-find-user-by-id 1060661900
The command displays the following output:
User info (Level-0):
====================
Name: IB\testuser1
SID: S-1-5-21-3681183244-3700010909-334885885-27276
Uid: 1060661900
Gid: 1060635137
Gecos: testuser1
Shell: /bin/sh
Home dir: /home/local/IB/testuser1
Logon restriction: NO
The GID is the GID for the u
In this command: • The -m option specifies we are going to modify the setting. • The -s option specifies the resource name of the share. • The -D option displays the description of the share. • The -F option specifies that we are changing the mask for files; use -M to change the directory mask. Managing SMB shares with Microsoft Management Console The Microsoft Management Console (MMC) can be used to add, view, or delete SMB shares.
6. Click Close→OK to exit the dialogs.
7. Expand Shared Folders (\\servername).
8. Select Shares and manage the shares as needed.
Windows Vista, Windows 2008, Windows 7: Complete the following steps:
1. Open the Start menu and enter mmc in the Start Search box. You can also enter mmc in an MS-DOS window.
2. On the User Account Control window, click Continue.
3. On the Console 1 window, select File→Add/Remove Snap-in.
4. On the Add or Remove Snap-ins window, select Shared Folders and click Add.
5.
6. Click OK to exit the Add or Remove Snap-ins window.
7. Expand Shared Folders (\\servername).
8. Select Shares and manage the shares as needed.
Saving MMC settings
You can save your MMC settings to use when managing shares on this server in later sessions. Complete these steps:
1. On the MMC, select File→Save As.
2. Enter a name for the file. The name must have the suffix .msc.
3. Select Desktop as the location to save the file, and click Save.
4. Select File→Exit.
• Do not include any of the following special characters in the share description. If a description contains any of these special characters, the description might not propagate correctly to all nodes in the cluster. * % + & `
• The management console GUI or CLI cannot be used to alter the permissions for shares created or managed with Windows Share Management. The permissions for these shares are marked as “externally managed” on the GUI and CLI.
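A hypothetical pre-check for the special-character restriction noted above might look like the following. The function name is invented for illustration; it is not part of the StoreAll tooling.

```shell
# Print "invalid" if the description contains any of the characters the
# guide warns about (* % + & or a backtick), otherwise print "valid".
valid_share_description() {
  if printf '%s' "$1" | grep -q '[*%+&`]'; then
    echo invalid
  else
    echo valid
  fi
}

valid_share_description 'Project data'   # prints "valid"
valid_share_description 'R&D archive'    # prints "invalid"
```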
When you complete the wizard, the new share appears on the Computer Management window. Deleting SMB shares To delete an SMB share, select the share on the Computer Management window, right-click, and select Delete. Mapping SMB shares Before mapping a share, it is important to understand the following: • By default, a share is made available from all of the file serving nodes in a cluster. • A share is always available on all of a file serving node’s network interfaces.
Best Practices when mapping shares: • A share should always be mapped using the User Virtual Interface (the User VIF) of a file serving node as that interface will be migrated to the node’s HA partner in event of the first node failing. • A share should never be mapped using the Admin IP address of a node as that interface cannot migrate to the node’s HA partner. • A share should never be mapped using the StoreAll Virtual Management Interface.
1. Click Start, click Run, type mmc, and then click OK.
2. On the MMC Console menu, click Add/Remove Snap-in.
3. Click Add, and then click Active Directory Schema.
4. Click Add, click Close, and then click OK.
Adding uidNumber and gidNumber attributes to the partial-attribute-set To make modifications using the Active Directory Schema MMC snap-in, complete these steps: 1. Click the Attributes folder in the snap-in. 2.
The following article provides more information about modifying attributes in the Active Directory global catalog: http://support.microsoft.com/kb/248717 Assigning attributes To set POSIX attributes for users and groups, start the Active Directory Users and Computers GUI on the Domain Controller. Open the Administrator Properties dialog box, and go to the UNIX Attributes tab. For users, you can set the UID, login shell, home directory, and primary group. For groups, set the GID.
Synchronizing Active Directory 2008 with the NTP server used by the cluster It is important to synchronize Active Directory with the NTP server used by the StoreAll cluster.
• Allows lists of users and members of CIFS groups to be mapped to a single Linux user • Supports Samba static username map files and features • Supports Samba dynamic map scripts • Supports remapping of users • Supports mapping of CIFS users configured in Local, LDAP, and AD name services • Ability to map names using a static map file and/or a customer-provided dynamic map script (the order is configurable) • Ability to assign SIDs using a template base and formulating the RID using the UID The ab
NOTE: You have the option to configure a dynamic username script, a static username map, or both while pre-configuring the username mapping solution. If you decide to configure both, when you enable username mapping, you must determine which will be called first (dynamic username script or static username map). If both are configured, dynamic username mapping is designated to run first; if its output produces a valid Linux username, the static map will not be referenced unless remapping is enabled.
for a general name change from the Windows-style username to the Linux-style username.
◦ Setting usernames using LDAP attributes: For example, if you have a Linux name in one LDAP attribute and a Windows name in a second LDAP attribute, you can perform an LDAP search based on the Windows name, and then return the Linux name contained in a different schema under the same LDAP entry.
3. Add and enable username mapping.
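The dynamic-first lookup order described above can be sketched as follows. The file names, the map format, and the stand-in dynamic script are all assumptions made for illustration; they do not reflect the product's actual configuration files.

```shell
# Hypothetical static map: first field is the Windows name, second the
# Linux name (format assumed for illustration).
map=$(mktemp)
cat > "$map" <<'EOF'
WINDOWSUSER1 linuxuser1
WINDOWSUSER2 linuxuser2
EOF

# Stand-in dynamic script: only knows how to map ENG_* names.
dynamic_map() {
  case $1 in
    ENG_*) printf '%s\n' "${1#ENG_}" | tr 'A-Z' 'a-z' ;;
  esac
}

# Static map lookup by exact Windows name.
static_map() {
  awk -v u="$1" '$1 == u { print $2 }' "$map"
}

# The dynamic script runs first; the static map is consulted only when
# the dynamic script produces no valid name (remapping disabled).
map_user() {
  local out
  out=$(dynamic_map "$1")
  if [ -n "$out" ]; then
    printf '%s\n' "$out"
  else
    static_map "$1"
  fi
}

map_user ENG_ALICE      # prints "alice" (dynamic result wins)
map_user WINDOWSUSER1   # prints "linuxuser1" (static map fallback)
```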
also be translated to /srv1/data, but the clients will have different permissions. The client requests for \\srv2\data will be translated to share srv2-DATA at /srv2/data. Client utilities such as net use will report the requested share name, not the new share name. Mapping old share names to new share names Mappings are defined in the /etc/likewise/vhostmap file. Use a text editor to create and update the file.
SMB users cannot view directory tree quotas. Differences in locking behavior When SMB clients access a share from different servers, as in the StoreAll software environment, the behavior of byte-range locks differs from the standard Windows behavior, where clients access a share from the same server. You should be aware of the following: • Zero-length byte-range locks acquired on one file serving node are not observed on other file serving nodes.
Restore operations If a file has been deleted from a directory that has Previous Versions, the user can recover a previous version of the file by performing a Restore of the parent directory. However, the Properties of the restored file will no longer list those Previous Versions. This condition is due to the StoreAll snapshot infrastructure; after a file is deleted, a new file in the same location is a new inode and will not have snapshots until a new snapshot is subsequently created.
SMB shadow copy restore during node failover If a node fails over while an SMB shadow copy restore is in progress, the user may see a disruption in the restore operation. After the failover is complete, the user must skip the file that could not be accessed. The restore operation then proceeds. The file will not be restored and can be manually copied later, or the user can cancel the restore operation and then restart it.
but will not be inherited or propagated. The SMB server also does not map POSIX ACLs to be compatible with Windows ACLs on a file. These permission mechanisms have some ramifications for setting up shares, and for cross-protocol access to files on a StoreAll system. The details of these ramifications follow. Permissions, UIDs/GIDs, and ACLs The SMB server does not attempt to maintain two permission/access schemes on the same file.
behavior, the creator of a share must access the root of the share and set the desired ACLs on it manually (using Windows Explorer or a command line tool such as ICACLS). This process is somewhat unnatural for Linux administrators, but should be fairly normal for Windows administrators. Generally, the administrator will need to create a CREATOR/OWNER ACL that is inheritable on the share directory, and then create an inheritable ACL that controls default access to the files in the directory tree.
3. To disable autocommit, you must unmount the file system, modify the data retention settings, and mount the file system again. See “Changing the retention profile for a file system” (page 279) for more information. If required, you can re-enable the registry setting after the migration. Run the same commands as for disabling the registry setting, but change the value of AllowPVFSFilemodeChange to 1. You can also re-enable autocommit after the migration, if required.
UID for SMB Guest account conflicts with another user If the UID for the Guest account conflicts with another user, you can delete the Guest account and recreate it with another UID.
8 Using FTP The FTP feature allows you to create FTP file shares for data stored on the cluster. Clients access the FTP shares using standard FTP and FTPS protocol services. IMPORTANT: Before configuring FTP, select an authentication method (either Local Users or Active Directory). See “Configuring authentication for SMB, FTP, and HTTP” (page 64) for more information. An FTP configuration consists of one or more configuration profiles and one or more FTP shares.
• Enter a name for the FTP share. • Enter the path to the file system or directory that will be shared (for example, /myFS1). • Optionally, enter details that describe the share. NOTE: StoreAll software does not create the subdirectory if it does not exist, and for anonymous shares only, adds a /pub/ directory to the share path instead. All files uploaded through the anonymous user will then be placed in that directory. The /pub/ directory is not created for a non-anonymous share.
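The /pub/ path rule in the note above can be sketched as a tiny helper. This is illustrative only; StoreAll applies the rule server-side, and the function name is invented.

```shell
# args: share_path anonymous(yes|no); prints the effective upload path.
# Anonymous shares get a /pub directory appended; other shares do not.
upload_dir() {
  if [ "$2" = yes ]; then
    printf '%s/pub\n' "${1%/}"
  else
    printf '%s\n' "$1"
  fi
}

upload_dir /myFS1 yes   # prints "/myFS1/pub"
upload_dir /myFS1 no    # prints "/myFS1"
```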
4. On the Host Servers window, select the servers that will host the configuration profile. Click Next to continue.
5. On the Settings window, configure the following FTP parameters that apply to the share: NOTE: The parameters you configure are added to the nodes hosting the configuration profile. • Banner Content: Enter the message that will display on the share when clients access it. • Browseable: Select true or false to enable or disable, respectively, whether a directory listing is shown when users log in to FTP shares. • Read Only: Specify whether the share is read-only.
6. On the Users window, specify the users to be given access to the share. If no users are specified on this window, then any user who can be authenticated according to your StoreAll authentication settings for the cluster can access the share as read-write. Users must also have access permissions at the file system level to read or write. If any users are specified on this page, only those users may access the share and all other users are denied regardless of their file system permissions.
The Summary window shows the configuration for the FTP share. You can go back and revise the configuration if necessary. When you click Finish, the share is created and appears on the File Shares panel. Managing FTP from the CLI To manage FTP from the CLI, use the ibrix_ftpconfig and ibrix_ftpshare commands. For detailed information, see the HP StoreAll OS CLI Reference Guide.
Accessing shares Clients can access an FTP share by specifying a URL in their Web browser, such as Internet Explorer. In the following URLs, IP_address:port is the IP (or virtual IP) and port configured for the share.
Table 6 Download a file by using the FTP protocol
Scenario: You do not need to specify the user name and password. Command: curl ftp://IP_address/pub/server.pem -o \
Scenario: You must provide the default user name and password (“ftp” for the username and “ftp” for the password). Command: curl ftp://IP_address/pub/server.
Table 9 Upload a file by using the FTPS protocol for local user
Scenario: You need to supply the user name and password but not the domain. Command: curl --ftp-ssl-reqd --cacert -T ftp://IP_address:990/pub/ -u USER:PASSWORD
Scenario: You must specify the domain, such as for an Active Directory user. Command: curl --ftp-ssl-reqd --cacert -T ftp://IP_address/ -u DOMAIN\\USER:PASSWORD
Table 10 Download a file by using the FTP protocol for domain user
Scenario Command
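Pulling the scenarios in the truncated tables above together, the general command shapes are as follows. These lines are illustrative only: the server address, the certificate file, and the uploaded file names are placeholders, so the commands cannot run as shown; check the exact options against the tables before use.

```shell
# Anonymous FTP download (no credentials needed):
curl ftp://192.0.2.10/pub/server.pem -o server.pem

# FTP download with the default anonymous credentials ("ftp"/"ftp"):
curl -u ftp:ftp ftp://192.0.2.10/pub/server.pem -o server.pem

# FTPS upload for a local user (TLS required; CA certificate supplied):
curl --ftp-ssl-reqd --cacert ca.pem -T report.txt ftp://192.0.2.10:990/pub/ -u USER:PASSWORD

# FTPS upload for an Active Directory user (note the domain prefix):
curl --ftp-ssl-reqd --cacert ca.pem -T report.txt ftp://192.0.2.10/ -u 'DOMAIN\USER:PASSWORD'
```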
9 Using Object Store Object Store provides the flexibility of being based on the open source OpenStack Cloud Software with the additional functionality of StoreAll. Object Store lets you store items in the cloud with the security of Keystone authentication. Authorization is at a centralized location, which tracks all modifications to any object store resource through an authentication token. Authorization can only be configured at the container level.
Table 12 Overview for implementing Object Store (continued)
Step 4: Become familiar with the required cURL commands for managing content on Object Store. For additional information, see “Using cURL commands for managing Object Store” (page 145).
Step 5: Set up redundancy for the containers on your Object Store. Object Stores are not included in the StoreAll site failover process.
Prerequisite steps for creating an Object Store 1. Identify a file system on which you can create an Object Store. IMPORTANT: The file system identified for the Object Store must meet the following criteria: • No other file shares: The file system must not have any other file shares, including NFS, CIFS, HTTP, and FTP shares.
b. Identify the bond(x) VIF:
# ibrix_nic -a -n bond1 -h node1,node2,node3,node4
In this instance node[1–4] is a host name.
c. Assign an IP address to the bond1 VIFs on each node. In the command, -I specifies the IP address, and -M specifies the netmask. Sample commands (based on the high availability VIFs identified in the graphic in Step 2):
ibrix_nic -a -n bond1:5 -h ib1-14s5
ibrix_nic -c -n bond1:5 -h ib1-14s5 -I 10.1.14.155 -M 255.255.255.
ibrix_nic -a -n …
ibrix_nic -c -n …
ibrix_nic -b -H …
d.
5. StoreAll uses the FM user VIF to talk to the Keystone authentication service. If an FM user VIF does not exist for a cluster, create an FM user VIF for that cluster: ibrix_fm -c <IP> -d <VIF device> -n <netmask> -v user [-I <local IP>] For example: ibrix_fm -c 10.1.14.205 -d bond1:1 -n 255.255.255.0 -v user -I 10.1.14.5 In this instance: • 10.1.14.205 is the IP address of the new FM user VIF. • bond1:1 is the VIF device. • 255.255.255.
6. Provide an Object Store name. The Object Store name can only contain alphanumeric characters. Click Next.
Provide server details 1. Select a NIC for each server. 2. To change the ports, click the Show Port Details button and then click the port you want to change. NOTE: VIF and port details are provided with respect to the server’s high availability settings. You do not need to change the ports unless you feel there might be a port conflict on your server. StoreAll assigns ports in the 6000 range for Object Store services.
3. Click Next. Set the port and SSL certificate settings To modify the port and SSL certificate settings: 1. The default port settings are displayed. You do not need to change the defaults unless the port is already in use by another process. To modify a port, place your cursor in the field and type a new port number. Make sure the port you specify is not already in use. Your changes are saved when you click Next. Table 14 Port descriptions 2.
3. Click Next.
Confirming Object Store settings
The wizard displays the Summary window. Click Finish to create the Object Store with the settings displayed. IMPORTANT: Whenever you create an Object Store, two default administrator groups are created: swift and IbrixSWGroup. swift (a Linux system group name) will be available in all of the StoreAll nodes. IbrixSWGroup is a StoreAll local group.
Permission levels Users can be granted administrative privileges or non-administrative privileges with read or write access to a container. The following table provides a description of each of the permission levels.
Table 15 Description of the permission levels
Privilege: Administrative privileges
Accessible tasks: Users with administrative privileges to an Object Store can do the following:
• Create a container.
• Upload an object to a container.
• Download an object from a container.
Information on how to grant privileges:
• Create administrative groups and assign users. See “Creating administrator groups and assigning users” (page 138)
Or
• Assigning group administrative privileges at the group level.
IMPORTANT: Keep in mind the following:
• The Object Store creates a default swift administrator group with a default username and password of swift whenever an Object Store is created. NOTE: The swift user and group are created during installation and will be utilized only for Object Store administrative purposes.
• Users must have administrative privileges to be able to create, delete, and manage containers on an Object Store.
8. Click OK.
Add users to an administrator group using the GUI
IMPORTANT: Keep in mind the following:
• You cannot add a mixture of StoreAll local or system (AD or LDAP) users. When you add users, they must either all be StoreAll local or system users.
• When you grant administrative rights to a system user (AD or LDAP), the process executes on all nodes.
7. In the Admin Group Name text box, provide the required value specified by the following table:
System user (Active Directory or LDAP): The Admin Group Name must be set to swift.
StoreAll local users: The Admin Group Name must be IbrixSwGroup or a user-defined administrator group in the StoreAll local groups.
8. Type an existing user name in the User Name text box. 9. Type a user group name.
The ADMINGROUP parameter must be set as follows:
System user (Active Directory or LDAP): The ADMINGROUP parameter must be set to swift.
StoreAll local users: The ADMINGROUP parameter must be IbrixSwGroup or a user-defined administrator group in the StoreAll local groups.
Changing group administrative privileges Table 17 Changing group administrative privileges Task Command Grant all members of the group administrative privileges. ibrix_objectstoreadmin -a -k GROUP_NAME Remove administrative privileges from a ibrix_objectstoreadmin -d -k GROUP_NAME group.
"http://10.10.104.
curl -X PUT -i -H "X-Auth-Token:864e40dd3ee4910934b73d0a4a399ac" -H "X-Container-Write:group1:user1" -H "X-Container-Read:group1:user1" "http://15.213.70.158:8888/v1/AUTH_7b9a902423a582c9eda266dcf3ad697420c1c3ff9429b1dfd255152f3bf2098f/cont2"
Assigning non-administrative privileges with read access to a container You can grant non-administrative users within your own group read access to a container by adding them to an access control list for the container.
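When read access is granted to several users, the X-Container-Read value takes a comma-separated list of grants. The comma-separated format is an assumption based on OpenStack Swift conventions rather than something stated in this guide, and the helper below is invented for illustration.

```shell
# Join group:user grants into one comma-separated header value, as
# would be passed to -H "X-Container-Read:...".
acl_value() {
  local IFS=,
  printf '%s\n' "$*"
}

acl_value group1:user1 group1:user2   # prints "group1:user1,group1:user2"
```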
Finding the Fusion Manager user VIF To find the FM user VIF: ibrix_nic -l The FM user VIF is listed as [Active FM Nonedit] with a status of User. The FM user VIF and its IP address are highlighted in the following example: Keystone ports The Keystone server ports are used for communicating with the Keystone server. Each of the two Keystone ports can be used in a cURL command, depending on the type of operation.
How to find the proxy IP and port To generate a list of ports and to find the proxy IP, enter the following command: ibrix_objectstore -l The following is a sample output: Creating containers You must have administrative privileges to create a container. IMPORTANT: All administrators in a tenant have administrative privileges to access containers created by other administrators in the same tenant.
IMPORTANT: For metadata, you should not exceed 90 individual key/value pairs for any one object, and the total byte length of all key/value pairs should not exceed approximately 3 KB (3072 bytes). To post metadata to an object:
curl -X POST -i -H "X-Auth-Token:<token>" <proxy_IP:port>/<container>/<object> -H '<METADATA_KEY>:<value>'
NOTE: The metadata must be tagged with a key and value combination. The METADATA_KEY must follow “X-Object-Meta”.
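A hypothetical client-side check of these limits can be written in a few lines of shell. The 90-pair and 3072-byte figures come from the text; the helper itself is not part of the product.

```shell
# Check a set of KEY:VALUE pairs against the documented limits:
# at most 90 pairs, and roughly 3072 bytes in total.
meta_ok() {
  local pairs=$# bytes=0 kv
  for kv in "$@"; do
    bytes=$(( bytes + ${#kv} ))
  done
  if [ "$pairs" -le 90 ] && [ "$bytes" -le 3072 ]; then
    echo ok
  else
    echo too-large
  fi
}

meta_ok "X-Object-Meta-Color:blue" "X-Object-Meta-Owner:alice"   # prints "ok"
```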
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0    70    0    70    0     0    195      0 --:--:-- --:--:-- --:--:--     0
Viewing the contents of a container
To view the contents of a container:
curl -i <proxy_IP:port>/<container> -H "X-AUTH-Token:<token>"
Sample command:
curl -i http://10.10.104.
HTTP/1.1 204 No Content
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Sun, 24 Nov 2013 23:48:10 GMT
Viewing the status of a user account
Only administrators can view the status at the account level. To view the status of a user account:
curl -X HEAD -i -H "X-Auth-Token:<token>" <account URL>
Sample command:
curl -X HEAD -i -H "X-Auth-Token:7864e40dd3ee4910934b73d0a4a399ac" http://15.213.70.
In the following sample output, the user with authentication token 1bb88b944f6c4c8fb7411f85d3bd6bf4 belongs to the group named Users under the domain IBRQA1. Users.
2. Select Backup Target from the Select Type menu. You need at least one backup target listed on the Object Store configuration, but this must be the same backup target proxy IP that is used when setting the Sync-To key in the next section. 3. Provide the proxy IP address for the Object Store designated as the backup target in the FQDN/IP address text box. Then, click Add.
[root@ib48-205]# ibrix_objectstoreadmin -l -c
Container Sync Config
=====================
Backup Type IPAddress/FQDN
------------- -------------
Backup Target 10.2.11.10
• Example command, set up the backup server:
[root@ib11-10]# ibrix_objectstoreadmin -a -s 10.2.48.204,10.2.48.205
Command succeeded!
[root@ib11-10 fsutil]# ibrix_objectstoreadmin -l -c
Container Sync Config
=====================
Backup Type IPAddress/FQDN
------------- -------------
Backup Source 10.2.48.204
Backup Source 10.2.48.205
5.
1. Identify the source container to determine if the container has a pre-existing container sync configuration:
curl -i <proxy_IP:port>/<container> -H "X-AUTH-Token:<token>"
Sample command:
curl -i http://10.10.104.116:8888/v1/AUTH_7b9a902423a582c9eda266dcf3ad6974a2b98e4b21ea7c9e1e8d38f76afdf1b4/containerQ -H "X-AUTH-Token:0258c46f66e84161aa4f258ffa6fb188"
If the sync values are not set, they are not displayed in the output: HTTP/1.
In this instance, we are setting the source container: • The secretkey123 is the Sync-Key value. • The http://10.2.11.10:8888/v1/AUTH_7b9a902423a582c9eda266dcf3ad69744037ce77e61f526b0739578e87695f32/ContainerSyncTarget is the Sync-To value. 3. If no Sync-Key value is set on the target container, or you want to change the pre-existing Sync-Key value, enter the following command: IMPORTANT: Keep in mind the following: • The Sync-Key values for both containers must match.
NOTE: Even if you delete the container sync configuration at the source cluster, it is also recommended that you remove the source container's metadata X-Container-Sync-To and X-Container-Sync-Key, or you may receive error messages in /var/log/messages.
NOTE: The ibrix_objectstore command collects information from the Object Store configured nodes and lists the information. If an Object Store configured node goes down and the Fusion Manager fails over, this command does not display Object Store information for the nodes that are down until the Object Store configured node is up and running again.
Viewing source and backup container replication configuration
To view the backup source containers and backup container for an Object Store: 1.
NOTE: If you add couplets without extending Object Store in the StoreAll cluster and the Fusion Manager fails over to the newly added nodes, swift permissions will not be available for existing AD and LDAP users. Therefore, the administrator must ensure that the users of the Swift group (in /etc/group) are added to the new nodes. Otherwise, these users will not have administrative privileges if the Fusion Manager fails over to one of the newly added nodes.
5. Select the two newly added nodes and their corresponding virtual NIC. 6. To change the ports, click the Show Port Details button and then click the port you want to change. NOTE: VIF and port details are provided with respect to the server’s high availability settings. You do not need to change the ports unless you feel there might be a port conflict on your server. StoreAll assigns ports in the 6000 range for Object Store services. 7. Click OK.
Rebalancing a file system that is configured for Object Store Any time a file system configured for Object Store is expanded or storage is removed, you must run the rebalance operation on the account, container, and object components of the Object Store (A/C/O). The only exception is if you have already ensured that each node has equal storage; then there is no need to run the rebalance operation. By default, Object Store assigns a weight of 100 to each node folder created under Object Store.
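As a rough illustration of how ring weights map to partition counts, the sketch below divides the 4096 partitions shown in the sample swift-ring-builder output in proportion to each device's weight. The helper function and node names are hypothetical and are not part of the StoreAll tools:

```python
# Illustration of how ring weights translate to partition shares.
# StoreAll assigns a default weight of 100 to each node folder
# created under Object Store; the rebalance operation redistributes
# partitions in proportion to those weights.
def partition_shares(weights, partitions=4096):
    """Return the approximate number of partitions per device,
    proportional to its weight."""
    total = sum(weights.values())
    return {dev: round(partitions * w / total) for dev, w in weights.items()}

# Two nodes with the default weight of 100 each split the ring evenly.
print(partition_shares({"node1": 100, "node2": 100}))
# Uneven weights shift partitions (and therefore data) accordingly.
print(partition_shares({"a": 100, "b": 300}))
```

This is why no rebalance is needed when every node already holds equal storage: equal weights already produce an even partition distribution.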
1. For the object builder configuration: List the configuration, set the weight, and check the weight. a. List the object builder configuration by sending this command: [root@swift1 ~]# swift-ring-builder /ifs1/ObjectStore6/config/swift/object.builder Sample output: /ifs1/ObjectStore6/config/swift/object.builder, build version 2 4096 partitions, 1.000000 replicas, 1 regions, 1 zones, 2 devices, 0.
2. For the container: List the configuration, set the weight, and check the weight. a. List the container configuration by sending this command: [root@swift1 ~]# swift-ring-builder /ifs1/ObjectStore6/config/swift/container.builder Sample output: /ifs1/ObjectStore6/config/swift/container.builder, build version 2 4096 partitions, 1.000000 replicas, 1 regions, 1 zones, 2 devices, 0.
3. For the account: List the configuration, set the weight, and check the weight. a. List the account configuration by sending this command: [root@swift1 ~]# swift-ring-builder /ifs1/ObjectStore6/config/swift/account.builder Sample output: /ifs1/ObjectStore6/config/swift/account.builder, build version 2 4096 partitions, 1.000000 replicas, 1 regions, 1 zones, 2 devices, 0.
5. Verify that the partitions are rebalanced by checking all builder file configurations (account/container/object). Send the following commands: a. [root@swift1 ~]# swift-ring-builder /ifs1/ObjectStore6/config/swift/account.builder Sample output: /ifs1/ObjectStore6/config/swift/account.builder, build version 4 4096 partitions, 1.000000 replicas, 1 regions, 1 zones, 2 devices, 0.
To remove the server configuration:
1. In the HP StoreAll Management Console, navigate to Object Store.
2. Select the name of the server you want to remove the configuration from.
3. Click Remove Server Config. A success dialog box appears if the configuration can be deleted.
4. To confirm deletion of the Object Store configuration, navigate to Events in the left panel of the StoreAll Management Console. You should see an event indicating that an Object Store has been successfully deleted.
IMPORTANT: HP recommends that you use certificates signed by a signing authority such as VeriSign only when you have configured load balancing for an Object Store. Self-signed certificates can be used for the Object Store proxy endpoint and keystone IP by following the steps provided in this section. Configuring an authority-signed certificate directly on an Object Store does not work because there are multiple proxy IPs and Keystone IPs for the certificate's common name. 1. Create a digital certificate.
NS44NjETMBEGA1UEAwwKMTAuMjEuMTIuKjCBnzANBgkqhkiG9w0BAQEFAAOBjQAw gYkCgYEArUAesz13JqGcHzjtLFjzJCZ2CHOjmJ/zi7nX2SP3kXn00qYmjSS0O/Us ct3hv5mxCrsn2LtTO/TJxbT1kJP6vo9RJswZ3b9LarabyRGFNwWgLirpBmwkw7PD d94frzkqsH8aui6q9VyPFlOw2r+bBcIE6utEmHCDx6/8z5epKpECAwEAAaOB3DCB 2TAdBgNVHQ4EFgQU4gIE4kmDSsl+EZvpBmCz4xEKkBswgakGA1UdIwSBoTCBnoAU 4gIE4kmDSsl+EZvpBmCz4xEKkBuhe6R5MHcxCzAJBgNVBAYTAkdCMRIwEAYDVQQI EwlCZXJrc2hpcmUxEDAOBgNVBAcTB05ld2J1cnkxFzAVBgNVBAoTDk15IENvbXBh bnkgTHRkMRQwEgYDVQQDEwsxMC4xMy4xNS44NjETMBEGA1UEAwwKMTAuM
5. When you create the Object Store, be sure to select SSL certificate on the Settings page or, if you are creating the Object Store using the CLI, make sure you create it with the -S option. Sample command ibrix_objectstore -a -s ObjectStore4 -f fs1 -I 10.13.15.81,10.13.15.82 -S ssl_cert Sample output Info: Object Store endpoints use local server IP, as dual network is not configured. Keystone Auth URLs: Public: https://10.13.15.86:5000/v2.0 Admin: https://10.13.15.86:35357/v2.
9. Stat (display information for) the container from the client side after extending the Object Store. Sample command curl -i https://10.21.12.25:8888/v1/AUTH_7b9a902423a582c9eda266dcf3ad69744037ce77e61f526b0739578e87695f32 -I -H "X-Auth-Token: eb733772d0e741b2abdc137ec080213a" --cacert /root/cert.crt Sample output HTTP/1.1 204 No Content Content-Length: 0 Accept-Ranges: bytes X-Timestamp: 1385681580.
Sample output:
SERVICE : OBJECTSTORE
=====================
NAME     STATUS
-------- ------
dev-sys4 UP
dev-sys3 UP
dev-sys2 UP
dev-sys1 UP
To list the health of Object Store services (monitored and unmonitored) on all hosts in detailed mode: ibrix_objectstoremonitor -i
Sample output (the output also includes Proxy Server, Account Server, and Container Server columns, truncated here):
SERVICE : OBJECTSTORE
=====================
Host Name Keystone Service Rsync Service
--------- ---------------- -------------
dev-sys4  N/A              UP
dev-sys3  N/A              UP
dev-sys2  N/A              UP
dev-sys1  UP               UP
To enable service monitoring:
ibrix_objectstoremonitor -m [-h FMLIST]
Restarting the Object Store services
Use the following command to restart Object Store services after monitoring has been disabled for maintenance. This command also reloads any monitoring configuration changes when it restarts the services.
ibrix_objectstoremonitor -c [-h FMLIST]
Troubleshooting Object Store
Understanding and troubleshooting HTTP response codes
An HTTP code of 200, 201, 202, or 204 indicates success.
Table 22 HTTP status codes (continued)
Code Description Notes/Suggested actions
403 Forbidden The request was a valid request, but you do not have the appropriate access rights to the resource.
• Request to be granted administrative privileges, or
• Request to be added to the access control list with read or write permissions for that container.
404 Not Found The requested resource (account, container, or object) could not be found.
Table 22 HTTP status codes (continued)
Code Description Notes/Suggested actions
• Maximum length of account name: 256 bytes
• Maximum length of container name: 256 bytes
• Maximum length of object name: 1024 bytes
500 Internal Server Error An unexpected internal error has occurred. This can occur if the system is being reconfigured. Rest operations for a short period and retry the operation. For persistent failure, contact HP Support.
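Client scripts often branch on these response codes. The helper below is a hypothetical sketch, not part of StoreAll, that classifies a status code along the lines this section describes (200, 201, 202, or 204 for success; 4xx for client-side problems such as permissions or name-length limits; 5xx for server-side errors that may be transient):

```python
# Classify HTTP status codes as this troubleshooting section describes.
def classify(code):
    if code in (200, 201, 202, 204):
        return "success"
    if 400 <= code < 500:
        return "client-error"   # e.g. 403 Forbidden, 404 Not Found
    if 500 <= code < 600:
        return "server-error"   # e.g. 500, possibly transient
    return "other"

print(classify(204))  # success
print(classify(403))  # client-error
print(classify(500))  # server-error
```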
10 Using HTTP Overview The HTTP feature allows you to create HTTP file shares for data stored on the cluster. Clients access the HTTP shares using standard HTTP and HTTPS protocol services. IMPORTANT: Before configuring HTTP, select an authentication method (either Local Users or Active Directory). See “Configuring authentication for SMB, FTP, and HTTP” (page 64) for more information. The HTTP configuration consists of a configuration profile, a virtual host, and an HTTP share.
Uses for the StoreAll REST API Although the StoreAll REST API is not generally intended for your end users, it lets you create applications using the StoreAll file systems and Express Query. You can develop applications that: • Gather user input and send requests programmatically to StoreAll • Digest responses from StoreAll and present results to users in a readable format • Can be coded in any language (for example, Java or Python) on any client operating system, such as Windows or Linux.
• You must assign read, write, and execute permissions to the share’s directory path and all parent directories up to the file system mount point to allow accounts to be created by their owners through the API. For example, if your share’s directory path is /objFS1/objStore, and the file system objFS1 is mounted at /objFS1, both directories must be set to read, write, and execute permissions. • The use of the version parameter for all REST API file-compatible shares is recommended.
Table 25 Checklist for creating HTTP shares (continued) Step applies only to REST API Shares Step 4 5 6 Where to find more information Task Step applies to all HTTP share types. Create or select an existing HTTP config profile through the GUI or through the CLI (ibrix_httpconfig). • CLI: HP StoreAll Storage CLI Guide Step applies to all HTTP share types. Create or select an existing HTTP virtual host through the GUI or through the CLI (ibrix_httpvhost). • CLI: HP StoreAll Storage CLI Guide
Creating HTTP shares from the HP StoreAll Management Console Use the Add New File Share and Object Store Wizard to create the HTTP share. You can then view or modify the configuration as necessary. To create HTTP shares: Table 26 Creating HTTP shares Type of share See Standard HTTP share “Creating standard HTTP shares” (page 178) StoreAll REST API share “Creating StoreAll REST API shares” (page 186) Creating standard HTTP shares 1. 2. Select File Shares and Object Store from the Navigator.
3. On the Config Profile window, do the following: • Select an existing profile or select Create a new HTTP Profile. If you select the latter, enter a name for this configuration profile. The name must be unique in the cluster. • In the Port box, enter the non-SSL ports that will be listened on for HTTP requests. Use commas to separate the ports. The default port is 80. • In the SSL Port box, enter the SSL ports that will be listened on for HTTPS requests. Use commas to separate the ports.
4. If you selected Create a new HTTP Profile, provide the profile name and then modify the settings as necessary.
5. If you are creating a profile, you are asked to select your hosts servers for the profile.
6. On the Virtual Host window, enter the vhost name. Select false in the Enable StoreAll REST API box. Complete the remaining details of SSL certificate, domain and IP address. Click Next.
7. On the Settings window, enter the URL path and set the appropriate parameters for the share. See the following table for more information about each field on the window. Click Next. UI Component Description URL Path Do not include http:// or any variation of this in the URL path. For example, /reports/ is a valid URL path. The beginning and ending slashes of the path are optional. For example, /reports/, reports, and /reports are valid entries and will be stored as /reports/.
UI Component Description be returned in the HTTP response. An error will be returned if the user issuing the HTTP request does not have file system permission to navigate down the path to that directory and read its contents. Set the Browseable field to true if you want the EQWSI search results to be browseable from the Windows client. If Browseable is set to false, a GET request for a directory path will always return an error, regardless of user’s permissions.
9. On the Summary window, ensure that the correct parameters are displayed. Ensure that: • In the File Share summary section, the value of StoreAll REST API Mode is disabled. • In the Virtual Host summary section, the value of StoreAll REST API Mode is Disabled.
10. Click Finish. When the wizard is complete, users can access the share from a browser. For example, if you configured the share with the anonymous user, specified 192.168.1.92 as the IP address on the Create Vhost dialog box, and specified /reports/ as the URL path on the Add HTTP Share dialog box, users can access the share using the following URL: http://192.168.1.
1. Select File Shares and Object Store from the Navigator to open the File Shares and Object Store panel, and then click Add to start the Add New File Share and Object Store Wizard.
2. On the File Share page, select HTTP from the File Sharing Protocol menu.
3. Select the file system, which must be mounted, and enter a share name and the default directory path for the share.
4. The Host Servers dialog box displays differently depending on whether you selected an existing profile or are creating a new one. If you selected the option Create a new HTTP Profile, you are prompted to select the file server nodes on which the HTTP service will be active. Only one configuration profile can be in effect on a particular server.
5. If you selected an existing profile on the Config Profile dialog box, you are shown the hosts defined for that profile, as shown in the following figure. 6. The Virtual Host dialog box displays differently depending on whether you selected an existing profile or are creating a new one. If you are creating a new profile, the Virtual Host dialog box prompts you to enter additional information, as shown in the following figure. Enter a name for the virtual host.
7. If you selected a previous profile, the Virtual Host page prompts you to select a pre-existing Vhost or create an HTTP Vhost.
8. If you already have Vhosts defined, you can select an existing Vhost.
9. On the Settings page, set the appropriate parameters for the share.
UI Component Description URL Path Do not include http:// or any variation of this in the URL path. For example, /reports/ is a valid URL path. The beginning and ending slashes of the path are optional.
UI Component Description or Object from the StoreAll REST API Mode menu. This option defines which mode's syntax will be accepted by this API share. For example, if object mode is selected, then HTTP requests using the File-Compatible mode syntax will not be understood and will most likely return an error. Default Permissions • For File-compatible shares, new files uploaded via the HTTP share will be given these permissions on the file system.
10. On the Users page, specify the users to be given access to the share. If no users are specified on this page, then any user who can be authenticated according to your StoreAll authentication settings for the cluster can access the share as read-write. Users must also have access permissions at the file system level to read or write. If any users are specified on this page, only those users may access the share and all other users are denied regardless of their file system permissions.
11. To allow specific users read access, write access, or both, click Add. On the Add Users to Share dialog box, assign the appropriate permissions to the user. When you complete the dialog, the user is added to the list on the Users page. The Summary panel presents an overview of the HTTP configuration. You can go back and modify any part of the configuration if necessary. When the wizard is complete, users can access the API HTTP share from a client.
Tuning the socket read block size and file write block size By default, the socket read block size and file write block size used by Apache are set to 8192 bytes. If necessary, you can adjust the values with the ibrix_httpconfig command. The values must be between 8 KB and 2 GB. ibrix_httpconfig -a profile1 -h node1,node2 -S "wblocksize=,rblocksize=" You can also set the values on the Modify HTTP Profile dialog box.
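As a convenience when choosing wblocksize and rblocksize values for ibrix_httpconfig, the helper below checks the documented 8 KB to 2 GB range. It is a hypothetical sketch, not part of the StoreAll tools:

```python
# Validate a block size for the ibrix_httpconfig wblocksize/rblocksize
# tunables. The guide states the value must be between 8 KB and 2 GB.
KB, GB = 1024, 1024 ** 3

def valid_block_size(nbytes):
    return 8 * KB <= nbytes <= 2 * GB

print(valid_block_size(8192))  # True  (the 8192-byte default)
print(valid_block_size(4096))  # False (below the 8 KB minimum)
```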
Starting or stopping the HTTP service manually Start the HTTP service: /usr/local/ibrix/httpd/bin/apachectl -k start Stop the HTTP service: /usr/local/ibrix/httpd/bin/apachectl -k stop Restart the HTTP service: /usr/local/ibrix/httpd/bin/apachectl -k restart NOTE: When the HTTP configuration is changed with the GUI or CLI, the HTTP daemon is restarted automatically.
If the anonymous parameter is set to false, you are required to supply a user name and password when prompted. If the pathname ends with a directory and the browseable property of the share is set to true, an HTML directory listing of the base URL path directory of the share is returned, showing all files and subdirectories. The list elements are hyperlinks that can be clicked to open the files and subdirectories. If the browseable property is set to false, an error is returned instead of the HTML list.
• Download a file using HTTP protocol: curl -u http://IP_address:port/urlpath/pathname -o // • Download a file using HTTPS protocol: curl --cacert -u https://IP_address:port/urlpath/pathname -o // For more information on operations that can be performed for HTTP-StoreAll REST API share in file-compatible mode, see “HTTP-REST API file-compatible mode shares” (page 217).
curl -i -X PUT -H "Sync-Requested: 1" "http://10.10.21.209/myshare/" HTTP/1.1 200 OK Date: Fri, 19 Jul 2013 16:56:28 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8n DAV/2 Sync-Result: 0 Content-Length: 0 Content-Type: httpd/unix-directory Configuring Windows clients to access HTTP WebDAV shares Complete the following steps to set up and access WebDAV enabled shares: • Verify the entry in the Windows hosts file.
• Create an SSL certificate. When using basic authentication to access WebDAV-enabled HTTP shares, SSL-based access is mandatory. • Verify that the hostname in the certificate matches the Vhost name. When creating a certificate, the hostname should match the Vhost name or the domain name issued when mapping a network drive or opening the file directly using the URL such as https://storage.hp.com/share/foo.docx. • Ensure that the WebDAV URL includes the port number associated with the Vhost.
net use * http://192.168.1.1/smita/ In this instance, the HTTP WebDAV share is 192.168.1.1/smita. HTTP WebDAV share is inaccessible through Windows Explorer when files greater than 10 KB are created When files greater than 10 KB are created, the HTTP WebDAV share is inaccessible through Windows Explorer and the following error appears: Windows cannot access this disc: This disc might be corrupt. This condition is seen in various Windows clients such as Windows 2008, Windows 7, and Windows Vista.
Unable to rename directory under a WebDAV share When a file system is enabled for data retention, any folders created under a WebDAV share on that file system cannot be renamed.
11 HTTP-REST API object mode shares The StoreAll REST API share in object mode provides concepts similar to OpenStack Object Storage API to support programmatic access to user-stored files. Users create containers within each account to hold objects (files), and the user's string identifier for the object maps to a hashed path name on the file system.
Using the HTTP StoreAll REST API object mode This section walks you through using the major components of object mode. You will be shown how to: • Create a container. • Set permissions for the container. • Upload and create objects for the container. • View the contents of the container. • Download contents from the container. It is assumed you have already created an HTTP StoreAll REST API share in object mode.
the errno value, where 0 (zero) means that sync was done successfully and any other value indicates sync did not complete and durability is not guaranteed. The tutorial below will catch those cases. Tutorial for using the HTTP StoreAll REST API object mode Follow this procedure to set up and use HTTP StoreAll REST API object mode. 1. Create a container.
NOTE: Enter the following commands on one line.
The %5C is the URL encoding for a backslash. 2. Set the permissions of the container. A container is always created with read-write permission by the account user and no permission for any other user. This is represented in UNIX octal permissions as 700 (a digit for user, group, and other permissions). These permissions can be changed by the account user to allow other users to read and write objects in the container.
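The octal permission values mentioned above (the default 700 and a relaxed setting such as 775) decompose into read/write/execute bits for the user, group, and other digits. A small sketch, shown here purely as an illustration and not part of the API:

```python
# Decode a 3-digit octal permission value (as used for container
# permissions, e.g. the default 700 or a relaxed 775) into the
# conventional rwx string: one digit each for user, group, other.
def octal_to_rwx(octal_str):
    bits = "rwx"
    out = []
    for digit in octal_str:
        d = int(digit, 8)
        out.append("".join(b if d & (4 >> i) else "-"
                           for i, b in enumerate(bits)))
    return "".join(out)

print(octal_to_rwx("700"))  # rwx------  (read-write for the owner only)
print(octal_to_rwx("775"))  # rwxrwxr-x  (group can also write)
```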
PUT //// HTTP/1.1 ["Sync-Requested: 1"] 4. 5. View the list of objects in the container. “Viewing the contents of a container” (page 209). Download files from your share: NOTE: Enter the following commands on one line. curl -o http:/// /// -u : For example: curl -o C:\temp\myLocalFile.txt http://192.168.2.
The system::size refers to the number of bytes used by the directory inode representing the container on the StoreAll server (initially 4096 for any new directory), not the number of objects in the container. In this example, the permissions for container-a are the default 700, but the permissions for container-b have been changed by qa1\\administrator to 775. Viewing the contents of a container You can request a list of all of the objects in a container, and certain metadata of those objects.
Finding the corresponding object ID from a hash name NOTE: These steps are for someone with administrator privileges. The HTTP StoreAll REST API object mode saves files on files system with hashed names which are generated while uploading objects/files and not with their actual name, specified by the user during their initial uploading. In the steps below, assume your user name is jsmith, and that you know the location of the hash reference to which you want to find the corresponding file name.
In this instance newcontainer is the container containing the hash reference. 7. Enter the following command to list the contents of newcontainer. [root@bv07-07 newcontainer]# ls -l total 4 drwxrwxrwx 3 jsmith objectapi_group 4096 Dec 7 15:49 45 In this instance 45 is the first-level directory created from the 11th to 20th least significant bits of the 40-byte hexadecimal value that was created when the file was uploaded or created on the share. 8.
The first time a user creates a container, a directory with the numeric user ID of the user representing that account, is created to hold the container. The container directory within this account directory is the container name provided by the user in the container creation request. Subsequent containers created by that user are also stored under the same account directory.
1. Enter the following command on the StoreAll server: echo -n '' | openssl dgst -sha1 For example, if your object identifier string is mydir1/mysubdir2/myobj.xyz, the command would be the following: echo -n 'mydir1/mysubdir2/myobj.xyz' | openssl dgst -sha1 The SHA-1 hash code for the string will be returned, for example: c610260e3075673aadec3afc4983101449db2f05. This hash name is the name of the file on the StoreAll file system that contains the object contents.
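The same hash name can be computed without openssl, for example with Python's standard hashlib module. This sketch mirrors the echo/openssl command above; the object identifier string is the example from the text:

```python
import hashlib

# Compute the SHA-1 hash name for an object identifier string, as the
# openssl dgst -sha1 command above does. The resulting 40-character
# hex digest is the name of the file on the StoreAll file system
# that contains the object contents.
def object_hash_name(identifier):
    return hashlib.sha1(identifier.encode("utf-8")).hexdigest()

name = object_hash_name("mydir1/mysubdir2/myobj.xyz")
print(name)       # 40 hexadecimal characters
print(len(name))  # 40
```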
HTTP command: PUT /// [“Sync-Requested: 1”] HTTP/1.1 CURL command (Enter on one line): curl -X PUT http:///// -u : [--header “Sync-Requested: 1”] You can use a number of different formats for Active Directory users: NOTE: Enter commands on one line.
Delete Container Type of Request: Container services Description: Deletes the container. IMPORTANT: The container must be empty before it can be deleted. HTTP command: DELETE /// HTTP/1.
Retrieve Object Type of Request: Object Requests Description: Downloads the contents of an object. HTTP command: GET /// HTTP/1.1 ["Sync-Requested: 1"] CURL command (Enter on one line): curl -o http://// // -u : [--header "Sync-Requested: 1"] Delete Object Type of Request: Object Requests Description: Deletes an object.
12 HTTP-REST API file-compatible mode shares The StoreAll REST API share in file-compatible mode provides programmatic access to user-stored files and their metadata. The metadata is stored on the HP StoreAll Express Query database in the StoreAll cluster and provides fast query access to metadata without scanning the file system. For more information on managing Express Query, see “Express Query” (page 294).
Metadata queries You can issue StoreAll REST API commands that query the pathname and custom and system metadata attributes for a set of files and directories. Queries can be augmented with a search criterion for a certain system or custom attribute; only files and directories that match the criterion are included in the results. The query can specify a single file or a directory.
◦ Literal strings must be enclosed in single quotes. Non-escaped UTF-8 characters are allowed. Literals are case-sensitive. Any single quotes that are part of the string must be escaped with a second single quote (no double quotes). For example: 'Dave''s book' ◦ Literal numeric values must not be enclosed by quotes, and are always in decimal (0-9). • All HTTP query responses generated by the API code follow the JSON standard. No XML response format is provided at this time.
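When building query strings programmatically, the single-quote escaping rule above can be applied with a small helper. This is a hypothetical sketch, not part of the API:

```python
# Escape a string literal for use in a StoreAll REST API query:
# single quotes inside the value are doubled (never replaced with
# double quotes), and the whole value is wrapped in single quotes.
def quote_literal(value):
    return "'" + value.replace("'", "''") + "'"

print(quote_literal("Dave's book"))  # 'Dave''s book'
print(quote_literal("plain"))        # 'plain'
```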
The version field is recommended, but not required. In the syntax descriptions, it is surrounded by square brackets to indicate that it is optional. The current API is not fully backward compatible. That is, changes to this API might require client-side syntax changes to perform some operations. In the new API, the version has been increased to the next value. Any request without the version field might no longer work as desired or it might return an error.
as the Linux user “daemon” and group “daemon” (daemon:daemon), since that is the user the HTTP Server acts as, for anonymous operations. For retention properties assignment, the user must also have file system permission to navigate to the directory containing the file defined in the URI. Additionally, the user must be the owner of the file according to the file system’s properties for the file’s owning user. If these permissions are not satisfied, the operation will not be allowed.
Example curl -T temp/a1.jpg https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg This example uploads the file a1.jpg, stored on the client’s machine in the temp subdirectory of the user’s current directory, to the HTTP share named ibrix_share1. The share is accessed by the IP address 99.226.50.92. Because it is accessed using the standard HTTPS port (443), the port number is not needed in the URL. The file is created as filename xyz.jpg in the subdirectory lab/images on the share.
This example downloads an existing file called xyz.jpg in the lab/images subdirectory of the ibrix_share1 HTTP share. The file is created with the filename a1.jpg on the client system, in the subdirectory temp of the user’s current directory. If the file already exists at that path on the client, its contents are overwritten by the contents of xyz.jpg, provided that the local client’s permissions and retention settings on that file and directory allow it.
curl -X PUT http[s]://:/,/ --header "Sync-Requested: 1" The HTTP response will return the "Sync-Result" header in the response with the value that represents the errno, where 0 (zero) means that sync was done successfully and any other value indicates sync did not complete and durability is not guaranteed.
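A client can interpret the Sync-Result header value as a standard errno. The sketch below, with hypothetical helper names, treats 0 as success and maps any other value to its symbolic errno name:

```python
import errno

# Interpret the Sync-Result header returned when the request carries
# "Sync-Requested: 1": a value of 0 means the sync completed; any
# other value is an errno and durability is not guaranteed.
def sync_succeeded(sync_result_header):
    return int(sync_result_header) == 0

def describe(sync_result_header):
    code = int(sync_result_header)
    if code == 0:
        return "sync completed"
    name = errno.errorcode.get(code, "unknown")
    return "sync failed (errno %d, %s)" % (code, name)

print(sync_succeeded("0"))  # True
print(describe("5"))        # e.g. errno 5 is EIO on Linux
```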
If the urlpath does not exist, an HTTP 405 error is returned with the message (Method Not Allowed). Parameter Description pathname The name of the existing file/directory on the HTTP share for which custom metadata is being added or replaced. Directory pathnames must end in a trailing slash /. If the pathname parameter is not present, custom metadata is applied to the directory identified by . attribute[n] The attribute name. Up to 15 attributes can be assigned in a single command.
HTTP syntax The HTTP request line format is the following on one line: DELETE /[/]?[version=2&]attributes= nl [,…] HTTP/1.1 The equivalent curl command format is the following on one line: curl -g -X DELETE "http[s]://:/[/]?[version=2&] nl attributes=[,…"] See “Using HTTP” (page 174) for information about the IP address, port, and URL path.
With the exception of system::deleteTime, all of the system metadata attributes listed in this table are valid for live (for example, not-yet-deleted) files and directories. For deleted files, only the following attributes are valid: system::path, system::deleteTime, system::lastActivityTime and system::poid.
System attribute (key) Type Description Example Writable
(page 231) for more information.
system::tier string The user-defined name of the StoreAll tier of storage hosting this file or directory. If the file is stored in a segment that is not assigned to any tier, the string literal no tier is returned. Example: tier1_fast. Writable: no
system::createTime numeric The date/time when the file or directory was created. See "API date formats" (page 220).
System attribute (key) Type Description Example Writable
system::retentionState numeric The current WORM/retention state of the file, which is a combination of these bit values:
0x01: WORM
0x02: Retained
0x04: (not used)
0x08: Under legal hold
Example: a decimal number, such as 11 for the bit value 0x0B (under legal hold, retained, and WORM). Writable: partial (see system::worm).
This attribute applies only to files, returning 0 for directories.
System attribute (key) Type Description Example Writable
system::lastActivityTime numeric The latest date/time of the following 5 attributes of the file or directory: system::createTime, system::lastModifiedTime, system::lastChangedTime, system::deleteTime, system::lastPathChangedTime. See "API date formats" (page 220).
The system attribute, system::lastActivityTime, is useful for determining the last date/time at which a file had any modification activity.
system::onDiskAtime The atime inode field in StoreAll can be accessed as the system::onDiskAtime attribute from the API. This field represents different concepts in the lifetime of a WORM/retained file, and it often represents a concept other than the time of the file’s last access, which is why the field was named onDiskAtime rather than (for example) lastAccessedTime. (See “Retention properties assignment” (page 218) for a description of this life cycle).
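The time-valued attributes above (system::createTime, system::lastModifiedTime, system::onDiskAtime, and so on) appear in the JSON examples in this chapter as fractional seconds since the Unix epoch. They can be converted to readable timestamps with a short sketch (illustration only; the sample value is taken from the JSON example later in this chapter):

```python
from datetime import datetime, timezone

# Convert an API time value (fractional seconds since the Unix epoch)
# into a UTC datetime for display.
def api_time_to_utc(seconds):
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

t = api_time_to_utc(1385338489.0)
print(t.isoformat())  # a timestamp in late November 2013, UTC
```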
results are easier for users to read. The disadvantage is that it is not easy to follow files that were moved/renamed.
• Bypoid mode: This mode considers live and deleted files in query results. The JSON contains one stanza per poid. The system::poid value does not change when a file is renamed/moved. This mode is appropriate for applications to use in order to trace file changes. The disadvantage is that the JSON contains data used for internal control only.
Wildcards The StoreAll REST API provides three wildcards: Wildcard Description * In bypoid mode, a single attribute name of * returns all system and custom metadata attributes for the files and directories matching the query. In default/bypath mode, the following system metadata attributes are not returned: system::deleteTime and system::poid. The system::poid can be specifically requested in default mode.
By default, if the skip parameter is not supplied, the results will not skip any records. Similarly, if the top parameter is not supplied, the results will contain all records. HTTP syntax The HTTP request line format is the following on one line: GET /[/[]]?[version=2&][attributes=[,,…]] [&query=][AND ][&recurse][&skip=] [&top=][&ordered][&freshness][&bypoid] HTTP/1.
Parameter Description query_value The value to compare against the query_attr using the operator. The value is either a numeric or string literal. See “General topics regarding HTTP syntax ” (page 218) for details about literals. recurse If the recurse attribute is present, the query searches through the given directory and all of its subdirectories. If the recurse attribute is not present, the query operates only on the given file, directory, or directory of files (but not subdirectories).
JSON response format Default query mode example If the default query mode is requested (absence of the bypoid attribute), the JSON format is:
[
  { "mydir" : {
      "system::lastActivityTime" : 1385338489.000000000,
      "system::ownerUserId" : 0,
      "system::size" : 4096,
      "system::ownerGroupId" : 0,
      "system::onDiskAtime" : 1385338280.000000000,
      "system::lastChangedTime" : 1385338489.000000000,
      "system::lastModifiedTime" : 1385338307.000000000,
      "system::retentionExpirationTime" : 0.000000000,
      "system::mode" : 16895,
      "system::tier" : "no tier",
      "system::createTime" : 1385338280.000000000,
      "system::retentionState" : 0,
      "system::worm" : false,
      "system::lastPathChangedTime" : 1385338280.000000000
  } },
  { "0000000300048051:634BDEB8" : {
      "system::path" : "mydir/myfile.txt",
      "system::lastActivityTime" : 1385338517.000000000,
      "system::ownerUserId" : 0,
      "system::size" : 0,
      "system::ownerGroupId" : 0,
      "system::onDiskAtime" : 1385338307.
Example queries Get selected metadata for a given file The following is one command line: curl -g "http://99.226.50.92/ibrix_share1/lab/images/xyz.jpg?attributes=system::size,physician" This example queries only the file called xyz.jpg in the lab/images subdirectory on the ibrix_share1 HTTP share. A JSON document is returned containing the system size value and the custom metadata value for the physician attribute, for this file only.
issued queries to receive the first 2000 results. The client usually issues further queries until no more results are returned. Get selected metadata for all files in a given directory tree that match a system metadata query The following is one command line: curl -g "http://99.226.50.
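The skip/top paging pattern described above can be sketched as a small client-side loop. The share URL and page size below are examples only, and the sketch prints the URLs a client would fetch in order rather than contacting a server (in a real environment each URL would be passed to curl until an empty JSON array is returned).

```shell
# Sketch of a paginated query client using the skip/top parameters.
# BASE and PAGE are hypothetical example values.
BASE="http://99.226.50.92/ibrix_share1/lab/images"
PAGE=2000

build_page_url() {
  # $1 = zero-based page number; prints the query URL for that page
  local skip=$(( $1 * PAGE ))
  echo "${BASE}?attributes=system::size&recurse&skip=${skip}&top=${PAGE}"
}

# First three page URLs a client would request, in order:
build_page_url 0
build_page_url 1
build_page_url 2
```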
Get all files that match a name pattern The following is one command line: curl -g "http://99.226.50.92/ibrix_share1/lab/images?query=system::path~'.*\.(gif|jpg)$'" This example returns a JSON document that contains all files in the lab/images directory that end in .gif or .jpg. Get all activity-related times for files with recent activity The following is one command line: curl -i "http://99.226.50.
HTTP syntax NOTE: The commands provided in this section should be entered on one line. The HTTP request line format is the following on one line: PUT /<share>/<path>?[version=2&]assign=[system::retentionExpirationTime=<epoch_seconds>][,system::worm='true'] HTTP/1.1
Parameter Description A file’s state can be changed to WORM only once. A file in WORM or retained state cannot be reverted to non-WORM, and cannot be un-retained through the StoreAll REST API. See the ibrix_reten_adm command or the equivalent Management Console actions for administrative override methods to un-retain a file. Example: Set a file to WORM without specifying retention expiration curl -g -X PUT "https://99.226.50.92/ibrix_share1/lab/images/xyz.
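A complete request of this kind might be assembled as follows. This is a sketch only: the host and share path are the example values used elsewhere in this chapter, the expiration time is computed as UNIX epoch seconds one year in the future, and the command is printed rather than sent because it requires a live StoreAll HTTP share.

```shell
# Sketch: set a file to WORM with a retention expiration one year out.
# URL is a hypothetical example; the curl command is echoed, not executed.
EXPIRE=$(date -d "+1 year" +%s)   # retention expiration as UNIX epoch seconds
URL="https://99.226.50.92/ibrix_share1/lab/images/xyz.jpg"
echo curl -g -k -X PUT \
  "${URL}?assign=system::retentionExpirationTime=${EXPIRE},system::worm='true'"
```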
Be aware that this log file can grow quickly from client HTTP accesses. Manage the size of this file so that it does not fill up the local root file system. Enable it only when needed to diagnose HTTP traffic.
13 Managing SSL certificates Servers accepting FTPS and HTTPS connections typically provide an SSL certificate that verifies the identity and owner of the web site being accessed. You can add your existing certificates to the cluster, enabling file serving nodes to present the appropriate certificate to FTPS and HTTPS clients. StoreAll software supports PEM certificates. When you configure the FTP share or the HTTP vhost, select the appropriate certificate.
1. Generate a private key: openssl genrsa -des3 -out server.key 1024 You will be prompted to enter a passphrase. Be sure to remember the passphrase. 2. Remove the passphrase from the private key file (server.key). When you are prompted for a passphrase, enter the passphrase you specified in step 1. cp server.key server.key.org openssl rsa -in server.key.org -out server.key rm -f server.key.org 3. Generate a Certificate Signing Request (CSR): openssl req -new -key server.key -out server.csr 4.
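The steps above can be combined into a single non-interactive sequence that also produces a self-signed certificate (sufficient for testing; a production site would instead submit the CSR to a certificate authority in step 4). The file names and subject fields below are examples, not requirements; generating the key without -des3 yields an unencrypted key, so the separate passphrase-removal step is unnecessary.

```shell
# Sketch: create a passphrase-free key, CSR, and self-signed certificate.
# -subj supplies the CSR fields non-interactively; names are examples.
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr \
  -subj "/C=US/ST=CA/L=PaloAlto/O=Example/CN=storeall.example.com"
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
# Concatenate the certificate and private key into the single PEM file
# that is pasted into the Add Certificate dialog box:
cat server.crt server.key > server.pem
```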
Adding a certificate to the cluster To add an existing certificate to the cluster, click Add on the Certificates panel. On the Add Certificate dialog box, enter a name for the certificate. Use a Linux command such as cat to display your concatenated certificate file. For example: cat server.pem Copy the contents of the file to the Certificate Content section of the dialog box. The copied text must include the certificate contents and the private key in PEM encoding.
Deleting a certificate To delete a certificate from the GUI, select the certificate on the Certificates panel, click Delete, and confirm the operation.
14 Using continuous remote replication This chapter describes how to configure and manage the Continuous Remote Replication (CRR) service. NOTE: Be aware that when you configure CRR, the Express Query database is not replicated. If you are running StoreAll OS 6.5 or later and want to replicate custom metadata from the Express Query database, see “Enabling and disabling custom metadata replication ” (page 260).
and are replicated in parallel by each file serving node. There is no strict order to replication at either the file system or segment level. The continuous remote replication program tries to replicate on a first-in, first-out basis. When you configure continuous remote replication, you must specify a file system as the source. (A source directory cannot be specified.) File systems specified as the replication source or target must already exist.
The examples in the configuration rules use three StoreAll clusters: C1, C2, and C3: • C1 has two file systems, c1ifs1 and c1ifs2, mounted as /c1ifs1 and /c1ifs2. • C2 has two file systems, c2ifs1 and c2ifs2, mounted as /c2ifs1 and /c2ifs2. • C3 has two file systems, c3ifs1 and c3ifs2, mounted as /c3ifs1 and /c3ifs2. In the examples, : designates a replication target such as C1:/c1ifs1/target1.
Using intracluster replications There are two forms of intracluster replication: • The same cluster and a different file system. Configure either continuous or run-once replication. You will need to specify a target file system and optionally a target directory (the default is the root of the file system or the mount point). • The same cluster and the same file system. Configure run-once replication. You will need to specify a file system, a source directory, and a target directory.
the following alphanumeric and special characters are allowed: ` ~ ! @ # $ % ^ * ( ) _ - + = { } [ ] | ; < > . ? /. 5. Server Assignments: Select the servers (and corresponding NICs) that will handle the replication requests. The default server assignment is to use all servers that have the file system mounted. Click OK when finished. Registering the target cluster If the remote cluster does not appear in the selection list for Export To (Cluster), you will need to register the cluster.
Viewing remote replication exports The Remote Replication Exports panel lists the replication exports you created for the file system. Expand Remote Replication Exports in the lower Navigator and select the export to see the configured server assignments for the export. You can modify or remove the server assignments and the export itself. Configuring and managing replication tasks NOTE: • When configuring replication tasks, be sure to follow the guidelines described in “Overview” (page 248).
Source Settings for continuous replication For continuous replications, the Source Settings window lists the file system selected on the Filesystems panel. Specify a comma-separated list of file and directory exclude patterns in the Exclude patterns text box. You can specify at most 16 patterns. Source Settings window for run-once replication For a run-once replication of data other than a snapshot, specify the source directory on the Source Settings window.
If you are replicating a snapshot, select Use a snapshot and then select the appropriate Snap Tree and snapshot. Click Next to continue. The Target Settings window appears. Target Settings window For replications to a remote cluster, select the target cluster on the Target Settings window. This cluster must already be registered as a target export.
for more information.) Then enter the target file system. Optionally, you can also specify a target directory in the file system. For replications to the same cluster and different file system, the Target Settings window asks for the target file system. Optionally, you can also specify a target directory in the file system. For replications to the same cluster and file system, the Target Settings window asks only for the target directory. This field is required.
displayed includes the ID assigned to the task, the type of replication, the status of the task, and the time the task was started. From this panel, you can start, stop, pause, or resume a replication task or start a new one. NOTE: Pausing a task that involves continuous data capture does not stop the data capture. You must allocate space on the disk to avoid running out of space because the data is captured but not moved.
If the health check finds an issue in the CRR operation, it generates a critical event. Reports are generated on the source cluster. If the target cluster is running a version of StoreAll software earlier than 6.2, only the network connectivity check is performed. It takes approximately two minutes to generate a CRR health report. Reports are updated every 10 minutes. Only the last five CRR health reports are preserved.
Viewing server tasks Select Server Tasks to display the state of the task and other information for the servers where the task is running. Pausing or resuming a replication task To pause a task, select it on the Remote Replication Tasks panel and click Pause. When you pause a task, the status changes to PAUSED. Pausing a task that involves continuous data capture does not stop the data capture. You must allocate space on the disk to avoid running out of space because the data is captured but not moved.
Also note the following: • Multiple hard links on retained files on the replication source are not replicated. Only the first hard link encountered by remote replication is replicated, and any additional hard links are not replicated. (The retainability attributes on the file on the target prevent the creation of any additional hard links). For this reason, HP strongly recommends that you do not create hard links on files that will be retained if you wish to replicate them.
Prerequisites and planning considerations for custom metadata replication • Custom metadata replication is only supported on StoreAll version 6.5 or later, and the source and target cluster should be at the same StoreAll version. • Be aware that it takes a minimum of 30 minutes for the replicated custom metadata to be available in the Express Query database on the target file system.
The following options are available for the crr_cmd_replication.sh command:
-f FSNAME: The name of the source file system. Enter the actual file system name, not the mountpoint.
-F FSNAME: The name of the target file system. Enter the actual file system name, not the mountpoint.
-N IP_ADDRESS: The IP address of the target cluster's active Fusion Manager node.
Example 1 Sample output from initiating a custom metadata replication task (intercluster replication) Sample output without CRR security token: crr_cmd_replication.sh -f fs1 -F fs2 -N 10.1.6.122 -e -I "ALLTARGETFSN-bond0" root@10.1.6.122's password: Proceeding to initiate CRR custom metadata replication task. Please wait.......... Custom metadata replication task is initiated. Please check /usr/local/ibrix/log/enable_crr_cmd_fs1.log for the status. Sample output with CRR security token: crr_cmd_replication.
Example 2 Enabling an intracluster custom metadata replication task Command example: sh crr_cmd_replication.
Example 3 Initiating an intercluster custom metadata replication task Command example: sh crr_cmd_replication.
task with custom metadata replication from fs1 to fs2 using the password protection. [December 09, 2013 16:19:18] [CRR ON SOURCE] Submitted CRR operation to background. ID of submitted task: crr-1 Operation will be blocked by 1 pending operations. Please wait. Command succeeded! [December 09, 2013 16:19:18] [SOURCE SCRIPT] Initiating the source script on localhost. [December 09, 2013 16:19:19] [SOURCE SCRIPT] Baseline for custom meta data replication is initiated.
Example 4 Disabling an intracluster custom metadata replication task Command example: sh crr_cmd_replication.
Example 5 Disabling an intercluster custom metadata replication task Command example: sh crr_cmd_replication.
2. Select Summary→Active Tasks→Remote Replication→. In the Task Summary window, see the "Replicate Custom Metadata?" field. It displays Yes if enabled, and No if disabled. • From the CLI, enter the following command for the CRR task: ibrix_crr -i Understanding the ibrcfrworker log file (ibrcfrworker.log) The format used by ibrcfrworker to log messages is "%t,<%p>,%i,%n%L". In this instance: • %t is the date and time the file/directory was replicated.
◦ – D is for a device – S is for a special file (for example, named sockets and FIFOs). The other letters in the %i string are the actual letters that are output if the associated attribute for the file is being updated, or a dot (.) for no change. Three exceptions to this are the following: – A newly created item replaces each letter with a plus sign (+) – An identical item replaces the dots with spaces.
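Given that format string, a log line can be split on its first three commas into the timestamp, process ID, itemized-change string, and file name. The sample line below is invented for illustration only; real entries come from ibrcfrworker.log.

```shell
# Sketch: split a hypothetical ibrcfrworker.log line of the form
# %t,<%p>,%i,%n%L into its four fields using parameter expansion.
line='2013/12/09 16:19:18,<1234>,>f+++++++++,images/xyz.jpg'
ts=${line%%,*};  rest=${line#*,}    # %t: date and time of replication
pid=${rest%%,*}; rest=${rest#*,}    # <%p>: process ID
itemized=${rest%%,*}                # %i: itemized change string
name=${rest#*,}                     # %n%L: file or directory name
echo "time=$ts pid=$pid change=$itemized file=$name"
```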
Continuous remote replication job may hang if the target cluster goes down If the job hangs, its status indicates that it is running normally, but no files are replicated. If you see that replication is not occurring even though new files have been created, stop and then restart the job.
3. If the ibrix_cud service is not running, start the ibrix_server services: service ibrix_server start 4. Start a new replication task: ibrix_crr -s -f <FSNAME> -o -S <source_directory> -P <target_directory> [-e <exclusion_pattern>] The -S option specifies the directory under the source file system to synchronize with the target directory. The -P option specifies the target directory.
Issues with enabling a custom metadata replication task Enabling custom metadata replication is a multi-step process and could fail during any of these steps if the corresponding processes are already running or are in an inconsistent state. Some examples of common issues that could lead to failure are: • The archiving daemon was stopped or restarted on the source cluster. • The archiving daemon was stopped or restarted on the target cluster.
15 Managing data retention Data retention is intended for sites that need to archive read-only files for business purposes, and ensures that files cannot be modified or deleted for a specific retention period. Data retention includes the following optional features: • Data validation scans to ensure that files remain unchanged. • Data retention reports. Overview This section provides overview information for data retention and data validation scans.
Default retention period. If a specific retention period is not applied to a file, the file will be retained for the default retention period. The setting for this period determines whether you can manage WORM (non-retained) files as well as WORM-retained files: • To manage both WORM (non-retained) files and WORM-retained files, set the default retention period to zero. To make a file WORM-retained, you will need to set the atime to a date in the future.
storage. A scheduled scan will quit immediately if it detects that a scan of the same file system is already running. You can schedule periodic data validation scans, and you can also run on-demand scans. Configuring file systems for WORM/data retention You can enable a new or an existing file system for data retention and, optionally, other features that require a retention-enabled file system, including validation, reporting, and Express Query.
Modify the data validation schedule Modify the default schedule for running the data validation scan. Select the frequency for when the task should be run and then select the specific days and times to run it. Click OK when finished. • Enable Data Retention Select this option to enable data retention. For more information, see “Managing data retention” (page 274).
default retention period is set to zero, the files become WORM. If the default retention period is set to a value greater than zero, the files become WORM-retained.) To use this feature, check Set Auto-Commit Period and specify the time period. The minimum value for the autocommit period is five minutes, and the maximum value is one year. If you plan to keep normal files on the file system, do not set the autocommit period. • Enable Data validation. Select this option to enable data validation.
Viewing the retention profile for a file system To view the retention profile for a file system, select the file system on the Management Console, and then select WORM/Data Retention from the lower Navigator. The WORM/Data retention panel shows the retention profile.
Autocommit period is set and the default retention period is zero seconds: • Files remaining unchanged during the autocommit period automatically become WORM but are not retained and can be deleted. To make a WORM file retained, set the atime to a time in the future, either before or after the file becomes WORM.
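Setting a future atime can be done with the standard touch command. Note that this is a sketch run on an ordinary file system, where it merely changes the access time; only on a retention-enabled StoreAll file system does a future atime establish the retention expiration of a WORM file.

```shell
# Sketch: set a file's atime to a future date. On a retention-enabled
# StoreAll file system this sets the retention expiration; on any other
# file system it simply changes the access time.
f=$(mktemp)                                  # temporary example file
touch -a -d "2030-01-01 00:00:00" "$f"       # -a changes only the atime
stat -c '%x' "$f"                            # show the resulting access time
```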
NOTE: For SMB users setting the access time manually for a file, the maximum retention period is 100 years from the date the file was retained. For NFS users setting the access time manually for a file, the retention expiration date must be before February 5, 2106. The access time has the following effect on the retention period: • If the access time is set to a future date, the retention period of the file is set so that retention expires at that date.
To administer files from the CLI, use the ibrix_reten_adm command. See the HP StoreAll OS CLI Reference Guide for more information. IMPORTANT: Do not use the ibrix_reten_adm command on a file system that is not enabled for data retention. Specifying path lists When using the Management Console or the ibrix_reten_adm command, you need to specify paths for the files affected by the retention action.
find /ibrixFS/mydir -type d -exec ibrix_reten_adm -h -f ibrixFS -P {}/*,{}/.??*,{}/.[!.]* \; Setting or removing a legal hold When a legal hold is set on a retained or WORM file, the file cannot be deleted until the hold is released, even if the retention period has expired. On the WORM/Data Retention – File Administration dialog box, select Set a Legal Hold and specify the appropriate file. To remove a legal hold from a file, select Remove a Legal Hold and specify the appropriate file.
Removing the retention period When you remove the retention period from a retained file, the file becomes a WORM file. On the WORM/Data Retention – File Administration dialog box, select Remove Retention Period and specify the appropriate file. Removing the WORM attribute from a file You cannot remove the WORM attribute from a file.
If the retention period has expired at the time autocommit is applied, the file is not retained. Running data validation scans Scheduling a validation scan When you use the Management Console to enable a file system for data validation, you can set up a schedule for validation scans. You might want to run additional scans of the file system at other times, or you might want to scan particular directories in the file system.
Starting an on-demand validation scan You can run a validation scan at any time. Select the file system on the Management Console, and then select Active Tasks from the lower navigator. Click New to open the Starting a New Task dialog box. Select Data Validation Scan as the Task Type. When you click OK, the Start a new Validation Scan dialog box appears. Change the path to be scanned if necessary and click OK.
Viewing, stopping, or pausing a scan Scans in progress are listed on the Active Tasks panel on the Management Console. If you need to halt the scan, click Stop or Pause on the Active Tasks panel. Click Resume to resume the scan. To view the progress of a scan from the CLI, use the ibrix_task command.
Following is a sample validation summary file: # cat /fsIbrix/.archiving/validation/history/4-0.sum JOB_ID=4 FILESYSTEM_NAME=fsIbrix FILESYSTEM_MOUNT_DIR=/fsIbrix PATH=/fsIbrix/.
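Because the summary file uses simple KEY=VALUE lines, individual fields can be pulled out with standard tools. The sample below mirrors the layout shown above; the full key set on a live system may differ, and sum_get is a hypothetical helper name, not a StoreAll command.

```shell
# Sketch: extract fields from a validation summary (.sum) file.
# The sample content imitates the KEY=VALUE layout of a real summary.
cat > sample.sum <<'EOF'
JOB_ID=4
FILESYSTEM_NAME=fsIbrix
FILESYSTEM_MOUNT_DIR=/fsIbrix
PATH=/fsIbrix/.archiving
EOF

# Look up one key's value (avoids sourcing the file, which would
# clobber shell variables such as PATH):
sum_get() { sed -n "s/^$1=//p" sample.sum; }
sum_get JOB_ID
sum_get FILESYSTEM_NAME
```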
Checksum corruption: If the checksums of the and are identical, the express query checksum store may have become corrupted. If this is the case, you must restore the checksums: • If only a few files are inconsistent and you want to postpone restoring the checksums, you can back up the files with a checksum inconsistency, delete those files from the file system, and restore the backed up files to the file system.
The utilization report summarizes how storage is utilized between retention states and free space. The next example shows the first page of a utilization report broken out by tiers. The results for each tier appear on a separate page. The total size scales automatically, and is reported as MB, GB, or TB, depending on the size of the file system or tier. A data validation report shows when files were last validated and reports any mismatches. A mismatch can be either content or metadata.
Generating and managing data retention reports To run an unscheduled report from the Management Console, select Filesystems in the upper Navigator and then select WORM/Data Retention in the lower Navigator. On the WORM/Data Retention panel, click Run a Report. On the Run a WORM/Data Protection Summary Report dialog box, select the type of report to view, and then specify the output format. If an error occurs during report generation, a message appears in red text on the report. Simply run the report again.
Generating data retention reports from the CLI You can generate reports at any time using the ibrix_reports command. Scheduled reports can be configured only on the Management Console.
• When a legal hold is applied to a file, the legal hold is not replicated on the target. If the file on the target should have a legal hold, you will also need to set the legal hold on that file. • If a file has been replicated to a target and you then change the file's retention expiration time with the ibrix_reten_adm -e command, the new expiration time is not replicated to the target. If necessary, also change the file's retention expiration time on the target.
16 Express Query Express Query provides a per-file system database of system and custom metadata, and audit histories of system and file activity. When Express Query is enabled on the file system, you can manage the metadata service, configure auditing, create reports from the audit history, assign custom metadata and certain system metadata to files and directories, and query for selected metadata from files.
See the ibrix_archiving command in the HP StoreAll OS CLI Reference Guide for examples of how to perform these tasks. Backing up and restoring file systems with Express Query data Express Query stores its metadata database for each file system on the file system, in the /.archiving/database directory. Therefore, if you back up a snapshot of the entire file system, you also back up the database.
Restore to an existing file system that has Express Query enabled To restore a backup to an existing file system that has Express Query enabled: 1. Disable Express Query for the file system and remove any StoreAll REST API shares. Disable auditing before you disable Express Query. If data validation is enabled for the file system, it must also be disabled before you disable Express Query. NOTE: If you are only restoring a sub-tree of a file system, you can leave Express Query enabled.
Saving and importing custom metadata and audit information Use the following procedures to save, or export, custom metadata that is stored only in the Express Query database and not in the files themselves. You can also import the metadata if you need to recreate the Express Query database on the file system from which you exported the metadata.
ibrix_audit_reports -t unordered -o all -f ibrixFS This command saves audit data for all events in file system ibrixFS. Use the “unordered” option for the fastest performance. See the HP StoreAll OS CLI Reference Guide for more information about this command. Importing metadata to a file system Use the MDImport tool to import a CSV file containing custom or audit metadata into a new Express Query database. The CSV file can be the output of either the MDExport script or the ibrix_audit_reports command.
NOTE: • The output file listed on the MDExport.pl command line must be in a directory that will be replicated by CRR. • The /.archiving directory is excluded from replication. The ibrix_audit_reports command creates its report output file in the /.archiving/reports subdirectory. Therefore, after issuing the ibrix_audit_reports command, your script must move or copy the report output file to another directory on the file system outside the .archiving tree for it to be replicated.
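The copy-out step described in the note can be sketched as follows. The mount point, report file name, and destination directory are stand-ins created under a temporary directory; on a real system the report path comes from the task ID reported by ibrix_audit_reports.

```shell
# Sketch: copy an audit report out of the excluded .archiving tree so
# that CRR can replicate it. All paths here are hypothetical stand-ins.
FS_MOUNT=$(mktemp -d)   # stand-in for the file system mount point
mkdir -p "$FS_MOUNT/.archiving/reports" "$FS_MOUNT/replicated_reports"
echo "sample,audit,data" > "$FS_MOUNT/.archiving/reports/101.csv"

# Copy the report outside .archiving so remote replication picks it up:
cp "$FS_MOUNT/.archiving/reports/101.csv" "$FS_MOUNT/replicated_reports/"
ls "$FS_MOUNT/replicated_reports"
```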
Modifying the audit log configuration To change the audit log configuration, click Modify on the Audit Log panel. The Modify Audit Settings dialog box appears. Modify settings as described in Table _ and Table _. To manage audit settings using the CLI, use the ibrix_fs command with the -oa parameter. See the HP StoreAll OS CLI Reference Guide for more information. To manage audit log reporting using the CLI, see the ibrix_audit_reports command in the HP StoreAll OS CLI Reference Guide.
Table 31 Fields on the Modify Auditing Settings dialog box (continued) Field Description Audit Logs Expiration Policy Set the expiration period for the audit logs (the number of days audit logs are kept). Disabled Events and Enabled Events Identifies the events that are either disabled or enabled. When auditing is first enabled, all events are disabled by default. You can manage events as follows: • Click the double right arrow to move all events from the Disabled Events box to the Enabled Events box.
Monitor the space used by the audit logs and reports in the /.archiving/.database tree, which includes current metadata and the audit log history. To reduce the space used, reduce the number of events enabled for auditing and/or shorten the time specified in the Audit Log Expiration Policy box. Managing audit log reports Audit log reports include metadata for selected file system events that occurred during a specific time period.
the system log. To resolve this issue, you must clear space on the segment either by deleting data or running the 'rebalance' utility. When you click OK on the Run an Audit Log Report dialog box, an audit report task is started. Once that task is finished, the audit reports are placed in the following file in the CSV (comma-separated value) format: /.archiving/reports/<TASK_ID>, where TASK_ID is displayed when you generate the audit report.
The file is in a comma-separated value (CSV) format with a header row. The following table displays the definitions for the less obvious fields in an audit report.
Field: Description
*time[n]sec: The seconds and nanoseconds of that time, in UNIX epoch time (the number of seconds since the start of Jan 1, 1970 in UTC).
actorgroupid: Stores the group ID of the user who performs the operations on a file.
actoruserid: Stores the user ID of the user who performs the operations on a file.
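Because the reports are CSV files with a header row, a column such as actoruserid can be extracted by locating it in the header first. The sample rows below are invented for illustration; real reports contain the fields described above.

```shell
# Sketch: pull the actoruserid column from a header-row CSV audit
# report. The sample file and its rows are invented examples.
cat > audit_sample.csv <<'EOF'
eventtimesec,actoruserid,actorgroupid,event
1385338280,0,0,File Created
1385338517,1001,100,File Read
EOF

# Find the column index from the header, then print that field per row:
awk -F',' 'NR==1 {for (i=1;i<=NF;i++) if ($i=="actoruserid") c=i; next}
           {print $c}' audit_sample.csv
```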
Table 34 Events by category (continued) Category Event Enabled, Retention Mode Changed, Retention Default Changed, Maximum Retention Changed, Minimum Retention Changed, File Retention Period Expired Validation Validation Scan Ended, Validation Scan Started, Validation Checksum Created, Validation Failed, Validation Succeeded Freshness The StoreAll REST API provides a way for users to determine the freshness of data.
If the path name limit is exceeded, the path name: • Appears in audit report logs in the format of “/” + . • Does not get collected for file system reports. • Appears empty in REST API and Express Query Windows Search Integration query results.
Auto commit files NOTE: This task only applies to retention-enabled file systems. Use the auto-commit feature to automatically move a file to a WORM+Retained state when the defined time period has passed and no operations have occurred against the file. If a file operation occurs before the defined time period has passed, that will also transition the file to WORM+Retained. The auto-commit feature just ensures that the transition will occur with or without a file operation.
Online Metadata Synchronizer The Online Metadata Synchronizer can be used to: • Verify the consistency of the system metadata stored in Express Query database against the metadata of the file system. Like the ibrix_fsck command, this task is useful when there is a possibility of metadata inconsistencies between the file system and the Express Query database. Such inconsistencies may arise from abnormal occurrences such as power outages, system failures, truncated path names, or partial NDMP restores.
The EQWSI feature searches the metadata stored in the Express Query database, which provides fast access to metadata without scanning the entire file system. EQWSI displays the metadata of the files and directories meeting your search criteria. Double-click any of the files or directories listed in Windows Explorer to view the contents or download the files. The EQWSI feature requires an EQWSI-enabled HTTP share. You can create an EQWSI-enabled HTTP share or enable EQWSI on a pre-existing HTTP share.
EQWSI can only be enabled on shares that meet the following criteria: • HTTP-share required properties. ◦ If multiple HTTP shares are configured, ensure that the HTTP VHOSTs are not configured with the same IP address. ◦ The share must be a standard HTTP share or HP StoreAll REST API share in file compatible mode. HTTPS shares and HP StoreAll REST API shares in object mode are not supported. ◦ The EQWSI-enabled HTTP share cannot be configured with a multi-directory level URL.
1. Select Filesystems in the upper Navigator panel in the StoreAll Management Console. 2. Select the file system containing the HTTP share you want to modify. 3. In the lower Navigator panel, select Summary→HTTP Shares. 4. Right-click the HTTP share on which you want to enable EQWSI. Then, select Modify from the menu.
5. Select true from the Enable Express Query WSI menu on the Modify HTTP share dialog box. Requirements for creating a Windows Explorer Search Plug-in A Windows Explorer Search Plug-in uses the HTTP share configuration settings, such as the virtual host IP address, HTTP share URL path and directory path to access the EQWSI-enabled share from a Windows client.
◦ Windows Server 2012 R2 ◦ Windows Server 2008 R2 • Virtual host requirements: ◦ The client used for EQWSI searches should be able to access the HTTP virtual host for the EQWSI-enabled HTTP share. ◦ Only port 80 is recommended for the virtual host used for EQWSI-enabled HTTP shares.
Creating a Windows Explorer Search Plug-in A Windows Explorer Search Plug-in uses the HTTP share configuration settings such as the Virtual Host IP Address, HTTP share URL path, and directory path to access the EQWSI-enabled share. To create a Windows Explorer Search Plug-in: 1. Select Filesystems in the upper Navigator panel in the StoreAll Management Console. 2. 3. 4. 5. Select the file system containing the HTTP share for which the Windows Explorer Search Plug-in needs to be generated.
The generated Windows Explorer Search Plug-in is given the file name HPStoreAll-sharename.osdx , and it is also saved in the /usr/local/ibrix/httpd/ htdocs directory on the StoreAll server. 6. To save the file to your Windows computer, right-click the Windows Explorer Search Plug-in displayed and select Save Page As.... 7. Provide the file name and save the file as an .OSDX file. You can save the file to any local directory. 8.
9. When the Search Connector window displays, click OK. This window shows the IP address of the HTTP share whose link is being added. The EQWSI link appears under Favorites in the format HPStoreAll-sharename, as shown in the following figure.
Requirements for EQWSI queries
IMPORTANT: EQWSI queries require a Windows Explorer Search Plug-in.
• EQWSI queries only support dates from 1970-01-02 (January 2, 1970) onward. EQWSI does not support the explicit use of dates on or before 1970-01-01 in queries, regardless of the operators used in the query. For instance, the EQWSI query “rxtime<=1970-01-01” is not supported, whereas the EQWSI query “rxtime<=1970-01-02” is supported.
Table 36 Supported keywords for EQWSI queries (continued)

Keyword   Description                                                                 Supported operators
mtime     Searches files/directories based on their last modification date.           >, <, >=, <=
mval      Metadata value: searches files/directories based on their metadata value.   =, !=
rstate    Retention state: searches files based on their retention state.             =, !=
Examples of supported EQWSI queries:
• size > tiny
• crtime >= 2013-12-24
• atime < 2013-12-24
• dtime <= 2013-12-24
• rstate = 4
• mode = aLL
• mkey != Engineering
• mval = Genetics
Additional requirements for EQWSI queries
Keep in mind the following additional requirements for EQWSI queries:
• HTTP share settings.
Constraints with EQWSI
For any EQWSI search, a maximum of 100 search results are listed in Microsoft Windows Explorer, due to a Microsoft Windows limitation. After a query returns its first 100 results, repeating the same query will most likely return the same results again rather than the next 100. To view additional results, modify the query so that the desired results fall within the first 100 search results.
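The 1970-01-02 date floor noted earlier corresponds to the Unix epoch: dates on or before 1970-01-01 map to non-positive timestamps. A client-side pre-check along these lines can reject unsupported query dates before they are submitted. This is a hypothetical helper for illustration, not part of the plug-in; it assumes GNU date is available.

```shell
#!/bin/sh
# Reject query dates at or before 1970-01-01, which EQWSI does not support.
# Converts the date to seconds since the Unix epoch (1970-01-01 00:00 UTC).
check_eqwsi_date() {
    ts=$(date -u -d "$1" +%s) || return 2
    # Dates on or after 1970-01-02 have a timestamp of at least 86400 seconds.
    if [ "$ts" -ge 86400 ]; then
        echo "supported"
    else
        echo "unsupported"
    fi
}

check_eqwsi_date 1970-01-01   # unsupported
check_eqwsi_date 1970-01-02   # supported
```

Such a check only mirrors the documented constraint; the EQWSI service itself enforces the limit on the server side.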
4. To see a magnified view of the data with the searched file or directory displayed, hover the mouse over any of the search results in Windows Explorer.
5. You can customize the Windows Explorer view to display the various metadata details of the searched files or directories, as shown in the following figure.
Description of the data displayed in Windows Explorer for EQWSI queries
This section provides information about the data displayed in Windows Explorer for EQWSI queries.
Summarized details of system metadata displayed in Windows Explorer
The following table summarizes the details of the system metadata displayed in Windows Explorer for all types of queries. Not all Windows Explorer/metadata keys listed in the following table will appear in your query result.

Table 37 Summarized details of the system metadata displayed in Windows Explorer

Windows Explorer/Metadata Key: Title
Metadata Value: The name of the file/directory.
If the EQWSI query uses only the dtime keyword, then only the title, link and the delete time of the deleted files or directories are displayed in the Windows Explorer. NOTE: The HTTP link listed for a deleted file corresponds to a non-existent file or path, resulting in an HTTP 404 error page displayed when the link is clicked.
17 Generating reports
HP StoreAll lets you generate reports of files stored on file systems that have Express Query enabled. The Reporting functionality uses the Express Query database, which provides fast access to metadata without scanning the entire file system. How quickly StoreAll generates a report depends on the size and number of files on the file system.
Reports do not:
• provide file size details of the overall file system, such as file system size, available size, and block size.
• determine or categorize file data types through use of a File Extension report. This report type merely parses and lists each distinct file extension string found in the specified file system path.

Table 39 Supported options for reports

Options: Filesystem
Description: Lists the file systems with Express Query enabled. One or more file systems can be selected.
Table 39 Supported options for reports (continued)

Options: Slice Value (not applicable to File Extension reports)
Description:
• Retention Expiration reports and Last Validation reports: Required if you selected Log or Equal as the Slice Type and are specifying a Date Range/Duration. The slice value can be specified in days, months, or years.
• File Size Distribution reports: Provide the slice value in KiB, MiB, GiB, TiB, or PiB.
Table 40 Filter criteria for reports (continued) Filter Criteria Options Description Supported operators Examples • pathname isnot dirhome/* • pathname is *.pdf State File size Reports can be limited to a particular state in a file system by using this scope.
Table 40 Filter criteria for reports (continued)

Filter Criteria: Tier
Description: Reports can be generated for the tiers of the file system. For example, if a file system has two segments, with segment 1 assigned as tier1 and segment 2 as tier2, you can filter by tier.
Supported operators: is or =, isnot or !=
Examples: tier is tier2, tier != tier1

Filter Criteria: Tag
Description: Reports can be filtered using the custom tag information of the files.
Supported operators: tag_
Examples: tag_color green

The filter criteria can be combined to form complex filter criteria.
The following image shows the data from a Retention Expiration report that was generated using the default options.
An example of a report with slice type log selected This section provides the output from a report that has slice type log selected.
Figure 2 Retention Expiration report with slice type log selected (table format)
The following table provides a description of the information provided in the previous figure.

Table 42 Description of data provided in example report for slice type log selected

Calculation of slices using log of 5 years (logarithmic exponents of 5): 5e0 = 1
Resulting slice: 1y to 2y
Total number of files retained for this period: 0
Description: The first slice of the graph is for the duration 1y to 2y.
Figure 3 Output from a sample Retention Expiration report with slice type equal selected Zero valued records are not displayed in the chart. See the table for the complete data entries. The Data tab for the graph provides the data in a table format.
The following table provides a description of the information provided in the previous figure.

Table 44 Description of data provided in example report for slice type equal

Duration of slices using equal slices of 5 years: 5
Resulting slice: 1y to 6y
Total number of files retained for this period: 500
Description: 500 files are retained in this duration.

Duration of slices using equal slices of 5 years: 25 years
Resulting slice: 6y to 31y
Total number of files retained for this period: 0
Description: Ideally the next slice would be 6y to 11y (a 5-year slice), then 11y to 16y, and so on.
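The boundary arithmetic behind equal slicing is simple: starting at the lower bound of the range, each slice spans a fixed width (5 years in the example). The sketch below generates those ideal slice boundaries; note that, as the table above shows, the actual report merges adjacent zero-file slices (6y to 31y) rather than listing each empty 5-year slice. The function name is illustrative, not a product command.

```shell
#!/bin/sh
# Sketch of equal slicing: fixed-width slices over a range of years.
# gen_slices LOWER UPPER WIDTH prints each ideal slice boundary pair.
gen_slices() {
    lower=$1; upper=$2; width=$3
    while [ "$lower" -lt "$upper" ]; do
        next=$((lower + width))
        echo "${lower}y to ${next}y"
        lower=$next
    done
}

# The example report's range, 1y to 31y, in 5-year slices:
gen_slices 1 31 5
# prints: 1y to 6y, 6y to 11y, 11y to 16y, 16y to 21y, 21y to 26y, 26y to 31y
```

In the generated report, only the first of these slices contains files, so the remaining five empty slices collapse into the single 6y-to-31y row shown in Table 44.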
Figure 5 Last Validation report with data displayed in a bar chart Figure 6 Last Validation report with data displayed in a table File Size Distribution reports File Size Distribution reports provide an overview of the number of files and their size on a file system.
Figure 7 File Size Distribution report with data displayed in a pie chart File Size Distribution reports 335
Figure 8 File Size Distribution report with data displayed in a bar chart 336 Generating reports
Figure 9 File Size Distribution report with data displayed in a table File Extension reports File Extension reports provides an overview of the various file extensions distributed on the file system. IMPORTANT: Keep in mind the following: • The File Extension report searches only by file extension, not by file type. For example, if you save a text file with a .png extension, the text file will be listed under the other files with a .png extension even though the file is a text file.
Figure 10 File Extension report with data displayed in a pie chart 338 Generating reports
Figure 11 File Extension report with data displayed in a bar graph File Extension reports 339
Figure 12 File Extension report with data displayed in a table graph 340 Generating reports
18 Obtaining performance statistics
The Performance Statistics tool (also referred to as Statstool or the Statistics tool) lets you generate reports containing historical performance data for the cluster or for an individual file serving node. You can view data for the network, operating system, memory, block devices, file systems, and protocols (NFS and CIFS). Statistical data is transmitted from each file serving node to the Fusion Manager, which controls processing and report generation.
To generate performance reports through the StoreAll Management Console:
1. In the StoreAll Management Console, select Reporting→Performance Statistics.
2. Select Cluster or Host-level from the Report Level menu.
   • Cluster: Information from the cluster is gathered for the report.
   • Host-level: Information from a selected host is gathered for the report.
3. Select one of the following from the Sub-Report Level menu.
7. To save a report, select one of the following from the Save Report menu and then click Save.
• PDF
• HTML
• CSV (comma-separated values)
• TXT
NOTE: When you click the Save button, StoreAll regenerates the report for the duration selected on the Management Console and saves it in the selected format. If time has elapsed between when a report was generated on screen and when it is saved, the saved report reflects that time difference.
Table 45 Category definitions (continued)

Category: Memory
Description: Reports on memory usage, swap memory, and free memory as a percentage of total memory.
Available Reports: Cluster Comparison/Cluster Cumulative/Host-level: Memory Utilization, Swap Memory, Free Memory

Category: Network
Description: Reports on network activity, in KB/sec, and operations, in ops/sec, on all devices.
Available Reports: Cluster Comparison/Cluster Cumulative: NetworkActivityOnAllDevices, NetworkOperationsOnAllDevices. Host-level: NetworkTransmit Operations, NetworkTr
Troubleshooting the Performance Statistics tool • Data is not collected. If data is not being gathered in the common directory for the Statistics Manager (/local/statstool/histstats/ by default), restart the Performance Statistics tool processes on all nodes. See “Controlling Performance Statistics tool processes” (page 344). • Installation issues. Check the /usr/local/ibrix/log/statstool/stats-install.log and try to fix the condition, or send the log to HP Support.
19 Configuring Antivirus support The StoreAll Antivirus feature can be used with supported Antivirus software, which must be run on systems outside the cluster. These systems are called external virus scan engines. To configure the Antivirus feature on a StoreAll cluster, complete these steps: 1. Add the external virus scan engines to be used for virus scanning. You can schedule periodic updates of virus definitions from the virus scan engines to the cluster nodes. 2. Enable Antivirus on the file systems.
Adding or removing external virus scan engines
The Antivirus software runs on external virus scan engines. You must add these virus scan engines to the Antivirus configuration.
IMPORTANT: HP recommends that you add a minimum of two virus scan engines to provide load balancing for scan requests and to prevent the loss of scanning if one virus scan engine becomes unavailable.
When a virus scan engine is no longer needed, you must manually delete it from the configuration. Go to the Virus Scan Engines panel, select the applicable virus scan engine and click Delete. Enabling or disabling Antivirus on StoreAll file systems When configuring the Antivirus feature, you should enable Antivirus on each file system that you want to scan: 1.
Defining protocol-specific policies
For certain file sharing protocols (currently only SMB/CIFS), you can specify the file operations that trigger a scan. There are three policies:
• OPEN (default). Scan when the file is opened.
• CLOSE. Scan when the file is closed.
• BOTH. Scan when a file is opened and when it is closed.
NOTE: If you select CLOSE, files written earlier are not rescanned automatically when the virus scan engine is updated with newer virus definitions.
Defining exclusions Exclusions specify files that should be skipped during Antivirus scans. Excluding files can improve performance, as files meeting the exclusion criteria are not scanned. You can exclude files based on their file extension or size. By default, when exclusions are set on a particular directory, all of its child directories inherit those exclusions.
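The inheritance behavior described above — a directory uses its own exclusion rule if one is explicitly set, otherwise the rule of its nearest ancestor — can be modeled as a walk up the directory path. The sketch below uses a flat "dir=rule" table as hypothetical stand-in data; it is an illustration of the lookup logic only, not the actual Antivirus configuration store or its rule format.

```shell
#!/bin/sh
# Model of exclusion inheritance: find the nearest ancestor (or the
# directory itself) that has an explicit exclusion rule.
# Hypothetical rule table: each entry is "directory=exclusion-pattern".
rules="/data=*.iso /data/projects/scratch=*.tmp"

effective_rule() {
    dir=$1
    while [ -n "$dir" ] && [ "$dir" != "/" ]; do
        for entry in $rules; do
            # An exact match on this directory wins; print its pattern.
            case $entry in "$dir="*) echo "${entry#*=}"; return 0 ;; esac
        done
        dir=${dir%/*}            # walk up to the parent directory
        [ -z "$dir" ] && dir=/
    done
    echo "none"
}

effective_rule /data/projects/scratch   # *.tmp (explicitly set here)
effective_rule /data/projects           # *.iso (inherited from /data)
```

Setting an explicit rule on a child directory, as in /data/projects/scratch above, overrides the inherited rule — matching the override behavior the procedure below configures through the Management Console.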
5. Select the appropriate type of rule:
• Inherited Rule/Remove Rule. Use this option to reset or remove exclusions that were explicitly set on the child directory. The child directory then inherits exclusions from its parent directory. Also use this option to remove exclusions on the top-most directory where exclusion rules have been set.
• No rule. Use this option to remove or stop exclusions at the child directory.
Updating Antivirus definitions You should update the virus definitions on the cluster nodes periodically. On the Management Console, click Update ClusterWide ISTag on the Antivirus Settings panel. The cluster then connects with the external virus scan engines and synchronizes the virus definitions on the cluster nodes with the definitions on the external virus scan engines. NOTE: All virus scan engines should have the same virus definitions.
Recommendations for Antivirus scans:
• Run Antivirus scans during periods of low activity on the system.
• When configuring Antivirus scans, ensure that a subtree containing a very large number of files is not assigned to a single Antivirus scan.
• Do not run Antivirus scans on multiple file systems simultaneously, because of a resource limitation on the AV daemon.
6. On the Schedule tab, click Schedule this task, then select the frequency (once, daily, weekly, monthly) and specify when the scan should run.
NOTE: You can only schedule scans using the Management Console.
7. Click OK.
Viewing, pausing, resuming, or stopping Antivirus scan tasks
Viewing an active task
To view an active scan task on a file system, select the file system on the Filesystems panel on the Management Console, and then select Active Tasks from the lower Navigator.
Stopping or pausing an active task Use the buttons on the Antivirus Task Summary panel to stop or pause a running task, or to resume a paused task. Viewing the results of an inactive task To view inactive Antivirus scan tasks for a file system, select the file system on the Filesystems panel and then select Inactive Tasks on the lower Navigator.
Antivirus quarantines and software snapshots The quarantine utility has the following limitations when used with snapshots. Limitation 1: When the following sequence of events occurs: • A virus file is created inside the snap root. • A snapshot is taken. • The original file is renamed or moved to another path. • The original file is read. The quarantine utility cannot locate the snapshot because the link was formed with the new filename assigned after the snapshot was taken.
20 Creating StoreAll software snapshots The StoreAll software snapshot feature allows you to capture a point-in-time copy of a file system or directory for online backup purposes and to simplify recovery of files from accidental deletion. Software snapshots can be taken of the entire file system or selected directories. Users can access the file system or directory as it appeared at the instant of the snapshot. NOTE: To accommodate software snapshots, the inode format was changed in the StoreAll 6.
To enable a directory tree for snapshots, click Add on the Snap Trees panel. You can create a snapshot directory tree for an entire file system or a directory in that file system. When entering the directory path, do not specify a directory that is a parent or child of another snapshot directory tree. For example, if directory /dir1/dir2 is a snapshot directory tree, you cannot create another snapshot directory tree at /dir1 or /dir1/dir2/dir3. IMPORTANT: StoreAll reliably supports up to 1,024 snapshots.
IMPORTANT: A snapshot reclamation task is required for each file system containing snap trees that have scheduled snapshots. If a snapshot reclamation task does not already exist, you will need to configure the task. See “Reclaiming file system space previously used for snapshots” (page 362). Modifying a snapshot schedule You can change the snapshot schedule at any time. On the Snap Trees panel, select the appropriate snap tree, select Modify, and make your changes on the Modify Snap Tree dialog box.
You must manually delete on-demand snapshots when they are no longer needed.
Determining space used by snapshots
Space used by snapshots counts towards the used capacity of the file system and towards user quotas. Standard file system space reporting utilities work as follows:
• The ls and du commands report the size of a file depending on the version you are viewing. If you are viewing a snapshot, the commands report the size of the file when it was snapped.
The following example lists snapshots created on an hourly schedule for snap tree /ibfs1/users. Using ISO 8601 naming ensures that the snapshot directories are listed in order according to the time they were taken.
[root@9000n1 ~]# cd /ibfs1/users/.snapshot/
[root@9000n1 .
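The benefit of ISO 8601 naming can be demonstrated with plain ls: because the timestamp fields run from most to least significant, lexicographic order and chronological order coincide. The following self-contained illustration uses a temporary directory and made-up snapshot names in the same style, rather than a real .snapshot tree:

```shell
#!/bin/sh
# Demonstrate that ISO 8601 timestamps sort chronologically under plain
# lexicographic ordering, which is why snapshot directories named this
# way list in the order they were taken.
dir=$(mktemp -d)
mkdir "$dir/2013-12-10T160042_hourly" \
      "$dir/2013-12-10T170042_hourly" \
      "$dir/2013-12-09T230042_hourly"
ls "$dir"    # the 2013-12-09 name lists first, then the two 2013-12-10 names
rm -rf "$dir"
```

By contrast, names using month/day/year or 12-hour clock fields would not sort chronologically, which is why the scheduler's ISO-style names are convenient for browsing and scripting.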
Restoring files from snapshots Users can restore files from snapshots by navigating to the appropriate snapshot directory and copying the file or files to be restored, assuming they have the appropriate permissions on those files. If a large number of files need to be restored, you may want to use Run Once remote replication to copy files from the snapshot directory to a local or remote directory (see “Starting a replication task” (page 253)).
Using the Management Console, you can schedule a snapshot reclamation task to run at a specific time on a recurring basis. The reclamation task runs on an entire file system, not on a specific snapshot directory tree within that file system. If a file system includes two snapshot directory trees, space is reclaimed in both snapshot directory trees.
On the General tab, select a reclamation strategy: • Maximum Space Reclaimed. The reclamation task recovers all snapped space eligible for recovery. It takes longer and uses more system resources than Maximum Speed. This is the default. • Maximum Speed of Task. The reclamation task reclaims only the most easily recoverable snapped space.
Removing snapshot authorization for a snap tree
Before removing snapshot authorization from a snap tree, you must delete all snapshots in the snap tree and reclaim the space previously used by the snapshots. Complete the following steps:
1. Disable any schedules on the snap tree. Select the snap tree on the Snap Trees panel, select Modify, and remove the Frequency settings on the Modify Snap Tree dialog box.
2. Delete the existing snapshots of the snap tree. See “Deleting snapshots” (page 362).
3.
Backups with the tar utility The tar symbolic link (h) option can copy snapshots. For example, the following command copies the /snapfs1/test3 directory associated with the point-in-time snapshot. tar -cvfh /snapfs1/test3/.
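The effect of tar's h option can be seen with any symbolic link: without h, tar archives the link itself; with h, it archives the file the link points to, which is what allows the snapshot's point-in-time contents to be captured. The following generic, self-contained illustration uses a temporary directory rather than a snapshot tree:

```shell
#!/bin/sh
# Show that tar's h option dereferences symbolic links, archiving the
# target file's contents instead of the link itself.
work=$(mktemp -d)
echo "snapshot data" > "$work/target.txt"
ln -s target.txt "$work/link.txt"

# With h, the archived link.txt is a regular file holding the target data.
tar -C "$work" -chf "$work/with-h.tar" link.txt
mkdir "$work/extract"
tar -C "$work/extract" -xf "$work/with-h.tar"
cat "$work/extract/link.txt"    # prints: snapshot data
rm -rf "$work"
```

Without the h option, the extracted link.txt would be a symbolic link pointing at target.txt, not a copy of the data — an important distinction when the archive is restored on a system where the link target does not exist.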
21 Creating block snapshots Overview The block snapshot feature allows you to capture a point-in-time copy of a file system for online backup purposes and to simplify recovery of files from accidental deletion. The snapshot replicates all file system entities at the time of capture and is managed exactly like any other file system. NOTE: You can use either the software method or the block method to take snapshots on a file system.
NOTE: By default, snapshots are read only. HP recommends that you do not allow writes to any snapshots. To manage block snapshots, select Filesystems in the upper Navigator and Block Snapshots in the lower Navigator. The Block Snapshots panel appears.
Planning for snapshots This section describes how to configure the cluster to take snapshots. Preparing the snapshot partition The block snapshot feature does not require any custom settings for the partition. However, HP recommends that you provide sufficient storage capacity to support the snapshot partition. NOTE: If the snapshot store is too small, the snapshot will eventually exceed the available space (unless you detect this and manually increase storage).
Automated block snapshots If you plan to take a snapshot of a file system on a regular basis, you can automate the snapshots. To do this, first define an automated snapshot scheme, and then apply the scheme to the file system and create a schedule. A snapshot scheme specifies the number of snapshots to keep and the number of snapshots to mount. You can create a snapshot scheme from either the Management Console or the CLI.
Once you create a snapshot scheme, return to the Create Snapshot dialog box to set up a schedule for it. Select the Schedule tab. Click Schedule this task, set the frequency of the snapshots, and schedule when they should occur. You can also set start and end dates for the schedule. When you click OK, the snapshot scheduler will begin taking snapshots according to the specified snapshot strategy and schedule.
Creating a snapshot scheme Under Snapshot Configuration, select an existing scheme or click New to create a new snapshot scheme. The Create Snapshot Scheme dialog box appears.
On the General tab, enter a name for the strategy and then specify the number of snapshots to keep and mount on a daily, weekly, and monthly basis. Keep in mind the maximums allowed for your array type. Daily means that one snapshot is kept per day for the specified number of days. For example, if you enter 6 as the daily count, the snapshot feature keeps 1 snapshot per day through the 6th day. On the 7th day, the oldest snapshot is deleted.
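The daily-count behavior described above — keep one snapshot per day, and on the day the count is exceeded delete the oldest — can be sketched as a pruning step over ISO-named snapshot directories. This is a simplified model of the retention rule for illustration, not the actual scheduler; the function name and snapshot names are invented.

```shell
#!/bin/sh
# Simplified sketch of daily snapshot retention: keep the newest $keep
# snapshots and report the rest as prunable. Relies on ISO 8601 names
# sorting chronologically.
prune_snapshots() {
    keep=$1; shift
    # Sort newest first; everything after the first $keep entries is pruned.
    printf '%s\n' "$@" | sort -r | tail -n +$((keep + 1))
}

# With a daily count of 6, the 7th day's run prunes the oldest snapshot:
prune_snapshots 6 \
    2014-01-01_daily 2014-01-02_daily 2014-01-03_daily \
    2014-01-04_daily 2014-01-05_daily 2014-01-06_daily 2014-01-07_daily
# prints: 2014-01-01_daily
```

The weekly and monthly counts follow the same pattern over their own name series, so the scheme as a whole keeps a rolling window at each granularity.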
Viewing automated snapshot schemes On the Management Console, you can view snapshot schemes on the Create Snapshot dialog box. Select Recurring as the Snapshot Type, and then select a snapshot scheme. A description of that scheme will be displayed. Deleting an automated snapshot scheme A snapshot scheme can be deleted only from the CLI. Use the following command: ibrix_vs_snap_strategy -d -n NAME Managing block snapshots This section describes how to manage individual snapshots.
The next window shows an SMB client accessing the snapshot file system .fs1_snap1. The original file system is mapped to drive X.
Troubleshooting block snapshots Snapshot reserve is full and the MSA2000 is deleting snapshot volumes When the snapshot reserve is full, the MSA2000 will delete snapshot volumes on the storage array, leaving the device entries on the file serving nodes. To correct this situation, take the following steps: 1. Stop I/O or any applications that are reading or writing to the snapshot file systems. 2. Log on to the active Fusion Manager. 3. Unmount all snapshot file systems. 4.
22 Using data tiering A data tier is a logical grouping of file system segments. After creating tiers containing the segments in the file system, you can use the data tiering migration process to move files from the segments in one tier to the segments in another tier. For example, you could create a primary data tier for SAS storage and another tier for SATA storage. You could then migrate specific data from the SAS tier to the lower-cost SATA tier.
Manage tier On the Manage Tier dialog box, do one of the following: • Create a tier: 1. Select Create New Tier. 2. Enter a name for the tier. 3. Select one or more segments to be included in the tier. • Modify an existing tier: 1. Select Use Existing Tier. 2. Select the tier and make any applicable changes to the segments included in the tier. Segments not currently included in a tier are identified as Unassigned.
Primary tier All new files are written to the primary tier. On the Primary Tier dialog box, select the tier that should receive these files. You can also select cluster servers and any StoreAll clients whose I/O operations should be redirected to the primary tier. Click Next to continue. Tiering policy The tiering policy consists of rules that specify the data to be migrated from one tier to another.
patterns (such as access and modification times), file size, and file type. Rules can be constrained to operate on files owned by specific users and groups and to specific paths. Logical operators can be used to combine directives. NOTE: LDAP and AD users cannot be selected from the menu under RuleSet. If you want to include users in a rule set, you can select only local users. Click + to specify the and/or operators and another rule. Click New to open another rule set.
To add a new tiering policy, click New. On the New Data Tiering Policy dialog box, select the source and destination tiers. Initially RuleSet1 is empty. Select a rule name, and the other fields will appear according to the rule you selected. Tiering schedule The Tiering Schedule dialog box lists all executed and running migration tasks. Click New to add a new schedule, click Edit to reschedule the selected task, or click Delete to delete the selected schedules.
When you click New to create a new schedule, the default frequency for migration tasks is displayed. For an existing schedule, the current frequency is displayed. To change the frequency, click Modify. Data tiering schedule When the Data Tiering Schedule Wizard dialog box opens, select the frequency, date, and time to run the task.
Viewing tier assignments and managing segments On the Management Console, select Filesystems from the Navigator and select a file system in the Filesystems panel. In the lower Navigator, select Segments. The Segments panel displays the segments in the file system and specifies whether they are assigned to a tier.
You can assign, reassign, or unassign segments from tiers using the Data Tiering Wizard. The Management Console also provides additional options to perform these tasks. • Assign or reassign a segment: On the Segments panel, select the segments you are assigning and click Assign to Tier. On the Assign to Tier dialog box, specify whether you are assigning the segment to an existing tier or a new tier and specify the tier.
When you click OK, the rule is checked for correct syntax. If the syntax is correct, the rule is saved and appears on the Data Tiering Rules panel. The following example shows the three rules created for the example. You can delete rules if necessary: select the rule on the Data Tiering Rules panel and click Delete.

Additional rule examples

Rule: name="*"
Description: Migrates all files from Tier2 to Tier1.

Rule: path=testdata2
Description: Migrates all files in the subtree beneath the path.
Rule Description path=testdata4 and name="*mpeg4" Migrates all mpeg4 files in the testdata4 subtree. Use the "and" operator to combine rules. gname=users and (path=testdata4 and name="*mpeg4") Migrates all mpeg4 files that are owned by users in the user group in the testdata4 subtree. For more examples and detailed information about creating rules, see “Writing tiering rules” (page 387). Running a migration task You can use the Data Tiering Wizard to schedule and run migration tasks.
Configuring tiers and migrating data using the CLI Use the following command to define the primary tier: ibrix_fs_tune -f FILESYSTEM -h SERVERS -t TIERNAME The following example specifies Tier1 as the primary tier: ibrix_fs_tune -f ifs1 -h ibrix1a,ibrix1b -t Tier1 This policy takes precedence over any other file allocation polices defined for the file system. NOTE: This example assumes users access the files over CIFS, NFS, FTP, or HTTP.
Rule attributes Each rule identifies file attributes to be matched. It also specifies the source tier to scan and the destination tier where files that meet the rule’s criteria will be moved and stored. Note the following: • Tiering rules are based on individual file attributes. • All rules are executed when the tiering policy is applied during execution of the ibrix_migrator command.
Rule keywords The following keywords can be used in rules. Keyword Description atime Access time, used in a rule as a fixed or relative time. ctime Change time, used in a rule as a fixed or relative time. mtime Modification time, used in a rule as a fixed or relative time gid An integer corresponding to a group ID. gname A string corresponding to a group name. Enclose the name string in double quotes. uid An integer corresponding to a user ID.
The following example uses the path keyword. It moves files greater than or equal to 5M that are under the directory /ifs2/tiering_test from TIER1 to TIER2: ibrix_migrator -A -f ifs2 -r "path = tiering_test and size >= 5M" -S TIER1 -D TIER2 Rules can be group- or user-based as well as time- or data-based. In the following example, files associated with two users are migrated to T2 with no consideration of time. The names are quoted strings.
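Size- and path-based rule attributes map conceptually onto familiar find predicates, which can be useful for previewing roughly which files a rule would affect before running a migration. The sketch below approximates the rule "path = tiering_test and size >= 5M". It is only an analogy: ibrix_migrator evaluates rules against file system metadata and tier scoping, not a find-style tree walk, and the semantics of find's -size rounding differ slightly from the rule's >= comparison.

```shell
#!/bin/sh
# Approximate preview of a size-based tiering rule with find(1):
# list files larger than 5 MiB under a directory, analogous to scoping
# a rule with "path = tiering_test" and "size >= 5M".
work=$(mktemp -d)
mkdir "$work/tiering_test"
truncate -s 6M "$work/tiering_test/big.dat"
truncate -s 1M "$work/tiering_test/small.dat"

# -size +5M matches files whose size, rounded up to MiB, exceeds 5.
find "$work/tiering_test" -type f -size +5M
# prints the path of big.dat only
rm -rf "$work"
```

A preview like this can help size a migration window, but the authoritative match set is whatever ibrix_migrator reports when the policy runs.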
23 Using file allocation This chapter describes how to configure and manage file allocation. Overview StoreAll software allocates new files and directories to segments according to the allocation policy and segment preferences that are in effect for a client. An allocation policy is an algorithm that determines the segments that are selected when clients write to a file system. File allocation policies File allocation policies are set per file system on each file serving node and on the StoreAll client.
Standard segment preferences and allocation policies

Name: ALL
Description: Prefer all of the segments available in the file system for new files and directories.
Comment: This is the default segment preference. It is suitable for most use cases.

Name: LOCAL
Description: Prefer the file serving node’s local segments for new files and directories.
Comment: No writes are routed between the file serving nodes in the cluster.
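The difference between the RANDOM and ROUNDROBIN policies comes down to selection order over the preferred segment list: RANDOM picks any preferred segment, while ROUNDROBIN cycles through them so consecutive allocations land on consecutive segments. The sketch below is an illustrative model of the round-robin selection order only, not the StoreAll allocator; the function and segment names are invented.

```shell
#!/bin/sh
# Illustrative model of ROUNDROBIN selection: allocation i (0-based)
# goes to segment (i mod N) in the preferred segment list.
rr_segment() {
    i=$1; shift
    n=$(( (i % $#) + 1 ))
    eval "echo \"\${$n}\""
}

for i in 0 1 2 3; do
    echo "allocation $i -> $(rr_segment $i ilv1 ilv2 ilv3)"
done
# allocations 0..2 go to ilv1, ilv2, ilv3; allocation 3 wraps back to ilv1
```

The wrap-around is why ROUNDROBIN spreads new files evenly across the preferred segments, whereas RANDOM achieves a similar spread only statistically over many allocations.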
A StoreAll client or StoreAll file serving node (referred to as “the host”) uses the following precedence rules to evaluate the file allocation settings that are in effect: • The host uses the default allocation policies and segment preferences: The RANDOM policy is applied, and a segment is chosen from among ALL the available segments.
Setting file and directory allocation policies from the CLI Allocation policy names are case sensitive and must be entered as uppercase letters (for example, RANDOM). Set a file allocation policy: ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -s LVNAMELIST -p POLICY [-S STARTSEGNUM] The following example sets the ROUNDROBIN policy for files only on the file system ifs1 on file serving node s1.hp.com, starting at segment ilv1: ibrix_fs_tune -f ifs1 -h s1.hp.
Both methods can be in effect at the same time. For example, you can prefer a segment for a user and then prefer a pool of segments for the clients on which the user will be working. On the Management Console, open the Modify Filesystem Properties dialog box and select the Segment Preferences tab. Creating a pool of preferred segments from the CLI A segment pool can consist of individually selected segments, all segments local to a file serving node, or all segments.
ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -S {SEGNUMLIST|ALL|LOCAL} Restoring the default segment preference The default is for all file system segments to be preferred. Use the following command to reset the file system policy to the default value on HOSTLIST: ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p -U Tuning allocation policy settings To optimize system performance, you can globally change the following allocation policy settings for a file system: • File allocation policy.
Restore the default file allocation policy: ibrix_fs_tune -f FSNAME {-h HOSTLIST|-g GROUPLIST} -p -U Listing allocation policies Use the following command to list the preferred segments (the -S option) or the allocation policy (the -P option) for the specified hosts, hostgroups, or file system. ibrix_fs_tune -l [-S] [-P] [-h HOSTLIST | -g GROUPLIST] [-f FSNAME] HOSTNAME mak01.hp.
24 Support and other resources Contacting HP For worldwide technical support information, see the HP support website: http://www.hp.
25 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
Glossary
ACE Access control entry.
ACL Access control list.
ADS Active Directory Service.
ALB Advanced load balancing.
BMC Baseboard Management Controller.
CIFS Common Internet File System. The protocol used in Windows environments for shared folders.
CLI Command-line interface. An interface comprising the commands used to control operating system responses.
CSR Customer self repair.
DAS Direct attach storage.
SELinux Security-Enhanced Linux.
SFU Microsoft Services for UNIX.
SID Secondary controller identifier number.
SMB Server Message Block. The protocol used in Windows environments for shared folders.
SNMP Simple Network Management Protocol.
TCP/IP Transmission Control Protocol/Internet Protocol.
UDP User Datagram Protocol.
UID Unit identification.
VACM SNMP View Access Control Model.
VC HP Virtual Connect.
VIF Virtual interface.
WINS Windows Internet Name Service.
WWN World Wide Name.
Index
Symbols
/etc/likewise/vhostmap file, 111
A
Active Directory
  configure, 69
  Linux static user mapping, 104
  synchronize with NTP server, 107
  use with LDAP ID mapping, 66
Antivirus
  configure, 346
  enable or disable, 348
  file exclusions, 350
  protocol scan settings, 349
  scans, start or schedule, 352
  scans, status, 354
  statistics, 355
  unavailable policy, 348
  virus definitions, 352
  virus scan engine, 346
    add, 347
    remove, 348
audit log, 299
authentication
  Active Directory, 64
  configure from CLI, 82
  configure f
documentation
  providing feedback on, 399
E
enabling EQWSI, 310
EQWSI .
  configure, 74
  remote LDAP server, configure, 64
  requirements, 64
LDAP ID mapping
  configure, 70
  use with Active Directory, 66
Linux static user mapping, 104
Linux StoreAll clients
  disk space information, 45
Local Groups authentication, 76
Local Users authentication, 77
logical volumes
  view information, 42
logs
  ibrcfrworker log file, 269
lost+found directory, 44
M
mapping SMB shares, 103
Microsoft Management Console
  manage SMB shares, 99
migration, files, 386
mounting, file system, 25
mountpoints
  view, 25
mt
  remove WORM attribute, 284
  set or remove legal hold, 283
  file states, 274
  hard links, 292
  import metadata, 298
  legal holds, 283
  metadata service, 294
  on-demand data validation scans, 286
  remote replication, use with, 292
  retention profile
    modify, 279
    view, 279
  reports, 289
  retained file, 274
    create, 279
    view retention information, 281
  retention period
    change, 283
    remove, 284
  retention profile, 274
  save audit metadata, 297
  schedule data validation scans, 285
  troubleshooting, 293
  validation scan errors, 288
  client application, 176
  creating shares, 176
  data retention, 177
  features, 175
  object mode, 203
  share types, 174
  uses, 175
subscription service, HP, 398
T
technical support
  HP, 398
  service locator website, 398
tiering, data
  assign segments, 383
  configure, 377
  migration task, 386
  tiering policy, 384
  tiering rules, 387
U
uninstalling
  Performance Statistics tool, 341
V
validation scans, 275
validation, data
  compare checksums, 288
  on-demand scans, 286
  resolve scan errors, 288
  schedule scans, 285
  stop or paus