5.6 HP StorageWorks X9320 Network Storage System Administrator Guide (AW542-96006, June 2011)

Events are written to an events table in the configuration database as they are generated. To
maintain the size of this table, HP recommends that you periodically remove the oldest events. See
“Removing events from the events database table” (page 48) for more information.
You can set up event notifications through email (see “Setting up email notification of cluster events”
(page 35)) or SNMP traps (see “Setting up SNMP notifications” (page 36)).
Viewing events
The dashboard on the management console GUI specifies the number of events that have occurred
in the last 24 hours. Click Events in the GUI Navigator to view a report of the events. You can also
view events that have been reported for specific file systems or servers.
To view events from the CLI, use the following commands:
•	View events by type (see the example following this list):
<installdirectory>/bin/ibrix_event -q [-e ALERT|WARN|INFO]
•	View generated events on a last-in, first-out basis:
<installdirectory>/bin/ibrix_event -l
•	View a designated number of events. The command displays the 100 most recent messages
by default. Use the -n EVENTS_COUNT option to increase or decrease the number of events
displayed.
<installdirectory>/bin/ibrix_event -l [-n EVENTS_COUNT]
The following command displays the 25 most recent events:
<installdirectory>/bin/ibrix_event -l -n 25
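For example, the following command lists only events logged at the ALERT level (a usage sketch
based on the -q and -e options shown above; the output format may vary by release):
<installdirectory>/bin/ibrix_event -q -e ALERT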
Removing events from the events database table
The ibrix_event -p command removes events from the events table, starting with the oldest
events. The default is to remove the oldest seven days of events. To change the number of days,
include the -o DAYS_COUNT option.
<installdirectory>/bin/ibrix_event -p [-o DAYS_COUNT]
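For example, the following command removes the oldest 30 days of events rather than the default
seven (a sketch using only the -o option described above):
<installdirectory>/bin/ibrix_event -p -o 30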
Monitoring cluster health
To monitor the functional health of file serving nodes and X9000 clients, execute the ibrix_health
command. This command checks host performance in several functional areas and provides either
a summary or a detailed report of the results.
Health checks
The ibrix_health command runs these health checks on file serving nodes:
•	Pings remote file serving nodes that share a network with the test hosts. Remote servers that
are pingable might not be connected to a test host because of a Linux or X9000 Software
issue. Remote servers that are not pingable might be down or have a network problem.
•	If test hosts are assigned to be network interface monitors, pings their monitored interfaces to
assess the health of the connection. (For information on network interface monitoring, see
“Using network interface monitoring” (page 29).)
•	Determines whether specified hosts can read their physical volumes.
The ibrix_health command runs this health check on both file serving nodes and X9000 clients:
•	Determines whether information maps on the tested hosts are consistent with the configuration
database.
If you include the -b option, the command also checks the health of standby servers (if configured).
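For example, a health check that also covers configured standby servers might be run as follows
(a sketch based only on the -b option described above; your release may require additional
options, such as a host list):
<installdirectory>/bin/ibrix_health -b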