Each fileset has a volume, called a catalog volume, that contains the catalog files for the fileset and other information about it. Within that catalog volume, the OSS name server for that fileset uses a Guardian subvolume whose name begins with ZX0. This name is a reserved subvolume name used only by an OSS name server.

In this subvolume, the OSS name server for the fileset accesses (and creates if necessary) the catalog files PXINODE, PXLINK, and PXLOG. Thereafter, whenever someone mounts or remounts the fileset, the OSS name server that manages that fileset uses these catalog files.
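As an illustration of the naming convention only, the following Python sketch composes the fully qualified Guardian names ($VOLUME.SUBVOL.FILE) of the three catalog files from a catalog volume and a ZX0 subvolume. The volume name $CATVOL, the subvolume suffix, and the helper function are hypothetical placeholders; the real subvolume name is chosen by the OSS name server and merely begins with ZX0.

# Illustrative sketch only; not output of any HP NonStop utility.
CATALOG_FILES = ("PXINODE", "PXLINK", "PXLOG")

def catalog_file_names(catalog_volume: str, zx0_subvol: str) -> list[str]:
    """Return fully qualified Guardian names of a fileset's catalog files."""
    return [f"{catalog_volume}.{zx0_subvol}.{name}" for name in CATALOG_FILES]

if __name__ == "__main__":
    # $CATVOL and ZX00000 are placeholders for a real catalog volume and
    # the ZX0... subvolume assigned by the OSS name server.
    for path in catalog_file_names("$CATVOL", "ZX00000"):
        print(path)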
Files in subvolumes whose names begin with ZYQ are subject to special access restrictions: from the Guardian environment, you cannot access these files or create new files in these subvolumes.
Storage Pools and Storage-Pool Files
A storage pool is the set of disk volumes on which the OSS data files of a fileset
reside. The storage-pool file is a Guardian EDIT file that determines which disk
volumes of the storage pool can be used for creating new OSS data files that are being
added to the fileset. The disk volumes listed in the storage-pool file can be viewed as
the creation pool, a subset of the entire storage pool used by the fileset. Figure 3-4 on
page 3-10 shows the difference between an OSS storage pool and the contents of the
storage-pool file for the fileset DATA5; the creation pool is enclosed in a rectangle to
indicate that it is the set of disk volumes identified in the storage-pool file.
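As a rough illustration of what a creation pool looks like to software, the following Python sketch extracts an ordered list of volume names from text shaped like a simple storage-pool file, assuming the straightforward case of one Guardian volume name per line. The exact syntax your RVU accepts in a storage-pool file is not reproduced here; the parsing helper and the sample volume names are examples only.

# Minimal sketch: read a creation pool from storage-pool-file text,
# assuming one disk-volume name (for example $DATA01) per line.
def read_creation_pool(storage_pool_text: str) -> list[str]:
    """Return the ordered list of volume names that form the creation pool."""
    volumes = []
    for line in storage_pool_text.splitlines():
        name = line.strip()
        if name:                      # skip blank lines
            volumes.append(name.upper())
    return volumes

sample = """
$DATA01
$DATA02
$DATA03
"""
print(read_creation_pool(sample))     # ['$DATA01', '$DATA02', '$DATA03']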
The OSS name server for a fileset uses the storage-pool file for that fileset to
determine where to create each new OSS data file. When that OSS name server
receives a file-creation request, the server reads the storage-pool file and creates the
file on the disk volume whose name appears in the storage-pool file following the
volume name used for the last request.
As each new file is created, the fileset’s OSS name server continues along the list of
volume names, selecting a new volume with each request. The OSS name server
ultimately wraps around to the beginning of the list in a round-robin fashion.
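The round-robin selection just described can be modeled in a few lines of Python. This sketch only illustrates the order in which volumes are chosen; it is not the OSS name server's implementation, and the volume names are invented.

# Sketch of round-robin placement: each new OSS data file goes on the volume
# that follows the one used for the previous request, wrapping to the start.
from itertools import cycle

creation_pool = ["$DATA01", "$DATA02", "$DATA03"]   # volumes from the storage-pool file
next_volume = cycle(creation_pool)                  # endless round-robin iterator

# Five file-creation requests land on $DATA01, $DATA02, $DATA03, $DATA01, $DATA02.
for request in range(5):
    print(f"request {request + 1} -> create file on {next(next_volume)}")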
Round-robin selection balances only the number of files placed on each volume, not the space those files consume. If the files that happen to land on one disk volume in the list are consistently large while those on another are small, one volume can fill up before the others; for example, if the files created on one volume average 100 MB while those on another average 1 MB, the first volume consumes space roughly 100 times faster. By allocating a large enough fileset, you can help avoid the problems produced by this unlikely file distribution.
An OSS name server cannot allocate files on more than 20 disk volumes for one fileset. However, each time a fileset is mounted, you can specify either:

•  A different set of disk volumes for the creation pool of the fileset (different content of the same storage-pool file)

•  A different storage-pool file for the fileset, containing a different set of disk volumes
As a result, a fileset can span many disk volumes. OSS files can exist on disk volumes
that are part of the fileset even though they are not in any active storage-pool file. The
storage pool for the fileset can be much larger than the creation pool defined by the
content of the storage-pool file.
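To make the distinction concrete, the following Python sketch contrasts the creation pool (the volumes named in the storage-pool file in effect for the current mount) with the larger storage pool (the volumes that hold OSS data files for the fileset). The volume names and the mount history are invented for illustration and do not come from any system utility.

# Creation pools specified across three successive mounts of the fileset.
mount_history = [
    ["$DATA01", "$DATA02"],
    ["$DATA03", "$DATA04"],
    ["$DATA05"],
]

creation_pool = set(mount_history[-1])              # volumes usable for new files now
storage_pool = set().union(*mount_history)          # volumes that may hold existing files

print("creation pool:", sorted(creation_pool))      # ['$DATA05']
print("storage pool: ", sorted(storage_pool))       # includes volumes that are no longer
                                                    # in any active storage-pool file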