Figure 10. Locking with concurrent I/O (cio)
To enable concurrent I/O, simply mount the file system with the cio
mount option, for example:
# mount -F vxfs -o cio,delaylog /dev/vgora5/lvol1 /oracledb/s05
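To make the option persistent across reboots, the cio option can also be added to the file
system's /etc/fstab entry. A sketch using the same device and mount point (the trailing fields
are the usual backup-frequency and fsck-pass values):
/dev/vgora5/lvol1 /oracledb/s05 vxfs delaylog,cio 0 2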
Concurrent I/O was introduced with VxFS 3.5, but it initially required a separate license.
Beginning with VxFS 5.0.1 on HP-UX 11i v3, the concurrent I/O feature of VxFS is available
with the OnlineJFS license.
Concurrent I/O is indeed a great solution for alleviating VxFS inode lock contention, but it has
some major caveats that anyone planning to use it should be aware of.
Concurrent I/O converts read and write operations to direct I/O
Some applications benefit greatly from cached I/O, relying on cache hits to reduce the
amount of physical I/O or on VxFS read-ahead to prefetch data and reduce read times. Concurrent
I/O converts most read and write operations to direct I/O, which bypasses the buffer/file cache,
so these applications lose those benefits.
If a file system is mounted with the cio mount option, then the mincache=direct and
convosync=direct mount options are implied. Specifying mincache=direct and convosync=direct
explicitly has no additional impact when the cio option is also used.
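For example, the following mount (reusing the device and mount point from the earlier example)
should behave the same as the cio-only mount shown above:
# mount -F vxfs -o cio,mincache=direct,convosync=direct,delaylog /dev/vgora5/lvol1 /oracledb/s05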
Since concurrent I/O converts read and write operations to direct I/O, all alignment constraints for
direct I/O also apply to concurrent I/O.
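As a rough illustration (the file name and sizes are made up), overwriting an existing, fully
allocated file with dd shows the difference: a block size that is a multiple of 1 KB issues
aligned requests, while an odd block size does not:
# dd if=/dev/zero of=/oracledb/s05/testfile bs=1024k count=16 conv=notrunc
# dd if=/dev/zero of=/oracledb/s05/testfile bs=1000 count=16384 conv=notrunc
The first command issues 1 MB writes on 1 KB boundaries; the second issues 1000-byte writes,
which fall into the exclusive-lock path described in the next subsection.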
Certain operations still take exclusive locks
Some operations still need to take an exclusive lock when performing I/O and can negate the benefit
of concurrent I/O. The following operations still need to obtain an exclusive lock (a sketch of how
to avoid several of them follows the list):
1. fsync()
2. writes that span multiple file extents
3. writes when the request is not aligned on a 1 KB boundary
4. extending writes
5. allocating writes to “holes” in a sparse file
6. writes to Zero Filled On Demand (ZFOD) extents
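Several of these cases can be avoided by preallocating and initializing datafiles before the
performance-critical workload begins, so that steady-state writes are neither extending writes nor
writes into holes or ZFOD extents. A sketch using dd (the file name and size are illustrative):
# dd if=/dev/zero of=/oracledb/s05/datafile bs=1024k count=1024
This creates a 1 GB file whose extents are fully allocated and written, so later 1 KB-aligned
rewrites of existing blocks can proceed without taking the exclusive lock.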