Note that buffer merging was not initially implemented on IA-64 systems with VxFS 3.5 on 11i v2, so
using the default max_buf_data_size of 8 KB resulted in a maximum physical I/O size of 8 KB.
Buffer merging was introduced in the 11i v2 0409 release in the fall of 2004.
Page sizes on HP-UX 11i v3 and later
On HP-UX 11i v3 and later, VxFS performs cached I/O through the Unified File Cache (UFC). The UFC is
page based, and the max_buf_data_size tunable has no effect on 11i v3. The default page size is 4
KB. However, when larger reads or read ahead are performed, the UFC attempts to use contiguous
4 KB pages. For example, when performing sequential reads, the read_pref_io value is used to
allocate the consecutive 4 KB pages. Note that the “buffer” size is no longer limited to 8 KB or 64 KB
with the UFC, eliminating the buffer merging overhead caused by chaining a number of 8 KB buffers.
Also, when doing small random I/O, VxFS can issue 4 KB page requests instead of 8 KB buffer
requests. A page request has the advantage of returning less data, thus reducing the transfer size.
However, if the application still needs to read the other 4 KB, a second I/O request is required,
which decreases performance. So small random I/O performance may be worse on
11i v3 if cache hits would have occurred had 8 KB of data been read instead of 4 KB. The only
solution is to increase the base_pagesize(5) kernel tunable from the default value of 4 KB to 8 KB or
16 KB. Careful testing should be performed when changing base_pagesize from the default value,
since some applications assume the page size will be the default 4 KB.
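
Applications that hard-code a 4 KB page size can misbehave once base_pagesize is raised. As a
minimal sketch (not from the original document), the portable way for an application to discover the
page size at run time is sysconf(); the program below simply prints the current page size instead of
assuming 4 KB:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Query the kernel's base page size at run time instead of
     * assuming the historical 4 KB default. */
    long pagesize = sysconf(_SC_PAGE_SIZE);

    if (pagesize == -1) {
        perror("sysconf(_SC_PAGE_SIZE)");
        return 1;
    }

    printf("base page size: %ld bytes\n", pagesize);
    return 0;
}

On a system tuned with an 8 KB or 16 KB base_pagesize, this would report 8192 or 16384 bytes
respectively.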
Direct I/O
If OnlineJFS is installed and licensed, direct I/O can be enabled in several ways:
Mount option “-o mincache=direct”
Mount option “-o convosync=direct”
Use the VX_DIRECT cache advisory of the VX_SETCACHE ioctl() system call (see the sketch
following this list)
Discovered direct I/O
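
The following is a minimal sketch of the ioctl() approach, enabling the VX_DIRECT advisory on an
open file descriptor. The include path and the file name are assumptions for illustration; the header
location can vary between VxFS releases, so consult the vxfsio(7) manual page on the target system:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/fs/vx_ioctl.h>   /* header path may differ; see vxfsio(7) */

int main(void)
{
    int fd = open("/vxfs_mnt/datafile", O_RDWR);   /* hypothetical path */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Request direct I/O for all subsequent reads and writes on this
     * file descriptor; the data bypasses the file cache. */
    if (ioctl(fd, VX_SETCACHE, VX_DIRECT) == -1) {
        perror("ioctl(VX_SETCACHE, VX_DIRECT)");
        close(fd);
        return 1;
    }

    /* ... perform reads and writes here ... */

    close(fd);
    return 0;
}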
Direct I/O has several advantages:
Data accessed only once does not benefit from being cached
Direct I/O data does not disturb other data in the cache
Data integrity is enhanced since disk I/O must complete before the read or write system call
completes (I/O is synchronous)
Direct I/O can perform larger physical I/O than buffered or cached I/O, which reduces the total
number of I/Os for large read or write operations
However, direct I/O also has its disadvantages:
Direct I/O reads cannot benefit from VxFS read ahead algorithms
All physical I/O is synchronous, thus each write must complete the physical I/O before the system
call returns
Direct I/O performance degrades when the I/O request is not properly aligned (see the alignment
sketch after this list)
Mixing buffered/cached I/O and direct I/O can degrade performance
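
Alignment matters because an unaligned direct I/O request may be split or handled through a slower
path. The sketch below, with a hypothetical file name and an assumed 8 KB transfer size, aligns both
the buffer and the file offset; posix_memalign() is assumed to be available (memalign(3C) is an
alternative on older releases):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define XFER_SIZE 8192   /* assumed transfer size, a multiple of the fs block size */

int main(void)
{
    char *buf;
    ssize_t n;

    /* Align the buffer to the transfer size. */
    int rc = posix_memalign((void **)&buf, XFER_SIZE, XFER_SIZE);
    if (rc != 0) {
        fprintf(stderr, "posix_memalign failed: %d\n", rc);
        return 1;
    }

    /* Hypothetical file on a file system mounted with -o mincache=direct. */
    int fd = open("/vxfs_mnt/datafile", O_RDONLY);
    if (fd == -1) {
        perror("open");
        free(buf);
        return 1;
    }

    /* Aligned offset (a multiple of XFER_SIZE) and aligned length. */
    n = pread(fd, buf, XFER_SIZE, (off_t)XFER_SIZE * 4);
    if (n == -1)
        perror("pread");

    close(fd);
    free(buf);
    return 0;
}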
Direct I/O works best for large I/O requests on data that is accessed only once, and when data
integrity for writes is more important than performance.