Best Practices for Tuning Microsoft SQL Server on the HP ProLiant DL980

Fibre Channel transfer speeds
Fibre Channel transfer speeds are as follows:
2 Gbps Fibre Channel transfer rate = ~180 MB/sec
4 Gbps transfer rate = ~350 MB/sec
8 Gbps transfer rate = ~680 MB/sec
Also be aware that at small I/O sizes (such as 8 KB), many Fibre Channel host bus adapters (HBAs) reach an I/O-rate limit
that caps their throughput below the Fibre Channel link bandwidth.
SCSI transfer speeds
SCSI transfer speeds are as follows:
SAS 3 Gbps transfer rate = ~300 MB/sec
SAS 6 Gbps transfer rate = ~600 MB/sec
PCIe slot transfer speeds
PCIe slot transfer speeds are as follows: (x# indicates the number of lanes)
Gen1 x4 Slot #1 = 800 MB/sec
Gen2 x4 Slots (#4, 7, 8, 10, 14) = 1.6 GB/sec
Gen2 x8 Slots (#2, 3, 5, 6, 9, 11, 12, 13, 15, 16) = 3.2 GB/sec
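The slot figures above reduce to a simple per-lane rule of thumb: roughly 200 MB/sec per lane for Gen1 and 400 MB/sec per lane for Gen2 (effective, after encoding overhead). A minimal sketch of that arithmetic, using the paper's own figures:

```python
# Effective per-lane PCIe bandwidth implied by the slot figures above
# (approximate, after encoding overhead).
PER_LANE_MB_S = {"gen1": 200, "gen2": 400}

def slot_bandwidth_mb_s(gen, lanes):
    """Approximate effective bandwidth of a PCIe slot in MB/sec."""
    return PER_LANE_MB_S[gen] * lanes

print(slot_bandwidth_mb_s("gen1", 4))  # 800 (Slot #1)
print(slot_bandwidth_mb_s("gen2", 4))  # 1600 (x4 slots)
print(slot_bandwidth_mb_s("gen2", 8))  # 3200 (x8 slots)
```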
Adequate storage devices (HDDs or SSDs) are essential to sustaining the high I/O rates required by a demanding SQL Server
application workload.
You must also consider the characteristics of the workload. Online Transaction Processing (OLTP) workloads typically
perform small, random I/O operations, while Decision Support (DS) workloads (large queries) perform fewer but larger I/O
operations. With OLTP, you are more concerned with the I/O rate (IOPS) than with bandwidth; for DS, the opposite is true.
Obviously, every application is different, and the I/O loads imposed on the system by those applications are unique.
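To make the rate-versus-bandwidth tradeoff concrete, the sketch below estimates which limit a workload hits first. The link bandwidth is the 8 Gbps figure from the table above; the HBA I/O-rate ceiling and the workload numbers are hypothetical examples, not measured values:

```python
# Does a workload saturate the link's bandwidth or the HBA's I/O-rate limit
# first? Substitute your own measured figures; these are illustrative.
LINK_BANDWIDTH_MB_S = 680    # ~8 Gbps Fibre Channel, effective (table above)
HBA_IOPS_LIMIT = 150_000     # hypothetical HBA I/O-rate ceiling

def bottleneck(iops, io_size_kb):
    """Classify a workload of `iops` operations of `io_size_kb` each."""
    throughput_mb_s = iops * io_size_kb / 1024
    if iops > HBA_IOPS_LIMIT:
        return "I/O-rate limited"
    if throughput_mb_s > LINK_BANDWIDTH_MB_S:
        return "bandwidth limited"
    return "within limits"

# OLTP-style: many small random I/Os -- the I/O rate is the constraint
print(bottleneck(iops=200_000, io_size_kb=8))   # I/O-rate limited
# DS-style: fewer, larger I/Os -- bandwidth is the constraint
print(bottleneck(iops=3_000, io_size_kb=512))   # bandwidth limited
```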
The Windows Performance Monitor utility (perfmon.exe) provides basic data about I/O rates and throughput. Use this
utility to monitor running applications and obtain the information necessary to design your I/O configuration.
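As a sketch of turning that monitoring data into sizing numbers, the snippet below summarizes a CSV exported by Performance Monitor (or the typeperf command). The counter paths \PhysicalDisk(_Total)\Disk Transfers/sec and \PhysicalDisk(_Total)\Disk Bytes/sec are standard Windows counters; the host name and sample values are hypothetical:

```python
import csv
import io
import statistics

# Hypothetical excerpt of a Performance Monitor CSV export: first column is
# the timestamp, remaining columns are the sampled counters.
SAMPLE = r'''"(PDH-CSV 4.0)","\\SQLHOST\PhysicalDisk(_Total)\Disk Transfers/sec","\\SQLHOST\PhysicalDisk(_Total)\Disk Bytes/sec"
"04/01/2012 10:00:00","5200","42598400"
"04/01/2012 10:00:15","6100","49971200"
"04/01/2012 10:00:30","5800","47513600"
'''

def summarize(csv_text):
    """Average IOPS, MB/sec, and implied mean I/O size from a perfmon CSV."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    iops = [float(r[1]) for r in rows[1:]]
    bytes_s = [float(r[2]) for r in rows[1:]]
    avg_iops = statistics.mean(iops)
    avg_mb_s = statistics.mean(bytes_s) / (1024 * 1024)
    avg_io_kb = statistics.mean(bytes_s) / avg_iops / 1024
    return avg_iops, avg_mb_s, avg_io_kb

iops, mb_s, io_kb = summarize(SAMPLE)
print(f"avg {iops:.0f} IOPS, {mb_s:.1f} MB/sec, mean I/O size {io_kb:.1f} KB")
```

An 8 KB mean I/O size at a high rate, as here, points to an OLTP-style configuration where IOPS, not bandwidth, should drive the design.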
In addition to the I/O path, you must also configure the storage system itself; that configuration is beyond the scope of
this document. By keeping the preceding rules of thumb in mind, however, you can configure the I/O subsystem for optimum
system performance and gain valuable insight into your storage requirements.
Use the recommended Storport driver with Fibre Channel host bus adapters
Although this may seem obvious, you must use the driver recommended for your storage environment to obtain the best
performance with Fibre Channel HBAs. Vendors generally qualify an optimized set of compatible firmware and driver
versions. Depending on the system layout, it is often appropriate to use switch zoning or other methods of
segmentation. Multiple paths can improve data availability and eliminate single points of failure in SAN components, but
they also require multi-path software components running on Windows. Finally, storage vendors often develop their
own Device Specific Modules (DSMs); use these DSMs whenever possible because they are optimized for your
storage platform.
Verify maximum queue depth is greater than or equal to the number of spindles
With Emulex HBAs, use the OneCommand Manager utility to set the queue depth per target or per Logical Unit Number (LUN).
The default maximum queue depth (QueueDepth) is 32 decimal (0x20). Use OneCommand Manager to change this value to a
number greater than or equal to the number of spindles seen by that HBA.
With QLogic HBAs, use the SANsurfer utility to change the QLogic firmware BIOS setting for Execution Throttle in NVRAM to
a value equal to or greater than the number of physical drives seen by that HBA (default = 16 decimal, or 0x10).
Be aware that these same guidelines can apply to Fibre Channel RAID controllers as well. Many RAID controllers have a
configuration option to return a busy status when a queue depth limit is exceeded. Verify that these options are
configured appropriately, based on the number of disks in the LUN.
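The rule in this section is simple arithmetic; a minimal sketch follows, with the defaults taken from the text above and illustrative spindle counts (the function name is hypothetical, not a vendor API):

```python
# Rule of thumb from this section: the per-LUN queue depth should be at least
# the number of spindles behind the LUN, so every drive can hold an
# outstanding request. Spindle counts here are illustrative.
DEFAULT_QUEUE_DEPTH = 32  # Emulex default (0x20); the QLogic default is 16 (0x10)

def recommended_queue_depth(spindles_per_lun, default=DEFAULT_QUEUE_DEPTH):
    """Raise the queue depth only when the LUN has more spindles than the default."""
    return max(default, spindles_per_lun)

for spindles in (8, 32, 56):
    print(spindles, "->", recommended_queue_depth(spindles))  # 32, 32, 56
```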