Best Practices for Tuning Microsoft SQL Server on the HP ProLiant DL980

powercfg -setacvalueindex scheme_balanced sub_processor 5d76a2ca-e8c0-402f-a133-2158492d58ad 1
powercfg -setactive scheme_current
Note
To revert any power configuration command back to its previous state, rerun the command with a 0 (zero) at the end
instead of a 1. For example:
powercfg -setacvalueindex scheme_min sub_processor 5d76a2ca-e8c0-402f-a133-2158492d58ad 0
Storage recommendations
Storage is an important factor in SQL Server workloads, and appropriate sizing is essential. Sizing, in turn, requires a good understanding of an application’s I/O characteristics, such as the frequency of reads and writes and the amount of data typically moved in those operations. Specific guidelines for calculating the optimal storage size for your particular application are beyond the scope of this document. Instead, this document provides some general, workload-dependent recommendations.
Provide sufficient I/O and storage to run SQL Server
A common mistake in setting up a SQL Server system is to under-provision the I/O links to storage. For example, although a modern storage array can function with a single fibre link to the system (an extreme case), this clearly does not provide adequate performance. Therefore, it is important to add enough Fibre Channel or SAS links to handle the bandwidth demanded by the I/O load.
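As a rough illustration of this sizing step, the sketch below estimates how many Fibre Channel links a measured peak throughput requires. The 8Gb link speed and the 90% usable-bandwidth factor are illustrative assumptions, not HP guidance; substitute your own measured figures.

```python
import math

def links_needed(peak_mb_s: float, link_gb_s: float = 8.0,
                 efficiency: float = 0.9) -> int:
    """Estimate the number of FC links needed to carry peak_mb_s of I/O.

    An 8Gb FC link moves roughly 800 MB/s of payload; the efficiency
    factor (assumed here) discounts protocol overhead and headroom.
    """
    usable_mb_s = link_gb_s * 100 * efficiency  # 8Gb FC ~ 800 MB/s raw
    return math.ceil(peak_mb_s / usable_mb_s)

# A workload peaking at 2,000 MB/s on 8Gb links:
print(links_needed(2000))
```

The same arithmetic applies to SAS links; only the per-link bandwidth changes.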
Keeping in mind that different storage components have specific characteristics affecting their performance, you can use
the following rules of thumb to size your I/O and storage configuration:
For physical disks/hard disk drives (HDDs)
I/O rates and service times should be monitored periodically to ensure the storage farm is not being saturated. When setting up the system, run an I/O load tool to fully load the farm as expected in production. Make note of the service times, maximum read and write rates, and so on. This data comprises your Unloaded Storage Metric.
When the service times grow to two to three times your Unloaded Storage Metric, that is a clear indication your storage farm is being overloaded and system I/O performance is starting to degrade, probably rapidly. For example, if the storage farm consists of 2.5-inch 15K RPM SAS drives, and a 60/40 R/W mix is applied to the drives when no other work is present, you might see Read service times of 4ms and Write times of 7ms. When these numbers grow to 8-12ms for Read and 14-21ms for Write, the system is nearing saturation and the storage farm should be reconfigured to reclaim performance. This can be accomplished by adding spindles and spreading out the database, or by moving the storage to a flash technology such as SSDs.
Note
The numbers given here are examples only. Actual numbers seen may be higher or lower in the unloaded state.
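The rule of thumb above can be sketched as a simple baseline comparison. The 2x and 3x thresholds follow the text; the function names and sample values are illustrative, not part of any HP tooling.

```python
def storage_state(current_ms: float, baseline_ms: float) -> str:
    """Classify storage health by comparing a measured service time
    against the Unloaded Storage Metric baseline for the same operation.
    """
    ratio = current_ms / baseline_ms
    if ratio >= 3.0:
        return "saturated"            # reconfigure: add spindles or move to flash
    if ratio >= 2.0:
        return "nearing saturation"   # watch closely; degradation is likely
    return "ok"

# Using the unloaded baseline from the example: 4ms Read, 7ms Write.
print(storage_state(12, 4))   # Reads now at 12ms
print(storage_state(14, 7))   # Writes now at 14ms
print(storage_state(5, 4))    # still within normal range
```

In practice the current values would come from a monitoring counter such as average disk seconds per read/write, sampled periodically as the text recommends.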
For solid state drives (SSDs)
SSDs, HP I/O Accelerators, and Violin Memory Storage arrays can provide superb random I/O throughput by eliminating the
rotational latency of spinning media. Standard 15K RPM disks can provide 100 to 200 IOPS per disk. SSDs can provide 50K
IOPS per disk. HP I/O Accelerators can provide 100K IOPS per card. And Violin Memory Arrays can provide 220K IOPS per
array.
You should consider implementing a “tier-0” of one or more of these non-spinning storage technologies to enhance performance for your highest-volume tables or files. But all of these media types can be over-subscribed, with response times increasing exponentially as a result. So it is important to choose the correct technology for each tier of storage, balancing performance and cost.
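The per-device IOPS figures quoted above make the tiering trade-off easy to estimate. The sketch below uses the document's round numbers (with 150 IOPS assumed as the midpoint of the 100-200 HDD range) to count how many devices of each type a target random-I/O rate would need; real throughput varies with workload and should be measured.

```python
import math

# Per-device random IOPS, taken from the figures quoted in the text.
DEVICE_IOPS = {
    "15K RPM HDD": 150,             # midpoint of the 100-200 IOPS range
    "SSD": 50_000,
    "HP I/O Accelerator": 100_000,
    "Violin Memory Array": 220_000,
}

def devices_needed(target_iops: int) -> dict:
    """Return the device count of each type needed to serve target_iops."""
    return {dev: math.ceil(target_iops / iops)
            for dev, iops in DEVICE_IOPS.items()}

for dev, count in devices_needed(300_000).items():
    print(f"{dev}: {count}")
```

The output makes the cost/performance balance concrete: a rate that would take thousands of spindles fits on a handful of flash devices, which is why a small tier-0 for hot tables is often the economical choice.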