interaction between VEB and any EVB technology. Viewed in that context, any EVB implemented by the hypervisor is
proprietary. This view is skewed to advance competitive proposals and does not represent the facts:
• Support for EVB (IEEE 802.1Qbg) is strong, and the standard continues to progress through IEEE reviews toward ratification.
• VEPA, as now proposed, is already compatible with many existing NIC and switch devices.
• Multi-channel capabilities will allow you to combine VEPA with VEB, providing additional choices for networking VMs.
• EVB is backward and forward compatible with existing technologies.
HP chooses to support industry-standard networking protocols and builds all of its technology, including HP Virtual Connect,
on a foundation of industry standards. Building on industry-standard technology preserves your investment and
maximizes the lifecycle of future infrastructure purchases.
Additional resources

Edge Virtual Bridge Proposal, Version 0, Rev 0.1:
http://www.ieee802.org/1/files/public/docs2010/bg-joint-evb-0410v1.pdf

Server-to-network edge technologies: converged networks and virtual I/O:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02044591/c02044591.pdf
Optimizing SSD performance with HP Smart Array controllers
HP Smart Array controllers support three classes of Enterprise SATA solid-state drives (SSDs): value, mainstream, and
performance. Each class meets the requirements of different application environments. SSDs excel at random read
operations, delivering rates that can be more than 100 times those of 15K Midline SAS HDDs. This is because SSDs have
none of the seek time or rotational latency associated with HDDs. SSD latency is a function of the memory access and
transfer times combined with controller overhead.
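To put rough numbers on this (the seek and access figures below are typical values assumed for illustration, not
measurements from this article): for a 15,000 rpm HDD, the mechanical delays alone are

    t_{\mathrm{rot}} = \frac{1}{2} \times \frac{60\ \mathrm{s}}{15{,}000} = 2\ \mathrm{ms},
    \qquad
    t_{\mathrm{read}} \approx t_{\mathrm{seek}} + t_{\mathrm{rot}} \approx 3.5\ \mathrm{ms} + 2\ \mathrm{ms} = 5.5\ \mathrm{ms}

or roughly 180 random reads per second per drive, whereas an SSD that completes a random read in well under 0.1 ms can
sustain tens of thousands per second, consistent with the "over 100 times" figure.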
When using SSDs with a Smart Array controller, always use a cache module, called an array accelerator in the HP
Array Configuration Utility (ACU). Current-generation Smart Array controllers support 256 MiB, 512 MiB, and 1 GiB array
accelerators. Use the 512 MiB or 1 GiB array accelerator: they are 72 bits wide (64 data bits + 8 parity bits), giving them
double the bandwidth of the 256 MiB array accelerator. Never run SSDs in a Zero Memory RAID configuration, that is,
without an array accelerator.
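As an illustrative check, a short sketch that confirms an array accelerator is installed before you place SSD logical
drives on a controller, using hpacucli (the command-line counterpart of the ACU). The slot number is an assumption,
and the exact field names in the output vary by controller model and firmware:

    import subprocess

    # Query controller details through hpacucli. Slot 0 is an assumption;
    # list controllers first with "hpacucli ctrl all show" to find yours.
    detail = subprocess.run(
        ["hpacucli", "ctrl", "slot=0", "show", "detail"],
        capture_output=True, text=True, check=True,
    ).stdout

    # "Cache Board Present" and "Total Cache Size" appear in the detail
    # output when an array accelerator is installed (wording varies by model).
    if "Cache Board Present: True" not in detail:
        print("No array accelerator detected: do not run SSDs on this controller")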
To optimize SSD performance, change the array accelerator's default Read/Write ratio in the ACU to
0% Read/100% Write. This setting will likely improve the host's write performance while decreasing read performance
only marginally. Do not use a ratio of 100% Read/0% Write, even if you know your application is read-intensive; that
setting is not the same as disabling the array accelerator.
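With hpacucli, the same ratio change looks like the following sketch. Slot 0 is again an assumption, and the
controller accepts only certain splits (for example 0/100, 25/75, 50/50, or 100/0):

    import subprocess

    # Set the array accelerator to 0% read / 100% write cache,
    # as recommended above for SSD logical drives.
    subprocess.run(
        ["hpacucli", "ctrl", "slot=0", "modify", "cacheratio=0/100"],
        check=True,
    )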
You can disable the array accelerator for read-intensive applications. Disabling the cache for specific logical drives
reserves it for the other logical drives on the array. Use this feature if you want those other logical drives to operate at
the maximum possible performance (for example, if they contain database information). Do not, however, disable the
array accelerator for logical volumes that use HDDs. Many benchmarks may show better performance with the array
accelerator disabled, but you should test your actual application with it enabled and disabled to verify which
configuration performs best.
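A per-logical-drive toggle for testing a read-intensive workload both ways might look like this sketch. The slot and
logical drive numbers are illustrative; list yours with "hpacucli ctrl slot=0 logicaldrive all show":

    import subprocess

    # Disable the array accelerator for logical drive 1 only; the other
    # logical drives on the controller keep (and effectively gain) the cache.
    subprocess.run(
        ["hpacucli", "ctrl", "slot=0", "logicaldrive", "1",
         "modify", "arrayaccelerator=disable"],
        check=True,
    )

    # After benchmarking, re-enable it to measure the other configuration.
    subprocess.run(
        ["hpacucli", "ctrl", "slot=0", "logicaldrive", "1",
         "modify", "arrayaccelerator=enable"],
        check=True,
    )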
Additional resources

Configuring Arrays on HP Smart Array Controllers Reference Guide:
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c02289065/c02289065.pdf

HP Smart Array controller technology:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00687518/c00687518.pdf