
2.2 Planning Node Hardware Configurations
2.2.2.3 Network Hardware Recommendations
Use separate networks (and, ideally, separate network adapters) for internal and public traffic. Doing so prevents public traffic from affecting cluster I/O performance and reduces the risk of denial-of-service attacks from the outside.
Network latency dramatically reduces cluster performance. Use quality network equipment with low-latency links. Do not use consumer-grade network switches.
Do not use desktop network adapters such as the Intel EXPI9301CTBLK or Realtek 8129, as they are not designed for heavy loads and may not support full-duplex links. Also, use non-blocking Ethernet switches.
To avoid intrusions, Acronis Storage should be on a dedicated internal network inaccessible from outside.
Use one 1 Gbit/s link for every two HDDs on the node (rounded up). Even for nodes with only one or two HDDs, two bonded network interfaces are still recommended for high network availability. The reason for this recommendation is that a 1 Gbit/s Ethernet link can deliver 110-120 MB/s of throughput, which is close to the sequential I/O performance of a single disk. Since several disks on a server can deliver higher total throughput than a single 1 Gbit/s Ethernet link, networking may otherwise become a bottleneck.
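The sizing rule above can be expressed as a short calculation. The following Python sketch is purely illustrative and not part of the product; the function name and the explicit two-link minimum (reflecting the bonded-interface recommendation) are assumptions based on the guidance above.

    import math

    def recommended_1gbit_links(hdd_count: int) -> int:
        """Suggest the number of 1 Gbit/s links for a node: one link per
        two HDDs (rounded up), but never fewer than two bonded interfaces."""
        links_for_throughput = math.ceil(hdd_count / 2)
        return max(links_for_throughput, 2)

    print(recommended_1gbit_links(7))  # 4 links for a node with 7 HDDs
    print(recommended_1gbit_links(1))  # 2 links (bonded pair for availability)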
For maximum sequential I/O performance, use one 1 Gbit/s link per hard drive, or one 10 Gbit/s link per node. Even though I/O operations are most often random in real-life scenarios, sequential I/O is important in backup scenarios.
For maximum overall performance, use one 10 Gbit/s link per node (or two bonded for high network
availability).
It is not recommended to configure 1 Gbit/s network adapters to use non-default MTUs (e.g., 9000-byte
jumbo frames). Such settings require additional configuration of switches and often lead to human error.
10 Gbit/s network adapters, on the other hand, need to be configured to use jumbo frames to achieve
full performance.
2.2.3 Hardware and Software Limitations
Hardware limitations:
Each physical server must have at least three disks: one for the operating system, one for metadata, and one for storage. Servers with fewer disks cannot be added to clusters.
Five servers are required to test all the features of the product.