User Guide

© 2021 IBM Corporation
IBM Power systems support various network adapters with different speeds and numbers of ports.
If you are using the same network adapters as on your previous system, start with the same tuning on the new system.
Most Ethernet adapters support multiple receive and transmit queues, and the queue buffer sizes can be increased to raise the maximum packet count.
The default queue settings differ between adapters and may not be optimal for achieving maximum message rates in a client-server model.
Using additional queues increases the CPU usage of the system, so use the optimal queue settings for your specific workload.
Changing queue size in AIX
ifconfig enX down detach
chdev -l entX -a rx_max_pkts=<value> -a tx_max_pkts=<value>
chdev -l enX -a state=up
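Before changing the queue sizes, it can help to check the current attribute values and their permitted ranges. A minimal sketch, assuming adapter ent0 and interface en0 (placeholder device names; find yours with lsdev):

```shell
# Inspect current attribute values and their legal ranges on AIX
# before retuning (ent0 and en0 are placeholder device names):
#   lsattr -El ent0 -a rx_max_pkts -a tx_max_pkts   # current values
#   lsattr -Rl ent0 -a rx_max_pkts                  # permitted values/range
# Then apply the change while the interface is detached:
#   ifconfig en0 down detach
#   chdev -l ent0 -a rx_max_pkts=<value> -a tx_max_pkts=<value>
#   chdev -l en0 -a state=up
```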
Changing queue settings in AIX
To change the number of receive and transmit queues in AIX:
ifconfig enX down detach
chdev -l entX -a queues_rx=<value> -a queues_tx=<value>
chdev -l enX -a state=up
Changing queue settings in Linux
To change the number of queues in Linux:
ethtool -L ethX combined <value>
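To judge whether additional queues are worthwhile, you can check how busy the existing queues are from their interrupt counts. A minimal sketch using simplified, made-up sample lines in the shape of /proc/interrupts output (on a live system, pipe grep ethX /proc/interrupts in instead; ethX and the counts are illustrative):

```shell
# Sum the per-CPU interrupt counts for each queue from simplified sample
# /proc/interrupts lines; a heavily used queue shows a large total.
sample='128: 1000 2000 PCI-MSI ethX-rx-0
129: 500 300 PCI-MSI ethX-rx-1'
printf '%s\n' "$sample" |
  awk '{sum=0; for(i=2;i<NF-1;i++) sum+=$i; print $NF, sum}'
```

Queues with totals far below the others suggest the workload is not spreading across all configured queues, while uniformly high totals suggest more queues (and the CPU to service them) could help.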
Changing queue size in Linux
ethtool -G ethX rx <value> tx <value>
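Adapters cap their ring sizes at a hardware maximum, which ethtool -g ethX reports under "Pre-set maximums". A minimal sketch that clamps a requested ring size to a sample maximum before applying it (ethX and both numeric values are illustrative):

```shell
# Clamp the requested ring size to the adapter's advertised maximum.
# On a real system, read rx_hw_max from `ethtool -g ethX` instead of
# hard-coding it.
rx_hw_max=4096   # sample "Pre-set maximums" RX value
rx_req=8192      # desired ring size
rx_set=$(( rx_req < rx_hw_max ? rx_req : rx_hw_max ))
echo "ethtool -G ethX rx $rx_set tx $rx_set"
```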
Higher speed adapter considerations
Higher speed networks with 25 GigE and 100 GigE network adapters require multiple parallel threads and tuning of driver attributes.
If it is a Gen4 adapter, make sure the adapter is seated in a PCIe Gen4 slot.
Additional functions such as compression, encryption, and duplication can add latency.
Virtualization
Virtualized networking is supported in the form of SR-IOV, vNIC, and virtual Ethernet (vETH). Virtualization adds latency and can reduce throughput compared to native I/O.
Besides the backing hardware, ensure that the VIOS has enough memory and CPU to provide the required throughput and response times.
IBM PowerVM Best Practices can be very helpful for VIOS sizing.
Power10 Quick Start Guide Network IO Considerations