HP-UX TCP/IP Performance White Paper, March 2008

The HP-UX TCP stack can track literally millions of TIME_WAIT connections with no particular decrease in
performance and only a slight cost in terms of memory. So, it should almost never be the case that you need
to decrease this value from its default of 60 seconds.
tcp_ts_enable:
RFC 1323 defines a timestamps option that can be sent with every
segment. The timestamps in the option are used for two purposes:
More accurate RTTM (Round Trip Time Measurement), i.e. the
interval between the time a TCP segment is sent and the time
the returning acknowledgement arrives.
PAWS (Protect Against Wrapped Sequences) on very high-speed
networks. On connections with large transmission rates where
the sequence number may wrap, the timestamps are used to
detect old packets.
Supported parameter values are:
0: Never timestamp
1: Always initiate
2: Allow but don't initiate (Default)
Use of timestamps is requested by the initiator of a TCP
connection by sending a timestamps option (Option Kind 8)
in the initial TCP SYN packet. [0-2] Default: 2
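The option described above (Option Kind 8) has a fixed ten-byte layout: a kind byte, a length byte, a 4-byte TSval, and a 4-byte TSecr. A minimal sketch of that on-the-wire layout in Python (the timestamp values here are purely illustrative):

```python
import struct

def tcp_timestamps_option(ts_val, ts_ecr):
    """Build the 10-byte TCP timestamps option (RFC 1323):
    kind=8, length=10, then two 32-bit timestamps (TSval, TSecr)."""
    return struct.pack("!BBII", 8, 10, ts_val, ts_ecr)

opt = tcp_timestamps_option(ts_val=123456, ts_ecr=0)
assert len(opt) == 10                 # option is always ten bytes
assert opt[0] == 8 and opt[1] == 10   # kind and length fields
```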
Timestamps are part of TCP's optional support for large windows - windows larger than 65535 bytes. With
larger windows, but the same size TCP sequence number space, it becomes possible to "wrap" the
sequence number before an old segment with that same sequence number is statistically known to have left
the network. So, timestamps essentially extend the effective sequence number space - two sends with the
same sequence number will carry different timestamps.
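The wrap risk is easy to quantify: the sequence space is 2^32 bytes (4 GB), so at a sustained byte rate the wrap time is simply 2^32 divided by that rate. A quick back-of-the-envelope check (the rate chosen here is for illustration):

```python
SEQ_SPACE = 2 ** 32  # bytes of TCP sequence number space (4 GB)

def seconds_to_wrap(bytes_per_second):
    """Time for one connection to consume the whole sequence space."""
    return SEQ_SPACE / bytes_per_second

# At 1 Gbit/s (125,000,000 bytes/s) the sequence space wraps in
# roughly 34 seconds - comfortably within the lifetime an old
# segment may still be wandering the network.
t = seconds_to_wrap(125_000_000)
print(round(t, 1))  # ~34.4 seconds
```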
These timestamps are echoed back by the receiver in the ACKs it sends to the sender. The sender can use
this information to get a more accurate picture of the round-trip time between the two ends of the
connection. This can result in quicker, more accurate retransmission timeouts and fewer spurious ones.
The default value is two (2) - do not ask for, but accept timestamp options. Basically, if the remote initiates a
connection to the local system, and asks for timestamps, they will be used. Otherwise, for connections
initiated by the local system, timestamps will not be requested. Again, this is one of those "conservative in
what you send" defaults.
A value of one (1) means that the system will ask for timestamps on connections it initiates, and will accept
the use of timestamps on connections initiated by remote systems.
A value of zero (0) means that the system will never ask for timestamps on connections it initiates, nor will it
accept the use of timestamps on connections initiated by remote systems. This value would likely only be
used when the added option bytes were consuming too much bandwidth.
Timestamps should always be used if one is going to use windows larger than 65535 bytes. So, if a system
is configured with a tcp_xmit_hiwater_* or tcp_recv_hiwater_* larger than 65535 bytes,
tcp_ts_enable should be set to one (1).
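That rule of thumb can be expressed as a small check: if either high-water mark exceeds the largest window representable without scaling, timestamps should be on. A sketch using the 65535-byte limit from the text (the example tunable values are hypothetical):

```python
MAX_UNSCALED_WINDOW = 65535  # largest TCP window without window scaling

def should_enable_timestamps(xmit_hiwater, recv_hiwater):
    """Return True when either buffer setting implies windows larger
    than 65535 bytes, i.e. tcp_ts_enable should be set to one (1)."""
    return max(xmit_hiwater, recv_hiwater) > MAX_UNSCALED_WINDOW

assert should_enable_timestamps(32768, 32768) is False
assert should_enable_timestamps(262144, 32768) is True
```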
To be safe, timestamps should also be used anytime a single connection will be able to send data faster
than one GB per minute. The rationale here is that we do not want a TCP connection wrapping its 4 GB