
The difference with tcp_tw_recycle and tcp_tw_reuse

If you ask me which kernel parameters are the most troublesome for system engineers, they are probably tcp_tw_reuse and tcp_tw_recycle. These two kernel parameters look very similar, and it is hard to know exactly how they differ. So, in this article, we will test these parameters and find out the difference between tcp_tw_reuse and tcp_tw_recycle.

Let's start.

Materials

First of all, prepare two servers: one plays the role of the client, and the other plays the role of the server. On the client, set the net.ipv4.ip_local_port_range kernel parameter to 32768 32768, so that only a single local port is available.

[root@server ~]# sysctl -w "net.ipv4.ip_local_port_range=32768 32768"
net.ipv4.ip_local_port_range = 32768 32768
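With only one ephemeral port available, the exhaustion is easy to reproduce. A minimal sketch of the setup (assuming the server from later in the article, 172.16.33.136, serves plain HTTP; the port and address are illustrative):

```shell
# On the client: restrict the ephemeral port range to a single port.
sysctl -w "net.ipv4.ip_local_port_range=32768 32768"

# The first request succeeds and leaves local port 32768 in TIME_WAIT.
curl -s -o /dev/null http://172.16.33.136/

# Confirm the only usable local port is now stuck in TIME_WAIT.
netstat -ant | grep 32768
```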

tcp_tw_reuse


First, let's test tcp_tw_reuse.
On the client, run a curl command against the server. Soon you will see that the local port used to connect to the server has changed to the TIME_WAIT state. If you run the curl command again, you will see the error message below.
tcp_tw_reuse not enabled
This is a natural result. The only usable local port is 32768, and it is in the TIME_WAIT state, so there is no local port left to use.
How does this change if you enable tcp_tw_reuse?
tcp_tw_reuse enabled
As you can see, the port in the TIME_WAIT state can be used again, again, and again.
This means that tcp_tw_reuse allows a local port to be reused for outgoing connections even while it is in the TIME_WAIT state.
So, if you run into local port exhaustion, don't just increase the local port range (net.ipv4.ip_local_port_range); enable the tcp_tw_reuse kernel parameter instead.
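The fix described above can be sketched as follows (note that tcp_tw_reuse relies on TCP timestamps, which are enabled by default on Linux; the server address is the one used elsewhere in this article):

```shell
# tcp_tw_reuse only takes effect when TCP timestamps are enabled (the default).
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_reuse=1

# Repeated requests now succeed even though the single local port
# cycles through TIME_WAIT between them.
for i in 1 2 3; do curl -s -o /dev/null http://172.16.33.136/; done
```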

This is not the same as the SO_REUSEADDR socket option. SO_REUSEADDR allows a socket to bind to a local address for listening even if that address is still in the TIME_WAIT state.

tcp_tw_recycle


Next, let's test tcp_tw_recycle. We observe this parameter on the server. On the client, run a curl command against the server; on the server, the port used by that connection changes to the TIME_WAIT state.
netstat result in server #1
In this situation, what happens if you run the curl command against the server again?
netstat result in server #2
It can be used again, again, and again. On the server, the 172.16.33.136:32768 socket is considered a closed network session, so there is no reason to reject incoming traffic from that client. In other words, the server can accept a new connection on a socket in the TIME_WAIT state whether tcp_tw_recycle is enabled or disabled.
So, what happens if you enable tcp_tw_recycle?
netstat result when tcp_tw_recycle is enabled

You can't see any sockets in the TIME_WAIT state on the server with netstat. It looks as though there are no TIME_WAIT sockets at all. Let's look at the kernel code to learn more.
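To reproduce this observation, a hedged sketch (note that tcp_tw_recycle was removed entirely in Linux 4.12, so this only works on older kernels, and it should never be enabled in production):

```shell
# On the server: enable fast recycling of TIME_WAIT sockets.
# WARNING: dangerous behind NAT; the parameter was removed in Linux 4.12.
sysctl -w net.ipv4.tcp_tw_recycle=1

# After a client request, TIME_WAIT sockets disappear almost immediately.
netstat -ant | grep TIME_WAIT
```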
net/ipv4/tcp_minisocks.c
If tcp_tw_recycle is enabled, the tw_timeout value is set to the RTO (retransmission timeout). This value is very small, so sockets in the TIME_WAIT state are closed very quickly. Especially for local communication (for example, between servers in the same datacenter), the RTO is very short, so TIME_WAIT sockets are closed almost immediately.
If tcp_tw_recycle is disabled, tw_timeout is set to TCP_TIMEWAIT_LEN (in Linux, 60 seconds).
tcp_tw_recycle looks very attractive because TIME_WAIT sockets are closed so quickly, but it has problems. When tcp_tw_recycle is enabled, the kernel remembers the TCP timestamp of the last packet it received from each client. If the timestamp of the next packet from that client is smaller than the remembered timestamp, the kernel drops the incoming packet. This is very dangerous, especially in NAT environments, where many clients share one address but have different timestamp clocks. (Many ISPs use NAT for their customers.)
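If you suspect this kind of drop is happening, the kernel's PAWS (Protection Against Wrapped Sequences) counters are a useful place to look. A sketch (the exact counter wording varies by kernel and tool version):

```shell
# Timestamp-related rejection counters in the summary statistics.
netstat -s | grep -i timestamp

# Or, with nstat from iproute2, the PAWS counters directly
# (e.g. TcpExtPAWSPassive, TcpExtPAWSEstab on older kernels).
nstat -az | grep -i paws
```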

Conclusions


We have tested and seen the difference between tcp_tw_reuse and tcp_tw_recycle, and how each of these parameters influences the kernel.

In short, tcp_tw_reuse is recommended to be enabled, and tcp_tw_recycle is recommended to be disabled.
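To make this recommendation persistent across reboots, a minimal /etc/sysctl.conf fragment (a sketch; apply it with `sysctl -p`):

```shell
# /etc/sysctl.conf — recommended settings from this article
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 0
```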

Please remember the functionality of these kernel parameters.

Thank you.
