Whatever language you work in, such as Java, Ruby, or Scala, you usually build on a framework: Play (Scala), Ruby on Rails (Ruby), Spring (Java), and so on. You can develop a service with one of these frameworks and put nginx in front of it as a reverse proxy. Today, we will look at the nginx upstream directive and its impact on performance.
What is nginx upstream?

upstream is the directive that defines the servers to which nginx forwards requests via proxy_pass. The application servers that implement the business logic sit behind nginx, and nginx delivers each user request to them.

(Figure: the service structure of nginx upstream)
You might expect some performance degradation, because nginx now sits between the user and the application server and has to relay every request. But nginx offers so many powerful features that a small performance cost is a reasonable trade-off.
For example, validating clients by their User-Agent or checking the Referer header is cumbersome to implement in the application server. With nginx as a proxy, such features are very simple to add, and that is one good reason to use nginx as a reverse proxy.
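As a rough illustration (the hostnames, patterns, and status codes below are only examples, not from the original article), such checks can be written directly in the nginx configuration:

server {
    listen 80;

    # Reject clients whose User-Agent matches a blocked pattern.
    if ($http_user_agent ~* "BadBot") {
        return 403;
    }

    location /images/ {
        # $invalid_referer is set when the Referer header does not match this list.
        valid_referers none blocked example.com *.example.com;
        if ($invalid_referer) {
            return 403;
        }
        # "backend" is the upstream group introduced later in this article.
        proxy_pass http://backend;
    }
}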
Still, it is true that there is some performance degradation. So, how can we reduce it?
Upstream Keepalive

nginx provides the upstream directive, and inside an upstream block you can use the keepalive directive.
The example below is a basic nginx upstream configuration.
upstream backend {
    server backend1.example.com:9000;
}
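For context, the upstream name is what proxy_pass points at; a minimal sketch (the listen port and location are only examples):

server {
    listen 80;

    location / {
        # Requests are forwarded to the servers listed in the "backend" upstream block.
        proxy_pass http://backend;
    }
}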
What is the problem with this simple configuration? The internal connection between nginx and the Play web server is also created per request, so every request triggers a TCP 3-way handshake. As you know, the TCP 3-way handshake is expensive.
Let's imagine a client sends three requests. nginx receives the three requests and delivers them directly to the application server. Each connection between nginx and the Play web server is a separate TCP connection, so each one requires its own TCP 3-way handshake. In this case the TCP 3-way handshake happens six times in total (three on the client side, three on the upstream side), and every connection between nginx and the Play web server ends up as a TIME_WAIT socket. In conclusion, there are three problems.

(Figure: without upstream keepalive)
First, many TIME_WAIT sockets pile up on the local (nginx-to-backend) connections. Second, each local connection needs its own TCP 3-way handshake, which hurts performance. Lastly, because each local connection consumes a local port, the local ports can be exhausted.
By using the keepalive directive inside the upstream block, you can avoid these three problems.
# proxy_http_version and the Connection header reset take effect in the
# server/location block that contains proxy_pass:
proxy_http_version 1.1;
proxy_set_header Connection "";

upstream backend {
    server backend1.example.com:9000;
    keepalive 100;
}
The example above shows an upstream block with the keepalive directive. keepalive 100 tells nginx to keep up to 100 idle connections to the upstream servers open in each worker process and reuse them for later requests; the proxy_http_version 1.1 and empty Connection header settings are required for those keepalive connections to work.
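Putting it together, a fuller configuration might look roughly like the sketch below. The listen port, location, and keepalive value are only illustrative, not taken from the original setup.

upstream backend {
    server backend1.example.com:9000;
    # Keep up to 100 idle connections to the upstream per worker process.
    keepalive 100;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Keepalive to the upstream requires HTTP/1.1 and an empty Connection header.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}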
How much performance degradation occurs?

How much does performance change with upstream keepalive? The results below were measured with the ab benchmarking tool.

(Figure: test result)
As you can see, the best performance comes from hitting the Play web server directly. But nginx with upstream keepalive enabled performs better than nginx without keepalive.
Conclusion

In this article we looked at nginx upstream. If you use nginx as a reverse proxy, don't forget to enable upstream keepalive.