Nginx is one of the most popular web servers around; it's fast and relatively easy to customize. You can use it to serve static assets and HTML content, or to proxy requests to an application server like Gunicorn or PHP-FPM.
Did you know you can also use Nginx as a load balancer?
In the days of Apache, before Nginx became a thing, we relied on software such as HAProxy. I actually still use HAProxy in some places; however, it can be a little awkward to configure if you are not familiar with the syntax.
Setting up backends
In Nginx, you can set up a list of origin servers to forward requests to by declaring an "upstream" block before your "server" blocks.
upstream backend {
    server 192.168.1.1 weight=1;
    server 192.168.1.2 weight=1;
    server 192.168.1.3 weight=2;
    keepalive 200;
}
Each server entry should point to one of your application servers.
The "weight" parameter affects balancing: a higher weight means more requests are sent to that server. For example, for every 4 requests, 2 will be sent to 192.168.1.3 and the other two servers will each receive one.
"keepalive" allows Nginx to keep a cache of connections to the origin servers, so a connection can be reused for more than one request.
This helps Nginx handle large volumes of requests in a short space of time, because it doesn't need to open a new connection to an origin server for every request.
In this case, we tell Nginx to keep up to 200 idle connections in the cache (per worker process). When this limit is exceeded, Nginx closes the least recently used connections.
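One caveat worth noting: for the keepalive cache to actually be used, the proxied connection needs HTTP/1.1 and a cleared "Connection" header, which you set in the location block that proxies to the upstream. A minimal sketch (both directives are standard ngx_http_proxy_module directives):

```nginx
location / {
    # Upstream keepalive requires HTTP/1.1; the default for
    # proxied connections is HTTP/1.0, which closes after each request.
    proxy_http_version 1.1;

    # Clear the Connection header so Nginx doesn't forward
    # "Connection: close" to the origin servers.
    proxy_set_header Connection "";

    proxy_pass http://backend;
}
```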
You can even configure backup servers. These won't receive traffic normally; however, if your primary servers are down or marked as failed, Nginx will automatically route traffic to these backup servers. An example: "server 192.168.1.4 backup;".
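Putting that together, here's a sketch of an upstream with a backup server plus basic failure tuning ("max_fails" and "fail_timeout" are standard server parameters; the IPs are placeholders):

```nginx
upstream backend {
    # Mark a server as unavailable after 3 failed attempts
    # within 30 seconds, then skip it for the next 30 seconds.
    server 192.168.1.1 max_fails=3 fail_timeout=30s;
    server 192.168.1.2 max_fails=3 fail_timeout=30s;

    # Only receives traffic when the servers above are unavailable.
    server 192.168.1.4 backup;
}
```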
Configure proxy pass
Now that you have an upstream backend configured, all you need to do is point your "proxy_pass" at it.
server {
    server_name mywebsite.com;
    listen 443 ssl http2;
    # ... ssl config here

    location / {
        # A very basic proxy example. You would probably also want
        # proxy_send_timeout and other proxy settings.
        proxy_set_header Host www.mywebsite.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;
    }
}
As you can see above, "backend" is simply the name of the upstream block we declared earlier. Nginx will now forward all requests to "/" on to the upstream group, distributing the traffic using a round-robin algorithm by default.
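Round-robin is only the default; if it doesn't suit your traffic, Nginx ships with other balancing methods you can enable with a single directive inside the upstream block ("least_conn" and "ip_hash" are both standard ngx_http_upstream_module directives):

```nginx
upstream backend {
    # Send each request to the server with the fewest active connections.
    least_conn;
    # Or, for sticky sessions, hash on the client IP instead:
    # ip_hash;

    server 192.168.1.1;
    server 192.168.1.2;
}
```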
When a request reaches an origin server, the source IP address of that request is now the load balancer's IP and not the actual client's IP.
This is not ideal; in most cases you want to know the user's real IP address, which is why we add the "X-Forwarded-For" header.
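If your origin servers also run Nginx, they can restore the client address from that header using the standard ngx_http_realip_module. A sketch, assuming the load balancer lives on the 192.168.1.0/24 network (swap in your own range):

```nginx
# On the origin server: trust X-Forwarded-For only when the
# request arrives from the load balancer's network.
set_real_ip_from 192.168.1.0/24;
real_ip_header X-Forwarded-For;
```

After this, $remote_addr on the origin server reflects the original client rather than the load balancer.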
Conclusion
This is just the tip of the iceberg, but hopefully it helps you better understand load balancing and gives you a simple way to get started running multiple app servers and distributing traffic across them efficiently.