
Nginx Load Balancing — Advanced Configuration

2017-01-23 15:22



https://futurestud.io/tutorials/nginx-load-balancing-advanced-configuration

In the previous post on how to use nginx load balancing, we showed you the required
nginx configuration to pass traffic to a group of available servers. This week, we dive into advanced nginx configuration: load balancing methods, server weights, and health checks.


Related Posts

How to Use Nginx as a Load Balancer
Nginx Load Balancing — Advanced Configuration


Load Balancing Mechanisms

nginx supports three load balancing methods out of the box. We'll explain them in more detail in the sections below. For now, here are the three supported mechanisms:
Round Robin
IP Hash
Least Connected

By default, nginx uses round robin to pass requests to application servers. You don't need any specific configuration and can use a basic setup to make things work. A very stripped down nginx load balancing configuration can look like this:
upstream node_cluster {
    server 127.0.0.1:3000;      # Node.js instance 1
    server 127.0.0.1:3001;      # Node.js instance 2
    server 127.0.0.1:3002;      # Node.js instance 3
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://node_cluster/;
    }
}


The upstream block defines the cluster of application servers which handle the incoming requests, and within the server block of nginx, we just proxy incoming connections to the defined cluster.

Let's look at the concrete load balancing methods in more detail, starting with round robin.


Round Robin

This is the default configuration of nginx load balancing. You don’t need to explicitly configure this balancing type and it works seamlessly without hassle.

nginx passes incoming requests to the application servers in round robin style. That also means you can't be sure that requests from the same IP address are always handled by the same application server. This is important to understand when you persist session information locally on the app servers.
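Since round robin is the default, a minimal upstream block is all it takes; the sketch below reuses the cluster from above and needs no extra balancing directive:

```nginx
upstream node_cluster {
    # no balancing directive needed: round robin is the default
    server 127.0.0.1:3000;      # Node.js instance 1
    server 127.0.0.1:3001;      # Node.js instance 2
    server 127.0.0.1:3002;      # Node.js instance 3
}
```

With equal weights, nginx cycles through the three instances in order: request 1 goes to instance 1, request 2 to instance 2, request 3 to instance 3, and request 4 starts the cycle over again.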


Least Connected

With least connected load balancing, nginx forwards a new request to the server with the fewest active connections instead of blindly taking turns. This method is useful when operations on the application servers take longer to complete. Using it helps to avoid overload situations, because nginx doesn't pile additional requests onto servers which are already under load.

Configure the least connected mechanism by adding the least_conn directive as the first line within the upstream block.
upstream node_cluster {
    least_conn;                 # least connected load balancing
    server 127.0.0.1:3000;      # Node.js instance 1
    server 127.0.0.1:3001;      # Node.js instance 2
    server 127.0.0.1:3002;      # Node.js instance 3
}


Apply the configuration changes to nginx by using the reload or restart command (sudo service nginx reload or sudo service nginx restart). It's a good idea to validate the configuration first with sudo nginx -t.


IP Hash

When utilizing the IP hash method, nginx applies a hash algorithm to the requesting IP address and assigns the request to a specific server. This load balancing method makes sure that requests from the same IP address are assigned to the same application server.
If you persist session information locally on a given server, you should use this load balancing technique to avoid nerve-wracking re-logins.

Configure the IP hash method by adding the ip_hash directive as the first line of the upstream block:
upstream node_cluster {
    ip_hash;                    # IP hash based load balancing
    server 127.0.0.1:3000;      # Node.js instance 1
    server 127.0.0.1:3001;      # Node.js instance 2
    server 127.0.0.1:3002;      # Node.js instance 3
}


Restart (or reload) nginx to apply the configuration changes. If you set up the Vagrant box to test nginx's configuration right away, all your requests will be answered by the same app server.


Balancing Weights

You can customize the nginx load balancing configuration even further by adding individual weights to any of the available application servers. With the help of weights, you're able to influence how frequently nginx selects a server to handle requests.
This makes sense if some servers within your cluster have more hardware resources than others.

Assign weights for app servers with the weight directive, placed directly after the address of the application server.
upstream node_cluster {
    server 127.0.0.1:3000 weight=1;      # Node.js instance 1
    server 127.0.0.1:3001 weight=2;      # Node.js instance 2
    server 127.0.0.1:3002 weight=3;      # Node.js instance 3
}


Using the weight setup above, every six requests are handled by nginx as follows: one request is forwarded to instance 1, two requests to instance 2, and three requests are passed to instance 3.

You can omit weight=1, because this is the default value. Additionally, you can define a weight for just a single server.
upstream node_cluster {
    server 127.0.0.1:3000;               # Node.js instance 1
    server 127.0.0.1:3001;               # Node.js instance 2
    server 127.0.0.1:3002 weight=4;      # Node.js instance 3
}


The new configuration with only one weight defined changes the behavior compared to the previous configuration. Now, every six new requests are handled as follows: one request is passed to instance 1, another one to instance 2, and four requests are sent to instance 3.
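Weights aren't limited to round robin; with least_conn, for example, nginx takes the configured weights into account when comparing active connection counts. A sketch combining both:

```nginx
upstream node_cluster {
    least_conn;                          # pick the server with the fewest active connections
    server 127.0.0.1:3000;               # Node.js instance 1
    server 127.0.0.1:3001;               # Node.js instance 2
    server 127.0.0.1:3002 weight=4;      # Node.js instance 3, beefier machine
}
```

Here, instance 3 is preferred as long as its (weighted) connection count stays below the other two instances.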


Health Checks & Max Fails

There are multiple reasons why application servers don't respond or seem to be offline. nginx integrates a mechanism, called max fails, to mark specific app servers as inactive if they fail to respond to requests. Use the max_fails directive to customize the number of failed attempts nginx tolerates before the server is marked offline. The default value for max_fails is 1. You can disable these health checks by setting max_fails=0.
upstream node_cluster {
    server 127.0.0.1:3000 max_fails=3;   # Node.js instance 1
    server 127.0.0.1:3001;               # Node.js instance 2
    server 127.0.0.1:3002 weight=4;      # Node.js instance 3
}


Once nginx has marked an application instance as failed, the default fail_timeout of 10s starts. Within that time frame, nginx doesn't pass any traffic to the offline server. After that period, nginx tries to reach the server again and marks it as failed once more if the number of failed attempts reaches the value set in max_fails. You can adjust the timeout per server:
upstream node_cluster {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=20s;  # Node.js instance 1
    server 127.0.0.1:3001;                               # Node.js instance 2
    server 127.0.0.1:3002 weight=4;                      # Node.js instance 3
}


As you can see, the directives can be combined without any trouble.
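Putting the pieces together, a complete configuration sketch (with the same hypothetical domain as in the first example) might look like this:

```nginx
upstream node_cluster {
    least_conn;                                          # least connected balancing
    server 127.0.0.1:3000 max_fails=3 fail_timeout=20s;  # Node.js instance 1
    server 127.0.0.1:3001;                               # Node.js instance 2
    server 127.0.0.1:3002 weight=4;                      # Node.js instance 3
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://node_cluster/;
    }
}
```

Swap least_conn for ip_hash if you need sticky sessions instead; the weight and health check parameters on the server lines stay the same.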


Conclusion

As you learned throughout this article, nginx provides a lot of capabilities to enable load balancing for your cluster of application servers. Besides multiple load balancing methods like round robin, least connected, and IP hash, you can set specific weights
for your servers to pass more or less traffic to individual machines.

nginx also ships with support for basic health checks, so that failed machines are excluded from receiving requests.

We hope this guide helps you to seamlessly set up your own app cluster. If you run into any issues, please get in touch via the comments below or shoot us @futurestud_io.


Additional Resources

nginx’s official guide to configure load balancing