Nginx Load Balancing — Advanced Configuration
2017-01-23 15:22
https://futurestud.io/tutorials/nginx-load-balancing-advanced-configuration

In the previous post on how to use nginx load balancing, we showed you the nginx configuration required to pass traffic to a group of available servers. This week, we dive into advanced nginx configuration: load balancing methods, server weights, and health checks.
Related Posts

- How to Use Nginx as a Load Balancer
Load Balancing Mechanisms
nginx supports three load balancing methods out of the box. We’ll explain them in more detail within the sections below. For now, the three supported mechanisms are:

- Round Robin
- IP Hash
- Least Connected
By default, nginx uses round robin to pass requests to application servers. You don’t need to specify any particular configuration and can use a basic setup to make things work. A very stripped-down nginx load balancing configuration can look like this:
```nginx
upstream node_cluster {
    server 127.0.0.1:3000;    # Node.js instance 1
    server 127.0.0.1:3001;    # Node.js instance 2
    server 127.0.0.1:3002;    # Node.js instance 3
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://node_cluster/;
    }
}
```
The `upstream` block defines the cluster of application servers which handle the incoming requests, and within the `server` block of nginx, we just proxy incoming connections to the defined cluster.
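The `location` block only needs `proxy_pass` to work. As an addition not covered in the original example, you may also want to forward client details to the upstream servers, since behind the proxy they otherwise only see the proxy’s own address. A minimal sketch using standard `proxy_set_header` directives:

```nginx
# Sketch (not part of the original setup): forward client information
# to the upstream servers so your Node.js apps see the real client IP.
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://node_cluster/;
        proxy_set_header Host $host;                                  # original Host header
        proxy_set_header X-Real-IP $remote_addr;                      # client IP address
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # full forwarding chain
    }
}
```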
Let’s look at the concrete load balancing methods in more detail, starting with round robin.
Round Robin
This is the default configuration of nginx load balancing. You don’t need to explicitly configure this balancing type, and it works seamlessly without hassle.

nginx passes incoming requests to the application servers in round robin style. That also means you can’t be sure that requests from the same IP address are always handled by the same application server. This is important to understand if you persist session information locally on the app servers.
Least Connected
With least connected load balancing, nginx forwards a new request to the server with the fewest active connections. This method is useful when operations on the application servers take longer to complete. It helps to avoid overload situations, because nginx doesn’t pass new requests to servers which are already under load.
Configure the least connected mechanism by adding the `least_conn` directive as the first line within the `upstream` block.
```nginx
upstream node_cluster {
    least_conn;               # least connected load balancing
    server 127.0.0.1:3000;    # Node.js instance 1
    server 127.0.0.1:3001;    # Node.js instance 2
    server 127.0.0.1:3002;    # Node.js instance 3
}
```
Apply the configuration changes by reloading or restarting nginx (`sudo service nginx reload` or `sudo service nginx restart`). It’s a good idea to validate the configuration first with `sudo nginx -t`.
IP Hash
When utilizing the IP hash method, nginx applies a hash algorithm to the requesting IP address and assigns the request to a specific server. This load balancing method makes sure that requests from the same IP address are assigned to the same application server. If you persist session information locally on a given server, you should use this load balancing technique to avoid nerve-wracking re-logins.
Configure the IP hash method by adding the `ip_hash` directive as the first line of the `upstream` block:
```nginx
upstream node_cluster {
    ip_hash;                  # IP hash based load balancing
    server 127.0.0.1:3000;    # Node.js instance 1
    server 127.0.0.1:3001;    # Node.js instance 2
    server 127.0.0.1:3002;    # Node.js instance 3
}
```
Restart (or reload) nginx to apply the configuration changes. If you set up the Vagrant box to test nginx’s configuration right away, all your requests will be answered by the same app server.
Balancing Weights
You can customize the nginx load balancing configuration even further by adding individual weights to any of the available application servers. With the help of weights, you’re able to influence how frequently a server is selected by nginx to handle requests. This makes sense if some servers within your cluster have more hardware resources than others.
Assign weights to app servers with the `weight` parameter directly after the address of the application server.
```nginx
upstream node_cluster {
    server 127.0.0.1:3000 weight=1;    # Node.js instance 1
    server 127.0.0.1:3001 weight=2;    # Node.js instance 2
    server 127.0.0.1:3002 weight=3;    # Node.js instance 3
}
```
Using the weight setup above, every six requests are distributed by nginx as follows: one request is forwarded to instance 1, two requests to instance 2, and three requests to instance 3.
You can omit `weight=1`, because this is the default value. Additionally, you can define a weight for just a single server.
```nginx
upstream node_cluster {
    server 127.0.0.1:3000;             # Node.js instance 1
    server 127.0.0.1:3001;             # Node.js instance 2
    server 127.0.0.1:3002 weight=4;    # Node.js instance 3
}
```
The new configuration with only one weight defined changes the behavior compared to the previous configuration. Now, every six requests are handled as follows: one request is passed to instance 1, another one to instance 2, and four requests are sent to instance 3.
Health Checks & Max Fails
There are multiple reasons why application servers don’t respond or seem to be offline. nginx integrates a mechanism, called max fails, to mark specific app servers as inactive if they fail to respond to requests. Use the `max_fails` parameter to customize the number of failed attempts nginx tolerates before the server is marked offline. The default value for `max_fails` is 1.
You can disable these health checks by setting `max_fails=0`.
```nginx
upstream node_cluster {
    server 127.0.0.1:3000 max_fails=3;    # Node.js instance 1
    server 127.0.0.1:3001;                # Node.js instance 2
    server 127.0.0.1:3002 weight=4;       # Node.js instance 3
}
```
Once nginx has marked an application instance as failed, the default `fail_timeout` of 10s starts. Within that time frame, nginx doesn’t pass any traffic to the offline server. After that period, nginx tries to reach the server again, and marks it offline for another timeout once the number of failed attempts reaches the value set in `max_fails`. You can adjust the timeout with the `fail_timeout` parameter:
```nginx
upstream node_cluster {
    server 127.0.0.1:3000 max_fails=3 fail_timeout=20s;    # Node.js instance 1
    server 127.0.0.1:3001;                                 # Node.js instance 2
    server 127.0.0.1:3002 weight=4;                        # Node.js instance 3
}
```
As you can see, you can combine these parameters without any worries.
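To illustrate how the pieces fit together, here is a sketch (the concrete values are illustrative, not from the original tutorial) combining a balancing method with per-server weights and health check parameters in a single `upstream` block:

```nginx
upstream node_cluster {
    least_conn;                                                    # balancing method (one per upstream block)
    server 127.0.0.1:3000 weight=2 max_fails=3 fail_timeout=20s;   # beefier machine, more tolerant checks
    server 127.0.0.1:3001;                                         # defaults: weight=1, max_fails=1, fail_timeout=10s
    server 127.0.0.1:3002 max_fails=2;                             # default weight, stricter health check
}
```

Note that you pick exactly one balancing method per `upstream` block; `least_conn` and `ip_hash` are mutually exclusive, and omitting both means round robin.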
Conclusion
As you learned throughout this article, nginx provides a lot of capabilities to enable load balancing for your cluster of application servers. Besides multiple load balancing methods like round robin, least connected, and IP hash, you can set specific weights for your servers to pass more or less traffic to individual machines. Additionally, nginx ships with support for basic health checks to exclude failed machines from receiving requests.
We hope this guide helps you to seamlessly set up your own app cluster. If you run into any issues, please get in touch via the comments below or shoot us @futurestud_io.
Additional Resources
nginx’s official guide to configure load balancing