
Improving cache hit rate with a two-tier "distribution layer + application layer" nginx architecture: analysis and deployment


1. The problem: a low cache hit rate

Typically you will deploy several nginx instances, each holding some local cache. With requests spread across them arbitrarily, the same data ends up being fetched and cached on every instance, so in the default setup the cache hit rate of this tier is fairly low.

2. How to improve the cache hit rate

Distribution layer + application layer: a two-tier nginx setup.

The distribution-layer nginx is responsible for the traffic-routing logic and policy. It routes according to rules you define yourself, for example hashing on productId and taking the hash modulo the number of backend nginx instances.

That way, all requests for a given product are pinned to the same backend nginx instance. That instance only has to fetch the data from redis once; every later request is served from its local nginx cache.

The backend nginx instances are called application servers; the front-most nginx is called the distribution server.

It looks simple, but it is very effective: in a real production environment it can dramatically raise the hit rate of the nginx local-cache tier, greatly reduce the load on the redis backend, and improve overall performance.
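As a concrete illustration of the "fetch from redis once, then serve from the local cache" idea, here is a minimal sketch of what the application-layer lookup could look like in lua. This is not code from the article: the shared dict name product_cache (which would need a lua_shared_dict product_cache 128m; directive in nginx.conf), the redis address 127.0.0.1:6379 and the 10-minute TTL are all placeholder assumptions.

local uri_args = ngx.req.get_uri_args()
local productId = uri_args["productId"]

-- worker-shared local cache; the dict name is a placeholder
local cache = ngx.shared.product_cache
local cache_key = "product_info_"..productId

local product_info = cache:get(cache_key)
if not product_info then
    -- cache miss: read from redis once, then keep the value locally
    local redis = require("resty.redis")
    local red = redis:new()
    red:set_timeout(1000)                               -- 1s timeout, arbitrary
    local ok, err = red:connect("127.0.0.1", 6379)      -- placeholder redis address
    if not ok then
        ngx.say("redis connect error: ", err)
        return
    end
    product_info = red:get(cache_key)
    red:set_keepalive(10000, 100)
    if product_info == ngx.null then
        ngx.say("product not found")
        return
    end
    cache:set(cache_key, product_info, 10 * 60)         -- cache locally for 10 minutes
end

ngx.say(product_info)

Because the distribution layer always sends the same productId to the same application nginx, the cache:get branch is hit on every request after the first one.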

3. Deploying the first nginx, as the application-layer nginx

(1) Install OpenResty

mkdir -p /usr/servers

cd /usr/servers/

yum install -y readline-devel pcre-devel openssl-devel gcc

wget http://openresty.org/download/ngx_openresty-1.7.7.2.tar.gz

tar -xzvf ngx_openresty-1.7.7.2.tar.gz

cd /usr/servers/ngx_openresty-1.7.7.2/

cd bundle/LuaJIT-2.1-20150120/

make clean && make && make install

ln -sf luajit-2.1.0-alpha /usr/local/bin/luajit

cd /usr/servers/ngx_openresty-1.7.7.2/bundle

wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.3.tar.gz

tar -xvf 2.3.tar.gz

cd /usr/servers/ngx_openresty-1.7.7.2/bundle

wget https://github.com/yaoweibin/nginx_upstream_check_module/archive/v0.3.0.tar.gz

tar -xvf v0.3.0.tar.gz

cd /usr/servers/ngx_openresty-1.7.7.2

./configure --prefix=/usr/servers --with-http_realip_module --with-pcre --with-luajit --add-module=./bundle/ngx_cache_purge-2.3/ --add-module=./bundle/nginx_upstream_check_module-0.3.0/ -j2

make && make install

cd /usr/servers/

ll

/usr/servers/luajit

/usr/servers/lualib

/usr/servers/nginx

/usr/servers/nginx/sbin/nginx -V

Start nginx: /usr/servers/nginx/sbin/nginx
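To confirm the freshly built nginx is actually serving requests, a quick check (assuming port 80 on this host is free and reachable) is:

curl http://127.0.0.1/

which should return the default nginx welcome page.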

(2) A hello world with nginx + lua

vi /usr/servers/nginx/conf/nginx.conf

Add the following inside the http block:

lua_package_path "/usr/servers/lualib/?.lua;;";

lua_package_cpath "/usr/servers/lualib/?.so;;";

Under /usr/servers/nginx/conf, create a lua.conf:

server {
    listen 80;
    server_name _;
}

In the http block of nginx.conf, also add:

include lua.conf;

Check that the configuration is valid:

/usr/servers/nginx/sbin/nginx -t

In the server block of lua.conf, add:

location /lua {
    default_type 'text/html';
    content_by_lua 'ngx.say("hello world")';
}

/usr/servers/nginx/sbin/nginx -t

Reload the nginx configuration:

/usr/servers/nginx/sbin/nginx -s reload

Then access it over http: http://192.168.31.187/lua

mkdir -p /usr/servers/nginx/conf/lua

vi /usr/servers/nginx/conf/lua/test.lua

ngx.say("hello world");

Modify lua.conf:

location /lua {
    default_type 'text/html';
    content_by_lua_file conf/lua/test.lua;
}

Reload nginx again, and watch the error log if anything misbehaves:

tail -f /usr/servers/nginx/logs/error.log

(3) A production-style nginx + lua project layout

Project structure:

hello/
    hello.conf
    lua/
        hello.lua
    lualib/
        *.lua
        *.so

Place it under the /usr/hello directory.

/usr/servers/nginx/conf/nginx.conf

worker_processes 2;

error_log logs/error.log;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type text/html;

    lua_package_path "/usr/hello/lualib/?.lua;;";
    lua_package_cpath "/usr/hello/lualib/?.so;;";
    include /usr/hello/hello.conf;
}

/usr/hello/hello.conf

server {
    listen 80;
    server_name _;

    location /hello {
        default_type 'text/html';
        content_by_lua_file /usr/hello/lua/hello.lua;
    }
}
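hello.conf references /usr/hello/lua/hello.lua, which the article does not list, and lua_package_path expects the lualib tree under /usr/hello/lualib. A minimal way to fill those in, mirroring the earlier test.lua (the cp source path assumes the OpenResty install from step (1)), would be:

mkdir -p /usr/hello/lua
cp -r /usr/servers/lualib /usr/hello/

vi /usr/hello/lua/hello.lua

ngx.say("hello world");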

4. Repeat the same steps on the other two machines, deploying an OpenResty-based nginx on each of them as well.

5. Writing the distribution lua script

Acting as the traffic-distribution nginx, it has to send http requests to the backend application nginx instances, so we first pull in the lua http client library (lua-resty-http):

cd /usr/hello/lualib/resty/

wget https://raw.githubusercontent.com/pintsized/lua-resty-http/master/lib/resty/http_headers.lua

wget https://raw.githubusercontent.com/pintsized/lua-resty-http/master/lib/resty/http.lua

The script:

-- read productId and method from the query string
local uri_args = ngx.req.get_uri_args()
local productId = uri_args["productId"]
local method = uri_args["method"]

-- hash on productId and pick one of the two application-layer nginx hosts,
-- so the same product always lands on the same backend
local hosts = {"192.168.31.19", "192.168.31.187"}
local hash = ngx.crc32_long(productId)
hash = (hash % 2) + 1
local backend = "http://"..hosts[hash]

-- forward the request to the chosen backend with lua-resty-http
local requestPath = "/"..method.."?productId="..productId
local http = require("resty.http")
local httpc = http.new()
local resp, err = httpc:request_uri(backend, {
    method = "GET",
    path = requestPath
})

if not resp then
    ngx.say("request error: ", err)
    return
end

ngx.say(resp.body)
httpc:close()

/usr/servers/nginx/sbin/nginx -s reload
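The article does not say where this script is saved or how it is hooked into the distribution nginx. One plausible wiring, following the same /usr/hello layout used on the application layer, is to save it as /usr/hello/lua/distribute.lua (the file name and the /product location below are placeholders, not from the original) and reference it from the distribution server's hello.conf:

location /product {
    default_type 'text/html';
    content_by_lua_file /usr/hello/lua/distribute.lua;
}

A request such as http://<distribution-nginx>/product?method=hello&productId=1 would then be hashed on productId and forwarded as /hello?productId=1 to the same application-layer nginx every time, which is exactly what keeps that instance's local cache hot.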





Source: lecturer 中华石杉 of 龙果学院 (Longguo Academy).