
Nginx's upstream module and reverse proxying

2013-01-15 16:07



Thanks to its excellent handling of concurrent connections, Nginx is increasingly deployed as a reverse proxy server. In a reverse proxy setup, Nginx sits at the very front of the user-facing side, listens for incoming requests, and forwards each of them to the appropriate backend server that actually handles it. The backends may be cache servers (such as Squid) or servers handling dynamic/static requests (such as Apache/Nginx/lighttpd); they are not discussed in depth here. This article analyzes, at the code level, how Nginx sets up upstream servers and how "proxy_pass" works, and discusses problems that come up in real production environments.

The upstream module provides two main directives, "upstream" and "server". In a configuration file the overall structure looks like this:

Nginx config


http {
    # ...
    upstream backend_name {
        server xx.xx.xx.xx:xx weight=2 max_fails=3;
        server www.xxx.com weight=1;
        server unix://xxx/xxx;
        # ...
    }
    # ...
}

When the "http" directive is parsed, ngx_http_block() is called, and inside it the configuration file is parsed further. When the "upstream" directive is found, its set() callback, ngx_http_upstream(), is invoked. That function looks like this:

C code

ngx_http_upstream(...)
{
    // get the name of the upstream server group, i.e. the argument of the "upstream" directive
    // ...

    // this call allocates an ngx_http_upstream_srv_conf_t and stores it in the umcf->upstreams array
    // note: the same function is also used by "proxy_pass" to look up a matching upstream group in
    // umcf->upstreams (it is a little odd that both jobs live in one function)
    uscf = ngx_http_upstream_add(...);

    ctx = ngx_pcalloc(cf->pool, sizeof(ngx_http_conf_ctx_t));

    // inherit main_conf
    http_ctx = cf->ctx;
    ctx->main_conf = http_ctx->main_conf;

    // create a new srv_conf array
    ctx->srv_conf = ngx_pcalloc(cf->pool, sizeof(void *) * ngx_http_max_module);

    // store the ngx_http_upstream_srv_conf_t created by ngx_http_upstream_add() in srv_conf
    ctx->srv_conf[ngx_http_upstream_module.ctx_index] = uscf;

    // uscf is itself a srv_conf entry, but it also has a srv_conf member,
    // which is pointed at the srv_conf array of this context
    uscf->srv_conf = ctx->srv_conf;

    // create the loc_conf array
    ctx->loc_conf = ngx_pcalloc(cf->pool, sizeof(void *) * ngx_http_max_module);

    // for each NGX_HTTP_MODULE
    for (;;) {
        // if this module has create_srv_conf() ...
        mconf = module->create_srv_conf(cf);
        ctx->srv_conf[ngx_modules[m]->ctx_index] = mconf;

        // if this module has create_loc_conf() ...
        mconf = module->create_loc_conf(cf);
        ctx->loc_conf[ngx_modules[m]->ctx_index] = mconf;
    }

    cf->ctx = ctx;
    cf->cmd_type = NGX_HTTP_UPS_CONF;

    // go on to parse the "server" lines inside the upstream block
    rv = ngx_conf_parse(cf, NULL);
}
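For reference, this is roughly how the two directives are declared in ngx_http_upstream.c (paraphrased from the Nginx source; exact fields may differ slightly between versions). The NGX_HTTP_UPS_CONF flag on "server" is what matches the cf->cmd_type value set above, so "server" lines are only accepted inside an upstream { ... } block.

C code

static ngx_command_t  ngx_http_upstream_commands[] = {

    { ngx_string("upstream"),
      NGX_HTTP_MAIN_CONF|NGX_CONF_BLOCK|NGX_CONF_TAKE1,
      ngx_http_upstream,                    /* the set() walked through above */
      0,
      0,
      NULL },

    { ngx_string("server"),
      NGX_HTTP_UPS_CONF|NGX_CONF_1MORE,     /* only valid inside upstream { } */
      ngx_http_upstream_server,
      NGX_HTTP_SRV_CONF_OFFSET,
      0,
      NULL },

      ngx_null_command
};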

Each "server" line inside the block is then parsed, calling the following function once per line:

C code

ngx_http_upstream_server(...)
{
    ngx_http_upstream_server_t *us;

    us = ngx_array_push(uscf->servers);

    // take the first argument, i.e. the url
    u.url = value[1];
    u.default_port = 80;

    ngx_parse_url(cf->pool, &u);

    // parse the remaining parameters of the "server" line
    // ...

    // store the results in us
    us->addrs = u.addrs;
    us->naddrs = u.naddrs;
    us->weight = weight;
    us->max_fails = max_fails;
    us->fail_timeout = fail_timeout;
}

Inside a location { ... } block (within a server { ... } block handled by ngx_http_core_module), a "proxy_pass" directive triggers its set() callback, ngx_http_proxy_pass():

C code

ngx_http_proxy_pass(...)
{
    ngx_http_proxy_loc_conf_t *plcf = conf;

    // get the current location, i.e. the one in which "proxy_pass" is configured
    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);

    // install the location's handler; ngx_http_update_location_config() later copies
    // clcf->handler into r->content_handler, so NGX_HTTP_CONTENT_PHASE ends up calling
    // ngx_http_proxy_handler
    clcf->handler = ngx_http_proxy_handler;

    // if the first argument of "proxy_pass" (the url) contains variables ...
    // ...

    // set port and add (the byte offset to skip) depending on http/https
    // ...

    u.url.len = url->len - add;     // url length without the "http://" ("https://") prefix
    u.url.data = url->data + add;   // e.g. "http://backend1" becomes "backend1"
    u.default_port = port;          // default port
    u.uri_part = 1;
    u.no_resolve = 1;               // do not resolve the host name in this url

    // look through the upstream groups defined so far for one whose name matches
    plcf->upstream.upstream = ngx_http_upstream_add(cf, &u, 0);

    // set the remaining plcf members
    // ...

    // remember the location name
    plcf->location = clcf->name;

    // ...
}

When a request hits a location with proxy_pass configured, it goes through the checkers and handlers of every phase just like any other request. When it reaches the checker of NGX_HTTP_CONTENT_PHASE, ngx_http_core_content_phase(), that checker calls r->content_handler(r), which here is ngx_http_proxy_handler.
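For context, the branch of ngx_http_core_content_phase() that hands the request to a content handler looks roughly like this (paraphrased from ngx_http_core_module.c, details omitted):

C code

ngx_int_t
ngx_http_core_content_phase(ngx_http_request_t *r, ngx_http_phase_handler_t *ph)
{
    if (r->content_handler) {
        /* here r->content_handler is ngx_http_proxy_handler */
        r->write_event_handler = ngx_http_request_empty_handler;
        ngx_http_finalize_request(r, r->content_handler(r));
        return NGX_OK;
    }

    /* otherwise fall through to the generic content phase handlers ... */
}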

C code

ngx_http_proxy_handler(r)
{
    ngx_http_upstream_t *u;

    // create a new ngx_http_upstream_t and assign it to r->upstream
    // ...
    ngx_http_upstream_create(r);

    // set r->ctx[ngx_http_proxy_module.ctx_index] = ctx
    ctx = ngx_pcalloc(r->pool, sizeof(ngx_http_proxy_ctx_t));
    ngx_http_set_ctx(r, ctx, ngx_http_proxy_module);

    // get the ngx_http_proxy_loc_conf_t
    plcf = ngx_http_get_module_loc_conf(r, ngx_http_proxy_module);
    // ...

    // take over the upstream configuration (group name, server IPs, ports, and so on)
    u->conf = &plcf->upstream;

    // builds the request buffer (or a chain of buffers) to be sent to the upstream server
    u->create_request = ngx_http_proxy_create_request;

    // called if the backend connection is reset (before create_request is called a second time)
    u->reinit_request = ngx_http_proxy_reinit_request;

    // processes the first bit of the upstream reply, typically saving a pointer to the payload
    u->process_header = ngx_http_proxy_process_status_line;

    // called when the client aborts the request
    u->abort_request = ngx_http_proxy_abort_request;

    // called after Nginx has finished reading the reply from the upstream server
    u->finalize_request = ngx_http_proxy_finalize_request;
    // ...

    u->pipe = ngx_pcalloc(r->pool, sizeof(ngx_event_pipe_t));
    u->pipe->input_filter = ngx_event_pipe_copy_input_filter;

    ngx_http_read_client_request_body(r, ngx_http_upstream_init);

    // no special response
    return NGX_DONE;    // -4
}
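To make the callback contract concrete, here is a minimal, hypothetical create_request for an upstream-using module. It is not the real ngx_http_proxy_create_request(), which also copies the client's request line and headers and honors proxy_set_header; it only shows how a buffer chain ends up in u->request_bufs. The hard-coded request line is purely illustrative.

C code

static ngx_int_t
ngx_http_example_create_request(ngx_http_request_t *r)
{
    static const u_char   req[] = "GET / HTTP/1.0" CRLF "Host: backend" CRLF CRLF;
    ngx_buf_t            *b;
    ngx_chain_t          *cl;

    /* allocate a buffer from the request pool and copy the request into it */
    b = ngx_create_temp_buf(r->pool, sizeof(req) - 1);
    if (b == NULL) {
        return NGX_ERROR;
    }
    b->last = ngx_cpymem(b->pos, req, sizeof(req) - 1);

    /* wrap the buffer in a chain link and hand it to the upstream machinery */
    cl = ngx_alloc_chain_link(r->pool);
    if (cl == NULL) {
        return NGX_ERROR;
    }
    cl->buf = b;
    cl->next = NULL;

    r->upstream->request_bufs = cl;

    return NGX_OK;
}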

ngx_http_read_client_request_body() eventually calls the post_handler, i.e. ngx_http_upstream_init():

C code

ngx_http_upstream_init(r)
{
    // ...

    // with an edge-triggered event mechanism (epoll/kqueue), register an NGX_WRITE_EVENT;
    // this event is used to detect whether the connection to the client has been closed
    ngx_add_event(c->write, NGX_WRITE_EVENT, NGX_CLEAR_EVENT);

    ngx_http_upstream_init_request(r);
}

C code

ngx_http_upstream_init_request(r)
{
    ngx_http_upstream_t *u;

    u = r->upstream;

    // set r->read_event_handler and r->write_event_handler as the event callbacks
    r->read_event_handler = ngx_http_upstream_rd_check_broken_connection;
    r->write_event_handler = ngx_http_upstream_wr_check_broken_connection;

    // copy the incoming request (request line, headers) into members of r->upstream,
    // e.g. r->upstream->url holds the requested url; the request (request line plus
    // headers) is placed into the buf chain r->upstream->request_bufs
    u->create_request(r);

    clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);

    // initialize u->output (of type ngx_output_chain_ctx_t)
    u->output.alignment = clcf->directio_alignment;
    u->output.pool = r->pool;
    u->output.bufs.num = 1;
    u->output.bufs.size = clcf->client_body_buffer_size;
    u->output.output_filter = ngx_chain_writer;
    u->output.filter_ctx = &u->writer;
    u->writer.pool = r->pool;

    // create the state (ngx_http_upstream_state_t)
    // ...

    // register a cleanup handler
    cln = ngx_http_cleanup_add(r, 0);
    cln->handler = ngx_http_upstream_cleanup;
    cln->data = r;
    u->cleanup = &cln->handler;

    // get the ngx_http_upstream_srv_conf_t
    uscf = u->conf->upstream;

    ngx_http_upstream_connect(r, u);
}

C code

ngx_http_upstream_connect(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
    u->state = ngx_array_push(r->upstream_states);

    // record the current time in u->state
    // ...

    // create a socket, call ngx_get_connection() to get a free connection and initialize it,
    // register it with the event mechanism, then bind() and connect() to the peer;
    // it does not wait for the connection to complete: connect() typically comes back with
    // EINPROGRESS ("operation now in progress"), in which case rc is NGX_AGAIN
    rc = ngx_event_connect_peer(&u->peer);

    c = u->peer.connection;
    c->data = r;

    c->write->handler = ngx_http_upstream_handler;
    c->read->handler = ngx_http_upstream_handler;

    u->write_event_handler = ngx_http_upstream_send_request_handler;
    u->read_event_handler = ngx_http_upstream_process_header;

    c->sendfile &= r->connection->sendfile;
    u->output.sendfile = c->sendfile;

    // init or reinit the ngx_output_chain() and ngx_chain_writer() contexts
    // ...

    // rc == NGX_AGAIN: the connection is still in progress
    ngx_add_timer(c->write, u->conf->connect_timeout);

    // rc == NGX_OK: connected successfully
    ngx_http_upstream_send_request(r, u);
}
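The EINPROGRESS behaviour mentioned above is simply the standard non-blocking connect() pattern. A standalone sketch (plain POSIX, not Nginx code) of what ngx_event_connect_peer() does under the hood:

C code

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

/* Start a TCP handshake without blocking. Returns 1 if connected at once,
 * 0 if the handshake is in progress (wait for a write event, then check
 * SO_ERROR with getsockopt()), -1 on immediate failure. */
static int
connect_nonblocking(int fd, const struct sockaddr *addr, socklen_t len)
{
    if (fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK) == -1) {
        return -1;
    }

    if (connect(fd, addr, len) == -1) {
        if (errno == EINPROGRESS) {
            return 0;       /* this is the NGX_AGAIN case above */
        }
        return -1;
    }

    return 1;               /* the NGX_OK case, e.g. over loopback */
}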

After returning, execution goes through ngx_http_finalize_request() --> ngx_http_finalize_connection() --> ngx_http_close_request(), but the client connection is not closed. An EPOLLOUT event on the client connection then fires and lands in ngx_http_request_handler(). This event was registered in ngx_http_upstream_init(), and its handler was set in ngx_http_process_request(). The function is simple:

C code

ngx_http_request_handler(ngx_event_t *ev)
{
    c = ev->data;
    r = c->data;

    // if the event is a write event ...
    r->write_event_handler(r);
    // else ...
    r->read_event_handler(r);

    // subrequests ...
    ngx_http_run_posted_requests(c);
}

ngx_http_upstream_init_request() set r->read_event_handler and r->write_event_handler, and both of them end up in ngx_http_upstream_check_broken_connection(), so that is what the function above calls. Its main job is to check whether the connection to the client has been closed.

C code

ngx_http_upstream_check_broken_connection(r, ev)
{
    c = r->connection;
    u = r->upstream;

    n = recv(c->fd, buf, 1, MSG_PEEK);
    // ...

    // under normal conditions err should be NGX_EAGAIN (11)
    err = ngx_socket_errno;
    // ...
}
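The recv(..., MSG_PEEK) trick is worth spelling out. A standalone illustration (plain POSIX, not Nginx code): peek one byte without consuming it; 0 means the client performed an orderly shutdown, a hard error (e.g. ECONNRESET) means the connection is broken, and EAGAIN simply means the connection is idle but alive:

C code

#include <errno.h>
#include <sys/socket.h>

/* Returns 1 if the (non-blocking) client connection is broken, 0 otherwise. */
static int
connection_is_broken(int fd)
{
    char     buf[1];
    ssize_t  n;

    n = recv(fd, buf, 1, MSG_PEEK);     /* peek, do not consume */

    if (n == 0) {
        return 1;                       /* orderly shutdown by the peer */
    }

    if (n == -1 && errno != EAGAIN && errno != EWOULDBLOCK) {
        return 1;                       /* hard error, e.g. ECONNRESET */
    }

    return 0;                           /* data pending, or idle (EAGAIN) */
}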

Once that is handled, another event fires in the same epoll event loop iteration: the write event of the connection established to the upstream earlier (ready to send to the upstream server). Its c->write->handler, ngx_http_upstream_handler(), which was set in ngx_http_upstream_connect(), is invoked:

C code

ngx_http_upstream_handler(ev)
{
    c = ev->data;
    r = c->data;

    u = r->upstream;
    c = r->connection;

    // if the event is a write event ...
    u->write_event_handler(r, u);
    // else ...
    u->read_event_handler(r, u);

    // subrequests ...
    ngx_http_run_posted_requests(c);
}

From ngx_http_upstream_connect() we know that u->write_event_handler is ngx_http_upstream_send_request_handler, whose main job is to call ngx_http_upstream_send_request():

C code

ngx_http_upstream_send_request(r, u)
{
    c = u->peer.connection;

    // send the request to the backend
    rc = ngx_output_chain(&u->output, u->request_sent ? NULL : u->request_bufs);

    u->write_event_handler = ngx_http_upstream_dummy_handler;
}
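Once the whole request has been written to the upstream, further write readiness on that connection is of no interest, which is why the write handler is swapped for a no-op. The dummy handler is essentially just this (paraphrased from ngx_http_upstream.c):

C code

static void
ngx_http_upstream_dummy_handler(ngx_http_request_t *r, ngx_http_upstream_t *u)
{
    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "http upstream dummy handler");
}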

When the backend (upstream) server has processed the request, it sends back a response, which triggers a read event. ngx_http_upstream_handler() runs again, and this time it calls u->read_event_handler, i.e. ngx_http_upstream_process_header():

C code

ngx_http_upstream_process_header(r, u)
{
    // ...
    for ( ;; ) {
        n = c->recv(c, u->buffer.last, u->buffer.end - u->buffer.last);

        // u->process_header == ngx_http_proxy_process_status_line, which in turn
        // calls ngx_http_proxy_process_header() to parse the response headers
        rc = u->process_header(r);
    }

    ngx_http_upstream_process_headers(r, u);

    // if subrequest_in_memory == 0
    ngx_http_upstream_send_response(r, u);
    // else ...
    // ...
}

C code

ngx_http_upstream_send_response(r, u)
{
    // calls ngx_http_top_header_filter, i.e. ngx_http_header_filter();
    // ngx_http_header_filter() in turn calls ngx_http_write_filter(),
    // which walks the whole chain and writes out all of the data
    rc = ngx_http_send_header(r);
    // ...

    u->read_event_handler = ngx_http_upstream_process_upstream;
    r->write_event_handler = ngx_http_upstream_process_downstream;

    ngx_http_upstream_process_upstream(r, u);
}

C code

ngx_http_upstream_process_upstream(r, u)
{
    c = u->peer.connection;

    // if the read has not timed out
    ngx_event_pipe(u->pipe, 0);
}
