
Notes on the Startup of the ATS Multi-Threaded Framework (Source Code Analysis)

2014-05-21 15:34
Description

ATS uses a multi-threaded, asynchronous event-processing model. traffic_cop and traffic_manager are the management processes; the worker process is traffic_server, which listens, accepts and handles sessions. For performance, traffic_server uses asynchronous I/O and multi-threading. Traffic Server does not create one thread per connection; instead it pre-creates a configurable number of worker threads, each running its own independent asynchronous event loop. A Thread drives state transitions by invoking the callback of the Continuation attached to each Event; the migration from the initial state to the terminal state represents the complete processing of the event, while the Thread itself never exits and simply waits for the next event.
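To make the Continuation idea concrete, here is a minimal sketch of the pattern (not ATS code; the types and handlers below are invented for illustration): a thread pops events and calls whatever handler the event's continuation currently holds, and the handler advances the state by swapping itself out.

#include <functional>
#include <iostream>
#include <queue>

// Hypothetical, simplified model of the Continuation pattern described above.
struct Continuation;

struct Event {
    Continuation *cont;   // whose handler to call
    int callback_event;   // event code passed to the handler
};

struct Continuation {
    // The current state *is* the current handler; swapping the handler is the state transition.
    std::function<void(Continuation &, int)> handler;
    void handleEvent(int event_code) {
        auto h = handler;      // copy so the handler may safely replace itself
        h(*this, event_code);
    }
};

void state_done(Continuation &, int e) {
    std::cout << "terminal state reached (event " << e << ")\n";
}

void state_start(Continuation &c, int e) {
    std::cout << "initial state handling event " << e << "\n";
    c.handler = state_done;    // migrate to the next state
}

int main() {
    Continuation c{state_start};
    std::queue<Event> q;
    q.push({&c, 1});
    q.push({&c, 2});

    // The thread itself never "finishes"; it just keeps popping events and
    // letting each event's continuation run its current handler.
    while (!q.empty()) {
        Event ev = q.front();
        q.pop();
        ev.cont->handleEvent(ev.callback_event);
    }
    return 0;
}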

Main loop

A single-process, single-threaded server daemon does some initialization at startup and then enters an endless loop, repeatedly polling its timer queue and its I/O. The core of squid, for example, looks roughly like this:
initialize();                     /* initialization */
while (true) {
    do_timer_queue();             /* handle timers */
    do_event_io_queue(wait_sec);  /* handle I/O requests, e.g. via epoll_wait */
}
Or, even more simply, we can poll just one queue; whether do_queue_item(item) runs the item immediately, at some point in the future, after a given delay, or repeatedly at a fixed interval is a property of the item itself.

initialize();     /* initialization */
while (true) {
    do_queue();   /* process the queue */
}
For the multi-threaded case, each thread simply has its own endless loop and its own queue; the only difference is that queue operations must use try_lock. Why is a lock needed at all? A queue item may be produced by some other thread, which picks a target thread's queue and pushes the item onto it; this is how work gets distributed across threads for load balancing. The core code is roughly:
initialize();                 /* initialization */
Thread-x (1..N):
while (true) {
    /* do_queue() */
    while ((e = queue.dequeue())) {
        if (try_lock(lock)) {
            do_queue_item(e); /* process the queue item */
            release_lock(lock);
        } else {
            queue.enqueue(e); /* missed the lock: retry the item later */
        }
    }
}
ATS follows this multi-threaded model and supports SMP. In ATS, class EThread represents a thread, and execute() is the default entry function of an EThread (a custom entry function can also be supplied). The main code:

void
EThread::execute() {
/* tt is the lifecycle type of this EThread:
   REGULAR   - has an event queue and is resident;
   DEDICATED - runs one specific Event (pointed to by oneevent) and has no queue. */
switch (tt) {
case REGULAR: {
Event *e;
Que(Event, link) NegativeQueue;
ink_hrtime next_time = 0;
for (;;) {
cur_time = ink_get_based_hrtime_internal(); /* get the current time */
/* Classify the events on EventQueueExternal; events that are due immediately are run right away. */
while ((e = EventQueueExternal.dequeue_local())) { /* pop one Event from the external queue */
if (!e->timeout_at) { /* execute immediately */
ink_assert(e->period == 0);
process_event(e, e->callback_event); /* run this Event */
} else if (e->timeout_at > 0) /* scheduled for some point in the future */
EventQueue.enqueue(e, cur_time); /* Move it into the internal queue EventQueue, which is ordered by due time -- effectively a timer queue. Later, events whose time has come are filtered out by the current time and executed. */
else { // NEGATIVE: e->timeout_at < 0; insert it in order into NegativeQueue, the poll/epoll event queue.
Event *p = NULL;
Event *a = NegativeQueue.head;
while (a && a->timeout_at > e->timeout_at) {
p = a;
a = a->link.next;
}
if (!a)
NegativeQueue.enqueue(e);
else
NegativeQueue.insert(e, p);
}
}
/* Filter out the events whose due time has arrived and execute them. */
bool done_one;
do {
done_one = false;
// execute all the eligible internal events
EventQueue.check_ready(cur_time, this); /**/
while ((e = EventQueue.dequeue_ready(cur_time))) {
ink_assert(e);
ink_assert(e->timeout_at > 0);
if (e->cancelled)
free_event(e);
else {
done_one = true;
process_event(e, e->callback_event);
}
}
} while (done_one);
/* Execute all the events in the poll/epoll (negative) queue. The code below is fairly involved and I have not fully worked out some of it. */
// execute any negative (poll) events
if (NegativeQueue.head) {
if (n_ethreads_to_be_signalled)
flush_signals(this);
// dequeue all the external events and put them in a local
// queue. If there are no external events available, don't
// do a cond_timedwait.
if (!INK_ATOMICLIST_EMPTY(EventQueueExternal.al))
EventQueueExternal.dequeue_timed(cur_time, next_time, false);
while ((e = EventQueueExternal.dequeue_local())) {
if (!e->timeout_at)
process_event(e, e->callback_event);
else {
if (e->cancelled)
free_event(e);
else {
// If its a negative event, it must be a result of
// a negative event, which has been turned into a
// timed-event (because of a missed lock), executed
// before the poll. So, it must
// be executed in this round (because you can't have
// more than one poll between two executions of a
// negative event)
if (e->timeout_at < 0) {
Event *p = NULL;
Event *a = NegativeQueue.head;
while (a && a->timeout_at > e->timeout_at) {
p = a;
a = a->link.next;
}
if (!a)
NegativeQueue.enqueue(e);
else
NegativeQueue.insert(e, p);
} else
EventQueue.enqueue(e, cur_time);
}
}
}
// execute poll events
while ((e = NegativeQueue.dequeue()))
process_event(e, EVENT_POLL);
if (!INK_ATOMICLIST_EMPTY(EventQueueExternal.al))
EventQueueExternal.dequeue_timed(cur_time, next_time, false);
} else { // Means there are no negative events
next_time = EventQueue.earliest_timeout();
ink_hrtime sleep_time = next_time - cur_time;
if (sleep_time > THREAD_MAX_HEARTBEAT_MSECONDS * HRTIME_MSECOND) {
next_time = cur_time + THREAD_MAX_HEARTBEAT_MSECONDS * HRTIME_MSECOND;
sleep_time = THREAD_MAX_HEARTBEAT_MSECONDS * HRTIME_MSECOND;
}
// dequeue all the external events and put them in a local
// queue. If there are no external events available, do a
// cond_timedwait.
if (n_ethreads_to_be_signalled)
flush_signals(this);
EventQueueExternal.dequeue_timed(cur_time, next_time, true);
}
}
}
case DEDICATED: {
// coverity[lock]
if (eventsem)
ink_sem_wait(eventsem);
MUTEX_TAKE_LOCK_FOR(oneevent->mutex, this, oneevent->continuation);
oneevent->continuation->handleEvent(EVENT_IMMEDIATE, oneevent);
MUTEX_UNTAKE_LOCK(oneevent->mutex, this);
free_event(oneevent);
break;
}
default:
ink_assert(!"bad case value (execute)");
break;
} /* End switch */
}
In the code, EventQueueExternal is the external queue mentioned in the event model, EventQueue is the internal queue, and NegativeQueue is the poll/epoll queue. Event handling goes like this: an event e is popped off the external queue and checked (via e->timeout_at) to see whether it must run immediately; if so, process_event is called right away (a sketch of process_event follows below). If e does not need to run immediately and is not a network-style (epoll) event, it is moved into the internal queue EventQueue; if e is an epoll-style network event, it goes into NegativeQueue and is handled later by the logic dedicated to such events. Once the external queue has been drained, the internal queue is processed (including the events that were just moved into it while handling the external queue). EventQueue is implemented as a priority queue, and judging from the code, as long as at least one internal event is processed in a pass, the internal queue is checked again; only when a full pass finds nothing to run is the internal-queue check finished.

After the internal queue has been checked, the events in NegativeQueue are handled. Before processing NegativeQueue, the code appears to take another look at the external queue; the flow is basically the same as above, except that it can use a blocking call such as EventQueueExternal.dequeue_timed(cur_time, next_time, false). I have not fully figured this part out yet (from the code comments, the final false means that when the external queue is empty the thread does not block and wait; the available events are simply pulled off and put into the local queue). After that come the poll events (NegativeQueue).
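process_event itself, mentioned above, roughly does the following; a simplified, hedged sketch (not verbatim UnixEThread.cc; bookkeeping such as the in-queue flags is omitted): it tries to grab the continuation's lock without blocking, pushes the event back a little if the lock is missed, and otherwise runs the handler and re-arms periodic events.

// Hedged, simplified sketch of EThread::process_event. DELAY_FOR_RETRY is the
// small back-off applied when the continuation's lock cannot be taken.
void EThread::process_event(Event *e, int calling_code)
{
    MUTEX_TRY_LOCK_FOR(lock, e->mutex, this, e->continuation);
    if (!lock) {
        // Missed the lock: don't block the whole thread, just turn the event
        // into a short timed event and retry it on a later pass.
        e->timeout_at = cur_time + DELAY_FOR_RETRY;
        EventQueueExternal.enqueue_local(e);
        return;
    }
    if (e->cancelled) {
        free_event(e);
        return;
    }
    e->continuation->handleEvent(calling_code, e); // run the state handler
    if (e->period) {
        // *_every (and negative) events re-arm themselves after each run
        e->timeout_at = (e->period < 0) ? e->period : cur_time + e->period;
        EventQueueExternal.enqueue_local(e);
    } else {
        free_event(e); // one-shot event: its life ends here
    }
}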

Event

The basic unit of Thread scheduling is the Action, which follows the Continuation programming model: an abstraction of a program's control flow and state, i.e. of an object's state and lifecycle transitions; a rather old concept. traffic_server creates several groups of Threads, each with its own type and scheduling queue. When an Event is created, a thread of the matching thread type is chosen for it by round-robin and the Event is pushed onto that thread's queue; from then on, that thread is responsible for scheduling its own event queue.
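A sketch of the round-robin selection described above (the member names mirror those that appear later in this note, e.g. eventthread and n_threads_for_type, but the code is illustrative rather than verbatim ATS source):

#include <deque>

// Hedged sketch of round-robin event-to-thread assignment.
struct Event;
struct EThread { std::deque<Event *> external_queue; };

enum EventType { ET_CALL = 0, MAX_EVENT_TYPES = 8 };
const int MAX_THREADS = 512;

struct Event { EThread *ethread = nullptr; };

EThread *eventthread[MAX_EVENT_TYPES][MAX_THREADS];
int n_threads_for_type[MAX_EVENT_TYPES];
int next_thread_for_type[MAX_EVENT_TYPES];

EThread *assign_thread(EventType etype) {
    // simple round-robin over the threads registered for this event type
    int next = next_thread_for_type[etype]++ % n_threads_for_type[etype];
    return eventthread[etype][next];
}

Event *schedule(Event *e, EventType etype) {
    e->ethread = assign_thread(etype);        // pick a target thread
    e->ethread->external_queue.push_back(e);  // hand the event to it
    return e;
}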

So when does an Event actually get scheduled, and what is its lifetime? That is determined by the Event's own attributes. Roughly, there are the following scheduling flavours (a usage sketch follows the list):
1 *_imm: execute immediately;
2 *_at: execute at a specific point in time in the future;
3 *_in: execute after a given delay;
4 *_every: execute periodically, at a fixed interval;
5 *_imm_signal: execute immediately and also signal (wake up) the target thread.
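These flavours correspond to the schedule_* family on eventProcessor (or directly on an EThread). A usage sketch, assuming a hypothetical Continuation subclass MyCont and the ATS 3.x signatures as I read them in I_EventProcessor.h (treat the exact calls as indicative):

// Hedged usage sketch; MyCont is a hypothetical Continuation subclass.
MyCont *c = new MyCont(new_ProxyMutex());

eventProcessor.schedule_imm(c, ET_CALL);                         // *_imm: as soon as a thread gets to it
eventProcessor.schedule_at(c, ink_get_based_hrtime_internal()
                              + HRTIME_SECONDS(60), ET_CALL);    // *_at: at an absolute point in time
eventProcessor.schedule_in(c, HRTIME_SECONDS(5), ET_CALL);       // *_in: after a relative delay
eventProcessor.schedule_every(c, HRTIME_SECONDS(1), ET_CALL);    // *_every: periodically, every interval
eventProcessor.schedule_imm_signal(c, ET_CALL);                  // *_imm_signal: immediately, and wake the target thread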

Thread Type

ATS pre-creates a certain number of Threads of each Thread type. To see which thread types there are, look at the process list, gdb's info threads output, and netstat:

root@ubuntu:/usr/local/ats325/bin# ps -ef | grep traff
root 14170 1 25 15:22 ? 00:04:35 /usr/local/ats325/bin/traffic_cop
test 14231 14170 0 15:34 ? 00:00:00 /usr/local/ats325/bin/traffic_manager
test 14240 14231 6 15:34 ? 00:00:20 /usr/local/ats325/bin/traffic_server -M --httpport 8080:fd=9
(gdb) info threads
Id Target Id Frame
21 Thread 0x409a9b40 (LWP 14863) "traffic_server" 0x40022424 in __kernel_vsyscall ()
20 Thread 0x41105b40 (LWP 14864) "[ET_NET 1]" 0x40022424 in __kernel_vsyscall ()
19 Thread 0x41206b40 (LWP 14865) "[ET_NET 2]" 0x40022424 in __kernel_vsyscall ()
18 Thread 0x41307b40 (LWP 14866) "[ET_NET 3]" 0x40022424 in __kernel_vsyscall ()
17 Thread 0x41509b40 (LWP 14867) "[STAT_SYNC]" 0x40022424 in __kernel_vsyscall ()
16 Thread 0x4170bb40 (LWP 14868) "[CONF_SYNC]" 0x40022424 in __kernel_vsyscall ()
15 Thread 0x4190db40 (LWP 14869) "[REM_SYNC]" 0x40022424 in __kernel_vsyscall ()
14 Thread 0x4392cb40 (LWP 14870) "[ET_AIO 0]" 0x40022424 in __kernel_vsyscall ()
13 Thread 0x43b2eb40 (LWP 14871) "[ET_AIO 1]" 0x40022424 in __kernel_vsyscall ()
12 Thread 0x43d30b40 (LWP 14872) "[ET_AIO 2]" 0x40022424 in __kernel_vsyscall ()
11 Thread 0x43f32b40 (LWP 14873) "[ET_UDP 0]" 0x40022424 in __kernel_vsyscall ()
10 Thread 0x441d5b40 (LWP 14874) "[LOGGING]" 0x40022424 in __kernel_vsyscall ()
9 Thread 0x45146b40 (LWP 14875) "[ACCEPT]" 0x40022424 in __kernel_vsyscall ()
8 Thread 0x45348b40 (LWP 14876) "[ACCEPT]" 0x40022424 in __kernel_vsyscall ()
7 Thread 0x44b05b40 (LWP 14877) "[ET_TASK 0]" 0x40022424 in __kernel_vsyscall ()
6 Thread 0x44c06b40 (LWP 14878) "[ET_TASK 1]" 0x40022424 in __kernel_vsyscall ()
5 Thread 0x44d07b40 (LWP 14879) "[ET_TASK 2]" 0x40022424 in __kernel_vsyscall ()
4 Thread 0x44e08b40 (LWP 14880) "[ET_TASK 3]" 0x40022424 in __kernel_vsyscall ()
3 Thread 0x4564bb40 (LWP 14881) "[ET_TASK 4]" 0x40022424 in __kernel_vsyscall ()
2 Thread 0x4584db40 (LWP 14882) "[ACCEPT]" 0x40022424 in __kernel_vsyscall ()
* 1 Thread 0x4075d800 (LWP 14862) "[ET_NET 0]" 0x40022424 in __kernel_vsyscall ()
root@ubuntu:/usr/local/ats325/bin# netstat -apn | grep traffic
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 14231/traffic_manag
tcp 0 0 0.0.0.0:8083 0.0.0.0:* LISTEN 14231/traffic_manag
tcp 0 0 127.0.0.1:8084 0.0.0.0:* LISTEN 14240/traffic_serve
udp 0 0 127.0.0.1:54561 127.0.0.1:53 ESTABLISHED 14240/traffic_serve

From the information above, the main Thread Types are the following; their counts are all configurable.
[ACCEPT] -> accept thread -> CONFIG proxy.config.accept_threads INT 2
[ET_AIO 0] -> disk io worker thread -> CONFIG proxy.config.cache.threads_per_disk INT 3
[ET_NET 0] -> epoll_wait thread -> CONFIG proxy.config.exec_thread.limit INT 4
[ET_TASK 0] -> http state worker thread -> CONFIG proxy.config.task_threads INT 5

Other helper Thread Types -> there is exactly one of each; they need no configuration.
[STAT_SYNC] ->raw-stat syncer
[CONF_SYNC] ->config syncer
[REM_SYNC] ->remote syncer
[LOGGING] ->log flush

Listening ports: 8080 (traffic_manager/traffic_server), 8083 (traffic_manager) and 8084 (traffic_server), plus one udp connection. There are three [ACCEPT] threads; gdb shows two [ACCEPT] threads listening on port 8080 and one [ACCEPT] thread listening on port 8084, which matches the accept line in records.config: CONFIG proxy.config.accept_threads INT 2. This design is very similar to nginx: several [ACCEPT] threads serve the 8080 service port.

[ET_NET 0] -> epoll_wait thread -> CONFIG proxy.config.exec_thread.limit INT 4

The number of net threads is first determined by adjust_num_of_net_threads(), and then eventProcessor.start(num_of_net_threads) is called to spawn the threads. Annotated code:

for (i = 0; i < n_event_threads; i++) {
EThread *t = NEW(new EThread(REGULAR, i)); /*new REGULAR EThread object*/
if (first_thread && !i) {
ink_thread_setspecific(Thread::thread_data_key, t); /* Thread-specific data: a third kind of storage besides globals and locals. Thread::thread_data_key is a key visible to every thread in the process; with it, code running inside a thread can retrieve that thread's own EThread object t. */
global_mutex = t->mutex;
t->cur_time = ink_get_based_hrtime_internal();
}
all_ethreads[i] = t;
eventthread[ET_CALL][i] = t; /* put every thread into the ET_CALL group; elsewhere ET_NET is defined as ET_CALL */
t->set_event_type((EventType) ET_CALL);
}
n_threads_for_type[ET_CALL] = n_event_threads; /* record the number of ET_CALL threads */
for (i = first_thread; i < n_ethreads; i++) {
snprintf(thr_name, MAX_THREAD_NAME_LENGTH, "[ET_NET %d]", i); /* set the thread name */
all_ethreads[i]->start(thr_name); /* create the real OS thread via pthread_create; it enters the main loop, execute() */
}

The code above relies on the concept of thread-specific data, stored and retrieved via Thread::thread_data_key. The EThread class also defines a private data array that serves as a memory allocation pool for per-thread objects:
#define PER_THREAD_DATA (1024*1024)
char thread_private[PER_THREAD_DATA];

Two class objects are stored in this thread-private array: NetHandler and PollCont (both are described later). Their offsets into the array are obtained from eventProcessor.allocate() and kept in the member variables netHandler_offset and pollCont_offset.
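A sketch of how this per-thread placement works, assuming the mechanism is as described above (simplified: the real allocate() also aligns the size and uses an atomic counter). allocate() hands out offsets into every EThread's thread_private area, and accessors in the spirit of get_NetHandler()/get_PollCont() just add the stored offset to the EThread pointer.

#include <cstddef> // offsetof

// Hedged sketch of EventProcessor::allocate(): reserve `size` bytes inside
// every EThread's thread_private array and return the offset, measured from
// the start of the EThread object, so the same offset is valid in all threads.
int EventProcessor::allocate(int size)
{
    static int used = (int)offsetof(EThread, thread_private);
    int offset = used;
    ink_release_assert(offset + size <=
                       (int)offsetof(EThread, thread_private) + PER_THREAD_DATA);
    used += size;
    return offset;
}

// Accessors in the spirit of ATS's get_NetHandler()/get_PollCont(): the
// per-thread NetHandler and PollCont live inside the EThread itself.
#define get_NetHandler(_e) ((NetHandler *)((char *)(_e) + netHandler_offset))
#define get_PollCont(_e)   ((PollCont *)((char *)(_e) + pollCont_offset))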
So far we have only created the main loops of the ET_NET threads; no events have been pushed onto the thread queues yet. ATS adds events to the thread queues by calling netProcessor.start(); the main code:

int UnixNetProcessor::start(int)
{
EventType etype = ET_NET; /* really just ET_CALL: #define ET_NET ET_CALL */
/* reserve the offsets of the per-thread class objects inside the thread-private array */
netHandler_offset = eventProcessor.allocate(sizeof(NetHandler));
pollCont_offset = eventProcessor.allocate(sizeof(PollCont));
upgradeEtype(etype);
n_netthreads = eventProcessor.n_threads_for_type[etype];
netthreads = eventProcessor.eventthread[etype];
for (int i = 0; i < n_netthreads; ++i) {
initialize_thread_for_net(netthreads[i], i); /* for each thread, add the epoll_wait-driven event to that thread's event queue */
}
statPagesManager.register_http("net", register_ShowNet);
return 1;
}
There is nothing difficult in the code above; see the comments. initialize_thread_for_net(netthreads[i], i) adds, for each thread, the epoll_wait-driven event to that thread's event queue, so that the epoll-driven Thread finally starts running inside its main loop. Main code:

void initialize_thread_for_net(EThread *thread, int thread_index)
{
NOWARN_UNUSED(thread_index);
/* Construct the NetHandler object in memory taken from the thread-private array; this is placement new (an unconventional use of new). */
new((ink_dummy_for_new *) get_NetHandler(thread)) NetHandler();
new((ink_dummy_for_new *) get_PollCont(thread)) PollCont(thread->mutex, get_NetHandler(thread));
get_NetHandler(thread)->mutex = new_ProxyMutex();
PollCont *pc = get_PollCont(thread);
PollDescriptor *pd = pc->pollDescriptor;
/* NetHandler is a Continuation subclass, so it can be put straight onto the thread's event queue; it becomes the periodic handler that runs epoll_wait and feeds the resulting I/O events back into the thread's event queue. */
thread->schedule_imm(get_NetHandler(thread));
#ifndef INACTIVITY_TIMEOUT
/* add a periodic event that checks the I/O event list for inactivity timeouts */
InactivityCop *inactivityCop = NEW(new InactivityCop(get_NetHandler(thread)->mutex));
thread->schedule_every(inactivityCop, HRTIME_SECONDS(1));
#endif
thread->signal_hook = net_signal_hook_function;
thread->ep = (EventIO*)ats_malloc(sizeof(EventIO));
thread->ep->type = EVENTIO_ASYNC_SIGNAL;
#if TS_HAS_EVENTFD
/* this simply calls epoll_ctl to register the fd for events (here EVENTIO_READ) */
thread->ep->start(pd, thread->evfd, 0, EVENTIO_READ);
#else
thread->ep->start(pd, thread->evpipe[0], 0, EVENTIO_READ);
#endif
}

NOTE: NetHandler - the VC (NetVConnection) container and the entry handler for epoll results;
PollCont - the epoll interface plus the event array;
EventIO - the epoll_ctl interface.

The code above covers the creation and initialisation of the NetHandler, PollCont and EventIO objects; together they set up the epoll framework.
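EventIO::start, in the epoll build, is essentially a thin wrapper around epoll_ctl; a hedged sketch of what it does (simplified: error handling and the kqueue/event-port variants are left out, and the overload shown matches the thread->ep->start(pd, fd, 0, EVENTIO_READ) call above):

#include <cstring>
#include <sys/epoll.h>

// Hedged sketch: register fd with this thread's epoll instance.
// EVENTIO_READ / EVENTIO_WRITE map onto EPOLLIN / EPOLLOUT.
int EventIO::start(PollDescriptor *pd, int afd, Continuation * /*c*/, int e)
{
    fd = afd;
    event_loop = pd;

    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    if (e & EVENTIO_READ)
        ev.events |= EPOLLIN;
    if (e & EVENTIO_WRITE)
        ev.events |= EPOLLOUT;
    ev.data.ptr = this; // epoll_wait hands this EventIO back to the NetHandler

    return epoll_ctl(pd->epoll_fd, EPOLL_CTL_ADD, fd, &ev);
}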

accept on port 8080

Now that the I/O-event-driven multi-threaded framework is in place, the next step is to add the accept-on-port-8080 event to it. Before starting the listening event, two classes matter: NetAccept and HttpAccept. NetAccept is the accept entry handler and yields a NetVConnection object; HttpAccept is the HTTP-side entry invoked for each connection NetAccept returns, and it creates and initialises HttpClientSession and HttpSM, at which point transaction processing proper begins. init_HttpProxyServer() initialises the HTTP proxy server, for example registering HttpAccept and the plugin capability for inbound accepts and outbound connections; start_HttpProxyServer(num) creates the listening socket, binds and listens, and creates the [ACCEPT] thread(s); finally NetAccept runs the accept main loop. Its backtrace looks like this:

(gdb) bt
#0 0x40022424 in __kernel_vsyscall ()
#1 0x40086b38 in accept () at ../sysdeps/unix/sysv/linux/i386/socket.S:95
#2 0x082dbacf in SocketManager::accept (this=0x88f33d0 <socketManager>, s=7, addr=0x442225ec, addrlen=0x4534817c)
at ../../iocore/eventsystem/P_UnixSocketManager.h:67
#3 0x082db11b in Server::accept (this=0x91c8494, c=0x442225e4) at Connection.cc:89
#4 0x082e51f0 in NetAccept::do_blocking_accept (this=0x91c8470, t=0x45147008) at UnixNetAccept.cc:295
#5 0x082e603c in NetAccept::acceptLoopEvent (this=0x91c8470, event=1, e=0x90afc50) at UnixNetAccept.cc:520
#6 0x0811206f in Continuation::handleEvent (this=0x91c8470, event=1, data=0x90afc50) at ../iocore/eventsystem/I_Continuation.h:146
#7 0x0830cc2c in EThread::execute (this=0x45147008) at UnixEThread.cc:289
#8 0x0830b6e2 in spawn_thread_internal (a=0x91c85e0) at Thread.cc:88
#9 0x4007fd78 in start_thread (arg=0x45348b40) at pthread_create.c:311
#10 0x4069a01e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:131
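The frames NetAccept::acceptLoopEvent -> do_blocking_accept -> Server::accept show that a DEDICATED [ACCEPT] thread just sits in a blocking accept loop and hands each new connection off to an ET_NET thread as a NetVConnection. Conceptually it does something like the following (a hedged simplification of UnixNetAccept.cc, not verbatim source; allocate_vc() here is a hypothetical stand-in name for the real allocation path):

// Hedged, simplified sketch of the blocking accept loop run by a DEDICATED
// [ACCEPT] thread; the real do_blocking_accept also fills in mutexes,
// timestamps, socket options and so on before dispatching.
int NetAccept::do_blocking_accept(EThread *t)
{
    Connection con;
    for (;;) {
        // Blocks in accept(2) -- this is the Server::accept frame in the backtrace.
        if (server.accept(&con) < 0)
            return -1;

        // Wrap the accepted socket in a NetVConnection and remember the
        // acceptor continuation (e.g. HttpAccept). allocate_vc() is a
        // hypothetical helper standing in for the real allocation code.
        UnixNetVConnection *vc = allocate_vc(t);
        vc->con = con;
        vc->action_ = *action_;

        // Hand the connection to a regular ET_NET thread; its handler will
        // eventually call back into HttpAccept, which builds the
        // HttpClientSession / HttpSM and starts the transaction.
        eventProcessor.schedule_imm_signal(vc, ET_NET);
    }
}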
At this point the ATS multi-threaded framework and the 8080 listening port are up and running, and the server can accept connections and serve HTTP requests.