
Play application's execution model

2015-08-03 19:37
This is excerpted from the book Play Framework Essentials.


The streaming programming model provided by Play has been influenced by the execution model of Play applications, which itself has been influenced by the nature of the work a web application performs. So, let's start from the beginning: what does a web application do?

Threaded execution model (overhead)

For now, our example application does the following: the HTTP layer invokes some business logic via the service layer, and the service layer does some computations by itself and also calls the database layer. It is worth noting that in our configuration, the database system, as implemented in Chapter 2, Persisting Data and Testing, runs on the same machine as the web application; this is, however, not a requirement. In fact, chances are that in real-world projects, your database system is decoupled from your HTTP layer and that both run on different machines. This means that while a query is executed on the database, the web layer does nothing but wait for the response. Actually, the HTTP layer is often waiting for some response coming from another system; it could, for example, retrieve some data from an external web service (Chapter 6, Leveraging the Play Stack – Security, Internationalization, Cache, and the HTTP Client, shows you how to do that), or the business layer itself could be located on a remote machine. Decoupling the HTTP layer from the business layer or the persistence layer gives finer control over how to scale the system (more details about that are given further in this chapter). Anyway, the point is that the HTTP layer may essentially spend time waiting.
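To make this waiting concrete, here is a minimal sketch of such a blocking action, assuming a Play/Scala controller and a hypothetical synchronous `Shop.list()` database call (the names are illustrative, not taken from the book's code):

```scala
import play.api.mvc.{Action, Controller}

// Hypothetical blocking data-access layer: the call does not return
// until the database round trip has completed.
object Shop {
  def list(): Seq[String] = ??? // e.g. a synchronous JDBC query
}

object Items extends Controller {
  // Threaded model: the thread serving this request is parked inside
  // Shop.list() for the whole duration of the database query, doing
  // no useful work in the meantime.
  def list = Action {
    val items = Shop.list()
    Ok(items.mkString(", "))
  }
}
```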

With that in mind, consider the following diagram showing how concurrent requests could be executed by a web application using a threaded execution model, that is, a model where each request is processed in its own thread.

[Figure: several concurrent requests executed under the threaded execution model, one thread per request]



Several clients (shown on the left-hand side in the preceding diagram) perform queries that are processed by the application's controller. On the right-hand side of the controller, the figure shows an execution thread corresponding to each action's execution. The filled rectangles represent the time spent performing computations within a thread (for example, for processing data or computing a result), and the lines represent the time spent waiting for some remote data. Each action's execution is distinguished by a particular color. In this fictitious example, the action handling the first request may execute a query on a remote database, hence the line (illustrating that the thread waits for the database result) between the two pink rectangles (illustrating that the action performs some computation before querying the database and after getting the database result). The action handling the third request may perform a call to a remote web service and then a second one, after the response of the first one has been received; hence, the two lines between the green rectangles. And the action handling the last request may perform a call to a remote web service that streams a response of infinite size, hence the multiple lines between the purple rectangles.

The problem with this execution model is that each request requires the creation of a new thread. Threads have an overhead at creation, because they consume memory (essentially because each thread has its own stack), and during execution, when the scheduler switches contexts.

Evented execution model (code must be thread-safe)

However, we can see that these threads spend a lot of time just waiting. If we could use the same thread to process another request while the current action is waiting for something, we could avoid the creation of threads, and thus save resources. This is exactly what the execution model used by Play, the evented execution model, does, as depicted in the following diagram:

[Figure: the same requests executed under the evented execution model; the computation fragments of all the actions are multiplexed over just two threads]



Here, the computation fragments are executed on two threads only. Note that the same action can have its computation fragments run by different threads (for example, the pink action). Also note that several threads are still in use; that is why the code must be thread-safe. The time spent waiting between computing things is the same as before, and you can see that the time required to completely process a request is about the same as with the threaded model (for instance, the second pink rectangle ends at the same position as in the earlier figure, and the same holds for the third green rectangle, and so on).
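As an illustration of this style, here is a minimal sketch of a non-blocking action. It assumes Play 2.x with its bundled WS client; the controller name and URL are made up for the example:

```scala
import play.api.Play.current
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.libs.ws.WS
import play.api.mvc.{Action, Controller}

object Proxy extends Controller {
  // Evented model: Action.async returns a Future[Result] immediately,
  // so no thread is parked while the remote service is working. The
  // continuation passed to `map` runs once the response has arrived,
  // possibly on a different thread of the pool.
  def fetch = Action.async {
    WS.url("http://example.com/data").get().map { response =>
      Ok(response.body)
    }
  }
}
```

Because consecutive computation fragments of the same action may run on different threads, any state they share (for example, a mutable field in the controller) must be protected or, better, avoided altogether.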

Note

An attentive reader may think that I have cheated; the rectangles in the second figure are often thinner than their equivalents in the first figure. That's because, in the first model, there is an overhead for scheduling threads and, above all, even if you have a lot of threads, your machine still has a limited number of cores effectively executing the code of your threads. More precisely, if you have more threads than your number of cores, you necessarily have threads in an idle state (that is, waiting). This means that, if we suppose the machine executing the application has only two cores, in the first figure there is even time spent waiting within the rectangles!
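This is one reason why Play sizes its default thread pool relative to the number of available cores. As a rough illustration (the exact configuration keys and defaults vary across Play versions; this sketch assumes Play 2.4+, where the default dispatcher is configured through the standard Akka keys in conf/application.conf):

```
akka {
  actor {
    default-dispatcher {
      fork-join-executor {
        # One thread per available core: with mostly non-blocking code,
        # running more threads than cores only adds scheduling overhead.
        parallelism-factor = 1.0
        parallelism-max = 24
      }
    }
  }
}
```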