New Adventures in Comet: Throttling the Server push
2008-03-18 11:11
From: http://weblogs.java.net/blog/jfarcand/archive/2007/03/new_adventures_1.html
Web applications that use Comet request processing are more and more often paired with asynchronous clients (AJAX, SOA, GWT, etc.). At AjaxWorld this week, I was surprised to see how many developers attended the two scheduled Comet talks, and how many companies already have Comet-based applications in production. But as with every new technology, no formal design recommendations have been widely described and discussed (unless I'm missing something :-)). Hence two Comet applications doing essentially the same thing might differ significantly in their design. Well, let me share what I've learned over the last year, having seen multiple real Comet-based applications. Oh no! Another series of blogs from me!
So far, the biggest problem I've seen is related to the frequency at which data is pushed to clients. When a server pushes data back to clients, it can easily flood the subscribed clients by sending too much data in a short period of time. To demonstrate the problem, let's use the most famous Comet application after the chat: the real-time stock quote application. As an example, take a look at the Lightstreamer demo, in which stock quotes are updated in real time. I don't know the technical details of that application, but let's assume it uses Comet. Depending on the source you use, stock values can be updated very frequently, often every second. So if the server has to push data every second, it is easy to understand how badly your application will scale if 10,000 Comet clients are connected, all waiting for a server push. In that situation, you need a way to throttle the stock updates so the subscribed Comet clients aren't flooded by server pushes. There are two solutions:
- On the client side, read and update the page only every 5 seconds, say, and discard all other pushes from the server; or
- throttle the pushes on the server side.
Most of the time, I would recommend the latter approach, as pushing data over a connection is not free; and if 10,000 users are connected, updating them with data they will never use might eat all the CPU on both the client and the server. I know this is trivial, but scalability problems might become visible only when your application goes into production, which is unfortunately late in the development cycle. Now the big question is how to do this, because it is not as simple as it looks. I cannot speak for other Comet implementations, but in Grizzly Comet, a data source (assuming the stock data is stored/updated in a database) will invoke the CometContext.notify() method, passing the updated data. If we want to throttle the data to avoid flooding the Comet clients, the logic would have to live on the database side. Well, I don't like that approach for two reasons:
- First, if your application uses several sources (a database, a web service, etc.) to push data, it means all of those sources will have to implement the 'throttling logic'.
- Second, if one of your Comet clients starts flooding your server with data, the 'throttling logic' will have to be implemented on either the client or the server side.
Naaa... I don't like that, because in my opinion the 'throttling logic' should be part of the Comet implementation API itself. Fortunately for GlassFish users using Grizzly Comet, there is an API you can use to customize the 'throttling logic':
```java
public interface NotificationHandler {

    /**
     * Return true if the invoker of notify() should block when
     * notifying Comet Handlers.
     */
    public boolean isBlockingNotification();

    /**
     * Set to true if the invoker of notify() should block when
     * notifying Comet Handlers.
     */
    public void setBlockingNotification(boolean blockingNotification);

    /**
     * Notify all CometHandlers.
     * @param cometEvent the CometEvent used to notify CometHandlers
     * @param iteratorHandlers an Iterator over the list of CometHandlers
     */
    public void notify(CometEvent cometEvent, Iterator<CometHandler> iteratorHandlers)
            throws IOException;

    /**
     * Notify a single CometHandler.
     * @param cometEvent the CometEvent used to notify the CometHandler
     * @param cometHandler the CometHandler to notify
     */
    public void notify(CometEvent cometEvent, CometHandler cometHandler)
            throws IOException;
}
```
An implementation of this interface will most of the time contain the logic that reduces server pushes, decides which resources to use when updating clients, etc. Your application just has to call CometContext.setNotificationHandler(myHandler)... that's it! Thus, when writing (or porting) a Comet application, you can always customize (or reuse an available) NotificationHandler and make sure that, under load, neither clients nor the server flood the subscribed clients when CometContext.notify() is invoked. Better, you might develop with an unthrottled NotificationHandler and later switch to one that only pushes data every 10 seconds, once you are production ready.
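The core of such a throttling handler can be sketched independently of the Grizzly API. The idea is to coalesce incoming updates and flush them at most once per interval, so clients receive the latest value for each symbol rather than every tick. This is a minimal, hypothetical sketch (ThrottledNotifier and its methods are illustrative names, not part of the Grizzly Comet API); a real NotificationHandler would push to the subscribed CometHandlers instead of printing.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of server-side throttling: coalesce updates,
// push at most once per interval.
class ThrottledNotifier {
    private final long intervalMillis;
    private long lastPush;
    // Latest value per symbol; intermediate updates are overwritten, not queued.
    private final Map<String, Double> pending = new ConcurrentHashMap<>();

    ThrottledNotifier(long intervalMillis) {
        this.intervalMillis = intervalMillis;
        this.lastPush = -intervalMillis; // allow the very first update to push
    }

    /**
     * Record an update; push only if the interval has elapsed.
     * Returns true if a push happened, false if the update was coalesced.
     */
    synchronized boolean notify(String symbol, double price, long nowMillis) {
        pending.put(symbol, price);
        if (nowMillis - lastPush < intervalMillis) {
            return false; // too soon: keep only the latest value, skip the push
        }
        lastPush = nowMillis;
        push(pending);
        pending.clear();
        return true;
    }

    void push(Map<String, Double> updates) {
        // In a real handler this would write to each subscribed Comet client.
        System.out.println("push: " + updates);
    }
}
```

With a 5-second interval, a stock ticking every second results in one push per window carrying only the most recent price, which is exactly the behavior you want under load.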
To recap, when writing (or porting) a Comet application, you must make sure a server push doesn't bring down the server by pushing data too frequently, flooding the clients and the server itself. Being able to customize the way data is pushed back to clients is something I consider extremely important. Under load, a customized NotificationHandler can make a huge difference and make the browser experience much better.
Next time I will discuss another common problem: what happens when a client starts sending tons of data to the server. Should the data be throttled on the client or the server side?