Android Application Development: Networking Tools - Volley (Part 2)
2015-05-31 19:25
Introduction
The Source: RequestQueue
CacheDispatcher: Cache Handling
NetworkDispatcher: Network Processing
ExecutorDelivery: The Response Dispatcher and Request
Summary
Introduction
In Android Application Development: Networking Tools - Volley (Part 1), the basic usage of Volley was introduced together with the Cloudant service, covering two request types: StringRequest and JsonObjectRequest. Those should handle ordinary request tasks, but in the endlessly varied world of network programming we still want full control over the request type, the process, and every step in between. This article analyzes, from the Volley source code, how a network request operates inside Volley; you can also read it as the life cycle of a network request in Volley.
The Source: RequestQueue
Before Volley can be used, there must be a request queue to carry the requests, so first let's analyze how this queue is created and how it operates. In Volley.java:
/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @param stack An {@link HttpStack} to use for the network, or null for default.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }
    Network network = new BasicNetwork(stack);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}
/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
Normally the second entry point is the one used: the single-argument newRequestQueue(Context context), which leaves stack as null. As you can see, the RequestQueue we get is constructed, has its start() method called, and is then returned to us. Next, RequestQueue's constructors:
/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 * @param delivery A ResponseDelivery interface for posting responses and errors
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 */
public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}
RequestQueue has three constructors, and newRequestQueue(Context context) goes through the last one. It creates a worker pool whose default network thread count is four. The latter two constructors both end up calling the first, which merely assigns member fields and needs no further comment. Next, the start() method:
public void start() {
    stop(); // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
First it calls stop(), which makes every running dispatcher quit, ensuring that no dispatcher is still at work. The main job is then to start one CacheDispatcher plus as many NetworkDispatchers as the thread pool allows. Let's analyze CacheDispatcher first.
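The mechanics of start() (one cache dispatcher plus a pool of network dispatchers, all draining shared blocking queues) can be sketched with plain java.util.concurrent. This is a toy model, not Volley's actual classes: requests are plain strings and every cache lookup simply misses to the network queue.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A stripped-down model of RequestQueue.start(), NOT Volley's real classes.
public class MiniRequestQueue {
    public final BlockingQueue<String> cacheQueue = new LinkedBlockingQueue<>();
    public final BlockingQueue<String> networkQueue = new LinkedBlockingQueue<>();
    public final BlockingQueue<String> delivered = new LinkedBlockingQueue<>();
    private final Thread[] dispatchers;

    public MiniRequestQueue(int poolSize) {
        dispatchers = new Thread[poolSize]; // like mDispatchers = new NetworkDispatcher[threadPoolSize]
    }

    public void start() {
        // Cache dispatcher: every request "misses" here and is forwarded to the network queue.
        Thread cacheDispatcher = new Thread(() -> {
            try {
                while (true) networkQueue.put(cacheQueue.take());
            } catch (InterruptedException quit) { /* time to stop */ }
        });
        cacheDispatcher.setDaemon(true);
        cacheDispatcher.start();
        // Network dispatchers: "perform" the request and deliver a response.
        for (int i = 0; i < dispatchers.length; i++) {
            dispatchers[i] = new Thread(() -> {
                try {
                    while (true) delivered.put("response:" + networkQueue.take());
                } catch (InterruptedException quit) { /* time to stop */ }
            });
            dispatchers[i].setDaemon(true);
            dispatchers[i].start();
        }
    }

    // Convenience: wait up to timeoutMs for a delivered response, null on timeout.
    public String awaitDelivery(long timeoutMs) {
        try {
            return delivered.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            return null;
        }
    }
}
```

The essential point the model shows: the dispatchers are plain threads blocked on take(), so adding a request to a queue is all it takes to wake the pipeline.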
CacheDispatcher: Cache Handling
CacheDispatcher is the cache queue processor. It is told to start() the moment it is created, and since CacheDispatcher extends Thread, we need to look at its overridden run() method:
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    // Make a blocking call to initialize the cache.
    mCache.initialize(); // initialize the cache
    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take(); // take a request from the cache queue; this blocks
            request.addMarker("cache-queue-take");
            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) { // if the request was already canceled, just skip it
                request.finish("cache-discard-canceled");
                continue;
            }
            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey()); // look in the cache for data for this request
            if (entry == null) {
                request.addMarker("cache-miss"); // cache miss: this request was never fulfilled, so hand it to the network queue
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }
            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) { // the cached entry has expired, so fresh data is needed; again hand it to the network queue
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }
            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders)); // unexpired cached data exists, so parse it via the request's parseNetworkResponse, which a custom Request subclass may override
            request.addMarker("cache-hit-parsed");
            if (!entry.refreshNeeded()) { // valid and needing no refresh: hand it to the Delivery, which eventually triggers onResponse or onErrorResponse of a subclass such as StringRequest
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else { // valid but in need of refreshing, so it must also go to the network queue
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);
                // Mark the response as intermediate.
                response.intermediate = true;
                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
CacheDispatcher does quite a lot; we will digest it bit by bit later. First let's see where our request goes after add(). In RequestQueue.java's add() method:
/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request); // add it to the current set, which is a HashSet
    }
    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");
    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey(); // Volley uses the request URL as the storage key
        if (mWaitingRequests.containsKey(cacheKey)) { // a request with the same URL is already pending, so stage this one behind it
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests); // requests sharing a URL are kept in a staging list, which goes back into the waiting map
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null); // a brand-new request is recorded in the waiting map; note the value is null, and a real queue only appears once duplicate URLs pile up
            mCacheQueue.add(request); // put it into the cache queue, which will process the request
        }
        return request;
    }
}
The mCacheQueue here is the very blocking queue handed to CacheDispatcher, so once add() puts a request into mCacheQueue, the already-running CacheDispatcher will process it. At this point we can take stock of the stages so far:
1. After our request is added to the RequestQueue, it is first placed in the queue's mCurrentRequests set for local bookkeeping.
2. If a request with the same URL as this one already exists, the staging relationship is recorded in mWaitingRequests; if not, a null is recorded instead, also in mWaitingRequests.
3. A request with no staged predecessor (a new URL) goes directly into mCacheQueue for CacheDispatcher to handle.
At this point the handling of same-URL requests turns out to be special. The first time some request A is made, A goes straight into the cache queue for CacheDispatcher. When a later request B for the same URL arrives while A is still present in mWaitingRequests, B is shelved: it is not put into mCacheQueue for processing, it just waits. Waits until when? It is not hard to guess: until A completes. Ultimately the question is how mWaitingRequests operates, and when the staged requests stored in it are taken out and executed. Keep that question in mind for now and return to CacheDispatcher. Its handling of requests boils down to the following cases:
1. A canceled request is marked finished and skipped;
2. A request with no response data yet, with expired data, or explicitly flagged as needing a refresh is dropped into mNetworkQueue, which, like mCacheQueue, is a blocking queue;
3. A request with unexpired response data triggers Request's parseNetworkResponse method to parse the data; this method can be overridden (customized) in a Request subclass;
4. Every valid response (whether or not it needs refreshing) is delivered via mDelivery, and requests needing a refresh are additionally put back into mNetworkQueue.
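The four cases above condense into a small routing function. The following is a simplified model, not Volley code; Entry is a stand-in carrying only the two flags the triage logic reads.

```java
// Models the triage branches of CacheDispatcher.run(), simplified, not Volley itself.
public class CacheTriage {
    public enum Route { NETWORK_MISS, NETWORK_EXPIRED, DELIVER, DELIVER_AND_REFRESH }

    // Stand-in for Cache.Entry: only the two flags the triage logic reads.
    public static class Entry {
        final boolean expired;
        final boolean refreshNeeded;
        public Entry(boolean expired, boolean refreshNeeded) {
            this.expired = expired;
            this.refreshNeeded = refreshNeeded;
        }
    }

    public static Route triage(Entry entry) {
        if (entry == null) return Route.NETWORK_MISS;    // "cache-miss": never fulfilled
        if (entry.expired) return Route.NETWORK_EXPIRED; // "cache-hit-expired": refetch
        if (!entry.refreshNeeded) return Route.DELIVER;  // fresh hit: deliver directly
        return Route.DELIVER_AND_REFRESH;                // soft-expired: deliver, then refetch
    }
}
```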
We will hold off on (1); it comes up again later. Next, how does mNetworkQueue operate? mNetworkQueue is a parameter passed into CacheDispatcher's constructor, and from RequestQueue's start() method it is easy to see that its corresponding processor is NetworkDispatcher.
NetworkDispatcher: Network Processing
In RequestQueue's start() method there are multiple NetworkDispatchers, as many as the network thread count passed to RequestQueue's constructor, four by default:
public void start() {
    stop(); // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
Each dispatcher is start()ed right after it is created, and NetworkDispatcher also extends Thread, so we will analyze its overridden run() method next; but first its constructor:
public NetworkDispatcher(BlockingQueue<Request<?>> queue,
        Network network, Cache cache,
        ResponseDelivery delivery) {
    mQueue = queue;
    mNetwork = network;
    mCache = cache;
    mDelivery = delivery;
}
mQueue is mNetworkQueue, the same one used in CacheDispatcher. mNetwork defaults to BasicNetwork, mCache is the cache, and mDelivery is the final response dispatcher, analyzed later. Now the overridden run() method:
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // let the thread run in the background so it is not suspended when the system sleeps
    Request<?> request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take(); // mQueue is mNetworkQueue: requests thrown over by CacheDispatcher are picked up here by NetworkDispatcher; note the take is blocking
        } catch (InterruptedException e) { // quit path: NetworkDispatcher is interrupted when it is told to quit
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("network-queue-take");
            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) { // if already canceled, mark it finished (discarded) and move on to the next request
                request.finish("network-discard-cancelled");
                continue;
            }
            addTrafficStatsTag(request);
            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request); // the request is handled by BasicNetwork
            request.addMarker("network-http-complete");
            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }
            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse); // process the network response data
            request.addMarker("network-parse-complete");
            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }
            // Post the response back.
            request.markDelivered(); // mark the request as answered and dispatch the response
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError); // a VolleyError triggers the request's parseNetworkError and the delivery's postError
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e)); // an unknown error only triggers the delivery's postError
        }
    }
}
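The branches after performRequest returns can likewise be sketched as a pure function over the flags involved. This is a simplified model, not Volley code: it returns the markers/actions in order instead of touching real objects.

```java
import java.util.ArrayList;
import java.util.List;

// Models NetworkDispatcher's decisions after mNetwork.performRequest() returns.
public class NetworkStep {
    public static List<String> afterPerform(boolean notModified, boolean alreadyDelivered,
                                            boolean shouldCache, boolean hasCacheEntry) {
        List<String> actions = new ArrayList<>();
        if (notModified && alreadyDelivered) {
            actions.add("not-modified");          // 304 and already answered: finish, deliver nothing
            return actions;
        }
        if (shouldCache && hasCacheEntry) {
            actions.add("network-cache-written"); // write the parsed entry through to the disk cache
        }
        actions.add("post-response");             // hand the response to the delivery
        return actions;
    }
}
```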
mNetwork.performRequest is where the network request is actually carried out; BasicNetwork is not analyzed here. The response to a network request is of type NetworkResponse; here is what that type looks like:
/**
 * Data and headers returned from {@link Network#performRequest(Request)}.
 */
public class NetworkResponse {
    /**
     * Creates a new network response.
     * @param statusCode the HTTP status code
     * @param data Response body
     * @param headers Headers returned with this response, or null for none
     * @param notModified True if the server returned a 304 and the data was already in cache
     */
    public NetworkResponse(int statusCode, byte[] data, Map<String, String> headers,
            boolean notModified) {
        this.statusCode = statusCode;
        this.data = data;
        this.headers = headers;
        this.notModified = notModified;
    }
    public NetworkResponse(byte[] data) {
        this(HttpStatus.SC_OK, data, Collections.<String, String>emptyMap(), false);
    }
    public NetworkResponse(byte[] data, Map<String, String> headers) {
        this(HttpStatus.SC_OK, data, headers, false);
    }
    /** The HTTP status code. */
    public final int statusCode;
    /** Raw data from this response. */
    public final byte[] data;
    /** Response headers. */
    public final Map<String, String> headers;
    /** True if the server returned a 304 (Not Modified). */
    public final boolean notModified;
}
NetworkResponse holds the response data of a request: the body itself and the headers, plus the status code and other related information. Different request types handle response data differently (a String response versus a JSON one, for example), so naturally each request type must process the response it receives itself, which triggers the Request subclass's parseNetworkResponse method. Below, StringRequest serves as the example:
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}
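This decode-with-fallback step can be modeled in plain Java. The parseCharset below is a hypothetical stand-in for Volley's HttpHeaderParser.parseCharset, assumed here to read the charset parameter of the Content-Type header and fall back to ISO-8859-1, the HTTP/1.1 default.

```java
import java.io.UnsupportedEncodingException;
import java.util.Map;

// A sketch of StringRequest's parse step; parseCharset is a stand-in, not Volley's.
public class CharsetParse {
    public static String parseCharset(Map<String, String> headers) {
        String contentType = headers.get("Content-Type");
        if (contentType != null) {
            for (String param : contentType.split(";")) {
                String[] pair = param.trim().split("=", 2);
                if (pair.length == 2 && pair[0].equals("charset")) return pair[1];
            }
        }
        return "ISO-8859-1"; // assumed default when no charset is named
    }

    public static String parseBody(byte[] data, Map<String, String> headers) {
        try {
            return new String(data, parseCharset(headers));
        } catch (UnsupportedEncodingException e) {
            return new String(data); // fall back to the platform default, as StringRequest does
        }
    }
}
```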
For a response, StringRequest first tries to decode the data using the charset identified in the headers; if that fails, it decodes only the data with the platform default. Either way it ends by calling Response.success, additionally using Volley's own HttpHeaderParser to parse the header information. We need to see what Response.success actually does; since the Response class contains little code, here it is in full:
public class Response<T> {
    /** Callback interface for delivering parsed responses. */
    public interface Listener<T> {
        /** Called when a response is received. */
        public void onResponse(T response);
    }
    /** Callback interface for delivering error responses. */
    public interface ErrorListener {
        /**
         * Callback method invoked when an error occurs.
         */
        public void onErrorResponse(VolleyError error);
    }
    /** Returns a successful response containing the parsed result. */
    public static <T> Response<T> success(T result, Cache.Entry cacheEntry) {
        return new Response<T>(result, cacheEntry);
    }
    /**
     * Returns a failed response containing the given error code and possibly
     * other messages.
     */
    public static <T> Response<T> error(VolleyError error) {
        return new Response<T>(error);
    }
    /** Parsed response, or null in the case of error. */
    public final T result;
    /** Cache metadata for this response, or null in the case of error. */
    public final Cache.Entry cacheEntry;
    /** Detailed error information. */
    public final VolleyError error;
    /** True if this response is expected to be followed by a second one, i.e. a refresh is needed. */
    public boolean intermediate = false;
    /**
     * Returns true if this response was a success with no error, false otherwise.
     */
    public boolean isSuccess() {
        return error == null;
    }
    private Response(T result, Cache.Entry cacheEntry) {
        this.result = result;
        this.cacheEntry = cacheEntry;
        this.error = null;
    }
    private Response(VolleyError error) {
        this.result = null;
        this.cacheEntry = null;
        this.error = error;
    }
}
That is the network response class, and it is simple: success or error is recorded directly and exposed to the outside through the isSuccess() method. On success, the message is kept in result and the cache metadata parsed from the headers in cacheEntry.
ExecutorDelivery: The Response Dispatcher and Request
Request is the base class; Volley's representative subclasses of it are StringRequest and JsonObjectRequest, and a developer with larger customization needs has to subclass Request and override some of its key methods. Meanwhile, mDelivery gets so much screen time: why does it always show up where requests are being processed? Below we analyze it together with Request, again using StringRequest as the Request example.
public interface ResponseDelivery {
    /**
     * Parses a response from the network or cache and delivers it.
     */
    public void postResponse(Request<?> request, Response<?> response);
    /**
     * Parses a response from the network or cache and delivers it. The provided
     * Runnable will be executed after delivery.
     */
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);
    /**
     * Posts an error for the given request.
     */
    public void postError(Request<?> request, VolleyError error);
}
Of the three interface methods, two deliver network responses and the last one delivers a network error. Tracing back to RequestQueue's construction, the default dispatcher is ExecutorDelivery:
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
Evidently the response dispatcher works on the main thread. The dispatcher's work looks like this:
@Override
public void postResponse(Request<?> request, Response<?> response) { // deliver a response
    postResponse(request, response, null);
}
@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) { // deliver a response
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
@Override
public void postError(Request<?> request, VolleyError error) { // deliver an error response
    request.addMarker("post-error");
    Response<?> response = Response.error(error);
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
}
One issue surfaces here: the request.markDelivered() call in NetworkDispatcher is actually redundant, because postResponse already performs it. Whether the response is normal or an error, ResponseDeliveryRunnable gets executed:
private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;
    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable; // a runnable may be supplied, as analyzed above for the case where a valid cached response still needs refreshing
    }
    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) { // if the request was canceled, finish and mark it
            mRequest.finish("canceled-at-delivery");
            return;
        }
        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) { // on success, deliver the response
            mRequest.deliverResponse(mResponse.result);
        } else { // otherwise deliver the error
            mRequest.deliverError(mResponse.error);
        }
        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }
        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) { // the extra runnable, if one was supplied, is executed here
            mRunnable.run();
        }
    }
}
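The run() above reduces to a short decision sequence. Here is a self-contained model with markers as strings; it is a sketch, not Volley's real Request/Response classes.

```java
import java.util.ArrayList;
import java.util.List;

// Models ExecutorDelivery's ResponseDeliveryRunnable logic, simplified.
public class MiniDelivery {
    public static List<String> deliver(boolean canceled, boolean success,
                                       boolean intermediate, Runnable postDelivery) {
        List<String> markers = new ArrayList<>();
        if (canceled) {               // canceled requests are finished without delivering
            markers.add("canceled-at-delivery");
            return markers;
        }
        markers.add(success ? "deliverResponse" : "deliverError");
        markers.add(intermediate ? "intermediate-response" : "done");
        if (postDelivery != null) postDelivery.run(); // e.g. re-queue a soft-expired request
        return markers;
    }
}
```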
As the dispatcher and handler of network responses, the Delivery is the final gatekeeper of response data. When the Delivery checks whether a response succeeded, the Request has already processed the response information (determining success or error), so the correct status is available to query. If the response is found to be successful, the Request's deliverResponse method fires (StringRequest as the example):
@Override
protected void deliverResponse(String response) {
    mListener.onResponse(response);
}
This simply invokes the user-defined response listener; mListener is assigned in StringRequest's constructor:
public StringRequest(int method, String url, Listener<String> listener,
        ErrorListener errorListener) {
    super(method, url, errorListener);
    mListener = listener;
}
public StringRequest(String url, Listener<String> listener, ErrorListener errorListener) {
    this(Method.GET, url, listener, errorListener);
}
When the network response data is found to be unsuccessful, the Request's deliverError method fires. StringRequest does not override it, so we trace up to its parent class Request:
public void deliverError(VolleyError error) {
    if (mErrorListener != null) {
        mErrorListener.onErrorResponse(error);
    }
}
Here mErrorListener is likewise the error listener the user supplies when using Volley. StringRequest does not handle it itself; it is assigned via super in Request's constructor:
public Request(int method, String url, Response.ErrorListener listener) {
    mMethod = method;
    mUrl = url;
    mErrorListener = listener;
    setRetryPolicy(new DefaultRetryPolicy());
    mDefaultTrafficStatsTag = findDefaultTrafficStatsTag(url);
}
When the request has been fully and definitively completed, the Delivery tells the Request to wrap up with finish:
void finish(final String tag) {
    if (mRequestQueue != null) { // if the request queue is valid, mark this request as finished in it
        mRequestQueue.finish(this);
    } // everything after this is logging-related and not analyzed here
    if (MarkerLog.ENABLED) {
        final long threadId = Thread.currentThread().getId();
        if (Looper.myLooper() != Looper.getMainLooper()) {
            // If we finish marking off of the main thread, we need to
            // actually do it on the main thread to ensure correct ordering.
            Handler mainThread = new Handler(Looper.getMainLooper());
            mainThread.post(new Runnable() {
                @Override
                public void run() {
                    mEventLog.add(tag, threadId);
                    mEventLog.finish(this.toString());
                }
            });
            return;
        }
        mEventLog.add(tag, threadId);
        mEventLog.finish(this.toString());
    } else {
        long requestTime = SystemClock.elapsedRealtime() - mRequestBirthTime;
        if (requestTime >= SLOW_REQUEST_THRESHOLD_MS) {
            VolleyLog.d("%d ms: %s", requestTime, this.toString());
        }
    }
}
mRequestQueue is of type RequestQueue, which we analyzed at the very start. One related question was left unexplored back then: when do the staged same-URL requests kept in mWaitingRequests finally get triggered? Analyzing mRequestQueue's finish method below resolves that doubt:
void finish(Request<?> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request); // once a request completes, it is removed from mCurrentRequests
    }
    if (request.shouldCache()) { // true by default, unless you explicitly call Request's setShouldCache
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey(); // the cacheKey is, as noted earlier, the URL
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey); // remove this request from the map, pulling out any staged duplicates it may have
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests); // if duplicates really were staged, all of them are added to mCacheQueue for CacheDispatcher to process
            }
        }
    }
}
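Putting add()'s staging together with finish()'s release, the mWaitingRequests mechanism can be modeled with a plain HashMap. Requests are reduced to their cache keys in this sketch; it is a model, not Volley code.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Models mWaitingRequests staging (RequestQueue.add) and release (RequestQueue.finish).
public class RequestStaging {
    // cacheKey -> staged duplicates; a null value means "one request in flight, none staged"
    private final Map<String, Queue<String>> waiting = new HashMap<>();
    private final List<String> cacheQueue = new ArrayList<>(); // stands in for mCacheQueue

    // Returns true if the request went straight to the cache queue, false if staged.
    public boolean add(String cacheKey) {
        if (waiting.containsKey(cacheKey)) {
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) staged = new LinkedList<>();
            staged.add(cacheKey);
            waiting.put(cacheKey, staged); // hold the duplicate until the in-flight one finishes
            return false;
        }
        waiting.put(cacheKey, null); // mark this key as in flight
        cacheQueue.add(cacheKey);
        return true;
    }

    // Called when the in-flight request finishes; returns how many staged
    // duplicates were released into the cache queue.
    public int finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged == null) return 0;
        cacheQueue.addAll(staged);
        return staged.size();
    }
}
```

The model makes the answer to the earlier question concrete: staged duplicates sit idle until finish() on the in-flight request moves them all into the cache queue at once.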
Good: with that, the last pending question is also resolved, and that is the whole journey of a network Request through Volley.
Summary
1. A request is born at the RequestQueue obtained from newRequestQueue: start() launches one CacheDispatcher and, by default, four NetworkDispatchers, and every request enters through add();
2. CacheDispatcher, the cache scheduler, acts as the first buffering layer. Once working, it blocks on the cache queue mCacheQueue waiting for requests:
a. A request that has already been canceled is marked as skipped and finished directly;
b. A brand-new or expired request is dropped straight into mNetworkQueue for the N NetworkDispatchers to handle; mNetworkQueue, like mCacheQueue, is a blocking queue;
c. A request with cached information (a network response) that has not expired is handed to Request's parseNetworkResponse to determine whether the response is a success. The request and response are then passed to the Delivery dispatcher; if the cache needs refreshing, the request is additionally put into mNetworkQueue.
3. After the user adds a Request to the RequestQueue:
a. A request that should not be cached (this requires an explicit setting; caching is on by default) goes straight into mNetworkQueue for the N NetworkDispatchers;
b. A cacheable, brand-new request is added to mCacheQueue for CacheDispatcher;
c. A cacheable request whose URL already has one pending is shelved in mWaitingRequests for the time being; once the earlier request completes, it is re-added to mCacheQueue.
4. NetworkDispatcher, the network request scheduler, is where the request really hits the network; the message is handed to BasicNetwork for processing, and, as before, the request and result go to the Delivery dispatcher.
5. The Delivery dispatcher is effectively the last layer of request processing. Before the Delivery handles the request, Request has already parsed the network response, so success or failure is already settled. The Delivery then acts according to the outcome:
a. On success it triggers deliverResponse, which ultimately fires the Listener the developer set on the Request;
b. On failure it triggers deliverError, which ultimately fires the ErrorListener the developer set on the Request.
After that, a Request's life cycle is over: the Delivery calls the Request's finish, removing it from mRequestQueue, and at the same time, if the waiting map holds requests with the same URL, all the remaining staged requests are dropped into mCacheQueue for CacheDispatcher to handle.
The life cycle of a Request:
1. It joins mRequestQueue via add() and waits to be executed;
2. When executed, its own parseNetworkResponse processes the network response and decides whether the response succeeded;
3. On success, the Listener the developer set on it is eventually triggered; on failure, the ErrorListener.
At this point the ins and outs of a network request in Volley are clear. If for some reason we need to subclass Request to define our own request type, the method that deserves the most attention is the parseNetworkResponse override, which plays the decisive role in the request's fate after it runs.
引言
源头RequestQueue
CacheDispatcher缓存操作
NetworkDispatcher网络处理
ExecutorDelivery消息分发者与Request请求
总结
引言
在Android应用开发:网络工具——Volley(一)中结合Cloudant服务介绍了Volley的一般用法,其中包含了两种请求类型StringRequest和JsonObjectRequest。一般的请求任务相信都可以通过他们完成了,不过在千变万化的网络编程中,我们还是希望能够对请求类型、过程等步骤进行完全的把控,本文就从Volley源码角度来分析一下,一个网络请求在Volley中是如何运作的,也可以看作网络请求在Volley中的生命周期。源头RequestQueue
在使用Volley前,必须有一个网络请求队列来承载请求,所以先分析一下这个请求队列是如何申请,如果运作的。在Volley.java中:/**
* Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
*
* @param context A {@link Context} to use for creating the cache dir.
* @param stack An {@link HttpStack} to use for the network, or null for default.
* @return A started {@link RequestQueue} instance.
*/
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
String userAgent = "volley/0";
try {
String packageName = context.getPackageName();
PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
userAgent = packageName + "/" + info.versionCode;
} catch (NameNotFoundException e) {
}
if (stack == null) {
if (Build.VERSION.SDK_INT >= 9) {
stack = new HurlStack();
} else {
// Prior to Gingerbread, HttpUrlConnection was unreliable.
// See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
}
}
Network network = new BasicNetwork(stack);
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
queue.start();
return queue;
}
/**
* Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
*
* @param context A {@link Context} to use for creating the cache dir.
* @return A started {@link RequestQueue} instance.
*/
public static RequestQueue newRequestQueue(Context context) {
return newRequestQueue(context, null);
}
通常使用的是第二个接口,也就是只有一个参数的newRequestQueue(Context context),使stack默认为null。可以看到我们得到的RequestQueue是通过RequestQueue申请,然后又调用了其start方法,最后返回给我们的。接下来看一下RequestQueue的构造方法:
/**
* Creates the worker pool. Processing will not begin until {@link #start()} is called.
*
* @param cache A Cache to use for persisting responses to disk
* @param network A Network interface for performing HTTP requests
* @param threadPoolSize Number of network dispatcher threads to create
* @param delivery A ResponseDelivery interface for posting responses and errors
*/
public RequestQueue(Cache cache, Network network, int threadPoolSize,
ResponseDelivery delivery) {
mCache = cache;
mNetwork = network;
mDispatchers = new NetworkDispatcher[threadPoolSize];
mDelivery = delivery;
}
/**
* Creates the worker pool. Processing will not begin until {@link #start()} is called.
*
* @param cache A Cache to use for persisting responses to disk
* @param network A Network interface for performing HTTP requests
* @param threadPoolSize Number of network dispatcher threads to create
*/
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
this(cache, network, threadPoolSize,
new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
/**
* Creates the worker pool. Processing will not begin until {@link #start()} is called.
*
* @param cache A Cache to use for persisting responses to disk
* @param network A Network interface for performing HTTP requests
*/
public RequestQueue(Cache cache, Network network) {
this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}
RequestQueue有三种构造方法,通过newRequestQueue(Context context)调用的是最后一种。创建了一个工作池,默认承载网络线程数量为4个。而后两种构造方法都会调用到第一个,进行了一些局部变量的赋值,并没有什么需要多说的,接下来看start()方法:
public void start() {
stop(); // Make sure any currently running dispatchers are stopped.
// Create the cache dispatcher and start it.
mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
mCacheDispatcher.start();
// Create network dispatchers (and corresponding threads) up to the pool size.
for (int i = 0; i < mDispatchers.length; i++) {
NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
mCache, mDelivery);
mDispatchers[i] = networkDispatcher;
networkDispatcher.start();
}
}
首先进行了stop操作,将所有的执行者全部退出,从而确保当前没有任何正在工作的执行者。然后主要的工作就是开启一个CacheDispatcher和符合线程池数量的NetworkDispatcher。首先分析CacheDispatcher。
CacheDispatcher缓存操作
CacheDispatcher为缓存队列处理器,创建伊始就被责令开始工作start(),因为CacheDispatcher继承于Thread类,所以需要看一下它所复写的run方法:@Override
public void run() {
if (DEBUG) VolleyLog.v("start new dispatcher");
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
// Make a blocking call to initialize the cache.
mCache.initialize(); //初始化一个缓存
while (true) {
try {
// Get a request from the cache triage queue, blocking until
// at least one is available.
final Request<?> request = mCacheQueue.take(); //在缓存序列中获取请求,阻塞操作
request.addMarker("cache-queue-take");
// If the request has been canceled, don't bother dispatching it.
if (request.isCanceled()) { //若该请求已经被取消了,则直接跳过
request.finish("cache-discard-canceled");
continue;
}
// Attempt to retrieve this item from cache.
Cache.Entry entry = mCache.get(request.getCacheKey()); //尝试在缓存中查找是否有缓存数据
if (entry == null) {
request.addMarker("cache-miss"); //若没有则缓存丢失,证明这个请求并没有获得实施过,扔进网络请求队列中
// Cache miss; send off to the network dispatcher.
mNetworkQueue.put(request);
continue;
}
// If it is completely expired, just send it to the network.
if (entry.isExpired()) { //若请求已经过期,那么就要去获取最新的消息,所以依然丢进网络请求队列中
request.addMarker("cache-hit-expired");
request.setCacheEntry(entry);
mNetworkQueue.put(request);
continue;
}
// We have a cache hit; parse its data for delivery back to the request.
request.addMarker("cache-hit");
Response<?> response = request.parseNetworkResponse(
new NetworkResponse(entry.data, entry.responseHeaders)); //请求有缓存数据且没有过期,那么可以进行解析,交给请求的parseNetworkReponse方法进行解析,这个方法我们可以在自定义个Request中进行复写自定义
request.addMarker("cache-hit-parsed");
if (!entry.refreshNeeded()) { //如果请求有效且并不需要刷新,则丢进Delivery中处理,最终会触发如StringRequest这样的请求子类的onResponse或onErrorResponse
// Completely unexpired cache hit. Just deliver the response.
mDelivery.postResponse(request, response);
} else { //请求有效,但是需要进行刷新,那么需要丢进网络请求队列中
// Soft-expired cache hit. We can deliver the cached response,
// but we need to also send the request to the network for
// refreshing.
request.addMarker("cache-hit-refresh-needed");
request.setCacheEntry(entry);
// Mark the response as intermediate.
response.intermediate = true;
// Post the intermediate response back to the user and have
// the delivery then forward the request along to the network.
mDelivery.postResponse(request, response, new Runnable() {
@Override
public void run() {
try {
mNetworkQueue.put(request);
} catch (InterruptedException e) {
// Not much we can do about this.
}
}
});
}
} catch (InterruptedException e) {
// We may have been interrupted because it was time to quit.
if (mQuit) {
return;
}
continue;
}
}
}
CacheDispatcher做了很多事情,之后再来慢慢的消化他们。现在先看一下我们的请求通过add之后到了哪里去。查看RequestQueue.java的add方法:
/**
* Adds a Request to the dispatch queue.
* @param request The request to service
* @return The passed-in request
*/
public <T> Request<T> add(Request<T> request) {
// Tag the request as belonging to this queue and add it to the set of current requests.
request.setRequestQueue(this);
synchronized (mCurrentRequests) {
mCurrentRequests.add(request); //加入到当前的队列中,是一个HashSet
}
// Process requests in the order they are added.
request.setSequence(getSequenceNumber());
request.addMarker("add-to-queue");
// If the request is uncacheable, skip the cache queue and go straight to the network.若这个请求不需要被缓存,需要直接做网络请求,那么就直接加到网络请求队列中
if (!request.shouldCache()) {
mNetworkQueue.add(request);
return request;
}
// Insert request into stage if there's already a request with the same cache key in flight.
synchronized (mWaitingRequests) {
String cacheKey = request.getCacheKey(); // Volley中使用请求的URL作为存储的key
if (mWaitingRequests.containsKey(cacheKey)) { //若等待的请求中有与所请求的URL相同的请求,则需要做层级处理
// There is already a request in flight. Queue up.
Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
if (stagedRequests == null) {
stagedRequests = new LinkedList<Request<?>>();
}
stagedRequests.add(request);
mWaitingRequests.put(cacheKey, stagedRequests); //若与已有的请求URL相同,则创建一个层级列表保存他们,然后再放入等待请求列表中
if (VolleyLog.DEBUG) {
VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
}
} else {
// Insert 'null' queue for this cacheKey, indicating there is now a request in
// flight.
mWaitingRequests.put(cacheKey, null); //若是一个全新的请求,则直接放入等待队列中,注意数据为null,只有多个url产生层级关系了才有数据
mCacheQueue.add(request); //放入缓存队列中,缓存队列会对请求做处理
}
return request;
}
}
The mCacheQueue here is the same blocking queue that was handed to CacheDispatcher, so as soon as add() puts a request onto mCacheQueue, the already-running CacheDispatcher picks it up. Time for a first recap:
1. After a request is added to the RequestQueue, it is first placed in the queue's mCurrentRequests list for local bookkeeping;
2. If a request with the same URL already exists, the staging relationship is recorded in mWaitingRequests; a brand-new URL is also recorded there, but with a null value;
3. A request with no staging relationship (a new URL) goes straight onto mCacheQueue for CacheDispatcher to process.
Requests for the same URL clearly get special treatment. The first request A for some URL goes straight onto the cache queue and is handled by CacheDispatcher. If a second request B for the same URL arrives while A is still present in mWaitingRequests, B is shelved: it is not put onto mCacheQueue, it just waits. Until when? Presumably until A completes. The real question, then, is how mWaitingRequests operates and when the requests staged in it are taken out and executed. Keep that question in mind and return to CacheDispatcher, whose handling of requests boils down to these cases:
1. Cancelled requests are marked as finished and skipped;
2. Requests with no cached response, an expired entry, or an explicit refresh flag are dropped into mNetworkQueue, which, like mCacheQueue, is a blocking queue;
3. A request with a cached, unexpired response triggers Request's parseNetworkResponse method to parse the data; this method can be overridden (customized) by subclassing Request;
4. Every valid cached response (whether or not it needs refreshing) is delivered via mDelivery; a request that needs refreshing is additionally re-queued onto mNetworkQueue.
Case (1) is set aside for now; we will meet it again later. Next up is how mNetworkQueue operates: it is passed into CacheDispatcher's constructor, and from RequestQueue's start() method it is easy to see that its consumer is NetworkDispatcher.
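The staging behavior described above can be sketched in plain Java, independent of Android. This is a minimal model of the idea, not Volley's code; the field names mirror RequestQueue's, and requests are reduced to their cache keys. add() records a null for the first request with a given key and shelves duplicates; finish() releases the shelved duplicates back to the cache queue.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Minimal model of RequestQueue's in-flight bookkeeping.
class StagingDemo {
    // cacheKey -> staged duplicates; a null value means "one request in flight, no duplicates yet"
    final Map<String, Queue<String>> waiting = new HashMap<>();
    final Queue<String> cacheQueue = new ArrayDeque<>(); // stands in for mCacheQueue

    /** Mirrors add(): the first request per key proceeds, duplicates are staged. */
    void add(String cacheKey) {
        if (waiting.containsKey(cacheKey)) {
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new ArrayDeque<>();
            }
            staged.add(cacheKey);
            waiting.put(cacheKey, staged); // duplicate: shelve it, do NOT touch cacheQueue
        } else {
            waiting.put(cacheKey, null);   // brand-new key: mark it in flight with null
            cacheQueue.add(cacheKey);      // and hand it to the cache dispatcher
        }
    }

    /** Mirrors finish(): completing a key re-queues all of its staged duplicates. */
    void finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) {
            cacheQueue.addAll(staged);
        }
    }
}
```

Adding the same key three times puts only one entry on the cache queue; the other two sit in the waiting map until finish() releases them.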
NetworkDispatcher: network processing
In RequestQueue's start() method several NetworkDispatchers are created; their number equals the network-thread count passed to the RequestQueue constructor, 4 by default. public void start() {
stop(); // Make sure any currently running dispatchers are stopped.
// Create the cache dispatcher and start it.
mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
mCacheDispatcher.start();
// Create network dispatchers (and corresponding threads) up to the pool size.
for (int i = 0; i < mDispatchers.length; i++) {
NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
mCache, mDelivery);
mDispatchers[i] = networkDispatcher;
networkDispatcher.start();
}
}
Each dispatcher is started immediately after it is created, and NetworkDispatcher also extends Thread, so its overridden run() method is what needs analyzing next. First, though, its constructor:
public NetworkDispatcher(BlockingQueue<Request<?>> queue,
Network network, Cache cache,
ResponseDelivery delivery) {
mQueue = queue;
mNetwork = network;
mCache = cache;
mDelivery = delivery;
}
mQueue is mNetworkQueue, the very same queue used by CacheDispatcher. mNetwork defaults to a BasicNetwork, mCache is the cache, and mDelivery is the final response dispatcher, analyzed later. Now the overridden run() method:
@Override
public void run() {
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND); // run this thread at background priority so it does not compete with the UI thread
Request<?> request;
while (true) {
try {
// Take a request from the queue.
request = mQueue.take(); // mQueue is mNetworkQueue; the requests CacheDispatcher dropped in are picked up here by NetworkDispatcher. Note that take() blocks until a request is available.
} catch (InterruptedException e) { // quit path: the dispatcher is interrupted when it is told to shut down
// We may have been interrupted because it was time to quit.
if (mQuit) {
return;
}
continue;
}
try {
request.addMarker("network-queue-take");
// If the request was cancelled already, do not perform the
// network request.
if (request.isCanceled()) { // if the request has already been cancelled, mark it finished (discarded) and move on to the next one
request.finish("network-discard-cancelled");
continue;
}
addTrafficStatsTag(request);
// Perform the network request.
NetworkResponse networkResponse = mNetwork.performRequest(request); // perform the actual HTTP request via BasicNetwork
request.addMarker("network-http-complete");
// If the server returned 304 AND we delivered a response already,
// we're done -- don't deliver a second identical response.
if (networkResponse.notModified && request.hasHadResponseDelivered()) {
request.finish("not-modified");
continue;
}
// Parse the response here on the worker thread.
Response<?> response = request.parseNetworkResponse(networkResponse); // parse the raw network response
request.addMarker("network-parse-complete");
// Write to cache if applicable.
// TODO: Only update cache metadata instead of entire record for 304s.
if (request.shouldCache() && response.cacheEntry != null) {
mCache.put(request.getCacheKey(), response.cacheEntry);
request.addMarker("network-cache-written");
}
// Post the response back.
request.markDelivered(); // mark the request as having had a response delivered, then dispatch the response
mDelivery.postResponse(request, response);
} catch (VolleyError volleyError) {
parseAndDeliverNetworkError(request, volleyError); // a VolleyError triggers Request's parseNetworkError and mDelivery's postError
} catch (Exception e) {
VolleyLog.e(e, "Unhandled exception %s", e.toString());
mDelivery.postError(request, new VolleyError(e)); // unknown errors only trigger mDelivery's postError
}
}
}
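The take()/interrupt pattern in run() above is standard for a blocking worker thread and can be demonstrated without any Volley classes. A sketch under that assumption (the mQueue and mQuit names mirror NetworkDispatcher's fields; the Worker class and its job type are hypothetical):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal worker modeled on NetworkDispatcher's loop: block on take(),
// and treat an interrupt as a possible quit signal.
class Worker extends Thread {
    private final BlockingQueue<String> mQueue = new LinkedBlockingQueue<>();
    private final AtomicInteger count = new AtomicInteger();
    private volatile boolean mQuit = false;

    void add(String job) { mQueue.add(job); }
    int processed() { return count.get(); }

    void quit() {
        mQuit = true;   // set the flag first...
        interrupt();    // ...then interrupt so a blocked take() wakes up
    }

    @Override
    public void run() {
        while (true) {
            String job;
            try {
                job = mQueue.take(); // blocks until a job is available
            } catch (InterruptedException e) {
                if (mQuit) {
                    return;  // we were interrupted because it is time to quit
                }
                continue;    // spurious interrupt: keep serving
            }
            count.incrementAndGet(); // stand-in for performing the request
        }
    }
}
```

Setting mQuit before interrupting matters: an interrupt alone is ambiguous, and the flag is what lets the loop distinguish "shut down" from a stray wakeup.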
mNetwork.performRequest is where the network I/O actually happens; BasicNetwork itself is not analyzed here. The response comes back as a NetworkResponse; here is that type:
/**
* Data and headers returned from {@link Network#performRequest(Request)}.
*/
public class NetworkResponse {
/**
* Creates a new network response.
* @param statusCode the HTTP status code
* @param data Response body
* @param headers Headers returned with this response, or null for none
* @param notModified True if the server returned a 304 and the data was already in cache
*/
public NetworkResponse(int statusCode, byte[] data, Map<String, String> headers,
boolean notModified) {
this.statusCode = statusCode;
this.data = data;
this.headers = headers;
this.notModified = notModified;
}
public NetworkResponse(byte[] data) {
this(HttpStatus.SC_OK, data, Collections.<String, String>emptyMap(), false);
}
public NetworkResponse(byte[] data, Map<String, String> headers) {
this(HttpStatus.SC_OK, data, headers, false);
}
/** The HTTP status code. */
public final int statusCode;
/** Raw data from this response. */
public final byte[] data;
/** Response headers. */
public final Map<String, String> headers;
/** True if the server returned a 304 (Not Modified). */
public final boolean notModified;
}
NetworkResponse holds the response data for a request: the body, the headers, the status code, and related flags. Different request types process response data differently, a String response versus a JSON one, for instance, so each request type naturally handles its own response by overriding the Request subclass's parseNetworkResponse method. StringRequest serves as the example below:
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
String parsed;
try {
parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
} catch (UnsupportedEncodingException e) {
parsed = new String(response.data);
}
return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}
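HttpHeaderParser.parseCharset, used above, simply reads the charset parameter out of the Content-Type header. A rough plain-Java equivalent of the idea (a sketch, not Volley's exact code; CharsetSketch and its method names are made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Sketch of what HttpHeaderParser.parseCharset does: pull the charset
// parameter out of the Content-Type header, defaulting to ISO-8859-1
// (the HTTP/1.1 default) when it is absent.
class CharsetSketch {
    static String parseCharset(Map<String, String> headers) {
        String contentType = headers.get("Content-Type");
        if (contentType != null) {
            for (String param : contentType.split(";")) {
                String[] pair = param.trim().split("=", 2);
                if (pair.length == 2 && pair[0].equalsIgnoreCase("charset")) {
                    return pair[1];
                }
            }
        }
        return "ISO-8859-1";
    }

    /** Decodes a response body roughly the way StringRequest.parseNetworkResponse does. */
    static String decode(byte[] data, Map<String, String> headers) {
        try {
            return new String(data, parseCharset(headers));
        } catch (java.io.UnsupportedEncodingException e) {
            // StringRequest falls back to the platform default here; ISO-8859-1 keeps this deterministic
            return new String(data, StandardCharsets.ISO_8859_1);
        }
    }
}
```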
StringRequest first tries to decode the body using the charset named in the response headers; if that charset is unsupported it falls back to the platform default. Either way it finishes by calling Response.success, using Volley's own HttpHeaderParser to parse the cache-related headers. What does Response.success actually do? Response is short enough to quote in full:
public class Response<T> {
/** Callback interface for delivering parsed responses. */
public interface Listener<T> {
/** Called when a response is received. */
public void onResponse(T response);
}
/** Callback interface for delivering error responses. */
public interface ErrorListener {
/**
* Callback method invoked when an error occurs.
*/
public void onErrorResponse(VolleyError error);
}
/** Returns a successful response containing the parsed result. */
public static <T> Response<T> success(T result, Cache.Entry cacheEntry) {
return new Response<T>(result, cacheEntry);
}
/**
* Returns a failed response containing the given error.
*/
public static <T> Response<T> error(VolleyError error) {
return new Response<T>(error);
}
/** Parsed response, or null in the case of error. */
public final T result;
/** Cache metadata for this response, or null in the case of error. */
public final Cache.Entry cacheEntry;
/** Detailed error information. */
public final VolleyError error;
/** True if a second (refreshed) response is still expected for this request, i.e. it needs refreshing. */
public boolean intermediate = false;
/**
* Returns true if no error occurred, i.e. the response succeeded.
*/
public boolean isSuccess() {
return error == null;
}
private Response(T result, Cache.Entry cacheEntry) {
this.result = result;
this.cacheEntry = cacheEntry;
this.error = null;
}
private Response(VolleyError error) {
this.result = null;
this.cacheEntry = null;
this.error = error;
}
}
That is the network response class, and it is simple: success or error is recorded directly and exposed to the outside through isSuccess. On success the parsed message lives in result, and the cache entry derived from the headers lives in cacheEntry.
Request is the base class; Volley's representative subclasses are StringRequest and JsonObjectRequest, and developers with larger customization needs subclass Request and override some of its key methods. And why does mDelivery keep showing up wherever requests are handled? It is analyzed below together with Request, with StringRequest still serving as the example.
ExecutorDelivery, the response dispatcher, and Request
mDelivery is of type ResponseDelivery, which is in fact an interface: public interface ResponseDelivery {
/**
* Parses a response from the network or cache and delivers it.
*/
public void postResponse(Request<?> request, Response<?> response);
/**
* Parses a response from the network or cache and delivers it. The provided
* Runnable will be executed after delivery.
*/
public void postResponse(Request<?> request, Response<?> response, Runnable runnable);
/**
* Posts an error for the given request.
*/
public void postError(Request<?> request, VolleyError error);
}
Of the three methods, two deliver network responses and the last delivers a network error. Tracing back to RequestQueue's construction, the default dispatcher is an ExecutorDelivery:
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
this(cache, network, threadPoolSize,
new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
Evidently the response dispatcher works on the main thread. Its work consists of:
@Override
public void postResponse(Request<?> request, Response<?> response) { // deliver a response
postResponse(request, response, null);
}
@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) { // deliver a response
request.markDelivered();
request.addMarker("post-response");
mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
@Override
public void postError(Request<?> request, VolleyError error) { // deliver an error response
request.addMarker("post-error");
Response<?> response = Response.error(error);
mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, null));
}
One small finding here: the request.markDelivered() call in NetworkDispatcher is actually redundant, because postResponse performs it again. Whether the response is normal or an error, a ResponseDeliveryRunnable is executed:
private class ResponseDeliveryRunnable implements Runnable {
private final Request mRequest;
private final Response mResponse;
private final Runnable mRunnable;
public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
mRequest = request;
mResponse = response;
mRunnable = runnable; // a runnable is supplied when, as analyzed above, a cached response is valid but still needs refreshing
}
@SuppressWarnings("unchecked")
@Override
public void run() {
// If this request has canceled, finish it and don't deliver.
if (mRequest.isCanceled()) { // if the request was cancelled, finish it with a marker and do not deliver
mRequest.finish("canceled-at-delivery");
return;
}
// Deliver a normal response or error, depending.
if (mResponse.isSuccess()) { // on success, deliver the parsed response
mRequest.deliverResponse(mResponse.result);
} else { // otherwise deliver the error
mRequest.deliverError(mResponse.error);
}
// If this is an intermediate response, add a marker, otherwise we're done
// and the request can be finished.
if (mResponse.intermediate) {
mRequest.addMarker("intermediate-response");
} else {
mRequest.finish("done");
}
// If we have been provided a post-delivery runnable, run it.
if (mRunnable != null) { // if a post-delivery runnable was supplied, run it here
mRunnable.run();
}
}
}
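The branching in ResponseDeliveryRunnable (cancelled means finish without delivering, success fires the listener, error fires the error listener, intermediate keeps the request alive) can be modeled with plain callbacks. This is a simplified model, not Volley's code; DeliverySketch and its log field are invented for illustration, and the two Consumer parameters stand in for Response.Listener and Response.ErrorListener:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified model of ExecutorDelivery's ResponseDeliveryRunnable branching.
class DeliverySketch {
    final List<String> log = new ArrayList<>(); // records what the delivery did

    void deliver(boolean canceled, boolean success, boolean intermediate,
                 Consumer<String> listener, Consumer<String> errorListener) {
        if (canceled) {
            log.add("canceled-at-delivery"); // finish without delivering anything
            return;
        }
        if (success) {
            listener.accept("result");       // Request.deliverResponse -> Listener.onResponse
        } else {
            errorListener.accept("error");   // Request.deliverError -> ErrorListener.onErrorResponse
        }
        if (intermediate) {
            log.add("intermediate-response"); // a second, refreshed response is still expected
        } else {
            log.add("done");                  // the request is finished
        }
    }
}
```

Note that exactly one of the two listeners ever fires for a delivered response, and "intermediate" only decides whether the request stays alive afterwards.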
As the dispatcher and handler of network responses, Delivery is the final gatekeeper for response data. By the time Delivery checks whether a response succeeded, Request has already processed it (marking it as success or error), so the correct state can be queried. On success, Request's deliverResponse method is triggered (StringRequest again):
@Override
protected void deliverResponse(String response) {
mListener.onResponse(response);
}
That is, it fires the response listener the developer defined; mListener is assigned in StringRequest's constructor:
public StringRequest(int method, String url, Listener<String> listener,
ErrorListener errorListener) {
super(method, url, errorListener);
mListener = listener;
}
public StringRequest(String url, Listener<String> listener, ErrorListener errorListener) {
this(Method.GET, url, listener, errorListener);
}
When the response data is found unsuccessful, Request's deliverError method is triggered. StringRequest does not override this method, so it resolves to the parent class Request:
public void deliverError(VolleyError error) {
if (mErrorListener != null) {
mErrorListener.onErrorResponse(error);
}
}
Here mErrorListener is likewise the error listener the developer defines when using Volley. StringRequest does not handle it itself; it is assigned through super in Request's constructor:
public Request(int method, String url, Response.ErrorListener listener) {
mMethod = method;
mUrl = url;
mErrorListener = listener;
setRetryPolicy(new DefaultRetryPolicy());
mDefaultTrafficStatsTag = findDefaultTrafficStatsTag(url);
}
Once the request is conclusively settled, Delivery tells the Request to wrap up via finish:
void finish(final String tag) {
if (mRequestQueue != null) { // if the request queue is valid, mark this request as finished in it
mRequestQueue.finish(this);
} // everything below is logging only, not analyzed here
if (MarkerLog.ENABLED) {
final long threadId = Thread.currentThread().getId();
if (Looper.myLooper() != Looper.getMainLooper()) {
// If we finish marking off of the main thread, we need to
// actually do it on the main thread to ensure correct ordering.
Handler mainThread = new Handler(Looper.getMainLooper());
mainThread.post(new Runnable() {
@Override
public void run() {
mEventLog.add(tag, threadId);
mEventLog.finish(this.toString());
}
});
return;
}
mEventLog.add(tag, threadId);
mEventLog.finish(this.toString());
} else {
long requestTime = SystemClock.elapsedRealtime() - mRequestBirthTime;
if (requestTime >= SLOW_REQUEST_THRESHOLD_MS) {
VolleyLog.d("%d ms: %s", requestTime, this.toString());
}
}
}
mRequestQueue is the RequestQueue analyzed at the start, where one question was left open: when are the staged requests for the same URL held in mWaitingRequests finally triggered? RequestQueue's finish method answers it:
void finish(Request<?> request) {
// Remove from the set of requests currently being processed.
synchronized (mCurrentRequests) {
mCurrentRequests.remove(request); // a finished request is removed from the mCurrentRequests list
}
if (request.shouldCache()) { // true by default, unless you explicitly call Request's setShouldCache
synchronized (mWaitingRequests) {
String cacheKey = request.getCacheKey(); // the cacheKey, i.e. the URL, as discussed earlier
Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey); // remove this key from the map, retrieving any staged duplicates
if (waitingRequests != null) {
if (VolleyLog.DEBUG) {
VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
waitingRequests.size(), cacheKey);
}
// Process all queued up requests. They won't be considered as in flight, but
// that's not a problem as the cache has been primed by 'request'.
mCacheQueue.addAll(waitingRequests); // if staged duplicates exist, put them all onto mCacheQueue for CacheDispatcher
}
}
}
}
With that, the last pending question is resolved; this is the full journey of a Request through Volley.
Summary
1. When a RequestQueue is successfully created, it spins up one CacheDispatcher (cache scheduler) and, by default, 4 NetworkDispatchers (network schedulers);
2. CacheDispatcher acts as the first layer of buffering; once running, it blocks on the cache queue mCacheQueue and takes requests from it:
a. A request that has already been cancelled is marked as skipped and finished;
b. A brand-new or expired request is dropped into mNetworkQueue for the N NetworkDispatchers to handle;
c. A request whose cached response exists and has not expired is handed to Request's parseNetworkResponse to determine whether the response succeeded; the request and response then go to the Delivery dispatcher, and if the cache needs updating the request is also put onto mNetworkQueue.
3. After the user adds a Request to the RequestQueue:
a. A request that does not need caching (this must be set explicitly; caching is the default) goes straight into mNetworkQueue for the N NetworkDispatchers;
b. A cacheable, brand-new request is added to mCacheQueue for CacheDispatcher;
c. A cacheable request whose URL already has one in flight is shelved in mWaitingRequests; once the earlier request completes, it is re-added to mCacheQueue;
4. NetworkDispatcher is where real network requests happen; it hands the work to BasicNetwork, and, as before, the request and its result go to the Delivery dispatcher;
5. Delivery is effectively the final stage of request handling. Before Delivery touches the request, Request has already parsed the network response, so success or failure is already determined. Delivery then acts accordingly:
a. On success it calls deliverResponse, which ultimately fires the Listener the developer set on the Request;
b. On failure it calls deliverError, which ultimately fires the ErrorListener the developer set on the Request.
After that, the Request's life cycle is over: Delivery calls the Request's finish, removing it from mRequestQueue, and if the waiting map holds staged requests for the same URL, they are all dropped into mCacheQueue for CacheDispatcher to process.
A Request's life cycle:
1. It is added to mRequestQueue via add and waits to be executed;
2. Once executed, its own parseNetworkResponse processes the network response and decides whether it succeeded;
3. On success it ultimately fires the developer-set Listener; on failure, the developer-set ErrorListener.
That completes the analysis of a network request's journey through Volley. If for some reason you need to subclass Request to define your own request type, the method that most deserves attention is parseNetworkResponse; overriding it decides the request's fate from that point on.