Android -- Volley Analysis

2015-07-15 22:57

Volley Design

The dispatch threads keep taking requests from the RequestQueue and, depending on whether a request's result is already cached, call one of the two data-fetching interfaces, Cache or Network, to obtain the data from the cache or from the server. The result is then handed to ResponseDelivery for delivery and callback handling.

Overview of the Classes in Volley

Volley: the entry class; its newRequestQueue(…) methods create and start a RequestQueue.

Request: abstract class representing a request. StringRequest, JsonRequest and ImageRequest are its subclasses, each representing a particular type of request.

RequestQueue: the request queue. It contains a CacheDispatcher (the dispatch thread for requests served from the cache), an array of NetworkDispatchers (dispatch threads for requests that go over the network), and a ResponseDelivery (the result-delivery interface). Calling start() starts the CacheDispatcher and the NetworkDispatchers.

CacheDispatcher: a thread that dispatches requests served from the cache. Once started, it keeps taking requests from the cache queue, waiting when the queue is empty; when a request has been handled it passes the result to ResponseDelivery for further processing. If the result was never cached, the cache entry has expired, or the entry needs a refresh, the request is re-queued to a NetworkDispatcher.

NetworkDispatcher: a thread that dispatches requests that go over the network. Once started, it keeps taking requests from the network queue, waiting when the queue is empty; when a request has been handled it passes the result to ResponseDelivery for further processing and decides whether the result should be cached.

ResponseDelivery: the result-delivery interface. Currently the only implementation is ExecutorDelivery, which delivers results on the thread of the Handler passed in as a constructor argument.

HttpStack: executes an HTTP request and returns the result. Volley currently ships with HurlStack, based on HttpURLConnection, and HttpClientStack, based on Apache HttpClient.

Network: calls HttpStack to execute a request and converts the result into a NetworkResponse that ResponseDelivery can process.

Cache: caches request results. By default Volley uses DiskBasedCache, which stores entries on the sdcard. After NetworkDispatcher obtains a result it decides whether to store it in the Cache; CacheDispatcher reads cached results from the Cache.

Class Diagram



Volley

public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

This calls the overload of newRequestQueue() below, passing null as the second argument.

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }
    Network network = new BasicNetwork(stack);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}

If stack is null, an HttpStack instance is created here: when the device's API level (Build.VERSION.SDK_INT) is 9 or higher, a HurlStack is created; otherwise an HttpClientStack is created.

Internally, HurlStack uses HttpURLConnection for network communication, while HttpClientStack uses Apache HttpClient.

With the HttpStack obtained, a BasicNetwork (the concrete implementation of Network) is constructed from it.

Next, a DiskBasedCache (the disk-based implementation of Cache) is constructed.

Finally, the Network object and the Cache object are passed in to build a RequestQueue, which is started and returned.

Volley sets the User-Agent request header to the app's ${packageName}/${versionCode}; if looking up the package info fails, it falls back to "volley/0".
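For orientation, here is a minimal usage sketch (not from the original post); the URL is a placeholder and the listeners simply log the outcome:

// Create a queue (this starts the dispatcher threads) and fire off a GET request.
RequestQueue queue = Volley.newRequestQueue(context);
String url = "http://example.com/data";  // placeholder URL
StringRequest request = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Delivered on the main thread via ExecutorDelivery.
                Log.d("Volley", "response: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "request failed", error);
            }
        });
queue.add(request);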


Request

We perform a network request by constructing an object of a concrete Request subclass (StringRequest, JsonRequest, ImageRequest, or a custom one) and adding it to the RequestQueue.

Volley supports 8 HTTP methods: GET, POST, PUT, DELETE, HEAD, OPTIONS, TRACE, PATCH.

Because Request is an abstract class, every subclass must override these two methods:

abstract protected Response<T> parseNetworkResponse(NetworkResponse response);
abstract protected void deliverResponse(T response);

The following two methods are also frequently overridden:

public byte[] getBody();
protected Map<String, String> getParams();
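To illustrate these hooks, here is a minimal custom Request sketch (written for this article, not taken from Volley) that parses the response body as a plain string:

// Illustrative custom Request subclass; not part of Volley itself.
public class SimpleStringRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public SimpleStringRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Runs on the worker thread: turn the raw bytes into the typed result
        // and build the cache entry from the response headers.
        String parsed = new String(response.data);
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(String response) {
        // Called by ResponseDelivery on the delivery (main) thread.
        mListener.onResponse(response);
    }
}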

RequestQueue

RequestQueue maintains two priority-based Request queues: the cache request queue and the network request queue.

private final PriorityBlockingQueue<Request<?>> mCacheQueue = new PriorityBlockingQueue<Request<?>>();
private final PriorityBlockingQueue<Request<?>> mNetworkQueue = new PriorityBlockingQueue<Request<?>>();

It also maintains a set of requests that are in flight and not yet finished.

private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

And it maintains a map of waiting requests: if a cacheable request is already being processed, subsequent requests for the same URL are staged in this waiting map.

private final Map<String, Queue<Request<?>>> mWaitingRequests = new HashMap<String, Queue<Request<?>>>();

The Volley class starts the RequestQueue by calling queue.start():

public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

First a CacheDispatcher instance is created and its start() method is called; then, in a for loop, NetworkDispatcher instances are created and each of their start() methods is called. Both CacheDispatcher and NetworkDispatcher extend Thread, and by default the loop runs four times. So after calling Volley.newRequestQueue(context), five threads keep running in the background waiting for requests: the CacheDispatcher is the cache thread, and the NetworkDispatchers are the network request threads.
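If the default of four network dispatcher threads does not suit you, RequestQueue also has a constructor that takes the pool size. A minimal sketch of building the queue manually instead of via Volley.newRequestQueue (the cache directory name and pool size are example values):

// Build a RequestQueue with a custom number of NetworkDispatcher threads.
File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2 /* threadPoolSize */);
queue.start();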

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }
    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");
    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }
    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

add() checks whether the request can be cached. If not, the request is added directly to the network request queue; if it can be cached, it is added to the cache queue. By default every request is cacheable, but we can change that by calling the request's setShouldCache(false) method, as the short example below shows.
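A minimal sketch of opting a single request out of caching (illustrative; the URL is a placeholder and listener/errorListener are the callbacks from the earlier usage sketch):

// This request skips the cache queue entirely and goes straight to mNetworkQueue.
StringRequest noCacheRequest = new StringRequest(Request.Method.GET,
        "http://example.com/live", listener, errorListener);
noCacheRequest.setShouldCache(false);
queue.add(noCacheRequest);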

CacheDispatcher

A thread that dispatches requests served from the cache. Once started, it keeps taking requests from the cache queue, waiting when the queue is empty; when a request has been handled it passes the result to ResponseDelivery for further processing. If the result was never cached, the cache entry has expired, or the entry needs a refresh, the request is re-queued to the NetworkDispatcher.

public class CacheDispatcher extends Thread {

    ……

    @Override
    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        // Make a blocking call to initialize the cache.
        mCache.initialize();
        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");
                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }
                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }
                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }
                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");
                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);
                    // Mark the response as intermediate.
                    response.intermediate = true;
                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }
}

First there is a while(true) loop, which means the cache thread runs continuously. It then tries to fetch the response from the cache: if the entry is null, the request is put on the network request queue; if an entry exists but has expired, the request is likewise put on the network request queue; otherwise no new network request is needed and the cached data is used directly. The request's parseNetworkResponse() method is then called to parse the data, and finally the parsed data is delivered back through the callback.
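The "expired" versus "needs refresh" distinction comes from two timestamps kept on Cache.Entry. A rough sketch of the idea, written as standalone helpers that mirror the ttl/softTtl fields of Volley's Cache.Entry (simplified for illustration):

// Sketch of the two expiry checks used by CacheDispatcher above.
static boolean isExpired(Cache.Entry entry) {
    // Hard TTL has passed: the entry cannot be used, so the request goes to the network.
    return entry.ttl < System.currentTimeMillis();
}

static boolean refreshNeeded(Cache.Entry entry) {
    // Soft TTL has passed: the cached data may still be delivered as an intermediate
    // response, but the request is also sent to the network for a refresh.
    return entry.softTtl < System.currentTimeMillis();
}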

NetworkDispatcher

A thread that dispatches requests that go over the network. Once started, it keeps taking requests from the network queue, waiting when the queue is empty; when a request has been handled it passes the result to ResponseDelivery for further processing and decides whether the result should be cached.

public class NetworkDispatcher extends Thread {
    ……
    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        Request<?> request;
        while (true) {
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
            try {
                request.addMarker("network-queue-take");
                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }
                addTrafficStatsTag(request);
                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");
                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }
                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");
                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }
                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                mDelivery.postError(request, new VolleyError(e));
            }
        }
    }
}

We see a similar while(true) loop, so the network request thread also runs continuously. To access the network, it calls Network's performRequest() method to send the request; Network is an interface, and the concrete implementation here is BasicNetwork.

BasicNetwork

public class BasicNetwork implements Network {
    ……
    @Override
    public NetworkResponse performRequest(Request<?> request) throws VolleyError {
        long requestStart = SystemClock.elapsedRealtime();
        while (true) {
            HttpResponse httpResponse = null;
            byte[] responseContents = null;
            Map<String, String> responseHeaders = new HashMap<String, String>();
            try {
                // Gather headers.
                Map<String, String> headers = new HashMap<String, String>();
                addCacheHeaders(headers, request.getCacheEntry());
                httpResponse = mHttpStack.performRequest(request, headers);
                StatusLine statusLine = httpResponse.getStatusLine();
                int statusCode = statusLine.getStatusCode();
                responseHeaders = convertHeaders(httpResponse.getAllHeaders());
                // Handle cache validation.
                if (statusCode == HttpStatus.SC_NOT_MODIFIED) {
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED,
                            request.getCacheEntry() == null ? null : request.getCacheEntry().data,
                            responseHeaders, true);
                }
                // Some responses such as 204s do not have content.  We must check.
                if (httpResponse.getEntity() != null) {
                    responseContents = entityToBytes(httpResponse.getEntity());
                } else {
                    // Add 0 byte response as a way of honestly representing a
                    // no-content request.
                    responseContents = new byte[0];
                }
                // if the request is slow, log it.
                long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
                logSlowRequests(requestLifetime, request, responseContents, statusLine);
                if (statusCode < 200 || statusCode > 299) {
                    throw new IOException();
                }
                return new NetworkResponse(statusCode, responseContents, responseHeaders, false);
            } catch (Exception e) {
                ……
            }
        }
    }
}

BasicNetwork calls the performRequest() method of the HttpStack, the instance created back when newRequestQueue() was called, and then wraps the data returned by the server into a NetworkResponse object and returns it.

After NetworkDispatcher receives this NetworkResponse, it calls the Request's parseNetworkResponse() method to parse the data in the NetworkResponse and to produce the entry that will be written to the cache. The implementation of this method is left to Request subclasses, because different kinds of Request must be parsed in different ways; parseNetworkResponse() is one of the methods that must be overridden.

Once the data in the NetworkResponse has been parsed, ExecutorDelivery's postResponse() method is called to deliver the parsed data back through the callbacks.

public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
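This is how results end up on the main thread: the default RequestQueue constructor builds its ResponseDelivery from a Handler bound to the main looper. A sketch of that wiring (to the best of my reading of the default constructor; cache and network are the objects built earlier, e.g. a DiskBasedCache and a BasicNetwork):

// ExecutorDelivery posts each ResponseDeliveryRunnable to the given Handler's thread.
// With a main-looper Handler, the callbacks therefore run on the UI thread.
ResponseDelivery delivery = new ExecutorDelivery(new Handler(Looper.getMainLooper()));
RequestQueue queue = new RequestQueue(cache, network, 4 /* threadPoolSize */, delivery);
queue.start();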

ResponseDeliveryRunnable

private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        mRequest = request;
        mResponse = response;
        mRunnable = runnable;
    }

    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        // If this request has canceled, finish it and don't deliver.
        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }
        // Deliver a normal response or error, depending.
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }
        // If this is an intermediate response, add a marker, otherwise we're done
        // and the request can be finished.
        if (mResponse.intermediate) {
            mRequest.addMarker("intermediate-response");
        } else {
            mRequest.finish("done");
        }
        // If we have been provided a post-delivery runnable, run it.
        if (mRunnable != null) {
            mRunnable.run();
        }
    }
}

ByteArrayPool

A recycling pool for byte[] buffers, which reuses byte arrays to reduce memory allocation and garbage collection. It keeps the pooled byte[] buffers in an ArrayList sorted by length from smallest to largest, plus a second ArrayList ordered by last-use time that is used to evict elements when the pool is full.

public synchronized void returnBuf(byte[] buf)

Recycles a used byte[] by inserting it into the pool at the position determined by its length (ascending order).

public synchronized byte[] getBuf(int len)

Gets a byte[] whose length is at least len: it walks the pool and returns the first buffer whose length is not smaller than len; if no suitable buffer is found, a new one is allocated and returned.

private synchronized void trim()

When the total number of pooled bytes exceeds the preset limit, the least recently returned byte[] buffers are removed, first in, first out, until the pool is back under the limit.
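A small sketch of how such a pool is used (the pool-size limit of 4096 bytes is an arbitrary example value; inside Volley, BasicNetwork uses a shared pool when reading the HTTP entity):

// Illustrative use of ByteArrayPool.
ByteArrayPool pool = new ByteArrayPool(4096);

byte[] buffer = pool.getBuf(1024);   // a buffer of at least 1024 bytes, pooled or newly allocated
try {
    // ... read the response body into `buffer` chunk by chunk ...
} finally {
    pool.returnBuf(buffer);          // hand the buffer back so later requests can reuse it
}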
