
Volley Framework, Part 3: Tricky Implementation Details

2015-12-09 21:09
With the previous two articles as groundwork, this one walks through the main things I learned from reading Volley's source.

1. Where is a Request processed — on the main thread or a worker thread?

Requests are processed on worker threads. Creating and starting a RequestQueue spins up one cache thread and four network request threads; CacheDispatcher and NetworkDispatcher both extend Thread.

So at most five dispatcher threads run at once: when the cache thread misses, up to four network request threads handle the work concurrently.

/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}


From this it follows that if you ever need more than four concurrent download tasks, you have to adapt Volley's newRequestQueue static helper so that it calls RequestQueue's overloaded constructor:

public RequestQueue(Cache cache, Network network, int threadPoolSize)

// threadPoolSize is the maximum number of concurrent network request threads
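A minimal sketch of that idea, mirroring what Volley.newRequestQueue does by default but with the pool size made explicit (the cache directory name, the pool size of 8, and the context variable are illustrative assumptions, not Volley defaults):

// Sketch: build a RequestQueue with 8 network dispatcher threads instead of 4.
// imports assumed: java.io.File, com.android.volley.*, com.android.volley.toolbox.*
File cacheDir = new File(context.getCacheDir(), "volley");              // default-style cache dir
Network network = new BasicNetwork(new HurlStack());                    // HttpURLConnection-based stack
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir),     // disk cache
        network,
        8);                                                             // threadPoolSize: max concurrent network threads
queue.start();                                                          // spawns 1 cache thread + 8 network threads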

The number of cache threads, on the other hand, is always exactly one.

This is actually a little odd: why not manage the cache/network threads with a thread pool? Every new RequestQueue spawns five threads, which is not cheap, and if you forget to call the queue's stop()/cancelAll(), those threads just keep running.

2. How do the cache thread and the network threads deliver their results?

For cross-thread communication, the first thing that comes to mind is the Handler mechanism; let's check whether that is what Volley does.

Once the cache thread or a network thread has parsed a response, it calls mDelivery.postResponse(request, response), which dispatches the result to the main thread.

/** Used for posting responses, typically to the main thread. */
private final Executor mResponsePoster;

/**
 * Creates a new response delivery interface.
 * @param handler {@link Handler} to post responses on
 */
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}

private class ResponseDeliveryRunnable implements Runnable {
    .........
    @SuppressWarnings("unchecked")
    @Override
    public void run() {
        .........
        if (mResponse.isSuccess()) {
            mRequest.deliverResponse(mResponse.result);
        } else {
            mRequest.deliverError(mResponse.error);
        }
        .......
    }
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}


mResponsePoster is an Executor whose anonymous execute() implementation posts the ResponseDeliveryRunnable to the main thread through the Handler. The handler is created with new Handler(Looper.getMainLooper()), i.e., it is bound to the main thread's Looper. This pattern should look familiar: AsyncTask uses an Executor in much the same way. So why wrap the Handler in ExecutorDelivery instead of using it directly? It reads like a deliberate design choice.

ExecutorDelivery is obviously a class whose sole job is delivery; scattering a raw Handler through the code would be noticeably less clean. That is a habit worth copying in my own code.
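A side effect of this design is that the delivery target is pluggable. Here is a hedged sketch, assuming Volley's four-argument RequestQueue constructor (which accepts a ResponseDelivery) and an available context variable, of delivering callbacks on a HandlerThread instead of the main thread:

// Sketch: deliver responses on a background HandlerThread rather than the UI thread.
// imports assumed: android.os.Handler, android.os.HandlerThread, java.io.File,
//                  com.android.volley.*, com.android.volley.toolbox.*
HandlerThread deliveryThread = new HandlerThread("volley-delivery");
deliveryThread.start();
ResponseDelivery delivery = new ExecutorDelivery(new Handler(deliveryThread.getLooper()));
RequestQueue queue = new RequestQueue(
        new DiskBasedCache(new File(context.getCacheDir(), "volley")),
        new BasicNetwork(new HurlStack()),
        4,              // the default network thread pool size
        delivery);      // callbacks now run on "volley-delivery", not the main thread
queue.start();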

3. How are repeated submissions of the same Request handled?

for (int i = 0; i < 3; i++) {
    mQueue.add(stringRequest);
}


Suppose I add the same stringRequest three times in quick succession. Surely that does not mean three network round trips? That would be wasteful. What actually happens is this: the first add() hands the request to the cache thread, which on a cache miss passes it straight to a network thread. The later add() calls for the same cache key are not put into mCacheQueue at all; they are staged until the first request finishes, and only then handed to the cache thread, which can now decide whether to serve them from the disk cache or send them to the network. That saves work — a nice design. Here is how it is implemented.

private final Map<String, Queue<Request<?>>> mWaitingRequests =
        new HashMap<String, Queue<Request<?>>>();
private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();
private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();
private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in
            // flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}

/**
 * Called from {@link Request#finish(String)}, indicating that processing of the given request
 * has finished.
 *
 * <p>Releases waiting requests for <code>request.getCacheKey()</code> if
 * <code>request.shouldCache()</code>.</p>
 */
<T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}


mCurrentRequests holds the requests currently being processed. It is a HashSet, and a Set cannot contain duplicates, so after the three add() calls above mCurrentRequests actually contains just one StringRequest.

The comment on mWaitingRequests says it all: "Staging area for requests that already have a duplicate request in flight." If a duplicate Request is already in flight, later Requests with the same cache key are parked in this HashMap.
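Note also the !request.shouldCache() branch at the top of add(): a request that opts out of caching skips both the staging map and the cache queue. A small sketch of that path (the URL is a placeholder; mQueue is the RequestQueue from the earlier snippet):

// Sketch: opt a request out of caching so add() sends it straight to mNetworkQueue.
// imports assumed: android.util.Log, com.android.volley.*, com.android.volley.toolbox.StringRequest
StringRequest noCacheRequest = new StringRequest(Request.Method.GET, "http://example.com/ping",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.d("Volley", "got: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "request failed", error);
            }
        });
noCacheRequest.setShouldCache(false);   // shouldCache() now returns false
mQueue.add(noCacheRequest);             // skips mWaitingRequests and mCacheQueue entirely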

4. Why does processing start as soon as add() is called?

Consider the normal case first: the request is added to mCacheQueue.

private final PriorityBlockingQueue<Request<?>> mCacheQueue =
new PriorityBlockingQueue<Request<?>>();


mCacheQueue is a PriorityBlockingQueue. Remember AsyncTask? It uses a LinkedBlockingQueue. Both are blocking queues, and they share one key property:

take() blocks the calling thread when the queue is empty, until an element is added or put. take() is also internally synchronized, so multiple threads can take() concurrently without problems.

The properties of java.util.concurrent.BlockingQueue are:

1. Taking or removing an element blocks while the queue is empty, and adding an element blocks while the queue is full.

2. A blocking queue does not accept null; trying to add null throws a NullPointerException.

3. Every BlockingQueue implementation is thread safe; the queuing methods are atomic, backed by internal locks or other concurrency control.

4. The BlockingQueue interface is part of the Java collections framework and is mainly used to implement producer-consumer patterns.

Now for what makes PriorityBlockingQueue special: as the name suggests, it is a priority queue, so the highest-priority element is taken first. Objects stored in it must implement Comparable, and the queue determines priority through compareTo.
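A minimal, standalone sketch of both properties (Comparable-based ordering plus the blocking take()), independent of Volley; class and field names here are made up for illustration:

// Sketch: PriorityBlockingQueue orders elements via compareTo; take() blocks while the queue is empty.
import java.util.concurrent.PriorityBlockingQueue;

class Job implements Comparable<Job> {
    final int priority;          // higher value = more important
    final String name;
    Job(int priority, String name) { this.priority = priority; this.name = name; }
    @Override public int compareTo(Job other) {
        return other.priority - this.priority;   // "lesser" sorts first, so invert for high-first
    }
}

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final PriorityBlockingQueue<Job> queue = new PriorityBlockingQueue<Job>();
        queue.add(new Job(1, "low"));
        queue.add(new Job(10, "high"));              // will be taken before "low"
        Thread consumer = new Thread(new Runnable() {
            @Override public void run() {
                try {
                    while (true) {
                        Job job = queue.take();      // blocks once the queue is drained
                        System.out.println("took " + job.name);
                    }
                } catch (InterruptedException ignored) { }  // interrupted: time to quit
            }
        });
        consumer.start();
        Thread.sleep(100);
        consumer.interrupt();
    }
}

Volley's Request does exactly the same thing with its compareTo: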

Request.java

public enum Priority {
    LOW,
    NORMAL,
    HIGH,
    IMMEDIATE
}

/**
 * Returns the {@link Priority} of this request; {@link Priority#NORMAL} by default.
 */
public Priority getPriority() {
    return Priority.NORMAL;
}

/**
 * Our comparator sorts from high to low priority, and secondarily by
 * sequence number to provide FIFO ordering.
 */
@Override
public int compareTo(Request<T> other) {
    Priority left = this.getPriority();
    Priority right = other.getPriority();

    // High-priority requests are "lesser" so they are sorted to the front.
    // Equal priorities are sorted by sequence number to provide FIFO ordering.
    return left == right ?
            this.mSequence - other.mSequence :
            right.ordinal() - left.ordinal();
}


getPriority() defaults to Priority.NORMAL. When two requests have the same priority, their mSequence fields are compared; the sequence number is incremented on every add(), so ordering is FIFO and the request added first is processed first.

ImageRequest.java

@Override
public Priority getPriority() {
    return Priority.LOW;
}


ImageRequest's priority is even lower than the default, so when other Requests and image-loading Requests are both queued, the other Requests are handled first.
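Priority is also easy to raise per request by overriding getPriority(). A small sketch (the subclass name is hypothetical):

// Sketch: a StringRequest whose work should jump ahead of everything else in the queue.
public class HighPriorityStringRequest extends StringRequest {
    public HighPriorityStringRequest(String url,
                                     Response.Listener<String> listener,
                                     Response.ErrorListener errorListener) {
        super(Method.GET, url, listener, errorListener);
    }

    @Override
    public Priority getPriority() {
        return Priority.IMMEDIATE;   // sorted to the front of mCacheQueue/mNetworkQueue
    }
}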

CacheDispatcher.java

@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    Request<?> request;
    // infinite loop
    while (true) {
        // release previous request object to avoid leaking request object when mQueue is drained.
        request = null;
        try {
            // Take a request from the queue.
            request = mCacheQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                // only quit() breaks out of the while loop and ends the cache thread
                return;
            }
            continue;
        }
        try {
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) { // cached, but expired: hand it to a network thread
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) { // the entry is fresh: deliver it to the main thread
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else { // soft-expired
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Deliver the cached response to the main thread first, then also hand
                // the request to a network thread for a refresh, following the HTTP
                // caching model.
                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                final Request<?> finalRequest = request;
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(finalRequest);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
        }
    }
}


CacheDispatcher's run() is arguably the most important method in all of Volley.

request = mCacheQueue.take() pulls a Request off the priority queue; if mCacheQueue is empty, the thread simply blocks here.

request.parseNetworkResponse parses the cached or network data and wraps it in a Response; mDelivery.postResponse then dispatches it to the main thread.

5. How is the cache implemented?

Writing the disk cache

// inside NetworkDispatcher.run()
// Parse the response here on the worker thread.
Response<?> response = request.parseNetworkResponse(networkResponse);
request.addMarker("network-parse-complete");

// Write to cache if applicable.
// TODO: Only update cache metadata instead of entire record for 304s.
if (request.shouldCache() && response.cacheEntry != null) {
    mCache.put(request.getCacheKey(), response.cacheEntry);
    request.addMarker("network-cache-written");
}

// Post the response back.
request.markDelivered();
mDelivery.postResponse(request, response);


The file name is derived from request.getCacheKey(), and entry.data, a byte[], is written into that file.

// DiskBasedCache.put()
@Override
public synchronized void put(String key, Entry entry) {
    pruneIfNeeded(entry.data.length);
    File file = getFileForKey(key);
    try {
        BufferedOutputStream fos = new BufferedOutputStream(new FileOutputStream(file));
        CacheHeader e = new CacheHeader(key, entry);
        boolean success = e.writeHeader(fos);
        if (!success) {
            fos.close();
            VolleyLog.d("Failed to write header for %s", file.getAbsolutePath());
            throw new IOException();
        }
        fos.write(entry.data);
        fos.close();
        putEntry(key, e);
        return;
    } catch (IOException e) {
    }
    boolean deleted = file.delete();
    if (!deleted) {
        VolleyLog.d("Could not clean up file %s", file.getAbsolutePath());
    }
}


A BufferedOutputStream wrapped around a FileOutputStream writes the entry to the file.

Reading the disk cache

Cache.Entry entry = mCache.get(request.getCacheKey());

// We have a cache hit; parse its data for delivery back to the request.
request.addMarker("cache-hit");
Response<?> response = request.parseNetworkResponse(
new NetworkResponse(entry.data, entry.responseHeaders));
request.addMarker("cache-hit-parsed");


The cache directory is checked for a file keyed by request.getCacheKey(); if it exists, entry.data (a byte[]) and entry.responseHeaders (a Map) are wrapped into a NetworkResponse and handed to the concrete Request subclass to parse.

// parseNetworkResponse implementations
// StringRequest
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    String parsed;
    try {
        parsed = new String(response.data, HttpHeaderParser.parseCharset(response.headers));
    } catch (UnsupportedEncodingException e) {
        parsed = new String(response.data);
    }
    return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
}

// ImageRequest
@Override
protected Response<Bitmap> parseNetworkResponse(NetworkResponse response) {
    // Serialize all decode on a global lock to reduce concurrent heap usage.
    synchronized (sDecodeLock) {
        try {
            return doParse(response);
        } catch (OutOfMemoryError e) {
            VolleyLog.e("Caught OOM for %d byte image, url=%s", response.data.length, getUrl());
            return Response.error(new ParseError(e));
        }
    }
}

// inside ImageRequest.doParse()
Bitmap tempBitmap = BitmapFactory.decodeByteArray(data, 0, data.length, decodeOptions);
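To make the division of labor concrete, here is a hedged sketch of a custom Request subclass (the class name is hypothetical; it essentially reimplements what JsonObjectRequest already does) that parses the body into a JSONObject. parseNetworkResponse runs on the dispatcher thread; deliverResponse is later invoked on the main thread by ExecutorDelivery:

// Sketch: a custom Request<JSONObject>.
// imports assumed: org.json.JSONObject, org.json.JSONException, java.io.UnsupportedEncodingException,
//                  com.android.volley.*, com.android.volley.toolbox.HttpHeaderParser
public class SimpleJsonRequest extends Request<JSONObject> {
    private final Response.Listener<JSONObject> mListener;

    public SimpleJsonRequest(String url,
                             Response.Listener<JSONObject> listener,
                             Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
        try {
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(new JSONObject(json),
                    HttpHeaderParser.parseCacheHeaders(response));  // cacheEntry for the disk cache
        } catch (UnsupportedEncodingException | JSONException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(JSONObject response) {
        mListener.onResponse(response);   // runs on the main thread
    }
}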


NetworkResponse.java

NetworkResponse is the return value of BasicNetwork.performRequest and the argument to Request.parseNetworkResponse(...); it is the first stage of Volley's internal response conversion.

It wraps the HTTP response's status code, headers, and body.

(1) Fields

int statusCode: the HTTP status code

byte[] data: the body bytes

Map<String, String> headers: the response headers

Cache.Entry, in turn, carries the freshness information:

/** True if the entry is expired. */
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

/** True if a refresh is needed from the original data source. */
public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}


parseCacheHeaders returns a Cache.Entry object with both ttl and softTtl set.

DiskBasedCache implements the Cache interface; it is the disk-based cache implementation.

public synchronized void initialize(): initialization; scans the cache directory and loads summary information about every cached entry into memory.

public synchronized Entry get(String key): reads an entry from the cache; looks up the in-memory summary first, then reads the cache file to get the content.

public synchronized void put(String key, Entry entry): writes an entry into the cache; first checks whether the cache would overflow and evicts some entries if so, then creates the cache file.

private void pruneIfNeeded(int neededSpace): checks whether another neededSpace bytes can be allocated; if not, deletes part of the cached data.

public synchronized void clear(): empties the cache. public synchronized void remove(String key): removes a single entry.

Now go back and reread CacheDispatcher's run() method; it should be much clearer.

Both entry.isExpired() and entry.refreshNeeded() are derived from the cache-control headers in the server's response rather than being hard-coded locally, which follows the HTTP spec.

The key method is HttpHeaderParser.parseCacheHeaders.

It builds the cache entry from the response's cache-control headers and body. If the Cache-Control header contains no-cache or no-store, the response must not be cached: parseCacheHeaders returns null, nothing is written to the disk cache, and the next identical request goes straight to the network.

Otherwise, ttl and softTtl are computed from the remaining headers (see parseCacheHeaders for the exact arithmetic), again in line with the HTTP spec.
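A simplified sketch of that freshness math, not Volley's exact code: it ignores the Expires/Date fallback and stale-while-revalidate, and the max-age value is purely illustrative:

// Sketch: how a "Cache-Control: max-age" response roughly maps onto ttl/softTtl.
long now = System.currentTimeMillis();
long maxAgeSecs = 60;                          // pretend the header said "Cache-Control: max-age=60"
long softExpire = now + maxAgeSecs * 1000L;    // fresh until this instant
Cache.Entry entry = new Cache.Entry();
entry.softTtl = softExpire;                    // refreshNeeded() compares against this
entry.ttl = softExpire;                        // isExpired() compares against this; with no
                                               // stale-while-revalidate the two are the same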

As an experiment, add the same StringRequest (for www.baidu.com) three times. The result: even though it is the same StringRequest, the network is hit three times rather than being served from the cache.

To summarize: responses are cached according to the cache-control headers the server returns. On the next request, a cached entry that has not expired is used directly, which speeds up the response; if a refresh is needed, the request is also sent to the server again. If the server answers 304, the resource has not changed since it was cached, so the cached copy is used without re-fetching the body. All of this, of course, requires server support.

6. How do retries work, and how is the timeout configured?

public static final int DEFAULT_TIMEOUT_MS = 2500;
public static final int DEFAULT_MAX_RETRIES = 0;
public static final float DEFAULT_BACKOFF_MULT = 1f;

myRequest.setRetryPolicy(new DefaultRetryPolicy(
        MY_SOCKET_TIMEOUT_MS,
        DefaultRetryPolicy.DEFAULT_MAX_RETRIES,
        DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));

public DefaultRetryPolicy() {
    this(DEFAULT_TIMEOUT_MS, DEFAULT_MAX_RETRIES, DEFAULT_BACKOFF_MULT);
}

/**
 * Constructs a new retry policy.
 * @param initialTimeoutMs The initial timeout for the policy.
 * @param maxNumRetries The maximum number of retries.
 * @param backoffMultiplier Backoff multiplier for the policy.
 */
public DefaultRetryPolicy(int initialTimeoutMs, int maxNumRetries, float backoffMultiplier) {
    mCurrentTimeoutMs = initialTimeoutMs;
    mMaxNumRetries = maxNumRetries;
    mBackoffMultiplier = backoffMultiplier;
}

@Override
public void retry(VolleyError error) throws VolleyError {
    mCurrentRetryCount++;
    mCurrentTimeoutMs += (mCurrentTimeoutMs * mBackoffMultiplier); // how the next timeout is computed
    if (!hasAttemptRemaining()) {
        throw error; // no retries left: rethrow the VolleyError
    }
}


So the defaults are a 2.5-second timeout and no retries.

myRequest.setRetryPolicy(new DefaultRetryPolicy(3000,2,2));

With the policy above, the initial attempt times out after 3 s, the first retry after 3 + 3*2 = 9 s, and the second retry after 9 + 9*2 = 27 s.

A retry is issued immediately after the timeout.

How is the timeout/retry actually implemented?

NetworkDispatcher.run->BasicNetwork.performRequest->HurlStack/HttpClientStack(api<9).performRequest

BasicNetwork's performRequest method is a while(true) loop:

@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = Collections.emptyMap();
        try {
            // Gather headers.
            Map<String, String> headers = new HashMap<String, String>();
            addCacheHeaders(headers, request.getCacheEntry());
            httpResponse = mHttpStack.performRequest(request, headers);
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();

            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // Handle cache validation.
            if (statusCode == HttpStatus.SC_NOT_MODIFIED) { // 304: the resource has not changed

                Entry entry = request.getCacheEntry();
                if (entry == null) { // nothing cached locally: return the (empty) network response
                    return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, null,
                            responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }

                // A HTTP 304 response does not have all header fields. We
                // have to use the header fields from the cache entry plus
                // the new ones from the response.
                // http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
                entry.responseHeaders.putAll(responseHeaders);
                return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED, entry.data,
                        entry.responseHeaders, true,
                        SystemClock.elapsedRealtime() - requestStart);
            }

            // 301/302: the resource has moved (temporarily or permanently); record the
            // redirect URL so the request can be re-issued against it.
            // Handle moved resources
            if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY || statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                String newUrl = responseHeaders.get("Location");
                request.setRedirectUrl(newUrl);
            }

            // Some responses such as 204s do not have content.  We must check.
            if (httpResponse.getEntity() != null) {
                responseContents = entityToBytes(httpResponse.getEntity());
            } else {
                // Add 0 byte response as a way of honestly representing a
                // no-content request.
                responseContents = new byte[0];
            }

            // if the request is slow, log it.
            long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
            logSlowRequests(requestLifetime, request, responseContents, statusLine);

            // outside the 200-299 range: throw an IOException, caught in the catch block below
            if (statusCode < 200 || statusCode > 299) {
                throw new IOException();
            }
            // 200-299 means success: return, which eventually triggers Listener.onResponse
            return new NetworkResponse(statusCode, responseContents, responseHeaders, false,
                    SystemClock.elapsedRealtime() - requestStart);
        } catch (SocketTimeoutException e) { // socket timeout: retry if attempts remain, otherwise rethrow
            attemptRetryOnException("socket", request, new TimeoutError());
        } catch (ConnectTimeoutException e) { // connect timeout: retry if attempts remain, otherwise rethrow
            attemptRetryOnException("connection", request, new TimeoutError());
        } catch (MalformedURLException e) {
            throw new RuntimeException("Bad URL " + request.getUrl(), e);
        } catch (IOException e) {
            int statusCode = 0;
            NetworkResponse networkResponse = null;
            if (httpResponse != null) { // the request reached the server, but the status code was outside 200-299
                statusCode = httpResponse.getStatusLine().getStatusCode();
            } else { // no network connection at all: httpResponse is null
                throw new NoConnectionError(e); // NoConnectionError is a VolleyError subclass; nothing below runs
            }
            if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY ||
                    statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                VolleyLog.e("Request at %s has been redirected to %s", request.getOriginUrl(), request.getUrl());
            } else {
                VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
            }
            if (responseContents != null) {
                networkResponse = new NetworkResponse(statusCode, responseContents,
                        responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);
                if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
                        statusCode == HttpStatus.SC_FORBIDDEN) {
                    attemptRetryOnException("auth",
                            request, new AuthFailureError(networkResponse)); // retry if attempts remain
                } else if (statusCode == HttpStatus.SC_MOVED_PERMANENTLY ||
                        statusCode == HttpStatus.SC_MOVED_TEMPORARILY) {
                    attemptRetryOnException("redirect",
                            request, new RedirectError(networkResponse)); // retry if attempts remain
                } else {
                    // TODO: Only throw ServerError for 5xx status codes.
                    throw new ServerError(networkResponse); // ServerError is also a VolleyError subclass
                }
            } else {
                throw new NetworkError(e); // NetworkError is also a VolleyError subclass
            }
        }
    }
}

private static void attemptRetryOnException(String logPrefix, Request<?> request,
        VolleyError exception) throws VolleyError {
    RetryPolicy retryPolicy = request.getRetryPolicy();
    int oldTimeout = request.getTimeoutMs();

    try {
        retryPolicy.retry(exception);
    } catch (VolleyError e) {
        request.addMarker(
                String.format("%s-timeout-giveup [timeout=%s]", logPrefix, oldTimeout));
        throw e; // propagate the VolleyError upward
    }
    request.addMarker(String.format("%s-retry [timeout=%s]", logPrefix, oldTimeout));
}

@Override
public void retry(VolleyError error) throws VolleyError {
    mCurrentRetryCount++;
    mCurrentTimeoutMs += (mCurrentTimeoutMs * mBackoffMultiplier);
    if (!hasAttemptRemaining()) {
        throw error; // no retries left: rethrow the VolleyError
    }
}

protected boolean hasAttemptRemaining() {
    return mCurrentRetryCount <= mMaxNumRetries;
}


If the request succeeds, the response is returned directly. Otherwise SocketTimeoutException / ConnectTimeoutException are caught, and attemptRetryOnException decides what happens next: if retries remain, no exception is thrown and the while(true) loop issues the request again; if not, a VolleyError is thrown, BasicNetwork.performRequest propagates it upward, and NetworkDispatcher.run() catches it and calls parseAndDeliverNetworkError, which delivers the error back to the main thread. Note that even when the request itself completes without a socket or connect timeout, response codes such as SC_UNAUTHORIZED and SC_FORBIDDEN also trigger a retry.


You can see the implementation follows the HTTP standard, giving each class of status code its own behavior.

When there is no network at all, an UnknownHostException is thrown; UnknownHostException extends IOException, so the code catches it as an IOException and rethrows NoConnectionError. NoConnectionError indirectly extends VolleyError, so parseAndDeliverNetworkError is still reached and the error callback still fires.

So whether the failure is a timeout, a missing network connection, or a status code outside 200-299, once any retries have been exhausted the flow ends the same way: NetworkDispatcher.run() catches the VolleyError/Exception, calls parseAndDeliverNetworkError, and ultimately invokes mErrorListener's onErrorResponse.

// inside NetworkDispatcher.run()
} catch (VolleyError volleyError) {
    volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
    parseAndDeliverNetworkError(request, volleyError);
} catch (Exception e) {
    VolleyLog.e(e, "Unhandled exception %s", e.toString());
    VolleyError volleyError = new VolleyError(e);
    volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
    mDelivery.postError(request, volleyError);
}

private void parseAndDeliverNetworkError(Request<?> request, VolleyError error) {
    error = request.parseNetworkError(error);
    mDelivery.postError(request, error);
}

// the error ultimately reaches the ErrorListener passed into the request:
mErrorListener.onErrorResponse(error);
public StringRequest(int method, String url, Listener<String> listener, ErrorListener errorListener) {


Only when the status code falls in 200-299 is Listener.onResponse called; in every other case ErrorListener.onErrorResponse is called instead.

And where is the timeout actually applied? In the corresponding HurlStack / HttpClientStack methods, of course. HurlStack sets it like this:

private HttpURLConnection openConnection(URL url, Request<?> request) throws IOException {
    HttpURLConnection connection = createConnection(url);

    int timeoutMs = request.getTimeoutMs();
    connection.setConnectTimeout(timeoutMs);
    connection.setReadTimeout(timeoutMs);
    connection.setUseCaches(false);
    connection.setDoInput(true);

    // use caller-provided custom SslSocketFactory, if any, for HTTPS
    if ("https".equals(url.getProtocol()) && mSslSocketFactory != null) {
        ((HttpsURLConnection) connection).setSSLSocketFactory(mSslSocketFactory);
    }

    return connection;
}


7. How do POST requests pass parameters, and how are HTTP headers added?

As mentioned earlier, overriding getParams() is enough to add POST parameters; for custom request headers, override getHeaders().

Here are the relevant Request methods:

// override this method to supply custom HTTP request headers
public Map<String, String> getHeaders() throws AuthFailureError {
    return Collections.emptyMap();
}

// override this method to supply POST request parameters
protected Map<String, String> getParams() throws AuthFailureError {
    return null;
}

public byte[] getBody() throws AuthFailureError {
    Map<String, String> params = getParams();
    if (params != null && params.size() > 0) {
        return encodeParameters(params, getParamsEncoding());
    }
    return null;
}

private static final String DEFAULT_PARAMS_ENCODING = "UTF-8";

protected String getParamsEncoding() {
    return DEFAULT_PARAMS_ENCODING;
}

/**
 * Returns the content type of the POST or PUT body.
 */
public String getBodyContentType() { // value of the Content-Type header for POST bodies
    return "application/x-www-form-urlencoded; charset=" + getParamsEncoding();
}

private byte[] encodeParameters(Map<String, String> params, String paramsEncoding) {
    StringBuilder encodedParams = new StringBuilder();
    try {
        for (Map.Entry<String, String> entry : params.entrySet()) {
            encodedParams.append(URLEncoder.encode(entry.getKey(), paramsEncoding));
            encodedParams.append('=');
            encodedParams.append(URLEncoder.encode(entry.getValue(), paramsEncoding));
            encodedParams.append('&');
        }
        return encodedParams.toString().getBytes(paramsEncoding);
    } catch (UnsupportedEncodingException uee) {
        throw new RuntimeException("Encoding not supported: " + paramsEncoding, uee);
    }
}


encodedParams produces a string of the form param1=value1&param2=value2, which is then converted into a byte[].
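Putting the two overrides together, here is a hedged usage sketch of a POST request with form parameters and a custom header; the URL, keys, token value, and the listener/errorListener variables are placeholders assumed to exist:

// Sketch: send POST form parameters and a custom header by overriding getParams()/getHeaders().
// imports assumed: java.util.HashMap, java.util.Map, com.android.volley.*, com.android.volley.toolbox.StringRequest
StringRequest loginRequest = new StringRequest(Request.Method.POST, "http://example.com/login",
        listener, errorListener) {
    @Override
    protected Map<String, String> getParams() throws AuthFailureError {
        Map<String, String> params = new HashMap<String, String>();
        params.put("username", "alice");     // becomes username=alice&password=... in getBody()
        params.put("password", "secret");
        return params;
    }

    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        Map<String, String> headers = new HashMap<String, String>();
        headers.put("X-Auth-Token", "placeholder-token");  // merged into the request headers by the HTTP stack
        return headers;
    }
};
mQueue.add(loginRequest);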

Where are getBody() and getHeaders() actually called?

Here I only look at HurlStack, which uses HttpURLConnection internally; HttpClientStack uses HttpClient and is simple enough to skip.

NetworkDispatcher.run->BasicNetwork.performRequest->HurlStack/HttpClientStack(api<9).performRequest 这条链

BasicNetwork.performRequest builds the header map and hands it to the stack:

addCacheHeaders(headers, request.getCacheEntry());

httpResponse = mHttpStack.performRequest(request, headers);

HurlStack.performRequest then does the real work:

@Override
public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
        throws IOException, AuthFailureError {
    String url = request.getUrl();
    HashMap<String, String> map = new HashMap<String, String>();
    map.putAll(request.getHeaders());   // headers set on the request itself
    map.putAll(additionalHeaders);      // cache validation headers; if this request hit the network before,
                                        // its response (headers included) was cached
    ........
    URL parsedUrl = new URL(url);
    HttpURLConnection connection = openConnection(parsedUrl, request);
    for (String headerName : map.keySet()) {
        connection.addRequestProperty(headerName, map.get(headerName)); // apply the HTTP request headers
    }
    setConnectionParametersForRequest(connection, request);
    // Initialize HttpResponse with data from the HttpURLConnection.
    ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1);
    int responseCode = connection.getResponseCode();
    if (responseCode == -1) {
        // -1 is returned by getResponseCode() if the response code could not be retrieved.
        // Signal to the caller that something was wrong with the connection.
        throw new IOException("Could not retrieve response code from HttpUrlConnection.");
    }
    StatusLine responseStatus = new BasicStatusLine(protocolVersion,
            connection.getResponseCode(), connection.getResponseMessage());
    BasicHttpResponse response = new BasicHttpResponse(responseStatus);
    if (hasResponseBody(request.getMethod(), responseStatus.getStatusCode())) {
        response.setEntity(entityFromConnection(connection));
    }
    for (Entry<String, List<String>> header : connection.getHeaderFields().entrySet()) {
        if (header.getKey() != null) {
            Header h = new BasicHeader(header.getKey(), header.getValue().get(0));
            response.addHeader(h);
        }
    }
    return response;
}

// Volley supports GET, POST, PUT, DELETE and so on, per the HTTP spec
static void setConnectionParametersForRequest(HttpURLConnection connection,
        Request<?> request) throws IOException, AuthFailureError {
    switch (request.getMethod()) {
        case Method.GET:
            // Not necessary to set the request method because connection defaults to GET but
            // being explicit here.
            connection.setRequestMethod("GET");
            break;
        case Method.POST:
            connection.setRequestMethod("POST");
            addBodyIfExists(connection, request);
            ...........
    }
}

private static void addBodyIfExists(HttpURLConnection connection, Request<?> request)
        throws IOException, AuthFailureError {
    byte[] body = request.getBody();
    if (body != null) {
        connection.setDoOutput(true);
        connection.addRequestProperty(HEADER_CONTENT_TYPE, request.getBodyContentType()); // getBodyContentType() controls the Content-Type header
        DataOutputStream out = new DataOutputStream(connection.getOutputStream());
        out.write(body); // write the byte[] returned by getBody() to the output stream
        out.close();
    }
}


One more thing: the cache validation headers. The additionalHeaders parameter of performRequest is built from the cache entry. Which headers does it actually add? As the code below shows, only If-None-Match and If-Modified-Since, and only when the cached entry carries an ETag or Last-Modified value.

In CacheDispatcher's run() method:

if (entry.isExpired()) {
    request.addMarker("cache-hit-expired");
    request.setCacheEntry(entry); // attach the cache entry to the request
    mNetworkQueue.put(request);
    continue;
}


In Request:

public Request<?> setCacheEntry(Cache.Entry entry) {
    mCacheEntry = entry;
    return this;
}

/**
 * Returns the annotated cache entry, or null if there isn't one.
 */
public Cache.Entry getCacheEntry() {
    return mCacheEntry;
}


BasicNetwork.performRequest

addCacheHeaders(headers, request.getCacheEntry());
httpResponse = mHttpStack.performRequest(request, headers); // HttpClientStack's or HurlStack's performRequest

private void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
    // If there's no cache entry, we're done.
    if (entry == null) {
        return;
    }

    if (entry.etag != null) {
        headers.put("If-None-Match", entry.etag); // If-None-Match is taken from the cache entry, if present
    }

    if (entry.lastModified > 0) {
        Date refTime = new Date(entry.lastModified);
        headers.put("If-Modified-Since", DateUtils.formatDate(refTime)); // If-Modified-Since is taken from the cache entry, if present
    }
}


8. How are thread-safety problems avoided?

HttpClientStack is created with new HttpClientStack(AndroidHttpClient.newInstance(userAgent)); AndroidHttpClient implements HttpClient and internally uses ThreadSafeClientConnManager, a thread-safe connection manager.

HurlStack uses HttpURLConnection, and on recent Android versions HttpURLConnection is backed by OkHttp, which is thread safe.

In DDMS you can see an extra "OkHttp ConnectionPool" thread; that connection pool plays the same role as the ThreadSafeClientConnManager above.

This is why multiple dispatcher threads can safely share a single HurlStack/HttpClientStack.

But what if multiple threads share the same Request instance? That is bound to cause concurrency problems, as in this example:

for (int i = 0; i < 3; i++) {
    mQueue.add(stringRequest);
    new MyThread(stringRequest, this).start();
}

class MyThread extends Thread {
    private StringRequest stringRequest;
    private Context mContext;

    public MyThread(StringRequest stringRequest, Context context) {
        this.stringRequest = stringRequest;
        this.mContext = context;
    }

    @Override
    public void run() {
        RequestQueue mQueue = Volley.newRequestQueue(mContext);
        mQueue.add(stringRequest);
    }
}


9. Why is Volley unsuited to large data transfers?

In HurlStack:

response.setEntity(entityFromConnection(connection)); // response is an HttpResponse

private static HttpEntity entityFromConnection(HttpURLConnection connection) {
    BasicHttpEntity entity = new BasicHttpEntity();
    InputStream inputStream;
    try {
        inputStream = connection.getInputStream();
    } catch (IOException ioe) {
        inputStream = connection.getErrorStream();
    }
    entity.setContent(inputStream);
    entity.setContentLength(connection.getContentLength());
    entity.setContentEncoding(connection.getContentEncoding());
    entity.setContentType(connection.getContentType());
    return entity;
}


return mClient.execute(httpRequest); // HttpClientStack: also returns an HttpResponse


In both stacks the response is read entirely into memory first and only then written to the disk cache; there is no path that streams large payloads straight to a file, so large transfers can easily cause an OutOfMemoryError.

10. Why is Volley suited to small files but not to large downloads?

In BasicNetwork's performRequest method:

.........
byte[] responseContents = null;
.........
if (httpResponse.getEntity() != null) {
    responseContents = entityToBytes(httpResponse.getEntity());
}
.........
return new NetworkResponse(statusCode, responseContents, responseHeaders, false, SystemClock.elapsedRealtime() - requestStart);

// entityToBytes
/** Reads the contents of HttpEntity into a byte[]. */
private byte[] entityToBytes(HttpEntity entity) throws IOException, ServerError {
    PoolingByteArrayOutputStream bytes =
            new PoolingByteArrayOutputStream(mPool, (int) entity.getContentLength());
    byte[] buffer = null;
    try {
        InputStream in = entity.getContent();
        if (in == null) {
            throw new ServerError();
        }
        buffer = mPool.getBuf(1024);
        int count;
        while ((count = in.read(buffer)) != -1) {
            bytes.write(buffer, 0, count);
        }
        return bytes.toByteArray();
    } finally {
        try {
            // Close the InputStream and release the resources by "consuming the content".
            entity.consumeContent();
        } catch (IOException e) {
            // This can happen if there was an exception above that left the entity in
            // an invalid state.
            VolleyLog.v("Error occured when calling consumingContent");
        }
        mPool.returnBuf(buffer);
        bytes.close();
    }
}


As you can see, whatever the payload is, entityToBytes reads the InputStream obtained from entity.getContent() into a byte[]. If the file is very large, that byte[] is very large and an OutOfMemoryError becomes likely. On the other hand, because everything goes straight into memory, there is no extra read I/O (only a single write to the disk cache), so it is fast. Universal-Image-Loader's source takes the opposite approach for large images: the network stream is first written to the disk cache, then decodingOptions is computed from the image's dimensions and the stream is decoded from disk rather than loading the whole image at once: decodedBitmap = BitmapFactory.decodeStream(imageStream, null, decodingOptions); That costs two rounds of I/O and is slower, but it is far less likely to OOM.

11. What is good about Volley's design, and what are its weaknesses?

Strengths

Generics are used throughout, reflecting good code design; our own project hardly uses generics at all.

public abstract class Request<T> implements Comparable<Request<T>>
public class StringRequest extends Request<String>
public abstract class JsonRequest<T> extends Request<T>
public class JsonArrayRequest extends JsonRequest<JSONArray>
public class JsonObjectRequest extends JsonRequest<JSONObject>


Highly extensible: Volley is largely designed around interfaces and is very configurable.

Largely conforms to the HTTP spec, including handling of response codes (2xx, 3xx, 4xx, 5xx), request headers, and caching, plus support for retries and request priorities.

By default it uses HttpURLConnection on Android 2.3 and above, and HttpClient below 2.3.

Provides a convenient image-loading utility.

Well suited to small downloads: the response stream is read straight into memory, so no extra read I/O is needed (only one write to the disk cache).

Weaknesses

1. Not suited to large file downloads. Downloading an entire book into an in-memory byte[] is clearly wrong; the stream should be written to the disk cache and then read back in smaller pieces (say, one chapter at a time) so the in-memory byte[] stays small. Large images have the same problem: the whole byte[] lands in memory and is only compressed afterwards, which can still cause an OOM.

2. No ListView/GridView-specific optimizations: image mix-ups during view recycling and pause-on-fling are not handled. Universal-Image-Loader reportedly does better here; it is worth studying next.

3. Threads are not managed by a thread pool, so if you forget to stop/cancel the RequestQueue the threads are never released and resources are exhausted. android-async-http reportedly uses a thread pool, but it is built on HttpClient, which is outdated, so it is no longer recommended.

4. No file-upload support. Adding an API for it would help and would be easy, which in turn shows off Volley's extensibility.

5. No persistent-cookie support. Again easy to add, and another place where Volley's extensibility shows.

6. The disk cache defaults to internal storage, and no API is provided to place it on external storage (one workaround is sketched below):
File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR); private static final String DEFAULT_CACHE_DIR = "volley";
so by default it lives under /data/data/<application package>/cache/volley.
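A hedged workaround sketch for point 6, building the RequestQueue by hand instead of calling Volley.newRequestQueue; the directory name and 20 MB cap are illustrative, and real code should check that external storage is actually available (getExternalCacheDir() can return null):

// Sketch: put the disk cache on external storage by constructing the queue manually.
// imports assumed: java.io.File, com.android.volley.*, com.android.volley.toolbox.*
File externalCacheDir = new File(context.getExternalCacheDir(), "volley");   // e.g. /sdcard/Android/data/<pkg>/cache/volley
Cache diskCache = new DiskBasedCache(externalCacheDir, 20 * 1024 * 1024);    // 20 MB cap
RequestQueue queue = new RequestQueue(diskCache, new BasicNetwork(new HurlStack()));
queue.start();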

With this knowledge as a foundation, writing a networking framework of my own would now feel well within reach.