
Image caching with LruCache: loading images efficiently (study notes) + open-source project: DiskLruCache

2014-02-10 18:29
Bitmap caching reference: http://developer.android.com/training/displaying-bitmaps/cache-bitmap.html

Chinese translation of "Caching Bitmaps": http://dongg.diandian.com/post/2012-04-17/19116310


LruCache reference: http://developer.android.com/reference/android/util/LruCache.html

guolin: an efficient solution for loading large and multiple images on Android while avoiding OOM

guolin: implementing an Android photo-wall app that won't crash no matter how many images it loads

Using LruCache to cache images between memory and the SD card, and DiskLruCache to cache images between the SD card and the network server; link: "Android app development: using LruCache and DiskLruCache to cache images in memory and on the SD card"

1. Company project: caching and loading images with LruCache

The image-loading approach in the project: the core classes are GlobalImageLruCache.java and BtImageLoader.java. In essence, data is exchanged among three layers: memory (LruCache), disk (SD card), and the network server.

First, determine whether the incoming url is a local file path or a network path, i.e. whether to load from the local SD card or fetch from the server.

Loading an image: local SD-card file or HTTP.

LruCache and the SD card together cache the images.

If the url is a file path, load the image directly through the GlobalImageLRUCacher pipeline.
If it is a network path, first check whether the network image already exists as a file on the SD card. If it does, load it through the GlobalImageLRUCacher pipeline; if it does not, download it asynchronously with AsyncHttpClient to a designated SD-card path, then load it through the GlobalImageLRUCacher pipeline.

Currently LruCache caches Bitmaps read from the SD card. Server images never touch LruCache directly; they go through the SD card first.



(1) Question: images on the SD card and images in LruCache can get out of sync. How is that handled?

    LruCache lives in the phone's memory, so it is gone whenever the app restarts and the latest images are then pulled afresh from the server; the inconsistency therefore never persists.

(2) Question: what about inconsistency between images on the server and images on the SD card?

    The candidate fix was to delete the cached images on the SD card at app startup, so that any network fetch pulls a fresh copy from the server onto the SD card. That approach is poor, though, and was not adopted; the project still has this synchronization problem.

    (3) The LruCache used in the project (its principles and code are analyzed in the LruCache class-analysis note) mainly caches images from the phone's SD-card file directory. A reasonable LruCache capacity is chosen based on the current device's memory size. sizeOf() is overridden, and entryRemoved() is overridden so that the least-recently-used item being evicted is stored into a LinkedHashMap of SoftReferences. That soft-reference LinkedHashMap acts as a second-level cache in front of the SD card: whenever the number of items in LruCache exceeds the maximum, LruCache evicts the item at the head of the queue (the least recently used one), and entryRemoved() puts that item into the soft-reference LinkedHashMap instead of dropping it.
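The eviction-to-soft-reference scheme described above can be sketched in plain Java. android.util.LruCache is not available off-device, so an access-ordered LinkedHashMap with removeEldestEntry() stands in for LruCache.entryRemoved() here; the class name and the generic value type are illustrative (the project stores Bitmaps keyed by file path), not the real GlobalImageLRUCacher API:

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

public class TwoLevelCache<K, V> {
    // Level 2: softly-referenced victims of LRU eviction (second-level cache).
    private final Map<K, SoftReference<V>> softCache =
            new LinkedHashMap<K, SoftReference<V>>();

    // Level 1: strongly-referenced LRU map, bounded by entry count.
    private final LinkedHashMap<K, V> lruCache;

    public TwoLevelCache(final int maxEntries) {
        // Third constructor argument 'true' = access order, i.e. LRU iteration order.
        lruCache = new LinkedHashMap<K, V>(0, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > maxEntries) {
                    // Plays the role of entryRemoved(): demote the least-recently-used
                    // entry into the soft-reference level instead of discarding it.
                    softCache.put(eldest.getKey(), new SoftReference<V>(eldest.getValue()));
                    return true;
                }
                return false;
            }
        };
    }

    public synchronized V get(K key) {
        V value = lruCache.get(key);                   // level 1: strong LRU
        if (value != null) return value;
        SoftReference<V> ref = softCache.remove(key);  // level 2: soft references
        if (ref != null) {
            value = ref.get();                         // may be null after GC pressure
            if (value != null) lruCache.put(key, value); // promote back to level 1
        }
        return value;
    }

    public synchronized void put(K key, V value) {
        lruCache.put(key, value);
    }
}
```

In the real project, level-2 misses fall through to decoding the file from the SD card; the soft references merely give the garbage collector the option to reclaim demoted Bitmaps under memory pressure.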

    The lookup logic (GlobalImageLruCache) is: first try LruCache; if the image is there, return it. Otherwise try the soft-reference LinkedHashMap; if it is there, return it. Otherwise read the image asynchronously (via a Handler) from its file path on the SD card, hand it to the caller through a callback, and store it into LruCache.

    The entry point client developers call (BtImageLoader) works as follows: if the incoming url is a network url, first map it to an SD-card file path and check whether that file exists. If it does, go straight through the LruCache logic to fetch the image. If it does not (i.e. this is the first request for that image), fetch it from the network server, write it to the SD card, and then go through the LruCache logic to fetch the image.
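The BtImageLoader-style dispatch above can be sketched as follows. The names (ImageDispatcher, Downloader) and the url-to-filename hashing are illustrative stand-ins, not the project's real API; the Downloader interface takes the place of the AsyncHttpClient download:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.security.MessageDigest;

public class ImageDispatcher {
    // Stand-in for the asynchronous HTTP download used in the project.
    public interface Downloader {
        byte[] fetch(String url) throws IOException;
    }

    private final File cacheDir;
    private final Downloader downloader;

    public ImageDispatcher(File cacheDir, Downloader downloader) {
        this.cacheDir = cacheDir;
        this.downloader = downloader;
    }

    /** Returns the local file to hand on to the LruCache loading path. */
    public File resolve(String url) throws IOException {
        if (!url.startsWith("http")) {
            return new File(url);            // already a local file path
        }
        File cached = new File(cacheDir, md5Hex(url));
        if (!cached.exists()) {              // first request: download to the SD card
            byte[] data = downloader.fetch(url);
            try (FileOutputStream out = new FileOutputStream(cached)) {
                out.write(data);
            }
        }
        return cached;                       // subsequent requests hit this file
    }

    // Hash the url so the cache file name is filesystem-safe and unique per url.
    static String md5Hex(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8"));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

After resolve() returns, both the local-path and network cases converge on the same file-based loading pipeline, which is exactly the structure the paragraph describes.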

2. Analysis of the DiskLruCache open-source project (from GitHub: https://github.com/JakeWharton/DiskLruCache)

Reference: "Android open-source project analysis: DiskLruCache"


Pros and cons:

The essential difference between this open-source project and the company project: our cache has three levels (memory, a soft-reference LinkedHashMap, and the SD card) and stores Bitmaps directly, whereas the open-source project uses only the SD card to cache network images. Its LinkedHashMap stores just the on-disk file paths, and every use re-reads the image from the SD card through an InputStream. That makes retrieval slower but saves phone memory, since holding Bitmaps directly in memory is expensive. The company design suits small images (e.g. thumbnails); the open-source project suits large ones (e.g. an image browser or photo album).
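To make the trade-off concrete: a decoded bitmap occupies roughly width × height × bytes-per-pixel in memory (4 bytes per pixel for the common ARGB_8888 format), while the file path the open-source project keeps is only a few dozen bytes. A minimal sketch of the arithmetic:

```java
public class BitmapMemory {
    // Approximate in-memory size of a decoded bitmap.
    // ARGB_8888, the usual Android config, uses 4 bytes per pixel.
    public static long bitmapBytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }
}
```

A single 1024×768 ARGB_8888 image is therefore 3 MB of heap, which is why keeping many full-size Bitmaps strongly referenced in memory only pays off for small thumbnails.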

Code analysis:

File layout; the main file is DiskLruCache:



The annotated source:

/*
* Copyright (C) 2011 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
*      http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package com.jakewharton.disklrucache;

import java.io.BufferedWriter;
import java.io.Closeable;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
* A cache that uses a bounded amount of space on a filesystem. Each cache entry
* has a string key and a fixed number of values. Each key must match the regex
* <strong>[a-z0-9_-]{1,64}</strong>. Values are byte sequences, accessible as
* streams or files. Each value must be between {@code 0} and
* {@code Integer.MAX_VALUE} bytes in length.
*
* <p>
* The cache stores its data in a directory on the filesystem. This directory
* must be exclusive to the cache; the cache may delete or overwrite files from
* its directory. It is an error for multiple processes to use the same cache
* directory at the same time.
*
* <p>
* This cache limits the number of bytes that it will store on the filesystem.
* When the number of stored bytes exceeds the limit, the cache will remove
* entries in the background until the limit is satisfied. The limit is not
* strict: the cache may temporarily exceed it while waiting for files to be
* deleted. The limit does not include filesystem overhead or the cache journal
* so space-sensitive applications should set a conservative limit.
*
* <p>
* Clients call {@link #edit} to create or update the values of an entry. An
* entry may have only one editor at one time; if a value is not available to be
* edited then {@link #edit} will return null.
* <ul>
* <li>When an entry is being <strong>created</strong> it is necessary to supply
* a full set of values; the empty value should be used as a placeholder if
* necessary.
* <li>When an entry is being <strong>edited</strong>, it is not necessary to
* supply data for every value; values default to their previous value.
* </ul>
* Every {@link #edit} call must be matched by a call to {@link Editor#commit}
* or {@link Editor#abort}. Committing is atomic: a read observes the full set
* of values as they were before or after the commit, but never a mix of values.
*
* <p>
* Clients call {@link #get} to read a snapshot of an entry. The read will
* observe the value at the time that {@link #get} was called. Updates and
* removals after the call do not impact ongoing reads.
*
* <p>
* This class is tolerant of some I/O errors. If files are missing from the
* filesystem, the corresponding entries will be dropped from the cache. If an
* error occurs while writing a cache value, the edit will fail silently.
* Callers should handle other problems by catching {@code IOException} and
* responding appropriately.
*
* DiskLruCache is a disk-cache manager originating in the Android Open Source
* Project. It can persist data loaded from the network to disk as cache data;
* for example, a GridView displaying network images can cache the downloaded
* images to make the app more robust.
*/
public final class DiskLruCache implements Closeable {
static final String JOURNAL_FILE = "journal";
static final String JOURNAL_FILE_TEMP = "journal.tmp";
static final String JOURNAL_FILE_BACKUP = "journal.bkp";

static final String MAGIC = "libcore.io.DiskLruCache";
static final String VERSION_1 = "1";
static final long ANY_SEQUENCE_NUMBER = -1;

static final Pattern LEGAL_KEY_PATTERN = Pattern.compile("[a-z0-9_-]{1,64}");

private static final String CLEAN = "CLEAN";
private static final String DIRTY = "DIRTY";
private static final String REMOVE = "REMOVE";
private static final String READ = "READ";

/*
* This cache uses a journal file named "journal". A typical journal file
* looks like this: libcore.io.DiskLruCache 1 100 2
*
* CLEAN 3400330d1dfc7f3f7f4b8d4d803dfcf6 832 21054
* DIRTY 335c4c6028171cfddfbaae1a9c313c52
* CLEAN 335c4c6028171cfddfbaae1a9c313c52 3934 2342
* REMOVE 335c4c6028171cfddfbaae1a9c313c52
* DIRTY 1ab96a171faeeee38496d8b330771a7a
* CLEAN 1ab96a171faeeee38496d8b330771a7a 1600 234
* READ 335c4c6028171cfddfbaae1a9c313c52
* READ 3400330d1dfc7f3f7f4b8d4d803dfcf6
*
* The first five lines of the journal form its header. They are the
* constant string "libcore.io.DiskLruCache", the disk cache's version, the
* application's version, the value count, and a blank line.
*
* Each of the subsequent lines in the file is a record of the state of a
* cache entry. Each line contains space-separated values: a state, a key,
* and optional state-specific values. o DIRTY lines track that an entry is
* actively being created or updated. Every successful DIRTY action should
* be followed by a CLEAN or REMOVE action. DIRTY lines without a matching
* CLEAN or REMOVE indicate that temporary files may need to be deleted. o
* CLEAN lines track a cache entry that has been successfully published and
* may be read. A publish line is followed by the lengths of each of its
* values. o READ lines track accesses for LRU. o REMOVE lines track entries
* that have been deleted.
*
* The journal file is appended to as cache operations occur. The journal
* may occasionally be compacted by dropping redundant lines. A temporary
* file named "journal.tmp" will be used during compaction; that file should
* be deleted if it exists when the cache is opened.
*/

private final File directory; // the working directory of this DiskLruCache
private final File journalFile; // the journal file

/*
* Temporary file used while a journal is being (re)built; once the build
* completes, journalFileTmp is renamed to journal.
*/
private final File journalFileTmp;
private final File journalFileBackup;
private final int appVersion;
private long maxSize; // maximum number of bytes this cache may hold
private final int valueCount; // number of values (files) per cache entry; typically 1
private long size = 0; // number of bytes currently cached
private Writer journalWriter; // writer appending records to the journal file

// the cache's core data structure
private final LinkedHashMap<String, Entry> lruEntries = new LinkedHashMap<String, Entry>(0, 0.75f, true);

/*
* Number of entry-state records currently in the journal; when it grows past a
* threshold, the journal is compacted.
*/
private int redundantOpCount;

/**
* To differentiate between old and current snapshots, each entry is given a
* sequence number each time an edit is committed. A snapshot is stale if
* its sequence number is not equal to its entry's sequence number.
* Distinguishes stale snapshots from the current entry.
*/
private long nextSequenceNumber = 0;

/** This cache uses a single background thread to evict entries. */
final ThreadPoolExecutor executorService = new ThreadPoolExecutor(0, 1, 60L, TimeUnit.SECONDS,
new LinkedBlockingQueue<Runnable>());

private final Callable<Void> cleanupCallable = new Callable<Void>() {
@Override
public Void call() throws Exception {
synchronized (DiskLruCache.this) {
if (journalWriter == null) {
return null; // Closed.
}
trimToSize();
if (journalRebuildRequired()) {
rebuildJournal();
redundantOpCount = 0;
}
}
return null;
}
};

private DiskLruCache(File directory, int appVersion, int valueCount, long maxSize) {
this.directory = directory;
this.appVersion = appVersion;
this.journalFile = new File(directory, JOURNAL_FILE);
this.journalFileTmp = new File(directory, JOURNAL_FILE_TEMP);
this.journalFileBackup = new File(directory, JOURNAL_FILE_BACKUP);
this.valueCount = valueCount;
this.maxSize = maxSize;
}

/**
* Opens the cache in {@code directory}, creating a cache if none exists
* there.
* Initializes a DiskLruCache instance. If a cache already exists in the
* directory, the entry list is rebuilt from the existing journal file
* (see the related methods in DiskLruCache); otherwise a new journal is
* created via rebuildJournal().
*
* @param directory
*            a writable directory
* @param valueCount
*            the number of values per cache entry. Must be positive.
* @param maxSize
*            the maximum number of bytes this cache should use to store
* @throws IOException
*             if reading or writing the cache directory fails
*/
public static DiskLruCache open(File directory, int appVersion, int valueCount, long maxSize)
throws IOException {

if (maxSize <= 0) {
throw new IllegalArgumentException("maxSize <= 0");
}
if (valueCount <= 0) {
throw new IllegalArgumentException("valueCount <= 0");
}

// If a bkp file exists, use it instead.
File backupFile = new File(directory, JOURNAL_FILE_BACKUP);
if (backupFile.exists()) {
File journalFile = new File(directory, JOURNAL_FILE);

// If journal file also exists just delete backup file.
if (journalFile.exists()) {
backupFile.delete();
} else {
renameTo(backupFile, journalFile, false);
}
}

// Prefer to pick up where we left off.
DiskLruCache cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
if (cache.journalFile.exists()) {
try {
cache.readJournal();
cache.processJournal();
cache.journalWriter = new BufferedWriter(new OutputStreamWriter(
new FileOutputStream(cache.journalFile, true), Util.US_ASCII));

return cache;
} catch (IOException journalIsCorrupt) {
System.out.println("DiskLruCache " + directory + " is corrupt: "
+ journalIsCorrupt.getMessage() + ", removing");

cache.delete();
}
}

// Create a new empty cache.
directory.mkdirs();
cache = new DiskLruCache(directory, appVersion, valueCount, maxSize);
cache.rebuildJournal();
return cache;
}

/*
* Reads the journal file according to its format.
*/
private void readJournal() throws IOException {
StrictLineReader reader = new StrictLineReader(new FileInputStream(journalFile), Util.US_ASCII);
try {
String magic = reader.readLine();
String version = reader.readLine();
String appVersionString = reader.readLine();
String valueCountString = reader.readLine();
String blank = reader.readLine();

if (!MAGIC.equals(magic) || !VERSION_1.equals(version)
|| !Integer.toString(appVersion).equals(appVersionString)
|| !Integer.toString(valueCount).equals(valueCountString)
|| !"".equals(blank)) {

throw new IOException("unexpected journal header: [" + magic + ", "
+ version + ", " + valueCountString + ", " + blank + "]");
}

int lineCount = 0;
while (true) {
try {
readJournalLine(reader.readLine());
lineCount++;
} catch (EOFException endOfJournal) {
break;
}
}
redundantOpCount = lineCount - lruEntries.size();
} finally {
Util.closeQuietly(reader);
}
}

/*
* Parses a single line of the journal file.
*/
private void readJournalLine(String line) throws IOException {
int firstSpace = line.indexOf(' ');
if (firstSpace == -1) {
throw new IOException("unexpected journal line: " + line);
}

int keyBegin = firstSpace + 1;
int secondSpace = line.indexOf(' ', keyBegin);
final String key;
if (secondSpace == -1) {
key = line.substring(keyBegin);
if (firstSpace == REMOVE.length() && line.startsWith(REMOVE)) {
lruEntries.remove(key);
return;
}
} else {
key = line.substring(keyBegin, secondSpace); // extract the key
}

Entry entry = lruEntries.get(key);
if (entry == null) {
entry = new Entry(key);
lruEntries.put(key, entry);
}

if (secondSpace != -1 && firstSpace == CLEAN.length() && line.startsWith(CLEAN)) { // a published (complete) entry
String[] parts = line.substring(secondSpace + 1).split(" ");
entry.readable = true;
entry.currentEditor = null;
entry.setLengths(parts);
} else if (secondSpace == -1 && firstSpace == DIRTY.length() && line.startsWith(DIRTY)) { // currently being edited
entry.currentEditor = new Editor(entry);
} else if (secondSpace == -1 && firstSpace == READ.length() && line.startsWith(READ)) {
// This work was already done by calling lruEntries.get().
} else {
throw new IOException("unexpected journal line: " + line);
}
}

/**
* Computes the initial size and collects garbage as a part of opening the
* cache. Dirty entries are assumed to be inconsistent and will be deleted.
*/
private void processJournal() throws IOException {
deleteIfExists(journalFileTmp);
for (Iterator<Entry> i = lruEntries.values().iterator(); i.hasNext(); ) {
Entry entry = i.next();
if (entry.currentEditor == null) {
for (int t = 0; t < valueCount; t++) {
size += entry.lengths[t];
}
} else {
entry.currentEditor = null;
for (int t = 0; t < valueCount; t++) {
deleteIfExists(entry.getCleanFile(t));
deleteIfExists(entry.getDirtyFile(t));
}
i.remove();
}
}
}

/**
* Creates a new journal that omits redundant information. This replaces the
* current journal if it exists.
* Rebuilds the journal file in the standard journal format.
*/
private synchronized void rebuildJournal() throws IOException {
if (journalWriter != null) {
journalWriter.close();
}

Writer writer = new BufferedWriter(new OutputStreamWriter(
new FileOutputStream(journalFileTmp), Util.US_ASCII));
try {
writer.write(MAGIC);
writer.write("\n");
writer.write(VERSION_1);
writer.write("\n");
writer.write(Integer.toString(appVersion));
writer.write("\n");
writer.write(Integer.toString(valueCount));
writer.write("\n");
writer.write("\n");

for (Entry entry : lruEntries.values()) {
if (entry.currentEditor != null) {
writer.write(DIRTY + ' ' + entry.key + '\n');
} else {
writer.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
}
}
} finally {
writer.close();
}

if (journalFile.exists()) {
renameTo(journalFile, journalFileBackup, true);
}
renameTo(journalFileTmp, journalFile, false);
journalFileBackup.delete();

journalWriter = new BufferedWriter(new OutputStreamWriter(
new FileOutputStream(journalFile, true), Util.US_ASCII));
}

private static void deleteIfExists(File file) throws IOException {
if (file.exists() && !file.delete()) {
throw new IOException();
}
}

/*
* Renames a file.
*/
private static void renameTo(File from, File to, boolean deleteDestination) throws IOException {
if (deleteDestination) {
deleteIfExists(to);
}
if (!from.renameTo(to)) {
throw new IOException();
}
}

/**
* Returns a snapshot of the entry named {@code key}, or null if it doesn't
* exist or is not currently readable. If a value is returned, it is moved to
* the head of the LRU queue.
*/
public synchronized Snapshot get(String key) throws IOException {
checkNotClosed();
validateKey(key);
Entry entry = lruEntries.get(key);
if (entry == null) {
return null;
}

if (!entry.readable) {
return null;
}

// Open all streams eagerly to guarantee that we see a single published
// snapshot. If we opened streams lazily then the streams could come
// from different edits.
InputStream[] ins = new InputStream[valueCount];
try {
for (int i = 0; i < valueCount; i++) {
ins[i] = new FileInputStream(entry.getCleanFile(i));
}
} catch (FileNotFoundException e) {	// A file must have been deleted manually!
for (int i = 0; i < valueCount; i++) {
if (ins[i] != null) {
Util.closeQuietly(ins[i]);
} else {
break;
}
}
return null;
}

redundantOpCount++;
journalWriter.append(READ + ' ' + key + '\n');
if (journalRebuildRequired()) {
executorService.submit(cleanupCallable);
}

return new Snapshot(key, entry.sequenceNumber, ins, entry.lengths);
}

/**
* Returns an editor for the entry named {@code key}, or null if another
* edit is in progress.
* Obtains an Editor for writing values into the cache.
*/
public Editor edit(String key) throws IOException {
return edit(key, ANY_SEQUENCE_NUMBER);
}

/*
* Obtains the Editor; at most one Editor exists per entry at any time.
*/
private synchronized Editor edit(String key, long expectedSequenceNumber) throws IOException {
checkNotClosed();
validateKey(key);
Entry entry = lruEntries.get(key);
if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER
&& (entry == null || entry.sequenceNumber != expectedSequenceNumber)) {

return null; // Snapshot is stale.
}
if (entry == null) {
entry = new Entry(key);
lruEntries.put(key, entry);
} else if (entry.currentEditor != null) {
return null; // Another edit is in progress.
}

Editor editor = new Editor(entry);
entry.currentEditor = editor;

// Flush the journal before creating files to prevent file leaks.
journalWriter.write(DIRTY + ' ' + key + '\n');
journalWriter.flush();
return editor;
}

/** Returns the directory where this cache stores its data. */
public File getDirectory() {
return directory;
}

/**
* Returns the maximum number of bytes that this cache should use to store
* its data.
*/
public synchronized long getMaxSize() {
return maxSize;
}

/**
* Changes the maximum number of bytes the cache can store and queues a job
* to trim the existing store, if necessary.
*/
public synchronized void setMaxSize(long maxSize) {
this.maxSize = maxSize;
executorService.submit(cleanupCallable);
}

/**
* Returns the number of bytes currently being used to store the values in
* this cache. This may be greater than the max size if a background
* deletion is pending.
*/
public synchronized long size() {
return size;
}

/*
* Completes an edit performed through an Editor.
*/
private synchronized void completeEdit(Editor editor, boolean success) throws IOException {
Entry entry = editor.entry;
if (entry.currentEditor != editor) {
throw new IllegalStateException();
}

// If this edit is creating the entry for the first time, every index
// must have a value.
if (success && !entry.readable) {
for (int i = 0; i < valueCount; i++) {
if (!editor.written[i]) {
editor.abort();
throw new IllegalStateException("Newly created entry didn't create value for index " + i);
}
if (!entry.getDirtyFile(i).exists()) {
editor.abort();
return;
}
}
}

for (int i = 0; i < valueCount; i++) {
File dirty = entry.getDirtyFile(i);
if (success) {
if (dirty.exists()) {
File clean = entry.getCleanFile(i);
dirty.renameTo(clean);
long oldLength = entry.lengths[i];
long newLength = clean.length();
entry.lengths[i] = newLength;
size = size - oldLength + newLength;
}
} else {
deleteIfExists(dirty);
}
}

redundantOpCount++;
entry.currentEditor = null;
if (entry.readable | success) {
entry.readable = true;
journalWriter.write(CLEAN + ' ' + entry.key + entry.getLengths() + '\n');
if (success) {
entry.sequenceNumber = nextSequenceNumber++;
}
} else {
lruEntries.remove(entry.key);
journalWriter.write(REMOVE + ' ' + entry.key + '\n');
}
journalWriter.flush();

if (size > maxSize || journalRebuildRequired()) {
executorService.submit(cleanupCallable);
}
}

/**
* We only rebuild the journal when it will halve the size of the journal
* and eliminate at least 2000 ops.
*/
private boolean journalRebuildRequired() {
final int redundantOpCompactThreshold = 2000;
return redundantOpCount >= redundantOpCompactThreshold && redundantOpCount >= lruEntries.size();
}

/**
* Drops the entry for {@code key} if it exists and can be removed. Entries
* actively being edited cannot be removed.
*
* @return true if an entry was removed.
*/
public synchronized boolean remove(String key) throws IOException {
checkNotClosed();
validateKey(key);
Entry entry = lruEntries.get(key);
if (entry == null || entry.currentEditor != null) {
return false;
}

for (int i = 0; i < valueCount; i++) {
File file = entry.getCleanFile(i);
if (file.exists() && !file.delete()) {
throw new IOException("failed to delete " + file);
}
size -= entry.lengths[i];
entry.lengths[i] = 0;
}

redundantOpCount++;
journalWriter.append(REMOVE + ' ' + key + '\n');
lruEntries.remove(key);

if (journalRebuildRequired()) {
executorService.submit(cleanupCallable);
}

return true;
}

/** Returns true if this cache has been closed. */
public synchronized boolean isClosed() {
return journalWriter == null;
}

private void checkNotClosed() {
if (journalWriter == null) {
throw new IllegalStateException("cache is closed");
}
}

/** Force buffered operations to the filesystem. */
public synchronized void flush() throws IOException {
checkNotClosed();
trimToSize();
journalWriter.flush();
}

/**
* Closes this cache. Stored values will remain on the filesystem.
*/
public synchronized void close() throws IOException {
if (journalWriter == null) {
return; // Already closed.
}
for (Entry entry : new ArrayList<Entry>(lruEntries.values())) {
if (entry.currentEditor != null) {
entry.currentEditor.abort();
}
}
trimToSize();
journalWriter.close();
journalWriter = null;
}

/*
* While the cache exceeds maxSize, evicts the entry at the head of the map
* (the least recently used entry).
*/
private void trimToSize() throws IOException {
while (size > maxSize) {
Map.Entry<String, Entry> toEvict = lruEntries.entrySet().iterator().next();
remove(toEvict.getKey());
}
}

/**
* Closes the cache and deletes all of its stored values. This will delete
* all files in the cache directory including files that weren't created by
* the cache.
*/
public void delete() throws IOException {
close();
Util.deleteContents(directory);
}

/*
* Validates that the key is legal.
*/
private void validateKey(String key) {
Matcher matcher = LEGAL_KEY_PATTERN.matcher(key);
if (!matcher.matches()) {
throw new IllegalArgumentException("keys must match regex [a-z0-9_-]{1,64}: \"" + key + "\"");
}
}

private static String inputStreamToString(InputStream in) throws IOException {
return Util.readFully(new InputStreamReader(in, Util.UTF_8));
}

/**
* A snapshot of the values for an entry.
* This class is a snapshot of one entry's cached files; it holds an
* InputStream for each file, through which the contents can be read.
*/
public final class Snapshot implements Closeable {
private final String key;
private final long sequenceNumber;
private final InputStream[] ins;
private final long[] lengths;

private Snapshot(String key, long sequenceNumber, InputStream[] ins, long[] lengths) {
this.key = key;
this.sequenceNumber = sequenceNumber;
this.ins = ins;
this.lengths = lengths;
}

/**
* Returns an editor for this snapshot's entry, or null if either the
* entry has changed since this snapshot was created or if another edit
* is in progress.
* At most one Editor can be obtained at a time.
*/
public Editor edit() throws IOException {
return DiskLruCache.this.edit(key, sequenceNumber);
}

/** Returns the unbuffered stream with the value for {@code index}. */
public InputStream getInputStream(int index) {
return ins[index];
}

/** Returns the string value for {@code index}. */
public String getString(int index) throws IOException {
return inputStreamToString(getInputStream(index));
}

/** Returns the byte length of the value for {@code index}. */
public long getLength(int index) {
return lengths[index];
}

public void close() {
for (InputStream in : ins) {
Util.closeQuietly(in);
}
}
}

private static final OutputStream NULL_OUTPUT_STREAM = new OutputStream() {
@Override
public void write(int b) throws IOException {
// Eat all writes silently. Nom nom.
}
};

/**
* Edits the values for an entry.
* Controls the read/write operations on a single entry.
*/
public final class Editor {
private final Entry entry;
private final boolean[] written;
private boolean hasErrors;
private boolean committed;

private Editor(Entry entry) {
this.entry = entry;
this.written = (entry.readable) ? null : new boolean[valueCount];
}

/**
* Returns an unbuffered input stream to read the last committed value,
* or null if no value has been committed.
*/
public InputStream newInputStream(int index) throws IOException {
synchronized (DiskLruCache.this) {
if (entry.currentEditor != this) {
throw new IllegalStateException();
}
if (!entry.readable) {
return null;
}
try {
return new FileInputStream(entry.getCleanFile(index));
} catch (FileNotFoundException e) {
return null;
}
}
}

/**
* Returns the last committed value as a string, or null if no value has
* been committed.
*/
public String getString(int index) throws IOException {
InputStream in = newInputStream(index);
return in != null ? inputStreamToString(in) : null;
}

/**
* Returns a new unbuffered output stream to write the value at
* {@code index}. If the underlying output stream encounters errors when
* writing to the filesystem, this edit will be aborted when
* {@link #commit} is called. The returned output stream does not throw
* IOExceptions.
* The backing file for this editor's key is named key.index.tmp.
*/
public OutputStream newOutputStream(int index) throws IOException {
synchronized (DiskLruCache.this) {
if (entry.currentEditor != this) {
throw new IllegalStateException();
}
if (!entry.readable) {
written[index] = true;
}
File dirtyFile = entry.getDirtyFile(index);
FileOutputStream outputStream;
try {
outputStream = new FileOutputStream(dirtyFile);
} catch (FileNotFoundException e) {

// Attempt to recreate the cache directory.
directory.mkdirs();
try {
outputStream = new FileOutputStream(dirtyFile);
} catch (FileNotFoundException e2) {

// We are unable to recover. Silently eat the writes.
return NULL_OUTPUT_STREAM;
}
}
return new FaultHidingOutputStream(outputStream);
}
}

/**
* Sets the value at {@code index} to {@code value}.
*/
public void set(int index, String value) throws IOException {
Writer writer = null;
try {
writer = new OutputStreamWriter(newOutputStream(index), Util.UTF_8);
writer.write(value);
} finally {
Util.closeQuietly(writer);
}
}

/**
* Commits this edit so it is visible to readers. This releases the edit
* lock so another edit may be started on the same key.
*/
public void commit() throws IOException {
if (hasErrors) {
completeEdit(this, false);
remove(entry.key); // The previous entry is stale.
} else {
completeEdit(this, true);
}
committed = true;
}

/**
* Aborts this edit. This releases the edit lock so another edit may be
* started on the same key.
*/
public void abort() throws IOException {
completeEdit(this, false);
}

public void abortUnlessCommitted() {
if (!committed) {
try {
abort();
} catch (IOException ignored) {
}
}
}

private class FaultHidingOutputStream extends FilterOutputStream {
private FaultHidingOutputStream(OutputStream out) {
super(out);
}

@Override
public void write(int oneByte) {
try {
out.write(oneByte);
} catch (IOException e) {
hasErrors = true;
}
}

@Override
public void write(byte[] buffer, int offset, int length) {
try {
out.write(buffer, offset, length);
} catch (IOException e) {
hasErrors = true;
}
}

@Override
public void close() {
try {
out.close();
} catch (IOException e) {
hasErrors = true;
}
}

@Override
public void flush() {
try {
out.flush();
} catch (IOException e) {
hasErrors = true;
}
}
}
}

/*
* Represents a single cache entry; on the SD card its files are named
* key.index and key.index.tmp.
*/
private final class Entry {
private final String key;	// used as the key in lruEntries and in the file names key.index / key.index.tmp

/** Lengths of this entry's files. */
private final long[] lengths; // length of each of this entry's files; the array length is valueCount

/** True if this entry has ever been published. */
private boolean readable; // true once the entry has been published, i.e. editing finished and it is readable

/** The ongoing edit or null if this entry is not being edited. */
private Editor currentEditor; // the editor currently editing this entry, or null

/**
* The sequence number of the most recently committed edit to this
* entry.
*/
private long sequenceNumber; // sequence number of the most recent committed edit

private Entry(String key) {
this.key = key;
this.lengths = new long[valueCount];
}

public String getLengths() throws IOException {
StringBuilder result = new StringBuilder();
for (long size : lengths) {
result.append(' ').append(size);
}
return result.toString();
}

/** Set lengths using decimal numbers like "10123". */
private void setLengths(String[] strings) throws IOException {
if (strings.length != valueCount) {
throw invalidLengths(strings);
}

try {
for (int i = 0; i < strings.length; i++) {
lengths[i] = Long.parseLong(strings[i]);
}
} catch (NumberFormatException e) {
throw invalidLengths(strings);
}
}

private IOException invalidLengths(String[] strings) throws IOException {
throw new IOException("unexpected journal line: "
+ java.util.Arrays.toString(strings));
}

public File getCleanFile(int i) {
return new File(directory, key + "." + i);
}

public File getDirtyFile(int i) {
return new File(directory, key + "." + i + ".tmp");
}
}
}


3. OOM during image loading. Solution: guolin's article on efficiently loading large and multiple images while avoiding OOM. It only explains how to obtain a thumbnail after the image has been loaded, though; the image pulled from the network server is still full size, which wastes bandwidth. How can the thumbnail size be specified at the moment the image is first requested from the server, to save traffic?

See the open-source ImageLoader framework for reference. Our project does not handle this either, so it is a possible optimization. Another approach our project uses for fetching images from the server is to specify the requested image size in the url; the server crops the image accordingly and returns it to the client for further processing.
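On the client side, the downsampling from guolin's article uses a two-pass decode: first read only the image bounds (inJustDecodeBounds = true on BitmapFactory.Options), then decode with a power-of-two inSampleSize. BitmapFactory itself needs the Android runtime, so only the pure arithmetic of the standard sample-size computation is sketched here:

```java
public class SampleSize {
    // Computes the largest power-of-two inSampleSize such that the decoded
    // image is still at least as large as the requested dimensions.
    public static int calculateInSampleSize(int rawWidth, int rawHeight,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (rawHeight > reqHeight || rawWidth > reqWidth) {
            int halfHeight = rawHeight / 2;
            int halfWidth = rawWidth / 2;
            // Keep doubling while further halving stays above the request.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }
}
```

For example, decoding a 2048×1536 source for a 512×384 target yields inSampleSize = 4, reducing the decoded bitmap's memory footprint by a factor of 16, but note this still downloads the full-size bytes; only the server-side resizing mentioned above saves traffic.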