
From HashMap to LruCache: A Source Code Analysis

2016-03-03 11:30
The memory cache in the Android image-loading library Android-Universal-Image-Loader is LruCache, which implements the Least Recently Used (LRU) policy. A few days ago, while reading up on operating systems, I ran into LRU again: there it is the replacement algorithm used when a page fault occurs. The LruCache used for caching and the page-replacement algorithm in an operating system are built on the same idea, so on a whim I decided to read through the implementation. The result is this post, which covers the source of three classes, from HashMap to LinkedHashMap to LruCache, and took a whole evening.

HashMap is backed by an array of buckets; collisions are resolved by chaining entries into a linked list, and new entries are inserted at the head of that list (head insertion).
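
To make the bucket array, chaining, and head insertion concrete before diving into the real source, here is a minimal sketch of the idea. The class and its names (TinyChainedMap, Node) are invented for illustration and are not part of the JDK code analyzed below.

import java.util.Objects;

// Minimal illustration of bucket indexing plus head insertion (hypothetical class, not the JDK code)
public class TinyChainedMap<K, V> {

    private static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;

        Node(K key, V value, Node<K, V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] table = (Node<K, V>[]) new Node[16]; // capacity kept a power of two

    public void put(K key, V value) {
        int i = key.hashCode() & (table.length - 1);      // same masking trick as HashMap.indexFor
        for (Node<K, V> n = table[i]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) {             // key already present: update in place
                n.value = value;
                return;
            }
        }
        table[i] = new Node<>(key, value, table[i]);      // head insertion: the new node becomes the bucket head
    }

    public V get(K key) {
        int i = key.hashCode() & (table.length - 1);
        for (Node<K, V> n = table[i]; n != null; n = n.next) {
            if (Objects.equals(n.key, key)) {
                return n.value;
            }
        }
        return null;                                      // not found
    }
}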

LinkedHashMap extends HashMap. On top of the hash table it also maintains a linked list: a doubly linked circular list with a sentinel header node. Note that the nodes of this list are exactly the entries stored in the hash table; the list merely threads extra pointers through those entries to connect them.

LruCache is a class in Android's util package. It is a widely used utility for in-memory caching that helps guard against OOM.
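
Before the source, here is a hedged sketch of how LruCache is typically used, as a bitmap memory cache whose size is measured in kilobytes. The one-eighth-of-heap budget and the getByteCount() call are common conventions chosen for illustration, not something mandated by LruCache itself, and the class name is made up.

import android.graphics.Bitmap;
import android.util.LruCache;

final class BitmapMemoryCache {

    // Budget the cache at roughly 1/8 of the app's max heap, measured in KB (an assumption for this sketch)
    private final LruCache<String, Bitmap> cache =
            new LruCache<String, Bitmap>((int) (Runtime.getRuntime().maxMemory() / 1024 / 8)) {
                @Override
                protected int sizeOf(String key, Bitmap value) {
                    // Report each entry's cost in KB, so maxSize is a memory budget rather than an entry count
                    return value.getByteCount() / 1024;
                }
            };

    void put(String key, Bitmap bitmap) {
        cache.put(key, bitmap);
    }

    Bitmap get(String key) {
        return cache.get(key); // null if the bitmap was never cached or has already been evicted
    }
}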

About the LRU algorithm: LRU stands for Least Recently Used. In an operating system, memory accesses exhibit locality of reference, so LRU is used as the page-replacement policy when a page fault occurs: the page in memory that has gone unused the longest is swapped out to disk. One way to implement it is to maintain a linked list; whenever a page is accessed it is moved to the head (or tail) of the list, and on a page fault the page at the tail (or head) is evicted. The problem is that finding a given page in the list is too slow, so the entries can additionally be hashed to make lookups fast, and that combination is exactly the LinkedHashMap data structure.
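
The hash-plus-linked-list combination described above is exactly what java.util.LinkedHashMap provides when it is constructed in access order; here is a small demonstration (the keys and values are arbitrary):

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration order becomes least recently used first, most recently used last
        Map<String, Integer> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("a", 1);
        lru.put("b", 2);
        lru.put("c", 3);

        lru.get("a"); // touching "a" moves it to the most recently used end

        // Prints [b, c, a]: the entry at the front ("b") is the LRU candidate for eviction
        System.out.println(lru.keySet());
    }
}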

The LruCache class in Android is mainly used for memory caching. It holds strong references to the cached resources and, once the total exceeds the configured cache size, removes the resources that have gone unused the longest from memory.
In its implementation, when the cached total reaches the maximum, LruCache repeatedly takes the eldest entry from the linked list and removes it, until the space in use drops back under the cache limit. The removeEldestEntry hook provided by LinkedHashMap is enough for a simple LRU, but it does not cover some scenarios well, because the cached elements are not always the same size; in other words, the cache cannot always be measured purely by the number of entries it holds.
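
For the simple case where every entry counts the same, the removeEldestEntry hook on its own is already enough. Below is a minimal sketch of a count-based LRU cache built only on that hook (the class name is invented for illustration); it budgets by entry count, which is precisely the limitation mentioned above that LruCache addresses with its sizeOf() hook.

import java.util.LinkedHashMap;
import java.util.Map;

// A count-based LRU cache: once more than maxEntries mappings are stored, the eldest one is evicted.
// This is the simple form removeEldestEntry supports; LruCache generalizes it to arbitrary sizes via sizeOf().
public class CountingLruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public CountingLruCache(int maxEntries) {
        super(16, 0.75f, true); // access order, so the eldest entry is the least recently used one
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // returning true makes addEntry drop the eldest entry
    }
}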

From here on there is little prose; all of the explanation is given in detail as comments in the code.


HashMap

http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/HashMap.java

public class HashMap<K,V> extends AbstractMap<K,V> implements Map<K,V>, Cloneable, Serializable{

// Default initial capacity: 16

static final int DEFAULT_INITIAL_CAPACITY = 16;

// Maximum capacity: 2^30

static final int MAXIMUM_CAPACITY = 1 << 30;

// Default load factor

static final float DEFAULT_LOAD_FACTOR = 0.75f;

// The bucket array of the hash table

transient Entry[] table;

// Number of entries

transient int size;

// Threshold = load factor * capacity

int threshold;

// Load factor

final float loadFactor;

// Modification count; used to detect structural modification during iteration, in which case a ConcurrentModificationException is thrown

transient int modCount;

public HashMap(int initialCapacity, float loadFactor) {

if (initialCapacity < 0)

throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity);

// Clamp and validate the arguments

if (initialCapacity > MAXIMUM_CAPACITY)

initialCapacity = MAXIMUM_CAPACITY;

if (loadFactor <= 0 || Float.isNaN(loadFactor))

throw new IllegalArgumentException("Illegal load factor: " + loadFactor);

// Find the smallest power of two that is at least initialCapacity

int capacity = 1;

while (capacity < initialCapacity)

capacity <<= 1;

this.loadFactor = loadFactor;

// Set the resize threshold

threshold = (int)(capacity * loadFactor);

// Allocate the bucket array with size capacity

table = new Entry[capacity];

// Empty hook here; subclasses override it

init();

}

public HashMap(int initialCapacity) {

this(initialCapacity, DEFAULT_LOAD_FACTOR);

}

// No-arg constructor: default load factor and the default capacity of 16

public HashMap() {

this.loadFactor = DEFAULT_LOAD_FACTOR;

threshold = (int)(DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR);

table = new Entry[DEFAULT_INITIAL_CAPACITY];

init();

}

// Build a HashMap from an existing Map

public HashMap(Map<? extends K, ? extends V> m) {

// The capacity is Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1, DEFAULT_INITIAL_CAPACITY), with the default load factor

this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1, DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);

// then iterate over m and add its entries to this map

putAllForCreate(m);

}

void init() {

}

// Supplemental hash function

// Spreads the bits of the hash code to reduce collisions

static int hash(int h) {

// This function ensures that hashCodes that differ only by

// constant multiples at each bit position have a bounded

// number of collisions (approximately 8 at default load factor).

h ^= (h >>> 20) ^ (h >>> 12);

return h ^ (h >>> 7) ^ (h >>> 4);

}

// Map a hash value to an index in the bucket array

// i.e. keep only the low-order bits of the hash so the index stays within the array; this masking is also a source of collisions

static int indexFor(int h, int length) {

return h & (length-1);

}

public int size() {

return size;

}

public boolean isEmpty() {

return size == 0;

}

// Look up the value for a key

public V get(Object key) {

if (key == null)

return getForNullKey();

int hash = hash(key.hashCode());

for (Entry<K,V> e = table[indexFor(hash, table.length)];

e != null;

e = e.next) {

Object k;

if (e.hash == hash && ((k = e.key) == key || key.equals(k)))

return e.value;

}

return null;

}

// Entries with a null key all live in bucket 0, as if null hashed to 0

private V getForNullKey() {

for (Entry<K,V> e = table[0]; e != null; e = e.next) {

if (e.key == null)

return e.value;

}

return null;

}

public boolean containsKey(Object key) {

return getEntry(key) != null;

}

// Return the Entry for the given key, or null if there is none

final Entry<K,V> getEntry(Object key) {

// Compute the key's hash

int hash = (key == null) ? 0 : hash(key.hashCode());

// Use the hash to locate the bucket (that is what indexFor does)

// and walk the linked list in that bucket

for (Entry<K,V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {

Object k;

// Return when the hash matches and the key matches (same reference or equal values)

if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))

return e;

}

return null;

}

// Add a key-value pair

public V put(K key, V value) {

// A null key is stored in bucket 0

if (key == null)

return putForNullKey(value);

// Compute the key's hash

int hash = hash(key.hashCode());

// Map the hash to the index of the bucket that holds it

int i = indexFor(hash, table.length);

for (Entry<K,V> e = table[i]; e != null; e = e.next) {

Object k;

if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {

// Already present: update the value

V oldValue = e.value;

e.value = value;

e.recordAccess(this);

// and return the old value

return oldValue;

}

}

modCount++;

// Not present: add a new entry

addEntry(hash, key, value, i);

return null;

}

// Add an entry whose key is null

private V putForNullKey(V value) {

// null keys live in bucket 0

for (Entry<K,V> e = table[0]; e != null; e = e.next) {

if (e.key == null) {

// Already present: replace the value

V oldValue = e.value;

e.value = value;

// Overridden by subclasses

e.recordAccess(this);

return oldValue;

}

}

modCount++;

// Not present: add it

addEntry(0, null, value, 0);

return null;

}

// Like put, but used by the constructors and clone()

private void putForCreate(K key, V value) {

int hash = (key == null) ? 0 : hash(key.hashCode());

int i = indexFor(hash, table.length);

for (Entry<K,V> e = table[i]; e != null; e = e.next) {

Object k;

if (e.hash == hash &&

((k = e.key) == key || (key != null && key.equals(k)))) {

e.value = value;

return;

}

}

createEntry(hash, key, value, i);

}

// Iterate over m and add its entries to this newly created map

private void putAllForCreate(Map<? extends K, ? extends V> m) {

for (Map.Entry<? extends K, ? extends V> e : m.entrySet())

putForCreate(e.getKey(), e.getValue());

}

// Grow the table

void resize(int newCapacity) {

Entry[] oldTable = table;

int oldCapacity = oldTable.length;

// The old capacity is already the maximum; just raise the threshold

if (oldCapacity == MAXIMUM_CAPACITY) {

threshold = Integer.MAX_VALUE;

return;

}

// Allocate a new array with the new capacity

Entry[] newTable = new Entry[newCapacity];

// and move every entry from the old buckets into the new ones

transfer(newTable);

// Point the table reference at the new array

table = newTable;

// Recompute the threshold

threshold = (int)(newCapacity * loadFactor);

}

// Move every entry from the old table into the buckets of newTable

void transfer(Entry[] newTable) {

Entry[] src = table;

int newCapacity = newTable.length;

// Walk every bucket, and within each bucket walk its linked list

for (int j = 0; j < src.length; j++) {

Entry<K,V> e = src[j];

if (e != null) {

src[j] = null;

do {

Entry<K,V> next = e.next;

int i = indexFor(e.hash, newCapacity);

e.next = newTable[i];

newTable[i] = e;

e = next;

} while (e != null);

}

}

}

// Copy all of the mappings from m into this map

public void putAll(Map<? extends K, ? extends V> m) {

// If m has no entries there is nothing to do

int numKeysToBeAdded = m.size();

if (numKeysToBeAdded == 0)

return;

// If the number of entries to copy exceeds the threshold, the table needs to grow first

if (numKeysToBeAdded > threshold) {

// Target capacity that keeps the configured load factor satisfied

int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);

// Clamp to the maximum capacity

if (targetCapacity > MAXIMUM_CAPACITY)

targetCapacity = MAXIMUM_CAPACITY;

int newCapacity = table.length;

// Find the smallest power of two that is at least targetCapacity

while (newCapacity < targetCapacity)

newCapacity <<= 1;

if (newCapacity > table.length)

// Grow to the new capacity

resize(newCapacity);

}

for (Map.Entry<? extends K, ? extends V> e : m.entrySet())

put(e.getKey(), e.getValue());

}

public V remove(Object key) {

Entry<K,V> e = removeEntryForKey(key);

return (e == null ? null : e.value);

}

// Remove the mapping for key

// Similar to removeMapping, differing only in how equality is checked

final Entry<K,V> removeEntryForKey(Object key) {

int hash = (key == null) ? 0 : hash(key.hashCode());

int i = indexFor(hash, table.length);

Entry<K,V> prev = table[i];

Entry<K,V> e = prev;

while (e != null) {

Entry<K,V> next = e.next;

Object k;

if (e.hash == hash &&

((k = e.key) == key || (key != null && key.equals(k)))) {

modCount++;

size--;

if (prev == e)

table[i] = next;

else

prev.next = next;

// Again invoked when this entry is removed; left for LinkedHashMap, which may need to re-wire its list pointers when the map is modified

e.recordRemoval(this);

return e;

}

prev = e;

e = next;

}

return e;

}

// Remove a mapping, given as a Map.Entry

final Entry<K,V> removeMapping(Object o) {

// If the argument is not a Map.Entry, do nothing

if (!(o instanceof Map.Entry))

return null;

Map.Entry<K,V> entry = (Map.Entry<K,V>) o;

Object key = entry.getKey();

// Hash of the key of the entry to be removed

int hash = (key == null) ? 0 : hash(key.hashCode());

// Map the hash to the index of the bucket that holds it

int i = indexFor(hash, table.length);

Entry<K,V> prev = table[i];

Entry<K,V> e = prev;

while (e != null) {

Entry<K,V> next = e.next;

if (e.hash == hash && e.equals(entry)) {

// Same hash and an equal entry: found it

modCount++;

size--;

if (prev == e)

table[i] = next;

else

prev.next = next;

// Empty hook, implemented by LinkedHashMap; runs after the entry has been removed

e.recordRemoval(this);

return e;

}

prev = e;

e = next;

}

return e;

}

public void clear() {

modCount++;

Entry[] tab = table;

for (int i = 0; i < tab.length; i++)

tab[i] = null;

size = 0;

}

// Whether the map contains at least one mapping to value

public boolean containsValue(Object value) {

if (value == null)

return containsNullValue();

Entry[] tab = table;

for (int i = 0; i < tab.length ; i++)

for (Entry e = tab[i] ; e != null ; e = e.next)

if (value.equals(e.value))

return true;

return false;

}

// Whether any mapping has a null value

private boolean containsNullValue() {

Entry[] tab = table;

// Iterate over every bucket and over the linked list in each bucket

for (int i = 0; i < tab.length ; i++)

for (Entry e = tab[i] ; e != null ; e = e.next)

if (e.value == null)

return true;

return false;

}

public Object clone() {

HashMap<K,V> result = null;

try {

result = (HashMap<K,V>)super.clone();

} catch (CloneNotSupportedException e) {

// assert false;

}

result.table = new Entry[table.length];

result.entrySet = null;

result.modCount = 0;

result.size = 0;

result.init();

result.putAllForCreate(this);

return result;

}

// The node type underlying HashMap

static class Entry<K,V> implements Map.Entry<K,V> {

final K key;

V value;

Entry<K,V> next;

final int hash;

Entry(int h, K k, V v, Entry<K,V> n) {

value = v;

next = n;

key = k;

hash = h;

}

public final K getKey() {

return key;

}

public final V getValue() {

return value;

}

public final V setValue(V newValue) {

V oldValue = value;

value = newValue;

return oldValue;

}

public final boolean equals(Object o) {

if (!(o instanceof Map.Entry))

return false;

Map.Entry e = (Map.Entry)o;

Object k1 = getKey();

Object k2 = e.getKey();

if (k1 == k2 || (k1 != null && k1.equals(k2))) {

Object v1 = getValue();

Object v2 = e.getValue();

if (v1 == v2 || (v1 != null && v1.equals(v2)))

return true;

}

return false;

}

public final int hashCode() {

return (key==null ? 0 : key.hashCode()) ^

(value==null ? 0 : value.hashCode());

}

public final String toString() {

return getKey() + "=" + getValue();

}

/******* Two empty hooks, invoked when an entry is accessed/updated and when it is removed, so subclasses can react to operations on the map *******/

void recordAccess(HashMap<K,V> m) {

}

void recordRemoval(HashMap<K,V> m) {

}

}

// Add a new Entry to the bucket at bucketIndex

void addEntry(int hash, K key, V value, int bucketIndex) {

Entry<K,V> e = table[bucketIndex];

// The next statement creates a new Entry whose next pointer is e, the current head of this bucket,

// and then makes the new node the bucket head, i.e. head insertion into the list

table[bucketIndex] = new Entry<>(hash, key, value, e);

// If the number of entries has reached the threshold, double the table size

if (size++ >= threshold)

resize(2 * table.length);

}

// Same logic as addEntry minus the resize check; used by the constructors when copying another map,

// where the capacity has already been adjusted, so no resize can be needed

void createEntry(int hash, K key, V value, int bucketIndex) {

Entry<K,V> e = table[bucketIndex];

table[bucketIndex] = new Entry<>(hash, key, value, e);

size++;

}

// Iterators

private abstract class HashIterator<E> implements Iterator<E> {

Entry<K,V> next; // next entry to return

// Fail-fast mechanism: the map must not be structurally modified while it is being iterated

int expectedModCount; // For fast-fail

int index; // current slot

Entry<K,V> current; // current entry

HashIterator() {

expectedModCount = modCount;

if (size > 0) { // advance to first entry

Entry[] t = table;

while (index < t.length && (next = t[index++]) == null)

;

}

}

public final boolean hasNext() {

return next != null;

}

final Entry<K,V> nextEntry() {

if (modCount != expectedModCount)

throw new ConcurrentModificationException();

Entry<K,V> e = next;

if (e == null)

throw new NoSuchElementException();

if ((next = e.next) == null) {

Entry[] t = table;

while (index < t.length && (next = t[index++]) == null)

;

}

current = e;

return e;

}

public void remove() {

if (current == null)

throw new IllegalStateException();

if (modCount != expectedModCount)

throw new ConcurrentModificationException();

Object k = current.key;

current = null;

HashMap.this.removeEntryForKey(k);

expectedModCount = modCount;

}

}

private final class ValueIterator extends HashIterator<V> {

public V next() {

return nextEntry().value;

}

}

private final class KeyIterator extends HashIterator<K> {

public K next() {

return nextEntry().getKey();

}

}

private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {

public Map.Entry<K,V> next() {

return nextEntry();

}

}

// Subclass overrides these to alter behavior of views' iterator() method

Iterator<K> newKeyIterator() {

return new KeyIterator();

}

Iterator<V> newValueIterator() {

return new ValueIterator();

}

Iterator<Map.Entry<K,V>> newEntryIterator() {

return new EntryIterator();

}

// Views

// Set view of the entries in the map

private transient Set<Map.Entry<K,V>> entrySet = null;

// Set view of the keys

public Set<K> keySet() {

Set<K> ks = keySet;

return (ks != null ? ks : (keySet = new KeySet()));

}

private final class KeySet extends AbstractSet<K> {

public Iterator<K> iterator() {

return newKeyIterator();

}

public int size() {

return size;

}

public boolean contains(Object o) {

return containsKey(o);

}

public boolean remove(Object o) {

return HashMap.this.removeEntryForKey(o) != null;

}

public void clear() {

HashMap.this.clear();

}

}

public Collection<V> values() {

Collection<V> vs = values;

return (vs != null ? vs : (values = new Values()));

}

// Collection view of the values

private final class Values extends AbstractCollection<V> {

public Iterator<V> iterator() {

return newValueIterator();

}

public int size() {

return size;

}

public boolean contains(Object o) {

return containsValue(o);

}

public void clear() {

HashMap.this.clear();

}

}

public Set<Map.Entry<K,V>> entrySet() {

return entrySet0();

}

private Set<Map.Entry<K,V>> entrySet0() {

Set<Map.Entry<K,V>> es = entrySet;

return es != null ? es : (entrySet = new EntrySet());

}

private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {

public Iterator<Map.Entry<K,V>> iterator() {

return newEntryIterator();

}

public boolean contains(Object o) {

if (!(o instanceof Map.Entry))

return false;

Map.Entry<K,V> e = (Map.Entry<K,V>) o;

Entry<K,V> candidate = getEntry(e.getKey());

return candidate != null && candidate.equals(e);

}

public boolean remove(Object o) {

return removeMapping(o) != null;

}

public int size() {

return size;

}

public void clear() {

HashMap.this.clear();

}

}

// Serialization

private void writeObject(java.io.ObjectOutputStream s)

throws IOException

{

Iterator<Map.Entry<K,V>> i =

(size > 0) ? entrySet0().iterator() : null;

// Write out the threshold, loadfactor, and any hidden stuff

s.defaultWriteObject();

// Write out number of buckets

s.writeInt(table.length);

// Write out size (number of Mappings)

s.writeInt(size);

// Write out keys and values (alternating)

if (i != null) {

while (i.hasNext()) {

Map.Entry<K,V> e = i.next();

s.writeObject(e.getKey());

s.writeObject(e.getValue());

}

}

}

private static final long serialVersionUID = 362498820763181265L;

/**

* Reconstitute the <tt>HashMap</tt> instance from a stream (i.e.,

* deserialize it).

*/

private void readObject(java.io.ObjectInputStream s)

throws IOException, ClassNotFoundException

{

// Read in the threshold, loadfactor, and any hidden stuff

s.defaultReadObject();

// Read in number of buckets and allocate the bucket array;

int numBuckets = s.readInt();

table = new Entry[numBuckets];

init(); // Give subclass a chance to do its thing.

// Read in size (number of Mappings)

int size = s.readInt();

// Read the keys and values, and put the mappings in the HashMap

for (int i=0; i<size; i++) {

K key = (K) s.readObject();

V value = (V) s.readObject();

putForCreate(key, value);

}

}

// These methods are used when serializing HashSets

int capacity() { return table.length; }

float loadFactor() { return loadFactor; }

}


LinkedHashMap

http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/java/util/LinkedHashMap.java

public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V>{

private static final long serialVersionUID = 3801124242820219131L;

// Head (sentinel) of the doubly linked circular list

private transient Entry<K,V> header;

// Ordering mode: false = entries kept in insertion order, true = in access order

private final boolean accessOrder;

// accessOrder is initialized to false in the following constructors

public LinkedHashMap(int initialCapacity, float loadFactor) {

super(initialCapacity, loadFactor);

accessOrder = false;

}

public LinkedHashMap(int initialCapacity) {

super(initialCapacity);

accessOrder = false;

}

public LinkedHashMap() {

super();

accessOrder = false;

}

public LinkedHashMap(Map<? extends K, ? extends V> m) {

super(m);

accessOrder = false;

}

public LinkedHashMap(int initialCapacity,

float loadFactor,

boolean accessOrder) {

super(initialCapacity, loadFactor);

this.accessOrder = accessOrder;

}

// Overrides the parent's init(), which is invoked from HashMap's constructors

void init() {

// Initialize the sentinel header node of the list

// Its hash value (-1) carries no meaning

header = new Entry<>(-1, null, null, null);

// Doubly linked circular list: the header points to itself

header.before = header.after = header;

}

// In HashMap this method rehashes every entry from the old table into newTable

// Here, since all entries are already chained on the linked list, walking the list is the faster way to do it

void transfer(HashMap.Entry[] newTable) {

int newCapacity = newTable.length;

for (Entry<K,V> e = header.after; e != header; e = e.after) {

int index = indexFor(e.hash, newCapacity);

e.next = newTable[index];

newTable[index] = e;

}

}

// Whether the map contains the given value

// Iterating the linked list directly is faster than scanning every bucket

public boolean containsValue(Object value) {

// Overridden to take advantage of faster iterator

if (value==null) {

for (Entry e = header.after; e != header; e = e.after)

if (e.value==null)

return true;

} else {

for (Entry e = header.after; e != header; e = e.after)

if (value.equals(e.value))

return true;

}

return false;

}

public V get(Object key) {

Entry<K,V> e = (Entry<K,V>)getEntry(key);

if (e == null)

return null;

// An access may have to move the entry to a new position in the list

e.recordAccess(this);

return e.value;

}

public void clear() {

super.clear();

header.before = header.after = header;

}

// LinkedHashMap's node type

private static class Entry<K,V> extends HashMap.Entry<K,V> {

// Adds two pointers to HashMap's node: one to the previous node and one to the next node in the list

Entry<K,V> before, after;

Entry(int hash, K key, V value, HashMap.Entry<K,V> next) {

super(hash, key, value, next);

}

// Unlink this node from the list; only the pointers change

private void remove() {

before.after = after;

after.before = before;

}

// Link this node into the list immediately before existingEntry

private void addBefore(Entry<K,V> existingEntry) {

after = existingEntry;

before = existingEntry.before;

before.after = this;

after.before = this;

}

// Overrides the parent hook

void recordAccess(HashMap<K,V> m) {

LinkedHashMap<K,V> lm = (LinkedHashMap<K,V>)m;

// If accessOrder is false, do nothing

if (lm.accessOrder) {

lm.modCount++;

// Unlink the node from the list

remove();

// Re-insert it just before header, i.e. at the tail of the list (header itself never moves)

addBefore(lm.header);

// Together these two steps move the just-accessed node to the tail of the list

}

}

// Overrides the parent hook: when the mapping is deleted, also unlink the node from the list

void recordRemoval(HashMap<K,V> m) {

remove();

}

}

// Iterators

private abstract class LinkedHashIterator<T> implements Iterator<T> {

Entry<K,V> nextEntry = header.after;

Entry<K,V> lastReturned = null;

int expectedModCount = modCount;

public boolean hasNext() {

return nextEntry != header;

}

public void remove() {

if (lastReturned == null)

throw new IllegalStateException();

if (modCount != expectedModCount)

throw new ConcurrentModificationException();

LinkedHashMap.this.remove(lastReturned.key);

lastReturned = null;

expectedModCount = modCount;

}

Entry<K,V> nextEntry() {

if (modCount != expectedModCount)

throw new ConcurrentModificationException();

if (nextEntry == header)

throw new NoSuchElementException();

Entry<K,V> e = lastReturned = nextEntry;

nextEntry = e.after;

return e;

}

}

private class KeyIterator extends LinkedHashIterator<K> {

public K next() { return nextEntry().getKey(); }

}

private class ValueIterator extends LinkedHashIterator<V> {

public V next() { return nextEntry().value; }

}

private class EntryIterator extends LinkedHashIterator<Map.Entry<K,V>> {

public Map.Entry<K,V> next() { return nextEntry(); }

}

// These Overrides alter the behavior of superclass view iterator() methods

Iterator<K> newKeyIterator() { return new KeyIterator(); }

Iterator<V> newValueIterator() { return new ValueIterator(); }

Iterator<Map.Entry<K,V>> newEntryIterator() { return new EntryIterator(); }

// Add a key-value pair

void addEntry(int hash, K key, V value, int bucketIndex) {

createEntry(hash, key, value, bucketIndex);

// Remove eldest entry if instructed, else grow capacity if appropriate

Entry<K,V> eldest = header.after;

// Ask whether the eldest entry, the one at the front of the list, should be removed

if (removeEldestEntry(eldest)) {

removeEntryForKey(eldest.key);

} else {

if (size >= threshold)

resize(2 * table.length);

}

}

// Compared with HashMap.createEntry, this also maintains the linked list

void createEntry(int hash, K key, V value, int bucketIndex) {

HashMap.Entry<K,V> old = table[bucketIndex];

Entry<K,V> e = new Entry<>(hash, key, value, old);

table[bucketIndex] = e;

// Every newly added entry is linked at the tail of the list

e.addBefore(header);

size++;

}

/****** The hook LinkedHashMap exposes; overriding it is a simple way to implement LRU eviction ******/

protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {

return false;

}

}


LruCache

public class LruCache<K, V> {

// The LRU behaviour is backed by a LinkedHashMap

private final LinkedHashMap<K, V> map;

// Current size of the cache; the unit can be anything you want: entry count, bytes, and so on

private int size;

// Maximum size of the cache

private int maxSize;

private int putCount;

private int createCount;

// Number of entries evicted because the cache ran out of space

private int evictionCount;

// Number of cache hits

private int hitCount;

// Number of lookups not found in the cache, i.e. misses

private int missCount;

public LruCache(int maxSize) {

if (maxSize <= 0) {

throw new IllegalArgumentException("maxSize <= 0");

}

this.maxSize = maxSize;

this.map = new LinkedHashMap<K, V>(0, 0.75f, true);

}

// Change maxSize and trim the cache down to the new limit

public void resize(int maxSize) {

if (maxSize <= 0) {

throw new IllegalArgumentException("maxSize <= 0");

}

synchronized (this) {

this.maxSize = maxSize;

}

trimToSize(maxSize);

}

// Return the value cached for key, creating one via create() if nothing is cached

public final V get(K key) {

if (key == null) {

// Unlike HashMap, null keys are not allowed

throw new NullPointerException("key == null");

}

V mapValue;

synchronized (this) {

mapValue = map.get(key);

if (mapValue != null) {

// Every successful get increments hitCount, the number of cache hits

hitCount++;

// The value exists for this key; return it

return mapValue;

}

missCount++;

}

// Otherwise try to create a value for this key; the default create() just returns null

V createdValue = create(key);

if (createdValue == null) {

return null;

}

synchronized (this) {

createCount++;

// Put the created value into the map

mapValue = map.put(key, createdValue);

if (mapValue != null) {

// mapValue is not null: another thread put a value for this key while we were creating ours, i.e. a conflict

// In that case discard the value we just created and put the existing one back

map.put(key, mapValue);

} else {

// The new value went into the map, so grow the cache size accordingly

size += safeSizeOf(key, createdValue);

}

}

if (mapValue != null) {

entryRemoved(false, key, createdValue, mapValue);

return mapValue;

} else {

// Trim the cache against maxSize: this put may have pushed the size over the limit; the details are in trimToSize()

trimToSize(maxSize);

return createdValue;

}

}

// Much the same pattern as get()

public final V put(K key, V value) {

if (key == null || value == null) {

throw new NullPointerException("key == null || value == null");

}

V previous;

synchronized (this) {

putCount++;

size += safeSizeOf(key, value);

previous = map.put(key, value);

if (previous != null) {

// A non-null return means the key already had a value that has now been replaced, so subtract its size

size -= safeSizeOf(key, previous);

}

}

if (previous != null) {

// Invoked because an entry was removed (here: replaced)

entryRemoved(false, key, previous, value);

}

trimToSize(maxSize);

return previous;

}

// Trim the map according to maxSize, evicting entries as needed

private void trimToSize(int maxSize) {

while (true) {

K key;

V value;

synchronized (this) {

if (size < 0 || (map.isEmpty() && size != 0)) {

throw new IllegalStateException(getClass().getName() + ".sizeOf() is reporting inconsistent results!");

}

// Stop once the space in use is within the limit

if (size <= maxSize) {

break;

}

// Otherwise pick the least recently used entry, i.e. the one at the front of the list

// The version shipped in the 5.0.1 utils package looks questionable to me:

/*Map.Entry<K, V> toEvict = null;

for (Map.Entry<K, V> entry : map.entrySet()) {

// loops all the way to the last entry???

toEvict = entry;

}

if (toEvict == null) {

break;

}

*/

// The support-v4 implementation (https://github.com/android/platform_frameworks_support/blob/master/v4/java/android/support/v4/util/LruCache.java) does this:

Map.Entry<K, V> toEvict = map.entrySet().iterator().next();

// Google's LinkedHashMap actually provides a method that returns the eldest entry, so the way some versions (e.g. 4.4.2) write this is easier to follow

key = toEvict.getKey();

value = toEvict.getValue();

// Remove that entry from the map

map.remove(key);

// and shrink the recorded cache size accordingly

size -= safeSizeOf(key, value);

evictionCount++;

}

entryRemoved(true, key, value, null);

}

}

/**

* Removes the entry for {@code key} if it exists.

*

* @return the previous value mapped by {@code key}.

*/

public final V remove(K key) {

if (key == null) {

throw new NullPointerException("key == null");

}

V previous;

synchronized (this) {

previous = map.remove(key);

if (previous != null) {

size -= safeSizeOf(key, previous);

}

}

if (previous != null) {

entryRemoved(false, key, previous, null);

}

return previous;

}

// true if the entry is being removed to make space, false if the removal was caused by a put or remove.

/****** May be overridden to perform additional work when entries are removed ******/

protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

// Runs when a requested entry is missing; override it to compute values on demand

protected V create(K key) {

return null;

}

// Returns the space an entry occupies, with a sanity check on the result

private int safeSizeOf(K key, V value) {

int result = sizeOf(key, value);

if (result < 0) {

throw new IllegalStateException("Negative size: " + key + "=" + value);

}

return result;

}

// Returns the space an entry occupies

/****** Override to size different entries (key-value pairs) differently ******/

protected int sizeOf(K key, V value) {

return 1;

}

// Evict everything: with -1 as the limit, any non-empty cache reports a size greater than -1, so every entry gets removed

public final void evictAll() {

trimToSize(-1); // -1 will evict 0-sized elements

}

public synchronized final int size() {

return size;

}

public synchronized final int maxSize() {

return maxSize;

}

public synchronized final int hitCount() {

return hitCount;

}

public synchronized final int missCount() {

return missCount;

}

public synchronized final int createCount() {

return createCount;

}

public synchronized final int putCount() {

return putCount;

}

public synchronized final int evictionCount() {

return evictionCount;

}

public synchronized final Map<K, V> snapshot() {

return new LinkedHashMap<K, V>(map);

}

@Override

public synchronized final String toString() {

int accesses = hitCount + missCount;

int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;

return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",

maxSize, hitCount, missCount, hitPercent);

}

}

The LinkedHashMap in Android 4.4.2 directly provides a method for getting the eldest entry; a sketch of how trimToSize can use it follows the snippet below.

/**

* Returns the eldest entry in the map, or {@code null} if the map is empty.

* @hide

*/

public Entry<K, V> eldest() {

LinkedEntry<K, V> eldest = header.nxt;

return eldest != header ? eldest : null;

}
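
With eldest() available, the eviction step no longer needs to go through an iterator. The fragment below is a hedged sketch of what that step would look like inside the while loop of trimToSize, assuming a platform LinkedHashMap that exposes eldest() as shown above (the method is @hide, so this does not compile against the public SDK; it is not standalone code).

// Inside the while (true) loop of trimToSize(maxSize), instead of map.entrySet().iterator().next():
Map.Entry<K, V> toEvict = map.eldest(); // eldest() returns the least recently used entry, or null
if (toEvict == null) {
    break;                              // the map is empty, nothing left to evict
}
key = toEvict.getKey();
value = toEvict.getValue();
map.remove(key);
size -= safeSizeOf(key, value);
evictionCount++;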

The HashMap and LinkedHashMap discussed above changed quite a bit across JDK versions, and they also differ somewhat from the implementations shipped in the Android framework.

That's all.