Reading the JDK HashMap Source
2017-03-15 10:50
I've been busy reading on one hand, and on the other HashMap's source really is a bit harder than List's, so this post is somewhat late.
The JDK version is still 1.7.
Before reading the HashMap source, I recommend going through the ArrayList source first; it helps a great deal with understanding HashMap.
I hope that after reading this you will not only understand the HashMap source but also pick up its coding style (for example, called methods are placed below their callers, which is a good convention whose name I've forgotten; please leave a comment if you remember it).
Update 2017/06/03:
After reviewing the source again, I've summarized the key points below for convenience (you should still read the source itself; this is mainly for quick revision).
Key points:
1. Two parameters affect a HashMap instance's performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table; the initial capacity is simply the capacity at creation time.
2. The load factor is a measure of how full the table is allowed to get before its capacity is automatically increased. When the number of entries exceeds the product of the load factor and the current capacity, the table is rehashed (its internal data structures are rebuilt).
3. The default load factor (0.75f) is a good trade-off between time and space. A higher load factor improves space utilization, but most operations, including get and put, become slower.
4. A key to improving HashMap efficiency is minimizing rehash operations: pick an initial capacity satisfying "initial capacity * load factor >= number of entries to store".
5. The capacity must be a power of two so that the modulo in indexFor() (hashCode % length) can be replaced by a bitwise AND (hashCode & (length - 1)).
6. When (actual size > capacity * load factor), the HashMap grows (rehashes).
7. Under the hood, HashMap is an array whose slots store linked lists.
8. HashMap has hook methods (the template method pattern) such as init(): it calls them itself but leaves them empty, and subclasses provide the implementation, so the subclass's init() is what actually runs.
9. When computing a hash, HashMap distinguishes String keys from other Objects; a null key is always stored in table[0].
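Point 5 can be checked with a short, self-contained sketch. The two helper methods mirror indexFor() and roundUpToPowerOf2() from the JDK 1.7 source discussed below; the IndexForDemo class name itself is just for illustration:

```java
public class IndexForDemo {

    // Mirrors HashMap.indexFor: when length is a power of two,
    // the bitmask (length - 1) is equivalent to h % length.
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    // Mirrors HashMap.roundUpToPowerOf2: smallest power of two >= number
    // (for number >= 1, below MAXIMUM_CAPACITY).
    static int roundUpToPowerOf2(int number) {
        return (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    }

    public static void main(String[] args) {
        int h = 70;        // binary 0100 0110
        int length = 64;   // power of two, so the mask is 0011 1111
        System.out.println(indexFor(h, length));   // 6, same as 70 % 64
        System.out.println(roundUpToPowerOf2(5));  // 8
        System.out.println(roundUpToPowerOf2(16)); // 16
    }
}
```

The "(number - 1) << 1" trick makes exact powers of two round to themselves rather than to the next power up.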
```java
package java.util;

import java.io.*;

/**
 * Allows null values and the null key.
 * Iteration time is proportional to the capacity of the HashMap instance
 * plus its size (the number of key-value mappings), so if iteration
 * performance matters, don't set the initial capacity too high (or the
 * load factor too low).
 *
 * Two parameters affect a HashMap's performance: initial capacity and
 * load factor. Capacity is the number of buckets in the hash table; the
 * initial capacity is simply the capacity at creation time. The load
 * factor is a measure of how full the table may get before its capacity
 * is automatically increased. When the number of entries exceeds
 * loadFactor * currentCapacity, the table is rehashed (its internal data
 * structures are rebuilt) to roughly twice the number of buckets.
 *
 * The default load factor (0.75) offers a good trade-off between time
 * and space. Higher values improve space utilization but slow down most
 * operations, including get and put. When choosing an initial capacity,
 * consider the expected number of entries and the load factor so as to
 * minimize the number of rehash operations.
 *
 * Note: this is a key point for HashMap performance. Every rehash not
 * only grows the table but also relocates every stored element
 * (recomputing indices, copying and moving), which costs a lot of time.
 * If the number of elements is known in advance, choosing an initial
 * capacity with "initialCapacity * loadFactor >= number of entries"
 * greatly improves efficiency.
 */
public class HashMap<K,V>
    extends AbstractMap<K,V>
    implements Map<K,V>, Cloneable, Serializable {

    /**
     * The default initial capacity - MUST be a power of two.
     * Left shift: 3 << 4 means 3 * 2^4.
     *
     * Why must the capacity be a power of two? Consider:
     *
     *   static int indexFor(int h, int length) {
     *       return h & (length - 1);
     *   }
     *
     * This is one of the most frequently called methods in HashMap. It
     * computes the index of the hash bucket for a key; the buckets live
     * in an array, and the method returns the array index. To spread
     * elements evenly across the container, a modulo-style operation is
     * used. Here h is the key's hash code and length is the capacity.
     *
     * Suppose h = 70 (binary 0100 0110) and length = 64 (0100 0000),
     * so length - 1 = 63 (0011 1111). Then
     *
     *   h & (length - 1) = 01000110 & 00111111 = 110
     *   (decimal 6, which is exactly h % length)
     *
     * So when length is a power of two, the modulo becomes a bitwise
     * AND, a significant speedup, at the cost of some wasted capacity.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * The maximum capacity, used if a higher value is implicitly
     * specified by either of the constructors with arguments.
     * MUST be a power of two <= 1<<30.
     * (For reference, Integer.MAX_VALUE is 2^31 - 1.)
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * The load factor used when none is specified in the constructor.
     * When (actual size > capacity * load factor), the HashMap grows
     * (rehashes).
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * An empty table instance to share when the table is not inflated.
     */
    static final Entry<?,?>[] EMPTY_TABLE = {};

    /**
     * The table, resized as necessary. Length MUST always be a power of
     * two. As you can see here, HashMap is really an array, and each
     * array slot stores a linked list.
     */
    transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;

    /**
     * The number of key-value mappings contained in this map
     * (the actual stored size).
     */
    transient int size;

    /**
     * The next size value at which to resize (capacity * load factor),
     * i.e. the threshold that triggers the next resize.
     * If table == EMPTY_TABLE then this is the initial capacity at
     * which the table will be created when inflated.
     * @serial
     */
    int threshold;

    /**
     * The load factor for the hash table.
     * @serial
     */
    final float loadFactor;

    /**
     * The number of times this HashMap has been structurally modified.
     * Structural modifications are those that change the number of
     * mappings in the HashMap or otherwise modify its internal
     * structure (e.g., rehash). This field is used to make iterators on
     * collection views of the HashMap fail-fast
     * (see ConcurrentModificationException).
     */
    transient int modCount;

    /**
     * The default threshold of map capacity above which alternative
     * hashing is used for String keys. Alternative hashing reduces the
     * incidence of collisions due to weak hash code calculation for
     * String keys.
     *
     * This value may be overridden by defining the system property
     * jdk.map.althashing.threshold. A property value of 1 forces
     * alternative hashing to be used at all times, whereas -1 (the
     * default) ensures that alternative hashing is never used. The
     * value represents a collection-size threshold: above it, the new
     * hash algorithm is used. Note that the new algorithm only takes
     * effect when a re-hash occurs.
     */
    static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;

    /**
     * Holds values which can't be initialized until after the VM is
     * booted; used to initialize ALTERNATIVE_HASHING_THRESHOLD.
     */
    private static class Holder {

        /**
         * Table capacity above which to switch to use alternative
         * hashing.
         */
        static final int ALTERNATIVE_HASHING_THRESHOLD;

        static {
            String altThreshold = java.security.AccessController.doPrivileged(
                new sun.security.action.GetPropertyAction(
                    "jdk.map.althashing.threshold"));

            int threshold;
            try {
                threshold = (null != altThreshold)
                        ? Integer.parseInt(altThreshold)
                        : ALTERNATIVE_HASHING_THRESHOLD_DEFAULT;

                // -1 disables alternative hashing: treat it as
                // Integer.MAX_VALUE, which is the same as
                // ALTERNATIVE_HASHING_THRESHOLD_DEFAULT.
                if (threshold == -1) {
                    threshold = Integer.MAX_VALUE;
                }

                if (threshold < 0) {
                    throw new IllegalArgumentException("value must be positive integer.");
                }
            } catch (IllegalArgumentException failed) {
                throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
            }

            ALTERNATIVE_HASHING_THRESHOLD = threshold;
        }
    }

    /**
     * A randomizing value associated with this instance that is applied
     * to the hash code of keys to make hash collisions harder to find.
     * If 0 then alternative hashing is disabled.
     */
    transient int hashSeed = 0;

    /**
     * Constructs an empty HashMap with the specified initial capacity
     * and load factor.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @throws IllegalArgumentException if the initial capacity is
     *         negative or the load factor is nonpositive
     */
    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);

        this.loadFactor = loadFactor;
        threshold = initialCapacity;
        // In HashMap, init() is empty and has default access. Its
        // purpose is polymorphism: it gives subclasses an
        // initialization hook. For example, with
        //   HashMap<?,?> map = new LinkedHashMap<?,?>();
        // this call dispatches to LinkedHashMap's init().
        init();
    }

    /**
     * Constructs an empty HashMap with the specified initial capacity
     * and the default load factor (0.75).
     *
     * @throws IllegalArgumentException if the initial capacity is
     *         negative
     */
    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs an empty HashMap with the default initial capacity
     * (16) and the default load factor (0.75).
     */
    public HashMap() {
        this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
    }

    /**
     * Constructs a new HashMap with the same mappings as the specified
     * Map. The HashMap is created with the default load factor (0.75)
     * and an initial capacity sufficient to hold the mappings in the
     * specified Map.
     *
     * @throws NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        inflateTable(threshold);

        putAllForCreate(m);
    }

    // Find a power of 2 >= toSize
    private static int roundUpToPowerOf2(int number) {
        // assert number >= 0 : "number must be non-negative";

        // Integer.highestOneBit(): for int num = 170 (binary 10101010),
        // it returns 10000000, i.e. every bit below the highest set bit
        // is cleared.
        //
        // The "number - 1" is what makes this round *up*: if
        // number = 4 (100), number - 1 is 3 (11) and the result is
        // 4 (10 << 1); if number = 5 (101), number - 1 is 4 (100) and
        // the result is 8 (100 << 1).
        return number >= MAXIMUM_CAPACITY
                ? MAXIMUM_CAPACITY
                : (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    }

    /**
     * Inflates the table, resetting hashSeed if needed. Only called
     * while the table is empty, i.e. this is table initialization.
     */
    private void inflateTable(int toSize) {
        // Find a power of 2 >= toSize
        int capacity = roundUpToPowerOf2(toSize);

        threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
        table = new Entry[capacity];
        initHashSeedAsNeeded(capacity);
    }

    // internal utilities

    /**
     * Initialization hook for subclasses. This method is called in all
     * constructors and pseudo-constructors (clone, readObject) after
     * HashMap has been initialized but before any entries have been
     * inserted. (In the absence of this method, readObject would
     * require explicit knowledge of subclasses.)
     * Empty here so that subclasses can override it (polymorphism):
     * LinkedHashMap overrides it, so
     *   HashMap<K,V> map = new LinkedHashMap<K,V>();
     * ends up calling LinkedHashMap's init().
     */
    void init() {
    }

    /**
     * Initialize the hashing mask value. We defer initialization until
     * we really need it (initializes hashSeed).
     */
    final boolean initHashSeedAsNeeded(int capacity) {
        boolean currentAltHashing = hashSeed != 0;
        boolean useAltHashing = sun.misc.VM.isBooted() &&
                (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
        // bitwise XOR
        boolean switching = currentAltHashing ^ useAltHashing;
        if (switching) {
            hashSeed = useAltHashing
                ? sun.misc.Hashing.randomHashSeed(this)
                : 0;
        }
        return switching;
    }

    /**
     * Retrieves the object's hash code and applies a supplemental hash
     * function to the result, which defends against poor-quality hash
     * functions. This is critical because HashMap uses power-of-two
     * length hash tables, which otherwise encounter collisions for
     * hashCodes that do not differ in the lower bits.
     * If the key is a String (and a hash seed is in use), stringHash32
     * is called directly; for any other object the hash is computed
     * below. Null keys always map to hash 0, thus index 0.
     */
    final int hash(Object k) {
        int h = hashSeed;
        if (0 != h && k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }

        h ^= k.hashCode();

        // This function ensures that hashCodes that differ only by
        // constant multiples at each bit position have a bounded
        // number of collisions (approximately 8 at default load factor).
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    /**
     * Returns index for hash code h.
     */
    static int indexFor(int h, int length) {
        // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
        return h & (length - 1);
    }

    /**
     * Returns the number of key-value mappings in this map.
     */
    public int size() {
        return size;
    }

    /**
     * Returns true if this map contains no key-value mappings.
     */
    public boolean isEmpty() {
        return size == 0;
    }

    /**
     * Returns the value to which the specified key is mapped, or null
     * if this map contains no mapping for the key.
     *
     * More formally, if this map contains a mapping from a key k to a
     * value v such that (key==null ? k==null : key.equals(k)), then
     * this method returns v; otherwise it returns null. (There can be
     * at most one such mapping.)
     *
     * A return value of null does not necessarily indicate that the
     * map contains no mapping for the key; it is also possible that
     * the map explicitly maps the key to null. The containsKey
     * operation may be used to distinguish these two cases.
     *
     * @see #put(Object, Object)
     */
    public V get(Object key) {
        if (key == null)
            // Look up the value for the null key; null keys always
            // live in table[0].
            return getForNullKey();
        // Returns null when the key is not found.
        Entry<K,V> entry = getEntry(key);

        return null == entry ? null : entry.getValue();
    }

    /**
     * Offloaded version of get() to look up null keys. Null keys map
     * to index 0. This null case is split out into separate methods
     * for the sake of performance in the two most commonly used
     * operations (get and put), but incorporated with conditionals in
     * others.
     */
    private V getForNullKey() {
        if (size == 0) {
            return null;
        }
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null)
                return e.value;
        }
        return null;
    }

    /**
     * Returns true if this map contains a mapping for the specified
     * key.
     */
    public boolean containsKey(Object key) {
        return getEntry(key) != null;
    }

    /**
     * Returns the entry associated with the specified key in the
     * HashMap. Returns null if the HashMap contains no mapping for the
     * key.
     */
    final Entry<K,V> getEntry(Object key) {
        if (size == 0) {
            return null;
        }

        // Compute the key's hash.
        int hash = (key == null) ? 0 : hash(key);
        // indexFor() maps the hash onto a table slot (effectively
        // hash mod table.length); then walk that slot's entry list to
        // its end.
        for (Entry<K,V> e = table[indexFor(hash, table.length)];
             e != null;
             e = e.next) {
            Object k;
            // Same element: the hash must match, and the key must be
            // identical or equal.
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k))))
                return e;
        }
        return null;
    }

    /**
     * Associates the specified value with the specified key in this
     * map. If the map previously contained a mapping for the key, the
     * old value is replaced.
     *
     * @return the previous value associated with key, or null if there
     *         was no mapping for key. (A null return can also indicate
     *         that the map previously associated null with key.)
     */
    public V put(K key, V value) {
        if (table == EMPTY_TABLE) {
            inflateTable(threshold);
        }
        if (key == null)
            return putForNullKey(value);
        int hash = hash(key);
        int i = indexFor(hash, table.length);
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            // If the same key already exists on this chain, replace
            // its value.
            if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                V oldValue = e.value;
                e.value = value;
                // recordAccess(), like init(), exists to be overridden
                // by subclasses. In LinkedHashMap, which supports both
                // insertion order and access order, recordAccess()
                // records an access. LinkedHashMap has an accessOrder
                // flag: false means insertion order (entries appended
                // at the tail of the list), true means access order
                // (an accessed entry is moved in the list).
                // put() of a brand-new key does not trigger this
                // method; in HashMap, recordAccess() runs only when
                // putting a duplicate key, which counts as accessing
                // that entry.
                e.recordAccess(this);
                return oldValue;
            }
        }

        modCount++;
        addEntry(hash, key, value, i);
        return null;
    }

    /**
     * Offloaded version of put for null keys.
     */
    private V putForNullKey(V value) {
        for (Entry<K,V> e = table[0]; e != null; e = e.next) {
            if (e.key == null) {
                V oldValue = e.value;
                e.value = value;
                e.recordAccess(this);
                return oldValue;
            }
        }

        modCount++;
        addEntry(0, null, value, 0);
        return null;
    }

    /**
     * This method is used instead of put by constructors and
     * pseudo-constructors (clone, readObject). It does not resize the
     * table, check for comodification, etc. It calls createEntry
     * rather than addEntry.
     * Unlike put(), putForCreate() is an internal method used while
     * constructing a HashMap; put() is the public method for adding
     * elements to the map.
     */
    private void putForCreate(K key, V value) {
        int hash = null == key ? 0 : hash(key);
        int i = indexFor(hash, table.length);

        /**
         * Look for preexisting entry for key. This will never happen
         * for clone or deserialize. It will only happen for
         * construction if the input Map is a sorted map whose ordering
         * is inconsistent w/ equals.
         */
        for (Entry<K,V> e = table[i]; e != null; e = e.next) {
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                e.value = value;
                return;
            }
        }

        createEntry(hash, key, value, i);
    }

    private void putAllForCreate(Map<? extends K, ? extends V> m) {
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            putForCreate(e.getKey(), e.getValue());
    }

    /**
     * Rehashes the contents of this map into a new array with a larger
     * capacity. This method is called automatically when the number of
     * keys in this map reaches its threshold.
     *
     * If current capacity is MAXIMUM_CAPACITY, this method does not
     * resize the map, but sets threshold to Integer.MAX_VALUE. This
     * has the effect of preventing future calls.
     *
     * In short: grow the table when size reaches threshold; but if the
     * array is already MAXIMUM_CAPACITY, just raise threshold to
     * Integer.MAX_VALUE instead of growing.
     *
     * @param newCapacity the new capacity, MUST be a power of two;
     *        must be greater than current capacity unless current
     *        capacity is MAXIMUM_CAPACITY (in which case value is
     *        irrelevant).
     */
    void resize(int newCapacity) {
        Entry[] oldTable = table;
        int oldCapacity = oldTable.length;
        if (oldCapacity == MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return;
        }

        Entry[] newTable = new Entry[newCapacity];
        // Move the elements from the old table into the new array.
        transfer(newTable, initHashSeedAsNeeded(newCapacity));
        table = newTable;
        threshold = (int) Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
    }

    /**
     * Transfers all entries from the current table to newTable;
     * rehash says whether each hash must be recomputed.
     */
    void transfer(Entry[] newTable, boolean rehash) {
        int newCapacity = newTable.length;
        for (Entry<K,V> e : table) {
            while (null != e) {
                Entry<K,V> next = e.next;
                if (rehash) {
                    e.hash = null == e.key ? 0 : hash(e.key);
                }
                int i = indexFor(e.hash, newCapacity);
                // Linked-list node manipulation: insert at the head of
                // the new bucket.
                e.next = newTable[i];
                newTable[i] = e;
                e = next;
            }
        }
    }

    /**
     * Copies all of the mappings from the specified map to this map.
     * These mappings will replace any mappings that this map had for
     * any of the keys currently in the specified map.
     *
     * @throws NullPointerException if the specified map is null
     */
    public void putAll(Map<? extends K, ? extends V> m) {
        int numKeysToBeAdded = m.size();
        if (numKeysToBeAdded == 0)
            return;

        if (table == EMPTY_TABLE) {
            inflateTable((int) Math.max(numKeysToBeAdded * loadFactor, threshold));
        }

        /*
         * Expand the map if the number of mappings to be added is
         * greater than or equal to threshold. This is conservative;
         * the obvious condition is (m.size() + size) >= threshold, but
         * this condition could result in a map with twice the
         * appropriate capacity, if the keys to be added overlap with
         * the keys already in this map. By using the conservative
         * calculation, we subject ourselves to at most one extra
         * resize.
         */
        if (numKeysToBeAdded > threshold) {
            int targetCapacity = (int) (numKeysToBeAdded / loadFactor + 1);
            if (targetCapacity > MAXIMUM_CAPACITY)
                targetCapacity = MAXIMUM_CAPACITY;
            int newCapacity = table.length;
            while (newCapacity < targetCapacity)
                newCapacity <<= 1;
            if (newCapacity > table.length)
                resize(newCapacity);
        }

        for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
            put(e.getKey(), e.getValue());
    }

    /**
     * Removes the mapping for the specified key from this map if
     * present.
     *
     * @return the previous value associated with key, or null if there
     *         was no mapping for key. (A null return can also indicate
     *         that the map previously associated null with key.)
     */
    public V remove(Object key) {
        Entry<K,V> e = removeEntryForKey(key);
        return (e == null ? null : e.value);
    }

    /**
     * Removes and returns the entry associated with the specified key
     * in the HashMap. Returns null if the HashMap contains no mapping
     * for this key.
     */
    final Entry<K,V> removeEntryForKey(Object key) {
        if (size == 0) {
            return null;
        }
        int hash = (key == null) ? 0 : hash(key);
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        while (e != null) {
            Entry<K,V> next = e.next;
            Object k;
            if (e.hash == hash &&
                ((k = e.key) == key || (key != null && key.equals(k)))) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                // Like recordAccess(), this method is empty in HashMap
                // and exists mainly to be overridden. LinkedHashMap
                // does not override remove(Object key); instead it
                // overrides recordRemoval(), which remove() calls.
                // Note: this design is the template method pattern;
                // empty or default methods like recordRemoval() that
                // subclasses fill in are called hooks (an apt name).
                // In LinkedHashMap, recordRemoval() unlinks the
                // entry's after/before references in the header list.
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    /**
     * Special version of remove for EntrySet using Map.Entry.equals()
     * for matching.
     */
    final Entry<K,V> removeMapping(Object o) {
        if (size == 0 || !(o instanceof Map.Entry))
            return null;

        Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
        Object key = entry.getKey();
        int hash = (key == null) ? 0 : hash(key);
        int i = indexFor(hash, table.length);
        Entry<K,V> prev = table[i];
        Entry<K,V> e = prev;

        while (e != null) {
            Entry<K,V> next = e.next;
            if (e.hash == hash && e.equals(entry)) {
                modCount++;
                size--;
                if (prev == e)
                    table[i] = next;
                else
                    prev.next = next;
                e.recordRemoval(this);
                return e;
            }
            prev = e;
            e = next;
        }

        return e;
    }

    /**
     * Removes all of the mappings from this map. The map will be empty
     * after this call returns.
     */
    public void clear() {
        modCount++;
        // Null out every slot of the array.
        Arrays.fill(table, null);
        size = 0;
    }

    /**
     * Returns true if this map maps one or more keys to the specified
     * value.
     */
    public boolean containsValue(Object value) {
        if (value == null)
            return containsNullValue();

        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (Entry e = tab[i]; e != null; e = e.next)
                if (value.equals(e.value))
                    return true;
        return false;
    }

    /**
     * Special-case code for containsValue with null argument.
     */
    private boolean containsNullValue() {
        Entry[] tab = table;
        for (int i = 0; i < tab.length; i++)
            for (Entry e = tab[i]; e != null; e = e.next)
                if (e.value == null)
                    return true;
        return false;
    }

    /**
     * Returns a shallow copy of this HashMap instance: the keys and
     * values themselves are not cloned.
     */
    public Object clone() {
        HashMap<K,V> result = null;
        try {
            result = (HashMap<K,V>) super.clone();
        } catch (CloneNotSupportedException e) {
            // assert false;
        }
        if (result.table != EMPTY_TABLE) {
            result.inflateTable(Math.min(
                (int) Math.min(
                    size * Math.min(1 / loadFactor, 4.0f),
                    // we have limits...
                    HashMap.MAXIMUM_CAPACITY),
                table.length));
        }
        result.entrySet = null;
        result.modCount = 0;
        result.size = 0;
        result.init();
        result.putAllForCreate(this);

        return result;
    }

    static class Entry<K,V> implements Map.Entry<K,V> {
        final K key;
        V value;
        Entry<K,V> next;
        int hash;

        /**
         * Creates new entry.
         */
        Entry(int h, K k, V v, Entry<K,V> n) {
            value = v;
            next = n;
            key = k;
            hash = h;
        }

        public final K getKey() {
            return key;
        }

        public final V getValue() {
            return value;
        }

        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry e = (Map.Entry) o;
            Object k1 = getKey();
            Object k2 = e.getKey();
            if (k1 == k2 || (k1 != null && k1.equals(k2))) {
                Object v1 = getValue();
                Object v2 = e.getValue();
                if (v1 == v2 || (v1 != null && v1.equals(v2)))
                    return true;
            }
            return false;
        }

        public final int hashCode() {
            return Objects.hashCode(getKey()) ^ Objects.hashCode(getValue());
        }

        public final String toString() {
            return getKey() + "=" + getValue();
        }

        /**
         * This method is invoked whenever the value in an entry is
         * overwritten by an invocation of put(k,v) for a key k that's
         * already in the HashMap.
         * Like init(), it exists to be overridden by subclasses; see
         * the notes at put() about LinkedHashMap's access order.
         */
        void recordAccess(HashMap<K,V> m) {
        }

        /**
         * This method is invoked whenever the entry is removed from
         * the table.
         * Also empty in HashMap; LinkedHashMap overrides it to unlink
         * the entry's after/before references in the header list (see
         * the notes at removeEntryForKey()).
         */
        void recordRemoval(HashMap<K,V> m) {
        }
    }

    /**
     * Adds a new entry with the specified key, value and hash code to
     * the specified bucket. It is the responsibility of this method to
     * resize the table if appropriate.
     *
     * Subclasses override this to alter the behavior of the put
     * method.
     */
    void addEntry(int hash, K key, V value, int bucketIndex) {
        if ((size >= threshold) && (null != table[bucketIndex])) {
            resize(2 * table.length);
            hash = (null != key) ? hash(key) : 0;
            bucketIndex = indexFor(hash, table.length);
        }

        createEntry(hash, key, value, bucketIndex);
    }

    /**
     * Like addEntry except that this version is used when creating
     * entries as part of Map construction or "pseudo-construction"
     * (cloning, deserialization). This version needn't worry about
     * resizing the table.
     *
     * Subclasses override this to alter the behavior of HashMap(Map),
     * clone, and readObject.
     * Note: as you can see here, a new entry is inserted at the head
     * of its bucket's list.
     */
    void createEntry(int hash, K key, V value, int bucketIndex) {
        Entry<K,V> e = table[bucketIndex];
        table[bucketIndex] = new Entry<>(hash, key, value, e);
        size++;
    }

    private abstract class HashIterator<E> implements Iterator<E> {
        Entry<K,V> next;        // next entry to return
        int expectedModCount;   // for fast-fail
        int index;              // current slot
        Entry<K,V> current;     // current entry

        HashIterator() {
            expectedModCount = modCount;
            if (size > 0) { // advance to first entry
                Entry[] t = table;
                // Loop until the first non-null entry is found.
                // Note: next = t[index++] reads t[index] first, then
                // increments index.
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        final Entry<K,V> nextEntry() {
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Entry<K,V> e = next;
            if (e == null)
                throw new NoSuchElementException();

            if ((next = e.next) == null) {
                Entry[] t = table;
                while (index < t.length && (next = t[index++]) == null)
                    ;
            }
            current = e;
            return e;
        }

        public void remove() {
            if (current == null)
                throw new IllegalStateException();
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            Object k = current.key;
            current = null;
            HashMap.this.removeEntryForKey(k);
            expectedModCount = modCount;
        }
    }

    private final class ValueIterator extends HashIterator<V> {
        public V next() {
            return nextEntry().value;
        }
    }

    private final class KeyIterator extends HashIterator<K> {
        public K next() {
            return nextEntry().getKey();
        }
    }

    private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
        public Map.Entry<K,V> next() {
            return nextEntry();
        }
    }

    // Subclass overrides these to alter behavior of views' iterator()
    // method
    Iterator<K> newKeyIterator() {
        return new KeyIterator();
    }
    Iterator<V> newValueIterator() {
        return new ValueIterator();
    }
    Iterator<Map.Entry<K,V>> newEntryIterator() {
        return new EntryIterator();
    }

    // Views

    private transient Set<Map.Entry<K,V>> entrySet = null;

    /**
     * Returns a Set view of the keys contained in this map. The set is
     * backed by the map, so changes to the map are reflected in the
     * set, and vice versa. If the map is modified while an iteration
     * over the set is in progress (except through the iterator's own
     * remove operation), the results of the iteration are undefined.
     * The set supports element removal, which removes the
     * corresponding mapping from the map, via Iterator.remove,
     * Set.remove, removeAll, retainAll, and clear. It does not support
     * the add or addAll operations.
     */
    public Set<K> keySet() {
        Set<K> ks = keySet;
        return (ks != null ? ks : (keySet = new KeySet()));
    }

    private final class KeySet extends AbstractSet<K> {
        public Iterator<K> iterator() {
            return newKeyIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsKey(o);
        }
        public boolean remove(Object o) {
            return HashMap.this.removeEntryForKey(o) != null;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /**
     * Returns a Collection view of the values contained in this map.
     * The collection is backed by the map, so changes to the map are
     * reflected in the collection, and vice versa. If the map is
     * modified while an iteration over the collection is in progress
     * (except through the iterator's own remove operation), the
     * results of the iteration are undefined. The collection supports
     * element removal, which removes the corresponding mapping from
     * the map, via Iterator.remove, Collection.remove, removeAll,
     * retainAll and clear. It does not support the add or addAll
     * operations.
     */
    public Collection<V> values() {
        Collection<V> vs = values;
        return (vs != null ? vs : (values = new Values()));
    }

    private final class Values extends AbstractCollection<V> {
        public Iterator<V> iterator() {
            return newValueIterator();
        }
        public int size() {
            return size;
        }
        public boolean contains(Object o) {
            return containsValue(o);
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /**
     * Returns a Set view of the mappings contained in this map. The
     * set is backed by the map, so changes to the map are reflected in
     * the set, and vice versa. If the map is modified while an
     * iteration over the set is in progress (except through the
     * iterator's own remove operation, or through the setValue
     * operation on a map entry returned by the iterator), the results
     * of the iteration are undefined. The set supports element
     * removal, which removes the corresponding mapping from the map,
     * via Iterator.remove, Set.remove, removeAll, retainAll and clear.
     * It does not support the add or addAll operations.
     *
     * @return a set view of the mappings contained in this map
     */
    public Set<Map.Entry<K,V>> entrySet() {
        return entrySet0();
    }

    private Set<Map.Entry<K,V>> entrySet0() {
        Set<Map.Entry<K,V>> es = entrySet;
        return es != null ? es : (entrySet = new EntrySet());
    }

    private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public Iterator<Map.Entry<K,V>> iterator() {
            return newEntryIterator();
        }
        public boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<K,V> e = (Map.Entry<K,V>) o;
            Entry<K,V> candidate = getEntry(e.getKey());
            return candidate != null && candidate.equals(e);
        }
        public boolean remove(Object o) {
            return removeMapping(o) != null;
        }
        public int size() {
            return size;
        }
        public void clear() {
            HashMap.this.clear();
        }
    }

    /**
     * Save the state of the HashMap instance to a stream (i.e.,
     * serialize it).
     *
     * @serialData The capacity of the HashMap (the length of the
     *             bucket array) is emitted (int), followed by the size
     *             (an int, the number of key-value mappings), followed
     *             by the key (Object) and value (Object) for each
     *             key-value mapping. The key-value mappings are
     *             emitted in no particular order.
     */
    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        // Write out the threshold, loadfactor, and any hidden stuff
        s.defaultWriteObject();

        // Write out number of buckets
        if (table == EMPTY_TABLE) {
            s.writeInt(roundUpToPowerOf2(threshold));
        } else {
            s.writeInt(table.length);
        }

        // Write out size (number of Mappings)
        s.writeInt(size);

        // Write out keys and values (alternating)
        if (size > 0) {
            for (Map.Entry<K,V> e : entrySet0()) {
                s.writeObject(e.getKey());
                s.writeObject(e.getValue());
            }
        }
    }

    private static final long serialVersionUID = 362498820763181265L;

    /**
     * Reconstitute the HashMap instance from a stream (i.e.,
     * deserialize it).
     */
    private void readObject(java.io.ObjectInputStream s)
        throws IOException, ClassNotFoundException
    {
        // Read in the threshold (ignored), loadfactor, and any hidden
        // stuff
        s.defaultReadObject();
        if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
            throw new InvalidObjectException("Illegal load factor: " + loadFactor);
        }

        // set other fields that need values
        table = (Entry<K,V>[]) EMPTY_TABLE;

        // Read in number of buckets.
        // writeObject emitted two ints in order, so read them back in
        // the same order here.
        s.readInt(); // ignored.

        // Read number of mappings
        int mappings = s.readInt();
        if (mappings < 0)
            throw new InvalidObjectException("Illegal mappings count: " + mappings);

        // capacity chosen by number of mappings and desired load (if >= 0.25)
        int capacity = (int) Math.min(
                mappings * Math.min(1 / loadFactor, 4.0f),
                // we have limits...
                HashMap.MAXIMUM_CAPACITY);

        // allocate the bucket array;
        if (mappings > 0) {
            inflateTable(capacity);
        } else {
            // A negative mappings count threw above, so this branch
            // runs only when mappings == 0, in which case capacity is
            // also 0.
            threshold = capacity;
        }

        init(); // Give subclass a chance to do its thing.

        // Read the keys and values, and put the mappings in the
        // HashMap
        for (int i = 0; i < mappings; i++) {
            K key = (K) s.readObject();
            V value = (V) s.readObject();
            putForCreate(key, value);
        }
    }

    // These methods are used when serializing HashSets
    int capacity() {
        return table.length;
    }
    float loadFactor() {
        return loadFactor;
    }
}
```
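The sizing advice from the walkthrough above can be tried directly. The sketch below (PresizeDemo is a made-up name for illustration) presizes a map so that, per the threshold formula, no rehash should occur while it fills, and also demonstrates the null-key and null-value behavior discussed at get():

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        int expected = 1000;
        // Choose an initial capacity so that capacity * loadFactor >= expected,
        // avoiding rehashes while the map fills. 0.75f is the default load
        // factor; 1334 is rounded up internally to the next power of two (2048).
        int initialCapacity = (int) (expected / 0.75f) + 1;
        Map<Integer, String> map = new HashMap<>(initialCapacity);
        for (int i = 0; i < expected; i++) {
            map.put(i, "v" + i);
        }

        // A null key is legal and always lands in table[0].
        map.put(null, "null-key");
        System.out.println(map.get(null)); // null-key
        System.out.println(map.size());    // 1001

        // get() returning null is ambiguous: the key may be absent, or it may
        // be explicitly mapped to null. Use containsKey to distinguish.
        map.put(0, null);
        System.out.println(map.get(0));         // null
        System.out.println(map.containsKey(0)); // true
    }
}
```

2048 * 0.75 = 1536 >= 1000, so the threshold is never crossed during the fill.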
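The recordAccess() hook discussed above is what drives LinkedHashMap's access order, and its effect is easy to observe. A minimal sketch (AccessOrderDemo is a made-up name); iteration runs from least-recently to most-recently accessed:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true selects access order; each get()/put() of an
        // existing key triggers the recordAccess() hook, which relinks the
        // entry to the most-recently-used end of the internal list.
        Map<String, Integer> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("a", 1);
        lru.put("b", 2);
        lru.put("c", 3);
        lru.get("a"); // "a" becomes the most recently used entry
        System.out.println(lru.keySet()); // [b, c, a]
    }
}
```

With accessOrder = false (the default), iteration would stay in insertion order, [a, b, c].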