Arrays
- Advantages: a simple linear sequence with fast indexed access. From an efficiency and type-checking standpoint, arrays are the best option.
- Disadvantages: inflexible, since the length is fixed at creation; the contiguous storage makes insertion and deletion expensive; no direct support for key-value mappings; little encapsulation, so operations are tedious.
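The deletion cost described above is easy to see in code. A minimal sketch (the class name is mine); "removing" an element from an array means shifting every later element by hand:

```java
import java.util.Arrays;

public class ArrayDrawbacksDemo {
    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5};          // length is fixed at 5 forever

        // "Deleting" index 1 means shifting every later element left manually.
        int removeIdx = 1;
        System.arraycopy(a, removeIdx + 1, a, removeIdx, a.length - removeIdx - 1);
        a[a.length - 1] = 0;                // the stale last slot must be cleared by hand
        System.out.println(Arrays.toString(a)); // [1, 3, 4, 5, 0]
    }
}
```

The collection classes below wrap exactly this kind of bookkeeping.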
Collections (Collection)
The List interface
- Ordered: elements are addressed by index (array-like positioning)
- Duplicates allowed: the same element can be added at different index positions, i.e. e1.equals(e2) may hold for two entries
ArrayList
Backed by an array: fast random access, slow insertion and removal, not thread-safe.
Source notes
/**
* Extends AbstractList and implements List: ordered, duplicates allowed
* Implements RandomAccess: supports fast indexed access
* Implements Cloneable: can be cloned
* Implements Serializable: can be serialized
* Not thread-safe
*/
public class ArrayList<E> extends AbstractList<E>
implements List<E>, RandomAccess, Cloneable, java.io.Serializable
{
private static final long serialVersionUID = 8683452581122892189L;
/**
* Default capacity
*/
private static final int DEFAULT_CAPACITY = 10;
/**
* Shared empty array used when a constructor is given an explicit capacity of 0
*/
private static final Object[] EMPTY_ELEMENTDATA = {};
public ArrayList(int initialCapacity) {
if (initialCapacity > 0) {
this.elementData = new Object[initialCapacity];
} else if (initialCapacity == 0) {
this.elementData = EMPTY_ELEMENTDATA;
} else {
throw new IllegalArgumentException("Illegal Capacity: "+
initialCapacity);
}
}
/**
* Shared empty array used by the no-arg constructor.
* Kept distinct from EMPTY_ELEMENTDATA so that the first add() knows to
* inflate the array to DEFAULT_CAPACITY (10).
*/
private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {};
public ArrayList() {
this.elementData = DEFAULTCAPACITY_EMPTY_ELEMENTDATA;
}
/**
* The backing Object[] in which elements are stored
*/
transient Object[] elementData;
private int size;
}
transient
: the field is skipped when the object is serialized
When capacity runs out, the array grows by 50%.
/**
* Public hook to pre-size the buffer.
* If the backing array is the shared default, growth below DEFAULT_CAPACITY
* is pointless (the first add inflates to 10 anyway), so minExpand is 10;
* otherwise minExpand is 0. Only grow when minCapacity > minExpand.
*/
public void ensureCapacity(int minCapacity) {
int minExpand = (elementData != DEFAULTCAPACITY_EMPTY_ELEMENTDATA)
? 0
: DEFAULT_CAPACITY;
if (minCapacity > minExpand) {
ensureExplicitCapacity(minCapacity);
}
}
/**
* Internal capacity check.
* If the backing array is still the shared default, the requested capacity
* is raised to at least DEFAULT_CAPACITY (10).
*/
private void ensureCapacityInternal(int minCapacity) {
if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
minCapacity = Math.max(DEFAULT_CAPACITY, minCapacity);
}
ensureExplicitCapacity(minCapacity);
}
/**
* modCount counts structural modifications (used for fail-fast iterators)
*/
private void ensureExplicitCapacity(int minCapacity) {
modCount++;
if (minCapacity - elementData.length > 0)
grow(minCapacity);
}
/**
* Grow by 50% first; if that is still too small, use the requested capacity
* directly. If the new capacity exceeds MAX_ARRAY_SIZE, hugeCapacity caps it
* (at most Integer.MAX_VALUE). Finally the elements are copied into the new array.
*/
private void grow(int minCapacity) {
int oldCapacity = elementData.length;
int newCapacity = oldCapacity + (oldCapacity >> 1); // shift right by 1, i.e. n/2: grow by 50%
if (newCapacity - minCapacity < 0)
newCapacity = minCapacity;
if (newCapacity - MAX_ARRAY_SIZE > 0)
newCapacity = hugeCapacity(minCapacity);
elementData = Arrays.copyOf(elementData, newCapacity);
}
private static int hugeCapacity(int minCapacity) {
if (minCapacity < 0)
throw new OutOfMemoryError();
return (minCapacity > MAX_ARRAY_SIZE) ?
Integer.MAX_VALUE :
MAX_ARRAY_SIZE;
}
private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
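The 1.5x growth rule can be traced by hand. A small sketch (class name is mine) that applies `newCapacity = oldCapacity + (oldCapacity >> 1)` repeatedly, starting from DEFAULT_CAPACITY:

```java
public class ArrayListGrowthMath {
    public static void main(String[] args) {
        // newCapacity = oldCapacity + (oldCapacity >> 1), i.e. grow by 50%
        int capacity = 10;                        // DEFAULT_CAPACITY
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            sb.append(capacity).append(' ');
            capacity = capacity + (capacity >> 1);
        }
        System.out.println(sb.toString().trim()); // 10 15 22 33
    }
}
```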
LinkedList
LinkedList stores its elements in a doubly linked list: queries are slow, but insertion and removal are fast (no array data needs to be shifted; only node pointers are rewired). It is not thread-safe.
Unlike ArrayList, it does not implement RandomAccess, so there is no true index-based access and random element access is slow.
Doubly linked list
: a linked-list variant in which every node holds two pointers, one to the previous node and one to the next
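Because LinkedList also implements Deque, both ends can be modified in O(1) by rewiring the first/last pointers. A quick sketch (class name is mine):

```java
import java.util.LinkedList;

public class LinkedListDequeDemo {
    public static void main(String[] args) {
        LinkedList<String> list = new LinkedList<>();
        list.add("b");          // append at the tail
        list.addFirst("a");     // O(1): rewire the first pointer
        list.addLast("c");      // O(1): rewire the last pointer
        System.out.println(list);               // [a, b, c]
        System.out.println(list.removeFirst()); // a
    }
}
```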
Source notes
/**
* Extends AbstractSequentialList and implements List: ordered, duplicates allowed
* Implements Deque: usable as a double-ended queue
* Implements Cloneable: can be cloned
* Implements Serializable: can be serialized
* Not thread-safe
*/
public class LinkedList<E>
extends AbstractSequentialList<E>
implements List<E>, Deque<E>, Cloneable, java.io.Serializable
{
transient int size = 0;
/**
* Pointer to the first node
*/
transient Node<E> first;
/**
* Pointer to the last node
*/
transient Node<E> last;
public LinkedList() {
}
public LinkedList(Collection<? extends E> c) {
this();
addAll(c);
}
/**
* Appends all elements of the collection to the list,
* inserting at index = current size (the tail)
*/
public boolean addAll(Collection<? extends E> c) {
return addAll(size, c);
}
public boolean addAll(int index, Collection<? extends E> c) {
checkPositionIndex(index);
Object[] a = c.toArray();
int numNew = a.length;
if (numNew == 0)
return false;
Node<E> pred, succ; // the nodes before and after the insertion point
if (index == size) {
succ = null; // inserting at the tail: no successor
pred = last; // predecessor is the current tail
} else {
succ = node(index); // the node currently at index becomes the successor
pred = succ.prev; // predecessor is the node before it
}
// walk the array and link in a new node for each element
for (Object o : a) {
@SuppressWarnings("unchecked") E e = (E) o;
Node<E> newNode = new Node<>(pred, e, null);
if (pred == null)
first = newNode;
else
pred.next = newNode;
pred = newNode;
}
if (succ == null) {
last = pred;
} else {
pred.next = succ;
succ.prev = pred;
}
size += numNew;
modCount++;
return true;
}
/**
* A node of the doubly linked list
*/
private static class Node<E> {
E item; // the element
Node<E> next; // the next node
Node<E> prev; // the previous node
Node(Node<E> prev, E element, Node<E> next) {
this.item = element;
this.next = next;
this.prev = prev;
}
}
Vector
Vector is essentially an ArrayList whose element-manipulating methods are all marked synchronized, making it thread-safe.
Its growth policy also differs from ArrayList's: instead of 50%, Vector doubles its capacity when no growth step was given at construction (capacityIncrement == 0); if a capacityIncrement was specified, it grows by exactly that amount.
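Vector exposes its capacity through the public capacity() method, so both growth policies can be observed directly (class name is mine):

```java
import java.util.Vector;

public class VectorGrowthDemo {
    public static void main(String[] args) {
        // Default constructor: capacity 10, capacityIncrement 0 -> doubles on growth.
        Vector<Integer> v = new Vector<>();
        for (int i = 0; i < 11; i++) v.add(i);   // the 11th element forces growth
        System.out.println(v.capacity());        // 20 (10 doubled)

        // Explicit increment: capacity grows by exactly 5 each time.
        Vector<Integer> w = new Vector<>(10, 5);
        for (int i = 0; i < 11; i++) w.add(i);
        System.out.println(w.capacity());        // 15 (10 + 5)
    }
}
```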
Source notes
/**
* Default capacity: 10
*/
public Vector() {
this(10);
}
/**
* Specify the initial capacity (initialCapacity) and the growth step (capacityIncrement)
*/
public Vector(int initialCapacity, int capacityIncrement) {
super();
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal Capacity: "+
initialCapacity);
this.elementData = new Object[initialCapacity];
this.capacityIncrement = capacityIncrement;
}
/**
* Capacity check
*/
private void ensureCapacityHelper(int minCapacity) {
// overflow-conscious code
if (minCapacity - elementData.length > 0)
grow(minCapacity);
}
/**
* If no capacityIncrement was specified at construction (capacityIncrement == 0),
* the capacity doubles; otherwise it grows by exactly capacityIncrement.
* The elements are then copied with Arrays.copyOf.
*/
private void grow(int minCapacity) {
int oldCapacity = elementData.length;
int newCapacity = oldCapacity + ((capacityIncrement > 0) ?
capacityIncrement : oldCapacity);
if (newCapacity - minCapacity < 0)
newCapacity = minCapacity;
if (newCapacity - MAX_ARRAY_SIZE > 0)
newCapacity = hugeCapacity(minCapacity);
elementData = Arrays.copyOf(elementData, newCapacity);
}
private static int hugeCapacity(int minCapacity) {
if (minCapacity < 0) // overflow
throw new OutOfMemoryError();
return (minCapacity > MAX_ARRAY_SIZE) ?
Integer.MAX_VALUE :
MAX_ARRAY_SIZE;
}
The Map interface
A Map stores key-value (K-V) pairs addressed by key; keys cannot repeat (putting an existing key overwrites the old value).
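A short demo of the overwrite behavior (class name is mine); note that put returns the previous value, or null if there was none:

```java
import java.util.HashMap;
import java.util.Map;

public class MapOverwriteDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        System.out.println(m.put("k", 1)); // null: no previous mapping
        System.out.println(m.put("k", 2)); // 1: old value returned, entry overwritten
        System.out.println(m.get("k"));    // 2
    }
}
```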
HashMap
HashMap is backed by a hash table, essentially "array + linked lists": arrays query fast but insert and delete slowly, linked lists are the reverse, and the hash table combines fast lookup with fast insertion and removal.
Since JDK 8, a bucket's linked list is converted to a red-black tree once its length reaches 8 (provided the table itself holds at least 64 buckets).
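The bucket index in HashMap is computed as (n - 1) & hash; because the table length n is always a power of two, this equals hash % n but avoids the division. A sketch (class name is mine):

```java
public class HashIndexDemo {
    public static void main(String[] args) {
        int n = 16; // table length, always a power of two
        for (int hash : new int[]{5, 21, 37}) {
            // (n - 1) & hash == hash % n when n is a power of two, but faster
            System.out.println(((n - 1) & hash) + " " + (hash % n));
        }
    }
}
```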
Source notes
/**
* Implements Cloneable, which permits field-for-field copying; clone() requires it
*/
public class HashMap<K,V> extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable {
/**
* Default initial capacity: 16; the table length must always be a power of two
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
/**
* Maximum capacity: 2^30
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* Default load factor: 0.75; once more than capacity * 0.75 slots are in use,
* the table is resized
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* Treeify threshold: since JDK 8, a bucket's list is turned into a red-black
* tree when its length reaches 8
*/
static final int TREEIFY_THRESHOLD = 8;
/**
* Untreeify threshold: a tree shrinks back to a list when it drops to 6 or fewer nodes
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
* The smallest table capacity at which bins may be treeified; below 64 buckets,
* an overlong bin triggers a resize instead of treeification. Should be at
* least 4 * TREEIFY_THRESHOLD to avoid conflicts between resizing and treeification.
*/
static final int MIN_TREEIFY_CAPACITY = 64;
/**
* The core bucket array; 16 slots by default
*/
transient Node<K,V>[] table;
}
Node
/**
* A Node holds only a next pointer, so each bucket is a singly linked list
*/
static class Node<K,V> implements Map.Entry<K,V> {
final int hash; // cached hash value
final K key;
V value;
Node<K,V> next; // next node in the bucket
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
public final K getKey() { return key; }
public final V getValue() { return value; }
public final String toString() { return key + "=" + value; }
public final int hashCode() {
return Objects.hashCode(key) ^ Objects.hashCode(value);
}
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
/**
* Two nodes are equal when both key and value are equal
*/
public final boolean equals(Object o) {
if (o == this)
return true;
if (o instanceof Map.Entry) {
Map.Entry<?,?> e = (Map.Entry<?,?>)o;
if (Objects.equals(key, e.getKey()) &&
Objects.equals(value, e.getValue()))
return true;
}
return false;
}
}
The key can be null
// at most one null key; it hashes to bucket 0
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
Red-black tree
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
TreeNode<K,V> parent; // parent node
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next) {
super(hash, key, val, next);
}
/**
* Returns the root of the tree containing this node
*/
final TreeNode<K,V> root() {
for (TreeNode<K,V> r = this, p;;) {
if ((p = r.parent) == null)
return r;
r = p;
}
}
//.................
}
Constructors
/**
* Specify the initial capacity and the load factor
*/
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
this.threshold = tableSizeFor(initialCapacity);
}
// specify the capacity only
public HashMap(int initialCapacity) {
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
// defaults
public HashMap() {
this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}
public HashMap(Map<? extends K, ? extends V> m) {
this.loadFactor = DEFAULT_LOAD_FACTOR;
putMapEntries(m, false);
}
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
int s = m.size();
if (s > 0) {
if (table == null) { // pre-size
float ft = ((float)s / loadFactor) + 1.0F;
int t = ((ft < (float)MAXIMUM_CAPACITY) ?
(int)ft : MAXIMUM_CAPACITY);
if (t > threshold)
threshold = tableSizeFor(t);
}
else if (s > threshold)
resize();
for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
K key = e.getKey();
V value = e.getValue();
putVal(hash(key), key, value, false, evict);
}
}
}
The put method
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
/**
* onlyIfAbsent: if true, an existing value for the key is not overwritten
* evict: false means the table is in creation mode (called during construction)
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
// the table is empty on first use: resize() performs the initial allocation
if ((tab = table) == null || (n = tab.length) == 0)
n = (tab = resize()).length;
// if the slot at index is empty there was no hash collision:
// build a new node and hang it there; index = (table length - 1) & hash
if ((p = tab[i = (n - 1) & hash]) == null)
tab[i] = newNode(hash, key, value, null);
else { // a hash collision occurred
Node<K,V> e; K k;
// same hash and equal key: remember the node so its value gets overwritten
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
e = p;
// the bucket is a red-black tree: delegate to the tree insert
else if (p instanceof TreeNode)
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
else {
// walk to the tail of the list and append the node
for (int binCount = 0; ; ++binCount) {
if ((e = p.next) == null) {
p.next = newNode(hash, key, value, null);
// if the list now holds 8 or more nodes, convert it to a red-black tree
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash);
break;
}
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
// e != null means the key already existed: overwrite its value
if (e != null) { // existing mapping for key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
e.value = value;
// callback hook for LinkedHashMap
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
if (++size > threshold)
resize();
// callback hook for LinkedHashMap
afterNodeInsertion(evict);
return null;
}
The red-black tree insert, putTreeVal (trees are rather involved; the details are deferred)
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
int h, K k, V v) {
Class<?> kc = null;
boolean searched = false;
TreeNode<K,V> root = (parent != null) ? root() : this;
for (TreeNode<K,V> p = root;;) {
int dir, ph; K pk;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode<K,V> q, ch;
searched = true;
if (((ch = p.left) != null &&
(q = ch.find(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.find(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
}
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
Node<K,V> xpn = xp.next;
TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
if (dir <= 0)
xp.left = x;
else
xp.right = x;
xp.next = x;
x.parent = x.prev = xp;
if (xpn != null)
((TreeNode<K,V>)xpn).prev = x;
moveRootToFront(tab, balanceInsertion(root, x));
return null;
}
}
}
Resizing: resize()
final Node<K,V>[] resize() {
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length; // current table length
int oldThr = threshold; // current threshold
int newCap, newThr = 0; // new capacity and threshold
if (oldCap > 0) { // the current table is non-empty
if (oldCap >= MAXIMUM_CAPACITY) { // already at the capacity ceiling
threshold = Integer.MAX_VALUE; // raise the threshold to the maximum
return oldTab; // and return the table unchanged: no further growth
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && // new capacity = double the old
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // the threshold doubles as well
}
else if (oldThr > 0) // empty table but a threshold exists: a capacity was given but no data added yet
newCap = oldThr; // the new capacity is the stored threshold
else { // empty table and no threshold: constructed with no capacity/threshold arguments
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) { // new threshold still 0: the "empty table with threshold" case
float ft = (float)newCap * loadFactor; // new threshold = new capacity * load factor
// clamp against overflow
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
// store the new threshold
threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
// allocate the new bucket array with the new capacity
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
// swap in the new table
table = newTab;
// if the old table held elements,
// move every node from the old table into the new one
if (oldTab != null) {
// iterate over the old buckets
for (int j = 0; j < oldCap; ++j) {
// the current node e
Node<K,V> e;
// if the bucket is non-empty, take its list
if ((e = oldTab[j]) != null) {
// clear the old slot so it can be GC'd
oldTab[j] = null;
// a single node in the bucket (no collision ever happened)
if (e.next == null)
// place it straight into the new table;
// the index is hash & (length - 1): since the length is a power of two,
// this is equivalent to a modulo but faster
newTab[e.hash & (newCap - 1)] = e;
// collisions happened and the bin was treeified (red-black details deferred)
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
// collisions happened but the bin is still a list: re-distribute each node by its hash
else { // preserve order
// the capacity doubled, so each node either stays at its old index (the "low"
// list) or moves to old index + oldCap (the "high" list)
// head and tail of the low list
Node<K,V> loHead = null, loTail = null;
// head and tail of the high list
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next; // temporary: e's successor
do {
next = e.next;
// another bit trick replacing a modulo: hash & oldCap is 0 exactly when the
// node keeps its old index (low list); otherwise it belongs in the high list
if ((e.hash & oldCap) == 0) {
// maintain the low list's head and tail pointers
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
} // the high list follows the same logic
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
} // loop until the list ends
} while ((e = next) != null);
// the low list stays at the original index
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
// the high list goes to index + oldCap
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
Hashtable
Hashtable is largely identical to HashMap, except that it is thread-safe (and correspondingly slower) and forbids null keys and values.
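The null restriction can be verified directly (class name is mine): a null value is rejected by an explicit check in put, while a null key fails because put calls key.hashCode():

```java
import java.util.Hashtable;

public class HashtableNullDemo {
    public static void main(String[] args) {
        Hashtable<String, String> table = new Hashtable<>();
        try {
            table.put("k", null);  // explicit null check -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("null value rejected");
        }
        try {
            table.put(null, "v");  // key.hashCode() on null -> NullPointerException
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
    }
}
```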
Source notes
/**
* Extends Dictionary (an obsolete class)
*/
public class Hashtable<K,V>
extends Dictionary<K,V>
implements Map<K,V>, Cloneable, java.io.Serializable {
// specify capacity and load factor
public Hashtable(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal Capacity: "+
initialCapacity);
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal Load: "+loadFactor);
if (initialCapacity==0)
initialCapacity = 1;
this.loadFactor = loadFactor;
table = new Entry<?,?>[initialCapacity];
threshold = (int)Math.min(initialCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
}
// specify the capacity only
public Hashtable(int initialCapacity) {
this(initialCapacity, 0.75f);
}
// defaults: capacity 11, load factor 0.75f
public Hashtable() {
this(11, 0.75f);
}
public Hashtable(Map<? extends K, ? extends V> t) {
this(Math.max(2*t.size(), 11), 0.75f);
putAll(t);
}
}
Methods (all synchronized)
public synchronized int size() {
return count;
}
public synchronized boolean isEmpty() {
return count == 0;
}
public synchronized V get(Object key) {
Entry<?,?> tab[] = table;
int hash = key.hashCode();
int index = (hash & 0x7FFFFFFF) % tab.length;
for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
if ((e.hash == hash) && e.key.equals(key)) {
return (V)e.value;
}
}
return null;
}
public synchronized V put(K key, V value) {
// Make sure the value is not null
if (value == null) {
throw new NullPointerException();
}
// Makes sure the key is not already in the hashtable.
Entry<?,?> tab[] = table;
int hash = key.hashCode();
int index = (hash & 0x7FFFFFFF) % tab.length;
@SuppressWarnings("unchecked")
Entry<K,V> entry = (Entry<K,V>)tab[index];
for(; entry != null ; entry = entry.next) {
if ((entry.hash == hash) && entry.key.equals(key)) {
V old = entry.value;
entry.value = value;
return old;
}
}
addEntry(hash, key, value, index);
return null;
}
public synchronized V remove(Object key) {
Entry<?,?> tab[] = table;
int hash = key.hashCode();
int index = (hash & 0x7FFFFFFF) % tab.length;
@SuppressWarnings("unchecked")
Entry<K,V> e = (Entry<K,V>)tab[index];
for(Entry<K,V> prev = null ; e != null ; prev = e, e = e.next) {
if ((e.hash == hash) && e.key.equals(key)) {
modCount++;
if (prev != null) {
prev.next = e.next;
} else {
tab[index] = e.next;
}
count--;
V oldValue = e.value;
e.value = null;
return oldValue;
}
}
return null;
}
public synchronized void putAll(Map<? extends K, ? extends V> t) {
for (Map.Entry<? extends K, ? extends V> e : t.entrySet())
put(e.getKey(), e.getValue());
}
Neither key nor value may be null
/**
* hash = key.hashCode(): a null key has no hashCode, so get and put throw
* NullPointerException for null keys
*/
public synchronized V get(Object key) {
Entry<?,?> tab[] = table;
int hash = key.hashCode();
int index = (hash & 0x7FFFFFFF) % tab.length;
for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
if ((e.hash == hash) && e.key.equals(key)) {
return (V)e.value;
}
}
return null;
}
/**
* The value is checked explicitly: a null value throws NullPointerException
*/
public synchronized V put(K key, V value) {
// Make sure the value is not null
if (value == null) {
throw new NullPointerException();
}
// Makes sure the key is not already in the hashtable.
Entry<?,?> tab[] = table;
int hash = key.hashCode();
int index = (hash & 0x7FFFFFFF) % tab.length;
@SuppressWarnings("unchecked")
Entry<K,V> entry = (Entry<K,V>)tab[index];
for(; entry != null ; entry = entry.next) {
if ((entry.hash == hash) && entry.key.equals(key)) {
V old = entry.value;
entry.value = value;
return old;
}
}
addEntry(hash, key, value, index);
return null;
}
rehash: resizing
/**
* Grows to 2n + 1
*/
protected void rehash() {
int oldCapacity = table.length;
Entry<?,?>[] oldMap = table;
// overflow-conscious code
int newCapacity = (oldCapacity << 1) + 1;
if (newCapacity - MAX_ARRAY_SIZE > 0) {
if (oldCapacity == MAX_ARRAY_SIZE)
// Keep running with MAX_ARRAY_SIZE buckets
return;
newCapacity = MAX_ARRAY_SIZE;
}
Entry<?,?>[] newMap = new Entry<?,?>[newCapacity];
modCount++;
threshold = (int)Math.min(newCapacity * loadFactor, MAX_ARRAY_SIZE + 1);
table = newMap;
for (int i = oldCapacity ; i-- > 0 ;) {
for (Entry<K,V> old = (Entry<K,V>)oldMap[i] ; old != null ; ) {
Entry<K,V> e = old;
old = old.next;
int index = (e.hash & 0x7FFFFFFF) % newCapacity;
e.next = (Entry<K,V>)newMap[index];
newMap[index] = e;
}
}
}
TreeMap
A canonical red-black tree implementation; entries are automatically kept sorted by key.
// the root node
private transient Entry<K,V> root;
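The automatic ordering is easy to observe (class name is mine): regardless of insertion order, iteration follows the sorted key order.

```java
import java.util.TreeMap;

public class TreeMapOrderDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> m = new TreeMap<>();
        m.put("pear", 3);
        m.put("apple", 1);
        m.put("mango", 2);
        System.out.println(m.keySet()); // [apple, mango, pear]: sorted, not insertion order
    }
}
```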
put
public V put(K key, V value) {
Entry<K,V> t = root;
if (t == null) {
compare(key, key); // type (and possibly null) check
root = new Entry<>(key, value, null);
size = 1;
modCount++;
return null;
}
int cmp;
Entry<K,V> parent; // parent of the insertion point
// split comparator and comparable paths
Comparator<? super K> cpr = comparator;
if (cpr != null) {
do {
parent = t; // descend from the root: t becomes the left or right child depending
            // on the comparison, until t is null; parent then holds the last node visited
cmp = cpr.compare(key, t.key);
if (cmp < 0) // key is smaller: go left
t = t.left;
else if (cmp > 0) // key is larger: go right
t = t.right;
else
return t.setValue(value); // equal key: overwrite the value
} while (t != null);
}
else {
if (key == null)
throw new NullPointerException();
@SuppressWarnings("unchecked")
Comparable<? super K> k = (Comparable<? super K>) key;
do {
parent = t;
cmp = k.compareTo(t.key);
if (cmp < 0)
t = t.left;
else if (cmp > 0)
t = t.right;
else
return t.setValue(value);
} while (t != null);
}
Entry<K,V> e = new Entry<>(key, value, parent); // build the node with key, value, parent
if (cmp < 0) // attach it as the left or right child
parent.left = e;
else
parent.right = e;
fixAfterInsertion(e); // rebalance and recolor to restore the red-black invariants
size++;
modCount++;
return null;
}
remove
public V remove(Object key) {
Entry<K,V> p = getEntry(key); // find the node
if (p == null) // nothing to remove
return null;
V oldValue = p.value;
deleteEntry(p);
return oldValue;
}
private void deleteEntry(Entry<K,V> p) {
modCount++;
size--;
// If strictly internal, copy successor's element to p and then make p
// point to successor.
if (p.left != null && p.right != null) { // p has two children
Entry<K,V> s = successor(p); // 1.
p.key = s.key;
p.value = s.value;
p = s;
} // p has 2 children
// Start fixup at replacement node, if it exists.
Entry<K,V> replacement = (p.left != null ? p.left : p.right); // p's only child, if any
if (replacement != null) { // p has a child
// Link replacement to parent
replacement.parent = p.parent; // the child is re-parented to p's parent
if (p.parent == null) // p was the root
root = replacement; // the child becomes the new root
else if (p == p.parent.left) // p was a left child
p.parent.left = replacement; // the child takes p's place
else
p.parent.right = replacement; // same, for a right child
// Null out links so they are OK to use by fixAfterDeletion.
p.left = p.right = p.parent = null; // clear p's links
// Fix replacement
if (p.color == BLACK) // a black node was removed
fixAfterDeletion(replacement); // restore the red-black invariants
} else if (p.parent == null) { // return if we are the only node: p is the whole tree
root = null;
} else { // No children. Use self as phantom replacement and unlink.
if (p.color == BLACK)
fixAfterDeletion(p); // restore the invariants
if (p.parent != null) {
if (p == p.parent.left) // p is a left child
p.parent.left = null; // unlink it
else if (p == p.parent.right) // p is a right child
p.parent.right = null;
p.parent = null;
}
}
}
/**
* 1.
* Returns the successor of the specified Entry, or null if no such.
*/
static <K,V> TreeMap.Entry<K,V> successor(Entry<K,V> t) {
if (t == null) // no node
return null;
else if (t.right != null) { // t has a right subtree
Entry<K,V> p = t.right;
while (p.left != null) // keep going left
p = p.left;
return p; // the leftmost node of the right subtree
} else { // no right subtree
Entry<K,V> p = t.parent;
Entry<K,V> ch = t;
while (p != null && ch == p.right) { // climb while we are a right child
ch = p;
p = p.parent;
}
return p; // the first ancestor reached from a left child
}
}
Set
The Set interface extends Collection and adds no methods of its own; its API is identical to Collection's.
Set: unordered, no duplicates
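The "no duplicates" contract is visible in add's return value (class name is mine):

```java
import java.util.HashSet;
import java.util.Set;

public class SetDedupDemo {
    public static void main(String[] args) {
        Set<String> s = new HashSet<>();
        System.out.println(s.add("x")); // true: newly added
        System.out.println(s.add("x")); // false: duplicate rejected
        System.out.println(s.size());   // 1
    }
}
```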
HashSet
HashSet hashes its elements; it is backed by a HashMap and is essentially a simplified HashMap, so insertion, removal and lookup are all fast.
Source notes
public class HashSet<E>
extends AbstractSet<E>
implements Set<E>, Cloneable, java.io.Serializable
{
/*
* The underlying implementation is a HashMap
*/
private transient HashMap<E,Object> map;
/*
* When stored into the HashMap, K = the element E and V = PRESENT (a dummy value)
*/
private static final Object PRESENT = new Object();
public HashSet() {
map = new HashMap<>();
}
/**
* Initial capacity: max(c.size()/0.75 + 1, 16)
*/
public HashSet(Collection<? extends E> c) {
map = new HashMap<>(Math.max((int) (c.size()/.75f) + 1, 16));
addAll(c);
}
public HashSet(int initialCapacity, float loadFactor) {
map = new HashMap<>(initialCapacity, loadFactor);
}
public HashSet(int initialCapacity) {
map = new HashMap<>(initialCapacity);
}
/**
* Keys in the map are unique, so the set holds no duplicates
*/
public boolean add(E e) {
return map.put(e, PRESENT)==null;
}
// removal delegates to the map
public boolean remove(Object o) {
return map.remove(o)==PRESENT;
}
}
TreeSet
TreeSet: no null elements, no duplicates, and the elements are kept in sorted order internally.
It is backed by a red-black tree (a TreeMap), so the elements are always in sorted order. Two orderings are supported, natural ordering (the default) and a custom Comparator; all elements added to one TreeSet should be instances of the same class.
TreeSet decides whether two objects are "equal" by comparison: the Comparator's compare, or the elements' compareTo, returning 0; it does not use equals.
public class TreeSet<E> extends AbstractSet<E>
implements NavigableSet<E>, Cloneable, java.io.Serializable
{
private transient NavigableMap<E,Object> m;
// dummy value
private static final Object PRESENT = new Object();
/**
* Constructs a set backed by the specified navigable map.
*/
TreeSet(NavigableMap<E,Object> m) {
this.m = m;
}
/**
* Constructs a new, empty tree set, sorted by the natural ordering of its
* elements. Every element inserted must implement the {@link Comparable}
* interface, and all elements must be mutually comparable: {@code e1.compareTo(e2)}
* must not throw a {@code ClassCastException} for any elements {@code e1} and
* {@code e2} in the set. If an element violating this constraint is added
* (say, a String into a set of Integers), the {@code add} call throws a
* {@code ClassCastException}.
*/
public TreeSet() {
this(new TreeMap<E,Object>());
}
/**
* Constructs a new, empty tree set, sorted according to the specified
* comparator. All elements inserted must be mutually comparable by that
* comparator: {@code comparator.compare(e1, e2)} must not throw a
* {@code ClassCastException} for any elements {@code e1} and {@code e2} in
* the set; otherwise {@code add} throws a {@code ClassCastException}.
* If the comparator is {@code null}, the natural ordering of the elements is used.
*/
public TreeSet(Comparator<? super E> comparator) {
this(new TreeMap<>(comparator));
}
public TreeSet(Collection<? extends E> c) {
this();
addAll(c);
}
public TreeSet(SortedSet<E> s) {
this(s.comparator());
addAll(s);
}
}
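The comparison-based notion of equality described above has practical consequences: two elements the comparator considers equal are duplicates even if equals() would distinguish them. A sketch (class name is mine):

```java
import java.util.TreeSet;

public class TreeSetCompareDemo {
    public static void main(String[] args) {
        // Equality is decided by the comparator, not by equals():
        TreeSet<String> s = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        s.add("Java");
        System.out.println(s.add("JAVA")); // false: compare(...) == 0, treated as a duplicate
        System.out.println(s);             // [Java]
    }
}
```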
LinkedHashSet
Stores its elements in a hash table, while a doubly linked list records insertion order.
Internally it is a LinkedHashMap.
/**
* Extends HashSet
*/
public class LinkedHashSet<E>
extends HashSet<E>
implements Set<E>, Cloneable, java.io.Serializable {
public LinkedHashSet(int initialCapacity, float loadFactor) {
super(initialCapacity, loadFactor, true);
}
public LinkedHashSet(int initialCapacity) {
super(initialCapacity, .75f, true);
}
public LinkedHashSet() {
super(16, .75f, true);
}
public LinkedHashSet(Collection<? extends E> c) {
super(Math.max(2*c.size(), 11), .75f, true);
addAll(c);
}
}
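The linked list behind LinkedHashSet is what preserves insertion order during iteration, unlike a plain HashSet whose order depends on the hashes. A sketch (class name is mine):

```java
import java.util.LinkedHashSet;

public class LinkedHashSetOrderDemo {
    public static void main(String[] args) {
        LinkedHashSet<String> linked = new LinkedHashSet<>();
        for (String s : new String[]{"c", "a", "b"}) {
            linked.add(s);
        }
        System.out.println(linked); // [c, a, b]: insertion order preserved
        // a plain HashSet's iteration order would be hash-dependent instead
    }
}
```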