feat: add 50 backend interview questions and answers

Generated comprehensive interview preparation materials covering:
- Distributed systems (transactions, locks, ID generation, consistency)
- Database (indexing, sharding, replication, transactions)
- Caching (Redis, cache problems, distributed lock)
- Message queues (RocketMQ, Kafka)
- Concurrency (ThreadLocal, ConcurrentHashMap, thread pools)
- JVM (GC, memory, tuning)
- System design (seckill, short URL, IM, feed, LBS)
- Algorithms (B+ tree, LRU, Red-Black tree, Skip list, Timing wheel)
- Network (TCP/IP, HTTP/HTTPS)
- Security (encryption, SQL injection, XSS)
- Performance tuning
- Design patterns
- Microservices (Spring Boot, Gateway, Service Mesh)
- Container orchestration (Kubernetes, Docker)
- CI/CD, observability

Each file includes:
- Detailed questions
- Comprehensive answers
- Code examples
- Real project experience
- Alibaba P7 level requirements

Generated with [Claude Code](https://claude.com/claude-code)
via [Happy](https://happy.engineering)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Happy <yesreply@happy.engineering>
Commit fe2e6dc2f2 (parent d80d1cf553) by yasinshaw, 2026-03-01 00:09:50 +08:00; 26 changed files with 10016 additions and 0 deletions.

# ConcurrentHashMap Internals
## Questions
1. How do the ConcurrentHashMap implementations differ between JDK 1.7 and JDK 1.8?
2. How does ConcurrentHashMap guarantee thread safety?
3. How is ConcurrentHashMap's size() method implemented?
4. How does ConcurrentHashMap differ from Hashtable and Collections.synchronizedMap?
5. How do you choose a thread-safe Map in real projects?
---
## Model Answer
### 1. JDK 1.7 vs 1.8
#### **JDK 1.7: Segmented Locking**
**Structure**
```
ConcurrentHashMap
  Segment[] (array of segments, 16 by default)
    HashEntry[] (each Segment owns its own HashEntry array)
      HashEntry (key-value pair)
```
**Characteristics**
- **Segmented locking**: each Segment locks independently (a ReentrantLock)
- **Concurrency level**: 16 by default (the number of Segments)
- **Granularity**: Segment level
**Lock-acquisition flow**
```java
// 1. Compute the Segment index
int hash = hash(key.hashCode());
int segmentIndex = (hash >>> segmentShift) & segmentMask;
// 2. Locate the Segment
Segment<K,V> segment = segments[segmentIndex];
// 3. Lock it (ReentrantLock)
segment.lock();
try {
    // operate on this Segment's HashEntry[]
} finally {
    segment.unlock();
}
```
---
#### **JDK 1.8: CAS + synchronized**
**Structure**
```
ConcurrentHashMap
  Node[] table (array + linked list + red-black tree)
    Node / TreeNode
```
**Characteristics**
- **CAS + synchronized**: try CAS first, fall back to synchronized on contention
- **Granularity**: Node level (finer)
- **Concurrency**: theoretically unbounded (in practice limited by the array size)
**Key improvements**
| Aspect | JDK 1.7 | JDK 1.8 |
|------|---------|---------|
| **Locking** | ReentrantLock (per Segment) | CAS + synchronized (per Node) |
| **Lock granularity** | Segment level | Node level (finer) |
| **Concurrency level** | 16 by default | theoretically unbounded |
| **Reads** | may lock in rare cases (not strongly consistent) | lock-free (volatile) |
| **Red-black tree** | not supported | supported (list length ≥ 8) |
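The "CAS first, lock only on contention" row above can be sketched with a plain `AtomicReferenceArray` standing in for the internal table and `casTabAt`. This is an illustrative simplification, not JDK code; `tryClaim` is a made-up helper name:

```java
import java.util.concurrent.atomic.AtomicReferenceArray;

public class CasFirstSketch {
    // Returns true when a single CAS claims the empty slot (the lock-free path);
    // false means the slot was occupied, where the real map would then
    // synchronize on the head node and walk the bucket.
    static boolean tryClaim(AtomicReferenceArray<String> table, int i, String node) {
        return table.compareAndSet(i, null, node);
    }

    public static void main(String[] args) {
        AtomicReferenceArray<String> table = new AtomicReferenceArray<>(16);
        System.out.println(tryClaim(table, 5, "node-A")); // true: empty slot, no lock taken
        System.out.println(tryClaim(table, 5, "node-B")); // false: contention, fall back to locking
    }
}
```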
---
### 2. JDK 1.8 Implementation
#### **Core data structures**
```java
public class ConcurrentHashMap<K, V> {
    // The bucket array
    transient volatile Node<K,V>[] table;
    // The next array, used only during resizing
    private transient volatile Node<K,V>[] nextTable;
    // Base counter value (see size() below)
    private transient volatile long baseCount;
    // Control field: -1 = initializing; < -1 = resizing (encodes the resizer count)
    private transient volatile int sizeCtl;
}
```
---
#### **Node**
```java
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    volatile V val;
    volatile Node<K,V> next; // next node in the bucket's linked list
}
```
**Note**: both `val` and `next` are `volatile`, which guarantees visibility across threads.
---
#### **TreeNode (red-black tree node)**
```java
static final class TreeNode<K,V> extends Node<K,V> {
    TreeNode<K,V> parent; // parent node
    TreeNode<K,V> left;   // left child
    TreeNode<K,V> right;  // right child
    TreeNode<K,V> prev;   // predecessor (needed when unlinking)
    boolean red;          // node color (red-black tree)
}
```
**Conversion thresholds**
- Linked list → red-black tree: list length ≥ 8 **and** array length ≥ 64
- Red-black tree → linked list: node count ≤ 6
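These thresholds can be written out as a small predicate. The constant names mirror the JDK's, but `shouldTreeify` itself is an illustrative helper, not a JDK method:

```java
public class TreeifySketch {
    static final int TREEIFY_THRESHOLD = 8;     // list -> tree when a bin reaches this length
    static final int UNTREEIFY_THRESHOLD = 6;   // tree -> list when it shrinks to this
    static final int MIN_TREEIFY_CAPACITY = 64; // below this, resize the table instead

    // Treeify only when the bin is long AND the table is already large;
    // a small table is resized instead, which also spreads the bin out.
    static boolean shouldTreeify(int binLength, int tableLength) {
        return binLength >= TREEIFY_THRESHOLD && tableLength >= MIN_TREEIFY_CAPACITY;
    }
}
```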
---
### 3. Core API Source Walkthrough
#### **put()**
```java
public V put(K key, V value) {
    return putVal(key, value, false);
}

final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    // 1. Compute the hash (spread() mixes bits to reduce collisions)
    int hash = spread(key.hashCode());
    int binCount = 0;
    // 2. Infinite loop (CAS + retry)
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // 2.1 Lazily initialize the array
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // 2.2 Compute the bucket index
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // Empty bucket: insert via CAS
            if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
                break;                   // CAS succeeded, done
        }
        // 2.3 Table is resizing (MOVED = -1)
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);  // help with the resize
        // 2.4 Non-empty bucket (linked list or red-black tree)
        else {
            V oldVal = null;
            synchronized (f) {           // lock the bucket head (Node level)
                if (tabAt(tab, i) == f) {
                    // Linked list
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // Key already present: update the value
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            Node<K,V> pred = e;
                            // Reached the tail: append a new node
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    // Red-black tree
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            // 2.5 Convert the list to a red-black tree if it grew long enough
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);  // TREEIFY_THRESHOLD = 8
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    // 3. Update the element count (LongAdder-style striped counting)
    addCount(1L, binCount);
    return null;
}
```
---
#### **get()**
```java
public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    int h = spread(key.hashCode());
    // 1. Table is non-empty and the bucket is non-empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        // 2. The first node matches
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        }
        // 3. Special node (tree bin or forwarding node): delegate to find()
        else if (eh < 0)
            return (p = e.find(h, key)) != null ? p.val : null;
        // 4. Walk the linked list
        while ((e = e.next) != null) {
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
```
**Note**
- **Lock-free reads**: the entire `get()` path takes no locks
- **volatile visibility**: `Node.val` is `volatile`, so readers always see the latest written value
---
#### **size()**
**Problem**: how do you count elements efficiently under high concurrency?
**Approach**: **LongAdder-style** striped counting
```java
public int size() {
    long n = sumCount();
    return ((n < 0L) ? 0 :
            (n > (long)Integer.MAX_VALUE) ? Integer.MAX_VALUE :
            (int)n);
}

final long sumCount() {
    // Sum baseCount and every CounterCell
    CounterCell[] as = counterCells;
    long sum = baseCount;
    if (as != null) {
        for (CounterCell a : as)
            if (a != null)
                sum += a.value;
    }
    return sum;
}
```
**How the striped counter works**
```
baseCount (base value)
CounterCell[] (array of counter cells that spreads out CAS contention)
On a write, a thread updates baseCount or, under contention, one CounterCell picked via a per-thread probe
On size(), baseCount and all CounterCell values are summed
```
**Why is this fast?**
- Under high concurrency, threads update different CounterCells, so they rarely contend
- Summation is deferred until size() is called (trading real-time accuracy for write throughput)
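The same mechanism is available directly as `java.util.concurrent.atomic.LongAdder`; a minimal usage example:

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) {
        LongAdder counter = new LongAdder();
        counter.increment();               // may hit the base value or a striped cell
        counter.add(5);
        System.out.println(counter.sum()); // 6: sums the base value plus all cells
    }
}
```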
---
### 4. ConcurrentHashMap vs Other Maps
#### **Comparison table**
| Aspect | HashMap | Hashtable | Collections.synchronizedMap | ConcurrentHashMap |
|------|---------|-----------|----------------------------|-------------------|
| **Thread-safe** | ❌ no | ✅ yes | ✅ yes | ✅ yes |
| **Locking** | none | synchronized | synchronized | CAS + synchronized |
| **Lock granularity** | - | whole table | whole map | Node level |
| **Concurrency** | n/a | low | low | high |
| **Iterator** | fail-fast | fail-fast | fail-fast | weakly consistent |
| **null keys/values** | allowed | not allowed | follows the backing map | not allowed |
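The null-handling row is easy to verify: `ConcurrentHashMap` rejects both null keys and null values with a `NullPointerException` (the very first check in `putVal`):

```java
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        try {
            map.put(null, "v");          // null key
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
        try {
            map.put("k", null);          // null value
        } catch (NullPointerException e) {
            System.out.println("null value rejected");
        }
    }
}
```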
---
#### **Details**
**1. Hashtable (not recommended)**
```java
// Hashtable's put method locks the entire table
public synchronized V put(K key, V value) {
    // ...
}
```
**Problems**
- **Coarse lock granularity**: the whole table is locked
- **Low concurrency**: only one thread can operate at a time
---
**2. Collections.synchronizedMap()**
```java
Map<String, String> map = Collections.synchronizedMap(new HashMap<>());
```
**How it works**
```java
// Every method locks on an internal mutex (a plain Object)
public V put(K key, V value) {
    synchronized (mutex) { // locks the whole Map
        return m.put(key, value);
    }
}
```
**Problems**
- **Coarse lock granularity**: the whole Map is locked
- **Iteration requires manual locking**
```java
Map<String, String> map = Collections.synchronizedMap(new HashMap<>());
// Iteration must be wrapped in synchronized on the map itself
synchronized (map) {
    for (Map.Entry<String, String> entry : map.entrySet()) {
        // ...
    }
}
```
---
**3. ConcurrentHashMap (recommended)**
```java
Map<String, String> map = new ConcurrentHashMap<>();
```
**Advantages**
- **Fine-grained locking**: Node level
- **High concurrency**: theoretically unbounded
- **No locking needed for iteration**: weakly consistent iterators that never throw ConcurrentModificationException
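The weakly consistent iterator can be demonstrated directly: mutating the map mid-iteration never throws `ConcurrentModificationException`, where a plain `HashMap` would fail fast:

```java
import java.util.concurrent.ConcurrentHashMap;

public class WeakIterationDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        for (String key : map.keySet()) {
            map.put("c", 3); // safe: no ConcurrentModificationException
        }
        System.out.println(map.size()); // 3
    }
}
```

Note the trade-off: the iterator may or may not reflect entries added after it was created, which is exactly the "weak consistency" named above.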
---
### 5. Real-World Usage
#### **Scenario 1: Local cache**
```java
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.stereotype.Component;

@Component
public class LocalCache {
    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    public void put(String key, Object value) {
        cache.put(key, value);
    }

    public Object get(String key) {
        return cache.get(key);
    }

    public void remove(String key) {
        cache.remove(key);
    }
}
```
---
#### **Scenario 2: Counters**
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import org.springframework.stereotype.Component;

@Component
public class CounterService {
    private final ConcurrentHashMap<String, LongAdder> counters = new ConcurrentHashMap<>();

    public void increment(String key) {
        counters.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    public long get(String key) {
        LongAdder adder = counters.get(key);
        return adder != null ? adder.sum() : 0;
    }
}
```
---
#### **Scenario 3: Deduplication**
```java
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.stereotype.Component;

@Component
public class DeduplicationService {
    private final ConcurrentHashMap<String, Boolean> dedupTable = new ConcurrentHashMap<>();

    // putIfAbsent returns the previous value: non-null means the id was already seen
    public boolean isDuplicate(String id) {
        return dedupTable.putIfAbsent(id, true) != null;
    }
}
```
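The dedup pattern hinges on `putIfAbsent` returning the previous mapping atomically (`null` on the first insert); a quick check of that contract:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Boolean> seen = new ConcurrentHashMap<>();
        System.out.println(seen.putIfAbsent("id-1", true)); // null: first occurrence
        System.out.println(seen.putIfAbsent("id-1", true)); // true: duplicate detected
    }
}
```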
---
### 6. Alibaba P7 Bonus Points
**Deep understanding**
- Understand `ConcurrentHashMap`'s resize mechanism (multiple threads cooperate on the transfer)
- Understand how `LongAdder` works (a Cell array plus CAS)
- Understand `ConcurrentHashMap`'s weakly consistent iterators
**Hands-on experience**
- Have used `ConcurrentHashMap` to solve real concurrency problems
- Have tuned `ConcurrentHashMap` performance
- Have debugged `ConcurrentHashMap`-related production incidents
**Architecture skills**
- Can pick the right Map implementation for the workload
- Can design high-performance concurrent data structures
- Have designed distributed caches
**Technology selection**
- Know `ConcurrentSkipListMap` (skip-list based, sorted)
- Know Guava's cache (`LocalCache`) and Caffeine
- Can choose between a local cache and a distributed cache per scenario
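For the sorted alternative mentioned above, `ConcurrentSkipListMap` keeps keys ordered while remaining thread-safe; a minimal example:

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListMapDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        System.out.println(map.firstKey());          // 1: keys stay sorted
        System.out.println(map.headMap(3).keySet()); // [1, 2]: cheap range views
    }
}
```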