# Consistent Hashing

## Questions

1. What is consistent hashing? What problem does it solve?
2. How does consistent hashing work?
3. What are virtual nodes, and why are they needed?
4. How is consistent hashing used in load balancing and distributed caching?
5. What are the pros and cons of consistent hashing?
6. How would you implement consistent hashing in a real project?

---

## Reference Answers
### 1. The Problem with Naive Hashing

#### **Scenario: distributed cache**

Suppose there are 3 cache servers:
```
Server A, Server B, Server C
```

With naive modulo hashing, `hash(key) % N`:
```java
int serverIndex = hash(key) % 3; // 3 servers
```

**Problem: scaling the cluster up or down**

Add one server (Server D):
```
Before: hash(key) % 3
After:  hash(key) % 4

Result: most keys now route to a different server!
- The cache is effectively wiped out
- Load on the database spikes
```

**Example**:
```
With 3 servers:
hash("user:1") % 3 = 1 → Server B
hash("user:2") % 3 = 2 → Server C

With 4 servers (Server D added):
hash("user:1") % 4 = 2 → Server C (changed!)
hash("user:2") % 4 = 3 → Server D (changed!)

Impact: roughly 75% of cached keys are remapped
```
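The 75% figure can be checked with a short sketch (the 3→4 server counts come from the example above; the class and method names here are illustrative). A key keeps its server only when `h % 3 == h % 4`, which holds for 3 of every 12 uniformly distributed hash values:

```java
import java.util.stream.IntStream;

public class ModuloRemapDemo {

    // Fraction of keys whose server changes when the cluster grows from
    // oldN to newN servers under naive `hash(key) % N` routing.
    // Uses 0..samples-1 as stand-ins for uniformly distributed hash values.
    public static double remapFraction(int oldN, int newN, int samples) {
        long moved = IntStream.range(0, samples)
                .filter(h -> h % oldN != h % newN)
                .count();
        return (double) moved / samples;
    }

    public static void main(String[] args) {
        // 3 -> 4 servers: 9 of every 12 hash values change owner.
        System.out.printf("remapped: %.0f%%%n",
                remapFraction(3, 4, 1_200_000) * 100); // prints "remapped: 75%"
    }
}
```

The general rule for modulo hashing: going from N to N+1 servers remaps roughly N/(N+1) of all keys, and it gets worse as the cluster grows.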
---
### 2. How Consistent Hashing Works

#### **Core idea**

Map both servers and data onto a **hash ring**:
- A key walks clockwise to the nearest server
- When servers change, only the neighboring keys are affected

**The hash ring** (0 to 2^32 - 1):
```
      0 / 2^32-1 (wrap point)
            │
  Server A ─┴───→ Server B
      ↑               │
      │   clockwise   ↓
  Server D ←──── Server C
```

---
#### **Algorithm steps**

**Step 1: map servers onto the ring**
```
// server IP → hash value
hash("192.168.1.10") → 1000  → Server A
hash("192.168.1.11") → 5000  → Server B
hash("192.168.1.12") → 10000 → Server C
```

**Step 2: map keys onto the ring**
```
// data key → hash value
hash("user:1") → 2000
hash("user:2") → 6000
hash("user:3") → 15000
```

**Step 3: walk clockwise to find the server**
```
user:1 (2000)  → Server B (5000)   // first server clockwise
user:2 (6000)  → Server C (10000)
user:3 (15000) → Server A (1000, wrapping around the ring)
```

---
#### **Java implementation**

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

public class ConsistentHash<T> {

    private final TreeMap<Long, T> ring = new TreeMap<>();
    private final int virtualNodes; // virtual nodes per physical node

    public ConsistentHash(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    // Add a node as `virtualNodes` points on the ring
    public void addNode(T node) {
        for (int i = 0; i < virtualNodes; i++) {
            String virtualNodeName = node.toString() + "#" + i;
            long hash = hash(virtualNodeName);
            ring.put(hash, node);
        }
    }

    // Remove a node and all of its virtual nodes
    public void removeNode(T node) {
        for (int i = 0; i < virtualNodes; i++) {
            String virtualNodeName = node.toString() + "#" + i;
            long hash = hash(virtualNodeName);
            ring.remove(hash);
        }
    }

    // Route a key to its node
    public T getNode(String key) {
        if (ring.isEmpty()) {
            return null;
        }

        long hash = hash(key);

        // first node clockwise from the key's position
        Map.Entry<Long, T> entry = ring.ceilingEntry(hash);
        if (entry == null) {
            // wrap around to the first node on the ring
            entry = ring.firstEntry();
        }

        return entry.getValue();
    }

    // 32-bit FNV-1a hash with extra avalanche mixing
    private long hash(String key) {
        final long p = 16777619;     // FNV prime
        long hash = 2166136261L;     // FNV offset basis

        for (byte b : key.getBytes(StandardCharsets.UTF_8)) {
            hash = (hash ^ b) * p;
        }

        hash += hash << 13;
        hash ^= hash >> 7;
        hash += hash << 3;
        hash ^= hash >> 17;
        hash += hash << 5;

        return hash & 0xffffffffL;
    }
}
```

**Usage example**:
```java
// build the ring
ConsistentHash<String> consistentHash = new ConsistentHash<>(150);

// add servers
consistentHash.addNode("192.168.1.10"); // Server A
consistentHash.addNode("192.168.1.11"); // Server B
consistentHash.addNode("192.168.1.12"); // Server C

// route a key
String server = consistentHash.getNode("user:1001");
System.out.println("Routed to server: " + server);

// add a new server
consistentHash.addNode("192.168.1.13"); // Server D
// only the keys adjacent to D's virtual nodes are remapped
```
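The "only part of the data is affected" claim can be measured. The sketch below is self-contained — it re-declares a minimal ring with an FNV-1a hash rather than reusing the class above, and the `10.0.0.x` addresses are placeholders — and counts how many keys change owner when a 4th server joins a 3-server ring:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RingScaleOutDemo {

    // 32-bit FNV-1a hash
    static int fnv1a(String s) {
        int h = 0x811c9dc5;
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            h = (h ^ (b & 0xff)) * 0x01000193;
        }
        return h;
    }

    // Build a ring with `vnodes` virtual nodes per server
    static TreeMap<Integer, String> buildRing(List<String> servers, int vnodes) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        for (String s : servers)
            for (int i = 0; i < vnodes; i++)
                ring.put(fnv1a(s + "#" + i), s);
        return ring;
    }

    // First server clockwise from the key's position (wrapping around)
    static String locate(TreeMap<Integer, String> ring, String key) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(fnv1a(key));
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    // Fraction of keys that change owner when a 4th server joins
    public static double movedFraction(int keys) {
        List<String> before = List.of("10.0.0.1", "10.0.0.2", "10.0.0.3");
        List<String> after = new ArrayList<>(before);
        after.add("10.0.0.4");
        TreeMap<Integer, String> oldRing = buildRing(before, 150);
        TreeMap<Integer, String> newRing = buildRing(after, 150);
        int moved = 0;
        for (int i = 0; i < keys; i++)
            if (!locate(oldRing, "key:" + i).equals(locate(newRing, "key:" + i)))
                moved++;
        return (double) moved / keys;
    }

    public static void main(String[] args) {
        // Expect roughly 1/4 of keys to move (the new node's ring share),
        // versus ~75% under `hash % N`.
        System.out.printf("moved: %.1f%%%n", movedFraction(100_000) * 100);
    }
}
```

Note that every moved key lands on the new node: adding a node never shuffles keys between the existing servers, which is exactly the minimal-migration property.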
---
### 3. Virtual Nodes

#### **Problem: data skew**

**Scenario**:
```
Hash ring:
Server A (1000)
Server B (5000)
Server C (10000)

Data distribution:
A: 2000 keys
B: 500 keys
C: 500 keys

Skew: A carries far more load than B and C
```

**Why it happens**:
- With few nodes, positions on the ring are unevenly spaced
- The number of physical servers is limited

---

#### **Solution: virtual nodes**

**Idea**:
Map each physical node to multiple virtual nodes on the ring:
```
Physical node A:
├─ virtual node A#1 → 1000
├─ virtual node A#2 → 3000
├─ virtual node A#3 → 7000
└─ ...

Physical node B:
├─ virtual node B#1 → 2000
├─ virtual node B#2 → 4000
└─ ...

Physical node C:
├─ virtual node C#1 → 5000
└─ ...
```

**Effect** (with a 4th node D added):
```
Data distribution:
A: 1000 keys (25%)
B: 1000 keys (25%)
C: 1000 keys (25%)
D: 1000 keys (25%)

Balance: high
```

**How many virtual nodes**:
- Typical: 100-150 per physical node
- More: better balance, but a larger ring to store and search
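The effect of the virtual-node count on balance can be measured directly. The sketch below (server names and key patterns are illustrative; it uses a plain FNV-1a hash) compares 1 virtual node per server against 150 by reporting the busiest-to-least-busy load ratio across 3 servers:

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class VirtualNodeBalanceDemo {

    // 32-bit FNV-1a hash
    static int fnv1a(String s) {
        int h = 0x811c9dc5;
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            h = (h ^ (b & 0xff)) * 0x01000193;
        }
        return h;
    }

    // Ratio of the busiest to the least busy of 3 servers after
    // distributing `keys` keys on a ring with `vnodes` virtual nodes each.
    // A ratio close to 1.0 means a well-balanced ring.
    public static double imbalance(int vnodes, int keys) {
        TreeMap<Integer, String> ring = new TreeMap<>();
        Map<String, Integer> load = new HashMap<>();
        for (String s : List.of("A", "B", "C")) {
            load.put(s, 0);
            for (int i = 0; i < vnodes; i++) {
                ring.put(fnv1a("server-" + s + "#" + i), s);
            }
        }
        for (int k = 0; k < keys; k++) {
            Map.Entry<Integer, String> e = ring.ceilingEntry(fnv1a("key:" + k));
            if (e == null) e = ring.firstEntry(); // wrap around the ring
            load.merge(e.getValue(), 1, Integer::sum);
        }
        return (double) Collections.max(load.values())
                / Math.max(Collections.min(load.values()), 1);
    }

    public static void main(String[] args) {
        System.out.printf("1 vnode/server:    max/min = %.2f%n", imbalance(1, 90_000));
        System.out.printf("150 vnodes/server: max/min = %.2f%n", imbalance(150, 90_000));
    }
}
```

With 150 virtual nodes per server the ratio typically lands close to 1, while a single point per server can leave one server owning most of the ring.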
---
### 4. Applications of Consistent Hashing

#### **Application 1: distributed cache (client-side Redis sharding)**

(Note: Redis Cluster itself partitions data into 16384 fixed hash slots rather than a hash ring; consistent hashing is typically used for client-side sharding across standalone Redis instances.)

```java
// client-side Redis shard router
public class RedisClusterRouter {

    private final ConsistentHash<JedisPool> consistentHash;

    public RedisClusterRouter(List<JedisPool> pools) {
        this.consistentHash = new ConsistentHash<>(150);
        for (JedisPool pool : pools) {
            consistentHash.addNode(pool);
        }
    }

    public Jedis getJedis(String key) {
        JedisPool pool = consistentHash.getNode(key);
        return pool.getResource();
    }
}
```

**Benefits**:
- Scaling in or out disturbs few keys
- Little data has to be migrated

---

#### **Application 2: load balancing (Nginx)**

```nginx
# consistent hashing on the request URI
# (built-in since Nginx 1.7.2: the `hash` directive with `consistent`)
upstream backend {
    hash $request_uri consistent;

    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
```

**Effect**:
- Identical requests are routed to the same server
- Session affinity without sticky sessions

---

#### **Application 3: database sharding**

```java
// shard (database) router
public class ShardingRouter {

    private final ConsistentHash<String> dbRouter;

    public ShardingRouter(List<String> databases) {
        this.dbRouter = new ConsistentHash<>(100);
        for (String db : databases) {
            dbRouter.addNode(db);
        }
    }

    public String getDatabase(String userId) {
        return dbRouter.getNode(userId);
    }
}
```

---
### 5. Pros and Cons

#### **Pros**

1. **Minimal data migration**:
   - Adding a node affects only the keys adjacent to it on the ring
   - Removing a node affects only the keys that node owned

2. **Good fault tolerance**:
   - When a node fails, its keys fall through to the next node on the ring
   - When it recovers, those keys route back to it

3. **Scalability**:
   - Nodes can be added and removed dynamically
   - Suits large-scale distributed systems

---

#### **Cons**

1. **Data skew**:
   - With few nodes the distribution is uneven
   - Virtual nodes are needed to compensate

2. **Implementation complexity**:
   - More complex than plain modulo hashing
   - The ring structure must be maintained

3. **Memory overhead**:
   - Virtual nodes consume memory
   - The cost grows with the node count

---
### 6. Real-World Experience

#### **Case: scaling out a Redis fleet**

**Scenario**:
- 3 Redis servers, 100k keys each (300k total)
- 2 more Redis servers are added

**Without consistent hashing** (`hash % N`):
```
A key stays put only when hash % 3 == hash % 5 — about 1 in 5
uniformly distributed hashes — so roughly 80% of keys (~240k) must move.
```

**With consistent hashing**:
```
The 2 new nodes take over about 2/5 of the ring, so only about
40% of keys (~120k) move, each straight to one of the new nodes.
```

**Implementation**:
```java
// 1. bring the new nodes online
JedisPool newPool1 = new JedisPool("192.168.1.13");
JedisPool newPool2 = new JedisPool("192.168.1.14");

consistentHash.addNode(newPool1);
consistentHash.addNode(newPool2);

// 2. migrate only the keys whose owner changed
for (String key : allKeys) {
    JedisPool newPool = consistentHash.getNode(key);
    JedisPool oldPool = oldMapping.get(key);

    if (newPool != oldPool) {
        migrateData(key, oldPool, newPool);
    }
}
```

---
### 7. Bonus Points for Senior (Alibaba P7) Candidates

**Deep understanding**:
- Understand the math behind consistent hashing (hash functions, distribution uniformity)
- Understand how the virtual-node count affects balance
- Know alternative schemes such as Rendezvous (HRW) hashing

**Hands-on experience**:
- Have used consistent hashing for database sharding
- Have dealt with data skew and data migration
- Have tuned consistent hashing in production

**Architecture skills**:
- Can design sharded clusters that scale out smoothly
- Can design phased (gray-release) data-migration plans
- Have monitoring and alerting experience around consistent hashing

**Technology selection**:
- Know how systems such as Redis Cluster and Cassandra partition data
- Know how load balancers such as Nginx and HAProxy configure consistent hashing
- Can pick a hashing scheme to match the workload
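The Rendezvous (HRW) hashing mentioned above avoids the ring entirely: each key scores every node and picks the highest score, so removing a node only remaps that node's own keys. A minimal sketch (the hash choice and node addresses are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class RendezvousHashDemo {

    // FNV-1a over "node:key" as the per-(node, key) score
    static int mix(String node, String key) {
        int h = 0x811c9dc5;
        for (byte b : (node + ":" + key).getBytes(StandardCharsets.UTF_8)) {
            h = (h ^ (b & 0xff)) * 0x01000193;
        }
        return h;
    }

    // Highest-random-weight: the key goes to the node with the top score
    public static String pick(List<String> nodes, String key) {
        String best = null;
        long bestScore = Long.MIN_VALUE;
        for (String node : nodes) {
            long score = mix(node, key) & 0xffffffffL; // compare as unsigned
            if (score > bestScore) {
                bestScore = score;
                best = node;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("192.168.1.10", "192.168.1.11", "192.168.1.12");
        System.out.println("user:1001 -> " + pick(nodes, "user:1001"));
    }
}
```

The trade-off versus the ring: lookup is O(N) in the node count instead of O(log V), so HRW suits small-to-medium clusters, while the ring scales better when nodes (and virtual nodes) are numerous.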