Introduction
There are plenty of approaches to building a distributed ID generation service, and most of them are based on Twitter's snowflake. Meituan's Leaf goes further and combines snowflake with the segment mode in a single service. The catch is that Meituan's open-source implementation is in Java, while our team works mostly in Golang. I searched around for a Golang version of the Leaf service, found no suitable repository, and so ported the Java version to Go.
Contents

- Preparation
- Problems encountered during the port
- Adding etcd support
- Summary
1. Preparation
Meituan's technical write-up explains the design and optimization of both schemes. The key technical points are the double buffer in the DB (segment) mode and dynamically adjusting the segment step; segment mode also puts high demands on DB availability. In snowflake mode, Meituan maintains the workerID through ZooKeeper to keep the service highly available, and has to handle the fact that a server clock rollback can produce duplicate IDs.
On the Java side, the pieces involved are the SegmentIDGenImpl, SnowflakeZookeeperHolder, and SnowflakeIDGenImpl classes, with Spring Boot, MyBatis, and Curator as the main dependencies.
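Leaf-snowflake keeps the original snowflake bit layout (41-bit timestamp, 10-bit workerID, 12-bit sequence). As a reference point, here is a minimal sketch in Go of how such an ID is composed; the constant and function names, and the epoch value, are illustrative and not taken from the project:

```go
package main

import (
	"fmt"
	"time"
)

const (
	workerIDBits   = 10
	sequenceBits   = 12
	timestampShift = workerIDBits + sequenceBits // timestamp occupies the high bits
	maxWorkerID    = -1 ^ (-1 << workerIDBits)   // 1023
	maxSequence    = -1 ^ (-1 << sequenceBits)   // 4095

	// twepoch is a custom epoch in milliseconds (Twitter's original value);
	// the real service makes this configurable.
	twepoch = int64(1288834974657)
)

// makeID packs timestamp, workerID and sequence into one 64-bit ID.
func makeID(ts, workerID, seq int64) int64 {
	return ((ts - twepoch) << timestampShift) | (workerID << sequenceBits) | seq
}

func main() {
	id := makeID(time.Now().UnixMilli(), 1, 0)
	fmt.Printf("id=%d (max workerID %d, max sequence %d)\n", id, maxWorkerID, maxSequence)
}
```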
2. Problems encountered during the port
Two pieces of the Java implementation needed direct Go counterparts. The in-segment counter is advanced with AtomicLong's getAndIncrement(), which maps to the sync/atomic package in Go, and ZooKeeper access goes through CuratorFramework, which has to be replaced with a native Go ZooKeeper client.
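As a concrete illustration of the first mapping, here is a minimal sketch (type and method names are mine, not the port's actual code) of handing out one value from a segment with sync/atomic, mirroring Java's getAndIncrement():

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type segment struct {
	value int64 // next value to hand out, advanced atomically
	max   int64 // upper bound (exclusive) of this segment
}

// nextID reserves one value from the segment, or reports that it is exhausted.
func (s *segment) nextID() (int64, bool) {
	// AddInt64 returns the new value, so the value we "got" is new-1,
	// which mirrors Java's getAndIncrement() returning the old value.
	v := atomic.AddInt64(&s.value, 1) - 1
	if v >= s.max {
		return 0, false
	}
	return v, true
}

func main() {
	s := &segment{value: 1000, max: 2000}
	id, ok := s.nextID()
	fmt.Println(id, ok)
}
```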
The Go ZooKeeper client worked fine at first, but testing showed a problem with the node names: ZooKeeper kept appending an auto-incrementing number to them. Having used ZooKeeper very little, I had to read up on it again and found that znodes have the following creation modes:
| Enum Constant | Description |
| --- | --- |
| CONTAINER | The znode will be a container node. |
| EPHEMERAL | The znode will be deleted upon the client's disconnect. |
| EPHEMERAL_SEQUENTIAL | The znode will be deleted upon the client's disconnect, and its name will be appended with a monotonically increasing number. |
| PERSISTENT | The znode will not be automatically deleted upon client's disconnect. |
| PERSISTENT_SEQUENTIAL | The znode will not be automatically deleted upon client's disconnect, and its name will be appended with a monotonically increasing number. |
| PERSISTENT_SEQUENTIAL_WITH_TTL | The znode will not be automatically deleted upon client's disconnect, and its name will be appended with a monotonically increasing number; it may be deleted if it is not modified within its TTL and has no children. |
| PERSISTENT_WITH_TTL | The znode will not be automatically deleted upon client's disconnect; it may be deleted if it is not modified within its TTL and has no children. |
The Go ZooKeeper library, on the other hand, only defines three create flags:
```go
const (
	FlagEphemeral = 1 // node is removed when the session ends (EPHEMERAL)
	FlagSequence  = 2 // a monotonically increasing suffix is appended to the name (…_SEQUENTIAL)
	FlagTTL       = 4 // node may be removed after its TTL expires (…_WITH_TTL)
)
```
The flag definitions carry no comments in the library, so it took quite a while to work out how they differ before settling on the correct usage.
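To make the difference concrete, here is a small sketch using the go-zookeeper/zk client (assumed to be the library in question, since its flag constants match the ones above): flags = 0 corresponds to CreateMode.PERSISTENT and keeps the name exactly as given, while FlagEphemeral|FlagSequence corresponds to EPHEMERAL_SEQUENTIAL and is what produces the auto-numbered names. Paths and data are illustrative only:

```go
package main

import (
	"fmt"
	"time"

	"github.com/go-zookeeper/zk"
)

func main() {
	conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	acl := zk.WorldACL(zk.PermAll)

	// Persistent node, name used exactly as given (CreateMode.PERSISTENT).
	p1, err := conn.Create("/leaf-demo", []byte("meta"), 0, acl)
	fmt.Println(p1, err)

	// Ephemeral + sequential (CreateMode.EPHEMERAL_SEQUENTIAL): ZooKeeper
	// appends a monotonically increasing suffix, e.g. "/leaf-demo-seq-0000000007".
	p2, err := conn.Create("/leaf-demo-seq-", []byte("meta"), zk.FlagEphemeral|zk.FlagSequence, acl)
	fmt.Println(p2, err)
}
```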
(As Meituan's article summarizes the segment mode: IDs trend upward in the low bits, little of the ID range is wasted, and it can tolerate MySQL being unavailable for a short time.)
The Java version only offers the ZooKeeper-based way of maintaining the workerID. For a small deployment, bringing in ZooKeeper may be overkill, so the port adds a configuration switch to turn ZK mode on or off; a small deployment then only needs the workerID configured by hand, roughly as sketched below.
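The shape of that switch, as a hypothetical sketch (the option and function names below are invented for illustration and may not match the port's actual configuration keys):

```go
package main

import "fmt"

type snowflakeConfig struct {
	EnableZK bool  // when true, workerID is maintained via ZooKeeper/etcd
	WorkerID int64 // used directly when EnableZK is false (small deployments)
}

// resolveWorkerID picks the workerID either from local config or from the
// coordination service, depending on the switch.
func resolveWorkerID(cfg snowflakeConfig) (int64, error) {
	if !cfg.EnableZK {
		// Small deployment: trust the locally configured workerID.
		return cfg.WorkerID, nil
	}
	// Otherwise register with the coordination service and let it
	// assign / recover the workerID.
	return registerWithCoordinator()
}

func registerWithCoordinator() (int64, error) {
	// Placeholder for the ZooKeeper/etcd registration flow.
	return 0, fmt.Errorf("not implemented in this sketch")
}

func main() {
	id, err := resolveWorkerID(snowflakeConfig{EnableZK: false, WorkerID: 3})
	fmt.Println(id, err)
}
```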
3. Adding etcd support
The Java version of Leaf only implements workerID maintenance via ZooKeeper. Since the team's stack is Go, support for etcd was added as well. For local development, an etcd cluster was set up by starting three nodes with docker-compose.

The docker-compose.yml looks like this:
```yaml
version: '3'

networks:
  cluster_net:
    driver: ${NETWORKS_DRIVER}
    ipam:
      driver: default
      config:
        - subnet: 10.0.75.1/24

volumes:
  etcd1:
    driver: ${VOLUMES_DRIVER}
  etcd2:
    driver: ${VOLUMES_DRIVER}
  etcd3:
    driver: ${VOLUMES_DRIVER}

services:
  etcd1:
    image: quay.io/coreos/etcd:v3.5.0-alpha.0
    volumes:
      - "${DATA_PATH_HOST}/etcd/node1:/etcd-data"
    expose:
      - 2379
      - 2380
    ports:
      - "${ETCD1_API_PORT}:2379"
      - "${ETCD1_ADMIN_PORT}:2380"
    networks:
      cluster_net:
        ipv4_address: 10.0.75.100
    environment:
      - ETCDCTL_API=3
    command:
      - /usr/local/bin/etcd
      - --data-dir=/etcd-data
      - --name
      - node1
      - --initial-advertise-peer-urls
      - http://10.0.75.100:2380
      - --listen-peer-urls
      - http://0.0.0.0:2380
      - --advertise-client-urls
      - http://10.0.75.100:2379
      - --listen-client-urls
      - http://0.0.0.0:2379
      - --initial-cluster
      - node1=http://10.0.75.100:2380,node2=http://10.0.75.101:2380,node3=http://10.0.75.102:2380
      - --initial-cluster-state
      - new
      - --initial-cluster-token
      - docker-etcd
  etcd2:
    image: quay.io/coreos/etcd:v3.5.0-alpha.0
    volumes:
      - "${DATA_PATH_HOST}/etcd/node2:/etcd-data"
    expose:
      - 2379
      - 2380
    ports:
      - "${ETCD2_API_PORT}:2379"
      - "${ETCD2_ADMIN_PORT}:2380"
    networks:
      cluster_net:
        ipv4_address: 10.0.75.101
    environment:
      - ETCDCTL_API=3
    command:
      - /usr/local/bin/etcd
      - --data-dir=/etcd-data
      - --name
      - node2
      - --initial-advertise-peer-urls
      - http://10.0.75.101:2380
      - --listen-peer-urls
      - http://0.0.0.0:2380
      - --advertise-client-urls
      - http://10.0.75.101:2379
      - --listen-client-urls
      - http://0.0.0.0:2379
      - --initial-cluster
      - node1=http://10.0.75.100:2380,node2=http://10.0.75.101:2380,node3=http://10.0.75.102:2380
      - --initial-cluster-state
      - new
      - --initial-cluster-token
      - docker-etcd
  etcd3:
    image: quay.io/coreos/etcd:v3.5.0-alpha.0
    volumes:
      - "${DATA_PATH_HOST}/etcd/node3:/etcd-data"
    expose:
      - 2379
      - 2380
    ports:
      - "${ETCD3_API_PORT}:2379"
      - "${ETCD3_ADMIN_PORT}:2380"
    networks:
      cluster_net:
        ipv4_address: 10.0.75.102
    environment:
      - ETCDCTL_API=3
    command:
      - /usr/local/bin/etcd
      - --data-dir=/etcd-data
      - --name
      - node3
      - --initial-advertise-peer-urls
      - http://10.0.75.102:2380
      - --listen-peer-urls
      - http://0.0.0.0:2380
      - --advertise-client-urls
      - http://10.0.75.102:2379
      - --listen-client-urls
      - http://0.0.0.0:2379
      - --initial-cluster
      - node1=http://10.0.75.100:2380,node2=http://10.0.75.101:2380,node3=http://10.0.75.102:2380
      - --initial-cluster-state
      - new
      - --initial-cluster-token
      - docker-etcd
```
The code logic is essentially the same as the ZooKeeper implementation; only the client connection handling differs.
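As an illustration of what the etcd side of that logic can look like, here is a minimal sketch using the official go.etcd.io/etcd/client/v3 package; the key layout and value format are assumptions modeled on the ZooKeeper holder, not necessarily the port's exact code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to the three-node cluster from the docker-compose file above.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"10.0.75.100:2379", "10.0.75.101:2379", "10.0.75.102:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Key layout mirrors the ZooKeeper holder: one node per ip:port, with the
	// value carrying the last reported timestamp for clock-rollback checks.
	key := "/leaf/snowflake/forever/192.168.1.10:8080"
	value := fmt.Sprintf(`{"ip":"192.168.1.10","port":"8080","timestamp":%d}`, time.Now().UnixMilli())

	if _, err := cli.Put(ctx, key, value); err != nil {
		panic(err)
	}

	resp, err := cli.Get(ctx, key)
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s -> %s\n", kv.Key, kv.Value)
	}
}
```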
4. Summary
The preparation for the port was not thorough enough, and I wasn't familiar with the libraries involved. Going forward I need to get better acquainted with the common Java components and read through the source of the relevant Go libraries, so that I can quickly work out roughly what a given method or definition does.
Below is a screen recording of the service running, to give an idea of the result.
Related links for the project: