Standalone deployment
I previously published a standalone deployment tutorial, and cluster deployment is similar; this time we will orchestrate the deployment with docker-compose.

http://www.php20.cn/article/sw/z/317 (standalone installation)
Building the Docker image
Since CentOS 8 has reached end of life, we will use Ubuntu this time. Create a new Dockerfile:
```dockerfile
FROM ubuntu
ARG ZK_VERSION=3.8.0
WORKDIR /zk/
RUN apt-get update
RUN apt-get install curl -y
RUN apt-get install openjdk-8-jdk -y
# Download the official binary release and move its contents into /zk/
RUN curl -o zookeeper.tar.gz https://downloads.apache.org/zookeeper/zookeeper-${ZK_VERSION}/apache-zookeeper-${ZK_VERSION}-bin.tar.gz
RUN tar -zvxf zookeeper.tar.gz && \cp -rf apache-zookeeper-${ZK_VERSION}-bin/* /zk/
# Start from the sample config; docker-compose will mount the real one later
RUN cp /zk/conf/zoo_sample.cfg /zk/conf/zoo.cfg
RUN rm -rf zookeeper.tar.gz apache-zookeeper-${ZK_VERSION}-bin/
CMD ./bin/zkServer.sh start-foreground
```
Build the image:
```bash
docker build -t zk-test:v1 -f ./zkList/dockerfile/Dockerfile ./zkList/dockerfile
```
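Because the Dockerfile declares ZK_VERSION as a build argument, upgrading ZooKeeper only requires a rebuild. A minimal sketch (3.8.4 is just an example version; check that the release still exists under downloads.apache.org before building):

```bash
# Override the default ZooKeeper version at build time
docker build --build-arg ZK_VERSION=3.8.4 -t zk-test:v2 -f ./zkList/dockerfile/Dockerfile ./zkList/dockerfile
```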
Start it:
```bash
docker run -it zk-test:v1
```
You should see the standalone instance start successfully.
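To check the standalone container beyond its startup log, you can publish the client port and connect with the bundled CLI. A quick sketch (the container name zk-single is an arbitrary choice, and it assumes port 2181 is free on the host):

```bash
# Run detached with the client port published on the host
docker run -d --name zk-single -p 2181:2181 zk-test:v1
# Open the bundled CLI against the local server; `ls /` should list the root znodes
docker exec -it zk-single ./bin/zkCli.sh -server 127.0.0.1:2181
# Clean up when done
docker rm -f zk-single
```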
Orchestrating the containers with docker-compose
Directory structure
```
tioncico@xianshikedeMBP zkStudy % tree
.
└── zkList
    ├── docker-compose.yml
    ├── dockerfile
    │   └── Dockerfile
    └── service
        ├── zk01
        │   ├── data
        │   │   └── myid
        │   └── zoo.cfg
        ├── zk02
        │   ├── data
        │   │   └── myid
        │   └── zoo.cfg
        ├── zk03
        │   ├── data
        │   │   └── myid
        │   └── zoo.cfg
        ├── zk04
        │   ├── data
        │   │   └── myid
        │   └── zoo.cfg
        └── zk05
            ├── data
            │   └── myid
            └── zoo.cfg
```
Writing docker-compose.yml
```yaml
version: "3"
services:
  zk01:
    image: zk-test:v1
    ports:
      - "10001:2181"
    volumes:
      - ${SOURCE_DIR}/service/zk01/data:/zk/data/
      - ${SOURCE_DIR}/service/zk01/data/myid:/zk/data/myid
      - ${SOURCE_DIR}/service/zk01/zoo.cfg:/zk/conf/zoo.cfg
    restart: always
    networks:
      - default
    environment:
      TZ: "Asia/Shanghai"
    command: [./bin/zkServer.sh, start-foreground]
  zk02:
    image: zk-test:v1
    ports:
      - "10002:2181"
    volumes:
      - ${SOURCE_DIR}/service/zk02/data:/zk/data/
      - ${SOURCE_DIR}/service/zk02/data/myid:/zk/data/myid
      - ${SOURCE_DIR}/service/zk02/zoo.cfg:/zk/conf/zoo.cfg
    restart: always
    networks:
      - default
    environment:
      TZ: "Asia/Shanghai"
    command: [./bin/zkServer.sh, start-foreground]
  zk03:
    image: zk-test:v1
    ports:
      - "10003:2181"
    volumes:
      - ${SOURCE_DIR}/service/zk03/data:/zk/data/
      - ${SOURCE_DIR}/service/zk03/data/myid:/zk/data/myid
      - ${SOURCE_DIR}/service/zk03/zoo.cfg:/zk/conf/zoo.cfg
    restart: always
    networks:
      - default
    environment:
      TZ: "Asia/Shanghai"
    command: [./bin/zkServer.sh, start-foreground]
  zk04:
    image: zk-test:v1
    ports:
      - "10004:2181"
    volumes:
      - ${SOURCE_DIR}/service/zk04/data:/zk/data/
      - ${SOURCE_DIR}/service/zk04/data/myid:/zk/data/myid
      - ${SOURCE_DIR}/service/zk04/zoo.cfg:/zk/conf/zoo.cfg
    restart: always
    networks:
      - default
    environment:
      TZ: "Asia/Shanghai"
    command: [./bin/zkServer.sh, start-foreground]
  zk05:
    image: zk-test:v1
    ports:
      - "10005:2181"
    volumes:
      - ${SOURCE_DIR}/service/zk05/data:/zk/data/
      - ${SOURCE_DIR}/service/zk05/data/myid:/zk/data/myid
      - ${SOURCE_DIR}/service/zk05/zoo.cfg:/zk/conf/zoo.cfg
    restart: always
    networks:
      - default
    environment:
      TZ: "Asia/Shanghai"
    command: [./bin/zkServer.sh, start-foreground]
networks:
  default:
```
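All five services share the compose default network, so each container can resolve the others by service name (zk01 through zk05); those names are exactly what the server.N entries in zoo.cfg below refer to. Only the client port 2181 is published to the host (as 10001-10005); the quorum ports 2888/3888 stay internal to the network.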
Note: the ${SOURCE_DIR} variable used above is defined in the .env file:
```
tioncico@xianshikedeMBP zkList % cat .env
SOURCE_DIR=./
```
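docker-compose automatically loads a .env file from the project directory (the directory you run it from), so no extra flag is needed; the variable is substituted into the volume paths when the file is parsed.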
Writing zoo.cfg
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/zk/data/
# the port at which the clients will connect
clientPort=2181
# Cluster member list: server.A=B:C:D
# A = server id, B = IP address or hostname,
# C = quorum (peer communication) port, D = leader-election port
server.1=zk01:2888:3888
server.2=zk02:2888:3888
server.3=zk03:2888:3888
server.4=zk04:2888:3888
server.5=zk05:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
```
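Two quick sanity checks on these values: with five servers the quorum size is three, so the ensemble keeps serving with up to two nodes down; and initLimit=10 ticks at tickTime=2000 ms gives followers 20 seconds to finish their initial sync with the leader, while syncLimit=5 allows 10 seconds per request/acknowledgement round trip.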
Writing the myid files
Each myid file contains nothing but its server's id, 1 through 5, matching the server.N lines in zoo.cfg.
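If you don't want to create the five files by hand, a short loop does it. A sketch, assuming you run it from the zkList directory shown in the tree above:

```bash
# Generate service/zk01..zk05/data/myid, each holding just its server id
for i in 1 2 3 4 5; do
    mkdir -p "service/zk0${i}/data"
    echo "${i}" > "service/zk0${i}/data/myid"
done
```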
Startup
```bash
docker-compose up
```
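Once the containers are up, each node's quorum role can be checked with the bundled status script. A sketch (the container name assumes the Compose v1 naming scheme; adjust to whatever `docker ps` shows):

```bash
# Run in the background, then ask one node for its role
docker-compose up -d
docker exec -it zklist_zk01_1 ./bin/zkServer.sh status
# "Mode: leader" or "Mode: follower" confirms the node joined the ensemble
```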
Verifying the cluster
Open any n containers (n > 2), each in its own shell window:
```bash
docker exec -it zklist_zk01_1 ./bin/zkCli.sh
docker exec -it zklist_zk02_1 ./bin/zkCli.sh
docker exec -it zklist_zk03_1 ./bin/zkCli.sh
```
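Note that these container names follow the Compose v1 convention (project_service_1, with underscores); Compose v2 joins the parts with hyphens instead (e.g. zklist-zk01-1). Check `docker ps` if the names above don't match.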
In one of them, create a /test node:
```
create /test 123456
```
Query it from the other two:
```
get /test
```
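If replication works, both of the other sessions print the value written on the first node; the zkCli transcript looks roughly like this:

```
[zk: localhost:2181(CONNECTED) 0] get /test
123456
```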
The cluster has been set up successfully.