Building a Kafka Cluster with docker-compose
Install Docker
Before installing, remove any old versions (skip this step if Docker has never been installed):
sudo apt-get remove docker docker-engine docker.io
First, update the package index and install the dependencies needed to add a new HTTPS repository:
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Import the repository's GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
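Optionally, verify that the key was imported; the last eight characters of Docker's official key fingerprint are 0EBFCD88:
sudo apt-key fingerprint 0EBFCD88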
Add the Docker apt repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install the latest version of Docker:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
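As a quick sanity check (assuming a systemd-based Ubuntu host), make sure the daemon is running and starts on boot, then run the hello-world image:
sudo systemctl enable --now docker
sudo docker run --rm hello-world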
Install docker-compose
On Linux we can download the binary release from GitHub; the latest releases are available at https://github.com/docker/compose/releases.
sudo curl -L "https://github.com/docker/compose/releases/download/v2.2.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Grant execute permission:
sudo chmod +x /usr/local/bin/docker-compose
Test whether the installation succeeded:
docker-compose version
Build the Kafka and ZooKeeper cluster
1. Create a network
Create a bridge network in Docker for the containers to communicate on:
docker network create --driver bridge --subnet 172.23.0.0/25 --gateway 172.23.0.1 kafka_zk
Check that the network was created:
docker network ls
To delete the network (if you need to recreate it):
docker network rm {id}
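To confirm the subnet and gateway match what was requested, you can also inspect the network:
docker network inspect kafka_zk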
2. Create the required directories and files
cd /mnt
mkdir kafka_zk
cd kafka_zk
mkdir zoo1 zoo2 zoo3 kafka1 kafka2 kafka3
Create the data and transaction-log mount directories for zoo1 (repeat for zoo2 and zoo3):
mkdir ./zoo1/data
mkdir ./zoo1/datalog
Create the data and log mount directories for kafka1 (repeat for kafka2 and kafka3):
mkdir ./kafka1/data
mkdir ./kafka1/logs
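Creating all of these directories by hand is repetitive; a minimal shell loop that builds the same layout for all three nodes (run from /mnt/kafka_zk) could look like this:
for n in 1 2 3; do
  mkdir -p zoo$n/data zoo$n/datalog    # ZooKeeper data and transaction-log directories
  mkdir -p kafka$n/data kafka$n/logs   # Kafka message data and server logs
done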
Create the corresponding myid file and zoo.cfg file in each of zoo1, zoo2, and zoo3. The myid file contains only the node's server ID, which must match the server.N entries in zoo.cfg (1 for zoo1, 2 for zoo2, 3 for zoo3):
cd zoo1
vi myid
1
Then create zoo.cfg in the same directory (the same contents can be used on all three nodes):
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
dataLogDir=/datalog
clientPort=2181
server.1=172.23.0.11:2888:3888;2181
server.2=172.23.0.12:2888:3888;2181
server.3=172.23.0.13:2888:3888;2181
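If you prefer to script this step as well, here is a sketch that writes each node's myid and reuses the zoo.cfg created in zoo1 for the other two nodes (run from /mnt/kafka_zk):
for n in 1 2 3; do
  echo $n > zoo$n/myid     # the server ID must match the server.N entries in zoo.cfg
done
for n in 2 3; do
  cp zoo1/zoo.cfg zoo$n/   # the same zoo.cfg can be used on every node
done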
3. Create the docker-compose.yml file
cd /mnt/kafka_zk
vim docker-compose.yml
Contents:
version: '2'
services:
  zoo1:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - "2181:2181"
    volumes:
      - "/home/xxx/kafka_zk/zoo1/data:/data"        # change the path before ":" to your own directory
      - "/home/xxx/kafka_zk/zoo1/datalog:/datalog"  # change the path before ":" to your own directory
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.11
  zoo2:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - "2182:2181"
    volumes:
      - "/home/xxx/kafka_zk/zoo2/data:/data"
      - "/home/xxx/kafka_zk/zoo2/datalog:/datalog"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.12
  zoo3:
    image: wurstmeister/zookeeper
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - "2183:2181"
    volumes:
      - "/home/xxx/kafka_zk/zoo3/data:/data"
      - "/home/xxx/kafka_zk/zoo3/datalog:/datalog"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      default:
        ipv4_address: 172.23.0.13
  broker1:
    image: wurstmeister/kafka:2.13-2.7.0
    restart: always
    hostname: broker1
    container_name: broker1
    ports:
      - "9091:9091"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9091
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.1:9091  ## host machine IP
      KAFKA_ADVERTISED_HOST_NAME: broker1
      KAFKA_ADVERTISED_PORT: 9091
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "/home/xxx/kafka_zk/kafka1/data:/kafka"
      - "/home/xxx/kafka_zk/kafka1/logs:/opt/kafka/logs"
    networks:
      default:
        ipv4_address: 172.23.0.14
  broker2:
    image: wurstmeister/kafka:2.13-2.7.0
    restart: always
    hostname: broker2
    container_name: broker2
    ports:
      - "9092:9092"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.1:9092  ## host machine IP
      KAFKA_ADVERTISED_HOST_NAME: broker2
      KAFKA_ADVERTISED_PORT: 9092
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "/home/xxx/kafka_zk/kafka2/data:/kafka"
      - "/home/xxx/kafka_zk/kafka2/logs:/opt/kafka/logs"
    networks:
      default:
        ipv4_address: 172.23.0.15
  broker3:
    image: wurstmeister/kafka:2.13-2.7.0
    restart: always
    hostname: broker3
    container_name: broker3
    ports:
      - "9093:9093"
    external_links:
      - zoo1
      - zoo2
      - zoo3
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.1.1:9093  ## host machine IP
      KAFKA_ADVERTISED_HOST_NAME: broker3
      KAFKA_ADVERTISED_PORT: 9093
      JMX_PORT: 9988
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - "/home/xxx/kafka_zk/kafka3/data:/kafka"
      - "/home/xxx/kafka_zk/kafka3/logs:/opt/kafka/logs"
    networks:
      default:
        ipv4_address: 172.23.0.16
networks:
  default:
    external:
      name: kafka_zk
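Before starting anything, you can ask Compose to validate and print the merged configuration; YAML indentation mistakes will surface here:
docker-compose config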
4. Start the cluster
Start in the background with -d:
docker-compose up -d
Shut down the cluster:
docker-compose down
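Once the containers are up, a quick way to verify the cluster is to create and describe a test topic from inside one of the broker containers. This is only a sketch: the topic name test and its settings are arbitrary, and because the compose file sets JMX_PORT, we unset it first so the CLI tools do not collide with the broker's JMX port:
docker exec broker1 sh -c "unset JMX_PORT; kafka-topics.sh --create --zookeeper zoo1:2181,zoo2:2181,zoo3:2181 --replication-factor 3 --partitions 3 --topic test"
docker exec broker1 sh -c "unset JMX_PORT; kafka-topics.sh --describe --zookeeper zoo1:2181 --topic test"
A replication factor of 3 with 3 partitions spreads the topic across all three brokers, so the --describe output should list broker IDs 1, 2, and 3 as leaders or replicas.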