Extract kafka_2.11-0.10.0.1.tgz

Extract the archive:
tar -zxvf kafka_2.11-0.10.0.1.tgz
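
The tarball unpacks into a versioned directory named kafka_2.11-0.10.0.1, while the rest of this guide refers to /usr/local/soft/kafka, so the extracted directory should be moved (or renamed) to that path. The source path below is an assumption matching a typical layout:

# assumes the archive was extracted under /usr/local/soft; adjust to your layout
mv /usr/local/soft/kafka_2.11-0.10.0.1 /usr/local/soft/kafka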

Configure environment variables

vi ~/.bashrc
export KAFKA_HOME=/usr/local/soft/kafka
export PATH=${PATH}:${KAFKA_HOME}/bin
scp ~/.bashrc slave1:~/.bashrc
scp ~/.bashrc slave2:~/.bashrc
source ~/.bashrc
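
As an optional sanity check, the variables can be verified on every node after sourcing; a minimal sketch:

echo $KAFKA_HOME                                  # expect /usr/local/soft/kafka
ssh slave1 'source ~/.bashrc && echo $KAFKA_HOME'
ssh slave2 'source ~/.bashrc && echo $KAFKA_HOME'
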
Configure on master:
/usr/local/soft/kafka/config/server.properties
broker.id=100
listeners=PLAINTEXT://:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/soft/kafka-logs/log
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=master:2181,slave1:2181,slave2:2181
zookeeper.connection.timeout.ms=6000
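
Before copying the installation to the slaves, it can help to double-check the settings that matter for the cluster: broker.id must be unique per node, and zookeeper.connect must be identical everywhere. An optional verification:

grep -E '^(broker.id|listeners|log.dirs|zookeeper.connect)=' /usr/local/soft/kafka/config/server.properties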

Distribute kafka to slave1 and slave2

scp -r ./kafka slave1:/usr/local/soft/
scp -r ./kafka slave2:/usr/local/soft/
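Optionally confirm the copy reached both slaves before editing their configs:

ssh slave1 'ls /usr/local/soft/kafka/config/server.properties'
ssh slave2 'ls /usr/local/soft/kafka/config/server.properties'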

Modify on slave1 (only broker.id changes):

/usr/local/soft/kafka/config/server.properties
broker.id=101
listeners=PLAINTEXT://:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/soft/kafka-logs/log
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=master:2181,slave1:2181,slave2:2181
zookeeper.connection.timeout.ms=6000

Modify on slave2 (only broker.id changes):

/usr/local/soft/kafka/config/server.properties
broker.id=102
listeners=PLAINTEXT://:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/usr/local/soft/kafka-logs/log
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=master:2181,slave1:2181,slave2:2181
zookeeper.connection.timeout.ms=6000
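
Since only broker.id differs between the three nodes, the two slave files can also be patched from master with sed over ssh instead of being edited by hand; a sketch, assuming the copied files still contain broker.id=100:

ssh slave1 "sed -i 's/^broker.id=100/broker.id=101/' /usr/local/soft/kafka/config/server.properties"
ssh slave2 "sed -i 's/^broker.id=100/broker.id=102/' /usr/local/soft/kafka/config/server.properties"
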
Create the log directory on master, slave1, and slave2:

mkdir -p /usr/local/soft/kafka-logs/log
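
If passwordless ssh is set up between the nodes, the directory can also be created on all three hosts from master in one loop, for example:

for host in master slave1 slave2; do ssh $host 'mkdir -p /usr/local/soft/kafka-logs/log'; done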

Start ZooKeeper on master, slave1, and slave2:

zkServer.sh start
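
Before starting Kafka, confirm that the ZooKeeper ensemble has actually formed; run this on each node and expect one leader and two followers:

zkServer.sh status   # Mode: leader on one node, Mode: follower on the others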

Start the Kafka server on master, slave1, and slave2:

bin/kafka-server-start.sh config/server.properties
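
The command above runs the broker in the foreground and assumes the working directory is /usr/local/soft/kafka. To keep the brokers running after the shell exits, the bundled -daemon option can be used, and the cluster can then be smoke-tested with a throw-away topic (the topic name test below is chosen purely for illustration):

cd /usr/local/soft/kafka
bin/kafka-server-start.sh -daemon config/server.properties   # repeat on each node
jps                                                          # a Kafka process should be listed

# create a replicated topic and check its placement (0.10 tooling talks to ZooKeeper)
bin/kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --replication-factor 3 --partitions 3 --topic test
bin/kafka-topics.sh --describe --zookeeper master:2181 --topic test

# round trip: type messages into the producer, then read them back with the consumer
bin/kafka-console-producer.sh --broker-list master:9092,slave1:9092,slave2:9092 --topic test
bin/kafka-console-consumer.sh --zookeeper master:2181 --topic test --from-beginning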

Illustration: screenshots of the running processes (images not included here).