Preface

This article was written as part of integrating ELK with Kafka. If you only need to install Kafka, this post works on its own; if you want to see how ELK integrates with Kafka, you first need ELK installed, which is covered in the following article:

[ELK] ElasticSearch + Logstash + Kibana setup and Spring Boot integration: https://blog.csdn.net/qq_38377525/article/details/124449938

Installation

Installing ZooKeeper

1. Download the package

The package can be downloaded from http://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/. I am using ZooKeeper 3.4.9 here.
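If you prefer to download directly on the server, a wget against the same archive path should also work (the file name below assumes the 3.4.9 release layout on archive.apache.org):

[root@localhost home]# wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz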

2. Copy it to the server and extract it to the target directory

[root@localhost home]# tar -zxvf zookeeper-3.4.9.tar.gz -C /usr/local

3. Change into the installation directory

[root@localhost home]# cd /usr/local/zookeeper-3.4.9/

4. Create a config file from the sample

[root@localhost zookeeper-3.4.9]# cp conf/zoo_sample.cfg conf/zoo.cfg
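For a standalone node the defaults in zoo_sample.cfg are sufficient; the entries that matter look roughly like the following (these are the stock sample values; dataDir points at /tmp by default, so switching it to a persistent directory is a reasonable adjustment):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181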

5. Start ZooKeeper

[root@localhost zookeeper-3.4.9]# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

6. Check the process

[root@localhost zookeeper-3.4.9]# ps -ef|grep zoo
root      29957      1  5 19:40 pts/3    00:00:00 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/local/zookeeper-3.4.9/bin/../build/classes:/usr/local/zookeeper-3.4.9/bin/../build/lib/*.jar:/usr/local/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/local/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/usr/local/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/usr/local/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/usr/local/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/usr/local/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/usr/local/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/usr/local/zookeeper-3.4.9/bin/../conf: -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /usr/local/zookeeper-3.4.9/bin/../conf/zoo.cfg
root      29985  11939  0 19:40 pts/3    00:00:00 grep --color=auto zoo

Standalone ZooKeeper is now installed and running.
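If you want a check beyond the process list, the distribution also ships a status subcommand and a CLI client; for this setup the status command should report Mode: standalone (a quick sanity check, assuming the default client port 2181):

[root@localhost zookeeper-3.4.9]# bin/zkServer.sh status
[root@localhost zookeeper-3.4.9]# bin/zkCli.sh -server 127.0.0.1:2181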

Installing Kafka

1. Download the package

[root@localhost home]# wget http://archive.apache.org/dist/kafka/3.0.1/kafka_2.13-3.0.1.tgz

This step may take a while, since the package is downloaded directly from the overseas Apache archive.

2. Extract it to the target directory

[root@localhost home]# tar -zxvf kafka_2.13-3.0.1.tgz -C /usr/local/

3. Enter the installation directory and create a symlink

[root@localhost home]# cd /usr/local/
[root@localhost local]# ln -s kafka_2.13-3.0.1 kafka

4. Edit the configuration file and adjust a few settings

[root@localhost kafka]# vim config/server.properties
broker.id=1   # unique ID of this broker within the cluster
listeners=PLAINTEXT://127.0.0.1:9092   # listener address; use this machine's IP
log.retention.hours=24   # keep log segments for 24 hours
zookeeper.connect=127.0.0.1:2181   # ZooKeeper address (comma-separated list for a cluster, see the sketch below)
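For reference, when this is later extended to a multi-broker cluster, each broker gets a unique broker.id and its own listener address, while zookeeper.connect lists the whole ZooKeeper ensemble. A rough sketch for the first broker, using hypothetical addresses 192.168.1.101-103:

broker.id=1
listeners=PLAINTEXT://192.168.1.101:9092
log.retention.hours=24
zookeeper.connect=192.168.1.101:2181,192.168.1.102:2181,192.168.1.103:2181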

5. Start Kafka in the background

[root@localhost kafka]# bin/kafka-server-start.sh -daemon config/server.properties

6. Check the log to confirm the broker started

[root@localhost kafka]# tail -f logs/server.log 
[2022-04-27 21:49:38,049] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 9 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2022-04-27 21:49:38,070] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2022-04-27 21:49:38,091] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2022-04-27 21:49:38,093] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2022-04-27 21:49:38,113] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2022-04-27 21:49:38,156] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2022-04-27 21:49:38,182] INFO [SocketServer brokerId=1] Started processors for 1 acceptors (kafka.network.SocketServer)
[2022-04-27 21:49:38,223] INFO Kafka version : 2.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2022-04-27 21:49:38,223] INFO Kafka commitId : 21234bee31165527 (org.apache.kafka.common.utils.AppInfoParser)
[2022-04-27 21:49:38,225] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

Or check the process:

[root@localhost kafka]# ps -ef|grep kafka
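As a final smoke test, the scripts bundled with Kafka can create a topic and push a message end to end (a minimal sketch, assuming the broker is reachable at 127.0.0.1:9092; the topic name test is arbitrary):

[root@localhost kafka]# bin/kafka-topics.sh --create --topic test --bootstrap-server 127.0.0.1:9092
[root@localhost kafka]# bin/kafka-topics.sh --list --bootstrap-server 127.0.0.1:9092
[root@localhost kafka]# bin/kafka-console-producer.sh --topic test --bootstrap-server 127.0.0.1:9092
[root@localhost kafka]# bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server 127.0.0.1:9092

Anything typed into the producer console should show up in the consumer console.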

 
