一, Introduction to common Java MQs: see the CSDN article "盘点常见MQ:消息队列总览" by Java学术趴

二, Kafka depends on ZooKeeper, so install ZooKeeper first

1) Official download page: Apache Downloads

2) Extract to your install directory, copy /conf/zoo_sample.cfg, rename it to zoo.cfg, and edit the configuration:

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# Customize: point this at a local disk path, ideally under the zookeeper install directory
dataDir=F:/qihh/install/apache-zookeeper-3.7.1-bin/data
# the port at which the clients will connect
clientPort=2181


# Add the following line at the end; without it startup fails with an "Audit is disabled" error
audit.enable=true

3) Double-click /bin/zkServer.cmd to start

 

三, Install Kafka

1) Official download page: Apache Kafka

2) Extract to your install directory and edit the configuration file /config/server.properties:

# A comma separated list of directories under which to store log files
log.dirs=F:/qihh/install/kafka_2.13-3.2.3/kafka-logs

# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

3) Start using the configuration file

- Start zookeeper first

- Open a cmd window, change into the Kafka install directory, and run the following command

.\bin\windows\kafka-server-start.bat .\config\server.properties
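Once the broker is up, you can optionally pre-create the `test1` topic used in the code later (assuming the default listener on localhost:9092; with default broker settings Kafka would also auto-create the topic on first use):

```shell
:: Create the test1 topic with one partition and one replica
.\bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --topic test1 --partitions 1 --replication-factor 1

:: List topics to confirm it exists
.\bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092
```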

四, Download the Kafka GUI tool (Offset Explorer, formerly Kafka Tool)

1) Download page: Offset Explorer

Quick-start guide: "Kafka - 可视化工具 (Kafka Tool) 快速入门" (51CTO blog)

五, Using Kafka in a project

1) Add the dependency to pom.xml

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
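When the project inherits from spring-boot-starter-parent, the spring-kafka version is managed for you. If your project does not use Spring Boot's dependency management, pin a version explicitly; the version below is only an illustrative example and should be chosen to match your Spring Boot release:

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <!-- Example only: use the version matching your Spring Boot release -->
    <version>2.8.11</version>
</dependency>
```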

2) Add the following to application.yml, under the spring section:

spring:
# =========================================================================
  kafka:
    bootstrap-servers: localhost:9092
    # Producer configuration
    producer:
      # Number of retries
      retries: 0
      # Ack level: how many partition replicas must confirm the write before the broker acks the producer (0, 1, or all/-1)
      acks: 1
      # Batch size in bytes
      batch-size: 65536
      # Producer buffer size in bytes
      buffer-memory: 524288
      # Kafka serializer classes
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        # Send delay: the producer sends a batch once it reaches batch-size or once linger.ms elapses
        linger.ms: 0

    # Consumer configuration
    consumer:
      # Whether to auto-commit offsets
      enable-auto-commit: true
      # Auto-commit interval in ms (how long after receiving a message the offset is committed)
      auto-commit-interval: 1000
      # Default consumer group; can be overridden in code
      group-id: test
      # earliest: if a committed offset exists, resume from it; otherwise consume from the beginning
      # latest: if a committed offset exists, resume from it; otherwise consume only newly produced records
      # none: if every partition has a committed offset, resume from it; if any partition lacks one, throw an exception
      auto-offset-reset: latest
      properties:
        # Session timeout (if the consumer sends no heartbeat within this window, a rebalance is triggered)
        session.timeout.ms: 120000
        # Request timeout
        request.timeout.ms: 180000
      # Deserializer classes
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

# ===========================================================================

3) Add the test code:

Producing messages:

package com.qi.study.springboot.controller;

import javax.annotation.Resource;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import com.qi.study.springboot.result.JsonResult;
import com.qi.study.springboot.util.JsonResultBuilder;

@RestController
@RequestMapping("/demo")
public class KafkaProviderController {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaProviderController.class);

    @Resource
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping(value = "/kafka/sendMsg", method = RequestMethod.POST)
    public JsonResult sendMsg(@RequestParam String message) {
        LOGGER.info("KafkaController - sending Kafka message: {}", message);
        try {
            // Produce a message: send(topic, key, value)
            ListenableFuture<SendResult<String, String>> listenableFuture = kafkaTemplate.send("test1", "HelloWorld", message);
            listenableFuture.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
                @Override
                public void onSuccess(SendResult<String, String> result) {
                    LOGGER.info("KafkaController - Kafka message sent successfully: {}", result);
                }
                @Override
                public void onFailure(Throwable ex) {
                    LOGGER.error("KafkaController - failed to send Kafka message", ex);
                }
            });
            return JsonResultBuilder.ok();
        } catch (Exception e) {
            LOGGER.error("KafkaController - exception while sending Kafka message", e);
            return JsonResultBuilder.error();
        }
    }

}

Consuming messages:

package com.qi.study.springboot.component;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConsumer.class);

    // Key/value types must match the StringDeserializer configured above
    @KafkaListener(topics = "test1")
    public void onMessage(ConsumerRecord<String, String> record) {
        String topic = record.topic();
        String msg = record.value();
        LOGGER.info("Consumer received message: topic --> {}, msg --> {}", topic, msg);
    }
}
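The listener above relies on auto-commit (enable-auto-commit: true), so an offset may be committed before the record is fully processed. If you want to commit only after successful processing, spring-kafka supports manual acknowledgment. A minimal sketch, assuming application.yml is changed to spring.kafka.consumer.enable-auto-commit: false and spring.kafka.listener.ack-mode: manual_immediate (class name and group id below are illustrative):

```java
package com.qi.study.springboot.component;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class ManualAckKafkaConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(ManualAckKafkaConsumer.class);

    // Requires spring.kafka.listener.ack-mode=manual_immediate
    @KafkaListener(topics = "test1", groupId = "test-manual")
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        LOGGER.info("Received: topic={}, value={}", record.topic(), record.value());
        // Commit the offset only after the record has been processed successfully
        ack.acknowledge();
    }
}
```

If processing throws before acknowledge() is called, the offset is not committed and the record can be redelivered, which is the usual trade-off for at-least-once handling.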

4) Start and test

- Start zookeeper

- Start kafka

- Start the Spring Boot application
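With all three processes running, the endpoint can be exercised from the command line; port 8080 and an empty context path are assumptions based on Spring Boot defaults:

```shell
curl -X POST "http://localhost:8080/demo/kafka/sendMsg?message=HelloKafka"
```

On success, the producer log should record the send result and the consumer log should print the received topic and message.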

 

 

六, Source code download (including the installers used above): https://download.csdn.net/download/MyNoteBlog/86729096
