Spring Integration with Kafka
Integration test in a Maven project, driven from a main method.
1. Project structure
JAR dependencies required for the Spring–Kafka integration:
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.5.1</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.7.5</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.5</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.0.0</version>
</dependency>
kafka.properties configuration
#Broker cluster
bootstrap.servers=192.168.72.129:9092
#send() is only acknowledged once all replicas have the record, the strongest guarantee against data loss.
acks=all
#Number of retries on send failure
retries=10
#Batch size in bytes: when multiple records go to the same partition, the producer batches them into fewer requests, which helps both client and broker performance.
batch.size=1638
#Upper bound on batching delay: after 1 ms a request is sent whether or not the batch is full
linger.ms=1
#Producer buffer memory in bytes (note: 32768 is 32 KB, not 32 MB; 32 MB would be 33554432)
buffer.memory=32768
#Consumer group ID. In the publish-subscribe model, each consuming application defines its own group; within one group only a single consumer receives a given message.
group.id=order-beta
#If true, consumer offsets are committed periodically in the background
enable.auto.commit=true
#Auto-commit interval, used when enable.auto.commit=true
auto.commit.interval.ms=1000
#Timeout used to detect consumer failures when using Kafka's group management
session.timeout.ms=15000
#Concurrency of the message listener container (number of consumer threads)
concurrency = 3
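These key/value pairs are what `<context:property-placeholder>` resolves from kafka.properties at startup. A minimal stdlib-only sketch of that load step (the inline string stands in for the real file):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

public class KafkaPropsCheck {
    // Parse kafka.properties-style text into a Properties object,
    // mirroring what the property placeholder does with the file on disk.
    static Properties load(String text) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(text));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for a StringReader
        }
        return p;
    }

    public static void main(String[] args) {
        String props = "bootstrap.servers=192.168.72.129:9092\n"
                + "acks=all\n"
                + "retries=10\n";
        Properties p = load(props);
        System.out.println(p.getProperty("bootstrap.servers")); // 192.168.72.129:9092
        System.out.println(p.getProperty("acks"));              // all
    }
}
```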
spring-kafkaProducer.xml (message producer configuration)
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/context
       http://www.springframework.org/schema/context/spring-context.xsd">
    <!-- Load the configuration file -->
    <context:property-placeholder location="kafka.properties"/>
    <!-- Define the producer parameters -->
    <bean id="producerProperties" class="java.util.HashMap">
        <constructor-arg>
            <map>
                <entry key="bootstrap.servers" value="${bootstrap.servers}" />
                <entry key="group.id" value="${group.id}" />
                <entry key="retries" value="${retries}" />
                <entry key="batch.size" value="${batch.size}" />
                <entry key="linger.ms" value="${linger.ms}" />
                <entry key="buffer.memory" value="${buffer.memory}" />
                <entry key="acks" value="${acks}"/>
                <entry key="key.serializer"
                       value="org.apache.kafka.common.serialization.StringSerializer" />
                <entry key="value.serializer"
                       value="org.apache.kafka.common.serialization.StringSerializer" />
            </map>
        </constructor-arg>
    </bean>
    <!-- ProducerFactory bean needed to create the KafkaTemplate -->
    <bean id="producerFactory"
          class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
        <constructor-arg>
            <ref bean="producerProperties" />
        </constructor-arg>
    </bean>
    <!-- KafkaTemplate bean; inject it wherever messages need to be sent and call its send methods -->
    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
        <constructor-arg ref="producerFactory" />
        <constructor-arg name="autoFlush" value="true" />
        <property name="defaultTopic" value="defaultTopic" />
        <!--<property name="producerListener" ref="producerListener"/>-->
    </bean>
    <!--<bean id="producerListener" class="com.git.kafka.producer.KafkaProducerListener" />-->
</beans>
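The `producerProperties` bean above is just a `HashMap`. For readers who prefer Java configuration, a rough stdlib-only equivalent of that map (the values are hard-coded here for illustration; in the XML they come from kafka.properties) looks like:

```java
import java.util.HashMap;
import java.util.Map;

public class ProducerProps {
    // Build the same key/value map that the <bean id="producerProperties">
    // definition produces; a DefaultKafkaProducerFactory constructor
    // could consume a map like this directly.
    static Map<String, Object> producerProperties() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "192.168.72.129:9092");
        props.put("group.id", "order-beta");
        props.put("retries", "10");
        props.put("batch.size", "1638");
        props.put("linger.ms", "1");
        props.put("buffer.memory", "32768");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProperties().get("acks")); // all
    }
}
```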
Testing message sending from the producer
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.FailureCallback;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.SuccessCallback;
public class KafkaSendMsgUtils {
    public static final ClassPathXmlApplicationContext CONTEXT =
            new ClassPathXmlApplicationContext("spring-kafkaProducer.xml");

    public static <K, T> void sendMessage(String topic, Integer partition, Long timestamp, K key, T data) {
        KafkaTemplate<K, T> kafkaTemplate = (KafkaTemplate<K, T>) CONTEXT.getBean("kafkaTemplate");
        System.out.println("Sending message to topic: " + topic);
        ListenableFuture<SendResult<K, T>> listenableFuture;
        if (kafkaTemplate.getDefaultTopic().equals(topic)) {
            listenableFuture = kafkaTemplate.sendDefault(partition, timestamp, key, data);
        } else {
            listenableFuture = kafkaTemplate.send(topic, partition, timestamp, key, data);
        }
        // Callback invoked on successful send
        SuccessCallback<SendResult<K, T>> successCallback = new SuccessCallback<SendResult<K, T>>() {
            @Override
            public void onSuccess(SendResult<K, T> result) {
                // Success-path business logic goes here
                System.out.println("Send succeeded");
            }
        };
        // Callback invoked when the send fails
        FailureCallback failureCallback = new FailureCallback() {
            @Override
            public void onFailure(Throwable ex) {
                System.out.println("Message send failed");
                ex.printStackTrace();
                // Failure-path business logic goes here
                throw new RuntimeException(ex);
            }
        };
        listenableFuture.addCallback(successCallback, failureCallback);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            //KafkaTemplate<String, String> kafkaTemplate = (KafkaTemplate<String, String>) CONTEXT.getBean("kafkaTemplate");
            KafkaSendMsgUtils.sendMessage("topic1", 0, null, "key", "kafka-test");
        }
    }
}
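The success/failure callback pattern above relies on Spring's `ListenableFuture`. The same control flow can be sketched with the JDK's `CompletableFuture` — a stdlib analogue, not the Spring API (the `"offset=42"` value is a made-up stand-in for a real `SendResult`):

```java
import java.util.concurrent.CompletableFuture;

public class CallbackSketch {
    // Mirrors listenableFuture.addCallback(successCallback, failureCallback):
    // one branch runs on success, the other on failure. Because the futures
    // passed in main are already completed, whenComplete runs synchronously.
    static String describe(CompletableFuture<String> sendResult) {
        StringBuilder out = new StringBuilder();
        sendResult.whenComplete((result, ex) -> {
            if (ex == null) {
                out.append("Send succeeded: ").append(result);                // success branch
            } else {
                out.append("Message send failed: ").append(ex.getMessage());  // failure branch
            }
        });
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(describe(CompletableFuture.completedFuture("offset=42")));

        CompletableFuture<String> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("broker unreachable"));
        System.out.println(describe(failed));
    }
}
```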
Test run output
log4j:WARN No appenders could be found for logger (org.springframework.core.env.StandardEnvironment).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Sending message to topic: topic1
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Send succeeded
Sending message to topic: topic1
Send succeeded
Sending message to topic: topic1
Send succeeded
Sending message to topic: topic1
Send succeeded
Sending message to topic: topic1
Send succeeded
Process finished with exit code 0
You can verify that the messages went out by running the Kafka console-consumer client:
[root@developmentEnvironment kafka_2.12-2.1.0]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.72.129:9092 --topic topic1 --from-beginning (command that starts the Kafka console-consumer client)
test
dsaasdqsq
kafka-test (message sent from Java)
kafka-test (message sent from Java)
kafka-test (message sent from Java)
kafka-test (message sent from Java)
kafka-test (message sent from Java)
Troubleshooting exceptions:
1. Spring dependency conflicts
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaTemplate' defined in class path resource [spring-kafkaProducer.xml]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.kafka.core.KafkaTemplate]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/deser/std/StdNodeBasedDeserializer
Fix: upgrade jackson-databind to version 2.7.5:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.7.5</version>
</dependency>
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaTemplate' defined in class path resource [spring-kafkaProducer.xml]: Bean instantiation via constructor failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.kafka.core.KafkaTemplate]: Constructor threw exception; nested exception is java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonInclude$Value
Fix: upgrade jackson-annotations to version 2.7.5:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>2.7.5</version>
</dependency>
2. Kafka broker listener exception
Error message:
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for topic1-0: 30030 ms has passed since batch creation plus linger time
Message send failed
at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:354)
at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:204)
at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:187)
at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:627)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:287)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for topic1-0: 30030 ms has passed since batch creation plus linger time
This exception occurs because the hostname and port the Kafka broker advertises to clients are controlled by the advertised.listeners parameter. If it is not set, the broker falls back to the host machine's hostname, which remote producers and consumers may not be able to resolve, so sends expire with the TimeoutException above. The relevant section of server.properties:
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.72.129:9092
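The fallback mentioned in that comment can be observed directly. This small check (an illustration using only the JDK, not a Kafka API) prints the name a broker with neither advertised.listeners nor listeners configured would end up advertising:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AdvertisedHostCheck {
    // The value the broker falls back to when neither advertised.listeners
    // nor listeners is set (see the server.properties comment above).
    static String fallbackHost() {
        try {
            return InetAddress.getLocalHost().getCanonicalHostName();
        } catch (UnknownHostException e) {
            return "localhost"; // local resolution itself failed
        }
    }

    public static void main(String[] args) {
        // If producers cannot resolve this name, sends expire with the
        // TimeoutException shown in the stack trace above.
        System.out.println("Broker would advertise: " + fallbackHost());
    }
}
```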