A Java example (demo) of using Kafka

Prerequisites

For setting up a single-machine ZooKeeper and Kafka pseudo-cluster on Windows 10, see:
https://blog.csdn.net/sndayYU/article/details/90718238
https://blog.csdn.net/sndayYU/article/details/90718786

Code

Maven

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>

Consumer

KfkConsumer.java

package com.ydfind.kafka.simple;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class KfkConsumer {

    public static void main(String[] args) {
        String topic = "test-topic";

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
        props.put("group.id", "testGroup1");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(topic));
        while (true) {
            // poll(long) is deprecated since Kafka 2.0; use the Duration overload
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("partition = %d, offset = %d, key = %s, value = %s%n",
                        record.partition(), record.offset(), record.key(), record.value());
            }
        }
    }
}

Producer

KfkProducer.java

package com.ydfind.kafka.simple;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.HashMap;
import java.util.Map;

public class KfkProducer {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        String topic = "test-topic";
        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>(topic, "idea-key2", "java-message 1"));
        producer.send(new ProducerRecord<>(topic, "idea-key2", "java-message 2"));
        producer.send(new ProducerRecord<>(topic, "idea-key2", "java-message 3"));

        producer.close();
    }
}
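All three messages above carry the same key ("idea-key2"), so Kafka's default partitioner routes them to the same partition: it hashes the serialized key (murmur2 in the real client) and takes the result modulo the partition count, which is what keeps messages with the same key in order. A minimal sketch of that idea, substituting a plain byte-array hash for Kafka's murmur2 (so the actual partition numbers will differ from what a real broker assigns):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyPartitionSketch {

    // Illustrative only: Kafka's DefaultPartitioner hashes the serialized key
    // with murmur2; here Arrays.hashCode stands in to show the mechanism.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        int hash = Arrays.hashCode(keyBytes);
        // Mask off the sign bit so the modulo result is always non-negative.
        return (hash & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition number,
        // so the three demo messages stay ordered relative to each other.
        System.out.println(partitionFor("idea-key2", 3));
        System.out.println(partitionFor("idea-key2", 3));
    }
}
```

Records sent with a null key are instead spread across partitions (round-robin in older clients, sticky batching in newer ones), so per-key ordering only holds when a key is set.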

Running

1. Start the ZooKeeper cluster;
2. Start the Kafka cluster;
3. Run the consumer;
4. Run the producer.
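The demo assumes test-topic already exists. If it does not, it can be created with the Kafka CLI before step 3; the paths and the ZooKeeper address below assume the Windows setup from the linked posts, so adjust them to your installation:

```shell
:: Create the topic with 3 partitions, one per broker (run from the Kafka install dir)
bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 ^
    --replication-factor 1 --partitions 3 --topic test-topic

:: Verify the topic exists
bin\windows\kafka-topics.bat --list --zookeeper localhost:2181
```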

As a result, the consumer prints the messages sent by the producer to its console.
