I. Related configuration

1. JAAS configuration file (kafka_client_jaas.conf)

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="D:/code/demo/conf/kafka.service.keytab"
    principal="kafka/hdp-1";
};
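If you prefer not to manage a separate JAAS file, the kafka-clients dependency used later in this post also accepts the same Krb5LoginModule settings inline through the sasl.jaas.config client property. A sketch, added to the same Properties object used in the producer/consumer code below (paths and principal taken from the file above):

// Alternative to kafka_client_jaas.conf: pass the login module configuration inline
props.put("sasl.jaas.config",
        "com.sun.security.auth.module.Krb5LoginModule required "
        + "useKeyTab=true storeKey=true serviceName=\"kafka\" "
        + "keyTab=\"D:/code/demo/conf/kafka.service.keytab\" "
        + "principal=\"kafka/hdp-1\";");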

2. Keytab file (kafka.service.keytab)

Copy it from the Kerberos server to the target machine, or ask your operations team for a copy.

3. Kerberos configuration file (krb5.conf)

For a description of the krb5.conf parameters, see the krb5.conf(5) man page.

Copy it from the Kerberos server to the target machine, or ask your operations team for a copy.

# Configuration snippets may be placed in this directory as well
# Remove this includedir line when running on JDK 11 (see the tip below)
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
  default_realm = HADOOP.COM 
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 24h  
  renew_lifetime = 7d   
  forwardable = true
  rdns = false
  udp_preference_limit = 1

[realms]
 HADOOP.COM = {   
  kdc = hdp-1:88  
  admin_server = hdp-1:749
  default_domain = HADOOP.COM
 }

[domain_realm]
 .HADOOP.COM = HADOOP.COM 
 HADOOP.COM = HADOOP.COM

Tip: in JDK 11 the sun.security.krb5.Config class was changed (in its readConfigFileLines method); if the includedir line is not removed, loading krb5.conf fails with the following error:

Caused by: KrbException: krb5.conf loading failed

II. Modify the hosts file

Add an entry for the Kafka broker so that the hostname used in the principal and in bootstrap.servers resolves:

192.168.16.14  hdp-1
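Kerberos is sensitive to hostname resolution, so it can be worth confirming the mapping before debugging authentication itself. A throwaway check (hypothetical helper class, not part of the original post):

import java.net.InetAddress;

public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        // Expect 192.168.16.14 if the hosts entry above is in effect
        System.out.println(InetAddress.getByName("hdp-1").getHostAddress());
    }
}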

III. Add the kafka-clients dependency matching your Kafka version

        <!-- Use the kafka-clients version that matches the Kafka broker you run against -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.1.0</version>
        </dependency>

IV. Producer sample code

package com.example.demo.kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

/**
 * @Author: meng
 * @Version: 1.0
 */
public class ProductKafkaKerberos {

    public static void main(String[] args) {
        // Point the JVM at the JAAS login configuration and the Kerberos configuration
        String filePath = System.getProperty("user.dir") + "\\conf\\";
        System.setProperty("java.security.auth.login.config", filePath + "kafka_client_jaas.conf");
        System.setProperty("java.security.krb5.conf", filePath + "krb5.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "hdp-1:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // SASL/GSSAPI (Kerberos) settings
        props.put("sasl.mechanism", "GSSAPI");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 3; i++) {
            producer.send(new ProducerRecord<String, String>("test", Integer.toString(i), Integer.toString(i)));
        }
        System.out.println("producer finished sending");
        producer.close();
    }

}
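send() is asynchronous, so the message above may print before the broker has acknowledged anything, and Kerberos or authorization failures likewise surface only later. A minimal sketch (assuming the same producer as above) that makes such errors visible via a callback:

producer.send(new ProducerRecord<>("test", "key", "value"), (metadata, exception) -> {
    if (exception != null) {
        // Authentication/authorization problems (e.g. SaslAuthenticationException) land here
        exception.printStackTrace();
    } else {
        System.out.printf("sent to partition %d at offset %d%n", metadata.partition(), metadata.offset());
    }
});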

V. Consumer sample code

package com.example.demo.kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

/**
 * @Author: meng
 * @Version: 1.0
 */
public class ConsumerKafkaKerberos {

    public static void main(String[] args) {
        String filePath = System.getProperty("user.dir") + "\\conf\\";
        System.setProperty("java.security.auth.login.config", filePath + "kafka_client_jaas.conf");
        System.setProperty("java.security.krb5.conf", filePath + "krb5.conf");

        Properties props = new Properties();
        props.put("bootstrap.servers", "hdp-1:9092");
        props.put("group.id", "test_group");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // SASL/GSSAPI (Kerberos) settings
        props.put("sasl.mechanism", "GSSAPI");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");

        @SuppressWarnings("resource")
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        String topic = "test";
        consumer.subscribe(Arrays.asList(topic));
        while (true) {
            try {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, partition = %d, key = %s, value = %s%n",
                            record.offset(), record.partition(), record.key(), record.value());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

}
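The poll loop above never terminates, so the consumer is never closed. A common shutdown pattern (a sketch, not from the original code) is to call wakeup() from a shutdown hook and catch the resulting WakeupException:

final Thread mainThread = Thread.currentThread();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup();                          // makes a blocked poll() throw WakeupException
    try { mainThread.join(); } catch (InterruptedException ignored) { }
}));
try {
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
        // process records ...
    }
} catch (org.apache.kafka.common.errors.WakeupException e) {
    // expected during shutdown
} finally {
    consumer.close();                           // leave the group cleanly
}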

Related posts:
Securing Kafka with Kerberos: https://www.cnblogs.com/myownswordsman/p/kafka-security-kerberos.html
A detailed walkthrough of the Kerberos authentication flow: https://seevae.github.io/2020/09/12/%E8%AF%A6%E8%A7%A3kerberos%E8%AE%A4%E8%AF%81%E6%B5%81%E7%A8%8B/
