Table of Contents

I. Project Background

II. Project Goals

III. Project Deployment

1. Prerequisites

2. Procedure

2.1 Preparation

2.2 Install AKHQ

2.3 Configure AKHQ

2.4 Start AKHQ

IV. Verification


I. Project Background

In day-to-day operations, more and more team members, including developers, data analysts, and business operations staff, need to view and monitor data in Kafka topics in real time. This is essential for diagnosing problems quickly, tuning performance, and supporting data-driven decisions.

II. Project Goals

This document guides technical teams through integrating an Apache Kafka cluster with the AKHQ monitoring tool.

It is intended for teams that need to monitor and manage Kafka clusters.

III. Project Deployment

1. Prerequisites

  • The target clusters are the prod-environment kafka and log_kafka clusters.

  • The Kerberos keytabs and permissions required to access the Kafka clusters (a quick sanity check is sketched after this list).

  • A server on which to deploy AKHQ.

  • JDK 11+ (JDK 17 was installed for this deployment).
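
Since Kerberos access is a prerequisite, it is worth confirming up front that each keytab actually yields a ticket. A minimal sketch, assuming the MIT Kerberos client tools (kinit, klist) are installed, using the prod keytab and principal from section 2.1:

# Obtain a ticket from the keytab; a silent exit indicates success
kinit -kt /etc/keytabs/kafka-01.keytab kafka/kafka-01@principal
# Show the cached ticket to confirm the principal and expiry time
klist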

2. Procedure

2.1 Preparation

Cluster connection details (cluster / brokers / keytab / principal):

kafka
  Brokers:   kafka-01:9092, kafka-02:9092, kafka-03:9092, kafka-04:9092, kafka-05:9092
  Keytab:    /etc/keytabs/kafka-01.keytab
  Principal: kafka/kafka-01@principal

log_kafka
  Brokers:   log_kafka-01:9092, log_kafka-02:9092, log_kafka-03:9092, log_kafka-04:9092, log_kafka-05:9092
  Keytab:    /etc/keytabs/log_kafka-01.keytab
  Principal: kafka/log_kafka-01@principal

LDAP connection details:

  server:          ldap://ldap.server.com:389
  managerDn:       "cn=ladp,OU=ldap,OU=Corp Services,DC=dc,DC=intra"
  managerPassword: "*****"
  search base:     "OU=Ou Users,OU=Corp Users,DC=DC,DC=intra"
  search filter:   "sAMAccountName={0}"
  group base:      "OU=Ou Users,OU=Corp Users,DC=DC,DC=intra"
  group filter:    "(objectClass=groupofnames)"

AKHQ service: ip:8443, deployment directory /data/src
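
With the broker lists in hand, a quick reachability check from the AKHQ host can rule out network problems early. A sketch assuming nc (netcat) is installed; repeat for each broker:

# Check that a broker port is reachable from the AKHQ host
nc -vz kafka-01 9092
nc -vz log_kafka-01 9092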

2.2 Install AKHQ

On the target server, install a Java 17 environment and upload akhq-0.24.0-all.jar.

# Log in to the AKHQ host and check that JDK 11+ is present
java -version
cd /data/src/
# akhq-0.24.0-all.jar is a self-contained fat jar, so there is nothing to extract;
# place it in a versioned directory and symlink that as the service directory
mkdir -p /data/src/akhq-0.24.0
mv akhq-0.24.0-all.jar /data/src/akhq-0.24.0/
ln -s /data/src/akhq-0.24.0 /data/service/akhq
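
To confirm the layout matches what the start script in section 2.4 expects (the fat jar reachable under /data/service/akhq/ and the Zulu JDK 17 under /data/src), a quick check:

# Verify the symlinked jar and the JDK version the start script will use
ls -l /data/service/akhq/akhq-0.24.0-all.jar
/data/src/zulu17.42.21-ca-crac-jdk17.0.7-linux_x64/bin/java -version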

2.3 Configure AKHQ

  Create AKHQ's application.yml file and add the Kafka cluster connection settings.

  Configure AKHQ's LDAP integration, user roles, and permissions.

  Set the access-log format.

# On the AKHQ host
cd /data/service/akhq/
# Create the log directory
mkdir log
touch application.yml

Contents of application.yml:

# Debug logging configuration (optional; uncomment to troubleshoot auth issues)
#logger:
#  levels:
#    root: DEBUG
#    org.akhq.configs: TRACE
#    io.micronaut: DEBUG
#    io.micronaut.security.ldap: TRACE
#    io.micronaut.configuration.security: TRACE
#    java.security: TRACE

micronaut:
  security:
    enabled: true # enable authentication
    # LDAP authentication configuration
    ldap:
      default:
        enabled: true # enable LDAP
        context:
          server: "ldap://ldap.server.com:389" # LDAP server address
          managerDn: "cn=cn,OU=ldap,OU=Corp Services,DC=DC,DC=intra" # LDAP manager (bind) DN
          managerPassword: "<password>"
        search:
          base: "OU=Ou Users,OU=Corp Users,DC=DC,DC=intra"
          filter: "sAMAccountName={0}" # LDAP user lookup filter
        groups:
          enabled: true # use LDAP groups
          base: "OU=Ou Users,OU=Corp Users,DC=DC,DC=intra"
          filter: "(objectClass=groupofnames)" # LDAP group lookup filter
  server:
    port: 8443 # AKHQ service port
    cors:
      enabled: true # CORS for the frontend (no impact here)
      configurations:
        all:
          allowedOrigins:
            - http://localhost:3000

akhq: # service configuration
  server:
    access-log: # 日志配置 (可选)
      enabled: true # true by default
      name: org.akhq.log.access # Logger name
      format: "[Date: {}] [Duration: {} ms] [Url: {} {}] [Status: {}] [Ip: {}] [User: {}]" # Logger format
  connections: # cluster connections
    kafka-prod: # prod cluster
      properties:
        bootstrap.servers:  "kafka-01:9092,kafka-02:9092,kafka-03:9092,kafka-04:9092,kafka-05:9092" # kafka broker list
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: GSSAPI
        sasl.jaas.config: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab="/etc/keytabs/kafka-01.keytab" storeKey=true serviceName="kafka" client=true useTicketCache=true principal="kafka/kafka-01@principal"; # Kerberos keytab and connection parameters
    
    log-kafka-prod: # log_kafka cluster
      properties:
        bootstrap.servers: "log_kafka-01:9092,log_kafka-02:9092,log_kafka-03:9092,log_kafka-04:9092,log_kafka-05:9092"
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: GSSAPI
        sasl.jaas.config: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true keyTab="/etc/keytabs/log_kafka-01.keytab" storeKey=true serviceName="kafka" client=true useTicketCache=true principal="kafka/log_kafka-01@principal";

  pagination: # page size and resolver threads
    page-size: 25 # number of elements per page (default : 25)
    threads: 16 # Number of parallel threads to resolve page
    
  security:
    default-group: no-roles # Default groups for all the user even unlogged user
    # Groups definition
    groups: # role/permission definitions
      admin: # unique key
        name: admin # Group name
        roles:  # roles for the group
          - topic/read
          - topic/insert
          - topic/delete
          - topic/config/update
          - node/read
          - node/config/update
          - topic/data/read
          - topic/data/insert
          - topic/data/delete
          - group/read
          - group/delete
          - group/offsets/update
          - registry/read
          - registry/insert
          - registry/update
          - registry/delete
          - registry/version/delete
          - acls/read
          - connect/read
          - connect/insert
          - connect/update
          - connect/delete
          - connect/state/update
        attributes: # regexp topic filters: ".*" shows all topics, "test.*" shows only topics starting with "test"
          # Regexp list to filter topic available for group
          topics-filter-regexp:
            - ".*"
          # Regexp list to filter connect configs visible for group
          connects-filter-regexp:
            - ".*"
          # Regexp list to filter consumer groups visible for group
          consumer-groups-filter-regexp:
            - ".*"
      topic-reader: # unique key; read-only role
        name: topic-reader # Other group
        roles:
          - topic/read
          - topic/data/read
          
    # Basic auth configuration
    basic-auth: # local accounts
      - username: admin # Username    
        password: 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 # Password in sha256
        groups: # Groups for the user
          - admin
          - topic-reader

    # Ldap Groups configuration (when using ldap)
    ldap: # map permissions from LDAP groups; our LDAP has only one group today, so LDAP users currently get read-only access
      default-group: topic-reader
      groups:
        - name: group-ldap-1
          groups: # Akhq groups list
            - topic-reader
      users: # grant these LDAP users admin rights
        - username: user1 # ldap user id
          groups: # Akhq groups list
            - admin
        - username: user2
          groups:
            - admin
        - username: user3
          groups:
            - admin
    # Data masking configuration (optional)
    # data-masking:
    #   filters:
    #     - description: "Masks value for secret-key fields"
    #       search-regex: '"(secret-key)":".*"'
    #       replacement: '"$1":"xxxx"'
    #     - description: "Masks last digits of phone numbers"
    #       search-regex: '"([\+]?[(]?[0-9]{3}[)]?[-\s\.]?[0-9]{3}[-\s\.]?)[0-9]{4,6}"'
    #       replacement: '"$1xxxx"'
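
Note that the basic-auth password above is a SHA-256 digest, not plaintext; the value shown is the digest of the string admin. To generate the digest for your own password:

# Compute the sha256 digest for basic-auth (echo -n avoids hashing a trailing newline)
echo -n 'admin' | sha256sum
# 8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918  -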

2.4 Start AKHQ

  Create the AKHQ start script.

  Check the startup log and confirm there are no errors.

# Log in to the AKHQ host
ssh ip
cd /data/service/akhq/
touch akhq_service.sh
vi akhq_service.sh
# Add the following content

#!/bin/bash

# AKHQ start/stop/restart script

# Configuration
AKHQ_JAR="/data/service/akhq/akhq-0.24.0-all.jar"
CONFIG_FILE="/data/service/akhq/application.yml"
LOG_DIR="/data/service/akhq/log"
LOG_FILE="${LOG_DIR}/akhq.log"
PID_FILE="/data/service/akhq/akhq.pid"
JAVA_HOME="/data/src/zulu17.42.21-ca-crac-jdk17.0.7-linux_x64"

# Ensure log directory exists
mkdir -p "$LOG_DIR"

start() {
    if [ -f "$PID_FILE" ]; then
        echo "AKHQ is already running."
    else
        echo "Starting AKHQ..."
        #nohup "$JAVA_HOME/bin/java" -Dmicronaut.config.files="$CONFIG_FILE"  -jar "$AKHQ_JAR" >> "$LOG_FILE" 2>&1 &
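        # NOTE: the uncommented line below also opens a JDWP remote-debug port on 0.0.0.0:5005;
        # for production use, prefer the plain start command commented out above.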
        nohup "$JAVA_HOME/bin/java" -Dmicronaut.config.files="$CONFIG_FILE" -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=0.0.0.0:5005 -jar "$AKHQ_JAR" >> "$LOG_FILE" 2>&1 &
        echo $! > "$PID_FILE"
        echo "AKHQ started."
    fi
}

stop() {
    if [ ! -f "$PID_FILE" ]; then
        echo "AKHQ is not running."
    else
        PID=$(cat "$PID_FILE")
        echo "Stopping AKHQ..."
        kill "$PID"
        rm "$PID_FILE"
        echo "AKHQ stopped."
    fi
}

restart() {
    echo "Restarting AKHQ..."
    stop
    sleep 2
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
esac
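
Make the script executable, start the service, and tail the log to confirm a clean boot:

chmod +x akhq_service.sh
./akhq_service.sh start
tail -f log/akhq.log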

IV. Verification

Open the AKHQ UI and check that it connects to the Kafka clusters and displays their information.

Prod: http://ip:8443/ui/login

Try some basic operations, such as browsing topics and consumer groups, to verify the integration works.
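
Before opening a browser, a quick command-line check (substitute the actual host for ip) can confirm the service is listening:

# Expect HTTP 200 from the login page if AKHQ is up
curl -s -o /dev/null -w "%{http_code}\n" http://ip:8443/ui/login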
