Flink + Kafka integration error: Marking the coordinator hadoop000:9092 (id: 2147483647 rack: null) dead
The code is very simple:
package com.imooc.flink.course08

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.CheckpointingMode

object KafkaConnectorConsumerApp {

  def main(args: Array[String]): Unit = {
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    val topic = "flinktest"
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "192.168.64.233:9092")
    properties.setProperty("group.id", "pk")

    // Consume the topic as a DataStream of raw strings and print each record
    val data: DataStream[String] = env.addSource(
      new FlinkKafkaConsumer011[String](topic, new SimpleStringSchema(), properties))
    data.print()

    env.execute("KafkaConnectorConsumerApp")
  }
}
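To check that the consumer actually receives data once the job is running, a few test messages can be pushed with the console producer that ships with Kafka. This is only a verification sketch; the script path depends on your Kafka installation, and the broker address and topic name are the ones used in the code above:

bin/kafka-console-producer.sh --broker-list 192.168.64.233:9092 --topic flinktest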
The root cause lies with properties.setProperty("bootstrap.servers", "192.168.64.233:9092"). The bootstrap address is only used for the initial connection; after that the client talks to the addresses the brokers advertise in their metadata. Here the broker advertises itself by hostname (hadoop000, as shown in the error message), the machine running the Flink job cannot resolve that hostname, and the group coordinator is therefore marked dead. So no matter whether you put an IP or a domain name in bootstrap.servers, the hostname the broker advertises must be resolvable locally, which means adding the IP-to-hostname mapping to the local hosts file.
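As a sketch, assuming the broker advertises itself as hadoop000 (the hostname from the error message) and is reachable at the IP used in the code, the entry on the machine running the Flink job would look like this in /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows):

192.168.64.233   hadoop000

Alternatively, the broker can be reconfigured to advertise an address the client can resolve, for example via advertised.listeners (or the older advertised.host.name) in Kafka's server.properties; which option fits depends on your environment.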