Today, while using the Alibaba Cloud druid.io service, I noticed that every kafka-index-service task was ending in failure. Checking the error logs turned up the following:


io.druid.java.util.common.ISE: Could not allocate segment for row with timestamp[2019-11-21T09:17:29.000Z]
	at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:642) ~[?:?]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444) [druid-indexing-service-0.12.3.jar:0.12.3]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416) [druid-indexing-service-0.12.3.jar:0.12.3]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_151]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_151]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
2019-11-21T09:17:36,821 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_TEST_CTI_PT30M_PT1H_92985e76664003b_cihombii] status changed to [FAILED].
2019-11-21T09:17:36,824 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_TEST_CTI_PT30M_PT1H_92985e76664003b_cihombii",
  "status" : "FAILED",
  "duration" : 516
}

Rookie mistake: check whether your segmentGranularity and queryGranularity are set wrong. queryGranularity must be no coarser than segmentGranularity (queryGranularity <= segmentGranularity). During segment allocation the row's timestamp is first bucketed by queryGranularity, and that bucket has to fit inside a single segment interval; if queryGranularity is coarser than segmentGranularity it can never fit, and the task dies with exactly this "Could not allocate segment" error. In my case the fix was to align the two, for example:

  "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "DAY",
      "queryGranularity": "DAY",
      "rollup": true
    }