Implementing the kafka console consumer's from-beginning behavior in a Java consumer
You only need to add the following two settings to your consumer configuration. auto.offset.reset=earliest only takes effect when the consumer group has no committed offsets, so pairing it with a brand-new (random) group.id guarantees the consumer starts from the beginning of every partition:
props.put("auto.offset.reset", "earliest");
props.put("group.id", UUID.randomUUID().toString()); props.put("auto.offset.reset", "earliest"); props.put("group.id", UUID.randomUUID().toString());
props.put("group.id", UUID.randomUUID().toString()); props.put("auto.offset.reset", "earliest"); props.put("group.id", UUID.randomUUID().toString());
Complete example:
import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;

// 1. Prepare the configuration: a fresh group.id plus auto.offset.reset=earliest
//    reproduces the console consumer's --from-beginning behavior
Properties props = new Properties();
props.put("bootstrap.servers", "hadoop1:9092");
props.put("group.id", UUID.randomUUID().toString());
props.put("auto.offset.reset", "earliest");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

// 2. Create the consumer, subscribe, and poll from the earliest offset of each partition
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("test"));   // topic name assumed for illustration
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
}
