Kafka Tools

References:

https://cwiki.apache.org/confluence/display/KAFKA/System+Tools

https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools

http://kafka.apache.org/documentation.html#quickstart

http://kafka.apache.org/documentation.html#operations

 

To make everyday operation easier, Kafka ships with a fairly powerful set of tools. Below is a summary of the ones needed most often.

 

Starting and stopping the Kafka server

bin/kafka-server-start.sh config/server.properties
bin/kafka-server-stop.sh
JMX_PORT=9999 nohup bin/kafka-server-start.sh config/server.properties &
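
A quick sanity check that the broker is up (a sketch; it just reuses the topic-listing command shown below and assumes ZooKeeper on localhost:2181):

# When started with nohup as above, the broker's output lands in nohup.out;
# if the command below returns without errors, the cluster is reachable.
bin/kafka-topics.sh --list --zookeeper localhost:2181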

 

Topic operations

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181

Describe the details of a topic:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test

Change the number of partitions of a topic (it can only be increased):

bin/kafka-topics.sh --alter --zookeeper localhost:2181 --partitions 3 --topic test

Topic deletion is only formally supported starting with 0.8.2, which is currently in beta:

bin/kafka-topics.sh --zookeeper zk_host:port/chroot --delete --topic my_topic_name

Check for problematic partitions:

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --unavailable-partitions --topic test
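
A related check is --under-replicated-partitions, which lists partitions whose replicas have fallen out of the ISR (hedged: confirm the flag with kafka-topics.sh --help on your version):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --under-replicated-partitions --topic test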
Modify per-topic configuration parameters
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 1 --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
> bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --config max.message.bytes=128000
> bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --deleteConfig max.message.bytes
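
After changing per-topic settings, the overrides can be confirmed with --describe, which prints the topic's Configs (reusing the my-topic example above):

> bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic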

 

Cluster expansion
Adding brokers to an existing cluster is fairly simple, but the partitions of existing topics are not migrated to the new brokers automatically; the migration has to be done by hand. Kafka does provide a fairly convenient tool for it.

--generate: produce a candidate reassignment plan
Given a list of topics and a list of brokers, the tool proposes how to move the partitions.

Move the listed topics entirely onto the new brokers:

> cat topics-to-move.json
{"topics": [{"topic": "foo1"},
            {"topic": "foo2"}],
 "version":1
}
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate 
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo1","partition":2,"replicas":[1,2]},
               {"topic":"foo1","partition":0,"replicas":[3,4]},
               {"topic":"foo2","partition":2,"replicas":[1,2]},
               {"topic":"foo2","partition":0,"replicas":[3,4]},
               {"topic":"foo1","partition":1,"replicas":[2,3]},
               {"topic":"foo2","partition":1,"replicas":[2,3]}]
}

Proposed partition reassignment configuration

{"version":1,
 "partitions":[{"topic":"foo1","partition":2,"replicas":[5,6]},
               {"topic":"foo1","partition":0,"replicas":[5,6]},
               {"topic":"foo2","partition":2,"replicas":[5,6]},
               {"topic":"foo2","partition":0,"replicas":[5,6]},
               {"topic":"foo1","partition":1,"replicas":[5,6]},
               {"topic":"foo2","partition":1,"replicas":[5,6]}]
}

The tool prints both the current assignment and the proposed reassignment plan.

Save both of them: the proposal is what gets fed to --execute, and the current assignment can be used for a rollback (see the rollback sketch after the --verify step below).

--execute: start the reassignment

> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --execute
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo1","partition":2,"replicas":[1,2]},
               {"topic":"foo1","partition":0,"replicas":[3,4]},
               {"topic":"foo2","partition":2,"replicas":[1,2]},
               {"topic":"foo2","partition":0,"replicas":[3,4]},
               {"topic":"foo1","partition":1,"replicas":[2,3]},
               {"topic":"foo2","partition":1,"replicas":[2,3]}]
}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions
{"version":1,
 "partitions":[{"topic":"foo1","partition":2,"replicas":[5,6]},
               {"topic":"foo1","partition":0,"replicas":[5,6]},
               {"topic":"foo2","partition":2,"replicas":[5,6]},
               {"topic":"foo2","partition":0,"replicas":[5,6]},
               {"topic":"foo1","partition":1,"replicas":[5,6]},
               {"topic":"foo2","partition":1,"replicas":[5,6]}]
}

--verify: check the current status of the reassignment

> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file expand-cluster-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] completed successfully
Reassignment of partition [foo1,1] is in progress
Reassignment of partition [foo1,2] is in progress
Reassignment of partition [foo2,0] completed successfully
Reassignment of partition [foo2,1] completed successfully 
Reassignment of partition [foo2,2] completed successfully
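If the move needs to be undone, a rollback is just another reassignment: feed the saved "Current partition replica assignment" block back to --execute. A minimal sketch, assuming that block was saved to a file named rollback.json (the file name is made up here):

> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file rollback.json --execute
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file rollback.json --verify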

Moving selected replicas of selected partitions of a topic

This moves partition 0 of topic foo1 to brokers 5 and 6, and partition 1 of topic foo2 to brokers 2 and 3:

> cat custom-reassignment.json
{"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file custom-reassignment.json --execute
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[1,2]},
               {"topic":"foo2","partition":1,"replicas":[3,4]}]
}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions
{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
               {"topic":"foo2","partition":1,"replicas":[2,3]}]
}

Decommissioning brokers

The current release has no tooling to plan a broker's removal; that only arrives in 0.8.2. For now you have to drain the broker yourself by moving all of its replicas onto other brokers (a hedged sketch follows).
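
A sketch of draining a broker by hand with the same reassignment tool: list every partition the retiring broker hosts, give each one a replica list that no longer contains it, and execute the plan (the broker ids, topics and file name below are purely illustrative):

> cat drain-broker-7.json
{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},
               {"topic":"foo2","partition":1,"replicas":[5,6]}]}
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file drain-broker-7.json --execute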

Increasing the replication factor

Grow the replica count of partition 0 of topic foo from 1 to 3: the existing replica sits on broker 5, and replicas are added on brokers 6 and 7.

> cat increase-replication-factor.json
{"version":1,
 "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}
> bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo","partition":0,"replicas":[5]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions
{"version":1,
 "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}

 

Producer console

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test 
This is a message
This is another message

Everything typed afterwards is sent as a message to the topic on the broker.
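
The console producer reads stdin, so messages can also be piped in from a file or another command (messages.txt is a hypothetical file):

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < messages.txt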

 

Consumer console

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

This reads the topic from the beginning, so all the data can be replayed on every run.
I wondered why each run is able to replay; it turns out the console consumer generates a random group id every time it starts:
consumerProps.put("group.id", "console-consumer-" + new Random().nextInt(100000))
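
To keep a stable position instead of replaying, the group id can be pinned through a consumer properties file. A sketch, assuming the console consumer of this era accepts a --consumer.config option (check --help on your version; the file name and group id below are made up):

> cat console-consumer.properties
group.id=pv-console
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --consumer.config console-consumer.properties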

 

Consumer Offset Checker

This displays the offset status of a consumer group. --group is required; if --topic is not given, all topics are covered.

Displays the:  Consumer Group, Topic, Partitions, Offset, logSize, Lag, Owner for the specified set of Topics and Consumer Group

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker

required argument: [group]
Option Description
------ -----------
--broker-info Print broker info
--group Consumer group.
--help Print this message.
--topic Comma-separated list of consumer
   topics (all topics if absent).
--zkconnect ZooKeeper connect string. (default: localhost:2181)

Example,

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group pv

Group           Topic                          Pid Offset          logSize         Lag             Owner
pv              page_visits                    0   21              21              0               none
pv              page_visits                    1   19              19              0               none
pv              page_visits                    2   20              20              0               none
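
To watch lag over time, the checker can simply be run in a loop (a rough sketch; the group name and 60-second interval are just examples):

# print the lag of group pv once a minute, Ctrl-C to stop
while true; do
  bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:2181 --group pv
  sleep 60
done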

 

 

Export Zookeeper Offsets

Dumps the offset information in ZK to a file in the format below.

A utility that retrieves the offsets of broker partitions in ZK and prints to an output file in the following format:

/consumers/group1/offsets/topic1/1-0:286894308
/consumers/group1/offsets/topic1/2-0:284803985

bin/kafka-run-class.sh kafka.tools.ExportZkOffsets

required argument: [zkconnect]
Option Description
------ -----------
--group Consumer group.
--help Print this message.
--output-file Output file
--zkconnect ZooKeeper connect string. (default: localhost:2181)
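
An example invocation that dumps the offsets of the pv group from above into a file (the output path is arbitrary); the companion tool kafka.tools.ImportZkOffsets can, if your build ships it, load such a file back into ZooKeeper:

bin/kafka-run-class.sh kafka.tools.ExportZkOffsets --zkconnect localhost:2181 --group pv --output-file /tmp/pv-offsets.txt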

 

Update Offsets In Zookeeper

This one is quite handy for replays. The Kafka documentation is not much help here; I only figured out how to use it by reading the source.

A utility that updates the offset of every broker partition to the offset of earliest or latest log segment file, in ZK.

bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK

USAGE: kafka.tools.UpdateOffsetsInZK$ [earliest | latest] consumer.properties topic

Example,

bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK earliest config/consumer.properties  page_visits

Group           Topic                          Pid Offset          logSize         Lag             Owner
pv              page_visits                    0   0               21              21              none
pv              page_visits                    1   0               19              19              none
pv              page_visits                    2   0               20              20              none

You can see that the offsets have been reset to 0, so Lag = logSize.

 

An even more direct approach is to look inside ZooKeeper itself.

Connect with zkCli.sh and browse the nodes with ls (a short session sketch follows the list below).

Broker Node Registry

/brokers/ids/[0...N] --> host:port (ephemeral node)

Broker Topic Registry

/brokers/topics/[topic]/[0...N] --> nPartitions (ephemeral node)

Consumer Id Registry

/consumers/[group_id]/ids/[consumer_id] --> {"topic1": #streams, ..., "topicN": #streams} (ephemeral node)

Consumer Offset Tracking

/consumers/[group_id]/offsets/[topic]/[broker_id-partition_id] --> offset_counter_value (persistent node)

Partition Owner registry

/consumers/[group_id]/owners/[topic]/[broker_id-partition_id] --> consumer_node_id (ephemeral node)
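
A short zkCli.sh session against the nodes above (the group and topic names come from the earlier examples):

# inside zkCli.sh, connected to the Kafka cluster's ZooKeeper
ls /brokers/ids
ls /consumers/pv/offsets
ls /consumers/pv/offsets/page_visits
# then `get` one of the child nodes listed to read the stored offset value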