| Script | Purpose |
| --- | --- |
| kafka-log-dirs.sh | View log directory usage on the specified brokers |
| kafka-verifiable-consumer.sh | Verify a Kafka consumer |
| kafka-verifiable-producer.sh | Verify a Kafka producer |
kafka-log-dirs.sh

| Option | Description |
| --- | --- |
| --bootstrap-server | Kafka address |
| --broker-list | Comma-separated list of broker IDs to query; if omitted, all brokers are queried |
| --topic-list | Comma-separated list of topics to query |
| --command-config | Admin Client configuration file |
| --describe | Show details |
```shell
[root@10 kafka_2.11-2.2.0]# bin/kafka-log-dirs.sh --bootstrap-server 10.211.55.3:9092 --describe --broker-list 0 --topic-list first,topic-3
Querying brokers for log directories information
Received log directory information from brokers 0
{"version":1,"brokers":[{"broker":0,"logDirs":[{"logDir":"/tmp/kafka-logs","error":null,"partitions":[{"partition":"topic-3-0","size":474,"offsetLag":0,"isFuture":false},{"partition":"first-0","size":310,"offsetLag":0,"isFuture":false}]}]}]}
```
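Since the tool emits a single JSON document, it is easy to post-process. As a minimal sketch (not part of the tool itself), the following parses the example output above and totals the on-disk size of each log directory:

```python
import json

# The JSON document printed by the kafka-log-dirs.sh run above.
raw = ('{"version":1,"brokers":[{"broker":0,"logDirs":[{"logDir":"/tmp/kafka-logs",'
       '"error":null,"partitions":[{"partition":"topic-3-0","size":474,"offsetLag":0,'
       '"isFuture":false},{"partition":"first-0","size":310,"offsetLag":0,"isFuture":false}]}]}]}')

info = json.loads(raw)
for broker in info["brokers"]:
    for log_dir in broker["logDirs"]:
        # Sum the on-disk size of every partition in this log directory.
        total = sum(p["size"] for p in log_dir["partitions"])
        print(f'broker {broker["broker"]} {log_dir["logDir"]}: {total} bytes')
# → broker 0 /tmp/kafka-logs: 784 bytes
```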
kafka-verifiable-consumer.sh

| Option | Description |
| --- | --- |
| --broker-list | Broker list, HOST1:PORT1,HOST2:PORT2,… |
| --topic | Topic to consume |
| --group-id | Consumer group id |
| --max-messages | Maximum number of messages to consume; default -1 (consume indefinitely) |
```shell
# Stop automatically after consuming two messages
[root@10 kafka_2.11-2.2.0]# bin/kafka-verifiable-consumer.sh --broker-list 10.211.55.3:9092 --topic first --group-id group.demo --max-messages 2
{"timestamp":1558869583036,"name":"startup_complete"}
{"timestamp":1558869583329,"name":"partitions_revoked","partitions":[]}
{"timestamp":1558869583366,"name":"partitions_assigned","partitions":[{"topic":"first","partition":0}]}
{"timestamp":1558869590352,"name":"records_consumed","count":1,"partitions":[{"topic":"first","partition":0,"count":1,"minOffset":37,"maxOffset":37}]}
{"timestamp":1558869590366,"name":"offsets_committed","offsets":[{"topic":"first","partition":0,"offset":38}],"success":true}
{"timestamp":1558869595328,"name":"records_consumed","count":1,"partitions":[{"topic":"first","partition":0,"count":1,"minOffset":38,"maxOffset":38}]}
{"timestamp":1558869595335,"name":"offsets_committed","offsets":[{"topic":"first","partition":0,"offset":39}],"success":true}
{"timestamp":1558869595355,"name":"shutdown_complete"}
```
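Each event is one JSON object per line, so the stream can be summarized mechanically. A minimal sketch (not part of the tool) that parses a few of the event lines above to count consumed records and track the last committed offset:

```python
import json

# A few of the JSON event lines emitted by the kafka-verifiable-consumer.sh run above.
events = [
    '{"timestamp":1558869590352,"name":"records_consumed","count":1,"partitions":[{"topic":"first","partition":0,"count":1,"minOffset":37,"maxOffset":37}]}',
    '{"timestamp":1558869590366,"name":"offsets_committed","offsets":[{"topic":"first","partition":0,"offset":38}],"success":true}',
    '{"timestamp":1558869595328,"name":"records_consumed","count":1,"partitions":[{"topic":"first","partition":0,"count":1,"minOffset":38,"maxOffset":38}]}',
    '{"timestamp":1558869595335,"name":"offsets_committed","offsets":[{"topic":"first","partition":0,"offset":39}],"success":true}',
]

consumed = 0
last_committed = None
for line in events:
    event = json.loads(line)
    if event["name"] == "records_consumed":
        consumed += event["count"]
    elif event["name"] == "offsets_committed" and event["success"]:
        last_committed = event["offsets"][0]["offset"]

print(consumed, last_committed)  # → 2 39
```

In a real run you would read the lines from the tool's stdout instead of a hard-coded list.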
| Option | Description |
| --- | --- |
| --session-timeout | Consumer session timeout, default 30000 ms; if the server receives no heartbeat from the consumer within this time, it removes the consumer from the group |
| --enable-autocommit | Enable auto-commit; default false |
```shell
# Compare the difference between the two
# Without --enable-autocommit
[root@10 kafka_2.11-2.2.0]# bin/kafka-verifiable-consumer.sh --broker-list 10.211.55.3:9092 --topic first --group-id group.demo
{"timestamp":1558875063613,"name":"startup_complete"}
{"timestamp":1558875063922,"name":"partitions_revoked","partitions":[]}
{"timestamp":1558875063952,"name":"partitions_assigned","partitions":[{"topic":"first","partition":0}]}
{"timestamp":1558875069603,"name":"records_consumed","count":1,"partitions":[{"topic":"first","partition":0,"count":1,"minOffset":47,"maxOffset":47}]}
{"timestamp":1558875069614,"name":"offsets_committed","offsets":[{"topic":"first","partition":0,"offset":48}],"success":true}

# With --enable-autocommit
[root@10 kafka_2.11-2.2.0]# bin/kafka-verifiable-consumer.sh --broker-list 10.211.55.3:9092 --topic first --group-id group.demo --enable-autocommit
{"timestamp":1558874772119,"name":"startup_complete"}
{"timestamp":1558874772408,"name":"partitions_revoked","partitions":[]}
{"timestamp":1558874772449,"name":"partitions_assigned","partitions":[{"topic":"first","partition":0}]}
{"timestamp":1558874820898,"name":"records_consumed","count":1,"partitions":[{"topic":"first","partition":0,"count":1,"minOffset":46,"maxOffset":46}]}
```

Note the difference: without the flag, each `records_consumed` batch is immediately followed by an `offsets_committed` event; with auto-commit enabled, no commit event appears right after the batch, because offsets are committed periodically on the auto-commit interval instead.
| Option | Description |
| --- | --- |
| --reset-policy | Where to start consuming when there is no committed offset: earliest (from the beginning), latest (from the most recent), none (throw an exception); default earliest |
| --assignment-strategy | Partition assignment strategy for the consumer; default RangeAssignor |
| --consumer.config | Consumer configuration file |
kafka-verifiable-producer.sh

This script produces test data to the specified topic and prints the records to the console as JSON.

| Option | Description |
| --- | --- |
| --topic | Topic name |
| --broker-list | Broker list, HOST1:PORT1,HOST2:PORT2,… |
| --max-messages | Maximum number of messages to produce; default -1 (produce indefinitely) |
| --throughput | Throttle the throughput; default -1 (no limit) |
| --acks | Number of replicas in the partition that must receive a message before the send counts as successful; default -1 |
| --producer.config | Producer configuration file |
| --message-create-time | Set the message creation time (a timestamp) |
| --value-prefix | Prefix for message values |
| --repeating-keys | Keys start at 0 and increment by 1 up to the given value, then restart from 0 |
```shell
[root@10 kafka_2.11-2.2.0]# bin/kafka-verifiable-producer.sh --broker-list 10.211.55.3:9092 --topic first --message-create-time 1527351382000 --value-prefix 1 --repeating-keys 10 --max-messages 20
{"timestamp":1558877565069,"name":"startup_complete"}
{"timestamp":1558877565231,"name":"producer_send_success","key":"0","value":"1.0","topic":"first","partition":0,"offset":1541118}
{"timestamp":1558877565238,"name":"producer_send_success","key":"1","value":"1.1","topic":"first","partition":0,"offset":1541119}
{"timestamp":1558877565238,"name":"producer_send_success","key":"2","value":"1.2","topic":"first","partition":0,"offset":1541120}
{"timestamp":1558877565238,"name":"producer_send_success","key":"3","value":"1.3","topic":"first","partition":0,"offset":1541121}
{"timestamp":1558877565238,"name":"producer_send_success","key":"4","value":"1.4","topic":"first","partition":0,"offset":1541122}
{"timestamp":1558877565239,"name":"producer_send_success","key":"5","value":"1.5","topic":"first","partition":0,"offset":1541123}
{"timestamp":1558877565239,"name":"producer_send_success","key":"6","value":"1.6","topic":"first","partition":0,"offset":1541124}
{"timestamp":1558877565239,"name":"producer_send_success","key":"7","value":"1.7","topic":"first","partition":0,"offset":1541125}
{"timestamp":1558877565239,"name":"producer_send_success","key":"8","value":"1.8","topic":"first","partition":0,"offset":1541126}
{"timestamp":1558877565239,"name":"producer_send_success","key":"9","value":"1.9","topic":"first","partition":0,"offset":1541127}
{"timestamp":1558877565239,"name":"producer_send_success","key":"0","value":"1.10","topic":"first","partition":0,"offset":1541128}
{"timestamp":1558877565239,"name":"producer_send_success","key":"1","value":"1.11","topic":"first","partition":0,"offset":1541129}
{"timestamp":1558877565239,"name":"producer_send_success","key":"2","value":"1.12","topic":"first","partition":0,"offset":1541130}
{"timestamp":1558877565240,"name":"producer_send_success","key":"3","value":"1.13","topic":"first","partition":0,"offset":1541131}
{"timestamp":1558877565240,"name":"producer_send_success","key":"4","value":"1.14","topic":"first","partition":0,"offset":1541132}
{"timestamp":1558877565241,"name":"producer_send_success","key":"5","value":"1.15","topic":"first","partition":0,"offset":1541133}
{"timestamp":1558877565244,"name":"producer_send_success","key":"6","value":"1.16","topic":"first","partition":0,"offset":1541134}
{"timestamp":1558877565244,"name":"producer_send_success","key":"7","value":"1.17","topic":"first","partition":0,"offset":1541135}
{"timestamp":1558877565244,"name":"producer_send_success","key":"8","value":"1.18","topic":"first","partition":0,"offset":1541136}
{"timestamp":1558877565244,"name":"producer_send_success","key":"9","value":"1.19","topic":"first","partition":0,"offset":1541137}
{"timestamp":1558877565262,"name":"shutdown_complete"}
{"timestamp":1558877565263,"name":"tool_data","sent":20,"acked":20,"target_throughput":-1,"avg_throughput":100.50251256281408}
```
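The key/value pattern in the output above can be sketched in a few lines. This mirrors the observed behaviour only (keys cycle from 0 up to `--repeating-keys` minus 1, values are `<value-prefix>.<sequence-number>`); it is not the tool's actual implementation:

```python
# Hypothetical helper mirroring kafka-verifiable-producer.sh's key/value pattern.
def generated_records(max_messages, value_prefix, repeating_keys):
    for i in range(max_messages):
        # Keys cycle 0..repeating_keys-1; values are "<value_prefix>.<sequence>".
        yield str(i % repeating_keys), f"{value_prefix}.{i}"

# Same parameters as the example run above.
records = list(generated_records(max_messages=20, value_prefix=1, repeating_keys=10))
print(records[0], records[10], records[19])
# → ('0', '1.0') ('0', '1.10') ('9', '1.19')
```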