Overview
Traditional messaging systems offer two models: the queue and publish-subscribe.
- Queue model: multiple consumers read from the server, but each message is delivered to only one consumer.
- Publish-subscribe model: each message is broadcast to all consumers.
Kafka generalizes both models with a single consumer abstraction: the consumer group.
- Queue model: all consumers belong to the same consumer group.
- Publish-subscribe model: every consumer has its own, unique consumer group.
For illustration: a Kafka cluster of two brokers hosts a topic with four partitions (P0-P3) spread across the brokers. The topic is consumed by two consumer groups: group A has two consumer instances, group B has four.
A topic typically has a small number of consumer groups, each acting as one logical subscriber. Each consumer group in turn consists of multiple consumer instances, which provides scalability and fault tolerance.
A typical broadcast use case: an application keeps dictionaries or other configuration tables cached in memory. By broadcast-consuming a Kafka topic, every application node receives the update message and refreshes its local cache, as sketched below.
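A minimal sketch of that cache-refresh pattern, assuming a plain string payload and a hypothetical `DICT_REFRESH` topic (the topic name, class name, and cache contents here are illustrative, not taken from the original project):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Hypothetical example: each application node joins its own consumer group,
// so every node receives the refresh message and reloads its local cache.
@Component
public class DictCacheRefreshListener {

    private final Map<String, String> localDictCache = new ConcurrentHashMap<>();

    // A unique group id per instance is what turns the subscription into broadcast consumption.
    @KafkaListener(topics = "DICT_REFRESH",
            groupId = "dict-refresh-#{T(java.util.UUID).randomUUID()}")
    public void onRefresh(String dictKey) {
        // A real application would reload the entry from its source of truth here;
        // this sketch only marks the key as refreshed.
        localDictCache.put(dictKey, "reloaded-at-" + System.currentTimeMillis());
    }
}
```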
Code
POM Dependencies
```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring-Kafka dependency -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```
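The producer and consumers below also reference a `TOPIC` constants class and a `MessageMock` payload class that are not listed in this post; the exact versions live in the linked repository. A minimal sketch consistent with how the code uses them (the topic name `MOCK_TOPic` value `MOCK_TOPIC` and the field names `id`/`name` are assumptions) could be:

```java
package com.artisan.springkafka.constants;

// Topic name constant referenced as TOPIC.TOPIC by the producer and consumers.
public class TOPIC {
    public static final String TOPIC = "MOCK_TOPIC"; // assumed value for illustration
}
```

```java
package com.artisan.springkafka.domain;

// Message payload, serialized/deserialized as JSON; the package matches the
// spring.json.trusted.packages setting in the configuration below.
public class MessageMock {

    private Integer id;
    private String name;

    // A no-arg constructor is required by the JSON deserializer.
    public MessageMock() {
    }

    public MessageMock(Integer id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    // toString so the consumer's log line shows the message content.
    @Override
    public String toString() {
        return "MessageMock{id=" + id + ", name='" + name + "'}";
    }
}
```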
Configuration
```yaml
spring:
  # Kafka settings, bound to the KafkaProperties class
  kafka:
    bootstrap-servers: 192.168.126.140:9092 # Kafka broker address; multiple addresses can be listed, separated by commas
    # Kafka producer settings
    producer:
      acks: 1 # 0 - no acknowledgement; 1 - leader acknowledges; all - leader and all followers acknowledge
      retries: 3 # number of retries when a send fails
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # serializer for message keys
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # serializer for message values
    # Kafka consumer settings
    consumer:
      auto-offset-reset: latest # for broadcast subscriptions there is usually no need to consume history; start from the tail of the topic
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.artisan.springkafka.domain
    # Kafka consumer listener settings
    listener:
      missing-topics-fatal: false # by default startup fails if a listened-to topic does not exist; set to false to avoid that error

logging:
  level:
    org:
      springframework:
        kafka: ERROR # spring-kafka
      apache:
        kafka: ERROR # kafka
```
auto-offset-reset: latest
In broadcast mode there is usually no need to replay historical messages; starting from the tail of the subscribed topic's partitions is enough.
Producer
```java
package com.artisan.springkafka.producer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

import java.util.Random;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:25
 * @mark: show me the code , change the world
 */
@Component
public class ArtisanProducerMock {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    /**
     * Send a message asynchronously.
     *
     * @return the future holding the send result
     */
    public ListenableFuture<SendResult<Object, Object>> sendMsgASync() throws ExecutionException, InterruptedException {
        // Build a mock message
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "messageSendByAsync-" + id);
        // Send the message asynchronously
        ListenableFuture<SendResult<Object, Object>> result = kafkaTemplate.send(TOPIC.TOPIC, messageMock);
        return result;
    }
}
```
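For comparison, a synchronous send only needs to block on the returned future. The method below is a sketch that is not part of the original project; it would be added to `ArtisanProducerMock` above and reuses that class's fields and imports:

```java
// Hypothetical synchronous variant: block until the broker acknowledges the send.
public SendResult<Object, Object> sendMsgSync() throws ExecutionException, InterruptedException {
    Integer id = new Random().nextInt(100);
    MessageMock messageMock = new MessageMock(id, "messageSendBySync-" + id);
    // get() waits for the SendResult, turning the asynchronous send into a synchronous call
    return kafkaTemplate.send(TOPIC.TOPIC, messageMock).get();
}
```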
Consumer
```java
package com.artisan.springkafka.consumer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:33
 * @mark: show me the code , change the world
 */
@Component
public class ArtisanCosumerMockDiffConsumeGroup {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private static final String CONSUMER_GROUP_PREFIX = "MOCK-B";

    // The group id is suffixed with a random UUID (evaluated by Spring EL at startup),
    // so every application instance ends up in its own consumer group.
    @KafkaListener(topics = TOPIC.TOPIC,
            groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC + "-" + "#{T(java.util.UUID).randomUUID()}")
    public void onMessage(MessageMock messageMock) {
        logger.info("[Received message][thread: {}][content: {}]", Thread.currentThread().getName(), messageMock);
    }
}
```
Note: the groupId uses a Spring EL expression to append a UUID suffix to the consumer group name. Every application instance therefore starts with a different consumer group, which is what achieves broadcast consumption.
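One trade-off of a random UUID suffix is that every restart creates a brand-new group, and the abandoned groups linger on the broker until their offsets expire. A possible variation, not from the original project, is to derive the suffix from something stable per node, such as the hostname; a sketch under that assumption:

```java
package com.artisan.springkafka.consumer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Hypothetical variation: suffix the consumer group with the hostname instead of a UUID,
// so each node keeps the same group across restarts while groups still differ per node.
@Component
public class HostSuffixedBroadcastConsumer {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @KafkaListener(topics = TOPIC.TOPIC,
            groupId = "MOCK-B-#{T(java.net.InetAddress).getLocalHost().getHostName()}")
    public void onMessage(MessageMock messageMock) {
        logger.info("[Received message][content: {}]", messageMock);
    }
}
```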
Unit Test
```java
package com.artisan.springkafka.produceTest;

import com.artisan.springkafka.SpringkafkaApplication;
import com.artisan.springkafka.producer.ArtisanProducerMock;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.support.SendResult;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.concurrent.ListenableFutureCallback;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @date 2021/2/17 22:40
 * @mark: show me the code , change the world
 */
@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringkafkaApplication.class)
public class ProduceMockTest {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private ArtisanProducerMock artisanProducerMock;

    @Test
    public void testAsynSend() throws ExecutionException, InterruptedException {
        logger.info("Start sending - broadcast mode test");
        artisanProducerMock.sendMsgASync().addCallback(new ListenableFutureCallback<SendResult<Object, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                logger.error("Send failed", throwable);
            }

            @Override
            public void onSuccess(SendResult<Object, Object> sendResult) {
                logger.info("Callback result: topic:[{}], partition:[{}], offset:[{}]",
                        sendResult.getRecordMetadata().topic(),
                        sendResult.getRecordMetadata().partition(),
                        sendResult.getRecordMetadata().offset());
            }
        });
        // Block so the JVM stays alive long enough for the listener to consume the message
        new CountDownLatch(1).await();
    }
}
```
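The final `new CountDownLatch(1).await()` deliberately blocks forever so the test JVM does not exit before the listener receives the message. If you prefer the test to finish on its own, a bounded wait is enough; a sketch (not in the original code, requires `java.util.concurrent.TimeUnit` in the imports):

```java
// Hypothetical alternative ending for testAsynSend(): wait a fixed time, then let the test finish.
new CountDownLatch(1).await(10, TimeUnit.SECONDS);
```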
Run the unit test multiple times and observe how the messages are received.
Test Results
As you can see, the consumers under different consumer groups (here, one consumer per group) all received the message. This is broadcast consumption.
Source Code
https://github.com/yangshangwei/boot2/tree/master/springkafkaBroadCast