1. Simulating a cross-region disaster-recovery TC cluster
We plan to start two Seata TC server nodes:
| Node name | IP address | Port | Cluster name |
| --------- | ---------- | ---- | ------------ |
| seata     | 127.0.0.1  | 8091 | GZ           |
| seata2    | 127.0.0.1  | 8092 | HZ           |
Earlier we already started one Seata server, on port 8091, in cluster GZ. Now make a copy of the seata directory, name it seata2, and modify seata2/conf/application.yml as follows:
```yaml
server:
  port: 7092
spring:
  application:
    name: seata-server
logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
console:
  user:
    username: seata
    password: seata
seata:
  config:
    # How the TC server reads its configuration. Here it is pulled from the
    # Nacos config center, so that a TC cluster can share one set of configuration.
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      namespace:
      group: SEATA_GROUP
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""
      data-id: seataServer.properties
  registry:
    # Registry for the TC server. Nacos is used here; eureka, zookeeper, etc. also work.
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: DEFAULT_GROUP
      namespace:
      cluster: HZ
      username: nacos
      password: nacos
  server:
    service-port: 8092 #If not configured, the default is '${server.port} + 1000'
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
```
Go into the seata2/bin directory and double-click seata-server.bat to start the second node. Note that server.port (7092) is the port of the web console; clients connect to the TC service port seata.server.service-port (8092), which is the port listed in the table above. Then open the Nacos console and look at the service list: seata-server now has two instances. Click into the details and you can see that one instance belongs to cluster GZ and the other to cluster HZ.
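If you prefer the command line to the console, the same check can be done with the Nacos Open API. This is a sketch, assuming the default address from the config above and that Nacos auth is disabled (with auth enabled you would first obtain a token from /nacos/v1/auth/login):

```bash
# List the seata-server instances registered in Nacos (Open API v1).
# Both nodes should appear: one in cluster GZ (port 8091), one in HZ (port 8092).
curl "http://127.0.0.1:8848/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=DEFAULT_GROUP"
```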
2. Publishing the transaction-group mapping to Nacos
Next, we need to publish the mapping between tx-service-group and the TC cluster to the Nacos config center. Create a new configuration there (Data ID client.properties in group SEATA_GROUP, which is what the microservices will read in step 3), with the following content:
```properties
# Transaction group to TC cluster mapping
service.vgroupMapping.seata-demo=GZ
service.enableDegrade=false
service.disableGlobalTransaction=false
# Transport settings for communicating with the TC server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
# Resource Manager (RM) settings
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
# Transaction Manager (TM) settings
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
# Undo-log settings
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
client.log.exceptionRate=100
```
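These properties can be created through the Nacos console; as an alternative, here is a sketch that pushes the same configuration with the Nacos Open API, assuming the content above has been saved locally as client.properties and that auth is disabled:

```bash
# Publish the file above as Data ID client.properties in group SEATA_GROUP.
# --data-urlencode name@file reads the file and URL-encodes it as the value.
curl -X POST "http://127.0.0.1:8848/nacos/v1/cs/configs" \
  --data-urlencode "dataId=client.properties" \
  --data-urlencode "group=SEATA_GROUP" \
  --data-urlencode "content@client.properties"
```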
3. Reading the Nacos configuration from the microservices
Next, modify each microservice's application.yml so that the service reads the client.properties file from Nacos:
```yaml
seata:
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      username: nacos
      password: nacos
      group: SEATA_GROUP
      data-id: client.properties
```
Also comment out the following block in the original file (leaving it in is fine too; it then only serves as a default and does not interfere with switching clusters dynamically):

```yaml
service:
  vgroup-mapping: # mapping between the transaction group and the TC cluster
    seata-demo: GZ
```
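Before restarting, you can check that the configuration is actually present in Nacos; a quick sketch using the Open API, under the same assumptions as above:

```bash
# Fetch client.properties back from Nacos; the response should echo the
# properties published in step 2, including service.vgroupMapping.seata-demo=GZ.
curl "http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=client.properties&group=SEATA_GROUP"
```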
Restart the microservices. From now on, whether a microservice connects to the GZ TC cluster or to the HZ TC cluster is decided entirely by client.properties in Nacos.
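Failing over from GZ to HZ is therefore just a configuration change. As a sketch, republish client.properties with the mapping pointed at HZ; note that publishing replaces the whole Data ID, so the content must be the full file with only this one line changed (shortened here for readability):

```bash
# Repoint transaction group seata-demo at the HZ cluster. Microservices that
# read client.properties from Nacos pick up the new mapping from the config center,
# which is what makes the disaster-recovery switch possible.
curl -X POST "http://127.0.0.1:8848/nacos/v1/cs/configs" \
  --data-urlencode "dataId=client.properties" \
  --data-urlencode "group=SEATA_GROUP" \
  --data-urlencode "content=service.vgroupMapping.seata-demo=HZ"
```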