Continuing from the previous article, "Building a Large Spring Cloud Alibaba Microservice Application Framework from 0 to 1 (Part 5): Seata Distributed Transactions (Part 2 of 3) — Complete Code and Examples for Distributed Transactions under ShardingSphere Read/Write Splitting and Sharding".
In the previous article we ran the happy path, deducting stock and generating an order, without Seata integrated. Now let's simulate the scenario in the diagram above, where the stock deduction succeeds but order creation fails. Since no distributed transaction is in place, the expected outcome is that stock is reduced while no order is created.
We reuse the business code written in the previous article, but this time manually throw an exception while creating the order, as sketched below.
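A minimal sketch of that change (the class, Feign client, and method names here are hypothetical stand-ins for the business code from the previous article):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderServiceImpl {

    @Autowired
    private GoodsFeignClient goodsFeignClient; // hypothetical Feign client for the inventory service

    public void createOrder(Long goodsId, Integer count) {
        // 1. Remote call: deduct stock in the inventory service -- this call succeeds
        goodsFeignClient.deductStock(goodsId, count);
        // 2. Manually throw before the order row is written, simulating a failure
        throw new RuntimeException("simulated failure while creating the order");
        // the order insert would follow here and is never reached
    }
}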
First, clear the order database and reset the inventory database, initializing the product with id 1 to a stock of 10.
Then call the endpoint with Postman.
Check the execution result.
The inventory service completed normally.
The order service threw an error.
The inventory service deducted 3 units of stock, but the order service created no order.
The result matches expectations: without a distributed transaction, the data has become inconsistent.
Integrating Seata AT into the services
Uploading the seata-server configuration to Nacos
For Seata to recognize one or more services, their configuration must first be uploaded to Nacos, which can be done with the nacos-config.sh script.
Then run the following command:
sh nacos-config.sh -h 127.0.0.1
-h specifies the IP address of the Nacos server.
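Depending on the Seata version, the script also accepts a few more switches (this is an assumption; verify against the usage text of your copy of nacos-config.sh): -p for the Nacos port, -g for the configuration group, -t for the namespace id, and -u/-w for credentials, for example:

sh nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -t your-namespace-id -u nacos -w nacos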
Next, create config.txt in the same directory as nacos-config.sh, with the following content:
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=true
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.mini-cloud-authentication-center_tx_group=default
service.vgroupMapping.mini-cloud-upms-center-biz_tx_group=default
service.vgroupMapping.mini-cloud-simulate-order-biz_tx_group=default
service.vgroupMapping.mini-cloud-simulate-goods-biz_tx_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
store.mode=db
store.lock.mode=db
store.session.mode=db
store.publicKey=
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://mini-cloud-mysql:3306/mini_cloud_seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.mode=sentinel
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=123456
store.redis.queryLimit=100
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
Two parts of this file deserve attention. The first is the transaction-group mapping for each service: the key format is generally the service name plus the suffix _tx_group, and the value default is the default seata-server cluster grouping. The second is the store mode: here store.mode=db, meaning the seata-server state is stored in MySQL.
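For reference, these are the relevant lines from config.txt above:

service.vgroupMapping.mini-cloud-simulate-order-biz_tx_group=default
service.vgroupMapping.mini-cloud-simulate-goods-biz_tx_group=default
store.mode=db
store.db.url=jdbc:mysql://mini-cloud-mysql:3306/mini_cloud_seata?useUnicode=true&rewriteBatchedStatements=true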
After the command finishes, you can see in the Nacos console that all the parameters have been uploaded.
Integrating Seata into the order service
Because we have integrated ShardingSphere read/write splitting and sharding, the plain Seata starter alone is not enough; the following Maven dependencies are required:
<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-transaction-base-seata-at</artifactId>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
</dependency>
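If no parent pom or BOM manages these artifacts, the versions must be pinned explicitly. A sketch, with illustrative version numbers that should be matched to your ShardingSphere and seata-server installations:

<dependency>
    <groupId>org.apache.shardingsphere</groupId>
    <artifactId>sharding-transaction-base-seata-at</artifactId>
    <version>4.1.1</version> <!-- illustrative version -->
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.4.2</version> <!-- illustrative version -->
</dependency>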
Add the following three files under the resources directory (the inventory service needs the same set):
file.conf
service {
  # Transaction group mapping; the group name must match the one the client declares
  vgroupMapping.mini-cloud-simulate-order-biz_tx_group = "default"
  # only support single node
  default.grouplist = "127.0.0.1:8091"
  # degrade currently not supported
  enableDegrade = false
  # disable
  disable = false
  disableGlobalTransaction = false
  # unit: ms, s, m, h, d represent milliseconds, seconds, minutes, hours, days; default permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}
registry.conf
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"
  loadBalance = "RandomLoadBalance"
  loadBalanceVirtualNodes = 10

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = ""
    cluster = "default"
    username = ""
    password = ""
  }
}
seata.conf
sharding {
  transaction.seata.at.enable = true
}

client {
  application.id = mini-cloud-simulate-order-biz
  transaction.service.group = mini-cloud-simulate-order-biz_tx_group
}
Integrating Seata into the inventory service
The Maven dependencies are the same as above.
file.conf
service {
  # Transaction group mapping; the group name must match the one the client declares
  vgroupMapping.mini-cloud-simulate-goods-biz_tx_group = "default"
  # only support single node
  default.grouplist = "127.0.0.1:8091"
  # degrade currently not supported
  enableDegrade = false
  # disable
  disable = false
  # disable seata
  disableGlobalTransaction = false
  # unit: ms, s, m, h, d represent milliseconds, seconds, minutes, hours, days; default permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}
registry.conf
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"
  loadBalance = "RandomLoadBalance"
  loadBalanceVirtualNodes = 10

  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = ""
    cluster = "default"
    username = ""
    password = ""
  }
}
seata.conf
sharding {
  transaction.seata.at.enable = true
}

client {
  application.id = mini-cloud-simulate-goods-biz
  transaction.service.group = mini-cloud-simulate-goods-biz_tx_group
}
Next, add a SeataFilter to both services to fix the XID propagation problem: without it, the global transaction id is not carried across the Feign call, so the callee would not join the caller's global transaction. The filter below binds an incoming XID on the server side, and the accompanying Feign interceptor attaches the current XID to outgoing requests.
SeataFilter.java
import java.io.IOException;

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;

import org.apache.commons.lang3.StringUtils; // assuming commons-lang3 on the classpath
import org.springframework.stereotype.Component;

import io.seata.core.context.RootContext;

@Component
public class SeataFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) servletRequest;
        // Read the global transaction id (XID) that the caller put into the request header
        String xid = req.getHeader(RootContext.KEY_XID.toLowerCase());
        boolean isBind = false;
        if (StringUtils.isNotBlank(xid)) {
            // Bind the XID to the current thread so local branches join the global transaction
            RootContext.bind(xid);
            isBind = true;
        }
        try {
            filterChain.doFilter(servletRequest, servletResponse);
        } finally {
            if (isBind) {
                // Always unbind, so the XID does not leak to other requests on this thread
                RootContext.unbind();
            }
        }
    }

    @Override
    public void destroy() {
    }
}
SeataXidRequestInterceptor.java
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.lang3.StringUtils; // assuming commons-lang3 on the classpath

import feign.RequestInterceptor;
import feign.RequestTemplate;
import io.seata.core.context.RootContext;

public class SeataXidRequestInterceptor implements RequestInterceptor {

    @Override
    public void apply(RequestTemplate template) {
        // Fetch the XID bound to the current thread, if any
        String xid = RootContext.getXID();
        if (StringUtils.isEmpty(xid)) {
            return;
        }
        // Propagate the XID to the downstream service as an HTTP header
        List<String> fescarXid = new ArrayList<>();
        fescarXid.add(xid);
        template.header(RootContext.KEY_XID, fescarXid);
        System.err.println("Propagating XID: " + fescarXid);
    }
}
SeataConfiguration.java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import feign.RequestInterceptor;

@Configuration
public class SeataConfiguration {

    // Register the interceptor so every outgoing Feign call carries the current XID
    @Bean
    public RequestInterceptor requestInterceptor() {
        return new SeataXidRequestInterceptor();
    }
}
That completes the configuration. Start the services and verify that they connect to the seata-server.
The log shows successful registration. There are multiple entries because we have several read/write-splitting data sources, and each one registers as a resource.
With Seata integrated, let's rerun the order-failure scenario simulated above.
Add the global transaction annotation to the order-service endpoint that initiates the call, then restart the service.
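The annotation in question is Seata's io.seata.spring.annotation.GlobalTransactional. A minimal sketch of the initiating endpoint (the controller, mapping path, and parameters are hypothetical; depending on the ShardingSphere version, the method may also need @ShardingTransactionType(TransactionType.BASE) so that local transactions are routed through ShardingSphere's Seata engine):

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderController {

    private final OrderServiceImpl orderService; // hypothetical service from the sketch above

    public OrderController(OrderServiceImpl orderService) {
        this.orderService = orderService;
    }

    // The initiator opens the global transaction; its XID is propagated to the
    // inventory service, whose branch commits or rolls back together with this one.
    @GlobalTransactional(rollbackFor = Exception.class)
    @PostMapping("/order/create")
    public String createOrder(@RequestParam Long goodsId, @RequestParam Integer count) {
        orderService.createOrder(goodsId, count);
        return "ok";
    }
}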
Reset the product stock to 10.
Execute the create-order endpoint again.
Check the results.
The order service throws an error and rolls back.
The inventory service rolls back with it.
Checking the database confirms the stock rolled back to 10; no stock was over-deducted.
This hands-on example proves that Seata AT can indeed enforce strong consistency for distributed transactions.