Elasticsearch version: 7.6.2. Kibana version: 7.6.2.

## Error 1: can not run elasticsearch as root

```
future versions of Elasticsearch will require Java 11; your Java version from [/usr/local/jdk1.8.0_291/jre] does not meet this requirement
[2022-06-21T15:21:40,405][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node-1] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:174) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:161) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:125) ~[elasticsearch-cli-7.6.2.jar:7.6.2]
	at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-7.6.2.jar:7.6.2]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:126) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-7.6.2.jar:7.6.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:349) ~[elasticsearch-7.6.2.jar:7.6.2]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:170) ~[elasticsearch-7.6.2.jar:7.6.2]
	... 6 more
uncaught exception in thread [main]
```

### Fix

Elasticsearch refuses to start under the root account, so run it under a dedicated user:

```bash
# create a group
groupadd es
# create an es user inside the es group
useradd es -g es
# set the es user's password
passwd es
# switch to the es user and start Elasticsearch
su es
./elasticsearch -d   # -d starts it in the background
```

## Error 2: startup fails for lack of file permissions

```
future versions of Elasticsearch will require Java 11; your Java version from [/usr/local/jdk1.8.0_291/jre] does not meet this requirement
Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elk/elasticsearch-7.6.2/config/jvm.options
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
	at java.nio.file.Files.newByteChannel(Files.java:361)
	at java.nio.file.Files.newByteChannel(Files.java:407)
	at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
	at java.nio.file.Files.newInputStream(Files.java:152)
	at org.elasticsearch.tools.launchers.JvmOptionsParser.main(JvmOptionsParser.java:64)
```

The cause is that the elasticsearch user has no permission on the installation directory. Fix it by handing ownership to that user:

```bash
# give the es user ownership of the installation directory
sudo chown -R es:es /usr/local/elasticsearch-7.6.2/
```

## Error 3: ERROR: [2] bootstrap checks failed

```
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```

### Fix for [1]: max file descriptors [4096] is too low

Edit `/etc/security/limits.conf` and add:

```
* soft nofile 65536
* hard nofile 131072
```

### Fix for [2]: vm.max_map_count [65530] is too low

```bash
su root                 # switch to root
vim /etc/sysctl.conf    # add the line: vm.max_map_count=655360
sysctl -p               # reload the settings after saving
su es                   # switch back to the es user and start Elasticsearch again
./elasticsearch
```

Verify the node is up:

```bash
[root@YDGY-AS12 ~]# curl 127.0.0.1:9200
{
  "name" : "YDGY-AS12",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "s0qC8rTIR92LwjqC5m9BNQ",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
```
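Before restarting, it can be worth confirming that the new limits actually apply to the es account. A quick check, assuming the `es` user created above (`limits.conf` only takes effect for fresh login sessions, hence `su - es`):

```bash
# file-descriptor limits as seen by a fresh es login (expect 65536 soft / 131072 hard)
su - es -c 'ulimit -Sn; ulimit -Hn'

# kernel mmap-count limit (expect 655360 after sysctl -p)
sysctl vm.max_map_count
```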
## Preface

Around last year we noticed a recurring production problem with Eureka: instances would hang ("play dead") and Eureka would not evict them. That prompted the idea of trying a different registry, which led us to Nacos — which conveniently also bundles a config center. We held off on migrating at the time for fear of destabilizing production or losing data, but every subsequent project used Nacos as its registry. In this article I simulate how to switch from Eureka to Nacos smoothly, with no downtime.

## Building the parent project

Inside the parent project I created one more layer of parents: `netflix-cloud` simulates the old microservices and `alibaba-cloud` the new ones.

Parent pom:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>top.fate</groupId>
    <artifactId>nacoAndEureka</artifactId>
    <packaging>pom</packaging>
    <version>1.0.0</version>
    <modules>
        <module>netflix-cloud</module>
        <module>alibaba-cloud</module>
    </modules>
</project>
```

## Simulating the old microservices

The `netflix-cloud` pom is below; since it simulates the legacy services, it pins the old versions throughout:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>nacoAndEureka</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>netflix-cloud</artifactId>
    <packaging>pom</packaging>
    <modules>
        <module>eureka</module>
        <module>eureka-provider</module>
        <module>eureka-consumer</module>
    </modules>
    <properties>
        <spring.boot.version>2.1.2.RELEASE</spring.boot.version>
        <spring.cloud.version>Greenwich.SR5</spring.cloud.version>
    </properties>
    <dependencyManagement>
        <dependencies>
            <!-- springBoot -->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-dependencies</artifactId>
                <version>${spring.boot.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <!-- springCloud -->
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring.cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
```

### Setting up eureka

pom dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>netflix-cloud</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>eureka</artifactId>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
        </dependency>
    </dependencies>
</project>
```

EurekaApplication startup class:

```java
package top.fate.eureka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@SpringBootApplication
public class EurekaApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaApplication.class, args);
    }
}
```

application.yml:

```yaml
server:
  port: 8761
spring:
  application:
    name: eureka-service
eureka:
  instance:
    # hostname of this registry node
    hostname: 127.0.0.1
  client:
    # this application is the registry itself, not an ordinary client,
    # so stop it from registering with itself
    register-with-eureka: false
    # no need to fetch other services; the registry's job is to maintain the instance list
    fetch-registry: false
    # location of the registry
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
```

### eureka-provider

pom dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>netflix-cloud</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>eureka-provider</artifactId>
    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
    </dependencies>
</project>
```

EurekaProviderApplication startup class:

```java
package top.fate.eurekaprovider;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 14:23
 */
@SpringBootApplication
@EnableEurekaClient
@RestController
public class EurekaProviderApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaProviderApplication.class, args);
    }

    @GetMapping("/info")
    public String info() {
        return "this is eureka-service";
    }
}
```

application.yml:

```yaml
server:
  port: 8081
spring:
  application:
    name: provider
eureka:
  client:
    service-url:
      defaultZone: "http://localhost:8761/eureka"
```

### eureka-consumer

pom dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>netflix-cloud</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>eureka-consumer</artifactId>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>io.github.openfeign</groupId>
            <artifactId>feign-httpclient</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>
    </dependencies>
</project>
```

EurekaConsumerApplication startup class:

```java
package top.fate.eurekaconsumer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
import top.fate.eurekaconsumer.client.EurekaProviderClient;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 14:43
 */
@SpringBootApplication
@EnableEurekaClient
@EnableFeignClients(clients = EurekaProviderClient.class)
public class EurekaConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaConsumerApplication.class, args);
    }
}
```

EurekaProviderClient:

```java
package top.fate.eurekaconsumer.client;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 14:48
 */
@FeignClient(value = "provider")
public interface EurekaProviderClient {

    @GetMapping("info")
    String info();
}
```

ConsumerController:

```java
package top.fate.eurekaconsumer.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import top.fate.eurekaconsumer.client.EurekaProviderClient;

import javax.annotation.Resource;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 14:48
 */
@RestController
public class ConsumerController {

    @Resource
    private EurekaProviderClient eurekaProviderClient;

    @GetMapping("getProvider")
    public String getProvider() {
        return eurekaProviderClient.info();
    }
}
```

### Verifying the call path

With all three services started normally, hit the consumer on port 8091. As the screenshot below shows, the consumer can reach the provider. (screenshot: consumer response)

### Phase-1 flow diagram

(figure: phase-1 flow — consumer → eureka → provider)

## Simulating the new microservices

The `alibaba-cloud` pom is below, on the latest stack:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>nacoAndEureka</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>alibaba-cloud</artifactId>
    <packaging>pom</packaging>
    <modules>
        <module>nacos-consumer</module>
        <module>nacos-provider</module>
    </modules>
    <properties>
        <spring.boot.version>2.6.3</spring.boot.version>
        <spring.cloud.version>2021.0.1</spring.cloud.version>
        <spring.cloud.alibaba.version>2021.0.1.0</spring.cloud.alibaba.version>
    </properties>
    <dependencyManagement>
        <dependencies>
            <!-- springBoot -->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-dependencies</artifactId>
                <version>${spring.boot.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <!-- springCloud -->
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring.cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <!-- spring-cloud-alibaba -->
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                <version>${spring.cloud.alibaba.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
```

### Installing and starting nacos

See SpringCloudAlibaba篇(二)整合Nacos注册配置中心 for installation; I won't repeat those steps here.

### nacos-provider

pom dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>alibaba-cloud</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>nacos-provider</artifactId>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
            <version>2021.0.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
    </dependencies>
</project>
```

NacosProviderApplication startup class:

```java
package top.fate.nacosprovider;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.client.serviceregistry.AutoServiceRegistrationProperties;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 16:55
 */
@SpringBootApplication
@RestController
@EnableConfigurationProperties(AutoServiceRegistrationProperties.class)
public class NacosProviderApplication {
    public static void main(String[] args) {
        SpringApplication.run(NacosProviderApplication.class, args);
    }

    @GetMapping("/info")
    public String info() {
        return "this is nacos-service";
    }
}
```

application.properties:

```properties
spring.autoconfigure.exclude=org.springframework.cloud.client.serviceregistry.AutoServiceRegistrationAutoConfiguration
```

application.yml:

```yaml
url:
  nacos: localhost:8848
server:
  port: 8082
spring:
  application:
    name: provider
  profiles:
    active: dev
  cloud:
    nacos:
      discovery:
        # cluster isolation
        cluster-name: shanghai
        # namespace
        namespace: ${spring.profiles.active}
        # true = ephemeral instance, false = persistent instance;
        # ephemeral instances are evicted on failure, persistent ones wait for recovery
        ephemeral: true
        # registry address
        server-addr: ${url.nacos}
eureka:
  client:
    service-url:
      defaultZone: "http://localhost:8761/eureka"
```

### nacos-consumer

pom dependencies:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>alibaba-cloud</artifactId>
        <groupId>top.fate</groupId>
        <version>1.0.0</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <artifactId>nacos-consumer</artifactId>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
            <version>2021.0.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
        </dependency>
        <dependency>
            <groupId>io.github.openfeign</groupId>
            <artifactId>feign-httpclient</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>
    </dependencies>
</project>
```

NacosConsumerApplication startup class:

```java
package top.fate.nacosconsumer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.cloud.client.serviceregistry.AutoServiceRegistrationProperties;
import org.springframework.cloud.openfeign.EnableFeignClients;
import top.fate.nacosconsumer.client.EurekaProviderClient;
import top.fate.nacosconsumer.client.NacosProviderClient;
import top.fate.nacosconsumer.client.ProviderClient;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 16:39
 */
@SpringBootApplication
@EnableFeignClients(clients = {EurekaProviderClient.class, NacosProviderClient.class, ProviderClient.class})
@EnableConfigurationProperties(AutoServiceRegistrationProperties.class)
public class NacosConsumerApplication {
    public static void main(String[] args) {
        SpringApplication.run(NacosConsumerApplication.class, args);
    }
}
```

ProviderClient:

```java
package top.fate.nacosconsumer.client;

import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 18:24
 */
@FeignClient(value = "provider")
public interface ProviderClient {

    @GetMapping("info")
    String info();
}
```

ConsumerController:

```java
package top.fate.nacosconsumer.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import top.fate.nacosconsumer.client.ProviderClient;

import javax.annotation.Resource;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/16 14:48
 */
@RestController
public class ConsumerController {

    @Resource
    private ProviderClient providerClient;

    @GetMapping("getProvider")
    public String getProvider() {
        return providerClient.info();
    }
}
```

application.properties:

```properties
spring.autoconfigure.exclude=org.springframework.cloud.client.serviceregistry.AutoServiceRegistrationAutoConfiguration
```

application.yml:

```yaml
url:
  nacos: localhost:8848
server:
  port: 8092
spring:
  application:
    name: nacos-consumer
  profiles:
    active: dev
  cloud:
    nacos:
      discovery:
        # cluster isolation
        cluster-name: shanghai
        # namespace
        namespace: ${spring.profiles.active}
        # true = ephemeral instance, false = persistent instance;
        # ephemeral instances are evicted on failure, persistent ones wait for recovery
        ephemeral: true
        # registry address
        server-addr: ${url.nacos}
eureka:
  client:
    service-url:
      defaultZone: "http://localhost:8761/eureka"
```

## Bringing the new dual-registering, dual-subscribing provider online

Start NacosProviderApplication first. As the console screenshots below show, dual registration works: the provider is now registered in both Nacos and Eureka. (screenshots: Nacos console, Eureka console)

## Switching registries smoothly

### Verifying the old consumer

Hit the old Netflix client on 8091 (eureka-consumer) to see whether it calls the 8081 Eureka provider or the 8082 Nacos provider. I called it a dozen or so times, and the responses alternated between:

```
this is nacos-service
this is eureka-service
```

The 8091 client only has a Eureka client, and the provider now has two instances registered in Eureka, so load balancing kicks in — round-robin by default. (figure: current flow)

### Taking the old provider offline

Now the smooth switch can begin: shut down the old provider. (figure: flow after shutdown) From this point the old consumer only ever returns `this is nacos-service`, because the old provider is gone. The new provider's migration is complete!

## Bringing the new dual-registration consumer online, retiring the old consumer

Start NacosConsumerApplication, hit port 8092 and call the getProvider endpoint to confirm it responds normally, as the screenshot below shows. Then the old consumer can be taken offline. (screenshot: 8092 response)

## A side question (feel free to skip this step)

With two registries in play, does service discovery go through Eureka or Nacos? To find out, I ran an experiment: I started the old provider, the new provider, and the new consumer. The registries then held:

- Eureka: provider 8081, provider 8082, consumer 8092
- Nacos: provider 8082, consumer 8092

Requests through the consumer 8092 client only ever returned `this is nacos-service`, so I concluded discovery defaults to Nacos: Nacos holds a single instance, which can only produce `this is nacos-service`, whereas going through Eureka's two instances would have returned `this is nacos-service` and `this is eureka-service` in rotation.

(figure: flow at this point; dashed lines mark idle paths)

Digging into the source, I set a breakpoint on CompositeDiscoveryClient during a call and found three DiscoveryClients created, with Nacos first in the list; if it is available it is returned straight away, so Nacos is effectively the default. (screenshot: debugger view)

That reminded me of Nacos's service-offline feature: if I take the instance offline in the Nacos console, the call should fall back to Eureka. A few seconds later, requests through consumer 8092 returned exactly what I hoped for — `this is nacos-service` and `this is eureka-service` in rotation — proving traffic had moved to Eureka.

(figure: flow at this point; dashed lines mark idle paths)

## Finally

Production now runs the new consumer, the new provider, Eureka, and Nacos. Since the goal is Nacos, Eureka has to go as well: in the next release of each service, remove the Eureka dependency and configuration and keep only Nacos; once that version is deployed, Eureka can be shut down, as shown in the diagram below (and sketched after this section). (figure: final topology)

## Caveat

Pulling in eureka-client and nacos-client together fails with:

```
Field autoServiceRegistration in org.springframework.cloud.client.serviceregistry.AutoServiceRegistrationAutoConfiguration required a single bean, but 2 were found:
	- nacosAutoServiceRegistration: defined by method 'nacosAutoServiceRegistration' in class path resource [com/alibaba/cloud/nacos/registry/NacosServiceRegistryAutoConfiguration.class]
	- eurekaAutoServiceRegistration: defined by method 'eurekaAutoServiceRegistration' in class path resource [org/springframework/cloud/netflix/eureka/EurekaClientAutoConfiguration.class]
```

Add this to the configuration file:

```properties
spring.autoconfigure.exclude=org.springframework.cloud.client.serviceregistry.AutoServiceRegistrationAutoConfiguration
```

and this annotation to the startup class:

```java
@EnableConfigurationProperties(AutoServiceRegistrationProperties.class)
```

Original writing isn't easy — please leave a like before you go! Thanks!
## The error

```
org.springframework.context.ApplicationContextException: Failed to start bean 'documentationPluginsBootstrapper'; nested exception is java.lang.NullPointerException
```

## Environment

- springboot 2.6.3
- springfox 3.0.0
- jdk 1.8

Springfox assumes Spring MVC's path-matching strategy is the Ant path matcher, while Spring Boot 2.6.x changed the default to the PathPattern-based matcher (`path_pattern_parser`), which causes the error above.

I tried `spring.mvc.pathmatch.matching-strategy: ant_path_matcher` — the fix most posts online suggest — but it did not take effect.

## My solution (based on a GitHub thread)

### Method 1

```yaml
spring:
  mvc:
    pathmatch:
      matching-strategy: ANT_PATH_MATCHER
```

Add the dependency:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

Register this bean:

```java
@Bean
public WebMvcEndpointHandlerMapping webEndpointServletHandlerMapping(
        WebEndpointsSupplier webEndpointsSupplier, ServletEndpointsSupplier servletEndpointsSupplier,
        ControllerEndpointsSupplier controllerEndpointsSupplier, EndpointMediaTypes endpointMediaTypes,
        CorsEndpointProperties corsProperties, WebEndpointProperties webEndpointProperties,
        Environment environment) {
    List<ExposableEndpoint<?>> allEndpoints = new ArrayList<>();
    Collection<ExposableWebEndpoint> webEndpoints = webEndpointsSupplier.getEndpoints();
    allEndpoints.addAll(webEndpoints);
    allEndpoints.addAll(servletEndpointsSupplier.getEndpoints());
    allEndpoints.addAll(controllerEndpointsSupplier.getEndpoints());
    String basePath = webEndpointProperties.getBasePath();
    EndpointMapping endpointMapping = new EndpointMapping(basePath);
    boolean shouldRegisterLinksMapping = webEndpointProperties.getDiscovery().isEnabled()
            && (org.springframework.util.StringUtils.hasText(basePath)
            || ManagementPortType.get(environment).equals(ManagementPortType.DIFFERENT));
    return new WebMvcEndpointHandlerMapping(endpointMapping, webEndpoints, endpointMediaTypes,
            corsProperties.toCorsConfiguration(), new EndpointLinksResolver(allEndpoints, basePath),
            shouldRegisterLinksMapping, null);
}
```

### Method 2

```java
// imports needed by this snippet
import java.lang.reflect.Field;
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.context.annotation.Bean;
import org.springframework.util.ReflectionUtils;
import org.springframework.web.servlet.mvc.method.RequestMappingInfoHandlerMapping;
import springfox.documentation.spring.web.plugins.WebFluxRequestHandlerProvider;
import springfox.documentation.spring.web.plugins.WebMvcRequestHandlerProvider;

@Bean
public static BeanPostProcessor springfoxHandlerProviderBeanPostProcessor() {
    return new BeanPostProcessor() {
        @Override
        public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
            if (bean instanceof WebMvcRequestHandlerProvider || bean instanceof WebFluxRequestHandlerProvider) {
                customizeSpringfoxHandlerMappings(getHandlerMappings(bean));
            }
            return bean;
        }

        // keep only the mappings springfox can handle (those not using the new PathPatternParser)
        private <T extends RequestMappingInfoHandlerMapping> void customizeSpringfoxHandlerMappings(List<T> mappings) {
            List<T> copy = mappings.stream()
                    .filter(mapping -> mapping.getPatternParser() == null)
                    .collect(Collectors.toList());
            mappings.clear();
            mappings.addAll(copy);
        }

        @SuppressWarnings("unchecked")
        private List<RequestMappingInfoHandlerMapping> getHandlerMappings(Object bean) {
            try {
                Field field = ReflectionUtils.findField(bean.getClass(), "handlerMappings");
                field.setAccessible(true);
                return (List<RequestMappingInfoHandlerMapping>) field.get(bean);
            } catch (IllegalArgumentException | IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
    };
}
```

## Closing

Just use method 2; method 1 was my initial workaround. Original writing isn't easy — if this helped, please leave a like before you go!
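For context, the NPE surfaces while springfox scans the application's request mappings at startup, so even a minimal setup can reproduce it on Boot 2.6.x. A sketch of such a minimal springfox 3.0.0 configuration (the base package `top.fate` is a placeholder):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;

@Configuration
public class SwaggerConfig {

    // scans controllers under the given package and documents them via OpenAPI 3
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.OAS_30)
                .select()
                .apis(RequestHandlerSelectors.basePackage("top.fate"))
                .paths(PathSelectors.any())
                .build();
    }
}
```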
MyBatis-Plus multi-datasource plugin (see the plugin's source repository).

## Preface

dynamic-datasource-spring-boot-starter is a Spring Boot starter for quickly integrating multiple data sources. It supports JDK 1.7+ and Spring Boot 1.4.x / 1.5.x / 2.x.x, and the official documentation is free.

## Features

- Datasource grouping, covering scenarios such as plain multi-database, read/write splitting, one-primary-many-replicas, and mixed modes.
- Encryption of sensitive database settings via ENC().
- Independent schema and database initialization per datasource.
- Startup with no datasource at all, plus lazy-loaded datasources (connections created only when needed).
- Custom annotations; extend @DS (3.2.0+).
- Simplified, fast integration with Druid, HikariCP, BeeCP, and DBCP2.
- Integration recipes for MyBatis-Plus, Quartz, ShardingJdbc, P6spy, JNDI, and more.
- Custom datasource providers (e.g. loading all datasources from a database).
- Dynamically adding and removing datasources after startup.
- Pure read/write splitting under MyBatis.
- SpEL-driven dynamic datasource resolution; spel, session, and header resolvers are built in, with custom resolvers supported.
- Nested datasource switching across layers (ServiceA >>> ServiceB >>> ServiceC).
- Seata-based distributed transactions.
- Local multi-datasource transactions (note: cannot be mixed with native Spring transactions).

## Preparing the databases

As shown below, I have two databases, cloud_order and cloud_user, each containing a table with the same structure. (screenshots: cloud_order.dynamic data, cloud_user.dynamic data)

## Integrating MP with Spring Boot

### 1. Create the project

### 2. Add dependencies

Spring Boot 2.6.3 and MP 3.5.1 are used here:

```xml
<parent>
    <artifactId>spring-boot-starter-parent</artifactId>
    <groupId>org.springframework.boot</groupId>
    <version>2.6.3</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<packaging>jar</packaging>
<artifactId>Mp-Dynamic-datasource-SpringBoot2.6.x</artifactId>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Druid -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
        <version>1.1.22</version>
    </dependency>
    <!-- MySQL -->
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.16</version>
    </dependency>
    <!-- MyBatis-Plus -->
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-boot-starter</artifactId>
        <version>3.5.1</version>
    </dependency>
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>dynamic-datasource-spring-boot-starter</artifactId>
        <version>3.5.1</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.22</version>
    </dependency>
</dependencies>
```

### 3. Configuration file

```yaml
server:
  port: 8081
spring:
  application:
    name: dynamic-datasource
  datasource:
    dynamic:
      primary: order
      datasource:
        order:
          url: jdbc:mysql://127.0.0.1:3306/cloud_order?serverTimezone=GMT%2B8&useUnicode=true&characterEncoding=utf-8
          username: root
          password: 123456
          driver-class-name: com.mysql.cj.jdbc.Driver
        user:
          url: jdbc:mysql://127.0.0.1:3306/cloud_user?serverTimezone=GMT%2B8&useUnicode=true&characterEncoding=utf-8
          username: root
          password: 123456
          driver-class-name: com.mysql.cj.jdbc.Driver

# MyBatis-Plus settings
mybatis-plus:
  # XML scan: multiple directories separated by commas or semicolons (tells MP where the Mapper XML files are)
  mapper-locations: classpath:mapper/*/*.xml
  # everything below has defaults and may be omitted
  global-config:
    db-config:
      # primary-key type: auto = database auto-increment; 1 = user-supplied id;
      # 2 = globally unique numeric id; 3 = globally unique UUID
      id-type: auto
      # field strategy: IGNORED = no check, NOT_NULL = non-null check, NOT_EMPTY = non-empty check
      field-strategy: NOT_EMPTY
      # database type
      db-type: MYSQL
  configuration:
    # map snake_case columns to camelCase Java properties automatically
    map-underscore-to-camel-case: true
    # skip mapping a column when its value in the result set is null
    call-setters-on-nulls: true
    # print executed SQL; useful in dev and test
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
```

### 4. Generated code

entity:

```java
package top.fate.entity;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import lombok.Getter;
import lombok.Setter;

import java.io.Serializable;

/**
 * @author fate急速出击
 * @since 2022-06-08
 */
@Getter
@Setter
public class Dynamic implements Serializable {

    private static final long serialVersionUID = 1L;

    @TableId(value = "id", type = IdType.AUTO)
    private Long id;

    private String context;
}
```

mapper:

```java
package top.fate.mapper;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import top.fate.entity.Dynamic;

/**
 * <p>
 * Mapper interface
 * </p>
 *
 * @author fate急速出击
 * @since 2022-06-08
 */
public interface DynamicMapper extends BaseMapper<Dynamic> {
}
```

DynamicMapper.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="top.fate.mapper.DynamicMapper">
</mapper>
```

service:

```java
package top.fate.service;

import com.baomidou.mybatisplus.extension.service.IService;
import top.fate.entity.Dynamic;

/**
 * <p>
 * Service interface
 * </p>
 *
 * @author fate急速出击
 * @since 2022-06-08
 */
public interface IDynamicService extends IService<Dynamic> {
}
```

serviceimpl:

```java
package top.fate.service.impl;

import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;
import org.springframework.stereotype.Service;
import top.fate.entity.Dynamic;
import top.fate.mapper.DynamicMapper;
import top.fate.service.IDynamicService;

/**
 * <p>
 * Service implementation
 * </p>
 *
 * @author fate急速出击
 * @since 2022-06-08
 */
@Service
public class DynamicServiceImpl extends ServiceImpl<DynamicMapper, Dynamic> implements IDynamicService {
}
```

### Testing

To save time I switch datasources right in the controller. Don't copy this — I was just in a hurry.

```java
package top.fate.controller;

import com.baomidou.dynamic.datasource.annotation.DS;
import lombok.AllArgsConstructor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import top.fate.entity.Dynamic;
import top.fate.service.IDynamicService;

import java.util.List;

/**
 * Controller
 *
 * @author fate急速出击
 * @since 2022-06-08
 */
@RestController
@AllArgsConstructor
@RequestMapping("/dynamic")
public class DynamicController {

    private IDynamicService dynamicService;

    @DS("user")
    @GetMapping("list1")
    public List<Dynamic> list1() {
        return dynamicService.list();
    }

    @DS("order")
    @GetMapping("list2")
    public List<Dynamic> list2() {
        return dynamicService.list();
    }
}
```

Visit http://localhost:8081/dynamic/list1 and http://localhost:8081/dynamic/list2. (screenshots: list1 and list2 responses)

## Closing

Original writing isn't easy — please leave a like before you go!
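Since switching in the controller is only for demo speed, here is a sketch of the more conventional placement — `@DS` on the service layer (the class below is hypothetical; method-level `@DS` overrides the class-level annotation):

```java
package top.fate.service.impl;

import com.baomidou.dynamic.datasource.annotation.DS;
import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl;
import org.springframework.stereotype.Service;
import top.fate.entity.Dynamic;
import top.fate.mapper.DynamicMapper;

import java.util.List;

@Service
@DS("order") // class-level default: every method uses the order datasource
public class DynamicQueryService extends ServiceImpl<DynamicMapper, Dynamic> {

    public List<Dynamic> listFromOrder() {
        return list(); // inherits the class-level @DS("order")
    }

    @DS("user") // method-level annotation takes precedence over the class-level one
    public List<Dynamic> listFromUser() {
        return list();
    }
}
```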
## 1. Download and install (only Elasticsearch and Kibana are needed)

For download and installation, see Springboot/Springcloud整合ELK平台,(Filebeat方式)日志采集及管理(Elasticsearch+Logstash+Filebeat+Kibana) and the elastic 中文社区 download page.

I use Elasticsearch 7.6.2 here because the project runs Spring Boot 2.3.x; to avoid pairing a low-version client with a high-version cluster, I fall back to the lower-version cluster.

### Plugin installation

- ik analyzer
- ingest-attachment

Adjust the download links to your own version. Once downloaded, unzip each package into Elasticsearch's plugins directory, then restart Elasticsearch.

### Define the text-extraction pipeline

```
PUT /_ingest/pipeline/attachment
{
  "description": "Extract attachment information",
  "processors": [
    {
      "attachment": {
        "field": "data",
        "indexed_chars": -1,
        "ignore_missing": true
      }
    },
    {
      "remove": { "field": "data" }
    }
  ]
}
```

## 2. Integrating Elasticsearch with Spring Boot

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.58</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.20</version>
    </dependency>
</dependencies>
```

application.yml:

```yaml
server:
  port: 9090
spring:
  application:
    name: elasticsearch-service
  elasticsearch:
    rest:
      uris: http://127.0.0.1:9200
```

Entity class:

```java
package top.fate.entity;

import lombok.Data;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2020/11/2 14:15
 */
@Data
@Document(indexName = "filedata")
public class FileData {

    @Field(type = FieldType.Keyword)
    private String filePk;

    @Field(type = FieldType.Keyword)
    private String fileName;

    @Field(type = FieldType.Keyword)
    private Integer page;

    @Field(type = FieldType.Keyword)
    private String departmentId;

    @Field(type = FieldType.Keyword)
    private String ljdm;

    @Field(type = FieldType.Text, analyzer = "ik_max_word")
    private String data;

    @Field(type = FieldType.Keyword)
    private String realName;

    @Field(type = FieldType.Keyword)
    private String url;

    @Field(type = FieldType.Keyword)
    private String type;
}
```

Controller:

```java
package top.fate.controller;

import com.alibaba.fastjson.JSON;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;
import org.springframework.data.elasticsearch.core.IndexOperations;
import org.springframework.data.elasticsearch.core.document.Document;
import org.springframework.data.elasticsearch.core.mapping.IndexCoordinates;
import org.springframework.util.Base64Utils;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import top.fate.entity.FileData;

import java.io.File;
import java.io.FileInputStream;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2022/6/1 16:33
 */
@RestController
@RequestMapping(value = "fullTextSearch")
public class FullTextSearchController {

    @Autowired
    private ElasticsearchRestTemplate elasticsearchRestTemplate;
    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @GetMapping("createIndex")
    public void add() {
        IndexOperations indexOperations = elasticsearchRestTemplate.indexOps(IndexCoordinates.of("testindex"));
        indexOperations.create();
        Document mapping = indexOperations.createMapping(FileData.class);
        indexOperations.putMapping(mapping);
    }

    @GetMapping("deleteIndex")
    public void deleteIndex() {
        IndexOperations indexOperations = elasticsearchRestTemplate.indexOps(FileData.class);
        indexOperations.delete();
    }

    @GetMapping("uploadFileToEs")
    public void uploadFileToEs() {
        try {
            // File file = new File("D:\\desktop\\Java开发工程师-4年-王晓龙-2022-05.pdf");
            File file = new File("D:\\desktop\\Java开发工程师-4年-王晓龙-2022-05.docx");
            FileInputStream inputFile = new FileInputStream(file);
            byte[] buffer = new byte[(int) file.length()];
            inputFile.read(buffer);
            inputFile.close();
            // base64-encode the file
            String fileString = Base64Utils.encodeToString(buffer);
            FileData fileData = new FileData();
            fileData.setFileName(file.getName());
            fileData.setFilePk(file.getName());
            fileData.setData(fileString);
            IndexRequest indexRequest = new IndexRequest("testindex").id(fileData.getFilePk());
            indexRequest.source(JSON.toJSONString(fileData), XContentType.JSON);
            indexRequest.setPipeline("attachment");
            IndexResponse index = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
            return;
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @GetMapping("search")
    public Object search(@RequestParam("txt") String txt) {
        List list = new ArrayList();
        try {
            SearchRequest searchRequest = new SearchRequest("testindex");
            SearchSourceBuilder builder = new SearchSourceBuilder();
            builder.query(QueryBuilders.matchQuery("attachment.content", txt).analyzer("ik_max_word"));
            searchRequest.source(builder);
            // track the real total hit count
            builder.trackTotalHits(true);
            // highlighting
            HighlightBuilder highlightBuilder = new HighlightBuilder();
            highlightBuilder.field("attachment.content");
            highlightBuilder.requireFieldMatch(false); // allow highlights across multiple fields
            highlightBuilder.preTags("<span style='color:red'>");
            highlightBuilder.postTags("</span>");
            builder.highlighter(highlightBuilder);
            SearchResponse search = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
            if (search.getHits() != null) {
                for (SearchHit documentFields : search.getHits().getHits()) {
                    Map<String, HighlightField> highlightFields = documentFields.getHighlightFields();
                    HighlightField title = highlightFields.get("attachment.content");
                    Map<String, Object> sourceAsMap = documentFields.getSourceAsMap();
                    if (title != null) {
                        Text[] fragments = title.fragments();
                        String n_title = "";
                        for (Text fragment : fragments) {
                            n_title += fragment;
                        }
                        sourceAsMap.put("data", n_title);
                    }
                    list.add(dealObject(sourceAsMap, FileData.class));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return list;
    }

    /*public static void ignoreSource(Map<String, Object> map) {
        for (String key : IGNORE_KEY) {
            map.remove(key);
        }
    }*/

    public static <T> T dealObject(Map<String, Object> sourceAsMap, Class<T> clazz) {
        try {
            // ignoreSource(sourceAsMap);
            Iterator<String> keyIterator = sourceAsMap.keySet().iterator();
            T t = clazz.newInstance();
            while (keyIterator.hasNext()) {
                String key = keyIterator.next();
                String replaceKey = key.replaceFirst(key.substring(0, 1), key.substring(0, 1).toUpperCase());
                Method method = null;
                try {
                    method = clazz.getMethod("set" + replaceKey, sourceAsMap.get(key).getClass());
                } catch (NoSuchMethodException e) {
                    continue;
                }
                method.invoke(t, sourceAsMap.get(key));
            }
            return t;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return null;
    }
}
```

## Testing

- Create the index: localhost:9090/fullTextSearch/createIndex
- Upload a document: localhost:9090/fullTextSearch/uploadFileToEs
- Search: localhost:9090/fullTextSearch/search?txt=索引库
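Before going through the upload endpoint, the attachment pipeline can be sanity-checked on its own with Elasticsearch's simulate API, e.g. from the Kibana console. A sketch (`dGhpcyBpcyBhIHRlc3Q=` is simply "this is a test" in base64, so `attachment.content` in the response should contain that text):

```
POST /_ingest/pipeline/attachment/_simulate
{
  "docs": [
    {
      "_source": {
        "data": "dGhpcyBpcyBhIHRlc3Q="
      }
    }
  ]
}
```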
前言通常微服务的认证和授权思路有两种:网关只负责转发请求,认证鉴权交给每个微服务控制统一在网关层面认证鉴权,微服务只负责业务第二种方案的流程图采用技术栈父工程依赖及统一版本附:父工程依赖 <packaging>pom</packaging> <properties> <fate.project.version>1.0.0</fate.project.version> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <java.version>1.8</java.version> <maven.plugin.version>3.8.1</maven.plugin.version> <spring.boot.version>2.6.3</spring.boot.version> <spring-cloud.version>2021.0.1</spring-cloud.version> <spring-cloud-alibaba.version>2021.0.1.0</spring-cloud-alibaba.version> <alibaba.nacos.version>1.4.2</alibaba.nacos.version> <alibaba.sentinel.version>1.8.3</alibaba.sentinel.version> <alibaba.dubbo.version>2.7.15</alibaba.dubbo.version> <alibaba.rocketMq.version>4.9.2</alibaba.rocketMq.version> <alibaba.seata.version>1.4.2</alibaba.seata.version> <mybatis.plus.version>3.5.1</mybatis.plus.version> <knife4j.version>3.0.2</knife4j.version> <swagger.version>3.0.0</swagger.version> </properties> <dependencyManagement> <dependencies> <!-- springBoot --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-dependencies</artifactId> <version>${spring.boot.version}</version> <type>pom</type> <scope>import</scope> </dependency> <!-- springCloud --> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-boot-starter</artifactId> <version>3.0.0</version> </dependency> </dependencies>1. 
搭建Oauth2-server1.1 oauth2-server 依赖引用版本可查看上面的父工程依赖<dependencies> <dependency> <groupId>top.fate</groupId> <artifactId>fate-common</artifactId> <version>1.0.0</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <!--排除logback--> <exclusions> <exclusion> <artifactId>spring-boot-starter-logging</artifactId> <groupId>org.springframework.boot</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <!--添加log4j2--> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> </dependency> <!--SpringBoot2.4.x之后默认不加载bootstrap.yml文件,需要在pom里加上依赖--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-bootstrap</artifactId> </dependency> <!-- Nacos --> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> <exclusions> <exclusion> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-client</artifactId> </exclusion> </exclusions> <version>${spring-cloud-alibaba.version}</version> </dependency> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId> <exclusions> <exclusion> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-client</artifactId> </exclusion> </exclusions> <version>${spring-cloud-alibaba.version}</version> </dependency> <dependency> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-client</artifactId> <version>${alibaba.nacos.version}</version> </dependency> <!-- Druid --> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid-spring-boot-starter</artifactId> <version>1.1.22</version> </dependency> <!-- MySQL --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>8.0.16</version> </dependency> <!-- MyBatis-Plus--> <dependency> <groupId>com.baomidou</groupId> <artifactId>mybatis-plus-boot-starter</artifactId> <version>${mybatis.plus.version}</version> </dependency> <!-- zipkin --> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zipkin</artifactId> <version>2.2.8.RELEASE</version> </dependency> <!-- redis--> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency> <dependency> <groupId>com.nimbusds</groupId> <artifactId>nimbus-jose-jwt</artifactId> <version>9.9.3</version> </dependency> <!--security --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-security</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-oauth2</artifactId> <version>2.2.5.RELEASE</version> </dependency> <dependency> <groupId>cn.hutool</groupId> <artifactId>hutool-all</artifactId> <version>5.8.0</version> </dependency> </dependencies>1.2 bootstrap.ymlserver: port: 9999 url: nacos: localhost:8848 spring: application: name: oauth2-server #实例名 profiles: active: dev cloud: nacos: discovery: #集群环境隔离 cluster-name: shanghai #命名空间 namespace: ${spring.profiles.active} #持久化实例 ture为临时实例 false为持久化实例 临时实例发生异常直接剔除, 而持久化实例等待恢复 ephemeral: true #注册中心地址 server-addr: ${url.nacos} config: namespace: ${spring.profiles.active} file-extension: yaml #配置中心地址 server-addr: ${url.nacos} extension-configs[0]: data-id: mysql-oauth2.yaml group: DEFAULT_GROUP refresh: false 
extension-configs[1]: data-id: log.properties group: DEFAULT_GROUP refresh: false extension-configs[2]: data-id: zipkin.yaml group: DEFAULT_GROUP refresh: false extension-configs[3]: data-id: mybatis-plus.yaml group: DEFAULT_GROUP refresh: false extension-configs[4]: data-id: redis.yaml group: DEFAULT_GROUP refresh: falsemysql-oauth2.yamlspring: datasource: url: jdbc:mysql://localhost:3306/oauth?useSSL=false&allowPublicKeyRetrieval=true username: root password: 123456 driver-class-name: com.mysql.cj.jdbc.Driverlog.propertieslogging.level.root=infozipkin.yamlspring: zipkin: base-url: http://127.0.0.1:9411 sender: type: web sleuth: sampler: probability: 1.0mybatis-plus.yamlmybatis-plus: mapper-locations: classpath:mapper/*/*.xml,mapper/*.xml global-config: db-config: id-type: auto field-strategy: NOT_EMPTY db-type: MYSQL configuration: map-underscore-to-camel-case: true call-setters-on-nulls: true #log-impl: org.apache.ibatis.logging.stdout.StdOutImplredis.yamlspring: redis: host: localhost port: 63791.3 keytool生成RSA证书在jdk/bin目录下执行该命令,生成jks文件之后复制到项目中resources中keytool -genkey -alias jwt -keyalg RSA -keystore jwt.jks -keypass 1234561.4 SysUserServiceImpl 用户信息实现类实现Spring Security的UserDetailsService接口,用于加载用户信息package top.fate.service.impl; import cn.hutool.core.util.ArrayUtil; import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.core.authority.AuthorityUtils; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.core.userdetails.UsernameNotFoundException; import org.springframework.stereotype.Service; import top.fate.domain.SecurityUser; import java.util.ArrayList; import java.util.List; import java.util.Objects; /** * <p> * 用户信息表 服务实现类 * </p> * * @author fate急速出击 * @since 2022-05-13 */ @Service public class SysUserServiceImpl implements UserDetailsService { @Override public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { return SecurityUser.builder() .userId(UUID.randomUUID().toString().replaceAll("-","")) .username("admin") .password(new BCryptPasswordEncoder().encode("123456")) .authorities(AuthorityUtils.createAuthorityList("user", "admin")) .build(); } }SecurityUser 用户封装类package top.fate.domain; import lombok.AllArgsConstructor; import lombok.Builder; import lombok.Data; import lombok.NoArgsConstructor; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.userdetails.UserDetails; import java.util.Collection; /** * 存储用户的详细信息,实现UserDetails,后续有定制的字段可以自己拓展 * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2022/5/13 11:36 */ @Data @Builder @AllArgsConstructor @NoArgsConstructor public class SecurityUser implements UserDetails { private String userId; //用户名 private String username; //密码 private String password; //权限+角色集合 private Collection<? extends GrantedAuthority> authorities; @Override public Collection<? 
extends GrantedAuthority> getAuthorities() { return authorities; } @Override public String getPassword() { return password; } @Override public String getUsername() { return username; } // 账户是否未过期 @Override public boolean isAccountNonExpired() { return true; } // 账户是否未被锁 @Override public boolean isAccountNonLocked() { return true; } @Override public boolean isCredentialsNonExpired() { return true; } @Override public boolean isEnabled() { return true; } }## 1.5 JWT内容增强器package top.fate.component;import org.springframework.security.oauth2.common.DefaultOAuth2AccessToken;import org.springframework.security.oauth2.common.OAuth2AccessToken;import org.springframework.security.oauth2.provider.OAuth2Authentication;import org.springframework.security.oauth2.provider.token.TokenEnhancer;import org.springframework.stereotype.Component;import top.fate.domain.SecurityUser;import java.util.HashMap;import java.util.Map;/**JWT内容增强器@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/13 11:32*/@Componentpublic class JwtTokenEnhancer implements TokenEnhancer {@Override public OAuth2AccessToken enhance(OAuth2AccessToken accessToken, OAuth2Authentication authentication) { SecurityUser securityUser = (SecurityUser) authentication.getPrincipal(); Map<String, Object> info = new HashMap<>(); //把用户ID设置到JWT中 info.put("id", securityUser.getUserId()); ((DefaultOAuth2AccessToken) accessToken).setAdditionalInformation(info); return accessToken; }}## 1.6 Oauth2ServerConfig 认证服务器配置 > 加载用户信息的服务UserServiceImpl及RSA的钥匙对KeyPairpackage top.fate.config;import lombok.AllArgsConstructor;import org.springframework.context.annotation.Bean;import org.springframework.context.annotation.Configuration;import org.springframework.core.io.ClassPathResource;import org.springframework.security.authentication.AuthenticationManager;import org.springframework.security.crypto.password.PasswordEncoder;import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerSecurityConfigurer;import org.springframework.security.oauth2.provider.token.TokenEnhancer;import org.springframework.security.oauth2.provider.token.TokenEnhancerChain;import org.springframework.security.oauth2.provider.token.store.JwtAccessTokenConverter;import org.springframework.security.rsa.crypto.KeyStoreKeyFactory;import top.fate.component.JwtTokenEnhancer;import top.fate.service.impl.SysUserServiceImpl;import java.security.KeyPair;import java.util.ArrayList;import java.util.List;/**认证服务器配置@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/13 11:36*/@AllArgsConstructor@Configuration@EnableAuthorizationServerpublic class Oauth2ServerConfig extends AuthorizationServerConfigurerAdapter {private final PasswordEncoder passwordEncoder; private final SysUserServiceImpl userDetailsService; private final AuthenticationManager authenticationManager; private final JwtTokenEnhancer jwtTokenEnhancer; @Override public void configure(ClientDetailsServiceConfigurer clients) throws Exception { clients.inMemory() .withClient("client-app") .secret(passwordEncoder.encode("123456")) .scopes("all") .authorizedGrantTypes("password", 
"refresh_token") .accessTokenValiditySeconds(3600) .refreshTokenValiditySeconds(86400); } @Override public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception { TokenEnhancerChain enhancerChain = new TokenEnhancerChain(); List<TokenEnhancer> delegates = new ArrayList<>(); delegates.add(jwtTokenEnhancer); delegates.add(accessTokenConverter()); enhancerChain.setTokenEnhancers(delegates); //配置JWT的内容增强器 endpoints.authenticationManager(authenticationManager) .userDetailsService(userDetailsService) //配置加载用户信息的服务 .accessTokenConverter(accessTokenConverter()) .tokenEnhancer(enhancerChain); } @Override public void configure(AuthorizationServerSecurityConfigurer security) throws Exception { security.allowFormAuthenticationForClients(); } @Bean public JwtAccessTokenConverter accessTokenConverter() { JwtAccessTokenConverter jwtAccessTokenConverter = new JwtAccessTokenConverter(); jwtAccessTokenConverter.setKeyPair(keyPair()); return jwtAccessTokenConverter; } @Bean public KeyPair keyPair() { //从classpath下的证书中获取秘钥对 KeyStoreKeyFactory keyStoreKeyFactory = new KeyStoreKeyFactory(new ClassPathResource("jwt.jks"), "123456".toCharArray()); return keyStoreKeyFactory.getKeyPair("jwt", "123456".toCharArray()); } }## 1.7 获取RSA公钥接口package top.fate.controller;import com.nimbusds.jose.jwk.JWKSet;import com.nimbusds.jose.jwk.RSAKey;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.web.bind.annotation.GetMapping;import org.springframework.web.bind.annotation.RequestMapping;import org.springframework.web.bind.annotation.RestController;import java.security.KeyPair;import java.security.interfaces.RSAPublicKey;import java.util.Map;/**@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/13 17:48*/@RestController@RequestMapping(value = "rsa")public class KeyPairController {@Autowired private KeyPair keyPair; @GetMapping(value = "publicKey") public Map<String, Object> getKey(){ RSAPublicKey aPublic = (RSAPublicKey) keyPair.getPublic(); RSAKey key = new RSAKey.Builder(aPublic).build(); return new JWKSet(key).toJSONObject(); }}## 1.8 配置Spring Security,允许获取公钥接口的访问package top.fate.config;import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;import org.springframework.context.annotation.Bean;import org.springframework.context.annotation.Configuration;import org.springframework.security.authentication.AuthenticationManager;import org.springframework.security.config.annotation.web.builders.HttpSecurity;import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;import org.springframework.security.crypto.password.PasswordEncoder;/**SpringSecurity配置@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/14 15:36*/@Configuration@EnableWebSecuritypublic class WebSecurityConfig extends WebSecurityConfigurerAdapter {@Override protected void configure(HttpSecurity http) throws Exception { http.authorizeRequests() .requestMatchers(EndpointRequest.toAnyEndpoint()).permitAll() .antMatchers("/rsa/publicKey").permitAll() .anyRequest().authenticated(); } @Bean @Override public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); }}## 1.9 初始化用户权限demopackage top.fate.service.impl;import 
cn.hutool.core.collection.CollUtil;import org.springframework.data.redis.core.RedisTemplate;import org.springframework.stereotype.Service;import top.fate.model.SysConstant;import javax.annotation.PostConstruct;import javax.annotation.Resource;import java.util.List;import java.util.Map;import java.util.TreeMap;/**@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/14 10:33*/@Servicepublic class ResourceServiceImpl {private Map<String, List<String>> resourceRolesMap; @Resource private RedisTemplate<String,Object> redisTemplate; @PostConstruct public void initData() { resourceRolesMap = new TreeMap<>(); resourceRolesMap.put("/user/tb-user/list", CollUtil.toList("ADMIN")); resourceRolesMap.put("/order/order/getUserService", CollUtil.toList("ADMIN", "ROOT")); redisTemplate.opsForHash().putAll("oauth2:oauth_urls", resourceRolesMap); }}## 1.10 Redis相关配置package top.fate.config;import org.springframework.context.annotation.Bean;import org.springframework.context.annotation.Configuration;import org.springframework.data.redis.connection.RedisConnectionFactory;import org.springframework.data.redis.core.RedisTemplate;import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;import org.springframework.data.redis.serializer.StringRedisSerializer;/**Redis相关配置@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/16 14:29*/@Configuration@EnableRedisRepositoriespublic class RedisRepositoryConfig {@Bean public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) { RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>(); redisTemplate.setConnectionFactory(connectionFactory); StringRedisSerializer stringRedisSerializer = new StringRedisSerializer(); redisTemplate.setKeySerializer(stringRedisSerializer); redisTemplate.setHashKeySerializer(stringRedisSerializer); Jackson2JsonRedisSerializer<?> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class); redisTemplate.setValueSerializer(jackson2JsonRedisSerializer); redisTemplate.setHashValueSerializer(jackson2JsonRedisSerializer); redisTemplate.afterPropertiesSet(); return redisTemplate; } }# 2. 
gateway ## 2.1 网关依赖 <!--网关依赖gateway--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-gateway</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-loadbalancer</artifactId> </dependency> <!-- Sentinel --> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId> <version>${spring-cloud-alibaba.version}</version> </dependency> <!-- Nacos --> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> <exclusions> <exclusion> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-client</artifactId> </exclusion> </exclusions> <version>${spring-cloud-alibaba.version}</version> </dependency> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId> <exclusions> <exclusion> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-client</artifactId> </exclusion> </exclusions> <version>${spring-cloud-alibaba.version}</version> </dependency> <dependency> <groupId>com.alibaba.nacos</groupId> <artifactId>nacos-client</artifactId> <version>${alibaba.nacos.version}</version> </dependency> <!--SpringBoot2.4.x之后默认不加载bootstrap.yml文件,需要在pom里加上依赖--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-bootstrap</artifactId> </dependency> <!-- 加入 log4j2 --> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-logging</artifactId> <exclusions> <exclusion> <groupId>*</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> </dependency> <!-- zipkin --> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zipkin</artifactId> <version>2.2.8.RELEASE</version> </dependency> <dependency> <groupId>io.zipkin.brave</groupId> <artifactId>brave-instrumentation-dubbo</artifactId> </dependency> <dependency> <groupId>com.github.xiaoymin</groupId> <artifactId>knife4j-spring-boot-starter</artifactId> <version>3.0.3</version> </dependency> <!-- redis--> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency> <!-- oauth2 --> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-resource-server</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-client</artifactId> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-oauth2-jose</artifactId> </dependency> <dependency> <groupId>com.nimbusds</groupId> <artifactId>nimbus-jose-jwt</artifactId> <version>9.9.3</version> </dependency> <dependency> <groupId>cn.hutool</groupId> <artifactId>hutool-all</artifactId> <version>5.8.0</version> </dependency> <dependency> <groupId>top.fate</groupId> <artifactId>fate-common</artifactId> <version>1.0.0</version> </dependency> </dependencies>## 2.2 applicaion.yamlspring:security: oauth2: resourceserver: jwt: jwk-set-uri: 'http://localhost:9999/rsa/publicKey'secure: ignore:urls: #配置白名单路径 - "/actuator/**" - "/oauth2-server/oauth/token" - "/oauth2-server/**" - "/order/**"## 2.3 
bootstrap.yaml logging: file: # log path, includes spring.application.name path: ${spring.application.name} url: nacos: localhost:8848 spring: application: name: gateway # instance name profiles: active: dev cloud: nacos: discovery: # cluster isolation cluster-name: shanghai # namespace namespace: ${spring.profiles.active} # ephemeral: true = ephemeral instance, false = persistent; ephemeral instances are evicted directly on failure, persistent ones wait for recovery ephemeral: true # registry address server-addr: ${url.nacos} config: namespace: ${spring.profiles.active} file-extension: yaml # config center address server-addr: ${url.nacos} extension-configs[0]: data-id: gateway.yaml group: DEFAULT_GROUP refresh: false extension-configs[1]: data-id: sentinel.yaml group: DEFAULT_GROUP refresh: false extension-configs[2]: data-id: log.properties group: DEFAULT_GROUP refresh: false extension-configs[3]: data-id: redis.yaml group: DEFAULT_GROUP refresh: false### gateway.yaml server: port: 30001 spring: cloud: gateway: enabled: true discovery: locator: lower-case-service-id: true routes: - id: user-service uri: lb://user-service predicates: - Path=/user/** filters: - StripPrefix=1 - id: order-service uri: lb://order-service predicates: - Path=/order/** filters: - StripPrefix=1 - id: oauth2-server uri: lb://oauth2-server predicates: - Path=/oauth2-server/** filters: - StripPrefix=1### sentinel.yaml spring: cloud: sentinel: transport: dashboard: localhost:8080### log.properties logging.level.root=info### redis.yaml spring: redis: host: localhost port: 6379 ## 2.4 Authorization manager package top.fate.authorization;import cn.hutool.core.collection.CollUtil;import cn.hutool.core.convert.Convert;import org.springframework.data.redis.core.RedisTemplate;import org.springframework.security.authorization.AuthorizationDecision;import org.springframework.security.authorization.ReactiveAuthorizationManager;import org.springframework.security.core.Authentication;import org.springframework.security.core.GrantedAuthority;import org.springframework.security.web.server.authorization.AuthorizationContext;import org.springframework.stereotype.Component;import reactor.core.publisher.Mono;import javax.annotation.Resource;import java.net.URI;import java.util.List;import java.util.stream.Collectors;/** Authorization manager: decides whether the caller may access the requested resource @author:Wangxl @Email:18335844494@163.com @Time:2022/5/7 9:27 */@Component public class AuthorizationManager implements ReactiveAuthorizationManager<AuthorizationContext> {@Resource private RedisTemplate<String, Object> redisTemplate; @Override public Mono<AuthorizationDecision> check(Mono<Authentication> mono, AuthorizationContext authorizationContext) { // fetch the role list allowed for the current path from Redis URI uri = authorizationContext.getExchange().getRequest().getURI(); Object obj = redisTemplate.opsForHash().get("oauth2:oauth_urls", uri.getPath()); List<String> authorities = Convert.toList(String.class, obj); // deny by default when the path has no role mapping (also avoids an NPE when the hash entry is missing) if (CollUtil.isEmpty(authorities)) { return Mono.just(new AuthorizationDecision(false)); } authorities = authorities.stream().map(i -> "ROLE_" + i).collect(Collectors.toList()); // only authenticated callers holding a matching role may access the current path return mono .filter(Authentication::isAuthenticated) .flatMapIterable(Authentication::getAuthorities) .map(GrantedAuthority::getAuthority) .any(authorities::contains) .map(AuthorizationDecision::new) .defaultIfEmpty(new AuthorizationDecision(false)); } }## 2.5 Custom response when not logged in or the token has expired package top.fate.component;import cn.hutool.json.JSONUtil;import org.springframework.core.io.buffer.DataBuffer;import org.springframework.http.HttpHeaders;import org.springframework.http.HttpStatus;import org.springframework.http.MediaType;import org.springframework.http.server.reactive.ServerHttpResponse;import org.springframework.security.core.AuthenticationException;import org.springframework.security.web.server.ServerAuthenticationEntryPoint;import org.springframework.stereotype.Component;import 
org.springframework.web.server.ServerWebExchange;import reactor.core.publisher.Mono;import top.fate.api.R;import top.fate.api.ResultCode;import java.nio.charset.Charset;/** Custom response when not logged in or the token has expired @author:Wangxl @Email:18335844494@163.com @Time:2022/5/15 19:37 */@Component public class RestAuthenticationEntryPoint implements ServerAuthenticationEntryPoint {@Override public Mono<Void> commence(ServerWebExchange exchange, AuthenticationException e) { ServerHttpResponse response = exchange.getResponse(); response.setStatusCode(HttpStatus.OK); response.getHeaders().add(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE); String body = JSONUtil.toJsonStr(R.fail(ResultCode.UN_AUTHORIZED.getCode(), ResultCode.UN_AUTHORIZED.getMessage())); DataBuffer buffer = response.bufferFactory().wrap(body.getBytes(Charset.forName("UTF-8"))); return response.writeWith(Mono.just(buffer)); }} ## 2.6 Custom response when the caller lacks permission (this is the RestfulAccessDeniedHandler that section 2.9 injects; the ResultCode constant below is an assumption — substitute the "forbidden" code your own top.fate.api.ResultCode defines) package top.fate.component;import cn.hutool.json.JSONUtil;import org.springframework.core.io.buffer.DataBuffer;import org.springframework.http.HttpHeaders;import org.springframework.http.HttpStatus;import org.springframework.http.MediaType;import org.springframework.http.server.reactive.ServerHttpResponse;import org.springframework.security.access.AccessDeniedException;import org.springframework.security.web.server.authorization.ServerAccessDeniedHandler;import org.springframework.stereotype.Component;import org.springframework.web.server.ServerWebExchange;import reactor.core.publisher.Mono;import top.fate.api.R;import top.fate.api.ResultCode;import java.nio.charset.Charset;/** Custom response when access is denied @author:Wangxl @Email:18335844494@163.com @Time:2022/5/15 19:37 */@Component public class RestfulAccessDeniedHandler implements ServerAccessDeniedHandler {@Override public Mono<Void> handle(ServerWebExchange exchange, AccessDeniedException denied) { ServerHttpResponse response = exchange.getResponse(); response.setStatusCode(HttpStatus.OK); response.getHeaders().add(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE); String body = JSONUtil.toJsonStr(R.fail(ResultCode.REQ_REJECT.getCode(), ResultCode.REQ_REJECT.getMessage())); // REQ_REJECT is an assumed constant DataBuffer buffer = response.bufferFactory().wrap(body.getBytes(Charset.forName("UTF-8"))); return response.writeWith(Mono.just(buffer)); }}## 2.7 Gateway whitelist configuration package top.fate.config;import lombok.Data;import lombok.EqualsAndHashCode;import org.springframework.boot.context.properties.ConfigurationProperties;import org.springframework.stereotype.Component;import java.util.List;/** Gateway whitelist configuration @author:Wangxl @Email:18335844494@163.com @Time:2022/5/16 14:27 */@Data@EqualsAndHashCode(callSuper = false)@Component@ConfigurationProperties(prefix="secure.ignore")public class IgnoreUrlsConfig {private List<String> urls;} ## 2.8 Redis configuration package top.fate.config;import org.springframework.context.annotation.Bean;import org.springframework.context.annotation.Configuration;import org.springframework.data.redis.connection.RedisConnectionFactory;import org.springframework.data.redis.core.RedisTemplate;import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;import org.springframework.data.redis.serializer.StringRedisSerializer;/** Redis configuration @author:Wangxl @Email:18335844494@163.com @Time:2022/5/16 14:29 */@Configuration@EnableRedisRepositories public class RedisRepositoryConfig {@Bean public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) { RedisTemplate<String, Object>
redisTemplate = new RedisTemplate<>(); redisTemplate.setConnectionFactory(connectionFactory); StringRedisSerializer stringRedisSerializer = new StringRedisSerializer(); redisTemplate.setKeySerializer(stringRedisSerializer); redisTemplate.setHashKeySerializer(stringRedisSerializer); Jackson2JsonRedisSerializer<?> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class); redisTemplate.setValueSerializer(jackson2JsonRedisSerializer); redisTemplate.setHashValueSerializer(jackson2JsonRedisSerializer); redisTemplate.afterPropertiesSet(); return redisTemplate; }}## 2.9 资源服务器配置package top.fate.config;import cn.hutool.core.util.ArrayUtil;import lombok.AllArgsConstructor;import org.springframework.context.annotation.Bean;import org.springframework.context.annotation.Configuration;import org.springframework.core.convert.converter.Converter;import org.springframework.security.authentication.AbstractAuthenticationToken;import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;import org.springframework.security.config.web.server.SecurityWebFiltersOrder;import org.springframework.security.config.web.server.ServerHttpSecurity;import org.springframework.security.oauth2.jwt.Jwt;import org.springframework.security.oauth2.server.resource.authentication.JwtAuthenticationConverter;import org.springframework.security.oauth2.server.resource.authentication.JwtGrantedAuthoritiesConverter;import org.springframework.security.oauth2.server.resource.authentication.ReactiveJwtAuthenticationConverterAdapter;import org.springframework.security.web.server.SecurityWebFilterChain;import reactor.core.publisher.Mono;import top.fate.authorization.AuthorizationManager;import top.fate.component.RestAuthenticationEntryPoint;import top.fate.component.RestfulAccessDeniedHandler;import top.fate.filter.IgnoreUrlsRemoveJwtFilter;/**资源服务器配置@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/16 14:31*/@AllArgsConstructor@Configuration@EnableWebFluxSecuritypublic class ResourceServerConfig {private final AuthorizationManager authorizationManager; private final IgnoreUrlsConfig ignoreUrlsConfig; private final RestfulAccessDeniedHandler restfulAccessDeniedHandler; private final RestAuthenticationEntryPoint restAuthenticationEntryPoint; private final IgnoreUrlsRemoveJwtFilter ignoreUrlsRemoveJwtFilter; @Bean public SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http) { http.oauth2ResourceServer().jwt() .jwtAuthenticationConverter(jwtAuthenticationConverter()); //自定义处理JWT请求头过期或签名错误的结果 http.oauth2ResourceServer().authenticationEntryPoint(restAuthenticationEntryPoint); //对白名单路径,直接移除JWT请求头 http.addFilterBefore(ignoreUrlsRemoveJwtFilter, SecurityWebFiltersOrder.AUTHENTICATION); http.authorizeExchange() .pathMatchers(ArrayUtil.toArray(ignoreUrlsConfig.getUrls(),String.class)).permitAll()//白名单配置 .anyExchange().access(authorizationManager)//鉴权管理器配置 .and().exceptionHandling() .accessDeniedHandler(restfulAccessDeniedHandler)//处理未授权 .authenticationEntryPoint(restAuthenticationEntryPoint)//处理未认证 .and().csrf().disable(); return http.build(); } @Bean public Converter<Jwt, ? extends Mono<? 
extends AbstractAuthenticationToken>> jwtAuthenticationConverter() { JwtGrantedAuthoritiesConverter jwtGrantedAuthoritiesConverter = new JwtGrantedAuthoritiesConverter(); jwtGrantedAuthoritiesConverter.setAuthorityPrefix("ROLE_"); jwtGrantedAuthoritiesConverter.setAuthoritiesClaimName("authorities"); JwtAuthenticationConverter jwtAuthenticationConverter = new JwtAuthenticationConverter(); jwtAuthenticationConverter.setJwtGrantedAuthoritiesConverter(jwtGrantedAuthoritiesConverter); return new ReactiveJwtAuthenticationConverterAdapter(jwtAuthenticationConverter); } } ## 2.10 将登录用户的JWT转化成用户信息的全局过滤器 package top.fate.filter;import cn.hutool.core.util.StrUtil;import com.nimbusds.jose.JWSObject;import org.slf4j.Logger;import org.slf4j.LoggerFactory;import org.springframework.cloud.gateway.filter.GatewayFilterChain;import org.springframework.cloud.gateway.filter.GlobalFilter;import org.springframework.core.Ordered;import org.springframework.http.server.reactive.ServerHttpRequest;import org.springframework.stereotype.Component;import org.springframework.web.server.ServerWebExchange;import reactor.core.publisher.Mono;import java.text.ParseException;/**将登录用户的JWT转化成用户信息的全局过滤器@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/16 12:31*/@Componentpublic class AuthGlobalFilter implements GlobalFilter, Ordered {private static Logger LOGGER = LoggerFactory.getLogger(AuthGlobalFilter.class); @Override public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) { String token = exchange.getRequest().getHeaders().getFirst("Authorization"); if (StrUtil.isEmpty(token)) { return chain.filter(exchange); } try { //从token中解析用户信息并设置到Header中去 String realToken = token.replace("Bearer ", ""); JWSObject jwsObject = JWSObject.parse(realToken); String userStr = jwsObject.getPayload().toString(); LOGGER.info("AuthGlobalFilter.filter() user:{}",userStr); ServerHttpRequest request = exchange.getRequest().mutate().header("user", userStr).build(); exchange = exchange.mutate().request(request).build(); } catch (ParseException e) { e.printStackTrace(); } return chain.filter(exchange); } @Override public int getOrder() { return 0; }}## 2.11 白名单路径访问时需要移除JWT请求头package top.fate.filter;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.http.server.reactive.ServerHttpRequest;import org.springframework.stereotype.Component;import org.springframework.util.AntPathMatcher;import org.springframework.util.PathMatcher;import org.springframework.web.server.ServerWebExchange;import org.springframework.web.server.WebFilter;import org.springframework.web.server.WebFilterChain;import reactor.core.publisher.Mono;import top.fate.config.IgnoreUrlsConfig;import java.net.URI;import java.util.List;/**白名单路径访问时需要移除JWT请求头@auther:Wangxl@Emile:18335844494@163.com@Time:2022/5/16 16:42*/@Componentpublic class IgnoreUrlsRemoveJwtFilter implements WebFilter {@Autowired private IgnoreUrlsConfig ignoreUrlsConfig; @Override public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) { ServerHttpRequest request = exchange.getRequest(); URI uri = request.getURI(); PathMatcher pathMatcher = new AntPathMatcher(); //白名单路径移除JWT请求头 List<String> ignoreUrls = ignoreUrlsConfig.getUrls(); for (String ignoreUrl : ignoreUrls) { if (pathMatcher.match(ignoreUrl, uri.getPath())) { request = exchange.getRequest().mutate().header("Authorization", "").build(); exchange = exchange.mutate().request(request).build(); return chain.filter(exchange); } } return chain.filter(exchange); }}# 测试 > 访问 
[http://localhost:30001/oauth2-server/oauth/token](http://localhost:30001/oauth2-server/oauth/token) with the password grant, then use the refresh token to renew it (the original request/response screenshots are omitted); a runnable sketch of the token request follows below.
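The same token request can be driven from code. A minimal sketch, assuming Java 11+ and placeholder credentials — `client-app`/`123456` and `admin`/`123456` are hypothetical, use whatever is registered in your own oauth2-server:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class TokenClient {
    public static void main(String[] args) throws Exception {
        // Basic auth carries the OAuth2 client credentials (hypothetical values)
        String basic = Base64.getEncoder().encodeToString("client-app:123456".getBytes());
        // password grant: resource-owner username/password go in the form body
        String form = "grant_type=password&username=admin&password=123456";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:30001/oauth2-server/oauth/token"))
                .header("Authorization", "Basic " + basic)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // expected on success: JSON containing access_token and refresh_token
        System.out.println(response.body());
    }
}
```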
上一篇,SpringCloudAlibaba篇(七)SpringCloud整合Zipkin分布式链路跟踪系统(SpringCloud+dubbo+Zipkin)@[toc]前言Knife4j的前身是swagger-bootstrap-ui,前身swagger-bootstrap-ui是一个纯swagger-ui的ui皮肤项目knife4j官网服务端构建依赖<dependency> <groupId>io.springfox</groupId> <artifactId>springfox-boot-starter</artifactId> <version>3.0.0</version> </dependency>配置类package top.fate.config; import io.swagger.annotations.ApiOperation; import org.springframework.boot.actuate.autoconfigure.endpoint.web.CorsEndpointProperties; import org.springframework.boot.actuate.autoconfigure.endpoint.web.WebEndpointProperties; import org.springframework.boot.actuate.autoconfigure.web.server.ManagementPortType; import org.springframework.boot.actuate.endpoint.ExposableEndpoint; import org.springframework.boot.actuate.endpoint.web.*; import org.springframework.boot.actuate.endpoint.web.annotation.ControllerEndpointsSupplier; import org.springframework.boot.actuate.endpoint.web.annotation.ServletEndpointsSupplier; import org.springframework.boot.actuate.endpoint.web.servlet.WebMvcEndpointHandlerMapping; import org.springframework.context.EnvironmentAware; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.core.env.Environment; import org.springframework.http.HttpMethod; import springfox.documentation.builders.ApiInfoBuilder; import springfox.documentation.builders.PathSelectors; import springfox.documentation.builders.RequestHandlerSelectors; import springfox.documentation.builders.ResponseBuilder; import springfox.documentation.oas.annotations.EnableOpenApi; import springfox.documentation.service.ApiInfo; import springfox.documentation.service.Contact; import springfox.documentation.service.Response; import springfox.documentation.spi.DocumentationType; import springfox.documentation.spring.web.plugins.Docket; import java.util.ArrayList; import java.util.Collection; import java.util.List; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2022/5/7 9:27 */ @Configuration @EnableOpenApi //注解启动用Swagger的使用,同时在配置类中对Swagger的通用参数进行配置 public class Swagger3Config implements EnvironmentAware { private String applicationName; private String applicationDescription; @Bean public Docket createRestApi(){ //返回文档概要信息 return new Docket(DocumentationType.OAS_30) .apiInfo(apiInfo()) .select() .apis(RequestHandlerSelectors.withMethodAnnotation(ApiOperation.class)) .paths(PathSelectors.any()) .build() // .globalRequestParameters(getGlobalRequestParameters()) .globalResponses(HttpMethod.GET,getGlobalResponseMessage()) .globalResponses(HttpMethod.POST,getGlobalResponseMessage()); } /* 生成接口信息,包括标题,联系人等 */ private ApiInfo apiInfo() { return new ApiInfoBuilder() .title(applicationName+"接口文档") .description(applicationDescription) .contact(new Contact("fate急速出击","http://www.baidu.com","18335844494@163.com")) .version("1.0") .build(); } /* 封装全局通用参数 */ /*private List<RequestParameter> getGlobalRequestParameters() { List<RequestParameter> parameters=new ArrayList<>(); parameters.add(new RequestParameterBuilder() .name("uuid") .description("设备uuid") .required(true) .in(ParameterType.QUERY) .query(q->q.model(m->m.scalarModel((ScalarType.STRING)))) .required(false) .build()); return parameters; }*/ /* 封装通用相应信息 */ private List<Response> getGlobalResponseMessage() { List<Response> responseList=new ArrayList<>(); responseList.add(new ResponseBuilder().code("404").description("未找到资源").build()); return responseList; } @Bean public WebMvcEndpointHandlerMapping webEndpointServletHandlerMapping( 
WebEndpointsSupplier webEndpointsSupplier, ServletEndpointsSupplier servletEndpointsSupplier, ControllerEndpointsSupplier controllerEndpointsSupplier, EndpointMediaTypes endpointMediaTypes, CorsEndpointProperties corsProperties, WebEndpointProperties webEndpointProperties, Environment environment) { List<ExposableEndpoint<?>> allEndpoints = new ArrayList<>(); Collection<ExposableWebEndpoint> webEndpoints = webEndpointsSupplier.getEndpoints(); allEndpoints.addAll(webEndpoints); allEndpoints.addAll(servletEndpointsSupplier.getEndpoints()); allEndpoints.addAll(controllerEndpointsSupplier.getEndpoints()); String basePath = webEndpointProperties.getBasePath(); EndpointMapping endpointMapping = new EndpointMapping(basePath); boolean shouldRegisterLinksMapping = webEndpointProperties.getDiscovery().isEnabled() && (org.springframework.util.StringUtils.hasText(basePath) || ManagementPortType.get(environment).equals(ManagementPortType.DIFFERENT)); return new WebMvcEndpointHandlerMapping(endpointMapping, webEndpoints, endpointMediaTypes, corsProperties.toCorsConfiguration(), new EndpointLinksResolver(allEndpoints, basePath), shouldRegisterLinksMapping, null); } @Override public void setEnvironment(Environment environment) { this.applicationDescription = environment.getProperty("spring.application.description"); this.applicationName = environment.getProperty("spring.application.name"); } }配置文件springfox: documentation: swagger-ui: enabled: true # false关闭swagger-ui界面 但不关闭openapi spring: mvc: pathmatch: matching-strategy: ANT_PATH_MATCHER #springboot2.6.x如果不加该配置会报错启动服务端网关端 (聚合swagger,将所有微服务的文档集中到网关中)添加依赖<!-- knife4j--> <dependency> <groupId>com.github.xiaoymin</groupId> <artifactId>knife4j-spring-boot-starter</artifactId> <version>3.0.3</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-boot-starter</artifactId> <version>3.0.0</version> </dependency>配置类SwaggerHandlerpackage top.fate.config; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpStatus; import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; import reactor.core.publisher.Mono; import springfox.documentation.swagger.web.*; import java.util.Optional; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2022/5/9 17:28 */ @RestController @RequestMapping("/swagger-resources") public class SwaggerHandler { @Autowired(required = false) private SecurityConfiguration securityConfiguration; @Autowired(required = false) private UiConfiguration uiConfiguration; private final SwaggerResourcesProvider swaggerResources; @Autowired public SwaggerHandler(SwaggerResourcesProvider swaggerResources) { this.swaggerResources = swaggerResources; } @GetMapping("/configuration/security") public Mono<ResponseEntity<SecurityConfiguration>> securityConfiguration() { return Mono.just(new ResponseEntity<>( Optional.ofNullable(securityConfiguration).orElse(SecurityConfigurationBuilder.builder().build()), HttpStatus.OK)); } @GetMapping("/configuration/ui") public Mono<ResponseEntity<UiConfiguration>> uiConfiguration() { return Mono.just(new ResponseEntity<>( Optional.ofNullable(uiConfiguration).orElse(UiConfigurationBuilder.builder().build()), HttpStatus.OK)); } @GetMapping("") public Mono<ResponseEntity> swaggerResources() { return Mono.just((new ResponseEntity<>(swaggerResources.get(), HttpStatus.OK))); } 
}SwaggerHeaderFilterpackage top.fate.config; import org.springframework.cloud.gateway.filter.GatewayFilter; import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory; import org.springframework.http.server.reactive.ServerHttpRequest; import org.springframework.stereotype.Component; import org.springframework.util.StringUtils; import org.springframework.web.server.ServerWebExchange; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2022/5/9 17:13 */ @Component public class SwaggerHeaderFilter extends AbstractGatewayFilterFactory { private static final String HEADER_NAME = "X-Forwarded-Prefix"; @Override public GatewayFilter apply(Object config) { return (exchange, chain) -> { ServerHttpRequest request = exchange.getRequest(); String path = request.getURI().getPath(); if (!StringUtils.endsWithIgnoreCase(path, SwaggerProvider.API_URI)) { return chain.filter(exchange); } String basePath = path.substring(0, path.lastIndexOf(SwaggerProvider.API_URI)); ServerHttpRequest newRequest = request.mutate().header(HEADER_NAME, basePath).build(); ServerWebExchange newExchange = exchange.mutate().request(newRequest).build(); return chain.filter(newExchange); }; } }SwaggerProviderpackage top.fate.config; import lombok.AllArgsConstructor; import org.springframework.cloud.gateway.config.GatewayProperties; import org.springframework.cloud.gateway.route.RouteLocator; import org.springframework.cloud.gateway.support.NameUtils; import org.springframework.context.annotation.Primary; import org.springframework.stereotype.Component; import springfox.documentation.swagger.web.SwaggerResource; import springfox.documentation.swagger.web.SwaggerResourcesProvider; import java.util.ArrayList; import java.util.List; @Component @Primary @AllArgsConstructor public class SwaggerProvider implements SwaggerResourcesProvider { public static final String API_URI = "/v2/api-docs"; private final RouteLocator routeLocator; private final GatewayProperties gatewayProperties; /** * 这个类是核心,这个类封装的是SwaggerResource,即在swagger-ui.html页面中顶部的选择框,选择服务的swagger页面内容。 * RouteLocator:获取spring cloud gateway中注册的路由 * RouteDefinitionLocator:获取spring cloud gateway路由的详细信息 * RestTemplate:获取各个配置有swagger的服务的swagger-resources */ @Override public List<SwaggerResource> get() { List<SwaggerResource> resources = new ArrayList<>(); List<String> routes = new ArrayList<>(); //取出gateway的route routeLocator.getRoutes().subscribe(route -> routes.add(route.getId())); //结合配置的route-路径(Path),和route过滤,只获取有效的route节点 gatewayProperties.getRoutes().stream().filter(routeDefinition -> routes.contains(routeDefinition.getId())) .forEach(routeDefinition -> routeDefinition.getPredicates().stream() .filter(predicateDefinition -> ("Path").equalsIgnoreCase(predicateDefinition.getName())) .forEach(predicateDefinition -> resources.add(swaggerResource(routeDefinition.getId(), predicateDefinition.getArgs().get(NameUtils.GENERATED_NAME_PREFIX + "0") .replace("/**", API_URI))))); return resources; } private SwaggerResource swaggerResource(String name, String location) { SwaggerResource swaggerResource = new SwaggerResource(); swaggerResource.setName(name); swaggerResource.setLocation(location); swaggerResource.setSwaggerVersion("3.0.0"); return swaggerResource; } }配置文件server: port: 30001 spring: cloud: gateway: enabled: true discovery: locator: lower-case-service-id: true routes: - id: user-service uri: lb://user-service predicates: - Path=/user/** filters: - StripPrefix=1 - id: order-service uri: lb://order-service predicates: - Path=/order/** filters: - 
StripPrefix=1 Start the gateway and the services; on the aggregated documentation page you can then switch between each service's API docs via the drop-down box at the top (the original screenshots are omitted).
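Note that only handlers annotated with @ApiOperation are picked up, because the Docket above selects RequestHandlerSelectors.withMethodAnnotation(ApiOperation.class). A minimal hypothetical controller illustrating the effect:

```java
package top.fate.controller; // hypothetical demo controller

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@Api(tags = "User API")
@RestController
public class UserDocDemoController {

    // appears in the generated docs: matched by withMethodAnnotation(ApiOperation.class)
    @ApiOperation("Query a user by id")
    @GetMapping("/tb-user/get")
    public String getUser(@RequestParam Long id) {
        return "user-" + id;
    }

    // NOT annotated with @ApiOperation, so this endpoint is absent from the docs
    @GetMapping("/tb-user/ping")
    public String ping() {
        return "pong";
    }
}
```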
Previous post: SpringCloudAlibaba (6) integrating Seata (distributed transactions with nacos + seata)@[toc]Preface zipkin official site: Zipkin is a distributed tracing system. It helps gather the timing data needed to troubleshoot latency problems in service architectures; features include both the collection and lookup of this data. If you have a trace ID in a log file you can jump to it directly; otherwise you can query by attributes such as service, operation name, tags and duration. Some interesting data is summarized for you, such as the percentage of time spent in a service and whether operations failed. The Zipkin UI also provides a dependency diagram showing how many traced requests went through each application, which helps identify aggregate behavior such as error paths or calls to deprecated services. 1. zipkin download and installation 1.1 Download zipkin: download link 1.2 zipkin table-creation SQL -- -- Copyright 2015-2019 The OpenZipkin Authors -- -- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except -- in compliance with the License. You may obtain a copy of the License at -- -- http://www.apache.org/licenses/LICENSE-2.0 -- -- Unless required by applicable law or agreed to in writing, software distributed under the License -- is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express -- or implied. See the License for the specific language governing permissions and limitations under -- the License. -- CREATE TABLE IF NOT EXISTS zipkin_spans ( `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit', `trace_id` BIGINT NOT NULL, `id` BIGINT NOT NULL, `name` VARCHAR(255) NOT NULL, `remote_service_name` VARCHAR(255), `parent_id` BIGINT, `debug` BIT(1), `start_ts` BIGINT COMMENT 'Span.timestamp(): epoch micros used for endTs query and to implement TTL', `duration` BIGINT COMMENT 'Span.duration(): micros used for minDuration and maxDuration query', PRIMARY KEY (`trace_id_high`, `trace_id`, `id`) ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci; ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTracesByIds'; ALTER TABLE zipkin_spans ADD INDEX(`name`) COMMENT 'for getTraces and getSpanNames'; ALTER TABLE zipkin_spans ADD INDEX(`remote_service_name`) COMMENT 'for getTraces and getRemoteServiceNames'; ALTER TABLE zipkin_spans ADD INDEX(`start_ts`) COMMENT 'for getTraces ordering and range'; CREATE TABLE IF NOT EXISTS zipkin_annotations ( `trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit', `trace_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.trace_id', `span_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.id', `a_key` VARCHAR(255) NOT NULL COMMENT 'BinaryAnnotation.key or Annotation.value if type == -1', `a_value` BLOB COMMENT 'BinaryAnnotation.value(), which must be smaller than 64KB', `a_type` INT NOT NULL COMMENT 'BinaryAnnotation.type() or -1 if Annotation', `a_timestamp` BIGINT COMMENT 'Used to implement TTL; Annotation.timestamp or zipkin_spans.timestamp', `endpoint_ipv4` INT COMMENT 'Null when Binary/Annotation.endpoint is null', `endpoint_ipv6` BINARY(16) COMMENT 'Null when Binary/Annotation.endpoint is null, or no IPv6 address', `endpoint_port` SMALLINT COMMENT 'Null when Binary/Annotation.endpoint is null', `endpoint_service_name` VARCHAR(255) COMMENT 'Null when Binary/Annotation.endpoint is null' ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci; ALTER TABLE zipkin_annotations ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `span_id`, `a_key`, `a_timestamp`) COMMENT 'Ignore insert on duplicate'; ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`, `span_id`) COMMENT 'for joining with zipkin_spans'; ALTER TABLE zipkin_annotations ADD 
INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTraces/ByIds'; ALTER TABLE zipkin_annotations ADD INDEX(`endpoint_service_name`) COMMENT 'for getTraces and getServiceNames'; ALTER TABLE zipkin_annotations ADD INDEX(`a_type`) COMMENT 'for getTraces and autocomplete values'; ALTER TABLE zipkin_annotations ADD INDEX(`a_key`) COMMENT 'for getTraces and autocomplete values'; ALTER TABLE zipkin_annotations ADD INDEX(`trace_id`, `span_id`, `a_key`) COMMENT 'for dependencies job'; CREATE TABLE IF NOT EXISTS zipkin_dependencies ( `day` DATE NOT NULL, `parent` VARCHAR(255) NOT NULL, `child` VARCHAR(255) NOT NULL, `call_count` BIGINT, `error_count` BIGINT, PRIMARY KEY (`day`, `parent`, `child`) ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci; 1.3 Start zipkin: java -jar zipkin-server-2.23.16-exec.jar --STORAGE_TYPE=mysql --MYSQL_DB=zipkin --MYSQL_USER=root --MYSQL_PASS=123456 --MYSQL_HOST=localhost --MYSQL_TCP_PORT=3306 (STORAGE_TYPE must be upper case so it matches zipkin's ${STORAGE_TYPE} placeholder) then visit localhost:9411 2. Integrating zipkin with SpringCloud 2.1 Add the dependencies; brave-instrumentation-dubbo resolves to 5.13.7 here <!-- zipkin --> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-zipkin</artifactId> <version>2.2.8.RELEASE</version> </dependency> <dependency> <groupId>io.zipkin.brave</groupId> <artifactId>brave-instrumentation-dubbo</artifactId> </dependency> 2.2 Modify the configuration. Since nacos is this project's config center, I create a zipkin.yaml there directly: spring: zipkin: base-url: http://127.0.0.1:9411 # zipkin server address sender: type: web # defaults to web when kafka/activemq are not on the classpath sleuth: sampler: probability: 1.0 # 100% sampling; use a lower rate in production Append to bootstrap.yml: extension-configs[5]: data-id: zipkin.yaml group: DEFAULT_GROUP refresh: false 2.3 dubbo changes: the original screenshot (the red-box config) is omitted; it enables Brave's dubbo tracing filter so that dubbo calls show up in zipkin — a hedged sketch of that config follows at the end of this article. Start the microservices. 2.4 Test: call order-service and user-service once each through the gateway and check the traces in zipkin (screenshots omitted).
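The dubbo change from the omitted screenshot boils down to enabling Brave's dubbo filter on both sides of the call. A hedged sketch — brave-instrumentation-dubbo registers a dubbo Filter under the name `tracing`, but verify the exact property keys against your dubbo/brave versions:

```yaml
# Hedged sketch: activate Brave's "tracing" dubbo Filter for provider and consumer,
# so each dubbo RPC is reported as a span to the zipkin collector configured above.
dubbo:
  provider:
    filter: tracing
  consumer:
    filter: tracing
```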
@[toc]Preface: Pinpoint is an APM (application performance management) tool written in Java for large-scale distributed systems. Following Dapper, Pinpoint provides a solution that helps analyze the overall structure of a system and how the components of a distributed application are interconnected. Installing the agent is non-intrusive and the performance impact is minimal (roughly 3% extra resource usage). It offers a call-chain diagram between services and the data trail of a single request (screenshots omitted). 1. Download (pinpoint github): pinpoint-agent-2.3.3.tar.gz, pinpoint-collector-boot-2.3.3.jar, pinpoint-web-boot-2.3.3.jar, Source code (zip) — after unpacking, pinpoint\pinpoint-2.3.3\hbase\scripts\hbase-create.hbase is the hbase initialization script — and the hbase database, hbase-1.2.6-bin.tar.gz. 2. Installation. A JDK is a prerequisite; install it yourself. 2.1 Install hbase 2.1.1 Unpack: tar -zxvf hbase-1.2.6-bin.tar.gz 2.1.2 Edit the config files 1. hbase-env.sh: vim hbase-1.2.6/conf/hbase-env.sh — comment out lines 46 and 47 and add the JAVA_HOME setting: export JAVA_HOME=/usr/local/jdk1.8.0_291 export HBASE_MANAGES_ZK=true # Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+ #export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m" #export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m" 2. hbase-site.xml: vim hbase-1.2.6/conf/hbase-site.xml <configuration> <property> <name>hbase.rootdir</name> <value>file:///xxx/hbase/data</value> </property> </configuration> 3. Start hbase: ./bin/start-hbase.sh 4. Initialize the pinpoint schema: cd hbase-1.2.6/bin (hbase shell is a command, not an executable script) then ./hbase shell /usr/local/pinpoint/hbase-create.hbase — initialization done. 2.2 Deploy pinpoint-collector 2.2.1 Create a log directory. 2.2.2 Start: nohup java -jar -Dpinpoint.zookeeper.address=127.0.0.1 pinpoint-collector-boot-2.3.3.jar > log/collector.log 2>&1 & 2.3 Deploy pinpoint-web-boot: nohup java -jar -Dpinpoint.zookeeper.address=127.0.0.1 pinpoint-web-boot-2.3.3.jar > log/web.log 2>&1 & 2.4 pinpoint-agent configuration (nothing to run here; the agent is referenced when each microservice starts) 2.4.1 Unpack: tar -zxvf pinpoint-agent-2.3.3.tar.gz # then edit the config cd pinpoint-agent-2.3.3/profiles/release/ vim pinpoint.config Point the collector address entries at the machine running the collector — the omitted screenshot highlights two ip settings in pinpoint.config (the grpc and classic collector ip entries, e.g. profiler.transport.grpc.collector.ip; exact names may vary by version). Everything runs on one machine here, so 127.0.0.1 works. Also find the sampling line and set profiler.sampling.rate=1 3 Run a springboot app 3.1 linux java -javaagent:/usr/local/pinpoint/pinpoint-agent-2.3.3/pinpoint-bootstrap-2.3.3.jar -Dpinpoint.agentId=xdclass-redis -Dpinpoint.applicationName=xdclass-redis -jar xdclass-redis-0.0.1-SNAPSHOT.jar 3.2 windows 1. Unpack pinpoint-agent-2.3.3 on Windows and edit pinpoint-agent-2.3.3/profiles/release/pinpoint.config 2. Add the VM options in idea and start the springboot app: -javaagent:E:\pinpoint\pinpoint-agent-2.3.3\pinpoint-bootstrap-2.3.3.jar -Dpinpoint.agentId=UserApplication -Dpinpoint.applicationName=UserApplication — startup succeeds (screenshot omitted). 3.3 k8s: when deploying in a k8s environment, each node needs its own copy of pinpoint-agent.
前言最近在搞一套完整的云原生框架,详见 spring-cloud-alibaba专栏,目前已经整合的log4j2,但是想要一套可以实时观察日志的系统,就想到了ELK,然后上一篇文章是socket异步发送给logstash,logstash再输出到elasticsearch索引库中。logstash是java应用,解析日志是非的消耗cpu和内存,logstash安装在应用部署的机器上显得非常的笨重。最常见的做法是用filebeat部署在应用的机器上,logstash单独部署,然后由filebeat将日志输出给logstash解析,解析完由logstash再传给elasticsearch。ELKElasticsearch是个开源分布式搜索引擎,它的特点有:分布式,零配置,自动发现,索引自动分片,索引副本机制,restful风格接口,多数据源,自动搜索负载等。Logstash是一个完全开源的工具,他可以对你的日志进行收集、过滤,并将其存储供以后使用(如,搜索)。Beats在是一个轻量级日志采集器,其实Beats家族有6个成员,早期的ELK架构中使用Logstash收集、解析日志,但是Logstash对内存、cpu、io等资源消耗比较高。相比Logstash,Beats所占系统的CPU和内存几乎可以忽略不计。Kibana 也是一个开源和免费的工具,它Kibana可以为 Logstash 和 ElasticSearch 提供的日志分析友好的 Web 界面,可以帮助您汇总、分析和搜索重要数据日志。SrpingBoot+Log4j2 配置修改Log4j2为我们提供SocketAppender,使得我们可以通过TCP或UDP发送日志我的上一篇文章就是通过这种方式推送到了logstash,但这次我改成beats定时推送 修改配置文件springboot整合log4j2详见我的上一篇博客什么是日志门面? SpringBoot整合log4j2 ,日志落地修改log4j2-spring.xml文件 注意看这个地方${sys:LOG_PATH},我借用了springboot中的日志路径配置使得我的xml中可以读取yml的值 然后我的日志结构就是每个微服务有一个自己的日志文件夹,我这么做是为了后边的beats 如果是k8s+docker部署的话,可以不用这种方式参考docker-compose部署多个微服务,ELK日志收集方案 真的有用不是引流application.yml logging: file: # 配置日志的路径,包含 spring.application.name path: ${spring.application.name}附:完整xml<?xml version="1.0" encoding="UTF-8"?> <!-- Configuration后面的status,这个用于设置log4j2自身内部的信息输出,可以不设置, 当设置成trace时,可以看到log4j2内部各种详细输出 --> <!-- monitorInterval:Log4j能够自动检测修改配置 文件和重新配置本身,设置间隔秒数 --> <configuration monitorInterval="5" xmlns:context="http://www.springframework.org/schema/context"> <!-- 日志级别以及优先级排序: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL --> <!-- 变量配置 --> <Properties> <!-- 格式化输出: %d表示日期, %thread表示线程名, %-5level:级别从左显示5个字符宽度 %msg:日志消息,%n是换行符 %logger{36} 表示 Logger 名字最长36个字符 --> <property name="LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight{%-5level}[%thread] %style{%logger{36}}{cyan} : %msg%n" /> <!-- 定义日志存储的路径,不要配置相对路径 --> <property name="FILE_PATH" value="./logs/${sys:LOG_PATH}" /> <property name="FILE_NAME" value="ysdd-example-spring-boot" /> <property name="FILE_NAME" value="" /> </Properties> <appenders> <console name="Console" target="SYSTEM_OUT"> <!--输出日志的格式--> <PatternLayout pattern="${LOG_PATTERN}" disableAnsi="false" noConsoleNoAnsi="false"/> <!--控制台只输出level及其以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> </console> <!-- 这个会打印出所有的info及以下级别的信息,每次大小超过size, 则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-INFO_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖 --> <DefaultRolloverStrategy max="15"/> </RollingFile> <!-- 这个会打印出所有的warn及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-WARN_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> 
<DefaultRolloverStrategy max="15"/> </RollingFile> <!-- 这个会打印出所有的error及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingFile name="RollingFileError" fileName="${FILE_PATH}/error.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-ERROR_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="15"/> </RollingFile> <!--不再使用socket推送,改为fileseats推送到elasticsearch--> <!--<Socket name="logstash" host="127.0.0.1" port="4560" protocol="TCP"> <PatternLayout pattern="${LOG_PATTERN}"/> </Socket>--> </appenders> <!--Logger节点用来单独指定日志的形式,比如要为指定包下的class指定不同的日志级别等。--> <!--然后定义loggers,只有定义了logger并引入的appender,appender才会生效--> <loggers> <!--过滤掉spring和mybatis的一些无用的DEBUG信息--> <logger name="org.mybatis" level="info" additivity="false"> <AppenderRef ref="Console"/> </logger> <!--监控系统信息--> <!--若是additivity设为false,则 子Logger 只会在自己的appender里输出,而不会在 父Logger 的appender里输出。--> <Logger name="top.fate" level="info" additivity="false"> <AppenderRef ref="Console"/> </Logger> <root level="info"> <appender-ref ref="Console"/> <appender-ref ref="RollingFileInfo"/> <appender-ref ref="RollingFileWarn"/> <appender-ref ref="RollingFileError"/> <appender-ref ref="logstash"/> </root> </loggers> </configuration>下载安装ELK这里我就直接装我windows电脑了es中文社区这里我用7.16.2Elasticsearchkibanalogstashfilebeat_x64Elasticsearch直接elasticsearch.bat启动即可访问localhost:9200 验证启动成功,这里基本不会有什么问题Logstash创建配置文件conf下创建spring-boot-logstash.yml 输出的索引名称我改为动态方式,每个微服务每天有一个自己的索引文件。 [fields][servicename] -------> beats推送的时候会带过来input { beats { port => 5044 } tcp { mode => "server" host => "127.0.0.1" port => 4560 codec => json_lines } } filter { } output { elasticsearch { hosts => ["http://127.0.0.1:9200"] index => "%{[fields][servicename]}-%{+yyyy.MM.dd}" } } 运行Logstashlogstash.bat -f E:\elasticsearch\ELK\logstash-7.16.2\config\spring-boot-logstash.yml Filebeat修改filebeat.yaml # ============================== Filebeat inputs =============================== filebeat.inputs: - type: log enabled: true paths: - F:\2022Projects\SpringCloudAlibaba2022\logs\order-service\*.log #- c:\programdata\elasticsearch\logs\* fields: servicename: order-service multiline: pattern: '^\{' negate: true match: after timeout: 5s - type: log enabled: true paths: - F:\2022Projects\SpringCloudAlibaba2022\logs\user-service\*.log #- c:\programdata\elasticsearch\logs\* fields: servicename: user-service multiline: pattern: '^\{' negate: true match: after timeout: 5s # ============================== Filebeat modules ============================== filebeat.config.modules: # Glob pattern for configuration loading path: ${path.config}/modules.d/*.yml # Set to true to enable config reloading reload.enabled: false # ======================= Elasticsearch template setting ======================= setup.template.settings: index.number_of_shards: 1 # =================================== Kibana =================================== setup.kibana: # ---------------------------- Elasticsearch Output ---------------------------- #output.elasticsearch: # Array of hosts to connect to. #hosts: ["localhost:9200"] # Protocol - either `http` (default) or `https`. 
#protocol: "https" # Authentication credentials - either API key or username/password. #api_key: "id:api_key" #username: "elastic" #password: "changeme" # ------------------------------ Logstash Output ------------------------------- output.logstash: # The Logstash hosts hosts: ["localhost:5044"] # ================================= Processors ================================= processors: - add_host_metadata: when.not.contains.tags: forwarded - add_cloud_metadata: ~ - add_docker_metadata: ~ - add_kubernetes_metadata: ~运行beatsfilebeat.exe -e -c filebeat.ymlKibana修改config/kibana.ymlelasticsearch.hosts: ["http://localhost:9200"] i18n.locale: "zh-CN"直接kibana.bat启动访问http://localhost:5601/查看索引列表创建索引模式查看日志
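To smoke-test the whole Filebeat → Logstash → Elasticsearch pipeline, it helps to emit a couple of known log lines from inside a service. A minimal sketch — the class and package are hypothetical, and the package is deliberately outside top.fate so the root logger's RollingFile appenders (not the console-only top.fate logger) handle it:

```java
package demo.elk; // hypothetical package, NOT under top.fate, so the root logger applies

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class LogPipelineSmokeTest implements CommandLineRunner {
    private static final Logger LOGGER = LoggerFactory.getLogger(LogPipelineSmokeTest.class);

    @Override
    public void run(String... args) {
        // written to logs/<spring.application.name>/info.log and error.log by the
        // RollingFile appenders, where Filebeat tails them and ships to Logstash
        LOGGER.info("ELK pipeline smoke log {}", System.currentTimeMillis());
        LOGGER.error("ELK pipeline error smoke log", new IllegalStateException("test"));
    }
}
```

After starting the service, the lines should show up under the `<servicename>-yyyy.MM.dd` index in Kibana within a few seconds.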
前言最近在搞一套完整的云原生框架,详见 spring-cloud-alibaba专栏,目前已经整合的log4j2,但是想要一套可以实时观察日志的系统,就想到了ELKELKElasticsearch是个开源分布式搜索引擎,它的特点有:分布式,零配置,自动发现,索引自动分片,索引副本机制,restful风格接口,多数据源,自动搜索负载等。Logstash是一个完全开源的工具,他可以对你的日志进行收集、过滤,并将其存储供以后使用(如,搜索)。Kibana 也是一个开源和免费的工具,它Kibana可以为 Logstash 和 ElasticSearch 提供的日志分析友好的 Web 界面,可以帮助您汇总、分析和搜索重要数据日志。下载安装ELK这里我就直接装我windows电脑了es中文社区这里我用7.16.2ElasticsearchkibanalogstashElasticsearch直接elasticsearch.bat启动即可访问localhost:9200 验证启动成功,这里基本不会有什么问题Logstash创建配置文件conf下创建spring-boot-logstash.ymlinput { tcp { #模式选择为server mode => "server" #ip和端口根据自己情况填写,端口默认4560,对应下文logback.xml里appender中的destination host => "127.0.0.1" port => 4560 #格式json codec => json_lines } } filter { #过滤器,根据需要填写 } output { elasticsearch { action => "index" #这里填写es的地址,多个es要写成数组的形式 hosts => "127.0.0.1:9200" #存放的索引名称,这里每天会创建一个新的索引保存当天的日志 index => "springfate-log-%{+YYYY.MM.dd}" } }启动logstash.bat -f E:\elasticsearch\ELK\logstash-7.16.2\config\spring-boot-logstash.yml Kibana修改config/kibana.ymlelasticsearch.hosts: ["http://localhost:9200"] i18n.locale: "zh-CN"直接kibana.bat启动访问http://localhost:5601/创建索引模式查看日志SrpingBoot+Log4j2 整合 ELKLog4j2为我们提供SocketAppender,使得我们可以通过TCP或UDP发送日志修改配置文件springboot整合log4j2详见我的上一篇博客什么是日志门面? SpringBoot整合log4j2 ,日志落地修改log4j2-spring.xml文件附:完整xml<?xml version="1.0" encoding="UTF-8"?> <!-- Configuration后面的status,这个用于设置log4j2自身内部的信息输出,可以不设置, 当设置成trace时,可以看到log4j2内部各种详细输出 --> <!-- monitorInterval:Log4j能够自动检测修改配置 文件和重新配置本身,设置间隔秒数 --> <configuration monitorInterval="5"> <!-- 日志级别以及优先级排序: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL --> <!-- 变量配置 --> <Properties> <!-- 格式化输出: %d表示日期, %thread表示线程名, %-5level:级别从左显示5个字符宽度 %msg:日志消息,%n是换行符 %logger{36} 表示 Logger 名字最长36个字符 --> <property name="LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight{%-5level}[%thread] %style{%logger{36}}{cyan} : %msg%n" /> <!-- 定义日志存储的路径,不要配置相对路径 --> <property name="FILE_PATH" value="./logs" /> <property name="FILE_NAME" value="ysdd-example-spring-boot" /> </Properties> <appenders> <console name="Console" target="SYSTEM_OUT"> <!--输出日志的格式--> <PatternLayout pattern="${LOG_PATTERN}" disableAnsi="false" noConsoleNoAnsi="false"/> <!--控制台只输出level及其以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> </console> <!-- 这个会打印出所有的info及以下级别的信息,每次大小超过size, 则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-INFO_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖 --> <DefaultRolloverStrategy max="15"/> </RollingFile> <!-- 这个会打印出所有的warn及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-WARN_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> 
<DefaultRolloverStrategy max="15"/> </RollingFile> <!-- 这个会打印出所有的error及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingFile name="RollingFileError" fileName="${FILE_PATH}/error.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-ERROR_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="15"/> </RollingFile> <Socket name="logstash" host="127.0.0.1" port="4560" protocol="TCP"> <PatternLayout pattern="${LOG_PATTERN}"/> </Socket> </appenders> <!--Logger节点用来单独指定日志的形式,比如要为指定包下的class指定不同的日志级别等。--> <!--然后定义loggers,只有定义了logger并引入的appender,appender才会生效--> <loggers> <!--过滤掉spring和mybatis的一些无用的DEBUG信息--> <logger name="org.mybatis" level="info" additivity="false"> <AppenderRef ref="Console"/> </logger> <!--监控系统信息--> <!--若是additivity设为false,则 子Logger 只会在自己的appender里输出,而不会在 父Logger 的appender里输出。--> <Logger name="top.fate" level="info" additivity="false"> <AppenderRef ref="Console"/> </Logger> <root level="info"> <appender-ref ref="Console"/> <appender-ref ref="RollingFileInfo"/> <appender-ref ref="RollingFileWarn"/> <appender-ref ref="RollingFileError"/> <appender-ref ref="logstash"/> </root> </loggers> </configuration> 启动测试这里我让他报一个dubbo找不到服务的错 进入kibana查询 这里已经可以查到springboot服务的日志了
@[toc]什么是日志门面?市面上的日志框架:JUL、JCL、Jboss-logging、logback、log4j、log4j2、slf4j 等等日志门面就是在日志框架和应用程序之间架设一个沟通的桥梁,对于应用程序来说,无论底层的日志框架如何变,都不需要有任何感知日志门面可以理解为java中的一个interface(接口),而日志框架就是就是实现类SpringBoot默认的日志门面和日志框架下图是springboot默认的日志框架 ,slf4j作为日志门面,而logback作为日志框架的实现SLF4JJava 的简单日志记录外观 (SLF4J) 用作各种日志记录框架(如 java.util.logging、logback 和 reload4j)的简单外观或抽象。SLF4J 允许最终用户在部署时插入所需的日志记录框架。请注意,启用 SLF4J 库/应用程序意味着仅添加一个强制依赖项,即 slf4j-api-1.7.36.jar。logback在springboot中如果要使用自己的配置直接在resources创建配置文件就可以了<?xml version="1.0" encoding="UTF-8"?> <configuration> <!-- %m输出的信息,%p日志级别,%t线程名,%d日期,%c类的全名,%i索引【从数字0开始递增】,,, --> <!-- appender是configuration的子节点,是负责写日志的组件。 --> <!-- ConsoleAppender:把日志输出到控制台 --> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern>%d %p (%file:%line\)- %m%n</pattern> <!-- 控制台也要使用UTF-8,不要使用GBK,否则会中文乱码 --> <charset>UTF-8</charset> </encoder> </appender> <!-- RollingFileAppender:滚动记录文件,先将日志记录到指定文件,当符合某个条件时,将日志记录到其他文件 --> <!-- 以下的大概意思是:1.先按日期存日志,日期变了,将前一天的日志文件名重命名为XXX%日期%索引,新的日志仍然是demo.log --> <!-- 2.如果日期没有发生变化,但是当前日志的文件大小超过1KB时,对当前日志进行分割 重命名--> <appender name="log-info" class="ch.qos.logback.core.rolling.RollingFileAppender"> <File>log/info.log</File> <!-- rollingPolicy:当发生滚动时,决定 RollingFileAppender 的行为,涉及文件移动和重命名。 --> <!-- TimeBasedRollingPolicy: 最常用的滚动策略,它根据时间来制定滚动策略,既负责滚动也负责出发滚动 --> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 活动文件的名字会根据fileNamePattern的值,每隔一段时间改变一次 --> <!-- 文件名:log/demo.2017-12-05.0.log --> <fileNamePattern>log/info.%d.%i.log</fileNamePattern> <!-- 每产生一个日志文件,该日志文件的保存期限为30天 --> <maxHistory>30</maxHistory> <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP"> <!-- maxFileSize:这是活动文件的大小,默认值是10MB,测试时可改成1KB看效果 --> <maxFileSize>2MB</maxFileSize> </timeBasedFileNamingAndTriggeringPolicy> </rollingPolicy> <encoder> <!-- pattern节点,用来设置日志的输入格式 --> <pattern> %d %p (%file:%line\)- %m%n </pattern> <!-- 记录日志的编码:此处设置字符集 - --> <charset>UTF-8</charset> </encoder> </appender> <!-- ERROR日志输出--> <appender name="error-info" class="ch.qos.logback.core.rolling.RollingFileAppender"> <File>log/error.log</File> <filter class="ch.qos.logback.classic.filter.ThresholdFilter"> <level>ERROR</level> <!-- 日志过滤级别 --> </filter> <!-- rollingPolicy:当发生滚动时,决定 RollingFileAppender 的行为,涉及文件移动和重命名。 --> <!-- TimeBasedRollingPolicy: 最常用的滚动策略,它根据时间来制定滚动策略,既负责滚动也负责出发滚动 --> <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"> <!-- 活动文件的名字会根据fileNamePattern的值,每隔一段时间改变一次 --> <!-- 文件名:log/demo.2017-12-05.0.log --> <fileNamePattern>log/error.%d.%i.log</fileNamePattern> <!-- 每产生一个日志文件,该日志文件的保存期限为30天 --> <maxHistory>30</maxHistory> <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP"> <!-- maxFileSize:这是活动文件的大小,默认值是10MB,测试时可改成1KB看效果 --> <maxFileSize>2MB</maxFileSize> </timeBasedFileNamingAndTriggeringPolicy> </rollingPolicy> <encoder> <!-- pattern节点,用来设置日志的输入格式 --> <pattern> %d %p (%file:%line\)- %m%n </pattern> <!-- 记录日志的编码:此处设置字符集 - --> <charset>UTF-8</charset> </encoder> </appender> <!-- 控制台输出日志级别 --> <root level="info"> <appender-ref ref="STDOUT" /> <appender-ref ref="error-info"/> <appender-ref ref="log-info" /> </root> <!-- 指定项目中某个包,当有日志操作行为时的日志记录级别 --> <!-- com.cars.ysdd为根包,也就是只要是发生在这个根包下面的所有日志操作行为的权限都是DEBUG --> <!-- 级别依次为【从高到低】:FATAL > ERROR > WARN > INFO > DEBUG > TRACE --> <logger name="top.fate" level="info" > <appender-ref ref="log-info" /> </logger> </configuration>application.properties#==================== 日志配合·标准 ============================ 
logging.config=classpath:logback.xml推荐使用log4j2而不是logbackLog4j2Apache Log4j 2是对Log4j的升级,它比其前身Log4j 1.x提供了重大改进,并提供了Logback中可用的许多改进,同时修复了Logback架构中的一些问题。现在最优秀的Java日志框架是Log4j2,没有之一。根据官方的测试表明,在多线程环境下,Log4j2的异步日志表现更加优秀。在异步日志中,Log4j2使用独立的线程去执行I/O操作,可以极大地提升应用程序的性能。SpringBoot整合log4j21.依赖修改pom<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <!--排除logback--> <exclusions> <exclusion> <artifactId>spring-boot-starter-logging</artifactId> <groupId>org.springframework.boot</groupId> </exclusion> </exclusions> </dependency> <!--添加log4j2--> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> </dependency>2.配置文件 log4j2-spring.xml<?xml version="1.0" encoding="UTF-8"?> <!-- Configuration后面的status,这个用于设置log4j2自身内部的信息输出,可以不设置, 当设置成trace时,可以看到log4j2内部各种详细输出 --> <!-- monitorInterval:Log4j能够自动检测修改配置 文件和重新配置本身,设置间隔秒数 --> <configuration monitorInterval="5"> <!-- 日志级别以及优先级排序: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL --> <!-- 变量配置 --> <Properties> <!-- 格式化输出: %d表示日期, %thread表示线程名, %-5level:级别从左显示5个字符宽度 %msg:日志消息,%n是换行符 %logger{36} 表示 Logger 名字最长36个字符 --> <property name="LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} %highlight{%-5level}[%thread] %style{%logger{36}}{cyan} : %msg%n" /> <!-- 定义日志存储的路径,不要配置相对路径 --> <property name="FILE_PATH" value="./logs" /> <property name="FILE_NAME" value="ysdd-example-spring-boot" /> </Properties> <appenders> <console name="Console" target="SYSTEM_OUT"> <!--输出日志的格式--> <PatternLayout pattern="${LOG_PATTERN}" disableAnsi="false" noConsoleNoAnsi="false"/> <!--控制台只输出level及其以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> </console> <!-- 这个会打印出所有的info及以下级别的信息,每次大小超过size, 则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档 --> <RollingFile name="RollingFileInfo" fileName="${FILE_PATH}/info.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-INFO_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖 --> <DefaultRolloverStrategy max="15"/> </RollingFile> <!-- 这个会打印出所有的warn及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingFile name="RollingFileWarn" fileName="${FILE_PATH}/warn.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-WARN_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="15"/> </RollingFile> <!-- 这个会打印出所有的error及以下级别的信息,每次大小超过size,则这size大小的日志会自动存入按年份-月份建立的文件夹下面并进行压缩,作为存档--> <RollingFile name="RollingFileError" fileName="${FILE_PATH}/error.log" filePattern="${FILE_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-ERROR_%i.log.gz"> <!--控制台只输出level及以上级别的信息(onMatch),其他的直接拒绝(onMismatch)--> <ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY"/> <PatternLayout pattern="${LOG_PATTERN}"/> <Policies> <!--interval属性用来指定多久滚动一次,默认是1 hour--> <TimeBasedTriggeringPolicy 
interval="1"/> <SizeBasedTriggeringPolicy size="20MB"/> </Policies> <!-- DefaultRolloverStrategy属性如不设置,则默认为最多同一文件夹下7个文件开始覆盖--> <DefaultRolloverStrategy max="15"/> </RollingFile> </appenders> <!--Logger节点用来单独指定日志的形式,比如要为指定包下的class指定不同的日志级别等。--> <!--然后定义loggers,只有定义了logger并引入的appender,appender才会生效--> <loggers> <!--过滤掉spring和mybatis的一些无用的DEBUG信息--> <logger name="org.mybatis" level="info" additivity="false"> <AppenderRef ref="Console"/> </logger> <!--监控系统信息--> <!--若是additivity设为false,则 子Logger 只会在自己的appender里输出,而不会在 父Logger 的appender里输出。--> <Logger name="top.fate" level="info" additivity="false"> <AppenderRef ref="Console"/> </Logger> <root level="info"> <appender-ref ref="Console"/> <appender-ref ref="RollingFileInfo"/> <appender-ref ref="RollingFileWarn"/> <appender-ref ref="RollingFileError"/> </root> </loggers> </configuration>application.propertieslogging.level.root=error日志框架已经生效,同时本地
@[toc]前言前段时间我不是做MP的动态表名嘛,详见Mybatis-Plus 动态表名,然后我去MP的动态表名的demo中看到了动态表名的传值方式,没错就是ThreadLocal。这个是他的传递辅助类public class RequestDataHelper { /** * 请求参数存取 */ private static final ThreadLocal<Map<String, Object>> REQUEST_DATA = new ThreadLocal<>(); /** * 设置请求参数 * * @param requestData 请求参数 MAP 对象 */ public static void setRequestData(Map<String, Object> requestData) { REQUEST_DATA.set(requestData); } /** * 获取请求参数 * * @param param 请求参数 * @return 请求参数 MAP 对象 */ public static <T> T getRequestData(String param) { Map<String, Object> dataMap = getRequestData(); if (CollectionUtils.isNotEmpty(dataMap)) { return (T) dataMap.get(param); } return null; } }附 我的替换表名的拦截器代码@Configuration @MapperScan("com.cars.ysdd.clts.domain.clts.dao") public class MybatisPlusConfig { static List<String> tableList(){ List<String> tables = new ArrayList<>(); tables.add("user"); return tables; } @Bean public MybatisPlusInterceptor mybatisPlusInterceptor() { MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor(); DynamicTableNameInnerInterceptor dynamicTableNameInnerInterceptor = new DynamicTableNameInnerInterceptor(); dynamicTableNameInnerInterceptor.setTableNameHandler((sql, tableName) -> { // 获取参数方法 String newTable = null; for (String table : tableList()) { newTable = RequestDataHelper.getRequestData(table); if (table.equals(tableName) && newTable!=null){ tableName = newTable; break; } } return tableName; }); interceptor.addInnerInterceptor(dynamicTableNameInnerInterceptor); return interceptor; } }然后代码中是这么使用的。RequestDataHelper.setRequestData(new HashMap<String, Object>() {{ put("user", "user_2018"); }});那会儿就纳闷如果有并发的话,肯定会出问题啊,然后我就想试试,于是我新开了个线程进行赋值。RequestDataHelper.setRequestData(new HashMap<String, Object>() {{ put("user", "user_2019"); }}); CompletableFuture<Void> completableFuture = CompletableFuture.runAsync(() -> { RequestDataHelper.setRequestData(new HashMap<String, Object>() {{ put("user", "user_2018"); }}); }); completableFuture.get(); User user = userMapper.selectById(1); //sql-> select * from user_2019 where id = 1 最后发现执行的sql表名居然还是user_2019,可明明user_2018是最后赋值的呀,带着疑惑我去研究了。ThreadLocal后来一想,ThreadLocal这个类名可以顾名思义的进行理解,表示线程的“本地变量”,即每个线程都拥有该变量副本,达到人手一份的效果,各用各的这样就可以避免共享资源的竞争。ThreadLocal的作用主要是做数据隔离,填充的数据只属于当前线程,变量的数据对别的线程而言是相对隔离的,在多线程环境下,如何防止自己的变量被其它线程篡改。有什么问题?ThreadLocal在保存的时候会把自己当做Key存在ThreadLocalMap中,正常情况应该是key和value都应该被外界强引用才对,但是现在key被设计成WeakReference弱引用了。只具有弱引用的对象拥有更短暂的生命周期,在垃圾回收器线程扫描它所管辖的内存区域的过程中, 一旦发现了只具有弱引用的对象,不管当前内存空间足够与否,都会回收它的内存。 不过,由于垃圾回收器是一个优先级很低的线程,因此不一定会很快发现那些只具有弱引用的对象。 这就导致了一个问题,ThreadLocal在没有外部强引用时,发生GC时会被回收,如果创建ThreadLocal的线程一直持续运行,那么这个Entry对象中的value就有可能一直得不到回收,发生内存泄露。就比如线程池里面的线程,线程都是复用的,那么之前的线程实例处理完之后,出于复用的目的线程依然存活,所以,ThreadLocal设定的value值被持有,导致内存泄露。解决办法最后给他清空就行了REQUEST_DATA .remove();这些解释是我在关注的两位博主文章中看到的,沉默王二,三太子敖丙,希望有一天也可以成为像他们一样的大佬
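The behavior above falls straight out of per-thread storage. Here is a self-contained sketch mirroring RequestDataHelper that you can run to watch both the isolation and the cleanup:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class ThreadLocalIsolationDemo {
    private static final ThreadLocal<Map<String, Object>> REQUEST_DATA = new ThreadLocal<>();

    public static void main(String[] args) throws Exception {
        Map<String, Object> mainData = new HashMap<>();
        mainData.put("user", "user_2019");
        REQUEST_DATA.set(mainData); // the main thread's own copy

        CompletableFuture.runAsync(() -> {
            // runs on a ForkJoinPool thread: this writes THAT thread's copy,
            // the main thread's value is untouched
            Map<String, Object> asyncData = new HashMap<>();
            asyncData.put("user", "user_2018");
            REQUEST_DATA.set(asyncData);
        }).get();

        // prints user_2019 -- exactly why the interceptor resolved user_2019 above
        System.out.println(REQUEST_DATA.get().get("user"));

        // always clean up, especially on pooled (reused) threads,
        // to avoid stale values and the leak described above
        REQUEST_DATA.remove();
    }
}
```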
@[toc]I. MySQL 1. sum 1.1 Conditional SUM: select SUM( IF(condition, value_column, default_value) ) alias from dual — MySQL's IF(expr, v1, v2) takes exactly three arguments and has no END keyword (END belongs to CASE ... END, shown in the Oracle section). 1.2 Example -- sum field1, but only over rows where field2 > 0 select SUM( IF(field2>0, field1, 0) ) field1 from dual (result screenshot omitted) 2. count 2.1 Conditional COUNT: select COUNT( IF(condition, TRUE, NULL) ) alias from dual — COUNT skips NULLs, so rows failing the condition are not counted. 2.2 Example: with the same data as above, count the rows where field2 > 0: select COUNT( IF(field2>0, TRUE, NULL) ) 'matching rows' from dual (result screenshot omitted) II. Oracle 1. sum 1.1 Conditional SUM: select sum( case when condition then value_when_true else value_when_false end ) alias from dual 1.2 Example: select sum( case when field1>0 then field2 else 0 end ) field2 from dual 2. count 2.1 Conditional COUNT: select count( case when condition then 1 else null end ) alias from dual 2.2 Example: select count( case when field1>0 then 1 else null end ) field3 from dual
Previous post: SpringCloudAlibaba (5) integrating GateWay (microservice gateway)@[toc]What is Seata? Seata is an open-source distributed transaction solution dedicated to providing high-performance, easy-to-use distributed transaction services. Seata offers the AT, TCC, SAGA and XA transaction modes, building a one-stop distributed solution. AT mode prerequisites: a relational database supporting local ACID transactions, and a Java application that accesses it through JDBC. Overall mechanism — an evolution of two-phase commit: phase one commits the business data and the rollback log in the same local transaction, releasing local locks and connection resources; phase two either commits asynchronously (very fast) or rolls back by reverse compensation using the phase-one rollback log. See the seata official docs. Integrating seata into the project 1. Pull and run the seata server 1.1 Pull seata: https://github.com/seata/seata/releases 1.2 Edit the config. Following the docs, edit conf/registry.conf and point it at your own nacos. Make sure type is changed to nacos; creating a dedicated namespace is recommended because there are a lot of config entries (screenshots omitted). 1.3 Download config.txt, switch the store mode to db and fill in the connection-pool settings. My mysql is 8.0, so store.db.driverClassName=com.mysql.jdbc.Driver must be changed to store.db.driverClassName=com.mysql.cj.jdbc.Driver 1.4 Download the seata database schema: https://github.com/seata/seata/tree/develop/script/server/db — create a seata database and execute mysql.sql -- -------------------------------- The script used when storeMode is 'db' -------------------------------- -- the table to store GlobalSession data CREATE TABLE IF NOT EXISTS `global_table` ( `xid` VARCHAR(128) NOT NULL, `transaction_id` BIGINT, `status` TINYINT NOT NULL, `application_id` VARCHAR(32), `transaction_service_group` VARCHAR(32), `transaction_name` VARCHAR(128), `timeout` INT, `begin_time` BIGINT, `application_data` VARCHAR(2000), `gmt_create` DATETIME, `gmt_modified` DATETIME, PRIMARY KEY (`xid`), KEY `idx_status_gmt_modified` (`status` , `gmt_modified`), KEY `idx_transaction_id` (`transaction_id`) ) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4; -- the table to store BranchSession data CREATE TABLE IF NOT EXISTS `branch_table` ( `branch_id` BIGINT NOT NULL, `xid` VARCHAR(128) NOT NULL, `transaction_id` BIGINT, `resource_group_id` VARCHAR(32), `resource_id` VARCHAR(256), `branch_type` VARCHAR(8), `status` TINYINT, `client_id` VARCHAR(64), `application_data` VARCHAR(2000), `gmt_create` DATETIME(6), `gmt_modified` DATETIME(6), PRIMARY KEY (`branch_id`), KEY `idx_xid` (`xid`) ) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4; -- the table to store lock data CREATE TABLE IF NOT EXISTS `lock_table` ( `row_key` VARCHAR(128) NOT NULL, `xid` VARCHAR(128), `transaction_id` BIGINT, `branch_id` BIGINT NOT NULL, `resource_id` VARCHAR(256), `table_name` VARCHAR(32), `pk` VARCHAR(36), `status` TINYINT NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking', `gmt_create` DATETIME, `gmt_modified` DATETIME, PRIMARY KEY (`row_key`), KEY `idx_status` (`status`), KEY `idx_branch_id` (`branch_id`) ) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4; CREATE TABLE IF NOT EXISTS `distributed_lock` ( `lock_key` CHAR(20) NOT NULL, `lock_value` VARCHAR(20) NOT NULL, `expire` BIGINT, primary key (`lock_key`) ) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4; INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('HandleAllSession', ' ', 0); 1.5 Download the script that pushes the config to nacos: https://github.com/seata/seata/tree/develop/script/config-center/nacos Put config.txt in the seata root directory and nacos-config.sh under the conf directory. 1.6 Push the config to nacos. I use git bash here to run the shell script: sh nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -t seata -u nacos -w nacos # h: nacos host ip # p: nacos port # g: target group # t: the namespace created in step 1.2 # u: nacos username # w: nacos password After the push, each line of config.txt becomes one config entry in nacos, so from now on you can edit the config directly in nacos instead of re-running the script — provided registry.conf points at nacos. 1.7 Start seata-server: run bin/seata-server.bat — startup complete. 2.
2. Configure the client
2.1 Add the dependency
The version is already defined as a property in the parent project (see SpringCloudAlibaba Part 1 on building the parent project and pushing it to the git repository):
<alibaba.seata.version>1.4.2</alibaba.seata.version>
Add the dependency to the service-starter-parent module. service-starter-parent is a pom module that every microservice depends on, so adding a dependency there makes it available to all services; see SpringCloudAlibaba Part 2 on integrating the Nacos registry/config center for how it was set up.
pom.xml:
<!-- seata -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>${alibaba.seata.version}</version>
</dependency>
2.2 Configure Seata
Create seata-client.yaml in Nacos:
seata:
  enabled: true
  application-id: seata-service
  tx-service-group: default_tx_group
  enable-auto-data-source-proxy: true
  config:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      namespace: seata
      group: SEATA_GROUP
      username: nacos
      password: nacos
  registry:
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      namespace: seata
      application: seata-server
      group: SEATA_GROUP
      username: nacos
      password: nacos
Then append it to the config list in bootstrap.yml.
2.3 Add the undo_log table
Every database that takes part in a distributed transaction needs this table:
-- for AT mode you must to init this sql for you business database. the seata server not need it.
CREATE TABLE IF NOT EXISTS `undo_log`
(
    `branch_id` BIGINT NOT NULL COMMENT 'branch transaction id',
    `xid` VARCHAR(128) NOT NULL COMMENT 'global transaction id',
    `context` VARCHAR(128) NOT NULL COMMENT 'undo_log context,such as serialization',
    `rollback_info` LONGBLOB NOT NULL COMMENT 'rollback info',
    `log_status` INT(11) NOT NULL COMMENT '0:normal status,1:defense status',
    `log_created` DATETIME(6) NOT NULL COMMENT 'create datetime',
    `log_modified` DATETIME(6) NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB AUTO_INCREMENT = 1 DEFAULT CHARSET = utf8mb4 COMMENT ='AT transaction mode undo table';
2.4 Start
Adding @GlobalTransactional to a method enables a distributed transaction; see the Seata official docs, and a minimal sketch follows the script notes at the end of this post. I'll test it myself when I get time.
timeoutMills — transaction timeout in milliseconds, default 60000 ms
name — transaction group name, default "default"
Script directory notes:
client — configs and SQL used on the client side
at: undo_log DDL for AT mode
conf: client configuration files
saga: DDL for the tables required by SAGA mode
spring: configuration files for Spring Boot applications
server — SQL and deployment scripts for the server side
db: DDL for the tables required when the server's store mode is db
docker-compose: scripts for deploying the server with docker-compose
helm: scripts for deploying the server with Helm
kubernetes: scripts for deploying the server with Kubernetes
config-center — initialization scripts for the various config centers; each one reads config.txt and writes it into the config center
nacos: pushes configuration into Nacos
zk: pushes configuration into Zookeeper; depends on Zookeeper's own scripts, which you download separately; ZooKeeper settings can go in zk-params.txt or be entered at run time
apollo: pushes configuration into Apollo; the address and port can go in apollo-params.txt or be entered at run time
etcd3: pushes configuration into Etcd3
consul: pushes configuration into consul
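As promised in 2.4, here is a minimal sketch of using @GlobalTransactional. Everything except the annotation itself is hypothetical: OrderService, OrderMapper, StorageClient and the method names are illustrative stand-ins, not part of this project.

import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // hypothetical collaborators, stand-ins for a real mapper and a Feign client
    interface OrderMapper { void insertOrder(Long productId, int count); }
    interface StorageClient { void deduct(Long productId, int count); }

    private final OrderMapper orderMapper;
    private final StorageClient storageClient;

    public OrderService(OrderMapper orderMapper, StorageClient storageClient) {
        this.orderMapper = orderMapper;
        this.storageClient = storageClient;
    }

    // name = transaction label, timeoutMills = global transaction timeout (default 60000)
    @GlobalTransactional(name = "create-order", timeoutMills = 60000, rollbackFor = Exception.class)
    public void createOrder(Long productId, int count) {
        orderMapper.insertOrder(productId, count); // local branch: business row plus undo_log
        storageClient.deduct(productId, count);    // remote branch: if this throws, both roll back
    }
}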
@[toc]
Background
I had some spare time these two days and wanted to set up a microservice stack on the latest versions (latest as of 2022-04-14) and blog the process. Gateway version: 3.3.1.
Error
Whitelabel Error Page
This application has no configured error view, so you are seeing this as a fallback.
Sat Apr 16 13:54:55 CST 2022
[ea5eb192-1] There was an unexpected error (type=Service Unavailable, status=503).
Fix
Add this dependency to the pom:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-loadbalancer</artifactId>
</dependency>
The cause: starting with Spring Cloud 2020.0, Netflix Ribbon was removed, so there is no client-side load balancer on the classpath unless you add spring-cloud-starter-loadbalancer yourself; without one, load-balanced routes fail with 503. On the older 2.2.7-era stack, Ribbon was pulled in transitively, which is why this dependency wasn't needed back then.
@[toc]
I did exactly this a few days ago, swapping the company machine's 1TB HDD for a 500GB SSD.
1. Download a partition assistant
Only the Professional edition has the "migrate OS" feature (分区助手专业版).
1.1 Open the partition assistant and click the new disk; the following operations will apply to it.
2. The "Migrate OS to SSD" wizard
2.1 Start the "Migrate OS to SSD" wizard.
2.2 Select the unallocated space on the target disk and click Next.
2.3 If you want the system to take up the whole SSD, drag the slider to extend across the entire disk, then click Next.
3. Execute
3.1 Back on the main screen, click "Apply" in the top left.
3.2 A prompt warns that the operation requires a reboot; click Execute.
3.3 Choose "Restart into Windows PE mode" and confirm. Windows PE is created during this step, which can take a few minutes; once it's built, the machine reboots into PE automatically.
3.4 The PE phase runs entirely on its own and needs no intervention; just wait for it to finish and boot back to the desktop.
@[toc]
Background
I had some spare time these two days and wanted to set up a microservice stack on the latest versions (latest as of 2022-04-14) and blog the process.
Error
Description: The dependencies of some of the beans in the application context form a cycle: com.alibaba.cloud.dubbo.autoconfigure.DubboLoadBalancedRestTemplateAutoConfiguration (field private com.alibaba.cloud.dubbo.metadata.repository.DubboServiceMetadataRepository com.alibaba.cloud.dubbo.autoconfigure.DubboLoadBalancedRestTemplateAutoConfiguration.repository) ↓ com.alibaba.cloud.dubbo.metadata.repository.DubboServiceMetadataRepository (field private com.alibaba.cloud.dubbo.service.DubboMetadataServiceProxy com.alibaba.cloud.dubbo.metadata.repository.DubboServiceMetadataRepository.dubboMetadataConfigServiceProxy) ↓ com.alibaba.cloud.dubbo.service.DubboMetadataServiceProxy ┌─────┐ | com.alibaba.cloud.dubbo.autoconfigure.DubboMetadataAutoConfiguration (field private com.alibaba.cloud.dubbo.metadata.resolver.MetadataResolver com.alibaba.cloud.dubbo.autoconfigure.DubboMetadataAutoConfiguration.metadataResolver) └─────┘
Action: Relying upon circular references is discouraged and they are prohibited by default. Update your application to remove the dependency cycle between beans. As a last resort, it may be possible to break the cycle automatically by setting spring.main.allow-circular-references to true.
What it means: relying on circular references is discouraged, and they are prohibited by default. Update the application to remove the dependency cycle between beans; as a last resort, you can let Spring break the cycle automatically by setting spring.main.allow-circular-references to true.
Fix
application.properties:
spring.main.allow-circular-references=true
As the message shows, the cycle lives inside spring-cloud-alibaba's Dubbo auto-configuration classes; Spring Boot 2.6 started rejecting circular references by default, which is why older setups never complained. For now this property is the workaround.
@[TOC]
1. Prepare the Linux machines
1.1 Prepare a virtual machine
I'm using a freshly imported local VM (centos7.9_2009).
1.2 Install the required dependencies
Following the KubeSphere documentation, install the prerequisites socat and conntrack (see the KubeSphere site):
yum update
yum install -y curl
yum install -y socat
yum install -y vim
yum install -y conntrack
Disable the swap partition and the firewall:
swapoff -a
#check the firewall status
firewall-cmd --state
#CentOS 7 uses firewalld as its firewall by default
#stop firewalld
systemctl stop firewalld.service
#disable firewalld on boot
systemctl disable firewalld.service
1.3 Export the machine configured above, then clone two more copies
Set the hostname on each node:
hostnamectl set-hostname master
hostnamectl set-hostname node1
hostnamectl set-hostname node2
1.4 Verify that the three machines can reach each other over SSH
Mine are all reachable, so I can continue.
2. Download KubeKey (master node only)
First make sure you download KubeKey from the correct region:
export KKZONE=cn
Download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.0.0 sh -
Make kk executable:
chmod +x kk
3. Create the cluster
3.1 Create a sample configuration file
./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
I'm using Kubernetes 1.21.5 and KubeSphere 3.2.1, written to k8s.yaml:
./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 -f k8s.yaml
3.2 Edit the configuration file
vim k8s.yaml
4. Create the cluster from the configuration file
./kk create cluster -f k8s.yaml
When installation finishes you'll see the completion log, and you can then open the KubeSphere web console at IP:30880 with the default account and password (admin/P@88w0rd).
@[TOC]
Error
The init command:
kubeadm init --image-repository registry.aliyuncs.com/google_containers
failed with:
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Trying to initialize again right away gives:
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Environment
os: centos7.9_2009
kubeadm: 1.23.5-0
kubectl: 1.23.5-0
kubelet: 1.23.5-0
Fix
Disable the swap partition (by default the kubelet refuses to run with swap enabled):
swapoff -a
Then reset and re-initialize:
kubeadm reset
kubeadm init --image-repository registry.aliyuncs.com/google_containers
@[TOC]
About kaptcha
kaptcha is a CAPTCHA library extended from simplecaptcha. Out of the box it is very easy to set up and use, and the default output produces a CAPTCHA that is quite hard to break automatically. If you want to change how the output looks, there are several configuration options, and the framework is modular, so you can write your own distortion code.
1. Add the dependency
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>kaptcha-spring-boot-starter</artifactId>
    <version>1.0.0</version>
</dependency>
2. Code
Configuration class:
import com.google.code.kaptcha.Constants;
import com.google.code.kaptcha.impl.DefaultKaptcha;
import com.google.code.kaptcha.util.Config;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.util.Properties;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2021/8/2 10:47
 */
@Configuration
public class CaptchaConfig {
    /**
     * CAPTCHA configuration (the Kaptcha config class)
     */
    @Bean
    @Qualifier("captchaProducer")
    public DefaultKaptcha kaptcha() {
        DefaultKaptcha kaptcha = new DefaultKaptcha();
        Properties properties = new Properties();
        // number of characters
        properties.setProperty(Constants.KAPTCHA_TEXTPRODUCER_CHAR_LENGTH, "4");
        // character spacing
        properties.setProperty(Constants.KAPTCHA_TEXTPRODUCER_CHAR_SPACE, "8");
        // noise color
        properties.setProperty(Constants.KAPTCHA_NOISE_COLOR, "red");
        // noise implementation
        properties.setProperty(Constants.KAPTCHA_NOISE_IMPL, "com.google.code.kaptcha.impl.DefaultNoise");
        // properties.setProperty(Constants.KAPTCHA_NOISE_IMPL, "com.google.code.kaptcha.impl.NoNoise");
        // image style
        properties.setProperty(Constants.KAPTCHA_OBSCURIFICATOR_IMPL, "com.google.code.kaptcha.impl.WaterRipple");
        // character source
        properties.setProperty(Constants.KAPTCHA_TEXTPRODUCER_CHAR_STRING, "0123456789qwertyuiopasdfghjklzxcvbnmQWERTYUIOPLKJHGFDSAZXCVBNM");
        Config config = new Config(properties);
        kaptcha.setConfig(config);
        return kaptcha;
    }
}
Service interface:
package wxl.top.service;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2021/8/2 11:44
 */
public interface CaptchaService {
    void getCaptcha(HttpServletRequest request, HttpServletResponse response);
}
Implementation:
import com.google.code.kaptcha.Producer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

import javax.imageio.ImageIO;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.util.concurrent.TimeUnit;

@Service
public class CaptchaServiceImpl implements CaptchaService {

    @Autowired
    private StringRedisTemplate redisTemplate;
    @Autowired
    private Producer captchaProducer;

    @Override
    public void getCaptcha(HttpServletRequest request, HttpServletResponse response) {
        String captchaText = captchaProducer.createText();
        String key = getCaptchaKey(request);
        // expire after five minutes
        redisTemplate.opsForValue().set(key, captchaText, 5, TimeUnit.MINUTES);
        BufferedImage bufferedImage = captchaProducer.createImage(captchaText);
        ServletOutputStream outputStream = null;
        try {
            outputStream = response.getOutputStream();
            ImageIO.write(bufferedImage, "jpg", outputStream);
            outputStream.flush();
            outputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private String getCaptchaKey(HttpServletRequest httpServletRequest) {
        String ipAddr = CommonUtil.getIpAddr(httpServletRequest);
        String userAgent = httpServletRequest.getHeader("User-Agent");
        String key = "user-service:captcha:" + CommonUtil.MD5(ipAddr + userAgent);
        return key;
    }
}
Controller:
@RestController
@RequestMapping("captcha")
public class CaptchaController {
    @Autowired
    private CaptchaService captchaService;

    @GetMapping(value = "get_captcha")
    public void getCaptcha(HttpServletRequest request, HttpServletResponse response) {
        captchaService.getCaptcha(request, response);
    }
}
3. Demo
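The post only shows generating the image. As a minimal sketch of the matching verification step (the endpoint name and the captcha parameter are my assumptions; the key scheme reuses getCaptchaKey above, and CommonUtil is the same helper the service uses):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletRequest;

// Hypothetical verification endpoint; reuses the Redis key scheme from CaptchaServiceImpl.
@RestController
@RequestMapping("captcha")
public class CaptchaCheckController {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @GetMapping("check")
    public String check(@RequestParam String captcha, HttpServletRequest request) {
        // Rebuild the same key the image text was stored under (IP + User-Agent hash).
        String key = "user-service:captcha:" + CommonUtil.MD5(
                CommonUtil.getIpAddr(request) + request.getHeader("User-Agent"));
        String expected = redisTemplate.opsForValue().get(key);
        if (expected != null && expected.equalsIgnoreCase(captcha)) {
            redisTemplate.delete(key); // one-time use
            return "ok";
        }
        return "captcha mismatch or expired";
    }
}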
@[TOC]前言今日访问mybatis-plus 官网偶然看到一个爱组搭广告,出于好奇点进去看了一下1.构建maven<dependency> <groupId>com.aizuda</groupId> <artifactId>aizuda-monitor</artifactId> <version>1.0.0</version> </dependency>2.代码示例package cn.itcast.user.web; import com.aizuda.monitor.DiskInfo; import com.aizuda.monitor.OshiMonitor; import com.alibaba.nacos.common.utils.CollectionUtils; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RestController; import oshi.software.os.OSProcess; import oshi.software.os.OperatingSystem; import javax.annotation.Resource; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2022/4/2 15:16 */ @RestController @RequestMapping("/v1/monitor") public class MonitorController { // 注入监控模块 Oshi 调用类 @Resource private OshiMonitor oshiMonitor; @GetMapping("/server") public Map<String, Object> monitor() { Map<String, Object> server = new HashMap<>(5); // 系统信息 server.put("sysInfo", oshiMonitor.getSysInfo()); // CPU 信息 server.put("cupInfo", oshiMonitor.getCpuInfo()); // 内存信息 server.put("memoryInfo", oshiMonitor.getMemoryInfo()); // Jvm 虚拟机信息 server.put("jvmInfo", oshiMonitor.getJvmInfo()); // 磁盘信息 List<DiskInfo> diskInfos = oshiMonitor.getDiskInfos(); server.put("diskInfos", diskInfos); if (CollectionUtils.isNotEmpty(diskInfos)) { long usableSpace = 0; long totalSpace = 0; for (DiskInfo diskInfo : diskInfos) { usableSpace += diskInfo.getUsableSpace(); totalSpace += diskInfo.getTotalSpace(); } double usedSize = (totalSpace - usableSpace); // 统计所有磁盘的使用率 server.put("diskUsePercent", oshiMonitor.formatDouble(usedSize / totalSpace * 100)); } // 系统前 10 个进程 List<OSProcess> processList = oshiMonitor.getOperatingSystem().getProcesses(null, OperatingSystem.ProcessSorting.CPU_DESC, 10); List<Map<String, Object>> processMapList = new ArrayList<>(); for (OSProcess process : processList) { Map<String, Object> processMap = new HashMap<>(5); processMap.put("name", process.getName()); processMap.put("pid", process.getProcessID()); processMap.put("cpu", oshiMonitor.formatDouble(process.getProcessCpuLoadCumulative())); processMapList.add(processMap); } server.put("processList", processMapList); return server; } }3.结果返回(前端展示需要自己开发,oshi只提供数据){ "cupInfo": { "physicalProcessorCount": 4, "logicalProcessorCount": 8, "systemPercent": 0.09, "userPercent": 0.08, "waitPercent": 0.0, "usePercent": 0.18 }, "memoryInfo": { "total": "15.81GB", "used": "11.79GB", "free": "4.02GB", "usePercent": 0.75 }, "processList": [ { "name": "Idle", "cpu": 7.41, "pid": 0 }, { "name": "javaw", "cpu": 0.4, "pid": 9540 }, { "name": "idea64", "cpu": 0.07, "pid": 12840 }, { "name": "msedge", "cpu": 0.06, "pid": 15956 }, { "name": "java", "cpu": 0.05, "pid": 2400 }, { "name": "java", "cpu": 0.05, "pid": 9760 }, { "name": "msedge", "cpu": 0.04, "pid": 16768 }, { "name": "msedge", "cpu": 0.04, "pid": 15444 }, { "name": "msedge", "cpu": 0.03, "pid": 7860 }, { "name": "QQPCTray", "cpu": 0.02, "pid": 9568 } ], "sysInfo": { "name": "DESKTOP-4BGLRMJ", "ip": "172.23.39.68", "osName": "Windows 10", "osArch": "amd64", "userDir": "F:\\2022Projects\\cloud-demo-dubbo" }, "diskUsePercent": 47.1, "diskInfos": [ { "name": "本地固定磁盘 (C:)", "volume": "\\\\?\\Volume{0ee2db33-fc0a-464f-aa79-f110edd1be4b}\\", "label": "OS", "logicalVolume": "", "mount": "C:\\", "description": "Fixed drive", "options": 
"rw,reparse,sparse,trans,journaled,quota,casess,oids,casepn,efs,streams,unicode,acls,fcomp", "type": "NTFS", "size": "103.73GB", "totalSpace": 111376592896, "used": "89.17GB", "usableSpace": 15626911744, "avail": "14.55GB", "usePercent": 85.97, "uuid": "0ee2db33-fc0a-464f-aa79-f110edd1be4b" }, { "name": "本地固定磁盘 (D:)", "volume": "\\\\?\\Volume{b6dd9496-40b0-416d-8fca-fb27692d3883}\\", "label": "DATA", "logicalVolume": "", "mount": "D:\\", "description": "Fixed drive", "options": "rw,reparse,sparse,trans,journaled,quota,casess,oids,casepn,efs,streams,unicode,acls,fcomp", "type": "NTFS", "size": "531.39GB", "totalSpace": 570572140544, "used": "244.99GB", "usableSpace": 307517067264, "avail": "286.4GB", "usePercent": 46.1, "uuid": "b6dd9496-40b0-416d-8fca-fb27692d3883" }, { "name": "本地固定磁盘 (E:)", "volume": "\\\\?\\Volume{3fdb949e-bba5-4052-bad9-275232a18e7e}\\", "label": "新加卷", "logicalVolume": "", "mount": "E:\\", "description": "Fixed drive", "options": "rw,reparse,sparse,trans,journaled,quota,casess,oids,casepn,efs,streams,unicode,acls,fcomp", "type": "NTFS", "size": "200GB", "totalSpace": 214747312128, "used": "24.41GB", "usableSpace": 188541489152, "avail": "175.59GB", "usePercent": 12.2, "uuid": "3fdb949e-bba5-4052-bad9-275232a18e7e" }, { "name": "本地固定磁盘 (F:)", "volume": "\\\\?\\Volume{72ed311d-d25b-4368-b5d3-e3f2e40d6aa8}\\", "label": "新加卷", "logicalVolume": "", "mount": "F:\\", "description": "Fixed drive", "options": "rw,reparse,sparse,trans,journaled,quota,casess,oids,casepn,efs,streams,unicode,acls,fcomp", "type": "NTFS", "size": "200GB", "totalSpace": 214747312128, "used": "128.99GB", "usableSpace": 76241727488, "avail": "71.01GB", "usePercent": 64.5, "uuid": "72ed311d-d25b-4368-b5d3-e3f2e40d6aa8" } ], "jvmInfo": { "jdkVersion": "1.8.0_131", "jdkHome": "D:\\Java\\jdk1.8.0_131\\jre", "jdkName": "Java HotSpot(TM) 64-Bit Server VM", "jvmTotalMemory": "540MB", "maxMemory": "3.51GB", "freeMemory": "446.6MB", "usedMemory": "93.4MB", "usePercent": 0.17, "startTime": 1648884476737, "uptime": 310174 } }
@[TOC]
1. Skip unit tests when packaging with Maven
mvn install -DskipTests
#or
mvn install -Dmaven.test.skip=true
(-DskipTests still compiles the tests but doesn't run them; -Dmaven.test.skip=true skips compiling them as well.)
1.1. Skipping unit tests in IDEA: toggle the "Skip Tests" button in the Maven tool window.
2. Running several instances of one service in IDEA
Open the Services tool window (Alt+8 if it isn't visible), select a service, and click "Copy Configuration". In the edit-configuration dialog, give the copy a new name (I used userApplication), then change the service port in the program arguments (this project uses Dubbo, so the Dubbo port has to be changed too).
Both instances start successfully.
@TOC
Preface
In day-to-day coding I often have jobs that process a lot of data, such as scheduled tasks, and that's where a thread pool comes in.
1. Define a thread pool
ThreadPoolExecutor poolExecutor = new ThreadPoolExecutor(
        2,
        Runtime.getRuntime().availableProcessors(), // the number of logical processors on this machine
        2,
        TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(10),
        Executors.defaultThreadFactory(),
        new ThreadPoolExecutor.AbortPolicy());
1.1 The seven thread-pool parameters
corePoolSize — core pool size: the maximum number of threads the pool keeps alive permanently
maximumPoolSize — maximum pool size: core threads plus "emergency" threads
keepAliveTime — how long an emergency thread survives without new work before its resources are released
unit — time unit for keepAliveTime, e.g. seconds or milliseconds
workQueue — when no core thread is idle, new tasks queue here; when the queue is full, emergency threads are created
threadFactory — customizes thread creation, e.g. thread names or daemon status
handler — rejection policy, triggered when all threads are busy and the workQueue is full:
1 throw an exception: java.util.concurrent.ThreadPoolExecutor.AbortPolicy
2 run the task on the calling thread: java.util.concurrent.ThreadPoolExecutor.CallerRunsPolicy
3 discard the task: java.util.concurrent.ThreadPoolExecutor.DiscardPolicy
4 discard the oldest queued task: java.util.concurrent.ThreadPoolExecutor.DiscardOldestPolicy
1.2 Using the pool (1: with CompletableFuture.supplyAsync())
List<Integer> list = new CopyOnWriteArrayList();
Data data = new Data();
for (int i = 0; i < 3000; i++) {
    CompletableFuture<Integer> supplyAsync = CompletableFuture.supplyAsync(() -> {
        try {
            System.out.println(Thread.currentThread().getName());
            return data.getData();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return 1;
    }, poolExecutor);
    try {
        list.add(supplyAsync.get());
    } catch (InterruptedException e) {
        e.printStackTrace();
    } catch (ExecutionException e) {
        e.printStackTrace();
    }
}
poolExecutor.shutdown();
1.2.1 CopyOnWriteArrayList
The defining trait of CopyOnWriteArrayList is that every mutation (add/remove, etc.) copies the backing array, applies the change to the copy, and then swaps the reference to the new array. The original array is never modified in place, so there is no ConcurrentModificationException.
1.3 Using the pool (2: with CountDownLatch)
List<String> list = new ArrayList<>();
list.add("test1");
list.add("test2");
list.add("test3");
list.add("test4");
list.add("test5");
CountDownLatch countDownLatch = new CountDownLatch(list.size());
ExecutorService threadPool = new ThreadPoolExecutor(
        2,
        Runtime.getRuntime().availableProcessors(),
        3,
        TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(3),
        Executors.defaultThreadFactory(),
        new ThreadPoolExecutor.CallerRunsPolicy()); // one of the four rejection policies
try {
    for (String s : list) {
        threadPool.execute(() -> {
            try {
                System.out.println(Thread.currentThread().getName() + " --------------ok");
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                countDownLatch.countDown(); // decrement the counter
            }
        });
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    countDownLatch.await();
    // all work done, shut the pool down
    threadPool.shutdown();
}
countDownLatch.await() blocks until the counter reaches zero.
Summary (a small sizing sketch follows this post)
CPU-bound tasks (N+1): these mainly consume CPU, so set the thread count to N (CPU cores) + 1; the one extra thread covers occasional page faults or other pauses, so the CPU never sits idle.
I/O-bound tasks (2N): the system spends most of its time on I/O, during which threads don't occupy the CPU, so you can afford more threads; the rule of thumb is 2N.
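A small sketch of the sizing rules from the summary; the arithmetic is the N+1 / 2N rule applied to Runtime.availableProcessors(), and the queue capacity is just an illustrative number:

import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolSizing {
    public static void main(String[] args) {
        int n = Runtime.getRuntime().availableProcessors(); // logical cores
        int cpuBound = n + 1;  // CPU-bound rule: N + 1
        int ioBound = 2 * n;   // I/O-bound rule: 2N
        System.out.println("cores=" + n + ", cpuBound=" + cpuBound + ", ioBound=" + ioBound);

        // A fixed-size pool sized for an I/O-heavy workload.
        ThreadPoolExecutor ioPool = new ThreadPoolExecutor(
                ioBound, ioBound,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(100),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        ioPool.shutdown();
    }
}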
@TOC
1. Async call with a return value: CompletableFuture.supplyAsync()
1.1 Example
// async call with a return value
CompletableFuture<Integer> completableFutureSupply = CompletableFuture.supplyAsync(() -> {
    System.out.println(Thread.currentThread().getName() + " supplyAsync => Integer");
    int i = 10 / 1;
    return 1024;
});
1.2 Get the return value
System.out.println(completableFutureSupply.get()); // prints 1024
1.3 Handling exceptions thrown inside the async block
Let's simulate what gets returned when the block throws.
1. Cause an exception (integer division by zero is not allowed): change the divisor to 0:
// async call with a return value
CompletableFuture<Integer> completableFutureSupply = CompletableFuture.supplyAsync(() -> {
    System.out.println(Thread.currentThread().getName() + " supplyAsync => Integer");
    int i = 10 / 0;
    return 1024;
});
Running this now throws (see the screenshot).
2. Use whenComplete/exceptionally to still return a value after the exception:
System.out.println(completableFutureSupply.whenComplete((t, u) -> {
    System.out.println("t=" + t);
    System.out.println("u=" + u);
}).exceptionally((e) -> {
    System.out.println("ex:" + e.getMessage());
    return 233;
}).get());
The code still throws internally, but exceptionally supplies a default value, so the call returns normally.
t = the normal return value
u = the exception
e = the exception
1. Without the exception (divisor changed back to non-zero), the value flows through t.
2. Async call without a return value: CompletableFuture.runAsync()
CompletableFuture<Void> completableFuture = CompletableFuture.runAsync(() -> {
    try {
        TimeUnit.SECONDS.sleep(2);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println(Thread.currentThread().getName() + " runAsync => Void");
});
completableFuture.get(); // block until the result is available
// block on (wait for) several futures at once
CompletableFuture.allOf(completableFuture, completableFuture1).join();
近日公司需要新框架需要兼容旧代码,旧代码用的mybatis手写的动态表名 ,大概是实体类定义一个table字段 然后将table的值传到映射文件中,${table} 这种方式, 研究了一下mp发现可以直接用拦截器替换表名就有了以下代码1. 3.4.3.4 (最新版)动态表名实现1.配置类 (官方方式)@Configuration @MapperScan("com.cars.ysdd.clts.domain.clts.dao") public class MybatisPlusConfig { static List<String> tableList(){ List<String> tables = new ArrayList<>(); tables.add("user"); return tables; } @Bean public MybatisPlusInterceptor mybatisPlusInterceptor() { MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor(); DynamicTableNameInnerInterceptor dynamicTableNameInnerInterceptor = new DynamicTableNameInnerInterceptor(); dynamicTableNameInnerInterceptor.setTableNameHandler((sql, tableName) -> { // 获取参数方法 /*Map<String, Object> paramMap = RequestDataHelper.getRequestData(); paramMap.forEach((k, v) -> System.err.println(k + "----" + v)); System.err.println("表名:"+tableName); System.err.println("sql:"+sql); String year = "_2018"; int random = new Random().nextInt(10); if (random % 2 == 1) { year = "_2019"; }*/ String newTable = null; for (String table : tableList()) { newTable = RequestDataHelper.getRequestData(table); if (table.equals(tableName) && newTable!=null){ tableName = newTable; break; } } return tableName; // return tableName + year; }); interceptor.addInnerInterceptor(dynamicTableNameInnerInterceptor); // 3.4.3.2 作废该方式 // dynamicTableNameInnerInterceptor.setTableNameHandlerMap(map); return interceptor; } }2.请求参数传递辅助类package com.baomidou.mybatisplus.samples.dytablename.config; import com.baomidou.mybatisplus.core.toolkit.CollectionUtils; import java.util.Map; /** * 请求参数传递辅助类 */ public class RequestDataHelper { /** * 请求参数存取 */ private static final ThreadLocal<Map<String, Object>> REQUEST_DATA = new ThreadLocal<>(); /** * 设置请求参数 * * @param requestData 请求参数 MAP 对象 */ public static void setRequestData(Map<String, Object> requestData) { REQUEST_DATA.set(requestData); } /** * 获取请求参数 * * @param param 请求参数 * @return 请求参数 MAP 对象 */ public static <T> T getRequestData(String param) { Map<String, Object> dataMap = getRequestData(); if (CollectionUtils.isNotEmpty(dataMap)) { return (T) dataMap.get(param); } return null; } /** * 获取请求参数 * * @return 请求参数 MAP 对象 */ public static Map<String, Object> getRequestData() { return REQUEST_DATA.get(); } }3.使用@SpringBootTest class DynamicTableNameTest { @Resource private UserMapper userMapper; @Test void test() { RequestDataHelper.setRequestData(new HashMap<String, Object>() {{ put("user", "user_2018"); }}); // 自己去观察打印 SQL 目前随机访问 user_2018 user_2019 表 for (int i = 0; i < 6; i++) { User user = userMapper.selectById(1); System.err.println(user.getName()); } } }2. 3.4.1版1. 配置类这里的RequestDataHelper 用的是上文的3.4.3.4中的@Configuration @MapperScan("com.cars.ysdd.clts.domain.clts.dao") public class DynamicTableNameHandler { static List<String> tableList() { List<String> tables = new ArrayList<>(); tables.add("CR_CZJM"); return tables; } @Bean public MybatisPlusInterceptor paginationInterceptor() { MybatisPlusInterceptor paginationInterceptor = new MybatisPlusInterceptor(); // DynamicTableNameParser 表名解析器,动态解析表名,ITableNameHandler 表名处理。 DynamicTableNameInnerInterceptor dynamicTableNameParser = new DynamicTableNameInnerInterceptor(); dynamicTableNameParser.setTableNameHandlerMap(new HashMap<String, TableNameHandler>(2) {{ //metaObject 元对象 ;sql 执行的SQL ;tableName 表名 //这里put的key就是需要替换的原始表名,也就是实体类的表名 //这里的tableName就是我们定义的动态表名变量, for (String table : tableList()) { put(table, (sql, tableName) -> { // 获取传入参数 tableName,tableName的值就是替换后的表名 return RequestDataHelper.getRequestData(table) == null ? 
tableName : RequestDataHelper.getRequestData(table); }); } }}); paginationInterceptor.addInnerInterceptor(dynamicTableNameParser); return paginationInterceptor; } }2. 使用/** * @param crCzjmDO 实体类 * @return 执行条数 */ @Override public int insert(CrCzjmDO crCzjmDO) { RequestDataHelper.setRequestData(new HashMap<String, Object>(){{ put("CR_CZJM","TJFX.CR_CZJM"); }}); return crCzjmMapper.insertSelective(crCzjmDO); }3. 3.4 以下版本 (3.1.2 版为例)1.配置类@Configuration public class DynamicTableNameHandler { /** * 适用于相同表结构 不同表名 */ //实体类对应的表名字段 public static final String DYNAMIC_TABLE_NAME = "tableName"; @Bean public PaginationInterceptor paginationInterceptor() { PaginationInterceptor paginationInterceptor = new PaginationInterceptor(); // DynamicTableNameParser 表名解析器,动态解析表名,ITableNameHandler 表名处理。 DynamicTableNameParser dynamicTableNameParser = new DynamicTableNameParser(); dynamicTableNameParser.setTableNameHandlerMap(new HashMap<String, ITableNameHandler>(2) {{ //metaObject 元对象 ;sql 执行的SQL ;tableName 表名 //这里put的key就是需要替换的原始表名,也就是实体类的表名 //这里的tableName就是我们定义的动态表名变量, put("payment", (metaObject, sql, tableName) -> { // 获取传入参数 tableName,tableName的值就是替换后的表名 Object param = getParamValue(DYNAMIC_TABLE_NAME, metaObject); if(param == null){ return tableName;//不带tableName参数就返回原表名 }else { return param.toString();//带tableName参数就返回新表名 } }); /*put("payment", (metaObject, sql, tableName) -> { // 获取传入参数 tableName,tableName的值就是替换后的表名 Object param = getParamValue(DYNAMIC_TABLE_NAME, metaObject); if(param == null){ return tableName;//不带tableName参数就返回原表名 }else { return param.toString();//带tableName参数就返回新表名 } });*/ }}); paginationInterceptor.setSqlParserList(Collections.singletonList(dynamicTableNameParser)); return paginationInterceptor; } /** * 获取动态表tableName参数的值 */ private Object getParamValue(String title, MetaObject metaObject){ //获取参数 Object originalObject = metaObject.getOriginalObject(); JSONObject originalObjectJSON = JSON.parseObject(JSON.toJSONString(originalObject)); JSONObject boundSql = originalObjectJSON.getJSONObject("boundSql"); try { JSONObject parameterObject = boundSql.getJSONObject("parameterObject"); return parameterObject.get(title); }catch (Exception e) { return null; } } }2. 实体类示例定义一个与上边配置类中DYNAMIC_TABLE_NAME 的值对应名称的字段(非数据库)@Data public class Payment implements Serializable { private static final long serialVersionUID = 1L; private String serial; @TableField(exist = false) private String tableName; }3. 使用直接setTableName即可 @Override public List<Payment> selectByTable(String table) { Payment payment = new Payment(); payment.setTableName(table); return paymentMapper.select(payment); }以上内容有错误欢迎指出~
@[TOC]1. 编写Dockerfile文件创建Dockerfile文件touch Dockerfile vim DockerfileFROM java:11 MAINTAINER wangxl ADD gateway-1.0.jar gateway.jar EXPOSE 10010 ENTRYPOINT ["java","-jar","gateway.jar"]FROM: 基础镜像,通过jdk8镜像开始MAINTAINER: 维护者ADD: 复制jar包到镜像内,名字为app.jarEXPOSE: 声明端口ENTRYPOINT: docker启动时,运行的命令.这里就是容器运行就启动jar2.进入Dockerfile所在目录下开始编译docker build -t gateway . # 镜像名 -> . <- 表示当前文件夹编译完成查看是否构建好镜像ok3.创建容器docker run -d --name gateway -p 31001:31001 gateway
@[TOC]1. 查询已运行的容器2. 停止docker服务systemctl stop docker3. 进入主机配置文件目录cd /var/lib/docker/containers/51360d643a33* ##51360d643a33 --->容器id4. 修改配置文件vim hostconfig.json原先我的mysql 映射端口是3306 ,这里我将它修改为3307如果是修改docker 中的端口的话还需要执行下面这一步操作vim config.v2.json5. 验证是否生效启动docker启动容器这边我已经修改成功sqlyog 连接一下端口3307 连接成功
@[TOC]1. list去重 List<String> list = new ArrayList<>(); list.add("123"); list.add("22"); list.add("22"); list.add("123"); list.add("234"); list.add("234"); list.add("99"); list.add("99"); list = list.stream().distinct().collect(Collectors.toList()); System.out.println(list.toString()); //输出 [123, 22, 234, 99]2. 根据对象中的某个字段进行list去重List<User> list = new ArrayList<User>(); list.add(new User("小南", 23, "18335888888")); list.add(new User("小南", 22, "18335888888")); list.add(new User("小南", 21, "18335888888")); list.add(new User("小南", 20, "18335888888")); list = list.stream().filter(distinctByKey(User :: getName)).collect(Collectors.toList()); System.out.println(list.toString()); //输出结果 private static <T> Predicate<T> distinctByKey(Function<? super T, Object> keyExtractor) { ConcurrentHashMap<Object, Boolean> map = new ConcurrentHashMap<>(); return t -> map.putIfAbsent(keyExtractor.apply(t), Boolean.TRUE) == null; }3. 排序// 根据age排序 (正序) list = list.stream().sorted(Comparator.comparing(User::getAge)).collect(Collectors.toList()); //输出 //[User(name=小南, age=20, phone=18335888888), User(name=小南, age=21, phone=18335888888), User(name=小南, age=22, phone=18335888888), User(name=小南, age=23, phone=18335888888)] // (倒序) list = list.stream().sorted(Comparator.comparing(User::getAge).reversed()).collect(Collectors.toList()); //输出 //[User(name=小南, age=23, phone=18335888888), User(name=小南, age=22, phone=18335888888), User(name=小南, age=21, phone=18335888888), User(name=小南, age=20, phone=18335888888)] //如果排序字段为空将空的某一条默认排到开头还是结尾 //放到结尾 list = list.stream().sorted(Comparator.comparing(User::getAge,Comparator.nullsLast(Integer::compare).reversed())).collect(Collectors.toList()); //放到开头 list = list.stream().sorted(Comparator.comparing(User::getAge,Comparator.nullsFirst(Integer::compare).reversed())).collect(Collectors.toList());4. 排序并去重list = list.stream().sorted(Comparator.comparing(User::getAge,Comparator.nullsLast(Integer::compare).reversed())).filter(distinctByKey(User :: getName)).collect(Collectors.toList());
1. What data volumes are for
They decouple the container from its data, which makes the container's data easy to work with and keeps it safe.
2. Volume commands
#help
docker volume --help
docker volume create ${volume}   ## create a volume
docker volume inspect ${volume}  ## show a volume's path
docker volume ls                 ## list all volumes
docker volume prune              ## delete unused volumes
docker volume rm ${volume}       ## delete a volume
3. Mount example 1 (nginx)
3.1 Create the volume
docker volume create html
docker volume ls
3.2 Pull the nginx image
# pull the image
docker pull nginx
# list images
docker images
3.3 Create the container with the volume mounted
# without the volume
docker run --name myNginx -p 80:80 -d nginx
# with the volume mounted
docker run --name myNginx -p 80:80 -v html:/usr/share/nginx/html -d nginx
The container starts with the volume mounted.
3.4 Edit the volume to verify the mount: change "Welcome to nginx"
# find the volume's location on the host
docker volume inspect html
Edit the index.html there:
vim index.html
The change is applied successfully.
4. Mount example 2 (mysql)
docker run --name mysql -p 3306:3306 -v /tmp/mysql/conf/hmy.cnf:/etc/mysql/conf.d/hmy.cnf -v /tmp/mysql/data:/var/lib/mysql -d mysql
Continuing with the nginx container from the previous post, let's go inside the container and change Nginx's default "Welcome to nginx!" page.
1. Enter the container
# command form
docker exec -it ${container} bash
docker exec -it myNginx bash
2. Find the index.html behind the default page
2.1 Locate it: check the image's documentation on Docker Hub; the static files live under the nginx html directory.
2.2 Go to that directory:
cd /usr/share/nginx/html
3. Edit the index file
cat index.html
We'd like to run
vim index.html
but the command doesn't exist: images are stripped down and keep only what they need. So use the second approach, replacing text with sed:
sed -i 's#Welcome to nginx!#hello world 程序员!#g' index.html
sed -i 's#<head>#<head><meta charset="utf-8">#g' index.html
As shown in the screenshot, the page is updated.
4. Exit the container
exit
1. Pull the nginx image
2. Create and run an nginx container
# list images
docker images
docker run --name myNginx -p 80:80 -d nginx
2.1. Command breakdown
docker run: create and run a container
--name: give the container a name, here myNginx
-p: map a host port to a container port; left of the colon is the host port, right is the container port
-d: run the container in the background
nginx: the image name, e.g. nginx
# list running containers
docker ps
# list all containers
docker ps -a
2.2. The listing confirms the nginx container was created and is running
2.3. Verify in the browser that it started
3. Stop the container
3.1. Run:
# docker stop ${name} or ${CONTAINER ID}
docker stop myNginx
3.2. Stopped.
4. Start an already-created container
# docker start ${name} or ${CONTAINER ID}
docker start myNginx
5. View the container's logs
# docker logs ${name} or ${CONTAINER ID}
docker logs myNginx
5.1. Follow the logs continuously
# docker logs ${name} or ${CONTAINER ID} -f
docker logs myNginx -f
Commands used in this post:
1. Pull an image
# pull the image
docker pull nginx
# list images
docker images
2. Export the image
docker save -o nginx.tar nginx:latest  # exports to the current (pwd) directory
3. Simulate importing it
# list images
docker images
# delete the image
docker rmi nginx:latest
# with the Docker image deleted, load it back from the tar file we exported earlier
docker load -i nginx.tar
docker images
4. Import succeeded.
1. Update yum packages to the latest
yum update
2. Install the required packages: yum-utils provides yum-config-manager, and the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2
3. Set the yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
4. Install docker; answer y to every prompt
yum install -y docker-ce
5. Check the docker version to verify the installation
docker -v
6. Start Docker and disable the firewall
#start docker on boot
systemctl enable docker
#start docker
systemctl start docker
#check the firewall status
firewall-cmd --state
#CentOS 7 uses firewalld as its firewall by default
#stop firewalld
systemctl stop firewalld.service
#disable firewalld on boot
systemctl disable firewalld.service
Nacos download page (see the release history for older versions).
After unpacking, go into the bin directory and start in standalone mode:
# Linux/Mac
sh startup.sh -m standalone
# Windows
startup.cmd -m standalone
import com.mongodb.MongoClientOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoDbSettings {

    @Bean
    public MongoClientOptions mongoOptions() {
        return MongoClientOptions
                .builder()
                .socketTimeout(2000)           // socket read timeout for queries, in ms
                .serverSelectionTimeout(5000)  // how long to look for an available server, in ms
                // .connectTimeout(5000)       // TCP connect timeout, in ms
                .build();
    }
}
Nginx official site
Start it by double-clicking nginx.exe, or from a terminal: ./nginx.exe
Visit localhost; if the welcome page appears, it started successfully.
Common commands:
./nginx            start
./nginx -s stop    stop immediately
./nginx -s quit    graceful shutdown
./nginx -s reload  reload the configuration file
ps aux|grep nginx  list nginx processes
Open setclasspath.bat in the bin directory and add the following two lines so Tomcat can find the JDK:
set JAVA_HOME=D:\Java\jdk1.8.0_131
set JRE_HOME=D:\Java\jdk1.8.0_131\jre
@[toc]
Windows
jps  # lists Java processes: PID and main class name
Kill the process:
taskkill /F /PID <pid>
Linux
# list Java processes
jps
# kill the process by pid
kill -9 <pid>
import java.util.Calendar;

/**
 * @auther:Wangxl
 * @Emile:18335844494@163.com
 * @Time:2021/1/6 12:13
 */
public class Test {
    public static void main(String[] args) {
        String[] weekDays = {"周日", "周一", "周二", "周三", "周四", "周五", "周六"};
        Calendar calendar = Calendar.getInstance();
        // Calendar.DAY_OF_WEEK is 1-based starting at Sunday, hence the -1
        System.out.println("今天是" + weekDays[calendar.get(Calendar.DAY_OF_WEEK) - 1]);
    }
}
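On Java 8+ the same thing can be done with java.time, which avoids the off-by-one index juggling of Calendar. A minimal alternative sketch:

import java.time.LocalDate;
import java.time.format.TextStyle;
import java.util.Locale;

public class TodayDemo {
    public static void main(String[] args) {
        // DayOfWeek carries its own localized display name.
        String day = LocalDate.now().getDayOfWeek()
                .getDisplayName(TextStyle.FULL, Locale.CHINA);
        System.out.println("今天是" + day); // e.g. 星期三
    }
}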
package com.example.xiaohe.utils; import java.text.ParseException; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Calendar; import java.util.Date; import java.util.List; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2020/12/24 10:02 */ public class DateUtils { /** * 日期正则表达式 */ public static String YEAR_REGEX = "^\\d{4}$"; public static String MONTH_REGEX = "^\\d{4}(\\-|\\/|\\.)\\d{1,2}$"; public static String DATE_REGEX = "^\\d{4}(\\-|\\/|\\.)\\d{1,2}\\1\\d{1,2}$"; /** * 格式化日期 * - yyyy-MM-dd HH:mm:ss * * @param date 日期 * @param pattern 日期格式 * @return 日期字符串 */ public static String format(Date date, String pattern) { SimpleDateFormat sd = new SimpleDateFormat(pattern); return sd.format(date); } /** * 格式化日期 * - yyyy-MM-dd HH:mm:ss * * @param date 日期字符串 * @param pattern 日期格式 * @return 日期 * @throws ParseException 解析异常 */ public static Date parse(String date, String pattern) throws ParseException { SimpleDateFormat sd = new SimpleDateFormat(pattern); try { return sd.parse(date); } catch (ParseException e) { throw e; } } /** * 日期范围 - 切片 * <pre> * -- eg: * 年 ----------------------- sliceUpDateRange("2018", "2020"); * rs: [2018, 2019, 2020] * * 月 ----------------------- sliceUpDateRange("2018-06", "2018-08"); * rs: [2018-06, 2018-07, 2018-08] * * 日 ----------------------- sliceUpDateRange("2018-06-30", "2018-07-02"); * rs: [2018-06-30, 2018-07-01, 2018-07-02] * </pre> * * @param startDate 起始日期 * @param endDate 结束日期 * @return 切片日期 */ public static List<String> sliceUpDateRange(String startDate, String endDate) { List<String> rs = new ArrayList<>(); try { int dt = Calendar.DATE; String pattern = "yyyy-MM-dd"; if(startDate.matches(YEAR_REGEX)) { pattern = "yyyy"; dt = Calendar.YEAR; } else if(startDate.matches(MONTH_REGEX)) { pattern = "yyyy-MM"; dt = Calendar.MONTH; } else if(startDate.matches(DATE_REGEX)) { pattern = "yyyy-MM-dd"; dt = Calendar.DATE; } Calendar sc = Calendar.getInstance(); Calendar ec = Calendar.getInstance(); sc.setTime(parse(startDate, pattern)); ec.setTime(parse(endDate, pattern)); while(sc.compareTo(ec) < 1){ rs.add(format(sc.getTime(), pattern)); sc.add(dt, 1); } } catch (ParseException e) { e.printStackTrace(); } return rs; } public static void main(String[] args) { List<String> strings = sliceUpDateRange("2020-11-29", "2020-12-12"); for (String string : strings) { System.out.println(string); } } }
This converts the current Oracle time to milliseconds since the Unix epoch. sysdate carries no time zone, so on a server running at UTC+8 the epoch is written as 1970-01-01 08:00:00:
select sysdate,
       (sysdate - to_date('1970-01-01 08:00:00','yyyy-mm-dd HH24:MI:SS')) * 24 * 60 * 60 * 1000
  from dual
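For a cross-check from the Java side, the same instant is simply System.currentTimeMillis(). A tiny sketch, assuming the database session runs at UTC+8 as above and ignoring clock skew between the two machines:

public class EpochMillis {
    public static void main(String[] args) {
        // Should match the value the Oracle expression returns at the same moment,
        // give or take clock drift between the app server and the DB server.
        System.out.println(System.currentTimeMillis());
    }
}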
package com.example.xiaohe.config; import com.alibaba.fastjson.JSON; import com.google.gson.Gson; import org.aspectj.lang.ProceedingJoinPoint; import org.aspectj.lang.annotation.Around; import org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Pointcut; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.context.annotation.Configuration; import org.springframework.web.context.request.RequestAttributes; import org.springframework.web.context.request.RequestContextHolder; import org.springframework.web.context.request.ServletRequestAttributes; import javax.servlet.http.HttpServletRequest; import java.lang.reflect.Field; import java.util.HashMap; import java.util.Map; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2020/4/11 23:07 */ @Aspect @Configuration//定义一个切面 public class LogRecordAspect { private static final Logger logger = LoggerFactory.getLogger(LogRecordAspect.class); // 定义切点Pointcut @Pointcut("execution(* com.example.xiaohe.controller..*.*(..))") public void excudeService() { } @Around("excudeService()") public Object doAround(ProceedingJoinPoint pjp) throws Throwable { RequestAttributes ra = RequestContextHolder.getRequestAttributes(); ServletRequestAttributes sra = (ServletRequestAttributes) ra; HttpServletRequest request = sra.getRequest(); String url = request.getRequestURL().toString(); String method = request.getMethod(); String uri = request.getRequestURI(); String queryString = request.getQueryString(); Object[] args = pjp.getArgs(); String params = ""; //获取请求参数集合并进行遍历拼接 if(args.length>0){ if("PUT".equals(method)){ Object object = args[0]; Map map = getKeyAndValue(object); params = JSON.toJSONString(map); ; }else if("GET".equals(method)){ params = queryString; } } System.out.println("请求开始===地址:"+url); System.out.println("请求开始===类型:"+method); if("PUT".equals(method)){ System.out.println("请求开始===参数:"+params); } // logger.info("请求开始===地址:"+url); // logger.info("请求开始===类型:"+method); // logger.info("请求开始===参数:"+params); // result的值就是被拦截方法的返回值 Object result = pjp.proceed(); Gson gson = new Gson(); // logger.info("请求结束===返回值:" + gson.toJson(result)); System.out.println("请求结束===返回值:" + gson.toJson(result)); return result; } public static Map<String, Object> getKeyAndValue(Object obj) { Map<String, Object> map = new HashMap<>(); // 得到类对象 Class userCla = (Class) obj.getClass(); /* 得到类中的所有属性集合 */ Field[] fs = userCla.getDeclaredFields(); for (int i = 0; i < fs.length; i++) { Field f = fs[i]; f.setAccessible(true); // 设置些属性是可以访问的 Object val = new Object(); try { val = f.get(obj); // 得到此属性的值 map.put(f.getName(), val);// 设置键值 } catch (IllegalArgumentException e) { e.printStackTrace(); } catch (IllegalAccessException e) { e.printStackTrace(); } } return map; } }
First go to the elasticsearch-7.9.2\plugins\elasticsearch-analysis-ik-7.9.2\config directory and open IKAnalyzer.cfg.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer 扩展配置</comment>
    <!--用户可以在这里配置自己的扩展字典 -->
    <entry key="ext_dict">new_word.dic;GBT5271.1-2000信息技术基本术语.dic;GBT22263.1-2008 物流公共信息平台应用开发指南 第1部分:基础术语.dic;TJDW114-2008 中国列车运行控制系统CTCS名词术语(V1-0).dic;术语表(中英).dic;铁路车站及枢纽术语.dic;铁路旅客运输组织术语.dic;铁路名词术语全集.dic;业务术语表.dic</entry>
    <!--用户可以在这里配置自己的扩展停止词字典-->
    <entry key="ext_stopwords"></entry>
    <!--用户可以在这里配置远程扩展字典 -->
    <!-- <entry key="remote_ext_dict">words_location</entry> -->
    <!--用户可以在这里配置远程扩展停止词字典-->
    <!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>
Multiple dictionaries are separated by semicolons.
The dictionary format is as shown in the screenshots: one word per line, Windows (CRLF) line endings, UTF-8 encoding.
Here's a snippet I use: when a full-text search returns nothing for the user's input, the search keyword is appended to the dictionary automatically:
public static void main(String[] args) {
    writeFile("C:\\Users\\Herbs\\Desktop\\" + "dic.dic", "我爱" + "\n");
}

/**
 * Write to a file, appending if it already exists
 */
public static void writeFile(String pathname, String content) {
    try {
        File writeName = new File(pathname);
        try (FileWriter writer = new FileWriter(writeName, true);
             BufferedWriter out = new BufferedWriter(writer)
        ) {
            out.write(content);
            out.flush();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Today I hit a problem with Elasticsearch not being reachable from outside: localhost:9200 works on the machine itself, but other machines cannot reach my ES.
The fix (elasticSearch version 7.9.2): edit conf/elasticsearch.yml in the ES directory and add:
network.host: 0.0.0.0
http.port: 9200
transport.host: localhost
transport.tcp.port: 9300
ElasticSearch7.9.3下载IK分词器7.9.3下载kibana7.9.3下载kibana是一个es索引库的可视化工具, 这边我也刚入门也做不了太多介绍,大家感兴趣的可以自行去研究一下!!!下载完成之后我们就可以解压启动运行了bin目录下点击elasticsearch.bat 运行访问"http://localhost:9200/"这边可以看到es的版本号, 查看已安装的插件http://localhost:9200/_cat/plugins安装插件有两种方法 , 一个是如上图所示, 将插件直接解压拷贝到 elasticSearch 的plugins 文件夹下, 另一种是进入elasticSearch 的bin目录下输入命令 : plugin install 插件名例 : plugin install license接下来运行kibana 直接进入bin目录,双击kibana运行启动完毕之后http://localhost:5601/准备工作做好之后接下来开始跟springboot整合吧这边我没有使用springboot下的es 依赖 我用的是elasticSearch 高级客户端<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-elasticsearch</artifactId> </dependency>yml如下es: address: 127.0.0.1 port: 9200接下来是elasticSearch 的配置类package com.es.esservice.bean; import org.apache.http.HttpHost; import org.elasticsearch.client.RestClient; import org.elasticsearch.client.RestHighLevelClient; import org.springframework.beans.factory.annotation.Value; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2020/11/2 14:35 */ @Configuration public class ElasticSearchClientConfig { @Value("${es.address}") private String address; @Value("${es.port}") private Integer port; @Bean public RestHighLevelClient restHighLevelClient() { RestHighLevelClient client = new RestHighLevelClient( RestClient.builder( new HttpHost(address, port, "http") ) ); return client; } } 做到这一步我们已经把springboot 和 elasticSearch 整合在一块了接下来我们可以在java代码中调用es的方法了package com.es.esservice.service; import com.alibaba.fastjson.JSONObject; import com.es.esservice.bean.FileData; import com.es.esservice.bean.Outcome; import org.apache.pdfbox.multipdf.Splitter; import org.apache.pdfbox.pdmodel.PDDocument; import org.apache.pdfbox.text.PDFTextStripper; import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.search.SearchRequest; import org.elasticsearch.action.search.SearchResponse; import org.elasticsearch.client.RequestOptions; import org.elasticsearch.client.RestHighLevelClient; import org.elasticsearch.client.indices.GetIndexRequest; import org.elasticsearch.common.text.Text; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.BoolQueryBuilder; import org.elasticsearch.index.query.MatchQueryBuilder; import org.elasticsearch.index.query.QueryBuilders; import org.elasticsearch.index.query.TermQueryBuilder; import org.elasticsearch.index.reindex.DeleteByQueryRequest; import org.elasticsearch.search.SearchHit; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightField; import org.elasticsearch.search.sort.SortOrder; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import org.springframework.web.multipart.MultipartFile; import sun.misc.BASE64Encoder; import java.io.File; import java.io.FileOutputStream; import java.io.InputStream; import java.io.OutputStream; import 
java.util.ArrayList; import java.util.List; import java.util.ListIterator; import java.util.Map; import java.util.concurrent.TimeUnit; /** * @auther:Wangxl * @Emile:18335844494@163.com * @Time:2020/11/4 15:11 */ @Service public class FileDataNewServiceImpl implements FileDataNewService { @Autowired private RestHighLevelClient restHighLevelClient; /** * 创建索引 * * @param indexName * @return */ @Override public Outcome createIndex(String indexName) { try { XContentBuilder builder = XContentFactory.jsonBuilder() .startObject() .field("properties") .startObject() .field("fileName").startObject().field("index", "true").field("type", "text").field("analyzer", "ik_max_word").endObject() .field("page").startObject().field("index", "true").field("type", "keyword").endObject() .field("departmentId").startObject().field("index", "true").field("type", "keyword").endObject() .field("ljdm").startObject().field("index", "true").field("type", "keyword").endObject() .field("data").startObject().field("index", "true").field("type", "text").field("analyzer", "ik_max_word").endObject() .endObject() .endObject(); CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName); createIndexRequest.mapping("_doc", builder); CreateIndexResponse createIndexResponse = restHighLevelClient.indices().create(createIndexRequest, RequestOptions.DEFAULT); boolean acknowledged = createIndexResponse.isAcknowledged(); if (acknowledged) { return Outcome.ok("新增成功"); } else { return Outcome.error("新增失败"); } } catch (Exception e) { e.printStackTrace(); return Outcome.error("新增失败"); } } /** * 新增文档 * * @param indexName * @param multipartFile * @return */ @Override public Outcome addFileDataPageNew(String indexName, String departmentId, String ljdm, MultipartFile multipartFile) { try { BASE64Encoder base64Encoder = new BASE64Encoder(); File file = multipartFileToFile(multipartFile); PDDocument document = PDDocument.load(file); Splitter splitter = new Splitter(); List<PDDocument> pages = splitter.split(document); ListIterator<PDDocument> iterator = pages.listIterator(); int i = 1; while (iterator.hasNext()) { PDDocument currentDocument = iterator.next(); //获取一个PDFTextStripper文本剥离对象 PDFTextStripper stripper = new PDFTextStripper(); //获取文本内容 String content = stripper.getText(currentDocument); FileData fileData = new FileData(); fileData.setDepartmentId(departmentId); fileData.setLjdm(ljdm); fileData.setFileName(multipartFile.getOriginalFilename()); fileData.setData(content); fileData.setPage(i); IndexRequest source = new IndexRequest(indexName).source(JSONObject.toJSONString(fileData), XContentType.JSON); IndexResponse index = restHighLevelClient.index(source, RequestOptions.DEFAULT); i++; } // return index.toString(); return Outcome.ok("新增成功"); } catch (Exception e) { e.printStackTrace(); return Outcome.error("新增失败"); } } /** * 删除文档 * * @param indexName * @param fileName * @return */ @Override public Outcome deleteFile(String indexName, String fileName) { try { DeleteByQueryRequest filedata = new DeleteByQueryRequest(indexName); filedata.setQuery(new MatchQueryBuilder("fileName", fileName)); restHighLevelClient.deleteByQuery(filedata, RequestOptions.DEFAULT); return Outcome.ok("删除成功"); } catch (Exception e) { e.printStackTrace(); return Outcome.error("删除失败"); } } /** * 根据文件名查询 并且高亮显示 * @param fileName * @return */ @Override public List queryFileByFileName(String fileName) { ArrayList fileNames = new ArrayList(); try { //创建搜索请求 SearchRequest searchRequest = new SearchRequest(); //构造搜索参数 SearchSourceBuilder searchSourceBuilder = 
new SearchSourceBuilder(); MatchQueryBuilder builderByFileName = QueryBuilders.matchQuery("fileName", fileName); searchSourceBuilder.query(builderByFileName); searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS)); searchSourceBuilder.sort("page", SortOrder.ASC); //高亮 HighlightBuilder highlightBuilder = new HighlightBuilder(); //设置高亮字段 highlightBuilder.field("fileName"); //如果要多个字段高亮,这项要为false highlightBuilder.requireFieldMatch(true); highlightBuilder.preTags("<span style='color:red'>"); highlightBuilder.postTags("</span>"); highlightBuilder.fragmentSize(800000); //最大高亮分片数 highlightBuilder.numOfFragments(0); //从第一个分片获取高亮片段 searchSourceBuilder.highlighter(highlightBuilder); searchRequest.source(searchSourceBuilder); SearchResponse response = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT); // List<Map<String, Object>> list = new ArrayList<>(); for (SearchHit hit : response.getHits().getHits()) { Map<String, Object> sourceAsMap = hit.getSourceAsMap(); FileData fileData = new FileData(); fileData.setFileName((String) sourceAsMap.get("fileName")); fileData.setPage((Integer) sourceAsMap.get("page")); fileData.setData((String) sourceAsMap.get("data")); //解析高亮字段 Map<String, HighlightField> highlightFields = hit.getHighlightFields(); HighlightField field1 = highlightFields.get("fileName"); if (field1 != null) { Text[] fragments = field1.fragments(); String n_field = ""; for (Text fragment : fragments) { n_field += fragment; } //高亮标题覆盖原标题 fileData.setFileName(n_field); // sourceAsMap.put("data",n_field); } fileNames.add(fileData); // list.add(hit.getSourceAsMap()); } return fileNames; } catch (Exception e) { e.printStackTrace(); } return fileNames; } /** * 删除索引 * * @param indexName * @return */ @Override public Outcome removeIndex(String indexName) { try { DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(indexName); restHighLevelClient.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT); return Outcome.ok("删除索引成功"); } catch (Exception e) { e.printStackTrace(); return Outcome.error("删除索引失败"); } } /** * 查询索引是否存在 * * @param indexName * @return */ @Override public Outcome selectIndex(String indexName) { try { GetIndexRequest indexRequest = new GetIndexRequest(indexName); boolean exists = restHighLevelClient.indices().exists(indexRequest, RequestOptions.DEFAULT); if (exists) { return Outcome.ok("索引已存在"); } else { return Outcome.error("索引不存在"); } } catch (Exception e) { e.printStackTrace(); return Outcome.error("查询索引出错"); } } public static File multipartFileToFile(MultipartFile file) throws Exception { File toFile = null; if (file.equals("") || file.getSize() <= 0) { file = null; } else { InputStream ins = null; ins = file.getInputStream(); toFile = new File(file.getOriginalFilename()); inputStreamToFile(ins, toFile); ins.close(); } return toFile; } //获取流文件 private static void inputStreamToFile(InputStream ins, File file) { try { OutputStream os = new FileOutputStream(file); int bytesRead = 0; byte[] buffer = new byte[8192]; while ((bytesRead = ins.read(buffer, 0, 8192)) != -1) { os.write(buffer, 0, bytesRead); } os.close(); ins.close(); } catch (Exception e) { e.printStackTrace(); } }以上的业务代码大家可以参考一下 感 谢 阅 读 !
When using Feign for microservice clients, almost every call is a plain interface call returning JSON. Today I needed a Feign client (service consumer) to call a service provider's file-download endpoint, so I'm writing it down.
The code:
Service provider:
@PostMapping(value = "/downLoadFile")
public void downloadFile(@RequestParam String path, HttpServletResponse response) {
    File file = new File(path);
    FileUtil.fileDownload(response, file, false);
}
The FileUtil helper:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.servlet.http.HttpServletResponse;
import java.io.*;
import java.net.URLEncoder;

public class FileUtil {
    private static final Logger logger = LoggerFactory.getLogger(FileUtil.class);

    /**
     * Writes the given file into the servlet response, i.e. a download.
     * Typically used to download a file that already exists on the server.
     *
     * @param response         target servlet response
     * @param file             the file to download
     * @param isDeleteOriginal whether to delete the server-side copy; true deletes it after the download
     */
    public static void fileDownload(HttpServletResponse response, File file, Boolean isDeleteOriginal) {
        try {
            InputStream inputStream = new FileInputStream(file);
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
            response.setContentType("multipart/form-data");
            response.setHeader("Content-Disposition", "attachment; filename=" + URLEncoder.encode(file.getName(), "UTF-8"));
            BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(response.getOutputStream());
            int length = 0;
            byte[] temp = new byte[1024 * 10];
            while ((length = bufferedInputStream.read(temp)) != -1) {
                bufferedOutputStream.write(temp, 0, length);
            }
            bufferedOutputStream.flush();
            bufferedOutputStream.close();
            bufferedInputStream.close();
            inputStream.close();
            if (isDeleteOriginal) {
                file.delete();
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            logger.error(e.getMessage());
        } catch (IOException e) {
            e.printStackTrace();
            logger.error(e.getMessage());
        }
    }
}
The Feign client (service consumer); note that it must use the same HTTP method as the provider, POST here:
import feign.Response;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

@FeignClient(name = "file-service")
public interface UserFeignClient extends FeignClientParent {
    @PostMapping(value = "/update/downLoadFile", consumes = MediaType.APPLICATION_PROBLEM_JSON_VALUE)
    Response downloadFile(@RequestParam String path);
}
The Feign client's (service consumer's) controller endpoint:
@GetMapping(value = "/downLoadFile")
public void downloadFile(@RequestParam String path, HttpServletResponse response) {
    InputStream inputStream = null;
    try {
        Response serviceResponse = this.userFeignClient.downloadFile(path);
        Response.Body body = serviceResponse.body();
        inputStream = body.asInputStream();
        BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);
        response.setContentType("multipart/form-data");
        response.setHeader("Content-Disposition", serviceResponse.headers().get("Content-Disposition").toString().replace("[", "").replace("]", ""));
        BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(response.getOutputStream());
        int length = 0;
        byte[] temp = new byte[1024 * 10];
        while ((length = bufferedInputStream.read(temp)) != -1) {
            bufferedOutputStream.write(temp, 0, length);
        }
        bufferedOutputStream.flush();
        bufferedOutputStream.close();
        bufferedInputStream.close();
        inputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
This line keeps the consumer's download output identical to the provider's, including the file name:
serviceResponse.headers().get("Content-Disposition").toString().replace("[","").replace("]","")
The idea: copy the provider's file-download response body (the file content) into the consumer's own outward-facing download response body.
Many of us have, at some point, accidentally deleted table data or dropped a whole table and lost the data. The following methods can get it back.
Method 1 uses Oracle's flashback query and suits rows removed with delete:
First you need to know when the delete happened; if you can't pin it down exactly, pick a time that is safely before the deletion. Then query the data as of that point in time:
select * from 表名 as of timestamp to_timestamp('删除时间点','yyyy-mm-dd hh24:mi:ss')
and insert the rows back (making sure primary keys don't collide):
insert into 表名 (select * from 表名 as of timestamp to_timestamp('删除时间点','yyyy-mm-dd hh24:mi:ss'));
Alternatively, flash back the whole table. This only works if the table structure hasn't changed and the user has the flashback any table privilege:
alter table 表名 enable row movement;
flashback table 表名 to timestamp to_timestamp('删除时间点','yyyy-mm-dd hh24:mi:ss');
Method 2 relies on the fact that when Oracle drops a table, it doesn't wipe it: the table is moved into a virtual "recycle bin" and its blocks are only marked "reusable", so until those blocks are actually overwritten, the data can be recovered. This mostly applies to drop.
First find the dropped table in the views:
select table_name, dropped from user_tables;
select object_name, original_name, type, droptime from user_recyclebin;
Note that the table has been renamed in the recycle bin; object_name is its recycle-bin name. If you know the original table name, recover it with:
flashback table 原表名 to before drop;
If you don't know the original name, restore it by its recycle-bin name and rename it:
flashback table "回收站中的表名(如:Bin$DSbdfd4rdfdfdfegdfsf==$0)" to before drop rename to 新表名;
Method 3 uses database-level flashback to return the whole database to an earlier state:
SQL> alter database flashback on;
SQL> flashback database to scn SCNNO;
SQL> flashback database to timestamp to_timestamp('2007-2-12 12:00:00','yyyy-mm-dd hh24:mi:ss');
One caveat: these recovery safeguards inevitably cost a lot of space, and neither drop nor delete reclaims it automatically. If you're certain the data should go and don't want it wasting space:
1. Use truncate to cut the table down (but then the data cannot be recovered).
2. Add the purge option when dropping, which bypasses the recycle bin and deletes the table permanently:
drop table 表名 purge;
For example, instead of the plain
drop table emp cascade constraints
you can purge an already-dropped table:
purge table emp;
Empty the current user's recycle bin:
purge recyclebin;
Empty every user's recycle bin:
purge dba_recyclebin;
Java String to double loses precision
I originally wrote a division helper that returns a BigDecimal. The original code:
float num = (float) num1 * 100 / num2;  // num1 = 1, num2 = 1
DecimalFormat df = new DecimalFormat("0.00");
String format = df.format(num);  // "100.00"
return BigDecimal.valueOf(Double.parseDouble(format));  // returns 100.0 — the scale is lost
After digging in: a double cannot remember trailing zeros (the double value of "100.00" is exactly 100.0), and more generally a double only holds about 15–17 significant decimal digits, so a string like "30.273705487000021" comes back from Double.parseDouble(str) as 30.27370548700002, with the final digit gone.
The fix is to build the BigDecimal directly from the string:
float num = (float) num1 * 100 / num2;  // num1 = 1, num2 = 1
DecimalFormat df = new DecimalFormat("0.00");
String format = df.format(num);  // "100.00"
return new BigDecimal(format);  // returns 100.00 — problem solved
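A runnable sketch of the two behaviours side by side (pure standard-library calls; the last printed value is the one the post observed):

import java.math.BigDecimal;

public class ScaleDemo {
    public static void main(String[] args) {
        // Routing through double drops the scale: the double value of "100.00" is 100.0.
        System.out.println(BigDecimal.valueOf(Double.parseDouble("100.00"))); // 100.0
        // The String constructor keeps the scale exactly as written.
        System.out.println(new BigDecimal("100.00")); // 100.00
        // A 17-significant-digit decimal cannot round-trip through double;
        // the post observed 30.27370548700002 here.
        System.out.println(Double.parseDouble("30.273705487000021"));
    }
}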
Oracle limits an IN list to at most 1000 items (exceeding it raises ORA-01795), so a longer list has to be split into groups of up to 1000 joined with OR:
Select * from tablename where col in ('col1','col2', ......, 'col1000') or col in ('col1001', ............)
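A small helper sketch for building such a clause from Java (purely illustrative; in real JDBC code you'd bind parameters rather than concatenate values into the SQL):

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class InClauseSplitter {
    // Split values into groups of at most 1000 and join them with OR,
    // working around Oracle's ORA-01795 limit on IN-list size.
    static String buildWhere(String col, List<String> values) {
        List<String> groups = new ArrayList<>();
        for (int i = 0; i < values.size(); i += 1000) {
            List<String> chunk = values.subList(i, Math.min(i + 1000, values.size()));
            groups.add(col + " in (" + chunk.stream()
                    .map(v -> "'" + v + "'")
                    .collect(Collectors.joining(",")) + ")");
        }
        return String.join(" or ", groups);
    }

    public static void main(String[] args) {
        List<String> vals = new ArrayList<>();
        for (int i = 1; i <= 2500; i++) vals.add("col" + i);
        String where = buildWhere("col", vals); // yields three OR-joined IN groups
        System.out.println(where.substring(0, 60) + " ...");
    }
}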