I. Download Apollo and MySQL

Installer package: https://pan.baidu.com/s/1swrV9ffJnmz4S0mfkuBbIw (extraction code: 1111)

II. Run Apollo

```bash
# Unzip into the target directory
unzip apollo.zip -d /opt/apollo
```

Edit demo.sh (check the current values with `cat demo.sh`):

```properties
# apollo config db info
apollo_config_db_url=jdbc:mysql://ip:3306/apolloconfigdb?characterEncoding=utf8
apollo_config_db_username=root
apollo_config_db_password=xxx

# apollo portal db info
apollo_portal_db_url=jdbc:mysql://ip:3306/apolloportaldb?characterEncoding=utf8
apollo_portal_db_username=root
apollo_portal_db_password=xxx

config_server_url=http://0.0.0.0:8080
admin_server_url=http://0.0.0.0:8090
eureka_service_url=http://ip:port/eureka/
portal_url=http://0.0.0.0:8070
```

```bash
# Make demo.sh executable
chmod 777 /opt/apollo/demo.sh

# Start Apollo
./demo.sh start

# Watch the startup log
cd /opt/apollo/service
tail -1000f apollo-service_optapolloservice.log

# Check the processes; ports 8080, 8090 and 8070 listening means startup succeeded
netstat -ntlp
```

Open http://ip:8070 in a browser; the account is apollo and the password is admin.
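Once the portal is up, a service can read its configuration through the Apollo Java client. Below is a minimal sketch (not part of the original setup) assuming the `apollo-client` dependency is on the classpath and the JVM is started with `-Dapp.id=<your-app>` and `-Dapollo.meta=http://ip:8080`; the key name `timeout` is made up for illustration.

```java
import com.ctrip.framework.apollo.Config;
import com.ctrip.framework.apollo.ConfigService;

public class ApolloClientDemo {
    public static void main(String[] args) throws InterruptedException {
        // Reads the "application" namespace of the app identified by -Dapp.id,
        // via the config service registered above (-Dapollo.meta=http://ip:8080).
        Config config = ConfigService.getAppConfig();
        String timeout = config.getProperty("timeout", "1000"); // hypothetical key
        System.out.println("timeout = " + timeout);

        // Hot reload: Apollo pushes changes to the client, no restart required.
        config.addChangeListener(event -> event.changedKeys()
                .forEach(key -> System.out.println(key + " -> "
                        + event.getChange(key).getNewValue())));
        Thread.sleep(Long.MAX_VALUE);
    }
}
```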
I. Build the JDK image

1. Download the JDK

Link: https://pan.baidu.com/s/1swrV9ffJnmz4S0mfkuBbIw (extraction code: 1111)

2. Dockerfile

```dockerfile
# Base image; must be the first instruction
FROM centos:7
# Author
LABEL maintainer="koushenhai"
# Description
LABEL description="jdk:1.8 image"
# ADD copies a file from the build context; tar archives are unpacked automatically into the target directory
ADD jdk-8u161-linux-x64.tar.gz /usr/local
# Run a command at build time
RUN cd /usr/local && mv jdk1.8.0_161 /usr/local/jdk
# Environment variables
ENV JAVA_HOME /usr/local/jdk
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $PATH:$JAVA_HOME/bin
```

3. Commands

```bash
# Build the image
docker build -t jdk:1.8 .
# List images > seeing jdk and centos means the build succeeded
docker images
# Run a container
# -i > keep STDIN open even if not attached; usually combined with -t
# -d > run the container in the background and print its id
# -t > allocate a pseudo-TTY; usually combined with -i
docker run -itd --name jdk8 jdk:1.8
# List containers
docker ps -a
# Enter the container
docker exec -it jdk8 /bin/bash
# Check the JDK version > a version string means it works
java -version
# Leave the container
exit
```

II. Build the Apollo image

1. Download Apollo

MySQL installer package: https://pan.baidu.com/s/1swrV9ffJnmz4S0mfkuBbIw (extraction code: 1111)

2. Dockerfile

```dockerfile
# Base image
FROM jdk:1.8
# Metadata
LABEL maintainer="koushenhai"
LABEL description="laokou-apollo"
# Exposed ports
EXPOSE 8080 8090 8070
# Volume
VOLUME /data
# Copy the archive (ADD does not auto-unpack zip files, so unzip below)
ADD apollo.zip /opt/apollo.zip
# Install unzip, unpack, make the script executable
RUN cd /opt \
 && mkdir -p /opt/apollo \
 && yum install -y unzip zip \
 && unzip apollo.zip -d /opt/apollo \
 && chmod 777 /opt/apollo/demo.sh
# Start Apollo and keep the container alive
ENTRYPOINT cd /opt/apollo && ./demo.sh start && tail -f /dev/null
```

3. Edit demo.sh and put it back into the zip archive

```properties
apollo_config_db_url=jdbc:mysql://ip:3306/apolloconfigdb?characterEncoding=utf8
apollo_config_db_username=root
apollo_config_db_password=xxx

# apollo portal db info
apollo_portal_db_url=jdbc:mysql://ip:3306/apolloportaldb?characterEncoding=utf8
apollo_portal_db_username=root
apollo_portal_db_password=xxx

config_server_url=http://0.0.0.0:8080
admin_server_url=http://0.0.0.0:8090
eureka_service_url=http://ip:port/eureka/
portal_url=http://0.0.0.0:8070
```

4. Commands

```bash
# Build the image
docker build -t apollo:1.0 .
# Start the container
# -d  run in the background
# -p  port mapping, host port:container port
# --name  container name
docker run -itd -p 8080:8080 -p 8090:8090 -p 8070:8070 --name laokou-apollo apollo:1.0
# Enter the container
docker exec -it laokou-apollo /bin/bash
# Go to the apollo directory
cd /opt/apollo
# Watch the log
tail -1000f apollo-service_optapolloservice.log
# Once startup succeeds, leave the container
exit
```

Open http://ip:8070; the account is apollo and the password is admin.
I. Build the JDK image

This part is identical to the JDK image section of the previous post (download the JDK, the same `jdk:1.8` Dockerfile, and the same build/run commands), so it is not repeated here.

II. Deploy Eureka

1. Dockerfile

```dockerfile
# Base image
FROM jdk:1.8
# Metadata
LABEL maintainer="koushenhai"
LABEL description="laokou-register"
# Exposed port
EXPOSE 1000
# Volume
VOLUME /data
# Copy the jar
ADD register.jar /opt/register.jar
# Start the registry
ENTRYPOINT ["java","-jar","/opt/register.jar"]
```

2. Commands

```bash
# Build the image
docker build -t register:1.0 .
# List images
docker images
# Start the container
# -d  run in the background
# -p  port mapping, host port first, container port second; keep them identical where possible
docker run -itd -p 1000:1000 --name laokou-register register:1.0
```

Open http://ip:1000 to see the Eureka dashboard.

III. Deploy Apollo

Reference: https://github.com/apolloconfig/apollo/wiki/Apollo-Quick-Start-Docker%E9%83%A8%E7%BD%B2

IV. Deploy the gateway

To be updated...
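A service registers itself with the Eureka instance started above through the Spring Cloud Netflix client. Here is a minimal sketch (not from the original post), assuming `spring-cloud-starter-netflix-eureka-client` is on the classpath; the application name `laokou-demo` is made up for illustration.

```java
// Assumes the following application.yml:
//   spring.application.name: laokou-demo          (hypothetical name)
//   eureka.client.service-url.defaultZone: http://ip:1000/eureka/
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@SpringBootApplication
@EnableEurekaClient
public class DemoApplication {
    public static void main(String[] args) {
        // On startup the instance appears on the dashboard at http://ip:1000
        SpringApplication.run(DemoApplication.class, args);
    }
}
```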
I. Chain of Responsibility

Introduction

As the name suggests, the chain of responsibility pattern builds a chain of receiver objects for a request, decoupling the request's sender from its receivers. For example, when asking for leave at a company, the longer the leave, the higher up the management chain the request is escalated; that layered escalation is a chain structure.

Implementation

Create an abstract class AbstractArticleHandler and two article-handler classes that extend it. Each handler has its own logic: it checks the article type, handles the request itself if the type matches, and otherwise passes the message on to the next handler.

Step 1: create the abstract article handler

```java
public abstract class AbstractArticleHandler {

    /**
     * The next handler in the chain
     */
    private AbstractArticleHandler abstractArticleHandler;

    /**
     * @return the article type this handler is responsible for
     */
    protected abstract ArticleTypeEnum getArticleTypeEnum();

    /**
     * Pull articles
     * @param uris array of article links
     */
    protected abstract void articlePull(String[] uris);

    public final void handlerArticle(final List<String> links, final String articleType) {
        if (this.getArticleTypeEnum().getValue().equals(articleType)) {
            this.articlePull(links.toArray(new String[links.size()]));
        } else {
            if (this.abstractArticleHandler != null) {
                this.abstractArticleHandler.handlerArticle(links, articleType);
            }
        }
    }

    public void setNext(AbstractArticleHandler abstractArticleHandler) {
        this.abstractArticleHandler = abstractArticleHandler;
    }
}

enum ArticleTypeEnum {

    CSDN("csdn"),
    BKY("bky");

    private final String value;

    ArticleTypeEnum(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}
```

Step 2: create the concrete handlers

```java
public class CsdnArticleHandler extends AbstractArticleHandler {

    @Override
    protected ArticleTypeEnum getArticleTypeEnum() {
        return ArticleTypeEnum.CSDN;
    }

    @Override
    protected void articlePull(String[] uris) {
    }
}

public class BkyArticleHandler extends AbstractArticleHandler {

    @Override
    protected ArticleTypeEnum getArticleTypeEnum() {
        return ArticleTypeEnum.BKY;
    }

    @Override
    protected void articlePull(String[] uris) {
    }
}
```

Step 3: wire the handlers into a chain

```java
public class ArticleService {

    public static void main(String[] args) {
        AbstractArticleHandler a1 = new CsdnArticleHandler();
        AbstractArticleHandler a2 = new BkyArticleHandler();
        a1.setNext(a2);
        // handlerArticle takes the list of links plus the article type
        a1.handlerArticle(Collections.singletonList("article-link"), "csdn");
    }
}
```

II. Decorator

Introduction

The decorator pattern attaches new behavior to an existing object without modifying its structure. For example, a phone is usable with or without a screen protector; putting the protector on does not change how the phone is used.

Implementation

Create a ProcessStrategy interface and concrete classes that implement it, then an abstract decorator class ProcessHandler that also implements ProcessStrategy and holds a processStrategy object as an instance field. IteratorProcess is a concrete decorator extending ProcessHandler, and ArticleHandler uses ProcessHandler to decorate a ProcessStrategy.

Step 1: create the interface

```java
/**
 * @author Kou Shenhai
 * @version 1.0
 * @date 2021/4/24 15:44
 */
public interface ProcessStrategy {

    /**
     * The crawler's actual extraction method
     * @param page
     */
    void process(Page page);
}
```

Step 2: create the implementing classes

```java
/**
 * @author Kou Shenhai
 * @version 1.0
 * @date 2021/4/24 16:05
 */
public class BkyArticleProcess implements ProcessStrategy {

    @Override
    public void process(Page page) {
    }
}

/**
 * @author Kou Shenhai
 * @version 1.0
 * @date 2021/4/24 16:05
 */
public class CsdnArticleProcess implements ProcessStrategy {

    @Override
    public void process(Page page) {
    }
}
```

Step 3: create the abstract decorator implementing ProcessStrategy

```java
/**
 * Decorator base class: implements the interface but delegates to the wrapped strategy
 * @author Kou Shenhai
 * @version 1.0
 * @date 2021/4/24 16:01
 */
public abstract class ProcessHandler implements ProcessStrategy {

    protected volatile ProcessStrategy processStrategy;

    public ProcessHandler(ProcessStrategy processStrategy) {
        this.processStrategy = processStrategy;
    }

    @Override
    public void process(Page page) {
        processStrategy.process(page);
    }
}
```

Step 4: extend ProcessHandler with a concrete decorator

```java
/**
 * Concrete decorator for article-parsing strategies
 * @author Kou Shenhai
 * @version 1.0
 * @date 2021/4/24 16:15
 */
public class IteratorProcess extends ProcessHandler {

    public IteratorProcess(ProcessStrategy processStrategy) {
        super(processStrategy);
    }
}
```

Step 5: use IteratorProcess to decorate a ProcessStrategy object

```java
public class ArticleHandler {

    public static void main(String[] args) {
        // decorate
        IteratorProcess process = new IteratorProcess(new BkyArticleProcess());
    }
}
```

III. Observer

Introduction

Use the observer pattern when there is a one-to-many relationship between objects: when one object's data changes, all objects that depend on it are notified automatically. Note: the JDK ships support classes for this pattern (this implementation follows the JDK's design and extends it).

The implementation uses three types: an Observable (implemented by a concrete class), an Observer interface, and concrete observers. The Observable carries the methods for binding observers to itself and unbinding them again. We create the Observable interface, the Observer interface, and concrete classes implementing them.

Step 1: create the Observable interface

```java
/**
 * Modeled on java.util.Observable,
 * leaving the actual logic to the concrete implementation
 * @author Kou Shenhai
 */
public interface Observable {

    /**
     * Register an observer
     * @param o
     */
    void addObserver(Observer o);

    /**
     * Notify the observers
     * @param arg
     */
    void notifyObservers(Object arg);

    /**
     * Unregister an observer
     * @param o
     */
    void deleteObserver(Observer o);
}
```

Step 2: implement Observable

```java
public class ArticlePipeline implements Observable {

    private Vector<Observer> obs;

    public ArticlePipeline() {
        obs = new Vector<>(1);
    }

    // Entry point called with crawler results (see the webmagic integration
    // later in this series); fans the result map out to all observers
    public void process(ResultItems resultItems, Task task) {
        notifyObservers(resultItems.getAll());
    }

    @Override
    public synchronized void addObserver(Observer o) {
        if (o == null) {
            throw new NullPointerException();
        }
        if (!obs.contains(o)) {
            obs.addElement(o);
        }
    }

    @Override
    public synchronized void notifyObservers(Object arg) {
        Object[] arrLocal;
        synchronized (this) {
            arrLocal = obs.toArray();
        }
        for (int i = arrLocal.length - 1; i >= 0; i--) {
            ((Observer) arrLocal[i]).update(this, arg);
        }
    }

    @Override
    public synchronized void deleteObserver(Observer o) {
        obs.removeElement(o);
    }
}
```

Step 3: create the Observer interface

```java
/**
 * Modeled on {@link java.util.Observer}
 * @author Kou Shenhai
 */
public interface Observer {

    /**
     * Called when the observed data changes
     * @param o
     * @param data
     */
    void update(Observable o, Object data);
}
```

Step 4: create a concrete observer

```java
public class PipelineObserver implements Observer {

    @Override
    public void update(Observable o, Object data) {
    }
}
```

Step 5: use the Observable with a concrete observer

```java
public class ArticleHandler {

    public static void main(String[] args) {
        Observer o = new PipelineObserver();
        Observable ob = new ArticlePipeline();
        ob.addObserver(o);
    }
}
```
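To see how the three patterns cooperate in the crawler built in the next post, here is a self-contained sketch (not from the original post): the chain routes by article type, the decorator wraps the parsing strategy, and the observers are notified when results arrive. `Page` is stubbed with a plain class, since the real one comes from webmagic.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class PatternsDemo {

    // Stand-in for webmagic's Page class
    static class Page {
        final String url;
        Page(String url) { this.url = url; }
    }

    interface ProcessStrategy { void process(Page page); }

    interface Observer { void update(Object source, Object data); }

    // Decorator: adds logging around any strategy without changing it
    static class LoggingProcess implements ProcessStrategy {
        private final ProcessStrategy delegate;
        LoggingProcess(ProcessStrategy delegate) { this.delegate = delegate; }
        @Override public void process(Page page) {
            System.out.println("before " + page.url);
            delegate.process(page);
            System.out.println("after " + page.url);
        }
    }

    // Observable subject: fans results out to registered observers
    static class Pipeline {
        private final List<Observer> observers = new CopyOnWriteArrayList<>();
        void addObserver(Observer o) { observers.add(o); }
        void publish(Object data) { observers.forEach(o -> o.update(this, data)); }
    }

    // Chain of responsibility: route by article type
    static abstract class Handler {
        private Handler next;
        Handler setNext(Handler next) { this.next = next; return next; }
        abstract String type();
        abstract void pull(List<String> links);
        final void handle(List<String> links, String type) {
            if (type().equals(type)) { pull(links); }
            else if (next != null) { next.handle(links, type); }
        }
    }

    public static void main(String[] args) {
        Pipeline pipeline = new Pipeline();
        pipeline.addObserver((src, data) -> System.out.println("observer got: " + data));

        ProcessStrategy csdn = new LoggingProcess(
                page -> pipeline.publish("parsed " + page.url));

        Handler csdnHandler = new Handler() {
            String type() { return "csdn"; }
            void pull(List<String> links) { links.forEach(l -> csdn.process(new Page(l))); }
        };
        Handler bkyHandler = new Handler() {
            String type() { return "bky"; }
            void pull(List<String> links) { /* omitted */ }
        };
        csdnHandler.setNext(bkyHandler);
        csdnHandler.handle(Arrays.asList("https://example.com/a"), "csdn");
    }
}
```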
IV. webmagic

Official introduction

webmagic follows the design of Scrapy, one of the best crawler frameworks in the industry, and builds on the most mature tools in the Java world, such as HttpClient and Jsoup.

Architecture

WebMagic is split into four components: Downloader (downloading), PageProcessor (processing), Scheduler (URL management) and Pipeline (persistence), organized by Spider (the container) so that they interact and execute as a pipeline.

Components

- Downloader: downloads pages for later processing; webmagic uses HttpClient by default.
- PageProcessor: parses pages, extracts useful information and discovers new links; HTML parsing is done with Jsoup.
- Scheduler: manages the URLs waiting to be crawled and deduplicates them. By default webmagic uses the JDK's in-memory queue to manage URLs and a set for deduplication; Redis is supported for distributed management.
- Pipeline: handles extraction results, including computation and persistence to files, databases and so on.
- XSoup: an XPath parser built on top of Jsoup.

V. Microservice integration

Table design

```sql
-- ----------------------------
-- Table structure for boot_link
-- ----------------------------
DROP TABLE IF EXISTS `boot_link`;
CREATE TABLE `boot_link` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `uri` varchar(400) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'article link',
  `type` varchar(20) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'site type',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

INSERT INTO `boot_link` VALUES ('11', 'https://www.cnblogs.com/koushenhai/p/12595630.html', 'bky');
INSERT INTO `boot_link` VALUES ('12', 'https://kcloud.blog.csdn.net/article/details/118633942', 'csdn');
INSERT INTO `boot_link` VALUES ('20', 'https://kcloud.blog.csdn.net/article/details/121491124', 'csdn');
INSERT INTO `boot_link` VALUES ('33', 'https://kcloud.blog.csdn.net/article/details/82109656', 'csdn');
INSERT INTO `boot_link` VALUES ('41', 'https://kcloud.blog.csdn.net/article/details/117769662', 'csdn');
INSERT INTO `boot_link` VALUES ('49', 'https://kcloud.blog.csdn.net/article/details/118660073', 'csdn');
INSERT INTO `boot_link` VALUES ('57', 'https://kcloud.blog.csdn.net/article/details/119720174', 'csdn');
INSERT INTO `boot_link` VALUES ('65', 'https://kcloud.blog.csdn.net/article/details/123179670', 'csdn');
INSERT INTO `boot_link` VALUES ('66', 'https://kcloud.blog.csdn.net/article/details/117635759', 'csdn');
INSERT INTO `boot_link` VALUES ('74', 'https://kcloud.blog.csdn.net/article/details/117771583', 'csdn');
INSERT INTO `boot_link` VALUES ('78', 'https://kcloud.blog.csdn.net/article/details/123039609', 'csdn');
INSERT INTO `boot_link` VALUES ('79', 'https://kcloud.blog.csdn.net/article/details/82588914', 'csdn');
INSERT INTO `boot_link` VALUES ('96', 'https://kcloud.blog.csdn.net/article/details/108021143', 'csdn');
INSERT INTO `boot_link` VALUES ('118', 'https://kcloud.blog.csdn.net/article/details/121305244', 'csdn');
INSERT INTO `boot_link` VALUES ('128', 'https://kcloud.blog.csdn.net/article/details/82110125', 'csdn');
INSERT INTO `boot_link` VALUES ('129', 'https://kcloud.blog.csdn.net/article/details/123630814', 'csdn');
INSERT INTO `boot_link` VALUES ('130', 'https://kcloud.blog.csdn.net/article/details/116420798', 'csdn');
INSERT INTO `boot_link` VALUES ('131', 'https://kcloud.blog.csdn.net/article/details/123484520', 'csdn');
INSERT INTO `boot_link` VALUES ('132', 'https://kcloud.blog.csdn.net/article/details/123013305', 'csdn');
INSERT INTO `boot_link` VALUES ('133', 'https://kcloud.blog.csdn.net/article/details/123390833', 'csdn');
INSERT INTO `boot_link` VALUES ('134', 'https://kcloud.blog.csdn.net/article/details/123311487', 'csdn');
INSERT INTO `boot_link` VALUES ('135', 'https://kcloud.blog.csdn.net/article/details/123292276', 'csdn');
INSERT INTO `boot_link` VALUES ('136', 'https://kcloud.blog.csdn.net/article/details/123123229', 'csdn');
INSERT INTO `boot_link` VALUES ('137', 'https://kcloud.blog.csdn.net/article/details/116704223', 'csdn');
INSERT INTO `boot_link` VALUES ('145', 'https://kcloud.blog.csdn.net/article/details/123739314', 'csdn');
INSERT INTO `boot_link` VALUES ('146', 'https://kcloud.blog.csdn.net/article/details/123688809', 'csdn');
INSERT INTO `boot_link` VALUES ('147', 'https://kcloud.blog.csdn.net/article/details/123673741', 'csdn');
INSERT INTO `boot_link` VALUES ('148', 'https://kcloud.blog.csdn.net/article/details/123628721', 'csdn');
INSERT INTO `boot_link` VALUES ('149', 'https://kcloud.blog.csdn.net/article/details/123599384', 'csdn');
INSERT INTO `boot_link` VALUES ('150', 'https://kcloud.blog.csdn.net/article/details/122181814', 'csdn');
INSERT INTO `boot_link` VALUES ('151', 'https://kcloud.blog.csdn.net/article/details/121557788', 'csdn');
INSERT INTO `boot_link` VALUES ('159', 'https://kcloud.blog.csdn.net/article/details/116449621', 'csdn');
INSERT INTO `boot_link` VALUES ('160', 'https://kcloud.blog.csdn.net/article/details/83623118', 'csdn');
INSERT INTO `boot_link` VALUES ('161', 'https://kcloud.blog.csdn.net/article/details/84777724', 'csdn');
INSERT INTO `boot_link` VALUES ('162', 'https://kcloud.blog.csdn.net/article/details/105587614', 'csdn');
INSERT INTO `boot_link` VALUES ('163', 'https://kcloud.blog.csdn.net/article/details/83515122', 'csdn');
INSERT INTO `boot_link` VALUES ('164', 'https://kcloud.blog.csdn.net/article/details/83451040', 'csdn');
INSERT INTO `boot_link` VALUES ('165', 'https://kcloud.blog.csdn.net/article/details/117252826', 'csdn');
INSERT INTO `boot_link` VALUES ('166', 'https://kcloud.blog.csdn.net/article/details/84826176', 'csdn');
INSERT INTO `boot_link` VALUES ('167', 'https://kcloud.blog.csdn.net/article/details/120031600', 'csdn');
INSERT INTO `boot_link` VALUES ('168', 'https://kcloud.blog.csdn.net/article/details/119685953', 'csdn');
INSERT INTO `boot_link` VALUES ('169', 'https://kcloud.blog.csdn.net/article/details/120147123', 'csdn');
INSERT INTO `boot_link` VALUES ('170', 'https://kcloud.blog.csdn.net/article/details/120245035', 'csdn');
INSERT INTO `boot_link` VALUES ('171', 'https://kcloud.blog.csdn.net/article/details/120190383', 'csdn');
INSERT INTO `boot_link` VALUES ('179', 'https://kcloud.blog.csdn.net/article/details/94590629', 'csdn');
INSERT INTO `boot_link` VALUES ('187', 'https://kcloud.blog.csdn.net/article/details/116949872', 'csdn');
INSERT INTO `boot_link` VALUES ('192', 'https://kcloud.blog.csdn.net/article/details/123789292', 'csdn');
INSERT INTO `boot_link` VALUES ('193', 'https://kcloud.blog.csdn.net/article/details/123780832', 'csdn');
INSERT INTO `boot_link` VALUES ('194', 'https://kcloud.blog.csdn.net/article/details/123771040', 'csdn');
INSERT INTO `boot_link` VALUES ('195', 'https://kcloud.blog.csdn.net/article/details/122522290', 'csdn');
INSERT INTO `boot_link` VALUES ('196', 'https://kcloud.blog.csdn.net/article/details/123833614', 'csdn');

DROP TABLE IF EXISTS `boot_article`;
CREATE TABLE `boot_article` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `title` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'article title',
  `content` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'article content',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;
```

Dependencies

```xml
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-core</artifactId>
    <version>0.7.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>us.codecraft</groupId>
    <artifactId>webmagic-extension</artifactId>
    <version>0.7.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.esotericsoftware</groupId>
    <artifactId>reflectasm</artifactId>
    <version>1.11.7</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aspects</artifactId>
</dependency>
```

Core code (crawling CSDN as the example)

Create the CsdnArticleSpider class:

```java
/**
 * Default crawler implementation
 * @author Kou Shenhai
 * @version 1.0
 * @date 2020/11/15 16:40
 */
@Configuration
@Slf4j
public class CsdnArticleSpider implements PageProcessor {

    private ProcessStrategy processStrategy;

    private static final int SLEEP_TIME = 3000;
    private static final int TIMEOUT = 3000;
    private static final int RETRY_TIMES = 10;
    private static final int RETRY_SLEEP_TIME = 3000;
    private static final String CHARSET = "utf-8";
    private static final String DOMAIN = "csdn.net";
    private static final String USER_AGENT = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36";

    /**
     * Site holds the crawl configuration: charset, retry count, crawl interval, etc.
     */
    private Site site = Site
            .me()
            .setRetryTimes(RETRY_TIMES)
            .setRetrySleepTime(RETRY_SLEEP_TIME)
            .setDomain(DOMAIN)
            .setSleepTime(SLEEP_TIME)
            .setTimeOut(TIMEOUT)
            .setCharset(CHARSET)
            .setUserAgent(USER_AGENT)
            .addHeader("Cookie", "");

    public void setProcessStrategy(ProcessStrategy processStrategy) {
        this.processStrategy = processStrategy;
    }

    /**
     * process is the core extension point of the crawler; the extraction logic goes here
     */
    @Override
    public void process(Page page) {
        if (processStrategy == null) {
            throw new NullPointerException();
        }
        // before hook
        preProcess(page);
        // strategy pattern
        processStrategy.process(page);
        // after hook
        afterProcess(page);
    }

    @Override
    public Site getSite() {
        return site;
    }

    public Spider getSpider() {
        return Spider.create(this);
    }

    /**
     * The two hooks below let subclasses extend process, e.g. to enqueue further
     * URLs; the main logic lives in the processStrategy
     */
    protected void preProcess(Page page) {
        log.info("crawl started...");
    }

    protected void afterProcess(Page page) {
        log.info("crawl finished...");
    }
}
```

Create CsdnArticleHandler:

```java
/**
 * @author Kou Shenhai
 */
@Component
public class CsdnArticleHandler extends AbstractArticleHandler {

    @Autowired
    private CsdnArticleSpider csdnArticleSpider;

    @Autowired
    private ArticlePipeline articlePipeline;

    @Autowired
    private PipelineObserver pipelineObserver;

    @Override
    protected ArticleTypeEnum getArticleTypeEnum() {
        return ArticleTypeEnum.CSDN;
    }

    @Override
    @Async
    protected void articlePull(String[] uris) {
        HttpClientDownloader httpClientDownloader = new HttpClientDownloader();
        articlePipeline.addObserver(pipelineObserver);
        csdnArticleSpider.setProcessStrategy(new IteratorProcess(new CsdnArticleProcess()));
        csdnArticleSpider.getSpider().addUrl(uris)
                .setDownloader(httpClientDownloader)
                // crawl with multiple threads
                .thread(2 * Runtime.getRuntime().availableProcessors())
                .addPipeline(articlePipeline)
                // start the crawler
                .start();
    }
}
```

Create ArticlePipeline:

```java
public class ArticlePipeline implements CallablePipeline {

    private Vector<Observer> obs;

    public ArticlePipeline() {
        obs = new Vector<>(1);
    }

    @Override
    public void process(ResultItems resultItems, Task task) {
        notifyObservers(resultItems.getAll());
    }

    @Override
    public synchronized void addObserver(Observer o) {
        if (o == null) {
            throw new NullPointerException();
        }
        if (!obs.contains(o)) {
            obs.addElement(o);
        }
    }

    @Override
    public synchronized void notifyObservers(Object arg) {
        Object[] arrLocal;
        synchronized (this) {
            arrLocal = obs.toArray();
        }
        for (int i = arrLocal.length - 1; i >= 0; i--) {
            ((Observer) arrLocal[i]).update(this, arg);
        }
    }

    @Override
    public synchronized void deleteObserver(Observer o) {
        obs.removeElement(o);
    }
}
```

Create CsdnArticleProcess:

```java
/**
 * @author Kou Shenhai
 * @version 1.0
 * @date 2021/4/24 16:05
 */
public class CsdnArticleProcess implements ProcessStrategy {

    @Override
    public void process(Page page) {
        String content = page.getHtml().xpath("//*[@id='mainBox']/main/div[1]/article").get();
        String title = page.getHtml().xpath("//*[@id='articleContentId']/text()").get();
        page.putField("content", content);
        page.putField("title", title);
    }
}
```

VI. Testing

Data-collection run overview (screenshots omitted).

Reference tutorial: 菜鸟教程-设计模式
Reference tutorial: the webmagic documentation

This project is for technical study and research only; any commercial use and any action that harms the crawled sites' interests are prohibited.
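Tying the pieces together: a hedged sketch (not from the original post) of how the handler chain might be assembled and fed from the boot_link table. `BootLinkMapper` and its query method are hypothetical stand-ins for whatever DAO layer the project actually uses.

```java
import java.util.List;
import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class ArticlePullService {

    /** Hypothetical DAO over boot_link; swap in the project's real mapper. */
    public interface BootLinkMapper {
        List<String> findUrisByType(String type);
    }

    @Autowired private CsdnArticleHandler csdnArticleHandler;
    @Autowired private BkyArticleHandler bkyArticleHandler;
    @Autowired private BootLinkMapper bootLinkMapper;

    @PostConstruct
    public void buildChain() {
        // CSDN handler first; anything it cannot handle falls through to the bky handler
        csdnArticleHandler.setNext(bkyArticleHandler);
    }

    public void pull(String articleType) {
        List<String> links = bootLinkMapper.findUrisByType(articleType);
        // the chain dispatches to the matching handler, which starts the @Async crawl
        csdnArticleHandler.handlerArticle(links, articleType);
    }
}
```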
I. Prerequisites

- A Kafka cluster
- An Elasticsearch cluster
- The microservice environment

II. Machines

- 192.168.1.1
- 192.168.1.2
- 192.168.1.3

III. ELK

ELK consists of Elasticsearch, Logstash and Kibana and is used for real-time data retrieval and analysis.

Elasticsearch

Overview
- An open-source distributed search engine written in Java on top of Lucene (a full-text retrieval engine); clients talk to Elasticsearch through a RESTful web interface, even from a browser.
- Stores, searches and analyzes large volumes of data in near real time.

Features
- Simple to configure and get started with; JSON interface
- Flexible processing
- Linear cluster scaling
- Efficient retrieval

Functions
- Real-time full-text search
- Data analysis
- Data storage

Concepts
- NRT (near real time): there is a slight delay (typically one second) between indexing a document and that document becoming searchable.
- shards: in practice the data in an index may exceed a single node's hardware limits. Two billion documents taking 2 TB, for example, do not fit on a single node's disk, or make single-node searches too slow. Elasticsearch therefore splits an index into multiple shards. When creating an index you can define the number of shards; each shard is a fully functional, independent index that can live on any node in the cluster. The two main reasons for sharding: horizontal scaling of storage, and distributed, parallel cross-shard operations for better performance and throughput.
- replicas: network or other failures can cause data loss, so for robustness there must be a failover mechanism in case a shard or node becomes unavailable. Index shards are therefore copied one or more times; the copies are called replicas (replica shards). The two main reasons for replicas: high availability in case a shard or node fails (which is why a replica must live on a different node than its primary shard), and higher throughput, since searches can run on all replicas in parallel.
- index: a collection of documents with somewhat similar characteristics, analogous to a table in a database, e.g. a customer-data index or an order-data index. Searching, updating and deleting the documents in an index all go through the index name.
- document: the basic unit of information in an index, analogous to a row in a database, e.g. a user-info document. Documents are represented as JSON, and an index can store any number of them.
- cluster: one or more nodes organized together (a cluster can be a single node) that jointly hold the entire dataset and provide indexing and search across all of it. One node is the master; it can be chosen by election and provides federated indexing and search across nodes. Every node joins a cluster by cluster name, so make sure different environments use different cluster names.
- node: a single server that is part of the cluster, stores data, and participates in the cluster's indexing and search. Node names must be unique; they identify which server a node corresponds to within the cluster.
- type: an index could define one or more types, grouping documents with a common set of fields; types were removed in Elasticsearch 7 and later.

Logstash

Overview
- An open-source tool for collecting, parsing and filtering logs; supports almost any type of log (system logs, business logs).
- Accepts logs from many data sources (MySQL, Kafka) and outputs data in many ways (Elasticsearch, email).

Kibana

Overview
- An open-source tool that gives Elasticsearch log analysis a friendly web interface.
- Searches, analyzes and visualizes the log data stored in Elasticsearch indices.
- Uses Elasticsearch's RESTful interface to retrieve data; supports custom dashboards as well as ad-hoc querying and filtering.

IV. Kafka

Kafka is a distributed, publish/subscribe message queue, used mainly for real-time processing in the big-data space.

Overview
- Distributed, partitioned, replicated message middleware coordinated through ZooKeeper
- Written in Scala; processes large volumes of data in real time

Characteristics
- Low latency, high throughput: hundreds of thousands of messages per second with latencies as low as a few milliseconds. Each topic can be split into multiple partitions, and consumer groups consume the partitions, improving load balancing and consumption capacity.
- Scalability: the Kafka cluster supports hot expansion.
- Durability: messages are persisted to local disk, and data replication guards against loss.
- Read/write performance: supports thousands of clients reading and writing simultaneously.

Cluster
A Kafka node is a broker. Messages are carried by topics and can be stored in one or more partitions. An application publishing messages is a producer, an application consuming them is a consumer, and multiple consumers can form a consumer group to consume a topic's messages together.

V. Log-monitoring architecture

In the earlier ELK setup the log flow was logstash > elasticsearch. As business volume grows, the log-monitoring architecture is extended with a Kafka cluster, so the flow becomes kafka > logstash > elasticsearch.

Why does adding Kafka help ELK?
- There is no message buffer between the logstash client and the logstash server, so if the server goes down, messages can be lost. With Kafka's message mechanism in between, data is retained even if the logstash server stops due to a failure, avoiding data loss.
- Under high concurrency, reads and writes are so frequent that logstash consumes a lot of CPU and memory. Kafka, as a buffering message queue, decouples the processing stages, relieves pressure on the system, improves scalability, and provides peak-shaving capacity: critical components can withstand sudden traffic bursts instead of collapsing under overload.

VI. Microservice integration

Spring Boot 2.0 with ELK 7.6.2

1. Dependencies

```xml
<!-- logback: push log events to kafka -->
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
```

2. logback-spring.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <logger name="org.springframework.web" level="INFO"/>
    <logger name="org.springboot.sample" level="TRACE" />

    <!-- dev and test environments -->
    <springProfile name="dev,test">
        <logger name="org.springframework.web" level="INFO"/>
        <logger name="org.springboot.sample" level="INFO" />
        <logger name="io.laokou.elasticsearch" level="DEBUG" />
    </springProfile>

    <!-- production environment -->
    <springProfile name="prod">
        <logger name="org.springframework.web" level="ERROR"/>
        <logger name="org.springboot.sample" level="ERROR" />
        <logger name="io.laokou.elasticsearch" level="ERROR" />
    </springProfile>

    <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <!-- the encoder does two things: converts the log event into a byte array and writes it to the output stream -->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg %n</pattern>
        </encoder>
        <!-- topic; created automatically -->
        <topic>laokou-elasticsearch</topic>
        <!-- delivery settings -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />
        <!-- kafka cluster addresses -->
        <producerConfig>bootstrap.servers=192.168.1.1:9092,192.168.1.2:9092,192.168.1.3:9092</producerConfig>
        <!-- acks=0: a message counts as sent as soon as it leaves, whether or not it reached disk -->
        <producerConfig>acks=0</producerConfig>
        <!-- if volume is low, flush automatically after 1000 ms -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- never block the application on the producer -->
        <producerConfig>max.block.ms=0</producerConfig>
        <appender-ref ref="CONSOLE" />
    </appender>

    <root level="INFO">
        <appender-ref ref="KAFKA" />
    </root>
</configuration>
```

3. logstash.kafka.conf

```
input{
  kafka {
    # kafka cluster addresses
    bootstrap_servers => "192.168.1.1:9092,192.168.1.2:9092,192.168.1.3:9092"
    topics => "laokou-elasticsearch"
  }
}
output{
  elasticsearch{
    hosts => ["192.168.1.1:9200","192.168.1.2:9200","192.168.1.3:9200"]
    index => "laokou-elasticsearch-%{+YYYY.MM.dd}"
  }
  stdout{
    codec => rubydebug
  }
}
```

4. Start logstash and kibana

```bash
logstash -f logstash.kafka.conf
```

Done.

Reference posts: ELK日志分析系统(基本原理简介+ELK群集部署); 消息队列(MQ)与kafaka概述(Filebeat+Kafka+ELK部署); Kafka总结(八):KafKa与ELK整合应用; ELK-基础系列(六)-ELK加入消息队列-Kafka部署
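With the logback configuration above in place, nothing special is needed in application code: any slf4j call at INFO or above is encoded by the pattern encoder and published to the `laokou-elasticsearch` topic, then picked up by logstash. A small illustrative sketch (class and message are made up, not from the original post):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void createOrder(String orderId) {
        // ends up in Kafka -> logstash -> Elasticsearch -> Kibana
        log.info("order created, id={}", orderId);
        try {
            // ... business logic ...
        } catch (Exception e) {
            log.error("order failed, id={}", orderId, e);
        }
    }
}
```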
I. Prerequisites

- mycat 1.6 installed
- an es 7.6.2 cluster installed
- logstash 7.6.2 installed

II. Options

In real projects, business data mostly lives in MySQL, but MySQL's ability to search huge datasets is poor. Pairing MySQL with ES to give the business strong search capabilities has become the mainstream approach; the hard part is getting MySQL data into ES. There are several ways to sync MySQL data to ES:

- write to ES through the ES API at the same time the MySQL row is updated (synchronous)
- sync data to ES with logstash (asynchronous)
- sync data to ES from the MySQL binlog (asynchronous)

III. Syncing through the ES API

Just call the ES API directly:

```java
@PutMapping("/sync")
@ApiOperation("Sync an update into ES")
@CrossOrigin
public void updateData(@RequestBody final ElasticsearchModel model) {
    String id = model.getId();
    String indexName = model.getIndexName();
    String paramJson = model.getData();
    elasticsearchUtil.updateData(indexName, id, paramJson);
}
```

IV. Syncing to ES with logstash

Reference post: 同步MYSQL数据到Elasticsearch

logstash uses its jdbc input to sync MySQL data on a schedule, so download the MySQL JDBC driver first.

1. Configure logstash.mysql.conf

```
input {
  # JDBC data source
  jdbc {
    # path to the mysql jdbc driver
    jdbc_driver_library => "D:/xxxxx/mysql-connector-java-5.1.35.jar"
    # mysql jdbc driver class
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # mysql connection string
    jdbc_connection_string => "jdbc:mysql://localhost:8066/TESTDB"
    # mysql account
    jdbc_user => "root"
    # mysql password
    jdbc_password => "123456"
    # schedule: run once a minute
    schedule => "* * * * *"
    # query statement; incremental update
    statement => "select * from boot_resource where id > :sql_last_value"
    # allow sql_last_value to come from a column of the result
    use_column_value => true
    # sql_last_value takes the id of the last row in the result
    tracking_column => "id"
    # enable paging
    jdbc_paging_enabled => true
    # page size
    jdbc_page_size => 50
  }
}
output {
  # write into es
  elasticsearch {
    # index name
    index => "laokou-resource"
    # es addresses
    hosts => ["192.168.1.1:9200","192.168.1.2:9200","192.168.1.3:9200"]
    # use the mysql primary key as the document_id
    document_id => "%{id}"
  }
  stdout {
    codec => rubydebug
  }
}
```

2. jdbc schedule syntax

The schedule follows cron syntax:

```
* * * * *
minute hour day month weekday
```

Field ranges:
- minute: 0-59
- hour: 0-23
- day: 1-31
- month: 1-12
- weekday: 0-6

Common special characters:
- asterisk (*): all values; as the first field it means every minute
- slash (/): an interval, e.g. */5 in the minute field means every 5 minutes

Examples:

```
# once a minute
* * * * *
# every 5 minutes
*/5 * * * *
# once an hour (on the hour)
0 */1 * * *
# once a day at 08:00
0 8 * * *
```

3. Incremental sync

A sync statement that scans the whole table is fine for small datasets, but with a large dataset it will grind to a halt and logstash will die with an OOM. So sync incrementally, shipping only new data on each run. logstash provides the sql_last_value field for this. The idea: each time logstash runs the SQL, it stores the value of a chosen column from the last row of the result in sql_last_value; the next run uses that value as a condition and queries onward from it.

```
statement => "select * from boot_resource where id > :sql_last_value"
```

4. Paging

If an incremental batch is too large, logstash can still choke, especially on the first incremental run (which is effectively a full table scan). To avoid querying too much at once, enable paging:

```
# enable paging
jdbc_paging_enabled => true
# page size
jdbc_page_size => 50
```

5. Start logstash

```bash
logstash -f logstash.mysql.conf
```

V. Syncing from the binlog to ES with canal

Alibaba's open-source canal.

1. What is canal

canal parses MySQL's incremental binlog (note: incremental only; it does not do full loads) and provides subscription and consumption of the incremental data, syncing MySQL changes in real time to targets such as mysql, elasticsearch and hbase.

2. Common canal components

- canal-deployer (canal-server): watches the MySQL binlog by posing as a MySQL slave; it only receives data and does no processing.
- canal-adapter: the canal client; pulls data from canal-server and syncs it to targets such as mysql, elasticsearch and hbase.
- canal-admin: overall configuration management, node operations, and a web UI for fast, safe administration.

3. How canal works

- canal mimics the MySQL master/slave protocol: it poses as a MySQL slave and sends the dump protocol to the MySQL master (dump is used for backups).
- The MySQL master receives the dump request and pushes the binlog to canal.
- canal parses the binlog and syncs the data to other data stores.

4. MySQL replication primer

- Before each transaction that updates data completes, the MySQL master serially writes the operation to the binlog (a binary file).
- A MySQL slave opens an I/O thread that maintains an ordinary connection to the master, running the binlog dump process. If it has caught up with the master it sleeps and waits for new events. The I/O thread's goal is to write these events into the relay log.
- The SQL thread reads the relay log and replays its SQL events in order, keeping the slave consistent with the master.

5. Download canal 1.1.5 and install it.

6. MySQL configuration

Master configuration:

```ini
log-bin=mysql-bin      ## enable the binary log
binlog-format=row      ## binlog format (row\mixed\statement)
```

Check whether the binlog is enabled:

```sql
show variables like '%log_bin%';
```

Check the binlog format:

```sql
show variables like '%binlog_format%';
```

7. Create an account for subscribing to the binlog

```sql
CREATE USER canal IDENTIFIED BY '123456';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```

8. Create the table

```sql
DROP TABLE IF EXISTS `boot_city`;
CREATE TABLE `boot_city` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `parent_id` bigint(20) NOT NULL COMMENT 'parent id',
  `region` varchar(100) NOT NULL COMMENT 'region name',
  `region_id` int(11) NOT NULL COMMENT 'region id',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1324914279466823682 DEFAULT CHARSET=utf8mb4 ROW_FORMAT=DYNAMIC COMMENT='cities';
```

9. Configure canal-server

(1) conf/example/instance.properties

```properties
# address of the MySQL instance to sync from
canal.instance.master.address=192.168.1.1:3306
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=
# database account used for syncing
canal.instance.dbUsername=canal
# database password used for syncing
canal.instance.dbPassword=123456
# connection charset
canal.instance.connectionCharset = UTF-8
# regex filter for the tables whose binlog is subscribed
canal.instance.filter.regex=.*\\..*
```

(2) Start the service: just double-click bin/startup.bat.

10. Configure canal-adapter

(1) Edit conf/application.yml (the es and mysql settings):

```yaml
srcDataSources:
  defaultDS:
    url: jdbc:mysql://192.168.1.1:3306/kcloud?useUnicode=true
    username: root
    password: 123456
canalAdapters:
- instance: example # canal instance name or mq topic name
  groups:
  - groupId: g1
    outerAdapters:
    - name: logger
    - name: es7
      hosts: 192.168.1.1:9300,192.168.1.2:9300,192.168.1.3:9300 # 127.0.0.1:9200 for rest mode
      properties:
        mode: transport # or rest
        # security.auth: test:123456 # only used for rest mode
        cluster.name: laokou-elasticsearch
```

(2) Edit conf/es7/mytest_user.yml:

```yaml
dataSourceKey: defaultDS
destination: example
groupId: g1
esMapping:
  _index: laokou-city
  _id: id
  sql: "select a.id,a.parent_id,a.region,a.region_id from boot_city a"
  etlCondition: "where a.id >= {}"
  commitBatch: 3000
```

(3) Start the service: just double-click bin/startup.bat.

11. Start kibana and create the index

```
PUT /laokou-city
{
  "mappings": {
    "properties": {
      "id":        { "type": "long" },
      "parent_id": { "type": "long" },
      "region":    { "type": "keyword" },
      "region_id": { "type": "integer" }
    }
  }
}
```

12. Insert data into the database

```sql
INSERT INTO `boot_city` VALUES ('1324912501966925826', '227', '东安县', '2093');
INSERT INTO `boot_city` VALUES ('1324912504676446210', '227', '双牌县', '2094');
INSERT INTO `boot_city` VALUES ('1324912506161229825', '227', '道县', '2095');
INSERT INTO `boot_city` VALUES ('1324912507595681794', '227', '江永县', '2096');
INSERT INTO `boot_city` VALUES ('1324912509306957825', '227', '宁远县', '2097');
INSERT INTO `boot_city` VALUES ('1324912511995506689', '227', '蓝山县', '2098');
INSERT INTO `boot_city` VALUES ('1324912514667278338', '227', '新田县', '2099');
INSERT INTO `boot_city` VALUES ('1324912516038815746', '227', '江华瑶族自治县', '2100');
INSERT INTO `boot_city` VALUES ('1324912517494239234', '228', '鹤城区', '2101');
INSERT INTO `boot_city` VALUES ('1324912520161816578', '228', '中方县', '2102');
INSERT INTO `boot_city` VALUES ('1324912521667571714', '228', '沅陵县', '2103');
INSERT INTO `boot_city` VALUES ('1324912523072663553', '228', '辰溪县', '2104');
INSERT INTO `boot_city` VALUES ('1324912524578418690', '228', '溆浦县', '2105');
INSERT INTO `boot_city` VALUES ('1324912526096756737', '228', '会同县', '2106');
INSERT INTO `boot_city` VALUES ('1324912527514431490', '228', '麻阳苗族自治县', '2107');
INSERT INTO `boot_city` VALUES ('1324912529036963841', '228', '新晃侗族自治县', '2108');
INSERT INTO `boot_city` VALUES ('1324912530588856321', '228', '芷江侗族自治县', '2109');
INSERT INTO `boot_city` VALUES ('1324912531956199426', '228', '靖州苗族侗族自治县', '2110');
INSERT INTO `boot_city` VALUES ('1324912533394845697', '228', '通道侗族自治县', '2111');
```

canal-server watches the binlog for these changes and pushes them through.

13. Result screenshots (omitted).

14. Troubleshooting

Note: if canal-server is receiving data but nothing is written to ES, replace client-adapter.es7x-1.1.5-jar-with-dependencies.jar in canal-adapter's plugin directory.
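To verify the canal pipeline end to end, the synced documents should be retrievable from the laokou-city index by their MySQL primary key (the es7 mapping sets `_id: id`). A minimal sketch using the same high-level REST client as elsewhere in this series; the host address is illustrative:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class CanalSyncCheck {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("192.168.1.1", 9200, "http")))) {
            // fetch one of the rows inserted above by its primary key
            GetRequest request = new GetRequest("laokou-city", "1324912501966925826");
            GetResponse response = client.get(request, RequestOptions.DEFAULT);
            System.out.println(response.isExists() ? response.getSourceAsString()
                                                   : "document not synced yet");
        }
    }
}
```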
I. Prerequisites

- JDK version: 1.8
- master node: 192.168.1.1
- slave node 1: 192.168.1.2
- slave node 2: 192.168.1.3

II. What is mycat

mycat is database middleware: in plain terms, a server that implements the MySQL protocol, i.e. a MySQL database proxy.

III. Installation

1. Unpack the archive (to d:/mycat).

2. Configure server.xml (skip if you keep the default database name; note that the `schemas` value here must match the `<schema name>` in schema.xml, which the examples below call TESTDB):

```xml
<user name="root">
    <property name="password">123456</property>
    <property name="schemas">KCLOUDDB</property>
    <!-- table-level DML permissions -->
    <!--
    <privileges check="false">
        <schema name="TESTDB" dml="0110" >
            <table name="tb01" dml="0000"></table>
            <table name="tb02" dml="1111"></table>
        </schema>
    </privileges>
    -->
</user>
<user name="user">
    <property name="password">user</property>
    <property name="schemas">KCLOUDDB</property>
    <property name="readOnly">true</property>
</user>
```

3. Configure schema.xml

Sharding rule sharding-by-murmur: a consistent-hashing algorithm that effectively solves the re-sharding problem when the distributed dataset grows.

```xml
<schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
    <!-- auto sharding by id (long) -->
    <table name="boot_chat_message" dataNode="dn1,dn2,dn3" rule="sharding-by-murmur" />
</schema>
<!-- <dataNode name="dn1$0-743" dataHost="localhost1" database="db$0-743" /> -->
<dataNode name="dn1" dataHost="localhost1" database="db1" />
<dataNode name="dn2" dataHost="localhost1" database="db2" />
<dataNode name="dn3" dataHost="localhost1" database="db3" />
<!--<dataNode name="dn4" dataHost="sequoiadb1" database="SAMPLE" />
<dataNode name="jdbc_dn1" dataHost="jdbchost" database="db1" />
<dataNode name="jdbc_dn2" dataHost="jdbchost" database="db2" />
<dataNode name="jdbc_dn3" dataHost="jdbchost" database="db3" /> -->
<dataHost name="localhost1" maxCon="1000" minCon="10" balance="0"
          writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <!-- can have multi write hosts -->
    <writeHost host="hostM1" url="192.168.1.1:3306" user="root" password="123456">
        <!-- can have multi read hosts -->
        <readHost host="hostS1" url="192.168.1.2:3306" user="root" password="123456" />
        <readHost host="hostS2" url="192.168.1.3:3306" user="root" password="123456" />
    </writeHost>
    <!--<writeHost host="hostS1" url="localhost:3316" user="root"-->
    <!-- password="123456" />-->
    <!-- <writeHost host="hostM2" url="localhost:3316" user="root" password="123456"/> -->
</dataHost>
```

4. Configure rule.xml (search the file for 'murmur'):

```xml
<function name="murmur" class="io.mycat.route.function.PartitionByMurmurHash">
    <property name="seed">0</property><!-- default is 0 -->
    <property name="count">3</property><!-- number of database nodes to shard across; mandatory, otherwise sharding cannot work -->
    <property name="virtualBucketTimes">160</property><!-- each physical node is mapped to this many virtual nodes; default 160, i.e. the virtual node count is 160x the physical node count -->
    <property name="weightMapFile">weightMapFile</property>
    <!-- node weights; nodes without a weight default to 1. Written in properties-file format: the key is the node index (an integer from 0 to count-1), the value the node's weight. All weights must be positive integers, otherwise 1 is used -->
    <property name="bucketMapPath">/etc/mycat/bucketMapPath</property>
    <!-- used during testing to observe how virtual nodes map onto physical nodes; if set, the murmur hash of each virtual node and its physical-node mapping is written line by line to this file. There is no default; if unset, nothing is written -->
</function>
```

5. Start mycat: double-click startup_nowrap.bat in the bin folder.

6. Create the databases (run on the master node):

```sql
CREATE DATABASE `db1` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE `db2` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE `db3` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
```

7. Create the table (run on the master node, in db1, db2 and db3):

```sql
DROP TABLE IF EXISTS `boot_resource`;
CREATE TABLE `boot_resource` (
  `id` varchar(64) NOT NULL COMMENT 'id',
  `title` varchar(200) NOT NULL COMMENT 'name',
  `author` varchar(100) NOT NULL DEFAULT 'admin' COMMENT 'author',
  `uri` varchar(500) NOT NULL COMMENT 'address',
  `status` tinyint(1) NOT NULL DEFAULT '0' COMMENT 'review status (0 pending review, 10 pending sign-off, 20 pending handling, 30 admin review, 40 super-admin review, 50 approved)',
  `code` varchar(10) NOT NULL COMMENT 'type: audio, video, image, text, other',
  `status_desc` varchar(200) NOT NULL DEFAULT '待审核' COMMENT 'status description',
  `code_desc` varchar(200) NOT NULL COMMENT 'type description',
  `create_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'created at',
  `update_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'updated at',
  `remark` longtext COMMENT 'remarks',
  `tags` longtext COMMENT 'tags',
  PRIMARY KEY (`id`,`create_date`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 ROW_FORMAT=DYNAMIC
PARTITION BY RANGE ( UNIX_TIMESTAMP(`create_date`)) (
  PARTITION lk202110 VALUES LESS THAN (1633017600) ENGINE = InnoDB,
  PARTITION lk202111 VALUES LESS THAN (1635696000) ENGINE = InnoDB,
  PARTITION lk202112 VALUES LESS THAN (1638288000) ENGINE = InnoDB,
  PARTITION lk202203 VALUES LESS THAN (1646064000) ENGINE = InnoDB,
  PARTITION lk202204 VALUES LESS THAN (1648742400) ENGINE = InnoDB
);

INSERT INTO `boot_resource`(id,title,author,uri,`status`,`code`,status_desc,code_desc,create_date,update_date,remark,tags) VALUES ('1429355654328815617', '白月光与朱砂痣.mp3', 'admin', 'https://1.com/upload/node4/f906b6a282564c559632a1beeb449f5f.mp3', '50', 'audio', '审核通过', '音频', '2021-10-21 13:05:09', '2021-12-03 19:28:16', null, null);
INSERT INTO `boot_resource`(id,title,author,uri,`status`,`code`,status_desc,code_desc,create_date,update_date,remark,tags) VALUES ('1429355954762616834', '出山.mp3', 'admin', 'https://1.com/upload/node1/ebd577c32a8d448c8349af779d36110a.mp3', '50', 'audio', '审核通过', '音频', '2021-10-21 13:05:09', '2021-12-03 19:28:16', null, null);
INSERT INTO `boot_resource`(id,title,author,uri,`status`,`code`,status_desc,code_desc,create_date,update_date,remark,tags) VALUES ('1429355987293638657', '错位时空.mp3', 'admin', 'https://1.com/upload/node2/a673b6697e4142e5b24e5347b2b32fe8.mp3', '50', 'audio', '审核通过', '音频', '2021-10-21 13:05:09', '2021-12-03 19:28:16', null, null);
INSERT INTO `boot_resource`(id,title,author,uri,`status`,`code`,status_desc,code_desc,create_date,update_date,remark,tags) VALUES ('1429356071594954753', '稻香.mp3', 'admin', 'https://1.com/upload/node4/5874dacd9b9a499891cfce031f10d2c4.mp3', '50', 'audio', '审核通过', '音频', '2021-10-21 13:05:09', '2021-12-03 19:28:16', null, null);
INSERT INTO `boot_resource`(id,title,author,uri,`status`,`code`,status_desc,code_desc,create_date,update_date,remark,tags) VALUES ('1441610450502848513', '银河与星斗.mp3', 'admin', 'https://1.com/upload/node5/f96ff9b14ce94f8e8746ef8738614fcd.mp3', '50', 'audio', '审核通过', '音频', '2021-10-21 13:05:09', '2021-12-03 19:28:17', null, null);
```

8. Connect with navicat: mycat opens port 8066 by default; the account and password are the ones configured in server.xml.

9. Query the data.

IV. Spring Boot integration

Only application.yml needs to change:

```yaml
spring:
  datasource:
    druid:
      driver-class-name: com.mysql.jdbc.Driver
      url: jdbc:mysql://127.0.0.1:8066/TESTDB
      username: root
      password: 123456
```
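Beyond navicat, a quick programmatic smoke test against mycat's 8066 port can confirm that queries routed through the logical TESTDB schema are fanned out to dn1/dn2/dn3. A minimal sketch (not from the original post; credentials are the ones configured in server.xml):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MycatSmokeTest {
    public static void main(String[] args) throws Exception {
        // connect to mycat, not to mysql directly
        String url = "jdbc:mysql://127.0.0.1:8066/TESTDB?useUnicode=true&characterEncoding=utf8";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("select count(*) from boot_resource")) {
            if (rs.next()) {
                // mycat aggregates the counts across the underlying nodes
                System.out.println("rows visible through mycat: " + rs.getLong(1));
            }
        }
    }
}
```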
I. Prerequisites

1. an es 7.6.2 cluster
2. kibana 7.6.2 and logstash 7.6.2 installed

II. Integration

Note: this sets up ELK analysis of the elasticsearch search service itself.

1. Configure logstash.conf

```
input{
  tcp {
    port => 5044           # exposed port
    codec => json_lines    # encoding: JSON strings, one per line
  }
}
output{
  elasticsearch{           # es settings
    hosts => ["http://192.168.1.1:9200","http://192.168.1.2:9200","http://192.168.1.3:9200"]  # cluster addresses
    index => "laokou-%{+YYYY.MM.dd}"   # index name
  }
  stdout{ codec => rubydebug }         # rubydebug: pretty-printed debug output
}
```

Note: rubydebug is a pretty-printed debug format (shown as a screenshot in the original post, omitted here).

2. Dependencies

```xml
<!-- logback: push log events to logstash -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.1</version>
</dependency>
```

3. Configure logback-spring.xml

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <logger name="org.springframework.web" level="INFO"/>
    <logger name="org.springboot.sample" level="TRACE" />

    <!-- dev and test environments -->
    <springProfile name="dev,test">
        <logger name="org.springframework.web" level="INFO"/>
        <logger name="org.springboot.sample" level="INFO" />
        <logger name="io.laokou.elasticsearch" level="DEBUG" />
    </springProfile>

    <!-- production environment -->
    <springProfile name="prod">
        <logger name="org.springframework.web" level="ERROR"/>
        <logger name="org.springboot.sample" level="ERROR" />
        <logger name="io.laokou.elasticsearch" level="ERROR" />
    </springProfile>

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <!-- logstash ip and exposed port; logback sends the log events to this address -->
        <destination>127.0.0.1:5044</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
```

4. Start the es cluster, kibana and logstash.
5. Start the project.
6. Check that the laokou-yyyy.MM.dd index was created.
7. Run ES searches (omitted; search with whatever business conditions apply).
8. Open http://localhost:5601 in a browser.
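With LogstashTcpSocketAppender plus LogstashEncoder configured as above, every log event is shipped as one JSON line to logstash on port 5044. Fields can be added to that JSON with the encoder library's StructuredArguments, which makes filtering in Kibana much easier. A small illustrative sketch (class and field names are made up, not from the original post):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static net.logstash.logback.argument.StructuredArguments.kv;

public class SearchAuditLogger {

    private static final Logger log = LoggerFactory.getLogger(SearchAuditLogger.class);

    public void logSearch(String keyword, long tookMs) {
        // kv(...) adds typed fields to the JSON document that reaches Elasticsearch
        log.info("es search executed", kv("keyword", keyword), kv("tookMs", tookMs));
    }
}
```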
I. Prerequisites

OS: CentOS 7

II. Installation

1. Create the /opt folder:

```bash
mkdir /opt
```

2. Upload the tarball into /opt with xftp or finalshell.

3. Unpack:

```bash
tar -zxvf mysql-5.7.9-linux-glibc2.5-x86_64.tar.gz
```

4. Move it to /usr/local/mysql:

```bash
mv mysql-5.7.9-linux-glibc2.5-x86_64 /usr/local/mysql
```

5. Create the mysql group and user:

```bash
groupadd mysql
useradd -r -g mysql mysql
```

6. Create the MySQL data directory and grant ownership:

```bash
mkdir -p /data/mysql                 # create the directory
chown mysql:mysql -R /data/mysql     # grant ownership
```

7. Configure my.cnf (finalshell can edit the file directly):

```bash
vi /etc/my.cnf
```

```ini
[mysqld]
bind-address=0.0.0.0
port=3306
user=mysql
basedir=/usr/local/mysql
datadir=/data/mysql
socket=/tmp/mysql.sock
log-error=/data/mysql/mysql.err
pid-file=/data/mysql/mysql.pid
character_set_server=utf8mb4
symbolic-links=0
explicit_defaults_for_timestamp=true
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
lower_case_table_names=1
```

8. Initialize the database:

```bash
cd /usr/local/mysql
```

Note: initialization may fail if libaio is missing, so install it first.

Error message: `Installing MySQL system tables..../bin/mysqld: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory`

Fix:

```bash
yum install -y libaio
```

```bash
./bin/mysqld --defaults-file=/etc/my.cnf --basedir=/usr/local/mysql/ --datadir=/data/mysql/ --user=mysql --initialize
```

9. Read the generated root password:

```bash
cat /data/mysql/mysql.err
```

10. Copy mysql.server to /etc/init.d/mysql:

```bash
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
```

11. Start the service:

```bash
service mysql start     # start mysql
ps -ef|grep mysql       # check the mysql process
```

12. Change the password (enter the random one generated earlier):

```bash
./mysql -u root -p      # run this in /usr/local/mysql/bin
```

```sql
SET PASSWORD = PASSWORD('123456');  -- 123456 for convenience here; pick your own password
ALTER USER 'root'@'localhost' PASSWORD EXPIRE NEVER;
FLUSH PRIVILEGES;
```

13. Allow remote connections:

```sql
use mysql;                                        -- switch to the mysql database
update user set host = '%' where user = 'root';   -- let root connect from any host
flush privileges;                                 -- reload privileges
```

14. Put the mysql client on the PATH (skip if you don't need it):

```bash
ln -s /usr/local/mysql/bin/mysql /usr/bin
```

Done.
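After step 13 the server should accept remote root logins. A tiny sketch (not from the original post) for checking that from another machine; replace the host with the server's address, and have the MySQL JDBC driver on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class RemoteLoginCheck {
    public static void main(String[] args) throws Exception {
        // host/credentials are illustrative: use your server's address and password
        String url = "jdbc:mysql://192.168.1.1:3306/mysql?useSSL=false&characterEncoding=utf8";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```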
I. Prepare the machines (all three need the environment set up in advance; VMs can be cloned)

- master: 192.168.1.1
- slave 1: 192.168.1.2
- slave 2: 192.168.1.3

II. Master configuration

1. Append to /etc/my.cnf:

```ini
# server-id must differ on every mysql instance
server-id=1
log-bin=mysql-bin
```

2. Restart the service.

3. Open a mysql shell:

```bash
mysql -uroot -p
```

4. Create the replication account (user slave, password 123456):

```sql
CREATE USER 'slave'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
flush privileges;
```

5. Check the account:

```sql
use mysql;
select user,host from user;
```

6. Check the master status:

```sql
SHOW master status\G;
```

III. Slave 1 configuration

1. Append to /etc/my.cnf:

```ini
# set server_id; must be unique
server-id=2
# enable the binary log (the MASTER must have this enabled)
log-bin=slave1-mysql-bin
```

2. Restart the service.

3. Open a mysql shell:

```bash
mysql -uroot -p
```

4. Point the slave at the master. master_log_file and master_log_pos come from the File and Position columns of the master's SHOW MASTER STATUS output:

```sql
change master to master_host='192.168.1.1',master_user='slave',master_password='123456',master_log_file='mysql-bin.000004',master_log_pos=462;
```

5. Start the slave:

```sql
start slave;
```

6. Check the slave status:

```sql
show slave status\G;
```

Troubleshooting:

- Slave_IO_Running: No
- Slave_SQL_Running: Yes
- Last_IO_Error: Fatal error: The slave I/O thread stops because master and slave have equal MySQL server UUIDs; these UUIDs must be different for replication to work

Fix: edit /mysql/data/auto.cnf so each node has a unique server-uuid.

Master:

```ini
[auto]
server-uuid=6ac0fdae-b5d7-11e4-a9f3-0800278ce5c9
```

Slave 1:

```ini
[auto]
server-uuid=c9f4b0b7-6765-11ec-adb9-000c2949536c
```

Slave 2:

```ini
[auto]
server-uuid=c9f4b0b7-6765-11ec-adb9-000c294953ac
```

Note: restart MySQL afterwards and start the slave again.

7. Reconfigure the slave (skip if you don't need to):

(1) Stop the slave:

```sql
stop slave;
```

(2) Reconfigure:

```sql
change master to master_host='192.168.1.1',master_user='slave',master_password='123456',master_log_file='mysql-bin.000004',master_log_pos=462;
```

(3) Start the slave:

```sql
start slave;
```

IV. Slave 2 configuration

1. Append to /etc/my.cnf:

```ini
# set server_id; must be unique
server-id=3
# enable the binary log (the MASTER must have this enabled)
log-bin=slave2-mysql-bin
```

2. Restart the service.

3. Open a mysql shell:

```bash
mysql -uroot -p
```

4. Point the slave at the master; master_log_file and master_log_pos again come from the File and Position columns on the master:

```sql
change master to master_host='192.168.1.1',master_user='slave',master_password='123456',master_log_file='mysql-bin.000004',master_log_pos=462;
```

5. Start the slave:

```sql
start slave;
```

6. Check the slave status:

```sql
show slave status\G;
```

Done.
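A rough end-to-end check of the setup (a sketch, not from the original post): write a row on the master, wait briefly for the binlog to be shipped and replayed, then read it back on a slave. The `test` database and `repl_probe` table are made up for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // write on the master
        try (Connection master = DriverManager.getConnection(
                "jdbc:mysql://192.168.1.1:3306/test", "root", "123456");
             Statement write = master.createStatement()) {
            write.executeUpdate("create table if not exists repl_probe(id int primary key)");
            write.executeUpdate("replace into repl_probe values (1)");
        }
        Thread.sleep(1000); // give the slave's I/O and SQL threads a moment
        // read on a slave
        try (Connection slave = DriverManager.getConnection(
                "jdbc:mysql://192.168.1.2:3306/test", "root", "123456");
             Statement read = slave.createStatement();
             ResultSet rs = read.executeQuery("select count(*) from repl_probe")) {
            rs.next();
            System.out.println(rs.getLong(1) == 1 ? "replicated" : "not replicated yet");
        }
    }
}
```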
IV. filter queries

Find the 生日快乐 ("happy birthday") messages posted in the 老寇云交流群 group chat, in two ways:

```
GET /message/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "data": "生日快乐" } },
        { "match": { "remark": "老寇云交流群" } }
      ]
    }
  }
}
```

```
GET /message/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "data": "生日快乐" } }
      ],
      "filter": [
        { "term": { "remark": "老寇云交流群" } }
      ]
    }
  }
}
```

query versus filter

- filter: only filters out the documents matching the condition; it computes no relevance score, so it has no effect on relevance, and the most frequently used filters are cached automatically.
- query: computes each document's relevance to the search condition and sorts by it; because relevance must be computed, the results cannot be cached.
- When to use which: if you are narrowing data down by conditions and don't care about ranking, prefer filter; otherwise use query.

V. Custom ranking

constant_score example:

```
GET /message/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "term": { "data": "生日快乐" }
      },
      "boost": 1.2
    }
  },
  "sort": [
    { "createDate": { "order": "desc" } }
  ]
}
```

VI. Implementation

1. Dependencies

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.6.2</version>
    <exclusions>
        <exclusion>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.6.2</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.6.2</version>
</dependency>
```

2. Entity classes

```java
@Data
@ApiModel(description = "query form entity")
public class QueryForm implements Serializable {

    /**
     * Page number
     */
    private Integer pageNum = 1;

    /**
     * Page size
     */
    private Integer pageSize = 10;

    /**
     * Whether to paginate
     */
    private boolean needPage = false;

    /**
     * Names of the indices to query
     */
    private String[] indexNames;

    /**
     * Tokenized search conditions
     */
    private List<SearchDTO> queryStringList;

    /**
     * Sort conditions
     */
    private List<SearchDTO> sortFieldList;

    /**
     * Fields to highlight
     */
    private List<String> highlightFieldList;

    /**
     * OR conditions (exact match)
     */
    private List<SearchDTO> orSearchList;
}

/**
 * Search DTO
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2022/3/15 09:45
 */
@Data
public class SearchDTO implements Serializable {

    private String field;

    private String value;
}
```

3. Tokenized search

```java
@Slf4j
@Component
public class ElasticsearchUtil {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    private static final String HIGHLIGHT_PRE_TAGS = "<span class='highlight'>";
    private static final String HIGHLIGHT_POST_TAGS = "</span>";
    private static final String PINYIN_SUFFIX = ".pinyin";

    /**
     * Keyword search with highlighting
     * @param queryForm the query form
     * @return
     * @throws IOException
     */
    public HttpResultUtil<SearchVO> search(QueryForm queryForm) throws IOException {
        long startTime = System.currentTimeMillis();
        final String[] indexName = queryForm.getIndexNames();
        final List<SearchDTO> orSearchList = queryForm.getOrSearchList();
        final List<SearchDTO> sortFieldList = queryForm.getSortFieldList();
        final List<String> highlightFieldList = queryForm.getHighlightFieldList();
        final List<SearchDTO> queryStringList = queryForm.getQueryStringList();
        final Integer pageNum = queryForm.getPageNum();
        final Integer pageSize = queryForm.getPageSize();
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // SearchRequest drives document search, aggregations and custom queries
        SearchRequest searchRequest = new SearchRequest();
        searchRequest.indices(indexName);
        // OR query
        BoolQueryBuilder orQuery = QueryBuilders.boolQuery();
        for (SearchDTO dto : orSearchList) {
            orQuery.should(QueryBuilders.termQuery(dto.getField(), dto.getValue()));
        }
        boolQueryBuilder.must(orQuery);
        // tokenized query
        BoolQueryBuilder analysisQuery = QueryBuilders.boolQuery();
        for (SearchDTO dto : queryStringList) {
            final String field = dto.getField();
            // trim whitespace
            String keyword = dto.getValue().trim();
            // escape special characters
            keyword = QueryParser.escape(keyword);
            analysisQuery.should(QueryBuilders.queryStringQuery(keyword).field(field).field(field.concat(PINYIN_SUFFIX)));
        }
        boolQueryBuilder.must(analysisQuery);
        // highlighting
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        // markup wrapped around matched keywords
        highlightBuilder.preTags(HIGHLIGHT_PRE_TAGS);
        highlightBuilder.postTags(HIGHLIGHT_POST_TAGS);
        // fields to highlight
        for (String field : highlightFieldList) {
            highlightBuilder.field(field, 0, 0).field(field.concat(PINYIN_SUFFIX), 0, 0);
        }
        highlightBuilder.requireFieldMatch(false);
        // paging
        int start = 0;
        int end = 10000;
        if (queryForm.isNeedPage()) {
            start = (pageNum - 1) * pageSize;
            end = pageSize;
        }
        // apply highlighting
        searchSourceBuilder.highlighter(highlightBuilder);
        searchSourceBuilder.from(start);
        searchSourceBuilder.size(end);
        // track scores
        searchSourceBuilder.trackScores(true);
        // include scoring explanation
        searchSourceBuilder.explain(true);
        // sorting
        for (SearchDTO dto : sortFieldList) {
            SortOrder sortOrder;
            final String desc = "desc";
            final String value = dto.getValue();
            final String field = dto.getField();
            if (desc.equalsIgnoreCase(value)) {
                sortOrder = SortOrder.DESC;
            } else {
                sortOrder = SortOrder.ASC;
            }
            searchSourceBuilder.sort(field, sortOrder);
        }
        searchSourceBuilder.query(boolQueryBuilder);
        searchRequest.source(searchSourceBuilder);
        SearchHits hits = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT).getHits();
        List<Map<String, Object>> data = new ArrayList<>();
        for (SearchHit hit : hits) {
            Map<String, Object> sourceData = hit.getSourceAsMap();
            Map<String, HighlightField> highlightFields = hit.getHighlightFields();
            for (String key : highlightFields.keySet()) {
                sourceData.put(key, highlightFields.get(key).getFragments()[0].string());
            }
            data.add(sourceData);
        }
        long endTime = System.currentTimeMillis();
        HttpResultUtil<SearchVO> result = new HttpResultUtil();
        SearchVO vo = new SearchVO();
        final String searchData = queryStringList.stream().map(i -> i.getValue()).collect(Collectors.joining(","));
        final List<String> searchFieldList = queryStringList.stream().map(i -> i.getField()).collect(Collectors.toList());
        vo.setRecords(handlerData(data, searchFieldList));
        vo.setTotal(hits.getTotalHits().value);
        vo.setPageNum(queryForm.getPageNum());
        vo.setPageSize(queryForm.getPageSize());
        result.setMsg("search <span class='highlight'>" + searchData + "</span> matched " + vo.getTotal() + " results in " + (endTime - startTime) + "ms");
        // attach the processed data
        result.setData(vo);
        return result;
    }

    /**
     * Post-process the highlighted data
     * @param data the ES result set
     */
    private List<Map<String, Object>> handlerData(List<Map<String, Object>> data, List<String> fieldList) {
        log.info("search results: {}", data);
        if (CollectionUtils.isEmpty(data)) {
            return Lists.newArrayList();
        }
        if (CollectionUtils.isEmpty(fieldList)) {
            return data;
        }
        for (Map<String, Object> map : data) {
            for (String field : fieldList) {
                if (map.containsKey(field.concat(PINYIN_SUFFIX))) {
                    String result1 = map.get(field).toString();
                    String result2 = map.get(field.concat(PINYIN_SUFFIX)).toString();
                    // merge the highlight markers from the plain and pinyin fields
                    for (;;) {
                        int start = result1.indexOf(HIGHLIGHT_PRE_TAGS);
                        int end = result1.indexOf(HIGHLIGHT_POST_TAGS);
                        if (start == -1 || end == -1) {
                            break;
                        }
                        String replaceKeyword = result1.substring(start, end).replace(HIGHLIGHT_PRE_TAGS, "");
                        result2 = result2.replaceAll(replaceKeyword, HIGHLIGHT_PRE_TAGS + replaceKeyword + HIGHLIGHT_POST_TAGS);
                        result1 = result1.substring(end + 1);
                    }
                    map.put(field, result2);
                    map.remove(field.concat(PINYIN_SUFFIX));
                }
            }
        }
        return data;
    }
}
```

4. API test

```java
@RestController
@RequestMapping("/api")
@Api(tags = "Elasticsearch API service")
public class ElasticsearchController {

    @Autowired
    private ElasticsearchUtil elasticsearchUtil;

    @PostMapping("/search")
    @ApiOperation("ES keyword search with highlighting")
    @ResponseBody
    public HttpResultUtil<SearchVO> search(@RequestBody @Validated final QueryForm queryForm, BindingResult bindingResult) throws IOException {
        if (bindingResult.hasErrors()) {
            return new HttpResultUtil<SearchVO>().error(bindingResult.getFieldError().getDefaultMessage());
        }
        return elasticsearchUtil.search(queryForm);
    }
}
```

Done.
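An illustrative way of calling the utility above (not from the original post): build a QueryForm for a paged, pinyin-highlighted search for 生日快乐 in the message index, newest first. Field names mirror the DSL examples earlier in this series.

```java
import java.util.Arrays;
import java.util.Collections;

public class SearchExample {

    public static QueryForm buildForm() {
        QueryForm form = new QueryForm();
        form.setIndexNames(new String[]{"message"});
        form.setNeedPage(true);
        form.setPageNum(1);
        form.setPageSize(10);

        // tokenized search on "data" (the pinyin subfield is added by the utility)
        SearchDTO keyword = new SearchDTO();
        keyword.setField("data");
        keyword.setValue("生日快乐");
        form.setQueryStringList(Collections.singletonList(keyword));

        // newest first
        SearchDTO sort = new SearchDTO();
        sort.setField("createDate");
        sort.setValue("desc");
        form.setSortFieldList(Collections.singletonList(sort));

        form.setHighlightFieldList(Arrays.asList("data"));
        form.setOrSearchList(Collections.emptyList());
        return form;
    }
}
```

The form can then be POSTed to `/api/search` or passed straight to `elasticsearchUtil.search(form)`.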
I. Search basics

1. Unconditional search

Command: GET /index/_search

```
GET /message/_search
```

2. Query-string search

Command: GET /index/_search?q=field:value

```
GET /message/_search?q=id:1424966164936024065
```

Side note: the difference between + and - (examples below):

```
GET /message/_search?q=+id:1424966164936024065   # documents where id = 1424966164936024065
GET /message/_search?q=-id:1424966164936024065   # documents where id != 1424966164936024065
```

3. Paged search

Command: GET /index/_search?size=x&from=x

```
GET /message/_search?size=10&from=0
```

Note: similar to SQL: select * from message limit 0,10

Side note: how does paging too deep hurt performance?

1. Network bandwidth: the deeper the search, the more data each shard has to ship to the coordinating node, and that transfer consumes a lot of network bandwidth.
2. Memory: each shard ships its data to the coordinating node, which holds all of it in memory, consuming a lot of memory.
3. CPU: the coordinating node has to sort everything shipped back to it, and that sort is CPU-intensive.

For these reasons, deep paging should be used as little as possible.

II. DSL basics

The DSL is ES's own search language (somewhat like SQL): search conditions travel in the request body, and it is very powerful.

1. Match everything

Example:

```
GET /message/_search
{
  "query": {
    "match_all": {}
  }
}
```

Note: at this point you may ask why an ES GET request can carry a request body. ES handles this case specially; if you're curious how, look it up.

2. Sorting

Example:

```
GET /message/_search
{
  "query": {
    "match": { "desc": "群聊" }
  },
  "sort": [
    { "createDate": { "order": "desc" } }
  ]
}
```

3. Paged query

Example:

```
GET /message/_search
{
  "query": {
    "match_all": {}
  },
  "from": 0,
  "size": 10
}
```

4. Returning selected fields

Example:

```
GET /message/_search
{
  "query": {
    "match_all": {}
  },
  "_source": ["username","data"]
}
```

III. Query DSL syntax

1. Shape of a DSL query

```
{
  query_name: {
    argument: value
    ......
  }
}
```

or

```
{
  query_name: {
    field_name: {
      argument: value
      ......
    }
  }
}
```

Example:

```
GET /message/_search
{
  "query": {
    "match": {
      "desc": "群聊"
    }
  }
}
```

2. Combining conditions

Examples:

```
GET /message/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "username": "admin" } }
      ],
      "should": [
        { "match": { "desc": "群聊" } }
      ],
      "must_not": [
        { "match": { "desc": "私聊" } }
      ]
    }
  }
}
```

```
GET /message/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "sendId": "1363109342432645122" } }
      ],
      "should": [
        { "match": { "username": "admin" } },
        {
          "bool": {
            "must": [
              { "match": { "data": "无名,天地之始,有名,万物之母。" } }
            ]
          }
        }
      ]
    }
  }
}
```

3. DSL reference

match_all, example:

```
GET /message/_search
{
  "query": {
    "match_all": {}
  }
}
```

multi_match (match across several fields), example:

```
GET /message/_search
{
  "query": {
    "multi_match": {
      "query": "生日快乐",
      "fields": ["data","data.pinyin"]
    }
  }
}
```

range query, example:

```
GET /message/_search
{
  "query": {
    "range": {
      "id": {
        "gte": 1359036315055083522,
        "lte": 1359036315055083522
      }
    }
  }
}
```

term query, example:

```
GET /message/_search
{
  "query": {
    "term": {
      "username": { "value": "admin" }
    }
  }
}
```

terms query, example:

```
GET /message/_search
{
  "query": {
    "terms": {
      "data": [ "年年", "岁岁" ]
    }
  }
}
```

exists query (documents that have a value for a given field), example:

```
GET /message/_search
{
  "query": {
    "exists": { "field": "remark" }
  }
}
```

fuzzy query (returns documents containing terms similar to the search term), example: 生日1 matches 生日

```
GET /message/_search
{
  "query": {
    "fuzzy": {
      "data": { "value": "生日1" }
    }
  }
}
```

ids query, example:

```
GET /message/_search
{
  "query": {
    "ids": {
      "values": ["1426744462376591362","1426752233562071042"]
    }
  }
}
```

prefix query, example:

```
GET /message/_search
{
  "query": {
    "prefix": {
      "data": { "value": "生日快乐" }
    }
  }
}
```

regexp query, example:

```
GET /message/_search
{
  "query": {
    "regexp": { "data": "生日*" }
  }
}
```

Tokenized search, example:

```
GET /message/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "query_string": {
            "default_field": "data",
            "query": "shengri"
          }
        },
        {
          "query_string": {
            "default_field": "data.pinyin",
            "query": "shengri"
          }
        }
      ]
    }
  }
}
```
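The same bool query can be issued from Java with the high-level REST client used elsewhere in this series. A minimal sketch mirroring the must/should/must_not example above (the host address is illustrative):

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class BoolQueryDemo {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")))) {
            // must / should / must_not mirror the DSL keywords one to one
            SearchSourceBuilder source = new SearchSourceBuilder().query(
                    QueryBuilders.boolQuery()
                            .must(QueryBuilders.matchQuery("username", "admin"))
                            .should(QueryBuilders.matchQuery("desc", "群聊"))
                            .mustNot(QueryBuilders.matchQuery("desc", "私聊")));
            SearchResponse response = client.search(
                    new SearchRequest("message").source(source), RequestOptions.DEFAULT);
            System.out.println("hits: " + response.getHits().getTotalHits().value);
        }
    }
}
```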
V. An ES utility class (if any of the index settings used below are unfamiliar, see "elasticsearch 7.6.2 - 索引管理")

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.client.indices.GetIndexRequest;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.BulkByScrollResponse;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import java.io.IOException;
import java.util.List;
import java.util.Map;

/**
 * Elasticsearch utility class for operating ES.
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2021/1/24 17:42
 */
@Slf4j
@Component
public class ElasticsearchUtil {

    private static final String PRIMARY_KEY_NAME = "id";

    @Value("${elasticsearch.synonym.path}")
    private String synonymPath;

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    /**
     * Bulk-syncs data into ES (drops and recreates the index first).
     * @param indexName index name
     * @param indexAlias index alias
     * @param jsonDataList JSON array of documents
     * @param clazz entity class carrying the @FieldInfo mapping
     */
    public boolean saveDataBatch(String indexName, String indexAlias, String jsonDataList, Class clazz) throws IOException {
        if (StringUtils.isBlank(jsonDataList)) {
            return false;
        }
        // 1/2. rebuild the index
        if (!syncIndex(indexName, indexAlias, clazz)) {
            return false;
        }
        // 3. build the bulk request
        BulkRequest bulkRequest = packBulkIndexRequest(indexName, jsonDataList);
        if (bulkRequest.requests().isEmpty()) {
            return false;
        }
        BulkResponse bulk = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
        if (bulk.hasFailures()) {
            for (BulkItemResponse item : bulk.getItems()) {
                log.error("index [{}], id [{}] update failed, status [{}], reason: {}", indexName, item.getId(), item.status(), item.getFailureMessage());
            }
            return false;
        }
        // count creates vs updates
        int createdCount = 0;
        int updatedCount = 0;
        for (BulkItemResponse item : bulk.getItems()) {
            if (DocWriteResponse.Result.CREATED.equals(item.getResponse().getResult())) {
                createdCount++;
            } else if (DocWriteResponse.Result.UPDATED.equals(item.getResponse().getResult())) {
                updatedCount++;
            }
        }
        log.info("index [{}] bulk sync done: {} created, {} updated", indexName, createdCount, updatedCount);
        return true;
    }

    /**
     * Updates one document (upserts if it does not exist yet).
     * @param indexName index name
     * @param id primary key
     * @param paramJson document JSON
     */
    public boolean updateData(String indexName, String id, String paramJson) {
        UpdateRequest updateRequest = new UpdateRequest(indexName, id);
        // insert if the document is absent
        updateRequest.docAsUpsert(true);
        // refresh immediately
        updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        updateRequest.doc(paramJson, XContentType.JSON);
        try {
            UpdateResponse updateResponse = restHighLevelClient.update(updateRequest, RequestOptions.DEFAULT);
            DocWriteResponse.Result result = updateResponse.getResult();
            log.info("index [{}], id [{}] result: [{}]", indexName, id, result);
            // CREATED = inserted, UPDATED = modified, NOOP = no change; all count as success
            return DocWriteResponse.Result.CREATED.equals(result)
                    || DocWriteResponse.Result.UPDATED.equals(result)
                    || DocWriteResponse.Result.NOOP.equals(result);
        } catch (IOException e) {
            log.error("index [{}], id [{}] update failed", indexName, id, e);
            return false;
        }
    }

    /**
     * Bulk update; creates the index first if necessary.
     */
    public boolean updateDataBatch(String indexName, String indexAlias, String jsonDataList, Class clazz) throws IOException {
        // 1. create the index
        if (!createIndex(indexName, indexAlias, clazz)) {
            return false;
        }
        return this.updateDataBatch(indexName, jsonDataList);
    }

    /**
     * Deletes one document by id.
     */
    public boolean deleteData(String indexName, String id) {
        DeleteRequest deleteRequest = new DeleteRequest(indexName);
        deleteRequest.id(id);
        deleteRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        try {
            DeleteResponse deleteResponse = restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
            if (DocWriteResponse.Result.NOT_FOUND.equals(deleteResponse.getResult())) {
                log.error("index [{}], id [{}] delete failed (not found)", indexName, id);
                return false;
            }
            log.info("index [{}], id [{}] deleted", indexName, id);
            return true;
        } catch (IOException e) {
            log.error("index [{}] delete failed", indexName, e);
            return false;
        }
    }

    /**
     * Bulk delete by id list.
     */
    public boolean deleteDataBatch(String indexName, List<String> ids) {
        if (CollectionUtils.isEmpty(ids)) {
            return false;
        }
        BulkRequest bulkRequest = packBulkDeleteRequest(indexName, ids);
        try {
            BulkResponse bulk = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
            if (bulk.hasFailures()) {
                for (BulkItemResponse item : bulk.getItems()) {
                    log.error("index [{}], id [{}] delete failed, reason: {}", indexName, item.getId(), item.getFailureMessage());
                }
                return false;
            }
            // count deletions
            int deleteCount = 0;
            for (BulkItemResponse item : bulk.getItems()) {
                if (DocWriteResponse.Result.DELETED.equals(item.getResponse().getResult())) {
                    deleteCount++;
                }
            }
            log.info("index [{}] bulk delete done: {} deleted", indexName, deleteCount);
            return true;
        } catch (IOException e) {
            log.error("index [{}] bulk delete failed", indexName, e);
            return false;
        }
    }

    /**
     * Builds the bulk delete request.
     */
    private BulkRequest packBulkDeleteRequest(String indexName, List<String> ids) {
        BulkRequest bulkRequest = new BulkRequest();
        bulkRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        ids.forEach(id -> {
            DeleteRequest deleteRequest = new DeleteRequest(indexName);
            deleteRequest.id(id);
            bulkRequest.add(deleteRequest);
        });
        return bulkRequest;
    }

    /**
     * Bulk update against an existing index.
     */
    public boolean updateDataBatch(String indexName, String jsonDataList) {
        if (StringUtils.isBlank(jsonDataList)) {
            return false;
        }
        BulkRequest bulkRequest = packBulkUpdateRequest(indexName, jsonDataList);
        if (bulkRequest.requests().isEmpty()) {
            return false;
        }
        try {
            // synchronous execution
            BulkResponse bulk = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
            if (bulk.hasFailures()) {
                for (BulkItemResponse item : bulk.getItems()) {
                    // note: the original log line declared four placeholders but passed
                    // only three arguments; item.getId() was missing
                    log.error("index [{}], id [{}] update failed, status [{}], reason: {}", indexName, item.getId(), item.status(), item.getFailureMessage());
                }
                return false;
            }
            int createCount = 0;
            int updateCount = 0;
            for (BulkItemResponse item : bulk.getItems()) {
                if (DocWriteResponse.Result.CREATED.equals(item.getResponse().getResult())) {
                    createCount++;
                } else if (DocWriteResponse.Result.UPDATED.equals(item.getResponse().getResult())) {
                    updateCount++;
                }
            }
            log.info("index [{}] bulk update done: {} created, {} updated", indexName, createCount, updateCount);
            return true;
        } catch (IOException e) {
            log.error("index [{}] bulk update failed", indexName, e);
            return false;
        }
    }

    /**
     * Builds the bulk upsert request.
     */
    private BulkRequest packBulkUpdateRequest(String indexName, String jsonDataList) {
        BulkRequest bulkRequest = new BulkRequest();
        bulkRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        JSONArray jsonArray = JSONArray.parseArray(jsonDataList);
        if (jsonArray.isEmpty()) {
            return bulkRequest;
        }
        jsonArray.forEach(o -> {
            Map<String, Object> map = (Map<String, Object>) o;
            UpdateRequest updateRequest = new UpdateRequest(indexName, String.valueOf(map.get(PRIMARY_KEY_NAME)));
            // insert if the document is absent
            updateRequest.docAsUpsert(true);
            updateRequest.doc(JSON.toJSONString(o), XContentType.JSON);
            bulkRequest.add(updateRequest);
        });
        return bulkRequest;
    }

    /**
     * Rebuilds an index: drops it if it exists, then recreates it.
     * @return true if the rebuild succeeded
     */
    private boolean syncIndex(String indexName, String indexAlias, Class clazz) throws IOException {
        // 1. drop the old index; a missing index is fine here
        if (isIndexExists(indexName) && !deleteIndex(indexName)) {
            return false;
        }
        // 2. recreate it
        return createIndex(indexName, indexAlias, clazz);
    }

    /**
     * Fetches one document by id.
     */
    public String getDataById(String indexName, String id) {
        // 1. make sure the index exists
        if (!isIndexExists(indexName)) {
            return "";
        }
        GetRequest getRequest = new GetRequest(indexName, id);
        try {
            GetResponse getResponse = restHighLevelClient.get(getRequest, RequestOptions.DEFAULT);
            String resultJson = getResponse.getSourceAsString();
            log.info("index [{}], id [{}] result: {}", indexName, id, resultJson);
            return resultJson;
        } catch (IOException e) {
            log.error("index [{}], id [{}] query failed", indexName, id, e);
            return "";
        }
    }

    /**
     * Clears all documents from an index.
     * Note: the original used a DeleteRequest without a document id, which ES
     * rejects at runtime; delete-by-query with match_all is the supported way
     * to empty an index.
     */
    public boolean deleteAll(String indexName) {
        // 1. make sure the index exists
        if (!isIndexExists(indexName)) {
            log.error("index [{}] does not exist, nothing to clear", indexName);
            return false;
        }
        DeleteByQueryRequest deleteRequest = new DeleteByQueryRequest(indexName);
        deleteRequest.setQuery(QueryBuilders.matchAllQuery());
        deleteRequest.setRefresh(true);
        try {
            BulkByScrollResponse response = restHighLevelClient.deleteByQuery(deleteRequest, RequestOptions.DEFAULT);
            log.info("index [{}] cleared, {} documents deleted", indexName, response.getDeleted());
            return true;
        } catch (IOException e) {
            log.error("clearing index [{}] failed", indexName, e);
            return false;
        }
    }

    /**
     * Bulk save, submitted asynchronously (despite the Sync suffix, this method
     * hands the request to bulkAsync and returns immediately).
     */
    public boolean saveDataBatchSync(String indexName, String indexAlias, String jsonDataList, Class clazz) throws IOException {
        if (StringUtils.isBlank(jsonDataList)) {
            return false;
        }
        if (!syncIndex(indexName, indexAlias, clazz)) {
            return false;
        }
        // 3. build the bulk request
        BulkRequest bulkRequest = packBulkIndexRequest(indexName, jsonDataList);
        if (bulkRequest.requests().isEmpty()) {
            return false;
        }
        // 4. submit asynchronously
        ActionListener<BulkResponse> listener = new ActionListener<BulkResponse>() {
            @Override
            public void onResponse(BulkResponse bulkItemResponses) {
                if (bulkItemResponses.hasFailures()) {
                    for (BulkItemResponse item : bulkItemResponses.getItems()) {
                        log.error("index [{}], id [{}] update failed, status [{}], reason: {}", indexName, item.getId(), item.status(), item.getFailureMessage());
                    }
                }
            }
            // failure callback
            @Override
            public void onFailure(Exception e) {
                log.error("index [{}] async bulk update failed", indexName, e);
            }
        };
        restHighLevelClient.bulkAsync(bulkRequest, RequestOptions.DEFAULT, listener);
        log.info("index [{}] async bulk update submitted", indexName);
        return true;
    }

    /**
     * Drops an index.
     */
    public boolean deleteIndex(String indexName) throws IOException {
        // 1. make sure the index exists
        if (!isIndexExists(indexName)) {
            log.error("index [{}] does not exist, nothing to delete", indexName);
            return false;
        }
        // 2. delete it
        DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest(indexName);
        deleteIndexRequest.indicesOptions(IndicesOptions.LENIENT_EXPAND_OPEN);
        AcknowledgedResponse acknowledgedResponse = restHighLevelClient.indices().delete(deleteIndexRequest, RequestOptions.DEFAULT);
        if (!acknowledgedResponse.isAcknowledged()) {
            log.error("index [{}] delete failed", indexName);
            return false;
        }
        log.info("index [{}] deleted", indexName);
        return true;
    }

    /**
     * Builds the bulk index request.
     */
    private BulkRequest packBulkIndexRequest(String indexName, String jsonDataList) {
        BulkRequest bulkRequest = new BulkRequest();
        // IMMEDIATE  > refresh right after the request <freshest, most expensive>
        // WAIT_UNTIL > return once the data is searchable <fresh, cheaper>
        // NONE       > default policy <least fresh>
        bulkRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        JSONArray jsonArray = JSONArray.parseArray(jsonDataList);
        if (jsonArray.isEmpty()) {
            return bulkRequest;
        }
        // wrap every document in an IndexRequest
        jsonArray.forEach(obj -> {
            Map<String, Object> map = (Map<String, Object>) obj;
            IndexRequest indexRequest = new IndexRequest(indexName);
            indexRequest.source(JSON.toJSONString(obj), XContentType.JSON);
            indexRequest.id(String.valueOf(map.get(PRIMARY_KEY_NAME)));
            bulkRequest.add(indexRequest);
        });
        return bulkRequest;
    }

    /**
     * Creates an index (no-op if it already exists).
     */
    public boolean createIndex(String indexName, String indexAlias, Class clazz) throws IOException {
        if (isIndexExists(indexName)) {
            return true;
        }
        boolean created = createIndexAndCreateMapping(indexName, indexAlias, FieldMappingUtil.getFieldInfo(clazz));
        if (!created) {
            log.error("index [{}] create failed", indexName);
            return false;
        }
        log.info("index [{}] created", indexName);
        return true;
    }

    /**
     * Saves one document, creating the index first if necessary.
     */
    public boolean saveData(String id, String indexName, String indexAlias, String jsonData, Class clazz) throws IOException {
        // 1. create the index
        if (!createIndex(indexName, indexAlias, clazz)) {
            return false;
        }
        // 2. build the request
        IndexRequest indexRequest = new IndexRequest(indexName);
        // 3. configure it
        indexRequest.source(jsonData, XContentType.JSON);
        // IMMEDIATE > refresh right away
        indexRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        indexRequest.id(id);
        IndexResponse response = restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
        // 4. was it a create or an update?
        if (DocWriteResponse.Result.CREATED.equals(response.getResult())) {
            log.info("index [{}], id [{}] saved", indexName, id);
            return true;
        } else if (DocWriteResponse.Result.UPDATED.equals(response.getResult())) {
            log.info("index [{}], id [{}] updated", indexName, id);
            return true;
        }
        return false;
    }

    /**
     * Checks whether an index exists.
     */
    public boolean isIndexExists(String indexName) {
        try {
            GetIndexRequest getIndexRequest = new GetIndexRequest(indexName);
            return restHighLevelClient.indices().exists(getIndexRequest, RequestOptions.DEFAULT);
        } catch (Exception e) {
            log.error("existence check for index [{}] failed", indexName, e);
        }
        return false;
    }

    /**
     * Creates an index with settings, mapping and alias.
     */
    private boolean createIndexAndCreateMapping(String indexName, String indexAlias, List<FieldMapping> fieldMappingList) throws IOException {
        // build the mapping
        XContentBuilder mapping = packEsMapping(fieldMappingList, null);
        mapping.endObject().endObject();
        mapping.close();
        CreateIndexRequest createIndexRequest = new CreateIndexRequest(indexName);
        // analyzers and other settings
        XContentBuilder settings = packSettingMapping();
        XContentBuilder aliases = packEsAliases(indexAlias);
        log.info("index settings: {}", settings);
        log.info("index mapping: {}", mapping);
        createIndexRequest.settings(settings);
        // the "_doc" type of the original call is implicit in 7.x
        createIndexRequest.mapping(mapping);
        createIndexRequest.aliases(aliases);
        // create synchronously
        CreateIndexResponse createIndexResponse = restHighLevelClient.indices().create(createIndexRequest, RequestOptions.DEFAULT);
        if (createIndexResponse.isAcknowledged()) {
            log.info("index [{}] created", indexName);
            return true;
        }
        log.error("index [{}] create failed", indexName);
        return false;
    }

    /**
     * Builds the aliases section.
     * @author Kou Shenhai
     */
    private XContentBuilder packEsAliases(String alias) throws IOException {
        XContentBuilder aliases = XContentFactory.jsonBuilder().startObject()
                .startObject(alias).endObject();
        aliases.endObject();
        aliases.close();
        return aliases;
    }

    /**
     * Builds the mapping from the annotated entity fields.
     */
    private XContentBuilder packEsMapping(List<FieldMapping> fieldMappingList, XContentBuilder mapping) throws IOException {
        if (mapping == null) {
            // first call: open the root object
            mapping = XContentFactory.jsonBuilder().startObject()
                    .field("dynamic", true)
                    .startObject("properties");
        }
        // one mapping entry per annotated field
        for (FieldMapping fieldMapping : fieldMappingList) {
            String field = fieldMapping.getField();
            String dataType = fieldMapping.getType();
            Integer participle = fieldMapping.getParticiple();
            if (Constant.NOT_ANALYZED.equals(participle)) {
                if (FieldTypeEnum.DATE.getValue().equals(dataType)) {
                    mapping.startObject(field)
                            .field("type", dataType)
                            .field("format", "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis")
                            .endObject();
                } else {
                    mapping.startObject(field)
                            .field("type", dataType)
                            .endObject();
                }
            } else if (Constant.IK_INDEX.equals(participle)) {
                mapping.startObject(field)
                        .field("type", dataType)
                        .field("eager_global_ordinals", true)
                        // fielddata=true lets you aggregate on a text field
                        .field("fielddata", true)
                        .field("boost", 100.0)
                        .field("analyzer", "ik-index-synonym")
                        .field("search_analyzer", "ik-search-synonym")
                        .startObject("fields").startObject("pinyin")
                        .field("term_vector", "with_positions_offsets")
                        .field("analyzer", "ik-search-pinyin")
                        .field("type", dataType)
                        .field("boost", 100.0)
                        .endObject().endObject()
                        .endObject();
            }
        }
        return mapping;
    }

    /**
     * Builds the settings: shards/replicas plus the ik, synonym and pinyin analyzers.
     */
    private XContentBuilder packSettingMapping() throws IOException {
        XContentBuilder setting = XContentFactory.jsonBuilder().startObject()
                .startObject("index")
                .field("number_of_shards", 5)
                .field("number_of_replicas", 1)
                .field("refresh_interval", "120s")
                .endObject()
                .startObject("analysis");
        // analyzers: ik + synonyms + pinyin
        setting.startObject("analyzer")
                .startObject("ik-search-pinyin")
                .field("type", "custom")
                .field("tokenizer", "ik_smart")
                .field("char_filter", new String[]{"html_strip"})
                .field("filter", new String[]{"laokou-pinyin", "word_delimiter", "lowercase", "asciifolding"})
                .endObject();
        setting.startObject("ik-index-synonym")
                .field("type", "custom")
                .field("tokenizer", "ik_max_word")
                .field("char_filter", new String[]{"html_strip"})
                .field("filter", new String[]{"laokou-remote-synonym"})
                .endObject();
        setting.startObject("ik-search-synonym")
                .field("type", "custom")
                .field("tokenizer", "ik_smart")
                .field("char_filter", new String[]{"html_strip"})
                .field("filter", new String[]{"laokou-remote-synonym"})
                .endObject();
        setting.endObject();
        // filters: pinyin and remote synonyms
        setting.startObject("filter")
                .startObject("laokou-pinyin")
                .field("type", "pinyin")
                .field("keep_first_letter", false)
                .field("keep_separate_first_letter", false)
                .field("keep_full_pinyin", true)
                .field("keep_original", false)
                .field("keep_joined_full_pinyin", true)
                .field("limit_first_letter_length", 16)
                .field("lowercase", true)
                .field("remove_duplicated_term", true)
                .endObject()
                .startObject("laokou-remote-synonym")
                .field("type", "dynamic_synonym")
                .field("synonyms_path", synonymPath)
                .field("interval", 120)
                .field("dynamic_reload", true)
                .endObject()
                .endObject();
        setting.endObject().endObject();
        setting.close();
        return setting;
    }
}

Food for thought: say you keep article records, chat records and order records. They belong in different indices, each created separately. How would you create a different index for each data type? (One possible approach is sketched at the end of this article.)

VI. An index registry

import java.util.HashMap;
import java.util.Map;

/**
 * Index registry: maps an index name to its entity class.
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2021/10/31 10:11
 */
public class FieldUtil {

    public static final String MESSAGE_INDEX = "message";

    private static final Map<String, Class<?>> classMap = new HashMap<>(16);

    static {
        classMap.put(FieldUtil.MESSAGE_INDEX, MessageIndex.class);
    }

    public static Class<?> getClazz(final String indexName) {
        return classMap.getOrDefault(indexName, Object.class);
    }
}

VII. Testing ES

/**
 * Elasticsearch API service.
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2021/2/8 18:33
 */
@RestController
@RequestMapping("/api")
@Api(tags = "Elasticsearch API service")
public class ElasticsearchController {

    @Autowired
    private ElasticsearchUtil elasticsearchUtil;

    @PostMapping("/sync")
    @ApiOperation("Sync one document to ES")
    @CrossOrigin
    public void syncIndex(@RequestBody final ElasticsearchModel model) throws IOException {
        String id = model.getId();
        String indexName = model.getIndexName();
        String indexAlias = model.getIndexAlias();
        String jsonData = model.getData();
        Class<?> clazz = FieldUtil.getClazz(indexAlias);
        elasticsearchUtil.saveData(id, indexName, indexAlias, jsonData, clazz);
    }

    @PostMapping("/batchSync")
    @ApiOperation("Bulk save to ES - async")
    @CrossOrigin
    public void batchSyncIndex(@RequestBody final ElasticsearchModel model) throws IOException {
        String indexName = model.getIndexName();
        String indexAlias = model.getIndexAlias();
        String jsonDataList = model.getData();
        Class<?> clazz = FieldUtil.getClazz(indexAlias);
        elasticsearchUtil.saveDataBatchSync(indexName, indexAlias, jsonDataList, clazz);
    }

    @PostMapping("/batch")
    @ApiOperation("Bulk sync to ES")
    @CrossOrigin
    public void saveBatchIndex(@RequestBody final ElasticsearchModel model) throws IOException {
        String indexName = model.getIndexName();
        String indexAlias = model.getIndexAlias();
        String jsonDataList = model.getData();
        Class<?> clazz = FieldUtil.getClazz(indexAlias);
        elasticsearchUtil.saveDataBatch(indexName, indexAlias, jsonDataList, clazz);
    }

    @GetMapping("/get")
    @ApiOperation("Get a document by id")
    @CrossOrigin
    @ApiImplicitParams({
            @ApiImplicitParam(name = "indexName", value = "index name", required = true, paramType = "query", dataType = "String"),
            @ApiImplicitParam(name = "id", value = "primary key", required = true, paramType = "query", dataType = "String")
    })
    public HttpResultUtil<String> getDataById(@RequestParam("indexName") String indexName, @RequestParam("id") String id) {
        return new HttpResultUtil<String>().ok(elasticsearchUtil.getDataById(indexName, id));
    }

    @PutMapping("/batch")
    @ApiOperation("Bulk update ES")
    @CrossOrigin
    public void updateDataBatch(@RequestBody final ElasticsearchModel model) throws IOException {
        String indexName = model.getIndexName();
        String indexAlias = model.getIndexAlias();
        String jsonDataList = model.getData();
        Class<?> clazz = FieldUtil.getClazz(indexAlias);
        elasticsearchUtil.updateDataBatch(indexName, indexAlias, jsonDataList, clazz);
    }

    @PutMapping("/sync")
    @ApiOperation("Update one document")
    @CrossOrigin
    public void updateData(@RequestBody final ElasticsearchModel model) {
        String id = model.getId();
        String indexName = model.getIndexName();
        String paramJson = model.getData();
        elasticsearchUtil.updateData(indexName, id, paramJson);
    }

    @DeleteMapping("/batch")
    @ApiOperation("Bulk delete from ES")
    @CrossOrigin
    public void deleteDataBatch(@RequestParam("indexName") String indexName, @RequestParam("ids") List<String> ids) {
        elasticsearchUtil.deleteDataBatch(indexName, ids);
    }

    @DeleteMapping("/sync")
    @ApiOperation("Delete one document")
    @CrossOrigin
    public void deleteData(@RequestParam("indexName") String indexName, @RequestParam("id") String id) {
        elasticsearchUtil.deleteData(indexName, id);
    }

    @DeleteMapping("/all")
    @ApiOperation("Clear an index")
    @CrossOrigin
    public void deleteAll(@RequestParam("indexName") String indexName) {
        elasticsearchUtil.deleteAll(indexName);
    }
}

And that's it. Extra: you can partition the data to fit your own business — see the sketch below.
I. Dependencies (pom.xml)

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.6.2</version>
    <exclusions>
        <exclusion>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client</artifactId>
    <version>7.6.2</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.6.2</version>
</dependency>

II. application-dev.yml (for production, clone application-dev and change the values)

elasticsearch:
  host: 192.168.1.1:9200,192.168.1.2:9200,192.168.1.3:9200
  cluster-name: laokou-elasticsearch
  username:
  password:
  synonym:
    path: http://192.168.1.1:9048/laokou-service/synonym

Food for thought: say an article record has a title, a body and a view count. When the data goes into ES you need analyzers configured on the text fields, and you must be able to filter by view count. How would you do it?

III. The ES mapping annotation

The annotation can mark a field or a method (configure it first):
type > the ES field type to map to
participle > the analyzer to use

/**
 * @author Kou Shenhai
 */
@Target({ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
public @interface FieldInfo {

    /**
     * Defaults to keyword.
     */
    String type() default "keyword";

    /**
     * 0 not_analyzed, 1 ik_smart, 2 ik_max_word, 3 ik-index (custom analyzer)
     */
    int participle() default 0;
}

A holder tying a field name to its type and analyzer:

/**
 * Field, type and analyzer.
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2021/2/9 10:20
 */
@Data
@NoArgsConstructor
public class FieldMapping {

    private String field;
    private String type;
    private Integer participle;

    public FieldMapping(String field, String type, Integer participle) {
        this.field = field;
        this.type = type;
        this.participle = participle;
    }
}

Collecting every field's type and analyzer into a List:

/**
 * Builds the type/analyzer list for every annotated field.
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2021/1/24 19:51
 */
@Slf4j
public class FieldMappingUtil {

    public static List<FieldMapping> getFieldInfo(Class clazz) {
        return getFieldInfo(clazz, null);
    }

    public static List<FieldMapping> getFieldInfo(Class clazz, String fieldName) {
        // all declared fields, including private ones
        Field[] fields = clazz.getDeclaredFields();
        List<FieldMapping> fieldMappingList = new ArrayList<>();
        for (Field field : fields) {
            // only fields annotated with @FieldInfo take part in the mapping
            if (field.isAnnotationPresent(FieldInfo.class)) {
                FieldInfo fieldInfo = field.getAnnotation(FieldInfo.class);
                fieldMappingList.add(new FieldMapping(field.getName(), fieldInfo.type(), fieldInfo.participle()));
            }
        }
        return fieldMappingList;
    }
}

IV. ES and Swagger configuration

/**
 * ES configuration.
 * @author Kou Shenhai 2413176044@leimingtech.com
 * @version 1.0
 * @date 2020/8/9 14:01
 */
@Configuration
public class ElasticsearchConfig {

    private static final String HTTP_SCHEME = "http";

    private static final Logger LOGGER = LoggerFactory.getLogger(ElasticsearchConfig.class);

    /**
     * Credentials for basic auth.
     */
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();

    /**
     * ES hosts.
     */
    @Value("${elasticsearch.host}")
    private String[] host;

    @Value("${elasticsearch.username}")
    private String username;

    @Value("${elasticsearch.password}")
    private String password;

    @Bean
    public RestClientBuilder restClientBuilder() {
        HttpHost[] hosts = Arrays.stream(host)
                .map(this::makeHttpHost)
                .filter(Objects::nonNull)
                .toArray(HttpHost[]::new);
        LOGGER.info("host:{}", Arrays.toString(hosts));
        // basic auth
        credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(username, password));
        return RestClient.builder(hosts).setHttpClientConfigCallback(httpClientBuilder ->
                httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
                        .setMaxConnPerRoute(100)   // max connections per route
                        .setMaxConnTotal(100)      // max connections in total
        ).setRequestConfigCallback(builder -> {
            builder.setConnectTimeout(-1);
            builder.setSocketTimeout(60000);
            builder.setConnectionRequestTimeout(-1);
            return builder;
        });
    }

    /**
     * Parses an "ip:port" address into an HttpHost.
     */
    private HttpHost makeHttpHost(String address) {
        assert StringUtils.isNotEmpty(address);
        String[] hostAddress = address.split(":");
        if (hostAddress.length == 2) {
            String ip = hostAddress[0];
            Integer port = Integer.valueOf(hostAddress[1]);
            return new HttpHost(ip, port, HTTP_SCHEME);
        }
        return null;
    }

    /**
     * The RestHighLevelClient bean.
     */
    @Bean(name = "restHighLevelClient")
    public RestHighLevelClient restHighLevelClient(@Autowired RestClientBuilder restClientBuilder) {
        return new RestHighLevelClient(restClientBuilder);
    }
}

/**
 * @author Kou Shenhai
 */
@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket createRestApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                .apiInfo(apiInfo())
                .select()
                .apis(RequestHandlerSelectors.withMethodAnnotation(ApiOperation.class))
                .paths(PathSelectors.any())
                .build();
    }

    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("API docs")
                .version("2.0.0")
                .description("API docs - Elasticsearch service")
                // author info
                .contact(new Contact("寇申海", "https://blog.csdn.net/qq_39893313", "2413176044@qq.com"))
                .build();
    }
}

A sketch of the entity these annotations decorate follows.
Hi folks, Laokou here.

I. Download Kibana (skip if you have it)

Note: the Kibana version must match your ES version exactly.

II. Start the ES cluster (see the ES cluster setup post)

III. Configure kibana.yml

server.maxPayloadBytes: 1048576
server.port: 5601                    # port
server.host: "0.0.0.0"               # lift the IP restriction
elasticsearch.hosts: ["http://192.168.1.1:9200","http://192.168.1.2:9200","http://192.168.1.3:9200"]   # ES cluster addresses
elasticsearch.requestTimeout: 90000  # request timeout
i18n.locale: "zh-CN"                 # Chinese UI

IV. Start Kibana

Go to the kibana/bin directory and double-click kibana.bat.

V. Open it

Enter http://localhost:5601 in your browser.
Hi folks, Laokou here! Let's install Docker together.

I. Download a virtual machine (Win11 hosts need a recent VM version)

II. Download a Win10 image (skip if you have one)

Source: MSDN, 我告诉你 (a quiet, ad-free image mirror)

III. Install Win10 (skip if installed)

Leave any questions in the comments.
Note: on the Win11 host, do NOT tick Hyper-V, or the Win10 guest will not boot.

IV. Remote desktop (the account and password are the ones you set while installing Win10)

V. Download Docker

Download from Docker Hub.
Note: works on Win10 Pro, Enterprise, Education and some Home editions.

VI. Enable Hyper-V (inside the Win10 guest)

VII. Start Docker (after enabling Hyper-V, reboot, then start Docker)

Double-click the docker.exe file.

VIII. Check the version

(1) Hover over the Docker tray icon, right-click, and choose 'About Docker Desktop'; or
(2) in a cmd window, run: docker -v
I. Creating an index, with every parameter explained

1. Creating an index (syntax and example)

1) Syntax (note the keys are "settings" and "mappings", plural)

PUT index_name
{
  "settings": { ... },
  "mappings": {
    "properties": {
      "field1": { "type": "keyword" },
      "field2": { "type": "keyword" },
      ...
    }
  }
}

2) Example: a message index

PUT msg_202203
{
  "settings": {
    "index": {
      "refresh_interval": "120s",
      "number_of_shards": "5",
      "analysis": {
        "filter": {
          "laokou-remote-synonym": {
            "dynamic_reload": "true",
            "interval": "120",
            "type": "dynamic_synonym",
            "synonyms_path": "http://localhost:9048/laokou-service/synonym"
          },
          "laokou-pinyin": {
            "lowercase": "true",
            "keep_original": "true",
            "remove_duplicated_term": "true",
            "keep_first_letter": "false",
            "keep_separate_first_letter": "false",
            "type": "pinyin",
            "limit_first_letter_length": "16",
            "keep_full_pinyin": "true",
            "keep_joined_full_pinyin": "true"
          }
        },
        "analyzer": {
          "ik-search-pinyin": {
            "filter": ["laokou-pinyin", "word_delimiter", "lowercase", "asciifolding"],
            "char_filter": ["html_strip"],
            "type": "custom",
            "tokenizer": "ik_smart"
          },
          "ik-search-synonym": {
            "filter": ["laokou-remote-synonym"],
            "char_filter": ["html_strip"],
            "type": "custom",
            "tokenizer": "ik_smart"
          },
          "ik-index-synonym": {
            "filter": ["laokou-remote-synonym"],
            "char_filter": ["html_strip"],
            "type": "custom",
            "tokenizer": "ik_max_word"
          }
        }
      },
      "number_of_replicas": "1"
    }
  },
  "aliases": {
    "msg": {}
  },
  "mappings": {
    "dynamic": "true",
    "properties": {
      "sendId": { "type": "long" },
      "data": {
        "eager_global_ordinals": true,
        "search_analyzer": "ik-search-synonym",
        "fielddata": true,
        "analyzer": "ik-index-synonym",
        "boost": 100,
        "type": "text",
        "fields": {
          "data-pinyin": {
            "analyzer": "ik-search-pinyin",
            "term_vector": "with_positions_offsets",
            "boost": 100,
            "type": "text"
          }
        }
      },
      "type": { "type": "integer" },
      "remark": { "type": "keyword" },
      "fromId": { "type": "long" },
      "createDate": {
        "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis",
        "type": "date"
      },
      "username": { "type": "keyword" }
    }
  }
}

2. Parameter walkthrough (focusing on the example above)

1) settings

settings hold the index's configuration. Reference: Elasticsearch 拼音分词器(上)

{
  "settings": {                                  # index settings
    "index": {
      "refresh_interval": "120s",                # refresh interval; refreshing too often hurts performance, 30s~60s is common; -1 disables refresh
      "number_of_shards": "5",                   # number of primary shards; note ES 7.0 lowered the default to 1
      "analysis": {                              # custom analysis: filters + analyzers
        "filter": {
          "laokou-remote-synonym": {             # synonym token filter
            "dynamic_reload": "true",            # reload the synonym dictionary dynamically
            "interval": "120",                   # reload interval, in seconds
            "type": "dynamic_synonym",           # synonym filter type
            "synonyms_path": "http://localhost:9048/laokou-service/synonym"   # remote synonym dictionary
          },
          "laokou-pinyin": {                     # pinyin token filter
            "lowercase": "true",                 # lowercase output
            "keep_original": "true",             # keep the original input (default false)
            "remove_duplicated_term": "true",    # drop duplicated terms (default false)
            "keep_first_letter": "false",        # first-letter matching, e.g. 寇申海 > ksh (default true)
            "keep_separate_first_letter": "false",   # separate first letters, e.g. 寇申海 > k,s,h (default false)
            "type": "pinyin",                    # pinyin filter type
            "limit_first_letter_length": "16",   # max length of the first_letter result (default 16)
            "keep_full_pinyin": "true",          # full pinyin, e.g. 寇申海 > kou shen hai (default true)
            "keep_joined_full_pinyin": "true"    # joined full pinyin, e.g. 寇申海 > koushenhai (default false)
          }
        },
        "analyzer": {                            # analyzers
          "ik-search-pinyin": {                  # custom pinyin search analyzer
            "filter": ["laokou-pinyin", "word_delimiter", "lowercase", "asciifolding"],   # pinyin filter; word_delimiter splits terms (koushenhai > kou shen hai); lowercase filter; asciifolding folds letters, digits and symbols outside the first 127 ASCII characters ("Basic Latin") to their ASCII equivalents
            "char_filter": ["html_strip"],       # html_strip removes all HTML tags, e.g. <p>
            "type": "custom",                    # custom analyzer
            "tokenizer": "ik_smart"              # ik coarse-grained tokenizer; supports local and remote dictionaries
          },
          "ik-search-synonym": {                 # custom synonym search analyzer
            "filter": ["laokou-remote-synonym"], # synonym filter
            "char_filter": ["html_strip"],       # strips HTML tags
            "type": "custom",
            "tokenizer": "ik_smart"              # ik coarse-grained tokenizer; supports local and remote dictionaries
          },
          "ik-index-synonym": {                  # custom synonym index analyzer
            "filter": ["laokou-remote-synonym"], # synonym filter
            "char_filter": ["html_strip"],       # strips HTML tags
            "type": "custom",
            "tokenizer": "ik_max_word"           # ik fine-grained tokenizer, e.g. 我是中国人 > 我是/中国/中国人/我是中国人
          }
        }
      },
      "number_of_replicas": "1"                  # replicas per primary shard
    }
  }
}

2) mappings

The mapping declares the field types of the documents stored in the index.

"mappings": {
  "dynamic": "true",                 # allow dynamically added fields
  "properties": {                    # the fields
    "sendId": { "type": "long" },
    "data": {
      "eager_global_ordinals": true, # rebuild global ordinals on refresh and keep them in memory, cutting per-query dictionary building
      "search_analyzer": "ik-search-synonym",    # analyzer at query time (ik_smart based)
      "fielddata": true,
      "analyzer": "ik-index-synonym",            # analyzer at index time (ik_max_word based)
      "boost": 100,                  # weight
      "type": "text",
      "fields": {
        "data-pinyin": {             # sub-field for pinyin queries
          "analyzer": "ik-search-pinyin",        # custom pinyin analyzer
          "term_vector": "with_positions_offsets",   # term statistics for the document
          "boost": 100,              # weight
          "type": "text"
        }
      }
    },
    "type": { "type": "integer" },
    "remark": { "type": "keyword" },
    "fromId": { "type": "long" },
    "createDate": {
      "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis",   # date fields need a format
      "type": "date"
    },
    "username": { "type": "keyword" }
  }
}

3) aliases

An alias is an alternative name for the index:

"aliases": {
  "msg": {}          # alias name
}

4) Run the command (in Kibana — install Kibana if you haven't).

Note: the screenshots are for reference only; the article content is current.

Extra: the core of the remote synonym endpoint (the full code is on Gitee):

@RestController
@RequestMapping("/synonym")
public class SynonymController {

    private static final Logger log = LoggerFactory.getLogger(SynonymController.class);

    @Autowired
    private SynonymDao synonymDao;

    @Autowired
    private RedisUtil redisUtil;

    /**
     * SimpleDateFormat is not thread-safe, hence the ThreadLocal.
     */
    private static final ThreadLocal<DateFormat> df = ThreadLocal.withInitial(() -> new SimpleDateFormat(DateUtil.DATE_TIME));

    /**
     * Serves the synonym dictionary.
     * @apiNote Hi folks, Laokou here.
     * Thought: the plugin polls this endpoint every couple of minutes, and many
     * indices may poll the same endpoint, so the query load adds up — Redis is
     * brought in to absorb most of the reads that would otherwise hit MySQL.
     */
    @GetMapping
    @CrossOrigin
    public String text(HttpServletRequest request, HttpServletResponse response) {
        String result = "";
        String eTag = request.getHeader("If-None-Match");
        String modified = request.getHeader("If-Modified-Since");
        String currentDate = df.get().format(new Date());
        String synonymKey = RedisKeyUtil.getSynonymKey();
        String dataJson = redisUtil.get(synonymKey);
        List<SynonymEntity> list;
        if (StringUtils.isNotBlank(dataJson)) {
            list = JSON.parseArray(dataJson, SynonymEntity.class);
        } else {
            QueryWrapper<SynonymEntity> queryWrapper = new QueryWrapper<SynonymEntity>().select("value");
            list = synonymDao.selectList(queryWrapper);
            redisUtil.set(synonymKey, JSON.toJSONString(list), RedisUtil.HOUR_ONE_EXPIRE);
        }
        if (CollectionUtils.isEmpty(list)) {
            return null;
        }
        List<String> valueList = list.stream().map(SynonymEntity::getValue).collect(Collectors.toList());
        log.info("loading ik synonyms, last ETag: {}, last modified: {}, now: {}", eTag, modified, currentDate);
        if (!valueList.isEmpty()) {
            // join the synonyms from the database, one rule per line;
            // both the "=>" form and the plain "," form work — pick what fits your case
            StringBuilder words = new StringBuilder();
            for (String synonym : valueList) {
                words.append(synonym);
                words.append("\n");
            }
            modified = currentDate;
            result = words.toString();
        }
        // let the plugin detect changes
        response.setHeader("Last-Modified", modified);
        response.setHeader("ETag", String.valueOf(list.size()));
        response.setHeader("Content-Type", "text/plain");
        return result;
    }
}

II. Bulk-importing data

1. Bulk import (syntax and example)

1) Syntax

POST /_bulk
{"action": {"metadata"}}
{"data"}

POST /_bulk
{"create":{"_index":"index_name"}}       # the target index
{"field1":"value1","field2":"value2"}    # the document fields

2) Example (later posts reuse this data, so we import a good batch)

POST /_bulk
{"create":{"_index":"msg_202203"}}
{"createDate":1614129064000,"data":"谷神不死,是谓玄牝。玄牝之门,是谓天地根。绵绵若存,用之不勤。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1614060893000,"data":"天地不仁,以万物為芻狗;圣人不仁,以百姓為芻狗。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1614059233000,"data":"道沖,而用之或不盈。淵兮,似万物之宗;湛兮,似或存。吾不知誰之子,象帝之先。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613826379000,"data":"是以圣人之治,虛其心,實其腹,弱其志,強其骨。常使民無知無欲。使夫智者不敢為也。為無為,則無不治。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613826260000,"data":"不尚賢,使民不爭;不貴難得之貨,使民不為盜;不見可欲,使民心不亂。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613825567000,"data":"夫唯不居,是以不去。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613825555000,"data":"是以圣人处无为之事,行不言之教,万物作焉而不辞,生而不有,为而不恃,功成而不居。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613825534000,"data":"天下皆知美之为美,斯恶矣;皆知善之为善,斯不善矣。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613825140000,"data":"故常无欲,以观其妙;常有欲,以观其徼。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613551503000,"data":"此两者,同出而异名,同谓之玄,玄之又玄,众妙之门。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613377999000,"data":"无名,天地之始,有名,万物之母。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1613375454000,"data":"道可道,非常道;名可名,非常名。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1612854931000,"data":"真常应物,真常得性;常应常静,常清静矣。如此清静,渐入真道;既入真道,名为得道;虽名得道,实无所得;为化众生,名为得道;能悟之者,可传圣道。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1612854896000,"data":"夫,人神好清,而心扰之;人心好静,而欲牵之。常能遣其欲,而心自静;澄其心,而神自清;自然六欲不生,三毒消灭。所以不能者,为心未澄,欲未遣也,能遣之者:内观其心,心无其心;外观其形,形无其形;远观其物,物无其物;三者既无,唯见於空。观空亦空,空无所空;所空既无,无无亦无;无无既无,湛然常寂。寂无所寂,欲岂能生;欲既不生,即是真静。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1612854529000,"data":"夫道者:有清有浊,有动有静;天清地浊,天动地静;男清女浊,男动女静;降本流末,而生万物。清者浊之源,动者静之基;人能常清静,天地悉皆归。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1612854482000,"data":"大道无形,生育天地;大道无情,运行日月;大道无名,长养万物;吾不知其名,强名曰道。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} 
{"createDate":1635830300000,"data":"好东西不用你去记,它自会留下很深的印象。","fromId":"1341623527018004481","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1635814452000,"data":"最困难的事情就是认识自己。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1627093178000,"data":"上善若水。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1627090427000,"data":"戒为良药,以戒为师。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1629213470000,"data":"有时候,不是因为自己不坚强,而是,总想着有一个人关心一下自己,鼓励自己。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628999314000,"data":"祝你的生日天天快乐,永远都幸福.在新的一年里感情好,身体好,事业好,对你的朋友们要更好!","fromId":"1341623527018004481","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628999280000,"data":"绿色是生命的颜色,绿色的浪漫是生命的浪漫。因此,我选择了这个绿色的世界,馈赠给你的生日。愿你充满活力,青春常在。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628999215000,"data":"羡慕你的生日是这样浪漫,充满诗情画意,只希望你的每一天都快乐、健康、美丽,生命需要奋斗、创造、把握!生日快乐","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628999188000,"data":"青春的树越长赵葱茏,生命的花就越长越艳丽。在你生日的这一天,请接受我对你深深的祝福。愿这独白,留在你生命的扉页;愿这切切祈盼,带给你新的幸福。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628998224000,"data":"今夜,惊悉你的生日。窗外的风带上我的祝福,祝愿你在新的一年里心想事成花容月貌而且又乖又可爱","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628998136000,"data":"长长的距离,长长的线,连着长长的思念。远远的空间,久久的时间,剪不断远方的掂念!祝你生日快乐","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628998098000,"data":"虽然不能陪你度过这特别的日子,但我的祝福依旧准时送上:在你缤纷的人生之旅中,心想事成!生日快乐!","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628997414000,"data":"愿我的祝福,如一缕灿烂的阳光,在您的眼里流尚,生日快乐!","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628997401000,"data":"又是你的生日了,虽然残破的爱情让我们彼此变得陌生,然而我从未忘你的生日,好多年了,每一个生日我们都没忘记给对方祝福,希望这一生都拥有这友情,真心祝你生日快乐!","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628997381000,"data":"祝我漂亮的、乐观的、热情的、健康自信的、布满活力的大朋友——妈妈,生日快乐!","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628997362000,"data":"你用厚重的年轮,编成一册散发油墨清香的日历;年年,我都会在日历的这一天上,带着崇高的敬意,祝福你的生日。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} 
{"createDate":1628950044000,"data":"远方的朋友,我衷心的祝你:年年好“薪”情,岁岁好“钱”景,天天好福气,时时好运来,做个“四好”新人。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628949929000,"data":"年年岁岁花相似,岁岁年年人不同。醒来惊觉不是梦,眉间皱纹又一重。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628573383000,"data":"烛光支支辉煌,寿诞岁岁吉利。童颜鹤发逢盛世,百年不老福乐绵。麻姑献寿奉蟠桃,我来贺寿送祝愿:祝你与天地比寿,愿你与日月同光!","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628573219000,"data":"黄金白银,每一两都坚强无比,珍珠美玉,每一粒都完美无瑕,钻石流星,每一颗都永恒流传,可是这些都比不过今天你的心,因为寿星老最高贵,生日快乐。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628557084000,"data":"懒人无法享受休息之乐。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628557013000,"data":"伟大的事业,需要决心,能力,组织和责任感。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628556795000,"data":"学而不思则罔,思而不学则殆。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628556767000,"data":"只有把抱怨环境的心情,化为上进的力量,才是成功的保证。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1628556726000,"data":"最具挑战性的挑战莫过于提升自我。","fromId":"1341623527018004481","remark":"私聊","sendId":"1341620898007281665","type":1,"username":"test"} {"create":{"_index":"msg_202203"}} {"createDate":1628556628000,"data":"老寇,生日快乐","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1624878509000,"data":"民之饥,以其上食税之多,是以饥。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1624878463000,"data":"强大处下,柔弱处上。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1624874411000,"data":"天之道,其犹张弓与?高者抑下,下者举之,有余者损之,不足者补之。","fromId":"1341620898007281665","remark":"私聊","sendId":"1341623527018004481","type":1,"username":"admin"} {"create":{"_index":"msg_202203"}} {"createDate":1623390047000,"data":"顺其自然,无为而治。","fromId":"1341620898007281665","remark":"群聊","sendId":"1363109342432645122","type":2,"username":"admin"}3).等待数据批量导入(连接kibana进行批量导入)注:因为是异步,可能在kibana执行完了,数据并没有提交到es中,需要等待一会儿三、查询索引1.查询mappingGET /索引/_mapping GET /msg_202203/_mapping2.查询settingsGET /索引/_settings GET /msg_202203/_settings四、修改索引 PUT /msg_202203/_settings { "index": { "refresh_interval": "-1", "number_of_shards": "5", "number_of_replicas": "1" } }五、删除索引DELETE /msg_202203 DELETE /msg_*
Project notes

Project: KCloud
Author: Laokou (老寇)
Language: Java
Role: Java engineer
Timeline: 2020.06.08 ~ present

About the project

KCloud ("Laokou Cloud") is built on Spring Cloud for Java learners: a place to level up, to build out a technical knowledge map, and to feel the appeal of each technology through living code. Middleware used includes Redis, Elasticsearch and more.

Features

- SSO login (username/password, WeChat official account, phone number, authorization code, e-mail, Alipay)
- Video calls
- Live video streaming
- Friend chat
- Order management (pay / purchase / cancel)
- Message history (sensitive-word filtering, highlighting)
- Data crawling
- Resource management (OA workflow approval, static page generation)
- Code generation

Tech stack

Base frameworks: SpringBoot, SpringCloud, Shiro, SpringSecurity

Technologies: mysql/hbase, rabbitmq/rocketmq, elasticsearch, redis, fastdfs, sharding-jdbc, netty/websocket, docker+docker-compose, freemarker/thymeleaf/velocity, mybatis+mybatis-plus, webmagic, mongodb

One-click deployment: docker + Jenkins + shell; docker + Kubernetes

Project layout

KCloud
|--db -- database SQL scripts
|--laokou-cloud
   |--laokou-dubbo -- Dubbo module
   |--laokou-feign -- Feign module
   |--laokou-gateway -- service gateway
   |--laokou-monitor -- service monitoring
   |--laokou-register -- service registry
   |--laokou-sentinel -- service monitoring
   |--laokou-skywalking -- service monitoring
   |--laokou-sleuth -- call-chain tracing
   |--laokou-turbine -- service monitoring
|--laokou-common -- shared utilities
|--laokou-service
   |--laokou-activiti -- workflow module
   |--laokou-chat -- IM module
   |--laokou-concurrent -- concurrency testing module
   |--laokou-data -- API invocation module
   |--laokou-datasource -- multi-datasource module
   |--laokou-elasticsearch -- search module
   |--laokou-file -- file module
   |--laokou-flv -- live streaming module
   |--laokou-freemarker -- template module
   |--laokou-generator -- code generation module
   |--laokou-hbase -- distributed database
   |--laokou-lock -- distributed locks
   |--laokou-netty-client -- Netty client
   |--laokou-netty-server -- Netty server
   |--laokou-order -- order module
   |--laokou-oss -- OSS configuration
   |--laokou-rabbitmq -- RabbitMQ messaging module
   |--laokou-redis-tools -- Redis module
   |--laokou-resource -- resource module
   |--laokou-rocketmq -- RocketMQ messaging module
   |--laokou-sensitive-words -- sensitive-word module
   |--laokou-sharding-jdbc -- IP module (sharding-jdbc)
   |--laokou-sso
      |--laokou-sso-captcha -- captcha module
      |--laokou-sso-security-auth -- SSO login module
      |--laokou-sso-security-server -- SSO authorization-code module
      |--laokou-sso-shiro -- SSO login module
   |--laokou-third-party
      |--laokou-third-party-email -- e-mail module
      |--laokou-third-party-pay -- payment module
      |--laokou-third-party-sms -- SMS module
      |--laokou-third-party-wechat -- WeChat module
   |--laokou-video -- video call module
   |--laokou-webmagic -- crawler module
   |--laokou-webservice -- webservice module
   |--laokou-xxl-job -- xxl-job scheduled-task module

Setup

Install jdk1.8, mysql5.7, elasticsearch7.6.2, fastdfs, rabbitmq, redis, rocketmq, nginx+openresty+lua, mongodb.
Create the databases > see the db folder.
Adjust the third-party settings, then the middleware settings:

# rabbitmq
rabbitmq:
  # MQ address
  addresses: 127.0.0.1:5672
  # MQ account
  username: root
  # MQ password
  password: XXXXXX
# redis
redis:
  # address
  host: 127.0.0.1
  # port
  port: 6379
# mysql
datasource:
  druid:
    # connection URL
    url: jdbc:mysql://127.0.0.1:3306/kcloud?allowMultiQueries=true&useUnicode=true&characterEncoding=UTF-8&useSSL=false
    # account
    username: root
    # password
    password: XXXXXX
# es
elasticsearch:
  # cluster name
  cluster-name: laokou-elasticsearch
  # address
  host: 127.0.0.1:9200
  # account
  username: elastic
  # password
  password: XXXXXX

One last word

I know one pair of hands only goes so far — you are welcome to join in…
Preface

Hi folks, Laokou here — let's install elasticsearch 7.6.2 together. There are countless installation guides out there; this one follows the procedure I have actually tested, shared so you can skip the detours and so later posts have a common base. Everything below targets ES 7.6.2. Let's get started.

Body

I. Prerequisites

1. CentOS 7.x installed
2. The elasticsearch 7.6.2 package and its plugins
3. The elasticsearch-head extension for Chrome

II. Installation

1. Unpack the archive:

tar -zxvf elasticsearch-7.6.2-linux-x86_64.tar.gz

2. Move it to /usr/local and rename it to elasticsearch:

mv elasticsearch-7.6.2 /usr/local/elasticsearch

3. Create a dedicated account (for safety, elasticsearch refuses to run as root):

useradd <username>

4. Set its password (you will be prompted for the new password):

passwd <username>

5. Create the data and logs directories:

mkdir -p /home/<username>/elasticsearch/data
mkdir -p /home/<username>/elasticsearch/logs

6. Grant permissions on the new directories:

chmod -R 777 /home/<username>/elasticsearch
chmod -R 777 /usr/local/elasticsearch

7. Enter the config directory:

cd /usr/local/elasticsearch/config

8. Edit elasticsearch.yml:

vi elasticsearch.yml

http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 0.0.0.0
cluster.name: laokou-elasticsearch      # pick your own
node.name: node-elasticsearch           # pick your own
http.port: 9200
cluster.initial_master_nodes: ["node-elasticsearch"]   # must match node.name
path.data: /home/koushenhai/elasticsearch/data   # data directory
path.logs: /home/koushenhai/elasticsearch/logs   # log directory

Save when done.

9. Edit jvm.options (defaults to 1g; skip if the server has enough memory):

vi jvm.options

-Xms512m
-Xmx512m

Save when done.

10. Set vm.max_map_count (do this if it is missing or below 262144). Otherwise ES fails with: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

vi /etc/sysctl.conf

vm.max_map_count = 655360

Save, then reload the kernel parameters:

sysctl -p

11. Set limits.conf (do this if not already set). Otherwise ES fails with: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

vi /etc/security/limits.conf

<username> soft nofile 65535
<username> hard nofile 65537

Save when done. <username> is the account that starts ES. After appending these lines, close the shell session and log in again for them to take effect.

12. Switch to the new account:

su <username>

13. Start elasticsearch:

cd /usr/local/elasticsearch/bin
./elasticsearch

14. Success check: open http://ip:9200 in Chrome.

15. To enable passwords, edit elasticsearch.yml again (skip if you don't want passwords):

cd /usr/local/elasticsearch/config
vi elasticsearch.yml

http.cors.enabled: true
http.cors.allow-origin: "*"
network.host: 0.0.0.0
cluster.name: laokou-elasticsearch
node.name: node-elasticsearch
http.port: 9200
cluster.initial_master_nodes: ["node-elasticsearch"]
path.data: /home/koushenhai/elasticsearch/data   # data directory
path.logs: /home/koushenhai/elasticsearch/logs   # log directory
# password setup
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

Save when done.

16. Initialize the passwords interactively (skip if you don't want passwords):

cd /usr/local/elasticsearch/bin
./elasticsearch-setup-passwords interactive

17. Connect elasticsearch-head to the node (if you set passwords, log in with the elastic account).

18. Install the plugins (ik, synonyms, pinyin):

su root
mkdir -p /usr/local/elasticsearch/plugins/analysis-synonym
mkdir -p /usr/local/elasticsearch/plugins/analysis-ik
mkdir -p /usr/local/elasticsearch/plugins/analysis-pinyin
yum install -y unzip zip
unzip -d /usr/local/elasticsearch/plugins/analysis-ik /opt/elasticsearch-analysis-ik-7.6.2.zip
unzip -d /usr/local/elasticsearch/plugins/analysis-pinyin /opt/elasticsearch-analysis-pinyin-7.6.2.zip
unzip -d /usr/local/elasticsearch/plugins/analysis-synonym /opt/elasticsearch-analysis-dynamic-synonym-7.6.2.zip

Then restart ES.

All done — leave any problems you hit in the comments. (A quick Java smoke test follows.)
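As a programmatic equivalent of opening http://ip:9200 in the browser, here is a minimal Java smoke test using the same RestHighLevelClient this series configures elsewhere. The host, port and credentials are placeholders — substitute your own, and add the basic-auth callback if you enabled passwords in step 15.

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.core.MainResponse;

public class EsSmokeTest {

    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("127.0.0.1", 9200, "http")))) {
            // ping returns true if the node answers at all
            System.out.println("reachable: " + client.ping(RequestOptions.DEFAULT));
            // the same banner the browser shows at :9200
            MainResponse info = client.info(RequestOptions.DEFAULT);
            System.out.println("cluster: " + info.getClusterName()
                    + ", version: " + info.getVersion().getNumber());
        }
    }
}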
Hi folks, Laokou here — straight to the point. (The original post showed three screenshots here: the Docker version, k8s running, and the dashboard up.)

1. Problem: the virtual machine cannot run Win10.
Fix: untick that virtualization option in the VM settings (shown in the original screenshot).

2. Problem: the Win10 guest cannot enable Hyper-V.
Fix: tick the nested-virtualization option for the VM (shown in the original screenshot).

3. Problem: Win10 cannot start Docker.
Fix: tick every Hyper-V option (Control Panel -> Programs -> Turn Windows features on or off -> Hyper-V).

4. Problem: k8s stays in "starting" forever.
Fix (for version 1.22.5):

4.1 Make sure image.properties is correct (tested — feel free to copy mine):

k8s.gcr.io/pause:3.5=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
k8s.gcr.io/kube-controller-manager:v1.22.5=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.5
k8s.gcr.io/kube-scheduler:v1.22.5=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.5
k8s.gcr.io/kube-proxy:v1.22.5=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.5
k8s.gcr.io/kube-apiserver:v1.22.5=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.5
k8s.gcr.io/etcd:3.5.0-0=registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4=registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.4
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1=registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1

4.2 Run load_images.ps1 (open PowerShell as administrator, cd into your k8s download directory, then run .\load_images.ps1).
Note: if .\load_images.ps1 refuses to run, execute Set-ExecutionPolicy RemoteSigned first, then run .\load_images.ps1 again.

4.3 Wait until every image has been pulled, then start k8s again.
I. Prerequisite

Docker installed.

II. Installation

1. Search for the mysql 5.7 image (check that it exists):

docker search mysql:5.7

2. Pull the mysql 5.7 image (long wait...):

docker pull mysql:5.7

3. List images:

docker images

4. Run the mysql 5.7 container:

docker run -d -p 3306:3306 --privileged=true -v /docker/mysql/conf/my.cnf:/etc/my.cnf -v /docker/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql mysql:5.7 --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci

Flags:
run                        start a container
-d                         run in the background
-p 3306:3306               map the host port to the container port
--privileged=true          otherwise external clients cannot log in as root
-v /docker/mysql/conf/my.cnf:/etc/my.cnf     mount the host's my.cnf into the container
-v /docker/mysql/data:/var/lib/mysql         persist the data directory, so removing and re-running the container does not lose data
-e MYSQL_ROOT_PASSWORD=123456                root password
--name mysql mysql:5.7                       start from image mysql:5.7 and name the container mysql
--character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci    default character set and collation

5. Check the container's status:

docker ps -a

A quick JDBC smoke test is sketched below.
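As promised, a minimal JDBC check against the container started above. The driver coordinates are the standard mysql-connector-java ones; the host, port and password mirror the docker run command, so adjust them to your own values. It prints the server character set and collation, which should echo the --character-set-server/--collation-server flags.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlSmokeTest {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:3306/mysql?useSSL=false&useUnicode=true&characterEncoding=UTF-8";
        try (Connection conn = DriverManager.getConnection(url, "root", "123456");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT @@character_set_server, @@collation_server")) {
            if (rs.next()) {
                // expected: utf8mb4 / utf8mb4_general_ci, matching the run flags
                System.out.println(rs.getString(1) + " / " + rs.getString(2));
            }
        }
    }
}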
Hi folks, Laokou here! (If this solves your problem, I'm delighted.) The fix is in the code below.

First the configuration beans:

@Configuration
@Data
public class RocketMqProducerProperties {

    @Value("${rocket.name-server}")
    private String nameServer;

    @Value("${rocket.producer.group}")
    private String group;
}

@Configuration
@Slf4j
public class DefaultRocketMqProducerConfig {

    @Autowired
    private RocketMqProducerProperties producerProperties;

    /**
     * Creates the plain-message producer.
     * @throws MQClientException
     */
    @Bean
    @Primary
    public DefaultMQProducer defaultMQProducer() throws MQClientException {
        DefaultMQProducer producer = new DefaultMQProducer(producerProperties.getGroup());
        producer.setNamesrvAddr(producerProperties.getNameServer());
        producer.setVipChannelEnabled(false);
        producer.setRetryTimesWhenSendAsyncFailed(10);
        producer.start();
        log.info("default producer created: {}, {}", producerProperties.getNameServer(), producerProperties.getGroup());
        return producer;
    }
}

And the caller. One caution: the original put return false inside a finally block, which always overrides the return true in the try block — the method could never report success. The return belongs after the catch blocks instead:

@Service
@Slf4j
public class RocketMqServiceImpl {

    @Autowired
    private DefaultMQProducer defaultMQProducer;

    public boolean sendMessage(Map<String, Object> dataMap) {
        Message message = new Message("laokou.queue", JSON.toJSONBytes(dataMap));
        try {
            SendResult sendResult = defaultMQProducer.send(message);
            if (SendStatus.SEND_OK.equals(sendResult.getSendStatus())) {
                log.info("send ok, sendResult: {}", JSON.toJSONString(sendResult));
                return true;
            }
        } catch (InterruptedException e) {
            // restore the interrupt flag before giving up
            Thread.currentThread().interrupt();
            log.error("send interrupted", e);
        } catch (MQClientException | RemotingException | MQBrokerException e) {
            log.error("send failed", e);
        }
        return false;
    }
}
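If you want to verify the broker before wiring the Spring beans above, here is a minimal standalone sketch of the same send flow. The name-server address and group name are illustrative (the real values come from rocket.name-server and rocket.producer.group in your configuration); the topic laokou.queue matches sendMessage() above.

import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class ProducerSmokeTest {

    public static void main(String[] args) throws Exception {
        // group and name-server address are placeholders
        DefaultMQProducer producer = new DefaultMQProducer("demo-producer-group");
        producer.setNamesrvAddr("127.0.0.1:9876");
        producer.start();
        try {
            Message message = new Message("laokou.queue", "hello rocketmq".getBytes("UTF-8"));
            SendResult result = producer.send(message);
            // SEND_OK means the broker accepted the message
            System.out.println(result.getSendStatus());
        } finally {
            producer.shutdown();
        }
    }
}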
I. Environment

SpringBoot 2.0.0
A QQ mailbox authorization code

II. Development

2.1 Dependencies (2021/11/13)

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>laokou-third-party-demo</groupId>
        <artifactId>laokou-third-party-demo</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <groupId>io.laokou</groupId>
    <artifactId>laokou-third-party-email-demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>laokou-third-party-email-demo</name>
    <description>Third party - e-mail</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-mail</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
</project>

2.2 Configuration (2021/11/13)

server:
  port: 12032
  servlet:
    context-path: /laokou-demo
spring:
  application:
    name: laokou-third-party-email-demo
  mail:
    default-encoding: UTF-8
    host: smtp.qq.com
    username: <your mailbox address>
    password: <your authorization code>
    properties:
      mail:
        smtp:
          auth: true
          starttls:
            enable: true
            required: true

2.3 Core code (2021/11/13) — the recipient is a method parameter here; the original left it as an untyped placeholder:

package io.laokou.email.utils;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.stereotype.Component;

import java.util.Date;

/**
 * Sends captcha mails.
 * @author Kou Shenhai 2413176044
 * @version 1.0
 */
@Component
public class EmailUtil {

    @Autowired
    private JavaMailSender sender;

    @Value("${spring.mail.username}")
    private String fromUser;

    public Integer send(String toUser) {
        try {
            // build the mail
            SimpleMailMessage message = new SimpleMailMessage();
            // sender
            message.setFrom(fromUser);
            // recipient
            message.setTo(toUser);
            // subject
            message.setSubject("验证码");
            // sent date
            message.setSentDate(new Date());
            // body
            message.setText(String.format("验证码:%s,您正在登录,若非本人操作,请勿泄露。", 233232));
            sender.send(message);
            return 1;
        } catch (Exception e) {
            e.printStackTrace();
            System.out.println("error:" + e.getMessage());
            return 0;
        }
    }
}

III. First run

Screenshot of the initial result (2021/11/13) — omitted here.

IV. Iterations

4.1 Iteration 1 (2021/11/13): replace the hard-coded captcha with a random six-digit one.

1) Add to pom.xml:

<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
</dependency>

2) Change the core line:

message.setText(String.format("验证码:%s,您正在登录,若非本人操作,请勿泄露。", RandomStringUtils.randomNumeric(6)));

Note: the repository code reflects this change.

4.2 Iteration 2 (2021/11/14): generate the message body with freemarker + mysql templates.

1) Schema:

DROP TABLE IF EXISTS `boot_freemarker_demo_template`;
CREATE TABLE `boot_freemarker_demo_template` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `template_code` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'template code, e.g. login_email_captcha for the login captcha mail',
  `template_content` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL COMMENT 'template content',
  `template_sign` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'template signature',
  `template_subject` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'template subject',
  `template_id` varchar(200) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL COMMENT 'template id',
  `template_type` varchar(100) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'template type: email or sms',
  PRIMARY KEY (`id`, `template_id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

2) Add dependencies (2021/11/15):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-freemarker</artifactId>
</dependency>
<dependency>
    <groupId>com.baomidou</groupId>
    <artifactId>mybatis-plus-boot-starter</artifactId>
    <version>3.0.5</version>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid-spring-boot-starter</artifactId>
    <version>1.1.17</version>
</dependency>

3) Configuration, 2021/11/15 (in properties format, although the original labelled it yml):

spring.jackson.time-zone=GMT+8
spring.jackson.date-format=yyyy-MM-dd HH:mm:ss
mybatis-plus.mapper-locations=classpath:/mybatis/mapper/*.xml
mybatis-plus.type-aliases-package=io.laokou.freemarker.entity
mybatis-plus.global-config.db-config.id-type=id_worker
mybatis-plus.global-config.db-config.field-strategy=not_null
mybatis-plus.global-config.db-config.column-like=true
mybatis-plus.global-config.banner=false
mybatis-plus.configuration.map-underscore-to-camel-case=true
mybatis-plus.configuration.cache-enabled=false
mybatis-plus.configuration.call-setters-on-nulls=true
spring.datasource.druid.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.druid.url=jdbc:mysql://127.0.0.1:3306/kcloud?serverTimezone=CTT&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
spring.datasource.druid.username=root
spring.datasource.druid.password=123456
spring.datasource.druid.initial-size=10
spring.datasource.druid.max-active=100
spring.datasource.druid.min-idle=10
spring.datasource.druid.max-wait=60000
spring.datasource.druid.pool-prepared-statements=true
spring.datasource.druid.max-pool-prepared-statement-per-connection-size=20
spring.datasource.druid.time-between-eviction-runs-millis=60000
spring.datasource.druid.min-evictable-idle-time-millis=300000
spring.datasource.druid.test-while-idle=true
spring.datasource.druid.test-on-borrow=false
spring.datasource.druid.test-on-return=false
spring.datasource.druid.stat-view-servlet.enabled=true
spring.datasource.druid.stat-view-servlet.url-pattern=/druid/**
spring.datasource.druid.stat-view-servlet.login-username=root
spring.datasource.druid.stat-view-servlet.login-password=123456
spring.datasource.druid.filter.stat.log-slow-sql=true
spring.datasource.druid.filter.stat.slow-sql-millis=1000
spring.datasource.druid.filter.stat.merge-sql=false
spring.datasource.druid.filter.wall.config.multi-statement-allow=true

4) Core code (2021/11/15):

public class TemplateUtil {

    public static String getTemplate(String templateContent, Map<String, Object> params) throws IOException, TemplateException {
        // freemarker configuration
        Configuration configuration = new Configuration(Configuration.getVersion());
        configuration.setLocale(Locale.CHINA);
        configuration.setDefaultEncoding("UTF-8");
        StringTemplateLoader loader = new StringTemplateLoader();
        // load the template from the given string
        loader.putTemplate("templates", templateContent);
        configuration.setTemplateLoader(loader);
        Template template = configuration.getTemplate("templates", "utf-8");
        StringWriter result = new StringWriter();
        template.process(params, result);
        return result.toString();
    }
}

Note: the repository code reflects this change. (A usage sketch follows this article.)

4.3 Iteration 3 (2021/11/17): add redis + mysql indexes.

4.4 Iteration 4 (2021/11/19): decouple the services — from a monolith to microservices, split as:
- laokou-third-party-email-demo: mail sending
- laokou-sso-captcha-demo: the caller
- laokou-freemarker-demo: template rendering

4.5 Iteration 5 (2021/11/19): extract shared code and clean up.
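Here is a small usage sketch of the TemplateUtil above: render a captcha template, then hand the result to the mail sender. The template text and the "captcha" parameter name are illustrative — in the real project the content would be read from boot_freemarker_demo_template instead of being hard-coded.

import java.util.HashMap;
import java.util.Map;

public class TemplateDemo {

    public static void main(String[] args) throws Exception {
        // in the real flow, templateContent comes from the template table
        String templateContent = "Your code is ${captcha}. It expires in 5 minutes.";
        Map<String, Object> params = new HashMap<>();
        params.put("captcha", "233232");
        String rendered = TemplateUtil.getTemplate(templateContent, params);
        System.out.println(rendered);   // -> Your code is 233232. It expires in 5 minutes.
    }
}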
Hey folks! I'm Laokou! Let's walk through two problems.

Remove Duplicates from Sorted Array I

class Solution {
    public int removeDuplicates(int[] nums) {
        int len = nums.length;
        if (len < 2) return len;
        int j = 0;
        for (int i = 0; i < len; i++)
            if (nums[j] != nums[i]) nums[++j] = nums[i];
        return j + 1;
    }
}

Remove Duplicates from Sorted Array II

Process analysis (Example 1); diagram omitted.

class Solution {
    public int removeDuplicates(int[] nums) {
        int len = nums.length, slow = 2;
        for (int fast = 2; fast < len; fast++)
            if (nums[fast] != nums[slow - 2]) nums[slow++] = nums[fast];
        return slow;
    }
}

The sliding window is an optimization over the brute-force approach, and its idea shows up in many places, for example in the gateway rate-limiting algorithms we know well, one of which is the sliding-window counter. The sliding-window counter has a weakness at window boundaries, which is why token-bucket limiting is widely used nowadays; interested readers can see my earlier post springcloud之gateway限流 (a token-bucket sketch follows below). When solving this kind of problem, the key is to find the loop invariant (as I understand it, the loop invariant here is an if condition). How you define the interval determines both the initialization logic and the logic inside the loop.

One last question: what do you all use to draw algorithm walkthrough diagrams? Leave a comment!
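Since the post contrasts sliding-window limiting with token buckets, here is a minimal token-bucket sketch. It only illustrates the core idea; it is not Spring Cloud Gateway's Redis-based RequestRateLimiter.

public class TokenBucket {

    private final long capacity;      // maximum tokens the bucket can hold
    private final double refillPerMs; // tokens added per millisecond
    private double tokens;            // current token count
    private long lastRefill;          // timestamp of the last refill

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefill = System.currentTimeMillis();
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Refill proportionally to elapsed time, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerMs);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;   // a request consumes one token
            return true;
        }
        return false;      // bucket empty: the request is rate-limited
    }
}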
Preface

Hey folks! I'm Laokou! As we all know, computers only understand binary. One binary digit (bit) has two states, 0 or 1, and one byte consists of 8 bits, giving 256 combinations: 00000000 ~ 11111111.

ASCII is a character encoding standardized in the United States: it fixed the mapping between English characters and bit patterns, and it is still in use today. ASCII defines 128 characters, covering the Arabic digits, the upper- and lowercase letters, and other characters (space, newline, ...):

48~57 are the ten digits '0'~'9'
65~90 are the 26 uppercase letters 'A'~'Z'
97~122 are the 26 lowercase letters 'a'~'z'

There are only 128 characters because the highest bit is fixed at 0. That is fine for English, but some non-English-speaking European countries found 128 characters insufficient, so they added accented characters, Greek letters and so on by flipping the highest bit from 0 to 1, gaining another 128 code points (10000000 ~ 11111111), enough to cover the basic Western European languages.

As more and more countries adopted computers, ASCII could not satisfy Chinese either, so the character set had to be extended. China's national standards bureau published the GB2312 standard (Simplified Chinese), later extended with rare and traditional characters into the GBK character set; GBK is backward compatible with GB2312. (Go China!) With every country running its own encoding system, text encoded one way and decoded another came out garbled. To enable cross-language, cross-platform text exchange and processing, the ISO standards organization proposed Unicode: its character set covers all the world's scripts and symbols and assigns every character a unique code point, ending the conflicts and mojibake between encoding systems once and for all.

Body

Let's look at a program together:

public class Ascii {
    public static void main(String[] args) {
        char c = 23495;
        System.out.println("Unicode编码:" + c);
        char chinese = '寇';
        int number = chinese;
        System.out.println("中文的Unicode码:" + number);
        char upper = 'A';
        number = upper;
        System.out.println("大写字母的ASCII码:" + number);
        char lower = 'a';
        number = lower;
        System.out.println("小写字母的ASCII码:" + number);
        char num = '0';
        number = num;
        System.out.println("数字的ASCII码:" + number);
        System.out.println("Unicode码为23495,强制类型转换为" + (char) 23495);
        System.out.println("ASCII码为65,强制类型转换为" + (char) 65);
        System.out.println("ASCII码为97,强制类型转换为" + (char) 97);
        System.out.println("ASCII码为48,强制类型转换为" + (char) 48);
        // char widens to int, so 'c' and 'a' are promoted to int before subtracting
        System.out.println("'c'减去'a':" + ('c' - 'a'));
    }
}

Output:

Unicode编码:寇
中文的Unicode码:23495
大写字母的ASCII码:65
小写字母的ASCII码:97
数字的ASCII码:48
Unicode码为23495,强制类型转换为寇
ASCII码为65,强制类型转换为A
ASCII码为97,强制类型转换为a
ASCII码为48,强制类型转换为0
'c'减去'a':2

Analysis: every character has a unique code value. Converting char to int needs no cast, while converting int to char requires an explicit cast.

byte range: -2^7 ~ 2^7-1, i.e. -128 ~ 127
char range: 0 ~ 65535
int range: -2^31 ~ 2^31-1, i.e. -2147483648 ~ 2147483647

Since int is wider than char, a char widens to int implicitly, but an int must be cast down to char.

At this point you may wonder: why can a char hold a Chinese character? Because char stores characters, English and Chinese alike, in a fixed 2 bytes as a Unicode (UTF-16) code unit, with a range of 0 ~ 65535. The char type in C actually corresponds to Java's byte. Reminder: a Java char is 16 bits, i.e. 2 bytes; don't confuse it with C's char.

Conclusion: for algorithm problems involving only letters, there is no need to allocate 256 slots, which wastes space; subtracting 'a' shrinks the array to 26.

Trick: duplicates can be counted with arr[i]++, i.e. arr[i] = arr[i] + 1, so arr[i] itself serves as the running counter. See the sketch below.
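To illustrate the two tricks above (the - 'a' offset and arr[i]++ counting), a minimal self-contained sketch; the class name and input string are made up for the example:

public class LetterCount {

    public static int[] count(String s) {
        int[] freq = new int[26];   // one slot per lowercase letter, not 256
        for (char c : s.toCharArray()) {
            freq[c - 'a']++;        // char widens to int, so no cast is needed
        }
        return freq;
    }

    public static void main(String[] args) {
        int[] freq = count("banana");
        System.out.println("a appears " + freq['a' - 'a'] + " times"); // prints 3
    }
}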
1. Prerequisites

1. Cloud server: CentOS 8
2. Microservice project: jar packages already built
3. JDK environment: jdk-linux-1.8 (free download)
4. Docker installed

2. Building the image

1. Create a Dockerfile (simply put, the text file from which the image is built):

# Must come first: the base image
FROM centos:8
# Author
MAINTAINER laokou-koushenhai
# Files in the build context are copied in; tar archives are auto-extracted to the target directory
ADD jdk-linux-1.8.tar.gz /laokou
# Environment variables
ENV JAVA_HOME /laokou/jdk1.8
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $PATH:$JAVA_HOME/bin

Note: the JDK archive and the Dockerfile must sit in the same directory.

2. Build the image:

docker build -t <image-name> .

Note: don't forget the trailing "." at the end of the command.

3. List images:

docker images

3. Deploying the microservices

1. Install docker-compose (think of it as: one command brings up several services at once)

1.1 Download docker-compose (free download)
1.2 Create the directory: mkdir -p /laokou/data
1.3 Upload docker-compose into that directory
1.4 Rename it to docker-compose and make it executable: chmod +x /laokou/data/docker-compose

2. Create docker-compose.yml:

version: '3'
services:
  laokou-gateway-service:
    # Container name
    container_name: laokou-gateway
    # Restart policy
    restart: always
    image: jdk:latest
    volumes:
      # Mount the local jar
      - /laokou/gateway.jar:/gateway.jar
      # Mount the log directory
      - ./log:/log
    ports:
      - "1234:1234"
    environment:
      # Time zone
      - TZ="Asia/Shanghai"
    # Wrap in a shell so the output redirection takes effect
    command: sh -c "java -jar /gateway.jar > /log/gateway.log"
  laokou-sso-service:
    # Container name
    container_name: laokou-sso
    # Restart policy
    restart: always
    image: jdk:latest
    volumes:
      # Mount the local jar
      - /laokou/sso.jar:/sso.jar
      # Mount the log directory
      - ./log:/log
    # Exposed port
    ports:
      - "1111:1111"
    environment:
      # Time zone
      - TZ="Asia/Shanghai"
    # Command run when the container starts, wrapped in a shell for the redirection
    entrypoint: sh -c "java -jar /sso.jar > /log/sso.log"

3. Start the project:

docker-compose up -d
1. Prerequisites

Load balancer: nginx.tar.gz
Cloud server: CentOS 8

2. Installation

1. Install the required packages:

yum -y install gcc pcre pcre-devel zlib zlib-devel openssl openssl-devel

What they are for:
1. gcc compiles C (nginx is written in C)
2. pcre / pcre-devel is a library that includes regular-expression support (nginx uses pcre to parse regular expressions, which is why it supports regex matching)
3. zlib / zlib-devel provides compression and decompression (nginx uses zlib to gzip HTTP responses)
4. openssl / openssl-devel encrypts data in transit to prevent leaks (which is what lets nginx install SSL certificates)

2. Unpack:

tar -zxvf nginx.tar.gz

3. Configure with an install prefix:

./configure --prefix=/laokou/nginx

4. Compile and install:

make && make install

5. Start:

cd /laokou/nginx/sbin
./nginx

6. Configuration (reverse proxy to the gateway):

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        add_header 'Access-Control-Allow-Origin' "$http_origin" always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;
        add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        location / {
            proxy_pass http://127.0.0.1:1234;
            proxy_connect_timeout 6000;
            proxy_read_timeout 6000;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

7. Reload:

./nginx -s reload
1. Prerequisites

Server: CentOS 8.3 (64-bit)

2. Installation

1. Check the Linux kernel (Docker requires at least CentOS 7 64-bit with kernel 3.10):

uname -a

2. Install docker (type yes, then wait...):

yum install docker

3. Start docker (if the install reported no errors it should just start)

Startup failed with the following error:

Failed to start docker.service: Unit docker.service not found.

Analysis: on CentOS 8, docker conflicts with the preinstalled Podman.

Fix:
1. Check whether Podman is installed:
rpm -q podman
2. Remove podman (type yes, then wait...):
dnf remove podman
3. Reinstall docker (run these in order):
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io

4. Start docker:

systemctl start docker

5. Other operations:

# Check the version
docker -v
# Check the status
systemctl status docker

6. Switch the image registry to the Aliyun mirror (Aliyun really delivers):

1. Create the directory:
mkdir -p /etc/docker
2. Write the Aliyun configuration into daemon.json:
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xirgurp7.mirror.aliyuncs.com"]
}
EOF
3. Reload the configuration:
systemctl daemon-reload
4. Restart docker:
systemctl restart docker

7. Start on boot:

systemctl enable docker
1. Prerequisites

Chrome extension: elasticsearch-head
Server: CentOS 7.5 (64-bit)
Container runtime: Docker (already installed)

2. Installation

1. Pull the image:

docker pull elasticsearch:7.6.2

2. Start the container.
Note: if your indexes use synonym, pinyin, or ik analyzers, give the JVM at least 512m, otherwise index creation will fail later:

docker run --restart=always -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms512m -Xmx512m" --name='docker-elasticsearch' -d elasticsearch:7.6.2

3. Edit the configuration:

docker ps -a                                 # list all containers and find docker-elasticsearch
docker exec -it docker-elasticsearch bash    # enter the container
cd config                                    # enter the config directory (ll lists its files and folders)
vi elasticsearch.yml                         # edit the config, then save and quit

http.cors.enabled: true                      # allow cross-origin requests
http.cors.allow-origin: "*"                  # allow any origin
network.host: 0.0.0.0                        # cloud server: allow access from any IP
cluster.name: docker-elasticsearch           # cluster name
http.port: 9200                              # port
http.cors.allow-headers: Authorization
xpack.security.enabled: true                 # enable x-pack security
xpack.security.transport.ssl.enabled: true

4. Set passwords:

cd bin
./elasticsearch-setup-passwords interactive   # set the passwords as appropriate for your setup

5. Exit the container:

exit

6. Restart the container:

docker restart docker-elasticsearch
docker ps -a    # check the container

7. Chrome extension: elasticsearch-head (screenshots omitted). A Java connection sketch follows below.
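Once passwords are set, a Java client has to authenticate. A minimal sketch with the 7.6 high-level REST client; the host, port, and password below are placeholders for your own values:

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsClientFactory {

    public static RestHighLevelClient create() {
        // Basic-auth credentials for the user configured via elasticsearch-setup-passwords
        BasicCredentialsProvider credentials = new BasicCredentialsProvider();
        credentials.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("elastic", "your-password"));
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("127.0.0.1", 9200, "http"))
                        .setHttpClientConfigCallback(builder ->
                                builder.setDefaultCredentialsProvider(credentials)));
    }
}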
Hey folks, I'm Laokou.

Problem: Apollo is deployed on a private cloud, so the local dev environment cannot connect to it, but you still need to develop and test against it. The error looks like:

reason: Could not complete get operation [Cause: connect timed out]

Solution: upgrade the client to version 0.11.0 or later, then pass

-Dapollo.configService=http://<config-service-public-ip>:<port>

to skip the meta service's service discovery and talk to the config service directly.
Hey folks, I'm Laokou.

When integrating Shiro into a Spring Boot project, the @Value annotation failed to read values from application.yml.

Fix: declare the LifecycleBeanPostProcessor bean method as static. LifecycleBeanPostProcessor manages the Shiro bean lifecycle by automatically calling Initializable.init() and Destroyable.destroy() on implementations of those interfaces. Because it is a BeanPostProcessor, Spring must instantiate it very early; if the @Bean method is not static, the enclosing configuration class itself is created before property placeholders are processed, so its @Value fields stay unresolved. Making the method static lets Spring create the post-processor without instantiating the configuration class first:

/**
 * LifecycleBeanPostProcessor calls Initializable.init() and Destroyable.destroy()
 * on implementations of those interfaces, managing the Shiro bean lifecycle.
 */
@Bean(name = "lifecycleBeanPostProcessor")
public static LifecycleBeanPostProcessor lifecycleBeanPostProcessor() {
    return new LifecycleBeanPostProcessor();
}
1. General commands

1. Ports and processes

# Listening ports
netstat -ntlp
# Check whether a process is running
ps -ef | grep <process-name>
# Check which process holds a port
netstat -tunlp | grep <port>
# List processes
ps -aux
# Kill a process
kill <pid>

2. CPU

top

3. Accounts

# Switch account
su <account>
# Add a user
useradd <username>
# Add a user to a group
useradd -g <group> <username>
# Set a password
passwd <username>
# Delete a user
userdel <username>
# Show the logged-in user
whoami

4. Files

# Change directory ownership recursively
chown <owner> -R <directory>
# Change directory permissions recursively
chmod 777 -R <directory>
# Create a directory (with parents)
mkdir -p <directory>
# Create a file
touch <file>
# Move a file
mv <file> <directory>
# Unpack a tar.gz
tar -zxvf <archive>
# Unpack a zip
unzip <archive>
# Print the absolute path of the current directory
pwd
# Delete a file or directory (recursive, force)
rm -rf <name>
# View a file
cat <file>
vi <file>
# Append content to a file
echo <content> >> <file>
# Print the last lines of a file
tail -n <lines> <file>
# Find a file
find / -name <file>

5. Network

service network restart

6. Firewall

# Firewall status
systemctl status firewalld.service
# Stop the firewall
systemctl stop firewalld.service
# Start the firewall
systemctl start firewalld.service
# Restart the firewall
systemctl restart firewalld.service

7. Services

# List unit files and their boot status
systemctl list-unit-files
# Disable a service at boot
systemctl disable <service>
# Enable a service at boot
systemctl enable <service>

8. Disk

df -h

2. Service commands

mysql
service mysql start    # start mysql
mysql -uroot -p        # log in to mysql
show master status\G;  # master node status
show slave status\G;   # slave node status

rabbitmq
rabbitmq-server start  # start a single-node rabbitmq

elasticsearch
./elasticsearch        # start elasticsearch

redis
./redis/src/redis-server ./redis/etc/redis.conf    # start redis

kafka
./bin/kafka-server-start.sh ./config/server.properties    # start kafka

zookeeper
./bin/zookeeper-server-start.sh ./config/zookeeper.properties    # start zookeeper

mongodb
bin/mongod -f config/mongodb.conf

docker
# Kernel version
uname -r
# Docker version
docker version
# Docker info
docker info
# Help
docker --help
# Build an image
docker build -t <image-name> .
# List images
docker images
# Run an image (container port should match the app port)
docker run -d -p <host-port>:<container-port> <image-name>
# Container logs
docker logs <container-id>
# Search for an image
docker search <image-name>
# Tag an image (e.g. for a registry host or domain)
docker tag <image>:<tag> <registry-host>/<image>:<tag>
# Delete an image
docker rmi <image-id>
# Enter a container
docker exec -it <container-id> /bin/bash
# Restart a container
docker restart <container-name>
# Stop a container
docker stop <container-id>
# Start a container
docker start <container-id>
# Delete a container
docker rm <container-id>
# List containers
docker ps -a
# Force-stop a container
docker kill <container-id>
# Processes running inside a container
docker top <container-id>
# Inspect a container's details
docker inspect <container-id>
# List all volumes
docker volume ls
# Inspect a volume
docker volume inspect <volume-name>
# Bring services up with docker-compose
docker-compose up -d
1. Project overview

An online ticket-booking system, in the vein of Fliggy or Taopiaopiao.

2. Tech stack

Framework: springboot + shiro + mysql + redis + rabbitmq + elasticsearch + websocket
Templates: thymeleaf

3. Core features and their difficulties (feature screenshots omitted)

- What it does: monitors server resource usage. Difficulty: none.
- What it does: pushes announcements to the front end, e.g. "cinema xxx is running a discount". Difficulty: broadcast push over websocket.
- What it does: monitors database usage. Difficulty: integrating Alibaba Druid, plus the yml configuration.
- What it does: messaging between internal users; customer service can be built on top of it. Difficulty: broadcast push over websocket.
- What it does: finds nearby screening rooms. Difficulty: calling the Baidu Maps API.
- What it does: finds nearby cinemas. Difficulty: calling the Baidu Maps API.
- What it does: lists users and counts who is currently online. Difficulty: one-to-one push over websocket.
- What it does: manages movie resources. Difficulty: integrating Qiniu Cloud OSS.
- What it does: queries the order list. Difficulty: none.
- What it does: retrieves the login log. Difficulty: logging via AspectJ AOP.
- What it does: retrieves the operation log. Difficulty: operation logging via AspectJ AOP.
- What it does: queries my income. Difficulty: none (payment requires Alipay; I used the Alipay sandbox).
- What it does: watching movies. Difficulty: none.
- What it does: queries my orders. Difficulty: none.
- What it does: shows order and user statistics. Difficulty: integrating ECharts.
- What it does: payment. Difficulty: integrating the Alipay sandbox.
- What it does: online seat selection. Difficulty: one-to-one push over websocket.
Hey folks! I'm Laokou!

package sort;

public class SortTest {
    public static void main(String[] args) throws Exception {
        int[] d = {52, 39, 67, 95, 70, 8, 25, 52};
        SeqList seqList = new SeqList(20);
        for (int i = 0; i < d.length; i++) {
            RecordNode r = new RecordNode(d[i]);
            seqList.insert(seqList.length(), r);
        }
        seqList.display();
        /* seqList.insertSort(); */
        seqList.insert(0, new RecordNode(0));   // reserve r[0] as the sentinel slot
        seqList.insertSortWithGuard();
        seqList.display(9);                     // mode 9: skip the sentinel when printing
    }
}

package sort;

/*
 * Record node of the sequential list
 */
public class RecordNode {
    public Comparable key;   // sort key
    public Object element;   // data element

    public RecordNode() {    // no-arg constructor
    }

    public RecordNode(Comparable key) {
        this.key = key;
    }

    public RecordNode(Comparable key, Object element) {
        this.key = key;
        this.element = element;
    }

    /*
    public String toString() {   // toString() override
        return "[" + key + "," + element + "]";
    }
    */
}

package sort;

/*
 * My data structures series >>> insertion sort
 */
public class SeqList {
    public RecordNode[] r;   // node array
    public int curLen;       // current length of the list

    public SeqList() {
    }

    public SeqList(int maxSize) {
        this.r = new RecordNode[maxSize];
        this.curLen = 0;
    }

    public int length() {
        return curLen;
    }

    /*
     * Insert
     */
    public void insert(int i, RecordNode x) throws Exception {
        if (this.curLen == this.r.length) {
            throw new Exception("顺序表已满");
        }
        if (i < 0 || i > this.curLen) {
            throw new Exception("插入位置不合理");
        }
        for (int j = this.curLen; j > i; j--) {
            this.r[j] = this.r[j - 1];
        }
        this.r[i] = x;
        this.curLen++;
    }

    public void display() {
        for (int i = 0; i < this.curLen; i++) {
            System.out.print(" " + r[i].key.toString());
        }
        System.out.println();
    }

    public void display(int sortMode) {
        int i;
        if (sortMode == 9) i = 1;   // mode 9: skip the sentinel at r[0]
        else i = 0;
        for (; i < this.curLen; i++) {
            System.out.print(" " + r[i].key.toString());
        }
        System.out.println();
    }

    public void insertSort() {
        RecordNode temp;
        int i, j;
        for (i = 1; i < curLen; i++) {
            temp = r[i];
            // shift right while temp < r[j]
            for (j = i - 1; j >= 0 && temp.key.compareTo(r[j].key) < 0; j--) {
                r[j + 1] = r[j];
            }
            r[j + 1] = temp;
        }
    }

    public void insertSortWithGuard() {
        int i, j;
        for (i = 2; i < this.curLen; i++) {    // n-1 passes
            r[0] = r[i];                       // stash record i in r[0]; r[0] doubles as the guard
            for (j = i - 1; r[0].key.compareTo(r[j].key) < 0; j--) {
                r[j + 1] = r[j];               // shift larger elements to the right
            }
            r[j + 1] = r[0];                   // drop record i into slot j+1
            // System.out.println("pass " + i + ": ");
            // display();
        }
    }
}

Output screenshot omitted.
1. Features

1. Batch delete: collect the ids of the selected books and delete them in the servlet.
2. Select all: from the clicked node, walk to its parent or sibling nodes.

2. Core code

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>网上书城购物车</title>
<script src="${pageContext.request.contextPath }/client/js/jquery-3.3.1.min.js"></script>
</head>
<body>
<script type="text/javascript">
    // Increase the quantity of one book
    function jia(count, total, id, file) {
        count = parseInt(count);
        total = parseInt(total);
        if (count == total + 1) {
            alert("已达到图书最大购买量");
            count = total;
        } else if (count > 100) {
            return;
        }
        location.href = file + "/removeBookServlet?book_id=" + id + "&count=" + count;
    }
    // Remove one book
    function remove(id, file) {
        var mes = "确认删除图书?";
        var remove = confirm(mes);
        if (remove) {
            location.href = file + "/removeBookServlet?book_id=" + id + "&count=0";
        }
    }
    // Decrease the quantity of one book
    function jian(count, total, id, file) {
        count = parseInt(count);
        total = parseInt(total);
        if (count == 0) {
            var flag = window.confirm("确认删除图书?");
            if (flag) {
                location.href = file + "/removeBookServlet?book_id=" + id + "&count=0";
                return;
            } else {
                count = 1;
            }
        }
        location.href = file + "/removeBookServlet?book_id=" + id + "&count=" + count;
    }
</script>
<script type="text/javascript">
$(function(){
    // Batch delete: gather the ids of every checked row
    $('#j_removeproducts').click(function(){
        var arr = new Array();
        var count = 0;
        var flag = window.confirm("确定删除?");
        if (flag) {
            $('[name=items]:checkbox').each(function(){
                if ($(this).prop('checked')) {
                    arr[count++] = $(this).siblings('.myid2').val();
                }
            });
            location.href = "/OnlineBookStore/batchDeleteServlet?id=" + arr;
        }
    });
    // Select all (top checkbox): sync the bottom checkbox and every row, then recompute totals
    $("#AllChecked").click(function(){
        var sum = 0;
        var num = 0;
        $('#AllCheck').prop('checked', this.checked);
        $('[name=items]:checkbox').prop('checked', this.checked);
        $('[name=items]:checkbox').each(function(){
            if ($(this).prop('checked')) {
                $('#checkout_btn').attr({"style":"background-color: #ff2832;cursor: pointer;"});
                num += parseInt($(this).siblings('input').val());
                sum += parseInt($(this).val());
            } else {
                $('#checkout_btn').removeAttr('style');
            }
        });
        $('#total').text("¥" + sum + ".00");
        $('#payAmount').text("¥" + sum + ".00");
        $('font').text(num);
    });
    // Select all (bottom checkbox): same logic, mirrored
    $("#AllCheck").click(function(){
        var sum = 0;
        var num = 0;
        $('#AllChecked').prop('checked', this.checked);
        $('[name=items]:checkbox').prop('checked', this.checked);
        $('[name=items]:checkbox').each(function(){
            if ($(this).prop('checked')) {
                $('#checkout_btn').attr({"style":"background-color: #ff2832;cursor: pointer;"});
                num += parseInt($(this).siblings('input').val());
                sum += parseInt($(this).val());
            } else {
                $('#checkout_btn').removeAttr('style');
            }
        });
        $('#total').text("¥" + sum + ".00");
        $('#payAmount').text("¥" + sum + ".00");
        $('font').text(num);
    });
    // Unticking any row clears both select-all checkboxes
    $('[name=items]:checkbox').click(function(){
        $('[name=items]:checkbox').each(function(){
            if (!$(this).prop('checked')) {
                $('#AllChecked').prop('checked', false);
                $('#AllCheck').prop('checked', false);
            }
        });
    });
    // Recompute totals whenever a row is (un)ticked
    $('[name=items]:checkbox').click(function(){
        var sum = 0;
        var num = 0;
        $('[name=items]:checkbox').each(function(){
            if ($(this).prop('checked')) {
                num += parseInt($(this).siblings('input').val());
                sum += parseInt($(this).val());
                $('#total').text("¥" + sum + ".00");
                $('#payAmount').text("¥" + sum + ".00");
                $('font').text(num);
                $('#checkout_btn').attr({"style":"background-color: #ff2832;cursor: pointer;"});
            } else {
                $('#total').text("¥" + sum + ".00");
                $('#payAmount').text("¥" + sum + ".00");
                $('font').text(num);
            }
        });
        if (num == 0) {
            $('#checkout_btn').removeAttr('style');
        }
    });
    // Checkout: collect the ids of the checked rows
    $('#checkout_btn').click(function(){
        var num = parseInt($('font').text());
        var arr = new Array();
        var date = new Array();
        var count = 0;
        if (num > 0) {
            $('[name=items]:checkbox').each(function(){
                if ($(this).prop('checked')) {
                    arr[count++] = $(this).siblings('.myid').val();
                }
            });
            location.href = "/OnlineBookStore/settlementServlet?id=" + arr + "&date=" + date;
        }
    });
});
</script>
<div>
    <jsp:include page="head.jsp"/>
    <jsp:include page="search.jsp"/>
    <div class="navg"><img src="${pageContext.request.contextPath }/client/images/book.jpg" alt="" height="100px" width="250px"></div>
</div>
<div class="logo_line">
    <div class="w960">
        <div class="shopping_procedure"><span class="current">我的购物车</span><span>填写订单</span><span>完成订单</span></div>
    </div>
</div>
<c:choose>
    <c:when test="${sessionScope.user==null }">
        <div id="LoginFalse" class="login_tip">
            <span class="icon"></span> 您还没有登录!登录后购物车的商品将保存到您的帐号中
            <a href="${pageContext.request.contextPath }/client/login.jsp?action=1" class="btn">立即登录</a>
        </div>
    </c:when>
    <c:otherwise>
        <div id="LoginFalse" class="login_tip">
            <span class="icon"></span> 亲,您已登录,赶快去购物吧!
        </div>
    </c:otherwise>
</c:choose>
<div class="w960" id="cart">
    <ul class="shopping_title" id="j_carttitle">
        <li class="f1"><input type="checkbox" id="AllChecked"/>&nbsp;<label for="AllChecked">全选</label></li>
        <li class="f2">序号</li>
        <li class="f3">图书名称</li>
        <li class="f4">单价(元)</li>
        <li class="f5">数量</li>
        <li class="f6">金额(元)</li>
        <li class="f7">操作</li>
    </ul>
    <div class="fn-shops" id="J_cartContent">
        <div class="fn-shop" data-shopids="0">
            <table width="100%" cellspacing="0" cellpadding="0" border="0">
                <c:forEach items="${sessionScope.cart}" var="entry" varStatus="vs">
                    <tr>
                        <td style="width: 150px;padding-left: 63px;"><input type="checkbox" name="items" value="${entry.key.price*entry.value}" style="margin-top: 3px;"/>
                            <input type="hidden" value="${entry.value }"/>
                            <input type="hidden" class="myid2" value="${entry.key.book_id}"/>
                            <input type="hidden" class="myid" value="${entry.key.book_id}"/>
                        </td>
                        <td style="width: 150px;float: left;margin-left: 5px;">${vs.count }</td>
                        <td style="width: 150px;float: left;margin-left: -25px;"><a class="book_name">${entry.key.book_name}</a></td>
                        <td style="width: 150px;float: left;margin-left: 1px;">¥${entry.key.price }0</td>
                        <td style="width: 150px;float: left;margin-left: -5px;">
                            <img onclick="jian('${entry.value-1}','${sessionScope.num }','${entry.key.book_id }','${pageContext.request.contextPath }')" alt="-" src="${pageContext.request.contextPath }/client/images/jians.jpg" style="margin-top: 3px;cursor: pointer;">
                            <input type="text" class="num" value="${entry.value }" readonly="readonly"/>
                            <img onclick="jia('${entry.value+1}','${sessionScope.num }','${entry.key.book_id }','${pageContext.request.contextPath }')" alt="+" src="${pageContext.request.contextPath }/client/images/jias.jpg" style="margin-left: -2px;margin-top: 3px;cursor: pointer;">
                        </td>
                        <td style="width: 150px;float: left;margin-left: 6px;"><span style=" color:#ff2832;">¥${entry.key.price*entry.value}0</span></td>
                        <td style="width: 70px;float: left;margin-left: 18px;"><a class="cao" href="javascript:void(0);" onclick="remove('${entry.key.book_id }','${pageContext.request.contextPath }')">删除</a></td>
                    </tr>
                </c:forEach>
                <tr class="total">
                    <td><div class="row_img">店铺合计</div></td>
                    <td class="row4"><span class="red big ooline alignright" id="total" style="color:#ff2832;font-size:16px;margin-left: 579px">¥0</span></td>
                </tr>
            </table>
        </div>
    </div>
    <div class="shop_total">
        <div class="shopping_total_right">
            <a class="total_btn fn-checkout unable" href="javascript:void(0);" id="checkout_btn" title="结算">结&nbsp;&nbsp;算</a>
            <div class="subtotal">
                <p><span class="cartsum">总计(不含运费):</span><span id="payAmount" class="price">¥0</span></p>
                <p><span class="cartsum">已节省:</span><span id="totalFavor">¥0.00</span></p>
            </div>
        </div>
        <div class="shopping_total_left" id="J_leftBar">
            <input type="checkbox" id="AllCheck"/>&nbsp;<label for="AllCheck">全选</label>
            <a id="j_removeproducts" href="javascript:void(0);" class="fn-batch-remove" title="批量删除按钮">批量删除</a>
            <span>已选择<font color="red">0</font>件商品</span>
        </div>
    </div>
</div>
</body>
</html>

package cn.bookstore.servlet;

import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import cn.bookstore.domain.Book;

/**
 * Batch delete
 */
@WebServlet("/batchDeleteServlet")
public class batchDeleteServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        String id = request.getParameter("id");
        String[] myid = id.split(",");
        Map<Book, Integer> cart = (Map<Book, Integer>) request.getSession().getAttribute("cart");
        /**
         * Map implementations are not synchronized: if several threads access a Map
         * concurrently and at least one of them modifies it, access must be
         * synchronized externally. An Iterator also fails fast: while iterating,
         * the underlying collection must not be modified except through the
         * iterator itself. That is why removal below goes through it.remove()
         * rather than cart.remove().
         */
        Iterator<Book> it = cart.keySet().iterator();
        if (cart.size() == myid.length) {
            cart.clear();
        } else {
            while (it.hasNext()) {
                Book book = it.next();
                for (String bookid : myid) {
                    if (bookid.equals(book.getBook_id())) {
                        System.out.println("移出书籍:" + book.getBook_name());
                        it.remove();
                    }
                }
            }
        }
        request.getRequestDispatcher("/client/shoppingcart.jsp").forward(request, response);
        return;
    }
}

Screenshot: batch delete (image omitted).
1. Prerequisites

Hadoop 2.7.6 installed on CentOS 7
ZooKeeper 3.4.6 installed on CentOS 7

2. Installation

Unpack (under /opt):

tar -zxvf hbase-2.1.0-bin.tar.gz

Move the files:

mv hbase-2.1.0 /usr/local/hbase

Create the data directory:

mkdir -p /data/hbase/tmp
chmod 777 -R /data/hbase/tmp

Configure the environment:

vi /etc/profile
HBASE_HOME=/usr/local/hbase
PATH=$PATH:$HBASE_HOME/bin
export HBASE_HOME PATH

Apply the configuration:

source /etc/profile

Edit hbase/conf/hbase-env.sh:

vi /usr/local/hbase/conf/hbase-env.sh
export JAVA_HOME=<your JDK path>

Find the dataDir defined in zookeeper/conf/zoo.cfg and empty that directory.

Configure conf/hbase-site.xml:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>file:/usr/local/zookeeper</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
</property>
<property>
    <name>hbase.tmp.dir</name>
    <value>/data/hbase/tmp</value>
</property>
<property>
    <name>hbase.master.port</name>
    <value>16000</value>
</property>
<property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
</property>
<property>
    <name>hbase.regionserver.port</name>
    <value>16201</value>
</property>
<property>
    <name>hbase.regionserver.info.port</name>
    <value>16301</value>
</property>

Copy hadoop's core-site.xml and hdfs-site.xml into the conf directory:

cd /usr/local/hadoop/etc/hadoop
cp core-site.xml hdfs-site.xml /usr/local/hbase/conf

Start hbase:

cd /usr/local/hbase/bin
./start-hbase.sh

Open http://localhost:16010 and you're done! A quick Java connectivity check follows below.
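A minimal reachability check with the HBase Java client; it assumes hbase-client is on the classpath, and the quorum address and table name below are placeholders for your own values:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class HBaseSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "localhost");          // placeholder host
        conf.set("hbase.zookeeper.property.clientPort", "2181");  // default ZK port
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            // If this prints without throwing, HBase is reachable through ZooKeeper
            System.out.println("test table exists: "
                    + admin.tableExists(TableName.valueOf("test")));
        }
    }
}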
Problem: beans cannot be injected in a test.

Analysis:

1. Look at the @SpringBootTest source:

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Inherited
@BootstrapWith(SpringBootTestContextBootstrapper.class)
public @interface SpringBootTest {
    /**
     * If no explicit classes are given, the test searches the test class's package
     * and subpackages for an application class; if none is found, nothing gets
     * injected. If classes is set, the scanned components are loaded into the
     * test context.
     */
    Class<?>[] classes() default {};
}

2. Add the @RunWith annotation. RunWith selects the test runner; @RunWith(SpringJUnit4ClassRunner.class) runs the Spring test environment. Here we use @RunWith(SpringRunner.class).

You might ask: shouldn't this be SpringJUnit4ClassRunner? Why SpringRunner?

// SpringRunner works fine: it is just an alias for SpringJUnit4ClassRunner
public final class SpringRunner extends SpringJUnit4ClassRunner {
    public SpringRunner(Class<?> clazz) throws InitializationError {
        super(clazz);
    }
}

Correct version:

@SpringBootTest(classes = {QueryApplication.class})
@RunWith(SpringRunner.class)
@Slf4j
public class QueryApplicationTest {

    @Autowired
    private DataSource dataSource;

    @Test
    public void test() {
        log.info(dataSource.toString());
    }
}

Note: if the test lives in the same package (or a subpackage) as the application class, specifying classes on @SpringBootTest is unnecessary.
Hey folks, I'm Laokou.

According to the documentation, since Elasticsearch 7.x, when a query matches more than 10,000 documents the reported total defaults to a capped value of 10000.

In Kibana, get the real total by adding the track_total_hits parameter:

{
  "query": {
    "match_all": {}
  },
  "track_total_hits": true
}

In a Spring Boot project, add this to the query:

// report the real total
searchSourceBuilder.trackTotalHits(true);

A fuller example follows below.
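For context, a minimal sketch of where trackTotalHits sits in a complete 7.x high-level-client query; the index name and client wiring are placeholders:

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class RealTotalQuery {

    public static long realTotal(RestHighLevelClient client) throws Exception {
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(QueryBuilders.matchAllQuery());
        searchSourceBuilder.trackTotalHits(true); // exact count instead of the 10000 cap
        SearchRequest request = new SearchRequest("laokou-index"); // placeholder index name
        request.source(searchSourceBuilder);
        SearchResponse response = client.search(request, RequestOptions.DEFAULT);
        return response.getHits().getTotalHits().value; // the real total
    }
}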
1. Installation

1. Unpack:

tar -zxvf jdk-8u161-linux-x64.tar.gz

2. Move it to /usr/local/jdk:

mv jdk1.8.0_161 /usr/local/jdk

3. Edit the environment variables in /etc/profile:

vi /etc/profile
# append at the end of the file
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

4. Apply the changes:

source /etc/profile

5. Check the version:

java -version

Done!
1. Overview

Logstash is a data-synchronization tool, generally used in the ELK stack (Elasticsearch + Logstash + Kibana) to solve data-sync problems: it ships data from sources such as MySQL, log files, or Redis into ES, where it can then be searched.

2. Basics

1. Configuration file layout:

# Input plugins: the data sources to sync from, e.g. mysql
input {
}
# Filter plugins: filtering and formatting the data; the filter block is optional
filter {
}
# Output plugins: where to send the data, e.g. es
output {
}

2. Example: syncing logs to es (note that stdout is its own output plugin, a sibling of elasticsearch rather than nested inside it):

input {
  tcp {
    port => 5044
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "laokou-%{+YYYY.MM.dd}"
  }
  # stdout prints to the console; stdin reads from it
  stdout {
    codec => rubydebug
  }
}

3. Processing flow (a logstash pipeline is just that: a pipe, with data flowing in one end and out the other):

datasources (redis/mysql/log files) > inputs > filters > outputs > elasticsearch

inputs: collect the data (common sources: redis/kafka/rabbitmq/mysql/file, plus filebeat, a lightweight file shipper)
filters: format, filter, and lightly process the collected data (grok parses text; drop discards events)
outputs: deliver the data to its destination (e.g. elasticsearch or file)
codec: serializes/deserializes the data; mainly the json and plain-text codecs
Hey folks! I'm Laokou!

I cheerfully started the project, only to be greeted by:

java.lang.IllegalArgumentException: jdbcUrl is required with driverClassName.

Solution

With lower versions:

spring:
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://X.X.X.X:3306/kcloud?serverTimezone=CTT&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
    username: XXX
    password: XXX

With higher versions:

spring:
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    jdbc-url: jdbc:mysql://X.X.X.X:3306/kcloud?serverTimezone=CTT&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
    username: XXX
    password: XXX

A sketch of why the key changes follows below.
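Why jdbc-url: when a DataSource is built by hand (the usual multi-datasource setup), @ConfigurationProperties binds the keys straight onto the pool object, and HikariCP's setter is setJdbcUrl, hence the relaxed-binding key jdbc-url. A minimal sketch, assuming the spring.datasource prefix shown above:

import javax.sql.DataSource;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        // With Hikari on the classpath, binding maps jdbc-url -> setJdbcUrl(...)
        return DataSourceBuilder.create().build();
    }
}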