3) Mapping
(1) Field types
(2) Mapping
Mapping
Mapping defines a document and how the fields it contains are stored and indexed. For example, mappings are used to define:
which string fields should be treated as full-text fields;
which fields contain numbers, dates, or geolocations;
whether every field in the document can be indexed (the _all setting);
the format of dates;
custom rules for dynamically added fields.
Viewing mapping information
GET bank/_mapping
{ "bank" : { "mappings" : { "properties" : { "account_number" : { "type" : "long" }, "address" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "age" : { "type" : "long" }, "balance" : { "type" : "long" }, "city" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "email" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "employer" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "firstname" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "gender" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "lastname" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } }, "state" : { "type" : "text", "fields" : { "keyword" : { "type" : "keyword", "ignore_above" : 256 } } } } } } }
- Modifying mapping information
(3) Changes in newer versions
Elasticsearch 7 removes the concept of types
In a relational database, two tables are independent: even if they contain columns with the same name, that does not affect how they are used. ES is different. Elasticsearch is a search engine built on Lucene, and fields with the same name under different types in ES are ultimately handled the same way in Lucene.
Two user_name fields under two different types are in fact treated as the same field within one ES index, so you would have to define identical field mappings for them in both types. Otherwise, identically named fields in different types conflict during processing and degrade Lucene's efficiency.
Removing types improves ES's data-processing efficiency.
In Elasticsearch 7.x, the type parameter in URLs is optional; for example, indexing a document no longer requires a document type.
In Elasticsearch 8.x, the type parameter in URLs is no longer supported.
Solution:
Migrate indices from multiple types to a single type, with a separate index per document type.
Migrate the data of each type under an existing index to its designated index; see Data Migration below.
Elasticsearch 7.x
Specifying types in requests is deprecated. For instance, indexing a document no longer requires a document type. The new index APIs are PUT {index}/_doc/{id} in case of explicit ids and POST {index}/_doc for auto-generated ids. Note that in 7.0, _doc is a permanent part of the path, and represents the endpoint name rather than the document type.
The include_type_name parameter in the index creation, index template, and mapping APIs will default to false. Setting the parameter at all will result in a deprecation warning.
The _default_ mapping type is removed.
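For instance, the 7.x-style indexing calls look like this (a small sketch; my_index and the field user_name are placeholders, not from the original notes):

PUT my_index/_doc/1
{
  "user_name": "tom"
}

POST my_index/_doc
{
  "user_name": "jerry"
}

In both calls _doc is the fixed endpoint name, not a document type.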
Elasticsearch 8.x
Specifying types in requests is no longer supported.
The include_type_name parameter is removed.
Creating a mapping
Create an index and specify its mapping:
PUT /my_index
{
  "mappings": {
    "properties": {
      "age": { "type": "integer" },
      "email": { "type": "keyword" },
      "name": { "type": "text" }
    }
  }
}
Output:
{ "acknowledged" : true, "shards_acknowledged" : true, "index" : "my_index" }
Add a new field mapping:
PUT /my_index/_mapping
{
  "properties": {
    "employee-id": {
      "type": "keyword",
      "index": false
    }
  }
}
Output:
{ "acknowledged" : true, "shards_acknowledged" : true, "index" : "my_index" }
View the mapping:
GET /my_index
Output:
{ "my_index" : { "aliases" : { }, "mappings" : { "properties" : { "age" : { "type" : "integer" }, "email" : { "type" : "keyword" }, "employee-id" : { "type" : "keyword", "index" : false }, "name" : { "type" : "text" } } }, "settings" : { "index" : { "creation_date" : "1588410780774", "number_of_shards" : "1", "number_of_replicas" : "1", "uuid" : "ua0lXhtkQCOmn7Kh3iUu0w", "version" : { "created" : "7060299" }, "provided_name" : "my_index" } } } }
Here "index": false means the new field is not indexed and cannot be searched; it is only a redundant (stored) field.
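A quick way to see what "index": false means in practice (a sketch; the query value is a made-up placeholder): trying to search on the unindexed field is rejected.

GET /my_index/_search
{
  "query": {
    "match": { "employee-id": "f-1234" }
  }
}

ES rejects this with an error along the lines of "Cannot search on field [employee-id] since it is not indexed".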
Updating a mapping
Existing field mappings cannot be updated. To change one, you must create a new index and migrate the data.
Data migration
First create new_twitter with the correct mapping, then migrate the data like this:
POST _reindex
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter"
  }
}
To migrate the data under a type of an old index:
POST _reindex
{
  "source": {
    "index": "twitter",
    "type": "twitter"
  },
  "dest": {
    "index": "new_twitter"
  }
}
For more details, see: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/docs-reindex.html
For example, the bank index imported earlier still stores its documents under a type (account):
GET /bank/_search
{ "took" : 0, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : { "value" : 1000, "relation" : "eq" }, "max_score" : 1.0, "hits" : [ { "_index" : "bank", "_type" : "account",//类型为account "_id" : "1", "_score" : 1.0, "_source" : { "account_number" : 1, "balance" : 39225, "firstname" : "Amber", "lastname" : "Duke", "age" : 32, "gender" : "M", "address" : "880 Holmes Lane", "employer" : "Pyrami", "email" : "amberduke@pyrami.com", "city" : "Brogan", "state" : "IL" } }, ...
Suppose we want to change age to integer. Since an existing mapping cannot be updated, create a new index newbank with the desired mapping:
PUT /newbank
{
  "mappings": {
    "properties": {
      "account_number": { "type": "long" },
      "address": { "type": "text" },
      "age": { "type": "integer" },
      "balance": { "type": "long" },
      "city": { "type": "keyword" },
      "email": { "type": "keyword" },
      "employer": { "type": "keyword" },
      "firstname": { "type": "text" },
      "gender": { "type": "keyword" },
      "lastname": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      },
      "state": { "type": "keyword" }
    }
  }
}
View the mapping of "newbank":
GET /newbank/_mapping
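The relevant part of the response should simply echo back the mapping defined above (abridged sketch):

{
  "newbank" : {
    "mappings" : {
      "properties" : {
        "account_number" : { "type" : "long" },
        "address" : { "type" : "text" },
        "age" : { "type" : "integer" },
        ...
      }
    }
  }
}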
You can see that the mapping type of age is now integer.
Migrate the data from bank into newbank:
POST _reindex
{
  "source": {
    "index": "bank",
    "type": "account"
  },
  "dest": {
    "index": "newbank"
  }
}
Output:
#! Deprecation: [types removal] Specifying types in reindex requests is deprecated.
{
  "took" : 768,
  "timed_out" : false,
  "total" : 1000,
  "updated" : 0,
  "created" : 1000,
  "deleted" : 0,
  "batches" : 1,
  "version_conflicts" : 0,
  "noops" : 0,
  "retries" : { "bulk" : 0, "search" : 0 },
  "throttled_millis" : 0,
  "requests_per_second" : -1.0,
  "throttled_until_millis" : 0,
  "failures" : [ ]
}
View the data in newbank:
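For example (output omitted):

GET /newbank/_search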
4) Tokenization
A tokenizer receives a stream of characters, splits it into individual tokens (usually individual words), and outputs a stream of tokens.
For example, the whitespace tokenizer splits text whenever it encounters whitespace; it would split the text "Quick brown fox!" into [Quick, brown, fox!].
The tokenizer is also responsible for recording the order or position of each term (used for phrase and word-proximity queries), as well as the start and end character offsets of the original word each term represents (used for highlighting search matches).
Elasticsearch ships with many built-in tokenizers that can be used to build custom analyzers.
On analyzers: https://www.elastic.co/guide/en/elasticsearch/reference/7.6/analysis.html
POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
Result:
{ "tokens" : [ { "token" : "the", "start_offset" : 0, "end_offset" : 3, "type" : "<ALPHANUM>", "position" : 0 }, { "token" : "2", "start_offset" : 4, "end_offset" : 5, "type" : "<NUM>", "position" : 1 }, { "token" : "quick", "start_offset" : 6, "end_offset" : 11, "type" : "<ALPHANUM>", "position" : 2 }, { "token" : "brown", "start_offset" : 12, "end_offset" : 17, "type" : "<ALPHANUM>", "position" : 3 }, { "token" : "foxes", "start_offset" : 18, "end_offset" : 23, "type" : "<ALPHANUM>", "position" : 4 }, { "token" : "jumped", "start_offset" : 24, "end_offset" : 30, "type" : "<ALPHANUM>", "position" : 5 }, { "token" : "over", "start_offset" : 31, "end_offset" : 35, "type" : "<ALPHANUM>", "position" : 6 }, { "token" : "the", "start_offset" : 36, "end_offset" : 39, "type" : "<ALPHANUM>", "position" : 7 }, { "token" : "lazy", "start_offset" : 40, "end_offset" : 44, "type" : "<ALPHANUM>", "position" : 8 }, { "token" : "dog's", "start_offset" : 45, "end_offset" : 50, "type" : "<ALPHANUM>", "position" : 9 }, { "token" : "bone", "start_offset" : 51, "end_offset" : 55, "type" : "<ALPHANUM>", "position" : 10 } ] }
(1) Installing the ik analyzer
All languages are tokenized by the "Standard Analyzer" by default, but it does not handle Chinese well, so a Chinese analyzer needs to be installed.
Note: you cannot install it automatically with the default elasticsearch-plugin install xxx.zip.
Download the build matching your ES version from https://github.com/medcl/elasticsearch-analysis-ik/releases/download
When we installed Elasticsearch earlier, we mapped the container directory "/usr/share/elasticsearch/plugins" to "/mydata/elasticsearch/plugins" on the host, so the most convenient approach is to download "elasticsearch-analysis-ik-7.6.2.zip" and unzip it into that directory. After installation, restart the elasticsearch container.
If you don't mind a few extra steps, you can also proceed as follows.
(1) Check the Elasticsearch version:
[root@hadoop-104 ~]# curl http://localhost:9200
{
  "name" : "0adeb7852e00",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "9gglpP0HTfyOTRAaSe2rIg",
  "version" : {
    "number" : "7.6.2",    # the version is 7.6.2
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@hadoop-104 ~]#
(2) Enter the plugins directory inside the ES container
- docker exec -it <container id> /bin/bash
[root@hadoop-104 ~]# docker exec -it elasticsearch /bin/bash
[root@0adeb7852e00 elasticsearch]#
[root@0adeb7852e00 elasticsearch]# pwd
/usr/share/elasticsearch
# download ik 7.6.2
[root@0adeb7852e00 elasticsearch]# wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.6.2/elasticsearch-analysis-ik-7.6.2.zip
- unzip the downloaded file
[root@0adeb7852e00 elasticsearch]# unzip elasticsearch-analysis-ik-7.6.2.zip -d ik
Archive:  elasticsearch-analysis-ik-7.6.2.zip
   creating: ik/config/
  inflating: ik/config/main.dic
  inflating: ik/config/quantifier.dic
  inflating: ik/config/extra_single_word_full.dic
  inflating: ik/config/IKAnalyzer.cfg.xml
  inflating: ik/config/surname.dic
  inflating: ik/config/suffix.dic
  inflating: ik/config/stopword.dic
  inflating: ik/config/extra_main.dic
  inflating: ik/config/extra_stopword.dic
  inflating: ik/config/preposition.dic
  inflating: ik/config/extra_single_word_low_freq.dic
  inflating: ik/config/extra_single_word.dic
  inflating: ik/elasticsearch-analysis-ik-7.6.2.jar
  inflating: ik/httpclient-4.5.2.jar
  inflating: ik/httpcore-4.4.4.jar
  inflating: ik/commons-logging-1.2.jar
  inflating: ik/commons-codec-1.9.jar
  inflating: ik/plugin-descriptor.properties
  inflating: ik/plugin-security.policy
[root@0adeb7852e00 elasticsearch]#
# move it into the plugins directory
[root@0adeb7852e00 elasticsearch]# mv ik plugins/
- rm -rf *.zip
[root@0adeb7852e00 elasticsearch]# rm -rf elasticsearch-analysis-ik-7.6.2.zip
Confirm that the analyzer was installed correctly.
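One way to confirm (while still inside the container): list the installed plugins and look for ik in the output.

[root@0adeb7852e00 elasticsearch]# bin/elasticsearch-plugin list
ik

After that, exit the container and restart it with docker restart elasticsearch so the plugin gets loaded.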
(2) Testing the analyzer
Using the default analyzer:
GET my_index/_analyze
{
  "text": "我是中国人"
}
Observe the result:
{ "tokens" : [ { "token" : "我", "start_offset" : 0, "end_offset" : 1, "type" : "<IDEOGRAPHIC>", "position" : 0 }, { "token" : "是", "start_offset" : 1, "end_offset" : 2, "type" : "<IDEOGRAPHIC>", "position" : 1 }, { "token" : "中", "start_offset" : 2, "end_offset" : 3, "type" : "<IDEOGRAPHIC>", "position" : 2 }, { "token" : "国", "start_offset" : 3, "end_offset" : 4, "type" : "<IDEOGRAPHIC>", "position" : 3 }, { "token" : "人", "start_offset" : 4, "end_offset" : 5, "type" : "<IDEOGRAPHIC>", "position" : 4 } ] }
Using ik_smart:
GET my_index/_analyze
{
  "analyzer": "ik_smart",
  "text": "我是中国人"
}
Output:
{ "tokens" : [ { "token" : "我", "start_offset" : 0, "end_offset" : 1, "type" : "CN_CHAR", "position" : 0 }, { "token" : "是", "start_offset" : 1, "end_offset" : 2, "type" : "CN_CHAR", "position" : 1 }, { "token" : "中国人", "start_offset" : 2, "end_offset" : 5, "type" : "CN_WORD", "position" : 2 } ] }
Using ik_max_word:
GET my_index/_analyze
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
Output:
{ "tokens" : [ { "token" : "我", "start_offset" : 0, "end_offset" : 1, "type" : "CN_CHAR", "position" : 0 }, { "token" : "是", "start_offset" : 1, "end_offset" : 2, "type" : "CN_CHAR", "position" : 1 }, { "token" : "中国人", "start_offset" : 2, "end_offset" : 5, "type" : "CN_WORD", "position" : 2 }, { "token" : "中国", "start_offset" : 2, "end_offset" : 4, "type" : "CN_WORD", "position" : 3 }, { "token" : "国人", "start_offset" : 3, "end_offset" : 5, "type" : "CN_WORD", "position" : 4 } ] }
(3) Custom dictionaries
- Edit IKAnalyzer.cfg.xml under /usr/share/elasticsearch/plugins/ik/config
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
	<comment>IK Analyzer extension configuration</comment>
	<!-- users can configure their own extension dictionary here -->
	<entry key="ext_dict"></entry>
	<!-- users can configure their own extension stopword dictionary here -->
	<entry key="ext_stopwords"></entry>
	<!-- users can configure a remote extension dictionary here -->
	<entry key="remote_ext_dict">http://192.168.137.14/es/fenci.txt</entry>
	<!-- users can configure a remote extension stopword dictionary here -->
	<!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>
The original XML:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
	<comment>IK Analyzer extension configuration</comment>
	<!-- users can configure their own extension dictionary here -->
	<entry key="ext_dict"></entry>
	<!-- users can configure their own extension stopword dictionary here -->
	<entry key="ext_stopwords"></entry>
	<!-- users can configure a remote extension dictionary here -->
	<!-- <entry key="remote_ext_dict">words_location</entry> -->
	<!-- users can configure a remote extension stopword dictionary here -->
	<!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>
After making the change, restart the elasticsearch container, or the change will not take effect.
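For example, with the container name used in this document:

docker restart elasticsearch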
After this update, ES applies the new tokenization only to newly added data; historical data is not re-tokenized. If you want existing data re-tokenized, run:
POST my_index/_update_by_query?conflicts=proceed
http://192.168.137.14/es/fenci.txt is the path of the resource served by nginx.
Before running the example below, install nginx (see Installing Nginx in the appendix), then create the "fenci.txt" file with the following content:
echo "樱桃萨其马,带你甜蜜入夏" > /mydata/nginx/html/fenci.txt
Test the effect:
GET my_index/_analyze
{
  "analyzer": "ik_max_word",
  "text": "樱桃萨其马,带你甜蜜入夏"
}
Output:
{ "tokens" : [ { "token" : "樱桃", "start_offset" : 0, "end_offset" : 2, "type" : "CN_WORD", "position" : 0 }, { "token" : "萨其马", "start_offset" : 2, "end_offset" : 5, "type" : "CN_WORD", "position" : 1 }, { "token" : "带你", "start_offset" : 6, "end_offset" : 8, "type" : "CN_WORD", "position" : 2 }, { "token" : "甜蜜", "start_offset" : 8, "end_offset" : 10, "type" : "CN_WORD", "position" : 3 }, { "token" : "入夏", "start_offset" : 10, "end_offset" : 12, "type" : "CN_WORD", "position" : 4 } ] }
4. elasticsearch-Rest-Client
1) 9300: TCP
spring-data-elasticsearch:transport-api.jar;
different Spring Boot versions ship different transport-api.jar versions, which cannot be matched to arbitrary ES versions;
deprecated since ES 7.x, to be removed in 8.x.
2) 9200: HTTP
JestClient: unofficial and slow to update;
RestTemplate: issues raw HTTP requests, so many ES operations have to be wrapped by hand; cumbersome;
HttpClient: same as above;
Elasticsearch-Rest-Client: the official RestClient, which wraps ES operations in a well-layered API that is easy to pick up;
final choice: Elasticsearch-Rest-Client (elasticsearch-rest-high-level-client);
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high.html
5. Appendix: Installing Nginx
- Start a throwaway nginx instance, just to copy its configuration out:
docker run -p 80:80 --name nginx -d nginx:1.10
Copy the configuration files from inside the container to /mydata/nginx/conf/:
mkdir -p /mydata/nginx/html
mkdir -p /mydata/nginx/logs
mkdir -p /mydata/nginx/conf
docker container cp nginx:/etc/nginx /mydata/nginx/conf/
# the copy leaves a nested nginx folder inside conf, so move its contents up and remove it
mv /mydata/nginx/conf/nginx/* /mydata/nginx/conf/
rm -rf /mydata/nginx/conf/nginx
Stop the original container:
docker stop nginx
Remove the original container:
docker rm nginx
Create the new nginx container with the following command:
docker run -p 80:80 --name nginx \
  -v /mydata/nginx/html:/usr/share/nginx/html \
  -v /mydata/nginx/logs:/var/log/nginx \
  -v /mydata/nginx/conf/:/etc/nginx \
  -d nginx:1.10
Set nginx to start on boot:
docker update nginx --restart=always
Create "/mydata/nginx/html/index.html" and test that it can be accessed:
echo '<h2>hello nginx!</h2>' > /mydata/nginx/html/index.html
- Visit: http://<IP of the nginx host>:80/index.html
Spring Boot Integration with Elasticsearch
1. Import the dependency
The version here must match the installed ELK (Elasticsearch) version.
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.6.2</version>
</dependency>
The ELK version managed by spring-boot-dependencies is 6.8.7:
<elasticsearch.version>6.8.7</elasticsearch.version>
It needs to be changed to 7.6.2 in the project:
<properties>
    ...
    <elasticsearch.version>7.6.2</elasticsearch.version>
</properties>
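The test code below uses an injected client and a GulimallElasticSearchConfig.COMMON_OPTIONS constant. The notes do not show that configuration class, so here is a minimal sketch of what it presumably looks like (the host 192.168.137.14 is taken from earlier in this document; adjust it to your environment):

import org.apache.http.HttpHost;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GulimallElasticSearchConfig {

    // shared request options; customize headers, timeouts, etc. here if needed
    public static final RequestOptions COMMON_OPTIONS = RequestOptions.DEFAULT;

    // build the official high-level REST client against the ES HTTP port (9200)
    @Bean
    public RestHighLevelClient esRestClient() {
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("192.168.137.14", 9200, "http")));
    }
}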
2. Write test classes
1) Test saving data
@Test
public void indexData() throws IOException {
    // index request against the "users" index
    IndexRequest indexRequest = new IndexRequest("users");
    User user = new User();
    user.setUserName("张三");
    user.setAge(20);
    user.setGender("男");
    String jsonString = JSON.toJSONString(user);
    // set the content to be saved
    indexRequest.source(jsonString, XContentType.JSON);
    // create the index and save the data
    IndexResponse index = client.index(indexRequest, GulimallElasticSearchConfig.COMMON_OPTIONS);
    System.out.println(index);
}
(Screenshots of the index state before and after running the test are omitted here.)
2) Test retrieving data
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-search.html
@Test
public void searchData() throws IOException {
    // get a single document from "users" by id
    GetRequest getRequest = new GetRequest("users", "_-2vAHIB0nzmLJLkxKWk");
    GetResponse getResponse = client.get(getRequest, RequestOptions.DEFAULT);
    System.out.println(getResponse);
    String index = getResponse.getIndex();
    System.out.println(index);
    String id = getResponse.getId();
    System.out.println(id);
    if (getResponse.isExists()) {
        long version = getResponse.getVersion();
        System.out.println(version);
        String sourceAsString = getResponse.getSourceAsString();
        System.out.println(sourceAsString);
        Map<String, Object> sourceAsMap = getResponse.getSourceAsMap();
        System.out.println(sourceAsMap);
        byte[] sourceAsBytes = getResponse.getSourceAsBytes();
    } else {
    }
}
Querying documents where state = "AK":
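The notes do not show the request that produced the output below; it was presumably a match query on state, something like this sketch:

GET bank/_search
{
  "query": {
    "match": { "state": "AK" }
  }
}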
{ "took": 1, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 }, "hits": { "total": { "value": 22, //匹配到了22条 "relation": "eq" }, "max_score": 3.7952394, "hits": [{ "_index": "bank", "_type": "account", "_id": "210", "_score": 3.7952394, "_source": { "account_number": 210, "balance": 33946, "firstname": "Cherry", "lastname": "Carey", "age": 24, "gender": "M", "address": "539 Tiffany Place", "employer": "Martgo", "email": "cherrycarey@martgo.com", "city": "Fairacres", "state": "AK" } }, ....//省略其他 ] } }
Search for the age distribution, the average age, and the average balance of everyone whose address contains "mill":
GET bank/_search
{
  "query": {
    "match": { "address": "Mill" }
  },
  "aggs": {
    "ageAgg": {
      "terms": { "field": "age", "size": 10 }
    },
    "ageAvg": {
      "avg": { "field": "age" }
    },
    "balanceAvg": {
      "avg": { "field": "balance" }
    }
  }
}
Java implementation:
/**
 * Complex search: the age distribution, average age, and average balance of
 * everyone in bank whose address contains "Mill"
 * @throws IOException
 */
@Test
public void searchData() throws IOException {
    // 1. build the search request
    SearchRequest searchRequest = new SearchRequest();
    // 1.1) specify the index
    searchRequest.indices("bank");
    // 1.2) build the search conditions
    SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
    sourceBuilder.query(QueryBuilders.matchQuery("address", "Mill"));
    // 1.2.1) aggregate by age distribution
    TermsAggregationBuilder ageAgg = AggregationBuilders.terms("ageAgg").field("age").size(10);
    sourceBuilder.aggregation(ageAgg);
    // 1.2.2) compute the average age
    AvgAggregationBuilder ageAvg = AggregationBuilders.avg("ageAvg").field("age");
    sourceBuilder.aggregation(ageAvg);
    // 1.2.3) compute the average balance
    AvgAggregationBuilder balanceAvg = AggregationBuilders.avg("balanceAvg").field("balance");
    sourceBuilder.aggregation(balanceAvg);
    System.out.println("search conditions: " + sourceBuilder);
    searchRequest.source(sourceBuilder);
    // 2. execute the search
    SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
    System.out.println("search result: " + searchResponse);
    // 3. map the hits to beans
    SearchHits hits = searchResponse.getHits();
    SearchHit[] searchHits = hits.getHits();
    for (SearchHit searchHit : searchHits) {
        String sourceAsString = searchHit.getSourceAsString();
        Account account = JSON.parseObject(sourceAsString, Account.class);
        System.out.println(account);
    }
    // 4. read the aggregations
    Aggregations aggregations = searchResponse.getAggregations();
    Terms ageAgg1 = aggregations.get("ageAgg");
    for (Terms.Bucket bucket : ageAgg1.getBuckets()) {
        String keyAsString = bucket.getKeyAsString();
        System.out.println("age: " + keyAsString + " ==> " + bucket.getDocCount());
    }
    Avg ageAvg1 = aggregations.get("ageAvg");
    System.out.println("average age: " + ageAvg1.getValue());
    Avg balanceAvg1 = aggregations.get("balanceAvg");
    System.out.println("average balance: " + balanceAvg1.getValue());
}
Try comparing the printed conditions and results with the Elasticsearch queries and results shown earlier.
Other
1. Kibana console shortcuts
Ctrl+Home: jump to the beginning of the document;
Ctrl+End: jump to the end of the document.