The origin of ES doc_values, fielddata: isn't it just the doc->terms forward index, except that it is built at query time by reading the inverted index (loading segments) into memory?


Support in the Wild: My Biggest Elasticsearch Problem at Scale

Java Heap Pressure

Elasticsearch has so many wildly different use cases that I could not write a reasonably short blog post describing what can and cannot consume memory. However, there is one thing that constantly stands out above all of the other concerns that you might have while running an Elasticsearch cluster at scale.

For the users that I help, fielddata is the problem that is the most likely to cause their cluster's instability. Fielddata is the bane of my existence and it's the most frequent cause of the highest severity issues that I handle with our customers.

Understanding Fielddata

The inverted index is the magic that makes Elasticsearch queries so fast. This data structure holds a sorted list of all the unique terms that appear in a field, and each term points to the list of documents that contain that term:

Term:    Docs: 1 2 3 4 5
------------------------
brown          X   X X
fox                  X X
quick            X   X
------------------------

Search asks the question: What documents contain the term brown in the foo field? The inverted index is the perfect structure to answer this question: look up the term of interest in the sorted list and you immediately know which documents match your query.
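
For example, a term query like the following is answered entirely from the inverted index (a minimal sketch; the index name foo-index is hypothetical):

$ curl -XGET localhost:9200/foo-index/_search -d '{
  "query": {
    "term": { "foo": "brown" }
  }
}'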

Sorting or aggregations, however, need to be able to answer this question: What terms does Document 1 contain in the foo field? To answer this, we need a data structure that is the opposite of the inverted index:

Docs:   Terms:
----------------------------
1       [ brown ]
2       [ quick ]
3       [ brown ]
4       [ brown, fox, quick ]
5       [ fox ]
----------------------------

This is the purpose of fielddata. Fielddata can be generated at query time by reading the inverted index, inverting the term <-> doc data structure, and storing the results in memory. (So it is essentially the doc->terms forward index, except that it is built at query time by reading the inverted index? If that is really the case, how can it be faster than doc values?) The two major downsides of this approach should be obvious:

  1. Loading fielddata can be slow, especially with big segments.
  2. It consumes a lot of valuable heap space.

Because loading fielddata is costly, we try to do it as seldom as possible. Once loaded, we keep it in memory for as long as possible.

By default, fielddata is loaded on demand, which means that you will not see it until you are using it. Also, because fielddata is loaded per segment, newly created segments will slowly add to your overall memory usage until the field's fielddata is evicted from memory. Eviction happens in only a few ways:

  1. Deleting the index or indices that contain it.
  2. Closing the index or indices that contain it.
  3. Segment fielddata is removed when segments are removed (e.g., background merging).
    • This usually just means that the problem is moving rather than going away.
  4. Restarting the node containing the fielddata.
  5. Clearing the relevant fielddata cache.
  6. Automatically evicting the fielddata to make room for other fielddata.
    • This will not happen with default settings.

While the first two ways will cause the memory to be evicted, they're not useful in terms of solving the problem because they make the index unusable. Segment merging happens in the background on its own schedule, so it is not a way to deliberately clear fielddata. The fourth and fifth ways are unlikely to be a long term solution because they do not prevent fielddata from being reloaded.
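
For reference, the fifth way corresponds to the clear cache API. A minimal sketch, assuming the fielddata parameter of the clear cache endpoint (its exact spelling has varied between releases, e.g. field_data in some 1.x versions):

$ curl -XPOST localhost:9200/test-index/_cache/clear?fielddata=true

As noted, this only buys time: the next sort, script, or aggregation on the field will simply reload the fielddata.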

The sixth option, evicting fielddata when the cache is full, leads to different issues: one request triggers fielddata loading for one field and the next request triggers loading for another, causing the first field to be evicted. This causes memory thrashing and slow garbage collections, and your users suffer from very slow queries while they wait for their fielddata to be loaded.
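
The reason this kind of eviction "will not happen with default settings" is that the fielddata cache is unbounded out of the box. It only starts once a cache size is configured, for example in elasticsearch.yml (a sketch; the 40% value is purely illustrative, not a recommendation):

indices.fielddata.cache.size: 40%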

Simply put, once fielddata becomes a problem, then it stays a problem.

Why Fielddata is Bad

At small scales, you can generally get away with fielddata usage without even realizing that you are using it. In highly controlled environments, you may even benefit from having specific fields loaded into memory for theoretically faster access.

However, almost without fail, you are bound to run into a problem with it eventually. Whether it's because someone ran a test request on the production system without thinking that it would be a problem (it's just one query, right?), your queries changed to match new data, or you just finally reached a scale where it no longer works: you will eventually run into memory pressure that does not go away.

As I noted earlier, fielddata does not go away on its own. In Elasticsearch 1.3 and later, we allow up to 60% of your Java heap to be consumed by fielddata per node. We control this via the Fielddata Circuit Breaker, which estimates the fielddata that an incoming request would load and blocks the request if that would push usage past the limit. A circuit breaker's purpose is to reject a bad request before it ever gets the chance to cause a problem (e.g., allocating even more fielddata), but it is important to note that it does not clear any fielddata that is already loaded.

For example, if a node has 10 GB of Java heap, then 60% of that is 6 GB. If a new request requires 1 GB of fielddata to be loaded on a node that is already using 4 GB of its heap for fielddata, then it will be allowed because 4 GB plus 1 GB is less than 6 GB. However, if the next request needed 2 GB for yet another field's fielddata, then the entire request would be rejected because the fielddata limit would be exceeded (5 GB + 2 GB = 7 GB, which is clearly greater than 6 GB).

Note: for versions prior to Elasticsearch 1.3, we allowed an unlimited amount of your Java heap to be consumed by fielddata.
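
The 60% figure corresponds to the fielddata circuit breaker limit, which can be changed dynamically through the cluster settings API. A minimal sketch, assuming the indices.breaker.fielddata.limit setting name used from roughly 1.4 onward (earlier 1.x releases spelled it indices.fielddata.breaker.limit) and an illustrative 70% value:

$ curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent": {
    "indices.breaker.fielddata.limit": "70%"
  }
}'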

Finding Your Fielddata

Fortunately, it's not all bad news. Not only do we have a solution to the problem, but we also provide a way to find and understand your problem with it.

$ curl -XGET 'localhost:9200/_cat/fielddata?v&fields=*'

This will provide a list of each node with its fielddata usage. For instance, at startup, my local node is using absolutely no fielddata:

id                     host            ip          node  total 
iExRFXn1Qw23iRzhwor-Wg Chriss-MBP.home 192.168.1.2 WallE 0b

To see it change, it's as simple as sorting, scripting, or aggregating any field. So let's do all three!

$ curl -XGET localhost:9200/test-index/_search -d '{
  "query": {
    "filtered": {
      "filter": {
        "script": {
          "script": "doc[\"percentage\"].value > 0.5"
        }
      }
    }
  },
  "aggs": {
    "max_number": {
      "max": { "field": "number" }
    }
  },
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}'


Although order is irrelevant for this, the first field that will be impacted will be the percentage field that is accessed inside of the scripted filter. The second field used will be the number field from the aggregation. Finally, the last field is the @timestamp field used to sort the filtered results. Taking another look at the _cat/fielddata command above confirms this:

id                     host            ip          node   total number @timestamp percentage
iExRFXn1Qw23iRzhwor-Wg Chriss-MBP.home 192.168.1.2 WallE 49.9kb 24.8kb 24.8kb 192b
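
If you prefer the JSON APIs over _cat, a similar per-field breakdown should also be available from the node stats endpoint (a sketch; the exact response shape varies by version):

$ curl -XGET 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'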

Use Doc Values

The solution to this fielddata problem is to avoid it altogether. Fortunately, you can avoid the use of fielddata by manually mapping all of your fields to use doc values. Without repeating too much from the guide, doc values offload this burden by writing the fielddata to disk at index time, thereby allowing Elasticsearch to load the values outside of your Java heap as they are needed.

By taking the burden out of your heap, you get fast access to the on-disk fielddata through the file system cache, which gives in-memory performance without the cost of garbage collections coming into play. This also frees up a lot of headroom for the Elasticsearch heap so that more operations (e.g., bulk indexing and concurrent searches) can use the heap without placing the node under memory pressure, which leads to garbage collection that will slow it down.
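
To make this concrete, here is a minimal mapping sketch for the test-index fields used earlier (the type name events is hypothetical), assuming an Elasticsearch 1.x release where doc values must be enabled explicitly per field:

$ curl -XPUT localhost:9200/test-index -d '{
  "mappings": {
    "events": {
      "properties": {
        "number":     { "type": "long",   "doc_values": true },
        "percentage": { "type": "double", "doc_values": true },
        "@timestamp": { "type": "date",   "doc_values": true }
      }
    }
  }
}'

With a mapping like this, the sort, script, and aggregation from the earlier request read their values from disk through the file system cache instead of building fielddata on the heap.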

 








This article is reposted from the cnblogs blog of 张昺华-sky. Original link: http://www.cnblogs.com/bonelee/p/6401686.html. If you wish to republish it, please contact the original author.

