Overview
Continuing the Elasticsearch series with teacher 中华石杉, part 15.
Course link: https://www.roncoo.com/view/55
白话Elasticsearch 14 - Drawbacks of cross-fields search with multi_match using the most_fields strategy
白话Elasticsearch 15 - Using copy_to to build a combined field to work around the drawbacks of cross-fields search
Following on from those two posts, let's look at how the native cross_fields technique solves the search drawbacks.
Example
The following DSL, with "type": "cross_fields" and "operator": "and", addresses the drawbacks:
GET /forum/article/_search
{
  "query": {
    "multi_match": {
      "query": "Peter Smith",
      "type": "cross_fields",
      "operator": "and",
      "fields": ["author_first_name", "author_last_name"]
    }
  }
}
Response:
{ "took": 3, "timed_out": false, "_shards": { "total": 1, "successful": 1, "skipped": 0, "failed": 0 }, "hits": { "total": 2, "max_score": 2.3258216, "hits": [ { "_index": "forum", "_type": "article", "_id": "1", "_score": 2.3258216, "_source": { "articleID": "XHDK-A-1293-#fJ3", "userID": 1, "hidden": false, "postDate": "2017-01-01", "tag": [ "java", "hadoop" ], "tag_cnt": 2, "view_cnt": 30, "title": "this is java and elasticsearch blog", "content": "i like to write best elasticsearch article", "sub_title": "learning more courses", "author_first_name": "Peter", "author_last_name": "Smith", "new_author_last_name": "Smith", "new_author_first_name": "Peter" } }, { "_index": "forum", "_type": "article", "_id": "5", "_score": 1.7770995, "_source": { "articleID": "DHJK-B-1395-#Ky5", "userID": 3, "hidden": false, "postDate": "2019-05-01", "tag": [ "elasticsearch" ], "tag_cnt": 1, "view_cnt": 10, "title": "this is spark blog", "content": "spark is best big data solution based on scala ,an programming language similar to java", "sub_title": "haha, hello world", "author_first_name": "Tonny", "author_last_name": "Peter Smith", "new_author_last_name": "Peter Smith", "new_author_first_name": "Tonny" } } ] } }
So how does cross_fields solve those drawbacks? Let's analyze it.
Problem 1: most_fields only finds docs where as many fields as possible match, rather than docs where some field matches completely.
Answer: solved. cross_fields requires that every term appear in at least one of the fields.
For the query Peter Smith:
Peter must appear in either author_first_name or author_last_name
Smith must appear in either author_first_name or author_last_name
Peter Smith may well span multiple fields, so every term is required to appear in some field; only in combination do the terms form the identifier we want, the complete name.
With most_fields, by contrast, a doc like Smith Williams could also show up, because most_fields only requires that any one field matches, and the more fields that match, the higher the score.
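Conceptually, cross_fields with "operator": "and" groups the query by term instead of by field. Here is a minimal hand-written sketch of the bool query it roughly corresponds to (an approximation for intuition only, not the exact Lucene rewrite; the scoring differs):

GET /forum/article/_search
{
  "query": {
    "bool": {
      "must": [
        // each term must match in at least one of the two name fields
        { "multi_match": { "query": "Peter", "fields": ["author_first_name", "author_last_name"] } },
        { "multi_match": { "query": "Smith", "fields": ["author_first_name", "author_last_name"] } }
      ]
    }
  }
}

Each must clause can be satisfied by either field, so a name spanning both fields still matches, while a doc containing only Smith Williams fails the Peter clause.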
Problem 2: with most_fields there is no way to use minimum_should_match to strip out long-tail results, i.e. docs that match only very few terms. --> Solved: since every term is required to appear, the long tail is removed by definition.
Answer: for java hadoop spark, all 3 terms must each appear in some field.
For example, a document where just one field contains a single java gets dropped; as long-tail noise it is gone. A sketch of such a query follows.
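The three-term version would look like this (title and content are chosen purely for illustration; that field choice is my assumption, not from the course):

GET /forum/article/_search
{
  "query": {
    "multi_match": {
      "query": "java hadoop spark",
      "type": "cross_fields",
      "operator": "and",
      // illustrative fields, not from the course material
      "fields": ["title", "content"]
    }
  }
}

A doc matching only java in content fails the hadoop and spark clauses and never appears, with no minimum_should_match needed.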
Problem 3: the TF/IDF algorithm. Take Peter Smith and Smith Williams. When searching for Peter Smith, Smith rarely appears in first_name, so the term's frequency across all documents' first_name field is very low, which yields a very high IDF score; as a result, Smith Williams may end up ranked above Peter Smith.
Answer: when computing IDF, cross_fields takes the IDF of each query term in each field and uses the minimum, so the extreme maxima of these corner cases no longer appear.
Query: Peter Smith → terms: Peter, Smith
Take Smith: in the author_first_name field across all docs it appears very rarely, which drives its IDF score very high. Smith in the author_last_name field across all docs also gets an IDF score; since Smith is a common last name there, that IDF is normal, not inflated. For Smith, the smaller of these two IDF scores is then used, so an excessively high IDF never occurs. We can see this blending in the explain output below.
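To actually see the blending, you can ask Elasticsearch to explain the query with the validate API (a sketch; the exact explanation string varies by version, and the blended(...) form below follows the description in the Elasticsearch Definitive Guide):

GET /forum/article/_validate/query?explain
{
  "query": {
    "multi_match": {
      "query": "Peter Smith",
      "type": "cross_fields",
      "operator": "and",
      "fields": ["author_first_name", "author_last_name"]
    }
  }
}

The explanation comes back roughly as +blended(author_first_name:peter, author_last_name:peter) +blended(author_first_name:smith, author_last_name:smith), showing that term statistics are blended across the two fields instead of being scored field by field.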