analyzer
Analysis happens at two points:
1. Index time analysis: when a document is created or updated, its fields are analyzed.
2. Search time analysis: the query string is analyzed at query time.
The search-time analyzer can be specified in several ways:
- via the analyzer parameter in the query
- via search_analyzer in the index mapping
The index-time analyzer is configured through the analyzer property of each field in the index mapping.
# If no analyzer is specified, the default standard analyzer is used
PUT test_index
{
  "mappings": {
    "doc": {
      "properties": {
        "title": {
          "type": "text",
          "analyzer": "whitespace"   # specify the analyzer
        }
      }
    }
  }
}
- In general there is no need to set a separate search-time analyzer; reusing the index-time analyzer avoids the mismatches that occur when indexed terms and query terms are produced differently.
- Decide explicitly whether each field needs analysis; setting type to keyword for fields that don't saves space and improves write performance.
- Make good use of the _analyze API to inspect how a document is actually tokenized.
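The mismatch warned about above can be sketched in plain Python (this is an illustration, not Elasticsearch itself; the function names are made up for the sketch). If the index-time analyzer keeps case but the search-time analyzer lowercases, the indexed terms and the query terms never intersect:

```python
def whitespace_analyzer(text):
    # splits on whitespace only, keeps case (like the whitespace analyzer)
    return text.split()

def lowercase_whitespace_analyzer(text):
    # splits on whitespace and lowercases
    return text.lower().split()

indexed_terms = whitespace_analyzer("Quick Fox")            # ['Quick', 'Fox']
query_terms = lowercase_whitespace_analyzer("Quick Fox")    # ['quick', 'fox']

# Term matching is exact string equality, so nothing lines up:
matches = [t for t in query_terms if t in indexed_terms]
print(matches)  # []
```

Using the same analyzer on both sides makes the two term lists identical, which is why the default of reusing the index-time analyzer is usually right.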
Setting the analyzer
PUT test
{
  "settings": {
    "analysis": {
      "analyzer": {                  # keyword
        "my_analyzer": {             # custom analyzer
          "type": "standard",        # analyzer type: standard
          "stopwords": "_english_"   # parameter of the standard analyzer; the default stopwords is _none_
        }
      }
    }
  },
  "mappings": {
    "doc": {
      "properties": {
        "my_text": {
          "type": "text",
          "analyzer": "standard",    # the my_text field uses the standard analyzer
          "fields": {
            "english": {             # the my_text.english field uses my_analyzer
              "type": "text",
              "analyzer": "my_analyzer"
            }
          }
        }
      }
    }
  }
}

POST test/_analyze
{
  "field": "my_text",          # my_text uses the standard analyzer
  "text": ["The test message."]
}
--------------> [the, test, message]

POST test/_analyze
{
  "field": "my_text.english",  # my_text.english uses my_analyzer
  "text": ["The test message."]
}
------------> [test, message]
Elasticsearch ships with many built-in analyzers:
- standard, composed of:
  - tokenizer: Standard Tokenizer
  - token filters: Standard Token Filter, Lower Case Token Filter, Stop Token Filter (disabled by default)
Test:
POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
result: [the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone]
# optional parameters for standard
# max_token_length
# stopwords
# stopwords_path
- whitespace: splits on whitespace
POST _analyze
{
  "analyzer": "whitespace",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
--> [ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ]
- simple
POST _analyze
{
  "analyzer": "simple",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---> [ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
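As a rough illustration (plain Python, not Elasticsearch), the simple analyzer behaves like splitting on every non-letter character and lowercasing, which is why the digit 2 disappears and dog's becomes two terms:

```python
import re

def simple_analyzer(text):
    # split on any non-letter character and lowercase, mimicking
    # the behaviour of the built-in simple analyzer
    return re.findall(r"[a-z]+", text.lower())

tokens = simple_analyzer("The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.")
print(tokens)
# ['the', 'quick', 'brown', 'foxes', 'jumped', 'over', 'the', 'lazy', 'dog', 's', 'bone']
```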
- stop: the default stopwords is _english_
POST _analyze
{
  "analyzer": "stop",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
--> [ quick, brown, foxes, jumped, over, lazy, dog, s, bone ]
Optional parameters:
# stopwords
# stopwords_path
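The stop analyzer is the simple analyzer followed by stop-word removal. A small Python sketch (the stopword set below is the Lucene _english_ default, hard-coded here for the illustration):

```python
import re

# Lucene's default _english_ stopword set
ENGLISH_STOPWORDS = {
    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
    "that", "the", "their", "then", "there", "these", "they", "this",
    "to", "was", "will", "with",
}

def stop_analyzer(text):
    # simple-analyzer tokenization, then drop stop words
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in ENGLISH_STOPWORDS]

tokens = stop_analyzer("The 2 QUICK Brown-Foxes jumped over the lazy dog's bone.")
print(tokens)
# ['quick', 'brown', 'foxes', 'jumped', 'over', 'lazy', 'dog', 's', 'bone']
```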
- keyword: performs no tokenization
POST _analyze
{
  "analyzer": "keyword",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
result: "token": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."  # the complete text as a single token
- pattern
- language
- fingerprint
You can also assemble your own analyzer (a custom analyzer) from the built-in character filters, tokenizers, and token filters.
- custom: a hand-built analyzer, composed of
  - zero or more character filters
  - one tokenizer
  - zero or more token filters
PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": ["html_strip"],
          "filter": ["lowercase"]
        }
      }
    }
  }
}

POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's <b> bone.</b>"]
}
result: [the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone]
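The three-stage pipeline behind that custom analyzer (char filters, then the tokenizer, then token filters) can be sketched in Python. The regexes below are crude stand-ins for html_strip and the standard tokenizer, not their real implementations:

```python
import re

def html_strip(text):
    # char filter: drop HTML tags (crude regex version of html_strip)
    return re.sub(r"<[^>]+>", "", text)

def standard_tokenize(text):
    # tokenizer: rough stand-in for the standard tokenizer
    return re.findall(r"[\w']+", text)

def lowercase(tokens):
    # token filter
    return [t.lower() for t in tokens]

def my_analyzer(text):
    # run the pipeline in order: char filters -> tokenizer -> token filters
    for char_filter in (html_strip,):
        text = char_filter(text)
    tokens = standard_tokenize(text)
    for token_filter in (lowercase,):
        tokens = token_filter(tokens)
    return tokens

print(my_analyzer("The 2 QUICK Brown-Foxes jumped over the lazy dog's <b> bone.</b>"))
# ['the', '2', 'quick', 'brown', 'foxes', 'jumped', 'over', 'the', 'lazy', "dog's", 'bone']
```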
Custom analyzers
A custom analyzer must be defined in the index settings, as shown below:
PUT test_index
{
  "settings": {
    "analysis": {          # analysis settings, customizable
      "char_filter": {},   # char_filter keyword
      "tokenizer": {},     # tokenizer keyword
      "filter": {},        # filter keyword
      "analyzer": {}       # analyzer keyword
    }
  }
}
character filter — processes the raw text before it reaches the tokenizer, e.g. by adding, removing, or replacing characters. This affects the position and offset information later computed by the tokenizer.
- html_strip: removes HTML tags and decodes HTML entities
  - parameter: escaped_tags, tags that should not be stripped
POST _analyze
{
  "tokenizer": "keyword",
  "char_filter": ["html_strip"],
  "text": ["<p>I'm so <b>happy</b>!</p>"]
}
result: "token": """
I'm so happy!
"""

# configuration example
PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {               # keyword
        "my_analyzer": {          # custom analyzer
          "tokenizer": "keyword",
          "char_filter": ["my_char_filter"]
        }
      },
      "char_filter": {            # keyword
        "my_char_filter": {       # custom char_filter
          "type": "html_strip",
          "escaped_tags": ["b"]   # array of HTML tags that should not be stripped from the text
        }
      }
    }
  }
}

POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["<p>I'm so <b>happy</b>!</p>"]
}
result: "token": """
I'm so <b>happy</b>!
"""
- mapping: exactly one of the following two parameters must be set
  - mappings: an array of mappings, each in key => value format
  - mappings_path: an absolute path, or a path relative to the config directory, to a file of key => value mappings

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {            # keyword
        "my_analyzer": {       # custom analyzer
          "tokenizer": "standard",
          "char_filter": "my_char_filter"
        }
      },
      "char_filter": {         # keyword
        "my_char_filter": {    # custom char_filter
          "type": "mapping",
          "mappings": [        # the mapping rules
            ":)=>happy",
            ":(=>sad"
          ]
        }
      }
    }
  }
}

POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["i am so :)"]
}
result: [i, am, so, happy]
- pattern_replace
  - pattern: the regular expression to match
  - replacement: the replacement string; may reference capture groups as $1..$9
  - flags: regex flags
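Both char filters are plain text-to-text transformations, which makes them easy to sketch in Python (str.replace stands in for mapping, and re.sub's \1 plays the role of $1 in the pattern_replace replacement string; the phone-number pattern is an invented example):

```python
import re

def mapping_char_filter(text, mappings):
    # mapping char filter: literal key => value substitutions
    for key, value in mappings.items():
        text = text.replace(key, value)
    return text

def pattern_replace_char_filter(text, pattern, replacement):
    # pattern_replace char filter: regex substitution on the raw text
    return re.sub(pattern, replacement, text)

print(mapping_char_filter("i am so :)", {":)": "happy", ":(": "sad"}))
# i am so happy

# e.g. rewrite 123-456-789 as 123_456_789 before tokenization
print(pattern_replace_char_filter("123-456-789", r"(\d+)-(?=\d)", r"\1_"))
# 123_456_789
```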
tokenizer — splits the raw text into terms according to a fixed set of rules
- standard
  - parameter: max_token_length, the maximum token length, default 255

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "standard",
          "max_token_length": 5
        }
      }
    }
  }
}

POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
result: [ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]  # 'jumped' has length 6, so it is split at position 5
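The max_token_length behavior (any token longer than the limit is chopped into pieces of that length) can be mimicked like this; the tokenizing regex is a rough stand-in for the real standard tokenizer:

```python
import re

def standard_tokenize(text, max_token_length=255):
    # stand-in for the standard tokenizer: tokens longer than
    # max_token_length are split into chunks of that length
    tokens = re.findall(r"[\w']+", text)
    out = []
    for token in tokens:
        for i in range(0, len(token), max_token_length):
            out.append(token[i:i + max_token_length])
    return out

print(standard_tokenize("jumped over", max_token_length=5))
# ['jumpe', 'd', 'over']
```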
- letter: splits into terms on any non-letter character
POST _analyze
{
  "tokenizer": "letter",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
result: [ The, QUICK, Brown, Foxes, jumped, over, the, lazy, dog, s, bone ]
- lowercase: same as the letter tokenizer, but additionally lowercases each term
POST _analyze
{
  "tokenizer": "lowercase",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
result: [ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
- whitespace: splits into terms on whitespace characters
  - parameter: max_token_length
POST _analyze
{
  "tokenizer": "whitespace",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
result: [ The, 2, QUICK, Brown-Foxes, jumped, over, the, lazy, dog's, bone. ]
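In Python terms the whitespace tokenizer is essentially str.split(): it cuts only on runs of whitespace, so case, hyphens, and trailing punctuation all survive (a sketch, not the real implementation):

```python
def whitespace_tokenize(text):
    # whitespace tokenizer: split on runs of whitespace, nothing else
    return text.split()

print(whitespace_tokenize("The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."))
# ['The', '2', 'QUICK', 'Brown-Foxes', 'jumped', 'over', 'the', 'lazy', "dog's", 'bone.']
```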
- keyword: a no-op tokenizer that outputs the input text unchanged as a single term
  - parameter: buffer_size, the number of characters read into the term buffer in a single pass, default 256
POST _analyze
{
  "tokenizer": "keyword",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."]
}
result: "token": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."  # the complete text as a single token
token filter — adds, removes, or modifies the terms produced by the tokenizer
- lowercase: lowercases each term
POST _analyze
{
  "filter": ["lowercase"],
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone"]
}
---> "token": "the 2 quick brown-foxes jumped over the lazy dog's bone"

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": "lowercase"
        }
      }
    }
  }
}

POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["The 2 QUICK Brown-Foxes jumped over the lazy dog's bone"]
}
- stop: removes stop words from the token stream.
Parameters:
# stopwords: the stop words to use, default _english_
# stopwords_path
# ignore_case: if true, stop word matching is case-insensitive; default false
# remove_trailing: whether to remove the last token if it is a stop word; default true

PUT t_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": "my_filter"
        }
      },
      "filter": {
        "my_filter": {
          "type": "stop",
          "stopwords": ["and", "or", "not"]
        }
      }
    }
  }
}

POST t_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": ["lucky and happy not sad"]
}
--------------> [lucky, happy, sad]
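The stop token filter above is a straightforward set-membership filter over the token stream, which the following Python sketch illustrates with the same custom stopword list:

```python
def stop_token_filter(tokens, stopwords):
    # stop token filter: drop any token found in the stopword set
    return [t for t in tokens if t not in stopwords]

tokens = "lucky and happy not sad".split()
print(stop_token_filter(tokens, {"and", "or", "not"}))
# ['lucky', 'happy', 'sad']
```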