Kibana version: 7.1.1
Elasticsearch version: 7.1.1
Original install method (e.g. download page, yum, from source, etc.):
Installed from scratch using yum, generic configuration, 100% automated install.
ES cluster with 3 nodes
Describe the bug:
Kibana seems to have a problem with its .kibana index at creation time:
When trying to access saved objects, Kibana returns a 400 Bad Request error and Elasticsearch throws a fielddata error on the .kibana index.
I can create and find index patterns through the API, but Kibana cannot find them because its search requests hit the fielddata exception.
Note: the problem seems somewhat random; it happened on one of the three clusters I created today (since we moved to 7+), all created the same way with the same script.
Note: I found a thread on the Elastic forums where people coming from 6+ seem to see the same behavior since 7+:
https://discuss.elastic.co/t/kibana-7-cant-load-index-pattern/180167
Tomorrow I will create more clusters to see how often the problem occurs.
Provide logs and/or server output (if relevant):
Elasticsearch logs when refreshing the "Saved Objects" page:
[2019-07-02T11:08:48,327][DEBUG][o.e.a.s.TransportSearchAction] [elastic01] [.kibana][0],
node[RmpqDbnZTMmmrGTVe5sOZA], [R], s[STARTED], a[id=UOCFUQwpREy44aF76avXfw]:
Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.kibana],
indicesOptions=IndicesOptions[ignore_unavailable=false,
...
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default.
Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted
index. Note that this can however use significant memory. Alternatively use a keyword field
instead.
The index pattern exists in saved objects and a curl GET works, but Kibana cannot find it because it hits the fielddata error:
curl -X GET "http://localhost:5601/api/saved_objects/index-pattern/filebeat-ulf" -H 'kbn-xsrf: true'
{"id":"filebeat-ulf","type":"index-pattern","updated_at":"2019-07-02T11:07:17.553Z","version":"WzUsMV0=","attributes":{"title":"filebeat-7.1.1-ulf-*","timeFieldName":"@timestamp"},"references":[],"migrationVersion":{"index-pattern":"6.5.0"}}
ping @elastic/kibana-platform
There is another forum post:
https://discuss.elastic.co/t/not-possible-to-create-index-patterns-in-kibana/185591/2
where the user fixed it by:
I created yet another cluster (the 4th one), same problem.
I tried stopping Kibana, deleting the .kibana index, and starting Kibana again; here are the Elasticsearch logs:
[2019-07-03T03:02:16,659][INFO ][o.e.c.m.MetaDataDeleteIndexService] [elastic01]
[.kibana/1Z8-n6nCSza4pm2HXtWG_Q] deleting index
[2019-07-03T03:03:15,155][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elastic01]
adding template [.management-beats] for index patterns [.management-beats]
[2019-07-03T03:03:15,820][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic01]
[.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []
[2019-07-03T03:03:15,944][INFO ][o.e.c.m.MetaDataMappingService] [elastic01]
[.kibana/x0ymkiGpRxWJA_rMJ-T3Nw] create_mapping [_doc]
[2019-07-03T03:03:15,945][INFO ][o.e.c.m.MetaDataMappingService] [elastic01]
[.kibana/x0ymkiGpRxWJA_rMJ-T3Nw] update_mapping [_doc]
[2019-07-03T03:03:16,021][INFO ][o.e.c.r.a.AllocationService] [elastic01]
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana][0]] ...]).
[2019-07-03T03:03:37,218][INFO ][o.e.c.m.MetaDataMappingService] [elastic01]
[.kibana/x0ymkiGpRxWJA_rMJ-T3Nw] update_mapping [_doc]
[2019-07-03T03:03:55,567][DEBUG][o.e.a.s.TransportSearchAction] [elastic01] [.kibana][0],
node[UKPhnQePR6-3EJMobt8mbw], [R], s[STARTED], a[id=oVInWbneRLicfKSIqL_uwA]:
Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[.kibana],
indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true,
expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true,
forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null',
preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=0,
batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null,
getOrCreateAbsoluteStartMillis=-1, ccsMinimizeRoundtrips=true, source={"from":0,"size":20,"query":
{"bool":{"filter":[{"bool":{"should":[{"bool":{"must":[{"term":{"type":{"value":"index-
pattern","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":
{"type":{"value":"visualization","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":
{"type":{"value":"dashboard","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},{"bool":{"must":[{"term":
{"type":{"value":"search","boost":1.0}}}],"must_not":[{"exists":
{"field":"namespace","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}}],"adjust_pure_negative":true,"
minimum_should_match":"1","boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"seq_no_primary_ter
m":true,"_source":{"includes":["index-pattern","visualization","dashboard","search.title","index-
pattern","visualization","dashboard","search.id","namespace","type","references","migrationVersion",
"updated_at","title","id"],"excludes":[]},"sort":[{"type":
{"order":"asc","unmapped_type":"keyword"}}],"track_total_hits":2147483647}}]
org.elasticsearch.transport.RemoteTransportException: [elastic03][x.x.x.x:9300]
[indices:data/read/search[phase/query]]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default.
Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index.
Note that this can however use significant memory. Alternatively use a keyword field instead.
Edit:
I created another cluster (the 5th one) (same script from scratch, including VM creation), and this time no error :thinking: I will try to see whether election problems could be causing this.
Edit 2:
The 6th cluster has the problem again (same script from scratch, including VM creation).
On node 3 I can see some interesting logs:
The node hit a few errors on its first attempt at master election/join, but still managed to complete it and bootstrap; then the node reported an error while creating the .kibana index alias:
I removed the node IDs / {ml.machine_memory=..., xpack.installed=true} from the logs to cut some noise and make them more readable.
[2019-07-03T03:57:29,167][INFO ][o.e.c.c.JoinHelper] [elastic03]
failed to join {elastic01} {x.x.x.x}{x.x.x.x:9300}
with JoinRequest{sourceNode={elastic03}{y.y.y.y} {y.y.y.y:9300},
optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode=
{elastic03}{y.y.y.y}{y.y.y.y:9300}, targetNode={elastic01}{x.x.x.x}{x.x.x.x:9300}}]}
org.elasticsearch.transport.NodeNotConnectedException: [elastic01][x.x.x.x:9300] Node not connected
at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151)
....
[2019-07-03T03:57:29,179][INFO ][o.e.c.c.Coordinator] [elastic03]
setting initial configuration to VotingConfiguration{ID elastic01 ,{bootstrap-
placeholder}-elastic02,ID elastic03}
[2019-07-03T03:57:29,180][INFO ][o.e.c.c.JoinHelper] [elastic03]
failed to join {elastic01}{x.x.x.x}{x.x.x.x:9300}
with JoinRequest{sourceNode={elastic03}{y.y.y.y}{y.y.y.y:9300},
optionalJoin=Optional[Join{term=1, lastAcceptedTerm=0, lastAcceptedVersion=0, sourceNode=
{elastic03}{y.y.y.y}{y.y.y.y:9300}, targetNode={elastic01}{x.x.x.x}{x.x.x.x:9300}}]}
org.elasticsearch.transport.NodeNotConnectedException: [elastic01][x.x.x.x:9300] Node not connected
at org.elasticsearch.transport.ConnectionManager.getConnection(ConnectionManager.java:151)
....
[2019-07-03T03:57:29,318][INFO ][o.e.c.s.MasterService] [elastic03]
elected-as-master ([2] nodes joined)[{elastic03}{y.y.y.y}{y.y.y.y:9300} elect leader,
{elastic01}{x.x.x.x}{x.x.x.x:9300} elect leader,
_BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 2, version: 1, reason: master node changed
{previous [], current [{elastic03}{y.y.y.y}{y.y.y.y:9300}}]}, added {{elastic01}{x.x.x.x}{x.x.x.x:9300},}
[2019-07-03T03:57:29,410][INFO ][o.e.c.c.CoordinationState] [elastic03]
cluster UUID set to [oQs2zr6XTM6spzQSvJ079w]
[2019-07-03T03:57:29,463][INFO ][o.e.c.s.ClusterApplierService] [elastic03]
master node changed {previous [], current [{elastic03}{y.y.y.y}{y.y.y.y:9300}]},
added {{elastic01}{x.x.x.x}{x.x.x.x:9300},}, term: 2, version: 1, reason: Publication{term=2, version=1}
[2019-07-03T03:57:29,538][INFO ][o.e.h.AbstractHttpServerTransport] [elastic03]
publish_address {y.y.y.y:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}, {y.y.y.y:9200}
[2019-07-03T03:57:29,539][INFO ][o.e.n.Node] [elastic03]
started
[2019-07-03T03:57:29,559][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [elastic03]
Failed to clear cache for realms [[]]
[2019-07-03T03:57:29,618][INFO ][o.e.g.GatewayService] [elastic03]
recovered [0] indices into cluster_state
...
[2019-07-03T03:57:30,255][INFO ][o.e.c.s.MasterService] [elastic03]
node-join[{elastic02}{z.z.z.z}{z.z.z.z:9300} join existing leader], term: 2, version: 8, reason: added
{{elastic02}{z.z.z.z}{z.z.z.z:9300},}
[2019-07-03T03:57:30,543][INFO ][o.e.c.s.ClusterApplierService] [elastic03]
added {{elastic02}{z.z.z.z}{z.z.z.z:9300},}, term: 2, version: 8, reason: Publication{term=2, version=8}
[2019-07-03T03:57:30,749][INFO ][o.e.l.LicenseService] [elastic03]
license [] mode [basic] - valid
Now the cluster is up, but .kibana throws some errors:
[2019-07-03T03:57:52,002][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic03]
[.kibana_task_manager] creating index, cause [auto(bulk api)], templates [.kibana_task_manager], shards
[1]/[1], mappings [_doc]
[2019-07-03T03:57:53,018][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic03]
[.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [_doc]
[2019-07-03T03:57:53,279][INFO ][o.e.c.m.MetaDataCreateIndexService] [elastic03]
[.kibana] creating index, cause [auto(bulk api)], templates [], shards [1]/[1], mappings []
[2019-07-03T03:57:53,382][DEBUG][o.e.a.a.i.a.TransportIndicesAliasesAction] [elastic03]
failed to perform aliases
org.elasticsearch.indices.InvalidAliasNameException: Invalid alias name [.kibana],
an index exists with the same name as the alias
at org.elasticsearch.cluster.metadata.AliasValidator.validateAlias(AliasValidator.java:93)
...
@tbuchier Thanks a lot for the detailed bug report!
Just to confirm: you have a cluster of 3 ES nodes; how many Kibana nodes are you running, or is it just one?
We bootstrap the clusters from a golden image containing Kibana + Elasticsearch.
So there are 3 Kibanas running (we will probably disable one of them and keep 2 for HA / load balancing later).
Elasticsearch's data folder is completely clean before instantiation (proper bootstrap).
But maybe not /var/lib/kibana, which contains the UUID, so they might share the same one. But that only affects monitoring, right?
Could you post the logs of all three Kibana instances for a cluster in this broken state?
I won't have access to the environment until Monday.
From what I remember, nothing was logged (since I have logging.quiet: true).
I will post the Kibana logs on Monday.
I found 3 more topics on the Elastic forums with users who all seem to face the same problem:
All on 7+: index patterns can be created over and over from the UI, because the UI cannot find the objects already saved in the index:
https://discuss.elastic.co/t/created-index-pattern-is-not-visible-in-kibana-7-0-1/184098/
https://discuss.elastic.co/t/i-cant-create-indexes-patterns-with-eck/184194/
https://discuss.elastic.co/t/kibana-7-0-1-wont-lad-index-pattern/187934/
It looks like some kind of race condition causes the .kibana index to end up with the mapping {"type": {"type": "text"}}
instead of {"type": {"type": "keyword"}}.
I have tried countless times to create a 3-node ES + Kibana cluster on my local machine, but I have not been able to reproduce a mapping where the "type" property is set to "text".
I can confirm that manually creating a mapping with {"type": {"type": "text"}}
produces the symptoms described here and in the linked discussion threads, such as the "Fielddata is disabled on text fields by default" error.
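For anyone who wants to check whether a cluster is in this broken state, here is a quick sketch (assuming Elasticsearch is reachable on localhost:9200) using the field mapping API; the second command shows how one might reproduce the bad mapping by hand on a throwaway index (`.kibana_broken` is a hypothetical name):

```shell
# Show how the top-level "type" field of the .kibana index is mapped.
# A healthy 7.x index maps it as "keyword"; a broken one shows "text".
curl -s "http://localhost:9200/.kibana/_mapping/field/type?pretty"

# Create a throwaway index with the bad mapping to observe the
# "Fielddata is disabled on text fields" symptoms when sorting on it:
curl -s -X PUT "http://localhost:9200/.kibana_broken" \
  -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "type": { "type": "text" }
    }
  }
}'
```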
Thanks so much for the detailed debugging help, @tbuchier! Still reading through it, but out of curiosity, do you ping the Kibana server in a loop in your script to determine whether it has started?
I have seen this happen before, and the randomness factor suggests to me that it is some sort of race condition, but what could be racing? My assumption is that a request to the Kibana server races against migration completion: (if security is enabled) it tries to load the uiSettings service, which auto-creates the config saved object before the .kibana index is actually created, causing the index to be created via dynamic mapping with {"type": "text"} for the type field...
This wasn't possible before, because we didn't even accept HTTP requests until migrations were complete, but with the transition to the new platform the order of operations has changed slightly: migrations now run after HTTP starts, which means routes can be hit before the savedObjects service is actually available, which could lead to timing-based problems.
Edit: we can verify this by dumping the mappings and the documents in the .kibana index when this error is encountered. If the index does not contain any config document, then I am pretty sure this is what is happening.
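The verification described in the edit above boils down to two requests (a sketch, assuming a default cluster on localhost:9200):

```shell
# 1. Dump the mappings of the .kibana index; in the broken state the
#    "type" field is mapped as "text" instead of "keyword".
curl -s "http://localhost:9200/.kibana/_mapping?pretty"

# 2. Look for the config saved object; if this returns no hits, the index
#    was auto-created by a plugin write before migrations ran.
curl -s "http://localhost:9200/.kibana/_search?q=type:config&pretty"
```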
I was able to reproduce this problem in a 7.1.1 environment. Cluster details:
We first hit this problem when we had to stop all Elasticsearch nodes due to a hardware failure (Kibana was not stopped, though). We deleted everything in the data directories of all Elasticsearch nodes and restarted them. Kibana was not stopped during the full cluster restart.
We were able to reproduce the problem by deleting the .kibana*
indices without stopping the Kibana service.
To fix the problem we took the following steps:
Hello!
I spawned clusters this morning until I hit the problem again (on the 3rd one):
@rudolf
Regarding the Kibana logs: it does indeed look like a race condition:
.kibana_1 and .kibana_2 were both created, and on Kibana 1 I got the following error:
Invalid alias name [.kibana], an index exists with the same name as the alias
And all the Kibanas show:
Another Kibana instance appears to be migrating the index. Waiting for that migration to complete.
@spalger
Regarding the .kibana mapping: it is indeed empty:
Edit: the steps mentioned by @navneet83:
To fix it in our script, we enable only 1 Kibana during bootstrap; once .kibana_1 is successfully created, the script starts the other instances.
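A rough sketch of that bootstrap ordering (host and service names are hypothetical; adapt to your provisioning tooling):

```shell
#!/bin/sh
# Start a single Kibana instance first, so only it runs the saved-object
# migrations against Elasticsearch.
systemctl start kibana

# Wait until the migration target index (.kibana_1) exists before
# starting the remaining instances.
until curl -s -o /dev/null -w '%{http_code}' "http://localhost:9200/.kibana_1" | grep -q '^200$'; do
  sleep 5
done

# Only now start the other Kibana instances (hypothetical host names).
ssh kibana02 systemctl start kibana
ssh kibana03 systemctl start kibana
```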
@tbuchier I've been able to reproduce the problem, and as spalger guessed there is a race condition in the migrations system. We block all operations against Elasticsearch until index initialization and migrations have completed, but a logic bug allowed operations to proceed even though initialization and migrations were still in progress. This caused some plugins to start writing to the .kibana
index, which Elasticsearch then auto-created with incorrect mappings.
The good news is that this has been fixed and released in 7.2.0 (https://github.com/elastic/kibana/pull/37674).
Thanks for helping to debug this and for linking all the discussion topics to this issue!
@rudolf Hi, I am facing this issue on 7.2.0 as well. Kibana repeatedly asks for an index pattern, and the ES logs show the fielddata error.
"Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [process.name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.",
"at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:711) ~[elasticsearch-7.2.0.jar:7.2.0]",
"at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:116) ~[elasticsearch-7.2.0.jar:7.2.0]",
@ntsh999 We only use GitHub for reproducible bug reports. If you can reproduce this behavior on 7.2, please open a new issue on GitHub and share the steps. If you are looking for help, however, please start a new topic on our forums at https://discuss.elastic.co/ and include all logs from Elasticsearch and Kibana as well as any other relevant information, such as how you created the cluster and whether you upgraded from an earlier version of the ELK stack.
For those who find this thread, here is what I did on my cluster to make it work:
PUT /.kibana
{
"aliases": {},
"mappings": {
"properties": {
"config": {
"properties": {
"buildNum": {
"type": "long"
}
}
},
"index-pattern": {
"properties": {
"fields": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"timeFieldName": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"title": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
},
"type": {
"type": "keyword",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
},
"updated_at": {
"type": "date"
}
}
},
"settings": {
"index": {
"number_of_shards": "5",
"number_of_replicas": "1"
}
}
}
(make sure the numbers of shards and replicas match your needs)
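After applying the PUT above (and before restarting Kibana), you can sanity-check that the top-level type field now comes back as keyword (sketch, assuming Elasticsearch on localhost:9200):

```shell
# Expect "type": "keyword" in the field mapping for "type".
curl -s "http://localhost:9200/.kibana/_mapping/field/type?pretty"
```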
@allan-simon Awesome! That worked great for me!
@allan-simon Thank you too, you saved my evening.
@allan-simon Cheers! I spent ages tonight trying to figure this out on the AWS Elasticsearch service before finding this post, which worked perfectly!