Kibana exits with a fatal crash if ES cluster status is red #33316

Closed
bhavyarm opened this issue Mar 15, 2019 · 10 comments
Labels
bug (Fixes for quality problems that affect the customer experience), Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc)

Comments

@bhavyarm
Contributor

bhavyarm commented Mar 15, 2019

Kibana version: 7.0.0 rc1 BC1

Elasticsearch version: 7.0.0 rc1 BC1

Server OS version: darwin_x86_64

Original install method (e.g. download page, yum, from source, etc.): from staging

Describe the bug:
If the ES cluster status is red, Kibana exits with a fatal error. Multiple restarts of Kibana and ES do not fix the error.

Please note this bug was caught by accident.

Steps to reproduce:

  1. Start ES/Kibana 6.7.0 locally
  2. Forget that the 6.7.0 stack is running and start ES/Kibana 7.0.0
  3. Stop ES/Kibana 6.7.0 and restart ES/Kibana 7.0.0
  4. The ES cluster remains red and Kibana exits with a fatal error

ES logs:

[2019-03-15T09:53:46,180][WARN ][r.suppressed             ] [bhavyarajumandya] path: /.kibana/_doc/config%3A7.0.0-rc1, params: {index=.kibana, id=config:7.0.0-rc1}
org.elasticsearch.action.NoShardAvailableActionException: No shard available for [get [.kibana][_doc][config:7.0.0-rc1]: routing [null]]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.perform(TransportSingleShardAction.java:233) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction$AsyncSingleAction.start(TransportSingleShardAction.java:210) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:103) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.single.shard.TransportSingleShardAction.doExecute(TransportSingleShardAction.java:62) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:145) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:122) [x-pack-security-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:143) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:121) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:64) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:393) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:491) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.rest.action.document.RestGetAction.lambda$prepareRequest$0(RestGetAction.java:97) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:113) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:69) [x-pack-security-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:240) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:337) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:174) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:317) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:367) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:296) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:66) [transport-netty4-client-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:31) [transport-netty4-client-7.0.0-rc1.jar:7.0.0-rc1]
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:58) [transport-netty4-client-7.0.0-rc1.jar:7.0.0-rc1]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-03-15T09:53:46,182][DEBUG][o.e.a.s.TransportSearchAction] [bhavyarajumandya] All shards failed for phase: [query]
[2019-03-15T09:53:46,183][WARN ][r.suppressed             ] [bhavyarajumandya] path: /.kibana_task_manager/_search, params: {ignore_unavailable=true, index=.kibana_task_manager}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:296) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:139) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:259) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:105) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:251) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:172) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-03-15T09:53:46,185][WARN ][r.suppressed             ] [bhavyarajumandya] path: /.kibana/_search, params: {size=1000, ignore_unavailable=true, index=.kibana, filter_path=hits.hits._id}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:296) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:139) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:259) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:105) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:251) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:172) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-03-15T09:53:46,908][DEBUG][o.e.a.s.TransportSearchAction] [bhavyarajumandya] All shards failed for phase: [query]
[2019-03-15T09:53:46,908][WARN ][r.suppressed             ] [bhavyarajumandya] path: /.kibana/_count, params: {index=.kibana}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:296) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:139) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:259) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:105) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:251) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:172) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-03-15T09:54:06,781][WARN ][r.suppressed             ] [bhavyarajumandya] path: /.kibana_task_manager/_doc/oss_telemetry-vis_telemetry, params: {refresh=true, index=.kibana_task_manager, id=oss_telemetry-vis_telemetry}
org.elasticsearch.action.UnavailableShardsException: [.kibana_task_manager][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.kibana_task_manager][0]] containing [index {[.kibana_task_manager][_doc][oss_telemetry-vis_telemetry], source[{"type":"task","task":{"taskType":"vis_telemetry","state":"{\"stats\":{},\"runs\":0}","params":"{}","attempts":0,"scheduledAt":"2019-03-15T13:53:06.776Z","runAt":"2019-03-15T13:53:06.776Z","status":"idle"},"kibana":{"uuid":"16ce89b7-a834-44a6-bbd7-4348debc28a3","version":7000099,"apiVersion":1}}]}] and a refresh]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:959) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryIfUnavailable(TransportReplicationAction.java:836) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:788) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:919) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:549) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-7.0.0-rc1.jar:7.0.0-rc1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]

Kibana logs:

error  [13:53:46.194] [warning][stats-collection] [no_shard_available_action_exception] No shard available for [get [.kibana][_doc][config:7.0.0-rc1]: routing [null]] :: {"path":"/.kibana/_doc/config%3A7.0.0-rc1","query":{},"statusCode":503,"response":"{\"error\":{\"root_cause\":[{\"type\":\"no_shard_available_action_exception\",\"reason\":\"No shard available for [get [.kibana][_doc][config:7.0.0-rc1]: routing [null]]\"}],\"type\":\"no_shard_available_action_exception\",\"reason\":\"No shard available for [get [.kibana][_doc][config:7.0.0-rc1]: routing [null]]\"},\"status\":503}"}
    at respond (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:308:15)
    at checkRespForFailure (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:267:7)
    at HttpConnector.<anonymous> (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
    at IncomingMessage.wrapper (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  log   [13:53:46.194] [warning][stats-collection] Unable to fetch data from kibana_settings collector
  log   [13:53:46.916] [error][status][plugin:spaces@7.0.0] Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed
  log   [13:53:46.918] [fatal][root] { [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"graph-workspace\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.graph-workspace\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.0.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
    at respond (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:308:15)
    at checkRespForFailure (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/src/lib/transport.js:267:7)
    at HttpConnector.<anonymous> (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
    at IncomingMessage.wrapper (/Users/bhavyarajumandya/Desktop/rc1_release_7.0.0_bc1/kibana-7.0.0-rc1-darwin-x86_64/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 503,
  displayName: 'ServiceUnavailable',
  message:
   'all shards failed: [search_phase_execution_exception] all shards failed',
  path: '/.kibana/_count',
  query: {},
  body:
   { error:
      { root_cause: [],
        type: 'search_phase_execution_exception',
        reason: 'all shards failed',
        phase: 'query',
        grouped: true,
        failed_shards: [] },
     status: 503 },
  statusCode: 503,
  response:
   '{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}',
  toString: [Function],
  toJSON: [Function],
  isBoom: true,
  isServer: true,
  data: null,
  output:
   { statusCode: 503,
     payload:
      { message:
         'all shards failed: [search_phase_execution_exception] all shards failed',
        statusCode: 503,
        error: 'Service Unavailable' },
     headers: {} },
  reformat: [Function],
  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }

 FATAL  [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"graph-workspace\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.graph-workspace\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.0.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}

Cluster status:

curl localhost:9200/_cluster/health
{"cluster_name":"elasticsearch","status":"red","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":68,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":0.0}

The .kibana document count is zero:

curl localhost:9200/_cat/indices?v
health status index                        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
red    open   shakespeare                  KofYJazCQseaCUP7mhROzg   5   1                                                  
red    open   .kibana_1                    UXIswmUhRBCHx9NBJcqu6g   1   0                                                  
red    open   logstash-2015.05.20          17JDFwe2RpmVDCmpuyCabA   5   1                                                  
red    open   kibana_sample_data_ecommerce 92LRjFVyRpaL2iY3YmPNzw   1   0                                                  
red    open   kibana_sample_data_logs      K_LFLnw4S1uDzTTyV5yQYg   1   0                                                  
red    open   kibana_sample_data_flights   sp7-pjldSgKIjwbKn6iOzA   1   0                                                  
red    open   heartbeat-2019.03.15         hkEHtNjKSKWJQU-WfHPJTA   5   1                                                  
red    open   logstash-2015.05.18          EQMJpkBMQ_6_ztwCg6pOqg   5   1                                                  
red    open   logstash-2015.05.19          TRDSZJJFRBuH3MHcI9Ixhw   5   1                                                  
red    open   .kibana_task_manager         l5KLYcfTRAa1GYveiObu0A   1   0                                                  
red    open   bank                         gO42NWPxRqOLjRr0LVZwiw   5   1                                                  
red    open   .security-6                  ssOZXn4JShi6Iq3K9G-cPw   1   0                                                  
red    open   metricbeat-6.7.0-2019.03.15  lMOwI3GvSd6MLkw2Y4OP6g   1   1           
@bhavyarm added the bug (Fixes for quality problems that affect the customer experience) and Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc) labels Mar 15, 2019
@elasticmachine
Contributor

Pinging @elastic/kibana-platform

@mikecote
Contributor

As discussed with @bhavyarm, I can reproduce something similar on my machine: I get "All shards failed for phase: [query]" errors when trying to load a dashboard, but I'm not able to reproduce a crash of the Kibana instance. The next assumption is that the task manager may be running a query and crashing the Kibana instance when that error is returned.

@mikecote
Contributor

After some investigation, the error comes from https://github.com/elastic/kibana/blob/master/src/legacy/server/saved_objects/migrations/core/elastic_index.ts#L197; changing the throw e; to return false; kept the server up and cycling through errors for 5+ minutes.
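
For illustration, a minimal TypeScript sketch of that direction (the function name, parameters, and query below are illustrative, not copied from elastic_index.ts): catch the Elasticsearch failure during the "are migrations up to date?" count query and report the index as not ready, so the caller keeps retrying instead of the process exiting.

// Hypothetical sketch, not the actual Kibana source.
type CallCluster = (method: string, params: object) => Promise<{ count: number }>;

async function migrationsUpToDate(callCluster: CallCluster, index: string): Promise<boolean> {
  try {
    // Count documents that still need to be migrated (the real query is elided in this sketch).
    const response = await callCluster('count', { index });
    return response.count === 0;
  } catch (e) {
    // Current behavior is effectively `throw e;`, which bubbles up as a fatal
    // error and the Kibana process exits while the cluster is red.
    // Returning false instead keeps the server up and lets it retry the check
    // once shards become available again.
    return false;
  }
}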

@bhavyarm
Contributor Author

bhavyarm commented Apr 5, 2019

Saw the error again on 7.0.0 BC1, but without any version mixups. Kibana restarted without any problems on the second try.

@tsullivan
Member

I got a crash as well: Kibana started up and reached green status initially, then went red because Elasticsearch queries returned a 503 status. Shortly after, Kibana crashed:

% yarn start --config=config/kibana.xpack.dev.yml --no-base-path
yarn run v1.15.2
$ node --trace-warnings --trace-deprecation scripts/kibana --dev  --config=config/kibana.xpack.dev.yml --no-base-path
  log   [15:15:13.515] [warning][plugins-discovery] Explicit plugin paths [/Users/tsullivan/elastic/kibana/x-pack] are only supported in development. Relative imports will not work in production.
  log   [15:15:13.580] [info][plugins-service] Plugin initialization disabled.
 watching for changes  (4447 files)
optmzr    log   [15:15:15.806] [warning][plugins-discovery] Explicit plugin paths [/Users/tsullivan/elastic/kibana/x-pack] are only supported in development. Relative imports will not work in production.
server    log   [15:15:15.850] [warning][plugins-discovery] Explicit plugin paths [/Users/tsullivan/elastic/kibana/x-pack] are only supported in development. Relative imports will not work in production.
optmzr    log   [15:15:15.958] [info][plugins-service] Plugin initialization disabled.
server    log   [15:15:15.971] [info][plugins-system] Setting up [3] plugins: [testbed,translations,data]
server    log   [15:15:15.972] [info][plugins][testbed] Setting up plugin
server    log   [15:15:15.973] [info][plugins][translations] Setting up plugin
server    log   [15:15:15.973] [info][data][plugins] Setting up plugin
server    log   [15:15:15.974] [info][plugins-system] Starting [3] plugins: [testbed,translations,data]
server    log   [15:15:19.863] [warning][config][deprecation] You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning.
optmzr    log   [15:15:19.865] [warning][config][deprecation] You should set server.basePath along with server.rewriteBasePath. Starting in 7.0, Kibana will expect that all requests start with server.basePath rather than expecting you to rewrite the requests in your reverse proxy. Set server.rewriteBasePath to false to preserve the current behavior and silence this warning.
server    log   [15:15:19.985] [warning][plugin] ENOENT: Unable to scan directory for plugins "/Users/tsullivan/elastic/kibana/plugins"
optmzr    log   [15:15:19.985] [warning][plugin] ENOENT: Unable to scan directory for plugins "/Users/tsullivan/elastic/kibana/plugins"
server    log   [15:15:43.583] [info][optimize] Waiting for optimizer to be ready
optmzr    log   [15:15:43.884] [info][optimize:dynamic_dll_plugin] Started dynamic dll plugin tasks
optmzr    log   [15:15:43.968] [info][optimize] Optimization started
optmzr    log   [15:15:43.973] [info] Plugin initialization disabled.
server    log   [15:15:47.547] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/core/public/index.scss (theme=light)
server    log   [15:15:47.547] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/core/public/index.scss (theme=dark)
server    log   [15:15:47.547] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/spaces/public/index.scss (theme=light)
server    log   [15:15:47.548] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/spaces/public/index.scss (theme=dark)
server    log   [15:15:47.548] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/security/public/index.scss (theme=light)
server    log   [15:15:47.548] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/security/public/index.scss (theme=dark)
server    log   [15:15:47.548] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/watcher/public/index.scss (theme=light)
server    log   [15:15:47.548] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/watcher/public/index.scss (theme=dark)
server    log   [15:15:47.549] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/maps/public/index.scss (theme=light)
server    log   [15:15:47.549] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/maps/public/index.scss (theme=dark)
server    log   [15:15:47.549] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/canvas/public/style/index.scss (theme=light)
server    log   [15:15:47.549] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/canvas/public/style/index.scss (theme=dark)
server    log   [15:15:47.549] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/license_management/public/index.scss (theme=light)
server    log   [15:15:47.549] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/license_management/public/index.scss (theme=dark)
server    log   [15:15:47.550] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/index_management/public/index.scss (theme=light)
server    log   [15:15:47.550] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/index_management/public/index.scss (theme=dark)
server    log   [15:15:47.550] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/upgrade_assistant/public/index.scss (theme=light)
server    log   [15:15:47.550] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/upgrade_assistant/public/index.scss (theme=dark)
server    log   [15:15:47.550] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/console/public/index.scss (theme=light)
server    log   [15:15:47.550] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/console/public/index.scss (theme=dark)
server    log   [15:15:47.551] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/dashboard_embeddable_container/public/index.scss (theme=light)
server    log   [15:15:47.551] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/dashboard_embeddable_container/public/index.scss (theme=dark)
server    log   [15:15:47.551] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/data/public/index.scss (theme=light)
server    log   [15:15:47.551] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/data/public/index.scss (theme=dark)
server    log   [15:15:47.551] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/embeddable_api/public/index.scss (theme=light)
server    log   [15:15:47.551] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/embeddable_api/public/index.scss (theme=dark)
server    log   [15:15:47.552] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/input_control_vis/public/index.scss (theme=light)
server    log   [15:15:47.552] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/input_control_vis/public/index.scss (theme=dark)
server    log   [15:15:47.552] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/inspector_views/public/index.scss (theme=light)
server    log   [15:15:47.552] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/inspector_views/public/index.scss (theme=dark)
server    log   [15:15:47.552] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/kibana/public/index.scss (theme=light)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/kibana/public/index.scss (theme=dark)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/metric_vis/public/index.scss (theme=light)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/metric_vis/public/index.scss (theme=dark)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/region_map/public/index.scss (theme=light)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/region_map/public/index.scss (theme=dark)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/metrics/public/index.scss (theme=light)
server    log   [15:15:47.553] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/metrics/public/index.scss (theme=dark)
server    log   [15:15:47.554] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/tagcloud/public/index.scss (theme=light)
server    log   [15:15:47.554] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/tagcloud/public/index.scss (theme=dark)
server    log   [15:15:47.554] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/table_vis/public/index.scss (theme=light)
server    log   [15:15:47.554] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/table_vis/public/index.scss (theme=dark)
server    log   [15:15:47.554] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/tile_map/public/index.scss (theme=light)
server    log   [15:15:47.554] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/tile_map/public/index.scss (theme=dark)
server    log   [15:15:47.555] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/timelion/public/index.scss (theme=light)
server    log   [15:15:47.555] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/timelion/public/index.scss (theme=dark)
server    log   [15:15:47.555] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/vega/public/index.scss (theme=light)
server    log   [15:15:47.555] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/vega/public/index.scss (theme=dark)
server    log   [15:15:47.555] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/vis_type_markdown/public/index.scss (theme=light)
server    log   [15:15:47.555] [info][scss] Compiled CSS: /Users/tsullivan/elastic/kibana/src/legacy/core_plugins/vis_type_markdown/public/index.scss (theme=dark)
server    log   [15:15:47.941] [info][status][plugin:kibana@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:47.946] [info][status][plugin:elasticsearch@8.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
server    log   [15:15:47.950] [info][status][plugin:xpack_main@8.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
server    log   [15:15:47.960] [info][status][plugin:telemetry@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:47.964] [info][status][plugin:spaces@8.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
server    log   [15:15:47.978] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended.
server    log   [15:15:48.006] [info][status][plugin:security@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.008] [info][status][plugin:watcher@8.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
server    log   [15:15:48.021] [info][status][plugin:dashboard_mode@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.022] [info][status][plugin:tile_map@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.024] [info][status][plugin:task_manager@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.028] [info][status][plugin:maps@8.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
server    log   [15:15:48.033] [info][status][plugin:interpreter@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.051] [info][status][plugin:canvas@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.055] [info][status][plugin:license_management@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.057] [info][status][plugin:index_management@8.0.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
server    log   [15:15:48.070] [info][status][plugin:console@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.072] [info][status][plugin:console_extensions@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.074] [info][status][plugin:notifications@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.083] [info][status][plugin:upgrade_assistant@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.110] [info][status][plugin:uptime@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.112] [info][status][plugin:oss_telemetry@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.116] [info][status][plugin:encrypted_saved_objects@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.117] [info][status][plugin:apm_oss@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.121] [info][status][plugin:data@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:48.137] [info][status][plugin:metrics@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:49.241] [info][status][plugin:timelion@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:49.244] [info][status][plugin:ui_metric@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:49.247] [info][status][plugin:visualizations@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:49.288] [debug][exportTypes][reporting] Found exportType at /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/reporting/export_types/csv_from_savedobject/server/index.ts
server    log   [15:15:49.302] [debug][exportTypes][reporting] Found exportType at /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/reporting/export_types/csv/server/index.js
server    log   [15:15:49.345] [debug][exportTypes][reporting] Found exportType at /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/reporting/export_types/png/server/index.js
server    log   [15:15:49.504] [debug][exportTypes][reporting] Found exportType at /Users/tsullivan/elastic/kibana/x-pack/legacy/plugins/reporting/export_types/printable_pdf/server/index.js
server    log   [15:15:50.179] [info][status][plugin:elasticsearch@8.0.0] Status changed from yellow to green - Ready
server    log   [15:15:50.230] [info][license][xpack] Imported license information from Elasticsearch for the [data] cluster: mode: platinum | status: active | expiry date: 2030-09-29T16:59:59-07:00
server    log   [15:15:50.231] [info][status][plugin:xpack_main@8.0.0] Status changed from yellow to green - Ready
server    log   [15:15:50.232] [info][status][plugin:watcher@8.0.0] Status changed from yellow to green - Ready
server    log   [15:15:50.233] [info][status][plugin:index_management@8.0.0] Status changed from yellow to green - Ready
server    log   [15:15:50.259] [info][status][plugin:maps@8.0.0] Status changed from yellow to green - Ready
server    log   [15:15:51.653] [debug][browser-driver][reporting] Browser installed at /Users/tsullivan/elastic/kibana/data/headless_shell-darwin/headless_shell
server    log   [15:15:51.654] [debug][reporting] Browser type: chromium
server    log   [15:15:51.654] [debug][reporting] Chromium sandbox disabled: false
server    log   [15:15:51.686] [info][status][plugin:reporting@8.0.0] Status changed from uninitialized to green - Ready
server    log   [15:15:51.693] [debug][esqueue][reporting][worker] jy98h6rx1bcn594c106788pp - Created worker for reporting jobs
server    log   [15:15:51.712] [debug][reporting] Running on os "darwin", distribution "undefined", release "undefined"
server    log   [15:15:51.784] [debug][task_manager] Not installing .kibana_task_manager index template: version 8000099 already exists.
server    log   [15:15:51.907] [debug][reporting] Reporting plugin self-check ok!
server    log   [15:15:51.912] [debug][task_manager] Not installing .kibana_task_manager index template: version 8000099 already exists.
server    log   [15:15:51.965] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:15:54.974] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:15:57.987] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
optmzr    log   [15:15:59.063] [info][optimize:dynamic_dll_plugin] No need to compile client vendors dll
optmzr    log   [15:15:59.064] [info][optimize:dynamic_dll_plugin] Finished all dynamic dll plugin tasks
optmzr    log   [15:15:59.066] [info][optimize] Optimization success in 15.10 seconds
server    log   [15:16:00.997] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:04.012] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:07.029] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:10.044] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:13.051] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:16.061] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:19.080] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:21.916] [warning][maps] Error scheduling telemetry task, received Request Timeout after 30000ms
server    log   [15:16:21.928] [debug][task_manager] Not installing .kibana_task_manager index template: version 8000099 already exists.
server    log   [15:16:22.093] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:25.111] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:28.125] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:31.132] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:34.142] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:37.161] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:40.175] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:43.185] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:46.199] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:49.215] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:51.931] [warning][telemetry] Error scheduling task, received Request Timeout after 30000ms
server    log   [15:16:52.232] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:55.255] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:16:58.269] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:17:01.287] [error][task_manager] Failed to poll for work: [search_phase_execution_exception] all shards failed :: {"path":"/.kibana_task_manager/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
server    log   [15:17:02.178] [error][status][plugin:spaces@8.0.0] Status changed from yellow to red - all shards failed: [search_phase_execution_exception] all shards failed
server    log   [15:17:02.180] [fatal][root] { [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"space\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.space\":\"6.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"map\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.map\":\"7.2.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.3.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.3.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.0.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}
    at respond (/Users/tsullivan/elastic/kibana/node_modules/elasticsearch/src/lib/transport.js:315:15)
    at checkRespForFailure (/Users/tsullivan/elastic/kibana/node_modules/elasticsearch/src/lib/transport.js:274:7)
    at HttpConnector.<anonymous> (/Users/tsullivan/elastic/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)
    at IncomingMessage.wrapper (/Users/tsullivan/elastic/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)
    at IncomingMessage.emit (events.js:194:15)
    at endReadableNT (_stream_readable.js:1103:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  status: 503,
  displayName: 'ServiceUnavailable',
  message:
   'all shards failed: [search_phase_execution_exception] all shards failed',
  path: '/.kibana/_count',
  query: {},
  body:
   { error:
      { root_cause: [],
        type: 'search_phase_execution_exception',
        reason: 'all shards failed',
        phase: 'query',
        grouped: true,
        failed_shards: [] },
     status: 503 },
  statusCode: 503,
  response:
   '{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[]},"status":503}',
  toString: [Function],
  toJSON: [Function],
  isBoom: true,
  isServer: true,
  data: null,
  output:
   { statusCode: 503,
     payload:
      { message:
         'all shards failed: [search_phase_execution_exception] all shards failed',
        statusCode: 503,
        error: 'Service Unavailable' },
     headers: {} },
  reformat: [Function],
  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable' }
server    log   [15:17:02.200] [info][plugins-system] Stopping all plugins.
server    log   [15:17:02.200] [info][data][plugins] Stopping plugin
server    log   [15:17:02.200] [info][plugins][translations] Stopping plugin
server    log   [15:17:02.200] [info][plugins][testbed] Stopping plugin

 FATAL  [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"space\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.space\":\"6.6.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"map\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.map\":\"7.2.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"canvas-workpad\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.canvas-workpad\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.3.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.3.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.0.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}

 server crashed  with status code 1

@rudolf
Contributor

rudolf commented Jan 31, 2020

May have been fixed in #51324

@rudolf
Contributor

rudolf commented Jan 31, 2020

I was able to create a red cluster status by changing this line:

const settings = { number_of_shards: 1, auto_expand_replicas: '0-1' };

to:

const settings = { number_of_shards: 1, auto_expand_replicas: '0-1', 'index.routing.allocation.require._name': 'haskibanaindex' };
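
As a quick sanity check (a minimal sketch, not part of the repro steps above; it assumes Node 18+ with global fetch and an unsecured Elasticsearch on localhost:9200), you can read the index settings back to confirm the allocation requirement is in place:

// confirm-allocation-filter.ts - hedged sketch, not Kibana code.
// Reads the .kibana index settings and prints the allocation requirement, if any.
async function main(): Promise<void> {
  const res = await fetch('http://localhost:9200/.kibana/_settings');
  const body = (await res.json()) as Record<string, any>;
  for (const [indexName, config] of Object.entries(body)) {
    const requiredNode = config.settings?.index?.routing?.allocation?.require?._name;
    console.log(`${indexName}: index.routing.allocation.require._name = ${requiredNode ?? '(none)'}`);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});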

The following will start ES in a green state:

node scripts/es archive /Users/rudolf/Downloads/elasticsearch-7.5.0-darwin-x86_64.tar.gz -E node.name=haskibanaindex

Whereas this will result in a red ES cluster when Kibana starts up:

node scripts/es archive /Users/rudolf/Downloads/elasticsearch-7.5.0-darwin-x86_64.tar.gz -E node.name=nokibanaindex

I can confirm that a red ES cluster causes Kibana to crash in 7.3.0 and that it no longer crashes in 7.5.0 (fixed in #51324). However, even though Kibana no longer crashes in 7.5, it does not automatically recover when the ES cluster goes from RED -> GREEN; it requires a Kibana restart. This might be due to security and the way I'm reproducing this error.
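
For the RED -> GREEN case, a rough workaround while Kibana still needs a restart is to poll cluster health and only restart once Elasticsearch reports green. This is a minimal sketch against the REST API (assuming Node 18+ and an unsecured cluster on localhost:9200; the script name and polling interval are my own, not anything Kibana ships):

// wait-for-green.ts - hedged sketch, not Kibana's own recovery logic.
// Polls _cluster/health until the status is green; wait_for_status makes ES
// hold each request until the status is reached or the timeout expires.
async function waitForGreen(pollMs = 3000): Promise<void> {
  for (;;) {
    try {
      const res = await fetch(
        'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=30s'
      );
      const health = (await res.json()) as { status?: string };
      console.log(`cluster status: ${health.status}`);
      if (health.status === 'green') return;
    } catch (err) {
      console.warn('health check failed, retrying:', err);
    }
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}

waitForGreen().then(() => console.log('cluster is green, safe to restart Kibana'));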

@ntsh999

ntsh999 commented Apr 15, 2020

I am also facing this issue: the Kibana pod goes into CrashLoopBackOff with the error below. I am using Elastic Stack 7.2.0.

{"type":"log","@timestamp":"2020-04-15T16:12:19Z","tags":["fatal","root"],"pid":1,"message":"{ Error: [search_phase_execution_exception] all shards failed\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:315:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:274:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:166:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)\n    at IncomingMessage.emit (events.js:194:15)\n    at endReadableNT (_stream_readable.js:1103:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)\n  status: 503,\n  displayName: 'ServiceUnavailable',\n  message: '[search_phase_execution_exception] all shards failed',\n  path: '/.kibana/_count',\n  query: {},\n  body:\n   { error:\n      { root_cause: [],\n        type: 'search_phase_execution_exception',\n        reason: 'all shards failed',\n        phase: 'query',\n        grouped: true,\n        failed_shards: [] },\n     status: 503 },\n  statusCode: 503,\n  response:\n   '{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}',\n  toString: [Function],\n  toJSON: [Function] }"}

 FATAL  [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_count","query":{},"body":"{\"query\":{\"bool\":{\"should\":[{\"bool\":{\"must\":[{\"exists\":{\"field\":\"index-pattern\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.index-pattern\":\"6.5.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"visualization\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.visualization\":\"7.2.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"dashboard\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.dashboard\":\"7.0.0\"}}}}]}},{\"bool\":{\"must\":[{\"exists\":{\"field\":\"search\"}},{\"bool\":{\"must_not\":{\"term\":{\"migrationVersion.search\":\"7.0.0\"}}}}]}}]}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[]},\"status\":503}"}

@joshdover
Contributor

@rudolf did we decide not to fix this for 7.9?

@rudolf
Contributor

rudolf commented Jul 7, 2020

Both reports happen when we query the .kibana index for outdated documents to determine whether a migration is needed. If Elasticsearch is red, these queries fail and there's not much we can do about it. It was mitigated in v6.5 (#25255), where we added 30 retries spaced 1 second apart.

In v7.5.0 (#51324) we started retrying 503 errors like these indefinitely, so if the cluster eventually recovers the migration will succeed.
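
Reduced to a sketch, that approach looks roughly like this (this is not the code from #51324; the endpoint and the outdated-documents query are simplified from the crash log above, and it assumes Node 18+ with an unsecured cluster on localhost:9200): keep retrying the pre-migration count whenever Elasticsearch answers 503, and fail only on other errors.

// retry-on-503.ts - minimal sketch of "retry 503s indefinitely", not the actual migration code.
async function countOutdatedDocs(): Promise<number> {
  for (;;) {
    const res = await fetch('http://localhost:9200/.kibana/_count', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      // Simplified single-type version of the outdated-documents query from the log above.
      body: JSON.stringify({
        query: {
          bool: {
            must: [{ exists: { field: 'index-pattern' } }],
            must_not: [{ term: { 'migrationVersion.index-pattern': '6.5.0' } }],
          },
        },
      }),
    });
    if (res.status === 503) {
      // Shards unavailable (e.g. red cluster): wait a second and try again, indefinitely.
      console.warn('Elasticsearch unavailable (503), retrying in 1s...');
      await new Promise((resolve) => setTimeout(resolve, 1000));
      continue;
    }
    if (!res.ok) throw new Error(`count request failed with status ${res.status}`);
    const body = (await res.json()) as { count: number };
    return body.count;
  }
}

countOutdatedDocs().then((count) => console.log(`outdated documents: ${count}`));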

@rudolf rudolf closed this as completed Jul 7, 2020