[Fleet] /api/fleet/setup fails with HTTP 500 #234
Comments
Pinging @elastic/ingest-management (Team:Ingest Management)
While discussing this issue with @ph (thanks!), he suggested adding another setting for the Elasticsearch Docker image.
I think I will transfer this issue to the elastic-package repository, so we can fix it in all places.
Just to close the loop: by default Elasticsearch uses an index to store credentials, and that shard needs to go green before it is available. This means that if the host machine is over capacity, it can take a while to go green. When I was hit by this issue, I was not able to reproduce it easily, and adding timeouts or ad-hoc checks never really fixed the problem. If you look at Elasticsearch integration testing, it all uses the file realm as the backend for credentials; with the file realm there is no shard to replicate, so the tests are stable and fast. This is the strategy we have used in https://github.com/elastic/beats/blob/master/x-pack/libbeat/docker-compose.yml
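A minimal sketch of what that file-realm setup can look like in a compose file (service layout, image tag, file paths, and realm name are assumptions here, not copied verbatim from the beats compose file linked above):

```yaml
# docker-compose.yml (sketch): use a file realm so credentials do not depend
# on the .security index shards being allocated
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0  # assumed version
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      # File realm with the highest priority; users are read from config files,
      # so no index shard has to go green before authentication works.
      - xpack.security.authc.realms.file.file1.order=0
    volumes:
      # users / users_roles in the file-realm format (e.g. generated with the
      # elasticsearch-users CLI) checked in next to the compose file
      - ./elasticsearch/users:/usr/share/elasticsearch/config/users
      - ./elasticsearch/users_roles:/usr/share/elasticsearch/config/users_roles
    ports:
      - "9200:9200"
```

With this layout the users and users_roles files are generated once and committed with the test fixtures, so every test run starts with working credentials regardless of how long shard allocation takes.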
Please reopen if this is still an issue.
Hi,
while executing integration tests in package-storage (elastic/package-storage#824 (comment)), we spotted a problem with Kibana and Elasticsearch. Kibana failed with HTTP 500 due to an unavailable_shards_exception for the .security shard. The goal of this issue is to research whether a mitigation for this can be found.
Logs available at: elastic/package-storage#824 (comment)
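One mitigation that could be evaluated (a sketch only, not an agreed fix; the service names, credentials, and interval values are assumptions, and the depends_on condition syntax assumes a Compose version that supports it) is to gate Kibana startup on the .security index reporting green via the cluster health API:

```yaml
# docker-compose.yml (sketch): only start Kibana once the .security shards are allocated
services:
  elasticsearch:
    # image / environment as in the stack's existing compose file
    healthcheck:
      # Cluster health for the .security index; with wait_for_status the call
      # blocks until the status is reached or the timeout expires.
      test: ["CMD-SHELL", "curl -fsS -u elastic:changeme 'http://localhost:9200/_cluster/health/.security?wait_for_status=green&timeout=30s' | grep -q '\"status\":\"green\"'"]
      interval: 10s
      retries: 30
  kibana:
    # Kibana (and therefore /api/fleet/setup) only starts once Elasticsearch is healthy.
    depends_on:
      elasticsearch:
        condition: service_healthy
```

As noted in the earlier comment, checks like this only paper over slow shard allocation on overloaded hosts; switching the test stack to the file realm avoids the dependency on the .security index entirely.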