
[DOC] Resources management and volume claim template #1252

Merged · 11 commits · Jul 22, 2019

Conversation

@barkbay (Contributor) commented on Jul 16, 2019

This PR adds documentation about resources management and volume claim templates.
Related to #1031

Note: I'm using camel case for API objects, as that is best practice in the K8s documentation. I'm not sure about our policy on that point.

Resolved review threads (outdated): docs/managing-compute-resources.asciidoc, docs/elasticsearch-spec.asciidoc
@anyasabo (Contributor) commented:

It looks like I was stepping on your toes a little bit, since I had started on the pod template section a few days ago and went ahead and did some of the sections that also fall under the pod template umbrella:
#1245

I'm good with merging yours first; then I can update mine as necessary to fit around yours, unless you have a different suggestion.

[id="{p}-volume-claim-templates"]
=== Volume Claim Templates

By default the operator creates a `PVC` with a capacity of 1Gi for every Pod in an Elasticsearch cluster. This is to ensure that there is no data loss if a Pod is deleted.
Review comment from a Contributor:
Might be worth spelling out what a PVC is before using it in this doc
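For context, a volume claim template override on the Elasticsearch resource looks roughly like the sketch below. This is illustrative only; the `apiVersion`, node section layout and claim name reflect the general CRD shape around the 0.9 release, not a verbatim excerpt from the docs added in this PR.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.2.0
  nodes:
  - nodeCount: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data   # assumed claim name; replaces the default 1Gi claim
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
        storageClassName: standard  # any StorageClass available in the cluster
```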

@thbkrkr added the >docs Documentation label on Jul 18, 2019
@sebgl (Contributor) left a comment:

LGTM

@barkbay merged commit 8a19795 into elastic:master on Jul 22, 2019
barkbay added a commit to barkbay/cloud-on-k8s that referenced this pull request Jul 23, 2019
* Add resources and persistent volume templates documentation
barkbay added a commit that referenced this pull request Jul 23, 2019
* Add resources and persistent volume templates documentation
sebgl added a commit that referenced this pull request Jul 24, 2019
* Use the setvmmaxmapcount initcontainer by default in E2E tests (#1300)

Let's keep our default defaults :)

The setting is disabled explicitly for E2E tests where we enable a
restricted security context.
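To illustrate what this refers to: the privileged init container pattern for raising `vm.max_map_count` typically looks like the snippet below (a generic example of the common Elasticsearch-on-Kubernetes approach, not the operator's exact manifest).

```yaml
# Illustrative only: privileged init container bumping vm.max_map_count,
# the usual prerequisite for Elasticsearch's mmap-based store. A restricted
# security context / PodSecurityPolicy rejects privileged: true, which is
# why the setting has to be disabled in those E2E tests.
initContainers:
- name: sysctl
  image: busybox
  securityContext:
    privileged: true
  command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
```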

* Add docs for plugins, custom configuration files and secure settings (#1298)

* Allow license secret webhook to fail (#1301)

Webhooks on core k8s objects are just too debilitating in case our
webhook service fails. This sets the failure policy for the secret
webhook to `Ignore`, to strike a balance between UX (immediate feedback)
and keeping the user's k8s cluster in a working state. We also have an
additional validation run at the controller level, so this does not allow
circumventing our validation logic.
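In admission webhook terms, this means setting `failurePolicy: Ignore` on the webhook registration. A hypothetical sketch (resource, webhook and service names are made up for illustration):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: license-secret-webhook            # hypothetical name
webhooks:
- name: license-secrets.k8s.elastic.co    # hypothetical name
  failurePolicy: Ignore                   # admit the object if the webhook service is unreachable
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["secrets"]
  clientConfig:
    service:
      name: elastic-webhook-service       # hypothetical
      namespace: elastic-system
      path: /validate-license-secrets     # hypothetical
```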

* Revert "Use the setvmmaxmapcount initcontainer by default in E2E tests (#1300)" (#1302)

This reverts commit fff1526.
This commit is breaking our E2E tests chain, which deploys a
PodSecurityPolicy by default. Any privileged init container will not
work.

I'll open an issue for a longer-term fix to properly handle this.

* Update quickstart (#1307)

* Update the name of the secret for the elastic user
* Bump the Elastic Stack version from 7.1.0 to 7.2.0

* Change Kibana readiness endpoint to return a 200 OK (#1309)

The previous endpoint returned an HTTP 302 status code. While this is fine for
Kubernetes, some derived systems like GCP LoadBalancers mimic the
container readiness check for their own readiness check, and GCP
LoadBalancers only accept a 200 status.

It's not up to us to adapt GCP LoadBalancers to K8s, but this is a
fairly trivial fix.
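For context, the readiness check in question is a standard `httpGet` probe; Kubernetes treats any status from 200 up to (but not including) 400 as success, which is why the 302 was fine for it. A sketch (the exact path and timings the operator uses are assumptions):

```yaml
readinessProbe:
  httpGet:
    scheme: HTTP
    port: 5601
    path: /login          # assumption: an endpoint that answers 200 rather than a 302 redirect
  initialDelaySeconds: 10
  periodSeconds: 10
```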

* Fix pod_forwarder to support two part DNS names, adjust e2e http_client (#1297)

* Fix pod_forwarder to support two part DNS names, adjust e2e http_client url

* Revert removing .svc in e2e http_client

* [DOC] Resources management and volume claim template (#1252)

* Add resources and persistent volume templates documentation

* Ignore resources reconciled by older controllers (#1286)

* Document PodDisruptionBudget section of the ES spec (#1306)

* Document PodDisruptionBudget section of the ES spec

I suspect this might slightly change in the future depending on how we
handle the readiness check, so I'm keeping this doc minimal for now:

* what is a PDB, briefly (with a link)
* default PDB we apply
* how to set a different PDB
* how to disable the default PDB
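As a rough illustration of the kind of object involved (a generic `policy/v1beta1` example; the operator's actual default name, selector and values are not reproduced here):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: quickstart-es-default              # hypothetical name
spec:
  minAvailable: 2                          # illustrative value
  selector:
    matchLabels:
      elasticsearch.k8s.elastic.co/cluster-name: quickstart   # assumed label
```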

* Move version out from Makefile (#1312)

* Add release note generation tool (#1314)

* no external dependencies
* inspects PRs by version label
* generates structured release notes in asciidoc grouped by type label

* Add console output to standalone apm sample (#1321)

* Update Quickstart to 0.9.0 (#1317)

* Update doc (#1319)

* Update persistent storage section
* Update kibana localhost url to use https
* Update k8s resources names in accessing-services doc
* Mention SSL browser warning
* Fix bulleted list

* Add CI job for nightly builds (#1248)

* Move version to a file

* Add CI implementation

* Update VERSION

* Depend on another PR for moving out version from Makefile

* Update Jenkinsfile

* Don't build and push operator image in bootstrap-gke (#1332)

We don't need to do that anymore, since we don't use an init container
based on the operator image.

* Remove Docker image publishing from devops-ci (#1339)

* Suppress output of certain commands from Makefile (#1342)

* Document how to disable TLS (#1341)

* Use new credentials for Docker registry (#1346)

* Workaround controller-runtime webhook upsert bug (#1337)

* Fix docs build on PR job (#1351)

* Fix docs build on PR job

* Cleanup workspace before doing other steps

* APM: remove "output" element and add elasticsearchRef (#1345)

* Don't rely on buggy metaObject Kind (#1324)

* Don't rely on buggy metaObject Kind

A bug in our client implementation may clear the object's Kind in
certain scenarios. See
kubernetes-sigs/controller-runtime#406.

Let's avoid that by returning a fixed, constant Kind from a method call on
the resource.
Labels: >docs Documentation · Projects: none · 4 participants