
Implement the kubectl plugin for the Postgres CustomResourceDefinition #519

Closed
VineethReddy02 opened this issue Mar 13, 2019 · 21 comments

@VineethReddy02
Contributor

@sdudoladov As this proposal doesn't have an issue tracker, I have created one, so everyone can share their views, design-related suggestions, and ideas for functionality such as delete, auto-completion of resource names, etc.

@sdudoladov
Member

sdudoladov commented Mar 14, 2019

@VineethReddy02 we expect you to contribute your own ideas; it is your proposal after all. Read the user/administrator documentation to see what is possible and think about how to automate it.

a few starting points to consider (not mandatory, or even strictly necessary):

# correct deletion of a PG cluster, with warnings/confirmation requests
kubectl postgresql delete your-cluster-name

# compare manifest with the actual state of the cluster stored in the operator
kubectl postgresql diff your-cluster-name 

# convenience alias for normal manifest editing 
kubectl postgresql update your-cluster-name

# trickier: aliases for common Patroni commands
kubectl postgresql reinit replica-name

# short name for the command 
kubectl pg ...
kubectl postgresql

# ideally autocompletion
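For context on how such a plugin would hang together: kubectl discovers plugins as executables named `kubectl-<name>` on the PATH, so a binary called `kubectl-pg` is invoked as `kubectl pg ...`. A minimal, hypothetical Go skeleton of the dispatch (the subcommand names are only the suggestions above, not a final design; `dispatch` is a made-up helper kept pure so it is easy to test):

```go
package main

import (
	"fmt"
	"os"
)

// dispatch routes a kubectl-pg invocation to a subcommand and returns the
// text to print. Real subcommands would talk to the cluster instead.
func dispatch(args []string) string {
	if len(args) == 0 {
		return "usage: kubectl pg <delete|diff|update> <cluster-name>"
	}
	switch args[0] {
	case "delete":
		if len(args) < 2 {
			return "delete: missing cluster name"
		}
		return fmt.Sprintf("would delete cluster %q (after confirmation)", args[1])
	default:
		return fmt.Sprintf("unknown subcommand %q", args[0])
	}
}

func main() {
	// kubectl strips the "pg" prefix, so os.Args[1:] holds the subcommand args.
	fmt.Println(dispatch(os.Args[1:]))
}
```

Dropping the compiled binary anywhere on the PATH as `kubectl-pg` is enough for `kubectl pg` to start working; no registration step is needed.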

@erthalion @FxKu @Jan-M pls have a look

@erthalion
Contributor

# trickier: aliases for common Patroni commands
kubectl postgresql reinit replica-name

I doubt it makes sense to implement something like this, since it has to be interactive and full of confirmations that you really want to do this (that's the point of patronictl). At the same time it would have to be implemented outside of patronictl, which means code duplication and an extra dependency.

@FxKu
Member

FxKu commented Apr 8, 2019

I'm also not so sure about wrapping patronictl commands. Some more ideas:

  • List databases managed by operator
  • Access Postgres logs (zkubectl logs only shows Patroni logs)
  • Connect to database (master or repl)
  • Add roles / users

@sdudoladov
Member

We are pleased to announce that this project was accepted to the GSoC 2019. Congratulations, @VineethReddy02 !

Vineeth, we are now in the community bonding period, and we would like to use this as an opportunity to introduce you to our community and vice versa. Our suggestion for this period is straightforward:

  • You should (if not already) familiarize yourself with the most important concepts behind the operator and prepare the development environment, to ensure you are ready to start coding in 2 weeks.
  • Finalize the deliverables:
  1. The project goal is to develop a kubectl plugin for the Postgres operator.
  2. The progress is to be reported weekly on Friday afternoons as a message formatted according to the template below. A comment in this issue will do, or you may go with the blogging platform of your choice.
  3. The real-time communication is to be done in the postgres-operator slack channel. Since we are an OSS project, we strongly prefer public communication and discourage private messages except in exceptional circumstances.
  4. The development is to be done in a separate PR to the operator repository. We believe moving the development to an independent repo would separate you too much from the community.
  5. We expect mergeable code at the end of each milestone. This means readable, documented code that does what it is supposed to do and handles basic corner cases.
  6. The deliverables / milestones must follow page 6 of the proposal.

Report template:

Calendar week #0

  • Summary of the things done last week
  • What can be improved
  • What went well
  • TODOs for the upcoming week. For new commands these must include a sketch of the usage string, as in
# correct deletion of a PG cluster, with warnings/confirmation requests
kubectl postgresql delete your-cluster-name

Please inform us if you have an unstable internet connection, and of any planned absences.
Finally, keep in mind that it is still not core GSoC, so you are not expected to work full-time :)

@VineethReddy02 before we embark on these tasks, we would like to hear your feedback on the suggestions.

@FxKu will be a second mentor, so his comments will be much appreciated :)

@VineethReddy02
Contributor Author

Thank you, Sergey, for the introduction.

  • I have already set up my machine with all the required installations.
  • I also have a stable internet connection. If there are any sudden circumstances, I will let you know in advance.
  • Going forward, during the community bonding period I will spend time understanding k8s, the PG operator (especially), and the k8s plugin architecture.
  • From now on I will prefer the postgres-operator slack channel for communication.
  • I will follow the above-mentioned template for updating weekly progress. I will post the updates in my public repository here.
  • So the kubectl pg plugin will be part of the operator repository in a separate folder. Just to make sure: I don't think it should be a problem for the operator repository to have some vendored dependencies that are required by the plugin.

Looking forward to working with you all!
@sdudoladov @FxKu

@sdudoladov
Member

sdudoladov commented May 15, 2019

Going forward, during the community bonding period I will spend time understanding k8s, the PG operator (especially), and the k8s plugin architecture.

Great, then please post a first brief progress summary using the template around the end of this week, and we will review it on Monday.

I will follow the above-mentioned template for updating weekly progress. I will post the updates in my public repository here.

Sounds good. We will then post links to it in this repo.

So the kubectl pg plugin will be part of the operator repository in a separate folder. Just to make sure: I don't think it should be a problem for the operator repository to have some vendored dependencies that are required by the plugin.

At this point it should not be much of a problem. Much more important is that you do not stray too far from the mainline operator development and community. We can always move the plugin to a separate project later.

@VineethReddy02
Contributor Author

As the community bonding period started on May 7, I will post both weeks' progress summaries by the end of this week, so that you can review the last two weeks together.

I fully understand your intention in having the kubectl plugin live within the operator repo. I am happy that I can work closely with the community. 💯

Thanks!

@VineethReddy02
Contributor Author

I will be updating the work on kubectl plugin development here. This issue acts as a tracker comprising the proposal, design docs, and any further discussions on implementation, along with the weekly reports.

@sdudoladov
Member

@VineethReddy02 please provide the weekly report for the last week. The coding officially starts today :)

@VineethReddy02
Contributor Author

Hey @sdudoladov, I forgot to commit the summary; I committed it yesterday after our call. Thanks for your time. I have started coding.

@sdudoladov
Member

sdudoladov commented Jun 3, 2019

@VineethReddy02
1.

I was validating kubectl pg check by the status of the postgres operator deployment. But then I realised the CRD can be deleted manually while the operator status stays Running, so my initial check fails. I should have done this validation before writing the code.

your work does not fail; you just need to check that the operator deployment exists and has at least one replica

  1. Where are the examples of the commands you want to implement? I mean something like
# correct deletion of a PG cluster, with warnings/confirmation requests
kubectl postgresql delete your-cluster-name

using my own example from above.

this is a very quick-and-dirty way of prototyping that can save you a lot of time later, because it lets you avoid implementing unnecessary things

@VineethReddy02
Contributor Author

@sdudoladov
1.
Initially I was validating the number of replicas as well, but when the CRD is deleted manually, the CRD check for postgresql fails while the operator stays in the Running state. So validating only the operator status does not cover the manually-deleted-CRD case.

// check that the postgres-operator deployment exists and has at least one ready replica
registered := false
for _, deployment := range allDeployments.Items {
	if deployment.Name == "postgres-operator" && deployment.Status.ReadyReplicas >= 1 {
		fmt.Println("postgresql CRD successfully registered.")
		registered = true
		break
	}
}
if !registered {
	fmt.Println("Make sure postgres-operator is running.")
}

Example commands for this week's work:
create/update/delete of kind: postgresql

# This will create a postgresql cluster
kubectl pg create abc.yaml

# This will update an existing cluster with the latest changes
kubectl pg update abc.yaml

# This will delete the postgresql cluster by filename.yaml or postgresql-cluster-name
kubectl pg delete abc.yaml/postgresql_name

@sdudoladov
Member

but when the CRD is deleted manually, the CRD check for postgresql fails; after the CRD is deleted manually the operator is still in the Running state.

If you delete the operator CRD and restart the operator pod, does the pod fail?

This will delete the postgresql cluster by filename.yaml/postgresql-cluster-name
kubectl pg delete abc.yaml/postgresql_name

Start with that one, it is actually important. The command has to ask for confirmation (as in "are you sure you want to delete cluster-name ? [y/N]", with "No" being the default) and show a success message saying the cluster manifest has been deleted and related objects will be garbage collected by k8s/the operator. It also has to include a help message saying that deleting the manifest alone is sufficient to delete a DB (a common mistake is to delete other objects manually, only to see them re-created by the operator later).

I'd also split the command into two forms:

kubectl pg delete postgresql_name
kubectl pg delete --file cluster-manifest.yaml

to match the conventions of the original kubectl delete

kubectl pg create --file abc.yaml
kubectl pg update --file abc.yaml

These should be simple wrapper commands; do not spend much time on them.

@VineethReddy02
Contributor Author

On deletion of the CRD, i.e. kubectl delete crds/postgresqls.acid.zalan.do, the postgres operator is still in the Running state, so validation by evaluating the operator status fails. Let me know if I am missing something.

(screenshot: the postgres-operator pod still shown as Running after the CRD deletion)

I will start working on the delete, create and update commands as discussed.

Thanks!

@VineethReddy02
Contributor Author

As per the discussion with @sdudoladov and @FxKu, next up is the implementation of adding a USER, adding a DB, and extending the VOLUME.
The format of the commands will look like:

# Adding a USER
kubectl pg add-user XYZ -clusterName cluster01

# Adding a DB and its associated OWNER
kubectl pg add-db XYZ -owner zalando -cluster cluster01

# Extending the existing volume size
kubectl pg ext-volume 2Gi -clusterName cluster01

Please feel free to share your views on this.
Thanks!
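Under the hood, each of these commands amounts to patching a field of the postgresql custom resource. A minimal sketch of building such a patch body, assuming a JSON merge patch against `spec.volume.size` (the helper name `volumePatch` is made up for illustration; the real plugin would send this body through a Kubernetes client's Patch call):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// volumePatch builds a merge-patch body that bumps spec.volume.size,
// e.g. for `kubectl pg ext-volume 2Gi -clusterName cluster01`.
func volumePatch(newSize string) ([]byte, error) {
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"volume": map[string]interface{}{
				"size": newSize,
			},
		},
	}
	return json.Marshal(patch)
}

func main() {
	body, err := volumePatch("2Gi")
	if err != nil {
		panic(err)
	}
	// this body would be sent to the API server as the patch payload
	fmt.Println(string(body))
}
```

add-user and add-db would follow the same pattern, targeting the users and databases maps of the manifest spec instead.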

@sdudoladov
Member

sdudoladov commented Jun 17, 2019

weekly summaries from Vineeth, for the record

@VineethReddy02
regarding

sometimes I find the postgresql resource deleted but the pods remain orphaned

see #551

@sdudoladov
Member

week 6

@sdudoladov
Member

week 7

@sdudoladov
Member

sdudoladov commented Jul 30, 2019

week 8
week 9
week 10

@hjacobs
Contributor

hjacobs commented Jul 30, 2019

Related: #635

@FxKu FxKu removed the needs review label Sep 2, 2019
@FxKu
Member

FxKu commented Sep 2, 2019

Closing as PR got merged.

@FxKu FxKu closed this as completed Sep 2, 2019