
Fixes #280 #281

Open · wants to merge 6 commits into master

Conversation

NicolasRouquette (Contributor)

This PR addresses the intent of #280 with two API-breaking changes:

  1. The long-duration watch... operations have a new signature that accepts an optional pool and returns a skuber pool that can be reused for further long-duration calls to the same server, together with the underlying Akka host connection pool, which can be shut down once no more calls to that server are needed (a usage sketch follows this list).

  2. New API operations to facilitate the following:

  • monitoring a resource that is being deleted or renamed until it is no longer available under the same name
  • executing a job, monitoring its progress, invoking a completion callback, and monitoring its deletion
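
For concreteness, here is a minimal usage sketch of the pool-returning watch variant (its exact signature is quoted later in this thread). The client setup, the imports, and the placeholder job value are assumptions for illustration; only the watchContinuously(obj, pool) shape and its materialized (skuber pool, Akka host connection pool) value come from this PR.

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Keep, Sink}
import skuber._
import skuber.batch.Job
import skuber.json.batch.format._

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
import system.dispatcher

val k8s = k8sInit
val job: Job = ???  // placeholder: the Job resource being watched

// Passing None asks skuber to create a new pool for this server; the materialized
// value exposes that pool together with the underlying Akka host connection pool.
val ((pool, hostPoolOpt), done) =
  k8s.watchContinuously(job, None)
    .toMat(Sink.foreach(event => println(event._type)))(Keep.both)
    .run()

// Once no further long-duration calls to the same server are needed,
// the Akka host connection pool can be shut down.
done.onComplete(_ => hostPoolOpt.foreach(_.shutdown()))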

Note that due to licensing changes in Oracle JDK8, the Travis CI config had to be updated.

Also updated to Scala 2.12.8 and Akka 2.5.23.

@NicolasRouquette (Contributor, Author)

@doriordan This PR is ready for you to review. I've used it at JPL quite effectively!

@doriordan (Owner)

Thanks for the PR Nicolas, I will look to review it in the next day or two.

@doriordan (Owner) left a review comment:

I haven't had a chance to finish reviewing the changes as I am travelling, but have made some comments on the overall direction of the PR.
As per one of my comments, I am currently working on a list/watch-specific API and we could consider whether your changes could somehow be brought into that, although it is not yet ready for review/release itself.

(.travis.yml diff context)
- 2.11.8
jdk:
- oraclejdk8
- 2.12.8
doriordan (Owner):

Would it be possible to submit the various build updates as a separate PR, unless those updates are required by the other changes?
(I find small, cohesive PRs are generally best.)

NicolasRouquette (Contributor, Author):

The .travis.yml file is currently broken; see: https://travis-ci.community/t/install-of-oracle-jdk-8-failing/3038

Do you want to fix the .travis.yml file as part of a separate issue? If so, this PR will fail until the Travis CI issue is resolved.

*/
def watchContinuously[O <: ObjectResource](obj: O)(implicit fmt: Format[O], rd: ResourceDefinition[O], lc: LoggingContext): Source[WatchEvent[O], _]
def watchContinuously[O <: ObjectResource](obj: O, pool: Option[Pool[WatchSource.Start[O]]])(implicit fmt: Format[O], rd: ResourceDefinition[O], lc: LoggingContext): Source[WatchEvent[O], (Pool[WatchSource.Start[O]], Option[Http.HostConnectionPool])]
doriordan (Owner):

I'm reluctant to modify the existing watch methods' API signatures here, as it affects all clients. In fact, because there are other watch-related enhancements I am working on, I plan to add a single new method to the existing API that will return a separate API object for new/enhanced watch/list-related functionality. That new API might be the right place to consider this new pool parameter.

NicolasRouquette (Contributor, Author):

The current API completely hides the fact that there's a pool involved and therefore prevents doing anything with it:

  • reuse the same pool across multiple calls (see the PiJobsSequential example)
  • create an independent pool for each call (see the PiJobsParallel example)
  • shut down a pool when it is no longer needed

If you're thinking of refactoring this API, it would be good to keep the above use cases in mind; the distinction between sequential and parallel invocations of long-duration operations is very important for us (see the sketch below).

I don't know how to support these use cases without changing the API in some incompatible way. Please advise on how you want to proceed. I'm OK with deferring action until you refactor the API as you mentioned.
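
To make the sequential vs. parallel distinction concrete, here is a rough sketch under the proposed signature, continuing from the client/ActorSystem setup in the earlier sketch. It is patterned after the PiJobsSequential and PiJobsParallel examples, but job1/job2 and the surrounding wiring are hypothetical.

import akka.stream.scaladsl.{Keep, Sink}
import skuber.batch.Job
import skuber.json.batch.format._

val job1: Job = ???  // placeholder
val job2: Job = ???  // placeholder

// Sequential: the skuber pool materialized by the first call is passed to the
// second call against the same server, so both share one host connection pool.
val ((sharedPool, hostPool), done1) =
  k8s.watchContinuously(job1, None)
    .toMat(Sink.ignore)(Keep.both)
    .run()

val (_, done2) =
  k8s.watchContinuously(job2, Some(sharedPool))
    .toMat(Sink.ignore)(Keep.both)
    .run()

// Shut the shared pool down only after both sequential calls have finished.
for { _ <- done1; _ <- done2 } hostPool.foreach(_.shutdown())

// Parallel: pass None on every call instead, so each watch gets an independent
// pool that can be shut down as soon as that particular call completes.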

* - the akka host connection pool that can be shutdown when no further jobs need to be executed on the same server
* - the last pod status received when the pod progress predicate became unsatisfied
*/
def executeJobAndWaitUntilDeleted(
doriordan (Owner):

In general I try to keep the API generic with regard to resource types, which largely reflects the actual Kubernetes REST API design (with the exception of some pod-specific methods such as exec). This one is specific to Job resources, so again my thinking is that it could be implemented in the examples sub-project.

NicolasRouquette (Contributor, Author):

ok

@@ -403,6 +405,17 @@ class KubernetesClientImpl private[client] (
} yield ()
}

override def monitorResourceUntilUnavailable[O <: ObjectResource](name: String, monitorRepeatDelay: FiniteDuration)(
doriordan (Owner):

why not use watch here instead of polling?
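
For illustration only, a minimal sketch of a watch-based alternative to the polling loop, built on the existing watchContinuously signature quoted earlier in this thread. waitUntilDeleted is a hypothetical helper (not part of this PR), and the EventType/_type names are assumed from skuber's watch event model.

import akka.Done
import akka.stream.Materializer
import akka.stream.scaladsl.Sink
import scala.concurrent.Future
import skuber._
import skuber.api.client.EventType
import skuber.json.format._

// Completes when a DELETED event is observed for the pod, instead of
// re-polling on a fixed monitorRepeatDelay timer.
def waitUntilDeleted(pod: Pod)(implicit k8s: K8SRequestContext, mat: Materializer): Future[Done] =
  k8s.watchContinuously(pod)
    .takeWhile(_._type != EventType.DELETED, inclusive = true)
    .runWith(Sink.ignore)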

@hagay3 (Contributor) commented Jul 8, 2021

This repo seems to have been abandoned.
A maintained fork is available here
https://github.com/hagay3/skuber
Please open the PR in the new fork.

Thanks,
Hagai
