
Enhance status for deploymentconfigs #6233

Merged: 2 commits merged into openshift:master from the dc-new-api-field branch on Jun 23, 2016
Conversation

@0xmichalis (Contributor) commented Dec 8, 2015

Add various status fields to enhance the user experience. The fields are
also required for round-tripping upstream Deployments.

@ironcladlou @openshift/api-review PTAL

Fixes #5918
Part of https://trello.com/c/Mljldox7/643-5-deployments-downstream-support-for-upstream-features

@mfojtik @pweil- fyi

@@ -137,7 +138,10 @@ func (c *DeploymentConfigController) Handle(config *deployapi.DeploymentConfig)
return fmt.Errorf("couldn't create deployment for deployment config %s: %v", deployutil.LabelForDeploymentConfig(config), err)
}
c.recorder.Eventf(config, "DeploymentCreated", "Created new deployment %q for version %d", created.Name, config.LatestVersion)
return nil
// Update deploymentConfig status; we probably need a separate call for status
Contributor:
If we're introducing the idea of a status, let's introduce a status at least internally. Since status should be read-only, we can be compatible by duplicating existing fields in two locations.

Does the deployer pod update logical status? If so, then we'll need to respect changes on it if the status stanza is missing. If not, and it's only the baked-in controller, then we should be good to ignore status updates on old fields.
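
A minimal sketch of the compatibility idea above (hypothetical internal types, not this PR's actual definitions; field names are taken from the diff excerpts in this conversation, everything else is assumed): the legacy writable field stays where it is, while a read-only status stanza duplicates it so old and new clients can coexist.

type DeploymentConfig struct {
	// LatestVersion is the legacy, writable location of the field.
	LatestVersion int

	Spec   DeploymentConfigSpec
	Status DeploymentConfigStatus
}

type DeploymentConfigSpec struct {
	// Replicas is the desired number of replicas.
	Replicas int
	// Selector is a label query over pods that should match the Replicas count.
	Selector map[string]string
}

type DeploymentConfigStatus struct {
	// LatestVersion duplicates the legacy field; status is treated as
	// read-only, so the controller keeps the two locations in sync.
	LatestVersion int
	// Replicas is the observed number of replicas across the RCs owned by
	// this config.
	Replicas int
}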

@0xmichalis (author):

The deployer pod updates the status on the replication controller, it never touches the config. I think that status would be internal only, @ironcladlou thoughts?

Contributor:

> The deployer pod updates the status on the replication controller, it never touches the config. I think that status would be internal only, @ironcladlou thoughts?

I'm fine with a deployer pod updating status. I just wanted to make sure we handled it properly for backwards compatibility. Sounds like we don't have to worry about it.

Contributor:

The strategies just scale RCs up and down and don't touch the config. They also have no reason to read the new field via public API or anything. So I don't think introducing status to the config should have an effect outside the controllers.

@0xmichalis changed the title from "api: New deploymentConfig field" to "[WIP] api: New deploymentConfig field" on Dec 8, 2015
Replicas int

// Selector is a label query over pods that should match the Replicas count.
Selector map[string]string
Contributor:

Is this new? What was the existing field?

Contributor:

Can a single selector select all the replicas we care about? What if you are rolling between two deployments which resulted from two versions of the config, and have different selectors?

Contributor:

> Can a single selector select all the replicas we care about? What if you are rolling between two deployments which resulted from two versions of the config, and have different selectors?

I asked @Kargakis to split this into one pull for making spec/status with no other changes and then a separate pull to add the changes. I think this is new, so that should help us find and see what this is doing.

@0xmichalis (author):

All of these fields are copied from our versioned objects.

@liggitt how would we end up with an old deployment having a different selector?

@0xmichalis (author):

The new fields were behind DeploymentTemplate previously.

Contributor:

> The new fields were behind DeploymentTemplate previously.

Ah, definitely interested in that separate pull then. Checking congruence in one and then reasoning about changes in another.

Couldn't we have updated the Selector of the DC and orphaned existing pods for an old RC created by this DC?
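
For illustration, a small sketch of the orphaning concern raised above (hypothetical labels, and using today's k8s.io/apimachinery import path rather than the vendored one from 2016): if the config's selector gains a label between versions, the single spec-level selector no longer matches pods created for the older RC.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// Selector on the current version of the config (hypothetical labels).
	current := labels.SelectorFromSet(labels.Set{"app": "frontend", "tier": "web"})

	// Labels on pods owned by an RC created from an older config version
	// whose selector carried only "app".
	oldPodLabels := labels.Set{"app": "frontend"}

	// Prints "false": replicas from the old RC would not be selected or counted.
	fmt.Println(current.Matches(oldPodLabels))
}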

@0xmichalis changed the title from "[WIP] api: New deploymentConfig field" to "api: New deploymentConfig field" on Dec 22, 2015
@@ -137,7 +138,10 @@ func (c *DeploymentConfigController) Handle(config *deployapi.DeploymentConfig)
return fmt.Errorf("couldn't create deployment for deployment config %s: %v", deployutil.LabelForDeploymentConfig(config), err)
}
c.recorder.Eventf(config, "DeploymentCreated", "Created new deployment %q for version %d", created.Name, config.Status.LatestVersion)
return nil
// Update deploymentConfig status; we probably need a separate call for status
@0xmichalis (author):

Opened #6465 for a separate status update call.

oldReplicas := deploymentConfig.Spec.Replicas
deploymentConfig.Spec.Replicas = scale.Spec.Replicas
if err := r.registry.UpdateDeploymentConfig(ctx, deploymentConfig); err != nil {
return nil, false, err
}

scaleRet.Spec.Replicas = scale.Spec.Replicas
scaleRet.Status.Replicas = totalReplicas + (scale.Spec.Replicas - oldReplicas)
Contributor:

With @ironcladlou's change to how DeploymentConfig scaling works, I don't think this calculation is accurate any more. We should just keep TotalReplicas == oldReplicas, since the replica count on RCs controlled by the DC won't change until the DC controller updates them.
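
A short sketch of that suggestion, using the names from the excerpt above (not the code that was ultimately merged):

// The DC controller has not rescaled the underlying RCs yet, so keep
// reporting the pre-update count instead of extrapolating it.
scaleRet.Spec.Replicas = scale.Spec.Replicas
scaleRet.Status.Replicas = oldReplicas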

@0xmichalis (author):

#6040 flake

@0xmichalis (author):

Error from server: 501: All the given peers are not reachable (failed to propose on members [https://172.18.4.185:4001] twice [last error: Unexpected HTTP status code]) [0]
!!! Error in hack/../test/end-to-end/core.sh:162
    'oc delete pod cli-with-token' exited with status 1
Call stack:
    1: hack/../test/end-to-end/core.sh:162 main(...)
Exiting with status 1

// TODO: Use a separate call for updating the status of a DC
// https://github.com/openshift/origin/issues/6465
config.Status.Replicas = kdeputil.GetReplicaCountForRCs(deployments)
_, err := c.osClient.DeploymentConfigs(config.Namespace).Update(config)
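
For context, a helper like the kdeputil.GetReplicaCountForRCs call above presumably just sums the observed replica counts of the RCs backing the config's deployments. A rough sketch of such a helper (assumed, written against today's k8s.io/api types rather than the 2016 internal ones; the real name and signature may differ):

package deployutil // hypothetical package name for this sketch

import corev1 "k8s.io/api/core/v1"

// sumReplicasForRCs adds up Status.Replicas across the given replication
// controllers.
func sumReplicasForRCs(rcs []*corev1.ReplicationController) int32 {
	var total int32
	for _, rc := range rcs {
		total += rc.Status.Replicas
	}
	return total
}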
@0xmichalis (author):

@ironcladlou won't this result in a conflict if an older client had scaled the DC (L220)?

Contributor:

I think you're right; shouldn't we be reassigning the local config after the update on L220?

Contributor:

> I think you're right; shouldn't we be reassigning the local config after the update on L220?

Not finding the conflict in a test suggests we have a test missing.

It seems like this update is more properly a patch until we have separate status updating.
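
A sketch of what those suggestions could look like (not the code this PR merged): keep working with the object the server returns so a later write does not race a scale made through an older client; a status-only patch would avoid the conflict entirely once the separate status call from #6465 exists.

// Reassign the local config to the server's copy so its resourceVersion is
// fresh for any follow-up write.
updated, err := c.osClient.DeploymentConfigs(config.Namespace).Update(config)
if err != nil {
	return err
}
config = updated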

@0xmichalis (author):

> I think you're right; shouldn't we be reassigning the local config after the update on L220?

Would using a copy of the dc in the last update call not conflict?

@0xmichalis (author):

> Not finding the conflict in a test suggests we have a test missing.

We need to use older clients in our tests.

@0xmichalis (author):

Or, in this specific case, we could emulate an older client by manually updating the latest RC for a DC.

@smarterclayton (Contributor):

This API change is approved.

@0xmichalis (author):

I would say that this is blocked on #6465 (so we can avoid resource conflicts in case of old client updates) which in turn is blocked on #6337 (the trickiest one; planning to start working on it soon)

@openshift-bot added the needs-rebase label (the PR has merge conflicts with HEAD) on Jan 28, 2016
@0xmichalis changed the title from "api: New deploymentConfig field" to "[WIP] api: New deploymentConfig field" on Jan 28, 2016
@openshift-bot added and removed the needs-rebase label repeatedly as the branch fell in and out of sync with HEAD (Feb 4, Feb 19, Mar 9, May 31, Jun 9, and Jun 13, 2016)
@0xmichalis mentioned this pull request on Jun 20, 2016
@openshift-bot removed the needs-rebase label on Jun 21, 2016
@openshift-bot added the needs-rebase label on Jun 22, 2016
@0xmichalis (author):

Rebased on top of the latest controller changes, shared a pod cache, and added an extended test. This will need another look @mfojtik @ironcladlou @smarterclayton @deads2k.

@openshift-bot removed the needs-rebase label on Jun 22, 2016
@0xmichalis (author):

[test]

@openshift-bot:

Evaluated for origin test up to 0e85054

@openshift-bot:

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5286/)

@mfojtik (Contributor) commented Jun 23, 2016:

LGTM

@0xmichalis (author):

[merge]

@0xmichalis (author):

This will also enable oc rollout status.
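
For example (assuming a DC name for illustration), once these status fields are populated, a command like oc rollout status dc/frontend can report whether the latest deployment has finished rolling out.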

@openshift-bot commented Jun 23, 2016:

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/5286/) (Image: devenv-rhel7_4443)

@openshift-bot:

Evaluated for origin merge up to 0e85054

@openshift-bot merged commit cc9cf0e into openshift:master on Jun 23, 2016
@0xmichalis deleted the dc-new-api-field branch on June 23, 2016 at 15:47