
deployapi: Refactor internal objects to match versioned #6246

Merged
merged 4 commits into openshift:master from refactor-internal-dc-object
Dec 22, 2015

Conversation

0xmichalis
Contributor

Forked from #6233

@deads2k the PR you asked for.

cc: @ironcladlou @liggitt

TODO:

Fixes #2511

@0xmichalis 0xmichalis changed the title deployapi: Refactor internal object to match versioned [WIP] deployapi: Refactor internal object to match versioned Dec 9, 2015
@0xmichalis
Contributor Author

wip because tests need to be updated

@0xmichalis 0xmichalis changed the title [WIP] deployapi: Refactor internal object to match versioned [WIP] deployapi: Refactor internal objects to match versioned Dec 9, 2015
@deads2k
Contributor

deads2k commented Dec 9, 2015

The usage changes look pretty straightforward.

return nil
}

func convert_v1beta3_DeploymentConfigSpec_To_api_DeploymentConfigSpec(in *DeploymentConfigSpec, out *newer.DeploymentConfigSpec, s conversion.Scope) error {
Contributor

This looks really close to what the generator would produce. Can you highlight the difference for me?

Contributor Author

That's because I have a bachelor's in conversions now. Will do.

Contributor Author

Hm, actually all this code is here because we were doing manual conversion from the internal objects to the versioned ones and vice versa. After running the generator, though, all of these conversions have been regenerated. So should I drop the manual ones? @ironcladlou?

Contributor

It's hard for me to keep up with the API conversion stuff in between the times I'm actually working on the API, so at this point I have no reason to believe the manual stuff is necessary. I'd stick with the generator by default unless a problem arises...

@0xmichalis
Contributor Author

[test]

@@ -143,24 +206,6 @@ func convert_v1_DeploymentTriggerImageChangeParams_To_api_DeploymentTriggerImage
if err := s.Convert(&in.From, &out.From, 0); err != nil {
return err
}
switch in.From.Kind {
Contributor

Do we know for sure it's okay to stop supporting ancient triggers stored with those kinds?

Contributor Author

The kinds are still copied as part of the object reference. This part seems to be for populating the deprecated internal fields (RepositoryName and Tag).

Contributor Author

I am going over the generated conversions and they seem to be covering all of the manual so I'll probably remove these.

@0xmichalis
Contributor Author

@ironcladlou we currently require the name of an object (imagestream, istag) in DeploymentTriggerImageChangeParams to be a valid DNS subdomain, while 1) "origin-ruby-sample:latest", for example, is an invalid value and we parse for such values when converting, and 2) image stream and image stream tag names have no similar kind of validation. Should I update validation?

Also do we still support imageRepository kinds?

@0xmichalis
Contributor Author

Another thing regarding validateImageChangeParams is that it's doing work that should happen when defaulting. Is this correct?

@ironcladlou
Contributor

@ironcladlou we currently require the name of an object (imagestream, istag) in DeploymentTriggerImageChangeParams to be a valid DNS subdomain, while 1) "origin-ruby-sample:latest", for example, is an invalid value and we parse for such values when converting, and 2) image stream and image stream tag names have no similar kind of validation. Should I update validation?

How about reusing ValidateImageStreamName?

@ironcladlou
Contributor

Also do we still support imageRepository kinds?

I don't think so. @smarterclayton?

Ultimately I think conversion needs to resolve every possible trigger kind/name to an ImageStream at this point. Agree?

@ironcladlou
Contributor

Another thing regarding validateImageChangeParams is that it's doing work that should happen when defaulting. Is this correct?

Yeah, I think that also needs to be cleaned up.

@smarterclayton
Contributor

We do not.

On Thu, Dec 10, 2015 at 10:07 AM, Dan Mace wrote:

Also do we still support imageRepository kinds?

I don't think so. @smarterclayton?

Ultimately I think conversion needs to resolve every possible trigger kind/name to an ImageStream at this point. Agree?

@0xmichalis
Contributor Author

Ultimately I think conversion needs to resolve every possible trigger kind/name to an ImageStream at this point. Agree?

Right now when it's v1->api, we switch istags to is and when it's api->v1, we switch to istags. So image stream references exist only internally. Is there a reason to favor image stream references over istags? I thought the intention was to switch to the latter as buildConfigs do.

@0xmichalis
Contributor Author

How about reusing ValidateImageStreamName?

I was thinking of a combination between this and SplitImageStreamTag. Something like:

name, _, ok := SplitImageStreamTag(toBeValidated)
if !ok {
    // append error
}
if ok := ValidateImageStreamName(name); !ok {
    // append error
}

That is, if we end up switching to istags as the default. There is a comment about defaults, but the actual default is different.
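A self-contained sketch of that combination follows. The helpers here are hypothetical stand-ins: splitImageStreamTag mimics what SplitImageStreamTag does, the DNS-subdomain regexp stands in for ValidateImageStreamName, and the error strings are made up, not origin's actual validation messages.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// splitImageStreamTag is a stand-in for imageapi.SplitImageStreamTag:
// it splits "name:tag" and reports whether both parts were present.
func splitImageStreamTag(nameAndTag string) (name, tag string, ok bool) {
	parts := strings.Split(nameAndTag, ":")
	switch len(parts) {
	case 1:
		return parts[0], "", false // no tag present
	case 2:
		return parts[0], parts[1], len(parts[0]) > 0 && len(parts[1]) > 0
	default:
		return "", "", false
	}
}

// dnsSubdomain approximates the RFC 1123 subdomain check that
// ValidateImageStreamName ultimately relies on.
var dnsSubdomain = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

func validateImageStreamName(name string) bool {
	return dnsSubdomain.MatchString(name)
}

// validateTriggerFromName combines the two checks as sketched above.
func validateTriggerFromName(value string) []string {
	var errs []string
	name, _, ok := splitImageStreamTag(value)
	if !ok {
		errs = append(errs, fmt.Sprintf("%q must be of the form <name>:<tag>", value))
	}
	if !validateImageStreamName(name) {
		errs = append(errs, fmt.Sprintf("%q is not a valid image stream name", name))
	}
	return errs
}

func main() {
	fmt.Println(len(validateTriggerFromName("origin-ruby-sample:latest"))) // 0
	fmt.Println(len(validateTriggerFromName("Bad_Name:latest")) > 0)       // true
}
```

With this shape, "origin-ruby-sample:latest" passes, while the bare stream name would still be caught by the missing-tag error.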

@0xmichalis
Contributor Author

I thought the intention was to switch to the latter as buildConfigs do.

Ref issue: #2511

@ironcladlou
Contributor

I was thinking of a combination between this and SplitImageStreamTag.

Sounds good to me.

@0xmichalis
Contributor Author

Right now when it's v1->api, we switch istags to is and when it's api->v1, we switch to istags. So image stream references exist only internally.

I tried two different ways to verify this on master:

  • Try to create a DC with an ImageChange trigger that uses an ImageStream as a kind to watch
  • Edit an existing DC with an ImageChange trigger to use an ImageStream as a kind to watch

In both cases I see ImageStreamTag afterwards. Went ahead and switched to use only istags - addressing #2511 as part of this api cleanup.
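The coercion being settled on here can be sketched roughly as follows. This is a hedged illustration of the behavior described in the thread, not the actual generated conversion code; the function name, the tag-defaulting to "latest", and treating ImageRepository the same way as ImageStream are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// coerceTriggerFrom maps deprecated trigger kinds onto ImageStreamTag,
// defaulting the tag to "latest" when none is given, so that only istag
// references survive conversion.
func coerceTriggerFrom(kind, name string) (string, string) {
	switch kind {
	case "ImageStream", "ImageRepository":
		if !strings.Contains(name, ":") {
			name += ":latest"
		}
		return "ImageStreamTag", name
	default:
		return kind, name
	}
}

func main() {
	k, n := coerceTriggerFrom("ImageRepository", "origin-ruby-sample")
	fmt.Println(k, n) // ImageStreamTag origin-ruby-sample:latest
	k, n = coerceTriggerFrom("ImageStreamTag", "origin-ruby-sample:latest")
	fmt.Println(k, n) // ImageStreamTag origin-ruby-sample:latest
}
```

Under this scheme a GET after conversion would always show ImageStreamTag, which matches what was observed in both experiments above.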

@openshift-bot openshift-bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 11, 2015
@openshift-bot openshift-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 11, 2015
@0xmichalis
Contributor Author

Finally got a clean unit test run locally. This is ready for reviews

@ironcladlou @openshift/api-review

@0xmichalis
Contributor Author

Error from server: DeploymentConfig "frontend" is invalid: spec.triggers[0].imageChangeParams.from.kind: invalid value 'ImageRepository', Details: kind must be an ImageStreamTag

The image change controller for deployments never cared (before or after this PR) about the kind of the object reference in the imageChangeParams of an ICT, and since our validation has so far allowed ImageRepository references, any DC with ImageRepository set instead of {ImageStream, ImageStreamTag} works like a charm. The validation changes in this PR break this behavior. Is this OK? Should we make oc status warn about ImageRepository references (and any kind of deprecated references in general)?

@ironcladlou @smarterclayton @deads2k @liggitt

@deads2k
Contributor

deads2k commented Dec 11, 2015

any DC with ImageRepository set instead of {ImageStream, ImageStreamTag} is working like a charm.

What does it actually do? Does it treat it as an ImageStream?

@0xmichalis
Contributor Author

Does it treat it as an ImageStream?

Yes.

@deads2k
Contributor

deads2k commented Dec 11, 2015

Seems like there's a reasonable case for silently coercing it. Why would we want to support returning the old value from a GET? Do old clients not handle both?

@0xmichalis
Contributor Author

All green here.

Two outstanding issues:

  1. ImageStream support in image triggers: We currently convert any imageStream references to an istag. Should we continue doing that?
  2. ImageRepository support in image triggers: #6246 (comment). Should we continue supporting those references? Maybe do something similar to 1?

@ironcladlou
Contributor

All green here.
Two outstanding issues:

  1. ImageStream support in image triggers: We currently convert any imageStream references to an istag. Should we continue doing that?
  2. ImageRepository support in image triggers: #6246 (comment). Should we continue supporting those references? Maybe do something similar to 1?

Correct me if I'm wrong but we've been storing the original Kind values (which could include ImageStream or even DockerImage) even though we do conversions on the other fields. So we do need to support dealing with the old stored Kinds, but I don't know that we need to accept anything but ImageStreamTag through the API... @deads2k @liggitt?

@smarterclayton
Contributor

ImageRepository kinds still have to be supported in triggers until you
write a migration that converts them to something that doesn't.

On Mon, Dec 14, 2015 at 9:31 AM, Michail Kargakis wrote:

All green here.

Two outstanding issues:

  1. ImageStream support in image triggers: We currently convert any imageStream references to an istag. Should we continue doing that?
  2. ImageRepository support in image triggers: #6246 (comment). Should we continue supporting those references? Maybe do something similar to 1?

@0xmichalis
Contributor Author

@deads2k g.Edges() seems to return the edges non-deterministically. We didn't hit this before, either in TestDCPodTemplateSpecNode (there was only one edge returned) or elsewhere (in every test that uses it, we just check the length of the set or the kind of the edges, which is always the same for the tests we are running). Now that there are two edges in TestDCPodTemplateSpecNode and we inspect them, I occasionally see the second edge returned first.

func TestDCPodTemplateSpecNode(t *testing.T) {
    // other stuff
    edges := g.Edges()
    // other stuff
    if edges[0].From().ID() != dcNode.ID() {
        t.Errorf("expected %v (id: %d), got %v (id: %d)", dcNode, dcNode.ID(), edges[0].From(), edges[0].From().ID())
        t.Errorf("EDGES: %v", edges)
    }
    podTemplateNode := edges[0].To()
    if _, ok := podTemplateNode.(*kubetypes.PodTemplateSpecNode); !ok {
        t.Errorf("expected a PodTemplate from the DC node")
        t.Errorf("got %v (id: %d)", edges[0].To(), edges[0].To().ID())
    }
    podSpecNode := edges[1].To()
    if _, ok := podSpecNode.(*kubetypes.PodSpecNode); !ok {
        t.Errorf("expected a PodSpec from the PodTemplate node")
        t.Errorf("got %v (id: %d)", edges[1].To(), edges[1].To().ID())
    }
}
--- FAIL: TestDCPodTemplateSpecNode (0.00s)
    nodes_test.go:40: expected DeploymentConfig|ns/foo (id: 1), got PodTemplateSpec|DeploymentConfig|ns/foo (id: 2)
    nodes_test.go:41: EDGES: [{{{PodTemplateSpec|DeploymentConfig|ns/foo PodSpec|PodTemplateSpec|DeploymentConfig|ns/foo} map[Contains:{}]} 1} {{{DeploymentConfig|ns/foo PodTemplateSpec|DeploymentConfig|ns/foo} map[Contains:{}]} 1}]
    nodes_test.go:44: expected a PodTemplate from the DC node
    nodes_test.go:45: got PodSpec|PodTemplateSpec|DeploymentConfig|ns/foo (id: 3)
    nodes_test.go:49: expected a PodSpec from the PodTemplate node
    nodes_test.go:50: got PodTemplateSpec|DeploymentConfig|ns/foo (id: 2)
FAIL
FAIL    github.com/openshift/origin/pkg/deploy/graph/nodes  0.005s

@0xmichalis
Contributor Author

I opened #6394 and will refactor the test to just check for the nodes for the time being.

@0xmichalis
Contributor Author

ERROR: gave up waiting for https://127.0.0.1:28443/healthz

!!! Error in hack/test-cmd.sh:364
    'return 1' exited with status 1
Call stack:
    1: hack/test-cmd.sh:364 main(...)
Exiting with status 1

[test]

@deads2k
Contributor

deads2k commented Dec 18, 2015

I opened #6394 and will refactor the test to just check for the nodes for the time being.

Two "normal" choices:

  1. run a topo sort and compare against expected.
  2. walk the nodes from your expected root like: https://github.com/openshift/origin/blob/master/pkg/api/kubegraph/nodes/nodes_test.go#L56
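Option 1 can be illustrated with a minimal Kahn-style topological sort. This is a self-contained sketch standing in for the topo package the PR actually uses; the string node names mirror the containment chain from the failing test, and the adjacency-map representation is an assumption for illustration.

```go
package main

import "fmt"

// topoSort runs Kahn's algorithm over a small adjacency map: compute
// in-degrees, repeatedly take a zero-in-degree node, and relax its edges.
// With a single root the resulting order is deterministic, which is what
// makes it usable for comparison against an expected order in a test.
func topoSort(edges map[string][]string) []string {
	indeg := map[string]int{}
	for from, tos := range edges {
		if _, ok := indeg[from]; !ok {
			indeg[from] = 0
		}
		for _, to := range tos {
			indeg[to]++
		}
	}
	var queue []string
	for n, d := range indeg {
		if d == 0 {
			queue = append(queue, n)
		}
	}
	var order []string
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, to := range edges[n] {
			indeg[to]--
			if indeg[to] == 0 {
				queue = append(queue, to)
			}
		}
	}
	return order
}

func main() {
	// the containment chain from TestDCPodTemplateSpecNode
	edges := map[string][]string{
		"DeploymentConfig": {"PodTemplateSpec"},
		"PodTemplateSpec":  {"PodSpec"},
	}
	fmt.Println(topoSort(edges)) // [DeploymentConfig PodTemplateSpec PodSpec]
}
```

Comparing this fixed order against the expected chain sidesteps the non-deterministic ordering of g.Edges() entirely.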

@0xmichalis
Contributor Author

run a topo sort and compare against expected.

Nice! That topo package is golden. Updated the graph test, PTAL

@deads2k
Contributor

deads2k commented Dec 18, 2015

Nice! That topo package is golden. Updated the graph test, PTAL

lgtm, squash it in for merge.

@0xmichalis
Contributor Author

ready

@deads2k
Contributor

deads2k commented Dec 18, 2015

lgtm [merge]

@0xmichalis
Contributor Author

flake #6419

[merge]

@0xmichalis
Contributor Author

imagestreamtag ❄️

@0xmichalis
Contributor Author

[merge]

@0xmichalis
Contributor Author

oc status broken test #6423

fix in #6425

@0xmichalis
Contributor Author

[merge]

@0xmichalis
Contributor Author

flake #6065

re[merge]

@0xmichalis
Contributor Author

flake #6065

again?

👻 [merge] 👻

@0xmichalis
Contributor Author

ERROR: gave up waiting for http://127.0.0.1:44001/version

!!! Error in hack/test-integration.sh:364
    'return 1' exited with status 1
Call stack:
    1: hack/test-integration.sh:364 main(...)
Exiting with status 1
make: *** [test-int-plus] Error 1

[merge]

@0xmichalis
Contributor Author

ui flake

[merge]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/4458/) (Image: devenv-rhel7_3020)

@0xmichalis
Contributor Author

#6149 flake

@0xmichalis
Contributor Author

[test]

@openshift-bot
Contributor

Evaluated for origin test up to 05e2052

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/8065/)

@0xmichalis
Contributor Author

Tests passed at once but this still won't [merge] :')

@openshift-bot
Contributor

Evaluated for origin merge up to 05e2052

openshift-bot pushed a commit that referenced this pull request Dec 22, 2015
@openshift-bot openshift-bot merged commit 5e2cd8f into openshift:master Dec 22, 2015
@0xmichalis 0xmichalis deleted the refactor-internal-dc-object branch December 22, 2015 15:20