
Add an eclipse/che nightly test against che.osio #13259

Closed
l0rd opened this issue Apr 29, 2019 · 14 comments
Labels
  • area/qe
  • kind/task: Internal things, technical debt, and to-do tasks to be performed.
  • lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
  • status/open-for-dev: An issue has had its specification reviewed and confirmed. Waiting for an engineer to take it.
Milestone
Backlog - QE

Comments

@l0rd
Contributor

l0rd commented Apr 29, 2019

Description

We should test every night that we are not breaking hosted Che (currently che.openshift.io). To do that we should set up a new nightly test that will (see the sketch after the list):

  • Build an rh-che image based on the upstream nightly build
  • Push the nightly rh-che image to a container registry
  • Deploy nightly rh-che on osio (a second instance on dsaas, for example)
  • Run a happy-path e2e test
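A rough sketch of how such a job could be glued together is below. Nothing in it exists yet: the image name, the deploy script, and the e2e entry point are hypothetical placeholders that would be replaced by the real rh-che build and test scripts.

```python
#!/usr/bin/env python3
"""Hypothetical nightly pipeline sketch: build, push, deploy and test rh-che.

All names below (image tag, deploy script, e2e script) are placeholders,
not existing rh-che artifacts.
"""
import subprocess
import sys

IMAGE = "quay.io/example/rh-che:nightly"   # hypothetical registry and tag
UPSTREAM_TAG = "nightly"                   # upstream eclipse/che nightly build


def run(cmd):
    """Run a command and fail the whole job on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    # 1. Build the rh-che image on top of the upstream nightly build.
    run(["docker", "build",
         "--build-arg", f"CHE_VERSION={UPSTREAM_TAG}",
         "-t", IMAGE, "."])
    # 2. Push the nightly image to a container registry.
    run(["docker", "push", IMAGE])
    # 3. Deploy the nightly rh-che instance (placeholder deploy script).
    run(["./deploy/nightly-deploy.sh", IMAGE])
    # 4. Run the happy-path e2e suite against the fresh deployment.
    run(["./e2e/happy-path.sh"])


if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(err.returncode)
```

Where exactly this would run and which registry it pushes to are left open on purpose.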

cc @rhopp @ibuziuk

@l0rd l0rd added the kind/task and team/che-qe labels Apr 29, 2019
@l0rd l0rd changed the title from "Add an eclipse/che nightly test that tests against che.osio" to "Add an eclipse/che nightly test against che.osio" Apr 29, 2019
@l0rd l0rd mentioned this issue Apr 29, 2019
@rhopp
Contributor

rhopp commented Apr 29, 2019

@l0rd Worth mentioning that this instance wouldn't be just for running tests, but basically for everybody who wants to try the nightly version, right?

@ibuziuk
Member

ibuziuk commented Apr 29, 2019

@rhopp but this looks exactly like the compatibility job [1] which @Katka92 has been working on, right?
The only difference is that the deployment happens against the dev-cluster [2], not dsaas (as I understand from the issue description, deployment on dsaas is not a requirement and the dev cluster should also work perfectly).
Could we simply finalize what is already in place and make it usable?

[1] redhat-developer/rh-che#936
[2] https://devtools-dev.ext.devshift.net:8443/console/project/compatibility-check/overview

@l0rd
Contributor Author

l0rd commented Apr 29, 2019

@rhopp yes, that's the goal, but that has other implications, such as user provisioning, that may be worth a separate issue.

@ibuziuk
Member

ibuziuk commented Apr 29, 2019

IMO, we should really separate 2 things: the nightly compatibility testing itself, and making a nightly instance available to all users.

And before working on the second one we need to have a call / proper discussion, since I'm not 100% sure we need it, and IMO our goal should be making che.openshift.io usable for dogfooding.

@l0rd
Contributor Author

l0rd commented Apr 29, 2019

@ibuziuk We are not talking about dogfooding here at all. It's about verifying earlier that upstream Che and Theia don't break che.openshift.io.

Moreover, che.openshift.io and che-nightly.openshift.io will both be based on rh-che. The difference is just that one is based on latest and the other is based on nightly.

@ibuziuk
Member

ibuziuk commented Apr 29, 2019

We are not talking about dogfooding here at all. It's about verifying earlier that upstream Che and Theia don't break che.openshift.io.

So, this is redhat-developer/rh-che#936 IMO

Moreover, che.openshift.io and che-nightly.openshift.io will both be based on rh-che. The difference is just that one is based on latest and the other is based on nightly.

So, no diffs in config (idling / quota, etc.) - identical config but different images?

@l0rd
Contributor Author

l0rd commented Apr 29, 2019

Yes, this is redhat-developer/rh-che#936, but integrated into the upstream Che nightly tests. The idea is simple: hosted Che is where we showcase Che, so we should verify, at every commit, that we do not break it.

And no, that's not about a different config. I know we have been talking about that as well @ibuziuk, but I am not sure it's a good idea either, so let's not take that into consideration.

@rhopp
Contributor

rhopp commented Apr 30, 2019

There is one question which could make a difference:
Do we want everybody to have access to the nightly version of rh-che?
If not - prod-preview access is ok - then it's exactly rh-che#936.
If we want to enable everybody to take a look at the latest-greatest version of (rh-)che, then it's #936 with some modifications (we would either need to find a way to deploy to saas in a "nightly" manner, or we would need to enable some external cluster in the prod auth service).

@Katka92
Contributor

Katka92 commented Apr 30, 2019

@rhopp but this looks exactly like the compatibility job [1] which @Katka92 has been working on, right?

@ibuziuk I think you're right, but as @rhopp wrote - if everybody should have access here, we need to change the flow.
We've discussed that today and @ScrewTSW will work on stabilising the compatibility check.

@l0rd
Contributor Author

l0rd commented Apr 30, 2019

Eventually I would like any registered user to have access to nightly-che.openshift.io. But that's a different issue (with lower priority). This issue is only about building/deploying/testing rh-che as part of the nightly tests.

@Katka92
Contributor

Katka92 commented Apr 30, 2019

@l0rd The documentation about our compatibility check is here: https://github.com/redhat-developer/rh-che/tree/master/documentation/compatibility-check, but I would like to explain the current flow here as well.
The current nightly version is taken from https://raw.githubusercontent.com/eclipse/che/master/pom.xml and set in the rh-che pom.xml. rh-che is then built and pushed to the dev-cluster, so we have direct access to it. This deployment stays there until it is redeployed (it is not removed after the tests).
Once this new version of rh-che is deployed, the tests are run.
When a new version is introduced, a new PR is automatically created by a bot. All members can commit to that PR to fix possible problems. The PR can look like this: redhat-developer/rh-che#1272. The job then automatically runs against this branch once a day, or it can be triggered manually from the PR. Anyway, the part that creates/updates the PR has some bugs; we are aware of that and will fix it.
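To make the version bump step concrete, here is a rough sketch of what it could look like. The property name (che.version) and the versions-maven-plugin goal are assumptions for illustration; the real compatibility-check job may do this differently.

```python
#!/usr/bin/env python3
"""Sketch: read the current upstream Che version from eclipse/che master
and write it into the rh-che pom.xml. The property name "che.version" is
an assumption, not necessarily what the real compatibility-check job uses."""
import subprocess
import urllib.request
import xml.etree.ElementTree as ET

POM_URL = "https://raw.githubusercontent.com/eclipse/che/master/pom.xml"
NS = {"m": "http://maven.apache.org/POM/4.0.0"}

# 1. Fetch and parse the upstream pom.xml to find the current version.
with urllib.request.urlopen(POM_URL) as resp:
    root = ET.fromstring(resp.read())
version_el = root.find("m:version", NS)
if version_el is None:
    # Fall back to the parent version if the root pom inherits it.
    version_el = root.find("m:parent/m:version", NS)
upstream_version = version_el.text.strip()
print("Upstream Che version:", upstream_version)

# 2. Write it into the rh-che pom.xml (run from an rh-che checkout).
subprocess.run(
    ["mvn", "versions:set-property",
     "-Dproperty=che.version",
     f"-DnewValue={upstream_version}"],
    check=True,
)
```

The build and push to the dev-cluster, and the automatic PR, would then sit on top of this step as described above.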

@ibuziuk
Member

ibuziuk commented Apr 30, 2019

When a new version is introduced, a new PR is automatically created by a bot

@Katka92 does it work now, or is it a known bug atm? I have not seen a 7 beta 4 snapshot PR so far ¯\_(ツ)_/¯

@Katka92
Contributor

Katka92 commented Apr 30, 2019

@ibuziuk Yes, it is a known bug. We have already planned to fix it.

@rhopp rhopp added this to the Backlog - QE milestone Sep 23, 2019
@vkuznyetsov vkuznyetsov added the status/open-for-dev label Oct 16, 2019
@che-bot
Contributor

che-bot commented Apr 22, 2020

Issues go stale after 180 days of inactivity. lifecycle/stale issues rot after an additional 7 days of inactivity and eventually close.

Mark the issue as fresh with /remove-lifecycle stale in a new comment.

If this issue is safe to close now please do so.

Moderators: Add lifecycle/frozen label to avoid stale mode.

@che-bot che-bot added the lifecycle/stale label Apr 22, 2020
@rhopp rhopp added the area/qe label May 11, 2020
@che-bot che-bot closed this as completed May 20, 2020