Proposal: Testing infrastructure #303
Great write-up @smothiki. Going to sit down and think about this one overnight.
Some great thoughts in here. On the view of the current architecture: one of our most recent projects, registry-proxy, was not mentioned here. I think registry-proxy should be considered a sub-component, since it's just a dumb reverse proxy for the registry. When you mean
What would this look like? workflow-e2e becomes strictly for the control plane, then we would implement a separate
WRT the last point, I think beefy unit tests and local functional tests are the place to start for a component-specific suite. Ideally we can get to a point where we test all visible interfaces thoroughly enough with mocks that workflow-e2e is not required to merge a PR.
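To make the mock idea concrete, here is a minimal sketch of what such a component-level test could look like in Go, using a stubbed controller. The `listApps` helper and the `/v2/apps/` response shape are illustrative stand-ins, not the actual controller-sdk-go API:

```go
package client_test

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
)

// listApps is a stand-in for a component's client call; in a real suite this
// would be the function under test (e.g. something from controller-sdk-go).
func listApps(baseURL string) ([]string, error) {
	resp, err := http.Get(baseURL + "/v2/apps/")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var body struct {
		Results []struct {
			ID string `json:"id"`
		} `json:"results"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return nil, err
	}
	names := make([]string, 0, len(body.Results))
	for _, r := range body.Results {
		names = append(names, r.ID)
	}
	return names, nil
}

// TestListApps exercises the visible interface against a mocked controller,
// so no running cluster (and no workflow-e2e run) is needed to merge a PR.
func TestListApps(t *testing.T) {
	mock := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path != "/v2/apps/" {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write([]byte(`{"results":[{"id":"example-go"}]}`))
	}))
	defer mock.Close()

	apps, err := listApps(mock.URL)
	if err != nil {
		t.Fatalf("listApps returned error: %v", err)
	}
	if len(apps) != 1 || apps[0] != "example-go" {
		t.Fatalf("unexpected apps list: %v", apps)
	}
}
```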
This definitely represents a smarter approach for when and how to utilize the e2e suite(s); thanks, @smothiki. Here are my ideas for possible immediate next steps:
I forgot to mention this in the few ideas I discussed with @vdice. We would still run the entire test suite, like the current e2e, but not for every PR; instead it would run as a cron job four times a day on a dedicated cluster.
Will we be doing this as well as the entire test suite for control plane PRs?
Just had a discussion with @vdice, thanks to his suggestions.
For every PR to a control plane component, we would have a cluster that already has logging, monitoring, and the helm keep manifests installed; running helmc install chart-name would then install only the control plane components. If we have consensus on this, we would love to proceed to implementation on this front. Pros: turnaround test time is optimized.
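As a rough sketch of what the per-PR install step could look like in a CI helper: the `helmc install chart-name` command comes from the comment above, while the `WORKFLOW_CHART` variable and the assumption that the job already has a kubeconfig for the pre-provisioned cluster are illustrative.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// installControlPlane installs only the control plane chart onto a cluster
// that already has the logging/monitoring stack and helm keep manifests,
// which is what keeps the per-PR turnaround time low.
func installControlPlane(chart string) error {
	cmd := exec.Command("helmc", "install", chart)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// WORKFLOW_CHART is an illustrative name; the CI job would set it to the
	// control plane chart built for the PR under test.
	chart := os.Getenv("WORKFLOW_CHART")
	if chart == "" {
		log.Fatal("WORKFLOW_CHART must be set to the control plane chart to install")
	}
	if err := installControlPlane(chart); err != nil {
		log.Fatalf("helmc install %s failed: %v", chart, err)
	}
	fmt.Println("control plane installed; ready to run the component test suite")
}
```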
@bacongobbler, per @vdice's suggestion, cron jobs are not helping much in identifying bugs, so I'm thinking of a better approach now.
If all you are testing is the control plane, there is no benefit to installing the monitoring and logging stack. Second, you gain no benefit from keeping the cluster around. There is also no reason to stand up the control plane if you are testing the monitoring stack. I'm not sure I understand the benefits here.
@jchauncey, as far as I know, we need to install the logging stack at least to check deis logs.
Then why the hassle of keeping it around? It literally takes seconds for
If time is not an issue, we can install the entire chart.
For testing the control plane, if you are doing e2e tests there is no
This proposal comes down to how we test the parts in such a way as to …
In almost all cases we need a suite of tests that exercise API calls.
We then take the current e2e tests and move them into CLI functional tests.
This means component PRs only run their functional tests for validation and …
We can rely on a larger suite of e2e tests for validation of a release but …
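For illustration, one of the API-exercising functional tests could look roughly like the sketch below. The environment variable names, the auth header format, and the use of `/v2/apps/` as a representative endpoint are placeholders for the example, not a confirmed part of the plan:

```go
package functional_test

import (
	"net/http"
	"os"
	"testing"
	"time"
)

// TestAppsEndpoint exercises a controller API call directly, without driving
// the CLI or a full workflow-e2e run. It is skipped unless a controller URL
// is provided, so it can run in a component's own CI job.
func TestAppsEndpoint(t *testing.T) {
	controller := os.Getenv("CONTROLLER_URL") // illustrative variable name
	token := os.Getenv("CONTROLLER_TOKEN")    // illustrative variable name
	if controller == "" {
		t.Skip("CONTROLLER_URL not set; skipping functional API test")
	}

	req, err := http.NewRequest("GET", controller+"/v2/apps/", nil)
	if err != nil {
		t.Fatal(err)
	}
	if token != "" {
		// Header format is an assumption; adjust to the controller's auth scheme.
		req.Header.Set("Authorization", "token "+token)
	}

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		t.Fatalf("controller not reachable: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200 from /v2/apps/, got %d", resp.StatusCode)
	}
}
```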
So after thinking about this all day, I would like to counter with this proposal, which is just a slight variation on @smothiki's.
The pipeline for this would look like the following:
The last job might only need to happen every so often, so we could make it a manual job for now if we want.
+1, I believe that's the idea behind the sub-component projects. It becomes a little harder to do that for the controller or for builder at the moment, but I imagine it would be nice to get to that point for those components (the control plane, specifically) as well.
How would users run the latest release of Workflow without a new chart? What would the new process look like? Remember that we don't want to break userspace; users are already comfortable with the idea of
So only workflow-cli gets tested end-to-end on changes? That doesn't make too much sense to me as we'd like to have end-to-end tests for the controller, at the very least (for example, the migration from replication controllers to deployments caught a few bugs).
If this is necessary for this shift then I'm okay with this. I'd like to err on the side of "try not to bite off more than you can chew". These small tasks eventually add up.
This feels slightly in-but-out of scope for the topic of "revisiting how we test". Are you suggesting feature-flagging the test suite or the feature itself? For significant features like deployments we already did the latter. I'm all for whatever comes out of this proposal, but for me I'd like to see (as a developer):
How we end up achieving those three bullet points does not matter to me so much. Bonus points for a visible reduction in our GCE costs.
After working on the test suite for a good amount of time and getting help from fellow folks, these are some issues I'm thinking of proposing to make the suite better. During our last retro, we already talked about running smoke tests.
Some proposals for proceeding further with the current CI/CD infrastructure:
View of the current Deis architecture
The Control Plane:
Any changes made to these components, the Deis CLI, or the controller-sdk-go repository will affect workflow functionality and should run the full test suite.
The Logging and Monitoring Stack:
A PR change to this stack will not affect any CLI functionality or workflow features related to the control plane, except:
deis logs
Sub-components:
A PR change to these will only affect a few tests in workflow-e2e; instead of running the entire test suite, a git push test with different app types should be sufficient (a rough sketch follows below).
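A minimal sketch of such a git-push test, assuming an example app directory with a `deis` git remote already set up; the `EXAMPLE_APP_DIR` variable and the skip logic are illustrative:

```go
package smoke_test

import (
	"os"
	"os/exec"
	"testing"
)

// TestGitPushDeploy covers the sub-component path end to end with a single
// `git push` deploy instead of the full workflow-e2e suite.
func TestGitPushDeploy(t *testing.T) {
	appDir := os.Getenv("EXAMPLE_APP_DIR") // e.g. a checkout of an example app; name is illustrative
	if appDir == "" {
		t.Skip("EXAMPLE_APP_DIR not set; skipping git push smoke test")
	}

	cmd := exec.Command("git", "push", "deis", "master")
	cmd.Dir = appDir
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("git push deploy failed: %v\n%s", err, out)
	}
	t.Logf("deploy output:\n%s", out)
}
```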
Ideas
Please provide feedback and comments so we can proceed further with this proposal and come up with action items we can work on to make Jenkins a green place.