
Makefile: add run-e2e-smoke #259

Merged

Conversation

weizhouapache
Collaborator

Issue #, if available:

Description of changes:

"make run-e2e-smoke" does everything "make run-e2e" does, but skips all e2e tests against remote servers:

  • generate cluster templates
  • create kind cluster
  • generate manifests
  • build docker image
  • push to local docker registry
  • apply kubectl config
  • setup bootstrap cluster
  • initialize bootstrap cluster, including the deployments of capc-system/capc-controller-manager, capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, capi-system/capi-controller-manager
  • (SKIPPED) run e2e tests
  • delete kind cluster
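The steps above can be sketched as follows. This is a minimal illustration of how a smoke target might tell Ginkgo to skip every spec while still exercising the full setup/teardown pipeline; the variable names (`GINKGO_SKIP`, `build_e2e_args`) and the skip pattern are assumptions for illustration, not the actual Makefile contents of this PR:

```shell
#!/bin/sh
# Hypothetical sketch: a smoke run reuses the full e2e pipeline
# (kind cluster, image build, bootstrap init) but passes Ginkgo a
# skip pattern matching every spec, so "0 of 29 specs" actually run.

# A pattern that matches all spec descriptions, i.e. skip everything.
GINKGO_SKIP='.*'

build_e2e_args() {
  # Compose the extra flag handed down to the e2e test runner.
  if [ -n "$GINKGO_SKIP" ]; then
    printf -- '-ginkgo.skip=%s' "$GINKGO_SKIP"
  fi
}

args=$(build_e2e_args)
echo "would run: go test ./test/e2e/... $args"
```

With a skip-all pattern, the suite still initializes and tears down the bootstrap cluster, which matches the "0 Passed | 29 Skipped" output below.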

Testing performed:

$ make run-e2e-smoke

...
secret/secret1 created
Will skip:
  ./kubeconfig_helper
Running Suite: capi-e2e
=======================
Random Seed: 1685429950
Will run 0 of 29 specs

STEP: Initializing a runtime.Scheme with all the GVK relevant for this test
STEP: Loading the e2e test configuration from "/data/git/cluster-api-provider-cloudstack/test/e2e/config/cloudstack.yaml"
STEP: Launching Toxiproxy Server
STEP: Creating a clusterctl local repository into "/data/git/cluster-api-provider-cloudstack/_artifacts"
STEP: Reading the ClusterResourceSet manifest ./data/cni/kindnet.yaml
STEP: Setting up the bootstrap cluster
STEP: Initializing the bootstrap cluster
INFO: clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure cloudstack
INFO: Waiting for provider controllers to be running
STEP: Waiting for deployment capc-system/capc-controller-manager to be available
INFO: Creating log watcher for controller capc-system/capc-controller-manager, pod capc-controller-manager-7bf5c5d95c-ht2hq, container manager
STEP: Waiting for deployment capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager, pod capi-kubeadm-bootstrap-controller-manager-65f7b5f55c-2knlh, container manager
STEP: Waiting for deployment capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager to be available
INFO: Creating log watcher for controller capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager, pod capi-kubeadm-control-plane-controller-manager-6f68d65bc8-2c2bv, container manager
STEP: Waiting for deployment capi-system/capi-controller-manager to be available
INFO: Creating log watcher for controller capi-system/capi-controller-manager, pod capi-controller-manager-594c44c5f7-b87kn, container manager
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSTEP: Dumping logs from the bootstrap cluster
Failed to get logs for the bootstrap cluster node capi-test-control-plane: exit status 2
STEP: Tearing down the management cluster
STEP: Killing Toxiproxy Server

JUnit report was created: /data/git/cluster-api-provider-cloudstack/_artifacts/junit.e2e_suite.1.xml

Ran 0 of 29 Specs in 59.860 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 29 Skipped
PASS

Ginkgo ran 1 suite in 1m3.008089424s
Test Suite Passed
Deleted nodes: ["capi-test-control-plane"]
Deleted clusters: ["capi-test"]

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label May 30, 2023
@netlify

netlify bot commented May 30, 2023

Deploy Preview for kubernetes-sigs-cluster-api-cloudstack ready!

Name Link
🔨 Latest commit 68707ed
🔍 Latest deploy log https://app.netlify.com/sites/kubernetes-sigs-cluster-api-cloudstack/deploys/64759ffffa5b7600081f8dcb
😎 Deploy Preview https://deploy-preview-259--kubernetes-sigs-cluster-api-cloudstack.netlify.app

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels May 30, 2023
@weizhouapache
Collaborator Author

/assign rohityadavcloud

@codecov-commenter

Codecov Report

Patch and project coverage are unchanged.

Comparison is base (7196931) 34.17% compared to head (68707ed) 34.17%.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #259   +/-   ##
=======================================
  Coverage   34.17%   34.17%           
=======================================
  Files          43       43           
  Lines        3915     3915           
=======================================
  Hits         1338     1338           
  Misses       2394     2394           
  Partials      183      183           


@rohityadavcloud
Member

/approve
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 30, 2023
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rohityadavcloud, weizhouapache

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [rohityadavcloud,weizhouapache]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@blueorangutan

Test Results : (tid-3)
Environment: kvm Rocky8(x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
Template: ubuntu-2004-kube
E2E Test Run Logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/capc-e2e-artifacts-ll-3.zip

[PASS] When testing with disk offering Should successfully create a cluster with disk offering
[PASS] When testing app deployment to the workload cluster with slow network [ToxiProxy] Should be able to download an HTML from the app deployed to the workload cluster
[PASS] When the specified resource does not exist Should fail due to the specified account is not found [TC4a]
[PASS] When the specified resource does not exist Should fail due to the specified domain is not found [TC4b]
[PASS] When the specified resource does not exist Should fail due to the specified control plane offering is not found [TC7]
[PASS] When the specified resource does not exist Should fail due to the specified template is not found [TC6]
[PASS] When the specified resource does not exist Should fail due to the specified zone is not found [TC3]
[PASS] When the specified resource does not exist Should fail due to the specified disk offering is not found
[PASS] When the specified resource does not exist Should fail due to the compute resources are not sufficient for the specified offering [TC8]
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is not customized but the disk size is specified
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is customized but the disk size is not specified
[PASS] When the specified resource does not exist Should fail due to the public IP can not be found
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade worker machine due to insufficient compute resources
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade control plane machine due to insufficient compute resources
[PASS] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
[PASS] When testing affinity group Should have host affinity group when affinity is pro
[PASS] When testing affinity group Should have host affinity group when affinity is anti
[PASS] When testing with custom disk offering Should successfully create a cluster with a custom disk offering
[PASS] with two clusters should successfully add and remove a second cluster without breaking the first cluster
[PASS] When testing app deployment to the workload cluster [TC1][PR-Blocking] Should be able to download an HTML from the app deployed to the workload cluster
[PASS] When testing multiple CPs in a shared network with kubevip Should successfully create a cluster with multiple CPs in a shared network
[PASS] When testing machine remediation Should replace a machine when it is destroyed
[PASS] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
[PASS] When testing horizontal scale out/in [TC17][TC18][TC20][TC21] Should successfully scale machine replicas up and down horizontally
[PASS] When testing resource cleanup Should create a new network when the specified network does not exist
[PASS] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields


Summarizing 3 Failures:

[Fail] When testing subdomain [It] Should create a cluster in a subdomain 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/cluster_helpers.go:143

[Fail] When testing Kubernetes version upgrades [It] Should successfully upgrade kubernetes versions when there is a change in relevant fields 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/controlplane_helpers.go:152

[Fail] When testing app deployment to the workload cluster with network interruption [ToxiProxy] [BeforeEach] Should be able to create a cluster despite a network interruption during that process 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/toxiproxy/toxiProxy.go:170

Ran 28 of 29 Specs in 8764.343 seconds
FAIL! -- 25 Passed | 3 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (8764.44s)
FAIL

@blueorangutan

Tests were aborted.


@blueorangutan

Test Results : (tid-7)
Environment: kvm Rocky8(x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
Template: rockylinux-8-kube
E2E Test Run Logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/capc-e2e-artifacts-ll-7.zip

[PASS] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
[PASS] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
[PASS] When testing app deployment to the workload cluster with network interruption [ToxiProxy] Should be able to create a cluster despite a network interruption during that process
[PASS] When testing Kubernetes version upgrades Should successfully upgrade kubernetes versions when there is a change in relevant fields
[PASS] When testing horizontal scale out/in [TC17][TC18][TC20][TC21] Should successfully scale machine replicas up and down horizontally
[PASS] When testing app deployment to the workload cluster [TC1][PR-Blocking] Should be able to download an HTML from the app deployed to the workload cluster
[PASS] When testing with disk offering Should successfully create a cluster with disk offering
[PASS] When testing machine remediation Should replace a machine when it is destroyed
[PASS] When testing affinity group Should have host affinity group when affinity is pro
[PASS] When testing affinity group Should have host affinity group when affinity is anti
[PASS] When testing with custom disk offering Should successfully create a cluster with a custom disk offering
[PASS] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
[PASS] When testing resource cleanup Should create a new network when the specified network does not exist


Summarizing 16 Failures:

[Fail] When testing subdomain [It] Should create a cluster in a subdomain 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/cluster_helpers.go:143

[Fail] with two clusters [It] should successfully add and remove a second cluster without breaking the first cluster 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/clusterctl/clusterctl_helpers.go:330

[Fail] When the specified resource does not exist [It] Should fail due to the specified account is not found [TC4a] 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified domain is not found [TC4b] 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified control plane offering is not found [TC7] 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified template is not found [TC6] 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified zone is not found [TC3] 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified disk offering is not found 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the compute resources are not sufficient for the specified offering [TC8] 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified disk offer is not customized but the disk size is specified 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the specified disk offer is customized but the disk size is not specified 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist [It] Should fail due to the public IP can not be found 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist When starting with a healthy cluster [It] Should fail to upgrade worker machine due to insufficient compute resources 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When the specified resource does not exist When starting with a healthy cluster [It] Should fail to upgrade control plane machine due to insufficient compute resources 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/invalid_resource.go:253

[Fail] When testing app deployment to the workload cluster with slow network [ToxiProxy] [BeforeEach] Should be able to download an HTML from the app deployed to the workload cluster 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/toxiproxy/toxiProxy.go:170

[Fail] When testing multiple CPs in a shared network with kubevip [It] Should successfully create a cluster with multiple CPs in a shared network 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/cluster_helpers.go:143

Ran 28 of 29 Specs in 14657.765 seconds
FAIL! -- 12 Passed | 16 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (14657.80s)
FAIL

@weizhouapache
Collaborator Author

/retest-required

@k8s-ci-robot k8s-ci-robot merged commit afea09b into kubernetes-sigs:main May 30, 2023
@blueorangutan

Test Results : (tid-8)
Environment: kvm Rocky8(x3), Advanced Networking with Management Server Rocky8
Kubernetes Version: v1.27.2
Kubernetes Version upgrade from: v1.26.5
Kubernetes Version upgrade to: v1.27.2
Template: rockylinux-8-kube
E2E Test Run Logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/capc-e2e-artifacts-ll-8.zip

[PASS] When testing with custom disk offering Should successfully create a cluster with a custom disk offering
[PASS] When the specified resource does not exist Should fail due to the specified account is not found [TC4a]
[PASS] When the specified resource does not exist Should fail due to the specified domain is not found [TC4b]
[PASS] When the specified resource does not exist Should fail due to the specified control plane offering is not found [TC7]
[PASS] When the specified resource does not exist Should fail due to the specified template is not found [TC6]
[PASS] When the specified resource does not exist Should fail due to the specified zone is not found [TC3]
[PASS] When the specified resource does not exist Should fail due to the specified disk offering is not found
[PASS] When the specified resource does not exist Should fail due to the compute resources are not sufficient for the specified offering [TC8]
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is not customized but the disk size is specified
[PASS] When the specified resource does not exist Should fail due to the specified disk offer is customized but the disk size is not specified
[PASS] When the specified resource does not exist Should fail due to the public IP can not be found
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade worker machine due to insufficient compute resources
[PASS] When the specified resource does not exist When starting with a healthy cluster Should fail to upgrade control plane machine due to insufficient compute resources
[PASS] When testing node drain timeout A node should be forcefully removed if it cannot be drained in time
[PASS] When testing MachineDeployment rolling upgrades Should successfully upgrade Machines upon changes in relevant MachineDeployment fields
[PASS] When testing horizontal scale out/in [TC17][TC18][TC20][TC21] Should successfully scale machine replicas up and down horizontally
[PASS] When testing app deployment to the workload cluster with network interruption [ToxiProxy] Should be able to create a cluster despite a network interruption during that process
[PASS] When testing app deployment to the workload cluster [TC1][PR-Blocking] Should be able to download an HTML from the app deployed to the workload cluster
[PASS] When testing with disk offering Should successfully create a cluster with disk offering
[PASS] When testing K8S conformance [Conformance] Should create a workload cluster and run kubetest
[PASS] with two clusters should successfully add and remove a second cluster without breaking the first cluster
[PASS] When testing multiple CPs in a shared network with kubevip Should successfully create a cluster with multiple CPs in a shared network
[PASS] When testing Kubernetes version upgrades Should successfully upgrade kubernetes versions when there is a change in relevant fields
[PASS] When testing resource cleanup Should create a new network when the specified network does not exist
[PASS] When testing machine remediation Should replace a machine when it is destroyed
[PASS] When testing affinity group Should have host affinity group when affinity is pro
[PASS] When testing affinity group Should have host affinity group when affinity is anti


Summarizing 2 Failures:

[Fail] When testing subdomain [It] Should create a cluster in a subdomain 
/root/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.2.12/framework/cluster_helpers.go:143

[Fail] When testing app deployment to the workload cluster with slow network [ToxiProxy] [BeforeEach] Should be able to download an HTML from the app deployed to the workload cluster 
/jenkins/workspace/capc-e2e-new-by-wei/test/e2e/toxiproxy/toxiProxy.go:170

Ran 28 of 29 Specs in 11174.398 seconds
FAIL! -- 26 Passed | 2 Failed | 0 Pending | 1 Skipped
--- FAIL: TestE2E (11174.42s)
FAIL
