e2e storage: test labels #121391
Conversation
This is unusual and complicates the upcoming refactoring of test definitions.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The first commit removes a dot at the end. The second commit makes more changes. Here's a comparison of the generated test names before and after. Spaces get inserted, for example:

1127,1129c1127,1129
< storage/testsuites/disruptive.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
< storage/testsuites/disruptive.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
< storage/testsuites/disruptive.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
---
> storage/testsuites/disruptive.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive [Disruptive] [LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
> storage/testsuites/disruptive.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive [Disruptive] [LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
> storage/testsuites/disruptive.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive [Disruptive] [LinuxOnly] Should test that pv written before kubelet restart is readable after restart.

When ignoring whitespace changes:

1285d1284
< storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
1288c1287
< storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
---
> storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress [Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
1290a1290
> storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress [Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
1570d1569
< storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
1573c1572
< storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
---
> storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io] [Serial] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress [Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
1575a1575
> storage/testsuites/snapshottable_stress.go: [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io] [Serial] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress [Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]

This also seems to be because of whitespace - not sure why diff didn't ignore it. 🤷
@@ -316,7 +318,7 @@ func (t *multiVolumeTestSuite) DefineTests(driver storageframework.TestDriver, p
 // [ node1 ]
 // |      | <- same volume mode
 // [volume1] -> [restored volume1 snapshot]
-ginkgo.It("should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]", func(ctx context.Context) {
+f.It("should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly]", feature.VolumeSnapshotDataSource, feature.VolumeSourceXFS, func(ctx context.Context) {
This shows how tests will be registered going forward: `f.It` is a shorthand for `framework.It`, a wrapper around `ginkgo.It` which inserts the traditional inline text tags in addition to adding Ginkgo labels. `feature.VolumeSnapshotDataSource` is one of the registered feature tags from https://github.com/kubernetes/kubernetes/blob/master/test/e2e/feature/feature.go. No more typos and better visibility when new features get added... ideally those should then also come with documentation of what they mean. We don't have that for existing features.
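To make the mechanics concrete, here is a self-contained sketch of how a wrapper like this could compose the traditional inline test name from a base text plus tag values. The names (`specText`, `label`) are invented for illustration; this is not the actual framework code, which additionally registers the tags as Ginkgo labels.

```go
package main

import (
	"fmt"
	"strings"
)

// label is a stand-in for the framework's feature/tag values
// (e.g. feature.VolumeSnapshotDataSource); the name is hypothetical.
type label string

// specText mimics how a wrapper around ginkgo.It could build the
// traditional inline test name: the base text followed by one
// bracketed, space-separated tag per label. This is where the extra
// spaces around tags mentioned in the diff above come from.
func specText(base string, labels ...label) string {
	parts := []string{base}
	for _, l := range labels {
		parts = append(parts, "["+string(l)+"]")
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(specText(
		"should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly]",
		"Feature:VolumeSnapshotDataSource",
		"Feature:VolumeSourceXFS",
	))
}
```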
@@ -219,7 +219,7 @@ type DriverInfo struct {
 // plugin if it exists and is empty if this DriverInfo represents a CSI
 // Driver
 InTreePluginName string
-FeatureTag string // FeatureTag for the driver
+TestTags []interface{} // tags for the driver (e.g. framework.WithSlow())
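To illustrate the shape of this change, here is a minimal self-contained mock: the single string field is replaced by a slice of opaque tag values produced by helper functions. All names here (`testTag`, `withSlow`, `withSerial`, `driverInfo`) are invented stand-ins for the real framework types, not the actual API.

```go
package main

import "fmt"

// testTag is a mock stand-in for the values returned by the
// framework's tag helpers (e.g. framework.WithSlow()), which carry
// both the inline text tag and a Ginkgo label.
type testTag struct{ text string }

func withSlow() testTag   { return testTag{"[Slow]"} }
func withSerial() testTag { return testTag{"[Serial]"} }

// driverInfo sketches the DriverInfo change: previously a single
// string field (FeatureTag string), now a slice of tag values.
type driverInfo struct {
	inTreePluginName string
	testTags         []interface{}
}

func main() {
	d := driverInfo{
		inTreePluginName: "kubernetes.io/gce-pd",
		testTags:         []interface{}{withSlow(), withSerial()},
	}
	for _, t := range d.testTags {
		fmt.Println(t.(testTag).text)
	}
}
```

Because the slice holds function-call results rather than plain strings, this field cannot round-trip through YAML or JSON, which is exactly the external-user concern raised below.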
This field cannot be set via YAML or JSON, which affects external users of the e2e.test binary - see PR description.
Looks like there is one problem that I need to resolve:
This makes it possible to select tests through `ginkgo --label-filter` also for the custom labels.
Thanks for the improvement.
Hopefully that doesn't break anyone; it seems fairly small, so we can merge and find out. /lgtm
LGTM label has been added. Git tree hash: 4f65a83a3d61fa10b3378bc2e75631352ea3e9b1
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pohly, saad-ali

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Adding tags with the new framework helper functions also adds them as Ginkgo labels. Labels enable much more flexible filtering of specs. The goal is to register all tests this way, with no or only minimal changes to the full test names, so existing regular expressions should continue to work.
There's one exception: a space now gets inserted around tags, which wasn't always the case before. If a regular expression in a Prow job is sensitive to white space, then it will not match anymore.
Special notes for your reviewer:
This PR starts that work by converting the E2E storage framework, which is the tricky part. The rest of the conversion is just search/replace. Because tags need to be added via function calls, embedding tags inside a driver definition YAML or JSON file no longer works. Some in-tree drivers used this to mark all of their tests as "slow" or as "depends on Windows" - for those in-tree drivers this still works.
It's unclear if users need this. If they do, support could be added by allowing tags as a string slice and then converting those string entries to function calls. But that is complicated and not done yet.
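If that support were added, a minimal sketch of the string-to-tag conversion could look like the following. Everything here (`knownTags`, `tagsFromStrings`, the `testTag` mock) is invented for illustration and is not part of the framework; the point is only that each string entry from YAML/JSON must be mapped to a known tag constructor, with unknown names rejected.

```go
package main

import "fmt"

// testTag is a mock stand-in for the values returned by the
// framework's tag helpers (e.g. framework.WithSlow()).
type testTag struct{ text string }

// knownTags maps string entries, as they might appear in a YAML or
// JSON driver definition, to the corresponding tag constructors.
var knownTags = map[string]func() testTag{
	"Slow":   func() testTag { return testTag{"[Slow]"} },
	"Serial": func() testTag { return testTag{"[Serial]"} },
}

// tagsFromStrings converts string entries into tag values suitable
// for a TestTags []interface{} field, failing on unknown names.
func tagsFromStrings(names []string) ([]interface{}, error) {
	var out []interface{}
	for _, n := range names {
		mk, ok := knownTags[n]
		if !ok {
			return nil, fmt.Errorf("unknown tag %q", n)
		}
		out = append(out, mk())
	}
	return out, nil
}

func main() {
	tags, err := tagsFromStrings([]string{"Slow", "Serial"})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(tags), "tags converted")
}
```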
Does this PR introduce a user-facing change?
/sig storage