[mlir][vector][NFC] Make function name more meaningful in lit tests. #94538
Conversation
It also moves the test near other similar test cases.
@llvm/pr-subscribers-mlir @llvm/pr-subscribers-mlir-vector

Author: Han-Chung Wang (hanhanW)

Changes

It also moves the test near other similar test cases.

Full diff: https://github.com/llvm/llvm-project/pull/94538.diff

1 Files Affected:
diff --git a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
index 65bf0b9335d28..de45a530db9a6 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
@@ -102,6 +102,25 @@ func.func @transfer_read_dims_mismatch_non_zero_indices(
// -----
+func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+ %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
+ %idx0 : index, %idx1 : index) -> vector<2x2xf32> {
+ %c0 = arith.constant 0 : index
+ %cst_1 = arith.constant 0.000000e+00 : f32
+ %8 = vector.transfer_read %subview[%c0, %idx0, %idx1, %c0], %cst_1 {in_bounds = [true, true]} : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
+ return %8 : vector<2x2xf32>
+}
+
+// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
+// CHECK-LABEL: func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
+// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()
+
+// CHECK-128B-LABEL: func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+// CHECK-128B: memref.collapse_shape
+
+// -----
+
// The input memref has a dynamic trailing shape and hence is not flattened.
// TODO: This case could be supported via memref.dim
@@ -212,6 +231,25 @@ func.func @transfer_write_dims_mismatch_contiguous(
// -----
+func.func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
+ %value : vector<2x2xf32>,
+ %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
+ %idx0 : index, %idx1 : index) {
+ %c0 = arith.constant 0 : index
+ vector.transfer_write %value, %subview[%c0, %idx0, %idx1, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
+ return
+}
+
+// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
+// CHECK-LABEL: func.func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
+// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()
+// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
+
+// CHECK-128B-LABEL: func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
+// CHECK-128B: memref.collapse_shape
+
+// -----
+
func.func @transfer_write_dims_mismatch_non_contiguous(
%arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, %vec : vector<2x1x2x2xi8>) {
%c0 = arith.constant 0 : index
@@ -459,43 +497,6 @@ func.func @fold_unit_dims_entirely(%arg0 : vector<8xi32>,
// CHECK-128B-LABEL: func @fold_unit_dims_entirely(
// CHECK-128B-NOT: memref.collapse_shape
-
-// -----
-
-func.func @regression_non_contiguous_dim_read(%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
- %idx0 : index, %idx1 : index) -> vector<2x2xf32> {
- %c0 = arith.constant 0 : index
- %cst_1 = arith.constant 0.000000e+00 : f32
- %8 = vector.transfer_read %subview[%c0, %idx0, %idx1, %c0], %cst_1 {in_bounds = [true, true]} : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
- return %8 : vector<2x2xf32>
-}
-
-// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
-// CHECK-LABEL: func.func @regression_non_contiguous_dim_read(
-// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
-// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()
-
-// CHECK-128B-LABEL: func @regression_non_contiguous_dim_read(
-// CHECK-128B: memref.collapse_shape
-
-// -----
-
-func.func @regression_non_contiguous_dim_write(%value : vector<2x2xf32>,
- %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
- %idx0 : index, %idx1 : index) {
- %c0 = arith.constant 0 : index
- vector.transfer_write %value, %subview[%c0, %idx0, %idx1, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
- return
-}
-
-// CHECK: #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
-// CHECK-LABEL: func.func @regression_non_contiguous_dim_write(
-// CHECK: %[[APPLY:.*]] = affine.apply #[[$MAP]]()
-// CHECK: %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
-
-// CHECK-128B-LABEL: func @regression_non_contiguous_dim_write(
-// CHECK-128B: memref.collapse_shape
-
// -----
func.func @negative_out_of_bound_transfer_read(
This is greatly appreciated, thank you for following-up 🙏🏻
LGTM modulo one small comment :)
@@ -102,6 +102,25 @@ func.func @transfer_read_dims_mismatch_non_zero_indices(

// -----

func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
Suggested change:
- func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+ func.func @transfer_read_dims_mismatch_non_contiguous_non_zero_indices(

- contiguous -> non_contiguous (probably a typo?)
- idx -> indices (for consistency with transfer_read_dims_mismatch_non_zero_indices)

Similar comment for transfer_write_dims_mismatch_contiguous_non_zero_idx.
Sorry, I don't follow why it is non_contiguous? I think it is writing vector<2x2xf32> to a contiguous memory?
%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
Note the strides - the memref is not contiguous when considering all dims. However, your point is correct:
I think it is writing vector<2x2xf32> to a contiguous memory?
When only considering the trailing 2 dims, memref is indeed contiguous. IIRC, the point of this test was to verify that the pattern works even if:
- overall, memref is non-contiguous, however
- the inner dims are contiguous and that's sufficient.
Does it make sense? If yes, then we probably need a better name or a comment to explain 😅 If not, bear with me :)
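[Editor's note] To make the stride reasoning above concrete, here is a small sketch, not part of the PR; the helper name and the Python framing are purely illustrative. It counts how many trailing dims of memref<1x3x3x2xf32, strided<[40, 10, 2, 1]>> can be collapsed into one contiguous block, using the rule that dim d joins the block iff strides[d] == strides[d+1] * sizes[d+1].

```python
# Illustrative only (not from the PR): count how many trailing dims of a
# strided memref form a single contiguous block.
def contiguous_trailing_dims(sizes, strides):
    # The innermost dim must be unit-stride to be contiguous at all.
    if strides[-1] != 1:
        return 0
    n = 1
    expected = strides[-1] * sizes[-1]
    # Walk outward: dim d joins the block iff strides[d] == strides[d+1] * sizes[d+1].
    for size, stride in zip(reversed(sizes[:-1]), reversed(strides[:-1])):
        if stride != expected:
            break
        n += 1
        expected = stride * size
    return n

# memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
print(contiguous_trailing_dims([1, 3, 3, 2], [40, 10, 2, 1]))
# -> 2: only the trailing 3x2 block is contiguous (stride 10 != 2 * 3 breaks the
#    chain), which is all a vector<2x2xf32> transfer over the two inner dims needs.
```

So the memref as a whole is non-contiguous, but the two innermost dims are, which is what the test exercises.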
I see, it makes sense to me now! Thanks for the explanation. I found that you're improving the comments in #94490. Given that it will follow the naming convention, I'll leave the "comment improvement" to the PR that you're working on. And I will update the function name in this PR.