
[mlir][vector][NFC] Make function name more meaningful in lit tests. #94538

Merged 2 commits into llvm:main on Jun 7, 2024

Conversation

@hanhanW (Contributor) commented Jun 5, 2024

It also moves the test near other similar test cases.
@llvmbot (Member) commented Jun 5, 2024

@llvm/pr-subscribers-mlir

@llvm/pr-subscribers-mlir-vector

Author: Han-Chung Wang (hanhanW)

Changes

It also moves the test near other similar test cases.


Full diff: https://github.com/llvm/llvm-project/pull/94538.diff

1 file affected:

  • (modified) mlir/test/Dialect/Vector/vector-transfer-flatten.mlir (+38-37)
diff --git a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
index 65bf0b9335d28..de45a530db9a6 100644
--- a/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
+++ b/mlir/test/Dialect/Vector/vector-transfer-flatten.mlir
@@ -102,6 +102,25 @@ func.func @transfer_read_dims_mismatch_non_zero_indices(
 
 // -----
 
+func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+    %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
+    %idx0 : index, %idx1 : index) -> vector<2x2xf32> {
+  %c0 = arith.constant 0 : index
+  %cst_1 = arith.constant 0.000000e+00 : f32
+  %8 = vector.transfer_read %subview[%c0, %idx0, %idx1, %c0], %cst_1 {in_bounds = [true, true]} : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
+  return %8 : vector<2x2xf32>
+}
+
+//       CHECK:  #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
+// CHECK-LABEL:  func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+//       CHECK:    %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
+//       CHECK:    %[[APPLY:.*]] = affine.apply #[[$MAP]]()
+
+// CHECK-128B-LABEL: func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
+//       CHECK-128B:   memref.collapse_shape
+
+// -----
+
 // The input memref has a dynamic trailing shape and hence is not flattened.
 // TODO: This case could be supported via memref.dim
 
@@ -212,6 +231,25 @@ func.func @transfer_write_dims_mismatch_contiguous(
 
 // -----
 
+func.func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
+    %value : vector<2x2xf32>,
+    %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
+    %idx0 : index, %idx1 : index) {
+  %c0 = arith.constant 0 : index
+  vector.transfer_write %value, %subview[%c0, %idx0, %idx1, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
+  return
+}
+
+//       CHECK:  #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
+// CHECK-LABEL:  func.func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
+//       CHECK:    %[[APPLY:.*]] = affine.apply #[[$MAP]]()
+//       CHECK:    %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
+
+// CHECK-128B-LABEL: func @transfer_write_dims_mismatch_contiguous_non_zero_idx(
+//       CHECK-128B:   memref.collapse_shape
+
+// -----
+
 func.func @transfer_write_dims_mismatch_non_contiguous(
       %arg : memref<5x4x3x2xi8, strided<[24, 6, 2, 1], offset: ?>>, %vec : vector<2x1x2x2xi8>) {
     %c0 = arith.constant 0 : index
@@ -459,43 +497,6 @@ func.func @fold_unit_dims_entirely(%arg0 : vector<8xi32>,
 // CHECK-128B-LABEL: func @fold_unit_dims_entirely(
 //   CHECK-128B-NOT:   memref.collapse_shape
 
-
-// -----
-
-func.func @regression_non_contiguous_dim_read(%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
-                                              %idx0 : index, %idx1 : index) -> vector<2x2xf32> {
-  %c0 = arith.constant 0 : index
-  %cst_1 = arith.constant 0.000000e+00 : f32
-  %8 = vector.transfer_read %subview[%c0, %idx0, %idx1, %c0], %cst_1 {in_bounds = [true, true]} : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, vector<2x2xf32>
-  return %8 : vector<2x2xf32>
-}
-
-//       CHECK:  #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
-// CHECK-LABEL:  func.func @regression_non_contiguous_dim_read(
-//       CHECK:    %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
-//       CHECK:    %[[APPLY:.*]] = affine.apply #[[$MAP]]()
-
-// CHECK-128B-LABEL: func @regression_non_contiguous_dim_read(
-//       CHECK-128B:   memref.collapse_shape
-
-// -----
-
-func.func @regression_non_contiguous_dim_write(%value : vector<2x2xf32>,
-                                                %subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>,
-                                                %idx0 : index, %idx1 : index) {
-  %c0 = arith.constant 0 : index
-  vector.transfer_write %value, %subview[%c0, %idx0, %idx1, %c0] {in_bounds = [true, true]} : vector<2x2xf32>, memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>
-  return
-}
-
-//       CHECK:  #[[$MAP:.+]] = affine_map<()[s0] -> (s0 * 2)>
-// CHECK-LABEL:  func.func @regression_non_contiguous_dim_write(
-//       CHECK:    %[[APPLY:.*]] = affine.apply #[[$MAP]]()
-//       CHECK:    %[[COLLAPSE:.+]] = memref.collapse_shape %{{.*}} {{\[}}[0], [1], [2, 3]] : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>> into memref<1x3x6xf32, strided<[40, 10, 1], offset: ?>>
-
-// CHECK-128B-LABEL: func @regression_non_contiguous_dim_write(
-//       CHECK-128B:   memref.collapse_shape
-
 // -----
 
 func.func @negative_out_of_bound_transfer_read(

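As context for the CHECK lines in this diff: the flattening pattern collapses the trailing dims [2, 3] of the memref (sizes 3 and 2) via memref.collapse_shape and linearizes the indices (%idx1, %c0) into the collapsed dim, which is where the affine_map<()[s0] -> (s0 * 2)> comes from (s0 is %idx1, the innermost index is the constant 0). A minimal row-major linearization sketch in Python — the helper name is hypothetical, not part of MLIR:

```python
def linearize_index(indices, shape):
    """Linearize multi-dim indices into one row-major index.

    For the collapsed dim group of memref<1x3x3x2xf32> the group shape is
    (3, 2); indices (idx1, 0) linearize to idx1 * 2 + 0, matching the
    affine_map<()[s0] -> (s0 * 2)> emitted by the flattening pattern.
    """
    linear = 0
    for idx, dim in zip(indices, shape):
        linear = linear * dim + idx
    return linear
```

For example, linearize_index((idx1, 0), (3, 2)) evaluates to idx1 * 2 for any in-bounds idx1, which is exactly the affine.apply result the test checks for.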
@banach-space (Contributor) left a comment:

This is greatly appreciated, thank you for following up 🙏🏻

LGTM modulo one small comment :)

@@ -102,6 +102,25 @@ func.func @transfer_read_dims_mismatch_non_zero_indices(

// -----

func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
@banach-space (Contributor) commented:
Suggested change
func.func @transfer_read_dims_mismatch_contiguous_non_zero_idx(
func.func @transfer_read_dims_mismatch_non_contiguous_non_zero_indices(
  • contiguous -> non_contiguous (probably a typo?)
  • idx -> indices (for consistency with transfer_read_dims_mismatch_non_zero_indices)

Similar comment for transfer_write_dims_mismatch_contiguous_non_zero_idx

@hanhanW (Author) replied:
Sorry, I don't follow why it is non_contiguous. Isn't it writing vector<2x2xf32> to contiguous memory?

@banach-space (Contributor) replied Jun 6, 2024:

%subview : memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>

Note the strides - the memref is not contiguous when considering all dims. However, your point is correct:

I think it is writing vector<2x2xf32> to a contiguous memory?

When considering only the trailing 2 dims, the memref is indeed contiguous. IIRC, the point of this test was to verify that the pattern works even if:

  • overall, memref is non-contiguous, however
  • the inner dims are contiguous and that's sufficient.

Does it make sense? If yes, then we probably need a better name or a comment to explain 😅 If not, bear with me :)
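The distinction described above (overall non-contiguous memref, contiguous trailing dims) can be checked mechanically from the shape and strides. A minimal Python sketch, assuming a row-major layout — the helper name is hypothetical, not an MLIR API:

```python
def num_trailing_contiguous_dims(shape, strides):
    """Count trailing dims that form a contiguous row-major block.

    Walking from the innermost dim outward, a dim extends the contiguous
    block when its stride equals the product of all dim sizes inside it
    (so the innermost stride must be 1).
    """
    expected = 1
    count = 0
    for dim, stride in zip(reversed(shape), reversed(strides)):
        if stride != expected:
            break
        count += 1
        expected *= dim
    return count
```

For the test's memref<1x3x3x2xf32, strided<[40, 10, 2, 1], offset: ?>>, this reports 2: the trailing 3x2 block is contiguous (strides 2 and 1), but the outer strides 40 and 10 do not match the inner extents 6 and 18, so the memref as a whole is not — exactly the case the pattern must still handle.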

@hanhanW (Author) replied:

I see, it makes sense to me now! Thanks for the explanation. I found that you're improving the comments in #94490. Given that it will follow the naming convention, I'll leave the "comment improvement" to the PR that you're working on. And I will update the function name in this PR.

@hanhanW merged commit b653357 into llvm:main on Jun 7, 2024
7 checks passed
@hanhanW deleted the follow-up-94051 branch on June 7, 2024 at 16:43
@HerrCai0907 mentioned this pull request on Jun 13, 2024
4 participants