[LLVMGPU] Move bufferization after vectorization for matmulSIMT #10217
Merged
compiler/src/iree/compiler/Codegen/LLVMGPU/LLVMGPUTensorAlloc.cpp (new file: 72 additions, 0 deletions)
// Copyright 2022 The IREE Authors
//
// Licensed under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

#include "iree/compiler/Codegen/LLVMGPU/TilingUtils.h"
#include "iree/compiler/Codegen/PassDetail.h"
#include "iree/compiler/Codegen/Passes.h"
#include "mlir/Dialect/Bufferization/IR/Bufferization.h"
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Transforms/Passes.h"

#define DEBUG_TYPE "iree-llvmgpu-alloc"

namespace mlir {
namespace iree_compiler {

/// Filter to decide which ops need allocations.
static bool filter(Operation *op) {
  auto linalgOp = dyn_cast<linalg::LinalgOp>(op);
  if (!linalgOp) return false;
  // Can't promote dynamic shapes.
  if (linalgOp.hasDynamicShape()) return false;
  return linalg::isaContractionOpInterface(op) &&
         linalgOp.getNumParallelLoops() >= 2 &&
         linalgOp.getNumParallelLoops() <= 3;
}

namespace {
struct LLVMGPUTensorAllocPass
    : public LLVMGPUTensorAllocBase<LLVMGPUTensorAllocPass> {
  void getDependentDialects(DialectRegistry &registry) const override {
    registry.insert<bufferization::BufferizationDialect>();
  }
  void runOnOperation() override {
    auto funcOp = getOperation();

    // Tile the reduction first to reduce the alloc size.
    if (failed(tileReduction(funcOp))) {
      return signalPassFailure();
    }

    SmallVector<Operation *> opsToPromote;
    funcOp.walk([&](Operation *op) {
      if (filter(op)) opsToPromote.push_back(op);
    });
    for (Operation *op : opsToPromote) {
      OpBuilder builder(op);
      auto linalgOp = cast<linalg::LinalgOp>(op);
      bufferization::BufferizationOptions options;
      // Promote all the input operands.
      for (auto operand : linalgOp.getInputOperands()) {
        FailureOr<Value> ret = bufferization::allocateTensorForShapedValue(
            builder, op->getLoc(), operand->get(), false, options, true);
        if (failed(ret)) {
          return signalPassFailure();
        }
        Value v = ret.getValue();
        operand->get().replaceAllUsesExcept(v, v.getDefiningOp());
      }
    }
  }
};
}  // namespace

std::unique_ptr<OperationPass<func::FuncOp>> createLLVMGPUTensorAlloc() {
  return std::make_unique<LLVMGPUTensorAllocPass>();
}

}  // namespace iree_compiler
}  // namespace mlir
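For context on where a pass like this sits relative to the bufferization move in this PR, here is a minimal pipeline sketch. OpPassManager and addNestedPass are standard MLIR APIs, but the pipeline-builder and bufferization-helper names below are illustrative assumptions, not necessarily the exact functions IREE uses:

// Sketch only: promote matmul operands while still on tensors, run
// tiling/vectorization next, and bufferize only afterwards.
void addMatmulSimtPipelineSketch(OpPassManager &pm) {  // hypothetical builder
  pm.addNestedPass<func::FuncOp>(createLLVMGPUTensorAlloc());
  // ... tile, distribute, and vectorize passes would be added here ...
  addBufferizePasses(pm);  // hypothetical helper; bufferization comes last
}

The ordering is the point of the change: allocations for promotion are introduced at the tensor level, and the whole function is bufferized only after vectorization.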
Reviewer: This is going to walk the function through all levels of nesting. How do you ensure that the ops picked up here are all scoped at the same level?
Author: Each op is tiled individually (there is no tile-and-fuse at this level). This was already the case for the second level of tiling in the LLVMGPU backend. I'm not sure I see the problem?
Reviewer: Earlier this was using getComputeOps. That helper is deliberately constrained: it looks for ops within a single block and errors out if the dispatch deviates from that, which is intentional to keep compilation from going down unintended paths. A walk is more free-form, so it's not the same thing. How do you guarantee that the ops collected here are all meant to be tiled at the same time?
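To make the distinction concrete, here is a rough sketch contrasting a block-scoped collection (in the spirit of the constrained helper described above, not its actual implementation) with the recursive walk used in this pass; funcOp and filter refer to the names in the file above:

// Block-scoped collection: only ops directly inside the function's entry
// block are considered, so nothing nested in scf.for / scf.if is picked up.
SmallVector<Operation *> blockScopedOps;
for (Operation &op : funcOp.getBody().front())
  if (filter(&op)) blockScopedOps.push_back(&op);

// Recursive walk (what this pass does): visits every op in every nested
// region, so ops at different nesting levels can land in the same list.
SmallVector<Operation *> walkedOps;
funcOp.walk([&](Operation *op) {
  if (filter(op)) walkedOps.push_back(op);
});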
Author: Right, walk is what I want here, I think, since I want to distribute all operations. I could look at the attribute, but I don't think I need to, because I always want to distribute all the ops across the group.
Reviewer: Asking differently: is there a case here where ops from different blocks need to be distributed? Most of the ops we use have a single block, so different blocks represent different levels of nesting, and I am not sure distributing ops at different levels of nesting is valid. So I think flipping this to not distribute ops at different nesting levels at the same time, and erroring out instead, would avoid unintended lowering.
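A minimal sketch of the kind of guard being suggested, assuming the ops were collected into opsToPromote as in the pass above. This is purely illustrative and not part of the PR, and, as the reply below explains, the reduction tiling in this pass intentionally leaves ops at two nesting levels, so the pass does not do this:

// Hypothetical guard: refuse to proceed if the collected ops are not all in
// the same block, i.e. at the same level of nesting.
if (!opsToPromote.empty()) {
  Block *commonBlock = opsToPromote.front()->getBlock();
  for (Operation *op : opsToPromote) {
    if (op->getBlock() != commonBlock) {
      op->emitOpError("ops to promote are at different nesting levels");
      return signalPassFailure();
    }
  }
}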
Author: We tile the reduction loop before doing the second level of tile-and-distribute (to handle shared-memory promotion), so we do end up with the linalg.matmul inside an scf.for region while the fused ops stay in the top-level basic block. Both of those need to be distributed.
Author: Note that having the reduction loop outside causes other problems that I'm trying to solve (I shared a doc with you where I start to discuss this), so ideally we would remove it at some point, but for now I don't have an alternative solution for moving bufferization down.
Reviewer: Yeah, that makes sense and gives me a better idea. Thanks for explaining!