Update l2i invoke and seamless to support AutoencoderTiny, remove attention processors if no mid_block is detected #5936
Description
l2i throws an assertion error when run with madebyollin/taesdxl, because that VAE requires a different class in diffusers (AutoencoderTiny) to load. This is a small PR to update seamless and l2i to accept AutoencoderTiny models and not throw exceptions while processing them.

QA Instructions, Screenshots, Recordings

Run an SDXL pipeline using a VAE that requires AutoencoderTiny and validate that the image successfully encodes and decodes.

Merge Plan
This PR can be merged when approved
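For context, the change described above can be sketched as a relaxed type check plus a mid_block probe. This is an illustrative sketch only, not InvokeAI's actual code: the class and function names below are stand-ins (the real classes are diffusers.AutoencoderKL and diffusers.AutoencoderTiny).

```python
# Illustrative stand-ins for the diffusers VAE classes.
class AutoencoderKL:
    """Stand-in for the standard KL VAE; real models expose a mid_block."""
    def __init__(self):
        self.mid_block = object()

class AutoencoderTiny:
    """Stand-in for TAESD-style VAEs (e.g. madebyollin/taesdxl).
    These have no mid_block attribute."""
    pass

def assert_supported_vae(vae):
    # Before this PR, only the KL VAE class passed the check, so
    # taesdxl-style models tripped an assertion. Accept both classes.
    assert isinstance(vae, (AutoencoderKL, AutoencoderTiny)), (
        f"Unsupported VAE type: {type(vae).__name__}"
    )

def has_mid_block(vae):
    # Seamless patching should skip attention processors when the VAE
    # (such as AutoencoderTiny) has no mid_block to patch.
    return getattr(vae, "mid_block", None) is not None
```

With this shape, seamless processing can branch on `has_mid_block(vae)` instead of assuming every VAE carries attention processors.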