
#8246: Port Whisper Model functionality to n300 card (single device) with bs>1 support #8339

Open. Wants to merge 1 commit into main.

Conversation

kkeerthana0573 (Contributor) commented May 10, 2024

This PR:

  1. Adds a single-card demo implementation of ttnn_functional_whisper and ttnn_optimized_functional_whisper for Audio Classification and Conditional Generation, supporting batch_size > 1 (see the input-preparation sketch below).
  2. Moves functional_whisper from models/experimental to models/demos/whisper.
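
For reference, here is a minimal sketch (not part of this PR) of how a batch of audio clips can be packed into a single batched input tensor for a Whisper demo. The checkpoint, dataset, and batch size are illustrative assumptions; the actual demos prepare inputs through their own helpers.

```python
# Hedged sketch: preparing a batch (batch_size > 1) of audio inputs for Whisper.
# The checkpoint and dataset below are placeholders, not necessarily what the demo uses.
from datasets import load_dataset
from transformers import AutoFeatureExtractor

batch_size = 8
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/whisper-base")

# Load a small speech dataset and take batch_size clips.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio_arrays = [clip["array"] for clip in ds[:batch_size]["audio"]]

# Each clip is padded/truncated to 30 s of log-mel features, giving one tensor
# of shape [batch_size, 80, 3000] that a batched demo can consume directly.
inputs = feature_extractor(audio_arrays, sampling_rate=16000, return_tensors="pt")
print(inputs.input_features.shape)  # torch.Size([8, 80, 3000])
```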

tt-rkim (Collaborator) commented May 10, 2024

  • Why did we move ttnn whisper into demos/grayskull, but run it in common models?
  • Did you run nightly fast dispatch?

kkeerthana0573 (Contributor, Author) commented May 13, 2024

@tt-rkim,

  • The whisper model runs on both Grayskull and Wormhole, so the tests are added in run_common_models.sh.
    Since the model runs on multiple device types, should we move it to the models/demos/ folder?

  • CI links -

esmalTT (Contributor) commented Oct 30, 2024

@kkeerthana0573 Please update the description with the passing pipelines. It looks like this should run in the nightly and performance pipelines.

kkeerthana0573 (Contributor, Author) commented

@esmalTT,
I've triggered the CIs and will update the description with the passing pipelines as soon as possible.
Nightly CI - passed for this model but failed for other models - link.
Device Perf CI - timed out for this model on n300, though it works as expected locally. We're currently debugging this - link.
e2e perf CI - passed - link.

kkeerthana0573 force-pushed the keerthana/functional_whisper_demo branch 4 times, most recently from 01644b3 to fb8fe3f, on November 6, 2024 06:32
kkeerthana0573 force-pushed the keerthana/functional_whisper_demo branch from fb8fe3f to 8ecf382 on November 13, 2024 17:01
kkeerthana0573 force-pushed the keerthana/functional_whisper_demo branch from 8ecf382 to ed66476 on November 15, 2024 11:57
tt-rkim (Collaborator) commented Nov 18, 2024

Please post updates.

And ping in Slack if ready for review.

kkeerthana0573 force-pushed the keerthana/functional_whisper_demo branch from ed66476 to f285521 on November 19, 2024 02:43
kkeerthana0573 force-pushed the keerthana/functional_whisper_demo branch from f285521 to 63b2949 on November 19, 2024 11:16
tt-rkim (Collaborator) commented Nov 19, 2024

This is not ready for review, per your lack of response.

@bkeith-TT @zzigler-tt Do not include this in the count of PRs blocked by code review.

kkeerthana0573 (Contributor, Author) commented

@tt-rkim,
The Device Perf test for this model hangs both in CI and locally.
Initially, the test worked as expected - here is the link to the Device Perf CI run from when it was passing.
Later, it started hanging even before the test began - here is the link to the latest Device Perf CI run.

The same issue is reported here.
Thank you.

tt-rkim (Collaborator) commented Nov 19, 2024

I see; I will respond to your issue there.

For the purposes of this PR, can we take out the device perf test and work on it in a separate PR? We should not be merging broken tests into main.

Otherwise you are blocking yourself from merging.
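
As an illustration of the suggestion above (not code from this PR), the hanging device-perf test could also be temporarily skipped rather than deleted while the hang is debugged in a follow-up; the test name below is a hypothetical placeholder.

```python
# Hedged sketch: temporarily skip the hanging device-perf test so the rest of the
# PR can merge while the hang is investigated separately. The test name is a placeholder.
import pytest


@pytest.mark.skip(reason="Device perf test hangs on n300 in CI; tracked separately")
def test_whisper_device_perf():
    ...
```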
