-
There are essentially two ways to do it, and it depends on the workload you're attempting to run.

If your workload is real-time safe, you can run the code in the `process()` method of an `AudioWorkletProcessor`. Generally, this means that your workload can process and return synchronously in less than the duration of a render quantum (128 frames, i.e. 128 / 44100, about 2.9 ms at 44.1 kHz), every time.

If it isn't possible to guarantee those properties for your particular problem (synchronicity + consistent processing time), the solution is to send the real-time audio data to a Web Worker, and to perform the analysis there. The correct way to do this is to not use `postMessage`, but a wait-free ring buffer backed by a `SharedArrayBuffer`, such as ringbuf.js. A mini-site (https://ringbuf-js.netlify.app/) has the documentation, important links and two examples. The second example, named "Getting audio out of an `AudioWorkletProcessor`", shows exactly this setup.

With this setup you have a lot more flexibility: there is no real-time constraint, but you still have very low latency. You can for example wait and combine multiple 128-frame chunks of audio to process 1024 frames of audio at a time, for efficiency, if your problem is OK with adding a bit of latency (in this example, 1024 / 44100, about 23 ms, which is acceptable in most scenarios).
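For concreteness, here is a minimal sketch of that second approach, based on the `RingBuffer`/`AudioWriter`/`AudioReader` helpers documented on the ringbuf.js mini-site. The file names, processor name, and `analyze()` function are illustrative; check the mini-site for the authoritative API.

```js
// processor.js (loaded with audioWorklet.addModule) -- real-time thread.
import { RingBuffer, AudioWriter } from "./ringbuf.js"; // path is illustrative

class CaptureProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    // The SharedArrayBuffer is created on the main thread with
    // RingBuffer.getStorageForCapacity(...) and passed in processorOptions.
    this.writer = new AudioWriter(
      new RingBuffer(options.processorOptions.sab, Float32Array)
    );
  }
  process(inputs) {
    const input = inputs[0][0]; // first input, first channel: 128 frames
    if (input) {
      // Wait-free enqueue: no locks, no allocation -- real-time safe.
      // If the worker falls behind and the buffer fills up, samples are dropped.
      this.writer.enqueue(input);
    }
    return true;
  }
}
registerProcessor("capture-processor", CaptureProcessor);
```

```js
// worker.js -- no real-time constraint here.
import { RingBuffer, AudioReader } from "./ringbuf.js";

let reader = null;
const block = new Float32Array(1024); // batch 128-frame chunks into 1024 frames

onmessage = (e) => {
  // The main thread sends the same SharedArrayBuffer it gave the processor.
  reader = new AudioReader(new RingBuffer(e.data.sab, Float32Array));
  setInterval(() => {
    // Drain in 1024-frame blocks (~23 ms at 44.1 kHz) for efficiency.
    while (reader.available_read() >= block.length) {
      reader.dequeue(block);
      analyze(block); // your analysis: FFT, feature extraction, etc.
    }
  }, 10);
};

function analyze(samples) {
  /* anything goes here: it cannot glitch the audio thread */
}
```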
-
@padenot Hello! Thanks a lot for your explanation, it helped a lot. The scheme with the ring buffer worked great, but we ran into some problems with the necessity of turning on the cross-origin isolation headers (COOP/COEP) that `SharedArrayBuffer` requires. Do you have an example of a Worklet-to-Worker conversation with postMessages? Can they communicate directly, without sending postMessages through the main thread? For now we only have the scheme working by relaying messages through the main thread.
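For what it's worth, one way to wire up a direct Worklet-to-Worker channel is to create a `MessageChannel` on the main thread once and transfer one port to each side; after that handoff, the two can exchange messages without the main thread in the middle. A sketch (the message shapes and names are illustrative):

```js
// main.js -- one-time handoff; afterwards the main thread is out of the loop.
const worker = new Worker("worker.js");
const node = new AudioWorkletNode(context, "capture-processor");

const channel = new MessageChannel();
// MessagePorts are transferable, both to Workers and to AudioWorkletNode.port.
worker.postMessage({ port: channel.port1 }, [channel.port1]);
node.port.postMessage({ port: channel.port2 }, [channel.port2]);
```

```js
// Inside the AudioWorkletProcessor's constructor:
this.port.onmessage = (e) => {
  this.workerPort = e.data.port; // direct line to the worker
  this.workerPort.postMessage({ type: "hello" });
};
```

Note that `postMessage` allocates, so this is appropriate for occasional control messages; the audio samples themselves are still better sent through the `SharedArrayBuffer` ring buffer described above.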
-
@padenot Hello! Can you tell me about the AudioWorklet's internal buffering? I'm profiling the AudioWorklet + Worker + postMessages scheme (in Chrome) and seeing the following: timings are activated when the AudioWorklet runs its `process()` callback. Initially I thought that the AudioWorklet runs `process()` at a steady pace, once every 128 frames, but the calls arrive in bursts, and because of this there may not be enough samples in the buffer for the AudioWorklet to take frames from in time. So I've created a starting latency of about 3 * 480 samples (when we take the first 128 samples from the outputBuffer, there are already 3 * 480 processed samples there), and that fixes the problem. But I'm trying to understand how to make my implementation better, using knowledge about how frames are buffered.
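A sketch of the pre-roll described above, on the consuming side: stay silent until the ring buffer holds a start threshold of samples, then dequeue normally. The 3 * 480 constant is the value from the comment, not a general recommendation, and `RingBuffer`/`AudioReader` are again the ringbuf.js helpers:

```js
import { RingBuffer, AudioReader } from "./ringbuf.js";

const START_THRESHOLD = 3 * 480; // pre-roll from the comment: 30 ms at 48 kHz

class PlaybackProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super();
    this.reader = new AudioReader(
      new RingBuffer(options.processorOptions.sab, Float32Array)
    );
    this.started = false;
  }
  process(inputs, outputs) {
    const output = outputs[0][0]; // 128 frames, zero-filled by the browser
    if (!this.started) {
      // Output silence until enough samples are buffered, so a burst of
      // process() calls can't drain the buffer faster than the worker fills it.
      if (this.reader.available_read() < START_THRESHOLD) {
        return true;
      }
      this.started = true;
    }
    // dequeue() returns the number of samples actually read; on an underrun
    // the tail of `output` simply stays silent.
    this.reader.dequeue(output);
    return true;
  }
}
registerProcessor("playback-processor", PlaybackProcessor);
```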
-
Hello! I am currently working on implementing a deep learning model in Web Audio for real-time inference. My model is written in PyTorch and I want it to run inside an AudioWorklet. Do you have experience or best practices for this? I would like to hear your advice.
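One common pattern for this (not from this thread, just one known approach, consistent with the advice above) is to export the PyTorch model to ONNX and run inference in a Web Worker with onnxruntime-web, feeding audio through the same kind of ring buffer. Inference time is rarely bounded and real-time safe, so the Worker + ring buffer split applies here too. A sketch, assuming a model exported as `model.onnx` with a single float32 input (names are illustrative):

```js
// worker.js -- ML inference off the audio thread.
import * as ort from "onnxruntime-web";

let session = null;

async function init() {
  // Model exported from PyTorch with torch.onnx.export.
  session = await ort.InferenceSession.create("model.onnx");
}

async function infer(samples /* Float32Array of audio */) {
  const input = new ort.Tensor("float32", samples, [1, samples.length]);
  const results = await session.run({ [session.inputNames[0]]: input });
  return results[session.outputNames[0]].data; // Float32Array
}
```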