Integrating projectM into a web app #812
SDL isn't really required, but it is very convenient as it's integrated into Emscripten, making it essentially free to use. You can use any other means of acquiring a WebGL context, e.g. Emscripten's built-in C++ functions, and handle input and audio recording elsewhere. The WebGL context is always bound to a canvas, so it will automatically receive the rendering output. projectM itself just needs the active GL context and audio data passed to it. Anything else is entirely up to the integrating app. You'll at least need some C++ code to glue functionality to the JavaScript side; Emscripten's API provides functions for this.
Thanks, good to know it's possible. I will attempt a proof of concept next week, rendering into a WebGL canvas.
Would be great to hear how it worked out!
An update on this. I have studied WebAssembly/Emscripten and I have an idea of how this can work. It's more complicated than I thought. The main issue is passing audio data from my app (JavaScript) to projectM (WebAssembly). As far as I can tell, it is not possible to access the raw audio data from an AudioContext instance in the browser's main thread. It seems the correct way to process audio data is by using an Audio Worklet, which runs in a separate thread. They recommend WebAssembly to do this:
https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Using_AudioWorklet
So at a high level, it seems I need two WebAssembly modules, one for projectM and one for an audio worklet, with audio data shared between the two threads.
Whether or not I will continue with this, I'm not sure. I'm concerned about burdening myself with technical debt, even if I get it working.
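For illustration, a minimal sketch of the worklet half of that idea: an AudioWorkletProcessor that forwards each 128-sample block to the main thread (names are illustrative, and as the next comment points out, this may not be needed at all):

// pcm-forwarder.js, registered via audioCtx.audioWorklet.addModule('pcm-forwarder.js')
class PcmForwarder extends AudioWorkletProcessor {
  process(inputs) {
    const channels = inputs[0];
    if (channels.length > 0) {
      // Each render quantum delivers 128 samples per channel; copy the
      // first channel and post it to the main thread for projectM.
      this.port.postMessage(channels[0].slice(0));
    }
    return true; // keep the processor alive
  }
}
registerProcessor('pcm-forwarder', PcmForwarder);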
I don't think there's a need to use workers, as the only thing you have to do is get the audio data from an AudioBuffer, then create an interleaved array from it (AudioBuffer stores channels separately, e.g. one LLLL, the other RRRR, but projectM requires an array with the samples for each channel following each other, e.g. LRLRLRLR...), and that's a very fast operation. The actual audio processing is done by projectM in WASM, which is exactly what the quote above states. projectM doesn't do any complex processing though, just an FFT to get spectrum data and some simple smoothing. I'm not too familiar with the Web Audio API, but I guess you can just query the audio buffer for samples each time the rendering function is called. Ideally, there should be around 735 frames of audio available if your context captures at 44.1 kHz and you render at 60 FPS.
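For reference, a small sketch of that interleaving step (assuming a stereo AudioBuffer; a mono buffer simply duplicates its single channel):

// Interleave a planar AudioBuffer (LLLL... / RRRR...) into the
// LRLRLRLR... layout projectM expects.
function interleave(audioBuffer) {
  const left = audioBuffer.getChannelData(0);
  const right = audioBuffer.numberOfChannels > 1 ? audioBuffer.getChannelData(1) : left;
  const out = new Float32Array(left.length * 2);
  for (let i = 0; i < left.length; i++) {
    out[2 * i] = left[i];      // left sample
    out[2 * i + 1] = right[i]; // right sample
  }
  return out;
}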
Thanks for the AudioBuffer tip! I did look into AudioBuffer but I think I got confused by decodeAudioData (which requires loading an actual audio file). So I will continue with this, and I will update again soon with my progress!
I have built projectM for Emscripten, as suggested at the link below, but I have run into a problem importing from it.
https://github.com/projectM-visualizer/examples-emscripten#configure-and-compile-projectm
To build for Emscripten:
After the build I can see the headers and static lib:
This is what I am trying to compile with Emscripten:

#include <emscripten/html5.h>
#include <projectM-4/projectM.h>

int main() {
    // initialize WebGL context attributes
    EmscriptenWebGLContextAttributes webgl_attrs;
    emscripten_webgl_init_context_attributes(&webgl_attrs);

    // create a WebGL context bound to the canvas and make it current
    EMSCRIPTEN_WEBGL_CONTEXT_HANDLE gl_ctx = emscripten_webgl_create_context("#my-canvas", &webgl_attrs);
    emscripten_webgl_make_context_current(gl_ctx);

    // enable floating-point texture support for the motion vector grid
    // https://github.com/projectM-visualizer/projectm/blob/master/docs/emscripten.rst#initializing-emscriptens-opengl-context
    // https://emscripten.org/docs/api_reference/html5.h.html#c.emscripten_webgl_enable_extension
    emscripten_webgl_enable_extension(gl_ctx, "OES_texture_float");

    projectm_handle projectMHandle = projectm_create();

    return 0;
}

But it results in what appears to be a linking error:
Adding these flags:
Results in:
If I comment out the following line, it does compile:

projectm_handle projectMHandle = projectm_create();

I am completely stuck. Do you have any idea what could be wrong?
For reference, I have encapsulated my projectM-emscripten build in this Dockerfile:

# Build:
# docker build --tag projectm-emscripten-builder .
#
# Run:
# docker run --rm -t -u $(id -u):$(id -g) -v $(pwd):/src projectm-emscripten-builder emcc ...
FROM emscripten/emsdk:3.1.61
ARG PROJECTM_VERSION=4.1.1
RUN apt-get update && apt-get install -y --no-install-recommends \
# libprojectM build tools and dependencies
# https://github.com/projectM-visualizer/projectm/wiki/Building-libprojectM#install-the-build-tools-and-dependencies
libgl1-mesa-dev \
libglm-dev \
mesa-common-dev \
&& rm -rf /var/lib/apt/lists/* \
# download projectM
&& wget https://github.com/projectM-visualizer/projectm/releases/download/v$PROJECTM_VERSION/libprojectM-$PROJECTM_VERSION.tar.gz \
&& tar xzf libprojectM-*.tar.gz \
&& rm libprojectM-*.tar.gz \
&& cd libprojectM-* \
# build projectM
# https://github.com/projectM-visualizer/projectm/blob/master/BUILDING-cmake.md
&& mkdir build \
&& cd build \
&& emcmake cmake .. \
-D CMAKE_BUILD_TYPE=Release \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D ENABLE_EMSCRIPTEN=1 \
&& emmake cmake \
--build . \
--target install \
--config Release \
# allow container to be run as a non-root user
&& chmod 777 /emsdk/upstream/emscripten/cache/symbol_lists*
Solved! (#812 (comment)) It turns out I needed to link the library and my code together in a single emcc invocation.
There goes my entire Sunday. Sorry for the noise!
I managed to get projectM working in an HTML canvas in my browser, but only with the default idle preset. As I attempted to pass the audio data, I realised that zero-filled arrays of sample data were always being returned:
https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/createBuffer
https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext/decodeAudioData
This is a shame. It seems I was kind of right the first time. My web app streams audio from files hosted on AWS S3. It does not have access to the complete file data immediately. Many of the audio files are recordings that last for hours. So it seems that this is not going to work for me after all. I guess now I will have to stop work on this. I will checkpoint my work in case some new feature comes along that makes it possible to decode fragments of audio file data. The basic demo I created works very well, and it is simple to bind functions to control projectM entirely with JavaScript. I'm not sure whether to close this issue or not. Please feel free to close it if you wish.
I had a thought. The Web Audio API provides an AnalyserNode.
This node was specifically designed to facilitate audio visualizers; it is what typical web-based audio visualizers use as input data. I'm fairly sure it's what Butterchurn uses. I wonder if I can use or adapt this data to work with projectM. So I'm not ready to give up just yet. I will see if I can get the data from the AnalyserNode to work with projectM...
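For reference, a sketch of tapping an existing audio graph with an AnalyserNode (audioCtx and sourceNode are assumed to already exist in the app):

// Insert an AnalyserNode between the source and the speakers; it
// observes the signal without modifying it.
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048; // also sets the length of the time-domain buffer
sourceNode.connect(analyser);
analyser.connect(audioCtx.destination);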
Oh, that's interesting. It would be super neat if we could bypass the addPCM() call and directly send the output from the AnalyserNode to projectM, via some sort of addFFT() call instead of addPCM().
That would be ideal. For now I will simply process the data into the form projectM expects. After that I need to test packaging and loading presets, and then finally optimize the build. Then I will publish my code and host a demo online.
Having thought about it: since I believe a Fourier transform (or some signal-processing magic) has already been done on the raw signal to produce an interpolated frequency array representing a moment in time, I'm not sure I can reverse that back into raw PCM. At least I have some kind of data to give projectM, but it won't be PCM data.
All things considered, it looks more like an issue with handling the stream data between the server and the browser APIs properly. Most streaming websites either use an audio streaming service like IceCast, which provides the proper MP3 header after connecting and then just streams the data, or they use HLS, which splits your audio data into segments (3 to 5 seconds long each) and uses a playlist (M3U8) to retrieve each chunk; the player regularly re-fetches the playlist from the server to discover new chunks. Thus, HLS doesn't require any specialized audio server; it's often based on simple physical files hosted on the web server. projectM requires the actual waveform data to render the visuals properly, so you should use AnalyserNode.getFloatTimeDomainData() or the originally decoded audio and pass the result to projectM's add_pcm_float method. projectM has its own FFT implementation internally. It's a specially adapted algorithm taken directly from Milkdrop, which does a bit more than just running a discrete FFT on the waveform: it applies an envelope filter and does additional noise filtering. This FFT also returns a very specific value range required for preset spectrum and beat detection data; if these values are off, presets will render erratically. This was one of the main issues in earlier projectM versions, which used an off-the-shelf FFT implementation. The actual FFT algorithm is very fast (just a bunch of additions and multiplications running at near-native speed, as it's compiled to WASM), so there won't be any measurable performance difference compared to the AnalyserNode.
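A sketch of that per-frame flow, assuming the AnalyserNode from the earlier sketch and a hypothetical addPcmFloat wrapper exposed from the WASM module (one way to pass the float data across is shown after a later comment below):

// Each animation frame: read the raw waveform and hand it to projectM.
const samples = new Float32Array(analyser.fftSize);
function render() {
  analyser.getFloatTimeDomainData(samples); // waveform, not FFT bins
  addPcmFloat(samples);                     // hypothetical wrapper around projectm_pcm_add_float
  // ...trigger projectM's render call here...
  requestAnimationFrame(render);
}
requestAnimationFrame(render);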
It would be nice to be able to use projectM with generated and live audio streams as well, though.
I will try this, using the maximum fftSize (32768).
Good news, it seems to work! The default preset is reacting to my audio and seems to be synchronized with the beat. There is a drawback with Emscripten: when calling compiled C functions from JavaScript, only byte arrays can be passed to them (without manually allocating memory and writing to it, something I'd rather not do). Fortunately projectM provides a function that accepts unsigned 8-bit samples (projectm_pcm_add_uint8). I am aware that projectM will have to interpret my audio data as mono, as I don't have access to the separate channels. In the future maybe I could make the effort to create a worklet to decode the audio, instead of using the AnalyserNode, and then I'd be able to get the raw PCM data in stereo. But I wonder if it would be worth it: do presets generally look better when projectM is supplied with stereo data?
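For reference, a sketch of that byte-array hand-off using ccall's 'array' argument type, which copies the bytes into the WASM heap automatically (add_pcm_uint8 is an assumed name for an exported C wrapper around projectm_pcm_add_uint8):

// Read unsigned 8-bit waveform data (values centred on 128) and pass
// it to projectM through ccall.
const bytes = new Uint8Array(analyser.fftSize);
analyser.getByteTimeDomainData(bytes);
Module.ccall(
  'add_pcm_uint8',     // assumed exported wrapper name
  null,                // no return value
  ['array', 'number'], // 'array' copies the byte data into the heap
  [bytes, bytes.length]
);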
You can use Embind to create a JavaScript binding for any C/C++ function, and even use C++ classes from JavaScript. There are many examples in the docs. This allows you to pass a float array to projectM. You can even expose the whole projectM API to JS using this technique and implement all the control/setup code in JS.
I'm using Embind. These are my bindings, which are simple wrappers encapsulating the projectm_handle:
I tried passing a float array using Embind but could not get it to work, so I am exporting a wrapper using Emscripten's ccall feature. This allows passing byte arrays but not float arrays.
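Should float input be needed after all, there is also the manual route mentioned above: allocate WASM heap memory from JS and write the samples into it. A sketch (assumes _malloc/_free are exported and that an add_pcm_float wrapper around projectm_pcm_add_float exists):

// Copy a Float32Array into the WASM heap and call a float-based wrapper.
function sendFloats(samples) {
  const numBytes = samples.length * samples.BYTES_PER_ELEMENT;
  const ptr = Module._malloc(numBytes);
  // Access HEAPF32 freshly each call: ALLOW_MEMORY_GROWTH can invalidate old views.
  Module.HEAPF32.set(samples, ptr / 4);
  Module.ccall('add_pcm_float', null,
               ['number', 'number'], // pointer, sample count
               [ptr, samples.length]);
  Module._free(ptr);
}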
When I call the PCM add function I pass 32768 samples at a time, because I have set the fftSize of my Web Audio analyser to 32768, and so my samples array has length 32768. Could I get away with a smaller fftSize?
I believe so, yes; you don't need a huge number of bins. Most presets just work off bass/mid/treble anyway.
I'm not having much luck with presets. They blend in correctly but then crash after about 2 seconds and just freeze/flicker. I wonder if WebGL is not configured correctly; I'm using the default attributes. Any idea what could be causing this? Edit: since the default idle preset runs perfectly, I don't think it is a problem with WebGL. It might be a memory problem when loading presets. Though I am using ALLOW_MEMORY_GROWTH=1, maybe there is something else I need to do.
I have published my work here: https://github.com/evoyy/projectm-webgl-demo
I would be very grateful if somebody could take a look. If you don't want to build the Docker image and run the demo, no problem; at least you can see what I'm trying to do.
projectM uses half-float textures for the motion vector grid to store the displacement of the previous frame's warp mesh. WebGL 2.0 sadly doesn't support this texture format by default (while OpenGL ES 3 does), so you'll have to at least enable the following WebGL extensions after context creation and before initializing projectM:
Otherwise, the textures will be missing or incomplete, which will cause presets using motion vectors to break, and there can be other issues, as the rendering framebuffers may also be marked as incomplete.
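For illustration, enabling half-float render-target support from the JS side might look like the sketch below; the exact extension names here are assumptions (check gl.getSupportedExtensions() for what the browser actually offers):

// Enable float/half-float framebuffer support on the WebGL 2 context
// before handing it to projectM (extension names are assumed candidates).
const gl = canvas.getContext('webgl2');
gl.getExtension('EXT_color_buffer_half_float');
gl.getExtension('EXT_color_buffer_float');
gl.getExtension('OES_texture_float_linear');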
Unfortunately that didn't work. I am enabling these extensions before initializing projectM:
When I load a preset, it takes about 7 seconds before the transition starts. The transition goes perfectly, and the new preset runs for about 2 or 3 seconds before freezing. I still have a feeling it might be a memory problem.
I discovered that if I call [...] It seems that projectM's [...] Any idea why loading a preset is delayed for several seconds, instead of taking effect immediately?
Topic
Third-Party Application Interfaces and Remote Control
Your Request
I'm investigating the possibility of replacing Butterchurn with projectM. Butterchurn is no longer maintained, and projectM seems to be the focal point of Milkdrop-related development now.
Butterchurn renders into a canvas in the DOM. This is great because it allows the visualizer window to be controlled and styled with HTML elements, resized, transitioned, full-screened, and even detached from the browser using the Picture-in-Picture API.
As I understand, projectM can be compiled into WebAssembly using Emscripten. I found an example here:
https://github.com/projectM-visualizer/examples-emscripten
My plan is to do something similar, except without using the SDL library; the browser's DOM will be the UI and projectM will be controlled by JavaScript. I wondered if this is possible and I found Embind:
https://emscripten.org/docs/porting/connecting_cpp_and_javascript/embind.html
Do you think what I want to do is a good idea, or even possible? I'm not a C programmer and I am new to WebAssembly, so it will be a challenge for me.