A small tool to render spectrograms (waterfall graphs) or waveforms from audio in your browser.
- Demo 1 (standard config using the microphone)
- Demo 2 (showing different config possibilities with audio tracks)
```
npm i @fjw/audiovisualizer
```
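After installing, import the package in your script. A minimal sketch, assuming an ESM build and that `AudioVisualizer` is the default export (the export shape is not confirmed here, so check the package's own examples):

```js
// Assumed ESM default import of @fjw/audiovisualizer;
// adjust if the package uses a named export or a different module format.
import AudioVisualizer from "@fjw/audiovisualizer";
```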
Initialise the AudioVisualizer object:
```js
new AudioVisualizer({options});
```

For example:

```js
new AudioVisualizer({ // no src, uses the microphone
    v: [
        {
            type: "spectrum",
            container: "#myspectrum"
        },
        {
            type: "waveform",
            container: "#mywaveform"
        }
    ]
});
```
Options:

- `v`: an array of visualizations, each with individual options
  - `v.type`: the type of the visualization; possible values are `waveform` and `spectrum`
  - `v.container`: the CSS selector of the container (HTMLElement) the canvas is rendered into; if the container is resized, the canvas is resized, too
- `src`: URL of an audio file/stream; skip it to use the microphone as the source
- `muted`: start muted or with audible audio (default: `true`)
- `analyser`: object with additional options for the analyser (see AnalyserNode (Mozilla Docs)); for example, `analyser.fftSize: 4096` increases the resolution (the standard is 2048)
- `v.background`: background color
- `v.lineWidth`: width of the line
- `v.strokeStyle`: strokeStyle (color) of the line
- `v.rowsPerSec`: speed of the waterfall
- `v.colortheme`: array of colors for the gradients (see examples)
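A sketch combining the options above into one configuration; the URL, container selectors, and all concrete values (colors, line width, `rowsPerSec`, `fftSize`) are placeholders chosen for illustration, not documented defaults:

```js
new AudioVisualizer({
    src: "https://example.com/track.mp3", // placeholder URL; omit src to use the microphone
    muted: true,                          // start muted (the default)
    analyser: {
        fftSize: 4096                     // higher resolution than the standard 2048
    },
    v: [
        {
            type: "spectrum",
            container: "#myspectrum",
            rowsPerSec: 40,               // waterfall speed (illustrative value)
            colortheme: ["#000000", "#0000ff", "#ff0000", "#ffffff"] // gradient colors (illustrative)
        },
        {
            type: "waveform",
            container: "#mywaveform",
            background: "#000000",        // background color (illustrative)
            lineWidth: 2,                 // line width (illustrative)
            strokeStyle: "#00ff00"        // line color (illustrative)
        }
    ]
});
```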
Methods:

- `mute()`: mutes the audio
- `unmute()`: unmutes the audio
- `setSource(url)`: sets a new audio source (`false`/`null`/`undefined` = microphone)
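A short usage sketch for these methods; it assumes you keep a reference to the constructed `AudioVisualizer` instance, and the URL is a placeholder:

```js
const visualizer = new AudioVisualizer({
    v: [{ type: "waveform", container: "#mywaveform" }]
});

visualizer.unmute();                                    // make the audio audible
visualizer.setSource("https://example.com/track.mp3");  // switch to a placeholder audio URL
visualizer.setSource(null);                             // null/false/undefined switches back to the microphone
visualizer.mute();                                      // mute the audio again
```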