Web Audio API
Happy Halloween! 🎃
The last few weeks I've been digging into the Web Audio API. I first explored this browser API a few years ago after getting a copy of Web Audio API by Boris Smus. It's a really short and digestible book, but sadly none of it stuck with me as I moved on over the years, and I always wanted to revisit it and build something on my own. With a little bit of spare time this last month, I decided to dive back in.
How does the Web Audio API work?
In order to work with the Web Audio API, you have to create an audio context, and within that context you connect together various audio nodes that generate sound, add sound effects, adjust volume or panning, and eventually connect to a destination.
The overall effect here is that you create an audio routing graph. I admittedly don't know anything about graph theory or graph data structures, but basic uses of the Web Audio API are very linear: each node affects one part of the sound, and the whole chain is easy to digest. It's not too dissimilar from a guitar pedalboard, where you start with an audio source (like a guitar or synth) and pass it through a series of pedals that process the sound at various stages of the signal chain. A volume pedal raises and lowers the volume, a distortion pedal applies a specified amount of gain, a delay pedal adds an echoing effect, and a reverb pedal applies some spatial reverberation. At the end of the chain, the output is connected to an amplifier or PA system.
The Web Audio API breaks the signal chain down in a very similar manner, so even though I don't fully understand the theory behind it, the code format really speaks to ways that I am already familiar with making music.
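To make the pedalboard analogy concrete before we build anything, here's a rough sketch of what a slightly longer chain could look like. The variable names are just illustrative, but createOscillator, createGain, and createDelay are all real Web Audio API methods:
// A rough "pedalboard" as an audio routing graph:
// source -> volume -> delay -> speakers
const ctx = new AudioContext();
const source = ctx.createOscillator(); // the "guitar": a raw signal source
const volume = ctx.createGain();       // the "volume pedal"
const echo = ctx.createDelay();        // the "delay pedal"
echo.delayTime.value = 0.3;            // delay the signal by 300 ms
                                       // (a true echo would also feed back into itself)
source.connect(volume);
volume.connect(echo);
echo.connect(ctx.destination);         // the "amp": your speakers
We'll build a simpler version of this, step by step, below.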
Creating audio nodes and saving them to variables
All of the following commands can be run in your browser's console. To scaffold out a very basic Web Audio API application, we need to start by creating our distinct nodes. It's helpful to save these to a series of variables.
Creating an Audio Context
To start with, I will create an Audio Context using the AudioContext() constructor, and save it to a variable named actx:
let actx = new AudioContext();
With this context created, we have access to a lot of methods to start creating our audio nodes, and we've created our little music-making world.
Creating a new Oscillator
Next, we need an audio source. Broadly speaking, an audio source might be a microphone, an audio file or sample, or a guitar or synthesizer. In our small example, we'll be using an Oscillator. In electronics, an oscillator is a circuit that generates a repeating electrical signal; in the world of audio, that signal becomes a sound wave we can hear as a steady tone. Once started, this Oscillator will deliver a continuous sound signal until we tell it to stop.
So let's create our Oscillator:
let osc1 = actx.createOscillator();
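By default, this oscillator produces a 440 Hz sine wave. If you want to experiment, both the waveform and the pitch are configurable via standard OscillatorNode properties:
// Optional: change the waveform and pitch from their defaults
osc1.type = 'sawtooth';     // 'sine' (default), 'square', 'sawtooth', or 'triangle'
osc1.frequency.value = 220; // pitch in Hz; 220 is an octave below the 440 Hz default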
Creating a new Gain
Now we will create a Gain node. Gain is basically volume. It is more complicated than that, but for our purposes here, Gain allows us to adjust our sound signal to be loud, quiet, muted, or somewhere in the middle.
let gain1 = actx.createGain();
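If you want to poke at it, the level lives on the node's gain property, which acts as a multiplier on the signal:
// 1 leaves the signal unchanged, 0 mutes it, values in between turn it down
gain1.gain.value = 0.5; // half volume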
Creating a Destination
Lastly, we will need a place for the sound to go. The AudioContext has a destination property by default, but again for our purposes it's handy to save it to a variable.
let out = actx.destination;
By default, this will route the audio signal to our computer speakers.
Connecting audio nodes
So we have an Audio Context and a couple of disparate nodes. Time to start wiring them up.
Our Oscillator is already designated as a source within our Audio Context. Our first step is to connect it to the Gain node:
osc1.connect(gain1);
Next, we need to connect the Gain node to the Destination:
gain1.connect(out);
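As a small aside, in modern browsers connect() returns the node you pass to it, so the two calls above can also be written as a single chain:
// connect() returns its argument, so connections can be chained
osc1.connect(gain1).connect(out);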
Start playing "music"
With everything wired up, we can begin making sound with the following:
osc1.start();
And we can stop making sound with the following:
osc1.stop();
You'll need to reload the page to be able to run osc1.start() again. If you try to restart the Oscillator without a reload, you'll get the following error:
Uncaught InvalidStateError: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
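This is by design: Oscillator nodes are one-shot. Rather than reloading the page, the usual pattern is to create a fresh oscillator every time you want to start the sound. A minimal sketch, using a hypothetical playTone() helper:
// Oscillators are one-shot, so create a new one per "note"
function playTone() {
  const osc = actx.createOscillator();
  osc.connect(gain1); // reuse the existing gain -> destination chain
  osc.start();
  return osc;         // keep a reference so we can stop it later
}

let current = playTone();
current.stop();
current = playTone(); // works: it's a brand new node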
Building a small web interface
The last thing I want to do here is build a little web interface that controls what we've done.
I'll create an HTML doc containing a couple of buttons, each with an ID:
<button id="startOsc">Start Oscillator</button>
<button id="stopOsc">Stop Oscillator</button>
And then I'll drop all of the code we wrote previously into a JavaScript file loaded by that HTML doc:
let actx = new AudioContext();
let osc1 = actx.createOscillator();
let gain1 = actx.createGain();
let out = actx.destination;

osc1.connect(gain1);
gain1.connect(out);

const startButton = document.querySelector('button#startOsc');
startButton.addEventListener('click', () => {
  osc1.start();
});

const stopButton = document.querySelector('button#stopOsc');
stopButton.addEventListener('click', () => {
  osc1.stop();
});
The important part is that we've wrapped osc1.start() and osc1.stop() within event listeners that listen for clicks on each of the two buttons, for example the "Start Oscillator" button:
const startButton = document.querySelector('button#startOsc');
startButton.addEventListener('click', () => {
  osc1.start();
});
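One caveat: most browsers suspend an AudioContext that was created before any user interaction, so the oscillator may stay silent until the context is resumed. Since the start button click is already a user gesture, a resume() call inside that handler (a small tweak to the code above) is usually enough:
startButton.addEventListener('click', () => {
  // Browsers may suspend a context created before any user gesture;
  // resume() is harmless if the context is already running
  if (actx.state === 'suspended') {
    actx.resume();
  }
  osc1.start();
});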
You can see this in action in the following CodePen: