Audio synthesis in the browser with WebAudio

Audio synthesis with WebAudio is easy. To prove it, I'll get straight to the point...

(If you'd rather not follow along via the blog post, a working example project, with the code broken up into logical commits and releases, is available here:

The first thing you need is an instance of AudioContext:

var ctx = new AudioContext();
var node = ctx.createGain();
node.connect(ctx.destination);

The "node" object is a gain node that you can use to adjust the audio level, but you can ignore that for now. All I did was create it and connect it to the AudioContext's destination (our output).
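If you do want to use the gain node, note that its gain.value is a linear amplitude, not decibels. A small helper like this (hypothetical, not part of the original post) converts between the two:

```javascript
// Convert a level in decibels to the linear gain a GainNode expects.
// 0 dB is unity gain; -6 dB is roughly half amplitude.
function dbToGain(db) {
    return Math.pow(10, db / 20);
}

// Example usage (assumes "node" is the gain node from above):
// node.gain.value = dbToGain(-6);
```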

Now you need to create an audio buffer to stuff some samples into:

var duration = 1.0;
var rate = ctx.sampleRate;
var length = rate * duration;
var buffer = ctx.createBuffer(1, length, rate);
var data = buffer.getChannelData(0);

In case you're lost here: duration is the length of our audio buffer in seconds. I've grabbed the sample rate from the audio context and used it to compute the length of the buffer in samples. createBuffer(1, length, rate) gives us a single-channel (mono) buffer, and getChannelData(0) returns "data", a Float32Array that we can start adding samples to.

Let's fill that buffer with some noise (random noise in this case):

for (var i = 0; i < data.length; i++) {
    data[i] = Math.random() * 2 - 1;
}
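For comparison, filling the same buffer with a sine wave instead of random values produces a pure tone. A minimal sketch, with the sample rate hard-coded (and a 440 Hz frequency assumed) so the snippet stands alone:

```javascript
// Fill a buffer with one second of a 440 Hz (A4) sine tone.
// "rate" mirrors ctx.sampleRate from above, hard-coded here for illustration.
var rate = 44100;
var freq = 440;
var data = new Float32Array(rate); // one second of samples
for (var i = 0; i < data.length; i++) {
    data[i] = Math.sin(2 * Math.PI * freq * (i / rate));
}
```

Samples must stay in the [-1, 1] range either way; the sine naturally does, and the noise loop above scales Math.random() into it.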

Now all you have to do is play the buffer: create an audio source, connect it to the gain node, and start it.

var source = ctx.createBufferSource();
source.buffer = buffer;
source.connect(node);
source.start();

That's it. The GitHub code shows how to turn this random noise into musical notes and goes into a bit more detail in the comments, so go check it out!
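One common way to map notes to frequencies (the linked repo may do it differently) is the equal-temperament formula, where A4 is MIDI note 69 at 440 Hz and every 12 semitones doubles the frequency:

```javascript
// Convert a MIDI note number to its frequency in Hz (equal temperament).
// A4 = MIDI note 69 = 440 Hz; each octave (12 semitones) doubles the frequency.
function noteToFreq(note) {
    return 440 * Math.pow(2, (note - 69) / 12);
}

// noteToFreq(69) → 440 (A4)
// noteToFreq(81) → 880 (A5, one octave up)
```

Feed the resulting frequency into the sine-fill loop above and you have a playable note.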

