How I made a synth running in the browser using the Web Audio API

This is the story of my three-day quest to understand what the Web Audio API is and what it can do. Along the way, I also experimented with building an extremely simplified synthesiser using the Web Audio API and p5.js.

The Web Audio API is a high-level JavaScript API for processing and synthesising audio in web applications, started by the W3C Audio Working Group in 2012. It has since developed into a solid JS interface for building music-specific applications, like drum machines and synthesisers, or for complementing modern in-browser game engines.

The Web Audio API involves handling audio operations inside an audio context, and has been designed to allow modular routing. Basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. […] This modular design provides the flexibility to create complex audio functions with dynamic effects. (MDN Web Docs)
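As a tiny illustration of that routing model, here is a minimal sketch using the standard Web Audio node names; the `playTone` wrapper and its parameters are mine, not part of the API. An oscillator feeds a gain node, which feeds the speakers.

```javascript
// Minimal modular routing: oscillator -> gain -> speakers.
// In a browser you would call playTone(new AudioContext()).
function playTone(ctx, freq = 440) {
  const osc = ctx.createOscillator(); // sound source
  const amp = ctx.createGain();       // volume control

  osc.type = 'sine';
  osc.frequency.value = freq; // pitch in Hz
  amp.gain.value = 0.2;       // keep the level modest

  osc.connect(amp);             // source node -> gain node
  amp.connect(ctx.destination); // gain node -> audio output
  osc.start();

  return osc;
}
```

Every node exposes `connect()`, so swapping in a filter or an effect between the source and the destination is just a matter of rewiring the graph.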

I could take time here to explain the basics of how Web Audio works, but (thankfully!) other, more experienced folks have already done that well, and I would rather not replicate their work. I recommend following Greg Hovanesyan’s tutorial and Boris Smus’ Getting Started with Web Audio API, as well as the project’s official website, if you want to learn more about how to implement Web Audio in your projects.

Here’s how my synth works

The synthesiser I created using the Web Audio API

Available here is my modest attempt at building a simple “multi-waveform” synthesiser. It is composed of three main groups of elements (excluding other HTML components such as buttons, sliders, and text):

Oscillators, Filters, and LFOs.

The first step to creating your own sound with this synth is choosing a waveform among the four available. The frequencies of the two oscillators can be set using the first two sliders. Once you are happy with the pitch, you can play with the speed of the LFO, which is connected to two inverse filters (a low-pass and a high-pass) that open and close simultaneously, following an inaudible sine wave. You can also control the upper boundary of the filters by setting a frequency, which I named Contour, using the slider at the bottom, “opening up” the sound as you do so.

I used p5.js to automate the movements of the sliders and to position the elements on the page. Basic CSS styling has also been added.

A more precise approach would make use of other Web Audio elements to modify audio parameters. Since I only needed to test the capabilities of the API, I settled on a simpler approach: p5.

The JavaScript behind the scenes wires these elements together inside a single audio context.


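The oscillator–LFO–filter chain described above can be sketched roughly like this, assuming standard Web Audio node names; `buildSynth` and its parameter names are illustrative, not the project’s actual code:

```javascript
// Rough sketch of the synth's signal chain: two oscillators feed a
// low-pass and a high-pass filter in series, while an inaudible sine
// LFO sweeps both filter cutoffs. Names and default values are
// illustrative, not the original project's code.
function buildSynth(ctx, { wave = 'sawtooth', freq1 = 220, freq2 = 110,
                           lfoRate = 0.5, contour = 2000 } = {}) {
  const osc1 = ctx.createOscillator();
  const osc2 = ctx.createOscillator();
  osc1.type = wave;
  osc2.type = wave;
  osc1.frequency.value = freq1; // set by the first slider
  osc2.frequency.value = freq2; // set by the second slider

  // Two inverse filters in series
  const lowpass = ctx.createBiquadFilter();
  lowpass.type = 'lowpass';
  lowpass.frequency.value = contour; // "Contour": upper boundary
  const highpass = ctx.createBiquadFilter();
  highpass.type = 'highpass';
  highpass.frequency.value = 200;

  // LFO: an inaudible sine wave sweeping both cutoffs at once
  const lfo = ctx.createOscillator();
  lfo.type = 'sine';
  lfo.frequency.value = lfoRate;
  const lfoDepth = ctx.createGain();
  lfoDepth.gain.value = contour / 2; // modulation depth

  lfo.connect(lfoDepth);
  lfoDepth.connect(lowpass.frequency);  // modulate the AudioParams,
  lfoDepth.connect(highpass.frequency); // not the audio path

  osc1.connect(lowpass);
  osc2.connect(lowpass);
  lowpass.connect(highpass);
  highpass.connect(ctx.destination);

  osc1.start();
  osc2.start();
  lfo.start();

  return { osc1, osc2, lfo, lowpass, highpass };
}
```

In a browser this would be driven by `buildSynth(new AudioContext(), { … })`, with the sliders writing new values into `osc1.frequency.value`, `lfo.frequency.value`, and the filters’ cutoffs.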
This synth is not playable via your computer keyboard, nor via MIDI. Exact frequency values cannot be entered, and parameters cannot be saved. It is only a minuscule example of the power of Web Audio.

It took me three entire days to complete this synth, but one can only imagine what impressive projects could be achieved with Web Audio given more time and more experienced hands.

A few examples of things that can be done using Web Audio

Omnitone: a JavaScript implementation of an ambisonic decoder that also lets you binaurally render an ambisonic recording directly in the browser.


Babylon Synth: a mini interactive 3D visual experience of Gershon Kingsley’s Popcorn.


Image source: Scott Schiller https://www.flickr.com/photos/schill/6839260527/

Francesco Imola is a London-based musician, weekend photographer, and current Sound Design student at the University of Greenwich.

w. http://francescoimola.com/

tw. https://twitter.com/francescoimola