How I made a synth running in the browser using Web Audio API

This is the story of my three-day quest to understand what the Web Audio API is and what it can do. Along the way I also experimented with building an extremely simplified synthesiser using the Web Audio API and p5.js.

The Web Audio API is a high-level JavaScript API for processing and synthesising audio in web applications, started by the W3C Audio Working Group in 2012. It has since developed into a solid JS interface for building music-specific applications, such as drum machines and synthesisers, or for complementing modern in-browser game engines.

The Web Audio API involves handling audio operations inside an audio context, and has been designed to allow modular routing. Basic audio operations are performed with audio nodes, which are linked together to form an audio routing graph. […] This modular design provides the flexibility to create complex audio functions with dynamic effects. (Mozilla Developer Webpage)
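As a quick illustration of that modular routing idea, here is a minimal sketch of my own (the node constructors are the standard Web Audio API; the waveform, gain value, and `midiToFreq` helper are illustrative choices, not anything from the quote):

```javascript
// Minimal modular-routing sketch: oscillator -> gain -> speakers.
// Guarded so it only runs in a browser; Node.js has no AudioContext.
function midiToFreq(note) {
  // Standard equal-temperament conversion (A4 = MIDI note 69 = 440 Hz).
  return 440 * Math.pow(2, (note - 69) / 12);
}

if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator(); // audio source node
  const amp = ctx.createGain();       // volume control node
  osc.type = 'sawtooth';
  osc.frequency.value = midiToFreq(69); // 440 Hz
  amp.gain.value = 0.2;                 // keep it quiet
  osc.connect(amp).connect(ctx.destination); // the routing graph
  osc.start();
}
```

Each node does one job, and `connect` wires them into the graph the quote describes; swapping in a filter or an analyser is just another node in the chain.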

Here I could take the time to explain the basics of Web Audio, but (thankfully!) more experienced folks have already done that well, and I would rather not replicate their work. I recommend Greg Hovanesyan’s tutorial and Boris Smus’ Getting Started with Web Audio API, as well as the project’s official website, if you want to learn more about implementing Web Audio in your projects.

Here’s how my synth works

The synthesiser I created using Web Audio API

Available here is my modest attempt at building a simple “multi-waveform” synthesiser. It is composed of three main groups of elements (excluding other HTML components such as buttons, sliders, and text):

Oscillators, Filters, and LFOs.

The first step to creating your own sound with this synth is choosing one of the four available waveforms. The frequencies of the two oscillators can be set using the first two sliders. Once you are happy with the pitch you’ve chosen, you can play with the speed of the LFO, which is connected to two inverse filters (a low-pass and a high-pass) that open and close simultaneously, following an inaudible sine wave. You can also control the upper boundary of the filters by setting a frequency, which I named Contour, using the slider at the bottom, “opening up” the sound as you raise it.
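Wired up in Web Audio, that architecture looks roughly like the sketch below. The node types are the real API; the specific frequencies, the LFO depth, and the `lfoCutoff` helper (which models how the sweep is capped by the Contour value) are stand-ins of mine, not the synth’s actual slider values:

```javascript
// Cutoff a sinusoidal LFO sweep reaches at time t (seconds),
// capped by the Contour frequency. All frequencies in Hz.
function lfoCutoff(base, depth, rate, t, contour) {
  return Math.min(base + depth * Math.sin(2 * Math.PI * rate * t), contour);
}

if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const osc1 = ctx.createOscillator();
  const osc2 = ctx.createOscillator();
  osc1.frequency.value = 110;           // set by the first slider
  osc2.frequency.value = 220;           // set by the second slider
  const lowpass = ctx.createBiquadFilter();
  lowpass.type = 'lowpass';
  const highpass = ctx.createBiquadFilter();
  highpass.type = 'highpass';
  // An inaudible sine LFO opens and closes both filters together.
  const lfo = ctx.createOscillator();
  lfo.frequency.value = 2;              // sweep rate in Hz
  const lfoDepth = ctx.createGain();
  lfoDepth.gain.value = 400;            // sweep width in Hz
  lfo.connect(lfoDepth);
  lfoDepth.connect(lowpass.frequency);  // modulate both cutoff AudioParams
  lfoDepth.connect(highpass.frequency);
  osc1.connect(lowpass);
  osc2.connect(lowpass);
  lowpass.connect(highpass).connect(ctx.destination);
  osc1.start(); osc2.start(); lfo.start();
}
```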

I used p5.js to automate the movements of the sliders and to position the elements on the page. Basic CSS styling has also been added.

A more precise approach would use other Web Audio elements to modify audio parameters. In this case I only needed to test the capabilities of the API, so I settled on a simpler approach: p5.
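For the curious, the “more precise approach” usually means scheduling values on an AudioParam rather than nudging them from a p5 draw loop. A sketch, with arbitrary frequencies of my own and a small `rampValue` helper that mirrors what a linear ramp computes:

```javascript
// Value a linear AudioParam ramp reaches at time t, given start and
// end values v0 -> v1 over the interval [t0, t1].
function rampValue(v0, v1, t0, t1, t) {
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}

if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.connect(ctx.destination);
  // Glide from 220 Hz to 880 Hz over two seconds, sample-accurately,
  // instead of updating the value every animation frame from p5.
  osc.frequency.setValueAtTime(220, ctx.currentTime);
  osc.frequency.linearRampToValueAtTime(880, ctx.currentTime + 2);
  osc.start();
}
```

The audio thread computes the ramp itself, so the sweep stays smooth even if the page’s drawing loop stutters.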

Below is the JavaScript code that runs behind the scenes:

This synth is not playable via your computer keyboard, nor via MIDI. Frequencies cannot be set and parameters cannot be saved. It is only a minuscule example of the power of Web Audio.

It took me three entire days to complete this synth, but one can only imagine what impressive projects could be achieved with Web Audio given more time and more experienced hands.

A few examples of things that can be done using Web Audio

Omnitone: a JavaScript implementation of an ambisonic decoder that also lets you binaurally render an ambisonic recording directly in the browser.


Babylon Synth: a mini 3D visual interactive experience of Gershon Kingsley’s Popcorn.


Source Image: Scott Schiller

Francesco Imola is a London-based musician, weekend photographer, and current Sound Design student at the University of Greenwich.



Building my first Pure Data patch.

I’ve been a hardcore fan of open-source software (OSS) for years, but I never relied on it to the point of feeling confident using an OSS tool as the foundation of my work.

And there are good reasons to be wary of using OSS professionally, especially if the outcome of your work as an artist depends on that of an unknown community of part-time developers who, when building and upgrading software, cannot take into account your level of IT knowledge, your learning curve, or, in particular, your time.

It is a well-known issue that using OSS is often more time-consuming and daunting, both for first-time and for experienced users, than using proprietary software. Pure Data is a very good example of this.

Pure Data is an open source visual programming environment that runs on anything from personal computers to embedded devices and smartphones [..]. It is a major branch of the family of patcher programming languages known as Max (Max/FTS, ISPW Max, Max/MSP, etc), originally developed by Miller Puckette at IRCAM. (Pure Data Website)

Don’t get me wrong: Pure Data is a brilliant programming language with endless potential and a relatively stable community of developers, most of whom are also artists.

But sometimes this is not enough.

A part-time community of developers will never be able to manage the large-scale upgrades that lead to a stable product, one that can serve as a canvas for the work of professionals at all levels. Hence the old-fashioned proprietary software Max/MSP has become the standard over Pd for artists and engineers in the creative industries.

“I have to wonder how anything actually gets done in the open-source community. In general, the collective crowd has to be a royal pain to work with. And that’s why any large-scale project within that community goes so slowly […]. But I wonder if all open-source projects will eventually go the way of all software. Upgrade after upgrade adds more and more complexity, to the point where the software begins to get bogged down under its own weight. How many open-source projects have you seen in which the code gets leaner and meaner rather than fatter and fatter?” (John Dvorak, PC Mag, 2007)

Example of a typical PD patch.

I still gave Pure Data a go and, after about three weeks of trial and error, I decided to build a simple FM synthesiser with a few additional features. Apart from having to download three different versions of the software before I could settle on a definitive one (Pd-extended, in my case), everything else went fairly smoothly.

For the core of the synth, the Frequency Modulation, I took one of the built-in Pd examples as a first building block and created the rest from there.

Screenshot of my first Pd Patch

The synth features a table (pitchmod) modulated by two frequencies, Modulation depth and Carrier frequency, through multiplication and addition applied to the original wavetable. The signal is sent to the oscillator, whose output is split in two: one signal goes into the autopan subpatch, responsible for shifting the audio from left to right at a controllable rate, while the other goes into the next multiplication object. I’ve also created a simple dry/wet slider that controls the depth of the auto-pan.
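In signal terms, the FM core boils down to pushing a carrier’s phase around with a second sine wave. Here is that idea rendered in plain JavaScript rather than Pd; the frequencies and modulation depth are arbitrary values of mine, not the patch’s:

```javascript
// Render n samples of a simple FM voice: a sine carrier whose phase
// is offset by a modulator sine, scaled by a modulation depth.
function renderFM(carrierHz, modHz, depth, n, sampleRate) {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = i / sampleRate;
    out[i] = Math.sin(
      2 * Math.PI * carrierHz * t +
      depth * Math.sin(2 * Math.PI * modHz * t)
    );
  }
  return out;
}

// One second of a 440 Hz carrier modulated by a 110 Hz sine.
const samples = renderFM(440, 110, 2, 44100, 44100);
```

Raising the depth adds sidebands and brightens the tone, which is exactly what the Modulation depth control on the patch does.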

Auto-pan patch
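The auto-pan behaviour (audio swinging left to right at a controllable rate, with a dry/wet depth) can be modelled as an equal-power crossfade driven by a sine. This helper is my own illustration of that idea, not a transcription of the subpatch:

```javascript
// Left/right gains for an equal-power auto-pan at time t (seconds).
// rate: sweeps per second; depth: 0 = centred (dry), 1 = full pan (wet).
function autopanGains(t, rate, depth) {
  const pan = depth * Math.sin(2 * Math.PI * rate * t); // pan in [-1, 1]
  const angle = (pan + 1) * Math.PI / 4;                // map to [0, pi/2]
  return [Math.cos(angle), Math.sin(angle)];            // [left, right]
}
```

The equal-power law keeps the combined loudness steady while the image moves, which avoids the volume dip a naive linear crossfade produces at the centre.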

Before the signal reaches the final output, it is routed through the freeverb~ object, a built-in Pd reverb with some controllable parameters. The final touches are visualisers for the resulting waveform, as well as RMS meters and play and mute buttons.

All in all, I’m quite satisfied with the results. The patch does what I’d expect it to do, and there are no unwanted clicks and pops (I’ve done a good amount of work with line objects!). However, the synth still lacks a few fundamental functions that I am planning to study and implement soon, such as playing notes through external MIDI, randomising parameters, and saving presets.

The patch can be freely downloaded here. Feel free to test it out and make upgrades.
