How I helped create a generative audiovisual app that runs in your browser

These past two months have been nerve-racking, but we’ve finally created something to be proud of.

We’re a group of four undergrad students. We’ve built a net artwork, or, as programmers like to call it, a web app.

We named it avtest. In a nutshell, avtest is an interactive multimedia web app that gathers textual data via the YouTube API, from either predefined or user-selected videos, to create generative music and visuals. It’s still a work in progress*, but it’s near enough to completion to be shared with everyone.

*It works in almost every respect, but its behaviour varies from platform to platform.

What does it do?

On the surface, the artwork generates visuals and music when you click one of the five checkboxes, or when you paste a valid link into the text box below them and tick the checkbox at its side. Each of the five checkboxes is “connected” to a YouTube video selected by our team. The text-box-plus-checkbox option instead lets users insert their own link to any YouTube video; this has to be a special API link formatted through Google’s YouTube Data API platform. We imagined most people would not be able to put such a link together, so we’ve added our own links to make the artwork function without much effort.
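For the curious, a link of that kind is essentially a request URL for the commentThreads endpoint of the YouTube Data API v3. Here is a minimal sketch of how one could be built (the helper name and the key are placeholders, not our exact code):

```js
// Hypothetical helper: builds a YouTube Data API v3 request URL that
// returns comment threads for a given video. You need your own key
// from the Google developer console in place of YOUR_API_KEY.
function buildCommentsUrl(videoId, apiKey) {
  return (
    "https://www.googleapis.com/youtube/v3/commentThreads" +
    "?part=snippet" +
    "&videoId=" + encodeURIComponent(videoId) +
    "&maxResults=20" +
    "&key=" + apiKey
  );
}

// Example: buildCommentsUrl("VIDEO_ID", "YOUR_API_KEY")
```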

The 20 most recent comments on a YouTube video are displayed in random order below the group of checkboxes. In a similar way, the “waterfall” of letters is generated using random characters from a random comment among the list of 20. The sound composition that starts about 2–5 seconds in (performance upgrades are still on our to-do list!) is also generated using text from the comments, first converted into ASCII values and then into notes played by a synthesiser I personally built using the Web Audio API and other JavaScript tools (libraries).
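To make the idea concrete, here is a simplified sketch of those two text-driven steps (the function names are illustrative, not the ones in our repo):

```js
// Pick a random character from a random comment for the letter "waterfall".
function randomWaterfallChar(comments) {
  const comment = comments[Math.floor(Math.random() * comments.length)];
  return comment.charAt(Math.floor(Math.random() * comment.length));
}

// Convert one comment into an array of character codes: the raw
// numeric material the sound composition is built from.
function toCharCodes(comment) {
  return Array.from(comment).map((ch) => ch.charCodeAt(0));
}
```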

Checking the “About” box takes you to another page where some info about our team and the project is shown.

Details and code

I created and programmed the sound composition and designed the website (HTML, CSS, text, DOM elements, etc.). The rest of the team worked on setting up the data-retrieval system and designing the visuals.

The music composition is created using two JavaScript libraries: Tone.js and tonal. I used Tone.js to build the sound of the synthesiser playing the notes: a polyphonic synth with long decay and sustain. A chain of effects (delay, reverb, compressor, limiter) was added using the same library. I then created a pattern element responsible for playing the sequence of notes. That sequence is generated using the YouTube comments as input, with the tonal library converting numbers into musical values.
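In outline, the setup looks something like this in current Tone.js (a sketch: the envelope and effect settings here are illustrative, not the exact values in the repo):

```js
// Polyphonic synth with a long decay and sustain.
const synth = new Tone.PolySynth(Tone.Synth, {
  envelope: { attack: 0.05, decay: 2, sustain: 0.6, release: 4 },
});

// Effect chain: delay -> reverb -> compressor -> limiter -> output.
synth.chain(
  new Tone.FeedbackDelay("8n", 0.4),
  new Tone.Reverb(4),
  new Tone.Compressor(-24, 4),
  new Tone.Limiter(-6),
  Tone.Destination
);

// A Pattern steps through the note sequence and plays it. The notes
// here are placeholders; in the app they come from the comments.
const pattern = new Tone.Pattern(
  (time, note) => synth.triggerAttackRelease(note, "8n", time),
  ["C4", "E4", "G4", "B4"],
  "up"
);
pattern.start(0);
Tone.Transport.start(); // modern browsers also require Tone.start() from a user gesture
```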

The list of comments is first “translated” from an array of strings (text) into numeric ASCII values (numbers) and then scanned to find the most recurrent number, which happens to fall between 0 and 127, i.e. within the MIDI range. That number is treated as a MIDI value and converted to a note value (C, D, E, F…) through tonal. From there, using sorting tools, plenty of for loops and arrays, and a list of all possible scale modes, I built a system that automatically defines a custom root note and a scale mode for every input (API link). That scale is eventually played by the pattern element mentioned above, so that sound can be heard on the page.
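A rough reconstruction of that pipeline with the tonal library might look like this (the mode-selection rule below is a simplification of mine, not the exact logic in the repo):

```js
import { Note, Scale } from "tonal";

// Find the most frequent character code across all comments. In
// practice this lands between 0 and 127, i.e. within MIDI range.
function mostFrequent(codes) {
  const counts = {};
  let best = codes[0];
  for (const c of codes) {
    counts[c] = (counts[c] || 0) + 1;
    if (counts[c] > counts[best]) best = c;
  }
  return best;
}

const MODES = ["major", "dorian", "phrygian", "lydian",
               "mixolydian", "minor", "locrian"];

// Derive a root note and scale mode from the comment data. Picking
// the mode by modulo is an assumption made for this sketch.
function scaleFromCodes(codes) {
  const midi = mostFrequent(codes);
  const root = Note.fromMidi(midi);          // e.g. 60 -> "C4"
  const mode = MODES[midi % MODES.length];
  return Scale.get(root + " " + mode).notes; // e.g. ["C4", "D4", ...]
}
```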

An almost transparent ellipse shows up once the synth has loaded and started playing.

Elements of the visuals and the DOM design may not display properly on the first try, or may take a moment to appear. We are not all professional coders and our experience in front-end development is limited, so many improvements are still to be made. The project has not yet been tested on every current browser and device. The audio does not work when the site is opened in Internet Explorer (IE doesn’t support the Web Audio API).
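A simple safeguard, sketched here rather than taken from our code, is to check for the Web Audio API before starting the audio engine and warn the user if it’s missing:

```js
// Feature check: IE exposes neither AudioContext nor the prefixed
// webkitAudioContext, so we can warn instead of failing silently.
const AudioContextClass = window.AudioContext || window.webkitAudioContext;
if (!AudioContextClass) {
  alert("Sorry, your browser does not support the Web Audio API.");
}
```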

The entire JavaScript code, including sound and visuals, is accessible here: https://github.com/francescoimola/avtest/blob/master/machinemusic.js

and here is the main GitHub page: https://github.com/francescoimola/avtest

> The artwork is accessible here.

Contributors:

Francesco Imola: design + web audio
Jameel Knight: API + visuals
Anthony Luc: visuals
Ryan Nguyen: visuals


Francesco Imola is a London-based musician, multimedia artist, and current Sound Design student at the University of Greenwich.

website | twitter | instagram | linkedin