These past two months have been nerve-racking, but we’ve finally created something to be proud of.
We’re a group of four undergrad students, and we’ve built a net artwork — or, as programmers like to call it, a web app.
We named it avtest. In a nutshell, avtest is an interactive multimedia web app that gathers textual data via the YouTube API from either predefined or user-selected videos to create generative music and visuals. It’s still a work in progress*, but it’s close enough to completion to be shared with everyone.
*It works in almost every respect, but its behaviour varies from platform to platform.
What does it do?
On the surface, the artwork generates visuals and music when you click one of the five checkboxes, or when you paste a valid link into the text-box underneath and tick the checkbox beside it. Each of the five checkboxes is “connected” to a YouTube video selected by our team. The text-box + checkbox option lets users point the artwork at any YouTube video of their choosing; the catch is that it has to be a special API link formatted through Google’s YouTube Data API. We figured most people would not know how to build such a link, so we added our own preset links to make the artwork work without much effort.
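To give an idea of what such a link looks like, here is a minimal sketch of a helper that builds a YouTube Data API v3 `commentThreads` URL — the standard endpoint for fetching a video’s comments. The function name and parameters are illustrative, not the project’s actual code, and you would need your own API key from the Google developer console.

```javascript
// Hypothetical helper: builds the kind of YouTube Data API link the
// text-box expects. "commentThreadsUrl" is an illustrative name, not
// an identifier from the avtest codebase.
function commentThreadsUrl(videoId, apiKey) {
  const params = new URLSearchParams({
    part: "snippet",      // include the comment text in the response
    videoId: videoId,     // the 11-character YouTube video ID
    maxResults: "100",    // up to 100 top-level comments per page
    key: apiKey,          // your own Google API key
  });
  return "https://www.googleapis.com/youtube/v3/commentThreads?" + params;
}
```

Pasting the resulting URL into a browser returns a JSON payload of comment threads, which is the textual data the artwork feeds on.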
Checking the “About” box takes you to another page where some info about our team and the project is shown.
Details and code
I created and programmed the sound composition and designed the website (HTML, CSS, text, DOM elements, etc.). The rest of the team worked on setting up the data-retrieval system and designing the visuals.
The list of comments is first “translated” from an array of strings (text) into numeric ASCII values (numbers) and then scanned to find the most frequent value — which, ASCII being a 7-bit encoding, conveniently falls between 0 and 127, the same range MIDI uses. That number is treated as a MIDI note and converted to a note name (C, D, E, F…) through tonal. From there — using sorting tools, plenty of for loops and arrays, and a list of all possible scale modes — I built a system that automatically picks a custom root note and scale mode for every input (API link). The resulting scale is then played by the pattern element so that sound can be heard on the page.
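The first two steps of that pipeline — finding the most frequent character code and turning it into a note name — can be sketched roughly as follows. The function names are illustrative, and the MIDI-to-note conversion is written out by hand here purely to show what tonal’s lookup does for us; it is not the project’s actual code.

```javascript
// Step 1: flatten the comments into character codes and find the most
// frequent one, keeping only ASCII values (0–127, i.e. the MIDI range).
function mostFrequentCharCode(comments) {
  const counts = new Map();
  for (const comment of comments) {
    for (const ch of comment) {
      const code = ch.charCodeAt(0);
      if (code <= 127) {
        counts.set(code, (counts.get(code) || 0) + 1);
      }
    }
  }
  let best = null;
  let bestCount = -1;
  for (const [code, count] of counts) {
    if (count > bestCount) {
      best = code;
      bestCount = count;
    }
  }
  return best;
}

// Step 2: map a MIDI number to a note name — a hand-rolled stand-in
// for what a library like tonal provides (e.g. Note.fromMidi(60) → "C4").
const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
function midiToNote(midi) {
  const octave = Math.floor(midi / 12) - 1; // MIDI 60 is middle C (C4)
  return NOTE_NAMES[midi % 12] + octave;
}
```

From a root note like this, the scale-mode selection step then assembles the full set of pitches the pattern cycles through.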
An almost-transparent ellipse appears once the synth has finished loading and is playing.
Elements of the visuals and the DOM design may not load or display properly on the first try. We are not all professional coders and our experience in front-end dev is limited — hence many improvements are still to be made. The project has not yet been tested on every current browser and device, and the audio does not work when the site is opened in Internet Explorer (IE doesn’t support the Web Audio API).
And here is the main GitHub page: https://github.com/francescoimola/avtest
> The artwork is accessible here.
Francesco Imola : design + web audio
Jameel Knight : API + visuals
Anthony Luc : visuals
Ryan Nguyen : visuals
Francesco Imola is a London-based musician, multimedia artist, and current Sound Design student at the University of Greenwich.