Generative Art

How I helped create a generative audiovisual app that runs in your browser

These past two months have been nerve-racking, but we've finally created something to be proud of.

We're a group of four undergrad students, and we've built a net artwork, or as programmers like to call it: a web app.

We named it avtest. In a nutshell, avtest is an interactive multimedia web app that gathers textual data via the YouTube API from either predefined or user-selected videos to create generative music and visuals. It's still a work in progress*, but it's near enough to completion to be shared with everyone.

*It works in almost every respect, but its behaviour varies from platform to platform.

What does it do

On the surface, the artwork generates visuals and music when you click one of the five checkboxes, or when you insert a valid link in the underlying text-box and tick the checkbox to its side. Each checkbox is "connected" to a YouTube video selected by our team. The text-box + checkbox option instead allows users to insert a link to any YouTube video of their own choosing; this has to be a special API link formatted through Google's YouTube Data API platform. We imagined most people would not be able to put together such a link, so we've added our own links to make the artwork function without much effort.
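
For reference, a comment-retrieval link of the kind avtest expects looks roughly like the sketch below, using the YouTube Data API v3 commentThreads endpoint. The VIDEO_ID and YOUR_API_KEY values are placeholders, not our actual ones:

```javascript
// Sketch of a YouTube Data API (v3) request for a video's comments.
// VIDEO_ID is the 11-character ID of the target video; YOUR_API_KEY
// is a key you create yourself in the Google Cloud console.
const url =
  "https://www.googleapis.com/youtube/v3/commentThreads" +
  "?part=snippet&maxResults=20&order=time" +
  "&videoId=VIDEO_ID" +
  "&key=YOUR_API_KEY";

// fetch() returns the 20 most recent top-level comments as JSON.
fetch(url)
  .then((response) => response.json())
  .then((data) => {
    const comments = data.items.map(
      (item) => item.snippet.topLevelComment.snippet.textDisplay
    );
    console.log(comments);
  });
```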

The 20 most recent comments on a YouTube video are displayed in random order below the group of checkboxes. In a similar way, the "waterfall" of letters is generated using random characters from a random comment among that list of 20. The sound composition that starts about 2–5 seconds in (performance upgrades are still on our to-do list!) is also generated from the text of the comments, first converted into ASCII values and then into notes played by a synthesiser I built using the Web Audio API and other JavaScript libraries.
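
As an illustration of how the waterfall picks its characters (a simplified sketch, not the exact code in our repo), a helper like this would draw a random character from a random comment:

```javascript
// Illustrative sketch: feed the letter "waterfall" by picking a random
// character from a randomly chosen comment in the list of 20.
function randomCharacter(comments) {
  const comment = comments[Math.floor(Math.random() * comments.length)];
  return comment.charAt(Math.floor(Math.random() * comment.length));
}
```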

Ticking the "About" box will take you to another page where some info about our team and the project is shown.

Details and code

I created and programmed the sound composition and designed the website (HTML, CSS, text, DOM elements, etc.). The rest of the team worked on setting up the data-retrieval system and designing the visuals.

The music composition is created using two JavaScript libraries: Tone.js and tonal. I used Tone.js to build the sound of the synthesiser playing the notes. It's a polyphonic synth with long decay and sustain. A chain of effects (delay, reverb, compressor, limiter) was added using the same library. I then created a pattern element responsible for playing the sequence of notes. That sequence is generated using the YouTube comments as input and the tonal library to convert numbers into musical values.
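
A minimal sketch of that setup, assuming Tone.js v14; the envelope and effect parameters and the placeholder note list here are illustrative, not the exact values in avtest:

```javascript
import * as Tone from "tone";

// Polyphonic synth with a long decay and sustain.
const synth = new Tone.PolySynth(Tone.Synth, {
  envelope: { attack: 0.1, decay: 2, sustain: 0.8, release: 3 },
});

// Effects chain: delay -> reverb -> compressor -> limiter -> output.
const delay = new Tone.FeedbackDelay("8n", 0.4);
const reverb = new Tone.Reverb(4);
const compressor = new Tone.Compressor(-24, 4);
const limiter = new Tone.Limiter(-6);
synth.chain(delay, reverb, compressor, limiter, Tone.Destination);

// A Pattern steps through the generated notes (the real note list is
// derived from the comments, as described below).
const notes = ["C4", "D4", "E4", "G4", "A4"]; // placeholder scale
const pattern = new Tone.Pattern(
  (time, note) => synth.triggerAttackRelease(note, "4n", time),
  notes,
  "up"
);

// Browsers require audio to start from a user gesture,
// e.g. ticking one of the checkboxes.
document
  .querySelector("input[type=checkbox]")
  .addEventListener("change", async () => {
    await Tone.start();
    pattern.start(0);
    Tone.Transport.start();
  });
```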

The list of comments is first "translated" from an array of strings (text) into numeric ASCII values (numbers) and then scanned to find the most recurrent number, which is always a value between 0 and 127. That number is converted first to MIDI and then to a note value (C, D, E, F, …) through tonal. From there, using sorting tools, plenty of for loops and arrays, and a list of all possible scale modes, I built a system that automatically defines a custom root note and a scale mode for every input (API link). That scale is eventually played by the pattern element mentioned above, so that sound can be heard on the page.
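
In outline, a simplified sketch of that system (the helper names and the mode-selection rule below are illustrative shorthand, not the exact logic in machinemusic.js) could look like this with tonal's Note and Scale modules:

```javascript
import { Note, Scale } from "tonal";

// Turn the comments into one flat list of ASCII codes (0-127).
function toAscii(comments) {
  return comments.join("").split("").map((ch) => ch.charCodeAt(0) % 128);
}

// Find the most frequent ASCII value: it becomes the MIDI root note.
function mostFrequent(values) {
  const counts = {};
  let best = values[0];
  for (const v of values) {
    counts[v] = (counts[v] || 0) + 1;
    if (counts[v] > counts[best]) best = v;
  }
  return best;
}

// The seven diatonic modes; avtest's own mode choice may differ.
const modes = [
  "ionian", "dorian", "phrygian", "lydian",
  "mixolydian", "aeolian", "locrian",
];

function buildScale(comments) {
  const ascii = toAscii(comments);
  const midiRoot = mostFrequent(ascii);        // value between 0 and 127
  const root = Note.fromMidi(midiRoot);        // e.g. 64 -> "E4"
  const mode = modes[midiRoot % modes.length]; // illustrative mode choice
  return Scale.get(`${root} ${mode}`).notes;   // notes fed to the Pattern
}
```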

An almost-transparent ellipse appears once the synth has loaded and is playing.

Elements of the visuals and the DOM design may not work or display properly on the first try, or may take a moment to appear. We are not all professional coders and our experience in front-end development is limited, so many improvements are still to be made. The project has not yet been tested on every current browser and device. The audio does not work when the site is opened in Internet Explorer (IE doesn't support the Web Audio API).
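
One mitigation is a simple feature check before starting the audio engine; the sketch below (the fallback message is hypothetical) detects the Web Audio API and warns the user otherwise:

```javascript
// Feature-detect the Web Audio API before trying to start any sound.
// Older browsers such as Internet Explorer expose neither constructor.
const AudioContextClass = window.AudioContext || window.webkitAudioContext;

if (!AudioContextClass) {
  // Hypothetical fallback: show a notice instead of failing silently.
  document.body.insertAdjacentHTML(
    "beforeend",
    "<p>Sorry, your browser does not support the Web Audio API.</p>"
  );
}
```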

The entire JavaScript code, including sound and visuals, is accessible here: https://github.com/francescoimola/avtest/blob/master/machinemusic.js

and here is the main GitHub page: https://github.com/francescoimola/avtest

> The artwork is accessible here.

Contributors:

Francesco Imola: design + web audio
Jameel Knight: API + visuals
Anthony Luc: visuals
Ryan Nguyen: visuals


Francesco Imola is a London-based musician, multimedia artist, and current Sound Design student at the University of Greenwich.

website | twitter | instagram | linkedin

Generative art: a practice in constant change

“Floating Lines in the DeepSpace”. A generative artwork by Miguel Neto & Rodrigo Carvalho.

To “generate” — as described by the Merriam-Webster dictionary — is to define or originate (something, such as a mathematical or linguistic set or structure) by the application of one or more rules or operations.

For generations, artists and scientists have helped reshape this term into an abstraction:

Generative art takes place in a structured system – such as a set of natural language rules, a computer program, a machine, or other procedural inventions¹ – created by the artist and aimed at producing multiple, and potentially endless, results from the manipulation of an initial form.

[When we talk about generative art], the term (generative) is simply a reference to how the art is made, and it makes no claims as to why the art is made this way or what its content is.¹

Development

Despite its modern approach, generative art is "as old as art itself"¹. From Wolfgang Amadeus Mozart onwards, artists throughout history have designed systems both complex and simple, as in the works of Ellsworth Kelly or John Cage, for the creation of new generative artworks.

Aside from Computer Science and AI, several art practices have contributed to the development of generative art. These include Electronic Music, Computer Graphics, Animation, VJ Culture, Industrial Design, and Architecture.

Youth culture and audiovisual artists, in particular, are bringing generative art to the eye of the media as no one has ever done before.

To bring generative art to a club night is to expose and showcase the potential of the practice to a massive crowd. Max Cooper, Alva Noto, Ryuichi Sakamoto, Squarepusher, and many other A/V artists currently base much of their work and live shows on machine-generated art.

Systems

Complexity science is a relatively young discipline aimed at understanding how the systems that rule the generative world work.

Complex systems are called so because they (typically) have a large number of small components that interact with similar nearby parts.²

Local components interact in "nonlinear" ways, meaning that the interactions are neither sequential nor straightforward. These local interactions are dynamic and constantly changing, which leads the system to organise itself. Scientists define these self-organising systems as complex systems.

Examples of complex systems are the human brain, Earth’s climate, living cells, the stock market, etc.

Order

It is important to remember that complex systems may act in a chaotic manner, but never do so randomly. There is a somewhat clear distinction between chaos and randomness, especially within the field of generative art.

Philip Galanter provides us with a great example of the difference between chaos and randomness:

“…even though it is difficult to predict the specific weather 6 months from now, we can be relatively sure it won’t be 200 degrees outside, nor will we be getting 30 feet of rain on a single day, and so on. The weather exists within some minimum and maximum limits, and those expectations are a sort of container for all possible weather states.”

Generative code

There is an aspect of code which goes beyond its pure written form. Its execution constitutes what we experience.

[However], to appreciate generative code fully we need to ‘sense’ the code to build an understanding of the code’s actions. To separate the code and the resultant actions would simply limit our experience, as well as the ultimate study of these forms.²


References:

  1. Galanter, P. What is Generative Art? — Complexity Theory as a Context for Art Theory, Philipgalanter.com. Available at: http://www.philipgalanter.com/downloads/ga2003_paper.pdf.
  2. Cox, G., McLean, A. and Ward, A. The Aesthetics of Generative Code, Generative.net. Available at: http://generative.net/papers/aesthetics/.

Francesco Imola is a London-based musician, weekend photographer, and current Sound Design student at the University of Greenwich.

w. http://francescoimola.com/

tw. https://twitter.com/francescoimola