ALife for Real and Virtual Audio-Video Performances

Authors
Luigi Pagliarini, Henrik Hautop Lund
Corresponding Author
Luigi Pagliarini
Available Online 30 June 2014.
DOI
https://doi.org/10.2991/jrnal.2014.1.1.7
Keywords
Playware, Art, Music, Graphics, Artificial Life
Abstract
MAG (an Italian acronym for Musical Genetic Algorithms) is an electronic art piece in which a multifaceted piece of software attempts to “translate” musical expression into corresponding static or animated graphical expressions. The mechanism at the base of this “translation” is a rather complex and articulated algorithm that, in short, relies on artificial learning. Indeed, MAG implements different learning techniques that allow artificial agents to learn about the music flow by developing adaptive behaviour. In our specific case, the technique consists of a population of neural networks – one-dimensional artificial agents that populate a two-dimensional artificial world and are driven by a simple input–output control system – that can use both genetic and reinforcement learning algorithms to evolve appropriate behavioural answers to an impressively large range of inputs, through both fitness-formula-based genetic pressure and, optionally, user–machine feedback. More specifically, in the first version of the MAG algorithm the agents’ control system is a perceptron; the world of the agents is a two-dimensional grid that changes its dimensions according to the host screen; the most important input the artificial agents receive (though not necessarily the only one) is the musical wave that any given music file produces at run-time; and the output is the behavioural answer the agents produce by moving, and thereby drawing onto the computer screen, so it is graphical. The combination of artificial evolution and the flow of a repeated song, or of different musical tunes, makes it possible for the software to establish a special relationship between the sound waves and the aesthetics of the resulting graphics. Further, we have started to explore the concept of run-time creation of both musical and graphical expression.
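To make the mechanism concrete, the following is a minimal sketch of the kind of architecture the abstract describes: perceptron-controlled agents moving on a 2D grid in response to audio amplitude samples, evolved by a simple genetic algorithm. All names, parameters, and the fitness handling are illustrative assumptions, not the actual MAG implementation.

```python
import random

class Agent:
    """Hypothetical MAG-style agent: a perceptron that maps a few recent
    audio amplitude samples to a movement step on a 2D grid.
    Weight layout and movement rule are illustrative assumptions."""

    def __init__(self, n_inputs=4, weights=None):
        # Two output units (dx, dy), each with n_inputs weights.
        self.weights = weights or [random.uniform(-1, 1)
                                   for _ in range(n_inputs * 2)]
        self.x, self.y = 0, 0

    def step(self, amplitudes, width, height):
        """Compute the perceptron outputs and move one cell on the grid."""
        n = len(amplitudes)
        dx = sum(w * a for w, a in zip(self.weights[:n], amplitudes))
        dy = sum(w * a for w, a in zip(self.weights[n:], amplitudes))
        # Move in the sign of each output; wrap at the grid edges.
        self.x = (self.x + (1 if dx > 0 else -1)) % width
        self.y = (self.y + (1 if dy > 0 else -1)) % height
        return self.x, self.y

def evolve(population, fitness, mutation=0.1):
    """One generation of a simple GA: keep the fitter half of the
    population and refill it with mutated copies of the survivors."""
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:len(ranked) // 2]
    children = [Agent(weights=[w + random.gauss(0, mutation)
                               for w in parent.weights])
                for parent in survivors]
    return survivors + children
```

In a rendering loop, each agent's `step` call would be followed by drawing a mark at `(x, y)`, so the trails traced by the population become the graphical "translation" of the sound; the fitness function (here left as a parameter) is where aesthetic pressure, or user feedback, would enter.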
Recently, we developed software that allows any user to create new versions of popular songs with the MusicTiles app simply by connecting musical building blocks. This creation of musical expression can happen as a performance (i.e. at run-time). By connecting the MusicTiles app to the MAG software, we provide the possibility to blend musical and graphical expression in parallel and at run-time, thereby creating an audio-video performance that is always unique.

Copyright
© 2013, the Authors. Published by ALife Robotics Corp. Ltd.
Open Access
This is an open access article distributed under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).