AI project ‘Neural Synesthesia’ brings paintings to life with music
Watch a machine learning algorithm morph faces and landscapes in time with the beat.
Xander Steenbrugge, an engineer from Belgium, has developed an AI system that creates visualisations in time with music, with mesmerising results. The piece is part of Neural Synesthesia, a project that uses music to drive visualisations in a number of different ways; another example of this in action is Kaleido Beats, embedded below. Steenbrugge makes it clear that he "[does] not create these works, I co-create them with the AI models that I bring to life".
Steenbrugge explains his process on his Vimeo page as follows:
“My basic workflow:
1. I first collect a dataset of images which will define the visual style/theme that the AI algorithm has to learn.
2. I then train the AI model to mimic and replicate this visual style (this is done using large amounts of computational power in the cloud and can take several days).
3. Next, I choose the audio and process it through a custom feature extraction pipeline written in Python.
4. Finally, I let the AI create new, entirely unique visuals with the audio features as input. I then start the final feedback loop where I manually curate, rearrange and synthesise these visual elements into the final work.
The AI does not fully create the work, and neither do I. It is very much a collaboration.”
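Steenbrugge's feature extraction pipeline (step 3) has not been published, but as a rough illustration of the idea, here is a minimal sketch, assuming NumPy only and hypothetical feature choices: it computes per-frame loudness (RMS energy) and brightness (spectral centroid), the kind of audio features that could be fed to a generative model as input.

```python
import numpy as np

def extract_features(signal, sr=22050, frame_len=2048, hop=512):
    """Compute per-frame RMS energy and spectral centroid from a mono signal.

    Hypothetical stand-in for an audio feature extraction pipeline;
    not Steenbrugge's actual code.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    rms, centroid = [], []
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    window = np.hanning(frame_len)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        # Loudness: root-mean-square amplitude of the frame.
        rms.append(np.sqrt(np.mean(frame ** 2)))
        # Brightness: magnitude-weighted mean frequency of the frame's spectrum.
        mag = np.abs(np.fft.rfft(frame * window))
        centroid.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-10))
    return np.array(rms), np.array(centroid)

# Example: a 440 Hz tone with a rising amplitude envelope.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = np.linspace(0.1, 1.0, sr) * np.sin(2 * np.pi * 440 * t)
rms, centroid = extract_features(signal, sr)
```

In a real pipeline, feature curves like these would typically be smoothed and mapped onto the latent inputs of the trained model, so the visuals respond to the music's dynamics.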
Steenbrugge also runs a YouTube channel, Arxiv Insights, in which he provides tutorials on machine learning and neural networks.