Artificial Intelligence (AI)-generated art has been consistently making headlines over the past few months, with one of the most noteworthy recent events being the first sale of an AI artwork at Christie’s for $432,500, over 40 times the original auction estimate. Similar AI projects in music, media, and even literature are garnering industry attention as they push the boundaries of human and machine creativity. The Christie’s sale is a tentative first indication that at least in the realm of visual art, it is now possible to put a price tag on AI-generated work.
However, there is a significant distinction to be made between this traditional model of AI generation - where an algorithm simply produces a piece of art - and a more interactive form, where the algorithm is part of the art itself. The former model is fairly straightforward: the role of the human artist is replaced by a programmer who develops and trains an algorithm that, in turn, produces a tangible, physical piece of art. To a certain extent, this model also applies to AI-assisted art production, where a human artist creates work - such as a song or a novel - based on the suggestions of an algorithm. In both cases, the separation between producer and product is clear. Artwork where the AI is part of the piece itself, however, rather than a tool in the creation of a static final work, uses AI to continually modify the artwork in response to input - whether data, viewer interaction, or environmental stimuli. This necessary machine integration raises a host of new questions about value and means of transaction in the art market.
There are already a number of interdisciplinary collectives and artists around the world using cutting-edge AI technology to create dynamic, interactive installations. One such example is Astrocyte by Philip Beesley Architect Inc., exhibited at the Design Exchange EDIT: Expo for Design, Innovation & Technology in Toronto in 2017. The installation, blending chemistry, AI, and immersive soundscapes, explores the question of whether architecture can truly be “alive” with the aid of artificial intelligence. Astrocyte was designed to respond to the physical movement of viewers with evolving patterns of light, vibration, and sound. Similar works by Philip Beesley have recently been exhibited at the Isabella Stewart Gardner Museum and the Royal Ontario Museum, suggesting that there is sustained interest in this form of immersive, AI-powered art at an institutional level. However, the site-specific nature and technological requirements of many of these works pose a challenge when it comes to transporting, storing, and exhibiting them in a variety of spaces, which could limit their ability to be bought and sold by private collectors. Going forward, this research project will explore how artwork that requires ongoing AI generation and adaptation can be integrated into the traditional marketplace.
In the meantime, if you would like to contribute to the research or have any questions about AI-generated art, feel free to reach out in the comments section!