
A few thoughts on AI and art


Image produced by Dall-E 2


Recently I have been delving into the back catalogue of Phantom Power, a podcast which explores sound art, various forms of listening, and audio media theory. I would highly recommend their miniseries on the voice, which investigates how audio technology intersects with disability and gender.

However, it was something that Phantom Power’s presenter, Mack Hagood, said on a recent episode about AI that I want to explore here. He begins the episode with a fairly well-argued ‘rant’:


I have my own perspective on so-called ‘artificial intelligence’ as you might glean from the fact that I use the term ‘so-called AI’. AI is a branding term. The chatbots that we hear about today have nothing to do with the general artificial intelligence that science fiction writers and philosophers have speculated about for so many decades.


As you probably know, chatbots are just word prediction algorithms, autofill on steroids. They have no understanding or intention, and it’s my strong opinion that an algorithm without a body will never develop human understanding because cognition is embodied and enacted. […]


Our understanding of any utterance is embodied. It’s social, it’s emotional. As one well-cited paper from 20 years ago points out: for us to understand the sentence ‘pass the salt’, we need to have had an entire set of sensory motor experiences, such as grasping something and moving one’s arm through space […] [see the study he references here]


How can we speak of grasping an idea if we’ve never grasped the salt, or at least seen someone else grasp the salt? In my view, without concrete sensory experiences, there’s just no foundation to build understanding upon.


Here, Hagood puts into words a half-formed thought that has been floating around my head for almost a year. At various workshops and talks involving artists, curators, and producers over the last year, I have been bothered by a certain naivety around AI: a general acceptance of a grand narrative pushed by tech companies. In these discussions I have noticed two troubling attitudes towards AI – mysticism and complacency.


Let's start with mysticism. This is a belief that puts a lot of faith in the 'intelligence' part of AI. The AI mystic believes that ChatGPT 'thinks', has a personality, perhaps consciousness. Consciously or not, they see the creations of ChatGPT and Dall-E as almost a form of magic and may employ this kind of language to talk about AI. Often this attitude comes from an unawareness of the basics of how these GPT systems work. The AI mystic falls for the subtle marketing around Large Language Models (LLMs). The discourse around AI, what we call it, contributes to an idea that it is more intelligent than it is. But even ChatGPT will tell you there is no 'mind' comparable to that of an animal in these systems, no awareness of the material context of the text, sound, or imagery it replicates so confidently. As Hagood argues, human language is inherently embodied (as are music and art), and so an AI cannot understand the content it is generating.
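For the sceptical or the curious, the 'word prediction' claim can be made concrete with a toy sketch. What follows is a minimal caricature, not how ChatGPT actually works (GPT models are transformer networks trained on vast corpora), but it shares the same basic objective: predicting the next token from surface statistics. Note that the program tracks only which words follow which; there is no representation of salt, grasping, or anything else.

```python
import random
from collections import defaultdict

corpus = "please pass the salt please pass the pepper please hold the door".split()

# Record which word follows which: pure surface statistics, no semantics.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(word, length=6):
    """Repeatedly 'predict' the next word by sampling from observed successors."""
    output = [word]
    for _ in range(length):
        word = random.choice(follows.get(word, corpus))
        output.append(word)
    return " ".join(output)

print(generate("please"))  # e.g. "please pass the pepper please hold the"
```

The output can look fluent, even plausible, while the program understands nothing; scale that pattern-matching up by many orders of magnitude and you have the intuition, if not the mechanics, behind the chatbots.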


A similar expression of this argument lies in John Searle’s distinction between syntax and semantics. To Searle, computers deal with pure syntax, working within highly complex structures of code-based rules and language. However, these syntactical workings are a qualitatively different thing from semantics: the meaning behind words. Searle’s argument sits within the semiotic framework in which language acts as signs and symbols referring to real-world ‘signifieds’. The computer has no physical references beyond digital code: it can only refer to signs. With this in mind, AI could be thought of as an extension of a postmodern tendency, outlined by Jean Baudrillard and Fredric Jameson, in which reality consists more and more of references to references, the Real becoming increasingly obscured by layers of abstraction (more on this later).


While the AI mystic may put too much stock in AI’s intelligence, pacifying themselves in the process, they at least recognise that AI will bring significant social change. The more worrying attitude, perhaps, is the strangely conservative view that artists have always adapted, so this will be no different. This position is faulty in two directions. Firstly, it maintains a certain faith that essentially nothing changes: the technology and the jobs people have might shift, but by adapting and making use of the new tools, artists can fundamentally keep doing the same kind of work. One only has to look at the music industry to see that, while people do continue to make music, the streaming age has changed their material conditions to the point where it is almost impossible to make a living from music (more on this later). In retrospect, a more active political response to technological developments might have prevented technology from devaluing the work of artists and musicians. So, as AI threatens the work of artists further, it is important not to underestimate how much things could change.


Secondly, the complacent attitude assumes that artists will have meaningful access to the building blocks of AI, allowing them to adapt the tool to their purposes. ChatGPT and Dall-E are powerful tools, but how much control does the user really have, and over what exactly? There are undoubtedly some interesting ways to manipulate these systems for creative ends. Personally, I struggle to get ChatGPT (3.5) to produce anything other than vague clichés that feel like they belong in a GCSE English Lit essay, although some artists have more success. For example, see the disturbing Dall-E cryptid Loab, or how the writer Vauhini Vara used ChatGPT to write about grief. However, these creative avenues tend to emerge specifically out of the AI model failing, producing something which ‘shouldn’t’ be there. I doubt that OpenAI want their chatbot to get stuck repeating short phrases, or their image generator to produce horrifying gremlins and monsters out of nonsense words. Although the strange corners of these models are the exciting part for artists, to tech companies they are bugs to remove as soon as possible.



Noise, Encoding and Control


I was recently playing with a portable analogue radio with a tape player, thinking about how many sonic artists and media theorists (Jez Riley French and Mack Hagood, to name only a couple) cite early experiences pushing radios and tape recorders to their limits as the starting point for their practices. This was possible partly because analogue equipment has a sort of openness to it. Transmitting and receiving analogue signals involves no encoding, a process that is inherent to digital technology. Physical movement through space alters what an analogue radio’s antenna might pick up, while a digital radio can only pick up signals from digital radio transmitters. Both the analogue radio and the tape recorder can even pick up vibrations and electromagnetic waves from Earth and outer space, accidentally or not. A cheap piece of analogue equipment allows for open explorations into niche and eerie areas of the radioscape and sound environment, and, to its benefit, such equipment has a very low barrier to entry in terms of cost and technical skill.


Perhaps the digital equivalent to such analogue experimentation would be the open-source and hacking communities, which unfortunately tend to require a higher skill level to take part in. The digital is arguably more mediated than the analogue, for the same reason that it is less noisy: it functions through processes of encoding. Digital technology is powerful precisely because it does not include the strange noises of the world, just the coded signal.

The lack of noise within digital media has allowed for far more complex communication networks (your computer, the internet, etc.), while restricting the user’s ability to play with the rules of those networks. I talk to my friends and family through user interfaces on social media platforms, which, in comparison to a ham radio or a landline, is a lot more convenient but a lot more mediated. The user interface, with its illusion of immediacy and invisibility, guides all my interactions along opaque data streams.


Dating back to the origins of the term Web 2.0, quick-and-easy amateur engagement with the internet has been facilitated through a platform-based model. However, this ‘opening up’ is paradigmatic of what Gilles Deleuze called the ‘society of control’: users moving freely along undulating participation networks which control and feed off them. Although Deleuze did not use the word ‘digital’ in his short “Postscript on the Societies of Control”, his references to ‘codes’ and ‘passwords’ cement a connection between this new form of social control and digital technology.

The data that Google has on me is highly personalised, yet it perhaps paradoxically undermines my individuality, fragmenting my perception of self as it vies for my constant attention. I feel like Deleuze’s ‘dividual’: a split and distracted subject, a networked packet of data. But the content that is presented specifically for me is for an encoded self, while my actual self is there in the world, with cosmic radiation running through it, hearing the neighbour play ukulele, a physical body breathing air.


The terms of the platforms on which most of us spend our online time are set by a handful of extremely large corporations (Google, Meta, Apple, etc.), and these platforms are set up specifically to profit from our engagement. Users benefit from more content, slicker interfaces, and streamlined participation, meanwhile giving up a certain amount of control and autonomy within platforms that happen to be quite opaque in their workings. I think that OpenAI’s flagship AI models may fit all too well within this paradigm, dictating the rules of participation while access behind the curtain is denied.

Recently, if you attempt a ‘search’ on YouTube, the results of your search terms switch to algorithmically generated suggestions after ten results. Similarly, ChatGPT’s future iterations may aim to control the rules of user engagement, offering solutions rather than taking detailed instructions. At the very least, OpenAI and similar companies will want to limit how strange a user can get with their prompts, nudging them back to the more polite and ‘professional’ realms.



Pastiche and empty signs


Almost all the data that has been fed into LLMs has been scraped from the massive data source that is the World Wide Web (Wikipedia, eBooks, social media, independent webpages). Dall-E and Midjourney are fed millions of images from across the internet, while Spotify, among others, is attempting to build musical AI equivalents from its extensive databases. In my experience, conversations among artists about where all this data comes from – and the human work that originally went into generating it – are too rare. Silence around these datasets only benefits the massive corporations that stand to profit hugely from the proliferation of this new technology. And it is not only artists who have been taken advantage of: AI breakthroughs have only been made possible by the wide-scale accumulation of the labour of everyone who has uploaded content onto the internet.


However, the AI systems built from this stolen labour are not as nuanced as they may initially appear. To paraphrase Hagood, the danger of AI comes from its stupidity, its blinkered nature, the lack of context to each of its actions. A model like ChatGPT is a style imitator, finding patterns in previous pieces of writing and mimicking those exact patterns, but without the semantic content. While ChatGPT and Dall-E usually do not copy directly from their source material, the result is still essentially plagiarised, from many sources simultaneously rather than one.


GPT systems, for all their technical innovations, seem to be just another heightening of postmodernism. My use of ‘postmodernism’ follows the example of Jean Baudrillard and Fredric Jameson, as well as Mark Fisher, who drew heavily on Jameson’s writing. ChatGPT is a further distillation of what Jameson calls ‘nostalgia mode’. ‘Nostalgia mode’ does not describe a Proustian exploration of one’s memory, but the opposite: the flattening of cultural memory into its formal content. Fisher writes in Ghosts of My Life, “Jameson’s nostalgia mode is better understood in terms of a formal attachment to the techniques and formulas of the past [and] a retreat from the modernist challenge of innovating cultural forms adequate to contemporary experience.” [my italics]


All cultural signifiers become flattened in the AI system, removed from their historical and social contexts and turned into pure style, or pastiche. When ChatGPT writes a horror story in the style of Shakespeare, or Dall-E produces a Big Mac™ in the style of Holbein, we are not brought closer to the cultural specificity of Shakespeare’s or Holbein’s work. LLMs are context-ambivalent. They do not stand for anything – apart from being polite – even if the work they draw on had an inherently political or ethical stance. Any values are watered down by LLMs into empty signs. As a result of this purely referential form of cultural production, users are denied the cultural tools to make sense of their contemporary condition and to dream of potential alternatives. I ask ChatGPT to dream for me, and I am given clichés not worthy of the shallowest spiritualist.


Jameson noted that the postmodern period has brought higher levels of abstraction in every field, including aesthetics and finance. In art, figurative abstraction is replaced by a form of realism which references the image itself: a symptom of a spectacle-based society. We are not presented with the actual thing but “only… simulacra of things [that] can be called upon to take their place and offer their appearance.” LLMs add a layer of simulation on top of that. They produce imitations of references, hallucinations of pastiche: stacked iterations of references producing floating signifiers which have a delirious relationship to embodied experience.



AI Hauntology?


Some artists, especially those using more primitive models, have managed to harness the strangeness of AI-generated content to develop new and unexpected artistic approaches. The failures of AI models to create seamless and wholly convincing forms of pastiche can produce a sense of the uncanny. Yair Rubinstein sees the uncanniness of AI-generated music as potentially disruptive to the postmodern status quo, arguing that, while it does not ‘break from the past’, it ‘reanimates’ it disquietingly. Rubinstein draws on Fisher’s writing on sonic hauntology, in which Fisher argues that certain musicians evoke a sense of ‘time out of joint’ by accentuating or faking redundant sonic media artefacts. As Rubinstein writes, “sonic hauntology made concretely audible the temporal disunity and material artifice (the hiss, the crackle) that troubled common-sense assumptions of media’s smooth and effortless recapitulation of the past.” This sonic disruption elicits melancholy for a time in which different futures seemed possible, and unease about the temporal flattening of contemporary culture.


To Rubinstein, AI music is inherently uncomfortable to the point of being unbearable. While it cannot offer cultural alternatives to late-stage capitalism, he thinks AI’s cultural “power lies in the potential to solicit the intolerable nature of this condition from the listener.” Rubinstein adopts an almost accelerationist approach to AI music, believing that its prevalence will create such a hauntological unease that the public begin to reject a stagnant status quo, as it “audibly foregrounds the asynchronies and discontinuities that capitalism’s foreclosure of alternatives attempts to hastily conceal.” (Find the full essay here.)


However, I think Rubinstein underestimates the capitalist system’s ability to normalise absurdity and unease. With time, AI-generated music will sound cleaner, more ‘real’, less noisy, less strange. A few weeks ago, three years after Rubinstein’s article, I was recommended a video on YouTube in which (AI) Frank Sinatra sings Coolio’s ‘Gangsta’s Paradise’. The comments under the video are far from the public response that Rubinstein describes; instead, users remark on and celebrate how developed AI music has become. Below are a few examples:


“It's unbelievable how good the AI covers are with Frank Sinatra, absurd. The instrumental, tones, everything fits so perfectly, synchronized, harmonious and yet they all sound original and faithful at the same time.”


“It's literally by far the best AI cover of all time. Frank's voice perfectly suits this song and this stylistic!”


“I love how Frank Sinatra's AI covers are adapted to fit his style. Nice job!”


These do not read as the comments of people concerned about late-stage capitalism’s foreclosure of political alternatives. They are simply impressed by how smoothly and convincingly the AI can simulate past cultural forms. Note that the emulation of Sinatra’s ‘style’ is paramount, rather than social context or original innovation. By capturing his purely formal qualities, any content can be made to sound ‘original’.



AI's effects on labour


There is a strong relationship between the data that AI is trained on and how it is going to be used to damage the creative industries (as well as many white-collar and tele-service jobs). As we probably all know, music consumption has shifted heavily in the past 25 years, first through the proliferation of MP3s and video files via online piracy, then through the dominance of streaming services. Arguably, early online piracy did not hurt small artists that much, as the potential lost sales were offset by an increased platform and distribution network, which allowed some small acts to reach audiences they would not have reached in a pre-digital age. In fact, some artists, such as The Coup (whose third album was titled Steal This Album), encouraged musical piracy as a political act. Major record labels, however, stood to lose billions in sales to the unregulated distribution of music online.


Through aggressive lawsuits and the rise of Spotify, which made the legal streaming of music cheap and convenient for the consumer, piracy fell out of fashion. However, the hole in the commodity market remained: consumers no longer expected to pay for specific music releases - after all, they were already paying for a Spotify subscription. Similarly with film and TV: consumers were paying their cursory dues to Netflix, so buying DVDs seemed pointless. Everyone was paying for their music again, and that money was making its way back to the biggest artists and the major labels. For all the other artists, though, from early-career acts to established but more alternative musicians, making money from recorded music slipped out of grasp. Moreover, having your work uploaded to streaming platforms such as Spotify, or maybe Bandcamp, became a requirement for developing an audience.


This is just one quick example of how the development of a digital platform economy consolidated the work of artists onto digital platforms while disenfranchising the majority of those artists. To make things worse, uploading content to these platforms has left artists open to exploitation by AI companies.


These AIs don't produce something out of nothing. They are making pastiches and collages of previously made work, the type of work which is often underpaid or unpaid. And while artists are struggling to make any money from difficult, highly technical work, let alone pay the rent with it, Google and Spotify are taking AI systems under their wing with the intention of making huge profits from that work. A painting that you put on your website in the hope that a curator might one day see it can be used to feed Dall-E, and that data could be used to generate stock imagery for advertising companies. An ambient piece that you paid DistroKid to put on Spotify might be analysed by an in-house AI model to create tracks that saturate the very music market you are trying to access. In both these cases, small-scale independent artists made time-consuming, highly skilled, and personal work, and it (alongside the work of millions of others) has been used to profit a corporation with no creative motivation beyond capital.


So, when I hear an artist say "AI is scary, but humans have always adapted", or "People are going to do bad things with technology, but they are also going to do good things, like with any technology", or the equivalent, I hear complacency. I hear a strange decision to ignore the real, large-scale extraction of wealth from artists and the exploitation of most internet users' cultural production. As artists, we need to look directly at the platforms we rely on to share our work and ask, "What is the platform going to do with this content, who will benefit, and who will suffer?"



Postscript

1: who are you to criticise my code?


A question may be raised by science and technology specialists as to whether criticisms from Theory writers and artists are useful or well informed. It is true that my technical knowledge of computer science, statistics, and coding is limited, but to shut the humanities out of the conversation would be ill judged.


Louis Daguerre could not predict Man Ray.


The inventor of the transistor could not anticipate heavy metal.

Roland did not intend acid house or techno.


Technical breakthroughs never sit neatly within the framework they were developed in. AI developers may have their heads too deep in the code to fathom the cultural effects of their work, given the nature of their specialism and the corporate or academic structures they work within.


At the start of his seminal text, We Have Never Been Modern, Bruno Latour flicks through his morning newspaper and observes that every major news story he comes across is made up of various overlapping academic, scientific, artistic, and political components. This observation runs counter to a Modernist tendency to create a clean split between politics and culture and the study of science and Nature.


Latour argues against such an epistemological split between scientific fact and social theory, recommending that we recognise that the biggest problems humans face come in the form of hybrids. AI is an archetypal hybrid, consisting of RAM, datasets, legislation, music production, labour disputes, corporate investments, and environmental impacts. Its danger lies partly in its multiplicity: it is impossible to observe its workings and its effects in one go, within one framework. The dangers of AI have emerged in large part from a blinkered and optimistic approach to technological development (as well as the influence of capital), and a more integrated relationship between philosophers, artists, and programmers is necessary for understanding and managing the problems it brings.



2: AI may just eat itself


In a June article for The Verge, James Vincent listed many of the ways that AI has caused chaos within platform ecosystems, flooding them with bots, spam, and misinformation. AI is threatening the open, easy-to-use, participation-friendly internet. He writes,


Given money and compute, AI systems — particularly the generative models currently in vogue — scale effortlessly. They produce text and images in abundance, and soon, music and video, too. Their output can potentially overrun or outcompete the platforms we rely on for news, information, and entertainment. But the quality of these systems is often poor, and they’re built in a way that is parasitical to the web today.


AI companies have managed to build a pastiche machine that can create ‘good enough to pass’ but still fundamentally poor content at a rate that dwarfs the cultural production of actual people. As a result, they have undermined an unspoken contract on which Web 2.0 relies. Maintained by moderators and algorithms, platforms rely on a certain amount of authenticity. While the content that prosumers (producer-consumers) posted on YouTube and Instagram often acted as a form of social currency – an exchange of appearances – we (usually (sometimes)) knew, or at least believed, that there was an actual person behind the content. AI has tipped the balance, and the fake and the hoax now outweigh what we might consider ‘authentic’ content.



What happens, then, when the platforms which acted as such good data sources are filled with the productions of AI? There is no easy way to sort authentic writing, images, and music from the fake, especially as LLMs are rolled out into professional settings. Without a lot of human control over its datasets, an AI model will inevitably end up being trained on AI-generated data, which, according to a study published in May, could make the system fully collapse. Feeding AI to AI leads to a sort of digital prion disease: like misfolded proteins, faulty outputs propagate through each successive generation, decaying the rest of the network. It seems there is more AI strangeness to come.
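As a crude illustration of that collapse dynamic (a minimal sketch, not the study’s actual method, and assuming nothing about any particular model): fit the simplest possible statistical ‘model’, a Gaussian, to some data, sample a new ‘training set’ from the fitted model, and repeat. The tails of the distribution disappear first, then the variance itself withers away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data drawn from the true distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(1, 501):
    # "Train" a model on the current data: estimate mean and spread.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on this model's own output.
    data = rng.normal(loc=mu, scale=sigma, size=50)
    if gen % 100 == 0:
        print(f"generation {gen}: mean={mu:+.3f}, std={sigma:.3f}")

# The std drifts towards zero: each generation loses a little of the
# tails of the distribution it was trained on, and the loss compounds.
```

Each generation forgets the rare cases of the one before, which is the digital prion disease in miniature.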




Not referenced but relevant:

Noam Chomsky: The False Promise of ChatGPT


Trashfuture (Re)Wrapped


