Published 26/07/2023

AI: Who’s Looking After Me?
This exhibition considers the hottest – and to some the most frightening – of hot topics, artificial intelligence, and its role within the systems affecting everyday life.

Cat Royale by Blast Theory. Photo: Stephen Daly. Science Gallery London. King’s College London, 21 June 2023 – 20 January 2024. © George Torode.

Science Gallery London, King’s College London
21 June 2023 – 20 January 2024


Every time I sit down to gather some thoughts on this exhibition, a headline, almost in the tone of a war bulletin, darts in, raising the alarm on the exponential rate at which artificial intelligence (AI) is developing – or, some would say, spiralling out of control. Among these news flashes, I learn that a 21-year-old intruder, who broke into the grounds of Windsor Castle in 2021 intending to assassinate the Queen, was encouraged in this crazed mission by his AI “girlfriend”, according to a psychiatric assessment provided during court proceedings. Looking at the transcript of the defendant’s interaction with his robotic companion, the crudeness of the bot’s verbal expression, let alone “comprehension”, seems mind-boggling, making its power to exert any sort of influence that much more disturbing. The case is one of the first in legal history to assess the role played by AI in encouraging a defendant to commit a crime.

James Bridle, Autonomous Trap 001. © James Bridle. Science Gallery London, King’s College London, 21 June 2023 – 20 January 2024.

Much more convincing in its ability to masquerade as a human was the recent deepfake video of consumer champion Martin Lewis recommending an investment scheme purportedly from Elon Musk. Lewis called the forgery “frightening” – certainly the simulation of his voice was impressively accurate – and no doubt his faultless reputation is highly alluring to fraudsters in its ability to engender trust, crucial for ensnaring the gullible (and younger generations have been found to be much more prone to such scams, according to a poll conducted by Goldman Sachs).

The same day, at the perhaps optimistically named AI for Good global summit in Geneva, the historian Yuval Noah Harari called for harsh criminal sentences for the creators of bots impersonating people. That same gathering also gifted us the first press conference with robots, who seemed simultaneously macabre, absurd and dispiritingly banal.

Elsewhere, apocalyptic warnings about the dangers of AI have been emerging from people who presumably have a good idea of what they’re talking about. One of the godfathers of AI, Geoffrey Hinton, pithily told the New Statesman last month: “We’re toast. This is the actual end of history,” and Demis Hassabis, CEO of Google DeepMind, recently signed a terse but unequivocal statement, with other industry luminaries, warning that the possibility of the extinction of the human race from AI is a real one if steps are not taken to mitigate the risks. Nearly 10 years ago, the theoretical physicist and cosmologist Stephen Hawking had also speculated that AI could trigger the end of humanity, gaining autonomy from its creators by re-designing itself.

Mimi Onuoha, The Future is Here! © Mimi Onuoha. Science Gallery London, King’s College London, 21 June 2023 – 20 January 2024.

The sticky dilemma of these end-is-nigh scenarios lies in some of the potential marvels of AI, and several of the most impressive of these are within the field of healthcare. For example, machine technologies can diagnose heart attacks in an emergency setting more efficiently than physicians, and the use of AI in imaging – where it is deemed particularly effective – has superior detection rates for cancers and neurodegenerative conditions. Within the realm of mental health, artificial intelligence is being deployed in the context of that other hot topic, the promising research into psychedelics, with AI technologies able to sift through reams of data, whether to identify the substances’ effects on consciousness or to find new compounds that could treat a wide range of conditions such as depression, anxiety, addiction, post-traumatic stress disorder, eating disorders and chronic pain.

The creators of this exhibition, Science Gallery London, part of King’s College London, and FutureEverything, frame their investigation of AI in terms of the concept of care, starting with their title: Who’s Looking After Me? (or not, in some cases). Several displays address the application of AI within healthcare systems, drawing attention to its considerable diagnostic potential, but also questioning the role of the human factor. “Does AI care?” is the caption displayed within an installation considering the role of machine technologies in the experience of young cancer patients. Nearby, an interactive survey, The Doctor Will See You Now, asks us patients the burning questions: Do you think AI would understand your medical problems better than doctors? Could AI replace human doctors? Would it be acceptable for an AI doctor to make decisions without considering your feelings?

Does AI Care at Science Gallery London, King’s College London, 21 June 2023 – 20 January 2024. © George Torode.

To some, of course, it will seem inconceivable that machines could possess not only the medical knowhow but also, perhaps especially, the soft powers of doctors – the empathy, the reassuring bedside manner, the glance of understanding in a frightening situation. Yet, in practice, how consistently do patients experience these less tangible but still potent benefits? A recent study showed that the AI chatbot ChatGPT gave replies to health queries that were not only more precise but, crucially, also perceived as more empathetic than those provided by human doctors. Perhaps compassion simply needs to be performed: the healthcare provider must be seen to be sympathetic for the patient to feel relieved. And, since doctors too need looking after, a judicious use of AI could also mitigate the healthcare worker burnout that makes the provision of empathy to patients, whether genuine or performative, that much less likely.

An ample, hospital-like curtain in the gallery displays some definitions of what this ineffable concept of care might be, suggestive phrases with an almost lyrical ring of truth that I cannot imagine artificial intelligence coming up with in its current generative forms, whether visual or textual (to me the weakest in the current AI landscape – vide the abysmal quality of DALL-E, a system that produces images from text prompts): “Care is a breeze, something that surrounds you”; “Care is fire in a cave”; “Care is a cat, an animal I feel happy and comfortable with.”

Cat Royale at Science Gallery London, King’s College London, 21 June 2023 – 20 January 2024. © George Torode.

It is cats and their care that are the subject of a video display, Cat Royale, real-time footage of three felines tended to by a robot within a nightmarishly DayGlo environment in which I could easily picture Barbie and Ken living. “Would you trust a robot to care for your pet?” asks a caption. In this case, an emphatic “No” comes to mind, in comparison with my mixed, tending-towards-positive feelings about AI-assisted medical care, for I know in my bones that none of the many dogs and cats I have lived with would thrive without the mutually doting, tactile togetherness experienced with their caregivers.

Elderly care is another area in which technological interventions seem more problematic than salutary because of such a lack of reciprocity, what I would doggedly call the human touch. A study published in 2020 identifies the dangers of AI within gerontology as the four Ds, and these could easily be applied to other contexts: depersonalisation through standardisation; discrimination against minority groups through generalisation; the dehumanisation of the care relationship through automatisation; and the disciplination of users through monitoring and surveillance.

The kind of automatisation that AI systems apply to data is especially unsettling in the realm of immigration control, examined here in an installation with testimonies from people navigating the dehumanising and sometimes cruelly random UK border system (though perhaps no other kind exists), a scheme mapped out in a chilling tableau depicting people being dropped into “sorting machines”, resembling a kind of rudimentary food processor that classifies them as desirable or not.

Newly Forgotten Technologies 3 at Science Gallery London. King’s College London, 21 June 2023 – 20 January 2024. © George Torode.

From the perspective of spectacle, the most commanding exhibit is Wesley Goatley’s installation of moribund Alexa voice assistants, Newly Forgotten Technologies, the discarded speakers strewn across desolate shores of black sandy landfill, gone the way of all redundant “smart” gizmos.

It remains to be seen how smart artificial intelligence will become, whether it could ever be sentient and self-aware, not that there is any consensus as to what human consciousness is or where it is located, for all that the brain is being mapped out by contemporary scientists with the meticulousness of illuminated manuscripts. While there seems no doubt that AI has the potential to improve some aspects of life, from healthcare to tackling the climate crisis, the conundrum is whether it will be deployed and regulated for the wider good, to safely avoid the lunatics – whether human, machine or their hybrid progeny – taking over the asylum. In the meantime, this exhibition is a reminder that, whether or not we reach the apocalypse, AI technologies are already embedded in various aspects of our here and now, and the more we know about them the better.

Copyright © 1893–2024 Studio International Foundation.

Studio International is published by:
the Studio International Foundation, PO Box 1545,
New York, NY 10021-0043, USA