Generative AI is making technical strides at a speed that the general public and educational programs simply can't process. How can we humans keep up?
Gabriele Magro
Images have been tampered with since the birth of photography, but the tampering process (generating photorealistic images from scratch was nearly impossible) was long and required virtually professional-level expertise. Thus, only a few years ago, the public could look at visual media with reasonable confidence that they could believe what they were seeing, or at least be sure that the image they were looking at was captured by a photographer (operating with a greater or lesser degree of honesty and good faith) and that the people depicted in it were human, too. This is no longer the case.
The first, fundamental step in developing a skill set that enables both professionals and ordinary users to navigate digital media in the age of generative AI is doubt. Developing methods and curricula is an exciting, crucial project, but it will take time: until then, it is reasonable to set the goal of educating people to question what they see.
An article from McGill University’s Office for Science and Society is titled “how to spot AI fakes”, with a crucial addendum: “for now.”
As of late 2024, we mostly know what to look for in a picture when trying to determine whether it was AI-generated: the wobbly hands with missing or extra fingers, or the background text that makes no sense. In photos, many generative AIs tend to treat text and typefaces as decorative elements, a visual rendition of a linguistic effect called the “stochastic parrot”: language models like ChatGPT do not understand what they are writing, much as a parrot simply repeats words it has heard. We know we should be skeptical when textures look eerily smooth and when the outlines of elements blend into one another unnaturally.
But as AI advances rapidly and learns from its mistakes, will any of those criteria still be helpful in detecting AI-generated images in, say, one year’s time?
And that is the advantage of doubt: far from an obsolescent skill, it endures when the telltale signs change.
Indeed, doubt in the AI age must be trained anew: we need media consumers, internet users, and visual professionals alike to build strong questioning muscles and become familiar with the procedures required to verify facts and cross-check sources.
Doubt fosters a culture of inquiry and leads to a deeper understanding and more responsible engagement with images: it is, for all intents and purposes, the first and foremost step towards building visual media literacy.
This article was written by a human, with no employment of Artificial Intelligence.