Eliot Higgins, Marilín Gonzalo, Felix Simon and Valentina de Marval discuss the challenges posed by software such as Midjourney and DALL-E.
Over the past few weeks, a number of improbable images went viral: former US President Donald Trump getting arrested; Pope Francis wearing a stylish white puffer coat; Elon Musk walking hand in hand with General Motors CEO Mary Barra.
These pictures are not entirely improbable, though: Trump really was facing arrest; popes are known to wear ostentatious outfits; and Elon Musk has been one half of an unconventional pairing before. What is peculiar is that they are all fake images created by generative artificial intelligence software.
AI image generators like DALL-E and Midjourney are popular and easy to use. Anyone can create new images through text prompts. Both applications are getting a lot of attention. DALL-E claims more than 3 million users. Midjourney has not published numbers, but they recently halted free trials citing a massive influx of new users.
While the most popular uses of generative AI so far are for satire and entertainment, the technology is growing more sophisticated fast. A number of prominent researchers, technologists and public figures have signed an open letter asking for a moratorium of at least six months on the training of AI systems more powerful than GPT-4, a large language model created by US company OpenAI. “Should we let machines flood our information channels with propaganda and untruth?” they ask.
I spoke to several journalists, experts, and fact-checkers to assess the dangers posed by visual generative AI. When seeing is no longer believing, what implications does this technology have for misinformation? How will it affect the journalists and fact-checkers who debunk hoaxes? Will our information channels be flooded with “propaganda and untruth”?
A fake Trump gets out of jail
On 20 March, journalist Eliot Higgins, founder of Bellingcat, tweeted a series of images he made using Midjourney. The pictures depicted a fabricated narrative around former US President Donald Trump’s legal troubles, from a fictional arrest to a fictional escape from prison. They quickly went viral, and Higgins was subsequently locked out of the AI image generator’s server.
“The thread I posted proves how quickly images that appeal to individuals’ interests and biases can become viral,” Higgins says. “Fact-checking is something that takes a lot more time than a retweet.”
For those who work to debunk disinformation, the rise of AI-generated images is indeed a growing concern, since a large proportion of the fact-checking they do is image- or video-based. Marilín Gonzalo writes a technology column at Newtral, an independent Spanish fact-checking organisation. She says visual disinformation is a particular worry because images are especially compelling and can have a strong emotive impact on audiences’ perceptions.
“You can talk to a person for an hour and give him 20 arguments for one thing, but if you show him an image that makes sense to him, it is going to be very difficult to convince him that’s not true,” Gonzalo says.
Is a tsunami on its way?
Chilean journalist Valentina de Marval, a professor of journalism at Universidad Diego Portales with previous fact-checking experience at organisations including AFP, Chicas Poderosas and LaBot Chequea, is also worried about the rise of AI-generated images. While these images still contain telltale flaws, such as badly rendered hands, teeth or ears, De Marval is concerned that the rapid improvement of these models will render such indicators obsolete.
“Maybe in a couple of months or days artificial intelligence will have learned, for example, to draw hands well, to outline the eyes well, to put teeth or ears, to make the skin less smooth and make it more real with imperfections,” she says.
Despite concerns that AI-generated imagery might lead to a truth crisis, experts like Felix Simon, a communication researcher and PhD student at the Oxford Internet Institute, warn against taking an alarmist view of these new technologies, saying that their proliferation does not necessarily mean more people will believe in those images.
“The relationship between image and truth has always been unstable,” says Simon. “One could say that what we see with generative AI is just a continuation of that. Many people will get used to it. They will develop defence mechanisms both on a personal level but also on institutional level, where news organisations will probably go to greater lengths to check if images show what they claim to show.”
Simon says that concerns about a new image-based information warfare and the proliferation of fake news date back to the days when photography was first introduced to newsrooms. Worries about the impact of deepfakes have circulated for years, and similar fears about image-based fake news emerged when Photoshop became accessible to the public. Just a few days ago, a suggestive Playboy magazine cover featuring French government minister Marlène Schiappa went viral. The image was quickly shown to be a photomontage combining the politician’s face with the body of another woman.
The problem of speed
Bellingcat’s Higgins believes that AI-generated images are a phenomenon that will most likely remain confined to social media platforms rather than reach anywhere near the mainstream media. He also thinks that fake images will be debunked as they go viral.
“The kind of people who are trying for a certain degree of mainstream legitimacy aren’t going to let themselves be called out constantly by sharing fake images,” he says. “I really think it is going to be something that is more about kind of gut reactions and memes, rather than anyone serious campaigning around fake images.”
However, what concerns fact-checkers is not necessarily what these tools produce, but the speed at which they produce it. News organisations will not only have to properly verify information but do so in a timely manner to avoid an information vacuum.
Unlike Photoshop or deepfake software, DALL-E and Midjourney can generate media within seconds from just a few text prompts. Gonzalo calls the resulting phenomenon ‘a digital fire’: the rapid distribution of a fake image or video through social media platforms. “This is a constant concern for fact-checkers because we can’t see what is moving at the level of WhatsApp groups or other messaging groups and this runs very fast because it is a viral type of distribution,” she says.
De Marval thinks fact-checkers will have to adapt their methods and rhythms to catch up with the potential influx of synthetic images. “Verification methods have to be adapted and streamlined in all newsrooms so they can process videos and images before showing them,” she says.