When former president Donald Trump announced last week that he'd be arrested Tuesday over a hush-money investigation, the countdown was on. The internet waited for the inevitable perp walk that (so far) hasn't come.
But plenty of people were fooled into believing the arrest actually happened, and you can thank the rapidly evolving technology of artificial intelligence for that. Eliot Higgins, the journalist who founded the investigative outlet Bellingcat, used the AI tool Midjourney to generate images of what a Trump arrest might look like. Those images turned out to be pretty damn convincing.
Now, Higgins is apparently banned from Midjourney for creating misleading images (though he was clear in his tweets that they weren't real). But the incident opens up a bigger debate about how AI is accelerating so-called "deepfakes": images and videos fabricated to make you believe someone did or said something they never actually did. What happens when they get so sophisticated, and so cheap and easy to make, that nobody can tell they're fake?
Social media companies have tried to combat deepfakes by labeling them as misleading or false, but their filters and moderators can't seem to keep up with the content. And lawmakers, who can barely get their heads around whether or not TikTok is a Chinese company, haven't done much to combat the threat of deepfakes, including AI-generated revenge porn, which has only recently been banned by law.
So how can you, a person in the world, tell what's fake and what's real? Wired this week posted a guide on what to look for to spot an AI image: exaggerated facial expressions, garbled text on name tags or badges, and unnatural body proportions, details that AI software still struggles to replicate.
In the meantime, don't fall for any images claiming to show the arrest of Donald Trump until you hear it from legit news outlets (and if it happens, the coverage will be unavoidable).