4 Tips for Spotting AI Videos
Have you noticed a lot of wildly intriguing nature content in your social media feeds lately?
I’m talking fantastical plants, animals acting weirdly human, exotic places worthy of your desktop background.
Many of these posts are AI-generated, but looking at the comments, nobody is clocking it.
If you don’t think to question it, or if you don’t know what to look for, it’s easy to be fooled.
If you’re looking at a post that seems unbelievable, maybe you shouldn’t believe it. Here are some questions to ask yourself.
Is this content the algorithm would like?
Wild and wonderful nature fakes are so prominent because people universally love cute nature videos. There is just something so shareable about a nervous-looking seal.
When I shared this video to my own Instagram stories, people were surprised to learn it’s AI. But seals don’t act this way. A nervous seal would be hiding, not standing in the middle of the pool. And how realistic is it for a seal to be standing upright at all? For me, the hands give it away. It’s quirky and sweet. I’m elbow-deep into prompt engineering 101, and this is exactly the kind of thing AI would do.
AI would do it because it knows we’d like it. If a piece of content seems engineered to go viral, it probably is.
How hard would it be to get this footage?
When we’re talking about pictures of manta ray embryos, microscopic organisms, or rare carnivorous plants, these shots are hard to come by. Only a handful of specialized organizations like National Geographic have the access, technology, and wherewithal to capture them.
The obscurity of these deep nature factoids makes them hard to fact-check. Referring back to the seal video, for example, how many people know off the top of their heads what a seal’s hands look like?
GenAI is great at producing breathtaking “footage”. So if a meme account is sharing it without giving credit to an organization, it’s worth raising an eyebrow.
Is the production quality over the top?
We tend to associate quality with truthfulness, but genAI turns this on its head. GenAI excels at making images that look brilliant and vivid… but it’s terrible at making something look basic. I’ve literally tried to get it to make a stick-figure girl and it wouldn’t stop giving me immaculately realistic portraits of children.
It’s easy to forget that most media is of medium quality, at best. Look at news footage of anything. If a video has Avatar-level vividness and saturation, think twice.
What’s the deal with this account?
It isn’t just nature. Exotic homes and vacation spots are a genAI favorite, and there are plenty of other subject areas.
In the example below, the view out the windows raises a flag. I did some digging, and the account credited has an AI team that goes unmentioned here. So maybe this isn’t AI. But I’d bet you a big pot of money that it is.
Heading to the account and checking out the post history is always a good idea. If the general theme of the feed is something like, “Here are some very interesting and beautiful things,” then everything is worth questioning. Ditto if there’s no clear person or organization behind the account.
Again, these are creators trying to go viral. They may or may not care whether a post is real, and they may or may not do the due diligence to find out. I’m not throwing shade at anyone trying to create or curate compelling content, or simply trying to make money by gaming the algorithm.
I’m just saying it’s a good idea to kick the tires on what you consume these days.
Big fat caveats
When you’re 10 minutes deep into a good doom scroll, your guard is down. You’re not rigorously interrogating the dozens of images you see in a given scroll. It’s exhausting to imagine doing so.
At the same time, we are moving deeper into an era where you have to question everything you see online. Everything. The potential for disinformation and propaganda has never been greater. The most insidious fakes are the ones that could be true… or untrue. Imagine a video where Biden is made to stutter just a little more. Imagine Trump being able to claim the “pussy-grabbing” soundbite was a deepfake.
It’s become clear we can’t expect tech companies to police themselves. Creeds like “don’t be evil” and “for the good of mankind” are merely acts of branding. As much as I dislike lip-service morality, these are businesses in a capitalist economy; it’s no surprise that they’re out to make money, even at the expense of decency.
But someone has to draw the line. Regulatory bodies were too slow in regulating social media, but I have to hope it’s not too late to put guardrails on AI.
Back in September, Nebraska Senator Pete Ricketts introduced a bill that would require a watermark on AI-generated materials, and similar laws are taking shape at the state level. Biden signed an executive order in the fall directing NIST and other agencies to create guidelines for AI products, and requiring developers to submit information about their products’ risks.
Watermarking is by no means a silver bullet, even as TikTok and Adobe work out ways to require and enforce it.
Algorithms themselves need to operate differently, to evaluate content on more than just engagement.
Free speech and content moderation standards and best practices need to speak directly to AI-specific conflicts.
The Big Tech monopolies must be broken up. We are at their mercy individually, culturally, and governmentally.
Everyday people will have to start caring about the truth again, even when the truth is a letdown.
The common denominator of all these measures is good faith, and since capitalism does not incentivize good faith, I’m rooting for meaningful legislation this election season.