AI, CGI, conspiracies, misinformation, disinformation, Oh My
published 2023-06-19 by Brad Dobson
Exposure to convincing lies is nothing new. It’s been established for a long time that human beings can be tricked: our senses, beliefs, and morals are all up for grabs. As hypnotists and con men know, some are more susceptible than others. My Mom once related the story of her grandparents talking to the characters on the television when they first got one.
Many of us are familiar with the stories of the Mechanical Turk and The War of the Worlds radio broadcast. The 1770s brought us the Mechanical Turk, a fake automaton that appeared to play chess (a real chess player was hidden inside). It certainly had its skeptics, but people were generally wowed by the machine, believing they were seeing a real chess-playing robot.
AI representation of The Mechanical Turk
In a similar way, the original radio broadcast of The War of the Worlds caused some listeners to believe it was real. Although the reported hysteria seems to have been overblown, the fact remains that without a wider context (i.e. knowing they were just listening to a radio serial), people thought an actual invasion was occurring.
We like to think we’re more sophisticated than those people from a different era, but the same things happen today. With the pervasive nature of today’s media, it’s even harder to discern what’s real. We can expect that trend to worsen.
The effects of this exposure range from the blissful ignorance of someone whose problem was solved by a helpful “person” that was actually an AI, all the way to someone convinced that violence is the only solution to prevent the perceived war being waged against them and their cultural beliefs. It’s the latter, that “I don’t trust the gubmint, I’ll do my own research”, that pushes rational people into the fringes and extreme acts.
It’s not that they fully believe something silly, it’s that they have little to no trust in the person or institution providing the actual facts.
There are multiple things at play here:
Space.com’s article sums the lack of trust up nicely:
So if you find yourself talking to a flat-Earther, skip the evidence and arguments and ask yourself how you can build trust.
— Paul Sutter, Space.com
We see real-world examples of the effects of disinformation and misinformation every day, whether it’s trusting our food and water, the all-too-common tale of grandparents caught in the propaganda trap of Fox News, a quarter (!) of the UK believing that COVID was a hoax, or Samsung phone cameras inventing pixels in images of the moon. A quick run through Twitter shows that it’s tough to know whether a ‘verified’ user is actually who they say they are. Twitter is already well known for how easily it spreads fake news.
Every single pixel will be generated soon. Not rendered: generated.
— Jensen Huang, NVIDIA CEO
That quote from Jensen Huang (NVIDIA makes graphics cards and is the sixth-largest company in the world) hits at the core of what our online experience is becoming. It’s not just the pixels, though. We are not far off from our online and telephonic interactions being 100% fakeable. Even the best-trained eyes and ears will not be able to discern whether something is real. The advent of usable generative AI (e.g. ChatGPT and Midjourney) means that generated things not only look real but also make sense in context.
Protestants in Germany (knowingly) sat down for their first sermon delivered by an AI.
The UN sees “information integrity” and our ability to distinguish between disinformation and misinformation as crucial to human progress.
Writers in the 2023 WGA strike have grave concerns about studios using AI to write television shows: it’s coming.
Video deepfakes and voice cloning technology make it easy for anyone to “say” and do anything. Is that really the Pope wearing Balenciaga? Sure papi, sure it is.
AI-generated image from Reddit
In the real world we should always be circumspect about a salesman at our door offering a great price for our house, a Craigslist car buyer wanting to pay with a money order, or a preacher with personal jets that asks for our money. Where do those skills apply when we find out that the news we watch was generated using an AI that was trained with a bias towards Big Oil, Big Pharma, and Big Cereal? What does it mean when the pop star we’ve been listening to and following on social media is entirely fake, songs and all?
If we say “I’m not going to trust anything online” what does that mean when we see important guidance from the scientists, government, or the police? If we say “I’m only going to trust information from X and Y” we end up in an information silo and part of those tribes whether they are truthful or not.
We aren’t all able to look for gold-standard science (a randomized controlled trial, with a large cohort, not paid for by giant companies), and often it does not exist.
Sometimes our institutions make preliminary or conflicting statements.
As has always been true, there are very large players using media to change public opinion and spread fear, uncertainty, and doubt. The facts they present are malleable.
The media, both social and traditional, amplify everything. It’s their job.
Think of this like the ‘Stop, Drop, and Roll’ of online information consumption (you can find other ones like this by searching for ‘How to spot online disinformation’):
Trust but verify, indeed:
Our goal with Strong99 is to not just live longer but to thrive longer. I argue that to do that we need to be fully capable of navigating the online world throughout our lives. That means staying well-informed, well-versed in qualifying what we think we’re seeing and hearing, and vigilant in checking where information is coming from. Many times it means just plain waiting for more information and consensus instead of spreading the current story ourselves and making something worse.