The 137th Block: "Life-wrecking" AI, misinformation and scams
Or, just another week on the Internet
This week…
Welp, it wasn’t the flu.
Anyway, it’s that time of the year. I’m compiling a list, drawn from this newsletter and from around the web, of what people seem to spend their time on when it comes to news and analysis about misinformation, data, and ML (or anything related to Internet culture, in general). I’ll share it next week. For now, you can also skim through some essential media, last updated on November 7, 2021.
You can also read last year’s round-up.
And now, a selection of top stories on my radar, a few personal recommendations, and the chart of the week.
Sponsoring misinformation
Judd Legum for Popular Information:
In October, Popular Information reported that Semafor — a high-profile new media company — launched a climate newsletter sponsored by Chevron.
Chevron is not only one of the world's largest producers of climate emissions but also is notorious for spreading climate disinformation. Chevron is currently being sued by 20 cities and states for misleading the public about how its products drive climate change. “Big Oil companies have engaged in a decades-long campaign of misinformation that has contributed to global warming, which has disproportionately impacted our residents,” Hoboken Mayor Ravi Bhalla said when the city filed a lawsuit against Chevron and other oil companies in 2020.
As Emily Atkin noted in HEATED, Chevron’s ads in Semafor were themselves misleading. The ad claims that Chevron is working on “renewable natural gas” developed from cow manure. While Chevron is working to create fuel from cow manure, it “is not renewable or natural – and it is certainly not a large-scale climate solution.”
AI image generation tech can now create life-wrecking deepfakes with ease
Benj Edwards for Ars Technica:
When we started writing this article, we asked a brave volunteer if we could use their social media images to attempt to train an AI model to create fakes. They agreed, but the results were too convincing, and the reputational risk proved too great. So instead, we used AI to create a set of seven simulated social media photos of a fictitious person we'll call “John.” That way, we can safely show you the results. For now, let’s pretend John is a real guy. The outcome is exactly the same, as you'll see below.
In our pretend scenario, “John” is an elementary school teacher. Like many of us, over the past 12 years, John has posted photos of himself on Facebook at his job, relaxing at home, or while going places.
Using nothing but those seven images, someone could train AI to generate images that make it seem like John has a secret life. For example, he might like to take nude selfies in his classroom. At night, John might go to bars dressed like a clown. On weekends, he could be part of an extremist paramilitary group. And maybe he served prison time for an illegal drug charge but has hidden that from his employer.
‘Made my blood run cold’: Unmasking a TikTok creator who doesn’t really exist
Katherine Denkinson for Vice:
Relatively unknown until November 2020, [Carrie Jade] Williams’ status in the literary community grew after she won the Financial Times’ Bodley Head/FT Essay Prize, which is open to writers under the age of 35. The winning entry is published in the FT Weekend, the weekend edition of the British newspaper, although the competition does not appear to have been run for the last two years. Williams’ entry was a moving essay about her diagnosis with Huntington’s Disease, a debilitating, degenerative genetic condition that affects the brain. Written using a speech-to-text computer programme, the essay won her a £1,000 prize.
[…]
Everything seemed to be going so well for Williams. Despite challenging circumstances, she was flourishing as a writer and creator. Friends said she was a “lovely person,” an inspirational figure living with Huntington’s Disease.
Except, Williams does not exist.
What I read, listen to, and watch…
I’m reading “18 Pitfalls to Beware of in AI Journalism” by Sayash Kapoor and Arvind Narayanan for AI Snake Oil.
I’m listening to “The Foundational Myth Machine: Indigenous Peoples of North America and Hollywood,” an episode of Citations Needed, with Anishinaabe writer, broadcaster and arts leader Jesse Wente.
I’m watching Bob ❤️ Abishola.
Reviews, opinion pieces and other stray links:
The golden age of the streaming wars has ended by Alex Cranz for The Verge.
Twitter is blocking links to Mastodon by Jay Peters for The Verge.
UN to use crypto aid for Ukraine refugees by Marco Quiroz-Gutierrez for Fortune.
Goncharov: why has the internet invented a fake Martin Scorsese film? Sian Cain explains to Steph Harmon for The Guardian.
Chart of the week
Our World in Data published a new topic page on AI. Here’s their take on a brief history of artificial intelligence, including how rapidly the language and image recognition capabilities of AI systems have improved.