The 165th Block: Hop on the fact-checking bus, quite literally
Plus, boomers are better than us at detecting misinformation. (Can someone fact-check this?)
This week…
Your reading time is about 6 minutes. Let’s start.
Earlier this week, I returned as the tabulator officer for the Toronto mayoral by-election. The turnout was slightly better than in the October 2022 municipal election, and more people asked more questions about the tabulator machine. Where does the ballot go? How does the machine know if I voted for two candidates? How does the tabulator tabulate the vote? Does the vote get counted immediately? Does the ballot get shredded? Oh, customer service. I prefer the solitude of writing.
Anyway, in my drafts, I have several unfinished, unpolished pieces, such as an unsympathetic criticism of pride parades, how a clean-as-you-go policy applies in digital spaces, and the neocolonialism of digital tourism… and I realise I probably have an unreasonable disdain for pests, clutter, and noise; I am not sentimental, nor do I indulge in fantasies; and I think joy is not a pursuit: it is often performative and inauthentic. These will not be well received by a majority audience. So off we go into this week with so many things said, yet nothing said at all.
And now, a selection of top stories on my radar, a few personal recommendations, and the chart of the week.
ICYMI: The Previous Block underscored the importance of fact-checking and the impermanence of digitally published information. CORRECTION NOTICE: None notified.
First misinformation susceptibility test finds ‘very online’ Gen Z and millennials are most vulnerable to fake news
Fred Lewsey for the University of Cambridge:
The first survey to use the new 20-point test, called ‘MIST’ by researchers and developed using an early version of ChatGPT, has found that – on average – adult US citizens correctly classified two-thirds (65%) of headlines they were shown as either real or fake.
However, the polling found that younger adults are worse than older adults at identifying false headlines, and that the more time someone spent online recreationally, the less likely they were to be able to tell real news from misinformation.
This runs counter to prevailing public attitudes regarding online misinformation spread, say researchers – that older, less digitally-savvy “boomers” are more likely to be taken in by fake news.
The study presenting the validated MIST is published in the journal Behavior Research Methods, and the polling is released […] on the YouGov US website.
The researchers encourage the public to test themselves via this link. Also, if you would like a Canadian-focused newsletter on business, tech, and finance, The Peak’s daily email has a section listing outrageous headlines and challenging you to pick the one that is fake (the answer is always provided at the end).
Fact-checkers’ bus tour taught older people in Spain useful internet tips. Here’s what they learned
Madison Czopek for Poynter:
One of the early lessons was about language: The BuloBús team had initially introduced its effort by talking about media literacy and outreach to elderly people in rural areas.
Some people took offense at the terms elderly and rural, because even towns of fewer than 40,000 are often large and more urban. Others misunderstood the meaning of media literacy and thought the BuloBús educators were trying to teach them how to read, [CEO and co-founder at Maldita.es Clara Jiménez Cruz] said.
In response, the BuloBús team pivoted. Rather than mention rural areas, they explained that the BuloBús was visiting places that don’t often have the resources to invite Maldita journalists for education purposes, Jiménez Cruz said. They stopped talking about media literacy, and instead discussed the specific ways older people encounter misinformation in their daily lives.
The team said things like, “You probably feel a bit anxious sometimes when you have to confront scams or misinformation arriving on your phone. This happens to everyone, and here are some tips that you can benefit from,” Jiménez Cruz recounted.
Once it took steps to avoid sounding patronizing, the BuloBús was largely well-received by locals at its stops, Jiménez Cruz said.
The logistical arrangements for this project must be quite something.
Inside the AI factory: the humans that make tech seem human
![A collage of images with instructions in green or red boxes under each image to label certain items such as shoes, costumes, and real items that can be worn by real people, and to not label costumes that cannot be worn by real people](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f86ca46-56dd-4d43-b3bc-6304f083894b_750x750.webp)
Josh Dzieza for NY Mag and The Verge:
Annotation remains a foundational part of making AI, but there is often a sense among engineers that it’s a passing, inconvenient prerequisite to the more glamorous work of building models. You collect as much labeled data as you can get as cheaply as possible to train your model, and if it works, at least in theory, you no longer need the annotators. But annotation is never really finished.
Machine-learning systems are what researchers call “brittle,” prone to fail when encountering something that isn’t well represented in their training data. These failures, called “edge cases,” can have serious consequences. In 2018, an Uber self-driving test car killed a woman because, though it was programmed to avoid cyclists and pedestrians, it didn’t know what to make of someone walking a bike across the street. The more AI systems are put out into the world to dispense legal advice and medical help, the more edge cases they will encounter and the more humans will be needed to sort them. Already, this has given rise to a global industry staffed by people like Joe who use their uniquely human faculties to help the machines.
Also, this bit is utterly shambolic:
Feeling confident in my ability to distinguish between real clothes that can be worn by real people and not-real clothes that cannot, I proceeded to the test. Right away, it threw an ontological curveball: a picture of a magazine depicting photos of women in dresses. Is a photograph of clothing real clothing? No, I thought, because a human cannot wear a photograph of clothing. Wrong! As far as AI is concerned, photos of real clothes are real clothes. Next came a photo of a woman in a dimly lit bedroom taking a selfie before a full-length mirror. The blouse and shorts she’s wearing are real. What about their reflection? Also real! Reflections of real clothes are also real clothes.
After an embarrassing amount of trial and error, I made it to the actual work, only to make the horrifying discovery that the instructions I’d been struggling to follow had been updated and clarified so many times that they were now a full 43 printed pages of directives: Do NOT label open suitcases full of clothes; DO label shoes but do NOT label flippers; DO label leggings but do NOT label tights; do NOT label towels even if someone is wearing it; label costumes but do NOT label armor. And so on.
What I read, listen to, and watch…
I’m reading about science publishing’s Achilles’ heel—paper mills, by Jonathan Moens for Undark and Retraction Watch.
I’m listening to Molly Crabapple on Paris Marx’s Tech Won’t Save Us as they discuss AI’s threat to artists.
I’m watching Casey Fiesler, a professor of information science, talk about the AI revolution and misinformation in a webinar hosted by The Institute for Science & Policy.
Other curious links:
“‘The damage has already been done’: Hong Kong journalist Bao Choy on winning a battle but not the war” by Irene Chan for Hong Kong Free Press.
“China’s banned online communities have found a new home on Reddit” by Caiwei Chen for Rest of World.
“AI is killing the old web, and the new web struggles to be born” by James Vincent for The Verge.
“You think the internet is a clown show now? You ain’t seen nothing yet…” by John Naughton for The Guardian.
“A conversation with Maria Bustillos about the limits of expert analysis” by Tom Scocca on Indignity. Ah, westsplaining.
“‘Pinkwashing’ is on its way to becoming an act of courage: when the harassment of Zara or Bud Light campaigns puts brands in a bind” by Jaime Lorite and Guillermo Alonso for El País (in Spanish).
“After three years of tracking the epidemic, the end of Le Monde’s Covid-19 dashboard” by Gary Dagorn for Le Monde (in French).
Chart of the week
More from the first story above on YouGov US, where Linley Sanders provides a further breakdown of the survey of 1,516 US adult citizens who were tested on their susceptibility to falsehoods in the news.
And one more thing
On Good Internet, here’s René Walter’s assessment of the AI-bullshit factory... so far.