This week…
I have been watching the ongoing German election with fascination, the first without the world's most powerful woman, Angela Merkel, running. It is worth noting that the German green party came in third, almost doubling its seats in the process, compared to sixth place in the previous election. It is now in a position to play kingmaker and potentially form part of the coalition government. Meanwhile, the German far-right party dropped to fifth.
In comparison, in the recent, disappointingly anticlimactic Canadian federal election, the Canadian green party was practically obliterated, with its party leader finishing dead last, in fourth place, in her riding. The Canadian far-right party, on the other hand, did not pick up any seats but made gains in the popular vote.
Green parties and far-right parties aren't necessarily the likeliest to form governments in mature democracies, and often aren't taken very seriously because they have traditionally been outliers. However, both types of party rely heavily on social media and other online spaces to galvanise and mobilise grassroots support. (James McBride, CFR; Angie Waller and Colin Lecher, The Markup.)
In the process, supporters of either political ideology are susceptible to radicalisation, as well as to creating, spreading and believing misinformation. Over the next two or three rounds of federal elections in countries with both types of party, it will be interesting to see whether a pattern emerges.
Anyway, here’s a selection of other stories on my radar, a few personal recommendations, and the chart of the week.
Wikipedia’s next leader on preventing misinformation: ‘Neutrality requires understanding.’ ($)
Davey Alba interviews Maryana Iskander, a social entrepreneur in South Africa, who will become Wikimedia Foundation’s chief executive in January for NYT:
What is your take on how Wikipedia fits into the widespread problem of disinformation online?
[…] We have tech that alerts these editors when changes are made to any of the pages so they can go see what the changes are. If there’s a risk that, actually, misinformation may be creeping in, there’s an opportunity to temporarily lock a page. Nobody wants to do that unless it’s absolutely necessary. The climate change example is useful because the talk pages behind that have massive debate. Our editor is saying: “Let’s have the debate. But this is a page I’m watching and monitoring carefully.”
The Internet’s original sin
Another interview, a long one this time, but an insightful one nonetheless between two Adweek alumni. Charlie Warzel chats with Gizmodo’s Shoshana Wodinsky about online advertising:
A lot of this personal data is cheap to buy. A lot of it is bad, too — corrupted or out of date or not all that reliable. And people in the industry know this. But many people don’t. I think this is like the fundamental brain scrambler of reporting on all this stuff. The invasions of privacy are real. The potential for abuse is real. But also it’s true that there’s a lot of garbage data, the importance of which is being way oversold by marketers and touted by targeting companies. How do you balance those two things in your reporting?
At the end of the day the ethos I try to carry in all my stories is that we all live under capitalism and the only way we do that is by making a profit for somebody at the end of the line. I’m kind of obsessed with charting that line. I do my best to show the flows of information. When you show the third parties that Facebook shares data with, you’re showing the flow of money. But that’s not enough because, as you said, the value of data isn’t always clear.
Mark Zuckerberg is one of the wealthiest people on the planet and so his company must be doing something valuable, right? Facebook is mostly selling ads and making money so people think, ‘my data must be very valuable.’ But the work is trying to add nuance to that story. Yes, your data is very valuable, but only in aggregate. It’s all very sticky and hard to parse out. Your individual data, in the grand scheme, might not be worth a ton on a market but some of that data is very valuable to you. And that means something. For example, health data feels personal and because of that it shouldn’t be commoditized. The stakes are too high. But then you look at the big picture and your piece of health data is worth a fraction of a cent. Which is also why companies have to collect so much of it, to scale it up.
The business of computational propaganda needs to end
Samuel Woolley for CIGI:
Seven years later, a quick Google search reveals that his firm is still in business. In fact, it seems to have grown — boasting more employees and larger clients. Very little has changed in the menu of things the company offers, despite the fact that many social media companies have clamped down on what they call “coordinated inauthentic behaviour.” And his work, like many of his competitors, is often just that: an organized effort to use various “inorganic” mechanisms and tactics to build clients’ digital clout. Bots (automated programs), sock puppets (false online identities) and groups of coordinated social media users, for instance, were crucial parts of his toolkit.
These tools continue to play a role in this business of computational propaganda — defined as the use of automation and algorithms in efforts to manipulate public opinion over social media. That this business still exists — and is, if anything, thriving — represents a serious problem for society.
What I read, watch and listen to…
I’m reading how one coder became Indonesia’s misinformation guru by Antonia Timmerman for Rest of World.
I’m watching BBC’s Vigil. It was a good, refreshing take on the conspiracy thriller genre until it became clear that it’s yet another tiring piece of anti-Russian, (semi-)anti-Chinese propaganda from Western cinema.
I’m listening to Medieval Science on BBC’s You’re Dead to Me.
Chart of the week
Did you know that AFP is the world’s largest fact-checking operation? Yet only Big Tech is willing to pay for fact-checking services, reports Sarah Scire for Nieman Lab.