The 153rd Block: Twitter v. Substack
This edition is not automatically shared to my Twitter account
This week…
On Apr 26, 2022, Elon Musk, self-proclaimed “free speech absolutist,” tweeted, “By ‘free speech,’ I simply mean that which matches the law. I am against censorship that goes far beyond the law.” I could have embedded the tweet, but that’s no longer supported on this platform. Here’s why, in a simplified timeline:
Wednesday – Substack announced Notes, a Twitter-like feature.
Thursday – Twitter blocked Substack users from embedding tweets into their posts.
Late Thursday/Early Friday – Twitter blocked engagements on tweets that link to Substack. Users could not like or retweet them, although they could still quote-retweet them.
Friday morning – Twitter applied the above measure to all tweets from the official Substack account, whether or not they included any Substack links.
Friday evening – Twitter marked links to Substack as unsafe, even though they are harmless (the links, not necessarily the content of the Substack posts).
Saturday – Searching for “Substack” on Twitter yields results for the word “newsletter” instead. This suggests that Twitter is attempting to limit users’ ability to learn more about the dispute. Even though people are tweeting about it, the subject is suppressed, and you won’t find it among the trending topics.
Twitter has made similar hostile (and petty) moves against rival platforms, such as Mastodon, during the great Twitter exodus that never was, even if those measures were later relaxed or removed.
Embedding tweets has become a big part of online writing, regardless of the platform, since many writers rely on them for context, authentication, and engagement, especially, but not exclusively, when working with user-generated content. Twitter benefits from this too, because the conversation can continue on that tweet, which boosts their traffic. Without the ability to embed tweets conveniently, it could be a lose-lose situation for Substack and Twitter.
Of course, taking screenshots of tweets, uploading them as images, and hyperlinking them to the original tweets could work too, but that is tedious.
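For readers curious about the mechanics: tweet embeds are conventionally generated through Twitter’s public oEmbed endpoint, which returns a JSON payload containing the ready-made `<blockquote>` HTML that platforms like Substack drop into a post. Below is a minimal sketch in Python; the `publish.twitter.com/oembed` endpoint is real, but the helper function name and the choice of parameters are illustrative, and whether the endpoint keeps working for any given platform is exactly what this week’s dispute puts in question.

```python
from urllib.parse import urlencode

# Twitter's public oEmbed endpoint for generating embeddable tweet HTML
PUBLISH_OEMBED = "https://publish.twitter.com/oembed"

def oembed_request_url(tweet_url: str, hide_thread: bool = True) -> str:
    """Build the oEmbed request URL for a tweet (illustrative helper).

    A GET request to the returned URL yields JSON whose "html" field
    contains the <blockquote> snippet a publishing platform inserts
    into a post to render the embedded tweet.
    """
    params = {"url": tweet_url, "omit_script": "false"}
    if hide_thread:
        params["hide_thread"] = "true"
    return f"{PUBLISH_OEMBED}?{urlencode(params)}"

# Example (builds the URL only; no network call is made here):
print(oembed_request_url("https://twitter.com/jack/status/20"))
```

A platform that loses access to this flow is left with exactly the screenshot-and-hyperlink workaround described above.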
And now, a selection of top stories on my radar, a few personal recommendations, and the chart of the week.
ChatGPT is making up fake Guardian articles. Here’s how we’re responding
Chris Moran, The Guardian’s head of editorial innovation:
Last month one of our journalists received an interesting email. A researcher had come across mention of a Guardian article, written by the journalist on a specific subject from a few years before. But the piece was proving elusive on our website and in search. Had the headline perhaps been changed since it was launched? Had it been removed intentionally from the website because of a problem we’d identified? Or had we been forced to take it down by the subject of the piece through legal means?
The reporter couldn’t remember writing the specific piece, but the headline certainly sounded like something they would have written. It was a subject they were identified with and had a record of covering. Worried that there may have been some mistake at our end, they asked colleagues to go back through our systems to track it down. Despite the detailed records we keep of all our content, and especially around deletions or legal issues, they could find no trace of its existence.
Why? Because it had never been written.
Artificial hallucination, or more specifically, as some have called it, hallu-citation.
How AI-generated content could both fuel disinformation and improve fact-checking
Borja Lozano and Irene Larraz for Poynter:
The list of concerns is long, but three points are of particular interest to us: people’s trust in such tools, the potential they have to misinform or encourage false narratives, and, conversely, how they can help to improve automated fact-checking.
Users’ blind trust in these models is a significant concern. The way ChatGPT generates text suggests that it is like a large database, when in fact it is a language system whose abilities rely on predicting very accurately the next word in a sentence to compose meaningful texts. Hence, the content it generates is not always true, even more so if the system has been trained with all the information stored on the internet, where examples of misinformation abound.
However, as the GPT-4 technical report points out, ChatGPT’s responses are so convincing that it “has the potential to cast doubt on the whole information environment, threatening our ability to distinguish fact from fiction.”
NPR says it won’t tweet from @NPR until Twitter removes false “state-affiliated” label
Sarah Scire for Nieman Lab:
NPR has not tweeted since Twitter slapped a “US state-affiliated media” label on its main account on Wednesday, a designation that lumps the news org in with propaganda outlets like Russian broadcaster RT and China’s People’s Daily newspaper. And it doesn’t plan to until the label is removed.
The @NPR account — which has more than 8.8 million followers — has an updated bio: “You can find us every other place you read the news.” The header image now includes the words: “Always free and independent. Always at NPR.org.”
An NPR reporter, Bobby Allyn, provided Musk with publicly available documents to show that government aid accounts for one per cent of NPR’s finances.
What I read, listen to, and watch…
I’m reading “Rubber, Race and Colonial Exploitation” by Loh Pei Ying and Heleena Panicker on Kontinentalist.
I’m listening to “Generating Creativity” on In Machines We Trust on MIT TR, hosted by Jennifer Strong.
I’m watching Rabbit Hole. The show uses buzzwords that would perk the ears of disinfo folks, such as “algorithms of control” that use online users’ personal data to predict and manipulate their behaviours, win elections and form whatever government you wish. Sounds familiar enough, this business of disinformation as a consultation service (cough, cough Cambridge Analytica). It seems far more conspiratorial than that, but that’s showbiz for you.
Reviews, opinion pieces, and other stray links:
Prenatal screening for autism is an ethical dilemma by Matthew Hutson for proto.life.
Weaving Indigenous and western ways of knowing can help Canada achieve its biodiversity goals by Lydia Johnson and Diane Orihel for The Conversation.
The Bitcoin whitepaper is hidden in every modern copy of macOS by Andy Baio on Waxy.
Chart of the week
Among US journalists, men are far more likely to cover sports while women are more likely to cover health and education, according to a Pew survey of 11,889 US-based journalists conducted in 2022. Emily Tomasik and Jeffrey Gottfried have the text version on Pew Research Center.
And one more thing
The third story on Ryan Broderick’s Garbage Day this week is on “fingindo podcast” (pretending podcast), viral clips from podcasts that don’t exist—a content growth hack popular in Brazil, but also elsewhere in the world. In an earlier post, Broderick posited that “the podcast mic during the time of COVID has evolved into a visual signal of importance, sort of like how during the era of peak TED Talk, a bunch of guys would film themselves on stages, add some inspirational music, and then post it to Facebook.”
In this piece, he said that content creators have “figured out that there are just too many podcasts to keep track of, they’re all essentially the same, they’re too long and hard to find, and no one actually cares what the shows are.” So, he argued, “why not just cut out the middleman?”
How are you enjoying Notes?