The 157th Block: Press freedom and acceptable blurring and pixellating
AI-generated human rights communication is not the answer
This week…
Your reading time is about 8 minutes. Let’s start.
World Press Freedom Day on May 3 came and went. Many stories to highlight, but I’ll choose just this one because it’s on Twitter, and we still can’t embed that:
E. Quinn Libson wishes journalism awards would “reward the hard work of local outlets, rather than big international outlets.” For example, she said, in Cambodia, VOD English was reporting “tirelessly on scam compounds long before international outlets took any kind of interest” and “on shoestring budgets, with half the social media reach, and double the personal risk.” The Human Rights Press Award recently went to a Nikkei Asia piece on online fraud.
Whatever you think about news and journalism as a whole, anyone in the news media who bills themselves as an award-winning journalist/producer/show/etc. should be taken a little less seriously. I don’t think this is an industry that needs awards because:
It can produce an unhealthy focus on recognition rather than the pursuit of truth and informing the public.
It can create pressure to produce work that fits certain criteria or narratives rather than being truly independent and objective.
It can promote a bias towards certain types of journalism, such as investigative and enterprise journalism, sidelining other important forms of reporting.
It can generate an individualistic and competitive culture that does not reflect the collaborative and collective nature of journalism.
That list was 100% ChatGPT-generated. The only output I wanted was: It’s self-wanky.
Anyway, here’s a selection of top stories on my radar, a few personal recommendations, and the chart of the week.
Rise of the newsbots: AI-generated news websites proliferating online
McKenzie Sadeghi and Lorenzo Arvanitis for NewsGuard:
In April 2023, NewsGuard identified 49 websites spanning seven languages — Chinese, Czech, English, French, Portuguese, Tagalog, and Thai — that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication — here in the form of what appear to be typical news websites.
The websites, which often fail to disclose ownership or control, produce a high volume of content related to a variety of topics, including politics, health, entertainment, finance, and technology. Some publish hundreds of articles a day. Some of the content advances false narratives. Nearly all of the content features bland language and repetitive phrases, hallmarks of artificial intelligence.
The AI-generated headline that appeared on TNewsNetwork.com, an anonymously-run news site that was registered in February 2023, in the April 17 screenshot above reads, “I’m sorry for the confusion, as an AI language model I don’t have access to external information or news updates beyond my knowledge cutoff date. However, based on the given article title, an eye-catching news headline could be:”
AI journalism is getting harder to tell from the old-fashioned, human-generated kind
Ian Tucker for The Guardian:
A couple of weeks ago I tweeted a call-out for freelance journalists to pitch me feature ideas for the science and technology section of the Observer’s New Review. Unsurprisingly, given headlines, fears and interest in LLM (large language model) chatbots such as ChatGPT, many of the suggestions that flooded in focused on artificial intelligence – including a pitch about how it is being employed to predict deforestation in the Amazon.
One submission however, from an engineering student who had posted a couple of articles on Medium, seemed to be riding the artificial intelligence wave with more chutzpah. He offered three feature ideas — pitches on innovative agriculture, data storage and the therapeutic potential of VR. While coherent, the pitches had a bland authority about them, repetitive paragraph structure, and featured upbeat endings, which if you’ve been toying with ChatGPT or reading about Google chatbot Bard’s latest mishaps, are hints of chatbot-generated content.
I showed them to a colleague. “They feel synthetic,” he said. Another described them as having the tone of a “life insurance policy document”. Were our suspicions correct?
TL;DR: You bet.
Amnesty International criticised for using AI-generated images
Luke Taylor for The Guardian:
But photojournalists and media scholars warned that the use of AI-generated images could undermine Amnesty’s own work and feed conspiracy theories.
“We are living in a highly polarised era full of fake news, which makes people question the credibility of the media. And as we know, artificial intelligence lies. What sort of credibility do you have when you start publishing images created by artificial intelligence?” said Juancho Torres, a photojournalist based in Bogotá.
What happened to good ol’ fashioned blurring and pixellating?
Generative AI is forcing people to rethink what it means to be authentic
Victor R. Lee for The Conversation:
But what actually makes something feel authentic?
Psychologist George Newman has explored this question in a series of studies. He found that there are three major dimensions of authenticity.
One of those is historical authenticity, or whether an object is truly from the time, place and person someone claims it to be. An actual painting made by Rembrandt would have historical authenticity; a modern forgery would not.
A second dimension of authenticity is the kind that plays out when, say, a restaurant in Japan offers exceptional and authentic Neapolitan pizza. Their pizza was not made in Naples or imported from Italy. The chef who prepared it may not have a drop of Italian blood in their veins. But the ingredients, appearance and taste may match really well with what tourists would expect to find at a great restaurant in Naples. Newman calls that categorical authenticity.
And finally, there is the authenticity that comes from our values and beliefs. This is the kind that many voters find wanting in politicians and elected leaders who say one thing but do another. It is what admissions officers look for in college essays.
Read on as Lee ponders how to deal with the “looming authenticity crisis.”
Brazil pushes back on big tech firms’ campaign against ‘fake news law’
Anthony Boadle for Reuters:
Bill 2630, also known as the Fake News Law, puts the onus on the Internet companies, search engines and social messaging services to find and report illegal material, instead of leaving it to the courts, charging hefty fines for failures to do so.
Tech firms have been campaigning against the bill, including Google LLC which had added a link on its search engine in Brazil connecting to blogs against the bill and asking users to lobby their representatives.
Justice Minister Flavio Dino ordered Google to change the link on Tuesday, saying the company had two hours after notification to comply or would face fines of one million reais ($198,000) per hour.
“What is this? An editorial? This is not a media or an advertising company,” the minister told a news conference, calling Google’s link disguised and misleading advertising for the company’s stance against the law.
The U.S. company promptly pulled the link, though Google defended its right to communicate its concerns through “marketing campaigns” on its platforms and denied altering search results to favor material contrary to the bill.
What does fake news mean? Tell me, ChatGPT.
What I read, listen to, and watch…
I’m reading how CBC treats its temporary workers—and how some are fighting back by Aloysius Wong for Review of Journalism.
I’m listening to “Havana Syndrome and the Power of Mainstream, Acceptable Conspiracy Theories” on Citations Needed with Jacobin writer Branko Marcetic, hosted by Nima Shirazi and Adam Johnson.
I’m watching “The Witch Trials of J.K. Rowling” by Natalie Wynn. Finally got ‘round to it. As always, compelling reasoning.
Reviews, opinion pieces, and other stray links:
With the rise of AI-generated propaganda, journalism is more important than ever by Rachel Pulfer for The Globe and Mail.
Google shared AI knowledge with the world — until ChatGPT caught up by Nitasha Tiku and Gerrit De Vynck for WaPo.
Andreessen Horowitz saw the future — but did the future leave it behind? by Elizabeth Lopatto for The Verge.
From RPMHTFVVC to FNQLHSSC: What’s behind Quebec’s love of long abbreviations? by Alex Leduc for CBC.
The exceptional life of Lola Hoffmann, the therapist accused of causing divorces among her patients (in Spanish) by Juan Cristóbal Villalobos for El País.
Chart of the week
Hanna Duggal and Marium Ali mapped the state of press freedom in the world for Al-Jazeera:
And one more thing
Kate Lindsay on Embedded: “Instagram, TikTok, Twitter, and Substack will forever own the parts of my personality that I’ve handed over to them, as do the platforms from my past: Facebook still claims the extroverted college student; Tumblr, the angsty, artsy teen; MySpace, the confused and flailing twelve year old who shouldn’t have been there in the first place. But it’s become harder and harder to harvest myself over the years—to find something new to offer up to the Next Big Thing.”