This week…
On The Conversation, Brock University’s Renata Roma said that your dog’s behaviours can impact your quality of life, for better or worse. And she’s right; my dog is unwell and is on medication that makes her urinate every two to three hours, which means no one is getting enough sleep in this household. One week down, two more weeks to go.
And now, a selection of top stories on my radar, a few personal recommendations, and the chart of the week.
Magic Editor in Google Photos: New AI editing features for reimagining your photos
Shimrit Ben-Yair for Google:
Since its launch in 2015, Google Photos has used AI to help you get the most out of your memories — from automatically organizing and resurfacing your photos to helping you edit them with advanced tools like Magic Eraser and Photo Unblur. Today at I/O, we gave a sneak peek of Magic Editor, a new experimental editing experience that uses generative AI to help you reimagine your photos and make editing even easier.
People are trying to claim real videos are deepfakes. The courts are not amused.
Shannon Bond for NPR:
Thanks to advances in artificial intelligence, it's easier than ever to create images and video of things that don’t exist, or events that never happened. That’s spurring warnings about digital fakery being used to spread propaganda and disinformation, impersonate celebrities and politicians, manipulate elections and scam people.
But the unleashing of powerful generative AI to the public is also raising concerns about another phenomenon: that as the technology becomes more prevalent, it will become easier to claim that anything is fake.
“That’s exactly what we were concerned about: that when we entered this age of deepfakes, anybody can deny reality,” said Hany Farid, a digital forensics expert and professor at the University of California, Berkeley. “That is the classic liar’s dividend.”
NewsGuard and Barometer unveil episode-level misinformation detection for podcasts
From NewsGuard’s press release:
NewsGuard and Barometer’s Misinformation Detection Solution for Podcasts works by integrating AI-powered data identification algorithms with NewsGuard’s Misinformation Fingerprints — its journalist-powered catalog of false narratives spreading online — to ensure users can see and act on potential misinformation as quickly as possible. By tapping into Barometer’s natural language processing to create risk profiles for podcasts based on the Global Advertisers for Responsible Media’s (GARM) Brand Safety Floor and Suitability Framework and Media roundtable values, the solution makes it easy for audio advertisers to plan values-driven ad buys, support transparency, and empower context-based advertising.
NewsGuard’s machine-readable “Fingerprints” each contain a description of the false narrative, a detailed debunk with factual information, citations to authoritative sources, associated keywords, and hashtags, examples of the false narrative spreading online, and other descriptive metadata. Barometer’s AI uses these inputs as “data seeds” in order to detect when misinformation narrative(s) are likely being discussed in a particular podcast episode.
A lot of big words. Still curious about its efficacy beyond English-language podcasts.
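Stripped of the jargon, the described pipeline sounds like transcripts being matched against the Fingerprints catalog, with each fingerprint's keywords acting as "data seeds." Here's a minimal sketch of that idea in Python; the `Fingerprint` record and the plain keyword-overlap score are my own hypothetical stand-ins, since neither company has published its internals:

```python
from dataclasses import dataclass

@dataclass
class Fingerprint:
    # Hypothetical stand-in for a NewsGuard "Misinformation Fingerprint":
    # a false narrative plus its associated seed keywords.
    narrative: str
    keywords: set[str]

def flag_episode(transcript: str, fingerprints: list[Fingerprint],
                 threshold: float = 0.5) -> list[str]:
    """Return the narratives whose seed keywords overlap the episode
    transcript at or above the threshold (fraction of seeds present)."""
    words = set(transcript.lower().split())
    flagged = []
    for fp in fingerprints:
        overlap = len(fp.keywords & words) / len(fp.keywords)
        if overlap >= threshold:
            flagged.append(fp.narrative)
    return flagged

fps = [Fingerprint("5G causes illness", {"5g", "towers", "radiation"})]
print(flag_episode("they say the 5g towers emit radiation that sickens people", fps))
```

A real system would presumably use semantic embeddings rather than literal keyword overlap, but even this toy version shows why efficacy beyond English is a fair question: the seeds and the transcript have to be in the same language for anything to match.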
What I’m reading, listening to, and watching…
I’m reading Mark Vanhoenacker’s essay on Aeon about “Aeroese”, or Aviation English, the language of pilots.
I’m listening to “How to Speak Bad English” on NPR’s Rough Translation. Host Gregory Warner talks “inside baseball” of American English and revisits an episode about the global pursuit of “good English.” He really stepped up to the plate to cover all the bases with this one.
I’m watching a video on Eurovision’s queer politics by Ada Černoša and Verity Ritchie. I learned about homonationalism in Eurovision — essentially, using queer equality as a justification for systemic xenophobia in the West. The argument goes that since homophobia is inherently a cultural problem in the East (but if there is any instance of homophobia in the West, that’s an isolated, individual case), in order to protect the poor, defenceless queer people of the West, stronger anti-immigration policies should be in place. (I don’t even watch Eurovision.)
Reviews, opinion pieces, and other stray links:
‘I didn’t see him show up’: Ex-Googlers blast ‘AI godfather’ Geoffrey Hinton’s silence on fired AI experts by Wilfred Chan for Fast Company.
Does generative AI contribute to more culturally inclusive higher education and research? by Dimitrinka Atanasova for LSE.
Fake scientific papers are alarmingly common by Jeffrey Brainard for Science.
Ambivalence in exile: After fleeing Pakistan, Ahmad Noorani attempts to regain his footing as a journalist by Mercy Tonnia Orengo for CJR.
J’ai décidé de politiser mon arrêt du cinéma (“I decided to politicize my departure from cinema”) ($) by Adèle Haenel for Télérama.
Chart of the week
Global life expectancy from 2001 to 2021 from Our World in Data shows a visible drop in the pandemic years. Physician Pedro Lérias elaborates with more charts to show the impacts of varying COVID policies on selected countries, such as Ecuador and China, and continents, such as Africa and Asia. A dynamic and screen reader-friendly version of the chart is available here.
And one more thing
Those pearl-opening live videos on social media? Scam. Total scam. I never bought into it, but I have always wondered how they pulled it off. (h/t @hoaxeye on Twitter.)