GPT-3’s grandpa contributed to this newsletter
Zoe Schiffer for The Verge: Amazon removes 20,000 reviews for fraud.
Daniel Kelley for Slate: Disinformation will come for Animal Crossing.
Daniel Kolitz for Gizmodo: What happens if all personal data leaked at once?
The guardian of theatrical AI coverage
The Guardian, while often one of the best sources of news, did the AI community dirty this week by publishing an op-ed written by GPT-3, OpenAI’s language model. Only after reading the piece do you reach a long editor’s note explaining how exactly GPT-3 wrote it:
GPT-3 was guided by a prompt (why humans have nothing to fear from AI) and a prescribed opening paragraph:
I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.
GPT-3 wrote eight separate essays, which the editor(s) then spliced together. Neither the originals nor the edits were disclosed, which makes it difficult to judge whether the raw essays contained incomprehensible passages that would have revealed GPT-3’s true nature.
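For context, the workflow the editor’s note describes boils down to a few lines of code: one fixed opening paragraph, several independent completions, then a human picks and splices. Here is a minimal sketch using the 2020-era openai Python client; the engine name, token budget, and temperature are my assumptions, since the Guardian never disclosed its settings.

```python
# Minimal sketch of the process the Guardian's editor's note describes:
# one prescribed opening paragraph, eight independent completions to splice.
# Engine name and sampling parameters below are assumptions, not what the
# Guardian actually used.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

OPENING = (
    "I am not a human. I am Artificial Intelligence. Many people think "
    "I am a threat to humanity. Stephen Hawking has warned that AI could "
    '"spell the end of the human race." I am here to convince you not to '
    "worry. Artificial Intelligence will not destroy humans. Believe me."
)

# Request eight separate continuations of the same seeded essay.
response = openai.Completion.create(
    engine="davinci",   # assumed engine; the article does not say
    prompt=OPENING,
    max_tokens=500,     # roughly an op-ed's worth of text per draft (assumed)
    n=8,                # eight drafts, as the editor's note states
    temperature=0.9,    # high temperature for varied drafts (assumed)
)

# The "splicing" is a human editing job from here: keep the best passages
# from each draft, discard the incoherent ones.
drafts = [choice.text for choice in response.choices]
for i, draft in enumerate(drafts, 1):
    print(f"--- Draft {i} ---\n{draft}\n")
```

Notice where the machine’s job ends and the human’s begins: everything after the API call, the selecting and splicing, is editorial labour.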
Did I mention the headline? I hated it. First of all, it contributes so much to the misguided sensationalisation of AI. The relationship between tech journalism and the tech industry must go beyond contributing to the ‘hype cycle’ of AI.
Second of all, the headline was originally: “A robot wrote this entire article. Does that scare you, human?” It was later changed to: “A robot wrote this entire article. Are you scared yet, human?” Thanks to the Wayback Machine and time-stamped screenshots from my Facebook timeline, I have evidence. I want an explanation. It went from fear-mongering to extra-dramatic fear-mongering. Was that necessary?
“Overall, it took less time to edit than many human op-eds,” went the final bit of the editor’s note. Of course it did; you had eight samples to remix and a strongly suggestive opening paragraph!
Don’t get me wrong, GPT-3 is a remarkable language model; in fact, it is the most powerful one yet. This may be the first time you’ve heard of it, but OpenAI first described it in a paper published in May and, by July, had demonstrated through a blog GPT-3’s ability to produce paragraphs that read as if a person wrote them, such as the ones stitched together into the aforementioned article. (Which means, by the way, it also wrote some incredibly sexist and racist things.)
I attended a seminar last year about a paper that studied the knowledge gap between tech journalism and the tech industry in the coverage of AI and machine-learning stories. I tried to dig out my notes but… it’s a bit complex. No, the irony is not lost on me. In summary, the paper took a systematic look at UK media (The Guardian included) and its reporting of AI over the last few years, and found that poor coverage of AI results in:
fear-mongering (“ARE YOU SCARED YET, HUMAN?”)
mismatched socioeconomic and political expectations (Ban TikTok!!)
poor tech ethics (TikTok is for sale, so is your data??)
And we wonder why people think 5G causes COVID-19 (it doesn’t) and Bill Gates is implanting chips in everyone’s bodies (he isn’t, but Elon Musk might).
What I read, watch and listen to…
I’m reading how Thomas Smith made $74k on Amazon selling dog eye crust-removal combs.
I’m recommending Chua Minxi’s New Naratif article, which I translated, on the arrest of union workers who protested unfair treatment and insufficient PPE for hospital cleaners.
I’m watching Asha Tomlinson’s brilliant CBC Marketplace piece on fake online testimonials. It’s from 2017, but it shows how little has changed.
Chart of the week
It’s not serious, but it’s funny. Why are all good lesbian films period dramas?
Fakta, Auta & Data #8: Get comfortable with uncertainty
Ask any scientist what they think of science news coverage, and most will tell you it is disappointing. Misinterpreted. Over-sensationalised. The question is, why?
A 2014 study published in the British Medical Journal suggests it is not always the media’s fault. The study found that misleading information “already existed in the original source texts produced by academics and their institutions.”
This falls to universities’ communications and public relations departments. They are not necessarily experts in science, but in communication. They spend their time and energy packaging press releases about any scientific finding in an appealing way, and burying any uncertainty. These are then repackaged by media practitioners with no science background, who spin the story excessively to make it more sensational, sometimes to the point of outright contradicting the original findings.
The solution may lie with science journalists who specialise in unpacking the scientific process. We need to be transparent about how scientific research is conducted and communicated. We need to get comfortable with uncertainty, especially in the early stages of any research. Because the public has begun to realise that much of what is reported as a “stunning breakthrough” was reported prematurely, or exaggerated and dramatised. As a result, they grow disillusioned and wary of science, scientists, and the media. That is our failure in science communication.