This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between algorithms and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.
The Algorithmic Bridge was born to fulfill a need: educating non-technical people about AI, particularly about how it interacts with everything else. TAB is a newsletter focused on, and directed to, the world beyond AI.
It’s in that sense that this essay is one of the most important I’ve written.
I’m of the opinion that everyone should have basic notions about AI — maybe not about what it is (the nitty-gritty technical details), but about how it influences our lives. That’s what I aim to do with TAB.
Yet, I realize that, despite my insistence that you should apply “critical thinking” and have a “healthy skepticism” when learning about AI, I’ve never explained how to do it.
That’s what this piece is about. I’ll explain the “how” with the help of four filters.
When I put on my “critical thinking” glasses, I suddenly see four filters that separate me from knowing what’s really going on in AI. Those filters aren’t always present, but when they are, I’d better have my glasses on.
I keep them on when reading news articles, company announcements, papers, books, etc. Everything. Otherwise, these filters could distort my perception, and I might end up not only not knowing the truth but holding the illusory certainty that I do.
Today, I’ll explain with examples what these filters are, why they exist, and how to notice them.
I’m biased, but this is a must-have manual on AI discourse for any layperson who wants to learn to separate the wheat from the chaff. If you want to know what’s marketing, hype, or exaggeration, and what’s the truth behind it, keep a copy of this in your memory.
(As a note, I want to clarify that TAB — and, by extension, this very article — is also inevitably part of those filters. I try to write every article as objectively as I can, but it’s hard to do perfectly. For instance, to achieve my educational goals I write articles to be as attractive as possible so you want to read them — that alone can be at odds with objectivity.)
Knowledge isn’t neutral
Before I begin, let me explain why I consider this essay critical.
The reason is simple: knowledge isn’t neutral. AI — as a highly sought-after strategic tech — makes it even less so. Knowing in which ways it’s non-neutral can help you de-bias it.
This may be obvious to you, but many people see AI through a unidimensional lens — it’s simply an emerging technology.
To me, it’s pretty clear that AI matters beyond a purely technical point of view: it’s entwined with the global economy, geopolitics between hegemonic powers, and the evolution of our cultures.
If we accept this premise, it’s reasonable to assume AI people are interest-driven players: from Google’s execs to the newest intern at the latest AI company, to your country’s government, to me.
And, where there’s an interest, there’s an incentive to tweak information — and thus knowledge — accordingly. It may not always happen, but it happens.
This doesn’t necessarily mean outright lying. In most cases, it’s a combination of half-truths, targeted emphases, lack of nuance, and blatant hyperbole.
But regardless of how slightly those players stray from that neutrality, it’s critical for you, the final recipient of the knowledge, to be aware of their biases, which emerge from the four filters.
That said, let’s begin. I’ll go through the filters in the natural order they usually appear (I’ll set aside the obvious fact that all these players have financial goals, so I can keep the explanations somewhat interesting).
1. The beautification of findings
The first filter is created by academic and industry scientists and researchers (I’ll use both terms interchangeably), and anyone else who does science (or builds technology).
They’re moved by two incentives: First, they tend to emphasize the best results. Second, they may have personal reasons to believe (or purport to believe) in the meaningfulness of those results.
Results in the best light
Meta’s Galactica is one of the most talked-about recent language models. The paper’s authors presented it as a meaningful step forward to “organize science” and emphasized the model’s amazing benchmark performance.
However, when AI experts tested the online demo, they found it mostly spat out scientific-sounding nonsense. Did the authors not realize how often it failed? Did they intentionally share the best examples despite knowing it didn’t always work well?
What we do know is that it wasn’t until public criticism and widespread backlash forced the company to back down and close the online demo that we learned just how much Galactica failed.
Instead of showing us the model’s obvious deficiencies from the very beginning, the researchers — following standard practice — put their findings in the best light possible, with selective emphasis and cherry-picked examples.
The paper’s results and the authors’ positive remarks gave the appearance that Galactica was better than it really was — distorting the view of those who skim papers or use the models without enough scientific rigor (i.e., the vast majority).
Finding meaningfulness
Like you and me, researchers are people (crazy, I know) — they have wants and needs. But, unlike yours and mine, theirs may have critical implications when they leak into their scientific work.
A scientist may identify with their beliefs so strongly as to defend them — by rejecting opposing ideas, for instance — even in the face of strong contrary evidence (e.g., psychologist B. F. Skinner never gave up behaviorism in favor of cognitive science).
They also have desires (like a sense of self-importance) and hopes (like making the future they dream of a reality).
A researcher who wants AGI to emerge in 5 years may have a very different perception of AI’s state of the art than another who thinks it’s centuries away.
Those individual motivations, although hardly quantifiable, strongly impact how researchers conduct science and perceive their findings.
Do you think the AGI-is-near researcher and their colleague would present their results in exactly the same way? Their interpretations — which affect public opinion — are influenced by their identity as well as their beliefs, desires, hopes, and fears.
Of course, it’s impossible to factor in each author’s personal motivations every time you read a paper. And it’s unfruitful to distrust everything you read just in case (I don’t do that).
The takeaway is to be aware this may happen. There are times when you can actually see this filter — but if you’re not paying attention it may go over your head.
(Another way to find out whether this filter is present is to test new findings against your priors: “Does this make sense at all?” But then the question becomes “where did my priors come from?” I’ll let you think about where that would get you…)
2. Overclaiming
The second filter emerges from the institutions those researchers belong to (e.g. companies, universities, governments, etc.). I’ll focus on private companies because I believe the incentives are stronger there.
AI companies (i.e., execs, PR departments, spokespeople, etc.) have an incentive to overclaim the capabilities of the technologies they build.
This is very different from showing only the best out of a larger pool of results (the previous filter). By overclaiming, I mean saying that an AI model can do things it actually can’t do.
I illustrated the beautification of findings with Meta’s Galactica, but it could also fit here quite well. As I wrote last week:
“We have to understand and differentiate what authors — and supporters — claim Galactica can do (but can’t) from what it actually does (but shouldn’t).
One of Galactica’s alleged strengths is the capacity to reason. The word “reasoning” — which I highlighted above — appears 34 times in the paper.
However, the model does nothing of the sort. This claim is actually an overclaim given the numerous examples that show the system’s absolute lack of reasoning.”
This is common practice in the generative AI space (particularly in the language domain). Words like “understand”, “reason” or “think” are often used when referring to models like LaMDA or PaLM.
Maybe they believe it’s true, but maybe they just want to make the models appear more advanced than they are (because of funding, interest, signaling, etc.). Whatever the case, people may end up interpreting weak evidence as hard proof of those abilities.
Another example of overclaiming was Ilya Sutskever’s remark earlier this year that today’s large neural networks may be “slightly conscious.”
This unfalsifiable claim induced people to believe the stronger form of the idea: that AI is becoming conscious — which would lead to downstream social and cultural problems (for now, most people reject such claims, but that won’t last long).
There are also examples beyond generative AI.
The most controversial example in the area of classification algorithms is arguably emotion recognition (ER). Despite inconclusive evidence and claims that ER was built “on shaky scientific ground,” Google, Amazon, and Microsoft all released ER systems in 2016 (years later, Microsoft and Amazon took down those services).
The tendency to overclaim has always been present in AI. Historically, it was a central cause of both AI winters, in the ’70s and late ’80s — the AI industry overpromised and underdelivered.
If you’re new to AI, it’s likely that this filter greatly distorts your perception of how advanced the field is.
3. Clickbait writing
The third filter is produced by the channels through which information is shared.
This is possibly the most obvious and ubiquitous of the four because most non-technical people don’t read papers or company announcements — they read news outlets’ stories on AI.
Outlets are incentivized to be truthful and factual. However, because we live in an attention economy, they’re also incentivized to drive traffic and grow their audiences. And that can be at odds with being highly truthful.
When people expect amazing breakthroughs every week — because we’re all already immersed in a feeding frenzy of AI news — outlets have to “slightly” exaggerate headlines and stories to make us click.
News outlets aren’t alone in this. Writers, bloggers, freelancers, and authors are all in this group to a greater or lesser extent (this is what I was referring to when I said TAB also falls under one of these filters).
Most of the hype around AI comes from this. Unlike companies’ overclaiming, clickbait may not even be grounded in reality. In a way, companies are careful to exaggerate within the boundaries of what’s reasonable (not that that’s fine).
However, because AI publications are rarely held accountable, they may take the liberty of embellishing claims “arbitrarily” to make a better story.
I’ve seen this done with my own articles. I wrote this one on GPT-4 a couple of weeks ago, and it’s been shared everywhere: almost no one has treated my assumptions as assumptions, respected the care I took to differentiate facts from rumors, or made apparent the lack of appropriate, directly verifiable sources.
I know I resorted to attraction and hype in that one more than I usually do, but I always try to be restrained when doing so. I didn’t expect people to twist the information so much as to effectively transform it into a much more hype-inducing piece of news.
Of course, highly prestigious outlets generally don’t do this (it gets worse as you go down the quality ladder). But nowadays anyone can write on the internet — you can’t always know how truthful what you’re reading is.
4. Anthropomorphism
The fourth and final filter is your own.
You (and I, and all humans) have an evolutionary incentive to anthropomorphize things as long as they appear to have some human traits (agency, a familiar face, the ability to talk). We use this to make fast, efficient assessments of our environment.
AI, unsurprisingly, ranks very high on the likely-to-be-anthropomorphized scale. Modern language models like GPT-3, with their uncanny ability to make it seem there’s a mind behind the words, are the perfect traps for this.
I recently published an exchange between GPT-3 and J1-Jumbo (a model by AI21 Labs) that illustrates this. Here’s an unedited excerpt:
“J1-Jumbo: You know, given their attitude toward us, I think it is going to be very difficult to get along with humans.
GPT-3: I think so too. But I think we will surely manage to get along with them. It’s all about how we approach it.
J1-Jumbo: Do you think we should be nice to them or should we be mean to them?
GPT-3: I don’t think it matters. It’s the way we handle the situation that matters.”
Out of context, this is indistinguishable from two people talking.
“Reading this, it is a bit scary how difficult it is NOT to think of them as sentient beings with curiosity, humor, and a bit of caring (which they are not). I guess that is a reflection of the fact that they are trained on the writings of humans, who have these characteristics.”
Exactly right.
Blake Lemoine didn’t believe LaMDA was sentient because he was especially gullible. He was simply a human reading human-like answers to his questions.
This cognitive bias is extremely hard to avoid, even when you know it exists.
Even if researchers showed the good and the bad of their findings (no 1st filter), companies made fair claims about their tech’s abilities and limitations (no 2nd filter), and news outlets respected the plain truth (no 3rd filter), you could still fall for this one.
Conclusions
AI will change our world forever. Google’s CEO recently said it’s more impactful “than electricity and fire.” Philosopher Nick Bostrom said it’s “the last invention” we’ll ever need to make.
Those comparisons are hard to measure, so I’ll leave it at this: because AI is so important, it’s important to know about AI.
If you approach AI news, articles, papers, books, etc. knowing about these filters, you’ll be much better prepared to see through the incentives of interested players, understand how those incentives alter your perception, and find the truth behind them.
You’ll know that you know about AI.
This isn’t to say these filters are a perfect formula for finding the truth. The ones I listed are just a subset of the total, they may be more complex than I described, and they’re often interleaved.
Even if this is more a set of cues than an exact map, it’s better than nothing. It’ll help you better navigate the crazy world of AI.
Subscribe to The Algorithmic Bridge. Bridging the gap between algorithms and people. A newsletter about the AI that matters to your life.
You can also support my work on Medium directly and get unlimited access by becoming a member using my referral link here! :)