Generative AI, which is based on Large Language Models (LLMs) and transformer neural networks, has certainly created a lot of buzz. Unlike hype cycles around new technologies such as the metaverse, crypto and Web3, generative AI tools such as Stable Diffusion and ChatGPT are poised to have tremendous, possibly revolutionary impacts. These tools are already disrupting multiple fields — including the film industry — and are a potential game-changer for enterprise software.
All of this led Ben Thompson to declare in his Stratechery newsletter that generative AI advances mark “a new epoch in technology.”
Even so, in a broad sense, it is still early for AI. On a subsequent Plain English podcast, Thompson said that AI is “still in the first inning.” Rex Woodbury in his Digital Native newsletter concurred: “We’re still in the early innings of AI applications, and every year leaps are being made.” A New York Times story stated that this has led to a new “AI arms race.” More companies are expected to enter this race “in the coming weeks and months.”
A foreshock to the AI singularity
With the generative AI era now duly anointed, what might be the next leap or next epoch, and when might it occur? It would be comforting to think that we will all have sufficient time to adjust to the changes coming with generative AI. However, much as a foreshock can presage a large earthquake, this new epoch could be the precursor to an even larger event: the coming AI singularity.
The term “AI singularity” refers to two related ideas. The first defines the singularity as the point when AI surpasses human intelligence, leading to rapid and exponential advancements in technology. The second is the belief that the technology will be able to improve itself at an accelerating rate, reaching a point where technological progress becomes so fast that it exceeds the human ability to understand or predict it.
The first concept sounds exciting and full of promise — from developing cures for previously incurable diseases to solving nuclear fusion leading to cheap and unlimited energy — while the latter conjures frightening Skynet-like concerns.
Even Sam Altman — CEO of OpenAI, the developer of ChatGPT and DALL-E 2, and a leading proponent of generative AI — has expressed concern. He said recently that a worst-case scenario for AI “is, like, lights out for all of us.” He added that it is “impossible to overstate the importance of AI safety and alignment work.”
When will the singularity arrive?
Expert predictions for when the singularity will arrive vary considerably; the most aggressive say it will come very soon, while others say it will be reached sometime in the next century, if at all. Among the most quoted and more credible is futurist Ray Kurzweil, presently director of engineering at Google, who famously predicted the arrival of the singularity in 2045 in his 2005 book The Singularity is Near.
Deep learning expert François Chollet similarly notes that predictions of the singularity are always 30 to 35 years away.
Nevertheless, it is increasingly looking as if Vernor Vinge’s prediction will prove closest. He coined the term in a 1993 essay with an attention-grabbing statement: “We are on the edge of change comparable to the rise of human life on earth.” In that essay, he predicted that superhuman intelligence would be created within 30 years.
Translated, an Italian language-translation startup, recently asserted that the singularity will arrive at the moment when AI provides “a perfect translation.” According to CEO Marco Trombetti: “Language is the most natural thing for humans.” He adds that language translation “remains one of the most complex and difficult problems for a machine to perform at the level of a human” and is therefore a good proxy test for determining the arrival of the singularity.
To assess this, the company uses Matecat, an open-source computer-assisted translation (CAT) tool. Since 2011, it has tracked improvements using Time to Edit (TTE), a metric in the tool that measures how long professional human editors take to fix machine-generated translations compared with human-generated ones.
Over the last 11 years, the company has seen strongly linear performance gains. Extrapolating that trend, it estimates that machine translation will match the quality of human translation by the end of this decade, and at that point, it believes, the singularity will have arrived.
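For readers curious how such a projection works, here is a minimal sketch of that kind of linear extrapolation: fit a line to yearly TTE measurements and solve for the year at which editing machine output takes no longer than editing human output. The figures in it are illustrative placeholders, not Translated’s actual data.

```python
# A rough sketch of the linear extrapolation described above: fit a straight
# line to yearly Time to Edit (TTE) measurements, then solve for the year at
# which editing machine output would take no longer than editing human output.
# All numbers below are hypothetical placeholders, not the company's data.
import numpy as np

years = np.array([2011, 2014, 2017, 2020, 2022], dtype=float)  # hypothetical observation years
tte_machine = np.array([3.5, 3.1, 2.7, 2.3, 2.1])              # hypothetical editing time per word (seconds)
tte_human = 1.0                                                # hypothetical baseline for human translations

# Least-squares fit: tte ≈ slope * year + intercept
slope, intercept = np.polyfit(years, tte_machine, 1)

# Year at which the fitted line crosses the human baseline ("parity")
parity_year = (tte_human - intercept) / slope
print(f"Projected parity year: {parity_year:.0f}")
```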
How will we know when the singularity arrives?
Of course, TTE is only one metric and may not by itself indicate a seminal moment. As described in a Popular Mechanics article, “it’s enormously difficult to predict where the singularity begins.”
It may be difficult to pinpoint, at least in the moment. There likely will not be a single day on which any one metric is achieved. The impact of AI will continue to increase, with the inevitable peaks and valleys of progress, and with every advance, the range of tasks AI can accomplish will expand.
There are many signs of this already, including DeepMind’s AlphaFold, which has predicted the structure of virtually every known protein and could lead to radical improvements in drug development.
And Meta recently unveiled “Cicero,” an AI system that bested people at Diplomacy, a strategic war game. Unlike other games that AI has mastered, such as chess and Go, Diplomacy is collaborative and competitive at the same time. As reported by Gizmodo, “to ‘win’ at Diplomacy, one needs to both understand the rules of the game efficiently [and] fundamentally understand human interactions, deceptions, and cooperation.”
Whisper emerged late last year to finally produce fast and reliable voice-to-text transcriptions of conversations; according to The New Yorker, decades of work led to this. Released as open-source code by OpenAI, it is free, runs on a laptop and, according to the reviewer, is far better than anything that came before.
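For a sense of how accessible this is, here is a minimal sketch of transcribing audio locally with Whisper. It assumes the open-source openai-whisper Python package (and its ffmpeg dependency) is installed; “meeting.mp3” is a placeholder file path, not a file referenced in this article.

```python
# A minimal sketch of running OpenAI's open-source Whisper model locally.
# Assumes `pip install openai-whisper` and ffmpeg are available;
# "meeting.mp3" is a placeholder path to an audio file on disk.
import whisper

model = whisper.load_model("base")        # a small model that runs comfortably on a laptop
result = model.transcribe("meeting.mp3")  # returns a dict with the full transcript and timed segments
print(result["text"])                     # the voice-to-text transcription
```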
What might be the impact?
Identifying the arrival of the singularity is made more difficult because there is no widely accepted definition of intelligence, which makes it problematic to know exactly when AI becomes more intelligent than humans. What can be said is that the capabilities of AI continue to advance, at what feels like a breakneck pace.
Even if AI has not yet reached — and may never reach — the singularity, the list of its accomplishments continues to expand. The impacts of this, both for good and ill, will likewise expand. One day, possibly within the next couple of decades, there could be a ChatGPT-like moment when the world shakes again, even more than it has with generative AI. With that “big one,” the singularity will be understood to have arrived.
It is good to keep in mind what computer scientist and University of Washington professor Pedro Domingos said in his book The Master Algorithm: “Humans are not a dying twig on the tree of life. On the contrary, we are about to start branching. In the same way that culture coevolved with larger brains, we will co-evolve with our creations. We always have: Humans would be physically different if we had not invented fire or spears. We are Homo technicus as much as Homo sapiens.”
Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.