IN THIS RESEARCH
Artificial intelligence scares and intrigues us. Almost every week there's a new AI scare in the news, such as developers afraid of what they've created or companies shutting down bots because they got too intelligent. Most of these AI myths result from research misinterpreted by those outside the field. For the fundamentals of AI, feel free to read our comprehensive AI article.
The greatest fear about AI is the singularity (also called Artificial General Intelligence), a system capable of human-level thinking. According to some experts, the singularity also implies machine consciousness. Regardless of whether it is conscious or not, such a machine could continuously improve itself and reach far beyond our capabilities. Even before artificial intelligence was a computer science research topic, science fiction writers like Asimov were concerned about this and devised mechanisms (i.e. Asimov's Laws of Robotics) to ensure the benevolence of intelligent machines.
For those who came to get quick answers:
- Will singularity ever happen? According to most AI experts, yes.
- When will the singularity happen? Before the end of the century.
The more nuanced answers are below. There have been several surveys of AI researchers asking when such developments will take place.
Understand the results of major surveys of AI researchers in 2 minutes
We looked at the results of 5 surveys with around 1700 participants where researchers estimated when singularity would happen. In all cases, the majority of participants expected AI singularity before 2060.
Source: Survey distributed to attendees of the Artificial General Intelligence 2009 (AGI-09) conference
In 2009, 21 AI experts participating in the AGI-09 conference were surveyed. Experts believed AGI would occur around 2050, and plausibly sooner. You can see above their estimates regarding specific AI achievements: passing the Turing test, passing third grade, accomplishing Nobel-worthy scientific breakthroughs and achieving superhuman intelligence.
In 2012/2013, Vincent C. Müller, the president of the European Association for Cognitive Systems, and Nick Bostrom from the University of Oxford, who has published over 200 articles on superintelligence and artificial general intelligence (AGI), conducted a survey of AI researchers. 550 participants answered the question: "When is AGI likely to happen?" The answers were distributed as follows:
- 10% of participants thought AGI is likely to happen by 2022
- 50% thought it is likely by 2040
- 90% thought it is likely by 2075
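These three percentiles sketch a rough cumulative distribution over arrival dates. As an illustration only, a minimal Python sketch can linearly interpolate between the reported percentiles to estimate the implied probability of AGI by an intermediate year; the linear interpolation scheme (and the function name `agi_probability`) is our assumption, not part of the survey itself.

```python
# Illustrative sketch: interpolate the cumulative probability of AGI arrival
# between the percentiles reported in the Muller & Bostrom survey.
# Linear interpolation is an assumption for illustration, not survey methodology.

def agi_probability(year, percentiles=((2022, 0.10), (2040, 0.50), (2075, 0.90))):
    """Estimate P(AGI by `year`) by linear interpolation between survey percentiles."""
    if year <= percentiles[0][0]:
        return percentiles[0][1]
    if year >= percentiles[-1][0]:
        return percentiles[-1][1]
    for (y0, p0), (y1, p1) in zip(percentiles, percentiles[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(round(agi_probability(2050), 3))  # -> 0.614
```

Under this reading, the survey respondents collectively assign a bit over 60% probability to AGI arriving by mid-century.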
In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based on the results, experts estimate there's a 50% chance that AGI will occur by 2060. However, there's a significant difference of opinion based on geography: Asian respondents expect AGI in 30 years, whereas North Americans expect it in 74 years. Some significant job functions expected to be automated by 2030 include call center reps, truck driving and retail sales.
In 2019, 32 AI experts participated in a survey on AGI timing:
- 45% of respondents predicted a date before 2060
- 34% predicted a date after 2060
- 21% predicted that the singularity will never occur
In the 2022 Expert Survey on Progress in AI, conducted with 738 experts who published at the 2021 NeurIPS and ICML conferences, AI experts estimated there's a 50% chance that high-level machine intelligence will occur by 2059.
AI entrepreneurs are also estimating when we will reach the singularity, and they are a bit more optimistic than researchers:
- Patrick Winston, MIT professor and director of the MIT Artificial Intelligence Laboratory from 1972 to 1997: he suggested 2040, while stressing that although it will happen, the date is very hard to estimate.
- Ray Kurzweil, computer scientist, entrepreneur and author of 5 national best sellers including The Singularity Is Near: 2045
- Jürgen Schmidhuber, co-founder at AI company NNAISENSE and director of the Swiss AI lab IDSIA: ~2050
Keep in mind that AI researchers were over-optimistic before
Examples include:
- AI pioneer Herbert A. Simon in 1965: “machines will be capable, within twenty years, of doing any work a man can do.”
- Japan's Fifth Generation Computer project, launched in 1982, had a ten-year timeline with goals like "carrying on casual conversations"
This historical experience is part of why most current scientists shy away from predicting AGI in bold time frames like 10-20 years. However, being more conservative now doesn't mean they are right this time around.
Understand why reaching AGI seems inevitable to most experts
These may seem like wild predictions, but they seem quite reasonable when you consider these facts:
- Human intelligence is fixed unless we somehow merge our cognitive capabilities with machines. Elon Musk's brain-computer interface startup Neuralink aims to do this, but research on neural laces is still in its early stages.
- Machine intelligence depends on algorithms, processing power and memory. Processing power and memory have been growing at an exponential rate, and so far we have been good at supplying machines with the algorithms they need to use that processing power and memory effectively.
Considering that our intelligence is fixed and machine intelligence is growing, it is only a matter of time before machines surpass us unless there’s some hard limit to their intelligence. We haven’t encountered such a limit yet.
This is a good analogy for understanding exponential growth: while machines can seem dumb right now, they can grow quite smart, quite soon.
Source: Mother Jones
If classical computing slows its growth, quantum computing could complement it
Classical computing has taken us quite far. AI algorithms on classical computers can exceed human performance in specific tasks like playing chess or Go. For example, AlphaGo Zero beat AlphaGo 100-0, and AlphaGo had beaten the best players on earth. However, we are approaching the limits of how fast classical computers can get.
Moore's law, based on the observation that the number of transistors in a dense integrated circuit doubles about every two years, implies that the cost of computing halves approximately every two years. However, most experts believe that Moore's law will come to an end during this decade. Though there are efforts to keep improving application performance, it will be challenging to sustain the same rates of growth.
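The scaling rule above is simple compound growth. As a sketch, assuming the two-year doubling/halving period as a rule of thumb (the function names and parameters are our own, purely for illustration):

```python
# Sketch of Moore's-law-style scaling. The two-year period is the rule of
# thumb cited in the article, not a precise physical law.

def relative_cost(years, halving_period=2.0):
    """Cost of a fixed amount of computation after `years`, relative to today."""
    return 0.5 ** (years / halving_period)

def relative_transistors(years, doubling_period=2.0):
    """Transistor count on a comparable chip after `years`, relative to today."""
    return 2.0 ** (years / doubling_period)

print(relative_cost(10))         # -> 0.03125 (about 3% of today's cost)
print(relative_transistors(10))  # -> 32.0 (32x today's count)
```

A decade of such growth cuts costs roughly 30-fold, which is why even a modest slowdown in the doubling period compounds into a large difference over time.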
Quantum computing, which is still an emerging technology, could help reduce computing costs after Moore's law comes to an end. Quantum computing is based on the evaluation of different states at the same time, whereas classical computers can evaluate only one state at a time. This unique property can be used to efficiently train neural networks, currently the most popular AI architecture in commercial applications. AI algorithms running on stable quantum computers have a chance to unlock the singularity.
For more information about quantum computers feel free to read our articles on quantum computing.
Understand why some believe that we will never reach AGI
There are 3 major arguments against the importance or existence of AGI. We examined them along with their common rebuttals:
1- Intelligence is multi-dimensional
Therefore, AGI will be different from, not superior to, human intelligence. This is true, and human intelligence is also different from animal intelligence. Some animals are capable of amazing mental feats, like squirrels remembering for months where they hid hundreds of nuts.
However, these differences do not stop humans from achieving far more than other species on many typical measures of success for a species. For example, among mammals, Homo sapiens is the species that contributes most to the biomass on the globe.
Source: Kevin Kelly
2- Intelligence is not the solution to all problems
For example, even the best machine analyzing existing data will probably not be able to find a cure for cancer. It will need to run experiments and analyze results to discover new knowledge in most areas.
This is true, with some caveats. More intelligence can lead to better-designed and better-managed experiments, enabling more discovery per experiment. The history of research productivity should demonstrate this, but the data is quite noisy and there are diminishing returns on research: we encounter harder problems like quantum physics as we solve simpler ones like Newtonian motion.
3- AGI is not possible because it is not possible to model the human brain
Theoretically, it is possible to model any computational machine, including the human brain, with a relatively simple machine that can perform basic computations and has access to unlimited memory and time. This is the Church-Turing thesis, laid out in 1936, and it is widely accepted. However, as stated, it requires conditions that are hard to meet: unlimited time and memory.
Most computer scientists believe that modeling the human brain will take less than infinite time and memory. However, there is no mathematically sound way to prove this belief, since we do not understand the brain well enough to pin down its computational power. We will just have to build such a machine!
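To make the "relatively simple machine" concrete: the machine in question is a Turing machine, a head reading and writing symbols on an unbounded tape according to a small rule table. A minimal sketch of a simulator follows; the example machine (which just flips every bit and halts) and all names in it are our own illustration, not anything from the literature.

```python
# Toy illustration of the "simple machine" behind the Church-Turing thesis:
# a minimal Turing machine simulator. The example machine flips every bit
# on the tape and halts when it reaches a blank cell.

def run_turing_machine(tape, rules, state="start", halt="halt"):
    """Run a Turing machine; rules map (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Flip 0 <-> 1, move right, halt on blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", flip_rules))  # -> 0100
```

The thesis claims machines no more complicated than this can, given unlimited tape and time, compute anything computable, including, in principle, whatever the brain computes.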
And we haven't been successful yet. Though the GPT-3 language model launched in June 2020 caused significant excitement with its fluency, its lack of logical understanding makes its output error-prone. For a more dramatic example, below is a video of what happens when machines play soccer. It is a bit dated (from 2017) but makes even me feel like a soccer legend in comparison:
Source: MagX (@CyberMagazineX)
To learn more about Artificial General Intelligence
Joshua Tenenbaum, a Professor of Cognitive Science and Computation at MIT, explains how we might achieve AGI:
Hope this clarifies some of the major points regarding AGI. For more on how AI is changing the world, you can check out articles on AI, AI technologies and AI applications in marketing, sales, customer service, IT, data or analytics.
Source: Arguments against AGI based partially on Wired’s summary of arguments against AGI and Wikipedia
Cem has been the principal analyst at AIMultiple since 2017. AIMultiple informs hundreds of thousands of businesses (per SimilarWeb), including 55% of the Fortune 500, every month.
Cem's work has been cited by leading global publications including Business Insider, Forbes and the Washington Post, global firms like Deloitte and HPE, NGOs like the World Economic Forum, and supranational organizations like the European Commission. You can see more reputable companies and resources that have referenced AIMultiple.
Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised enterprises on their technology decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.
He led technology strategy and procurement of a telco while reporting to the CEO. He also led commercial growth of deep tech company Hypatos, which reached 7-digit annual recurring revenue and a 9-digit valuation from zero within 2 years. Cem's work at Hypatos was covered by leading technology publications like TechCrunch and Business Insider.
Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.
20 Comments
I think we are far away from the point of singularity.
It is not only that intelligence is multi-dimensional, but also that what is deemed intelligent (e.g., IQ, EQ) changes with time.
People also change with time.
So the point of singularity itself may change.
Hello,
Achieving the singularity from where we are now is a relatively simple jump; it is just time and advancements, combined with a team somewhere that is dedicated to it and has the money to pull it off. The missing part of the equation would be asking the question "what is consciousness?" and understanding that. Then, understanding how to model that with non-biological machinery, even at small levels, like modeling the consciousness of an amoeba, or more advanced things like snakes and squirrels. Then, if we know for certain what it is and how to model it, just run an adaptive evolution algorithm on itself, modeling all of the processes in human cognition until it can beat them everywhere. Then, allow it to simply rebuild itself to continuously improve.
The problem currently preventing this, is that human beings have no idea what consciousness is at all. It is a great mystery. One person thinks it is in the brain. Another thinks the brain is like a tuning fork, channeling the consciousness from somewhere else. It is a great mystery in science. When this problem is solved, then machine consciousness can be built most likely, depending on what it actually is.
If consciousness is something weird, such as “human beings have spirits in other dimensions that are planned for their bodies by a supreme being. The brain creates a quantum resonant frequency that links it together with this already conscious entity, and then several universes are interacting simultaneously to create the actual experience of being self aware and sentient” well then, it will be very difficult to design a machine that does that same thing. It is more likely that we figure out how to model the resonance in the brain and then transfer an already existing consciousness of an animal or a human into a machine and keep it going, if that even makes any sense at all.
However, maybe that’s not how it works, and it is something simple like the holographic connection of energy patterns fluctuating in the mind – this can be modeled and a machine can be built that does these sorts of things with much more efficiency. Right now the mystery of the problem is consciousness itself.
Hope that helps. I really enjoyed the robot soccer tournament. I also feel like a superhero at soccer now.
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.
The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
I think Patrick Winston was joking when he said 20 years. From the linked quote: “I was recently asked a variant on this question. People have been saying we will have human-level intelligence in 20 years for the past 50 years. My answer: I’m ok with it. It will be true eventually.” “Forced into a corner, with a knife at my throat, I would say 20 years, and I say that fully confident that it will be true eventually.”
I have the impression that the nerds who make this kind of prediction (replicating the human brain) know a whole lot about computer programming but are ignorant of neuroscience/psychology. We are not even scratching the surface of primary phenomena such as consciousness/unconsciousness. How can you claim to replicate something when we are still far from understanding how it works?
Hmm... I'm not sure we can reach this point: "benevolence of intelligent machines". Emotions and feelings are there to guide our actions, to improve ourselves and to make a better world. Can we make a machine feel guilty for being smarter than us?
Saying human intelligence is fixed ignores that, as we learn more about how the human brain works, we may learn how to expand its capabilities, e.g. through some form of enhanced learning, targeted drugs, gene therapy or electro-stimulation; direct brain-computer connections are not the only potential route. We are hampered by our lack of understanding: even the language you use has an effect on your cognitive abilities. One of the reasons deaf people were called dumb was language deprivation and how it negatively affected neurodevelopment; it was a major problem when deaf children were forced to lip-read instead of using sign language. But we will need more powerful AIs to achieve an understanding of our brains.
Intelligence doesn't solve all our problems, maybe, but it is certainly essential, and the more intelligent you are, the faster you solve problems. If you are a chimp, you cannot even pour water into a glass; you do not even know what a glass is used for. Yes, if you are a human being you still need to get up and grab the glass, but intelligence is essential. I do not think the human brain is impossible to create in a lab. I think Earth is a lab. Anything found in nature can be replicated in a lab.
If P=NP, then the singularity may happen as well. I don't agree with saying the human brain is impossible to recreate, but to say it is intractable is probably approximately true. So P=NP: if you could solve that mystery (which is a Millennium Prize problem, funnily enough) with an intractable calculation, that could make all the magic happen as well.
The claim that “humans contribute most to the biomass” on the planet is likely to be wrong. Check out this paper for a careful estimation:
https://www.pnas.org/content/115/25/6506
Thank you! That was insightful. Biology is not my strong suit, I should stick to computer science.