Photo by Possessed Photography on Unsplash
We discuss the implications of the Metaverse on our business models, buying decisions, and everyday life. But somehow, in all of those discussions, the underlying assumption is that humans will make the rules.
But why are we so sure about that?
AI is not a future technology
The first movie appearance of an evil AI in the form of a humanoid robot dates back to 1927, when Fritz Lang's groundbreaking film Metropolis hit the cinemas.
Since then, the narrative of an AI in charge of the world's major processes has been a recurring motif in almost every sci-fi film out there.
But the cautionary Big Brother of 1984 has had the effect of keeping the idea of a ruling AI forever in the realm of a distant future.
And it is not. It is here. Now.
I know, now I sound like one of those doomsday predictors. And that is definitely not what I want to convey — I am a futurist, after all, and the job comes with an inherent optimism.
We tend to see the global future darker than it has to be, so let’s start with facts before we make assumptions.
Facts are:
AI industry revenues are expected to reach around 126 billion USD in 2025. (Just for reference: the global shampoo market was worth ~31 billion USD and the cybersecurity market ~183 billion USD in 2020.)
Already 37% of companies have implemented AI in some form, and it is expected that by 2025 95% of customer interactions will be powered by AI.
See? AI is not the future; it is the present.
From bots to humanoids
Photo by Alex Knight on Unsplash
Well, you might say, current AI is not as sophisticated as the one we see in the movies.
And that is true.
The term AI is used in an inflationary manner for a lot of things. When you take the time to look closer, you will find four categories of AI (at least for now).
These are reactive, limited memory, theory of mind, and self-aware.
Currently, we have mastered the first category, reactive AI; examples are chess computers and recommendation engines (though, at least in my case, I would not say mastered — but that is another story).
The second category, limited memory, is now being refined. These systems are implemented, for example, in self-driving cars.
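To make the distinction between the first two categories concrete, here is a minimal sketch in Python. All names and the braking rule are illustrative assumptions, not taken from any real system: a reactive AI maps only the current observation to an action, while a limited-memory AI also consults a short window of recent observations.

```python
from collections import deque

def reactive_policy(obstacle_ahead: bool) -> str:
    """Reactive AI: the decision depends only on the present observation,
    like a chess engine evaluating just the current board position."""
    return "brake" if obstacle_ahead else "cruise"

class LimitedMemoryPolicy:
    """Limited-memory AI: keeps a short history of observations, like a
    self-driving car tracking the recent speed of the vehicle ahead.
    (Hypothetical example; real systems are far more elaborate.)"""

    def __init__(self, window: int = 3):
        self.recent_speeds = deque(maxlen=window)  # short-lived history

    def decide(self, lead_car_speed: float) -> str:
        self.recent_speeds.append(lead_car_speed)
        speeds = list(self.recent_speeds)
        # If the car ahead has been slowing down across the whole window,
        # brake early -- a judgment impossible without remembered context.
        if len(speeds) == self.recent_speeds.maxlen and \
                all(a > b for a, b in zip(speeds, speeds[1:])):
            return "brake"
        return "cruise"

policy = LimitedMemoryPolicy()
for speed in [30.0, 25.0, 20.0]:
    action = policy.decide(speed)
print(action)  # the consistently falling speeds trigger "brake"
```

The point of the sketch is the architectural difference, not the driving logic: the reactive function is stateless, while the limited-memory class carries just enough history to recognize a trend.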
If you want to go into more detail on this matter I recommend reading:
4 Types of Artificial Intelligence
The next step, theory of mind, means that AI will be able to perceive and interact with human emotions.
The first steps have already been taken.
There is Sophia, a humanoid robot presented back in 2016 by the robotics company Hanson Robotics.
But Sophia is not alone anymore. There are several companies in the world working on human-like robots. And they are getting closer.
How do you see it?
Are you ready for a world you share with humanoid robots?
LaMDA, Tang Yu and HAL 9000
Photo by Thierry K on Unsplash
There is one category we have not discussed yet: self-aware artificial intelligence.
Meaning: a robot (or AI) which (or who?) is aware of itself and of its existence, makes its own decisions, and has its own emotions and beliefs.
From a technical viewpoint, there is still a way to go to reach this stage. So says the scientific world.
And then there is Blake Lemoine.
Earlier this year, this former Google engineer claimed that a chatbot he worked with, called LaMDA, made statements it was not programmed to make.
Since this statement, there has of course been a lot of discussion in the AI community; the basic tenor is that Mr. Lemoine's claims were simply wrong.
But when you listen to Blake Lemoine's statements, what is even more important than whether LaMDA is self-aware or not are the questions he raises.
In order to determine whether something is conscious, we would need scientific definitions of terms like consciousness, soul, and intelligence.
But at this stage, these discussions are taking place only in highly philosophical circles, when they are needed on a more practical and political level. Our legal frameworks and policies on these matters will have to be built on whatever agreements we, as humans, can reach.
Is intelligence only the solving of more and more complicated problems?
Or is there a creative aspect?
Is creativity immanently human, or can algorithms be designed in a way to be creative? Would we want that?
The latest news on these matters came from the Chinese company NetDragon Websoft. The mobile-game company announced that it is appointing an AI-powered virtual human, Tang Yu, as general manager.
It will support the leaders of the company in risk management and will also provide real-time data.
Will it make decisions of any kind? That remains to be seen.
But as a signal of change, this news represents an ongoing major value shift.
Company decisions should be made based on facts and data only, and let’s be honest, this is better done by algorithms than by human beings.
We are biased, filled with hopes and fears that sometimes turn out to be false. We have good days and bad ones, filled with all sorts of emotions and handicaps.
That is what makes us humans.
And that is also what sometimes leads to creative, reckless decisions, which turn out to lead to better futures than we thought possible.
Can that be done by data-based algorithms?
Every one of us who has ever been discriminated against, overlooked, or judged on the basis of someone's bias has had at least one moment of thinking that the world would be a better place with fact-based decisions.
But are we ready for a business world where only decisions of that kind are made?
Are we ready for a world in which companies are led by AI?
This article is part of the series 101 Things that can be different in the future. If you are interested in any specific topic or have other signals to show please comment or reach out to me.