A snapshot from the movie Ex Machina. Original image via Alamy (Editorial License)
When Yuval Noah Harari was promoting his second bestselling book, he said the following:
“…Humanity will create something more powerful than itself. When you have something that understands you better than you understand yourself, then you’re useless. Anything you can do, this system can do better.”
The idea he’s perpetuating is that artificial intelligence will somehow bring a sudden, dramatic change to everything we know and love in the near future. This is fine for a science fiction writer.
But, from a historian, I expected more.
Because history tells us that if AI does indeed change our lives in any dramatic way, it most certainly won’t be sudden.
First of all, let’s take a look at the Industrial Revolution.
It is agreed that the Industrial Revolution started around the 1750s, when a confluence of factors (one of which was technology) prompted a gradual increase in productivity, which, in turn, led to other changes.
Many people have been taught to believe that the Industrial Revolution happened thanks to a series of inventions that disrupted the old ways of living. This was indeed the consensus among historians 70 years ago. Today, as this Yale professor has noted, this analysis has been largely debunked.
Even the most violent industrialization, such as China’s, happens over decades. Source: Statista
The new view is that the Industrial Revolution happened slowly and gradually, fitting nicely into the lifestyles people were already living. Instead of one big change that separated the “now” and “then,” there were many small changes that slowly transformed how we live and cooperate.
For example, the textile industry was the first to truly adopt the idea of a factory filled with machines that automated some tasks for the workers.
However, even in 1870 Paris (one of the epicenters of the Industrial Revolution), the average unit of production only had about 7 people. 120 years into the process, the world was still a long way from the production-line mindset of the mid-20th century.
Around 1880, or about 130 years into the Revolution. Source: YWCA USA, Flickr (Creative Commons)
To an actual person, the Revolution happened more or less like this:
- It’s the 1800s, you’re raising cows for a living
- Years go by, someone somewhere patents something that makes the cow business more productive
- More years go by, the thing gets made
- Another decade passes, someone in your town gets the thing
- Five more years, your grandchildren realize that the thing is actually pretty good…
- …and so on.
What I’m getting at is that all of this change was extremely slow and drawn out over the years. The Industrial Revolution only seems like a revolution when you skim through it.
More people, more problems. Source: Statista
Of course, this doesn’t mean that the Industrial Revolution wasn’t significant or disruptive. It was. But for different reasons.
Thanks to all the extra productivity, people now had lots of extra food, which meant that there could now be a lot more people on the planet. This is nicely illustrated in the graph above.
Not only did the population of humankind explode during this period, but these new people also flocked into cities — pursuing work, status and opportunity. This meant that competition rose exponentially in those densely populated, desirable areas, and for people who chose to live there, life was indeed a lot different from what it used to be.
But for the millions of people who didn’t pursue city life, life was pretty much the same. Nobody forced them to move. It wasn’t disruption — it was elective change.
Only now is the internet truly overtaking TV on its home ground. Source: Statista
So when we talk about AI taking jobs, AI destroying jobs, or any other brand of AI changing our lives, let’s at least acknowledge that if the change does happen, it will most likely be slow, gradual and more or less voluntary.
The other myth widespread among the more anxious part of the population is that new technology cannibalizes old technology. That is simply not true.
Whenever a new technology goes mainstream, it usually spreads like a virus, making it look like the old technology may go obsolete. In reality, even competing technologies usually coexist and sometimes even reinforce each other. For example, TV usage in the U.S. didn’t peak until 2010 — which was well into the internet and smartphone revolution — and the fact that Game of Thrones was the most downloaded show online didn’t stop it from being the most popular show on TV as well.
If TV is an ancient technology that should’ve been cannibalized by the internet, then writing should only be found in history books (er, videos). And yet, you might be surprised to learn that Millennials read more than previous generations, and that’s not even accounting for online content (the study only considered books).
Or what about warfare?
WW2 may have been the ultimate display of modern, mechanized warfare, but the truth is that the German army employed 2.75 million horses during that time, while the Soviets had upwards of 3 million. Even in a war based on new technology, old technology remained instrumental.
Even when you look at any single technology in isolation, there’s never really a point where it suddenly starts existing. For example, there was never really a point in time when the motor vehicle was invented. Instead, variants of the modern vehicle progressed continuously over centuries (!!) until a confluence of factors made mass adoption possible.
This little thing was amazing, but it wasn’t new technology. Wikimedia Commons image
You may say, well, what about the smartphone? That sure didn’t take decades to replace whatever people used before.
The smartphone was a revolutionary product, not a revolutionary technology. From a technological standpoint, there was pretty much nothing exclusive to the smartphone: it simply combined the functions of several other devices in a novel, neatly designed package. And while it did spark a change in how people live, that change is only a small part of the digital revolution that’s been happening since the 1960s.
Since AI is a whole direction of technology that may or may not find dramatic real-world application, it’s very unlikely that it will make any of the existing industries obsolete anytime soon, even if we do have a major breakthrough.
And the truth is, we’re struggling.
Forms of AI have been successfully powering popular systems like Google Search or Amazon recommendations for years now, but whenever we try to create a truly revolutionary, “intelligent” AI that could at least interact with — if not resemble — real humans, we fail miserably.
Take autonomous cars for example. An AI that could drive needs to do a lot more than just spit back video recommendations based on your browsing history. It needs to make intelligent, real-time decisions in a real-world setting. It needs to think. Right now, we’re nowhere near this point.
Even though there’s been a lot of buzz and a lot of promises about self-driving cars in the past 10 years, nobody has pulled it off, and when attempts have failed, they have failed spectacularly. At least 19 pedestrians have been killed in crashes involving Tesla cars (there’s a whole website called TeslaDeaths), and Uber sold off its self-driving division entirely.
Not everything with the word “AI” in it is set to revolutionize the world. Image by Marco Verch via Flickr (Creative Commons)
Many people are afraid that AI will automate their jobs, but whenever AI ventures into traditionally human domains, it struggles.
A few years ago, Amazon launched a series of cashier-less, self-service grocery stores that automatically tracked what you purchased through visual-recognition cameras. Every news website wrote about how this was the future of retail and how everyone was doomed.
Well, guess what? Years have gone by, and they have only built 30 of these mini-stores. For scale, Walmart owns nearly 12,000.
Why isn’t Amazon jumping on what it supposedly believes is the future? Probably because it doesn’t believe that. In Go stores, the cameras can be tricked, the equipment is probably way more expensive than hiring a cashier, and it still takes people to run the store. It’s a marketing effort, at best.
Or what about manufacturing? Sure, some tasks are now automated by robots. But, as this Quoran nicely summarized:
Manufacturing is extremely heterogeneous and you’ll find a surprising mix of old and new equipment, processes, controls, and production volumes within a single factory, let alone a sector that ranges from making tortillas to space satellites. The tremendous flexibility of human beings is often a better solution so far in material handling, material movement, some assembly work, cumulative judgement and skills (that’s the area that gets automated the most, not carrying a component a few feet) and sometimes quality control by inspection (also rapidly phasing out since people get bored and half blind at this quickly).
(Not to mention that it’s often simply cheaper to hire humans than to buy a robot that needs maintenance, might break, etc.)
I asked AI to paint me a hand. This was its third attempt. Image via Deepai
This year, the creator economy had a scare when AI suddenly started writing blog posts and generating visual art.
Well, as you can see from the picture above, it’s not exactly an intelligent life form, is it?
Generators like these use a combination of statistics and computing to crunch through an enormous amount of data to find common patterns. Then, they ask you what you want to see. You say, ‘a cat.’ The generator says, good, I have seen 10 trillion images tagged ‘cat.’ Here is a collection of pixels that should fit into that category.
The problem is, the generator doesn’t know what a cat is. It doesn’t know what a human is, or what it means to know. It’s merely a fancy kaleidoscope that shuffles pixels (or words) until they match a pattern it has extracted from its training data.
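To make this concrete, here is a deliberately tiny sketch of that kind of pattern-matching: a first-order Markov chain. It is nothing like the actual architecture behind today’s image and text generators, but it has the same character: it reproduces patterns from its training data without any notion of what those patterns mean. The corpus and the prompt word below are invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy "generator": learn which word tends to follow which, then shuffle
# words until the output matches that pattern. It never knows what any
# word means; it only knows co-occurrence counts. (Illustrative corpus.)
corpus = "the cat sat on the mat the cat chased the mouse the dog chased the cat"

# Count which word follows which (a first-order Markov chain).
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(prompt_word, length=8):
    """Produce text that statistically resembles the training data."""
    output = [prompt_word]
    for _ in range(length):
        candidates = transitions.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # pattern-matching, not understanding
    return " ".join(output)

print(generate("the"))  # e.g. "the cat chased the mouse the dog chased the"
```

Scale that idea up by a few billion parameters and you get something that produces very convincing cats, but its relationship to meaning hasn’t changed.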
Art is exactly the opposite of that. Art comes from intent, a desire to show something, a desire to say something. A desire to make someone else feel something. Art in all forms is primarily communication, not a collection of colors or words. If you consider AI art your competition, you may want to reexamine why you’re writing in the first place.
This is nothing compared to the wealth we’ve created since then. Source: Statista
If there’s one bit of truth to the current AI narrative, it’s inequality.
Whenever a new, useful technology comes around, it almost invariably benefits the privileged members of society more than it does everyone else. You may have heard the mantra “the rich get richer.” Well, in this case, it’s true.
The rich will always have access to new technology before everybody else, so they have the time to figure it out and make use of it. The rich are also the ones who design technology, so they design it in a way that (mostly) benefits them.
Even if we take all the gloom out of the equation, the poor still remain on the receiving end of technological advancement due to how our society works.
For example, a study in Brazil showed that during 2000–2009, when the country was massively adopting broadband internet, the people who saw salary increases were mostly managers (9%) and board members (19%), while worker compensation remained more or less the same (2.3%).
This isn’t necessarily because someone is evil or someone is stealing from someone — it’s just that the productivity increases translate into higher profits and higher stock prices, which have little to do with those who do the actual work. If AI does indeed revolutionize something in any significant way, it’s probably not going to benefit the person who’s afraid it will take their job (even if it doesn’t take their job).
But that’s a larger question with no simple answer. It’s about power dynamics, and how we choose to organize ourselves as a society. Until we figure out a better way to manage our resources, every change is bad change for the unprivileged.
In other words, hate humans, not AI.