I think probably everything I say tonight I have said in several versions to the students at New College of the Humanities in the last few years when I've given lectures. They've very much helped me write the book that's now emerged. And I was thinking, just as you were introducing me, of a science teacher I had when I was 16 or so, who said a wonderful thing-- a physics teacher. He said science done right is one of the humanities. And I thought, oh, what a great idea. And I sort of kept that in my mind all along. And when Anthony at New College of the Humanities asked me-- and, of course, I'm a philosopher-- to do my thing, I knew that I was encouraged to talk about the science that I was interested in, the scientific ideas, which I think are also important philosophical ideas. And that's what I'm going to talk about tonight. So thank you all for coming. If some of you have seen my Royal Institution talk of about two years ago, you will recognise a few slides. This is a later development of my thinking and pretty much lines up with what's in my new book, called From Bacteria to Bach and Back. Here's a sort of punchline. It should be fairly obvious. We are the first intelligent designers in the tree of life. Now, this is my favourite diagram of the tree of life. And if you see, this is the present all along here. This is the origin of life. So time goes out here. And here are the earliest life forms, the bacteria in the Archean. And here's this great, great moment, the eukaryotic revolution, which led to this wonderful fanning out of basically all the living things you can see with your naked eye. And that little Y there, well, that's about six million years. And that's how long we've been separated from our common ancestor with the chimpanzee. So human beings have only been on the scene for just a tiny little bit of this diagram. And I'm claiming that-- is there intelligent design? Yes. There are intelligent designers in this room by the dozens. And our scientists and artists are intelligent designers. So the problem that I'm facing in this book is, how did intelligent designers evolve? If natural selection is not intelligent design, and it isn't, how did small-i, small-d intelligent designers evolve? And some people have real trouble with this, including somebody who's going to be here, I understand, in the next month. And that's Roger Penrose, who says, "I am a strong believer in the power of natural selection. But I do not see how natural selection in itself can evolve algorithms which could have the kind of conscious judgements of the validity of other algorithms that we seem to have." He goes on, "to my way of thinking, there is still something mysterious about evolution and its apparent groping towards some future purpose. Things at least seem to organise themselves somewhat better than they ought to, just on the basis of blind chance evolution and natural selection." Now, that's a bit of Darwin doubting by one of the most eminent scientists around. And he's far from alone. I like the way he puts it, too, because he is a great believer in natural selection, but it bothers him; he has this nagging thought that there's got to be something that doesn't quite add up. And I'm going to try to point out what it is and then show you how to get yourself out of that puzzle. And here's the way it could go. How could a slow, mindless process build a thing that could build a thing that a slow, mindless process couldn't build on its own?
There does seem to be something faintly miraculous about that, something like pulling yourself up by your own bootstraps. And my book is an attempt to answer that question. Well, of course, you know the answer. First, you evolve Alan Turing. And then, he intelligently designs a computer. And we're home. But how do we get an Alan Turing? How do we evolve an Alan Turing? Well, the answer, of course, is natural selection. But this is the main point of my book in a way: not just natural selection of genes. We also have to talk about cultural evolution and the natural selection of memes, Richard Dawkins' idea of cultural units that replicate differentially. The ones that replicate best are fitter; they survive and make more copies. And human culture is the medium, the source, and ultimately the power that makes somebody like Alan Turing possible. Well, this idea suggests a question, the question in my title. So are brains computers? I say, if brains are computers, then who writes the software? Well, let's pause and look at whether our brains are computers at all. And some people think not. Some insist that they aren't. There are scientists, such as Roger Penrose, who is very clear about that in his book, The Emperor's New Mind. There's Gerry Edelman, the late Nobel laureate-- I was a little puzzled by Gerry's insistence that brains were not computers, while he modelled brains on computers and used his models to demonstrate why a computer couldn't do that sort of thing, which was a problem for Gerry. But there's also Jaak Panksepp, whom some of you may know, an eminent neuroscientist whose main area of interest is emotion. And there are philosophers as well, of course. John Searle comes to mind, famously. And your own Raymond Tallis. I'm not going to say any more about either one of them tonight. I've had my say elsewhere. Because I want to talk about computer phobia and, in fact, two different varieties of computer phobia, which is my perhaps somewhat rude term for the attitude of those who really don't like the idea that our brains are computers at all. If brains aren't computers, what are they? Well, they're not pumps. They're not factories. They're not purifiers. The task of brains is to take information in and yield control. Of course, they're computers. That's what a computer is. It uses information to control something. But this is not the kind of computer that these people are imagining. So I want to help you imagine a different kind of computer, an organic, if you like, or evolved computer, which is what I think we have between our ears. So the mis-imagination of computers is something that needs a diagnosis. And I'm going to try to provide it. And I'm going to make a new suggestion. Well, it's a newish suggestion, because others have made it before. But I want to remind you that there have been a number of attempts to say, well, brains are sort of like computers, but they aren't-- well, you know, they're not made of silicon. They're made of protein. That's not what I think is important. They're not digital. They're analogue. That's slightly true, but I don't think that's the important point. They're not serial. They're parallel. True, but that's not what I want to focus on. I want to focus on something else entirely, the difference between cooperative and competitive computers-- I mean, computers that are made of cooperative versus competitive parts. So the default image that most of us have of computation, and certainly the one that, say, Roger Penrose has, is that it's ultra-efficient. There's no waste motion, no cross-purposes.
And there's redundancy only for safety. It's hierarchically organised, where routines call subroutines and the subroutines answer. It's all like a well-oiled corporation with chains of command and control all the way up and down. And there is also controlled prioritisation. That is, there are, as it were, built-in traffic cops that decide what happens next. You don't have any fighting over that. And there is competition in the brain-- I mean, in computers. But it's, as it were, friendly opponent processes. There are sort of tugs of war that are carefully set up in order to resolve some issue in a tug-of-war sort of way. But it's not, as it were, deadly competition. It's just for the sake of finding a midpoint, usually, something like that. The computer scientist Eric Baum has a nice name for this kind of architecture. He calls it the Politburo architecture or Politburo control, like the old Soviet Union. I want to compare that with what one of my postdocs once called brain wars. In brain wars, we have real, not notional, competition. It's even, in some cases, a matter of life or death. You have micro agents with their own agendas-- neurons, astrocytes, glial cells. But neurons are the ones I'll concentrate on. Tecumseh Fitch, a friend and colleague of mine, in a paper called "Nano-Intentionality" in Biology and Philosophy a few years ago, spells out the idea pretty clearly that individual neurons are agents. And they're semi-autonomous. And they do have agendas. And that's very different from what you have in your digital computer. So I want you to compare Marx: from each according to his abilities, to each according to his needs. Compare that with dog-eat-dog, free-for-all, laissez-faire capitalism, where there's no central or higher control. Cooperation does happen, but it's not a precondition. It's an intermittent achievement. OK? Now, having presented this stark contrast, I do need to preemptively note what I'm not saying. You might think, uh oh, Dennett has fallen in with the likes of Ayn Rand and Milton Friedman and laissez-faire capitalism. No, no, no. I'm not a fan of that view of the economy. But, still, centrally planned economies don't work, and neither do centrally controlled, hierarchically organised, top-down brains. Even the best cognitive architectures that have been developed so far in cognitive science have tended to be too disciplined, too neat, too lacking in the sort of unruly competition that I now think is essential in an actual organic brain, especially the human brain. They're too bureaucratic, you might say. They have millions of identical elements, which is also important. I didn't really appreciate this until very recently, when I was talking with my good friend Rod Brooks, a roboticist, an extraordinary roboticist with whom I worked for some years on the Cog Project. But we've had these models of the brain ever since the McCulloch-Pitts neuron, one of the great oversimplifications of all time, which came along-- I can't remember the year right now, but it was around the '50s, I think. It had these very simple elements, logical neurons, which emitted a single branching output and had bunches of inputs, each with either a positive, excitatory, or a negative, inhibitory, attachment. And then, they summed the results, and they either fired or they didn't. That was a brilliant simplification. And they're wonderful little thinking tools. But brains aren't like that.
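To make that simplification concrete, here is a minimal sketch of a McCulloch-Pitts style unit, written in Python purely as an illustration; the particular weights and threshold are invented for the example, not taken from the lecture.

```python
# A minimal sketch of a McCulloch-Pitts "logical neuron": weighted inputs
# (excitatory = positive, inhibitory = negative) are summed, and the unit
# fires (outputs 1) only if the sum reaches a threshold. The particular
# weights and threshold below are arbitrary, chosen only for illustration.

def mcculloch_pitts_unit(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: two excitatory input lines and one inhibitory input line.
weights = [1, 1, -1]
threshold = 2
print(mcculloch_pitts_unit([1, 1, 0], weights, threshold))  # both excitatory lines active -> fires (1)
print(mcculloch_pitts_unit([1, 1, 1], weights, threshold))  # inhibitory line active too -> silent (0)
```

Every unit in such a network is an identical, interchangeable sum-and-threshold device, and that is exactly the tidy picture of computation in question here.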
This is mis-imagination. And when we think about computers, we tend to imagine the algorithms with which we are somewhat familiar, things you know like Word and Photoshop and Google Desktop, which are all brilliantly designed from the top down with hierarchical control. And if we compare those with brains, just intuitively, you think, nah. Brains just aren't like that. And you're right. They aren't. But that doesn't mean they're not computers. It just means they aren't computers with Politburo control and top-down designed software. They're not cold, orderly, ultra-efficient, authoritarian machines, composed of units that are mindless little machines. So I want to compare those models with, oh, how about the stock market. Is the stock market a computer? Is stock trading a computational phenomenon? Yeah. It is. And in fact, in a way, the proof of that is that those traders are being replaced by machines right now. And more and more stock trading is done entirely in the digital world. And so whatever they were doing all those years was something that could be easily done by machines, because machines are doing it now. But now, let's look at neurons. For once, I'm going to ask you to look at a neuron. Let's see if this is running. Yes. This is a little looped bit of a film. These are neurons in a dish. And you see they're putting out their little dendritic graspers and looking around. Here's another one. Is it going to make that connection? No. Yes. No. So I want you to replace the image you have of a McCulloch-Pitts neuron with these squiggly little agents by the billions, gathered in your brain and faced with the task of keeping life and limb together for you. Now, one thing that you get immediately when you start thinking of neurons in this more agent-like way is a good account of brain plasticity. As you no doubt know, if a little bit of your brain is damaged, very often, not always, but in many conditions, the neighbouring parts of the brain that are spared can take over the work that was being done by that part that's now died or gone missing. And the degree of versatility of brain tissue, especially cortical tissue, is just stunning. Experiments show this, for instance, the famous Merzenich experiments, where he mapped the brain areas that were involved in the digits of a monkey and then sutured the fingers together, so that the monkey just had three digits, and after a week or so, went in and looked at the areas that were responsible and saw that there was a reorganisation of the cortex. There wasn't as much work to do. And so the neurons were recruited for other purposes. And the way to think about this, I've decided, is that these are neurons hungry for work. And they've got to stay alive. You may remember a wonderful line from François Jacob, who once said the dream of every cell is to become two cells, which is a good line, true to a first approximation, but not true of neurons. Pretty much, they're like mules. They are offspring cells, but they're not going to have any offspring of their own. So their dream is just to stay alive. But they've got to stay alive. They've got to fight for their energy. And the only way to fight for their energy is to find useful work that they can get paid for. And so that's the kind of energetic economy with which I think we have to replace the Politburo model, where all of this is taken care of in a bureaucratic way.
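To give that energetic economy a concrete shape, here is a deliberately toy simulation, my own sketch and not anything presented in the lecture: neuron-like agents have to claim jobs to earn the energy that keeps them alive, there is no central dispatcher, and when some agents die their orphaned jobs get picked up, bottom-up, by whoever is left.

```python
import random

# Toy illustration only: agents must claim jobs to earn energy; idle agents
# slowly starve; nobody assigns work from the top. When an agent dies, its
# jobs become unclaimed and surviving agents grab them, a crude cartoon of
# the bottom-up reassignment seen in cortical plasticity.

random.seed(0)
jobs = {f"job_{i}": None for i in range(12)}       # job -> current worker (or None)
agents = {f"neuron_{i}": 5 for i in range(8)}      # agent -> energy reserve

def tick():
    # Unclaimed or orphaned jobs are grabbed by whichever agents reach them first.
    for job, worker in jobs.items():
        if worker is None or worker not in agents:
            candidates = list(agents)
            jobs[job] = random.choice(candidates) if candidates else None
    # Working agents earn energy; idle agents burn it; starved agents die.
    employed = set(jobs.values())
    for name in list(agents):
        agents[name] += 1 if name in employed else -1
        if agents[name] <= 0:
            del agents[name]

for _ in range(3):
    tick()
del agents["neuron_0"]   # a bit of "tissue" dies
tick()                   # its jobs are re-claimed with no personnel director involved
print(jobs)
```

Nothing here is meant as a model of real neurons; it is only meant to show that reassignment can fall out of local competition for work, with no boss doing the assigning.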
Obviously, obviously, when a bit of brain tissue dies, there's no central personnel director who reassigns the neurons in the neighbourhood. That has to be figured out in some sort of bottom-up way by the neurons, not by some boss neuron or commissar in some part of the brain. So there's no central administrator doing the reassignment at all. But, moreover, no two neurons are exactly alike. And this is the point that I got from Rod Brooks, which I mentioned earlier. Rod is an unusual man with many talents and many projects. And one of his projects-- I don't know if he's finished it yet; I've been talking to him about it for a couple of years, I guess-- was to make what I'm going to call a steampunk computer, a pre-electronic computer, no chips, not even any vacuum tubes. He was going to make a computer out of relays and solenoids and the sorts of electrical switching that you had before you had the electronic age at all, just to see if he could do it. And of course, you can make a computer out of just about anything if you're clever enough. And so he set out to make an actual computer. He loves to solder and put wires between things and so forth. And he had a big room, not quite as big as this. But this is where he's building the computer. And of course, the computer he was building would have way, way less than 1% of the power and speed of your cell phone. But it was an energy hog, a giant electrical, not electronic, computer. And I was talking with him about the challenges of this. He said, you know, the hardest part of all was getting all the flip-flops exactly alike, simply making them so exactly similar that you could get reliable computation out of them. The timing has to be precise. And the response has to be precise. And that was a major challenge for him as the builder of this. And he said, we've been taking for granted one of the features of the digital age, which is that the manufacturing processes for chips are just stunningly high quality and regular. You can have a memory with billions of little memory units, or flip-flops, that are just about identical, down to the atom. And without that, the architecture that you build on top of them wouldn't work. And we don't have that between our ears. We just don't. So we have to rethink the idea of computer architecture to make an architecture out of these different, unruly, clueless, little, multi-armed, blind cells. So a brain is not made of units with those properties at all, not like the ones in your cell phone. So now, I'm going to show an image I've shown often before, because it so vividly makes the point I want to make. On the left, you see a termite castle, an Australian termite castle. On the right, you see, of course, Antoni Gaudi's famous church in Barcelona, the Sagrada Familia. They look stunningly alike. And that's no accident. Even the interiors and the structural members have some striking similarities. So here we have two artefacts, both made by living things. The one on the left is made by termites. The one on the right was designed and built by Gaudi. So here's a puzzle. On the left, we have bottom-up design. On the right, we have top-down design. And I mean that just about literally. There's no boss termite. There's no architect termite. There are no blueprints. There's no second in command, no echelons of command. The termites are just doing their individual thing. And they don't know what they're doing or why, but they're doing it. And the result is that amazing structure.
That's bottom-up design and construction. Gaudi, on the other hand, was a charismatic, megalomaniac, creative genius with blueprints, manifestos. He had it all worked out in advance in his head. And he lorded it over his second in command, who lorded it over the lieutenants, who lorded it over the sergeants. And down it went to the people who actually put the bricks together or cut the stone and so forth. So we have a stark contrast, two different ways of designing and building. One is bottom-up. I'm going to say, that's a Darwinian way of building. It's done by a lot of competent but uncomprehending processes. On the right, we have top-down, intelligent design by Gaudi, which is where it's mind first. You come up with the idea. You prove it's a good idea. And off you go. Of course, my other favourite example of something that was top-down design is Turing's first computer. He had the proof of concept. Before they paid anything to build the chassis or put in the tubes, they knew it was going to work, because he proved it was going to work. And he knew exactly how it was going to be put together. They changed some things along the way, of course. So we have a stark contrast here. Now, here's the puzzle. A termite colony might contain, I am told, up to 70 million clueless termites. Latest count is that your brain contains about 86 billion even more clueless neurons. Now, here's the puzzle. How do you get a Gaudi-type mind out of a termite colony brain? What you have between your ears is 86 billion semi-autonomous, clueless neurons. Not a one of them knows who you are or cares. Somehow, that has to be organised into something that can do what Gaudi did or what Turing did. Well, when I was thinking about this, I was reminded first of all of another great triumph of about the same time as Turing. And that's the great K-25 building at Oak Ridge in Tennessee, built in record time during World War II, part of the Manhattan Project. This is where they did the gaseous diffusion of uranium to make the weapons-grade uranium for the atomic bomb. When it was built, it was, I think, the largest building in the world. And over 12,000 workers in round-the-clock shifts worked there. And they didn't know what they were doing. They were clueless about what they were doing. They were trained to push buttons and turn dials and look at dials. And they had no idea what they were doing. After the bomb was dropped and they learned, they were flabbergasted. They didn't know they were-- they had no idea what they were making there. But now, think about it. First of all, it is possible to organise armies of clueless operators to perform some highly sophisticated control task. Witness Oak Ridge. But who or what does the organising? And the answer in the case of Oak Ridge is that it was top-down, intelligently designed by a brilliant team of physicists, engineers, and a brilliant leader named Leslie Groves, General Leslie Groves. So it's like Gaudi's church. And it's like Turing's computer. And it's like all good old-fashioned AI programmes. GOFAI was introduced by the philosopher John Haugeland some years ago. And it stuck. People in AI have adopted his sort of deliberately snide term, "good old-fashioned AI." It's the kind of AI that led to the AI winter that is now over. And we're now in a new AI spring, with all of the new bottom-up computer processes that are dazzling everybody today. So good old-fashioned AI was top-down, not bottom-up design. 
This is a cartoon that I made some years ago, the walking encyclopaedia. It was supposed to give you an idea of what was involved. It's a sort of phony flow graph. We have a belief box, or belief fixation box, and the planning committee, and the action, and the language acquisition device that Chomsky made famous. And here's the lexicon. Here's where the logic is, and perceptual analysis. And you've got all these departments interacting, sending memos back and forth. It's very, very bureaucratic and very well organised, very efficient, and very brittle. And it doesn't work. And I say it doesn't work because we tried for several decades, very, very smart people, and more or less proved that it didn't work. So when we look at Oak Ridge, we see it was top-down design, and it did work. But who knew what at Oak Ridge? I tried to find out. And as near as I can tell, there's still not much publicly available information. It's still shrouded in secrecy. But General Leslie Groves knew. And so did his immediate staff and many of those who reported to them, and the head engineers who designed the plant. But probably the architects that designed the building, just the outer shell, had no idea what the building was going to have inside it. It had a waterproof roof and very strong supports for various things. But they didn't know what was going in. So I think there's a sort of diminishing level of comprehension all the way down. For most of the people that worked there every day, not a clue what they were doing. And so we have to face what we might call the can't-see-the-woods-for-the-trees phenomenon. There were levels of granularity or insularity, people who knew a little bit, but only a few people who pretty well knew the whole thing. The bird's eye view was had by General Leslie Groves and a few others. And we can compare that to, as it were, the termites' view, which is what most of the work force there had. And the interesting thing is that in the termite colony, no termite or junta of top termites sees the woods at all. They just see the trees they're working on. And of course, one effect of that is that redesign is achingly slow. It takes evolutionary time to get the termites to do something else, because there's nobody there who can-- to take a term from good old-fashioned AI-- do the blame assignment and the credit assignment that you need to figure out why the thing doesn't work. They can't reverse engineer their own system, which is what we can do. Another effect is that with no boss, there's no issue of what the boss has access to, to use a term from philosophy. So there's no reason to posit access consciousness for the colony. In other words, it's not like anything to be a termite colony. Maybe it's like something to be a termite. But I think you'd probably agree. It's not like anything to be a termite colony, because there's no organisation that gives access of any kind, bird's eye view or otherwise, to what's going on below. There doesn't have to be. So I can't prove that. I think it's a highly probable proposition. And if you doubt it, you might want to consider questions like, what is it like to be the Seattle Seahawks? Not the individual players on the team, but the team. Probably you'd say, no, it's not like anything. It's a bunch of individuals. They may be very well organised, but it's not like anything to be the team.
And it's not like anything to be a termite colony, which would seem to lead to the conclusion that it's not like anything to have a brain or to be a brain. The question of what it is like to be a human brain would seem to get the same answer. But then, it sure seems to be like something. It's an important and obvious fact. There's no General Leslie Groves in your brain. And yet there seems to be. Certainly, it seems to be like something to be you, and that you seems to be in charge. So now, we have the question, how can we explain that there seems to be a General Leslie Groves in your brain, when there isn't? This is the view that is nowadays often called illusionism: the idea that consciousness is a sort of useful illusion. And I see from a review today in the New Statesman, an otherwise very, very friendly and positive review, that the reviewer just thinks this idea that consciousness is some kind of illusion is hopeless. Well, I beg to differ. But that's a long story, and I'm not going to be able to spend a whole lot of time on it. I've given you a hint about what the answer might be. So if we compare the organisation of Oak Ridge to the organisation of the mind and brain, and we compare both with GOFAI, one of the things we see in this diagram is that you have all these parts. I can't leave it up there while I ask my questions. Does the LAD know its job is acquiring a natural language? No. Does the belief box understand its role in informing the other departments? No. Only the AI designers know the functions of the parts, the intelligent designers off on the side. GOFAI is top-down intelligent design. Now, this is not the threadbare criticism that everything these AI systems know is what the creators installed in them-- no; garbage in, garbage out; nothing in the programme that isn't known by the programme creators. That's just not true. Even of those systems it wasn't true. Many of those systems go way beyond their creators in what they know. The point is that the design of the architecture that supports this knowledge is top-down design. It's hierarchical and efficient. So actually that's just the first kind of computer phobia that I wanted to try to alert you to and suggest a way around. Yes, our brains are computers, but they're more like termite colonies than like your laptop. And that's all right. They can still be computers, because competitive architectures are still computational architectures. The next source of computer phobia is the objection that the mind isn't software. And I have to admit that even some of my best friends think I'm nuts on this score, among them Steve Pinker and Paul Churchland. But they haven't convinced me. And I'm going to defend the idea that, in fact, our minds are software running on that termite colony brain. And it's the software that distinguishes us from other animals. First of all, let's get rid of some obsolete objections. Where are the floppy discs? The brain doesn't have RAM or a CPU. Software consists of bit strings. Software is hardware-specific at the level of compiled code. Yeah, yeah, yeah, yeah. That's not what I'm talking about. I'm talking about something a little different. I'm talking about something more like Java applets. What's a Java applet? You use them every day. You probably don't know it. They're apps. Or they're like apps. They run on your hardware.
And they can be written by a software designer who doesn't need to know what kind of hardware you have, whether you're running it on a cell phone, or on a Mac, or on an IBM computer, or a Linux computer, on Windows, or Mountain Lion. It doesn't matter. It'll run, as they say, on all platforms. That's not strictly true, but it runs on all the platforms that matter. Java is a computer programming language, and when you write in it, you don't have to specify what the underlying hardware is. Java takes care of that for you. So one who writes Java applets has no need to know the fine details of the hardware, because installed on your hardware is something called the JVM, the Java Virtual Machine. Now, that's software. But it's software that is specifically designed to fit on the hardware in question. And it protects that hardware from malicious use, among other things. But it also permits Java applets to be downloaded off the internet, for instance, and run on your computer. Now, I want to draw your attention to this: the Java Virtual Machine has to be installed before the Java applet will run. In fact, the Java Virtual Machine on your cell phone or on your laptop has probably been updated several times in the last week without you even knowing it. You've gone online to get something, and you need a new version of the Java VM. And so it's automatically downloaded from whatever site you're on. And it takes care of the problem. How many of you have had a message come up on your screen saying, will you accept a Java update? Some people set their machines to make sure that they always know when that's happening. But in general, you don't know. So now, I want to draw attention to this. One who writes Java applets has no need to know the fine details of the hardware. Now, what's the importance of that? What am I doing right now? I'm talking to you all. Each of you has installed a version of the EVM on your neck-top. That's the English virtual machine. I don't have to know the details of the hardware between your ears. I can count on the fact that you have the EVM, and so you can run the code that I'm downloading to you as we speak. Now, that's, of course, an oversimplification. But the main idea, the one that I want to stress, is that the beauty of having language is that it permits us to share ways of doing things that we wouldn't otherwise be able to share, because we can tell each other about them. Of course, you can also show people without telling. But the capacity to talk is really important to make a culture cumulative. These are thinking tools. Words are a good example of thinking tools. Each word is its own little tool, a way of doing something, a way of referring to something, remembering something, labelling something, et cetera, and a way of pronouncing something. And I love the line from Goethe, "when ideas fail, words come in very handy." That's important to remember, that sometimes you can use a word to great effect without really having much of an idea what it means. And sometimes, that's actually useful. Other thinking tools are numbers, diagrams, maps, methods, and intuition pumps. My last book was a collection of more than 70 thinking tools, most of which were what I call intuition pumps. They're a little fancier than an individual word. They're a way of thinking about something, which is worth adding to your kit. Notice, by the way, that what I've just done right now is I've downloaded an app to your neck-top.
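Since a good deal of weight rests on the idea of a virtual machine, here is a toy interpreter in Python, purely my own illustration; the little instruction set is invented, not real Java bytecode. The point it is meant to make is just this: the applet is written against the virtual machine's instructions, so whoever writes it needs to know nothing about the host underneath.

```python
# A toy "virtual machine", invented for illustration (not real Java bytecode):
# the applet author writes instructions for the VM, not for any particular host.
# The same applet runs unchanged on any machine that has this little VM installed.

def run_applet(applet):
    stack = []
    for op, *args in applet:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "say":
            print(stack.pop())

# The "downloaded" applet: a list of instructions, with no mention of hardware.
applet = [("push", 2), ("push", 3), ("add",), ("say",)]
run_applet(applet)   # prints 5 on any host that has the VM
```

On the analogy being drawn here, the English virtual machine plays the same role: the speaker composes against the shared conventions of English, not against the particulars of any one listener's neurons.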
If you didn't have it before, you now have the idea of words and other cultural items being like apps that are downloaded to your neck-tops, where they can give you new competences that you didn't have before, in exactly the way your cell phone or your laptop can pick up new competences by downloading software. I think it's actually a very significant and deep parallel, though there are, of course, lots of disanalogies worth enumerating and considering. At any rate, I've given you the app. You're stuck with it. And it's what makes possible what I've called the MacCready Explosion. You remember the eukaryotic revolution led pretty soon to what's known as the Cambrian Explosion, which was an incredible diversification of life that grew out of the rise of the eukaryotic cell, which itself arose when two prokaryotic cells bumped into each other and neither one ate the other or disassembled the other. In fact, they joined forces and became a more powerful thing. That was the first great technology transfer in the history of evolution. The second was much more recent. It was when our brains began to be invaded by memes, another great technology transfer. You didn't have to invent the wheel yourself. You got that for free. It was in the culture. You didn't have to invent calculus or cost-benefit analysis or French or English. You didn't have to invent any of those. The software was already available and almost free. All you have to do is download it to your neck-top. So MacCready-- the late, great Paul MacCready, who died not long ago-- calculated that if you go back 10,000 years to the dawn of agriculture, and you put all the human beings, plus all their pets, plus their livestock on one side of the scale, and you put all the other animals, the terrestrial vertebrates, on the other side of the scale, then 10,000 years ago the percentage by mass of the humans plus their domesticated animals was a fraction of 1%. So what is it today? Some of you have heard me talk about this. Anybody hazard a guess? Is it 10%? 20%? 60%? 80%? 98%. We have swamped the planet with our cattle and our other domesticated animals and ourselves. This is one of the most vivid and rapid biological phenomena that has ever occurred in the history of life on the planet. And it's taken only 10,000 years, which is just an eye blink of time. This is what MacCready says about this. "Over billions of years, on a unique sphere, chance has painted a thin covering of life-- complex, improbable, wonderful, and fragile. Suddenly we humans have grown in population, technology, and intelligence to a position of terrible power. We now wield the paintbrush." Now, all this happened way too fast to be due to genes. There have just not been enough generations in the last 10,000 years for there to be really major revolutions in our genes. If you could time machine a person from 10,000 years ago to today, they might have some disease vulnerabilities that we don't have. But otherwise, they'd do just fine. Give them a shave and a haircut, dress them up, and they'd pass for one of us. But I'll give you another example, which some of you may know about, some of you may not-- the Flynn effect. The Flynn effect is perfectly real. And IQ is up just in the last century. The average performance of 1932, measured by today's norms, would come out at about 80. That is to say, IQ is on a scale on which 100 is the average. If you're above 100, you're above average. If you're below 100, you're below average.
But if you take the very same tests on which the average was 100 back in 1932 and give them to people today, the people who scored 100 then would score only about 80 by today's norms. This is a robust, clear effect. We're getting smarter. And it's not due to genes. It cannot be due to genes. There hasn't been anywhere near enough time. So what's changed? The short answer: you can't do much carpentry with your bare hands. And you can't do much thinking with your bare brain. That was something that my friend Bo Dahlbom said a few years ago. I thought, boy, that nails it. You can't do much carpentry with your bare hands. You can't do much thinking with your bare brain. A termite colony is a bare brain. Gaudi had a well-equipped brain, full of thinking tools. And where did he get his tools? Well, here's the wrong answer, coming from Freeman Dyson. "Technology is a gift of God. After the gift of life, it is perhaps the greatest of God's gifts. It is the mother of civilizations, of arts, and of sciences." Everything but the first sentence I heartily endorse. But come on, Freeman. Technology is not a gift of God. So the long answer is that cultural evolution designed thinking tools that impose novel structures on our brains. These are evolved virtual machines. Virtual machines are machines made out of information, ways of doing things on your neck-top. And this leads to a chicken and egg puzzle. Did evolved mind tools make us smarter? Or did we evolve to become smart enough to make mind tools? And as usual with chicken and egg problems, the answer is yes. The effect, though, of human culture, which started slow and then sped up greatly, is that human cultural evolution has itself evolved. And I owe the next slide to my friend Matt Ridley. It's his slide, which I do love. On the left, you see an actual hand axe. Our ancestors made these without any apparent change in design for a million years. On the right is a mouse. It was designed by Douglas Engelbart. And it's on the verge of extinction after only a few decades in use. That's a nice measure of the speed of cultural evolution. So if we think of our minds as software, if we think of words as virtual machines-- well, why not? Well, what is a word made of? Sounds? Ink? No, no, no, no. Words are more abstract than that. My colleague Ray Jackendoff, in his wonderful book Foundations of Language, says that words are semi-autonomous informational structures-- sounds like software to me-- with multiple roles to play in cognition. That's what a word is. And just as you can copy software and move it around to other platforms, so you can do the same with words. Think of the diversity of words. Tens of thousands of words in many different languages-- where did they all come from? In thousands of languages, could they have a common ancestor? Yeah, they could. In fact, there are interesting attempts to trace back, and back, and back, to find a common ancestor of even the most diverse languages on the planet. This is all controversial. And of course, there are no written records there, so it has to be very conjectural. But at least we can trace back a lot of the languages a very long way and know that they evolved, that the words in them evolved from earlier languages and often jumped to other languages. Did those words have intelligent designers? No. Words are brilliantly designed. They're great, infectious, replicable, complex informational structures.
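Just to pin down what a semi-autonomous informational structure might look like, here is a crude sketch in Python; the particular fields and the copy operation are my own simplification, offered only as an illustration and not as Jackendoff's analysis or anything from the lecture.

```python
from dataclasses import dataclass, field

# Illustrative only: a word rendered as an informational structure with
# multiple roles (orthographic, phonological, syntactic, semantic). The
# fields are a deliberate simplification, not a serious linguistic analysis.

@dataclass
class Word:
    spelling: str                               # orthographic role
    pronunciation: str                          # phonological role
    grammatical_role: str                       # syntactic role
    uses: list = field(default_factory=list)    # semantic / pragmatic roles

    def copy_to(self, other_lexicon, new_pronunciation=None):
        # Like software, a word can be copied to another "platform" (speaker),
        # sometimes with a small mutation in how it is pronounced or used.
        copied = Word(self.spelling,
                      new_pronunciation or self.pronunciation,
                      self.grammatical_role,
                      list(self.uses))
        other_lexicon.append(copied)
        return copied

cat = Word("cat", "/kat/", "noun", ["refer to a small domestic feline"])
your_lexicon = []
cat.copy_to(your_lexicon)   # transmission from one speaker to another, no neurons copied
```

Structures like this get copied from lexicon to lexicon, mutating a little along the way, without anyone sitting down to design them.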
But they're designed by natural selection, not genetic natural selection, but cultural natural selection. You don't get your words with your genes. You move a baby born to Chinese parents to London, and that baby is going to learn English, not Chinese. Words have evolved. Darwin himself noted that in a famous passage. He saw a striking resemblance between the lineages of words and languages and the theme that he was developing in The Origin of Species. So we have phylogenetic trees like the tree of life. And we have glossogenetic trees, which show the evolution of languages-- the Romance languages coming from Latin, for instance. I don't need to show those to you. So now, then, what is a meme? The other day I went and looked in the online Collins dictionary and got a rude awakening. "A meme is something such as a video, picture, or phrase that a lot of people send to each other on the internet." It goes on, "short for mimeme, both coined by R Dawkins, born 1941, British biologist"? Not the best source of information. But indeed, the word was coined by Richard Dawkins, the author of The Selfish Gene, in his book in 1976. And one thing that's clear is that the meme meme has gone viral. Just a few days ago, on the television show Jeopardy, an American quiz show, the very same quiz show in which IBM's Watson handily beat the best human contestants of all time-- on Jeopardy, the term was used without mention of Dawkins and without definition for the contestants. So they had a whole board of questions about different kinds of memes. I'm not going to give you a chance to look at all those categories. And nobody said boo about the fact that here the categories were all memes. Is this what Dawkins meant when he coined the term in 1976? Uh, no. Not at all. But I want to compare it to another scientific term, the Big Bang. I went online to check the Big Bang, which, of course, was coined by the astronomer Fred Hoyle some years earlier. And what I found was that the first few pages of Google on Big Bang Theory were about the television sitcom. I had to go to the third page before I got anything about Fred Hoyle and the origins of the universe. So "the Big Bang" still means what Hoyle meant. The question is, does "meme" still mean what Dawkins meant when he coined the term? Well, let's look at Dawkins' version very quickly. He says, "I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily in its primaeval soup. But already it's achieving evolutionary change at a rate which leaves the old gene panting far behind. The new soup is the soup of human culture." Well, now, what do internet memes have to do with evolution by natural selection? So has Dawkins' term been hijacked? Has this meaning of meme gone extinct? Many, especially those in the humanities who hate the idea of memes, fervently hope so, I think. And they think this was a suitably, fittingly unrespectable demise of an abhorrent idea. Well, really? What's so bad about it? What's not to like in the idea of a meme? To see why so many are opposed to it, we must look at the key features of Dawkins' concept of memes. Memes are replicators like genes. Culture evolves by a process of blind, purposeless, foresightless natural selection, not by intelligent design. And differential replication or reproduction is the key, not human genius. That's what people don't like.
They don't like the idea that human culture is not due to the authorship of human geniuses over the ages. That's one of the things they don't like about it. Well, you see, internet memes have authors. They are intelligently designed, some of them, by self-styled meme smiths. There are even competitions to see who can design the most viral meme. This is intelligent design run riot on the internet. Are these not such profound differences from Dawkins' memes that they must count as a different type or species altogether? No. And the key is in the word "species." Let me explain. Are dinosaurs extinct? How many say so? No, they're not extinct, in one sense. Birds are direct descendants of dinosaurs. There are probably a few hundred dinosaur descendants within 100 yards of where we're sitting right now. They're not much like the originals. But they are direct descendants of the dinosaurs. That's well-established biological fact. And it was a gradual process, which, of course, is very important. Very gradual change we can believe in. Well, now, the question is, is culture different? Did culture evolve by Darwinian processes? Or did it arrive by some sort of Big Bang, the way Freeman Dyson suggested? And we want to put in yet another gradualism. Cultural evolution happened gradually. The first memes were adopted unwittingly by hominids that didn't know what they were doing or why. They were more like termites in this regard. Reflectiveness about memes came much later. And what's happened is the de-Darwinisation of cultural evolution. It started very Darwinian, very, very much like termite design. And it's become ever more intelligent, as human culture has provided ever greater bounties of tools for the intelligent designers to put in their heads and then use to think of intelligent designs. So memes. Dawkins' list included tunes, ideas, catchphrases, clothes fashions, ways of making pots or of building arches. So memes are ways of doing things. The difference between memes and instincts is simple. Instincts are ways that are passed through the germ line, through the genes. Memes are ways that are passed otherwise, perceptually, socially. At this point, I know some people think, do memes even exist? I've encountered this often. Somebody will say, I don't know if memes even exist. Prove to me that memes exist. They seem like flights of fancy, just metaphors. Well, now, I want you to consider-- how many of you would accept this statement? Hands up those who agree. I don't see any hands going up. Well, what are words? Dogs are a kind of mammal or a kind of pet. Words are a kind of what? Sound, sign, symbol? What they are is a kind of meme. What kind? The kind that can be pronounced. That's what distinguishes them from all other memes. So they're items of culture that spread by being reproduced, differentially reproduced, sometimes with changes or mutations. They form lineages, with differential replication and extinction. Memes evolve just as animals and plants and viruses do. Viruses aren't alive, but they sure do evolve by natural selection. So you don't have to be alive to evolve by natural selection, which is a pretty good thing, because words aren't alive. But they evolved by natural selection. Memes are not alive. They are subject to natural selection. I have a phrase for what viruses are. They're nucleic acid with attitude. That means something about their shape gives them the power, the competence, to provoke their own replication when they get inside a cell.
Memes are virtual machines, software with attitude. They provoke their own replication in various ways, for various reasons. They compete for transmission and also for local influence. What are memes made of? They're made of information. I'm going to speed up a little bit. I want to get to a punchline. People think genes are made of DNA. No, no, no. Genes are the information that is carried by the DNA code. Poems aren't made of ink. Right? You can send a poem to somebody with some ink, but poems are not made of ink. And genes are not made of DNA, although that's their normal vehicle of expression, but not the only one. When you get your genome coded, when you get your genome sequenced, you get a list of A, C, G, and T, and so forth. Those are your genes in a different medium. So words are the best memes we have to use as examples, because they're countable. They have clear lineages. They mutate in meaning and in pronunciation and grammatical role. And they compete for space in brains, the way bacteria and viruses compete for space in bodies. Well, if words are the best memes, why didn't Dawkins say so? Well, in fact, it had already been said: "The survival or preservation of certain favoured words in the struggle for existence is natural selection." And that's Darwin sort of prophesying Dawkins on memes. So words are brilliantly designed, but not by us. Phonemes are one of natural selection's most brilliant inventions-- I'm going to pass over this very swiftly. I think I've run out of time. But first of all, I want you to ask, what counts as replication in words? Is it physical similarity? Or is it something else? Is it physical resemblance? Well, let's look. Cat, cat, cat, cat, cat, cat, cat, cat, cat, cat, cat. How similar were those physical things? Not very. But they were all tokens of the type "cat." And you recognise them easily as such. That's because they're digitised. They are orally digitised as phonemes and orthographically digitised as letters of the alphabet. And this is what makes high-fidelity transmission and replication in culture possible, in the same way that the four-letter code of DNA permits the high-fidelity replication of living organisms. So if words are virtual machines, who designed them? Evolution, cultural evolution. And how are they installed? They're installed by repetition. I'm going to go over that quickly. I go into more detail in my book. How did cultural evolution start? Well, we started with the genetic information highway that is used by all lifeforms. And then, a second information highway evolved by natural selection. And that was social learning. When you have species where the children hang around with their parents, because they're dependent-- they're altricial, rather than precocial. Precocial species sort of hit the ground running. If they need parental care, then they're going to be hanging around their parents. This gives opportunities for information transfer that would not otherwise be there. And social learning is also abetted by other adaptations, some of which are quite easy to see. Well, there's prolonged infancy, which I just mentioned. And then, there's imprinting on parents, which is also seen in geese and ducks and other birds, as Tinbergen and Lorenz famously showed. But now, I want to show you our nearest living relative. And I want to compare her to her. What's the most striking difference? The sclera, the whites of the eyes. Why do we have the whites of our eyes when our nearest neighbours and the orangutans don't?
Because it enhances the capacity for gaze monitoring, for seeing where mum's looking, which enhances the capacity for shared attention, which is, by general agreement among people now working on learning and cultural evolution, a key feature of the transmission of information from parents to children. But once you've got parent-child information passing, then that's a highway that can be parasitised, just the way the internet, which was designed for transmitting classified information about military projects, has been parasitised by internet memes, pornography, and all the rest of the things that we use the internet for. So a second information highway, once it's in place, can be invaded in what Boyd and Richerson, who are the leading theorists, call oblique transmission. And they call these things that are obliquely transmitted rogue cultural variants. Another word for rogue cultural variants is memes. They choose not to use the word "meme" for various reasons. But that's what they are. Other theories of culture need memes just as much as Dawkins' and mine, even if they don't call them memes. They call them traditions or methods or ideas or ways or non-genetically transmitted adaptations and so forth. But they're all memes. Memes take advantage of the information highways built by evolution for many species and enhanced in our species, and our species only. Francis Crick once propounded Orgel's second rule: evolution is cleverer than you are. Obviously, he doesn't mean intelligent design with a capital I and D. He means that evolution, a completely mindless, purposeless process, nevertheless can generate designs of cunning virtuosity and brilliant efficiency that are hard for human engineers, human intelligent designers, to match. Well, intelligent design now exists. We have people like, oh, Bach and Turing and Gaudi, wonderful examples of intelligent designers. And intelligent design is becoming ever more intelligent, thanks to all the new thinking tools that we're creating all the time. And this has some surprising implications. So finally, I just want to finish off this topic. So is this a reductio ad absurdum? Internet memes, are they an embarrassment to Dawkins? Are they a reductio ad absurdum of his concept of memes? I don't think so. An intelligently designed meme is a contradiction in terms? No. Or so what? Here's another contradiction in terms: a splittable atom. After all, the word "atom" originally means unsplittable. We learned that you can split an atom. We didn't change the term. And we learned that you can intelligently design a meme. And they belong to the same class, if not the same species, as the original memes. They've just evolved under different evolutionary regimes. So internet memes are actually prime examples of Dawkins' memes. They replicate because they can, not because they're necessarily good for us. And they have fitness independent of ours. They spread so fast. Tell me if there's anybody in this room who thinks that internet memes are an enhancement to the genetic fitness of the people who make them. You have another thing coming if you think that's likely to be true. They're cultural junk, not cultural treasure. Neither their authors nor their vectors-- that is, those who spread them-- need to understand why they are doing what they're doing, just like spreading cold germs. One of my favourite examples is the Polynesian canoe. In an article by Rogers and Ehrlich, they quote a French philosopher.
He was not writing about Polynesian canoes, but, in fact, about French fishing boats. And he says, "every boat is copied from another boat. Let's reason as follows in the manner of Darwin. It is clear that a very badly made boat will end up on the bottom after one or two voyages and thus never be copied. One could then say, with complete rigour, that it is the sea herself who fashions the boats, choosing those which function and destroying the others." If it comes back, copy it. That's natural selection. The copiers don't have to understand why it's a better boat than the others. They simply trust the fact it came back. Don't fix what ain't broke. Copy it. So when we usually think of culture, we think about the grand, highest levels of culture, where we have the high culture-- opera and great art in museums and so forth, which we spend good money to maintain and preserve. And we very carefully bequeath it to the next generation, and so forth. But in addition to all that great stuff-- and that includes all the science too, of course-- there's all the junk. And it's just as much a part of human culture as the high culture is. And we want to have a perspective which treats all of the culture in the same diagram, in the same picture. And that's what we can do. Memes have their own fitness. And the memes-eye view provides a general perspective on cultural evolution, not just on the treasures, and not just the things noticed or valued, not just the actual inventions. We comprehend less than we think. And that's one of the features that I develop in the book. We don't need to comprehend many of the things in culture that we benefit from, in the same way that a butterfly with eye spots on its wings doesn't need to understand that this is really good at scaring away the birds. It benefits from having the spots and opening up its wings. It doesn't have to understand it to be the beneficiary. And similarly, we don't have to understand many of our cultural traditions that we endorse, spread, keep. They may be very good for us. But we don't have to understand it. So we're living in the age of intelligent design. It has become ever more top-down. And we even have things like GM food and people like Craig Venter. But now, we're entering the age of post-intelligent design. In many fields, intelligent designers are exploiting the truth of Orgel's second rule. Evolution is cleverer than you are. We have genetic algorithms and deep learning and evolutionary architecture and nanotechnology of various sorts. And all of these are Darwin-esque, evolution-like, bottom-up, mindless, competent processes that sift through enormous amounts of data and come up with new ideas. So now, let's recall my earlier question. How could a slow, mindless process build a thing that could build a thing that a slow mindless process couldn't build on its own? We've come full circle. Thank you very much for your attention. Is it actually true that nobody really wants to talk about that in science, because it's not a third-person phenomenon, or am I missing something? Why is there never talk about the most important thing in human biology, in brain science?