It changed the way we organised society, it changed the way we educated ourselves, it changed the idea of work, and it changed the way we did business. We built infrastructure the like of which had never been seen before. Railways, roads, sewers, waterways, ports. These were new technologies. Today we don’t think of the road as a technology, or a sewer. But they are. As Google’s ‘Internet Evangelist’ Vint Cerf says, “If you grow up with a technology, it’s not technology. It’s just there.”
In 2009, John Seely Brown, ex-Chief Scientist of the Xerox Corporation, told a crowd of Silicon Valley business leaders how deeply the industrial revolution was embedded in their high-tech business structures. “The structure and architecture of the firm reflects the structure and architecture of the infrastructure on which the firms are built,” he said. “Organisational infrastructure leverages the properties of infrastructural architectures.” Now, as a couplet this may fail to grab you as much as “You killed my father!” / “No Luke, I am your father!” did me when I was six, but Brown (paraphrasing the work of Harvard’s Alfred Chandler) is saying something profound.
Society is built on top of those roads, railways and schools; they were not created underneath us simply as a means of support. Brown is saying that roads and schools shape us far more than we shape our roads and schools.
Educationalist Sir Ken Robinson points out that, “there were no systems of public education around the world before the nineteenth century. They all came into being to meet the needs of industrialism”. We built our school systems on top of the industrial revolution. We didn’t build the industrial revolution because of the collective effort of a pre-existing state-wide school system. (If you haven’t seen Ken’s TED talk, I recommend it. Not only is it a revelation, he’s also very funny).
We develop new technologies that become infrastructure that shapes society – and the infrastructure of the industrial revolution is still with us. That infrastructure grew rapidly and then reached a plateau. Cars are not radically different from their counterparts a generation ago, nor are trains or indeed aircraft. Neither are roads, railway tracks, airports, the judiciary, the school system or our systems of government. “Our infrastructure has been stabilised for a shockingly long period of time,” says Brown, “and we have now built institutions on top of that that expect that kind of stability.”
Our institutions are still promoting an educational mindset born of the industrial age too. Ken Robinson says:
Every education system on earth has the same hierarchy of subjects. Every one, doesn’t matter where you go, you’d think it would be otherwise but it isn’t. At the top are mathematics and languages, then the humanities, and the bottom are the arts. Everywhere on earth.
Skills that will get you a job in an industrial society are valued the highest. The problem with this is that we’re moving from the industrial age to the information one, and our institutions and education need to shift. This isn’t to say that mathematics and languages aren’t valuable, but that the qualities that come from studying the humanities (understanding social systems, for instance) or the ability of the arts to promote creativity and curiosity will become more so.
“We have a brand new type of infrastructure,” says Brown – and that infrastructure is the internet technologies that allow us to tease out and manipulate the world of data in a way never before possible. This new infrastructure will shape us just as profoundly as our industrial infrastructure did.
A man walks into a shop and picks up a packet of kitchen roll. As he does so an image appears on the packet telling him how much bleach was used in its manufacture. He picks up another and compares. The second gets a ‘green light’ that appears as a ghostly image on the back of the packet, signifying eco-friendliness. He chooses the latter as his purchase. Now he makes a phone call, holding out his hand where the numbered buttons of a keypad appear sketched in light on his palm. He dials the number by tapping his own skin. Later, on the way to the airport, he pulls out his boarding pass and across the top some text appears telling him his flight is twenty minutes delayed.
This sounds like a scene from an (admittedly quite dull) science fiction movie, but it is not. These are scenes from a demonstration of a new device developed at MIT’s Fluid Interfaces Group – a combination of mobile phone, wearable camera and tiny projector that the lab’s director, Pattie Maes, calls ‘Sixth Sense’ – a technology designed to provide seamless and easy access to “information that may exist somewhere that may be relevant, to help us make the right decision about whatever it is we’re coming across,” to help us “make optimal decisions about what to do next and what actions to take.” (Click on the video below to see ‘Sixth Sense’ in action).
Everything is surrounded by a cloud of data you can’t see. A piece of clothing isn’t just the physical garment. In the shadow-world of data it is also how much it costs, whether it was manufactured ethically, the instructions for how best to wash it and so on. Crucially, your behaviour towards it may alter with access to any one of these pieces of data. Imagine walking into a trainer shop and being able to instantly see, by looking at a product’s bar code, whether it was made in a sweatshop or not, or if the shop around the corner had the same shoes on sale.
We are already beginning to layer this world of data on top of our day-to-day experiences. Download the ‘Better World Shopper’ App onto your iPhone for instance and it will give you an instant rating of a manufacturer’s record in regard to human rights, environmental policy, animal rights, social justice and community involvement. Google Goggles makes use of your mobile phone’s camera to recognize landmarks, book covers, even wine labels and return internet searches that relate to what you’re pointing it at. This is data layered over reality, or (depending how you look at it), reality revealed by data.
At a rapid pace we are becoming used to the idea that data should be accessible wherever we are, and on whatever subject we demand it, yet at the same time it’s hard to comprehend that the Internet is younger than I am, and the World Wide Web younger than my eldest niece. The ability of Internet services like the web and e-mail to connect both humans and machines has already had implications for society that we’re only just beginning to wake up to, implications as profound as those of the Industrial Revolution. We are now in a relationship with Internet-based technologies that we can’t get out of. Physicist and computer scientist W. Daniel Hillis has written:
We have linked our destinies, not only among ourselves across the globe, but with our technology. If the theme of the Enlightenment was independence, our own theme is interdependence. We are now all connected, humans and machines. Welcome to the dawn of the Entanglement.
As Vint Cerf said to me recently, “This is not new. We have always been entangled with our technology, we’ve always been entangled with knowledge. It may be more obvious now, because of the way it manifests. But if you were a cave man you might have become quite dependent on tools that you built, because without them you might not be able to feed yourself, so you needed the knowledge to make those, or you needed the knowledge to find somebody who could make them. And then you also had to know that that thing over there was a sabre tooth tiger and it was a really good idea to get away from it, because the people who didn’t understand that didn’t survive to put their genes into the gene pool.” In short, entanglement with knowledge and technology keeps you alive.
Technology’s story is our story. Knowing how to turn Entanglement into Symbiosis, and how the new infrastructure of data and networks can shape us, will determine who wins and who loses as we continue our shift from the Industrial to the Information age. Countries that do not grasp the shift, especially in the way they educate their citizens, will suffer. “We need to rethink a whole set of institutional architectures,” says Brown, to enable us to build organisations that focus on what he calls “scalable peer-based learning”, and what you and I would call ‘staying smart enough to keep up’. Last year in Boston I met Juan Enriquez, author of As the Future Catches You and a seminal speaker on how technology and knowledge are transforming us.
“I worry that if you’re not educated in this stuff, you’re toast,” he said. He’s very clear that new technologies quickly change the fate of nations, especially as knowledge becomes ever more accessible. “You don’t have to own a large piece of land or a lot of resources to get rich very quickly, but you do need to go to school. That didn’t use to be true. It used to be that it didn’t matter how smart you were, if you weren’t the king or part of the noble classes you were toast. Now you can get wealthy, and you can do it very quickly, but you have to do it through education. You see, the consequences of not being educated today are far different from what they were. You know, in the 1950s you had a high school diploma, you went to Detroit you did fine. That’s not true anymore.”
As Ken Robinson remarked to his audience at TED: “You were probably steered benignly away from things at school when you were a kid, things you liked, on the grounds that you would never get a job doing that. Is that right? Don’t do music, you are not going to be a musician. Don’t do art, because you won’t be an artist. Benign advice. Now profoundly mistaken. The whole world is engulfed in a revolution.” He continues,
Our education system has mined our minds in the way we strip-mine the earth for a particular commodity, and for the future it won’t suffice. We have to rethink the fundamental principles on which we are educating our children. What TED celebrates is the gift of the human imagination. We have to be careful now that we use this gift wisely … and the only way we’ll do it is by seeing our creative capacities for the richness they are and seeing our children for the hope that they are. And our task is to educate their whole being so that they can face this future.
In short, critical thought, creativity and curiosity are the skills that need to move up the educational hierarchy as society shifts to a fluid infrastructure built on data and the links between it. Where we were taught to be arithmeticians, now we must become mathematicians. Where we were taught vocabulary, now we must learn semiotics. Where we were taught to accept the industrial infrastructure as a fixed edifice, we must now learn that the information infrastructure is a fluid tool to be pulled and shaped. Where we worked in silos, now we must learn to harness the crowd, to play our part in marshalling our collective creativity to solve the world’s problems and embrace its opportunities. Margaret Thatcher famously said, “There is no such thing as society.” She was entirely and completely wrong.
Einstein said, “We cannot solve our problems by thinking at the same level we were at when we created them”, and by the same token we cannot solve our problems by leveraging the same infrastructure we used in creating them either. Politics, education and the press must, and will, out of necessity, change. Already we are seeing this shift, as the press wonders how to survive while spreading a meme of division and conflict in a world that is increasingly collaborative. We are turning away from our newspapers and turning to each other. MIT is placing its courses online for free. TED is putting the world’s greatest speakers at our fingertips. Politics is the Luddite. It’s not just industrial, it’s pre-industrial. It’ll be the last to change and it’ll hurt the hardest. Those nations that fail to understand this, that fail to change their institutions and equip their populaces by shifting to an educational paradigm born of the coming information age instead of one made for the old industrial one, face a bleak future. Other nations will leapfrog ahead, offering an unprecedented opportunity for the developing world, perhaps taking the story of the 1960s’ most notable failed state (Singapore, now a knowledge powerhouse) to heart.
I call it the ‘Knowledge Combustion Engine’ and you are the fuel in the tank.
I’ve spent the last year being assailed by new ideas and ways of seeing the world at an unprecedented (for me) rate. The coming revolution in personal genomics, the project to create artificial life, the Transhumanists’ journey to ‘transcend our biology’, robots that get mood swings, machines that demonstrate curiosity, a post-scarcity world promised by atomically precise manufacture, holidays in space and our continued entanglement with the world’s biggest machine (the Internet). All of these are to one degree or another coming down the line, as long as the Maldives (and the rest of us) can stay above water, using our technologies and ingenuity to remove carbon dioxide from our atmosphere (while simultaneously ushering in an energy revolution). I’ve met scientists and philosophers, gone diving with a president and invented a cocktail on the way. Now as I approach the end of my journey I’m looking for people who can help me make sense of it, to somehow pull all these strands together into a coherent view.
In his book Engines of Creation: The Coming Era of Nanotechnology Eric Drexler approaches the future by asking three questions: what is possible? what is achievable? and what is desirable? The question of what is possible seems easy to answer. As we learn to control the very atoms of matter, the mechanisms of biology and the power of computation there is, in fact, very little that we can’t do, in a physical (and indeed virtual) sense. Solutions to climate change? Already developed. An end to the energy crisis? No sweat, sign on the line. Holiday in space? Why not, join our frequent flyer programme. World peace even? Seems only reasonable.
But when we ask what is achievable, well that’s a different story. Because what we achieve will largely be determined by what we collectively decide is desirable. As George Church told me all those months ago at Harvard Medical School as we discussed personal genomics, “The only thing that puts this kind of medicine far away is really will, right? The question is, how motivated are we?” Do we, as a planet, have the will to take the bounty on offer while mitigating the risks? To get the medicine but not the weapons? To enjoy abundant clean energy while dealing with climate change? To use our technologies to bring us closer together, rather than isolate us?
It’s to ponder questions like these that I’ve come to meet Chris Anderson, the CEO of TED, the pre-eminent meeting of, as Chris puts it, “people who can offer a lens through which to see the world in a different way.” Every year Chris and his team gather together the world’s leading thinkers from every discipline and give them 18 minutes to tell the rest of the world how they see things. The results can be found on TED.com. Here you can see Ray Kurzweil summarise his law of accelerating returns, or Kevin Kelly talk about his idea of ‘The One Machine’ that the internet will become, or Hod Lipson demonstrate his robots (along with a host of other mind-shifting presentations that make you see things from a different angle). TED tells a different story of our world than the one we’re used to seeing, and it’s the same story I’ve seen on my travels. There is no shortage of fresh ways to see our future. It turns out we’re not necessarily looking at a damage limitation exercise, but a possible renaissance. But first we have to see it. Only then can we make it happen.
Seeing it is a revelation. We’re so used to being told that everything is getting worse, that the planet is doomed, that the next pandemic to finish you off is just around the corner, or that technology will subjugate us. It’s a world where a book called Is it me or is everything a bit shit? becomes a best seller. And it’s not true. Or at least it doesn’t have to be. Klaus Lackner has a machine, that works now, that takes CO2 out of the air. George Church has co-developed a process that can take that CO2, mix it with sunlight (sunlight, for pity’s sake!) and create gasoline. Thin film solar technologies will soon take power to where there is no grid, while the Internet will continue to take the world’s knowledge (accessed on billions of mobile devices) to every corner of the globe. Solar power continues to show exponential rises in efficiency while nanotechnology is already changing the face of manufacturing and will continue to do so. Medicine may soon see an end to a host of the things that kill us. This story is not being told, which is perhaps the biggest threat to our future. The threat is not that the future couldn’t be better, but that because we can’t see it, we don’t know it’s an option.
“The history of ideas is a really thrilling history,” says Chris, “and ultimately that is what will drive all of our futures. There’s a very boring view of the world which is that ‘things happen’ and you can’t really do much about it.” It’s something he’s experienced himself. “After I left university I became a journalist, then I started a company… and then fifteen years were taken over by all the stress of working. I didn’t have much spare time to think. When the whole ‘dot com’ bust happened the huge gift I got was discovering, holy crap, there’s so much amazing new thinking out there.” I know what he means. Before I decided I actually wanted to answer the question “what next?” I was on the same treadmill, too busy to look up to realise that the story we’re told wasn’t necessarily the only game in town. This book didn’t start off with the word ‘Optimist’ in the title. It was my agent Charlie who, when I told him the sort of thing I was finding out, remarked on how uplifting some of it was and suggested the change.
We communicate through stories. It is stories that grab us the most and it is stories we identify with. Hollywood knows this, political spin doctors know this, newspaper editors know this. “What’s the story?!” ask editors pointedly when young journalists bring well-written pieces that lack a narrative. My own editors were keen to make sure this book had a personal story, and encouraged me to make sure it wasn’t lost in the rush of facts. Chris is very interested in stories, and in how the Internet, as it continues its prodigious growth across the globe, can help us, for the first time, tell a story that includes everyone.
The most memorable thing for Chris about the 2009 TED conference was a dance troupe called The Legion of Extraordinary Dancers. “This troupe could not have existed ten years ago. They exist because kids who used to just dance down on the street corner started filming themselves, putting it up on YouTube, and suddenly the community that they’re comparing themselves to is a global community. This kid in Tokyo sees a move from Detroit and innovates within hours, puts it online and so on, so the pace of innovation is dramatically increased.” Jon Chu, who created the troupe by finding the most popular of those YouTube clips, says, “Dance has never had a better friend than technology. Online videos and social networking have created a whole global laboratory online for dance.” It’s not just in dance. “This is happening in hundreds of areas of human endeavour,” says Chris. “I’ve started to call it ‘crowd accelerated innovation’ and I find it incredibly exciting.”
Chris thinks that, rather than letting go of our humanity, we are re-discovering it. What could be more human than the Legion of Extraordinary Dancers? Kids from diverse backgrounds from across the world, innovating and collaborating to bring a new dimension to an art form as old as society, using technology to help them express themselves and innovate physically with their bodies, to meet, to collaborate, to just dance – and then show the world. Look what we did. Here is something of the exponential growth in wisdom, community and understanding I was looking for to go with Ray Kurzweil’s accelerating technologies.
“The acceleration of knowledge and ideas made possible by the fact that humanity is connected for the first time is vast,” says Chris. “The re-discovery of the spoken word as a tool for communicating is a big deal. If you think about it we evolved as human-to-human communicators. It was the village camp fire, the elder standing there with his painted face on a starry night, fire crackling, drums beating and telling a story and every eye locked on his and all those mirror-neurons in all those brains syncing up with what he was saying. By the end of this story his whole village would go to war against another village or make peace.”
“So TED is one of the new storytellers?” I ask.
“It’s one of them. That mode of communication kind of got lost in the print age because it didn’t scale, it was a village-sized technology at best. To me it’s thrilling that it now scales and so one great teacher can inspire many people. One of the things that we see as our role is to try and help nurture that process of re-discovering how to do that, because I think we got to a place where lessons became a person in a suit mumbling behind a lectern reading their notes for an hour while a class of people snoozed.” Suddenly, horrifying images of my ‘O’ level economics class come pouring into my brain. I shudder. “It shouldn’t be like that,” says Chris. “So, one of the things we see, and this was a big kick for me, is TED speakers competing. An unexpected consequence of putting this stuff online is that speakers are looking at what other speakers are doing and are putting in far more preparation time than they ever used to.”
Just as YouTube became a laboratory for dance, TED is becoming a laboratory for the art of oration. Here you will see a statistician blow your mind and end his talk with some sword swallowing. Here you will find Steven Pinker explain that the world is getting safer, and Robert Wright mix philosophy, sociology and stand-up comedy to give one explanation as to why – a theory he calls ‘the non-zero sum game’. I don’t know about you, but that’s the kind of lesson I can get on board with.
“We’ve actually got to bring back real creativity and find a way of nurturing that in the education process,” says Chris. “In the age of Google the notion of having to cram all these little brains with facts is bonkers. What’s needed is to build skills like how do you stimulate people to ask the right questions? how do you stimulate people to have a meaningful conversation? to think critically? What are the lenses you give people to think about the world? I mean, if I’d have been taught Robert Wright’s non-zero view of history that would have had tremendously more value to me than endless facts about French kings.” It seems that the two things Artificial Intelligence needs the most if it’s ever to stop playing chess and start playing Mad Libs are the two things we need the most too: curiosity and creativity.
What is our collective story today and who tells it? The storytellers of our day-to-day lives used to be the press and our politicians. Like all good storytellers they used emotion to hook us into one of two, on the face of it, very uninspiring, dull stories. Story one: life happens to you, the future is not going to be very good (especially if you vote for that guy), it was better in the old days, you’ve got to look after yourself, the world is violent and unsafe, your job is at risk, the generation below you are feral and dangerous, things are changing too fast and you can’t trust those immigrants/scientists/left-wingers/right-wingers/nerds/geeks/religious people/atheists/football fans/the rich/the poor/what you eat/your neighbour. You are alone. Make the best of it. Vote for me. Buy my paper. I understand. (Story two is, in summary: ‘Shock! People have sex.’)
It’s hardly inspiring is it?
But the story is beginning to be told by other people now, by the Legion of Extraordinary Dancers, by speakers at TED talks, by Mohamed Nasheed who battled dictatorship to the brink of his own death and then got on with battling climate change, by Cynthia Breazeal who wants to build robots that help children learn, by Vicki Buck who quit government to create jobs to take on global warming, by George Church who wants you to stay healthy longer, by Eric Drexler who wants to usher in a post-scarcity world using technology on the nanoscale, by the good people at Konarka who take electricity out of the sky and give it to the developing world. A story being told by the curious and the smart, that inspires the curious and the smart in all of us, by people who wonder and ask the kind of questions that haven’t been asked before. Crucially, none of them wait for permission to ask those questions, or then to find the answers. It is being told through writers who find themselves travelling across America and readers of blogs who might say in the pub, “did you know the technology exists to make petrol out of the air?” It is being told by the cult of the possible, who seek to achieve, to bring us what we desire. Peace. Understanding. Space to love each other. People who encourage us to evolve.
Eric Drexler has written, “As the Web becomes more comprehensive and searchable, it helps us see what’s missing in the world. The emergence of more effective ways to detect the absence of a piece of knowledge is a subtle and slowly emerging contribution of the Web, yet important to the growth of human knowledge.”
I think we’re beginning to see, collectively, what’s missing, and crucially we’re now able to do something about it. Technology doesn’t give you permission like your teachers did. It gives you agency – to ask, to learn, to connect, to do. It says, “go on then, show me what you’ve got”.
“I don’t know that the future’s going to be better,” says Chris. “But I think there’s a very good chance that it will be, and I think there’s something everyone can do to further increase that chance. There are several quite profound and inspiring ways of thinking about the world that suggest there are these trends that have the potential to drive a better future, and I think there is such a thing as moral progress, driven not by any difference in the DNA kids are born with, but just driven by what they see; seeing more of humanity just naturally flicks on certain switches that make people more empathetic. Of course, the future might well be truly horrible. I think it’s all to play for and I think everyone of sound mind and conscience should be in the game, trying to shape it in the right way. It’s a very false and shallow view of history to say that it’s just one thing after another. Ultimately though our history is the history of ideas. It’s a really thrilling history and ultimately that is what will drive all of our futures.”
Ideas, creativity, curiosity – and dancing. Now there’s a mix.
More of my talk with Chris will, of course, make it into the book…
It’s amazing how quickly you can accept international travel as workaday. When I started my journey a flight heralded a feeling of adventure in me. Now, it’s like getting in a car. Another thing that’s changed is my attitude to my interviewees. When I first secured an interview with my quarry in Boston I was slightly intimidated. ‘How do you talk to someone like that?’ I asked myself, the ‘that’ in question being Ray Kurzweil. Now, as I come to the end of my journey and try to tie it all together, I find less trepidation in myself. I’ve spent the last year meeting extraordinary people, and I’ve got used to it. Turns out extraordinary people have plenty enough ordinary about them to get hold of.
I arrive in Boston, deal with the ever rude and superior immigration staff and am picked up by Tracy Wemett, who you may remember as Konarka’s PR woman and driver of some, shall we say, reckless enthusiasm. Tracy, on hearing of my return to Boston, has generously offered me her basement for the week, which makes a welcome change from hotels. Still, we’ve got to get to her apartment alive which, given her driving, is not a certainty.
Since I saw Tracy last it seems I haven’t been the only one to notice her maverick approach to the road. One speeding ticket too many and she’s been required to take a driving education course by the state of Massachusetts. The results are reassuring. She tells me, “I was told I’m the sort of person who will make a road where there isn’t one.” She pauses. “Apparently that’s not good.”
I spend the next day preparing for my interview with Ray. (I also take a visit to meet genius-entrepreneur Howard Berke at Konarka, who was, like many genius-entrepreneurs, a mixture of enthralling, socially odd and genuinely entertaining. More on him in my chapter on Solar).
Ray Kurzweil is variously an inventor, guru, madman, prophet or genius depending on who you listen to. One indisputable truth is that Ray is a very good inventor. He invented the first machine that could scan text in any font and convert it into a computer document, a technology he applied to building a reading machine for the blind (which led to him, on the side, inventing the flatbed scanner and the text-to-speech synthesizer too). Stevie Wonder was the first customer – and this in turn led to Ray inventing a new breed of electronic synthesizers that captured the nuances of traditional instruments. (In a former life as a musician I coveted the ‘Kurzweil K2000’ but, not being a very successful musician, I could never afford one). Our interview opens in much the same way as Ray’s last book The Singularity is Near (hereafter referred to as TSIN). “The philosophy of my family, the religion, was the power of human ideas and it was personalised,” he says. “My parents told me, ‘you Ray can find the ideas to overcome challenges whether they’re grand challenges of humanity, or personal challenges’ ”.
Ray’s journey to visionary genius/ techno-prophet/ crazy person (delete as appropriate depending on your prejudices) had its genesis in his attempt to work out a way to time his inventions for maximum impact. “I realized that most inventions fail not because the R&D department can’t get them to work but because the timing is wrong. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment,” he writes on page three of TSIN. So Ray started looking at technology trends and he saw something extraordinary – a clear, unmistakable pattern of exponential innovation, something he calls ‘the law of accelerating returns’ – a phenomenon centred around the idea that technology regularly doubles in efficiency. Such doubling is seen, for instance, in the increasing processing power of computers. Reality has kept pace with the predictions of ‘Moore’s law’ with almost unwavering allegiance, with performance per dollar doubling about every 18 months. But Ray says the effects of the law can be found, well, nearly everywhere, that the law of accelerating returns is the governing law of all creation.
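The compounding Ray describes is easy to make concrete: if price-performance doubles every 18 months, the gain over t years is 2 to the power of t/1.5 – roughly a hundredfold per decade. A few lines of Python make the point (my arithmetic, not a calculation from TSIN; the function name is mine):

```python
def price_performance_multiplier(years: float, doubling_years: float = 1.5) -> float:
    """Growth factor for a quantity that doubles every `doubling_years` years.

    With an 18-month (1.5-year) doubling time, the multiplier over
    `years` is 2 ** (years / 1.5).
    """
    return 2 ** (years / doubling_years)

print(round(price_performance_multiplier(10)))  # a decade: roughly 100x
print(round(price_performance_multiplier(30)))  # thirty years: 2**20 = 1,048,576x
```

Thirty years of steady 18-month doublings is exactly twenty doublings, which is why a millionfold improvement sounds outlandish yet follows from entirely mundane annual progress.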
To understand the implications of Ray’s idea you have to get your head around how potent a force it is if something has the propensity to double. Think of it this way. Let’s say you travel a metre with each step you take. If you take ten steps you’ll have covered ten metres. Now imagine that instead of each step progressing one metre, it somehow doubles the distance you covered with the last one. So while your first step covers one metre, your second covers two and by your third your stride is four metres. The difference between ‘normal stepping’ you and ‘doubling stepping’ you is extreme and gets ever more so. As a doubling stepper your first ten steps will cover not ten metres, but one thousand and twenty-three. Instead of covering the equivalent of about 1/10th of a football field you’ve covered over ten. And with your next step you’ll cover ten more – with the step after that covering another twenty whole pitches.
By the time you’ve done just 27 steps, your latest stride alone covers 67 million metres – or to put it another way, more than one and a half times round the world. Your next step? You double that distance and cover another 134 million metres. At this rate you could walk to the sun and back – and be well on your way to Mars – in 39 steps (your last step alone having covered 274,877,906,944 metres). One can only imagine the trousers you’d need. Meanwhile, normal stepping you is about a third of the way down a football pitch. Now, of course, you can’t step like that but technology, says Ray, can. And he’s not wrong.
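If you want to check the stepping arithmetic for yourself, it’s a two-line sum. (A toy sketch of the numbers above, nothing more – step n of the doubling stepper covers 2 to the power n−1 metres, so the running total is a geometric series.)

```python
# Linear vs doubling steps, distances in metres.

def linear_distance(steps):
    """Total distance after `steps` one-metre strides."""
    return steps

def doubling_distance(steps):
    """Total distance when each stride doubles the last (1, 2, 4, ...)."""
    return 2**steps - 1  # sum of the geometric series 1 + 2 + ... + 2**(steps-1)

for n in (10, 27, 39):
    print(n, linear_distance(n), doubling_distance(n))
```

Ten doubling steps gets you 1,023 metres; thirty-nine gets you roughly 550 billion – which is why exponential trousers matter.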
Certainly on my trip I’ve seen other examples of mankind’s exponential adventure, in the plummeting cost of genome sequencing, or the ‘cost per watt’ performance of solar technologies for example. Ray cites these examples and others. The first hundred pages of TSIN almost bludgeon the reader with graph after graph, based on historical data showing exponential growth in the number of phone calls per day, cell phone subscriptions, wireless network price-performance, computers connected to the internet, internet bandwidth and so on. These all have a computing flavour, but Ray sees exponential growth of knowledge too, citing exponential growth in nanotechnology patents as an example. What about the economy? Ray plots exponential growth in the value of output per hour (measured in dollars) in private manufacturing and in the per-capita GDP of the US. Ray quotes example after example because he wants us to get past what he sees as an inherent prejudice in our human thinking.
“Our intuition is linear and I believe that’s hard-wired in our brains. I have debates with sophisticated scientists all the time, including Nobel prize winners, who take a linear projection and say ‘it’s going to be centuries before we…’ and ‘we know so little about…’ – and here you can fill in the blank depending on their field of research. They just love to say that. But they’re completely oblivious to the exponential growth of information technology and how it’s invading one field after another, health and medicine being just the latest.”
You can’t get to Mars in 39 steps wearing linear trousers (like the ones most of our minds wear). You need exponential ones (like technology has). But because we’re hard-wired to think in linear, rather than exponential terms we fail to see when things are coming, argues Ray. We’ll be far further than we think, far quicker than we expect. Ray predicts for instance that by the middle of the century we’ll have artificial intelligence that exceeds human cognition, a game-changing explosion of intelligence that we will merge with to usher in the next stage in our evolution – a human-machine hybrid, enhanced with similar exponential bounty brought to us by entwined revolutions in nanotechnology and biotechnology. Aging will be ‘cured’ and we’ll be able to move onto a more stable platform than our frail biology. At the same time we’ll have solved the energy crisis and dealt conclusively with climate change.
“All these Malthusian concerns that we’re running out of resources are absolutely true if it were the case that the law of accelerating returns didn’t exist,” he says. “For instance, people take current trends in the use of energy and just assume nothing’s going to change, ignoring the fact that we have 10,000 times more energy that falls on the Earth from the Sun every day than we are using. So if we restrict ourselves to 19th Century technologies, these Malthusian concerns would be correct.” In other words, the law of accelerating returns in solar energy will soon see a green energy revolution, as the technology keeps doubling its efficiency. Ray reckons five years from now solar will be taking coal to the cleaners when it comes to cost per watt. We won’t be switching to solar because we want to save the planet, we’ll be doing it to save our bank accounts.
“I just had a debate this week at a conference held by The Economist with Jared Diamond who basically sees our civilization going to hell in a hand-basket and points out various trends and makes this assumption that technology is a disaster and only creates problems and he has really no data to point to, it’s just aphorisms and scoffing at technology with no analysis. But he’s got a bestselling book because people love to read about how we’re heading to disaster.”
Understanding what Ray is getting at requires you to accept that he sees all creation as an exercise in information processing. Everything can be expressed as data coming in, some kind of manipulation or interaction, and data going out. So, two atoms collide (data in), they interact in some way (data processing) and emit light and heat (data out). This is the most boring way ever to describe fire, but it doesn’t take away from the essential premise that everything can be viewed as a manipulation of information. In other words, everything (including you) is an ‘information technology’ and therefore the law of accelerating returns becomes the fundamental law that governs all creation.
In 1999 Ray published a book called The Age of Spiritual Machines in which he applied this law to make predictions, and handily he made a bunch for the decade from 2009. Critics and advocates alike have leapt on these, loudly proclaiming “Ray was right!” or “Ray was wrong!” depending, it seems, on how they view the world – and all ignoring the fact that Ray didn’t say his predictions were for one year, but for the period beginning 2009. “Most of Kurzweil’s predictions are actually astoundingly accurate,” writes one blogger, while another asserts his forecasts are “ludicrously inaccurate.” Oh dear.
My own analysis is that, with the odd caveat, Ray seems to be on the right track with his predictions and many seem extremely prescient. According to Ray 89 are correct, 13 are “essentially correct”, three are partially correct, two are ten years off, and just one is wrong (but he claims it was tongue in cheek anyway). Certainly there is some pride in Kurzweil’s response to his critics and you could argue he’s stretching the point a bit when he defends some of his predictions, massaging the semantics of the prediction to match the current situation, but, all that aside, he’s still been right more often than he hasn’t. By anybody’s reckoning that’s prediction nirvana, and a skill any investor would love to have (oh, Ray’s latest venture? A hedge fund.)
But part of the problem with Ray Kurzweil, or rather part of the problem in talking about Ray Kurzweil is that he raises strong emotions. Trying to separate reasoned debate from the howl of emotion that his work provokes is hard. Take the view of Douglas R. Hofstadter, now a cognitive scientist at Indiana University, but more famously the author of Gödel, Escher, Bach: An Eternal Golden Braid – an attempt to explain how consciousness can arise from a system, even though the system’s component parts aren’t individually conscious. (This is a key area of study for Ray too, because it is through reverse engineering the human brain that he believes we’ll be able to unlock the mechanisms of mind, replicate them in machines and so free ourselves from the biological limitations of our brain). Here’s what Hofstadter has to say about Ray’s ideas:
“What I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two…”
That’s like Stevie Wonder saying, “I can’t work out if Paul McCartney is a genius or a wanker”. Such is the trouble with talking about Ray. (You can see the full text of the interview this comes from here)
As I comment throughout An Optimist’s Tour of the Future, the advance of new technologies, particularly biotechnology, makes many people (including me) uncomfortable – and then Ray comes along and says, ‘belt up, things are going way faster than you thought, and by the way, that means I’m not going to die. Would you like to transcend your biology with me? Hurry now’. It’s no wonder our linear-trousered brains are stretched to the limit, no wonder some people find Ray just too difficult to engage with. And on the other side of the coin are those who do see Ray as some kind of prophet, whose ideas save them from the sticky issue of their mortality. Ray’s ‘Singularity’ – the moment at which ‘strong AI’ arrives and we merge with it – has been called “the Rapture of the nerds” (a phrase coined by science fiction author Ken MacLeod). These Utopian-techno-nerds don’t really help Ray’s cause. I advocate the approach of Juan Enriquez, the founder of Harvard Business School’s Life Sciences Project, and another Boston resident, who told me, “Do I always agree with Ray? No. Does he make me think? Always.”
It seems to me (from my linear trousered perspective) that progress in robotics, AI, synthetic biology and genomics brings philosophical questions such as “what does it mean to be human?” into your living room, and not in an ‘interesting-debate-over-a-glass-of-wine’ sort of way, but in a ‘right-in-your-face-what-are-you-going-to-do-about-it?’ sort of way.
When the possibility that the hand your mate Robin lost to cancer three years ago can be replaced by a robotic one with a sense of touch becomes a real option we begin to ask ourselves, ‘Is that hand really part of Robin? If I shake that hand am I really shaking Robin’s hand? Gee I don’t know. I feel kinda weird’. (By the way, Robin isn’t fictional, he’s Robin af Ekenstam and you can watch a video of his new hand being attached here). And just as we can start to engineer robot hands and merge them with humans, we will soon, thanks to the law of accelerating returns, be able to engineer genuine robot intelligence and merge it with our brains, argues Ray.
“The basic principles of intelligence are not that complicated, and we understand some of them, but we don’t fully understand them yet. When we understand them we’ll be able to amplify them, focus on them – we won’t be limited to a neo-cortex that fits into a less than one cubic foot skull and we certainly won’t run it on a chemical substrate that sends information at a few hundred feet per second, which is a million times slower than electronics. We can take those principles and re-engineer them and we’re going to merge them with our own brains”.
It’s statements like this that bring Ray into conflict with many scientists who think he’s not so much running before he can walk, as getting in a jet fighter straight out of the crib. Although, for Ray, that’s kind of the point. Crib to jet fighter is really just a few doublings after all, the law of accelerating returns in action. But for some, Ray is a bit like Tracy. He makes a road where there isn’t one, they say.
One thing is certain. If a conscious, human-like intelligence is ‘computable’ (i.e. it can be run on a machine substrate) the processing power to compute it will be within reach of even your desktop very soon. Hans Moravec wondered, “what processing rate would be necessary to yield performance on par with the human brain?” and came up with the gargantuan figure of 100 trillion instructions per second, which is one of those numbers that generally makes most of us go “hmmm, I think I’ll make a cup of tea now.” To put this number in context, as I was ushered into the world in the early seventies IBM introduced a computer that could perform one million instructions per second. This is one hundred-millionth of Moravec’s figure. By the dawn of the millennium chip-maker AMD was selling a microprocessor over three and a half thousand times quicker (testament to a technological journey that had been populated with continual exponential leaps in processing power throughout the intervening period). This yielded a chip that was still nearly thirty thousand times less powerful than the brain’s computational prowess (by Moravec’s reckoning) but a staggering upswing in power nonetheless. Intel have just released their ‘Core i7 Extreme’ chip which is forty times faster than the AMD device from 2000 and computes at the mind-numbing speed of 147,600,000,000 instructions per second – about 1/700th of Moravec’s figure. At this rate your laptop will achieve the same computational speed as the human brain within a couple of decades. Soon after that, if the exponential trend continues, your laptop (or whatever replaces it) will have more hard processing muscle than all human brains put together. This will happen sometime around the middle of the century according to Kurzweil.
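Taking Moravec’s 100 trillion figure at face value, the back-of-envelope sum is easy to sketch (an illustration, not a forecast – the 18-month doubling period is the Moore’s-law rule of thumb quoted earlier, and the chip figure is the Core i7 number above):

```python
import math

# How many Moore's-law doublings from a current chip to Moravec's
# estimate of the brain's processing rate, and roughly how long?

BRAIN_IPS = 100e12   # Moravec's estimate: 100 trillion instructions per second
CHIP_IPS = 147.6e9   # the 'Core i7 Extreme' figure quoted above

doublings = math.log2(BRAIN_IPS / CHIP_IPS)
years = doublings * 1.5  # assuming one doubling every 18 months

print(f"{doublings:.1f} doublings, roughly {years:.0f} years to brain parity")
```

The sum comes out at a little over nine doublings – somewhere around fifteen years, if (a big if) the trend holds.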
Supercomputers have passed Moravec’s milestone and it’s therefore no surprise to find various projects using them to try to simulate parts of animal and human brains, merging neuroscience and computer science in an attempt to get to the bottom of what’s really going on in that skull of yours. It’s important to realise that simulating something often takes more computing power than being something (aircraft simulators have more computers than actual aircraft for instance) and a complete simulation of an entire human brain running in real-time is still beyond the reach of even the most powerful computers. But not for long. Henry Markram, whose Blue Brain project works by simulating individual brain cells on different processors and then linking them together, believes “It is not impossible to build a brain, and we can do it in ten years.” He’s even joked (or not, depending on how seriously you take the claim) that he’ll bring the result to talk at conferences. Markram has similarly upset more conservative voices in the AI field. Even Ray thinks he’s over-optimistic. (The prediction falls outside the curve predicted by Ray’s graphs by a hefty margin).
You can see Markram’s TED talk (where he suggests he’ll be bringing the Blue Brain back to the conference as a speaker within a decade) below.
I find myself thinking back to my talk with George Church, Professor of Genetics at Harvard Medical School. If you accept evolution as an explanation of how humanity came to be, that the common genetic code of all living things is proof that you, I and Paris Hilton all, at some point, evolved from the same source (that source being a collection of molecules that became the first cell) then one way of looking at the human being (and therefore the human brain) is ‘simply’ as a collection of unthinking tiny bio-machines computing away – reading genetic code, and spewing out ‘computed’ proteins and the rest. We’re machines too, just wet biological ones. You are an information technology.
Robotics pioneer Rodney Brooks makes this argument as well. “The body, this mass of biomolecules, is a machine that acts according to a set of specified rules,” he writes in Robot: The Future of Flesh and Machines:
Needless to say, many people bristle at the use of the word “machine”. They will accept some description of themselves as collections of components that are governed by rules of interaction, and with no component beyond what can be understood with mathematics, physics and chemistry. But that to me is the essence of what a machine is, and I have chosen to use that word to perhaps brutalize the reader a little.
In short, intelligence and consciousness are computable, because you and I are computing it right now. I compute, therefore I am. George Church was less brutal in his take on the ‘human machine’. “I think of us more and more as mechanisms,” he told me. “We’re starting to see more and more of the mechanism exposed and it just makes it more impressive to me, not less. If someone showed me a really intricate clock or computer that had emotions and self awareness and spirituality and so forth I’d be very, very impressed and I think that’s where we are heading, where we can be impressed by the mechanism.”
But something’s not sitting right with me, and it’s not that I don’t like being called a ‘machine’ (believe me, that’s nothing compared to some of the heckles I’ve had). In fact, the machine metaphor makes a kind of sense given what I found out at Harvard.
It was Cynthia Breazeal, head of MIT’s Personal Robots Group, whom I met last time I was in Boston, who expressed it best. “The bottom line is there’s still a long way to go before we can have a simulation actually do anything. I mean they can run the simulation but what is it doing that can be seen as being intelligent? How does that grind out into real behaviour, where you show it something and have it respond to it? I still think there’s a lot of understanding that needs to be done. I do, I really do. I think we’re making fantastic strides but I think,” (she dropped to a conspiratorial whisper, smiling) “there’s a lot we still don’t know!”
Cynthia nailed the root of my discomfort. Someone can give you the best calculator in the shop, but if you’ve never learned any maths, it’s largely useless to you. If the brain is computable, it’s not that we won’t have the processing power to recreate its mechanisms, but that we’re still a long way off working out how to drive that simulation. If you’d never learned to read, your eyes could take in the shape of every letter on this page, but it’d mean nothing to you, and printing it out or photocopying it a hundred times (or even inventing the printer and photocopying machine in order to do so) wouldn’t help you either. Just as you had to learn to read, AI and neuroscience research, collectively, have to tease out not only what it is they’re looking at, but what it means.
Sure, there’s exponential growth in processing power, but the jury is out as to whether there is an equivalent growth in understanding how to use that power more ‘intelligently’, to create (to paraphrase one of Henry Markram’s analogies) a concerto of the mind by playing the grand piano of the brain. If there had been, maybe your new laptop would be one-seventh as smart as you are. But it isn’t. This is where the strength of projects like the Blue Brain (and Cynthia’s work) really lies – as tools to slowly help us to pose the right questions that will lead to a better understanding of intelligence, emotion and consciousness.
This is what I really want to ask Ray. “Have you got any graphs that clearly show an exponential growth in understanding? Or in our collective ability to make sense of the great philosophical questions, the intractable questions – ‘What is life?’, ‘What is consciousness?’” I ask. “Have we seen the law of accelerating returns in our understanding of these questions? Is our knowledge, our wisdom also keeping pace?”
“Well, I’m actually working on that in connection with my next book, which is called How the Mind Works and How to Build One,” says Ray.
Well he would be, wouldn’t he?
More of my interview with Ray will, of course, be in the book…
It’s a big question, but one that is particularly pertinent to my interview today with Robotics and Artificial Intelligence researcher, Hod Lipson. Because Hod and his team build machines that find truths.
The search for truth has a long history (one could argue it is history) which I’m not about to get into (and it’s not the book I’m writing) but if someone said to me ‘Go on then, history of truth in 5 minutes’ I’d probably reach for two key figures – Socrates (born Greece, 469 BC) and Francis Bacon (born England, 1561), not least because they both died in interesting ways (which is useful for storytelling).
Socrates was put to death by the state of Athens for “refusing to recognise the gods recognised by the state” and “corrupting the youth” (explaining perhaps why Black Sabbath rarely toured in Greece). Despite clear chances to escape his fate, Socrates placidly took a drink containing poison hemlock prepared by the authorities. Francis Bacon, many believe, died as a result of trying to freeze a chicken. It might seem odd therefore to hold up both as key figures in the history of reason.
You may also wonder why I am suddenly diving into the past when I’m writing a book about the future. Bear with me, and blame Hod Lipson and his robots.
Both Socrates and Bacon were very good at asking useful questions. In fact, Socrates is largely credited with coming up with a way of asking questions, ‘The Socratic Method’, which itself is at the core of the ‘Scientific Method’, popularised by Bacon during ‘The Enlightenment’ – a period of European history when ‘reason’ and ‘faith’ had an almighty bunfight and the balance of power between church, state and citizen was being questioned. Lots of philosophers and scientists challenged the prevailing orthodoxy of religious authority by saying ‘we need to make decisions based on critical thinking, evidence and reasoned debate, not on sacred texts and religious faith’ and the church replied with ‘yes, but we own most of the land, plus people really like the idea of God. Ask them’.
The Socratic Method disproves arguments by finding exceptions to them, and can therefore lead your opponent to a point where they admit something that contradicts their original position. It’s powerful because it kind of gets people to admit to themselves that they’re wrong. It’s also pretty good at exposing your own (as well as others’) prejudices and gaps in reasoning. Lawyers use it a lot. Don’t let this influence you against it. Lawyers also use toilet paper and you’re not about to reject that idea.
Here’s an example.
During excessive bouts of hard and progressive rock emanating from my older brothers’ bedrooms my dad used to say, “people only play electric guitars because they can’t play real ones” (by which he meant acoustic guitars played by nice chaps called Julian with sensible haircuts, as opposed to electric guitars played by long-haired geezers called Dave and Jimmy).
First step of the Socratic method: assume your opponent’s statement is false and find an example to illustrate this. Take this YouTube clip of Pink Floyd’s David Gilmour playing acoustic guitar, for instance. Clearly Dave Gilmour can play a ‘real’ guitar as well as an electric one and my dad must grudgingly accept the fact. At this point dad would assert that Dave Gilmour was ‘the exception that proved the rule’.
Next step. Take your opponent’s original statement and restate it to fit their new modified position. “So, dad, you’re saying that people only play electric guitars because they can’t play acoustic ones, except for Dave Gilmour who can do both?”. Then return to step one.
Ironically this led us to playing dad far more Black Sabbath, Pink Floyd, Aerosmith and Led Zeppelin than if he’d kept his theory to himself. (MTV’s ‘unplugged’ series would become his nemesis). Eventually dad would have to admit the truth – which was not that the rock musicians we listened to weren’t talented, but that he just didn’t like rock music.
This example is trivial but you can use the method to demonstrate some pretty esoteric points, and expose fundamental new insights. A popular example that can really annoy your mates in the pub is proving that things don’t have a colour.
Socratic argument, while undoubtedly one of the most useful things ever devised, can also annoy the tits off people, as the man who lends it his name found out to his cost. The story is that Socrates used his technique to prove a lot of bigwigs in Athenian society were mistaken in their thinking – and they responded by having him killed. This proves that engaging people’s brains is never enough if you want change. You have to engage their emotions too. As Professor George Church said to me during our talk last week, “Politicians know how effective emotion is in comparison to rational thought. You can really move mountains with emotion. With rational thought you just end up getting people to change the channel”.
By the time Francis Bacon went to university, the teachings of Aristotle (a student of Socrates’ student, Plato) had become entrenched as the way to conduct ‘scientific inquiry’. Aristotle had pioneered deductive reasoning, the practice of deriving new knowledge from foundational truths, or ‘axioms’. In short, it was generally believed that if you got enough boffins together to have a solid debate, scientific truth would be teased out over time. This worked well for mathematics where axioms had been long established (e.g. the basic mathematical operations – plus, minus, divide, multiply) but was less good for finding out new stuff about the physical world. Much to Francis’ dismay it seemed that science involved sitting around in armchairs. Nobody was getting off their arse and observing anything new or doing any experiments. Nobody was finding the ‘axioms of reality’ (which is arguably a good name for a progressive rock outfit).
In common with Socrates, Bacon stressed it was just as important to disprove a theory as to prove one – and observation and experimentation were key to achieving both aims. In a way he was Socrates 2.0 (which is another good name for a prog band). He also saw science as a collaborative affair, with scientists working together, challenging each other. All of this is a hallmark of scientific good practice today – observe, experiment, theorise… and then try to prove yourself wrong – all in collaboration with peers who can give you a hard time. It’s important to note that Bacon himself wasn’t a distinguished scientist. His main contribution was the articulation and championing of an empirical scientific method. That said, he did do the odd experiment, including the one that killed him.
While travelling from London to Highgate with the King’s personal physician, Bacon wondered whether snow might be used to preserve meat. The two got off their coach, bought a chicken and stuffed it with snow to test the theory. In his last letter Bacon is said to have written, “As for the experiment itself, it succeeded excellently well.” Some historians think the chicken story is made up, but the popular account is that the act of stuffing the chicken led to Bacon contracting fatal pneumonia. This is possibly the only instance of bacon being killed by eggs.
Hod Lipson looks like a very friendly bear. He has a round, but not chunky, frame, thick black hair and looks healthy and happy. His features are open and innocent. He’d seem almost childlike if it weren’t for his demeanour – a kind of solid confidence that only comes with age. You get the feeling Hod knows exactly what he wants to achieve. I suspect he was a mischievous child, curious, poking his nose into most things. And whilst most of the scientists I’ve met are driven by an almost insatiable curiosity, Lipson takes curiosity to a new level, literally. He’s curious about curiosity.
“ ‘Artificial Intelligence’ is a moving target,” he says. “So, you can build a machine that plays chess, then you build one that can drive through city streets and so on. People argue about whether it’s really intelligent or not – and usually it’s argued it isn’t. I want to create something where nobody can argue it isn’t intelligent. So, I was thinking about what’s an unmistakable, unequivocal hallmark of intelligence, and I think it’s creativity and particularly curiosity.”
“Does a curious and creative machine mean a sentient machine?” I ask.
“Well, what does that mean?” asks Hod. “I have to push you on what you mean by ‘sentient’.”
Bollocks. I’ve just been asked by a leading researcher into intelligent machines to define sentience – one of the biggest pending questions in philosophy. This is worse than when Cynthia Breazeal asked me to come up with an alternative word for ‘robot’. Or if Andrew Lloyd Webber asked me to say something nice about one of his musicals. I feel out of my depth and we’re barely into our chat. I do the only thing I can.
“Well, let me ask you,” I say. “What do you mean by it?”
Hod pauses. I’m not sure he was expecting a return serve, especially one that in any decent rule book would be considered cheating.
“I interpret it as deliberate versus reactive. Er… human-like…” He pauses again. “I don’t know.”
A-ha! Well, like I said, it is one of the biggest pending questions in philosophy.
“Alive?” I venture.
“It’s difficult to identify what life is right?”
And there’s the rub. Life has eluded a definitive definition for as long as we’ve tried to make one – as has ‘intelligence’. So if you’re trying to create ‘artificial intelligent life’ you’re already in a quagmire of semantic lobbying. I’m reminded of my chat last week with George Church (Professor of Genetics, Harvard Medical School). “I think life is actually a quantitative measure,” said George, by which he means something that can be defined not with either a ‘yes’ or a ‘no’ but on a scale. “It’s not something where you either have it or you don’t. So I would say that there are some things that are more alive than others.” And I don’t think it’s overstating things to say that Hod certainly has made machines that are ‘more alive’ than many others.
Then he says an interesting thing. “I think men have this hubris of wanting to create life. We try to create life out of matter.”
‘Hubris’ is one of those words like ‘semiotics’ and ‘insurance’ that I’ve heard a lot but didn’t really know what it meant for a long time (I’m still struggling with ‘insurance’). I look up ‘hubris’ when I get back to my hotel. It means excessive pride or arrogance. In classical literature it’s usually a precursor to, and the cause of, a character’s downfall. The legend of Icarus is a good example. With that one word Hod has encapsulated the two defining criticisms aimed at Artificial Intelligence research. At one end there are those who say we’ll never create a truly artificial intelligence and that we’re arrogant to believe we can. At the other there are those who worry we will build smart machines and in our arrogance be blind to the danger that they will one day do away with or enslave us. (There are more measured positions in between the two, such as Hubert Dreyfus’s and Hod’s own – both of whom suggest that a lot of AI research has been heading in the wrong direction).
Hod doesn’t believe in the latter James Cameron-esque scenario, but sees a confederacy of man and machine. He has some sympathy for the ‘singularity hypothesis’ of Ray Kurzweil (who I’m interviewing early next year) which talks of a ‘merger of our biological thinking and the existence of our technology’ but doesn’t see a machine-human hybrid (Juan Enriquez’s Homo Evolutis) as the only scenario. “Merging could also mean intellectually merging, meaning that they explain stuff to us.”
Lipson became famous (in robotic circles) for his work building robots that are arguably self aware. His Starfish robot, which I see sitting forlornly on a shelf in his lab, is iconic for learning to walk from first principles. It wasn’t given a program that told it how to move its various motors and joints to achieve locomotion. Instead Lipson gave it a program that enabled it to learn about itself – and use this knowledge to subsequently work out how to move.
“The essential thing was it created a self image,” Lipson tells me. “It created that self image through physical experimentation. So it moved its motors, it sensed its motion and then it created various models of what it thought it might look like – ‘maybe I’m a snake? maybe I’m a spider?’ We told it to create models – multiple different explanations that might explain what it knows so far.”
The robot then stress-tested those models by sending them into competition with each other. “It creates an experiment for itself that focuses on the area where there’s the most disagreement between what the models predict. We put in the code to look for disagreements,” explains Hod.
For example, let’s say the robot is wondering which move to do next in order to learn about itself more. It could try a movement that, when completed, the models all predict will leave it sitting at an angle of about 20 degrees. One model might predict 19 degrees, another 21 degrees, a third 21.2 degrees. However, if it tries another move the models have very different ideas about the result. One says the robot will be at an angle of 12 degrees, another predicts 25 degrees, a third says 45. This latter movement is more likely to be the one the robot chooses next, because it will learn the most from it, and get an idea of which model is closer to the truth. It’s where there’s most disagreement that there’s most to learn. “We tell it ‘you create models – multiple different explanations for what you see – and then look for what new experiment creates disagreement between the predictions of these candidate hypotheses’,” says Lipson. “That’s the bottom line of curiosity.”
The models that do best ‘survive’ and the program kills off the others. The remaining models ‘give birth’ to a generation of slightly mutated, tweaked versions of themselves and another round of ‘survival of the fittest’ ensues. Or to put it another way, over many iterations the program homes in on a model that describes reality. The predictions get closer and closer to what actually happens, until one model is deemed sufficient for the robot to say ‘this is what I look like’.
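A toy version of that mutate-and-cull loop fits in a few lines. The sketch below (again my own illustration, nothing to do with Lipson’s actual code) evolves a population of one-number ‘models’ – candidate slopes – until one of them explains some observed data:

```python
import random

random.seed(0)

# Observations from a process we pretend not to understand: y = 3x.
data = [(x, 3 * x) for x in range(10)]

def fitness(slope):
    """Negative squared prediction error: higher is better."""
    return -sum((y - slope * x) ** 2 for x, y in data)

# A population of random candidate models (each is just a slope).
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Survival of the fittest: keep the best half...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and let them 'give birth' to slightly mutated copies of themselves.
    children = [s + random.gauss(0, 0.1) for s in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(best)  # the surviving slope, which should have homed in on 3
```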
If all this talk of ‘mutation’, successive ‘generations’ and ‘survival of the fittest’ sounds slightly familiar, that’s because this kind of mathematics takes its inspiration from Darwin’s theory of evolution. Mathematicians might call it ‘symbolic regression’ or say Lipson’s work is a good example of ‘genetic algorithms’ – a technique that’s been around for decades. What’s different about Lipson’s work is the implementation, something he calls ‘co-evolution’.
“We set off two lines of enquiry. So one of them is the thing that creates models and the other is the thing that asks questions, and they have a predator/prey kind of relationship, because the questions basically try to break the models.” The questions try to find something the models disagree about, so the weaker ones can be killed off. It’s like Anne Robinson in code.
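That predator/prey arrangement can be sketched by pitting two evolving populations against each other: the questions mutate to maximise disagreement between the models, and the models mutate to survive the answers that come back. Everything below – the hidden ‘world’, the one-number slope models – is invented for illustration:

```python
import random
import statistics

random.seed(1)

def world(x):
    """The hidden reality the models compete to explain (unknown to them)."""
    return 2.0 * x

observations = []                                      # (question, answer) pairs so far
models = [random.uniform(-5, 5) for _ in range(8)]     # candidate slopes
questions = [random.uniform(0, 10) for _ in range(8)]  # candidate experiments

def model_error(m):
    return sum((ans - m * q) ** 2 for q, ans in observations)

def disagreement(q):
    return statistics.pstdev([m * q for m in models])

for round_ in range(30):
    # One line of enquiry: questions evolve to break the models.
    questions.sort(key=disagreement, reverse=True)
    questions = questions[:4] + [q + random.gauss(0, 0.5) for q in questions[:4]]
    # Ask the single most divisive question; record reality's answer.
    q = questions[0]
    observations.append((q, world(q)))
    # The other line: models evolve to survive the evidence gathered so far.
    models.sort(key=model_error)
    models = models[:4] + [m + random.gauss(0, 0.2) for m in models[:4]]

print(min(models, key=model_error))  # should have homed in on the hidden slope, 2.0
```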
It has to be said that if you see the Starfish robot ‘walking’ you wouldn’t immediately think it had a future career as a dancer. It doesn’t so much walk as stagger and flop forward. It’s less Ginger Rogers and more gin and tonic. Still, the achievement is not to be sniffed at. It had no parents and no role models. This was a robot actively learning to do something no one had taught it. And robots that learn this way have all sorts of interesting possibilities – as Lipson was about to find out.
You can see Hod demonstrating his Starfish robot in this TED talk.
With colleague Michael Schmidt he wondered if the same computer program he’d placed at the core of his Starfish robot could go beyond merely working out what its host body looked like and begin to reach useful conclusions about the wider world.
“We said ‘let’s take it out of this particular body and let it control motors of any experiment’.” Their first idea was to give the robot brain control of motors that set up the starting position for a ‘double pendulum’ before letting it fall. Using motion-capture technology, the robot was also able to accurately record the pendulum’s motion in each experiment.
A double pendulum is a bonkers little contraption. It consists of two solid sticks jointed together in the middle by a free-moving hinge. Double pendulums do wacky things (You can see one in action here). Whilst the top pendulum swings from left to right, the bottom one likes to mix it up. Because it’s not attached to a stationary point (like the top pendulum) but to something moving (the bottom end of that swinging top pendulum) it will swing left, swing right, spin round clockwise, or counterclockwise, seemingly at random. Lipson and Schmidt chose the double pendulum because it’s a good example of a system that’s simple to set up but which can quickly exhibit chaotic behaviour – and therefore would be a good test of the technology’s ability to build a useful conceptual model of what was going on. The results were startling. In fact, the program went a long way to deriving the laws of motion. In 3 hours.
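The chaos is easy to see in simulation. The sketch below uses the standard idealised double-pendulum equations of motion (equal masses and arm lengths, no friction – my simplifying assumptions, not a description of Lipson’s rig) and runs the same fall twice with starting angles that differ by a hundred-millionth of a radian:

```python
from math import sin, cos

g, m, l = 9.81, 1.0, 1.0  # gravity, bob mass, arm length (equal arms assumed)

def accelerations(t1, t2, w1, w2):
    """Angular accelerations from the standard ideal double-pendulum equations."""
    d = t1 - t2
    den = l * (3 * m - m * cos(2 * d))  # shared denominator, never zero
    a1 = (-3 * m * g * sin(t1) - m * g * sin(t1 - 2 * t2)
          - 2 * m * sin(d) * (w2 ** 2 * l + w1 ** 2 * l * cos(d))) / den
    a2 = (2 * sin(d) * (2 * m * w1 ** 2 * l + 2 * m * g * cos(t1)
          + m * w2 ** 2 * l * cos(d))) / den
    return a1, a2

def simulate(t1, t2, w1=0.0, w2=0.0, dt=0.0005, steps=30000):
    """Semi-implicit Euler integration: 15 simulated seconds of falling."""
    for _ in range(steps):
        a1, a2 = accelerations(t1, t2, w1, w2)
        w1 += a1 * dt
        w2 += a2 * dt
        t1 += w1 * dt
        t2 += w2 * dt
    return t1, t2

# Release twice from (almost exactly) the same position...
run_a = simulate(2.0, 2.0)
run_b = simulate(2.0, 2.0 + 1e-8)
# ...and the hundred-millionth-of-a-radian difference has blown up.
print(abs(run_a[1] - run_b[1]))
```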
It followed the same process as it had when it sat in the robot – guessing at equations that might explain what it had seen so far, then setting up new experiments (in this case new starting positions for the pendulum) that targeted areas of most disagreement between the equations. “With the double pendulum it very quickly puts it up exactly upright, because some models say it’s going to fall left and some models say it’s going to fall right. There’s disagreement. It’s not a passive algorithm that sits back, watching,” says Hod, smiling. “It asks questions. That’s curiosity.”
Just like humans, it seems machines learn best when they ask their own questions and find their own answers, rather than being given huge amounts of data to absorb. “Most algorithms you see are passive. They’re data intensive. You feed in terabytes of data and these algorithms just sit back and watch. But in the real world you can’t sit back and watch. You have to probe, because collecting data is expensive, it takes time, it’s risky.” By contrast, Lipson’s machine brain “only ever sees what it asks for. It does not see all the data.” In fact, Lipson decided to compare the efficiency of this ‘active’ method of enquiry against a more traditional passive ‘here’s all the data, what can you tell me?’ method. “It doesn’t work. It has to go through a reasoning.”
Remind you of anyone? I see the hemlock taker and the chicken freezer partially re-incarnated in machine form. The programming consigns inaccurate models to the dustbin by getting the robot to admit there are others that offer a better explanation of the real world (hello Socrates) and does this with evidence won via experimentation (hello Bacon). What Lipson has done is create a computational methodology for asking good questions. And asking good questions is what it is all about when it comes to understanding anything.
“Physicists like Newton and Kepler could have used a computer running this algorithm to figure out the laws that explain a falling apple or the motion of the planets with just a few hours of computation,” said Schmidt in an interview with the US National Science Foundation (who helped fund the research).
However, we’re still a long way off what I (or Hod) would call an intelligent machine. It still takes a human to work out if anything the machine has found is useful. The machine didn’t know it had found laws of motion; it took Hod and his colleagues to recognise the equations that were produced. “A human still needs to give words and interpretation to laws found by the computer,” says Schmidt. So, we’re still some distance from Hod’s confederacy of man and machine, where they explain stuff to us.
One of the areas where Hod’s brains could turn out to be useful is cracking problems where there is lots of data but we still have little idea what’s going on. Indeed, plenty of people with acres of data have been beating a path to his door, including heavyweight data generators like the Large Hadron Collider at CERN near Geneva. “The people at CERN said ‘there is this gap in a prediction of particle energy. Here’s data for 3,000 particles. Can you predict something?’ ” The result was a strange mix of the elating and the disappointing. “We let it run and it came up with a beautiful formula,” says Hod. “We were very excited, but it was a famous formula they already knew. So for them it was a disappointment… But for us… We rediscovered something that people are famous for.”
Again, the crucial insight comes from humans, who can tell whether something means anything or not. It’s the crucial step – and without it the results are largely worthless (which is not to say the time saved is not incredibly useful). I’m reminded of a scene from Douglas Adams’ comedy The Hitchhiker’s Guide to the Galaxy, in which a supercomputer called Deep Thought is built by a race of supersmart humanoids to answer the ultimate question. ‘What is the answer?’ ask the humanoids, awaiting instant enlightenment. ‘To what?’ says the computer. ‘Life! The Universe! Everything!’ they respond. ‘The ultimate question!’ The computer announces there is an answer… but it will take several million years to compute. At the duly allotted time, millennia later, the humanoids’ descendants gather to hear the answer, which is announced to be ‘42’. The problem, suggests Deep Thought, is that they don’t really know what ‘the question’ is.
"You're not going to like it"
No-one understands the irony in this story more than Hod Lipson. “In biology there are many systems where we do not know their dynamics or the rules that they obey.” So he set his machine looking at a process within a cell. True to form, the program generated an equation in double-quick time. But what did it mean?
“We’re still looking at it,” says Hod with a smile. “We’re staring at it very intently. But we still don’t have an explanation. And we can’t publish until we know what it is.”
“You don’t understand what it’s saying?”
“No,” says Hod.
“But in science you go from observations which produce data, to models which produce predictions, to underlying laws – and from there you go to meaning. What’s good is that we can go from data straight to laws, whereas previously people could only go from data to predictions. So now a scientist can throw it some data, go and have a cup of coffee, come back and see 15 different models that might explain what is going on. That saves a lot of time. Previously coming up with a predictive model could take a career. Now at least you can automate that so you can focus on meaning.” That’s a powerful enabling technology. More time to think. Hod is doing for thinking what dishwashers have done for after-dinner conversation. Although it may not always work out that way.
Several months later I e-mail Hod to see if they’ve got anywhere with the equation his machine generated from the cell-observing experiment. “We’re still struggling,” he writes. “We’ve been trying for months to get the AI to explain it to us through analogy. But we don’t get it.” It could be that Hod’s machine has discovered something our human brains are just not smart enough to see. “Maybe it’s hopeless,” he says. “Like explaining Shakespeare to a dog.” This is why Hod is trying to convince his collaborators to publish the equation anyway – and see if anybody else out there can shed light on its meaning.
"Friends, Romans... Hey! Is that a biscuit?!"
Because Hod is curious about what makes us curious I ask him if his program could come up with a model of how to learn.
“Could we use your program to observe data about how machines learn, or how people learn, and come up with a model of learning?”
We’re getting seriously abstract now.
Hod laughs. “That’s what we’re working on now. We’re working on what we call self-reflective systems. We want to make machines meta-cognitive – they are thinking about thinking.”
This is something of a departure from a lot of AI research. “Almost all the AI systems program a way of thinking and they do that thinking for you – which is the extent of it. You could argue that’s about as smart as a lizard. But if you want to get to human-like intelligence, you need a brain that can think about thinking…”
Sadly (for this blog) Hod’s work in this area is currently unpublished, so out of courtesy I’m leaving a more detailed explanation of what we discussed until the book is published. In summary, however, Hod is taking his model of ‘co-evolutionary AI’ to the next level. Instead of modelling robot physiology, the motion of pendulums or data from physicists in Switzerland, he has one robot brain trying to model how another one learns – and then, in true Lipson style, he’s asking one to challenge the other in order to find out more. In this way one brain builds a model of how the other learns, and can start to make helpful suggestions.
“That’s self-reflection,” says Hod. He adds, “That’s important in life. You can learn things the hard way, or you can think about how you’ve been thinking.”
It’s something you can imagine Socrates or Bacon saying.
It’s a rollercoaster. Today I meet Juan Enriquez, described by himself as a ‘quasi-catholic in a Jesuit tradition’ and as a ‘renaissance futurist’ by his wife (whom I’m lucky enough to meet later). To be honest it’s hard to pigeon-hole Juan. His CV includes ‘peace negotiator’, ‘Harvard professor’, ‘urban development Tsar’ and ‘biotech investor’. During our conversation he says, “there’s only two things that matter: Nike and Nissan”. This strikes me as rather a trivial observation for one of America’s leading thinkers. He explains: ‘Just Do It and Enjoy the Ride’.
He’s a surprisingly reserved and gentle man in person, for someone who says quite remarkable and often strikingly important things. Voted best teacher at Harvard, he’s regularly called upon to speak on how the future might pan out. This year he opened the mighty TED talks. His address was typically powerful, thought-provoking and very funny. He has an ability to synthesise and distil difficult and interwoven concepts into something you can get hold of. His book As the Future Catches You is one of the best attempts to make sense of how biology and silicon are combining in extraordinary ways, and is an essential read (I think that’s the first book I’ve ever said that about). It’ll take you two hours. “It started off as 3,000 pages and took me six years to condense,” he tells me, reminding me of one of my favourite quotes, from George Bernard Shaw, who once wrote to a friend, “Sorry I wrote a long letter, I did not have time to write a short one”. You can see some of the themes in it discussed in this TED talk:
Juan describes his life as “a series of strange accidents”. ‘Strange accidents’ is rather a self-effacing way of describing an impressively eclectic powerhouse of a CV. Those “accidents” arguably started rolling off the conveyor belt when, as a young man living in Mexico, Juan walked into his parents’ room and said, ‘I’m not learning enough here, so I’m going to go to school in the US’. “I applied late, I had no idea it was hard to get into these places and even though I spoke English (my mother’s American) I’d never studied and written in English. I have no idea why I was admitted. I mean, during the admission exam I was asked to write a paragraph and I asked ‘what’s a paragraph?’ I had no idea.”
He describes feeling “utterly stupid” for his first semester but obviously caught up fast and maintained that accelerated intellectual velocity, being admitted to Harvard to study Government and Economics, after which he returned home to ‘change Mexico’ – a childhood ambition born of a belief that his home nation too readily disadvantaged those not in the ruling class. “I always thought I would work in and change Mexico. I was bothered by the poverty I saw there.” He became the youngest Budget Director ever (in the Ministry of Planning and Budget), then returned to Harvard before being offered “a dream job” back in Mexico as head of the Urban Development Corporation. So far, so impressive (especially when you consider that during his time in Mexico Juan was also part of the team that negotiated peace with the Chiapas Indians). And then Juan discovered something more important. A revolution that would affect not only Mexico but the entire world. And all because of some lonely looking geeky guy at a New Year’s Eve party.
“I’m at a New Year’s party and there’s this guy sitting over at a corner table by himself and I think ‘poor bastard, it’s New Year’s’, and I walk over and sit down and talk to him for the rest of the night. By the end of the evening we decided to sail across the Atlantic together in 2 weeks. By the end of that trip I had decided that I was going to change my entire career and learn biology.”
The guy in question was a young Craig Venter, who went from being an obscure scientist to sequencing the first human genome. Juan recalls, “That conversation was so interesting, all of a sudden I thought ‘I want to learn about this.’ I wondered, who gets affected by this stuff? What does it do? What does it matter?” In fact, Juan was so interested in these questions, he set up the Life Sciences Project at Harvard Business School.
"Poor bastard" - Juan Enriquez
In As the Future Catches You Juan writes:
“Your future, that of your children, and that of your country depend on understanding a global economy driven by technology. Understanding code, particularly genetic code, is today’s most powerful technology”.
We talk about this in the context of a society that actually doesn’t seem to be engaging with the implications of the genomics revolution (as I wasn’t before researching my own book). Juan says, “I worry that if you’re not educated in this stuff, you’re toast.” He’s very clear that new technologies quickly change the fate of nations, especially as knowledge becomes ever more accessible.
“You don’t have to own a large piece of land or a lot of resources to get rich very quickly, but you do need to go to school. That didn’t use to be true. It used to be that it didn’t matter how smart you were, if you weren’t the king or part of the noble classes you were toast” (Juan likes the word ‘toast’).
“Now you can get wealthy, and you can do it very quickly, but you have to do it through education. You see, the consequences of not being educated today are far different from what they were. You know, in the 1950s you had a high school diploma, you went to Detroit, you did fine. That’s not true anymore.” So, it’s no pleasure for Juan to recount a meeting with GM workers that he attended alongside the governor of Michigan three years ago, where “60% didn’t consider it necessary for their kids to go to college. There are consequences of that decision.”
Don't become this - go to school
This is one example of what Juan calls an ‘anti-intellectual backlash’. I wonder, given that today more and more people have access to knowledge, why he perceives a rejection of engaging with it, applying it, or understanding it in some quarters? It’s something Mark Bedau talked about when I was in Denmark and it’s something I see too. I call it ‘aspirations to mediocrity’ and it worries me, because if you’re not informed you’re out of the loop, and you can get left behind. And people who get left behind tend to get angry at some point.
Juan argues that to succeed as a nation, a corporation, an individual you have to be agile, to adapt. “It took me a damn long time to figure out. It’s Darwin. It’s the ability to adapt and adopt. It’s not the most powerful who survive, it’s those who best adapt to change.”
“In the US there’s a powerful anti-intellectual tradition that battles against the aspirations of the founding fathers. One of the most important things that people keep forgetting about America and the reason why I think America became truly a world power is because so many of the founders were adamant about education and science. Just look at Franklin, or Jefferson and you’ll see people deeply committed to critical thinking and education. There was a huge tradition of science and technology education, freedom of inquiry and that’s powered this country in an extraordinary way. But there’s a backlash to that.”
Juan believes the backlash is born of (reasonable) fear. “If you look at a lot of the things that we’re building, they’re scary as hell to some people. You talk about programming cells or sentient robots or evolution of the species using technology – that is profoundly disturbing to some people because this stuff is very powerful. It upends industries, it changes how long we live, it changes what our kids may look like. I look at that stuff and say, ‘OK, it allows people who couldn’t have children to have children. We’re going to do away with some of the diseases, and so on’. Other people look at that in absolute horror. They say, ‘Stop the world. This isn’t natural. This isn’t what God ordered. I want to get off.’ They’re looking for an element of stability and certainty. This desire tends to manifest most during the periods of fastest change, like now. You want something to hold on to. And if you’re not part of that ride, if you don’t think you can play in that game then you get this anti-intellectual counterpoint.”
It strikes me that maybe one of the implicit drivers behind the creationism renaissance is so profound a fear of the possibility of us deliberately evolving into something else (Juan dubs this next technology-enhanced hominid homo evolutis) that one line of defence is to deny evolution’s central role in the world. In the Edge Foundation’s lovely book What are you optimistic about? Juan wrote an essay in which he said that our change as a species “will involve an ever-faster accumulation of small, useful improvements that eventually turn homo sapiens into a new hominid. We will likely see glimpses of this long-lived, partly mechanical, partly regrown creature that continues to rapidly drive its own evolution. …many of our grandchildren will likely engineer themselves into what we would consider a new species, one with extraordinary capabilities”. Intelligent design indeed. If you’re religious (or even if you’re not) it’s no surprise that the ‘Man playing God’ argument is strongly attractive. It’s a worry for a lot of people, and, I’d say, not an unreasonable one.
Juan isn’t worried about our self-directed evolution. “The notion of evolving into something else is terrifying until you consider the question ‘Are Rush Limbaugh and Howard Stern the be all and end all of evolution?’ If that’s all she wrote, then I’m scared. I look at this stuff and say, ‘if my kids could live 200 years with a good quality of life, if they could see a lot further than I could, if they could re-grow their joints, if they could hear a lot better than I can, if they could have brains that were 50 times as powerful as mine? Good for them. Cool. I’d rather things carry on.’ ”
Evolutionary work-in-progress 1
Evolutionary work-in-progress 2
But can our moral frameworks keep up? (Einstein famously said “It has become appallingly obvious that our technology has exceeded our humanity”.) Juan has an interesting observation. “To me religion looks like an evolutionary tree. Every civilisation has to a greater or lesser extent some religious moral background. There has to be some evolutionary advantage to having that kind of moral backbone and that kind of belief system, and I think it’s because it traces how you move from a hunter-gatherer society, where everybody knows each other and watches each other all day, into a town, into a city, into an empire… And just like most animals almost every religion and God has gone extinct. The interesting question is which ones survive and how do they survive and how do those moral backbones evolve? And what does a moral ethical background look like, should you start to speciate, should you start to alter fundamental characteristics of what we consider human?”
One thing history has taught us is that knowledge advances no matter how hard you try to suppress it. As Septimus Hodge says in Tom Stoppard’s Arcadia “You do not suppose, my lady, that if all of Archimedes had been hiding in the great library of Alexandria, we would be at a loss for a corkscrew?” You can stop knowledge’s advance in some places for a while if you’re brutally draconian or conservative but not for long – and the more technology allows autonomy of the individual (from wireless internet access to the world’s knowledge, to power independence through solar technology) the harder it becomes to suppress the spirit of enquiry that characterises enough of the human race to ensure that the growth of knowledge marches on. It’s harder to stop people discovering stuff when we aim to give a laptop to every child. “When you start putting every MIT course online, when kids start having access to TED talks…” Juan looks into space. “You know, knowledge is the great equaliser”. Knowledge is growing exponentially, and for those who want to engage, access to it is becoming easier.
I return to my current preoccupation – what moral frameworks are useful in this ever changing world? Well, if we take the evolutionary argument, it’s the ones that adapt and adopt. Those belief systems that are agile enough to keep us kind while embracing change are likely to prevail. If there is an evolutionary advantage to having a moral set of beliefs or a God that embodies them then you can’t keep your God static. Your God better evolve with you. This, I think, doesn’t mean watering down the essential need for compassion, it means helping us work out how to continually keep it central to what we do in a rapidly changing world. This is why Karen Armstrong’s ‘Charter for Compassion’ is so interesting.
The future won’t be a smooth ride. “Things evolve at different times at different paces, people make different choices and that’s one of the reasons countries disappear so often. There really are consequences to your choices. If you choose to shut your doors and not follow technology you will vapourise your sovereignty. So, there are galactically stupid policies as far as individual countries are concerned. The future of the species worries me a lot less.”
One thing Juan is worried about is what happens to those nations that don’t engage with the knowledge revolution. “There’s going to be a great deal more failed states. That’s bad. I mean, there used to be a restructuring mechanism for failed states – Genghis Khan would come by and install a government. Today, in a knowledge economy, why would you want to go and take over a failed state?”
I’d argue that a failed state represents an opportunity, an under-utilised platform of potential human innovation. After all, Singapore was a failed state 50 years ago, an example Juan uses regularly to demonstrate how nations can turn themselves around in short order if they invest in education and knowledge creation. Perhaps it won’t be Genghis Khan coming by looking for natural resources; perhaps it’ll be Craig Venter or Google looking for untapped smarts. Let’s insist they bring Karen Armstrong with them.
I’ll leave the interview there – if I covered everything we spoke about I’d be writing the book. There’s a lot of ideas here I’m still not pulling together coherently, but it’s a start and I welcome comment.
By coincidence my interaction with Juan doesn’t end when I say goodbye to him at his office. I bump into him and his wife – a warm and sociable curator – at the airport, flying to New York to celebrate their anniversary. It’s a rare opportunity to discuss things ‘off topic’ and it’s nice to hear them talk warmly of their children and upcoming birthday celebrations. There’s something deeply comforting about hearing one of the most interesting thinkers on the planet discuss what flavour of birthday cake to get.
It's not just the future I think about...
I arrive in New York and make my way to Long Island City, where I’m staying with my friend Colin, a neuroscientist that I once shared a house with in London, and a man equally caressed by doubt and genius. He’s actually in San Diego tonight being courted by a biotech research laboratory so I have his place to myself. The apartment is full of papers with titles like: “Hippocampal CA3 output is crucial for ripple-associated reactivation and consolidation of memory”. What’s different about seeing this sort of thing today as compared to coming across similarly titled documents during the time we lived together is that now I want to pick these things up and understand them. Not tonight though, my mind is full of everything I’ve learned in Boston – I feel like a glass of wine.
Round the corner from Colin’s I find a great little wine bar called Domaine, where I fall into a long conversation with Johanna, a friend of the owners and a fashion designer originally from Puerto Rico. In the end we talk for about 5 hours, drinking fine wine provided by the establishment and covering every subject from religion to politics to art to relationships. It’s just what I need and a perfect New York kind of evening, in the city where you can meet just about anyone if you’re willing to start a conversation…