I am tired of me. After the best part of two weeks criss-crossing America promoting the book I have done so many interviews and answered the same questions so many times that I’ve almost forgotten my own name. The result? “I’m not sure who this guy is but I’m really bored with him”.
At one point I did 26 radio interviews in a row. Dumbest question: what’s the future of marriage? (It’s not a dumb question per se, but it is a dumb question to ask an author whose book has nothing to do with the subject.) Nicest question: who inspired you most in your personal journey? (My answer: my mum.)
I’m not complaining (although I felt physically wrecked by the time I got back). There were some amazing experiences along the way, including:
Bumping into Ray Kurzweil at KRON-TV in San Francisco
Getting nice reviews in the Washington Post (“Stevenson turns out to be an energetic tour guide to the cutting edge of science”) and the Wall Street Journal (“Sharp and Fascinating”) – the latter thanks to Matt Ridley
Having a party thrown for me in New York by the magnificent Laura Galloway where, among many incredible guests, I got to say hello to Juan Enriquez again – and meet his Homo Evolutis co-author, Steve Gullans
Hanging out with my lovely publicist (Beth Parker) and editor (Rachel Holtzman) for a day
Getting to meet (and be interviewed by) Seth Shostak – chief astronomer at the mighty SETI Institute (and finding out his ‘statistical hunch’ is that we’ll discover alien life within 25 years)
Doing talks to Microsoft, Google and eight departments of the US Government (and telling them government isn’t working – and why)
Perhaps the scariest thing was flying into Heathrow knowing that I had to be the final speaker at The Story, following talks by Martin Parr, Cory Doctorow and Graham Linehan… gulp (see next blog post)
I’ve spent the last year being assailed by new ideas and ways of seeing the world at an unprecedented (for me) rate. The coming revolution in personal genomics, the project to create artificial life, the Transhumanists’ journey to ‘transcend our biology’, robots that get mood swings, machines that demonstrate curiosity, a post-scarcity world promised by atomically precise manufacture, holidays in space and our continued entanglement with the world’s biggest machine (the Internet). All of these are, to one degree or another, coming down the line – provided the Maldives (and the rest of us) can stay above water, using our technologies and ingenuity to remove carbon dioxide from the atmosphere (while simultaneously ushering in an energy revolution). I’ve met scientists and philosophers, gone diving with a president and invented a cocktail along the way. Now, as I approach the end of my journey, I’m looking for people who can help me make sense of it all – to somehow pull all these strands together into a coherent view.
In his book Engines of Creation: The Coming Era of Nanotechnology Eric Drexler approaches the future by asking three questions – what is possible? what is achievable? and what is desirable? The question of what is possible seems easy to answer. As we learn to control the very atoms of matter, the mechanisms of biology and the power of computation there is, in fact, very little that we can’t do, in a physical (and indeed virtual) sense. Solutions to climate change? Already developed. An end to the energy crisis? No sweat, sign on the line. Holiday in space? Why not, join our frequent flyer programme. World peace even? Seems only reasonable.
But when we ask what is achievable, well that’s a different story. Because what we achieve will largely be determined by what we collectively decide is desirable. As George Church told me all those months ago at Harvard Medical School as we discussed personal genomics, “The only thing that puts this kind of medicine far away is really will, right? The question is, how motivated are we?” Do we, as a planet, have the will to take the bounty on offer while mitigating the risks? To get the medicine but not the weapons? To enjoy abundant clean energy while dealing with climate change? To use our technologies to bring us closer together, rather than isolate us?
It’s to ponder questions like these that I’ve come to meet Chris Anderson, the CEO of the TED Talks, the pre-eminent meeting of, as Chris puts it, “people who can offer a lens through which to see the world in a different way.” Every year Chris and his team gather together the world’s leading thinkers from every discipline and give them 18 minutes to tell the rest of the world how they see things. The results can be found on TED.com. Here you can see Ray Kurzweil summarise his law of accelerating returns, or Kevin Kelly talk about his idea of ‘The One Machine’ that the internet will become, or Hod Lipson demonstrate his robots (along with a host of other mind-shifting presentations that make you see things from a different angle). TED tells a different story of our world than the one we’re used to seeing, and it’s the same story I’ve seen on my travels. There is no shortage of fresh ways to see our future. It turns out we’re not necessarily looking at a damage limitation exercise, but a possible renaissance. But first we have to see it. Only then can we make it happen.
Seeing it is a revelation. We’re so used to being told that everything is getting worse, that the planet is doomed, that the next pandemic to finish you off is just around the corner, or that technology will subjugate us. It’s a world where a book called Is it me or is everything a bit shit? becomes a best seller. And it’s not true. Or at least it doesn’t have to be. Klaus Lackner has a machine, working now, that takes CO2 out of the air. George Church has co-developed a process that can take that CO2, mix it with sunlight – for pity’s sake! – and create gasoline. Thin film solar technologies will soon take power to where there is no grid, while billions of mobile devices continue to carry the world’s knowledge to every corner of the globe. Solar power continues to show exponential improvements in price and performance, while nanotechnology is already changing the face of manufacturing and will continue to do so. Medicine may soon see an end to a host of the things that kill us. This story is not being told, and that is perhaps the biggest threat to our future: not that things couldn’t be better, but that because we can’t see it, we don’t know it’s an option.
“The history of ideas is a really thrilling history,” says Chris, “and ultimately that is what will drive all of our futures. There’s a very boring view of the world which is that ‘things happen’ and you can’t really do much about it.” It’s something he’s experienced himself. “After I left university I became a journalist, then I started a company… and then fifteen years were taken over by all the stress of working. I didn’t have much spare time to think. When the whole ‘dot com’ bust happened the huge gift I got was discovering, holy crap, there’s so much amazing new thinking out there.” I know what he means. Before I decided I actually wanted to answer the question “what next?” I was on the same treadmill, too busy to look up to realise that the story we’re told wasn’t necessarily the only game in town. This book didn’t start off with the word ‘Optimist’ in the title. It was my agent Charlie, who when I told him the sort of thing I was finding out, remarked on how uplifting some of it was and suggested the change.
We communicate through stories. It is stories that grab us the most, and it is stories we identify with. Hollywood knows this, political spin doctors know this, newspaper editors know this. “What’s the story?!” ask editors pointedly when young journalists bring in well-written pieces that lack a narrative. My own editors were keen to make sure this book had a personal story, and encouraged me to make sure it wasn’t lost in the rush of facts. Chris is very interested in stories, and in how the Internet, as it continues its prodigious growth across the globe, can help us, for the first time, tell a story that includes everyone.
The most memorable thing for Chris about the 2009 TED conference was a dance troupe called The Legion of Extraordinary Dancers. “This troupe could not have existed ten years ago. They exist because kids who used to just dance down on the street corner started filming themselves, putting it up on YouTube and suddenly the community that they’re comparing themselves to is a global community. This kid in Tokyo sees a move from Detroit and innovates within hours, puts it online and so on, so the pace of innovation is dramatically increased.” John Chu, who created the troupe by finding the most popular of those YouTube clips, says, “Dance has never had a better friend than technology. Online videos and social networking have created a whole global laboratory online for dance.” It’s not just in dance. “This is happening in hundreds of areas of human endeavour,” says Chris. “I’ve started to call it ‘crowd accelerated innovation’ and I find it incredibly exciting.”
Chris thinks that rather than letting go of our humanity, we are re-discovering it. What could be more human than the Legion of Extraordinary Dancers? Kids from diverse backgrounds across the world, innovating and collaborating to bring a new dimension to an art form as old as society, using technology to help them express themselves and innovate physically with their bodies, to meet, to collaborate, to just dance – and then show the world. Look what we did. Here is something of the exponential growth in wisdom, community and understanding I was looking for to go with Ray Kurzweil’s accelerating technologies.
“The acceleration of knowledge and ideas made possible by the fact that humanity is connected for the first time is vast,” says Chris. “The re-discovery of the spoken word as a tool for communicating is a big deal. If you think about it we evolved as human-to-human communicators. It was the village camp fire, the elder standing there with his painted face on a starry night, fire crackling, drums beating and telling a story and every eye locked on his and all those mirror-neurons in all those brains syncing up with what he was saying. By the end of this story his whole village would go to war against another village or make peace.”
“So TED is one of the new storytellers?” I ask.
“It’s one of them. That mode of communication kind of got lost in the print age because it didn’t scale, it was a village-sized technology at best. To me it’s thrilling that it now scales and so one great teacher can inspire many people. One of the things that we see as our role is to try and help nurture that process of re-discovering how to do it, because I think we got to a place where lessons became a person in a suit mumbling behind a lectern reading their notes for an hour while a class of people snoozed.” Suddenly, horrifying images of my ‘O’ level economics class come pouring into my brain. I shudder. “It shouldn’t be like that,” says Chris. “So, one of the things we see, and this was a big kick for me, is TED speakers competing. An unexpected consequence of putting this stuff online is speakers are looking at what other speakers are doing and are putting in far more preparation time than they ever used to.”
Just as YouTube became a laboratory for dance, TED is becoming a laboratory for the art of oration. Here you will see a statistician blow your mind and end his talk with some sword swallowing. Here you will find Steven Pinker explain that the world is getting safer, and Robert Wright mix philosophy, sociology and stand-up comedy to give one explanation as to why – a theory he calls ‘the non-zero sum game’. I don’t know about you, but that’s the kind of lesson I can get on board with.
“We’ve actually got to bring back real creativity and find a way of nurturing that in the education process,” says Chris. “In the age of Google the notion of having to cram all these little brains with facts is bonkers. What’s needed is to build skills: how do you stimulate people to ask the right questions? How do you get them to have a meaningful conversation? To think critically? What are the lenses you give people to think about the world? I mean, if I’d been taught Robert Wright’s non-zero view of history that would have had tremendously more value to me than endless facts about French kings.” It seems that the two things Artificial Intelligence needs most if it’s ever to stop playing chess and start playing Mad Libs are the two things we need most too: curiosity and creativity.
What is our collective story today and who tells it? The storytellers of our day-to-day lives used to be the press and our politicians. Like all good storytellers they used emotion to hook us into one of two, on the face of it, very uninspiring, dull stories. Story one: life happens to you, the future is not going to be very good (especially if you vote for that guy), it was better in the old days, you’ve got to look after yourself, the world is violent and unsafe, your job is at risk, the generation below you are feral and dangerous, things are changing too fast and you can’t trust those immigrants/ scientists/ left-wingers/ right-wingers/ nerds/ geeks/ religious people/ atheists/ football fans/ the rich/ the poor/ what you eat/ your neighbour. You are alone. Make the best of it. Vote for me. Buy my paper. I understand. (Story two is, in summary: ‘Shock! People have sex.’)
It’s hardly inspiring is it?
But the story is beginning to be told by other people now, by the Legion of Extraordinary Dancers, by speakers at TED talks, by Mohamed Nasheed who battled dictatorship to the brink of his own death and then got on with battling climate change, by Cynthia Breazeal who wants to build robots that help children learn, by Vicki Buck who quit government to create jobs to take on global warming, by George Church who wants you to stay healthy longer, by Eric Drexler who wants to usher in a post-scarcity world using technology on the nanoscale, by the good people at Konarka who take electricity out of the sky and give it to the developing world. A story being told by the curious and the smart, that inspires the curious and the smart in all of us, by people who wonder and ask the kind of questions that haven’t been asked before. Crucially, none of them wait for permission to ask those questions, or then to find the answers. It is being told through writers who find themselves traveling across America and readers of blogs who might say in the pub, “did you know the technology exists to make petrol out of the air?” It is being told by the cult of the possible, who seek to achieve, to bring us what we desire. Peace. Understanding. Space to love each other. People who encourage us to evolve.
Eric Drexler has written, “As the Web becomes more comprehensive and searchable, it helps us see what’s missing in the world. The emergence of more effective ways to detect the absence of a piece of knowledge is a subtle and slowly emerging contribution of the Web, yet important to the growth of human knowledge.”
I think we’re beginning to see, collectively, what’s missing, and crucially we’re now able to do something about it. Technology doesn’t give you permission like your teachers did. It gives you agency – to ask, to learn, to connect, to do. It says, “go on then, show me what you’ve got”.
“I don’t know that the future’s going to be better,” says Chris. “But I think there’s a very good chance that it will be and I think that’s something that everyone can do to further increase that chance. There are several quite profound and inspiring ways of thinking about the world that suggest there are these trends that have the potential to drive a better future and I think there is such a thing as moral progress, driven not by any difference in the DNA kids are born with, but just driven by what they see, and seeing more of humanity just naturally flicks on certain switches that make people more empathetic. Of course, the future might well be truly horrible. I think it’s all to play for and I think everyone of sound mind and conscience should be in the game, trying to shape it in the right way. It’s a very false and shallow view of history to say that it’s just one thing after another. Ultimately though our history is the history of ideas. It’s a really thrilling history and ultimately that is what will drive all of our futures.”
Ideas, creativity, curiosity – and dancing. Now there’s a mix.
More of my talk with Chris will, of course, make it into the book…
I arrive in New York after a long and slow train from Boston to Penn Station (surely a place specifically designed to confuse foreign travelers?) The constant rains have hit the trains hard and I was lucky to make it. (The rain kept coming, putting large parts of the east coast under water, and the service was later suspended due to flooding.) The delays mean I hit the New York rush hour carrying my luggage, which is about as much fun as gallstones. I make it to Lounge 47 in Long Island City to meet gent and scholar Adrian Mukasa, the wise-cracking videographer I met in this bar during my last visit and who has generously found me an apartment in Queens for my stay. We catch up over some beers before I head to the apartment. It’s blissfully quiet, which is just about the most important thing anywhere I sleep needs to be (and completely unlike my flat in London, which is assailed from all sides by the lives and loves of my neighbours).
Today I meet my editor at Penguin Avery, the quietly formidable Rachel Holtzman. We have lunch at the swanky Marea restaurant bordering Central Park. Rachel has a kind of steely-softness that New York specialises in. She’s got a kind heart, but I suspect suffers fools about as gladly as the Vatican would respond to a public conversion to Catholicism by Gary Glitter right now. I’m glad to hear she’s happy with the four chapters I’ve delivered so far, and that the publicity and sales people at Penguin have responded well to the book (indeed, I’m to meet them, and the publisher Bill, next Tuesday). Talking to Rachel also helps me begin to pull together some ideas about how the book’s narrative will play out. Most exciting, however, is that she’s brought a mock-up of a front cover, and it’s brilliant. It’s simple but has a New Yorker kind of vibe. As soon as it’s finalised (we discussed a few tweaks) I hope to post it up here.
I spend the afternoon in the main branch of the New York public library preparing for tomorrow’s interview with Chris Anderson, CEO of the mighty TED talks. I’m hoping Chris will help me pull together some of the threads and trends I’ve been battling with – in short, to help me make sense of everything. Given that the TED talks are a nexus for the presentation and discussion of new ideas and ways of seeing the world, Chris is probably one of the people on the planet most regularly assailed by new ideas – and so, I hope, has managed to develop a way of bringing them all together into a coherent world view, or (more likely) a coherent attitude to approaching the future.
After all, on one side you have James Lovelock, who says there’s no way to save the planet, and on the other you have Ray Kurzweil who, as I reported in a recent post, says ‘Malthusian concerns’ about us using up the world’s resources are facile because they assume nothing in technology changes (i.e. we can engineer ourselves out of the climate crisis – and indeed just about anything else we care to think of). Meanwhile, in the middle you have eco-pragmatists like Stewart Brand (who I hope to interview in a couple of weeks), whose Whole Earth Discipline is described as ‘an eco-pragmatist manifesto’. (You can see Stewart talk about ‘four environmental heresies’ here.)
It’s amazing how quickly you can accept international travel as work-a-day. When I started my journey a flight heralded a feeling of adventure in me. Now, it’s like getting in a car. Another thing that’s changed is my attitude to my interviewees. When I first secured an interview with my quarry in Boston I was slightly intimidated. ‘How do you talk to someone like that?’ I asked myself, the ‘that’ in question being Ray Kurzweil. Now, as I come to the end of my journey and try to tie it all together, I find less trepidation in myself. I’ve spent the last year meeting extraordinary people, and I’ve got used to it. Turns out extraordinary people have plenty enough ordinary about them to get hold of.
I arrive in Boston, deal with the ever rude and superior immigration staff and am picked up by Tracy Wemett, who you may remember as Konarka’s PR woman and driver of some, shall we say, reckless enthusiasm. Tracy, on hearing of my return to Boston has generously offered me her basement for the week, which makes a welcome change from hotels. Still, we’ve got to get to her apartment alive which, given her driving, is not a certainty.
Since I saw Tracy last it seems I haven’t been the only one to notice her maverick approach to the road. One speeding ticket too many and she’s been required to take a driving education course by the state of Massachusetts. The results are reassuring. She tells me, “I was told I’m the sort of person who will make a road where there isn’t one.” She pauses. “Apparently that’s not good.”
I spend the next day preparing for my interview with Ray. (I also pay a visit to genius-entrepreneur Howard Berke at Konarka, who was, like many genius-entrepreneurs, a mixture of the enthralling, the socially odd and the genuinely entertaining. More on him in my chapter on Solar.)
Ray Kurzweil is variously an inventor, guru, madman, prophet or genius depending on who you listen to. One indisputable truth is that Ray is a very good inventor. He invented the first machine that could scan text in any font and convert it into a computer document, a technology he applied to building a reading machine for the blind (which led to him, on the side, inventing the flatbed scanner and the text-to-speech synthesizer too). Stevie Wonder was the first customer – and this in turn led to Ray inventing a new breed of electronic synthesizers that captured the nuances of traditional instruments. (In a former life as a musician I coveted the ‘Kurzweil K2000’ but, not being a very successful musician, I could never afford one.) Our interview opens in much the same way as Ray’s last book The Singularity is Near (hereafter referred to as TSIN). “The philosophy of my family, the religion, was the power of human ideas and it was personalised,” he says. “My parents told me, ‘you Ray can find the ideas to overcome challenges whether they’re grand challenges of humanity, or personal challenges’ ”.
Ray’s journey to visionary genius/ techno-prophet/ crazy person (delete as appropriate depending on your prejudices) had its genesis in his attempt to work out a way to time his inventions for maximum impact. “I realized that most inventions fail not because the R&D department can’t get them to work but because the timing is wrong. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment,” he writes on page three of TSIN. So Ray started looking at technology trends and he saw something extraordinary – a clear, unmistakable pattern of exponential innovation, something he calls ‘the law of accelerating returns’ – a phenomenon centred around the idea that technology regularly doubles in efficiency. Such doubling is seen, for instance, in the increasing processing power of computers. Reality has kept pace with the predictions of ‘Moore’s law’ with almost unwavering allegiance, with performance per dollar doubling about every 18 months. But Ray says the effects of the law can be found, well, nearly everywhere, that the law of accelerating returns is the governing law of all creation.
To understand the implications of Ray’s idea you have to get your head around how potent a force doubling is. Think of it this way. Let’s say you travel a metre with each step you take. If you take ten steps you’ll have covered ten metres. Now imagine that instead of each step progressing one metre, each step somehow doubles the distance covered by the last one. So while your first step covers one metre, your second covers two and by your third your stride is four metres. The difference between ‘normal stepping’ you and ‘doubling stepping’ you is extreme and gets ever more so. As a doubling stepper your first ten steps will cover not ten metres, but one thousand and twenty-three. Instead of covering the equivalent of about a tenth of a football pitch you’ve covered over ten. And your next step covers ten more on its own – with the step after that covering another twenty whole pitches.
By your 27th step you’re covering 67 million metres in a single stride – more than one and a half times round the world. Your next step? You double that and cover another 134 million metres. At this rate, 38 steps would carry you roughly 275 million kilometres – to the sun and most of the way back (your final step alone covering 137,438,953,472 metres). One can only imagine the trousers you’d need. Meanwhile, normal stepping you is about a third of the way down a football pitch. Now, of course, you can’t step like that but technology, says Ray, can. And he’s not wrong.
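The doubling-stepper arithmetic above is easy to check for yourself. A quick sketch (the n-th step covers 2 to the power (n−1) metres, so n steps total one less than 2 to the power n):

```python
# Doubling stepper: step n covers 2**(n-1) metres; n steps total 2**n - 1.

EARTH_CIRCUMFERENCE_M = 40_075_000  # approximate, in metres


def step_length(n: int) -> int:
    """Metres covered by the n-th step alone (1-indexed)."""
    return 2 ** (n - 1)


def total_after(n: int) -> int:
    """Cumulative metres after n doubling steps."""
    return 2 ** n - 1


print(total_after(10))   # 1023 metres after ten steps (a linear stepper manages 10)
print(step_length(27))   # 67108864 -- one stride of roughly 1.7 Earth circumferences
print(step_length(38))   # 137438953472 -- a single step of about 137 million km
```

Python’s arbitrary-precision integers make this exact however far you step, which is handy: by step 38 the numbers have long overflowed what our linear intuition can hold.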
Certainly on my trip I’ve seen other examples of mankind’s exponential adventure, in the plummeting cost of genome sequencing, or the ‘cost per watt’ performance of solar technologies, for example. Ray cites these examples and others. The first hundred pages of TSIN almost bludgeon the reader with graph after graph, based on historical data showing exponential growth in the number of phone calls per day, cell phone subscriptions, wireless network price-performance, computers connected to the internet, internet bandwidth and so on. These all have a computing flavour, but Ray sees exponential growth of knowledge too, citing exponential growth in nanotechnology patents as an example. What about the economy? Ray plots exponential growth in the value of output per hour (measured in dollars) in private manufacturing and in the per-capita GDP of the US. Ray quotes example after example because he wants us to get past what he sees as an inherent prejudice in our human thinking.
“Our intuition is linear and I believe that’s hard-wired in our brains. I have debates with sophisticated scientists all the time, including Nobel prize winners that take a linear projection and say “it’s going to be centuries before we…” and “we know so little about…” and here you can fill in the blank depending on their field of research. They just love to say that. But they’re completely oblivious to the exponential growth of information technology and how it’s invading one field after another, health and medicine being just the latest.”
You can’t get to Mars in 39 steps wearing linear trousers (like the ones most of our minds wear). You need exponential ones (like technology has). But because we’re hard-wired to think in linear, rather than exponential, terms we fail to see when things are coming, argues Ray. We’ll be far further than we think, far quicker than we expect. Ray predicts, for instance, that by the middle of the century we’ll have artificial intelligence that exceeds human cognition, a game-changing explosion of intelligence that we will merge with to usher in the next stage in our evolution – a human-machine hybrid, enhanced with similar exponential bounty brought to us by entwined revolutions in nanotechnology and biotechnology. Aging will be ‘cured’ and we’ll be able to move onto a more stable platform than our frail biology. At the same time we’ll have solved the energy crisis and dealt conclusively with climate change.
“All these Malthusian concerns that we’re running out of resources are absolutely true if it were the case that the law of accelerating returns didn’t exist,” he says. “For instance, people take current trends in the use of energy and just assume nothing’s going to change, ignoring the fact that we have 10,000 times more energy that falls on the Earth from the Sun every day than we are using. So if we restrict ourselves to 19th Century technologies, these Malthusian concerns would be correct.” In other words, the law of accelerating returns in solar energy will soon see a green energy revolution, as the technology keeps doubling its efficiency. Ray reckons five years from now solar will be taking coal to the cleaners when it comes to cost per watt. We won’t be switching to solar because we want to save the planet, we’ll be doing it to save our bank accounts.
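Ray’s solar claim is the doubling-stepper logic applied to price. A minimal sketch of the projection he’s making (the starting costs and the halving period below are illustrative assumptions of mine, not figures from Ray):

```python
# Illustrative only: how a steadily halving cost-per-watt overtakes a flat one.
# The starting prices and the halving period are assumptions, not Ray's figures.

solar_cost = 4.00      # $/watt for solar (assumed starting point)
coal_cost = 1.00       # $/watt for coal, assumed roughly flat
halving_period = 2.5   # years for solar's cost-per-watt to halve (assumed)

years = 0.0
while solar_cost > coal_cost:
    solar_cost /= 2    # the law of accelerating returns at work
    years += halving_period

print(years)  # 5.0 -- two halvings and solar has matched coal
```

Change the assumptions and the crossover date moves, but with any steady halving it arrives on a timescale of years, not centuries – which is Ray’s point.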
“I just had a debate this week at a conference held by The Economist with Jared Diamond who basically sees our civilization going to hell in a hand-basket and points out various trends and makes this assumption that technology is a disaster and only creates problems and he has really no data to point to, it’s just aphorisms and scoffing at technology with no analysis. But he’s got a bestselling book because people love to read about how we’re heading to disaster.”
Part of understanding what Ray is getting at requires you to understand that he sees all creation as an exercise in information processing. Everything can be expressed as data coming in, some kind of manipulation or interaction, and data going out. So, two atoms collide (data in), they interact in some way (data processing) and emit light and heat (data out). This is the most boring way ever to describe fire, but it doesn’t take away from the essential premise that everything can be viewed as a manipulation of information. In other words, everything (including you) is an ‘information technology’ and therefore the law of accelerating returns becomes the fundamental law that governs all creation.
In 1999 Ray published a book called The Age of Spiritual Machines in which he applied this law to make predictions, and handily he made a bunch for the decade from 2009. Critics and advocates alike have leapt on these, loudly proclaiming “Ray was right!” or “Ray was wrong!” depending, it seems, on how they view the world – and all ignoring the fact that Ray didn’t say his predictions were for one year, but for the period beginning 2009. “Most of Kurzweil’s predictions are actually astoundingly accurate,” writes one blogger, while another asserts his forecasts are “ludicrously inaccurate.” Oh dear.
My own analysis is that, with the odd caveat, Ray seems to be on the right track with his predictions and many seem extremely prescient. According to Ray 89 are correct, 13 are “essentially correct”, three are partially correct, two are ten years off, and just one is wrong (but he claims it was tongue in cheek anyway). Certainly there is some pride in Kurzweil’s response to his critics and you could argue he’s stretching the point a bit when he defends some of his predictions, massaging the semantics of the prediction to match the current situation, but, all that aside, he’s still been right more often than he hasn’t. By anybody’s reckoning that’s prediction nirvana, and a skill any investor would love to have (oh, Ray’s latest venture? A hedge fund.)
But part of the problem with Ray Kurzweil, or rather part of the problem in talking about Ray Kurzweil, is that he raises strong emotions. Trying to separate reasoned debate from the howl of emotion that his work provokes is hard. Take the view of Douglas R. Hofstadter, now a cognitive scientist at Indiana University, but more famously the author of Gödel, Escher, Bach: An Eternal Golden Braid – an attempt to explain how consciousness can arise from a system, even though the system’s component parts aren’t individually conscious. (This is a key area of study for Ray too, because it is through reverse engineering the human brain that he believes we’ll be able to unlock the mechanisms of mind, replicate them in machines and so free ourselves from the biological limitations of our brain). Here’s what Hofstadter has to say about Ray’s ideas:
“What I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two…”
That’s like Stevie Wonder saying, “I can’t work out if Paul McCartney is a genius or a wanker”. Such is the trouble with talking about Ray. (You can see the full text of the interview this comes from here)
As I comment throughout An Optimist’s Tour of the Future, the advance of new technologies, particularly biotechnology, makes many people (including me) uncomfortable – and then Ray comes along and says, ‘belt up, things are going way faster than you thought, and by the way, that means I’m not going to die. Would you like to transcend your biology with me? Hurry now’. It’s no wonder our linear-trousered brains are stretched to the limit, no wonder some people find Ray just too difficult to engage with. And on the other side of the coin are those who do see Ray as some kind of prophet, whose ideas save them from the sticky issue of their mortality. Ray’s ‘Singularity’ – the moment at which ‘strong AI’ arrives and we merge with it – has been called “the Rapture of the nerds” (a phrase coined by science fiction author Ken MacLeod). These Utopian-techno-nerds don’t really help Ray’s cause. I advocate the approach of Juan Enriquez, the founder of Harvard Business School’s Life Sciences Project, and another Boston resident, who told me, “Do I always agree with Ray? No. Does he make me think? Always.”
It seems to me (from my linear-trousered perspective) that progress in robotics, AI, synthetic biology and genomics brings philosophical questions such as “what does it mean to be human?” into your living room, and not in an ‘interesting-debate-over-a-glass-of-wine’ sort of way, but in a ‘right-in-your-face-what-are-you-going-to-do-about-it?’ sort of way.
When the possibility that the hand your mate Robin lost to cancer three years ago can be replaced by a robotic one with a sense of touch becomes a real option we begin to ask ourselves, ‘Is that hand really part of Robin? If I shake that hand am I really shaking Robin’s hand? Gee I don’t know. I feel kinda weird’. (By the way, Robin isn’t fictional, he’s Robin af Ekenstam and you can watch a video of his new hand being attached here). And just as we can start to engineer robot hands and merge them with humans, we will soon, thanks to the law of accelerating returns, be able to engineer genuine robot intelligence and merge it with our brains, argues Ray.
“The basic principles of intelligence are not that complicated, and we understand some of them, but we don’t fully understand them yet. When we understand them we’ll be able to amplify them, focus on them – we won’t be limited to a neo-cortex that fits into a less than one cubic foot skull and we certainly won’t run it on a chemical substrate that sends information at a few hundred feet per second, which is a million times slower than electronics. We can take those principles and re-engineer them and we’re going to merge them with our own brains”.
It’s statements like this that bring Ray into conflict with many scientists who think he’s not so much running before he can walk, as getting into a jet fighter straight out of the crib. Although, for Ray, that’s kind of the point. Crib to jet fighter is really just a few doublings after all, the law of accelerating returns in action. But for some, Ray is a bit like Tracy. He makes a road where there isn’t one, they say.
One thing is certain. If a conscious human-like intelligence is ‘computable’ (i.e. it can be run on a machine substrate) the processing power to compute it will be within reach of even your desktop very soon. Hans Moravec wondered what processing rate would be necessary to yield performance on a par with the human brain, and came up with the gargantuan figure of 100 trillion instructions per second – one of those numbers that generally makes most of us go “hmmm, I think I’ll make a cup of tea now.” To put this number in context, as I was ushered into the world in the early seventies IBM introduced a computer that could perform one million instructions per second – one hundred-millionth of Moravec’s figure. By the dawn of the millennium chip-maker AMD was selling a microprocessor over three and a half thousand times quicker (testament to a technological journey populated with continual exponential leaps in processing power throughout the intervening period). This yielded a chip that was still nearly thirty thousand times less powerful than the brain’s computational prowess (by Moravec’s reckoning) but a staggering upswing in power nonetheless. Intel have just released their ‘Core i7 Extreme’ chip, which is forty times faster than the AMD device from 2000 and computes at the mind-numbing speed of 147,600,000,000 instructions per second – about one seven-hundredth of Moravec’s figure. If the exponential trend continues, your new laptop will achieve the same computational speed as the human brain within a couple of decades. Soon after that, your laptop (or whatever replaces it) will have more hard processing muscle than all human brains put together. This will happen sometime around the middle of the century according to Kurzweil.
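As a back-of-the-envelope check, the gap between that Core i7 and Moravec’s figure can be worked out from the raw instruction rates alone. A sketch, assuming (my assumption, not a figure from the text) that processing power doubles roughly every eighteen months:

```python
import math

# Raw figures from the text, in instructions per second.
MORAVEC_BRAIN = 100e12   # Moravec's brain estimate: 100 trillion IPS
CORE_I7 = 147.6e9        # Intel Core i7 Extreme

def doublings_needed(current, target):
    """How many doublings take `current` up to `target`?"""
    return math.log2(target / current)

def years_to_target(current, target, doubling_period=1.5):
    """Years to reach `target`, assuming a fixed doubling period."""
    return doublings_needed(current, target) * doubling_period

shortfall = MORAVEC_BRAIN / CORE_I7                    # ~680x short
doublings = doublings_needed(CORE_I7, MORAVEC_BRAIN)   # ~9.4 doublings
years = years_to_target(CORE_I7, MORAVEC_BRAIN)        # ~14 years
print(round(shortfall), round(doublings, 1), round(years, 1))
```

Seven-hundred-fold sounds like a lot, but expressed as doublings it is single digits – which is the whole point of the law of accelerating returns.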
Supercomputers have passed Moravec’s milestone and it’s therefore no surprise to find various projects using them to try to simulate parts of animal and human brains, merging neuroscience and computer science in an attempt to get to the bottom of what’s really going on in that skull of yours. It’s important to realise that simulating something often takes more computing power than being that something (flight simulators need more computing power than the aircraft they mimic, for instance) and a complete simulation of an entire human brain running in real time is still beyond the reach of even the most powerful computers. But not for long. Henry Markram, whose Blue Brain project works by simulating individual brain cells on different processors and then linking them together, believes “It is not impossible to build a brain, and we can do it in ten years.” He’s even joked (or not, depending on how seriously you take the claim) that he’ll bring the result to talk at conferences. Markram has similarly upset more conservative voices in the AI field. Even Ray thinks he’s over-optimistic. (The prediction falls outside the curve predicted by Ray’s graphs by a hefty margin).
You can see Markram’s TED talk (where he suggests he’ll be bringing the Blue Brain back to the conference as a speaker within a decade) below.
I find myself thinking back to my talk with George Church, Professor of Genetics at Harvard Medical School. If you accept evolution as an explanation of how humanity came to be, that the common genetic code of all living things is proof that you, I and Paris Hilton all, at some point, evolved from the same source (that source being a collection of molecules that became the first cell) then one way of looking at the human being (and therefore the human brain) is ‘simply’ as a collection of unthinking tiny bio-machines computing away – reading genetic code, and spewing out ‘computed’ proteins and the rest. We’re machines too, just wet biological ones. You are an information technology.
Robotics pioneer Rodney Brooks makes this argument as well. “The body, this mass of biomolecules, is a machine that acts according to a set of specified rules,” he writes in Robot: The Future of Flesh and Machines.
Needless to say, many people bristle at the use of the word “machine”. They will accept some description of themselves as collections of components that are governed by rules of interaction, and with no component beyond what can be understood with mathematics, physics and chemistry. But that to me is the essence of what a machine is, and I have chosen to use that word to perhaps brutalize the reader a little.
In short, intelligence and consciousness are computable, because you and I are computing it right now. I compute, therefore I am. George Church was less brutal in his take on the ‘human machine’. “I think of us more and more as mechanisms,” he told me. “We’re starting to see more and more of the mechanism exposed and it just makes it more impressive to me, not less. If someone showed me a really intricate clock or computer that had emotions and self awareness and spirituality and so forth I’d be very, very impressed and I think that’s where we are heading, where we can be impressed by the mechanism.”
But something’s not sitting right with me, and it’s not that I don’t like being called a ‘machine’ (believe me, that’s nothing compared to some of the heckles I’ve had). In fact, the machine metaphor makes a kind of sense given what I found out at Harvard.
It was Cynthia Breazeal, head of MIT’s Personal Robots Group, whom I met last time I was in Boston, who expressed it best. “The bottom line is there’s still a long way to go before we can have a simulation actually do anything. I mean they can run the simulation but what is it doing that can be seen as being intelligent? How does that grind out into real behaviour, where you show it something and have it respond to it? I still think there’s a lot of understanding that needs to be done. I do, I really do. I think we’re making fantastic strides but I think,” (she dropped to a conspiratorial whisper, smiling) “there’s a lot we still don’t know!”
Cynthia nailed the root of my discomfort. Someone can give you the best calculator in the shop, but if you’ve never learned any maths, it’s largely useless to you. If the brain is computable, it’s not that we won’t have the processing power to recreate its mechanisms, but that we’re still a long way off working out how to drive that simulation. If you’d never learned to read, your eyes could take in the shape of every letter on this page, but it’d mean nothing to you, and printing it out and photocopying it a hundred times (or even inventing the printer and the photocopier in order to do so) wouldn’t help you either. Just as you had to learn to read, AI and neuroscience research, collectively, have to tease out not only what it is they’re looking at, but what it means.
Sure, there’s exponential growth in processing power, but the jury is out as to whether there is an equivalent growth in understanding how to use that power more ‘intelligently’, to create (to paraphrase one of Henry Markram’s analogies) a concerto of the mind by playing the grand piano of the brain. If there had been, maybe your new laptop would be one-seventh as smart as you are. But it isn’t. This is where the strength of projects like the Blue Brain (and Cynthia’s work) really lies – as tools to slowly help us to pose the right questions that will lead to a better understanding of intelligence, emotion and consciousness.
This is what I really want to ask Ray. “Have you got any graphs that clearly show an exponential growth in understanding? Or in our collective ability to make sense of the great philosophical questions, the intractable questions – ‘What is life?’, ‘What is consciousness?’” I ask. “Have we seen the law of accelerating returns in our understanding of these questions? Is our knowledge, our wisdom also keeping pace?”
“Well, I’m actually working on that in connection with my next book, which is called How the Mind Works and How to Build One,” says Ray.
Well he would be, wouldn’t he?
More of my interview with Ray will, of course, be in the book…
It’s a big question, but one that is particularly pertinent to my interview today with Robotics and Artificial Intelligence researcher, Hod Lipson. Because Hod and his team build machines that find truths.
The search for truth has a long history (one could argue it is history) which I’m not about to get into (and it’s not the book I’m writing) but if someone said to me ‘Go on then, history of truth in 5 minutes’ I’d probably reach for two key figures – Socrates (born Greece, 469 BC) and Francis Bacon (born England, 1561), not least because they both died in interesting ways (which is useful for storytelling).
Socrates was put to death by the state of Athens for “refusing to recognise the gods recognised by the state” and “corrupting the youth” (explaining perhaps why Black Sabbath rarely toured in Greece). Despite clear chances to escape his fate, Socrates placidly took a drink containing poison hemlock prepared by the authorities. Francis Bacon, many believe, died as a result of trying to freeze a chicken. It might seem odd therefore to hold up both as key figures in the history of reason.
[Image: Socrates' natural heir?]
You may also wonder why I am suddenly diving into the past when I’m writing a book about the future. Bear with me, and blame Hod Lipson and his robots.
Both Socrates and Bacon were very good at asking useful questions. In fact, Socrates is largely credited with coming up with a way of asking questions, ‘The Socratic Method’, which itself is at the core of the ‘Scientific Method’, popularised by Bacon during ‘The Enlightenment’ – a period of European history when ‘reason’ and ‘faith’ had an almighty bunfight and the balance of power between church, state and citizen was being questioned. Lots of philosophers and scientists challenged the prevailing orthodoxy of religious authority by saying ‘we need to make decisions based on critical thinking, evidence and reasoned debate, not on sacred texts and religious faith’ and the church replied with ‘yes, but we own most of the land, plus people really like the idea of God. Ask them’.
[Image: I'm pretty popular, actually]
The Socratic Method disproves arguments by finding exceptions to them, and can therefore lead your opponent to a point where they admit something that contradicts their original position. It’s powerful because it kind of gets people to admit to themselves that they’re wrong. It’s also pretty good at exposing your own (as well as others’) prejudices and gaps in reasoning. Lawyers use it a lot. Don’t let this influence you against it. Lawyers also use toilet paper and you’re not about to reject that idea.
[Image: Used by lawyers]
Here’s an example.
During excessive bouts of hard and progressive rock emanating from my older brothers’ bedrooms my dad used to say, “people only play electric guitars because they can’t play real ones” (by which he meant acoustic guitars played by nice chaps called Julian with sensible haircuts, as opposed to electric guitars played by long-haired geezers called Dave and Jimmy).
First step of the Socratic method: assume your opponent’s statement is false and find an example to illustrate this. This YouTube clip of Pink Floyd’s David Gilmour playing acoustic guitar, for instance. Clearly Dave Gilmour can play a ‘real’ guitar as well as an electric one and my dad must grudgingly accept the fact. At this point dad would assert that Dave Gilmour was ‘the exception that proved the rule’.
Next step. Take your opponent’s original statement and restate it to fit their new modified position. “So, dad, you’re saying that people only play electric guitars because they can’t play acoustic ones, except for Dave Gilmour who can do both?”. Then return to step one.
Ironically this led us to playing dad far more Black Sabbath, Pink Floyd, Aerosmith and Led Zeppelin than if he’d kept his theory to himself. (MTV’s ‘unplugged’ series would become his nemesis). Eventually dad would have to admit the truth – which was not that the rock musicians we listened to weren’t talented, but that he just didn’t like rock music.
This example is trivial but you can use the method to demonstrate some pretty esoteric points, and expose fundamental new insights. A popular example that can really annoy your mates in the pub is proving that things don’t have a colour.
Socratic argument, while undoubtedly one of the most useful things ever devised, can also annoy the tits off people, as the man who lends it his name found out to his cost. The story is that Socrates used his technique to prove a lot of bigwigs in Athenian society were mistaken in their thinking – and they responded by having him killed. This proves that engaging people’s brains is never enough if you want change. You have to engage their emotions too. As Professor George Church said to me during our talk last week, “Politicians know how effective emotion is in comparison to rational thought. You can really move mountains with emotion. With rational thought you just end up getting people to change the channel”.
By the time Francis Bacon went to university, the teachings of one of Socrates’ students, Aristotle, had become entrenched as the way to conduct ‘scientific inquiry’. Aristotle had pioneered deductive reasoning, the practice of deriving new knowledge from foundational truths, or ‘axioms’. In short, it was generally believed that if you got enough boffins together to have a solid debate, scientific truth would be teased out over time. This worked well for mathematics where axioms had been long established (e.g. the basic mathematical operations – plus, minus, divide, multiply) but was less good for finding out new stuff about the physical world. Much to Francis’ dismay it seemed that science involved sitting around in armchairs. Nobody was getting off their arse and observing anything new or doing any experiments. Nobody was finding the ‘axioms of reality’ (which is arguably a good name for a progressive rock outfit).
[Image: 'Let's do it in 13/8!']
In common with Socrates, Bacon stressed it was just as important to disprove a theory as to prove one – and observation and experimentation were key to achieving both aims. In a way he was Socrates 2.0 (which is another good name for a prog band). He also saw science as a collaborative affair, with scientists working together, challenging each other. All of this is a hallmark of good scientific practice today – observe, experiment, theorise… and then try to prove yourself wrong – all in collaboration with peers who can give you a hard time. It’s important to note that Bacon himself wasn’t a distinguished scientist. His main contribution was the articulation and championing of an empirical scientific method. That said, he did do the odd experiment, including the one that killed him.
While traveling from London to Highgate with the King’s personal physician, Bacon wondered whether snow might be used to preserve meat. The two got off their coach, bought a chicken and stuffed it with snow to test the theory. In his last letter Bacon is said to have written, “As for the experiment itself, it succeeded excellently well.” Some historians think the chicken story is made up, but the popular account is that the act of stuffing the chicken led to Bacon contracting fatal pneumonia. This is possibly the only instance of bacon being killed by eggs.
Hod Lipson looks like a very friendly bear. He has a round, but not chunky frame, thick black hair and looks healthy and happy. His features are open and innocent. He’s almost childlike if it weren’t for his demeanour – a kind of solid confidence that only comes with age. You get the feeling Hod knows exactly what he wants to achieve. I suspect he was a mischievous child, curious, poking his nose into most things. And whilst most of the scientists I’ve met are driven by an almost insatiable curiosity, Lipson takes curiosity to a new level, literally. He’s curious about curiosity.
“ ‘Artificial Intelligence’ is a moving target,” he says. “So, you can build a machine that plays chess, then you build one that can drive through city streets and so on. People argue about whether it’s really intelligent or not – and usually it’s argued it isn’t. I want to create something where nobody can argue it isn’t intelligent. So, I was thinking about what’s an unmistakable, unequivocal hallmark of intelligence, and I think it’s creativity and particularly curiosity.”
“Does a curious and creative machine mean a sentient machine?” I ask.
“Well, what does that mean?” asks Hod. “I have to push you on what you mean by ‘sentient’.”
Bollocks. I’ve just been asked by a leading researcher into intelligent machines to define sentience – one of the biggest pending questions in philosophy. This is worse than when Cynthia Breazeal asked me to come up with an alternative word for ‘robot’. Or if Andrew Lloyd Webber asked me to say something nice about one of his musicals. I feel out of my depth and we’re barely into our chat. I do the only thing I can.
“Well, let me ask you,” I say. “What do you mean by it?”
Hod pauses. I’m not sure he was expecting a return serve, especially one that in any decent rule book would be considered cheating.
“I interpret it as deliberate versus reactive. Er… human-like…” He pauses again. “I don’t know.”
A-ha! Well, like I said, it is one of the biggest pending questions in philosophy.
“Alive?” I venture.
“It’s difficult to identify what life is right?”
And there’s the rub. Life has avoided a definitive definition for as long as we’ve tried to make one – as has ‘intelligence’. So if you’re trying to create ‘artificial intelligent life’ you’re already in a quagmire of semantic lobbying. I’m reminded of my chat last week with George Church (Professor of Genetics, Harvard Medical School). “I think life is actually a quantitative measure,” said George, by which he means something that can be defined not with either a ‘yes’ or a ‘no’ but on a scale. “It’s not something where you either have it or you don’t. So I would say that there are some things that are more alive than others.” And I don’t think it’s overstating things to say that Hod certainly has made machines that are ‘more alive’ than many others.
Then he says an interesting thing. “I think men have this hubris of wanting to create life. We try to create life out of matter.”
‘Hubris’ is one of those words like ‘semiotics’ and ‘insurance’ that I’ve heard a lot but didn’t really know what it meant for a long time (I’m still struggling with ‘insurance’). I look up ‘hubris’ when I get back to my hotel. It means excessive pride or arrogance. In classical literature it’s usually a precursor to, and the cause of, a character’s downfall. The legend of Icarus is a good example. With that one word Hod has encapsulated the two defining criticisms aimed at Artificial Intelligence research. On one end there are those who say we’ll never create a truly artificial intelligence and that we’re arrogant to believe we can. On the other there are those who worry we will build smart machines and in our arrogance be blind to the danger that they will one day do away with or enslave us. (There are more measured positions in between the two, such as Hubert Dreyfus’s and Hod’s own – both of whom suggest that a lot of AI research has been heading in the wrong direction).
Hod doesn’t believe in the latter James Cameron-esque scenario, but sees a confederacy of man and machine. He has some sympathy for the ‘singularity hypothesis’ of Ray Kurzweil (who I’m interviewing early next year) which talks of a ‘merger of our biological thinking and the existence of our technology’ but doesn’t see a machine-human hybrid (Juan Enriquez’s Homo Evolutis) as the only scenario. “Merging could also mean intellectually merging, meaning that they explain stuff to us.”
Lipson became famous (in robotic circles) for his work building robots that are arguably self aware. His Starfish robot, which I see sitting forlornly on a shelf in his lab, is iconic for learning to walk from first principles. It wasn’t given a program that told it how to move its various motors and joints to achieve locomotion. Instead Lipson gave it a program that enabled it to learn about itself – and use this knowledge to subsequently work out how to move.
“The essential thing was it created a self image,” Lipson tells me. “It created that self image through physical experimentation. So it moved its motors, it sensed its motion and then it created various models of what it thought it might look like – ‘maybe I’m a snake? maybe I’m a spider?’ We told it to create models – multiple different explanations that might explain what it knows so far.”
The robot then stress-tested those models by sending them into competition with each other. “It creates an experiment for itself that focuses on the area where there’s the most disagreement between what the models predict. We put in the code to look for disagreements,” explains Hod.
For example, let’s say the robot is wondering which move to do next in order to learn more about itself. It could try a movement that, when completed, the models all predict will leave it sitting at an angle of about 20 degrees. One model might predict 19 degrees, another 21 degrees, a third 21.2 degrees. However, if it tries another move the models have very different ideas about the result. One says the robot will be at an angle of 12 degrees, another predicts 25 degrees, a third says 45. This latter movement is more likely to be the one the robot chooses next, because it will learn the most from it, and get an idea of which model is closer to the truth. It’s where there’s most disagreement that there’s most to learn. “We tell it ‘you create models – multiple different explanations for what you see – and then look for what new experiment creates disagreement between predictions of these candidate hypotheses’,” says Lipson. “That’s the bottom line of curiosity.”
The models that do best ‘survive’ and the program kills off the others. The remaining models ‘give birth’ to a generation of slightly mutated versions of themselves and another round of ‘survival of the fittest’ ensues. Or to put it another way, over many iterations the program homes in on a model that describes reality. The predictions get closer and closer to what actually happens until one model is deemed sufficient for the robot to say ‘this is what I look like’.
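The loop described above – probe where the candidate models disagree most, then cull the worst models and breed mutated copies of the rest – can be sketched in a few lines. This is a minimal illustration of my own, not Lipson’s code: the robot’s ‘body’ is replaced by a hidden linear law, and each model is just a (slope, intercept) pair:

```python
import random

def true_system(x):
    return 3 * x + 2                      # the hidden reality to discover

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

def disagreement(models, x):
    """How much do the candidate models dispute experiment x?"""
    predictions = [predict(m, x) for m in models]
    return max(predictions) - min(predictions)

def error(model, observations):
    return sum((predict(model, x) - y) ** 2 for x, y in observations)

def mutate(model):
    return tuple(p + random.gauss(0, 0.3) for p in model)

random.seed(0)
models = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
observations = []

for _ in range(80):
    # Curiosity step: of the available experiments (an integer grid),
    # run the one whose outcome the models dispute most.
    x = max(range(-10, 11), key=lambda x: disagreement(models, x))
    observations.append((x, true_system(x)))
    # Survival of the fittest: keep the better half, refill with mutants.
    models.sort(key=lambda m: error(m, observations))
    models = models[:10] + [mutate(m) for m in models[:10]]

best = min(models, key=lambda m: error(m, observations))
print(best)   # should have homed in close to the true (3, 2)
```

Note that nothing tells the program where to look: picking the most-disputed experiment is what drags it towards informative measurements, exactly the ‘curiosity’ Lipson describes.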
If all this talk of ‘mutation’, successive ‘generations’ and ‘survival of the fittest’ sounds slightly familiar that’s because this kind of mathematics takes its inspiration from Darwin’s theories of evolution. Mathematicians might call it ‘reductive symbology’ or say Lipson’s work is a good example of ‘genetic algorithms’ – and it’s a technique that’s been around for decades. What’s different about Lipson’s work is the implementation, something he calls ‘co-evolution’.
“We set off two lines of enquiry. So one of them is the thing that creates models and the other is the thing that asks questions, and they have a predator/prey kind of relationship. Because the questions basically try to break the models.” The questions try to find something the models disagree about so they can kill off the weaker ones. It’s like Anne Robinson in code.
It has to be said that if you saw the Starfish robot ‘walking’ you wouldn’t immediately think it had a future career as a dancer. It doesn’t so much walk as stagger and flop forward. It’s less Ginger Rogers and more gin and tonic. Still, the achievement is not to be sniffed at. It had no parents and no role models. This was a robot actively learning to do something no one had taught it to do. And robots that learn this way have all sorts of interesting possibilities – as Lipson was about to find out.
You can see Hod demonstrating his Starfish robot in this TED talk.
With colleague Michael Schmidt he wondered if the same computer program he’d placed at the core of his Starfish robot could go beyond working out merely what its host body looked like and begin to reach useful conclusions about the wider world.
“We said ‘let’s take it out of this particular body and let it control motors of any experiment’ ”. Their first idea was to give the robot brain control of motors that set up the starting position for a ‘double pendulum’ before letting it fall. The robot was also able to record the results of each experiment using motion capture technology – allowing it to accurately record the pendulum’s motion.
A double pendulum is a bonkers little contraption. It consists of two solid sticks jointed together in the middle by a free-moving hinge. Double pendulums do wacky things (You can see one in action here). Whilst the top pendulum swings from left to right, the bottom one likes to mix it up. Because it’s not attached to a stationary point (like the top pendulum) but to something moving (the bottom end of that swinging top pendulum) it will swing left, swing right, spin round clockwise, or counter-clockwise, seemingly at random. Lipson and Schmidt chose the double pendulum because it’s a good example of a system that’s simple to set up but which can quickly exhibit chaotic behaviour – and therefore would be a good test of the technology’s ability to build a useful conceptual model of what was going on. The results were startling. In fact, the program went a long way to deriving the laws of motion. In three hours.
It followed the same process as it had when it sat in the robot – guessing at equations that might explain what it had seen so far, then setting up new experiments (in this case new starting positions for the pendulum) that targeted areas of most disagreement between the equations. “With the double pendulum it very quickly puts it up exactly upright, because some models say it’s going to fall left and some models say it’s going to fall right. There’s disagreement. It’s not a passive algorithm that sits back, watching,” says Hod smiling. “It asks questions. That’s curiosity.”
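The ‘guessing at equations’ half of that process is what is usually called symbolic regression. As a toy illustration (plain random search over little expression trees, rather than the evolutionary machinery Lipson and Schmidt actually use), here is a hidden law being recovered from data:

```python
import random

# Operators and terminals the equation-guesser can build trees from.
OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0, 3.0]

def random_expr(depth=3):
    """Grow a random expression tree, e.g. ('+', ('*', 'x', 'x'), 'x')."""
    if depth == 0 or random.random() < 0.5:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == 'x':
        return x
    if isinstance(expr, tuple):
        op, a, b = expr
        return OPS[op](evaluate(a, x), evaluate(b, x))
    return expr   # a numeric constant

def fit(expr, data):
    """Sum of squared errors of a guessed equation against the data."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

random.seed(1)
hidden_law = lambda x: x ** 2 + x             # the 'law' to rediscover
data = [(x, hidden_law(x)) for x in range(-4, 5)]

# Guess many equations and keep whichever best explains the data.
best = min((random_expr() for _ in range(100_000)),
           key=lambda e: fit(e, data))
print(best, fit(best, data))
```

With enough guesses an expression equivalent to x² + x tends to turn up and fit the observations essentially perfectly. Lipson and Schmidt’s step beyond this brute force was to breed the guesses evolutionarily and, crucially, to let the program choose its own experiments rather than passively fit a fixed dataset.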
Just like humans, it seems machines learn best when they ask their own questions and find their own answers, rather than being given huge amounts of data to absorb. “Most algorithms you see are passive. They’re data intensive. You feed in terabytes of data and these algorithms just sit back and watch. But in the real world you can’t sit back and watch. You have to probe, because collecting data is expensive, it takes time, it’s risky.” By contrast Lipson’s machine brain “only ever sees what it asks for. It does not see all the data.” In fact Lipson decided to compare the efficiency of this ‘active’ method of enquiry against a more traditional passive ‘here’s all the data, what can you tell me?’ method. “It doesn’t work. It has to go through a reasoning process.”
Remind you of anyone? I see the hemlock taker and the chicken freezer partially re-incarnated in machine form. The programming consigns inaccurate models to the dustbin by getting the robot to admit there are others that offer a better explanation of the real world (hello Socrates) and does this with evidence won via experimentation (hello Bacon). What Lipson has done is create a computational methodology for asking good questions. And asking good questions is what it is all about when it comes to understanding anything.
“Physicists like Newton and Kepler could have used a computer running this algorithm to figure out the laws that explain a falling apple or the motion of the planets with just a few hours of computation,” said Schmidt in an interview with the US National Science Foundation (who helped fund the research).
However, we’re still a long way off what I (or Hod) would call an intelligent machine. It still takes a human to work out if anything the machine has found is useful. The machine didn’t know it had found laws of motion, it took Hod and his colleagues to recognise the equations that were produced. “A human still needs to give words and interpretation to laws found by the computer,” says Schmidt. So, we’re still some distance from Hod’s confederacy of man and machine, where they explain stuff to us.
One of the areas where Hod’s brains could turn out to be useful is cracking problems where there is lots of data, but we still have little idea what’s going on. Indeed plenty of people with acres of data have been beating a path to his door, including heavyweight data generators like the Large Hadron Collider at CERN near Geneva. “The people at CERN said ‘there is this gap in a prediction of particle energy. Here’s data for 3,000 particles. Can you predict something?’ ” The result was a strange mix of the elating and the disappointing. “We let it run and it came up with a beautiful formula,” says Hod. “We were very excited but it was a famous formula they already knew. So for them it was a disappointment…. But for us… We rediscovered something that people are famous for.”
Again, the crucial insight comes from humans, who can tell whether something means anything or not. Without that step the results are largely worthless (which is not to say the time saved is not incredibly useful). I’m reminded of a scene from Douglas Adams’ comedy The Hitchhiker’s Guide to the Galaxy, in which a supercomputer called Deep Thought is built by a race of supersmart humanoids to answer the ultimate question. ‘What is the answer?’ ask the humanoids, awaiting instant enlightenment. ‘To what?’ says the computer. ‘Life! The Universe! Everything!’ they respond. ‘The ultimate question!’ The computer announces there is an answer… but it will take several million years to compute. At the duly allotted time, millennia later, the humanoids’ descendants gather to hear the answer, which turns out to be ‘42’. The problem, suggests Deep Thought, is that they never really knew what ‘the question’ was.
"You're not going to like it"
No-one understands the irony in this story better than Hod Lipson. “In biology there are many systems where we do not know their dynamics or the rules that they obey.” So he set his machine looking at a process within a cell. True to form, the program generated an equation in double-quick time. But what did it mean?
“We’re still looking at it,” says Hod with a smile. “We’re staring at it very intently. But we still don’t have an explanation. And we can’t publish until we know what it is.”
“You don’t understand what it’s saying?”
“No,” says Hod.
“But in science you go from observations, which produce data, to models, which produce predictions, to underlying laws – and from there you go to meaning. What’s good is that we can go from data straight to laws, whereas previously people could only go from data to predictions. So now a scientist can throw it some data, go and have a cup of coffee, come back and see 15 different models that might explain what is going on. That saves a lot of time. Previously coming up with a predictive model could take a career. Now at least you can automate that, so you can focus on meaning.” That’s a powerful enabling technology. More time to think. Hod is doing for thinking what dishwashers have done for after-dinner conversation. Although it may not always work out that way.
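Hod’s “data straight to laws” step can be sketched in miniature. The real system evolves symbolic expressions via genetic programming; the toy below (entirely my own illustration – all names and numbers are invented for the example) simply scores a handful of hand-written candidate formulas against simulated free-fall data, to show what “the machine finds the formula, a human still has to name it” looks like.

```python
def falling_body(t, g=9.81):
    """'Observed' data: distance fallen after time t (noise-free, for clarity)."""
    return 0.5 * g * t ** 2

# Candidate models a search process might propose. A real system like
# Eureqa evolves these expressions; here they are hand-written.
candidates = {
    "d = t":         lambda t: t,
    "d = 9.81*t":    lambda t: 9.81 * t,
    "d = 4.905*t^2": lambda t: 4.905 * t ** 2,
    "d = t^3":       lambda t: t ** 3,
}

def mse(model, data):
    """Mean squared error of a candidate model against the observations."""
    return sum((model(t) - d) ** 2 for t, d in data) / len(data)

data = [(t, falling_body(t)) for t in range(1, 11)]
best = min(candidates, key=lambda name: mse(candidates[name], data))
print(best)  # prints "d = 4.905*t^2" – the formula is found, but only a
             # human recognises it as the law of falling bodies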
Several months later I e-mail Hod to see if they’ve got anywhere with the equation his machine generated from the cell-observing experiment. “We’re still struggling,” he writes. “We’ve been trying for months to get the AI to explain it to us through analogy. But we don’t get it.” It could be that Hod’s machine has discovered something our human brains are just not smart enough to see. “Maybe it’s hopeless,” he says. “Like explaining Shakespeare to a dog.” This is why Hod is trying to convince his collaborators to publish the equation anyway – and see if anybody else out there can shed light on its meaning.
"Friends, Romans... Hey! Is that a biscuit?!"
Because Hod is curious about what makes us curious I ask him if his program could come up with a model of how to learn.
“Could we use your program to observe data about how machines learn, or how people learn, and come up with a model of learning?”
We’re getting seriously abstract now.
Hod laughs. “That’s what we’re working on now. We’re working on what we call self-reflective systems. We want to make machines meta-cognitive – machines that think about thinking.”
This is something of a departure from a lot of AI research. “Almost all the AI systems program a way of thinking and they do that thinking for you – which is the extent of it. You could argue that’s about as smart as a lizard. But if you want to get to human-like intelligence, you need a brain that can think about thinking…”
Sadly (for this blog) Hod’s work in this area is currently unpublished, so out of courtesy I’m leaving a more detailed explanation of what we discussed until the book is published. In summary, however, Hod is taking his model of ‘co-evolutionary AI’ to the next level. Instead of modeling robot physiology, the motion of pendulums or data from physicists in Switzerland, he has one robot brain trying to model how another one learns – and then, in true Lipson style, he’s asking one to challenge the other in order to find out more. In this way one brain builds a model of how the other learns, and can start to make helpful suggestions.
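Since Hod’s actual system is unpublished, here is only a loose, hypothetical toy of the ‘one brain challenges the other’ idea, under my own assumptions: a learner fits a line by gradient descent, while a teacher probes the learner and keeps challenging it with the input it currently gets most wrong.

```python
def true_law(x):
    """The world both brains live in (the learner never sees this directly)."""
    return 3 * x + 1

class Learner:
    """Fits a line y = slope*x + intercept by simple gradient descent."""
    def __init__(self):
        self.slope, self.intercept = 0.0, 0.0
    def predict(self, x):
        return self.slope * x + self.intercept
    def learn(self, x, y, rate=0.01):
        err = self.predict(x) - y
        self.slope -= rate * err * x
        self.intercept -= rate * err

class Teacher:
    """Builds a picture of the learner by probing it, then challenges it
    with the input the learner currently gets most wrong."""
    def challenge(self, learner, xs):
        return max(xs, key=lambda x: abs(learner.predict(x) - true_law(x)))

learner, teacher = Learner(), Teacher()
inputs = [0, 1, 2, 3, 4]
for _ in range(500):
    x = teacher.challenge(learner, inputs)  # the challenge step
    learner.learn(x, true_law(x))           # the learning step
# After training, the learner has roughly recovered y = 3x + 1
```

The design point is the challenge step: rather than feeding the learner random examples, the teacher’s model of the learner’s weaknesses steers it towards exactly the experience it needs – learning the hard way, automated.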
“That’s self-reflection,” says Hod. He adds, “That’s important in life. You can learn things the hard way, or you can think about how you’ve been thinking.”
It’s something you can imagine Socrates or Bacon saying.