I had hoped to pop in to see Cleveland Cutler at Boston University today to discuss the ‘nuclear renaissance’ but despite agreeing to an interview and numerous phone calls and e-mails to chase (including several chats to his administrator who assures me Cleveland will ‘call me back’) there has been a deathly, and let’s be honest, outrageously rude silence from the professor, not even a simple “I’m sorry, I need to cancel”. Having extended my stay in Boston (and my imposition on Tracy’s hospitality) to make time for this meeting I find this, well, just a bit arsey and disappointing. I must think of a suitably cutting joke about baldy geographers.
Instead I head into Cambridge and say hello to the personal robotics lab. Sadly neither arch-enemy of lazy writers Polly Guggenhiem nor uber-robotics pioneer Cynthia Breazeal is there, but the ever-friendly Dan Stiehl, who I met last time, is on hand and I do get to see Nexi (the lab’s latest sociable robot) in action. In tune with the lab’s focus on human-robot relationships, Nexi is interacting with a young boy, no older than eight, who finds Nexi’s human-like tracking of his movements as he dances in front of her enthralling. Despite being made of moulded white plastic Nexi’s face can express a whole gamut of emotions – her big eyes blinking, her white plastic ‘eyebrows’ moving, her mouth expressing anything from slack-jawed boredom (when not much is happening) to tight-lipped interest (if the boy is doing something intriguing) or annoyance (if the boy gets too close). It’s startling to see how quickly all of us just accept Nexi as somehow sentient.
I give Nexi a personality. In fact, I can’t help myself, because this is a robot that acts in a, well, recognisably human way (and is therefore the exact opposite of Keanu Reeves). This is no accident. This is exactly what the Personal Robotics lab wants you to do.
“We put people at the core of what we’re trying to do,” explained Cynthia last time I was here. “A lot of work in robotics is still very focused on technology but in our lab we put these robots in front of real people so we can understand their impact. My group takes the relationship between social robots and people seriously and are trying to design for both sides. It’s not just about having the robot understand people, we’re trying to make people understand the robot so you’re naturally able to use your own way of thinking about the world to understand what must be going on in the mind, so to speak, of the robot.”
Mind of the robot? Let’s not get into that here, I have two chapters that address the subject of Artificial Intelligence in the book…
In the meantime, check out this animation of Nexi that demonstrates her range of facial expressiveness.
It’s amazing how quickly you can accept international travel as workaday. When I started my journey a flight heralded a feeling of adventure in me. Now, it’s like getting in a car. Another thing that’s changed is my attitude to my interviewees. When I first secured an interview with my quarry in Boston I was slightly intimidated. ‘How do you talk to someone like that?’ I asked myself, the ‘that’ in question being Ray Kurzweil. Now, as I come to the end of my journey and try to tie it all together I find less trepidation in myself. I’ve spent the last year meeting extraordinary people, and I’ve got used to it. Turns out extraordinary people have plenty enough ordinary about them to get hold of.
I arrive in Boston, deal with the ever rude and superior immigration staff and am picked up by Tracy Wemett, who you may remember as Konarka’s PR woman and driver of some, shall we say, reckless enthusiasm. Tracy, on hearing of my return to Boston, has generously offered me her basement for the week, which makes a welcome change from hotels. Still, we’ve got to get to her apartment alive which, given her driving, is not a certainty.
Since I saw Tracy last it seems I haven’t been the only one to notice her maverick approach to the road. One speeding ticket too many and she’s been required to take a driving education course by the state of Massachusetts. The results are reassuring. She tells me, “I was told I’m the sort of person who will make a road where there isn’t one.” She pauses. “Apparently that’s not good.”
I spend the next day preparing for my interview with Ray. (I also pay a visit to genius-entrepreneur Howard Berke at Konarka, who was, like many genius-entrepreneurs, a mixture of enthralling, socially odd and genuinely entertaining. More on him in my chapter on Solar).
Ray Kurzweil is variously an inventor, guru, madman, prophet or genius depending on who you listen to. One indisputable truth is that Ray is a very good inventor. He invented the first machine that could scan text in any font and convert it into a computer document, a technology he applied to building a reading machine for the blind (which led to him, on the side, inventing the flatbed scanner and the text-to-speech synthesizer too). Stevie Wonder was the first customer – and this in turn led to Ray inventing a new breed of electronic synthesizers that captured the nuances of traditional ones. (In a former life as a musician I coveted the ‘Kurzweil K2000’ but not being a very successful musician I could never afford one). Our interview opens in much the same way as Ray’s last book The Singularity is Near (hereafter referred to as TSIN). “The philosophy of my family, the religion, was the power of human ideas and it was personalised,” he says. “My parents told me, ‘you Ray can find the ideas to overcome challenges whether they’re grand challenges of humanity, or personal challenges’ ”.
Ray’s journey to visionary genius/ techno-prophet/ crazy person (delete as appropriate depending on your prejudices) had its genesis in his attempt to work out a way to time his inventions for maximum impact. “I realized that most inventions fail not because the R&D department can’t get them to work but because the timing is wrong. Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment,” he writes on page three of TSIN. So Ray started looking at technology trends and he saw something extraordinary – a clear, unmistakable pattern of exponential innovation, something he calls ‘the law of accelerating returns’ – a phenomenon centred around the idea that technology regularly doubles in efficiency. Such doubling is seen, for instance, in the increasing processing power of computers. Reality has kept pace with the predictions of ‘Moore’s law’ with almost unwavering allegiance, with performance per dollar doubling about every 18 months. But Ray says the effects of the law can be found, well, nearly everywhere, that the law of accelerating returns is the governing law of all creation.
To understand the implications of Ray’s idea you have to get your head around how potent a force it is if something has the propensity to double. Think of it this way. Let’s say you travel a metre with each step you take. If you take ten steps you’ll have covered ten metres. Now imagine that instead of each step progressing one metre, it somehow doubles the distance you covered with the last one. So while your first step covers one metre, your second covers two and by your third your stride is four metres. The difference between ‘normal stepping’ you and ‘doubling stepping’ you is extreme and gets ever more so. As a doubling stepper your first ten steps will cover not ten metres, but one thousand and twenty-three. Instead of covering the equivalent of about 1/10th of a football field you’ve covered over ten. And with your next step you’ll cover ten more – with the step after that covering another twenty whole pitches.
By the time you’ve done just 26 steps you’ve traversed 67 million metres, or to put it another way, you’ve gone over one and a half times round the world. Your next step? You double that distance and do another 67 million metres. At this rate you could walk to the sun and back (and be 85% of the way to Mars) in 39 steps (your last step having covered 274,877,906,944 metres). One can only imagine the trousers you’d need. Meanwhile, normal stepping you is about a third of the way down a football pitch. Now, of course, you can’t step like that but technology, says Ray, can. And he’s not wrong.
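If you want to check my maths, a few lines of Python will do it (my own throwaway sketch, obviously, not Ray’s):

```python
# Total distance covered, in metres, after a number of steps.
# A linear stepper covers 1 metre per step; a doubling stepper's
# nth step covers 2**(n-1) metres, so the total is 2**steps - 1.
def total_distance(steps, doubling=False):
    return 2 ** steps - 1 if doubling else steps

print(total_distance(10))                 # → 10
print(total_distance(10, doubling=True))  # → 1023
print(total_distance(39, doubling=True))  # → 549755813887
# For scale: the Earth's circumference is ~40,000,000 m and the
# Sun is ~149,600,000,000 m away.
```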
Certainly on my trip I’ve seen other examples of mankind’s exponential adventure, in the plummeting cost of genome sequencing, or the ‘cost per watt’ performance of solar technologies for example. Ray cites these examples and others. The first hundred pages of TSIN almost bludgeons the reader with graph after graph, based on historical data showing exponential growth in the number of phone calls per day, cell phone subscriptions, wireless network price-performance, computers connected to the internet, internet bandwidth and so on. These all have a computing flavour, but Ray sees exponential growth of knowledge too, citing exponential growth in nanotechnology patents as an example. What about the economy? Ray plots exponential growth in the value of output per hour (measured in dollars) in private manufacturing and in the per-capita GDP of the US. Ray quotes example after example because he wants us to get past what he sees as an inherent prejudice in our human thinking.
“Our intuition is linear and I believe that’s hard-wired in our brains. I have debates with sophisticated scientists all the time, including Nobel prize winners, who take a linear projection and say “it’s going to be centuries before we…” and “we know so little about…” and here you can fill in the blank depending on their field of research. They just love to say that. But they’re completely oblivious to the exponential growth of information technology and how it’s invading one field after another, health and medicine being just the latest.”
You can’t get to Mars in 39 steps wearing linear trousers (like the ones most of our minds wear). You need exponential ones (like technology has). But because we’re hard-wired to think in linear, rather than exponential, terms we fail to see when things are coming, argues Ray. We’ll be far further than we think, far quicker than we expect. Ray predicts for instance that by the middle of the century we’ll have artificial intelligence that exceeds human cognition, a game-changing explosion of intelligence that we will merge with to usher in the next stage in our evolution – a human-machine hybrid, enhanced with similar exponential bounty brought to us by entwined revolutions in nanotechnology and biotechnology. Aging will be ‘cured’ and we’ll be able to move onto a more stable platform than our frail biology. At the same time we’ll have solved the energy crisis and dealt conclusively with climate change.
“All these Malthusian concerns that we’re running out of resources are absolutely true if it were the case that the law of accelerating returns didn’t exist,” he says. “For instance, people take current trends in the use of energy and just assume nothing’s going to change, ignoring the fact that we have 10,000 times more energy that falls on the Earth from the Sun every day than we are using. So if we restrict ourselves to 19th Century technologies, these Malthusian concerns would be correct.” In other words, the law of accelerating returns in solar energy will soon see a green energy revolution, as the technology keeps doubling its efficiency. Ray reckons five years from now solar will be taking coal to the cleaners when it comes to cost per watt. We won’t be switching to solar because we want to save the planet, we’ll be doing it to save our bank accounts.
“I just had a debate this week at a conference held by The Economist with Jared Diamond who basically sees our civilization going to hell in a hand-basket and points out various trends and makes this assumption that technology is a disaster and only creates problems and he has really no data to point to, it’s just aphorisms and scoffing at technology with no analysis. But he’s got a bestselling book because people love to read about how we’re heading to disaster.”
Part of understanding what Ray is getting at requires you to understand that he sees all creation as an exercise in information processing. Everything can be expressed as data coming in, some kind of manipulation or interaction, and some data goes out. So, two atoms collide (data in), they interact in some way (data processing) and emit light and heat (data out). This is the most boring way ever to describe fire, but it doesn’t take away from the essential premise that everything can be viewed as a manipulation of information. In other words, everything (including you) is an ‘information technology’ and therefore the law of accelerating returns becomes the fundamental law that governs all creation.
In 1999 Ray published a book called The Age of Spiritual Machines in which he applied this law to make predictions, and handily he made a bunch for the decade from 2009. Critics and advocates alike have leapt on these, loudly proclaiming “Ray was right!” or “Ray was wrong!” depending, it seems, on how they view the world – and all ignoring the fact that Ray didn’t say his predictions were for one year, but for the period beginning in 2009. “Most of Kurzweil’s predictions are actually astoundingly accurate,” writes one blogger, while another asserts his forecasts are “ludicrously inaccurate.” Oh dear.
My own analysis is that, with the odd caveat, Ray seems to be on the right track with his predictions and many seem extremely prescient. According to Ray, 89 are correct, 13 are “essentially correct”, three are partially correct, two are ten years off, and just one is wrong (but he claims it was tongue in cheek anyway). Certainly there is some pride in Kurzweil’s response to his critics and you could argue he’s stretching the point a bit when he defends some of his predictions, massaging the semantics of a prediction to match the current situation, but, all that aside, he’s still been right more often than he hasn’t. By anybody’s reckoning that’s prediction nirvana, and a skill any investor would love to have. (Oh, Ray’s latest venture? A hedge fund.)
But part of the problem with Ray Kurzweil, or rather part of the problem in talking about Ray Kurzweil, is that he raises strong emotions. Trying to separate reasoned debate from the howl of emotion that his work provokes is hard. Take the view of Douglas R. Hofstadter, now a cognitive scientist at Indiana University, but more famously the author of Gödel, Escher, Bach: An Eternal Golden Braid – an attempt to explain how consciousness can arise from a system, even though the system’s component parts aren’t individually conscious. (This is a key area of study for Ray too, because it is through reverse engineering the human brain that he believes we’ll be able to unlock the mechanisms of mind, replicate them in machines and so free ourselves from the biological limitations of our brain). Here’s what Hofstadter has to say about Ray’s ideas:
“What I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two…”
That’s like Stevie Wonder saying, “I can’t work out if Paul McCartney is a genius or a wanker”. Such is the trouble with talking about Ray. (You can see the full text of the interview this comes from here)
As I comment throughout An Optimist’s Tour of the Future, the advance of new technologies, particularly biotechnology, makes many people (including me) uncomfortable – and then Ray comes along and says, ‘belt up, things are going way faster than you thought, and by the way, that means I’m not going to die. Would you like to transcend your biology with me? Hurry now’. It’s no wonder our linear-trousered brains are stretched to the limit, no wonder some people find Ray just too difficult to engage with. And on the other side of the coin are those who do see Ray as some kind of prophet, whose ideas save them from the sticky issue of their mortality. Ray’s ‘Singularity’ – the moment at which ‘strong AI’ arrives and we merge with it – has been called “the Rapture of the nerds” (a phrase coined by science fiction author Ken MacLeod). These Utopian-techno-nerds don’t really help Ray’s cause. I advocate the approach of Juan Enriquez, the founder of Harvard Business School’s Life Sciences Project, and another Boston resident, who told me, “Do I always agree with Ray? No. Does he make me think? Always.”
It seems to me (from my linear-trousered perspective) that progress in robotics, AI, synthetic biology and genomics brings philosophical questions such as “what does it mean to be human?” into your living room, and not in an ‘interesting-debate-over-a-glass-of-wine’ sort of way, but in a ‘right-in-your-face-what-are-you-going-to-do-about-it?’ sort of way.
When the possibility that the hand your mate Robin lost to cancer three years ago can be replaced by a robotic one with a sense of touch becomes a real option we begin to ask ourselves, ‘Is that hand really part of Robin? If I shake that hand am I really shaking Robin’s hand? Gee I don’t know. I feel kinda weird’. (By the way, Robin isn’t fictional, he’s Robin af Ekenstam and you can watch a video of his new hand being attached here). And just as we can start to engineer robot hands and merge them with humans, we will soon, thanks to the law of accelerating returns, be able to engineer genuine robot intelligence and merge it with our brains, argues Ray.
“The basic principles of intelligence are not that complicated, and we understand some of them, but we don’t fully understand them yet. When we understand them we’ll be able to amplify them, focus on them – we won’t be limited to a neo-cortex that fits into a less than one cubic foot skull and we certainly won’t run it on a chemical substrate that sends information at a few hundred feet per second, which is a million times slower than electronics. We can take those principles and re-engineer them and we’re going to merge them with our own brains”.
It’s statements like this that bring Ray into conflict with many scientists who think he’s not so much running before he can walk, as getting in a jet fighter straight out of the crib. Although, for Ray, that’s kind of the point. Crib to jet fighter is really just a few doublings after all, the law of accelerating returns in action. But for some, Ray is a bit like Tracy. He makes a road where there isn’t one, they say.
One thing is certain. If a conscious human-like intelligence is ‘computable’ (i.e. it can be run on a machine substrate) the processing power to compute it will be within reach of even your desktop very soon. Hans Moravec wondered, “what processing rate would be necessary to yield performance on par with the human brain?” and came up with the gargantuan figure of 100 trillion instructions per second, which is one of those numbers that generally makes most of us go “hmmm, I think I’ll make a cup of tea now.” To put this number in context, as I was ushered into the world in the early seventies IBM introduced a computer that could perform one million instructions per second – one hundred-millionth of Moravec’s figure. By the dawn of the millennium chip-maker AMD was selling a microprocessor over three and a half thousand times quicker (testament to a technological journey populated with continual exponential leaps in processing power throughout the intervening period). That chip was still nearly thirty thousand times less powerful than the brain’s computational prowess (by Moravec’s reckoning) but represented a staggering upswing in power nonetheless. Intel have just released their ‘Core i7 Extreme’ chip, which is forty times faster than the AMD device from 2000 and computes at the mind-numbing speed of 147,600,000,000 instructions per second – roughly one seven-hundredth of Moravec’s figure. If the 18-month doubling continues, your new laptop will match the raw computational speed of the human brain within about a decade and a half. Soon after that, if the exponential trend continues, your laptop (or whatever replaces it) will have more hard processing muscle than all human brains put together. This will happen sometime around the middle of the century according to Kurzweil.
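To see how far off that milestone is, here’s a quick back-of-the-envelope calculation in Python (my own sketch; the baseline chip speed, Moravec’s figure and the 18-month doubling period are the numbers quoted above):

```python
# How long until a chip doubling in speed every 18 months reaches
# Moravec's estimate of the brain's processing rate?
import math

baseline_ips = 147_600_000_000    # Core i7 Extreme, instructions per second
brain_ips = 100_000_000_000_000   # Moravec's 100 trillion instructions per second
doubling_period_years = 1.5       # performance doubles roughly every 18 months

doublings_needed = math.log2(brain_ips / baseline_ips)
years = doublings_needed * doubling_period_years
print(f"{doublings_needed:.1f} doublings, ~{years:.0f} years")  # → 9.4 doublings, ~14 years
```

On those assumptions the gap is a factor of about 680 – nine and a half doublings, or roughly fourteen years.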
Supercomputers have passed Moravec’s milestone and it’s therefore no surprise to find various projects using them to try to simulate parts of animal and human brains, merging neuroscience and computer science in an attempt to get to the bottom of what’s really going on in that skull of yours. It’s important to realise that simulating something often takes more computing power than being something (aircraft simulators need more computers than actual aircraft, for instance) and a complete simulation of an entire human brain running in real-time is still beyond the reach of even the most powerful computers. But not for long. Henry Markram, of the Blue Brain project (which works by simulating individual brain cells on different processors and then linking them together), believes “It is not impossible to build a brain, and we can do it in ten years.” He’s even joked (or not, depending on how seriously you take the claim) that he’ll bring the result to talk at conferences. Markram has similarly upset more conservative voices in the AI field. Even Ray thinks he’s over-optimistic. (The prediction falls outside the curve predicted by Ray’s graphs by a hefty margin).
You can see Markram’s TED talk (where he suggests he’ll be bringing the Blue Brain back to the conference as a speaker within a decade) below.
I find myself thinking back to my talk with George Church, Professor of Genetics at Harvard Medical School. If you accept evolution as an explanation of how humanity came to be, that the common genetic code of all living things is proof that you, I and Paris Hilton all, at some point, evolved from the same source (that source being a collection of molecules that became the first cell) then one way of looking at the human being (and therefore the human brain) is ‘simply’ as a collection of unthinking tiny bio-machines computing away – reading genetic code, and spewing out ‘computed’ proteins and the rest. We’re machines too, just wet biological ones. You are an information technology.
Robotics pioneer Rodney Brooks makes this argument as well. “The body, this mass of biomolecules, is a machine that acts according to a set of specified rules,” he writes in Robot: The Future of Flesh and Machines, continuing:

“Needless to say, many people bristle at the use of the word ‘machine’. They will accept some description of themselves as collections of components that are governed by rules of interaction, and with no component beyond what can be understood with mathematics, physics and chemistry. But that to me is the essence of what a machine is, and I have chosen to use that word to perhaps brutalize the reader a little.”
In short, intelligence and consciousness are computable, because you and I are computing it right now. I compute, therefore I am. George Church was less brutal in his take on the ‘human machine’. “I think of us more and more as mechanisms,” he told me. “We’re starting to see more and more of the mechanism exposed and it just makes it more impressive to me, not less. If someone showed me a really intricate clock or computer that had emotions and self awareness and spirituality and so forth I’d be very, very impressed and I think that’s where we are heading, where we can be impressed by the mechanism.”
But something’s not sitting right with me, and it’s not that I don’t like being called a ‘machine’ (believe me, that’s nothing compared to some of the heckles I’ve had). In fact, the machine metaphor makes a kind of sense given what I found out at Harvard.
It was Cynthia Breazeal, head of the Personal Robotics lab, who I met last time I was in Boston, who expressed it best. “The bottom line is there’s still a long way to go before we can have a simulation actually do anything. I mean they can run the simulation but what is it doing that can be seen as being intelligent? How does that grind out into real behaviour, where you show it something and have it respond to it? I still think there’s a lot of understanding that needs to be done. I do, I really do. I think we’re making fantastic strides but I think,” (she dropped to a conspiratorial whisper, smiling) “there’s a lot we still don’t know!”
Cynthia nailed the root of my discomfort. Someone can give you the best calculator in the shop, but if you’ve never learned any maths, it’s largely useless to you. If the brain is computable, it’s not that we won’t have the processing power to recreate its mechanisms, but that we’re still a long way off working out how to drive that simulation. If you’d never learned to read, your eyes could take in the shape of every letter on this page, but it’d mean nothing to you, and printing it out and photocopying it a hundred times (or even inventing the printer and the photocopier in order to do so) wouldn’t help you either. Just as you had to learn to read, AI and neuroscience research, collectively, have to tease out not only what it is they’re looking at, but what it means.
Sure, there’s exponential growth in processing power, but the jury is out as to whether there is an equivalent growth in understanding how to use that power more ‘intelligently’, to create (to paraphrase one of Henry Markram’s analogies) a concerto of the mind by playing the grand piano of the brain. If there had been, maybe your new laptop would be one-seventh as smart as you are. But it isn’t. This is where the strength of projects like the Blue Brain (and Cynthia’s work) really lies – as tools to slowly help us pose the right questions that will lead to a better understanding of intelligence, emotion and consciousness.
This is what I really want to ask Ray. “Have you got any graphs that clearly show an exponential growth in understanding? Or in our collective ability to make sense of the great philosophical questions, the intractable questions – ‘What is life?’, ‘What is consciousness?’” I ask. “Have we seen the law of accelerating returns in our understanding of these questions? Is our knowledge, our wisdom, also keeping pace?”
“Well, I’m actually working on that in connection with my next book, which is called How the Mind Works and How to Build One,” says Ray.
Well he would be, wouldn’t he?
More of my interview with Ray will, of course, be in the book…
It’s a big question, but one that is particularly pertinent to my interview today with Robotics and Artificial Intelligence researcher, Hod Lipson. Because Hod and his team build machines that find truths.
The search for truth has a long history (one could argue it is history) which I’m not about to get into (and it’s not the book I’m writing) but if someone said to me ‘Go on then, history of truth in 5 minutes’ I’d probably reach for two key figures – Socrates (born Greece, 469 BC) and Francis Bacon (born England, 1561), not least because they both died in interesting ways (which is useful for storytelling).
Socrates was put to death by the state of Athens for “refusing to recognise the gods recognised by the state” and “corrupting the youth” (explaining perhaps why Black Sabbath rarely toured in Greece). Despite clear chances to escape his fate, Socrates placidly took a drink containing poison hemlock prepared by the authorities. Francis Bacon, many believe, died as a result of trying to freeze a chicken. It might seem odd therefore to hold up both as key figures in the history of reason.
Socrates' natural heir?
You may also wonder why I am suddenly diving into the past when I’m writing a book about the future. Bear with me, and blame Hod Lipson and his robots.
Both Socrates and Bacon were very good at asking useful questions. In fact, Socrates is largely credited with coming up with a way of asking questions, ‘The Socratic Method’, which itself is at the core of the ‘Scientific Method’, popularised by Bacon during ‘The Enlightenment’ – a period of European history when ‘reason’ and ‘faith’ had an almighty bunfight and the balance of power between church, state and citizen was being questioned. Lots of philosophers and scientists challenged the prevailing orthodoxy of religious authority by saying ‘we need to make decisions based on critical thinking, evidence and reasoned debate, not on sacred texts and religious faith’ and the church replied with ‘yes, but we own most of the land, plus people really like the idea of God. Ask them’.
I'm pretty popular, actually
The Socratic Method disproves arguments by finding exceptions to them, and can therefore lead your opponent to a point where they admit something that contradicts their original position. It’s powerful because it kind of gets people to admit to themselves that they’re wrong. It’s also pretty good at exposing your own (as well as others’) prejudices and gaps in reasoning. Lawyers use it a lot. Don’t let this influence you against it. Lawyers also use toilet paper and you’re not about to reject that idea.
Used by lawyers
Here’s an example.
During excessive bouts of hard and progressive rock emanating from my older brothers’ bedrooms my dad used to say, “people only play electric guitars because they can’t play real ones” (by which he meant acoustic guitars played by nice chaps called Julian with sensible haircuts, as opposed to electric guitars played by long-haired geezers called Dave and Jimmy).
First step of the Socratic method: assume your opponent’s statement is false and find an example to illustrate this. This YouTube clip of Pink Floyd’s David Gilmour playing acoustic guitar, for instance. Clearly Dave Gilmour can play a ‘real’ guitar as well as an electric one and my dad must grudgingly accept the fact. At this point dad would assert that Dave Gilmour was ‘the exception that proved the rule’.
Next step. Take your opponent’s original statement and restate it to fit their new modified position. “So, dad, you’re saying that people only play electric guitars because they can’t play acoustic ones, except for Dave Gilmour who can do both?”. Then return to step one.
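For the programmers in the audience, those two steps are just a loop. A throwaway sketch of mine (the function and the guitar example are my own invention, nothing canonical):

```python
# A playful sketch of the Socratic loop from the steps above.
def socratic_method(claim, find_counterexample):
    """Refine a claim by repeatedly finding exceptions to it."""
    exceptions = []
    while True:
        counter = find_counterexample(claim, exceptions)
        if counter is None:
            return claim, exceptions  # no new exception: claim survives (for now)
        exceptions.append(counter)    # restate the claim around the exception

claim = "People only play electric guitars because they can't play real ones"
refined, exceptions = socratic_method(
    claim,
    # Dad's position, encoded: he can offer no second counterexample.
    lambda c, seen: "Dave Gilmour" if not seen else None,
)
print(f"{refined}, except for {', '.join(exceptions)}")
```

Run that and dad’s position comes out as the claim plus its ever-growing list of ‘exceptions that prove the rule’.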
Ironically this led to us playing dad far more Black Sabbath, Pink Floyd, Aerosmith and Led Zeppelin than if he’d kept his theory to himself. (MTV’s ‘Unplugged’ series would become his nemesis). Eventually dad would have to admit the truth – which was not that the rock musicians we listened to weren’t talented, but that he just didn’t like rock music.
This example is trivial but you can use the method to demonstrate some pretty esoteric points, and expose fundamental new insights. A popular example that can really annoy your mates in the pub is proving that things don’t have a colour.
Socratic argument, while undoubtedly one of the most useful things ever devised, can also annoy the tits off people, as the man who lends it his name found out to his cost. The story is that Socrates used his technique to prove a lot of bigwigs in Athenian society were mistaken in their thinking – and they responded by having him killed. This proves that engaging people’s brains is never enough if you want change. You have to engage their emotions too. As Professor George Church said to me during our talk last week, “Politicians know how effective emotion is in comparison to rational thought. You can really move mountains with emotion. With rational thought you just end up getting people to change the channel”.
By the time Francis Bacon went to university, the teachings of one of Socrates’ students, Aristotle, had become entrenched as the way to conduct ‘scientific inquiry’. Aristotle had pioneered deductive reasoning, the practice of deriving new knowledge from foundational truths, or ‘axioms’. In short, it was generally believed that if you got enough boffins together to have a solid debate, scientific truth would be teased out over time. This worked well for mathematics where axioms had been long established (e.g. the basic mathematical operations – plus, minus, divide, multiply) but was less good for finding out new stuff about the physical world. Much to Francis’ dismay it seemed that science involved sitting around in armchairs. Nobody was getting off their arse and observing anything new or doing any experiments. Nobody was finding the ‘axioms of reality’ (which is arguably a good name for a progressive rock outfit).
'Let's do it in 13/8!'
In common with Socrates, Bacon stressed it was just as important to disprove a theory as to prove one – and observation and experimentation were key to achieving both aims. In a way he was Socrates 2.0 (which is another good name for a prog band). He also saw science as a collaborative affair, with scientists working together, challenging each other. All of this is a hallmark of scientific good practice today – observe, experiment, theorise… and then try to prove yourself wrong – all in collaboration with peers who can give you a hard time. It’s important to note that Bacon himself wasn’t a distinguished scientist. His main contribution was the articulation and championing of an empirical scientific method. That said, he did do the odd experiment, including the one that killed him.
While traveling from London to Highgate with the King’s personal physician, Bacon wondered whether snow might be used to preserve meat. The two got off their coach, bought a chicken and stuffed it with snow to test the theory. In his last letter Bacon is said to have written, “As for the experiment itself, it succeeded excellently well.” Some historians think the chicken story is made up, but the popular account is that the act of stuffing the chicken led to Bacon contracting fatal pneumonia. This is possibly the only instance of bacon being killed by eggs.
Hod Lipson looks like a very friendly bear. He has a round, but not chunky frame, thick black hair and looks healthy and happy. His features are open and innocent. He’d seem almost childlike if it weren’t for his demeanour – a kind of solid confidence that only comes with age. You get the feeling Hod knows exactly what he wants to achieve. I suspect he was a mischievous child, curious, poking his nose into most things. And whilst most of the scientists I’ve met are driven by an almost insatiable curiosity, Lipson takes curiosity to a new level, literally. He’s curious about curiosity.
“ ‘Artificial Intelligence’ is a moving target,” he says. “So, you can build a machine that plays chess, then you build one that can drive through city streets and so on. People argue about whether it’s really intelligent or not – and usually it’s argued it isn’t. I want to create something where nobody can argue it isn’t intelligent. So, I was thinking about what’s an unmistakable, unequivocal hallmark of intelligence, and I think it’s creativity and particularly curiosity.”
“Does a curious and creative machine mean a sentient machine?” I ask.
“Well, what does that mean?” asks Hod. “I have to push you on what you mean by ‘sentient’.”
Bollocks. I’ve just been asked by a leading researcher into intelligent machines to define sentience – one of the biggest pending questions in philosophy. This is worse than when Cynthia Breazeal asked me to come up with an alternative word for ‘robot’. Or if Andrew Lloyd Webber asked me to say something nice about one of his musicals. I feel out of my depth and we’re barely into our chat. I do the only thing I can.
“Well, let me ask you,” I say. “What do you mean by it?”
Hod pauses. I’m not sure he was expecting a return serve, especially one that in any decent rule book would be considered cheating.
“I interpret it as deliberate versus reactive. Er… human-like…” He pauses again. “I don’t know.”
A-ha! Well, like I said, it is one of the biggest pending questions in philosophy.
“Alive?” I venture.
“It’s difficult to identify what life is right?”
And there’s the rub. Life has avoided a definitive definition for as long as we’ve tried to make one – as has ‘intelligence’. So if you’re trying to create ‘artificial intelligent life’ you’re already in a quagmire of semantic lobbying. I’m reminded of my chat last week with George Church (Professor of Genetics, Harvard Medical School). “I think life is actually a quantitative measure,” said George, by which he means something that can be defined not with either a ‘yes’ or ‘no’ but on a scale. “It’s not something where either you have it or you don’t. So I would say that there are some things that are more alive than others.” And I don’t think it’s overstating things to say that Hod certainly has made machines that are ‘more alive’ than many others.
Then he says an interesting thing. “I think men have this hubris of wanting to create life. We try to create life out of matter.”
‘Hubris’ is one of those words like ‘semiotics’ and ‘insurance’ that I’ve heard a lot but didn’t really know what it meant for a long time (I’m still struggling with ‘insurance’). I look up ‘hubris’ when I get back to my hotel. It means excessive pride or arrogance. In classical literature it’s usually a precursor to, and the cause of, a character’s downfall. The legend of Icarus is a good example. With that one word Hod has encapsulated the two defining criticisms aimed at Artificial Intelligence research. On one end there are those who say we’ll never create a truly artificial intelligence and that we’re arrogant to believe we can. On the other there are those who worry we will build smart machines and in our arrogance be blind to the danger that they will one day do away with or enslave us. (There are more measured positions in between the two such as Hubert Dreyfus’s and Hod’s own – both of whom suggest that a lot of AI research has been in the wrong direction).
Hod doesn’t believe in the latter James Cameron-esque scenario, but sees a confederacy of man and machine. He has some sympathy for the ‘singularity hypothesis’ of Ray Kurzweil (who I’m interviewing early next year) which talks of a ‘merger of our biological thinking and the existence of our technology’ but doesn’t see a machine-human hybrid (Juan Enriquez’s Homo Evolutis) as the only scenario. “Merging could also mean intellectually merging, meaning that they explain stuff to us.”
Lipson became famous (in robotic circles) for his work building robots that are arguably self aware. His Starfish robot, which I see sitting forlornly on a shelf in his lab, is iconic for learning to walk from first principles. It wasn’t given a program that told it how to move its various motors and joints to achieve locomotion. Instead Lipson gave it a program that enabled it to learn about itself – and use this knowledge to subsequently work out how to move.
“The essential thing was it created a self image,” Lipson tells me. “It created that self image through physical experimentation. So it moved its motors, it sensed its motion and then it created various models of what it thought it might look like – ‘maybe I’m a snake? maybe I’m a spider?’ We told it to create models – multiple different explanations that might explain what it knows so far.”
The robot then stress-tested those models by sending them into competition with each other. “It creates an experiment for itself that focuses on the area where there’s the most disagreement between what the models predict. We put in the code to look for disagreements,” explains Hod.
For example, let’s say the robot is wondering which move to do next in order to learn about itself more. It could try a movement that, when completed, the models all predict it will be sitting at an angle of about 20 degrees. One model might predict 19 degrees, another 21 degrees, a third 21.2 degrees. However, if it tries another move the models have very different ideas about the result. One says the robot will be at an angle of 12 degrees, another predicts 25 degrees, a third says 45. This latter movement is more likely to be the one the robot chooses next, because it will learn the most from it, and get an idea of which model is closer to the truth. It’s where there’s most disagreement that there’s most to learn. “We tell it ‘you create models – multiple different explanations for what you see – and then look for what new experiment creates disagreement between predictions of these candidate hypotheses’,” says Lipson. “That’s the bottom line of curiosity.”
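For the code-curious, the ‘ask where the models disagree most’ idea is easy to sketch. This is my own toy Python illustration, not Lipson’s actual software: the ‘models’ are just stand-in prediction functions echoing the angles in the example above, and disagreement is measured as the statistical spread of their predictions.

```python
# Toy sketch of disagreement-driven experiment selection.
# Each "model" maps a candidate move to a predicted resting angle (degrees).
from statistics import pvariance

def pick_most_informative(moves, models):
    """Return the move whose predicted outcomes disagree the most."""
    def disagreement(move):
        predictions = [model(move) for model in models]
        return pvariance(predictions)  # spread across the models' predictions
    return max(moves, key=disagreement)

# Stand-in models echoing the example in the text:
models = [
    lambda m: {"A": 19, "B": 12}[m],
    lambda m: {"A": 21, "B": 25}[m],
    lambda m: {"A": 21.2, "B": 45}[m],
]
print(pick_most_informative(["A", "B"], models))  # prints "B"
```

Running it picks move ‘B’ – the one with predictions of 12, 25 and 45 degrees – because that’s where there’s most to learn.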
The models that do best ‘survive’ and the program kills off the others. The remaining models ‘give birth’ to a generation of slightly mutated versions of themselves and another round of ‘survival of the fittest’ ensues. Or to put it another way, over many iterations the program homes in on a model that describes reality. The predictions get closer and closer to what actually happens until one model is deemed sufficient for the robot to say ‘this is what I look like’.
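That ‘survive, give birth, mutate, repeat’ loop is the classic shape of an evolutionary algorithm, and a minimal version fits in a few lines of Python. This is purely illustrative (not Lipson’s implementation): here a ‘model’ is just a single number guessing an unknown target, standing in for the robot’s far richer self-models.

```python
# A minimal evolutionary loop in the spirit described above (a sketch,
# not Lipson's code). A "model" is one number guessing an unknown target;
# fitness is closeness to what actually happens.
import random

random.seed(0)
TARGET = 20.0  # stands in for "reality"

def fitness(model):
    return -abs(model - TARGET)  # closer to reality = fitter

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(200):
    # Survival of the fittest: the best half survive, the rest are killed off.
    population.sort(key=fitness, reverse=True)
    survivors = population[: len(population) // 2]
    # Survivors 'give birth' to slightly mutated versions of themselves.
    children = [m + random.gauss(0, 1.0) for m in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(round(best, 2))  # after many generations, very close to 20.0
```

Over two hundred generations the population homes in on the target, just as the robot’s self-models home in on what it actually looks like.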
If all this talk of ‘mutation’, successive ‘generations’ and ‘survival of the fittest’ sounds slightly familiar that’s because this kind of mathematics takes its inspiration from Darwin’s theories of evolution. Mathematicians might call it ‘evolutionary computation’ or say Lipson’s work is a good example of ‘genetic algorithms’ – and it’s a technique that’s been around for decades. What’s different about Lipson’s work is the implementation, something he calls ‘co-evolution’.
“We set off two lines of enquiry. So one of them is the thing that creates models and the other is the thing that asks questions, and they have a predator/prey kind of relationship. Because the questions basically try to break the models.” The questions try to find something the models disagree about so they can kill off the weaker ones. It’s like Anne Robinson in code.
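The predator/prey arrangement can also be sketched, again as a toy of my own devising rather than anything from Lipson’s lab: one population of candidate models tries to fit a hidden law, while a rival population of ‘questions’ (test inputs) evolves to find the points where the surviving models disagree most – the points where they can be broken.

```python
# Toy co-evolution sketch (illustrative only, not Lipson's code).
# Models guess a hidden linear law; "questions" evolve to probe the inputs
# where the surviving models disagree most.
import random

random.seed(1)

def hidden(x):
    return 3 * x + 2  # the "reality" being modelled

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

models = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(10)]
tests = [random.uniform(-10, 10) for _ in range(10)]

for _ in range(100):
    # Prey step: models survive by matching observations at the test points.
    def error(m):
        return sum(abs(predict(m, x) - hidden(x)) for x in tests)
    models.sort(key=error)
    models = models[:5] + [(s + random.gauss(0, 0.3), i + random.gauss(0, 0.3))
                           for s, i in models[:5]]

    # Predator step: questions survive by maximising model disagreement.
    def disagreement(x):
        preds = [predict(m, x) for m in models]
        return max(preds) - min(preds)
    tests.sort(key=disagreement, reverse=True)
    tests = (tests[:5]
             + [max(-10.0, min(10.0, x + random.gauss(0, 1.0)))
                for x in tests[:3]]
             + [random.uniform(-10, 10) for _ in range(2)])  # fresh probes

slope, intercept = min(models, key=error)
print(round(slope, 2), round(intercept, 2))
```

Over the generations the surviving models close in on the hidden law precisely because the questions keep hunting for the places where they still disagree.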
It has to be said that if you see the Starfish robot ‘walking’ you wouldn’t immediately think it had a future career as a dancer. It doesn’t so much walk as stagger and flop forward. It’s less Ginger Rogers and more gin and tonic. Still the achievement is not to be sniffed at. It had no parents and no role models. This was a robot actively learning to do something no one had taught it to. And robots that learn this way have all sort of interesting possibilities – as Lipson was about to find out.
You can see Hod demonstrating his Starfish robot in this TED talk.
With colleague Michael Schmidt he wondered if the same computer program he’d placed at the core of his Starfish robot could go beyond working out merely what its host body looked like and begin to reach useful conclusions about the wider world.
“We said ‘let’s take it out of this particular body and let it control motors of any experiment’ ”. Their first idea was to give the robot brain control of motors that set up the starting position for a ‘double pendulum’ before letting it fall. The robot was also able to record the results of each experiment, using motion capture technology to accurately track the pendulum’s motion.
A double pendulum is a bonkers little contraption. It consists of two solid sticks jointed together in the middle by a free moving hinge. Double pendulums do wacky things (You can see one in action here). Whilst the top pendulum swings from left to right the bottom one likes to mix it up. Because it’s not attached to a stationary point (like the top pendulum) but something moving (the bottom end of that swinging top pendulum) it will swing left, swing right, spin round clockwise, or counter clockwise, seemingly at random. Lipson and Schmidt chose the double pendulum because it’s a good example of a system that’s simple to set up but which can quickly exhibit chaotic behaviour – and therefore would be a good test of the technology’s ability to build a useful conceptual model of what was going on. The results were startling. In fact, the program went a long way to deriving the laws of motion. In 3 hours.
It followed the same process as it had when it sat in the robot – guessing at equations that might explain what it had seen so far, then setting up new experiments (in this case new starting positions for the pendulum) that targeted areas of most disagreement between the equations. “With the double pendulum it very quickly puts it up exactly upright, because some models say it’s going to fall left and some models say it’s going to fall right. There’s disagreement. It’s not a passive algorithm that sits back, watching,” says Hod smiling. “It asks questions. That’s curiosity.”
Just like humans, it seems machines learn best when they ask their own questions and find their own answers, rather than being given huge amounts of data to absorb. “Most algorithms you see are passive. They’re data intensive. You feed in terabytes of data and these algorithms just sit back and watch. But in the real world you can’t sit back and watch. You have to probe, because collecting data is expensive, it takes time, it’s risky.” By contrast Lipson’s machine brain “only ever sees what it asks for. It does not see all the data.” In fact Lipson decided to compare the efficiency of this ‘active’ method of enquiry against a more traditional passive ‘here’s all the data, what can you tell me?’ method. “It doesn’t work. It has to go through a reasoning.”
Remind you of anyone? I see the hemlock taker and the chicken freezer partially re-incarnated in machine form. The programming consigns inaccurate models to the dustbin by getting the robot to admit there are others that offer a better explanation of the real world (hello Socrates) and does this with evidence won via experimentation (hello Bacon). What Lipson has done is create a computational methodology for asking good questions. And asking good questions is what it is all about when it comes to understanding anything.
“Physicists like Newton and Kepler could have used a computer running this algorithm to figure out the laws that explain a falling apple or the motion of the planets with just a few hours of computation,” said Schmidt in an interview with the US National Science Foundation (who helped fund the research).
However, we’re still a long way off what I (or Hod) would call an intelligent machine. It still takes a human to work out if anything the machine has found is useful. The machine didn’t know it had found laws of motion, it took Hod and his colleagues to recognise the equations that were produced. “A human still needs to give words and interpretation to laws found by the computer,” says Schmidt. So, we’re still some distance from Hod’s confederacy of man and machine, where they explain stuff to us.
One of the areas where Hod’s brains could turn out to be useful is cracking problems where there is lots of data but we still have little idea what’s going on. Indeed plenty of people with acres of data have been beating a path to his door, including heavyweight data generators like the Large Hadron Collider at CERN near Geneva. “The people at CERN said ‘there is this gap in a prediction of particle energy. Here’s data for 3,000 particles. Can you predict something?’ ” The result was a strange mix of the elating and the disappointing. “We let it run and it came up with a beautiful formula,” says Hod. “We were very excited but it was a famous formula they already knew. So for them it was a disappointment…. But for us… We rediscovered something that people are famous for.”
Again, the crucial insight comes from humans who can tell if something means anything or not. It’s an essential step – and without it the results are largely worthless (which is not to say the time saved is not incredibly useful). I’m reminded of a scene from Douglas Adams’ comedy The Hitchhiker’s Guide to the Galaxy in which a supercomputer called Deep Thought is built by a race of supersmart humanoids to answer the ultimate question. ‘What is the answer?’ ask the humanoids awaiting instant enlightenment. ‘To what?’ says the computer. ‘Life! The Universe! Everything!’ they respond. ‘The ultimate question!’ The computer announces there is an answer… but it will take several million years to compute. At the duly allotted time millennia later the humanoids’ descendants gather to hear the answer, which is announced to be ‘42’. The problem, suggests Deep Thought, is that they don’t really know what ‘the question’ is.
"You're not going to like it"
No-one understands the irony in this story more than Hod Lipson. “In biology there are many systems where we do not know their dynamics or the rules that they obey”. So he set his machine looking at a process within a cell. True to form the program generated an equation in double quick time. But what did it mean?
“We’re still looking at it,” says Hod with a smile. “We’re staring at it very intently. But we still don’t have an explanation. And we can’t publish until we know what it is.”
“You don’t understand what it’s saying?”
“No,” says Hod.
“But in science you go from observations which produce data, to models which produce predictions, to underlying laws – and from there you go to meaning. What’s good is that we can go from data straight to laws, whereas previously people could only go from data to predictions. So now a scientist can throw it some data, go and have a cup of coffee, come back and see 15 different models that might explain what is going on. That saves a lot of time. Previously coming up with a predictive model could take a career. Now at least you can automate that so you can focus on meaning.” That’s a powerful enabling technology. More time to think. Hod is doing for thinking what dishwashers have done for after dinner conversation. Although it may not always work out that way.
Several months later I e-mail Hod to see if they’ve got anywhere with the equation his machine generated from the cell-observing experiment. “We’re still struggling,” he writes. “We’ve been trying for months to get the AI to explain it to us through analogy. But we don’t get it.” It could be that Hod’s machine has discovered something our human brains are just not smart enough to see. “Maybe it’s hopeless,” he says. “Like explaining Shakespeare to a dog.” This is why Hod is trying to convince his collaborators to publish the equation anyway – and see if anybody else out there can shed light on its meaning.
"Friends, Romans... Hey! Is that a biscuit?!"
Because Hod is curious about what makes us curious I ask him if his program could come up with a model of how to learn.
“Could we use your program to observe data about how machines learn, or how people learn, and come up with a model of learning?”
We’re getting seriously abstract now.
Hod laughs. “That’s what we’re working on now. We’re working on what we call self reflective systems. We want to make machines meta-cognitive – they are thinking about thinking.”
This is something of a departure from a lot of AI research. “Almost all the AI systems program a way of thinking and they do that thinking for you – which is the extent of it. You could argue that’s about as smart as a lizard. But if you want to get to human-like intelligence, you need a brain that can think about thinking…”
Sadly (for this blog) Hod’s work in this area is currently unpublished so out of courtesy I’m leaving a more detailed explanation of what we discussed until the book is published. In summary however, Hod is taking his model of ‘co-evolutionary AI’ to the next level. Instead of modeling robot physiology, the motion of pendulums or data from physicists in Switzerland he has one robot brain trying to model how another one learns – and then, in true Lipson style, he’s asking one to challenge the other – in order to find out more. In this way one brain builds a model of how the other learns, and can start to make helpful suggestions.
“That’s self reflection,” says Hod. He adds, “That’s important in life. You can learn things the hard way, or you can think about how you’ve been thinking.”
It’s something you can imagine Socrates or Bacon saying.
Today I take a five-hour bus ride to the Ithaca campus of Cornell University, in preparation for my interview with Robotics and Artificial Intelligence researcher Hod Lipson, but not before eating the worst pizza in the world. My bus leaves from the Cornell Club on 44th Street, but having arrived three quarters of an hour before departure I decide to grab a bite to eat. This ranks as one of the worst decisions of my life, up there with asking out my landlady, arguing with security at the US embassy in London and agreeing to see Jimmy Nail live. There was once a rumour that Frank Zappa ate shit on stage, which wasn’t true. As Frank pointed out, “the closest I ever came to eating shit anywhere was at a Holiday Inn buffet in Fayetteville, North Carolina, in 1973.” I don’t know how bad that Holiday Inn buffet really was, but I suggest it has stiff competition in the form of the Europa Café on 5th Avenue.
Cornell’s Ithaca campus is beautiful. This part of town sits over one of the 100-plus verdant gorges the city is famous for, along with a wealth of impressive 19th-century architecture dating from the college’s formation.
There’s plenty of greenery and open spaces too and throngs of students wander about in a fantasy of American College life, their books clutched to their chests as they laugh, flirt and learn in equal measure. I feel incredibly old. And of course I am. It’s a fact that I’m over twice the age of about 90% of the people I walk past. Another unequivocal fact is that there’s no shortage of students smoking weed at Cornell. I know this not because I see any toking on a huge bifter but because I find the Insomnia Cookies store advertising “Warm Cookies Delivered Late Night”.
Like, you know, like, wow, I'm kinda hungry
It is a store designed to perfectly service ‘The Munchies’ – and where better to put it than smack bang in the middle of a college campus? It’s a neat business model – and I subsequently find out that Insomnia Cookies has outlets on 18 university campuses throughout the States. Retail is all about location. I’m giggling uncontrollably as I walk past, which but for my age probably makes me look like a potential customer.