September 23rd

It’s a big question, but one that is particularly pertinent to my interview today with Robotics and Artificial Intelligence researcher, Hod Lipson. Because Hod and his team build machines that find truths.

The search for truth has a long history (one could argue it is history) which I’m not about to get into (and it’s not the book I’m writing) but if someone said to me ‘Go on then, history of truth in 5 minutes’ I’d probably reach for two key figures – Socrates (born Greece, 469 BC) and Francis Bacon (born England, 1561), not least because they both died in interesting ways (which is useful for storytelling).

Socrates was put to death by the state of Athens for “refusing to recognise the gods recognised by the state” and “corrupting the youth” (explaining perhaps why Black Sabbath rarely toured in Greece). Despite clear chances to escape his fate, Socrates placidly took a drink containing poison hemlock prepared by the authorities. Francis Bacon, many believe, died as a result of trying to freeze a chicken. It might seem odd therefore to hold up both as key figures in the history of reason.

Socrates' natural heir?

You may also wonder why I am suddenly diving into the past when I’m writing a book about the future. Bear with me, and blame Hod Lipson and his robots.

Both Socrates and Bacon were very good at asking useful questions. In fact, Socrates is largely credited with coming up with a way of asking questions, ‘The Socratic Method’, which itself is at the core of the ‘Scientific Method’, popularised by Bacon during ‘The Enlightenment’ – a period of European history when ‘reason’ and ‘faith’ had an almighty bunfight and the balance of power between church, state and citizen was being questioned. Lots of philosophers and scientists challenged the prevailing orthodoxy of religious authority by saying ‘we need to make decisions based on critical thinking, evidence and reasoned debate, not on sacred texts and religious faith’ and the church replied with ‘yes, but we own most of the land, plus people really like the idea of God. Ask them’.

I'm pretty popular, actually

The Socratic Method disproves arguments by finding exceptions to them, and can therefore lead your opponent to a point where they admit something that contradicts their original position. It’s powerful because it kind of gets people to admit to themselves that they’re wrong. It’s also pretty good at exposing your own (as well as others’) prejudices and gaps in reasoning. Lawyers use it a lot. Don’t let this influence you against it. Lawyers also use toilet paper and you’re not about to reject that idea.

Used by lawyers

Here’s an example.

During excessive bouts of hard and progressive rock emanating from my older brothers’ bedrooms, my dad used to say, “people only play electric guitars because they can’t play real ones” (by which he meant acoustic guitars played by nice chaps called Julian with sensible haircuts, as opposed to electric guitars played by long-haired geezers called Dave and Jimmy).

First step of the Socratic method: assume your opponent’s statement is false and find an example to illustrate this. This YouTube clip of Pink Floyd’s David Gilmour playing acoustic guitar, for instance. Clearly Dave Gilmour can play a ‘real’ guitar as well as an electric one and my dad must grudgingly accept the fact. At this point dad would assert that Dave Gilmour was ‘the exception that proved the rule’.

Next step. Take your opponent’s original statement and restate it to fit their new modified position. “So, dad, you’re saying that people only play electric guitars because they can’t play acoustic ones, except for Dave Gilmour who can do both?”. Then return to step one.

Ironically this led us to playing dad far more Black Sabbath, Pink Floyd, Aerosmith and Led Zeppelin than if he’d kept his theory to himself. (MTV’s ‘Unplugged’ series would become his nemesis.) Eventually dad would have to admit the truth – which was not that the rock musicians we listened to weren’t talented, but that he just didn’t like rock music.

This example is trivial but you can use the method to demonstrate some pretty esoteric points, and expose fundamental new insights. A popular example that can really annoy your mates in the pub is proving that things don’t have a colour.

Socratic argument, while undoubtedly one of the most useful things ever devised, can also annoy the tits off people, as the man who lends it his name found out to his cost. The story is that Socrates used his technique to prove a lot of bigwigs in Athenian society were mistaken in their thinking – and they responded by having him killed. This proves that engaging people’s brains is never enough if you want change. You have to engage their emotions too. As Professor George Church said to me during our talk last week, “Politicians know how effective emotion is in comparison to rational thought. You can really move mountains with emotion. With rational thought you just end up getting people to change the channel”.

By the time Francis Bacon went to university, the teachings of Aristotle – a student of Plato, who was himself Socrates’ student – had become entrenched as the way to conduct ‘scientific inquiry’. Aristotle had pioneered deductive reasoning, the practice of deriving new knowledge from foundational truths, or ‘axioms’. In short, it was generally believed that if you got enough boffins together to have a solid debate, scientific truth would be teased out over time. This worked well for mathematics, where axioms had long been established (e.g. the basic mathematical operations – plus, minus, divide, multiply), but was less good for finding out new stuff about the physical world. Much to Francis’ dismay it seemed that science involved sitting around in armchairs. Nobody was getting off their arse and observing anything new or doing any experiments. Nobody was finding the ‘axioms of reality’ (which is arguably a good name for a progressive rock outfit).

'Let's do it in 13/8!'

In common with Socrates, Bacon stressed it was just as important to disprove a theory as to prove one – and observation and experimentation were key to achieving both aims. In a way he was Socrates 2.0 (which is another good name for a prog band). He also saw science as a collaborative affair, with scientists working together, challenging each other. All of this is a hallmark of scientific good practice today – observe, experiment, theorise… and then try to prove yourself wrong – all in collaboration with peers who can give you a hard time. It’s important to note that Bacon himself wasn’t a distinguished scientist. His main contribution was the articulation and championing of an empirical scientific method. That said, he did do the odd experiment, including the one that killed him.

While traveling from London to Highgate with the King’s personal physician, Bacon wondered whether snow might be used to preserve meat. The two got off their coach, bought a chicken and stuffed it with snow to test the theory. In his last letter Bacon is said to have written, “As for the experiment itself, it succeeded excellently well.” Some historians think the chicken story is made up, but the popular account is that the act of stuffing the chicken led to Bacon contracting fatal pneumonia. This is possibly the only instance of bacon being killed by eggs.

Reason's nemesis?

Hod Lipson looks like a very friendly bear. He has a round but not chunky frame, thick black hair, and looks healthy and happy. His features are open and innocent. He’d be almost childlike if it weren’t for his demeanour – a kind of solid confidence that only comes with age. You get the feeling Hod knows exactly what he wants to achieve. I suspect he was a mischievous child, curious, poking his nose into most things. And whilst most of the scientists I’ve met are driven by an almost insatiable curiosity, Lipson takes curiosity to a new level, literally. He’s curious about curiosity.

“ ‘Artificial Intelligence’ is a moving target,” he says. “So, you can build a machine that plays chess, then you build one that can drive through city streets and so on. People argue about whether it’s really intelligent or not – and usually it’s argued it isn’t. I want to create something where nobody can argue it isn’t intelligent. So, I was thinking about what’s an unmistakable, unequivocal hallmark of intelligence, and I think it’s creativity and particularly curiosity.”

“Does a curious and creative machine mean a sentient machine?” I ask.

“Well, what does that mean?” asks Hod. “I have to push you on what you mean by ‘sentient’.”

Bollocks. I’ve just been asked by a leading researcher into intelligent machines to define sentience – one of the biggest pending questions in philosophy. This is worse than when Cynthia Breazeal asked me to come up with an alternative word for ‘robot’. Or if Andrew Lloyd Webber asked me to say something nice about one of his musicals. I feel out of my depth and we’re barely into our chat. I do the only thing I can.

“Well, let me ask you,” I say. “What do you mean by it?”

Hod pauses. I’m not sure he was expecting a return serve, especially one that in any decent rule book would be considered cheating.

“I interpret it as deliberate versus reactive. Er… human-like…” He pauses again. “I don’t know.”

A-ha! Well, like I said, it is one of the biggest pending questions in philosophy.

“Alive?” I venture.

“It’s difficult to identify what life is, right?”

And there’s the rub. Life has avoided a definitive definition for as long as we’ve tried to make one – as has ‘intelligence’. So if you’re trying to create ‘artificial intelligent life’ you’re already in a quagmire of semantic lobbying. I’m reminded of my chat last week with George Church (Professor of Genetics, Harvard Medical School). “I think life is actually a quantitative measure,” said George, by which he means something that can be defined not with either a ‘yes’ or ‘no’ but on a scale. “It’s not something where you either have it or you don’t. So I would say that there are some things that are more alive than others.” And I don’t think it’s overstating things to say that Hod certainly has made machines that are ‘more alive’ than many others.

Then he says an interesting thing. “I think men have this hubris of wanting to create life. We try to create life out of matter.”

‘Hubris’ is one of those words like ‘semiotics’ and ‘insurance’ that I’ve heard a lot but didn’t really know what it meant for a long time (I’m still struggling with ‘insurance’). I look up ‘hubris’ when I get back to my hotel. It means excessive pride or arrogance. In classical literature it’s usually a precursor to, and the cause of, a character’s downfall. The legend of Icarus is a good example. With that one word Hod has encapsulated the two defining criticisms aimed at Artificial Intelligence research. At one end there are those who say we’ll never create a truly artificial intelligence and that we’re arrogant to believe we can. At the other there are those who worry we will build smart machines and in our arrogance be blind to the danger that they will one day do away with or enslave us. (There are more measured positions in between the two, such as Hubert Dreyfus’s and Hod’s own – both of whom suggest that a lot of AI research has been in the wrong direction.)

Hod doesn’t believe in the latter James Cameron-esque scenario, but sees a confederacy of man and machine. He has some sympathy for the ‘singularity hypothesis’ of Ray Kurzweil (who I’m interviewing early next year), which talks of a ‘merger of our biological thinking and existence with our technology’, but doesn’t see a machine–human hybrid (Juan Enriquez’s Homo Evolutis) as the only scenario. “Merging could also mean intellectually merging, meaning that they explain stuff to us.”

Lipson became famous (in robotic circles) for his work building robots that are arguably self-aware. His Starfish robot, which I see sitting forlornly on a shelf in his lab, is iconic for learning to walk from first principles. It wasn’t given a program that told it how to move its various motors and joints to achieve locomotion. Instead Lipson gave it a program that enabled it to learn about itself – and use this knowledge to subsequently work out how to move.

“The essential thing was it created a self image,” Lipson tells me. “It created that self image through physical experimentation. So it moved its motors, it sensed its motion and then it created various models of what it thought it might look like – ‘maybe I’m a snake? maybe I’m a spider?’ We told it to create models – multiple different explanations that might explain what it knows so far.”

The robot then stress-tested those models by sending them into competition with each other. “It creates an experiment for itself that focuses on the area where there’s the most disagreement between what the models predict. We put in the code to look for disagreements,” explains Hod.

For example, let’s say the robot is wondering which move to try next in order to learn more about itself. It could try a movement after which the models all predict it will be sitting at an angle of about 20 degrees: one model might predict 19 degrees, another 21 degrees, a third 21.2 degrees. However, if it tries another move the models have very different ideas about the result. One says the robot will be at an angle of 12 degrees, another predicts 25 degrees, a third says 45. This latter movement is more likely to be the one the robot chooses next, because it will learn the most from it, and get an idea of which model is closer to the truth. It’s where there’s most disagreement that there’s most to learn. “We tell it ‘you create models – multiple different explanations for what you see – and then look for what new experiment creates disagreement between predictions of these candidate hypotheses’,” says Lipson. “That’s the bottom line of curiosity.”
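
(For the programmers among you, here’s roughly what that ‘ask the question the models argue about most’ step looks like in code. This is my own toy sketch, not Lipson’s actual software – the three candidate ‘self-models’ are invented stand-ins, and disagreement is measured simply as the spread of their predictions.)

```python
import statistics

# Three competing 'self-models'. Each predicts the angle (in degrees)
# the robot ends up at after a given move. These toy linear models are
# stand-ins for the robot's real candidate hypotheses.
models = [
    lambda move: 20 + 0.1 * move,   # 'maybe I'm a snake?'
    lambda move: 15 + 0.5 * move,   # 'maybe I'm a spider?'
    lambda move: 5 + 2.0 * move,    # some other body plan entirely
]

candidate_moves = range(-10, 11)    # moves the robot could try next

def disagreement(move):
    """Spread of the models' predictions for one candidate move."""
    return statistics.pstdev([m(move) for m in models])

# Curiosity in one line: do the move the models argue about most,
# because its outcome will eliminate the largest number of hypotheses.
next_move = max(candidate_moves, key=disagreement)
print(next_move, disagreement(next_move))
```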

The models that do best ‘survive’ and the program kills off the others. The remaining models ‘give birth’ to a generation of slightly mutated versions of themselves and another round of ‘survival of the fittest’ ensues. Or to put it another way, over many iterations the program homes in on a model that describes reality. The predictions get closer and closer to what actually happens until one model is deemed sufficient for the robot to say ‘this is what I look like’.
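
(Again for the code-minded, the evolutionary loop itself is short. This is a hedged sketch – toy two-parameter linear models stand in for whatever model class the real system actually evolves:)

```python
import random

def make_model(a, b):
    """A toy candidate self-model: predicted angle = a + b * move."""
    model = lambda move: a + b * move
    model.params = (a, b)
    return model

def fitness(model, experiments):
    """Negative total prediction error over the experiments run so far."""
    return -sum(abs(model(move) - seen) for move, seen in experiments)

def mutate(model):
    """'Give birth' to a slightly tweaked copy of a surviving model."""
    a, b = model.params
    return make_model(a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))

def evolve(experiments, pop_size=30, keep=5, generations=200):
    population = [make_model(random.uniform(-5, 5), random.uniform(-5, 5))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Survival of the fittest: models that best explain the data
        # survive; the program kills off the others...
        population.sort(key=lambda m: fitness(m, experiments), reverse=True)
        survivors = population[:keep]
        # ...and the survivors breed mutated offspring for the next round.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - keep)]
    return population[0]   # the model that best 'describes reality' so far

# Each experiment is a (move, observed angle) pair the robot has recorded.
best = evolve([(0, 20.0), (5, 22.4), (-5, 17.6)])
print(best.params)
```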

If all this talk of ‘mutation’, successive ‘generations’ and ‘survival of the fittest’ sounds slightly familiar that’s because this kind of mathematics takes its inspiration from Darwin’s theories of evolution. Mathematicians might call it ‘symbolic regression’ or say Lipson’s work is a good example of ‘genetic algorithms’ – a technique that’s been around for decades. What’s different about Lipson’s work is the implementation, something he calls ‘co-evolution’.

“We set off two lines of enquiry. So one of them is the thing that creates models and the other is the thing that asks questions, and they have a predator/prey kind of relationship. Because the questions basically try to break the models.” The questions try to find something the models disagree about so they can kill off the weaker ones. It’s like Anne Robinson in code.
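
(One more sketch for the road, showing that predator/prey coupling. It reuses make_model, fitness and mutate from the snippet above, and run_experiment – my invention – stands in for actually trying a move on the physical robot:)

```python
import random
import statistics

# make_model(), fitness() and mutate() as defined in the previous sketch.

def disagreement(test_move, models):
    """How much the current models argue over one candidate experiment."""
    return statistics.pstdev([m(test_move) for m in models])

def coevolve(models, tests, run_experiment, rounds=50):
    data = []   # (move, observed result) pairs gathered so far
    for _ in range(rounds):
        # Predator step: evolve the questions. The tests that split the
        # models most are the fittest - they can kill the weak models.
        tests.sort(key=lambda t: disagreement(t, models), reverse=True)
        tests = tests[:5] + [t + random.gauss(0, 1.0) for t in tests[:5]]
        # Run the single most divisive experiment on the real hardware.
        data.append((tests[0], run_experiment(tests[0])))
        # Prey step: evolve the answers. Models that survive the new
        # evidence breed; the rest are discarded.
        models.sort(key=lambda m: fitness(m, data), reverse=True)
        models = models[:5] + [mutate(random.choice(models[:5]))
                               for _ in range(len(models) - 5)]
    return models[0]
```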

It has to be said that if you see the Starfish robot ‘walking’ you wouldn’t immediately think it had a future career as a dancer. It doesn’t so much walk as stagger and flop forward. It’s less Ginger Rogers and more gin and tonic. Still, the achievement is not to be sniffed at. It had no parents and no role models. This was a robot actively learning to do something no one had taught it to. And robots that learn this way have all sorts of interesting possibilities – as Lipson was about to find out.

You can see Hod demonstrating his Starfish robot in this TED talk.

With colleague Michael Schmidt he wondered if the same computer program he’d placed at the core of his Starfish robot could go beyond working out merely what its host body looked like and begin to reach useful conclusions about the wider world.

“We said ‘let’s take it out of this particular body and let it control the motors of any experiment’.” Their first idea was to give the robot brain control of motors that set up the starting position for a ‘double pendulum’ before letting it fall. The robot was also able to record the results of each experiment using motion-capture technology – allowing it to track the pendulum’s motion accurately.

A double pendulum is a bonkers little contraption. It consists of two solid sticks jointed together in the middle by a free-moving hinge. Double pendulums do wacky things (you can see one in action here). Whilst the top pendulum swings from left to right, the bottom one likes to mix it up. Because it’s not attached to a stationary point (like the top pendulum) but to something moving (the bottom end of that swinging top pendulum) it will swing left, swing right, spin round clockwise or counter-clockwise, seemingly at random. Lipson and Schmidt chose the double pendulum because it’s a good example of a system that’s simple to set up but which can quickly exhibit chaotic behaviour – and would therefore be a good test of the technology’s ability to build a useful conceptual model of what was going on. The results were startling. In fact, the program went a long way towards deriving the laws of motion. In 3 hours.
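
(If you want to feel the chaos for yourself rather than take my word for it, here’s a quick numerical sketch. The equations of motion are the standard textbook ones for an idealised double pendulum – point masses on massless rods, nothing from Lipson’s lab – and the punchline is that two releases differing by a thousandth of a degree soon disagree completely:)

```python
import math

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0   # gravity, lengths, masses

def derivs(state):
    """Equations of motion for an idealised double pendulum."""
    t1, w1, t2, w2 = state          # angles and angular velocities
    d = t1 - t2
    den = 2 * M1 + M2 - M2 * math.cos(2 * d)
    a1 = (-G * (2 * M1 + M2) * math.sin(t1)
          - M2 * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M2 * (w2 ** 2 * L2 + w1 ** 2 * L1 * math.cos(d))
          ) / (L1 * den)
    a2 = (2 * math.sin(d) * (w1 ** 2 * L1 * (M1 + M2)
          + G * (M1 + M2) * math.cos(t1)
          + w2 ** 2 * L2 * M2 * math.cos(d))
          ) / (L2 * den)
    return (w1, a1, w2, a2)

def rk4_step(state, dt=0.001):
    """One fourth-order Runge-Kutta integration step."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (p + 2 * q + 2 * r + u)
                 for s, p, q, r, u in zip(state, k1, k2, k3, k4))

# Two releases differing by a thousandth of a degree in the top angle.
a = (math.radians(120.000), 0.0, math.radians(10), 0.0)
b = (math.radians(120.001), 0.0, math.radians(10), 0.0)
for step in range(1, 20001):                     # 20 simulated seconds
    a, b = rk4_step(a), rk4_step(b)
    if step % 5000 == 0:
        print(f"t={step / 1000:5.1f}s  lower-angle gap: "
              f"{abs(a[2] - b[2]):.6f} rad")
```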

It followed the same process as it had when it sat in the robot – guessing at equations that might explain what it had seen so far, then setting up new experiments (in this case new starting positions for the pendulum) that targeted the areas of most disagreement between the equations. “With the double pendulum it very quickly puts it up exactly upright, because some models say it’s going to fall left and some models say it’s going to fall right. There’s disagreement. It’s not a passive algorithm that sits back, watching,” says Hod, smiling. “It asks questions. That’s curiosity.”
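
(And here, for flavour, is a toy version of the ‘guessing at equations’ half. The real system – published by Schmidt and Lipson, and later released as the Eureqa tool – evolves and mutates whole expression trees; to keep this sketch short it just samples random nested expressions and keeps whichever best fits the recorded data. The data below is faked from a known law so there’s something to rediscover:)

```python
import math
import random

def random_expr(depth=2):
    """Grow a random candidate 'law' f(theta, omega) by nesting operations."""
    if depth == 0 or random.random() < 0.3:
        c = random.uniform(-2, 2)    # a leaf: a variable or a constant
        return random.choice([lambda t, w: t, lambda t, w: w, lambda t, w: c])
    if random.random() < 0.4:        # a unary operation wrapped round a subtree
        op, sub = random.choice([math.sin, math.cos]), random_expr(depth - 1)
        return lambda t, w: op(sub(t, w))
    op = random.choice([lambda x, y: x + y,      # or a binary operation
                        lambda x, y: x - y,
                        lambda x, y: x * y])
    left, right = random_expr(depth - 1), random_expr(depth - 1)
    return lambda t, w: op(left(t, w), right(t, w))

def error(expr, data):
    """How badly a candidate equation mispredicts the recorded motion."""
    return sum((expr(t, w) - target) ** 2 for t, w, target in data)

# Real input would be (angle, angular velocity, measured quantity) triples
# captured from the pendulum; here we fake a known law to rediscover.
data = [(t * 0.1, w * 0.1, math.sin(t * 0.1) * (w * 0.1))
        for t in range(-30, 31, 5) for w in range(-30, 31, 5)]

best = min((random_expr() for _ in range(20000)), key=lambda e: error(e, data))
print("best error:", error(best, data))
```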

Just like humans, it seems machines learn best when they ask their own questions and find their own answers, rather than being given huge amounts of data to absorb. “Most algorithms you see are passive. They’re data intensive. You feed in terabytes of data and these algorithms just sit back and watch. But in the real world you can’t sit back and watch. You have to probe, because collecting data is expensive, it takes time, it’s risky.” By contrast, Lipson’s machine brain “only ever sees what it asks for. It does not see all the data.” In fact Lipson decided to compare the efficiency of this ‘active’ method of enquiry against a more traditional, passive ‘here’s all the data, what can you tell me?’ method. “It doesn’t work. It has to go through a reasoning.”

Remind you of anyone? I see the hemlock taker and the chicken freezer partially re-incarnated in machine form. The programming consigns inaccurate models to the dustbin by getting the robot to admit there are others that offer a better explanation of the real world (hello Socrates) and does this with evidence won via experimentation (hello Bacon). What Lipson has done is create a computational methodology for asking good questions. And asking good questions is what it is all about when it comes to understanding anything.

“Physicists like Newton and Kepler could have used a computer running this algorithm to figure out the laws that explain a falling apple or the motion of the planets with just a few hours of computation,” said Schmidt in an interview with the US National Science Foundation (who helped fund the research).

However, we’re still a long way off what I (or Hod) would call an intelligent machine. It still takes a human to work out if anything the machine has found is useful. The machine didn’t know it had found laws of motion, it took Hod and his colleagues to recognise the equations that were produced. “A human still needs to give words and interpretation to laws found by the computer,” says Schmidt. So, we’re still some distance from Hod’s confederacy of man and machine, where they explain stuff to us.

One of the areas where Hod’s machine brains could turn out to be useful is cracking problems where there is lots of data but we still have little idea what’s going on. Indeed, plenty of people with acres of data have been beating a path to his door, including heavyweight data generators like the Large Hadron Collider at CERN near Geneva. “The people at CERN said ‘there is this gap in a prediction of particle energy. Here’s data for 3,000 particles. Can you predict something?’ ” The result was a strange mix of the elating and the disappointing. “We let it run and it came up with a beautiful formula,” says Hod. “We were very excited but it was a famous formula they already knew. So for them it was a disappointment…. But for us… We rediscovered something that people are famous for.”

Again, the crucial insight comes from humans who can tell if something means anything or not. It’s the essential step – and without it the results are largely worthless (which is not to say the time saved is not incredibly useful). I’m reminded of a scene from Douglas Adams’ comedy The Hitchhiker’s Guide to the Galaxy in which a supercomputer called Deep Thought is built by a race of supersmart humanoids to answer the ultimate question. ‘What is the answer?’ ask the humanoids, awaiting instant enlightenment. ‘To what?’ says the computer. ‘Life! The Universe! Everything!’ they respond. ‘The ultimate question!’ The computer announces there is an answer… but it will take several million years to compute. At the duly allotted time millennia later the humanoids’ descendants gather to hear the answer, which is announced to be ‘42’. The problem, suggests Deep Thought, is that they don’t really know what ‘the question’ is.

"You're not going to like it"

"You're not going to like it"

No-one understands the irony in this story more than Hod Lipson. “In biology there are many systems where we do not know their dynamics or the rules that they obey.” So he set his machine looking at a process within a cell. True to form the program generated an equation in double-quick time. But what did it mean?

“We’re still looking at it,” says Hod with a smile. “We’re staring at it very intently. But we still don’t have an explanation. And we can’t publish until we know what it is.”

“You don’t understand what it’s saying?”

“No,” says Hod.

“But in science you go from observations which produce data, to models which produce predictions, to underlying laws – and from there you go to meaning. What’s good is that we can go from data straight to laws, whereas previously people could only go from data to predictions. So now a scientist can throw it some data, go and have a cup of coffee, come back and see 15 different models that might explain what is going on. That saves a lot of time. Previously coming up with a predictive model could take a career. Now at least you can automate that so you can focus on meaning.” That’s a powerful enabling technology. More time to think. Hod is doing for thinking what dishwashers have done for after-dinner conversation. Although it may not always work out that way.

Several months later I e-mail Hod to see if they’ve got anywhere with the equation his machine generated from the cell-observing experiment. “We’re still struggling,” he writes. “We’ve been trying for months to get the AI to explain it to us through analogy. But we don’t get it.” It could be that Hod’s machine has discovered something our human brains are just not smart enough to see. “Maybe it’s hopeless,” he says. “Like explaining Shakespeare to a dog.” This is why Hod is trying to convince his collaborators to publish the equation anyway – and see if anybody else out there can shed light on its meaning.

"Shakespeare? It's above me."

"Friends, Romans... Hey! Is that a biscuit?!"

Because Hod is curious about what makes us curious, I ask him if his program could come up with a model of how to learn.

“Could we use your program to observe data about how machines learn, or how people learn, and come up with a model of learning?”

We’re getting seriously abstract now.

Hod laughs. “That’s what we’re working on now. We’re working on what we call self-reflective systems. We want to make machines meta-cognitive – they are thinking about thinking.”

This is something of a departure from a lot of AI research. “Almost all the AI systems program a way of thinking and they do that thinking for you – which is the extent of it. You could argue that’s about as smart as a lizard. But if you want to get to human-like intelligence, you need a brain that can think about thinking…”

Sadly (for this blog) Hod’s work in this area is currently unpublished, so out of courtesy I’m leaving a more detailed explanation of what we discussed until the book is published. In summary, however, Hod is taking his model of ‘co-evolutionary AI’ to the next level. Instead of modelling robot physiology, the motion of pendulums or data from physicists in Switzerland, he has one robot brain trying to model how another one learns – and then, in true Lipson style, he’s asking one to challenge the other – in order to find out more. In this way one brain builds a model of how the other learns, and can start to make helpful suggestions.

“That’s self-reflection,” says Hod. He adds, “That’s important in life. You can learn things the hard way, or you can think about how you’ve been thinking.”

It’s something you can imagine Socrates or Bacon saying.
