It changed the way we organised society, it changed the way we educated ourselves, it changed the idea of work, and it changed the way we did business. We built infrastructure the like of which had never been seen before. Railways, roads, sewers, waterways, ports. These were new technologies. Today we don’t think of the road as a technology, or a sewer. But they are. As Google’s ‘Internet Evangelist’ Vint Cerf says, “If you grow up with a technology, it’s not technology. It’s just there.”
In 2009, John Seely Brown, ex-Chief Scientist of the Xerox Corporation, told a crowd of Silicon Valley business leaders how deeply the industrial revolution was embedded in their high-tech business structures. “The structure and architecture of the firm reflects the structure and architecture of the infrastructure on which the firms are built,” he said. “Organisational infrastructure leverages the properties of infrastructural architectures.” Now, as a couplet this may fail to grab you as much as “You killed my father!” / “No Luke, I am your father!” did me when I was six, but Brown (paraphrasing the work of Harvard’s Alfred Chandler) is saying something profound.
Society is built on top of those roads, railways and schools, rather than those things being created underneath us as a means of support. Brown is saying that roads and schools shape us far more than we shape our roads and schools.
Educationalist Sir Ken Robinson points out that, “there were no systems of public education around the world before the nineteenth century. They all came into being to meet the needs of industrialism”. We built our school systems on top of the industrial revolution. We didn’t build the industrial revolution because of the collective effort of a pre-existing state-wide school system. (If you haven’t seen Ken’s TED talk, I recommend it. Not only is it a revelation, he’s also very funny).
We develop new technologies that become infrastructure that shapes society – and the infrastructure of the industrial revolution is still with us. That infrastructure grew rapidly and then reached a plateau. Cars are not radically different from their counterparts a generation ago, nor are trains or indeed aircraft. Neither are roads, railway tracks, airports, the judiciary, the school system or our systems of government. “Our infrastructure has been stabilised for a shockingly long period of time,” says Brown, “and we have now built institutions on top of that that expect that kind of stability.”
Our institutions are also still promoting an educational mindset born of the industrial age. Ken Robinson says:
Every education system on earth has the same hierarchy of subjects. Every one, doesn’t matter where you go, you’d think it would be otherwise but it isn’t. At the top are mathematics and languages, then the humanities, and at the bottom are the arts. Everywhere on earth.
Skills that will get you a job in an industrial society are valued the highest. The problem with this is that we’re moving from the industrial age to the information one, and our institutions and education need to shift. This isn’t to say that mathematics and languages aren’t valuable, but that those qualities that come from studying the humanities (understanding social systems, for instance) or the ability of the arts to promote creativity and curiosity will become more so.
“We have a brand new type of infrastructure,” says Brown – and that infrastructure is the internet technologies that allow us to tease out and manipulate the world of data in a way never before possible. This new infrastructure will shape us just as profoundly as our industrial infrastructure did.
A man walks into a shop and picks up a packet of kitchen roll. As he does so an image appears on the packet telling him how much bleach was used in its manufacture. He picks up another and compares. The second gets a ‘green light’ that appears as a ghostly image on the back of the packet, signifying eco-friendliness. He chooses the latter as his purchase. Now he makes a phone call, holding out his hand where the numbered buttons of a keypad appear sketched in light on his palm. He dials the number by tapping his own skin. Later, on the way to the airport, he pulls out his boarding pass and across the top some text appears telling him his flight is twenty minutes delayed.
This sounds like a scene from an (admittedly quite dull) science fiction movie, but it is not. These are scenes from a demonstration of a new device developed at MIT’s Fluid Interfaces Group – a combination of mobile phone, wearable camera and tiny projector that the lab’s director, Pattie Maes, calls ‘Sixth Sense’ – a technology designed to provide seamless and easy access to “information that may exist somewhere that may be relevant, to help us make the right decision about whatever it is we’re coming across,” to help us “make optimal decisions about what to do next and what actions to take.” (Click on the video below to see ‘Sixth Sense’ in action).
Everything is surrounded by a cloud of data you can’t see. A piece of clothing isn’t just the physical garment. In the shadow-world of data it is also how much it costs, whether it was manufactured ethically, the instructions for how best to wash it and so on. Crucially, your behaviour towards it may alter with access to any one of these pieces of data. Imagine walking into a trainer shop and being able to instantly see, by looking at a product’s bar code, whether it was made in a sweatshop or not, or if the shop around the corner had the same shoes on sale.
We are already beginning to layer this world of data on top of our day-to-day experiences. Download the ‘Better World Shopper’ App onto your iPhone for instance and it will give you an instant rating of a manufacturer’s record in regard to human rights, environmental policy, animal rights, social justice and community involvement. Google Goggles makes use of your mobile phone’s camera to recognize landmarks, book covers, even wine labels and return internet searches that relate to what you’re pointing it at. This is data layered over reality, or (depending how you look at it), reality revealed by data.
At a rapid pace we are becoming used to the idea that data should be accessible wherever we are, and on whatever subject we demand it, yet at the same time it’s hard to comprehend that the Internet is younger than I am, and the World Wide Web younger than my eldest niece. The ability of Internet services like the web and e-mail to connect both humans and machines has already had implications for society that we’re only just beginning to wake up to, implications as profound as those of the Industrial Revolution. We are now in a relationship with Internet-based technologies that we can’t get out of. Physicist and computer scientist W. Daniel Hillis has written:
We have linked our destinies, not only among ourselves across the globe, but with our technology. If the theme of the Enlightenment was independence, our own theme is interdependence. We are now all connected, humans and machines. Welcome to the dawn of the Entanglement.
As Vint Cerf said to me recently, “This is not new. We have always been entangled with our technology, we’ve always been entangled with knowledge. It may be more obvious now, because of the way it manifests. But if you were a cave man you might have become quite dependent on tools that you built, because without them you might not be able to feed yourself, so you needed the knowledge to make those, or you needed the knowledge to find somebody who could make them. And then you also had to know that that thing over there was a sabre-toothed tiger and it was a really good idea to get away from it, because the people who didn’t understand that didn’t survive to put their genes into the gene pool.” In short, entanglement with knowledge and technology keeps you alive.
Technology’s story is our story. Knowing how to turn Entanglement into Symbiosis, and how the new infrastructure of data and networks can shape us, will determine who wins and who loses as we continue our shift from the Industrial to the Information age. Countries that do not grasp the shift, especially in the way they educate their citizens, will suffer. “We need to rethink a whole set of institutional architectures,” says Brown, to enable us to build organisations that focus on what he calls “scalable peer-based learning”, and what you and I would call ‘staying smart enough to keep up’. Last year in Boston I met Juan Enriquez, author of As the Future Catches You and a seminal speaker on how technology and knowledge are transforming us.
“I worry that if you’re not educated in this stuff, you’re toast,” he said. He’s very clear that new technologies quickly change the fate of nations, especially as knowledge becomes ever more accessible. “You don’t have to own a large piece of land or a lot of resources to get rich very quickly, but you do need to go to school. That didn’t use to be true. It used to be that it didn’t matter how smart you were, if you weren’t the king or part of the noble classes you were toast. Now you can get wealthy, and you can do it very quickly, but you have to do it through education. You see, the consequences of not being educated today are far different from what they were. You know, in the 1950s you had a high school diploma, you went to Detroit you did fine. That’s not true anymore.”
As Ken Robinson remarked to his audience at TED, “You were probably steered benignly away from things at school when you were a kid, things you liked, on the grounds that you would never get a job doing that. Is that right? Don’t do music, you are not going to be a musician. Don’t do art, because you won’t be an artist. Benign advice. Now profoundly mistaken. The whole world is engulfed in a revolution.” He continues,
Our education system has mined our minds in the way we strip-mine the earth for a particular commodity, and for the future it won’t suffice. We have to rethink the fundamental principles on which we are educating our children. What TED celebrates is the gift of the human imagination. We have to be careful now that we use this gift wisely … and the only way we’ll do it is by seeing our creative capacities for the richness they are and seeing our children for the hope that they are. And our task is to educate their whole being so that they can face this future.
In short, critical thought, creativity and curiosity are the skills that need to move up the educational hierarchy as society shifts to a fluid infrastructure built on data and the links between it. Where we were taught to be arithmeticians, now we must become mathematicians. Where we were taught vocabulary, now we must learn semiotics. Where we were taught to accept the industrial infrastructure as a fixed edifice, we must now learn that the information infrastructure is a fluid tool to be pulled and shaped. Where we worked in silos, now we must learn to harness the crowd, to play our part in marshalling our collective creativity to solve the world’s problems and embrace its opportunities. Margaret Thatcher famously said, “There is no such thing as society.” She was entirely and completely wrong.
Einstein said, “We cannot solve our problems by thinking at the same level we were at when we created them” and by the same token we cannot solve our problems by leveraging the same infrastructure we used in creating them either. Politics, education and the press must, and will, out of necessity, change. Already we are seeing this shift, as the press wonder how to survive spreading a meme of division and conflict in a world that is increasingly collaborative. We are turning away from our newspapers and turning to each other. MIT is placing its courses online for free. TED is putting the world’s greatest speakers at our fingertips. Politics is the Luddite. It’s not just industrial, it’s pre-industrial. It’ll be the last to change and it’ll hurt the hardest. Those nations that fail to understand this, that fail to change their institutions and equip their populaces by shifting to an educational paradigm born of the coming information age instead of one inherited from the old industrial age, face a bleak future. Some nations will leapfrog ahead, offering an unprecedented opportunity for the developing world, perhaps taking something of the story of the sixties’ most notable failed state (Singapore, now a knowledge powerhouse) to heart.
I call it the ‘Knowledge Combustion Engine’ and you are the fuel in the tank.
It’s a big question, but one that is particularly pertinent to my interview today with Robotics and Artificial Intelligence researcher, Hod Lipson. Because Hod and his team build machines that find truths.
The search for truth has a long history (one could argue it is history) which I’m not about to get into (and it’s not the book I’m writing) but if someone said to me ‘Go on then, history of truth in 5 minutes’ I’d probably reach for two key figures – Socrates (born Greece, 469 BC) and Francis Bacon (born England, 1561), not least because they both died in interesting ways (which is useful for storytelling).
Socrates was put to death by the state of Athens for “refusing to recognise the gods recognised by the state” and “corrupting the youth” (explaining perhaps why Black Sabbath rarely toured in Greece). Despite clear chances to escape his fate, Socrates placidly took a drink containing poison hemlock prepared by the authorities. Francis Bacon, many believe, died as a result of trying to freeze a chicken. It might seem odd therefore to hold up both as key figures in the history of reason.
You may also wonder why I am suddenly diving into the past when I’m writing a book about the future. Bear with me, and blame Hod Lipson and his robots.
Both Socrates and Bacon were very good at asking useful questions. In fact, Socrates is largely credited with coming up with a way of asking questions, ‘The Socratic Method’, which itself is at the core of the ‘Scientific Method’, popularised by Bacon during ‘The Enlightenment’ – a period of European history when ‘reason’ and ‘faith’ had an almighty bunfight and the balance of power between church, state and citizen was being questioned. Lots of philosophers and scientists challenged the prevailing orthodoxy of religious authority by saying ‘we need to make decisions based on critical thinking, evidence and reasoned debate, not on sacred texts and religious faith’ and the church replied with ‘yes, but we own most of the land, plus people really like the idea of God. Ask them’.
The Socratic Method disproves arguments by finding exceptions to them, and can therefore lead your opponent to a point where they admit something that contradicts their original position. It’s powerful because it kind of gets people to admit to themselves that they’re wrong. It’s also pretty good at exposing your own (as well as others’) prejudices and gaps in reasoning. Lawyers use it a lot. Don’t let this influence you against it. Lawyers also use toilet paper and you’re not about to reject that idea.
Here’s an example.
During excessive bouts of hard and progressive rock emanating from my older brothers’ bedrooms my dad used to say, “people only play electric guitars because they can’t play real ones” (by which he meant acoustic guitars played by nice chaps called Julian with sensible haircuts, as opposed to electric guitars played by long-haired geezers called Dave and Jimmy).
First step of the Socratic method: assume your opponent’s statement is false and find an example to illustrate this. This YouTube clip of Pink Floyd’s David Gilmour playing acoustic guitar, for instance. Clearly Dave Gilmour can play a ‘real’ guitar as well as an electric one and my dad must grudgingly accept the fact. At this point dad would assert that Dave Gilmour was ‘the exception that proved the rule’.
Next step. Take your opponent’s original statement and restate it to fit their new modified position. “So, dad, you’re saying that people only play electric guitars because they can’t play acoustic ones, except for Dave Gilmour who can do both?”. Then return to step one.
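Those two steps, looped until your opponent either survives scrutiny or concedes, are mechanical enough to sketch in a few lines of Python. The claim, the counterexamples and dad’s threshold for giving in are all invented for illustration:

```python
# A toy sketch of the Socratic loop described above. The claim, the
# counterexamples and the concession rule are made up for illustration.

def socratic_method(claim, find_counterexample, concede):
    """Repeatedly confront a claim with counterexamples until it collapses."""
    while True:
        exception = find_counterexample(claim)
        if exception is None:
            return claim  # the claim has survived scrutiny (for now)
        # Step two: restate the claim to absorb the exception...
        claim = f"{claim}, except for {exception}"
        # ...and check whether the opponent will still defend it.
        if concede(claim):
            return None  # the original position has been abandoned

# Dad's theory, and the counterexamples MTV Unplugged kept supplying.
counterexamples = iter(["Dave Gilmour", "Jimmy Page", "Tony Iommi"])

result = socratic_method(
    "people only play electric guitars because they can't play real ones",
    find_counterexample=lambda claim: next(counterexamples, None),
    concede=lambda claim: claim.count("except for") >= 3,
)
print(result)  # None: three exceptions is more than the theory can bear
```

Run it and `result` comes back as `None` – once the exceptions pile up, the original position is untenable and dad must admit he just doesn’t like rock music.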
Ironically this led us to playing dad far more Black Sabbath, Pink Floyd, Aerosmith and Led Zeppelin than if he’d kept his theory to himself. (MTV’s ‘unplugged’ series would become his nemesis). Eventually dad would have to admit the truth – which was not that the rock musicians we listened to weren’t talented, but that he just didn’t like rock music.
This example is trivial but you can use the method to demonstrate some pretty esoteric points, and expose fundamental new insights. A popular example that can really annoy your mates in the pub is proving that things don’t have a colour.
Socratic argument, while undoubtedly one of the most useful things ever devised, can also annoy the tits off people, as the man who lends it his name found out to his cost. The story is that Socrates used his technique to prove a lot of bigwigs in Athenian society were mistaken in their thinking – and they responded by having him killed. This proves that engaging people’s brains is never enough if you want change. You have to engage their emotions too. As Professor George Church said to me during our talk last week, “Politicians know how effective emotion is in comparison to rational thought. You can really move mountains with emotion. With rational thought you just end up getting people to change the channel”.
By the time Francis Bacon went to university, the teachings of one of Socrates’ students, Aristotle, had become entrenched as the way to conduct ‘scientific inquiry’. Aristotle had pioneered deductive reasoning, the practice of deriving new knowledge from foundational truths, or ‘axioms’. In short, it was generally believed that if you got enough boffins together to have a solid debate, scientific truth would be teased out over time. This worked well for mathematics, where axioms had long been established (e.g. the basic mathematical operations – plus, minus, divide, multiply), but was less good for finding out new stuff about the physical world. Much to Francis’ dismay it seemed that science involved sitting around in armchairs. Nobody was getting off their arse and observing anything new or doing any experiments. Nobody was finding the ‘axioms of reality’ (which is arguably a good name for a progressive rock outfit).
In common with Socrates, Bacon stressed it was just as important to disprove a theory as to prove one – and observation and experimentation were key to achieving both aims. In a way he was Socrates 2.0 (which is another good name for a prog band). He also saw science as a collaborative affair, with scientists working together, challenging each other. All of this is the hallmark of good scientific practice today – observe, experiment, theorise… and then try to prove yourself wrong – all in collaboration with peers who can give you a hard time. It’s important to note that Bacon himself wasn’t a distinguished scientist. His main contribution was the articulation and championing of an empirical scientific method. That said, he did do the odd experiment, including the one that killed him.
While traveling from London to Highgate with the King’s personal physician, Bacon wondered whether snow might be used to preserve meat. The two got off their coach, bought a chicken and stuffed it with snow to test the theory. In his last letter Bacon is said to have written, “As for the experiment itself, it succeeded excellently well.” Some historians think the chicken story is made up, but the popular account is that the act of stuffing the chicken led to Bacon contracting fatal pneumonia. This is possibly the only instance of bacon being killed by eggs.
Hod Lipson looks like a very friendly bear. He has a round, but not chunky frame, thick black hair and looks healthy and happy. His features are open and innocent. He’s almost childlike if it weren’t for his demeanour – a kind of solid confidence that only comes with age. You get the feeling Hod knows exactly what he wants to achieve. I suspect he was a mischievous child, curious, poking his nose into most things. And whilst most of the scientists I’ve met are driven by an almost insatiable curiosity, Lipson takes curiosity to a new level, literally. He’s curious about curiosity.
“ ‘Artificial Intelligence’ is a moving target,” he says. “So, you can build a machine that plays chess, then you build one that can drive through city streets and so on. People argue about whether it’s really intelligent or not – and usually it’s argued it isn’t. I want to create something where nobody can argue it isn’t intelligent. So, I was thinking about what’s an unmistakable, unequivocal hallmark of intelligence, and I think it’s creativity and particularly curiosity.”
“Does a curious and creative machine mean a sentient machine?” I ask.
“Well, what does that mean?” asks Hod. “I have to push you on what you mean by ‘sentient’.”
Bollocks. I’ve just been asked by a leading researcher into intelligent machines to define sentience – one of the biggest pending questions in philosophy. This is worse than when Cynthia Breazeal asked me to come up with an alternative word for ‘robot’. Or if Andrew Lloyd Webber asked me to say something nice about one of his musicals. I feel out of my depth and we’re barely into our chat. I do the only thing I can.
“Well, let me ask you,” I say. “What do you mean by it?”
Hod pauses. I’m not sure he was expecting a return serve, especially one that in any decent rule book would be considered cheating.
“I interpret it as deliberate versus reactive. Er… human-like…” He pauses again. “I don’t know.”
A-ha! Well, like I said, it is one of the biggest pending questions in philosophy.
“Alive?” I venture.
“It’s difficult to identify what life is right?”
And there’s the rub. Life has eluded a definitive definition for as long as we’ve tried to make one – as has ‘intelligence’. So if you’re trying to create ‘artificial intelligent life’ you’re already in a quagmire of semantic lobbying. I’m reminded of my chat last week with George Church (Professor of Genetics, Harvard Medical School). “I think life is actually a quantitative measure,” said George, by which he means something that can be defined not with either a ‘yes’ or ‘no’ but on a scale. “It’s not something where you either have it or you don’t. So I would say that there are some things that are more alive than others.” And I don’t think it’s overstating things to say that Hod certainly has made machines that are ‘more alive’ than many others.
Then he says an interesting thing. “I think men have this hubris of wanting to create life. We try to create life out of matter.”
‘Hubris’ is one of those words like ‘semiotics’ and ‘insurance’ that I’ve heard a lot but didn’t really know what it meant for a long time (I’m still struggling with ‘insurance’). I look up ‘hubris’ when I get back to my hotel. It means excessive pride or arrogance. In classical literature it’s usually a precursor to, and the cause of, a character’s downfall. The legend of Icarus is a good example. With that one word Hod has encapsulated the two defining criticisms aimed at Artificial Intelligence research. On one end there are those who say we’ll never create a truly artificial intelligence and that we’re arrogant to believe we can. On the other there are those who worry we will build smart machines and in our arrogance be blind to the danger that they will one day do away with or enslave us. (There are more measured positions in between the two, such as Hubert Dreyfus’s and Hod’s own – both of whom suggest that a lot of AI research has been in the wrong direction).
Hod doesn’t believe in the latter James Cameron-esque scenario, but sees a confederacy of man and machine. He has some sympathy for the ‘singularity hypothesis’ of Ray Kurzweil (who I’m interviewing early next year) which talks of a ‘merger of our biological thinking and the existence of our technology’ but doesn’t see a machine-human hybrid (Juan Enriquez’s Homo Evolutis) as the only scenario. “Merging could also mean intellectually merging, meaning that they explain stuff to us.”
Lipson became famous (in robotic circles) for his work building robots that are arguably self-aware. His Starfish robot, which I see sitting forlornly on a shelf in his lab, is iconic for learning to walk from first principles. It wasn’t given a program that told it how to move its various motors and joints to achieve locomotion. Instead Lipson gave it a program that enabled it to learn about itself – and to use this knowledge to subsequently work out how to move.
“The essential thing was it created a self image,” Lipson tells me. “It created that self image through physical experimentation. So it moved its motors, it sensed its motion and then it created various models of what it thought it might look like – ‘maybe I’m a snake? maybe I’m a spider?’ We told it to create models – multiple different explanations that might explain what it knows so far.”
The robot then stress-tested those models by sending them into competition with each other. “It creates an experiment for itself that focuses on the area where there’s the most disagreement between what the models predict. We put in the code to look for disagreements,” explains Hod.
For example, let’s say the robot is wondering which move to do next in order to learn more about itself. It could try a movement that, when completed, the models all predict will leave it sitting at an angle of about 20 degrees. One model might predict 19 degrees, another 21 degrees, a third 21.2 degrees. However, if it tries another move the models have very different ideas about the result. One says the robot will be at an angle of 12 degrees, another predicts 25 degrees, a third says 45. This latter movement is more likely to be the one the robot chooses next, because it will learn the most from it, and get an idea of which model is closer to the truth. It’s where there’s most disagreement that there’s most to learn. “We tell it ‘you create models – multiple different explanations for what you see – and then look for what new experiment creates disagreement between predictions of these candidate hypotheses’,” says Lipson. “That’s the bottom line of curiosity.”
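That selection rule – run the experiment the models disagree about most – can be sketched in a few lines of Python. The three rival ‘self-models’ below are invented linear stand-ins, nothing like Lipson’s actual code, but the principle is the same: score each candidate action by the spread of the models’ predictions and pick the action with the biggest spread.

```python
import statistics

# Three rival self-models, each mapping a candidate action to a predicted
# body angle in degrees. These are invented stand-ins for illustration.
models = [
    lambda action: 10 * action,        # "maybe I'm a snake"
    lambda action: 12 * action + 1,    # "maybe I'm a spider"
    lambda action: 25 * action - 5,    # a wilder hypothesis
]

def disagreement(action):
    """How much the models disagree about this action's outcome."""
    predictions = [m(action) for m in models]
    return statistics.pstdev(predictions)

# Pick the next experiment where the models clash the most --
# that is where there is most to learn.
candidate_actions = [0.5, 1.0, 2.0]
next_experiment = max(candidate_actions, key=disagreement)
print(next_experiment)  # 2.0 -- the move with the widest spread of predictions
```

Running the chosen experiment then tells the robot which model was closest to the truth, exactly as in the 12-versus-25-versus-45-degrees example above.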
The models that do best ‘survive’ and the program kills off the others. The remaining models ‘give birth’ to a generation of slightly mutated versions of themselves and another round of ‘survival of the fittest’ ensues. Or to put it another way, over many iterations the program homes in on a model that describes reality. The predictions get closer and closer to what actually happens until one model is deemed sufficient for the robot to say ‘this is what I look like’.
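A toy version of that mutate-and-select loop might look like the following. Here the whole ‘robot’ is reduced to a single hidden number that candidate models try to guess – a drastic simplification of Lipson’s physical experiments, but the survive/mutate/repeat structure is the one described above.

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def true_robot(action):
    """The 'real world': the robot's actual (hidden) response to an action."""
    return 3.7 * action

def fitness(model_param, experiments):
    """Negative total prediction error over the experiments run so far."""
    return -sum(abs(true_robot(a) - model_param * a) for a in experiments)

experiments = [0.5, 1.0, 2.0, 3.0]                       # actions already tried
population = [random.uniform(0, 10) for _ in range(20)]  # candidate models

for generation in range(50):
    # Survival of the fittest: keep the best half...
    population.sort(key=lambda p: fitness(p, experiments), reverse=True)
    survivors = population[:10]
    # ...and let the survivors give birth to slightly mutated offspring.
    children = [p + random.gauss(0, 0.1) for p in survivors]
    population = survivors + children

best = population[0]
print(round(best, 1))  # homes in on the hidden value, 3.7
```

The best model is never thrown away between generations, so the population can only get closer to reality – the same one-way ratchet that lets the Starfish robot settle on ‘this is what I look like’.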
If all this talk of ‘mutation’, successive ‘generations’ and ‘survival of the fittest’ sounds slightly familiar that’s because this kind of mathematics takes its inspiration from Darwin’s theories of evolution. Mathematicians might call it ‘reductive symbology’ or say Lipson’s work is a good example of ‘genetic algorithms’ – and it’s a technique that’s been around for decades. What’s different about Lipson’s work is the implementation, something he calls ‘co-evolution’.
“We set off two lines of enquiry. So one of them is the thing that creates models and the other is the thing that asks questions, and they have a predator/prey kind of relationship. Because the questions basically try to break the models.” The questions try to find something the models disagree about so they can kill off the weaker ones. It’s like Anne Robinson in code.
It has to be said that if you saw the Starfish robot ‘walking’ you wouldn’t immediately think it had a future career as a dancer. It doesn’t so much walk as stagger and flop forward. It’s less Ginger Rogers and more gin and tonic. Still, the achievement is not to be sniffed at. It had no parents and no role models. This was a robot actively learning to do something no one had taught it to do. And robots that learn this way have all sorts of interesting possibilities – as Lipson was about to find out.
You can see Hod demonstrating his Starfish robot in this TED talk.
With colleague Michael Schmidt he wondered if the same computer program he’d placed at the core of his Starfish robot could go beyond working out merely what its host body looked like and begin to reach useful conclusions about the wider world.
“We said ‘let’s take it out of this particular body and let it control motors of any experiment’ ”. Their first idea was to give the robot brain control of motors that set up the starting position for a ‘double pendulum’ before letting it fall. The robot was also able to record the results of each experiment using motion capture technology – allowing it to accurately record the pendulum’s motion.
A double pendulum is a bonkers little contraption. It consists of two solid sticks jointed together in the middle by a free-moving hinge. Double pendulums do wacky things (you can see one in action here). Whilst the top pendulum swings from left to right the bottom one likes to mix it up. Because it’s not attached to a stationary point (like the top pendulum) but to something moving (the bottom end of that swinging top pendulum) it will swing left, swing right, spin round clockwise or counter-clockwise, seemingly at random. Lipson and Schmidt chose the double pendulum because it’s a good example of a system that’s simple to set up but which can quickly exhibit chaotic behaviour – and therefore would be a good test of the technology’s ability to build a useful conceptual model of what was going on. The results were startling. In fact, the program went a long way to deriving the laws of motion. In three hours.
It followed the same process as it had when it sat in the robot – guessing at equations that might explain what it had seen so far, then setting up new experiments (in this case new starting positions for the pendulum) that targeted areas of most disagreement between the equations. “With the double pendulum it very quickly puts it up exactly upright, because some models say it’s going to fall left and some models say it’s going to fall right. There’s disagreement. It’s not a passive algorithm that sits back, watching,” says Hod, smiling. “It asks questions. That’s curiosity.”
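You can get a feel for the equation-guessing step with a heavily simplified sketch: score a handful of hand-picked candidate formulas against clean free-fall data, fitting each formula’s constant by a coarse scan. (The real system searched a vast space of symbolic expressions it built itself; the four candidate forms below are my own invention.)

```python
# A drastically simplified sketch of equation-guessing: fit each candidate
# form's constant to the data and keep the form with the lowest error.
# The data is noiseless free fall, distance = 4.9 * t**2.

data = [(t, 4.9 * t ** 2) for t in (0.5, 1.0, 1.5, 2.0, 2.5)]  # (time, distance)

candidate_forms = {
    "c * t":     lambda c, t: c * t,
    "c * t**2":  lambda c, t: c * t ** 2,
    "c * t**3":  lambda c, t: c * t ** 3,
    "c + t":     lambda c, t: c + t,
}

def best_constant(form):
    """Least-squares fit of c for y = form(c, t), found by a coarse scan."""
    def error(c):
        return sum((y - form(c, t)) ** 2 for t, y in data)
    return min((c / 100 for c in range(-1000, 1001)), key=error)

scores = {}
for name, form in candidate_forms.items():
    c = best_constant(form)
    scores[name] = (sum((y - form(c, t)) ** 2 for t, y in data), c)

best_form = min(scores, key=lambda n: scores[n][0])
print(best_form, scores[best_form][1])  # c * t**2 4.9
```

Only the quadratic form can drive its error to zero, so the program ‘rediscovers’ the square law – and, as with Lipson’s pendulum, it takes a human to recognise the result as Galilean free fall.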
Just like humans, it seems machines learn best when they ask their own questions and find their own answers, rather than being given huge amounts of data to absorb. “Most algorithms you see are passive. They’re data intensive. You feed in terabytes of data and these algorithms just sit back and watch. But in the real world you can’t sit back and watch. You have to probe, because collecting data is expensive, it takes time, it’s risky.” By contrast Lipson’s machine brain “only ever sees what it asks for. It does not see all the data.” In fact Lipson decided to compare the efficiency of this ‘active’ method of enquiry against a more traditional passive ‘here’s all the data, what can you tell me?’ method. “It doesn’t work. It has to go through a reasoning.”
Remind you of anyone? I see the hemlock taker and the chicken freezer partially re-incarnated in machine form. The programming consigns inaccurate models to the dustbin by getting the robot to admit there are others that offer a better explanation of the real world (hello Socrates) and does this with evidence won via experimentation (hello Bacon). What Lipson has done is create a computational methodology for asking good questions. And asking good questions is what it is all about when it comes to understanding anything.
“Physicists like Newton and Kepler could have used a computer running this algorithm to figure out the laws that explain a falling apple or the motion of the planets with just a few hours of computation,” said Schmidt in an interview with the US National Science Foundation (who helped fund the research).
However, we’re still a long way off what I (or Hod) would call an intelligent machine. It still takes a human to work out if anything the machine has found is useful. The machine didn’t know it had found laws of motion, it took Hod and his colleagues to recognise the equations that were produced. “A human still needs to give words and interpretation to laws found by the computer,” says Schmidt. So, we’re still some distance from Hod’s confederacy of man and machine, where they explain stuff to us.
One of the areas where Hod’s brains could prove useful is cracking problems where there is plenty of data but still little idea of what’s going on. Indeed, plenty of people with acres of data have been beating a path to his door, including heavyweight data generators like the Large Hadron Collider at CERN near Geneva. “The people at CERN said ‘there is this gap in a prediction of particle energy. Here’s data for 3,000 particles. Can you predict something?’ ” The result was a strange mix of the elating and the disappointing. “We let it run and it came up with a beautiful formula,” says Hod. “We were very excited but it was a famous formula they already knew. So for them it was a disappointment…. But for us… We rediscovered something that people are famous for.”
Again, the crucial insight comes from humans who can tell if something means anything or not. It’s the crucial step – and without it the results are largely worthless (which is not to say the time saved is not incredibly useful). I’m reminded of a scene from Douglas Adams’ comedy The Hitchhiker’s Guide to the Galaxy in which a supercomputer called Deep Thought is built by a race of supersmart humanoids to answer the ultimate question. ‘What is the answer?’ ask the humanoids, awaiting instant enlightenment. ‘To what?’ says the computer. ‘Life! The Universe! Everything!’ they respond. ‘The ultimate question!’ The computer announces there is an answer… but it will take several million years to compute. At the duly allotted time, millennia later, the humanoids’ descendants gather to hear the answer, which is announced to be ‘42’. The problem, suggests Deep Thought, is that they don’t really know what ‘the question’ is.
"You're not going to like it"
No-one understands the irony in this story more than Hod Lipson. “In biology there are many systems where we do not know their dynamics or the rules that they obey.” So he set his machine looking at a process within a cell. True to form, the program generated an equation in double-quick time. But what did it mean?
“We’re still looking at it,” says Hod with a smile. “We’re staring at it very intently. But we still don’t have an explanation. And we can’t publish until we know what it is.”
“You don’t understand what it’s saying?”
“No,” says Hod.
“But in science you go from observations which produce data, to models which produce predictions, to underlying laws – and from there you go to meaning. What’s good is that we can go from data straight to laws, whereas previously people could only go from data to predictions. So now a scientist can throw it some data, go and have a cup of coffee, come back and see 15 different models that might explain what is going on. That saves a lot of time. Previously coming up with a predictive model could take a career. Now at least you can automate that so you can focus on meaning.” That’s a powerful enabling technology. More time to think. Hod is doing for thinking what dishwashers have done for after-dinner conversation. Although it may not always work out that way.
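The pipeline Hod describes – data in, a shortlist of candidate models out, meaning left to the human – can be sketched in a few lines of Python. Everything here is an invented toy: the data, and the four hand-picked model forms. A real symbolic-regression engine like Lipson’s breeds and mutates equations rather than choosing from a fixed menu, but the division of labour is the same: the machine ranks candidates against the data; a scientist still has to decide what the winner means.

```python
# Toy observations from a hidden process; the "law" buried in this
# data is y = x**2 (chosen purely for illustration).
data = [(x, x**2) for x in range(-5, 6)]

# A small menu of candidate model forms, standing in for the equations
# a symbolic-regression engine would evolve automatically.
models = {
    "y = x":      lambda x: x,
    "y = x**2":   lambda x: x**2,
    "y = abs(x)": lambda x: abs(x),
    "y = x**3":   lambda x: x**3,
}

def error(f):
    # Sum of squared mismatches between model and observations.
    return sum((f(x) - y) ** 2 for x, y in data)

# The machine's contribution: rank candidates by how well they fit.
# Interpreting the best one is still the human's job.
ranking = sorted(models, key=lambda name: error(models[name]))
for name in ranking:
    print(name, "error:", error(models[name]))
```

The scientist comes back from their coffee to a ranked shortlist rather than raw numbers – which is exactly the time-saving Hod is claiming, and exactly as far as the machine gets without human interpretation.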
Several months later I e-mail Hod to see if they’ve got anywhere with the equation his machine generated from the cell-observing experiment. “We’re still struggling,” he writes. “We’ve been trying for months to get the AI to explain it to us through analogy. But we don’t get it.” It could be that Hod’s machine has discovered something our human brains are just not smart enough to see. “Maybe it’s hopeless,” he says. “Like explaining Shakespeare to a dog.” This is why Hod is trying to convince his collaborators to publish the equation anyway – and see if anybody else out there can shed light on its meaning.
"Friends, Romans... Hey! Is that a biscuit?!"
Because Hod is curious about what makes us curious I ask him if his program could come up with a model of how to learn.
“Could we use your program to observe data about how machines learn, or how people learn, and come up with a model of learning?”
We’re getting seriously abstract now.
Hod laughs. “That’s what we’re working on now. We’re working on what we call self-reflective systems. We want to make machines meta-cognitive – they are thinking about thinking.”
This is something of a departure from a lot of AI research. “Almost all the AI systems program a way of thinking and they do that thinking for you – which is the extent of it. You could argue that’s about as smart as a lizard. But if you want to get to human-like intelligence, you need a brain that can think about thinking…”
Sadly (for this blog) Hod’s work in this area is currently unpublished, so out of courtesy I’m leaving a more detailed explanation of what we discussed until the book is published. In summary, however, Hod is taking his model of ‘co-evolutionary AI’ to the next level. Instead of modelling robot physiology, the motion of pendulums or data from physicists in Switzerland, he has one robot brain trying to model how another one learns – and then, in true Lipson style, he’s asking one to challenge the other, in order to find out more. In this way one brain builds a model of how the other learns, and can start to make helpful suggestions.
“That’s self-reflection,” says Hod. He adds, “That’s important in life. You can learn things the hard way, or you can think about how you’ve been thinking.”
It’s something you can imagine Socrates or Bacon saying.