Delivered 2017-09-24 at WebExpo 2017, Prague, CZ



Prepared Talk







"Ahoj, já jsem Mike a je mi potěšením být v rodné zemi Karla Čapka!"

("Hi, I am Mike, and I am pleased to be in the home country of Karel Čapek!")

How did I do? Is it very bad? Was that awful? Sorry, I had to try.


So I have a few things first. Since I travel quite a bit, I need evidence, so I need to take your picture. So, first, everyone say, "Hello, ahoy." Excellent, okay so now we’re done with that.

The Rule of the Robots

Okay, so the name of my talk is "Dreams, Lies, and the Autonomous Web." But first I want to say: we are under the rule of the robots, okay?

I first learned of Karel Čapek through a book whose title translates into English as "The Absolute at Large," from 1922. It’s a novel about someone who invents a machine that does all the work of humans, and humans have no more jobs. There’s an odd side-effect to the machine (people near it while it’s working become very religious), but that’s just a side-effect. The point is that he predicted, almost 100 years ago, that machines would take all the jobs.

Of course, the work most people associate with Karel Čapek is this: "Rossum’s Universal Robots." Karel and his brother Josef invented the word "robot," using the Czech word "robota" in a way that no one had used it before. And in this play, robots destroy humanity and take over the world.

One hundred years ago! Here is an author who had already described the end of all jobs and the final robot apocalypse. So I’m so happy to be here. But wait, I’ve already talked about the end of everything and I’ve only just begun. So maybe I need to back up just a little bit.

Okay, let’s back up

Okay. So, first, this is me. This is how you find me on LinkedIn, Twitter, and GitHub, and I would love to connect with you and learn about what you’re working on and what you’re doing in the world of software.

I work for a group called API Academy which allows me to travel all around the world as Daniel already mentioned. I’m very lucky, I get to meet some of the smartest people in the world and learn what they are working on.

We are at a crossroads

And this talk is about some of the things I’ve learned in the last few years that make me think, "What is the future of the web?" and "What is the future of machines in general?" We’re in an amazing age, at a crossroads where we have more and more devices. Avi, in the talk before this one, talked about all these devices people are trying to create to connect things.

And there are lots of challenges to this. There are predictions that we will have 50 billion devices sometime soon. We have lots and lots of programming languages; we keep inventing new ones because we’re struggling to get a hold of how we control and manipulate things. We’re getting more and more pieces of software downloaded all the time. Almost every time I pick up my phone it’s asking me to download an update for something. So you have all this software, all this material. And we have all these things called APIs, Application Programming Interfaces.

Connections from machine to machine usually run over the web or over the Internet, which is an extreme challenge, a huge challenge, and they keep growing larger and larger. As we get more devices on machines, in buildings, in cars, and everywhere else, there’s a huge challenge ahead of us to figure out how to manage this. Because, it turns out, none of these things behaves the way we expect, or often the way we wish. We want assistance everywhere we go, we want a smart house like Harvey talked about, we want cars that drive themselves.

The problem with every one of these examples is that they interact with real life, and real life is not programmable in the way we think machines are programmable. In fact, we’re headed for a huge mess; we already experience a lot of it today. Many of us who work in the web space or work with APIs on the web know that lots of things don’t work the way we expect. There are lots of bugs, and connecting things is incredibly costly and expensive. Alongside this idea of automation, we have another big idea: big data. There is lots of data out there, and maybe big data is somehow gonna solve some problems for us. So we’re using lots of neural networks. We’re trying to see if we can teach machines to use big data: if we can give them enough data, maybe they can learn something that we can’t tell them. Often we’re trying to mimic the way the brain works. A gentleman by the name of Mel Conway, whom you might know from Conway’s Law, once said that the job of a brain is to make a brain. It’s not to learn a specific thing but to learn in general.

And one of the challenges when we write software is that we hardly ever write that kind of software. We write software to do one thing; we don’t write software that learns to do something. That’s a very different kind of software. More importantly, most of us write software for this: we write software for one machine. We test it on one machine. We put all the data on one machine and then we let it loose. And the problem is, it isn’t one machine, it’s a network of machines. In fact, most of the things we’re taught in school, most of the things we read about, most of the programming tools we have today are all about helping us program just one machine, and that’s not enough.

Program the Network

Because we’re living in a world where there are lots and lots of machines, billions as we’ve already said. And those billions of machines need to be able to interact with each other, and they need to do it in a way that we at the Academy call "safe, cheap, and easy." They need to be able to connect safely to machines they have never met before. We need to build a very different kind of network: the kind that looks just like a web, a series of bits and bytes connected to each other, and more specifically, a kind of network that changes every single day.

And our brains' neurons reprogram themselves every single day. That’s how we learn, so we have to build that as well. We have to learn to program the network itself, and that’s a huge challenge for most of us because that’s not really the way we’re taught. So as we go through these next several decades, I’m reminded of this great comment:

"Those who cannot remember the past are condemned to repeat it."

And this can apply in lots and lots of ways. Avi talked about this in the previous session: if we don’t pay attention, we don’t realize that something was already invented a long time ago and forgotten, so we reinvent it again. And while I love this phrase, there’s another phrase I like even more:

"Those who ignore the mistakes of the future are bound to make them."

Think about what that means. We have a future ahead of us and it’s easy to take the easy path, to make some mistakes, to do things in ways that aren’t necessarily in everyone’s best interest in the long term but are in the short term. I think we’ve experienced this in the United States, and England has experienced it as well. Facebook is struggling very much today with how they optimized for lots of profit now but did not think much about what would happen in the future.

So as we program machines, we have another responsibility as well, and that responsibility is to the future. We’ve got about 30 minutes, and I wanna spend the remaining time talking about what that future could be like and what responsibilities and roles we have in it. And to do that I want to quote André Gide: "One does not discover new lands without consenting to lose sight of the shore for a very long time." Think of Columbus, think of Magellan, think of everyone from Europe who decided to sail the Atlantic for the first time without ever seeing a coastline.

And they would go for months without seeing land. That took a lot of guts, that took a lot of nerve, a lot of courage. It didn’t always work out so well but we remember the ones that did. That’s what we’re gonna do today, we’re gonna lose sight of the shore for just a little while and talk about a few things.


So I wanna talk about this notion of dreams. We all experience them. Dreams are a sort of weird hallucination; in our brains, we use dreams to practice as well as to project into the future. This image, by the way, is actually from a Google program called DeepDream.

And these DeepDream programs create images from information that’s already been collected. These are all images that were created by Google DeepDream. Trying to get machines to hallucinate, to think about things, to imagine things, is a really important aspect of programming in the future. If we want machines to be autonomous, they need some of the same abilities: to project into the future, to imagine, to create archetypes that they can practice with and recognize in the real world.

And it helps us learn about the brain; it helps us learn how our own brains work as well. A brain is an amazing thing, by the way. Brains are how we hallucinate and imagine. That hallucination and imagination gives us the capacity to lie: we can lie about the past, we can project into the future, we can convince others of things, we get to practice in our brains. There’s lots of research showing that if you’re an athlete or a musician or an artist, you can practice those physical skills in your head. I practice my talks in my head. It’s the way we learn.

What about Big Data?

Big data comes into this a lot when we talk about machinery. This is a building in the United States called Bumblehive, and it’s thought to be able to store 1 yottabyte, a trillion terabytes, of data. In the United States, we’re collecting all this data from Facebook and apps and phones and everything else and storing it in this big room. Well, how does storage work in the brain? It turns out (it’s a little debatable) the brain holds about 100 terabytes of data, which is about 100,000 gigs.

And it turns out in our daily activities, we usually collect about two to four gigs of data every single day. So our brain has the power to save 250 years of information, each person. That’s pretty amazing, right? Especially since we’re lucky if we make it to 80 or 90. But the thing is, what is the brain doing with all that storage space? What is the brain doing? Does it actually store every bit of experience, every bit of data, every occurrence, every visit that we make? I know it doesn’t because I meet people every day and I can’t remember their names. So is it stored somewhere? It turns out, no.

Our brains do not store every experience we ever have. In fact, there’s a very mature and complex process that distills our daily experiences into something we call a memory. Memories are actually manufactured; they’re not just imprinted on us, they’re things we create. We create the stories that we remember, and this ability to create a memory is really important. We prune data, summarize it, label it, and place it in long-term memory. And in fact, we do this mostly during sleep.

Sleep deprivation…one of the biggest problems with sleep deprivation is that we cannot create memories. There’s this great book called, "The Secret World of Sleep" which talks a lot about this, and in that book, it actually identifies a person…a type of person who cannot erase memories. They actually remember everything. And it is debilitating. Every sound, every smell, every touch, every look, calls up another memory and they can’t control it, and it drives them mad. They can’t easily exist in the world because the memories, the past, keeps intruding upon the present and they cannot live in the present.

So, forgetting is incredibly important. Forgetting is how we get by, forgetting is how we get past terrible experiences. Forgetting is how I do not clutter my brain with lots of material that I…you know, the signs I saw, the cars I saw, the tram, the subway, all…I can’t afford to keep all of that. So forgetting is incredibly important because forgetting helps us cope with the world and forgetting lets us write our story through the use of memory. But choosing is also important. So, this is a real difficulty in computing. We can get computers to pay attention to a lot of data.

But how do we tell them which piece of data to select, which to remember, which to choose when they have a series of options? Learning to choose is very hard, as we see even in children. If you have small children, you know that teaching them to choose what they’ll wear the next day or what they want to eat is hard; they change their minds constantly, because they haven’t learned this process of choosing. Learning to choose is hard; learning to choose well is even harder. And Barry Schwartz, in his book, argues that learning to choose well in a world of unlimited possibilities is almost too hard.

We have this notion that having all these possible choices actually makes it worse: when you finally make a choice, you regret it, because there were so many other possibilities you don’t know about. Barry talks about this in his book, "The Paradox of Choice." It turns out having too many choices makes us unhappy, and we’re constantly surrounded by too many choices, everywhere. So there’s a real challenge.

So it turns out this whole process of sleep and waking is incredibly important for constructing the memories, and for practicing, and for learning. Here’s a little trick, here’s a little sleep trick I’ll tell you that I learned. If you want to remember what happened during the day better, go to sleep early, because that early sleep is where you consolidate your memories from the day. If you want to think creatively about a problem and how to come up with creative solutions, sleep in late. Because it’s that last bit of deep sleep where you practice and you hallucinate and you work on problems that you already have.

So now I tell my wife, "You know, there’s a lot I have to remember and I have a lot of creative work to do, so I will be going to bed early and sleeping in late, every night." But this idea of learning to hack the brain is incredibly important. There’s another big problem with big data. Edward Tufte, who creates lots of great visualizations of data, has this great line:

"If you torture big data long enough, it will tell you exactly what you want to know."

We have to be very careful in this choosing process that we don’t just keep choosing the things that confirm what we already suspect.

And of course that’s exactly the Facebook problem, right? Those of us in social media, we end up choosing the things that confirm what we already suspect even if those things are not true. And we create memories, and we create stories, and that’s how we operate our brains, and that’s how we go out into the world with this information. Now, if we wanna build a web of autonomous machines, machines that can talk to each other, we’re gonna have to teach them to hallucinate. We’re gonna have to teach them how to figure out, how to practice, how to imagine, how to project, how to talk about a goal and get there.

We’re also gonna have to teach machines how to forget. We’re gonna have to give up this notion that all we want is every bit of data we ever have and what we want instead is machines that can create their own memories, their own stories, in order to get somewhere.


Now I wanna talk about another very important thing before we get to the next step, and that is what I call lies. This is a picture of one of the early Google test cars for automated driving: lots of sensors, lots of devices and all these other things on it. And you get the sense that it pays quite a bit of attention to the landscape.

It turns out that’s not really so true. I love this quote from Michael Lewis, who wrote several very popular books: "Moneyball," books about the stock market crash, and so on. He says, "The key is simple: a car is complicated, but driving the car in traffic, that’s complex." The problem isn’t the car, it’s the traffic. And we’ve learned to drive in a space where we can’t predict what’s going to happen next, right? That’s how humans drive. We don’t know the whole traffic situation before we go out the door; if we had to, we would never leave.

But instead, we deal with what comes. It turns out that most automated driving technology, even today (this is research I was doing about four years ago), simply memorizes a path and travels that path, while the sensors look out for unexpected things; if the car runs into something unexpected, it simply stops. As a matter of fact, I don’t have a slide for this, but I just learned a few months ago that Nissan is going to offer a service for automated-car companies where a human actually pays attention to any car that calls in while driving.

And if their car has a problem it can call Nissan and say, "You know, there’s a roadblock here, what do I do next?" And the human reprograms the car around the obstacle. So it’s like software support for cars: the car calls in, "You know, I can’t get through here," and they tell it how to get to the next step. Because cars don’t deal with the unexpected, they simply turn off. All they know is the road that they have. If you watch one of these Google cars, all it does is follow a predetermined path.

If it finds an obstacle, it simply stops and waits for a human to solve the problem. Cars are not yet designed to be autonomous; they’re designed to operate safely in an environment they already know. That’s one of the reasons these cars don’t operate everywhere. It’s also one of the reasons Tesla ends up reporting more accidents than other manufacturers (not that they report more accidents than humans): they’re trying to push that envelope. It turns out that in real life we have this very unpredictable sense of things.

This is a flock of birds following each other. They don’t follow each other because they know where they’re going; they follow each other because they follow each other, right? A brain is made to build a brain. And it turns out most activities in life are very random. This next image is from a great series of books on ants and ant culture. There is no queen telling everybody what to do in an ant colony; ants decide for themselves based on simple messages. Nobody is in charge in an ant colony. It’s a lot like that flock of birds: nobody is in charge.

But ants have actually survived for well over a hundred million years on Earth. And in fact, in almost all life, in bacteria, in multicellular structures, there’s no one in charge. They simply interact with each other in very small ways. This gives us a great opportunity to think about how we’re gonna manage machines. Because this world of complexity is not just statistics. It isn’t just "pick the easiest way to get from this location to the next"; it’s actually about interacting with things around you.

You know, IBM Watson is this really impressive technology that has pushed statistics just about to its limit, and it’s actually gonna do great things for us. But all it’s really doing is predicting based on the past; it’s not interacting. It turns out you can’t get rid of the complexity if you want to actually advance the technology. Google has already discovered this: we’re coming up to a point where all this technology based on statistics, while still improving, yields less benefit than it did in the past.

We’re coming to the end of the simple version of the story. The simple version was, "Use math, big numbers are your friend." Right? Well, now they’re no longer paying off; we have to get to the next level, the next set of steps. Because learning is a complex system; learning is different. Statistics is not learning, statistics is just replay. I wanna add one more thing that’s come up in the last year or so, and that is this notion of bias. As we introduce more machine learning, more AI-style statistical feedback, as we try to get to this next level, we’re discovering a great deal of bias.

It turns out that machines bring more than just computing: they bring bias and assumptions. And whose bias and assumptions end up in the machine? The programmers’. So several years ago, I think it was 2009, HP released a camera that was supposed to track faces, and it turned out not to track the faces of black people. Another example: a camera that would warn you if somebody had blinked when you photographed them. It turns out, however, that this woman is Asian, and she kept getting the question, "Did someone blink?"

There have been several recent examples of this: soap dispensers that would dispense to a white person’s hand but not a black person’s hand. Here is an interesting one: when you would search on LinkedIn for Stephanie Williams, the AI would say, "I’m sorry, did you mean Stephen?" And here’s one that really surprised me: voice recognition systems make more errors when listening to women’s voices than to men’s.

Now, the first few examples (not being able to track a face on screen, having a problem with facial recognition because of the eyes, or having a problem with dispensing soap) are actually implementation details. It turns out that in the case of skin, we’re using algorithms based on reflectivity, right? Darker skin doesn’t reflect as much, so some of these photoelectric systems don’t work right. That’s built into the software itself, and when that software is built and tested, it’s built and tested by people who don’t encounter this problem.

What does that mean? We’ve got a diversity problem in the team that’s building the software. It’s not designed in, it’s not like somebody said, "Let’s make sure that we treat Asian people a certain way with the software." Nobody does that. It turns out it’s an unexpected consequence of the software we’re building, and we’re only now starting to see this unintended consequence. These last two, this one about LinkedIn and this one here about voice recognition, that’s a different problem. That’s actually the training data that we’re giving a machine.

The way voice recognition systems work is that you train them, and it turns out that this software was mostly trained by men. Again, it’s a diversity problem. So in one case the bias is coded in, and in the other it’s learned. And that’s really, really important, because when the diversity of the group creating a product or service doesn’t reflect the diversity of the group using it, suddenly we notice the bias. Notice what I said? We notice the bias; it’s not like the bias wasn’t always there. It always was. This has always been true.

There’s another word that we have for these assumptions and discovered biases, we’ve used the word for quite a long time, we call them bugs. "Oh I’m sorry that interacted in a way I hadn’t thought of." Now, bugs in a particular single piece of code are relatively easy to find, although you’ll never find all of them. But as you start to connect machines together, you’re gonna find more and more instances of unexpected bias, unexpected assumptions resulting in what we would call bugs. And that’s why I love this sign that’s here in the hall because this is really true in many, many cases.

It’s not actually a bug, it’s just a random feature I didn’t know I had. And in fact, brains have lots of random features. We make weird connections that can turn out to be very valuable, because the brain is all about making a brain. Okay, so let’s talk about what we’re really here to talk about: this autonomous web, this idea of how we can start to create machines that really do connect to each other. I’m gonna jump a little bit into information theory and complex systems, and then I’m gonna talk about this word "hypermedia," the way we connect the web today.

I’m gonna go back to the 1800s because I find this really interesting. James Clerk Maxwell was trying to see if he could beat the Second Law of Thermodynamics by predicting where atoms would be. He knew that fast atoms generate a lot of heat and slow atoms do not. He imagined, in a thought experiment, "If I had a little demon that could know which atoms are fast and which are slow, and separate them, I could find out some interesting information. I could actually predict where things would be." Of course, it’s just a thought experiment.

A few decades later, Ludwig Boltzmann came up with the idea of entropy: Boltzmann entropy. It turns out that every possibility of where atoms might be in a single space exists all at one time, in what we call an eigenstate (a "proper" or "characteristic" state), and that knowing where an atom is is really just a prediction across the number of possibilities. So all we can really do is predict the probable location of any particular thing. Then, several decades later, Claude Shannon, who is considered the father of information theory, gave us the parity bit.

Claude Shannon thought up the notion that "if I send a message and I need to check whether it’s actually the same message that is received, I can do that using a parity bit, without ever understanding the contents of the message." Shannon said the number of bits needed to represent something is actually his version of entropy, or what later became known as surprisal: the surprisal of information is the part that was unexpected. This unexpected nature of information also comes up in Turing’s work, in what’s called the halting problem.
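Shannon’s parity check is simple enough to sketch in a few lines of Python. This is my own illustration of the idea, not anything from the talk’s slides:

```python
def parity_bit(message: bytes) -> int:
    # even parity: the check bit is 1 when the payload has an odd number of 1-bits
    ones = sum(bin(b).count("1") for b in message)
    return ones % 2

msg = b"hello, Prague"
sent = msg + bytes([parity_bit(msg)])  # append the check bit to the payload

# the receiver recomputes parity over the payload and compares it with the
# check bit, without ever needing to understand what the message means
received_ok = parity_bit(sent[:-1]) == sent[-1]

# flip a single bit in transit and the check fails
garbled = bytes([msg[0] ^ 1]) + msg[1:] + sent[-1:]
error_detected = parity_bit(garbled[:-1]) != garbled[-1]
```

A single parity bit catches any one-bit error, though two flipped bits cancel out, which is why real protocols use longer checksums built on the same principle.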

Turing asked, back in the 1930s: if he writes a computer program, can he know whether it will ever end? It turns out you cannot, in general, predict whether an arbitrary program will halt. Again, we only have this vague notion of what’s really going on. Finally, I love this trick. Kurt Gödel, a mathematician, basically gives us this statement: "This statement is unprovable." Is that true or is it false? What Gödel has done here is give us a little mental flip in the middle of a sentence.

He has combined both the data and the program, the rules for deciding, along with the data itself, and our minds get confused by this. If you watch lots of 1960s and ’70s sci-fi films, a human would usually use some trick of logic like this to blow up the evil machine that was gonna take over the world. Von Neumann, who came up with the way we think of computers today, also hit on this notion of storing data and program in the same physical space; in the beginning they were two different things.

We thought we would keep data in one place and program in another, but he said, "No, you mix them and match them." This allows us to create programs that write programs. So data and program live together. And it turns out that’s exactly how life works: RNA messaging is data and program together. All single-cell life, even below the single cell, DNA and RNA, works on the notion of messages passed back and forth that can act as both program and data. Now I wanna mention one other person: Roy Fielding.

Roy Fielding wrote a dissertation in 2000 about how the web works, and he said the web is a place whose shape you can only approximately know. Machines can enter and leave, and the only way we can safely communicate is by messages passed back and forth, and the messages contain not just the data but also the instructions for the very next steps. In the human web we call those links and forms; they are the program, and the rest of the display is the data.
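Here is a tiny sketch of what that means in practice. The message shape and field names below are hypothetical, not from Fielding’s dissertation; the point is that the client is written against a generic rule ("find the link whose rel matches my goal"), so the program arrives inside the message instead of being hard-coded:

```python
# A hypothetical hypermedia response: the "links" are the program,
# everything else in the body is the data.
response = {
    "customer": {"name": "Jana", "status": "active"},
    "links": [
        {"rel": "self",       "href": "/customers/42",            "method": "GET"},
        {"rel": "deactivate", "href": "/customers/42/deactivate", "method": "POST"},
    ],
}

def next_step(message, goal):
    # generic client rule: look for an affordance matching the goal at runtime,
    # rather than baking URLs and workflows into the client at build time
    for link in message.get("links", []):
        if link["rel"] == goal:
            return link
    return None  # the affordance is absent; the client adapts instead of breaking
```

If the server later moves the deactivate endpoint, this client keeps working, because it never memorized the URL, only the rel.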

And it turns out large networks of components with no central control and simple operation, that’s a complex system. That goes back to the ants that we talked about earlier. We can actually build these kinds of things. The other big thing about a complex system is that it has its own behavior that emerges as a result, behavior that is not owned by any one single component but is the combination of many components. And the web itself is this kind of complex system. We have the perfect experimental grounds for building these kinds of things in the web itself.
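That kind of emergence is easy to demonstrate. In the toy simulation below (my own illustration, not from the talk), each "bird" follows one local rule, copy the average heading of a few randomly seen neighbours, and global alignment appears with nobody in charge:

```python
import math
import random

def order(headings):
    # alignment order parameter: near 0 for random headings, near 1 when aligned
    x = sum(math.cos(h) for h in headings) / len(headings)
    y = sum(math.sin(h) for h in headings) / len(headings)
    return math.hypot(x, y)

def step(headings, k=3):
    # local rule only: each bird adopts the circular mean of k random neighbours;
    # no bird ever sees the whole flock, and nothing coordinates them centrally
    new = []
    for _ in headings:
        nbrs = random.sample(headings, k)
        new.append(math.atan2(sum(math.sin(h) for h in nbrs),
                              sum(math.cos(h) for h in nbrs)))
    return new

random.seed(7)
flock = [random.uniform(-math.pi, math.pi) for _ in range(50)]
before = order(flock)
for _ in range(40):
    flock = step(flock)
after = order(flock)  # global alignment emerges from purely local behavior
```

The order parameter climbs from near 0 (random headings) toward 1 (everyone aligned), a system-level behavior that no single component owns.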

So I wanna dig deep into what we have today on the web and what we can do. If you’re programming the web today, you program with media types, message formats that we pass back and forth, registered with IANA. In the last few years, several new media types have been created that are designed for carrying program and data together, and we use them in APIs to communicate back and forth. The more programming you can put in the message, the less code you have to write.

And that makes it possible for a machine to learn over time without having to be reprogrammed. It turns out these messages operate on this notion of surprisal, Claude Shannon’s version of entropy. There’s a great story about how Shannon came up with the name "entropy" (it works opposite to what you might expect; entropy is about everything becoming random), but that’s for another time. It turns out each media type has its own level of entropy, or surprisal.

There’s a media type called text/uri-list, which is just a list of URIs, so there’s very little surprise there, very little new information; we know exactly what’s gonna happen. Plain text, however, has lots of surprisal: we never know what’s gonna be in plain text, so we would have to do a lot of work to figure out what a plain-text message means. HTML is a good in-betweener: it marks out where the commands are (the anchor tag, the form, the img tag, the iframe) and where the data is. So it has a sort of medium-sized surprisal, but it doesn’t dictate exactly what order things come in.
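We can put rough numbers on that intuition with Shannon’s formula, H = -Σ p·log2(p), computed over the symbols in a message. The sample strings below are made up for illustration; the point is only the ordering:

```python
import math
from collections import Counter

def entropy_per_char(text: str) -> float:
    # Shannon entropy in bits per character: H = -sum(p * log2(p))
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# a uri-list is highly repetitive, so each character carries little surprise
uri_list = "http://example.com/a\nhttp://example.com/b\nhttp://example.com/c\n"

# free-form prose draws on a much wider symbol set, so surprisal is higher
prose = "The quick brown fox jumps over a lazy dog; 42 ships sailed at dawn."
```

Running this shows the uri-list scoring noticeably fewer bits per character than the prose, the same low-surprisal versus high-surprisal spectrum the talk describes for media types.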

From a machine’s point of view, there’s a balance between this entropy and the energy it takes to actually do something with it. It turns out the computing power, the coding time, the source code, and the memory all go up as entropy goes up. And most applications today are one-off affairs; we custom-code them. We have a huge amount of surprisal in every message, so we have to write a lot of code. That means if we change the message just a little bit, we have to write more code, and we create versions and re-releases and all the other things we saw earlier.

So this is high-energy computing, where I have to treat everything as a custom-built app. Now, the HTTP protocol was designed for low-energy computing: I can write an application on HTTP and I don’t have to change HTTP to get that new app running. HTTP operates on a low-energy model, very predictable, very simple. But most things today still assume HTTP is the only protocol we will ever use. We’ve been on it for about 25 years, and that’s very nice, but do we think it’s gonna last 100 years? What happens when we’re no longer using HTTP and GET and POST and PUT and DELETE?

What happens if we're doing something else? What happens to all those machines? It turns out that when you're using HTML and HTTP, it's like Abraham Maslow said: when all you have is a hammer, everything looks like a nail. "I can make everything work in HTTP." Well, of course you can, but you need to be prepared for the idea that HTTP might not be there, and then what do we do? Have you thought about it? Have you thought about how long this will last before it's gone? Twenty-five years is a long time in internet time, by the way.

So I talk about this idea of Kelvinism. Lord Kelvin was this physicist who was quite sure that the Earth could only be somewhere between 20 and 40 million years old, and that's it, because he had measured its heat dissipation. But he had forgotten about radiation and lots of other things, and until his dying day he could never accept the notion that he might be wrong. We need to be careful that we don't become Kelvinistic about what we think is going to happen on the web. So we need more designs, more message models that experiment with this entropy pattern.

We need lots of low-entropy, high-information messages; we need to design for machines and not just for humans. And usually that means creating structures that separate the actual format from the information we wanna convey and from the actual protocol we wanna use. We won't talk a lot about this, you can look at the slides later, but we have laid out before us already enough technology to solve this problem. The higher the surprise in the message, the higher the dependence on code. We need to change that.

There are all sorts of ways that we can start to refactor the code we have to lower that surprisal value while still increasing the information value. There's a specification called ALPS, which I and a few other people are working on, that helps you create a different kind of contract: a contract focused more on the domain surprise than on the structure of the protocol, so you can handle the rest independently. Eventually, we need to start writing messages that only machines understand.

There have been a few stories recently about how Google and Facebook have been experimenting with getting machines to talk, and the machines invent their own language, a language that humans don't speak. So we'll see more and more of that in the future. We need to stop thinking about any one protocol and start to think about the fact that other protocols may exist someday. We need to focus on the network itself, not on any individual protocol. It turns out the web is one application. It is this one place where we can begin to make things work.

And the web works by this very strange rule; it doesn't actually grow in a random way. It grows in a way that favors existing connections. Google was founded on the notion that links favor connections; there's this thing called scale-free networking. So we need to lower entropy, reduce dependence, and treat the network as the application we're working on, with the machines as things we're adding to that application. Of course, there are some hard things too: we have to get past the point of central control.
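The "favors connections" mechanism behind scale-free networks is usually called preferential attachment, and it is small enough to sketch (the node counts and seed here are just illustrative, not from the talk):

```python
import random

# A hedged sketch of preferential attachment: each new node links to an
# existing node with probability proportional to how many links that
# node already has -- the rich get richer, and a few hubs emerge.

def grow_network(n_nodes=1000, seed=42):
    rng = random.Random(seed)
    # Start with one link between nodes 0 and 1. `ends` holds every
    # link endpoint, so picking a random entry is degree-weighted.
    ends = [0, 1]
    for new in range(2, n_nodes):
        target = rng.choice(ends)          # degree-weighted choice
        ends.extend([new, target])
    degree = {}
    for node in ends:
        degree[node] = degree.get(node, 0) + 1
    return degree

deg = grow_network()
print(max(deg.values()))  # a few hubs collect far more links than average
```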

This is a thing called Conway's Life, the Game of Life. It's a simple mathematical system, very small, but it generates an infinite number of art pieces: some die, some live, some grow. We need to start thinking about the way we'll program things in the future when there's no central control. Stephen Wolfram has talked a lot about this; these are some of Wolfram's cellular automata. When von Neumann created computing, CPUs and all that, he had a choice between that and cellular systems, and he picked CPUs because they were easier to do.
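Conway's Life is small enough to sketch in a few lines; here is a minimal version (the "blinker" pattern at the end is just one illustrative starting state):

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker": three cells in a row flip between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))         # the vertical form
print(step(step(blinker)))   # back to horizontal
```

Note there is no central controller here: each cell's fate depends only on its neighbors, which is the point of the example.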

But we've got a whole future in this version of automata, treating every application as its own cell in a vast array of things. And this gets much closer to the way birds and ants and other creatures behave, once we start to treat the network as the application space. So machines have to learn to adapt. This is an example of a program called Robby, a simulated robot that actually learns to find little objects, in this case soda cans on a grid. You start with a random strategy and then you run it 1,000 times.

You find the 2 runs that had the best score, you splice them together, and that becomes the start of the next 1,000 runs; then you splice the best 2 again, and again, and eventually it learns. What is that? Thousands die, a few live; this is exactly how evolution works. So we already have people modeling this kind of activity. Modeling adaptation is another thing that we have to do; we have to figure out how machines are going to favor one adaptation over another. There are a few bits and pieces that we'll talk about briefly. Random Boolean networks give us this idea of actually scoring things without any central hierarchy.
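The "splice the best two" loop just described can be sketched as a toy genetic algorithm (this is not the actual soda-can program; the fitness function, genome size, and mutation rate here are all made-up stand-ins):

```python
import random

# A toy sketch of the loop described above: score everyone, keep the
# two best, splice them to breed the next generation, mutate a little,
# and repeat until the population learns.

def evolve(fitness, genome_len=20, pop_size=1000, generations=50, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best, second = pop[0], pop[1]
        children = [best, second]              # keep the winners around
        while len(children) < pop_size:
            cut = rng.randrange(genome_len)    # splice the best two
            child = best[:cut] + second[cut:]
            if rng.random() < 0.1:             # occasional mutation
                i = rng.randrange(genome_len)
                child[i] = 1 - child[i]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy fitness: count the 1s; evolution climbs toward the all-ones genome.
winner = evolve(fitness=sum)
print(sum(winner))
```

Thousands of candidate genomes are discarded each generation and only the fittest survive into the next, which is the "thousands die, a few live" shape of the story.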

They're just these sorts of nodes that kind of get refreshed. And the biggest challenge is competition. In real life the best learning instrument is death: you die if you don't learn. Now, we can model a lot of this in computing, but we haven't figured out how to do it well yet, and that's gonna be a challenge for us. Okay, a quick summary. I know we've been through a lot, but we'll get a chance to talk about this a little bit. So we talked about information theory and biological systems, and it turns out that the web itself follows these rules.

The problem is that most of us ignore the features we have available to us, these adaptations, this ability to get autonomy, because we have lots of tight coupling and interdependence. We change one thing and the whole set of applications breaks. We have to start creating these low-entropy systems that let us decouple things by treating the machines as these little cells, and eventually we have to give up the notion that we control the machines. That's gonna be very difficult, but that's what we have to do.

We don't control most of the world around us, and the machines are gonna be just part of that in the same way. So, how long? I love this, Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law." So we don't know how long this will take, but we know that we can make a step every day. We know we can get a little bit better every day. Because if we ignore the mistakes of the future, we're going to make them. We need to get proactive today and think about how we can build safe, cheap, and effective automata.

Because the rule of the robots is already here, and we might as well start to enjoin those robots in the things we want. So we must be willing to lose sight of the shore in order to go ahead. I wanna thank you very much; hopefully this has given you some ideas, some things that we can talk about here in the break. And I just wanna say thank you very, very much. Thanks. Thanks.