Overview

Status

Delivered 2019-12-10 at apidays Paris 2019, Paris, France

Home: HTML

Slides: PDF

Video: NA

Audio: MP3

Transcript: HTML

Prepared Script: HTML

Visit superface.ai for more information about the demos and the Superface project.

Transcript (Unedited)

Mike

Okay. All right. Very good. Okay. Okay. Actually, I don’t know if you can tell, but Z and I are a little wound up here. We’ve been working on this for a little bit. This is a really exciting talk for me, and I hope it will be interesting for you as well. You can hear me okay, right? I’m just having a little challenge. Okay. Very good. I wanna say for a minute, I’m really happy to be on stage here with Z. I first met him when he was working on the Apiary stuff, as a co-author of MSON. We’ve done a few things together. Recently, Good API, all the things you’ve been working on. You helped me actually have an experience with DHL as well. One of the things I’m really excited about that we’re talking about today is a project that Z’s been working on for quite a bit, and I’ve been helping him out. I’m really happy to be here. So we’ll get that taken care of.

Zdenek

Thank you very much, Mike, so am I, happy to be here with Mike. He is, of course, a world-renowned speaker and author, helping many companies with their API journeys and consulting on architecture and microservices, but most importantly, he is a pioneer in APIs. He is a mentor to me, and to many of us. Over the years, he’s been helping us to, you know, understand APIs. I always say that we as a group of experts are here, and Mike is somewhere here, you know, we are trying to follow him often in case [inaudible 00:02:24].

Mike

All right. I’m gonna need to refer to my notes. I apologize for that. This is a brand-new talk. This is something we just worked on, so I don’t have everything down perfectly. The real message of my talk is really based around this date, December 9th, 1968. December 9th, 1968 is when there was a demo in San Francisco that has come to be called "The Mother of All Demos." Who’s heard of "The Mother of All Demos" before? Okay, very good. Well, hopefully, this will still be interesting to you as well. But what I wanna do is more than just talk about "The Mother of All Demos" and Douglas Engelbart, who ran that demo. I wanna talk about the early days of programming computers, programming machines. This, by the way, is a machine that was built for NASA to help manage NASA aircraft. So that’s one example. However, I also wanna talk about how the early days of programming and the things we’re doing today are tied to this. This is the underground cave system in my state of Kentucky. Has anyone been to Mammoth Cave before? Anybody heard of Mammoth Cave before? All right. So it turns out there’s an amazing connection between the caves underground in Western Kentucky and early programming and this gentleman right here. This gentleman is Douglas Engelbart. And this is Doug actually, this is a shot from his "Mother of All Demos," a 90-minute tour de force that was just an amazing demo in 1968, and we’ll kinda use that as a jumping-off spot. What I really wanna talk about is what was happening in computer programming in the early age, that’s in the '50s and the decade beyond, how Engelbart wanted to change the way we thought about computing, and how it turns out this idea of caving and caves affected the way we write programs today, the way we actually network programs today, and how that can help us figure out how to deal with APIs in the future. I really liked Paul’s talk. I liked that Paul was talking about ecosystems, and hopefully, this will give us some other ideas on those ecosystems as well.

Let’s talk about that demo day in 1968. So what happened is there was a conference in town. This is 1968. Think about what’s going on in 1968, where you were, if you were. Some of us were. Like today, it was breezy. It was cool. It was overcast in San Francisco, and Douglas Engelbart starts off on this demo to about 2,000 people in the room. He starts showing off some amazing things in computing, things that at the time were just absolutely astounding. One of the biggest things is he’s actually working on an interactive computer. In 1968, computers were not interactive. They had no screens. If you were lucky, they took punch cards in, and they created printed paper out, and that was it, and many of them, in the beginning, didn’t even do that. They looked like that NASA computer I showed you a minute ago, which was just rows and rows of lights. If you look at science fiction shows from the '60s and '70s, all their computers don’t have screens either. They have lights. They have just rows and rows of lights, as if we could all understand how to flip switches and understand the color of the light, and that’s how we would actually operate computers. That’s what people thought back then. What was amazing is not only what he showed, but how he showed it. This is actually the keyboard setup that Doug used while he was on stage. You’ll recognize, in one hand, the early version of what we now call the mouse, his pointing device. His mouse that he actually built himself on a block of wood with a couple little wheels and some rubber bands, and you can see he has four buttons on the top of it. But what I find even more amazing is what he has in his other hand. What he has, in his other hand, is this paddle wheel or this set of paddles. There are four paddles. It turns out that Engelbart didn’t actually type much on the keyboard. He typed all his lettering on that paddle on the left. He had worked out a system, and you’ll see a version of that a little bit later. He worked out a system where all of the letters could be touched with the paddle. Punctuation and some other things and line edits were actually done on the keyboard.

So what he was doing in 1968 is showing people an interactive computer system in a way that no one had ever really seen it before. And what he showed off were some amazing things. He showed off real-time, multi-cursor, in-place editing. He had people in this location and 30 miles away over by Stanford University operating at the same time. He showed point and click, drag and drop, cut and paste for the first time. He actually showed a hyperlink in hypermedia, clicking on something and then following into the next step, intelligent outline-based editing, automatic indentation, and linking, and so on, and so forth, text messaging between parties, live video and text editing on the same screen, and version control, revision control. Doing all of this in 1968 interactively blew people’s minds. Today, fifty years later, we expect that every day, but in 1968, that was just seen as totally incredible. What he was really trying to do is to change the way we thought about what the future of computing would look like. He sat in this chair. It’s a specially built chair, actually, from Herman Miller. If you know what Aeron chairs are, well, he had one specially built just for this system so he could hold this keyboard. This was all in a time when computers looked like this, when they were the size of trailers, big room trailers with special air conditioning. If you’ll notice in this picture, there are just dozens and dozens of wires. Actually, computers were hardwired back then. You actually programmed them by connecting one wire to another. He’s actually completely changing the way we think about what computing ought to be. This is the UNIVAC, at the time, in the late '50s and early '60s, the top-line, advanced computer. And again, you see lots and lots of lights, and a handful of switches, but no screen. And that, by the way, is just the control console. The rest of the machine looks like this. That control console is connected to all these other parts, and there are rows and rows of banks of what is called plug-board wiring. This is actually a very advanced machine because it actually has tape in the back. To accept tape input was really quite incredible.

In 1968, people thought computers were just fancy calculators. They did really complicated problems like missile trajectories and things like that, but they were just calculators. They weren’t really very far from what we originally had at that time. But Doug wanted a different world. He wanted a world where humans actually interacted with the machines, where teams of people worked together on a particular project in an interactive kind of way, where lots and lots of people could mix things like audio and video and everything. This is actually one of the behind-the-scenes looks at this demo. These are people that were actually at a distant location in a place called the Stanford Research Institute that Doug ran. They were actually sending video feeds and doing other programming tasks at a distance, all at the same time, in real-time. The way Doug thought about computing was like this. Computers should help us come up with better solutions and faster solutions, and that meant that we could tackle more complex problems, and that would mean that we were advancing human capability. We were augmenting our intellect. When we advanced human capabilities, we could then build other computers that would make us build better solutions and faster solutions to deal with more complex problems, over and over and over again. What he called this was bootstrapping: that step-by-step, we could use technology to make us smarter, make us better, and make our communities and our world better. Isn’t that a lot of what we think about when we think about APIs? APIs are supposed to come up with better solutions, faster solutions, so that we can focus on other problems. Here we are, 50 years later, and we’re kind of doing the same thing that Engelbart was talking about, trying to figure out how to make things faster and better. What I’d like to do at this point is turn it over to my colleague Z, and you’re gonna show us what it’s like to build APIs today. Right? What we’re doing, right?

Zdenek

Of course. Thank you, Mike.

Mike

Okay, very good.

Zdenek

This will be some live demo. Hopefully, it will work for us. I’m trying to be here with the team of people, you know, on the screen with that more modern technology. So let’s see how this works. What we have here on the screen, this is a map of the Koana Islands, and the Koana Islands are a beautiful country. The only problem is, the weather there can be quite harsh. You can have earthquakes in one place and thunderstorms in another, you know, just a few hundred kilometers away. When you are traveling on the Koana Islands, you had better know, you know, about the weather alerts. Luckily, there is a service that provides weather alert information for the Koana Islands. Right? This is the API documentation of the weather service. Here you can see that you can simply query, get alerts at some point and read the [inaudible 00:12:11], and you might get the information of what’s going on, wherever you are on the Koana Islands, whether there are some important and critical alerts. Right? If you’ll be traveling on the Koana Islands, you might be interested in connecting to this weather service. Right? Now the technology is helping me to do the demo.

The point is, if you were in a position to create a dashboard for, you know, these weather alerts, you would probably be doing some imperative programming, like you see here on the screen. Using JavaScript, you have a hard-coded URL, and into the code you put, okay, the request method, you put there some headers, and you also hard-code the mapping of the response you’re getting from the weather API, or weather alerts API, into your internal representation of those data. Right? This is how we are doing it today. It all works just fine, you know. It’s hard-coded, and when I make the request, I’ll get the weather alerts, for example, here for Lexington. I might see that there are some thunderstorms going on. Right? Now, what happens today if a provider decides to change this API? Right? If you leave it with me, I’m going to change the provider, the service that provides the information for this API. For this, I’m going to change the version, for example, to version 4, and redeploy the provider. In version 4 of this API, I made one small little change: before, the required parameter here was called addressLocality. Right?
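For illustration, here is a minimal sketch of the kind of hard-coded client being described. The URL, parameter names, and response fields are assumptions for the example, not the actual demo code:

```javascript
// A hard-coded client: the URL, method, headers, parameter names, and
// response fields are all fixed at write time (names are illustrative).
const BASE_URL = 'https://weather.koana.example/v3/alerts';

async function getAlerts(city) {
  const response = await fetch(
    `${BASE_URL}?addressLocality=${encodeURIComponent(city)}`,
    { method: 'GET', headers: { Accept: 'application/json' } }
  );
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const body = await response.json();
  // Hard-coded mapping from the provider's response shape
  // into our internal representation of the data.
  return body.alerts.map((a) => ({ type: a.alertType, severity: a.severity }));
}

// Works fine -- until the provider renames addressLocality, moves the
// URL, or changes the method. Then every deployed client breaks.
getAlerts('Lexington').then(console.log).catch(console.error);
```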

Now, if I refresh the documentation, it’s no longer addressLocality, it’s called place. What happens now if I make the call with such a hard-coded client? Right? If I try to acquire the weather again, I will get a 400, missing required parameter, because I hard-coded addressLocality, but now the service provider has changed the parameter into place, and my client is broken. I have no idea about the weather where I’m about to go on the Koana Islands. And with that, back to Mike.

Mike

Okay. All right.

Zdenek

Back to slides.

Mike

Back to slides. So, right. You can even hear it in the description of what we say. We say hard-coded. Right? Just like hardwired. Right? The way we build APIs today is very much the way we programmed computers in that first decade. And we know that needs to change. What we really saw is that there are humans inside of all of this. Right? You have to constantly keep feeding this. You have to keep watching and monitoring and see if somebody changes something. Everybody’s affected. Even when there’s a small change, everyone is affected, and this really kills our effectiveness when we need to build large-scale systems. As we build smaller and smaller units, and we try to assemble them together, it becomes more and more brittle and more and more frightening. So this idea of having to nurse everything and make sure it still works comes from the very beginning of computing. By the way, these are some of the earliest computers, part of a group called the ENIAC 6, the people who actually built and operated the very first computers that helped compute trajectories for World War II. The person sitting on the bottom here, sitting down at the keyboard, is a woman by the name of McNulty, Kay McNulty. She actually designed many of the very first computer systems that we work on today and helped, as part of this team, to build the programs that we have today. One of the people who was connected with that team of six, the ENIAC 6, was this person, Grace Hopper. Anybody heard of Grace Hopper before? Grace Hopper, Admiral Hopper, she was a Navy admiral. Grace was a fantastic mathematician, and she understood what was going on with computers sooner than most.

She understood that the way they were doing it then, all that wiring, step-by-step, was not scalable. It was not going to work. She understood the same thing that we understand today about the way APIs work. You can’t hard-code everything; it’s gonna be a problem. She and her team, these are some more of the ENIAC 6, they made them pose in these goofy ways for publicity shots, but this is actually the ENIAC computer that was built in Pennsylvania that eventually helped to win the war. This team of people went to another company, the Eckert-Mauchly Computer Corporation, and they built the thing called the UNIVAC. The UNIVAC is what we were looking at earlier. Here is Hopper teaching other operators how to use the UNIVAC, and again, it’s a nicely posed shot. The UNIVAC is one of the very first commercially successful computers, but you still needed to wire it in. This is a plug-board. This is actually what programming used to look like. You’d have these boards that you slide in in several places. I think this is the one from the IBM 1401, where you would wire up these, you’d set these wires, and then slip this in, and now that’s the program. Then if you wanted to run another program, you’d pull out these plug-boards, you’d rearrange the wires, and you’d stick them back in. And of course, you hope you did it right. Right? Because then you’d have to debug [inaudible 00:17:35] And the original computers were run by vacuum tubes. The first UNIVAC had 18,000 vacuum tubes. You can imagine, if one of those goes out, the whole system doesn’t work anymore, so you gotta figure that out as well. What Hopper needed, what she knew she needed, was something different. What she needed was what she called automatic programming. She needed a system that would automatically write the program for her. Now everyone that built these computers told her she was crazy, that it’s not possible. That’s like asking cheese to curdle itself, or some other weird story they told her. She said, "No, no, no, this is exactly what we need to do." So what she did is she started building the very first compilers. FLOW-MATIC was the compiler, like the third or fourth generation that she built, and she wanted one that would actually work for any computer, whether it was built by the Mauchly company or by IBM or by Sperry or by Remington or any of these other companies. She wanted a universal programming language. This was in 1958, I think, or 1960. She and another team got together, and they built the very first universal programming language. Does anybody know what that language would have been?

Audience

[inaudible 00:18:48].

Mike

Yes? COBOL. All right? COBOL, the language that they thought would just get them through the next couple of years, but actually, it’s still active today. COBOL is really the universal language of computing, and it brought the notion of high-level abstraction to the idea of programming. Now it doesn’t matter whether wires are plugged in, it doesn’t matter how many peripherals there are, it doesn’t matter who built the machine; we can now use the same language over and over again, and it will still work. There was still a big challenge. So even when we now have the COBOL language, we still have lots of challenges, and those challenges are related. This is a 1401, by the way, that I was talking about earlier. The challenges are related to what happens when we move a computer, what happens when we add peripherals, what happens when we change different parts of it. So automatic programming was our first step, this notion of compiling. In a lot of ways, automatic programming is a lot like autonomous APIs. How can I start getting things to work together? How can I talk in a universal form that isn’t implementation-specific, that would let people actually use APIs over and over again, even when small parts change? I think you may have an example of something like that [crosstalk 00:20:05.408].

Zdenek

Thank you, Mike. So autonomous APIs, that is something that I’ve been pondering for the last four or six years, Mike probably even longer than, you know, all of us. I’m going to show you this one example with the Koana Islands weather alerts. Now, this was our hard-coded client. Of course, if you would like to fix the implementation, you would have to go to your client, change the addressLocality into the place, and redeploy to all the devices. Right? This is possible if you control the client. It’s not so much possible when you have no control, and people have those clients on their phones, and we can’t redeploy the clients. Well, we can try, but it’s very difficult. So we were thinking that there has to be a better way. I’m just going to [inaudible 00:20:53] of our API. I have come up with a solution that I’m going to show you right now. There is a simpler way to make a call, using this declarative code, so this is my client code that doesn’t have any HTTP details, any details about a particular provider. Here’s my original version, just to show you: this is v1, with addressLocality as a parameter. If I change it and refresh, or make the call again, I’m going to get a response with the information about the thunderstorm.
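For contrast, here is a hypothetical sketch of what such a declarative client might look like. The package, class, and method names are assumptions for illustration, not the actual superface API shown in the demo:

```javascript
// Hypothetical declarative client: it states *what* is wanted (a use
// case from a shared domain profile), not *how* to call any provider.
import { SuperfaceClient } from 'superface-client'; // hypothetical package

const client = new SuperfaceClient({
  profile: 'koana-islands/weather-alerts', // shared domain vocabulary
  registry: 'https://registry.example',    // where providers are found
});

async function showAlerts(place) {
  // No URL, no HTTP method, no header or field names here. The client
  // resolves a matching provider at runtime and maps its response into
  // the profile's terms, so provider-side changes don't break this code.
  const result = await client.perform('GetWeatherAlerts', { location: place });
  console.log(result.alerts);
}

showAlerts('Lexington');
```

The point of the demo that follows is that this same call keeps working through a URL change, a switch of HTTP method, and a parameter rename, because none of those details live in the client.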

Now, let’s walk through some possible changes, not all of them, of course. I’m going to change this API to another version, v2. This time, v2 has a different URL. Right? Before, it was a [inaudible 00:21:53] alerts. Now it’s the weather edition alerts. Right? That pretty much breaks all the hard-coded and hardwired clients. Not this new client, however. When I make the request, you will still get the quite important information that there is an earthquake going on at the harbor there. Right? So let’s try to do some more changes. I’m going to switch to v3, which is using a POST instead of [inaudible 00:22:20]. Right? Of course, another breaking change. Now you see the same API. It’s a POST for alerts. Of course, when I make the request, I will still get the response. Right? I can also show you here, in the code or in the console of the provider, this is the server implementing the weather service. You can see here the request that it is indeed getting; there’s a POST method there instead of what was the original. But there is no notion of these technicalities in the client. Right? The last example, I’m going to go back to our version 4, so changing the addressLocality to the place. Right? So going back to GET, and place is in place, and of course, if I make the request again, I’m going to get a successful response with the information about the weather alerts and the location. All this without hardwiring stuff, without redeploying the clients that I might not even have control over. Mike?

Mike

Okay. You can see now we’re sort of at this next stage. We’re sort of at this idea that we can start to have a more abstract view, a higher-level view. We’re taking humans out of the process. Humans don’t have to actually monitor every step, every URL, every method change, every argument change, if we use a higher-level abstraction, a higher-level approach. This is very much what Grace Hopper was doing with computing, you know, back in the '50s and '60s, and Engelbart knew that. Engelbart understood that that last decade, from the '50s to the '60s, had brought this new notion of leveling up on the abstraction, and he wanted to take it to the next step. He wanted to take it to the notion of making computers interactive for us so that we can actually solve problems. He knew that we could do things differently, and he knew that because Grace Hopper had already changed the face of computing in the last 10 years. But there were still challenges ahead. This is an actual reconstruction of the IBM 360. The construction of the IBM 360 is this long story, this tale. This was actually the first really super scalable, affordable construction of a mainframe. It almost killed IBM building it the very first time. They had to learn so many things along the way. One of the things that happened to Engelbart is that he was told at his event, "You know, it’s interesting, but I don’t think it’s very practical." A lot of the things that he talked about 50 years ago took decades to implement. That "interesting but not very practical" sounds a little familiar. If you recognize this document, this is what Tim Berners-Lee was told when he first described the World Wide Web. Somebody had written at the top of it, "Vague, but exciting." These are the words that kill your entire life. Right? Not really practical. Right? But I’m getting ahead. What I wanna do is I wanna talk about this. I wanna talk about the ARPANET. This is one of the earliest versions of the ARPANET. The ARPANET was actually just in test mode during the demo that Engelbart was doing. He actually was using an early part of it, and if you’ll notice, one of those dots is actually marked SRI. Remember, I mentioned the Stanford Research Institute? So Engelbart’s computer, his online system, was actually one of the very first Interface Message Processors, or IMPs, on the ARPANET in the beginning. So he was already using the ARPANET.

The ARPANET is this idea of timesharing, of lots and lots of computers. The ARPANET was built by a company run by these three men, Bolt, Beranek, and Newman, BBN. They originally started as an acoustics company that actually did the acoustics for the United Nations building, but they had an employee, this person here, J.C.R. Licklider, who changed their future. Licklider saw the power of computing in the early '60s. He said, "We need to get involved in this." He actually helped test the first PDP-1 and lots of other things. He made a lot of investments in Douglas Engelbart’s work. And Licklider knew already that lots and lots of computers were gonna be a challenge. He wrote a memo that actually changed the way we thought about how we connect computers. Notice the memorandum is for the members and affiliates of the Intergalactic Computer Network because, in the '60s, we were dead serious about outer space, let me tell you. We were landing a person on the moon. Right? We knew we were gonna have lots of computers in lots and lots of places, lots of toggles, lots of switches, lots of lights, so we were gonna have to figure this out. One of the lines that he had, and this was really, really interesting, he was talking about what it’s gonna take to connect lots of computers, and he’s got this line: the problem is essentially one discussed by science fiction writers, how do you get communication started among totally uncorrelated sapient beings? What’s he really saying here? When you meet an alien, how do you talk to them? How do you figure this out? How do you start a conversation? And he likened computers on a network to this very same idea. We were going to figure out a way to start to talk to each other in a very abstract form to understand what each other is doing. He understood how important this was gonna be. So while people were still building the UNIVAC and still operating these computers, he was thinking ahead about how they were all gonna connect to each other, and that leads to this part of the story. My state of Kentucky. I had to work this in.

At the same time that Engelbart is doing his demo, the years '68 and '69 and around that, there were people exploring the Mammoth Cave system in Kentucky. One of the people doing a lot of exploring in the Mammoth Cave system was Will Crowther. Will is the person down here with the glasses, sitting down. Has anybody ever heard of Will Crowther or Don Woods? Will Crowther and his wife, Patricia, did a lot of cave exploration. This, by the way, is a shot of the programmers at Bolt, Beranek, and Newman, BBN, the people who built the ARPANET. He was one of the programmers of the ARPANET. And it was his wife, Patricia, who would actually accomplish a very important feat. She actually proved the connection between the Flint Ridge Cave System and the Mammoth Cave System in Kentucky, making the Mammoth Cave System the largest known cave system in the world. She and Will mapped many, many, many miles of this system. It was really sort of near and dear to their hearts. Now, it was because of Patricia’s success in finally finding this sort of link between these two systems, and for other reasons, that Will started working on a project on his computer in [inaudible 00:29:08], and this project was actually a game. It was a game to travel to all these rooms in the cave. It was based on all the cave experience they had had. And what he had built in 1972 was a game called Colossal Cave Adventure, sometimes just referred to as Adventure or ADVENT, which is the command that you could use to start playing the game. Will Crowther invented online gaming in 1972 with this text-based game, and gaming and adventure gaming has actually established the way we play games on the computer today. One of the important elements of it is that we can talk in simple English. We’re programming the computer to go from place to place and do things in simple words and phrases. This is a picture of Will today. He’s a little older, he’s a little wiser, and he’s probably a little bit more mellow than he used to be. He doesn’t cave as much as he used to, but he is considered the father of what we think of as computer gaming. And remember, it started from this very place. It started from this SRI-based node in the BBN system.

So lots and lots of people now really think about gaming. Gaming is sort of a key element. This adventure gaming, in particular, is sort of a metaphor for a lot of things. In fact, one of the very first hypermedia formats and clients that I built is actually one based on a maze, based on a maze game, for the very same reason that it’s very much like the way we expect APIs to work. We want our APIs to actually go out and do something with this shopping cart, and we’ll do something with this credit card service, and then do something with this shipping service, and put it all together. But the problem is, it’s dangerous out there, and you can end up in the wrong room and not have the right tool, and you can’t make it. Now humans are in the loop today [inaudible 00:31:01] But what would it be like if you could create autonomous APIs, when we create APIs that can actually solve problems, fix themselves, do things, look and find and recognize things along the way? And that’s the challenge I give to you next.

Zdenek

Of course. No small task, Mike. Thank you. Going back to our story, we are on this beautiful island. We have a client that is quite [inaudible 00:31:27], but what if the service broke down, right? I have one little trick here. Just don’t tell anybody. If I disable this, you have to believe me, but the service is no longer available. Okay, don’t do this at home. Now, if I were to try to do this with the hard-coded client, of course, if I switch to hard-coded here, it will time out, and it will take some 20 or so, 40 seconds maximum. We’re not going to do it. However, with my new way of connecting to APIs, when I make the call, it’s [inaudible 00:32:08]. I’m still getting a response. Like, okay, now this is the magic trick. I’m here with this black box. The truth is, if you look closely, there is indeed another provider. The other one has a different URL, if I look at the next documentation. So there are more services providing weather alerts on the Koana Islands. Right? So let the system load in a few moments, and you will see that this is a very different service, yet within the same domain, providing similar or the same information for our API. Okay, here it goes, or not. I can [inaudible 00:32:49] here. So the API description of that service is very different.

It’s a POST. It is a different URL than the other one. Right? This probably will not surprise you anymore. It’s not even taking a query parameter. It’s taking some [inaudible 00:33:04] parameters, body parameters, and it returns a response with very different field names than the one previously. Yet, my smart client was able to pick the service up and utilize it without hard-coding to the [inaudible 00:33:25] service. Now, let’s say there was just a little hiccup in that main Koana Islands service. Let me take this one down. Right? Let me first restore the original service. Since I’m using Glitch, it should be pretty fast, and now it’s back up. I’m going to shut down this backup service that was there. If I make a call again, so this was the winter [inaudible 00:34:00] earthquake. Right? Now I’m going to make a call again with the same client. The important thing: during this presentation, I did not redeploy the client. The client is still as it was. It was never redeployed, and the code was never changed. As you can see, I’m already getting the information about earthquakes at Darbydale. This time, I’m back to this ballistics Umbrella service, which is the original provider. Here, my client was able to overcome some [inaudible 00:34:32] on the provider side, using some other provider within the same domain, with a maybe quite different API, that was available.
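A hypothetical sketch of the failover behavior being demonstrated: the client tries each provider registered under the same profile until one answers. The provider object shape and the mapping step are assumptions for illustration:

```javascript
// Try each provider registered under the same domain profile until one
// responds; each may differ in URL, method, and response field names.
async function performWithFailover(providers, input) {
  for (const provider of providers) {
    try {
      const response = await fetch(provider.buildUrl(input), {
        method: provider.method, // GET for one provider, POST for another
        headers: { 'Content-Type': 'application/json' },
        body: provider.buildBody
          ? JSON.stringify(provider.buildBody(input))
          : undefined, // only some providers take a request body
      });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      // Translate this provider's field names into the shared profile
      // vocabulary before handing the result back to the caller.
      return provider.mapResponse(await response.json());
    } catch (err) {
      console.warn(`Provider ${provider.name} failed, trying the next one...`);
    }
  }
  throw new Error('No provider for this profile is currently available');
}
```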

Mike

All right. It’s yours.

Zdenek

It’s mine. Very good. This was the third demo, and that was the answer to the question, what if the provider goes down? Now, we call this the super interface, or simply, superface. This is a technology that I have created based on many amazing things from Mike and others who are in this room. It’s called superface. The objective of this was, of course, to create the first implementation of autonomous APIs. Most importantly, I wanted to create it in a way that is easy to get started with, so we don’t have to throw away everything we have built already and start building the APIs differently. It’s very simple to get started with for both providers and consumers, even simpler for the consumers. Right? The point there was to get rid of programming against a particular implementation, and, as humans, reading somebody’s documentation, editing the clients, and redeploying the clients. Right? Another thing about it, this helps you to get away from [inaudible 00:35:52], so if you have [inaudible 00:35:54], you know, API providers within the same domain, you might use this. You should be able to decide between different providers and focus on the business objective of your clients. This is also moving away a little bit from discussions of [inaudible 00:36:12] whatnot, because this is a construction on top of these architectural styles, and, you know, the client can work with many different types of API. Now, superface is coming in 2020, we take our little break over Christmas of course, as an open-source project. So we totally really want everybody to use this. We think that we may be able to [inaudible 00:36:38] to move away from writing documentation and using documentation to, you know, hard-code [inaudible 00:36:45] on the business project. We might have some commercial support for superface [inaudible 00:36:50]. If you are interested in this, you need to go and explain that this makes sense. That’s why I’m interested in this technology. If you want to maybe employ an autonomous API for your next project, or if you just want to work on this, then please let us know. Check out [inaudible 00:37:08], and you will find more information about autonomous APIs. Check with us here at the conference or send us an email at the superface [inaudible 00:37:19]. We’ll be happy to discuss autonomous APIs and superface. Thank you very much.

Mike

Thank you. Do we have time?

Moderator

We have about 10 minutes, 15 minutes. Does anybody have any questions? I’ll actually give the mic back to [inaudible 00:37:46].

Zdenek

Yeah.

Mike

Yes.

Audience

I noticed in your demo, I think you have something [inaudible 00:37:52] that registers when you talk about providers. Do you envision clients themselves knowing about different providers, or are you considering a centralized registry for services?

Mike

Right. So the question was that you noticed that there was something referred to as a registry in the demos, and you’re asking about where clients or providers fit into this notion of registries. Right? [inaudible 00:38:14] So, yeah, the implementation as it is now is that the registry is where you go to find things. Right? So you noticed that the service actually saw that there was something that was down. It went back to the registry to see if it could find another service that matches the same description. Is there another weather service on the Koana Islands? So both clients and providers will be aware of a registry. What will happen is, when providers boot up, they’ll go register at one or more places. It’s sort of like your television channel or your news store or your distribution, whatever. And when anybody else goes shopping, when clients are looking to put something together, when they fire up, they can actually use descriptive information or profile information to find all of the services that are gonna solve their problems, and they might actually find more than one and use them as backups.
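A hypothetical sketch of that registry interaction. The endpoint paths and payload shapes here are assumptions for illustration, not the open DISCO specification:

```javascript
// On boot, a provider registers itself under the profile it implements.
async function registerProvider(registryUrl) {
  await fetch(`${registryUrl}/register`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      profile: 'koana-islands/weather-alerts',     // what it can do
      endpoint: 'https://umbrella.example/alerts', // where to reach it
    }),
  });
}

// A client "goes shopping" by profile and may get several matches,
// which it can keep as backups in case one provider goes down.
async function findProviders(registryUrl) {
  const profile = encodeURIComponent('koana-islands/weather-alerts');
  const response = await fetch(`${registryUrl}/find?profile=${profile}`);
  return response.json(); // a list of matching provider descriptions
}
```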

Audience

They’re better, that stuff? [crosstalk 00:39:05.024].

Mike

Did that help?

Audience

Cool.

Mike

Other questions, anyone? Yes.

Audience

[inaudible 00:39:14].

Zdenek

Absolutely, so the [inaudible 00:39:29] with the demos was there for, you know, taking the [inaudible 00:39:34], you know, using the [inaudible 00:39:36] in this platform, but yes, the client currently is written in JavaScript and Node.js. We can run in Node.js or the browser, and we plan to create more clients then [inaudible 00:39:49], starting with Python, but absolutely, yeah, it’s not a broken thing. The first [inaudible 00:39:57] edition is better than JavaScript, though.

Mike

There’s another question. Yes.

Audience

How do you manage [inaudible 00:40:06], you know, language from all of this provider. Do you use [inaudible 00:40:13]?

Mike

Do you wanna take that?

Zdenek

No. It’s yours, go on.

Mike

Yes. The question was, how do you know that the client actually understands, or has some understanding of, what the provider is doing, right? Is it based on RDF or something else like that, right? What we’re using right now is a profile language, and what we’re using is ALPS, which Leonard Richardson and I worked out a few years ago. There is a way to create a kind of zone, like, we’re going to talk about weather here, and there are weather concepts, like asking about an advisory, and so on and so forth. It’s very similar in a way to what Paul Fremantle was talking about, this notion of having a boundary or a fence or some limitation, so the shared understanding between parties actually turns out to be this thing we’re calling a profile of [inaudible 00:41:07] information.
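For context, ALPS (Application-Level Profile Semantics) profiles are typically small JSON or XML documents. A minimal sketch of what a profile for this weather domain might look like, with illustrative descriptor ids that are assumptions, not the actual demo profile:

```json
{
  "alps": {
    "version": "1.0",
    "doc": { "value": "Shared vocabulary for weather-alert lookups." },
    "descriptor": [
      { "id": "location", "type": "semantic",
        "doc": { "value": "The place to check for alerts." } },
      { "id": "alertType", "type": "semantic",
        "doc": { "value": "Kind of alert, e.g. thunderstorm or earthquake." } },
      { "id": "alert", "type": "semantic",
        "descriptor": [ { "href": "#alertType" } ] },
      { "id": "getWeatherAlerts", "type": "safe", "rt": "#alert",
        "doc": { "value": "A safe (read-only) lookup of current alerts." },
        "descriptor": [ { "href": "#location" } ] }
    ]
  }
}
```

Because the profile names only concepts and operations, never URLs or HTTP methods, any provider that can satisfy these descriptors can be matched with any client that speaks them.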

Zdenek

Yeah. It’s a domain [inaudible 00:41:09], so it’s not specific to a particular provider, no details at all of any kind of connection or response [inaudible 00:41:16]. This is not, let’s say, a [inaudible 00:41:20] specification; this is just a matter of fact. I think of a profile as a use case with the [inaudible 00:41:27].

Mike

We have time left for another one? Sure, go ahead.

Audience

Will this be dependent on a universal shared understanding of weather domains, or is this going to be, "I’m gonna talk this type of weather language with these guys and maybe a different type of weather language with others"? Do we need a centralized semantic registry? How are we gonna get there if we don’t?

Mike

I’ll just give a short version of it. This is Licklider’s question. Right? Licklider’s question is, "How do we get sapient beings to talk to each other?" Licklider’s decision was that we were going to create a high-level descriptive language but not a low-level implementation language, so that’s what we’re going to do. The way that ALPS was originally designed by Leonard and me (and Mark Foster --ed.), you can have lots and lots of variations of what a weather service is. What I would do is, when I connect, I would say, "Look, I need version 7 of weather, as you know, for something, something, something. Who out there actually provides that?" An individual service might actually talk more than one semantic version of weather, so this is still wide open. This allows the community to decide what it is. We don’t have to coordinate that language.

Zdenek

There’s one more question, but just to add: this makes sense even if you’re basically talking to yourself from just a few months ago. This, you know, gives you resilience and the ability to [inaudible 00:43:02] your API. If you want to have profiles for your services, it might seem a little ahead of things, but it might be worth it. But yes, as Mike said, in bigger scenarios, there might be different profiles for the same thing. There might also be a [inaudible 00:43:15] within the profiles, but that’s…

Mike

Sure. Yes.

Audience

Have you investigated the attack surface for this kind of ecosystem?

Mike

Investigate the what?

Audience

The attack surface, because it’s not something that I am confident in, that makes decisions for me, between the…I have more intermediaries. I don’t have [inaudible 00:43:44] that networking to regulate, just like DNS or HTTP [inaudible 00:43:52], but I have semantic intermediaries, so the actual process of communication between me and you may be semantically modified by bad intermediaries.

Mike

Okay, so let me see if I have this. The question is, have we thought about the notion that there might actually be semantic intermediaries, not just routing or protocol intermediaries? Is that the question?

Audience

And this exposes the communication to different kind of attacks.

Mike

Right. Oh, okay. I see. We haven’t addressed that directly, but we could use any of the existing encryption or certificate services, so this is actually true in sort of the Shannon sense. This is actually just bits on the wire. Each individual party does not have to understand the inside of the message, so it could be totally encrypted. It can even be customized or run over VPNs or something else like that to protect it. We also, in this version, didn’t address security in general, but in the discovery specification, open DISCO it’s called, there is a plug-in for any type of security that you’d like to do, so there are definitely possibilities, and we need some more work in that space, on security as well as privacy.

Moderator

One, maybe two more questions.

Mike

Okay, one, maybe two, maybe one more. Anybody have any more? Yes.

Audience

How is this [inaudible 00:45:12]?

Zdenek

Right now, it’s not like passwords. Of course, there are a couple of parts to the whole thing. There are a couple of formats that are making this possible, so that’s one thing. One of these formats is the profile, so a semantic description of the use case. Then there is this client, which is a higher-level client. Basically, [inaudible 00:45:41] say what you want from the domain. This is called the super driver.

Mike

It’s a library.

Zdenek

It’s a library, in really early beta, so you can see. That’s available on [inaudible 00:45:54] right now, again, in the JavaScript ecosystem, and there is DISCO, which is the…

Mike

It’s the registry. Actually, the DISCO registry is available. This version is on a slightly…They’ve drifted a little because they’re both early, but there’s…You can find a containerized version of an open DISCO registry so you can actually run a registry today by just running in [inaudible 00:46:19].

Zdenek

There are a lot of pieces, actually, the [inaudible 00:46:22] to put it all together and prepare some, like, you know, [inaudible 00:46:28], but you as a provider, you don’t need to do much, and you as a consumer, you just provide all the SDP [inaudible 00:46:36]

Audience

It’s because [inaudible 00:46:40]

Mike

So the question is, does it become a single point of failure? It does in the same way that DNS does.

Zdenek

Communication is not happening through any superface component internally. The communication is always from your consumer, unlike the centralization that we are seeing today with APIs. This is direct from consumer to provider. This is not going through some superface…

Mike

There’s no hub. There’s no gateway or anything like that. I think we’re…Do we have some?

Moderator

Out of time.

Mike

Thank you.

Moderator

All right. Let’s give a round of applause for Mike and Z. [crosstalk 00:47:23.042]

Zdenek

Thank you.