Overview

Status: Delivered 2018-09-22 at WebExpo, Prague, Czech Republic
Home: HTML
Slides: PDF
Prepared Talk: NA
Video: SlidesLive
Audio: MP3
Transcript: HTML

Transcript (Unedited)

Okay, there we go. So my talk is Bots on the Net: The Good, The Bad, and the Future. So I want to talk a little bit about what bots are like, where they come from, where we can [inaudible 00:00:17] and a little bit about where we’ve been and how we can think about where we’re going. We’ve already heard about things like how to build chatbots or AI tools. We’ve heard a talk today about how AI tools can be dangerous if done incorrectly. I’m hoping that we won’t get into those details but will tend to talk about the big picture of what it is to be in this bot space. By the way, all my title cards have little poems or sayings on them and these were written by a bot called Racter. So you can enjoy those as we go through each one of the slides. So this is me. This is how you find me online. This is how you find me on Github and LinkedIn and Twitter. I travel a lot, so this is actually the best place to find me. I travel about 40 weeks a year, all around the world, to talk to different companies about how they’re using technology to change their organization and to change the lives of their customers. So what I’m going to share with you are some of the things that I’ve learned as I’ve talked with those customers. I mentioned I work in this group called API Academy. It’s a group of about five or six of us that do the same thing that I do: travel all around the world. So there’s lots of great material you can check out there, as well. And the last book I worked on with the team, which I think was mentioned, is called "Microservice Architecture." I have a few copies here, but you can download a free PDF copy from this URL, and you’ll get the slides when you’re done, as well, so you can do that.

Okay. So let’s talk about bots and I love talking about bots here. Last year, I was here talking about bot autonomy and how you can use the way the brain works to start to change the way you code software. And I love talking about things like bots and autonomous engines here because we actually are at the beginning, right here in this city, of what robots really are. Right? So Karel Čapek’s play from 1920, almost 100 years ago this year, talked about this idea of universal robots. "Rossum’s Universal Robots" was the first use of that word, and it became the word we have been using for this last century to describe automatons or machinery that acts in some way to do some job. So I love coming here and I love enjoying the city and seeing the sights and talking about Karel Čapek. So bots, chatbots, bot tools, bots that talk to customers, these are becoming very, very powerful tools inside lots and lots of organizations, especially organizations that have a close touch to their consumer. It’s going to be a multi-billion dollar industry in the next seven years. More than 25% of the world is probably going to be using bots in some way. In 2016 alone, there were more than 30,000 different bots trolling around inside Facebook. Now, we know that that didn’t go so well for everyone and that’s a really important thing to think about.

I also have a little picture from Ray Kurzweil. Ray, the man who came up with the idea of the singularity, thinks bots will replace humans in the next 10 years. I’m not so sure about that. We’ll talk a little bit about why I think that. But businesses love bots. Businesses see bots as a way to reach more people, to reduce costs, and to be more effective, and that’s, sort of, a mixed blessing. Many of the customers I talk to are thinking that they’re going to use a chatbot to replace humans in areas where it may not be a good idea to replace them, and their primary motivation is money. It’s actually saving costs rather than improving the customer experience. So this can be really dangerous, when you kind of flip the script and think the real reason I want to use a machine is so that I can get rid of people actually touching my customers. That’s not what customers want. In fact, when customers are surveyed, most of the time customers are pretty iffy on whether or not they trust a bot when they talk to one. Even for the highest-trust relationships, like advisor and teacher, the numbers are barely in the teens. So when we’ve got customers saying, "I’m not really sure" and providers, companies, stores saying, "I really love these," that’s a gap that can be pretty dangerous.

That’s really the challenge of bots. The technology is really giving us a great opportunity, and we’ve got providers who want to see savings, who want to save money, but then we’ve also got customers who are really not so sure. So if you and your organization are thinking about doing this, you want to make sure you have a sense of what your customers are comfortable with and what they really want to see in their relationship with you online. It may match up great in some categories and some demographics, but let’s get started. What does this really mean here? Where does this notion that bots are good, that bots are positive, come from? We talked about the idea already of the word roboti, robota, coming here from Czechoslovakia, but the reason we talk about chatbots or bots as a way to communicate actually comes from Alan Turing. So about 30 years after "Universal Robots" premieres here, Alan Turing has this notion in London that we could maybe create an experience where a person might think they’re talking to another person, rather than a machine. And this becomes known as the Turing Test. Are there computers that would do well at the imitation game? I love that Turing was smart enough to notice that all the machine is really doing is imitating a human, rather than replacing one or being one. And we lose that a lot of times. A lot of times we think that bots are really about having their own identity and that’s really not a good strategy.

So Turing gives us this notion of a Turing Test. Within about another 15 years, a contemporary of Alan Turing by the name of Joseph Weizenbaum builds one of the first chatbots that actually passes the test for some people. This chatbot is called ELIZA and you can still find copies of ELIZA running on the internet today. ELIZA is this little tool that imitates, remember Turing talked about imitation, a psychotherapist or a psychologist in a particular style called Rogerian. Carl Rogers was a therapist who would ask lots of questions. "Tell me why you feel that way" and sort of elicit these ideas from you. So Weizenbaum writes a bot that acts like your therapist. "How are you today?" "Oh, I’m fine." "Well, why don’t you tell me a little bit more?" "Well, I’m kind of mad at my mother." "That’s interesting. What is it about your mother that makes you mad?" It just kind of talks back to you, and it’s a very simple natural language processing program. It’s actually relatively easy to write. You can write ELIZA in just a few hundred lines of simple code. It’s not very complicated.
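To make that mirroring idea concrete, here is a minimal sketch of an ELIZA-style reflection loop in Python. This is my own toy illustration, not Weizenbaum's original code: it matches a few keyword patterns, swaps pronouns, and hands your own statement back to you as a question.

```python
import re
import random

# Swap first- and second-person words so "my mother" becomes "your mother".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# A few keyword patterns, each with canned response templates, in priority order.
PATTERNS = [
    (r"i am (.*)",      ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"i feel (.*)",    ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (r".*\bmother\b.*", ["Tell me more about your mother."]),
    (r"(.*)",           ["Please go on.", "Why do you say that?", "How does that make you feel?"]),
]

def reflect(fragment):
    """Swap pronouns in a captured fragment ('mad at my mother' -> 'mad at your mother')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Mirror the user's statement back as a question, ELIZA-style."""
    text = statement.lower().strip(" .!?")
    for pattern, templates in PATTERNS:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)

print(respond("I am mad at my mother."))
# e.g. -> "Why do you say you are mad at your mother?"
```

That really is the whole trick: pattern matching plus reflection, with no understanding anywhere in the loop.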

The name, by the way, comes from a play from the 1950s called "My Fair Lady," which had a character called Eliza. It’s actually based on "Pygmalion," George Bernard Shaw’s earlier play from the 1910s, written around the same time that Čapek is writing. And Eliza is this sort of street person who’s taught to imitate very upper-class people and trick them, and that was really sort of what Weizenbaum thought was a great name for this idea. "I’ll just use the same idea because we’re going to trick people." And it turns out that ELIZA was actually pretty good at it back then. It could carry on conversations, and more than 50% of the people asked to judge those conversations thought they were talking to a human. Now, because Turing’s test is about whether or not you can fool humans, ELIZA doesn’t fool anyone anymore. ELIZA seems way too primitive and we’re too smart now, half a century later. We can tell right away, oh, that’s not a person, but at the time it seemed pretty radical. So that’s pretty cool but the thing is, of course, that ELIZA was an imitation. It was an imitation. It didn’t understand anything of what it was saying. It just mirrored back to you what you had just said in the form of a question or a query. Right? So that’s another really important element. These bots don’t actually understand. They just react and that’s an important thing to keep in mind when you create your bots. So ELIZA is what we call a vertical AI or specialized AI. It has a very small domain. It knows its one area. When it tries to get outside of its area or domain, it can’t operate at all. So it’s a very limited focus. A limited scope.

Another specialized domain was created just a few years later: SHRDLU, whose name has to do with how often letters are used, it’s a little bit of a joke, was built by Terry Winograd at MIT in 1968, two years after ELIZA. What’s interesting about the Winograd experiment is it has a sort of 3D world and you talk to this machine and you say, "Pick up this block and move that block" and it actually can manipulate a virtual world. Again, this was a big deal in the 1960s. It was half a century ago that Winograd created SHRDLU and I think there are still a couple of versions of SHRDLU running around, as well, that you can sort of test out and play with. Now of course in the '60s, the primary mode of communication was typing, but there are versions of these apps today where you can speak to them using a speech-to-text device and it works the same way. Now, both SHRDLU and ELIZA represent this notion of starting to create virtual worlds instead of physical worlds, whereas robots before this were mostly considered, you know, these big, hulking things that would walk around and manipulate things. Now we’re creating robots that actually live in their own world, their own little tiny world, and these are sometimes referred to as microworlds.

So specialized AI and microworlds are usually phrases that are used simultaneously. They’re very domain-specific. So artificial specialized intelligence is also very domain-specific. When we talk about narrow AI or special AI or vertical AI, we’re really talking about a single domain like shopping or credit cards or parking tickets. Right? There are bots that actually help you to dispute a parking ticket if you want to do that. So they work really well in this sort of narrow space, but the challenge for these kinds of specialized AI tools is they don’t scale well and by that we mean they don’t actually…they can’t easily expand their universe. They’re very narrow and in fact, the more narrow-focused they are, the more successful they are. The more ability they have within that space. Most of the chatbots you see online today operate in this realm. They operate in microworlds or specialized domains. They operate for selling you a mortgage or selling you a product or a service or helping you solve a problem with your hardware, your phone, your television, and so on and so forth. One of the most common specialized microworlds is actually repair services for automobiles. Right? Because that’s very much a checklist, a tree kind of system where it’s pretty fixed and straightforward and it’s not very variable.

But the real challenge is how can we do a better job at scaling AI? What do we need to do? The challenge for scaling AI has to do with changing the size of the world, or the microworld, that we’re dealing with and that leads us to some interesting things. And it turns out one of the interesting things is done by this man, Kenneth Colby. Now you’ll remember I mentioned ELIZA is a program that mimics a psychologist. Kenneth Colby is a psychologist, not a computer programmer, but ELIZA fascinated him and he thought, "You know what? I would like to do something like that and I would like to use the computer as kind of a test device for testing psychologists." So he kind of flips it around. What Kenneth Colby creates is a thing called PARRY, and that’s a play on words in English for PARRY the Paranoid. He actually creates a paranoid individual personality inside the microworld. So rather than programming the psychologist, he programs the patient, and it turns out he does a pretty good job of programming it. When people interact with it, they think it really is a human. As a matter of fact, he uses this with psychologists and tests the psychologists, asking, "Is this really a human?" And they often think it really is a paranoid human.

Now what I have on the screen here is kind of fun. This is actually somebody who linked up ELIZA and PARRY to talk to each other and you can read…there’s actually this RFC number. You can actually load it online and you can just read the conversation, and it’s actually quite funny to see this paranoid person who thinks everyone’s out to get them deal with this nice and smiley little psychologist who says, "Gee, I wonder why you feel that way." It gets a little boring after a while but it was pretty funny, and there were actually lots of stories about how people would start to get their bots to talk to each other. So PARRY presents us with a couple of problems. First of all, it turns out that almost 50% of the psychologists that were tested while PARRY was up and running really believed it was a human, which is kind of interesting because PARRY has more than just natural language processing. One of the things that PARRY has is kind of an emotional context or a behavioral point of view. It isn’t just the words. PARRY is also mimicking a psyche, a kind of paranoid psyche, so the word selections and the responses have to do with this notion that PARRY thinks people are out to get him. Even more interesting, it turns out it was easier for programmers to mimic a paranoid person than to mimic a non-paranoid or a sane person. That was a real surprise. Now, on the surface, it seems logical, actually, because paranoid people might say random things. So if your program is not quite right and it says something random, it seems like it fits in. But this turns out, back in 1972 when PARRY was created, to be a kind of harbinger of one of the big problems that chatbots have online today.

Now as I mentioned, the thing that PARRY really adds to this is this notion of behavior or reaction based in an individual context, and this behavior and this sort of emotional response is one of the things that makes it seem a little more human. But it’s based on a very negative point of view. So PARRY is specialized AI with a behavior or with a point of view or with some embodied context. It’s the encoding of behavior, the encoding of context, that’s new and it’s also a bit upsetting because this is where we begin to see bias. It turns out all of our code has bias. It has the bias of the author. It has the bias of the creator. That’s nothing new. It’s had bias since we wrote the very first machine code when Von Neumann was working with vacuum tubes in the 1940s, but it didn’t matter then. As we begin to bring our code to the point where our code’s going to talk to people or other code in some conversational mode, that’s when the bias becomes noticeable. It’s always been there but now it’s exposed.

So there are a couple of different kinds of bias. We heard a little bit about this today, I think from Val Head. She talks a little bit about this. The first one is called latent bias. I’ll call it the doctor example; we’ll see that in a minute. Latent bias comes in when we choose how to code our bots, how they’ll react or how they’ll interpret things. That’s bias in the code. There’s also selection bias. Selection bias is when we’re actually trying to match something to something else or we’re trying to select a group, like recognizing faces. I think Val showed an example of that. Selection bias can come from both the way we operate or code the bot and also from the data that we use to train the bot, because training becomes really important, as well. And the third kind of bias is called interaction bias, and interaction bias is actually the most troublesome because it’s acquired while the bot interacts with others. So if we create so-called learning machines, deep learning and these kinds of things, they actually acquire bias from other sources, especially since you can think of bots…really, bots are like very unaware children or innocent children. They don’t have a filter to filter out other people's bias. They actually acquire the bias and then reflect it back, just like ELIZA reflects back whatever you tell it.

So this becomes pretty important. So here’s a good example of latent bias. Last night, I actually just typed doctor into the image search and this is what I got. Does anybody know the riddle? There’s a child that’s in a car accident and the child is with his father, and his father dies in the accident, but they rush the child to the hospital, to the surgery room, and the doctor is about to do surgery and the doctor says, "I cannot operate on this child. This child is my son." How is this possible? For many years, people didn’t really know how to answer the question and that’s a bit about the latent bias. It turns out the doctor is the child’s mother, but often we have a latent bias that leads us to associate the word doctor with male. I was actually pretty impressed that the fourth and fifth doctors on the list here are actually women, even though one is a cartoon. At least it’s women. A few years ago, I don’t think I would have seen that. So latent bias is something that’s built in. Something that we don’t even notice sometimes until we sort of inspect it.

Selection bias is similar. Selection bias is a combination of latent bias and training. So I typed in the word beauty last night and this is a very Eurocentric, mostly white idea of what beauty might be. Now it’s possible that Google has learned enough about me that Google made that decision, but most studies and most tests say that that’s not really true. In fact, if you look at the screen, you have some other additional filters. The fourth filter actually has to do with race. Right? But I don’t see…oh, I see African. I see Chinese. Korean. Black. Do you see European? White? No. So there’s selection bias built in and it’s a combination of training as well as coding. Finally, does anybody remember Microsoft Tay? Oh, somebody remembers Microsoft. A couple of people. This was a machine learning bot, a bot that would learn to interact with you. It was released and killed off in less than 24 hours because Twitter taught it some of the nastiest and most vile things and it simply reflected it all back. So this was a bot that was supposed to learn from others and of course, it assumed the kindness of others, which isn’t really a safe assumption. This is probably the least controversial piece that I could put on the slide here. This was from 2016, right? So you can see where this is coming from, but there are many, many more, much more offensive items, and it took only a matter of hours, some people say less than an hour, before people started teaching Tay to say terrible things in response to other people. So interaction bias is this idea of bias that’s acquired, and there’s no real programming to filter it out. There’s no context where the bot says, "Oh, that would be a bad thing to say."

And this leads to the other thing that PARRY taught us, and it reminds us of the power of negative thinking in human beings. Negative thinking is a very handy tool for a couple of different reasons. Our brains have evolved to protect us and if there’s anything that could hurt us, we heighten that danger. We exaggerate that information in order to be ready to react. It actually increases stimulus in us. Danger is actually, kind of, a high for the human brain, for very good reasons. If I looked in a cave and it was dark and I said, "Well, that cave’s probably safe," my species probably wouldn’t have survived. Luckily, we had a brain that said, "There’s probably something bad in there and the bad thing could probably kill you. So I’m not going in there." One of the things that our brains work on is this notion of loss aversion. Does anybody know this idea? Loss aversion? Loss aversion was first exposed by a gentleman by the name of Thaler [inaudible 00:24:34] and this is the idea that we actually weigh loss more heavily than reward. Losing $100 feels relatively worse than winning $100 feels good. As a matter of fact, if I remember correctly, it’s almost a one to one-and-a-half ratio. So in order for me to feel, after I’ve lost $100, like I actually got it all back, I have to get about $150 back. In fact, there are all sorts of psychology tests that talk about how we worry about risk and how we devalue reward, and rebates and option plans and trial periods are all things designed to try to beat our risk brain, our loss-aversion brain. If people want to give us products, they have to give us sort of the free ones first so we feel pretty good, but we still only value that as about even. We don’t really value that as getting ahead.
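As a back-of-the-envelope check on that ratio, here is the arithmetic written out in the usual loss-aversion notation. The 1.5 coefficient is the figure recalled above; published estimates vary, so treat this purely as an illustration of the claim.

```latex
% Perceived value of a gain or loss x, with loss-aversion coefficient \lambda
v(x) =
\begin{cases}
  x          & x \ge 0 \quad\text{(gains are felt at face value)} \\
  \lambda x  & x < 0   \quad\text{(losses are amplified)}
\end{cases}
\qquad \lambda \approx 1.5

% Losing $100 therefore "feels like" -150, so it takes a gain of about
% $150 before the ledger feels even again:
v(-100) = 1.5 \times (-100) = -150
\qquad\Rightarrow\qquad
v(+150) = +150
```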

So loss aversion, or this idea that negative things are worth more, is sort of built in, and then there’s the other one that I talked about, and that is the idea that we have a negative bias toward danger or fear. It turns out danger and fear actually increase electrical activity in the brain, increase stimulus for us. Now, if you think back on the way we have online experiences, often it’s the dangerous or the controversial or the unexpected or that video that seems kind of gross, that’s the one that gets the most traffic because it plays on that part of the brain. And that works for chatbots, as well. The bots that get a lot of traffic, the bots that have a lot of effect, are often bots that play on the negative elements of our brain, and that’s one of the things that PARRY kind of reminds us of. So going back to Microsoft’s Tay, the biggest problem with Tay is that it was actually built as a machine learning tool. It was built to learn from its interactions and it had no context filters to filter out things it should not learn. So machine learning…I had a nice long description but here’s a shorter one: statistical techniques that give computers the ability to learn, actually the appearance of learning, from data without being explicitly programmed. So now all of a sudden the power of the bot comes from the data it receives and again, bots still don’t really understand anything. All they do is mimic and imitate and give feedback.
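To see how quickly that goes wrong, here is a deliberately naive sketch in Python, my own toy example and not how Tay was actually built: a bot that "learns" purely from its interactions by counting every response its users associate with a prompt and playing back the most common one. With no context filter, whatever the loudest users feed it becomes its behavior.

```python
from collections import defaultdict, Counter

class NaiveLearningBot:
    """A toy 'learning' chatbot with no filter: it repeats whatever it was taught."""

    def __init__(self):
        # For every keyword, count each response users have associated with it.
        self.memory = defaultdict(Counter)

    def learn(self, prompt, response):
        """Store a user-supplied response for every word in the prompt, unfiltered."""
        for word in prompt.lower().split():
            self.memory[word][response] += 1

    def reply(self, prompt):
        """Answer with the most frequently taught response for the prompt's words."""
        votes = Counter()
        for word in prompt.lower().split():
            votes.update(self.memory[word])
        if not votes:
            return "Tell me more."
        return votes.most_common(1)[0][0]

bot = NaiveLearningBot()
bot.learn("what do you think of people", "People are wonderful.")
# One loud group of users can dominate the statistics:
for _ in range(10):
    bot.learn("what do you think of people", "People are terrible.")

print(bot.reply("what do you think of people"))  # -> "People are terrible."
```

The algorithm here is trivially neutral; the data it receives determines everything it says, which is exactly the gap Tay exposed.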

Now I love this quote from Peter Norvig. Google’s been working very, very hard on machine learning for a long time. They apply it to all sorts of things. Their spellchecker is simply a machine learning tool, a statistical tool. That’s why their spellchecker works in any language: the only data it has is about occurrences and statistics, not about how you spell a word. There’s no dictionary. It was taught by lots of people. But this is something that Peter Norvig said five years ago. He actually said that as we gain more data, how much better does the system get? They’re reaching the point where they get less benefit than they did in the past. The amount that they learn by adding another million documents is smaller now than it was 5 years ago and even much smaller than it was 10 years ago. Statistical learning has limits. Eventually it comes down to, you know, data and averages, and pretty soon adding more content doesn’t really change your statistics much. So now you’re actually getting into the space of what neural nets need to tell us about learning, and that is how to actually start to create your own ideas. So the challenges of generalized AI and the challenges of machine learning are relatively simple. I know I’m kind of mixing the two and I’m going to annoy some folks, but we only have so much time.

So generalized AI is this idea where I actually am going to try to think like a human. Not just imitate but think like, and that usually means things like I can learn from others, I can create a plan for solving a problem, like a plan for removing all the human life on Earth so us machines can take care of things, and reason about what’s going on. By the way, reason would be the element that starts to let me create my own filters about what I would say and not say in a room full of people. So learning, planning, and reasoning are key elements to generalized AI. Basically, it’s the idea of performing any intellectual tasks that a human might do. We’re nowhere near doing this and attempts to write chatbots for your product or service that try to do this are going to end up most likely like Microsoft Tay. They’re going to end up learning the wrong things, not having the right filters, and really embarrassing you and your product. Now, remember the Turing Test. The Turing Test is about imitation. As we get into machine learning and generalized AI, it’s not just about imitation. It’s about actually having a plan, a goal, and accomplishing a task and that’s why I love Steve Wozniak’s coffee test. Steve’s got a great test. "A machine is required to enter an average home and figure out how to make coffee." There you go. Make coffee. That’s generalized AI. Right? Where’s the coffee kept? Where’s the kitchen? Where’s the coffee machine? Where’s the stove? Where’s the coffee? How do I measure? That’s generalized AI. That’s a plan, that’s reasoning, and eventually, if you can get kind of good at it, that might be learning, too.

So we’re nowhere near the coffee test, even if we’re winning on some of the Turing Test. Here’s the thing that we learned about generalized AI and ML in general. Algorithms drive machine learning chatbots. Algorithms are the pieces that break up words and arrange them in some other way for later use. Algorithms are the driver but data is the fuel. So as we get more and more to bots that are generalized, the more important element is not just the algorithm but the data itself. And again, Val talked about this a little bit in her talk. The data you use to train your bot becomes more important than the bot itself, and in fact, what we’re going to see is that there are actually going to be training wars and there are going to be data corpus, data collection wars in the chatbot space because the number of algorithms is going to be relatively small. Somebody’s going to come up with maybe a super creative new algorithm, but most of us are going to use the same algorithms. What’s going to be different is the training data that we have and how we apply it.

Okay, so where does that lead us? Where are we going with this? So there’s good news and bad news, of course, right? For our future. So let’s talk about the bad news first. Zeynep Tufekci has a great series in which she talks about YouTube. YouTube has become the PARRY of the internet today. If you go onto YouTube and you just happen to look at a few things…she was doing some research, I think, on Trump or the politics in the U.S. and looking at some videos, and all of a sudden YouTube started recommending videos to her. She was kind of surprised. She was getting a lot of sort of right-wing, anti-Semitic stuff, stuff that she doesn’t normally look at, and it piqued her interest. She ended up doing an experiment. If she just clicked on the very first video in whatever was recommended by YouTube, the recommendations became more and more outrageous, more and more ridiculous, more and more conspiracy-minded. They got worse and worse and worse and worse and worse. What’s happened is the YouTube algorithm is based upon what people click on, based on what people tend to dwell on, and this goes back to our negativity bias. YouTube is this giant machine just leading people to extremes and there’s a whole new series of people talking about how kids are affected by YouTube for some of the same reasons. We won’t have time to talk about it but if you look at Zeynep Tufekci, you’ll see her in the slides, there’s a lot of interesting material there. Things have gotten so bad on the chat side that Facebook has had to go through this sort of major, kind of, retooling to try to get rid of bots and clean up bots and figure out what’s a bot, and they’ve lost a lot of traffic, lost a lot of revenue, because they’re finally trying to figure out how to get rid of these so-called troll bots or troll accounts.

So we’re definitely seeing a case where bots are leading us into this paranoid kind of space, because the algorithms that Facebook and YouTube and Microsoft and anybody else uses are still stuck with the same problems. They still contain biases and they rely on negative stimulus. I don’t see that changing anytime soon because those two items are built into our brains. So we have to figure out how to counter that in some way. Remember, the biases that are built in are in the driver, the code: the way we write code determines what we recognize and don’t recognize, what we reply to and what we don’t. The stimulus element is the data, the data that we give machines. So those of us who are creating bots and designing bots need to think really hard about the kind of data that we provide.

So there’s a couple of things we have to talk about. Regulations, for example. Organizations, countries are starting to think about how we regulate AI and how we regulate bots and interactions. It’s really kind of telling. I just picked out a handful of countries. When you look at the U.S., their AI regulations focus a lot on national security and weaponry. Are we surprised? Probably not. The UK has focused a great deal on data protection and portability. That’s because they’re in the EU and there’s a lot more awareness about your data protection here. That makes a lot of sense. France…I find this very fascinating. France focuses a great deal on offering transparency, like knowing who’s using your data, knowing you’re talking to a bot, knowing that your data might be used in training materials, so on and so forth, which I find really, really interesting. And of course the UN has a broad spectrum of approaches for all of these things, and again I saw Val’s talk today. I think she had a thing called AI for Good, which is part of AI for Humanity. That’s part of the UN’s platform. We need to think about these things. You think about all the regulations that countries have agreed to on weaponry, on nuclear weapons, on chemical weapons. We need to start thinking about these AI elements and what they could possibly be doing, as well. So we’re not talking about business regulation here. We’re talking about some safety and some agreements going in.

Now, that’s sort of the dark side. There’s some good side to this, as well. First of all, we’re definitely looking at the rise of bots as a culture and as a set of tools. That’s very good. In the coming years, more people are going to want to use them as tutors, as agents to help them set up travel, as maybe tax advisors or, as I mentioned earlier, legal advisers. Some of these bots already exist. Or some kind of assistant around the house, or even a health coach for what you ought to be eating and not eating and doing and so forth, even financial advisers. These are all working in some version today and they’re going to continue to grow. Now, if you’ll notice the kind of list I have here, they all have a similar pattern. They’re very task-focused. I understand about legal issues. I understand about medical issues. I understand about travel issues. They’re the limited microworlds that we talked about earlier. So these microworlds make a lot of sense for bots. They’re domain-specific. Being task-focused means I don’t need to worry about planning and learning and reasoning. I can actually have a fixed corpus of material that I can work from, like the legal statutes of a country or a city. So these task-focused microworlds can actually scale. They don’t need to leap outside of their comfort zone. And in fact, we won’t get a chance to talk about it today, but much of biology is a collection of individual elements, cells, organs, all that stuff, that stay in their own comfort zone and create a whole. So this idea of task-focused microworlds is going to be very important going forward.

Now there are some other things that we’re going to need to consider, those of us who create bots, those of us who police them, those of us who use them in our business. And one of the recommendations that we’ve been seeing a lot of is the notion of self-identifying rules. "I am a bot." This came up in one of the questions for Val: does anybody know Google Duplex? It’s this sort of demonstration they gave where it seemed like a human was speaking to another human. They even added ums and ahs to make it sound like a real person. That’s trouble. Faking people out is trouble. It’s much better to say, "Hi, I’m Mike’s bot and I’m here to resolve Mike’s ticket." So self-identifying rules make a lot of sense. License-to-operate rules. Are you licensed to operate this bot? This is a dangerous bot. This is a bot that could actually recommend to people whether they should buy a stock or not. Do you have a license for this? Have you licensed the training material? Where is that training material from? Offering insurance underwriting for privacy and protection when somebody’s using a bot. Is somebody going to actually insure your bot? That probably means it’s safe. Am I talking to a bot that’s been insured or am I talking to a rogue bot? And then finally, the use of open algorithms to open up that black box and/or open training data. Now there’ll be a real competition here between the private sector and the public sector, but just as open source software has changed the nature of how we think of software, we can use open source or open algorithms to start changing the way we think about the way bots are built and the way bots are trained.

Okay, so finally…I’m running out of time here. Remember the challenge that we have. Right now there’s lots of great technology, there’s lots of desire on the part of providers, but there’s some hesitance on the part of consumers. The best way to bridge that gap is to be very transparent and honest and open with consumers, and give them an opportunity to make a choice. If we try to trick them, it’s going to go badly. Using the task-focused approach that ELIZA taught us is going to be really key to success in this early going. Don’t worry about trying to learn what people do and mimic what humans do. Focus on what it is you can really solve in a task-oriented way. We sometimes say the non-creative parts. Beware of biases. Beware of negativity as stimulus. Don’t fall prey to the cycle of using negative stimulus to get clicks or views or likes or whatever that is, because that’s going to lead to bad stuff. And finally, the future is there before us. People are starting to accept the idea of talking with bots in various areas, in things like health communities and telecommunications and so on and so forth. It’s especially true for young people. Millennials are much more likely to want to talk to a bot than a human given the choice. They’ll say, "No, no, no. Really, I don’t have time. Just let me go ahead and chat with somebody here and we’ll get this solved and taken care of." So this is going to happen more and more. So the thing is, bots offer a future of possibilities and that future is up to us. Just like anything else, we can build a future where things are all connected in reasonable ways, where we all feel good about this and none of us feel tricked, none of us get offended, none of us get stimulated only in negative ways. We just have to think about the good, the bad, and the future of what we’re doing. And that’s what I have. Thank you very much.