PRE-ROLL: Whether you're working on a personal project or managing enterprise infrastructure, you deserve simple, affordable, and accessible cloud computing solutions that allow you to take your project to the next level. Simplify your cloud infrastructure with Linode's Linux virtual machines and develop, deploy, and scale your modern applications faster and easier. Get started on Linode today with $100 in free credit for listeners of Greater Than Code. You can find all the details at linode.com/greaterthancode. Linode has 11 global data centers and provides 24/7/365 human support with no tiers or hand-offs regardless of your plan size. In addition to shared and dedicated compute instances, you can use your $100 in credit on S3-compatible object storage, Managed Kubernetes, and more. Visit linode.com/greaterthancode and click on the "Create Free Account" button to get started.

JOHN: Welcome to Greater Than Code, Episode 217. I'm John Sawers and I'm here with Jamey Hampton.

JAMEY: Thanks, John, and I'm excited to introduce our latest panelist, Casey Watts.

CASEY: Hi, I'm Casey. Good to see you all. Next up on our introduction list is Laura Major. Laura Major is the CTO of Motional, where she leads the engineering team developing autonomous vehicles. She was previously the CTO at Aria Insights, a tethered drone start-up, and a Division Leader at Draper Laboratory, where she led the development of machine intelligence solutions. She has an MS from MIT and a BS from Georgia Tech, and she was recognized by the Society of Women Engineers as an Emerging Leader. Recently, Laura published a book called "What to Expect When You're Expecting Robots." So cool to have you on the show today, Laura.

LAURA: Thank you, Casey. It's great to be here.

CASEY: We like to start every episode with the same question. This one is—no surprise probably—what is your superpower and how did you acquire it?

LAURA: Yeah, that's a great question and a hard one to answer, but if I have to pick one superpower, I would say I think it's problem-solving. I've developed a way to take the chaos of any situation and figure out a path to get through it. So whether that's a team that's having a challenge and we have to redesign the team, or it's a program where we're going to miss a milestone or where we're struggling to develop a capability we need, or whether it's technical, where we're trying to come up with a solution to a new technical challenge, I have a way of seeing through the noise and finding a path to get to a solution. And how did I acquire it? That's another question. I think it somewhat comes naturally to me. I tend to have strong analytical skills, and solving problems has always, I guess, been something I've enjoyed. But I think throughout my career, it's been really honed by experiencing so many complex situations, and I think in the end, building any product, especially a complex product like an autonomous vehicle, is never easy. There are lots of problems that come up along the way. It's never a direct path from any plan you make; the day you implement it, you have to change it. So I think living through so many of those experiences, facing so many challenges, and having to find a way to fight through those challenges and work to get to a solution is really what's allowed me to hone those skills, along with seeing things through, especially building a lot of first-of-a-kind systems.
Being on the one side, where you think something is impossible, and then finding a way to get through it and realizing that you can solve the problem: I think there's been a lot of learning through experiencing that many times.

CASEY: That sounds like a hugely useful skill. I love working with problem-solvers. I'm sure we'll hear a ton of examples as we talk to you today.

LAURA: Great.

JAMEY: I would love to compliment you on the title, "What to Expect When You're Expecting Robots." I think that's just delightful, and I'm really excited to get some of your insight about robots, and I think that's what we're going to talk about today.

LAURA: Thank you.

JAMEY: You gave us a list of really insightful questions and topics that are all kind of themes about robots. I'm really excited to get into them.

LAURA: Thank you. Yeah, the title definitely is catchy. I think it sticks with you, and we came up with it after writing the book. It was when we were into the book that we realized that that title really captured what we were trying to share, which was really around what's next and where's this all going, what's this leading to, and more of the reality of what can be expected and what cannot be expected. So I think in the end, the title does capture the theme of the book.

JAMEY: I hope you'll forgive me for asking a big question, but where is it going, in your opinion?

LAURA: Yeah, no, it's a great question and something Julie and I worked really hard on. We thought very deeply about that question. So I think the big takeaway from what we cover in the book is that it's probably not going exactly where people think it's going. I think we all had the Jetsons vision of robots and we expected them to be here, and we expected them to be looking and acting like people. The reality is that it's different than that: there are a lot of problems that robots can solve and a lot of ways that robots can help us, but they need to be designed with that purpose in mind. They need to be designed to help us in those ways where people maybe are not as good at a task. Driving, let's say, especially when you're tired or have had some drinks. So when there's a task that a person's not good at, that's, again, where you can lean on robotic capability to help complement the person. But it's less about the robot behaving like a person and more about the robot doing a very good job at performing that task in a repeatable way. That's where robots have really excelled. We cover a lot of different applications. If you look at industrial applications, let's say, in a factory, if there's a very straightforward task that's repeatable, a robot is really good at that. If there's a task that requires a little more judgment or creativity, that's where people excel. Robots aren't going to get there anytime soon. I think that's one of the themes of the book: the expectations maybe are a little off from what we've thought in the past, but there's a lot of potential in robots in their forms today, in the capability they provide today, and in the ways that they can help everyday people. That's what we really wanted to focus on in the book: how can we accelerate designing robots in this way so that they can, again, help society in a safe way.

CASEY: I'm hearing a theme here: the robots we imagined we'd have would be more human-like, but I think the robots we have today actually are personified, at least.

LAURA: Oh yes, [chuckles] that is true. We inevitably will personify.
It's funny, you even personify other electronics that we interact with a lot, but certainly robots. If you have a Roomba at home, it's probably not that uncommon that you've given it a name and maybe view it as pet-like. It wasn't designed to look like a person. I think that's one of the key things: if you look at even just the physical design of it, it's designed to do this task very well, to be able to glide through the room efficiently and slide past objects very well, to clean the floor underneath a chair or a table. It's not designed to look like a maid or like some of the past visions of what a vacuum robot might look like. It looks very different than that.

JAMEY: I think it's a very human thing to do, to project emotions onto other things. I also think that people have a really, I don't want to say a low bar, because I agree with this, but we think a lot of things are cute. Like, Roombas are cute. [chuckles]

LAURA: Yes, that is true, and I think that's exactly the point: we can design the robots to do their tasks really well and, again, match the form and the function of the robot to the task that they're going to do, and it can still be delightful to people. It doesn't have to look like a person to delight the users who are going to encounter the robot, and certainly, those are important things. You're going to trust the system more if you enjoy using it, and there are other factors as well. We need the people who come across these robots, whether it's a robot they own or a robot that they pass on the sidewalk, to understand what the robot's doing and to be able to predict if the robot's going to get in their way, or if the robot is going to complete its task effectively. So there certainly are reasons that we need the robot to behave in ways that people will understand. But again, it's really changing the fundamental design premise: focusing more on what's the core task that the robots are doing and how to design them from the beginning to do that task very well.

JAMEY: Do you think there are also downsides to the way that people put emotions onto machines in this way?

LAURA: Yes. That's played out over time if you look in aviation and other industrial domains. People begin to over-rely on the automation or the robot in times when maybe they shouldn't. We talk about some different accidents that have occurred in the book, in aviation, where this became a problem, where the human, in that case the pilot, lost cognitive control of the situation because they were really relying on the autonomous system and not more deeply engaging with the task themselves. So that's exactly right: the interaction that people have with these systems has to be carefully designed in a way that allows the person to maintain the right level of understanding and awareness of what the robot's doing, what the task is, and how and if they may need to intervene, or if they may need to change their behavior or actions to accommodate the robot's tasks.

JOHN: Yeah. I've heard stories about how you tend to think, "Well, this robotic system, it was obviously designed by experts and it probably knows better than me in situation X," so you leave it to run things rather than thinking critically about what it's doing and what it's not doing.
I remember reading something a while back about how, in situations like these, the robotic system can communicate its level of certainty about what it's doing, like, "I'm doing this, but I'm only 60% certain that this is the right decision," and that way the human can come in and say, "Oh, you're not 100% certain about this. Let me think about it more deeply and decide if I need to override." There are ways of communicating that are not, "I'm perfect AI, I'm doing everything correctly."

LAURA: That's exactly right, and I think it ties back to the question of what to expect out of these robots, and it's that they aren't going to be perfect. There is no human-built system that has ever been perfect. It's important that when we design these systems and when we use these systems, we understand that there are going to be errors or failures. In designing the system, we cover the Swiss cheese model and how we, as designers and engineers, can make sure that there are levels of protection so that we catch problems and fail safe gracefully when there is one. So your example, John, is a great one. If a robot's only 60% sure of an answer, then the robot itself needs to factor that into its decision-making, and if there's a way to communicate that out to a person, to say, "Hey, can you help me here?" Let's say a sidewalk delivery robot comes across an obstacle that it can't quite make sense of. Let's say it's a fallen branch, and the robot is capable of rolling over a fallen branch, but it's not sure: is it a fallen branch, or is it a child that is playing a game by laying down in front of the robot? It could stop and ask for help, whether that's from somebody locally, saying, "Hey, is anyone there? Anyone laying in front of me?" or whether there's a remote operator who can log in remotely and identify that nope, that's a branch, that's not a child. Go ahead and roll right over it. This is the whole idea of the human-robot partnership: robots won't be perfect, and they never will be, but they can ask for help, they need to fail safely, and they can get help from people.

JOHN: I think that feeds into some of the topics that you had lined up for this, which is how people treat robots: how likely are people to help a robot that is standing there on the street saying, "Hey, I can't quite figure this out. Can you help me?" and, in general, what are people going to do in response to a robot doing that?

LAURA: That's right. Yeah, you can't expect much out of the people who are going to encounter these robots. So it can't become a nuisance. We saw that in San Francisco when some of the sidewalk delivery robots really started getting deployed at scale. Even in a city that's as forward-leaning and full of early adopters of technology as San Francisco, people really quickly responded negatively: these delivery robots became a nuisance, were getting in their way, and posed a risk to, especially, some of the vulnerable members of society, and the city put restrictions around that. So it can't be too invasive, it can't become a nuisance, and it has to be carefully designed: how is this partnership going to work in a way that's very fluid and very natural, and communicate in a way that is intuitive for the people who need to step in and help? I think in the book, we talk about how when we look at robots of the past or robots of the present, when we look at industrial applications, those robots are controlled by experts.
So people who not only know how to do the task without the robot, but also who have a tremendous amount of training on the robot itself. They understand exactly how the robot works, and they're therefore very knowledgeable and very much able to speak the same language as the robot. As we see robots cross over into the consumer space, that's no longer the case. None of us want to have to go to a class or read a user's manual on how the robot works. We want it, just as our smartphones have evolved, to just work. We don't have to train on them. I guess you can go to the Genius Bar if you want to get extra help, but mostly we want to be able to pick it up and use it and have it be very natural, and we need robots to be moving in that same direction.

JOHN: Speaking of how people treat robots, there are so many varying examples of human behavior towards robots that perhaps weren't expected. In some cases, they've been helpful and they've been treated well by the communities where they're living, and other times, not. I remember there was a hitchhiking robot a couple of years ago that was trying to get across the country, and then it was found beaten up on the side of the road later on. So I realize there's definitely some trickiness around the level of empathy people have for robots and the level of cruelty that they can deploy against them when they don't have that empathy, and I'm curious about what your thoughts are there.

LAURA: Yes. I think the more that the robots are designed to help, and the more that they are fulfilling a need, the more likely people will be to be empathetic and to treat the robot with kindness, or at least to not actively try to damage it. So I think that's an important part of the solution: again, back to designing robots to perform tasks where they're needed, where there's a job to be done that the person isn't as good at. Some of it's an evolution. We see robots showing up in unexpected places right now, and maybe they're not quite as good at the job that they're starting out with, but there's a learning process there. A security robot, for example: we see them falling over into fountains or having other mishaps. The robots that have shown up in grocery stores, we're all perplexed, wondering, what is this robot really doing? It's causing a great nuisance during COVID, as we try to socially distance already, and then you've got this robot navigating through the aisles. I think part of it is having a robot that's clearly filling a need, and then I think the other part of it is education. As people get more experience with robots and more knowledge of how they work, especially these new consumer robots, I think their empathy will grow, and their understanding and trust will grow. They're new social entities; that's really what we call them in the book. It's something that will take time for us to adapt to, figuring out how to integrate these new social entities.

JAMEY: That's really interesting to me, to think about how society as a whole is going to decide what's socially acceptable about this brand-new thing. Like, do you tell Alexa, "Thank you," after she gives you information, or not? That's the thing people are having arguments about right now.

LAURA: Yeah.

JAMEY: But I feel like in the future, there will just be a standard about what's socially acceptable. [chuckles]

LAURA: Yeah. I have an Alexa, and I have 4-year-old and 8-year-old daughters, and they watch us, my husband and I, give commands without any of the typical politeness you would show a human being, all the time.
We're yelling out commands to Alexa, "Play this song! Turn up the volume! Stop playing!" and then we see them model that behavior when they talk to Alexa, and we wonder how that is going to affect their social skills down the road. So certainly, it's a big open question for us to figure out as a society. I think one of the key things we talk about in the book is that a big part of the empathy and helping society to accept these robots is going to be developing these robots in a way that allows them to follow social norms. The Alexa example is one, but I think as they enter our physical space, this becomes even more important. So if you try to make your way through a crowded aisle at a grocery store, you would say, "Excuse me," or your behavior would change depending on who you are navigating around. If it were an elderly person with a cane, you would give them more space. If it were a parent holding a screaming toddler, you might turn and go down a different aisle to not create more chaos for that parent. So we're going to need robots to do similar things: to follow social norms, to have some understanding of the people that they're operating around, and to adapt their behavior based on who they're interacting with.

JOHN: Yeah, I think one interesting thing that I don't think is an issue right now, but inevitably will be: there will be a certain class of autonomous robots that are doing their own thing, but there are also likely to be some percentage of remotely controlled robots where there's a human on the other end, and as people, we probably want to make a distinction between how we treat one and how we treat the other. But it's tricky because you don't always know. Like, if you're just saying, "Oh, you stupid robot, get the hell out of my way," maybe there's a person on the other end being the robot out in the world and you're being rude to them. I remember a story about how they had gotten people with cerebral palsy, or some other condition where they weren't really mobile, to remotely operate robots and be servers in a restaurant. So the robots would do it, but these people would be the ones doing the interaction. It brings up so many weird issues about what the standards are for treating these robots when you don't necessarily know if there's a human behind them or not.

LAURA: Yeah, that's an interesting dimension, and I think certainly, remotely controlling robots, or having some ability for a person to remotely assist the robot, is going to be a part of the rollout of these robots for the foreseeable future. Like you said, there will always be a human in some way involved in that robot's activity. It's not just a machine. That's why we call it a social entity, right? It is a part of a bigger system and it will interact with people. It has been designed by people, and it's likely in some way being commanded, whether at a high level or a detailed level, by people. So it's not just a box, a rolling box; it does have a role, and certainly we will have to figure out what's the proper way for people to interact with these robots, especially when they are in our way or make mistakes, because they will. Like you said, if people know there's a person behind it, they might be more patient with some of those mistakes.

CASEY: I like that term, social entity, a lot. I think that's really interesting. Some robots might be social entities already and we don't realize it, and some might not be at all. I don't think of a dishwasher as being very personable.

LAURA: That's right.
We are surrounded by autonomous systems more than we realize, which in some ways you could call robots. But I think as they get up and move and have to co-exist in our physical space, that's where these problems really come to a head, and that's where the social norms come to the forefront, both in terms of the need to engage in a way that's more understandable to other people and in terms of the safety of those systems. Because if they are able to move, usually these systems are pretty powerful and heavy, and so we have to be able to manage that movement and make sure that it's done in a safe way.

JOHN: Yeah, it strikes me that Silicon Valley startups are not the sort of entities that you would want creating 1,000-pound creatures wandering the sidewalks, because they're focused on quarterly deliverables rather than product safety, and there's definitely going to need to be some sort of government regulation in order to force that safety into the industry, because I don't think it's going to come there naturally.

LAURA: Yeah. One thing that we say at Motional is that safety has to transcend competition. When you're dealing with a strong, powerful machine like a robot or an autonomous vehicle, safety has to come first. You're right that business, by nature, is designed to go fast and try to capture returns as quickly as possible. That might not always be the right path when you're dealing with a safety-critical system, and so there have to be checks and balances put into place, there have to be standards, there has to be regulation, and I think we're just starting to see the beginnings of that and of how we regulate the deployment of these systems.

JOHN: Yeah. Hearkening back to something you said earlier, talking about how it helps to know what the robot is doing, like how it's helping, when you encounter it, to contextualize what's going on and why it's useful and things like that. It strikes me that the communication of that purpose, through not necessarily written or verbal words, is going to be an important part of the design of the robot, so it's obvious why it's there and what it's doing and how helpful it is.

LAURA: That's right. Yeah, it's been shown in other domains that there are three things a robotic system needs to be able to communicate, or that need to be apparent to a person, for a person to be able to effectively interact with that system. The first is that it needs to be observable: you need to be able to observe its goals, its actions, and its intent. The second is predictable: you need to be able to predict what it's going to do next. The third is directable: if you need to give the robot some type of command, you need to be able to do that in a way that you understand. I think those are three things that are not easy, but easier, to design in when you have an expert user and a very complex control panel that they're using. But when we think about a consumer robot that's designed with industrial design in mind, trying to delight users while also functioning well, these three things become a lot more challenging to support. How do you do that there? How do you support developing the right mental model? So I think that's some of the hard work that the tech industry has to figure out, and we've seen it in other industries. I think back to before tablets and smartphones existed, and the whole generation of direct manipulation models that were developed.
Now we all pinch and zoom and swipe with ease; we know how to interact with these direct manipulation devices. So we need a similar type of new language in how we communicate with and understand robots.

JAMEY: I hope this question isn't too silly, but I wonder what your take is on how media affects the way that people perceive robots, because I think one thing we haven't brought up is the stereotype of, oh, our robotic overlords after they take over, and the only reason people feel that way is because they've seen it in science fiction. So I wonder what your opinion is on that relationship, and how we could maybe use it to guide the way that people interact with robots in the future?

LAURA: Yeah. That's a good question, and I think maybe part of the motivation for the title as well. I think you're right: science fiction creates one set of expectations that we think are pretty far from where we are today, and so we wanted to ground things and provide a little more insight into where robotics is today, what robots can help us with, and how we need to integrate them in. Certainly, media is great for inspiring us and unlocking creative ideas, thinking in an unconstrained way. But as we evolve to have robots more integrated with our everyday lives, there are constraints and factors there that are easy, I think, in science fiction not to worry about. So while science fiction can certainly trigger a lot of good ideas, a lot of inspiration, the reality is pretty different from what we see in the movies.

JOHN: Yeah. I think storytelling in general tends to lean towards certain types of things. There's never going to be a blockbuster movie where robots are introduced and everything is fine.

LAURA: That's right.

JOHN: We tend to see the stories on the extreme ends of things. Although, it does strike me that the movie Robot & Frank was a pretty good example of an unconventional take on bringing a robot into a situation.

LAURA: Yes. You certainly see a variety; some are closer to reality than others.

CASEY: So I'm wondering, if someone wants to update the way they think about robots—I mean, one way they could update it easily is to read a book, that would be great.

LAURA: Yes.

CASEY: What's another action, or a thing they could look up, a movie that's good? How would you suggest someone update themselves?

LAURA: Yeah, so I think finding ways to experience robotics today is a great way. This is something we're passionate about, and at Motional, we made our autonomous vehicle available to the public through the Lyft app in Las Vegas. So you can go to Vegas and hail a ride, and you choose: do you want an autonomous ride or a human-driven ride? That's one example, and I know there are many others, but I would say finding ways to experience robots today is a great way to update and to see it firsthand. I think our book is a great way to read about it and to think about it, but getting that direct exposure will also shed some light on things that you might not expect. I think, overall, what we've seen is that there's been tremendously positive feedback from people who have ridden in our autonomous cars. I think people think it's science fiction to imagine an autonomous car, and then they get in and take a ride. Today, we have safety operators monitoring the system in the car with you. But people are very surprised by how smooth of a ride it is, how safe it is, how it can handle so many different scenarios on the road.
So getting that firsthand experience, I think, is really enlightening, and again, we've received really positive feedback from people who've had a chance to do that.

CASEY: That's awesome. I'd love to try one of those. I wish we had them over here in DC already.

LAURA: I guess we'll have to wait until after COVID, when people can travel again.

CASEY: Right.

JOHN: So one of the topics you had on your list was the downsides of creating robots as if they're human. I'm curious if you could go into that a little bit.

LAURA: So I think robots have their own strengths and limitations. It's long been studied in other applications that their strengths and their limitations are different from people's, and so there are certain things people are good at and certain things that robots are good at. I mentioned earlier, robots are really good at repeating a step very reliably over and over and over and over again; people are terrible at that. There's something called vigilance decrement: if you have somebody monitor and look for some rare event or situation to appear, we're really bad at that, and the more time that goes on, the worse we get. A machine, if you give it a task, will just crank it out and, again, repeat that task very accurately; machines are very good at calculating, at making computations. Whereas for a person, it might require you to stop and think, and you might not be able to do it as quickly or as robustly. But on the other hand, there are things people are really good at that we have not yet figured out how to make a robot good at, and we may never, or certainly not anytime soon. So things that require creativity, things that require a certain element of judgment, where you have to take your experiences from one environment or domain and use that to make a judgment about a different problem. Computers and robots aren't very good at that yet. It's important that when we interact with robots and when we design robots, people realize that they're different than a person, that they bring a different set of strengths, and that they have limits. I think in some ways, if you design a robot to look too much like a person, then it can be confusing and misleading to the people who have to interact with that system. So it's less important that it looks like a person and more important that it looks like what it's supposed to do. That will help to develop and build the right mental models in the people who encounter these robots, because they will have a better direct understanding of what the robot is doing.

JOHN: I'm curious about how that could get messed around with by manufacturers. For example, if they find that there's a robot that is getting pushed around or kicked or whatever by the people in its environment, what if they add in, like, it cries out in pain when it notices it's being kicked, or any other social signaling to manipulate the human into treating it differently?

LAURA: Yeah, absolutely. I think one of the things we advocate for is building in communication that fits mental models that people already have. You take the example of someone kicking the robot: if you don't want them to do that, as the designer, you could have the robot respond in some way that a person will understand, like crying or falling over.
Another example that we look at in autonomous driving: if a pedestrian is waiting at a crosswalk and looking at your car, trying to decide if the car is going to stop or not, there are many ways you could try to communicate to the pedestrian, "I see you and I'm stopping for you." But a natural way that we all use today is, when you hear that sound, the screeching sound of brakes, that gives you a signal that the car is slowing down, and so then you might feel more comfort in crossing the road. I think there are many ways that we can build on mental models that people already have and use that in how the robot communicates to the person. And it goes the other way around, too. I think as we see robots start to canvass our neighborhoods, there are ways that you might communicate with another person. Let's say you see a robot that's about to roll over an important package that was just delivered at the end of your driveway. If it were somebody riding a bike, you might say, "Hey, hey, watch out for that box over there!" There might be ways like that, that are very natural for people to communicate to other people, where we can start to think about how you can design a robot to be able to understand and respond to those same types of commands. So I think it goes both ways: the robot can communicate in ways that are natural for people to understand and that build on their existing mental models, and the robot can also understand the communication approaches people use.

JOHN: There was a really interesting passage in a sci-fi book I read recently, called Change Agent, where a person was treating the AI assistant very politely, saying, "Please," and, "Thank you," and being very gracious to them, and one of the other characters said, "Oh, don't do that to them, because they will learn that you have empathy for them and use that to manipulate you."

LAURA: [chuckles] This goes back to what people are good at and what machines are good at. I don't think robots anytime soon are going to be that capable at manipulation, unless the people who are designing them designed them that way, to learn and adapt their behavior in that way. Back to this idea of a social entity: there's a bigger social system that these robots are a part of, and maybe it's useful for the robot to understand that you're empathizing with it, and maybe it can then depend on you as a user to respond in certain ways that might be helpful for the robot to complete its task or to effectively engage with you. So there could be a positive direction to that, but I think it will only do what it's been programmed to do, in the near future anyway.

JOHN: Yeah, that's a ways off for sure. [chuckles]

LAURA: Yes. [laughs]

JAMEY: This makes me think of one of the other topics that you provided us with, which is what we expect from robots, which we've talked about quite a lot, but then also, what they expect from us. I was really curious about that one, because my brain went, well, robots can't really expect anything from us in the way that I'm thinking about that word. So I'd really love to get your take on that.

LAURA: Yeah. So I think a big theme of our book is that the solutions that will let robots truly achieve the potential that they have to help us in our everyday lives are really at the intersection of technology and society. It's not the robots on their own. That really is only one half of the solution.
The other half is figuring out how these robots are going to interact with people, and so, what the robot can expect from you. Whether you know it or not, the designers of the robot are making some assumptions about the users, implicitly or explicitly, and we advocate that it should be explicit: there should be dedicated effort put into thinking about that side of the equation, the human side of the equation. How can the robot be designed in a way that's going to match human expectations? Again, back to the mental models, so that it behaves in a way that people will understand. So it comes back to that. Then, additionally, the other point that we're making is that for users, and not just users, but what we call bystanders—people who will come across these robots in their normal lives—there's going to have to be some evolution in adopting and accepting that these social entities will co-exist with people, and I think what robots can expect from people will change over time. Because as we get more knowledgeable about and more used to these systems, we will figure out ways to incorporate them into our lives in a way that's better for them, and because it's better for us, because it provides a benefit, we'll be motivated to accommodate these robots. We have an example of a grocery store of the future, where you might have many robots doing different things, whether it's stocking shelves or even repositioning shelves. If you look at some of the robots used today in distribution centers, the shelves themselves are robots, and so you could extend that into stores. In that case, there might need to be accommodations made in the store that the people understand. So you might want to have a crosswalk for robots that goes in and out of the stockroom, for example. Maybe once the robot comes out of the stockroom and is interacting in the broader store, it is more cautious about the people it's operating around, but at the crosswalk, the responsibility may be put on people to accommodate the robots more. That's the idea of what to expect on both sides: what can the robot expect from people, and what can people expect from the robot. Both sides of the equation are going to have to change somewhat to get this right.

JOHN: It's interesting to think about how, when we look at how robots are behaving in the world, we're going to inevitably end up doing some forensic analysis of how they performed in various situations. Having that explainable, inspectable state, where you can understand how it came to do the thing it did, or how it got into the state that it got in, I think is going to be important. A, for helping the public build its understanding of what the robot was doing, but also from a public safety perspective: it might end up being under the NTSB, where they're going to have to go in and look at how the thing ended up doing what it did, if that was incorrect.

LAURA: Yeah, certainly. There's a lot of research going into explainable AI right now. I think that's a really active area that needs a lot of attention. It's not enough to have the systems make decisions in a way that produces a result; we have to also be able to trace the rationale that led to that result. Because inevitably, again, errors will happen, mistakes will happen. That's just the reality of any human-created system, and so we need to have a way to understand that, to trace it, to regulate it.
That'll be an important part of scaling this technology more broadly.

CASEY: I love hearing you talk about the human side of the equation. My background is in UX research, and I've experienced at a lot of companies that UX research is sometimes an afterthought, or it's on a team that's not talking to the engineering team as often; they just throw designs over. The best work I've done is when we worked closely with them and we were the same team, talking all the time. I'm curious how Motional has managed to keep this thinking about the human side front and center.

LAURA: Yes, absolutely. At Motional, our team is people-first in really all that we do. I would say it's integrated into the fabric of what we do. I talked a little bit about this idea, we call it expressive robotics: how is the vehicle going to communicate to outside actors, whether it's pedestrians or drivers? But there are many ways. I think you see it in our leadership, you see it in our engineering organization. We don't just have people who have a strong robotics background; we also have people who have a UX background, who have designed AI that learns from human behavior and reflects human behavior. So I agree with you that it can't be an afterthought, it can't be a side project, it can't be just thinking about a display, a good design of a display. It really has to go into the core of the product, how the system behaves, again, as a social entity. The system needs to be designed, from how it makes decisions, to how it understands the world, to how it ultimately communicates with the outside world; that all has to be done with an understanding of the human side of the equation.

JAMEY: This is a pretty big question, so I totally understand if you can't or don't want to answer, but something I think about often is, will autonomous cars become the standard that everyone uses in my lifetime? I'm very curious about this, I think about it a lot, and I'm just wondering, as someone who's doing this very professionally at a high level, what is your opinion on how far away we are from that?

LAURA: Yeah, that's a great question. It's hard for me to predict, too. I don't know how long it will take for this technology to get to the level you're talking about, where all of us are using AVs as our primary mode of transportation. I think what we're going to see in the next couple of years is that AVs will become a reality for robotaxi applications. In certain cities, you will be able to hail an AV, a driverless car will take you where you need to go in that city, and then I think we'll see it scale up across cities. One thing that has to happen before it's really available in our personal cars is that, today, in order to achieve the safety standards that are needed, there's a pretty complex sensor set that's needed to provide redundancy and full 360-degree coverage. We need to see some of those costs come down before it's affordable in personal vehicles. I think robotaxis, absolutely, in your lifetime. Hopefully in the next couple of years, but then I think it'll scale up over time. I hope by the time I'm elderly and not able to see as well, I will no longer have to drive a car in any city that I'm in.

JAMEY: Plus, you're talking about it from a technological standpoint, but there's also a very social aspect of, does that become standard? How do people react to it, emotionally?

LAURA: Yes, that is true, and actually that's the basis of our name, Motional, and how we came up with it. It combines motion, which is the core of our product, with emotional.
I think transportation decisions have always been somewhat emotional, but it's taken on a whole new level with COVID. How we move from point A to point B is an emotional decision. There's, of course, the safety aspect, and now there's the health aspect. That very much is what drove the decision for our name, because people-first is really our core focus, and Motional came from that combination.

CASEY: Earlier, I heard you say Swiss cheese model. I happen to know that's the Swiss cheese model of error prevention, and I've been thinking about it a lot, especially during COVID. You need lots of layers of protection: you've got to wear a mask, keep your distance, get tested, and be outdoors if you meet anyone, all those layers.

LAURA: That's right.

CASEY: I'm curious to hear an example from autonomous vehicles.

LAURA: Yeah, absolutely, and we have some examples in the book as well. But from an AV point of view, you certainly have the interaction of the vehicle and the passenger, so things like making sure the person is in their seat properly and their seatbelt is buckled is a layer of protection. We have the layer of protection of a remote operator who is monitoring and making sure that the system is fully functional and everything is going well, or who can intervene if there's some unexpected situation like a construction zone or a traffic accident. Then you have other layers as well, like regulation. We have a self-imposed safety process we go through, and we have an external assessor of our architecture and of our test results to ensure that we have followed all the proper procedures we need to implement a safe system. So these are some of the layers that all come together to create an overall safe experience. But I think people play a role in all of that, whether, again, that's the people in the car, the people monitoring the operation, or the people designing, assessing, and regulating.

CASEY: Cool. That's exactly what I was looking for. There are more layers there than I would have expected. That's awesome.

LAURA: Yes. There are many, many layers, that's right.

JOHN: We've come to the place in the show where we go into reflections, which is where each of us talks about the things that have struck us from this conversation, maybe something we're going to be thinking about after the show, or just the ideas that we found most resonant. For me, I'm so fascinated about how things are going to turn out with the social constructs around these social entities, as you've described them, which I think is a great phrase because it really speaks to how they're integrated into the social fabric in some way, and we're still trying to define what that shape looks like. I think there are ways for it to go right and ways for it to go wrong. I think it's great that you've written this book to help people start thinking about this and help the industry start thinking about this, because there's still so much that has yet to be put in place around this stuff, and the more thinking, the more books, the more writing, the more we can think about what these impacts are going to look like, the better prepared we're going to be once they actually get here. For me, it's always going to be a question of what's the right level of empathy for one of these social entities. How different are they from a pet, or from a servant, or someone who works for you?
There are different levels, and they all have different implications for how you're going to treat them, and I think it's still TBD where everything's going to end up settling. So that's something I'm still chewing on and probably will be for years.

JAMEY: That actually leads right into what I was going to say for my reflection, so thank you, John. [chuckles] I was also thinking about how fascinating it is to me to be in the process of creating a new social norm like this. Not that we haven't been in that process for a lot of things, but I think that we're not always in a situation where we realize that's what we're doing, consciously, in the way that we're able to have this conversation about it right now. That got me thinking about this idea of having to think through more thoroughly who you could be hurting with your actions. Because we were talking about the way that people treat robots, maybe with cruelty, and I think that probably stems from this feeling of, oh, well, it doesn't know, I'm not offending a robot, I'm not hurting it in the way that you would hurt a person. But how you treat a robot could offend or hurt a person. John touched on one of the really obvious ways, which is if there actually is a person on the other end of it and you didn't realize. I was thinking about Push the Talking Trash Can, which is a robot from Tomorrowland at Disney, and it's a little trash can that rolls around and talks to people and interacts with people. It's just a guy that's navigating it and speaking through it, but people talk back to this trash can. So if you're rude to the trash can, you're really being rude to this person. But I can also see people getting emotional. We talked about Roombas right at the beginning. If you kick my Roomba with the side of your foot, you're definitely not offending its feelings. You're probably not going to hurt this piece of tech that I own, if you're not being really violent with it. But you might offend me, like, you can't kick my Roomba, how dare you. So I think that's an interesting way that you have to think things through, and I'm also chewing on, well, how many layers are there of that that I should be thinking through, and in what situations?

CASEY: I have a couple of reflections I want to share, a couple small ones. Sometimes when I leave my apartment, I say, "When Casey's away, the robots will play," and I turn on my dishwasher and I set my Roomba to vacuum the apartment. They're both very noisy. I don't want to be with them when they're doing their work, but they're very different, those two. One is a social entity. The Roomba, I personify a lot more than the dishwasher, which I never do, and I never noticed that distinction until you said the words social entity and it clicked. That's powerful. The second thing I picked up was the way that you have to think about who's affected outside of the car, not just the passengers. That's a big idea, and it feels so parallel to work in UX. If you zoom out from UX, you get to service design: who else is affected by this one app we're making? It's not just the person who uses it but the people around them. Zooming out like that, I think, is powerful. I want to see more of that. The third thing I picked up was that we should all be on the lookout for robotaxis like Las Vegas has. I didn't know they were live in Las Vegas. That's mind-blowing, that's so cool.

JAMEY: I, too, would like to ride in a robotaxi, that sounds pretty exciting.
[chuckles]

LAURA: Yes, please do. We would welcome you to take a ride with us. Next time you're in Vegas, let me know. One thing I found really interesting today in our discussion was this interplay between not wanting these robots in the future to look too much like people, because of the potential dangers that could be caused by people misunderstanding how to properly interact with the robot, and the need for the robot to behave in ways that people understand, and how, in some ways, that's a very, very subtle difference and hard to get right. I think, since I live in this world and think about it a lot, to me that difference is clear. But in the discussion today, we talked about both sides of that and how important both are, and it became clear to me that it's a subtle difference and it's hard to get it right.

JAMEY: This has been really fascinating. Thank you so much for coming on and talking with us about this.

LAURA: Thank you so much for having me. I really enjoyed the discussion. It was a lot of fun.

JOHN: Laura Major, your book is called What to Expect When You're Expecting Robots. It's out now?

LAURA: It is, yeah.

JOHN: Excellent.

LAURA: It's available.

JOHN: Everyone, please go buy it.

LAURA: [chuckles] Yes.

JOHN: Awesome. All right, well, thank you for being on the show.

LAURA: Thank you so much for having me.