NOEL: Hello and welcome to Episode 35 of the Tech Done Right podcast, Table XI's podcast about building better software, careers, companies, and communities. I'm Noel Rappin. Table XI is offering training for developer and product teams. Topics include testing, improving your legacy JavaScript, career development, and Agile team process. For more information, email us at workshops@TableXI.com. We also have a free email course and tools on improving your company's career growth and goal strategy. You can find that at http://stickynote.game. My book Rails 5 Test Prescriptions is now shipping. The book is up to date with the latest Rails, RSpec, and MiniTest features, and has some great non-dogmatic content on how to get value from testing your Rails applications. You can buy the book at http://pragprog.com or wherever fine technical books are sold. Today on the show, I'm talking to Zach Pousman. Zach is the Principal at Helpfully, a unique consultancy that Zach will explain more about in a moment. Zach thinks a lot about artificial intelligence and how it might impact the future of development and technology design. So we'll talk about that. And of course, it's impossible to talk about AI these days without talking about the ethics of AI projects and how AI might affect the larger society. So we'll go a few rounds talking about possibilities for that as well. I really enjoyed this conversation and I hope you do, too. And here I am with Zach. Zach, would you like to introduce yourself? ZACH: Hi, this is Zach Pousman. Really good to be here today. I run Helpfully, a consulting company based in Atlanta, Georgia. We have a slightly different model than other teams in that we get to take a client engagement and then staff exactly the right team for it: the right subject matter experts, the right technical expertise, and the right design team. So every one of our client engagements looks a little bit different. We also get to focus on the future: I think about 50% of our work is in the two-to-five-year time horizon. So ideas, technologies, new consumer behaviors that are a little bit farther out on the horizon, just over that horizon point. It's sometimes hard for clients to take advantage of them from what I might describe as a profitability or ROI perspective right away, but they certainly want to be prepared, want to be ready, for those new emerging technologies and trends. NOEL: Cool. That sounds really neat. We are here to talk about AI and the future of work, in particular design work and to some extent development work. Where do you want to get us started? ZACH: I think it might be fun just for us to go back and forth a little bit and talk about what AI is. I mean, AI means a lot of different things to a lot of different people. So maybe I'll just give a really quick intro. Artificial intelligence, to me and the way that I think about it from a historical perspective, is really exactly what Alan Turing said it was: the ability for computers to act in ways that are indistinguishable, or almost indistinguishable, from human beings. That's like a 70-year-old definition at this point, but I think it's one that's really interesting because it's very behavioral. It's not about some philosophical thing. It's about what computers actually can do in our lives.
NOEL: I did a little bit of AI work when I was a grad student many years ago, and there's kind of a history here where the original AI researchers targeted problems that were hard for people, so chess and things like that. And they discovered pretty quickly that you could have chess programs that played chess better than humans without having anything that you would describe as intelligent or even really humanlike. And after that, AI went after things that are really easy for people, like facial recognition and recognition of letters, and discovered that that was very hard for computers. And I think in the last five or 10 years, there's been a tremendous amount of advancement in those tasks that are sort of easy for people but hard for computers. Not necessarily, I don't think, because the techniques have gotten much more sophisticated, but because the amount of data available to the AI algorithms has gotten so much more extensive. Do you think that's reasonable? ZACH: I think that's exactly right. NOEL: Yeah. ZACH: From the experience that we have with AI systems at Helpfully, most of the work that we have done has been in that easier realm. So, taking things that doctors can do very well, that experts or regular people can do well, speak English, talk about the weather, those kinds of things, and then trying to build that kind of knowledge, or that kind of knowledge production, into systems. NOEL: The basic techniques are 30 years old at this point, I think. More, probably. ZACH: Statistical machine learning is definitely old. I think the new realm, as you describe it, is the same techniques, just done in a slightly more involved way, with more computation to throw at it. But then, as you said, with a ton more data behind it, which is basically tens of thousands or millions of examples of human capability, examples that then allow the people who are designing systems to go off and actually try to model what human beings do so well, so easily. As you said, they're so facile at those kinds of things. NOEL: So how do you see this changing the way that designers and developers work? I have a theory, or at least a framework, for this, but I want to hear what you have to say about it first. ZACH: From my perspective, there are some scary stories. One of the things we see a lot and that we hear in the news, that clients read, that everybody reads, your mom calls you and says, "Zach, I read an article and I'm worried about your job." I think a lot of moms are getting worried, and I think the mass media and tech media stories are about how technology systems, these AI systems, are going to replace people, replace your whole job, take it away from people. And that does not seem likely, or certainly does not seem imminent, in very many kinds of roles in the tech space. Let's keep our eyes on truck drivers. I'm certainly worried about the truck driving industry, but I'm a lot less worried about technical creative development jobs falling to these kinds of statistical techniques and these kinds of statistical AI systems. NOEL: So, here's how I think about it. I have been a professional developer for about 20 years or so, and I've been hearing for my entire development career that inevitably programs will write programs in a way that goes beyond what we do now. And one of the ways I think about this is to think about what's changed in the time that I've been doing this and the way that programming as a job has changed.
And I think about my first large professional project, which was almost 20 years ago. If I were to describe that project, I would say that a team of five people worked for 90 days, more or less, on a project that, using current web development tools, an individual expert developer could probably do in about two or three weeks. But the difference, the change there, is not anything that I would characterize as artificial intelligence, but what I would characterize as the continual aggregation of beneficial tools: there's an ecosystem to do a lot of things that didn't exist 20 years ago. There are frameworks that didn't exist 20 years ago. And so, we can start from much higher ground and use much more powerful tools. And to some extent, they're all programs, like we're writing programs at a slightly higher level, but it's not the kind of generated code or generated structures that I think have been continuously predicted. I feel like, commensurate with the tools getting better, the expectations have gotten higher in a way that means the projects have not gotten less complicated. One thing I like to think about in this context is that an individual frame render for Pixar takes orders of magnitude longer now than it did on the original Toy Story movie, even though the computers are orders of magnitude more powerful, because their expectations for what makes a good rendered frame have gotten so, so much higher. ZACH: Let's come back to human expectations, which are always rising and always changing, and certainly something that developers and designers, coders, creators need to think about. When I think about the argument that you're making, you're saying, "I've gotten so far, I've seen so much because I'm standing on the shoulders of giants." You're saying that about the whole endeavor of creating computer programs, and that is, I think, certainly true. We are deploying tons of frameworks and libraries that were never available to us before. In some senses, technical people don't need to reinvent the wheel. You don't need to do something that has been done dozens or hundreds of times across all these different code libraries. The AI proponents are kind of coming at it from the other perspective. They're coming at it from the idea of throwing every piece of written code at some big neural network, allowing these algorithms to pick and choose, to understand that in the alien way that they do, and then actually generating either specific programs or components of other programs. That work has not progressed very much. NOEL: Yeah. Except possibly in the limited realm of optimization for server structures or things like that. ZACH: Yeah. I'm not familiar with the cutting-edge literature there. I think you're also starting to see some advances in UI components, and I've seen some layout engines or constraint-based tools that use neural networks to learn, "Oh, this is what UI code looks like." I've seen a couple of interesting things there. NOEL: Anything that has fundamentally changed the role of the UI developer, or are they still mostly toys? One of the things that I'm assuming right now is that, when I say the job hasn't really changed in 20 years, I'm assuming that the development has been linear, which is a big assumption. But sorry, do you see these UI tools as fundamentally changing the job, or are they just assistants?
ZACH: No, they're not even yet to the point where they're actually assistants. They're proofs of concept that there's something here. They are not really in use, I don't believe, by any actual working teams. These are things that a single grad student researcher has gone off to make or build as a proof of concept that sensible-looking code could come out the other end of such a pipeline. NOEL: I can imagine an AI assistant that essentially did a perpetual round of A/B testing and automatically used the learnings from that. ZACH: Exactly, to generate the candidates for the next round of testing. NOEL: Right. There's a lot that has to be done, I think, before that's feasible. ZACH: Yeah. Ask yourself how many very bad cul-de-sacs there are on the road to a good user experience that you, as a human being, of course know not to chase down, but that an AI system wouldn't know anything about. So would that really even speed you up, assuming it was doable at all? Would that actually improve your chances? NOEL: What we've seen so far from the deep learning algorithms is a combination of tremendous power, but also a tremendous tendency to get stuck. And not just stuck in a local maximum, which is certainly a problem, but also the ability to make concrete the implicit biases of the people who develop the algorithm. You have books like _Weapons of Math Destruction_ that talk about this, and Carina Zona, who was on this podcast a while ago, talks a bunch about algorithmic failures, where the reach of the algorithms exceeds their grasp and they make errors in how they interact with people. ZACH: And they're not just errors. The kinds of AIs that we've succeeded in building are, at the very best, weird mirrors. But they are just the mirrors of all the human decision making that has come before, or the mirror of the design and development team that has created them. So in some cases that feels very neutral and easy and safe. And in many cases, as you're implying, those are not just failures; those are deeply troubling societal things that could potentially be perpetuated into these AI systems and AI artifacts. And I've seen a lot of really bad examples. NOEL: Yeah. ZACH: If you imagine building AI systems only with people who come from certain socio-economic areas or who have a certain kind of perspective about the world, you're going to wind up making AI systems that reinforce sexist, racist, ageist kinds of outcomes or thought patterns. And that is not necessarily explicit. I'm not saying those people are explicitly biased, but if they work in teams where all the teammates look the same, you're going to wind up with systems that have a lot of inherent bias. And that's been borne out in law enforcement, that's been borne out even in things that seem neutral, like web search. NOEL: Yeah. We'll put in the show notes a link to a talk by Carina Zona where she goes into this in some detail. But yeah, the trickiest thing in the world is to work against the 'this seems right, but I can't prove it' bias, where you let things go because they kind of feel okay. As data pours in reinforcing that track, it just gets worse and worse and worse. Where do you see the AI tools coming in for technical designers, for graphic designers, for developers, for things like that? ZACH: I think about these things as really smarter and smarter assistants or tools.
So in the same way that we pick up the right tool for the job, I think some of these tools and IDEs are going to start to build in best practices, start to build in some contextual understanding that allows an AI system to, for example, do things like content-aware fills for a graphic designer, or catch missing variable names for a developer: "Hey, this is an unused variable." And if you understood enough about closures and context and scoping, you could actually start to catch some of those even before things get compiled. You could do that at design time and not just watch programs fail or get inefficient. I think there are some of those things that don't take anything away from a designer or a developer; they just, I think, sometimes change the altitude at which those players are going to participate. NOEL: At the beginning, that just feels like better and better tools. ZACH: Yeah. And the argument that I like is that even AI proponents know that that's a very non-linear thing. So an improvement to a tool, or something that improves your efficiency as a developer, that's very different from AI systems that replace any human developers at any price point. NOEL: Right, because right now as a developer, it's pretty easy to get my hands on tools that will catch certain kinds of bad practices, but they're doing that through a pattern matching algorithm that I basically understand. I can explain what it's doing; I can look at the code and understand that that is what it's doing. But as those tools become more and more complex, it becomes harder and harder. I take more and more on faith as the user of the tool, in a way that matches the way that other technologies have worked. The first people who owned cars knew, for the most part, how every piece of the engine worked, and over time those got more and more complicated, and now I don't understand anything about how my car works. But how do you see that as allowing people who are not currently developers or designers to be able to do that work? Do you think it's more about improving expert performance, or about bringing beginner or novice performance up to what we would now consider expert level? ZACH: I see the bigger benefit being a broadening of the user population, of the designer set or the developer set. When you look at tools like The Grid, which is an AI-based system that helps you build websites using a reasonably simple but statistically interesting kind of AI system, that is really about democratizing good design for people who could not make a nice-looking website come out of their fingertips, either with raw HTML or even with a kind of kit-of-parts builder. This allows you to do that kind of A/B testing on the fly, take templates that have worked in other domains, apply them to your space, and see what happens. I think that it really is about democratization, and it has, in that case, almost no benefit to an expert, nor would it ever really replace a specialist web design team. So when I think about where these things are today, they're mostly in the realm of toys. But when they become tools, they'll become tools for democratization, not necessarily tools to replace human researchers. Think about research librarians: are there more of them now, with Google, or fewer?
It has actually done more to democratize the way regular people get everyday answers, recipe advice from Google, than it has done to replace reference librarians at university libraries. NOEL: In the case of the design tool, it's democratizing our current understanding of what good design is. ZACH: Yeah. And certainly subject to that idea that you already mentioned about local maxima: it's going to find you the very best version of the 2014 web. Is that the very best thing that could exist in the world? Well, I certainly wouldn't say that. When I look at... someone did a really nice piece, we'll put it in the show notes, it's like 'every website in 2016.' At least in the design world, things do fall into these kinds of trend-based patterns or local maxima, and could that be captured and encapsulated in an AI system? Yeah, maybe. But that doesn't worry me from an expert design perspective, or for people who are on the cutting edge or on the vanguard. Those kinds of teams and that kind of work are still just as valuable, or even more valuable, than they've ever been. NOEL: Is there a kind of knowledge work that you feel is particularly susceptible to being replaced or significantly augmented by AI? ZACH: I was racking my brain before we talked today. I cannot think of very many places, at least in the developer community, where I see a really short-term danger in the kind of two-to-five-year or even ten-year time horizon. However, I think there are jobs that fall into analyst kinds of roles or information processing kinds of roles, jobs that many hundreds of thousands of human beings do today, where I think, as optimization starts to roll out, those roles may fall to a kind of expert-level review and then to turning that kind of work into an algorithm, where even if you don't hit 100% performance, you could get up into the 90s and have a much smaller team of humans overseeing exception cases and the quality of the work. I think that is definitely plausible. There's a really nice story. It's posted by a Redditor, and he's actually asking an ethical question. He's been working as one of these information analysts for the last couple of years, and he writes this post to the Reddit community and says, "Hey, I've accidentally automated my entire job. Should I tell my company? Should I tell my boss about this?" And the lesson is, first off, don't get your ethical advice from Reddit. NOEL: Always a good rule of thumb. ZACH: But I do think there's a really interesting power in the idea that smart and industrious individuals, team members, not necessarily experts or consultants from the outside, are going to find ways to dramatically accelerate or dramatically optimize their work. And I think that is a trend we're going to start to see more of, where people either on purpose or by accident begin to find tools, or glue tools and systems together, to take care of 60%, 70%, 80%, and in this guy's case 90% or 95%, of their daily work. He was doing two and a half, three hours of work per week but getting paid for 40 hours a week. I don't want to go into all the details, but he wound up basically inserting errors into the spreadsheets that he was generating so that other downstream teams would have to go back and fix them. For me personally, I think that part is unethical. I think that creating work for other human beings is probably not ethical.
But thinking beyond this particular gentleman, I'd be less scared about bringing those kinds of efficiencies to the management team or to your boss, and then getting assigned a new problem to solve, than I would be worried about his particular job, his particular job security. NOEL: Which is easy to say from the distance of it being somebody else's job. ZACH: Totally. It's not my seat that's on fire. It's someone else's seat far down the hall from me. NOEL: When they come up with robot podcast hosts, I'm going to be in a lot of trouble. ZACH: You will definitely be in trouble. NOEL: I keep seeing dark glances being cast at radiology. I have a coworker who is doing a side project to help evaluate MRI images, brain images, for diagnostic purposes. That kind of thing, which is very explicitly expert pattern matching, I see as being kind of threatened. ZACH: I think those people actually are threatened, and those are of course very highly paid roles. It would make sense from a dollars-and-cents perspective to look for places where human beings are really playing a role of... it's expert-level performance, but it is really just a task that might not actually be what humans are best at, or what humans could be best at. NOEL: Right. It's the kind of work that's highly trained because it's hard for humans, but that puts the ethical question in a different light: not just the light of this one person, but of what kind of structure we as a society want to have around it. It's hard to talk about AI abstractly without eventually getting into the very large questions about the nature of humanity and society and things like that, I suppose. ZACH: There's a really good McKinsey article about what AI can and can't do and how that's going to apply to business cases. What was cool about it, from my perspective, is that they didn't just look at technical jobs; they looked across the broad swath of the economy, since McKinsey advises all across it. McKinsey divided up all of the jobs across things that seem rote versus unique, and also physical versus mental or information processing. And they wound up coding basically 50 or 60 jobs, and certainly things that are very analytical but not necessarily highly lateral-thinking or creatively minded are the things that they put at higher risk. NOEL: There's a story I heard relatively recently that has been bouncing around in my head. I'm going to put in the show notes where this is from. But they were doing an AI game tournament for Tic-Tac-Toe on an infinite board, where the goal is to get five in a row. The AIs were genetic algorithms; the successful AIs were algorithmically developed. And for one of the winning AI teams, the winning strategy was that the AI in question would place a piece on the infinite board a million or so spaces away. ZACH: Sure. A hundred thousand rows away. NOEL: Right. In an attempt to try to cause its opponent to model that internally and crash. ZACH: Yeah, you're basically using something like a SQL injection attack as your approach to solving the problem. NOEL: Yes. So to me, that is emblematic of a lot of current AI research: it can solve the problem very, very effectively, but you have to be really, really careful how you define what the problem is, or else you get solutions along an axis that you didn't even know existed until it starts optimizing along it.
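To make that specification-gaming point concrete, here is a toy sketch in Python. It is not the tournament entry Noel describes, and the data, fitness function, and numbers are invented purely for illustration: the intended task is "sort this list," but the fitness only rewards in-order adjacent elements and mildly rewards shorter outputs, so a dumb random search discovers it can "win" by throwing the data away.

```python
import random

# Toy illustration of a misspecified objective (not the tic-tac-toe
# tournament entry; data and fitness are invented). Intended task: "sort
# this list." The fitness below only penalizes adjacent out-of-order pairs
# and mildly rewards shorter outputs; it never says the output has to keep
# all of the original elements.
DATA = [5, 3, 8, 1, 9, 2]

def misspecified_fitness(candidate):
    out_of_order = sum(1 for a, b in zip(candidate, candidate[1:]) if a > b)
    return -(10 * out_of_order) - len(candidate)  # "shorter is simpler", oops

def random_candidate():
    # A candidate is any subset of DATA, in any order.
    picked = [x for x in DATA if random.random() < 0.5]
    random.shuffle(picked)
    return picked

best = list(DATA)
for _ in range(5000):
    candidate = random_candidate()
    if misspecified_fitness(candidate) > misspecified_fitness(best):
        best = candidate

# Almost always prints []: technically "in order," but the search optimized
# along an axis the problem definition never meant to allow.
print("best candidate found:", best)
```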
ZACH: To me, that's emblematic of a philosophical difference. When we talk about artificial intelligence and it worries us, it worries us because we're worried about true intelligence. And I think the systems that we have wound up with in the world work in ways that are not intelligent in the same way that human beings are. They use basically a collection of parlor tricks across time and space. And I don't mean that in a demeaning way. NOEL: No, yeah. ZACH: That 50 years of AI research is a collection of parlor tricks, but it's certainly not the same kind of thing that human mental systems do on a day-to-day basis when they're trying to win a game or accomplish a task. NOEL: I've known AI researchers who would absolutely describe AI as a bunch of parlor tricks. ZACH: Well, look at chat bots. There's actually a prize that they try to run every year, a Turing test. I don't know if you're familiar with this. NOEL: Yes. ZACH: It's called the Loebner Prize, I think that's right. NOEL: Yes. ZACH: Just for listeners, the Loebner Prize is an AI contest where chat bots try to act like human beings for five totally unrestricted minutes with human interlocutors. They just put you in a room with a chat terminal, and you go around chatting with various systems. Some of those systems might be human beings; some are computer chat bot algorithms. And then at the end, the human judges go and say, "Which one of those was a computer, which one was a chat bot?" And they give out the Loebner Prize every year. And I'm pretty sure this is still right: last year or two years ago, I believe, they did trick one human judge into ranking one AI system above one human interlocutor, across something like 25 judges. That was a big deal at the time. And what I would say is, when you actually go and look at what those transcripts look like, you can just tell it's a collection of parlor tricks in a lot of cases. NOEL: Even dating all the way back to the original ELIZA program. ZACH: Oh, yeah. That's great. NOEL: ELIZA is one of the original AI programs, and basically all it did was reply back. You would say, "I'm really upset today." And it had a bunch of essentially regular expressions that would make it say, "What is making you upset today?" And it would occasionally jump in with a non-sequitur question along the lines of "How was your day?" or... ZACH: Tell me about your parents. NOEL: Something like that. It was meant to simulate a specific style of talk therapy. ZACH: That's right. And the parlor trick in that case is not just those regular expressions, the kind of simple slot-based stuff that was happening in AI in the '60s, but also the entire framing of that experience, which was intended to get you, the human, to act in certain ways in order to vastly cut down on the possible responses. NOEL: Yeah. But there's also been some research, I believe, that suggests it's actually useful in some cases to interact with an ELIZA system even if you kind of know it's an ELIZA system. That people have gotten… ZACH: Therapeutic effect. NOEL: Therapeutic effect out of it, which is, I don't know, somewhere between amazing and disturbing. I'm not entirely sure. ZACH: I feel like human beings are the most interesting kinds of creatures because they are both totally driven by patterns, at least I personally believe they are not driven by magic or by spirit, they're built of meat, they're built of atoms, and at the same time, they're very hard to predict.
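The mechanism Noel describes for ELIZA, a handful of reflective regular expressions plus canned non-sequiturs, is simple enough to sketch in a few lines of Python. This is a toy in that spirit, not ELIZA's original script; the patterns and fallback questions here are invented for illustration.

```python
import random
import re

# A toy in the spirit of ELIZA: reflective patterns plus canned fallbacks.
# The specific patterns and questions below are made up for illustration.
REFLECTIONS = [
    (re.compile(r"\bi'?m (?:really |so )?(.+)", re.IGNORECASE),
     "What is making you {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bmy (mother|father|parents)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
]

NON_SEQUITURS = [
    "How was your day?",
    "Tell me about your parents.",
    "Please go on.",
]

def respond(utterance: str) -> str:
    utterance = utterance.strip().rstrip(".!?")
    for pattern, template in REFLECTIONS:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # Nothing matched: fall back to a canned prompt, as ELIZA did.
    return random.choice(NON_SEQUITURS)

print(respond("I'm really upset today."))  # -> "What is making you upset today?"
```

Even the fallback list does some of the work: as Zach notes, the therapy framing itself narrows what people are likely to type in the first place.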
NOEL: Right. The place where you really see a chat bot as a collection of parlor tricks is when you try to have two chat bots talk to each other. ZACH: It often, yeah, devolves into… NOEL: Right. Without a human being to direct that in a direction, they just circle around the same patterns. ZACH: Yeah. I'm of the opinion that we're going to stay closer to those kinds of slot-based or regular-expression-based systems with the things that work in your home, like these Alexas and Cortanas and the Google Home devices. I think those things are going to be useful. The point you were just making, Noel, is that there's utility even if these are just parlor tricks. And I think that's going to be true, and going to, if not accelerate in depth, certainly accelerate in breadth, so more of your life will be assisted by these technology systems. But that doesn't necessarily mean they're going to get a whole lot better at understanding human expressions. NOEL: And there's a way in which that's even scarier than the "AI will get super smart and self-aware" scenario. A collection of really dumb but powerful parlor tricks actually, to my mind, has potential for a lot of damage on its own. ZACH: Yeah. Say more about that. Just from a 'who owns it and who controls it' question? NOEL: Yeah, I think about Alexa or those kinds of home-based tools. Again, they're basically doing sophisticated pattern matching. But you could imagine an Alexa home system that could be easily fooled, which is not quite the right word, but could easily misunderstand the external state so as to do unhelpful or actively harmful things, without even really understanding what was going on, or without even really being hacked, just through missed pattern matching and misunderstanding, the kind of algorithmic blind spots that we see in image recognition. I don't know. It seems to me like there's a potential for a fair amount of… I don't own an Alexa device, so this might be related. ZACH: I don't either. NOEL: Except for the fact that I have an iPhone, I guess. So I have a Siri device, but I don't ever really use it. I don't find them super useful yet. But I feel like along with the power comes the potential for mischief, even relatively well-intentioned mischief… ZACH: Unintentional misuse, yeah. Unintentional mischief, absolutely. I do want to call your attention, actually, to one kind of conversational system that is actually making some progress. Pretty interesting. It's called SQuAD, the Stanford Question Answering Dataset, so that's an acronym. SQuAD is an open-ended kind of AI challenge, and they've been working on answering questions based on Wikipedia pages, I think. What they do is they have crowd workers, human beings, work on Wikipedia pages to answer a kind of canonical set of questions. So, it's a reading interpretation... what do they call that when you're a little kid? Reading comprehension. SQuAD is this reading comprehension test, and they've been setting AI systems loose with Wikipedia as their input to answer the same kinds of questions.
And just this year, I think, it's been running for three or four years, they've gotten a system that basically equals human-level performance on these reading comprehension kinds of questions. That's a huge thing. It doesn't work in any way like a human being would, but it certainly helps answer questions. And what's interesting there is the ability for systems to read a human-written, heterogeneous, totally streaming data set of news going by and then be able to turn that into something like, "Oh, here is what those news stories are about," without using those simple old tools like TF-IDF or something like that. NOEL: Right. And you could imagine both power and mischief in that kind of algorithm. ZACH: Oh, absolutely. I think the worrisome parts of AI are very real. I mean, I'm most concerned about some of these ethical implications, not just with the teams that are building the tools that we mentioned, but of course when you look at the kinds of attack vectors that become possible when you're willing to throw large computation, machine learning, and statistics into these systems, I think that's a really dangerous place to be. NOEL: I've always been kind of interested in what happens when an AI solves a problem in a way that is fundamentally different from how a human would solve it. I read somewhere that the Google Translate neural networks had sort of inferred an intermediate language. ZACH: Yes. They have built their own meta-language, a language about languages. NOEL: Do you know more about that? Because that sounds really neat to me. ZACH: Well, I only know two sort of bigger facts about it. The way that the Google Translate system works is similar to the way that all of these deep learning systems work, in that they throw every possible corpus from every possible language at these systems and they infer all the rules of grammar, all the rules of syntax and semantics. The interesting thing that happened on the Google team's side is a really good example of the inability to inspect. The idea that these neural networks were creating representations that were not in any human language is real, but even the Google team can't explain what's really happening there, just like they can't explain anything that's kind of ground truth about these tuned neural networks. They've just got, here are the tunings, and it just works. They know there's something happening there in terms of intermediate representations, but there's no reason to call that a language. You might as well just call it a huge, long vector of neural network weights. That's what I gathered from the article I read. All human languages are just a huge vector of neural network weights. Yeah, I guess that's true. NOEL: I so want them to have implicitly discovered Esperanto… ZACH: Uh, that would be amazing. That really would be amazing. NOEL: That's an overreach, I guess. I mean, all of these things come back. I would say that self-driving cars and translation and things like that, which were considered significantly out of reach 10 years ago, now seem tantalizingly within reach, at least within very, very constrained contexts. And to me, again, I think that's the power of the amount of data that teams have been able to throw at these kinds of problems. It was kind of interesting. I noticed that you sent me an article about who will be responsible when an AI kills someone.
And I think it's dated a couple of days before… ZACH: It is literally like the week before AI did kill someone. NOEL: So timely. And I think that's an interesting construct too, although it seems, from what I've read, that there's a possibility of negligence not necessarily on the part of the software. ZACH: I think that's a perfect storm. NOEL: Sorry. So, we're talking about the Uber self-driving car that killed a pedestrian in a tragic incident a couple of weeks ago as we record this. ZACH: Yeah. A woman was killed walking across the street pretty late at night. It was definitely after nightfall. And certainly, like all accidents, there will be many, many factors to dig into. One of the things that at least some of the autonomous vehicle experts were looking at is that the team at Uber had turned off the LiDAR systems that were in the vehicle. They had turned them off in order to test whether the system would be performant under new conditions, basically with a smaller sensor array, or with fewer of the sensors that are in the vehicle. And even if you did do that routinely, the autonomous vehicle outsiders and critics were asking why you would not turn it off for the AI system to use but still keep it on for emergency, evasive-maneuver kinds of situations. Because the human who was sitting in the driver's seat of the vehicle did not see this happening until it was, at least from what's been reported, far too late to do anything about it at all. It was below her threshold of ability to react. NOEL: That reminds me of what I've read about Chernobyl, which was caused by an extremely ill-advised maintenance task, where you look at it at the end and you think, "Why did anybody think this was a good idea to start?" And that, again, turns out potentially not to be an AI issue. But it does become increasingly a question, as these AI systems take on roles not just in design but, as they already are, in the legal system and in places where they can cause damage: how do we mitigate that? How do we deal with that? I'm expecting you to have all the answers. ZACH: One thing that seems like a good idea but turns out not to be that smart is to try, in some legislative way, to make sure that AI systems have some human accounting at all times. I think that's quite worrisome. There's a recent Wired article that I really liked which said there really isn't a good reason to expect AI systems to be accessible in this way, where there's always some simple human-level accounting of every piece, or to try to legislate the way in which these programs get built. It would probably cause systems to become much more simplistic than industry would otherwise choose. But I do think it's important for us to consider not just the technical and commercial capabilities of these systems, but actually to have some of these policy arguments out in public, and in a way where legislators actually put constraints on the ways in which these new tools are going to work in the world. And if we want to change our mind, I'd rather do that through policy and legislation than based on the whims of huge tech companies: Uber, Google, Facebook, and Apple. I would much prefer us to open some of these broader sorts of STS, socio-technical systems, kinds of conversations now, without a lot of commercial pressure.
Because I think that we want to divorce that conversation from the commercial desires of companies, who might, for example, design their algorithms to prioritize their own vehicles over other companies' vehicles. And I'd rather have that conversation in a policy and legal way, with legislators, than just have Google and others duking it out in the real world with lives at stake. NOEL: Yeah. With the hope that we can figure out a way for legislatures to be able to keep up, which seems challenging. ZACH: I certainly think, and you didn't bring it up, but I do think that we need to upskill and certainly educate policy people and legislators on that issue. NOEL: Right. I'm not even thinking necessarily about the education level of the policy makers, although that's certainly an issue. I'm just thinking about the speed and pace of legislative work, sort of by design, by necessity. ZACH: Sure. If you look at human history, technology has frequently outpaced our ability to regulate it from a governmental perspective, and outpaced our ability as societies and cultures and civilizations to make sure that we're ready, that we human beings are ready, for these new tools and for what these new tools will bring. I think that's the correct role. It is commercial industry's role to push, to push the boundaries, and to do things that are, strictly speaking, legal until we decide as a society to make them illegal or to regulate them in other ways. But I don't think that we need to say, "Oh, we need to put a stop on technical effort or technical work," in order to let legislation catch up. I just think those are the two sides, the push and pull, of how technologies enter into our everyday life. NOEL: One of the things that makes me nervous, thinking about the history of technology, is that the original Luddites were actually very highly skilled and trained textile workers. ZACH: Laborers. NOEL: They had very high social status and a very highly skilled job doing textile work on looms, and it was automated relatively quickly, within the space of a generation. And that was the technology that they were protesting. It was not marginal laborers protesting the loss of marginal labor. It was highly skilled, socially prominent people protesting the loss of highly skilled labor to technology. And the analogy to my current position is honestly a little terrifying. ZACH: It's not lost on you. NOEL: No, it's not lost on me at all. ZACH: I'm also worried. As big technological shifts happen, sort of like the rise of mechanization at the edge of the steam revolution, I think that we're in the same place: we are on the cusp of something that is going to be transformative. It is going to change a lot of work. It is going to change a lot of job roles and what it means to do those jobs. We have already talked about what it means to be a programmer, what it means to be a designer, what it means to be a doctor. There are many changes that are coming, and they are coming because of these specific technologies. And I think that we're going to put many, many hundreds of thousands or millions of people out of work. I have a young kid, and I'm hopeful that she does not grow up to be a truck driver. I think that job is almost certainly going away. And the question is, is it going away over a 30-year or a 50-year time horizon, or is that going to be something that is actually accelerating? Is the pace of change truly accelerating for the ways in which technologies are getting into our lives?
And if that is true, then that is actually probably societally pretty dangerous. NOEL: Yeah. And do we have the social will and the political will to deal with that in a way that is going to mitigate some of the worst effects? Or are we just going to let it happen? ZACH: I wish I had the answer. NOEL: Yeah. Tune in next week when we have all the answers. ZACH: Part two of this podcast, we'll just answer that one too. NOEL: Is there another point you want to make? Something else you want to cover right before we come up on time? ZACH: Yeah. One of the things that I really believe in is actually stolen from the education space. I've been thinking a lot about what the human skills and capabilities are that are going to be maintained over eons, over real time. And I think there is a whole set of human skills that are not easily replaceable by technology systems. So, I'm thinking of the 4 C's. A 21st-century education partnership created these 4 C's: critical thinking, communication skills, collaboration skills, and creativity. Those are not the things that technology systems, at least not in the deep learning, neural network way, are likely to take away from us: the human skills and the powerful ways in which we deliver value that are about critical thinking and decision making, about how we communicate and collaborate effectively. I think those skills are going to hold up over time and are actually going to be useful even when AI systems get smart enough to be a full partner in some of those conversations. So some of the communication or collaboration you do might be with an AI system, but that doesn't mean that human beings aren't still the ones who are going to actually drive those conversations forward. I think we're a whole 'nother AI revolution away from technology systems that are truly creative or truly good at making decisions in the way, or at the same level, that human beings are. So, I'm optimistic, I would say. I mean, I know we've mostly talked about pessimism today. But for me personally, I really believe in human skills and in trying to find ways to make the kinds of work that people do more productive, but maybe centered on some of those things instead of centered on rote or analytic work that's more easily taken care of by machines. NOEL: Well, thanks for being on the show, Zach. Where can people find you online if they want to continue this conversation? ZACH: Absolutely. I would love to chat. I'm pretty active on Twitter @thinky. And my company's called Helpfully, so you can just find us at helpfully.com. NOEL: Great. Zach, thanks for being on the show, and we'll be back in a couple of weeks with the next episode of Tech Done Right. Tech Done Right is a production of Table XI and is hosted by me, Noel Rappin. I'm @NoelRap on Twitter, and Table XI is @TableXI. The podcast is edited by Mandy Moore. You can reach her on Twitter @TheRubyRep. Tech Done Right can be found at TechDoneRight.io or downloaded wherever you get your podcasts. You can send us feedback or ideas on Twitter @tech_done_right. Table XI is a UX design and software development company in Chicago with a 15-year history of building websites, mobile applications, and custom digital experiences for everyone from startups to storied brands. Find us at TableXI.com, where you can learn more about working with us or working for us. We'll be back in a couple of weeks with the next episode of Tech Done Right.