The following is a rough transcript which has not been revised by Vanishing Gradients or Heather Nolis. Please check with us before using any quotations from this transcript. Thank you.

hugo bowne-anderson Hi there, Heather, and welcome to the show.

heather Hi, Hugo, nice to be here.

hugo bowne-anderson Such a pleasure to have you on the show. And you are a principal machine learning engineer at T-Mobile. Is that correct?

heather Yes, that is true.

hugo bowne-anderson What does that mean?

heather The very shortest version of what that means is that I'm in charge, specifically, of all the machine learning within the care desktop organization, which is a lot of words to say I'm in charge of giving the thumbs up on, and directing, the machine learning pieces of anything that helps our call center employees.

hugo bowne-anderson And you built a team from a very small number of people to maybe 15 people in the past several years.

heather Yeah. So when I started at T-Mobile, I was actually a software development intern. And we had an internal funding round where you can submit an idea, and the executives vote on it, Shark Tank style. If yours is one of the best ideas, you get money and you get to go out and build your own little project. And so with my manager and my director at the time, we submitted an idea that ended up getting funding, and they were like, okay, who's going to work on this? I said, pretty please, me. They said, you're an intern. And I said, yes, I know, and I want to be on this new initiative project, and I want to be in a leadership position. So I started as the first machine learning engineer on our little proof of concept project. And now it's an enterprise team where we basically help ten, ten-ish other teams around T-Mobile do any of the machine learning things that they want to do.

hugo bowne-anderson Amazing. So I'm envisaging, as you said, a Shark Tank-esque type situation. But instead of Mark Cuban, we've got maybe some VPs from around T-Mobile or something. And you're pitching ideas to them?

heather Yes, yeah. It's actually all of the vice presidents; we call it the vice president council. So it's all of the vice presidents that are under our CIO, literally all of them.

hugo bowne-anderson Great. What did you pitch? What was the initial pitch?

heather So our initial concept: the team that I was on had built all the plumbing that lets people message our call centers, so you can either Facebook message with us, or use the app to message with us, or go on Twitter to get help with your phone. So we sat on top of all of the data that flows through those pipes, these transcripts of conversations, and it's our customers telling us literally every problem that every T-Mobile customer is having. And so we had the brilliant idea of, what if we put some AI on top of that? Surely there's something good inside of it. So it started as a very high-level pitch that was just, let's put AI on top of those transcripts, and that's what we got the three-month go-ahead and funding for. And what it actually became by the end of those three months is we built a topic model with a custom taxonomy for T-Mobile problems. And then we used that topic model to power a GUI that would sit in front of our care agents: if we know the conversation is about network now, what can we show the agents, knowing which customer is calling, knowing all the information in the T-Mobile systems, knowing all of the information about that customer's account, and the fact that they're talking about networking? What sort of facts or articles would help them understand how to solve the problem? What can we actually, physically pull in to show them on the screen pretty much instantly? And so that was the end of our 12-week POC.
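For readers who want a concrete picture of that kind of pipeline, a minimal sketch in R might look like the following. This is illustrative, not T-Mobile's actual system: the `transcripts` data frame (with `convo_id` and `text` columns), the choice of 25 topics, and the hand-mapping onto a taxonomy are all assumptions.

```r
# A minimal sketch, not T-Mobile's pipeline: fit a topic model over chat
# transcripts, then give each conversation its most likely topic so a
# GUI can key off it. `transcripts` (convo_id, text) is assumed data.
library(dplyr)
library(tidytext)
library(topicmodels)

dtm <- transcripts %>%
  unnest_tokens(word, text) %>%            # tokenize the transcripts
  anti_join(stop_words, by = "word") %>%   # drop common stop words
  count(convo_id, word) %>%
  cast_dtm(convo_id, word, n)              # document-term matrix

lda <- LDA(dtm, k = 25, control = list(seed = 42))  # k is illustrative

# Most likely topic per conversation; topics then get mapped by hand
# onto a custom taxonomy ("network", "billing", ...) via their top terms.
topic_per_convo <- tidy(lda, matrix = "gamma") %>%
  group_by(document) %>%
  slice_max(gamma, n = 1, with_ties = FALSE) %>%
  ungroup()
```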
hugo bowne-anderson Sounds amazing. Are you at liberty to tell us how much funding you received for those first three months?

heather Yeah, so our initial investment was $250,000. So you get a quarter of a million dollars to do your three-month POC. And this program where you pitch and win money is still active at T-Mobile; it's now called the idea box program. We pitched and won the first year that they had it, and then the next year, when it came up again, I pitched another AI idea and won another $250,000 and three months to focus on it, which is what I'm actually doing now. There are a few things that are really nice about it from an enterprise perspective. Number one, when you are doing work in an enterprise, it's really hard to be innovative, because your roadmap is laid out so far in advance, and you have stakeholders. And when you're a publicly traded company, you have shareholders to whom you owe some sort of results. So you're constantly saying, this is what we've promised our shareholders we're going to deliver this year, and this is what I really want to do, and if I want to get this onto the big roadmap, it's going to take five years. So finding a way to get funding to do the work that's really exciting right now is tricky. And then the second thing is, when you work in an enterprise context, people tend to have an idea of what a team should look like in their head. They're like, yes, a software development team: you have these six people, and then you have a scrum master, and it's agile, and it works like this. But for a company that didn't have a really solid machine learning software infrastructure, we didn't even have a template for what a real machine learning team should look like, or an AI team, or a data science team that's building production models. And so being able to take that funding and staff my team correctly was very, very important. You'll find this, obviously, shocking, but when we pitched this I was on a team that had four data scientists and not a single data engineer, because it's really hard to prove to enterprise product development the value a data engineer will create. They'll say, okay, well, what features are they going to build? Well, no features that you're going to see in the end product, but I promise they will make my job easier, please bring them on. That was also the first time that we brought a designer into the data science team, so that design and data science iterate together, to make sure that every design we're making is data-relevant, based in the truth that we know is happening in the world. Those two things, being able to staff my own team, add my designers, add my data engineers, and craft that experience, are all things that are now standard for how our team runs, and at the time were very shocking.

hugo bowne-anderson And just to step back a bit, did you say that when you pitched this the first time, you were still an intern?

heather Yes, yeah, I was an intern. So I was included on the pitch deck with my manager and my director, and they kind of did the pitch.
And I was the tech person behind it, being like, I promise we'll build this. And then the second round through, I got to be the product-type person and craft the whole concept.

hugo bowne-anderson And your background at that point was in software engineering, not necessarily data and NLP and topic modeling and this type of stuff, right?

heather Yes, yeah. My background was in software engineering; I was in a master's program at the time, which is why I was an intern. But the whole reason I wanted to start coding was because my first career was in neuroscience. In neuroscience, I was on a toxicology project for a year when I was an undergraduate, and I had to keep these rats alive, and check on them every day. What I actually had to do was use a very, very small ultrasound and measure the speed of blood going through different vessels in their bodies. So I got to know these rats very well, spent hundreds of hours with them. And when I was done, they told me to hand all of my data to the analytics team, so that they could figure out what was significant and what the results of the study were. And I almost blew a gasket, because I was like, wait a second, I'm going to hand it to this team of people that don't understand anything about biology, and they're going to tell me my biology results? That doesn't make any sense. I was like, I'd rather go away and become qualified to do this myself. So that's when I took my first bioinformatics course and started learning Python. My whole way into computer science was based on this idea that eventually I would go back and my data set would be the human genome, but I just, you know, kind of stayed over here.

hugo bowne-anderson Awesome. So when was this initial pitch?

heather It's almost four years ago now.

hugo bowne-anderson In around four years, from software engineering intern to principal ML engineer with a 15-ish person team and 10 different teams as stakeholders. Firstly, that sounds amazing, and congratulations; it's clearly incredibly well deserved. But maybe we can tease apart what that journey looks like. You can start, and then maybe I'll just interrupt you at several points.

heather Yeah, absolutely. And it's really interesting, because I am responsible for the machine learning role actually being created at T-Mobile; we have it as a specialization of software engineering. As an intern, I was in a master's program, so I was a little bit older than most interns and had industry-relevant experience.

hugo bowne-anderson A neuroscience background is, I mean, you're a science insider as well, let's be clear.

heather Yeah. And I always find it kind of interesting that I thought I would grow up and make fake brains built to act like human brains, and instead I grew up to make fake brains to power software. I always thought any modeling I would be doing would be to simulate what's really happening in people, not to simulate something in software. When I began, I was hired for the software engineering internship specifically because I was making Twitter bots at the time. I had all of these Twitter bots I was making using, like, Markov chains, to make horse_ebooks-style Twitter accounts, if you remember horse_ebooks from back then. I did all sorts of different projects like that.
And those were all on my resume when I applied. Originally, they were like, great, this is somebody who knows the Twitter API really well and can dev on the Twitter API. So that's kind of what they brought me in for, but the thing that I liked about making Twitter bots was the NLP part of making Twitter bots. And so when we pitched the first, hey, we're going to use AI, I said, please, please, please, pretty please, I want to be your first engineer. They said yes. And then immediately they said, who should we bring in to do the data science on this? I said, well, good news: I'm married to a brilliant data scientist. Her name is Jacqueline Nolis. And so my wife came to work on my team with me for a year to help us get the data science under control, because, like you said, at that point in time I wouldn't have known how to find a good data scientist in the wild. I would have probably trusted anybody.

hugo bowne-anderson I think part of the challenge when hiring is that our bias is to trust people who exude confidence, which can perhaps be inversely correlated, in some populations, with actual skill.

heather Yes, yes. And that's part of why, when I get all these questions like, was it weird to work with your wife? It's like, no, we were doing these side projects together to begin with.

hugo bowne-anderson When you were talking about the Twitter stuff, I was wondering, was that stuff you were interested in with Jacqueline as well? Because I know that she's a big fan of this type of stuff.

heather Yes. Well, what I would say is that when Jacqueline and I work together, I'm product, right? So when she built Tweet Mashup, I was product for Tweet Mashup. I had all these ideas; I just didn't know how to execute on them, you know? Okay, cool. So we brought on Jacqueline, and we spent a year actually standing up and releasing our first full software product, because we'd spent those first three months building our topic model, but there are a lot of other bells and whistles that need to go onto something to make it a fully performant enterprise production software application. So we spent about a year getting that stood up, and at that point in time we started to spin off into multiple teams. We were like, great, we have one product out and running; where are we innovating now? That was the first time I had to be away from Jacqueline; we got to spend a year together very tightly working together, and then we split. And I ran a forecasting project, where the idea was: if we own all the messages that are coming into T-Mobile, we should be able to do all sorts of very clever stuff with that. For instance, people message us very quickly whenever cell towers go down, because they can't call us, because the cell towers are down. So we asked, what sort of forecasting can we do per topic, so that maybe we could say, hey, billing, we know there's an issue in your billing system, because there's some increased traffic happening around billing and it's gone outside of our bounds. So we did this dashboard project.
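A hedged sketch of that kind of per-topic traffic alarm is below. The column names (`topic`, `date`, `n_messages`), the 28-day rolling baseline, and the three-sigma bound are illustrative assumptions, not T-Mobile's actual dashboard logic.

```r
# Illustrative only: flag topics whose daily message volume jumps
# outside a rolling baseline, the way a spike in "billing" chatter
# might signal a billing-system issue.
library(dplyr)

flag_spikes <- function(daily_counts, window = 28, k = 3) {
  daily_counts %>%
    group_by(topic) %>%
    arrange(date, .by_group = TRUE) %>%
    mutate(
      baseline = zoo::rollapplyr(n_messages, window, mean, fill = NA),
      vol_sd   = zoo::rollapplyr(n_messages, window, sd,   fill = NA),
      spike    = n_messages > baseline + k * vol_sd   # beyond bounds
    ) %>%
    ungroup()
}

# e.g. flag_spikes(counts) %>% filter(spike, topic == "billing")
```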
After the dashboard project, we said, okay, now we want to take our original product and make it speech-related instead. So they were like, can you put speech-to-text on top of your current product? So okay, now we're doing the dashboard, we're doing our main help desk product, and we're doing speech-to-text as well. And then they said, okay, and now we're going to stand up a chatbot. Okay, cool, now we've got that project too. And so it was incorporating all of these items slowly; I was able to really stretch the amount of things that I'm doing and spend a year almost deep-diving on each of these individual topics, which has kind of rolled up into the engineer that I am today. I was hired as a standard engineer and promoted to senior engineer really quickly after my hiring, because, I mean, as an intern I felt like I was already doing senior engineer work. And then getting to principal was a little bit trickier, because to be principal you have to do what I said: you have to be over multiple teams, you have to be writing roadmaps and guidelines. So some of the things I did to help push me to principal were, number one, we have a software engineering competencies sheet, so for every level of being a software engineer, what exactly do we expect from you. I had to make sure I met the principal one, and I created a complementary competency sheet for machine learning. And then there was my pitch and win for the smart coach project. We talked about the original topic model that I did, but when I pitched an idea to the internal Shark Tank, it was for automatic coaching for individual call centers. I think that POC, and the patent pending associated with some of that stuff, is what made me principal-worthy.

hugo bowne-anderson That's awesome. There's so much to unpack in there. You just mentioned four different successive products and projects. What type of buy-in did you need from different stakeholders, I suppose VPs, the CIO, whoever it is at an executive level, to start on the next project, and to hire more people, and to get more funding?

heather In general, we are a POC-first team. So when it comes to getting funding, what my team likes to do is anything that we possibly can to prove our new use case on the side while also doing our regular job. We ask, is there any proof of concept that we can do to prove this out? And we do that first: we get an actual, real, working bare-bones demo in front of our stakeholders before we even start the conversation about what product we actually want to build. Because saying something like, wouldn't it be great to have speech-to-text running on every care call and use it to do AI? That's a very lofty concept. But if you can sit down and show somebody, hey, watch, I'm making a call, watch it transcribe on the screen as I'm speaking, watch my model pick up that text and assign an intent; now think about all the use cases and experiences that we could put at the end of that intent. Even if it's just one very small use case and you're just proving the technology, it becomes infinitely easier to show the value of your product and to get stakeholders and executives to give you their financial buy-in. So in general, that's what we want to do. An example of this is our first use case for our chatbot, which was payment arrangements. It was the beginning of COVID, and the entire world was feeling the financial fallout from that. And we had customers coming in who wanted to set up a late payment plan, which we call a payment arrangement.
It's a very routine task that our experts do, setting up a payment arrangement. But we know from market research that, first of all, not everybody wants to talk to a real person. And secondly, especially on sensitive topics, it can be really embarrassing to come to a human and say, I need to pay my bill late, and I need empathy from you. So it was a combination of those things that came together and got us approved for our original use case, which was just to set up basic payment arrangements. That was all it was going to do. But then, of course, while we were doing that, we were proving it out and developing features two, three, and four. So by the time we gave our first deliverable to our stakeholders, we had three more demos queued up, ready for them, hoping that they would continue to fund our work.

hugo bowne-anderson Great. What type of value, I mean, pick any of these projects: how do you measure success and demonstrate that it's actually creating value for the executive in question?

heather So I'm going to talk about my chatbot project here, because I think that one was really great. But first, let me start by saying that in our original topic model POC, we were horrible at this, absolutely horrible. I still don't know the value that my product brings in some regards, due to some of the KPIs that we originally thought were going to be fantastically trackable but then turned out to be noise, and not really the KPIs that drive value to my product. We also didn't do the due diligence of creating really good logs to iterate from. So instead, I'm going to talk about my chatbot project, because we got to start that one when we were a little bit more mature, and I'm proud of it. Before we even launched, we came up with our exact formula for what a return on investment would be, and we did this by asking ourselves: without our product, how much would this experience have cost the company? So before we even released, we had a dashboard that tracked our return on investment in real time, based on all of the logs coming out of our chatbot. And because it's a chatbot, we decided to also monitor her like we do our real call center experts, asking questions like: what is her call volume? What is her customer satisfaction score? What is her NPS? Did she, this chatbot, actually complete an action, or did we have to transfer to a live agent? By tracking both ROI directly and also our quality in real time, we were able to create a product that has the most fantastic ROI of any product I've ever worked on, over seven times the initial investment. And to me, the most astounding thing here is that this dashboard was absolutely key in our strategy during the first year; we referenced it literally daily to make sure that all the development we were doing was actually bringing value, and high-quality value, to our company. But I honestly think that if we had released our initial chatbot without the dashboard, I don't know if we would ever have gotten approval to go back and build it. So we would never have this level of insight on our product, which is pretty horrifying, because to me it's the most impactful thing that we built. I've just never been anywhere else where I could log on every single day and know exactly what my project is doing for the company, instead of just wondering. And I think it does a lot for developer morale as well. Because even for the data scientists and the software engineers on the team, being able to go to that live dashboard and say, hey, what I'm doing does have meaning, and it's actually very valuable, and here are the real hard numbers on it; it's great for both the business and the team.
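As an illustration of the approach (the log fields and the cost constant here are assumptions, not T-Mobile's real schema or formula), the daily roll-up behind a dashboard like that could be as simple as:

```r
# One row per chatbot conversation in `logs`; the ROI framing is
# "what would the contained conversations have cost a human agent?"
library(dplyr)

COST_PER_AGENT_HANDLED_CHAT <- 5.00   # assumed fully loaded cost, $

daily_kpis <- logs %>%
  group_by(date) %>%
  summarise(
    volume      = n(),
    containment = mean(completed_action),        # TRUE if no handoff
    csat        = mean(csat_score, na.rm = TRUE),
    roi_dollars = sum(completed_action) * COST_PER_AGENT_HANDLED_CHAT
  )
```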
hugo bowne-anderson Incredible. That is clearly an absolute success story that is also measurable, which I'm a huge fan of. I mean, there are things that metrics can't measure, and a lot of qualitative things we need to take into account when figuring out what creates value for the business and customers and stakeholders. I am interested, if you're at liberty to discuss, in any fails. Honestly, I don't think we hear enough about fails, besides the ones that PR teams want to put out there, right? So is there anything that's gone wrong, or anything that's kind of sucked, that you're at liberty to discuss?

heather So I'm going to do the political thing here and talk about something that sucked, but then I'm going to talk about how we fixed it and made it even better.

hugo bowne-anderson Awesome.

heather So the way that we were developing our chatbot was incredibly agile. There are some chatbot teams that will sit down and envision a fully fleshed-out flow with many branches, do all sorts of research to hammer down the design, develop that over the course of, like, six months to absolute completion, and then launch it. But we're not like that. We were more like: Heather creates intents, great, they look good, let's attach an experience to them and see how it goes. So we did model releases every single week. Sometimes it was tuning up old intents, but most of the time it was introducing something new: little intents, maybe even small flows or experiences. And one idea that we had: we saw tons of people coming in with questions about our 5G network. So we said, okay, I'll develop an intent about network. So I did, and we released it. But then people would come in and say something like, hey, I've got a question about my signal. And we would assume that they were talking about the signal from our network, because we are a cellphone company. But they actually meant their Wi-Fi signal. Or maybe they meant their T-Mobile home internet signal. And sometimes by signal they actually meant data specifically, because what they were actually trying to do was increase a data allowance, or get a data pass, or upgrade their plan. So all of these are nuances to signal and network questions, but we had kind of assumed that everybody would be doing one thing, asking about our 5G network. It was a really stinky experience, and we didn't know until we released it. After every release, we have eyes-on-glass time, where we just watch what's happening in production and make sure that we're getting the results that we want. We immediately saw this network experience stink, and we turned it off. We have the capability to turn things off in real time: we have a cute little GUI, you just uncheck a box, and the experience is no longer in production. So we turned it off almost immediately. But then we did something pretty cool to prevent this from happening in the future. We created a process we call Shadow Mode, which basically means that before we launch any new intent, we take a month of the previous data, which includes the utterances and the classification output, and we run our new chatbot model against that old data. Then we review any changes or unexpected classifications. This is all to try to catch these unknown-unknown experiences that could trip us up and cause us to release something where, like, 50% of the people triggering an intent are getting a chatbot response that just absolutely isn't relevant to them.
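In code, a Shadow Mode pass might look something like the sketch below; `predict_intent()` and the column names are stand-ins for whatever your model-serving layer actually exposes, not T-Mobile's internals.

```r
# Replay last month's utterances through the candidate model and pull
# out every conversation whose predicted intent changed, so a human
# can review them before the release.
library(dplyr)

shadow_diff <- history %>%     # ~1 month of (utterance, old_intent)
  mutate(new_intent = predict_intent(candidate_model, utterance)) %>%
  filter(new_intent != old_intent)   # only the changed classifications

# Where are the biggest swings? Review these intent pairs first.
shadow_diff %>% count(old_intent, new_intent, sort = TRUE)
```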
hugo bowne-anderson Awesome. I love that example, because it actually speaks to a lot of challenges in machine learning software: chatbots, but machine learning, data-powered software in general. I was actually chatting with Rachael Tatman about this recently, who I know has visited T-Mobile. And we were talking about the fact that when you build a chatbot, no matter what you think people are going to say to it, you're going to get it wrong. Humans will put all types of stuff in there, and mean different things, and bring their own things into it. And being able to watch it in production, or run it on previous conversations as you did, is incredibly important in order to debug in production as quickly as possible while trying to minimize harm, which is something that we'll need to get to as well.

heather Well, there's also this thing where people say, people are always going to say things to your chatbot that you don't expect. But also, with any feature in software, people are going to use it in a way that you don't expect. One of my favorite examples is that some people will message T-Mobile in the app their grocery store list every week, because that way it's on their phone when they get to the grocery store and they can look at it. It's not actually meant for me. The first time we encountered data like this, people were really surprised by it. And I wasn't, because I was like, do you want to know who my URL holder is? It's United Airlines' Twitter account. Anytime I need to transfer anything from me to my computer, it's United Airlines' Twitter account. So just remember: not only will people say things that you expect in a different way, people might be using your software completely differently from how you envisioned it. And you don't want to take that functionality away from them.

hugo bowne-anderson Yeah, very much. That speaks to a broader point that I've been thinking about recently, and writing about as well, and because your backgrounds are in neuroscience and software engineering, I'd be interested in your thoughts on this. We're at a point in time where we're trying to figure out what types of best practices, tools, techniques, methodologies, and software to import from software engineering into data science, right? And I'm thinking about the deployment story, which is not solved, or, well, you have a great deployment story, but, what am I talking about? The deployment story in general is something that is actively in development, and the tooling space is incredibly fragmented. There are a lot of challenges we're facing. I think one of the big things that has changed from software to data-powered software is that we are introducing a lot more real-world complexity into the system.
And we get unexpected things happening, and we're learning about the types of things that we should expect. And of course, as we'll get to, there are a lot of ethical challenges and problems, which I think we're learning a lot more about. There are lots of other things happening as well. But is this one of the biggest differences, in your mind, between traditional software and machine learning, data-powered software?

heather I think about what the difference is between software engineering and data science, like, that's 80% of my brain on any given day. Because, as I mentioned, we have these 10 different teams that we're servicing, and they're all incredibly traditional software development teams. So I spend a lot of time teaching product people who have never thought about data how to think about it, because I want them to be good customers for me, not, like, stinky backseat-driver stakeholders. So I think that's definitely one. I think that also software engineering has traditionally just closed its ears to this, right? Like, when we say, what's the ethical implication of creating XYZ feature? Engineers in general, pure engineers, are way more likely to say, we did it because we could do it. What I find interesting about data science is that, as scientists, we're always looking to close that loop and say: you did it, but what's the impact? So I almost look at it the other way: this is what data science is forcing software engineering to finally reckon with, that the things you build have an impact, and they scale infinitely, because you're very good at being an engineer. And so we really have to start thinking about that, and we force software engineering to think about that by using numbers and by using science.

hugo bowne-anderson Great. I want to talk about, and this comes out of all the work you've done, but chatbots in particular, the labor market, and, I suppose, what McKinsey Global would call, quote unquote, the future of work, or something like that. I think we have a lot of questions around what's automated and what's not, and whether AI and machine learning are replacing certain jobs, and what happens to those parts of the workforce. There are other questions which might not be so relevant for our conversation, like how data science and tech have enabled the gig economy and ghost work in a variety of ways, which I think is going to be one of the biggest challenges moving forward. But in your work, and what you see at T-Mobile, the first question, and the really dumb way of stating it: is AI replacing humans?

heather No. So, in general, first of all, we're not there yet. We're not advanced enough yet; AI systems have a lot of trouble holding context and generating responses as quickly as humans. So, technically, we couldn't do it. And then there's the second question: if we could, would we? And my answer to that is almost always no. We spend a lot of time going to call centers, actually watching people, because when I think about the future of T-Mobile and what this is all going to look like, I don't see a world where chatbots are answering all of our problems, because I look at the conversations people try to have with chatbots every day.
And the answer is really that the technology is at a point where we can solve super simple things, super routine things, things that should be automated. A lot of the time, I say what we do in my work is build trigger models that then kick off an experience, but that experience requires a human brain to actually execute it and go through it. And so in my work, the major goal when we sit down to interview these call center experts is: what parts of your job do you hate? Like, what parts of your job do you hate? What do you have to do over and over and over again, where you're like, my brain would be so much happier if it could be focused on complex and rewarding tasks? For me, that's really what the future where both AI and humans are working together looks like: we have AI handling the simple, the rote. For instance, when we asked our call center experts, what do you hate about doing this, they said, we really don't like writing memos after the conversation is done. We already had the whole conversation, we worked really hard on it, we solved your problem. So to me, I'm like, great, cue summarization of a conversation: that's something AI can absolutely do. And it's not stealing their job; it's giving them the next five minutes to actually help a person, which is the part of the job they like.

hugo bowne-anderson Yeah, that's fantastic. It kind of reminds me of, I mean, I actually don't know whether this is true or not, but the apocryphal tale, and it's published all over the place, I still don't know if it's true or not: when ATMs became a thing, it meant there were a lot fewer bank tellers needed, but a lot of the people who would have had those jobs went into, I suppose, financial services for banks and that type of stuff. So we're doing more stuff with humans.

heather Yeah. Well, and very explicitly, this is something we've toyed around with, but we don't have an official roadmap for it. I think all the time that in building chatbots, building call center technology, the people who actually know this data the best are the people who have been answering the phone for years and years and years. We are in talks with our stakeholders, and they are very interested in, for any of those people who might feel that way, who might feel like, is technology coming for me? How can we get them into a technical role? Because they are subject matter experts in the area where we need subject matter experts. So whether it's helping us label data, or informing new features, or running focus groups, what does that look like? Because AI is only built on top of labeled data, right? We still need data labeling; we still need people to be doing those jobs. And then we actually have a program at T-Mobile where experts from the call center can spend six months on our team, and we always have one or two people from the call centers spending six months on our team helping us. Because a single idea about AI we can build, coming from somebody who actually works in the call center, is infinitely better than anything any executive or product person gives to me. They can have a great idea, and they can read these articles about what AI can do, and they can look at the long-term vision. But the only people who know the day-to-day work, the only people who know the menial tasks that we're trying to get at, are the actual experts in those call centers.
hugo bowne-anderson It's amazing that we have to have this conversation. It's incredibly frustrating, actually, that we're at a stage in the industry where it's amazing to say: hey, who would have thought that consulting the people most impacted, the people who have most of the knowledge, having conversations with them, conversations that aren't scalable, and for very good reason aren't scalable, and getting their knowledge and their input into products that affect them and that they know a lot about, actually is best for everyone?

heather Right. Well, I think there's a general concept of how to look at the people who are doing these jobs, and this is why at T-Mobile we call them experts, not call center employees, because they know everything about what they're doing. And I think that there's a tendency to think that they're just writing words. A literal example I can give: on our team we have skunkworks projects, so you can go away in a hole for two weeks and come out and show us what you did. And at one point, when Jacqueline was on the team, we kept getting a lot of pressure; at the time it was GPT-2, and they were like, look at GPT-2, please do anything with GPT-2. And I was like, I don't think there's any production use case for this. And they were like, please. But the POC that we did, which has stopped executives from asking for this, is that we just retrained GPT-2 on T-Mobile conversations and tried to simulate a conversation with a customer. So we took real first messages and just asked, what would GPT-2 have responded? And it's spaghetti-string stuff; there's no context maintained. And could we eventually do enough engineering to maintain the context and have it produce the right response? Maybe. But it's not a response we can be held legally accountable for. Because with our call center reps, when they mess up, we can go and say, hey, you said thing X and we really wanted thing Y, why didn't you do that? With a model, you can't have that level of accountability. And so that kind of squashed the conversation going forward: just showing them straight up how ridiculous it would look, as opposed to when they read articles about these large language models and they're like, look, Elon Musk's language model writes a story. Okay, but it only has three memory slots, you know? You can break it down that way.

hugo bowne-anderson This is actually a story I'm hearing increasingly more and more, I think because the tooling landscape is becoming so dense and fragmented and rich in a variety of ways. People in positions such as yours, and I'm wondering if this is your experience as well, it happened clearly with GPT-2, are being sent things from colleagues and higher-ups: hey, this new technology looks cool, can you try this? Can you try that? Can you try that?
And then simultaneously, they okay, we can build a topic model. And then they read, like what's going on with open AI, and they're like, Oh, my goodness, models can write stories, and it have full conversations. And so it creates like a horrible gap in education, where they like people believe because of AI like hyper media, that these companies are fully solving problems without understanding the nuts and bolts, and then also can't see past like the simple solutions that they've already encountered. So like, constantly, I get well, in fact, I'm on PTO right now. But I did get a ping yesterday about using GPT. Three, and I did respond to it for like 40 minutes in Slack. Because I was like, no, like, I'm coming off PTO to have this conversation with you. So yeah, a combination of I saw this online. That looks really cool, please. And can you just do that same trick that you did for us before over and over and over again? Because that's the only one we trust? hugo bowne-anderson Well, maybe in future you can send this brief clip from this podcast as well. Yeah. heather This is why it won't do it. Well, the thing that's wild is, it'll be like, there's also like, people don't understand that there's like families of models. So there'll be like, No, but you said that about Bert. But this is GPT. Two, and I'm like, or like GPT three, and I'm like it's all the same. Like it's all the same, the same ethical questions apply the same limitations apply. Yeah, hugo bowne-anderson this is awesome. Because you mentioned education. I'm interested in talking about how education works in like, multi directions, how we learn about what people's needs are from from the data world, and they can learn about what data and AI and these types of things can provide should not be a one, nothing should be. Okay. Very few things should be one way streets and a lot of things that shouldn't be a one way streets. Firstly, I'm impressed that you have non technical colleagues who know what a topic model is, I'd love to hear a bit more about that. These are several questions, I'm going to break the cardinal rule of asking questions and add a third question to it, which I think maybe can tie all of these things together. The third question might sound slightly more abstract to the listener, but we'll get into it. The third question is, what is Heather's home for lonely data scientists? Yeah, yes. heather Yeah. So as an example, for the first fit, which is like, like, you're like, I'm impressed. You have non technical coworkers to know about this. We actually got feedback recently, which was, why do our care stakeholders know so much about data science? And I was like, I've been teaching this other school. That's why. And so that's part of it. But in general, how would you approach a new team? For instance, I told you we have all these like this 10 different teams kind of orbiting around us right now. And so when we were introduced to them, I first was like, I'm going to do one talk for you, and it's an hour long and it will be just how to start thinking about product, which is that is powered by data if you're coming from the software engineering world, so like No, no context there. 
And I tell some horror stories of how I've seen people backseat-drive algorithms and drive a product into the ground, and I give some high-level things to think about, like: you actually have to think about curating data maybe years before you're able to execute on that really cool idea you had; if the data doesn't exist, you're screwed. Just little things like that. So we have that conversation, and that was one presentation that I did for all the product people. And then we set up an hour with each of them where they got to present their products to me, and I just asked questions. I'm not trying to put AI into it, I'm not trying to solve anything; I'm just investigating and trying to understand: for this whole world of software that you own, what are the major problems that you're solving? What are the major problems that you're seeing? And what do you think your roadmap looks like? Then we can start talking about how we're going to add AI onto that. But at the end of every single one of those conversations, I'm just like, hello, I see that you're refactoring; I would love to be involved in creating your telemetry. That's my number one thing, because if they don't have the correct logs, we're never going to have the right data to, in quotes, make magic like they want us to. So I make sure that's really tight, and I get in with the teams down to the level of: we're adding a new feature. Great, but you're writing logs for that feature, right? I'd love to show you exactly what they should look like. So that's one. And then we also periodically do readouts that we invite any stakeholder to, whether you're in the technology organization, or you're a care stakeholder, or a frontline person, where we go over some of the nuts and bolts of what we've been doing. So I have one that's, how does a chatbot work, phase one; and then a deep dive, what are the models inside of the chatbot and how do they work, phase two. We also have ones around speech-to-text and things like that. So just periodically. Oh, I keep hitting my microphone.

hugo bowne-anderson Yeah, just for the listener: Heather and I are sharing video, and Heather is getting so excited she's hitting the microphone. I'm totally good with that; every time you hear a bang, that is from excitement. And we're not going to edit that out, I don't think. It's a great thing, very of the vibe.

heather Yeah, I get very animated with my hands. But yeah, so I just spend a lot of time talking to them, a lot of time communicating with them, doing a lot of readouts. And then, like I said, there's that one presentation I have, the hour-long one, on how to think about data product for the first time if you're from software engineering, and being able to set that from the beginning. Because otherwise, we tend to have product people come in who are really excited about the solution they've already decided on in their head. And I'm like, great, then why do you need a data scientist? Why do you need machine learning? Don't call me, then. So there's that. And then the final piece that we talked about: as I mentioned, my team does a lot of stuff, and it's a lot of stuff that I am not a specialist in, like speech. When we started working on speech, I had never touched speech in my life.
But I know there are people in T-Mobile who have, right? People who understand data, whether they understand it because of their academics or because of the work that they're doing. They exist throughout any company, whether or not they have the title of data scientist. And so one of the things that we started on my team is what we call Heather's home for lonely data scientists: on Fridays, we invite anybody at T-Mobile who has even an interest in data science, and we have standard talking topics. Sometimes it will be somebody on my team sharing something they worked on recently, maybe a skunkworks project, or maybe something that we're actually going to put in production. And it might be somebody else from another team coming in to tell us what they do. We've even had call center analysts, people who sit in the call center as resolution experts, just in charge of their call center metrics, who work entirely in Excel, come in, and I'm like, what's important to you? What do you care about? What are you looking at? It's just to create some community where we all get to learn about all these different parts of the data world that we don't touch, and to have people kind of sanity-check your work. You know, if I'm talking about speech, I feel like I've learned a lot by now, and we have a speech scientist on our team who knows more than me, but I want anybody else to be eligible to offer corrections. Also, for people who are actively working in data, you can read a paper and present it to us. So all the time, people who just want to understand more about data science will read an academic article and then present it back to us, so we can have a big group discussion about it.

hugo bowne-anderson That sounds fantastic. And also, how great is it to work with people, and hire people, who know lots of stuff that you don't know, so you can build cool stuff and learn from them all the time?

heather Literally my favorite thing. And I think that's the biggest difference, since we were talking about how I've been promoted very quickly, between senior engineer Heather and principal engineer Heather: I'm able to admit all the things that I don't know now. As a senior engineer, you try to really be like, look at all the things that I know. And now I'm like, I've worked on speech for two years; I'm not an expert. I can say that confidently and kind of own it, and find the people for whom that's the only thing they think about, and have them come and work for me.

hugo bowne-anderson Last time we spoke, and I think I've got some notes here, you may have referred to your team as an ML Ops team. Did I make that up, or no?

heather We are spinning up the ML Ops team, so no, that's something that we're actively doing. We have developers and data scientists right now, but we also have developers that are going through the T-Mobile bootcamp right now, so we're not fully running yet, but we're getting there.

hugo bowne-anderson What's ML Ops?

heather Yeah, I'm glad that you asked. I actually recently, on a call, gave my definition of ML Ops, and somebody quoted Gartner back at me. So maybe I have a slightly different definition. But for me, I'll tell you the fear that it comes from, and then why we built this team and why I think it's ML Ops. It's this topic model that we released out into the world. I think, all the time:
How accurate is it? We can do a model audit, we can look at it, but I would love something to be systematically examining it. And I would love it to be examining it not just for accuracy. I also want to know data drift: have the things that our customers talk about changed since we built this topic model, so that we should think about refactoring our taxonomy? I would also like to think about feature importance: as we've added more features, are they actually driving any value to the predictions that we're making? Any of this stuff is really hard to get prioritized on a product development team, because in product development we're constantly selling features to stakeholders and delivering new functionality; we can't necessarily go back and do all this investigation. So what we're looking for the ML Ops team to do is to be looking at these edges, to be doing automatic bias testing, data drift monitoring, and then to be issuing notifications to the data scientists embedded in products that say, hey, we think we should look at this. We know you're working on this project right now, but this project you just wrapped up might have an issue; can you please look at that? Instead of us constantly wondering, how is it performing? We also have some unsupervised information retrieval algorithms, and the way that we monitor how successful they are is by clicks. But I know from reading research that you can give anybody a ranked list and they're going to click the first item, even if the fifth item is more relevant to what they're saying. So I know that there's bias in there. Just answering the questions for us: how accurate are those things? How do we know when we need to update them? What will that look like? And so for me, what ML Ops is, is all of the DevOps-style thinking and monitoring around model performance that should be done, but that data scientists often aren't given the ability to do, because they're focused on creating the next feature.

hugo bowne-anderson I love it. To take a slightly different perspective on it, and I think we're saying the same thing: I actually wrote something recently, an essay with a colleague who ran ML infrastructure at Netflix, and I'll share the link to it in the show notes. What we were thinking about is the fact that, and we've discussed this briefly before, as an industry we don't have a shared canonical infrastructure stack or shared best practices for developing and deploying data-intensive applications. And so ML Ops really is the tools, processes, techniques, and methodologies that are arising in order to solve for all of these things.

heather Yeah, yeah. And even the things I want the ML Ops team to own: the data scientists on my team have the ability to become as engineering as they want. So if you are a data scientist on my team and you build a brilliant model, and you want to learn how to package it up in Docker and deploy it on Kubernetes as a Kafka consumer, do that. If you don't, and some people don't want to, if you're like, I built a beautiful model, somebody else take care of it, then I want our ML Ops team to be able to step in and deploy and scale those. We also have models that automatically retrain at different times and then have to run all sorts of checks to make sure that they're good enough to go into production. And I don't want a data scientist to have to do that. Right now, on our data science team, if something has to automatically retrain, they're creating their own cron jobs in their own Kubernetes init containers, and I think that's overhead; most data scientists don't even want to learn that. So I want the ML Ops team to own all of that: how is it being deployed? How are we scaling it? How is it retraining? And how are we monitoring it?
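A sketch of the kind of automated gate and drift check she is describing might look like this; `evaluate_accuracy()` and all the inputs are hypothetical stand-ins, and the thresholds are arbitrary, not T-Mobile policy.

```r
# Block promotion of a freshly retrained model unless it clears an
# accuracy floor on a holdout set.
promote_if_healthy <- function(new_model, holdout, floor = 0.85) {
  acc <- evaluate_accuracy(new_model, holdout)   # hypothetical helper
  if (acc < floor) stop(sprintf("retrained model below floor: %.3f", acc))
  invisible(new_model)   # good enough to go to production
}

# Data drift: has the distribution of topics customers raise shifted
# since the model was trained? A chi-squared test on topic frequencies.
drift_alarm <- function(train_topics, recent_topics, alpha = 0.01) {
  lvls <- union(unique(train_topics), unique(recent_topics))
  tab  <- rbind(train  = table(factor(train_topics, lvls)),
                recent = table(factor(recent_topics, lvls)))
  chisq.test(tab)$p.value < alpha   # TRUE means notify the data scientist
}
```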
hugo bowne-anderson Yeah, fantastic. I wanted to save this for later, but we're kind of heading there now: what does the stack look like? What do you use?

heather Yeah. So, T-Mobile in general: people like to think of a telco as a traditional vertical, but our enterprise technology section is very, very up to date. And so we don't really care. When data scientists come onto our team, they can use any language they want. We have data scientists who use Python, obviously, but we also have data scientists who use R, and use R primarily, and that does not impact their work at all. Because in enterprise software, everything is eventually packaged up as a Docker container, and you can put whatever you want in your Docker container; it doesn't impact the system, and it's deployed on Kubernetes. And then we have Kafka consumers and producers whenever possible. Because, rolling into a weird direction: if our models run under 2 million times a day, they are totally performant to run in R, and so we'll just develop them as an R API and release that. If a model requires more than 2 million calls a day, then we start seeing performance lags in general, and that's when we look at taking a trained model and packaging it up as a Java service that will run the model as a sidecar. So then again, it doesn't matter if you're doing it in R or in Python, because you're going to export it and we're going to put it into Java anyway. And then, like I said, Kafka consumers and producers whenever possible, though we do have some API services still, if we need to integrate with teams, especially teams that are not on our Kubernetes cluster.

hugo bowne-anderson Cool. Can you speak more about that? There's a myth that R can't be used in production. I'm not interested in language wars particularly, but I think it would be good to dispel that myth. So maybe you could tell us a bit about how that works in production.

heather Yeah, absolutely. So first of all, whenever somebody says to me that R can't be used in production, my immediate thought is that you don't understand what production is. Because, again, in my world, everything breaks down into a Docker container, which is a completely isolated little mini computer. So why wouldn't you be able to run whatever you wanted in that? I should be able to have Excel in it if I want, you know? So from that perspective, if you think R can't run in production, then your engineers aren't being creative enough. But for me, the really powerful part about having R in production is that it's what a lot of really math-oriented data scientists want to use. And for people who come in with a really strong statistics background and want to use R, I don't want them to have to go through our ML Ops team to deploy if they don't want to. When you package it up as an API, RStudio has a package called plumber that makes an R function accept an API request just like anything else. There are only a few tricks. One trick is that plumber does not accept HTTPS traffic, so you can actually have a container within your container that just accepts that traffic and passes it on. And I have an open source project on that on T-Mobile's GitHub, if you want to look at it.
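To make that concrete, a minimal plumber service might look like the sketch below. `my_model` and `preprocess()` are placeholders, and, as Heather notes, plumber serves plain HTTP, with TLS terminated by a companion container.

```r
# plumber.R: expose an R model as an HTTP API via plumber annotations.

#* Health check for the orchestrator
#* @get /ping
function() {
  list(status = "ok")
}

#* Classify one customer utterance
#* @param text the message to classify
#* @post /predict
function(text = "") {
  list(intent = as.character(predict(my_model, preprocess(text))))
}
```

Running it is one line, for example `plumber::pr_run(plumber::pr("plumber.R"), port = 8000)`, and the whole thing drops into a Docker image like any other service.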
hugo bowne-anderson We'll link to that in the show notes. So that's a little container-ception.

heather Yeah, and there's a whole blog post, if I'm talking containers and you don't understand: it goes from how you build an API with R to how you make it into Docker, and that kind of explains it.

hugo bowne-anderson I know that post; I will link to it in the show notes. That was from a couple of years ago, right?

heather Yes, but depending on when this is released, I'm being asked to freshen up the entire project, for Shiny reasons. So there's that piece. And then the only other piece with R that people get hung up on is that it's single-threaded. But just like with any single-threaded language, you can write it to be multi-threaded, and we have data scientists on our team whom we hired because they were like, I've had to manually multi-thread R before. Okay, cool. And then my other point is, if you are on an enterprise team, single-threaded isn't that bad, because you can have things autoscale. When you set your Kubernetes to autoscale, if you need 20 containers for a hot 10 minutes, that's fine, and then it can scale back down. So we haven't had any issues. The only place where we were forced to start putting things in Java services was when we moved from doing just text-based stuff to doing voice-based stuff, because, as you can imagine, at a telephone company our voice traffic is ginormous. And so that's when we had to really leverage the Java Spring Boot infrastructure that exists to get to that scale. But under 2 million times a day, we're fine.
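On the multi-threading point, a minimal sketch of fanning scoring work out across worker processes with base R's parallel package is below; `score_batch()` and `batches` are hypothetical.

```r
# R is single-threaded by default, but embarrassingly parallel scoring
# can be spread over worker processes with the base `parallel` package.
library(parallel)

cl <- makeCluster(detectCores() - 1)   # one worker per spare core
clusterExport(cl, "score_batch")       # ship the scoring function over
results <- parLapply(cl, batches, score_batch)  # batches: list of inputs
stopCluster(cl)
```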
And if I have somebody coming in from the software world, I want to make sure that they understand data curation and the data science lifecycle, from getting data, to building things out, to being able to spec out for stakeholders that when you change a requirement, we have to go back and get data labeled, and that pushes the product back six weeks, because we can't make that faster. Whereas in software, you can always kind of add more people and it gets faster. So, making sure they understand that. We have a lot of software engineers on our team, but what we found is that we cannot just take any straight software engineer that exists; we have to get a software engineer that's interested in learning data science. Sometimes you get software engineers and they're like, I'm a Java Spring Boot developer, I will develop Java Spring Boot every day of my life. And when we get those people on our team, and they don't want to learn how to do something new and creative, and think kind of out of the box on how we're going to tackle a problem, they struggle a lot. So when we're looking for straight software engineers to join, it's: if you're married to your stack, you probably don't want to join our team; you're going to have to be a little more on the fly than that. And then for data scientists, it's interest in running actual production models. A lot of people like to create numbers that go into a PowerPoint that inform a decision. And for our data scientists, we have to make sure that you're going to want to build something that people are going to touch, and you're going to have to be held accountable for that amount of touch. Like, 2 million times a day your model's gonna make a prediction; you know it only has an 88% accuracy; I need you to be able to go to sleep knowing the 12% of people you wronged that day. So there's that perspective. And then, as you mentioned, we also have data engineers, which is another thing where we need somebody to be very creative, because our tasks can range from, hey, we have a bunch of dirty Splunk logs that we need you to curate into overnight batch jobs pumping into a table that we're going to touch in two years, but I promise it'll be useful then; to, we need a Cassandra database that can actually index our entire customer base and pull features in less than 250 milliseconds, because that's what the enterprise tells us to do. So you have to have somebody very willing to stretch. But in particular, for my team, one of the hires that we just made is our first ethics engineer. And that's because we get a lot of asks, and we build a lot of models, that we don't have the time to invest in ethically. So, like, we do these speech-to-text models; we mentioned Rachel Tatman, and Rachel actually has a paper on bias in speech-to-text models, specifically using YouTube captioning. And we talk about this paper all the time, because I know that my models don't perform well on higher-pitched voices, and I know that they don't perform well on certain types of accents. And so we have the questions of: can we deploy it? Who are we deploying it to? Is it servicing them well? And we need somebody who understands the engineering technicalities of our system, who can come up with numbers and report back how we can improve where we are hurting, and just make sure that there's a focus on those at all points in time.
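As an aside on that second data-engineering ask, a minimal sketch of a low-latency Cassandra feature lookup might look like this in Python, using the cassandra-driver package; the host, keyspace, table, and column names are hypothetical:

```python
# Sketch: pulling customer features from Cassandra inside a tight latency budget.
# Host, keyspace, table, and column names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["cassandra.internal.example.com"])
session = cluster.connect("care_features")

# Prepared statements are parsed once and reused, which keeps per-call
# latency low; that matters under a 250 ms budget.
lookup = session.prepare(
    "SELECT plan_type, tenure_months, device_model "
    "FROM customer_features WHERE customer_id = ?"
)

def get_features(customer_id: str):
    row = session.execute(lookup, [customer_id]).one()
    return None if row is None else dict(row._asdict())
```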
I also like the title of machine learning engineer of ethics specifically, because I think for most machine learning engineers, all the time that we spend dedicated to ethics feels annoying to the business. And I don't know if that's true, but it feels like: I'm an engineer, I'm supposed to be creating value, but right now I'm going to read a paper about the environmental impacts of large language models, and that might not feel like I'm delivering value. And so part of having ethics in the title is to give her full permission to invest in that as much as she wants, and to really own that entire space and bring items to the attention of the team as we need to be aware of them. Because, like I've mentioned, I don't believe there are a lot of production use cases for large language models that put text in front of people without human review first, and we get questions about that all the time. And so one of her items is to write a white paper about our use of language models and how we're going to go forward with that, so that next time I get somebody really excited about some hot technology, I can send that to them. Or to write something that talks about the state of facial recognition, and why, and our team's official stance on should we use facial recognition in any context, which is universally no at the moment. So just having that documented, written out, and ready to communicate with other people. Whether or not other teams listen to us is a different item, but having somebody who you know is looking out at all points in time for that specifically is powerful also. And it lets teams know that we take it very seriously. So if they say, oh, we're thinking about using facial recognition technology to determine who's happy while they're FaceTiming with us, just being able to say, I'll have my team's ethicist look at that, already changes the way people think about interacting with your team. Because, as we mentioned, people tend to not think of software as an ethical agent in the world, but it truly is. hugo bowne-anderson Absolutely. And I want to come back to the idea of an ethicist engineer and the types of concerns and challenges that role thinks about and how they work with them. Before we get there, I am interested: when you have, you know, ML engineers and data engineers and data scientists on a team... In all honesty, over the past couple of years I've built small teams and been a hiring manager, but I'm not actually super experienced, and I'm discovering a lot still, and it's wonderful. Also, anyone out there: support first-time or early-stage managers as well; you need to give them resources and help them along the way, because they can create so much more value for your company, and it's the best thing to do. But something I've noticed is, when you hire a few people and step back and start to allow them to communicate with each other, you start seeing almost emergent properties from within that network of humans themselves. And I'm wondering how you think about facilitating conversations between data scientists, ML engineers, and data engineers, in order to get them, you know, flourishing as much as possible and creating something where the whole is greater than the sum of its parts, right? heather Yeah. So, I'm so glad that you asked this: we actually have an organization that we call the data science discipline, but it's not just data scientists.
Anybody who is working on data technically can join the data science discipline, and I say that it's the data science union. And right now I kind of organize it, so people can submit items to me before the meeting. So a data engineer can come to me and be like, hey, this is my concern, and I'm going to anonymously bring it up. Like, I'm not gonna say, Hoda sent me an email saying to say this; I'm going to say, hey, it's come to my attention that we have this concern, so that we can all do a big group debugging session. And we do those every single week. We also have a series of meetings that I think is awesome, which is experiment readouts. So we'll just say, these are all the experiments that anybody's thinking about this week, and let's get people talking about it: do you see anything in my experiments that is not working? And what generally happens in these meetings, what I see more often than not, is people volunteering to learn things outside of their skill set, and also people volunteering time that they didn't have to, to help somebody solve a cool problem. Because at the end of the day, we all got into the data field because we're incredibly curious people, and so everybody in those meetings is incredibly curious. And so it's about letting that happen naturally, and providing, especially with the discipline, a neutral forum to talk about things. That way people don't have to say, I think our taxonomy stinks; they can just say, oh hey, I heard that some people are upset about the taxonomy, do we think that we should do a refactor? And then we can have a completely neutral and open conversation about that. And then there's no status assigned to it. Like, I know if I bring up a problem, because I'm a principal engineer, people will take it more or less seriously. And if it's just an anonymous form of readouts, then people don't have to be worried about their standing, or the fact that they're known as this or that sort of person coming into play there. hugo bowne-anderson Yeah. Great. Do you use the term data science union internally? heather I say that sentence. I don't know what other people do. hugo bowne-anderson That is awesome. Let's move on to hiring an ethics engineer. Tell me about that. How did that happen? heather Well, I've wanted this on our team for forever; it's very tricky to get people to realize that that's somebody who you should have on staff. So I've proposed before, like, a full-time ethicist; I've proposed a consulting ethicist, like, I just tap you for a few hours a week. And how this actually happened was pretty organic: I was doing an internal panel about disability advocacy at T-Mobile. It was a bunch of different people with disabilities, and we were all there talking about our disabilities and how that impacts our work, and somebody asked a question. They were like, hi, I'm a software engineer, but I care so much about the things that you guys are talking about right now, and I would love to do a deep dive on digital accessibility, and digital accessibility in software specifically. And so after that, I was like, hello, I can speak to the AI portion of this; like, I can speak to AI almost exclusively, and I can speak to my experience.
And so when we got on the phone and started talking, I made a few different references to either ethical problems that we all kind of talk about, or papers that have been really big, like the Stochastic Parrots paper and so forth. And she just immediately knew all of them, instantly. It was like, yes, and I have a comeback for that. And so she's a very early-career candidate, but just the fact that I met her because she was a software engineer who wanted to attend a digital accessibility panel, and she already knew all of the current commentary, meant we were able to bring her on as an engineer of ethics. hugo bowne-anderson What does she think about in the role? heather Well, so she's just getting started. The first thing we actually have her doing is something I mentioned earlier, which is bias testing all of our ASR models before we deploy them. Specifically, when we think about scaling up our speech-to-text solution, we're looking at geographic regions of the country. We have customer experience centers located all over the United States, and our customers have accents that come from all over the entire world. Our experts at the center in Atlanta are going to have very different accents than our experts at the center just north of Seattle, and the bulk of our customers calling in from Dallas are going to have much different accents than the bulk of our customers calling in from Chicago. And so we need to make sure that as we roll out features to all different types of people, our models will be equally useful regardless of how you speak. And that's partially because, at the end of the day, care experts' compensation comes from how good they are at their job, and our software is supposed to help them be better at their job. So if we have people with accents that aren't being treated well by our ASR models, that has downstream effects: if our transcription is bad, then our intent models will be bad, and the lift that they get from our systems will be bad, and so they could get less money. And so when I think about it: if I deploy a model that means Jessica does less well at her job and earns less money simply because she was born and raised in Georgia and speaks like her peers, that's unethical. By doing this bias testing before we release, we can identify weaknesses and try to figure out how to improve performance for subsets of the population, whether that involves changing the balance in our training data, finding a different model architecture, or even maybe creating a model that determines accent and then routes to accent-specific models down the road. hugo bowne-anderson To probe a bit deeper, and maybe it's too early days to do this with this particular position: I think there is a concern, and this is not what I'm hearing here, of course, and I know you, so I know this isn't the case, but there is a concern, particularly at a lot of tech companies, that hiring ethicists is a form of box ticking, and it doesn't have a lot of impact on any product development. So how do you think about that? And this isn't about rules and formulations and checklists, although checklists can be useful; it's about creating a culture where people listen to the people interested in whose responsibility ethics is. heather Yeah. And so for me, I think that's where this data science discipline that we have comes into power, because every time we show up for those meetings, we all kind of have agreement: we are on the same team, and we leave as the same team.
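Stepping back to the bias testing Heather describes: a minimal sketch of a per-group word error rate (WER) comparison might look like this in Python; the transcripts and group labels here are hypothetical stand-ins for real evaluation data.

```python
# Sketch: compare ASR word error rate (WER) across accent/region groups
# before deploying. Transcripts and group labels are hypothetical.
from collections import defaultdict

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

# Each record: (group label, human reference transcript, ASR output).
samples = [
    ("atlanta", "i need help with my bill", "i need help with my bill"),
    ("atlanta", "my phone has no signal", "my phone has no seal"),
    ("seattle", "i need help with my bill", "i need help with my bill"),
]

per_group = defaultdict(list)
for group, ref, hyp in samples:
    per_group[group].append(wer(ref, hyp))

for group, scores in per_group.items():
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")
```

A large gap between groups is the release-blocking signal; fixing it (rebalancing training data, changing architectures, routing to specialist models) is the harder follow-on work.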
And so by having an ethicist report out at those meetings and report those things, we are able to stamp it as a bloc. We also don't have to point to her specifically, right: she comes up with ethical conclusions, and we sign them as the data science discipline. That's our stance, right? And so that's one of the things that we try to do. And honestly, I would say, for me personally, it's the opposite of ticking boxes, where I'm like, actually, I want you to add a box that's ethics, for me, please, like please, please, please. And then, yeah, a lot of it is just freeing up our time. Because, like, the large language model conversation, why I think we shouldn't use them, the biases encoded in them, etc.: I have my speech on that down, but I've never had time to write it down, I've never had time to curate a list, and it takes a lot of convincing every single time that somebody brings it up. And so I think just having her really practicing those points, and having documentation around them, is nice. But yeah, I think co-signing everything as the data science union, saying, this is the AI team's stance, not our ethicist's stance, creates cohesion there. hugo bowne-anderson That's great. And that may solve some of a second challenge that I've seen and heard a lot about. In particular, I don't know if you know, Data & Society came out with a report in September 2019 called Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. It's actually by a good friend of mine, Manny Moss, who wrote it with danah boyd and a researcher called Jacob Metcalf, and I'm going to mess this up completely, but they thought a lot about having individuals who own ethics in organizations. And one concern, I think, that came up is that having a focal point of an ethicist can allow it not to be a concern that's distributed across an organization. It sounds like you've already solved for that in a way, but is that a concern we all need to actively be thinking about? heather Yeah, yeah. And so, basically, I'm glad that you're bringing this up, because I never view ethics as an obstacle, and it's useful for me, because it didn't even occur to me that other people in our organization might. But for me, part of the thing is, my team in general, we've always been like, we are here to build the right thing and do it the right way. Right? And that's in our credo, that's in our team charter: we're here to build the right thing and build it the right way. And I think that, with everybody committed to that, if you then have an ethicist on the team and we chose to ignore her, that would be going against our team standards, right? And so we're able to have that conversation kind of out in the open. But yeah, I have not seen me bring up an ethical concern to people and have the reaction, at least at T-Mobile, be anything other than, oh my goodness, no. And I think that's partly because we have this do-the-right-thing-and-do-it-the-right-way mentality, always. hugo bowne-anderson Did you come up with that credo? heather No, that's T-Mobile official. They instill it in all of us. hugo bowne-anderson So, I just want to push a bit harder here, because I know not merely how passionate you are about these things, but how incredibly important they are to you, in a very deep, deep way.
My concern in this situation would be: let's say you went to work somewhere else, and they hired someone else who cared less about ethics, and suddenly there's a team lead who has their eye on different things, who is trying to prove business value through meeting whatever metrics their manager wants. How much of it is focused on you? Like, how much is it Heather-driven? heather A team member on my team, Sunan, who I love very deeply, started out as a standard software engineer. And when he joined our team, he told me he had never worked with another woman, ever. Right, like, he had never worked with another woman. And this came up in our GPT-3 conversation yesterday on Slack. As I mentioned, in one of his very first weeks at T-Mobile, we have a general AI channel, and he shared a link in there that was like, wow, look at this cool thing GPT-3 can do. And immediately I was like, look, I see you, you're on an AI team, you're really excited about it, but I want to talk about the ethical implications of this. And actually, we should do another paper-reading session of the Stochastic Parrots paper, where we all read it and we sit around and talk about it. And it's to the point now where, when somebody brings up GPT-3, I'm not the first person to respond with, hey, whoa, this is unethical. Because for a lot of people, when they come onto our team, I'm maybe one of the first people who's ever been like, maybe all tech isn't great. You know, like, maybe the tech that you're building isn't great. People think of a sci-fi future that's bad, but maybe you're contributing to that sci-fi future that's bad. hugo bowne-anderson Maybe sci-fi futures are actually metaphors for what we're fucking going through right now as well. heather Yes, yeah. And so what I've seen is people be woken up by that. And so, yesterday, I came off PTO to respond to this GPT-3 stuff, and Sunan was like, I'm sad that I didn't get to respond to this earlier, because this is one of my favorite lessons that I've learned so far. And so it wasn't just me; it was all these other people on my team chiming in, because they all feel now that they fully understand it. And that's because we've put the work in: we read the Stochastic Parrots paper together, we've sat and talked about this a lot, they've watched me bring up these issues in front of stakeholders. And now it's definitely team culture: we don't do things that are shady, especially data-wise. Down to, like, I know that so many models exist in the world built on credit score, right? And so we've been like, we should write a public blog post about how credit score is always an inappropriate feature, and that's something that we have full rein to do. And so I think that my team in particular likes to believe of themselves as trying very hard to be a good ethical team, but we know that there's not one that exists, right? Like, we all watched Google's ethical AI team implode. So we've seen this all happen; we know that it's kind of impossible, but we're gonna try really hard to do it right anyway. That's kind of how I look at it. And one thing that I say a lot is that, like, people always bitch about the algorithm. We're the algorithm. Like, not... we're not Facebook right now.
But, like, we're the algorithm. And so, when you think about your friends bitching about the algorithm, or their accounts getting canceled, things like this on various platforms, we've just got to remember: that's us. We've got to treat us like we're the evil algorithm at all points in time, and think through as many edges as we can. hugo bowne-anderson Oh, you know, as... what is Spider-Man's uncle's name? Uncle Ben? Yeah, as Uncle Ben said, with great power comes great responsibility. And actually, I think he stole that from Voltaire. But I think my soft spot for Uncle Ben exceeds my soft spot for Voltaire, so let's go with Uncle Ben. heather Now I'm thinking about Spider-Man. Oh, now I'm thinking about Minute Rice, like the Uncle Ben's Minute Rice. hugo bowne-anderson Yeah, great. I am interested in what type of ethical concerns arise with the use of chatbots. heather Yeah, man. So there are two different sides of this, and that's kind of what's interesting about working in a world where we have bots that hand off to people that might hand back to bots: the actions of your chatbot could impact the care expert that gets the escalation when your chatbot does something wrong. So, there are a few different things. If you're ever going to put a chatbot in front of an experience, the first thing that's incredibly important is to disclose that it's actually a chatbot. Because people invest a lot of emotion into how they're talking, and if you don't disclose that this is a robotic system, well, several things. First, people speak differently to robots: they speak more simply, and they tend to not add as much nuance. So just letting people know it's a chatbot might be a great way to set your model up for better success, right? Like, there's a difference between "Hey, can I check on my bill?" and just writing the word "billing": very different, two different ways that you would do that. So, making sure that you disclose that. But then also realizing, in my situation, that people are way more likely to get aggressive with a chatbot, because they know that there's not a person on the other end; they don't have to have that filter. But for anything that happens in our chatbots, if our chatbots aren't able to solve it, it goes to a human who's going to help you out. So if we've done something even a little bit funky that's elicited aggression that wouldn't have come up with a human, and then we pass off this human, who's in an aggressive state, to our actual human expert, we haven't helped them. We haven't helped them do anything. We've made their job so much worse by priming the customer to be frothing mad before handing them over. And so I think about that a lot: both letting people know that it's a chatbot, and then making sure that everything that you're saying as a chatbot is defusing, and you're never escalating, and if anything is dicey, sending it to a person and letting the person handle it with all the social grace of a person that a chatbot is never, ever going to have. Down to, we even have a feature on our chatbot where longer sentences go directly to a human expert, because if there's that much context involved, a chatbot's probably not a good use case for you. hugo bowne-anderson Sure, that makes perfect sense. That's where my mind was going before I interrupted it with Peter Parker.
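A minimal sketch of the escalation rules Heather just described, in Python; the length threshold and the keyword check are hypothetical stand-ins for real models and tuned cutoffs:

```python
# Sketch: route long or heated messages to a human instead of the bot.
# MAX_WORDS and ANGRY_MARKERS are hypothetical stand-ins for tuned
# thresholds and a real sentiment model.
MAX_WORDS = 25
ANGRY_MARKERS = {"ridiculous", "furious", "worst", "angry"}

def route(message: str) -> str:
    words = [w.strip(".,!?") for w in message.lower().split()]
    if len(words) > MAX_WORDS:
        return "human"      # too much context: a bot will frustrate
    if ANGRY_MARKERS & set(words):
        return "human"      # defuse with a person, don't escalate
    return "bot"

print(route("can I check on my bill"))                # -> bot
print(route("this is ridiculous, my bill is wrong"))  # -> human
```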
It actually sounds like, when thinking through a lot of these ethical concerns, as you stated, it's about creating a culture of it, once again using education and providing resources where people can learn and talk about these things. And actually, what I'm hearing, like, one of my questions was, if you left and someone else was hired, this would probably be a strong filter in hiring for this position, for the team: someone who actually considers these legitimate concerns, and that we need to be thinking about them as a foundational principle for any product we do. heather Yes. If not in the hiring, then they would find it very difficult to be successful in this role once they are here if they didn't want to think about things like that, because everybody on our team does. And part of this education piece is, I think that there's a way that rolling out an ethicist and having an ethics-focused team could be really bad, and that's if you come in to kind of be the justice cops, you know: you're being unethical, and you're being unethical. And we always try to frame it as, like, a curiosity and wonder moment: yeah, that model is really cool; let me show you where it falls apart, right? Like, you want to get their buy-in on how the thing they didn't know is very interesting, and now they're on the inside, and they understand it, and they can go out and be an agent of good, instead of kind of being the, no, we're gonna chop down your ideas here. And I think that's also part of why it's great to have our kind of ethics point of contact be on the AI team, because we can suggest solutions that do not incorporate that feature, or whatever. So if somebody comes to us looking at using credit score as a feature in a model, we can say, hello, well, what are you actually trying to predict here? Let's find you a better feature, my friend. And look at the holistic problem. hugo bowne-anderson You're speaking to a really key point as well, which is, irrespective of the work we're talking about: there may be some outliers, but I honestly think that it's rare, if you tell someone that they're bad or unethical or doing something really wrong or morally corrupt, that this will result in what you want to happen in the world. heather Yes, yeah. Well, I just don't think that you can change anybody's mind in a conversation, and that's kind of part of it. But you can give them the seeds to change their own mind later. And that's kind of how I've always approached it: I'm like, here are all these resources, and you can change your own mind when you're ready. hugo bowne-anderson I have one question that's a professional question, but it's also personal. In this process of moving from an internship to being a principal machine learning engineer, you discovered that you're autistic. Could you tell us a bit about that, and how that has maybe impacted your work, and in what ways? heather Yeah. I discovered that I was autistic because I was at the doctor with my child, who was being diagnosed autistic. And my mother was also there, because she lives in Oklahoma, but she had flown in; she just happened to be visiting at the time. And we sat through the whole diagnostic exam, and at the end, the doctor was like, your child is autistic. And I was like, oh, are you sure? And she was like, I have no doubt whatsoever that your child is autistic.
And my mom just looked at me and said, well then, so are you and your brother, because you were exactly like this. And I was like, oh, okay. And it turns out that lots of women in particular find out this way, because there is a genetic component to autism, and traditionally the diagnostic criteria for autism were created based on young white boys who were incredibly distressed. And I am not young, I am white, I'm not a boy, and I'm not distressed, so I look very different than the textbook for autism. And I've been able to discover lots of other women who have come to realize that they are autistic in the exact same way. And for me, the most powerful piece of it has been realizing that everything my job likes about me, everything my work likes about me, is also a thing about the autistic part of myself. So, for example, the compliments I get are: you're detail oriented, you love to do research, you are very passionate. And I'm like, yes, I've had four interests my entire life, and this is one of them. So there's that capacity. There are a few ways where it's made things really fantastic. The first is being able to get involved with a disability organization at my company, and being able to be like, hey, I'm autistic, and have other people be like, me too. I'm also dyslexic, so that's another thing, and people are like, oh my gosh, me too. And just being able to see somebody in a technical position of power that has a disability is really important. And then also just being able to communicate to my coworkers the types of signals that I can't see from them. So, I can't tell when I'm talking too much, I can't tell when I'm talking too long, I can't tell if I'm derailing the meeting, and it's not mean to Slack me and tell me I am; like, I promise I'll never feel it's mean. And from that perspective, it's really cleared up some social weirdness that I think was on the team before, because it's social weirdness I've had my whole life: yeah, I talk a lot, I'm really loud, I don't see when other people are not interested anymore, and I tend to be very direct in communication. So sometimes it's my team coming to my aid. Like, I was trying to teach a product stakeholder how to label data, and we had sentences and topics, and we were talking about it, and she was like, how are we supposed to know what label to apply? And I said what I thought was very positive: and that's when you get to use your human brain! Like, you get to use your human brain, and you think about it, and you apply the label, not thinking how condescending that could sound to somebody. And so having a team member step in to be like, and Heather means that in the nicest way, because she's saying, as opposed to our models, which are very dumb, right, which are very, very silly and make mistakes all the time, your human brain is brilliant, and you get to use it, and whatever you decide is correct. So having them be able to come in and give me the save as well has been nice. hugo bowne-anderson This also comes back to this point about education as well. I mean, you've become educated as to, you know, certain signals and certain differences in behavioral patterns, and your team has as well. And it sounds like you've used that educational process to build an even more cohesive and better team. heather Yes, yeah, absolutely.
And, I think I can talk about this: in 2022, T-Mobile is actually doing a pilot program specifically for developers with disabilities, where we're going to be bringing interns with disabilities into our regular internship program, but they're going to be paired with a mentor who also has a disability, to kind of help them navigate it from that perspective. And our goal here is... we know that, as a manager, hiring somebody with a disability, if you've never known somebody who has that disability, is incredibly intimidating. You might not know what you should be thinking about, what you should be looking for, what accommodations they might need. And so part of what we're doing is collecting information and coming up with some manager guides, to be like: so, you're hiring your first deaf developer, what do you need to know? Like a cozy, warm invite into the disabilities-in-tech space. hugo bowne-anderson That sounds like a fantastic program; I'd be very excited to see how it goes. This is a slight nuance, but before we recorded this podcast, I asked you... like, I wasn't sure whether to use the term autistic or neurodiverse or this type of thing, and you told me a very interesting, nuanced aspect: that different people have different preferences. So your preference is saying, I am autistic, as opposed to, I have autism? heather Yeah. In general, when you first encounter the autism community, and by you I mean every single person who Googles even, what do you call somebody who's diagnosed with autism, you come across what the medical profession has really pushed for, which is called person-first language. And some disability communities love person-first language. It actually started in the AIDS epidemic, because people didn't want... they're not AIDS-ridden, they're a person who has AIDS, right, trying to emphasize the personhood of these people. And I don't love it, and I'll give you my quick spiel for why I don't love saying I have autism. When you say it's something you have, it sounds like something that I'm supposed to overcome. Like, if I have cancer, then I beat cancer. I'm never gonna beat autism, right? It's going to be with me forever. I don't have it like something I could leave behind in my car. I can't be with autism, because I can't be without autism; it's all part of me. And I think that has to do with the way that we look at language and what we look at as being an acceptable deviation from the norm. So, I'm also a lesbian, and there was a period in time where you would say, no, she has homosexuality, and she's diagnosed with it, and we're going to electroshock it out of her, or whatever that would look like. There was a time where that was an illness that you had and were saddled with and were trying to overcome. And that's not the world we live in now; we know that gay people exist. hugo bowne-anderson In the countries we live in, at least. In a lot of places it is still treated like that. heather Yeah. Well, yes. But the progressive consciousness has moved on, absolutely. And so you say, I'm a lesbian, and it's part of your identity. It's not something you're trying to change; you're not going to therapy to overcome it.
And it's a part of yourself that you're proud of, that you're accepting of; there's a whole culture around it that you get to play with. And I view autism in the same way. Just like I'm not somebody who has lesbianism, I am not somebody who has autism. I am autistic, just like I am a lesbian. It's a part of me that is to be celebrated and enjoyed. It has all of my favorite parts of me inside, and I can't be me without it; non-autistic Heather is not this person who you're speaking to right now. And so that's why I'm very passionate about what we call identity-first language. And in general, the more progressive disability community is moving towards identity-first language, but especially the Autistic community. And one thing that I want to also point out there is the difference between the Autistic community and the autism community. Because the Autistic community is full of people who are autistic, whereas the autism community, if you look at the majority of people talking about autism, they're not autistic people. They are parents, they are doctors, they are people who are only viewing this through a lens of, these people are other and should probably be corrected, which makes for a lot of really gross material. And so the Autistic community is autistic people speaking for themselves, whereas the autism community is generally not-autistic people speaking about autism, over autistic people. hugo bowne-anderson Now, once again, this comes back to a point which we discussed earlier: speaking with the people most affected, the people who are the most important... I'm going to use the word stakeholders, and that isn't quite what I mean, but, you know, the most important people in the conversation. How about listening to them? heather Right. I also want to point out, just because we mentioned it: me discovering I was autistic was super fascinating, because I do have a degree in neuroscience, right? I had also helped a friend get diagnosed autistic as an adult, and I had independently read many studies and books and research papers on autism. And I knew, just statistically, that when Jacqueline and I were going to have a baby, they were going to be autistic, because the stats work out to, like, an 85% chance. And never in any of that did I stop and say, that means I'm autistic. Never did I really do a self-reflection on that whatsoever. I remember even my friend and I doing the diagnostic tests together, because all autism diagnostic tests are freely available online, and I remember me scoring worse on some of them. And what I thought at the time was, it goes to show you how biased those tests are, if a non-autistic person can score higher than an autistic person. And now I know: I still think the tests are incredibly biased, but I also understand my results. hugo bowne-anderson Yeah, absolutely. You said you worked with mice... or was it rats? This is a really naive question, I have no idea, but I'm willing to put it out there: is there autism in non-human animals, and can we study that? heather So, there are things that are like autism in non-human animals; there are characteristics. The thing that gets so complicated is, a lot of autism for me is rooted in social communication, which is different in animals, and then it's also rooted in sensory issues, like sensory sensitivities, and that's so hard to measure in an animal. And so I'm sure that those variations exist.
But I want to point out a counterpoint, which is: these social communication issues that you think of autistic people having... when studied, we do not have those communication issues within group. So when people say autistic communication issues, what they really mean, and we call them allistic people, people who are not autistic, is that there's a communication issue between autistic and allistic people. And when you look at the therapy model that treats autism, it always prioritizes the communication of allistic people. But within group, we do not have those issues. So, like, allistic people think we're blunt, and think that we miss the point, and think that we're getting into the weeds, and autistic people are like, no, this is just how we completely and totally operate. hugo bowne-anderson This is so important, because I actually wanted to come back to this. You said, like, if you're derailing a meeting, that was your language, someone will Slack you, and you'll be like, okay. And what came to mind is: wait, is Heather actually derailing the meeting? Or could this meeting go in a direction which will be to the extreme benefit of everyone, if everyone pushes through and follows where you want to go with this conversation? heather Right. And I would say sometimes it's both. Sometimes it's a social context miss, where, like, I just didn't read the list of invitees, and 80% of this meeting doesn't need to hear my super technical rant, as much as I love having that information. But yeah, the counterpoint would be: if it was an all-autistic team, would we listen to my entire rant first? Yeah, like, yeah, we would, because the weeds are interesting to us. hugo bowne-anderson Yeah, absolutely. The other thing you mentioned is that you're in a same-sex marriage, with Jacqueline, a trans woman who you worked with and brought in to T-Mobile. It sounds from our conversation like T-Mobile is an incredibly progressive place, or at least what you've carved out for yourself there. But I'm wondering if there were any challenges or concerns, or what happened when you did that? heather Yeah. So I can kind of give the whole saga, because Jacqueline's and my story is just super weird. So, I've always known I was a lesbian. I was raised on a farm in the middle of nowhere, and when my mom sent me to school for the first day, it was like the first time I played with kids my age, because I was literally just with the goats all day before that. And I came home, and I had an hour-and-a-half bus ride home, and the first thing I said after getting off the bus from my first day at public kindergarten was, Mom, I'm gay. And I didn't mean the happy kind, because I had been interested in finding the definition. So I've always known that I'm gay. And Jacqueline and I actually met on reality television. That's right. So when we met, she was on the show with me, King of the Nerds season three. And when we met, she was a boy. Our first conversation was about Markov models, where she mansplained them to me, and I said, excuse you, I know what that is. And she said, well, I didn't learn them until grad school. And I said, that really sounds like a you problem and not a me problem, because I know them now, and let's continue on. And so at that point in time... it took me a really long time to figure out I was attracted to her.
And then when I did, the whole time I was like, this feels like a lesbian relationship. And so when she transitioned... well, when we got married, in the Uber back from our wedding, I told the Uber driver, I had a shaved head at the time, I was like, can you believe I look this gay and I just got straight married? And he, like, died laughing. So I was like, I can't believe I got straight married, and my mom was like, I can't believe you got straight married. And then, you know, two years later, we got to be like, just kidding, everybody, it was gay the whole time, which I was very proud of. But the transition at T-Mobile was honestly very smooth; everybody was really chill about it. And what was kind of nice after that is that it set up a path for people to transition. And so, that thing I mentioned earlier of managers sometimes not knowing what to do: I've gotten three different calls from managers at T-Mobile, after employees came out as trans, being like, Heather, we need to know how to be a good ally here; we know that your wife went through this, and we want to know any issues that came up around that specifically. But in general, T-Mobile leadership is just incredibly responsive to issues like this that are rooted in diversity. Down to the point where, one time, Jacqueline had a slight issue in a T-Mobile store, because it was her old name on the account and she couldn't prove that it wasn't; she had the name-change documentation, but still, they're afraid that it's a SIM takeover for fraud. We actually got a call from the executive vice president of retail that day, being like, I am so sorry; here are all of the things that we're doing to keep this from happening in the future. And so I've only experienced, if you point out a problem, people bending over backwards to try to fix it. Amazing. And also, we haven't talked about it, but my team is 50% women, all the time, so having another woman on the team was just exciting. hugo bowne-anderson Great. I love that people called you and asked, how can I be a good ally? And that's what I want to ask you: how can I be a good ally, and how can our listeners be good allies? heather Listen to trans people. As you mentioned... so, there's actually a group on Facebook that's called, I believe, Ask Me, I'm Transgender. And it's like, if you want to go to a group where no cis people speak, you just watch trans people talk, and you can ask a question and then zip your lip and see their response. That's my number one thing. And there's a Laverne Cox documentary called Disclosure, and they say in there that less than 10% of people think that they've met a trans person. And so what you have to realize in that moment is that that means for 90% of people in the world, their vision of trans people comes strictly from media. And it's media that was written by people who don't like trans people very much, right? Or who didn't understand them either, because they're part of that 90% who believe they've never met a trans person. So first of all, realizing that if you've never met a trans person, everything that you've been told about trans people was told to you by a cis white dude via a story. And then the second point, yeah: turning on your listening ears, strapping in, and getting ready to really understand somebody else's experience that's completely different from yours, and not arguing back. Like, your job is to zip it and understand their experience.
Like, you don't need to interject anything. And so those are my two main tips. hugo bowne-anderson Great, thank you. And thank you for sharing all of that, including your story with Jacqueline and the experience of discovering that you are autistic at work and all of these things. I think I've learned a great deal, and I hope our audience has as well in the process. heather I am incredibly autistic and incredibly a lesbian, so I hope so. hugo bowne-anderson Awesome. So, back to the data science and AI space: what are you most excited about in this space? heather What am I most excited about? So, one thing that I'm excited about is... we've talked a lot about ethics here, but the general conversation around ethics, whether or not anything will ever have a rubber-meets-the-road result. Like, watching Google's ethical AI team collapse and seeing what's being stood up as a pillar of excellence in its place; the entire path forward on how we go about trying to build large models with large datasets and rectifying that with the society we live in. I'm very excited about that, because, like I said, I feel like it's a conversation software engineering almost had and then kind of gave up on having, and we're bringing it back to the forefront. So that's my number one, not-super-sexy tech answer: people getting real about the impacts of the systems that they have created. There's also all sorts of stuff in the speech tech space that helps address that issue that I'm very interested in. Like, yeah, how do I create models that could be specific for different demographics, and then create a controller model that helps route through these different options, or even collapse that all into a single model that actually performs for all sorts of different types of people? I don't quite know yet. Right now it all feels very like ensembling, you know, which is a little bit messy. But yeah, kind of figuring out where the rubber meets the road on implementing those ethical things that you now know you should have, because you see where your models are failing. Like, we pointed out the failures; how do we have a path to production from that? hugo bowne-anderson So let's say that things went in a direction that you thought was productive and good. How different would things look in five years, or how would they look different? heather I think it depends. So, first of all, I think first launches would be very different. I think a lot about two situations where I've seen AI be really horrible. The first one is the engineers who developed Alexa not being able to activate Alexa, because they had Indian accents and Alexa was not trained on that English. For me, I would hope that there's no room that that would get out of now; if I'm a developer developing on something and it doesn't work for me, I hope somebody would have the ability to speak up about that. The other one is the automatic handwashing sensors that for a long time didn't work on darker skin, and still are pretty bad about it, depending on which hand sensors you look at. So it's like, how will the world change for me? Just a little bit, because I'm a woman, right? How will the world change for other people? I'm hoping that these first-day launches of new technology are actually useful for them. And then I'm hoping that we move to a world where we're not imagining AI taking over everything; we're imagining how we're going to work in concert with AI.
So, things like: we won't have a police state where facial recognition identifies criminals and sends a drone to their house, etc. Like, no; it's realizing those pitfalls and saying, okay, if we're not going to let AI do everything, what are the pieces that we should let an AI take over, and really kind of speccing that out. Like, what are the human problems that need a human brain and human heart and human consciousness to address, versus the little things that we can take care of, from an ethics perspective, not just from an engineer's can-we perspective. hugo bowne-anderson Right. I don't necessarily want to get into a mask versus anti-mask conversation, and I think you and I are probably clearly pro-maskers, so to speak, but I do like the fact that wearing a mask makes you less susceptible to facial recognition technology as well. heather Yes, yeah. And you can change it in all sorts of ways. But also, yeah, it's, like, mask and sunglasses, kind of how I go out, and then I have on my big noise-cancelling headphones, and all the time I'm like, people are gonna think that I'm going to rob a store. No one has ever thought I'm going to rob a store, because I'm a small little white girl. If I was a different sort of person, that would be a different story. hugo bowne-anderson Sometimes, as a really big, bearded, six-foot Australian dude... also, when I wear a mask, my beard is not CDC approved, so I definitely get some looks, depending on the size of it at various points in time. It's also, I read an article that one of the communities that's least susceptible to facial recognition technology is the Juggalos, the Insane Clown Posse fans, because they've got clown face paint on all the time. So maybe we'll just become Juggalos in the end. heather Yeah, there are all sorts of things where, at some point, really goth eyeliner gets very close, you know? hugo bowne-anderson Totally. There's also... I can't remember, is there a Bill Murray movie where he robs a bank dressed as a clown? Something like that? heather Is that the one where they sing the song? Yeah, I think I only remember watching that when I was little, and I just remember the song scene, and I sing that song all the time in my head. But yeah, that's all I remember. hugo bowne-anderson Amazing. So, in my last question I asked you what you're most excited about in the space, but I think your answer really hits what we want to discuss in my next question, which is: what concerns you most in the space? What are the biggest challenges we're facing as an industry? heather Yeah, it's thinking that machines can do complex problems that require human heart and human understanding, and that we could reduce that to some sort of confidence score and then be like, well, it was above the 80% threshold, let's arrest them. So I have this whole thing where there are so many situations where AI shouldn't be used. One of them is if it requires really intense accuracy, and I would agree that arresting people, a legal action, requires intense accuracy. And another is anytime that it requires a human heart, like a real human heart, to address. And that's what I wish people would think about before they implemented a system: is this a problem that honestly requires a human heart?
Because if so, we can't make that into a number. If it requires a real human brain, a human soul, to think about, to process, to answer humanely, we're never going to get a humane response from a numbers engine. hugo bowne-anderson I love it. And in all honesty, a good friend of mine, Shira Mitchell, who I hope to have on the podcast at some point... she was at Civis Analytics, and now she's at Blue Rose Research, I think she still works with David Shor. She wrote a paper and gave a talk about, and I'm going to get this horribly wrong, but one of the takeaways I got was opening up the decision space of interventions. And so one example she gave is, you know, we've had a lot of conversations around the ProPublica exposé of the Northpointe model and recidivism risk scores, used in parole hearings and this type of stuff. And she made the incredibly nuanced point that, you know, we're having conversations around algorithmic fairness and all of these types of things, but we haven't even stopped to think about whether we're asking the right questions around interventions. Like, should it be parole or not? Should it be incarceration or not? Should we be opening up the space and thinking about, okay, these people require help in these aspects of their lives, so getting, you know, social services involved, instead of thinking about incarceration and these types of things? heather Yes, well, yeah. And we're never going to get innovation out of an algorithm, right? A classifier can never put something into a bucket I didn't make for it. So I think that that's another point: we really limit the innovation that we can make, or the complex thinking that can happen, when we say this is a scoring problem. And there's always the, you can just say the model did it, right? Like, the model did it, and the model is not a person. And then we're almost back to, who can be held accountable for any of these actions? So if it's something that somebody needs to be held accountable for, then we need a person attached, I feel. hugo bowne-anderson And we do see that. I mean, this isn't necessarily around ethics per se, but we saw, like, Zillow recently: they blamed a model, an AI algorithm, and laid off 25% of their people, or whatever. I don't necessarily want to get into hot takes on why this happened or not, but it's almost clear to me that they blamed the algorithm when they took on corporate risk and messed it up; they totally passed the buck to an algorithm, as opposed to, we made a decision to do this and it didn't work. heather Right, yes, yeah. And it's also like, who built the algorithm? Did you guys do it? Did you build it? Because if you built the algorithm that told you to make the decision that you're now getting flack for, that counts, right? That counts, for me at least. Because a data scientist on my team talks about building bad models, and he always gives a story where he's like: fraud is very rare in my business; fraud only occurs 1% of the time; I built a model that is really good at detecting fraud, it's 99% accurate, and it just predicts no fraud every time. Yeah, I can build all sorts of bad models that tell the business to do bad things, and then we can all point a finger. That is the same thing as creating a fake person to make a fake decision and saying they did it, not me. Like, that doesn't count. So I just think it's absolute hogwash.
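That fraud story is the classic accuracy paradox, and it is easy to make concrete; a minimal sketch in Python with synthetic data:

```python
# Sketch of the accuracy paradox: with 1% fraud, a model that always
# predicts "no fraud" is 99% accurate and completely useless.
# The data here is synthetic.
n = 10_000
labels = [1] * (n // 100) + [0] * (n - n // 100)   # 1% fraud (label 1)
predictions = [0] * n                               # always "no fraud"

accuracy = sum(p == y for p, y in zip(predictions, labels)) / n
true_positives = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))
recall = true_positives / sum(labels)

print(f"accuracy = {accuracy:.2%}")   # 99.00%
print(f"recall   = {recall:.2%}")     # 0.00%: catches no fraud at all
```

One headline number hides the failure; the recall row of the confusion matrix exposes it, which is exactly the point made next.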
hugo bowne-anderson And I mean, you speak to a great point: that's accuracy, using one metric, to codify how performant decision-making is. In this case we want to look at confusion matrices. I've probably made this joke to you before, but I think it's become apparent that confusion matrix is the best name for them. But, like, thinking about a variety of different metrics, and then maybe not even metrics: qualitative ways of thinking about how decisions are made in the world by machines. heather Yeah, well, and I think that, like you said, at some point somebody saw an algorithm's recommendation, but a human being evaluated that recommendation and said, yeah, let's do it. Right? Human-in-the-loop responsibility. You can't expect the people making the decisions to completely understand the algorithm that implemented them, so that's where I think having an ethicist on the side of engineering is really powerful. But I almost think there should be an accountability counterpart, like a business counterpart, asking the question of, should we be using machine learning for this at all? Because "the model did it" is not a good excuse. hugo bowne-anderson Yeah. I also just had a brainwave: I think even the term fairness in machine learning, or thinking about machine learning fairness, already passes the buck. It already shifts responsibility onto the algorithm. And it comes from a good place, to think about fairness in machine learning, but we even need to kind of step back from that, right? heather Yeah, and that's what makes it all so complex. Because at the end of the day, the people who have the technical understanding to implement an algorithm do not have the societal understanding to understand how it will impact everybody. We might never know before we release, but what are we doing to at least hold ourselves accountable to that lifecycle, and drill in and investigate? I don't know. So yeah, maybe that's the thing I'm most excited for, honestly, tying back to that earlier answer: how do we bring that all together into something cohesive? I feel like, as a society, we're only just getting started. hugo bowne-anderson Yeah, absolutely. So, to wrap up, I'm wondering if you have a call to action for our listeners. heather A call to action. Okay. There are several things. Number one, I am on Twitter: heatherklus. The "klus" is not my last name; it's a reference to Patroclus and Achilles, you know, gay lovers in Greek mythology. hugo bowne-anderson Okay, awesome: heatherklus on Twitter. heather The other thing is that my team is hiring right now. We are hiring for a senior machine learning engineer position that I'm very excited about; if you would like to do that, you can DM me on Twitter and we can talk about what the role actually entails. We have that one, and we have a few standard-level engineer positions as well, and we're encouraging people with data science experience to apply. So if you're like, Heather talked a lot about Kubernetes and stuff that's very confusing: we will train you on all of that. As long as you have a statistical understanding and can do an analysis, you're probably gonna be good. hugo bowne-anderson Amazing. We can include the job listings in the show notes if they're live as well. heather I can send them to you. I don't want to send you the link I currently have, because I've had, like, five people apply to it.
When my manager checks, it says nobody's applied, so I've got to get that sussed out. But yes. hugo bowne-anderson By the time we go live, we should be fine. So, awesome. Amazing, Heather, thank you once again for such a phenomenal conversation. heather Yeah, I'm really excited. I'm glad you're doing a podcast again. hugo bowne-anderson So yeah, great. Transcribed by https://otter.ai