HH70-25-02-2022_mixdown.mp3

Vin: [00:00:09] Hi, I'm not Harpreet, but this is his office hours, and we're not live streaming today — this is just being recorded. So everyone — Russell A., Eric, Eric, Aaron, and Dave — thank you. Thank you for coming, because it would have been really weird just having me by myself, talking to myself. And yes, that's going to be my name for the rest of this: Not Harpreet. You put an exclamation point in front of "Harpreet" and we'll know you're talking to me. All right, kicking it off, I want to know: what's your data science superpower? Just your data superpower. You have one. You have one of those things where you look at it and you go, you know, I have a job because I can, or I'm going to get promoted, or I am better than the average data person, because you have a data superpower. And don't all jump in at the same time. I mean, come on.

Speaker2: [00:01:06] I'll throw something in if you like. Yeah. So my data-related superpower is an analytical mind. I just analyze everything — my wife thinks I'm an alien. My brain just never switches off. I'm always looking at something and picking out points of significance and joining them together, you know, when watching TV and listening to music. If we're just walking up the street or driving or something, I'm always looking at everything, and I do the same with data. Even if I'm looking at the raw data, I'm looking at attributes of it where I can — small data sets; you know, I'm no substitute for an ML system here. But yeah, I like to look at raw data and see things to help direct me into the ways to analyze it with formal analytical measures.

Vin: [00:02:00] You know what I mean? So where has that gotten you, career-wise? You know, when you look at your career and you say, if I didn't have this, what do you think you would need? I mean, where do you see kind of the downsides of not having that analytical thought process that's always on?

Speaker2: [00:02:21] So I probably wouldn't have gone so far down the rabbit hole of data. And when I say data, I mean, you know, the wide world of data, not just data science. And I probably wouldn't have got to the position where I am now, where, you know, I'm on speed dial for a lot of people when they have any questions or issues related to data or analytics. However, I might have progressed further in a different career if I hadn't done it, because I started off with electrical engineering and then project management before I took the side avenue to concentrate on the data elements of that. I was employing both of those fields, but really concentrated on it some 15 years ago now, I think. So yeah, it could have been different. And there are positives and potential negatives. But, you know, looking back in retrospect, it's easy to throw shade on things. So I'd like to think that I opened the right door.

Vin: [00:03:22] Do you know what I mean? You went the right direction, right? Yeah. Eric, what about you? What's your superpower? You've got to have thought of one by now.

Speaker4: [00:03:35] Yeah, so, uh, I think that
not knowing that I can't do something would probably be what I would go with, because sometimes I've set out to do things where, you know, it's like you're just biting off more than you can chew. And so in some ways it's cool, and in some ways I see it as a liability, because it can make me a yes-man sometimes — because it's like, yeah, of course we can do that, let's jump in and try. But at the same time, it opens up a lot of doors, because I'm willing to try it, and I'm going to sit there and, you know, bang my head against the keyboard until something happens. Something that I've learned over time is to ask for help earlier on. It's like, OK, I tried this for a while and it was fun, but now I'm going to ask for some help — because it could still be fun, but I've just reached the end of my capabilities to figure it out on my own. So yeah, I would say not knowing that I can't do it, and being OK with jumping in and trying it anyway.

Vin: [00:04:40] Is that something, like, that's going to be your superpower going forward? Or do you think at some point in your career you're going to say, all right, now I'm done being the "I'll try anything once" person and I'm going to do something different? Are you going to change and switch out your superpower?

Speaker4: [00:04:52] Probably to some extent. I think the attitude of being willing to jump in and try stuff is just — I just think it's valuable and I enjoy it, so I'm definitely going to keep it around. But at some point — at some point meaning it's been in the past now — I try to be more choosy about just trying anything, and instead say: is this something that can be done? I don't really know. Is it worth my time to try and figure it out? And then just try to make a good prioritization decision from there. But the attitude is fun and I enjoy it, so I want to keep the attitude — just try and be a little smarter about it.

Vin: [00:05:31] What have you got there? I see a comment in the chat, but go ahead and unmute yourself, and let's hear the superpower. [00:05:38] Nope, he's not going to do it. He refuses, even after all that. Yep, I'm going to have to read it for him: "My superpower is I don't let it go without finding why. Why is this data set saying two hundred twenty-one records in X? One? Yep. Let's find out, even if it takes hours. I don't know if that's a bad habit." I mean, if you want to, like, chime in — I don't know if that's a bad habit. I mean, some stuff you've got to let go, right? But every once in a while, that one record really isn't just one record. Typically, if there's a problem, it's not limited. There's really more to explore. All right — your superpower. It's your turn.

Speaker2: [00:06:22] I would say that it's the subject matter expertise, because I haven't been in the data game that long, but I've been in the music business now for 15 years and had different positions — like in 10 years, I think this is my fourth or fifth one at the moment — and I'm now transitioning to a more data role next.
But having that background, knowing the people and the subject as well as I do, I think that's a big one. Yeah.

Vin: [00:07:06] Would you call that domain expertise, or what label would you give it if you had to?

Speaker2: [00:07:11] Yeah, I think that's it.

Vin: [00:07:13] And — same question as for Eric — is that something that's going to change? You know, have you found something that's kind of your long-term superpower? Or are you looking at it as the five-year superpower, and then you're going to jump into something else, or you're looking forward to something else?

Speaker2: [00:07:33] Yeah, I think so, because at the moment I'm learning everything about R, for instance, so I'm looking forward to that maybe being my next superpower, for sure. Because I don't have to learn as much about the music business anymore — that's done. So I'm curious about the next moves.

Vin: [00:08:02] Yeah — there, if you're on, go ahead and unmute and let's hear your superpower.

Speaker2: [00:08:07] All right. Okay, thank you. Like I mentioned, I think I'm a bit fastidious when it comes to learning. I want to be sure that everything is accurate. Something tells me that the one record that you are missing might be so important, and that it might hold all the underlying reasons or causes. So I want to get to the very root of it, to understand why it is so. I think that is my superpower, and it doesn't bother me if I have to spend a length of time discovering why it is so. And I think that has really helped me — it has really helped me to learn more and to see more things. Thank you.

Vin: [00:09:07] Oh, thanks for coming in with that. We've got Eric — Eric Goode. Tell us, what is your superpower?

Speaker2: [00:09:17] Mine is, I guess, just making random connections between things. I tend to get where I'm talking to somebody about a problem they have, and I'll think of a solution from a completely unrelated field that just works very well. And I guess it comes from just having a very wide interest in totally random things that, on the face of it, don't seem connected in any way.

Vin: [00:09:46] Do you think that's helped you? You know, if you look at it and you say, OK, if I didn't have this, where would you be?

Speaker2: [00:09:52] Um, it's helped me meet a lot of very interesting people and get very interesting opportunities, because I might be talking to somebody where it might initially appear that I don't have anything in common with them, but in conversation it turns out that we do, because of this one thing that I'm interested in that helps them solve their problem. And so it's helped open some doors for me.

Vin: [00:10:16] Who is the most interesting person you've met?

Speaker2: [00:10:19] Hmm. I'll have to get back to you on that one.

Vin: [00:10:22] Okay. It'll be a follow-up question. Yeah. I've got to hear Mikiko's answer to this one. What is Mikiko's superpower? I want to hear this one.

Speaker5: [00:10:32] I'm going to default to my CEO Ben's explanation of why MailChimp succeeded. And that's because I'm like a cockroach. You can't kill me. You're not going to keep me down.
No, it was great, because I think that's actually when I first heard about MailChimp, years ago — he was doing this CreativeMornings talk, and it was the first time I'd heard an entrepreneur just be real about their business, and how they grew and stayed in the game. He was like, look, at some point it's just about time — time in the game. And in good times — what was it, a rising tide lifts all ships or whatever? In good times, everyone's doing well. But, you know, it's how you do in the bad times. And so I feel like that's me. I'm a cockroach. I will hang in there. You cannot squash me, and I will just keep going. I guess maybe you could call it grit, but I just think: cockroach.

Vin: [00:11:47] Yeah, I've been asking this of a couple of people: where would you be if you didn't have that? Because everybody talks about, well, this is good, that's good — but I mean, good how?

Speaker5: [00:11:57] I probably would have committed suicide. You know — trigger warning. Honestly, my mental health in college was really bad. I don't hide this from people, but I did actually try a couple of times in college. It was bad. I had to get emergency counseling at one or two points. Part of it was having a fixed mindset. I was so intent on trying to become a doctor. And I think a lot of Asian-Americans, immigrant families — or, you know, people who are second or third generation kids, or what have you — probably understand that pressure of "you have to become a doctor or a lawyer or an engineer." So I tried to become a biomedical engineer — I'm like, OK, I can squish the doctor and the engineer together — at UCSD, which has a very competitive program. But I always had this black-and-white mindset: I keep trying, I just keep grinding myself into the ground. In retrospect, it was so totally unhealthy — that hustle-and-grind culture that we kind of celebrate, even on LinkedIn a lot sometimes — it just kind of broke me. It really did. And I always asked myself: you know, I should just kind of give up on life — why can't I just give up? It would just be so easy. And I was away from my family, was very isolated. It would have been so, so easy. But there was just something in the back of my head that was always like, no, no, no — you can't take that final step. So, to be honest, I just wouldn't be here. With that being said, for everyone who's listening, I really encourage you, if you are struggling, do reach out for help, because counseling and therapy have been really, really good. I've utilized various portions of it at various points in my life. I have a lot of friends, especially in tech, who have some kind of anxiety, ADHD, bipolar — a lot of my close friends have various challenges in life, very similar to me, where they need extra help. Sometimes it's through a counselor, sometimes it's medical help, sometimes it's both. It's definitely not something to be ashamed about. But I think that kind of determination to just not give way, not be crushed —
I think it's going to be helpful, especially in tech, with a lot of the imposter syndrome. Even today I was reading some post where people were like, oh, bootcamp grads are bad, they're terrible, you shouldn't hire them as engineers. And I'm like, look, there are pros and cons, right? But something that is very true is that none of us will have the perfect opportunities in life — none of us will have an amazing set of circumstances. Vin, you probably, as an entrepreneur, feel it in your bones that a lot of times you don't have these perfect circumstances. You have to kind of make things work. Sometimes that takes a little bit more than you think you're capable of giving at that point, and for years and years and years. But at the end of the day, you kind of have to just go and do it. You have to make that decision for yourself that you're going to go do it, and sometimes you have to go do it even when you don't want to do it. So yeah — that cockroach determination, that grit, I think: still got it. Still got it, even after a terrible week. A terrible week.

Vin: [00:15:41] Not the terrible week, obviously, but the rest — it's awesome. It's really good. It's kind of interesting how your work superpower kind of comes from a life superpower, and how we often miss the connection between the two. People talk about work-life balance, and it's like, it's not a balance — it's the same thing, right? And so it's super important to talk about asking for help, getting any sort of assistance that you need — not just mental health, but your health overall, taking care of yourself, taking care of your family. There's a ton of great messages there, so thank you. All right. Obviously, if anybody has any other questions aside from mine — because I've got an entire list of questions; this whole show has been me learning — please throw them into the chat and we will get them answered by people smarter than me. Gina — outstanding. Let's just run it.

Speaker3: [00:16:38] Can you hear me?

Speaker4: [00:16:40] Oh yeah — I've got old AirPods and sometimes they kind of crap out on me. Thank you. First of all, Mikiko, that was incredibly brave and really speaks to me, and I so appreciate it. You know, a lot of us go through tough times, and I think maybe one positive — even before the pandemic, but in this era we're in — is that we're willing to talk about mental health these days, and I thank God for that, because it's incredibly important. So I just wanted to give a shout-out of support there. And — I'm sorry, I came in a few minutes late — so, superpowers. That's an incredible superpower Mikiko talks about, because, I mean, such tough times, and yet still having that sense of: no, there's more left in me, man, I'm going to keep going. That resonated with me too, as well as Eric's comment. I feel like my superpower is talking across disciplines. I feel like I can basically strike up a conversation with just about anybody, because I'm incredibly curious about people and what they do, and, you know, kind of where they come from — their origin story, things like that.
And I guess, yeah — I don't think of myself as being extremely determined or organized, or, you know, I'm not a hustler necessarily, but I've achieved quite a bit, and I think there's just this part of me that's like — when I was in college, I was a bio major, and I was underprepared at a very tough school relative to a lot of the other students. And I just was like, I refuse to be weeded out. That may not have been the most rational perspective if you saw my cumulative GPA coming out, but by God, I was not going to get weeded out, because I wanted to study biology. And I'm really glad I did, and I love it, even though over the years I've moved into kind of an environmental background, and data science as well. But yeah, it's that sense of — even if you get really down and despairing, it's like: but there's still more to do. There are still things I want to accomplish. There are still things I want to give to others, as much as I can. So yeah, I think those are the two, and I'm sorry if everyone else already gave theirs, but that's great.

Vin: [00:19:23] That's what we're looking for. And, you know, you kind of hit on it: the pandemic seems to have made mental health OK to talk about, because it didn't used to be OK to say anything about that. And so it's nice to hear a couple of good stories that turned out well, because most mental health stories don't have a good ending, and that's the only way it seems like it gets brought to our attention. For the people who are survivors and the people who have worked through it — you know, there was a stigma, still is a stigma, associated with it. So thank you, both of you, for talking through that and talking about just getting help and how it works out. It's really important, and thank you for bringing that through. All right. Anybody else want to talk about superpowers? Because I've missed just a couple of folks. I mean — Aaron, Russ — if you want to jump in, talk about superpowers.

Speaker2: [00:20:18] Yeah, I'll give that a shot. Superpowers — there are two that come to mind. One that is potentially dangerous but is also quite useful is grit. It's just not giving up, right? Like what Gina was talking about. And I find this is true of a lot of people who do highly technical degrees or technical fields of learning. In engineering, at least at Sydney University and a couple of the other Australian universities, the ones that make it through the engineering degree are the ones that actually dug through it. It wasn't easy, and it's not going to be easy. Anyone who wants to get into a technical field — there's a lot of stuff, and saying you're going to enjoy every bit of it, that's just a lie. There were entire subjects where I was just racking my brains and struggling hard to power through, right? I've said it before: I dropped my computer science degree six months in — I just didn't understand coding — and then through my robotics degree I had to pick it up from the assembly level, right? Start with assembly code, then on to C, and then on to object orientation. Really build it up from scratch, right?
Speaker2: [00:21:31] That's not going to be everybody's path. Some people are just going to get it, and that's a superpower which I just wish I had — just getting coding — something that I had to build up over literally two or three years, right? The danger — the responsibility — is understanding for yourself what the limits of your grit are, right? Sometimes we get into this mentality — and some of us, particularly with the background I come from, an Indian background — of: yeah, you've got to push yourself. You've got to keep going — study, finish your degree, go through, and work, right? But there are other paths to get to the kinds of work you want to get to. So you've also got to be really aware of what your limits are, right? And often you're not aware of that. I was in a situation where I was at a job that I didn't particularly like, a couple of months in, in a town I didn't really like, and just having the support network around me to call it out — saying, hey, are you really where you're supposed to be, doing what you're supposed to be doing? — made me kind of trigger this whole rethink of the goals of my life, saying, OK, what's my net value as an individual, right? And having that support network around me helped me figure out — hang on, no, maybe I should be back in Australia for a bit — and I made that move, right? So having that grit sometimes gets you into a hole that only other people can dig you out of, right? On a lighter note, though — and this is kind of different from the mental health topic — the superpower I would love to have is discipline. One of my coworkers — he's the kind of guy who will sleep at the same time every single day, wake up at the same time every single day, right? And he just knows what he needs to get done, and gets it done within the time he tells himself he's got to do it in. I'm a more chaotic individual, and I'm just looking at that going, man, that's a superpower to my mind. I wish I had that. One of the things I'm trying to focus 2022 on is discipline. I have no idea if I'll get there, but we'll see.

Vin: [00:23:43] You know, following up — one thing that I've learned over my career is you kind of go through these ups and downs of motivation. Not that your career is good and bad, but sometimes you're motivated, and sometimes you're going through a year where it's like, I just want to phone it in. But when I'm in that super accelerated, hyper-performing mode, I try to drag everybody with me, you know? And that's what I kind of learned: that's just me, because that's where I'm at, and some people are there, and some people are at a different place. And so it's great to have grit, but I think there's an empathy component to that. And I totally relate with everybody that's talked about having Asian parents. Yeah, 100 percent — I don't look it, but, you know, I've heard words like "A's are Asian; B's are not." I heard that not from my parents, thankfully, but from a friend's parents, growing up in Hawaii. Yeah, there's a lot of pressure, isn't there? And I can't imagine that's a thing we have a corner on the market on. I'm imagining it's someplace else too.
Everybody has a little bit of that.

Speaker5: [00:24:49] Just from a bunch of my Caribbean, Nigerian, Chinese, Korean, Vietnamese, Filipino friends — I've literally heard that. It's almost like you take a map and just draw a huge swath: here's where all the pressure is going to come from. But I think sometimes it also intersects with — economic security, shall we say. So if you're also from a family where, you know, parents didn't go to college, or you grew up — I won't say poor — lower middle class, but adjacent. Adjacent, yeah. Adjacent, absolutely. You know, then there's also this pressure — and what was that movie that came out recently, Encanto? Yeah, yeah.

Vin: [00:25:56] It didn't really grab me, I'm sorry.

Speaker5: [00:26:00] But it did a really good job of looking at that generational pressure — like, especially if you're the first one in your family to go to college. I don't know, it's just — man, there are a lot of ways parents can mess up their kids nowadays. Unlimited is the number of tools they have.

Vin: [00:26:17] Yes, and it's kind of scary being a parent now.

Speaker2: [00:26:19] I just want to jump on the back of that. I'll be frank — I don't disagree with what you're saying, but it's not always the parents, either. And it's not always that you're from that lower-middle-class-adjacent kind of background where you're faced with, you know, financial difficulties or any of that. God bless my parents — I'm very, very thankful that both of them were pretty well educated and were really successful in their careers and what they wanted to do. And they never put that pressure on me — hey, you've got to get top grades — that kind of mentality wasn't there. But I ended up going to a school that was essentially one of the top schools in the state — and this is public schools I'm talking about, not a private school — one of the top public schools in the state, like 10, 15 years running. And the school had this mentality of: hey, we expect you guys to come second. The grade before mine came fourth in the state, across all schools, even competing against private schools — fourth in the state — and that grade was seen as the failure grade, right? Like, how dare you come fourth in the state when we've had 15 years of coming second? And that's coming from the other students. It's an ego push from other 15-year-olds and 16-year-olds. This isn't necessarily even about parents. There's a strange economy of ego that mixes into it as well, right? That, hey, this is my identity — doing well at school is part of my identity, right? And I totally see where that plays into it as well, because that was more where the pressure to have grit came in for me. But yeah, you're right — having an awareness of financial insecurity, or being faced with it, like having parents who grew up in a country where financial insecurity is really visible — I think that's a big factor in pushing people towards: hey, you've got to go get a degree in a pretty stable career environment.
It's not an unreasonable background to have. And I think those of us who had parents who didn't push that on us are thankful as hell, because they shielded us from that, essentially. But it still gets to us in other ways, which is the strangest part.

Vin: [00:28:44] Mind you —

Speaker5: [00:28:45] I mean, I think that's one of the things that, in some ways, I feel like I sort of missed out on: football culture in the U.S. Like, what's that show — Monday Night Lights? Friday Night Lights. I have friends who've gone to a lot of these schools that had the American football, basketball, sports culture, and when stuff like Friday Night Lights happened — it's hilarious watching so many shows on CBS or whatever, a lot of these high school dramas, because I'm like, wow, that 100 percent did not look like my high school. Not at all. Like, no, we didn't have these burly American football players and the cheerleaders and all that nonsense — stuff like toilet-papering the chemistry labs. I didn't get it. I'm like, that happens?

Speaker2: [00:29:41] Oh, I'm Australian — we don't have that anyway. So those shows were always a bit weird to me.

Vin: [00:29:46] All right. I want to hear Gina — Gina and Eric both — because there have been some interesting conversations in the chat that I want to catch. And Eric, your question — I'm not ignoring it, I just want to get through this piece of it and hear some more perspectives, because, to be honest, this is kind of interesting.

Speaker4: [00:30:01] Yeah, so I'll jump in here. Thanks, Vin. Well, first of all, you mentioned when you were a student and you got fourth in the state and that wasn't good enough. I grew up in South Dakota — so pretty homogenous, mostly white people, northern European types, and also, in western South Dakota where I grew up, a lot of Native Americans. So certainly somebody from another country stands out, and our chemistry teacher was from India — her husband, I think, was a professor at the local college. So we had this — I think it was Science Olympiad or something like that — and I took a test for chemistry. And I think I got the fourth-best score ever, and the best score in a long time. But her first response to me was, "You missed these two." It wasn't, "You just got the strongest score that we've had in 10 years." It was, "You missed these two." And right then and there, I realized there's a big cultural difference in terms of how people in different parts of the world respond. And then, not only that — when I went to college, I guess I'd never really even heard of the whole idea of "you're going to be a doctor or an engineer or a lawyer." I'm thinking, well, if you're not really interested in doing that, I don't know how happy you will be. And you may be very good, but how happy? I mean, it doesn't make a lot of sense. But at the same time, I understand some of those cultural pressures. I grew up lower middle class — adjacent, or whatever you want to call it. So I grew up poor, and I didn't want to be poor anymore. You know, got news for you there: it's like, no thank you.
And at the same time, though, nobody was pressuring me to do anything. They were just probably happy that I went to college, and the farther I could go, the better. But at the same time, I think I was raised with, you know, "Oh, you accomplish so much. You're so accomplished. You accomplish things — it's kind of effortless." The family didn't really have to push me; I was very self-motivated. And where I grew up, frankly, it wasn't like some big city where there's all kinds of competition, so it wasn't really that hard. And then I got to my university, which, as I said, is very competitive, and I struggled and I struggled. So when your identity is built around accomplishment and achievement, without that, you don't really know who you are, or if you're a worthwhile person. And I don't know — if it's coming externally, and you're disappointing your family, and you feel like you might be embarrassing your family, or you're getting that message — I mean, that is a tough, tough thing. And I guess —

Speaker2: [00:33:16] I think you dropped out there.

Speaker4: [00:33:18] Oh, sorry. Yeah — you know, how to come out of that. I mean, hopefully no one's parents will disown them for, you know, becoming a stand-up comedian. Who's the guy in Silicon Valley who wrote the book? I think he plays the character Jian-Yang. What's his name — do you guys remember? Anyway, he writes about how disappointed his family was because he didn't become, I guess, an educator or teacher or professor kind of person. He became a stand-up comedian, and they were just like, what is this? But then when he became more successful, of course, everybody embraced him, because it reflects well, I suppose. But yeah — short of finding other support groups and just nurturing your connections and relationships, I wonder what others think here. Jimmy Yang — thank you, Eric, that's the guy. His book is hilarious; get the audiobook if you get his book. So anyway, I'm curious to hear others' thoughts, and I know others are patiently waiting, so maybe we move on.

Vin: [00:34:29] I think the only thing — let me close with the immortal words of my dad when he found out that I was switching to computer science from civil engineering. "Not happy" was a good way to characterize that phone call. And then go to three years ago, when he came out to visit and we talked about where I was, what I was doing for a career. And he said, "You know, when you were a kid, I bought you this Atari. I think you took that a little too far." Those were his wise words of wisdom on my career. There's just a level of disappointment that you can never get rid of in some families, and I think — he's good with it now — but I think that's a strange stigma that follows people around. Are you there? OK — nope, not there. All right, so, Eric, let's go to your question. You've got a question about recommenders again. I think I remember you having another one of these last week, or two weeks ago.

Speaker4: [00:35:32] Yeah, recently it's been on my mind. So I was messaging a data scientist from Zapier and asked about what they do with recommenders, and she said that they use a random forest for their recommender system, and I had never heard of that before. So I googled it.
And apparently you can use random forests for recommenders and other unsupervised learning tasks. And so I was just wondering if anybody had ever done that, or heard of it, or could explain the idea behind it. Otherwise, I'm just working my way through the Wikipedia material on unsupervised learning with random forests and stuff.

Vin: [00:36:18] I mean — and I'm going to let everybody else get in in a second — you can do pretty much anything with any model architecture. So I just want to warn you: there are some data science MacGyvers out there who literally, with a random forest and seven rows of data, have come up with something that's wildly accurate. Doesn't mean it's a good idea. It just means it worked, for some reason. So, yeah — anybody else want to chime in on random forests for recommenders? Anybody else done that? Mikiko, I got a yes.

Speaker5: [00:36:50] Yeah, I mean, it's similar to your point. There are a lot of different ways that you can create recommendation systems. You can have collaborative filtering and content filtering — there's collaborative and there's content, OK? But the general idea for recommendations — and this is where it's actually the art, the art and science, of coming up with a model architecture for recommendation systems — is: what does a recommendation look like? So Spotify, Netflix, and YouTube, for example, have this interesting dilemma. We can either recommend stuff that you've already proven to like; we can recommend stuff that people like you like; or we can recommend stuff that is way different from what you normally consume, because we need to introduce some kind of serendipity to it. And then you translate that into, let's say, a random forest regression or a classifier model, right? You could do a binary yes or no: will this person like this thing? You could do more of a regression, where you produce a score and then you just set a threshold. So, generally speaking, any sort of regression model you could hypothetically use in a recommender. But it still comes down to: how do you define the core idea of the recommendation, your core implementation of the recommendation? Are you recommending it because you want to give people the same thing that they already have, or do you want to introduce them to something new? And then the decision point there is you could either do it persona-based — which is, I think, content filtering; collaborative filtering — I always have to go back and remember which is which —

Speaker4: [00:38:58] Collaborative is the user-to-user one.

Speaker5: [00:39:00] Yeah — you can either do something that's more persona-based, or you could do something that's based more on their behavior, I guess you could argue. OK.

Speaker4: [00:39:08] I kind of wonder — I guess, a little bit more information: one thing she said when I talked to her was that it's predicting item probability. So, have you ever used Zapier — or Zapier, however you say it? It's the tool that's basically the API that connects all the APIs, right?
And so it's saying, you know, oh, you use Salesforce and you use Google Sheets — well, you'll probably also really like, you know, Acuity Scheduling or MailChimp or whatever, right? And she said that they're predicting item-item probabilities — which I'm assuming is another app connection — from user behavior and features. I don't know what those features are, but that's what's gotten them the best results. And so I kind of wonder — as you were talking, I actually thought about this persona thing, where I was thinking, gosh, I wonder if you could have a classifier where, let's say, you've got 10 personas, and those 10 personas boil down to a collection of 10 apps each. You could classify someone into a persona and then recommend those apps to them. Or I guess you could see which apps, based on predicted probability, going through all the different applications. But then I guess you'd have to have a lot of users, because you're going to hit a lot of sparsity — Zapier has like a bazillion possible app connections, and growing. So it's kind of helpful to talk through it and hear your thoughts about it. Curious to see if I could make something.

Speaker5: [00:40:37] Good, good, good — I was gonna say, I think the most successful products don't take one approach to it, right? They usually will stack or layer. So they can either stack models, where they kind of feed models into each other — and a lot of times you'll do that if you want to sort of enrich the feature set — or they can say: screw it, we're not going to try to blend models, we're just going to do everything. Spotify kind of does that. They have the stuff that you like, right — they have your playlists, your mixes one, two, three, four, five, six. They also have stuff we think you might like. And then they also have things where, if you took the embedding of all the different songs in the different genres, they might say: we'll pick the ones that are closest to you, and we'll also potentially try to add a little serendipity and pick the ones that are farthest away from your interests. Because if they keep recommending the same stuff, it's just going to spiral into this, like, death —

Speaker4: [00:41:49] Sort of echo chamber, yeah.

Speaker5: [00:41:54] A lot of times they'll take a portfolio approach to recommendation models, where they'll basically say: why don't we just try a bunch of different stuff, and then we'll test it and see what works, and all that good stuff. Spotify has a really awesome experimentation culture because of that. Same with Amazon. YouTube, for sure. And TikTok, apparently.

Speaker4: [00:42:21] Right? Yeah, I did read about, like you said, people who are way different from you. One of the ideas was: what do people who are way different from you dislike? Maybe you'll like that, even if you've never heard it. I'm interested to see what that would be, because I don't know who the people way different from me are, let alone what they dislike, or how you would choose that — because lots of people are way different from me. It just depends on which axis or whatever you decide to go down to find them. So, interesting approach.
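A minimal sketch of that "predict item probability" idea — train a random forest on (user features, candidate app) pairs and rank candidates by predicted connection probability. The data and feature names below are synthetic stand-ins, not Zapier's actual features or system:

```python
# Hypothetical "item probability" recommender with a random forest.
# All features and data are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_users, n_apps = 1000, 50

# One row per (user, candidate app): user behavior features plus the app id.
X = np.column_stack([
    rng.integers(0, 20, n_users * n_apps),      # e.g. zaps created (hypothetical)
    rng.integers(0, 10, n_users * n_apps),      # e.g. apps already connected
    rng.integers(0, n_apps, n_users * n_apps),  # candidate app id
])
y = rng.integers(0, 2, n_users * n_apps)        # 1 = user made the connection

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Score every candidate app for one user and recommend the top 5.
user_features = [12, 4]
candidates = np.array([user_features + [app] for app in range(n_apps)])
probs = model.predict_proba(candidates)[:, 1]
top5 = np.argsort(probs)[::-1][:5]
print("Recommended app ids:", top5, "with probabilities", probs[top5])
```

The sparsity worry raised above shows up here as the catalog grows: most (user, app) pairs are negatives, so in practice you would typically downsample negatives or restrict the candidate set before scoring.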
Vin: [00:42:53] Cool. A lot of what you're talking about is basically a distance metric of one sort or another, where you're looking for how far away people are from each other. And there's that concept of wrap-around, where this person hates it, they are very different from you, and it almost wraps around to a different class or a different type of user. And I think Mikiko was kind of bringing this up, but I want to make it explicit: there are kind of two schools of thought about recommenders. One of them is, I want to just keep giving you things that I know you're going to like, or that you're very likely to enjoy — and I want you to find some diversity, some variety, because, you know, I only have so many products to sell you, or something like that. The others are behavioral, where you're actually creating more than just "here's something to check out." You're really creating marketing loops; you're creating other behavioral loops where you can create rituals. And with the person that you're targeting, now you're trying to change their likes, change their preferences, and understand what it takes to take somebody from this type of customer to a different type of customer, or to consuming a different, higher price range. So there are — well, there are more than two, but those are the two simplest — schools of thought on what you should do with a recommender. So keep that in mind: you may not want them to do exactly what's obvious and easy. You may actually want to take them on more of a journey.

Speaker2: [00:44:30] That might actually close the loop a little bit on why a random forest in certain cases might work well, right? So where I've used random forests more — because I haven't built recommendation systems before — is exploration for robotics. Area exploration, where there's an empty space and the robot doesn't know anything about it beforehand. So there's a combination of: do I explore new frontiers, or do I exploit the areas that I already know, right, in an unknown exploration space? Essentially, this is that explore-versus-exploit balance that you're talking about, Vin, where you can either continue to exploit stuff that you know you can recommend — and it's an easy win — or you can explore the areas where a person has maybe never seen this movie before, if it's a Netflix thing, for example. So that may be the nature of why a random forest model might still work well in certain cases, depending on how you want to balance that explore-exploit trade-off.

Speaker4: [00:45:29] Oh, I like that. I like that thought a lot.
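That explore-versus-exploit balance can be made concrete with an epsilon-greedy rule layered on top of any scoring model, a random forest included — a toy illustration, not anyone's production recommender:

```python
# Toy epsilon-greedy version of the explore/exploit balance described above.
import numpy as np

rng = np.random.default_rng(42)
n_items = 100
predicted_scores = rng.random(n_items)  # stand-in for model scores (e.g. a random forest)

def recommend(epsilon: float = 0.1) -> int:
    """With probability epsilon, explore a random item; otherwise exploit."""
    if rng.random() < epsilon:
        return int(rng.integers(n_items))    # explore: the serendipity pick
    return int(np.argmax(predicted_scores))  # exploit: the easy win

picks = [recommend(epsilon=0.2) for _ in range(1000)]
print("unique items shown:", len(set(picks)))  # exploration adds variety
```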
Speaker5: [00:45:31] The other selling point that our dear friend — oh God, I can't remember —

Speaker4: [00:45:39] It's Friday. Mister Rogers?

Speaker5: [00:45:41] No, our Excel random forest friend. Yes — something he likes to bring up a lot is that random forests, too, are more interpretable than other types of models. They're fairly efficient if you throw the data at them. They can handle ordinal and categorical variables a little bit better than some other types of models. The only — well, the main challenge I know about with them is you can kind of game them a little bit. Essentially, if you have features that have a lot of splits — like, they have a lot of values — you can get a really high Gini score on them. So you can have variables — for example, a variable that is state or city (actually, city is a better one — city, maybe region or neighborhood) — where random forest models will sometimes weight those features a lot more than other types of models would, because of the whole thing with the Gini coefficient, if I remember right. I think that was an issue when we were using it — still a good model to use, a good starting one, absolutely — but that was something I think we had issues with when we were trying to do sales modeling with data from Salesforce. You essentially have all these users and customers, and if you're trying to use neighborhood or city or region — a categorical feature that has a lot of values — I think that's where it sometimes gave those features a little bit more importance than maybe it should have. So I think that's the one drawback.
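That Gini point is easy to demonstrate: give a random forest one genuinely informative binary feature and one high-cardinality pure-noise feature, and the impurity-based importances often inflate the noise column. A self-contained sketch on synthetic data (permutation importance on held-out data is the usual fix):

```python
# Sketch of the high-cardinality bias: an uninformative ID-like feature can
# soak up impurity-based (Gini) importance in a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

informative = rng.integers(0, 2, n)  # low-cardinality, truly predictive
noise_id = rng.integers(0, 1000, n)  # high-cardinality, pure noise (like a city/user id)
flip = rng.random(n) < 0.3           # 30% label noise
y = np.where(flip, 1 - informative, informative)

X = np.column_stack([informative, noise_id])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The noise column typically grabs a large share of Gini importance despite
# carrying no signal; permutation importance on held-out data avoids this.
print(dict(zip(["informative", "noise_id"], model.feature_importances_)))
```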
Vin: [00:47:45] Anybody else want to chime in on recommenders here before we get to Kosta's question? All right — Eric, I hope that gave you some good answers, good places to go with recommenders. Recommendations on recommenders that you will hopefully like. All right, Kosta, tell us.

Speaker2: [00:48:05] All right. OK, so I'm working with image datasets, right? And one of the things that we need to do is keep track of the source of the dataset, and there's a bunch of metadata that we're tracking about it. Essentially, it's metadata, right? Where did it come from? What's the nature of the content that's in there? And there was a bit of a toss-up: do we go with an SQL database that has all this information in it — and what's the overhead of doing that — versus, hey, we don't actually have SQL experts within the team that controls this, let's just chuck it into a JSON and put it on file? That was the other potential avenue. We wanted to be able to control some of it — the structure of the document — and be able to actually use that as a functional object, so that we can track how things are progressing with that particular dataset. So it's an interesting thing, where we ended up basically using Pydantic to enforce the structure of the model in certain areas where we really care about it, but leaving it loose enough that we can add additional information on top of the JSON where we need it, right? Has anyone done something like that before? Does it sound like, hey, if you wanted that, you should have just used a SQL database? Am I barking up the wrong tree?

Vin: [00:49:34] You might be, but image data is hard, and I'm going to let Mikiko kill it. I'm just going to tell you this about image data: there's no right answer. And that's kind of the problem with image data — no matter what direction you choose, you're going to feel like there was something that you missed or something lacking, because there isn't a right answer. I mean, you're struggling with kind of the existential problems that some companies build customized systems to manage. So, on that note of "there is no right answer," I'm going to give you the right answer.

Speaker5: [00:50:11] There is no right answer. No — I think this comes back to — and I say this as not a data engineering expert, but as someone who admires the data engineering experts I am studying from — that question of what kind of data storage mechanism someone uses. So, OK, let's say just cloud storage on GCP, right? They have tons of options, everything from unstructured data to structured databases, both relational and non-relational. I think what it comes down to is your read/write patterns, essentially — the latency, and also cost. So, for example, you already know the images would probably be stored in, like, a GCS bucket or something like that, and same on AWS — they'd be stored in an S3 bucket. Sorry, I live in GCP. So then, at that point, if it's just collecting the metadata, and if you want a really loose, almost MongoDB-style schema, you could do that. Another option is BigQuery — it's a relational OLAP system, but they are now allowing unstructured data in there, so that could kind of be the best of both worlds, and BigQuery is a fairly stable product. The AWS side, I'm sure, has their version of it, which might be Redshift, I think. But yeah, at that point it's just kind of like: just stick it in a SQL database, you know? You could use an unstructured, like, JSON column in the database — you could, if you want, stick the JSON in there — and then, as you get a little bit more structured data, use the columns you would normally use. You'd probably still use Pydantic anyway if you're doing the insert queries — like, data validation and enforcement and type enforcement. But I would go literally the most vanilla, super-easy path possible, and that would be using BigQuery, or something like BigQuery, with a bucket for the images. Because the other part here is: the more boring you make it, the easier you make it to hand off to someone else. Oh, and also, if you have an agreement with Google, with GCP, or with Amazon, then you can kind of force them to help you out if something messes up.
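That "boring" layout — images in a bucket, a SQL table with enforced core columns plus a loose JSON column — sketched minimally below, with the standard library's sqlite3 standing in for a warehouse like BigQuery so it runs anywhere; all names are hypothetical:

```python
# Minimal sketch of the pattern: enforced core columns + a loose JSON column,
# images referenced by URI. sqlite3 stands in for a warehouse like BigQuery.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE image_datasets (
        dataset_id TEXT PRIMARY KEY,
        source     TEXT NOT NULL,   -- enforced core metadata
        image_uri  TEXT NOT NULL,   -- e.g. gs://bucket/path or s3://...
        extra      TEXT             -- loose JSON for everything else
    )
""")

record = {
    "dataset_id": "ds-001",
    "source": "field-survey",
    "image_uri": "gs://example-bucket/ds-001/",  # hypothetical bucket
    "extra": json.dumps({"camera": "rgb", "labels_version": 3}),
}
conn.execute(
    "INSERT INTO image_datasets VALUES (:dataset_id, :source, :image_uri, :extra)",
    record,
)

row = conn.execute(
    "SELECT extra FROM image_datasets WHERE dataset_id = 'ds-001'"
).fetchone()
print(json.loads(row[0]))  # the loose metadata round-trips intact
```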
Speaker2: [00:52:59] No, that's a very good point. The kind of support and cost, the latency — some of that weighs in, right? Yeah. And obviously the image data is in a GCP bucket — I would not try to do anything else with that. But essentially, it was: what are we going to be quicker at? We're predominantly Python developers, so it was to kind of make this an interface class, essentially, that just deals with the Firestore documents and only modifies them in ways that we want them to be modified, so that all of our pipelines can use that particular handler. It's essentially a handler interface to the doc in Firestore that says: OK, these are the actions you can do to that document. So that's how we're programmatically controlling a lot of that stuff. Now, I'm looking at it going, there's a sunk cost fallacy here, where it's like, we've already done so much on this — we can just build a quick API to strap other stuff onto it, like, you know, a dashboard or whatever. Is there value in moving over to a SQL database if a certain percentage of it is structured and part of it is unstructured? I might have a look at BigQuery, now that you mention it — if they're open to unstructured data as well, that might be an interesting progression. Because the truth of the matter is, this is never going to be fully fixed and structured, you know what I mean? We started off with the bare minimum — the MVP was a very tightly structured document — but with the nature of image metadata, you suddenly start getting other bits added to it, right? Yeah — I'm just kind of juggling a sunk cost fallacy here. It's like, we've already got this thing that works. Is it worth putting any effort into changing it? Are there any really big wins out of it? I think the main thing we're seeing right now is a little bit of latency out of Firestore and things.

Vin: [00:55:00] Yeah, you've always got to think about what you're going to do in the future. I mean, if this is what it is and it's not really changing that much — how much are you going to be adapting it in the future? How dependent on this are you going to be in the future? You know, is this something that's pretty static and stable, or is this the kind of project that you're going to be leveraging and improving over the next year or two?

Speaker2: [00:55:22] I think so. This started off as a stable MVP kind of thing — this is what it's going to be, we don't need anything more than this. And now we've grown a little bit. We've extended it and found some edge cases where, OK, we need to loosen a few things here, we need to change a couple of things there. But I suspect it's actually going to stabilize for at least a year or so. After that, as the project evolves to a further phase, it's likely to change, right? But at the same time, is it worth trying to look around that corner right now, when what we have right now essentially works, with a couple of additional tweaks, and can see us through for another, you know, X amount of months?
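For the curious, here is what that Pydantic-plus-handler pattern might look like in miniature — hypothetical field names, assuming the google-cloud-firestore client and Pydantic v1-style config; a sketch of the pattern as described, not the team's actual code:

```python
# Sketch: Pydantic enforces the core fields but allows extras; a small handler
# is the only way pipelines touch the Firestore document. Names are hypothetical.
from google.cloud import firestore
from pydantic import BaseModel

class DatasetMeta(BaseModel):
    dataset_id: str      # enforced core structure
    source: str
    image_uri: str

    class Config:
        extra = "allow"  # loose: extra metadata keys pass through validation

class DatasetHandler:
    """Handler interface: the only allowed actions on a dataset document."""

    def __init__(self, client: firestore.Client):
        self._col = client.collection("image_datasets")

    def register(self, meta: DatasetMeta) -> None:
        # Validation happened when meta was constructed; write the full doc.
        self._col.document(meta.dataset_id).set(meta.dict())

    def annotate(self, dataset_id: str, **extra) -> None:
        # Additive updates only -- core fields can't be clobbered here.
        allowed = {k: v for k, v in extra.items() if k not in DatasetMeta.__fields__}
        self._col.document(dataset_id).update(allowed)

# Usage (assumes GCP credentials are configured):
# handler = DatasetHandler(firestore.Client())
# handler.register(DatasetMeta(dataset_id="ds-001", source="survey",
#                              image_uri="gs://bucket/ds-001/", camera="rgb"))
# handler.annotate("ds-001", labels_version=3)
```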
Vin: [00:56:07] And I can tell you, in that case: wait. Wait until you have the detail of what it's going to look like in a year. You know, just limping along, you're going to hate it a little bit. But if it's just edge cases, learn to hate it, and then, when you have clarity on what you have to change — especially with image data, because you can go down the wrong path. I've seen companies have to do forklift migrations because they went the wrong way. So, you know, have specifications up front. Anybody else want to chime in on this? Oh, I'm sorry — go ahead, Mikiko.

Speaker5: [00:56:37] I totally agree with Vin. Yeah — I mean, if it's already working, I wouldn't.

Vin: [00:56:44] Yeah, yeah. Anybody else want to chime in on this? Anybody worked with image data recently? Because my last image data thing was two and a half years ago. All right, Kosta, I hope that was helpful — I hope you got some answers. Anybody else have any other questions? I've been trying to keep track of the chat, but it would be very easy, in the mix of Mandalorian music and ukulele references, to have missed something. Anybody else have any questions to ask?

Speaker6: Yeah — sorry, I was driving earlier, so I didn't get a chance to hear the answers to any questions that may have been asked before. So we're kind of working on something where we're rolling out a new app for our customers, and there are a lot of business cases revolving around it — we're rolling out an app with certain features to interact and reach out to our customers in different ways. So, as we start to collect data, there's a discovery question around which particular features of the app correlate to customer churn. We have a childcare business, where we want people to stay as long as possible, so we're trying to correlate certain app features and interactions with customer satisfaction, things like that — which are leading indicators of retention, right? And, you know, we don't really have many of those skills or capabilities; we're trying to upskill the team. We have some people who are familiar with doing predictive models and things like that, but I'm trying to think about the approach, because from the business perspective it's kind of: let's do some correlation analysis to figure out whether, if we send five text alerts or five video shares, that correlates more with them staying longer. I think of correlation as kind of the first step, but you also want to use it for predictive models, to do much more machine-learning-type use cases. So I'm just thinking about the approach: once we start to gather a lot more data, and can pull it down to a customer level or a location level, what type of analysis would do that kind of correlation — or maybe there are different perspectives on how to do that analysis — to figure out which particular features of an app are correlated with increased customer satisfaction or retention or length of stay, things like that?

Vin: What kind of behavioral data are you gathering right now from the app?

Speaker6: Um, so essentially — if I have a child in a school, we're sending parents photos, we're sending them updates: your child ate, you know, food today; they went to the bathroom; these are their lesson plans. It's all the behavior around "these are the things your child is doing." Also video — if we send a video, photos, all those different kinds of behavioral things during the day. The question is: what are those, quote-unquote, quality behaviors? What combinations of behaviors in the app correspond more to customers saying, OK, I like that, versus that I don't really care about as much?
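A toy first pass at that correlation question — per-customer feature-usage counts against tenure and churn, on synthetic data with hypothetical column names. It finds the "what" (associations), which, as the discussion below makes clear, still isn't the "why":

```python
# Toy correlation pass: app-feature usage vs. tenure/churn. Synthetic data;
# column names are hypothetical stand-ins for the childcare app's features.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "photo_shares": rng.poisson(8, n),   # app features, counted per customer
    "video_shares": rng.poisson(3, n),
    "text_alerts": rng.poisson(5, n),
})
# Synthetic ground truth: photo shares genuinely lengthen tenure here.
df["tenure_months"] = 6 + 0.8 * df["photo_shares"] + rng.normal(0, 3, n)
df["churned_within_year"] = (df["tenure_months"] < 12).astype(int)

# Step 1: simple correlations with length of stay.
print(df.corr(numeric_only=True)["tenure_months"].sort_values(ascending=False))

# Step 2 (the "predictive model" upgrade): logistic regression on churn gives
# directional feature associations -- still correlational, not causal.
features = ["photo_shares", "video_shares", "text_alerts"]
clf = LogisticRegression().fit(df[features], df["churned_within_year"])
print(dict(zip(features, clf.coef_[0])))
```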
I don't really care about as much. And so are you looking at retention on the app, or are you looking at, like, service retention, people keeping their kid in the daycare or the school? Yes. So we have a point of sale system, and that's how we track enrollments, people staying for a length of time; we [01:00:00] know the first time they arrived versus the last time that they left. We do use customer satisfaction surveys; we have those surveys in a separate system that can track, you know, that type of sentiment. So it's really those two: customer satisfaction, that type of sentiment, as well as just overall length of stay, people staying enrolled for a length of time. Anyone want to dive in on this before I, you know, take over? And I shouldn't. Speaker5: [01:00:29] I would just say the gold standard is always randomized experimentation and testing. I had this challenge over at Teladoc. We were trying to predict retention, or specifically activation in this case, of our clients' members. So essentially, you know, we have open enrollment, and we want them to enroll in our behavioral and chronic health condition devices and programs and services. We needed to figure out who are the people who are just taking a little bit longer, who we need to help out based off of when the device, like a diabetes monitor, is shipped to them, versus people who are just never, ever going to sign up, even if we send them emails. And we've tried to do historic analysis in the past, and I think that is right: you can get to the correlative Data, and you can actually get that from machine learning models. So, for example, if you're trying to predict, like, days to activation, or lifetime customer value, or lifetime of a customer, we had the Data, we had how long they stayed, so we could get to those correlative features [01:02:00] and aspects. But that wouldn't tell us the why, right? And randomized experimentation will give you a little bit more weight on any sort of causal-analysis-driven initiatives. So I think that's the only thing that I would really think about: it can kind of tell you the what, but it won't ever tell you the why until you do some kind of experimentation. And there are a lot of different methodologies for how to approach it. And obviously, when you're doing it with people and families and children, you always want to be very sensitive to how that's set up, especially also with sample sizes. That's another thing that we at least had challenges with: to measure the effect size we needed, we did need a certain sample size, and that was a little bit hard to get to sometimes, just because of our enrollments and all that. So yeah, I think that's the main thing I'd be thinking about. Vin: [01:03:06] Do you do any kind of treatments of people? Like, since for our case it's going to be rolled out to everybody, do you, in your experimentation, do treatments around a particular sample that was using certain features versus people who were not using certain features, where everyone seems to have the same features at Teladoc? Speaker5: [01:03:25] Yeah.
So we actually have different populations. We had, for example, a Medicare/Medi-Cal or Medicare Advantage population. We had a more general population; we partnered with, like, Home Depot, for example, covering all their employees, so that's not necessarily a Medicare/Medi-Cal specific population. And then we also partnered with some big insurance plans, or big insurance companies, and they would sort of determine the level [01:04:00] or the segments that we would cover, and what sort of coverage. So there's a lot going on, and that was one reason why it was a little bit hard sometimes to get the sample sizes that we truly needed. The other part that is really important is the effect size: at what point would you call a difference in behavior meaningful, for example? So let's say you do a campaign, like a push notification or an email campaign, over a short period of time, and let's say you find that people in the treatment group are a lot more active and engaged in the beginning. That doesn't always translate to a higher lifetime value or lifetime retention. Some of that stuff just takes time, and that's where the quantitative analysis can help. It can help you make the business case to say, we should do this kind of study. Speaker5: [01:05:08] But other times, let's say all the groups are equally engaged, but one group, they like or comment more on those status updates that you send through the app. That's great, but is there business value, too? Some product managers would say, yeah, that's absolutely awesome. Some marketing people would say, well, but are they word-of-mouth referring us to other families? So I think it's one of these things where you have to figure out, what's the behavior you want to measure, and is it measurable? And then at that point, what's the effect size that you can quantify? And from there, that's usually where you determine the sample size that you need and also the duration of the experiment. So for example, if you're Google, you can probably run an experiment in, like, a day and call it good, just [01:06:00] because you have such a huge volume of people coming through that no matter what you do, it's going to be statistically significant. If you're, like, a medical insurance company, you don't always have that many samples; anything medical or behavioral, sometimes you don't have that same sample size, so you need to run it for longer. So that's the key thing when you're thinking about duration of the experiment and also the sample size. Vin: [01:06:27] OK, everybody realizes that was about a seven-minute masterclass in experimental design, like, just the foundations of experimental design, right? That wasn't bad. If you're watching this on video, rewind and listen to that again, because that was really important stuff.
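To make that effect-size / sample-size / duration trade-off concrete, here is a minimal power-analysis sketch, assuming Python with statsmodels; the retention rates and enrollment volume are hypothetical numbers, not figures from the conversation.

```python
# Minimal power-analysis sketch of the effect-size / sample-size /
# duration trade-off discussed above. All numbers are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical: baseline 12-month retention is 70%, and we hope a new
# app feature lifts it to 75%.
effect = proportion_effectsize(0.75, 0.70)  # Cohen's h for two proportions

# Sample size per arm for a two-sided test at alpha=0.05 with 80% power.
n_per_arm = NormalIndPower().solve_power(effect_size=effect,
                                         alpha=0.05, power=0.8)
print(f"~{n_per_arm:.0f} customers needed in each arm")

# Google-scale traffic fills both arms in hours; a smaller business has
# to translate the same number into experiment duration instead.
monthly_enrollments = 200  # hypothetical new families per month
months = 2 * n_per_arm / monthly_enrollments
print(f"roughly {months:.1f} months of enrollment to fill the experiment")
```

The smaller the effect you care about, the larger (roughly quadratically) the sample you need, which is exactly why the duration question bites harder outside of Google-scale volumes.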
And I think, you know, half of your first experiments are going to be just to figure out your samples, excuse me, not your samples, your subpopulations. That's going to be the first, I don't know, few months, maybe even longer: figuring out exactly how granular you can get your populations and how you can figure out the segments within each, because you're going to have some significant diversity, because you have this service that everyone uses. There isn't a specific demographic, you know, aside from having a kid. So if you're human, you could potentially be using your service and your app, which makes it, it's almost like a Google-sized problem, it makes it really hard to figure out how to get your segments and your stratification right. And before you do that, you can't really do experiments well. I mean, it'll look like you're getting strong results, and then you'll do something similar, you'll repeat it once and then realize, wait a minute, why [01:08:00] did we get two different results? Same treatment, we thought the same conditions, same stratification. And so reproducibility is going to be a big thing for you in the very beginning. Don't just assume one trial, one experimental run, is enough. It's typically being able to repeat it that gives you a higher level of confidence in your methodologies, and that's going to be really important up front. And then you're going to get granular on what you're measuring. That's usually the process I'll go through: get granular on stratification, get my methodology sound, figure out my practices. You're going to have to build a platform to support a bunch of this stuff, because it gets complicated quick, and you want to do these experiments as simply as possible. They may take a long time to run, but you don't want it to be your time taken up building, designing, and implementing the experiment itself. So build the platform to handle as much of this as you can, and then it's really getting to the point of reliability, and how deep you want to go in this versus how much business value you're getting out of it. Because a lot of times, just understanding basic correlation gives you a lot of value. Even though you don't truly understand the mechanics, you understand enough about it, because you're trying to reduce churn, right? Yeah. And churn has so much value, you know, losing 50 customers versus twenty-five; obviously you want to go after something that has significant value. But when you start taking all the big stuff off the table, these experiments can become extremely expensive and not very lucrative. And that's the other problem: once you get that first year's worth of discovery and understanding under your belt, it's kind of time to say, OK, how much more of this do we really need to do? Because it will get expensive quick, [01:10:00] and you can chase rabbit holes that just aren't worth the money.
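A minimal sketch of the reproducible, stratified assignment being described, assuming pandas; the customer_id, location, and age_band columns are hypothetical stand-ins for whatever segments the stratification work surfaces. The fixed seed is what lets a repeated run land the same way, which is the reproducibility point above.

```python
# Minimal sketch of reproducible, stratified treatment assignment.
# Column names (customer_id, location, age_band) are hypothetical.
import numpy as np
import pandas as pd

def assign_treatment(df: pd.DataFrame, strata_cols: list[str],
                     seed: int = 42) -> pd.DataFrame:
    """Randomly split customers 50/50 within each stratum.

    A fixed seed makes the assignment repeatable, so a re-run of the
    experiment compares like with like instead of silently reshuffling.
    """
    rng = np.random.default_rng(seed)
    out = df.copy()
    out["group"] = "control"
    for _, idx in out.groupby(strata_cols).groups.items():
        idx = list(idx)
        rng.shuffle(idx)
        out.loc[idx[: len(idx) // 2], "group"] = "treatment"
    return out

customers = pd.DataFrame({
    "customer_id": range(8),
    "location": ["north", "south"] * 4,
    "age_band": ["infant", "infant", "toddler", "toddler"] * 2,
})
print(assign_treatment(customers, ["location", "age_band"]))
```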
And, you know, I don't know how many customers you have, but if you're not dealing with millions, it starts getting to diminishing returns really quickly. Now, the last follow-up is: as we think about doing the correlation analysis and then trying to flip that into, like, a model, is that a helpful way to think about it? The big KPI we're trying to drive is, like, length of stay or customer satisfaction. Is it valuable to really have those correlations, to do the feature engineering to figure out those predictive models, which can then feed back into the app and say, hey, these are the things that you should be doing today that we believe are going to help in terms of getting people to stay or increasing customer satisfaction? Vin: [01:10:49] So it's kind of a feedback loop: taking the feedback from the correlation to make the application a little more intelligent, to know what we should be recommending that people do. I think the smartest thing I've ever heard about churn is that churn is an event. If you're trying to predict churn, you're trying to predict that event, or at least discover it before the person leaves, but also before you're at a point of no return. And so it's an interesting problem, because you can detect churn and be correct and still not be able to prevent it. So the metrics, it isn't so much important to say, this is what leads to churn, or this person has a high probability of churn. It's predicting the event. And the thing that caused churn usually doesn't happen on your app; if there's a service behind what it is that you're doing, the event that ties to the churn isn't always something that you have data about. You're only gathering data after the fact, about how people's behaviors change and shift over time. And so what you're really trying to do is get [01:12:00] your predictions as close to that event as possible, and understand what your horizon is for making a behavioral change or having some intervention that will actually change the event and prevent the churn, because that's not always possible. You may detect it, but not detect it in enough time. So when you're talking about a prescriptive model, because that's kind of what it sounds like you're getting to, now you want to build more of a prescriptive model, you have a more complex problem. It isn't so much 'this person is going to churn,' because now you're going from predictive to prescriptive. You want a high level of accuracy to predict a person churning, but in order to actually be able to recommend an intervention, you have to understand a little bit more. It's more than just the KPIs of churn itself; it's understanding the cycle and, like I said, finding out, more importantly, what the event that led to it in the first place was, which you may not be able to do. Speaker5: [01:12:59] Yeah, and I think this is part of why statisticians don't like us data and machine learning people. Because if you look at the original use case for regression models, it was less about prediction.
So for example, we used a model for predicting BMI, and it was less about predicting the BMI and more about understanding how the features interact with each other: what features are the biggest contributors to, for example, someone's BMI, and how do those features interact with each other? So the original predictive models were more to understand, and to help set up experimentation to figure out the right initiatives or policies or procedures to put in place to address that [01:14:00] outcome behavior, right? So I think ideally, the way it would probably go, as Vin sort of described, is you have your correlative analysis, and then, and I almost don't want to say a predictive model, I kind of just want to call it a machine learning model, you essentially build a machine learning model off of that data. It gives you some insight; you can use feature importances or Shapley values or some other sort of model interpretability tools to understand how the features stack rank in terms of contribution to that prediction, which ones are the highest predictors of the outcome. And that will help set you up for figuring out what you should experiment on, and what you can then prescribe as a procedure to, say, cut down on churn and increase retention. So I mean, that would be the ideal thing. I think what I see a lot of times is that people have the correlative analysis, they build the models, and then they focus a lot on the predictive power. And they kind of forget the whole 'all models are wrong, some are useful'; they forget about the 'is it now useful?' And then also, just because you're capturing a historic trend in the features, that might not hold. Let's say, for example, you implement a new initiative, a marketing campaign or some kind of nurturing sequence. If you see it trending and the model is still predicting very well, that would mean that marketing campaign is actually totally useless. From the perspective of model accuracy, your model is still accurate, but from an app perspective, that initiative didn't do much, right? So that's where you kind of want to also build in that experimentation layer or stage [01:16:00] in the procedure. Speaker5: [01:16:02] And I do think sometimes not thinking about it as a predictive model, but thinking of it just as a machine learning model, and then you essentially want to get to that prescriptive modeling, that prescriptive stage, I think really helps. But that would be the ideal workflow: you have the correlative analysis, you know, that provides the business use case to say, let's build some models, let's see if we continue to see these kinds of trends, let's see if the model picks up on any other potential interactions between these features. Let's then think about queuing them up for experimentation, and specifically, in this case, time on the app, let's think about some hypotheses: are people leaving because they're not engaged early on? Are they leaving because they don't have the right features?
Are they leaving because they don't like the support? And then do the experimentation off of that, and see from there what the next procedure would be, or what treatment you'd want to recommend. Vin: [01:17:07] So, you had something in the comments that's actually really good to bring up. Can you? Sure. Okay. Speaker2: [01:17:20] Yeah, so I wrote that I think the event of churn is more like a wave than an individual event, i.e. there are a lot of sub-events that have to coincide for the parent event to trigger. So whilst we can try to halt the parent event, what we want to do is identify all of those child events, monitor those, monitor the transition of those over time, pick up the trends, and try and predict when there's going to be a number of events happening at [01:18:00] the same time that is likely to trigger the parent event. And it's unlikely, I think, that we'll be able to remove those altogether; the best we can hope for is to reduce the number of parent events, i.e. the churn itself. So yeah, it's really interesting to identify the different types and flavors of events that contribute to churn, because there's probably a huge amount, and there's a scale of, shall we call it, weight with those events. Some are going to be a high contributor, some are going to be a low contributor. Maybe you get 10 low-contributing events that really won't trigger that parent, but, you know, two high-weighted events will almost certainly do it, and any permutation thereof. So it's a really interesting subject, to dig down into the roots of it and try to identify all of those potential contributors. Vin: [01:18:54] I think that's helpful, because it's interesting for our business: we know everyone's going to leave, right? At some point the kid's going to leave, so it's more about extending the time they stay with us. Like, we're not Amazon, where we hope people stay forever. At a certain point, your kid is going to go to school, and then they're gone. So it's kind of, how do you extend that out? But I do get your point that, really, there are events that are predictors. So you're trying to weight those events, and that's what you're trying to get at: those particular events, not the churn itself, but the events that make someone leave sooner than we would like them to stay, say from birth until five years old. How do you get the events right, to get them to stay as long as we're aiming for? Speaker2: [01:19:32] Yeah, exactly. And I think you've also got some wild-card events, those that are outside of the professional capacity, i.e. personal domestic issues that can contribute to a person leaving, which you really want to take out of the model that contributes to churn, because they have no direct correlation. But if you're just pulling everything into the picture, you may end up with a chunk of those given a large [01:20:00] audience, sorry, not audience, but a large contributing field, to [01:20:07] the model.
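A minimal sketch of the workflow described above, correlative data, then a machine learning model, then a Shapley-value stack ranking of feature contributions, assuming scikit-learn and the shap package; the feature names and synthetic data are hypothetical stand-ins for app events like photo shares or daily report opens.

```python
# Minimal sketch: churn model -> Shapley-value stack ranking of features.
# Feature names and data are hypothetical stand-ins for app events.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
import shap  # pip install shap

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "photo_shares_per_week": rng.poisson(5, n),
    "video_shares_per_week": rng.poisson(1, n),
    "daily_report_opens": rng.poisson(3, n),
    "support_tickets": rng.poisson(0.3, n),
})
# Synthetic churn label, loosely driven by low engagement plus support issues.
logit = -0.4 * X["daily_report_opens"] + 1.2 * X["support_tickets"] + 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier().fit(X, y)

# Stack-rank features by mean absolute SHAP contribution to the prediction.
shap_values = shap.TreeExplainer(model).shap_values(X)
ranking = pd.Series(np.abs(shap_values).mean(axis=0),
                    index=X.columns).sort_values(ascending=False)
print(ranking)  # candidates for the experiment queue, not proven causes
```

As the discussion stresses, the ranking only nominates candidates for experimentation; it says nothing about the why.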
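Russell's weighted child-events picture can also be stated as a simple accumulating score; a minimal sketch, where the event names, weights, and threshold are all hypothetical illustrations of his point that ten low-weight events may not trigger the parent event while two high-weight events almost certainly will.

```python
# Minimal sketch of the weighted child-events idea: many small events
# and a few heavy ones accumulate toward a churn "parent event".
# Event names, weights, and the threshold are hypothetical.
EVENT_WEIGHTS = {
    "missed_daily_report": 0.1,   # low contributor
    "unanswered_message": 0.3,
    "billing_dispute": 1.0,       # high contributor
    "injury_incident": 1.5,       # high contributor
}
TRIGGER_THRESHOLD = 2.0  # hypothetical level that triggers the parent event

def churn_risk(events: list[str]) -> float:
    """Sum the weights of observed child events for one family."""
    return sum(EVENT_WEIGHTS.get(e, 0.0) for e in events)

# Ten low-weight events stay under the threshold...
print(churn_risk(["missed_daily_report"] * 10))            # 1.0
# ...while two high-weight events almost certainly cross it.
print(churn_risk(["billing_dispute", "injury_incident"]))  # 2.5
```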
Vin: [01:20:12] Right, Aaron, did that get you the answers? Absolutely, appreciate it. I'll listen to the recording again just to make sure I got everything. But yeah, it's good. Yeah, what Mikiko and Russell said, those two were kind of dead on, exactly. All right, you've got a question; it's going to be our last one, because I'm killing Harpreet's bandwidth going overtime. All right, this is going to be a reader's question again. Yep, yes. All right: can you hire someone based on what they can do, looking at the projects they have done, rather than passing tech screenings and tech questions? Yeah, but I'm going to let somebody else answer a whole lot more comprehensively. Who wants to take the slam dunk around the corner? Speaker2: [01:21:00] Yeah, I'll jump in. Uh, yes, you can. Essentially, any tech screen is going to be hyper-specific to whatever you ask them. You're not asking them, 'What do you know?', you're asking them, 'Do you know this?' Those are two very different questions. Like, coming from a robotics background, moving into the machine learning space, my advantage is that I understand the physics and the electronics behind how sensors work on a platform, right? That's a completely different mindset and background technical knowledge to someone who's come from a pure data science, statistical background. You give us both the same test, it's asking a fish to climb a tree, right? And both of us could bring a lot of benefit to the same team, to the same project, potentially in different ways, potentially in ways that you don't know. Right? Like, everyone doesn't know what they don't know, including recruiters and hiring managers. You might know 75 percent of what that role requires to win, based on what you've seen in your vast experience, but I'm willing to bet that there's at least 25 percent of leeway in most roles where someone with a completely different perspective could solve the problem differently. You ask an electronics engineer, a software engineer, and a mechanical engineer to solve the same problem, you'll come away with about six different solutions from three people. Vin: [01:22:25] Yeah, I couldn't agree more on that one. All right, who else? I mean, you said you had three offers and no tech screening. Speaker5: [01:22:35] Yep. Yeah, this was 10 months ago. Basically, when I joined MailChimp, I had competing offers from Levi's and Cisco Meraki, both for, like, ML engineering and generalist roles. So I kind of started off as an ML engineer in my current role, but we're more into ops now. And then there were a couple of other companies I was interviewing for where they were sort of flexible because of how badly they wanted someone in the role. So outside of the Big Tech companies, it really comes down to the culture of the company. For example, for some of the companies that I interviewed with, culture was honestly a lot more important than proving capability. I shouldn't say culture was more important, but what they found was that having a very smart jerk was a lot more detrimental to the productivity of the team than having really nice people who are capable of doing the work, even if they have not proven that they have directly worked on that exact thing, i.e.
if you had adjacent skills and experiences, and you're a nice person, and you could think through problems at a higher [01:24:00] level in the domain, they thought that was more important. And for some roles, it's just really hard. They do those really stupid LeetCode things, which I guess I get are useful in some cases, but for a lot of the roles I have interviewed for, the skill set and the type of role is so complex that there is legitimately no way you could test candidates for it unless it was, like, a take-home. And if you do that, you risk chasing away candidates that are actually really good, and they're like, nah, bro, I'm not doing this. I personally really love take-home stuff and system design interviews; like, that's my jam, not that I'm actually any good at it, but just because I think it's intellectually interesting and I get to learn a lot. But yeah, so during that time, Facebook, for example, I was interviewing with them and some other companies, and they were like, yeah, we're going to put you through the whole gamut. And at that point I was like, yeah, it's not worth it for me. I'm OK, I don't need a FAANG job. What I really want is a good team, I want interesting work, I want a great manager, you know, I want, like, market-value pay, but I'm OK if it's not, like, monopoly money, you know? So for me, I kind of made that choice to basically go, OK, I'm only going to interview for companies that don't do a tech screen, and I honestly didn't even realize that was a choice until I was TAing the Full Stack Deep Learning workshop. Speaker5: [01:25:35] To one of my fellow TAs, I was kind of grumbling, like, oh man, you know, I'm not getting any bites on this, and, oh, maybe I should just take a job as a data scientist even though I really hate the work, maybe I should just do it. And he was like, yeah, you could just not do any tech screens. And I'm like, what, that's an option? He's like, instead of spending, like, two or three months doing LeetCode, what you could do is basically just be [01:26:00] willing to apply to a lot more places, open up the portfolio of companies. As long as I'm OK not being at a big tech company, if I'm OK with an S&P 500 company or something that's not Big Tech or the traditional tech industry, lots of companies are willing to be really flexible if I can show that I have a nice little niche. So it's like, I worked as a data scientist, I did some stuff in Data engineering, I did some stuff in that more MLOps world. And, you know, for all the MLOps-type roles, for example, it's so niche: do you really want to say, oh, we only want to hire someone that does Terraform, Python, whatever? No, because you need operations, you need apps at, like, every level of the stack, and every company has a different stack. So for a lot of those types of roles, people are willing to say, OK, you know, we're not going to be those people when it comes to interviewing. Let's just try and see if we can get candidates; sure, make sure they're nice, make sure they're capable of figuring out the solution even if they don't know it off the top of their brain, and let's invest in those candidates. So those companies and jobs do exist.
It does require giving something up, though, I will say that, because in a lot of cases those companies have such structured processes that if they do deviate from them, it looks really bad to bring in a candidate that way. And it's also true as a candidate: if everyone knows that you didn't go through the same hazing ritual that everyone else did, that also kind of feels bad. It is a hazing ritual, right? So yeah, as long as someone is OK not getting a job at a FAANG company, it's one hundred percent possible to have a well-paying, interesting, creative job with cool people without a tech screen. Totally possible. I'm an example. [01:28:00] Vin: [01:28:01] And then I want to hear Russell's comment. I want to hear that out loud, because that's another epic comment. But go ahead, you go first, and then Russell can take us home with the no-asshole rule. Speaker2: [01:28:14] Yep, yep. So yeah, like Mikiko said, yes, that's an option. I didn't realize that either for a while. Often it comes down to how you negotiate with your interviewers, right? It's a conversation. At the end of the day, every interview process is a conversation. There are things you're going to want to do and things you're not going to want to do. But I feel like in this room we maybe have a little bit of a bias against the tech screen, right? So let's play devil's advocate here for a second: in what situations do I reckon a tech screen is actually useful? Put yourself in the shoes of a recruiter, right? You've got ten thousand candidates applying every single year, you've got to sift through them, and you want a process that's slightly better than randomly selecting from a list of, you know, PDF resumes. You want only slightly better than that. And I've literally heard shocking stories, and this maybe goes back to the 90s and the early 2000s, where you get HR with a stack of resumes, they chuck them on the floor, anything that lands face up gets through to the next round, anything that lands face down doesn't, right? So it's one of those things where they need a better system, and I can understand why, for a FAANG company or something at that scale, the most efficient way for them to find a good-enough candidate is a tech screen; that's why it's useful. Now, the other two situations where I've seen it work: one is where a company wants a very specific role with a very specific stack, you know, tech skills that they're testing for. And this might be because they need to maintain a legacy system, and they need to make sure that, hey, this person's actually got the experience to [01:30:00] keep this thing going, because it's system-critical or whatever, for X amount of months until they replace it. That's the kind of situation where you actually care about exactly what someone knows, and it's not so much about coachability, it's more about existing knowledge. The other situation where I've seen it work is where companies do a technical interview, but it's communicated really well that it's not a 'you've got to top this and get 100 percent to even have a shot at the next interview.'
Right? That brings a toxic, competitive mentality to any kind of test or screener, and I think we do see that a lot, right? They're really communicating that, hey, this is just to establish a baseline, that you're comfortable reading Python code, you're comfortable reading a SQL query, right? At a very basic level, it's just understanding that, OK, we're not teaching someone to code from scratch. And communicating that is probably the piece that's missing in a lot of the tech screens I see, where they just chuck it at you through an automated email. So yeah, making sure that if you are looking for something very specific, you're communicating that and you're able to give feedback, and if you're not, making clear that it's just a very basic check and we can coach you on the rest. Those are the three situations where I actually see it as reasonable to use one. Vin: [01:31:25] Yeah, that's great. I think I've just got to say it this way, and I think you've both kind of touched on this: at some point, like, why bother? You know, I get it, you're an awesome company, but I've got 15 other ones, you know? And that's really where a lot of people in tech are after about 10, 15 years. I mean, it's a long line. So I'm going to pick the ones that are the easiest, that actually want me. That's what I'm hearing more and more, people saying, look, if you don't think that me doing the job for 10 years means I'm qualified to do the job, is your LeetCode test really going to pick up something different? You know, that's going to be the thing that screens me out? So it's an interesting dynamic, where we don't bring realism into the hiring process nearly enough, and I think that's problematic. So we'll let Russell take us home. I saw Gina making the no-asshole remark as well, so I'm going to give Gina a chance to also take us home with a further expansion on Russell's no-asshole rule. Speaker2: [01:32:38] Okay, shall I go first? OK, so I was building upon Mikiko's great summary, actually, just about toxicity as a general issue in the workplace. I view toxicity as a stealth poison in any organization, and it's stealth not at the coalface, i.e. not with the people that are doing the work day to day, but more so at the executive levels, you know, the higher management, the directorship, or the C-suite, et cetera. And the reason for that, it seems to me, and I'm talking generically here, I'm not targeting this at any one organization or company at all, is that in most organizations there's a latent respect for the assholes: they get results, regardless of the long-term consequences. And it's a short-term view, you know. You've secured a two-million pipeline of work for the next two years, and, you know, that's really good; actually, let's talk smaller numbers, say you secure, like, a ten-thousand sale or something here. And that's great. But what about if that one person that's done that is part of a team of 10 that are securing the [01:34:00] long-term revenue, and that person is really disrupting that, so the overall balance is falling down?
Then the blame is thrown at those other ten that do the long-term stuff, even though it's perhaps not directly their fault, or at least the fault that lies with them is being caused by the toxic individual. But if that individual is removed, there's going to be an average lift to the overall performance. And I think that's something that's not understood very well, and it's generally due to this, as I say, latent respect for the assholes that get results, unfortunately. Vin: [01:34:39] You want to add anything? Speaker4: [01:34:42] No, I mean, I think Russell summed it up really well. I just put it in the comments: Professor Bob Sutton at Stanford wrote the book The No Asshole Rule, and, you know, that sums it up. I mean, he's worked extensively on this subject. A lot of times, people seem to have gotten the wrong idea. They look at people like Steve Jobs, and I might put Elon Musk in this category; Jobs especially was legendary. I know someone who worked for him back in the day. Dude was not a nice guy, frankly. He may have been a genius, but even there, right, I mean, it's a huge company; the ideas and the way things were executed, it's not like he did everything and then the minions just went out and executed it. There was a give-and-take process. And I think it might have been Jobs's biographer, or maybe Sutton, who said Jobs's success was despite his caustic personality, not because of it. I think that's the thing: sometimes I see people, maybe even really young people, who got this idea that they have to be like that [01:36:00] in order to get the best out of, quote unquote, people. And yeah, as Russell says, correlation does not equal causation. So true. And the worst thing is, you know, emulating Jobs or someone like him by wearing black turtlenecks and acting the same way, as though that somehow makes you a leader and people are going to respect you. Come on, folks. But I mean, I just want to emphasize what Mikiko said in the chat: it's not dumping on companies per se for doing technical screenings, but simply saying, and I think this is what I'm hearing Mikiko say, you have some leverage in the market and you don't necessarily need to jump through all those hoops; or simply, you can decide, OK, is this the kind of company I want to work for? And yeah, Mikiko, I think you mentioned early on in this call the boot camp thing, you know, you saw someone making a comment that they all suck, and, come on, that's crazy, you have to look at the individuals. But, you know, boot camp grads, or career changers such as myself, I think, feel like, oh my God, we have to do all this prep for these technical interviews, and maybe there's a bit of doubt around our ability simply because we come out of a boot camp. I mean, that is a one-dimensional type of review. If somebody thinks that despite all of the other person's accomplishments, that would be pretty sad, but I'm sure it happens. So, yeah, I just want to give the last word to Mikiko, if she has anything to add. Speaker5: [01:37:51] Yeah, I mean, if I ever decide to go FAANG, or MANGA, as some people are calling it now; well, they're not,
they're calling it MAMAA, but they really should call it MANGA. I think that's the more brilliant play. I am 100 percent prepared to pay the LeetCode tax; I'm ready to do it [01:38:10] at that point. But I think, Speaker5: [01:38:14] at every point in people's careers, at the end of the day, you know, you can kind of rage against something, but you have to vote with your feet in a way. So if you want to see more companies that have better interviewing processes, or processes that are a little bit kinder to your skill set, or to a person in my shoes, at that point it's good to make that decision: first off, to know that you have the option, that you don't have to actually take a tech screen, there are companies out there. But secondly, to also make the informed choice to say that, you know, a candidate is willing to give up on XYZ opportunities at MANGA, right? Knowing that they would be rewarded for it there, but they would need to pay the LeetCode tax. And if you're OK with that, you can go for companies that have a hiring process for where you are right now. You can vote with your feet. I think, you know, people like Vin are carrying on the good fight to let companies know that there are better ways to hire, but we can also help people like Vin by voting with our feet. So if we want to see more companies without tech screens, let's interview for the companies without tech screens, the ones who are willing to do a more holistic process. Let's support them, let's rep them on LinkedIn. And also, let's applaud the people who are able to get through the MANGA LeetCode process, because it is grueling, it's excruciating. So I'm very happy for my friends who are able to get through that process, even knowing that right now, that process is not for me. Speaker2: [01:39:50] So, yeah, I think that's everything in the house for this topic. Vin: [01:39:55] I think this is one of those things we could probably talk about for another two hours, just [01:40:00] about broken hiring processes. But circling back to the question: I think we can definitely hire based on more sensible practices and more sensible measures of candidate value. Experience, at some point, is enough. A portfolio, at some point, is enough. Having a Ph.D., I mean, come on, what else do you want? When you talk about somebody who has a PhD in physics and you say, 'I want to see Data science experience,' you know what you do to get a PhD, right? Do you not know what a PhD is? Did you fall asleep when they were describing it to you? So I think there are always more sensible ways, and if we pursue the Data, maybe we'll find them. But yes, please reward companies that are good to their people, the ones that do not have workplace psychopaths roaming the hallways, those people that truly make your life terrible, because there's a lot of companies that do. We have to call out the ones that are toxic, and you have to really reward the ones that aren't, because if you start doing that, toxic companies go out of business. I feel bad because I've got nothing to plug. Harpreet always does the, you know, 'I'll be at the Riviera this Friday and Saturday night, Sunday at the Palms, Monday the Sands.' I don't have anything to plug.
Vin: [01:41:15] So, like Harpreet says, you've got one life, but I highly encourage you to enjoy it first and then do something amazing, because if you just do the amazing thing, you'll regret not enjoying it. So have a great weekend. Thank you, everybody, for coming and hanging out through two hours. I really appreciate it. We miss you. Speaker5: [01:41:37] And don't forget to check out Vin's newsletter. It's on Substack. Vin: [01:41:42] On Substack. Google Vin Vashishta, you'll find all kinds of crazy things where you can follow me. Have a great weekend, everybody. Thank you.