MIX-6

[00:00:00] James: Frank, have you heard of artificial intelligence? AI?

[00:00:07] Frank: Are you, are you talking about AI?

[00:00:11] James: Yes. Now, Frank, have you heard of machine learning?

[00:00:15] Frank: I hear that's what the nerds call AI. I actually went through one of my own libraries and renamed all the ML things to AI, just because I feel like it's winning in the, uh, culture war that is naming things.

[00:00:31] James: Well, that kind of brings me to today's topic, and I'm so glad you brought that up, because in my mind I've been trying to figure out, as a developer, a client developer, what I should be learning and what I should be doing when it comes to both machine learning and AI. Are those the same thing, or are they different things? Is it similar to, for example, backend developer and frontend developer? Mm-hmm. There's the full stack developer, but is there a full stack AI/ML everything person? They often get lumped in together. But in this new world, this new ChatGPT world we have going on, with OpenAI models and LLMs and all this other stuff, what should I actually be doing and learning? There's a whole lot of new things coming at us, right? We just came out of Google I/O, we just came out of dub dub, we just came out of Build, and AI was just everywhere. And I get excited, because one of the main things my team and I do is go talk to developers around the globe, usually about specific frameworks: web developers, client developers, backend developers, cloud native developers. And I think everyone's just going to want to know, how do I do AI stuff? And I don't necessarily know where to start, and I should know where to start. But I'm also thinking about my viewers. We have many, many listeners, billions of listeners to this podcast, and they might be wondering the same thing. Now, the cool part here is that Frank Krueger, who's on this podcast, has been a machine learning data scientist guru for like the last hundred billion years, right? And he also knows everything about AI, so you are like the full stack person. So I wanted to ask you: how do we decouple that and get to the root of what it actually means to be an AI developer, or a machine learning developer, or a data science person? Are they the same thing? Are they different things? We've got kernels and LangChains and all these things. I don't know what to do, Frank. What do I do?

[00:02:31] Frank: Well, it's a nice narrow topic then. This should be easy. I'll state a few facts and we'll be out of here in five minutes. No problem. This isn't going to kick up anything. This is hard, though. Thank you for all the compliments, I appreciate it. But, uh, we are so headed for an AI winter, man. There have been two AI winters already. The moment you're wondering, "How do I use AI to improve my product?" maybe it's the wrong question. Does my product need improving? Have I hit a problem that I don't know how to solve? Then maybe AI can be there. But do just try to solve the problem yourself, really. You don't want to throw AI at everything. Can I give the worst definitions ever? Sure. AI is just such an overloaded term. That's why the scientists don't like to use it. It means too many things.
It's in too many sci-fi books and all that kind of stuff. But artificial intelligence is exactly what it says on the tin: it's some intelligent thing that is artificial, that some other intelligent thing made. However we get there, it doesn't matter. Machine learning is the principle that we can take a computer and, by giving it enough data, teach it something; it can learn. Maybe if it learns enough, it will become intelligent. Maybe that's where this machine learning stuff and AI meet, when the crazy GPT-10 with its eight bazillion models and, whatever, not models, parameters and all that kind of stuff, maybe then it'll be intelligent. Maybe then it'll be artificially intelligent. But in the end we all just call it AI because it's fun, and we're totally going to hit another AI winter, so it's important to go there. But I totally get your bigger, broader question. It's tricky. Where should we start? Where should we start? Um, you tell me: what do you think a data scientist does?

[00:04:42] James: Okay, so we could start from the most complex, which I think is the data scientist stuff, all the way up to the high-level client developer, a.k.a. James. So do we start with Frank, or do we start with James? Right. If we start with Frank, a data scientist, I think their goal is to create the model. Their goal is to take a bunch of data, and they are the ones training it in crazy Python and C languages, shoving it in, and these are things that take a long time, right? They're pulling data from a bunch of sources, and they're creating huge, crazy, big models that other folks can then use. Now, let's say you're at a company, let's use Contoso. You're at Contoso Airlines. Contoso Airlines may have a bunch of data, all of their airline data, and they may have a data scientist at that company. Contoso Airlines might make a machine learning model by taking in all of that data. The data scientist would take all the data, do all this stuff, and they would craft Python code, and then a big blob of a model would pop out. I think that's what a data scientist does. And then they probably check it, right? They make sure there are inputs and outputs and do stuff. Is that what a data scientist does? Or do they clean the data? Maybe they clean the data. You've got to, you know, scrub it down.
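(As a concrete, toy picture of the "shove data in and a model pops out" idea James is describing, here's a minimal sketch, not from the episode, of a machine learning loop: fit a line to a handful of data points with plain gradient descent. Everything in it is invented for illustration.)

```python
# A minimal "machine learning" sketch: learn y ≈ 2x + 1 from example data.
# Toy example for illustration only; assumes numpy is installed.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 2.0 * xs + 1.0 + np.random.normal(0, 0.1, size=xs.shape)  # noisy observations

w, b = 0.0, 0.0   # parameters the "machine" will learn
lr = 0.01         # learning rate

for step in range(5000):
    pred = w * xs + b              # the model's current guess
    err = pred - ys                # how wrong it is
    w -= lr * np.mean(err * xs)    # nudge parameters to shrink the error
    b -= lr * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")  # should land close to 2 and 1
```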
[00:06:22] Frank: Always clean your data. There's no other way to handle your data. Of course you have to clean the data. My God, man. Of course. I like your definition just fine. Um, the only place where I'm just being a pedant: I don't really know what the good terminology is in the industry these days, but I would separate out the data scientist as a little bit more scientist, where they're actually exploring different, new ways of architecting the models, how you shape them, what their inputs and outputs should be, uh, the best way to train them, things like that. They're actually doing the research kind of stuff. What you described is the practitioner, the actual engineer, the data person. Because usually you have a rough idea of what kind of, uh, model architecture you want to use, and then you spend 90% of your time doing data: cleaning that data, scrubbing it, getting it nice and shiny. Then you run the network and you're like, ooh, I think I can polish up the data a little bit more. And you do a bit of that. So I think you absolutely nailed it there. Um, what does a prompt engineer do, then?

[00:07:39] James: Well, that's a good question, right? That's where I'm kind of confused a little bit too. Is that me? Am I a prompt engineer? Because for a prompt engineer, there's already a model. I think this is what I want to distinguish: there's already a model that's been created, and I think a prompt engineer figures out how to use that model with specific data sets, so that when you put in a prompt, the model knows how to respond, and it's been trained on that data. So for example, let's say you're at Contoso Airways and there's a model that the data scientist created. So I guess I was wrong about the data scientists; they may create the LLMs, the big models. But yeah, I think they create the big ones, your Stable Diffusions, the image models. And then a prompt engineer uses one of those models, but there's some way to feed it data and, like, retrain the model on that data. So for example, there's this big model that the Contoso Airways people created to do something, right? They have this big LLM, and I'm a prompt engineer, and I'm like, okay, well, I'm really interested in, you know, popcorn data or something like that. Mm-hmm. I'm going to put all the popcorn data in there and I'm going to ask it all about popcorn, because now we're at the Contoso Theater, and I can ask it about popcorn, and now it knows about popcorn and the data on popcorn that I fed the model, and it gives me, I don't know. Is that what I'm doing? I don't really know, Frank. What's a prompt engineer?

[00:09:30] Frank: You were great in the first half. You nailed that first half. I like the definition of a prompt engineer as someone who gets to know the model after it's been trained. You get to know its ins and outs: with these kinds of inputs, it produces those kinds of outputs, so you can start taking advantage of it, if it's a more general purpose model like all your GPTs and things like that. Um, you went off the rails a little bit toward the end there, because what you were describing is something called fine-tuning the model. Often you don't train the model from beginning to end, here's my input problem, here's my output problem. You actually train it in multiple stages. The early stages are often called pre-training, and the end stages can be called fine-tuning, but really, just think about it as being done in stages. So maybe in the beginning it just learns English, then it learns how to do a little bit of question-and-answer stuff, and then it gets judged a little bit. These can all be separate fine-tunings. So those three things happened, and then you, uh, fine-tune it on airplanes, or, I don't know, your shopping cart, your inventory for your company. You can fine-tune it on that. And the moment you're bringing data back into things, I think you're back into being a data scientist.
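(As a rough illustration of what "fine-tuning on your own data" can look like in practice, here's a minimal sketch assuming OpenAI's legacy fine-tuning workflow, where training data is prompt/completion pairs in a JSONL file. The file name, example rows, and the CLI command in the comment are assumptions for illustration, not anything discussed on the show.)

```python
# Hypothetical fine-tuning data for "Contoso Airways": prompt/completion pairs,
# one JSON object per line (JSONL), the shape OpenAI's legacy fine-tuning
# endpoint expected. Every row here is invented.
import json

examples = [
    {"prompt": "Which route had the most delays last month?\n\n###\n\n",
     "completion": " SEA to DEN, with 14 delayed departures.\n"},
    {"prompt": "What is the fee for a second checked bag?\n\n###\n\n",
     "completion": " $45 on domestic flights.\n"},
]

with open("contoso_airways.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# From here you would upload the file and start a fine-tune job with your
# provider's tooling, e.g. (assumption) the OpenAI CLI:
#   openai api fine_tunes.create -t contoso_airways.jsonl -m davinci
```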
Quite often you start with an already partially trained model. When I was doing my Q&A work, I started with a model that already had some understanding of languages. It had already been trained, and even then its training was three stages of training all by itself. And then I added a fourth stage, and I'm adding a fifth stage on top of that, even. So you can really train these things. And I think the moment you're talking about training, if it's anything more than a couple of clicks in a UI, you're a data scientist. If there's a nice UI for it, okay, you're using an app, but up till then, you're a data scientist. Can I use a fun analogy? I don't know, maybe it's a fun analogy. Sure. We're being reductive. We're trying to define terms and simplify things a little bit. Um, I like to think this AI world has a nice parallel with what programmers do. So, I asked you what a data scientist does; in short terms, what does a programmer do?

[00:11:56] James: Uh, a programmer? There's a problem, and they figure out a way to solve said problem by creating code that does that. Now, that might be user interface, that might be accessing data in a database, that might be visualizing something on a mobile phone. There are many levels, so there are different types of programmers, right, that do different types of programming. But yeah, a programmer solves problems in a way that could be visual or not visual, and there are inputs and outputs, I guess, to what they're creating. So maybe when you tap a button, that's the input, and the output is something happens on the screen.

[00:12:37] Frank: I just think it's so similar to what the data scientists are doing. There is a problem, and we know there's a solution out there. And programmers, we learn about the problem by imagining ourselves as the end user, or actually talking to end users, or customers come to us with requirements, and then we sit down and think really hard, and we write code, and then we make a thing to solve that problem. Whereas the AI way is very different: it's collect a bunch of data. But it's the same structure. I have a problem and I want to get to a solution; it's just the means you go about it that's different. You still have inputs to the problem, things that change and don't change, um, constants of the universe, laws of the universe the program should abide by, constants it should keep in check. But otherwise the two are very similar. And then you get into prompt engineering. Well, isn't that all just the same thing again? They have a problem they're trying to solve, except they're going to do it at a different abstraction level. So we're all just problem solvers, and we're just using technology at different abstraction levels. But prompt engineer, I would say, separating them out, it still sticks: they're the ones who actually get to know a model and get to know how to use it well. Stable Diffusion is my favorite example, because the prompt engineers have learned all the cute little phrases to add to your prompts to get it to do exactly what you want. And if you've used ChatGPT for any amount of time, you know your own cute little phrases to get it to do what you want it to do.

[00:14:18] James: But the prompt engineer is not the end user, right? The prompt engineer is going to be doing something to that model. Are they tweaking the model for their use case, then?

[00:14:29] Frank: Well, okay. I don't know. I apologize, because I don't know the exact industry terms. Maybe there are some, but I would come up with a different term for the person who's actually turning these things into a product.
I call it putting a user interface on top of these models. You're trying to streamline it into one set of problems and have it solve those kinds of problems, and maybe you add some checkboxes and dialog boxes in order to input that problem into it. I don't think of that as prompt engineering. Uh, maybe it is; it does become a bit of a blurry line for me. The prompt engineers are the ones who learn the ins and outs of what the model's capable of, and then I don't know who the next people down the line are. Those are the programmers, like you and I, the devs who want to put a user interface on top of these, uh, monsters.

[00:15:23] James: Yeah. So, for example, a prompt engineer might be working with the devs, you and me, that are building the UI, building the backend code that's going to expose those tweaks and tunes, those fine-level things, to interact with the model better.

[00:15:45] Frank: Yeah. Like, if I want to write an app that's a general purpose calculator, who knows, and you just want to type in some math. By the way, don't do this with an AI; they're good at math, but they're not great at math. Um, I'm just going to have to decide on a few, um, conditioning prompts to give a language model. Because if I say "two plus two," it might respond, "Well, the answer to two plus two is four, and we can solve this by first checking what two plus one is and then adding another one." You know, it might give a very long, verbose explanation. So what you need to do yourself is experiment around and figure out what kind of little pre-prompts you can give it. Sometimes that's in the form of an example: here's an example input and output that I want from our session. Other times it's a long texty thing where you're like, "You are a stupid calculator from the 1980s. I give you math problems, you give me answers. No embellishment, no explanation, none of that." And coming up with those kinds of conditioning prompts I also think of as prompt engineering. But if you're an app developer taking advantage of AIs, you're going to start learning those tricks too, for sure.
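(A minimal sketch of the "calculator conditioning prompt" Frank describes, assuming the pre-1.0 style of the openai Python package and an API key in the environment; the exact model name and prompt wording are just illustrations.)

```python
# Sketch: constrain a chat model with a conditioning ("system") prompt so it
# answers like a terse calculator instead of writing an essay.
# Assumes `pip install openai` (pre-1.0 API style) and OPENAI_API_KEY set.
import openai

SYSTEM_PROMPT = (
    "You are a stupid calculator from the 1980s. "
    "I give you math problems, you give me answers. "
    "No embellishment, no explanation."
)

def calculate(expression: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": expression},
        ],
        temperature=0,  # keep answers as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(calculate("2 + 2"))  # ideally just "4"
```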
[00:17:01] James: Okay, so let's take a use case here. One use case I can think of that maybe would involve all three of these levels: I'm at Contoso Airways, right, and I don't have anything yet. Okay. So let's just take this example. I'm at Contoso Airways and I want to use ChatGPT, but I want to give ChatGPT my data to use. Mm-hmm. I want ChatGPT to be aware of my data. Right. I think we saw this at Google I/O, which was like, hey, if you're in Google Drive, Bard will be aware of your stuff inside of Google Drive, right? Creepy. Yeah, a little creepy. Yeah. But that's a good example. Like, hey, how do I make a chat LLM, like ChatGPT, aware of my data? Who's involved in doing that? What needs to be learned here to do that?

[00:17:59] Frank: Well, the good news is that with these large language models, the answer is actually very simple. The end result is very simple. The end result is that these things, okay, GPT-4 aside, work with text. So your inputs to this thing need to be text in some form, and it can be a long amount of text. I think GPT-4 can handle, uh, 8K tokens, which is probably more like 16K or 24K words. So we're like, what's a novel, like 50,000 words? Yeah. So you can almost fit a novel into these things. So if you can fit your entire company's data into those 8K tokens, I'd start there. I would just dump it all in and be like, "Hi, hi computer. You are an amazing database system, and here is all my company data. Now I'm going to ask you questions." And that would be the pre-prompt that you give it. And then the UI for the user can still just be the simple chat interface, which is overdone, but obviously good enough, and all that data can be used. That said, uh, we are working on multimodal AIs now, which can take data from different modalities, so images and audio in addition to the text. So we'll have more options in the future, but for now all the hubbub is centered around text. So you want to get your data into a text form. Now, how do you generate that text? Do you type it in by hand? Do you ask, um, I don't know, a kid down the street, "What text should I be prompting this thing with?" Or do you come up with, like, I don't know, good JSON formats? Maybe you come up with templates, maybe you come up with a choose-your-own-adventure story-time format. Maybe you, I don't know. There are going to be different ways you can create this text to be used in an app.

[00:20:10] James: Okay. So I guess at that level, we have these big models. So you're saying that ChatGPT can take my data in, and then it can use my data with its LLM to basically take in questions and spit out stuff from my data set. Like, that exists today. That's something I could do.

[00:20:36] Frank: Yeah. Uh, yeah. Like I said, if you can fit your data into its buffer, you can ask it any question you want. Again, don't ask it to do too much math; use Excel for that. Ask it bigger questions. Ask it for trends. Ask it, you know, which product should I invest more into? Things like that. It can definitely do that kind of stuff. And sorry, I just want to say, I keep saying 8K as an example. Uh, there are bigger versions of GPT that handle 32K tokens, and again, that's like, uh, almost a hundred thousand words. Can you fit your whole data set into that? Yeah.

[00:21:17] James: Perhaps. And I'm sorry, I keep saying ChatGPT, but GPT is the actual model.

[00:21:22] Frank: Well, it's so confusing, because, um, look, let's just add more definitions to make these things super confusing. "Large language model" doesn't say what it's been fine-tuned for. And, um, some of them have been fine-tuned for chatty kinds of interfaces; some of them haven't. If you say LLM these days, you mean something like ChatGPT, but to a data scientist that might be something more like GPT-2 or GPT-3, where it's just kind of a wild, wild west text generator. All it does is want to spew out text. It really doesn't have any guidance in the world; it's not trying to satisfy any problem or anything. It just wants to generate text. So in conversation I like to say ChatGPT for the ones that are meant to have a dialogue and a back-and-forth, the ones that have these very large buffers, these 8K, 32K buffers. So yeah, it's GPT. It's funny how the training is getting mixed up with how ChatGPT was trained, but yeah, part of it is just the UI. Who knows, the terminology has yet to settle. I'm going to go with the terminology that my mom and dad use, and that's what I'll go with.
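(Here's a minimal sketch of the "dump your data into the buffer as a pre-prompt" approach Frank describes, under the same assumptions as the calculator example; the file name, model name, and question are invented.)

```python
# Sketch: make a chat model "aware of" a small company data set by pasting the
# whole file into the prompt. Only works while the data fits in the model's
# context window (8K or 32K tokens, depending on which model you have access to).
# Assumes `pip install openai` (pre-1.0 API style) and OPENAI_API_KEY set.
import openai

with open("contoso_airways_sales.csv") as f:   # hypothetical data file
    company_data = f.read()

PRE_PROMPT = (
    "You are an amazing database system. Here is all my company data:\n\n"
    f"{company_data}\n\n"
    "Answer questions using only this data."
)

def ask(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-4",   # or a 32K-context variant if the data is bigger
        messages=[
            {"role": "system", "content": PRE_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Which route is trending up this quarter?"))
```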
[00:22:42] James: Gotcha. Okay. So then where do things like LangChain or Semantic Kernel come in? Who is using those? Is it me, the end developer that's building the UI? Is that the prompt engineer? Am I building things for my prompt engineers? Like, how does that sort of work?

[00:22:59] Frank: Right. So these LangChains and, um, Semantic Kernels, I'm being reductive today, I apologize: in the end, they're text template engines. They're different ways to specify text templates to make using these large language models feel more programmatic, feel like using a programming language. You're creating functions out of these models. But in the end, these functions are just generating text; that text is being shot at the model, and then something has to parse it back. So it's just helping you close that weird text loop that we're in right now. Someday we'll be out of it, I swear. Yeah. Um, so where do these fit in? Would a prompt engineer use them? In my terrible definition of a prompt engineer, probably not, because I see prompt engineers more as scientists, where they're doing research, they're experimenting. These are, I would say, definitely more for app developers and people who are trying to put user interfaces on top of these. Because in the end, you're going to need a template engine, because you don't want to expose the general purpose chat UI; there are already apps that do that. Microsoft does it, Google does it. There's nothing advantageous to having an open-ended chat in your app. What you want is a refined UI that uses very specific smarty bits of this AI and takes that data back. And to do that, you're going to need to turn your UI elements and your buttons and things like that into text prompts, and these things will help you do that.

[00:24:48] James: Got it. That makes sense. And then additionally, you might be building something like Copilot, or you're doing stuff where people are already typing things and you have context; you might be feeding it additional data through these, you know, different frameworks on top of the LLM.

[00:25:07] Frank: Right. Yeah. How you feed data to these things is the trickiest part. The part that you're going to spend the most time on is deciding what data to feed them, because you do have that limited buffer size, so oftentimes you can't feed it all the data. So you do need some kind of framework or something to help you figure out exactly what that is. In the example of a code editor copilot: do you show it the most recent text file the person's edited? Do you show it the project structure? Do you show it the actual project file? Do you show it the last four files that the user has viewed? Uh, do you build up a context and save that away in a condensed form and feed that in as a conditioning embedding? Because a lot of these models, I'm talking about them from the simplest perspective of feeding text to them and getting text back, but if you're a data scientist, you can feed data to these models in many other ways, and you can fine-tune them to accept data in other ways. But for people like us, who are just going to be consuming these large language models, things that we can't train ourselves, we're going to need these super fancy text generators.

[00:26:28] James: Templates.
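(To make the "text template engine" idea concrete, here's a minimal hand-rolled sketch of turning structured UI inputs into a prompt and parsing the answer back out. LangChain and Semantic Kernel offer much richer versions of the same idea; the template, field names, and prompt below are invented for illustration.)

```python
# Sketch: a tiny "template engine" that turns UI inputs into a prompt string,
# calls the model, and parses a structured answer back for the UI to display.
# Same assumptions as above: openai package (pre-1.0 API style), OPENAI_API_KEY set.
import json
import openai

SUMMARY_TEMPLATE = (
    "You summarize flight feedback for {airline}.\n"
    "Feedback:\n{feedback}\n\n"
    "Respond with JSON only: {{\"sentiment\": \"positive|negative\", \"summary\": \"...\"}}"
)

def summarize_feedback(airline: str, feedback: str) -> dict:
    prompt = SUMMARY_TEMPLATE.format(airline=airline, feedback=feedback)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Close the "text loop": parse the model's text back into data the UI can use.
    # (A real app would handle the case where the reply isn't valid JSON.)
    return json.loads(response.choices[0].message.content)

result = summarize_feedback("Contoso Airways", "Flight was late but the crew was great.")
print(result["sentiment"], "-", result["summary"])
```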
Okay. Okay, so let's say, Frank, that I'm somebody who's already a developer, right? I know a programming language, and I want to dip my toes into AI, right, into the space. Let's start with: I have one day. I have one day. What should I do? What should I build? What should my goal for that one day be?

[00:27:00] Frank: Oh, see, that question has changed recently, because if you had asked me that two or three years ago, I would have started with a lecture of: go get one of the big popular Python-coded neural networks, get it to run on your machine, get it to do the inputs and outputs that you want, and then go look into the code and start breaking it, things like that. But that's not right anymore, is it? That mindset doesn't work, because back when neural networks were small, James, you could actually train them on your own computers. Now they're gigantic and take entire states' worth of electricity to train. Um, now it's, boy, it's tough too, because people are writing apps. People are trying to write, like, the neural network app that simplifies it so you can build up pieces of it too. People are trying to find, is there a metaphor we can use, and turn it into an app and simplify these things. What I'm going to say is: go get Runway ML, the app, and see what it can do, and play around with the neural networks. I would say use these things. We've all been using these large language models now because they've been popular for the last few months, but there are a lot of neural networks out there that do a lot of different things, and I would go play with the ones that do different stuff. And you're going to want an app to help you do that, because otherwise getting these things to run is a pain. So, uh, I recommend Runway ML. Go get that thing and see what it can do.

[00:28:41] James: Okay. Now let's say I have a week; I'm doing this stuff for a whole week. What should I be doing? So if I have one day, you're just like, go play with the stuff. One week, what should I be trying to do as an AI developer? I have a different answer, I think, by the way, but I want you to go first and then see how mine compares and contrasts.

[00:29:03] Frank: And me not knowing anything: is this an AI app developer or an AI data scientist? What are we doing here? App developer?

[00:29:10] James: As an app developer.

[00:29:12] Frank: Oh, see, I want to retract my previous answer then, because I was still heading down the data scientist route.

[00:29:18] James: Got it. But, um, okay. Okay. Hold on. So that was the one-day data science route. Okay. Let's do the one-day app developer route. AI app developer route, one day. What's that?

[00:29:28] Frank: Yeah, I would find your favorite text file that you have that's under, like, 5K-ish, uh, that has some interesting data in it. Maybe it's an XML file, maybe it's something else. And I would feed that in. I would write an app, and I would think of six different prompts of things I could ask it about this data, and I would write a UI with six buttons on it, so that when you press a button it, um, returns the response from the network: it puts the data in, puts that prompt in, gives the response back. And then I would think really hard about how I could parameterize one of those. How could I make it so the user can change the prompt without having to type something in? What is a variable I could add to that prompt? And then build your own little text templating engine. This, I think, you can still all do in one day, because you're just starting with six buttons.
Anyone can write six buttons. And then I just want you to take one of those buttons and parameterize it, and then you'll have a template, and now you're doing some proper AI, uh, programming.

[00:30:38] James: Okay, I like that. I think that is a good one. See, I was going to go with: go get yourself an OpenAI key and then call OpenAI. Call the API.

[00:30:50] Frank: Okay, fair enough. That probably is day one. Okay, let's recalibrate. So I did answer the one-week question. You're right, I'm putting too much into the day, because honestly, uh, it took me about a day to get that to work also. Now there are libraries out there; you mentioned a favorite one that's available on NuGet. But, uh, you have to decide if you're going to stream answers too, so that affects your UI and things like that. So yes, I agree with you. Day one: get a key to whichever neural network, it doesn't matter, and get a call working.

[00:31:25] James: Okay, cool, cool, cool. This makes sense. Yeah, I'm thinking about it because of how I want to go learn it. I like your one-week example, by the way, because I think it was the perfect example if we go back to the beginning of this episode, which was: hey, I have a piece of data and I want to ask something about that data. What do I need to do? Like, actually, yeah, we just Tarantino'd it in a way, because that's really what it should have been about. I should have prompted you with that, as the prompt engineer, because that's really it at the end of the day. Yeah. It's like, I have this cool text file here that's a bunch of data, right? Sales data. Mm-hmm. I need to ask it about that data. How do I do that? Right. And I think if you now play this podcast in reverse, you'll get to that.

[00:32:07] Frank: Yeah. And, uh, I'm trying to take my own advice, in fact, because there are tons of little 5K text files, and if you get access to that 32K network, and of course I'm switching Ks around here, right? Anyway, um, yeah. You want to give it a little pre-prompt that says something like, "Here is my data for my whatever-it-is," and maybe give it a few details. If the data is in a weird format, give it some hints and things like that. Paste in the data, and then, um, I like the idea: six questions, six buttons, and then parameterize them. I'm sticking with it. I like that as the one-week exercise.

[00:32:49] James: Nice. All right, I like it. Well, anything else you want to add to this conversation about what AI developers and AI data scientists are, Frank?

[00:32:58] Frank: Yeah, I just want to toss it out there, because I haven't done this myself, but these things can generate UIs also. So don't forget, you can also say: look at this data, here's a question I have, now generate an HTML-formatted page with a cyberpunk look that displays this data, and just have it spit that out and display that.

[00:33:21] James: Wow, wow. Look at that.

[00:33:23] Frank: Don't just have it answer the question; have it formatted nicely. That's what's important.
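(Frank's parting idea, having the model generate the UI as well as the answer, might look roughly like this; a sketch under the same assumptions as the earlier examples, with an invented data file and prompt.)

```python
# Sketch: ask the model to answer a question about the data AND render the
# answer as a styled HTML page, then save it so a browser or WebView can show it.
# Same assumptions as before: openai package (pre-1.0 API style), OPENAI_API_KEY set.
import openai

with open("sales_data.txt") as f:      # hypothetical 5K-ish data file
    data = f.read()

prompt = (
    f"Look at this data:\n{data}\n\n"
    "Question: which product is trending up?\n"
    "Now generate a complete HTML page, with a cyberpunk look, "
    "that displays the answer and the supporting numbers. "
    "Return only the HTML."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

with open("report.html", "w") as f:
    f.write(response.choices[0].message.content)
# Open report.html in a browser (or embed it in your app's WebView) to display it.
```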
[00:33:28] James: I like it. That's good. All right, Frank. All right, well, let us know what you think about this conversation. Were we right? Were we wrong? Are there other things that we missed? Uh, I'm really interested in it, because obviously everyone's getting on the AI, you know, train, and I want to not only, you know, do some stuff, but also be able to teach people stuff, and I want to know what most people are going to be doing. I don't believe that most of the people I'm going to speak to are going to be at the data scientist level. I think they're going to be up a few levels from that, at least because I'm not a data scientist. If I were a data scientist, I'd go talk to data scientists, but I'm not a data scientist, Frank, and I probably never will be one, maybe. But I think this conversation was about trying to figure out these bits and pieces. I think the whole world's trying to figure it out right now, so this was a great conversation, Frank, and let us know what you think. If you have any questions, go to mergeconflict.fm; there are contact buttons all over there. You can contact Frank at @praeclarum on Twitter, and @jamesmontemagno, that's me. We also have a Patreon as well. If you're listening or watching this, you can go to patreon.com/mergeconflictfm; there's a button there, and it's in the show notes. And then additionally, every week we do a bonus episode, and this week we did a bonus all about Frank's cat. So if you're really interested, you can check that out. It was great. It was great. Yeah, the bonus has like 20 questions about Frank's cat. Um, but Frank, I think that's going to do it for this week's Merge Conflict. So until next time, I'm James Montemagno.

[00:34:55] Frank: And I'm Frank Krueger. Thanks for watching and listening.

[00:34:58] James: Peace.