mergeconflict247

James: [00:00:00] Frank, Frank, Frank, Frank, Frank, Frank, Frank, Frank, Frank, Frank, Frank, Frank, how you doing, Frank?

Frank: [00:00:14] Oh, James, I am doing wonderful. It has been a very long day. I think we complained about the SD last time, so we're not allowed to, but I'm going to complain about DST. I looked at the clock today thinking it was going to be around noon, and it was 5:00 PM. The sun was still in the middle of the sky. I'm like, I am not ready for long days yet. Put me back into the darkness.

James: [00:00:35] If you're just tuning in, listeners, to this podcast, I'm James Montemagno, and this is Frank Krueger. We don't often differ in opinion. We do, but the one thing we actually differ in opinion on is that Frank loves the dark. He would move to Alaska just for, like, the winter, because he loves long, dark days. I don't know who this person is. It doesn't make any sense. He's like, I can't wait for winter to come, and I can't wait to turn my clocks back.

Frank: [00:01:06] That phrase, "he should move to Alaska for the winters," seems like some kind of 19th-century insult, but it sounds kind of hilarious and I love it. And it did sound kind of wonderful. Shout out to Clancy.

James: [00:01:22] I've been watching a show on Fox called The Great North. It's on Hulu and it's on Fox, obviously. I think it's from the creators of Bob's Burgers, which is one of my favorite shows, but it's a show where the characters live in Alaska. And it makes me think about Clancy all the time, like, I wonder how he's doing. Also, most of the people in Alaska were vaccinated, so that was pretty cool.

Frank: [00:01:44] Oh, yeah. Okay. Well, you just took a turn there and made me sad, because Washington's not doing quite as well. So now Alaska has two check marks in its column. Good for you, Alaska.

James: [00:01:57] But I guess it's cold all of the time.

Frank: [00:01:59] Wonderful. Sounds wonderful. As long as I'm not shoveling a driveway, as long as I don't have a car.

James: [00:02:06] I dunno. Okay, well, we need to find out if that's possible too. Because here's what I see, if anyone's watched The Great North: it's literally Frank. They have this huge cabin in the middle of the woods, away from everyone, in Alaska. And that's kind of you now. They also have a boat. I don't know if you're a boat person, but that one was a fishing boat. Are you a boat person?

Frank: [00:02:26] I'm a boat racer. I can't really relax on a boat, but I enjoy racing boats, so that's fun.

James: [00:02:32] Okay, it's not what they do in the show, but anyways, Frank is wrong, because sunlight is amazing and it nourishes the soul and the body, and being able to say, "Oh, it's noon, but I thought it was five o'clock"? That's great. That's a great day in life.

Frank: [00:02:46] No, it was the other way around. It was five o'clock when I thought it was noon.

James: [00:02:50] Oh, well, that's bad. You don't want that.

Frank: [00:02:53] It's really bad, because you're like, where did the day go?

James: [00:02:57] You need to go out and enjoy it. Here's what you need to do, Frank: open the door, walk outside, enjoy the day. The sun is there for you.

Frank: [00:03:07] Is that what people tune in for? I don't think that's what people tune in for. They tune in for me sitting in front of the computer.

James: [00:03:12] No, here's what happened though. They do tune in for that.
And I have been very fascinated by some of the new machine learning stuff that you've been up to. I was on a hike this weekend with my partner and you texted me some very interesting tidbits of information. And then Heather and I got into this whole conversation about how maybe machine learning and some of the new technologies out there for natural language processing could aid her on a day-to-day basis at her job, and we were talking about the consumer versus enterprise play in this space. That's some of the stuff I want to talk about today: the stuff that you're playing around with. You know, we've talked about some of the AI and machine learning libraries and new technology out there, but I'm fascinated, by the end of this podcast, to take us on this journey of what you're building, and then how these things can apply at different scales. Because you, Frank, are an independent developer and are playing around with stuff. Is it feasible that these technologies in the AI/ML space really make a difference for the independent dev shops of the world? That's all. I just talked a lot, Frank.

Frank: [00:04:28] It's the whole episode. It's a great outline. I actually had that in my outline, because I actually took notes for this, James. But yeah, I don't normally text you. I text you when I'm drunk and when I'm upset, so those are the times James gets me. But this time I was upset because I was excited-upset. We had talked about GPT-3 on this show for a while. It's a general purpose text-processing... I can never remember what the G's and the P's and the T's actually stand for, but something like that; the G is general. It's a very big, powerful AI. But we were talking about it in the abstract, and I was happy because I finally got API access to it, and I finally got to run some experiments on it. And just in the first five minutes I had to text you, because I was freaking myself out with what this thing was doing. And I demanded that we do a show on it because, well, James, I don't know, not to put too fine a whatever on it, how does that phrase go? I think we're out of jobs. You know, not this year, not in 10 years, probably not even 20, but we're out of jobs eventually. This thing is scary. Okay, so let's talk.

James: [00:05:47] Well, first, the differences between GPT-3 and something such as TensorFlow: how are these two pieces of technology fundamentally different and similar? Because I think that's an important baseline, because we did talk a whole episode about GPT-3, but now that you've gotten your hands on it and know the development environment that you're setting up, how are they different? Are they trying to accomplish different things? And as a developer, do you need to do less or more for each one?

Frank: [00:06:17] Yeah. Comparing GPT to TensorFlow isn't too great, because that is a bit of an apples and oranges one, and I'll tell you why. GPT-3 is the name of a trained neural network. It's done. It's baked. It's finished. Technically, it's actually a little family of networks, but they're all trained, baked, finished. You know, they're done. TensorFlow is a neural network execution engine. So is Torch, PyTorch. So is, I don't know, Core ML, ML Compute, ONNX, ML.NET, DiffSharp. Those are all neural network execution engines. Technically, a trained network could be run on any of those; that kind of doesn't matter.
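To make that distinction concrete, here is a minimal sketch (not from the episode) of loading an already-trained network with ONNX Runtime, one of the execution engines Frank lists. The model file name, input name, and input shape are made up; the point is just that the engine executes a network somebody else already trained.

    using Microsoft.ML.OnnxRuntime;
    using Microsoft.ML.OnnxRuntime.Tensors;

    // The engine only executes the baked, pre-trained network; no training happens here.
    using var session = new InferenceSession("some-pretrained-model.onnx"); // hypothetical model file
    var input = new DenseTensor<float>(new float[64], new[] { 1, 64 });     // made-up input shape
    // "input" must match whatever name the model's input tensor actually uses.
    using var results = session.Run(new[] { NamedOnnxValue.CreateFromTensor("input", input) });

The same model file could just as well be run by Core ML, TensorFlow, or any of the other engines; GPT-3's value is not the engine, it is the expensively trained weights.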
The power here is that someone trained a very large network, and to be clear, no one knows exactly how much they spent on it, but estimates are in the range of $12 million of compute time to train this large a network. So its value is not that it's a program; it's that it's done, and we can just start poking at it and asking it about humanity.

James: [00:07:34] And the thing about GPT-3 that we talked about in the previous episode, not to recap it too much, is that the whole concept was that you can ask it for things and it would generate stuff for you. So a few examples, right? Imagine you are interested in running a blog. You could say, you know, "Write me a blog post on a gray whale that goes on an adventure to Seattle," and it would generate a story based on the knowledge of those words. It takes in your words and spits out things. And the thing that got people really excited was that the text in that journey of writing a blog post was getting almost indistinguishable from what a human would write. Not necessarily completely indistinguishable, but very, very, very good. Is that an accurate statement, Frank?

Frank: [00:08:32] Yeah, it is. And as they say, that's just the first act. That's a little thing it can do. But yeah, the big change here is the size of the network and the training data set that it was given. It was given the entire internet, as much as someone could download, and the whole thing was put through it, and so it was able to learn. It's been a limitation of other text generation networks that, you know, they're tuned for English, or they're tuned for some articles broken up into some words, but this thing has been trained on such a vast corpus. Everything. Everything that could be thrown at it was thrown at it, so it has contextual knowledge. It can generate a good story because it knows a little bit of history. It can write a decent character because it knows a little bit of psychology, or at least psychology as presented on the internet, you know, biases and all. We can have the whole biases discussion, but obviously all those apply, and it's mostly English. Now, I say that's just the first act because that is just the first act. This is a general purpose AI. It not only can do text generation, it can do other natural language processing tasks. I remember you've done Twitter sentiment analysis. It can do that. How did you enjoy Twitter sentiment analysis?

James: [00:10:04] Well, there are two types of Twitter analysis here. My wife did one, which was based on our programming, and she did a bunch of stuff with hashtags. What I've done is pretty easy: I have some Flows, like Logic Apps, set up, and it's literally "take this tweet, take the text, do sentiment, and if it's less than this, put it over here." Very simple, overall not a large grouping of data. So relatively simplistic for the ins and outs.

Frank: [00:10:35] Right. So this network, with the same training that can generate stories, can easily classify text once you give it a few examples. And I want to talk about the fine-tuning and how you actually get it to accomplish certain tasks. But that's called classification, and it can do classification on a variety of things. It can do summarization: here's a book, give it to me in a paragraph. It does that very well. It does pulling structured data out of text. I don't know about you, but I read a lot of Amazon reviews, and all the good details are just mixed up with the English prose in there; it can pull that kind of data out. I can keep going with examples, and I'm sure I will, but that's kind of the thought, that's the generality of it. We have networks that can generate news articles. We have networks that can do sentiment analysis. We have ones that can translate English to French. The weird thing is this one can do all of those without needing to be retrained.
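As a concrete illustration of that classification idea, here is a minimal sketch of the kind of few-shot prompt this implies, written in the plain-text format Frank describes later in the episode. The context line, tweets, and labels are all made up for illustration; this is not a prompt from the show.

    This robot labels the sentiment of tweets as Positive or Negative.

    Tweet: I love this new podcast episode, great stuff!
    Sentiment: Positive

    Tweet: My build has been broken all morning and nothing works.
    Sentiment: Negative

    Tweet: The new laptop showed up a week early, what a nice surprise.
    Sentiment:

Given a handful of examples like these, the model is expected to complete the last line with "Positive", with no task-specific training run at all.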
James: [00:11:43] Yeah, I've seen some really complex examples on their website too. They were saying they have it in a bunch of lawyers' offices, where you could say, given this person's case, go summarize all of the valid past cases that may apply to this case. You know, like, often cases are thrown out because of this and that, and it can take in all that data about the case and spit it out as well. There are other examples they list too, which make a lot of sense, like maybe hospitality. Or even, we have Intercom; we're on Zencastr and there's Intercom. Imagine the best bot service in the entire world, because it knows everything about everything, right? There are bot service improvements. There could be analysis and updates to reviews. You know, the list of what you could possibly do here goes on and on, in general. But yeah, I'm interested in that fine-tuning, because it knows the data that it was given, right? And how do I make that apply to my business? You know what I mean? Like, how do I make that apply to Zencastr, for example? Like, the specifics. There was one thing, I don't remember who was working on it; Facebook, I think, talked about how they had an AI bot that could make pull requests to fix bugs in code, or something of that sort. I'm imagining that this thing could do that, but it needs to know about your code or something like that, right? That's what confuses me about this thing right now.

Frank: [00:13:16] That's awesome, because that's basically what I want to talk about for the rest of the podcast. But before we get to that, I want to go backwards in time a little bit, because when you're talking about how you can tailor this to your own thing: I keep talking about how GPT-3 is this big general thing out there, and I want to make it clear, you can't actually download it. Although it's released from OpenAI, it is proprietary, and you have to get API access to use it. Historically, though, there is GPT-2, a smaller, older brother, and you can download the source code for that, and you can train that on your very own data. So if you are an industry person and you have a lot of, I don't know, Oracle databases full of data, you can just start pumping all that data into these kinds of networks, and with enough compute power you could actually train up your own. But the beauty of GPT-3, and what's so startling about it, is its versatility to do all these different tasks and different styles. So I had the same question, though. I'm like, okay, that's all nice at a high level, but what can you actually do with it? Like, what utility does it actually have? And so I just started brainstorming myself. Like, I got this API access, so what am I going to do with this?
And then I remembered, I think it was a tweet or a blog article or something I saw, and someone had described how, given words, GPT could generate HTML: a user interface, not just the text that you're saying, but say, "I want a button that says hello, and when you click it, it pops up an alert that says goodbye. It's surrounded by a blue box. There's a title that says 'Hello, James,' and the background is clouds." And it would just bring all that stuff out. And James, I thought: what if we did that with XAML?

James: [00:15:22] Naturally, right? Because it's very HTML-esque. So if it could do HTML, it's realistically in the realm that it could do XAML, because apparently GPT-3 can do that.

Frank: [00:15:35] Well, first I had to prove to myself that it could do HTML. I just had this vague tweet in the back of my head: could I get it to do HTML? So now let's talk a little bit about how you actually fine-tune this thing, how you actually describe to it the task that you want to accomplish. And it's the weirdest thing ever. Everything about this network is just weird. Okay, it's pre-trained, so there's no training happening. So how do you get it to solve your problem? You just give it examples. Here's the input I'm going to give you, here's the output I expect. Here's some more input I'm going to give you, here's the output I expect. Give it one of those examples and it generally figures out exactly what you want to do. Give it three of those examples and the thing becomes a stupid master at pretty much any task.

James: [00:16:26] So this kind of differs from traditional machine learning, which takes a bunch of inputs, and then you're trying to train it, and you have to give it positives and negatives, and you have to identify those, usually with tags for what it is, right? If you want to figure out whether this thing is a banana or not, you need to feed it a bunch of bananas and also a bunch of not-bananas.

Frank: [00:16:49] Absolutely. That is the breakthrough here. That is where we've just entered machine learning stage two, kind of, because in the past, if you wanted a neural network to solve a problem of yours, the hardest task was that you had to collect buckets and buckets and buckets of data, and you had to organize it in the right way, and all this kind of stuff. What's wonderful here is, I've generally been giving it three examples, and with three examples it seems to kind of get what I'm going for. I'm sure the more examples you give it, the better; there are limits, of course, but you give it that kind of backup buffer of examples. I don't need to feed it, I don't need to teach it C#, I don't need to teach it XAML, I don't need to teach it anything. I just say, "I want a blue button surrounded in a red box," and I type in the XAML myself. Then I go to a new one: "I want a title with a picture of a dog below it," and I type in the XAML. On the third one, I make up a third... I'm getting stuck because I'm trying to make up examples off the top of my head. James, what do we want as the third one?

James: [00:18:06] An entry box where I can type in my first name, and an entry box below it where I can type in my last name.

Frank: [00:18:12] Yeah. Hit enter, and nine times out of ten it's going to give you the right answer just from the two previous examples. So those were the things I kept running, and, you know, it sounds basic in a way.
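Here is a minimal sketch of what a prompt like the one Frank and James just built up might look like as plain text. The wording and the XAML are illustrative guesses, not what Frank actually typed into the playground.

    This robot converts textual descriptions into XAML.

    Description: I want a blue button surrounded by a red box.
    XAML: <Frame BackgroundColor="Red"><Button Text="Hello" BackgroundColor="Blue" /></Frame>

    Description: I want a title with a picture of a dog below it.
    XAML: <StackLayout><Label Text="Title" FontSize="Large" /><Image Source="dog.png" /></StackLayout>

    Description: An entry box where I can type in my first name, and an entry box below it where I can type in my last name.
    XAML:

From the two worked examples alone, the model is expected to finish the last line with something like <StackLayout><Entry Placeholder="First name" /><Entry Placeholder="Last name" /></StackLayout>.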
I don't want people to get lost in just how basic that operation is, because I think it's pretty magical what's happening here. We've been trying to get non-programmers to be able to write programs; it is a fundamental problem in computer science. We are doing something wrong, in that it takes someone roughly five years to figure out how to write an application properly; some people are smart. We have been trying to improve that forever, and this is the first step where I'm just giving it plain English and it is giving back structurally correct stuff in a language that's probably not that well represented. I mean, C# is pretty popular, XAML is pretty popular, but it picks up on these things very quickly. And it just got me thinking: Microsoft has that whole Power Apps world, and I've always loved Access. I just hope in 50 years people are just writing out textual descriptions of an app that they want, hitting enter, and having this thing pop out a UI.

James: [00:19:35] So, okay, you give it some inputs, and this could be anything. So what are some non-XAML examples? Let's talk about not coding. What are some more real-world examples of the types of inputs and outputs you could give it?

Frank: [00:19:52] Okay, sorry, let me just give one more coding one... or no, you want to do a non-coding one. So here are some examples. Q and A: this is just like Wolfram Alpha, where you just ask a question about the universe. What is the mass of Mars? The very first one I asked was, how many dogs are there? And it replied with how many dogs there are in the United States. I'm like, well, how many dogs are there in Seattle? And it replied with how many dogs there are in Seattle. That's from the same engine that does coding, you know, the same engine that does sentiment analysis. It can do English to French, French to English; that's easy. It can pull structured table data out of any webpage; that's pretty easy for it. Going on and on here: it can take any movie and turn it into emoji. It can generate product names for you. It can generate SQL queries, given a natural language question to a database and a schema. That sounds nice; imagine natural language SQL queries. There's a JavaScript helper chat bot that will take errors and just try to fix your code while you're there. A recipe generator... I don't know if I would want to try any of those. Would you try a recipe from an AI? I would.

James: [00:21:19] Yeah, I would, I think so. I think that'd be a great adventure, to be honest with you. And honestly, the AI could probably spit out the required ingredients and the recipe in fewer words than, you know, when you go to the recipe websites, where it's all just ads and you've got to scroll through a whole blog post telling you how awesome it is to get to the recipe at the bottom, the one that has popups all over it. Just give me the recipe; that's what I would want. Because you know what would be really good? Ideally, if it was a really smart engine, it would take all the recipes. It could create recipes, but imagine I was like, give me the best recipe, create the best recipe in the world for a raspberry pie or whatever, right? And what it does is take all of the recipes and all of the reviews and all of the feedback, and it doesn't just give you the top-rated one.
It reinvents the raspberry pie. The pie of razz... yeah, I got it, I got it. Key lime pie, whatever; it could be kind of cool.

Frank: [00:22:27] I love that one, because it's almost like programming, because baking is such a science and measurements have to be precise. But I am curious; I might try that one myself. You were asking what a practical real-world one is. I don't even know how practical or real-world this is, but it's kind of interesting: it will convert a story written in a first-person point of view to third person. So it knows English well enough that it can just change the author's point of view. That's kind of fun. I really love that one.

James: [00:23:02] So these are all examples of things it can do, but how do you get it into the state where it can do them? You were talking about saying, "Hey, give me an entry box that is the first name," and then you had to describe the result. So imagine I'm doing something else. Did you give it the three things, the inputs and outputs, for anything besides this coding thing? For the dog thing, did you say, "How many dogs are there?" and then give it the answer, or something like that? I'm kind of confused.

Frank: [00:23:31] So for that one, for the Q and A one, I put in three questions. I put a Q and a colon, because it's text, it understands structured text. I put "Q:" and a space and I just asked the question. Then on the next line, I put "A:" and a space and I put the answer, something I knew the answer to, phrased the way I want, because, like, I had fun; you can also switch it to, like, Jeopardy phrasing and it can do that just fine too. So I did myself three Q and A's, and then on the fourth one, I just typed the question in: how many dogs are there? And it just answered.

James: [00:24:10] So it's like these inputs and outputs, these predefined things... it's not training, it's more like putting it in a mood.

Frank: [00:24:17] It's giving you context. It's not training. It is just telling the network, oh, this is what we're talking about. In fact, this API is hilarious in the way that you access it, because what they say is, at the beginning, just write one sentence of what you think the neural network should do, and that gives it context. So at the beginning of all these examples... for that one, the very first sentence I put was, "This robot can answer any question," period. And then I started to go. On the other one, I said, "This robot can convert textual descriptions into XAML," and then I gave it my couple of examples. I don't know how many examples it really needs; it got me thinking. But that's it. That's the context, and the rest, it's off to the races. It is.

James: [00:25:08] You know, my practical real-world example is this: imagine that right now, today, there's another person in the room who can't quite hear the conversation. They have a bunch of knowledge, they're a human being, they know a bunch of stuff, and the two people that are having a conversation walk closer to them. Those two people are talking about dogs, they're talking about Shiba Inus, and all of a sudden this person is right there. It's like, "Oh, what colors are Shiba Inus?" "Oh, my Shiba Inu is black." "Oh, my Shiba Inu is, you know, vanilla," right, whatever the color is. And then, "Oh, my Shiba Inu came from Japan," right?
And then this third person, who wasn't part of the conversation, who has knowledge of not only Shiba Inus but also of everything else in the entire world that's on the internet, can now chime in. You can be like, "Oh, what color is your Shiba Inu?" And then it's like, "Oh, my Shiba Inu is, you know, magenta," or whatever it is. You know what I mean? Like, "Oh, do you know how many Shiba Inus are black?" "Oh, well, 37.54% of Shiba Inus are black," you know, that type of thing. I'm assuming that this is what we're talking about here, right? "Oh, can you describe your Shiba Inu?" "Oh, let me tell you about my Shiba Inu," right? But this person is not actually a person at all. It's an AI, a robot.

Frank: [00:26:34] It's an API. That's kind of a perfect analogy; that's really good. It's exactly that person. And what it is, is there's a lull in the conversation. People have said what they wanted to say, then there's a pause, and then that third person just starts talking, kind of almost mimicking back the conversation. The joke is, for that Q and A one, I put my question in and it gave the answer, but it wasn't done. It started asking other questions and giving those answers, because, like you said, in the beginning it was trained to be generative. That's when it's happy, when it's creating content. If it has to answer some questions in order to create that content, fine. If it has to generate some XAML, fine. So that's what's interesting too. So I was talking about how I was taking text descriptions and turning them into XAML. Then, even with you on the phone, I was like, now I've got to try to get it to generate the view model. Give me the accompanying C# code. Give me the accompanying database code. These are all within its limits, you know, as long as you can provide it enough examples. I don't know those limits, but I haven't found them yet.

James: [00:27:56] Because we can assume at this point, if it did ingest the internet, that it ingested GitHub and documentation, more importantly documentation, I would say. Documentation is verbose, right? It tells you what the thing is, how to write the thing in the correct syntax. Maybe that's even more important than all the source code in the world, because it's the correct way of using the thing. Structurally, we assume that it knows all this stuff. Yeah.

Frank: [00:28:33] Yeah. And I just had to start poking at it to find out some answers to these questions. Like, how much of the internet did it read? How much C# is it actually kind of understanding here? So some fun things I thought about were... well, I'm not even sure I really like my idea of an app generator, because it's a little bit random what its response is going to be. It is random, because from the same input it can give you multiple outputs; there is a random factor there. So maybe that's not exactly what I want. Maybe I'm more interested in deltas. So, like, here's an app, here's a bunch of C#, here's a bunch of XAML: I want there to be a red button that, when you click it, does this; go modify that code so that kind of happens. And so I started working on these deltas, because when you have AIs, there is always this question of reproducibility and robustness, and this is robustness from an engineering perspective. Small changes in inputs should result in small changes in output. You can't guarantee that with something stochastic, with a random element, with something that changes, but deltas, changes, those are small; it should be able to handle that. So I was interested in those. And so I started to do ones like: here's some C# code, refactor that into an interface. Learn that one example. It learned how to refactor things into interfaces. Huh. I didn't try to have it pull out a method, but what did I do? I did things like add a property, make that property an INotifyPropertyChanged one, blah, blah, blah, wrote all that stuff out. But then I hit the button that said, just keep going now, and it started coming up with its own code changes: make Foo inherit from a base class, add this property to Foo, add a relation over to this. It just started imagining its own code mutation problems and then performing those code mutation problems. So scary. There needs to be a version of Visual Studio based on this, is what I'm saying.
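A minimal sketch of what one of these delta-style prompts might look like, again just as plain text; the classes, the instruction wording, and the expected completion are illustrative guesses, not Frank's actual examples.

    This robot modifies C# code according to an instruction.

    Code: public class Person { public string Name; }
    Instruction: Turn Name into a property that raises PropertyChanged.
    Code: public class Person : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;
        string name;
        public string Name
        {
            get => name;
            set { name = value; PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name))); }
        }
    }

    Code: public class OrderService { public void Submit(Order order) { } }
    Instruction: Extract an IOrderService interface and make OrderService implement it.
    Code:

The hoped-for completion is the interface plus the modified class; the small, targeted instruction is what keeps the change to the output small relative to the input.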
James: [00:30:53] So where is this happening? Is it on a website? Is there an IDE? Like, where are you seeing these results in the real world?

Frank: [00:31:05] Right. So right now there is an API to a server out there. They say it's handling millions of requests per day or something like that, but it is a closed beta. You have to sign up, and eventually they let you in. If you're an app developer, it's an API call: you send your input to the server, it sends you back the response. Here's the fun thing: because you can't train this network, every time you send it a request, you kind of have to give it those examples also. I haven't figured out exactly how the API works; I'm sure there's a way to just store those examples so I don't have to send them every time, but it is that simple. Now, for me, who hasn't written an app yet: when you get API access, they have a cute little text box to interact with it, and all your input is just text. So your three examples just go at the top, and what you expect kind of goes at the bottom. It's a text file. There is a wonderful elegance to that. If you can formulate your problem as a textual problem, which is not something we're accustomed to as programmers (usually we're trying to take textual problems and encode them into this other disgusting form), if you can actually just describe your problem out in text and demonstrate it in text, especially if your problem has an English bias to it, then I think this thing would be really powerful.
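For what that API call might look like from C#, here is a minimal sketch against the beta completions endpoint. The endpoint path, engine name, and JSON field names are recalled from the public beta documentation and should be treated as assumptions; the prompt and API key are placeholders.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class CompletionDemo
    {
        static async Task Main()
        {
            // The whole "program" is the prompt: a context sentence, a few worked
            // examples, and the new input we want the model to complete.
            var prompt =
                "This robot converts textual descriptions into XAML.\n\n" +
                "Description: I want a blue button surrounded by a red box.\n" +
                "XAML: <Frame BackgroundColor=\"Red\"><Button Text=\"Hello\" BackgroundColor=\"Blue\" /></Frame>\n\n" +
                "Description: An entry box for a first name and one below it for a last name.\n" +
                "XAML:";

            using var http = new HttpClient();
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Bearer", Environment.GetEnvironmentVariable("OPENAI_API_KEY"));

            // temperature 0 keeps the output as repeatable as the service allows.
            var body = JsonSerializer.Serialize(new { prompt, max_tokens = 100, temperature = 0.0 });

            var response = await http.PostAsync(
                "https://api.openai.com/v1/engines/davinci/completions", // beta-era endpoint; assumption
                new StringContent(body, Encoding.UTF8, "application/json"));

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }

Note that, as Frank points out, the examples travel with every request; nothing about your task is stored in the model itself.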
James: [00:32:37] Yeah, because you could almost envision, like, a Figma file or a Sketch file that, instead of giving you this very pretty image that you then have to go try to mimic, gives you descriptors. Imagine if there were accessibility descriptors on everything and it described what a designer made, and you're like, okay, just give me the thing. And the cool part is that it could be XAML, it could be C#, it could be abstract, it could be whatever. It doesn't matter what it is at that point, because you don't need a tool to translate it. It's just, here you go, right here: it's a profile page with a user image that's in a circle in the middle of the page; below it is their name; and on the left-hand side, you know, one third of the screen over, is their Twitter icon, and blah, blah, blah, right? It just describes everything manually, and then there it is. And then the diff would be important, because as you made changes, you could diff those two things.

Frank: [00:33:35] Yep. And I think that's the point where I sent you the text that we're obsolete. Not now, and you know, GPT-3 is going to have limits. I don't know where they are, but they're there. I kept thinking, let's say I wanted to seriously make that tool that you're describing. How would I seriously go about that right now? What I would do is find a bunch of existing XAML apps, take a million screenshots of them, and have a thousand people on Mechanical Turk describe, in their best words, what each of those is. I would collect that into a file and I would use that as my context for giving this thing its context. That sounds like a lot of work, and it doesn't need to be thousands, but that's what programming is going to be in the future. It's going to be that process. It's going to be more of a data collection process.

James: [00:34:38] So this gets us to the next question, now that you've kind of gone through this process as an individual. It brings you to that bigger question, right, which is: at what level does this technology land? Who is building and selling products based off of this technology? Is it the enterprise play? Is it the Microsoft play with Visual Studio, or JetBrains with Rider, where they integrate this into their IDE? Is it an independent dev shop? Is it a solo dev, like, "Hey, I'm going to build something for my company"? How realistic is it that those different levels are going to be there? Because, you know, you're saying, oh, it'd be great if Visual Studio had a story. And we think of IntelliCode, right? IntelliCode is very, very smart. It's on top of IntelliSense, and what IntelliCode does is take your source code, put it into a machine learning algorithm in the backend, and spit out a model that Visual Studio can read to give you predictions based on what you're typing, based on your characteristics. Now, it's not saying, "Hey, make a geolocation request that times out after three minutes, then take that information and pop it up in a text box," right? It's recommending what you may write after you type the dot. It's a very different type of scenario in that case. But again, are these types of things really only going to be worth selling at an enterprise level, because it will cost so much to use? You know what I mean?

Frank: [00:36:18] No, I don't think so. It's hard to predict the future and all that stuff, but this is cloud computing. This is what cloud computing will be in the future: accessing things like this. And let me cover that from a few perspectives. Number one, this thing is far too large for you to run, probably far too large for most enterprises to run without, you know, a real dedicated staff monitoring that computer. So this is just right for cloudifying, and I am sure Microsoft is going to have Azure... God, I almost called it God. I won't call it God, but Azure super-knowledgeable AI robot, API access available, all that kind of stuff. So I think these big things will always be provided through the cloud like that.
And for that reason, they're always going to be priced the way APIs are always priced, you know?

James: [00:37:18] Affordable. But do you think, let's say the API is affordable for a small dev shop that makes a consumer product, you know, a Zencastr, right? Do you think it's going to be worth their time and energy and effort to try to build things themselves that use GPT-3? Or is it like, oh, Intercom is going to build it and I'm going to subscribe to their service, right? This is a great example: is it going to be worth it for Zencastr to build their own, you know, diagnostics bot service that uses this stuff, or whatever else, I dunno, right? Or is it just like, oh, there's going to be a service that someone else has built for me? In general, right, there are scales: there are the few players, then there are the thousands to ten thousands of players, and then there are the millions of players. When you think about it in that regard, it's like quantum computing. There's a very small number of people that are using, like, the cores, then there's quantum computing as a service, which is a little bit bigger, and then we don't have every single person in the world using quantum computing on a daily basis, right? That type of thing.

Frank: [00:38:34] Yeah. I am going to put this in every app that I have.

James: [00:38:42] Do you think it is a thing that you could use on a daily basis? Like, is that a thing that you are able to use in your business?

Frank: [00:38:54] So, starting with: I think we're still going to need mediators. We're still going to need programmers to take people's problems and talk to this bot and talk to other data sources. No, we're not out of jobs just yet. You can't type, "Hey GPT, please run my business. I'm going to be in Hawaii. Let me know how things go." You can't do that; it's not that powerful. But if you can formulate these little different problems... let me give you an example. Calca has had this feature where, when Calca can't solve a problem, it would Google the answer. It was a great backdoor for me, because there were some things it just couldn't do, especially if you asked it for knowledge. What is the mass of Mars? It couldn't do it. GPT can. GPT knows the mass of Mars, and it can tell it to you in any units you want. It can probably tell you how many dogs are on Mars. I didn't try that one, but that's a good one. So immediately I'm just going to put it into Calca, but it's a little tricky, because there are limits to how you can use the API, and a lot of that is because it's in beta right now. So actually, I don't want this thing in beta; I want the Azure version of it, you know? I can give you a million uses for this thing. Is it going to transform every business tomorrow? No. If your business has anything to do with natural language processing, though? Yes. Because you mentioned IntelliCode, and Continuous has a feature like that: I trained a specific neural network, I gave it C# code from GitHub, and it can do a pretty good job at that. But both of those are subsets of what this thing can do. You don't need to do that. It already has that knowledge; you just need to give it a few little cues. It basically obsoletes the last three or four years of text processing and neural networks and machine learning, all of those special
things that we have built. Those special-purpose devices are just not as good, simple as that, because they don't have the context; they're trained on that one specific task. But the moment I say, in my C# code, "What is the square root of pi?", they don't do anything. But GPT can. So I think it will be transformative. As usual, we're probably 10 years ahead of that, but I'm excited.

James: [00:41:37] So, final question before we wrap this up: is GPT-3 static? And what I mean by that is that someone ran all the data, right? Is that a snapshot in time? Is it a snapshot of 2020? What happens when C# 10 comes out, or whatever? Is it continuously learning, or is that it?

Frank: [00:41:59] I have no idea, but I want the answer to this question, because it can do things that scare me, James. I wrote into it, "What's the weather like in Seattle?" and it said, "Raining." I'm like, oh, okay. "What's the weather like tomorrow?" And it was accurate to the weather forecast. But that shouldn't be possible, because as far as I know it doesn't have accurate data and wasn't referencing time. Was it making a generalization? I have no idea. So of course I then asked it, "When will I be vaccinated?" And it said April 3rd. And I'm like, okay, I don't know what you know, robot, but I'm convinced that you're convinced; your confidence gives me confidence. I don't know the answer to that one.

James: [00:42:47] And how long did these requests take? That's the other thing too, because, I know I interrupted you...

Frank: [00:42:51] Google? I don't know what kind of computer this thing's running on. Yeah, it's too fast. It's scary how fast it is. I talk about how big it is, but all that effort goes into the training. Once it's there, I'm sure it's, like, a couple of gigabytes or, wow, who knows, to move around. But I will also say it's not exactly fixed in time. GPT is in fact a family of neural networks that have been trained; some are bigger, some are smaller. It has an English bias just because of the training data it's been trained on, so they're working on removing some of those language biases. We don't know anything about its cultural biases or anything like that, who knows. Those are the bigger questions that we have to answer once these things start making important decisions. Right now, the most important thing to know is whether something is AI-generated. When they talk about the beta and the access that they're giving you to this thing, they make it very clear that for anything it generates that you publish, you need to make it very clear that an AI generated it, and that your company is taking responsibility for what it has generated. And that's not just them being legal fancy-dancy. This is a great possibility for misinformation. It could take Wikipedia, and I could say, "Switch Wikipedia to be evil," and it could output a whole other version of Wikipedia, and who knows what would be right and wrong. 99% of it might be wrong. Who knows how right or wrong Wikipedia is to start with, but, you know, we don't know what it would change. And so the scary, larger social question is that we really need to start flagging things as AI-generated, if they are.
James: [00:44:52] Yeah, that is important. I know there are a lot of commissions and boards out there that are trying to look at AI transparency and ethics as well, because, you know, there are times when you Google something and you're reading a news article and you don't know. I would like to know what level of automation went into it. Was it written by a specific author? Was it something that was clumped together from different news sources, like a remix? Or was it completely automated? What's the scale of automation that happened on this? Because a lot of the internet is... I think a lot of the news articles and a lot of the things that bubble up are generated in some form, and they can be very easily. You almost know it. I think the scary part, like you're saying, is that at some point you may not know that it was completely written by AI, right? We may not know.

Frank: [00:45:50] Sure. Today we're all reading something that we didn't know was generated by an AI. It's just so common on those quick news sites that we all click a Twitter link to; we've never heard of that news site, but there it is with a million articles.

James: [00:46:04] And that is why you need to support your favorite creators. And you can support us by going to patreon.com/mergeconflictfm, because we make real content. This podcast was not machine generated. It was generated by James and Frank.

Frank: [00:46:18] Hello. I am not a robot.

James: [00:46:21] Whatever, Frank. Well, that's what we were going to do for this week's Merge Conflict. Stick around for the exciting conclusion of "Did James buy a MacBook Air?" I don't know why; everyone knows, they read my Twitter. But beyond that, thanks for listening. This has been yet another Merge Conflict. So until next time, I'm James Montemagno.

Frank: [00:46:39] And I'm Frank Krueger. Thanks for listening.

James: [00:46:54] All right, Frank, I did it. I hopped on the Apple.com train.

Frank: [00:46:56] I saw you got the upgrades. I am so proud of you. I am so excited for you. But you didn't get the courier delivery.

James: [00:47:06] Uh, no, no, no. So this was the next available shipment, because of the upgrades and the color, and the MacBook Air was there. So, what color? Gold, obviously. Beautiful. And you can only get that in the Air; you can't get that in the Pro.

Frank: [00:47:23] Oh, good point, good point. Was your other laptop technically rose gold, or just gold?

James: [00:47:28] Rose gold? Yeah. So I don't have it here; I would love to do a side-by-side. It is on campus in Redmond, so I should swing by and get it, but I still don't want to enter the facilities yet. But I heard that gold is very pretty, so I'm excited for that, because I've seen the gold phones. I think Heather has a gold iPhone 11, whatever.

Frank: [00:47:52] Yeah, I had a gold iPad. It was a little much for me. I'm curious how you'll see it, but I think on a laptop it's actually just going to look amazing with that keyboard.

James: [00:48:05] I think so. So I went back and forth, and then I remembered, oh, I own a business.

Frank: [00:48:14] And a business expense? Hashtag question mark.

James: [00:48:17] It is a business expense, I'm pretty sure. Well, you've got to upgrade your hardware; there's a hardware expense that you can do. So I decided to go a little bit more in that regard, seeing as it is literally a laptop for my business. It makes sense.
I don't know how businesses work, but you can pretty much write off your hardware if you need it. So here's the funky part: I got a lot of feedback on Twitter, so thank you, everyone, for writing in and giving hearts and asking questions. And the biggest question I got (there were a few of them, Frank) was, why did I not get the MacBook Pro? Because the MacBook Pro is actually only like a hundred dollars, or $150, or $200 more with the same specs, because it's all the same stuff. And they're the same. Well, here's the thing: they're the same weight, and the MacBook Air is actually a little bit thicker at the back.

Frank: [00:49:13] Well, it tapers. It tapers, the Air tapers. Yeah, I had the same debate; it's what was causing me a lot of trouble. Finally, I decided I wanted the fanless design, if for no other reason than the iPad has actually become my favorite device, and there's no fan in an iPad. Why should there be a fan in my computer? You know, I was thinking back to some of my older laptops: spinning disks, fans everywhere. I think some of them had, like, a CD tray that would pop out; all these mechanical parts. We don't need that. We don't need mechanical parts anymore. We have dongles for that. So I wanted the simplicity of it. But it may not have been the greatest choice, because it turns out I actually do love training neural networks on it, and that does kind of call for the fans to spin; and lacking a fan, nothing spins, so it has to throttle down the CPU. But I think for my actual intended use, it's just perfect. How do you feel?

James: [00:50:21] Yeah, the same. I mean, the biggest reason why I didn't get it is because I wanted to save $200, and it's the same stuff. Because if it was $200 or $300 more and it was completely different, let's say it was an M2 processor or something, then I would totally have gotten the MacBook Pro. The difference is that it is the same processor, and I got 16 gigs of RAM and the 512 with the GPU. So it's literally the same thing in a different case. So it comes down to that fan and the color and $200.

Frank: [00:50:51] I did forget one thing I was debating: it has a Touch Bar, and none of my apps support Touch Bars, for the simple reason that I don't have a Touch Bar and I don't really know how to test it. So I thought, perhaps, to be a good business person, maybe I should have gotten the Pro. But, you know, I'm just kind of hoping the Touch Bar goes away.

James: [00:51:15] Yes. That's what I didn't want. I didn't want to have the last MacBook Pro that had a Touch Bar, you know what I mean? I didn't want to. Yeah.

Frank: [00:51:23] I'm just hoping I won't have to support it in the future.

James: [00:51:28] Because Danny asked me, "Oh, should I wait?" because he's been looking at them as well. "Should I wait? Because I heard..." And I said, you know, Frank and I talked about this previously, and Frank just told me to buy it and just go for it now. I think it's the right call. It's actually coming a week earlier than predicted: it said it was going to come on April 7th, but it's supposed to show up one week from today, when we're recording, on the 30th. So that's very exciting. And a lot of people had a lot of questions around .NET performance, Visual Studio for Mac performance, can you debug, what about Docker, what about Android emulators?
I'm going to do a YouTube video breaking down all that stuff, a comparison to my old MacBook. My understanding is that Big Sur... what version is Big Sur? 11?

Frank: [00:52:13] 11, 11.0, 11.2, what is it? Point-something changed. I'm going to look at About This Mac... 11.2.3 is what I have.

James: [00:52:25] So from my understanding, reading the latest Visual Studio for Mac release blog post from Jon Galloway, from the team, in there it says specifically 11.2... I think it says that 11.2.3 has a bunch of fixes to Rosetta, and that fixes all sorts of things, which is very exciting. And if you're going to early adopt, you've got to wait a little bit, but...

Frank: [00:52:47] Let me interrupt there. So my general feeling has been that everything has worked, so I haven't really run into bugs like that. But I guess in that case it was a really low-level one, like the debugging interface was getting messed up, so there was an error that would pop up while debugging. I just never ran into it, but I guess it was there. So it's neat that they fixed it, but at the same time, I've just been pretty lucky. The things that hurt me are dependencies. I use a lot of Unix software, and Homebrew is just kind of catching up. I'm getting a little bit purist: I only want Apple Silicon compiled stuff for the Unix tools. I'm trying to get rid of all the Intel stuff. So I'm excited for all the .NET upgrades.

James: [00:53:37] Yeah, I'm excited for .NET 6, which will have native M1 support in it, which is really cool. And I'm not sure about any of the IDEs; I know VS Code has been recompiled for M1, and it's supposed to be really good. So things are going to get there, and I can't be mad at things taking a little while. You can't have everything day one; you want things to at least work on day one, you know. And as iOS and Android developers, especially iOS developers, one thing that we always do in beta summers is make sure the software works. Sometimes that beta summer is a long ride; for the M1s, it really wasn't that long. And some of those things are out of our control: if it's a bug in Rosetta, then that's not in our control. But then over time you upgrade stuff to get it to work. So what it means, though, Frank, and then we'll end this podcast, is that if you're relatively happy now, imagine how much happier you'll be with the same exact hardware when everything is recompiled for M1. Yeah, exactly. It'll only get better. Literally, your machine will get better over time, because the apps will be recompiled.

Frank: [00:54:39] Yeah, I'm looking forward to it, for sure. But anyway, just using it, it's fine. There you go.

James: [00:54:48] Thanks, everyone, for tuning in. See you next week.

Frank: [00:54:50] Bye-bye.