mergeconflict213

James: [00:00:00] Frank Krueger, Elon Musk tweets one thing about dogecoin, man, and I'm a dogecoin billionaire overnight. Can you believe that?

Frank: [00:00:16] Wow. So one of his tweets came true. Is that what you're saying? Is he finally the prophet we've been looking for?

James: [00:00:22] No, it's quite fun. I invested quite heavily, like a hundred dollars, in dogecoin, just to spite my manager Joseph, because he actually understands cryptocurrency and I don't. But I like Shiba Inus, and that is a doge, so why not invest in a great cryptocurrency that, A, never runs out, and B, is a complete joke where, when somebody tweets something, it randomly spikes and triples your value? It's quite fun. I'm actually up $60, so that's not bad. That's a 41% gain. Yeah, why not, right?

Frank: [00:00:59] I know it's super popular with the nerds, just for the dodginess of it. I think I have a few friends that own some. It was a little too ironic for me to actually invest in, though. I didn't want to put my money down on that irony.

James: [00:01:15] No, it's a terrible decision. I'm not recommending that anyone buy doge, in fact, because here's the problem with doge. It's a great idea in concept, having a joke cryptocurrency, but then people try to do harm to it. In fact, I don't think that Elon is trying to do harm to doge by tweeting out that the doge apocalypse is coming and everyone is turning to a dogecoin economy, but there were some TikTok individuals that were like, oh, invest $20 in dogecoin and we'll crypto-spike it or whatever. You know, they try to drive up the price, and then those people sell all their stuff and then it crashes. That's any cryptocurrency, but the day that that happened, it was a TikTok spike. And then a week later, an Elon spike, because Elon's tweeted two or three times about dogecoin, and every time he does, it spikes up quite heavily. But yeah, it's ridiculous.

Frank: [00:02:14] You know, we invented currency so that we had a stable form of transactions, so that we knew a certain value of something, so that we could easily say, well, I don't have any cattle here to give you, but I'll give you this coin instead. It was a constant, something to rely on. And so I love these cryptocurrencies, but if one nerd-popular celebrity dude can change the value of that currency, it ain't a currency yet. Maybe in ten years it'll all stabilize and actually be a currency.

James: [00:02:46] It's very true. So why, Frank, am I talking about dogecoin, you may ask?

Frank: [00:02:51] I really am asking that, James. What are we doing? I'm so scared. What are we doing on this episode?

James: [00:02:57] Well, it's because our good friend Elon Musk is also a backer of a little company called OpenAI. Have you heard of this one?

Frank: [00:03:04] Ah, that's the tie-in. I forgot that I have heard of OpenAI, James, and now I know what we're talking about. Jeez, was he one of the founders or just an investor?

James: [00:03:18] Just an investor. A little backer.

Frank: [00:03:20] Yeah. I think Microsoft has invested too, full disclosure, I'm not sure. Anyway, I'm excited, because I want to talk about something, and this isn't necessarily my idea. It came up on our Discord chat. It was actually a lightning talk idea that I just wanted to turn into an episode.
Because I just want to talk about it. Without further ado, I want to talk about GPT-3. James, have you heard of GPT-3?

James: [00:03:49] Yeah, it sounds like... I like it, I like it, I'm into it. No, I have heard of GPT-3, because my colleague Abdula went into our team's channel and was like, so are all of our blogs just going to be auto-generated by GPT-3? And then my response was to Google "GPT-3," and I was like, I know, I guess I wasn't in on whatever was happening that day.

Frank: [00:04:15] Yeah, it made its rounds on Twitter. In fact, especially after we're done talking about this, I highly recommend people just go on Twitter to see the experiments. You can just search "GPT-3" and you'll see a lot of fun experiments out there. But let's not get ahead of ourselves. So this is kind of a general-purpose natural language processor. It works with stuff you type out, words. And the G part, I think, stands for general, in that it can accomplish a lot, a lot of scary tasks, and I mean scary in a good way. It does a very good job with language stuff. So where to begin with this thing? Maybe some examples.

James: [00:05:03] Yes. And before we get to said examples, I do want to acknowledge that, yes, Microsoft did invest a billion dollars in OpenAI, a San Francisco-based research lab founded by Silicon Valley luminaries including Elon Musk and Sam Altman, dedicated to creating artificial general intelligence, which I did not know. So I have no prior knowledge of said investment or anything, and in true transparency, as people know from this podcast, I work for said company of Microsoft, but not in this division at all. Whew. Yeah, there we go.

Frank: [00:05:32] Wow. Wow. Disclosure. Whew. Okay. So when I first heard of OpenAI, I understood it to be someone, a group, trying to research how we prevent AI from taking over the world. And to be thoroughly honest, I kind of dismissed it, because I'm like, oh, this is the most Silicon Valley of Silicon Valley things you can get, right? But they started putting their money where their mouth is and started actually releasing some neural networks. I have my balance bot that doesn't work, but kind of works, and that's actually running OpenAI neural networks. Okay, so they are doing some of their work in the open. But a funny thing about GPT-3, and this is from OpenAI: they are not releasing the source code to it, and they're not even giving too many details on how it works, because, as they say, it's a dangerous technology. It makes it scary to think that one company is in control of a dangerous technology, but at the same time, you have to respect them for not releasing it, because... well, let's talk about what you can do with this thing. Type in the headline of an article and it'll write you the article, James. And your very first question should be, well, is it an accurate article? And we're like, maybe, who knows. Is the information in it correct or false? Who knows. The scary part of it, though, is that it's well written, to the point where you think a human wrote it. And that's just a natural bias we have: if it's written down, we tend to believe it. So that's a scary application of the technology, in that it can create a lot of false information.

James: [00:07:22] Yes. I'm looking here at this beta of the OpenAI API, in which there are a few things.
So text generation is one. There's translation. They have speech to bash.

Frank: [00:07:35] That's ridiculous. We'll talk about a few of those, because those are kind of hilarious.

James: [00:07:40] Yeah. They have Q&A, things like that. But yeah, for text generation, their sort of prompt is feeding in a bigger headline or a first paragraph, like, here's the first paragraph, now write the second paragraph. And it's writing real words. I mean, this is a real sentence which makes sense in the context that I'm reading. They're talking about a text-in, text-out interface, and the response, using a DaVinci model... this whole thing was about limits and strengths, and it says: the road to making AI safe and useful is long and challenging, but with the support of the developer community, we expect to get there much faster than working alone. Powered by Microsoft Azure.

Frank: [00:08:34] Wow.

James: [00:08:35] It really is. Goodness. Yeah, oh boy. I don't know if that's GPT-3, though, to be honest with you. I mean, this is the OpenAI technology, the OpenAI API beta. Is that the same thing?

Frank: [00:08:48] It probably is. I think they might have a couple of API betas. So what they did, instead of releasing the source code: you cannot download GPT-3, but they made it a web service instead. Yep. And they made it a closed-beta web service. And so that's them getting around the "we're not going to show you how to create this, but we're going to let you use the monster," James. So yeah, text generation, that is wonderful. I mean, what domain squatter hasn't always wanted a system like that? But, you know, robots actually write a lot of news releases already, so it's just "where do you draw the line" kind of stuff there. But translation, that is an amazingly useful task, right? We all want better translators. As software developers, don't we wish it was all just a magical button that we could hit, where our apps were magically translated into other languages? So we can always use better translation. And then you brought up one of the weirder ones, and there have been a lot of demos of this: taking natural language and generating... which one did you say? Well, I'll tell you the one I know. You just start typing in your finances. You know, talk to this neural network, tell it about your life, how much money you owe, and it'll build an Excel sheet to track your finances over time. So you talk to it like a therapist and it'll build you Excel sheets. Which one did you mention earlier?

James: [00:10:24] The one I was looking at is speech to bash. The input here was "firewall all incoming connections to port 22 on this machine," and the output is "iptables -A INPUT -p tcp --dport 22 -j DROP," which I would have never... I mean, who knows what that would have been, but since this is the demo website, I have to assume that this is correct.
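For anyone curious what driving that beta looks like from code, here is a minimal sketch in Python. It assumes the beta-era openai client library, an API key from the closed beta, and the "davinci" engine name James mentioned; the prompts and parameter values are illustrative guesses, not anything shown on the demo site.

```python
# Hypothetical sketch of calling the OpenAI API beta discussed above.
# Assumes the beta-era `openai` Python client and a key from the closed beta;
# the engine name and parameters are illustrative, not confirmed in the episode.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Text generation: hand it an opening line, ask it to keep writing.
completion = openai.Completion.create(
    engine="davinci",  # the "DaVinci model" James read about
    prompt="The road to making AI safe and useful is long and challenging.",
    max_tokens=80,
    temperature=0.7,
)
print(completion.choices[0].text)

# "Speech to bash" is the same endpoint, just prompted differently.
bash_prompt = (
    "Convert the instruction into a bash command.\n"
    "Instruction: firewall all incoming connections to port 22 on this machine\n"
    "Command:"
)
command = openai.Completion.create(
    engine="davinci",
    prompt=bash_prompt,
    max_tokens=30,
    temperature=0.0,  # low temperature: we want one deterministic command, not prose
    stop=["\n"],
)
print(command.choices[0].text.strip())
# Expected shape of the output, per the demo James read:
#   iptables -A INPUT -p tcp --dport 22 -j DROP
```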
Frank: [00:10:47] Okay. Yeah. Isn't that fascinating? If you think about it, it's still translation. It's just translating into something we've never been able to translate into before. We've always needed humans to generate code, because, I mean, it's in the word: we're encoding information into it. How do you go from a high-level spec into a working program? This obviously is just a cute little tech demo, but maybe once we get into describing how GPT works, you'll understand that this is maybe a scary little tech demo, the fact that it can do this. There are other ones out there; I saw one doing SQL generation. So this is, give me a natural language query and it can hit a structured data source. Really wonderful for our kind of user interface work, especially if you take the next step and don't make people type this stuff in, but use an Alexa or Siri or something like that.

James: [00:11:42] Yeah, very true. I'm looking here at a demo they also have, which is on Wikipedia. So this is genius. You're on a Wikipedia page, and they went to the bread page, so you're looking at bread, and normally you'd be looking for something. Their example is, why is it fluffy, right? Why is bread so fluffy? So you might hit Control-F or Command-F and start searching keywords.

Frank: [00:12:16] Find instances of...

James: [00:12:18] Fluffy, fluffy.

Frank: [00:12:19] ...'cause you've picked the wrong word for...

James: [00:12:21] ...me? You picked the wrong word. So what they said is, instead, you're on said page of Wikipedia, you hit the OpenAI button, and you type in, why is bread so fluffy? And it does its machine learning algorithm, or whatever it's doing, on the page, and it will not only find the area that it believes is the answer to what you asked, but it will also bring you to that part of the page, which is really cool.

Frank: [00:12:53] Yeah. Oh, so many things. I kind of want to just keep going with examples, but we could do an hour of just examples. I want to get into...

James: [00:13:05] I'll put a link to beta.openai.com, and you can just go there. There are things that you can run, but there are also productivity tools, generation, customer chat. My goodness, chat is going to be...

Frank: [00:13:19] Oh, chat. Yeah. We're going to talk to...

James: [00:13:22] Yeah, exactly. We're already talking to bots we hate talking to, so if someone can make bots we don't hate talking to, I'm definitely in. Because here's what happens with bots: I was just messing with Restream, and I always have to ping their support for something really silly, like, oh, what is this thing, I can't find it in your FAQ. And the bot is not super smart, so it's like, what can I help you with? And sure enough, the five options it gives me are not enough, so I pick "other," and then it's like, I'll connect you to a human being. I was like, what is your point? What is your purpose?

Frank: [00:13:56] Yeah, it's a phone tree. It's a standard phone tree: press one, press two, press three. But now it's in chatbox form. In some ways I don't mind that, because at least it's reliable and you know you're just working your way through a phone tree. But when it gets... it's like that uncanny valley. When it's semi-human, you're like, are you a human? Oh man, I can't tell. And I think we're going to be in the uncanny valley probably for the rest of our lives. We're going to constantly be questioning: are you a human?
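The customer-support bots they're griping about are, in GPT-3 terms, just another prompt format. A rough sketch under the same assumptions as the snippet above (beta-era Python client, davinci engine); the support-agent persona and the dialogue are invented for illustration.

```python
# Illustrative only: a chat-style prompt against the same completions endpoint.
# The agent persona and the example dialogue below are invented for this sketch.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

chat_prompt = (
    "The following is a conversation with a helpful support agent for a\n"
    "streaming service. The agent answers in plain English and never dumps\n"
    "the customer back into a phone tree.\n\n"
    "Customer: I filled out the cancellation form a week ago and nothing happened.\n"
    "Agent:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=chat_prompt,
    max_tokens=60,
    temperature=0.5,
    stop=["Customer:"],  # stop before the model starts writing the customer's lines
)
print(response.choices[0].text.strip())
```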
James: [00:14:25] Am I a human? What's going on? I will say this, though: I was on the phone with our good friends Comcast.

Frank: [00:14:39] Yeah, the best people to be on the phone with.

James: [00:14:41] And do you know how long it took me to cancel my Comcast service via their chat service?

Frank: [00:14:48] I went through this recently myself, and worse: I had to have a hardware pickup during a pandemic. Getting that scheduled... I had many opportunities, in fact, to hang out with the people of Comcast. Anyway, enough bashing. What happened on yours?

James: [00:15:07] It took... so here's the thing with Comcast: they have a form that you can fill out online that says, I would like to cancel my service on this date. And then it's like, oh yeah, we'll process it within two days. Sounds great. I fill it out. A week later, no response. I'm about to get billed again, and I'm like, okay, fine. I don't really want to call, but okay, I'll talk to somebody. So I'm watching House Hunters and I'm on the computer with a laptop, because this is definitely a laptop situation, 'cause you can't do it on your phone. It doesn't allow popups on your phone and it's a popup window. So, great. And I'm not going to install some app just to talk to them. So I'm sitting there, and I'm in this huge, long tree. I'm pretty sure it's a real person, but I'm not sure. You know, they're hitting boxes; it's like, tell them this thing, tell them that, like they're in a...

Frank: [00:15:59] Right. Good point. Yeah, good point. There was a tree somewhere.

James: [00:16:01] There is a tree somewhere, and it took 45 minutes. I mean, it was ridiculous. And then they're like, when do you want it canceled? And I was like, a week ago, when I asked. And then, let me see if I can do that. And I'm like, oh my God, I don't even know if it's even canceled. I have no idea. I'm going to log in. All it tells me is, well, if we get an update...

Frank: [00:16:22] You'll know when your credit score takes a dip. You'll get a credit score notification or something like that.

James: [00:16:28] Anyways, I digress.

Frank: [00:16:31] How did we get to phone trees? Right, they're terrible. Okay.

James: [00:16:34] Does this stuff work? We know how phone trees work. We're very well versed in said phone trees.

Frank: [00:16:40] Yes. Yes. Okay. So you may be thinking, dear listener, well, we've actually had pretty decent translation for a while. Google did it, yeah. Microsoft has Bing Translate; I think that's what Twitter uses. Translation is in a pretty decent state. And we've had text generators before. They were kind of worse, a little bit ridiculous, but they've been around. I think everyone who's learning to program probably writes one at some point; it's always kind of fun. The scary part, the most interesting part, the reason everyone's really paying attention to GPT-3, is that those were always specialized tasks. Even if it was a program, you would hand-write a program to do a... let's say...
Hmm, translation's a bad example, but take any of the natural language input things: you might train it to translate from English to French, English to Russian, English to whatever, and back and forth. Very purposeful. You design the machine learning algorithms around that task. Same thing for all the other applications; for text generation, you design a specific thing for that. The thing about GPT-3 is that it never learned any of those tasks specifically. All it did was read the entire internet, the sum of human knowledge. And it's so weird, James, because they made this network so big. It has 175 billion parameters, so you can think about that as 175 billion floating point numbers. It's a lot: 175 gigafloats of numbers as the sum of human knowledge, in this case. And then, when you want it to do a specific task, like translate this query into SQL, you just give it a few examples. Like, here's one, two, three, four, five examples of how to translate this thing, and it figures out the rest. This is very different from how current networks work, where I would have to give it a hundred thousand examples and retrain it for that task. This network is not retraining. It's trained. It's been trained. It's a hard task to train this thing.

James: [00:19:08] Yeah, it is a pre-trained piece of software. So, for example, to talk about what you mean by what you would feed it: in blog posts they give some examples, like "Elon Musk by Dr. Seuss," right? So it's going to generate a story about Elon Musk in the style of Dr. Seuss. So it must have the complete back knowledge of all of Dr. Seuss, the styles of Dr. Seuss, even books written about Dr. Seuss and by Dr. Seuss himself, and then everything about Elon Musk, and then it puts something together. Another one is Jerry Seinfeld and Eddie Murphy talking crap about San Francisco. You can imagine the amount of content there is from those comedians to put that together into something real.

Frank: [00:19:57] God, can you imagine? It read the internet. I forget what they said the training corpus was, but it was as large as we've got; we just fed it everything, you know? Why not, right? And you might be thinking, this is GPT-3; there was a GPT-2 and a GPT-1. So I'm sure throughout all these years they've just been building a larger and larger corpus of text to feed these things. And it's kind of interesting, just to generalize a little bit here: this has been a bit of a trend in neural networks. If you want a neural network that does something interesting or creative, and does it well (and writing is basically a creative exercise), you've got to take it through some baby steps first. Google's voice synthesizer, why am I blanking on its name, WaveNet or something like that: when they were first training it, they just gave it audio. They didn't tell it, go from text to speech or go from speech to text. They were just like, here's some audio, and this thing would just blurt out nonsensical English, blah, blah, blah, just making up words, making up phonemes, trying to get the rhythm of the language and the sound of the language. Once that was trained, then they would narrow the network down to the specific task of going from text to speech and being able to synthesize a good-sounding natural voice. And when they went through that process, well, you've heard the best of Google. It's really close to human-quality speech at this point.
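To make Frank's "give it five examples" point concrete, here is what a few-shot prompt for the natural-language-to-SQL case he mentioned might look like. Same assumptions as the earlier sketches; the table schema and the example pairs are made up for illustration.

```python
# Few-shot prompting: no retraining, just a handful of worked examples in the prompt.
# The schema, example pairs, engine name, and parameters are assumptions for illustration.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

few_shot_prompt = """Translate English into SQL for the table orders(id, customer, total, created_at).

English: show every order over 100 dollars
SQL: SELECT * FROM orders WHERE total > 100;

English: how many orders came in during the last 7 days
SQL: SELECT COUNT(*) FROM orders WHERE created_at >= date('now', '-7 days');

English: who are our ten biggest customers by total spend
SQL:"""

response = openai.Completion.create(
    engine="davinci",
    prompt=few_shot_prompt,
    max_tokens=60,
    temperature=0.0,
    stop=["\n\n"],  # each English/SQL pair is separated by a blank line
)
print(response.choices[0].text.strip())
```

The point of the pattern is the one Frank makes: the model was never trained on this schema or this task; the handful of examples sitting in the prompt is all the "training" it gets.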
James: [00:21:38] Yeah, it can be super duper scary. But you know what's not scary? Our sponsor this week: Syncfusion. Wow, what a transition, I'm hitting on all cylinders. Listen, our good friends over at Syncfusion have everything that you need for your mobile apps, web apps, desktop apps, or anything else you're building. They've got dashboards, they've got controls, they've got all the amazing things you could possibly want and need. I use it in all my applications, including Island Tracker; all sorts of goodies in there. I use charts, graphs, different input controls, up-downs, expanders, all that stuff that I didn't want to write myself or grab from a bunch of different packages. Syncfusion had everything for me in one nice little package. All you've got to do is go to syncfusion.com/mergeconflict and check out all their controls for whatever you're building. .NET, non-.NET, you name it, they have it. Go check it out: syncfusion.com/mergeconflict. Thanks to Syncfusion for sponsoring this week's podcast.

Frank: [00:22:35] Thanks, Syncfusion.

James: [00:22:36] Yeah. So, okay. Listen, it's fed all this data, right? It has everything. Now, one thing about neural networks is that usually they're getting smarter, right? Because you're feeding them more and you're continually training them. How smart can this actually be if it's pre-trained, you know what I mean?

Frank: [00:22:54] Right.

James: [00:22:55] The stuff that it's generating looks good, but is it actually good, or is it just happenstance?

Frank: [00:23:00] So I'm going to answer this question in two ways. Number one is, it's generally slightly worse than the best in class in each field, but it's also not the worst in those fields. It's usually number two or number three. So say you want to do English to Russian: it might miss a word here or there. But it's generalized, right? So I just want to put that aside. Right now you can still win with a very specifically trained ML application. But the beauty of this thing, obviously, is its capability to generalize and all of that. And can you imagine: this is 175 billion parameters. What if you do 300 billion? It's just, how big of a supercomputer can you build at this point? So while it's still a slight underachiever, you have to think it's still smarter than all of those other networks, just from its ability to not have to retrain for those specific tasks. Now, answer number two is, because it's a general thing and it's not trained on a specific task, it's able to lead in a million new categories of tasks that we've never tried before, because it's already trained, it's already executing. So, how well does it do in the financial-statement Excel-sheet-generation category? Well, it's top of the class, because no one else has ever been able to do that before. It's opening up a whole frontier. So I almost don't care how it compares to those other ones, because it's opening up such a big new playing field.

James: [00:24:47] Yeah. I think what's really unique about that is that it's going to possibly unlock all those scenarios, like you're saying. I'm looking at a demo from Sharif, I think; this person works somewhere else.
But I was reading about different posts, and I believe this person built a layout generator that outputs JSX code. So what he did is he said, describe the layout that you want. This is really...

Frank: [00:25:15] Cool. Yeah. Okay. A web layout, you mean? Like a UI, a user interface?

James: [00:25:20] It generates a user interface. So the first one that he put in here was a button that looks like a watermelon, and it gives a button with a pink background and a green circle around it. And then the next one is...

Frank: [00:25:37] Because it has general knowledge. Like, if I were to write the thing that goes from text to UI, I would not have thought to include watermelons, James. It just would not have occurred to me. That's the beauty of general knowledge. I interrupted you.

James: [00:25:51] But you're right, because it knows that a watermelon is pink on the inside and green on the outside, and that's the border of it. So it's like, okay, how do I describe it? Understood.

Frank: [00:26:00] Border versus interior. These are abstract concepts, but there are no graphics for these concepts.

James: [00:26:08] Isn't it crazy? And then the other one he put in is large text that says "Welcome to my newsletter" and a blue button that says "Subscribe." And sure enough, it definitely puts it in there. Now, this is funny: the first time he did it, the text was actually white, so it was the same as the background of the page. And then he said, switch the text to red, and it just made it red.

Frank: [00:26:38] Yeah, it's a new programming language. It's a new CSS, if nothing else. Like translation: what is a compiler? It's translating from a programming language into bytecode. I hope no one's done bytecode yet. I mean, that's such a finicky language, it probably wouldn't work, 'cause you've got to be really precise, but, you know, maybe with 500 billion parameters. Okay. Okay. So I keep singing the praises of this thing, but I want to address a couple of subjects. Number one is, as you all know, all neural networks are biased, and the biggest bias this neural network has is that it was fed English. So, at the current state of the art, this puppy is an English network, which is kind of weird. It's able to do translations because it figures out other languages, oddly enough, but it's definitely biased towards English right now.

James: [00:27:40] Yeah, that makes sense. That makes sense.

Frank: [00:27:42] A little bit sad, but I guess we can deal. The other part: I don't know how open they're being with the beta. I've been waiting for a while, James, and I'm very excited to hopefully get into the beta, but I don't know if it's ever going to happen. They really ask for applications and that kind of stuff. And so I don't know... I don't like the idea of... I don't know how to phrase this right... how app developers are going to use this thing.

James: [00:28:17] You don't like that OpenAI is closed AI, in this sense.

Frank: [00:28:22] Thank you. Thank you. Well said. Yeah, this is why you're here; you can explain my thoughts. So, like, I want to put all of these features into all of my apps, but at the same time, I'm going to have to start paying someone and all that stuff. And so I really start to wonder if this is going to become a commoditized technology, where Microsoft will release one,
Amazon will release one, et cetera, or if OpenAI actually has an advantage in this situation, and now we have a new gatekeeper. Welcome to 2020: a new technology has been invented, and a new gatekeeper for that technology.

James: [00:29:02] No, it is very true. I mean, I could imagine scenarios that help with all sorts of different applications, even your application, like iCircuit: build me a circuit that does blah, blah, blah, right? Or, you know, I could think of just creating UI. There are all sorts of different things that, like we're saying, you can't even calculate right now. Imagine you had a scenario where you have lights, like Philips Hue lights, and you just talk to it: oh, can you make it a little warmer, or can you set it to...

Frank: [00:29:40] Watermelon. Make my lights...

James: [00:29:42] Exactly, a watermelon. Yeah, make the lights watermelon. Or, can you set the tone to, uh, a summer sunrise in May? You know what I mean? It could be anything, because it's...

Frank: [00:29:52] Poetic. Yeah.

James: [00:29:53] It could be poetic. And the idea is, it's general purpose. The one thing you said that stuck out to me, and I've seen you talk about machine learning models and AI, we've talked about it on this podcast more than any other topic, is that we're building models to do a thing. Something goes in and something goes out. You feed it ones and zeros, and it's true or false: it is a hot dog, or it's not a hot dog. Or there are tags, right, and then you have multiple tags, so now you know it's either a hot dog or a sausage or whatever, right, or a veggie dog. So... I don't know why... actually, you know what, I did buy a lot of veggie dogs recently.

Frank: [00:30:36] Oh, okay. Healthy.

James: [00:30:38] The ones at Trader Joe's are fantastic. They're very, very good, like 60 calories. Anyways. But imagine that... I mean, there are these scenarios, but the question is, is it going to be good at very, very specific things down the road? Again, you said this is just the start of it, but how much is it a detriment to OpenAI that this thing is closed AI, in a sense?
Frank: [00:31:08] Well, it's good for them. They're a business, a for-profit, as far as I understand, though I did hear, and don't quote me on this, everyone, it could be completely wrong, I heard it cost like $4 million to train. Maybe they're accumulating the cost of everyone's salary and buildings and all that operational cost, that kind of thing. But it was not easy to build or train or anything like that. You need a computer with that much memory, a distributed computer, obviously, but a computer of sorts. And so I think the other part that bothers me is that this is not something that can run on your computer. This is definitely something you have to pay someone else to run for you. That always breaks my heart, because you know me, James, I love to run everything on device. That's my MO right there. But I do want to go back to what you were saying about the Hey Dinguses, the Hey Dinguses out there, because, oh my God, aren't you tired of how dumb they are? You know what it is? They let us see the potential, because the voice recognition part is pretty decent. But we have a restaurant near me called Row Rose, and neither device can comprehend what the heck I'm saying when I say, is Row Rose open, or things like that. They can't comprehend that. Or if you say something poetic, like you said: I want the lights the color of the dawn of my birthday in 1980. It'd never be able to figure it out. And so... where was I going with that? I forget. I think those devices are going to get better, is all.

James: [00:32:50] Correct. And there could be gradual improvements there, you know? When you think about how we speak as human beings, that's how we want to speak to these devices. We don't want to have to learn how to speak to the machine. We want the...

Frank: [00:33:07] Yeah, sorry.

James: [00:33:08] Yeah. We want the machine to understand us, right? We don't want to dumb ourselves down to the machine. We want the machine to rise, not to our level, but closer to our level, right? We don't want a Matrix situation up in here. But also, why would they harvest humans? I mean, we don't generate that much power. Why wouldn't they just have normal energy sources in the Matrix to power the Matrix? Because we consume so much food for our energy, I can't imagine that the energy output of a human body as a heat source could be more than anything else, Frank.

Frank: [00:33:52] You know, I was having this argument while camping. We were in tents, and the argument was whether humans could create enough hot air to create condensation inside the tent, given the amount of air gaps. So I was literally having this argument of how much energy we can produce, you know, is the Matrix valid? Is it valid? But you brought up the robot revolution, and it does become... you know, I started this all by saying I completely dismissed the robot revolution. I don't think it's going to happen in my lifetime, so I don't care. But given 200 years, it just might.

James: [00:34:29] Yeah. Well, here's the part of this GPT-3 stuff that is fascinating: if they are just going out and training it on the internet, then it would make decisions about knowledge graphs assuming the internet is true, when we know that a lot of the internet is not true. So that is what sort of scares me in this part of saying, hey, we're going to do this. And even if you're like, oh, we're going to use only trusted sources, well, anyone can edit Wikipedia, and at the moment in time in which you index something, it can have false information on it. So I think the downside of "we're just going to train it on the internet and go off and do stuff" is that it depends what they're indexing, and I don't know if they necessarily documented that. Because if you could say, okay, well, we went to the national archives and we indexed Encyclopedia Britannica and books and, you know, nonfiction, true stuff... yeah, right, nonfiction, that makes sense. But even if you were going to index nonfiction, that doesn't mean nonfiction books are true, by the way, because of inaccuracies.

Frank: [00:35:43] Everything has biases.

James: [00:35:45] Everything is biased.
There's going to be that inherent bias in the system, and I don't know what's checking it, necessarily, because if it's pre-trained, the issue becomes, how do you make GPT-3 smarter? Well, I guess you train it on more stuff and have more inputs, and that's GPT-4.

Frank: [00:36:01] Yeah, you could almost joke that GPT-3 has 2020 biases, you know, however we were thinking then. But it's truly a problem, and you described it very well. In some ways, I think that was one of the original purposes of OpenAI: to address this problem and try to think it through. I find it sadly funny that they've accelerated the problem more than the thinking on it. I haven't seen too many research papers where they've thought through how they're going to handle this. Because imagine this scenario: GPT-3 is generating the internet whilst GPT-4 is reading the internet. That's not good. And then GPT-4 can generate the internet while GPT-5 reads the internet. I think my favorite sci-fi author, Neal Stephenson, wrote about this in one of his books. He joked that the internet actually did start to fall down that cycle, where these generators and readers were constantly combating each other, and instead of coming up with one source of truth, it became more of a probabilistic game: what is the common story here? And worse than that, people would just feed misinformation into the system anyway, to test the system. So you would write an article that had a bad fact in it, and then you'd have a neural network basically fact-checking it. He didn't say neural network, but you know what I mean, something fact-checking it. So it's an arms race. It's an escalation. It's something we're all going to have to be ready for.

James: [00:37:33] Yeah, I agree with that. Well said, sir. Well said. I don't know if I can add any other value onto this conversation.

Frank: [00:37:41] No, this is fun. You know me, I need to get a neural network, machine learning episode in now and then, and it's timely. And I really am excited by this. I've said a lot of negative things here, but that's just because I'm trying to qualify everything, I'm equivocating. But as a nerd, as a technologist, the twelve-year-old part of me is like, this is awesome. I can't wait to get my hands on this thing. So that's all.

James: [00:38:08] Nice. Nice. Well, if anyone has GPT-3 access, I'm sure that Frank would love access to that, so feel free to send him an email. That'd be pretty funny. I mean, I would be fascinated to see what we could come up with. I'm just saying.

Frank: [00:38:22] I don't know, just saying, hashtag XAML generator.

James: [00:38:26] Well, you know, I even think about it for the podcast, right? We have transcriptions for a lot of the episodes, and imagine if that information was fed into some sort of system...

Frank: [00:38:39] A question-and-answer system. At least that would be fun. A Frank bot, a James bot. Wow.

James: [00:38:44] Yeah. And they're all tied to James and Frank, so literally you could have that type of response.

Frank: [00:38:49] I'm a little bit afraid of what the Frank bot would tell me. I think we might have to run this experiment sometime. I'm sure the James bot will be wonderful. I'm more afraid of the Frank bot.

James: [00:38:59] Yes, definitely. The James bot will be the best, that is for sure. Yeah.

Frank: [00:39:04] Cool. I believe it. Yep. That's it.
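If someone did feed the show's transcripts in, the simplest version of that question-and-answer bot is just another prompt: paste an excerpt, ask a question. A sketch under the same assumptions as the earlier snippets; the excerpt and question are placeholders, not real transcript data.

```python
# Sketch of a question-answering prompt over a podcast transcript excerpt.
# The excerpt, question, engine name, and parameters are placeholder assumptions.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

transcript_excerpt = (
    "Frank: GPT-3 has 175 billion parameters and was never trained on any one "
    "task. You give it a few examples in the prompt and it figures out the rest."
)

qa_prompt = (
    "Answer the question using only the transcript excerpt.\n\n"
    f"Transcript: {transcript_excerpt}\n\n"
    "Question: How many parameters does GPT-3 have?\n"
    "Answer:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=qa_prompt,
    max_tokens=20,
    temperature=0.0,
    stop=["\n"],
)
print(response.choices[0].text.strip())  # expect something like "175 billion"
```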
James: [00:39:07] All right, buddy. Well, go off and don't believe everything you read on the internet, that is for sure. I just want to make sure people do that. Make sure you wear a mask, don't believe everything you read on the internet, and be safe out there, people. I love all of you. Please be safe, wear a mask, and wash your hands. Until next time, this has been another Merge Conflict. I'm James Montemagno.

Frank: [00:39:28] And I'm Frank Krueger. Thanks for listening.