mergeconflict342

[00:00:00] James: Frank, it's time for another AI-driven Merge Conflict. You ready, buddy?

[00:00:15] Frank: I am so ready. Is everyone else ready? I hear there might be a little bit of AI overload. I saw someone on Twitter yesterday say, can we not talk about AI for a while? But no, James, we are gonna talk about AI. That's right. We're gonna stay on this bubble. I love this bubble.

[00:00:35] James: I was back last weekend. We had a long weekend, and I was doing some video gaming, playing some Zelda, and I went on Twitch and had like six people, cause I was like, I'm gonna go back on Twitch, but I'm just gonna play video games. And one of the folks came on and they're like, hey, you know, I like the podcast, blah, blah, blah. I was like, yeah, it's cool. They're like, when are you gonna talk about, or do a video or a deep dive on, using ChatGPT? And I was like, why? Here's the thing about ChatGPT: I just feel like everybody's talking about ChatGPT. What kind of value can I add to the discussion about ChatGPT? I feel like I could do something cool, something witty, but it's at that peak, and by the time this podcast comes out, maybe the bubble will have already burst. It's over, Frank. It's been two weeks, we're onto something else. DALL-E 5, whatever it is. I don't know. But I'm with you. I think it's everywhere. It's been everywhere. I'm starting to hear a little bit less about it, but I am starting to see more development stuff come about, and controversy around development stuff that we're not gonna talk about here, because I think some of that controversy will go away. Companies are starting to democratize AI, Frank, and actually let people use these APIs as developers. Specifically, not only OpenAI, but perhaps the cloud, and the Azure OpenAI Service, which just expanded to include, oh, ChatGPT.

[00:02:05] Frank: Oh, that's why we're talking about it, cuz Azure got it. I see what's going on here. Microsoft plug. Microsoft plug. Now, this is actually important, and I completely roll my eyes at the word democratize here, because I was joking with you earlier, like, isn't that just a funny word for sell? Don't they just mean sell it? But I was thinking about it, and it is more complicated than that, because I love running neural networks on my own hardware. I love squeezing them down to be able to fit on the phone, running 'em on the phone. But these networks are huge. You just can't run them on your machine. You know, if you've run Stable Diffusion, maybe, if you have like a cool M2 iMac or something, but it still takes like 10 seconds or something to generate an image, versus big servers. And then when it comes to something like ChatGPT, you just can't run that on device. It's just too big. You don't have the memory for it, let alone the compute for it. So democratize in this case actually does apply, because developers, us, you and me, James, we are gonna add all of the language modeling junk to our apps. I don't know how, but we're gonna do it, right?

[00:03:23] James: It's gonna happen. Yeah. And I think there's a whole bunch of opportunity here. And, you know, I believe that as time goes on, many developers will build SDKs on top of it that will help sort of democratize it more.
I would say "integrate," more like. Ideally these breakthroughs and these services will help developers integrate services more. And I'm actually really surprised. On this blog post, I'll put a link in the show notes, which is announcing the general availability of the Azure OpenAI Service, which does a whole bunch of different things, but it has this timeline from 2016 of all of the different AI breakthroughs that Microsoft has done. Again, I work for Microsoft, not on this team. Are they endorsing this podcast at all? I literally saw somebody on Twitter, I saw Satya tweet this thing, and I was like, oh, this is interesting, we should talk about it. Because I was recently listening to the Techmeme Ride Home talking about the history of OpenAI and, additionally, the Microsoft collaboration there, which goes back all the way to 2019. And they were also talking about the Turing NLG language model that Microsoft was developing internally, this model with, you know, 17 billion parameters that they were using with Office and whatnot, is what they were saying on Techmeme. But then this partnership with OpenAI on this AI outlook and enablement brought in many other services. So actually, I didn't even know about this, but in November of, sorry, two years ago, they announced the Azure OpenAI Service, and then a bunch of stuff happened. Like, for example, Copilot launched, and then a bunch of other services, like Outlook, integrated the different AI services. But then last year in May, they integrated models with new responsible AI systems. And then in October, we were talking about it a bunch, and I don't know how we missed this, but DALL-E 2 was integrated into the Azure OpenAI Service, and a bunch of those other apps that Microsoft announced. I missed that this was a service that we could take advantage of, Frank.

[00:05:44] Frank: Asterisk, asterisk. You have to apply for access. Yeah, there you go. Okay, let me add a tiny bit of structure, because I'm going through the news myself and I'm trying to make sense of it. So I think a few big things happened. On the money side, Microsoft invested a bunch of money into OpenAI, and that's good, whatever. I don't understand how they make money other than they have these APIs and I pay them for it. I guess that's how they make money. But they're still taking money from Microsoft. I don't know. So there's money news there; I don't have too many opinions on that. But what I personally saw, the first news I saw, was that OpenAI themselves were gonna open up an API to ChatGPT, because up till now ChatGPT has just been a website that you have to go log into, get an account. You were complaining a lot about the account you had to create to access it. But it was so popular, I mean, it's so popular that we're still talking about it, and people did some unscrupulous things, like people released apps that were just browsers wrapping the ChatGPT site and selling it as ChatGPT. Mm-hmm. Yep. And I'm angry and jealous at those people, because, okay, it's a scam, but man, they must have made a lot of money, cause it was so popular. Okay, I'm not gonna be jealous of scam people. But now, we as real developers... I was excited, because if OpenAI offers an API, then, yeah, we can start building proper apps like that without being scammy, and just have to figure out a good business model.
And then Satya puts out this tweet saying, like, don't forget Azure. Yo, we're gonna come in aggressively on the Azure side, integrate all this stuff into a bunch of services that you already have. Like, I don't know, I run websites through Azure; of course I already have services there, and Microsoft's gonna do it. So those were the big news items: the money, the OpenAI API, and the Microsoft API. It's basically all good news for developers, because, again, these models are just too darn big to fit on the device, and you are gonna have to use a service to access them.

[00:07:53] James: Yeah, and it's really kind of crazy, because, to your point, you need really big computers to run these things. It's true. When I was listening to Techmeme Ride Home, when they were talking about OpenAI, they basically ran into that issue, and they said, well, we're gonna have to get a lot of cloud credits to be able to do the training for these things and actually run these things. And surprisingly, we were talking about Hugging Face before, and OpenAI and Meta, and I don't want to just make this a big Azure thing, but they use Azure for these things. So it's not like this is a new thing. It's being used by huge companies all over the globe, in some of the things that you're actually using, which is very fascinating when you think about what's gonna be built on top of this stuff.

[00:08:38] Frank: Yeah. And there's others out there. I realize I'm quite in a bubble, because in the whole image generation community there's hosting websites popping up all over the place there, too. So we're talking right now mostly about ChatGPT, but there's that whole imaging revolution that keeps on going, that hasn't gone away. People are still doing crazy things with image generators. And you're right, it is fun. Hugging Face has, they call it the Inference API, but they just changed how their pricing model works. But yeah, same difference: you can access their APIs. It's kind of nice. We have a plethora of APIs to access, James. Should we write a library to access all of these, or do you think we are in the wild, wild west and we're all just gonna be paying Azure in the end? I'm guessing it'll probably be this way for at least a few years, as we figure out what companies can actually survive and build up data centers and be able to handle all the data. I mean, Google should have an offering, but I haven't seen much from them.

[00:09:43] James: Yeah, I don't know. I'm really fascinated by how this will all roll out. So we have some good friends, such as Miguel de Icaza, ding ding, and they just rolled out, for example, El Coto Coto, which is an AI assistant for La Terminal, which is their terminal application, which is fascinating. They give it the context of what operating system you're running on and connected to, what distributions you're using, what shell you're using, and it can offer up different opinions and recommendations that it can do.
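A minimal sketch of the kind of call an assistant like that might make behind the scenes, assuming it talks to a hosted, OpenAI-style completions endpoint over REST; the API key handling, model name, and terminal-flavored prompt here are placeholders for illustration, not details from the episode:

```python
# Sketch: ask a hosted GPT-3-style completions endpoint for a shell command
# suggestion, roughly what a terminal assistant could do on the service side.
# Endpoint shape follows OpenAI's public /v1/completions REST API; the key,
# model name, and prompt are illustrative assumptions.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code keys

prompt = (
    "You are a terminal assistant. The user is on macOS running zsh.\n"
    "Question: how do I find the five largest files under the current directory?\n"
    "Answer with a single shell command:"
)

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "text-davinci-003",  # a GPT-3-family model of that era
        "prompt": prompt,
        "max_tokens": 64,
        "temperature": 0.2,  # keep suggestions fairly deterministic
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```

The point is just that the heavy lifting happens on the service side: the app sends a prompt and gets text back, which is why the on-device question that follows matters so much for pricing.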
[00:10:26] Frank: Awesome. I didn't know about that. That's so cool. Tell me more. So is that an add-in? Is that a subscription you have to go for, or is he running it on device?

[00:10:35] James: That one, it says it's powered by GPT-3, so I don't know, necessarily...

[00:10:40] Frank: That's a service. No, you're not running that on device. That's not happening. So that's a service.

[00:10:44] James: So that must be a service that's out there. However, I don't see any add-ons. They also released, for example, El Pintado, which is their AI art generator crafting thing as well. I'm putting all these links in there for you, by the way. Miguel's up to crazy AI shenanigans, which is pretty fascinating. And that one does have subscriptions. So I don't know if La Terminal is just offering that stuff for free, cuz that's what it's looking like in general, so maybe it just hasn't rolled out yet. No, it's in there, so you could definitely get it. That's really cool, but that's what it's looking like, at least for GPT-3. So again, I'm very, very fascinated by the types of applications. And to your point, for the DALL-E stuff and the art stuff, yeah, that hasn't gone away at all. I've only seen more upon more upon more different types of applications come about, especially when everyone was updating their profile photos, like, what, a few weeks ago or a month ago or so, and that was a while ago. The real question is, when is this gonna get to the point where this is reasonably priced, so you don't have to charge people? Cuz right now what's happening is everyone's creating apps and then they're charging people to do stuff. I just saw a really cool project, and I'm gonna just throw out different project types, right, these are all great ideas, and I was thinking about this over the weekend: wouldn't it be cool if there was a YouTube thumbnail generator? Just give it what the video is, you already have the description, and it generates a thumbnail for me. And then someone created it. It's like a project, but how it is, it's like, give me five bucks and I'll give you a bunch of thumbnails. And I'm like, ah, I don't know, you know, because it costs money to do this thing. So when are we gonna get to that point where this thing scales, to, again, democratize integration of these amazing services?

[00:12:36] Frank: It's tough, because the imaging networks can just barely run on hardware on the device. That's why we've been getting those M1 versions of Stable Diffusion. And if you fine-tune those to a particular application, especially like YouTube thumbnails, then it can do an even better job at the same size. It loses some of its generality, but it becomes more efficient at YouTube thumbnails. So I think we might be able to keep imaging on device for a while, and in that case, you don't have a strong argument for being expensive. I think the cool thing about an on-device imager is you can just say flat rate: $10 a year, generate as many images as you want. I'm not gonna charge you per image; I can flat-rate it then. But the big networks, the big, big ones like ChatGPT, you just can't do it. Not yet. I do wonder, in 10 years, will hardware catch up? But I don't think so, because we're looking at orders of magnitude difference, and those are really hard to catch up with. So I think paying expensive a la carte is in our future. We thought subscriptions were bad; what about a la carte? Every operation you want to take gets a micro charge. It's gonna be rough.
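For the on-device, flat-rate imaging idea Frank describes, here's a rough sketch using Hugging Face's diffusers library to run Stable Diffusion locally; the model ID, the Apple Silicon "mps" device, and the thumbnail prompt are illustrative assumptions, not anything from the episode:

```python
# Sketch of the "flat-rate, on-device imager": run Stable Diffusion locally
# instead of paying a service per image. Assumes the diffusers package is
# installed and the weights for the model ID below can be downloaded
# (several GB on first run); model ID and prompt are illustrative.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("mps")  # Apple Silicon GPU; "cuda" on NVIDIA, "cpu" as a slow fallback

prompt = "a bold, colorful YouTube thumbnail about merge conflicts, flat vector style"
image = pipe(prompt, num_inference_steps=30).images[0]  # fewer steps trades quality for speed
image.save("thumbnail.png")
```

Once the weights are downloaded, each additional image only costs local compute, which is what makes a flat-rate price plausible for imaging in a way it isn't, yet, for ChatGPT-sized models.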
[00:14:03] James: That, and that's really kind of what I'm seeing in the world of AI integrations today: micro charges. You know, hey, you wanna do this thing? Gimme two bucks, gimme that thing here. And I'm curious if, again, at scale and as time goes on, that just becomes a smaller amount of money. Cuz when you think about it, you think about anything, and I don't wanna just bag on AI here, but when you think, oh, I'm calling RESTful services, I'm doing database access, bandwidth is being processed... that stuff used to be more expensive, and now it is not very expensive.

[00:14:38] Frank: Yeah, and as cool as it is that OpenAI has an API and Microsoft has an API (and I think OpenAI uses Azure anyway), as much as I love those two companies, I would like to see Google come in here and make sure that there is some competition to keep prices in order. Because right now only one company has ChatGPT, and there's really nothing as good as it out there. I think in a year we're gonna have a million ChatGPTs, because people will figure out the algorithm, they'll raise the capital it'll take to train it, they'll get 'em trained, and then they can start offering their own services. So I think we will see an explosion of companies, but right now developers, and therefore users, are kind of at the mercy of cloud services.

[00:15:30] James: Yeah. Do you feel like... when is the time for us to actually, I mean, we have to get access to it, but let's say things are available. Like, you know, I'm not an AI dev. When I think about democratizing things, when I look at what's available for these, there might be a REST service, there might be a Python library. Are we at the point where you're feeling like, hey, anyone should be able to use this? Or is it still like, no, you really gotta have deep underlying AI/ML, you know, model-building knowledge?

[00:16:10] Frank: Oh, okay. Well, there's a lot of ways to jump into this world. You can jump in low level, medium level, high level. I think what we're seeing is the explosion at the high level. We're getting simple enough UIs that everyone can take advantage of 'em. It's funny, I wasn't expecting your question to go that direction, because I was applying for the Azure <LAUGH> service, and it's by permission right now; everyone has to apply. And they were very particular about your exact app, what you were gonna do with it, whether users were authenticated, what safeties you're gonna put in for bad inputs and bad outputs (you should guard against those too), and, oh, I'm forgetting, a million other things that honestly I've never thought of. And I get where they're coming from; they're trying to be responsible AI people. But I guess I'm a little bit of a chaos monkey myself. These networks are what they are. I feel like for a real creative explosion, we should just kind of open 'em up and let people do everything. But at the same time, I do recognize that society might collapse if we do that. So it's a real moral dilemma that we're being presented with.
[00:17:32] James: So I want to ask you something here. You kind of mentioned it earlier. When we talk about running these models, there's two sides to the coin of AI and machine learning: the training, and then the requests that go into the model. Now, if you look at DALL-E and the image generation stuff that's happening, Stable Diffusion, you're saying that stuff can be run on device, but all the training can basically never be done there; that's always gonna have to have some large cloud infrastructure, correct? And when does the usage pattern flip, as far as what's possible to do on device versus not on device?

[00:18:16] Frank: It's the size of the model, and how much memory it takes up, that's pretty much the current limiting factor. People can be patient; you can put a progress bar up and be like, come back in 10 minutes. It's really the memory that is the limitation. I think, like I said, we're in an okay place right now where we can run these networks. I'm just always afraid that the goal of AI is, the bigger you make the network, the better it gets, so it's also gonna be tracking our standards of quality. I have plenty of networks that can run on device, generate images, even generate 3D models, but they're not nearly as sophisticated or detailed as DALL-E and things like that, because they're just not big enough, in general. Training, depending on how you do it, but roughly speaking, takes three times the memory and maybe even double the compute. Roughly speaking, three times the memory, double the compute. That doesn't make sense; shouldn't it be twice the memory and double the compute? Whatever. Rough numbers, everyone. And so you can definitely do training on the phone. In fact, I was making a scanning app, so you take lidar images from your phone and it merges them together into a 3D model, and that involved training a neural network on the phone. So it really depends. There's still this machine learning thing out there, where machine learning is a general algorithm for: if I have a lot of data and some solutions to the problem, please interpolate and extrapolate information from that. And then there's this whole general AI kind of revolution going on right now, where we have these huge models that are able to do sophisticated tasks in many domains using natural language, and those get big. So there are small models that can definitely be used for a lot of useful things, but obviously the big trajectory is toward these giant models that try to do everything all at once.

[00:20:30] James: Got it. Yeah, that makes sense. And as we go on, the models are only gonna get bigger and bigger and bigger at this point. They're not gonna go backwards in time.

[00:20:40] Frank: No, and devices will get faster and faster. They keep adding neural cores to phones, and they keep adding GPUs to phones, and they keep improving their software and the capabilities of those neural cores, so those get faster and faster too. But like all things mobile, there's just gonna be orders of magnitude difference. There's an order of magnitude or two between your phone and your desktop computer, and then there's another one or two between you and the server out there running it, because they're running on dedicated hardware, neural engines made by Google and Microsoft and all that stuff.
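As a back-of-envelope illustration of the "it's really the memory" point, here is a tiny sketch built on rough assumptions: 2 bytes per parameter (fp16) just to load a model, and Frank's rough "about three times the memory" multiplier for training. Real frameworks, optimizers, and quantization schemes vary quite a bit, and the "on-device image model" and GPT-3-class parameter counts are approximate.

```python
# Back-of-envelope memory math. Assumptions: 2 bytes/parameter (fp16) to run,
# and a rough 3x multiplier for training (per the conversation above).
def inference_gb(params_billions, bytes_per_param=2):
    return params_billions * 1e9 * bytes_per_param / 1e9

def training_gb(params_billions, multiplier=3):
    return inference_gb(params_billions) * multiplier

models = [
    ("on-device image model", 1),    # roughly Stable Diffusion sized
    ("Turing NLG", 17),              # the 17B-parameter model mentioned earlier
    ("GPT-3 class", 175),
]
for name, params in models:
    print(f"{name:>22}: ~{inference_gb(params):4.0f} GB to run, ~{training_gb(params):4.0f} GB to train")
```

Even at these rough numbers, a GPT-3-class model needs hundreds of gigabytes just to load, which is why it stays in the data center while a billion-parameter image model can squeeze onto a phone or a laptop.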
[00:21:17] James: Let me ask you another question here, Frank. Let's get down to it. Are you scared at all?

[00:21:25] Frank: I'm a bit of a chaos monkey. I think we can handle it. I think the world is full of bad people, sure, but I think the majority of us are good people, and scams will arrive and we'll have to learn about new scams. I think the news cycle will be even more complicated. I think that we are gonna need networks to proof-check things. Like, I want the Chrome extension that reads whatever I'm reading and says: I believe that, I believe that, that seems suss, that seems truthful, that seems suss. I think we are all gonna have to get a little bit smarter as a community. But is that fear? Am I scared? No. Am I apprehensive? Yes. I'm gonna change the word on you. The genie is out of the bottle. It is. We can try to control it, but eventually the genie's gonna win.

[00:22:21] James: Yeah. Heather and I were talking a lot about this, surprisingly. I don't only talk about AI with you, Frank; Heather brought it up, cause she was talking about ChatGPT and we were talking about this stuff. This was last week, after we recorded last week's podcast. And she was talking about those same things, like the complexity, because I was listening to the Techmeme Ride Home, and there's the VALL-E stuff. Did we talk about that? The voice?

[00:22:48] Frank: Oh, I don't know. Isn't that that Whisper network? That thing's making waves too. I don't know what you were talking about.

[00:22:56] James: VALL-E. Like, basically, you can give it any text and it will do a human voice. I think we did talk about it. Like, they could feed it this podcast and it would generate true-to-life versions of you and me; if we feed it text, it'll read it back, right? And on the Techmeme Ride Home, he did some experiments with it, had the AI read back articles and read ad spots and things like that. Shout out to our sponsor, Syncfusion, syncfusion.com forward slash merge conflict, for all your good controls, things that you need. Not sponsoring this podcast episode specifically, but we love them. Why not? And so that was kind of cool. And then I was talking about that with Heather, and then also, I think it was CNET, I wanna say CNET has actually been doing AI-generated articles. And they've been edited, of course, by editors, but they're under a whole different, like, team person name or whatever, and you click on it and it gives you details. They've been generating articles for a long time. And then we were talking about the image stuff, and you know, these things all multiply. So when I say, are we worried, are we scared, or whatever, like, what's the next level here? I think we've always seen inklings of, oh, okay, this little thing over here is cute, this little thing over here is cute, this little thing over here is cute. But then combining them all together? Yeah, there's a lot of advancements in a lot of different areas, all happening really quick, all at the same time.
[00:24:24] Frank: Multimedia. But we've gotten through a lot of the mediums. We're doing audio, voice. We've kind of figured out music; don't tell the musicians, they hate hearing that, but music's pretty simple. Video is still on the horizon. If you think generating images is expensive, imagine generating video. And even that one, you could be patient, you could wait for the frames to render, but it's still gonna take time. What other mediums are left? Text? Do you think text will really survive? I think the big revolution is gonna be video, when it comes out. That'll be the big one. These all feel like a little bit of precursors to video. I'm a little jealous of that voice synthesizer. The app I've always wanted to write was Twitter, but it reads everyone's tweets in their own voice. I just thought it would be fun, be cool, an automatic podcast thing. Maybe I should look into it. Why not? I'd have to make it for Mastodon now too; that's why it's too complicated. I think we're just gonna keep on going with the mediums until they're all combined. Ha. So, have you played with ChatGPT? I feel like I ask you this question every week now.

[00:25:40] James: I did. I did play around with it. I think we did it on one of our Patreon episodes a week or two ago, or maybe we did it live on here; every time's a blur, basically, for this podcast. We apologize to all of our listeners. But yeah, I did play around with it. I was talking about coffee roasting with it, going back and forth, having a conversation about where I found it from, and why it recommended this versus that. And you know what's really funny about that conversation? I had a conversation with John on my team today about coffee roasting, and his responses about how he roasted coffee were nearly identical to what ChatGPT was saying about how to roast coffee. I was like, oh, that's really fascinating, that the responses were so similar to what John was talking about today, and why he roasts a certain way and all these other things. I was like, wow, that's crazy.

[00:26:29] Frank: Okay, so good, you've gotten to play around. That's funny, cuz that must just mean that's the internet's agreed-upon way to roast coffee, I guess. I still don't fully understand what the training data set was. I'm sure it's published somewhere. I was asking the question because you often run into this edge case with it where it says: hi, I was trained on a fixed set of data, I cannot make queries to the internet to look up current information. And you'll run into that edge case anytime you ask it for something vaguely current, like whatever happened in the news yesterday. But that's kind of a big tell on their side, as in: can we hook it up to the internet? Can the network Google it, read the article, and then summarize it for you and give you an answer back? Is that weird? Is that good? Am I afraid of that? You know, actually, I'm less afraid of that than of all the fake news articles that can be generated, I suppose.
[00:27:34] James: Yeah. You know, the conundrum here, when I look at... not even a conundrum, but I think the evolution that we'll see. I hearken back to the early days of South Park, for example, the show South Park on Comedy Central, which I used to watch a lot of. I don't watch South Park anymore, I haven't in a long time, but as a kid growing up, South Park was very popular, and it was all sort of this cutout animation. It would take them forever to create. And then what happened is, as they advanced into computer-generated graphics, and having all these things stored, and streamlining the process, they were able to pump out episodes nearly in a week or two weeks or whatever. So they could talk about the news, and basically do an episode on the news, in near real time, right? The advantage of the news, the nightly news or whatever you're watching, is that it's in real time; it can respond almost as fast as the news is happening, but almost, it's on a schedule, it's breaking news. You know, I think for something like a show, you're talking about video creation and all this other stuff: can the thing get so good that literally the inputs are the news and all the other episodes of South Park, and the output is an episode? You know what I mean? Like, almost in real time: the news is happening, some event loads it in, and it has all the voices and has all the things, like, boom.

[00:29:02] Frank: Love it. Love it. Wild. So instead of reading all the tweets in people's voices, you just want the South Park characters to act it out or something like that. I don't know. Act out Twitter. Act out Twitter.

[00:29:12] James: But it would have a whole story arc or something like that. You know, I don't know. I don't know what I'm talking about, but that's why I think it's really crazy, cuz it's like, how does this all come about as it happens?

[00:29:24] Frank: I'm excited. I think there are a lot of creative opportunities like that. Maybe not so much the IP-invasive ones that you're gonna get sued over, but, for example, I used old GPT, and that's what I'm calling it now cuz ChatGPT is so much better, I used old GPT to generate that reaction video script for the movie, the, um, whatever. I made a reaction script out of it. I forget. I think it was maybe something like a hundred reactions, and it cost me $10, but I was able to do it. I was able to use an API, work on this silly project of mine, this silly idea that I had, code things, access this powerful network, and generate stuff. I'm excited to be able to do that with an even more powerful network. I think ChatGPT is even better than GPT, and it's gonna be really exciting to have that power. I'm curious how much money I'm gonna spend as I play around with just random scripts and everything. But, I guess, getting all the way back around to it, they have democratized it, because now we can work on all these silly art projects with it. Assuming you can get past the form, that is, which seems very hard to fill out, everyone. You better have a good idea and pitch it well in your form. That's my only recommendation.

[00:30:50] James: Once Frank comes up with the idea and gets approved, we'll report back on Frank's usage of the OpenAI service.

[00:30:57] Frank: Well, also, going back to what you were saying, big network versus small network: I've been wanting to upgrade the AI/ML stuff in Continuous' code completion for a long time, and I had trained up some new networks that were way better than the code completion I have in there now. Now that these bigger networks will have API access, I'm like, do I just scrap my work, James, and put in a service, you know, have Copilot, Codex, ChatGPT, whatever, all those things in the app instead of my own? It's tricky. I'm gonna have to make those kinds of decisions in the future now, but it's good that I'm able to make those decisions.
[00:31:41] James: Gotcha. Yeah, makes sense. Oh my goodness. So your excitement is high, is what I'm hearing.

[00:31:47] Frank: Very, very. I'm annoyed, though. I don't like their UI to ChatGPT, mostly. I don't think they allow it, but I would really like to just release an app that's just a better UI to the stupid thing.

[00:32:01] James: Nice. Yeah. I still don't know if I have a use case for it, but what I am liking out of Miguel, and the team of Miguel, is these small integrations; it kind of makes sense. I'm interested to see what other developers are integrating into apps, and what potentially I could integrate into apps in some way, and how that would work.

[00:32:25] Frank: Yeah. And I'm sorry, we forgot to mention one other big thing. It just kind of occurred to me, and I'm excited by it, so I just wanna toss it out there. There is absolutely nothing stopping Apple from putting one of these large networks on the device and allowing all apps to access it. They already have a bunch of networks on device that you can use APIs to access. Android, it's the same, I assume. I'm making that up, but I assume Android's the same. Yeah, I think so. So, you know, maybe, dub dub 2023, maybe they'll announce something competitive with GPTs or whatever on an Apple device.

[00:33:02] James: So is it, every time we get a new device or operating system, that the storage used by the operating system is just always bigger every single time? Is that because there are new machine learning models that are just taking up tons of room on our device?

[00:33:17] Frank: I am curious. So you can get apps that'll crack open one of the OS releases, and the models are pretty obvious; they're all, like, mlmodel or whatever format, they're pretty obvious, they're there. But I think Apple's moving to where it downloads the model on demand also. And, sorry, again, ChatGPT is probably way too big to fit on device, but maybe WWDC 2030.

[00:33:49] James: It'll be slow, but sure. Yeah. And then we'll have, like, terabytes, you know, 5 billion terabytes of storage by default, so that'd be nice. And flying cars. Bam. I like it. Anything else you wanna talk about on this thing? I don't even know what we talked about.

[00:34:03] Frank: Democratizing. Hey, hi, man, that's what we talked about. Democratizing. I think we talked about it. Yeah. We totally, yeah, we did. AI.

[00:34:10] James: AI. You've been democratized.

[00:34:13] Frank: With money? Yes. That's how you solve all problems.

[00:34:16] James: Yeah. Okay. You got some? Someone's gotta run the thing. Alright, well, I guess that's gonna do it. Let us know what you think. If anyone got into the OpenAI service program, let us know. Reach out to us at mergeconflict.fm; we'd love to know. Or if you have a great idea, you don't have to tell us, but if you tell us, we won't tell anybody else, we promise. Write into the show. It'd be pretty fun, and we'd love to hear what y'all are working on, whether you're doing it with ChatGPT or DALL-E or other things like that. Write in and let us know what you built. Or if you've already shipped stuff, let us know too; we'll give you a shout on the podcast. That'd be fun. Listeners building cool AI stuff, let us know.
But that's gonna do it for this week's Merge Conflict, I guess. I guess that's it. So, until then, I'm James Montemagno.

[00:34:56] Frank: And I'm Frank Krueger. Thanks for listening. Peace.