mergeconflict414 === [00:00:00] James: Welcome back everyone to Merge Conflict, your weekly developer and technology podcast in regards to developer-y things. Uh, Frank, it is dub dub week. How excited are you for dub dub week? Because not the week that we're recording, but next week is dub dub week. [00:00:14] Frank: I'm not ready for dub dub. I think I'm going to get Sherlocked on at least two app ideas that I haven't released. Like, they're great app ideas and I'm just, I'm like, oh God, Apple's going to release something. So I am looking forward to dub dub because I'm hoping for AI stuff and I'm hoping for vision stuff. Both. I want both. Yep. Um, but I'm really worried, since I've been working in AI and vision stuff, about being Sherlocked on a million products. So we'll, we'll see what happens. Uh, I hope it'll be a good one though. So I am excited, but I'm also a little apprehensive. [00:00:49] James: Yeah, I am. I'm blocking my calendar. I'm ready to go. So today, which would be the 10th of June, that's kickoff day. Keynote day. Good to go. Mostly. Actually, it's the afternoon and the State of the Platform, as we know. We're not going to do a prediction show because it's the day of dub dub. So like, normally if we record, it's a week out, we'll do some predictions early. If it's on the Tuesday or Wednesday, it might feel better, but it's a little bit weird because today, the day that this one is coming out, is the first day. So we'll come back next week with the full recap of all things dub dub. Wait, [00:01:18] Frank: I want one prediction. Okay. We'll talk. They'll talk about Apple Music for [00:01:24] James: 20 seconds. Uh, they're going to release Core AI. That's going to be their [00:01:30] Frank: Are you looking for a real prediction? I was being silly. Okay. Core AI. Okay. Didn't I complain on a whole episode, years ago probably now, about how many AI APIs Apple has?
So yes, it would be actually proper of them to release yet another AI, ML, whatever you want to call this stuff. [00:01:52] James: Exactly. Well, while talking about AI, Frank, um, I am, we did an entire episode recently about the Copilot Plus AI PCs. Uh, and we talked about these new, uh, processors, the Snapdragons Elite X Deluxe Edition, uh, and ARM. ARM just being everywhere. But Frank! Yes? What if I told you x86 is back, [00:02:18] Frank: baby! [00:02:19] James: Oh, [00:02:22] Frank: did you buy some Intel stock? Full disclosure. What's going on here? [00:02:27] James: Uh, do I have, do I own, Intel? Well, we'll make sure that we're not, I don't know. [00:02:36] Frank: Oh, you don't even know, you don't know your own stocks? [00:02:36] James: Wow. Fancy pants. I, well, I no longer invest in individual stocks. So sometimes I have holdovers from a long [00:02:44] Frank: while back. I, [00:02:46] James: I, I have individual stocks though, from days of yore. So like I have two Nvidia shares, you know what I mean? Uh, Intel, I do own specifically 12.625 shares of Intel. So, 379. So one eighth the price of their brand new processors, Lunar Lake. Um, and Lunar Lake, my favorite lake was, like, uh, I think it was Kaby Lake, but I call it Crabby Lake. Cause I like crabs. It's my favorite crustacean. [00:03:26] Frank: I, I, I'm sorry. I'm, I'm racking my brain. Everyone, please scream into your podcast and video player right now, um, the Apollo mission, right? So, Apollo went to the moon, lunar, and there's lake, what is the lake called that they landed upon? Tranquility. Okay. Tranquility. Okay. Got there. Okay. Gotcha. So, sorry, that was a bad detour. I am here for lunars. I am here for PCs. I am a little bit confused on, um, uh, who's making, who's making these neural processing units? Like, are they all, like, is there one die that you use on an ARM and a die that you use on an Intel?
Are they the same die or are they very different from each other? I am curious what the Plus Copilot actually means to these NPUs, but I'm here for them. I, I like NPUs. [00:04:26] James: Yeah, so this is sort of the newest generation. This was the, hey, x86 ain't going nowhere. And yes, the, well, the NPUs, if we remember Copilot Plus PCs, they have to perform 40 TOPS, uh, at least, TOPS at a minimum. You got to get them TOPS. And this is, here's the interesting part about Lunar Lake and this new, that we only, we're not just talking about, um, Intel. We're also talking about our good friends AMD as well. Cause they got some hotness. If you scroll down that article there, Frank, and let me see, do I own AMD? Uh, yes, four shares of AMD. So, okay. There you go. Um, over the last [00:05:09] Frank: 20 [00:05:11] James: years, full disclosure. Um, there you go. Um, rolling in AMDs. Uh, so, um, the price. All right. So [00:05:20] Frank: I, I'm just, I'm just reading here because you mentioned TOPS. So I got curious, and TOPS, um, tera, floating point operations, TOPS, I guess. Tera operations per second, TOPS. So the old Meteor Lake, okay, this is, this is, this is not Lunar Lake, this is Meteor Lake, were 10 TOPS. Measly. That is, that is not Plus Copilot or Copilot Plus, I keep forgetting what direction it goes. That, that is not worthy of it. The new ones though, with the Ryzen AI 300, can do 50, five zero. So that is a good, I don't know, call it 25 percent better than the 40 TOPS needed. Uh, that, that's cool. That's cool. I still wonder how you program these things, but that's cool. [00:06:11] James: Yeah. It's really fascinating because both Intel and AMD release chips and they both are pretty similar. The Intel is 48 TOPS and then, yeah, the AMD is also 50-plus TOPS.
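The TOPS comparison in this exchange is easy to check. A minimal sketch — the chip figures are the ones quoted on air, and the percentages are just arithmetic against the 40 TOPS Copilot Plus floor:

```python
# TOPS = tera (10^12) operations per second. Copilot Plus PCs require an
# NPU capable of at least 40 TOPS; the figures below are those quoted on air.
COPILOT_PLUS_MIN_TOPS = 40

chips = {
    "Meteor Lake": 10,    # the older Intel generation
    "Lunar Lake": 48,     # Intel's new chip
    "Ryzen AI 300": 50,   # AMD's new chip
}

for name, tops in chips.items():
    qualifies = tops >= COPILOT_PLUS_MIN_TOPS
    headroom = (tops - COPILOT_PLUS_MIN_TOPS) / COPILOT_PLUS_MIN_TOPS * 100
    print(f"{name}: {tops} TOPS, Copilot Plus: {qualifies}, headroom: {headroom:+.0f}%")
```

Running it shows Meteor Lake at -75% (nowhere close), Lunar Lake at +20%, and the Ryzen AI 300 at +25% over the requirement.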
And you know, what's very fascinating about this though, in general with these new chips, is that one, there's something to be said for the x86 architecture at the end of the day, um, and we were talking a lot about ARM, uh, but obviously there's a need for all different sorts of CPUs. But it does feel like we're now talking about NPUs more than ever, and that's the thing that's getting the, the highlight at the end of the day, which makes some sense, right? Because when we think about it, you know, our CPUs are hanging in there. If you have a GPU, a lot of things have moved to the GPU. So for example, in, like, even the latest version of Camtasia, which, we're not talking Premiere, just Camtasia, there's a, you know, process-on-GPU button, which dramatically speeds up the processing time of video production compared to doing everything on the CPU. And not that your CPU is just not ever doing anything, right? It's the central processing unit. Over the years, the GPU has really ramped up, specifically on taking on odds and ends of different things. And now the NPU is sort of doing the same. So the question ends up becoming, how important is the CPU to it? You know, obviously with ARM, we're going to get a lot of different, you know, uh, nice performance boosts and also battery life boosts, and just, like, the amount of power it needs. But kind of as these chips evolve over time, how much is that power of the actual CPU of importance, uh, not only to users, but also to developers? [00:07:52] Frank: I, I think it's a terribly fascinating conversation. Cause it's, it really just, what you're really talking about is, like, can we predict the future of computing? Because we know how it's evolved in the past. We've had this little, um, game. Uh, you put a bunch of stuff on a chip, eventually it becomes too much stuff, you create another chip. Uh, in the old days it was the FPU, the floating point unit, was a separate chip. Eventually, people get tired of that chip, so you merge it in.
Um, then, uh, graphics were done on the CPU for a while. You had Intel with baked-in 2D graphics, Windows could take advantage of it. But that got too big, so it split out, and then we got the GPU. And then it turned out the GPU is just really good at doing arrays. If you have big arrays, you want to do some math on some arrays, the GPU can do it. And therefore we got neural networks on it, but it's such a waste. Um, I do a lot of GPU programming now. And, and it's funny because I like it both for compute and for graphics. And it's funny, every modern, um, GPU library is split in half like that. They share a few common data structures, a few buffers and things like that. Uh, buffers, I mean, in particular. Um, but aside from that, it's basically two APIs. There's the graphics stuff, rendering 3D scenes, and then there's doing array computations. And it still is weird that they're the same thing. It was just kind of the funny evolution of history that we just kept them all on the same device, even though it's external from the CPU. Um, I think it's just one of those things, like, the CPU is good at procedural multi-threaded stuff. The GPU is good at rendering graphics. It's also really good at doing computation on arrays. But it turns out, if you create dedicated hardware for doing computation on arrays, called an NPU, um, it's even better. So we finally split the GPU off into the NPU. So what is the point of the power and everything? You want power in all of them. Modern, modern software uses all of them. If you're running multiple app instances, they're running on the CPU. If those apps do a large amount of graphics, they're running on the GPU. If they do neural networks, they're running on the GPU slash the NPU. Um, yeah. You'll want it all. And they shouldn't be the same device, because it's good that they can specialize to specific operations. So that's my high level.
[00:10:28] James: For my understanding as well is, like, the NPU is also just, let's say I go to the store, I go to my local Fry's Electronics. They must exist. I look at, uh, I, I go to my local, where do I go? Whatever, I go to Newegg and I go and buy a processor. The processor has the NPU in it though. It's like a CPU NPU combo, it's not a separate chip, correct? Yeah, [00:11:00] Frank: it's on the same die. I think that's usually the same at this point. Um, things get complicated when you're actually asking physically, what is it? But it's probably on the same die. Um, yeah, and, and for a lot of reasons, um, neural processing is more, we always talk about CPU, or I'm sorry, we always talk about compute in flops, TOPS, you know, how, how fast can you do math on these things? But when you're running the very large networks, what you realize very quickly is you run into memory bandwidth limits. And I think that that's why a lot of these NPUs are staying on the die, are staying right next to the CPU, because the CPU already has access to basically the fastest memory channels available on the computer. And if it was split off into another device, then they would have to have a high-bandwidth interlink between the two devices. And that's, that's a pain in the butt. So I think we've just learned it's easier to make the die more complicated and throw the stuff onto the same chip. [00:12:07] James: Which is fascinating because Intel also did something a little bit different with the Lunar Lakes, which is they followed, um, a, uh, trend that a little company out of, uh, Cupertino has been doing. I don't know if you've heard of them. They, they sell fruit, uh, it's called Apple. And what they decided to do is just, oh, put it all on the same die. They just throw it all on. [00:12:31] Frank: VR headset maker. Oh yes, the [00:12:32] James: VR headset maker. That, the one, the VR headset maker that sells fruit, uh, out there.
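Frank's memory-bandwidth point is easy to see with rough numbers. A back-of-envelope sketch — the model size, precision, and token rate below are illustrative assumptions, not figures from the episode:

```python
# Generating one token of a large language model touches every weight once,
# so sustained bandwidth ~ (bytes of weights) x (tokens per second).
params = 7e9            # assume a 7-billion-parameter model
bytes_per_param = 2     # fp16 weights
tokens_per_sec = 20     # a comfortable generation speed

bandwidth = params * bytes_per_param * tokens_per_sec   # bytes per second
print(f"{bandwidth / 1e9:.0f} GB/s just to stream the weights")

# Compare with a typical discrete interconnect: PCIe 4.0 x16 moves roughly
# 32 GB/s. That gap is why keeping the NPU on the same die, next to the
# CPU's fast memory channels, beats bolting it on as a separate device.
pcie4_x16 = 32e9
print(f"that's ~{bandwidth / pcie4_x16:.1f}x what PCIe 4.0 x16 can move")
```

Under these assumptions you need about 280 GB/s, nearly nine times the interlink, which matches the "pain in the butt" Frank describes.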
They do that, is they moved everything onto the die. And, uh, what, uh, Intel has done is they have now moved the RAM onto the same chip. [00:12:49] Frank: Oh boy. Okay. Yeah. Boy, things are going to get more expensive again. Great. [00:12:55] James: So you can get it in 16 gigs or 32 gigs. Yeah. And it's just on there and it's good to go. And I'm assuming you can add additional RAM as well, like in another channel. But, you know, like you just said, you made a great unintentional segue, which was: these things can move faster if they're closer together, removing additional lanes physically, just by the RAM move. [00:13:18] Frank: Well, so, okay, so the, the physical thing's a good point, but you just got me thinking, one of the reasons I don't like x86, and it's not x86's fault, and so I'm really curious about these new PCs. Let me, uh, try to explain this. Um, one of the tricks with, uh, programming a GPU is you have to synchronize memory between the CPU and the GPU. So you do some computations on the CPU. Now you want to do more stuff on the GPU, you have to upload it, in the old parlance. You would bind a buffer and the magic driver would upload it to the GPU. [00:13:53] James: Mm hmm. [00:13:54] Frank: And then if the GPU modified it, you would have to say synchronize, as in, um, I want to access it on the CPU again, so please download it from the GPU. It's a pain programming this. If you're just writing a video game with just a couple of render passes, someone smart writes those few lines of code and you're donezo, easy peasy. But if you're doing, if you're using the GPU for compute, if you're doing computations, you're doing this bounce between the CPU and the GPU a lot more often, especially in, like, neural network stuff, because there's usually a CPU program controlling the execution of the neural network at a high level. Um, so writing that synchronization code becomes terrible. You get into real manual memory management.
Um, to this day, there is a bug in the Apple sample app for their own library. The app crashes. If, if you go run it, it just doesn't work. And I've done support requests with them. They're like, yeah, you got to go change this line of code, because the memory management for it is incorrect. That's my big complaint about synchronizing memory. One of the cool things Apple has done on its M processors, and what it's always done on the ARMs on iOS, was a unified memory model. So the graphics memory and the CPU memory, the GPU memory and the CPU memory, are the same memory. From a programmer's standpoint, it's amazing. You don't have to synchronize the memory anymore. You know, you read from it, write to it from the CPU. You still set flags saying whether you're allowed to or not, so it can do some optimizations, but it really tremendously simplifies the programming model. Whereas most Intel code, even on Mac, if you write Metal code and it runs on Mac Intel, you have to put all those synchronization statements in. If you do WebGPU programming, you have to put the synchronize statements in. It's just a, it's just a thing, because you don't assume unified memory. All that's to say, I wonder if these, uh, Copilot Plus PCs have unified memory. [00:16:05] James: That's a good question. Because when you think about this is we have both Intel and AMD, which have sort of a different mechanism. Like, and you said, yeah, Intel does have integrated graphics. However, you know, you're probably just going to buy a video card at the end of the day. But, you know, AMD, for example, does make GPUs as well, and their new AMD Ryzen AI 300 series, um, also has an RDNA Radeon, uh, 890M or 880M graphics chip built in. And, well, so I think in that instance, I am more fascinated if AMD went down that route, or has been going down that route specifically, of more of the unified memory. I, it's hard for me to, off the top of my head,
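The upload/download dance Frank describes, next to the unified model, can be shown in miniature. This is a toy simulation of the two programming models, not a real GPU API:

```python
# Toy sketch of discrete vs unified GPU memory. "Discrete" memory forces
# explicit upload/download copies that you must keep in sync by hand;
# "unified" memory is one buffer that both CPU and GPU read and write.

class DiscreteGPU:
    """CPU and GPU each hold their own copy; you synchronize manually."""
    def __init__(self, data):
        self.cpu = list(data)
        self.gpu = None

    def upload(self):             # CPU -> GPU copy
        self.gpu = list(self.cpu)

    def compute(self):            # runs against the GPU copy only
        self.gpu = [x * 2 for x in self.gpu]

    def download(self):           # GPU -> CPU copy (easy to forget!)
        self.cpu = list(self.gpu)

class UnifiedGPU:
    """One buffer shared by CPU and GPU; no copies to keep in sync."""
    def __init__(self, data):
        self.buffer = list(data)

    def compute(self):
        self.buffer = [x * 2 for x in self.buffer]

d = DiscreteGPU([1, 2, 3])
d.upload()
d.compute()
print(d.cpu)      # still [1, 2, 3] -- the CPU copy is stale until you sync
d.download()
print(d.cpu)      # [2, 4, 6]

u = UnifiedGPU([1, 2, 3])
u.compute()
print(u.buffer)   # [2, 4, 6], with no sync calls at all
```

The stale-read in the middle is exactly the class of bug Frank hit in the sample app: the computation ran, but nobody copied the result back.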
look in and go through this in general and figure out exactly if it's true, but they're still architected in a way which is to have separate powerful GPUs attached to them, right? I mean, that's still the, the core of the PC in a way, unlike Apple devices, which everything is super integrated. Uh, and obviously on laptops, you know, you're going to get whatever, I mean, unless it's an external, uh, you know, GPU, but you know, you're mostly going to get what's in the box. But if you're building... [00:17:36] Frank: Thunderbolt 3, 4, 5, whatever we're up to these days. Um, that's pretty much all the bandwidth you need for a big honking GPU. So you even can have it. I mean, it's basically extending the PCI bus outside of your computer. That's why the cables can only be, you know, three inches long, but, um, it's a way of just having a PCI bus outside of your computer. So you can still do it with your laptop. You're not limited. But yeah, that's the coolest part about, you know, a PC still, is it's just, ideally you buy a $30 case for it and it's a vanilla slash tan colored and you just throw as many GPUs into it as you can and you say Merry Christmas. [00:18:24] James: Um, okay, so we now have a new smattering of, of processors, right? I think, you know, you live in a happy Mac world over there where you just take what Apple gives you. You know? That's [00:18:33] Frank: not true. I, I fight Linux NVIDIA drivers half my day also, because I, I do my, I do my NVIDIA training. [00:18:43] James: Over there. The, the, the GPUs are working, they're heating up the house. [00:18:48] Frank: They do actually. We, we've had this discussion. I, I think it's, it's good for the environment that I heat my house via GPU. [00:18:55] James: You have to do it. I mean, that's what it's there for.
And then, you could also, you do something where it, like, heats the hot water in your house as well, if you wanted to really be efficient. [00:19:05] Frank: You know, honestly, like, it's a fun joke, but I just run out of things to train. I'm like, it's getting cold in here. I got to figure out something. I got to figure out something to train. [00:19:16] James: You're just retraining just to train, to heat the house. Like, oh, it's time to train something. Um, that's hilarious. Yeah. I, I think that, um, now in the Windows world, at least, like, we do have, we've always had choice. We now even have more choices, because we have this AMD contender, this Intel. The question, I guess, for developers comes down to: are our lives easier now, or is it more difficult now? Like, are there, you know, there's obviously, like, the DirectML, and there's, like, a bunch of low-level, you know, there's high-level, low-level abstractions over these things. But how much does it really matter to the developer at the end of the day? Like, it's great. I'm a, I'm a, I'm a user. I get choice. There's going to be, you know, there's different prices out there. I get this NPU stuff. That sounds great. Excellent. Apparently everything, all, like, the new reviews of the, the ARM, ARM laptops are amazing. As a user, it's great. As a developer, do I care? You know, I was recently going into an application and I just say, you know, check, check, check. You know, it's kind of like Android, right? Android, for example, had, you know, MIPS processors, and then it had ARM 32, ARM 64, v8a, v7a, and you would just check, check, check. And you're like, I get them all. Right. So I guess the question is, as a developer, does it really matter at all? You know, any of this, does any of this stuff, I mean, yeah, the, the faster NPUs, yes, but as far as a,
I need to do more stuff, because you're talking about, like, you know, the processing, and as far as the coding, like on the CUDA and stuff like that, and synchronization, but from a day-to-day app dev perspective? I would say [00:20:49] Frank: one of the most frustrating aspects of these NPUs is how little direct programmer access they give, uh, us to it. Is that English? Did I just use English grammar? Yeah. Yeah. Um, yeah. Anyway, uh, we are forced to program at a very high level if you want to use these things. So in the Windows world, I believe there's only two APIs to access the NPU. One is if you have an ONNX model, O-N-N-X, it's an interchange format Microsoft invented. And if you turn your AI model into an ONNX model and you tell it to execute, and the moon is in the right phase and all the things are happening, it'll execute on the NPU. Uh, likewise on Apple. If you compile your model as a Core ML model, and you tell it to execute, and Tim Cook's in a good mood, then it'll, it'll also use the NPU. So from, I would say from an, um, app developer's perspective, we've already been kind of forced into this limited world of how we can access these. But in this case, it's working out well, because they can change the underlying technology pretty freely. Because they gave us such a terrible interface, you sort of basically compile your model into one of these two formats and then you can execute it, uh, and cross your fingers. Uh, so I would say this is actually really good for developers, because they can make this stuff faster. We can run bigger models, more interesting models, better models, um, just models you can download off the internet and just start executing them. Uh, but I'd say it also highlights the slight annoyance I have that we have no access to this hardware. Like the GPU, you can program every little detail of it. The NPU? Nope. You compile a model and you tell it to run. [00:22:51] James: Yeah, I, the, the one thing to add onto that about
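The "cross your fingers" dispatch Frank describes boils down to handing the runtime a preference list and letting it silently fall back when the NPU path isn't available. A simplified illustration — the provider names below mirror ONNX Runtime's naming style, but this selection function is a sketch, not the real library's logic:

```python
# Sketch of execution-provider fallback: the runtime walks a preference
# list and uses the first backend that actually exists on this machine.

def pick_provider(preferred, available):
    """Return the first preferred execution provider that's available."""
    for provider in preferred:
        if provider in available:
            return provider
    raise RuntimeError("no execution provider available")

# Hypothetical names in ONNX Runtime's style, NPU first, CPU last resort.
preferred = ["NPUExecutionProvider", "GPUExecutionProvider", "CPUExecutionProvider"]

# Machine with a GPU but no NPU: silently falls back one step.
print(pick_provider(preferred, {"GPUExecutionProvider", "CPUExecutionProvider"}))

# Machine with nothing special: everything runs on the CPU.
print(pick_provider(preferred, {"CPUExecutionProvider"}))
```

The upside Frank names is exactly this shape: because the app only states a preference, the vendor can swap in a faster backend underneath without the app changing at all.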
where the state of this is going, I think, is that flexibility, though. You know, for example, there was, uh, Build just a few weeks ago talking about ONNX, the ONNX Runtime for GenAI. Like, they expanded it, right? So you can actually now just run models directly from, you know, PyTorch models, TensorFlow and Keras and TFLite, and, uh, was it scikit-learn, and a bunch of other, I know a bunch of other ones too, on top of it, including the Phi-3 or whatever that's out there. So what's kind of cool about that is that maybe now, with the NPUs, we're seeing more demand. So it actually maybe would open it up, or at least allow more flexibility, right? Um, which is kind of cool. So a good example of that is you could train in Python, then deploy into a C# or C++ application as well. When you're going through these lists, this kind of flexibility, compared to something that maybe would be more constricted in the past, which was, you get Python in, Python out, or you get, you know, Swift in, Swift out, I guess. Yeah. [00:23:53] Frank: Yeah. It's, it's okay. Right. So I, I used to be a huge, "if you can't train on device, I don't care." Um, but then I found fewer and fewer applications, or fewer and fewer needs, to do training on device. So I, I am even myself more about just executing these puppies. But it's cool to know that they're, um, baking in the ONNX backend into these higher-level, uh, libraries that you would use for training. Uh, it's, it's, it's kind of neat. Um, in the old days, like PyTorch, Torch is just what you use. It's one of the most common ones people use for all this stuff right now. You would be forced to use CUDA, NVIDIA's product for doing high-performance computations. But they started baking in different backends. They started supporting AMD's. Uh, they started supporting, uh, Apple's GPU, Metal Performance Shaders. They don't support Apple's NPU.
Uh, but that's cool that it sounds like, just from the description you gave me, I haven't actually looked this up, so, ideally, they're using the ONNX primitives, uh, its own runtime, to, you know, execute, be the backend for one of these high-level libraries. Because what these libraries are, they're a large bookkeeping system. A neural network is all just a very large bookkeeping system for a million, for a billion mathematical operations that you need to perform. How you do that bookkeeping is so irrelevant. It's, it's such a minor part of the actual neural network part. The important part is the actual computation part, the backend, the doing the actual math itself. And so it is kind of funny that the backends almost don't matter, right? Write your own. It's kind of fun. I've done it. [00:25:48] James: Yeah, it's a fascinating way of looking at it. Yeah, I think that as time goes on, and now that we're seeing all these new processors come out and things be pushed to the limits and new APIs, like, I do think that we had the same conversation about the impact of, you know, CPUs and GPUs and NPUs and the SDKs themselves on developers. It's going to be a very different story, not just next year, but next month, next six months, you know, and it will continue to evolve. So I do, I do find it fascinating. What will we think about on episode 700 or whatever, you know, or 900 or whatever it is, you know, and then say, okay, what's the state of what we're doing, and has it been, has it been smoothed over? Cause I do think that there's been a lot of advancement and sort of smoothing over of the developer pipeline. And I'm curious as to how much that will continue going forward. [00:26:38] Frank: Yeah. And it's the wild, wild west out there. Who knows? Because, like, we haven't even fully settled how you do GPU programming.
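Frank's "bookkeeping vs backend" split can be shown in miniature: the same tiny two-layer network runs on whichever backend you inject. A toy sketch, not any real framework's API — a real library swaps in CUDA, Metal, or ONNX kernels the same way this swaps a Python function:

```python
# The "bookkeeping": layer order, shapes, and activations. The "backend":
# the function that actually does the math. They're independent.

def relu(x):
    return [max(0.0, v) for v in x]

def matvec_python(w, x):
    """Naive pure-Python matrix-vector multiply -- a stand-in 'CPU backend'."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def forward(x, w1, w2, matvec):
    """Two-layer network; `matvec` is the pluggable compute backend."""
    return matvec(w2, relu(matvec(w1, x)))

w1 = [[1.0, -1.0], [0.5, 0.5]]
w2 = [[1.0, 2.0]]
print(forward([2.0, 1.0], w1, w2, matvec_python))  # -> [4.0]
```

Swapping `matvec_python` for a GPU or NPU kernel changes nothing about `forward` itself, which is why, as Frank says, the backends "almost don't matter" to the bookkeeping layer.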
Like, I, I watch, um, Unity and Unreal videos on YouTube, like how to use the tools, because I still dream of being a game developer someday. Um, and like, it's really funny, the split you'll see between people: some people are more comfortable writing code, some people like to use the Blueprint, drag-and-drop, um, boxes-and-arrows kind of systems. Uh, with the Vision Pro, you don't do GPU programming directly. They don't trust us developers. Not even us developers, we're not even allowed to write GPU code. We have to use a boxes-and-arrows system for doing, like, GPU shaders and things like that. I don't like it, but I will admit that it's, it's definitely a sign that it's still the wild, wild west out there. The GPU feels so old, and yet we're still finding ways to program it. And we still got this new beast out there, the NPU. And I, I just like that it's going to be ubiquitous, because once it's ubiquitous, then developers can start relying on it and we can start playing games with it. And then hopefully it'll become more advanced and they'll let us program it directly. [00:27:58] James: Someday, a few years from now. No, I think that's a great way of looking at it, which is you have the CPU, super crazy standard, GPU, kind of in this middle tier, and then NPU, shiny new, don't-touch mode. You know what I mean? Yeah. You can look, but, you can look at it, don't touch it though. Don't, don't do it. We'll do it for you. [00:28:17] Frank: And you know, there, there's a lot of truth to what you just said, because in the early GPU days, um, very early days, like, there was a low-level driver. You didn't really want to do that, so you used something like OpenGL. And OpenGL was an incredibly high-level API compared to, compared to a modern GPU API that you would use today. The modern APIs are very low level. You can't even render a triangle without writing some shader code.
Um, they're a lot more complicated, whereas in the old days, we all just wrote our triangles in OpenGL and it just rendered magically. So it is funny, like, even, even the abstraction level changes over time. So it's possible the abstraction level will change on the neural, uh, NPUs once they've settled, once everyone's agreed on what operations they support and all that. [00:29:12] James: Well, they're all new and fancy and I'm excited to buy all of them. I'm going to get three computers. I'm going to get all the NPUs. Got to collect them all, Frank. Yeah, I've just got one. So I want three. [00:29:23] Frank: No joke. I actually am looking a little bit for an inexpensive Windows one, because I do love my current Surface Go, but oh boy, it's slow when running the Visual Studios and things like that. So it's time I upgrade my little Windows laptop. [00:29:40] James: I'm waiting for that email. From the, so, so I think for me, I'm waiting for the email, which is from Qualcomm, to get the Snapdragon dev kit for $900. I guess I could sell my three AMD stock and my four Intel, and I can buy that. There you [00:29:59] Frank: go. There you go. [00:30:01] James: Um, but I do think that that'll be my device to replace this, this big, big computer, uh, even though I love it. It works great. Uh, you know, it's, it's a trooper. And I think that is some of the, you know, to me at least, it's the, the niceties of building and buying the best of the best of the time, uh, most powerful stuff that I don't really need. But it has lasted me a long time. So we'll see how these new chips all come out, and they're going to be coming out in the next few months. So it'd be interesting to see what they come out with. Cause the cool thing about Intel is that they always have those, like, NUC boxes, I think NUC, I want to say, and those were basically almost not, they weren't bare bones, but it was like, hey, we'll give you the
a mini board and the CPU and the stuff, and like, you could throw in a hard drive or, or RAM into it. Yeah. And then now you have a whole computer, so it's like a Mac mini, but even smaller. And I bought one of these for my parents one time. You could put it on a VESA mount on, like, the back of your, you know, monitor. So that might be something that's of interest, because that would be nice. But if you want a laptop, then, you know, you might need to look elsewhere. [00:31:00] Frank: Pop quiz. What was the big consumer item? Big x86 unified memory consumer item out there. What was it? I don't know. Unified memory does all the things I was bragging about that Apple does. Xbox. Oh. So that's how we know, that's how I know, that's how we all know that the Windows kernel can handle a unified memory x86 system, because, uh, the NT kernel has been running Xbox for years. [00:31:33] James: There you go. Boom. It's fine. And just [00:31:36] Frank: fine. [00:31:38] James: Well, let us know what you're going to buy. Is it going to be an AMD? Is it going to be an Intel? Is it going to be a, uh, a Snapdragon? Is it going to be ARM? Is it going to be, what is it going to be? What are you gonna have for the GPUs? Let us know. Rate and review the show. Go to mergeconflict.fm. Of course you can follow us on YouTube. We're almost at, well, are we at? Did we do it? Did we do it? Let's see. Are we at 500? Drumroll. We are at 497. So we are so close. That's three people. Hello everyone. Hope you hit that button, subscribe: youtube.com/mergeconflictfm. That's gonna do it for this week's Merge Conflict. So until next time, I'm powered by a processor. Are you? Okay. And I'm James. I'm an undecided processor. [00:32:27] Frank: Is, is, is the decision made? Because I'm just Frank Krueger. Thanks for watching and listening. Peace.