mergeconflict215 James: [00:00:00] Frank, why did you stream for six hours today? Are you a madman? What does it take to go on an epic, uh, sort of all-day adventure on twitch.tv/FrankKrueger? What's going on? Frank: [00:00:20] Ooh, it was a binge for sure. Um, I don't know, man. There's a virus, time doesn't exist. Like, why not do a six-hour Twitter... Twitch? Twitch? I don't know, what are these things called, James? Why not? Yeah, that's my answer. James: [00:00:35] There you go. Perfect. Well, I missed out on it and I was very sad, because I was like, oh, I messed up earlier today. And I was like, hey, you know, are you cool recording a little bit later today? Little did I know you didn't respond because you were so into the learning and the coding with the people. And I appreciate that. My longest stream was like eight hours, and that's an adventure, because you've got to eat at some point in those eight hours, you've got to use the bathroom sometime in those eight hours. So I hope that you had it all planned out efficiently. Frank: [00:01:08] I had none of it planned out. None of it at all. Um, the truth is, yeah, I'd been at it a little already. I'd already wasted my Saturday on this programming project, and it was just eating away at my head. You know how it gets, right? Like, you just want to work on something. And oddly enough, I had no obligations this weekend, and I never know what to Twitch-stream either. We can always talk about that stuff, but it was really just, um, I was already obsessing over something, so I just continued my obsession onto a Twitch stream. Which is kind of embarrassing, honestly, because it shows how obsessive programming is. Like, I could just sit there for 20 hours if I really want to solve a problem. James: [00:01:52] Oh yeah. When you get into it, you can just go to town, you know what I mean? Non-stop. And I was actually intrigued by what you did today. I think it will lead to our topic of the day, because you were, of course... what else do we talk about on this podcast besides machine learning and AI? But you're doing some machine learning: you're implementing something called StyleGAN. I did go to your GAN talk in Seattle before, um, and in this you said you're using TensorFlow and Python, but I know for the GAN stuff, I don't know if that's what you're using at the end of the day? But yeah, I did go to your GAN talk. If people didn't see Frank do the GAN talk, maybe we'll do it for the, yeah, Mobile .NET user group, even though it has nothing to do with mobile. But it kind of does, no? Frank: [00:02:33] I think it all has to do something with mobile. And I think that's kind of where my obsession was here. Yeah, I did that talk forever ago, and this obsession began with a thought, and it was: I've been learning so much about neural networks and everything, and I'm a little upset that I don't have any apps out there that kind of show off what I've learned so far. I've done a little bit, like Continuous has, uh, some predictive keyboard stuff while you're coding, and I'm pretty proud of that. I think we must have talked about it on the show. Um, but it's still not like a big demonstration, you know? I want to do even kind of crazier things, because the state of the art keeps moving on and on.
And so I guess that's what I'm getting down to: I still want to make mobile apps, you know? I just want to put some neural networks on a phone, and it's still not easy, James. James: [00:03:28] Well, you know, the funny part about it, I was talking to you just before we recorded: I got this DJI Osmo Mobile 3, a smartphone gimbal for all intents and purposes. And one of the cool modes in it is that you can just press a physical button and it will attempt to use AI to detect a person or an object. It'll, you know, based on the outline, draw a box around it, which is a very "rudimentary" form, and I say rudimentary in quotes, because it's super complex, but it's a basic building block of sort of object detection, which you even talked about in the GAN talk, and different detection, motions, things like that. Is that the type of thing that you're looking to put into a mobile application, or is it something completely different? Because, you know, when I look at Continuous, I almost think of it like iCircuit that you wrote: you pipe in a bunch of things and it pipes out a bunch of other things, whereas sort of the live video image processing is on a different level. Like, what are you going for? Frank: [00:04:30] Yeah, that's actually where I want to be: the live video image processing world. Uh, the fact is, I'm just a tiny bit ahead of the curve, and the hardware is not quite there. Um, the neural networks aren't quite doing what I want them to do, and all that kind of stuff. Mmm, I don't know, man. Like, rudimentary object detection, that's kind of fun, because that stuff is baked into the OS at this point. It's funny how quickly things become commoditized. It's, um, built into Xcode; there's a tool, totally blanking on its name, but something like Create ML, and you go to File, New, and there's a bunch of different kinds of neural networks that you can train. Um, what was the Microsoft one that we talked about? And I did hotdog-or-not on it. James: [00:05:19] Uh, Cognitive... Frank: [00:05:20] Services. Azure Cognitive Services. James: [00:05:25] Custom Vision, part of Cognitive Services. Yes, that's what it is. Frank: [00:05:29] That's it, you got it. Thanks. Um, yeah, so that stuff has been progressing, and it's pretty much commoditized. So writing an app that uses that, that's kind of interesting to me, but at the same time, I want to do things that aren't commoditized, and that means running big nasty networks that really weren't designed to run on mobile devices. And I just started making a spreadsheet this weekend and I was like, look. Look, Frank, this is the pep talk I gave myself: you're a smart guy sometimes, when you're not being lazy. Um, maybe you should try to get one of these advanced networks working on the phone. How hard could it be? And here I am, James, still kind of in the middle of the "how hard can it be." James: [00:06:13] You know what neural network I want to build? Are you ready for this? Frank: [00:06:18] Go ahead, I love these kinds of... it's like when people gave app pitches back in the day. I love neural network pitches. James: [00:06:25] So I want to create a predictive model that feeds in your exercise data, your cycling data, and I want it to predict when to not only pause recording, but also when to resume. Because often, if it's just based on motion or distance, that's not good enough.
I feel as though there's... I'm talking about the Apple Watch here, because on the Apple Watch you have to manually pause. Like, I'm pulling up to, uh, get a coffee at a window, but I'm there for five minutes, and it doesn't stop recording, it just keeps going. Right, activity? Why didn't you stop? I didn't even move. Like, at least pause, right? Strava does that. But imagine you have all these sensors and you pipe in all of this data: the motion of how you normally ride a bike, the motions you make, you know, how your still motions are. Um, and you could take a lot of different things, your heart rate, right? There's so much data being piped in here, and then actively predict when, uh, you're going to stop, and when you're going to start. I don't even know if that's humanly possible, but it sounds like something I want. Frank: [00:07:38] Well, you know, that's a really funny one, because it comes along with that commoditization we were talking about. That's basically at the OS level, in the same way we can't write apps that take over basic functionality of the phone. Um, that activity stuff I think is maybe off-limits to apps, but don't quote me on that, I really don't quite remember. But that said, um, there was a definite theme at this year's WWDC that they've gotten activity recognition a little bit better in watchOS. So you can record crazy arm and body movements with your watch and, I believe using the Create ML app, turn those into triggers. So whenever it detects that motion, it can send your app a little bit of a trigger. So that kind of stuff is actually being built in now. James: [00:08:33] That's really cool, actually. Yeah, I would like to see some of that stuff come to light, because I've just been doing a lot more with my Apple Watch recently, because, you know, it's an Apple Watch, and, um, you know, I just sort of felt like it could do better. So maybe in watchOS... what, 7? I think it's on. Frank: [00:08:53] Yeah, something like that. It's kind of funny, because, um, that kind of activity-movement-recognition thing was going to be one of my first neural network apps, and I wrote it really shortly after the watch first came out. And for some reason I convinced myself that it wasn't a good enough app, even though I had it recognizing the most hilarious things. Like, I was teaching it kung fu moves, and it could recognize those kinds of things. I never released the app, and I kind of kick myself all the time for it, because I assumed that all that stuff was around the corner. But it turned out it took another four years before Apple baked it into the OS, and now that stuff is trivial to do. It's just funny. It stinks, also, being a little bit ahead. James: [00:09:48] Well, what all were you doing? And I see that you have written down here Core ML Tools. Like, we're talking about machine learning... I remember using Core ML Tools a long time ago, just to bring in a Core ML model, but I assume... Frank: [00:10:04] Oh yeah, yeah. You were doing, um, the happy-or-sad tweet thing. James: [00:10:09] Yeah, yeah. Sentiment analysis, I believe that's called. Frank: [00:10:13] Oh, right, right. Happy-or-sad is a little easier. Um, so I actually put this topic down because I wanted to give shout-outs to Apple for releasing a really good library. At version four. It took them to version four to kind of get it right.
But, um, I wanted to do an entire show, James, on me singing Apple's praises for how good this library is and how impressed I am with it. And we can run down what I was trying to do, uh, the problems I've had in the past, why I'm kind of impressed with this version four. I don't know, whatever you want to talk about, dude. James: [00:10:54] We can do that. Yeah. I mean, it's been a while since I used Core ML, and in general I thought that it was a little tedious as far as bringing in the model, creating bindings around it, creating the inputs and the outputs, and knowing... it just felt like it was very error-prone. That's what I always felt like with Core ML: I felt like I had to know too much to be successful. Frank: [00:11:18] Yes. And that's still the case for... okay, sorry, sorry. Yep, moving on now. Um, it's not Apple's fault; neural networks are terrible. But before I get ahead of myself, maybe I should give a brief overview in case no one knows what the heck I'm talking about. So Core ML Tools is a Python script library thing that you use to convert neural networks trained in some library to become a Core ML neural network that can be run on phones and Macintosh computers and watches and things like that. Um, it's kind of an important process because it's definitely the preferred way to run your own networks on the devices. Plus, if you do it this way, um, you can take advantage of the hardware of the devices. James: [00:12:12] Uh, yes, you want to make them fast, I believe, not slow. Frank: [00:12:18] Yeah, well, I mean, these neural networks are ridiculous in size. Like, the math that they're doing is just a ton of math. So you definitely want to run them on the hardware if you can. Enter Core ML. James: [00:12:33] Now, okay, so here, let me set the stage here for the Core ML Tools. There are also other tools, and it seems like everyone's starting to play really nice with each other, right? Because you have ONNX, for example, which is from Windows, I believe. Uh, and then Core ML Tools enables you to take, let's say, something such as ONNX or Caffe, PyTorch, TensorFlow, Keras, libSVM, uh, and turn them into Core ML. Is that a correct analysis? Did I get that right? Frank: [00:13:09] Yeah, yeah, nailed it. Um, and that goes along with what I was saying about neural networks being terrible. Uh, all of these neural network engines, that's what you were mentioning there: TensorFlow, um, Caffe, Keras, Torch, whatever. All of those are different neural network engines, and they of course have different file formats for all of their data. You know, they were not meant to interoperate with each other. You are meant to write some code in one and execute your code using that one. You know, you're not supposed to move this thing around. So this task of coming up with standard formats is kind of crazy, um, because all these libraries are technically way different and work very differently. So it's a little bit of a miracle that we've actually narrowed it down to a couple of standard formats. And I'd say the biggest one is what you mentioned: ONNX. O-N-N-X. Um, that's just kind of become a standard if nothing else; you know, people just treat it as, here's a library. But the truth is, a lot of people don't bother, um, releasing neural networks even in these formats. James: [00:14:23] Hmm.
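(A quick aside for anyone following along at home: the conversion Frank is describing is, at its simplest, a few lines of Python. Here is a minimal sketch using coremltools 4, assuming a trained Keras model; the specific model and file name are illustrative, not from the episode.)

```python
import coremltools as ct
import tensorflow as tf

# Any trained model works; a stock Keras application stands in here.
# Downloading the ImageNet weights requires a network connection.
keras_model = tf.keras.applications.MobileNetV2(weights="imagenet")

# coremltools 4's unified converter: one entry point for TensorFlow
# (and, via tracing, PyTorch) models.
mlmodel = ct.convert(keras_model)

# The saved .mlmodel is what you drop into an Xcode or Xamarin project.
mlmodel.save("MobileNetV2.mlmodel")
```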
Frank: [00:14:24] It's terrible. You know, most of them are just giant Python scripts on GitHub, and they're like, good luck! Here's a readme. Hashtag neural networks. James: [00:14:36] That's right. You know, I was thinking... when I said ONNX is on Windows, I think it's because there's Win ML, which is basically like Core ML, but Windows ML. And they also can go to ONNX, and then ONNX can go to Core ML. So you could, in theory, go: I'm going to start in Core ML, go to ONNX, go from ONNX to Win ML, and then you can go back around. You can go, you know, now it's in PyTorch, then over and over and over. Now, the reason that this all exists, right, is because, um, every different company that creates these is fine-tuning their model for a specific piece of hardware, GPU, CPU, different use cases. And in the case of, uh, Core ML, they're optimizing a machine learning model to work seamlessly across all of Apple's devices. Now, Apple of course could have said, hey, we're going to take ONNX, or we're going to take PyTorch, and make that run great and super native on all the platforms. Uh, but then they don't own it. So it really comes down to: hey, we want to own it so we can rev on it and make it better in the future. Which, you know, has the side effect of this, which is now they've created a tool so you can take other things and put them into here. Um, which isn't a bad idea, I don't think, by the way. I think that's a legitimate way of doing it, because it enables them to onboard as many of those versions of ONNX or PyTorch or TensorFlow, but still own the ecosystem. Frank: [00:16:10] Yeah, that was well said. Um, especially the thing about the hardware. It's funny how much hardware dictates the design of all these things. We try to be computer scientists and abstract it all away, but it really comes down to: if you want to run TensorFlow, you need Nvidia hardware. Uh, if you want to run Win ML... I think the whole nice thing about Win ML and ONNX that Microsoft provided was that hardware abstraction; it runs on all hardware. Yeah. And Core ML, although it's an Apple product and it runs on very specific hardware, you could kind of say the same thing, because the watch is very different from a Mac, and it's able to run on both of those. So it's funny. Um, yeah, we're still stuck in the early days, and you can tell, because the hardware's dictating so many things and we have all of these standards. Yeah. So you can imagine that the conversion process between all of these... you were making the joke about doing this cycle between them. No, that would never work, James, because just going from one library to another is a small miracle, if you can get it to work, if I'm honest. James: [00:17:22] Yeah. I mean, I have to imagine that everything is so... version numbers of this, and what they tested against, and what they move against. It's always a moving target, uh, in general. But I have to say, at the same time, even if it is a moving target, I'll add one more little fuel to the fire as to why Apple or anybody else would create their own. Because Apple, now, not only do they optimize it for their CPU or GPU, but they literally have a neural chip, right? They can create silicon that it can be optimized for, and it all can work cohesively together. And that's an extra added bonus, even if it adds extra complexity to the whole thing.
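(To make one hop of the round trip James jokes about concrete: exporting a PyTorch model out to ONNX is nearly a one-liner. A sketch; the model and names here are illustrative.)

```python
import torch
import torchvision

# Illustrative model; any torch.nn.Module you can trace will do.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input that pins down the shapes

# One hop: PyTorch out to the ONNX interchange format.
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["output"],
)
```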
And I've been doing a great job of integrating it into other things. Whereas Microsoft has, when ML, you don't see it integrated into stuff. And what do I mean by that? Uh, AR kit. So you can insert neural networks into parts of AR kit and it can do three D object recognition. It can do video recognition, things like that. In fact, uh, Apple. Uh, we'll give you this neuro network called Yolo. You only look once and it can detect multiple objects in a scene. I know isn't that the cutest name, but you can just tell that. You know, these are for the AR goggles that will maybe never get, but it's, it's still neat to see it all apply two neural networks. Okay. So one more time. The big problem here is that neural networks are terrible. They're written in a variety of engines using a variety of programming techniques. And being able to convert them. It's not like converting a Jeffer, a JPEG or something like that. It's more like converting a piece of software. In fact, it pretty much is converting a piece of software, like a binary. Like, can you give me a windows binary? And can I run this on Linux? And you're just like, Ooh, uh, technically, maybe Apple is somehow pulling that off with their, uh, DDA that we have. Uh, so what I want to say about core ML four is I think that they've actually come a long way in solving that problem, whereas before it felt like they covered, you know, the top 80% of the most used functions out there now, it feels like they had an army. Implement all the functions and at the same time, completely change how their library works from being a simple kind of for a loop converting. One thing at a time to a complete programming language, parser and processor, where to even begin with this thing. I'm just so blown away by it. James: [00:20:08] Well, I'm going to give you a second to think about that as we thank our amazing sponsor this week. Sync fusion. That's right. It's seeing fusion, no matter what you're building sync fusion has something that will help make it. Absolutely spectacular. Are you need charts, graphs, data grids, fly outs. Do list view optimization. Could election views anything? You name it, basically, they have it for your application, whether you're building a mobile application, a web application, whether you're dealing with Xamarin, whether you're doing with flutter react, native asp.net, core blazer, you name it. They have hundreds of controls. For your application. I love seeing fusion. I use it in my most recent application Island tracker for animal crossing. It enabled me to get really complex input controls, data grids, and graphics on it. Um, and charts and graphs. It's really, really cool. Check out saying fusion. Am I going to sync fusion.com/merge conflict to learn all about the amazing controls that they have for your application. That's seeing fusion.com/merge complex. Thanks to thing fusion for sponsoring this a week's pod. Thank Frank: [00:21:13] you. Sync, fusion, and Anita neuro network viewer from them. There you go. Cross platform. James: [00:21:20] Was that enough time? Is that enough time to get something? Frank: [00:21:22] Yeah, sure, sure. I got something. I got something. What do you do when you run into a hard problem in computer science, James, you. Invent a programming language. Oh James: [00:21:32] yes. Obviously I thought I was like, I was like, you Google it. Frank: [00:21:36] Yeah. I shouldn't let you respond to actually was curious. And what you're going to say. 
Well, they invented a programming language called, um... well, it's called MIL. M-I-L. Okay, I'm going to call it "machine intermediate language," but that doesn't make sense either, so it probably doesn't stand for that. They invented MIL. You know how we have IL in .NET? You know what they got? They got MIL. James: [00:22:01] Machine learning IL? Frank: [00:22:03] Yeah, that'd be MLIL. MIL... metal intermediate... we shouldn't just keep guessing at it instead of reading the docs. Nope, not going to read the docs. Anyway, so they have this problem of all these different libraries being out there, and they need to be able to convert them to these Core ML model formats. And what they decided to do was break the problem down: instead of going from Torch to Core ML, from TensorFlow to Core ML, from Caffe to Core ML, what they do instead is go from those to MIL, and then from MIL to Core ML. James: [00:22:47] Yes, the Model Intermediate Language. I see this. I love this model. Frank: [00:22:56] Okay, right. So it's basically a programming language, because neural networks, in a lot of ways, are a programming language. We have functions, inputs, outputs, things like that, and you're combining them all. The only thing it doesn't really do is, like, step-by-step stuff. But if you can express your entire program without any loops, or with only fixed-size loops, then you're fine. So it's, um, a funny intermediate language. But the neat thing about it is, instead of just hoping that Apple's conversion is perfect, they give you this intermediate representation that you can play with. So say they mess up the conversion a tiny bit, you know? What happens? You have a programming interface to the abstract syntax tree of this programming language, and you can modify nodes in it and fix it up. So, just, number one: hackability. That is really wonderful. James: [00:23:57] Yeah, that's really neat. I'm looking at the documentation and... what is it, its own programming language? Like you said... I mean, you said that, and I'm looking at it, and I don't... I mean, there are, like, things being set to other things. Frank: [00:24:17] I'm being generous. It's a programming language without a textual representation. So, you know, you could think of a programming language as the code that we type in, and then a parser parses it, and then, you know, it has an abstract syntax tree of your code. So they don't do the parsing part; they just have an abstract syntax tree. But they do have the compiler part: from the abstract syntax tree, they generate the Core ML file. So this intermediate language is not the Core ML file itself. It's a little tiny programming language that you can code to, but without code; you have to edit the syntax tree itself. It'd be like using Roslyn without being able to use C#. James: [00:25:05] I see, I see. Yeah, that makes sense. That makes sense. Frank: [00:25:08] Yeah. So it's definitely a pro feature; you have to know what you're doing. But it's a real, um, concession to the real world. They're recognizing that, remember, neural networks are terrible, and they're almost impossible to convert. So, given that they're almost impossible to convert, they give programmers this ability to, uh, you know, in the in-between step, fix things up, patch things up. So that's good.
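(To make that concrete: in coremltools 4, the MIL "syntax tree" Frank describes is built and edited through a Python builder API rather than parsed from text. A small sketch; the ops and shapes here are our own illustration, not anything from the episode.)

```python
import coremltools as ct
from coremltools.converters.mil import Builder as mb

# A tiny MIL program: no text to parse, you build the tree directly.
# Roslyn without the C#, as Frank puts it.
@mb.program(input_specs=[mb.TensorSpec(shape=(1, 8))])
def prog(x):
    x = mb.relu(x=x, name="relu")          # one op in the tree
    x = mb.mul(x=x, y=2.0, name="scale")   # another op, scaling by a constant
    return x

print(prog)  # dumps a readable listing of the ops in the program

# The same converter compiles the MIL program down to a Core ML model.
mlmodel = ct.convert(prog)
```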
But there is another neat feature of it that you'll really appreciate. And this is the best part, people. If you consider neural networks as a programming language, people are inventing new, uh, primitives for it, functions in it, all of the time. You know, there actually should be a report; there's like 500 new functions every year or something like that. There's no way Apple could ever keep up with those. Uh, just the fact that they could do 80% of them is an engineering achievement. But when the next hotshot kid publishes a paper next month, it's like, oh, well, now I can't use that hotshot kid's library, because it's not supported. What you can do now with that MIL stuff is implement all those neat new little functions. So when a new paper comes out and they do something fancy, it's no longer, "oh, darn, I can't do that on iOS." Now it's, "oh, I just have to write this tiny little MIL function that does what's in the paper." Oh, how wonderful is that, James? James: [00:26:46] So you're implementing an interface, sort of, if you consider that each of the different machine learning engines out there has methods or functions or things that they're calling, and you have to do the conversion. They act as an interface that you implement, which is actually pretty cool. So as these APIs evolve, when there are new ones or new parameters or new whatever, even if, you know, PyTorch whatever-version-is-next comes out, you just simply implement that interface and boom, you're off to the races. You don't have to worry about the whole thing. Frank: [00:27:24] Right. And this just enables whole new scenarios. Like, um, what you would have to do alternatively, in the past, is go back and modify the actual neural network that you're trying to convert. And that's impossible, basically. Like, these are finicky beasts; it's a miracle you got it training at all. Uh, you can't. It would be like going back and refactoring your code, but making changes that are, you know, so much harder than just a refactor, because you don't own the code. You probably don't understand how the neural network works. If you're me, you know, you only have a vague understanding of how all this stuff works. So instead of me having to tinker around with the crazy neural network, I can tinker around with this little, much friendlier intermediate representation and get that model output. James: [00:28:12] That's really neat. Are there any other ways? Like, how do ONNX or the other ones do it? Is there some similarity to how Apple modeled this, or are they unique on their own, as far as you know? Frank: [00:28:24] As far as I know, um, the ONNX converter knows what it knows, and you can, you know, submit patches to it. You could implement in the ONNX converter, uh, things to convert whatever's missing, you know. So I would have to download the ONNX converter, modify it, and then use that modified version. Sounds terrible. And it kind of is, but, you know, it is just a library; it's not too hard to build it and things like that. But you can see where, again, you're modifying someone else's code. Now, in the Core ML way, you're not modifying anyone else's code; you're really just modifying your models. You don't have to worry about it. That said, I hope that this becomes a trend. And maybe my information is out of date; maybe ONNX has some super fancy mode that can also do that. But as far as I understand it, this is some pretty sophisticated conversion software here. I'm just so impressed.
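(Here is roughly what "write a tiny little MIL function" looks like in practice. coremltools 4 calls these composite operators; this sketch, modeled on the pattern in Apple's documentation, pretends the converter didn't know TensorFlow's Selu op and teaches it by composing ops it does know. Treat the details as illustrative.)

```python
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil import register_tf_op

# Hypothetical gap: suppose the converter had no handler for TF's Selu.
# Instead of forking the converter, register a composite of known MIL ops.
@register_tf_op
def Selu(context, node):
    x = context[node.inputs[0]]
    alpha = 1.6732631921768188   # SELU constants from the original paper
    scale = 1.0507009873554805
    x = mb.elu(x=x, alpha=alpha)
    x = mb.mul(x=x, y=scale, name=node.name)
    # Register the result under the node's name so downstream ops find it.
    context.add(node.name, x)
```

With that registered, the normal `ct.convert(...)` call just works; the converter picks up the composite whenever it meets a Selu node.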
James: [00:29:27] Now, how are you modifying these scripts, or implementing them? Are you inside of Xcode? Are you inside of VS Code? Is it just a text editor? You said they're Python scripts, so do you just script it and then execute Python commands to make sure your stuff is legit? Frank: [00:29:47] Yeah, it's actually kind of crazy. Um, so I personally use VS Code, uh, for most of this stuff. There are ridiculous Python tools in VS Code; like, the state of the art has gotten really advanced. Color me impressed. I've been using Python for years, and it's never been this good. So that's quite a happy place for me. There are some funny things, um, with Core ML... actually, let me give it one more praise feature, and then I'll talk about how that kind of failed me. Um, so this conversion process is fraught. Things can go wrong, and you kind of never know how things have gone wrong until you've gotten the model onto your phone, receiving real-world data, and it's actually trying to display results. And then you realize, oh, the conversion went bad; something happened in the conversion. So a really neat feature of the Core ML Tools, just as conversion software, is that it can now execute your model. James: [00:30:45] Ooh, that's cool. Frank: [00:30:48] Yeah. Which seems so obvious, but I'm just like, why didn't it do that before? So now, you can imagine: execute some test data in your model, do the conversion, run that same test data through the converted one, and they'd better be identical. That's how the world works. Um, and if they're not, then, you know, things have gone bad. And that actually, um, that happened to me. So I was trying to convert this neural network... this was the day before, I guess some people call that yesterday... and I trained it, it seemed to be working, I put it onto my phone, and it was supposed to be generating pictures, and everything I generated was blue. Not even like a hundred percent blue, just blue-tinted. Like, you know, wrong, but right. Like, it was there, but blue-tinted. How weird. James: [00:31:49] Like someone was taking that RGB, and the B of it, and just sort of turning it up a little bit. Frank: [00:31:57] Yeah. I just kept assuming it was my math. Like, I kept literally multiplying the B number until I could filter out some of the blue, but then the greens and the reds were too strong. So I'm just like, what in the world is going on? I've never seen this. Like, you know, when you do image stuff, it usually works, or it doesn't, or it's sideways. But, um, I was actually able to use the Core ML validator, right, the model-runner thing, to see: oh yes, indeed, it is generating bluish kind of images. But then I was able to go a step backwards and see my network... my network was not generating bluish images. So there was a funny step in between, and I was able to narrow it down and find: oh, there's this annoying mistake here that unfortunately I can't work around. I can't use this model; it's preventing the thing from executing. But that was just a really good experience I had with it. James: [00:32:56] Nice, I like that. I mean, so far it sounds positive coming off of it. The version four: feeling good. Frank's feeling good, basically. Frank: [00:33:07] No. My success rate so far has been one out of five. James: [00:33:10] Oh, that's pretty good. 20%! Wait, it is.
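(The validate-by-running trick Frank describes is easy to sketch. Reusing the toy MIL program from earlier: convert it, push real data through the converted model with `predict()`, which runs in-process on macOS, and compare against a reference computed by hand. Names and tolerances here are illustrative.)

```python
import numpy as np
import coremltools as ct
from coremltools.converters.mil import Builder as mb

# Same toy program as before: relu, then scale by 2.
@mb.program(input_specs=[mb.TensorSpec(shape=(1, 8))])
def prog(x):
    x = mb.relu(x=x, name="relu")
    return mb.mul(x=x, y=2.0, name="out")

mlmodel = ct.convert(prog)

# Run test data through the converted model and compare against a
# hand-computed reference. If they disagree, the conversion went bad;
# better to find out here than after shipping blue-tinted images.
x = np.random.randn(1, 8).astype(np.float32)
expected = np.maximum(x, 0.0) * 2.0
got = mlmodel.predict({"x": x})["out"]  # names follow the program above

np.testing.assert_allclose(expected, got, rtol=1e-4, atol=1e-5)
```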
Frank: [00:33:13] Thank you for seeing it that way, because that's how I see it, James. Because you know what? My old record was zero. So this is all coming from me converting one model and having it execute correctly, once. I'm just, like, so happy right now, for all of it. James: [00:33:34] Oh yeah. Now, is the reason you're converting models because you need to use this specific thing, or do you enjoy coding in other engines, compared to just building something for Core ML specifically? Or can you not build stuff for Core ML specifically? Frank: [00:33:55] You cannot build anything for Core ML, period, full stop. Now, there is that Create ML tool that Apple has, but that's using, um, uh, the Metal Performance Shaders neural network library. Yeah. And you can write code using that, but that puppy is a bit of a beast, right? And, ironically enough, it's not compatible with Core ML. Weird. So you can create models in it, and you can execute models in it, but you can't generate a Core ML file or anything like that. So there are pros and cons. The pro is, they're more low-level, you can imagine. So if you use the low-level stuff, it's still using the hardware, but it's just so low-level. It's like assembly language instead of a programming language. It's hard. And I've tried to work on libraries, um, specifically to answer what you just said. Like, I want to train on Apple and execute on Apple, and I've actually started working on stuff like that for Xamarin. And it actually kind of works. Problem is, it only kind of works, and it's, um, not as powerful as what you're able to achieve with Core ML, unfortunately. James: [00:35:16] Got it. Yeah, that makes, um, some logical sense there, I guess. And I think the other problem that you may have is that, you know, there are all the other machine learning engines that came out before Core ML, right? We're talking about PyTorch, we're talking about, um, TensorFlow. TensorFlow's been around for a long time, and those are going to have the biggest knowledge base, the biggest documentation, the biggest amount of models. I'm not saying that TensorFlow does have the biggest, but I'm thinking it'll be bigger than what maybe Core ML has. And additionally, if you do something in TensorFlow or ONNX, you're going to have an easier time going to other platforms, perhaps, than... Frank: [00:36:02] Nope. Nope. You know, like I was saying, that neural network wasn't working for me? It was actually a bug in TensorFlow. New things don't work at all, either. This is all terrible. It's a whole bunch of terrible stuff. James: [00:36:15] How are we supposed to be getting self-driving cars and all this stuff if nothing works? Should I be holding off now? Frank: [00:36:22] It, it, it works. It just takes actual dedication. It's just not turnkey, as we used to say. You know, unless you have a network that's already kind of pre-trained and people tell you how it works, that's pretty turnkey. But if you're making your own, none of it is that simple. Something that we haven't... well, I've kind of alluded to it: when we talk about neural network engines, how people use them is so varied, um, because it's a programming language, and programmers are insane in the way they implement things. And so there's just such a large diversity there that you have to deal with. So, getting back to: um, do I want to be using Python? No, absolutely not.
The fact is, though, coming up with interesting new architectures and getting them to actually train is very difficult. And so when someone is successful at that, you really just want to use their network. You really don't want to change its code at all; it's scary, right? Because the moment you start changing some numbers, the thing stops working. And so it's not that I want to be in the Python world, it's just that that's the world where it exists. And so if you want to get these things running someplace else... you know what it really is? It's the old science-and-engineering split, right? Scientists are off creating these wonderful things, and we have to be the engineers who take them and convert them to a form that can actually execute, um, or, you know, be useful to people. And that's the hard part, the engineering. James: [00:38:09] Yeah, yeah. Hmm. Frank: [00:38:13] Anyway, good library. Only took them four versions. James: [00:38:18] And maybe in the next four versions, we'll do another one where I understand anything that we talked about today. Frank: [00:38:24] Oh, come on, you got it. YOLO. You only look once. How funny is that? James: [00:38:30] Well, you know what? I do think we need, probably, a presentation on sort of the state of things and the demo-ability. I think one thing that would lend pretty decently to this is a demo. So, uh, I think that if we ever do that, we'll announce it to all the podcast subscribers, and then, uh, we'll make Frank do the demo, and that'd be pretty cool. Frank: [00:38:49] Yeah, I'll condense six hours of Twitch down to 30 minutes somehow. James: [00:38:53] Yeah, I do like that. I mean... so you've messed around with all the tools, everything is going good so far, 20%, and you're feeling pretty positive about it. I want to kind of wrap it up a little bit, but do you think there's something Apple is missing from this? Like, is there something they could do in Core ML or Core ML Tools 5 that would revolutionize it, or is it just fixing it up to be more reliable at this point, as far as the conversion stuff goes? Frank: [00:39:19] Uh, there are definitely a few of those... I keep calling them functions: fancy new functions that Apple has not implemented, and even with the great ability to manipulate at that MIL level, you still can't, you know, accomplish them. Uh, so there are always those kinds of things. And then there's kind of what I was saying before: neural networks are feed-forward. You can't do step one, step two, step three, step four. It's not a procedural programming language; it's a functional programming language. And it would be nice someday to get some procedural-ism into our networks. I think that's coming. Because, in my mind, these things are just programs, and they need to be able to execute in a larger variety of places, and from all hosting programming languages. That's why I want C#, like C# and F#, to easily access them. Even today, running Core ML stuff on iOS... I've been doing it for how many years now, James? Like three, four, five years, who knows. And I still find it difficult to remember how to use the API. So, still a lot of progress we can make as an industry. James: [00:40:30] Yeah, I would really enjoy that. I know there are a lot of turnkey solutions, like we talked about, but when it comes to custom stuff, I'd love to be able to get to the point where I just drag a model into, like, Xcode, and it's like, okay, cool.
Frank: [00:40:44] We're so close. Like, they have that demo: you use Create ML, it works in Xcode too, you can drag a model in and it generates a thing. But the thing still isn't that easy to use, unfortunately. James: [00:41:00] Ah, there we go. 40 minutes of machine learning. We did it. Frank: [00:41:04] Isn't this, like, twice a month or something like that? I'm going to have to do penance, and we're going to have to talk about Nintendo for at least another month now. James: [00:41:12] I talk about it every single week, so there's that. We did talk about GPT-3, too, two weeks ago. Frank: [00:41:20] Yeah. Wow. Okay, I've totally taken over the show. I feel bad. James: [00:41:26] Well, if you love our machine learning shenanigans on this podcast, you should write in. You should go to mergeconflict.fm. We love fan mail, by the way; we don't get very much of it. But, um, you know, we have a lot of people talking to us on our Discord or on Twitter, but we do like a good fan mail from time to time. You can go to mergeconflict.fm; there's literally a button that says contact. I just pressed it, just tested it, Frank. You can go on there and you can just... Frank: [00:41:50] Write right at James. Be nice. James: [00:41:54] Long form, long form. You're not restricted. There are no restrictions. Frank: [00:41:59] Can people send images? James: [00:42:00] No, that's restricted. Frank: [00:42:02] Okay. James: [00:42:04] Uh, but, but long form: like, if you were going to write us a letter, you could do that. Yeah. Frank: [00:42:11] Who doesn't like a good letter? James: [00:42:13] I love a great letter. I don't get them ever, but, you know, you could get one. Frank: [00:42:18] I'll send you a letter. You can use the form. James: [00:42:21] You could get one from us if you become a patron. We have awesome patrons that are helping the podcast be awesome. In fact, I just sent out hundreds of mugs, and by hundreds I mean 30 or 40 mugs, to our patrons. And additionally, um, they help not only support the show by covering, you know, our podcast fees, but additionally the transcripts. We have automated machine learning transcripts that occur on this podcast. I'm using a new service; I moved away from other ones. It's called Descript. It's really cool; it's a whole studio and, shenanigans, it's super fast. It's really good. So I hope you are enjoying the new transcripts; uh, they are spectacular. Um, I'll put a link in the show notes for this tool called Descript. D-E-script. Um, I think my coworker Jon Galloway put me onto it. Frank: [00:43:17] I haven't seen this. James: [00:43:18] It's good. It's very good. Um, yeah, you'll have to give it a look. Frank: [00:43:22] Gotcha. I'll be looking for that. James: [00:43:25] So the last three episodes have the new machine learning transcripts. I used to use a different service, Temi, but they went up in price by four x. So I said, ah, that's okay. And this one is a subscription service, so we pay a monthly fee and we get unlimited transcripts, which I think is a spectacular deal, by the way. Frank: [00:43:48] Yeah, it must be a machine then, huh? No humans involved. James: [00:43:51] Yeah, these are machine transcripts, correct. But you get 10 hours of transcripts, which is just a lot.
And I mean, just... Frank: [00:43:57] Robots are cheap until they unionize. Watch out. James: [00:44:01] That's true. Yeah. It's really good. Anyways, they even have a free plan, and it's a whole studio too, which I think is really cool, and you can try three hours of transcription for free. This is not an ad; I just have used it and thought it was cool. Uh, and I have noticed that it's way faster; it's just so much faster, and just really good. So definitely give it a go. That's going to do it for this week's podcast. Thanks to our sponsor, not Descript, but Syncfusion, for making awesome controls that you can pop into your application today. So until next time, I'm James Montemagno. Frank: [00:44:31] And I'm Frank Krueger. Thanks for listening. Peace.