mergeconflict321 ===

[00:00:00] James: This week's episode is brought to you by the one and only Syncfusion. Listen, we've been talking about Syncfusion for about the last 45 years, because they've been with us since the beginning of the podcast. They help you deliver your apps faster. They have the world's best UI component suite for building powerful web, desktop, and mobile apps. No matter what you're building with, whether it's Blazor, Flutter, ASP.NET, Angular, React, Vue, .NET MAUI, Xamarin, UWP, or so much more, they have awesome controls, powerful document processing, and entire dashboards ready to go. Head to syncfusion.com/mergeconflict to learn more. That's syncfusion.com/mergeconflict. Thanks, Syncfusion, for sponsoring this week's pod.

Frank, Frank, Frank, Frank, Mr. JavaScript developer himself. Goodbye .NET, hello JavaScript. Frank, you've turned.

[00:00:57] Frank: I've turned. I haven't turned. I've been rocking the JavaScript since I was a little kitty. I love the JavaScript. That's not true, James. I absolutely hate JavaScript, but I did write a JavaScript library. And so yes, I am a JavaScript programmer this week. Just this week.

[00:01:13] James: I'm proud of you, because you made some amazing graphics on this website, and additionally, your styling looks very good. So I'm very impressed with it. It looks very Frank Krueger-esque in nature, the CSS, and it is good. No, I mean, you can't get away from JavaScript in some relation. We were talking about just Blazor and Blazor WebAssembly, and, you know, there's JavaScript invoking, and there's JavaScript libraries that you're calling, and you can create a .NET wrapper over it, but it's a question of how much do you want to do stuff in general. So the cool part here is that you did this thing. And this is cool because it's actually combining two, maybe three episodes that we've talked about.
So this has sort of been a buildup over time, which I'm very proud to see: our podcast has led to not only talking about cool little projects, but also, yet another thing, bringing them together. And you could always link to the podcast on your GitHub page, or on the website, doesn't really matter. But what's cool here is that this is combining all of that machine learning stuff that we talked about a few weeks ago, all of that neural network creation to do language translation, and also static websites to host stuff for free. Who doesn't love hosting websites for free, Frank?

[00:02:27] Frank: Yeah, this is the culmination episode. I've actually been really looking forward to this. There have been so many little pieces in play, so let me introduce what we're actually talking about here. I kind of want to give some background, but I'll just TL;DR it. I wrote some code so that you can run neural networks right inside of the browser. This is neat because you don't need a server to execute the neural network; all the computation is done right on the person's own machine. This is good for economy. It's like infinite scale-out: if you have a thousand users, you basically have a thousand servers. It's good for that, it's good for the monies. It can be a little bit slow, you know; people's machines aren't all as fast as the super servers out there. But it's also good for security. And so I'm excited to have this little library. Oh, and sorry, yeah, it's good for static website hosts, where you can't execute any code, and those are free. And so we're just trying to combine all these. You know, I'm cheap, that's all I'm saying here. So I'm trying to get a lot of bang for my buck out of these free websites, and I decided running neural networks in the browser was the way to do it.
[00:03:42] James: Now, because previously you would have to run this neural network on a server. That was the problem that you had, right? Hey, you know, I don't want to run stuff on the server.

[00:03:51] Frank: You have to pay for it, basically. You have to pay to host it, and you pay per execution. Now, going back to that episode where I talked about the Q&A form, I put the network up on Hugging Face, which is basically free, and they let you run a whole bunch of queries against it for free, but eventually that quota runs out and you have to start paying and things like that. Real quick story time. Okay. I was doing a Twitch stream and I was trying to show off some of this stuff. I was trying to show off the Q&A form network, and the stupid server inference API kept failing. It just kept failing. It's like, no, bro, not doing ML today, just not in the mood. I think it was, yeah, HTTP code Not In The Mood. And it was frustrating because it was a demo fail. You know me, I hate demo fails. I knew what was happening. Okay, cloud services, they go down once in a while. It's annoying. But I kept thinking to myself, this network, it's big, but it's not ridiculous. It's not one of the giant crazy ones, it's no Copilot. It's not this thing that requires data centers to execute. And I was just like, it should be able to run in the browser. And so I made the effort to do it. It turns out there's a lot of little pieces. So what I really did was I gathered together a million little pieces and glued them all together to make it work.

[00:05:18] James: Okay, I like that. I mean, that's kind of how my software is made in general.

[00:05:22] Frank: Yeah, yeah. So it's interesting, because these neural networks, the one that I wanted to show off, was a translation network. So again, going back to the Q&A form, but this isn't a network I trained; this is just an off-the-shelf one from Google.
It's able to translate from English to French, German, and Romanian. It's kind of an eclectic set of languages. And I just wanted to use this as, like, a demonstration network of what's possible here. And then as long as you have a network that's a similar architecture to it, it should be able to run all of those too. But it turns out that the neural network is only part of the puzzle, because in order to do translation, you actually have to do a bunch of other things. A big part of it is called the tokenization process, and it all comes down to this: neural networks don't operate on text. They don't operate on ASCII. They don't operate on Unicode. They a hundred million percent could, but they don't. It's just a waste of computation to make them go letter by letter. So I had to write this tokenization library, James, and it almost broke me. I had to read Rust code. I had to translate Rust code into JavaScript, and there's a few tweets out there of me just a hundred percent nagging on Rust, because I was miserable. I hate that language. I don't hate that language; I love all programming languages, but it's low on my list of languages that I like, and it almost broke me. But I did it.

[00:07:04] James: So, okay, describe this tokenizer thing again. Because you were like, oh, did you tap on the Test Tokenizer button? And then I just see a bunch of things in here. Okay, so let's back up a second here. Your goal was to figure out, at the end of the day, how to run neural networks for free and on folks' machines. That was the end goal of this, right? Free.

[00:07:33] Frank: Okay, free being the... I don't think I admitted it to myself at the time, but hearing you say it, yeah, I was aiming for free. I want to run this thing everywhere for free.

[00:07:42] James: And what we got...
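The tokenizer Frank keeps mentioning has one job: turn text into the numbers a network can consume, and turn the network's numbers back into text. A minimal sketch of that idea, using a made-up toy vocabulary and a naive greedy longest-match rule (this is not the real T5/SentencePiece tokenizer, just the shape of the problem):

```javascript
// Toy vocabulary; real tokenizers learn tens of thousands of entries.
const VOCAB = ["hello", "hell", "he", "lo", "world", " ", "w", "o", "r", "l", "d", "h", "e"];
const ID = new Map(VOCAB.map((tok, i) => [tok, i]));

function encode(text) {
  const ids = [];
  let pos = 0;
  while (pos < text.length) {
    // Greedily take the longest vocabulary entry matching at `pos`.
    let best = null;
    for (const tok of VOCAB) {
      if (text.startsWith(tok, pos) && (best === null || tok.length > best.length)) {
        best = tok;
      }
    }
    if (best === null) throw new Error(`no token for "${text[pos]}"`);
    ids.push(ID.get(best));
    pos += best.length;
  }
  return ids;
}

function decode(ids) {
  // De-tokenization: map ids back to strings and concatenate.
  return ids.map((id) => VOCAB[id]).join("");
}
```

The ids, not the characters, are what get fed to the network, which is why Frank later calls the tokenizer half the battle.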
So I want to describe it; I'll put it in the show notes. We got a website that has a subdomain on praeclarum.org, which will enable me to translate from English. That's the only direction, you can't go the other way, only from English to French, German, or Romanian, using the T5-small neural network, which is the one from Google that does those things. So if I understand this correctly: you built a website that can be hosted as a static website, and you built a JavaScript library that the website can use, which will then call into that neural network to do that translation automatically. That's what you created. And you then deployed it using Azure Static Web Apps to host that JavaScript library, pulling down those neural network files, which I'm assuming are required for that JavaScript stuff to run. Is that accurate? Did I accurately describe what happened?

[00:09:00] Frank: I think you did. You did. I'm gonna change the perspective slightly, though. What I needed was a JavaScript library to do translations using a neural network. So I'm starting with the library perspective, because I want to deploy this to multiple websites to do multiple things. You know, once it can run one structured neural network, those neural networks can do a variety of things. We're doing translation right now, but that's just a demonstration. This network itself can also do summarization, and it can do question answering: give it some text and you can start asking questions. It knows a little bit about the world. And so it has all these advanced capabilities. And what I want is for people to be able to put those kinds of advanced neural capabilities into their own websites or, and we'll get there, apps. Obviously I'm an app developer; I wanna do this for apps. I did it for the web first just because, I don't know, I'm being wacky.
I just wanted to prove it to myself that I could get this stuff working anywhere. But rest assured, if you can get it working on the web, you can get it working in apps. So I started with the library, and then, James, this is all you. I knew you wouldn't take this seriously unless I built a nice demo website. Correct. And hosted it on a nice server and had a nice domain. So this is all your pressure on me of wanting to present to you, James, a nice demo of this. But I definitely have the library perspective. I plan on using this on a variety of websites and, honestly, not for translation. Translation is one of the most boring things that this library, that these neural networks, can do. But they all share a common code base, and it was that common code base that I had to build up in order to be able to execute them.

[00:10:43] James: Yeah, that makes sense. That makes sense, because the end goal there, like you said, is to use it to do other types of things, basically. And you have a nice blog post right there...

[00:10:54] Frank: ...all about it. I put all the effort in, I put all the effort in. It's like a product launch. Thanks. Well, honestly, I've been getting a lot of projects to like 90% and not really releasing them, and it was starting to weigh on me. So I was like, you know what, I'm gonna have a nice little launch for this library. It's very niche. Obviously not many websites need to run a neural network on them, but you know what, in the future they will. It's gonna become more and more popular; we can't really help that. And so I think libraries like this will become more popular. And I kind of wanted to put a stake in the sand and be like, look, here's my version of it, it works great, let's all move on. And maybe do this a lot more.
I say in the blog post that there are certainly these very large neural networks out there, like Copilot, that are so big that running them in the browser is not feasible. They can be. As long as you have enough RAM and you're patient enough, they can be run in the browser. But you don't want to; they're just too slow. So we'll always have big iron to run those big networks. But there are a lot of small to medium-size networks that do a lot of good work and can easily, easily run inside the browser. It really was just a matter of putting all that glue together to make it easy. I should say, I did not write any of the code that actually executes the neural network. For that, I use the ONNX library from Microsoft. And this is all thanks to them: they have a JavaScript version of ONNX Runtime, and it can run beautifully inside the web browser. So I provided one big chunk, the tokenizers, and I'd love to nerd out about those forever. I provided a whole bunch of glue. But Microsoft provided this neural network execution library, ONNX.

[00:12:49] James: Okay, back up here on the ONNX bit. Because I remember ONNX being... this would be similar to, like, Core ML or to something else, basically. That's kind of what I remember, no? Am I wrong at this point? Because I thought that that was sort of what it was, but it was originally for, like, Windows. So it's not just for Windows, it's for other stuff now?

[00:13:11] Frank: Yeah. In fact, it's more of an interchange file format. All these neural networks, they all use kind of their own file formats, and there's not great interoperability or interchange between all these kinds of libraries. And so the ONNX initiative... of course they promoted the Windows side of it. It's Microsoft; they can't help themselves.
But of course, being Microsoft, they're also like, this is an open initiative, you know, let's create yet another interchange format. What's that joke? The great thing about standards is there's always a new one, or something like that. Whatever. So they added another standard, but it's a standard that PyTorch, which is a Python library that you use to train your own networks, can pretty easily output. And so, you know, I wouldn't say they've won the format wars, because of course there's Core ML, there's TensorFlow.js, an alternative to all of this. So they started out writing basically a file format, basically an interchange format. But to prove that anything's correct, you have to execute these models at some point. So they also built a runtime to execute ONNX models, and someone over there had a lot of forethought and decided to make that runtime run on all sorts of devices. Pretty sure it's supported directly on mobile. I really should have done that research before this show. But I promise you, if this thing is running in WebAssembly on the web, we can get it running on an iPhone, no problem. So I'm excited to also move these things into mobile.

[00:14:56] James: Oh, very cool. That's pretty awesome to hear that it goes kind of above and beyond. So you got a bunch of stuff, put a bunch of glue together, and then what happened?

[00:15:08] Frank: Oh, well, I realized I was in over my head. Oh no. Okay, well, there's a lot of small details. I wasn't gonna work on this at all. It was kind of neat: another person, just on GitHub, wrote a little script that optimized these networks and made 'em a little bit smaller, because this network that I'm talking about is 400, 500 megabytes, and you really don't want people downloading 400, 500 megabytes. And you know what? I think it's even bigger than that.
And they were able to do some fancy medieval magic on it and quantize it down from 32-bit to 8-bit. Oh, wow. And yeah, that's not just a size thing. Okay, so it's a huge size thing, obviously: you're getting one quarter of the size. There can be a potential accuracy loss, but what you do is a very careful conversion between that 32-bit and 8-bit. You don't just, you know, map zero-to-one onto zero-to-255; you do it a lot more intelligently. And there was someone who wrote a library, I believe it's called fastT5, and it was just kind of inspiring. Because, okay, when I was saying it's not just a size thing: it's a speed thing too. If you have to wait 10 seconds between every word of a translation, you're just gonna go to Google Translate. You're gonna be like, there's no point, Frank, in running this locally if it's this slow. Yeah. So it was neat to see this fastT5 library come out, where it was like, oh, okay, we can take these bigger networks and actually run them on the CPU at reasonable speeds. So definitely another big puzzle piece came in there.

[00:16:56] James: That's very cool. Yeah, I mean, that's the next question I was really gonna have, and you kind of answered it for me, which is: traditionally, when you think of doing machine learning and models, these things are huge. Yeah. And that's why you wanna run them on a server. Or, you know, even if you're doing Core ML or you're doing TensorFlow stuff, adding those models is quite exhaustive, so you might have to have a download script or something like that. So you're able to sort of get around that, but there is a package and an actual download. So is that file being hosted... is that being hosted in Static Web Apps? Is that on GitHub? Or how is that all bundled up, basically? Yes.
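The careful 32-bit-to-8-bit conversion Frank describes can be sketched with a per-tensor scale and zero point: instead of mapping a fixed zero-to-one range onto 0 to 255, you spread the actual range of the weights across the 256 buckets. This is a simplified illustration, not the fastT5 script he mentions:

```javascript
// Quantize an array of 32-bit floats down to 8-bit integers.
function quantize(weights) {
  const min = Math.min(...weights);
  const max = Math.max(...weights);
  const scale = (max - min) / 255 || 1; // avoid divide-by-zero for constant tensors
  const zeroPoint = min;
  // One byte per weight instead of four: a quarter of the size.
  const q = Uint8Array.from(weights, (w) => Math.round((w - zeroPoint) / scale));
  return { q, scale, zeroPoint };
}

// Recover approximate floats; error is at most half a bucket (scale / 2).
function dequantize({ q, scale, zeroPoint }) {
  return Array.from(q, (v) => v * scale + zeroPoint);
}
```

Real tools do this per tensor (or per channel) so each layer's weight distribution gets its own 256 buckets, which is why the accuracy loss stays small.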
[00:17:38] Frank: Well, it's the web, so you can actually get away with putting things pretty much anywhere, you know, given security models and all that. In fact, you ran into a weird security-model glitch with it that I didn't fully understand, but we'll get to that after. I'll try to explain what I'm doing, but please understand, it's the web; you can do so many permutations of all this stuff. So what I have is just a website, an index.html, that has some things in it. It imports the ONNX Runtime library; this is the thing that actually executes the neural network. It's called ORT, ONNX Runtime. And I just pull that down as a little JavaScript library hosted on jsDelivr. You know how the JavaScript people do it: someone else hosts it. Yeah, that's how you do it. And it's actually done very nicely, where you just reference that and it'll download WebAssembly to execute the models. This is a neat little detail about the ONNX Runtime library: the way they were able to get a JavaScript library is they took their awesome C code, their runtime code to execute these models, and they just compiled it as WebAssembly. Bingo, bango. Clever. Yeah. What they did was build a little, and I mean little, JavaScript library to wrap over that WebAssembly. It basically just has some good data types, interops with the JavaScript runtime, and then hands all that junk off to the low-level C code that's actually executing the neural network. Hm. It's kind of fancy. I like that.

[00:19:26] James: That's really cool. Because often when we think of WebAssembly, we're always thinking of Blazor, because we're .NET developers. At least that's how I'm thinking about it. But really, when you think about it, what we're talking about is a lot more than that, right? We're talking about web code and native code. It's an open standard at the end of the day.
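The flow Frank describes, with the `ort` library doing the heavy lifting in WebAssembly while your JavaScript just builds tensors and hands them over, looks roughly like this. The model URL and the `input_ids`/`attention_mask` input names are typical of T5-style models but are placeholders here, not confirmed details of Frank's site:

```javascript
// In a browser, `ort` comes from a CDN <script> tag (e.g. via jsDelivr);
// in Node you would require the onnxruntime-web package.
async function runModel(ort, modelUrl, tokenIds) {
  // Downloads the model and sets up the WASM execution backend.
  const session = await ort.InferenceSession.create(modelUrl);
  const dims = [1, tokenIds.length]; // batch of one sequence
  const feeds = {
    input_ids: new ort.Tensor("int64", BigInt64Array.from(tokenIds, BigInt), dims),
    attention_mask: new ort.Tensor("int64", new BigInt64Array(tokenIds.length).fill(1n), dims),
  };
  // Hands the tensors off to the compiled C runtime; returns output tensors.
  return session.run(feeds);
}

// Pure helper: pick the highest-scoring token id from a row of logits.
function argmax(logits) {
  let best = 0;
  for (let i = 1; i < logits.length; i++) {
    if (logits[i] > logits[best]) best = i;
  }
  return best;
}
```

Decoding the output is then a loop: run the model, `argmax` the logits to pick the next token id, append it, and repeat until an end-of-sequence token appears.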
[00:19:42] Frank: Right, yeah, yeah. And it's fun. You can take advantage of this in mobile too. Just pop open a little web view and run your code in there, no biggie. Or you can do Blazor embedding right inside a MAUI app. Yeah. So you can take advantage of all this stuff there too. I think there are still probably gonna be advantages, like these models could be converted to Core ML, but I do like the simplicity of just using the ONNX Runtime if I can, you know? So let's rewind even further back in time. In 2017, I wrote a prediction engine for Continuous to do a little bit of code prediction. I remember that. Mostly just to make the keyboard better, just 'cause typing on the iPad's not the greatest, that's all of it. So I was just trying to make the keyboard better. And honestly, it was pretty advanced for the time. But time progresses, and things like Copilot and IntelliCode have come out. They've upped the game a little bit, James. I was so clever there for a couple of years, until the game got upped. And so I've been wanting to put more sophisticated networks into the apps also. And me being me, I don't wanna have that running on a server; I want it running locally. Mostly so I don't have to pay, but also it's a great privacy thing. I don't have to do any privacy thing. I'm like, nope, I'm not shipping your code off anywhere. It's all good to go. And so I do wanna run these more advanced neural networks, and I wasn't sure, A, that I could convert them to a form that could run on the phone, let's say the iPhone, and B, what is all the other junk surrounding the neural network? Because I had learned enough at this point to know that the neural network's only half the battle; there's a lot of other stuff at play to make these things as good as they are. And, going back to what I said, for some reason I decided to solve it on the web first.
It's really just what mood I was in. But I a hundred percent plan on... for example, the tokenizer is the big piece of the puzzle. That was the largest question mark for me. I knew in principle what it did, but I had no code that did what it did. And if I were to put one of these modern neural networks into Continuous, I'm gonna need a tokenizer. Like, I need to run that tokenization code on the phone, feed the results of that to the neural network, get the results back, de-tokenize it, and pop it up onto the screen. That's the flow. So I had to solve this giant question mark, and I wrote it in JavaScript, 'cause that's the mood I was in.

[00:22:21] James: Do you wake up in the morning and you're like, today's a JavaScript mood, today's a Swift mood, today's an F# mood? How do you wake up? 'Cause I wake up and I'm like, hmm, today's gonna be like a cycling day. Or I'll wake up and, oh, today's feeling like, you know, eggs, or it's feeling like turkey bacon, or it's feeling like no breakfast. Or, actually, I mostly wake up and I go: is today gonna be an AeroPress type of day? A French press type of day? Is it a V60 type of day? Is this a James-needs-to-go-out-and-spend-$6-on-a-coffee type of day? Those are the days... I mean, that's 95% of my mornings, to be honest with you: debating the coffee situation. I feel like, Frank, you and I are a little bit different in the way that we wake up.

[00:23:07] Frank: Yeah, dude, I solved the coffee problem. You know, it's that Forrest Gump saying: it's one less thing, James, one less thing. Just solve that coffee problem. I make a Mr. Coffee every morning. Yes, I live a boring life, everyone, but it's one less thing I have to think about. And therefore I can think about which programming language I want to use for the day. No, I don't pick that language based on my mood.
I'm just oversimplifying. The real... the straw that broke the camel's back was that server outage on my Q&A form website. And that made me mad. And I really wanted live translate on that website; it's always been in my original idea and plan. And so this was: look, I have a very specific project, I will know whether this is working or not, so I might as well code against that. The Continuous one is harder, because I haven't settled on the neural network I'm gonna use to deploy to it. Unfortunately, it looks like all the bestest, goodest networks out there are copyrighted, which is super annoying. So it means I have to train my own and deploy my own. Whereas here I already had a preexisting network, my Q&A form one. I could test against it and all that stuff, and I could really prove out all this stuff. And the beautiful thing is, although I had such a hard time creating the tokenizer library, reading all that old Rust code, in the end the code was pretty simple. And it's gonna take me a whopping, like, two hours to convert it over to C#, or F#, whatever language I feel like picking that morning, James.

[00:24:53] James: I like that. That's cool. That's cool. So what comes next in here? What comes next are the transforms that you actually have in this thing, that are the tokens... I mean, that you...

[00:25:08] Frank: ...wanted to get back to. Yeah, okay. Sorry, just to complete the story, 'cause I keep talking about these tokenizers and I'm very proud of them. From programming languages, we've always had these tokenizer things, and the whole principle's real simple: there's a text string, neural networks don't deal with text, you gotta turn the text into numbers. Which numbers? Who knows. But you can actually decide; you have a lot of control over that. But all these different networks have slightly different tokenizer settings, and so I couldn't just implement one tokenizer.
I had to implement, like, a general solution. The way these libraries are designed, they have all these different options, so I had to implement a lot of different options. Converting text to numbers turns out to be an interesting AI problem all by itself. The tokens are over-specified: take the way that you write the word "universe". I can give you at least 10 different numeric patterns that would generate the word "universe". It is redundant, it is over-specified. And so, to actually give these neural networks the tokens they want, what you have to find is the optimal set of tokens to provide them. You have to solve a little optimization problem just to feed the data to the neural network, to get the neural network to do its optimization problem. I found it all kind of hilarious, to be honest, especially because that pre-tokenization optimization problem used to be AI. That was the extent of 1970s AI: graph optimization problems. And so I thought it was super cute that you have to do 1970s AI just to do the input into, you know, a 2020s neural network. I thought that was so funny.

[00:26:57] James: It's like, yeah, going back in time to actually be successful in today's world.

[00:27:04] Frank: Yeah, and it was hard. It's, like, graph theory stuff. It was so hard that I ended up coming up with this test suite. Because you're never a hundred percent sure if this code that you're writing works, because, like I said, there's a million different ways to get the right answer. So you have to be very specific about what's happening here. So I ended up doing it kind of the brute-force way: I generated a thousand different strings and their tokenizations, and I just kept running my library against them. Sorry, I ran them through the standard tokenizer, the one that runs in Python, and actually it runs in Rust. I ran it through that.
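The brute-force approach Frank describes can be sketched as a tiny harness: run a reference tokenizer over a pile of strings once, save the expected ids as fixtures, then measure what fraction of them your own implementation reproduces exactly. The fixture shape here is an assumption for illustration:

```javascript
// Compare a tokenizer function against reference fixtures and return the
// fraction of inputs whose token ids match exactly.
function passRate(tokenize, fixtures) {
  let passed = 0;
  for (const { text, expected } of fixtures) {
    const actual = tokenize(text);
    const ok =
      actual.length === expected.length && actual.every((id, i) => id === expected[i]);
    if (ok) passed++;
  }
  return passed / fixtures.length;
}
```

Exact-match comparison matters here: since many token sequences decode to the same string, checking decoded text instead of ids would hide exactly the bugs Frank was hunting.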
You know, I basically created a thousand acceptance tests and just kept hammering away at my code until my silly little JavaScript algorithm matched the output of this ridiculous Rust code that I absolutely cannot read. Why do people use that language? Okay, Rust rant over.

[00:28:00] James: Well, you know, that's the best part, is that you did a little test-driven development. I'm very proud.

[00:28:05] Frank: You have to, especially because I underestimated the problem. When I learned the tokenization format, I sat down with myself. I said, Frank, how would you implement this tokenization algorithm? And I said, this is how I would do it. And I sat down and I implemented it that way, and it got me a 4% success rate. And I just had to take a step back, James, because, look, if you get the algorithm wrong, it should be a 0% success rate. There's no random chance here; there's too much data here. So why is it 4% right? And why is it only 4% right? This was the best algorithm I could think of. You know, I'm like, if I were to do this, this is how I would do it. Well, it turns out that's not how you should design things. You should actually go read the manual and learn how they did it and replicate that. And that's when I found out about this crazy graph optimization problem that you have to solve just to do the tokenization. So I went in with a lot of hubris, and I definitely got humbled by this. Half of my pride around this library, and why I tried to make the website look half decent, was that I just want people to know that this effort was worth it. Like, I'm not gonna show this thing off in its ugliest form; I'm gonna show it off in its best form, because it was painful. It was painful, James.

[00:29:31] James: I love it. That's amazing. It's so kind of crazy.

[00:29:36] Frank: Yeah.
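The "crazy graph optimization problem" Frank ran into can be sketched with classic dynamic programming: treat every character position as a graph node, every vocabulary match as an edge, and search for the best path. This toy version scores paths by token count; real tokenizers like SentencePiece weight the edges with learned probabilities, and the vocabulary here is made up:

```javascript
// Find the segmentation of `text` into vocabulary tokens using the fewest
// tokens, via a shortest-path dynamic program over character positions.
function bestSegmentation(text, vocab) {
  const n = text.length;
  const bestCost = new Array(n + 1).fill(Infinity);
  const bestTok = new Array(n + 1).fill(null); // token ending at position i
  bestCost[0] = 0;
  for (let i = 0; i < n; i++) {
    if (bestCost[i] === Infinity) continue; // position unreachable
    for (const tok of vocab) {
      if (text.startsWith(tok, i) && bestCost[i] + 1 < bestCost[i + tok.length]) {
        bestCost[i + tok.length] = bestCost[i] + 1;
        bestTok[i + tok.length] = tok;
      }
    }
  }
  if (bestCost[n] === Infinity) return null; // no segmentation exists
  // Walk back from the end to recover the winning tokens.
  const tokens = [];
  for (let i = n; i > 0; i -= bestTok[i].length) tokens.unshift(bestTok[i]);
  return tokens;
}
```

This is also why Frank's "how I would do it" greedy attempt scored 4% rather than 0%: greedy longest-match often happens to pick the optimal path, and fails only on the strings where a locally longer token forces a worse split later.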
So, to answer your question, honestly, this is kind of a culmination of where I was going with everything. I now know that I can take these neural networks that I know how to train, I know how to convert them to forms where they can run on the web and on mobile, and I know how to write all the glue code that's needed to actually support them, to use them in their proper form. Which basically just gets me back to Continuous. Now I actually have to integrate one of these into Continuous, and I have other app ideas too. There's a fun idea I keep playing around with my friends: I've learned a lot about these networks, and I wanna do, like, a joke rater. You type in a joke and it gives you a rating: how good is your joke? Fun little things like that. But I have other app ideas. Number one is I wanna bring this into Continuous. And then, can I pitch you another, even crazier idea after that?

[00:30:33] James: Yeah, you can pitch me all the ideas, Frank. I'll take Frank ideas all day, every day.

[00:30:37] Frank: This is gonna take a little leap of faith. All my best ideas... you gotta sit back and just try to guess what I'm trying to think of here. Circuit files, James. In the end, are they not just text files? And you already know where I'm going, don't you? No, no, go on, go on. No? Okay. Okay. So a circuit file is a text file. It lists out all the things in the circuit and their connections and all that stuff. Is that not just a language? Can I not just learn that language with a neural network? And then can I not just do the GPT trick, but instead of generating boring stories or summarizations, what if, as you're building a circuit, it suggests the next component to put in? It suggests the next wire or connection to make? Or Clippy comes up and says, I see you're trying to write a letter, and here's some helpful advice for it.
I think that these kinds of language models still haven't been fully exploited yet. I should say, these exact language models are what's fueling the new image generation renaissance, the AI image generation. OpenAI just released... Stable Diffusion is just an absolutely gorgeous image generator, and it's just one of these neural networks. It's just one of these things that can run in the browser; it can run on the phone. And so I think we're just starting to tap the area, and I think I want a predictive circuit editor. I think I want that.

[00:32:10] James: I like that. I feel like you're gonna get to the point where iCircuit, again, a little complimentary here, sort of completes the circuit for you, like Visual Studio and Copilot write the stuff for you. So here you go.

[00:32:25] Frank: Exactly. Yeah, I want the Copilot for circuits. Yeah. There's your elevator pitch, I guess. I like it.

[00:32:32] James: We're getting there.

[00:32:33] Frank: Oh my goodness. Oh God, we really are. I mean, this is running in a web browser, of all places. Put this in a powerful environment, let me use a few more CPU cores, let me use your GPU, and I can do a lot. Oh yeah.

[00:32:47] James: Oh yeah, I love it. Well, I wanna ask one more thing before we get outta here: was the easiest or hardest part the actual static hosting?

[00:32:58] Frank: Oh, we have to talk about this for a few seconds. I will go with: one of the easier parts. I'm not gonna say it was perfectly smooth, James; I did get confused a couple of times. So for reference, everyone, I was using Cloudflare for a lot of these things. But James showed me that Azure can also very easily host these static websites, and the rule is 250 megabytes for the free tier. You can get more.
And so, uh, when I was talking about quantizing the models earlier, bringing 'em down in size from 32-bit to 8-bit, there's a very practical reason for that, and that's so I can fit them onto the Azure websites. But I knew that going in, so no, no shade, no lemonade against Azure. They say it on the tin: 250 megs for the free tier. So no problem there. Um, I got my GitHub all set up. That was exciting. Uh, it's funny, somehow CloudFlare does its deployments without a GitHub Action file, but when you hook up Azure Static Web Apps, it creates a GitHub Action file. Huh. In the end, I think I prefer the GitHub Action file because I can customize it later. And I did, I ended up messing around with the build a little bit because it was being a little picky, picky about files and things. [00:34:16] James: Yeah, that's, actually, I'm assuming that CloudFlare is probably looking at, um, like webhooks, and then it's pulling the files down automatically, where I do like the other one, which gives you the control. I wonder if, I wonder if they do have an Action that you could set up if you wanted to. Yeah, but I do like that, I always like when I have more control over it. Like, hey, set up the thing for me, and then, maybe that works, maybe that works great, right? Yeah. Um, okay, and then go from there. Yeah, yeah, [00:34:45] Frank: yeah. Uh, now one thing that got me a little bit upset: CloudFlare lets you pick the subdomain that's gonna be hosted on their domain. It's on their domain, right? But you can pick the subdomain. Mm. Uh, Azure, for some reason, generates a random one and doesn't let you change it. It seems weird, honestly, and it wasn't just me. I Googled around, you can't change the name for some reason. I suppose they're trying to prevent name squatting and such, but their automatically generated names are a little horrendous. So that was a little sad, because I had forum pages to do.
Sorry, for the other project, I, I plan on actually moving it over to Azure now. Um, there you could pick the subdomains, but now I couldn't. So I was immediately forced into their only option: you can't change the subdomain, therefore you have to use a custom domain. I was like, oh brother, I'm not gonna pay for a whole domain just for this stupid demo of this stupid JavaScript library that I wrote. Uh, fortunately, thank you, Microsoft, they allow CNAMEs, which means you can do a subdomain off your own domain. Great, love it. Mm-hmm. But then I'm like, but wait a minute, if I'm CNAMEing this thing, I'm not gonna have SSL. Oh yeah, who's gonna sign the thing? And does that mean I have to put CloudFlare in front of my Azure in order to get SSL? And I'm like, no, that sounds terrible. And then, James, Microsoft came, Microsoft came through. They're like, hey, if you set up your CNAME correctly, we'll do the cert. And I'm like, but that's impossible, Azure, you can't do the cert. And they're like, sh, sh, Frank, we'll, we'll do the cert for you. And somehow, magically, I have an SSL cert on a domain that I never put an SSL cert on, but a magical server out there decided it's good enough. I don't know, I, I didn't investigate too deep, but there's an SSL cert on my CNAME. And so I just put it as a subdomain off of praeclarum, uh, which is fine. Honestly, I, I would rather have transformers-js.praeclarum.org than transformers-js dot azure-look-at-me-I'm-hosting-you dot com. Yeah. Yeah. So that's cool. It's fine. Yeah. At first I found it super annoying, but in the end it came out better. [00:37:08] James: Nice. It all came together. It's a whole series of strings and tubes. [00:37:12] Frank: And thanks to you. Um, I, I didn't know about the Azure stuff, and I definitely will adopt it, because as much as I love having 8,000 different web hosts, it is nice to have everything under one roof. I value simplicity. Yeah. [00:37:26] James: I agree with that.
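The quantization Frank mentioned earlier, shrinking model weights from 32-bit floats to 8-bit integers so everything squeezes under the 250 MB free tier, can be sketched roughly like this. This is a minimal linear-quantization toy, not the scheme his library actually uses (real quantizers add per-channel scales, zero points, and calibration), but the size arithmetic is the same: 4 bytes per weight becomes 1 byte per weight, a 4x reduction:

```javascript
// Toy linear quantization: map 32-bit float weights onto 8-bit ints.
// Each weight is stored as round(w / scale), where scale is chosen so
// the largest magnitude maps to the edge of the int8 range.
function quantize(weights) {
  const maxAbs = Math.max(...weights.map(Math.abs), 1e-12);
  const scale = maxAbs / 127; // use the symmetric range -127..127
  const q = Int8Array.from(weights, (w) => Math.round(w / scale));
  return { q, scale };
}

// Recover approximate floats at inference time: w ≈ q * scale.
function dequantize({ q, scale }) {
  return Float32Array.from(q, (v) => v * scale);
}

const weights = Float32Array.from([0.5, -1.0, 0.25, 0.0]);
const packed = quantize(Array.from(weights));
const restored = dequantize(packed);

// 4 bytes/weight down to 1 byte/weight:
console.log(weights.byteLength, packed.q.byteLength); // 16 4
```

The trade-off is a small rounding error in every weight, which is why a quantized model is slightly less accurate than the 32-bit original, and why it can fit on a free static host.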
Well, cool. You did it. I'm proud. I will put links to everything in the show notes. He's had a great blog post detailing all the stuff that we kinda talked about today, too. Sounds very cool. I'm very proud of you, Frank. I love it. Yeah, I can't wait to see what you do with it next. [00:37:37] Frank: That's the cool part. Yeah. Well, keep an eye out for the, uh, .NET 6 version of it, because obviously I'm gonna need that to put it into my own apps. So that's true, that, that's hot on the heels. I was just excited to get this web version out. [00:37:51] James: Nice. I like it. I like it. And I like talking about other stuff, random cool JavaScript stuff. Mm-hmm. It happens. It's cool. All right, man. Ah, sometimes Frank talks about stuff that's way over my head, but he tries to break it down for me. We'll see how much I retain from this when he gives us the update about all the other awesome stuff he's doing with it. [00:38:08] Frank: It's a journey. It's a journey. You and me: meme generation, circuit generation. Generate all the [00:38:13] James: things, all the things. Yeah. In the end, generate all the things. Well, I think that that's gonna do it for this week's podcast. Frank, what do you think about that? Mm, [00:38:21] Frank: uh, I, I appreciate you letting me do two ML episodes within a few weeks of each other. We'll get back to mobile development, I promise you, everyone. So I appreciate it. And, uh, it wasn't over your head. You, you, you know, you know the JavaScript, you just don't like to admit it. [00:38:38] James: That's true. You just, yeah, that's true. Well, we do have some exciting Apple stuff upcoming, so I'm excited about that. Some iPhone events and a whole lot more, so stay tuned for that, folks. But until next time, this has been another Merge Conflict. I'm James Montemagno, [00:38:54] Frank: and I'm Frank Krueger. Thanks.