00:00:10 Matt Welcome everyone to the .NET MAUI Podcast. We're here to keep you up to date with the latest and greatest .NET client development. 00:00:22 Matt We'll talk about some Azure, some Visual Studio, some Blazor, and of course 00:00:26 Matt .NET MAUI. 00:00:27 Matt I'm Matt Soucoup, but today we're going to talk about a library 00:00:30 Matt that lets you add 00:00:31 Matt on-device machine learning to your apps. 00:00:34 Matt And that library is ONNX. 00:00:37 Matt I'm joined today by Scott McKay from the ONNX team and Mike Parker from the Microsoft mobile customer advisory team. 00:00:44 Matt But before we get into ONNX, machine learning, and overall making your app smarter, let's get a little bit of background on our guests. Scott, 00:00:52 Matt I'll start with you. 00:00:53 Matt So tell our listeners a little bit about 00:00:55 Matt yourself and how you got into machine learning. 00:00:58 Scott Hi Matt, thanks for having me on the show. 00:01:00 Scott I got into machine learning a few years ago when I started working on the ONNX Runtime team. 00:01:07 Scott So ONNX Runtime implements ONNX, and it's used to actually do your inferencing. 00:01:13 Scott I didn't come from a data science or ML background, so it is a bit of a steep learning 00:01:18 Scott curve when you get started, for sure. 00:01:20 Scott And yeah, I've been at Microsoft for over 10 years now, and before that, places like Yahoo and AltaVista. 00:01:28 Matt Wow, from the search giants. 00:01:30 Scott Yes, from the old days. 00:01:31 Matt If you will, from 00:01:31 Matt the good old days. All 00:01:35 Matt right, Mike, a little bit about yourself. 00:01:38 Mike So yeah, thanks for having me on, Matt. So I'm a principal software engineer from the modern client app customer advisory team, and we're still working on a shorter name, 00:01:47 Mike in case you're wondering. So I've been with this team for about five years now, and a further 10 years at Microsoft before that. 00:01:55 Mike So in our team, our goal is 00:01:57 Mike to help customers be successful and learn how they're using our products, with the ultimate aim of taking those learnings back to help drive product improvement. 00:02:09 Matt Great, and yeah, we've had Mike and your teammates on before, because y'all, 00:02:14 Matt as you mentioned, help our customers and teams within Microsoft implement solutions on Xamarin and 00:02:22 Matt .NET MAUI going forward, which is great. So Scott, let me ask you something. So, ML, as you mentioned, 00:02:30 Matt has a pretty big, steep learning curve, and ONNX, what exactly does that mean? 00:02:37 Matt Overall, what is ONNX? 00:02:40 Matt And then we'll get into machine learning. 00:02:40 OK. 00:02:43 Scott So ONNX is basically an open standard for 00:02:48 Scott describing a machine learning model. So you have different frameworks that can be used to create models; the biggest ones, of course, are TensorFlow and PyTorch. 00:02:59 Matt I thought... 00:03:00 Scott And they have their proprietary model formats. So a few years back, a number of companies got together and decided, let's have an open format, 00:03:07 Scott so that if you want to 00:03:08 Scott change the framework you use for training and use a different one for running the actual model,
00:03:15 Scott you'd be freer to move between those. So ONNX itself is that standard that lets you define a model in a generic way, and then different companies have implemented products that are capable of reading and running ONNX models. 00:03:29 Scott And our particular software for that is called ONNX Runtime. 00:03:33 Matt All right, so ONNX is like the, I guess, API surface, if you will, that lets you port models between different 00:03:45 Matt runtimes. 00:03:45 Yeah, it's... 00:03:46 Scott probably close. It's like, say, XML: you might have a 00:03:49 Scott Word document that you could export to XML, and then someone else could read the XML and convert it to PDF. 00:03:55 Matt Gotcha, all right. 00:03:56 Matt Well, that's cool. And before we get too far, we just had a special guest join, Manash, 00:04:04 Matt who was caught in traffic. 00:04:07 Matt So Manash also works on the ONNX team. So Manash, could you just give us a real quick overview of yourself, a little bit 00:04:14 Matt about your background? 00:04:16 Manash Yeah, hi everyone, and thanks for accommodating my late joining here. 00:04:20 Manash So my name is Manash Goswami and I work alongside Scott. 00:04:24 Manash I'm a PM in the ONNX Runtime team and I look after our ONNX Runtime mobile project. 00:04:33 Matt Great. And so, Manash, we just had Scott tell us, you know, a little bit about what ONNX was. 00:04:38 Matt So tell us, we're going to jump in then to machine learning, and, I guess, 00:04:43 Matt overall, what is machine learning on device? 00:04:48 Matt Why would I want to do machine learning on a phone? 00:04:53 Manash So there are... 00:04:54 Manash So let's start with what is machine learning. 00:04:57 Manash Machine learning is this field of computer science whereby you can use software and math to apply and implement automation on tasks and different workflows. At a high level, you 00:05:12 Manash would build models, which are these mathematical representations of decision trees that otherwise a human would have to make, and so you teach this model specific things by training it, and then you deploy that as a part of your application. Now, 00:05:33 Manash why would we use machine learning on mobile devices? 00:05:36 Manash So today, as we started out with the evolution of ML, it proliferated widely in cloud services, because with access to literally infinite compute and storage and lots of data, we were able to build these really sophisticated models and deploy them 00:05:53 Manash in the cloud. 00:05:54 Manash And we would send user requests to the cloud, process them, then send results back. 00:05:59 Manash That's great, but only for a particular, you know, set of scenarios. When we look at mobile phones, they're constantly getting smarter and constantly getting more powerful, 00:06:10 Manash and therefore they now have the capability to run some of these models locally, 00:06:16 Manash on the device itself. 00:06:17 Manash That makes for richer AI scenarios, 00:06:21 Manash more efficient execution of code, because you don't have to keep sending data back and forth to 00:06:26 Manash the cloud, 00:06:27 Manash and then finally, you actually preserve privacy 00:06:30 Manash and keep users' data on the device with the user. 00:06:33 Manash So all of those reasons allow us to build really strong, robust AI solutions and run them on the user's devices. 00:06:42 Matt OK, so we have AI, which is artificial intelligence, and then 00:06:46 Matt there's ML.
00:06:48 Matt So how does AI relate to machine learning itself? 00:06:51 Matt 'Cause they're the same but different, right? 00:06:55 Manash So AI is the overall concept of implementing human intelligence into machines. 00:07:05 Manash Machine learning is a concept, or a branch, whereby you use mathematical constructs and special techniques to build 00:07:15 Manash out that automation, as I said. 00:07:19 Manash And then the one other aspect of AI would be deep learning, and this is where you use neural networks to go build out your AI models. 00:07:30 Manash Today we are essentially in the transition from ML to deep learning 00:07:38 Manash in the field. 00:07:39 Matt All right. And so ONNX itself has neural networks. 00:07:43 Matt That's the two Ns in ONNX, right? 00:07:45 Manash Right. 00:07:45 Manash ONNX, as Scott explained, is a format to represent our neural networks, and as Scott was saying, ONNX is essentially a portable format to take models 00:08:00 Manash that are trained in the cloud, or anywhere else, and be able to port them across different environments and different devices. 00:08:08 Manash Yes, and the NN in ONNX is neural networks; it stands for Open Neural Network Exchange. 00:08:11 Matt So Scott, then, the benefit of using 00:08:15 Matt ONNX Runtime over TensorFlow Lite, what's, 00:08:20 Matt what is that? 00:08:22 Scott So yeah, the different products are essentially doing the same thing. 00:08:26 Scott With ONNX Runtime, we structure it in a way so that it should be easy to run the same model on as many different devices as you can think of. 00:08:36 Scott So if you're running on desktop, it would be able to take advantage of, say, if you've got an NVIDIA video card, it can use that to run the model. 00:08:43 Scott The exact same model can be used in mobile scenarios, so we've got Android and iOS packages, and those packages have accelerators for the different NPUs, 00:08:53 Scott the neural processing units, on the phones. 00:08:57 Scott So really, the ONNX Runtime approach is: one model runs everywhere, and at runtime we optimize different things to adjust to the device it's 00:09:06 Scott on. So this is all similar to what something like TensorFlow Lite might do. 00:09:10 Scott It's slightly different in that you're probably using TensorFlow Lite on a mobile device, 00:09:17 Scott whereas you'd use full TensorFlow on a desktop, so there are some differences between the model used on different devices there. 00:09:24 Scott Or take PyTorch, for example, on mobile: if you want to use the NPU on the phone, you have to export the model differently. So 00:09:34 Scott we focus on performance, and we focus on usability as well. 00:09:38 Matt What kind of things am I able to do with ONNX? 00:09:41 Manash So let's start with the model zoo. 00:09:44 Manash The ONNX Model Zoo provides you a set of pre-built, pre-trained ONNX models that are there for you to get started quickly. 00:09:54 Manash They have been trained and converted, so it becomes like your ready-made 00:09:59 Manash getting-started point. 00:10:02 Manash And the ONNX Model Zoo has models that address scenarios across language, vision and others. 00:10:10 Manash So it's a great way to get familiar. 00:10:14 Manash In terms of what you can do with ONNX, we have converters, as Scott also said, converters to convert to ONNX from the different training frameworks.
00:10:25 Manash Now, data scientists, when they build models on their own, they would use a training framework like PyTorch or TensorFlow. 00:10:31 Manash Those are the two popular ones. 00:10:32 Manash Now, there were many beforehand, you know, 00:10:35 Manash a few years ago; 00:10:37 Manash PyTorch and TensorFlow are dominant today. 00:10:39 Manash The advantage with ONNX is that within an organization you may have data scientists familiar with PyTorch, and they may be, you know, more comfortable dealing with the way PyTorch works, because they learned that at school or somewhere in a previous job, and so they build their models with PyTorch. 00:10:57 Manash But then you may have another group of data scientists who are traditional TensorFlow users, because, you know, organizations typically cannot always harmonize towards one thing. 00:11:08 Manash Then ONNX allows you to bring all of those, 00:11:11 Manash all of those models, into a common format and run them with a single runtime across this variety of different environments, right? 00:11:20 Manash So that is on the training side, bringing training into ONNX. 00:11:26 Manash But then with ONNX Runtime, 00:11:28 Manash given that we are multi-platform, 00:11:31 Manash we actually take the same ONNX model, and then, because of our execution provider interface, we're able to optimize the model differently for the different types of hardware endpoints we have. 00:11:44 Manash So the execution provider is like this pluggable library that is 00:11:50 Manash actually tightly coupled with the different types of hardware that you have out there. You have PCs, 00:11:54 Manash you've got Macs, mobile phones from Android as well as iOS, and then you've got these large data centers in the cloud, 00:12:01 Manash right? And with ONNX Runtime, with a different execution provider plugged in, 00:12:07 Manash you can then take the same ONNX model and optimize it for all of these endpoints, so you get a lot of flexibility to port your code across these endpoints, to run your code across these endpoints. And then, on the data science side, 00:12:23 Manash you now allow them to 00:12:24 Manash harmonize all of their training work 00:12:27 Manash into a common format to go deploy anywhere.
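A minimal sketch of what that execution provider interface looks like from the app side, using the SessionOptions API in the Microsoft.ML.OnnxRuntime C# bindings. The model path is a placeholder, and this assumes a package version recent enough to include the NNAPI and Core ML providers:

```csharp
using Microsoft.ML.OnnxRuntime;

// Sketch: pick an execution provider per platform. ONNX Runtime keeps
// the CPU provider registered underneath, so anything the accelerator
// can't handle can still fall back to CPU.
var options = new SessionOptions();

#if __ANDROID__
// Android's Neural Networks API (NNAPI) fronts whatever NPU/DSP/GPU
// the device vendor exposes.
options.AppendExecutionProvider_Nnapi();
#elif __IOS__
// Core ML plays the same role on Apple hardware.
options.AppendExecutionProvider_CoreML();
#endif

// "model.onnx" is a placeholder for your bundled model file.
using var session = new InferenceSession("model.onnx", options);
```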
00:12:31 Scott And it's probably good to point out that there's an ONNX Model Zoo, but because there are converters from TensorFlow and PyTorch and scikit-learn to ONNX, you can also grab models from the TensorFlow model zoo, the TensorFlow Lite model zoo, or the PyTorch Model Hub, 00:12:46 Scott I think they call it. So it doesn't really matter where you get the 00:12:49 Scott model you want to use from; 00:12:51 Scott there should be a converter to convert that to ONNX. 00:12:54 Mike It's probably worth also pointing out services such as Custom Vision: they actually output directly to ONNX format as well, and those models are quite easy to consume. 00:13:05 Matt I love this, so-called model zoos where you get all the different specimens to use. But so, 00:13:14 Matt Mike, you brought up a good point with, like, Custom Vision. 00:13:17 Matt A lot of times 00:13:19 Matt that's all trained in the cloud, but then you can bring it down to on-device. 00:13:23 Matt And obviously ONNX is going to run on device, so what is the performance like? 00:13:28 Mike Well, I'll probably pass that one on to Scott, but I think from working with him, one thing I've learned is that performance completely varies according to the model, 00:13:39 Mike the hardware it's run on, as well as the execution provider, and, 00:13:44 Mike frankly, it's a process of experimenting with all of those options and deciding what works best for that particular case. 00:13:52 Scott Yeah, absolutely. 00:13:53 Scott Like, different models have different operations they perform, which cost different amounts depending on the device. 00:14:01 Scott So if parts of the model can run on the neural processing unit on a mobile device, it's probably going to go well. 00:14:09 Scott However, if those parts aren't all joined together in the model, like there's a bit at the start and a bit at the end, then it's probably going to go badly, because going between the CPU and the NPU takes time. 00:14:20 Scott So, 00:14:23 Scott between the execution provider infrastructure and our CPU implementations, we have various ways to run the model, and sometimes it's just a case of: try it out with the NPU, try it out with the CPU, see what's fastest. 00:14:39 Manash And I'd like to add that, 00:14:42 Manash even within a set of devices, 00:14:45 Manash like, say, Android, 00:14:46 Manash you have hardware from many different, 00:14:50 Manash you know, platform providers, and so with this common interface, something called NNAPI on Android, ONNX Runtime can make the determination whether that model would work on a particular kind of hardware, 00:15:06 Manash because it's old, and then, you know, have the ability to fall back to CPU execution, which can give a lot of flexibility to at least execute the model even when you are running on older hardware. 00:15:20 Matt Well, that's super interesting, and actually it kind of gives you that fail-safe 00:15:24 Matt fallback, which is 00:15:26 Matt exactly what you need. 00:15:27 Matt I mean, 'cause you really cannot... 00:15:28 Matt I mean, that's the bane of every mobile developer's existence, that you cannot predict what device the app will be running on. 00:15:35 Matt So let's talk a little bit about the apps. I mean, what are some, 00:15:37 Matt I guess, just, 00:15:38 Matt what kind of models, 00:15:40 Matt what would I be using, some use cases? 00:15:45 Matt For you, Manash, I mean, what kind of apps can 00:15:48 Matt I build with this? 00:15:49 Manash We can look at, like, three different verticals: vision, text and speech. 00:15:57 Manash You can implement, or, you know, implement AI in any of these, and what I mean 00:16:02 Manash by that is, 00:16:03 Manash you can get vision signals, be it from your 00:16:06 Manash camera or a video stream that you're playing, and there could be intelligence built into a model to detect faces, you know, or, you know, you're scanning a check to deposit into your bank. 00:16:20 Manash You can deploy, 00:16:21 Manash you can implement AI to detect the form or the structure of that, 00:16:26 Manash uh, you know, check that 00:16:27 Manash you need to deposit, like the routing number and account number and all that. 00:16:31 Manash So that's where, you know, you can use vision-based models to kind of segment an image, detect specific parts, and then translate it from image to text, 00:16:43 Manash if there are numbers, for example. On the language side, we already see scenarios where, 00:16:51 Manash when you're typing something in a document or in an email, you get some predictions of what you may want to be typing next, or, you know, in the keyboard, when you're, you know,
00:17:03 Manash there's a lot of, like, correction to your spelling, or even prediction on what the 00:17:08 Manash next word might 00:17:08 Manash be. Those are examples of where we are implementing AI models that understand language 00:17:17 Manash or text. And then finally, the last category would be around speech, and what I mean by this is 00:17:24 Manash things around, you know, translating: 00:17:28 Manash being able to say something into your phone and asking for a translation back in a different language, or detecting, like, noise or ambient disruptions so that your phone calls are really clear. Those are examples where AI is 00:17:45 Manash implemented to kind of clean up the speech signal, as an example, to deliver a better experience. 00:17:54 Mike Yeah, certainly based on the enterprise customers that our team has been working with, we've certainly seen, you know, a focus on trying to reduce, like, error-prone or repetitive manual steps, so, you know, staff, and field workers 00:18:10 Mike especially in our case, can focus on, I guess, more valuable things. 00:18:14 Mike So that could be just improving the quality of the input, so that it reduces time spent fixing issues later downstream; maybe offering suggestions, like Manash suggested, you know, so that it speeds up the input process 00:18:31 Mike overall. And in some cases we've seen scenarios where it's been helpful to identify safety considerations for people in, you know, busy or dangerous work environments. 00:18:42 Matt And that's really interesting, because a lot of times when you think of artificial intelligence, you think of, you know, the sci-fi 00:18:49 Matt factor, but 00:18:51 Matt it's like the everyday, as you mentioned, Mike. 00:18:53 Matt It's like cleaning up input, or, Manash, 00:18:56 Matt when you said it's like predicting text, I mean, that's stuff people use every day, and you probably don't think of it: 00:19:02 Matt that's artificial intelligence doing that. 00:19:05 Matt It's not a big if-then-else 00:19:08 Matt statement; 00:19:10 Matt there's "smarts", in quotation marks, behind all that stuff. So that's really neat, and you can actually, with the right model, 00:19:17 Matt start implementing that in your own applications, which is really neat. And so, Scott, 00:19:25 Matt if I was going to then start... 00:19:28 Matt Let's say I picked my model from the model zoo. 00:19:31 Matt How do I go about now starting to get that to work in a Xamarin app? 00:19:36 Matt What are my general, really high-level steps to go about getting this integrated? 00:19:42 Scott Sure, so the high-level steps are: there are a couple of NuGet packages. There's one, I think it's called Microsoft.ML.OnnxRuntime, which, sorry, contains the native libraries. So that is the 00:19:57 Scott implementation of the pieces that will run the ONNX model. 00:20:01 Scott We've got native libraries for Windows, for Mac, for iOS, for Android, et cetera. 00:20:06 Scott And then there's a managed package on top of that, which gives you the C# bindings for calling into ONNX Runtime. 00:20:14 Scott So those will give you the 00:20:15 Scott pieces to run the model, and then you've got your model itself in ONNX format. 00:20:21 Scott So what you need to do then is figure out how you're going to create the 00:20:26 Scott input to your model and then read the output.
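As a minimal sketch of those steps with the managed bindings (the model path, input name, and shape below are placeholders; a real model's names and shapes come from its documentation, or from a viewer like Netron, which comes up in a moment):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Load the model once; sessions are relatively expensive to create.
using var session = new InferenceSession("model.onnx");

// The input has to match the shape the model was trained with;
// here we assume one 224x224 RGB image.
var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
// ... fill 'input' with preprocessed pixel data ...

using var results = session.Run(new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor("input", input)
});

// Outputs are named as well; this model is assumed to emit
// one score per class.
float[] scores = results.First().AsEnumerable<float>().ToArray();
```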
00:20:29 Scott So creating the inputs can be a little tricky, 00:20:33 Scott because it needs to match the format that the model was trained with. 00:20:39 Scott So, for example, say you've got an image-processing model that detects animals and labels the animal, 00:20:46 Scott just as an example. 00:20:47 Scott So if you got the pre-trained model, that would have been trained, most likely, with images of a certain size and in a certain format. So, for example, it might be 256 by 256, 00:21:00 Scott and it's red, green, blue. 00:21:02 Scott So when you get your image from, say, the device camera, you need to alter it to match that format, so that the results from the model are good. 00:21:06 Speaker 2 Right. 00:21:12 Scott So this is generally called preprocessing. 00:21:17 Scott And then essentially you call ONNX Runtime with that, and it will spit out some results that, for example, you'll match to labels, in that case. 00:21:25 Scott So what you do around the call to ONNX Runtime is determined by the model, 00:21:31 Scott and how the model was trained, 00:21:33 Scott and then what sort of results the model outputs. So different models will have different outputs, but remember, it's going to be numbers, or maybe text at the most. 00:21:42 Scott So, for example, you might need to convert an index number to a label for 00:21:46 Scott "this is a dog" or "this is a cat". 00:21:49 Scott But all of that is model-specific, and that's probably one of the tricky things to do when you're trying to use an ONNX model. 00:21:57 Scott Yeah, those are probably the high-level steps. Mike can probably speak a bit more about the experience, as he's actually gone through these steps and can probably tell you some of the pain points. 00:22:07 Mike Yeah, certainly. I think, uh, 00:22:08 Mike a great app that Scott introduced me to was Netron, 00:22:13 Mike and I think that was really helpful, 'cause what that does is it allows you to load up your model, and it visualizes everything that goes on, including giving you the names for the inputs and the outputs, and the 00:22:25 Mike format that the input and the output is in, so you can more easily understand what you need to do with that, 00:22:32 Mike you know, for example, once you've got the results. 00:22:34 Mike I think, if I'm honest, some models do a great job with the documentation, and with other models 00:22:41 Mike it's more difficult, especially for those who don't necessarily have a traditional data science background. So 00:22:50 Mike sometimes it's a... 00:22:52 Mike it's an iterative process, shall we say. 00:22:55 Mike But ultimately, like Scott says, it's a matter of making sure, you know, your input is what the model expects, and you understand what the model gives you as output, so that you can 00:23:07 Mike make sense of it. 00:23:09 Matt I loved how Scott, in his answer, kind of danced around the point that 00:23:13 Matt you have to read the documentation for the model. 00:23:16 Scott I'm fine. 00:23:18 Scott So, things in a model zoo or a model hub will have more documentation. 00:23:23 Scott But if you have, say, some random thing that you want to do, and you go off and you find a model, it 00:23:28 Scott may be 00:23:28 Scott that that model lacks clear documentation about what input and output it expects. 00:23:33 Scott Then it might just assume, well, you know PyTorch, so you know where
00:23:37 Scott to look, to kind of infer those things. So it can be a bit of an adventure to figure out. 00:23:42 Manash And also, we recognize this problem, 00:23:45 Manash and as a 00:23:47 Manash team, right, 00:23:48 Manash both Scott and Mike have prepared amazing samples that are out there. 00:23:54 Manash They may not address 00:23:55 Manash every, you know, scenario, but they could serve as a good getting-started point for, say, something 00:24:02 Manash around vision, and we will continue to expand these samples to enable users to kind of, you know, address 00:24:11 Manash scenarios around, you know, vision, text and speech. 00:24:15 Scott Yeah, and we're also in the process of creating some infrastructure where we can add some of that preprocessing to the model. 00:24:24 Scott So what tends to happen 00:24:25 Scott is it's not 00:24:26 Scott part of the 00:24:27 Scott model, because when you're doing the training you want to do it as quickly as 00:24:31 Scott possible, so you don't want to be, say, resizing and reformatting an image every time you do a training run. 00:24:38 Scott You want to do that ahead of time, so that you save processing time when you're training. So that leads to these models that do a great job in terms of accuracy, but the preprocessing happens outside of the model. 00:24:49 Scott So we're working on some infrastructure to allow you to add some of that preprocessing into the model, which would mean that in a mobile scenario it's more about: just capture the image and give it to us, and we'll be able to do any resizing or reformatting that's required, because we gave you a way to do that in the model. 00:25:06 Matt Now that would be amazing. 00:25:08 Matt So I don't have to actually resize my image down; it just works. 00:25:11 Matt It just works. 00:25:12 Matt That would be really amazing. 00:25:14 Matt Not that it just doesn't work now, but it's just... 00:25:16 Scott Yeah, let's help developers learn to adopt ONNX Runtime. 00:25:20 Matt Yeah, that would be something else. 00:25:23 Matt And then, as you mentioned about the samples, I know, Mike, you wrote a blog post a couple of weeks back about using ONNX, 00:25:31 Matt and that was great. 00:25:32 Matt Can we get a quick little rundown of 00:25:34 Matt what was all in there? 00:25:35 Mike Yes, I think what I was trying to do with that, really, is just provide an introduction to ONNX Runtime in general, 00:25:42 Mike by providing a really stripped-down, simple example. And it uses a classification model called MobileNet, and so it just takes 00:25:52 Mike an image, goes through a similar 00:25:54 Mike set of steps that Scott described to preprocess that image, but ultimately it just shows you, you know, the high-level steps of: load 00:26:03 Mike the model up, create your inference session, 00:26:06 Mike run the model based on 00:26:08 Mike you know, the preprocessing according to the instructions, then taking that output, mapping it to the right label, 00:26:14 Mike and just showing an alert. 00:26:16 Mike So the real goal with that was really just to, you know, bring those concepts to life and provide some links for folks who want to take that a little bit further and 00:26:27 Mike use their own models. 00:26:29 Matt It's a great read, and we'll put the link to that blog post in the show notes for this, and I highly encourage everybody 00:26:35 Matt to go check it out and get started with this.
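The rough shape of those steps in code might look like the following. To be clear, this is not Mike's actual sample: the input name and the ImageNet-style normalization constants are assumptions that happen to be conventional for MobileNet-family models, and `pixels` is presumed to be RGB bytes from an already-resized image.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

static class Classifier
{
    // Preprocess: scale bytes to 0..1, then normalize each channel
    // with the (assumed) ImageNet mean and standard deviation.
    static DenseTensor<float> ToTensor(byte[] pixels, int height, int width)
    {
        var mean = new[] { 0.485f, 0.456f, 0.406f };
        var std  = new[] { 0.229f, 0.224f, 0.225f };
        var tensor = new DenseTensor<float>(new[] { 1, 3, height, width });
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                for (int c = 0; c < 3; c++)
                {
                    float v = pixels[(y * width + x) * 3 + c] / 255f;
                    tensor[0, c, y, x] = (v - mean[c]) / std[c];
                }
        return tensor;
    }

    // Run the model and map the highest score back to a label: the
    // "index number to dog-or-cat" step Scott described earlier.
    public static string Classify(
        InferenceSession session, byte[] pixels, string[] labels)
    {
        var input = ToTensor(pixels, 224, 224);
        using var results = session.Run(new[]
        {
            NamedOnnxValue.CreateFromTensor("input", input)
        });
        float[] scores = results.First().AsEnumerable<float>().ToArray();
        return labels[Array.IndexOf(scores, scores.Max())];
    }
}
```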
00:26:42 Matt And so, Mike, I'm going to throw this one right to you, because I know this is a part of it that you 00:26:42 Matt were involved 00:26:42 Matt with. And how 00:26:44 Matt come Xamarin wasn't, were there, like, any technical reasons why Xamarin wasn't 00:26:49 Matt supported on ONNX Runtime before now? 00:26:55 Matt I guess, what were, 00:26:56 Matt what were the issues, and 00:26:57 Matt what solved them for us? 00:27:00 Mike Well, I mean, I think Scott alluded to this earlier. 00:27:04 Mike Under the hood, it's actually just using the native iOS and Android frameworks, so I suppose the pieces of the puzzle were always there, but they weren't included in the NuGet 00:27:17 Mike packages themselves. So 00:27:19 Mike the few things that 00:27:21 Mike had to change slightly: one was to use multi-targeting, so there are some specific Android and iOS steps that make sure the right library gets included, 00:27:30 Mike you know, at build time, in the right place. 00:27:33 Mike And then there are a few platform-specific 00:27:36 Mike differences to the way that we're using P/Invoke to call the underlying libraries, and that's, for example, 00:27:42 Mike whether it's a static library or a dynamic library, or, in certain cases, to deal with some AOT-specific challenges. 00:27:50 Mike So, I think, changing, you know, dynamic function pointers just to use the right attributes, so that 00:27:57 Mike when it's built, it can get handled correctly for AOT. 00:28:01 Mike And then I think 00:28:04 Mike everything else is pretty much there, really. 00:28:06 Mike I mean, the team have a really interesting and elaborate DevOps process, and maybe Scott can talk about that in more detail. 00:28:15 Scott Well, we build for a lot of platforms, and it was only about a year ago, 00:28:21 Scott I think, or a year and a bit, that we added the iOS and Android builds. So ONNX Runtime is built for just about every platform you can imagine, 00:28:29 Scott and then with different combinations: like, TensorRT is used in places, we've got OpenVINO, 00:28:36 Scott there's a whole bunch of different accelerators, and so the DevOps to build this is, like, it's cross-platform to 00:28:43 Scott a massive degree. 00:28:47 Matt That's cool. I mean, well, 00:28:49 Matt so Mike, I kind of like how you played it. 00:28:51 Matt There was a lot of work involved. 00:28:52 Matt I mean, you were talking about dynamic pointers and everything else, on-device, platform-specific, and you made it sound easy. 00:28:59 Matt I'm sure it was not easy at all. 00:29:01 Matt But now we 00:29:02 Matt can use it in a platform-independent way, 00:29:05 Matt which is amazing. 00:29:06 Mike What was quite interesting to me was, actually, 00:29:10 Mike so I started working with Scott because we decided to do a quick peek, you know, just to see how easy or difficult it would be to use it, 'cause we had a reason to use it in that case. 00:29:21 Mike And it turns out it was really 00:29:22 Mike quick to make it work with Xamarin. 00:29:24 Mike Unfortunately, it didn't necessarily consider the fact that it already had 00:29:30 Mike a lot of supported platforms, so I think it was interesting 'cause it informed the work we had to do to make it work over on the Xamarin side. 00:29:37 Mike But I think the real work is in sort of taking an existing product and 00:29:41 Mike trying to add Xamarin without necessarily, you know, throwing the baby out with the bathwater.
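As a simplified illustration of the static-versus-dynamic library difference Mike mentions, here is a hypothetical sketch, not ONNX Runtime's actual binding code (OrtGetApiBase is the entry point of the ONNX Runtime C API):

```csharp
using System;
using System.Runtime.InteropServices;

internal static class NativeMethods
{
#if __IOS__
    // On iOS the native code is linked in as a static library, so
    // P/Invokes resolve against the app binary itself.
    private const string Lib = "__Internal";
#else
    // Android and desktop load ONNX Runtime as a dynamic library.
    private const string Lib = "onnxruntime";
#endif

    [DllImport(Lib)]
    internal static extern IntPtr OrtGetApiBase();

    // The AOT wrinkle Mike mentions: a managed method handed to native
    // code as a function pointer must be a static method tagged with an
    // attribute such as [MonoPInvokeCallback] so the iOS AOT compiler
    // emits a native-callable wrapper for it.
}
```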
00:29:49 Scott Yeah, coming up with a project file that supports all those different platforms, and the different bits and pieces you need to do when it loads, is challenging. 00:29:57 Matt So, Manash, then, let's say somebody wants to start doing this. What 00:30:02 Matt kind of challenges can they expect when they're, say, brand new to ML? 00:30:07 Matt What should they be prepared to learn about as they get going? 00:30:12 Matt I mean, I guess, 00:30:13 Matt how can we 00:30:13 Matt ease the on-ramp? 00:30:15 Manash I think, at 00:30:15 Manash a high level, 00:30:17 Manash the biggest, uh, dissatisfaction or surprise for someone new would be 00:30:22 Manash that they did everything right, 00:30:25 Manash they integrated the model, and, you know, the application now runs, but when they're trying to look for cats, it cannot detect cats, right? So 00:30:35 Manash the reason that happens is usually because the data that was used to create the model 00:30:43 Manash does not represent or match the end business solution that they are trying to solve for, and that is going to require a lot of data science, which is outside the boundaries of what we have been talking about here. 00:30:58 Manash So, you know, once you get your application and the model and all the execution, 00:31:04 Manash and all of that, working right: 00:31:05 Manash do you have the right trained model that meets your business need? That becomes the, you know, uber question, and the investigation that would have to come together even after you've put together your solution. And that goes back to when I 00:31:23 Manash said, you know, 00:31:24 Manash if I'm building a model to 00:31:25 Manash look for cats, 00:31:26 Manash and I did everything right, and I put a cat picture in front of my phone, 00:31:32 Manash but it still can't detect a cat, 00:31:34 Manash well, guess what, 00:31:34 Manash you know, your model may not be right. 00:31:38 Manash That's the first thing. The other would be that, you know, there are different execution environments and build infrastructure required, so there's the execution and integration work, the hard work you need on 00:31:53 Manash the DevOps 00:31:53 Manash side, not on the data science side, 00:31:56 Manash which you'd have to, like, worry about 00:31:58 Manash and think about as you go into this. 00:32:01 Matt And Scott, I wanted to ask you one thing. I mean, I was just kind of checking out some of the overall documentation. 00:32:08 Matt So what's a tensor? 00:32:10 Matt Scott, we talk about TensorFlow, TensorFlow Lite... 00:32:10 Yeah, yeah. 00:32:13 Matt What's a tensor? 00:32:15 Scott Yeah, you'll see this term all over the place. 00:32:18 Scott At the end of the day, a tensor is just really a blob of data. 00:32:22 Scott It's a multidimensional array that represents some data. 00:32:28 Scott So to build out the different dimensions: say you've got a picture, it's got a height and a width. 00:32:34 Scott Those are two dimensions. 00:32:35 Scott If it's red, green, blue, there's a height-and-width's worth of pixels that are red, 00:32:40 Scott same for green and same for blue. 00:32:42 Scott So now you've got a three-dimensional thing. 00:32:44 Scott You've got three channels, which is the red, green and blue, and the height and 00:32:47 Scott width. And then often in a model you'll see a batch dimension. 00:32:52 Scott This is, again, more of a side effect from training, where instead of feeding it one image at a time, you might feed it 20, so that's the batch size. 00:33:00 Scott So you might have a blob of data that has the 20 images; each image has three different channels for the colors, and a height and width. So whether that's speech or text or what have you, the tensor is, 00:33:12 Scott essentially, the bits that represent that thing, and it's structured with these different dimensions for different attributes of it. 00:33:18 Scott So when you see you have to convert your input to a tensor, it's really just: put the data in the right order so that it can read the right bits from the right places.
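Scott's description translates almost directly into code; a tiny sketch, using his batch-of-20 example and the usual [batch, channels, height, width] ("NCHW") ordering:

```csharp
using Microsoft.ML.OnnxRuntime.Tensors;

// A batch of 20 RGB images, each 256x256:
// dimensions are [batch, channels, height, width].
var batch = new DenseTensor<float>(new[] { 20, 3, 256, 256 });

// "Converting your input to a tensor" is just writing each value to
// the right slot. E.g., the green channel of pixel (x=42, y=10) in
// image 5 of the batch (channel 1 = green in RGB order):
batch[5, 1, 10, 42] = 0.5f;
```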
00:33:26 Matt Back to the documentation, Scott. 00:33:27 Matt It's always in the documentation. 00:33:29 Matt But no, I'm glad you brought that up, because it's always about knowing what things are called, see; 00:33:36 Matt if you don't know what they're called, you're going to be lost even more. And, uh, a tensor is, like, one of those things that you would never guess what it is. Like, "matrix", you kind of have an idea: 00:33:45 Matt it's, you know, a two-dimensional array. But a tensor, 00:33:49 Matt I'm not sure what that would be. So, 00:33:51 Matt yeah, that's great to know about. 00:33:54 Matt So with that said, Mike, Scott, Manash, 00:33:58 Matt I want to thank you all very much for joining me today. 00:34:01 Matt I learned a lot. 00:34:02 Matt I cannot wait to start using ONNX. 00:34:05 Matt I'm going to think of a use case for it. 00:34:07 Matt I'm going to implement it. 00:34:08 Matt I'm going to put it out there. 00:34:09 Matt I'm going to talk about it. 00:34:11 Matt I just have to 00:34:11 Matt figure out what it's going to 00:34:13 Matt be yet. I'm going to 00:34:13 Matt go to a zoo and grab myself a model, and this has been great. 00:34:18 Matt So again, thank you all very much. 00:34:20 Matt This has been the .NET MAUI Podcast, and we'll talk to everybody next time.