episode279 === [00:00:00] James: Hi, Frank. I knew nothing about microservices or containers or Docker or hubs or Kubernetes or KEDA or services or instances or functions. I know about functions. I don't know anything about anything, Frank, but I'm really excited about Azure Container Apps. You know what I'm saying? [00:00:29] Frank: Uh, I actually don't. I do know about Azure — I've heard of that. I know about containers. We've actually done a few episodes on containers, so I think I know about that. And in the past I have actually indeed looked into running containers on Azure, which I know because every time I create a website, it's like, hey, is your website a container? And I'm like, no, not this time. One of these times it will be, but not this time, Azure. So I'm a little bit lost here, okay, at Kubernetes — I groaned at KEDA. I think I realized, oh, I'm out of touch. And I like to stay in touch, James. So bring me up to date. What is all this [00:01:06] James: stuff? I also forgot about the Dapr and the Envoy and other things. Okay, so here's what this is about, and I'm pretty excited about it. We're on the .NET Conf keynote that people should go watch tomorrow. .NET Conf — you're listening to this on the Monday. Visual Studio 2022 just launched today. Super exciting. Tomorrow, .NET Conf with .NET 6 — it's out. We'll talk about it next week on the podcast. But at the keynote, the Mr. Fowler is going to be there and he's going to talk about Container Apps, spoiler alert. He's going to talk about a bunch of stuff, but there's a Container Apps demo. And I learned about Container Apps with my buddy Nish, uh, Nish Anil, because he does like all the things microservices on our team, and he tried to explain it to me. Because I understand, like, Docker — here's a container and you put in the things — and then I kind of understand the Kubernetes for orchestration.
And then I get lost in the Dapr and the other stuff. But here's what they call it: a serverless container service for running modern apps at scale. And here's what it's about. Think about: what is the problem with running a web backend? For example, let's say you just put something in App Service, Frank. What's the problem? [00:02:20] Frank: Uh, for me, it's usually deployment and getting a new version up. You get into multiple deployment slots and that kind of stuff. And then once I have that figured out, it's getting the DevOps hooked into all of that. And then inevitably my API key goes stale after a while and I've forgotten about it, and yeah, all sorts of things. I guess — what was I supposed to say? What is the problem being solved here? [00:02:49] James: Well, you know, when you think, hey, I'm going to take my web API and shove it in an App Service — well, I'm going to have a web server that's always on, right? I gotta pay for that. Okay? [00:02:56] Frank: Yeah, it's always on, and it's in for a penny, in for a pound, because you go from the free tier — which they tell you, do not use this, this is for dev only, do not run production stuff on here — to the next one that's like 30-ish bucks, or even more sometimes. And you're like, hmm, I don't know. Is there no middle ground in here? So yeah, for a service that's literally just me and my mom, you know, I don't know if I want to pay that much. [00:03:23] James: Correct. And there's no way to scale to zero. You can't turn it off — if you turn it off, it's off. [00:03:29] Frank: Yeah. [00:03:29] James: Like, there's not an event that turns it back on. So then the magical parts of Azure Functions, serverless computing, would then come up, right? Because, you know, I'm a huge Azure Functions fan. You know what I mean? Like, I'm a fan. [00:03:46] Frank: Yeah.
And we've solved a lot of problems with that. And honestly, every project I start, I start by thinking, can I just do this with a Function instead? Because why else would you do it? Like, we gotta get those Blazor demos where you're doing WebAssembly. Man, put the code in the WebAssembly, put that up on a static thing, and then talk to the server through Functions. That seems like a really sweet spot in web development. But anyway, yeah, Functions are hot — but this isn't Functions. [00:04:17] James: Functions are hot. Now, the thing with Functions: different programming model, right? It's not just a web API. You gotta get into, you know, these new self-contained thingies and all this other stuff. And I love Functions, right? I run Island Tracker completely on Functions, and a bunch of the infrastructure on my team is on Functions. And that is pay-per-consumption, which is super good, because if someone only calls your web API like five times a month — because you're running your mom's business or whatever off of it — you're only going to pay 0.0000001 of a penny, which is pretty awesome, to be honest with you. Now, the problem — like I said, some of these aren't really problems, right? This is like the flip side of the coin. [00:05:02] Frank: Because I want to reiterate: a static-serving web server that's talking to cloud functions is a great architecture. I know we'll talk about microservices, but my gosh, that's what we always wanted back in the day. Because dynamically generating every page, that's such a waste — every webpage is pretty much identical to its other self. And so yeah, static server. [00:05:26] James: Static server, that stuff. So, you know, Azure Functions are also event-driven too, which I think is rad. The flip side of the coin is: do you need super long-running operations?
Like, let's say you're doing some super long data processing, and you do scale, but you need it to run for like an hour. Well, that's not really going to take advantage of the Function, because Functions to me are event-driven in, event-driven out — quick-in, quick-out type of processing. [00:05:59] Frank: Let me ask you — I'm pretty sure there's a setting where you can just put a time limit on a Function. Is that normal? Do you know what time limits people usually set on Functions? I'm sure that's mostly to protect yourself from runaway servers and things like that. But like you said, I wouldn't want to train a neural network inside a Function. It would probably be really expensive — you probably aren't paying per minute at that point — but I'm curious if Azure puts any hard limits. [00:06:24] James: I don't know. There are a few different modes. Like, there is an Azure Functions always-on mode, which seems like it defeats the purpose of an Azure Function, but you were committed to the Azure Functions architecture and you don't want to go back to App Service or whatever, because Functions are built on top of it. [00:06:42] Frank: Yeah, App Service is literally — you can put anything on App Service, whereas Functions are, here, we're doing web API [00:06:49] James: stuff. Exactly. Yeah. And I love the connectors into Functions, because they bring things in and out super easy. So — and those have nothing to do with microservices. So then you might be saying, okay, hey, I have this API, or I have this worker service that needs to run. And maybe it's going to run for long periods of time and then stop. Let's say it's doing some image processing. Let's say you have a queue, right? You have a queue of images coming in and it needs to process them. And sometimes it's a huge load and sometimes it's no load, right?
You put that in a microservice, right? You put that in a container, right? So you create an ASP.NET Core worker service that's listening to a queue, you right-click, add Docker support, and you've got a container. Now you've got to put that container somewhere. [00:07:45] Frank: Yeah — and sorry, I just want to interrupt. This is actually a really classic problem. I used to help my friends with video processing websites, and the machines that had the video processing stuff — they were dealing with random data from the internet, so you never wanted to run all that on your actual web server. So immediately now you have two computers. Now you have a queue. Now you have asynchronous programming. Now you have batch jobs. Now you have monitoring the other computer, and now you have an orchestrator. It's funny — the simple, small task of "I want to read in a video file and maybe store it away somewhere" immediately kind of scales up your problem. So this isn't some small hypothetical. I actually run into this more often than I would like. [00:08:30] James: It's true. Yeah, that's true. So then you're like, okay, I'm going to go put this container somewhere and run it. So you could, for example, put it in something — and I'm going to talk about all Azure services. I work at Microsoft, and I know that AWS and Google have similar services, obviously, so I'm just going to talk about the stuff that we have, because those are the things I'm familiar with. So if you're an AWS or Google Cloud listener, there are similar services and things over there. Right, you might just want to run this thing. And that would be: I'm going to put it in a Container Instance, which is like, give Azure a container and it'll just run it on per-usage pricing. But that doesn't have scaling, load balancing, versioning, rollouts.
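For the worker-service container James describes, the right-click "Add Docker Support" flow generates roughly a Dockerfile like this — a sketch only; the project name `QueueWorker` is hypothetical:

```dockerfile
# Sketch of the kind of Dockerfile "Add Docker Support" generates for a
# .NET worker service (project name QueueWorker is made up for illustration).
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish QueueWorker.csproj -c Release -o /app

# Worker services need only the runtime image, not the SDK.
FROM mcr.microsoft.com/dotnet/runtime:6.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "QueueWorker.dll"]
```

The multi-stage build keeps the final image small: the SDK stage compiles, the runtime stage only carries the published output.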
It's just like, I want to run this one thing, you know. I don't know what the use case for that is, but it's like a one-off, go, basically, right? [00:09:21] Frank: I actually have plenty of use cases, but I'm wondering — how do you actually run that? Is that from an API, where your program could actually kick off that process? Or are we talking from the portal — you go to the portal and kick off that process? [00:09:35] James: Yeah, I think you could deploy to ACI from the command line or from something else — you just give it a container registry, and I think it just runs. [00:09:46] Frank: Every time I've tried to do this — well, the protections here are, they make you go through Active Directory kind of authentication stuff, usually, and I think that's the safety line. But it'd be great — I would love a C# library where I can just import and say, go run this container and give me the results, and it's kind of magical. Yeah. Sorry. [00:10:07] James: Dreams. Yeah. And then beyond that, you might say, okay, I'm going to deploy this into a Kubernetes service, like AKS, which is Azure Kubernetes Service, right? And this gives you Kubernetes, but then there's all the clustering and the connections and all this other stuff on top of it. That's full scale — you're probably running multiple containers, they're talking to each other, they're doing stuff. But that's other stuff. [00:10:33] Frank: The way I understand it — and please correct me, dear listeners, if I'm wrong — Kubernetes is what you do when you want a service with three nines of uptime, where you're just trying to keep a bunch of stuff up. Its job is to keep a bunch of stuff up and running and happy. Whereas what we're talking about — well, you were talking about little batch jobs a second ago. But with all these microservices, if you've got microservices, there's going to be a bunch of them.
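The one-off ACI run James mentions can indeed be kicked off from the command line. A minimal sketch, assuming you're logged in with `az` — the resource group, container name, and image are all hypothetical:

```shell
# Sketch of a one-off Azure Container Instances run from the command line.
# Resource group, name, and image are made-up examples.
az container create \
  --resource-group my-rg \
  --name video-job \
  --image myregistry.azurecr.io/video-worker:latest \
  --restart-policy Never

# Check on it and read its output:
az container logs --resource-group my-rg --name video-job
```

`--restart-policy Never` makes it behave like a batch job: run once, exit, done — no scaling, load balancing, or rollouts, exactly as described.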
You've got to have something to keep them up. So that's usually Kubernetes' role, but we're not going to do the Kubernetes on these. [00:11:07] James: No, because in this instance, right, you could think about it as: maybe we have a queue, maybe we have an API, maybe we have a hub — we have a bunch of things, a bunch of microservices we want to deploy in our solution. And we could do that in AKS, the Kubernetes service, but we could use Container Apps, which is a managed service built on top of Kubernetes to make Kubernetes easy. Basically, it's like a Kubernetes easy button, with all of the other cool parts of App Service and Azure Functions that you may want. So for example — [00:11:46] Frank: Okay, no, I'm sorry, I'm going to interrupt you, because I do love the containers, but my brain stops once the K-word is mentioned. So I'm kind of excited to hear about something that just skips over it and builds a layer on top of it. So don't disappoint me right now, James — I've got a lot of emotional value riding on this. Yes. [00:12:07] James: So imagine, basically, that you have a Docker container, right? You've created that queue listener we're talking about here, your API, and you want to put it in a container. So you right-click and you add Docker support, and then you build a container. And then you can say, hey, I want to put this into a container app. And based on what this container's doing, you can give it some rules. So what this does is, behind the scenes, Container Apps runs on AKS. So it runs on a Kubernetes service, but it brings in three important pieces of open source software. And one is called Envoy, which is basically an ingress controller that enables HTTPS. [00:12:56] Frank: Oh, okay.
So this is if you actually want a public-facing server. Because usually we're talking about services — a bunch of internal stuff — and then you carefully craft your actual front-facing server, and it's PHP running on Apache or something like that. But in this case, you're actually exposing — okay, is this HTTP for API calls, or is this HTTP for, like, HTML, the web? [00:13:22] James: I think more for API calls, yeah. [00:13:24] Frank: Yeah. Okay. Right. So just enabling that. [00:13:27] James: Just enabling that bit. And then you have this thing called KEDA, K-E-D-A, which is Kubernetes Event-Driven Autoscaling. Now, this is cool, because what KEDA does is enable dynamic autoscaling for the containers and the service, based on scaling rules that you apply. So for example, we have our queue worker, and you could say, you know, start at zero as my minimum, and I want it to go up to 10 instances — and based on the traffic in the queue or the traffic in the API, it's going to scale it up automatically and then scale it back down. [00:14:11] Frank: Yeah, fam, always wanted this. This is the Slashdot effect, right? Um, I'm scared. I'm glad you can set a maximum. I've always been afraid of autoscale. It's funny, because as a lazy person, all I want in the world is autoscale, but as a cheap person, I'm terrified of autoscale. So I'm always worried it's going to create like a thousand machines if someone decides to run some wget script against my servers, something like that. Not to make me sound dumb, but I'm glad there's a max. [00:14:44] James: There is a maximum, so yes. And you can have one as your minimum, if you do want an always-on. But if you're like, hey, this is a background process, and I don't care if it takes a little bit of time to spin up — you know, keep it at zero. So your queue worker might be at zero, but your API —
— may always be on, because you always want your API on, but you may want it to scale up to 10,000 container images, whatever shenanigans, right? [00:15:14] Frank: Yeah. Um, I'm wondering myself right now — I'm sorry, I'm always going back to the video re-encoding example. So once the machine is up, would it be able to handle multiple jobs? Would it sit there until the job queue is empty, and then would it turn itself off? But I guess that's what you're saying — that KEDA part, with all those events, I'm sure it's those events deciding who's doing what and what's being orchestrated. Because yeah, definitely, I want to scale up to 10 concurrent video encodes, but stop there, keep it at 10 until things are done, and then back off, back to zero. And that all makes perfect sense to me, if it works. [00:15:55] James: Yeah. I'm pretty sure you're able to put in these scale rules, which would be like, hey, if I get more than 20 things in a queue, then scale up. [00:16:09] Frank: Yeah. Yeah, exactly. That's fun. So that's the KEDA technology, would you say? That one's new to me — I have to admit I'm totally oblivious here. [00:16:20] James: You spell it K-E-D-A, which is an acronym for Kubernetes Event-Driven Autoscaling. [00:16:30] Frank: Oh yeah. Oh yeah. Great name, everyone. Good job. I should have guessed from the K. Yep. So that's cool. Um, it's tricky, though. Scaling usually gets into — especially in the microservice world — scaling up multiple services at a time. But it sounds like in this case each app can state its own scaling rules. Or would you apply this to multiple apps in a container? [00:17:00] James: Oh, I'm so excited that you asked this question, because there's one more important piece here that enables just that, Frank. Because what you would do is you'd put your services into containers.
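The queue-based scale rule James sketches — more than 20 messages triggers a scale-up, between 0 and 10 replicas — might look roughly like this in the `scale` section of a Container Apps ARM template. A sketch under assumptions: the rule name, queue name, and secret name are all made up:

```json
"scale": {
  "minReplicas": 0,
  "maxReplicas": 10,
  "rules": [
    {
      "name": "image-queue-rule",
      "azureQueue": {
        "queueName": "images",
        "queueLength": 20,
        "auth": [
          {
            "secretRef": "storage-connection",
            "triggerParameter": "connection"
          }
        ]
      }
    }
  ]
}
```

With `minReplicas` at 0, the queue worker scales to nothing when the queue is empty; `queueLength` is roughly the target number of messages per replica that KEDA uses to decide when to add instances.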
And you would take these container apps and put them all into a container app environment. So all of your container apps are bundled into this environment, and everything in the environment can use Dapr, which enables support for, basically, service discovery, state management, and asynchronous message passing between all of your container apps inside of a container app environment. [00:17:43] Frank: Oh boy. Okay, so let me read from a website here, because I've heard people say Dapr before, but I'm not sure I've ever had a good definition. "Dapr is an open source, event-driven runtime that codifies best practices for building portable microservice applications." That's actually pretty cool. Yeah. Because, you know, I'm old school — I like to throw together these architectures myself. But they say here you've got service-to-service calls, pub/sub — cool — event bindings — great — state stores — going to abuse the heck out of that — and actors, because why not, everyone has actors these days. That's interesting. So Dapr is basically the new memcached; Dapr is the new Redis. We keep coming up with these technologies to get a bunch of computers to talk to each other and to make sense of the world. So good — I'm going to go with that definition for Dapr right now: it's just making all these things actually be able to talk to each other. [00:18:45] James: Yeah, pretty much. And the cool part about all of this is how it's put together — it's just like a JSON file. I think there are multiple ways to do it. I think you can use something called a Bicep file, or just an Azure ARM template. "James, I want YAML." Yeah, they have a YAML one. I think they have multiple ways of doing it, Frank. I'm pretty sure it's terrible, don't do it — but what's cool here is that you can have configurations, so you can see —
In your configuration of your environment, you can say, hey, here's my Service Bus connection, or my queue connection, or my MySQL or my Azure SQL database connections. And it'll figure out how to pipe those down into your services automagically from this configuration — if that makes sense. [00:19:35] Frank: It does, it does. Um, it's funny — a lot of stuff happens through the Azure command line these days, too. So I think that's the easiest way. Because the moment you said configuration files, I got a little bit sad, you know — I like everything to just be in C# or F#; we should be doing it all there. But now that I've actually looked through the documentation, there's a lot of pretty simple Azure command line stuff. So there's the az tool: you say az containerapp env create, give it a name, tell it who's going to pay for it and where it's going to go, and then you have yourself an environment. And then I guess the rest after that is this magical configuration file that's going to do the rest. But I do notice in the documentation they're good enough to do things in YAML, so good job, web people. [00:20:29] James: Yeah. So you can kind of see, like, hey, use these ports; here are the arguments; here's the container; here's the XYZ. And what's cool is that you can actually set this up in GitHub Actions, and it will automatically connect to container registries and automatically connect to the container apps for you. So every time you push code, it'll redeploy an entire new — like, all of your images and all of your container apps, automatically. [00:20:59] Frank: And I'm going to say that's a necessary step, because, oh boy, there are a lot of settings to put in here: min replicas, max replicas, you've got to expose all your ports, all that kind of stuff.
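The CLI flow Frank walks through might look roughly like this — a sketch assuming the Azure CLI with the containerapp extension and an active `az login`; all resource names are hypothetical:

```shell
# The containerapp commands live in a CLI extension.
az extension add --name containerapp

# Create the environment: give it a name, say who pays for it
# (the resource group) and where it goes (the region).
az containerapp env create \
  --name my-environment \
  --resource-group my-rg \
  --location eastus

# Then deploy a container app into that environment from a registry image,
# with the min/max replica settings Frank mentions.
az containerapp create \
  --name queue-worker \
  --resource-group my-rg \
  --environment my-environment \
  --image myregistry.azurecr.io/queue-worker:latest \
  --min-replicas 0 \
  --max-replicas 10
```

After that, the ports, Dapr settings, and scale rules can come from the "magical configuration file" — a YAML or ARM template passed to the same commands.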
That is definitely not something you want to do from the command line; that is definitely something you're going to want to put into a workflow file of some sort. Yeah. I know I would forget this stuff pretty much [00:21:20] James: immediately. Yeah. So it's this really, really cool thing which brings all of these things together. You don't actually have to really think about it too much, to be honest with you. You have your containers that are using these services or exposing these services, and you just configure it. And to use Dapr, you say "use Dapr," and then it just figures it out automatically. [00:21:48] Frank: Yeah, because the devil's always in the configuration, right — trying to actually get things to talk to each other. So it sounds like this is just a bunch of technologies kind of bundled together, as in: we're finding a sweet spot for how to do this kind of stuff, and we're going to recommend that you use Dapr for your communications, we're going to recommend that you use KEDA for your scaling events — and what was the first technology? I feel like I skipped over it. [00:22:15] James: Uh, the first technology was the Envoy. [00:22:19] Frank: Oh, the Envoy, right. So that's just managing who can talk to whom — I suppose you could call that ingress. [00:22:26] James: Correct. That's the ingress, yeah. [00:22:29] Frank: That's where things always get weird for me — getting all those permissions right. But sure, I'll just ignore that one for now. [00:22:36] James: Yeah. Here's a good example — I'll put a link to the documentation, but under the tutorials there's "Microservices with Dapr using ARM templates," which uses this configuration JSON file. And you'll see, under resources, a template — if you search for "template," you'll see containers. And it's like: here's the container, here's the CPU and memory I want, here's the scale I want.
And then here are the Dapr settings, which might tell you the storage key and account key and where the things live. And that's going to enable the apps to all kind of communicate together, which is kind of cool. [00:23:14] Frank: Yeah, because Dapr obviously is going to need some kind of storage system, so you're going to have to get those two communicating. So — sounding doable. I'm about to start my YouTube clone. It should be able to auto-scale; I should be paying very little because it's all going to be Functions with static storage, with blob storage — blob apps, I'm going to call them blog apps — and then I'm going to have container apps for all my FFmpeg work. This is going to happen. [00:23:42] James: Yeah, I think what's nice about this is that it sort of obviates a lot of the complexities of trying to set all that up yourself. And then also, you know, if you're already using containers and deploying those, you could bring them over to Container Apps and get the advantages. Like, maybe you're deploying and using containers and microservices, but you're not getting the auto-scale, right? Where this could give you that, or long-running background jobs — maybe always-on background jobs for some of them, right? And you can deploy multiple of these container apps, and some of them spin up, some of them spin down, back and forth. So I feel like it's the first time that I've started to understand a little bit of microservices. I mean, I'm probably still going to use Functions for a lot of my stuff, but at the same time, when you see the demos from .NET Conf tomorrow — if you're listening to this and it's past, or it's before the 9th, which is when .NET Conf is — there's a bunch of these container apps in an environment, and it's like, wow, that's really cool. Like, this architecture —
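The Dapr settings James describes — the account name, the key, where the state lives — take roughly this shape as a Container Apps Dapr component definition. A sketch only: the component type is one real option among several, and the account, secret, and app names are invented:

```yaml
# Sketch of a Dapr state-store component for a container app environment.
# Storage account, secret, and scope names are hypothetical.
componentType: state.azure.blobstorage
version: v1
metadata:
  - name: accountName
    value: mystorageaccount
  - name: accountKey
    secretRef: storage-account-key   # key is pulled from a secret, not inlined
  - name: containerName
    value: state
secrets:
  - name: storage-account-key
    value: "<storage-key>"
scopes:
  - queue-worker   # only this app in the environment can use the component
```

Every app in the environment that's in `scopes` can then read and write state through the Dapr API without knowing it's blob storage underneath — which is what lets the apps "all kind of communicate together."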
It makes sense to me. Generally, when you see the projects, there are like three projects, and they're all kind of doing different things — where my traditional mind is all about, like, I'm just going to shove everything in one project: a web API, a service worker, all these things. There's a video that Michelle did, which I'll put in the show notes, which is "Build your first microservice with .NET," and I'll link it so you can watch it later. It's a beautiful YouTube video. He spent a long time on it — it's really amazing, all these animations. And he breaks down containers versus virtual machines, orchestration, core Kubernetes, building endpoints, and why use containers, why use microservices. And I was like, I'm starting to understand this, right? I'm a client developer that's never written backend services, but I'm kind of starting to understand this mentality, which is: if I build an app that displays a number on the screen connected to a Bluetooth sensor, I bundle that up into an app container — but my app containers get deployed to the iOS and Android app stores. But now, if I have backend services, I could break them up into smaller mini apps and then deploy them into containers. [00:26:02] Frank: Yeah, I love it. I wish we could just deploy container apps to iOS, though. I wish that worked. Oh my gosh, it would solve so many problems. Yeah. Um, and I was going to make fun of you a little — like, ha ha, you understand it now, but give these web people one year and they're going to change everything. But we can understand it for one year at a time until things progress. I should've learned Dapr years ago, though. Um, this is a pretty nerdy one, so I'll have to add a second nerdy topic here. I see communicating with the APIs uses HTTP — which, obviously you should use HTTP — but there's also gRPC, which — I love serialization of data. I just love it.
And I feel like we could do a whole episode on gRPC and how much I love protocol buffers, and how everyone should, in fact, be using gRPC. So a potential show topic there — everyone write in and say how much you want to hear me talk about serialization protocols. Yeah, and I'm happy you're understanding. For me — again, I love containers. Containers are great. For me, it's once you get to two containers that everything just falls apart and I start to get confused. And so anything that simplifies that, I'm excited about. But I'm really looking forward to actual demos of this, because I can, in my head, think of all these weird ways to use it, but I need to see a professional tell me the right way to use this. [00:27:29] James: Yeah. And I probably need to dig up some videos from Ignite, because that's when this thing was announced as well. So, you know, I definitely need to go take a look out there — I'm sure there are some videos from Ignite on Container Apps, and I need to go look at them. Because the thing about it that I think is also really neat is that it doesn't matter — you're like, oh, I just have a web API, right, and that's all I want to do; I'm never going to use containers for that. But maybe you will, because one thing that's really neat is that every time you bundle your container up, it's a revision of your container, and there's revision management where you can control the traffic. So you could roll back by literally saying, this one's active, this one's not active. And you could say, this one has a hundred percent of the traffic and this one has zero, or this one has 50/50, and you can roll out updates via traffic being split automatically to your different containers. So there's also this whole revision part of it, too, that we didn't really talk about. But there's a whole bunch of neatness, basically, is what I'm saying. Yeah. [00:28:35] Frank: Yeah. Yeah. Yeah.
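The 50/50 split and the roll-back James describes map to revision traffic weights. A sketch assuming the Azure CLI containerapp extension; the app and revision names are hypothetical:

```shell
# Sketch of revision traffic splitting on a container app
# (app and revision names are made-up examples).

# Send half the traffic to each of two revisions — a 50/50 rollout:
az containerapp ingress traffic set \
  --name my-api \
  --resource-group my-rg \
  --revision-weight my-api--v1=50 my-api--v2=50

# Roll back by giving the old revision 100% of the traffic again:
az containerapp ingress traffic set \
  --name my-api \
  --resource-group my-rg \
  --revision-weight my-api--v1=100
```

Because both revisions stay deployed, the "roll back" is just a traffic change — nothing gets rebuilt or redeployed, which is the part you can't do with an app store release.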
Uh, I believe I called these deployment slots, but that's like the 1990s version of this, where we would all register 50 different domains and then just put things up under different domains. So it's good that we've progressed a little bit there. We probably have checkboxes and dropdowns and things like that. But of course you need revision management. What's neat is that kind of A/B testing — that kind of terrible stuff that you can do with it. But realistically, this is all getting back to our iOS and Android deployments of, we only want 10 people — 10% of people — to see it. It's not the world I live in, but I totally get it. If you're microsoft.com, you don't want to deploy to the entire world all at once. Maybe. Yeah. And the rollbacks — have you ever had to roll back a website? I did. I had a bad one. I almost deleted a bunch of data in a database. Thank goodness we had [00:29:29] James: a backup. The worst is if you need to roll back your app after you submit it to the App Store, it's approved, and it's already rolled out. That's hard. You can't roll [00:29:41] Frank: back. I've wanted that ability three times — two times, three times. Yeah. That's when you write to App Review and say, hey, could I have that priority review, please, right now? [00:29:53] James: Yeah, please, I need it. Go, go, go. And then your app just slowly tanks for a few days. Um, anyways — again, I am not a microservices or container apps expert out there, or even a web [00:30:06] Frank: developer. I pretend to be, though. [00:30:12] James: So what I would say is, if anyone has any additional things, like, we missed this — let us know, write in to the show at mergeconflict.fm. Um, I'm super excited. I'm starting to like the microservices information more — like, I'm finding the niches. Like, I think now that I've done —
— the entire Animal Crossing backend in Functions, because I wanted serverless — I'm like, man, I could probably put that in a container app, and that actually would have been a lot better, because I had to jump through a whole bunch of hoops to kind of do the Azure Functions thing and whatnot. And the nice thing about the containers, right, is that the entire environment is there. You're not relying on, you know, Azure Functions or something on AWS or whatever to support the runtime long-term, right? If you're like, you know what, this is a container running .NET Core 3.1, and that's what I'm running my app on — you just put it in the thing, right, and it goes. [00:31:11] Frank: Yeah, I had that, because I was running .NET 6 for, um, the game I keep talking about, and I got lucky — Azure said, we do support .NET 6. But I was like, ooh, I probably should have just put this in a container so I wouldn't have to worry about the web host actually dealing with it. You know, you write your app to be self-contained, put that inside a container so its environment is self-contained, and then plop it up on a server. That is kind of the ideal world, and anything that makes putting multiples of those together easier — I'm here for it. Yeah. I still don't get the KEDA scalers, though. I keep looking at it, because all it does is bring up a million other words. You've got to learn Apache Kafka, Azure Monitor, ActiveMQ Artemis, Azure Blob Storage — wait, I know that one, that one's easy. Okay. [00:32:01] James: A lot of these — you're overthinking it, right? KEDA integrates with all those services, so you can say, hey, scale down to zero, scale up. It takes you having to know all those things down to basically setting two lines in your JSON file. [00:32:17] Frank: Right, so you just gotta figure out those two lines. Yeah. Oh, they do support RabbitMQ and Redis.
So Redis — you are alive and well, even into this modern age. Good job. Yes. Yeah. [00:32:32] James: I call it "Red-iss." [00:32:33] Frank: Yeah, I don't know. I think I bounce between pronunciations all the time; today it just came out this way. Hm. I don't know — KEDA scalers can both detect if a deployment should be activated or deactivated. Yeah, it's the deactivated part I'm most interested in. Right? Unscale — unscale these, KEDA. [00:32:55] James: De-scale all these scalers. [00:32:58] Frank: Ooh, show title: "De-scaling the Web." Or "Scaling the Path." [00:33:02] James: All right, well, that's going to do it for this week's containerized mini podcast. Thirty minutes feels about right. We'll be back next week with the whole .NET Conf breakdown — super excited about that. Tune in, and if you haven't watched it, or you need to go back and watch it, go do that. It's going to be awesome. But Frank, thanks — thanks for letting me talk about nothing that I know anything about. [00:33:24] Frank: Eh, one of these days I'm going to learn how to do deployments correctly, and you're just getting me closer and closer to that point. So I appreciate it, James. Thank you. [00:33:34] James: Yeah, appreciate it. We're going to be having a bunch of people on the podcast — if you all want us to have people on the podcast and are sick of me and Frank, let us know, write in to Merge Conflict. Uh, fam, tweet at us, Discord us, do other things. That's going to do it for this week's podcast. Until next time, I'm James Montemagno, and [00:33:51] Frank: I'm Frank Krueger. Thanks for listening.