Turbopack and Turborepo with Anthony Shew V2 === [00:00:00] Hi, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. Try it free at LogRocket.com. I'm Chris, and today we have Anthony Shew, who is currently at Vercel as Turbo DX, where he leads the Turbo community, here to talk about Turborepo and Turbopack. Welcome to the show, Anthony.

Hey, how's it going?

Can you give us a brief overview of what Turborepo is? I feel like I hear about it everywhere, no matter where I go.

Yeah, of course. I guess I'll use the tagline from turbo.build: Turborepo is a high-performance build system for JavaScript and TypeScript repos. We're giving developers, all the way from solo devs to the largest enterprises, a world-class development experience by simplifying and speeding up the things you need to get [00:01:00] done in your local and CI environments. We do everything incrementally, execute in parallel wherever possible, and cache remotely, so you, your teammates, and your CI never do the same work twice. If there haven't been any code changes in a package, why run any of those lint, test, or build tasks again? So it's your monorepo, just way faster.

These are all techniques that were pioneered at large companies. Google and Meta famously have huge monorepos, historically. Turborepo takes the best of that tooling and makes it available to everyone, solving the same problems, just hopefully with a little more versatility and ease of use, getting you as close to zero configuration as possible. Handling a scaled-up monorepo like that can be a bit of a beast, but here at Vercel, we're taking that challenge head-on and making monorepos a downright delightful experience overall.

And I know Jared Palmer created this way back when, and then Vercel acquired it in [00:02:00] 2021. So how has it grown since that acquisition?
It's been great to see. I was a very early Turborepo user, before it was acquired, before I worked at Vercel. So it's been really cool to see it grow and change pre- and post-acquisition, like you mentioned. It was Jared's brainchild; now we have a full team behind it. And yeah, we're excited about the optimizations that we're going to unlock in the future. By the way, if you're just hearing about Turborepo for the first time, you can try it out with npx create-turbo@latest, or you can visit turbo.build to learn more.

So can you give us an overview of the key improvements introduced in Turbo 1.11?

Yeah, sure. Specifically in 1.11, we released the first full Rust version of Turbo. That was a pretty hallmark change, so kudos to the team for finishing that up. Turborepo users probably didn't notice too many functional changes in that release specifically; it's a little [00:03:00] bit of an anti-release in that sense. But there were a lot of changes under the hood, and now we're really well set up to do a lot of things that were any combination of untenable, inconvenient, or just plain impossible to do in Go. So we're really excited about upcoming features. On top of that, Rust is letting us deepen reliability and stability for our users.

One new feature in the CLI that comes to mind is grouped logs. This allows you to have all the logs for a task printed at the same time instead of interleaved with each other. It can make the logs for your tasks easier to review, since they'll all show up as one contiguous chunk.

Another thing that we did include in the release notes for 1.11 was a renewed effort in our examples and templates. A lot of developers go to our examples [00:04:00] to learn how to architect monorepos, and the Turbo team just had a lot on their plate.
We weren't able to invest the time to maintain those examples, so the framework examples were falling behind major versions and things like that. When I joined, the first thing I did was make sure that those examples could be something that we're proud of, and that the community can learn from and use confidently.

You already talked about this a little: the shift from Go to Rust. I'm curious, because that just sounds like a crazy haul to pull off. What were the challenges involved in porting that over?

Many. I think the motivation, at least, is good to know first. One thing was that our repo-mates on Turbopack were writing Rust, and we're in a monorepo with them, naturally. There was a lot of code between our two projects that could potentially be shared, since we're working on a lot of the same things: low-level, on the file system, close to the machine. So it just seemed like a natural evolution.

But just in the scope of Turborepo [00:05:00] itself, Rust makes a lot more sense for where the project is today. The Go ecosystem is really tuned for things like networking and building fast web services, and the ecosystem is really deep in those areas. But Turborepo is, like I said, working on the file system, managing system processes, worrying about differences in operating systems, and ultimately shipping software to our users' machines. And that last one's a big one, because once you publish a release, there's really no taking it back from your users' machines, so those bugs are really expensive for us. The Rust compiler and the borrow checker are going to yell at us up front when we try to do things that we're not supposed to. Surfacing that complexity into our code base means removing complexity from yours.

So the big benefit is basically that you can catch bugs a lot earlier, before shipping them to your customers, effectively, right?
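As a small aside for readers: the "compiler yells at us up front" idea can be illustrated with a toy Rust sketch (this is an assumed illustration, not Turborepo code). A whole class of runtime bug becomes a compile error because ownership is tracked statically:

```rust
// A minimal sketch of the kind of bug rustc rejects before it can ever
// reach a user's machine. Names here are hypothetical.

// Pretend we persisted the value somewhere; return what was stored.
fn cache(value: String) -> String {
    value
}

fn main() {
    let task_output = String::from("build: ok");

    // Hand the buffer to another owner, e.g. a cache writer.
    let cached = cache(task_output);

    // The line below would NOT compile: `task_output` was moved into
    // `cache`, so using it again is a use-after-move that the borrow
    // checker catches at build time. In a garbage-collected language,
    // the equivalent aliasing mistake can silently become a stale-read
    // or data-race bug at runtime instead.
    // println!("{}", task_output);

    println!("{}", cached);
}
```

The point is exactly what Anthony describes: the complexity surfaces in the tool's code base, at compile time, instead of on a user's machine after release.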
Among other things. We're also looking forward to some performance [00:06:00] improvements. This first pass, the full-Rust world, really establishes our baseline performance. We didn't do too many optimizations at this point, but we did notice that our optimized Go code and our unoptimized Rust code were nearly the same performance. So yeah, we're looking forward to some performance improvements too.

In terms of porting, what are the trade-offs of the incremental approach versus doing a complete rewrite? I know the obvious one is that rewriting from scratch just takes a lot longer, right? You can't really ship features while you're doing that.

Yeah, exactly. I think it's important to give a hard definition of incremental porting, too, just so the folks at home are aware of the difference. With a porting migration strategy, you go piece by piece and migrate slices and sections of your code. Visually, you can think of the service or program as one big [00:07:00] box made up of several smaller boxes. Maybe those are the modules or functions, however you want to think about it. When you incrementally port, you pick one of those boxes and migrate it. That one smaller box inside the bigger box changed, but to all of the other boxes, it looks like nothing changed externally. So, as you mentioned, you get to keep shipping features while you're doing the porting effort, because the program's behavior is still known. That's especially important while you're doing a process like this: you go bug for bug, feature for feature. Like I said, externally it doesn't look like anything changed. At least that's the goal, of course. But there are a lot of trade-offs that come with that approach.
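The "boxes" strategy can be sketched in code (a toy illustration, assumed, not Turborepo's actual architecture): callers depend only on a stable interface, so one implementation can be swapped for another without anything looking different from the outside.

```rust
// The stable interface that every other "box" depends on.
trait Hasher {
    fn hash(&self, input: &str) -> u64;
}

// The "old" box, standing in for the original implementation.
struct LegacyHasher;
impl Hasher for LegacyHasher {
    fn hash(&self, input: &str) -> u64 {
        input
            .bytes()
            .fold(0u64, |acc, b| acc.wrapping_mul(31).wrapping_add(b as u64))
    }
}

// The "new" box: ported code with the same observable behavior,
// bug for bug, feature for feature.
struct PortedHasher;
impl Hasher for PortedHasher {
    fn hash(&self, input: &str) -> u64 {
        let mut acc = 0u64;
        for b in input.bytes() {
            acc = acc.wrapping_mul(31).wrapping_add(b as u64);
        }
        acc
    }
}

// Callers see only the trait, so swapping the implementation behind it
// is invisible to them.
fn cache_key(h: &dyn Hasher, input: &str) -> u64 {
    h.hash(input)
}

fn main() {
    let input = "packages/ui/src/button.tsx";
    // Externally, nothing changed: old and new boxes agree.
    assert_eq!(
        cache_key(&LegacyHasher, input),
        cache_key(&PortedHasher, input)
    );
    println!("old and new boxes agree");
}
```

In Turborepo's real port the two boxes were in different languages (Go and Rust), which is harder than this sketch suggests, but the principle is the same: the boundary stays fixed while the inside changes.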
For us specifically, one of the upsides, as you mentioned, was that we got to keep [00:08:00] shipping features. Turborepo 1.7 was the first release that had part of the port in it, and we kept shipping through five minor versions, all the way up to 1.11. In that window, we shipped workspace configurations, automatic workspace scoping for tasks, run summaries, generators, built-in env awareness, errors-only output. So there was a lot that we kept shipping to our users during that time.

Another upside: I mentioned that bugs are expensive when you're shipping to your users' machines. We got to minimize the amount of unshipped code we had lying around. As much as you test and try to get things right before they reach your users, you still have to ship and find out what happens. Putting the fresh code we were writing into the hands of our users as early as we could gave us as much confidence as possible once we finished [00:09:00] the port and shipped that last bit. If you do a full-scale rewrite that you ship all at once, you probably end up with a deluge of bugs all at once. Getting to address the things we messed up piece by piece made it a lot easier to snap off the bits of work that needed to happen. There are a bunch of downsides too; I don't know if you want me to keep jabbering.

We can, but it sounds like with incremental porting, you also limit the blast radius of what could go wrong, more or less, right? Because, like you said, one individual box of the entire container could potentially go wrong, but everything else is unaffected. Hopefully? Maybe? Not sure.
Yeah, and there's a downside to that too, right? You're accepting a lot of complexity. You have two versions of your code base, in a way, and there's just more code to hold in your head about the program in total. In our case, we had an entire language switch, so we had the toolchains for [00:10:00] both Go and Rust in there at the same time, and we had to cross-compile and do all kinds of things to make sure that we could link those binaries. So in our case there was a lot of complexity we had to surface, but overall the team ended up pretty happy with the approach we took.

One last thing on this: it sounds like incremental porting is a long road, potentially, right? How does that align with the long-term vision of the product?

The long-term vision really is to make your repo crazy fast, no matter how big it is. Just because there's more code in that code base doesn't mean that your developer experience should suffer by default. The bare-metal performance of Rust can't be understated there. The other thing that comes to mind for me is that we're going to keep iterating toward managing more and more of the things you need [00:11:00] to run a healthy repository. As the surface area of what Turbo is grows into the future, we want well-defined behaviors that we can trust, and that our users can trust, as we keep building toward that future. In particular, we found some spots where Go completely paved over cross-platform differences in behavior, whereas we had to mind those differences in Rust quite explicitly. To tie those two ideas back together with an example: Vercel's Conformance and Code Owners are probably great examples of this.
Over in the Vercel ecosystem, Conformance is a CLI tool for doing static analysis checks of your code base, so you can encode rules about performance, quality, consistency, and maybe even that tribal knowledge that one person has, and then they retire and suddenly no one knows how a service works. Those weird sharp edges, those kinds of things, can be encoded into the code base. One of our beta customers for Conformance actually told us that they checked off a few of the boxes on the recommended rule set that came from us, and it [00:12:00] sped up their pages by 200 milliseconds. That doesn't sound like much, maybe, to some folks, but when a 100-millisecond improvement can mean something like a 10 percent conversion-rate lift, that's huge. That comes from web.dev; the article is called "Milliseconds make millions," if you want to know where those numbers came from.

And then there's also Code Owners. I mentioned that with Vercel Code Owners you can establish ownership of different parts of a repo, particularly in a monorepo. So you keep building all these tools around the monorepo story, and it all starts to make monorepos really shine. At Vercel, we've moved to a monorepo, particularly for our front-end teams, and we've found that once we have all these tools in hand, we can ship way faster out of a monorepo than we were before using whatever other approach.

I guess it makes sense that Vercel is using Turborepo, right?

Yeah, we dogfood everything. Anything that [00:13:00] someone uses that came from Vercel, whether it's one of our open source projects or a true Vercel product, you weren't the first to touch it; a Vercelian was probably the one feeling the pains of that product first. We really lean into drinking our own champagne and trying to be the first users, really.
As DevRel, actually, a lot of the time I'm one of the people touching things for the first time, before they ship. It's like, "Hey, Anthony, can you go break this?" "Oh yeah, sure." And then I get to be the bearer of bad news or good news, whatever it is. So yeah, it's fun to get to do that a lot of the time.

That's awesome. I'm a strong believer that you should be using your own tools if you're going to ship them to the general public. Let's get back to this Rust thing, because I'm just interested. I feel like in the past year or so, everything is going to Rust. It's Rust this, Rust that. So how does Rust enable Turborepo to handle configurations more confidently?

I would say with the move to Rust, [00:14:00] like I mentioned a little earlier, there were incidental and accidental behaviors that existed in the Go code, and as we started writing the Rust, Rust cared a lot about those things. I think I mentioned file system differences, for instance, and logging.

Like cross-platform issues?

Yeah, and globbing behaviors. Those things really said, "Hey, you have to solve this now, because you changed languages." So we really got to define those accidental behaviors that existed in Go. Those things were maybe not hidden from us, but they were easy to get away with, just because of the nature of the language. Rust brought them front and center. I do think configuration is really only one part of the story, though. It's true that our users can now trust that we're going to get, for instance, those globs in their turbo.json file right; we're way more likely to get them right now. But there's a whole new set of features, [00:15:00] as I think I mentioned before, that we're now enabled to build because of that Rust investment.
One of those things I'm actually really excited for is zero-configuration Next.js. If you put a Next.js app in a Turborepo, we're just going to know exactly what to do. You won't have to write any of the DSL in your turbo.json to say what the work is that goes into that build. Just fewer lines of code; it's auto-magicked.

Thinking more about the configuration stuff, another thing that Rust enables is surfacing better errors, just in the way that the language works, and bringing them out to our users. We're doing a kind of error overhaul right now, where unexpected or erroneous configurations will get a much better experience. We'll be able to pass along a better message and just have a better terminal UI, if you will.

One flag that I noticed is that there's a Go fallback path, right? So when would [00:16:00] someone want to use that? I'm assuming, obviously, something's ported and it doesn't work, right?

Yeah, that was our safety hatch. It's a big change, right? As confident as you are, like I was mentioning, once your code makes it to your users is when you really find out; that's the ultimate test. These were big changes under the hood, so we included --go-fallback as a flag on that final ship of the port, just in case we made some catastrophic mistake for your edge case, so that you couldn't run 1.11 effectively. As it turned out, we didn't have too many problems. There are those small bugs that exist when you write code, of course, but nothing too show-stopping. So we were pretty excited. We had originally planned on keeping that flag through several minors, but it really does feel like the landing has been stuck, even over the past month. Depending on how things look over the next month, we might be removing [00:17:00] that flag in this next minor. We'll see.
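For listeners who haven't seen it, the turbo.json "DSL" being referred to is the per-task pipeline configuration. A representative sketch from the 1.x era (field names are real Turborepo config keys; the specific tasks and globs are made-up examples, not from any particular repo):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": [".next/**", "!.next/cache/**"]
    },
    "lint": {},
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

Zero-config Next.js would mean Turborepo infers entries like the `build` one above (its dependencies and output globs) on its own, so you wouldn't have to write them by hand.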
I mean, it's always good to have that fallback, like you said. Just in case something goes wrong, you hopefully don't have to worry about shipping something else so people can get unblocked.

Right. And it is the case that you could downgrade and go back to 1.10; that would be perfectly fine. But yeah, it's a nice-to-have.

So, it's January of the new year, and there's plenty of time for you all to ship new stuff. What are you looking to implement in Turborepo in 2024?

Ooh, nice. All right. I've been calling 2024 the year of Turbo. Hopefully it sticks; I've got high hopes for this year. I don't want to speculate too far out, but what I can say is that you're going to see Turborepo continue to integrate deeper into your code base, particularly in monorepos, continuing to make that experience better, continuing to ease more and more of the work that you [00:18:00] need to get done.

As for stuff we have actively cooking right now: I mentioned zero-config Next.js. One change we're working on is supporting colors in your logs natively. In the past, Turborepo has swallowed that color. If you're running Jest or something, the pass messages come in green and the fails come in red, whatever; Turbo has just shown everything in white. We're finally going to be able to let the colors shine through from your original task, and personally I'm like, "Oh, finally, thank goodness." Even way back when, that was one of the first things that bothered me about the Turbo logs.

What else? Interactive terminals. That one will be exciting too. Thinking about Jest again as an example: right now, Turbo runs subprocesses. So you run turbo dev, and you have dev running in that Next.js app; maybe you have all the dev scripts in your repo running.
And if you want to [00:19:00] access, for instance, the tests running in that subprocess in your UI package or whatever it is, you want to hit a key, I think it's "a" to run all tests, or "o" for only changed files, something like that. In the past, Turborepo has had that process trapped, and you couldn't use standard input to talk to it. So we're working on interactive terminals right now, so you can reach into that subprocess and type into it. That's on the OSS side.

On the Vercel side, we're plugging away at making sure that Vercel really understands your Turborepo deeply, to continue to remove a lot of that configuration burden, that CI/CD, DevOps-ish work. Any moment you're working on that stuff is undifferentiated work; get back to the application.

Yeah, so it sounds like a lot of stuff planned. I know one thing I always look forward to is whenever I spin up a new repo, I just look at the examples and hope everything's up to date, so I can copy and paste it and take the credit for looking like a genius [00:20:00] in monorepo configuration.

So this next topic is something I'm super interested in, and I honestly don't know too much about it. I have it enabled in my project, but I don't actually know what it does. Let's talk about Turbopack. Can you give us a brief overview of Turbopack and the design principles there?

Sure, yeah. I guess I'll just use the tagline from the website again: Turbopack is an incremental bundler optimized for JavaScript and TypeScript, written in Rust by the creators of webpack and Next.js. Couldn't have said it better myself. Right now it's being developed inside of the Next.js development server. That's the proving ground, and a difficult one, I might add, since React Server Components require deep integration with a bundler, and Next.js is bearing the burden of being early to the RSC show for the rest of the ecosystem.

As far as design principles go, the things we're concentrating on the most right now are performance [00:21:00] and correctness. Next.js developers expect a fast, predictable development experience, and that's something we want to continue to iterate and improve on.

Just so we don't confuse the listeners: there's Turborepo and Turbopack. Can you give us two quick sentences on the difference, in case someone is getting confused at the moment?

Sure, yeah. Turborepo and Turbopack are different in that Turborepo is, at least for right now, a task runner with superpowers. It sits above your applications and orchestrates things. Maybe you can think of it as outside of your applications, depending on how you want to mentally model it. Turborepo isn't part of your application, necessarily; it's part of your repository. Hopefully that makes sense. Turbopack, on the other hand, is a bundler, so it's in your application, doing the work to prepare it to be shipped to your end user. Maybe I can say it as: you'll have many applications [00:22:00] that bundle with Turbopack inside of your Turborepo. Layers of the onion.

So, it looks like another tool in Rust, the whole Rust renaissance or what have you. What motivated the decision to build Turbopack as the successor to webpack, and what advantages does Rust bring to this new generation of bundler?

Yeah, that's a common question: why does this iteration get a different name? Why isn't this the next major version of webpack? Totally a fair question. I think Tobias Koppers, I call him Dr. Webpack, gives the answer well, so I'll try to approach it from his angle.
For better or worse, what he said is that while Turbopack solves the same problems as webpack, it's going about them in such a different way that they're not the same thing spiritually. You're going to see a lot of the learnings of the webpack era influence what Turbopack becomes, both in repeating the good parts in ways that rhyme and in throwing away the parts that didn't stand the test of time. [00:23:00] Another part of it is, as you mentioned, that Turbopack is Rust, so a major version bump would effectively mean throwing away the entire existing webpack repo and replacing it with a Rust-based one. When you couple that with the API changes and an entirely new architecture, it just starts to make a lot less sense as a major version; it's becoming something entirely new. What Rust brings to Turbopack is that raw machine-code speed, so we're working from the fastest baseline we can: startup is fast, module loading is fast. There are also the correctness guarantees we mentioned before that come along with the Rust borrow checker, which helps us get things right the first time, hopefully.

You mentioned lessons from webpack. How does Turbopack leverage those lessons, and then incorporate innovations from Turborepo?

Wow, you're full of great questions, Chris.

I know. I'm sorry.

After working on [00:24:00] webpack for years, Tobias has a wealth of knowledge about what it takes to build the world's most used web bundler. As far as lessons learned, he's been able to share a lot of that knowledge with the rest of the Turbopack team, and that experience has led the way on a lot of innovations in Turbopack. The second part, as you mentioned, incorporating innovations from Turborepo, is where I'm personally looking forward to things getting really interesting. For instance, in Turborepo, we cache tasks at the workspace level.
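As a rough illustration for readers (a toy sketch, not Turborepo's actual hashing, which also covers dependencies, env vars, and config), workspace-level caching boils down to fingerprinting a task's inputs and replaying the stored output whenever the fingerprint matches:

```rust
use std::collections::HashMap;

// Toy fingerprint: FNV-1a over every input string. The only property we
// need for the sketch is "same inputs => same key".
fn fingerprint(inputs: &[&str]) -> u64 {
    inputs.iter().fold(14695981039346656037u64, |acc, s| {
        s.bytes()
            .fold(acc, |a, b| (a ^ b as u64).wrapping_mul(1099511628211))
    })
}

struct TaskCache {
    hits: HashMap<u64, String>,
}

impl TaskCache {
    fn new() -> Self {
        TaskCache { hits: HashMap::new() }
    }

    // Run the task only on a cache miss; otherwise replay stored output.
    // Returns (was_cache_hit, output).
    fn run(&mut self, inputs: &[&str], task: impl Fn() -> String) -> (bool, String) {
        let key = fingerprint(inputs);
        if let Some(out) = self.hits.get(&key) {
            return (true, out.clone()); // cache hit: no work done
        }
        let out = task();
        self.hits.insert(key, out.clone());
        (false, out)
    }
}

fn main() {
    let mut cache = TaskCache::new();
    let inputs = ["src/index.ts: export const x = 1", "package.json: {}"];

    let (hit1, _) = cache.run(&inputs, || "build ok".to_string());
    let (hit2, _) = cache.run(&inputs, || "build ok".to_string());
    // The second run with identical inputs is free.
    assert!(!hit1 && hit2);

    // Change one file and the key changes, so the task reruns.
    let changed = ["src/index.ts: export const x = 2", "package.json: {}"];
    let (hit3, _) = cache.run(&changed, || "build ok".to_string());
    assert!(!hit3);
    println!("caching behaves as expected");
}
```

Sharing that key-value store remotely is what lets teammates and CI reuse each other's work, the "never do the same work twice" idea from earlier in the conversation.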
So if anything in your package changes, or anything in its dependencies changes, we rerun that task. But what if we could get even more granular than that? Say you put up a PR to change a single Tailwind class on one page; you turned a p-2 into a p-4, for instance. What if Turbopack knew how to rebuild only that one thing that changed, [00:25:00] restoring the entire rest of the application from a cache? As it stands today, you'd be rebuilding the entire app. So you can imagine that's a tremendous speed difference, theoretically.

Yeah, that feature, I'm ready. I'm ready for it.

I think a lot of us are.

So, one other thing: hot module replacement, HMR. Turbopack claims to ensure lightning-fast HMR regardless of app size. How does Turbopack achieve this speed in HMR?

Gosh, how many times can I say Rust in one podcast? Hopefully the folks at home aren't playing a drinking game. Beyond the Rust answer, there are the architectural changes of Turbopack, bringing in those learnings from webpack. Perhaps I should have mentioned this in the last answer, but here we are.

All good.

One example would be the way the module graph gets built. In webpack, the [00:26:00] unoptimized graph gets built first, and then a second step optimizes it. In Turbopack, we just build the optimized graph the first time. A lot less work, and, as you can imagine, much faster. Also worthy of note is that Turbopack is able to decompose the modules that make up your code base much more granularly than webpack. In Turbopack, we've been calling these module fragments: essentially breaking up your code at a per-function level instead of at the file level. With that granularity, again, there's less work to do.
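A toy sketch of why per-function fragments mean less work (hypothetical names, nothing from Turbopack's actual internals): if the graph tracks fragments rather than files, changing one function dirties only its dependents, and every other fragment can come straight from cache.

```rust
use std::collections::{HashMap, HashSet};

// Each node is a fragment (roughly a function), not a whole file.
// Edges point from a fragment to the fragments it depends on.
struct FragmentGraph {
    deps: HashMap<&'static str, Vec<&'static str>>,
}

impl FragmentGraph {
    // Everything that transitively depends on `changed` must rebuild;
    // every other fragment can be restored from cache untouched.
    fn dirty(&self, changed: &str) -> HashSet<String> {
        let mut dirty: HashSet<String> = HashSet::new();
        dirty.insert(changed.to_string());
        let mut grew = true;
        while grew {
            grew = false;
            for (frag, deps) in &self.deps {
                if !dirty.contains(*frag) && deps.iter().any(|d| dirty.contains(*d)) {
                    dirty.insert(frag.to_string());
                    grew = true;
                }
            }
        }
        dirty
    }
}

fn main() {
    // One page exports two components; only one uses the changed helper.
    let graph = FragmentGraph {
        deps: HashMap::from([
            ("page::Header", vec!["ui::button_class"]),
            ("page::Footer", vec!["ui::footer_class"]),
            ("ui::button_class", vec![]),
            ("ui::footer_class", vec![]),
        ]),
    };

    // Changing one Tailwind class inside `button_class` dirties Header
    // but leaves Footer cached, which file-level tracking could not do:
    // at file granularity, both components share a file and both rebuild.
    let dirty = graph.dirty("ui::button_class");
    assert!(dirty.contains("page::Header"));
    assert!(!dirty.contains("page::Footer"));
    println!("rebuilding {} of 4 fragments", dirty.len());
}
```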
And so we can follow the paths through your code base and build your module graph much, much more granularly. I'm sitting here thinking that "module graph" actually stops making sense in that context. Maybe it's a function graph or something like that; I don't know, I hadn't really thought about it before.

Yeah, and it's cool, because if you break it up at the function level, you can just start caching at the function level, right?

Whoa, big brain, Chris. Yeah, that's where the idea I was mentioning before comes in: [00:27:00] restoring everything from cache and rebuilding only the one page ties right back to that per-function granularity. I was describing it at the page level, but it's possible that optimization could be made even at the function level.

Yeah, that's pretty insane. That's super insane, actually. It's no secret that everybody has been using webpack for a while, so they might be interested in wanting to use Turbopack, right? What considerations would they need to keep in mind when trying to make that transition?

I'd say the big thing to note is that we aren't planning, at this time, one-to-one API compatibility for webpack-to-Turbopack migrations. We are planning on making Turbopack highly flexible and extensible, but this is a spot where we're taking those years of webpack learnings and making sure we provide the best user experience possible with them. [00:28:00] Improving on those APIs is going to let us optimize for speed and efficiency while still providing the extensibility that webpack users have enjoyed. So the API, at least as we expect right now, won't be exactly the same, but we expect it to feel quite familiar. I think another consideration is that the most significant improvements will be at scale.
The more modules and the more code you're bringing into your pages, or whatever it is you're building, the more you're going to see a dramatic difference in that initial page-load startup time and that HMR experience. That goes back to the module-fragment idea I was describing before.

So some motivators, if I wanted to go from webpack to Turbopack: I think the obvious one is speed, right? And then, as you just mentioned, it scales with the size of your code base.

Yeah, exactly. Scale is a big factor there.

Okay, gotcha. So for those listening, keep that in mind. I think that's pretty cool, especially for big enterprise apps where you potentially [00:29:00] see slowdowns. When you have thousands of files, this could be a good alternative to ease those pains.

Yeah, totally.

So looking ahead, what are some key features or developments on the Turbopack roadmap that developers can anticipate in future releases?

A stable release for next dev is going to be the first major milestone, for sure. If you want to use Turbopack today, you'll need a Next.js app, and you'll put --turbo behind the next dev command; that will let you use Turbopack. So yeah, it's still in beta ahead of the stable release. Once that release is stable, you won't need the flag anymore; it'll just be next dev, and you'll be using Turbopack. I know a lot of folks are really looking forward to that, myself included. After that, you'll see Turbopack be able to compile your production Next.js build, which is also very exciting. When you're working on a large Next.js app with tons of other developers, getting those build times and CI down is crucial, of course. So I'm really excited for that one. [00:30:00] And then after that, I think there are a couple of different directions things could go. I know we mentioned the persistent caching story.
I'm not going to say that's a 100 percent thing, but I'm sure it'll make sense once we get there. There's also the question of whether Turbopack can be a general-purpose bundler. There's a lot of innovation going on in the bundler space right now between Turbopack, Rspack, and Rolldown, so it'll be really interesting to see where the ecosystem goes in the coming months and years. By the time we get those Next.js optimizations stable, it really will be a matter of going back to basics and asking what developers need from us right now. So it'll be a matter of finding out what makes sense once we make it there.

Sounds like a lot of stuff, and I'm super excited, especially for this function-level caching. I'm champing at the bit for that one.

Wow. But yeah, that's pretty much it; that's all the questions I have for you. Any last things you want to plug or mention?

If you want to play with Turborepo: [00:31:00] npx create-turbo@latest. And as I just mentioned, next dev --turbo if you want to be a part of testing out Turbopack. There's also, I believe, areweturboyet.com.

Is that a thing? Yeah, go ahead.

Yeah, go ahead and open it. Open areweturboyet.com and you'll get a big fat "No" across the top of your screen. These are the tests for next dev; once it makes it to 100 percent, there will be a big "Yes" across your screen. It's a live counter, a little graph of the tests that Turbopack needs to pass to be ready for a stable next dev server.

Wow, looks like you're almost there.

Yeah, you'll see a line across the top there that goes up; it's gone up and down a little bit over the past few months.

Yeah, I see the downs a little.

Folks have been confused looking at that, because what's actually happening is we're adding more tests, so naturally, the math works out.
[00:32:00] So it goes up and down, unfortunately, but it's all in the name of more stability, so that's at least good.

It's kind of the classic developer's problem: the last 10 percent is 90 percent of the work.

Yeah. Classic.

Well, that's awesome. It was a pleasure having you on the podcast, and I look forward to the future of Turborepo and Turbopack, because I am quite invested in the ecosystem.