Paul Mikulskis (00:16): Hi there, and welcome to PodRocket. I'm your host, Paul, and today we are very privileged to have Juri Strumpflohner on the show with us. Welcome to the show, Juri.

Juri Strumpflohner (00:26): Yeah, thanks. Thanks for having me.

Paul Mikulskis (00:29): So today, we're going to be talking about Nrwl/Nx, monorepos, and just general development styles that we're seeing in the modern day of technology and with these big companies. I mean, Nrwl, jumping right into it, they consult for Fortune 500 companies, right?

Juri Strumpflohner (00:48): Yeah. We work with some pretty big companies.

Paul Mikulskis (00:50): So you're going to get the best of the best talking to you and learning about the impact that you have at the company and the impact that you're having on the world today. So your title is Director of Engineering and Dev Experience. Does that wrap up what you do at Nrwl?

Juri Strumpflohner (01:09): Yeah, exactly. It's kind of a double role, let's say. We are globally distributed, so for Europe, I'm also taking care of some of the engineering stuff. I have a couple of folks here working for companies, because we also consult for companies. And then the main job is probably the one about being the Director of Developer Experience, so focusing a lot on the content side and being on shows like this one here, talking about what we do, and that focuses mostly on Nx and the open source products that we try to bring forward.

Paul Mikulskis (01:39): What do you think is the biggest misconception about what Nrwl is when you talk to people about what you do?

Juri Strumpflohner (01:45): I think a lot of people don't have, or don't make, the connection between Nx and Nrwl, and we probably sometimes make the mistake of using them somewhat interchangeably. Sometimes there's Nrwl mentioned, or we mention in our titles on slides that we are from Nrwl, and so people don't immediately recognize what it stands for.
But basically, Nrwl is the company, if you like, behind Nx. Nx is an open source product, it has always been open source, so we develop that as part of our working time at Nrwl, but a large chunk of what we do is also provide consulting for big companies... mainly in the front end space, though not always just in that space. We also have expertise in some back end areas, but it's mostly the front end space, and most of the time it involves monorepos, and that's where we have a lot of expertise.

Paul Mikulskis (02:35): Do you think there's a shift coming to monorepos? Is this a new thing, and is that why Nrwl is finding success in the consulting space now?

Juri Strumpflohner (02:45): Kind of, yeah. I mean, Nrwl has been around for, I think, five years now, so we've been around for quite a while, always doing monorepos. The founders of Nrwl, Victor Savkin and Jeff Cross, are two ex-Googlers, so obviously within Google, they saw how Google, which is known for having that gigantic monorepo where everyone develops all the products of Google inside it, does things, and so they had seen what that can give you. So when they left Google, they wanted to bring the same experience to the open source community, to do all that outside of Google, but much simpler, obviously, because not everyone can have a dedicated team just handling the tooling stuff. So it came a bit out of that, although in the front end space, I feel like since last year it has gained much more popularity. We have also seen more tools come up in the same area that provide some sort of monorepo tooling, so there has been some increased awareness about monorepos in general, for sure.

Paul Mikulskis (03:48): Well, there's been an increase in tools and I guess sort of a decrease, because Nrwl bought Lerna, right?

Juri Strumpflohner (03:56): Yeah, exactly. We took over stewardship in May, I think. Yeah. Lerna had been, for a while, stagnating.
There were just some minor fixes going in, but it was mostly unmaintained for two years, I would say, more or less. I think July 2020 or something was the last time it was active. And so from the Nx side, we had seen that, and a lot of the folks that we consult with wanted to have some sort of integration, so it was always possible to have Nx within a Lerna workspace. So you'd just use the very core, very minimal Nx, to have fast task execution within Lerna repos; that's what people did. And then the whole issue of Lerna being unmaintained, being dead, came up again in May when someone actually merged a PR into the main readme saying, in big letters at the very top, "Look, this is unmaintained, go search for another solution." So that's when we had the idea: why not carry it on? Because we had already been integrated with Lerna, just not in, if you like, an official manner. And obviously in that integration, what people usually did was change the Lerna commands and just use Nx commands for running their processes within a Lerna monorepo, and we saw the possibility that we could make it even simpler. For a lot of the folks that use Lerna right now, the main pain point was mostly just the speed aspect. They were usually happy with the rest of it, like the publishing process and the package linking; Lerna already provided some alternatives there as well. So when we saw Lerna being unmaintained and the whole community being like, "Oh, now what do we do? Where do we go?", we were like, yeah, we could take it over. We have a lot of expertise in there, Nx already works with Lerna, so it was kind of a natural fit. So we reached out to the main maintainer and they agreed, and we got going almost a month ago, it must be. Yeah.

Paul Mikulskis (05:49): So you get the added benefits of Nx, which we should get into in a sec.
So that's the caching and the awesome dependency graph that gets drawn out, and that can integrate straight into Lerna? Very cool. Okay, very cool. So, I mean, let's do it. Let's get straight to Nx. If somebody's never heard of Nx, what would you tell them?

Juri Strumpflohner (06:13): So Nx, as we call it, the official slogan is that it's a smart, fast, extensible build system. That's what you see on our website. And obviously, as I mentioned already a couple of times, we are very optimized for monorepos, so that's part of the DNA of Nx, if you like. Now, a lot of folks, when they see something like "build system", they think, oh, does it then replace my webpack or my Rollup or Vite or esbuild or whatever you're using, which is not the case. It's more like a task orchestrator. It basically takes those tools and applies them in the most efficient way. So if you have a lot of packages within the same workspace, so basically some sort of monorepo, you might use Vite for some packages, or esbuild, or Rollup. Nx just recognizes those tools and schedules their execution, so it uses the scripts that you have in there and runs them in an efficient way. And as you said, it's things like caching, applying caching to identify what actually needs to be run, so it doesn't run whatever you didn't change or touch. And also the whole part of distribution, so distribution of the tasks and distribution of the cache. And actually, a couple of weeks ago, I started listening to a book from Tony Fadell, the creator of the iPod and iPhone, who worked at Apple, and he had a good slogan. He said, when you create a product, you need to have a painkiller, not a vitamin. You need to actually kill the pain of...

Paul Mikulskis (07:44): Ooh.

Juri Strumpflohner (07:44): Yeah, it's a really good... I really liked the analogy. It's like, you need to first of all solve the pain point that people have, and then you can go further.
I feel like with Nx, we definitely addressed that pain point, which is usually the scaling aspect and the speed part, which is mainly what we did, for instance, also when we took over [inaudible] with Lerna. That was the main thing we addressed there. But I feel like we also have the vitamin, because once you have the monorepo set up and you get going, then we have a lot of support as you grow that monorepo, like organizing libraries within it, helping generate new stuff, helping with consistency, and those kinds of features. So yeah, it's a pretty good analogy, I feel.

Paul Mikulskis (08:24): I like that analogy as well, and speaking of, what are some of those vitamins? Just to seal the analogy here, what I mentioned just a few minutes ago, that dependency graph that gets drawn, I would love to take that vitamin. That is beautiful. For anybody that doesn't really know what we're talking about here: when you have Nx core, you can have a webpage open up that shows all of the different packages and how they build and link to each other, and if you change one, oh, now you have to go test this part of the dependency graph. It really helps organize that thought in your head.

Juri Strumpflohner (08:57): Yeah, and it's not just the visualization, you can also use that tool for debugging. What we often see is, first people start with the monorepo, they have just a couple packages and apps in there, they look at the graph and go, "Yeah, cool, nice," right? And then they might not look at it anymore for quite some time, but then they hit an issue where some library depends on some other library and they have no clue why that is. And so with the graph, you can now also do some smart filtering where you say, okay, I know this library up there, in the whole messy graph, depends on some library down there, but which path?
And so you can really select the first node, select the last node, and then get the shortest path, or all the paths, and stuff like that, and that is super useful for debugging large workspaces. So yeah, absolutely. That's very cool.

Paul Mikulskis (09:45): And I can also imagine, if I were to ever use this for something like Python... I'm always stuck in package hell with Python. I love Python, it's probably my go-to language when I'm trying to just get work done quick and dirty, but, oh God. For solving some of those pip catastrophes, it would be really, really helpful. So yeah, maybe could you talk a little bit about, if somebody's maybe not super familiar with the type of load and tax that a build might exert on a computer, the binaries that get invoked, bringing together the files, the IO and stuff. Could we maybe look at that a little bit and talk about that main painkiller, the speed? How does Nx improve that? How does a build work at its core, and where does Nx step in and make it better?

Juri Strumpflohner (10:31): Yeah. So the main thing is usually, when we talk about monorepos, people usually like them when they start looking at them, because you see all the benefits. You see, oh, now I can have two libraries which before were in separate repos, and I can just share the code. I don't have to publish, I don't have to think about versioning, necessarily. So initially it gives you a good feeling, a lot of productivity. The problem we usually see is that if you approach it very naively, you keep adding more and more stuff on top of it, and at some point you hit a limit where the productivity decreases again. Why? Because the build takes super long. On CI, for instance, your build starts taking 30 minutes to an hour, which obviously is quite a problem if you need to merge multiple...

Paul Mikulskis (11:16): That's a long time, yeah.

Juri Strumpflohner (11:17): Yeah, exactly.
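The graph filtering Juri described a moment ago, picking a first and a last node and asking which paths connect them, boils down to a path search over the project graph. Here is a minimal sketch in TypeScript; the project names and the `ProjectGraph` shape are purely illustrative, not Nx's internal API:

```typescript
// A simple adjacency-list project graph: project -> projects it depends on.
type ProjectGraph = Record<string, string[]>;

// Collect every dependency path from `from` down to `to`, depth-first.
function findPaths(graph: ProjectGraph, from: string, to: string): string[][] {
  const paths: string[][] = [];
  const walk = (node: string, trail: string[]) => {
    if (node === to) {
      paths.push([...trail, node]);
      return;
    }
    for (const dep of graph[node] ?? []) {
      if (!trail.includes(dep)) walk(dep, [...trail, node]);
    }
  };
  walk(from, []);
  return paths;
}

const graph: ProjectGraph = {
  app: ['feature-cart', 'ui'],
  'feature-cart': ['ui', 'utils'],
  ui: ['utils'],
  utils: [],
};

// Which paths lead from `app` down to `utils`?
console.log(findPaths(graph, 'app', 'utils'));
```

The real graph tool does this interactively in the browser, but the debugging value is the same: instead of staring at the whole messy graph, you only look at the chains that actually connect the two projects in question.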
If you need to merge multiple PRs a day, that can become quite an issue. And so with Nx, we tried to address that from different perspectives. First of all, what we do with the dependency graph, or the project graph, which you mentioned initially: you can not just visualize it. The visualization is the end product, but behind the scenes, that graph is continuously being used to actually understand what the relationships are. It's automatically built, and based on that, Nx is able to understand, in a PR, for instance, which libraries you touched. So we don't have to rebuild and retest everything, but can really just take that node that you touched, that project, then follow the path upwards on the graph and see which other nodes depend on it, and obviously we also need to execute tests and builds and so on for those nodes. In a larger workspace, this already cuts out quite a lot, a big chunk of computation which you would otherwise need to do. So that's one aspect. The next one is definitely, on top of that, the whole parallelism that comes in: running tasks in parallel, making sure they're executed in the correct order, and making sure you don't waste computation by having processes sitting there idle instead of picking up, say, a test run while a build is still running, things like that. But on top of that, there's the caching, which is a big part. All the computations, all the tasks that you execute, are cached, meaning that Nx looks at the inputs you have. It looks at which source files, environment variables, and other things are involved in that specific run, and then it caches the result under a hash, basically a unique string. And when you rerun that task at some point, it checks again that those conditions still hold, as I said before, and if that's the case, it just pulls the result out of the cache instead of running it. That drastically reduces the time.
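The two mechanisms described here, computing "affected" projects by walking the graph upward from a change and caching task results under a hash of their inputs, can be sketched roughly like this. All names are illustrative; this is a toy model of the idea, not Nx's actual implementation:

```typescript
import { createHash } from 'node:crypto';

// project -> the projects it depends on
type ProjectGraph = Record<string, string[]>;

// Everything that (transitively) depends on a changed project must be
// rebuilt/retested; everything else can be skipped entirely.
function affected(graph: ProjectGraph, changed: string): Set<string> {
  const result = new Set<string>([changed]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [project, deps] of Object.entries(graph)) {
      if (!result.has(project) && deps.some((d) => result.has(d))) {
        result.add(project);
        grew = true;
      }
    }
  }
  return result;
}

// Task results keyed by a hash of their inputs (source files, env vars, ...).
const cache = new Map<string, string>();
function runTask(inputs: string[], compute: () => string): string {
  const key = createHash('sha256').update(inputs.join('\n')).digest('hex');
  if (!cache.has(key)) cache.set(key, compute()); // cache miss: actually run
  return cache.get(key)!; // cache hit: restore the previous result
}

const graph: ProjectGraph = { app: ['ui'], ui: ['utils'], utils: [], docs: [] };
// utils changed -> ui and app are affected too; docs is untouched.
console.log([...affected(graph, 'utils')]);
```

The second run of a task with identical inputs never executes `compute` again; that is the whole caching trick, just applied per task with a much richer notion of "inputs".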
That's also something which you see with Lerna, for instance, where we applied it. That really drastically reduced the computation time.

Paul Mikulskis (13:18): So if you were running this on Circle, for example, where would that cache be located?

Juri Strumpflohner (13:22): So normally, the cache... On your workstation, by default, Nx always caches. So it's local and sits in your node_modules folder; there's a specific .cache folder and it just adds it in there. You can specify that location, so potentially you could also have it somewhere outside, like on your CircleCI build, and then have CircleCI cache that specific folder. Or alternatively, you can opt in to something like Nx Cloud, which we provide, which then distributes the cache to the cloud. In that case, you don't have to worry about where the cache actually sits; Nx takes care, together with Nx Cloud, of synchronizing that local cache folder to the cloud and restoring it again. And that's what most people do, most people opt into that, and then they basically get that caching for free. The main advantage you get is on CI, which runs so often. Subsequent PRs don't need to rerun all the parts; maybe the previous PR has already computed some part, so it just gets restored into your own run and you get a much faster result that way.

Paul Mikulskis (14:25): And that's an immediate value-add for a team, because it's just less build time that you're shelling out dollars and dimes for.

Juri Strumpflohner (14:32): Oh, absolutely. Absolutely.

Paul Mikulskis (14:34): It's that significant.

Juri Strumpflohner (14:35): Yeah. I forgot to look up the exact numbers, but we applied this already for Nx itself, and within a week we sometimes saved months, which is crazy if you think about it. If, within a week, you save that much time, that's computation which would otherwise be wasted in the end.
Even not just the time that you wait, but also the resources that have to be spun up, machines that have to run to execute a task which otherwise wouldn't have to. So, yeah, totally.

Emily (15:01): Hey, this is Emily, one of the producers for PodRocket. I'm so glad you're enjoying this episode. You probably hear this from lots of other podcasts, but we really do appreciate our listeners - without you, there would be no podcast, and because of that, it would really help if you could follow us on Apple Podcasts so we can continue to bring you conversations with great devs like Evan You and Rich Harris. In return, we'll send you some awesome PodRocket stickers. So check out the show notes on this episode and follow the link to claim your stickers as a small thanks for following us on Apple Podcasts. All right, back to the show.

Paul Mikulskis (15:38): Who should be using this? What types of projects benefit most from Nx? Because you could have big repos that would benefit from a cache. If you have something that's very disparate, the cache is going to be very impactful. For a smaller repo, it's not going to be as much, but should the small repo folks still be looking at this if they're just trying to build a small SPA or something?

Juri Strumpflohner (15:59): Yeah, I would say so, because Nx is not just... it is optimized for monorepos, but you can use it outside monorepos as well. So there's the whole, if you like, vitamin part, where it comes with the generators and scaffolding of new projects, those kinds of functionalities, which help you a lot even if you build a single application.
We actually have a lot of use cases where companies are not even sure from the very beginning, like, "Oh, we want to go with this new project as a monorepo." So initially, you just start sketching it out, you create a product, and we always start with an Nx repo and then come in with some guidance where we say, don't just create a single project with subfolders; rather, for those subfolders, create small libraries. So you already build out a monorepo, if you like, but the difference is really just within... Normally, you would basically have single subfolders with Create React App or whatever you're using, and in this pattern, instead, you have a thin application layer at the very top and then a set of libraries specific to your features, to your domain areas, or whatever you're using in that application. And those libraries don't need to be shared across domains. It's not that a second app comes in and the library already needs to be built to be reusable. Quite a lot of libraries are actually very focused on the domain area they belong to, so it's more, if you want, an organizational structure: how do you scale teams, and what do teams work on, when you have those neat library structures? And we've seen that, usually, you end up with a nicer API as well, because you have that notion of, okay, what is public to that library? What is private within that library? Which components should be used from the outside in the application, and which just belong to that small feature library I'm building? And so that helps.

Paul Mikulskis (17:49): You're kind of defining an API.

Juri Strumpflohner (17:51): Yeah, exactly. Exactly. So it's a much finer, clearer separation, instead of just having folders within your Create React App setup.
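The structure Juri is suggesting, a thin application layer on top of focused domain libraries, might look roughly like this. All of the folder and library names here are purely illustrative:

```text
myorg/
├── apps/
│   └── store/                  # thin application shell, mostly wiring
├── libs/
│   ├── store/feature-cart/     # feature library, private to the store domain
│   ├── store/feature-search/   # another domain feature
│   ├── store/ui/               # presentational components for the domain
│   └── shared/utils/           # the rare genuinely cross-domain library
└── nx.json
```

Each library's public surface is whatever it exports from its entry point, which is where the "you're kind of defining an API" observation comes from: the boundary is explicit rather than an informal folder convention.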
An advantage on top of that is that those single libraries then get cached, whereas if they were part of a single application build, it's always just one build. But if you split them up and have sub-libraries, then the test runs are separate, so those tests don't need to be executed if you didn't change that subdomain of your application, and you get that benefit as well. Then once teams get accustomed to that, very often we see, oh, well actually, that application could be two applications, deployed and scaled independently. And at that point, you're naturally building a monorepo, really. So a lot of the monorepos we see are not one gigantic monorepo per organization; it's more a couple of monorepos, maybe per department or per larger domain area which the company covers. It really depends on the actual company that is using it.

Paul Mikulskis (18:53): Would you consider monorepo a misnomer?

Juri Strumpflohner (18:56): Yeah, yeah, totally. I actually tweeted about that a couple of weeks ago, and I think Rich Harris also brought it up in one of his tweets, because that's exactly the misconception a lot of developers have when I talk to them. They're like, "Oh, we didn't go with Nx because, actually, we don't really want to have one monorepo for the whole organization," and I'm like, "Yeah, that's not what you have to do." So I would probably prefer something like "multi-project repo" or something that denotes it much more clearly. So it's really just a couple of projects. Most of the time, they're very related, so it could even be the web app and the mobile application for the same product. Technically, you already have a small monorepo at that point, but you gain a lot of benefits because there's probably a large part that is shared. That's what people don't think of initially, immediately, because they're kind of scared of that monorepo thing...
because they know, "Okay, yeah, I know Google monorepos, those are those gigantic repos. I'm probably not in that use case," and I agree on that part, most of us are not in that large use case, but there are a lot of smaller monorepos as well.

Paul Mikulskis (20:01): Yeah. It's confusing, too, because I guess the opposite of that would be a polyrepo, which is the smaller one.

Juri Strumpflohner (20:10): Yeah, exactly, exactly.

Paul Mikulskis (20:12): [inaudible].

Juri Strumpflohner (20:12): Yeah.

Paul Mikulskis (20:13): So Nx 15 is coming out sometime this year, right? When is that again?

Juri Strumpflohner (20:21): It will probably be around October. We usually have a six-month cycle in terms of major releases. We've stuck to that for a couple of years already and it works out pretty well. So we have minor releases every couple of weeks, and bug fixing [inaudible] obviously, and then a major release usually every six months, roughly. That said, if you go into Nx and use the plugins that Nx comes with, that type of setup, then we also usually come with automatic migrations. A major release is a breaking change, technically, and so we are cautious and therefore always increment the major version, but we also ship migrations with it. So we basically upgrade your repo, meaning upgrading the package versions, of course, installing new packages, but also changing configuration, if that changes, and changing your source code to some degree. So we basically help you get up to that new version and upgrade.

Paul Mikulskis (21:17): What are you most excited about for Nx 15? Is there a feature in there, or a feature you feel isn't broadcast enough, that people are chatting about?

Juri Strumpflohner (21:29): I think one of the biggest ones I'm excited about is the whole simplification of the configuration. We called it, I think, negative configuration or something, because you can use Nx in different ways. So you can use Nx...
We came up with the term Lerna-based repos, but in the sense that you have those Yarn workspaces repositories, or npm or pnpm workspace repositories, where you usually have a structure with the packages, each one has its node_modules folder, you have the package.json in there with the scripts and stuff, and you add Nx on top of that just for the task scheduling. So that's the lightweight setup. And then there's the more evolved setup, which we call Google-style, because it's usually a bit more opinionated. It comes with more plugins and stuff, so it's probably stricter in the sense of how you set it up, but it gives you a lot more in terms of developer experience. With that specific setup, there obviously comes more configuration, in the sense that all those plugins have some metadata sitting in a configuration file, based on which they understand how they should build your project. The whole point, in the end, is that due to those plugins, we can provide features such as automated code migrations, because we know the structure. If you have a loose webpack file lying around somewhere and you fiddle around with it and plug stuff in, it's really hard to automatically migrate that, because you would have to look at all the possible source code combinations you could fit into a webpack config, which you probably wouldn't want to do. And so the whole aspect of negative configuration is interesting because, for both of these approaches, you can really reduce the amount of configuration, or repetitive configuration specifically, that you have in those single areas. So if you have a test target defined for every package in your project, you don't have to repeat it every time you want to use Jest for all of those; rather, you can define it in one single place and then basically project all those configurations onto the different pieces.
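As a rough illustration of that "define once, project everywhere" idea, a workspace-level nx.json might carry the shared target settings so individual packages don't repeat them. This is a sketch assuming the Nx 15-style `targetDefaults` shape; treat the exact keys and values as illustrative rather than a complete reference:

```json
{
  "targetDefaults": {
    "build": {
      "dependsOn": ["^build"]
    },
    "test": {
      "inputs": ["default", "^default"]
    }
  }
}
```

Here `"dependsOn": ["^build"]` says "building any project first builds its dependencies", and the `test` inputs say "a project's tests are invalidated by its own sources and those of its dependencies", stated once for the whole workspace instead of per package.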
And this just simplifies things a lot, so I'm most excited about that one, also because we're mostly in the process of releasing some of those things already. Some of it has already been released. For instance, when we integrated Nx with Lerna in 5.1, or we [inaudible] that easy opt-in, we simplified Nx a lot. At this point, you can actually just install the Nx package. You don't have to add any other file, and it would already be able to schedule tasks within your monorepo and get running on any type of monorepo, even on Lerna and pnpm, whatever. Even the graph, the graph would totally work. Actually, you wouldn't even have to install Nx. You could run npx nx graph and it would work on any type of monorepo setup, which I think is pretty cool. So that comes with that negative configuration, as we call it, already. At some point, you probably would want to have at least a top-level nx.json file. You almost always have to fine tune things somehow: what does my build graph look like? Which project... whenever I run the build, I want to also build my dependencies. Stuff like that you almost always want to define, but it feels much more lightweight if you don't have to have it right at the start when you get going. So I'm pretty excited about that one.

Paul Mikulskis (24:46): So [inaudible] to get started and just experience the out-of-the-box benefits, because caching a build, pieces... that's a significant thing that, if you can get it out of the box, it's like, why not?

Juri Strumpflohner (24:59): Yeah, exactly. Exactly. Yeah.

Paul Mikulskis (25:01): Is there any other thing about Nx? I mean, I keep talking about the caching because, having my face in CI/CD a lot, in my eyes that's just a no-brainer, huge low-hanging fruit. It's like a melon hanging from a tree. But what other features haven't we talked about that you think are really cool? There are cool integrations straight with React, there are all sorts of extensions and modules out there.
Maybe we could throw out one or two more.

Juri Strumpflohner (25:26): Yeah, absolutely. Still staying maybe a couple of seconds on the caching part, the nice fit on top of the caching is also the distributed task execution which we added. I think, especially when we talk about CI, the combination of caching and distributed task execution is what really gives you the speed. With distributed task execution, what I mean is basically, when you run builds on your CI, you run them in parallel, that's for sure. If you have a couple of tests, you run them in parallel, you scale them in parallel. The problem, though, is that you can only run so many things in parallel, or [inaudible] within the same agent that you have from CI. At some point, it would get even slower; tasks have to wait and then they pile up, right? So what people usually do is start to actually split them up: multiple agents that run in parallel and compute stuff, and then you collect the results again. The thing is, you can totally do that manually. Nx has programmatic access where you can say, okay, give me all the changed libraries, all the changed packages, and give me all the targets, and then you can collect them and distribute them on your own, with whatever setup you're using on CI. The thing is, that kind of distribution is usually very naive, because you usually just cut the tasks into batches. You have, like, 20 tests to run, so you just create equal batches, assign them to agents, and run them. The problem then, when you run them, is that often one agent gets four tests to run in parallel, but those are super heavy tests which take 10 minutes, while other agents are finished after two, three minutes and are waiting there idle, but they have to wait until all the agents come back to collect the results. So with the distributed task execution, we tried to solve that.
First of all, we take away the burden of you having to figure out how to distribute; it's done automatically. It still runs on your CI, CircleCI or GitHub or whatever, but the agent that manages the distribution is the intelligent part, and that comes with Nx Cloud, for instance. That one, knowing the graph of how the projects relate, is able to prioritize things. It knows that if there is a leaf project that is used by a couple of different parts of the graph, it will compute that first and cache it, so it can then just distribute the cache among the agents. As well, it knows the historical data, so it knows which tasks usually take how long, and it can use that information to balance out the utilization of the agents. So in the end, what you end up with is pretty high utilization of the CI agents, and therefore obviously much, much quicker computation overall. That is something we have been working on a lot recently. The CI setup is super simple; you can actually generate it. There's a CI generator in Nx where you can say, okay, give me the CI setup for CircleCI or GitHub, and we're working on a couple of others, simply because the CI setup is so simple. It's really just starting the master node that orchestrates between however many agents you have at your disposal; you just start them, and the master node will coordinate them and shut them down again. And interestingly, in the end, from a developer experience point of view, you just have one single output. So if you go on your CI and look at the log, it feels as if it was computed on one single machine, which I feel is also super important. You don't want to have to go and find the agent which ran that specific test just to see the error message or something, so I think that is a very big plus, something that also distinguishes Nx from other things in the market and the ecosystem.
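The balancing idea described here (use known timings so no agent sits idle while another grinds through heavy tasks) can be approximated with a classic greedy heuristic: sort tasks longest-first and always hand the next one to the least-loaded agent. This sketch deliberately ignores the task graph and uses invented task names and timings; the real Nx Cloud scheduler considers much more:

```typescript
interface Task { name: string; estMinutes: number }

// Longest-processing-time-first greedy assignment of tasks to agents.
function schedule(tasks: Task[], agentCount: number): Task[][] {
  const agents: Task[][] = Array.from({ length: agentCount }, () => []);
  const load = new Array(agentCount).fill(0);
  for (const task of [...tasks].sort((a, b) => b.estMinutes - a.estMinutes)) {
    const i = load.indexOf(Math.min(...load)); // least-loaded agent so far
    agents[i].push(task);
    load[i] += task.estMinutes;
  }
  return agents;
}

const tasks: Task[] = [
  { name: 'e2e', estMinutes: 10 },
  { name: 'build-app', estMinutes: 6 },
  { name: 'test-ui', estMinutes: 3 },
  { name: 'test-utils', estMinutes: 2 },
  { name: 'lint', estMinutes: 1 },
];

// With 2 agents: e2e plus lint (~11 min) versus the other three (~11 min),
// instead of one agent idling while another grinds through the heavy tests.
console.log(schedule(tasks, 2).map((agent) => agent.map((t) => t.name)));
```

A naive equal-count split could put `e2e` alongside `build-app` on one agent (16 minutes) while the other finishes in 6; the duration-aware assignment keeps both agents busy for roughly the same wall-clock time.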
Paul Mikulskis (28:59): That's something that I would rather pick up from you explaining it to me, because that's kind of a complicated thing if you're not in the face of dependencies and how that goes, but what you're basically explaining here is that it'll look at a dependency on the graph, by analyzing the graph, and say, oh, let's temporally front-run this bit, because it can be distributed. That's huge. That's really... very interesting.

Juri Strumpflohner (29:23): Yeah, absolutely. Yeah, it's hard to explain those things. We've tried a couple of times, but it's really hard to boil it down. You really need to try it out. And that said, for open source projects... technically, Nx Cloud is our paid product on top of Nx, if you want. Nx is free, and Nx Cloud is the paid product that you can add on top of it. We actually changed the pricing model half a year ago such that it's basically free for everyone, because we give everyone 500 hours for free, simply because we figured out that... Yeah, we simply figured out that we don't get that much from the single runs of people trying it out and stuff like that; usually they had like five hours per month. Rather, we have a different offering of on-premise hosting of Nx Cloud, which large companies usually want, and so the business model is mostly on that. The large companies pay for it while others can try it out. Open source projects, in general, are completely free, so if you are out there hearing this and you have an open source product built with Nx, just ping me and we'll give you basically unlimited free hours to run Nx Cloud and use that distributed task execution stuff. That is our giving back to the open source community, just because we are very hooked on that and do quite a lot of open source ourselves.

Paul Mikulskis (30:40): So, a little bit less related to Nx, I'm just personally curious on your take on the SaaS space here.
So something like CircleCI, any of these CI/CD providers, they make a lot of money off of sneaky parameters, such as utilization, which you briefly mentioned. That's a cash cow. That's something that, as a team, you can let slip under your radar and really rack up a bill, like buying a really nice electric car every month or something. Juri Strumpflohner (31:13): Yeah. Paul Mikulskis (31:14): And this product, the whole of Nx, it tackles that very effectively. We're decreasing utilization, you're decreasing RAM usage from waiting, like, oh, I have a package in memory because we're waiting for this other one to get built. Well, you just explained temporal front running. So we're having all these things get optimized. Do you think this is going to have any long-term impact on the revenue and profit business models of these other SaaS providers and all of the sizeable revenue that they used to rely on? Because we're not talking about a 5% decrease here. You even mentioned at the beginning of the podcast, we went from months of runtime to days. Juri Strumpflohner (31:55): Yeah. I never thought about it from that perspective. It could totally be. I think, on the other side, however, you would have much more throughput at that point, right? If you can run more PRs and push through more PRs... Otherwise, as a developer, what do you do? You run maybe that single PR multiple times a day because you're trying to get it merged in. If that is faster in terms of computation, you would probably start working on more stuff and have more stuff going through in parallel. So it would probably ramp up some of the computation on that side and get more throughput. But yeah, it's an interesting take in general. I mean, we've been working on this vision for, I would say, almost two years, for sure. We already saw...
when we had local caching, we already had the distributed caching in mind with Nx Cloud, and we already had the distributed task execution in mind. It just took time, obviously, to develop those things and to refine them. But now, I feel like we really came full circle again, because you also mentioned before, what are some other cool features? One, for instance, that we released, I think it was in Nx 14, so a couple of months ago, is module federation support, and the whole speeding up with module federation, and if you really want, also microservices, right? What often happens is if you create large applications, and we see that when we consult for those Fortune 500 companies, sometimes they have those gigantic Angular or React applications where even serving locally takes like a minute or more, which is a problem: if you have to restart the local web server and wait two, three minutes until you see something in the browser, that is pain. And so now, with the module federation part, you can actually fix that... because how do you cache that otherwise? You cannot really cache the single application build easily, because it's always one big webpack build that builds it and links things together. Even if you create buildable libraries so they can be built independently and cached, and then you just restore the cache and link in the pre-built library, you still have that linking time, which might take quite a long time. So now, with module federation, what you can actually do, which is pretty neat, is you can slice the application.
So module federation is often just seen through the lens of microservices, where you also want to deploy independently, but we don't necessarily go that far. We just say, okay, Nx provides you the facilities, like generators and plugins, that allow you to slice applications based on your features, which you might have already done, but it builds them independently thanks to webpack module federation, so you get different buildable artifacts. And then, obviously, those are independently cacheable at that point. And so, mentioning going full circle again, we introduced a couple of helpers and facilities around those module federation setups, such that nowadays you can create a React app that is set up with module federation, but you develop it as a single application. However, you can now say, okay, I want to work on, I don't know, the product catalog of that whole huge web shop application. I don't really care about the checkout process. So what I do is I say, I want to launch the application with all those modules, but I want to launch the product catalog in development mode, meaning it uses the webpack dev server and you get live refresh and everything, but all the other modules are actually just pulled from cache and served statically. So you still see the whole application as a whole, but it's super fast, because really you're just booting a single small piece within an application. So I think things like that are what give you that added benefit in the end of having the cache, having it distributed, and being able to pull it back down, together with those generators and plugins that Nx has. Paul Mikulskis (35:43): I mean, yeah, if it's cutting local dev time, that's also irritation saved on a developer's part. You don't code eight hours a day, you code three to six, and then if you're getting really frustrated, that turns into two to three.
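For readers unfamiliar with it, the slicing Juri describes is built on webpack 5's `ModuleFederationPlugin`. A minimal sketch of a host config might look like the following (the app and remote names, ports, and URLs are invented for this example — Nx's generators produce and manage this kind of config for you):

```javascript
// webpack.config.js for a hypothetical "shop" host application.
// Each remote ("productCatalog", "checkout") is built independently,
// which is what makes the slices independently cacheable.
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  plugins: [
    new ModuleFederationPlugin({
      name: 'shop',
      remotes: {
        // during development, one remote can be served by the live
        // dev server while the others are served statically from cache
        productCatalog: 'productCatalog@http://localhost:4201/remoteEntry.js',
        checkout: 'checkout@http://localhost:4202/remoteEntry.js',
      },
      // share framework dependencies as singletons so each remote
      // doesn't bundle its own copy
      shared: { react: { singleton: true }, 'react-dom': { singleton: true } },
    }),
  ],
};
```

The point of the setup is exactly the workflow described above: the host loads each slice from a `remoteEntry.js`, so only the slice you are working on needs a running dev server.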
Juri Strumpflohner (35:59): Yeah, exactly. Paul Mikulskis (36:00): So engineers aren't cheap, so that's another money-saving front. It's interesting you bring up this whole lens of looking at the value of Nx, because you're like, "It saves time, it increases throughput," and I'm like, "Well, on the systems side, it tackles utilization and memory consumption and stuff," and what are the effects of those two? I guess it really depends on the culture of your company, what apps you're building, and what services you're employing. But it's irrefutable that on both sides, you get these side-channel benefits that you wouldn't even expect. I can say that I had a friend that worked at a larger firm, and his Angular app... I mean, I was over at his house one time, and it took like 10 minutes to build before it booted up, and I was like, "How do you work? I don't understand. This would drive me crazy." Juri Strumpflohner (36:51): The thing is, that is not that uncommon, unfortunately. We've seen that when we worked on such projects where we optimized it, and a lot of the features of Nx come out of the frustrations of us consulting for companies having those problems, us then thinking about a solution and building it back into Nx, because we almost always use Nx for those companies. So a lot just feeds back into the open source product. Paul Mikulskis (37:15): I should say, the painkiller is where you start, and then you add on the vitamins. Juri Strumpflohner (37:20): Absolutely. Yeah, yeah. Exactly. Paul Mikulskis (37:23): All right. Well, we're coming up on time in a few minutes. I know one of your socials, because I like to ask about socials and YouTube channels at the end. So I will say you can find Juri on Twitter, and we can put your handle below. You have a very rich posting history of updates and all sorts of goodies on there. So definitely check that out. Are there any other areas that you would like to point our viewers to?
Juri Strumpflohner (37:45): Yeah, absolutely. Also check out the Nx Dev Tools Twitter account. That is NxDevTools, all written lowercase. From there, you get most of the info. We have a YouTube channel where we post updates, guides, and video tutorials about either new features or things like how to set up module federation, which I just talked about, with React, for instance. You will most probably find a video on there. And apart from that, Nx.dev is our documentation, which we're working on all the time. We all know how hard docs are; especially if you have a large feature set, it's really hard to nail down the most simple things that are valuable for visitors. And given that we talked about Lerna, that is a pretty new project for us, at least. I mean, Lerna has been around for years, but for us, it's pretty new territory. We just released Lerna 5.1 two weeks ago with a new website, new guides, a couple of new features, and speed improvements. So definitely check that out as well, especially if you have a Lerna workspace already. It might make a lot of sense to look at it, upgrade, and basically relieve yourself of the pain of slow builds. So yeah. Paul Mikulskis (38:55): And I went on your docs website. It's beautiful, it's really good, but the best docs are Juri docs. Go watch YouTube, go watch Juri docs. You'll learn the most there. Well, thank you for your time again, Juri, and we'll see you around. Juri Strumpflohner (39:14): Absolutely. Yeah, thanks for having me. It was a pleasure. Kate (39:29): Thanks for listening to PodRocket. You can find us @PodRocketPod on Twitter, and don't forget to subscribe, rate, and review on Apple Podcasts. Thanks.