Vite, the Past, Present & Future with Matias Capeletto
===

[00:00:00] Noel: Hello and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. You can try it free at logrocket.com. I'm Noel, and today we're welcoming back Matias Capeletto. He's a Vite core team member, and he's here to talk about Vite: the past, the present, and the future. You may know him as Patak. Welcome back to the show, Patak. How's it going?

Matias: I'm good. Thanks a lot for inviting me back. Like always, a pleasure to talk with you.

Noel: Yeah, I'm excited to talk. It is always a joy. This talk covers the past, the present, the future, just like the journey that Vite took. Can you guide us through the beginnings of Vite and contextualize it a little for us?

Matias: Yeah, wow. It's already almost four years, it's going to be four in April, I think. At the beginning, let's say four years ago, Webpack was the king of build tools, no? And then you had Rollup, which was also used a lot for libraries, and there were some projects like SWC and esbuild that were using native languages like Rust or Go. They were starting to show a lot of promise. And at that point, around April 2020, Evan You had the idea of working on a dev server, for Vue in particular at that point. The idea was to use modern technologies like native ES modules to make a very fast dev server with good HMR, because it serves ES modules directly, and to use the technology that esbuild enabled to process a TypeScript file on the fly, stripping the types to get JavaScript, extremely fast. These were things that were not possible before. Around that time, Evan said that one day he actually started hacking, stayed up all night, and the day after released [00:02:00] the first open source version.
That is when Vite was born, and during that whole initial era, this Vite 1 period, he was working towards the first stable release. There were other players that were heavily interacting with and influencing each other: Snowpack, Web Dev Server, and WMR from the Preact team. It was a very interesting period of trying new things and influencing each other, and it was a great collaboration already at that stage. And when Vite 1 was about to be released, in December of that year, Evan said, okay, we need to stop the ball here. This is not going to be the architecture that takes us to the future. We're going to review and revise [00:03:00] Vite 1, which was already in RC at that point, and there was a little ecosystem already forming. Evan said, okay, there will not be a Vite 1 at all, and he started a three-month sprint to create Vite 2, taking the best ideas from the others. For example, from WMR he took the universal Rollup plugin API: you write plugins and they work both in dev and build. He also took the opportunity to fix that Vite 1 was still a little bit Vue-centric. You could use React, but there were a lot of checks where Vue was intermingled. So he really took the time and said, okay, everything Vue related will be its own plugin, everything React related will be its own plugin, and that made it really framework agnostic. Vite core had nothing related to Vue at that point. And that is when I got involved, like in [00:04:00] any open source project, sending PRs for documentation. At that point I was checking the compatibility with the official Rollup plugins, so I tried them out with Vite 2, and a lot of them failed. I reported the issues, and Evan was closing issues like crazy at that point.
Normally I reported something and I got a ping like 15 minutes later with a commit that fixed it. Okay, this is interesting. We also started the Discord at that point, and a bigger ecosystem was appearing. Vite Ruby, for example, is from that point; I think Maximo was on PodRocket at one point. And when Evan finished Vite 2, that is when the project really started to take off. SvelteKit, for example, which was using Snowpack, decided to switch to Vite. The Snowpack team was starting to work on Astro, and at [00:05:00] one point they decided it was better to join forces with us, so they moved Astro from Snowpack to Vite and started to help us. They had so much experience, so they helped us a lot. And WMR, a few years later, they now also recommend Vite. It was one project after another; in the Discord there were so many people every day. There was a new plugin, someone was building an integration for some other project. It was a really interesting time. And that ended up in Vite 3, the first major in a while. After the release of Vite 2, Evan had spent so much time on Vite, and he has another project that is quite important called Vue, so he needed to focus a bit on that too. At that point he created the Vite team. He gave us [00:06:00] access and really trusted us to do the day-to-day maintenance, we started to have biweekly team meetings with him to guide the project, and we started to grow the team. Vite 3 was the first major release guided by this new team. From then on we have kept working: Vite 5 was released in December, and we are working on Vite 6, probably for October or something like that.
And in the middle of all that a lot of things happened, but at one point after Vite 3 I got contacted by Eric Simons, the CEO of StackBlitz. For them Vite was in the critical path for their product, because they developed this IDE using WebContainers that allows you to run Node in the browser. They had a very fast IDE working in the browser, ready in [00:07:00] no time, and then Webpack was installing and then starting, and that was not the experience they were looking for. Then they saw everybody starting to adopt Vite, and Vite played so well with that experience: you start a project, the Vite server is right away there, and you see the app. This was really important for them, so they hired me and I started working full time on Vite. And we got lucky, for what open source is, because Vite has had enough resources. Bjorn, for example, another team member, joined Astro to work on Astro and Vite.

Noel: And then Anthony Fu, I think you've talked with him. We've had him on a couple of times, yeah.

Matias: He is half of GitHub right now. He is working for NuxtLabs too. So we are lucky that we count on some [00:08:00] team members who have full-time or half-time capacity, and then a lot of members and contributors do this in their free time, but they're really fundamental for the project. So yeah, with time, almost every framework and tool started to offer first-class support for Vite, or Vite by default. And we reached around 11 million npm weekly downloads. It is just crazy to see the curve; it looks exponential. At this point we are at about 40 percent of the weekly downloads of Webpack, just to give an idea of how much Vite has grown in these years.

Noel: What do you think the biggest contributing factors were that made Vite latch on and gain adoption so quickly?
Because numbers like that, across the whole gamut of the dev space, it's almost unheard of to replace such a fundamental tool — replace is a strong word, but to take that much market share from such a fundamental tool so quickly. Why do you think the moment was primed for that, and Vite was able to do it?

Matias: I think Vite was at the right time and was the right idea, and Evan built a very good base that we all kept working on. Each framework that joined ended up with maybe one of their team members joining our team, and now we are about ten team members. And I think it may well have been possible that we would be talking about Snowpack at this point; any of these projects could have worked. The whole idea was: let's choose one. And once the tipping point came — SvelteKit chose Vite, and Astro moved from Snowpack to Vite — it was clear who [00:10:00] was leading, and then we all just said, okay, let's work on this common base. This is what we need right now. For me, the ecosystem was the key to this growth, because new projects started to join not only because it worked well for them, but also to take advantage of the whole plugin ecosystem, all the features that were already built in, the idea of sharing the load between different frameworks. So yeah, I'd say the main point was the ecosystem. And that was helped a lot by Evan's decision to use Rollup during build and to adopt this Rollup plugin API, which makes it so easy to work with compared to other kinds of plugin APIs that were around. This really resonated with a lot of people. And talking about projects with this kind of trajectory, I think [00:11:00] it is quite unique, but there is another project that has almost the exact same curve at the exact same point right now, which is pnpm. You put the two curves together and they are exactly the same.
But I think they also influence each other, because tons of people in the Vite ecosystem are adopting pnpm. It is very interesting. I will say that each new project that joined Vite contributed a lot to those numbers, because Angular, for example, is using Vite in its CLI, and Angular's numbers are ridiculous. So each of them contributed a lot.

Noel: Yeah, that is interesting. I wasn't aware that the curve on pnpm lined up so well. I maybe find that a little bit less incredible than the Vite journey, just because pnpm feels like it's easier to go in, use one-off, and introduce slowly, where the Vite switch for a lot of projects is much more fundamental [00:12:00] to how the project structure is set up initially, for build tools and so on. So I still find it very incredible. So I guess, what are the big changes going on right now, in 5 and 5.1, and then in version 6? What are the moving pieces still?

Matias: Yeah, in Vite 5 we really got Rollup 4; that was the first version where Rollup started to incrementally move towards native tooling, also to speed things up, because this was one of the pain points: the dev server was very fast, but we still had slow builds, because with Rollup we chose maturity and flexibility over some other tool. But Lukas, the maintainer of Rollup, was extremely aware of this and wanted Rollup to remain the choice for Vite, so he has been working on incremental improvements. He moved the AST parsing to SWC, which is native, in [00:13:00] Rust, and that gave us a boost in performance. There were other improvements too. For example, `define` is a feature that lets you define certain strings in your code and replace them with something else. Normally we were using regexes, and it was a constant source of bugs.
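The `define` option Matias mentions can be sketched in a Vite config like this (a minimal sketch; the constant name `__APP_VERSION__` is just an illustrative choice, not from the episode):

```js
// vite.config.js — a minimal sketch of Vite's `define` option.
// Every occurrence of __APP_VERSION__ in source code is statically
// replaced at transform time with the expression on the right.
import { defineConfig } from 'vite'

export default defineConfig({
  define: {
    // Values are injected as raw expressions, so strings must be
    // JSON-stringified to end up quoted in the output.
    __APP_VERSION__: JSON.stringify('1.2.3'),
  },
})
```

Implementing this replacement safely, without a full AST parse, is exactly the regex-versus-esbuild tradeoff discussed next.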
We didn't want to do a full AST parse because it was slow, but the regexes were killing us and we were starting to have these monstrosities. Bjorn, one of the team members, actually made the switch to have `define` work with esbuild, since esbuild has a define feature. So that was a big improvement. And then in both 5 and 5.1 there was a lot of work on server performance. Vite is being adopted not only by small applications but also by enterprises, and now we start to hear people who [00:14:00] say, hey, the Vite dev server is slow because I'm trying to load 10,000 modules. And you say, why are you trying to load 10,000 modules at once in the dev server? Can you do a monorepo and maybe partition some of your stuff? And the design system, if you're not going to modify it, add it as a dependency, so we can prebundle it and we don't import the 1,000 files of the design system separately each time you start. But that is sometimes hard to say in enterprise, because things move at a different pace there, and these huge code bases sometimes cannot be refactored that easily. So we started to focus a lot on the performance of the unbundled dev server. In 4.3 we already got a very big performance boost; resolving IDs, for example, was quite slow at that point. And in 5.1 [00:15:00] we got a very similar boost. If you look at the progression across the minors, loading 10,000 modules took around eight seconds.

Noel: Mmhmm.

Matias: This is measured with Puppeteer, so if you load it with Chrome it will be about half, but for comparison: it was around eight seconds in 4.0, it got to around 6.2 in 4.3, and in 5.1 we got it to around 5.3. So from eight to 5.3 already. It is a lot.
Because consider an app of a more normal size, let's say around 1,000 or 2,000 modules. The difference between something loading in 600 milliseconds or 1.2 seconds is a lot, because at [00:16:00] 1.2 seconds you see the white flash.

Noel: Mmhmm.

Matias: If you are around 500 milliseconds, you don't even see a flicker.

Noel: Mmhmm.

Matias: So there is a DX point there: even for a normal-sized app, it starts to be important that we keep the Vite dev server as snappy as possible. And what we are seeing is that the unbundled approach works even at that scale. Obviously you should take care, because there are a lot of things you can do to help. For example, every time you import a file without an extension, Vite, or Node, or whoever is resolving that file, will need to ask the file system: is there a file that is .mjs? No. Is there .cjs? No. .ts? No. .json? No. It may have to ask ten times. And there are all these weird rules, like, okay, if you [00:17:00] import it as .jsx but it's actually .tsx, you should remove the extension, or add another one. It is so weird. So if you can, import it directly with the extension. I know it doesn't look as cool, but you are going to do such a favor to the tooling. And the same goes for barrel files, for example. This is a problem all the bundlers have right now, and it is very common. Imagine you have a folder called utils with ten different files, one for hash utils, one for file system utils, and so on. It is very common to say, okay, I have an index file that re-exports everything from those files, so anywhere in the code I can just say import hash from 'utils'.

Noel: Mmhmm.
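The extension probing Matias describes can be sketched as a toy resolver (the extension list and file set here are illustrative, not Vite's actual resolution algorithm):

```javascript
// Toy sketch of extensionless-import probing: for each candidate
// extension, the resolver must ask the file system whether the file
// exists. An explicit extension needs a single check.
const EXTENSIONS = ['.mjs', '.js', '.mts', '.ts', '.jsx', '.tsx', '.json'];

function resolve(specifier, files) {
  let checks = 0;
  const probe = (path) => {
    checks += 1;            // one (slow) file-system stat per probe
    return files.has(path);
  };
  if (probe(specifier)) return { path: specifier, checks };
  for (const ext of EXTENSIONS) {
    const candidate = specifier + ext;
    if (probe(candidate)) return { path: candidate, checks };
  }
  return { path: null, checks };
}

const files = new Set(['src/utils.json']);
// Extensionless import: probes the bare path plus most extensions.
console.log(resolve('src/utils', files));      // { path: 'src/utils.json', checks: 8 }
// Explicit extension: resolved with a single check.
console.log(resolve('src/utils.json', files)); // { path: 'src/utils.json', checks: 1 }
```

Real resolvers also consult caches and package.json fields, but the asymmetry — one check versus many — is the point.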
Matias: And then the bundler will go to that file, which is called a barrel file, that only [00:18:00] re-exports things, and it will find the export there. But that is very bad for a dev server like Vite, because when we load a barrel file, we don't do tree shaking in dev. This means all the files in utils are going to be loaded, even the ones you don't use. So if instead of the barrel file you say import hash from 'utils/hash', you're going to save a ton of module processing. These kinds of things also help a lot. Again, it's partly about education, and sometimes it's not even easy to think about, because we are not used to it. If you think about 10,000 modules, think about Chrome extensions, for example: every time you make a request, a Chrome extension can get in the middle to check something, and when you are talking about 10,000 modules, that can [00:19:00] really affect things. I tested with Tampermonkey and uBlock, two extensions, and loading something like 1,000 modules I got around four seconds of loading time. If instead I go to an incognito window without extensions, I get two seconds. So you already cut the time in half just by using an incognito window.

Noel: And this is something that in an enterprise setup you could tell your devs: hey, when you are developing, use an incognito window and your experience will be a lot better. Back on this point of optimizations developers can make to keep their projects friendly, especially in their dev environments: how do you think about communicating these things? Because I imagine a lot of devs don't even think about it. Oh, I'll export everything into one file, then I'll import via that file in the future. Or, I'm not going to use file extensions, because it looks cooler.
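The barrel-file cost Matias describes can be sketched with a toy module graph (the utils module names are illustrative):

```javascript
// Toy sketch of why barrel files hurt an unbundled dev server:
// a barrel re-exports everything, so importing one symbol forces
// every re-exported module to be requested in dev, where no tree
// shaking happens.
//
// utils/index.js (the barrel) would contain:
//   export * from './hash.js'
//   export * from './fs.js'
//   export * from './strings.js'

// Simulated module graph: module -> modules it imports.
const graph = {
  'utils/index.js': ['utils/hash.js', 'utils/fs.js', 'utils/strings.js'],
  'utils/hash.js': [],
  'utils/fs.js': [],
  'utils/strings.js': [],
};

// Count every module transitively loaded from an entry import.
function modulesLoaded(entry, graph, seen = new Set()) {
  if (seen.has(entry)) return seen;
  seen.add(entry);
  for (const dep of graph[entry] ?? []) modulesLoaded(dep, graph, seen);
  return seen;
}

console.log(modulesLoaded('utils/index.js', graph).size); // 4 — the barrel pulls in everything
console.log(modulesLoaded('utils/hash.js', graph).size);  // 1 — a direct import loads one file
```

With ten files per barrel and barrels importing barrels, this multiplies quickly, which is where the thousands of dev-server requests come from.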
Is there tooling that can help devs stay on the right path here, or something you'd recommend they think about or look at?

Matias: Yeah, I think active warnings, which Vite could emit, or even the browser could help with, are very interesting. There is another interesting case here: disabling caching. You have the dev tools open and you disable caching, and it is very normal for developers to say, I always develop with caching disabled.

Noel: I'm sure my cache is turned off on all of my browsers.

Matias: Yeah, but the problem is that, I think, this was really needed back when our tooling was quirky and didn't work well, and we got scared of the cache at that point. But in the Vite dev server, the dependencies are hard-cached with a hash, so every time you load a dependency it will be a 200 served [00:21:00] from cache, and when you load a file that hasn't been modified, it will be a 304. This is already a lot faster. So if you stop disabling the cache, just leave it on, Vite is going to be a lot faster, let's say maybe 20 or 30 percent faster. And this is hard to communicate, because it goes against what we've done a lot of the time. But, for example, the Chrome dev tools people started, in the latest version, to show a yellow triangle with an exclamation mark in the network tab when you have caching disabled, so you can see something is going on. Actually, people were mad at them: hey, I don't want to see that triangle. But we actually like it, because it will make a lot of people think about this. And then there is [00:22:00] documentation, or advocacy. We have a performance guide in the Vite docs that talks about a lot of the things we're discussing right now.
We can at least show that guide to anyone who comes to us saying, hey, my Vite app isn't fast enough. When we start digging, a lot of the time it is that, yes, they have 10,000 modules and they didn't dynamically load anything, things like that. And we have tools right now. For example, if you have a React application using plugin-react, there is a version called plugin-react-swc that uses SWC natively. Switching to it is a single line of code, and you get more than twice the speed. There are a lot of things [00:23:00] like this, and this is nothing new; all tools have these kinds of knobs. What I've seen is that for applications with a modern build — proper code splitting, sized sensibly, built as a monorepo with thought given to splitting out dependencies so some can be prebundled — Vite scales incredibly well, no matter the size. For the rest, there are also options coming in the future. Maybe we can talk a little about Rolldown, for example, because that will enable a lot of things too.

Noel: I guess this is as good a time as any: what is Rolldown? How does Rolldown help this process?

Matias: Yeah, so this is a parallel effort to all the things we have been doing in Vite. For Evan, two projects, Vue and Vite, wasn't enough, so he's now leading Rolldown; he announced it at the last ViteConf. There is an independent team; [00:24:00] some of them were Vite contributors, some of them are also Rspack contributors, so they have experience. And the idea is to rebuild Rollup. We said Rollup was incrementally adopting native parts; there are two ways to speed up a project: either you incrementally build towards that, or you say, I'm going to write it natively from scratch.
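The one-line swap Matias mentions looks roughly like this in a Vite config (a sketch; plugin options are omitted, and the package names are the published `@vitejs` plugins):

```js
// vite.config.js — switching the React plugin to its SWC variant.
// Before (Babel-based transforms):
//   import react from '@vitejs/plugin-react'
// After (SWC-based transforms, same plugin surface for common setups):
import react from '@vitejs/plugin-react-swc'
import { defineConfig } from 'vite'

export default defineConfig({
  plugins: [react()],
})
```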
And the rewrite path is a little riskier, but in the case of Rollup there is a lot of existing test coverage, this team has a lot of experience, and Lukas at one point even got involved, consulting to help them. He already said at ViteConf: if the Rolldown team manages to get Rollup compatibility at the speed of esbuild, which is what they are trying to do, then [00:25:00] I have no problem calling that Rollup 6.

Noel: Yeah.

Matias: Let's work together, because that is the idea: to make a Rollup that works at that speed. And we need that in Vite, because in Vite we use both Rollup and esbuild, each for its own strengths — Rollup during build for maturity and flexibility, esbuild wherever we need very fast processing.

Noel: Yeah.

Matias: But each of them is a big tool with its own kinds of bugs and differences, and it would be amazing if we could say: we are going to use Rolldown for everything, and we only have one tool. It could simplify a lot of things internally and speed up build time. We could also later consider migrating some plugins that we run in dev to native, and speed up the dev server [00:26:00] even more. Rolldown is going to be based on OXC, which is an alternative to SWC, a JavaScript parser.

Noel: Yeah.

Matias: It's also written in Rust, and OXC looks like a very good base. It is very interesting that OXC is going to power other tools too, so there will be collaboration.

Noel: Yeah.

Matias: And at one point we could move, for example, this plugin-react-swc to a plugin-react-oxc, let's say. Then instead of what we have right now — Rollup, SWC, and esbuild — at some point in the future everything would be consolidated: OXC at the lowest level, then Rolldown as the bundler, then Vite as the high-level tool, and on top of that, all the frameworks.
And if the Rolldown team manages to get, let's say, [00:27:00] 100 percent compatibility — it will be tough, because there are some decisions they will need to revisit, but let's say mostly compatible — and then in Vite we polish some of that when we integrate it, this should be mostly transparent for the whole ecosystem built on top. So there is a very big potential to speed up everybody. This will take time. We are not talking about using Rolldown in October; maybe in Vite 6 we could have it opt-in for certain things, but we don't know. Really replacing both Rollup and esbuild is a long bet, but it looks extremely interesting. The project was open sourced, I think, two weeks ago, and there was a lot of movement already. If you're listening to this and you're interested in build tools and Rust, go to the Rolldown Discord, because it's going to be very interesting.

Noel: Nice. If we can get to that future where [00:28:00] we're using a Rust-based, lower-level implementation for that, what kind of performance impact would you expect to see?

Matias: The proof of concept they have right now is working faster than esbuild. They still need to add some features — source maps, for example, were not yet generated in that measurement — so there isn't a formal benchmark yet, but I think we are in the ballpark of esbuild kind of speed. So, very fast.

Noel: Yeah, super quick, blink of an eye. Cool. I know you said we're not there with Rolldown yet, but in general, is there a place the community can go to look, jump in, and see how this stuff is evolving over time? Especially when we're talking about those projects with a thousand, ten thousand, a hundred thousand modules. How do you evaluate that, think about it, and communicate with the community?
Matias: We were approached by some people from the web team at one point to build something [00:29:00] together, but in the end that didn't happen. In Vite we are using several benchmarks. One of them, for example — I was working with the one Turbopack released when they started, the fractal made of 1,000 triangles. That was an interesting benchmark because it loads 1,000 React components, and it was interesting to see how Vite evolved against it. But I don't know of a place right now that is really comparing tools. I think that benchmark was used in a performance-comparison repo that measured everything. The problem with this kind of metric is that it's artificial: in the real world you have a lot more variety, you have preprocessors and [00:30:00] tons of things that end up affecting the result. It's interesting that we can load modules that quickly, but once you add a preprocessor, for example, that starts to be the bottleneck, not the loading. This is one of the things that is new in 5.1: sapphi-red, another one of the team members, managed to let you run preprocessors in a thread. In one project, for example, he was seeing around a 40 percent speedup just from moving that processing off the main thread. And this is the same story as plugin-react versus plugin-react-swc: a lot of the time the project isn't fast not because of the number of modules, but because there is a plugin that is parsing all the files with Babel just to replace a single thing.
And there are tools for this. There is a really interesting one called vite-plugin-inspect, from Anthony Fu. You install it, and when you start the server [00:31:00] it gives you a URL to click that opens a view of the transform stack of all your plugins — what they are doing — and it also gives you timings for each one.

Noel: Cool.

Matias: So with this tool you can visually inspect and check what is causing issues. We also have a profile flag in the CLI: you run it, press p and enter, and you get a CPU profile you can analyze to see how the server is working. There are different things you can do.

Noel: Earlier in our conversation we talked about integrating, or I guess de-integrating, from Vue — originally being super tightly coupled to it, and now there being better ways to use different dependencies for, say, your React project to make things optimal. How do you collaborate with these framework maintainers? Or is it just members of the community doing the work to bridge this gap?

Matias: No, [00:32:00] we work a lot with the maintainers, because the main users of Vite are the frameworks. Most users experience Vite through a framework, not directly: they actually use SvelteKit or Astro or Nuxt or Remix now or Angular. When Evan was working on Vite, he was also working on VitePress, the Vue-and-Vite static site generator we all use for documentation.

Noel: It's great. I've used it a couple of times, yeah.

Matias: The default theme is so good that we all look professional. You just install VitePress and, oh, I made a library, this looks awesome.

Noel: Yeah.

Matias: So while he was working on that, he was dogfooding at the same time, changing Vite so that it was good for building VitePress.
And this really shows later on: framework authors come and say, it looks like this was built for me. They feel it. And every framework that joined [00:33:00] also pushed Vite in that direction, because they were asking for features and changing things, and that keeps going on. We have a very tight feedback loop with a lot of the main framework maintainers. As I said, some of them are even Vite team members, but many of them contribute really good PRs and really good ideas, and we ask them for feedback. For example, there is one of the main upcoming features. There was this project called vite-node that Pooya and Anthony Fu started. It is a way to use Vite to process files through the Vite plugin pipeline, but to run them not in the browser next to the Vite server, but anywhere you want: in a worker thread, in workerd, somewhere else, [00:34:00] or also in Node. There is this concept of a kind of client that communicates with the server, the way the browser does, but to run the code. And they built very good HMR as well, not for the browser but for this running code. They built it because they needed it for the Nuxt 3 dev server, to have HMR during dev.

Noel: For the build steps, yeah.

Matias: For the SSR.

Noel: Yeah, when the server is building the pages.

Matias: Yes. The idea was that during dev they wanted to modify server code and have it hot-module-reload the same way the client code does.

Noel: Yeah. Cool.

Matias: So they built this tool, vite-node, and then it became the base, the engine, of Vitest.

Noel: Ah, nice.

Matias: For the past years vite-node has actually lived as a package in the Vitest monorepo, because it was so heavily used there and so coupled with it. It was also used by Nuxt, but it lived there.
And we started working, about a year ago, with Vladimir, one of the main Vitest team members. He actually joined the Vite team and started working on integrating vite-node directly into Vite. We called it the Vite Runtime; we renamed things to make it more agnostic. We released it in 5.1 as experimental, because we needed feedback, and we got that feedback. The Cloudflare team, the Remix team — a lot of people really want this to happen. One of the ideas, for example, is that the Cloudflare team wants frameworks to be able to say: okay, I want SSR to be done in Node, which is the default, or, just by changing a config, I want it to run in workerd instead, and the dev server should run the code locally inside Miniflare, the [00:36:00] workerd emulator. So you get the same behavior in build and in dev, and it should also work with HMR. It's really interesting. Until now Vite didn't have that flexibility; people were using tools like vite-node to do it and hacking their way through, adding middlewares and a lot of workarounds, but it was really difficult because each framework was doing its own thing. Here is another opportunity for Vite to provide a higher-level API for everybody to use. So we got feedback on the Vite Runtime API, and now we are working together with Vladimir and other people in the team on a revision. It's going to be called the Environment API, and it is looking quite interesting. We are targeting Vite 6 for it to be stable; probably in Vite 5.3 we will release another experimental version and get more feedback. But this [00:37:00] could be one of the biggest changes in Vite regarding flexibility — since Vite 2, this should be one of the main features. We're looking forward to it, because it could enable a lot of really cool integrations.
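A minimal sketch of the kind of flow vite-node and the Vite Runtime generalize, using Vite's long-standing programmatic dev-server API (`createServer` and `ssrLoadModule` are real Vite APIs; the entry path and `render` export are assumptions for illustration):

```js
// Sketch: run server-side code through Vite's plugin pipeline in dev.
// Assumes a project with src/entry-server.js exporting a render() function.
import { createServer } from 'vite'

const server = await createServer({
  server: { middlewareMode: true },
  appType: 'custom',
})

// Load a module through Vite's transform pipeline and execute it in Node.
// When the file (or one of its imports) changes, the module graph is
// invalidated, so the next ssrLoadModule call picks up fresh code — the
// server-side HMR story Matias describes.
const { render } = await server.ssrLoadModule('/src/entry-server.js')
const html = await render('/')

await server.close()
```

The Environment API generalizes this idea so the "somewhere to run the code" can be Node, a worker thread, or an emulator like Miniflare, behind one interface.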
Noel: Yeah, it feels like that's almost like an API surface, or like a layer, that has never really been exposed in that way before with one of these tools. Is there a way that you guys think about the correct abstractions to make? Like when you're exposing just the runtime API in general and making that more pluggable, is it tricky to figure out where to draw those lines, and how other tools in the future might want to integrate? Matias: Yeah, it's really tricky, because making any new API, you are taking on a lot of responsibility to actually support it over time. And so you want to have it be the right abstraction, and it should be flexible, but [00:38:00] at the same time, if it's not working out of the box with the right defaults, you'll end up with that config mess. This is what Vite got right in the beginning. It was like getting out of these massive configs, where Vite was the right abstraction for all the common patterns that we have. And what we are seeing with this Environment API is that there are a lot of common patterns again, but, as you said, there is no common tool, and there are starting to be some ideas. But where we draw the line, this is something that internally in the Vite team we need to discuss, because it's always evolving. For a long time, for example, we always said that vite build should only build a single part of your app. So if you call vite [00:39:00] build, it will be the client. If you call vite build --ssr, then it will be the SSR one. And it's a single Rollup run. This is what vite build does. And frameworks have been asking us for a long time, please make vite build build the whole app. This means, if there is SSR, I need you to build the client; when that finished, I need you to build the SSR. And we always said no, because we didn't have the proper abstractions for that in place.
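Concretely, the two-pass flow frameworks wanted looks like this with the existing CLI; the entry path is a hypothetical example:

```shell
# Each invocation is a single Rollup run for one part of the app:
vite build                              # client bundle
vite build --ssr src/entry-server.ts    # server bundle (SSR)
```

What frameworks were asking for is a single command that sequences both, which is exactly the task-runner territory Vite had been avoiding.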
And we felt that we were starting to get into the, like, Gulp business, I don't know, like the task runner thing. So we said, look, you can wrap the CLI of Vite and do whatever you want. You run it programmatically, run them in parallel, run one before the other, you can choose what to do. But a lot of projects have decided that they want Vite as the CLI, [00:40:00] because this is what all their users know. And it is easier for them to just point to our documentation; all the Vite plugins already work, it's more integrated, they use our Vite config. So Svelte does this, Remix does the same, SolidStart does the same. All these projects, like Vike, for example, all these projects are framework-as-a-plugin. So you just add a plugin and that's it. There are other projects, and this was the way that we said to frameworks how they should be building, that is Nuxt, Astro, or Angular. They wrap Vite in their own CLI. You don't call Vite when you use Nuxt, you call Nuxt Noel: Nuxt? Yep. Matias: and the same for Astro. They have their own CLI, Houston will appear there and say hello to you. So they can decide to do something different, but it seems that a lot of people are getting value in this consolidation of Vite as the base tooling [00:41:00] and also as the base CLI. So they were even hacking that: in the last plugin hook of a vite build, they would actually fire up another vite build. So like they say, okay, you don't do it? We will do this hack. And this is how Vike works, this is how SolidStart works, everybody copies from each other. Noel: Yeah. Matias: And I think this is something that we need to discuss with the team. But with the Environment API, like, an environment represents the browser environment, like the client, okay, the SSR environment that you can run in Node or in a worker, like a different environment that runs anywhere else.
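A rough sketch of the workaround Matias describes, using Rollup's `closeBundle` hook, which runs after the first bundle is written. The plugin name, entry path, and recursion guard are illustrative, not any framework's actual code:

```typescript
import { build, type Plugin } from 'vite'

// Hypothetical plugin: after the client build finishes, trigger a second
// programmatic build for the SSR entry from inside the first build.
function buildSsrAfterClient(): Plugin {
  return {
    name: 'build-ssr-after-client',
    apply: 'build',
    async closeBundle() {
      // Guard against infinite recursion: the inner build would otherwise
      // load the same config and run this plugin again.
      if (process.env.INNER_SSR_BUILD) return
      process.env.INNER_SSR_BUILD = '1'
      await build({ build: { ssr: 'src/entry-server.ts' } })
    },
  }
}
```

It works, but every framework reinvents the sequencing, the guard, and the config plumbing, which is the duplication the Environment API aims to absorb.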
So the idea will be that you can have a Vite dev server with as many environments as you want. Right now there are two hard-coded, Noel: Yeah. Yeah. Yeah. Matias: and we are opening that possibility. And in that way, we may have the opportunity to say, okay, let's provide a vite build --all kind of flag that [00:42:00] actually is going to say, okay, for all the configured environments, I will call vite build. And then maybe give some way to configure this task in a way that is not making a full configuration scheme. Maybe just, look, I generate these tasks that you need to run, with the name of each environment, and I give that to you in an array, to a function. You do whatever you want. You can Promise.all them, you can await one, decide the order. You just give me the function and that's it. This could be one way to invert the control, so we don't need to invent a new tool. Noel: Yeah. It's like, we'll give you one little abstraction that lets you do a mapping here, and then everything else is still in your guys's hands. Yeah, that makes sense. Matias: But yeah, I think that we could go there, for example, and that could make a lot of people happy, because they could remove that hack. Noel: For sure. That makes sense. Let's see. We've covered a [00:43:00] lot. We've touched on Vite 6. Is there anything in particular that you guys are looking for help on, as we wrap up here, either with Vite 6 or just at large? You mentioned Rolldown for Rust people already, but for other devs, is there anything that you guys...? Matias: Yeah. Like with everything, there is always so much to do in any of these kinds of big open source projects. Again, if you enjoy build tooling, if you are more on the side of JS, join the Vite Discord or go in the issues, and I think the best way there is to help us triaging, or help other people.
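The inversion of control Matias sketches could look something like this in a config file. This is purely hypothetical: none of these names were a shipped API at the time of recording; Vite would enumerate the configured environments and the framework's callback decides ordering:

```typescript
// Hypothetical vite.config.ts fragment: Vite hands the framework one build
// handle per configured environment; the framework chooses how to run them
// (sequentially, in parallel with Promise.all, or any custom order).
export default {
  builder: {
    async buildApp(builder: any) {
      // Client first, then SSR, the ordering frameworks have been asking for.
      await builder.build(builder.environments.client)
      await builder.build(builder.environments.ssr)
    },
  },
}
```

With a shape like this, Vite never becomes a task runner: it only exposes the per-environment builds, and the sequencing logic stays in framework hands.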
So you learn more about the tool. Don't try to go and do a PR right away, just try to help us triaging. That means checking the reproduction, seeing if it is really an issue or maybe a config problem. Maintainers will love it if you say, hey, I checked it and this is just a config problem. And then you saved a maintainer like an hour. Noel: Yeah. Matias: That is a very good way to help, learn more about the [00:44:00] project, get into more contact with the code. And later on, it's very natural that you get opportunities to contribute more. So yeah, we always need help. There are 400 open issues, and that is like the baseline state of Vite. There are always 400 open issues, no matter how many we close, Noel: Yeah. Matias: it is always 400. Noel: Some asymptote that it approaches, 400, always. Matias: That is like the normal state of the project. And so there is always a lot to do. And there is also, yeah, if you are more on the Rust side, both Rolldown and OXC are very young projects. And that is also extremely interesting, because that initial time is the same as the initial time of Vite, or the initial time of Vitest. The first six months of a project are so interesting, because it is not only about the code, it is also about all the relationships that are forming, all the incipient tooling built on top of the project, and the ecosystem appearing. [00:45:00] Yeah, I really suggest even just going and hanging out in the Discord, because there will be so many interesting things to learn and opportunities to help. Noel: Nice. I feel like we've covered a lot in an hour here. Is there anything else you want to shout out or implore people to check out? Matias: Shout out to all the ecosystem again. Like, Vite is because of all the maintainers and all the Vite team members. Shout out to everybody involved, because this kind of project is built with a ton of work of different kinds of people. Noel: Yeah, cool, cool.
Thank you for going through all this with me. Once again, it's always a pleasure talking to people who are passionate about projects and open source and building dev tooling. I appreciate it. Matias: Yeah, thanks a lot for the invite. And again, like always, a pleasure to talk with you. I hope we get together again, maybe for Vite 8. Noel: Yeah, exactly. Sounds good. See you later.