[00:00:00] Hello, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. You can try it free at logrocket.com. I'm Noel, and with me today is Kevin Whinnery. Kevin's devrel at Deno, and he's here to talk about what you need to know about the upcoming 2.0 release. Welcome to the show, Kevin.

Thank you so much for having me. Excited to chat with you.

Yeah, I'm excited to dig in and learn. This is kind of a blind spot for me. I know it shouldn't be, but I haven't been following this too closely, so this will be exploratory for me as well. But before we get into it, can you give us an overview of who you are and your journey into web dev, and then developer relations?

Yeah, of course. As you mentioned, I'm on the Deno team working on developer relations, and I've been doing developer tools and developer relations work for about the last 15 years, since I started really getting into JavaScript. At that time, about 15 [00:01:00] years ago, I went from being primarily a back-end developer, which is what I did coming out of school, to building full-stack software, because there were applications I wanted to create, and in order to do that I figured out the front-end development world as well. As I was doing that, I started to get involved with an open source project that was sponsored by a developer tool company. I was contributing in the community, helping out in the forums, making contributions to the open source repository, and the founders of the company said, "Hey, would you like to join the company as a developer evangelist?" And I said, "Sure. What's that?" That was a job I had never heard of prior to that conversation, but it turns out I've really enjoyed it. I love teaching, I love working in the open and working on open source projects, and being in developer relations has given me a chance to do that. So I've been happily doing that for a number of years, most recently with Deno, but I also spent about 10 years working at Twilio, with a few other stops in there as well doing DevRel and DevTools. [00:02:00]

Nice. Awesome. Thank you for that. Before we dive into the new things coming to the project, can you tell us about what Deno... do you know, it's going to be "Deno" in my head forever. I know it's one of those where half the community, I feel, is still saying it incorrectly. But anyway, can we talk a little about what it is, why devs should care about it, and the problem it's trying to solve?

Yeah, for sure. And don't worry about the Deno/Dino schism. It was originally "Deno" for a time, when Ryan Dahl first announced it; that was the pronunciation, and it flipped somewhere in the middle there. But yeah, Deno is a server-side JavaScript runtime, similar to Node.js in some respects, but with a few primary differences. It's intended to be a very browser-like development environment, so rather than having APIs that only work in Deno, wherever possible we try to use web standard APIs. For instance, the default web server in Deno uses the Request and Response [00:03:00] objects that the Fetch API uses in the browser.
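To make that concrete, here is a minimal sketch of a Deno server built on those web-standard types; `Deno.serve` is Deno's built-in server API, and the greeting logic is purely illustrative.

```ts
// server.ts — the handler receives a web-standard Request and returns a
// web-standard Response, the same types the Fetch API defines in browsers.
Deno.serve((req: Request): Response => {
  const url = new URL(req.url);
  return new Response(`Hello from ${url.pathname}`, {
    headers: { "content-type": "text/plain" },
  });
});
// Run with: deno run --allow-net server.ts
```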
So anytime it's possible to use a web standard API, that's the choice we tend to make. The other major difference is security by default. When you install an npm package in a Node program, all the code, including your project's, can by default access the file system, the network, and basically do whatever it would like. But in a Deno program, you have to explicitly opt in to letting your code access environment variables or file system paths and things like that. So there's much more granular control over the security environment of the runtime.

The other major philosophical difference between Deno and Node is that we want to include a lot more of the meat-and-potatoes developer tools that you're going to need in most projects as part of the runtime itself. So Deno has a built-in linter, a built-in formatter, a test runner. [00:04:00] And we're going even further, to provide more primitives. Similar to how the browser provides service workers and local storage and these APIs that you need to actually build applications, we want to build a similar thing on the server. Our first foray into that space is a key-value database that works the same in your local development environment as it does in the cloud when you actually deploy your application. So we're being a little more opinionated and trying to have a batteries-included development experience, where Node is very explicitly Unix-philosophy-ish; the idea there is that it wants to be unopinionated about how those types of things get done.

Yeah, that makes a lot of sense to me, and I feel like that's a good way to frame this. I want to get into the differences in security and how package management works and what's changing there with 2.0, but before we do, I'm curious about this batteries-included note. I feel like a lot of frameworks and tooling, when they're making this step, have come to the realization that they need to be [00:05:00] tightly linked, or at least communicating pretty closely, with the hosting itself, like hosting providers and the abstractions they provide. Is there anything happening in that space that is prescribed, or is it all pretty agnostic and you can deploy wherever?

You can deploy Deno wherever today. You can configure the CLI in a container environment, or basically any place you can run JavaScript today will run Deno just fine. Even on AWS Lambda there's a configuration where you can have a custom runtime, which would be Deno in that instance. The KV API specifically is probably the most Deno-specific thing in the runtime today, and that today will only really work very well if you run it on Deno Deploy, which is the cloud environment that Deno provides. It's a global edge network, similar to Cloudflare Workers, in that rather than managing VMs, code runs in V8 isolates, which [00:06:00] are fast and cheap to spin up and down. And KV as a database is built on top of FoundationDB, which is an open source product from Apple that they use with iCloud and a lot of other services in their portfolio, and that's globally distributed as well. So it's designed to be used as an application data store for an app that is globally distributed, and that, right now, will definitely work fast on Deploy.
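As a quick illustration of the built-in tooling Kevin lists, here's a sketch of a test the bundled runner discovers with `deno test`; the standard-library version pin is illustrative.

```ts
// math_test.ts — discovered and run by the built-in runner: `deno test`
import { assertEquals } from "https://deno.land/std@0.200.0/testing/asserts.ts";

function add(a: number, b: number): number {
  return a + b;
}

Deno.test("add combines two numbers", () => {
  assertEquals(add(2, 3), 5);
});
```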
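And a minimal sketch of the key-value store described above; at the time of this episode Deno KV was an unstable API, so running it required the `--unstable` flag.

```ts
// kv_demo.ts — run with: deno run --unstable kv_demo.ts
// The same code works against the local store and, unchanged, against the
// hosted store when the app runs on Deno Deploy.
const kv = await Deno.openKv();

// Keys are arrays of parts; values are structured-clonable JS values.
await kv.set(["users", "noel"], { role: "host" });

const entry = await kv.get<{ role: string }>(["users", "noel"]);
console.log(entry.value); // { role: "host" }

kv.close();
```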
It certainly will work in other environments, but I think that's actually one of the technical challenges we'll have to sort out as well: how do we create the necessary hooks to make those types of things work well in other environments, too?

Yeah, it is a challenge, and kind of an interesting project-philosophy dilemma, I think, so I'm always curious to get those perspectives. How long has Deploy been around? Is it pretty established at this point, or is it just entering into this space?

I'm going to get the exact year wrong, but it has been around for a minute now. I think [00:07:00] the most established use cases we've seen so far are with folks that use Deno Deploy as an infrastructure service. If you've used a Supabase edge function or a Netlify edge function, those two are actually built on top of Deno Deploy. We do have customers that build directly on and deploy to Deno Deploy, but those are probably the biggest users of the platform so far, I would say.

Nice. Well, Supabase is serving a growing number of customers, it feels, every day, so I'm always impressed by them.

Indeed, they are. It's very impressive.

Cool. Well, let's talk about 2.0 a little bit. When was the first major release, how long ago was that, and what is the thinking that makes this a good break point for a major version bump?

The Deno 1.0 release was in May 2020, so it's been a few years since 1.0 came out. Since then there's been iterative improvement, but nothing actually breaking. Some of the major [00:08:00] improvements have been compatibility with the vast majority of the npm ecosystem: building a compatibility layer for the Node.js built-in APIs, which npm packages are in turn built on top of. The big thing driving the 2.0 switch is how we're going to do module resolution in Deno, or how we're going to recommend that folks do it going forward. It's not exactly a breaking change, because HTTP imports, which are the primary way you include Deno-native code in your programs today, are going to continue to be around and still be usable. But what we've discovered from people using Deno in production and actually trying to get work done is that HTTP URLs have some important drawbacks from a developer experience and technical usability standpoint. When you have lots of different servers hosting different versions of packages, it becomes very difficult to do any kind of optimization around semantic versions. So [00:09:00] you get multiple very similar versions of the same package all being included in a program, and we can't really do any optimization around that. It's also the case that server infrastructure we don't control, or that dynamically creates content that gets included in programs, can be a little bit unreliable. We've had bugs that users encountered where a dependency they're using changed in a very subtle way, even though the endpoint stayed the same. So there are guarantees we can't really make about the content of the packages. So yeah, duplicate and disappearing dependencies are definitely part of it.
There are also some DX issues. If you've seen a Deno program that imports a module from a URL, it's a little bit verbose, and it can be a little tricky to configure those inside your programs. And there are minor things, too. I maintain a number of npm packages, and one thing that's really nice for testing is that you can [00:10:00] do npm link, and within another Node project you can use a development version of your module. That's not really a thing we can easily support in the current world.

So yes, one of the big changes is that we're going to be creating a registry that works a little more like a traditional package manager, like npm or Cargo or any other package management tool you've used in the past, where there will be a central namespace. In that registry, we'll be able to have a degree of editorial control over the namespace. So if there's a "mysql" module, say, that's published, but it points to an unmaintained or older or, God forbid, malicious piece of code, we will be able to intervene in those cases and make decisions based on what the best developer experience will be. At the end of the day, it'll still be built on a lot of the same HTTP import functionality that exists in Deno today, [00:11:00] but by having a central registry we can make sure the DX improves a little. We can type-check modules as they're registered, and make sure the Deno-first nature of the registry ends up being a good experience.

Gotcha. On this abstraction where there's going to be a centrally managed registry: could a user or developer point at a different instance of that registry? Is there a top-level key where you can tell it what registry it should be using?

Yeah, so the API for that is squishy right now, but the idea is that the registry itself should be really easy to self-host. So yes, there could be alternate versions of the registry that you could stand up. The configuration for how to do that, I don't think we've figured out quite yet, but that's definitely the goal: it should be very easy to stand up your own version of the registry if you so choose.

Gotcha, that makes sense. Part of the reason I ask is because it does feel like, [00:12:00] a good word is failing me, but a departure from a lot of that core "we'll just use the web to pull packages in" idea. I think there were a lot of obvious benefits to that, and obviously we just laid out some of the cons, the problems you can run into. Within the community, has there been much of a debate around whether this is the right approach to take, whether this is a necessary step for the project? Or have these pain points been pretty unanimously felt across the board?

Yeah, it's certainly an object of some debate, just because the decentralized nature of HTTP imports was, and remains, attractive to many. And certainly that will continue to be possible; the HTTP imports will still work.
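For anyone who hasn't seen them, this is roughly what the HTTP imports under discussion look like; the module and version pin are illustrative.

```ts
// With HTTP imports, every import site carries a full URL: host, path,
// and version. Two files pinning slightly different versions end up as
// duplicate copies of the same dependency that the runtime can't merge.
import { delay } from "https://deno.land/std@0.200.0/async/delay.ts";

await delay(100);
console.log("waited 100ms");
```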
But I think what we've learned from practical experience is that the benefits of that are outweighed by the very frequently encountered detriments, which are the disappearing and duplicate dependency problems that we see crop up all the [00:13:00] time.

Gotcha. Yeah, going back in time, whatever, four years, I don't know if I would have called out the deduplication difficulties as an obvious thing that could become problematic. Now it seems like, oh, of course it's much harder when we're pulling in all this data from all over the place; who knows what teeny minor tweaks have been pulled in across versions. But I don't know if I would have seen that at the time. Is there something with the traditional remote HTTP import that does try to do deduplication if it finds the same files at two different URLs, or does that not really exist at all?

It's definitely not something we've built. I'm not sure if it exists out there somewhere, but I don't think module resolution as it exists today is capable of that kind of understanding, like, "oh, this is the same lodash, more or less, being imported from two different URLs." And that's definitely where the duplicate dependency issue comes from. That's really the value of having a central [00:14:00] registry: because the service enforces semantic versioning, it can say, oh, these two dependencies are about the same, and we probably don't need to include both of them.

Gotcha. How does this play with, or interact with, the permission system, which we touched on before? Is anything changing there? Is it all still largely the same, or do permissions need to be granted when packages are installed?

That'll still be the same. There isn't a separate permission system for packages, so the packages you use still have the same privileges as the code that you write. That has been a thing the community has brought up a lot, and it would be very cool, but we haven't been able to figure out a good way to have different permissions for your dependencies versus the code you're actually writing. That ends up being a pretty tricky problem to unwind. So yeah, your dependencies have the same privileges as any of your code does. The [00:15:00] way you generally take advantage of the security flags of the runtime is: say you're in a continuous integration environment, and you know that all you're doing is building JavaScript files or optimizing images or whatever. That code doesn't need to have network access. So even if you did have a malicious package that wanted to steal all your environment variables and phone home to a server, you can prevent that from happening by giving your code only the access it needs to do the work it's intended to do.
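A sketch of the CI scenario Kevin describes; the build script and file names are hypothetical, while the flags shown are Deno's standard permission flags.

```ts
// build.ts — a hypothetical CI step that only needs file-system access.
await Deno.mkdir("dist", { recursive: true });
const source = await Deno.readTextFile("app.js");
await Deno.writeTextFile("dist/app.js", source);

// Invoked with only the permissions the job needs:
//   deno run --allow-read=. --allow-write=dist build.ts
// A malicious dependency calling Deno.env.get(...) or fetch(...) to phone
// home would be stopped at the permission boundary.
```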
Nice. Very cool. Are there any other major features coming in 2.0 that are external and exciting, or is this the big one that most users will feel the impact of?

This, I think, is definitely the biggest one. There will be some other minor breaking API changes. There are also some other features we know about today that are pretty exciting. One is workspace support, so you can have different import maps; [00:16:00] if you're familiar with npm workspaces, it's a very similar feature. Another is the ability to patch dependencies, again built into the runtime. If you use Node and npm, it's very likely you might use something like patch-package to monkey-patch your npm dependencies; we'll be bringing that into the runtime directly as well, which will end up being nice. And then in the 2.0 timeframe, we'll also be experimenting with other infrastructure primitives. We have the key-value database today; by the time 2.0 rolls around, I think we'll also have queues available to play with, which will work in a similar way. You'll have a queuing system that you can use in local development, and then, without configuration, if you run it on Deno Deploy, it'll hook into the queuing system that already exists in that environment. So again, we're trying to bring more primitives into the runtime for folks to use.

Nice. On that note of dev environments and workspaces, like you mentioned earlier: how does that tie into the private [00:17:00] registries we mentioned before? Is that easy to hook into? Could you do something where locally you use one registry, and then when you publish you use the general public one, or is that not really the intent?

The way we see it working out initially is that you can certainly use a self-hosted registry in whatever environment: local, production, your build environment, or what have you. And because we already have support for npm, if you're using any npm private repository technology, that's another option available to you. We're also evaluating what it would take to integrate with other vendors in this space, so there's Artifactory and other similar solutions that allow you to do private package management within your code as well. So either self-hosting private npm registries or interacting with one of those other tools is the way I'd see that working out.

Gotcha. How does this change things for [00:18:00] maintainers of packages? What's going to have to change in their workflows?

One thing that will change: because we're trying to sidestep the problems of disappearing and duplicate dependencies, packages in the new registry won't actually be able to use other HTTPS packages. You'll be able to use dependencies that are on the registry or available on npm; those will be the ways you can have your own dependencies. For publishing, that, I think, will be the big one. Consuming modules will be very largely the same; actually, it'll be very similar to what you would do with an npm package today. In a Deno program you have the npm specifier, so you can say import express from "npm:express", and you could potentially pin a version in the text there as well, although typically you'd want to do that in a configuration file, in a deno.json. We'll also be [00:19:00] introducing the Deno specifier, so it'll be "deno:" and then a slug for a package that's part of this central registry.
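The queues Kevin previews a bit earlier here ended up living on the same KV handle; treat this as a sketch of the shape rather than a final API, since it hadn't shipped at recording time.

```ts
// queue_demo.ts — sketch of KV-backed queues (unstable: deno run --unstable).
const kv = await Deno.openKv();

// Register a handler that receives delivered messages.
kv.listenQueue((msg: unknown) => {
  console.log("received:", msg);
});

// Enqueue a message; on Deno Deploy this hooks into the hosted queue
// infrastructure with no extra configuration.
await kv.enqueue({ task: "resize-image", id: 42 });
```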
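On the consuming side, a sketch of the specifiers Kevin walks through; the express version is illustrative, and the "deno:" form shown in a comment was the proposed syntax at the time of this conversation, not a shipped feature.

```ts
// main.ts — using an npm package from Deno via the npm: specifier.
// Run with: deno run -A main.ts
import express from "npm:express@4.18.2"; // inline pin (illustrative)

// The pin can instead live in deno.json via an import map entry:
//   { "imports": { "express": "npm:express@4.18.2" } }
// and the proposed registry form discussed here would look like:
//   import something from "deno:some-package";

const app = express();
app.get("/", (_req: unknown, res: any) => res.send("hello from Deno"));
app.listen(8000);
```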
So as a consumer of packages, it'll be a little bit different, but as a publisher, that'll be the major change. You'll also still be able to publish scripts that can be imported via HTTP. That actually ends up being useful a lot of the time, like for command-line utilities or install scripts, where you have a Deno program that you do want to be executable. So those things will continue to be supported, but I think the major workflow change will be how you manage your own dependencies as a module author.

Nice. Are a lot of those authors writing and maintaining modules where a lot of the core logic is shared between a traditional npm package and a Deno package, or have people rewritten stuff for the most part?

There aren't a ton of libraries yet that are explicitly maintained for [00:20:00] both Deno and Node. There are definitely a few notable exceptions; Zod is one that comes to mind, which is an object schema library that is explicitly maintained for both. What we created relatively recently, though, is a library called dnt, which allows you to write a library in Deno, so you're using the built-in TypeScript and the local development workflow of a Deno program. The utility will then take your Deno code and generate an npm package that has both a CommonJS and an ECMAScript module import interface. So from a single code base, it becomes pretty easy to have a Deno-first module, which then generates the artifacts, and the relevant tests and everything like that, for a Node package as well.

Nice. That sounds super handy. So there's a lot to check out here. Is there a good place for somebody to jump in if they're not really familiar with the ecosystem? Would now be a [00:21:00] good time, or is there any reason they should wait for 2.0?

No, I don't think there's any reason to wait for 2.0. The entry point I usually suggest is actually Fresh, which is a web framework built Deno-first. It's built on Preact, and it's designed for server-side rendering on the edge, so it sends zero JavaScript to the client by default. It has an islands architecture, where if you do have interactive components that have to run client-side, you can opt into having those small interactive islands within the app. That, I think, is one of the more fun and useful ways to experience Deno in a practical setting, where you can see how it feels to do server rendering by default and use the islands architecture for the more interactive bits. So that's what I'd recommend if folks are interested in checking out Deno: Fresh for web development is a great place to start and get a feel for it.

Nice. We'll try to get a link to that in the show notes so people can find it easily. [00:22:00]
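A sketch of the islands pattern Kevin just described; in a Fresh project, interactive components live under islands/, and this hypothetical counter uses Preact hooks.

```tsx
// islands/Counter.tsx — the only part of the page that ships client-side JS;
// everything else in a Fresh app is server-rendered with zero JavaScript.
import { useState } from "preact/hooks";

export default function Counter(props: { start: number }) {
  const [count, setCount] = useState(props.start);
  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>+1</button>
    </div>
  );
}
```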
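And looping back to dnt from a moment ago, here's a sketch of its build-script pattern based on the project's documented usage; the version pin and package metadata are illustrative.

```ts
// scripts/build_npm.ts — run with: deno run -A scripts/build_npm.ts 0.1.0
import { build, emptyDir } from "https://deno.land/x/dnt@0.38.0/mod.ts";

await emptyDir("./npm");

await build({
  entryPoints: ["./mod.ts"], // your Deno-first module
  outDir: "./npm",           // emitted npm package (CJS + ESM + types)
  shims: { deno: true },     // shim Deno globals so the output runs on Node
  package: {
    name: "my-package",      // hypothetical package name
    version: Deno.args[0],
    description: "A Deno-first module published to npm.",
  },
});
```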
Yeah. And then, last question: do we have a timeline on 2.0?

Not a firm one, but we do think we're going to be ready by the end of the year. That's what we're shooting for at the moment, so I would look for it later in 2023.

Awesome. Well, is there anything else you want to call out, or anything you want to direct listeners towards?

Like I said, I think it is a good time to check it out. It's definitely still a newer technology, and the ecosystem around it is growing. But with npm support being in the place where it is, and framework support being in the place where it is, it's actually possible to use a lot of pretty popular tools, whether it's Astro or Remix or other frameworks that were originally designed for Node; those actually end up running pretty well. I think in the last year or so Deno has crossed the chasm from being really cool to mess around with to being pretty productive for boring stuff as well. So I think it is a good time to check it out. [00:23:00]

Sure. Well, thank you so much for coming on and answering my questions, Kevin. I appreciate it.

Yeah, it was super fun to chat with you. I appreciate you having me on.