Paul: Hi there and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket combines session replay, error tracking, and product analytics to help software teams solve user-reported issues, find issues faster, and improve conversion and adoption. Get a free trial at logrocket.com today. I'm your host Paul, and today we're excited to have a returning guest, Chance Strickland. He's a senior software engineer over at Shopify and an educator. You might see him in some YouTube talks and at some conferences. And we're going to be talking about what this web podcast is about, which is web development and making faster backends. It might be more of the mental model: thinking about frontend and backend together in a new way with these new frameworks coming out. But Chance is always pushing the boundaries of what we know is possible, and we're excited to get into it. So welcome to the podcast, Chance.

Chance Strickland: Yeah, Paul, thanks for having me. Excited about this.

Paul: So recently, I mean, the topic of the podcast is your recent talk, which is faster backends for faster frontends, because they're related. I gave you the title right there. But the general idea is that we have backends related to frontends in a way that we might not be thinking about traditionally.

Chance Strickland: Yeah, you've introduced me as a software engineer at Shopify, which is true, but my team over at Remix got acquired back in, I think it finalized in August. So this is still a relatively new title. I've been focusing the vast majority of my time over the past year and change working on the Remix framework. So if your listeners are familiar, that's great. We love when folks are already out there using it, but if you're not, go check out remix.run. And that should give you a little bit of context for what we're about to talk about, because I try and frame a lot of these topics in the context of working within Remix, but the concepts themselves are broadly applicable to software development.

Paul: That's one of the things I love about talking about Remix: it really applies a new mental model to every swath of web development in ways that kind of push the boundary for you as a web developer. So really quick, for folks who might be coming here for the first time, what makes Remix stand out, to preface our conversation?

Chance Strickland: Yeah. No, it's a great question. And this is something I think about often, because Remix is a lot of different things. It's a framework for the frontend, it's a framework for the backend, and it blurs the lines between the two sides of the stack. And in a way, I think we're sort of converging on a lot of similar models in the ecosystem and we're all moving in this same direction, but I think Remix really took the ball and ran with it in a way that a lot of other frameworks hadn't; they hadn't really thought about some of these problems. And what I mean by that is, traditional server-side frameworks sort of drop you off at the HTML cliff. You have a really powerful framework in the backend to help you think about data, help you think about generating your views, help you handle your business logic, but then you get to the frontend and it's like, choose your own adventure. I can throw a framework in front of this specifically for the frontend, but it's not well integrated with the data layer. And then on the other side of the equation, you have all these frontend tools like React and Vue.
It's sort of the other end of the spectrum. You've got all of these super powerful tools in the frontend, but they don't really talk to what's going on in the backend, or at least there's no prescribed way for them to talk to what's going on in the backend. And it's sort of on you to stitch those two sides together. And so with Remix, we really try to think more holistically about how your frontend and your backend work together and how we can marry the two. I like to call Remix a center-stack framework, because that's really where you're focused when you're writing your application logic: the center between the data coming from your backend and the interactions on your frontend. And so that's my mental model for Remix and how I work within a Remix application. And I think it's a primary goal of ours, when we're putting this in front of developers, to change the way they think about how the backend and the frontend work together and to marry the two in a way that I think just makes sense.

Paul: So you really feel like there's a lot of work, a lot of attention to be given to this layer between the frontend model and the backend model. And could the crux of the problem be identified as loading data and synchronizing data, or does it extend beyond that, with MVC and views and all of this and rendering?

Chance Strickland: Yeah, I think data is really at the core of it, but it's not just loading data, it's also changing data, because the apps that we're building are highly interactive. The scope of the web in general has just exploded over the past couple of decades. And the complexity of the apps that we're building demands that they respond to a lot of user input and that we're making data requests to the backend, not just to fetch data, but to make changes to it, to make mutations. And doing that in a way that's predictable, that is bug-free, where you're sure that what you're looking at actually represents the data in the backend as soon as you click that button or navigate to that view or make an update to a form, that's really important. And it's also very tricky to do with traditional server-side frameworks. It's tricky to do with modern frontend frameworks. There's just a lot to stitch together there, and there's a lot of things that can go wrong when you're sending data over the network and when you're checking to see if that data actually successfully landed in the database. So that's kind of how we think about this, that's how I think about this. Yeah, data is at the core of it, but it's more than just loading data. It's dealing with data at scale, dealing with data that's far away from the frontend of your application or potentially far away from your user.

Paul: At the start of your talk, you also mentioned this catchphrase that I love personally. It was talking about how frontend developers and backend developers, as we would traditionally call them, are kind of melding together. You said they're getting closer together and that what really defines a frontend developer might be changing. And I think what we just talked about bleeds really nicely into that, because we're talking about, "Well, I'm a frontend developer and I have to think about loading data and I have to think about, how does a server interact and hydrate my state?" Well, we might be walking into a day and age where the answer to that is yes. So what do you think defines a frontend developer, and what was the main takeaway in your talk?
Chance Strickland: Yeah, I think for me, the thing that I think about a lot is why we ever thought the frontend developer shouldn't have to think about data. As a frontend developer myself, I'm someone who very much loves the frontend of the frontend. I come from the CSS world, I come from design systems. I've worked on really low-level, design-system-esque component libraries where every single user interaction is highly focused on the details of that user's direct experience. And we don't normally think of data as a part of that story, but the truth is, it really is. It always has been. It's just that the domains have been separated as far as how we think about classifying different engineers, and there is some degree of separation, but if you want your users to have a great experience, you do need to be thinking about the actual data. For example, let's say you're working on search, you're working on a Google search feature or some sort of similar search feature for your application. As I'm typing into a form field, I want to see suggestions pop down into that combobox. All of those suggestions have to come from a database somewhere, on a server somewhere, or some index. And the reaction speed of that dropdown menu is really important to the end user's experience. That's a frontend concern, but it's also very much focused on data. So these concepts, these ideas that frontend and backend developers are clearly defined, or that the domain is clearly defined as this separate thing, I think are kind of a myth that we're pulling away at. We're realizing that these things are interconnected and frontend developers do need to be thinking about these sorts of things. Now, what is a frontend developer? That's, I think, a different kind of question that I really do explore in my talk. And I get a lot of this from another talk that I watched a few years ago that's just resonated over the years, from Chris Coyier. I don't remember what he actually called the talk that he did, but it was based on an article he wrote called The Great Divide, on CSS-Tricks I believe. And it's a fantastic article, and it really does explore how these sort of distinctive sides of the stack have sort of blurred over the years. I think a lot of that is just a natural result of the things that we're building on the web becoming much more complex, but there's a lot more to it than that. There's a whole cultural aspect of it that he explores that I think is really fantastic and definitely worth a read.

Paul: The Great Divide, yeah, I haven't read that myself, so I definitely need to look that up, because this general idea that maybe we've been thinking about the whole frontend and backend model wrong is starting to resonate with a lot of people. And I feel like one thing Remix does is it urges somebody who might traditionally be only in frontend to think about the backend pieces, with loaders and routes and all that. So I'd love to hop into talking about some specifics about Remix, some parts that maybe you're particularly proud of, Chance, that you think help push the boundary. And maybe as a frontend developer, when you step into it, it might seem scary and you're like, "Oh God, what is this?" But no, there's a reason why it's there. It's helpful, and I'm really curious as to why you think it pushes the boundary of web development.

Chance Strickland: It does feel scary. It was scary to me when I started working on Remix. I had never worked on a framework, so this was my first foray into this world, this domain.
And the reason I said yes to the opportunity to work on it in the first place was because, as a user, when I jumped in, it was slightly scary, but it also, I think, made a lot of the things that would naturally be scary to us a little bit more approachable. And that's because we build on top of web standards. These are standards that we all have some degree of experience with, these APIs feel familiar to us, but some of our abstractions are also, I think, done in a way that makes it easier to learn as you go and as you write, because they're built on these web standards. So one example, just as an API example: if you look at our data loaders, our loaders receive a request that is a Request, the standard HTTP request that comes into that route as soon as you visit a page or make a request.

Paul: From the client, right?

Chance Strickland: Exactly. From whatever client, you make a request to some endpoint, and then you get that request right there in your data loader that exists right next to your route, your UI route. And with that one simple decision, you can see the request/response model right there. It's all right there in your route. That model, the whole idea of the HTTP request/response cycle, is right there in your face. And that alone, I think, breaks down some of the mystique in how the web works for a lot of folks who just take that for granted or don't think about it. And again, that's just something I've found is pretty common with a lot of frontend developers: it's not front of mind. So leaning on that request/response model might not be intuitive until you kind of see it right there.

Paul: Having to think about things you have never thought about, as a general rule, is scary. So it's good to hear that it makes it less scary, at least. So why do you think that the loader presents such a better, front-of-mind sort of view? Are the actions and sort of logic you would do in a loader traditionally more hidden in another framework like Next or a plain React app or something?

Chance Strickland: Yeah, I wouldn't say that. On its face, just that core idea of a data loader is not that different from what you'd see in a lot of frameworks. But when you marry that with the idea of nested routes, which we haven't really talked about, I think the marriage of data loaders and actions in conjunction with nested routes really is a new way of thinking about things, at least in our sort of framework space. And what I mean by that is, first and foremost, you're handling a lot of the data loading on the server. We lean on the idea that most of your intensive data work should probably happen on your server, especially if you're communicating with a bunch of third parties and services or you're making expensive database queries. All that stuff needs to happen on your server. And then your server is responsible for sending that data to your client so that your user has something in front of them as soon as they possibly can. Otherwise they're staring at a bunch of spinners all day waiting for their client-side JavaScript to run, and we're also dependent on the performance of that user's machine to actually get that data. So that in and of itself is something that we talk a great deal about.
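For listeners who haven't seen a Remix route module, here's a minimal sketch of the pattern Chance is describing, assuming a recent v1 release of Remix. The loader gets the standard web Request on the server, and the component that sits right next to it reads the loader's data. The file name and the searchProducts helper are made up for illustration.

```tsx
// app/routes/search.tsx -- a hypothetical route module
import { json } from "@remix-run/node";
import type { LoaderArgs } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";

// Placeholder for whatever server-side data source you already have
// (a database query, a search index, a third-party API, ...).
async function searchProducts(query: string) {
  return query ? [{ id: "1", name: `Result for "${query}"` }] : [];
}

// The loader runs on the server and receives the standard web Request
// for this route, so the HTTP request/response model sits right next
// to the UI it feeds.
export async function loader({ request }: LoaderArgs) {
  const url = new URL(request.url);
  const query = url.searchParams.get("q") ?? "";
  const results = await searchProducts(query);
  return json({ query, results });
}

// The route component reads whatever its loader returned.
export default function SearchRoute() {
  const { query, results } = useLoaderData<typeof loader>();
  return (
    <ul aria-label={`Results for "${query}"`}>
      {results.map((item) => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}
```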
Chance Strickland: But when you pair that with nested routing, what you get is the ability to easily parallelize a lot of these requests in a way that optimizes them without you, as the developer, having to think too hard about, "How do I prevent all of these data waterfalls down the React tree?" If you're initiating that fetch, that request, at the component level, that means every time in the render cycle that you hit a new component, you're going to potentially block everything down the tree until that request processes and comes back and you get the response and then you deal with it in your render. Then you go down the tree further and, nope, there's another component that initiates a fetch, and now you have to do the same thing over again, and that's where you get these waterfalls in your network. And so when we can parallelize all that stuff by initiating those fetches at the route level, and also call all of your loaders concurrently through our nested routes API, that's just a big superpower, and it's going to increase the performance of your frontend exponentially.

Paul: So we're talking about increasing frontend performance in the end here. It's not only a mental model about efficiency of development and efficiency of maintenance, it's efficiency of runtime and how we're actually loading the data and maximizing these pieces. So you mentioned nested routes and loading in some deterministic way, with dependencies like the data for these nested routes. And I remember when I was looking into Remix, there were some things I could see about, "Oh, I could have a component in the page and that's like its own little route that's here, it's in the page, and I can have another component to the left of it, and they're in the same view, but they have their own loaders, they have their own life cycles," and stuff like that. So I'd love to learn about why nested routes are so powerful in your mind, and when you pair them with the loaders, why does that give you an edge over how you're modeling your application?

Chance Strickland: So over the past several years, I think we've started thinking about UI and performance with the notion that we need to get something in front of the user as soon as possible to make it feel fast. And so what do we do? We send them a big skeleton of nothing. There's just a skeleton that says, "Hey, you got something, but it's not really there, so we're going to just wait," so you know something's happening, but you're not sure what, you're not sure where it's going to show up. You just got this empty skeleton with a bunch of spinners all over the page indicating that something's loading here, something's loading there, whatever. Let's say you have a main view and a sidebar: that sidebar's got to fetch some data, that main view's got to fetch some data. I mentioned waterfalls; this is exactly what I'm talking about. You have different routes in the tree that are all initiating these requests, and they're going off at their own pace and doing God knows what until something comes back, and then you're just going to start plugging things into the skeleton view until, okay, everything's here. Well, what does that do? It creates a really jarring experience for me as a user. It's not cool to have stuff popping onto the screen all over the place and causing all of this weird layout shift. It just feels very janky to me. Contrast that with a lot of other apps that you use, like native apps on your phone or on your desktop; you don't typically have the same experience there. And so what can we do to fix that?
Well, the whole idea of nested routes, and where I think it really becomes a superpower, is in parallelizing these requests. First of all, they're all just going to move a lot faster, so you don't have to worry about showing something like a skeleton to the user as soon as possible. Everything's going to happen so much faster that, if it takes just a few milliseconds longer to show them anything on the screen, you can just wait until it's resolved, because it's a much more efficient request. You can wait until it's resolved and then show them all of the views all at once, and then possibly revalidate that on the client. If it's super important that the data be revalidated as they're looking at it, you can do that, but for the most part, you can handle all of that stuff with good old HTTP caching, so that what they see is exactly what you'd expect them to see, as performant as possible, because we've just handled all those requests in a much more performant manner. And that to me is just a much better user experience. And going back to talking about faster backends for faster frontends, this is exactly the kind of thing I'm talking about.

Paul: Yeah, I would love to pose a follow-up question here just to clarify my mental model. So you mentioned parallelization, nested routes, and we're talking about loading data. So is what's happening here sort of like, I can have a component on the left and a component on the right, they're technically separate routes with separate loaders, and when I visit the webpage, on the server, whatever is happening in the backend, whatever API calls, that's being parallelized? And is that the crux of why this is faster?

Chance Strickland: Well, that's a big part of it. Another part of it is just running on a faster backend in general and letting proper HTTP caching really speed up your experience. We went through this phase where everything was supposed to be static, the static site generation phase, where you had all these generating tools, and the whole idea was that if you just had static pages, and you executed all of these expensive queries at build time, then we could just have HTML and we're going to send it to you super, super fast. Well, we can do something very similar to that, without a lot of the tradeoffs of static site generators, by just using HTTP caching or CDN caching, or the two in conjunction with one another, so that the response the user is getting is super fast, but it's also being revalidated without having to run a build every time you change a little bit of data. So all of these different things, these different approaches that we bring to the table with Remix, contribute to the end goal, which is a much faster, much smoother user experience. The nested routes, data loaders, all these things are pieces of the puzzle. But I think the bigger piece of the puzzle is just the overall philosophical shift in how we approach a lot of the same problems.

Paul: If folks are listening along and you're really into web development, we're talking about a lot of server-side activity here, and Chance mentioned we have the static site generators, SSGs, and that's not what we're talking about here. And it goes beyond just the frontend and backend development separation we're talking about; we're sticking with server side and we're optimizing the backend for a faster frontend. And I'd love to get into a little bit about the future of Remix and how you guys are planning the next year of development.
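To make the nested-routes idea concrete, here's a rough sketch using Remix's v1 file conventions: a parent layout route and a child route each export a loader, Remix runs both loaders in parallel on the server when you visit /sales/invoices, and each response can carry ordinary Cache-Control headers so a CDN or the browser can reuse it. The route names, data helpers, and cache values are all hypothetical.

```tsx
// app/routes/sales.tsx -- hypothetical parent route: layout + sidebar
import { json } from "@remix-run/node";
import { Outlet, useLoaderData } from "@remix-run/react";

async function getSidebarNav() {
  // Placeholder for your real data source.
  return [{ href: "/sales/invoices", label: "Invoices" }];
}

export async function loader() {
  const nav = await getSidebarNav();
  // Plain HTTP caching: let the browser or a CDN reuse this response
  // instead of re-running the query on every request.
  return json(
    { nav },
    { headers: { "Cache-Control": "public, max-age=60, s-maxage=300" } }
  );
}

export default function SalesLayout() {
  const { nav } = useLoaderData<typeof loader>();
  return (
    <div>
      <nav>
        {nav.map((item) => (
          <a key={item.href} href={item.href}>
            {item.label}
          </a>
        ))}
      </nav>
      {/* The matching child route renders here. */}
      <Outlet />
    </div>
  );
}
```

```tsx
// app/routes/sales/invoices.tsx -- hypothetical child route: the main view.
// Its loader is called concurrently with the parent's loader above.
import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";

async function getInvoices() {
  // Placeholder for your real data source.
  return [{ id: "inv_1", total: "$20.00" }];
}

export async function loader() {
  return json({ invoices: await getInvoices() });
}

export default function Invoices() {
  const { invoices } = useLoaderData<typeof loader>();
  return (
    <ul>
      {invoices.map((invoice) => (
        <li key={invoice.id}>{invoice.total}</li>
      ))}
    </ul>
  );
}
```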
Paul: Before I do that, really quick, I just want to remind our listeners that PodRocket is brought to you by LogRocket, and LogRocket is for frontends, the stuff that we're talking about right now. It can help you understand exactly how your users are experiencing your components, your digital products. It has a bunch of features like session replay, error tracking, product analytics, even frustration indicators powered by AI analysis. We have machine learning algorithms that surface the most impactful issues affecting your users, so you can spend more time building a better product and less time hunting through tools. So solve your user-reported issues, find those issues faster, and improve conversion and adoption with LogRocket today. So I'd love to talk a little bit about the future of Remix and how your team is planning it out. You mentioned SSGs, and that's a hot topic right now because there are two ways we're trying to optimize: we're doing it the Chance way, faster backends for faster frontends, or we're doing it the static way, which is ship nothing. So is Remix going to plant its feet in the sand then? Are you going to really make it the best it can be for server side, or are you going to start maybe spreading the surface area thin and experimenting with those other areas, such as static or hybrid rendering?

Chance Strickland: Yeah, I wouldn't say that that's high on our priority list, mostly because I think what we're doing right now is challenging a lot of the norms and expectations. And if anything, I actually see the industry at large shifting back towards the server. Next.js is doing that as well. You have a lot of static superpowers in Next.js; there's a lot of API there to deal with static generation and even incremental, I think they call it ISR, incremental static regeneration maybe. But nonetheless, you have all of these APIs and these things that you can do. But you do see, take React Server Components, which has been in the works for years and years and years now, and Next.js has really leaned into that. They're leaning into the server. And so I think there's a shift going on in the ecosystem that's bringing people back in that direction and maybe moving away from static to a certain degree. Now, it's always going to be a mix, and there's going to be... it's like everything else in our industry. It depends on whether or not you want to do something on the server or you just want to generate an HTML page. We might explore that, but I wouldn't say it's top of mind or top priority. We have a lot of other really cool things that we're working on at the moment, and all of those things are readily available to you and to anyone who is interested in checking it out. If you go to our GitHub repo, under the discussions tab you can see we have official RFCs, we have proposals. All of the stuff that we're doing, everything that's on our roadmap, is available in public view on GitHub. So if anyone is interested in the specifics of what we're working on in the coming year, it's all right there. Our main priority at the moment is V2, and I think that's right around the corner. We have some internal deadlines around that that we're working towards, and we're very excited about V2. And we're already even talking about what V3 is going to look like. So lots of really cool stuff coming to Remix, but I wouldn't say that static site generation is the top concern, though we have done some experiments in that domain.
Paul: So what do you think is one of the poster children of V2 that you look forward to advertising?

Chance Strickland: Well, honestly, I think V2 is just an opportunity to take a fresh look at some of the APIs that we introduced in V1. V1 of anything is V1, right? It's based on a lot of ideas and APIs that we designed without the full understanding of how people are going to use them. And so the biggest thing about V1 of any software, I think, is: what can we learn from what we did at this point in time, and how can we move forward from that? And so we've learned a lot. We've learned about what our users are building, what they want to build, where some of those pain points are, what the rough edges look like. And so we've introduced a number of new features in V1 that we are essentially planning to make the default behavior in V2, and then we'll deprecate some older behavior just to shed some development weight. So we've got this idea of future flags, where you can do an early opt-in to a lot of our V2 features. We've changed the meta API, the API for generating metadata on a page; that API has been completely overhauled because of things that we've learned from our users. We now have a lot of built-in support for CSS processing that we didn't have when we started, and we've been working on stabilizing a lot of that, and that'll all be stable in V2. We've got a whole new convention for how you set up your routes in your file system, which will be stabilized in V2 as the default. So a lot of it is really going to be taking things that we've already introduced in V1, making them default behavior instead of opt-in, and then getting rid of the V1 defaults so that we can move forward on the design of V3.

Paul: So it's really like a buildup and a polish for V2, full of tooling and making it easier, a better developer experience.

Chance Strickland: And I think every major version is kind of an opportunity to explore: what did we learn from the last version? What are our users telling us? What do we need to lean into for the future, and what do we need to do to make sure that the upgrade path is as seamless as possible so that we're not constantly generating all of this friction within our ecosystem? It's all a balancing act. We're striving for stability without stagnation, and we want to keep moving forward. And I think there are a lot of really cool features to come.

Paul: So you mentioned two or three really great features that you're working on, such as the meta API. I mean, that's huge for folks wanting to build actually high-ranking websites. But if we wanted to turn away from the new shiny stuff that you're polishing, what is one of the customer frustrations or developer frustrations that you tried to tackle? It could totally be one of the things you just mentioned. I'm just curious what the original frustration point was, if somebody's listening and they're like, "Oh yeah, V1, I remember that was kind of difficult, so I'm looking forward to V2."

Chance Strickland: Well, the big one in my opinion is, and we just launched this in 1.14 I think last week, hot module reloading, or what we're calling hot data revalidation. Yeah, hot data... I don't know, we came up with some sleek marketing term that essentially means HMR for the server.
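For reference, the future-flag opt-in Chance mentions lives in remix.config.js and looks roughly like this. The flag names below reflect the 1.14-era releases as best we can tell, so treat them as illustrative and check the Remix docs for your version.

```js
// remix.config.js -- opting into V2 behavior early via future flags
/** @type {import('@remix-run/dev').AppConfig} */
module.exports = {
  future: {
    v2_meta: true,            // the overhauled meta API
    v2_routeConvention: true, // the new flat-file route convention
    unstable_dev: true,       // the new dev server (HMR + hot data revalidation)
  },
};
```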
Chance Strickland: But what we've come to know and love from tools like Webpack and Next.js and other compiler-driven projects is this really nice feature where you can just change some code and have it immediately updated in the browser without losing any of your data or any of your client-side state. That's really, really nice, especially when you're building these high-level, high-interactivity type applications. Think of something like a chat widget, where you're making changes to the styling of your widget but you don't want to lose all that chat history and have to reset the chat every single time you make a change. Really frustrating. And so we just launched a very early version, not even a pre-release; it's marked as an unstable feature that you opt into to use our new dev server. We're going to stabilize that in V2, so that by default, when you spin up an app in dev mode in V2, you've got HMR ready to go not only for all of your client-side code, but for your server-side code as well, which is really cool. If you change any data in your loaders, any of the logic in your actions, or any of the client-side state in your components, none of those changes should reset the view or result in data loss as you're developing. And I think that's going to be a huge productivity boon.

Paul: And last question, Chance, to wrap us up here on the technical side of Remix: what do you think Remix is going to offer the industry for people wanting to self-host, not use the typical Jamstack sort of serverless-and-APIs-glued-together architecture that they might start out of the gate with? Do you think Remix is made for self-hosters, or is it made more for the serverless deployment world that we're stepping into right now?

Chance Strickland: I mean, all of the above. It's really not prescriptive as far as where you get your data or how you send data. It is prescriptive in how you load the data for your client, but it's not prescriptive as to where your data actually lives. We want you to be able to use the data infrastructure that you already have. We're not trying to tell you to overhaul your infrastructure. We simply give you conventions for working with data in a way that is performant and creates a better user experience. So Remix is well-suited for folks who are using serverless, well-suited for folks who are running their own servers, or for launching on distributed edge networks. There's a lot of possibilities there, and we're not overly prescriptive. In fact, we have server adapters for a variety of different hosts, and I want to highlight some community adapters as well. There are folks in the community and folks at different hosts; Netlify just launched their own Remix adapter for their edge platform, which runs on Deno, which is really, really cool. So we're pretty much set up to run not only on serverless, but on completely different runtimes. You can run on Bun, you can run on Deno, you can run on a traditional Node server, you can run on a Lambda function. It's really up to you, and we have a lot of adapters and tools to enable all of those folks.

Paul: So that means I could have my own Deno runtime and use this community adapter and run my Remix backend?

Chance Strickland: Well, yeah, and if you're running your own Deno service, you don't even need a community adapter, because we provide one in Remix. It's one of our core adapters.
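As one concrete example of the adapter idea, a minimal Express setup (simplified from Remix's default Express template) looks roughly like this. The same pattern, turning the platform's native request into a web Request and handing it to the Remix server build, is what the Deno, Netlify, and other adapters do for their platforms; the port and build path here are assumptions from the default template.

```js
// server.js -- a minimal Node server using Remix's Express adapter
const express = require("express");
const { createRequestHandler } = require("@remix-run/express");

const app = express();

// Serve static assets (including the client build under public/build).
app.use(express.static("public"));

// Hand every other request to Remix: the adapter converts the Node
// request into a web Request, runs your loaders/actions, and sends
// the resulting Response back.
app.all(
  "*",
  createRequestHandler({
    build: require("./build"), // the server build emitted by `remix build`
    mode: process.env.NODE_ENV,
  })
);

app.listen(3000, () => {
  console.log("Remix app listening on http://localhost:3000");
});
```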
Chance Strickland: So if you just have your own Deno server and you want to deploy to it, just run create-remix and select the Deno template and you're off to the races.

Paul: Very exciting. So you can really take Remix and put it everywhere. And I think any framework that gives developers the power to deploy where they want and how they want is typically a very good thing. It prevents vendor lock-in and it really lets you spread your wings and grow organically as you need. So it's really great to hear that all these... even the community is pumping out these adapters to really allow you to deploy where you need.

Chance Strickland: Yeah, that's the idea. We want to be completely server-runtime agnostic, which is only possible when these runtimes build on top of web fundamentals and web APIs. So we're really excited to see the greater ecosystem moving in that direction. All of the new takes on JavaScript runtimes are enabling a lot of these things to move forward with a remarkable amount of speed, and that makes us really excited about the future.

Paul: So Chance, you've mentioned that if people want to look at the new stuff coming out of Remix, the GitHub is one of the number one spots to go. I'm sure you're posting about the stuff that you're working on, though. Do you have a Twitter or another medium where people can follow you and wait for updates?

Chance Strickland: Yeah, I'm @chancethedev on Twitter, spelled exactly as you would imagine it would be, Chance the Dev. I also have a course I'm working on, hopefully launching in the next year or so, called Front to Back, at fronttoback.dev. As far as Remix is concerned, you can follow us on GitHub, or @remix_run on Twitter. We've also got a Discord, so if you go to remix.run and check down in the footer, there's a link to join our Discord. We've got a vibrant community on Discord full of folks who are always eager to help you work through any challenges you're having, as well as just talk about what's going on in your neck of the woods in web development and help us build a tool that's better for you.

Paul: That's one of the best resources ever, a chat room. If you have an open Slack or an open Discord, sometimes those are questions that no Google search can answer for you.

Chance Strickland: Yeah, it's a great space. We love our Discord users, and it's really, to me, one of the cornerstones of our community.

Paul: Chance, thank you for your time coming on and teaching us about the potential that lies in server-side rendering and marrying the backend and the frontend ideas a little tighter. Hopefully some people can go check out Remix, if they haven't checked it out for the first time, and see some of the new stuff.

Chance Strickland: Yeah, thanks for having me on, Paul. Appreciate it.