tanstack-query-v5-with-dominik-dorfmeister === [00:00:00] Hi, and welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. Try it for free at LogRocket.com. Hey, everybody. I am Paige Niedringhaus, and I will be your podcast host for today. And we have Dominik Dorfmeister on to talk about TanStack Query v5. Dominik is one of the TanStack Query maintainers. Welcome, Dominik. Hello. Thanks for having me. We are very excited to have you on because just recently TanStack Query v5 has come out, so maybe you could give us a little bit of background for people who are less familiar with TanStack Query in general, and then we can start to dive into some of the improvements and the new things that are coming with v5. Okay, absolutely. TanStack Query is a library which originated in the React sphere, so to say, but it has since [00:01:00] become an agnostic library that also works with other frameworks like Vue or Solid or Svelte. And by the way, we also currently have a contribution ongoing where we might get an Angular adapter for it as well, based on Angular signals. But what the library actually does is it's basically a state manager for asynchronous state. So everything that can produce a promise can work with it. And that's probably what most people are using it for: to make data fetching in those frameworks easier. Because in React, you usually only have the server to do data fetching, or on the client you have useEffect. But with React Query, you get a lot of things handled for you automatically when it comes to managing the lifecycle around those states, and then also distributing the data that comes back in your application very efficiently. I've had the pleasure of using React Query in my own projects. And for me, it was a godsend for polling in particular, because we [00:02:00] didn't know when new data was coming in, but we knew it would be. So having that set up and just having it handle the polling for me was fantastic. There are lots of features that are built around those things. Like you said, there's polling, then there are paginated queries, where we make working with paginated APIs easier, and also infinite scrolling. And there are a million features that we've come to accumulate over the years. But the goal is still the same: to make the most common use cases around data fetching easier for consumers. Awesome. So let's start talking about what's new in v5. I had a chance to read the blog post, which we will link to in the show notes, but it says that it is 20 percent smaller than v4, which is always great. And it says that it is better and more intuitive for developers to use. So can you talk a little bit about what kind of enhancements you made to make it better and easier for developers? Okay. So one of the things that we did in version five, the big theme of the version, was to [00:03:00] basically finish what we wanted to do in version four already, which is unifying all the APIs to use a single object that you pass in. If you've used useQuery before, like in the docs, we had these examples where you pass in a key and then a function and then some additional options, but because the library originated in JavaScript, we actually had these kinds of overloads where you could pass a different number of arguments in and the library would then figure out what you wanted at runtime.
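To make that concrete, here's a minimal sketch of the change. The `todos` key and the `fetchTodos` fetcher are placeholder names for illustration, not something from the episode:

```tsx
import { useQuery } from '@tanstack/react-query'

// Placeholder fetcher for illustration only.
const fetchTodos = async (): Promise<string[]> => {
  const res = await fetch('/api/todos')
  return res.json()
}

function Todos() {
  // v4 and earlier allowed positional overloads, e.g.
  //   useQuery(['todos'], fetchTodos, { staleTime: 5000 })
  // v5 accepts only the single options object:
  const { data, isPending } = useQuery({
    queryKey: ['todos'],
    queryFn: fetchTodos,
    staleTime: 5000,
  })

  if (isPending) return <p>Loading…</p>
  return <ul>{data?.map((todo) => <li key={todo}>{todo}</li>)}</ul>
}
```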
And that not only works not so nicely with TypeScript, it's also a bit of an overhead for us internally to figure out. Having only one version also made specific reactivity things easier for SolidJS. And it also makes teaching easier if you have one version where you basically use an object everywhere. We had wanted to do this for version 4 already, but there was a problem with TypeScript, and type inference specifically. We already had the PR up and then we said, no, we can't do this, this will not work in specific [00:04:00] situations for many users. So we shelved the idea, and then with TypeScript 4.7 they fixed the issue and we said, okay, we're going to pick this up for the next major version. And that's also why the minimum TypeScript requirement for the new major version is in fact TypeScript 4.7. Oh, that's really interesting. That's great to know, though. So how has the community responded to the changes so far? Have you been hearing a lot of feedback, either positive or negative, about the changes? Yeah, so I tried to be very open with what we're doing, because in version four we didn't do it in secret, but the team was just working on things and then we released alphas and betas. But by the time we released the final version, that's when people really started to want to adopt it. And then they found out some things that they didn't like, and we got some negative feedback about that. And for me personally, that hurt a lot, because I tried to make things right, and then we got the feedback right after it was too late. So we reverted that mistake in version five anyway, but this [00:05:00] time I wanted to be very upfront. So about a year ago, we released a roadmap for version five with all the things that we had planned, and we got some amazing feedback and some very early adopters that tried things out, and I think we also pushed back on one of the changes that wasn't so well received. We tried to really listen to the community about which changes make sense, driven by what we think makes sense in the first place. Did that roadmap change significantly based on the early feedback from your early adopters and just your community in general? I wouldn't say it changed significantly, but it changed to some degree, because a couple of things were fuzzy on the roadmap. We just knew we wanted to go a different way for infinite queries, and we knew we wanted an experimental persister plugin where we didn't yet know how we would implement it. And then there were some renames where we had a lot of bikeshedding going on about what the name should actually be. But that's good, because you get lots of opinions, and you can't get it right for [00:06:00] everyone, but at least you can try to come to a conclusion where you're happy with it at the end. And the more feedback you get for that, the better decision you can make. So you said that you rolled back some changes that you'd made in v4, as well as simplifying the hooks and eliminating the overloads. What kinds of potential issues could users encounter as they upgrade to v5? Are there a lot of breaking changes that are going to look very different from what they're currently using in v4? I think that depends on how you were using it before, like which syntax you're using. We knew we were going to make this big breaking syntax change where everything that you pass in has to be one object.
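The same single-object shape runs through the rest of the API as well. As a hedged sketch, with placeholder names for the key and endpoint:

```tsx
import { useMutation, useQueryClient } from '@tanstack/react-query'

function AddTodo() {
  const queryClient = useQueryClient()

  // useMutation follows the same pattern: one options object.
  const addTodo = useMutation({
    mutationFn: (title: string) =>
      fetch('/api/todos', { method: 'POST', body: JSON.stringify({ title }) }),
    onSuccess: () => {
      // v4 also allowed queryClient.invalidateQueries(['todos']);
      // in v5 the filters go into a single object as well.
      queryClient.invalidateQueries({ queryKey: ['todos'] })
    },
  })

  return <button onClick={() => addTodo.mutate('Buy milk')}>Add</button>
}
```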
We knew that if you're using TypeScript, it's not going to be that much of a problem, because the compiler will tell you. But if you are a JavaScript user, then you will probably have to go through these things manually. And it's not only the useQuery hook that changed. It was all the methods, like invalidateQueries and useMutation. Basically [00:07:00] every method that you use gets a changed signature. So what we tried to do was get users in very early and warn or hint them that this was going to happen in the future. And because the syntax was already valid in version 4 anyway, right? It was just that in version 5 it became the only valid syntax. We released an ESLint package with an ESLint rule that basically tells you to use the object syntax if possible, and it was also auto-fixable. So you could go through your code base on version 4 with the ESLint plugin and auto-change most of the usages up front. And for version five specifically, we also wrote a couple of codemods that make these renames and these structural changes for you as well as possible. And yeah, apart from that, it was all about communicating the changes and making sure that people are aware of what's going to happen with version five. And from the people that have upgraded, what I've heard now is there aren't any issues with that. But of course, if you have a large [00:08:00] code base and you haven't been using the ESLint plugin before, and you don't have TypeScript, then I think you probably have bigger problems anyway. But still, if you then want to update, then yeah, it's going to be a bit of find and replace and hoping that you have test coverage. That is wild. I've actually never heard of an auto-fixing ESLint rule. I mean, had you ever done something like that before? Because it sounds like such a cool thing that would be really useful for a lot of people in a lot of instances. I haven't written any auto-fixable ones, but I know, for example, that the exhaustive dependencies rule of the ESLint plugin for the React rules of hooks is also auto-fixable. So it detects what you're using inside your useMemo hook, and then it basically auto-fills the dependency array for you. I think that's one of the more prominent ones that I know of that seems rather complex. There are ones that replace let with const if you're not reassigning, and so on; those are also auto-fixable. But yeah, the React one is the one [00:09:00] where I think there must be some edge cases in there that are pretty hard to get right. All right, so we talked a little bit about how you did the substantial changes for v5, the API restructuring. What was the rationale behind the removal of some of the callbacks that you had, like the onSuccess, onError, and onSettled callbacks from the queries? And what do you recommend that people do now if they were using those previously? Oh yeah, that was a fun one. I think this wasn't on the original roadmap, but it just came up a lot of the time when I was answering questions about useQuery and how it is being used. A lot of the time where I saw people use the callbacks, they didn't really understand what the callbacks were doing, because they were a bit confusing; they weren't the most intuitive in terms of what you'd expect a callback like that to do. So I thought about introducing an additional callback that would maybe make things easier, so you would know when you had to use them.
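For context, this is roughly the kind of per-query callback usage under discussion, written against the older v4-style options with placeholder names; these options no longer exist on useQuery in v5:

```tsx
import { useQuery } from '@tanstack/react-query'

// Placeholder fetcher for illustration only.
const fetchTodos = async (): Promise<string[]> =>
  (await fetch('/api/todos')).json()

function Todos() {
  // v4 allowed side effects to be attached directly to the query.
  // onSuccess / onError / onSettled were removed from useQuery in v5.
  const { data } = useQuery({
    queryKey: ['todos'],
    queryFn: fetchTodos,
    onSuccess: (todos) => console.log('fetched', todos.length, 'todos'),
    onError: (error) => console.error('failed to fetch todos', error),
    onSettled: () => console.log('query settled'),
  })

  return <ul>{data?.map((t) => <li key={t}>{t}</li>)}</ul>
}
```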
But at the end of the day, [00:10:00] when I looked at all the usages, and I also asked around on Twitter and with other heavy users about what's going on with those callbacks, and when I tried to explain what they're actually doing, most people agreed that they're not very useful. And I mean, it sounds okay: if a query succeeds, you want the onSuccess, and if it errors, you want the onError. But that's not actually what happens. I have a complete blog post on that topic, on why we removed them. It started as an RFC for removing them, and then a whole blog post came out of it. And the bottom line was that, yeah, it was just too confusing and not easy to fix, and you don't actually want them most of the time. And if you've been using them before, there are some suggestions in my blog on what you can do. We do have global callbacks on the cache level that are probably more appropriate for the things that you are doing. And if you're using React Query together with an additional state manager, where you use those callbacks to write the data to a Redux store or a Zustand store, then the suggestion is to not do that, because that's one of the anti-patterns with React Query that [00:11:00] I've seen people do a lot. That was one of the most common use cases for those callbacks, and it was also one of the reasons we removed them: the callbacks basically invited these anti-patterns. And now that you don't have them, you would probably need to write a useEffect that does this, and nobody likes to do that. So it discourages people from doing that, which I also think is a good idea. There weren't many good use cases where you'd really want to have callbacks that work on a component level. So if you were using the callbacks, you probably should reevaluate your code and consider refactoring it anyway. Yeah, it depends on what you were doing. Of course, the easy way out is to just write a useEffect instead that listens on the data property, and then you basically have an onSuccess callback that works a bit more reliably than the one we had, actually. But still, and I think that's the wording I have in my blog post, at least if you write the useEffect, you feel a bit dirty doing it. Whereas if you use the onSuccess callback, you think, oh, this must be a legit pattern, when it's [00:12:00] really just a side effect in disguise that you probably don't want in those cases. Could you elaborate a little bit on the query options API? You say that it can streamline the query definitions and the client interactions. So maybe you could talk a little bit more about that. Yeah. This is also one of the changes that got in quite late onto the roadmap. I did a live stream last Friday where I think I talked for 45 minutes only about the query options API, because I'm so excited about that addition and what it can do. And the interesting thing is that at runtime, it doesn't do anything. It's a function that takes an object in and returns an object out, so it basically returns its own input. There is nothing really fancy going on, but there are some magical things going on at the type level.
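Here's a hedged sketch of what that looks like in practice; todoOptions, the key, and the fetcher are placeholder names, not from the episode:

```ts
import { queryOptions, useQuery, QueryClient } from '@tanstack/react-query'

// Placeholder fetcher for illustration only.
const fetchTodo = async (id: number): Promise<{ id: number; title: string }> =>
  (await fetch(`/api/todos/${id}`)).json()

// Define key + function + options once, with full type inference.
const todoOptions = (id: number) =>
  queryOptions({
    queryKey: ['todos', id],
    queryFn: () => fetchTodo(id),
    staleTime: 60 * 1000,
  })

// ...and reuse the exact same definition wherever it's needed:
function useTodo(id: number) {
  return useQuery(todoOptions(id)) // in a component or custom hook
}

const queryClient = new QueryClient()
const prefetchTodo = (id: number) =>
  queryClient.prefetchQuery(todoOptions(id)) // e.g. in a router loader
```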
What that basically allows you to do is define the options object that you put into useQuery, because now with the new API it has to be one object, outside of the component, outside of the useQuery usage, [00:13:00] and still get all your type inference, even better type inference if you then reuse it across multiple functions. And you can take that object and pass it to useQuery, you can pass it to prefetchQuery, you can pass it to useQueries, right, the hook that allows multiple queries to run in parallel, and so on. So you can really streamline your single definition of a query, centralize it, and pass it around to wherever you need it. And this is the successor to a pattern that I've always recommended in the past, the query key factory, where I basically suggested centralizing how you define your query keys, because getting them right is one of the things that gets harder as your app grows, and if you get them wrong, it's devastating to read from or write to the wrong cache entry. I've been there myself and debugged for hours why I'm seeing the wrong data. Yeah, the query key factory was the pattern that was established before, but we're now taking it a step further: not only [00:14:00] centralizing how you define the key, but letting you define the key plus the function plus all additional options that you would want on the query in a single place. Excellent. So you're essentially making it easier for people to write to the correct queries and hopefully not mix them up. Yeah. Very good. All right, so I'd love to talk a little bit about the new experimental suspense options that you have, React Server Components, and how developers can use them with React Query. Could you tell us a little bit about how you're supporting suspense now? So this is also something that wasn't on the roadmap, because it came in as an outside contribution from the team behind tRPC, which is an end-to-end full stack layer that is built on top of TanStack Query. Alex and Julius, thanks again for contributing this; they kind of played around with this in tRPC and tried to [00:15:00] make suspense and streaming work together with React Query, and then basically extracted it into a standalone adapter. And we agreed that the Query repo is the right place to actually have that, because it's independent of tRPC. What it actually does is, in a Next.js app directory application, it allows you to write a useSuspenseQuery hook call in a client component as you would normally do, but in the server-side rendering run it kicks off the data fetching on the server. It automatically renders the suspense fallback to the client, but then, as the results come in, it streams them to the client, because that's at least what Next.js does with the HTML result, right? What the adapter then does on top is it basically hooks into the stream from Next.js and sends the data from the query client that you have on the server to the query client that you have on the client in the browser, so that it also [00:16:00] seeds the query cache in the browser with the data from the server. In a nutshell, you start fetching on the server, and then it magically appears on the client in your cache, and you can continue to have all the interactivity and smart refetches on the client with the data from the server, without having to do the prefetching and the manual hydration that you would need to do without this adapter.
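On the consuming side, the client component ends up looking roughly like this — assuming the experimental streaming adapter is wired up in a provider higher in the tree, and with placeholder names for the key and fetcher:

```tsx
'use client'

import { useSuspenseQuery } from '@tanstack/react-query'

// Placeholder fetcher for illustration only.
const fetchPosts = async (): Promise<{ id: number; title: string }[]> =>
  (await fetch('https://example.com/api/posts')).json()

export function Posts() {
  // Suspends during server-side rendering; the fallback is streamed first,
  // and the resolved data is later streamed down and seeded into the
  // browser-side query cache by the adapter.
  const { data } = useSuspenseQuery({
    queryKey: ['posts'],
    queryFn: fetchPosts,
  })

  return (
    <ul>
      {data.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  )
}
```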
Nice. That sounds like a massive improvement and something that I'm sure a lot of developers, myself included, have wanted for a long time. Yeah. It's one of these have-your-cake-and-eat-it situations too, where you don't want to write additional boilerplate to set up the server-side rendering thing. You just want to write your components that do data fetching as if you were writing just your components, but the client components still automatically pick up all those things. It blurs the lines between server and client a little bit more, but in a good way, so yeah, I really like it. Is it safe to say that [00:17:00] this is one of the main reasons that TanStack Query v5 only supports React versions 18 and above? That's one of the reasons, yeah. Next.js also only supports version 18 and above, and it's something that came from multiple sources, basically. Because we were using the useSyncExternalStore hook, that's a new hook in React 18 that allows us to keep our state in an external store and then sync it back to React and make it work with concurrent features. But to support React 17, you need a separate dependency that makes the hook also exist, in a shimmed way, in React 17. And there are other things that we were doing that got better in React 18. I think there's an unstable_batchedUpdates function that we were using under the hood, but that kind of got removed in 18. So there were lots of things that we could just throw away, which also made things easier. And that's also where some of the bundle size improvements come from, from us just saying, okay, we only support React 18. [00:18:00] We only support modern browsers, not the most modern ones, but we switched to optional chaining being preserved, and private class fields, and stuff like that. Things that have been around for at least a year. And all of these things make for a 20 percent size reduction in total. So you just dropped some shims and some additional backwards compatibility and support, things like that, and that really cut down on your size a lot. Yeah. The decision was to not make 95 percent of our user base suffer for also supporting the other 5 percent, because the 5 percent that remain can still take the output and transpile it backwards themselves to what they want to consume. So it's leaving nobody behind, but still trying to optimize for the use case of the many. No, I think that makes complete sense. So one of the other new features that was marked in the blog announcement was something about simplified optimistic updates and not having to update caches manually. So I'd love [00:19:00] it if you could describe that for us and how it works. Yeah, that was something that was only technically a breaking change. I think we could have also made it in version four, but there were some internals that we needed to change to make this feature, so that's why it only came out in version five. But essentially, if you have an update and you want to make it optimistic, the traditional way was you had to write a bit of boilerplate and write data directly to the cache, so that, for example, your to-do list that renders the data would actually be updated before the response comes back from the server.
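For reference, this is roughly the traditional cache-writing boilerplate being described, sketched against the v5 object API with placeholder names:

```ts
import { useMutation, useQueryClient } from '@tanstack/react-query'

type Todo = { id: number; title: string }

// Placeholder API call for illustration only.
const addTodo = async (title: string): Promise<Todo> =>
  (await fetch('/api/todos', { method: 'POST', body: JSON.stringify({ title }) })).json()

function useAddTodo() {
  const queryClient = useQueryClient()

  return useMutation({
    mutationFn: addTodo,
    // The "traditional" optimistic update: write to the cache up front...
    onMutate: async (title) => {
      await queryClient.cancelQueries({ queryKey: ['todos'] })
      const previous = queryClient.getQueryData<Todo[]>(['todos'])
      queryClient.setQueryData<Todo[]>(['todos'], (old = []) => [
        ...old,
        { id: Date.now(), title },
      ])
      return { previous }
    },
    // ...roll it back if the mutation fails...
    onError: (_err, _title, context) => {
      queryClient.setQueryData(['todos'], context?.previous)
    },
    // ...and refetch once it settles either way.
    onSettled: () => queryClient.invalidateQueries({ queryKey: ['todos'] }),
  })
}
```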
And what we did in version 5 was, we just exposed a couple more things on the useMutation hook. Namely, the variables that were sent into the mutation now come back from the useMutation hook. And that means you can just render the variables optimistically in your UI without writing them to the cache, so you basically get an ephemeral UI while the mutation is going on. One of the other things that had to come with this was a [00:20:00] shared mutation state feature, because mutations were only local to your component, so the state wasn't really shared with other components. This thing only worked if you had the mutation and the query in the same component, which I think isn't always the case, though it's probably more often the case than not. So what you can do with the new useMutationState hook is basically read all the variables of the pending mutations in a separate component, and then render them in the UI optimistically while the mutation is running, if you want. So it takes a, I think, 15-line boilerplate thing down to maybe two or three lines of code, for those simple cases where you only want to render one UI in a pending state. Georgina, it's your turn! The new maxPages option for infinite queries: can you explain a little bit about what that is solving for users? Yep, absolutely. Infinite queries are basically one query where the data is chunked into multiple pages and multiple fetches. So [00:21:00] think of a Twitter feed or some other timeline that grows larger over time. Now, if you have scrolled to the bottom maybe 10 times, then you have 10 pages in the cache. And because React Query keeps all those pages in the cache, what happens is, if you move away from one page and then come back, you instantly get all those 10 pages. Now, rendering 10 pages, if every page has 100 elements, that's like a thousand elements, and if you haven't virtualized that, then it maybe freezes your browser, or at least slows it down. And React Query will also refetch all the pages if they're considered stale. So you get into a bit of a situation where you're overfetching, and you also get into a situation where you have maybe too much data in memory and you want to render too much data at the same time. So with the maxPages option, you're basically getting a kind of sliding window on your infinite query, where you can define how many pages should be kept in the cache. And as you move forward in the cache, the pages at the beginning are basically thrown out of the cache. And because infinite queries [00:22:00] work in both directions, you can fetch more in the one direction and in the other. When the user scrolls back up, you can go back there again and refetch that page, and it would kick out one page on the other side, and so on. So it goes both ways. And it's one of those things where you can now optimize how much is stored in memory, and how many pages you're actually refetching when the user would read stale data. Can you also set how long it should be before pages are invalidated as well? That's a general feature: you basically define a staleTime on your queries, which says that the data in the cache is considered fresh for this amount of time. And after that, if a specific trigger occurs, React Query will go and fetch some new data for you.
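A minimal sketch of an infinite query using that sliding window, assuming a cursor-based API with placeholder names:

```ts
import { useInfiniteQuery } from '@tanstack/react-query'

type FeedPage = {
  items: string[]
  nextCursor: number | null
  prevCursor: number | null
}

// Placeholder fetcher for illustration only.
const fetchFeed = async (cursor: number | null): Promise<FeedPage> =>
  (await fetch(`/api/feed?cursor=${cursor ?? 0}`)).json()

function useFeed() {
  return useInfiniteQuery({
    queryKey: ['feed'],
    queryFn: ({ pageParam }) => fetchFeed(pageParam),
    initialPageParam: 0,
    getNextPageParam: (lastPage) => lastPage.nextCursor,
    // Defined so pages dropped by maxPages can be refetched when scrolling back.
    getPreviousPageParam: (firstPage) => firstPage.prevCursor,
    // Sliding window: keep at most 3 pages in the cache.
    maxPages: 3,
    // Pages are considered fresh for one minute before refetch triggers apply.
    staleTime: 60 * 1000,
  })
}
```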
I know that one of the things that you mentioned in the blog post was fine-grained persistence, so I'd love it if you could talk a little bit more about what that means. Thanks. This is a proposal that basically came from users in the React Native community, where memory is also a concern on low-end [00:23:00] devices. We have a plugin that basically takes your whole query client and persists it to disk storage, like local storage or, in React Native, I think mostly AsyncStorage. And this works really nicely for an offline experience, but at the cost of always having to write and read the whole cache. And if the cache is large, again, this gets into situations where you do lots of writes that might be blocking or expensive. And you'd still always have a full copy in memory, so you can't have queries on the disk but not in memory. It's always a full copy, and if we delete it from memory, it's also deleted from the disk, because it mirrors what you have in memory. So with the new experimental createPersister plugin, we went a different approach, where we're saying that you can define that a query is basically short-lived in memory, but whenever it gets garbage collected and then tries to fetch again, it's going to look it up in the storage first. The storage and the data in memory basically have two [00:24:00] different times when they expire. So you can keep your data on the disk for days, but you can keep it in memory for maybe, I don't know, one minute after it's unused. And then you basically only have in memory what you're actually using. When you switch around between pages, data gets instantly purged from the cache, but you still have it on the disk and can read it from there very fast. So the fine-grained persistence goes into this memory optimization that you would need for those cases. So I know that v5 has just come out and I'm sure you are all very glad that it's out and available for people to use, but what are some anticipated developments or new things that you're thinking TanStack Query might be focusing on in the future? Do you have any ideas for the v6 roadmap yet? Not really, no. V5 has just happened. I think React Query is now five years old and we have five versions, so it seems like we're doing one major version per year, which is probably the most that I would do. I think it depends on when the next big [00:25:00] obstacle comes up, because all of those major versions were basically driven by bugs that we cannot fix without breaking things, or features that we cannot add without re-architecting something internally that would break something. And as long as that doesn't happen, there isn't really anything that warrants a major release. And I wouldn't be too excited about the major releases; I would be excited about the minor releases, because that's where we're going to come out with the new features and where the new things are happening. If a major is coming, it means we messed up something along the way and need to fix it. Well, that is very fair. And that's good advice for anybody who's interested in using React Query: the minor versions are where all the goodies come in. The major ones are just because things had to get fixed or major changes happened in the React ecosystem, yeah. I mean, if you look at other libraries, I think it's the same overall, right? React shipped hooks in, I think, 16.8,
and React Router got its great loader feature in a minor version too. So it's the minor versions where the good stuff is [00:26:00] happening. Is there anything that we haven't touched on in v5 that you think is worth discussing or calling out? Yeah, there's one thing. I want to really thank Aryan, who is our Solid Query maintainer, because he also worked on the agnostic devtools, which are new in version five, and they have a brand new look. They still feel the same, but they have a much better user interface to interact with, and they are also written in SolidJS and are now available for all the other adapters. So React is basically just a thin layer on top of that, and we can have the same devtools for all the other adapters. It's just a huge step forward in providing the same devtools for all of our users. I've used the devtools in the past and they were incredibly helpful in debugging, so it's great to hear that everybody can have the same experience regardless of the framework that they're using. Yeah, I can't work without them. Whenever I get a reproduction and they don't have the devtools, the first thing I add is the devtools, because I need to see what's going on in the cache; otherwise I'm flying blind. [00:27:00] Dominik, it has been a pleasure talking to you today. If people want to get in touch with you or learn more about TanStack Query, where are the best places to find you online? So yeah, you can always find me on Twitter, TkDodo there, and I'm quite responsive. We have a great community on Discord; there's a TanStack Discord server that you can join. GitHub Discussions is also always a possibility. And yeah, if you want to learn more about React Query, we have great docs up. I can also recommend the blog that I'm writing, where there's tons of content about React Query. And on top of that, I've been working together with ui.dev for some time now on a new official React Query course. It's called query.gg. You can go and take a look, and it's going to be available soon, I hope, at the end of this year or the beginning of next year. And I hope it's going to be a great resource for getting to know React Query very well. I'm sure that it will be. I mean, those sorts of courses that are put out by the creators and maintainers of the frameworks are by far the best, because [00:28:00] you really know it inside and out. So that's going to be a fantastic resource, I'm sure. Thanks. Yeah, I hope so too. Well, Dominik, it has been an absolute pleasure talking to you today. Thank you so much for joining us on the LogRocket podcast. Thank you for having me. It was great fun.