Noel: Hello, and welcome to PodRocket. I'm Noel, and joining me today is Matteus Albuquerque. Matteus is a Senior Front-End Engineer at Medallia, I hope I'm saying that right, and a mentor at Tech Labs. He's here today to give us a deep dive on Concurrent React, which is a talk he gave recently at the React Advanced Conference. Did I get all that right? Matteus: Yep. Hello. Noel: Perfect. Awesome. Cool. Thank you so much for being here. Where was that? Where was the React Advanced Conference held? Matteus: First, thanks for having me. It's great to be here. I personally like the podcast. I've been following you guys. So yeah, it was in London. It was at the end of October in London. Noel: Nice, nice. Cool. I listened to most of that. I didn't make it all the way to the end of your talk, so hopefully we can kind of delve in and bring it to the listeners in some form here. But before we get into it, can you give us a little bit of your background in web dev and how you found yourself in the position you're in today? Matteus: Sure. So, basically I started with web development back in 2013, 2014. I was mostly into Angular 1.2 and Ionic back then. I did play a lot with other stuff too. jQuery was still a thing, PHP. So, I did experiment with a lot of that. And then, for random reasons, I switched to iOS and I spent two years as a native iOS developer playing with Swift, Objective-C, and a bunch of Apple stuff. Then I kind of realized that was not really my thing and I switched back to front-end again. I was doing a lot of React, and I decided to focus on React back then. I just kept diving more and more into the whole ecosystem, across different companies of different scales. Noel: Would you say you're closer to the internals of React in your day-to-day than a lot of other web developers are, even if they're React users? Matteus: To be honest, it is not part of my day-to-day job.
So basically, at Medallia, I work with performance optimization and related things. So sometimes knowing how React works under the hood is useful. But my story with the internals of React traces back to about 2018, when Hooks and Suspense and so on became a thing. A close friend of mine basically predicted Suspense a couple of months before it was released. And then, I remember seeing that session where they announced Suspense and I was like, hey, my friend showed me that at the beginning of the year. I went to him like, "Hey, how did you know about it?" And he was like, "Yeah, I was following the pull requests and stuff and it just felt like it was going to happen any time." So back then, I was really motivated. Okay, maybe digging a bit into the source and trying to figure out things might help me predict a bit of what's going to happen and understand what's happening now. Noel: Yeah. So let's dig into the topic of the talk a little bit. What is Concurrent React in your words, and maybe at a high level, what do developers need to know about it to make their web apps as optimal as possible? Matteus: So, I would say that Concurrent React nowadays is more of an umbrella of different patterns and different features. Years ago, when this whole topic started, it was sold as one thing: you enabled concurrent mode and that was Concurrent React. But nowadays, and mostly with React 18, it's more like a set of features that you can opt into and patterns you can follow. Like splitting high and low priority updates, deferring values, that kind of stuff. So, it's an umbrella of patterns, an umbrella of APIs, and it's also an implementation detail of how those things work under the hood to basically allow React to render multiple versions of the UI at the same time. Noel: Gotcha, gotcha.
So maybe to kind of start at the beginning and paint this so we can reach as broad a listener base as possible, can we start a little bit by talking about the default way of doing things in JavaScript? Like blocking on the main thread, how that can be a problem, some real life examples of that, and then we can tie that into React as we go. Matteus: Cool. So, the main thread is basically where most of the tasks in the browser run. It's where nearly all the JavaScript we write runs. The tricky thing is that it can only process one thing at a time. So, this becomes a problem when we get long tasks, and that's one of the things I started by approaching in my session. Basically, when something takes more than 50 milliseconds, it's a long task. That's bad because we have only one thread. So if that one thread, which is responsible for rendering the UI, is blocked, then the whole UI is blocked and we can get to really bad states in our apps. Noel: Are there ways that we can avoid doing that? Particularly in terms of working in UI frameworks, React in particular? Matteus: Yeah, sure. So basically, we have different approaches, because to think about how not to block the main thread, we have to think about how to run our tasks, especially the long tasks. Throughout history, we've seen different ways of doing that and different approaches. The ones we touch on more in the session are parallelism and concurrency. Especially because for discussing concurrency, you kind of have to discuss the other side, which is parallelism, and see what the drawbacks are. So basically, the ways we have to avoid blocking the main thread are going either with parallelism or concurrency, or scheduling, which is concurrency plus a scheduler. Noel: It can be tricky to get into, but let's talk about parallelism and concurrency first.
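As a concrete illustration of the long-task problem described here, one common mitigation is to break the work into chunks and periodically yield back to the main thread so rendering and input handling can run. This is a minimal sketch, not anything from the talk; the `processInChunks` helper and the 50 ms budget are illustrative assumptions.

```javascript
// Yield control back to the event loop so the browser can render.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list without ever holding the main thread
// for more than roughly `budgetMs` at a stretch.
async function processInChunks(items, processItem, budgetMs = 50) {
  let deadline = Date.now() + budgetMs;
  const results = [];
  for (const item of items) {
    results.push(processItem(item));
    if (Date.now() > deadline) {
      await yieldToMain(); // let rendering and input events run
      deadline = Date.now() + budgetMs;
    }
  }
  return results;
}
```

The same loop run synchronously would be one long task; chunked like this, it becomes many short tasks interleaved with the browser's own work.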
What are the ways that workers play into concurrency today? If somebody's kind of getting into workers, starting to offload some basic functionality to them, how does that play into this topic? Matteus: Workers are basically our alternative for parallelism in the browser. They're the way we have of doing different threads on the front-end. Basically, a worker is an isolated JavaScript scope running on a separate thread. And they have a few special requirements; workers behave in a particular way. The first thing is that they can only do message passing. You don't share variables, you don't share code with workers. They can send and react to messages that they get. That's one thing. The other thing is that they don't have access to the DOM. To most of the DOM APIs, actually. So that can be another limiting thing. On the other hand, though, they're really good because if you have a long task that you can manage to offload to a worker, for example some huge math or some heavy computation, something that's CPU-bound, then that's good. So, it's usually balancing the overhead of moving code to a worker, where you get these issues I mentioned, versus what you're gaining by offloading the main thread. I guess, to sum up, that's the role that workers play there: the parallelism alternative in the browser. And usually when you're dealing with workers, you end up abstracting your code and your solutions in two ways: either following what's known as the actor model or working with shared memory. Noel: Can you talk a little bit more about shared memory? I feel like a lot of devs may not have delved into that much. They may have done some basic workers that are doing more trivial message passing, but nothing in the shared memory space. How does that work?
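A minimal sketch of the message-passing model just described. The file name `prime.worker.js` and the `nthPrime` task are hypothetical; the key point is that the payload crosses the thread boundary as a structured-clone copy, never as shared state.

```javascript
// A CPU-bound function we might want off the main thread.
function nthPrime(n) {
  const primes = [];
  for (let candidate = 2; primes.length < n; candidate++) {
    if (primes.every((p) => candidate % p !== 0)) primes.push(candidate);
  }
  return primes[n - 1];
}

// Main-thread side (browser only). prime.worker.js would contain:
//   onmessage = (e) => postMessage(nthPrime(e.data));
if (typeof Worker !== "undefined") {
  const worker = new Worker("prime.worker.js"); // hypothetical file
  worker.onmessage = (e) => console.log("worker says:", e.data);
  worker.postMessage(10000); // the payload is copied (structured clone)
}
```

While the worker crunches primes, the main thread stays free to render and respond to input; the only coupling is the message events.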
Matteus: Yeah, I guess that because of the message-passing nature of workers, we tend to find the actor model clearer. So, I kind of agree with what you said about shared memory. And there's another thing that comes with that. Basically, the web was not built, and I mean the browsers were not built, around the idea of having concurrent access to objects, to variables, or that kind of thing. That's why we don't have a lot of APIs for that in the browser. You asked about shared memory. That's basically one of the traditional approaches we have for concurrency, for accessing things in a concurrent way. It does have an advantage: it cuts off the overhead that you would have with message passing. Because message passing, especially in workers, works via structured cloning. You are basically making copies of things, and this does have an overhead. With shared memory, you get rid of that, which is good. But the point is, because the web was not built around this, if you're doing that in the browser with web workers and stuff, you usually end up having to build your own data structures for enabling that [inaudible 00:10:29] and those things we see in CS, and ways of doing concurrency. This is getting better and better over time. We have SharedArrayBuffer in the browser, which is one data type, and now we also have Atomics. But still, in the end, with SharedArrayBuffers we don't have traditional objects, traditional arrays from JavaScript; we're just handling bytes. I kind of consider this complicated, and it does have a learning curve, I would say. Noel: Doing kind of byte-based memory management doesn't feel very JavaScript-y, for lack of a better term. I feel like those that spend most of their time in the JavaScript world, me included, are kind of used to a lot of the niceties that JavaScript brings us.
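The SharedArrayBuffer-plus-Atomics combination mentioned here can be sketched like this. Note that browsers only expose SharedArrayBuffer on cross-origin-isolated pages; this fragment just shows the "raw bytes" nature of shared memory.

```javascript
// Shared memory is raw bytes, not JavaScript objects: a SharedArrayBuffer
// viewed through a typed array, visible to every thread that holds it.
const shared = new SharedArrayBuffer(4); // 4 bytes = one Int32 slot
const counter = new Int32Array(shared);

// Atomics prevents torn or lost updates when several threads hit the slot.
Atomics.add(counter, 0, 1);
Atomics.add(counter, 0, 1);
console.log(Atomics.load(counter, 0)); // → 2

// In a browser you would hand the buffer to a worker; unlike ordinary
// postMessage payloads, it is shared rather than structured-cloned:
//   worker.postMessage(shared);
```

Everything higher-level (locks, queues, maps) has to be built by hand on top of these bytes, which is the learning curve being described.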
Got to go dust off the computer science fundamentals books when we're having to do byte-level memory manipulation and stuff. Are there any good use cases you've encountered for these workers with shared memory, kind of more concise examples? Like if someone's in that realm, avenues that may be good for them to explore? Matteus: Yeah, sure. So actually, whether you go with shared memory or with the actor model is, to be honest, more specific to your code base and your demands. But for workers and parallelism in general, I've seen some examples in projects I worked on. For example, about six or seven years ago, I was working on this React app where the key of the app was to render a map, a traditional Mapbox map. The tricky thing about it was that we had 100,000 data points plotted on that map. Imagine the whole map of the US with a bunch of data points plotted there, sometimes clustered. And we had to do a lot of operations on the front-end on top of those. So basically, we used a bunch of different stuff, including workers and protocol buffers, to optimize the way they were downloaded. We also used some Redux-Saga utilities on the front end; back then we were using Redux and Redux-Saga. So this was one example. Another example is not workers but WebAssembly, because people usually like to see workers and WebAssembly together. So, a few years ago, I was working on a side project with a friend of mine, just for fun. There's this Game Boy Advance game called Noah. It's not one of the most popular ones on the Game Boy. But he really liked that game; I didn't know it. And he came up with this crazy project idea of building a level editor for that game in React. He was really focused on reverse engineering back then.
So I left all the reverse engineering fun for him, especially because I didn't know a lot of it. And basically, the WebAssembly use over there was that the algorithm for decompressing the game ROM was written in C. It was completely out of consideration for us to rewrite that, first because we didn't know it well, and second because it would be really inefficient to rewrite it in JavaScript. So, back then we ended up having the C code targeted to WebAssembly and running in a separate thread with a worker to basically decompress the game ROM so that we could render the tile maps of the game in this React app. So again, this was another example. But yeah, I would say the general thing is: when you have to crunch numbers, do huge data processing, and you measure and see that the trade-off against the overheads of workers is a good one, then workers might be an answer. Noel: I think that's a good example. Thank you for bringing that in. Are there particular cases where WebAssembly in particular may be a good fit for front-end devs? And kind of more broadly, how does that fit into the React ecosystem? In my head, React is so far removed from the kind of work you'd be handing off to a worker or having WebAssembly do for us, that they're kind of different realms. But that interface layer, the way in which they communicate, I think, is the tricky part. Are there particular difficulties there? Or I guess, is that the subject of most of your talk, trying to help developers figure out how to bridge those two things? Matteus: Usually, those come from some problems. Another really interesting case for workers: you know Jason Miller, the guy behind Preact. About six or seven years ago, he came up with... I don't recall the name of the library, but back then it was a library that was basically offloading reducers to worker threads. And this was one of the ideas that, when I first saw it, I didn't quite like...
I was like, why would I want to offload a reducer to a worker thread? But then, again, five, six years ago, I got to a project that was basically a huge piece of legacy code, and we had a lot of data processing happening in the reducers. It wasn't supposed to be there, but because it was a huge legacy codebase, we couldn't just get rid of it and start from scratch. So we had to do something to optimize it. And it turned out that offloading those heavy CPU-bound reducers to worker threads was one thing that worked for us. So I guess it's kind of like when you see the opportunity, you do a little bit of discovery and you see if that's a fit or not. And the second thing you mentioned, bridging those. I still kind of feel, especially when it comes to Wasm, that for a lot of us, React and WebAssembly are just two completely different realms that don't usually talk to each other. To be honest, from my own experience, that one I mentioned, the side project rendering the game, was one of the few cases I have had. But I've seen a lot of experimentation out there, and I'm really interested in seeing what comes out of it. For example, I saw a couple of folks, I think it was last year, who were basically porting the reconciliation algorithm from React into Rust and targeting that to WebAssembly. Because nowadays, everyone is rewriting front-end stuff in Rust, so why not? That was a really, really interesting case, to be honest. But I guess that's because we're still missing a lot of things in WebAssembly. For example, we still don't have proper garbage collection. I mean, there are plans to bring it, but we still don't have it. We still don't have ES module integration, exception handling, and a couple of different things. A lot of things in WebAssembly are still promises for the future.
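The reducer-offloading idea from this exchange can be sketched roughly as follows. The action shape, the `reducer.worker.js` file name, and the `heavyReducer` logic are all hypothetical; the point is that a pure reducer can run on any thread and post the next state back.

```javascript
// A pure, CPU-heavy reducer; the reduce() call stands in for expensive
// data crunching that would otherwise block the main thread.
function heavyReducer(state, action) {
  switch (action.type) {
    case "aggregate":
      return { ...state, total: action.rows.reduce((sum, r) => sum + r, 0) };
    default:
      return state;
  }
}

// Main-thread side (browser only). reducer.worker.js would contain:
//   onmessage = (e) => postMessage(heavyReducer(e.data.state, e.data.action));
if (typeof Worker !== "undefined") {
  const worker = new Worker("reducer.worker.js"); // hypothetical file
  worker.onmessage = (e) => {
    // e.data is the next state; hand it to your store here.
  };
  worker.postMessage({ state: {}, action: { type: "aggregate", rows: [1, 2, 3] } });
}
```

Because reducers are pure functions of (state, action), they are unusually easy to move across a message-passing boundary compared with code that touches the DOM.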
That's why I think that when it comes to Wasm, it feels like those are different realms: we still need a few things to happen in Wasm before we can see more of them combined. And that's why, for example, nowadays you'll see a lot of benchmarks basically proving that when you have to interoperate a lot with JavaScript, like when you have to do a lot of DOM work, WebAssembly becomes really, really slow. Sometimes it's even slower than libraries like React or those with a high level of abstraction. So, I guess that's another example. And last but not least, there's the fact that most front-enders aren't used to writing Rust or C++ or those languages that are compiled. Noel: There's a lot of optimization work that one needs to kind of manually do, right? The lower-level the code you start writing. Whereas if you can lean on React more, you can assume it's going to do the right things. There are people who have spent a lot of time making sure stuff happens in as optimal a way as possible. So I think that's intuitive to some degree. But yeah, it may not be what most people arrive at right away. I think that's a good segue into some of the optimization work that React is doing. Or maybe optimization work is too broad a term, but the work React is doing to ensure that rendering happens as efficiently as possible. And I think that's kind of more in the realm of schedulers, maybe the area that we haven't talked about as much here. Is that accurate? Matteus: Yeah, exactly. Basically, what we did so far was stitch together the first part of the talk, which is: okay, this is parallelism, this is when parallelism is good, this is when parallelism is bad, those are the drawbacks. So now we can look at: now that we know parallelism, what is concurrency all about? Because when it comes to concurrency, you don't have separate CPU cores.
You have a single thread and you just quickly switch between the different tasks so that they feel concurrent. And scheduling itself is basically concurrency, but you add a piece of software called the scheduler that is responsible for assigning different priorities to different tasks. So, that's the role that this internal scheduler package plays in React. If I had to summarize Concurrent React in only one word, it would pretty much be scheduling. Noel: Got it, got it. So for a front-end dev who hasn't considered this at all, or hasn't gone down into the internals of React and explored there and figured out what's going on, can we talk a little bit about what React's concerns are in regard to concurrency, like what the priorities are, how it decides when to render what, and just kind of talk through that a little bit? Matteus: That's one thing I got to discuss with Dan Abramov after my session, because I jumped straight in to ask for feedback. In the session, I touch on some of the internals, like priority levels, render lanes, and some really interesting abstractions. But one of the amazing things, in my opinion, that React does is to basically abstract the developer away from those complex, yet very interesting, concepts that are underneath React, and to give you just simple and powerful APIs. So basically, he was like, "Oh, we're trying our best not to have developers having to learn all of those things, and then suddenly you're here showing some of the internals, and they sound a bit complicated." So he basically said there should be a warning at the beginning of the session. Like the heads-up that he also writes in some of his blog posts: you don't need this to write React, but if you want to check it out, it's interesting.
So basically, you mentioned priorities and stuff. React does have really, really interesting abstractions internally. The first, I would say, is the heuristics it has for deciding when to yield execution back to the main thread. One of the things that most caught my attention when I was going through the source is that they have an interval of five milliseconds. When I first saw that I was like, hmm, this sounds like a magic number. Where is this coming from? And basically, it's one way they have of ensuring that animations aren't blocked, even on 120 FPS devices. So, that's one of the interesting things. Another interesting thing is the different priority levels. React internally has six, I think it's six, yes, levels ranging from, for example, "this should run immediately" to "this can run whenever we have some spare power," that kind of stuff, and those have different timeouts and things. Understanding those internals, to be honest, I think is really interesting, and I do spend some of my slides going through that. But what's even more interesting is the next section. Because then I was like, okay, we saw those different levels of priorities, we saw render lanes, which are a bit of a complicated concept they have internally. But what can we do? I mean, we are not building schedulers. I myself don't work on a daily basis interacting with those concepts, and most of us front-end engineers don't either. So how can we benefit from that? And that's when I started exploring some other use cases, the results of those internal abstractions. Noel: Yeah, yeah. So let's focus our time there because yeah, I think you're right. It's good to know that there is an underlying priority-based scheduling system going on. But like you said, maybe it's less impactful day to day to an engineer who's writing this code. So what are we doing in React, typically?
Like in a Hello World app with maybe a couple of input boxes and a submit button to a server that gives us feedback. In a very basic flow, how are those priority levels playing into that? And what might devs be doing that would impact those or leverage them in a useful way? Matteus: Yeah, so one of the first things I approach is transitions. I mean, we had something similar to transitions in previous versions of React, but in the way they are now, they came out with React 18. The first example I give: I saw a lot of blog posts out there, and even sessions, with examples like, "Hey, I'm running this really, really heavy algorithm inside my render function, like some algorithm to crack a password or some algorithm to find prime numbers." Those things are usually used for benchmarking, and they're great for benchmarking, but the point is that when we front-end engineers see those, we don't see any practical application, because it's like, I'm not trying to find prime numbers inside the render of a component, so how is that useful? We tend to forget about the things we actually do. It's normal that we have apps that handle a large set of data, for example. It's usual that we have apps where we have to render a lot of things on a canvas element, that kind of thing. Those are what I consider to be practical applications. So one of the examples I give is a dashboard that renders the number of visitors and website guests every day, and you can filter that by dates, and that's kind of a heavy operation because it's a huge set of data. And then I show how to optimize that using the useTransition hook, basically wrapping the update with the startTransition function you get from useTransition. So this would be one practical example. Another one: we usually discuss, okay, hey, we have this component and it's re-rendering a lot and it shouldn't be like that. Why is it re-rendering?
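A rough sketch of that dashboard example, under stated assumptions: the `visits` data shape and the `filterByDate` helper are invented for illustration, and the component is shown in outline only. The key idea is that the urgent input update runs immediately, while the expensive filtering is marked as a transition that React may interrupt.

```javascript
// Hypothetical heavy filter over a large dataset.
function filterByDate(visits, date) {
  return visits.filter((v) => v.date === date);
}

/* React 18 component sketch:

import { useState, useTransition } from "react";

function VisitorsDashboard({ visits }) {
  const [date, setDate] = useState("2023-01-01");
  const [rows, setRows] = useState(visits);
  const [isPending, startTransition] = useTransition();

  function onDateChange(nextDate) {
    setDate(nextDate);              // urgent: keep the date input responsive
    startTransition(() => {         // non-urgent: React may interrupt this
      setRows(filterByDate(visits, nextDate));
    });
  }

  // render the input, a spinner while isPending, and the filtered rows
}
*/
```

Without the transition, both state updates share the same urgency and typing into the date input stutters behind the filtering work.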
So, the first things we think are, okay, maybe I'm passing down some props that shouldn't be there, or maybe I forgot useMemo or useCallback or some of those memoization hooks, that kind of thing. But in React 18, for example, we have a hook called useSyncExternalStore. It's a huge name, by the way. When this hook came out, it was mostly sold as a hook for library maintainers. So, we saw a lot of state management libraries using it; even Redux, starting with version 8, adopted that hook. But the point is, again, because those are complex cases, we kind of forget to ask, okay, how could I use that? One of the examples I show is using it to create your own selectors over parts of this data. And then I show an example using React Router, where we have a component that's being re-rendered because one of the properties changed, and it wasn't supposed to be re-rendered. So, how can we optimize that with useSyncExternalStore? To sum up, it's seeing what those benchmarks, those blog [inaudible 00:28:23], and those library maintainers are doing, and trying to fit that into what we are doing, because it does apply all the same. Noel: Yeah, so I think all of the examples you just brought up concern themselves with using the right hook to extract data or pass it around between components. Is that where engineers should be focusing? Is that what we should be thinking about, ensuring that we're using the correct hooks or abstractions so as to not be overly syncing data or passing it around to components that don't need it? Matteus: Just like in the very beginning, when we discussed what Concurrent React was all about and how it's kind of an umbrella of different APIs and different patterns, I would say this could be one of the things.
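A minimal sketch of that selector idea: a tiny hand-rolled external store (the `createStore` helper is hypothetical) exposing the subscribe/getSnapshot contract that useSyncExternalStore expects. Because React compares snapshots with Object.is, selecting just one field means the component re-renders only when that field changes.

```javascript
// A minimal external store with the subscribe/getSnapshot contract.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial };
      listeners.forEach((listener) => listener());
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe
    },
  };
}

const routerStore = createStore({ pathname: "/", search: "" });

/* In a React 18 component, select only the slice you care about:

import { useSyncExternalStore } from "react";

function CurrentPath() {
  const pathname = useSyncExternalStore(
    routerStore.subscribe,
    () => routerStore.getState().pathname, // snapshot = just this field
  );
  return pathname; // re-renders only when pathname changes
}
*/
```

If the snapshot returned the whole state object instead, every `setState` call would produce a new object and force a re-render, which is the React Router-style problem being described.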
But I think that, in my opinion, for us in our daily jobs, it's also a lot about thinking about how we can split high and low priority updates: when something makes sense to be immediate and when something can be deferred, when some value can be deferred, that kind of stuff. That's why I think it's going to be more and more about scheduling. Not only internally, but also for us as users of the library to think about when we want something to run and what kind of priority makes sense. Noel: Yeah, yeah. I think that makes a lot of sense to me. I think hooks are a pretty powerful abstraction in general. So I'm curious if you think that most of these scheduling concerns are typically going to be interfaced with via hooks of some kind. Would you say that's accurate? Matteus: I would say that it's coming not only in hooks, but in different ways. One example is the Offscreen component. It's going to be basically a declarative way for you to assign offscreen priority to a component by wrapping it with this component. So, I would say that those scheduling instructions are going to come from hooks, but also in the form of components. And they're even coming to the web itself. In the past couple of years, one of my favorite proposals for the web has been the scheduling API. It's kind of an umbrella for a lot of different APIs. Basically, it's going to be a unified way, integrated into the JavaScript event loop, to allow us developers to yield execution, to delay execution, to play with different priorities, to abort tasks. Because the thing is, if the web had a scheduler itself, if the browsers had it, probably React wouldn't have to do a lot of powerful engineering work under the hood to have its own scheduler.
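The web proposal mentioned here is usually referred to as the Prioritized Task Scheduling API (`scheduler.postTask`), currently shipping in Chromium-based browsers. A hedged sketch with a plain setTimeout fallback for environments that lack it; the wrapper name `postTask` is our own:

```javascript
// Post a task with a priority hint where the API exists; otherwise just
// defer off the current task with setTimeout (no real prioritization).
function postTask(callback, { priority = "user-visible" } = {}) {
  if (typeof scheduler !== "undefined" && scheduler.postTask) {
    return scheduler.postTask(callback, { priority });
  }
  return new Promise((resolve) => setTimeout(() => resolve(callback()), 0));
}

// The spec defines three priorities: "user-blocking", "user-visible",
// and "background".
postTask(() => console.log("analytics flushed"), { priority: "background" });
```

This is exactly the kind of primitive React's internal scheduler has to emulate today with timers and MessageChannel tricks.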
And another thing: if React has its own scheduler, and then let's say Solid has its own scheduler, then different frameworks each have their own schedulers, and we would probably have them fighting over resources. So overall, I would say that the web is kind of lacking unified scheduling primitives, and we have a bunch of proposals covering different aspects of it. We even have people using it in production already. So it's going to come in the form of hooks, in components, but also as more of the web standards. Noel: Cool. Yeah. So you mentioned some declarative stuff, like the Offscreen component and the scheduling API. So, is what you're compelling people writing front-end software to do just to be aware of these things and go seek them out? Are there any particular channels you'd implore them to keep an eye on or be listening to? Matteus: Definitely. One of the things that I think is amazing, one of the most incredible education efforts from React in my opinion, is that they now have the open discussions repo. That's a great source for you to find out about how React is doing scheduling, and not only scheduling, but anything. All of those more advanced topics like hydration, server components, everything we hear a lot about everywhere these days. You see a lot of those discussions happening over there. And that's amazing, because in the past you would have to, for example, try to dig that out of pull requests or open issues, and follow different discussions across different repos. So I think that nowadays the access to this kind of information is way more democratic, and I like that. Another thing is to keep an eye on the proposals for the web. For example, the scheduling APIs. Keep an eye on what's happening to ECMAScript, what's happening to the browsers, what kind of APIs they're trying to bring. And keep an eye on the engineering blogs of people experimenting with that.
Facebook has blog posts about how they are using this scheduling API for the web. Airbnb also has really interesting cases about that. So yeah, I really see that we have good momentum here, because this kind of information is way more spread out. Noel: Yeah, I think there's a lot out there. It can be overwhelming, but it's always good to have a few resources to point devs to. I think we're closing the loop here pretty well. Is there anything else you want to implore developers to check out, or anything else in this broader concurrency discussion that you think is worth mentioning? Matteus: One thing, and that's kind of feedback I've been getting recently now that I'm talking a lot more about the internals of React: usually, people feel a little bit of the JavaScript fatigue. Because they're like, oh, that's a lot of things about parallelization and concurrency. I wasn't even aware we had those internals in the browser or those internals in React. So, I would say that the most important thing here is first not to freak out because you're not familiar with one or another of those concepts. You can actually write really, really good front-end code without ever touching these. I guess that's the first piece of advice. And the second one is: okay, you can write amazing code and build amazing apps without these, but if you are willing to explore them, it's like you're gaining extra powers. First because it's kind of a way for you to see what's coming down the line for React and for the web, and you can prepare for that. So if you're at your company and you have some library, some internal SDK, that kind of stuff, you can start preparing for what's coming in the future. That's one thing. Sometimes, understanding those internals also helps us come up with our own abstractions in our codebases, and that's amazing.
For example, Google Maps, they do have, or at least they used to have, their own scheduler. So it was not React, it was not a framework, it was just a scheduler built for the app itself. So that's one example. And in my session, I also try to provide other examples where understanding those internals might help you come up with your own abstractions, and that would be helpful. But I would say no one should feel the fatigue of not being familiar with those; they're not essential. Noel: Yeah. We'll get links in the show notes to your talk and some of the resources, the blogs that you mentioned, like the Facebook and Airbnb engineering blogs. Is there anything else you want to point listeners to specifically, like anywhere they can keep track of what you're working on, or places you'd encourage them to check out? Matteus: So basically, anyone who's willing to explore more, you can find me everywhere at @ythecombinator. It's going to be in the description. So Twitter, LinkedIn, everywhere, I'm ythecombinator. I usually post all my sessions there, not only the recordings, but also the slides and some references. I'm usually talking about performance, internals of tools, et cetera. So for people who are willing to explore more, I'm always open to discussions. And yeah, we do have a lot of resources online these days, so that's amazing. Noel: Awesome. Yeah, we'll get links to your relevant profiles in the show notes as well, so people can find you. Cool. Well, thank you so much for coming on and chatting with me, Matteus. I appreciate it a lot, and I think this was an awesome little chat. Matteus: Thanks for having me, Noel. I particularly love talking about these topics, that kind of overlap of CS, React, and other crazy front-end stuff. So for me, it's always a pleasure to be discussing this. Noel: Of course, of course. Yeah. We'll catch you soon. Matteus: Cool, thank you. Thanks for having me.