Ben: Hello and welcome to PodRocket. Today I'm here with Jason Laster, who is the founder and CEO of Replay, and Mark Erikson, who's a senior frontend engineer at Replay and also a former PodRocket guest, who you may remember from a previous conversation about Redux. How are you guys doing today? Jason Laster: Well. Mark Erikson: Doing pretty good. Ben: Well, I'm really excited for this episode. Today I'm hoping to learn more about Replay, and I hear we also have perhaps a demo, which will be relevant to the YouTube version of this podcast. We can try to narrate it for the audio version and see if that works, but we'll put a link to the YouTube for the people that are listening on audio as well. Yeah, maybe we could kick things off. Tell me about Replay. What is it? What does it do? Jason Laster: Yeah. So Replay is a time travel debugger. It makes it easy to record your website, or a Node script now as well, and share it with anyone on your team so that they can debug it afterwards with familiar browser dev tools. Think the elements panel, console, debugger, network monitor. Ben: Got it. So it's primarily for web apps, correct? Jason Laster: Yes. Ben: And so if I have my web app running locally and I have a bug, is the idea that I can go into Replay, record it, and then have an artifact that I send to someone else on my team who can reproduce that exact situation? Jason Laster: We're a lot like Loom, but with dev tools. And we think of the world in terms of, everyone's got Sentry running, they've got LogRocket, we use LogRocket. You see something went wrong and then the question is, how do I reproduce it? So if you can go into Replay and reproduce it once, you can share that replay with anyone on your team on Slack, post the issue on Jira, and then devs can begin debugging without ever having to reproduce the problem. Ben: Got it.
And I think it might be helpful to frame this for our audience, because some, though not all, of our audience is familiar with LogRocket. As a quick understanding of how LogRocket works: essentially you have a web application, it could be running locally or in production. LogRocket is capturing a replay of what's going on in the DOM, so we look at how the HTML and CSS change over time, we capture that replay, we capture console logs, we capture a log of all the network requests that took place, and a bunch of performance information. Ben: And we show all that to you. But what LogRocket doesn't do, and I think this is what we're going to get into with Replay, is actually replay the same JavaScript code. So the thing you couldn't do in LogRocket is look and see, what was this variable at this point in time? But with Replay I think that's something you do do, so maybe tell me a bit more about specifically what you could do with Replay. And just to note, LogRocket is not competitive with Replay; we're primarily for production monitoring, whereas Replay allows you to do more because it sounds like it's primarily for development processes and QA. Jason Laster: Yeah. Where LogRocket's running as a JavaScript library in your app in production, gathering these replays, which is really, really useful to see what your users are doing, understand where your product is doing well, not doing well. And of course, when there's an exception, you can see that something went wrong, go in and see what the user did before that. On the Replay side, we're not a JavaScript library running in the page; we couldn't do what we're doing if we were. We're actually the browser. So we have a fork of Firefox, a fork of Chrome, we have a fork of Node. At the end of the day, these runtimes are recording what's going in and out of the runtime. So we capture all of the library calls that the browser makes. If the browser wants to write to a file, we capture that.
If it opens a socket, we capture that. All the necessary things needed to spin up that browser after the fact and replay it. Jason Laster: And gosh, if a user in Japan on a really slow Windows computer is having an issue on some website and she records it with Replay, then a developer in San Francisco can view that replay and experience that exact session exactly as it ran in Japan. So the killer feature for Replay is, you can open up the replay, find a file just like in VS Code, command P, search for a file, that kind of thing. Find a line of code that you want to add a print statement to, add a print statement, and Replay is going to run through the recording really, really, really quickly. And then show all those logs in the console as if you'd always had that print statement there in the first place. Ben: So taking a step back, JavaScript code is running, right? And if you have a line of code, variable A equals one plus one, that's always the same. But then you have these sources of non-determinism: if you have a call to, what is the current time? And you do something based on that, or you have a call to some third-party API. There are things that are non-deterministic, so tell me a bit more about your approach to capturing those sources of non-determinism and making sure the same thing happens in the replay that it did originally. Jason Laster: Yeah. That's such a great question. So it all starts with an HTML file. So the HTML file has some script tags, it has some image tags, it has some link tags. The browser gets the HTML file and begins fetching those assets. It's like, okay, I got to talk to the server. I've got to download this JavaScript file. Okay, the JavaScript file comes in, it has to parse that file, and it has to execute that file. And as it executes it sees one plus one, it has to evaluate that expression, that kind of thing.
Jason Laster: So that process of recording the socket connections and anything else that has to happen to load the JavaScript file and execute it, that's Replay. So when we open up the replay later in the cloud and we're rerunning, the browser thinks it's talking to the internet to download that JavaScript file, but it's just reading from the recording. Actually executing the code is deterministic; every time you see a JavaScript file with one plus one, it's going to do the same thing, so that's not a big problem. But if you get to that Date.now call, then that's going to resolve down to a library call to actually get the current time from the computer. And that's where Replay comes in again. And that's something that's going to be in the recording. The really nice thing is print statements: all we have to do is replay and then you can get those logs. Ben: I feel like the number of ways that JavaScript can be non-deterministic is increasing a lot. Browsers are adding more and more functionality to interact with the system, whether it's getting a file from the system, or accessing the camera, or getting performance data, or crash... There are more and more of these non-deterministic APIs. So are you having to constantly stay on top of all the new ways to have non-determinism, or is there a way that you can go one step lower and capture all those without constantly adding new support for new features? Jason Laster: So these are the questions that browser engineers ask us. For instance, we were talking to the WebGL team at Chromium and it just hurt their head. Because you're thinking about how complicated WebGL is, how complicated the canvas is, how complicated the DOM is, how complicated JavaScript is with all these APIs. And all of these components resolve down to the same 400 library calls. In fact, recording Firefox and Chromium is kind of the same thing when you're at that level.
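The record-once, replay-forever idea Jason describes can be sketched in miniature. This is purely illustrative, not Replay's actual implementation (the `makeClock` helper and its names are hypothetical): during recording, the result of a non-deterministic call like `Date.now` is captured into a log; during replay, the same values are read back verbatim, in order.

```javascript
// Rough sketch of record/replay for one non-deterministic call.
function makeClock(mode, log = []) {
  let cursor = 0;
  return {
    log,
    now() {
      if (mode === "record") {
        const t = Date.now(); // the real library call
        log.push(t);          // captured into the recording
        return t;
      }
      return log[cursor++];   // replay: read from the recording
    },
  };
}

// Record a session: every Date.now() result lands in the log.
const recorder = makeClock("record");
const first = recorder.now();
recorder.now();

// Replay the session: identical values come back in the same order,
// no matter when or where the replay actually runs.
const replayer = makeClock("replay", recorder.log);
console.log(replayer.now() === first); // true
```

Replay does this at the syscall/library-call boundary rather than in JavaScript, but the principle is the same: only the non-deterministic inputs need to be stored.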
There's millions of lines of C++ to implement the browser, but we're not thinking about it at that level. We're thinking about it at the bottom of the ocean: what is the browser doing when it's actually talking to the computer? So yeah, there are tons of APIs and there are some that we don't support right now; we don't support video. But when we do get around to video, it's probably going to come back to those same API calls. Ben: So you kind of sit below, at the deepest level of where the browser talks to the system and asks for system information. And by monitoring what's happening in that communication layer between the browser and the system, you don't even care what the communication is, as long as you monitor it and then make sure the same thing happens in the replay that it did during the recording. Jason Laster: A similar question we get is, God, if you forked these three massive projects, the rebase must be hell. And the nice thing is, because we're at the bottom of the ocean, most of the commits just flow right by and we never see them. I think one out of a hundred or one out of a thousand commits has a conflict. Ben: I'm curious, how data intensive is a recording? If I record five minutes of me interacting with my web app, how much data does that capture? Jason Laster: So most of compute is deterministic. A React app is mostly just going through the same render loop time and time again. And that's kind of what React figured out: you give it the same props, you're going to get the same UI. A very small percent of compute is non-deterministic, where you actually have to do a Date.now. So these replays are tiny, like 10, 20, 50 megabytes. But if you play through the recording and look at the entire execution trace, it's massive. The joke for us is, you can replay through a recording and make a video, but the video, in terms of megabytes, is often bigger than the original replay. Ben: Yeah.
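The "same props, same UI" point is what makes the recordings so small: a pure render function is deterministic, so its output never needs to be stored, it can always be recomputed during replay. A minimal sketch (the `Counter` function is a made-up stand-in for a React component):

```javascript
// A pure render function: output depends only on its props.
function Counter(props) {
  return `<button>Count: ${props.count}</button>`;
}

// Identical props always produce identical output, so deterministic
// compute like this doesn't need to be captured in a recording.
const a = Counter({ count: 3 });
const b = Counter({ count: 3 });
console.log(a === b); // true
```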
Which is the same with LogRocket and our replay technique. Because we're just capturing the DOM and how the DOM changed over time, that's an order of magnitude lighter weight than an MP4 video file. Jason Laster: That's a great analogy. Yeah. Ben: You mentioned something at the beginning about playing a replay faster than it occurred in real time. And is the correct way to think about that: the amount of time a real session took was how quickly the user interacted with it, but you can play back those user interactions faster than they occurred in real time? The JavaScript code certainly executes very fast, and so you could replay way faster because you just make the series of user inputs and non-deterministic API calls happen faster than they did. Jason Laster: So it'd be pretty cool if you could add a print statement and then see all those console logs in real time, but if you actually want to be debugging and you have to wait two minutes to get those logs back, that's not a fun experience. It's a cool toy, but it's not the kind of debugging environment you want to be in. So we have to be able to evaluate maybe a hundred expressions, if the print statement was called a hundred times, in under, let's say, a second or half a second. So that's just another thing that we have to figure out, to just go through the recording really, really quickly. Mark Erikson: And I can actually give an example of where this comes into play. There's a new feature that we're, I think ideally, releasing in experimental today. So in Replay, when you open up one of these recordings and you go to the dev tools debugger and you pause the app and you open up one of the source files, as you hover over any one of the lines in the standard dev tools debugger, it'll show you how many times that line got executed over the course of the recording.
And that right there is useful because it starts to tell you how many print statements you're going to have, or what lines were the hottest, or did this line of code actually run at all? Right now in our production version, if you hover over one of those lines, you'll see a loading indicator for a couple seconds as it queries the back end for the information of, how many times did this one line get hit? Mark Erikson: Well, one of our engineers, Josh, has been implementing some updates to both the back end and the UI where, as soon as you click on that source file and you bring up the contents of the source file in your debugger, it queries the back end for all the hit counts for every line in that file in one shot. And it can just calculate that real fast, bring back the entire hit count result in a second or two. The immediate benefit you're going to see as soon as this feature is out is, now as I move my mouse back and forth over those lines real fast, the little tooltip pops up immediately and says, one hit, five hits, a thousand hits. But this is actually going to unlock some very, very cool features. What if we can highlight the lines that got executed, or gray out the lines that never ran at all? What if we can start showing some heat maps, showing which lines were the most computationally expensive? I'm actually really, really excited about this feature and I haven't even had a chance to try it yet. Ben: Got it. Because Chrome has a code coverage tool, does this leverage Chrome's built-in code coverage or have you rolled your own on top of the replay? Jason Laster: Oh, we've rolled our own. Ben: Very cool. And I'm actually curious, on that note, I think Chrome has some kind of basic replay functionality. And I'm not even as familiar with the state of the art in Firefox or Safari. Do you leverage any of that functionality or have you built everything from scratch? It sounds like Replay does quite a bit more than those basic tools, but just curious to...
Jason Laster: Yeah, you could think of Chrome's replay as browser automation. It's using Puppeteer under the hood. It could be using Playwright. It could be using Cypress. And their definition of replay is, you record some user actions, like I'm going to click and type. And then when I click the button, the browser's going to redo those actions again and again, which is really nice for getting in a tight feedback loop as you're developing or testing a feature. But it's not the same thing as being able to share a replay with a developer and then seeing it exactly as it ran, because we've recorded it. Ben: Right. And if the approach is simply just to replay the user actions, any other source of non-determinism, be it a call to Date.now or an API call, it's not going to be deterministic; the replay could be different than what you originally did. Yeah. Jason Laster: If I'm recording going into an admin panel and deleting a user, well, that automation is... There's no more user, there's no more user to delete. But in Replay I can view that time and time again. Ben: Awesome. Well, I think now would be a good time to jump into the demo. So as I mentioned before, we are going to have this on our YouTube. We'll try to describe what we're seeing, I guess, for the audio folks. Yeah, should we jump in? Jason Laster: So right before I jumped on this call, I was thinking about this bug that I saw in LogRocket six months ago, when I was trying to filter the Redux actions and find something that was going wrong. And at the time, when I would click on this timestamp that you can see right here, and I'd been filtering, the timestamp wouldn't jump to that spot. So we do this thing pretty often, where when we see an issue, we record it with Replay and we see if we can figure it out. And if we can figure it out, we send the replay to somebody on the other side. So we've recorded issues in GitHub, in Facebook and Asana.
And when I saw this issue, I sent it over to Matt, who's the CEO of LogRocket, and honestly has been one of the people who I've reached out to when I had questions, which is amazing. Jason Laster: If you start a dev tools company, I strongly recommend finding other founders who can help you along the way. And Matt's been one of them for me, and I was really excited to share this replay with him. So right before I jumped on this call, I remembered that it was possible to record LogRocket and see what's going on, so I thought, hey, what if we record something and we can look at it in Replay together? So this is the replay I just recorded five minutes ago. You can see that I was doing something similar, where I was looking at the Redux actions in LogRocket, because LogRocket uses React and Redux, and I was clicking on that timestamp. And you can even see over here in the viewer, we capture the click events, navigation, so we can see what's going on too. Jason Laster: The reason we offer this viewer, which is really similar to a session replay tool or a video tool, is, even though we're in a time travel debugger, it's nice to describe what's going on. So I want to be able to click, leave a comment and say, hey, that's interesting. You should look at that too. But the really interesting thing about a replay is being able to switch over to dev tools and see what's going on. So when we're in Replay dev tools, the goal is to be able to do anything you can do in browser dev tools. So of course, we have the console, so we can see what LogRocket logged out in production, or logged out in that replay. You can rewind, fast forward, those kinds of things. Jason Laster: If I play it back to the middle, I can go to the elements panel, we can see any of these components, its attributes, those kinds of things. And we can do the same thing with React too. So I can see there's this component for the network monitor that had all these props.
But what I really want to do, because I knew I was jumping on the call with Mark, was find the Redux calls, those actions that are being dispatched to set up LogRocket, and I think I found that. So I'm going to hide the video here, make this a little bit bigger, go back to the comment, and this should be recognizable. I think what this is, is Redux's setup code that's called every time a thunk is dispatched. Mark Erikson: That looks like the thunk middleware to me. Ben: And just to explain this to the audience, so we're looking here at LogRocket's minified production code, because Jason doesn't have access to our source code, but when you use LogRocket, you obviously can see our minified production code. And just based on your experience probably looking at a lot of minified code, you were able to figure out this looks like where Redux middlewares are being initialized. Jason Laster: What I did was I used our search tool to search for all the calls to dispatch. And then from there, I found one call to dispatch, stepped in, and that took me into the middleware. And the reason I wanted to be in the middleware is, I wanted to see all the dispatches to get a sense of what's happening within LogRocket. Ben: Got it. And I imagine most of the time when people are using Replay, they are using it on a development or staging build of their site, where the code would not be minified; it would probably be a lot easier to know what you're looking at. Right. Mark Erikson: And we've got source maps. Ben: Oh, yeah. And you have source maps, even better. Jason Laster: So the story could be support, QA, even a user in many cases, records the issue. Let's say we were working with LogRocket and you'd set up Replay internally. If somebody in support records an issue on LogRocket, then when the developer's looking at it, they can see the replay with source maps, because you've uploaded the source maps to Replay as well. So it's kind of a Sentry use case as well.
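The thunk middleware Mark recognizes is small enough to sketch in full. The following is roughly what the redux-thunk middleware does, not LogRocket's actual code; the tiny hand-rolled store is only there to keep the example self-contained (a stand-in for Redux's `createStore` plus `applyMiddleware`):

```javascript
// If the dispatched "action" is a function, run it with
// dispatch/getState instead of passing it to the reducers.
const thunkMiddleware =
  ({ dispatch, getState }) =>
  (next) =>
  (action) => {
    if (typeof action === "function") {
      return action(dispatch, getState); // run the thunk
    }
    return next(action); // plain object action: pass through
  };

// Minimal store so the sketch runs on its own.
function createTinyStore(reducer, middleware) {
  let state = reducer(undefined, { type: "@@init" });
  const baseDispatch = (action) => {
    state = reducer(state, action);
    return action;
  };
  const api = {
    getState: () => state,
    dispatch: (action) => dispatch(action), // late-bound wrapper
  };
  const dispatch = middleware(api)(baseDispatch);
  return { getState: api.getState, dispatch };
}

// Usage: plain actions go straight through; a thunk can dispatch
// several actions itself.
const counter = (state = 0, action) =>
  action.type === "increment" ? state + 1 : state;
const store = createTinyStore(counter, thunkMiddleware);
store.dispatch({ type: "increment" });
store.dispatch((dispatch) => {
  dispatch({ type: "increment" });
  dispatch({ type: "increment" });
});
console.log(store.getState()); // 3
```

Because every dispatch, thunk or not, funnels through this one function, stepping into it in a debugger shows the full stream of actions, which is exactly why Jason landed there.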
Ben: Got it, makes sense. Mark Erikson: And I can give a personal example of an actual bug that I fixed for Redux stuff with Replay a few weeks ago. So I was working on the release of Redux Toolkit 1.8 and we were releasing a new feature called a listener middleware, which has some similarity to the Redux-Saga or Redux-Observable middleware, that lets you run logic in response to dispatched actions. And we'd had that in alpha and beta for a few months. And I was literally getting ready to actually release RTK 1.8. And all of our tests had been passing the whole time. And I wanted to just do a quick check against some kind of an actual application before I hit the button and pushed this code out. And boy, it's a good thing I did. So I did a local publish of the RTK library, created an example app, and actually copy-pasted one of the examples out of our code base. And I installed the local build of RTK into that project. Mark Erikson: It was just a basic counter example, and I went to click the increment button and nothing happened. And I kept clicking and nothing was happening. I was like, wait a minute, this is not good. And I opened up the normal browser dev tools and was trying to stick some breakpoints in there, and there was a loop: the action has been processed by the reducers, and now we want to loop over all the listener entries and see if any of them are actually supposed to process and handle this action and run more logic. And for some reason, it was basically stepping over that loop every time. Really confusing. So I went and I recorded a replay of this, and I actually pulled Jason in on it because I was able to send him the link for the same replay. Mark Erikson: And we're going through it, and again, I was looking at the source-mapped view of the code and it was a for-of loop that was looping over the values in a JavaScript Map object where we keep the listener entries. And I could see it skipping right past the loop.
And I was able to switch over to the non-source-mapped view and look at the transpiled code that was actually running, and see that there was actually a bug in the compilation step itself that was causing a bad loop to be emitted. Mark Erikson: And I remembered that I'd actually seen this same issue a year ago on a previous RTK release. There's something about our library build setup, the combination of esbuild and TypeScript, that occasionally spits out weird code if you're doing a for-of over a Map or a Set. And so I was able to identify that, and then the workaround was to convert the values into an array first, and then loop over the array. But I honestly would've been very, very stuck on trying to do that release if I hadn't been able to step into the replay, identify exactly what was going on, and flip back and forth between the transpiled code and the source-mapped code. So even that right there was really valuable for me. Ben: Awesome. Yeah. Very cool. Jason Laster: So we just added this print statement into the thunks and I'm looking at the console and I can see... We have this call, identify user, so we can see who's logged in. It's kind of fun to see what the data retention policies are, so we have data retention in general of one month. But then the set search data retention is also a month too, so we can see there's some granularity there, the active apps there. Oh, this is fun. It looks like the feature flags are also being logged as well, so feel free to redact this section. Jason Laster: But the ability to do conditional recording, in-app demos, live mode, that's kind of fun, I wonder what live mode is. Impact. Are there any in here? SSO is false, so that's not been feature flagged on. RBAC is here, pro beta access. I don't know. This is stuff that's been shipped. I could see this in Chrome dev tools if I opened up the debugger and I was looking for these kinds of things.
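The workaround Mark describes can be sketched directly. The listener-entry names here are invented for illustration; the point is the shape of the fix: instead of a for-of directly over a Map's values (the pattern the esbuild + TypeScript combination occasionally miscompiled), materialize the values into a plain array first and loop over that.

```javascript
// A Map of listener entries, as in RTK's listener middleware.
const listenerEntries = new Map([
  ["a", () => "ran a"],
  ["b", () => "ran b"],
]);

// Before (the shape that was being miscompiled):
// for (const entry of listenerEntries.values()) { entry(); }

// After (the workaround): iterate a plain array instead, which
// transpilers handle with much simpler emitted code.
const results = [];
for (const entry of Array.from(listenerEntries.values())) {
  results.push(entry());
}
console.log(results); // [ 'ran a', 'ran b' ]
```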
But it's just funny to me how in Replay, with the ability to just jump around, you find these things that are shipped all the time. It's just interesting. Ben: I'm curious, is this something either today or in the future, is there a world in which during a replay you could change something? I don't know if there's a use case necessarily for changing a Redux action, but something as simple as changing what a variable is in a replay, and it may or may not break the app in Replay. Jason Laster: Yes. The short answer is yes. So the simple thing that we've already shipped is the ability to pause at a point in time, go into the console, and do something that changes the DOM. So I can find an element and change its background color. The thing we would like to do, let's say in the summer, is open the elements panel, click on an element, and edit its styles directly in the elements panel. You can imagine, as a designer, you can see a button that looks bad at a certain point in time, fix the styles, share with the developer and say, hey, it should really look like this. But the question about changing a value or changing a variable and then seeing what it looks like, that's actually kind of profound. The ability to record a million replays of React, see the common issues, and then begin trying to fix them. Jason Laster: That stuff has only been possible in video games before now. So the way OpenAI got good in the early days, when they wanted to play simple video games, is they got their hands on a couple million recordings of Dota 2, the video game. And they were able to put that into the AI system and then have the AI try to win the game, because some of these games the player had won and some of the games the player had lost. And the AI was able to go to an arbitrary point in time and say, well, what if we did something different? What if we try a different strategy? Jason Laster: And that was how AI got good at video games.
We think there's a similar story for debugging software, where if you can get a million recordings of React, and sometimes there are bugs in there, we can let the computer begin trying different strategies. What if you modify this if statement, or if you change this Redux thunk, or you add this hook here, does that fix the problem? And that's the interesting thing five, 10 years out for Replay. Not necessarily building the time travel debugger for people; that's great, that makes software more approachable. But the data comes from Replay, when you can do dynamic analysis after the fact, and when you can actually arbitrarily debug after the fact as well, at scale. Ben: Yeah. And I'm even thinking, replay my app and change the payload of a network request or a response that came in, or things like that, that just in the normal development workflow would save so much time. And a lot of times it's just hard to get your app in the state you want it to be in. And if you could just do that in the replay, that would make things way easier. Jason Laster: Totally. And a simple one that I've been thinking about recently is [inaudible 00:28:21] tests. You write the test, it passes 95% of the time, but then 5% of the time something happens that's too fast or too slow and you're not sure why. And those are the most difficult things to debug. But if we have a thousand recordings of that test, and let's say a hundred failed and 900 passed, we can look and see, oh, those failing tests, the API call came back too fast. And we could even try to simulate that API call taking a little bit longer and see if the test passes. Because we work at that low level, we can simulate a socket connecting and then the packet coming a little bit slower, and then replay with that slightly modified recording. And if that turns out to fix the test, we can tell the developer, hey, maybe you want to add an await here, because that's what's causing the flakiness. Ben: Yeah. No, that makes a lot of sense.
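The timing race Jason describes has a recognizable shape in test code. This is a hypothetical sketch (the `fetchUser`/`flakyCheck` names are made up, and none of this is Replay's machinery): a fixed wait races against an async call, so the check only passes when the response happens to come back fast enough, and an explicit await removes the race.

```javascript
// A stand-in "API call" that resolves after a given delay.
function fetchUser(delayMs) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ name: "Ada" }), delayMs)
  );
}

// Flaky: fire-and-forget plus a fixed wait. Whether `user` got set
// depends entirely on how fast the API happened to respond.
async function flakyCheck(apiDelayMs) {
  let user = null;
  fetchUser(apiDelayMs).then((u) => (user = u));
  await new Promise((r) => setTimeout(r, 25)); // fixed 25ms wait
  return user !== null;
}

// Fixed: explicitly await the response, so timing no longer matters.
async function fixedCheck(apiDelayMs) {
  const user = await fetchUser(apiDelayMs);
  return user !== null;
}
```

Rerunning a recording with the simulated response nudged slower, as Jason suggests, is exactly how you'd distinguish the flaky shape from the fixed one without ever reproducing the failure locally.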
And I'm curious, you mentioned before the concept of having thousands or millions of replays and doing interesting things with that data. One of them is, as you just mentioned, if you have a flaky test, how do you identify the outliers? What's different in the times when the test fails? What are other, maybe future, ideas in terms of more concretely using large amounts of replays to get insights? Jason Laster: Oh, God. And by the way, the biggest caveat here is, we treat privacy incredibly seriously. So replays are private by default. Enterprises who come on, they own their own data. So with all of those caveats aside, the ability to look at a corpus of React recordings is incredibly useful for the React team. It's also incredibly useful for the V8 team that wants to see where the JavaScript engine is fast, slow, et cetera. I can go on and on. The idea of replay as data is really, really powerful. I could be in my text editor, hover on a variable, and in addition to seeing the TypeScript values, I can also see some example values that came from recent replays. And maybe that function was called a hundred times, and only once did it receive a false Boolean and every other time it received true. I could click in and be at that replay, at that point in time when it received the false, and get a much better idea of what's going on. Ben: Yeah. No, it's super interesting. And one question I actually had, which I should have clarified earlier: what is the format of Replay? What we're looking at here is app.replay.io, we're in a web browser. But it sounds like where you actually record the replay session is in a forked version of Chrome or Firefox, so a native application. So is that primarily how it works? You record in the native application, you replay in the browser? Jason Laster: We primarily record in the native app.
So it doesn't matter if you're recording a Jest test in Node, or an Express server in Node, or you're recording LogRocket in Chrome. You click record, or you run Replay Node, and you make the recording. The recording itself is just an array of binary blobs. It literally is the system call inputs and outputs, so not very human readable. We then upload the recording to the cloud. When you want to view the replay, we spin up a Docker container, download the runtime that was used, download the recording, and then run through it. The funny thing is that recording might be 10 megabytes, 20 megabytes; when we're replaying, obviously there's a lot more going on. And then when you're actually viewing the replay, the Replay dev tools is a React app, it's a Redux app. It's just talking over a WebSocket to that back end. Mark Erikson: And that's kind of where I came in. So I joined the team two weeks ago. The Replay client code base is actually entirely open-source, and so I actually had a chance to go poke around it as I was talking to the Replay team about joining initially. And it's a large code base; a lot of it originally started as a fork of the Firefox dev tools. But it really is a large React Redux application. And so as I've been coming in, I've been poking around, learning how things are put together, doing a lot of code cleanup. There's code that's dead, there's code that's using some older-style patterns. But even though I don't necessarily understand all the ins and outs of the code base right now, I've been able to jump in and start doing cleanup, modernizing some of the code base. And it's just a React Redux app; I know how those work. Jason Laster: To plug Mark for a second, if anybody listening to the podcast now wants to work with Mark, come join and help us. We are taking this legacy Redux app that's all JavaScript, converting it to [inaudible 00:33:38], converting it to RTK. RTK landed last week. Every reducer is being updated.
It's a really fun time to get involved. Mark Erikson: Yeah, I've been having a blast getting to do some of the modernization. And also, at the same time, it's a live code base; there's people on the team working on new features right now. I'll move on to some feature stuff down the road. And in fact, actually, Jason and I were just talking: right now we've got the React dev tools integrated directly into the UI of Replay. Not the extension, but actually as a component in the page. And it listens for basically the React-specific data coming out of the app. And we plan to actually have me do the same thing for the Redux dev tools, probably this next quarter. But there's nothing that says we can't do the same thing for the Vue dev tools, or whatever Angular has. This is a starting point, not the finish line. Ben: Yeah, and I imagine it'd be cool to get... I imagine you have some kind of network request inspector, but you could bring in something more like Postman or more advanced. Jason Laster: Postman, Apollo, Relay, anything GraphQL related, it can all be there. If you want to understand your outgoing queries, you could format those as well. Ben: Yeah. A lot of cool possibilities. And on that note, can you preview some of the other big-picture items that are on the roadmap in the next year or so? Jason Laster: A year is tough. We could talk Q2. So Mark mentioned React dev tools; there are going to be some really fun React dev tools features that we're going to ship this quarter. So something that I think about is, if you pause on a React render function, the call stack is meaningless. You've got your render function, then every other frame is just React internals. But what you want to see is, I'm paused in this button component, and the button component is in a list component, the list component is in the header, and the header is in the app. Jason Laster: And we're calling that the React component stack.
It'll be a call stack, but it's going to work with just pure React, so you click on one of the other components and then you're up in that component's render function itself. You can see all of its props, its state, everything. Another thing that's coming in Q2 is CI support. So last week we shipped first-class support for Playwright and Playwright Test. But we're going to add Cypress support; there's a prototype now that you can begin using in an alpha version, and that's going to be first class. Puppeteer, Storybook, Selenium, anything that you use for automation, you should be able to record as well. And then when a test fails in CI, you just have that GitHub comment, and you click it and you can be in the replay. Ben: And I'm curious to learn a bit more about the business side. So I think Mark mentioned earlier that the Replay client side is open-source. Is the recording side open-source, or is that still closed-source for that native application? Jason Laster: So our forks of Firefox, Chromium, and Node are open-source. The driver that we use to record is documented. We would love to work with more runtime folks, like the Python community, the Ruby community, to begin supporting other runtimes as well. The goal is for runtimes to be replayable. And the more that we can do to make that first class, the better. On the back end side, the protocol that we use for dev tools is also documented, for a couple of reasons, but the one that I'm most excited about is, we're doing the simple version of dev tools. Jason Laster: It's a joke that we shipped with print statements. It's like taking a rocket ship and putting it into a car and talking about how much faster you can go. You could take the rocket ship to Mars. And the dev tools protocol that we shipped, I'm so excited to see how people are beginning to use it and where that can go. Because people are going to build way more exciting things than print statements. 
We had to do that because that's where devs are now, that's how people debug now. But if you think about the future of debugging, it's really, really exciting. Ben: And you charge a monthly fee based on a per-developer license, and that basically gets you access to the cloud platform, the ability to share, is that accurate? Jason Laster: Yeah. The way we think about pricing, we charge $75 per dev for the organization plan and $20 per dev for the team plan. And the team plan's nothing, but even on the organization side, if it helps with one bug a month, it's paid for itself. Mark Erikson: Think about how expensive developer time is. If you save yourself an hour or two debugging something, that literally just paid for itself. Jason Laster: And we have users who spend hours a day in Replay, much less an hour a month. Ben: Yeah. And a lot of times our customers justify LogRocket in a similar way, in terms of developer time spent. But then if you can fix a production bug more quickly, that could save you untold amounts of money if it's an important production system. Jason Laster: Yeah. Mean time to resolution, obviously, is huge. Mark Erikson: And the flip side is, it is free for individuals, including the open-source community. As a Redux maintainer, my standard refrain anytime someone files an issue is, please provide me with a GitHub repo or a CodeSandbox that reproduces the issue. And I want to make asking for a replay a standard part of that. And honestly, part of our goal is that that would become a standard thing for the open-source community. Think about how much faster so many bugs could be solved if the user was able to provide a replay as part of the issue, and the maintainer could just step in and go see exactly what happened and not even have to reproduce the problem themselves. Jason Laster: It's incredible for open-source maintainers, because it's even more difficult to reproduce bugs in open-source than it is within a company. 
At least at the company you have the code. But then you go one level up: I've talked to so many people who started programming and then dropped out to do other things, because they loved building, but debugging was hard. So if we can make everything from that first hello-world function that you write easier to understand, our hope is that more people will start programming, stay programming, and really love it. Ben: Well, Jason, Mark, it's been great having you on, and really exciting to learn about Replay. It sounds like you are hiring, is that correct? Jason Laster: Yes. If you're interested in any aspect of Replay, from the browser side, to the back end where we run thousands of Docker containers, to the dev tools where you can work with Mark, let us know. Ben: And for folks who want to learn more, check it out and get started, it's just replay.io. Jason Laster: That's right. Ben: Well, thanks again. Jason Laster: Thank you. Mark Erikson: Thanks. Appreciate it. Kate: Thanks for listening to PodRocket. You can find us @PodRocketpod on Twitter. And don't forget to subscribe, rate, and review on Apple Podcasts. Thanks.