Paul: Hi there, and welcome to PodRocket. I'm your host, Paul, and today we have Robert Plummer with us. Aside from first being a husband and father and tinkerer of things, Robert was telling me about... for lack of a better term, I'm going to call it the potato Tesla coil he's building. Summing lots of charge from potatoes and making something amazing. Besides that stuff, Robert is an engineering manager over at iFIT. He's also the maintainer of Brains.js and GPU.js, which we're going to be getting into on the podcast today. Welcome, Robert.

Robert Plummer: Yeah. Thanks for having me.

Paul: Just jumping right into it: you're the maintainer of these two libraries, GPU.js and Brains.js. Can you tell us a little bit about those?

Robert Plummer: Sure. Can I make a correction, though?

Paul: Oh. Please, yeah.

Robert Plummer: It's just Brain.js, not plural.

Paul: I'm slipping right into my... When I mined Bitcoin, we had Braiins OS.

Robert Plummer: Oh, cool.

Paul: So, yeah. Brain.js. No plural. To be clear.

Robert Plummer: Tell you a little bit about them, is that what you said?

Paul: Please. Yeah, what's the point of them? Why'd you create them?

Robert Plummer: JavaScript has become the most ubiquitous computing language. It's everywhere. I think maybe in the early days a lot of us really hated it, but aside from the history of it, it's become a really neat language and very enabling. Brain.js was an effort to utilize JavaScript and make it work with neural nets and machine learning, and GPU.js was an effort to accelerate JavaScript, to make it run ridiculously fast without having to learn anything but JavaScript.

Paul: And does this work on a plethora of platforms? I imagine there's a lot of hardware dependency going on under the hood?

Robert Plummer: Brain.js, no.

Paul: Okay. Gotcha.

Robert Plummer: Not directly. But when you install it, you'll see you have to install some dependencies for GPU.js. GPU.js is where the rubber meets the road, to try and accelerate things. Not try, but do. It translates JavaScript into GLSL at this time, a shading language that's like a subset of C, and allows you to use your GPU to accelerate it. There are a lot of intricacies. It runs in a browser. It runs in Node. It can run using native code. One of the...

Paul: So it could run on a mobile platform, potentially?

Robert Plummer: Yes. [inaudible].

Paul: Amazing. Okay.

Robert Plummer: I might have... Oh, there it is. Expo Go. You can even run it on there. So stuff that's translated to machine code, it can be used there.

Paul: This is a learning platform we could really run anywhere. You can run it natively, you can run it in the browser. And natively means we could run it on a phone, because we could use it with the Expo framework.

Robert Plummer: Yeah, absolutely.

Paul: Amazing. So how'd you stumble upon... or I shouldn't say stumble... how'd you dedicate the time, the unrelenting time, that's required to put into a project like this? What's your background?

Robert Plummer: As you stated, I'm a tinkerer. But it's not because I just like to tinker; it's really an effort to understand things. Everything is generally really simple. Sophisticated, perhaps, but usually a lot of simple things. I found that neural nets were not that way. That was the journey in the beginning. I was told [inaudible].

Paul: And when you say not that way, do you mean they are not inherently simple?

Robert Plummer: No, they're not.

Paul: Or they are-

Robert Plummer: In principle.
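To picture the GPU.js translation Robert described a moment ago, here is a minimal sketch using the library's documented createKernel API; the arrays are purely illustrative. The kernel body is ordinary JavaScript, which GPU.js compiles to GLSL when a GPU is available and falls back to plain JavaScript otherwise.

```js
// Minimal GPU.js sketch: the kernel body is plain JavaScript that
// GPU.js transpiles to GLSL and runs on the GPU (with a CPU fallback).
const { GPU } = require('gpu.js');

const gpu = new GPU();

// Element-wise multiply; this.thread.x is the index of the output
// cell that this kernel invocation is computing.
const multiply = gpu
  .createKernel(function (a, b) {
    return a[this.thread.x] * b[this.thread.x];
  })
  .setOutput([4]);

console.log(multiply([1, 2, 3, 4], [10, 20, 30, 40])); // -> [10, 40, 90, 160]
```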
Robert Plummer: If somebody can tell you, "Yeah, this is what's going on because it's taking advantage of hardware," et cetera, or some sort of acceleration, then it's simple, and it has very finite levels. But it's not simple in terms of implementation. For example, Python is often regarded as a great language for machine learning, but it's not Python that you're actually using. Python is an upper layer. In fact, NumPy, for example, uses a very similar strategy to the one GPU.js uses and takes advantage of C++ or C. I'm not sure which, but I know it's down there somewhere. You're actually marionetting a very low-level language so that you can, often, be successful with neural nets. And so if you start investigating how a neural net actually works from end to end, it is a very long journey with something like Python. I know I'm picking on Python a lot. But it's not apparent what is actually happening in the neural net, because you see functions and their calls and everything. With JavaScript, and with Brain.js and GPU.js, the effort was just to keep it in JavaScript and keep it maybe a couple layers of abstraction away from where you're actually using it. So if you ever want to know how it actually works, you can just step a couple layers in, maybe three, and actually see, "Oh, it's just adding something together or multiplying," et cetera.

Paul: So you're an engineering manager at iFIT. Is anything you do at iFIT related to the work that you're putting out for people to use with Brain.js and with GPU.js?

Robert Plummer: Yes and no. We don't use Brain.js at iFIT, just because we don't need it there. A lot of the work that I do for machine learning at iFIT has to do with categorization or recommendations, which is a very simple algorithm. It weighs a neuron within a neural net. So we use a lot of commercial-

Paul: Commercial solutions for neural nets that are out there? What's an example of a commercial solution? [inaudible].

Robert Plummer: For example, AWS has several commercial solutions. If you wanted to go really deep, for example, and run your own neural net, you could use something like SageMaker.

Paul: On the AWS Marketplace, you can go and spin up a neural net to use today, right now, as a product? Would you say a lot of enterprise companies go this route just for the reliability and continuous improvement these nets receive?

Robert Plummer: Yes. Yes, I would say. The commercial... It depends on what levels you need. You only want to use the right tool for the job. If you don't need a full-fledged neural net or a recurrent neural net or something like that, then there's not really... you don't want to use the wrong tool for the job. And so AWS Personalize was exactly what we needed, and we were very close with them with our platform anyway. And so that was the right fit. But a lot of the key things that I learned with Brain.js and with GPU.js, and just about how neural nets operate, allow us to better use the entire system. One example that we had: a lot of different types of workouts have different metrics. Depending on how those metrics are fed into the system, it can alter how recommendations are provided to the end user. And we were finding that recommendations were essentially broken at first. And so what we did was take the median value and put it where, for example, a value was zero for some workouts. And by doing that, the neural net understood that's basically just noise rather than zero. Zero was like, "I have no idea what to do."
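That median trick is easy to sketch in plain JavaScript. The field name below is invented for illustration; the point is just to replace zeroed-out metrics with the median of the observed values, so the network reads them as unremarkable rather than as a strong signal.

```js
// Hypothetical preprocessing sketch: swap zeroed metrics for the median
// of the observed values, so the network sees "nothing unusual here"
// instead of the very loud signal that a literal zero carries.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function imputeZeros(samples, key) {
  const observed = samples.map((s) => s[key]).filter((v) => v !== 0);
  const fill = median(observed);
  return samples.map((s) => (s[key] === 0 ? { ...s, [key]: fill } : s));
}

// e.g. an incline metric that some workout types never report
const workouts = [{ incline: 0 }, { incline: 4 }, { incline: 6 }];
console.log(imputeZeros(workouts, 'incline')); // first row becomes { incline: 5 }
```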
Robert Plummer: And so the neural net would basically panic. Whereas a median value was just like, "Oh, there's nothing going on in here," and it skipped over it. That would've been a really long investigation. I know it sounds very simple, but because neural nets can be finicky, that would've been a long investigation, had it not been...

Paul: It's all a balancing act, to lasso this idea. It's a balancing act, and you have to find where you're not upsetting the center of gravity within your equation.

Robert Plummer: Yeah, absolutely.

Paul: That can be a really challenging thing to do if you don't have a holistic picture of what you're balancing.

Robert Plummer: And it's funny, too: neural nets generally like values between zero and one. That's not just Brain.js, it's really a standard, because the random values that you start a neural net with are generally between zero and one. So zero is a very special number to a neural net, because it's so powerful: when you multiply something by zero, it's zero. And so you have this flatlining effect, which is really funny, that you can see with neural nets. And for values beyond one, it can take a neural net a long time to scale up to that value and beyond. So if you feed in values between zero and one, and hopefully they're not exactly zero, then you're talking its language.

Paul: It's almost like you're lassoing the scope of numbers between zero and one. You're defining the space so you have control over how it's going to operate.

Robert Plummer: Yeah, absolutely.

Speaker 3: Enjoying the podcast? Consider hitting that follow button for more great episodes.

Paul: Bringing us back to these wonderful pieces of creation that you put out for the world, Brain.js and GPU.js: somebody's going to want to check this out, and they're going to want to try a neural net. What's the hello world version?

Robert Plummer: Before I go there, I want to tell the story that got me started with neural nets. It was an effort to understand what was going on in Python.

Paul: I remember you saying you wanted to understand.

Robert Plummer: Yeah. I started to look into the layers and I got into the plugin. If it were comparable to Node, it would be a package. I got several packages in and I still wasn't at the code, to understand. I have attention-deficit disorder, and I just didn't have the energy to get excited about that. And then I looked at a few JavaScript neural nets, and they were following a similar trend. And then I saw Brain.js. I looked into the neural net, and it was almost like I scrolled right past the feedforward steps. There are three steps inside of a neural net for it to learn. There's feedforward, which is what we generally think of: it's providing a recommendation or some type of prediction. There's backpropagation, where it's comparing how far off it was. And there's the second step of backpropagation, where it's adjusting values. And then it can go again. It can feed forward, backpropagate, backpropagate, and then feed forward again. Or forward propagate... sorry, feedforward is a type of neural net, whereas forward propagation is a step. My bad. It was so simple. It was literally just a few lines long. I think it was between 11 and 15 lines long for the feedforward step, and both backpropagations were very similar in terms of size. It blew me away. How can it be that simple? That's as simple as it was. It's not any more complex than that.
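Robert never circles back to Paul's hello world question directly, so for the curious: the canonical Brain.js starter is training a tiny network on XOR, along the lines of the example in the project's README. A minimal sketch:

```js
// Hello-world sketch with Brain.js: teach a small network XOR.
const brain = require('brain.js');

const net = new brain.NeuralNetwork({ hiddenLayers: [3] });

// Supervised training data: each input paired with the expected output.
net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] },
]);

// One feedforward pass; prints a value close to 1, e.g. [0.93]
console.log(net.run([1, 0]));
```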
Robert Plummer: And so when I saw that, I immediately wanted to get involved, because I saw there were maybe a few bugs with the project overall. And I went to go see what the current state was. What was the next release? What was being fixed in it? And there just wasn't... in [inaudible] Yeah, Brain.js. And I couldn't find anything active. In fact, I found a very long thread of people who were like, "Hey, why aren't you maintaining this anymore?", and some people getting somewhat flustered, somewhat upset. I ignored that. I really wanted to focus on problem solving and helping. And so I forked the project, started fixing some of the issues, and just said, "Hey guys, let's continue our work over there rather than here." And from there, it's turned into something bigger. I'm not the originator of Brain.js. I think some great minds really put a lot of effort into making it amazing. But that was the journey in realizing that Brain.js was a really good tool for the job of understanding neural nets. And where GPU.js comes in: everybody obviously wants to accelerate neural nets. They're not the fastest thing, especially when you're talking about those two latter steps, the backpropagation steps. And GPU.js was the only project I've ever found that translates JavaScript to low-level machine code. So you can both understand what the project is trying to do in JavaScript, which it can execute in if there isn't any acceleration available, and also execute in those low-level machine languages. You understand exactly what's going on in JavaScript, and you're able to execute it in a much faster way.

Paul: It's almost like documenting the low-level APIs that are needed to run these things with power, and commanding them with the JavaScript interpreter?

Robert Plummer: Yeah. I think that's a really good way of putting it.

Paul: I think JavaScript is the language of the people these days, like you put it. Not verbatim, but something along those lines. This gives a really great window into how to use neural nets. Like you said, Python has been the language of choice, I'll say, because of PyTorch and these really great tools that allow people to spin up quickly. But there's a whole subset of developers that... I feel like I meet more developers who actually say, "I've never touched Python" than developers who are like, "Oh, I'm super into Python." People who are in development are often into JavaScript. You find the Go people, you find the Rust... This is my very blinded experience here, but this opens up the language to a lot more people to really get hairy with it.

Robert Plummer: Yeah, I agree with that. I don't hate Python. It bothers me that so many people are like, "Well, you've got to learn Python." Here's an analogy, and I'm probably, to some degree, off. I'm a self-taught engineer. I didn't go to college. I went to vocational school. Obviously, I had the opportunity to go to college at the time. I sorted on a spreadsheet for the most flexible job with the highest pay, and software development was number one. And I was like, "Great, I'll do that." You think, through the field, most of it's open source, or a very large part of it's open source, even back then when I was in high school.

Paul: The good stuff, anyway.

Robert Plummer: Yeah, right. So I reasoned, if a college course is going to offer this... And I'm not telling anybody not to do college, I'm just trying to get them to think.
College courses for programming mean they've got a curriculum, and that curriculum was selected at an earlier date than when you're taking it, which means the software you're learning is out of date. So you're learning something that's old. And the languages that we were learning were already essentially ancient. My first language was COBOL.

Paul: Oh, wow. Oh, you're OG.

Robert Plummer: Even then, nobody really spoke it. I mean, mainframes for banks and stuff like that run COBOL. I don't know if they were trying to prepare me for that or not, but... It was not very helpful, but it was helpful in that it got me to understand the principles of programming. For example, in another field, like a lawyer: they tell you you've got to go to college and then you've got to go to graduate school to take the bar to be a lawyer. But you can take the bar. Anybody can take the bar; you can just go. You could just go and take the bar and fail, hypothetically, and fail again and fail again. And eventually you'll pass, and then you'll be a lawyer. You could just skip the college stuff. Probably not... they probably don't tell you what you got wrong or whatever. With JavaScript and with Python, if anybody's telling you exactly what you should do, there's that principle of one hand clapping. Weigh it on the other side. Think about it from both ends of an argument. Don't let somebody else think for you.

Paul: Don't let somebody else think for you. I love hearing that from an AI engineer. That's awesome. When you're working on these libraries, are you finding the communities are people that are new to the deep learning space, and people that are coming from a similar background as you, that are just like, "Hey, I really want to learn about this, I don't have any training"? Or is it more researchers and those types of folks that you're finding? Because I'm sure with the language you get a certain demographic of folks coming in as well.

Robert Plummer: No, I wouldn't agree with that. I've seen all of them. It tends to attract those that are newer to JavaScript. But I mean, there are many companies that use it. There's a certain company that uses it for listening to brainwaves, which is pretty fantastic. They know that there are probably better tools in other languages, but they love JavaScript, and their entire ecosystem is built around JavaScript. And they followed Brain for a number of years, and they're using it. I think that's really respectable. There was a... let's see, I think his name was Pedro. I forget his last name. He used Brain when it was in its early days as well, for natural language processing. And I think NLP.js is the project. And they utilized, at that time, all the major names for natural language processing. So we had... let's see, it was IBM Watson? They had Google and Microsoft. I forget all three; it was more than that. But Brain.js ended up outperforming them for natural language processing. And I'm not sure if it was down to tweaking certain values or what. It's not an amateur framework. It's something that's been tried and tested. It does have bugs occasionally. I do try to fix those and make sure that the community's happy. But it's for a wide audience. If you want to use it in a commercial space, you can do that. If you want to use it just to understand what neural nets are, you can do that. It's okay either way. And I find, too, that I'm dissatisfied a lot of times with these big corporations pushing their frameworks and things on you.
You don't necessarily have to do that, especially if you have something like a Jupyter notebook. Jupyter can run Node just fine. And a lot of the algorithms that they provide you are like, "These are pre-trained networks or pre-trained algorithms." They use that specific terminology. How can something be pre-trained if it's all your data and it's starting from scratch every time? It's just a canned response to try and sell something that they're producing. If they're just providing you the algorithm to learn your data from scratch, and you have something low-level that you can implement, then you should seriously feel comfortable implementing it. What allows your team to stretch their legs and to utilize something that's fun, that gets them excited in the morning? JavaScript does that for me. I think it's a neat language; TypeScript, for that matter, too, which the entire platform has now been translated into. Use something that allows your team to work most efficiently. Don't just think about, "Is this for amateurs?" Again, don't let somebody else think for you.

Emily: It's Emily again, producer for PodRocket. And I want to talk to you. Yeah, you, the person who's listening but won't stop talking about your new favorite frontend framework to your friends, even though they don't want to hear about it anymore. Well, I do want to hear about it, because you are really important to us as a listener. What do you think of PodRocket? What do you like best? What do you absolutely hate? What's the one thing in the entire world that you want to hear about? Edge computing, weird little component libraries, how to become a productive developer when your WiFi's out? I don't know, and that's the point. If you get in contact with us, you can rant about how we haven't had your favorite dev advocate on, or tell us we're doing great, whatever. And if you do, we'll give you a $25 gift card. That's pretty sweet, right? So reach out to us. Links are in the description. $25 gift card.

Paul: Slightly unrelated, but on the topic of deploying it yourself: if you're able to do that, which is really becoming less and less of a barrier with the great DevOps tooling we're getting... If you have a team that's using those resources, they can actually stretch their legs. Especially when you're getting into NVMe storage and GPU-powered compute, you're talking extremely large bills. And if you can colocate something and run it yourself, you have a lot more room to be creative. And you can see teams explode, because they can run a staging and a production on day one of the startup.

Robert Plummer: Yeah. That's definitely an important point. The big corporations, they do want to sell you something. And they're not necessarily bad; they're all capitalists. They're trying to make money, and we're trying to make money to feed our families. I told you at the beginning, when we greeted, that I'm a father and a husband first. I do the commercial stuff because I need to support my family. Even that is a capitalist mindset, but it's not capitalism first. But a company is really capitalism first.

Paul: The definition of a company, pretty much.

Robert Plummer: They don't have any... Well, maybe hypothetically they could have children, like sub-companies or... The important thing is to utilize what allows your team to work best. And if your team is comfortable using Jupyter notebooks or some sort of equivalent, and then a tool like Python, fine. But if they're into Node and they really appreciate that...
Anytime somebody fights for Node, they're fighting to grow an ecosystem, because it is relatively young in terms of it being... or rather, I should say... well, young is the right word. I wouldn't say immature, but it does need some maturing.

Paul: It's continually maturing. Unless you're talking about the creator, who just dipped and made Deno. But we're improving.

Robert Plummer: It's an opportunity to push your understanding of JavaScript, and that's really what you should be doing. You shouldn't just take a canned response, a canned algorithm, and shove it in and let it do everything for you. Maybe you could, I don't know.

Paul: Implementation specific. It depends. But you're encouraging people: break it open, educate yourself on how it works. Brain.js is really well suited for that, specifically in the neural net.

Robert Plummer: Yes. You're absolutely correct, but I think you should know how something works, in principle. For example, we don't just install Node applications or Node packages; there could be a C++ system-compromising thing in there. You should know what it's going to do: it's going to sort in a certain way, or it provides this specific type of functionality. Know what it's going to do with your data and utilize it for that. Nothing should ever really be magic, especially to engineers.

Paul: You mentioned earlier that there is a company using Brain.js to actually analyze brainwaves, neural patterns coming from people. I want to get into this, because you noted that Brain.js is a self-taught teaching framework, but you have to give it the inputs about what was right and what was wrong: supervised learning. If you were to want to run Brain.js at scale, is thinking about how you're going to orchestrate the supervised learning something that you would think about? Is that included in the framework? For somebody trying to get into this, and maybe make a little service in their basement that's doing something, what are some considerations they might want to look at?

Robert Plummer: I think a lot of people think of neural nets as something a whole lot more powerful than they really are. There are special examples. In the news, we hear regularly about Tesla and how their AI drives cars. At scale, the hardest thing is training a neural net, by far, because you're feeding in massive amounts of data. Even Tesla driving is really a supervised learning model because... I should say it could be; I don't know exactly what they're doing. I don't think they really talk too much about it. But if you had a thousand drivers driving, and you fed the results, or sorry, the video of what they're doing, into a neural net, compressed down so it was easy to process, and as well fed in their reactions with the car: the turning of the steering wheel, gas, brake, what gear they're in. Ultimately that's supervised, and now you can drive cars. Steering and the brake and the gas, those would be your outputs, and your input would be video. At first, they're really all inputs, but you're saying: use both of these values differently. One is for recognizing what's going on, the other is for outputting. As long as you have an appropriate system that allows for training, then after the network is trained, you just have the feedforward step, or the forward propagation, I should say. That is a whole lot easier, by far, because it's not all of your training sets of data; it's just one. You're just feeding it in and it's reacting. So it's extremely lightweight to run that, comparatively speaking.
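In Brain.js terms, that asymmetry is the split between train() and run(): training grinds over every recorded example, while serving is a single forward pass. A small sketch; the sensor and control names here are invented stand-ins, not anything Tesla or Brain.js define.

```js
// Sketch of the train-once, run-cheaply split Robert describes.
// The input/output field names are hypothetical placeholders.
const brain = require('brain.js');

const net = new brain.NeuralNetwork({ hiddenLayers: [4] });

// The expensive part: supervised training over recorded examples,
// each pairing what was sensed with what the driver actually did.
net.train([
  { input: { obstacleLeft: 1, obstacleRight: 0 }, output: { steerRight: 1 } },
  { input: { obstacleLeft: 0, obstacleRight: 1 }, output: { steerLeft: 1 } },
]);

// The cheap part: one forward pass per new observation.
console.log(net.run({ obstacleLeft: 1, obstacleRight: 0 }));
// -> something like { steerLeft: 0.05, steerRight: 0.9 }
```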
Paul: I guess we can wrap this up by saying: if you want to run this supervised learning, you need to think of a way to collect your input data, and you need to think of a way to store that input data and feed these parameters in. In the case of Tesla, they can use all the sensors and cameras around the car, make a vectorized data set, and feed it in for every delta-t. And then they have, God knows what: steering input, what side of your butt you're leaning on more when you go [inaudible] a turn. I don't know what they collect, but they probably collect that somehow. But it's going to depend on your situation. The end requirement is you need to figure out what inputs you're going to give, how you're going to systemize that, and how you're going to systemize the actual thing it's going to propagate on, that you'd feed forward through.

Robert Plummer: I agree with that.

Paul: Just trying to wrap it up into two things: what does this framework do if people want to try it out, and what doesn't it do? People need to think about how they're going to prepare their data sets and how they're going to feed in this training data. It's part of it.

Robert Plummer: Yeah. I can give you an example of maybe a more simple model.

Paul: Okay.

Robert Plummer: I wouldn't advise anybody to do this, but you can. A lot of people want to predict the weather. Probably the simplest way to do that is to collect temperatures. Temperatures for the whole year, every day. The thing I wouldn't recommend, which is similar, is stock market prices, to predict where the stock market's going. Again, please don't do this. Maybe just for hypothetical understanding, but... it truly is random. But I had somebody trying to do the stock market thing, and I tried to equate it to weather so that I... I didn't try to mislead anybody or anything like that. But if you take stock market prices, just like weather, or temperatures rather, and feed them into the neural net as one long thing for the entire year (this would be a recurrent neural net), your model is going to memorize that year, and it's always going to... that's what they call overfitting. It's always going to know what's coming next. It's going to look amazing at first, until you feed in the next year, and then it's not going to know what's going on, because it's stupid for the next year. It only knows this year. It doesn't know any other year. But if you slice that data into smaller subsets, like, say, three days or four days or five days at a time, this is a situation, a trend. Now you've got something that's reusable across multiple years. And if you take multiple years of data and feed that in, and then do something like cross-validation, which is slicing large sets of data into smaller subsets and then validating each section of data against the other sections, and you find the most well-trained model, or well-trained data set rather, you can do some really powerful stuff, because you've trained several neural nets on something that's learnable: these smaller patterns. And now you've got the best of the best of all of those, comparatively speaking. And so you really can do things like predict the weather with some accuracy. For example, you're not going to have 30-degree days, at least here in Indiana, during the blistering hot summer. It's going to understand that... it doesn't work like that. With some level of accuracy. But again, please don't use it for stock market analysis.
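The slicing step is easy to see in plain JavaScript: turn one long series into many small windows, each pairing a few days of context with the value that followed. This is a hypothetical helper, not a Brain.js API, though the library does ship recurrent time-step networks and a cross-validation utility that pair naturally with data shaped this way.

```js
// Hypothetical windowing helper: slice a long series (a year of daily
// temperatures, say) into small overlapping windows, so the learnable
// pattern is "a few days of context -> the next day", reusable across years.
function makeWindows(series, size) {
  const windows = [];
  for (let i = 0; i + size < series.length; i++) {
    windows.push({
      input: series.slice(i, i + size), // e.g. three consecutive days
      output: [series[i + size]],       // the day that followed
    });
  }
  return windows;
}

const temps = [20, 22, 25, 24, 21, 19, 18];
console.log(makeWindows(temps, 3));
// -> [{ input: [20, 22, 25], output: [24] },
//     { input: [22, 25, 24], output: [21] }, ...]
```

In practice you would also scale the values into the zero-to-one range Robert described earlier before feeding the windows to a network.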
Robert Plummer: Don't use it to try and make money on the stock market. I don't want to be responsible for losses.

Paul: Yeah, not financial advice. I think the end takeaway here is that there's a whole sense of data engineering that goes into how you're preparing your training sets: what is the meaning behind how we're bucketizing this, and what are the time periods? Because ultimately, the neural net's looking for diffs between patterns, and that's how it's able to generalize. So there's a lot of thought there, and it can get highly complex. Bringing us back to the Tesla example: a steering wheel, I'm sure, depending on what degree of movement you are at on the axis, has different human neurostimulation behind it. We're logarithmic creatures; we think in logarithms. It's not linear... a 75% to 100% turn is different than a zero to 25% turn. You could divvy that data set up in a hundred ways, I'm sure. There's a lot of thought that goes into it, and you really have to-

Robert Plummer: Yeah. You wouldn't want to feed, for example, crash data into a neural net, because it would understand, "Okay, this is what we do": just panic, and then you slam into a semi or something like that and die a very horrendous death. You would want to be selective in putting successful data into your neural net. It's very much garbage in, garbage out. Or gold in, gold out, or treasure in, treasure out. What would be the opposite of garbage?

Paul: I like garbage. Garbage works.

Robert Plummer: It's a two-sided coin, garbage in and garbage out. You don't want garbage. Feed it good stuff and you go from there. And I find that oftentimes when the neural net is not working for you, and most of the time it probably will not be working for you (I'm not talking about Brain, I'm just talking algorithmically speaking), it's because you haven't found what you're looking for. And that's okay, because you're identifying what you shouldn't be looking for every time you iterate. And if you don't find the trends that you're looking for with neural nets, maybe it's not learning; you've not given it enough data. An example of that is a date. What is a date to a neural net? It does not know what a date is. There's no beginning and there's no end, so there are no bounds on a date. You're just talking about milliseconds from zero, milliseconds... I'm not talking about 19... is it 70?

Paul: Oh yeah. It's 1970. Is it?

Robert Plummer: Is it? Okay.

Paul: [inaudible].

Robert Plummer: You have to think of it more as the beginning of time till now. So the median is now, or so we think, or we hope. And then the upper bound is the entire universe exploding or collapsing or whatever it does. It would be really hard to feed that in between zero and one. But if you give it a time of day, like "at 10 o'clock this happened" or "at 12 o'clock," now you've got a 24-hour span that you can compress down between zero and one, and that's digestible. If it's not working for you, don't give up and be like, "Okay, this project is just... it's not written correctly, because it's not working." Think about your data and how you're feeding it in. A lot of times, you're just not looking for the right thing. But when you find it, I'm telling you, there is such a euphoric feeling, because now you have a machine working for you, bending to your will. It's the biggest hammer you can think of, or the most articulate means of milling or machining, if you were to put it in modern or physical terms.
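That 24-hour compression is worth seeing as code. A minimal sketch of the zero-to-one scaling Robert keeps returning to; the helper names are ours, not part of any library.

```js
// Hypothetical scaling helpers. A full date is effectively unbounded
// (milliseconds since the 1970 epoch), but a bounded span like time of
// day compresses neatly into [0, 1], which is what networks digest best.
function normalizeTimeOfDay(date) {
  const minutes = date.getHours() * 60 + date.getMinutes();
  return minutes / (24 * 60); // 0 = midnight, 0.5 = noon, always < 1
}

// General min-max scaling for any metric with known bounds.
function minMaxScale(value, min, max) {
  return (value - min) / (max - min);
}

console.log(normalizeTimeOfDay(new Date('2022-06-01T12:00:00'))); // 0.5
console.log(minMaxScale(75, 0, 100));                             // 0.75
```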
Robert Plummer: And you found success with it, that's amazing. And all within JavaScript.

Paul: That is the most beautiful closer we've had in a while, because we're running up on time, Robert. We couldn't have led out with a better thing. If people want to learn more about what you're working on, of course they can go to the Brain.js repo and the GPU.js repo. Are you on a social anywhere? Do you write or blog anywhere where people can follow you?

Robert Plummer: Every once in a while; it's not something I'm regularly doing. I'm honestly just trying to... To answer your question, no. My life right now doesn't allow for that much free time. I just got a fix in for Brain.js yesterday; that was a pretty good accomplishment. I hope to spend more time with it in the future, working on things like unsupervised training. That would be really neat, especially in TypeScript and in JavaScript, to be able to understand where everything's going. Occasionally you'll see me on Twitter. It's robertlplummer.

Paul: In one string, robertlplummer?

Robert Plummer: Yeah, with two Ms. No fancy disco name or-

Paul: Simple as... People are going to remember it. robertlplummer. All right, Robert. Thank you for your time, hopping on and really musing on deep things, deep learning, with us. I'm sure this is going to inspire some people to put their hands in the cookie jar.

Robert Plummer: Well said.

Paul: Thanks for your time.

Emily: Hey, this is Emily, one of the producers for PodRocket. I'm so glad you're enjoying this episode. You probably hear this from lots of other podcasts, but we really do appreciate our listeners. Without you, there would be no podcast. And because of that, it would really help if you could follow us on Apple Podcasts so we can continue to bring you conversations with great devs like Evan You and Rich Harris. In return, we'll send you some awesome PodRocket stickers. So check out the show notes on this episode and follow the link to claim your stickers as a small thanks for following us on Apple Podcasts.