Ben: Hello and welcome to LogRocket. Today, I'm here with Matteo Collina, who is chief software architect at NearForm, lead maintainer of Fastify, member of Node.js' Technical Steering Committee, and a collaborator on Pino.

Matteo Collina: I created both Pino and Fastify, so, yay.

Ben: Okay. Creator.

Matteo Collina: Yes.

Ben: My apologies. Well, quite a list of responsibilities and roles and titles. I am excited to learn about all of them today. Which one do you want to start with?

Matteo Collina: I don't know. We can start with Ruby on Rails. No, I'm joking. Sorry. Java, I don't know.

Ben: Java, yeah.

Matteo Collina: Java and PHP and C and BASIC, maybe. Let's start from the BASIC I was doing when I was a kid. Okay, no, let's jump ahead a few years; we need to skip a lot of stuff. I've been using JavaScript since the Node 0.2 era, when I first put it in production. I call myself a second-generation Node person, as I put it. Since then, I've been gaining massive experience in how to build, fine-tune, and make sure Node.js applications run well in production. Because of that, I ended up accumulating a lot of, I'd say, responsibility in the Node.js community and ecosystem, from Node.js itself to a lot of modules. I don't want to boast too much, but my modules are downloaded something around tens of millions of times per month. Actually, I think it's 60 billion per year, probably, something like that; I forget the exact numbers. Very crazy. I'm one of the most downloaded authors on npm, between stuff that I wrote and stuff that I maintain, things like that. Yeah, I have a big chunk of responsibility. I want to start by talking a little bit about Fastify, even though it's the more recent one to some extent; Pino was created earlier. Fastify is a web framework for Node.js. It's really modern; it's built to use the latest and greatest of the JavaScript world. Yeah, it's really nice.
It's growing a lot in the community; it's actually moving pretty fast now. We have been growing the download numbers significantly over the last couple of years. It has moved from a few hundred thousand to more than 1.8 million downloads per month, so it's been getting very popular, very quickly. I am really happy. There was recently the State of JavaScript survey, the State of JS survey, and Fastify was ranked as the framework with the highest satisfaction among backend-only frameworks. The other ones were all React-based and full-stack frameworks. Fastify does not have the same job as Next.js or Astro or SvelteKit or whatever; it's a system to build microservices and APIs. Just to give you a hint: in January alone, I racked up 1.2 billion downloads across all my modules. On average, that's around 60 million per day during the work week. If I could get a microcent for each one of them, I would be very well off, you see?

Ben: What you're saying is that if someone were to hack your GitHub credentials, they could distribute all kinds of malware?

Matteo Collina: A lot of malware. Yeah, literally a lot of malware, which is cool, but not so cool for my npm permissions. I have plenty of 2FA, so good luck with that. Yeah, that's what Fastify is. You can check it out at Fastify.io. We are actually in the process of shipping a new major version. The current one is v3, which has been around for a couple of years; we are going to ship v4 soon. Probably when you are listening to this, the first alpha or release candidate will be out. We'll see when that happens, but hopefully soon.

Ben: Well, I'm excited to understand it a bit more. I'm curious — when I think of a quote-unquote backend-only framework, Express comes to mind. I've used that a little bit in the past. Help me understand: how does Fastify compare to Express, in terms of how you use it, what you use it for, performance?
Matteo Collina: Okay. Express comes across as minimalistic — it says on the tin that it's a minimalist framework. It does the minimal things that you need. However, if you're building an API, you need to do way more than that. Let's talk a little bit about the fundamental things that are missing. The first one, which is extremely important to me, is validation. You always need to validate your inputs. If you don't validate your inputs, you are in big, big trouble — big trouble, big time, just to be clear. By the way, most of the developers that I have met do not validate their inputs on their API endpoints. I'm like, this is the state of security in our community? It also means that they're probably vulnerable to very bad prototype poisoning attacks. If you're using Express, it does not protect you against prototype poisoning attacks, for example. Make your own decisions here.

Matteo Collina: Prototype poisoning attacks are pretty bad, by the way. They can crash you easily, or worse. And note, it's not prototype pollution — prototype poisoning is a different type of attack. There is a good piece of documentation on the Fastify website about this. That is one of the critical ones. The second one that I want to talk about is logging. Fastify embeds a logger; it makes that decision for you. Logging is one of the top bottlenecks in all Node.js applications, and if you don't do it correctly, you are probably screwed. This is one of the links with Pino, the library that I built before, which is the fastest JSON logger for Node.js.

Matteo Collina: It's great. You see that there is a trend here: if you don't log correctly, it will destroy the performance of your systems, and therefore you are in big trouble. Therefore, we make that choice for you.
It's a choice that a developer should not have to make if they're using a web framework. Logging is so tied to how the framework itself works that there are quite fundamental questions you need to answer. Last but not least, Express does a lot of monkey patching on the prototypes of the Node core objects. This is bad for performance. Fastify does not do that: it adopts a lifecycle model, so it does not need to monkey patch anything on the Node core objects. That's also one of the reasons why it is faster — not the only one, but one of the key reasons.

Matteo Collina: It also means that certain things are a little bit easier, and certain things are a little bit harder than in Express. It depends. One of the nicest bits, though, is that Fastify gives you the concept of plugins — encapsulated plugins. You can constrain the functionality, constrain your app into certain groups, into sub-apps. Then you can take those and migrate them to microservices or lambdas if you want. It allows you to slowly separate the concerns of your application without building a single big ball of mud.

Matteo Collina: One of the differences between Express and Fastify is that Fastify uses a deterministic routing algorithm, which uses a radix tree to structure your routes, while Express iterates over a list of regular expressions. This means that at some point, after enough routes, the gap between the two grows so large that Express' performance quickly degrades, because of how many regular expressions need to be checked before reaching your route. Also, the middleware pattern is pretty heavy to implement in terms of overhead. By getting rid of it in favor of a lifecycle model, Fastify is actually way faster for certain things. That's it.

Ben: Got it. It sounds like the benefits are certainly performance. Also, would it be accurate to say it's a bit more opinionated about some of the questions around structuring your application and common patterns like middlewares?
Matteo Collina: Yes. You also have, for example, the concept of plugins, which is way more articulated. In Express, everything is a middleware, so you need to register your middlewares. You end up having 30 to 50 middlewares, and those middlewares are all called every single time a request comes in. This results in 100 to 150 functions in the call stack before a request reaches your application, which is pretty heavy. Thanks to Fastify's plugin system, decorators, and hooks, you can implement most of the same flexibility without having to pay this huge performance cost. You can have a great developer experience without compromising on performance, which is the greatest achievement for us.

Ben: Does Fastify have an opinion on ORMs, or is that outside the scope of what Fastify does?

Matteo Collina: Fastify does not say anything, but I do have an opinion. I'm as anti-ORM as a person could be. The fundamental reason: ORMs help a lot with the 80% of an application that's very easy to implement, but they make the remaining 20% extremely hard — and that last 20% is where you spend most of your time. You are simplifying stuff that is already simple, just repetitive, while making stuff that's hard even harder. Unless you are building copycat, CRUD-based applications all the time, for complex business flows, when you have complex data that you need to manage, it does not sound correct to me. If I need to go fast, I prefer to use something like Hasura, which just gives me a GraphQL endpoint; I connect a SPA and I'm good.

Matteo Collina: I'm not even dealing with writing a backend system, to some extent. Prisma is a good thing in between. I like Prisma, because it has a very good migration system and it enables you to do a few nice things that otherwise would not be as easy.
If somebody wanted one, I would say let's just use Prisma, but most of the time I'm just not using anything. Personally, when I need to build something, I recommend not using anything, because I've been scarred too many times before — since the days of Hibernate. I told you we'd end up going back to Java; I have some bad Hibernate scars on my back. And Ruby on Rails, too. You see, the only good ORM that I ever used is ActiveRecord in Ruby on Rails. Again, as I told you at the start, we were going to come back to both Ruby on Rails and Java. For some reason, I had an idea of where this conversation would go. Yeah, that's it.

Ben: I'm curious. You're saying the part of the application that's already easy to build is where the ORM helps, and the 20% is where it makes things more difficult. What do you see as the 80%, and what's the 20%?

Matteo Collina: The vast majority of building an application is doing CRUD: creating, reading, updating, deleting, all those things. That's repetitive to do. Once you have implemented it for the dog table, you could implement it for the cat table; to be honest, the code would not be much different. An ORM actually makes a lot of sense if you only have to deal with cats and dogs. However, if you want to build a system that can answer a query like "how many of those cats are within a radius of 50 miles," things start getting a little bit more complicated. You need to find out: how does this ORM handle spatial data? It doesn't? Then how can we do it?

Matteo Collina: Well, now we need to go one level below and start understanding how to interact with the database beneath the ORM, because everything else in our system wants the data — the model objects — from our ORM. That's the problem. Now I need to run this custom SQL query, put the results into my ORM, and get the objects out so that I can move them to the rest of the system. It becomes very brittle and very complicated.
I prefer not to. I prefer to just move around pure JavaScript objects and forget about those complex model objects — just forget them. If you need to touch the database, just do a SQL query, or a MongoDB query, or a Redis command, or a DynamoDB command, whatever you're using. Just use the native drivers.

Ben: I'm curious to go back to an earlier comment you made, that logging is one of the most ... I don't know if you said the most, or a common bottleneck in Node applications. I'd like to understand that a bit more. What does that mean, and why?

Matteo Collina: Okay. Let's consider this problem. A Node server receives an HTTP request and produces an HTTP response. In order to handle many of those, it needs to be really fast at receiving them and sending them out. That's the theory. However, there is one catch: those requests also have side effects, otherwise we wouldn't make them — a server that did nothing would probably not be very useful. The reason we are making these requests is that they have side effects, that they change some state. They will probably touch a database, something like that, and hopefully they will log something to standard output, or a file, or somewhere.

Matteo Collina: The problem with this pattern is that if the speed at which I can handle incoming HTTP requests is higher than the speed at which I can write logs out, I am in big trouble: I am producing more logs than my log writer can handle. This means that I will have a memory buildup of logs in my process, or I need to slow my process down. There's no way out of this: either I slow down the process, or I have a big memory buildup, or both. Eventually I fill up my memory with logs, and at some point the system crashes, because it cannot write them out. Most people don't even think this can be a problem.
They just add more log lines, because in whatever field they work in, they want to have a lot of log lines.

Matteo Collina: However, this doesn't work, because things spiral out of control to some extent. Basically, you are enabling your users to mount a denial-of-service attack against you, because your logging is not as fast as your request handling. Logging either needs to be fast, or it needs to be slow but block the amount of requests being processed. Most logging frameworks for Node do not offer you the flexibility to configure and tune those two behaviors. In Node itself, you can log to standard output or to a file, both synchronously.

Matteo Collina: With Pino, you can do it both synchronously and asynchronously. Synchronously means that I'm blocking my main thread until the data has been passed down to the operating system — it does not do an fsync, but it is handed to the operating system. Asynchronously means: put it there, start buffering it, and flush it out whenever you have time. This also offers some flexibility: buffer up to a point, and if it goes over a certain threshold, stop buffering, because we don't want to use more than X amount of memory. Pino offers you this whole range of flexibility to control your logging and to tune it for production. On top of that, we also recently added a system to do multi-threaded logging.

Matteo Collina: Very often, when you receive the logs from your server, you need to send them to Elasticsearch or CloudWatch or Sentry or LogRocket, or wherever you want to send them, to be honest. A lot of people send the logs to their [inaudible 00:21:42]. Well, one of the funniest ones is syslog — you wouldn't imagine how many people use the syslog stuff. I'm just like, how come Splunk is a thing? I don't know. I work in the enterprise world, so I know a lot of these logging problems. You probably know too, right, Ben? You know a thing or two about logging, I suspect.

Ben: I am a fellow lumberjack, so to speak.

Matteo Collina: You see.
Exactly. Yeah. Basically, in order to support that pattern, we implemented this system of multi-threaded logging. The main process writes the logs to a shared array buffer, and those logs are then picked up by a writer thread, which ships them externally. There are two fundamental reasons we need two threads. One is that we don't want the main thread delaying the response to end users because of this additional I/O, so that moves to the logging thread. The second one is to handle crashes. If you're doing asynchronous logging and your process crashes, you still want to send out all the logs. The most important logs are the latest ones — the ones that led to the crash — but with asynchronous logging in a single thread, those logs are gone, because they have not been sent.

Matteo Collina: When it crashes, it's gone, it's done; it's not there anymore. You need to give the process some time to ship these things out. This multi-threaded system is actually really great, because the second thread essentially monitors the first one. If my first thread dies, it tells the second thread, "oh, I'm dead," and the second thread starts flushing all the logs. Even asynchronously, it can call your external API and do whatever you want, even if the main event loop is dead. This happens automatically. It can even compress or batch the logs, and the secondary thread will keep going until it's finished, which is really nice and something that is fundamental for a solid logging tool.

Ben: Taking a step back, what led you to want to build Fastify? How did you get started, and why?

Matteo Collina: I started working on it in 2016. I was doing some consulting with clients, and I had seen these massive problems with logging, so I built Pino. Then the next major problem that I saw in applications was Express. At first I decided, no, this is too complex; I don't want to build another web framework.
Then I said, well, I will only start building another web framework if I can convince some other human being to build it with me. This is where Fastify is different from a lot of the other web frameworks. All the other web frameworks were started, or are maintained, by some sort of benevolent dictator; more or less, they are very close to a one-man show. Instead, Fastify is open. Fastify was started because I was able to convince somebody else to build it with me. In fact, as you have probably seen, I'm not the first committer of Fastify — Tomas is. I had the idea, but Tomas did more of the implementation than me.

Matteo Collina: I probably did the hardest parts, but he did a fair chunk of it. Anyway, that's the thing. We got Fastify to a certain point, but nobody cared about what we had done for a few more years. Part of the reason for the increasing popularity of Fastify is that we've been pushing out releases every week or every other week for the last few years. We have 15 contributors, so there is huge momentum behind the community. Which is great, because things keep improving and improving without big investment from companies. Well, every company chips in a little bit, but not in the sense of "do some bug fixing for me — oh, I have a bug." I am very ruthless on that side. We are very clear as a community on that side of things: if you are using the framework, you are responsible for its maintenance as well. Happy days — it's up to you. If you have a bug, you have the responsibility to help maintain the framework, which makes it super good from a community perspective.

Ben: What's in the future? What does the future look like for Fastify? What are you most excited about? What's on the roadmap?
Matteo Collina: We are shipping Fastify v4 soon, as I said, once we go into the release candidate phase. It has a brand new TypeScript system that will enable you to define the schema for your query strings, bodies, and so on, and have those automatically inferred in the types of your application while you're coding, so that you don't need to type things twice. This was one of the major feature requests from people, and the experience is pretty nice, to be honest. I just tested it and went, whoa, I didn't know TypeScript could do this. By the way, it was not long ago that it couldn't — TypeScript itself is evolving as well. Sinclair, one of the collaborators, has done a phenomenal job implementing that feature. V4 also comes with a completely brand new error handling system. We have improved the performance of the router and of the framework itself; it's better all around. The bit that excites me the most is probably the error handling, because I've done a complete rewrite of how we handle errors. It's much more robust now than it was before, and way more flexible. It's a pretty big deal.

Ben: Awesome. Well, Matteo, thank you so much for joining us today. It's been very interesting learning about Pino and Fastify, and hearing your opinions on ORMs — I enjoyed that. Thanks for joining us. For anyone else out there who wants to check out Fastify, what's the URL? Is it just Fastify.com?

Matteo Collina: Fastify.io, not Fastify.com. That was taken. Fastify.io.

Ben: Well, maybe one day you'll get the dot-com, but for now the dot-io will have to suffice.

Matteo Collina: Yeah, we also have the dot-net, if you prefer that one. We need to migrate to the dot-dev one, but you know, there's not enough time in the day for those kinds of monotonous tasks.

Ben: Fair enough. Well, thanks again for joining us, and take care.

Matteo Collina: Bye-bye.

Ben: Thanks for listening to PodRocket.
Find us at PodRocketPod on Twitter, or you could always email me, even though that's not a popular option. It's Brian at LogRocket.