Daily Brush for Website Speed: Embrace the Performance Budget Ritual with Medhat Dawoud ===

Noel: [00:00:00] And welcome to PodRocket, a web development podcast brought to you by LogRocket. LogRocket helps software teams improve user experience with session replay, error tracking, and product analytics. You can try it free at logrocket.com. I'm Noel, and today we have Medhat Dawoud joining us. He's going to talk about performance budgets and website speed. He's a senior software engineer at Miro. Welcome to the show.

Medhat: Thank you very much, Noel. I'm so excited to be here today and excited about the topic we're going to talk about.

Noel: Yeah. I like these episodes where we talk about performance, because I feel like every dev knows this is a thing. They like to think about it when they can, but it gets pushed to the back burner sometimes.

Medhat: Yeah. Sometimes people are avoiding it.

Noel: Yeah, exactly. So recently you've been drawing an analogy between website performance and dental hygiene. Can you guide us through that?

Medhat: Yeah, absolutely. I see that web performance is very important in most [00:01:00] cases, if not all of them. To emphasize that: any website is like the human body, and we grow up by eating more food. By eating more food, we're using our teeth to chew that food and swallow it. We're using them more, so we need to care more about them. You have to visit the dentist at least twice a year for a check-up, if not more. And whenever you have a problem, you run to the dentist to find out what it is. From there, the dentist asks you to do some things so you don't ruin it again; maybe not to improve it, but at least not to ruin it, to keep the same state until the next visit. And that is by brushing, flossing, and so on until the next time. I see exactly the same thing in web performance. Performance is like the teeth of the website. If you feed the website with more features, [00:02:00] that means you are affecting performance more and more. So you need to brush it, and brushing it means caring about it and at least keeping it in the same state it was in before adding those features. More features affect a lot of things: bundle size, the loading speed of the page, the metrics you're tracking for that page, and so on. If you don't care about it for a long time, the website will still work, but just like teeth: bad teeth work too, right? People can eat with bad teeth, but they're not going to enjoy the food. And your users are not going to enjoy the features. So you need to keep brushing your performance. It's not about reaching a performance goal. It's about caring not to lose the performance win that you made one day. That's the difference. A goal is something you need to reach; a performance budget [00:03:00] is something you need not to break.

Noel: Right, that makes a lot of sense. And with some regular habits, maybe if we can go back to brushing and flossing and all that, I think we as humans get into the ritual of doing that, so it just becomes part of the daily routine. Is it
that simple when it comes to performance on a website? You just have to make it part of the routine of making any change. Or is there something more that you recommend people keep at the front of their mind when making changes?

Medhat: That's an interesting question. Yes, in the beginning, if someone has never brushed or flossed in their life, it will be hard at first, right? You need to build it up over time. If you start doing it, it takes time to learn to care about it, and you're building habits. You're making it easier for yourself; you're putting it in front of you when you wake up in the morning or before going to sleep, and so on. Same for performance. If you would like to keep a performance budget, you need to make it easy for other team members to use, and to care about, as well. You need to make it a ritual, as we said, something [00:04:00] that is part of the routine of building new features. And to do that, there are some steps and recommendations for building performance budgets. To summarize the definition first: a performance budget is just a set of rules that you put into your system to prevent performance regressions. You reached a certain level of performance, certain numbers for each metric, and you would like not to lose those numbers. If you're not improving them, at least don't lose them, right? There are several ways to apply these rules. The first is to simply run a command every time you build a new feature and make sure you're not changing anything about performance. A different way is to run it on every push, during CI/CD. And a third one, which is the optimal one, [00:05:00] is to connect it to your website. From there, with each session that visits your website, it listens to events, collects new numbers for each metric, and tells you whether some users are experiencing a problem with a specific budget or not.

Noel: I think people can go out and find these tools. Are there any specific use cases where one of these strategies proved more or less effective than the others? Is there something you see more success with out in the wild?

Medhat: Yeah, absolutely. The last one is the best, to be honest. We have two ways to measure performance in general. The first one is synthetic testing, also known as lab testing, which is best known through Lighthouse, which everyone runs if they care about performance. The other one is RUM testing, also known as field testing. RUM is real user monitoring. [00:06:00] With the first one, lab testing, you can gather these numbers, but you cannot really maintain a performance budget, because as I said, a performance budget is not about reaching a goal; it's about preventing regressions from that goal. You can find out that there is a regression using lab testing, but you cannot prevent it. You cannot get an alert telling you that you crossed a threshold you shouldn't cross. So for performance budgets we care about the other type, which is RUM testing, or field testing.
Then with this one, you can gather different information from the user: their geolocation, their internet connection, even the performance of their machine, the size of their screen, and so on. Based on this information, you can set different budgets. Some people might think a budget is just, all right, I would like my website to score 80 or 90 in performance at minimum, and that's it. No, it can be more than that. [00:07:00] You can make it more granular. You can say, I would like a score of 90 or more on a mobile phone on a 4G connection, and so on. Make it very specific. For different devices, for people in different geolocations, for different screen sizes, you can set a different budget. So the best way is to set up RUM testing using one of the available services. I can recommend some; some are free, some are not. If you just want to try out how budgets work, start with the free versions or a trial of the paid services; I've tried some that I can recommend. Based on that, you can set the budget and get notified, get a nice dashboard, or create your own dashboard using Grafana or Looker and so on.

Noel: Yeah, very cool. I guess this will probably depend on the tool a person ends up reaching for, but do you have any [00:08:00] specific approach to monitoring and maintenance? How should one ensure they don't lose track of this over time?

Medhat: At the beginning, let's talk about how to create a budget in the first place. We have two cases. The first case is that you already have a running website with some performance problems, and you find that a performance budget might help with that. The other case is a project at an early stage, or you are starting a new website from scratch. If we're talking about the first case, a website that has problems where people cannot stop performance regressions, there are three steps to fix performance, and I'll say them in order. The first one is to measure. With different tools, even lab testing, you can measure and find out how your website is performing right now. After measuring, you need to plan and fix. And when you reach, after fixing or during fixing, a level [00:09:00] you are satisfied with and you say, now my website is performant enough, I like it, you can take that state and set it as your starting budget. And this is a cycle. You start again and measure, and from measuring you find out that your website is not doing very well on a specific metric, like CLS, for example. So CLS needs some improvement. You plan it, you fix it, then you set a budget based on what you reached, so you never drop below what you reached. This doesn't mean you'll have the best-performing website from the beginning, but every time you make a performance win, secure it with the budget and say, all right, I'm not going to lose this in the future. I'm going to get notified if there is a problem. I'm going to get a red pipeline if there is a problem. So now we have a budget in hand. Based on that, you can use several things.
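Before getting to specific tools, here is a minimal sketch of what such granular, RUM-driven budget rules might look like. The rule structure and field names are hypothetical, not any particular service's API; they just illustrate setting different thresholds per device and connection and checking collected field data against them.

```js
// Hypothetical budget rules: thresholds per metric, device type, and connection.
// Not any vendor's API -- just an illustration of "granular" budgets.
const budgets = [
  { metric: 'LCP', maxMs: 3000, device: 'mobile',  connection: '4g' },
  { metric: 'LCP', maxMs: 2000, device: 'desktop', connection: '4g' },
  { metric: 'CLS', maxScore: 0.1 }, // applies everywhere
];

// Check one RUM sample (one real user session) against the matching budgets.
function findViolations(sample) {
  return budgets.filter((b) => {
    const matchesContext =
      (!b.device || b.device === sample.device) &&
      (!b.connection || b.connection === sample.connection);
    if (!matchesContext || b.metric !== sample.metric) return false;
    const limit = b.maxMs ?? b.maxScore;
    return sample.value > limit;
  });
}

// Example: a slow LCP from a mobile user on 4G would be flagged.
console.log(findViolations({ metric: 'LCP', value: 3400, device: 'mobile', connection: '4g' }));
```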
For example, one of the things you can secure [00:10:00] with a budget is bundle size. For bundle size, there are two free tools you can use: one is called bundlesize, and if you are using webpack in your website, you can also just set up webpack's performance config. Both can be used, and at build time they tell you whether you are exceeding the performance budget or not. From there, free of charge in your pipelines, you can see whether a new feature is affecting performance. Of course, bundle size is not the only thing that indicates whether you're exceeding your performance budget, but it's one part. Another thing: if you have this budget in hand and you want a specific page or a specific resource to be checked in a different way, you can generate something called a budget.json file. You can generate it in several ways and use it with a tool like Lighthouse CI. Lighthouse CI is similar to the Lighthouse you use in your dev tools, but it's a [00:11:00] command-line tool you can run to see how you are doing on different metrics. Lighthouse CI also comes with support for budgets, and they call it LightWallet. So if you have this budget.json file that you generated, and you're sure these numbers are at least numbers you're satisfied with, or the latest numbers you achieved and don't want to break in the future, you can run it free of charge with Lighthouse CI. Using that in your pipelines gives you an indication that you're breaking the rules, that you're going beyond the budget. If you would like to build a good culture in your team, this should be a hard line. If for any reason we broke this budget, we should stop, we should fix it, and so on. If you start building a bad culture of, you know, if we broke it, okay, it's fine, it's not going to be so bad, that builds bad habits for all developers, and the budget is not going to be followed [00:12:00] in the future. So it's a consensus decision. Moving on to the other side: if you are building a new website or you're at an early stage of a website and you would like to prevent regressions in the future, for now you might not have any problems, right? Because you're at an early stage, you don't have many users, you don't have many problems reported to you. So what you do here is generate your own performance budget based on community recommendations, since you don't have numbers yet. There are several tools for that as well, but one I can recommend is the performance budget calculator; you can find it at performancebudget.io. You give it a target load time and a target user connection, 3G or 4G, whatever you expect your users to connect from. From there it generates this budget.json file that you can use and run with [00:13:00] Lighthouse CI or any other tool you like. So here is the gist: if you would like to start with a budget free of charge, just go to this website and it generates it for you, with recommendations telling you HTML should be this size, CSS this size, JavaScript, images, and so on. And based on that, you can use it.
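To make that concrete, here is a small budget.json in the format Lighthouse's LightWallet understands (resource sizes in kilobytes, timings in milliseconds). The numbers are placeholders you would replace with your own baseline:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 170 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "total", "budget": 500 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 3000 },
      { "metric": "total-blocking-time", "budget": 300 }
    ]
  }
]
```

You can then point Lighthouse at it from the command line with `lighthouse https://example.com --budget-path=budget.json` (the `--budget-path` flag is part of the Lighthouse CLI), or wire the same file into a Lighthouse CI run in your pipeline.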
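And for the bundle-size side mentioned above, a sketch of webpack's built-in performance hints, which fail the build when an asset or entrypoint grows past a byte limit (the limits here are placeholders):

```js
// webpack.config.js (sketch) -- webpack's built-in performance hints.
// With hints set to 'error', exceeding these byte limits fails the build.
module.exports = {
  // ...entry, output, loaders, etc.
  performance: {
    hints: 'error',
    maxEntrypointSize: 250000, // ~244 KiB per entrypoint
    maxAssetSize: 200000,      // ~195 KiB per emitted asset
  },
};
```

The bundlesize package works similarly: you list file globs with a maxSize in your package.json and run its CLI in the pipeline, so a pull request that pushes a bundle over the limit turns the check red.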
And in the future, when you have more users, when a performance problem shows up, what you need to do is revisit it. Revisiting usually shouldn't mean enlarging these numbers; it might mean lowering them. And it should be based on data only. That's my main recommendation: if you walk away from this episode with one thing, I would say, don't set budgets or change budgets without having data. Data is very important. Collect data about [00:14:00] your users, about the persona, and based on that, take decisions and set the budget. Don't say, I would like to make the fastest website in the world. That doesn't exist, believe me; there are always trade-offs.

Noel: Yeah, I'm curious on that point, on data and how we make decisions around these things, given that there are, I'm sure, trade-offs almost all the time. When you say make these decisions with data, what factors should devs be paying attention to that maybe they're not thinking of? People look at, like you said, cumulative layout shift, time to first byte; those are the easy ones, where we can look at a number and say it's better if it's smaller. But that doesn't really help us make decisions about how to direct the website. What other data should we be comparing those numbers to, and joining them against, to figure out how much weight [00:15:00] we should put on performance?

Medhat: I would respond with a cliché response: it depends.

Noel: Sure.

Medhat: It's really not one-size-fits-all. And here's a funny thing. Sometimes people measure with lab data, with Lighthouse or something, and get very nice numbers, very good numbers. And when they start measuring with RUM testing, with real users, they get extremely bad numbers, and they feel, all right, I did everything I could, what happened? The whole idea here is that lab data is not really that accurate. It depends on your machine, it depends on the conditions you're testing in, and so on. The same lab test for the same website run on my machine might give different numbers on your machine, since it's lab data. So you should use lab data to set a [00:16:00] baseline, and after setting the baseline, when you feel that, all right, at least now it's performant, and you then run RUM testing, you will not be surprised except by user conditions. That's the common case: people run lab tests, get good numbers, then measure real users with RUM and get bad numbers. But here's the funny thing: sometimes it's the other way around. Sometimes you run lab tests and get bad numbers, and when you look at the RUM data, the real information collected, you find extremely good numbers; users are getting great performance on their devices. Why? Because the website is mainly used by, say, iPhone users, [00:17:00] or it's only used by desktop people, so you get really good numbers. In a way that's a false positive: everything looks good, so you don't do anything, right? But what do you do in that case? Nothing. Your user base is extremely happy, even if your lab data isn't satisfying. So this is the data you need to collect. It depends on that.
If your user persona, the user information you're collecting with a RUM service, looks really good, I don't think you need to invest time in optimizing performance or changing the budget or whatever. If it's the other way around, and your real user data comes back bad, you need to make a change: collect the data, collect the metrics I shared in my talk and earlier, time to first byte, time to last byte, and so on. FID, of course, is going to be deprecated [00:18:00] soon, early in 2024; if you're watching this later, it might be deprecated already. And INP is going to be its successor. INP is Interaction to Next Paint, and it ensures that performance stays good for any interaction from the user. It's not going to be like FID. FID is First Input Delay, which only measured the first input the user makes, a click or a tap or whatever, but only the first one. And it was almost always good, for a very obvious reason: most people, 90 percent of people, don't interact with a page until the page is fully loaded. So it always told you, okay, FID is really good.

Noel: Yeah.

Medhat: INP works differently. It measures all the interactions throughout the page. It's not formalized as a Core Web Vital yet; it will be in March 2024, but for now it's already a Web Vital, and it will become more prominent later.

Noel: [00:19:00] I want to roll back a little bit to what you were saying about real data, metrics from real users versus the lab. I'd imagine some people are a little bit shaky on what that actually means. When you're talking about real metrics and telemetry, are we still talking about the same vitals, just on real data we're collecting, or are we doing something else to get a better grasp of the user's perception of a website's performance?

Medhat: It's more or less the same metrics. And you can create your own custom metrics as well. If you remember, this was shared before by Twitter, X now, but Twitter in the past: they had their own custom metric called time to first tweet. It measured the time the user needed from page load [00:20:00] until they were able to tweet, to write one and publish it. That can be measured with real user monitoring too. With real user monitoring, or with lab data like Lighthouse or whatever you're using, both might measure the same things: Core Web Vitals and other metrics like time to first byte and so on. But the difference is that real user monitoring captures the real user's perception of the page, as you said. If the user experiences a time to interactive of five seconds, and on your machine the lab data tells you three seconds, the user is more correct. That's reality. You need to act based on the user data, because that's how users really experience your website. And for sure we are privileged in this field; developers have extremely fast laptops. We are not the baseline. You need to use your users' machines and real user monitoring as the baseline, not yours.
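As a concrete sketch of that kind of field collection: Google's web-vitals library exposes per-user Core Web Vitals in the browser, and a common pattern is to beacon each value to your own endpoint together with some context. The endpoint URL and the extra context fields here are assumptions for illustration, not a specific vendor's API:

```js
// RUM sketch using Google's web-vitals library (npm: web-vitals).
// The '/rum' endpoint and context fields are illustrative.
import { onLCP, onCLS, onINP, onTTFB } from 'web-vitals';

function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'LCP', 'CLS', 'INP', 'TTFB'
    value: metric.value,
    id: metric.id,       // unique per page load
    // Context that lets you slice budgets by device/connection later:
    connection: navigator.connection?.effectiveType, // e.g. '4g' (not supported in every browser)
    userAgent: navigator.userAgent,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  (navigator.sendBeacon && navigator.sendBeacon('/rum', body)) ||
    fetch('/rum', { method: 'POST', body, keepalive: true });
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onTTFB(sendToAnalytics);
```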
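And for custom metrics like Twitter's "time to first tweet", the built-in User Timing API is usually enough: mark the moment the page starts and the moment your key feature becomes usable, then measure between them. The mark names and the "composer ready" moment below are hypothetical:

```js
// Custom metric sketch with the User Timing API (performance.mark/measure).
// 'composer-ready' stands in for the point where your key feature becomes usable.
performance.mark('app-start');

// ...later, when the tweet composer (or your equivalent feature) is interactive:
performance.mark('composer-ready');
performance.measure('time-to-first-tweet', 'app-start', 'composer-ready');

const [measure] = performance.getEntriesByName('time-to-first-tweet');
console.log(`time-to-first-tweet: ${Math.round(measure.duration)} ms`);
// measure.duration could be beaconed to the same endpoint as the other metrics.
```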
Noel: I think, again, devs [00:21:00] have a lot of empathy for this and they understand the importance of it. But I think a lot of the friction here ends up happening because the business comes in with some specific request, or there's some new feature we've got to get out right now and we don't have time to go through all this stuff. How do you handle that contention between new features that might affect performance and how that degrades existing performance for all users, even ones who might never touch the feature? You spoke before about how, if the bundle size gets too big or any of these metrics slip, it's a part of the culture that we should never undermine. But then some team gets tasked with adding a feature, and it's, we need this new big library in here that makes the bundle bigger to get this out the door. How do you approach those tough situations?

Medhat: That's a very interesting and very important question, because the enemy of any performance win we have is new features, right? And new features usually come from one source: the product [00:22:00] owners, or business people in general. So we need to connect performance to the business metrics. Here are some facts about how performance affects business metrics, especially for online businesses like SaaS or e-commerce. The first one: the Core Web Vitals, the three of them, FID, LCP, and CLS, and later INP as well, contribute to the search signals for your page. If people search online for your business and find you on the first page, that's the best thing that can happen for an online business, right? So if you don't care about Core Web Vitals, it will affect your ranking in Google search, and in other search engines as well. And that's a selling point for business people to care about performance. Another thing: [00:23:00] research shows that if you keep the user waiting more than three seconds for your page to load, you're actually losing them; the bounce rate goes up. At five seconds, there's up to a 90 percent increase in the probability that the user will bounce from the page. People these days are impatient, and the longer you keep them waiting for the page to become fully interactive, the more likely they are to leave your website and, most probably, look for a different service. You're losing business. Another fact: for every one second you save on the first load of the page, you're improving your conversion rate by 17 percent. That's the research; I can give you the links later for people to read. Finally, in the user experience hierarchy, the most important thing for the user is the loading speed of the page; about 75 percent of [00:24:00] user satisfaction comes from how fast your page loads.

Noel: Yeah.

Medhat: These are the four business metrics that are tightly connected to performance: SEO ranking, conversion rate, bounce rate, and user experience. All four directly affect business outcomes and can make the business lose money if you don't care about performance. If you would like some inspiration, there are two websites I can recommend. The first one is wpostats.com, which is full of success stories from people who cared about performance and how that affected their conversion rate, their user satisfaction, and so on. The other one is
web.dev, the famous one from Google, which has a tag called case studies. Under it you can find a lot of case studies from different companies that cared about Core Web Vitals and other metrics, improved their performance, and saw how that affected their business. [00:25:00] So with this, you have a strong case for reaching performance goals, making the website performant, and making the metrics green again. But that cannot be protected without a performance budget. Again, you want to secure what you've reached. You've convinced the business to run a sprint dedicated to performance, and you did a great job, but you cannot secure that without a performance budget, because after that you add new features, and they're the evil that pushes performance back to the red, or at least the amber. So how do you deal with new features? There are four factors you need to care about when you're creating a new performance budget. The first one is to make it concrete. How? You shouldn't say, all right, I would [00:26:00] like to have the fastest website, and leave it at that. No, as I said earlier, you need to make it very specific and concrete. Say: I would like LCP under three seconds on mobile on a 4G connection, or I would like the performance score to be above 95 on desktop. Very specific, so you can target different things. The second factor is to make the budget meaningful. If you set a performance budget just to follow a trend, because you heard this episode of the podcast and thought, yeah, let's do it, no; you need to connect it to real business needs. Remember, the goal of a performance budget is to prevent regressions in performance, in business, and in UX, right? If you don't connect the performance budget you're setting to a business goal, everyone will break it, and that's the worst thing that can happen; it will be useless in the future. The third [00:27:00] factor: make the performance budget integrated, make it easy for people to see. Put it into your CI/CD; make it easy for the team to follow the budget and use it. And make it visible to stakeholders. In most companies nowadays, if you visit any office, there are screens everywhere. I recommend, and we did this at a former job, using one of those screens to always show a board with the performance of your website. You want to see the metrics all the time, how the servers are doing, but also how your website is doing on metrics coming from real users. That way stakeholders feel that this metric is part of the business and we need to keep it green all the time. The fourth and last factor of a performance budget is to make it enforceable. How? It's a hard line. You are not allowed to exceed it. You are not [00:28:00] allowed to modify it in the future; if you really need to change it, it has to be based on data. Again, you need concrete data in hand in order to change it. Other than that, it's enforceable: make it break the build, make it break the pipeline, make it block going to production.
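As one way to make it enforceable in CI: Lighthouse CI can fail a pipeline run when assertions are not met. A minimal sketch of a lighthouserc.js, with the URL and thresholds as placeholders to tune against your own baseline:

```js
// lighthouserc.js (sketch) -- run with `lhci autorun` in the pipeline.
// Thresholds and URL are placeholders; failing assertions exit non-zero and break the build.
module.exports = {
  ci: {
    collect: {
      url: ['https://staging.example.com/'], // page(s) to audit
      numberOfRuns: 3,                        // smooth out run-to-run noise
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 3000 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```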
One example of people taking this very seriously is Preact. Preact is an alternative to React, and its selling point, even on Twitter, is that it's a fast 3kB alternative to React. That's the selling point. I listened to Jason Miller, the creator of Preact, on a different podcast, and he said that every time they add a new feature that might push them past those three kilobytes, they start golfing the code back down to that [00:29:00] hard line they set. So it's enforceable; they cannot exceed this number, it's a selling point. You won't see the next version sold as a fast 4kB alternative to React, and the one after that as a fast 5kB. No, this is part of the marketing; it's three kilobytes. It might change in the future, but for now they care very much about this budget. They put a budget on themselves and they enforce it. After caring about these four things, make it concrete, make it meaningful, make it integrated, and make it enforceable, how do you deal with new features? There are four things you can do. The first one: optimize existing features. Existing features are mostly features that were added early in the project, or in an MVP, and you kept adding on top of them, so most probably they're not optimized. If you optimize them, you keep some room for new features to be added without a problem. The second: remove existing features. Some features might not be useful for users anymore. And removing doesn't have to mean deleting; it can mean removing the feature at least [00:30:00] from the critical path for the user, for example by lazy loading it or moving it to a different page, so it doesn't affect performance on this page. That way you make the page you're having a problem with, because of a new feature, more performant. The third one: don't add the new feature. That might not sound very useful to anyone listening to us, "just don't add new features", but at least don't add a feature that affects performance to the critical path. Negotiate and ask the business: is it really important? Is it a core feature we need to add to this page? If yes, look for alternatives, make compromises, ask them: is it going to exceed the budget we put for this page? Is it okay to modify the budget in this case, or is it a hard line? Again, it's a consensus decision. You need to discuss with the whole team: [00:31:00] do we add this feature or not? This feature is fully optimized and it will still exceed the budget; can we extend the budget? Can we remove other features? Can we skip it for now and lazy load it later? And so on. The final recommendation for new features is to A/B test them. We do that at Miro as well: if we have a new feature and we would like to test it against performance, we A/B test it. We don't add it immediately, and we watch the numbers and all the metrics with the new feature, and afterwards we decide whether to add it to this page, in the critical path, or not add it at all.
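The "remove it from the critical path" option usually comes down to loading the feature's code on demand rather than in the main bundle. A minimal sketch with a dynamic import(), where the module path and the trigger element are hypothetical names:

```js
// Lazy-loading sketch: keep a heavy feature out of the critical path.
// './heavy-widget.js' and the button id are hypothetical names for illustration.
const button = document.getElementById('open-widget');

button.addEventListener('click', async () => {
  // The widget's code (and its dependencies) is only downloaded on first use,
  // so it never counts against the initial bundle-size budget.
  const { renderWidget } = await import('./heavy-widget.js');
  renderWidget(document.getElementById('widget-root'));
});
```

Bundlers like webpack split dynamically imported modules into separate chunks automatically, so the initial entrypoint stays under the budget discussed earlier.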
Noel: Nice. Yeah, that makes a lot of sense. It seems a little bit odd that devs are tasked with having to advocate for performance to the business all the time. Is there anything that breaks down that paradigm a little bit and makes it less strange? It does feel weird that the business has to ask, the dev has to go look, and then the dev is in charge of owning the business process that ensures [00:32:00] performance isn't going to end up degrading the bottom line somehow. That's quite a bit of ping-ponging, right? Is there anything you've found that makes it easier for the people asking for these features, who are thinking more about the business and less about the implementation, to bring this to the front of their mind?

Medhat: I would just push back with the part I talked about, the business metrics. Connect it again to real business metrics, discuss it with them, show them how we're doing right now. A/B test it and show them the numbers against the other variant as a start. I think that can be convincing. If it's a critical feature that has to be added, that's a different question. All right, in that case, let's discuss it and see whether we want to handle it in a different way. If, for example, TTI, time to interactive, is three seconds and it would become three and a half seconds, is that acceptable? Is it affecting our business, yes or no? If yes, okay, we need to [00:33:00] make compromises: remove other features, or at least shrink this feature a little bit. This feature has parts one, two, three, four; can we ship only one, two, three? If yes, okay. So we make compromises, and the business has to compromise as well. It's not only about developers.

Noel: Yeah, that makes a lot of sense. I know we've talked about a lot here; are there any tools in particular that you think are particularly effective at facilitating that conversation?

Medhat: I've only talked about the free options, LightWallet and setting up bundlesize and webpack's config, but there are a lot of paid tools that are totally worth trying as well, to be honest. I've tried SpeedCurve. It's a fantastic one; you set it up on your website and it gives you a fantastic dashboard. You get notified, and you get trends over time for different metrics. It gives you the trend, and you can hover over it and see when you broke a budget and when you fixed it, and based on the dates [00:34:00] and all that information you can go back in your code and find out which feature was added on that date that affected the trend. And it gives you this nice dashboard telling you what's green, under the budget; what's amber, which is, all right, close to the budget limit; and what's red, which you need to act on immediately. A budget doesn't stop you; a budget is a recommendation, but it should be respected. It should be a culture in the team. That's one tool; it gives you notifications and emails and can be connected to your CI and so on. Another one I've tried, also fantastic, is called Calibre. They do more or less the same thing. Like SpeedCurve, it gives you notifications that you can wire up with webhooks, to Slack, to email, and so on, and it also gives you the profile of the tests: it ran on a Motorola, it ran on an iPhone, the user was on this connection level, and [00:35:00] so on. That's very useful. And these notifications give you a real feeling of responsibility: all right, you've been warned, there is a problem, we need to care about that.
Even if you're spending, say, 20 percent of the sprint caring about fixing these performance problems, they keep the budget under the limit you set for yourself. There's another tool I can recommend as well; it's called DebugBear, and it gives you fantastic information, a fantastic collection of data. It does more or less the same thing, but they care a bit more about the new metrics coming up. There's something called LoAF, if you've heard of it: Long Animation Frames. I recommend people go and search for it. It's a new metric that's coming; it captures how long the browser takes to produce animation frames on the screen, and it ties into INP and so on. So I recommend these tools; they're paid: [00:36:00] SpeedCurve, Calibre, and DebugBear. Go try them out. There are tons of others out there; to be honest I haven't tried all of them, but they more or less have the same features. For enterprises they might provide more support, more webhooks and so on. They're paid, but totally worth it, of course, if you care about the performance wins you have and you would like not to lose them in the future.

Noel: Yeah. Are there any other emerging trends or tech that you've been excited about, or that you see being impactful in the future?

Medhat: Yeah, the two I mentioned; I'll just recall them. The first one is INP. INP was a huge win. I'm a Google Developer Expert in web performance; that's why I knew about INP well before it was announced this year. I was super excited about it being announced at Google I/O this year, and super excited about it becoming a [00:37:00] Core Web Vital next year. It's very accurate and very important to care about. I gave a talk before about how to optimize for INP as well; I might share it with you later. That's one trend. The other one is Long Animation Frames. LoAF is something very important, but it might not be very visible for small businesses or people with less traffic or fewer features on a page. It's extremely important for people who have heavy pages: heavy animations on the page, heavy images or videos and so on. So yeah, go check them out: INP and LoAF, that's capital L, small o, capital A, capital F. There are a lot of other things online with that name, so search for "LoAF measuring performance" to be specific.

Noel: Sure. Nice. Cool. Is there any other material you'd recommend devs look at and explore to deepen their understanding of performance?

Medhat: Yeah, if you care about [00:38:00] performance in general, I always recommend web.dev. It has a lot of good material. I'd also recommend the Calibre emails; it's called, I think, perf.email. It's a mailing list only about performance, fantastic, with the latest trends and information about performance. That's the best. There are a couple of Twitter accounts too; to be honest, I don't know the handles right now, but I can share them later.

Noel: Show notes.

Medhat: Yeah, I can put them in the show notes, of course. And, shameless plug, you can follow me as well at medhat.dev. I write about performance from time to time. I can share other links in the show notes as well.

Noel: Perfect. Cool. Thank you so much for coming on and chatting with me. It's been a pleasure.

Medhat: Thank you very much, Noel.