Ben: Hello, and welcome to PodRocket. Today we're here with Jon Kuperman, who's a developer advocate at Cloudflare. How are you, Jon? Jon: Good. Thanks for having me on. Ben: Yeah, I mean, I'm really excited for this episode, frankly. I've been a big fan of Cloudflare's products for years. I mean, I use the basic DNS and those kinds of things for pretty much every website I launch. I also use Workers and some of those newer products as well. What I'm really excited to do today is to maybe quickly go through the full product suite, just to understand the full gamut of products that Cloudflare provides, since I don't particularly follow the news in terms of what Cloudflare is producing. Ben: So I feel like I'm going to learn about some new products I don't even know about. Then we can maybe talk in more depth about some of the areas you work on specifically. Jon: Yeah, that sounds great to me. Ben: Maybe we can start at the top. Was DNS the first product Cloudflare released, or at least one of the early ones? I'm curious, where did Cloudflare start? Jon: Yeah. I think they started with a lot of security stuff, so they were starting off with anti-spam. But as far as flagship products, I think DNS and CDN were two of their early, out-of-the-gate ones... Those are things that they've been very well known for, for years. Ben: Right. Yeah. That's a good point about the security. I remember always on the website, there's a button that says, "Are you under attack? We can help." Jon: Yeah. Ben: So if you're getting like a DDoS attack or something, I imagine Cloudflare can step in and help. Jon: Yeah, I think that's a thing... I'm trying to think, because I joined only a few months ago. But my image of the core products was definitely like, "Always use the CDN. Always know that they're there for DDoS attacks." And then, I also know that they do a lot of cutting-edge stuff when it comes to application firewalls or security rules, stuff like that. I think that they're very security minded too. Yeah, happy to start with any of those. Ben: Yeah. Maybe we could start with the DNS and CDN, like what's special about... There are a lot of DNS and CDN providers, so what's special about Cloudflare's functionality there? Jon: Yeah, absolutely. I think a few things. There are a few products we offer; we just became an official domain registrar a few months ago. I do think that, if you're already using our toolkit, it makes a ton of sense to use us as a domain registrar. But at the end of the day, I think with the domain service, we're providing what other folks do. Jon: When it comes to our CDN, or our 1.1.1.1, which is our DNS resolver, there's some really special stuff that we do there. Because with the CDN, for example, we have a lot of locations. We're well over 200 locations. We've got some cool things. I think if you go to the workers.cloudflare.com website and scroll down, there's a little button you can click to see how fast data comes back from the closest CDN node. It's really fast; we're talking under a hundred milliseconds a lot of the time. Jon: We have locations all over. We're also a company that really benefits from almost a snowball effect. The more customers we get on our free tier stuff, like our CDN and things like that, the more we can analyze things like traffic patterns, and we can come up with some really cool tools, which I'd like to talk about later. Things like Argo Smart Routing and things like that. Jon: Yeah.
For the CDN, I think, A, the number of points of presence that we have, and then, B, the ability, as we get more and more traffic, to analyze it anonymously and try to optimize it. I think that's a big thing that we offer there. Similarly, with 1.1.1.1, which is our DNS resolver... For example, you can go get a free product called Cloudflare WARP. You can put it on your phone or your computer, and it should just sit there behind the scenes. But it's very cool because it's really fast, so your requests should just load faster. That's all internet requests. Jon: It's fully encrypted. There have been a lot of articles over the past few years about DNS resolvers harvesting data or selling data, things like that. So we take a very security-first approach to those things. I think those are the big things: the security-backed, encrypted side, and then also that we're the biggest player in the game as far as points of presence go. There really is a big ROI there when you get these 50-millisecond or 100-millisecond response times. Ben: Right. I guess a lot of people, by default, will use the CDN that's part of their cloud provider, whether it's Amazon, Google, or Microsoft. But are there advantages to not having your CDN hosted by the same provider as your main cloud infrastructure? Jon: Yeah. That's a really good question. I think my gut feeling says probably to just go with your current provider in a lot of cases, just because of ease of use, like if you're already on Amazon and you have that offering. I do think there's truth to the idea that having a disconnect or having a separation of concerns there could be good. Jon: But I think all of us are at the point where we have so many points of presence that you're probably not looking at a CDN-level outage for many of these companies. So I think you should probably feel free to stick to whatever your hosting provider is, unless some of the more advanced features like tiered caching or Argo Smart Routing really appeal to you, because those are Cloudflare-exclusive features, which are more advanced performance concepts. Ben: You mentioned Argo Smart Routing before as something you're excited about. What is that? Jon: Yeah. These are some of the coolest things for me. Earlier, I was talking about this snowball effect where the more traffic we get, the more we can analyze patterns and everything like that. With the traditional internet, like if you do a ping of google.com, the internet routes that packet through to one of Google's data centers and back. And the way that it optimizes for efficiency is fewest number of hops. It's trying to get you from A to B with the fewest number of steps, and probably nine times out of 10, that's the perfect thing to do. It's exactly what you would want to do. Jon: But we see this happen all the time where one of these very important centralized internet locations or data centers will start being under too much pressure, or they'll start going slow, or they'll have an outage, something like that. And then we'll see this very observable traffic backup, where everybody's trying to route through this place in North Carolina and it's going slow. Jon: And so, even though it's still the fewest number of hops to Google, there are arguably faster ways to get there by going around it. And so, what Argo Smart Routing does is analyze all of the network traffic that goes through our network.
Basically, I think the simplest way of saying it is: the internet does fewest number of hops; Argo does fewest number of hops unless we suspect that we can get you there faster. Jon: And so, as we detect, like, "Oh, there's a real slowdown in this one node," we can start routing traffic around it. So even though you might get one or two more hops, we're still giving you a lot faster time to first byte, resolving those traffic requests. Ben: Is that just looking at nodes within the Cloudflare network, or does it also look at the underlying providers serving a given asset that Cloudflare is caching or providing the CDN layer for? Jon: Yeah. That's a really good question. I think the easy answer is it's really just looking at our nodes. We're only able to really observe our nodes and where things are going faster or slower. But those nodes are located at big data centers that serve a lot of traffic. I think that if something in North Carolina is going slowly, there's a really good chance that there's something going on in that data center, so it would apply to everybody. But our metrics are only run on stuff that goes through our network. Ben: Got it. Let's say I'm using Cloudflare to cache and provide a CDN for my images. When one of my clients or one of my users requests an image, normally it might use the data center closest to them. But if that data center is under load, then it'll use a different data center to serve them. Jon: Exactly. Yeah. They don't have to be customers. They could just be users of your app to get that benefit. Ben: Got it. Yeah. That makes a lot of sense. I know Cloudflare... I've seen there are some features that have been coming out that, on the subject of images, can automatically optimize images, automatically use the WebP format, which is something that's new and has a lot of excitement around it. Can you talk some more about ways that the Cloudflare CDN adds intelligence on top of just serving the same asset faster? Jon: Yeah. Yeah. I love that stuff. So yeah, for me, what's really interesting is there was this... I don't want to dramatize it or whatever, but there was this interesting period where browsers had these different standards and things were getting better, but we were still serving giant bundles of unminified, unconcatenated files... All this performance stuff was going around. Jon: And so, I think a lot of different companies tried to move into this space in different ways. I think that, I don't want to say we got lucky, I'm sure it was intentional at the leadership level, but I think that coming in at the DNS level was a really smart move for Cloudflare. For example, there are a lot of really cool command line tools, whatever code you're writing, that will do these things like build steps, smart build steps. So they'll minify, they'll concatenate, all this stuff. Jon: But I think that us being at this DNS level has led to this really cool experience where you can put up your just regular website. It can be completely unoptimized, and then you can go to our dashboard. I always describe it like a shopping list sometimes, where you can go through and be like, "I would like you to minify my JavaScript." Like, "Oh yeah, you should totally optimize all my images." Stuff like this is really cool. Jon: I think one way I think about it is, if you have a really good tech stack, a really good build pipeline, you probably run these scripts locally or on the build step in GitHub that will do a lot of stuff. They'll minify. They'll dead-code eliminate.
Maybe they'll optimize images and stick them in a cache. They'll do all this heavy-lifting stuff, which is great. One downside to that is it requires quite a bit of know-how and time to get that all working with your current stack. Jon: And so, I think Cloudflare is providing the same solutions, but in a really novel way where you can just upload a super normal Node or HTML/CSS website. You don't have to... Like, my personal site does no minification or file concatenation or image optimization. I just stick everything up there raw. And then I just use the Cloudflare services to go ahead and do that. Jon: And so, then, if you hit my website, you'll see very well optimized, minified, concatenated, and image-optimized stuff. But for images in particular, yeah, we're always... They've gone through a lot. The browser standards have gotten really good. So we've got the AVIF and WebP formats. We have the ability to do lossy or lossless image optimization, depending on where we're using the images. Jon: We also have some really cool image caching where we can resize images on the fly and cache those resized images. So we have a lot of really cool tools that you can just go into the dashboard and play around with. Ben: Yeah. I mean, that's all pretty awesome. The way I've heard it described is that... Tell me if this makes sense to you. Essentially, when you're using Cloudflare for your DNS, or I guess rather for your... Yeah, for your DNS, I guess, you're allowing Cloudflare to man-in-the-middle your website and then do useful things. So- Jon: Yeah. Ben: Cloudflare is able to, yes, sit in the middle. Normally a man in the middle is a bad thing, but this is a good example where it's providing value and optimizing your images or caching or doing CDN. Jon: Yeah. I really see it as, after all the dust settles, a really good abstraction point, as opposed to build tools, or CI tools, or browser updates for your users. We picked this abstraction point, which is DNS, and I think that's paying off really well for us right now. We call it being orange-clouded, like you're in. Once your website is fully on our DNS, there's all of this stuff we can do. Jon: It's just a really nice point to sit at where we can optimize. We can get you analytics. We can update your images. We can push all your files. We can give you custom caching rules. There's just so much you can do that's a dashboard button click, as opposed to with other frameworks, where I think it would have to be like, "Oh, go find a new npm module, install it, put it in your GitHub CI pipeline," just a little bit more hands-on for the same effect. Jon: So yeah, I think it's a really nice point. I also think it's nice because nothing ever becomes Cloudflare-specific about your app itself. Like, my website will just stay a ball of HTML, CSS, and JavaScript. That's all it is. So if I wanted to move it to another platform, of course, I'd lose those Cloudflare one-click optimizations, but nothing gets embedded into your code that becomes Cloudflare-specific. It's just stuff we provide on top. Ben: Yeah, no, I mean, I totally see all the benefits there. I guess, playing devil's advocate, by having a lot of these things be checkboxes or configurations in a cloud dashboard, they're not subject to version control or code reviews or some of the team workflows that a larger development team is typically used to. What is recommended by Cloudflare in terms of best practices for auditability and code review or...? Yeah.
Jon: Yeah, that's a great question. I think we try to think about it in terms of marking the danger level of each service that we're offering here. One way that we do it is what things we mark as beta, versus GA, versus default-on. Those would be the three steps. For one example, we offer brotli support. For those that don't know, there are ways that you compress files on your server, that the browser can decompress, so you send fewer bytes over the wire. Jon: We've had gzip for a long time. Now we have brotli. Brotli support is considered basically 100% safe. Turning on brotli support should not ever break anything. Using our Cloudflare Pages product, when you make a new page, we just serve everything over brotli by default. You can go turn it to gzip if you'd like, but that one we have decided is something anyone at your company should be able to just click on; you should see savings. It should never take anything down. Jon: Then we have other products. For example, we have a really cool product called Rocket Loader, which is pretty advanced. It rewrites all of your script tags with a rocket-loader script type, so the browser doesn't run them. It tries to load in all your HTML and CSS, and then, only when that's done, starts running the JavaScript, in an attempt to really optimize your site, especially if you have a bunch of ads or things running in JavaScript that aren't required. However, I'm sure you can imagine immediately that that can totally break depending on- Ben: Yeah. What could go wrong? Jon: Right. And so, for that one, we would never turn that on by default. We also have a bunch of warnings on it about how to test and things like that. And we provide some cool performance metrics where we can load your site with both Rocket Loader on and Rocket Loader off, so you can see some of the differences. I think the simple answer is we try to cover that in the dashboard via: is this just automatically on, is it off but easy to turn on, or is it off with some warnings and some blog posts that we recommend you read before you turn it on? That kind of thing. Ben: Got it. Makes sense. Let's talk about Cloudflare Workers, because I feel like it's one of the most powerful tools I've used in the Cloudflare platform. Just in case anyone hasn't used Workers or isn't familiar, could you give us a quick introduction to what Workers are? What can you do with them? [crosstalk 00:16:32] Jon: Yeah. Yeah, I'd love to. I hope I don't go back too far here, but basically you go back like 10 years ago or whatever, and we were all building and deploying apps the same way, where we would find a VPS provider like Linode or DreamHost or, I don't know, something like that. We would get a machine or maybe a couple of machines. These are virtual machines, but they're still... So you're like, "Oh, I need one maybe in this location, one in the US, one in Europe." There are all these things you're thinking about all the time. Jon: And so, you get these machines and you pay by the memory and the CPU and all this stuff. Then you deploy your code to the machines. And then, if your site gets really popular, like it's too much for one machine, then you've got to get more. Then you have to get a load balancer, and that load balancer will have to help navigate, like send traffic to the one that can take it, the fastest one. It was a big thing.
Jon: I always remember, at my first jobs, that there were a lot of folks in charge of very important tasks, like making sure that our servers stay up and don't crash. There are also updates to think about, a huge thing. So you get like 10 servers and they're running Ubuntu or whatever on them. And somebody's going to have to go in, or write scripts that will go in, and deploy those updates. Then we tried to modernize a lot of that with this code as config or whatever they call it, config as code, I think. Jon: We had these cool ways with Docker and things like that, where you could just have one Dockerfile that you would send to all 10 of your machines. So it got a lot better there. But still, there's all this stuff. I think back 10 years ago, if somebody asked you to build a production service, you should be nervous, because it's like, "Oh, how many servers do we need and where should they be? And who's going to be on call, and who's going to be..." All this stuff. Jon: And so, Amazon led the way with this kind of serverless revolution... which I really just didn't understand at first. But the basic idea was that they tried to just take away a lot of those decisions. And so, you could write each kind of... If you had an Express.js app back in the day and it had like 10 routes in the main file, like GET slash, GET list, all this stuff, you would write each one of those things as a separate function and you would deploy all 10 of those to AWS. Jon: Now they can take care of a lot of it for you. You don't have to worry about the operating system. You don't have to worry about updates. You don't have to worry about paying too much, because you just pay for what you use, and all this great stuff. And so, for folks who were doing that, it really made the job a hundred times easier, where it was like, "Oh cool. We just need to make sure that our functions are working well and they go up." That was the premise of it. Jon: And then, since then, all of these other companies have come up with really cool ways of offering serverless packages that compete with Amazon in some way. Amazon's is you still pick a data center. You go to Amazon and you pick, like, "Oh, US East," or whatever. Then you upload your function or whatever. And then the way that they handle it is that they have these virtual machines. I don't know if folks have heard of hot starts and cold starts with these serverless functions before, where if it's been too long and the VM shuts off and somebody hits it, then the VM has to spin up again. And so that's a cold start that takes a while. Jon: Similarly, when you get really big and you want to distribute it amongst locations, you need to use more AWS services, where you're like, "Oh, I'm going to get some US East and then some US West, and then I'm going to load balance between them." And so, then, at the same time, or sometime later, Fastly came out with this really cool idea, which is they run, basically, a WebAssembly-only VM. So they're still doing the VMs like Amazon is, but they're lightning fast. They're super, super, super quick cold starts. So you can do all this really cool stuff there with like... Jon: Yeah, as long as you can write something that compiles down to WebAssembly, then you can deploy it to Fastly and you'll get these really cool effects. So then Cloudflare's competitive approach went in a different direction. We decided that we were going to do this edge first, basically.
So instead of, when you go to Amazon, you pick US East, we abstract that away as well. So everything serverless in general abstracts away, like OS and version updates, number of machines. Jon: But then Cloudflare's serverless, which is Workers, also abstracts away location, so that you don't think about things like where you're going to put it. You just write your code, and then we can do a lot of really cool optimizations where we... I think one simple way of thinking about it is we take your code and we put it at all, like, 200-plus of our data centers. So it's always really, really close to your users. Jon: And so again, it's like the serverless offering: you write your functions. We support JavaScript and TypeScript, and then we also support WebAssembly. So anything, really; you could write Python and compile it down, or Go and compile it down. And so, yeah. We have this cool offering where it abstracts even further, where you just don't worry about where the thing is going to go. It just goes everywhere. Jon: And so, you get this really high performance, serverless framework. Oh, the other thing I wanted to mention is that we also went a different way with the VM approach. We don't do a VM approach where you spin up a VM for each serverless function. We use Chrome's V8, which is the engine behind Google Chrome. V8 offers this concept of isolates, where you can run code in the one VM that stays on all the time. Each function runs in a V8 isolate. Jon: And so, isolates are protected from each other, where they can't interfere, or they can't share or steal memory, or anything like that. But it's a different approach where we don't have cold starts, because our VM is never off. We just have this one big Google V8 VM that runs your code in separate isolates. Ben: This is maybe getting really in the weeds, but I'm just curious, is there like one V8 VM per site, per Cloudflare customer? Or is it shared across multiple tenants? Jon: That's a really good question. I assume it has to be multi-tenant somehow. I honestly don't know the answer. I assume there's no way that there's just one instance per data center or whatever. But I'm not sure about how that works, like figuring out which one you go to or how you get prioritized. I'm not sure on that. Ben: Are those isolates what underpin individual tabs in a browser that's based on V8? Jon: Yeah. I also think they're the same thing that would work for your web workers and things like that. I think that one of the reasons it's called Workers is because it uses a lot of the same security protections that you would get when you make a new worker in your browser. Ben: When you're writing this JavaScript code that's running as a Worker, what APIs do you have access to, in terms of what is the format of a request that comes in for a Worker to process? What does it have to spit out? How does that work? Jon: Yeah, that's an awesome question, because this is one of the things that tripped me up forever, where I was like, "Okay, I get that serverless is cool, but it scares me. I don't know what I'm doing." Yeah, all of our stuff is based around the Fetch spec, the actual MDN, the actual W3 Fetch spec. What you get is a request object, and what you must return is a response object. Jon: And so, it really is... Our initial syntax was the addEventListener syntax. So you would do addEventListener and you could listen to a fetch event or a ScheduledEvent, which is our cron job thing. That fetch event would receive a request object.
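To make the shape Jon is describing concrete, here is a minimal sketch of a Worker using the addEventListener (service worker) syntax he mentions: a fetch event hands you a Request, and the handler must respond with a Response. The handler name and response body are purely illustrative, not something from the episode.

```js
// Minimal Worker, service-worker (addEventListener) syntax:
// the fetch event carries a Request; the handler must produce a Response.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const url = new URL(request.url);
  // Echo back a bit of the incoming Request to show the Fetch primitives.
  return new Response(`Hello from a Worker! You requested ${url.pathname}`, {
    headers: { 'content-type': 'text/plain' },
  });
}
```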
And then it's exactly like you're on the web or in Node: anything you can do there, like set headers. You can async/await, go get data, stuff like that. Jon: But the primitives that you're working with there are going to be request and response from the Fetch API. Then if you go to workers.cloudflare.com, we also have a lot of other APIs that we support. And so, alongside just doing the normal stuff, like your Fetch request and response, we also have a lot of other things. Trying to think now. We have the crypto APIs. We have encoding stuff. We have the cache, the web cache stuff that you would expect. We have WebSocket support. So a lot of other things like that. Jon: But it is a little bit of a game, where I think one thing to know right away is that these Workers are not Node. They're not a full Node.js environment. And so, you wouldn't necessarily be able to take a gigantic application with all these dependencies. You would run into areas where you would need a polyfill, something like that. But when you're writing your own ones, you get a very familiar... Jon: We only add things that are spec compliant. But it is a constant mission for us to find popular specs that people want to be using and then find ways of supporting those. Ben: What are some of the typical use cases you see for Workers? Jon: Yeah, so I think one of the most common ones I've ever seen is simply adding headers. For example, one of our products that's really cool, we have this bot management product. What it does is it uses all of this machine learning and all of this data to take each request and analyze it and give it a score, zero to a hundred, of how likely we think it is a bot. So you then have full control over what you do with that. Jon: Like, you could use our CAPTCHA and you could say, "Oh, if it's over 70% likely, give it a CAPTCHA. I don't want that." Or you can just block it, be like, "Yeah, I don't want to mess with that." But then the question is, well, how do you get that data on the client side? And so, one really simple Worker, say you're using bot management, you would take in each request. You would grab the bot score from the bot API, and then you would append a new header, like, "Add a new header, Cloudflare bot score," and then stick the score on. Jon: Another way is if you wanted to have a security header, like you wanted to add... I'm trying to think of... Or like a CORS header, anything like that. Workers would be a really good way of just proxying a request, adding some headers to it, and then returning the response. What I'd love to get into is our new move into stateful stuff, which is another really good chance to use Workers. Now that we offer a Key Value store and our new product, Durable Objects, you can build way more robust things with Workers. Ben: Yeah. I think those are newer. I was doing some work with Workers about a year ago and I think Key... Maybe it's more than that, two years ago, Key Value was just coming out. But I think that was the only mechanism for state in Workers. It sounds like there's more now. Can you tell us a bit about some of those mechanisms, and then what can you do if you have stateful Workers that you couldn't previously do? Jon: Yeah. Yeah. You have three options. You can use either of our two things, which are the Key Value store or Durable Objects. Or, we're constantly adding better and better support to connect with your real databases, if you've got some data in MySQL somewhere.
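Here is a hedged sketch of the header-adding use case Jon describes above: proxy the request to the origin, then attach headers to the response before returning it. Reading the bot score from request.cf.botManagement is an assumption about how bot management surfaces it (availability depends on your plan), and the header names are invented for the example.

```js
// Proxy the incoming request and append headers to the response.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Assumption: bot management exposes a 0-100 score on request.cf;
  // the exact field and its availability depend on your Cloudflare plan.
  const botScore = request.cf?.botManagement?.score;

  // Fetch the original resource from the origin.
  const originResponse = await fetch(request);

  // Responses returned by fetch() have immutable headers, so copy first.
  const response = new Response(originResponse.body, originResponse);
  if (botScore !== undefined) {
    response.headers.set('X-Bot-Score', String(botScore)); // illustrative name
  }
  response.headers.set('Access-Control-Allow-Origin', '*'); // e.g. a CORS header
  return response;
}
```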
On the database side, we're always working on our open source adapters for databases, and we're really trying to improve that story over the next year, making it easier and easier to just use your real data. Jon: And so, for that stuff, yeah, look at our database adapters. And then you can just do whatever you were planning on doing with a non-serverless thing. You can just connect to your data store. With KV and Durable Objects, those are things that we offer in our edge network. They're really similar, which I think is a little bit confusing. But the Key Value store is eventually consistent, and you can store any string key with any string value. Jon: So it'd be very similar if you've used Redis. I know a lot of companies have these Key Value stores. One example of what you could do with it: you could have a blog, I don't know, an Eleventy, or a Jekyll, or whatever blog. On the build command, it takes all your markdown and generates HTML. So then you could just have a Worker that would take each HTML file and stick it in the Key Value store with a path like "my first post", and the value would be the HTML. Jon: Then you could just build a Worker that intercepts all routes, pulls the right one out of the Key Value store, and serves that. And boom, you have a stateful blog just on a single Worker. The issue with the Key Value store, like I said, is that it's eventually consistent. So that Key Value store is at all, like, 250 or whatever of our data centers. So if you're looking at my blog and I update it with a new push, you are not necessarily going to get it on the read for maybe even up to 30 seconds as it propagates everywhere. Jon: It's really good for some things. Like, when I update my blog, I don't need that to be in real time. It would not work for other things, like a chat application or Google Docs commenting or collaboration. If you have something that's just simple reads and writes, KV is free. It has a really simple API, really easy to use. I've used it for sticking all sorts of stuff in, like blogs or metadata, anything like that. Jon: When you get into the need for fast reads or consistency, that's our Durable Objects offering. Durable Objects are really cool. This is probably the thing I'm going to work the most on in the next quarter, documentation and examples for Durable Objects, because I feel like they're very powerful. But they're essentially just a simple JavaScript object where you can put anything that you can put in JavaScript state. So sets, maps, objects, arrays, strings, anything like that. Jon: But it comes with some special APIs, including reading and writing to disk, while also keeping stuff in memory. And so, this would be a really good thing if you wanted to build collaboration or chat, anything real-time communication, stuff like that, that would also write to disk. The nifty thing that I love: so the Key Value store goes to all 250 of our data centers, which is why it takes a while. Jon: But a Durable Object is this magical creature. There's only one of them. It's like a singleton. What it does is it moves around our CDN based on where the most requests are coming from. It's this single, consistent thing; you can always read and get fresh data from it. Then the thing itself magically, behind the scenes, moves around to get closer to whoever's reading from it. It provides this really cool experience for building stateful, but also lightning fast, applications.
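As a rough sketch of the KV-backed blog Jon outlines above, assume a build step has already written each rendered page into a KV namespace bound to the Worker as BLOG_KV; the binding name and key scheme are invented for the example.

```js
// Serve pre-rendered HTML out of Workers KV, keyed by pathname.
// Assumes a KV namespace bound to this Worker as the global `BLOG_KV`,
// populated at build time with keys like "/my-first-post" -> "<html>...".
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const { pathname } = new URL(request.url);
  const key = pathname === '/' ? '/index' : pathname; // illustrative key scheme

  const html = await BLOG_KV.get(key, 'text');
  if (html === null) {
    return new Response('Not found', { status: 404 });
  }
  return new Response(html, {
    headers: { 'content-type': 'text/html;charset=UTF-8' },
  });
}
```

Because KV is eventually consistent, a freshly pushed post can take a little while to show up everywhere, which is exactly the trade-off Jon describes.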
Ben: This is maybe a silly question, but in this context of serverless with Durable Objects, what is the difference between keeping something in memory versus writing it to disk? Jon: Yeah, that's a really good question. There's always going to be latency with the writing to disk, for sure. And the whole Durable Object API is async, like the rest of JavaScript. So you can just wait on it. If you do, like, a Durable Object that does await this.storage.get('messages'), something like that, there's nothing wrong with that. The client will just wait an extra few milliseconds as that reads from disk. Jon: But the API that I've been using the most is just to have, like, two variables in my Durable Object. One is the memory one. And then on writes, I just write the memory to disk. Then, if the memory one doesn't exist because the Durable Object has shut down, I read it back in from disk. That's the method I've been going with, because it just shaves a little bit of that time when people go to read, to be able to immediately return it from memory. But it's still really fast, no matter what. Jon: But I would say, if you're building something that's super time-sensitive like chat, you would just want to keep a copy of the state in memory at all times. Ben: Got it. We've kind of talked about Workers, which involve writing JavaScript code. And then we've also talked about Durable Objects, which are starting to approximate a database. I'm curious, when it comes to the code you write in Workers, is there tooling to put that code in Git, or whatever version control you're using? What does testing look like in terms of writing that Worker code? Ben: Then the third question is around some of the similar software engineering concepts when it comes to a database, like migrations and schemas... Yeah. Let's talk about those- Jon: Yeah. Those are all great. Let me know if I forget one as I go. But yeah, those are awesome questions. Originally, I think we had this idea, since our network's so fast and deploys are so quick, that we could maybe skip a lot of these local dev or testing environments, because you can just grab a new domain name and deploy it. I think as we go further and further, we've realized there's just a big need for this kind of stuff. Jon: We have a lot of really exciting things. We just announced a new version of our CLI, which is called wrangler. The new version has really cool built-in stuff where you can... It's super fun. Basically, you start your app with your Workers and your Durable Objects, and then you have single-key toggles between "Move it to the edge and give me a link to that edge," or "Okay, move it back locally so I can run my tests against it locally." Jon: We use an open source product called Miniflare, which replicates the edge experience locally. It doesn't have everything, but it maps what it can. Miniflare, which is in wrangler 2.0, is the great answer to how to do tests, local development, stuff like that. It also pairs really nicely with that single-key toggle. We also have this awesome product called Argo Tunnel, which is like a competitor to ngrok: you take your local app and it exposes a secure, unique URL. Jon: That's another cool way we do things: we'll build something, run our tests via Miniflare, and then we'll generate a unique URL for it locally and have our co-workers test it out.
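Circling back to the memory-plus-disk pattern Jon described for Durable Objects a moment ago, here is a small sketch of it: keep the working copy in memory, write through to durable storage on updates, and rehydrate from storage if the object has been restarted. The class name and message shape are illustrative, not from the episode.

```js
// A Durable Object that keeps messages in memory for fast reads and
// writes them through to storage so they survive the object restarting.
export class MessageLog {
  constructor(state, env) {
    this.state = state;
    this.messages = undefined; // in-memory copy; undefined until loaded
  }

  async fetch(request) {
    // Rehydrate from disk if this instance has been restarted.
    if (this.messages === undefined) {
      this.messages = (await this.state.storage.get('messages')) || [];
    }

    if (request.method === 'POST') {
      const { text } = await request.json();
      this.messages.push(text);
      // Write-through: persist the updated list to durable storage.
      await this.state.storage.put('messages', this.messages);
      return new Response('ok');
    }

    // Reads come straight from memory, no storage round trip needed.
    return new Response(JSON.stringify(this.messages), {
      headers: { 'content-type': 'application/json' },
    });
  }
}
```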
So yeah, I think as far as the testing goes, we also have the option of environments. You can have, like, a stage or pre-prod and a prod environment, so you can deploy much like a traditional app is deployed. Jon: Sorry, I'm trying to remember all of your questions. That was testing, deploys... Oh, migrations. You were talking about data migrations, right? Like how you would handle- Ben: Yeah. Jon: ... database stuff. We have two options there. Again, the high level is that we have... There's a config file that goes in all Workers. It's called wrangler.toml. As you add Key Value or Durable Objects, that goes in there too. So we do have options like: you can change the version of your Durable Object, which will mean on the next deploy it'll get rid of all that data and start with new state. Whereas if you keep the same version, you can update the application code while keeping the data. We do offer things like that. Jon: I do think, though, that long term, we're not trying to be your giant MySQL database. We're not trying to be like that. And so, I think while some things like a message queue are perfect for Durable Objects, I also think we're working really hard on supporting your current databases as best as possible. Ben: What exactly is Cloudflare Pages, and how does it differ from Workers? Or maybe they're not at all similar- Jon: No, that's a good question. Ben: Yeah. Jon: Yeah. A little bit of the history. We launched Workers, and Workers is a primitive. You can do anything you want with it. And so, then one thing that people wanted to do was build a blog with it. And so, then we're like, okay, well, we made this thing called Workers Sites. And Workers Sites is just Workers... It's exactly what I described earlier. It's Workers, but on the build step, we go through and we take all of the things in your posts directory and we stick them in a KV store. And then it generates a router. Jon: That works great. But then it's still a little bit weird when all these other competitors have these beautiful one-click solutions for static sites and we're out here like, "Okay, run this command line and then set up this," and all this stuff. So we packaged it up into this product called Cloudflare Pages, which works very similarly to Netlify. You link it through to your GitLab or your GitHub URL, and then boom, it deploys to a unique Pages site and you can grab a custom domain. All that stuff's great. Jon: But what's special about it, going back to the beginning of our talk, is that once you are in with your custom domain on the DNS, that Pages application is just one click away from all of our existing Cloudflare optimizations. If you want to just move your site over to Pages, then you're just one click away from brotli, one click away from WebP image support. You know what I mean? You can just start adding all of these things: DDoS protection, CAPTCHA, bot management, all of this stuff. Jon: Really, Cloudflare Pages is a very direct, apples-to-apples competitor to the Netlify workflow. But our thinking behind it is that it's a cool way of moving static sites into the Cloudflare ecosystem, where they can easily take advantage of some of the more advanced features too. Ben: Do you have a deep integration with GitHub where you can build a preview for each branch and some of those really nice convenience features that Netlify has? Jon: Yeah. Yeah. I'm a gigantic fan of Netlify. I love their stuff. I know you've had them on a lot before. I think their product is amazing. So yeah, I think we try for a lot of that main feature parity.
Each commit generates a unique URL. Those unique URLs stay alive. You can configure your build pipeline so that certain branches generate URLs but not others... All of that stuff we offer as well. Ben: It seems like Cloudflare is doing a lot now in the video hosting and streaming space. Can you tell us about some of those tools that have come out recently? Jon: Yeah. This is a space I'm really excited about. For a little bit of backstory, both when I was at Twitter and when I was at Adobe, we started getting really into video, and it is very technically difficult work, once you want to start getting into streaming and managing different file formats and trying to eliminate latency. And so, at both of my previous jobs, we ended up hiring a pretty huge number of engineers; we were trying to get folks from Netflix and Twitch and all these companies, because it's very difficult to get right. Jon: And people expect extreme quality these days because of Netflix and Twitch and things like that. So I'm really excited about this. Our video platform, Cloudflare Stream, is really trying to just sit behind the scenes. It's like a B2B product. It's really aimed at folks who want to make their own video applications without having to learn about all these formats and transcoding and all this stuff. And so, at its simplest, it has APIs for starting a stream from a web client, for streaming that out to people via a unique URL, and for re-streaming it, if you want to broadcast it to YouTube or Twitch at the same time. Jon: Then, after you're done streaming, there are APIs for processing the videos, giving you metadata on them, and then giving you your own API to serve those videos. For our last innovation week, I built a demo, which I can link after the show, which is just how to do it. It's a simple dashboard. It's all built on Workers with some KV, some Durable Objects, and it shows you have an admin section, which uses another one of our products, Cloudflare Teams, to regulate access. Jon: If you go into the admin, you can stream. It turns those streams into videos. You get to play with everything that you would need if you wanted to build either your own streaming platform like YouTube or Twitch, or even just an internal education site where you have learning videos and track analytics and stuff like that. And so, we try to offer all the hard parts bundled, and then give you as much configuration as possible over the UI, the video player. All of that stuff is not really... We're not trying to brand it, like if you would embed a YouTube video and it would have YouTube at... Jon: We're not doing anything like that. We're just very behind the scenes, handling the video stuff. I think it's really cool. I think it's the first time in my career I would even consider working on a video application, because it's gotten a lot easier now, if you use something like Stream. You don't have to learn about quite as much stuff. Ben: Yeah. No, very cool. Overall, what are you most excited about in the next year, to the extent that you can share? Anything on Cloudflare's roadmap or just general themes that the company is working on? Yeah. What are you looking forward to? Jon: Yeah. I think it's really interesting, because Cloudflare's very different than anywhere else I've worked, in that we're often a behind-the-scenes company. It's not necessarily about me using Cloudflare's CDN. It's about trying to get big clients.
The Stream product's really interesting, because we're almost positioning ourselves like, "We'll make a thing, and then you can use that thing to make your own video platform that you then charge for. And then we'll just be behind the scenes." Jon: I think the thing that's interesting about Cloudflare is we're seemingly obsessed with primitives. We like to build these building blocks that everyone can use. We use them ourselves to build the next thing. But what I think is really cool about this year and next year is I'm really starting to finally see the pieces all coming together, where, as opposed to, "Here's Workers, but then you're going to need to go find your own data store solution. You're going to need to find your own domain registrar." Jon: I'm starting to see, like, whoa, you can really easily build a SaaS startup just on Cloudflare, and we really make it a lot easier. I think one thing I'm looking forward to is, as a developer advocate, being able to tell that story and be like, "Look, I made my own youtube.com just off of Cloudflare services," or, "Now I made my own e-commerce store," stuff like that. I get a really big kick out of it when I read those stories. Jon: We often reach out to folks and have them write for our blog with stories about how they use our services. And I really love seeing that. So I think more and more we're well-positioned to be a pretty big player in this build-your-SaaS space, where you think about Cloudflare. I'm really excited to see the cool stuff that people build on top of it. Ben: Well, Jon, thanks so much for joining us. This has been fascinating to hear about all of the new Cloudflare tools and some of the philosophy behind what Cloudflare does and why they do it. So really appreciate you joining the podcast today. Hope to speak to you again soon. Jon: Awesome. Yeah. Thanks so much for having me. Brian: Thanks for listening to PodRocket. Find us at PodRocketPod on Twitter, or you could always email me even though that's not a popular option. It's brian@logrocket.