William Morgan: ... in the modern world, for an open source project, by and large, for the most part, we're beyond the days of it being this volunteer, nights-and-weekends thing. The majority of open source projects are funded by companies that have an interest in that project being successful.

Eric Anderson: This is Contributor, a podcast telling the stories behind the best open source projects and the communities that make them. I'm Eric Anderson.

Eric Anderson: All right. We're live today with William Morgan, the CEO of Buoyant and also one of the early creators of Linkerd, here to tell the story of Linkerd. Maybe, William, you can just give us a couple of words about yourself and the project to give context for our listeners, and then we'll jump into the story.

William Morgan: Yeah, sure. Thanks for having me, Eric. It's really nice to be here. My very brief life story is that I was an engineer at Twitter on the infrastructure side. Obviously, there's a lot more that happened prior to that, but that's where, for service mesh purposes, my life began. This was almost 10 years ago, so it was 2010 to circa 2015, and it was a time when Twitter was going through some very massive infrastructure changes in a public way.

William Morgan: And what happened at Twitter, the most amazing thing of course, is that it worked, because it was really this quite massive transformation: we had this monolithic Ruby on Rails app and we turned that into microservices. The output of that five-year process, which was really painful and involved a whole lot of lessons learned, often the hard way, sparked the initial idea behind Linkerd, which is our open source service mesh, sparked the idea behind Buoyant, and sparked the idea behind everything that we've been doing since.

Eric Anderson: You were part of this transformation, this core thing, and all the learnings on what you need to do to be successful with microservices, the missing pieces. Some of that informed what you've been doing since, basically.

William Morgan: Yeah, that's right. That's right, because we saw firsthand just how painful it was, right? Twitter's move from monolith to microservices was hugely expensive and it involved all sorts of things that had to be done, both on the technology side and on the people side, right? Both components there, both sides of that coin, had to move in some pretty substantial ways and to change in some pretty substantial ways. And so, when my colleague Oliver Gould, who's my co-founder and the CTO of Buoyant, and I left Twitter, basically the idea was, "Wow, that was a lot. That was pretty intense, and it sure feels like other companies are going to have to go through that same transformation."

William Morgan: And in fact, what has happened, which has been really fascinating to watch, is that the advent of things like Kubernetes and Docker and the cloud-native stack has actually made it really, really easy for companies to adopt microservices, at least easy on the deploy side, right? And so, it's actually exceeded our expectations for exactly how many companies and organizations would be going through this process.

Eric Anderson: Yeah, I don't want to go too far off track, but this is interesting, this whole Twitter transformation. This predated containers, Kubernetes, et cetera. What would the stack look like, just briefly?

William Morgan: Yeah, it was weird and it was very... The Twitter stack is a lot like...
It's like living in Australia or [crosstalk 00:03:42] irritate everyone from Australia, but it's evolved in this really weird way, where you've got kangaroos and koalas and meanwhile, the rest of the world is like, "No, we've got elephants." It was strange. We didn't have containers. We had cgroups and we had the JVM. We had the ability to limit resource usage and we had the ability to package stuff. There was no Kubernetes back then. We had Mesos. Mesos was a grad student project at the time and Twitter basically brought it to this [crosstalk 00:04:09] state. We didn't even really have the word "microservices," or we didn't know it, at least. We called this thing SOA, and we knew that was a dirty word.

Eric Anderson: Wow.

William Morgan: Right?

Eric Anderson: Yeah.

William Morgan: We hung our heads in shame. We were like, "Well, I don't know. We're making services and so it must be SOA."

Eric Anderson: Yeah. Right.

William Morgan: Yeah. But what's been amazing is now, if we fast forward to the modern world and people are hopping on Kubernetes and Docker and all that stuff, what's remarkable is, number one, just how advanced that stuff is compared to the Twitter stack. If you, as a startup today, are hopping on the Kubernetes bandwagon, man, you've got technology that's much, much better than what Twitter had, with some exceptions. And two, the problems that you have are actually really, really similar. Even though the details are totally different, the problems that you have are actually really similar to what Twitter had to go through.

Eric Anderson: Yeah. Okay, so you and your co-founder then are... You've been through this crazy evolution within Twitter. You leave Twitter, and are you leaving with, "We want to build a service mesh," or whatever you... a SOA mesh, whatever you want to call it, or are you leaving with, "That was hard. Let's take a break and figure out new things," and later, the inkling to build that emerges?

William Morgan: Well, there's some messy details in there.

Eric Anderson: Right. Right.

William Morgan: I think from my perspective, at least, and I think Oliver's perspective is a little different, but from my perspective, I knew I wanted to start a company, and I did try a couple of different ideas, to varying degrees of failure. And it was really... It was my founder journey, and I did the classic exercise you're supposed to do, which is: if you were starting something new, what would you want to bring with you from your previous company? And that's what led us down the path. Actually, honestly, those first couple of ideas I tried were very consumer-focused. It was really later when I realized, "Gosh, the stuff that I really understand and the stuff that really was transformative at Twitter was on the infrastructure side." It wasn't the consumer stuff.

Eric Anderson: Yeah. Yeah. That's your competitive advantage. That's the thing you know better than anyone.

William Morgan: I guess. Yeah, this was like, "Okay, what do I actually understand really well? What are the weird intuitions that I've developed, and that all of us at Twitter developed, that the rest of the world doesn't really have?" And that was the genesis for going down the path.
But the question of, "Did we start out with a service mesh," that's a really interesting question, right, because that gets into naming and marketing and some of the things where, early on at Buoyant, we learned some weirdly interesting things that were unexpected, I think, coming into this process as engineers, where you call it whatever it is and you expect the world to consume it as is, right?

Eric Anderson: Right, right. Yeah.

William Morgan: The world doesn't really work that way.

Eric Anderson: Yeah. I like this idea of the first commit, and maybe that's not the right thing to hone in on, but what did day zero look like for Linkerd?

William Morgan: We started with this project, this open source project that came out of Twitter, that was called Finagle, right? And Finagle was this really transformative piece of technology at Twitter. It was this... We called it an RPC library, which meant every service at Twitter communicated over RPC, which is remote procedure call, and Twitter used this particular technology called Thrift, which is ancient and busted in all sorts of ways. The goal of Finagle was basically to provide this cool functional programming idiom on top of Thrift calls.

William Morgan: And we were like, "Okay." First of all, that was super transformative for Twitter, but no one else in the world is going to care about functional programming on top of RPC calls, and especially since Finagle was a Scala library. It was on the JVM, so we were like, "All right." Finagle had two components to it, right? There was the programming model, the cool functional programming stuff, and then there was the operational model, and we're like, "Okay, well forget about the programming model. That's really specific to people who want to do functional programming over RPC calls." That's a weird audience, but the operational model is super powerful, because with Finagle, you as a programmer were like, "Okay, A is calling me. I need to make this call." And so, "Finagle, go make this call."

William Morgan: And under the hood, Finagle was doing retries and timeouts and it was doing load balancing in this super intelligent way, where it was taking into account the latencies of all these things, and it was doing the request routing and it was doing all these transformative things. Operationally, there was a huge amount of value. And so, we said, "Well, let's literally just take Finagle, which is this library, and let's just package it up as a proxy." And a proxy, anyone can run, right? And then it doesn't matter what language anything is written in. And by the way, we've got Docker now, so the polyglot lifestyle is something that people can adopt more easily. And that was really the first version, and that was the first commit of Linkerd: just taking Finagle and turning it into a proxy.

Eric Anderson: Got it. Proxy, step one. We're not yet a mesh, so to speak-

William Morgan: Right.

Eric Anderson: ... where we solve the first order problem. And this is you and a few friends that are building this?

William Morgan: Mm-hmm (affirmative). Yeah. Yeah, we hired some of our friends from Twitter and we were thinking in very Twitter-y terms, and what we quickly realized... Well, then we had this funny exercise where we were like, "Okay, well people don't say SOA anymore. They say microservices. We're going to go to all the microservices meetups and conferences and we're going to say, 'Hey, look at this cool proxy that we built,' right? 'It's an RPC proxy.'" And those conversations went nowhere.
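To make that "library packaged as a proxy" idea concrete, here is a minimal sketch in Go. It is not Linkerd's actual code, and the addresses, timeout, and retry count are illustrative assumptions; it only shows the kind of operational logic Finagle provided in-process (timeouts and retries) being pulled out into a standalone sidecar that an application in any language can route its calls through. The latency-aware load balancing across many backends that Finagle also did is omitted for brevity.

```go
// Minimal sketch (not Linkerd's actual code): Finagle-style operational logic
// (a per-attempt timeout plus naive retries) packaged as a standalone HTTP
// proxy instead of an in-process library. Addresses and policy are illustrative.
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"time"
)

const (
	listenAddr  = ":4140"                 // where the app sends outbound calls (assumed)
	upstreamURL = "http://localhost:8080" // the destination service (assumed)
	maxAttempts = 3                       // naive retry budget
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second} // per-attempt timeout

	handler := func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body) // buffer the body so the request can be replayed

		var resp *http.Response
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			req, reqErr := http.NewRequest(r.Method, upstreamURL+r.URL.RequestURI(), bytes.NewReader(body))
			if reqErr != nil {
				http.Error(w, reqErr.Error(), http.StatusInternalServerError)
				return
			}
			req.Header = r.Header.Clone()
			resp, err = client.Do(req)
			if err == nil && resp.StatusCode < 500 {
				break // success, or a non-retryable client error
			}
			if err == nil && attempt < maxAttempts {
				resp.Body.Close() // discard the failed response before retrying
			}
		}
		if err != nil {
			http.Error(w, "upstream unavailable: "+err.Error(), http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		for k, vals := range resp.Header { // pass the upstream response through
			for _, v := range vals {
				w.Header().Add(k, v)
			}
		}
		w.WriteHeader(resp.StatusCode)
		io.Copy(w, resp.Body)
	}

	log.Printf("proxying %s -> %s", listenAddr, upstreamURL)
	log.Fatal(http.ListenAndServe(listenAddr, http.HandlerFunc(handler)))
}
```

In the shape described in the conversation, each service instance would get one of these as a sidecar, and the real proxy would additionally handle service discovery and latency-aware load balancing across upstream replicas, which is what turns a pile of proxies into a mesh.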
It was astounding how terrible of a reaction or reception we had, because first of all, the microservices meetups were full of people who were not operators.

William Morgan: It was a lot of architecture, a lot of, "Here's our 24-month roadmap and I want to talk about CQRS versus event sourcing and blah, blah, blah." And so, we didn't actually meet people who were trying to really do this in practice. And then the other thing was, when we did finally meet people who were doing this in practice, they were like, "I already have a proxy. I've got HAProxy," or "I've got Nginx." And we were like, "Well, no, no. This is different because this is an RPC proxy and blah, blah, blah." And they're like, "Well, we don't use RPC. We use HTTP." And we were like, "Well, in our ontology, HTTP is actually a subclass of RPC, so it's totally applicable." By the time you'd had these conversations, they had wandered off.

Eric Anderson: Just talking past each other.

William Morgan: Right. Right. We had one moment of inspiration where, rather than talking at those meetups, we stumbled into the Kubernetes community and we were like, "Well, these folks, they're not using the M word. They're not talking about microservices, but they sure are building them, and they are actually having operational problems." And once we made that association, everything became a lot more clear. Here were fellow engineers who were just trying to get their apps to work and were running into all these issues that Linkerd, by using Finagle, was actually really good at helping with.

William Morgan: And then the other genius thing we fell into was, instead of calling it a proxy, we said, "Okay, well the way this makes sense is you're going to add a lot of these proxies. It's not just one or three at the edge. You're adding them everywhere." You can visualize that as this mesh, so we're going to call it a service mesh, and "service mesh" had no meaning. It was a meaningless, blank term that we could then start writing into. We could define, "Hey, here's what that means," and that actually had a profound impact on people's ability to think about what Linkerd was. Instead of confusing it with Nginx and HAProxy and the 50 other proxies out there, it was a new thing and it forced them to treat it as such, and that was really, really important.

Eric Anderson: It's interesting to think that, relative maybe to your consumer startup attempts, where you might have been searching for product-market fit, you kind of knew this thing, at least in Twitterland, was super relevant. And then you went to a bunch of people and they were like, "No, I don't know what you're talking about. Why would we want this?" And you're like, "No, but..." I can imagine the cognitive dissonance of, "But this is real. I've seen it be useful." And then to find a home where people get it would be very satisfying.

William Morgan: Yeah. It is so emotionally taxing being a startup founder, because either you're doing something that's really obvious and a hundred other people have already done it, or you're doing something that doesn't make sense to the world at large and you have to push through that. And all you hear all day is, "It doesn't make sense. It doesn't make sense. It doesn't make sense." And I've got to imagine that... I think the only thing that allowed us to get through that was the fact that we had seen it work, right? We knew this actually was a good idea, so it didn't matter what these people were saying. We knew this was the right thing to do.
That allowed us to get through that period.

Eric Anderson: Great. There are other people who emerged with service mesh ideas. Does that start happening now, as you've coined the term and are finding first users, or does that happen later?

William Morgan: That happened about a year later. What happened with Linkerd was, we called it a service mesh. We actually hosted it in the Cloud Native Computing Foundation. We said, "Okay, it's open source, so we're going to do this right. We'll have it hosted by a neutral foundation." It was the fourth or fifth project accepted to the CNCF. It was sitting alongside Kubernetes and Prometheus and these other projects that were very, very popular, and it started to take off. And pretty soon, all sorts of companies were using it, and sometimes they were telling us and sometimes they weren't, which is one of the irritating aspects of open source: you often don't know who's using it until you find out three years later.

William Morgan: We found out that all of WebEx is powered by Linkerd, and we found out because someone curled the WebEx API and in the headers, it was like, "Oh, this request was served by Linkerd," and we were like, "Oh, okay, wow." You have crazy experiences like that, or I go to parties and someone is like, "Oh, what do you work on?" And I'm like, "Oh, service mesh, all this stuff," and they're like, "Oh, it sounds like Linkerd," and I'm like, "Yeah, it's Linkerd." And they're like, "Oh yeah, we use Linkerd all over the place." I'm like, "Oh, okay. I had no idea." Anyways, yeah, open source can be frustrating in that sense.

Eric Anderson: And on that point, maybe I should ask you about the decision to go into the CNCF. Was there a lot of thought around governance and how we do this, or was the CNCF just an obvious next step?

William Morgan: Well, that gets more into the Buoyant side of things. For us, it's like, "Okay, we've got this open source project, but how is Buoyant going to be successful?" And it's important for Buoyant to be successful, obviously, for a variety of reasons. One of those reasons is just that, in the modern world, for an open source project, by and large, for the most part, we're beyond the days of it being this volunteer, nights-and-weekends thing. The majority of open source projects are funded by companies that have an interest in that project being successful.

William Morgan: Anyways, what we decided early on was, for Buoyant to be successful, what we didn't want to do with Linkerd was what was common at the time, this, I guess you'd call it, open core model, where you have the open source thing, and then you've got the commercial extension, and you're always in this difficult position of deciding where features go. Should they be in the open source? Well, that'll help adoption, but then we won't be able to make any money, so we'll put them in the proprietary stuff. But then you have these bad incentives, so we didn't want to do that, and we knew that however Buoyant was going to make money, we wanted Linkerd to be a full, first-class open source project that didn't have any of those reservations.

William Morgan: For us, that made the decision a lot easier, because you're giving up control. Giving it to the CNCF, it's now something that the CNCF hosts. They have some say in how it runs. They have the trademark and things like that, but it made a lot of sense for us, so it felt pretty natural. I think if we had a different business model behind Buoyant, we probably would've gone a different way.

Eric Anderson: Yep.
Your point earlier about most open source being funded by companies: the idyllic view is of a nights-and-weekends community across the globe chipping in. Were there people that came out of the woodwork to add a feature here or a feature there, or were they always corporate representatives?

William Morgan: No, there were always people who came out of the woodwork to add features and stuff. And in fact, that was great for hiring, because we would often say, "Hey, you're doing a really great job. Would you like to do this for money, full time?" And they'd say, "Yes." Right? It's not that those people don't exist, it's that the majority... If you look at something like Kubernetes-

Eric Anderson: Right.

William Morgan: ... the majority of the code that gets in is done through sponsorship from a company. The nights-and-weekends folks certainly exist, although in reality, I think it's less about nights and weekends. It's more about, "Oh, I'm using this thing as part of my job and it doesn't have this feature. That's really irritating to me. And by the way, contributing to open source is cool and it's really good for my resume. And therefore, I'm going to do this as opposed to..." You imagine the days of Linux and Linus in his basement or whatever. I think as an industry, we've shifted away from that being the primary way that open source makes progress.

Eric Anderson: And maybe it isn't that the Linuses of the world have gone away as much as that the corporate world has shown up in large numbers wanting to invest in open source. I don't know.

William Morgan: Yeah. Oh yeah, that's exactly right. That's right. People realize, "Hey, this is actually a thing we can do that's good for our company and good for the world as a whole."

Eric Anderson: Great. Where are we on the life cycle path?

William Morgan: Well, what had happened is, once we were in the CNCF, that was further fuel to the fire and we got a lot of adoption, but we noticed that there was this source of friction, which was, because Linkerd was built on Finagle and the rest of the Twitter stack, it involved introducing the JVM into your environment. Each proxy was a little JVM proxy, and that was okay for some people, but there were a lot of people who didn't want the JVM in their stack, for reasons that were either good or bad, and they didn't want to adopt Linkerd because it was on the JVM.

William Morgan: And the other implication was that Linkerd, as a proxy, was actually heavyweight. We were sitting at... After a whole lot of JVM tuning and that stuff, we were sitting at 120 megs of RAM per proxy. And if you were running these big Java apps that were taking up two gigs, then an additional 120 megs per instance was not that bad. But if you were writing these little Go microservices that were sitting at 30 megs of memory, and then we were asking you to stick a proxy next to each one, and the proxy was going to be 120 megs, well now, you're like, "Okay, well that doesn't really make a lot of sense."

William Morgan: There was some pushback around that. And then starting in, I forget exactly when, 2017, I think, Google jumped into the game. The 800-pound gorilla was like, "Oh, I have a service mesh." They had actually called it something else, but when they saw Linkerd being really popular and that the service mesh term made sense, they were like, "Okay, it's actually a service mesh."
They introduced Istio, and then the rest of the industry, Consul Connect and App Mesh and whatever else, there are seven of these projects now, followed over the next couple of years.

William Morgan: And in the meantime, we were like, "Okay, well, crap. Linkerd has some flaws," namely the fact that it's on the JVM and that it's so heavyweight. And also, we did the engineering-centric thing, which was, we were like, "Okay, well, Finagle has a million options, so we're going to expose them all to you, and here's a giant Finagle configuration file, and go to town. You figure it out." And the actual adoption path of Linkerd could be quite challenging, because you had to understand a whole bunch of complicated concepts, including dtabs, which were this language for specifying routing rules. You had to learn this whole complicated language.

William Morgan: Anyways, there were a bunch of issues that basically led us, in late 2017, to saying, "I think we need to rewrite Linkerd," and we spent the next year and a half rewriting Linkerd from the ground up. A, we wanted to get off the JVM, and that was a pretty major thing, and then B, we wanted to make it so that it was easy to adopt, easy to install. We wanted to bring some product sensibilities to this thing. And I wrote this long article on InfoQ about... If you search for InfoQ and Linkerd v2 or something, you can read about some of the history and some of the decisions we made. But we spent a lot of 2018 basically rewriting Linkerd from scratch.

Eric Anderson: And in many ways, this is expected. You were clearly ahead of the curve. You predated Kubernetes and, to some degree, containers, and I think all the other noise or new service meshes seem to postdate Kubernetes to some degree, at least. And so, this is expected to a degree, right? First-mover advantage, you have to tweak as the market emerges.

William Morgan: Right. Yeah. Yeah, I think that's right.

Eric Anderson: At this point, the Buoyant team is big enough that you can, as a group, come up with a shared view on what v2 looks like and turn it around in a year's time, as you described.

William Morgan: Yeah. Yeah, that's right. 2.0 was GA'd in September of 2018, and the vast majority of our development efforts have been on the 2.x branch since that point. Two things have happened. One is, we've removed that JVM dependency, and 2.x now, 2.7 is around the corner, we released 2.6 late last year, is built on our own proxy, which is written in Rust. That is really cool for us, because it gives us all sorts of security guarantees. You avoid a whole class of C++ issues around buffer overflows and things like that.

William Morgan: And then the control plane is written in Go, so it fits into that Kubernetes ecosystem, and we've tied it, we've coupled really tightly to Kubernetes, which was a conscious decision we made, which means that it's much harder to apply outside of Kubernetes, but if you're adopting Kubernetes, then it is so easy to get started and so lightweight and so fast. And the security posture is so good that we've seen a huge amount of adoption.

William Morgan: In fact, we now, I don't know what the exact percentage is, but we've got a huge amount of Linkerd adoption that's just coming from the Istio camp, where they learned about the service mesh from Istio. And they're like, "Wow, man, that seems great. The value prop seems great. But man, this thing's a beast.
Let me go check out Linkerd." And it's really fast and it's really lightweight and you can install it in 60 seconds. There's been this weird shift where, in those first few years, we were defining this term, right? And we were doing a lot of evangelization.

William Morgan: Now we don't have to evangelize the service mesh. Everyone knows it's a thing. Although we do have to tell people, "Hey, it doesn't have to be this complicated thing. It can actually be really simple. It can be really lightweight. It can be this incremental thing, rather than this giant layer of technology you now have to bolt onto your Kubernetes clusters."

Eric Anderson: Yeah, and we talked briefly about Istio. Envoy, which has a relationship with Istio, showed up somewhere there as well.

William Morgan: Yeah.

Eric Anderson: Tell me about, presumably, Linkerd the proxy. You refer to Linkerd as a service mesh, but it comes with a proxy that's part of the mesh. Any comments about the beginning of Envoy in relation to your history, and shared inspiration, that sort of thing?

William Morgan: Yeah. Envoy came out, I forget when, but it was around the time that Linkerd was starting to gain traction. It seems to be a great project. It's a general-purpose proxy. It's a much better version of Nginx, basically. And what has happened that's been interesting is, people have used Envoy to build lots of stuff. Istio uses Envoy as its proxy. Most of the other service meshes do as well, because it's this really useful building-block layer.

William Morgan: For us, it didn't make quite as much sense, because we knew we wanted to build something with security as the focus from the bottom up. We wanted to start with Rust and not with a language like C++. And also, we wanted control, because we had the wherewithal to build our own proxy. Building a proxy is actually... It's really difficult building a high-performance, high-throughput network proxy. It's an incredibly challenging technical task. Because we had the expertise to do that, we wanted to do that and do it in a way that made the most sense for the service mesh.

William Morgan: Linkerd has this proxy called... It's just called linkerd-proxy, and it's not a general-purpose proxy. It's super tightly coupled to Linkerd, because our goal was to solve the service mesh problem, not solve the general-purpose proxy problem. If Envoy is like a Swiss army knife, Linkerd's proxy is like a little needle, right? You can't use it to saw down a tree, but you can use it to, I don't know, whatever you do with... You can use it to poke a hole, right? And that's part of what's made Linkerd so lightweight: because its proxies are so small and so fast, it allows Linkerd to be really, really lightweight in a way that other service meshes cannot.

Eric Anderson: Yep. You came up with the term service mesh, defined the category. Istio has carried the banner almost on your behalf, in some ways, of evangelizing the term. Do both parties benefit from the category? But I remember you posted something recently around further defining the service mesh, and I imagine you probably need to continue to clarify what the mesh is, at least as it relates to Linkerd, as the term expands and gets thrown around.

William Morgan: Yeah. It is very convenient for us that the service mesh market has been so validated, and it's been validated in the most forceful manner possible. Yeah.
What's unfortunate now, and I alluded to this, is that we now have a struggle where there are people who are turning away from the service mesh because they're saying, "Oh man, it looks really complicated and it looks really bloated and it looks like this pile of enterprise-focused stuff, where I'm defining these YAML policies and blah, blah, blah." And they know that, or they think that, because they encountered the service mesh through Istio.

William Morgan: And so, we actually have to be a little aggressive in the messaging and say, "Hey, guess what? The service mesh doesn't have to be that way. You can install it in 60 seconds." One of the things we worked very hard to get to with Linkerd is we had this idea that if you have a functioning Kubernetes application and you're adding a service mesh, the application should still continue to function, right? It's like you shouldn't break things by default, which doesn't sound genius, but Istio could not do that, and still can't do that. And that was a really important principle for us, because we wanted to make it something that you can add without fearing for your life, without having to then spend six weeks writing configuration.

Eric Anderson: Yeah, totally. William, as we wrap up the story here, I want to make sure we capture any final thoughts you have on either the future of Linkerd or the things you're working on today. And then I have one more question I wanted to ask you. Earlier, we talked about your wanting to make this a pure open source project; you gave it to the CNCF. You don't have to think about open core. You, through Buoyant, monetize by providing a service, presumably. Any comments on how that debate you had internally settled?

William Morgan: Yeah, for us, it's actually been pretty clear from the very beginning, and the model that we have is that the service mesh actually doesn't really matter. The service mesh is irrelevant. That's a little bit of an overstatement, but nowhere in the mission of Buoyant, nowhere in our strategy, does it say anything about the service mesh or Linkerd. The really important stuff for us goes right back to our experience at Twitter. It's what happens when an organization adopts microservices, right? What happens when they make this transformation? I mentioned early on that there's the technical side of things and then there's the human side of things. The stuff that is easy for open source to address is on the technical side of things, right? How do I make these computers talk to each other?

William Morgan: Right now, I've introduced this big distributed system and I need to think about retry storms and things like that, and that's what the service mesh is really good at. But what the service mesh cannot help you with is, "How do I have all the engineers in this organization communicate with each other in a way that now makes sense?" We all used to get together every Tuesday, or once a quarter or whatever it was, and deploy, right? We merged all the branches in. Okay, it's deploy Tuesday. Everyone stand by. All right, the deploy's out. Oh no, the deploy failed.

William Morgan: That world is long gone. Right now, we've done all this work to decouple the entire engineering org into individual teams that own their own services, and they iterate as fast as they want, and they're deploying whenever they want, and they're deploying 30 times a day, and that's great, right?
That's a really positive transformation, but the result is that the way that these engineers need to coordinate and need to communicate with each other is actually really, really different from how it was before, and that is an organizational set of challenges that Buoyant can solve that are not possible to solve in open source. It's not a computer problem. It's a human problem.

William Morgan: We introduced a... We did a very soft launch of a SaaS app called Dive at KubeCon last November, and the goal of Dive is to solve exactly those challenges. And Dive is something that becomes relevant when it's enabled by a service mesh, when it's enabled by Kubernetes adoption and the shift into microservices. But ultimately, the technology choices are an implementation detail. What's important to an organization is the fact that we have to change our processes and the way that the human beings are interacting. That's where I see the real value of Buoyant and, certainly, the part of the problem that I'm really excited about working on.

Eric Anderson: This has been great, William. Thank you for sharing with us today. Maybe just one parting thought as we wrap this up. I think it's amazing that your experience at Twitter, for a brief time, has powered a decade or so of innovation, and it's interesting to reflect on how critical an experience that was for you, and it's interesting to think about placing ourselves at the bleeding edge of transformational epochs to have similar experiences.

William Morgan: Yeah, I feel fortunate to have been there at that time. I think things would have gone very differently if I were somewhere else, and I certainly didn't realize what I was getting into.

Eric Anderson: Yeah, but you didn't plan for that.

William Morgan: Right.

Eric Anderson: Thank you very much, William. Best to you and the Linkerd community and Buoyant.

William Morgan: Thank you, Eric. It's been a pleasure being here.

Eric Anderson: You can find today's show notes and past episodes at contributor.fyi. Until next time, I'm Eric Anderson and this has been Contributor.