Jeu George: This was staring us right in the face. And we saw that hey, companies were kind of going through the struggle. Every few months we would look at this, and this problem was just exploding. Eric Anderson: This is Contributor, a podcast telling the stories behind the best open source projects and the communities that make them. I am Eric Anderson. Today we have Jeu George, who is a co-founder and co-creator of both Netflix Conductor, the open source project, as well as more recently, Orkes, a company offering a managed Conductor as a service. Jeu, thanks for coming on the show. Jeu George: Hey, thanks for having me, Eric. Super excited to be here. Eric Anderson: So, I'd like to give you a moment to give our listeners an explanation of what Conductor is. I understand it to be a workflow orchestrator, which is a bunch of words that may not mean anything to somebody. How would you describe it? Jeu George: Yeah. Conductor, in essence, as you know, was actually developed at Netflix, and the initial use cases obviously had their roots in media entertainment. And if you see how the industry looked at that point in time, cloud made it very easy to launch services. So there are a lot of microservices in any company you look at, and you needed a higher layer. When people broke away from monoliths to microservices, you needed a higher layer to go and talk to all of the services you created, and that's where the real orchestration piece comes in. But increasingly, I think 2016 is when we open sourced this, right? And in the last five, six, seven years it's gotten a lot of traction in industry, and people have been using it to orchestrate the core services they build. An application building platform is what it's evolved into.
That's really at the core of what Conductor's used for: building backend applications, building that high-reliability piece, joining services together so they talk to each other, creating that layer that gives direct value to the consumer. At this point in time, the way it's evolved, it's an application building platform. That's what it really evolved into. Eric Anderson: Got it. So Conductor's an application building platform. There are other things out there that people could think are also application building platforms, and maybe that grounds this. Initially, Jeu, I imagine a lot of the use cases were around long-running processes that had a series of steps, and accomplishing those steps over the network, over a series of microservices, was difficult. Maybe we just go through the history and that would kind of illustrate for folks where this thing started and what it's become. Jeu George: Yeah. So, I think that might be a good segue into how it all started. If you look at around the time when AWS launched, right? Like, in pre-AWS days, companies generally built services as monoliths; everything was in one single box. At best, you would have a database server that was separated out, and that was the general way things were built. Those were also the days when scale was not big enough for a lot of companies. The traffic could mostly be handled on really large boxes, and vertical scaling was the way to scale up when traffic increased. One of the big friction points in the 2006, 2007 era was, if you wanted to start something new or expand, the friction to enter was really, really high: you had to have capital to get machines procured, and you had to plan your service launches around that. And AWS saw that as a really great opportunity and came up with this notion of cloud.
And generally, if you think about it, it makes it very easy for someone to launch services, because you didn't have to worry about procuring machines, how long you had to keep them for, recycling them, and all of that stuff. From an engineering perspective, it made it very, very simple to launch services. And Netflix saw a big benefit in that and made a significant call that altered the course of how people built services and built companies thereafter. I think it was the first company of its size to completely operate in the cloud; all of its data centers were completely in the cloud. And that led to one other interesting problem: because it made it super easy to create services, it also created orchestration challenges. Before, when everything was in one box, it was easy to do that; now everything was getting distributed. And that is one of the things Conductor was trying to solve. You build out the services, and now you need to make business sense of them. In the pre-Conductor era, all of that was done by other services. You build a service, call into all the other services that in turn make business sense, and return whatever you need to, right? Conductor came in to solve that piece, because that approach created coupling between all the services, which was counterintuitive to why microservices were getting built in the first place. Some of the use cases also led into long-running workflows. Sometimes there were processes, especially in the media entertainment industry around content acquisition, where flows would last several hours, to weeks, to months. And today, we have use cases that last several years. So that was one of the use cases, but because it was built as a very, very general purpose orchestration engine, it would also handle cases where you had to do orchestration in sub-100 milliseconds as well. It basically covers a wide variety of use cases.
So, very general purpose. One of the things we also realized around that point in time was that Netflix made a fundamental shift in strategy, moving from licensed programming, licensed content, to original content, because content creation was the differentiator in the industry. To achieve that, it had to create content at a scale that had never been done before in history, and that meant creating the world's largest production studio, also one of the very initial use cases for Conductor. So we also understood that if something like this was going to be used in the company, it was most likely going to replace your core flows, because you probably already had something before. What that meant is, if it's running the core of a business, reliability was the number one tenet in building Conductor. And the second one was, every company is different, their language stacks are going to be different, so making it language agnostic and cloud agnostic mattered. These were the fundamental principles of building Conductor, and I think that's how it's evolved as well. When people looked at it, it was a very general purpose orchestration system. If you look at companies like Tesla, for example, they used it from when the company started for car automation systems. That was an eight-month-long workflow from the time you place the order before your car comes home. That runs on Conductor, right? Then they started using it for payments and billing, and for CI/CD pipelines. And then, across the industry, FinTech, retail, supply chain, healthcare, it was getting used everywhere. Before starting Orkes, we also talked to around 50, 60 companies, and one of the commonalities we saw was that people were using this to build platforms, on top of which they could unlock their end user use cases at scale. So, distributed application building platforms, that's essentially what Conductor's really evolved into. Eric Anderson: Okay.
So, yeah, thank you, Jeu, because I think I was trying to push you in the direction at the beginning of, there's certain use cases that are more fitting for Conductor, and I think you've helped me understand that no, this is a general application platform. And maybe to provide some specificity, if I understand it right, what Conductor offers are kind of modules, common patterns, that people want to employ when building an application. And rather than repeat themselves over and over or reinvent the wheel every time, Conductor gives them these modules, if you want to call them that, out of the box, to do common things you'd want to do with distributed applications: whether that's retries on failure, or logical workflows that trigger this next step unless a condition exists and then trigger that other step, or waiting until one step is finished before you do something else. All these patterns people may want to use, you've codified into standard, bulletproof ways that people can then pull off the shelf and employ. Jeu George: Exactly. And if you look at general platform teams in most companies, this is exactly what they do, right? They tend to repeat these things in companies all the time. So that's one piece of the puzzle. You talked about, hey, when things fail, how do you go and do the retries, for example? Those kinds of things are built into Conductor and battle tested over the last few years in a thousand-odd companies. There's also a sense in which it enhances and encourages the use of reusable pieces of code. To give an example, if you are in the retail or supply chain industry, payments is a core part of the system. It's not exactly what you're doing as a company, but you are integrating with payments providers, or maybe when something finishes, you're sending a notification to the customer who placed the order. So, sending an email or an SMS notification, right?
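As a concrete illustration of the patterns Eric lists here, retries on failure, conditional branching, and steps that wait on upstream results, below is a rough sketch of what a Conductor-style workflow can look like, written as Python dicts that mirror the JSON DSL. All task and workflow names are hypothetical, and the exact schema fields should be checked against the Conductor documentation; this is a sketch of the idea, not an authoritative definition.

```python
# Illustrative sketch of Conductor-style definitions as plain Python dicts.
# A task definition carries the retry policy, so retries are declared once
# rather than hand-rolled inside every calling service.
charge_card_task = {
    "name": "charge_card",
    "retryCount": 3,                       # retry up to three times on failure
    "retryLogic": "EXPONENTIAL_BACKOFF",   # back off between attempts
    "retryDelaySeconds": 5,
    "timeoutSeconds": 60,
}

# The workflow wires tasks together. A SWITCH-style task branches on the
# output of an earlier step; later steps implicitly wait for earlier ones.
order_workflow = {
    "name": "process_order",
    "schemaVersion": 2,
    "tasks": [
        {
            "name": "charge_card",
            "taskReferenceName": "charge_card_ref",
            "type": "SIMPLE",
        },
        {
            "name": "route_on_payment",
            "taskReferenceName": "route_ref",
            "type": "SWITCH",
            # branch on the status field produced by the charge step
            "inputParameters": {"status": "${charge_card_ref.output.status}"},
            "decisionCases": {
                "ok": [{"name": "send_confirmation",
                        "taskReferenceName": "confirm_ref",
                        "type": "SIMPLE"}],
                "failed": [{"name": "notify_failure",
                            "taskReferenceName": "notify_ref",
                            "type": "SIMPLE"}],
            },
        },
    ],
}
```

The design point is that the retry policy lives in the task definition rather than in application code, so every workflow that reuses the `charge_card` building block inherits the same battle-tested behavior.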
Not the core part of your business, but once you build it as a building block, as a small task or a small service, it encourages reuse, because you have a repository of tasks and flows that you've built within Conductor, and now you can take those and inject them into other flows you're building. So, it encourages reuse as well, in addition to the things we already talked about. Eric Anderson: Okay, so Conductor brings with it its own common patterns and modules, but individuals on a team can also build modules, and Conductor gives them a mechanism to share and manage those within the organization. Jeu George: Exactly, and that's a big value add. When we talk to customers who are using Conductor, the enterprise grade Conductor, even before you use a specific module, you can see what the SLA for it is going to look like, because you have past history from all the workflow runs over the last three months, for example. So you can pick up a task and say, "If I were to use this in my flow, what would my end-to-end SLA look like?" And that's a big value add, because you can see that hey, maybe there's hundreds of millions of executions in the last quarter for that particular task, and it has a four-nines success rate, so you can just blindly pull it in and use it without worrying about your end-to-end SLAs. But on the other hand, if you see something valuable, but it has a two-nines SLA and you, as a service, are providing a four-nines SLA, you don't want that; it's not going to meet your standards. So you get that kind of visibility as well, especially when you're reusing tasks. Eric Anderson: And Netflix was early to... I mean, there's other solutions out there for doing this kind of thing, but Netflix seemed to be on the cutting edge, certainly of consuming public cloud. They ran into these problems, released Conductor... how long ago was this? Remind me.
Jeu George: So this was around 2016, when we built this out. Over the course of 2016 we got it to a place where it started getting widely used within the company as well. I think towards the end of 2016 is when we decided to open source it. Eric Anderson: And how does that conversation go? Jeu George: So one of our co-founders, Viren, wrote the first line of code in Conductor. There was a problem that we were seeing the industry actually going through, and Netflix was deep into it. We could see the journey that companies were taking, and we knew that hey, we had a two-year head start on seeing all of these problems, and we had a solution in place. And Netflix's focus was going to be on being the largest content provider, getting to 200, 500 million subscribers over time, not spinning off other businesses. I think AWS did a fabulous job with that: they were in the retail industry, but they saw, hey, there's a bunch of tooling out there that could benefit the industry, and spawned off a business unit. Netflix was not going in that direction. However, they encouraged engineers in the company: if there's great tooling you have built, great products you have built, feel free to open source them. There was a culture around that, and there was this Netflix OSS umbrella of products. It just naturally fit into that, and that's how we decided to open source it. Eric Anderson: And is there an open source czar that you go to and say, "I've got this project, can we release it?" Jeu George: No, we didn't have that. Netflix also had this culture of freedom and responsibility. So, if you felt that it was the right thing to do for the company and the right thing to do for the community, you just went ahead and did it. Eric Anderson: Good, and then the project took off, there was enthusiasm maybe from the get go, eventually you left Netflix.
Maybe tell us your story and how you stayed with the project, and then came back more recently to it. Jeu George: Yeah, so we are a four co-founder company; three of us are from Netflix: me, Viren, and Boney, and all of us worked closely on Conductor, some directly on the project. Me, when I started off, I started as a Conductor user. Then Dilip joined us later on; he was ex-AWS and Microsoft, and he was also a Conductor user. I left Netflix to join Uber in late 2018, and the rest of us went in different directions: Viren to Google, Boney to Robinhood. We had an entrepreneurial mindset all along. Whenever the four of us would get together, it was like, "Hey, let's do a startup." And there was always a discussion on, "Hey, what do we build?" So many ideas went back and forth, but this was staring us right in the face. We saw that companies were going through the struggle, and every few months we would look at this, and the problem was just exploding. When we got together and thought about it, the industry was going through a big change. We had the right founding team, who had built this product inside Netflix and seen how it actually benefited Netflix, the open source community, and the few thousand companies who were using it, and it felt like the right time to go and launch a company around this. So we got together somewhere mid 2021 and decided, hey, let's go after this, and that's how it happened. Eric Anderson: Now maybe as the project matured, we've talked a little bit about how it's become more general, but you've taken on... tell us about the market as it exists today. It seems like it's evolved quite a bit. Jeu George: Yeah, I think if you look at open source usage, or usage within Netflix early on, it's pretty much used across every single vertical you can think of.
If you look at media entertainment, it obviously had its roots there, and several companies use it at scale. In the retail and supply chain world, we talked about the Teslas; there's the booking.coms, the Swiggys, the Q Packs of the world that have been using this for a variety of different use cases. Heavy in FinTech, right? JP Morgan, Morgan Stanley, Oracle Financial Services have huge usage there, and BFSI, right? Insurance companies. And then we also started to see this being used for app modernization use cases across verticals, like Apple, LinkedIn, and Cisco's Intersight product. So that's where we saw, hey, there was clear product market fit, but we still didn't have an idea of what the commonality across all of these things was. Because as a company we also wanted to focus on something: should we focus on a vertical? Should we focus on a specific use case across verticals? Talking to all of these companies, one of the things we found was that the commonality was people using this to build platforms. In the example of Tesla, it was an auto management platform. In companies like Amex, it was a fraud detection platform. In cases like LinkedIn, it was a sales platform built on top of this. So platforms is what people were actually using it for. And in general, as I was saying, Conductor's just evolved into that as well: a distributed application building platform, that's really what it's being used for right now. One of the other things we have noticed in the last couple of years is that the mindshare of people has also changed; the way people think about how to build applications has changed. A lot of the folks building applications today have gone through the journey one or two times, and they understand that hey, orchestration is now a key part of the modern application stack.
And when they start building applications, just like you don't think about building a database from scratch, you think SQL, NoSQL, whatever option you have, a key-value database, an in-memory store, right? You don't build a database from scratch today, and the same thing is happening in the orchestration space as well. People building distributed application platforms start with an orchestration tool, and Orkes happens to be a key leader in that space, primarily because the product itself is so mature and battle tested over the years. Eric Anderson: Yeah. We chatted just a bit before the show started and you were talking about AI as being one of these areas that maybe wasn't the original use case but has become a great fit. Tell us about what kind of use cases; are these data preparation for AI, or where does Conductor fit in the AI landscape? Jeu George: Yeah, it's actually an extension of what we've been doing already; it's AI orchestration. But if you just rewind back a little bit, in the last, I would say, six to eight months, there's been a lot of change in the industry and in how people have been pursuing the use of AI in their businesses. But take a company like Netflix or Google: Netflix itself has used AI, in computer vision for example. They have used that to their advantage a lot, and not a lot of people understand that. There were a lot of tests done around artwork generation, and sometimes when you go on the website and see the landing page, a lot of the artwork you see is completely machine generated. Some of that also helped retention by a lot, or increased viewing hours by a lot. Companies like Netflix have been able to leverage AI to take their business to the next level and get ahead of the competition, and a lot of people don't realize that, right?
And we actually had a meetup that we hosted with Netflix, where Netflix engineers talked about how they have leveraged AI with Conductor, and how Netflix has used that to get ahead of the competition as well. But one of the things that's happened of late is ChatGPT. When that came out, it put AI in the hands of consumers. People who never used AI/ML before could actually use ChatGPT and see how powerful it was. And as users, people started to see firsthand how powerful this was and how they could potentially use it for different use cases within their companies. In the last few months, a lot of our customers have been talking to us: "Hey, I want to use AI/ML for my use cases. What do I use it for?" So there's a problem around discovering what you can leverage AI/ML for; that's a fundamental problem in the industry today. The second one is, once you decide how you want to use it, it's not just about picking a model and retraining it on your data. Once you decide your problem space and how a particular model can fit it, you also want to integrate that with your core business flows. And that's where this whole end-to-end AI orchestration piece comes in, and that's where we are seeing increased demand as well. So, you have a core business flow; let's say in the insurance industry, for example, document scanning is a very, very common use case, and a lot of it is a really long, manual process. Using computer vision to scan documents takes that to the next level: maybe it's a two-week insurance claim journey processed by humans, and AI can supercharge that and get it done in a matter of a few seconds. Then a human can come and take a look at the results, and supercharge what they've been doing in the past.
So basically, this was a piece of the core business flow before, but now they have leveraged AI, brought it into the core workflow, and taken it to the next level. Integrating AI pieces into the core business flow, that's really where we see the industry moving right now. And there's a lot of companies where we have seen increased demand for that. They don't have a lot of ML engineering talent. In the Bay Area, you see a lot of companies with significant investment in ML teams and ML engineers, but not a lot of companies operate that way. So taking AI and democratizing it, taking it to a regular engineer, and helping them discover what they can do and integrate it into core business flows, that's really where we see the power; orchestration is going to be it, right? Eric Anderson: Yeah, I mean, I'm already thinking one of the things I like is that you could easily insert... Take some of these media and entertainment examples. My understanding is that if you run a service like Netflix, every time a new title becomes available, you probably have a bunch of steps you need to accomplish on that title. And if you wanted to use AI to generate captions, that would be a natural place to insert a step: generate captions. If you wanted to use AI to generate a summary text at the beginning, it'd be a natural place to insert it in the flow. So it not only is a way to automate or control some of the AI workflows you have, but it's a natural extension point. I want to add AI to my business, and if I have these workflows already built, it's pretty easy to slot it in. Jeu George: Yeah, absolutely. We are actually working with several customers in that space to solve exactly that problem. And I can give you a detailed example of that.
If you take a movie today, in the industry, it has to go through a bunch of different steps before it reaches the consumer. And today a lot of these companies have a presence on the internet, so a movie that you shoot in one country can travel globally. But to make that happen, you need to localize it to each country's language, and subtitling, for example, is a critical piece of that. If you look at how subtitling is done today: take an episode of a TV show or a movie that's two hours long, and it's a multi-week process to get the subtitling right. And once you've got the subtitling right, say you start with English, the transcription is done, but you still need to translate it into several languages while carrying the context over correctly. It's a multi-month process across different vendor companies before you actually get it right, and then you do a bunch of QC on top of that. Now, especially with how text generation has changed with large language models, you can automate a lot of this, including capturing emotions, question marks, making sure names are capitalized, pronouns, and all of those things. This is an area the industry has been struggling to automate, and subtitling use cases are a top request we have gotten from our customers; we are doing some really good work there as well. That was just one use case, but within media and entertainment there are tons of such use cases: tedious processes that humans were involved in, where AI can now assist those humans to take a multi-month project down to a few minutes, do final QC on top of that, and get it released with really, really high quality. Eric Anderson: Well, and you can fake it til you make it with human-in-the-loop stuff quite easily. I mean, that could just be another logical step insertion point for the workflow.
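The subtitling flow described above decomposes naturally into discrete, orchestratable steps: transcribe once, translate per language, then queue each result for human QC, the human-in-the-loop insertion point Eric mentions. A minimal Python sketch, where the functions are stubs standing in for the actual speech-to-text and translation models and every name is hypothetical:

```python
# Stub pipeline sketching the subtitling flow as orchestratable steps.

def transcribe(video_id: str) -> str:
    """Stub for a speech-to-text model producing the source-language transcript."""
    return f"transcript-of-{video_id}"

def translate(transcript: str, language: str) -> str:
    """Stub for an LLM translation step that should carry context across lines."""
    return f"{language}:{transcript}"

def queue_for_human_qc(subtitle: str) -> dict:
    """Human-in-the-loop gate: nothing ships until a reviewer approves it."""
    return {"subtitle": subtitle, "status": "PENDING_REVIEW"}

def subtitle_pipeline(video_id: str, languages: list[str]) -> list[dict]:
    # Transcribe once, then fan out one translation-plus-QC job per language.
    transcript = transcribe(video_id)
    return [queue_for_human_qc(translate(transcript, lang)) for lang in languages]

jobs = subtitle_pipeline("ep-101", ["fr", "de", "ja"])
```

In a real orchestrator each function would be a separate task, so the fan-out per language can run in parallel and any single failed translation can retry without redoing the transcription.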
So that's exciting on that front, given everyone's excited about that. But more generally, I also think you're finding a home in the broader... maybe you've already spoken about this, Jeu, the application stack. Rather than being an appendage, a thing you need in certain use cases, I think the market is recognizing that workflow orchestration is required as kind of a standard part of every application now. Jeu George: Yeah, I think the general modern application stack itself has really changed. In the past, when you looked at building applications, it involved a service layer where you had logic for how you wanted to solve a specific business problem. The data associated with that would be stored in a database, and you might have a multi-tiered strategy for doing that, with caching layers in between and the UI layer on top. That was the general way to do things. But more recently, a lot of that has changed. People have been thinking about the orchestration layer. For services, people have said, "Hey, can I build this in a serverless manner?", especially for stateless kinds of services, where I don't need to go and launch something new. Then there's the orchestration layer, sitting at the top of your back end stack, handling both synchronous and asynchronous flows, an application logic layer that's cleanly separated out. And then there's the whole transactional logic platform, with different good players in that space. The way people think about building modern applications has changed, and orchestration, again, happens to be one of the key pieces of the stack, sitting right on top of everything you do on the back end.
And typically it's the entry point into your back end stack, and that layer will grow and become even more important, because it now has visibility into everything that's happening within the company. So it's a natural fit for solving other problems that sit alongside it as well: observability, for example, or making debugging simple, because it has visibility into the entire thing. And it not only provides value to the core developer or the DevOps engineer who's writing and managing those flows, but also gives insight to business unit leaders. Like, hey, I have X number of services or X number of applications within the company, but I want visibility into which services' SLAs have gone up or down. Where are my compute resources going? Where is the top spend going? Is it well utilized? Increasingly it's becoming a very, very important piece of the entire application stack. Eric Anderson: That's great. And, Jeu, we gave the history; maybe take us to today. What's happening with the project this year? What do people have to look forward to? If folks want to get involved, how do they get involved? Jeu George: So as a company, early last year was about building our cloud products out. Early in the year, we had a presence in most of the prominent cloud stacks. So we have enterprise grade SaaS products available on all three clouds, and we do the same in private data centers for companies that are still on-prem. And this year we are focusing a lot on scaling and on increasing our GTM. From a company perspective, there are a few things we are focusing on for the rest of the year and moving forward. One is around integrations.
We talked a little bit earlier about how you can build flows that are very important for your company; that's what the orchestration layer is used for. But as an enterprise, you are probably using a lot of different services across a lot of different providers. For 80% of the flows for a specific business use case, you're building it yourself, but there's the 20% of the use case where you might need to integrate with other enterprise applications you already have in the company. Maybe when you finish a flow, you want to post a notification to Slack, or you might have to page someone when things go wrong, or integrate with Zendesk, or integrate with Stripe for payments, for example. So there are these popular enterprise applications that we are providing direct integration support for, and we are doing that at a very granular level: just calling an API, task-level integration, end-to-end flow integrations, making it very easy for people to adopt Orkes within the company and then integrate with all of the key applications they have. The other big focus is on AI. AI orchestration is going to be a big part of what we focus on for the next six to 12 months, especially because the industry has seen how much AI can help the business. We're going deep into orchestration around the AI space: discovering what companies can do with it, and then integrating it with core business flows, making it very, very simple. Building with AI should be like building any other application, and that's one of our key focuses as well. At a high level, from a product perspective, those are the two big areas we are focusing on for the next few months.
From a hiring perspective, a lot of effort is going into increasing GTM, sales, and marketing; that's one big area we want to focus on as well. The industry is also moving really, really fast. The macroeconomic conditions are playing in our favor too, with people looking into how they can leverage this to reduce TCO, because now you can consolidate a lot of these things behind a single orchestration layer. It's very general purpose, so one tool can serve a lot of use cases within the company. Those are the high level things we are focusing on as a company. Eric Anderson: Great. And bring us home: anything we didn't cover that you wanted to cover today? Jeu George: Yeah, I think we covered most of it, but as for the future, we've all seen there's going to be some tremendous impact from AI. No one really knows where this is going to go, but everyone has seen some really good value in how they can leverage AI, so I think that's a big area. This thing is going to continue to change direction, and no one knows where it's going to land, but I think we are in the right space. So that's the super exciting thing happening on our side, but outside of that, we covered a lot of it, yeah. Eric Anderson: Yeah, the AI stuff I do find, not only is it exciting the market, but I do feel like... You look at [inaudible 00:32:40], or other kinds of prompt engineering tools; they're workflow solutions. Ask the model this question, it gives you some refined output, then you want to change that output. You could apply security to it; people are trying to figure out how to handle prompt injection. There are things you could do to sanitize the output of a model using Conductor today: request something from the model and then run all these checks to make sure it's not racist or insulting before you publish it.
So I do feel like there's a bunch of prompt-related work that Conductor would be useful for. It'll be interesting to watch. Jeu George: Yeah, and seeing the customer demand as well, we can see this wide variety of use cases coming up. Sometimes we feel, hey, is it deja vu, because we saw similar things at Netflix? But I think the industry is at that point right now where everyone wants to leverage this. So I think it's going to be a great couple of years. Eric Anderson: Good. Thanks for doing this, Jeu, and we'll have to follow things quite closely. Jeu George: Yeah, let's do that. And thanks for having me, Eric. Eric Anderson: You can subscribe to the podcast and check out our community Slack and newsletter at contributor.fyi. If you like the show, please leave a rating and review on Apple Podcasts, Spotify, or wherever you get your podcasts. Until next time, I'm Eric Anderson and this has been Contributor.