Willem Pienaar: We open sourced it way too soon. It was pretty raw. And a lot of people were like, "Wait, this is just an undocumented bowl of spaghetti. It's not even a real open source project." But we continued to develop out in the open, and over time we got more and more traction and Feast became something that's a little bit more well known. Eric Anderson: This is Contributor, a podcast telling the stories behind the best open source projects and the communities that make them. I'm Eric Anderson. I'm joined today by Willem Pienaar, who is the creator, or at least one of the creators, of Feast. Feast is an open source feature store and I'm super excited to have Will on the show. He's been on the cutting edge of what's an important and emerging category here around feature stores. Willem, maybe you could tell us exactly what Feast is to start out. Willem Pienaar: Sure. So Feast is a feature store for machine learning. It's an open source tool that we built at Gojek when I worked there. So essentially it's a data system for operational machine learning. It acts as a bridge between models and data in a production setting. So essentially, data scientists can use the system to ship features into production without the need for engineers to be involved. And they can quickly iterate on features, get those into production, use them in models, and manage the life cycle of those features. Willem Pienaar: In much the same way as data scientists today can train models and ship those into production through ML platforms, they also need a way to ship data into production, where traditionally that was a little bit harder. You would need to get an engineer to write a streaming pipeline for you, or they would need to go make some changes in a product backend to compute features in the production environment, where the purview of the data scientist was mostly the offline world or the batch world.
So now you enable data scientists to be more self-serve, and that's pretty much what the feature store's all about. Eric Anderson: Got it. And I don't want to go too far into this without going into the history, but when you say putting features into production, I think we talk about features in training, but you're talking about also when features need to be served to the model for inference. Willem Pienaar: Right. I'm blurring those lines a little bit, but that's a key part of the value that a feature store provides. If your use case is, you're training a model and you're serving a model, the model expects the same feature values in both environments. And so feature stores provide that consistent interface, both offline and online. So if you don't have that online environment, that production environment, it's a little bit easier, right? Because you can write a single SQL query, you get your data from, let's say, BigQuery or Snowflake or some data warehouse. You can both train and score in batch pretty easily. But if you have divergent environments, then it's a lot harder, right? Willem Pienaar: Because the data scientist can do the training, but shipping that model into production also requires equivalent data in a production setting. But that data comes from different places. It doesn't necessarily come from a data warehouse. It'll come from your product backends or streams or other backend services, because it needs to be real time and all those things. And so the feature store provides not just a way for data scientists to ship features into production, but also a unified interface between those two environments. So the model only sees one view of data. And it's a problem unique to ML, or at least to decision systems that need to be trained, like models.
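The consistency idea Willem describes can be sketched in a few lines of plain Python. This is a toy illustration, not the Feast API: a single feature function is the source of truth for both the offline (training) path and the online (serving) path, so the model sees identical values in both environments. All names here are made up for the example.

```python
# Toy sketch of training/serving consistency: one feature definition
# shared by the offline (training) and online (serving) paths.

def trips_last_7d(raw_events, user_id):
    """Single source of truth for the feature logic."""
    return sum(1 for e in raw_events if e["user_id"] == user_id)

# Offline: build training rows per user from warehouse-style batch data.
events = [{"user_id": "u1"}, {"user_id": "u1"}, {"user_id": "u2"}]
training_rows = {u: trips_last_7d(events, u) for u in ("u1", "u2")}

# Online: the serving path materializes the same logic into a key-value
# store, so inference-time lookups agree with training-time values.
online_store = {u: trips_last_7d(events, u) for u in ("u1", "u2")}

assert training_rows["u1"] == online_store["u1"] == 2
```

The skew Willem is pointing at appears when the two paths are implemented separately, by different teams in different systems; sharing one definition is the property a feature store enforces at scale.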
Eric Anderson: I'm a bit familiar. When I was at Google Cloud, working on data processing services, there was this awareness that to do machine learning, we needed not only to present the same features in training and production, but we probably should do the whole transformation pipeline the same to ensure, like you said, consistency in how those features are calculated. Now that we kind of understand what Feast is, tell us your story, or the story of Feast, which is maybe one and the same, and how you came to discover the need here and then start working on it. Willem Pienaar: Yeah. I'm South African. I'm actually from engineering traditionally, so I studied mechanical engineering. And I worked for a while in control systems and automation. Actually, I ran a startup doing the networking for a certain ISP for a while prior to that, but professionally, I worked as a software engineer in industrial control systems and automation for about five years. And I started that in South Africa. At some point I moved abroad and I worked in Thailand and continued to work in that space, but I've always enjoyed the act of automating processes. So whether it's conveyor belts or factories or ERP systems, like with multinational corporations, all their control systems and their processes, or whether it's a startup's internal infrastructure, that automation's always been a fun area for me to explore. And so I moved more and more into data systems and machine learning over time and landed a role at Gojek in early 2017. Willem Pienaar: And what is interesting about this company is that they were really like a rocket ship, and a lot of companies call themselves that today. But at that point we had, I think, three or four data scientists, and they were trying to build a data team, a data science team. And from the start, they knew they wanted to build production ML systems.
The company was valued at about a billion dollars and they were getting very large amounts of money injected, because in ride hailing, the numbers are quite astronomical in terms of the burn and the revenue. And so it's 500 million or a billion at a time, every couple of months, every six months or so. And so we were really scaling out the team and running into a lot of pain points, despite having so many people and such a willingness to get into production with these machine learning systems: not being able to get support from product engineering and not being able to get into product. Willem Pienaar: So they hired me as the lead of this team, and then a team underneath me, to help the data scientists get into production. So it is a very classic, archetypal problem, because you had five data scientists per engineer. And at the time we thought that we were not doing a very good job, not being able to make everyone happy with getting them into product. But over time I realized that this is a very common problem, because management thinks you can just throw data scientists at it, just hire more data scientists and things will fix themselves, but that never happens. So originally we were just building the core ML systems like pricing systems, matchmaking and ranking drivers to match customers, fraud detection systems. We also had a lot of recommendation systems because we had a food delivery network. So we had ride hailing, food delivery, digital payments, a bank, lifestyle services, all kinds of things. It's like if you merged a food delivery company with a digital payments company and then a ride hailing company. Eric Anderson: The company spans all these different verticals. Willem Pienaar: Yeah. Eric Anderson: Is your team catering to all of them as well? Willem Pienaar: Yeah. So the team catered to all the verticals at the start, though we didn't specifically focus on digital payments because of regulatory risk.
It was just a lot more complicated because the data was segregated and there was just a lot of red tape. So we ignored that for the first two years basically, but we did do fraud detection. So there was a lot of transactional data, just not the banking side. But yeah, we had all kinds of use cases. We even had credit scoring, fraud detection, OCR, ranking, forecasting, and early detection. You name it, we had it. But in most cases we'd start with a bespoke system, and we didn't have infrastructure for this. Over time we realized that the only way that a small team like ours... At that time, we were about 15 people at the peak, and about 60, 70 data scientists... The only way that our team could support them was through building a platform and building the tools so that these data scientists could be self-empowered. Willem Pienaar: And the first system we built was the feature store. And we built that because most of our time was being spent on both feature engineering, as well as productionizing those features. We looked at Uber's implementation and the work that my current employers, or the people I work with, Mike and Kevin, did on Michelangelo, and their system was completely end-to-end. At Gojek at the time, though, what we had was a pretty advanced, pretty robust computational platform. So we could transform batch data, we could transform streaming data, but we didn't have a way to get that into prod. And so Feast took on a shape that fit the Gojek mold, or the architecture, to some degree. And so it's essentially the bridge between these transformation systems and the production environment. And it allows data scientists to define schemas that point to sources. Willem Pienaar: It's kind of like a virtual feature store in a sense. It maps onto those sources and then allows them to ingest the data, and then exposes that both in a developer environment and in a production environment.
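The "virtual feature store" idea, schemas declared by data scientists that point at existing upstream sources, can be sketched with a minimal registry. This is a hypothetical simplification, not Feast's actual object model; the class and field names are invented for illustration.

```python
# Hypothetical sketch of a "virtual" feature store registry: data
# scientists declare schemas that map onto existing sources, and the
# store handles exposing them in dev and prod environments.
from dataclasses import dataclass, field

@dataclass
class FeatureView:
    name: str
    entity: str       # join key the features are looked up by
    features: list    # declared schema of feature columns
    source: str       # upstream table or stream the view maps onto

@dataclass
class Registry:
    views: dict = field(default_factory=dict)

    def apply(self, view: FeatureView):
        """Register (or update) a feature view definition."""
        self.views[view.name] = view

registry = Registry()
registry.apply(FeatureView(
    name="driver_stats",
    entity="driver_id",
    features=["avg_rating", "trips_today"],
    source="warehouse.driver_stats_daily",   # computed upstream at Gojek
))

assert "driver_stats" in registry.views
```

The key design point from the transcript is that the compute stays upstream: the registry only records where the already-computed features live and what shape they have.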
And so we built the project with Google Cloud, who were a strategic investor with us, and we worked with their professional services arm over a three month period. We built the software, we shipped it, we got a few use cases onboarded, and we were also very close to the Kubeflow team at that point. And they really encouraged us strongly to open source it and work out in the open. And so we did that. We open sourced it way too soon. It was pretty raw. And a lot of people were like, "Wait, this is just an undocumented bowl of spaghetti. It's not even a real open source project." But we continued to develop out in the open. Willem Pienaar: And over time we got more and more traction. We had a few blog posts, one on the Google Cloud blog, and people I think were less focused on MLOps at that point in time. MLOps was not a day-to-day term, but over time feature stores became a household name and Feast became something that's a little bit more well known. So I'll just finish up the Gojek story. We built a bunch of other tools there as well: a model serving stack, data orchestration tooling, data pipeline tooling. We built a great experimentation platform as well. And a bunch of that stuff is also open source. Turing is the experimentation system and Merlin is our model serving system. And so after about four years of that, we had really rolled out Feast quite widely within the organization. Willem Pienaar: People were pretty happy with it. All these teams in the feature store space talk to each other, and Tecton, and Mike, and Kevin reached out to me and we were just riffing on ideas. We decided that we were trying to do the same thing. They're trying to build the best enterprise feature store. We're trying to build the best open source community and open source feature store. So we're really aligned in what we're trying to do. It's just that we're focusing on different areas. And so we decided to work together. Of course, I don't represent the whole of Feast.
It's a community project essentially, but I joined Tecton about a year and four months ago. And since then, Tecton believes that it needs to invest in Feast, for various reasons. But one is to reach this ideal end state of what a feature store is. Working with an open source community really guides you in a lot of ways and gives you good feedback. Willem Pienaar: It's a large market. And obviously there's the ability for people to cross-pollinate between the projects. And so Tecton invests heavily into Feast. They've got a dedicated team on Feast. And so that's been my focus, and over the last year and a half, we've shipped a lot of releases. I think we've shipped 0.18 right now. So that's more than 10 releases since I left Gojek, and Feast is now a part of the Linux Foundation for AI. It was one of the first couple of projects that joined. And now we've got a bunch that have joined; Lyft has contributed Flyte as well, and there's a bunch of other ones that have joined since then. Willem Pienaar: So it's mutually governed. We've got a bunch of users right now, like Shopify and Twitter, Robinhood, Salesforce, all using Feast. There's a bunch of others that are household names, but they don't want to be named for certain reasons. But we've got about 3000-ish people in our Slack, and you can see the industry's growing, like MLOps is growing, and the data ops and whole ML space, but also the community around Feast is growing. So that's a snapshot of where we are today. Eric Anderson: So what does it mean to open source something too early? You mentioned it was a spaghetti bowl of code. Well, there's a school of thought that says you can never ship too early. Early is better. But maybe there's a cost to the confusion that comes from an early launch. Willem Pienaar: Yeah. I think it depends on who your customer is at that point. Some people will only have one first impression. They will never come back to your project no matter what.
Eric Anderson: Totally. Willem Pienaar: Right. And so some of those people are just lost forever, and it doesn't even mean that your code is bad. It's just, they don't understand what the purpose is. Even today, with proper documentation, a lot of people don't understand why feature stores need to be there, right? What problem it solves. And so I think that's more the problem that I would've solved, but I also like the idea of shipping early, getting people to contribute. And it's not clear that we made a mistake. I think it could have been that we did the right thing. Eric Anderson: Yeah. And then maybe another interesting part of your story: I think a lot of people would love to collaborate with Google on an open source project. They're at a random company and building something interesting. How fun that you got to do this with Google Cloud when you were at Gojek. I imagine that's in part just because of this strong relationship you had with Google Cloud at the time? Willem Pienaar: Yeah. Gojek at that time was, I think, the second largest customer of Google Cloud in Asia, just behind Tokopedia or something. They were very incentivized to work with us. Eric Anderson: Right. Willem Pienaar: And we also wanted to work with them because they had really good engineers and they had good visibility into how other companies were doing things, and they could connect us to teams like Spotify, or I can't even remember all the companies that they connected us to. So it was great in knowing what not to build and what to build, and being on the cutting edge instead of building something that's three years dated. But yeah, we worked with that team of really strong folks. And some of those folks went on to work on Vertex at Google Cloud. And even at Robinhood, some of the engineers that worked on Vertex are now working on Feast at Robinhood, on feature stores. So it's a small space, and it's interesting how the same engineers and people are working on the same things.
Eric Anderson: It is, it is. Nothing's new. We all are just stealing each other's ideas, I suppose. So great. And then the other interesting element was Google encouraged you, nudged you maybe, to open source a little bit, or at least the Kubeflow team. I suppose they already had this open source project and they thought it might pair well. Willem Pienaar: Yeah, I think at the time Kubeflow was composed of components. It still is today, but there was also an awareness that they were relatively weak on the data side, so Feast really plugged a big hole for them. And so it was an important thing for Feast to be interoperable with the ML platform that they were building. Yeah. So they had model serving and hyperparameter tuning and experimentation and notebooks, but then what is your production story for getting data? And now the user basically has to run their own Spark jobs and deploy their own online stores and all those things. So that's where we came in, and I think that's why they wanted to collaborate with us and work with us. Eric Anderson: I guess I don't know Gojek for having a pattern or history of developing open source, but it sounds like with Merlin and the other projects, maybe there is more of a pattern than I realized. Willem Pienaar: Yeah. The company has shipped quite a few. I think the leadership there was really very open around open source. Eric Anderson: Great. Willem Pienaar: They were a bunch of ex-ThoughtWorks people. So Ajey Gore, who was the CTO at the time, he basically just said, "Don't even ask me about open source, just open source it." So there was a blanket statement that you could just ship code out into the open, and do your best not to ship something secret or something that's of strategic importance. But actually, it's much more rare to find that in the code you deliver every day, right? Unless you're shipping models or business logic, like your feature engineering pipelines or something that clearly is IP.
Eric Anderson: That's a great point. I think most organizations are trapped in this world of, "Let's assume everything we write is-" Willem Pienaar: Exactly. Eric Anderson: "... top secret, and then give permission on occasion for open source." And they're like, "No, let's just assume everything's vanilla code because it is," and be sensitive to the edge cases where it's probably-" Willem Pienaar: Also, our biggest problem at Gojek was we had all this money, but we couldn't hire people. And so if you just open source a lot, it really helps with hiring. It's a lot better story than hand-wringing about, "Oh, is this line of code going to cost us something?" Eric Anderson: Yeah. Yep. Great. So you open source this, people start responding. 3000 people in Slack is impressive. How do you get the word out? I mean, I guess Kubeflow maybe helps you a bit. I don't know how much of that happened before or after your involvement with Tecton, but are you surprised by all the interest, and where do you feel like it came from? Willem Pienaar: Well, it's multiple things. So firstly, we had a couple of hundred already in the Kubeflow Slack, just in the Feast room. So there was growth, but I think the MLOps space has had incredible growth in the last year or two. And if you look at the MLOps Community that Demetrios is running, I'm not sure if you're familiar with that, it also has, I don't know, 8,000 or 9,000, massive growth. Of course, that's a lot more engaging and there are a lot more different areas that people are interested in there, but it's not really surprising that we're at 3,000. But there is also another factor: we have the Apply Conference, and we've hosted that two or three times already, and that brings more people in. So that brings in people that are not specifically just looking for feature stores, but they're in the broader community.
So not all 3,000 are active in there, but we have a very, very healthy core group and that number is always going up. Eric Anderson: Awesome. Was the Apply Conference something you did after joining Tecton, or was that before? Willem Pienaar: That was after. Yeah, so that's not something that I drove. It's more from the Tecton side, but we got a bunch of great speakers, like Wes McKinney and a bunch of others from companies that have applied machine learning at scale and solved real problems. We try and make it not vendor specific, but it's always very hard, because vendors want to talk and engineers typically don't want to talk. So yeah, we try and make it interesting and more applied, something that's a little bit more appealing to the average data engineer or machine learning practitioner, and it's gone well so far. Yeah. Eric Anderson: Yeah. It's a tough line to draw, and I feel maybe the best way sometimes is just to do a little bit of everything: have some vendor agnostic content and then allow the vendors to do a little bit and try and make everybody a little bit happy. But let's talk a little bit, now that we understand the story of Feast, about some of the broader topics around the market here. Maybe tell us a little bit about what it was like joining Tecton. I think we hear a lot about open source projects that create companies from themselves. I don't know, like a Mongo-type situation, but this was a little bit unique. Right, Willem? What's it been like, I guess, bringing an open source project to an existing company? Willem Pienaar: Yeah, actually I think that is very unique. I can't think of a company that has done that before. Eric Anderson: No. Yeah. Willem Pienaar: I think it's for multiple reasons. There was definitely some traction on the Feast side when we had this discussion, and there was an agreement that we'd want to kind of build on that.
And I think both Tecton and Feast benefit from that. Of course, Feast benefits if it gets an investment of engineering resources. Tecton is also ahead of Feast in terms of functionality, and so a lot of those ideas we can just copy verbatim and roll out to open source, because Tecton will always have a head start, because the bulk of our engineering workforce is on Tecton. But another thing is that the Tecton product is based on the work that Mike and Kevin and the team did at Uber. And so is Feast. Willem Pienaar: And so our products are not wildly different. They're pretty close. And in fact, we always had this idea in the back of our minds that these products would converge and we could make it easy for people to move from the one to the other. So if you want something open source, or you just want to kick the tires, or you want to run it on your own, you can use Feast and it'll solve your problems, but you're going to have to roll up your sleeves a little bit. If you want to just use a managed service and you've got bigger problems to solve than dealing with the feature store, then you can use Tecton. And so the long term goal was always to allow people to move between the one and the other. And we also knew that because the space is growing, there's a little bit of a distrust of vendors, and that changes over time as the market matures. Willem Pienaar: But to start, people want to deploy something on their own. They want to get a win. They want to prove to the business that the feature store makes sense and there's value here, and then they want to pay for something. And so it's a good story for Tecton and it's good for Feast. Of course, it is a very unique situation. It doesn't always happen. Normally the company starts something on their own and implements that. But it has been a joy doing this at both Gojek and then Tecton. Willem Pienaar: Of course, the conversations are completely different now.
At Gojek, it was: here's this captive audience of data scientists, and you can dogfood this software on them until it works eventually, right? But out in the wild, the product needs to be at a higher level of maturity. So luckily we could do that at Gojek, because now you're dealing with companies that are deciding, "Do I build or buy?" And by build, I mean on top of open source, building on that. But if you want them to use an open source product, you probably want to show them something that is a little bit more mature, because they don't have many options today. So yeah, that's been very refreshing. Yeah. Eric Anderson: And then you mentioned earlier that your project, Feast, fits between processing engines and storage, if I understood. You talk about bridging the gap; the assumption is people bring their own processing pipelines and bring their own storage. Is that correct? And I imagine an alternative would be an all-in-one type solution. Willem Pienaar: Yeah. Maybe I'll talk about what Tecton does and then I can talk about what Feast does. So with Tecton, you bring a data source. You bring either a batch source, like Redshift or a Parquet file on a data lake, or you bring a stream, and then you define transformations on those sources. And then those transformations will be run as either streaming or batch jobs, on a schedule or constantly. And then both an online and an offline store will be populated from those computations. And those two stores will then be used for building training data sets or being served online in a production environment. Now Feast is slightly different. Feast was built around the Gojek architecture, where the compute systems were upstream. And so we are more focused on: once you have your streaming or your batch features already computed, we can make those available for you in production. Willem Pienaar: Now a lot of people think, "Well, the feature engineering is the difficult part." And we agree with that.
I think the risk for us was, we didn't want to reinvent a data pipelining or a computational system. So we really wanted to focus on the area that we thought was unaddressed, instead of the area that was the most challenging. And so plugging Feast into an Airflow pipeline or a dbt pipeline is very easy, and it solves that last mile, but it doesn't do the computation. So you need to do that upstream. What we do have today is some level of computation. So we have row-level, realtime computation called on-demand features. So when you read from an online store, for example, you can transform data at a row level, and you can apply those transformations both online and offline in a consistent manner. Willem Pienaar: And this is very important when you have event data, let's say a customer's location, that isn't available ahead of time to pre-compute for that customer. So otherwise you can't use that data to figure out the distance between the customer and the driver, or between a customer and where the product is going to be available, like the closest store or something. But over time we're going to be implementing a lot of the same functionality that Tecton has. So even right now we are planning to build batch feature engineering support and streaming compute support. Willem Pienaar: And yeah, we're chatting with a bunch of folks through RFCs, and that's something we do in the Feast project. We create RFCs, or design documents essentially, that lots of people collaborate on. They voice what they want, the requirements, and then we go and actually implement that. So definitely streaming compute and batch compute will become a part of Feast, because ultimately our core user is an engineer, somebody that needs to support data scientists, and we want to give them higher leverage. And so if they have to manage streaming jobs, we want to take that burden off of their plate.
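The on-demand feature Willem describes, customer-to-driver distance, only exists at request time, but the transformation itself can still be a single function applied to both historical training rows and live requests. Here is a minimal sketch of that idea; the function and field names are illustrative, not Feast's API.

```python
# Sketch of an on-demand (row-level) feature: distance between a
# customer and a driver can only be computed at request time, but the
# same function runs on offline training rows and online requests,
# keeping both paths consistent.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Offline: applied to a historical training row (coordinates in Jakarta).
train_row = {"cust": (-6.2088, 106.8456), "driver": (-6.1751, 106.8650)}
train_row["distance_km"] = haversine_km(*train_row["cust"], *train_row["driver"])

# Online: the identical function runs on the live request payload.
request = {"cust": (-6.2088, 106.8456), "driver": (-6.1751, 106.8650)}
request["distance_km"] = haversine_km(*request["cust"], *request["driver"])

assert train_row["distance_km"] == request["distance_km"]
```

Because the feature is pure row-level computation, no state needs to be materialized into the online store; the store only has to run the same function in both environments.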
But I think the risk for us is, we don't want to add a new system for them to manage when they already have a system. So we prefer to integrate rather than reinvent there. Eric Anderson: Yeah. Makes total sense. And when you say row level, is that a non-aggregation? Willem Pienaar: Yeah. That's non-aggregation. You would not want to aggregate at that level, because normally you're doing an entity lookup, like: for this user, give me his or her features. You can't aggregate there. I mean, you could, but then you'd need to apply that aggregation on the batch side as well. Eric Anderson: Yeah. Aggregating streams has always been a little complicated. You have to think about windows, and when do the windows end, and- Willem Pienaar: Yeah, I think that's part of why it falls apart for the retrieval case. I think you're talking about the production side: how do you produce features to store in the online store? But at retrieval time, the list of entities that you look up is kind of random, right? Let's say you are looking for a driver with Uber. Which drivers are going to be queried depends on which ones are close to you, right? So that list is always random. So if you aggregate over that list when you look up their features, how does that translate to the batch case, like the offline feature computation? It can't, right? Because those are just query-time lists that are completely impossible to know for training. So basically it's always point lookups. It's always row level. Yeah. Eric Anderson: Great. And then maybe, we talked a bit before the show about use cases that are good for feature stores and bad for feature stores. Where do you find people reaching for Feast versus other things? Willem Pienaar: Yeah, I think for traditional supervised learning, basically anything where you want to make a prediction. XGBoost or scikit-learn, anything where MLflow is involved.
So fraud detection comes to mind, churn prediction, pretty much anything where you can run a model and it needs data and you want to make a prediction. That's where feature stores are valuable. What we do see is that typically it's not valuable when you have batch-only use cases, because you don't have this consistency problem, right? You don't have two environments, the offline-online environment. Typically you just have your one single data source. Feature stores are typically more valuable when you have a freshness requirement. Either you have streams pushing fresh, low latency data into an online store, or you have these realtime computations, or you have fast, low latency reads. If you have those elements and you need to do some kind of online scoring, then a feature store is going to be critical for you. Willem Pienaar: Feature stores are useful as part of recommendation systems, although they don't do everything in a recommendation system. So it's pretty good at ranking. You can use a candidate server, look up a bunch of entities, like a bunch of users, and then rank them. And the feature store can be the source of the features for those users or those entities. So that fits really well into the RecSys case as well. Yeah. And I think that's pretty much it. Where feature stores are not really good today is mostly computer vision, and I'd say NLP, partly because the data models don't really lend themselves to reuse and discovery. Over time, I guess you could have feature stores moving more into that space, but most people in those spaces, I think, work more at the data set level, where feature stores are really good at the, "Hey, here's this feature that's been computed as a time series over time. What was this driver's rating, or what are this driver's top three locations in this area or in the city?"
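The ranking use case above has a simple shape: a candidate server proposes an arbitrary, query-time list of entities, the feature store answers point lookups for them, and a model ranks the results. A toy sketch of that flow, with an invented store, a stub scoring function, and made-up numbers:

```python
# Sketch of the ranking flow: candidate generation -> point lookups in
# the online feature store -> model scoring. All names are illustrative.
online_store = {  # feature store: entity key -> feature values
    "driver_1": {"rating": 4.9, "eta_min": 3},
    "driver_2": {"rating": 4.5, "eta_min": 2},
    "driver_3": {"rating": 4.8, "eta_min": 9},
}

def get_online_features(entity_ids):
    """Point lookups: fetch features for an arbitrary, query-time list."""
    return {e: online_store[e] for e in entity_ids}

def score(f):
    """Stand-in for a trained ranking model."""
    return f["rating"] - 0.1 * f["eta_min"]

candidates = ["driver_2", "driver_3", "driver_1"]  # from a candidate server
feats = get_online_features(candidates)
ranked = sorted(candidates, key=lambda e: score(feats[e]), reverse=True)

assert ranked[0] == "driver_1"
```

This also illustrates Willem's point about why retrieval is always point lookups: the candidate list is only known at query time, so any aggregation over it could never be reproduced offline for training.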
Willem Pienaar: Or it's just some time series value that's constantly changing, as opposed to, "Here's this big image or here's this video," where the features are hard to discretize from that. Eric Anderson: You bring up a point that I've heard elsewhere. You mentioned MLOps is a big market trend happening. And some have said that maybe MLOps is not one pipeline. It's really several pipelines: one being computer vision, one being NLP, and one being more tabular, numerical data. And what you say resonates with that theory, that you probably want a feature store in the right column, but you may not need it in the others as much. Willem Pienaar: Yeah. I think that's a good question. I really think it depends on, going back to first principles, do you need to be online, and is there a latency or freshness requirement? In a lot of cases, computer vision doesn't really need that. Often, you're okay with a one second delay on predictions. So yeah, I'm not sure if a feature store is absolutely... Because if you can wait a second, you can probably run a micro batch, right? You can read it from the offline store, do your prediction and send the result back, and that simplifies your architecture dramatically. I hope I'm not promoting some theory that I'm not aware of, but yeah- Eric Anderson: No, no, no. Willem Pienaar: I think I agree. Eric Anderson: Yeah. Yeah. You're right. You're looking for almost user response, things that the user is waiting for. If the users are waiting for answers, you kind of need a feature store. And then maybe tell us, what do we have to look forward to on the Feast project going forward? Willem Pienaar: Yeah. So I think there's two areas that we are heavily investing in right now. And the one is high performance reads and writes through our feature server. So right now we're focusing on supporting extremely high loads in our read API for reading feature values in an online production setting.
Over the last couple of months, we've been doing a lot of benchmarks, a lot of optimizations, and we'll be continuing with that. And mid-March to end of March, we're going to be releasing our Go feature server, which has a lot of great functionality and is really, really robust. And it actually kind of ships with a lot of the learnings we've had on the Tecton side. So there's a lot of really battle tested code on the Tecton side and a lot of learnings that we are open sourcing there. The other thing is data quality monitoring.
Willem Pienaar: Almost universally, when we speak to data scientists, when we say, "What's the number one thing that a feature store can add?" It's, "Can we have better data quality monitoring?" So we're working on integrating Great Expectations. Actually, we've got a proof of concept of that already out. So if you look at the latest Feast, Feast 0.18, we already support integration with Great Expectations. And so when you build a training data set, you can already select that as a reference data set and then use that to validate future training runs.
Willem Pienaar: And so the feature store can integrate with those tools and ensure that your data gets profiled, and then you've got these rules that can be used to prevent drift in your data. And those rules we're going to continue to apply in different places. So we're going to allow you to ship rules from the offline setting. Imagine a data scientist both creates features offline and trains a model, but also ships the rules around those features into prod. And so when there's a streaming system creating those features, maybe created by a different data scientist or by a different team, those feature values are going to be validated by those rules. And that's extremely powerful because we enable these data scientists to do more. Yeah. So that's something that a lot of people have been asking for, and it's going to be a big focus for us this year.
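The workflow described here — profile a reference training data set into rules, then validate newly computed feature values against those rules before they reach a model — can be sketched as a small Python toy. It mimics the Great Expectations-style flow at a conceptual level only; the functions and rule format are illustrative assumptions, not Feast's or Great Expectations' actual API:

```python
def profile(reference_rows):
    """Derive a simple min/max rule per feature from the reference data set."""
    rules = {}
    for row in reference_rows:
        for name, value in row.items():
            lo, hi = rules.get(name, (value, value))
            rules[name] = (min(lo, value), max(hi, value))
    return rules

def validate(row, rules):
    """Return the names of features whose values fall outside the rules."""
    return [
        name for name, value in row.items()
        if name in rules and not (rules[name][0] <= value <= rules[name][1])
    ]

# Reference data set captured offline at training time.
training_rows = [
    {"rating": 4.1, "trips_last_week": 12},
    {"rating": 4.9, "trips_last_week": 40},
]
rules = profile(training_rows)

# Later, a streaming pipeline produces a suspicious feature value;
# the shipped rules flag the drift before the value is served.
violations = validate({"rating": 9.7, "trips_last_week": 20}, rules)
print(violations)  # ['rating']
```

The key idea matches the transcript: the rules travel with the features from the offline world into production, so a streaming pipeline owned by a different team is still held to the data scientist's expectations.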
If you check out 0.18 and 0.19, which will be releasing next month, those are going to be big releases focusing on that.
Eric Anderson: That's great. I feel like a feature store was an exciting new, maybe to some people ambiguous, idea historically. And do you feel that's... I don't know if you would feel like that, but maybe it's been ironed out more recently. Do people understand where the feature store fits in their environment?
Willem Pienaar: Well, I think if you don't have the problems, then it's hard for you to understand the role of the feature store, but when you run into the problem, it's so obvious. And so I think the problems are a bit niche, because ML is a big space. And if you deal with batch data, you don't get it, because until you run into operational problems, you're not going to understand it. So I think we can be better about messaging and positioning, or whatever you want to call it, but it's still a bit of a problem in explaining the value of it. But we have so many teams running it and really relying on it that we know it is an important component.
Eric Anderson: Well, maybe it's a feature, not a bug. I mean, it makes for easy qualification.
Willem Pienaar: Pun intended.
Eric Anderson: "I don't know why I need it." Well, then you don't. Yeah, exactly. Willem, I'm so grateful to have you come on the show today. Really exciting stuff you're working on. Anything you want to say before you go, maybe where people can find Feast?
Willem Pienaar: We're going to be talking at a lot of conferences this year. We're going to be at Data Council. Hopefully we're going to be at Data ICONF. Check out Feast at Feast.dev. We've got a great growing Slack community at slack.Feast.dev. So just pop in and join the community if you're interested in the space.
Eric Anderson: Thanks, Willem.
Willem Pienaar: Yeah, my pleasure.
Eric Anderson: You can find today's show notes and past episodes at Contributor.fyi.
Until next time, I'm Eric Anderson and this has been Contributor.