Matt Kimball Yo, yo, what's up, Stevie Ray.

Steve McDowell Don't do that. Please don't do that.

Matt Kimball Come on, man. It's Thursday.

Steve McDowell How's life in quarantine? We're seven months in.

Matt Kimball No one's gotten the 'rona yet, but I'm ready to poke my eyeballs out. I need to get on the road again, and I need my kids to be at school and not in the house.

Steve McDowell Yeah, I hear you. I hear you.

Matt Kimball Hey, Steve. So we talk a lot with the likes of HPE, Dell, and Lenovo, and it's funny: while HPE and Dell are household brand names, as is Lenovo, when you look at the IDC server tracker and the arguments between HPE and Dell about who's number one and who's number two, the actual leader in the market is another category called ODM Direct. There are two categories, ODM Direct and Rest of Market, and those are the players that service the hyperscale market: the large cloud service providers, the really large customers that understand the true value of compute. And what I find interesting is that while these discussions go on, HPE right now has, I think, about 15% of the market, while that ODM Direct category is at 28.8%, so nearly 29% of the market, and it's up about eight points year over year.

Steve McDowell Oh yeah, man, I'm looking at the IDC numbers from last quarter now. If you look at HPE and Dell Technologies combined, they're at about 38 to 39% of the server market together. And if you look at ODM Direct plus Rest of Market, it's close to 45%, with a handful of others splitting the remaining 15% or so.

Matt Kimball And almost more importantly, when you look year over year, HPE and Dell have gone down while the ODM market has increased significantly. That tells you that the folks who know compute the best place a certain value on the infrastructure they're putting in; they see something in the ODM space that maybe enterprise IT doesn't see. So today we have an interesting topic. One of the companies that plays in that ODM and Rest of Market space is Hyve Solutions. They've been around for quite a while, and they do custom solutions for the hyperscale space. We have Jay Shenoy, who is VP of Products at... do I have that right, Jay? VP of Products?

Jay Shenoy VP of Technology, but close enough.

Matt Kimball VP of Technology, sorry about that, at Hyve Solutions. And Jay is an interesting guy, because Jay, you've spent time at Twitter and at eBay; you've spent time on the other side of the equation, deploying and managing infrastructure, correct?

Jay Shenoy To make things just slightly less confusing, I consider myself to be from the chip industry. That's where I am from.

Matt Kimball Even better, so you're even smarter than what you're telling us. So, Jay, welcome to the podcast. Steve and I talk about this topic all the time: the move to software-defined, the move to infrastructure in place, understanding the real value of infrastructure. So you're a great addition to the podcast this week. What do you want to tell us about yourself before we get into who Hyve is and what they do?
Jay Shenoy Yeah. Thanks, Matt. Thanks, Steve. Good talking to you again. So yeah, I am VP of Technology at Hyve Solutions, and I have been doing that for over two years now. Before that, I was a customer of Hyve, through my role at eBay, where I led hardware engineering for eBay's hyperscale compute infrastructure. And before that, I did the same thing at Twitter, where I was actually the first hardware engineer, so that was a great education for me; we built a fairly significant infrastructure almost from the ground up. And before that, I came from the chip industry. So for the past 10 years or so, I've seen this hyperscale infrastructure, or cloud, or whatever name it goes by, up close from multiple angles.

Steve McDowell Just to clarify for our audience who doesn't follow this game closely: when we think ODM, what comes to mind immediately is consumer electronics. But you focus more on the server side, not so much the enterprise, but the hyperscale market, right?

Jay Shenoy Yeah. Hyve is a relatively young company; Hyve celebrated eight years a few months ago, and in the grand scheme of things that's not such an old company. Synnex, our parent company, is older; I think it has been around in some form for over 30 years. Compared to those companies, Hyve has had a slightly different story to its evolution, in the sense that Hyve started out as the rack-level system integration business at Synnex. Then my boss, Steve Ichinaga, about 10 years ago, saw what we now know as hyperscale emerge as a niche that was somewhat distinct from other compute segments, and started focusing on that, initially as a system integrator. But over time Hyve has really built up its capabilities, to where, although we came at it from a different route than the more established ODMs, our capabilities are now on the same plane.

Matt Kimball Just to put a finer point on things: when we say hyperscale, there's small business, there's enterprise, and then hyperscale is cloud, it's any really large data center installation. It could be a JP Morgan.

Jay Shenoy Yes. And some of those companies, when their compute needs get big enough, begin to think about their own infrastructure in an efficient manner, and then they do begin to do hyperscale. So hyperscale is, in some ways, a very broad marketing term, but in other ways it can be a reasonably well-defined technical term as well, and people go back and forth between the broad marketing brush and the finer technical definition.

Steve McDowell When you look at the kinds of problems the hyperscale market is addressing, and the solutions that fall into there, these are not off-the-shelf pizza-box servers all the time, right? It requires different sorts of solutions to meet the needs of that environment. Is that correct?

Jay Shenoy Yes, it is, and let me illustrate that in a couple of ways. Maybe 10 years ago, maybe a lot more than 10 years ago, the hardware for hyperscale started off as a separate thread, in many ways.
Because enterprise servers were too complicated; they had too many bells and whistles that were not needed. If you remember the original OCP byline, it was vanity-free servers. At that time it was about simplifying what would be an enterprise server, simplifying it in cost to some extent, but more than that, simplifying it operationally. I think we have turned a corner without quite realizing it, where the emphasis is not on cost as much anymore, although the hyperscale companies certainly get the bang for their compute buck, given the massive scale these people operate at, which is much greater than most or all enterprises. Other things come into consideration: serviceability, completely automated provisioning, completely automated monitoring. And when you are a public cloud, the security requirements, and this is something that I think technologically differentiates some of these solutions, are actually unique, fairly unprecedented in the enterprise space. So there are a couple of areas in which I would even argue that hyperscale servers are more sophisticated than enterprise servers.

Matt Kimball And there are multiple elements to cost. There is the initial outlay to buy the systems, and there is total cost of ownership, which is a nebulous term, but to your point, it's not that cost is unimportant; it's that people should look at true cost instead of acquisition cost. A security breach can cost millions of dollars.

Jay Shenoy Yeah, total cost of ownership is actually not a buzzword for us; it is a tool of the trade in hyperscale.

Matt Kimball That's an interesting thing to point out, because Steve, as you and I talk to enterprise IT folks, we hear about TCO a lot, and it's been kind of taken over by the marketing departments at every technology vendor. But the folks that Hyve services, the large hyperscalers, really do have those spreadsheets, and they live and die by those spreadsheets.

Jay Shenoy Yeah. You may have heard "lies, damned lies, and TCO models." If you want to put a marketing spin on it, you can put your thumb on the TCO scale to narrate a particular story. But mostly you're not doing that, at least not when you're actually making decisions. You are actually trying to quantify things that are sometimes difficult to quantify, and at other times easy to quantify, in terms of trade-offs between the initial capex cost and the long-term cost.
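[Editor's note: a minimal sketch of the capex-versus-operating-cost trade-off Jay describes. Every server price, power figure, and rate below is an illustrative assumption, not a figure from the episode or from any real TCO model.]

```python
# Toy TCO comparison: acquisition cost plus energy and operations over the service life.
def tco(capex_per_server, power_watts, pue, kwh_price, ops_per_server_yr, servers, years):
    energy_kwh = power_watts / 1000 * 24 * 365 * years * pue
    per_server = capex_per_server + energy_kwh * kwh_price + ops_per_server_yr * years
    return per_server * servers

# A cheaper-to-buy box versus a pricier but more efficient, easier-to-service one.
commodity = tco(capex_per_server=6500, power_watts=450, pue=1.5,
                kwh_price=0.10, ops_per_server_yr=300, servers=10_000, years=4)
optimized = tco(capex_per_server=7500, power_watts=380, pue=1.2,
                kwh_price=0.10, ops_per_server_yr=150, servers=10_000, years=4)

print(f"commodity fleet: ${commodity / 1e6:.1f}M, optimized fleet: ${optimized / 1e6:.1f}M")
```

At fleet scale, the higher acquisition price can be more than repaid by power and serviceability, which is exactly the kind of arithmetic a hyperscale TCO spreadsheet captures.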
Steve McDowell It's been fascinating watching this market unfold. Matt and I were both, I don't know if you know this, at AMD a dozen years ago when this market really started to take off. And I remember the early days, I don't know if it was Facebook or Twitter, who had thousands of consumer motherboards that weren't packaged; they were just wire-wrapped to the rack.

Jay Shenoy That might have been Google.

Steve McDowell Yeah, it might have been Google. And today we're talking about specially built architectures. I think the hyperscalers really gave rise to a lot of the disaggregated stuff that we're starting to see trickle its way into the enterprise now. So it's been a fascinating evolution, and you guys have been there the whole time.

Jay Shenoy Yeah. And I would say deep learning is another area where there's a lot of mindshare. In terms of infrastructure, and actually in terms of platforms and even applications for AI and deep learning, the hyperscalers are probably more advanced. I'm part of that ecosystem, so it's very possible that I'm overstating our own case, but from what I see, we are actually ahead of almost anybody we can think of in our segments.

Matt Kimball Yeah, I think it's an interesting trend, because Steve, you hit on it: there's a seep-down into the enterprise from hyperscale. On the software side it's open source really bleeding down into the enterprise, or the adoption and acceptance of open source, I should say, not its existence. But also on the server side, you see a lot more enterprises beginning to look at hardware differently. You have folks like yourself, and Frank Frankovsky, the original OG of OCP, promoting the idea that you don't need that brand-name server; you can get very good performance and capabilities, even better performance and capabilities. That mindset starts to bleed down into the enterprise as well, and the same thing with deep learning and the adoption of it. Do you see those trends from your side of the equation as well? Or are you so focused on what hyperscale is doing that who cares about the 10,000-server install?

Jay Shenoy Well, let me candidly admit to being completely focused on the hyperscalers. What we do see is a couple of things. Once in a while a significantly large company, a name that most people would recognize, will come our way and ask about using hyperscale gear, and it becomes clear that they have never done it before; we are used to that by now. More commonly, what we are seeing is that enterprises are actually using public clouds, so we hear that discussion even more. And Hyve is a part of Synnex, and Synnex is one of the biggest IT distributors, among the top three or four, so Synnex sees this even more: a lot of its customers, small and medium enterprises and so on, are consuming public cloud. Synnex actually has various public cloud offerings on its line card.

Matt Kimball When you talk about deep learning and AI, I know this is an area where, and Steve and I have talked about this, hyperscale and cloud are so far ahead that people, enterprise IT folks, don't fully realize it. Is that an area where Hyve says, that's our next Mars landing, that's where we want to set up camp and really expand?

Jay Shenoy Yes, except it's the present.
The future is here already.

Steve McDowell Yeah. What's interesting when I look at what the hyperscale market is doing: it started off with massive data centers, we talked about Facebook and Google, and then also the public cloud. But now, if you look at what those guys are talking about, it's not just AI and machine learning, which is driving some really interesting stuff; it's also the expansion out toward the edge. Are you seeing that as well when you talk to your hyperscale customers?

Jay Shenoy Absolutely, absolutely. Maybe two years ago I would have thought that edge is coming, that it's a technology of the future. In the middle of 2020, edge is here, and it's a very natural part of what we do in hyperscale. Edge is one of those very loosely defined terms. I personally believe there are many slices of the edge, and each slice has its own dynamic in terms of what's driving it, what the platforms are, and what the use cases are. So it's not always easy to talk about edge as one thing, but whatever the collection of things is, we see more than one of them in our day-to-day life. Some of the hyperscalers, for deploying e-commerce or social networking applications, are building out their edge, and that involves a certain dynamic. Then there are all the video streaming and video conferencing players, more like what would have been a CDN, except it's a CDN built in a dramatically different way, and they worry about different things. And then of course there's 5G, which is converting the base station, or a big part of the base station, not all of it, into a software-defined element that runs on ordinary servers, or close to ordinary servers. That is another part of the edge that we see quite closely.

Matt Kimball So talk to us a little bit, Jay, about your engagement model. For our listeners who don't typically deal with a company like Hyve, what does an engagement with a hyperscaler look like? You hit on the fact that the needs of hyperscale are unique, and they can be very unique to a specific customer. I know that Hyve takes great pride in customizing solutions; does that get down to board-level design?

Jay Shenoy Yes, it does. We found it prudent, and by now we have a reasonably structured way of thinking about customization. Everything we do can be classified into four levels of customization, and I can go over them. We do our own building blocks, which are our own motherboards and our own chassis and so on, that we can configure in different ways. That's our basic entry point, and it sits in parallel to our fully custom portfolio. So our level zero, if you will, is configuring standard building blocks that we have, and we do that several times a month, new projects several times a month.

Matt Kimball For our listeners, just to simplify, that's the Lego approach, right?

Jay Shenoy Yes, it is the Lego approach.
And for some of the smaller customers, where the needs may be similar to other customers of their type, and where their scale doesn't justify more customization, this works perfectly fine. That's on the hardware side. On the firmware side, we have standard images, hardened images, or we're beginning to have hardened images, that will also work for most customers without customization. So that's like configure-to-order, if you will. Then the next three levels you can think of as low, medium, and high customization. Low customization may not involve changing the motherboard; it may involve smaller boards, plus some mild firmware features that we can add into our firmware code base and use as a build option. So it's not a separate code base, just a build. Then there's medium customization, which may involve taking a base motherboard and changing a few things, or making a custom board for some expander, that type of stuff. And certainly thermal solutions would be heavily customized in this medium regime. On the firmware side, this is the place we really don't want to be: we would have a separate code base for that customer, and that is by far the worst of the customization situations we get into. And then the last level of customization, which we also do very frequently, is full customization. On the hardware side, the motherboard, or motherboards, is designed for that customer, and anything else specific to that customer is also customized for them. And on the firmware side, we contribute to and develop source code for them. Those four levels cover our spectrum of customization.

Matt Kimball That fourth level is a ground-up design, so that's a long lead time, but that's also a customer that is going to be engaged with Hyve for the long term as well, right?

Jay Shenoy Yes. And they need the customization too; they're not doing it just because.

Matt Kimball Going back to relevance to the enterprise, do you see a lot of these, not necessarily Hyve solutions, but these concepts, like the disaggregation Steve talked about, bleeding down into the enterprise and being adopted there?

Jay Shenoy Yeah, actually. We talked about former, or even current, enterprise customers occasionally coming our way. We are not actively seeking them, not because we don't know how, but because we don't have a large sales force and so on to go seek that out. And most of the time when that happens, the basic configure-to-order model does work, because we have hard-drive storage systems, we have flash storage systems, we have GPU systems, we have general compute systems. A lot of the time we can offer what we already have.

Matt Kimball That's a good point, because we kind of went into the weeds on some of this, but your portfolio at Hyve rivals the portfolios of the major OEMs. You've got a range of solutions that...

Steve McDowell ...span the infrastructure space.

Jay Shenoy Yeah, you could say that.
Matt Kimball I could say anything I want.

Jay Shenoy Well, we wouldn't put it that way ourselves, in the sense that we don't have a standard price list, and we don't have a standard portfolio and so on, because some of the custom work we do is actually custom. But yes, the range of things that we develop is actually quite vast.

Matt Kimball Yes, the range of products and solutions.

Jay Shenoy And then we are seeing disaggregation trends become more accessible. This is an evolution in the industry: open source, and the open source mindset, percolates down from very sophisticated users to where it becomes robust enough that pretty much anybody can use it. An example of that, since you mentioned disaggregation, would be storage, which is the first step for disaggregation. NVMe over TCP is becoming robust enough that we actually do expect to see it get adopted more, and when that happens, on the hardware side we will have the things that other people are already using for applications like that.

Steve McDowell That makes a lot of sense. When you think about what hyperscalers are doing, that's an environment where software-defined everything is really what drives TCO; I can't go in and reconfigure storage boxes physically when I have thousands and thousands of storage boxes. I cover storage for Moor Insights, and I'm actually happy to hear you say you're seeing some traction with NVMe over Ethernet. The enterprise market, I think, is going a more traditional route, at least initially. Those are good points.
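[Editor's note: for listeners who haven't touched NVMe over TCP, here is a minimal sketch of what attaching a remote NVMe namespace looks like on a Linux host. It assumes the nvme-tcp kernel module and the nvme-cli package are available; the target address and subsystem NQN are made-up placeholders, and this is generic plumbing, not any vendor's tooling.]

```python
# Sketch: connect to a hypothetical NVMe/TCP target and list the namespaces that appear.
import subprocess

TARGET_ADDR = "192.0.2.10"                      # placeholder storage target address
TARGET_NQN = "nqn.2020-10.example.com:subsys1"  # placeholder subsystem NQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["modprobe", "nvme-tcp"])                    # load the TCP transport
run(["nvme", "connect", "-t", "tcp",             # transport type
     "-a", TARGET_ADDR, "-s", "4420",            # target address and port
     "-n", TARGET_NQN])                          # subsystem to attach
run(["nvme", "list"])                            # remote namespaces now appear as local block devices
```

The point of the disaggregation argument is that, once attached, those remote namespaces look like ordinary local drives to the application stack, so storage capacity can be pooled and reconfigured in software rather than by touching boxes.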
Matt Kimball Let me ask you, we're talking a bit about the enterprise arriving at places where hyperscale has been for years. You bring up AI; I call it the next moon landing, and you'd say we're already there. Do you see, and I shouldn't say hyperscale has figured it out, but rather that it's...

Jay Shenoy Further ahead, yeah.

Matt Kimball Do you see that same kind of percolation happening in the AI and deep learning space, down into the enterprise?

Jay Shenoy Actually, the case for it may in fact be slightly better in the AI space. AI is quite a spectrum; far from being a niche segment, when you actually look into it, the breadth of AI applications is very, very wide. MLPerf is a somewhat decent approximation of the span of applications, but the span of applications is actually quite vast. So there are specific groups of applications in there that are so well developed, or so much better developed, in hyperscale that some enterprise customers will run into situations where maybe they went to an AI conference, maybe they heard about something, and the infrastructure for it isn't quite available from their favorite OEM.

Matt Kimball With that said, every enterprise CIO is talking about adopting AI, air quotes, in the enterprise, and how much it will help their customers and their company. But when you press them on it, it's clear there isn't a really firm understanding of what it is, how they're going to get there, and what they need to do. Given your background and what you do now, if you were embarking on a quote-unquote AI project, what are the things you would make sure to do as an enterprise CIO, or a CIO period, or a CTO?

Jay Shenoy It's kind of hard for me to put myself in their shoes, but let me try, because there's at least one precedent for this. If any CIO were to ask me how to think about this, here is what I would say. A few years ago, Hadoop became very prominent in the mindshare of enterprise CIOs, and then there were all the standard things they went through with Hadoop: hey, your data can't be in silos, you need a data warehouse or data lakes, that kind of stuff. That led, I imagine, since I didn't live it firsthand, to discipline in corralling all of your data. That, I think, is a very good template for AI, because some of the characteristics are actually the same. Just as Hadoop gets you insight into data with statistical models, in the same way you throw these neural network models at large pools of data. So I would make that analogy: how you managed your Hadoop or analytics transition is how you should manage your AI transition.

Unknown Speaker That's good.

Steve McDowell So I'm going to switch topics a little bit and talk about what you're seeing from a trend perspective. You talked about storage moving more toward NVMe over fabrics and things of that sort. Are you seeing other things that might surprise us in the hyperscaler space? Are you seeing your customers starting to ask for Arm in the server space, for example? What are you seeing in terms of how this market is evolving?

Jay Shenoy Actually, that particular one, Arm, I think is also happening. One of the things that people used to say 10 years ago, which wasn't quite true back then and isn't true now, is that it takes a lot of time to port your software. That is true for some mission-critical software, but a big enough part of your software portfolio is already abstracted from the hardware, whether you're writing it in Java or you have Docker containers for it, that you can actually be portable as long as the other CPU architecture provides you Linux, a good JVM, and two or three other things. So the reason I think that type of change is not as formidable as you would think is that you may not have to port a hundred percent of your code base or applications to get the benefits. That's why hyperscalers are looking at alternative CPU architectures. Now, that said, x86 has a lot to be said for it. And then there is the brawny-core versus wimpy-core discussion that has always been there, and time and again people have made the case, and I think they have made it, that brawny cores are in fact more versatile than wimpy cores. So if you could only have one type, then of course you should go with the brawny core. But the real question is, do you need only one type?
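[Editor's note: a small, hedged illustration of Jay's portability point about containerized workloads. For a containerized service, "has it been ported to Arm?" often reduces to "does the image publish an arm64 variant?" This assumes a reasonably recent Docker CLI and the multi-arch manifest-list format; the image names are just examples.]

```python
# List the CPU architectures each container image is published for.
import json
import subprocess

def architectures(image):
    out = subprocess.run(["docker", "manifest", "inspect", image],
                         capture_output=True, text=True, check=True).stdout
    manifest = json.loads(out)
    # Multi-arch images return a manifest list with one entry per platform.
    return sorted({m["platform"]["architecture"] for m in manifest.get("manifests", [])})

for image in ["python:3.9", "openjdk:11", "nginx:latest"]:
    print(image, "->", architectures(image))
```

Images that already publish both amd64 and arm64 variants can typically move to an Arm server with a redeploy rather than a porting project, which is the substance of Jay's argument.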
Steve McDowell Yeah, and I guess the same question applies to AI, since we were talking about AI and machine learning earlier. There are really two pieces: training and inference. Training is kind of a GPU-led market, and that's on its own path. But click on the inference side, where I think we're seeing a lot of deployment in the hyperscale space, and everybody has a different answer. It seems like every quarter Intel is rolling out some kind of new accelerator. How do you look at that, and what are you seeing in terms of what your customers are asking for?

Jay Shenoy Let me start with the famous "it depends." But here's how I look at it, which has helped me understand many of the choices I see people make. When you're doing inference, in real time somebody comes across a new piece of data, or presents a new piece of data, and in almost real time you make some assessment of it. You're not thinking about the 10 million or 10 billion pieces of data that are similar to it; those are already captured in your model. So the way I think about it is this: you're not making that inference in isolation, you're making it as part of some other logical process. There is an inference part, which involves computing the actual neural network model, let's say image recognition, and then there is some business logic around it, which is standard, general-purpose compute. If you can size the ratio of how much of the workload is your inference and how much is that general-purpose compute, which is actually not trivial in most cases, and you put those two things in series, that's your total workload. So if your inference is a really tiny piece, then maybe you don't need an accelerator. Or if your CPU is good enough that the inference stays a tiny piece, then you don't need an accelerator. If your business or application logic is relatively small, then yes, you probably are better off with an accelerator. And the other way to think about it is how much you can parallelize in your inference systems. Can you sustain hundreds of requests per node, so that you can parallelize your business logic and parallelize your inference? Then you benefit from an accelerator. If you cannot, then you may not need an accelerator, because you're going to underutilize it. That's the way to think about it.

Steve McDowell And I think some of the chip guys agree with that as well. We're seeing new instructions and things put in to get around the need, because there's always some level of inference, right?

Jay Shenoy Yeah, inference in particular. People spend a lot of time on the neural network models, but they sometimes underestimate the business logic that goes around the inference action. To me, that is almost the bigger key in deciding whether you need an accelerator or not.

Steve McDowell I'm making lots of notes here, because you're making a really good point.

Jay Shenoy And of course, CPUs are also getting non-trivially better at inference, as well as at other matrix computations.
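[Editor's note: a toy, Amdahl-style rendering of the sizing argument Jay just walked through. All latencies, the 10x accelerator speedup, and the utilization derating are made-up assumptions purely for illustration.]

```python
# A request is business logic plus model inference in series; an accelerator
# only speeds up the inference slice, and only if you can keep it busy.

def request_latency(logic_ms, infer_ms, accel_speedup=1.0):
    return logic_ms + infer_ms / accel_speedup

def overall_gain(logic_ms, infer_ms, accel_speedup, utilization=1.0):
    # Crude derating: a mostly idle accelerator delivers only part of its rated speedup.
    effective = 1 + (accel_speedup - 1) * utilization
    return request_latency(logic_ms, infer_ms) / request_latency(logic_ms, infer_ms, effective)

# Case A: inference is a sliver of the request, so the accelerator barely helps.
print("A:", round(overall_gain(logic_ms=9.0, infer_ms=1.0, accel_speedup=10), 2), "x")
# Case B: inference dominates and batching keeps the accelerator fed.
print("B:", round(overall_gain(logic_ms=1.0, infer_ms=9.0, accel_speedup=10), 2), "x")
# Case C: inference dominates, but low traffic leaves the accelerator mostly idle.
print("C:", round(overall_gain(logic_ms=1.0, infer_ms=9.0, accel_speedup=10, utilization=0.2), 2), "x")
```

The three cases land at roughly 1.1x, 5.3x, and 2.4x, which mirrors Jay's point: the ratio of business logic to inference, and how well you can keep the accelerator utilized, decide whether the card is worth buying at all.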
Matt Kimball You said you're a chip guy, Jay. We talked a little bit about Arm, and we touched on Intel; AMD is obviously in there as well. And there's all this specialized silicon coming out: Graphcore, Nuvia. How do you see that chip market evolving? And I'm speaking specifically about CPUs, not necessarily ASICs. Do you see the CPU space shaking out over the next few years?

Jay Shenoy That's one of those questions where, if I knew the answer, I would be in a completely different place than I am right now.

Matt Kimball All you have to do is sound smart.

Jay Shenoy So hopefully you will delete this podcast after a convenient period of time, so nobody will go back and look at this. I think we are at the phase, and we are experiencing this firsthand in deep learning, where the spectrum of deep learning applications, just training, forget training versus inference, is large enough, and there are enough unknowns about it, that we are experimenting with various types of architectures. And as in many markets, we will go through a period of discovery and then settle down into fewer choices, is my guess right now. Which will be the winners? I don't know. Will GPUs be there? Yes; by their sheer presence today, you have to give them a place at the table. Will CPUs be there? Yeah, I think CPUs have a lot of potential to get better. Will some other specialized accelerators be there? Maybe one or two, but maybe not as many as there are now. And if I can talk about CPUs: people sometimes underestimate how good CPUs can be. CPUs made very specific choices in prioritizing scalar execution; that's why CPUs are CPUs. And when CPUs have needed to do vector execution, well, look at Intel: they went MMX, then AVX, AVX2, AVX-512, and now there's even AMX, at least as it's been announced to the software community. So starting from a scalar-optimized processor, they got better and better at vector processing, and now they're going into matrix processing as well. That's Intel. On the Arm side, Fujitsu did an outstanding job with SVE and the Japanese supercomputing initiative. So that's a second example. And when you have two good examples like that, why would you count CPUs out?

Steve McDowell Much like we were talking about inference, I think we're seeing machine learning evolve. When we talked about the learning process even three years ago, it was about building models from the ground up. Now I think what we're seeing in the enterprise is a much smaller task: taking a pre-trained library, whether it's Booz Allen's marketplace or whatever is given away for free, and saying, it already knows how to do face recognition or image recognition, let's adapt it to us. And that doesn't need a big Titan GPU all the time, right?

Jay Shenoy Yes and no. Reinforcement learning is actually slightly different from what you just said, or at least that's what I think. What you described is retraining the model. And retraining models, yes, in theory it is easier, but in practice it is not less formidable, because, guess what, now that you've figured out that you can retrain models easily...
And this is the sort of Murphy's Law of computing: now that you've figured out that something is easy, you want to do it all the time. So previously you might have been satisfied with training your model maybe once a month; now you want to do it every other day. So the need, the perceived need, catches up with the available compute infrastructure.

Steve McDowell That makes sense.

Matt Kimball One of the things I find interesting, as we sit and chat and I listen to what you're saying, Jay, is that when the Open Compute initiative started, and we'll just call it the generic, vanilla ODM space, a lot of pundits looked at the market and said compute is commoditized, it doesn't matter, I just need somewhere to run my software and run it well. As we sit here and talk, I think to myself that in many ways the platform has never mattered more, not just for hyperscale, but for any data center that wants to stay on the front end and be competitive. You need finely tuned infrastructure, and it's companies like Hyve that are really leaning into that.

Jay Shenoy Yeah. So let me give you a couple of high-level ways of looking at it. Ten years ago, or even five years ago, you might have said big data, and everybody did get better at big data, enterprises and hyperscalers and so on. But just when we thought we had finally mastered this thing, it became big, fast data. Not only do you have petabytes and exabytes of data, you want to access them all the time, which was not the case with classical big data, where you had a lot of cold data. So on the storage side, what was big data, which you thought you had finally figured out, now became big, fast data. And similarly on the compute side: you thought you had finally figured it out, running your web stack and application stack and so on, and then this whole massive AI thing, which we are still learning about, came along. So no danger of sitting on your laurels, or resting on your laurels.

Steve McDowell Yeah, so it sounds like we're solving a lot of problems in the hyperscale market.

Jay Shenoy Yeah.

Steve McDowell Hey, so this has been a fascinating conversation. Like I said earlier, I'm taking pages of notes here, because you're going to make me sound smart in my next conversation, so I appreciate that. What do you want to leave us with? What should we take away in terms of what Hyve is doing and how the hyperscale market is evolving?

Jay Shenoy Let me talk a little bit about Hyve through a couple of metrics. Hyve is a global company, and that's not a buzzword for us; I will illustrate that with a couple of hard facts. We are based in Fremont; the headquarters are in Fremont, and Steve is based out of Fremont. But our VP of manufacturing operations is based out of the UK, and manufacturing is a very big part of an ODM's life.
In fact, in terms of the sheer percentage of effort that goes into it, manufacturing dominates other activities, including engineering. So our VP of manufacturing is based out of the UK, and our VP of engineering is based out of Taipei, and engineering is probably the next biggest function we have. So we truly run as a global company. We have five factories around the world, and practically, no, not practically, every single function at Hyve is multi-site and multi-country as well. So it is a global thing.

Matt Kimball Global reach.

Jay Shenoy Global, yeah. We land servers in most countries in the world, and we are able to service them in most countries in the world.

Matt Kimball And populating data centers everywhere.

Steve McDowell Yep. Thanks, Jay, for coming on today. I think you've helped us all learn a little bit more about what's happening in the world, and I love your perspectives on AI and machine learning in particular.

Jay Shenoy Yeah, thanks. Thanks, man.

Steve McDowell Good conversation. Going into it, you know I'm fascinated by the evolution of the hyperscale market, specifically the technology that's evolved out of it. Hyperscale is a software-defined world where I'm running logical wires between servers, not necessarily physical ones.

Matt Kimball Yes, and hyperscale is where enterprise will be in five years. You can say that at any given time: hyperscale is where enterprise will be in five years.

Steve McDowell I think you're right. And looking at it from the storage market, we're already seeing learnings from the hyperscale market move into the enterprise. We talked in the conversation with Jay a little bit about disaggregated storage. We've seen Intel's storage division, over the past couple of weeks, partner with a company called Lightbits Labs, which is all about moving NVMe over the wire, so you can build a virtual pool of drives anywhere. That's what the public cloud guys are already doing, and it's going to move fast into the enterprise. I think we're already seeing a blending of platform types; it's just a pool of resources, and a logical move from where we all started with virtualization.

Matt Kimball Yeah, I think for our enterprise IT listeners, the big takeaway from this is not necessarily to go out and buy Hyve servers or Hyve solutions today. It's to look at the way Jay positions AI, and the way he positions disaggregation. We've already seen it with CI and HCI and the like, but this is more advanced disaggregation, and this is where your world is going.

Steve McDowell So take note, for sure. All right. In the meantime, I'm off to do other work, and we will see you next week, Matty.

Matt Kimball What do you have to do, Stevie? I'm not...