
From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante.

HPE's announcement of an AI cloud for large language models highlights a differentiated strategy that the company hopes will lead to sustained momentum in its high performance computing business. While we think HPE has some distinct advantages with respect to its supercomputing intellectual property, the public cloud players have a substantial lead in AI, with a point of view that generative AI is fully dependent on the cloud and its massive compute capabilities. The question is, can HPE bring unique capabilities and a focus to the table that will yield competitive advantage and, ultimately, profits in the space? Hello, and welcome to this week's Wikibon Cube Insights powered by ETR. In this "Breaking Analysis", we unpack HPE's LLM-as-a-service announcement from the company's recent Discover conference. And we'll try to answer the question, is HPE's strategy a viable alternative to today's public and private cloud gen AI deployment models?
Or is it ultimately destined to be a niche player in the market? And to do so, we welcome to the program Cube analyst Rob Strechay, and vice president and principal analyst at Constellation Research and friend of theCUBE, Andy Thurai. Gentlemen, hello. Andy, good to see you again. We saw you this week. Great to have you back. Good to be here.

All right, let's start with what HPE announced. It entered the AI cloud market via an expansion of GreenLake, its as-a-service platform. The company is offering large language models on-demand in a multi-tenant service powered by HPE supercomputers. HPE is partnering with a German startup that none of us had ever heard of, at least I hadn't, called Aleph Alpha, a company specializing in large language models with explainability as one of the features. HPE believes that this is critically important for its strategy of offering domain-specific AI applications. HPE's first offering is going to provide access to something called Luminous, a pre-trained LLM from Aleph Alpha, which will allow companies to leverage their own data to train and tune custom models using proprietary information while avoiding IP leakage. So, let's start with you, Rob. What else can you add to what HPE is offering here?

Yeah, I think what's interesting is that they're taking Cray to the next level and making it as-a-service versus having to buy a supercomputer and bring it on-prem. It makes the supercomputer infrastructure more accessible. I think it's taking advantage of the underlying software, which is interesting, but when you start to peel it back, they're still six months away from having a GA in North America, which they announced will be towards the end of the year. And then, in Europe, you have to wait until next year. So, I think they're playing catch up in this space in a pretty big way right at the moment.

Andy, in your view, how viable is this strategy?

So, first of all, like Rob said, it's only an announcement now. It's not GA, at least for another six months or so. So, take it with a grain of salt. But one of the things they are suggesting could be compelling for large workloads. Right now, the problem is, when you go to the public cloud, the way you are setting up the machine learning models, the LLMs, the training, the whole nine yards, there's a lot of data science and productionizing work involved before you can get the models to production. What they are suggesting is, give us your biggest possible workload, throw it at us, we'll figure out how to make it efficient. We'll have our GreenLake services. We'll have our supercomputer, powerful network, powerful hardware, powerful storage, powerful memory, powerful whatever.
We'll figure it out. You don't have to fine tune anything. You throw the workload at us, we'll make it train and run. Which is actually very good for high-volume, HPC-like workloads. So, it's a good message, just not ready yet. And I need to see a whole lot of details. For example, they just sat in the panel and talked about machine learning operations as key, but I don't see any announcements saying how they're going to execute it. So, there are a lot of holes in these big announcements. Is it going to move the needle? I don't know.

>> Yeah. But to your point, the viability in the premise is simplification. So, that's good. But now, it's a matter of execution. Now, you alluded to this, Rob. This is not IaaS, right? >> Right. Be specific. What are they actually delivering? They mentioned several workloads to come, like climate modeling. >> Right. Well, they had three models that they're really approaching, which were climate, bio/life sciences, and healthcare. >> Yeah. Those were the big three. And they alluded to the fact that they were going to have financial models as well- Which would be, of course- >> Yeah, they have to. Without that, you don't get a lot of the non-governmental types. I think what they're focusing on is a lot of the workloads that Cray really does well with today. And to Andy's point, they're trying to simplify it by saying, hey, you can just use our models out of the gate, or you can bring your own model. I did ask, as part of one of the follow-ups: great, so I can use Luminous, but what if I want to go and use something from one of the others, like Anthropic or what have you? And they said, you can bring it. I think, to your question, it's more platform-as-a-service that they're providing versus infrastructure-as-a-service. So, to Andy's point again, you don't have to go under and plumb all the data together and stuff like that.

Well, Andy, what about the partnership with Aleph Alpha? A lot of the analysts were like, why are you dealing with this little tiny company that's raised, I don't know, 20, $25 million, as opposed to working with one of the other firms that maybe is more mainstream or well-known? Of course, many of the cloud guys are working with those firms. Of course, Microsoft with OpenAI, and you see guys like Hugging Face. But you know this space really well. What are your thoughts on that move by HPE to work with an upstart like Aleph Alpha?

So, it's not about who they are partnering with. Of course, nobody knows about Aleph Alpha, so that's not the point. The goal at the end of the day for HPE was to show, look, right now, LLMs are the craze. So far, when you thought of big AI workloads, it used to be HPC. Rob was mentioning it, they were talking about climate weather prediction, genome modeling, seismic analysis, Monte Carlo analysis, even computational fluid dynamics. All of those things are what HPE is known for and has been doing for a while. Now, LLMs, with their model training needs, the size of the models, the time it takes to train, have come up to the same level as some of the big HPC workloads.
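To make that concrete, here is a minimal sketch of what a single LLM-scale training job spanning many servers looks like with PyTorch DistributedDataParallel: one workload, many machines, gradients synchronized every step. The model is a stand-in, and the launch command, node counts, and hyperparameters are illustrative assumptions, not anything HPE has published.

```python
# One training job sharded across every GPU in a cluster, launched with, e.g.:
#   torchrun --nnodes=16 --nproc-per-node=8 train.py
# All ranks run this same script; DDP all-reduces gradients across servers.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                # join the single big job
    local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda()     # stand-in for an LLM block
    model = DDP(model, device_ids=[local_rank])    # sync gradients every step
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(8, 4096, device="cuda")    # dummy batch on each rank
        loss = model(x).square().mean()            # dummy objective
        opt.zero_grad()
        loss.backward()                            # all-reduce across servers
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```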
And everybody and their uncle is training an LLM model now. So, their goal is to show that you can train your own private LLM fairly easily using their service, without fine-tuning the knobs and everything. You throw it at us, like I said earlier, and we'll make it work. Yeah. So, in that sense, they have demonstrated the capability of training one large LLM. Hey, this is how we do it, this is how we did it. It's not any different than... Remember, Databricks came out about three, >> Yeah. four or five months ago. They did the same thing. They showed you how to train an LLM. Exactly. Yeah. >> Yeah. And yeah, okay, this is how you do it. And their differentiation was that we don't need all kinds of parameters. They used their employees' data, a very small set, to train the LLM, fine-tuning it using their own data. So, everybody's doing a variation of it. So, it doesn't matter what company; basically, HPE wanted to demonstrate, we can train a big LLM using our system, which I think they've proven. But again, in my view, (clears throat) that training still leaves a lot more questions for me to ask. Things like, the HPC workloads are not the pure AI, ML, or even deep learning workloads. There are neural network, RNN, CNN workloads, all of those things. They don't exactly demonstrate how they'll work with that.
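For context on the pattern Andy describes, here is a minimal sketch of fine-tuning a small pre-trained LLM on proprietary text so the tuned weights never leave your own infrastructure. The base model, corpus file, and hyperparameters are illustrative assumptions, not the actual Databricks or HPE recipe.

```python
# Fine-tune a small open pre-trained LLM on your own corpus; nothing here
# leaves your infrastructure, which is the IP-leakage point discussed above.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "EleutherAI/pythia-410m"            # assumed small base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical proprietary corpus: one training example per line.
dataset = load_dataset("text", data_files={"train": "company_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-llm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()    # tuned weights land in ./tuned-llm, on your own storage
```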
In order to do that, you need to have a big ecosystem of all of those components. And what HPE has decided to do is, I don't want to get into the competitive market of creating the software and doing all of those things. With that, it becomes very complicated. So, I'm going to let you use some of the open source systems that are available. So, they pretty much went all open source. Yeah. >> Whether it's from Ray. And I would just add, I had Jonas Andrulis, the CEO of Aleph Alpha, on theCUBE. Pretty impressive guy. But he was really doubling down on explainability as the differentiation. >> Yeah. You had something to add? Yeah, no, I think that's the thing. I think the one thing from the PaaS that they're offering, and I think Andy Selipsky or Adam, sorry. >> Adam. Yeah. Oh my god. >> Adam Selipsky. Yes. >> We're both right. Red eye brain right now. (Dave laughing) But when you start to look at it, he was talking, with the Trainiums and the new chips, about how sustainable they are when they did their announcements a couple weeks back. I think the sustainability aspect of what HPE is doing with the cloud they're building is pretty unique in the fact that they're going to help people hit their scope one, scope two, and scope three sustainability targets. I think for large companies in particular, at the top of the market, which is, I think, where they're aiming, that's going to be pretty important, because I don't think Amazon's carbon footprint tool goes far enough. It doesn't talk about supply chain, which gets you out of some of those sustainability metrics and things like that. But again, is it enough for them to win in this market? I think it's enough to keep them in and get them enough revenue to build a business.

Well, so to Andy Thurai's earlier point, HPE's fundamental belief is that the worlds of high performance computing and AI are colliding in a way that will confer competitive advantage to HPE. And indeed, HPE has a leadership position in high performance computing. As we're showing here, HPE has the number one and number three of the world's top five supercomputers with Frontier and LUMI, both leveraging HPE's Slingshot interconnect, which it believes is a critical differentiator. We're going to talk about that. It also believes that generative AI's unique workload characteristics favor HPE's supercomputing expertise. Here's how HPE's chief technology officer for AI, Dr. Eng Lim Goh, describes the difference between traditional cloud workloads and gen AI. Let's play the clip and come back and talk about it.

The traditional cloud service model is where you have many, many workloads running on many computer servers. But with a large language model, you have one workload running on many computer servers. And therefore, the scalability part is very different. This is where we bring in our supercomputing knowledge that we have had for decades to be able to deal with this one big workload on many computer servers.

Rob, obviously, what Dr. Goh said makes sense, but the public cloud players have supercomputing services. So, why, in your view, does HPE feel it has an advantage over the public cloud players, and do you think it does? I think they have a lot of heritage now. People move around that have worked on these projects, but the Open Grid Forum and, before that, the Global Grid Forum, where I actually ran a research group, it was all of these guys. It was Cray, it was SGI, it was IBM, it was HPE.
So, when you start to look at it, they do have a heritage in doing these, I guess you could say, large applications across many servers versus time-slicing servers for many applications. So, they have this in their heritage, in their software. I think there is an advantage there for them. Is it an advantage over some of those other vendors that contributed back? I'm not a hundred percent sure that it's there, but you also have these people who moved around. It was open source.
There were a lot of fundamental pieces that the cloud guys can go pick up and use as well. That doesn't mean they can do it in the way that Cray has hardened it over the years for people like NASA, the DOE, and others that they've been doing it for, for decades now. So, I think there is some substantial intellectual property in the software, in that aspect of it, from a grid perspective. Yeah, but, Andy, to your point, you feel like it's really not mainstream. It's more niche. Now, whether or not it becomes more mainstream, or maybe that niche grows, remains to be seen. But I'm inferring from your comments that you feel as though it's a little far off from where you'd like to see the company's momentum. Is that a fair characterization? Yeah. Yes and no. Look, (chuckles) at the end of the day, HPE, as with many of the enterprise companies, they're all good storytellers. And if you listen to the story, they'll make you believe that they're the only one who has an HPC service, which is actually not true. There are, at least I can think of, about a dozen vendors, about five of them really good. For example, Amazon. They kept repeating that AWS doesn't have it, but if you look at it, Amazon's HPC service is not bad. They run on an Elastic Fabric Adapter powered by the Nitro System. They run low latency, purpose-built HPC apps on it. So, it's not exactly an apples to apples comparison they're making, but at the end of the day, they've figured out they're lagging way behind. Look, at the end of the day, in order to do AI, any of those things, to train models and stuff, data is king. Data is not with HPE right now.
So, which means all of those workloads, the innovation workloads as they call them, as we talked about earlier, the AI workloads, are always, always, always going to go to the hyperscalers' cloud. You don't have the data with you to begin with. You don't have the ecosystem to begin with. You don't have the ease of use to begin with. What HPE does have is a humongous supercomputer with Cray: solid, dense compute. And they have storage where you can put all the data. Combined with the GreenLake data network, all of that combination, what they're trying to market is, you know what, we've got all of this stuff. Bring your largest workload possible, we can do better than them. Yeah. >> Is that going to move the needle for them? I'm still not convinced yet.

Well, so, and by the way, your point about data is interesting, 'cause we heard Adam Selipsky on Bloomberg say, "90%," it's the same as Jassy, "90% of the data is still on-prem." I don't believe that's true. I think it's more like 45% of the data in the cloud, not 10%. Now, if you include the edge, i.e., telco, maybe you can get there. And even HPE all week was saying that 70% of the workloads are on-prem. Again, I don't believe it's that high. I think it's >> Yeah. much more balanced than they're suggesting.

Well, yeah. I think it's industry-dependent as well. Especially when you get towards the smaller, newer companies, born in the cloud in the last 10 years, yeah, of course it's going to be 90% in the cloud and 10% on-prem or something of that nature. But when you start to look at it, there is... I was talking to a very large, one of the top five, too-big-to-fail banks; they still don't have their strategy nailed down. Is it going to be Snowflake or Databricks? Yeah. They have no cloud databases yet. So, when you start to look at how these large organizations are approaching this... I would also say, especially 'cause grid's been around forever, I was doing it at Manulife Financial, where you're using it to do actuarial tables. So, this concept of doing big data on-prem and doing big data in the cloud is not that complicated, except for what we're all talking about, which is you've got to get the data there. And I think their data story, with the data fabric, the Ezmeral stuff, and some of the other things they're doing, to Andy's point, helps bridge that. I want to see it actually work. They didn't have a lot of information about how that worked with the supercomputing workloads and bringing the data to the network and to that fabric in their cloud. But the simplification message resonates, and they did talk about how a lot of the jobs in the cloud fail and they have to rerun them. Now, that's not necessarily anything fundamental to the cloud; it's just that it is your responsibility to make them work. So, the simplification is a good idea, and they do have high performance computing DNA. That is their domain.

All right, let's move on. Let's take a look at HPE's lines. Can I make a quick comment on that? Yeah, please go ahead. About the enterprise data. I actually challenged them in one of the panels we had there; they asked the same question. Here's my view. The enterprise transactional data, predominantly structured data, is still in-house. But if anybody claims that the newer innovation data, the unstructured data, all the vision, audio, and all of this unstructured data, is predominantly on-prem, they're lying.
Most of that is in the cloud, because- Either they're lying, Andy, or they just have flawed assumptions. But just look at the numbers. >> Yeah. Yeah. >> If the big four cloud players, if you include Alibaba, are going to do close to 200 billion this year, throw in the SaaS guys, where's the rest >> Yeah. of it coming from? Services? I'm just not buying it. Well, I look at that and I say, especially where they said, "Hey, we're starting with customer support and being able to AI-enable our customer support," I bet you their customer support docs are in the cloud. Nobody keeps those on-prem. >> Yeah. Here's the irony, here's the irony. (Dave drowns out Rob) Both the public cloud players, AWS, and the private cloud guys, HPE, Dell, they're saying the same thing and touting it as an advantage. Yeah. >> I believe that there's more of an equilibrium. (Rob laughing) It's closer to 50/50 than anybody thinks. So, anyway. Let's move on. >> I agree.

We're going to take a look at HPE's lines of business and how its AI and HPC business fits and how it performs. Remember, HPE purchased Cray in 2019, and Silicon Graphics in 2016, a few years before that, to get into the HPC space. And looking at HPE's most recent quarter, you can see here how it reports its business segments. HPC and AI is a multibillion dollar business and it's growing, but essentially, it's break even. So, not a great business from that standpoint. It brings bragging rights, but not profits. Intelligent Edge, by the way, AKA Aruba, is the shining star right now. It's got a five-plus billion dollar run rate and 27% operating profit. So, margin-wise, it's their best business. It throws off nearly as much operating profit as HPE's really strong server business. So, guys, I want you to listen to this clip from Antonio Neri, talking about HPE's unique IP in this space relative to the public clouds, and get your reaction. Please play the clip.

Well, if you think about how public clouds are being architected, it's a traditional network architecture at massive scale, with leaf and spine, where generic or general purpose workloads of sorts use that architecture to run workloads and connect to the data. When you go to this architecture, which is an AI-native architecture, the network is completely different. You mentioned Slingshot, right? Yeah. That network runs and operates totally differently. Obviously, you need the network interface cards that connect with each GPU, or CPU for that matter. And also, a bunch of accelerators that come with it. And it is all about the silicon programmability with the congestion management software. And that's what Slingshot is all about, and it takes many, many years to develop. But if you look at public clouds today, generally speaking, they have not developed a network. They have been using companies like Arista, Cisco, or Juniper, and the like. We have that proprietary network. And so does Nvidia, by the way. But ours actually opens up multiple ecosystems and we can support any of them. So, it will take a lot of time and effort. And then, also remember, you're now dealing with a whole different compute stack, which is direct liquid cooling, and that requires a whole different set of understanding. And the data service, the data center, is very different as well.

Okay. So, lots to unpack there, guys. The network, the Slingshot interconnect, the data services, ecosystem, liquid cooling. Andy, what do you think, is HPE naive about the capabilities of the public cloud players?
Or is this HPE flipping the adage that Andy Jassy invokes, i.e., there's no compression algorithm for experience? Is he flipping that on the public cloud guys? Kind of both, right? So, the number you showed there for HPC and AI, even though it looks very, very high, the actual number, I can guarantee you, I asked them the question, they refused to answer, I would say probably 95, 98% is coming from the classic HPC workloads that they have now, right? Yeah. Guaranteed. Yeah. (Andy clears throat) The rest of it, the pure AI workloads, including LLMs, they're just demonstrating how to use that. So, will they be able to convince people to come and run an LLM workload on these servers? I highly doubt that. One, you don't have an ecosystem and you don't have an MLOps practice in place. But more importantly, in order for you to get the models and train them, you've got to have some kind of a repository partnership. Hugging Face is an example that they're not even considering. And AWS, that's why they're brilliant. AWS is taking a similar approach to HPE, but they're doing it in a little bit different way: you know what, the backend could be whatever. You bring the model from wherever, we'll fine tune the models, we'll make it run. They are partnering with OctoML. They are partnering with Hugging Face.
HPE is not doing any of that. Maybe eventually they will, but right now, they're going again after the pure classic HPC workloads. They are trying to rebrand them as AI workloads. They're claiming that we are going to get all of that. Are people going to do that? I don't know. Again, like we talked about, >> Well, well, it is not there. >> well, well, but the HPC guys might do it. So, maybe that's not >> Yeah. such a bad strategy. The question is whether or not it's going to actually drive profitability. That's my big question. >> Yeah. I love the bragging rights, but for HPE's business needs, first, storage has to be more profitable and HPC/AI has to be more profitable. Yeah, and I question the whole network-as-an-advantage thing. >> Okay. I think maybe they do have a little bit of an advantage right now, but everybody's buying parts from everybody else. They're buying Mellanox InfiniBand from Nvidia. They're getting other stuff from other people beyond Slingshot, which is, I'd say, a Franken-ethernet that they've built out. And it's not really standard ethernet. It's a Franken-net. >> Yeah. Franken-ethernet. It's got advan- >> It's number one and number three. Now, I know these leapfrog each other. Yeah, but again, it's an interconnect. I don't know that that's why they won those deals. You start to look at the number of AMD cores in both of those, and you start to look at all the other pieces that go into Oak Ridge National Labs, and you start to go, well, who's the DOE going to buy from? They're going to buy from an American company. They're not going to buy from Fujitsu. They're not buying from Fujitsu. So, I think, again, if I'm one of the other American server manufacturers, I look at that and probably go, why aren't we up there? But at the same time, they do have the software layer, they do have the water cooling. How many times did I feel like I needed to go back to trade school and get a plumbing license, so that I could be a plumber to run a data center now? I start to look at this and go, okay, if water cooling is where everybody's going and this is what we're going to do, we're going to need a lot more plumbers in platform engineering. Because I think it makes sense from the sustainability perspective, but I'm also looking at it going, as the models become more efficient, is the water cooling going to be that big a thing?
I don't know. And I think, again, it goes back to our earlier premise that we're still not at GA, and there are still a lot of questions. Back to, what is sustainability really going to do? I think in Europe, sustainability will help them significantly. I don't think it's as big an advantage in North America as it is there. When you say water cooling, you're not disputing that water cooling will be necessary. You're saying, is it a differentiator? Is that what you're saying? I'm wondering if it's a hundred percent necessary. It probably is to be as efficient as they want to be, but I don't think it's necessary just to be in this game and generate revenue from AI and LLMs. And are they going to be able to? Because at mass scale, do I need that from the start? >> Yeah, they're not going to have water cooling in your phone. Who knows? Maybe it will someday. Yeah. (laughs)

All right. I want to explore with you guys how to think about this announcement. In other words, does it have the potential to go mainstream or is it destined for niche status? That's one of the themes we're poking at today. Here's some ETR data asking organizations about gen AI. These are folks that said, yes, we're pursuing gen AI. Actually, we're... Sorry, across the survey: are you pursuing gen AI and LLMs, and what use cases are you evaluating or pursuing actively in production? I'll get to the data in a second. And I misspoke at first. This is not just people pursuing it. 34% of the organizations say they're not evaluating, which is surprising to me. I bet you they are actually; they just don't know it. But the ones that are are what you'd expect: chatbots, generating code, writing marketing copy, summarizing text, et cetera.
But HPE has a different point of view. They're focusing on very specific domains where companies have their own proprietary data, want to train on that data, and don't want to incur the expense of acquiring and managing their own supercomputing infrastructure, or any GPU infrastructure. That's HPE's premise. At the same time, HPE believes that because it has unique IP, it can be more reliable and cost-effective than the public cloud players while still offering the advantages of a public cloud. So, Rob, is HPE onto something here, in that these mainstream use cases are not where the money is for HPE? In other words, they can leverage their supercomputing prowess? And is there gold in those hills with HPE's strategy, in your view? Yeah, I think exactly. They're going to go to the edges, the edge cases that are more in their wheelhouse from an HPC perspective. And I think even Andy was saying the same thing. Is there enough revenue there? Probably. They don't have to be as big as Amazon from a global coverage perspective to win and make this a profitable endeavor. I think what they can do is bring supercomputers to people who don't have the means. They're not Oak Ridge National Labs, going out to buy 96,000 cores or something like that. So, I think there is a middle ground where they could help people who can't get there, or have tried to do it and failed on-prem with standard hardware and GPUs. Yeah. Andy, anything to add to this? Yeah, first of all, on your chart, that 34% not evaluating, they're just hallucinating. They don't know that their people are evaluating. Probably 100% of the people are evaluating that. So, that's- >> No doubt. Yeah. >> You got to be. How can you not >> Yeah. be evaluating? So, coming back to this, again, remember, I seem to be the only one who's making the differentiation in AI workloads; nobody else is talking about it. There is a differentiation between innovation workloads and mature workloads. For innovation workloads, almost every single CXO I spoke to, and I spoke to many of them over the last year, none of them seem to care, because they're experimenting right now. Sustainability, carbon footprint, cost, efficiency, none of them seem to matter. Can I experiment? Can I get the model working? Can I get it to go? That's their important differentiator. I need to get going, like that. That's why ChatGPT and other pre-trained LLMs enable you to retrain and go to market faster. If I'm in the innovation mode, training those models, I could care less about sustainability, carbon footprint, and all of this crap.
However, when the model matures, when I want to fine tune and start running it at full speed, when the maturity comes in, it's a different set of problems, including security, governance, ethicality, explainability, sustainability, even liability. So, that's the core market, if I'm not getting it wrong, that HPE wants to go after. I want you to train first, get the model right. And when the workload matures, bring it to me. I'll take care of doing all of those things for you, without having to go through multiple ML engineers. If they get the message right, if that works out, it could work out really well for them. And sustainability could come into play at that point. Until then, eh, who cares?

Okay. All right. I want to talk about a tale of two points of view. So, (chuckles) this is just kind of tongue in cheek, but I tweeted this out when Matt Wood of AWS was on the main stage with Antonio Neri, and much to my surprise, Matt Wood said, "Well, the fullness of time," he didn't say the fullness of time, but he basically said "over time," which is how Amazon always talks, in the fullness of time, "but over time, we still believe all the workloads are, or most of the workloads are going to go to the public cloud." He actually said that in front of HPE's audience. And then, Antonio basically countered that with, "Yeah, and the world's hybrid, dude. And it's going to be hybrid indefinitely." So, I put this tweet out, and it reminded me of that scene in "Bridesmaids", where the two bridesmaids are dueling for the attention of the bride. But there's another line underneath this one: the supercomputing workloads are different and HPE has the expertise. We heard Adam Selipsky on Bloomberg basically say, and I'm practically quoting here, but paraphrasing, "LLMs are fully dependent on the public cloud and its massive compute capabilities." So, in the end, as in the movie "Bridesmaids", I guess they were both right. (laughs) Yeah. And there's probably a market for both. I think there's no question that there's a bigger market, as you guys have pointed out, in the public cloud. But HPE's got to go from its position of strength, which is supercomputing. Anything you guys would add? Yeah, I mean, I tend to agree that they have to play to their strengths. And to just exactly what Andy was saying, maybe the next six months is... And I don't even think maybe. I don't think everything's going to be decided in the next six months 'til they get to GA. Yeah. (laughs) I think this is going to have a long tail on it. There's time, to Andy's point that he was making: hey, train the models, then get to production. And when I get to production, then I've got to worry about my scope one, scope two, scope three, and my science-based sustainability data. 'Til then, let me play around. I need a place to play, and maybe the cloud is a good place for that.

So, let's take a look at some of the ETR data again. And, Andy, I think this is your wheelhouse here. This data shows the ML/AI spending and which companies are getting all the action. It shows net score, or spending momentum, on the vertical axis, and pervasiveness, or presence in the data set, on the horizontal axis, again, specifically for the ML/AI players. Right off the bat, focus on the big three public cloud players, Microsoft, AWS, and Google. They're pervasive and they're all above that magic 40% red dotted line, which is an indicator of highly elevated momentum. Databricks also stands out.
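For readers new to the methodology, here is a back-of-envelope sketch of how a net score like the ones on this chart is typically described on "Breaking Analysis": the share of customers adding a platform or spending more, minus the share spending less or replacing it. The percentages below are made up for illustration, not actual ETR survey data.

```python
# Net score sketch: respondents adopting new or increasing spend count as
# positive, those decreasing spend or replacing count as negative, and
# flat spenders net out to zero. All inputs are illustrative assumptions.
def net_score(adopting, increasing, flat, decreasing, replacing):
    total = adopting + increasing + flat + decreasing + replacing
    assert abs(total - 100.0) < 1e-9, "shares should sum to 100%"
    return (adopting + increasing) - (decreasing + replacing)

# A hypothetical vendor: 15% new adoptions, 45% spending more, 30% flat,
# 7% spending less, 3% replacing -> net score of 50, above the 40% line.
print(net_score(15, 45, 30, 7, 3))
```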
You guys got to both be at their conference next week. I'll be at Snowflake. And as an aside, Andreessen just published a version of the LLM stack as they see it this week.
Databricks IP was all over it. Not a lot of Snowflake in there. You had a little bit of Streamlit. Yeah. >> But I expect we're going to see some announcements this week in that regard. Snowflake has its own stack. Let's face it. >> Yeah. So, anyway, Databricks is clearly a player in that mix. And I got a peek at the July ETR survey data, and it's not going to surprise you that OpenAI is setting new records, beyond even where we saw Snowflake at its peak net score, which was during the pandemic, up in the 80% range. You see OpenAI has rocketed up to the lead, and you're going to see that in the ETR data soon. OpenAI has really gone mainstream. And these are core IT shops, IT decision-makers. So, it's no surprise that you don't see HPE in this mix. But I would say over time, Andy, if the company's aspirations come true, then like Oracle and IBM, you would want to see them on this chart, don't you think? You would want to. Would they make it to the list? I don't know. Because, like I said, all of those companies, if you look at it, there is a commonality in there. They all talk about not only training large LLMs, they're also talking about retraining existing models, fine tuning the models, and the whole nine yards. And HPE is taking a different approach. They are saying, bring the whole enchilada, the biggest model possible, we'll tackle it. So, if that messaging works out well, they could become the center of force to train all of those things. But none of these guys want to go after that market. They'll give you an option. You can take a model, whether from Alpaca or from AI21 Labs, or even from existing Hugging Face models. You retrain, fine tune, and then you work on it, or you even bring your own data and do it. So, again, at the end of the day, their core information is not about... (audio breaks) ...want to make it work.
I want to sell my strength. I have compute, I have networking, I have storage. I want to sell all of this to you. So, bring the biggest possible model. I'll make it work with all of this stuff. So, will they succeed? We'll talk over the next year or so, and then we'll see. Rob, I did give HPE props for including LLMs in GreenLake. I didn't see that at Dell Tech World and APEX. Although, listening to you guys, I wonder, is it bespoke GreenLake? Is it like a separate GreenLake, or is it actually GreenLake, >> Yeah. integrated into the console part of that model? Well, I think we don't know yet. I think that's the big thing: we don't know, is it integrated into the console? Is it a separate console? Is it really on top of the Aruba Central stuff, or is it a separate installation? I have a funny feeling it's going to be separate to begin with and then be brought in more over time. I don't think it's like AWS or Azure's consoles, or Google's, where you can go in and pick from all the different services and just start with one thing. There'll be links and different areas to go to. I'm not surprised by the ETR data either. The only thing that would surprise me, and I don't know if this is just because people don't really trust Google as much on data analysis and what they do with people's data, is that they're so distant and so low, down towards the 40% line. I expected them to be a little bit higher. So, it'll be interesting to see in July where they end up. Yeah, but- >> Yeah. Part of that is the bias that not as many people are using Google Cloud. They're (mumbles) >> Yeah. But they are using BigQuery, and that's where they've shined. >> Right. But Amazon or Azure is ubiquitous 'cause of their software estate, and Amazon's- But if you think about it, even though there are five countries in Europe that have banned Google Analytics right now, Google Analytics, sitting on top of BigQuery, is the largest platform for web data. So, if you're doing intention and spend and return on advertising, a lot of times you're going into Google Analytics and building models on top of that. Yeah, yeah. >> And you're using the Google stack to go and do that. Yeah, yeah. So, okay. So, you don't trust Google? No. I trust Amazon. You trust Amazon? I trust Amazon. >> Yeah, I do. Yeah. >> And then, Microsoft, I trust, but they go down a lot, so it worries me. (Dave and Rob laughing) So, I trust them for certain workloads.

All right, let's wrap here. We'll bring up this last chart and some of the issues that we want to talk about. We think that the real competitive advantage, to the extent that HPE has one, and we think it does, is in the infrastructure software. I think the big takeaway from listening to you, Andy and Rob, is it's not the LLMs per se, 'cause you can bring those in. Like Amazon's got its own and it'll bring in others besides, with Bedrock. It's really the infrastructure software within what they've built with Cray that could be the competitive advantage, right? Right. Yeah, I think so. I think it's that bring-your-own-model concept and the fact that the Cray grid technology has been there and been tested over 20, 30 years now. Yeah, and to the next two points, Andy, again, how many models will HPE bring to bear in its platform versus the cloud players? And you talked about HPE's AI ecosystem. For right now, it's focused on HPC. Can they expand that? Your thoughts, Andy? Their ecosystem is very, very weak.
Sorry to say that, but it's almost non-existent. None of the model repositories, the model sharing, or even the software stack. (clears throat) So, how many models can they bring? I don't know. They've got to partner with model producers and put models out there for people to retrain. Otherwise, they've got to force people to do it. However, like I said, the advantage that I see with HPE, if they get their messaging right, is that with cloud, the problem has always been, and this is why cloud is still very messed up for deployment for a lot of people, fine tuning it. You could get hit with the bill without knowing it. So, you've got to fine tune it, watch it, stay on it. FinOps is a pretty big thing. What HPE is trying to say, at least from my understanding, is, you know what, don't worry about all of those things, man.
Just bring the model. We'll help you train. We'll get you to the best you can possibly be. Don't worry about it. We'll take care of it. And we've got everything from soup to nuts to take care of that. That could be their one advantage. And the second advantage is, they don't talk about this a lot, but when it comes to AI models, training is only one part of it. Deploying and inferencing is the major, huge issue, particularly when it comes to smaller models. LLMs are all the craze now, but for the regular AI models, the edge and networking could be a huge play for HPE in this. Train in my core and I'll help you push it out to the edge and do things with that. That could be huge, and they're not talking about it. And sustainability, if that play comes to fruition sometime in the future, because nobody's talking about it now, that could be a good play.
But there are hurdles they have to get through. They don't have the data. Data is king. They've got to figure out how to convince people to move the data. That's going to be major. >> Yeah. I agree. I think edge is going to be huge. But I think it's going to be a lot of this stuff at the edge. It's going to be Arm-based, low power, very low cost. There's going to be tons of data doing that inferencing at the edge. All right, last couple of points here that we want to bring up. Can the business be profitable? That's ultimately, to me, what this is all about. And then, (laughs) what about Quantum, Rob? Yeah. >> You brought that up as well. And, Andy, you might have some thoughts on that. Yeah, the fact that Quantum really wasn't mentioned at all in anything, in any interview, in any analyst session this week, that was shocking to me. Given that, does the supercomputer really go away when Quantum gets here? And Quantum as-a-service, IBM's big in pushing into that space, and you have others in that space already. It seems like they're going to have to play catch up yet again in the Quantum space. And maybe they're already doing it behind the scenes with Cray and they're already down the... I just haven't seen it, and I think that would worry me, because I think that changes the game longer term for this. Well, it's interesting. Andy, I think you were at IBM Think. I wasn't there, but I'm sure they were talking about Quantum. Cisco talked Quantum, AWS talks Quantum. Yeah, nothing at HPE Discover. Andy, you get the last word. Quantum is not ready for the real world yet. They're all talking, they're wasting their time. As terrible as that sounds. You think it's smart (Rob laughing) that they didn't talk... Well, in fact, Furrier agrees- I think so, because they had to worry about catching up with all these other guys. What's the point of talking about Quantum, which is maybe five years away, when we are talking about something six months away? Well, in fact, John Furrier said he wished that Cisco didn't talk about Quantum for that very reason.

All right, guys, we've got to wrap. I want to thank Rob Strechay and Andy Thurai. Thank you guys for coming on today. Great discussion, and to be continued, no doubt. All right, I also want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well in our East Coast office. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com. He does some great work. Thanks, everybody. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. You can email me at david.vellante@siliconangle.com or DM me @dvellante or comment on our LinkedIn posts. Like I say, we post every week. In fact, Rob, ARInsights just reclassified "Breaking Analysis" not as a blog, but as real research. (chuckles) Yes. Cracked the ARInsights 100. I didn't even know it existed >> Yeah. a month ago. >> Well, you and Andy were listed pretty high up there in the top 100. (Dave laughing) So, congratulations >> All right. to both of you. >> Congratulations, Andy. I thought- >> Thank you. Like I said, I didn't even know about this list a month ago. >> Yeah. (laughs) Also, check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights powered by ETR.
Thanks for watching and we'll see you next time on "Breaking Analysis". (soft upbeat music)