(upbeat music)

Announcer: Live from Miami Beach, Florida, it's theCUBE! Covering VeeamON 2019. Brought to you by Veeam!

Dave Vellante: Welcome back to Miami, everybody. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, and I'm here with my co-host, Peter Burris, for two days of wall-to-wall coverage of VeeamON 2019. They selected the Fontainebleau Hotel in hip, swanky Miami. Tad Brockway is here, he's the corporate VP of Azure Storage. Good to see you!

Tad Brockway: Yeah, great to see you. Thank you for having me.

Dave Vellante: So you work for a pretty hip company. Microsoft Azure is where all the growth is, 70-plus percent growth, and you're doing some cool stuff with storage. So let's get into it. Let's start with your role, and kind of your swim lane, if you will.

Tad Brockway: So our team is responsible for our storage platform. That includes our disk service for IaaS virtual machines, and our scale-out storage, which we call Azure Blob storage. We have support for files as well with a product called Azure Files; we support SMB-based files and NFS-based files. We also have a partnership with NetApp, Azure NetApp Files is what we call it, where we're bringing NetApp ONTAP into our data centers and delivering it as a first-party service. We're pretty excited about that. And then there are a number of other services around those core capabilities.

Dave Vellante: And that's really grown over the last several years. Optionality is really the watchword there, right? Giving customers as many options as possible: file, block, object, etc. How would you summarize the Azure Storage strategy?

Tad Brockway: I like that point: optionality, and really flexibility for customers to approach storage in whatever way makes sense. There are customers who are developing brand-new cloud-based apps; maybe they'll go straight to object storage, or blobs. There are many customers who have data sets and workloads on-prem that are NFS-based and SMB-based; they can bring those assets to our cloud as well. We're the only vendor in the industry that has a server-side implementation of HDFS, so for analytics workloads we bring file system semantics to those large-scale HDFS workloads. We bring them into our storage environment so that customers can do all of the things that are possible with a file system: hierarchies for organizing their data, ACLs to protect their data assets. That's a pretty revolutionary thing that we've done. But to your point, optionality is the key: being able to do all of those things for all of those different access types, and then being able to do that across multiple economic tiers as well, from hot storage all the way down to our archive storage tier.
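To make the optionality and tiering Tad describes concrete, here is a minimal sketch using the Azure Blob Storage Python SDK (azure-storage-blob). The connection string, container, and blob names are illustrative placeholders, not anything from the interview.

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Placeholder credentials and names -- substitute your own.
service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("telemetry")

# Write an object ("blob"); the same platform also exposes file shares
# (Azure Files, SMB/NFS) and disks for IaaS VMs.
blob = container.get_blob_client("2019/05/21/readings.json")
blob.upload_blob(b'{"sensor": 7, "value": 41.5}', overwrite=True)

# Walk the blob down the economic tiers as it cools off:
# Hot -> Cool -> Archive.
blob.set_standard_blob_tier("Cool")
blob.set_standard_blob_tier("Archive")
```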
Dave Vellante: And I shortchanged you on your title, because you're also responsible for media and edge, so that includes Azure Stack, is that right?

Tad Brockway: Right, so we have Azure Stack as well within our area, plus DataBox and DataBox Edge. DataBox Edge and Azure Stack are our edge portfolio platforms, so customers can bring cloud-based applications right into their on-prem environments.

Dave Vellante: Peter, you were making a point this morning about the cloud and its distributed nature. Can you make that point? I'd love to hear Tad's reaction and response.

Peter Burris: So Tad, we've been arguing in our research here at Wikibon, SiliconANGLE for quite some time that the common parlance, the common concept of cloud as moving everything to the center, was wrong. We've been saying this for probably four or five years, and we believe very strongly that the cloud really is a technology for further distributing data and further distributing computing, so that you can locate data proximate to the activity it's going to support, but do so in a way that's coherent, comprehensive, and quite frankly confident. That's what's been missing in the industry for a long time. So if you look at it that way, tell us a little bit about how that approach, that thinking, informs what you're doing with Azure. One of the other challenges is how data services then impact that, but maybe we'll come to that in a second.

Tad Brockway: Great insight, by the way. I agree that the assumption had been that everything was going to move to these large data centers in the cloud, and I think that is happening, for sure. But what we're seeing now is a greater understanding of the longer-term requirements for compute, and that there are a bunch of workloads that need to be in proximity to where the data is being generated and where it's being acted upon. There are tons of scenarios here. Manufacturing is an example, where one of our customers is using our DataBox Edge product to monitor an assembly line. As parts come out of the assembly line, our DataBox Edge device, with a camera system attached to it, does AI inferencing to detect defects and then stops the assembly line with very low latency. A round trip to the cloud and back, to do all the AI inferencing and then the command and control to stop the assembly line, would just be too much round-trip time. So in many different verticals we're seeing this awareness that there are very good reasons to have compute and storage on-prem, and that's why we're investing in Azure Stack and DataBox Edge in particular.

Now, you asked how data factors into that. It turns out that in a world of IoT, with basically an infinite number of devices over time, more and more data is going to be generated. That data needs to be archived somewhere, and that's where public cloud comes in, with all the elasticity and scale economies of cloud. But in terms of processing that data, you need a nice strong connection between what's going on in the public cloud and what's going on on-prem, and the killer scenario here is AI: being able to grab data as it's generated on-prem and write it into a product like DataBox Edge. DataBox Edge is a storage gateway device, so you can map your cameras in the use case I mentioned, or for other scenarios you can route the data directly into a file share, an NFS, blob, or SMB file share. Drop it into DataBox Edge, and DataBox Edge will automatically copy it over to the cloud while allowing for local processing by local applications as if it were local, in fact it is local, running in a hot SSD NVMe tier. And the beautiful thing about DataBox Edge is that it includes an FPGA device to do AI inference offloading. So this is a very modern device that intersects a whole bunch of things in one very simple, self-contained unit. Then the data flows into the cloud, where it can be archived permanently, and AI models can be updated using the elastic scale of cloud compute. Those models can then be brought back on-prem for enhanced processing, so you can see this virtuous cycle happening over time, where the edge is getting smarter and smarter and smarter.
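The defect-detection loop Tad describes can be sketched conceptually in Python. This is not the actual DataBox Edge software stack; it assumes a hypothetical ONNX classifier (defect_detector.onnx, with an input tensor named "input" and a "defective" class at index 1) and a local share (/data/frames) that a storage gateway syncs to the cloud in the background.

```python
# pip install onnxruntime opencv-python numpy
import time
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("defect_detector.onnx")  # hypothetical model
camera = cv2.VideoCapture(0)  # camera watching the assembly line

def stop_assembly_line():
    # Stand-in for the real command-and-control call to the line's PLC.
    print("defect detected -- halting line")

while True:
    ok, frame = camera.read()
    if not ok:
        break
    # Preprocess to the model's assumed input layout: NCHW float32.
    x = cv2.resize(frame, (224, 224)).astype(np.float32)
    x = x.transpose(2, 0, 1)[np.newaxis] / 255.0
    (scores,) = session.run(None, {"input": x})
    # The decision is local and low latency: no round trip to the cloud.
    if scores[0][1] > 0.9:  # index 1 = "defective", by assumption
        stop_assembly_line()
    # Drop the frame on the gateway-synced share; the cloud copy later
    # feeds model retraining with elastic compute, closing the cycle.
    cv2.imwrite(f"/data/frames/{time.time_ns()}.jpg", frame)
```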
Dave Vellante: So that's kind of what you mean when you talk about the intelligent cloud and the intelligent edge. I was going to ask you, and you just explained it. And you can automate this, use machine intelligence to actually determine where the data should land and minimize human involvement. You talked about driving the marginal cost of storing data to zero, which we've always discussed from the standpoint of reducing or even eliminating labor cost through automation, but you've also got some cool projects to reduce the cost of storing a bit.

Tad Brockway: Yeah.

Dave Vellante: Maybe you could talk about some of those projects a little bit.

Tad Brockway: That's right. That was mentioned in the keynote this morning. Our vision is that we want our customers to be able to keep the artifacts they store on our cloud platform for thousands of years, and if you think about the history of humanity, that's not out of the question at all. In fact, wouldn't it be great to have everything that was ever generated by humankind across the thousands of years of human history? We'll be able to do that with technology we're developing. We're investing in technology to store data virtually indefinitely on glass, and even in DNA, and those advanced types of storage are what will allow us to drive that marginal cost down to zero over time.

Peter Burris: Epigenetic storage systems. I want to come back to this notion of services, though, and where the data is located. From our research, what we see is, as you said, data being housed proximate to where it's created and acted upon, but increasingly businesses want the option to replicate that, and "replicate" is a strong, loaded word, to be able to do something similar in some other location if the action is taking place in that location too. That's what Kubernetes is kind of about, and serverless computing and some of these other things. But it's more than just the data; it's the data, the data services, and the metadata associated with them. How does Microsoft foresee this, and what role might it play in this notion of a greater federation of data services that makes possible a policy-driven backup, restore, and data protection architecture, one that's really driven by what the business needs and where the action is taking place? Is that a direction you see it going?

Tad Brockway: Yeah, absolutely. I'll talk conceptually about our strategy in that regard and where we see it going for customers, and then maybe we can come back to the Veeam partnership as well, because I think this is all connected. Our approach to storage, our view, is that you should be able to drop all your data assets into a single storage system, like we talked about, that supports all the different protocols that are required and can automatically tier from very hot storage all the way down, over time, to glass and DNA. We do all of that within one storage system, and then the movement across those different vertical and horizontal slices can all be done programmatically or via policy. So customers can make a choice in the near term about how they drop their data into the cloud, but then they have a lot of flexibility to do all kinds of things with it over time. And with that, we layer on Microsoft's whole set of analytics services. All of our data and analytics products layer on top of this disaggregated storage system, so there can be late binding of the type of processing that's used, including AI, to reason over that data, relative to where and how and when the data entered the platform. That modularity really future-proofs the use of data over the long haul, and we're really excited about it. Those data assets can then be replicated, to use your term, to other regions around the globe using our backbone; our network is the customer's network. And the way that docks into the partnership with Veeam is that, as I mentioned in the keynote this morning, data protection is a use case that is just fundamental to enterprise IT. Together with customers and with Veeam, we can make data protection better today using the cloud, building on the work Veeam has done integrating with Office 365 and the integration from there into Azure Storage. Over time, customers can start down this path with something that feels sort of mundane, it's just been part of daily life in enterprise IT, and that becomes an entry point into our broader long-term data strategy in the cloud.
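The programmatic, policy-driven tiering Tad refers to maps to Azure Blob lifecycle management, where a rule set ages data down the tiers with no human involvement. A sketch of what such a policy looks like, with illustrative names, prefixes, and day counts:

```python
import json

# The shape of an Azure Blob lifecycle management policy: tier blobs to
# Cool after 30 days without modification, then to Archive after 180.
policy = {
    "rules": [{
        "name": "age-out-telemetry",  # illustrative rule name
        "enabled": True,
        "type": "Lifecycle",
        "definition": {
            "filters": {
                "blobTypes": ["blockBlob"],
                "prefixMatch": ["telemetry/"],
            },
            "actions": {
                "baseBlob": {
                    "tierToCool": {"daysAfterModificationGreaterThan": 30},
                    "tierToArchive": {"daysAfterModificationGreaterThan": 180},
                },
            },
        },
    }]
}

with open("policy.json", "w") as f:
    json.dump(policy, f, indent=2)

# Applied to a storage account with, for example:
#   az storage account management-policy create \
#       --account-name <account> --resource-group <rg> --policy @policy.json
```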
Peter Burris: Following up on this: if we agree that data is not going to be entirely centralized but will be more broadly distributed, and that there is a need for a common set of capabilities around data protection, which is a very narrowly defined term today and is probably going to evolve over the next few years...

Tad Brockway: I agree with that.

Peter Burris: ...then we think you're going to have a federated model for data protection, one that provides for local, autonomous data protection activities consistent with the needs of those local data assets, but under a common policy-based framework that a company like Veeam is going to be able to provide. What do you think?

Tad Brockway: First of all, a core principle of ours is that while we're creating these platforms for large data sets to move into Azure, the most important thing is that customers own their own data. There's a balance to be reached between cloud scale, the federated nature of cloud, and these common platforms and ways of approaching data on one hand, and making sure that customers and users stay in charge of their own data assets on the other. Those are the principles we'll use to guide our innovation moving forward. And I agree, I think we're going to see a lot of innovation when it comes to taking advantage of cloud scale, flexibility, and economics while also empowering customers to take advantage of these things on their terms. I think the future is pretty bright in that regard.

Dave Vellante: And the operative term there is "their terms." Obviously Microsoft has always had a large on-prem install base and software estate, and so you've embraced hybrid, to use that term, in your strategies. You never ran away from it, you never said everything was going to go into the cloud, and that's now evolving to the edge. So my question is, what are the big gaps, not necessarily organizationally or process-wise, but from a technology standpoint, that the industry generally, and Microsoft specifically, have to fill to make that sort of federated vision a reality?
Tad Brockway: We're just at the early stages of all this, for sure. In fact, as we talked about this morning, the notion of hybrid, which started out with use cases like backup, is rapidly evolving toward a more modern, enduring view. I think in a lot of ways hybrid was used as a kind of temporary stop along a path to cloud, and back to our earlier discussion, by some, I guess; maybe there's a debate you all are having there. But what we're seeing is the emergence of the edge as an enduring location for compute and for data, and that's where the concept of the intelligent edge comes in. The model I talked about earlier today is about extending on-prem data assets into the cloud, whereas the intelligent edge is taking cloud concepts and bringing them back to the edge in an enduring way. So it's pretty neat stuff.

Dave Vellante: And a big part of that is that much of the data, if not most of it, the vast majority even, might stay at the edge permanently, and of course you want to run your models up in the cloud.

Tad Brockway: That's right, at least for real-time processing.

Dave Vellante: Right, you just don't have the time to do the round trip. Alright, Tad, I'll give you the last word: Azure, direction, your relationship with Veeam, the conference, take your pick.

Tad Brockway: Well, thank you, it's great to be here. As I mentioned earlier today, the partnership with Veeam, and this conference in particular, is great because I really love the idea of solving a very real and urgent problem for customers today and then helping them along that journey to the cloud. That's one of the things that makes my job a great one.

Dave Vellante: Well, we talk about digital transformation all the time on theCUBE. It's real, it's not just a buzzword. It can't happen without the cloud, but the cloud is not all in one central location; it's extending now to other locations.

Peter Burris: It reflects your data assets.

Dave Vellante: And where your data wants to live. So Tad, thanks very much for coming on theCUBE. It was great to have you.

Tad Brockway: Thanks, guys!

Dave Vellante: Alright, keep it right there, everybody. We'll be back with our next guest. This is VeeamON 2019, and you're watching theCUBE.

(upbeat music)