Brian Grant & Tim Hockin, Google Cloud | KubeCon 2018

Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

>> Okay, welcome back, everyone. This is theCUBE's live coverage here in Seattle for KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman, breaking down all the action, talking to all the top people: influencers, executives, start-ups, vendors, the foundation itself. We're here with two co-leads of Kubernetes at Google, legends in the Kubernetes industry: Tim Hockin and Brian Grant, both with Google, both co-leads at GKE. Thanks for joining us, legends in the industry. Kubernetes has had a short life so far, but still, being there from the beginning, you guys were instrumental at Google in building it out and contributing to this massive tsunami of 8,000 people here. Who would have thought? It's amazing!

>> It's a little overwhelming.

>> It's almost like you guys are celebrity-status here inside this crowd. How's that feel?

>> It's a little weird. I don't buy into the celebrity culture for technologists. I don't think it works well.

>> We agree, but it's great to have you on. Let's get down to it. Kubernetes, certainly the rise of Kubernetes, has grown. It's now pretty mainstream; people look at it as a key linchpin at the center of Cloud Native. And we see the growth of cloud, you guys are living it with Google. What is the importance of Kubernetes? Why is it so important? Fundamentally, at its core, it has a lot of impact. What's the fundamental reason why it's so successful?

>> I think fundamentally Kubernetes provides a framework for driving migration toward Cloud Native patterns across your entire operational infrastructure. The basic design of Kubernetes is pretty simple and can be applied to automating pretty much anything. We're seeing that here; there are at least half a dozen talks about how people are using the Kubernetes control plane to manage their applications or workflows or functions, things other than just core Kubernetes containers, for example. One of the things I'm involved with is the Technical Oversight Committee of the Cloud Native Computing Foundation, and I drove the update of the Cloud Native definition there. If you're trying to operate with high velocity, deploying many times a day, and if you're trying to operate at scale, especially with containers and functions, scale is increasing and compounding as people break their applications into more and more microservices. Kubernetes really provides the framework for managing that scale and for integrating other infrastructure that needs to accommodate that scale and that pace of change.

>> I think Kubernetes speaks to the pain points that users are really having today. Everybody's a software company now, right? And they have to deploy their software, they have to build their software, they have to run their software, and these things build up pain. When it was just a little thing, you didn't have to worry about internet-scale and web-scale; you could tolerate it within your organization. But more and more, you need to deploy faster, you need to automate things. You can't afford to have giant staffs of people running your applications. These things are all part of Kubernetes' purview. I think it just spoke to people in a way where they said, I suffer from that every day, and you just made it go away.
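[Editor's note: To make "deploying many times a day" concrete, here is a minimal sketch of the declarative pattern Brian describes, using the official Kubernetes Python client. The `hello` name and `nginx:1.25` image are invented placeholders for illustration, not anything from the interview.]

```python
# A minimal sketch: declare a desired state (3 replicas of one container)
# and hand it to the Kubernetes control plane, which converges toward it.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a pod
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state; the control plane keeps reality in sync
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello", image="nginx:1.25")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The point Brian makes is visible here: you describe what should exist, and the control plane, rather than an operator's runbook, does the reconciling. The same pattern extends to workflows, functions, and other resources.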
>> And what's the core impact now? Because now people are seeing it. What is the impact on the organizations that are rethinking their entire operation, from all parts of the staff, from how they buy infrastructure, which is also cloud, you see some cloud there, to deploying applications? What's the real impact?

>> I think the most obvious, the most important part here, is the way it changes how people operate and how they think about how they manage systems. It no longer becomes scary to update your application. It's just a thing you do. If you can do it with high confidence, you're going to do it more often, which means you get features and bug fixes out and you get your roll-outs done quicker. It's amazing, the result that can have on the user experience. A user reports a bug in the morning, you fix it in the afternoon, and you don't worry about that.

>> You bring up some really interesting points. I think back 10 years ago, from a research standpoint, we were looking at how the enterprise could do some of the things the hyperscale vendors were doing. I feel like over the last 10 years, every time Google released one of its great scientific papers, we'd all get a peek inside and say, oh hey. When I went to the first DockerCon and heard how Google was using containers, and when Kubernetes first came out, it was like, oh wow, maybe the rest of us will get to do something that Google's been doing for the last 10 years. Maybe bring us back a little bit to Borg and how that led to Kubernetes. Are the rest of us still just doing whatever Google did 10 years ago?

>> Yeah, Tim and I both worked on Borg previously, Tim on the node-agent side, and I worked on the control-plane side. One lesson we really took from Borg is that you can run all types of applications. People started with stateless applications, and we started with that in Kubernetes because it's simpler, but really it's a general management control plane for managing applications. With the model of one application per container, you can manage applications in a much more first-class way and unlock a lot of opportunities for automation in the management control plane. Several years ago when we started, Google had already gone through the transition of moving most of its applications to Borg. It was after that phase that Google started its cloud effort, and the rest of the world was doing VMs. We were in the early phases of open-sourcing our container runtime, Tim mentioned this in our keynote yesterday, and when Docker emerged, it was clear it had a much better user experience for the way folks were managing applications outside of Google, so we just pivoted to that immediately.
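[Editor's note: Tim's morning-bug, afternoon-fix point and Brian's one-application-per-container model come together in the rolling update. A hedged sketch, continuing with the hypothetical `hello` Deployment and nginx image from the sketch above:]

```python
# Sketch: ship the afternoon bug fix by declaring a new image.
# The control plane replaces pods incrementally (a rolling update),
# so updating the application is routine rather than scary.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [{"name": "hello", "image": "nginx:1.26"}]
            }
        }
    }
}

apps.patch_namespaced_deployment(name="hello", namespace="default", body=patch)
```

Because the application is a first-class object rather than a process buried in a VM, the roll-out, health checking, and rollback are all automatable by the control plane.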
>> When Docker first came out, we took a look at it, we, my node-agent team in Borg, and we went, yeah, it's kind of like a poor man's version of the Borglet. We sort of ignored it for a while, because we were already working on our own open-source effort. We were open-sourcing it not really to change the world and make everybody use it, but more so that we could have conversations with people like the Linux kernel community. When we said, we need this feature, and they'd say, well, why do you need this, we could actually demonstrate for them why we needed it. When Docker landed, we saw the community building, and building, and building. That was a snowball of its own, right? As it caught on, we realized we knew where this was going. We knew that once you embrace the Docker mindset, you very quickly need something to manage all of your Docker nodes once you get beyond two or three of them, and we knew how to build that. We had a ton of experience here. We went to our leadership and said, please, this is going to happen with us or without us, and I think the world would be better if we helped.

>> I think that's an interesting point. You guys had to open-source to do collaboration with Linux, to get that flywheel going for you, out of necessity. Then when Docker validated the community acceptance of, hey, we can just use containers and a lot of magic will happen, it hit the second trigger point. What happened after that? Did you guys just have a debate internally? Is this another MapReduce? Like, we should get behind this. I know there was a big argument, or debate, I should say, within Google.

>> At that time there were a lot of conversations about how we should handle this. That was around the time that Google Compute Engine, our infrastructure-as-a-service platform, was going GA and really starting to get usage. So we had an opportunity to enable our customers to benefit from the kinds of techniques we had been using internally. I don't think the debate was whether we should participate, it was more how. For example, should we have a fully managed product, should we do open source, or should we do managed open source? Those were really the three alternatives we were discussing.

>> Well, congratulations, you guys have done great work, and it's certainly had a huge impact on the industry. I think it's clear that the motivation to have some sort of standardization, a de facto standard, whatever word you want to use, to let people be enabled on top of or below Kubernetes, is great. I guess the next question is, how do you guys envision this going forward as a core? If we're going to go to decomposition with low levels of granularity, tying together through the network at cloud scale under a new operating model, how does the industry maintain the greatness of what Kubernetes is delivering and bring new things to market faster? What's your vision on this?

>> I talked a little bit about this this week. We put a ton of work into extension points, extensibility of the system, trying to stay very true to the original vision of Kubernetes. There is a box, and Kubernetes fits inside the box, and anything that's outside the box has to stay outside the box. This gives us the opportunity to build new ecosystems. You can see it in the networking space, you can see it in the storage space, where whole cottage industries are now springing up around doing networking for Kubernetes and doing storage for Kubernetes. And that's fantastic! You see projects like Istio, which I'm a big fan of; it's outside of Kubernetes. It works really well with Kubernetes, it's designed on top of Kubernetes infrastructure, but it's not Kubernetes. It's totally removable and you don't need it. There are systems like Knative, which are taking the serverless idea and upleveling Kubernetes into the serverless space. It's happening all over the place. We're trying, sort of fanatically, to say no, we're staying this big and no bigger.
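[Editor's note: One of the extension points Tim alludes to is the custom resource mechanism that ecosystem projects build on. A hedged sketch, assuming a CustomResourceDefinition for an invented `example.com/v1` `Backup` kind has already been installed by some ecosystem component; nothing here is from the interview itself:]

```python
# Sketch: treating the Kubernetes control plane as a generic API machine.
# Assumes a CRD for group "example.com", plural "backups" is installed;
# the Backup kind and its fields are invented for illustration.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

backup = {
    "apiVersion": "example.com/v1",
    "kind": "Backup",
    "metadata": {"name": "nightly"},
    "spec": {"schedule": "0 2 * * *", "target": "hello"},
}

# Stored, versioned, watchable, and access-controlled exactly like a Pod,
# even though "Backup" is not part of core Kubernetes.
custom.create_namespaced_custom_object(
    group="example.com",
    version="v1",
    namespace="default",
    plural="backups",
    body=backup,
)
```

This is the kind of mechanism that lets projects like Istio and Knative sit on top of Kubernetes without growing the box itself.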
>> It's really... From an engineering standpoint, it's much simpler if I just build a product and build everything into it. All those connection points, I go back to my engineering training: every connection point is going to be another place where it could fail. Now it's got all these APIs, there are all the security issues, and things like that. But what I love about what I heard right here is one of the learnings we've had in open source: these are all individual components, and most of them can stand on their own. They don't even have to be with Kubernetes, but all together you can build lots of different offerings. How do you balance that? How do you look at that from a design and architecture standpoint?

>> One thing I've been looking at is how we ensure compatibility of workloads across Kubernetes in all different environments and configurations, and how we ensure that the tools and other systems in the ecosystem work with Kubernetes everywhere. This is why we created the Conformance Program, to certify that the critical APIs everybody depends on behave the same way. As we try to improve the test coverage of the conformance suite, we're focusing on the areas of the system that are highly pluggable and extensible. For example, the kubelet on the node has a pluggable container runtime, pluggable networks, and now pluggable storage systems with CSI, so we're really focusing on ensuring we have good coverage of the Pod API. And in other parts of the system, people have swapped out pieces in the ecosystem, whether it's kube-proxy for Kubernetes Services or the scheduler. So we'll be working through those areas to make sure they have really good coverage, so users can deploy, say, a Helm chart, or whatever configuration they use, however they manage their applications, and have it behave the same way on Kubernetes everywhere.

>> I think you guys have done a great job of identifying this enabling concept. What is good enabling technology? Allowing others to do innovation around it. I think that's a nice positioning. What are the new problem areas that you guys see to work on next? Now I see things developing in the ecosystem. You mentioned the Istio service mesh, and people see value in that. Security is certainly a big conversation we've been having this week. What new problem areas or problem sets do you guys see emerging that need to be tackled and knocked down right away?

>> The most obvious one, the thing that comes up in every conversation with users now, is multi-cluster, multi-cloud, hybrid, whether that's two clouds, or on-prem plus cloud, or even across different data centers on your own premises. It's a hard topic. For a long time Kubernetes was able to sort of stick its fingers in its ears and pretend it didn't exist while we built out the Kubernetes model. Now we're at a place where we've crossed the adoption chasm. We're into the real adoption now. It's a real problem. It actually exists and we have to deal with it, so we're now looking at how it's supposed to work. Philosophically, what do we think is supposed to happen here? Technologically, how do we make it happen? How do these pieces fit together? What primitives can we bring into Kubernetes to make these higher-level systems possible?

>> Would you consider 2019 to be the year of multi-cloud, in terms of the evolution of trying to tackle some of these things, like latency?

>> Yeah, I'm always reluctant to say the year of something, because... someone has to get killed, someone dies, someone's winning. It's the year of the Linux desktop.

>> It's the year of something. (laughs) VDI, I'm just saying.

>> I think multi-cluster is definitely the hot topic right now. It comes up with almost every customer we talk to through Google, and there's tons of community chatter about how to make this work.

>> You've seen companies like NetApp and Cisco, for instance, getting a tailwind from Kubernetes. It's been interesting. You need networks; they have a lot of networks. They can play a role in it. So it's interesting how it's designed to allow people to put their hands in there without kind of mucking up the main...

>> Yeah, I think that really contributes to the success of Kubernetes. The more people that can help add value to Kubernetes, the more people have a stake in the success of Kubernetes, both users and vendors, and developers, and contributors. We're all stakeholders in this endeavor now, and we all share common goals, I think.

>> Well guys, final question for you. I know we've got to break on time. Thanks for coming, I really appreciate the time. Talk about an area of Kubernetes that most people should know about but might not. In other words, there's been a lot of hype around Kubernetes, and it's warranted, there's a lot of buzz. What's an important area that's not talked about much that people should know more about and pay attention to within the Kubernetes realm? Is there any area you think isn't talked about enough, that should be focused on in the conversations, in the press, or just in general?

>> Wow, that's a challenging question. I spend a lot of my time on the infrastructure side of Kubernetes, the lower end of the stack, so my brain immediately goes to networking and storage and all the lower-level pieces there. I think there are a lot of policy knobs that Kubernetes has that not everybody's aware of, whether those are security policies or network policies. There's a whole family of these things, and I think we're going to continue to accrue more and more policy as more people come up with real use cases. It's hard to keep it all in your mind, but it's really valuable stuff down there. For programmability, it's like a Holy Grail, really.
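[Editor's note: As a small taste of those policy knobs, here is a hedged sketch of a NetworkPolicy, one of the built-in policy APIs Tim refers to, using the same Python client as the earlier sketches; the label values are hypothetical:]

```python
# Sketch: a network policy knob. This allows ingress to pods labeled
# app=hello only from pods labeled role=frontend, denying other traffic.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    api_version="networking.k8s.io/v1",
    kind="NetworkPolicy",
    metadata=client.V1ObjectMeta(name="hello-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"role": "frontend"}
                        )
                    )
                ]
            )
        ],
    ),
)

net.create_namespaced_network_policy(namespace="default", body=policy)
```

Note that enforcement depends on the cluster's network plugin, one of the pluggable pieces Brian mentioned; a cluster without a policy-aware network plugin will store but not enforce this object.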
>> Thoughts on that? (chuckles) I put you on the spot there.

>> I think about the question of how people should change what they were doing before if they're going to migrate to Kubernetes. To operate any workload, you need at least monitoring, and you really need CI/CD if you want to operate with any amount of velocity. When you bring those practices to Kubernetes, should you just lift and shift them into Kubernetes, or do you really need to change your mindset? I think Kubernetes provides some capabilities that create opportunities for changing the way some things happen. I'm a big fan of GitOps, for example: managing your resources declaratively, using version control as the source of truth, and keeping that in sync with the live state of your clusters. I think that enables a lot of interesting capabilities, like instant disaster recovery, for example, or migrations to new locations. There are some key folks here who are talking about that and giving that message, but we're really at the early stages there.

>> All right, well, great to have you guys on. Thanks for the insights. We've got to wrap up. Thanks Brian, thanks Tim, appreciate it. Live coverage here, theCUBE is at KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman, we'll be back after this short break.