Ep. 7: Kubernetes and Microservices with Joe Beda
Description
Links Mentioned in This Episode
- BigBinary on Twitter
- Joe on Twitter
- Rahul on Twitter
- Kubernetes
- Containerd
- CRI-O
- ark
- sonobuoy
- contour
- ksonnet
- kubeless
- serverless
Transcript
Rahul: Hello, and welcome to a new episode of the “All Things DevOps” podcast. Today, we have Joe from Heptio. To tell you a little bit about Joe: he is a co-founder and the CTO of Heptio, but I will let him introduce himself. Hi Joe, welcome to this new episode of the “All Things DevOps” podcast. Can you introduce yourself?
Joe: Sure. Nice to meet you and nice to be on here. My name is Joe Beda. I am the CTO and founder of Heptio. We are a company specializing in bringing Kubernetes and cloud-native technologies to a wider enterprise audience. Before this, I was at Google for about 10 years, where I helped to start the Kubernetes project after working on cloud stuff there for quite a while.
Rahul: Awesome. Thank you. Recently I have been hearing a lot about Heptio and its work in the Kubernetes ecosystem. I myself have tried some of the tools, like Ark and Sonobuoy, and some of those are really helpful in maintaining Kubernetes clusters. So I wanted to ask: as you said, you have more than 10 years in this container ecosystem. What was the point where Kubernetes evolved? I don’t know whether you were at Google at that time or somewhere else, but where did Kubernetes, or the project it evolved from, start?
Joe: Yeah, so Google, gosh, probably somewhere around 2003, 2004, started building out an internal system called Borg, and it pioneered a lot of the ideas that we see in Kubernetes today. It’s not just about running workloads in containers, but figuring out how you assign those containers to individual machines, how you find those things on the network, and how you manage all of that at scale. If you fast forward several years after that, I started a project at Google called Google Compute Engine, which is Google’s virtual-machine-as-a-service business. Part of getting that off the ground and making it work well with Google’s systems was building it on top of Borg, and so my first experience with cloud was building a virtual machine product on top of a container platform. And as cloud and GCE became more critical to Google’s business, the next step was to try to get engineers inside of Google using the exact same platform, the exact same mechanisms, that folks outside of Google were using.
So this meant that we either got engineers inside of Google building on top of VMs, which to them would have felt like a big step backward once they were used to something like Borg, or we had to bring Borg to the wider world. We decided to do the second thing. It wasn’t really practical to bring Borg to the wider world as is, so we started a new project based on all the ideas that had been proven out in Borg over that time. To be clear, I didn’t start Borg; I was around at Google as it was getting off the ground, but I didn’t start that project. But we did start Kubernetes as a way to bring the ideas that were proven in Borg out to a much wider audience. That was 3 or 4 years ago now; we open sourced it, and it’s been pretty crazy ever since. It’s really been an exciting thing to be part of.
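The ideas Joe describes, assigning containers to machines, finding them on the network, and managing them at scale, surface directly in the Kubernetes CLI. As a rough sketch (this assumes a working cluster and kubeconfig; "example-app" and the image are placeholder names, not anything from the episode):

```shell
# The scheduler decides which machines run the three replicas.
kubectl create deployment example-app --image=nginx --replicas=3

# A Service gives the replicas one stable name for discovery.
kubectl expose deployment example-app --port=80

# Scaling is a declarative change; Kubernetes converges on it.
kubectl scale deployment example-app --replicas=5
```

The point of the Borg lineage is that you declare the desired state and the system continuously reconciles toward it, rather than you placing workloads on machines by hand.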
Rahul: Awesome. That’s really interesting; it’s how a lot of these big upstream projects get built. One thing: at what point was it decided that Kubernetes would be open sourced? What was the moment when Google decided this should be an open source project, and what made a company like Google release such a huge project as open source?
Joe: Sure. I mean, it didn’t start out being a very large project; that happened over time, and it started out pretty modest. It was part of the strategy to make it open source from the start, and it actually took quite a bit of convincing. We probably spent 3 or 4 months doing slides, talking to Google execs, and making the case to do this as an open source project. The reasoning was historical. Google had talked about its internal systems, things like MapReduce, GFS, and Chubby, its lock server, by writing papers. And what they found is that other folks would take these papers and build independent implementations of them, things like Hadoop based on GFS and MapReduce. It would ignite a thriving ecosystem, but because those implementations were slightly different, slightly incompatible with the way Google approached the problem, Google could not benefit from the larger ecosystem that formed around some of the ideas it had proven out.
So that experience, combined with the goal of creating some level of leadership that could be turned into a product at Google with something like Google Kubernetes Engine, really was the plan for doing Kubernetes as an open source project. But it took quite a bit of convincing to get folks on board with it.
Rahul: Okay. You just mentioned Google Kubernetes Engine; there is also ACS, the Azure Container Service, and EKS is coming up as well. A few weeks back I came across one of your products, the Kubernetes undistribution, which promises a cloud-native way of building Kubernetes clusters. Would you talk briefly about that?
Joe: Sure, yeah. One of the things we want to do as a company is make Kubernetes more accessible to a larger set of enterprises, and I think there are a lot of folks who really want somebody they can lean on so that they can get a production-ready Kubernetes cluster up and running, and who want to make sure that if they hit problems, they can talk to somebody. A lot of these folks are running in environments where there is no supported cloud mechanism, whether it be GKE or AKS or EKS. And even if they are running on those, administering a Kubernetes cluster and making it work inside your organization does not end with getting the cluster running. There are still quite a few knobs and decisions and policies that you need to make decisions about. So that’s where we came up with HKS, the Heptio Kubernetes Subscription, which is a support model where we will make sure that you are successful running Kubernetes.
And we call it the undistribution because our goal out of the gate was not to create something significantly different from the upstream experience. We really want to support, to some degree, the open source Kubernetes that everybody else is using. We found that a lot of folks who were used to the distribution model had a hard time wrapping their heads around this, and that’s why we came up with the tagline and called it the undistribution. The idea is that folks want some aspects of a distribution but not others. So we will make sure that you’re successful, we will answer support tickets, and if you hit a problem, we can get you a hotfix. All that is part of making sure that you will be successful with Kubernetes. What we won’t do is create a super-unique, only-available-with-us installation and management experience, and we won’t create a set of tools or customizations to the cluster that are not available elsewhere.
So we want to provide all the good things of a distribution without the experience of diverging from upstream. And I think this is becoming more and more important, as companies really do want to stay close to upstream and want to make sure they don’t paint themselves into a corner by taking dependencies on aspects of a distribution without thinking about it.
Rahul: So companies really want more of the upstream product, whatever it may be, and not just in Kubernetes; if a company offers it as a service, people will expect it to be compatible with upstream. That’s the reality, and this Kubernetes undistribution really sounds interesting. Does it support hybrid cloud architectures, or is it just for bare metal?
Joe: No, we will support folks running on cloud also. Our model is that we can help you get up and running and provide some best practices around getting Kubernetes installed and configured correctly. But we’ll also support you as long as you have a cluster that passes our set of conformance tests, plus some other tests that we are continuing to develop. We are happy to support you however and wherever you are going to run a cluster, whether that be on a cloud or on-prem. So it’s really a matter of: if you have a cluster that’s functioning well, we can help you make
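The cluster checks Joe mentions line up with what Sonobuoy, one of the Heptio tools linked above, is built for: running the Kubernetes conformance suite against a live cluster. A minimal sketch, assuming Sonobuoy is installed and your kubeconfig points at the cluster you want to verify (the exact flags may differ between Sonobuoy versions):

```shell
# Launch the conformance tests in the cluster and wait for them to finish.
sonobuoy run --wait

# Download the results tarball and print a pass/fail summary.
results=$(sonobuoy retrieve)
sonobuoy results "$results"

# Remove the namespaces and resources the test run created.
sonobuoy delete --wait
```

A run like this is one concrete way to demonstrate that a cluster, wherever it runs, behaves like upstream Kubernetes.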