Ep. 2: Deployment lifecycle of Rails apps on Kubernetes


Update: 2017-09-25

Description

In this episode, Vipul, Rahul, and Vishal discuss the deployment lifecycle of Rails apps on Kubernetes, including rolling restarts with zero downtime.



Key Points From This Episode




  • Image building and image caching

  • Moving from Docker Hub to Jenkins

  • Reduction in build time after moving from Docker Hub to Jenkins

  • Tools involved in managing the cluster

  • Kubernetes namespacing and labelling




Transcript



[0:00:00.8] VIPUL: Hey everyone, welcome to our new episode of All Things DevOps. Today we have Rahul and Vishal with us again, and we will be discussing Kubernetes and how we have been using it at BigBinary to develop a deployment tool on top of it, so that we could use it for deploying Rails applications.



Hi Vishal, hi Rahul.



[0:00:18.7] RAHUL: Hey.



[0:00:19.5] VISHAL: Hi Vipul.



[0:00:20.7] VIPUL: Hi. So I guess last time we discussed quite a few things about why we chose Kubernetes over Rancher, as well as how it is best suited for this tool that we are working on. Before we begin, Rahul, would you like to give a brief overview of how the deployment cycle works for the various applications?



What exactly is this app, and how does it get built and deployed? What other things are involved in this?



[0:00:51.2] RAHUL: Yeah, sure. As we mentioned previously, there are four modules in this project, and they are interlinked with each other. For containerizing and deploying them on Kubernetes, the first challenge was to couple them all together and set up the deployment lifecycle.



Obviously, we have our codebase on GitHub, and whenever we commit, we have to deploy that change to the deployment target, be it EC2 or Heroku. With containers, we have to build an image and then deploy it on Kubernetes.



Yeah, the first thing is to build an image, then push it to some Docker registry: Docker Hub, quay.io, or a self-hosted Docker registry. Then trigger your Kubernetes deployment so that your Docker image is pulled and deployed with a rolling restart, meaning with zero downtime.



Those are the basic steps in our deployment lifecycle. The important parts we had to work on were image building, that is, automating image building and caching, and deploying all the apps with zero downtime, while taking care of things like configuration changes, how to make them at runtime, and how to handle all the interlinked services like databases.
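The zero-downtime rolling restart maps onto Kubernetes' rolling update strategy. A minimal sketch of the relevant Deployment settings (all names and values here are illustrative, not taken from the actual project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a pod down before its replacement is ready
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
      - name: app
        image: registry.example.com/myapp:latest
        readinessProbe:            # gate traffic on the app actually serving
          httpGet: {path: /, port: 8080}
```

With `maxUnavailable: 0` and a readiness probe, Kubernetes only removes an old pod once its replacement is passing health checks, which is what makes the restart appear as zero downtime to clients.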



Yeah, I think we can talk more about how we have achieved image building and caching. Vishal, can you give a brief on how we started with image building?



[0:02:27.0] VISHAL: Yes, in our application we have a base image; we have used the ruby 2.2 image, since our Rails application relies on Ruby 2.2. We start from a base image like that. After that, we install some apt packages in the Dockerfile, and the initial step is to update the list of packages using the update command, just to be sure that we are caching things properly.



The standard way is to run the update command and the install command on the same line; you write it as apt-get update && apt-get install followed by the list of packages for the databases and other dependencies.



Because of this, that whole line becomes a single instruction in the Dockerfile, and this instruction will be cached. When you try to build another image from the same Dockerfile, the cached layer will be used instead of doing the same thing over again. After that…
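As a rough sketch of that pattern (the package list is illustrative), chaining update and install in a single RUN makes them one cached layer that is built, and invalidated, together:

```dockerfile
# Start from the Ruby base image the application relies on.
FROM ruby:2.2

# Chain apt-get update and install in one RUN instruction so the
# package list is never stale relative to the install step, and the
# whole thing is cached as a single layer.
RUN apt-get update && apt-get install -y \
      build-essential \
      libpq-dev \
      nodejs \
    && rm -rf /var/lib/apt/lists/*
```

Splitting the two commands into separate RUN instructions would let a cached, stale `apt-get update` layer feed a fresh install, which is exactly the problem this convention avoids.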



[0:03:56.1] VIPUL: So apart from just plain old Ruby, or installing Rails, what else does this image have?



[0:04:04.3] RAHUL: For that, we have split our application into a kind of microservices architecture, and we run services like unicorn for the web side of the app. We also have a web socket process, which is running on Ruby's Thin server, and we have sidekiq as well.



We build the same image, but we pass an argument to the container while running it. Let's say we want to run only the web image; we'll just pass an argument: hey, this container needs to be started with a pod type of web, and it will start only the unicorn process. The same applies for sidekiq and the other processes as well. We are using the same image for running all four services, and that is helping us reduce the build time and deploy time. This is how we are achieving caching and image building.
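In Kubernetes terms, that pattern could look like the following sketch: two Deployments point at the same image but pass a different argument, which the container's entrypoint uses to decide which process to start. The names and the argument convention here are assumptions for illustration, not details from the episode:

```yaml
# Web deployment: started with the "web" pod type.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-web
spec:
  replicas: 2
  selector:
    matchLabels: {app: myapp, type: web}
  template:
    metadata:
      labels: {app: myapp, type: web}
    spec:
      containers:
      - name: app
        image: registry.example.com/myapp:latest
        args: ["web"]          # entrypoint starts the unicorn process
---
# Sidekiq deployment: identical image, different argument.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-sidekiq
spec:
  replicas: 1
  selector:
    matchLabels: {app: myapp, type: sidekiq}
  template:
    metadata:
      labels: {app: myapp, type: sidekiq}
    spec:
      containers:
      - name: app
        image: registry.example.com/myapp:latest
        args: ["sidekiq"]      # entrypoint starts sidekiq
```

Because all four services share one image, a single build and push serves every Deployment, which is where the build-time and deploy-time savings come from.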



There is good support from Docker Hub for image caching: whenever we push a commit to build an image, it automatically uses the previous build from the cache, and our image is built. But while we were building images, I think Vishal, who ran all the builds, used to get stuck on image building while using Docker Hub, because Docker Hub used to take around 10 to 15 minutes to build our image, and it did not support the parallel builds our initial release plan needed.



We searched for some tools which could speed up our deployment process, and I think this is what led us to something like Jenkins. With Jenkins, we really made our image building process faster; we started to run our builds in parallel and reduced the build time as well. Yeah, Vishal, how easy was it to move the Docker image building from Docker Hub to Jenkins?



[0:06:05.4] VISHAL: Yeah, initially we tried to build images on Docker Hub, but because they allot a virtual machine which has a minimal configuration, it actually takes so much time to build an image.



The same image that I was trying to build on my local machine was taking just around 5 to 10 minutes, but on Docker Hub it was taking around 20 to 30 minutes, and that too was happening sequentially, which means if I submitted more than two builds, Docker Hub was picking them up one by one.



Yeah, actually, we were discussing how caching is done. Docker has many ways to reduce the build time and optimize the image size as well, and there are some standard ways. The lines which we write in a Dockerfile are called instructions, and in most cases caching is performed by comparing just the instructions; that is how Docker decides whether to use the previously cached layer or to invalidate the cache and rebuild it. For each instruction, a child image is produced, and that is treated as the cache layer for that particular instruction.



That layer can later be used by Docker as the cache for that particular instruction. This happens for most of the instructions: for example, FROM, where we specify the base image and its tag, and instructions like, let's say, EXPOSE and CMD.



These are the kinds of instructions which are simply compared directly by Docker to check whether there is a cache layer available and whether to use it or not. Then there are some other instructions, like COPY and ADD, for which a different approach is taken by Docker to check whether to use the cache layer generated for those instructions.



The COPY and ADD instructions are used to copy or add files from the build context into the image that is being built. So what happens is that Docker looks at metadata like the last-modified and last-accessed times, and it generates a checksum based on that metadata for the files. If this metadata in the build context is modified somehow, the checksum comparison decides whether to reuse the previously cached layer for that particular ADD or COPY instruction.
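That checksum-based caching is why a typical Rails Dockerfile copies the Gemfile and runs bundle install before copying the rest of the source: a code-only change then invalidates just the final COPY layer, not the gem installation. A rough sketch (paths and the server command are illustrative):

```dockerfile
FROM ruby:2.2

WORKDIR /app

# Copy only the gem manifests first; this layer's checksum changes
# only when the Gemfile or lockfile changes, so bundle install
# stays cached across ordinary code commits.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copying the full source comes last, so day-to-day code changes
# reuse every layer above and only rebuild from here.
COPY . .

CMD ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]
```

Reversing the order, copying all sources before bundle install, would re-run the entire gem installation on every commit, since the COPY checksum would change each time.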



[0:09:49.9] VIPUL: How significant was moving from Docker Hub to how you are building it right now in Jenkins? What build time improvements are you getting on top of doing it on something like Docker's hosted service? How did the caching improve, and how did the time improve?



[0:10:10.0] VISHAL: Okay. As we discussed, Docker Hub wasn't helping to reduce the build time, because we had already tested it on local machines, even with small configurations, and we were able to build those images in less time compared to Docker Hub. There is another service released by Docker, which is Docker Cloud.



With both of these services we were not able to see the improvements, and we also looked at some other options like quay and whatnot, but we decided: let's try to build it on our own Jenkins server. I mean, we set up Jenkins and installed a few packages, like Git to pull the source code from GitHub.
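As an illustration of what such a Jenkins setup might look like as a declarative pipeline (the repository URL, registry, and deployment names are placeholders, not details from the episode):

```groovy
pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps {
        // Pull the source code from GitHub (URL is a placeholder).
        git url: 'https://github.com/example/myapp.git'
      }
    }
    stage('Build image') {
      steps {
        // Reuse the previous image as a cache source so unchanged
        // Dockerfile layers are not rebuilt on this node.
        sh 'docker pull registry.example.com/myapp:latest || true'
        sh 'docker build --cache-from registry.example.com/myapp:latest -t registry.example.com/myapp:$BUILD_NUMBER .'
      }
    }
    stage('Push and deploy') {
      steps {
        sh 'docker push registry.example.com/myapp:$BUILD_NUMBER'
        // Pointing the Kubernetes deployment at the new tag
        // triggers the rolling restart described earlier.
        sh 'kubectl set image deployment/app-web app=registry.example.com/myapp:$BUILD_NUMBER'
      }
    }
  }
}
```

Unlike a hosted builder, multiple jobs like this can run concurrently on Jenkins executors, which is where the parallel builds and reduced build time come from.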



Another package was clou



BigBinary