
Journey Through Kubernetes Deployment

July 16, 2024
Vikas Srivastava
Saubhagya Maheshwari

As developers, we spend a lot of time designing and building applications, so we are comfortable with the challenges that come up while building them. But when it comes to deploying and managing those applications, especially in Kubernetes environments, there are many unknowns that leave a lot of us feeling all at sea.

In this blog series we will go through the journey of deploying an application on Kubernetes. In this part we will focus on the application’s journey from code to container image.

Entry from (Go)Land

We start with a simple hello world application, hello.go, in (Go)Land. This application accepts an HTTP request with a name parameter and responds with hello <name> to the caller.

package main

import (
   "fmt"
   "net/http"
)

func main() {
   // Register handler for all paths and start the HTTP server on port 4567.
   http.HandleFunc("/", handler)

   fmt.Println("Starting server at localhost:4567")
   err := http.ListenAndServe(":4567", nil)
   if err != nil {
      return
   }
}

// handler reads the name query parameter and replies with "hello <name>".
func handler(w http.ResponseWriter, r *http.Request) {
   name := r.URL.Query().Get("name")
   w.Header().Set("Content-Type", "text/plain")
   w.WriteHeader(http.StatusOK)
   _, err := fmt.Fprintln(w, "hello "+name)
   if err != nil {
      return
   }
}

To start this application

go run hello.go
Starting server at localhost:4567
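
With the server running, we can check the behaviour from another terminal, for example with curl:

curl "http://localhost:4567/?name=World"
hello World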

This works well on the local system, but if we want to start the same application in production, we have to copy hello.go to the production server. Copying a single file is still fine, but for applications that have hundreds of files it becomes challenging.

This is like a passenger carrying all their belongings on a journey. We need some sort of luggage check.

Luggage check

When we carry too much luggage on a journey, we need more space to store it and more effort to carry it around. Applications are not very different: copying a lot of files slows down the deployment process, since it requires more resources for transfer over the network and more space on the server. So we can agree that copying all source files as part of deployment is not the right way.

What could be the solution? What do we do to avoid carrying too much luggage on a journey? We sort through our belongings beforehand and carry only what is essential for the journey. For application deployment, we only need the executable, and copying source files is redundant, so let’s apply the same logic here.

That means the deployment steps will be

  1. build an executable locally,
  2. copy that executable to production, and
  3. execute it

go build -o hello
./hello

This allows us to copy only the executable file hello to the production server and then start the application by executing that file. This is how applications are typically deployed outside Kubernetes.
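
One caveat worth noting: the executable must be built for the operating system and CPU architecture of the production server. Go supports this via cross-compilation; below is a sketch assuming the server runs Linux on amd64 (the server name and path are illustrative):

GOOS=linux GOARCH=amd64 go build -o hello
scp hello user@prod-server:/opt/hello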

This works fine for a single application, but if there are multiple applications we need to figure out how many physical servers we need for them.

It is similar to figuring out how many vehicles to take on a journey with family and friends. Next we need some sort of vehicle check.

Vehicle check

Unless the vehicles are small or each passenger is carrying a lot of luggage, it is not optimal to have one vehicle per passenger. Similarly, one application per physical server is optimal only if applications are monolithic and physical servers are small.

However, we know that

  • servers have grown larger due to technological advancements, and
  • applications have become smaller due to the increasing adoption of microservice architecture

This means that most of the time a single service is not big enough to utilise the resources available on a production server, so dedicating one physical server to each service would be wasteful.

If we cannot have one vehicle for each person on the journey, what do we do? We ask each person how much luggage they are carrying, whether they have any special requirements, and so on. Then, based on that, we group them with other passengers and send them together in a vehicle.

If we apply the same logic to applications, this means

  1. understand their resource requirements and any special permissions needed,
  2. based on that, figure out which server suits them best, and
  3. deploy the application on that server

This is how microservices are typically deployed outside Kubernetes.

This works fine for applications that have a predictable resource usage pattern, but for most applications resource usage depends on traffic, which can be highly unpredictable. So if one application requires more resources and gets hold of them, other applications on the same server will be starved when they need resources. We need to define a boundary for each application so that it does not impact other applications on the same server.

It is similar to a scenario where some passengers are not well behaved and use other passengers’ space for keeping their luggage.

Space check

What if passengers on a trip are not well behaved? If one person keeps their luggage in another person’s space, where would that person keep theirs? We tell each person what their designated space is, and within that space they have to sit and keep their luggage. We define a sort of boundary for each person.

Similarly, we need to define boundaries for our applications, and we can use containers for that. We can build a container for each application so that the application uses resources within that container’s boundary and does not impact other applications.
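
Containers enforce such boundaries only when we ask for them. As a sketch, Docker lets us cap memory and CPU per container at run time; the limits below are illustrative, and hello:1.0 is the image we will build in the next section:

docker run --memory=256m --cpus=0.5 hello:1.0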

Building container image
To run a container, we first need to build a container image. There are multiple tools for building images, but here we will use Docker. Let’s add a Dockerfile like the one below in the same directory as hello.go.

FROM golang:latest
WORKDIR /hello
COPY . .
RUN go build -o hello
EXPOSE 4567
CMD ["./hello"]

Here, we are telling Docker to

  • use the latest official golang image as the base,
  • set /hello as the working directory and copy our source files into it,
  • run go build -o hello to generate the executable,
  • expose port 4567 of the container to accept incoming requests for the application, and
  • execute ./hello to start the application when the container starts
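
As an aside, golang:latest is a fairly large base image, and the final image only needs the executable, not the whole Go toolchain. Applying the luggage check once more, a common refinement is a multi-stage build: compile in the Go image, then copy only the executable into a small runtime image. A sketch, optional for this series:

FROM golang:latest AS build
WORKDIR /hello
COPY . .
# Disable cgo so the executable runs on minimal images without glibc
RUN CGO_ENABLED=0 go build -o hello

FROM alpine:latest
COPY --from=build /hello/hello /hello
EXPOSE 4567
CMD ["/hello"]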

After adding the Dockerfile, we can build a container image using

docker build -t hello:1.0 .

Here, we are telling Docker to build an image using the Dockerfile in the current directory and tag that image as hello:1.0.
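
We can confirm that the image was created by listing local images (the reported size will vary):

docker image ls hello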

Running container image
This container image can be run using

docker run hello:1.0
Starting server at localhost:4567

Although the application is running, making an API request to localhost:4567 won’t give any response.

This is because the API request reaches port 4567 of the host machine, but we haven’t told Docker how to forward requests from the host machine to the container. That can be done using

docker run -p 4567:4567 hello:1.0
Starting server at localhost:4567

Here, we are telling Docker to map port 4567 of the container to port 4567 of the host machine.

If we want to use some other host port, say 5678, we can run

docker run -p 5678:4567 hello:1.0
Starting server at localhost:4567

Now the client needs to send requests to localhost:5678; Docker forwards them to our application on container port 4567, and the client gets back a response.
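
We can verify the mapping end to end with the same curl check as before:

curl "http://localhost:5678/?name=World"
hello World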

Summary

So far we have covered how to build a container image for our simple application and run it. However, there is a lot of ground to cover before we deploy an actual application to Kubernetes, and we will get there in subsequent posts.

We at Kapstan are building tools so that developers don’t have to worry about infrastructure and application deployment nightmares. It takes just a few button clicks, and your application will be deployed to a highly available and cost-optimal Kubernetes cluster. If you are interested, get in touch at hi@kapstan.io so that we can start our journey to simple, fast and secure cloud practices together.

Vikas Srivastava
Principal Engineer @ Kapstan. Vikas has over a decade of experience in designing and developing distributed systems at scale. His expertise lies in solving reliability and productivity bottlenecks.
