How to Test Your Kubernetes Application

IAS Tech Blog
5 min read · Aug 3, 2022

By Amen Al-Moamen, Associate Software Engineer at Integral Ad Science

“The bigger they are, the harder they fall” — Somebody at some point

The Problem

At IAS, we’ve slowly (but surely) begun migrating some of our tools and libraries to Kubernetes. By doing so, we hope to one day completely decouple them from our monolithic application into a more efficient microservice-based architecture. However, early in the lift-and-shift process, we ran into what seems to be a widely experienced software predicament: how would developers properly test their future application changes? They can run unit tests against the application’s code using any given mocking framework, and that’s nice and all, but how do they run integration tests now that the application is meant to run in its cozy new Kubernetes home?

Having a way to run and test an application’s functionality and correctness in an environment that simulates how and where the service will be accessed provides an added layer of confidence when deploying to any production-level cluster(s). As an added benefit, running test suites against an application living in a Kubernetes cluster provides important insights into the effectiveness of your Kubernetes configuration (e.g., Docker images, resources, environment variables).

One way a developer can do this is to deploy any changes to an IAS Kubernetes testing cluster. To do that, they first need to create and merge a pull request from a K8s tenant repository and wait for the canary process to detect and release that change. Unfortunately, this process can quickly become long and troublesome if you need to continuously make minor changes to the application’s code and observe the outcome. So, how do we achieve the same reliability as deploying directly to an IAS Kubernetes testing cluster, but locally?

Short answer? minikube.

What the heck is minikube?

At its core, minikube is a tool that allows you to run a single-node Kubernetes cluster on your local machine. With a few basic hardware requirements (2 CPUs, 2 GB of free memory, 20 GB of free disk space) and a Docker container to run in, minikube lets anyone interested in Kubernetes experience what it’s like to work within the Kubernetes ecosystem. Once installed, you’re able to quickly start and stop your cluster, create deployments on the fly, and expose them for quick and easy access to your services (a minimal sketch of that workflow follows the list below). Even more relevant to our needs was its ability to deploy and use our Kubernetes-related tech stack, which includes but is not limited to:

  • Docker
  • Helm
  • Kustomize
  • etc
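
To give a feel for that workflow, here is roughly what spinning up a local cluster and exposing a throwaway deployment looks like; the deployment name and image are placeholders, not part of our stack:

```bash
# Start a single-node cluster inside a Docker container
minikube start --driver=docker --cpus=2 --memory=2g

# Create a deployment and expose it (nginx used purely as a placeholder image)
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80

# Print a URL you can hit from your local machine
minikube service hello --url

# Tear the cluster down when you're done
minikube stop
```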

What initially seemed like just a lightweight tool for learning within a Kubernetes environment suddenly became our answer.

Deploying our applications to minikube

Our first step was obtaining a Docker image of our application. This image can be built locally with the application’s Dockerfile or pulled from a remote repository. At IAS, whenever a pull request is created in our application’s repository, a Docker image reflecting the changes in the pull request can be created and uploaded into a private Docker repository using a comment trigger. So, we were able to easily pull that Docker image and load it into our minikube cluster. Docker images are also automatically generated and uploaded to the private Docker repository on new application releases. Because Docker images are versioned and tagged, we were able to indicate what version we’d like to pull and deploy (PR-version or Release-version).
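
In practice, getting a pull-request image into the local cluster comes down to two commands; the registry, image name, and tag below are placeholders:

```bash
# Pull the image built for the pull request (registry, name, and tag are placeholders)
docker pull registry.example.com/our-app:PR-123

# Make the image available inside the minikube cluster
minikube image load registry.example.com/our-app:PR-123
```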

As of today, all of our applications’ Kubernetes configuration files are packaged and versioned by Helm into their own respective “charts”. You can read more about Helm and Helm Charts here. Our Helm charts, like our Docker images, are also automatically uploaded into a private Helm repository. Once there, they’re used to configure and deploy our applications into IAS Kubernetes clusters. So our next step was pulling these charts so we could deploy our application to our local minikube cluster.
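
Pulling a chart from a private Helm repository looks roughly like this; the repository URL, chart name, and version are placeholders:

```bash
# Register the private chart repository (URL and names are placeholders)
helm repo add ias-charts https://charts.example.com
helm repo update

# Download a specific chart version and unpack it locally
helm pull ias-charts/our-app --version 1.2.3 --untar
```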

Once we had a Docker image and a Helm chart on our local machine, we were able to deploy to our minikube cluster and watch as our application’s pods came up.
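
A minimal sketch of that deployment step, assuming the chart exposes image overrides under the usual `image.repository` / `image.tag` value keys (the release name, chart path, and keys are placeholders):

```bash
# Install the chart into the local cluster, pointing it at the image we loaded
helm install our-app ./our-app \
  --set image.repository=registry.example.com/our-app \
  --set image.tag=PR-123

# Watch the application's pods come up
kubectl get pods --watch
```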

All that was left was to make our pods accessible by running a port-forward command on whatever port we chose, and voilà: we were now able to fully access and interact with our application locally!
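
For example, with a service exposed on port 80 inside the cluster (the service name, ports, and endpoint below are placeholders):

```bash
# Forward a local port to the service running in the cluster
kubectl port-forward service/our-app 8080:80

# The application is now reachable from the local machine
curl http://localhost:8080/health
```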

Once we connected to our pod(s), we were able to start building a test suite that acts as a client to our application.

Testing our services

Before writing any tests, we needed to establish a connection between our test suite and our application. Whether the service is REST- or gRPC-based, it was important that the connection to the service existed and persisted throughout the lifetime of the tests.

Using popular testing frameworks like Spock, we built a comprehensive test suite that encompasses our applications’ most important features. This includes but is not limited to expected responses to different requests, headers returned by various endpoints, and Kubernetes-related configuration values.
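
To give a feel for what such a spec can look like, here is a trimmed-down, hypothetical Spock example (not our actual suite); the base URL, endpoint, and expected header value are placeholders that assume a port-forwarded HTTP service:

```groovy
import spock.lang.Shared
import spock.lang.Specification

// A minimal, hypothetical spec acting as a client to the port-forwarded service.
class ApplicationIntegrationSpec extends Specification {

    @Shared
    String baseUrl = "http://localhost:8080"   // placeholder: port-forwarded service

    def "health endpoint responds with 200 and a JSON content type"() {
        given:
        String healthUrl = "${baseUrl}/health"   // placeholder endpoint

        when:
        HttpURLConnection conn = (HttpURLConnection) new URL(healthUrl).openConnection()

        then:
        conn.responseCode == 200
        conn.getHeaderField("Content-Type").startsWith("application/json")

        cleanup:
        conn?.disconnect()
    }
}
```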

Because these tests can now run essentially anywhere, we took the opportunity to incorporate them into our CI/CD pipeline using a Jenkins-backed environment. Our Jenkinsfile is configured to listen for comment triggers like “run integration” on a pull request; once triggered, it runs the scripts we created earlier to deploy the application to minikube and then runs our test suite against it. Today, any pull request made against our application’s repository is required to run our test suite through Jenkins before it can be merged.
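
As a rough illustration, a declarative Jenkinsfile for this flow might be shaped like the sketch below. The comment-trigger wiring depends on how Jenkins and its SCM plugins are configured and is omitted here, and the script and task names are placeholders:

```groovy
// Hypothetical pipeline sketch; script names and the test task are placeholders.
pipeline {
    agent any
    stages {
        stage('Start minikube') {
            steps {
                sh 'minikube start --driver=docker'
            }
        }
        stage('Deploy application') {
            steps {
                sh './scripts/deploy-to-minikube.sh'   // pulls image + chart, installs release
            }
        }
        stage('Run integration tests') {
            steps {
                sh './gradlew integrationTest'   // placeholder for running the Spock suite
            }
        }
    }
    post {
        always {
            sh 'minikube delete'   // clean up the local cluster regardless of outcome
        }
    }
}
```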

Using minikube, we’ve been able to continuously develop and deploy our Kubernetes applications with little to no breaking changes. Our team’s engineers can now easily test and develop against an instance of our applications running in a local minikube cluster, giving us an added layer of confidence and efficiency that other testing methods couldn’t provide.

Join Our Innovative Team

IAS is a global leader in digital media quality. Our engineers collaborate daily to design for excellence as we strive to build high performing platforms and leverage impactful tools to make every impression count. We analyze emerging industry trends in order to drive innovation, research new areas of interest, and enhance our revolutionary technology to provide top-tier media quality outcomes. IAS is an ever-expanding company in a constantly evolving space, and we are always looking for new collaborative, self-starting technologists to join our team. If you are interested, we would love to have you on board! Check out our job opportunities here.
