Kubernetes on the Ground: Part 2 – Furnishing the Room

This post is part of a series; you can find the first post in the series here.

Once we decided to bring Kubernetes in-house on our infrastructure, we started figuring out how the pieces would fit together. It’s a bit like moving into a larger home with extra rooms. Suddenly you must decorate a foyer when yesterday you didn’t even know how to spell the word “foyer.” You want a perfect, functional space suited for a DIY show, but you fear you may end up buying a table saw to get there.

Making It Your Own

Continuing the furniture analogy, there are things you’d generally want in a foyer. A bench to sit on. A place to store your shoes. A bowl to drop your keys into that will eventually also be filled with LEGO bricks, Cheerios, and somehow not your keys, even though that’s where you swear you left them. But you may also buck tradition. You’re not a fan of “Live Laugh Love” signs, but you’ve always dreamed of having one of those proper wooden coat racks. Trust the analogy here: Kubernetes is exactly like this.

When furnishing a room, you have certain constraints. The size and shape of the room, the windows and doors, the type of flooring. What are the constraints for our Kubernetes build out? In our case, we have existing, perfectly good, reliable servers. The principal constraint on our build out was to build around what we already had. We’re licensed for a VM platform and have a reliable backup solution that works at the VM level.
We also wanted to avoid licensing any new software. We work on a great number of projects, and it’s extremely nice when we can scale up without incurring additional licensing costs. This means we’re limited to components that can be freely licensed.

Additionally, we hoped to avoid doing any actual Kubernetes development. We were willing to build automation AROUND Kubernetes, but not within Kubernetes itself.

Learning the Pieces

Kubernetes is made up of several different components that form a whole. Several of the components have multiple implementations to suit your environment and workloads. Or you could go rogue and build your own implementation – a topic for the future.

The components are driven from a common API and interact with each other in a predefined way. This means that you can mix and match the different component implementations and they (should) interoperate. This is a great feature, since you get to choose what is critical for your environment. For example: if data redundancy and encryption are vital, you can leverage an implementation for volumes which includes those features.
In order to get off the ground, here are the very basic components you need to think about:

Master Nodes

In general, a node is a machine (virtual or otherwise) which you’re committing to the Kubernetes cause. Amongst nodes, some have a special purpose. These master nodes are responsible for taking your instructions and executing them. They also ensure that your instructions stay executed by monitoring the other nodes. If something fails, the master nodes attempt to work around the failure and get things back to normal.

Worker Nodes

The other nodes are resources used by the master nodes to run your workloads. The work ultimately boils down to containers running on your nodes, but you generally don’t need to worry about what runs where. The containers will be scheduled onto the nodes based on the availability of resources.
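
For a taste of what “not worrying about what runs where” looks like, here’s a minimal sketch of a workload definition. The name and image are placeholders, not part of our actual build out; the point is that you declare how many copies you want and let the master nodes figure out placement.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3                # ask for three copies; the scheduler picks the nodes
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:1.25  # placeholder; any container image works here
          ports:
            - containerPort: 80
```

Apply it with kubectl apply -f, and the cluster takes it from there: if a worker node fails, the master nodes reschedule the missing copies onto healthy nodes.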

Container Runtime

Your containers are executed by a runtime that Kubernetes manages. This is ultimately where Docker comes into play, but there are other options if you want them.

Kubernetes Network

Your nodes work together as a cluster that exists on the same physical network as any non-Kubernetes machines that you manage. Additionally, however, Kubernetes creates a virtual, private network that all your containers run on. This network is seamlessly handled between nodes, so you don’t need to worry about where any two things are for them to talk with each other.
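
To see that flat network in action, here’s a minimal sketch: two throwaway pods that can reach each other over the cluster network even if they land on different worker nodes. The names and image are illustrative placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ping-a
spec:
  containers:
    - name: shell
      image: busybox              # small image with basic networking tools
      command: ["sleep", "3600"]  # keep the pod alive so we can poke at it
---
apiVersion: v1
kind: Pod
metadata:
  name: ping-b
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]
```

Once both pods are running, kubectl get pod ping-b -o wide shows the cluster IP that Kubernetes assigned to ping-b, and kubectl exec ping-a -- ping -c 3 against that IP works regardless of which nodes the pods landed on.
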
Volume Storage

Any time you run a container which needs to persist data, you need a volume to store it in. Because containers can be moved between nodes, you won’t store your data on the nodes themselves. Kubernetes has several volume storage implementations. If you were working with a cloud provider, volumes would typically be backed by the provider’s storage service. When working on the ground, it may suffice to have a server host NFS exports for your volume data.
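
As a sketch of that on-the-ground approach, here’s what an NFS-backed volume and a claim against it could look like. The server address and export path are hypothetical; substitute your own.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # multiple nodes can mount the share at once
  nfs:
    server: 192.168.1.50     # hypothetical NFS server on your network
    path: /exports/k8s/example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the pre-created volume, not a dynamic provisioner
  resources:
    requests:
      storage: 5Gi
```

A container then mounts the claim by name, and the data survives the container being rescheduled to another node.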

Proxying or Load Balancing

If your workloads are web applications, you need them to be accessible beyond Kubernetes’ network. Kubernetes allows you to expose ports from your containers in a reliable manner using services. This keeps the access point for your application consistent, even if the containers move around. However, there’s a “gotcha” with services: the exposed ports must come from a high, non-standard range (30000-32767 by default). Because you’re not going to convince your users to type “:32531” after the URL of your web application, the intended pattern is to put a proxy or load balancer in front of your Kubernetes cluster to normalize your access point into something users will be comfortable with. There is a silver lining, however: your proxy will be a great, consistent place to handle your SSL certificates.
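
Here’s a sketch of such a service; the name, labels, and ports are illustrative, not from our actual setup.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: NodePort             # expose the service on every node's IP
  selector:
    app: webapp              # route to containers carrying this label
  ports:
    - port: 80               # port inside the cluster network
      targetPort: 8080       # port the container actually listens on
      nodePort: 32531        # must fall within 30000-32767 by default
```

Your external proxy or load balancer then forwards ports 80 and 443 to any node’s IP at :32531, terminating SSL along the way, so your users never see the odd port number.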

Next Steps

Now that we have an idea of our constraints and a basic understanding of the components, what’s next? The next post in this series will give a high-level overview of our Kubernetes build out.

Additional Resources

The official Kubernetes documentation goes into depth on the architecture of Kubernetes. They also have information on specific providers you can leverage for some pieces of your Kubernetes build out.