Kubernetes from the Ground Up: Choosing a configuration method

Kubernetes’ configuration is simply a bunch of Kubernetes objects. Let’s take a quick look at what these objects are and what they’re used for. The following quote is from the Kubernetes object documentation:

Kubernetes Objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:

  • What containerized applications are running (and on which nodes)
  • The resources available to those applications
  • The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
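
For example, a minimal object might look like the following sketch (the names and image are illustrative, not from the post):

    # Apply a minimal Pod object to the cluster
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-web
    spec:
      restartPolicy: Always   # one of the policies mentioned above
      containers:
        - name: web
          image: nginx:1.25   # the containerized application to run
    EOF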

Continue reading

Kubernetes from the Ground Up: What is it?

If you’ve looked into containers before, you’ve likely heard the name Kubernetes. This post will tackle what it is at a high level, while subsequent posts will delve deeper into the details.

Let’s kick off this post with a couple of quotes from the Kubernetes website:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

There’s a lot of great information in these quotes, so let’s go ahead and take a closer look.

Continue reading

DevOps: The Evolution of Applications

One server per application

In the not-too-distant past, it was normal to have a one-to-one relationship between applications and servers. For example, your mail and web applications would reside on two separate physical servers.

This approach was both inefficient and costly. Each server would be purchased with years of predicted growth in mind, resulting in racks full of overly powerful, underutilised servers that drove up cooling, power and data centre space costs.

Furthermore, applications would often fall short of their forecasted growth, leaving businesses in a stalemate. They would want to move their applications to smaller servers to save on costs, but they couldn’t guarantee the applications wouldn’t break in the process. And if they did break, that would end up costing the business too. It really was a lose-lose situation.

Continue reading

Getting Started with Prometheus – Part 2

As you’ve probably guessed from my Docker posts, I’m a huge fan of containerisation. Therefore, instead of installing Prometheus on a host, let’s spin it up in a container. As described on the Prometheus website, we can accomplish this by issuing only a single command:
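
    docker run -p 9090:9090 -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus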

Let’s break down each component of this command to make sure we fully understand what it is doing:

  • docker run: Spin up a container
  • -p 9090:9090: Bind a port on our Docker host to a container port. This enables devices outside of the Docker host to reach the container on port 9090
  • -v /tmp/prometheus.yml:/etc/prometheus/prometheus.yml: Bind the /tmp/prometheus.yml file stored on the Docker host to /etc/prometheus/prometheus.yml inside of the container
  • prom/prometheus: The <user_account>/<image_name> of the image that we want to use

Continue reading

Getting Started with Prometheus – Part 1

If you’ve used Grafana, or even heard of it, chances are you’ve heard of InfluxDB and Prometheus too. As I haven’t touched on the latter yet, I figured now is a good time to start. In case you haven’t heard of some, or all, of these applications, let’s start off with a quick description of what they can do for us.

Note: You might also want to have a read of the My Monitoring Journey: Cacti, Graphite, Grafana & Chronograf post too.

Grafana is a frontend web app that is used to create beautiful dashboards. It does this by retrieving metrics stored on backend database servers such as InfluxDB, Prometheus, MySQL, PostgreSQL and Graphite (to name just a few). It then uses these metrics to create graphs which are displayed on the aforementioned dashboards.
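
If you’d like to try Grafana in the same containerised fashion described in the Prometheus posts, a minimal sketch (not from the post) looks like this:

    # Run Grafana in a container and expose its web UI on port 3000
    docker run -d -p 3000:3000 grafana/grafana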

Continue reading

Getting Started with Docker – Part 2

In the previous post we may have started running before we could walk. In this post we’ll first take a few steps back to make sure we cover the basics before diving deeper.

Building an image

When we examined the python:3.6.3-alpine3.6 image’s Dockerfile, we saw the components which are used to create a Docker image:

  • FROM alpine:3.6
  • ENV commands
  • RUN commands
  • A single CMD command

OK, so we already know that the FROM alpine:3.6 command means that the Python image is going to use Alpine 3.6 as its Linux operating system. ENV, as its name suggests, is used to specify environment variables for the image. That leaves us with RUN and CMD.
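
To make those two concrete, here’s a minimal sketch of a Dockerfile (illustrative, not the python:3.6.3-alpine3.6 file itself) together with the command that builds it into an image:

    # Write an illustrative Dockerfile, then build an image from it
    cat > Dockerfile <<'EOF'
    FROM alpine:3.6
    # ENV sets an environment variable inside the image
    ENV GREETING=hello
    # RUN executes at build time, baking its result into the image
    RUN apk add --no-cache bash
    # CMD defines what runs when a container starts from the image
    CMD ["bash", "-c", "echo $GREETING"]
    EOF
    docker build -t demo-image .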

Continue reading

Getting Started with Docker – Part 1

For those new to Docker, you’re probably wondering – What is it exactly?

“Docker is the company driving the container movement and the only container platform provider to address every application across the hybrid cloud.”

OK, it’s the name of a company. The next question is, what are containers? Rather than give you another quote, I’ll give you my own definition…

What are containers?

Containers are like portable executables. You know, the applications that come as a standalone .exe file and don’t require installation? I draw this comparison for the following reasons:

Continue reading

Python: Demystifying AWS’ Boto3

As the GitHub page says, “Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python, which allows Python developers to write software that makes use of services like Amazon S3 and Amazon EC2.”

The good news is that Boto3 is extremely well documented. The bad news, however, is that the documentation is quite difficult to follow. It starts with a Quickstart guide, followed by a Sample Tutorial, followed then by Code Examples. This is all good stuff, though it doesn’t give you much of an understanding of how to actually use Boto3. For example, we see snippets along these lines (paraphrased):
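
    # paraphrased sketch of the docs' early examples; variable names are illustrative
    import boto3

    s3_resource = boto3.resource('s3')  # a "resource": higher-level, object-oriented API
    s3_client = boto3.client('s3')      # a "client": low-level service access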


But at this point we haven’t yet learned what a client and a resource are, nor do we see sessions mentioned until much later in the documentation. But I digress. Let’s go ahead and get started!

Continue reading

Git: Merging & Rebasing basics

In the Git: Keeping in sync post we learned how to merge the origin/master commits into our local master branch. Then in Git: Effective branching using workflows we learned how to use branches effectively. What we haven’t touched on yet, though, is rebasing and its effect on merging.

Commit log

Before we get started on merging and rebasing, let’s first see how we can view our git log, as we’ll need to do so throughout this post. One plausible form of the command (the exact flags are illustrative) is:
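
    git log -3   # limit the output to the three most recent commits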

In a nutshell, the above command shows us the last three commits made in this repo. If we want to get a little fancier, we can have git draw a graph for us, along these lines (again, flags illustrative):
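
    git log --graph --oneline --all   # draw the commit history as an ASCII graph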

Continue reading