Community July 25, 2019

Running HA Kubernetes clusters on AWS using KubeOne

In a previous blog post, we talked about KubeOne and how it makes your highly available Kubernetes cluster easier to manage. In this post, I’ll show you step-by-step how to deploy and run a vanilla cluster with machine-controller and metrics-server on AWS. You may also adapt this process to other providers since it differs only marginally.


Creating your first cluster

In this example we use Terraform to create the infrastructure. Of course, KubeOne can also be used with Ansible or with manually provisioned machines. We will run through the following steps:

- Download KubeOne and Terraform
- Create the AWS infrastructure with Terraform
- Generate a KubeOne configuration file
- Provision the cluster with KubeOne
- Use and scale the new cluster

If this sounds like a lot of effort, don't worry: we have already taken care of many of these steps for you. Expect them to take about 20 minutes of your time.


How to AWS?

Let’s say you have access to Amazon Web Services (AWS) and want to get started: go ahead and grab the latest release from our releases page. KubeOne is supported on Linux and macOS.


Installing

Starting with v0.10.0-alpha.0, our release archive contains the KubeOne binary and example Terraform scripts that can create the infrastructure for you. If you want to use an earlier version, you can find the examples in our GitHub repository. If your distribution's package repositories are up to date, you can install Terraform from there; otherwise, follow the Terraform installation guide (use at least version 0.12.0).


Terraforming

Next up, we’ll configure Terraform and create the infrastructure where we’ll run Kubernetes. Inside the KubeOne repository, head into the examples/terraform/aws directory. Here you’ll find the Terraform scripts you can use to get started.


First, fetch the required Terraform modules using the following command:


terraform init


If you don’t need anything too fancy, you don’t need to touch the existing files here. You just need to create a file named terraform.tfvars that contains basic information on how your cluster should be shaped:


cluster_name = "alexbox"
aws_region = "eu-central-1"
worker_os = "ubuntu"
ssh_public_key_file = "~/.ssh/id_rsa.pub"


This sets the name of the new cluster, the region the nodes will reside in, and the operating system of the worker nodes. I chose the Central Europe region (since I live there and want low latencies) and Ubuntu for the control plane and worker nodes. By default, a single worker node is created initially. Terraform will copy your public SSH key to the created hosts so you can access them; make sure the key is present at the configured location. There are more options you can customize. Just take a look at variables.tf for the full list of variables.
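If you prefer to script this step, the same terraform.tfvars can be written in one go. The values below are the ones used in this post; adjust them to your own cluster name, region, and key path:

```shell
# Write the terraform.tfvars from this post in one go.
# Adjust the values to match your own setup.
cat > terraform.tfvars <<'EOF'
cluster_name = "alexbox"
aws_region = "eu-central-1"
worker_os = "ubuntu"
ssh_public_key_file = "~/.ssh/id_rsa.pub"
EOF

# Quick sanity check: all four variables are present.
grep -c '=' terraform.tfvars   # prints 4
```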


To tell Terraform and KubeOne how to authenticate, export your credentials (note that the first character of the command line is a space, which prevents your credentials from popping up in your bash history):


$  export AWS_ACCESS_KEY_ID='🔒🔒🔒' AWS_SECRET_ACCESS_KEY='🔒🔒🔒'


These credentials are also deployed to the cluster so that tools like machine-controller can autoscale it.


If you want to customize the initial infrastructure even further, e.g. to increase the number of worker nodes, you can edit the output.tf file accordingly. Finally, create your infrastructure using:
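As a rough sketch, the worker pool definition in output.tf looks something like the following. The exact structure and field names vary between KubeOne versions, so treat this excerpt as illustrative and check your local output.tf:

```hcl
# Illustrative excerpt of output.tf; field names vary by KubeOne version,
# so consult the file shipped in examples/terraform/aws.
output "kubeone_workers" {
  value = {
    "${var.cluster_name}-pool1" = {
      replicas = 1  # raise this to start with more worker nodes
      # ... provider-specific machine settings (instance type, AMI, subnets)
    }
  }
}
```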


terraform apply


Now that our infrastructure is in place, in comes KubeOne, which will set up the cluster for us.


Deploy your K8s high up in the clouds

Now you have some barren VMs that need to be populated with your hopes and dreams, mainly the Kubernetes control plane. Use the following command to generate a KubeOne configuration file:


$ kubeone config print > config.yaml


In our case, the configuration contains the Kubernetes version to install and the cloud provider to use. The rest is taken from the Terraform state:


apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
name: alexbox
versions:
  kubernetes: 1.14.1
cloudProvider:
  name: aws


The basic KubeOne configuration file defines which Kubernetes version will be used and which provider we’re deploying on. The configuration file offers many more options and features; if you want to customize them, get the full configuration file with:
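For instance, a slightly customized configuration could pin the pod and service networks in addition to the version and provider. The clusterNetwork fields below are a sketch against the v1alpha1 API; double-check the output of the full config print for the authoritative field names in your KubeOne version:

```yaml
# Sketch of a customized config.yaml; verify field names against
# `kubeone config print --full` for your KubeOne version.
apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
name: alexbox
versions:
  kubernetes: 1.14.1
cloudProvider:
  name: aws
clusterNetwork:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
```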


$ kubeone config print --full > config.yaml


The cluster is provisioned by supplying the install command with the configuration file and the Terraform state:


$ kubeone install config.yaml -t .


Using your newly created cluster

Now that the cluster provisioning is done, you can use your new cluster. KubeOne even dropped the kubeconfig file for you. Just export its location as an environment variable:


$ export KUBECONFIG=$PWD/alexbox-kubeconfig


You are now ready to kubectl around as you like. For example, try listing the nodes of the new cluster:


$ kubectl get nodes


Scaling your cluster

KubeOne installs a vanilla cluster together with a couple of open source projects, such as machine-controller, which automatically manages worker nodes using the Cluster API, and metrics-server. machine-controller makes it easy to scale your cluster on demand. Execute the following to list all MachineDeployments in your new cluster:


kubectl --kubeconfig=alexbox-kubeconfig get -n kube-system machinedeployment


The result should be something like this:


NAME            REPLICAS   PROVIDER   OS       KUBELET   AGE
alexbox-pool1   1          aws        ubuntu   1.14.1    3m44s
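Under the hood, each MachineDeployment is a Cluster API object whose replicas field controls the size of the worker pool. Here is a trimmed-down sketch of such an object against the v1alpha1 API; the real object, viewable with kubectl get machinedeployment -o yaml, carries the full provider spec for the AWS instances:

```yaml
# Trimmed-down sketch of a MachineDeployment (Cluster API v1alpha1);
# the actual object in your cluster contains much more detail.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: alexbox-pool1
  namespace: kube-system
spec:
  replicas: 1        # scaling the deployment changes this field
  template:
    spec:
      versions:
        kubelet: 1.14.1
```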


To increase the number of worker nodes to three, just scale the MachineDeployment up:


kubectl scale -n kube-system machinedeployment/alexbox-pool1 --replicas=3


Note: There is currently a bug with Kubernetes 1.15 that requires you to provide the resource version when using the scale command. You can obtain this version using kubectl describe.


Of course, you can also opt out of machine-controller, but then you have to take care of the worker nodes yourself.


Conclusion

As you can see, the happy path of KubeOne makes bootstrapping a cluster much more automated, less error-prone, and (almost) fun to do.


KubeOne works out-of-the-box with a bunch of other cloud providers, including Google Cloud, Packet, and DigitalOcean. You can even install a cluster on premise with OpenStack, VMware vSphere, or completely on bare metal if you are adventurous. See our docs for more information, take a look at our latest webinar, or watch the technical deep-dive talk that was recorded at ContainerDays 2019.

Alexander Sowitzki


Site Reliability Engineer
