My first self-hosted kubernetes cluster with RKE
Posted in development on November 18, 2020 by Adrian Wyssmann ‐ 7 min read
After reading my post Kubernetes - a brief introduction to container orchestration you should know the different entities of a cluster.
Managed Clusters
Probably the easiest way to get a cluster is to use a managed cluster, which is essentially a cluster managed by your cloud provider, who takes care of the setup of the nodes. The most common ones are Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS).
Self-hosted Clusters
Obviously not all companies can or want to use a cloud provider; some would rather host their own cluster on premises. Ultimately you need some nodes on which you can run Kubernetes - or rather its respective components. To set up kubernetes you can use one of several available tools.
Of course, you can also use tools like Ansible or RKE, which really simplify the setup. In addition there are more advanced platforms which not only help you to set up your cluster(s) but also facilitate their management and use. Such platforms are for example Rancher or Openshift. Whatever you use heavily depends on your needs and your use case.
At my current employer we use Rancher to manage our 8 clusters (a playground cluster, a dev cluster, test clusters and various production clusters). This works very well, and upgrading is quite easy, with almost never any downtime. Personally - I have 3 dedicated servers - I use Ansible to set up the servers, in combination with RKE to manage (install, update) my single cluster.
If you simply want to try out kubernetes on your development machine, there is Minikube
minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.
Usually the first thing you need is a container runtime, which is the software responsible for running containers. There are several available, whereas the most common are:
So basically you have to ensure your nodes have one of these installed.
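Since rke relies on docker, a quick way to verify this on a node is the following sketch:

```shell
# Check whether a container runtime (docker, in the case of rke) is installed
if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "no docker binary found - install a container runtime first"
fi
```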
My Setup
I have 3 dedicated servers on which I will setup my cluster using Rancher Kubernetes Engine (RKE):
RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It solves the common frustration of installation complexity with Kubernetes by removing most host dependencies and presenting a stable path for deployment, upgrades, and rollbacks.
This makes it very easy to set up a self-hosted cluster, but keep in mind that it currently requires Docker.
Preparing the nodes
The nodes for the cluster need to be prepared:
- ensure ssh is enabled on the nodes
- create a user, e.g. `sshuser`
- create an ssh key pair for the user
- install docker (a pre-requisite for rke)
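The user and key steps can be sketched as follows - the hostname is a placeholder, and the docker install command depends on your distribution:

```shell
# On each node (as root) - create the user rke connects as; membership in the
# docker group is needed so rke can talk to the docker daemon:
#   useradd -m -G docker sshuser
#   <install docker with your distro's package manager>

# On your workstation: create a dedicated key pair for that user ...
keydir="$(mktemp -d)"
ssh-keygen -t ed25519 -f "$keydir/rke_key" -N "" -C "sshuser@cluster"

# ... and copy the public key to each node (placeholder hostname):
# ssh-copy-id -i "$keydir/rke_key.pub" sshuser@node001.mydomain.com
ls "$keydir"
```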
Setup the cluster (using Rancher Kubernetes Engine (RKE))
The usage is pretty simple and the configuration of the cluster can be expressed in the form of a yaml file, i.e. `cluster.yml`. Mine looks something like this:
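For reference, a minimal `cluster.yml` along these lines could look like this - hostnames, addresses and the ssh key path are placeholders, not my real values:

```yaml
# sketch of a minimal rke cluster.yml (placeholder values)
nodes:
  - address: node001.mydomain.com        # public address rke connects to
    internal_address: 192.168.10.1       # VLAN address the nodes use internally
    hostname_override: node001           # hostname within kubernetes
    user: sshuser
    role: [controlplane, worker, etcd]
  - address: node002.mydomain.com
    internal_address: 192.168.10.2
    hostname_override: node002
    user: sshuser
    role: [controlplane, worker, etcd]
  - address: node003.mydomain.com
    internal_address: 192.168.10.3
    hostname_override: node003
    user: sshuser
    role: [worker, etcd]
ssh_key_path: ~/.ssh/rke_key
```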
`rke` will take care of installing all [necessary components required by kubernetes](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/#components-of-kubernetes) on the 3 nodes defined under `nodes`:
`rke` creates an ssh connection to the hosts at `node00x.mydomain.com`:

- we use user `sshuser` to connect
- the hostname for kubernetes will be `node00x`
- we use an `internal_address` on which the nodes communicate (this is actually a VLAN which connects the 3 servers)
- the nodes with the [`controlplane`](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/#control-plane) role will get the `kube-api` installed
- all are [`worker` nodes](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/#worker-nodes), which means deployments will be running on any of these nodes
- all are `etcd` nodes, which means all nodes will have a copy of the `etcd` data
So now that we know what will happen, let's do the installation:
1. Download the `rke` binary (I use Arch, so you can find it in the AUR)
2. Apply your config by running `rke up --config ./cluster.yml`
This takes some time. When it has been applied successfully, you can check that e.g. the kubelet is running. You should be aware that when using RKE the necessary kubernetes services themselves run as containers, so you can inspect them with docker.
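For example, on one of the nodes you can list these containers - the exact container names vary, so this is just a sketch:

```shell
# The kubernetes services (kubelet, kube-apiserver, etcd, ...) run as
# containers on the nodes, so docker can show them:
if command -v docker >/dev/null 2>&1; then
  docker ps --format '{{.Names}}\t{{.Status}}'
else
  echo "docker not available here - run this on one of the cluster nodes"
fi
```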
You will have two files available:

- `cluster.rkestate`: the Kubernetes Cluster State file; this file contains credentials for full access to the cluster
- `kube_config_cluster.yml`: the Kubeconfig file for the cluster; this file contains credentials for full access to the cluster
Accessing cluster with kubectl
In order to be able to interact with your cluster, you need kubectl, the command-line tool that allows you to run commands against your clusters. After installing it you also need a proper kubeconfig file, which contains the necessary information to interact with your cluster.
How to configure kubectl
After installing kubernetes you can check `/etc/kubernetes/ssl` on the nodes, where you will find some `.yaml` and `.pem` files:
The different certificates are automatically generated and are required to access the cluster. You can read some more details here on what they are used for and how to use your own. You can also see there are kubeconfig files, for example `kubecfg-kube-node.yaml` - this is the one used by the nodes and looks as follows:
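In case you don't have a cluster at hand, the shape of such a node kubeconfig is roughly the following - server url, paths and names are illustrative:

```yaml
# sketch of a node kubeconfig such as kubecfg-kube-node.yaml (illustrative values)
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://127.0.0.1:6443
    certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
contexts:
- name: local
  context:
    cluster: local
    user: kube-node
current-context: local
users:
- name: kube-node
  user:
    client-certificate: /etc/kubernetes/ssl/kube-node.pem
    client-key: /etc/kubernetes/ssl/kube-node-key.pem
```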
- `clusters` defines the available clusters, with
  - `name`: a unique name
  - `server`: the url of the kube-apiserver
  - `certificate-authority`: the server certificate to access the apiserver
- `contexts`: a combination of a `cluster` and a `user`
- `users`: a list of users that can access the cluster(s), with
  - `client-certificate`: the client certificate to authenticate to the API server
  - `client-key`: the matching private key to the certificate
So if you want to access your cluster from your developer machine you need such a kubeconfig. Usually it lives at `$HOME/.kube/config`, but you can also pass a file explicitly with `kubectl --kubeconfig $pathToYourFile`, or define a list of files that should be merged in the environment variable `KUBECONFIG`. Now that you know what a kubeconfig looks like, you might create your own. You could take the one above as an example and download it, but you might need to

- change the `server` to actually point to your kube-apiserver, so add a valid ip or uri
- copy the certificates to your local machine and adjust the paths

You can also embed the certificate files as Base64-encoded data - in that case you need to add the suffix `-data` to the keys, e.g. `certificate-authority-data`, `client-certificate-data`, `client-key-data`. Luckily, as seen above, we already have a config file provided, so you can simply copy or move it: `cp kube_config_cluster.yml $HOME/.kube/config`. Now, let's have a quick look at this config file:
As you can see, the names of the entities are slightly different - you can actually choose whatever you like, as long as it all matches together - and also `server` is now pointing to a fixed ip. In addition, you can see that the certificates are embedded. The structure of the kubeconfig file allows you to have multiple clusters configured.
Cool, now that we have a kubeconfig file, make yourself familiar with kubectl and then let’s access our cluster by getting the nodes:
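A minimal check, assuming kubectl is installed and the kubeconfig from above is in place:

```shell
# List the cluster nodes - with a working kubeconfig this prints one line per node
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o wide || echo "could not reach the cluster - check your kubeconfig"
else
  echo "kubectl not found - install it first"
fi
```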
What next
Now that access to the cluster works, I want to cover the topic of authentication in more detail in an upcoming post. We can now also start to do some actual stuff with the cluster, like creating a deployment. I plan to have dedicated posts about some of the entities (deployment, statefulset) I introduced in my last post [Kubernetes - a brief introduction to container orchestration](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/). Stay tuned, there is a lot more to come.