My first self-hosted Kubernetes cluster with RKE
Posted November 18, 2020 by Adrian Wyssmann ‐ 7 min read
After reading my post Kubernetes - a brief introduction to container orchestration you should know the different entities of a cluster.
Probably the easiest way to get a cluster is to use a managed cluster, which is essentially a cluster managed by your cloud provider, who takes care of setting up the nodes. The most common are Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS).
Obviously, not all companies can or want to use a cloud provider; some would rather host their own cluster on premises. Ultimately you need some nodes on which you can run Kubernetes - or the respective components. To set up Kubernetes yourself you can use one of several installation tools.
Of course, you can also use tools like Ansible or RKE, which really simplify the setup. In addition, there are more advanced platforms which not only help you set up your cluster(s) but also facilitate their management and use. Such platforms are, for example, Rancher or OpenShift. Whatever you use heavily depends on your needs and your use case.
At my current employer we use Rancher to manage our 8 clusters (a playground cluster, a dev cluster, test clusters and various production clusters). This works very well, and upgrading is quite easy with almost no downtime. Personally - I have 3 dedicated servers - I use Ansible to set up the servers, in combination with RKE to manage (install, update) my single cluster.
If you simply want to try out Kubernetes on your development machine, there is Minikube:
minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.
Usually the first thing you need is a container runtime, which is the software responsible for running containers. Several are available; the most common are:
So basically you have to ensure your nodes have one of these installed.
I have 3 dedicated servers on which I will set up my cluster using Rancher Kubernetes Engine (RKE):
RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It solves the common frustration of installation complexity with Kubernetes by removing most host dependencies and presenting a stable path for deployment, upgrades, and rollbacks.
This makes it very easy to set up a self-hosted cluster, but keep in mind that it currently requires Docker.
Preparing the nodes
The nodes for the cluster need to be prepared:
- ensure ssh is enabled on the nodes
- create a user for rke to use, e.g. `sshuser`
- create an ssh key pair for the user
- install Docker (prerequisite for rke)
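The steps above can be sketched roughly as follows; this assumes a Debian-based node, the user name `sshuser` matches the one used later in my cluster.yml, and the pinned Docker script version is just an example:

```shell
# On each node (as root): install Docker first -
# Rancher provides version-pinned convenience scripts
curl https://releases.rancher.com/install-docker/19.03.sh | sh

# Create the user rke will log in as and allow it to use Docker
useradd -m -s /bin/bash sshuser
usermod -aG docker sshuser

# On your workstation: create an ssh key pair for that user ...
ssh-keygen -t ed25519 -f ~/.ssh/rke_key -N ""

# ... and copy the public key over so rke can log in without a password
ssh-copy-id -i ~/.ssh/rke_key.pub sshuser@node001.mydomain.com
```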
Set up the cluster (using Rancher Kubernetes Engine (RKE))
The usage is pretty simple: the configuration of the cluster is expressed in a single file, `cluster.yml`. Mine looks something like this:
```yaml
nodes:
  - address: node001.mydomain.com
    hostname_override: node001
    internal_address: 192.168.100.1
    user: sshuser
    role: [controlplane,worker,etcd]
  - address: node002.mydomain.com
    hostname_override: node002
    internal_address: 192.168.100.2
    user: sshuser
    role: [controlplane,worker,etcd]
  - address: node003.mydomain.com
    hostname_override: node003
    internal_address: 192.168.100.3
    user: sshuser
    role: [worker,etcd]
cluster_name: dev-cluster
kubernetes_version: "v1.19.3-rancher1-1" # rke config --list-version --all
```
rke will take care of installing all [necessary components required by Kubernetes](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/#components-of-kubernetes) on the 3 nodes defined under `nodes`:

- rke creates an ssh connection to the hosts at `address` - we use user `sshuser` for that
- the hostname for Kubernetes will be the one set as `hostname_override`
- we use an `internal_address` on which the nodes communicate (this is actually a VLAN which connects the 3 servers)
- node001 and node002 are [`controlplane`](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/#control-plane) nodes, which means on these nodes we will get the control plane components
- all are [`worker` nodes](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/#worker-nodes), which means deployments will be running on either of these nodes
- all are `etcd` nodes, which means all nodes will have a copy of the cluster state
So now that we know what will happen, let's do the installation:
Download the rke binary (I use Arch, so you can find it in the AUR)
Apply your config by running:

```shell
rke up --config ./cluster.yml
```

```
INFO Running RKE version: v1.2.2
INFO Initiating Kubernetes cluster
INFO [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO [certificates] Generating admin certificates and kubeconfig
INFO Successfully Deployed state file at [./cluster.rkestate]
INFO Building Kubernetes cluster
....
INFO [addons] Setting up user addons
INFO [addons] no user addons defined
INFO Finished building Kubernetes cluster successfully
```
This takes some time. When it has been successfully applied, you can check that, for example, the kubelet is running. Be aware that with RKE the necessary Kubernetes services themselves run as containers, so run:
```shell
sshuser@node001:~$ docker ps | grep kubelet
06721abf366a   rancher/hyperkube:v1.19.3-rancher1   "/opt/rke-tools/entr…"   23 minutes ago   Up 23 minutes   kubelet
...
```
You will have two files available:

- `cluster.rkestate`: the Kubernetes Cluster State file; it contains credentials for full access to the cluster
- `kube_config_cluster.yml`: the Kubeconfig file for the cluster; it also contains credentials for full access to the cluster
Accessing the cluster with kubectl
In order to interact with your cluster, you need kubectl, the command-line tool that allows you to run commands against your clusters. After installing it, you also need a proper kubeconfig file, which contains the necessary information to interact with your cluster.
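As a sketch, installing kubectl on Linux comes down to downloading the release binary and putting it on your PATH; the version below matches the cluster's v1.19.3, adjust it for yours:

```shell
# Download the kubectl binary matching the cluster version
VERSION="v1.19.3"
curl -LO "https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl"

# Make it executable and put it on the PATH
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Verify the client works
kubectl version --client
```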
How to configure
After installing Kubernetes you can check `/etc/kubernetes/ssl` on the nodes, where you will find some certificates and kubeconfig files:
```shell
sshuser@node001:/etc/kubernetes/ssl$ ls
kube-ca.pem                      kubecfg-kube-node.yaml           kubecfg-kube-proxy.yaml
kube-etcd-192-168-100-1.pem      kube-etcd-192-168-100-1-key.pem  kube-etcd-192-168-100-2.pem
kube-etcd-192-168-100-2-key.pem  kube-etcd-192-168-100-3.pem      kube-etcd-192-168-100-3-key.pem
kube-node.pem                    kube-node-key.pem                kube-proxy.pem
kube-proxy-key.pem
```
The different certificates are automatically generated and required to access the cluster. You can read more details here about what they are used for and how to use your own. You can also see there are kubeconfig files, for example `kubecfg-kube-node.yaml` - the one used by the nodes - which looks as follows:
```yaml
apiVersion: v1
kind: Config
clusters:
  - cluster:
      api-version: v1
      certificate-authority: /etc/kubernetes/ssl/kube-ca.pem
      server: "https://127.0.0.1:6443"
    name: "local"
contexts:
  - context:
      cluster: "local"
      user: "kube-node-local"
    name: "local"
current-context: "local"
users:
  - name: "kube-node-local"
    user:
      client-certificate: /etc/kubernetes/ssl/kube-node.pem
      client-key: /etc/kubernetes/ssl/kube-node-key.pem
```
- `clusters` defines the available clusters, with the api-server address and the CA certificate
- `context` a combination of a cluster and a user
- `users` a list of users that can access the cluster(s)
- `client-certificate` the client certificate to authenticate to the API server
- `client-key` the matching private key to the certificate
So if you want to access your cluster from your developer machine, you need such a kubeconfig. Usually you would have it at `$HOME/.kube/config`, but you can also provide it via `kubectl --kubeconfig $pathToYourFile`, or as a list of files that are merged when defined in the environment variable `KUBECONFIG`. Now that you know what a kubeconfig looks like, you might create your own. You could use the above one as an example and download it, but you might need to:
- change the `server` to actually point to your kube-apiserver, so add a valid IP or URI
- copy the certificates to your local machine and adjust the paths
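Once adjusted, there are several ways to tell kubectl which kubeconfig to use; the file names below are examples:

```shell
# Per invocation, via flag
kubectl --kubeconfig "$HOME/kube_config_cluster.yml" get nodes

# Per shell session: KUBECONFIG may list several files,
# which kubectl merges into one view
export KUBECONFIG="$HOME/.kube/config:$HOME/kube_config_cluster.yml"
kubectl config get-contexts
```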
You can also embed the certificate files as Base64-encoded data - in that case you need to add the suffix `-data` to the keys, e.g. `client-key-data`. Luckily, as seen above, we already have a config file provided, so you can copy or move this as `cp kube_config_cluster.yml $HOME/.kube/config`. Now, let's have a quick look at this config file:
```yaml
apiVersion: v1
kind: Config
clusters:
  - cluster:
      api-version: v1
      certificate-authority-data: LS0tLS1...LS0K
      server: "https://x.x.x.x:6443"
    name: "cluster"
contexts:
  - context:
      cluster: "cluster"
      user: "kube-admin-cluster"
    name: "cluster"
current-context: "cluster"
users:
  - name: "kube-admin-cluster"
    user:
      client-certificate-data: LS0tLS...LQo=
      client-key-data: LS0tLS...Cg==
```
As you can see, the names of the entities are slightly different - you can actually choose whatever you like, as long as they match - and `server` now points to a fixed IP. In addition, you can see that the certificates are embedded. The structure of the kubeconfig file allows you to have multiple clusters configured.
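If you ever want to build such an embedded variant yourself, the `*-data` values are just the Base64-encoded contents of the certificate files (a single line, no wrapping); the path below is an example:

```shell
# Encode a certificate for use as certificate-authority-data /
# client-certificate-data / client-key-data in a kubeconfig
base64 -w0 /etc/kubernetes/ssl/kube-node.pem
```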
Cool, now that we have a kubeconfig file, make yourself familiar with kubectl, and then let's access our cluster by getting the nodes:
```shell
kubectl get nodes
NAME      STATUS   ROLES                      AGE   VERSION
node001   Ready    controlplane,etcd,worker   1d    v1.19.3
node002   Ready    etcd,worker                1d    v1.19.3
node003   Ready    etcd,worker                1d    v1.19.3
```
Now that access to the cluster works, I want to cover the topic of authentication in more detail in an upcoming post. We can also now start doing some actual stuff with the cluster, like creating a deployment. I plan to have dedicated posts about some of the entities (deployment, statefulset) I introduced in my last post [Kubernetes - a brief introduction to container orchestration](https://wyssmann.com/blog/2020/11/kubernetes-a-brief-introduction-to-container-orchestration/). Stay tuned, there is a lot more to come.