Import a Rancher apps and Kubernetes manifest into Terraform
Posted on June 24, 2022 by Adrian Wyssmann ‐ 5 min read
While we initially set up our Rancher clusters manually, we have since moved to Terraform, which simplifies the management of the clusters tremendously.
Introduction
As the Rancher clusters already exist, the first step is to import the existing resources into Terraform using the verified provider rancher2.
You basically start by importing the cluster and then go on from there, importing everything else, like projects. A bit more challenging is the import of apps, which are ultimately Helm charts. For apps you can use rancher2_app_v2. I started with the logging app, which is pretty simple:
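The initial cluster and project imports can be sketched as follows - the resource names and the placeholder IDs are assumptions, adjust them to your setup:

```shell
# Import the existing cluster into the rancher2_cluster resource
# (<CLUSTER_ID> is the ID shown by Rancher, e.g. c-xxxxx)
terraform import rancher2_cluster.cluster <CLUSTER_ID>

# Projects are imported via their ID, which has the form c-xxxxx:p-xxxxx
terraform import rancher2_project.default <CLUSTER_ID>:<PROJECT_ID>
```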
Create logging.tf with a minimal config:
resource "rancher2_app_v2" "logging" {
  cluster_id = rancher2_cluster.cluster.id
  name       = "rancher-logging"
  namespace  = "cattle-logging-system"
  repo_name  = "rancher-charts"
  chart_name = "rancher-logging"
  #values    = file("values.logging.yaml")
}
Then you import the resource as follows:
terraform import rancher2_app_v2.logging CLUSTERID.cattle-logging-system/rancher-logging
Now you run
terraform plan
which gives you the differences - remember we only have the minimal config. The plan will mainly show the values passed to the Helm chart, so take this info and add it to a file called values.logging.yaml:
additionalLoggingSources:
  aks:
    enabled: false
  eks:
    enabled: false
  gke:
    enabled: false
  rke:
    enabled: true
fluentbit:
  inputTail:
    Buffer_Chunk_Size: 1MB
    Buffer_Max_Size: 5MB
fluentd:
  resources:
    limits:
      cpu: "1"
      memory: 1Gi
    requests:
      cpu: "0.5"
      memory: 500Mi
...
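Instead of copying the values out of the plan output, you can also read them straight from the deployed release - a sketch, assuming you have Helm access to the cluster:

```shell
# Dump the user-supplied values of the deployed chart into the values file
helm get values rancher-logging -n cattle-logging-system -o yaml > values.logging.yaml
```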
Add the file to the resource:
resource "rancher2_app_v2" "logging" {
  ....
  values = file("values.logging.yaml")
}
ClusterFlows and ClusterOutputs
While we now have the app installed, we still need ClusterFlows and ClusterOutputs. They can be configured in the Rancher UI, or you can install them as Kubernetes manifests. Looking at the Terraform Registry, the official kubernetes provider offers the kubernetes_manifest resource. You can use your local kubeconfig, so I configured the provider as follows:
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.11.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}
Once you have configured the provider, you have to create the manifest in HCL. You can use tfk8s to convert existing YAML to HCL. The ClusterOutput in my case looks like this:
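The conversion with tfk8s can be sketched like this - the file names are hypothetical:

```shell
# Convert a YAML manifest into a kubernetes_manifest HCL resource
tfk8s -f cluster-output.yaml -o cluster-output.tf
```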
resource "kubernetes_manifest" "cluster_output_default" {
  manifest = {
    "apiVersion" = "logging.banzaicloud.io/v1beta1"
    "kind"       = "ClusterOutput"
    "metadata" = {
      "name"      = "cluster-output-default"
      "namespace" = "cattle-logging-system"
    }
    "spec" = {
      "splunkHec" = {
        "hec_host" = "splunk.intra"
        "hec_port" = "443"
        "hec_token" = {
          "valueFrom" = {
            "secretKeyRef" = {
              "name" = "default-token"
              "key"  = "token"
            }
          }
        }
        "insecure_ssl" = "true"
        "protocol"     = "https"
        "index"        = "k8s_playg"
        "source"       = "http:k8s_playg"
        "sourcetype"   = "k8s:playg"
        "buffer" = {
          "flush_interval"           = "60s"
          "flush_mode"               = "interval"
          "flush_thread_count"       = "4"
          "queued_chunks_limit_size" = "300"
          "type"                     = "file"
          "tags"                     = "[]" # https://github.com/banzaicloud/logging-operator/issues/717
        }
      }
    }
  }
}
You can import using apiVersion=<APIVERSION>,kind=<KIND>,namespace=<NAMESPACE>,name=<NAME>
as follows:
terraform import kubernetes_manifest.cluster_output_default "apiVersion=logging.banzaicloud.io/v1beta1,kind=ClusterOutput,namespace=cattle-logging-system,name=cluster-output-default"
kubernetes_manifest.cluster_output_default: Importing from ID "apiVersion=logging.banzaicloud.io/v1beta1,kind=ClusterOutput,namespace=cattle-logging-system,name=cluster-output-default"...
╷
│ Error: Failed to get resource { Object:map[apiVersion:logging.banzaicloud.io/v1beta1 kind:ClusterOutput metadata:map[name:cluster-output-default namespace:cattle-logging-system]]} from API
│
│ clusteroutputs.logging.banzaicloud.io "cluster-output-default" not found
╵
Weird, as the object is there:
kubectl get --show-kind --context playground clusteroutputs --all-namespaces
NAMESPACE NAME ACTIVE PROBLEMS
cattle-logging-system clusteroutput.logging.banzaicloud.io/cluster_output_default true
Well, if you have multiple contexts in your KUBECONFIG, the provider uses the currently selected one, which may point to a different cluster than the one you want to manage. So you have to specify config_context. Alternatively you can use host and token, which may be even better than relying on a KUBECONFIG file:
provider "kubernetes" {
  host  = "${var.RANCHER_API_URL}" # https://rancher.intra
  token = "${var.RANCHER_TOKEN}:${var.RANCHER_SECRET}"
}
This however still does not work:
...
╷
│ Error: Failed to get namespacing requirement from RESTMapper
│
│ no matches for kind "ClusterOutput" in version "logging.banzaicloud.io/v1beta1"
╵
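One way to see whether the endpoint the provider talks to actually knows the CRD is to ask the cluster directly for its API resources - a sketch, reusing the playground context from above:

```shell
# The logging.banzaicloud.io CRDs should show up if we are
# talking to the right API server
kubectl --context playground api-resources | grep logging.banzaicloud.io
```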
It turns out that the host was not pointing to the cluster API. Hence the host has to be corrected to point to <RANCHER_API_URL>/k8s/clusters/<CLUSTERID>:
provider "kubernetes" {
  host  = "${var.RANCHER_API_URL}/k8s/clusters/<CLUSTERID>" # TODO: not hardcode!?
  token = "${var.RANCHER_TOKEN}:${var.RANCHER_SECRET}"
}
After that, the import works fine.
HCL vs YAML
If you don’t want to convert YAML to HCL, you may have a look at kubectl_manifest, which allows you to use YAML as follows:
resource "kubectl_manifest" "cluster_output_default" {
  yaml_body = <<YAML
apiVersion: logging.banzaicloud.io/v1beta1
...
You have to take care, as the import is slightly different and uses the API format, i.e. <APIVERSION>//<KIND>//<NAME>//<NAMESPACE>:
terraform import kubectl_manifest.cluster_output_default logging.banzaicloud.io/v1beta1//ClusterOutput//cluster_output_default//cattle-logging-system
Below is my full config, which features both kubernetes_manifest and kubectl_manifest:
resource "rancher2_app_v2" "logging" {
cluster_id = rancher2_cluster.cluster.id
name = "rancher-logging"
namespace = "cattle-logging-system"
repo_name = "rancher-charts"
chart_name = "rancher-logging"
values = file("values.logging.yaml")
}
# default configuration
# token and sourcetype have to match
# terraform import kubernetes_manifest.cluster_output_default "apiVersion=logging.banzaicloud.io/v1beta1,kind=ClusterOutput,namespace=cattle-logging-system,name=cluster-output-default"
resource "kubernetes_manifest" "cluster_output_default" {
  manifest = {
    "apiVersion" = "logging.banzaicloud.io/v1beta1"
    "kind"       = "ClusterOutput"
    "metadata" = {
      "name"      = "cluster-output-default"
      "namespace" = "cattle-logging-system"
    }
    "spec" = {
      "splunkHec" = {
        "hec_host" = "splunk.intra"
        "hec_port" = "443"
        "hec_token" = {
          "valueFrom" = {
            "secretKeyRef" = {
              "name" = "default-token"
              "key"  = "token"
            }
          }
        }
        "insecure_ssl" = "true"
        "protocol"     = "https"
        "index"        = "k8s_playg"
        "source"       = "http:k8s_playg"
        "sourcetype"   = "k8s:playg"
        "buffer" = {
          "flush_interval"           = "60s"
          "flush_mode"               = "interval"
          "flush_thread_count"       = "4"
          "queued_chunks_limit_size" = "300"
          "type"                     = "file"
          "tags"                     = "[]" # https://github.com/banzaicloud/logging-operator/issues/717
        }
      }
    }
  }
}
# Forward all logs to default
# terraform import kubernetes_manifest.cluster_flow_all "apiVersion=logging.banzaicloud.io/v1beta1,kind=ClusterFlow,namespace=cattle-logging-system,name=all-logs"
# terraform import kubectl_manifest.cluster_flow_all logging.banzaicloud.io/v1beta1//ClusterFlow//all-logs//cattle-logging-system
resource "kubectl_manifest" "cluster_flow_all" {
  yaml_body = <<YAML
apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: "all-logs"
  namespace: "cattle-logging-system"
spec:
  globalOutputRefs:
    - "cluster-output-default"
  match:
    - select: {}
  filters:
    - parser:
        parse:
          type: multi_format
          patterns:
            - format: json
            - format: none
        key_name: log
        remove_key_name_field: true
        reserve_data: true
        reserve_time: true
YAML
}
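Once everything is imported, a plan against the current state is a quick way to verify that nothing drifted - a sketch:

```shell
# Exit code 0 = state matches the cluster, 2 = pending changes, 1 = error
terraform plan -detailed-exitcode
```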
Conclusion
Importing resources is not always that easy: you have to be familiar with the specifics of the provider you are using, as well as with the syntax to identify the actual object. Once you get that right, you realize how powerful Terraform is with all the providers available.