Ship Rancher API Audit Logs from AKS clusters

Posted in kubernetes on October 24, 2024 by Adrian Wyssmann ‐ 3 min read

As a reader of my blog you know we are using the Rancher logging app. Since we migrated the Rancher (upstream) cluster from RKE to AKS, we can no longer use the built-in log collection and shipping for audit logs.

The guide Enabling the API Audit Log to Record System Events explains how to enable audit logs, but as of today its information is a bit misleading. We enabled audit logging in the Helm chart as follows:

auditLog:
  destination: hostPath
  hostPath: /var/log/rancher/audit/
  level: 3
  maxAge: 1
  maxBackup: 1
  maxSize: 100
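These values can be applied when installing or upgrading Rancher via Helm. A sketch of the upgrade, assuming a standard setup — the release name, namespace, and chart repository here are assumptions, adjust them to your environment:

```shell
# Apply the audit log settings to an existing Rancher release
# (release name, namespace and chart repo are assumptions)
helm upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --reuse-values \
  --set auditLog.destination=hostPath \
  --set auditLog.hostPath=/var/log/rancher/audit/ \
  --set auditLog.level=3
```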

According to the documentation we should see a sidecar, which is not the case:

Enabling the API Audit Log with the Helm chart install will create a rancher-audit-log sidecar container in the Rancher pod

Important

This is not about the kube-api audit logs but the ones generated by Rancher.

With a managed cluster there is no visibility into the control plane, hence no access to the kube-api audit logs.

You can actually achieve this by using a HostTailer, which tails logs from the node’s host filesystem. As we have hostPath: /var/log/rancher/audit/, we only need the name of the log file, which is rancher-api-audit.log. Hence our HostTailer looks like this:

apiVersion: logging-extensions.banzaicloud.io/v1alpha1
kind: HostTailer
metadata:
  name: rancher-audit-logs
  namespace: cattle-logging-system
spec:
  fileTailers:
    - disabled: false
      name: audit-logfiles
      path: /var/log/rancher/audit/rancher-api-audit.log

Then we only need a ClusterFlow which picks up the logs from the HostTailer:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: cluster-flow-auditlogs
  namespace: cattle-logging-system
spec:
  filters:
    - parser:
        key_name: message
        parse:
          patterns:
            - format: json
            - format: none
          type: multi_format
        remove_key_name_field: true
        reserve_data: true
        reserve_time: true
  globalOutputRefs:
    - cluster-output-splunk-auditlogs
  match:
    - select:
        labels:
          app.kubernetes.io/instance: rancher-audit-logs-host-tailer
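The multi_format parser tries each pattern in order: JSON lines are expanded into structured fields, and anything else passes through as-is thanks to the none format. A rough Python sketch of that fallback logic — not the actual fluentd implementation, just the idea:

```python
import json

def parse_multi_format(message: str) -> dict:
    """Mimic a multi_format parser with patterns [json, none]:
    try JSON first, fall back to keeping the raw line."""
    try:
        record = json.loads(message)
        if isinstance(record, dict):
            return record  # a JSON audit entry becomes structured fields
    except json.JSONDecodeError:
        pass
    return {"message": message}  # 'none' format: raw line passes through

# A JSON audit line is expanded into fields,
# while a plain text line survives untouched.
print(parse_multi_format('{"auditID":"abc","requestURI":"/v3/clusters"}'))
print(parse_multi_format('plain text line'))
```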

Finally we need a ClusterOutput, which takes care of the shipping:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterOutput
metadata:
  name: "cluster-output-splunk-auditlogs"
  namespace: "cattle-logging-system"
spec:
  splunkHec:
    hec_host: ${var.hec_audit.host}
    hec_port: ${var.hec_audit.port}
    hec_token:
      valueFrom:
        secretKeyRef:
          name: splunk-token-auditlogs
          key: token
    insecure_ssl: true
    protocol: https
    index: ${var.hec_audit.index}
    source: ${var.hec_audit.source}
    sourcetype: ${var.hec_audit.sourcetype}
    buffer:
      flush_interval: ${var.buffer_config_flush.flush_interval}
      flush_mode: ${var.buffer_config_flush.flush_mode}
      flush_thread_count: ${var.buffer_config_flush.flush_thread_count}
      queued_chunks_limit_size: ${var.buffer_config_flush.queued_chunks_limit_size}
      type: file
      tags: "[]" #https://github.com/banzaicloud/logging-operator/issues/717
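Under the hood, splunkHec posts events to Splunk’s HTTP Event Collector at /services/collector, with the token sent in an Authorization: Splunk &lt;token&gt; header. A minimal sketch of the event envelope per the Splunk HEC documentation — the field values here are placeholders, not our real index, source, or sourcetype:

```python
import json

def hec_payload(event, index, source, sourcetype):
    """Build one Splunk HEC event envelope, as documented for
    POST https://<hec_host>:<hec_port>/services/collector."""
    return {
        "event": event,            # the parsed audit log record
        "index": index,            # maps to the ClusterOutput's index
        "source": source,          # maps to source
        "sourcetype": sourcetype,  # maps to sourcetype
    }

# Placeholder values; the real ones come from the ClusterOutput above
payload = hec_payload({"auditID": "example"}, "audit", "rancher", "_json")
print(json.dumps(payload))
```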

That’s it — now we can also see the API audit logs in our external logging system. However, there is one more thing: you might already forward the other cluster logs, but you may not want to send the audit logs to the same place. With the current setup the audit logs are also picked up by the other ClusterFlow. We added the following exclude to the main ClusterFlow in order to filter out the audit logs:

- exclude:
    labels:
      app.kubernetes.io/name: "host-tailer"

Here is the complete ClusterFlow:

apiVersion: logging.banzaicloud.io/v1beta1
kind: ClusterFlow
metadata:
  name: "cluster-all-logs"
  namespace: "cattle-logging-system"
spec:
  globalOutputRefs:
    - "cluster-output-all-logs"
  match:
    - exclude:
        labels:
          app.kubernetes.io/name: "host-tailer"
    - select: {}
  filters:
    - parser:
        key_name: message
        parse:
          patterns:
            - format: json
            - format: none
          type: multi_format
        remove_key_name_field: true