Deploy RHACS with Rancher fleet

Posted November 10, 2021 by Adrian Wyssmann ‐ 5 min read

In my last post I talked about Rancher Fleet; as a next step, we will manage RHACS with Fleet.

As you know from that post, there are two main components: central-services and sensors. As mentioned there, the central components are typically installed via Helm, while the sensors are installed by downloading a sensor bundle. However, as we use Rancher Fleet to manage our cluster tooling, we want a different approach. Luckily, you can also install the RHACS sensors using Helm. The steps are pretty straightforward:

  1. Add the Helm chart repository.
  2. Install the central-services Helm chart to install the centralized components (Central and Scanner).
  3. Generate an init bundle.
  4. Install the secured-cluster-services Helm chart to install the per-cluster and per-node components (Sensor, Admission Controller, and Collector).
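
For reference, outside of Fleet the first two steps boil down to two Helm commands - a minimal sketch, assuming the upstream chart repository instead of our internal mirror:

```shell
# 1. Add the RHACS Helm chart repository (upstream URL)
helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/
helm repo update

# 2. Install the centralized components (Central and Scanner)
helm install -n stackrox --create-namespace \
  stackrox-central-services rhacs/central-services
```

With Fleet, the same charts get installed declaratively via a fleet.yaml, as shown below.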

Install central-services with Fleet

When using Helm, we have to use a fleet.yaml:

defaultNamespace: stackrox
helm:
  chart: stackrox-central-services
  repo: https://helm.svc.prd.intra/virtual-helm
  releaseName: stackrox-central-services
  version: 66.1.0
  values:
    imagePullSecrets:
      allowNone: true
    image:
      registry: docker.svc.prd.intra
    env:
      platform: null
      offlineMode: false
      proxyConfig: |
        url: http://webproxy.intra:9090
        excludes:
        - "*.intra"
        - "*.dev.intra"
        - "*.svc.dev.intra"
        - "*.sit.intra"
        - "*.svc.sit.intra"
        - "*.uat.intra"
        - "*.svc.uat.intra"
        - "*.nop.intra"
        - "*.svc.nop.intra"
        - "*.prd.intra"
        - "*.svc.prd.intra"        

    ## Additional CA certificates to trust, besides system roots
    ## If specified, this should be a map mapping file names to PEM-encoded contents.
    additionalCAs: |
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXX
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXX
      -----END CERTIFICATE-----      
    central:
      defaultTLS:
        # see ca/stackrox.svc.nop.intra.pem
        cert: |
          -----BEGIN CERTIFICATE-----
          XXXXXXXXXX
          -----END CERTIFICATE-----
          -----BEGIN CERTIFICATE-----
          XXXXXXXXXX
          -----END CERTIFICATE-----          
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          XXXXXXXXXX
          -----END RSA PRIVATE KEY-----          
      adminPassword:
        htpasswd: admin:$2y$05$xxxxxxxx

      resources:
        requests:
          memory: "4Gi"
          cpu: "1500m"
        limits:
          memory: "8Gi"
          cpu: "2000m"

      persistence:
        persistentVolumeClaim:
          claimName: stackrox-db
          storageClass: hpe-csi
          createClaim: false
          size: 40

      exposure:
        loadBalancer:
          enabled: true
          port: 443

    ## Configuration options relating to StackRox Scanner.
    scanner:
      disable: false
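
For this fleet.yaml to be picked up, a GitRepo resource must point at the folder containing it - a minimal sketch, assuming a hypothetical path stackrox/central in the same repository (the sensors' GitRepo later in this post follows the same pattern):

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: cluster-tooling-central   # hypothetical name
  namespace: fleet-default
spec:
  branch: master
  clientSecretName: gitrepo-auth-bitbucket
  helmSecretName: helm-repo
  paths:
    - stackrox/central            # hypothetical folder containing the fleet.yaml above
  repo: https://scm.intra/scm/k8s/kube.git
  targets:
    # Central runs on a single cluster, so match it explicitly instead of all clusters
    - clusterSelector:
        matchLabels:
          management.cattle.io/cluster-name: c-xxxxx
```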

Install SecuredCluster (Sensors)

Before we can configure Fleet, we have to create an init bundle within RHACS.

The secured cluster uses this bundle to authenticate with Central.
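
Besides creating it in the RHACS UI, the init bundle can also be generated with roxctl - a sketch, assuming roxctl is authenticated against Central and a hypothetical bundle name:

```shell
# Generate an init bundle and save the Helm values variant to a file
roxctl -e stackrox.svc.nop.intra:443 \
  central init-bundles generate nop-cluster \
  --output nop-cluster-init-bundle.yaml
```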

We have multiple clusters that we manage with RHACS, but we can use the same bundle for all of them. Knowing this, we take the approach of targetCustomizations:

Target customizations are used to determine how resources should be modified per target.

Targets are evaluated in order and the first one to match a cluster is used for that cluster

We have already created all clusters under Platform Configuration > Clusters, hence we have to tell Fleet to use different values for each cluster. But first we need a GitRepo which targets all clusters:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: cluster-tooling-nop-all
  namespace: fleet-default
spec:
  branch: master
  clientSecretName: gitrepo-auth-bitbucket
  helmSecretName: helm-repo
  caBundle: XXXXXXXX
  insecureSkipTLSVerify: false
  paths:
    - stackrox/sensors/nop
  repo: https://scm.intra/scm/k8s/kube.git
  targets:
    - clusterSelector: {}

Within the folder stackrox/sensors/nop defined in the GitRepo we have a fleet.yaml.

defaultNamespace: stackrox

helm:
  chart: secured-cluster-services

  repo: https://helm.svc.prd.intra/virtual-helm

  releaseName: stackrox-secured-cluster-services

  version: 66.1.0

  # Path to any values files that need to be passed to helm during install
  valuesFiles:
    - ../nop-cluster-init-bundle.yaml
  values:
    image:
      registry: docker.svc.prd.intra
    admissionControl:
      dynamic:
        disableBypass: false
        enforceOnCreates: false
        enforceOnUpdates: false
        scanInline: false
        timeout: 3
      image:
        registry: docker.svc.prd.intra
      listenOnCreates: false
      listenOnEvents: true
      listenOnUpdates: false
    centralEndpoint: stackrox.svc.nop.intra:443
    collector:
      collectionMethod: KERNEL_MODULE
      disableTaintTolerations: false
      image:
        registry: stackrox.svc.nop.intra
      slimMode: true
    helmManaged: false
    sensor:
      image:
        registry: docker.svc.prd.intra

targetCustomizations:
- name: dev
  helm:
    values:
      clusterName: dev
      confirmNewClusterName: dev
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-7qhh7
- name: dev-v2
  helm:
    values:
      clusterName: dev-v2
      confirmNewClusterName: dev-v2
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-7h6q6
- name: nop
  helm:
    values:
      clusterName: nop
      confirmNewClusterName: nop
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-5s5c8
- name: uat
  helm:
    values:
      clusterName: uat
      confirmNewClusterName: uat
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-dckkt
- name: sit
  helm:
    values:
      clusterName: sit
      confirmNewClusterName: sit
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-8svc5

The valuesFiles entry points to the init bundle we downloaded:

  valuesFiles:
    - ../nop-cluster-init-bundle.yaml

Used that way, we would have to add the nop-cluster-init-bundle.yaml to the source code repository, which is definitely a bad practice as it contains private keys. A better approach is using values files from config maps or secrets, as suggested by the documentation.

Hence we add nop-cluster-init-bundle.yaml as a secret called cluster-init-bundle to the desired namespace stackrox in all our clusters:

for context in nop uat sit nop-local playground; do \
    kubectl create secret generic cluster-init-bundle -n stackrox \
    --from-file=cluster-init-bundle.yaml=../stackrox/sensors/nop-cluster-init-bundle.yaml \
    --context $context; done
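
A quick way to verify that the secret exists everywhere (same contexts as above):

```shell
for context in nop uat sit nop-local playground; do
    kubectl get secret cluster-init-bundle -n stackrox --context "$context"
done
```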

Then we can use the following in the fleet.yaml:

  valuesFrom:
    secretKeyRef:
      name: cluster-init-bundle
      namespace: stackrox
      key: cluster-init-bundle.yaml

Unfortunately, at the time of writing this did not work in either Rancher 2.5.11 or 2.6.2. Maybe I am missing something, or it is a bug. I may update the post once the problem is solved.

I then have a targetCustomizations entry for each cluster, which defines/overrides clusterName and confirmNewClusterName:

- name: dev
  helm:
    values:
      clusterName: dev
      confirmNewClusterName: dev
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-7qhh7

Wrap-up

Using Fleet to manage RHACS works fine, but I had a hard time making it work. Particularly with the additionalCAs for the sensors - we use internally signed certificates for the central endpoint stackrox.svc.nop.intra:443. While I had to specify it for Central, I ran into issues when specifying it for the sensor: it simply failed to connect with Get "https://stackrox.svc.nop.intra:443/v1/metadata": remote error: tls: bad certificate. Somehow additionalCAs does not need to be specified for the sensor when using Helm, but if you do, it messes things up. I am not sure whether that is an issue specific to RHACS or related to Fleet. However, after sorting it out, all looks very good:

[Image: RHACS Helm-managed clusters]
The sensors for the running clusters are successfully connected to Central.
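
To make the pitfall explicit: the secured-cluster values get the CA material from the init bundle, so additionalCAs should be left out there - a sketch of the relevant part of the sensor values (same endpoint as above):

```yaml
# secured-cluster-services values: no additionalCAs here.
# The init bundle passed via valuesFiles already contains the certificates
# the sensor needs to talk to Central.
centralEndpoint: stackrox.svc.nop.intra:443
# additionalCAs: ...   # setting this broke the connection with
#                      # "remote error: tls: bad certificate"
```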