Deploy RHACS with Rancher fleet
Posted on November 10, 2021 by Adrian Wyssmann ‐ 4 min read
As you know from my post, RHACS consists of two main components: central-services and Sensors. As mentioned in that post, the central components are usually installed via Helm, while the sensors are installed by downloading the sensor bundle. However, as we use Rancher Fleet to manage the cluster tooling, we want a different approach. Luckily, you can also install the RHACS sensors using Helm. The steps are pretty straightforward:
- Add the Helm chart repository (see the sketch after this list).
- Install the central-services Helm chart to install the centralized components (Central and Scanner).
- Generate an init bundle.
- Install the secured-cluster-services Helm chart to install the per-cluster and per-node components (Sensor, Admission Controller, and Collector).
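A minimal sketch of the first step, assuming helm is available locally. We mirror the RHACS charts into our internal repository (https://helm.intra/virtual-helm), which is the URL used in the fleet.yaml files below; the URL shown here is the publicly documented upstream repository:

# Add the chart repository and check that both charts are visible
helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts
helm repo update
helm search repo rhacs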
Install central-services with Fleet

When using Helm with Fleet, we have to use a fleet.yaml:
defaultNamespace: stackrox
helm:
  chart: stackrox-central-services
  repo: https://helm.intra/virtual-helm
  releaseName: stackrox-central-services
  version: 66.1.0
  values:
    imagePullSecrets:
      allowNone: true
    image:
      registry: docker.intra
    env:
      platform: null
      offlineMode: false
      proxyConfig: |
        url: http://myproxy.intra:8888
        excludes:
          - "*.cluster1.intra"
          - "*.cluster4.intra"
          - "*.cluster3.intra"
          - "*.cluster2.intra"
          - "*.intra"
          - "*.prd.intra"
    ## Additional CA certificates to trust, besides system roots
    ## If specified, this should be a map mapping file names to PEM-encoded contents.
    additionalCAs: |
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXX
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXX
      -----END CERTIFICATE-----
    central:
      defaultTLS:
        # see ca/stackrox.intra.pem
        cert: |
          -----BEGIN CERTIFICATE-----
          XXXXXXXXXX
          -----END CERTIFICATE-----
          -----BEGIN CERTIFICATE-----
          XXXXXXXXXX
          -----END CERTIFICATE-----
        key: |
          -----BEGIN RSA PRIVATE KEY-----
          XXXXXXXXXX
          -----END RSA PRIVATE KEY-----
      adminPassword:
        htpasswd: admin:$2y$05$xxxxxxxx
      resources:
        requests:
          memory: "4Gi"
          cpu: "1500m"
        limits:
          memory: "8Gi"
          cpu: "2000m"
      persistence:
        persistentVolumeClaim:
          claimName: stackrox-db
          storageClass: hpe-csi
          createClaim: false
          size: 40
      exposure:
        loadBalancer:
          enabled: true
          port: 443
    ## Configuration options relating to StackRox Scanner.
    scanner:
      disable: false
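The adminPassword.htpasswd value above is a bcrypt htpasswd entry for the admin user. A minimal sketch of how such an entry can be generated with Apache's htpasswd utility, assuming it is installed (the password below is a placeholder):

# -n print to stdout, -b take the password from the command line,
# -B use bcrypt, -C 5 set the bcrypt cost (matches the $2y$05$ prefix above)
htpasswd -nbBC 5 admin 'my-admin-password'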
Install SecuredCluster (Sensors)
Before we can configure Fleet, we have to create an init bundle within RHACS. The secured cluster uses this bundle to authenticate with Central.
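The bundle can be created in the RHACS portal or on the command line. A minimal sketch using roxctl, assuming roxctl is installed and authenticated against Central (the bundle name nop-cluster-init-bundle simply matches the values file referenced further below):

# Generate an init bundle and write it as a Helm values file
# that the secured-cluster-services chart can consume
roxctl -e stackrox.intra:443 central init-bundles generate nop-cluster-init-bundle \
  --output nop-cluster-init-bundle.yaml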
We have multiple clusters that we manage with RHACS, but we can use the same bundle for all of them. Knowing this, we take the approach of targetCustomizations:
Target customizations are used to determine how resources should be modified per target. Targets are evaluated in order and the first one to match a cluster is used for that cluster.
We have already created all clusters under Platform Configuration > Clusters, hence we have to tell Fleet to use different values for each cluster. But first we need a GitRepo which targets all clusters:
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: cluster-tooling-nop-all
  namespace: fleet-default
spec:
  branch: master
  clientSecretName: gitrepo-auth-bitbucket
  helmSecretName: helm-repo
  caBundle: XXXXXXXX
  insecureSkipTLSVerify: false
  paths:
    - stackrox/sensors/nop
  repo: https://git.intra/k8s/config.git
  targets:
    - clusterSelector: {}
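A minimal sketch of applying the GitRepo and checking that Fleet picks it up, assuming your kubeconfig points at the Rancher management cluster (the file name gitrepo.yaml is just an assumption for this example):

# Apply the GitRepo and check that Fleet creates bundles from it
kubectl apply -f gitrepo.yaml
kubectl -n fleet-default get gitrepo cluster-tooling-nop-all
kubectl -n fleet-default get bundles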
Within the folder stackrox/sensors/nop defined in the GitRepo, we have a fleet.yaml:
defaultNamespace: stackrox
helm:
  chart: secured-cluster-services
  repo: https://helm.intra/virtual-helm
  releaseName: stackrox-secured-cluster-services
  version: 66.1.0
  # Path to any values files that need to be passed to helm during install
  valuesFiles:
    - ../nop-cluster-init-bundle.yaml
  values:
    image:
      registry: docker.intra
    admissionControl:
      dynamic:
        disableBypass: false
        enforceOnCreates: false
        enforceOnUpdates: false
        scanInline: false
        timeout: 3
      image:
        registry: docker.intra
      listenOnCreates: false
      listenOnEvents: true
      listenOnUpdates: false
    centralEndpoint: stackrox.intra:443
    collector:
      collectionMethod: KERNEL_MODULE
      disableTaintTolerations: false
      image:
        registry: stackrox.intra
      slimMode: true
    helmManaged: false
    sensor:
      image:
        registry: docker.intra
targetCustomizations:
- name: cluster1
  helm:
    values:
      clusterName: cluster1
      confirmNewClusterName: cluster1
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-7qhh7
- name: cluster2
  helm:
    values:
      clusterName: cluster2
      confirmNewClusterName: cluster2
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-5s5c8
- name: cluster3
  helm:
    values:
      clusterName: cluster3
      confirmNewClusterName: cluster3
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-dckkt
- name: cluster4
  helm:
    values:
      clusterName: cluster4
      confirmNewClusterName: cluster4
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-8svc5
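The matchLabels above use Rancher's internal cluster IDs (c-xxxxx). In our setup the Fleet cluster objects (clusters.fleet.cattle.io) in the fleet-default namespace carry the management.cattle.io/cluster-name label, so a quick way to look up the IDs is to list them on the management cluster (a sketch, assuming access to that namespace):

# List the downstream clusters Fleet knows about, including their labels
kubectl -n fleet-default get clusters.fleet.cattle.io --show-labels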
The valuesFiles points to the init bundle we downloaded:

valuesFiles:
  - ../nop-cluster-init-bundle.yaml
Using it that way, we have to add the nop-cluster-init-bundle.yaml to the source code repo, which is definitely a bad practice as it contains private keys. So a better approach is using values files from config maps or secrets, as suggested by the documentation.
Hence we add nop-cluster-init-bundle.yaml as a secret called cluster-init-bundle to the desired namespace stackrox in all our clusters:
for context in cluster1 cluster2 cluster3 cluster4; do
  kubectl create secret generic cluster-init-bundle -n stackrox \
    --from-file=cluster-init-bundle.yaml=../stackrox/sensors/nop-cluster-init-bundle.yaml \
    --context $context
done
Then we can use the following under helm in the fleet.yaml:

valuesFrom:
  - secretKeyRef:
      name: cluster-init-bundle
      namespace: stackrox
      key: cluster-init-bundle.yaml
Unfortunately, at the time of writing this post, this did not work in either Rancher 2.5.11 or 2.6.2. Maybe I am missing something or it is a bug. I may update the post once the problem is solved.
I then have a targetCustomizations entry for each cluster, which defines/overrides clusterName and confirmNewClusterName:
- name: cluster1
  helm:
    values:
      clusterName: cluster1
      confirmNewClusterName: cluster1
  clusterSelector:
    matchLabels:
      management.cattle.io/cluster-name: c-7qhh7
Wrap-up
Using Fleet to manage RHACS works fine, but I had a hard time making it work. Particularly with the additionalCAs for the sensors - we use internally signed certificates for the central endpoint stackrox.intra:443. While I had to specify it for Central, I ran into issues when specifying it for the sensor: it simply failed to connect with Get "https://stackrox.intra:443/v1/metadata": remote error: tls: bad certificate. Somehow additionalCAs does not need to be specified for the sensor when using Helm, and setting it messes things up. Not sure if that is an issue specific to RHACS or related to Fleet. However, after sorting it out, all looks very good.
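If you run into similar TLS errors, a quick generic check is to look at the certificate chain Central actually serves on the exposed endpoint (a sketch, assuming openssl is available on a machine that can reach Central; this is not part of the RHACS or Fleet tooling):

# Show the certificate chain presented by Central
openssl s_client -connect stackrox.intra:443 -showcerts </dev/null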