Air Gap Installation

Single Node Installation

Install K3s

Make sure /usr/local/bin is in your PATH (export PATH=$PATH:/usr/local/bin). All commands must be executed as the root user.

The commands have been tested on Ubuntu Server 20.04 LTS, SUSE Linux Enterprise Server 15 SP4 and RHEL 8.6.

For RHEL, K3s requires the k3s-selinux package (repo rancher-k3s-common-stable) and its dependencies container-selinux (repo rhel-8-appstream-rhui-rpms) and policycoreutils-python-utils (repo rhel-8-baseos-rhui-rpms). In addition, firewalld, nm-cloud-setup.service, and nm-cloud-setup.timer must be disabled and the server restarted before the installation; see the K3s documentation for more information.

The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:

  1. Extract the downloaded file: tar -xf gv-platform-$VERSION.tar

  2. Prepare K3s for air-gap installation:

# mkdir -p /var/lib/rancher/k3s/agent/images/
# gunzip -c assets/k3s-airgap-images-amd64.tar.gz > /var/lib/rancher/k3s/agent/images/airgap-images.tar
# cp assets/k3s /usr/local/bin && chmod +x /usr/local/bin/k3s
# tar -xzf assets/helm-v3.8.2-linux-amd64.tar.gz
# cp linux-amd64/helm /usr/local/bin
  3. Install K3s:

# cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true SKIP_PRECHECK=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=local-01
  4. Wait about 30 seconds, then check that K3s is running with the commands kubectl get pods -A and systemctl status k3s.service. If you prefer to script the wait, see the sketch below.
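
A minimal sketch for scripting that wait, assuming kubectl uses the default K3s kubeconfig (the 300s timeout is an arbitrary example value):

# kubectl wait --for=condition=Ready node --all --timeout=300s   # timeout is an example choice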

Import Docker images

The steps below will manually deploy the necessary images to the cluster.

  1. Import Docker images locally:

# mkdir /tmp/import
# for f in images/*.gz; do IMG=$(basename "${f}" .gz); gunzip -c "${f}" > /tmp/import/"${IMG}"; done
# for f in /tmp/import/*.tar; do ctr -n=k8s.io images import "${f}"; done
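
To confirm the import, you can list the image references now known to containerd; every image shipped in the bundle should appear:

# ctr -n=k8s.io images ls -q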

Install Helm charts

The following steps guide you through the installation of the dependencies required by Focus and Synergy.

Replace $VERSION with the version present in the downloaded bundle. To check all the charts that have been downloaded, run ls charts.

  1. Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.

# helm upgrade --install gv-essentials charts/gv-essentials-$VERSION.tgz --wait \
--timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
--set backup.hour=1 \
--set eck-operator.enabled=true \
--set updateclusterid.enabled=false \
--set eck-operator.settings.cpu=4 \
--set eck-operator.settings.memory=20 \
--set eck-operator.settings.storage=160
  2. Install Monitoring CRD:

# helm upgrade --install rancher-monitoring-crd charts/rancher-monitoring-crd-$VERSION.tgz --wait \
--kubeconfig /etc/rancher/k3s/k3s.yaml \
--namespace=cattle-monitoring-system \
--create-namespace
  3. Install Monitoring:

# helm upgrade --install rancher-monitoring charts/rancher-monitoring-$VERSION.tgz --wait \
--kubeconfig /etc/rancher/k3s/k3s.yaml \
--namespace=cattle-monitoring-system \
--set k3sServer.enabled=true \
--set k3sControllerManager.enabled=true \
--set k3sScheduler.enabled=true \
--set k3sProxy.enabled=true \
--set prometheus.retention=5
  4. Check that all pods are Running with the command kubectl get pods -A.
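
As a convenience, the following sketch filters out healthy pods, so an empty result (apart from the header) means everything is Running or Completed:

# kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded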

Install Focus/Synergy Helm Chart

Replace the following variables:

  • $VERSION with the version that is present in the bundle that has been downloaded

  • $RESELLER with the reseller code (either getvisibility or forcepoint)

  • $PRODUCT with the product being installed (synergy or focus or enterprise)

# helm upgrade --install gv-platform charts/gv-platform-$VERSION.tgz --wait \
--timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
--set-string clusterLabels.environment=prod \
--set-string clusterLabels.cluster_reseller=$RESELLER \
--set-string clusterLabels.cluster_name=mycluster \
--set-string clusterLabels.product=$PRODUCT
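
To confirm the release deployed cleanly, you can query its status (release name as used in the command above):

# helm status gv-platform --kubeconfig /etc/rancher/k3s/k3s.yaml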

If you experience a 404 error when accessing Keycloak or the UI and are running K3s 1.26 (the default version), ensure that the Traefik patch is applied:

# kubectl patch clusterrole traefik-kube-system -n kube-system --type='json' -p='[{"op": "add", "path": "/rules/-1/apiGroups/-", "value": "traefik.io"}]'
# kubectl apply -f assets/traefik-patch.yaml
# kubectl rollout restart deployment traefik -n kube-system
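
You can wait for the Traefik rollout to complete before retrying Keycloak or the UI:

# kubectl rollout status deployment traefik -n kube-system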

Install custom artifact bundles

Models and other artifacts, like custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. These bundles are Docker images that contain the artifacts to be deployed alongside scripts to deploy them. To create a new bundle or modify an existing one, follow this guide first: https://getvisibility.atlassian.net/wiki/spaces/GS/pages/65372391/Model+deployment+guide#1.-Create-a-new-model-bundle-or-modify-an-existing-one (note that this links to an internal Confluence space). The list of all the available bundles is inside the bundles/ directory of the models-ci project on GitHub.

After the model bundle is published (for example, images.master.k3s.getvisibility.com/models:company-1.0.1), you'll have to generate a public link to this image by running the k3s-air-gap Publish ML models GitHub CI task. The task will ask you for the Docker image URL.

We still use the images.master.k3s.getvisibility.com/models repo because the bundles were initially only used to deploy custom models.

Once the task is complete, you'll get a public URL for downloading the artifact in the task summary. After that, execute the following commands.

Replace the following variables:

  • $URL with the URL to the model bundle provided by the task

  • $BUNDLE with the name of the artifact, in this case company-1.0.1

mkdir custom
wget -O custom/$BUNDLE.tar.gz $URL
gunzip custom/$BUNDLE.tar.gz
ctr -n=k8s.io images import custom/$BUNDLE.tar
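
To verify that the bundle image was imported into containerd, search for it by name (company-1.0.1 being the example bundle used in this guide):

ctr -n=k8s.io images ls | grep "$BUNDLE"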

Now you'll need to execute the artifact deployment job. This job will unpack the artifacts from the Docker image into a MinIO bucket inside the on-premise cluster and restart any services that use them.

Replace the following variables:

  • $GV_DEPLOYER_VERSION with the version of the model deployer available under charts/

  • $BUNDLE_VERSION with the version of the artifact, in this case company-1.0.1

 helm upgrade \
 --install gv-model-deployer charts/gv-model-deployer-$GV_DEPLOYER_VERSION.tgz \
 --wait --timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
 --set models.version="$BUNDLE_VERSION"

You should be able to verify that everything went correctly by locating the ml-model job that was launched. The logs should look like this:

root@ip-172-31-9-140:~# kubectl logs -f ml-model-0jvaycku9prx-84nbf
Uploading models
Added `myminio` successfully.
`/models/AIP-1.0.0.zip` -> `myminio/models-data/AIP-1.0.0.zip`
`/models/Commercial-1.0.0.zip` -> `myminio/models-data/Commercial-1.0.0.zip`
`/models/Default-1.0.0.zip` -> `myminio/models-data/Default-1.0.0.zip`
`/models/classifier-6.1.2.zip` -> `myminio/models-data/classifier-6.1.2.zip`
`/models/lm-full-en-2.1.2.zip` -> `myminio/models-data/lm-full-en-2.1.2.zip`
`/models/sec-mapped-1.0.0.zip` -> `myminio/models-data/sec-mapped-1.0.0.zip`
Total: 0 B, Transferred: 297.38 MiB, Speed: 684.36 MiB/s
Restart classifier
deployment.apps/classifier-focus restarted
root@ip-172-31-9-140:~# 

In addition, you can inspect the services that consume these artifacts to check that they have been deployed correctly. For example, for the models you can open a shell inside the classifier containers and check the /models directory, or check the models-data bucket inside MinIO. Both should contain the expected models.
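
For example, a minimal spot check against the classifier (deployment name taken from the job log above; adjust it to your product variant):

# deployment name from the ml-model job log; adjust for your product
kubectl exec deploy/classifier-focus -- ls /models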


Multiple Node Installation (High Availability)

Prerequisites

Firewall Rules for Internal Communication

We recommend running the K3s nodes in a 10 Gb, low-latency private network for maximum security and performance.

K3s needs the following ports to be accessible (Inbound and Outbound) by all other nodes running in the same cluster:

Protocol   Port        Description
TCP        6443        Kubernetes API Server
UDP        8472        Required for Flannel VXLAN
TCP        2379-2380   Embedded etcd
TCP        10250       metrics-server for HPA
TCP        9796        Prometheus node exporter
TCP        80          Private Docker Registry

The ports above must not be publicly exposed, as that would open your cluster up to anyone. Make sure to always run your nodes behind a firewall/security group/private network that blocks external access to the ports listed above.
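
As an illustration, with ufw on Ubuntu you could allow these ports only from the private subnet the nodes share (the 10.0.0.0/24 CIDR is an assumption; substitute your own):

# 10.0.0.0/24 is an example subnet; replace with your private network CIDR
ufw allow proto tcp from 10.0.0.0/24 to any port 6443
ufw allow proto udp from 10.0.0.0/24 to any port 8472
ufw allow proto tcp from 10.0.0.0/24 to any port 2379:2380
ufw allow proto tcp from 10.0.0.0/24 to any port 10250
ufw allow proto tcp from 10.0.0.0/24 to any port 9796
ufw allow proto tcp from 10.0.0.0/24 to any port 80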

All nodes in the cluster must have:

  1. Domain Name Service (DNS) configured

  2. Network Time Protocol (NTP) configured

  3. Fixed private IPv4 address

  4. Globally unique node name (use --node-name when installing K3s in a VM to set a static node name)

Firewall Rules for External Communication

The following port must be publicly exposed in order to allow users to access the Synergy or Focus product:

Protocol   Port   Description
TCP        443    Focus/Synergy backend

Users must not access the K3s nodes directly; instead, a load balancer should sit between the end users and all the K3s nodes (master and worker nodes):

The load balancer must operate at Layer 4 of the OSI model and listen for connections on port 443. After the load balancer receives a connection request, it selects a target from the target group (which can be any of the master or worker nodes in the cluster) and then attempts to open a TCP connection to the selected target (node) on port 443.

The load balancer must have health checks enabled which are used to monitor the health of the registered targets (nodes in the cluster) so that the load balancer can send requests to healthy nodes only.

The recommended health check configuration is (an illustrative HAProxy sketch follows the list):

  • Timeout: 10 seconds

  • Healthy threshold: 3 consecutive health check successes

  • Unhealthy threshold: 3 consecutive health check failures

  • Interval: 30 seconds

  • Balance mode: round-robin
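
A minimal HAProxy sketch implementing this Layer 4 listener with the recommended health check settings (node IPs are placeholders; merge the fragment into your own haproxy.cfg):

cat >> /etc/haproxy/haproxy.cfg << EOF
frontend k3s_https
    bind *:443
    mode tcp
    default_backend k3s_nodes

backend k3s_nodes
    mode tcp
    balance roundrobin
    timeout check 10s
    default-server check inter 30s fall 3 rise 3
    # placeholder node IPs; replace with your node addresses
    server master1 10.0.0.11:443 check
    server master2 10.0.0.12:443 check
    server master3 10.0.0.13:443 check
    server worker1 10.0.0.14:443 check
EOF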

VM Count

At least 4 machines are required for high availability of the Getvisibility platform. The HA setup tolerates the failure of a single node.

Install K3s

Make sure /usr/local/bin is in your PATH (export PATH=$PATH:/usr/local/bin). All commands must be executed as the root user.

The commands have been tested on Ubuntu Server 20.04 LTS, SUSE Linux Enterprise Server 15 SP4 and RHEL 8.6.

For RHEL, K3s requires the k3s-selinux package (repo rancher-k3s-common-stable) and its dependencies container-selinux (repo rhel-8-appstream-rhui-rpms) and policycoreutils-python-utils (repo rhel-8-baseos-rhui-rpms). In addition, firewalld, nm-cloud-setup.service, and nm-cloud-setup.timer must be disabled and the server restarted before the installation; see the K3s documentation for more information.

The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:

  1. Create at least 4 VMs with the same specs

  2. Extract the downloaded file on all the VMs: tar -xf gv-platform-$VERSION.tar

  3. Create a local DNS entry private-docker-registry.local across all the nodes resolving to the master1 node:

cat >> /etc/hosts  << EOF
<Master1_node_VM_IP>  private-docker-registry.local
EOF
  4. Prepare the K3s air-gap installation files:

$ mkdir -p /var/lib/rancher/k3s/agent/images/
$ gunzip -c assets/k3s-airgap-images-amd64.tar.gz > /var/lib/rancher/k3s/agent/images/airgap-images.tar
$ cp assets/k3s /usr/local/bin && chmod +x /usr/local/bin/k3s
$ tar -xzf assets/helm-v3.8.2-linux-amd64.tar.gz && cp linux-amd64/helm /usr/local/bin
  5. Update the registries.yaml file on all the nodes:

$ mkdir -p /etc/rancher/k3s
$ cp assets/registries.yaml  /etc/rancher/k3s/
  6. Install K3s on the 1st master node. To get started, launch a server node using the --cluster-init flag:

cat scripts/k3s.sh | INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master1 --cluster-init

Check your first master node's status; it should be in the Ready state:

kubectl get nodes

Use the following command to copy the token from this node; it will be used to join the other nodes to the cluster:

cat /var/lib/rancher/k3s/server/node-token

Also, copy the IP address of the 1st master node which will be used by the other nodes to join the cluster.

  7. Install K3s on the 2nd master node:

Run the following command, assigning the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.

Set --node-name to “master2”

Set --server to the IP address of the 1st master node

cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master2 --server https://<ip or hostname of any master node>:6443

Check the node status:

kubectl get nodes
  8. Install K3s on the 3rd master node:

Run the following command, assigning the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.

Set --node-name to “master3”

Set --server to the IP address of the 1st master node

cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - server --node-name=master3 --server https://<ip or hostname of any master node>:6443

Check the node status:

kubectl get nodes
  9. Install K3s on the 1st worker node: Use the same approach to install K3s and connect the worker node to the cluster. The installation parameters differ in this case. Run the following command, setting --node-name to “workerN” (where N is the worker node number; here “worker1” for the 1st worker node):

cat scripts/k3s.sh | K3S_TOKEN=$K3S_TOKEN INSTALL_K3S_SKIP_DOWNLOAD=true K3S_KUBECONFIG_MODE="644" sh -s - agent --node-name=worker1 --server https://<ip or hostname of any master node>:6443

Check the node status:

kubectl get nodes
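
With all four nodes joined, a final check from any master node should show master1-3 with the control-plane and etcd roles and worker1 with no role (agents report <none>):

kubectl get nodes -o wide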

Deploy Private Docker Registry and Import Docker images

  1. Extract and import the Docker images locally on the master1 node:

$ mkdir /tmp/import
$ for f in images/*.gz; do IMG=$(basename "${f}" .gz); gunzip -c "${f}" > /tmp/import/"${IMG}"; done
$ for f in /tmp/import/*.tar; do ctr -n=k8s.io images import "${f}"; done
  2. Install the gv-private-registry Helm chart on the master1 node. Replace $VERSION with the version present in the downloaded bundle; to check all the charts that have been downloaded, run ls charts.

$ helm upgrade --install  gv-private-registry charts/gv-private-registry-$VERSION.tgz --wait \
  --timeout=10m0s \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
  3. Tag and push the Docker images to the local private Docker registry deployed on the master1 node:

$ sh scripts/push-docker-images.sh
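
To confirm the registry is serving the pushed images, you can query the Docker Registry v2 API through the DNS entry created earlier:

$ curl http://private-docker-registry.local/v2/_catalog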

Install Helm charts

The following steps guide you through the installation of the dependencies required by Focus and Synergy.

Perform the following steps on the master1 node.

Replace $VERSION with the version present in the downloaded bundle. To check all the charts that have been downloaded, run ls charts.

  1. Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups. If you are installing Focus or Enterprise, append --set eck-operator.enabled=true to the command in order to enable Elasticsearch.

$ helm upgrade --install gv-essentials charts/gv-essentials-$VERSION.tgz --wait \
--timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
--set global.high_available=true \
--set eck-operator.enabled=true  \
--set minio.replicas=4 \
--set minio.mode=distributed \
--set consul.server.replicas=3 \
--set updateclusterid.enabled=false \
--set backup.hour=1
  2. Install Monitoring CRD:

$ helm upgrade --install rancher-monitoring-crd charts/rancher-monitoring-crd-$VERSION.tgz --wait \
--kubeconfig /etc/rancher/k3s/k3s.yaml \
--namespace=cattle-monitoring-system \
--create-namespace
  3. Install Monitoring:

$ helm upgrade --install rancher-monitoring charts/rancher-monitoring-$VERSION.tgz --wait \
--kubeconfig /etc/rancher/k3s/k3s.yaml \
--set global.high_available=true \
--namespace=cattle-monitoring-system \
--set loki-stack.loki.replicas=2 \
--set prometheus.prometheusSpec.replicas=2
  4. Check that all pods are Running with the command:

kubectl get pods -A

Install Focus/Synergy Helm Chart

Replace the following variables:

  • $VERSION with the version that is present in the bundle that has been downloaded

  • $RESELLER with the reseller code (either getvisibility or forcepoint)

  • $PRODUCT with the product being installed (synergy or focus or enterprise)

$ helm upgrade --install gv-platform charts/gv-platform-$VERSION.tgz --wait \
--timeout=10m0s --kubeconfig /etc/rancher/k3s/k3s.yaml \
--set high_available=true \
--set-string clusterLabels.environment=prod \
--set-string clusterLabels.cluster_reseller=$RESELLER \
--set-string clusterLabels.cluster_name=mycluster \
--set-string clusterLabels.product=$PRODUCT
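
To review everything installed so far, you can list the Helm releases across all namespaces:

$ helm list -A --kubeconfig /etc/rancher/k3s/k3s.yaml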

Install Kube-fledged

Perform the following steps on the master1 node.

  1. Install the gv-kube-fledged Helm chart. Replace $VERSION with the version present in the downloaded bundle; to check all the charts that have been downloaded, run ls charts.

$ helm upgrade --install gv-kube-fledged charts/gv-kube-fledged-$VERSION.tgz -n kube-fledged \
--timeout=10m0s \
--kubeconfig /etc/rancher/k3s/k3s.yaml \
--create-namespace
  2. Create and deploy imagecache.yaml:

$ sh scripts/create-imagecache-file.sh
$ kubectl apply -f scripts/imagecache.yaml
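
kube-fledged represents the cache as an ImageCache resource, so you can check that it was created and is being processed (the resource type assumes the CRD installed by the chart above):

$ kubectl get imagecaches -A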

Install custom artifacts

Models and other artifacts, like custom agent versions or custom consul configuration can be shipped inside auto deployable bundles. The procedure to install custom artifact bundles on an HA cluster is the same as in the single node cluster case. Take a look at the guide for single-node clusters above.

Upgrade

View current values in config file for each chart

  • Before upgrading each chart, you can check the settings used in the current installation with helm get values <chartname>.

  • If the current values are different from the defaults, you will need to change the parameters of the helm upgrade command for the chart in question.

  • For example, if the backup is currently set to run at 2 AM instead of the 1 AM default, change --set backup.hour=1 to --set backup.hour=2.

  • The helm upgrade commands shown in this guide use a mostly default configuration; an example of inspecting the current values follows.
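
For example, to inspect the values currently applied to the Essentials release before upgrading it (the output reflects your own overrides):

helm get values gv-essentials --kubeconfig /etc/rancher/k3s/k3s.yaml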

Focus/Synergy/Enterprise Helm Chart

To upgrade Focus/Synergy/Enterprise you must:

  1. Download the new bundle

  2. Import Docker images

  3. Install Focus/Synergy/Enterprise Helm Chart

For an HA deployment:

  1. Import the Docker images only on the master1 node

  2. Recreate and redeploy the imagecache.yaml file (see step 2 of the Install Kube-fledged section above)

Getvisibility Essentials Helm Chart

To upgrade the Getvisibility Essentials chart you must:

  1. Download the new bundle

  2. Import Docker images

  3. Run the command from Install Getvisibility Essentials under the Install Helm charts section

For an HA deployment:

  1. Import the Docker images only on the master1 node

  2. Recreate and redeploy the imagecache.yaml file (see step 2 of the Install Kube-fledged section above)

Install custom artifacts

Models and other artifacts, like custom agent versions or custom consul configuration can be shipped inside auto deployable bundles. The procedure to upgrade custom artifact bundles is the same as the installation one, take a look at the guides above for single-node and multi-node installations.
