Air Gap Installation
Make sure you have /usr/local/bin configured in your PATH (export PATH=$PATH:/usr/local/bin). All the commands must be executed as the root user.
For RHEL, K3s needs the following packages to be installed: k3s-selinux (repo rancher-k3s-common-stable) and its dependencies container-selinux (repo rhel-8-appstream-rhui-rpms) and policycoreutils-python-utils (repo rhel-8-baseos-rhui-rpms).
Also, firewalld, nm-cloud-setup.service, and nm-cloud-setup.timer must be disabled and the server restarted before the installation.
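A minimal sketch of these prerequisite steps, assuming systemd manages the services:

```bash
# Disable the firewall and the NetworkManager cloud setup units, then reboot.
systemctl disable --now firewalld
systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
reboot
```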
The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:
Extract the downloaded file: tar -xf gv-platform-$VERSION.tar
Prepare K3s for air-gap installation:
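The exact commands are not shown in this export; a hedged sketch following the standard K3s air-gap procedure (the archive and binary names are assumptions about the bundle layout):

```bash
# Place the air-gap image archive where K3s expects it, and install the binary.
mkdir -p /var/lib/rancher/k3s/agent/images/
cp k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/
cp k3s /usr/local/bin/k3s && chmod +x /usr/local/bin/k3s
```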
Install K3s:
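A hedged sketch using the standard K3s air-gap installer, assuming the bundle ships the upstream install.sh script:

```bash
# Skip the download since the binary and images are already in place.
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh
```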
Wait about 30 seconds and check that K3s is running with the commands: kubectl get pods -A and systemctl status k3s.service
The steps below will manually deploy the necessary images to the cluster.
Import Docker images locally:
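A hedged sketch, assuming the bundle ships the application images as a tar archive (the archive filename is illustrative):

```bash
# Import the bundled images into the containerd runtime used by K3s.
k3s ctr images import gv-images-$VERSION.tar
```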
The following steps guide you through the installation of the dependencies required by Focus and Synergy.
Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.
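The command is not shown in this export; a hedged sketch, assuming the chart is installed from the bundle's charts/ directory (the chart filename and release name are assumptions; backup.hour is the value referenced later in this document):

```bash
# Install the Essentials chart and schedule backups at 01:00 UTC.
helm upgrade --install gv-essentials ./charts/gv-essentials-$VERSION.tgz \
  --set backup.hour=1
```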
Install Monitoring CRD:
Install Monitoring:
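A hedged sketch covering the two monitoring steps above; the chart filenames are assumptions based on the bundle's charts/ directory:

```bash
# Install the monitoring CRDs first, then the monitoring stack itself.
helm upgrade --install monitoring-crd ./charts/monitoring-crd-$VERSION.tgz
helm upgrade --install monitoring ./charts/monitoring-$VERSION.tgz
```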
Check that all pods are Running with the command: kubectl get pods -A
Replace the following variables:
$VERSION with the version that is present in the bundle that has been downloaded
$RESELLER with the reseller code (either getvisibility or forcepoint)
$PRODUCT with the product being installed (synergy, focus, or enterprise)
After the model bundle is published (for example, images.master.k3s.getvisibility.com/models:company-1.0.1), you'll have to generate a public link to this image by running the k3s-air-gap Publish ML models GitHub CI task. The task will ask you for the Docker image URL.
Once the task is complete, you'll get a public URL for downloading the artifact in the task summary. After that, execute the following commands.
Replace the following variables:
$URL with the URL to the model bundle provided by the task
$BUNDLE with the name of the artifact, in this case company-1.0.1
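The original commands are not shown in this export; a hedged sketch of what downloading and importing the bundle may look like, assuming the artifact is a Docker image tar:

```bash
# Download the published artifact and import it into the K3s runtime.
curl -L "$URL" -o "$BUNDLE.tar"
k3s ctr images import "$BUNDLE.tar"
```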
Now you'll need to execute the artifact deployment job. This job will unpack the artifacts from the Docker image into a MinIO bucket inside the on-premises cluster and restart any services that use them.
Replace the following variables:
$GV_DEPLOYER_VERSION with the version of the model deployer available under charts/
$BUNDLE_VERSION with the version of the artifact, in this case company-1.0.1
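The deployment command itself is not shown in this export; a hedged sketch, assuming the model deployer ships as a chart under charts/ (the chart and value names are assumptions):

```bash
# Launch the artifact deployment job via the model deployer chart.
helm upgrade --install gv-model-deployer \
  ./charts/gv-model-deployer-$GV_DEPLOYER_VERSION.tgz \
  --set bundleVersion=$BUNDLE_VERSION
```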
You should be able to verify that everything went well by locating the ml-model job that was launched. The logs should look like this:
In addition you can enter the different services that consume these artifacts to check if they have been correctly deployed. For example for the models you can open a shell inside the classifier containers and check the /models directory or check the models-data bucket inside MinIO. Both should contain the expected models.
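A hedged example of these verification steps (the pod and namespace names are placeholders):

```bash
# Find the deployment job that was launched.
kubectl get jobs -A | grep ml-model
# Check the models directory inside a classifier container.
kubectl exec -it <classifier-pod> -n <namespace> -- ls /models
```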
K3s needs the following ports to be accessible (Inbound and Outbound) by all other nodes running in the same cluster:
| Protocol | Port | Description |
| --- | --- | --- |
| TCP | 6443 | Kubernetes API Server |
| UDP | 8472 | Required for Flannel VXLAN |
| TCP | 2379-2380 | Embedded etcd |
| TCP | 10250 | metrics-server for HPA |
| TCP | 9796 | Prometheus node exporter |
| TCP | 80 | Private Docker Registry |
The ports above should not be publicly exposed, as doing so would open up your cluster to anyone. Make sure to always run your nodes behind a firewall/security group/private network that blocks external access to the ports mentioned above.
All nodes in the cluster must have:
Domain Name Service (DNS) configured
Network Time Protocol (NTP) configured
Fixed private IPv4 address
Globally unique node name (use --node-name when installing K3s in a VM to set a static node name)
The following port must be publicly exposed in order to allow users to access the Synergy or Focus product:
| Protocol | Port | Description |
| --- | --- | --- |
| TCP | 443 | Focus/Synergy backend |
Users must not access the K3s nodes directly; instead, there should be a load balancer sitting between the end user and all the K3s nodes (master and worker nodes):
The load balancer must operate at Layer 4 of the OSI model and listen for connections on port 443. After the load balancer receives a connection request, it selects a target from the target group (which can be any of the master or worker nodes in the cluster) and then attempts to open a TCP connection to the selected target (node) on port 443.
The load balancer must have health checks enabled which are used to monitor the health of the registered targets (nodes in the cluster) so that the load balancer can send requests to healthy nodes only.
The recommended health check configuration is:
Timeout: 10 seconds
Healthy threshold: 3 consecutive health check successes
Unhealthy threshold: 3 consecutive health check failures
Interval: 30 seconds
Balance mode: round-robin
At least 4 machines are required to provide high availability of the Getvisibility platform. The HA setup supports a single-node failure.
Make sure you have /usr/local/bin configured in your PATH (export PATH=$PATH:/usr/local/bin). All the commands must be executed as the root user.
Create at least 4 VMs with the same specs
Extract the downloaded file on all the VMs: tar -xf gv-platform-$VERSION.tar
Create a local DNS entry private-docker-registry.local across all the nodes resolving to the master1 node:
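A minimal sketch using /etc/hosts on every node (the IP address is a placeholder for master1's fixed private IPv4 address):

```bash
# Resolve the private registry name to the master1 node on every node.
echo "<master1-ip> private-docker-registry.local" >> /etc/hosts
```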
Prepare the K3s air-gap installation files:
Update the registries.yaml file across all the nodes.
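A hedged sketch of what /etc/rancher/k3s/registries.yaml may contain, assuming the private registry listens on port 80 on master1:

```bash
mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  "private-docker-registry.local":
    endpoint:
      - "http://private-docker-registry.local:80"
EOF
```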
Install K3s in the 1st master node:
To get started, launch a server node using the cluster-init flag:
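A hedged sketch following the standard K3s HA air-gap pattern (assumes install.sh from the bundle):

```bash
# Initialize the embedded etcd cluster on the first server node.
INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh server \
  --cluster-init --node-name master1
```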
Check your first master node's status; it should be in the Ready state:
Use the following command to copy the token from this node; it will be used to join the other nodes to the cluster:
Also, copy the IP address of the 1st master node which will be used by the other nodes to join the cluster.
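A sketch of the commands in question:

```bash
# The join token lives here on the first master.
cat /var/lib/rancher/k3s/server/node-token
# Note the node's private IP address for the join commands.
hostname -I
```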
Install K3s in the 2nd master node:
Run the following command and assign the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.
Set --node-name to “master2”
Set --server to the IP address of the 1st master node
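A hedged sketch of the join command for the 2nd master (the token and IP are placeholders from the previous step; the same command with --node-name master3 applies to the 3rd master below):

```bash
# Join the second server node to the existing cluster.
K3S_TOKEN=<token-from-master1> INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh server \
  --server https://<master1-ip>:6443 --node-name master2
```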
Check the node status:
Install K3s in the 3rd master node:
Run the following command and assign the contents of the file /var/lib/rancher/k3s/server/node-token from the 1st master node to the K3S_TOKEN variable.
Set --node-name to “master3”
Set --server to the IP address of the 1st master node
Check the node status:
Install K3s in the 1st worker node:
Use the same approach to install K3s and to connect the worker node to the cluster group.
The installation parameters are different in this case. Run the following command:
Set --node-name to “workerN” (where N is the number of the worker node, e.g. “worker1”)
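A hedged sketch of the worker (agent) join, mirroring the standard K3s pattern (the token and IP are placeholders):

```bash
# Join a worker node; setting K3S_URL makes the installer run in agent mode.
K3S_URL=https://<master1-ip>:6443 K3S_TOKEN=<token-from-master1> \
  INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh --node-name worker1
```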
Check the node status:
Extract and import the Docker images locally to the master1 node
Install the gv-private-registry helm chart in the master1 node:
Replace $VERSION with the version that is present in the bundle that has been downloaded.
To check all the charts that have been downloaded run ls charts.
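A hedged sketch, assuming the chart tarball sits in the bundle's charts/ directory (the filename is an assumption):

```bash
# Deploy the private registry that the other nodes will pull from.
helm upgrade --install gv-private-registry ./charts/gv-private-registry-$VERSION.tgz
```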
Tag and push the Docker images to the local private Docker registry deployed in the master1 node:
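A hedged example for a single image; repeat for every image in the bundle (the image name and tag are illustrative):

```bash
docker tag <image>:<tag> private-docker-registry.local/<image>:<tag>
docker push private-docker-registry.local/<image>:<tag>
```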
The following steps guide you through the installation of the dependencies required by Focus and Synergy.
Perform the following steps in the master1 node
Install Getvisibility Essentials and set the daily UTC backup hour (0-23) for performing backups.
If you are installing Focus or Enterprise, append --set eck-operator.enabled=true to the command in order to enable Elasticsearch.
Install Monitoring CRD:
Install Monitoring:
Check that all pods are Running with the command: kubectl get pods -A
Replace the following variables:
$VERSION with the version that is present in the bundle that has been downloaded
$RESELLER with the reseller code (either getvisibility or forcepoint)
$PRODUCT with the product being installed (synergy, focus, or enterprise)
Perform the following steps in the master1 node
Install the gv-kube-fledged helm chart.
Replace $VERSION with the version that is present in the bundle that has been downloaded.
To check all the charts that have been downloaded run ls charts.
Create and deploy imagecache.yaml
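A hedged sketch of the deploy step, assuming imagecache.yaml follows kube-fledged's ImageCache format:

```bash
# Deploy the image cache so nodes pre-pull the bundle's images.
kubectl apply -f imagecache.yaml
kubectl get imagecaches -A   # verify the cache resource was created
```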
Models and other artifacts, like custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. The procedure to install custom artifact bundles on an HA cluster is the same as in the single-node cluster case. Take a look at the guide for single-node clusters above.
Before upgrading each chart, you can check the settings used in the current installation with helm get values <chartname>. If the current values are different from the defaults, you will need to change the parameters of the helm upgrade command for the chart in question. For example, if the backup is currently set to run at 2 AM instead of the 1 AM default, change --set backup.hour=1 to --set backup.hour=2.
Below is a mostly default config.
To upgrade Focus/Synergy/Enterprise you must:
Download the new bundle
Import Docker images
Install Focus/Synergy/Enterprise Helm Chart
To upgrade the GV Essential chart you must:
Download the new bundle
Import Docker images
Run the command from "Install Getvisibility Essentials" under the "Install Helm charts" section
Models and other artifacts, like custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. The procedure to upgrade custom artifact bundles is the same as the installation one; take a look at the guides above for single-node and multi-node installations.
Models and other artifacts, like custom agent versions or custom Consul configuration, can be shipped inside auto-deployable bundles. These bundles are Docker images that contain the artifacts to be deployed alongside scripts to deploy them. To create a new bundle or modify an existing one, follow the bundle creation guide first. The list of all the available bundles is inside the bundles/ directory of the models-ci project on GitHub.
We are still using the models-ci repo because the bundles were only used to deploy custom models at first.
For RHEL, K3s needs the following packages to be installed: k3s-selinux (repo rancher-k3s-common-stable) and its dependencies container-selinux (repo rhel-8-appstream-rhui-rpms) and policycoreutils-python-utils (repo rhel-8-baseos-rhui-rpms).
Also, firewalld, nm-cloud-setup.service, and nm-cloud-setup.timer must be disabled and the server restarted before the installation.
The steps below guide you through the air-gap installation of K3s, a lightweight Kubernetes distribution created by Rancher Labs:
In the case of an HA deployment, recreate and redeploy the imagecache.yaml file (perform the 2nd step above).