
Preparing the Environment


  1. Checklist
  2. Kubernetes cluster networking / CNI
  3. Load Balancers
  4. Shared Storage
  5. Storage Class
  6. Ingress (if used)
  7. Public certificates
  8. Namespace
  9. Kube config with required permissions
  10. Container Registry
  11. Requirements
    1. Single node
    2. Multi node

Checklist

It is important to note that, in order to set up many of the tools and services mentioned below, access is required to a number of software and image repositories on the internet. Most importantly, all drivers for V-Raptor are ultimately stored in one of two possible repositories:

  • iotnxt.azurecr.io
  • 830156806394.dkr.ecr.eu-west-1.amazonaws.com

Depending on which software or services you decide to implement, it may also be necessary to have access to the following repositories:

  • keyserver.ubuntu.com
  • packages.cloud.google.com
  • pypi.python.org
  • pypi.org
  • files.pythonhosted.org
  • get.helm.sh
  • dl.k8s.io
  • storage.googleapis.com
  • k8s.gcr.io
  • docs.projectcalico.org
  • github.com
  • registry-1.docker.io
  • github-production-release-asset-2e65be.s3.amazonaws.com
  • auth.docker.io
  • production.cloudflare.docker.com
  • raw.githubusercontent.com
  • charts.appscode.com
  • charts.rook.io
  • quay.io
  • d3uo42mtx6z2cr.cloudfront.net
  • prod-eu-west-1-starport-layer-bucket.s3.eu-west-1.amazonaws.com
  • ecr.eu-west-1.amazonaws.com

Kubernetes cluster networking / CNI

What is a Kubernetes CNI?

CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of plugins. Kubernetes uses CNI as an interface between network providers and Kubernetes pod networking. Kubernetes calls the API any time a pod is being created or destroyed.

What is Kubernetes Calico?

Calico is a container networking solution created by Tigera. While solutions like Flannel operate over layer 2, Calico makes use of layer 3 to route packets to pods. Calico can also provide network policy for Kubernetes.

Install Calico networking and network policy for on-premises deployments

We make use of the Calico CNI in the Kubernetes cluster setup. Before the deployment process can take place, we must determine whether the following exist from a previous installation and decide on which node to deploy (see the sketch after this list):

  • Check for a pre-existing operator
  • Check for a pre-existing Calico installation
  • Decide whether or not we are using the legacy version
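
A minimal sketch of these checks, assuming the operator was deployed into the standard tigera-operator namespace and that a manifest-based (legacy) Calico install placed the calico-node daemonset in kube-system:

# Check for a pre-existing Tigera operator deployment
kubectl get deployment tigera-operator -n tigera-operator

# Check for a pre-existing (legacy, manifest-based) Calico installation
kubectl get daemonset calico-node -n kube-system
kubectl get pods -A -l k8s-app=calico-node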

Now we can continue to download the Calico manifest from the mirror site and extract it to a temporary directory.

Download the latest Calico version

When that is done, we can proceed to download the operator manifest and deploy the operator. We will make use of the operator to deploy the Calico CNI.
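
As a rough sketch, assuming the upstream Calico manifests are used and taking v3.26.1 purely as an example version (substitute the version downloaded above):

# Deploy the Tigera operator, then the custom resources that instruct it to install Calico
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml

curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
kubectl create -f custom-resources.yaml

The custom-resources.yaml file is where settings such as the pod network CIDR are adjusted before it is applied.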

Configuring Calico

Load Balancers

A load balancer is required to be installed with your K8s cluster. It is responsible for routing all incoming and outgoing traffic to the required destinations, and it is also used to expose an IP address and endpoint for access from outside of the K8s cluster. This is an important step in setting up a LoadBalancer Service in Kubernetes, and even more so in setting up a valid ingress service, which is then used to route data coming in from different devices to their desired drivers/services running inside the V-Raptor.

When installing and running a V-Raptor in a cloud provider, you will likely not need to worry about setting this up, as the cloud provider will manage the creation of load balancers on your behalf. However, when setting the system up on bare metal or VMware on your own servers, it may be necessary to install a load balancer to incorporate with your Kubernetes cluster. As a rule, V-Raptor is deployed on bare metal using MetalLB as the load balancer.

Why MetalLB?

Installing MetalLB

Configuring MetalLB

Once set up and configured, an ingress can then be installed on your Kubernetes cluster to allow for data routing and access to any UIs. MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation.

For the MetalLB deployment we need to specify two items:

  • version
  • load-balancer IP ranges

The following URL can be used to fetch the installation file:

MetalLB installation
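
As a sketch, assuming a manifest-based install of a pre-0.13 MetalLB release (v0.12.1 is only an example; substitute the version you decided on):

# Create the metallb-system namespace, then deploy the MetalLB components
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml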

Run the following command to create the needed secret.

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

On completion you should have the following in place in the metallb-system namespace:

  • metallb-system/controller deployment: the cluster-wide controller that handles IP address assignments.
  • metallb-system/speaker daemonset: the component that speaks the protocol(s) of your choice to make the services reachable.

MetalLB will come online, but it will not operate as needed without any configuration. To get it functional we will need to create a ConfigMap and add it to the Kubernetes cluster.
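
A minimal Layer 2 sketch, assuming the legacy ConfigMap-based configuration used by MetalLB releases before v0.13 (the address range is only a placeholder; use the load-balancer IP range reserved for your network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # placeholder range; replace with your own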

You can make use of the following URL if needed for deployment and configuration:

MetalLB deployment & configuration

Shared Storage

Shared storage is required for V-Raptor to store information like devices and device metadata. Most importantly, it is also used to store data in transit (DSIT). This means that the service at each step of the journey through the V-Raptor stack needs to store the data it receives before shipping it on to the next step or service. V-Raptor does this to ensure data integrity is preserved throughout the stack, meaning that services can be stopped, started, and reconfigured on the fly without the fear of losing data as a result. To achieve this there are two recognized methods that can be deployed.

  1. The first is to use a MongoDB database. This is the recommended method for large deployments where there are several V-Raptors and/or the V-Raptors are installed across several nodes.
  2. The second option is to use persistent volumes within the Kubernetes cluster. This is the recommended route when deploying your V-Raptor onto single-node clusters, or onto clusters that are very isolated from other resources in the network by either security or geo-location. For this option it is necessary to install a form of shared storage, such as NFS, onto the Kubernetes nodes in question.

Kubernetes Shared Storage: The Basics and a Quick Tutorial

Generally, as a rule, single-node clusters can be set up with something like Rook Ceph as the basis for managing persistent volumes; however, this can prove to be a challenge when deploying the V-Raptor to multi-node clusters. The reason is that when a persistent volume is created in a K8s cluster, it ends up being stored on the disk of the node where it is created. If that node is lost or goes offline for some reason, the persistent volume becomes unavailable to the service that requires it. A new volume could simply be created elsewhere to replace it, but that would mean the loss of any data that was on the existing volume.

Cloud-based infrastructure is better at handling these situations, as it has the ability to detach disks from one node and attach them to another node dynamically. This, however, also has limitations, as the volumes cannot be moved to nodes in other availability zones or other geo-locations. Similar issues also crop up when clusters are configured over multiple geo-redundant data centers. It is therefore suggested that in cases where multi-node clusters, more than one data center, or multiple availability zones are required, the storage be set up and configured with a tool such as Longhorn. Longhorn is designed to replicate volumes across multiple nodes, clusters, and data centers.

What is Longhorn?

Storage Class

If the V-Raptor is going to use a shared storage option, then you will be required to configure a StorageClass in your cluster.

A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent.
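
As a sketch, assuming Longhorn is used as the shared-storage provisioner (adjust the provisioner and parameters to match whichever storage backend you chose in the previous section):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"        # replicate each volume across three nodes
  staleReplicaTimeout: "2880"  # minutes before a failed replica is cleaned up
reclaimPolicy: Delete
allowVolumeExpansion: true

Persistent volume claims created for the V-Raptor services would then reference this class by name.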

Ingress (if used)

An ingress service is required to allow external services to connect to the V-Raptor. To achieve this, we use the Nginx Ingress to configure access through to any required ports. This ingress can be set up as a TCP ingress or a UDP ingress, as well as allowing HTTP traffic. While the HTTP traffic can be set up and configured to run alongside either of these other two protocols, one ingress cannot handle TCP and UDP traffic at the same time; so either two V-Raptors can be installed, or two ingresses need to be configured and set up to allow access over the different protocols. Setting up Nginx will be addressed in the installation section.

Exposing TCP and UDP services on Nginx Ingress Controller
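
A minimal sketch of exposing an extra TCP port through the Nginx Ingress Controller, assuming the controller runs in the ingress-nginx namespace and watches the standard tcp-services ConfigMap (the vraptor/mqtt-driver service name and port are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "1883": "vraptor/mqtt-driver:1883"   # external port: "namespace/service:port"

A udp-services ConfigMap with the same layout covers UDP ports, and the matching port must also be opened on the ingress controller's LoadBalancer Service.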

Public certificates

A public certificate is required to be attached to the V-Raptor to allow users and devices to connect to any exposed services through the Ingress installed there. This will help ensure TLS encryption of data. The certificate can be issued by any publicly recognised certificate authority. It can be generated using tools like OpenSSL, or provided by your certificate authority. The certificate needs to match the URL used to access the V-Raptor.
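
For example, a private key and certificate signing request can be generated with OpenSSL, and the issued certificate loaded into the cluster as a TLS secret for the ingress (vraptor.example.com, the vraptor namespace, and the secret name are all placeholders):

# Generate a 2048-bit private key and a CSR for the V-Raptor hostname
openssl req -new -newkey rsa:2048 -nodes \
  -keyout vraptor.example.com.key \
  -out vraptor.example.com.csr \
  -subj "/CN=vraptor.example.com"

# Once the CA has issued the certificate, store it as a TLS secret
kubectl create secret tls vraptor-tls -n vraptor \
  --cert=vraptor.example.com.crt --key=vraptor.example.com.key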

Namespace

You will need to create a Namespace on your Kubernetes cluster in which to install the V-Raptor services. It is necessary to do this before installing the V-Raptor, as some of the required configuration files will need this name in them. Kubernetes namespaces help different projects, teams, or customers to share a Kubernetes cluster. They do this by providing the following:

  • A scope for names.
  • A mechanism to attach authorization and policy to a subsection of the cluster.

Use of multiple namespaces is optional.

Definition: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
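
Creating the namespace is a single command (vraptor is used here purely as an example name; use whichever name your configuration files reference):

kubectl create namespace vraptor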

Kube config with required permissions

A valid Kubernetes config file will be required to allow the installer tools and deployment services to set up and configure the services, secrets, and configmaps required by the V-Raptor. This Kubernetes config must have the necessary permissions to create and edit objects in the Namespace that has been set up for the V-Raptor.
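
A quick way to sanity-check the permissions is kubectl auth can-i (the vraptor namespace is an example, and the object types shown are only a sample of what the installer touches):

# Each command prints "yes" if the current kubeconfig holds the permission
kubectl auth can-i create secrets -n vraptor
kubectl auth can-i create configmaps -n vraptor
kubectl auth can-i create deployments -n vraptor
kubectl auth can-i create services -n vraptor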

Container Registry

As noted in the checklist above, access to a number of software and image repositories on the internet is required. Most importantly, all drivers for V-Raptor are ultimately stored in one of two possible repositories:

  • iotnxt.azurecr.io
  • 830156806394.dkr.ecr.eu-west-1.amazonaws.com

Requirements

Single node

When setting up your V-Raptor on a single-node cluster, we would recommend using a server with a minimum of:

  • 8 cores
  • 32 GB of RAM
  • 512 GB hard drive

Multi node

For a multi-node cluster, or a cluster using multiple VMs as its node base, we would recommend using at least 3 or 4 nodes, each with a minimum of:

  • 4 cores
  • 8 GB of RAM
  • 256 GB hard disk

Note:
The reason for requiring more disk space on the nodes is so that V-Raptor can use it as shared storage for persistent volumes, as discussed in the Shared Storage section. If, however, you are going to use MongoDB for shared storage, then you may only require a much smaller disk for your nodes.