Service Ports
Jan 1 2022 at 12:00 AM
- Required Ports for services
- Service leader election
- V-Raptor Ingress Setup
- Rules and mappings
- Driver Rules
This document describes the networking setup used within the V-Raptor, including which ports are used by the various services, and explains how to expose device drivers that accept incoming connections using the Raptor's ingress service.
Required Ports for services
By default, all V-Raptor services use a minimum of two ports: one for communication over the RPC layer and one for the service's HTTP endpoint. These ports need to be added to each deployed service's `Service` object in Kubernetes.
Note: Services deployed through the V-Raptor's installation agent will have these ports configured automatically.
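As a sketch, a manually deployed service's `Service` object might declare these two ports as follows. The service name and namespace are illustrative; the port numbers follow the defaults described in this document (8000 for HTTP, 6000 for RPC):

```yaml
# Hypothetical Service manifest for a V-Raptor service named "collector".
# Names and namespace are illustrative; ports are the documented defaults.
apiVersion: v1
kind: Service
metadata:
  name: collector
  namespace: vraptor01-organisation
spec:
  selector:
    app: collector
  ports:
    - name: http
      port: 8000
      protocol: TCP
      targetPort: 8000
    - name: rpc
      port: 6000
      protocol: TCP
      targetPort: 6000
```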
Each V-Raptor service has a web endpoint that can be used to check the health and metrics of the service. By default, this is only accessible from within the V-Raptor's Kubernetes cluster. If these endpoints need to be exposed (e.g. to query an API), a rule will need to be added to the ingress. The default port for the HTTP endpoint is 8000, and it can be reached from within the cluster at `https://<serviceId>.<kubeNamespace>:8000`. On this endpoint, the following will always be available:
| Endpoint | Description |
|---|---|
| `/metrics` | Prometheus metrics endpoint |
| `/health` | Service's health status that can be used for liveness probes |
| `/health/rag` | Service's RAG (Red Amber Green) status that provides insight into the application's inner status |
| `/health/all` | Detailed breakdown of the health of each software component in the service |
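Since `/health` is intended for liveness probes, a deployment's pod spec could wire it up along these lines (a sketch; the delay and period values are illustrative, not V-Raptor defaults):

```yaml
# Hypothetical liveness probe against a service's default HTTP endpoint.
# Timing values are placeholders.
livenessProbe:
  httpGet:
    path: /health
    port: 8000
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 30
```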
RPC endpoints
These ports are used for inter-service communication over the Raptor’s RPC layer (default is gRPC). By default this will be port 6000 over TCP.
Service leader election
This port should be opened if service replication will be used. By default this will be 8739 over UDP. If no replication will be used, then it is not necessary to add this rule.
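If replication is enabled, the leader-election port can be added to the service's port list in the same way as the others (a sketch using the default 8739 over UDP):

```yaml
# Hypothetical additional port entry for service leader election.
- name: leader-election
  port: 8739
  protocol: UDP
  targetPort: 8739
```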
V-Raptor Ingress Setup
The V-Raptor does not make use of a Kubernetes ingress object. Instead, it utilises two `LoadBalancer`-type services: one for TCP/HTTP traffic and another for UDP traffic.
The traffic from both of these services is routed to a replicated instance of the Raptor's ingress service. This service is based on nginx and is tailored for the V-Raptor's use case. Rules are specified through a combination of config maps and ports on the respective `LoadBalancer` services.
At a bare minimum, the V-Raptor requires the following ports to be opened for external traffic:
| Port | Protocol | Usage |
|---|---|---|
| 80 | TCP | Let's Encrypt certificate renewal |
| 443 | TCP | APIs and web applications |
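As a sketch, the port list on the TCP `LoadBalancer` service covering this minimum might look like the following (the port names are illustrative; Let's Encrypt's HTTP-01 challenge requires port 80):

```yaml
# Hypothetical port entries on the TCP LoadBalancer service.
ports:
  - name: acme
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
```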
Using port 443, APIs for various services can be exposed as needed. The following services are exposed by default.
| Service | Usage |
|---|---|
| collector | Device Management API |
| deployment | General Management APIs |
| devicestore | Driver deployments and updates |
| ui | Managing client applications |
In addition to the services, various web applications can also be exposed through the V-Raptor’s application bundle. These apps are not required for the full functionality of the V-Raptor as the APIs can also be used.
| Application | Route | Description |
|---|---|---|
| Landing Page | `/v4` | Landing page of the V-Raptor |
| Credential Manager | `/v4/apps/credential-manager` | Application used to manage credentials used for integration |
| Deployment Manager | `/v4/apps/deployment-manager` | Application used to deploy and manage device drivers and core services |
| Status Overview | `/v4/apps/raptor-status` | Application that visualises the health of components on the V-Raptor |
Note: These services can be removed from the landing page using the V-Raptor's client application API if they need to be disabled. The corresponding rule in the ingress config should also be removed.
Rules and mappings
The various rules are split into four config files:

| File | Usage |
|---|---|
| `nginx.conf` | nginx server template config |
| `api.conf` | API routing rules |
| `tcp.conf` | TCP port forwarding rules |
| `udp.conf` | UDP port forwarding passthrough rules |
Nginx template config
This file contains the various CSP policies and headers, as well as includes for the additional config files. In general, this file need not be modified unless CSP, headers or timeouts need to be adjusted for security reasons. These policies and headers can be adjusted by modifying the corresponding sections of the configuration file.
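For illustration, such sections typically consist of nginx directives like the following. The header values and timeouts shown here are placeholders, not the V-Raptor's actual shipped policy:

```nginx
# Illustrative placeholders only - not the V-Raptor's shipped values.
add_header Content-Security-Policy "default-src 'self'";
add_header X-Content-Type-Options "nosniff";
proxy_read_timeout 60s;
proxy_send_timeout 60s;
```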
API routing rules
APIs that are exposed can always be reached using the following URL scheme:

`https://{public-dns}/{serviceId}/{api-route}`

So for example, to access the service management API on the V-Raptor located at `vraptor01-organisation.commander.io`, one would query the deployment service as follows:

`https://vraptor01-organisation.commander.io/deployment/api/service/online`
To configure a route to a service's API, the following should be added under the API rules in the `api.conf` file in the load-balancer config map:

```nginx
location /<serviceId> {
    rewrite /<serviceId>/(.*) /$1 break;
    proxy_pass https://<serviceId>.<namespace>:<kestrelPort>;
}
```
Note: If the route contains sensitive information, logging should be disabled for the route.
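For instance, logging can be switched off for a sensitive route with nginx's `access_log` directive (a sketch, using the same placeholder route as above):

```nginx
location /<serviceId> {
    access_log off;  # do not log requests that may carry sensitive data
    rewrite /<serviceId>/(.*) /$1 break;
    proxy_pass https://<serviceId>.<namespace>:<kestrelPort>;
}
```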
Application routing rules
Web applications that need to be exposed follow a similar URL scheme to APIs. However, instead of an `/api` segment in the URL scheme, the `/apps` prefix should be used. For example, the V-Raptor's credential manager routing scheme looks like the following:
```nginx
location /v4/apps/credential-manager {
    rewrite /v4/apps/credential-manager/(.*) /$1 break;
    proxy_pass http://credential-manager-app.vraptor01-airspace;
    proxy_set_header Host $host;
}
```
In the example above, a version has been added before the application prefix. This is only required for the V-Raptor application.
Driver Rules
In addition to the core services, additional micro-services - called device drivers - are also deployed to the V-Raptor. These services serve as communication interfaces with various devices or APIs for the purpose of aggregating telemetry. Depending on the device, different ingress rules may need to be specified to allow inbound traffic from the devices/APIs to reach the correct driver services.
Devices may communicate using HTTP, TCP or UDP or a combination of one or more of these. Additionally, certain protocol implementations may support TLS and DTLS which necessitates the use of a certificate. For this reason, the V-Raptor’s ingress provides two different methods of routing traffic to services.
HTTP Communication
For devices that support HTTP communication, the same method can be followed as described in the API routing section above. Suppose we have a Sigfox device driver deployed with the serviceId `sigfox-driver`. This driver receives callbacks from the Sigfox platform, so it requires a route to be opened to it. Using the same structure as an API, we can expose the driver at `https://vraptor01-organisation.commander.io/sigfox-driver/api/device/telemetry` by adding the following rule:
```nginx
location /sigfox-driver {
    rewrite /sigfox-driver/(.*) /$1 break;
    proxy_pass https://sigfox-driver.vraptor01-organisation:8000;
}
```
The encryption will be taken care of by the ingress, which will ensure that the public facing endpoint makes use of the Raptor’s trusted certificate issued on the same domain name as the V-Raptor.
Note: The ingress will make a secure connection to the backend service - so all APIs are encrypted end-to-end.
TCP and UDP Communication
For TCP and UDP traffic, we will make use of port forwarding. This means that the connection will be passed through to the service directly where it can then be handled as required. If DTLS or TLS is used, the service will then need to be provided with a suitable certificate.
Note: An RSA certificate will always be available within the driver container by default. This certificate can be used for most TLS/DTLS connections. If an EC certificate is required, an additional public certificate will need to be uploaded using the V-Raptor’s certificate management API.
To expose the service’s TCP/UDP server, the following should be added to the TCP or UDP config on the load balancer:
```nginx
# TCP Example
server {
    listen <publicPort>;
    proxy_pass <serviceId>.<namespace>:<servicePort>;
    proxy_next_upstream on;
}

# UDP Example
server {
    listen <publicPort> udp;
    proxy_pass <serviceId>.<namespace>:<servicePort>;
    proxy_next_upstream on;
}
```
In our example, let us suppose we have a driver service called `teltonika-driver` that has a TCP server using TLS listening on port `12000` in the container. We would add the following rules to make the Teltonika service available at `vraptor01-organisation.commander.io:8003`.
On the nginx TCP config (`tcp.conf`):

```nginx
server {
    listen 8003;
    proxy_pass teltonika-driver.vraptor01-organisation:12000;
    proxy_next_upstream on;
}
```
On the TCP ingress' `LoadBalancer` service:

```yaml
- name: tcp8003
  port: 8003
  protocol: TCP
  targetPort: 8003
```
This will allow all inbound traffic on port `8003` to be routed to the `teltonika-driver` service listening on port `12000` within the V-Raptor.