
inlets-operator


"Get a Kubernetes LoadBalancer where you never thought it was possible."

In cloud-based Kubernetes solutions, Services can be exposed as type "LoadBalancer", and your cloud provider will provision a LoadBalancer and start routing traffic; in other words, you get ingress to your service.

inlets-operator brings that same experience to your local Kubernetes or k3s cluster (k3s/k3d/minikube/microk8s/Docker Desktop/KinD). The operator automates the creation of an inlets exit-node on a public cloud, and runs the client as a Pod inside your cluster. Your Kubernetes Service will be updated with the public IP of the exit-node and you can start receiving incoming traffic immediately.

Who is this for?

This solution is for users who want to gain incoming network access (ingress) to their private Kubernetes clusters running on their laptops, VMs, within a Docker container, on-premises, or behind NAT. The cost of the LoadBalancer with an IaaS like DigitalOcean is around 5 USD / mo, which is 10 USD cheaper than an AWS ELB or GCP LoadBalancer.

Whilst 5 USD is cheaper than a "Cloud Load Balancer", this tool is for users who cannot get incoming connections due to their network configuration, not for saving money vs. public cloud.

You can configure the operator to use either of our tunnels: inlets OSS for L7 HTTP traffic, or inlets PRO, which adds L4 TCP support and automatic encryption with TLS, and can enable the use of an IngressController and cert-manager directly from your laptop or private cloud.

inlets tunnel capabilities

The operator detects Services of type LoadBalancer, and then creates a Tunnel Custom Resource. Its next step is to provision a small VM with a public IP on the public cloud, where it will run the inlets tunnel server. Then an inlets client is deployed as a Pod within your local cluster, which connects to the server and acts like a gateway to your chosen local service.
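Once a Service of type LoadBalancer exists, you can watch this happen from kubectl. A minimal sketch, assuming the Tunnel CRD's plural name is tunnels and following the <service-name>-tunnel naming shown later in this README:

# Watch the Service until the operator patches in the exit-node's public IP
kubectl get svc -w

# List the Tunnel Custom Resources created by the operator
kubectl get tunnels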

Pick your inlets edition:

inlets PRO:

  • Tunnel an IngressController, including TLS termination and LetsEncrypt certs from cert-manager
  • Tunnel any TCP traffic at L4, i.e. Mongo, Postgres, MariaDB, Redis, NATS, SSH and TLS itself
  • Automatic end-to-end encryption built-in with TLS
  • Commercially licensed and supported, for cloud native operators and developers
  • Punch out multiple ports such as 80 and 443 over the same tunnel

Discounted pricing is available for personal use.

inlets OSS:

  • Tunnel L7 HTTP traffic
  • Free, OSS, built for community developers
  • Punch out only one port per tunnel; the port name must be: http
  • No encryption enabled by default

Status and backlog

Operator cloud host provisioning:

  • Provision VMs/exit-nodes on public cloud
    • Provision to Packet.com
    • Provision to DigitalOcean
    • Provision to Scaleway
    • Provision to GCP
    • Provision to AWS EC2
  • Publish stand-alone Go provisioning library/SDK

With inlets-pro configured, you get the following additional benefits:

  • Automatic configuration of TLS and encryption using a secured websocket (wss://) for the control-port
  • Tunnel pure TCP traffic
  • Separate data-plane (ports given by Kubernetes) and control-plane (port 8132)

Other features:

  • Automatically update Service type LoadBalancer with a public IP
  • Tunnel L7 http traffic
  • In-cluster Role, Dockerfile and YAML files
  • Raspberry Pi / armhf build and YAML file
  • ARM64 (Graviton/Odroid/Packet.com) Dockerfile/build and K8s YAML files
  • Ignore Services with dev.inlets.manage: false annotation
  • Garbage collect hosts when Service or CRD is deleted
  • CI with Travis and automated release artifacts
  • One-line installer arkade - arkade install inlets-operator --help

Backlog pending:

  • Provision to Civo

inlets projects

Inlets is a Cloud Native Tunnel and is listed on the Cloud Native Landscape under Service Proxies.

  • inlets - Cloud Native Tunnel for L7 / HTTP traffic written in Go
  • inlets-pro - Cloud Native Tunnel for L4 TCP
  • inlets-operator - Public IPs for your private Kubernetes Services and CRD
  • inletsctl - Automate the cloud for fast HTTP (L7) and TCP (L4) tunnels

Author

inlets and inlets-operator are brought to you by Alex Ellis. Alex is a CNCF Ambassador and the founder of OpenFaaS.

inlets is made available free-of-charge, but you can support its ongoing development through GitHub Sponsors 💪

Video demo

This video demo shows a single-node VM running k3s on Packet.com, with the inlets exit node also being provisioned on Packet's infrastructure.

Watch the demo on YouTube: https://www.youtube.com/watch?v=LeKMSG7QFSk

See an alternative video showing my cluster running with KinD on my Mac, and the exit node being provisioned on DigitalOcean.

Step-by-step tutorial

Try the step-by-step tutorial

Running in-cluster, using DigitalOcean for the exit node

Note: this example is now multi-arch, so it's valid for x86_64, ARMHF, and ARM64.

You can also run the operator in-cluster. A ClusterRole is used, since Services can be created in any namespace and may need a tunnel.

# Create a secret to store the access token

kubectl create secret generic inlets-access-key \
  --from-literal inlets-access-key="$(cat ~/Downloads/do-access-token)"

kubectl apply -f ./artifacts/crd.yaml

# Apply the operator deployment and RBAC role
kubectl apply -f ./artifacts/operator-rbac.yaml
kubectl apply -f ./artifacts/operator.yaml
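
After applying the manifests, a quick sanity check that the operator has started; this assumes nothing about the namespace, so it searches across all of them:

# Find the operator Deployment and check its Pods are running
kubectl get deploy --all-namespaces | grep inlets-operator
kubectl get pods --all-namespaces | grep inlets-operator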

You can also install the inlets-operator with a single command using arkade; arkade runs against any Kubernetes cluster.

Install with inlets PRO:

arkade install inlets-operator \
 --provider digitalocean \
 --region lon1 \
 --token-file $HOME/Downloads/do-access-token \
 --license $(cat $HOME/inlets-pro-license.txt)

Install with inlets OSS:

arkade install inlets-operator \
 --provider digitalocean \
 --region lon1 \
 --token-file $HOME/Downloads/do-access-token
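
arkade installs the operator into the kube-system namespace (see the Monitor/view logs section below), so you can wait for the rollout to complete with:

# Wait for the operator Deployment to become available
kubectl rollout status -n kube-system deploy/inlets-operator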

Using a provider which requires an Access Key and Secret Key? (AWS EC2, Scaleway)

These providers require an additional secret to be provided.

To install using arkade, pass the additional --secret-key-file flag:

arkade install inlets-operator \
 --provider ec2 \
 --region eu-west-1 \
 --token-file $HOME/Downloads/access-key \
 --secret-key-file $HOME/Downloads/secret-access-key \
 --license $(cat $HOME/inlets-pro-license.txt)

If you are installing manually using the YAML files, you will need to un-comment the sections indicated in the artifacts/operator.yaml file:

kubectl apply -f ./artifacts/crd.yaml

# Create a secret to store the access token

kubectl create secret generic inlets-access-key \
  --from-literal inlets-access-key="$(cat ~/Downloads/access-key)"

# Create a secret to store the secret access token

kubectl create secret generic inlets-secret-key \
  --from-literal inlets-secret-key="$(cat ~/Downloads/secret-access-key)"

# Apply the operator deployment and RBAC role
kubectl apply -f ./artifacts/operator-rbac.yaml
kubectl apply -f ./artifacts/operator.yaml
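
Before the operator starts provisioning hosts, it can be worth confirming that both secrets exist; a quick check using the secret names created above:

# Both secrets should be present in the namespace where the operator runs
kubectl get secret inlets-access-key inlets-secret-key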

Running in-cluster, using Google Compute Engine for the exit node using helm

Note: this example is now multi-arch, so it's valid for x86_64, ARMHF, and ARM64.

If you do not have helm installed and configured, follow the instructions here.

It is assumed that you have gcloud installed and configured on your machine. If not, follow the instructions here.

# Get current projectID
export PROJECTID=$(gcloud config get-value core/project 2>/dev/null)

# Create a service account
gcloud iam service-accounts create inlets \
--description "inlets-operator service account" \
--display-name "inlets"

# Get service account email
export SERVICEACCOUNT=$(gcloud iam service-accounts list | grep inlets | awk '{print $2}')

# Assign appropriate roles to inlets service account
gcloud projects add-iam-policy-binding $PROJECTID \
--member serviceAccount:$SERVICEACCOUNT \
--role roles/compute.admin

gcloud projects add-iam-policy-binding $PROJECTID \
--member serviceAccount:$SERVICEACCOUNT \
--role roles/iam.serviceAccountUser

# Create inlets service account key file
gcloud iam service-accounts keys create key.json \
--iam-account $SERVICEACCOUNT

# Create a secret to store the service account key file
kubectl create secret generic inlets-access-key --from-file=inlets-access-key=key.json

# Add and update the inlets-operator helm repo
helm repo add inlets https://inlets.github.io/inlets-operator/

helm repo update

# Install inlets-operator with the required fields
helm upgrade inlets-operator --install inlets/inlets-operator \
  --set provider=gce,zone=us-central1-a,projectID=$PROJECTID
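
Once the release is installed, a quick check that the chart deployed and the operator is running; the release and Deployment names follow the command above, and the namespace is whichever one you ran helm against:

# Confirm the helm release, then check the operator Deployment and its logs
helm list
kubectl get deploy inlets-operator
kubectl logs deploy/inlets-operator -f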

Expose a service with a LoadBalancer

The LoadBalancer type is usually provided by a cloud controller, but when that is not available, you can use the inlets-operator to get a public IP and ingress. The free OSS version of inlets provides an HTTP tunnel; inlets PRO can provide TCP and full functionality for an IngressController.

First create a deployment for Nginx.

For Kubernetes 1.17 and lower:

kubectl run nginx-1 --image=nginx --port=80 --restart=Always

For 1.18 and higher:

kubectl apply -f https://raw.githubusercontent.com/inlets/inlets-operator/master/contrib/nginx-sample-deployment.yaml

Now create a service of type LoadBalancer via kubectl expose:

kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

# The EXTERNAL-IP shows <pending> until the exit-node has been provisioned
kubectl get svc

# Inspect the Tunnel Custom Resource created for the Service
kubectl get tunnel/nginx-1-tunnel -o yaml

# Follow the logs of the inlets client Pod
kubectl logs deploy/nginx-1-tunnel-client

Check the IP of the LoadBalancer and then access it via the Internet.
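
For example, once the EXTERNAL-IP is no longer pending, you can read it with a jsonpath query and test the tunnel from outside the cluster; the jsonpath expression below is just one way to fetch the IP:

# Grab the public IP assigned to the Service, then request the page over the tunnel
export IP=$(kubectl get svc nginx-1 -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
curl -i http://$IP/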

Get an IngressController with TLS certificates

You can bring your own IngressController, such as ingress-nginx or Traefik, and if you are using inlets PRO, you can also get TLS termination and certificates from LetsEncrypt via cert-manager.

Notes on OSS inlets

inlets PRO can tunnel multiple ports, but inlets OSS is set to take the first port named "http" for your service. With the OSS version of inlets (see example with OpenFaaS), make sure you give the port a name of http, otherwise a default of 80 will be used incorrectly.

apiVersion: v1
kind: Service
metadata:
  name: gateway
  namespace: openfaas
  labels:
    app: gateway
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31112
  selector:
    app: gateway
  type: LoadBalancer
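
Applying a Service like this and watching it is enough to see the tunnel come up; the file name below is hypothetical:

# Apply the Service, then wait for the operator to assign a public IP
kubectl apply -f gateway-service.yaml
kubectl get svc -n openfaas gateway -w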

Annotations and ignoring services

By default the operator will create a tunnel for every LoadBalancer service.

To ignore a service such as traefik, type in: kubectl annotate svc/traefik -n kube-system dev.inlets.manage=false

You can also set the operator to ignore Services by default, and only manage those annotated with dev.inlets.manage=true. To do this, run the operator with the -annotated-only flag.
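
In that mode, opting an individual Service in looks like this, using the nginx-1 example from earlier:

# Opt a single Service in to tunnel management
kubectl annotate svc/nginx-1 dev.inlets.manage=true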

Monitor/view logs

The operator deployment is in the kube-system namespace.

kubectl logs deploy/inlets-operator -n kube-system -f

Running on a Raspberry Pi

Use the same commands as described in the section above.

There used to be separate deployment files in the artifacts folder, called operator-amd64.yaml and operator-armhf.yaml. Since version 0.2.7, Docker images are built for multiple architectures with the same tag, which means there is now just one deployment file, operator.yaml, that can be used on all supported architectures.

Provider Pricing

The host provisioning code used by the inlets-operator is shared with inletsctl; both tools use the configuration in the table below.

These costs should be treated as estimates and will depend on your bandwidth usage and how many hosts you decide to create. You can check your cloud provider's dashboard, API, or CLI at any time to view your exit-nodes. The hosts provided have been chosen because they are the lowest-cost options that the maintainers could find.

| Provider | Price per month | Price per hour | OS image | CPU | Memory | Boot time |
|----------|-----------------|----------------|----------|-----|--------|-----------|
| Google Compute Engine * | ~$4.28 | ~$0.006 | Debian GNU/Linux 9 (stretch) | 1 | 614 MB | ~3-15s |
| Packet | ~$51 | $0.07 | Ubuntu 16.04 | 4 | 8 GB | ~45-60s |
| DigitalOcean | $5 | ~$0.0068 | Ubuntu 16.04 | 1 | 512 MB | ~20-30s |
| Scaleway | 2.99€ | 0.006€ | Ubuntu 18.04 | 2 | 2 GB | 3-5m |

  • * The first f1-micro instance in a GCP Project (the default instance type for inlets-operator) is free for 720 hrs (30 days) per month

You can purchase inlets PRO here

Contributing

Contributions are welcome, see the CONTRIBUTING.md guide.

Similar projects / products and alternatives

  • inlets pro - L4 TCP tunnel, which can tunnel any TCP traffic with automatic, built-in encryption. Kubernetes-ready with Docker images and YAML manifests.
  • inlets - provides an L7 HTTP tunnel for applications through the use of an exit node; it is used by the inlets-operator. Encryption can be configured separately.
  • metallb - open source LoadBalancer for private Kubernetes clusters, no tunnelling.
  • Cloudflare Argo - paid SaaS product from Cloudflare for Cloudflare customers and domains - K8s integration available through Ingress
  • ngrok - a popular tunnelling tool; tunnels restart every 7 hours and connections per minute are limited. It is a paid SaaS product with no K8s integration available.