Distribution

Deploy the distribution on top of a Kubernetes Cluster

This Quick Start guide walks you through deploying the Kubernetes Fury Distribution (KFD) on top of a Kubernetes cluster.

Prerequisites

To follow this Quick Start guide and install the Kubernetes Fury Distribution, you need a running Kubernetes cluster. If you don't have one available, you can create one quickly using either of the two alternatives described in the previous sections: create a cluster on AWS or run a local cluster.

The following software is required to deploy the distribution:

  • kubectl: Used to manage the cluster. Recommended version: 1.19.4
  • kustomize: Used to render the distribution manifests. Required version: 3.3 or later
  • furyctl: Downloads the distribution files. Required version: v0.2.4 or later

Check that the requirements are met before starting:

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
$ kustomize version
Version: {Version:kustomize/v3.3.0 GitCommit:7050c6a7b692fdba6e831e63c7b83920ab03ad76 BuildDate:2019-10-24T17:54:30Z GoOs:linux GoArch:amd64}
$ furyctl version
INFO[0000] Furyctl version  0.2.4

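If you want to automate this check, here is a minimal pre-flight sketch (a convenience script, not part of the distribution):

# Fail early if any of the required tools is missing from PATH.
for tool in kubectl kustomize furyctl; do
  command -v "${tool}" >/dev/null 2>&1 || { echo "ERROR: ${tool} not found in PATH" >&2; exit 1; }
done
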
Hands-on

Let’s define some useful variables:

$ export CLUSTER_DIR="/tmp/sighup.io/cluster"
$ export KFD_VERSION="v1.7.0"

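If you did not create the working directory in a previous step (this guide assumes it already contains the cluster configuration files), create it first:

$ mkdir -p "${CLUSTER_DIR}"
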
Download the distribution files:

$ cd ${CLUSTER_DIR}
$ ls
config		kind-kfd-config
$ furyctl init --version ${KFD_VERSION}
INFO[0000] downloading: github.com/sighupio/fury-distribution/releases/download/v1.7.0/Furyfile.yml -> Furyfile.yml
INFO[0000] removing Furyfile.yml/.git
ERRO[0000] unlinkat Furyfile.yml/.git: not a directory
INFO[0000] unlinkat Furyfile.yml/.git: not a directory
INFO[0000] downloading: github.com/sighupio/fury-distribution/releases/download/v1.7.0/kustomization.yaml -> kustomization.yaml
INFO[0001] removing kustomization.yaml/.git
ERRO[0001] unlinkat kustomization.yaml/.git: not a directory
INFO[0001] unlinkat kustomization.yaml/.git: not a directory
$ furyctl vendor -H
INFO[0000] using v1.7.0 for package networking/calico
INFO[0000] using v1.13.0 for package monitoring/prometheus-operator
INFO[0000] using v1.13.0 for package monitoring/prometheus-operated
INFO[0000] using v1.13.0 for package monitoring/grafana
INFO[0000] using v1.13.0 for package monitoring/goldpinger
INFO[0000] using v1.13.0 for package monitoring/configs
INFO[0000] using v1.13.0 for package monitoring/kubeadm-sm
INFO[0000] using v1.13.0 for package monitoring/kube-proxy-metrics
INFO[0000] using v1.13.0 for package monitoring/kube-state-metrics
INFO[0000] using v1.13.0 for package monitoring/node-exporter
INFO[0000] using v1.13.0 for package monitoring/metrics-server
INFO[0000] using v1.9.0 for package logging/elasticsearch-single
INFO[0000] using v1.9.0 for package logging/cerebro
INFO[0000] using v1.9.0 for package logging/curator
INFO[0000] using v1.9.0 for package logging/fluentd
INFO[0000] using v1.9.0 for package logging/kibana
INFO[0000] using v1.11.0 for package ingress/cert-manager
INFO[0000] using v1.11.0 for package ingress/nginx
INFO[0000] using v1.11.0 for package ingress/forecastle
INFO[0000] using v1.8.0 for package dr/velero
INFO[0000] using v1.5.0 for package opa/gatekeeper
INFO[0000] downloading: github.com/sighupio/fury-kubernetes-networking.git/katalog/calico?ref=v1.7.0 -> vendor/katalog/networking/calico
INFO[0000] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/prometheus-operator?ref=v1.13.0 -> vendor/katalog/monitoring/prometheus-operator
INFO[0000] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/grafana?ref=v1.13.0 -> vendor/katalog/monitoring/grafana
INFO[0000] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/goldpinger?ref=v1.13.0 -> vendor/katalog/monitoring/goldpinger
INFO[0000] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/prometheus-operated?ref=v1.13.0 -> vendor/katalog/monitoring/prometheus-operated
INFO[0000] removing vendor/katalog/networking/calico/.git
INFO[0000] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/configs?ref=v1.13.0 -> vendor/katalog/monitoring/configs
INFO[0001] removing vendor/katalog/monitoring/grafana/.git
INFO[0001] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/kubeadm-sm?ref=v1.13.0 -> vendor/katalog/monitoring/kubeadm-sm
INFO[0001] removing vendor/katalog/monitoring/prometheus-operated/.git
INFO[0001] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/kube-proxy-metrics?ref=v1.13.0 -> vendor/katalog/monitoring/kube-proxy-metrics
INFO[0001] removing vendor/katalog/monitoring/prometheus-operator/.git
INFO[0001] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/kube-state-metrics?ref=v1.13.0 -> vendor/katalog/monitoring/kube-state-metrics
INFO[0001] removing vendor/katalog/monitoring/goldpinger/.git
INFO[0001] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/node-exporter?ref=v1.13.0 -> vendor/katalog/monitoring/node-exporter
INFO[0002] removing vendor/katalog/monitoring/configs/.git
INFO[0002] downloading: github.com/sighupio/fury-kubernetes-monitoring.git/katalog/metrics-server?ref=v1.13.0 -> vendor/katalog/monitoring/metrics-server
INFO[0002] removing vendor/katalog/monitoring/kubeadm-sm/.git
INFO[0002] downloading: github.com/sighupio/fury-kubernetes-logging.git/katalog/elasticsearch-single?ref=v1.9.0 -> vendor/katalog/logging/elasticsearch-single
INFO[0002] removing vendor/katalog/monitoring/kube-state-metrics/.git
INFO[0002] downloading: github.com/sighupio/fury-kubernetes-logging.git/katalog/cerebro?ref=v1.9.0 -> vendor/katalog/logging/cerebro
INFO[0002] removing vendor/katalog/monitoring/kube-proxy-metrics/.git
INFO[0002] downloading: github.com/sighupio/fury-kubernetes-logging.git/katalog/curator?ref=v1.9.0 -> vendor/katalog/logging/curator
INFO[0002] removing vendor/katalog/monitoring/node-exporter/.git
INFO[0002] downloading: github.com/sighupio/fury-kubernetes-logging.git/katalog/fluentd?ref=v1.9.0 -> vendor/katalog/logging/fluentd
INFO[0003] removing vendor/katalog/logging/elasticsearch-single/.git
INFO[0003] downloading: github.com/sighupio/fury-kubernetes-logging.git/katalog/kibana?ref=v1.9.0 -> vendor/katalog/logging/kibana
INFO[0003] removing vendor/katalog/monitoring/metrics-server/.git
INFO[0003] downloading: github.com/sighupio/fury-kubernetes-ingress.git/katalog/cert-manager?ref=v1.11.0 -> vendor/katalog/ingress/cert-manager
INFO[0003] removing vendor/katalog/logging/curator/.git
INFO[0003] downloading: github.com/sighupio/fury-kubernetes-ingress.git/katalog/nginx?ref=v1.11.0 -> vendor/katalog/ingress/nginx
INFO[0003] removing vendor/katalog/logging/cerebro/.git
INFO[0003] downloading: github.com/sighupio/fury-kubernetes-ingress.git/katalog/forecastle?ref=v1.11.0 -> vendor/katalog/ingress/forecastle
INFO[0003] removing vendor/katalog/logging/kibana/.git
INFO[0003] downloading: github.com/sighupio/fury-kubernetes-dr.git/katalog/velero?ref=v1.8.0 -> vendor/katalog/dr/velero
INFO[0003] removing vendor/katalog/logging/fluentd/.git
INFO[0003] downloading: github.com/sighupio/fury-kubernetes-opa.git/katalog/gatekeeper?ref=v1.5.0 -> vendor/katalog/opa/gatekeeper
INFO[0003] removing vendor/katalog/ingress/forecastle/.git
INFO[0004] removing vendor/katalog/ingress/nginx/.git
INFO[0004] removing vendor/katalog/dr/velero/.git
INFO[0004] removing vendor/katalog/opa/gatekeeper/.git
INFO[0005] removing vendor/katalog/ingress/cert-manager/.git

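After vendoring, the downloaded modules live under vendor/katalog. A quick sanity check (the expected layout is inferred from the download log above):

$ ls vendor/katalog
dr  ingress  logging  monitoring  networking  opa
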
Build the distribution manifests and apply them to the cluster (the first command only renders the manifests so you can review them; the second pipes them to kubectl):

$ kustomize build
$ kustomize build | kubectl apply -f -

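You can follow the rollout instead of polling by hand; a sketch (the timeout value is arbitrary):

$ kubectl wait nodes --all --for=condition=Ready --timeout=300s
$ kubectl get pods --all-namespaces --watch
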
After some time (depending on the capacity of your machine), check the cluster:

$ kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
kfd-quick-start-control-plane   Ready    master   14m   v1.19.4
kfd-quick-start-worker          Ready    <none>   13m   v1.19.4
$ kubectl get pods -A
NAMESPACE            NAME                                                                 READY   STATUS      RESTARTS   AGE
cert-manager         cert-manager-b4767cd87-cc4fk                                         1/1     Running     0          5m24s
cert-manager         cert-manager-cainjector-7cff7c9699-8qlnx                             1/1     Running     0          5m24s
cert-manager         cert-manager-webhook-6665ff6ddd-scxlc                                1/1     Running     0          5m23s
gatekeeper-system    gatekeeper-audit-7d64cc45df-pblf4                                    1/1     Running     0          5m23s
gatekeeper-system    gatekeeper-controller-manager-6999845cc9-f9fb8                       1/1     Running     0          5m23s
gatekeeper-system    gatekeeper-controller-manager-6999845cc9-lq8dx                       1/1     Running     0          5m23s
gatekeeper-system    gatekeeper-controller-manager-6999845cc9-nlhhd                       1/1     Running     0          5m23s
gatekeeper-system    gatekeeper-policy-manager-7bfd5dbdd4-ntvdn                           1/1     Running     0          5m23s
ingress-nginx        forecastle-f66cc877f-dzqnl                                           1/1     Running     0          5m23s
ingress-nginx        nginx-ingress-controller-9spmh                                       1/1     Running     0          4m58s
ingress-nginx        nginx-ingress-controller-ctf97                                       1/1     Running     0          4m59s
ingress-nginx        nginx-ingress-controller-vx8hn                                       1/1     Running     0          5m
kube-system          calico-kube-controllers-7c6869b847-nvv89                             1/1     Running     0          5m23s
kube-system          calico-node-4hmbv                                                    1/1     Running     0          5m22s
kube-system          calico-node-58lxc                                                    1/1     Running     0          5m22s
kube-system          calico-node-9gcmc                                                    1/1     Running     0          5m22s
kube-system          calico-node-zv9vt                                                    1/1     Running     0          5m22s
kube-system          coredns-558bd4d5db-2j7mk                                             1/1     Running     0          46m
kube-system          coredns-558bd4d5db-lnrvg                                             1/1     Running     0          46m
kube-system          etcd-ip-172-31-33-70.eu-west-1.compute.internal                      1/1     Running     0          47m
kube-system          kube-apiserver-ip-172-31-33-70.eu-west-1.compute.internal            1/1     Running     0          47m
kube-system          kube-controller-manager-ip-172-31-33-70.eu-west-1.compute.internal   1/1     Running     0          47m
kube-system          kube-proxy-6m72n                                                     1/1     Running     0          46m
kube-system          kube-proxy-hwwww                                                     1/1     Running     0          46m
kube-system          kube-proxy-vnhwx                                                     1/1     Running     0          46m
kube-system          kube-proxy-w55nz                                                     1/1     Running     0          46m
kube-system          kube-scheduler-ip-172-31-33-70.eu-west-1.compute.internal            1/1     Running     0          47m
kube-system          metrics-server-5dd7c59cfd-bgwg9                                      1/1     Running     0          5m23s
kube-system          minio-0                                                              1/1     Running     0          5m23s
kube-system          minio-setup-wdlqn                                                    0/1     Completed   0          5m22s
kube-system          velero-85bb6fbbc6-5lwqc                                              1/1     Running     0          5m23s
kube-system          velero-restic-8mxk7                                                  1/1     Running     0          5m
kube-system          velero-restic-dz2jq                                                  1/1     Running     0          4m58s
kube-system          velero-restic-wwqck                                                  1/1     Running     0          4m59s
local-path-storage   local-path-provisioner-64bb9787d9-znd8l                              1/1     Running     0          46m
logging              cerebro-578676bc6b-5lsgp                                             1/1     Running     0          5m23s
logging              elasticsearch-0                                                      2/2     Running     0          5m23s
logging              fluentbit-69rpt                                                      1/1     Running     0          5m22s
logging              fluentbit-8l6k4                                                      1/1     Running     0          5m22s
logging              fluentbit-nnl6k                                                      1/1     Running     0          5m22s
logging              fluentbit-vgv8d                                                      1/1     Running     1          5m22s
logging              fluentd-0                                                            1/1     Running     1          5m23s
logging              fluentd-1                                                            1/1     Running     0          3m6s
logging              fluentd-2                                                            1/1     Running     0          2m32s
logging              kibana-7fd6f6897c-nwtj6                                              1/1     Running     0          5m22s
monitoring           goldpinger-2mqkr                                                     1/1     Running     0          5m22s
monitoring           goldpinger-bdznq                                                     1/1     Running     0          5m22s
monitoring           goldpinger-mn6gs                                                     1/1     Running     0          5m22s
monitoring           goldpinger-znl9d                                                     1/1     Running     0          5m22s
monitoring           grafana-5df78cd97-ssdzp                                              2/2     Running     0          5m22s
monitoring           kube-proxy-metrics-78zk2                                             1/1     Running     0          5m22s
monitoring           kube-proxy-metrics-7kzqc                                             1/1     Running     0          5m22s
monitoring           kube-proxy-metrics-9bfsf                                             1/1     Running     0          5m22s
monitoring           kube-proxy-metrics-frbxv                                             1/1     Running     0          5m22s
monitoring           kube-state-metrics-7657b8f59c-gszlx                                  1/1     Running     0          5m22s
monitoring           node-exporter-449p8                                                  2/2     Running     0          5m22s
monitoring           node-exporter-6spbq                                                  2/2     Running     0          5m22s
monitoring           node-exporter-d8nt9                                                  2/2     Running     0          5m22s
monitoring           node-exporter-zf4lz                                                  2/2     Running     0          5m20s
monitoring           prometheus-k8s-0                                                     2/2     Running     1          4m9s
monitoring           prometheus-operator-654d5c5468-g4tdz                                 1/1     Running     0          5m22s

Everything should be up and running.

Test it

# The following command is only required in the QuickStart setup. In a production setup, you need a load balancer
# in front of the nodes hosting the ingress controller.
$ kubectl patch svc -n ingress-nginx ingress-nginx --patch '{"spec": {"externalTrafficPolicy": "Cluster"}}'
service/ingress-nginx patched
$ kubectl run fury-demo --image=nginx --restart=Always  --port 80 --expose
service/fury-demo created
deployment.apps/fury-demo created

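Before wiring up the Ingress, you can confirm that the demo Deployment and Service exist:

$ kubectl get deployment,service fury-demo
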
Now expose the service through the ingress controller.

But first, you need to know your public IP address:

$ curl ifconfig.co
18.156.197.200

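If you prefer not to edit the manifests by hand, you can capture the IP in a shell variable (MY_IP is just a local name used in this sketch) and reference it in the host fields below, since the heredoc expands shell variables:

$ export MY_IP="$(curl -s ifconfig.co)"
$ echo "fury-demo.${MY_IP}.nip.io"
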
In the following manifest, replace the IP in the spec.rules[0].host attribute with your own public IP (nip.io resolves any hostname that embeds an IP address back to that IP):

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fury-demo
spec:
  rules:
  - host: fury-demo.18.156.197.200.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: fury-demo
          servicePort: 80
EOF

Open a web browser and navigate to: fury-demo.18.156.197.200.nip.io:31080 (replace the IP with your own).

The ingress controller is exposed as a NodePort service on port 31080.
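
You can also test it from the command line; you should get back the default nginx welcome page:

$ curl -s http://fury-demo.18.156.197.200.nip.io:31080/ | grep "<title>"
<title>Welcome to nginx!</title>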

Screenshot: the default nginx welcome page served through the fury-demo Ingress.

Access Fury Components

Monitoring & Logging

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: monitoring
  namespace: monitoring
spec:
  rules:
  - host: grafana.18.156.197.200.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
  - host: prometheus.18.156.197.200.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-k8s
          servicePort: 9090
  - host: goldpinger.18.156.197.200.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: goldpinger
          servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: logging
  namespace: logging
spec:
  rules:
  - host: kibana.18.156.197.200.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
  - host: cerebro.18.156.197.200.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: cerebro
          servicePort: 9000
EOF

Open a web browser and navigate to: {grafana,prometheus,goldpinger,kibana,cerebro}.18.156.197.200.nip.io:31080 (replace the IP with your own).
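
A quick reachability check from the command line:

$ for app in grafana prometheus goldpinger kibana cerebro; do
    curl -s -o /dev/null -w "%{http_code}  ${app}\n" "http://${app}.18.156.197.200.nip.io:31080/"
  done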

Grafana

Screenshot: the Grafana UI.

Prometheus

Screenshot: the Prometheus UI.

Goldpinger

Screenshot: the Goldpinger UI.

Kibana

Screenshot: the Kibana UI.

Cerebro

Screenshot: the Cerebro UI.