Set up the installation
This installation procedure is the one recommended by SIGHUP to deploy this distribution in an extensible and effective manner.
Before installing the distribution, you should already have a Kubernetes cluster with the following configuration:
- Kubernetes version 1.19, 1.20, 1.21, or 1.22 (currently in tech preview).
- Managed Kubernetes services (AKS, EKS, GKE) are also supported.
- A dedicated node pool to deploy infrastructural components. At least three nodes with the following characteristics:
  - CPU: >= 8 cores
  - Memory: >= 16 GB RAM
  - Storage: >= 200 GB
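To sanity-check candidate nodes against these minimums, the comparison can be sketched as a small shell helper. This is an illustrative sketch, not part of the distribution's tooling; in practice the capacity values would come from `kubectl get node <name> -o jsonpath='{.status.capacity}'`:

```shell
# check_infra_node CPU_CORES MEM_GIB
# Succeeds if a node's capacity meets the recommended minimums for an
# infrastructural node (>= 8 CPU cores, >= 16 GiB RAM).
check_infra_node() {
  local cpu="$1" mem_gib="$2"
  [ "$cpu" -ge 8 ] && [ "$mem_gib" -ge 16 ]
}

check_infra_node 8 16 && echo "worker-1: OK"
check_infra_node 4 16 || echo "worker-2: too few cores"
```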
Configuring infrastructural nodes
In this example, a Kubernetes cluster with 9 worker nodes and 3 master nodes, we can pick three of the workers and mark them as infrastructural nodes.
```shell
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
worker-1   Ready    <none>   14s   v1.19.4
worker-2   Ready    <none>   14s   v1.19.4
worker-3   Ready    <none>   14s   v1.19.4
worker-4   Ready    <none>   14s   v1.19.4
worker-5   Ready    <none>   14s   v1.19.4
worker-6   Ready    <none>   14s   v1.19.4
worker-7   Ready    <none>   14s   v1.19.4
worker-8   Ready    <none>   14s   v1.19.4
worker-9   Ready    <none>   14s   v1.19.4
master-1   Ready    master   14s   v1.19.4
master-2   Ready    master   14s   v1.19.4
master-3   Ready    master   14s   v1.19.4
```
Ideally, you should select three nodes located in three different physical locations, following the same high availability best practices used to deploy the Kubernetes master nodes.
worker-1, worker-2 and worker-3 are selected to be marked as infrastructural nodes. This involves adding a specific label and a taint.
```shell
$ kubectl label node worker-1 node-role.kubernetes.io/infra= --overwrite
$ kubectl label node worker-2 node-role.kubernetes.io/infra= --overwrite
$ kubectl label node worker-3 node-role.kubernetes.io/infra= --overwrite
$ kubectl taint nodes -l node-role.kubernetes.io/infra= node-role.kubernetes.io/infra=:NoSchedule
```
Check the result:
```shell
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
worker-1   Ready    infra    43s   v1.19.4
worker-2   Ready    infra    43s   v1.19.4
worker-3   Ready    infra    43s   v1.19.4
worker-4   Ready    <none>   43s   v1.19.4
worker-5   Ready    <none>   43s   v1.19.4
worker-6   Ready    <none>   43s   v1.19.4
worker-7   Ready    <none>   43s   v1.19.4
worker-8   Ready    <none>   43s   v1.19.4
worker-9   Ready    <none>   43s   v1.19.4
master-1   Ready    master   43s   v1.19.4
master-2   Ready    master   43s   v1.19.4
master-3   Ready    master   43s   v1.19.4
```
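After the label and taint commands above, the relevant parts of each infrastructural node's manifest should look roughly like the following sketch (field order and surrounding fields will differ). The `NoSchedule` effect means only pods that explicitly tolerate this taint will be scheduled on these nodes:

```yaml
# Sketch of the node fields set by the kubectl label/taint commands above.
metadata:
  labels:
    node-role.kubernetes.io/infra: ""
spec:
  taints:
    - key: node-role.kubernetes.io/infra
      effect: NoSchedule
```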
To follow this installation guide you’ll need the following tools:
- SIGHUP tooling: furyctl
- Kubernetes common tooling like: kubectl and kustomize
- Linux or macOS operating system with OS tools like: git, sed and tree
- A git repository to save the configuration. We will use a git.example.com/my-fury-cluster.git repo as an example.
Every command in this Installation section (including customizations) runs in the root of your git repository:
```shell
$ pwd
my-fury-cluster
$ git remote -v
origin  git@git.example.com:my-fury-cluster.git (fetch)
origin  git@git.example.com:my-fury-cluster.git (push)
```
First, create the recommended project structure inside your local git repository:
```shell
$ mkdir -p manifests/distribution manifests/patches
```
Then, pull the distribution files and download every package into the vendor directory:
```shell
$ furyctl init --version v1.7.0
INFO downloading: github.com/sighupio/fury-distribution/releases/download/v1.7.0/Furyfile.yml -> Furyfile.yml
INFO downloading: github.com/sighupio/fury-distribution/releases/download/v1.7.0/kustomization.yaml -> kustomization.yaml
$ furyctl vendor -H
# Omitted output
```
Once the vendor directory is downloaded, we can continue to configure the distribution.
Move the recommended kustomization.yaml file downloaded from the fury distribution repository (furyctl init) to the distribution directory that was created before (mkdir -p manifests/distribution), with some modifications:
```shell
# Change vendor paths
$ sed 's@./vendor@../../vendor@g' kustomization.yaml > manifests/distribution/kustomization.yaml
# Delete original kustomization file
$ rm kustomization.yaml
$ cd manifests/
# Create a kustomization.yaml file
$ kustomize create
# Add the distribution bases to this kustomize project
$ echo -e "\nbases:\n - distribution" >> kustomization.yaml
$ cat kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
 - distribution
```
```shell
$ pwd
my-fury-cluster
$ tree
.
├── Furyfile.yml
├── manifests
│   ├── distribution
│   │   └── kustomization.yaml
│   ├── kustomization.yaml
│   └── patches
└── vendor
    └── katalog
        # Omitted output
```
This repository structure makes it possible to extend and configure the distribution.
Assign infrastructure components to infrastructural nodes
Networking, monitoring, logging, ingress and disaster recovery components should be deployed to the infrastructural nodes. This way, we can guarantee the correct operation of the cluster.
Run the following command to add the needed lines to your manifests/kustomization.yaml file:
```shell
cat <<EOT >> manifests/kustomization.yaml
patches:
  - patches/calico.yaml
  - patches/prometheus-operator.yaml
  - patches/prometheus-operated.yaml
  - patches/grafana.yaml
  - patches/metrics-server.yaml
  - patches/kube-state-metrics.yaml
  - patches/elasticsearch.yaml
  - patches/cerebro.yaml
  - patches/curator.yaml
  - patches/kibana.yaml
  - patches/fluentd.yaml
  - patches/cert-manager-ca-injector.yaml
  - patches/cert-manager-controller.yaml
  - patches/cert-manager-webhook.yaml
  - patches/forecastle.yaml
  - patches/nginx.yaml
  - patches/velero.yaml
  - patches/minio.yaml
  - patches/minio-setup.yaml
  - patches/gatekeeper.yaml
  - patches/gatekeeper-audit.yaml
EOT
```
The resulting manifests/kustomization.yaml file should look like:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - distribution
patches:
  - patches/calico.yaml
  - patches/prometheus-operator.yaml
  - patches/prometheus-operated.yaml
  - patches/grafana.yaml
  - patches/metrics-server.yaml
  - patches/kube-state-metrics.yaml
  - patches/elasticsearch.yaml
  - patches/cerebro.yaml
  - patches/curator.yaml
  - patches/kibana.yaml
  - patches/fluentd.yaml
  - patches/cert-manager-ca-injector.yaml
  - patches/cert-manager-controller.yaml
  - patches/cert-manager-webhook.yaml
  - patches/forecastle.yaml
  - patches/nginx.yaml
  - patches/velero.yaml
  - patches/minio.yaml
  - patches/minio-setup.yaml
  - patches/gatekeeper.yaml
  - patches/gatekeeper-audit.yaml
```
ATTENTION: Don’t forget to create the listed patch files inside the manifests/patches directory.
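Each patch pins one workload to the infrastructural nodes by adding a nodeSelector for the infra label and a toleration for the infra taint set earlier. The following is a hypothetical sketch of what one such patch could look like; the resource name, namespace, and kind are illustrative assumptions and must match the actual objects deployed by the distribution:

```yaml
# patches/grafana.yaml (illustrative sketch, not the distribution's actual patch):
# schedule the Grafana Deployment onto infrastructural nodes only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana          # assumed name; must match the deployed resource
  namespace: monitoring  # assumed namespace
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
        - key: node-role.kubernetes.io/infra
          effect: NoSchedule
```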
Everything is ready to be deployed to the cluster. In case you need additional configuration, take a look at the customization section.
Otherwise, continue to the next section: applying the cluster configuration
If you have followed every step, you can verify everything is working with the following command:
```shell
$ kustomize build manifests
# Omitted output
```
You should see a bunch of resource definitions printed in the console.
Commit the changes to the repository
As we made some progress with the setup of the Kubernetes Fury Distribution, it’s really important to track changes in the repository. So, push everything to the repository before continuing.
```shell
$ git add .
$ git commit -m "Add basic Kubernetes Fury Distribution project structure"
$ git push origin master
```