
Fury on EKS

This step-by-step tutorial guides you through deploying the Kubernetes Fury Distribution on an EKS cluster on AWS.

This tutorial covers the following steps:

  1. Deploy an EKS Kubernetes cluster on AWS with furyctl
  2. Download the latest version of Fury with furyctl
  3. Install the Fury distribution
  4. Explore some features of the distribution
  5. (optional) Deploy additional modules of the Fury distribution
  6. Teardown of the environment

⚠ī¸ AWS charges you to provision the resources used in this tutorial. You should be charged only a few dollars, but we are not responsible for any costs that incur.

❗ī¸ Remember to stop all the instances by following all the steps listed in the teardown phase.

đŸ’ģ If you prefer trying Fury in a local environment, check out the Fury on Minikube tutorial.

Prerequisites​

This tutorial assumes some basic familiarity with Kubernetes and AWS. Some experience with Terraform is helpful but not required.

To follow this tutorial, you need:

  • AWS Access Credentials of an AWS Account with the necessary IAM permissions.
  • Docker - the tutorial uses a Docker image containing furyctl and all the necessary tools to follow it.
  • OpenVPN Client - Tunnelblick (on macOS) or OpenVPN Connect (for other OS) are recommended.
  • AWS S3 Bucket (optional) to store the Terraform state.
  • GitHub account with an SSH key configured.

Setup and initialize the environment​

  1. Open a terminal

  2. Clone the fury-getting-started repository containing the example code used in this tutorial:

git clone https://github.com/sighupio/fury-getting-started/
cd fury-getting-started/fury-on-eks
  3. Run the fury-getting-started docker image:
docker run -ti --rm \
-v $PWD:/demo \
registry.sighup.io/delivery/fury-getting-started
  4. Set up your AWS credentials by exporting the following environment variables:
export AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
export AWS_DEFAULT_REGION=<YOUR_AWS_REGION>

Alternatively, authenticate with AWS by running aws configure in your terminal. When prompted, enter your AWS Access Key ID, Secret Access Key, region, and output format.

$ aws configure
AWS Access Key ID [None]: <YOUR_AWS_ACCESS_KEY_ID>
AWS Secret Access Key [None]: <YOUR_AWS_SECRET_ACCESS_KEY>
Default region name [None]: <YOUR_AWS_REGION>
Default output format [None]: json
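As an optional sanity check, you can confirm that the credentials are picked up correctly before moving on. The following AWS CLI call should print the account ID and the ARN of the IAM identity you just configured:

# Optional check: prints the account and identity associated with the configured credentials
aws sts get-caller-identity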

You are all set ✌ī¸.

Step 1 - Automatic provisioning of an EKS Cluster with furyctl​

furyctl is a command-line tool developed by SIGHUP to support:

  • the automatic provisioning of Kubernetes clusters in various cloud environments
  • the installation of the Fury distribution

The provisioning process is divided into two phases:

  1. Bootstrap provisioning phase
  2. Cluster provisioning phase

Bootstrap provisioning phase

In the bootstrap phase, furyctl automatically provisions:

  • Virtual Private Cloud (VPC) in a specified CIDR range with public and private subnets
  • EC2 instance bastion host with an OpenVPN Server
  • All the required networking gateways and routes

More details about the bootstrap provisioner can be found here.

Configure the bootstrap provisioner​

The bootstrap provisioner takes a bootstrap.yml file as input. This file provides the bootstrap provisioner with all the parameters needed to deploy the networking infrastructure.

For this tutorial, use the bootstrap.yml template located at /demo/infrastructure/bootstrap.yml:

kind: Bootstrap
metadata:
  name: fury-eks-demo
spec:
  networkCIDR: 10.0.0.0/16
  publicSubnetsCIDRs:
    - 10.0.1.0/24
    - 10.0.2.0/24
    - 10.0.3.0/24
  privateSubnetsCIDRs:
    - 10.0.101.0/24
    - 10.0.102.0/24
    - 10.0.103.0/24
  vpn:
    instances: 1
    port: 1194
    instanceType: t3.micro
    diskSize: 50
    operatorName: fury
    dhParamsBits: 2048
    subnetCIDR: 172.16.0.0/16
    sshUsers:
      - <GITHUB_USER>
executor:
  # state:
  #   backend: s3
  #   config:
  #     bucket: <S3_BUCKET>
  #     key: furyctl/bootstrap
  #     region: <S3_BUCKET_REGION>
provisioner: aws

Open the /demo/infrastructure/bootstrap.yml file with a text editor of your choice and:

  • Replace the field <GITHUB_USER> with your actual GitHub username
  • Ensure that the VPC and subnet CIDR ranges are not already in use. If they are, specify different values in the following fields:
    • networkCIDR
    • publicSubnetsCIDRs
    • privateSubnetsCIDRs

Leave the rest as configured. More details about each field can be found here.
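If you are unsure whether the default ranges overlap with something already deployed in your account, a quick optional way to list the CIDR blocks of the existing VPCs in the current region is:

# Lists the CIDR blocks of all VPCs in the current region
aws ec2 describe-vpcs --query "Vpcs[].CidrBlock" --output text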

(optional) Create S3 Bucket to hold the Terraform remote state

Although this is a tutorial, it is always good practice to use a remote Terraform state instead of a local one. If you are not familiar with Terraform, you can skip this section.

  1. Choose a unique name and an AWS region for the S3 Bucket:
export S3_BUCKET=fury-demo-eks              # Use a different name
export S3_BUCKET_REGION=$AWS_DEFAULT_REGION # You can use the same region as before
  2. Create the S3 bucket using the AWS CLI:
aws s3api create-bucket \
--bucket $S3_BUCKET \
--region $S3_BUCKET_REGION \
--create-bucket-configuration LocationConstraint=$S3_BUCKET_REGION

ℹī¸ You might need to give permissions on S3 to the user.

  3. Once created, uncomment the executor.state block in the /demo/infrastructure/bootstrap.yml file:
...
executor:
  state:
    backend: s3
    config:
      bucket: <S3_BUCKET>
      key: furyctl/bootstrap
      region: <S3_BUCKET_REGION>
  4. Replace the <S3_BUCKET> and <S3_BUCKET_REGION> placeholders with the correct values from the previous commands:
...
executor:
  # version: 0.13.6
  state:
    backend: s3
    config:
      bucket: fury-demo-eks # example value
      key: furyctl/bootstrap
      region: eu-central-1 # example value

Provision networking infrastructure​

  1. Initialize the bootstrap provisioner:
cd /demo/infrastructure/
furyctl bootstrap init

In case you run into errors, you can re-initialize the bootstrap provisioner by adding the --reset flag:

furyctl bootstrap init --reset
  2. If the initialization succeeds, apply the bootstrap provisioner:
furyctl bootstrap apply

⏱ This phase may take some minutes.

Logs are available at /demo/infrastructure/bootstrap/logs/terraform.logs.
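If you want to follow the provisioning in real time, you can tail the log file from another shell attached to the same container (for example with docker exec -ti <container-id> bash, where <container-id> is the ID of the running fury-getting-started container):

# Follow the bootstrap provisioning logs while furyctl is running
tail -f /demo/infrastructure/bootstrap/logs/terraform.logs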

  3. When furyctl bootstrap apply completes, inspect the output:
...
All the bootstrap components are up to date.

VPC and VPN ready:

VPC: vpc-0d2fd9bcb4f68379e
Public Subnets: [subnet-0bc905beb6622f446, subnet-0c6856acb42edf8f3, subnet-0272dcf88b2f5d12c]
Private Subnets: [subnet-072b1e3405f662c70, subnet-0a23db3b19e5a7ed7, subnet-08f4930148ab5223f]

Your VPN instance IPs are: [34.243.133.186]
...

In particular, take note of:

  • VPC - vpc-0d2fd9bcb4f68379e in the example output above
  • Private Subnets - [subnet-072b1e3405f662c70, subnet-0a23db3b19e5a7ed7, subnet-08f4930148ab5223f] in the example output above

These values are used in the cluster provisioning phase.

Cluster provisioning phase​

In the cluster provisioning phase, furyctl automatically deploys a battle-tested private EKS cluster. To interact with the private EKS cluster, you first need to connect to the private network via the OpenVPN server running on the bastion host.

Connect to the private network​

  1. Create the fury.ovpn OpenVPN credentials file with furyagent:
furyagent configure openvpn-client \
--client-name fury \
--config /demo/infrastructure/bootstrap/secrets/furyagent.yml > fury.ovpn

đŸ•ĩđŸģ‍♂ī¸ Furyagent is a tool developed by SIGHUP to manage OpenVPN and SSH user access to the bastion host.

  2. Check that the fury user is now listed:
furyagent configure openvpn-client \
--list \
--config /demo/infrastructure/bootstrap/secrets/furyagent.yml

Output:

2021-06-07 14:37:52.169664 I | storage.go:146: Item pki/vpn-client/fury.crt found [size: 1094]
2021-06-07 14:37:52.169850 I | storage.go:147: Saving item pki/vpn-client/fury.crt ...
2021-06-07 14:37:52.265797 I | storage.go:146: Item pki/vpn/ca.crl found [size: 560]
2021-06-07 14:37:52.265879 I | storage.go:147: Saving item pki/vpn/ca.crl ...
+------+------------+------------+---------+--------------------------------+
| USER | VALID FROM |  VALID TO  | EXPIRED |            REVOKED             |
+------+------------+------------+---------+--------------------------------+
| fury | 2021-06-07 | 2022-06-07 | false   | false 0001-01-01 00:00:00      |
|      |            |            |         | +0000 UTC                      |
+------+------------+------------+---------+--------------------------------+
  3. Open the fury.ovpn file with any OpenVPN Client.

  4. Connect to the OpenVPN Server via the chosen OpenVPN Client.
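A quick way to verify that the tunnel is actually up is to check that a route towards the VPC network was added to your machine. Assuming the default networkCIDR from bootstrap.yml (10.0.0.0/16), something like the following should show a route through the VPN:

# Linux
ip route | grep "10.0.0.0/16"
# macOS
netstat -rn | grep "10.0"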

Configure the cluster provisioner​

The cluster provisioner takes a cluster.yml as input. This file instructs the provisioner with all the needed parameters to deploy the EKS cluster.

In the repository, you can find a template for this file at /demo/infrastructure/cluster.yml:

kind: Cluster
metadata:
  name: fury-eks-demo
spec:
  version: 1.21
  network: <VPC_ID>
  subnetworks:
    - <PRIVATE_SUBNET1_ID>
    - <PRIVATE_SUBNET2_ID>
    - <PRIVATE_SUBNET3_ID>
  dmzCIDRRange:
    - 10.0.0.0/16
  sshPublicKey: example-ssh-key # put your id_rsa.pub file content here
  nodePools:
    - name: fury
      version: null
      minSize: 3
      maxSize: 3
      instanceType: t3.large
      volumeSize: 50
executor:
  # state:
  #   backend: s3
  #   config:
  #     bucket: <S3_BUCKET>
  #     key: furyctl/cluster
  #     region: <S3_BUCKET_REGION>
provisioner: eks

Open the file with a text editor and replace:

  • <VPC_ID> with the VPC ID (vpc-0d2fd9bcb4f68379e) created in the previous phase.
  • <PRIVATE_SUBNET1_ID> with the ID of the first private subnet (subnet-072b1e3405f662c70) created in the previous phase.
  • <PRIVATE_SUBNET2_ID> with the ID of the second private subnet (subnet-0a23db3b19e5a7ed7) created in the previous phase.
  • <PRIVATE_SUBNET3_ID> with the ID of the third private subnet (subnet-08f4930148ab5223f) created in the previous phase.
  • (optional) As before, add the details of the S3 Bucket that holds the Terraform remote state.

⚠ī¸ if you are using an S3 bucket to store the Terraform state make sure to use a different key in executor.state.config.key than the one used in the boorstrap phase.

Provision EKS Cluster​

  1. Initialize the cluster provisioner:
furyctl cluster init
  2. Create the EKS cluster:
furyctl cluster apply

⏱ This phase may take some minutes.

Logs are available at /demo/infrastructure/cluster/logs/terraform.logs.

  3. When furyctl cluster apply completes, test the connection to the cluster:
export KUBECONFIG=/demo/infrastructure/cluster/secrets/kubeconfig
kubectl get nodes
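Since the node pool in cluster.yml has minSize: 3, you should see three worker nodes in the Ready state. A couple of optional extra checks confirm that the kubeconfig points at the right cluster and that the system workloads are starting:

# Shows the API server endpoint of the EKS cluster
kubectl cluster-info
# Lists the workloads in all namespaces
kubectl get pods -A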

Step 2 - Download fury modules​

furyctl can do a lot more than provisioning infrastructure. In this section, you use furyctl to download the networking, monitoring, logging, and ingress modules of the Fury distribution.

Inspect the Furyfile​

furyctl needs a Furyfile.yml to know which modules to download.

For this tutorial, use the Furyfile.yml located at /demo/Furyfile.yml:

versions:
  networking: v1.8.2
  monitoring: v1.14.1
  logging: v1.10.2
  ingress: v1.12.2
  # dr: v1.9.2
  # opa: v1.6.2

bases:
  - name: networking/calico
  - name: monitoring/prometheus-operator
  - name: monitoring/prometheus-operated
  - name: monitoring/grafana
  - name: monitoring/goldpinger
  - name: monitoring/configs
  - name: monitoring/kubeadm-sm
  - name: monitoring/kube-proxy-metrics
  - name: monitoring/kube-state-metrics
  - name: monitoring/node-exporter
  - name: monitoring/metrics-server
  - name: monitoring/eks-sm
  - name: monitoring/alertmanager-operated
  - name: logging/elasticsearch-single
  - name: logging/cerebro
  - name: logging/curator
  - name: logging/fluentd
  - name: logging/kibana
  - name: ingress/nginx
  - name: ingress/forecastle
  - name: ingress/cert-manager
  # - name: dr/velero
  # - name: opa/gatekeeper

# modules:
#   - name: dr/eks-velero

Download Fury modules​

  1. Download the Fury modules with furyctl:
cd /demo/
furyctl vendor -H
  2. Inspect the downloaded modules in the vendor folder:
tree -d /demo/vendor -L 3

Output:

$ tree -d vendor -L 3

vendor
└── katalog
    ├── ingress
    │   ├── cert-manager
    │   ├── forecastle
    │   └── nginx
    ├── logging
    │   ├── cerebro
    │   ├── curator
    │   ├── elasticsearch-single
    │   ├── fluentd
    │   └── kibana
    ├── monitoring
    │   ├── alertmanager-operated
    │   ├── configs
    │   ├── goldpinger
    │   ├── grafana
    │   ├── kube-proxy-metrics
    │   ├── kube-state-metrics
    │   ├── node-exporter
    │   ├── prometheus-operated
    │   └── prometheus-operator
    └── networking
        └── calico

Step 3 - Installation​

Each module is a Kustomize project. Kustomize allows you to group related Kubernetes resources together and combine them to create more complex deployments. Moreover, it is flexible, and it enables a simple patching mechanism for additional customization.

To deploy the Fury distribution, use the following root kustomization.yaml, located at /demo/manifests/kustomization.yaml:

resources:
  - ingress
  - logging
  - monitoring
  - networking

This kustomization.yaml wraps the other kustomization.yaml files in the subfolders. For example, /demo/manifests/logging/kustomization.yaml:

resources:
  - ../../vendor/katalog/logging/cerebro
  - ../../vendor/katalog/logging/curator
  - ../../vendor/katalog/logging/elasticsearch-single
  - ../../vendor/katalog/logging/fluentd
  - ../../vendor/katalog/logging/kibana
  - resources/ingress.yml

patchesStrategicMerge:
  - patches/fluentd-resources.yml
  - patches/fluentbit-resources.yml
  - patches/elasticsearch-resources.yml
  - patches/cerebro-resources.yml

Each kustomization.yaml:

  • references the modules downloaded in the previous section
  • patches the upstream modules (e.g. patches/elasticsearch-resources.yml limits the resources requested by Elasticsearch)
  • deploys some additional custom resources (e.g. resources/ingress.yml)

Install the modules:

cd /demo/manifests/

make apply
# Due to some chicken-egg 🐓đŸĨš problem with custom resources you have to apply again
make apply
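Under the hood, the make apply target is a thin wrapper around Kustomize and kubectl; assuming the Makefile follows the usual pattern (this is an assumption, check the Makefile in the repository for the exact commands), a roughly equivalent manual invocation would be:

# Roughly what `make apply` runs (assumption: the Makefile wraps kustomize + kubectl this way)
kustomize build . | kubectl apply -f -
# Then watch the workloads come up
kubectl get pods -A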

Step 4 - Explore the distribution​

🚀 The distribution is finally deployed! In this section, you explore some of its features.

Setup local DNS​

In Step 3, alongside the distribution, you have deployed Kubernetes ingresses to expose underlying services at the following HTTP routes:

  • forecastle.fury.info
  • grafana.fury.info
  • kibana.fury.info

To access the ingresses more easily via the browser, configure your local DNS to resolve the ingress hostnames to the internal load balancer IP:

  1. Get the address of the internal load balancer:
dig $(kubectl get svc ingress-nginx -n ingress-nginx --no-headers | awk '{print $4}')

Output:

...

;; ANSWER SECTION:
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <FIRST_IP>
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <SECOND_IP>
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <THIRD_IP>
...

  2. Add the following line to your machine's /etc/hosts (not the container's):
<FIRST_IP> forecastle.fury.info cerebro.fury.info kibana.fury.info grafana.fury.info

Now, you can reach the ingresses directly from your browser.
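If you want to double-check the setup from the command line first, a simple request against one of the hostnames should return an HTTP response from the NGINX ingress controller (remember that the load balancer is internal, so the VPN connection must be active):

# Should return an HTTP response once DNS and the VPN are set up correctly
curl -I http://forecastle.fury.info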

Forecastle​

Forecastle is an open-source control panel where you can access all exposed applications running on Kubernetes.

Navigate to http://forecastle.fury.info to see all the other ingresses deployed, grouped by namespace.

Forecastle

Kibana​

Kibana is an open-source analytics and visualization platform for Elasticsearch. Kibana lets you perform advanced data analysis and visualize data in various charts, tables, and maps. You can use it to search, view, and interact with data stored in Elasticsearch indices.

Navigate to http://kibana.fury.info or click the Kibana icon from Forecastle.

Read the logs​

The Fury Logging module already collects data from the following indices:

  • kubernetes-*
  • system-*
  • ingress-controller-*

Click on Discover to see the main dashboard. In the top-left corner, select one of the indices to explore the logs.

Kibana

Grafana​

Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on and understand your metrics.

Navigate to http://grafana.fury.info or click the Grafana icon from Forecastle.

Fury provides some pre-configured dashboards to visualize the state of the cluster. Examine an example dashboard:

  1. Click on the search icon on the left sidebar.
  2. Type pods and press enter.
  3. Select the Kubernetes/Pods dashboard.

This is what you should see:

Grafana

Step 5 (optional) - Deploy additional modules​

The Fury Distribution is modular: you can install other modules to extend its functionality. A list of all the available modules can be found here.

In this section, you deploy the following additional modules:

  • Fury Disaster Recovery Module - a Disaster Recovery solution based on Velero.
  • Fury OPA Module - a policy engine based on OPA Gatekeeper.

The Fury Disaster Recovery Module requires additional infrastructure to function. These required resources are deployed via Terraform.

  1. Edit the Furyfile.yml in the /demo folder and add (uncomment) the new bases and modules:
versions:
  ...
  dr: v1.9.2
  opa: v1.6.2

bases:
  ...
  - name: dr/velero
  - name: opa/gatekeeper

modules:
  - name: dr/eks-velero
  2. Download the modules in the vendor folder with furyctl:
cd /demo/
furyctl vendor -H
  3. Create the resources for the Velero module using Terraform:
cd /demo/terraform/

make init
make plan
make apply
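The make targets here are presumably shortcuts for the standard Terraform workflow; assuming that is the case (check the Makefile in /demo/terraform for the exact commands), the equivalent manual steps would look like:

# Roughly equivalent Terraform workflow (assumption: the Makefile wraps these commands; the plan file name is illustrative)
terraform init
terraform plan -out=terraform.plan
terraform apply terraform.plan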
  4. Gather some output manifests from Terraform:
make generate-output

The make entry point is a shortcut for:

terraform output -raw velero_patch > ../manifests/dr/patches/velero.yml
terraform output -raw velero_backup_storage_location > ../manifests/dr/resources/velero-backup-storage-location.yml
terraform output -raw velero_volume_snapshot_location > ../manifests/dr/resources/velero-volume-snapshot-location.yml
  5. Have a look at /demo/manifests/dr/kustomization.yaml...
resources:
  - ../../vendor/katalog/dr/velero/velero-aws
  - ../../vendor/katalog/dr/velero/velero-schedules
  - resources/velero-backup-storage-location.yml
  - resources/velero-volume-snapshot-location.yml

patchesStrategicMerge:
  - patches/velero.yml
...

... and /demo/manifests/opa/kustomization.yaml

resources:
  - ../../vendor/katalog/opa/gatekeeper
  6. Install the modules as before:
cd /demo/manifests/dr
make apply
# Again our chicken-egg 🐓đŸĨš problem with custom resources
make apply
cd /demo/manifests/opa
make apply
# Again our chicken-egg 🐓đŸĨš problem with custom resources
make apply
# Again our chicken-egg 🐓đŸĨš problem with custom resources
make apply
cd ..

(optional) Create a backup with Velero​

  1. Create a backup with the velero command-line utility:
velero backup create test --from-schedule manifests -n kube-system
  2. Check the backup status:
velero backup get -n kube-system
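To dig a bit deeper into a specific backup, velero also provides describe and logs subcommands. The backup name test and the kube-system namespace below match the ones used in the previous step:

# Detailed status of the backup created above
velero backup describe test -n kube-system
# Logs produced while taking the backup
velero backup logs test -n kube-system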

(optional) Enforce a Policy with OPA Gatekeeper​

This section is under construction.

Please refer to the OPA module's documentation while we work on this part of the guide. Sorry for the inconvenience.
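In the meantime, you can at least verify that Gatekeeper is up and running. The commands below assume the module deploys into the upstream default gatekeeper-system namespace; adjust the namespace if your installation differs:

# Gatekeeper controller and audit pods should be Running
kubectl get pods -n gatekeeper-system
# Lists the constraint templates installed by the module (if any)
kubectl get constrainttemplates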

Step 6 - Teardown​

Clean up the demo environment:

  1. (Required only if you performed the optional steps) Destroy the additional Terraform resources used by Velero:
cd /demo/terraform/
terraform destroy
  2. Destroy the EKS cluster:
cd /demo/infrastructure/
furyctl cluster destroy
  3. Some resources are created outside of Terraform: for example, creating a Service of type LoadBalancer also creates an ELB. You can find a script that uses the AWS CLI to delete the target groups, load balancers, volumes, and snapshots associated with the EKS cluster:

✋đŸģ Check that the TAG_KEY variable has the righ value before running the script. It should finihs with the cluster name.

bash cleanup.sh
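After the script completes, you can optionally double-check that no load balancers created by the cluster are left behind (leftover ELBs can prevent the VPC from being destroyed in the following steps):

# Should return empty lists once the cleanup is done
aws elb describe-load-balancers --query "LoadBalancerDescriptions[].LoadBalancerName"
aws elbv2 describe-load-balancers --query "LoadBalancers[].LoadBalancerName"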
  4. Destroy the network infrastructure:
furyctl bootstrap destroy
  5. (Optional) Destroy the S3 bucket holding the Terraform state:
aws s3api delete-objects --bucket $S3_BUCKET \
--delete "$(aws s3api list-object-versions --bucket $S3_BUCKET --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"

aws s3api delete-bucket --bucket $S3_BUCKET
  6. Exit from the docker container:
exit

Conclusions​

Congratulations, you made it! đŸĨŗđŸĨŗ

We hope you enjoyed this tour of Fury!

Issues/Feedback​

If you ran into any problems, feel free to open an issue here on GitHub.

Where to go next?​

More tutorials:

More about Fury: