Kubernetes Fury Installers - Public Cloud


SIGHUP has designed a simple, common interface to deploy managed Kubernetes clusters on Google Cloud Platform, Amazon Web Services, and Microsoft Azure, offering a consistent feature set:

  • Private control plane: the cluster control plane should not be public and is only accessible from a user-defined network.
  • Seamless cluster updates: the control plane can be updated as soon as a new release is available.
  • Seamless node pool updates: once the cluster's control plane receives an update, the node pools should be updated too. These installers make this process straightforward.
  • Support for multiple node pools: different node pools make it possible to run different workload types. Each node pool can be configured with its own machine type, labels, Kubernetes version…

Architecture

Managed Architecture

Common requirements

Because these installers create clusters with a private control plane only, the operator in charge of creating the cluster must have connectivity from their machine (a bastion host, a laptop with a configured VPN…) to the network where the cluster will be placed.

The machine used to create the cluster must have the following installed:

  • OS tooling such as git and ssh.
  • terraform version 0.15.4.
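
A minimal sketch of pinning the expected Terraform version in the root module that calls the installer; the file name is illustrative:

# versions.tf
terraform {
  required_version = "0.15.4"
}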

Cloud requirements

The requirements are specific to each cloud provider. Please refer to the documentation section corresponding to your cloud provider to see the details.

Common interface

To start deploying a new cluster, make sure you know the values of all the terraform input parameters:

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| cluster_name | Unique cluster name. Used in multiple resources to identify your cluster resources | string | n/a | yes |
| cluster_version | Kubernetes cluster version. Look at the cloud provider documentation to discover available versions. EKS example: 1.16, GKE example: 1.16.8-gke.9 | string | n/a | yes |
| dmz_cidr_range | Network CIDR range from which the cluster control plane will be accessible | string or list(string) | n/a | yes |
| network | Network where the Kubernetes cluster will be hosted | string | n/a | yes |
| node_pools | An object list defining node pool configurations | list(object) | [] | no |
| ssh_public_key | Cluster administrator public ssh key. Used to access cluster nodes with the operator_ssh_user | string | n/a | yes |
| subnetworks | List of subnets where the cluster will be hosted | list | n/a | yes |
| resource_group_name | Resource group name where every resource will be placed. Required only in the AKS installer (*) | string | n/a | yes* |
| tags | The tags to apply to all resources | map | {} | no |
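
For reference, here is a minimal sketch of how these inputs come together in a module call; the module name, the source, and every value below are placeholders, not a working configuration:

module "fury" {
  source = "<path or registry address of the chosen installer>" # placeholder

  cluster_name    = "my-cluster"
  cluster_version = "1.16"            # check the cloud provider documentation for available versions
  network         = "my-network-id"
  subnetworks     = ["my-subnetwork-id-1", "my-subnetwork-id-2"]
  dmz_cidr_range  = "10.0.0.0/16"     # network allowed to reach the control plane
  ssh_public_key  = "ssh-rsa AAAA..."
  node_pools      = []
  tags            = {}

  # resource_group_name = "my-resource-group" # AKS installer only
}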

node_pools

The node_pools input parameter requires a list of node_pool objects. Each node_pool object accepts the following attributes:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| name | Unique node pool name. Used in multiple resources to identify your node pool resources | string | n/a | yes |
| version | Kubernetes node version. A null value means the cluster_version value is used | string | n/a | yes |
| min_size | Minimum number of nodes in the node pool | number | n/a | yes |
| max_size | Maximum number of nodes in the node pool | number | n/a | yes |
| instance_type | Name of the instance type to use in the node pool. The value depends entirely on the cloud provider | string | n/a | yes |
| os | The operating system to use. The value depends entirely on the cloud provider | string | cloud provider default | no |
| volume_size | Disk size of each instance in the node pool, in GB | number | n/a | yes |
| subnetworks | List of subnetworks where the nodes will be located. When null, the cluster subnetworks are used | list(string) | n/a | yes |
| labels | Map of custom labels every node in the node pool will expose in the cluster. Useful to dedicate nodes to specific workloads | map(string) | n/a | yes |
| taints | List of taints that mark a node so that the scheduler avoids or prevents using it for certain Pods | list(string) | n/a | yes |
| tags | Node pool specific tags, merged with the root tags variable | map | n/a | yes |
| max_pods | The maximum number of pods that can run on each node | number | n/a | yes |
| additional_firewall_rules | An object list defining node firewall rule configurations | list(object) | n/a | yes |

additional_firewall_rules

The additional_firewall_rules input parameter requires a list of additional_firewall_rule objects. Each additional_firewall_rule object accepts the following attributes:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| name | Unique firewall rule name | string | n/a | yes |
| direction | The direction of the firewall rule. Can be ingress or egress | string | n/a | yes |
| cidr_block | CIDR block to grant access from or to | string | n/a | yes |
| protocol | The firewall rule protocol. Examples: TCP, UDP, ICMP | string | n/a | yes |
| ports | Port range to open, in the format START-END. Examples: 80-80, 32000-32500 | string | n/a | yes |
| tags | Firewall rule specific tags | map | n/a | yes |
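
For illustration, a sketch of a single rule following the attributes above; every name and value here is an example only:

additional_firewall_rules : [
  {
    name       : "allow-http-from-dmz"
    direction  : "ingress"
    cidr_block : "10.0.0.0/16"
    protocol   : "TCP"
    ports      : "80-80"
    tags       : {}
  }
]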

Outputs

| Name | Description |
|------|-------------|
| cluster_certificate_authority | The base64 encoded certificate data required to communicate with the cluster |
| cluster_endpoint | The endpoint for your Kubernetes API server |
| operator_ssh_user | SSH user to access cluster nodes with the ssh_public_key |

These output values are the minimum requirement to set up a proper kubeconfig file to interact with the cluster. Check the documentation of each installer to better understand how to create the kubeconfig file.
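
Purely as an illustration of how the outputs fit together, the sketch below renders a kubeconfig with the local_file resource, assuming the module call is named fury as in the earlier sketch; the user credentials section is provider specific and is left as a placeholder, so refer to each installer's documentation for the supported procedure:

resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig"
  content  = <<-EOT
    apiVersion: v1
    kind: Config
    clusters:
    - name: cluster
      cluster:
        server: ${module.fury.cluster_endpoint}
        certificate-authority-data: ${module.fury.cluster_certificate_authority}
    contexts:
    - name: cluster
      context:
        cluster: cluster
        user: admin
    current-context: cluster
    users:
    - name: admin
      user: {} # provider-specific credentials (e.g. an exec plugin) go here
  EOT
}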

Specific interface

Defining a common shared interface across cloud providers is a difficult task, as there are many differences between them.

We analyze every new requirement to see whether it fits into the common cloud installer interface or whether it is specific to one cloud provider and does not apply to the others.

Currently, the only cloud provider with specific input parameters is Google. Take a look at its specific documentation.

Changelog

Version v1.0.0

Release date: 13th May 2020

First release with a shared interface across three different managed services: EKS, GKE and AKS.

Version v1.1.0

Release date: 8th July 2020

The input interface is modified on the node_pools object to add the taints attribute. Ready to use on all supported managed services.

Migration from v1.0.0

Add taints attribute to your node_pools variables.

node_pools = [
  {
    name : "node-pool-1"
    version : null # To use the cluster_version
    min_size : 1
    max_size : 1
    instance_type : "n1-standard-1"
    volume_size : 100
    labels : {
      "sighup.io/role" : "app"
      "sighup.io/fury-release" : "v1.3.0"
    }
    taints : [] # If you want to preserve v1.0.0 behavior
  },
  {
    name : "node-pool-2"
    instance_type : "n1-standard-2"
    volume_size : 50
    labels : {}
     # If you want to add a taint to a node_pool
    taints : [
      "sighup.io/role=app:NoSchedule"
    ]
  }
]

Version v1.2.0

Release date: 29th October 2020

It includes a new root variable: tags. It is a map of key/value pairs that adds metadata to all the resources created by the cloud installers. In addition, the input interface of the node_pools object is modified to add the tags and max_pods attributes. The tags node_pool attribute merges its value with the root tags variable.

Ready to use on all supported managed services.

This release also includes a set of specific input variables in the GKE installer that are not part of the shared common interface, as they are specific to Google's managed Kubernetes service.

Discover them in the GKE documentation.

Migration from v1.1.0

  • Add the tags attribute to your node_pools variables (required).
  • Add the max_pods attribute to your node_pools variables (required). Use a null value to keep the default behavior.
  • Add the tags variable to your root project (optional).

tags = {
  "environment": "production"
}
node_pools = [
  {
    name : "node-pool-1"
    version : null # To use the cluster_version
    min_size : 1
    max_size : 1
    instance_type : "n1-standard-1"
    volume_size : 100
    labels : {
      "sighup.io/role" : "app"
      "sighup.io/fury-release" : "v1.3.0"
    }
    taints : []
    tags : {} # If you want to preserve v1.1.0 behavior
    max_pods : null
  },
  {
    name : "node-pool-2"
    instance_type : "n1-standard-2"
    volume_size : 50
    labels : {}
     # If you want to add a taint to a node_pool
    taints : [
      "sighup.io/role=app:NoSchedule"
    ]
    tags : {
      "kind": "databases"
    }
    max_pods : null
  }
]

Version v1.3.0

This release is only available for the eks installer. It adds the possibility of setting up the authentication configuration directly from the installer. Read more about it in the specific eks documentation page.

The aks and gke installers, on the other hand, do not have this release version.

Version v1.4.0

Release date: 26th January 2021

It includes a new node_pool attribute: subnetworks. It allows a node pool to be available only in certain subnetworks. If a null value is specified, the global subnetworks variable (used for the cluster) is used.

Ready to use on all supported managed services.

It also includes a fix on the Kubernetes provider: its version is now pinned to avoid breaking changes.

Migration from v1.2.0

Add subnetworks attribute to your node_pools variables (Required).

tags = {
  "environment": "production"
}
node_pools = [
  {
    name : "node-pool-1"
    version : null
    min_size : 1
    max_size : 1
    instance_type : "n1-standard-1"
    volume_size : 100
    subnetworks: null # If you want to preserve v1.2.0 behavior
    labels : {
      "sighup.io/role" : "app"
      "sighup.io/fury-release" : "v1.3.0"
    }
    taints : []
    tags : {}
    max_pods : null
  },
  {
    name : "node-pool-2"
    instance_type : "n1-standard-2"
    volume_size : 50
    subnetworks : ["my-subnetwork-id-1"] # If you want to deploy a node_pool in a specific subnetwork
    labels : {}
    taints : [
      "sighup.io/role=app:NoSchedule"
    ]
    tags : {
      "kind": "databases"
    }
    max_pods : null
  }
]

Version v1.5.0

Release date: 23rd February 2021

It includes a new node_pool attribute: additional_firewall_rules. It makes it possible to apply specific firewall rules to the nodes of a node pool. This release also modifies the dmz_cidr_range input: before this release only a string value was valid, while now it accepts either a single CIDR range as a string or a list(string) containing multiple CIDR ranges.
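
For example, both of the following forms are now accepted (the CIDR values are placeholders):

# single CIDR range, as in previous releases
dmz_cidr_range = "10.0.0.0/16"

# multiple CIDR ranges, supported since v1.5.0
dmz_cidr_range = ["10.0.0.0/16", "192.168.100.0/24"]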

Ready to use on all supported managed services.

Migration from v1.4.0

Add additional_firewall_rules attribute to your node_pools variables (Required).

tags = {
  "environment": "production"
}
node_pools = [
  {
    name : "node-pool-1"
    version : null
    min_size : 1
    max_size : 1
    instance_type : "n1-standard-1"
    volume_size : 100
    subnetworks: null
    labels : {
      "sighup.io/role" : "app"
      "sighup.io/fury-release" : "v1.3.0"
    }
    additional_firewall_rules : [] # This attribute is now required. Set an empty list to keep the current setup
    taints : []
    tags : {}
    max_pods : null
  },
  {
    name : "node-pool-2"
    instance_type : "n1-standard-2"
    volume_size : 50
    subnetworks : ["my-subnetwork-id-1"]
    labels : {}
    additional_firewall_rules : [] # This attribute is now required. Set an empty list to keep the current setup
    taints : [
      "sighup.io/role=app:NoSchedule"
    ]
    tags : {
      "kind": "databases"
    }
    max_pods : null
  }
]

Version v1.6.0

Release date: 10th May 2021

It includes a terraform module in the eks and gke installers that deploys the basic components needed to create a Kubernetes cluster.

Ready to use on eks and gke.

Version v1.7.0

Release date: 26th May 2021

This is a technical release in which:

  • The terraform version has been pinned to v0.15.4.
  • The terraform providers in use have been updated and pinned to specific versions.

Ready to use on all supported managed services.

Version v1.8.0

Release date: 6th Sept 2021

Adds an optional attribute to the node_pool definition to set the operating system to use (os). You can find the details in the input parameters table.

The os value depends on the cloud provider:

  • eks: ami-id
  • gke: COS | COS_CONTAINERD | UBUNTU | UBUNTU_CONTAINERD
  • aks: Linux | Windows
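
For example, a gke node pool could set the attribute like this (the value is illustrative; use an AMI id on eks, or Linux/Windows on aks):

node_pools = [
  {
    name : "node-pool-1"
    # ... other node pool attributes ...
    os   : "UBUNTU_CONTAINERD"
  }
]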

Ready to use on all supported managed services.


EKS Installer

Fury Kubernetes Installer - Managed Services - EKS - oss project.

GKE Installer

Fury Kubernetes Installer - Managed Services - GKE - oss project.

AKS Installer

Fury Kubernetes Installer - Managed Services - AKS - oss project.