The SIGHUP engineering team has designed this distribution to be installable on top of any upstream Kubernetes cluster. The current version of this distribution (v1) has been tested across the following Kubernetes versions:
- Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA
- Kubernetes 1.15: Extensibility and Continuous Improvement
- Kubernetes 1.16: Custom Resources, Overhauled Metrics, and Volume Extensions
- Kubernetes 1.17: Stability
- Kubernetes 1.18: Fit & Finish
- Kubernetes 1.19: Accentuate the Paw-sitive
- Kubernetes 1.20: The Raddest Release
- Kubernetes 1.21: Power to the Community
- Kubernetes 1.22: Reaching New Peaks (tech preview)
Every core module that is part of this distribution has its own tests across these upstream Kubernetes versions. This way we can ensure that:
- Every module, by itself, is compatible with each supported Kubernetes version.
- All core modules, working together as a distribution, behave as expected on each supported Kubernetes version.
Every module and distribution test runs on top of a freshly created cluster. This guarantees that the tests run on a pristine cluster created just in time for the test and are reproducible. It also lets us test against several Kubernetes versions in parallel.
| Fury Version / Kubernetes Version | v1.14.Z | v1.15.Z | v1.16.Z | v1.17.Z | v1.18.Z | v1.19.Z | v1.20.Z | v1.21.Z | v1.22.Z |
|---|---|---|---|---|---|---|---|---|---|
It does not matter whether the Kubernetes nodes were created by a virtualization provider (AWS EC2, VMware vSphere…) or are plain bare-metal instances. Given a plain Kubernetes cluster installed on these instances, SIGHUP can deploy the distribution on top of it.
| Unmanaged Provider / Fury Version | v1.0.0 | v1.1.0 | v1.2.0 | v1.3.0 | v1.4.0 | v1.5.0 | v1.6.0 | v1.7.0 |
|---|---|---|---|---|---|---|---|---|
| Oracle Cloud instances | | | | | | | | |
| bare-metal (Dell, HP…) | | | | | | | | |
If you choose to deploy Kubernetes in a Kubernetes Managed Service, SIGHUP can deploy the distribution on top of it.
| Managed Provider / Fury Version | v1.0.0 | v1.1.0 | v1.2.0 | v1.3.0 | v1.4.0 | v1.5.0 | v1.6.0 | v1.7.0 |
|---|---|---|---|---|---|---|---|---|
| Google Kubernetes Engine 1.14 | | | | | | | | |
| Google Kubernetes Engine 1.15 | | | | | | | | |
| Google Kubernetes Engine 1.16 | | | | | | | | |
| Google Kubernetes Engine 1.17 | | | | | | | | |
| Google Kubernetes Engine 1.18 | | | | | | | | |
| Google Kubernetes Engine 1.19 | | | | | | | | |
| Google Kubernetes Engine 1.20 | | | | | | | | |
| Google Kubernetes Engine 1.21 | | | | | | | | |
| Azure Kubernetes Service 1.14 | | | | | | | | |
| Azure Kubernetes Service 1.15 | | | | | | | | |
| Azure Kubernetes Service 1.16 | | | | | | | | |
| Azure Kubernetes Service 1.17 | | | | | | | | |
| Azure Kubernetes Service 1.18 | | | | | | | | |
| Azure Kubernetes Service 1.19 | | | | | | | | |
| Azure Kubernetes Service 1.20 | | | | | | | | |
| Azure Kubernetes Service 1.21 | | | | | | | | |
| Elastic Kubernetes Service 1.15 | | | | | | | | |
| Elastic Kubernetes Service 1.16 | | | | | | | | |
| Elastic Kubernetes Service 1.17 | | | | | | | | |
| Elastic Kubernetes Service 1.18 | | | | | | | | |
| Elastic Kubernetes Service 1.19 | | | | | | | | |
| Elastic Kubernetes Service 1.20 | | | | | | | | |
| Elastic Kubernetes Service 1.21 | | | | | | | | |
| OVH Kubernetes Service 1.16 | | | | | | | | |
| OVH Kubernetes Service 1.17 | | | | | | | | |
| OVH Kubernetes Service 1.18 | | | | | | | | |
If you are in this scenario, some adjustments are required.
For example, SIGHUP includes a preconfigured CNI plugin based on Calico, but most managed Kubernetes services deploy their own CNI plugin. In that case, you have to disable the CNI deployment of this Kubernetes distribution, otherwise you may run into unexpected networking issues.
In addition to the CNI modification, other configurations may need to be applied or adjusted. Read every module's documentation carefully for more detailed information.
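As a rough sketch of what "disabling the CNI deployment" can look like in practice: assuming the distribution's modules are deployed with kustomize, you would simply leave the Calico-based networking module out of your `kustomization.yaml`. The module paths below are illustrative, not the distribution's actual layout — check the modules' documentation for the real names.

```yaml
# kustomization.yaml — deploying the distribution's core modules on a
# managed cluster (GKE/AKS/EKS/OVH). The networking module is intentionally
# omitted because the managed service already ships its own CNI plugin.
# NOTE: the "katalog/..." paths are hypothetical placeholders.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - katalog/monitoring
  - katalog/logging
  - katalog/ingress
  # - katalog/networking   # Calico-based CNI: leave disabled on managed clusters
```

On an unmanaged (bare-metal or plain-instance) cluster, the networking entry would stay enabled, since no CNI plugin is provided out of the box.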