Deploy disaster recovery
Fury's Disaster Recovery module is based on Velero, a popular open-source, cloud-native backup and snapshot tool, and its companion integration, velero-restic, for backing up and restoring Kubernetes volumes.
The Fury Kubernetes Disaster Recovery module can be deployed on the following platforms:
- Elastic Kubernetes Service (EKS)
- on-premises or unmanaged cloud clusters
- Google Kubernetes Engine (GKE)
- Azure Kubernetes Service (AKS)
The Fury Kubernetes Disaster Recovery module makes use of Velero, making it easy for you to take backups of your cluster and restore them in case of loss, migrate cluster resources to other clusters, and replicate your production cluster to development and testing clusters. Velero runs as a server on your cluster.
Velero-specific instructions and commands for scheduling backups and for disaster recovery procedures can be found in Velero's own documentation pages.
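As a quick reference, a recurring backup can be declared with a Velero Schedule custom resource. The sketch below is an assumption-laden example, not part of the module's defaults: the name `daily-backup`, the cron expression, and the retention period are all illustrative, and the namespace follows this module's convention of deploying resources into kube-system (upstream Velero commonly uses a `velero` namespace instead).

```yaml
# Hypothetical example: back up all namespaces every day at 02:00,
# keeping each backup for 30 days (720h). Adjust name, schedule,
# namespace, and TTL to your environment.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-backup        # illustrative name
  namespace: kube-system    # namespace used by this module's deployment
spec:
  schedule: "0 2 * * *"     # standard cron syntax
  template:
    includedNamespaces:
    - "*"                   # back up every namespace
    ttl: 720h0m0s           # how long Velero keeps the backup
```

Equivalently, the `velero` CLI can create one-off backups and restores (for example, `velero backup create <name>` and `velero restore create --from-backup <name>`); see Velero's documentation for the full command set.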
Disaster Recovery Module
The following packages are included in the Fury Kubernetes Disaster Recovery module. All the resources listed below are deployed in the kube-system namespace of your Kubernetes cluster.
| Package | Description |
|---------|-------------|
| velero | Velero is an open-source tool to safely back up and restore, perform disaster recovery on, and migrate Kubernetes cluster resources and persistent volumes. Velero treats volume backups as snapshots: the cluster will revert to exactly how it was at the time of the snapshot. |
| velero-restic | Velero-restic is a Velero integration working as an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. velero-restic treats volume backups as incremental backups in a new volume. |
| velero-on-prem | Velero for on-premises clusters. |
| aws-velero | Velero for AWS (EC2 instances). |
| eks-velero | Velero for Elastic Kubernetes Service (EKS AWS) clusters. |
| gcp-velero | Velero for Google Kubernetes Engine (GKE) clusters. |
| azure-velero | Velero for Azure Kubernetes Service (AKS) clusters. |
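With the restic integration, Velero only backs up the volumes of a pod when the pod opts in via an annotation listing the volume names to include. A minimal sketch, assuming a hypothetical pod `my-app` with a volume named `data`:

```shell
# Opt the pod's "data" volume into restic-based backups.
# Pod name, namespace, and volume name are illustrative.
kubectl -n my-namespace annotate pod my-app \
  backup.velero.io/backup-volumes=data
```

Subsequent Velero backups that include this pod will then copy the annotated volume with restic instead of relying on a storage-provider snapshot.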