# Add a new Kubernetes cluster
This guide walks through the process of adding a new cluster to our terraform configuration.
You can find out more about terraform in its documentation.
**Attention:**
Currently, we do not deploy clusters to AWS using terraform. Please see *Create a new cluster* for AWS-specific deployment guidelines.
## Cluster Design
This guide will assume you have already followed the guidance in Cluster design considerations to select the appropriate infrastructure.
## Create a Terraform variables file for the cluster
The first step is to create a `.tfvars` file in the appropriate terraform projects subdirectory:

- `terraform/gcp/projects` for Google Cloud clusters
- `terraform/azure/projects` for Azure clusters
Give it a descriptive name that, at a glance, provides context about the location and/or purpose of the cluster.
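As an illustration, a minimal variables file might look like the sketch below. The variable names here (`prefix`, `project_id`, `zone`) are hypothetical and not guaranteed to match our terraform modules; copy an existing `.tfvars` file from the projects directory for the real schema.

```hcl
# projects/example-cluster.tfvars -- hypothetical variable names,
# for illustration only; mirror an existing .tfvars file instead.
prefix     = "example-cluster"     # used to name cloud resources
project_id = "example-gcp-project" # cloud project to deploy into
zone       = "us-central1-b"       # where the cluster's nodes live
```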
Once you have created this file, open a Pull Request to the `infrastructure` repo for review.
See our review and merge guidelines for how this process should proceed.
## Initialising Terraform
Our default terraform state is located centrally in our `two-eye-two-see-org` GCP project, therefore you must authenticate `gcloud` to your `@2i2c.org` account before initialising terraform.
The terraform state includes all cloud providers, not just GCP.

```bash
gcloud auth application-default login
```
Then you can change into the terraform subdirectory for the appropriate cloud provider and initialise terraform.
**Note:**
There are other backend config files stored in `terraform/backends` that configure a different storage bucket to read/write the remote terraform state for projects which we cannot access from GCP with our `@2i2c.org` email accounts.
This saves us the pain of having to handle multiple authentications, as these storage buckets are within the project we are trying to deploy to.
For example, to work with Pangeo you would initialise terraform like so:

```bash
terraform init -backend-config=pangeo-backend.hcl -reconfigure
```
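For reference, a backend config file is a small HCL snippet pointing terraform at a state bucket. The bucket and prefix values below are placeholders, not the real Pangeo settings:

```hcl
# terraform/backends/pangeo-backend.hcl -- illustrative values only
bucket = "example-terraform-state-bucket" # bucket holding the remote state
prefix = "terraform/state/pangeo"         # path to the state within the bucket
```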
## Creating a new terraform workspace
We use terraform workspaces so that the state of one `.tfvars` file does not influence another.
Create a new workspace with the below command, and again give it the same name as the `.tfvars` filename.

```bash
terraform workspace new WORKSPACE_NAME
```
**Note:**
Workspaces are defined per backend. If you can't find the workspace you're looking for, double check you've enabled the correct backend.
## Plan and Apply Changes
**Note:**
When deploying to Google Cloud, make sure the Compute Engine, Kubernetes Engine, and Artifact Registry APIs are enabled on the project before deploying!
First, make sure you are in the new workspace that you just created.

```bash
terraform workspace show
```
Plan your changes with the `terraform plan` command, passing the `.tfvars` file as a variable file.

```bash
terraform plan -var-file=projects/CLUSTER.tfvars
```
Check over the output of this command to ensure nothing is being created or deleted that you didn't expect. Copy-paste the plan into your open Pull Request so a fellow 2i2c engineer can double-check it too.
If you're both satisfied with the plan, merge the Pull Request and apply the changes to deploy the cluster.

```bash
terraform apply -var-file=projects/CLUSTER.tfvars
```
Congratulations, you’ve just deployed a new cluster!
## Exporting and Encrypting the Cluster Access Credentials
To begin deploying and operating hubs on your new cluster, we need to export the credentials created by terraform, encrypt them using `sops`, and store them in the `secrets` directory of the `infrastructure` repo.
Check you are still in the correct terraform workspace:

```bash
terraform workspace show
```

If you need to change workspaces, you can do so as follows:

```bash
terraform workspace list   # List all available workspaces
terraform workspace select WORKSPACE_NAME
```
Then, output the credentials created by terraform to a file under the `secrets` directory.
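The exact command depends on the outputs the cluster's terraform config defines; run `terraform output` to list them. A sketch, in which the output name is an assumption:

```bash
# Sketch only: "example_credentials_output" is a placeholder -- check
# `terraform output` for the real output name, and adjust the file
# path to match where the encryption step below expects the file.
terraform output -raw example_credentials_output \
  > ../../config/clusters/<cluster_name>/deployer-credentials.secret.json
```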
Then encrypt the key using `sops`.
**Note:**
You must be logged into Google with your `@2i2c.org` account at this point so `sops` can read the encryption key from the `two-eye-two-see` project.

```bash
cd ../..
sops --output config/clusters/<cluster_name>/enc-deployer-credentials.secret.{{ json | yaml }} --encrypt config/clusters/<cluster_name>/deployer-credentials.secret.{{ json | yaml }}
```
This key can now be committed to the `infrastructure` repo and used to deploy and manage hubs hosted on that cluster.
## Adding the new cluster to CI/CD
To ensure the new cluster is appropriately handled by our CI/CD system, please add it as an entry in the following places:

- The `deploy-hubs.yaml` workflow file
- The `validate-clusters.yaml` workflow file
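The shape of these entries varies between the two workflow files, but schematically each one adds the cluster's name to a list the workflow iterates over. An illustrative (not verbatim) fragment, assuming a matrix-style job:

```yaml
# Illustrative fragment only -- mirror the surrounding entries in the
# real deploy-hubs.yaml / validate-clusters.yaml workflow files.
jobs:
  deploy:
    strategy:
      matrix:
        cluster_name:
          - existing-cluster
          - new-cluster   # <- add the new cluster here
```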