We will need several certificates to set up our cluster, one per component. The figure below shows which certificates we need to generate.
First, create a directory inside your project to hold the keys we are going to generate:
mkdir keys && cd ./keys
For some obscure reason cfssl did not work on my Mac. An alternative is to run it in a container instead:
docker pull cfssl/cfssl
docker run -it --entrypoint "/bin/bash" cfssl/cfssl
The rest of the commands are the same.
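The long id used in the docker exec commands below (9ebdbfd4cedd in my case; yours will differ) is the id of this running container, which you can look up with docker ps:
docker ps --filter ancestor=cfssl/cfssl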
This certificate and key are used to sign all the additional TLS certificates. Please store them somewhere secure: if the root of trust is compromised, the whole cluster becomes vulnerable to a wide range of nasty attacks.
Copy the files ca-config.json and ca-csr.json to the cfssl Docker container.
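If you are unsure how to get the files into the container, docker cp works in that direction too; the destination below is the container's working directory, the same path the generated files are copied out of later:
docker cp ca-config.json 9ebdbfd4cedd:/go/src/github.com/cloudflare/cfssl/
docker cp ca-csr.json 9ebdbfd4cedd:/go/src/github.com/cloudflare/cfssl/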
Now use cfssl in the container to create the keys and pipe its output to cfssljson; ca here is the prefix for the created files.
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-initca ca-csr.json | cfssljson -bare ca'
Copy these two files from the container to some secure place.
docker cp 9ebdbfd4cedd:/go/src/github.com/cloudflare/cfssl/ca.pem ./keys/ca.pem
docker cp 9ebdbfd4cedd:/go/src/github.com/cloudflare/cfssl/ca-key.pem ./keys/ca-key.pem
Copy the file admin-csr.json to the cfssl Docker container. Then run cfssl with this template to create a key and certificate for the admin user.
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin'
Configuration for worker-0
Copy the files worker-0-csr.json, worker-1-csr.json, and worker-2-csr.json to the cfssl Docker container.
Find the IP addresses for worker-0:
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].networkIP)')
echo ${EXTERNAL_IP} ${INTERNAL_IP}
Since we are generating certificates in a container, we will need to specify the node name and IP addresses explicitly (there is a way to automate this, but we will not go into it for now). The internal worker IP addresses are known (from the 10.240.0.20-22 range); the external NAT addresses you need to look up separately with the commands above.
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=worker-0,35.233.169.40,10.240.0.20 \
-profile=kubernetes \
worker-0-csr.json | cfssljson -bare worker-0'
Repeat for workers 1 and 2 (or script it with the loop below).
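If you don't want to repeat the lookups by hand, a small shell loop can do it. This is just a convenience sketch: it assumes worker-1-csr.json and worker-2-csr.json are already inside the container, and it uses double quotes so the host shell expands the variables before the command string reaches the container.
for instance in worker-1 worker-2; do
  EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
    --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
  INTERNAL_IP=$(gcloud compute instances describe ${instance} \
    --format 'value(networkInterfaces[0].networkIP)')
  # Double quotes: the host expands the variables before docker exec runs
  docker exec -it 9ebdbfd4cedd /bin/sh -c "cfssl gencert \
    -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
    -profile=kubernetes \
    ${instance}-csr.json | cfssljson -bare ${instance}"
done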
Next you need to create a certificate and key for the controller manager. Copy the file kube-controller-manager-csr.json to the cfssl Docker container and generate the certificate:
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager'
Copy the file kube-proxy-csr.json to the cfssl Docker container and generate the certificate:
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy'
Copy the file kube-scheduler-csr.json to the cfssl Docker container and generate the certificate:
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler'
# Find the static IP address
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe external-static-address \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
# Set hostnames
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
The name of the static IP address is external-static-address; in this exercise the returned value is 34.83.119.241. Because the docker exec command below uses single quotes, the shell variables above are not expanded on the host, so the address and hostnames are hardcoded into the -hostname list.
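To print the value you will need to hardcode below:
echo ${KUBERNETES_PUBLIC_ADDRESS}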
Copy the file kubernetes-csr.json to the cfssl Docker container.
The address 10.32.0.1 will be linked to the internal DNS name kubernetes later on.
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,34.83.119.241,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes'
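To double-check that all the addresses made it into the certificate, you can ask cfssl to print the parsed certificate, SANs included:
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl certinfo -cert kubernetes.pem'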
The Controller Manager will use this keypair to generate service account tokens. Copy the file service-account-csr.json to the cfssl Docker container and generate the keypair:
docker exec -it 9ebdbfd4cedd /bin/sh -c 'cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account'
Copy all the generated keys into a new folder inside the container, then copy that folder out of the container:
docker exec -it 9ebdbfd4cedd /bin/bash -c 'mkdir keys'
docker exec -it 9ebdbfd4cedd /bin/bash -c 'cp *.pem ./keys'
docker cp 9ebdbfd4cedd:/go/src/github.com/cloudflare/cfssl/keys ./keys
You need to distribute the following keys and certificates among the controller and worker nodes:
# Copy keys to worker 0
gcloud compute scp ca.pem worker-0-key.pem worker-0.pem worker-0:~/
# Copy keys to controller-0
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem controller-0:~/
Repeat for the other workers and controllers (or use a loop like the one below).
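A possible scripted version of the same distribution, assuming the instance names used in this series and that all the .pem files are in the current directory (worker-0 and controller-0 are included again, which simply overwrites the files already copied):
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done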
Copying the keys and certificates around manually is not one of the best practices with K8S. If you absolutely need to distribute them this way, you can at least automate the copying: edit the gcloud_k8s_instances.tf file to include remote-exec and file provisioners in the google_compute_instance resources.
First, let's add keys to worker instances:
1. Create a folder /etc/kubernetes for the keys with a remote-exec provisioner
1.1. Add a variable with the private project-level key to the .tfvars file:
gce_ssh_pri_key_file = "/Users/username/.ssh/google_compute_engine"
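Terraform also needs a matching variable declaration; a minimal sketch, assuming your variables are declared in something like variables.tf (the gce_ssh_user variable used below is assumed to be declared in the same way):
# Hypothetical declaration to match the .tfvars entry above
variable "gce_ssh_pri_key_file" {
  description = "Path to the private SSH key used by the provisioners"
}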
1.2. Add a provisioner to gcloud_k8s_instances.tf
The catch here is that we are creating three resources with count, so there is no way to hardcode the external instance IP address.
provisioner "remote-exec" {
connection {
type = "ssh"
user = "${var.gce_ssh_user}"
private_key = "${file(var.gce_ssh_pri_key_file)}"
timeout = "500s"
host = self.network_interface.0.access_config.0.nat_ip
}
inline = [
"if [ ! -d /etc/kubernetes ]; then sudo mkdir -p mkdir /etc/kubernetes; fi"
]
}
NB! Count in Terraform regularly leads people to frustration and despair. For instance, if you reference a resource property as
host = "${element(google_compute_instance.workers.*.network_interface.0.access_config.0.nat_ip, count.index)}"
the way the documentation tells you to, the provisioning will fail with a Cycle error (referencing a resource's own attributes by its full name from inside that resource creates a dependency cycle). Because f..k you, really.
After half an hour of angry googling, I decided to use the special self reference, which can only be used inside provisioners:
host = self.network_interface.0.access_config.0.nat_ip
2. Copy the keys there with a file provisioner
The tricky part here is that you already need an SSH key on the instance to be able to remote-exec into it. In the first article we added an SSH key at the project level; we will use this key to connect the provisioners over SSH.
2.1. Add a file provisioner to the instance template:
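A minimal sketch of such a provisioner, assuming the generated .pem files sit in a keys folder next to the Terraform configuration. The file provisioner connects as the unprivileged SSH user, so it copies into the home directory; a follow-up remote-exec step can then move the files into /etc/kubernetes with sudo.
provisioner "file" {
  connection {
    type        = "ssh"
    user        = "${var.gce_ssh_user}"
    private_key = "${file(var.gce_ssh_pri_key_file)}"
    timeout     = "500s"
    host        = self.network_interface.0.access_config.0.nat_ip
  }

  # Assumed layout: the generated files live in ./keys next to the .tf files.
  # Add similar blocks for ca.pem and the worker key file.
  source      = "./keys/worker-${count.index}.pem"
  destination = "/home/${var.gce_ssh_user}/worker-${count.index}.pem"
}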
Check that the keys are present on the worker and controller instances.
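One way to do this is over SSH; the commands below assume the files were copied into the home directories, as above:
gcloud compute ssh worker-0 --command 'ls -l ~/'
gcloud compute ssh controller-0 --command 'ls -l ~/'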
Please remember that the IP addresses we used earlier when generating the keys and certificates will change when Terraform recreates the instances. If you want your Terraform template to create a cluster fully automatically, you will need another way to generate the keys after the infrastructure has been created.