Kubernetes the Hard Way 002

Reserve an external IP address

Kubernetes needs a static external IP address for external access: it will be attached to the load balancer fronting the API servers.

1.1 External address (gcloud_static_address.tf)
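
As a minimal sketch (assuming the Google provider and the us-west1 region used below; the resource and output names are my own), gcloud_static_address.tf can look like this:

# Reserve a regional static external IP address.
# NOTE: the name and region are assumptions -- match them to your project.
resource "google_compute_address" "k8s" {
  name   = "kubernetes-the-hard-way"
  region = "us-west1"
}

# Expose the reserved address so later steps can reference it.
output "k8s_public_address" {
  value = google_compute_address.k8s.address
}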

Instances

1.1.1 Control plane instances (gcloud_k8s_instances.tf)

You can create the control plane instances in a bash loop:

for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet default-us-west1 \
    --tags kubernetes-the-hard-way,controller
done

And another loop adds the worker instances:

for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud \
    --machine-type n1-standard-1 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet default-us-west1 \
    --tags kubernetes-the-hard-way,worker
done

We will use the Terraform file gcloud_k8s_instances.tf to automate this process.
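
As a sketch of that file (Terraform 0.12 syntax; the zone and resource names are assumptions), the controller resource mirrors the gcloud loop above:

# Three controller instances, one per loop iteration above.
resource "google_compute_instance" "controller" {
  count          = 3
  name           = "controller-${count.index}"
  machine_type   = "n1-standard-1"
  zone           = "us-west1-a"  # assumed zone; pick one in your region
  can_ip_forward = true
  tags           = ["kubernetes-the-hard-way", "controller"]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
      size  = 200
    }
  }

  network_interface {
    subnetwork = "default-us-west1"
    network_ip = "10.240.0.1${count.index}"
    access_config {}  # attach an ephemeral public IP
  }

  service_account {
    scopes = ["compute-rw", "storage-ro", "service-management",
              "service-control", "logging-write", "monitoring"]
  }
}

A worker resource looks the same, except the name is worker-${count.index}, the private IP is 10.240.0.2${count.index}, and it carries an extra metadata entry for pod-cidr = "10.200.${count.index}.0/24".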

Connect to instances via SSH

You can add an SSH key to connect to your instances manually, or pass it in the Terraform template. There are pros and cons to both options, but adding the key in the template will teach us to use variables and variable substitution.

First, we need to add the following to variables.tfvars to provide input to the instance resource:

gce_ssh_user = "user"
gce_ssh_pub_key_file = "/Users/username/.ssh/gce001.pub"

Then we declare these variables in the gcloud_k8s_instances.tf file to make them accessible:


variable "gce_ssh_user" {
  type        = string
  description = "Remote SSH user"
  default     = "user"
}
variable "gce_ssh_pub_key_file" {
  type        = string
  description = "Remote SSH key"
  default     = "/Users/username/.ssh/gce001.pub"
}

The following snippet inside the instance resource adds an SSH key to your Google Compute Engine instances:

metadata = {
    ssh-keys =  "${var.gce_ssh_user}:${file(var.gce_ssh_pub_key_file)}"
}
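
Note that Terraform auto-loads only terraform.tfvars and *.auto.tfvars files; since our file is named variables.tfvars, pass it explicitly: terraform apply -var-file=variables.tfvars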

Test the keys by logging in to one of the controller instances via SSH:

# Find the IP address of your target
gcloud compute instances list
# Connect to target
ssh -i /Users/username/.ssh/gce001 user@XXX.XXX.XXX.XXX

 
[Screenshot: SSH session to a controller instance]
 
Type exit to finish your SSH session.

Adding the keys manually

After your instances are created and come online, run this command in your terminal:

gcloud compute ssh controller-0

A key will be generated and propagated to your instances, after which you will be logged in to controller-0 as $USER.

 
[Screenshot: gcloud compute ssh session on controller-0]
 

The key is saved in your system's default .ssh location (~/.ssh/google_compute_engine).

The project and file structure

 
[Screenshot: project and file structure]
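
Based on the files referenced in this part, the layout is roughly the following (a sketch; your project may also contain files from earlier parts):

.
├── gcloud_k8s_instances.tf
├── gcloud_static_address.tf
└── variables.tfvars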
 

Next, we will add a Certificate Authority and generate TLS certificates.

November 11, 2019   (v.b6b8c00)