How to manually install Kubernetes cluster on AWS EC2 instances

All credit goes to the author of the original post "12 steps to setup Kubernetes Cluster on AWS EC2".

I followed the post in my testbed and successfully created a Kubernetes cluster. There were some caveats related to updated versions of the software, so my post introduces some improvements over the original in the sections on troubleshooting kubelet errors and installing the CNI network plugin.

  1. 12 steps to setup Kubernetes Cluster on AWS EC2
  2. How do I know what Ubuntu AMI to launch on EC2?
  3. Using instance profiles
  4. Launching EC2 Instance :: A client error (UnauthorizedOperation)
  5. Granting Permission to Launch EC2 Instances with IAM Roles (PassRole Permission)
  6. The 169.254.169.254 IP address
  7. kubeadm init fail : dial tcp connect: connection refused
  8. kubeadm reset

Installation notes:

  1. In my installation I have several AWS profiles. To specify the required profile for usage in AWS CLI use: --profile=[profile_name]
  2. My region is: eu-central-1. To specify region in AWS CLI use: --region=eu-central-1
  3. Instance type is: t2.medium. To save money you can use: t2.micro. But there may be issues related to the performance.
  4. SSH key pair name is: makbanov-aws-ec2
  5. If you have several AWS CLI profiles: When using script for instance profiles, manually change the script by adding --profile=[profile_name] --region=[region_name] to the commands. Then run the script.
  6. For the CNI installation, see the Install Container Network Interface (CNI) Plugin section below.
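
As an alternative to repeating these flags on every command, the AWS CLI also reads the profile and region from environment variables. A minimal sketch, assuming the profile name work and region eu-central-1 from the notes above:

```shell
# Instead of passing --profile/--region each time, export them once per
# shell session; the AWS CLI picks these environment variables up.
export AWS_PROFILE=work                # profile name used in this post
export AWS_DEFAULT_REGION=eu-central-1 # region used in this post

# Confirm what the current shell will hand to the CLI:
echo "$AWS_PROFILE / $AWS_DEFAULT_REGION"   # prints: work / eu-central-1
```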

Installation scheme

(credits: the scheme is from the original post)

Instructions on how to install a Kubernetes cluster on AWS

Create SSH key-pair and import into AWS region

  1. Choose region in AWS console (Frankfurt eu-central-1)
  2. On your local host, generate the key pair:
    cd ~/.ssh
    ssh-keygen -t rsa -P "" -f [key_pair_name]
  3. Copy the content of the generated public key and import into the AWS region on EC2 section
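
The import in step 3 can also be done from the CLI instead of the console. A sketch (generating into a scratch directory so nothing in ~/.ssh is touched; the import-key-pair call is commented out because it needs valid AWS credentials):

```shell
# Generate an RSA key pair without a passphrase, as in step 2 above,
# but into a scratch directory for this demonstration.
KEY_DIR=$(mktemp -d)
ssh-keygen -t rsa -P "" -f "$KEY_DIR/makbanov-aws-ec2" -q

# Import the public half into the region from the CLI:
# aws ec2 import-key-pair --key-name makbanov-aws-ec2 \
#     --public-key-material "fileb://$KEY_DIR/makbanov-aws-ec2.pub" \
#     --profile=work --region=eu-central-1

# An RSA public key file starts with the key type:
cut -d' ' -f1 "$KEY_DIR/makbanov-aws-ec2.pub"   # prints: ssh-rsa
```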

Create VPC

  1. Create VPC
    VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region=eu-central-1 --query "Vpc.VpcId" --output text --profile=work)
  2. Show VPC ID:
    echo $VPC_ID

Enable DNS support

  1. Enable DNS support:
    aws ec2 modify-vpc-attribute --enable-dns-support --vpc-id $VPC_ID --profile=work --region=eu-central-1
  2. Enable DNS hostname support:
    aws ec2 modify-vpc-attribute --enable-dns-hostnames --vpc-id $VPC_ID --profile=work --region=eu-central-1

Add tags to the VPC

aws ec2 create-tags --resources $VPC_ID --tags Key=Name,Value=monitoring-stand Key=kubernetes.io/cluster/monitoring-stand,Value=shared --profile=work --region=eu-central-1

Get Private route table ID

  1. Get ID:
    PRIVATE_ROUTE_TABLE_ID=$(aws ec2 describe-route-tables --filters Name=vpc-id,Values=$VPC_ID --query "RouteTables[0].RouteTableId" --output=text --profile=work --region=eu-central-1)
  2. Show ID:
    echo $PRIVATE_ROUTE_TABLE_ID

Add second route table to manage public subnets in VPC

  1. Get Public route ID:
    PUBLIC_ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query "RouteTable.RouteTableId" --output text --profile=work --region=eu-central-1)
  2. Show ID:
    echo $PUBLIC_ROUTE_TABLE_ID

Give the route tables names

  1. Name public subnet:
    aws ec2 create-tags --resources $PUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=monitoring-stand-public --profile=work --region=eu-central-1
  2. Name private subnet:
    aws ec2 create-tags --resources $PRIVATE_ROUTE_TABLE_ID --tags Key=Name,Value=monitoring-stand-private --profile=work --region=eu-central-1

Create private and public subnets for cluster

  1. Get all available AZs in your region:
    aws ec2 describe-availability-zones --region=eu-central-1
  2. Create private subnet with CIDR /24 == 256 IP in eu-central-1b:
    PRIVATE_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --availability-zone eu-central-1b --cidr-block 10.0.0.0/24 --query "Subnet.SubnetId" --output text --profile=work --region=eu-central-1)
  3. Show private subnet ID:
    echo $PRIVATE_SUBNET_ID
  4. Create tags for subnet:
    aws ec2 create-tags --resources $PRIVATE_SUBNET_ID --tags Key=Name,Value=monitoring-stand-private-1b Key=kubernetes.io/cluster/monitoring-stand,Value=owned Key=kubernetes.io/role/internal-elb,Value=1 --profile=work --region=eu-central-1
  5. Create public subnet in the same AZ:
    PUBLIC_SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --availability-zone eu-central-1b --cidr-block 10.0.16.0/24 --query "Subnet.SubnetId" --output text --profile=work --region=eu-central-1)
  6. Show public subnet ID:
    echo $PUBLIC_SUBNET_ID
  7. Create tags for public subnet:
    aws ec2 create-tags --resources $PUBLIC_SUBNET_ID --tags Key=Name,Value=monitoring-stand-public-1b Key=kubernetes.io/cluster/monitoring-stand,Value=owned Key=kubernetes.io/role/elb,Value=1 --profile=work --region=eu-central-1
  8. Associate public subnet with the public route table:
    aws ec2 associate-route-table --subnet-id $PUBLIC_SUBNET_ID --route-table-id $PUBLIC_ROUTE_TABLE_ID --profile=work --region=eu-central-1

Create Internet Gateway

  1. In order for the instances in our public subnet to communicate with the internet, we will create an internet gateway, attach it to our VPC, and then add a route to the route table, routing traffic bound for the internet to the gateway
    INTERNET_GATEWAY_ID=$(aws ec2 create-internet-gateway --query "InternetGateway.InternetGatewayId" --output text --profile=work --region=eu-central-1)
  2. Show IG ID:
    echo $INTERNET_GATEWAY_ID
  3. Attach internet gateway to VPC:
    aws ec2 attach-internet-gateway --internet-gateway-id $INTERNET_GATEWAY_ID --vpc-id $VPC_ID --profile=work --region=eu-central-1
  4. Add a default route to the internet gateway in the public route table:
    aws ec2 create-route --route-table-id $PUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $INTERNET_GATEWAY_ID --profile=work --region=eu-central-1

Create NAT gateway

In order to configure the instances in the private subnet, we will need them to be able to make outbound connections to the internet in order to install software packages and so on.

To make this possible, we will add a NAT gateway to the public subnet and then add a route to the private route table for internet-bound traffic

  1. Allocate address for NAT gateway:
    NAT_GATEWAY_ALLOCATION_ID=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text --profile=work --region=eu-central-1)
  2. Show the allocation ID:
    echo $NAT_GATEWAY_ALLOCATION_ID
  3. Create NAT gateway:
    NAT_GATEWAY_ID=$(aws ec2 create-nat-gateway --subnet-id $PUBLIC_SUBNET_ID --allocation-id $NAT_GATEWAY_ALLOCATION_ID --query NatGateway.NatGatewayId --output text --profile=work --region=eu-central-1)
  4. Show NAT gateway ID:
    echo $NAT_GATEWAY_ID

Create route:

At this stage, you may have to wait a few moments for the NAT gateway to be created before creating the route.
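
Instead of waiting an arbitrary amount of time, you can block until the gateway is ready; the AWS CLI ships a waiter for this (aws ec2 wait nat-gateway-available). A generic polling helper can also be sketched as follows (wait_for is our own name, not an AWS command):

```shell
# Poll a command until its output equals the expected value, giving up
# after 30 attempts. First argument: expected value; rest: the command.
wait_for() {
  local expected=$1; shift
  local tries=0
  until [ "$("$@" 2>/dev/null)" = "$expected" ]; do
    tries=$((tries + 1))
    [ "$tries" -ge 30 ] && return 1
    sleep 1
  done
}

# Real usage would poll the NAT gateway state, e.g.:
# wait_for available aws ec2 describe-nat-gateways \
#     --nat-gateway-ids $NAT_GATEWAY_ID --query "NatGateways[0].State" \
#     --output text --profile=work --region=eu-central-1
```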

  1. Create route:
    aws ec2 create-route --route-table-id $PRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --nat-gateway-id $NAT_GATEWAY_ID --profile=work --region=eu-central-1

Set bastion host:

We will use the first host we are going to launch as a bastion host that will allow us to connect to other servers that are only accessible from within the private side of our VPC network.

Create security group:

  1. Create a security group to allow SSH traffic to the bastion host:
    BASTION_SG_ID=$(aws ec2 create-security-group --group-name ssh-bastion --description "SSH Bastion Hosts" --vpc-id $VPC_ID --query GroupId --output text --profile=work --region=eu-central-1)
  2. Show Bastion host SG ID:
    echo $BASTION_SG_ID
  3. Allow SSH ingress on port 22 of the bastion host. Opening it to all internet traffic (0.0.0.0/0, as below) is insecure; the recommended way is to restrict the CIDR to your own stable IP address:
    aws ec2 authorize-security-group-ingress --group-id $BASTION_SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0 --profile=work --region=eu-central-1

Create EC2 Compute Instance for bastion host

  1. Select the official Ubuntu AMI ID for your EC2 instances:
    UBUNTU_AMI_ID=$(aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*hvm-ssd/ubuntu-xenial-16.04*' --query "sort_by(Images, &Name)[-1].ImageId" --output text --profile=work --region=eu-central-1)
  2. Show Ubuntu AMI ID:
    echo $UBUNTU_AMI_ID
  3. Run the bastion host as an EC2 t2.micro instance with the SSH key pair makbanov-aws-ec2:
    BASTION_ID=$(aws ec2 run-instances --image-id $UBUNTU_AMI_ID --instance-type t2.micro --key-name makbanov-aws-ec2 --security-group-ids $BASTION_SG_ID  --subnet-id $PUBLIC_SUBNET_ID --associate-public-ip-address --query "Instances[0].InstanceId" --output text --profile=work --region=eu-central-1)
  4. Show Bastion ID:
    echo $BASTION_ID
  5. Update the Bastion instance with the Name tag to recognize it in the AWS EC2 dashboard:
    aws ec2 create-tags --resources $BASTION_ID --tags Key=Name,Value=ssh-bastion --profile=work --region=eu-central-1

Access bastion host

  1. Once the instance has launched, you should be able to run the aws ec2 describe-instances command to discover the public IP address of your new instance:
    BASTION_IP=$(aws ec2 describe-instances --instance-ids $BASTION_ID --query "Reservations[0].Instances[0].PublicIpAddress" --output text --profile=work --region=eu-central-1)
  2. Show Bastion IP:
    echo $BASTION_IP
  3. Check the SSH connection to the bastion host. You should now be able to access the instance using the private key from the same key pair as used to create the instance:
    ssh -i ~/.ssh/work-makbanov-aws-ec2 ubuntu@$BASTION_IP

Install sshuttle to configure a proxy

It is possible to forward traffic from your workstation to the private network by just using SSH port forwarding. However, we can make accessing servers via the bastion instance much more convenient by using the sshuttle tool.

  1. Install sshuttle:
    pip install sshuttle
    or for Ubuntu:
    sudo apt-get install sshuttle
  2. To transparently proxy traffic to the instances inside the private network, run the following in a separate terminal. First set the bastion IP:
    export BASTION_IP=[bastion_ip_address]
    Then run sshuttle:
    sshuttle -r ubuntu@$BASTION_IP 10.0.0.0/16 --dns --ssh-cmd 'ssh -i ~/.ssh/work-makbanov-aws-ec2'
  3. On another terminal, we can validate that this setup is working correctly by trying to log in to our instance through its private DNS name:
    aws ec2 describe-instances --profile=work --region=eu-central-1 --instance-ids $BASTION_ID --query "Reservations[0].Instances[0].PrivateDnsName"
  4. Now that we have the DNS name, try to connect to the instance using the DNS name:
    ssh -i ~/.ssh/work-makbanov-aws-ec2 ubuntu@[private_dns_name]
    This tests whether you can resolve a DNS entry from the private DNS provided by AWS to instances running within your VPC, and whether the private IP address now returned by that query is reachable.

Create Instance Profiles

  1. In order for Kubernetes to make use of its integrations with the AWS cloud APIs, we need to set up IAM instance profiles. An instance profile is a way for the Kubernetes software to authenticate with the AWS API, and for us to assign fine-grained permissions on the actions that Kubernetes can take.
    curl -o
  2. Execute this script:
    sh -e
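
The downloaded script's exact contents aren't reproduced here, but creating an instance profile generally follows the pattern below. This is a sketch assuming the role and profile names match the K8sMaster/K8sNode names used by the run-instances and create-launch-configuration commands later in this post; the aws iam calls are commented out because they require valid credentials, and only the trust policy document is generated and validated locally:

```shell
# Every EC2 instance role needs a trust policy letting EC2 assume it.
cat > /tmp/ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF

# Per role (here K8sMaster; K8sNode is analogous):
# aws iam create-role --role-name K8sMaster \
#     --assume-role-policy-document file:///tmp/ec2-trust-policy.json
# aws iam create-instance-profile --instance-profile-name K8sMaster
# aws iam add-role-to-instance-profile \
#     --instance-profile-name K8sMaster --role-name K8sMaster

# Sanity-check that the document is valid JSON:
python3 -c "import json; json.load(open('/tmp/ec2-trust-policy.json')); print('trust policy OK')"
```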

Create AMI - Install Kubernetes Software

Now we will create one EC2 instance on which to set up the Kubernetes software. We will use it as an AMI to create the EC2 instances for our Kubernetes cluster on AWS.

Create security group

  1. Create an SG for this instance:
    K8S_AMI_SG_ID=$(aws ec2 create-security-group --group-name k8s-ami --description "Kubernetes AMI Instances" --vpc-id $VPC_ID --query GroupId --output text --profile=work --region=eu-central-1)
  2. We will need to be able to access this instance from our bastion host in order to log in and install software, so let's add a rule to allow SSH traffic on port 22 from instances in the ssh-bastion security group, as follows:
    aws ec2 authorize-security-group-ingress --group-id $K8S_AMI_SG_ID --protocol tcp --port 22 --source-group $BASTION_SG_ID --profile=work --region=eu-central-1

Create EC2 instance

  1. We're using t2.medium type:
    K8S_AMI_INSTANCE_ID=$(aws ec2 run-instances --subnet-id $PRIVATE_SUBNET_ID --image-id $UBUNTU_AMI_ID --instance-type t2.medium --key-name makbanov-aws-ec2 --security-group-ids $K8S_AMI_SG_ID  --query "Instances[0].InstanceId" --output text --profile=work --region=eu-central-1)
  2. Show instance ID:
    echo $K8S_AMI_INSTANCE_ID
  3. Add Name tag for the instance:
    aws ec2 create-tags --resources $K8S_AMI_INSTANCE_ID --tags Key=Name,Value=kubernetes-node-ami --profile=work --region=eu-central-1
  4. Grab the IP address of the instance:
    K8S_AMI_IP=$(aws ec2 describe-instances --instance-ids $K8S_AMI_INSTANCE_ID --query "Reservations[0].Instances[0].PrivateIpAddress" --output text --profile=work --region=eu-central-1)
  5. Show IP address of the instance:
    echo $K8S_AMI_IP
  6. Log in with ssh:
    ssh -i ~/.ssh/work-makbanov-aws-ec2 ubuntu@$K8S_AMI_IP
  7. Now we are ready to start configuring the instance with the software and configuration that all of the nodes in our cluster will need. Start by synchronizing the apt repositories:
    sudo apt-get update
  8. Install container runtime (Docker):
  9. Change to root:
    sudo su -
  10. Kubernetes will work well with the version of Docker that is included in the Ubuntu repositories, so we can install it simply by installing the package, as follows:
    apt-get install -y docker.io
  11. Check that Docker is installed:
    docker version

Install Kubernetes packages

  1. Next, we will install the packages that we need to set up a Kubernetes control plane on this host. These packages are described in the following list:
    • kubelet: The node agent that Kubernetes uses to control the container runtime. This is used to run all the other components of the control plane within Docker containers.
    • kubeadm: This utility is responsible for bootstrapping a Kubernetes cluster.
    • kubectl: The Kubernetes command-line client, which will allow us to interact with the Kubernetes API server.
  2. Add the signing key for the apt repository that hosts the Kubernetes packages:
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
  3. Add the Kubernetes apt repository:
    apt-add-repository 'deb http://apt.kubernetes.io/ kubernetes-xenial main'
  4. Resynchronize the package indexes
    apt-get update
  5. Install the required packages:
    apt-get install -y kubelet kubeadm kubectl
  6. Shutdown the instance:
    shutdown -h now

Create an AMI

  1. We can use the create-image command to instruct AWS to snapshot the root volume of our instance and use it to produce an AMI.
    K8S_AMI_ID=$(aws ec2 create-image --name k8s-1.10.3-001 --instance-id $K8S_AMI_INSTANCE_ID --description "Kubernetes v1.10.3" --query ImageId --output text --profile=work --region=eu-central-1)
  2. Check the status with the describe-images command:
    aws ec2 describe-images --profile=work --region=eu-central-1 --image-ids $K8S_AMI_ID --query "Images[0].State"

Setup Kubernetes Cluster on AWS

Now we can launch an instance for Kubernetes control plane components.

Create security group

  1. Create a security group for this new instance:
    K8S_MASTER_SG_ID=$(aws ec2 create-security-group --group-name k8s-master --description "Kubernetes Master Hosts" --vpc-id $VPC_ID --query GroupId --output text --profile=work --region=eu-central-1)
  2. Show SG ID:
    echo $K8S_MASTER_SG_ID
  3. We will need to be able to access this instance from our bastion host in order to log in and configure the cluster. We will add a rule to allow SSH traffic on port 22 from instances in the ssh-bastion security group:
    aws ec2 authorize-security-group-ingress --group-id $K8S_MASTER_SG_ID --protocol tcp --port 22 --source-group $BASTION_SG_ID --profile=work --region=eu-central-1

Launch EC2 instance using AMI:

  1. Now we can launch the instance using the AMI image we created earlier which contains all the Kubernetes packages and docker as container runtime:
    K8S_MASTER_INSTANCE_ID=$(aws ec2 run-instances --private-ip-address 10.0.0.11 --subnet-id $PRIVATE_SUBNET_ID --image-id $K8S_AMI_ID --instance-type t2.medium --key-name makbanov-aws-ec2 --security-group-ids $K8S_MASTER_SG_ID --credit-specification CpuCredits=unlimited --iam-instance-profile Name=K8sMaster --query "Instances[0].InstanceId" --output text --profile=work --region=eu-central-1)
  2. Show Master instance ID:
    echo $K8S_MASTER_INSTANCE_ID
  3. Give an instance name:
    aws ec2 create-tags --resources $K8S_MASTER_INSTANCE_ID --tags Key=Name,Value=monitoring-stand-k8s-master Key=kubernetes.io/cluster/monitoring-stand,Value=owned --profile=work --region=eu-central-1
  4. Connect to this instance:
    ssh -i ~/.ssh/work-makbanov-aws-ec2 ubuntu@10.0.0.11

Pre-requisite configuration of controller node

  1. To ensure that all the Kubernetes components use the same name, we should set the hostname to match the name given by the AWS metadata service, because that name is used by components that have the AWS cloud provider enabled. The 169.254.169.254 IP address is a "magic" IP in the cloud world; in AWS it is used to retrieve user data and instance metadata specific to an instance. On the Ubuntu instance:
    sudo hostnamectl set-hostname $(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
  2. Check the hostname:
    hostname
  3. To correctly configure the kubelet to use the AWS cloud provider, we create a systemd drop-in file to pass some extra arguments to the kubelet:
    printf '[Service]\nEnvironment="KUBELET_EXTRA_ARGS=--node-ip=%s --cloud-provider=aws"\n' $(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) | sudo tee /etc/systemd/system/kubelet.service.d/20-aws.conf
  4. Check:
    cat /etc/systemd/system/kubelet.service.d/20-aws.conf
  5. Reload the configuration file and restart kubelet service:
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet

Initialize controller node

  1. We provide kubeadm with --token-ttl 0, which means the token issued to allow worker nodes to join the cluster won't expire. Now initialize the controller node:
    sudo su -
    kubeadm init --token-ttl 0 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
  2. If you encounter an error related to the kubelet, reset the cluster and switch Docker to the systemd cgroup driver:
    kubeadm reset
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    cat /etc/docker/daemon.json
    systemctl daemon-reload
    systemctl restart docker
    kubeadm init --token-ttl 0 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
  3. Save the kubeadm join command printed at the end of the output, as we will use it to join the worker node to our controller node:
    Your Kubernetes control-plane has initialized successfully!
    To start using your cluster, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    Alternatively, if you are the root user, you can run:
    export KUBECONFIG=/etc/kubernetes/admin.conf
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
    Then you can join any number of worker nodes by running the following on each as root:
    kubeadm join 10.0.0.11:6443 --token kp9gfv.5g1x40k2dgz6oobx \
     --discovery-token-ca-cert-hash sha256:c045d5deaddc565839e1046f084dbd969aa5b2c74774a92c02e4c260402731a7
  4. We can check that the API server is functioning correctly by following the instructions given by kubeadm to set up kubectl on the host:
    ubuntu@ip-10-0-0-11:~$ mkdir -p $HOME/.kube
    ubuntu@ip-10-0-0-11:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    ubuntu@ip-10-0-0-11:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  5. If you're planning to use root user then (not recommended):
    ubuntu@ip-10-0-0-11:~$ sudo su -
    root@ip-10-0-0-11:~# export KUBECONFIG=/etc/kubernetes/admin.conf
    root@ip-10-0-0-11:~# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
  6. Check the version of kubectl:
    kubectl version --short
    Client Version: v1.23.5
    Server Version: v1.23.5
  7. Check the status of cluster nodes:
    kubectl get nodes
  8. Currently the status of the controller node is NotReady because the network plugin is still not installed.

Install Container Network Interface (CNI) Plugin

  1. We will be deploying a CNI plugin called amazon-vpc-cni-k8s that integrates Kubernetes with the native networking capabilities of the AWS VPC network.

This plugin works by attaching secondary private IP addresses to the elastic network interfaces of the EC2 instances that form the nodes of our cluster, and then assigning them to pods as they are scheduled by Kubernetes to go into each node. Traffic is then routed directly to the correct node by the AWS VPC network fabric.

touch aws-k8s-cni.yaml

Populate aws-k8s-cni.yaml with the plugin manifest from the amazon-vpc-cni-k8s project. The manifest defines an aws-node ClusterRole, ServiceAccount, and ClusterRoleBinding; an aws-node DaemonSet in the kube-system namespace that runs the plugin on every node with host mounts for /opt/cni/bin, /etc/cni/net.d, /var/log, and /var/run/docker.sock; and the ENIConfig CustomResourceDefinition (versions v1beta1 and v1).
  1. Apply aws-k8s-cni.yaml file:
    kubectl apply -f aws-k8s-cni.yaml
  2. You can monitor the networking plugin that is being installed and started by running the following:
    root@ip-10-0-0-11:~# kubectl -n kube-system get pods
  3. Now you can check the status of your controller node and it should be in Ready state:
    kubectl get nodes

Create worker nodes

Create security group (on local machine):

  1. Create a new security group for the worker nodes
    K8S_NODES_SG_ID=$(aws ec2 create-security-group --group-name k8s-nodes --description "Kubernetes Nodes" --vpc-id $VPC_ID --query GroupId --output text --profile=work --region=eu-central-1)
  2. Show SG ID:
    echo $K8S_NODES_SG_ID
  3. We will allow access to the worker nodes via the bastion host in order for us to log in for debugging purposes:
    aws ec2 authorize-security-group-ingress --group-id $K8S_NODES_SG_ID --protocol tcp --port 22 --source-group $BASTION_SG_ID --profile=work --region=eu-central-1
  4. We want to allow the kubelet and other processes running on the worker nodes to be able to connect to the API server on the master node. We do this using the following command:
    aws ec2 authorize-security-group-ingress --group-id $K8S_MASTER_SG_ID --protocol tcp --port 6443 --source-group $K8S_NODES_SG_ID --profile=work --region=eu-central-1
  5. Since the kube-dns add-on may run on the master node, let's allow this traffic from the nodes security group, as follows:
    aws ec2 authorize-security-group-ingress --group-id $K8S_MASTER_SG_ID --protocol all --port 53 --source-group $K8S_NODES_SG_ID --profile=work --region=eu-central-1
  6. We also need the master node to be able to connect to the APIs that are exposed by the kubelet in order to stream logs and other metrics. We enable this by entering the following command:
    aws ec2 authorize-security-group-ingress --group-id $K8S_NODES_SG_ID --protocol tcp --port 10250 --source-group $K8S_MASTER_SG_ID --profile=work --region=eu-central-1
    aws ec2 authorize-security-group-ingress --group-id $K8S_NODES_SG_ID --protocol tcp --port 10255 --source-group $K8S_MASTER_SG_ID --profile=work --region=eu-central-1
  7. Finally, we need to allow any pod on any node to be able to connect to any other pod. We do this using the following command:
    aws ec2 authorize-security-group-ingress --group-id $K8S_NODES_SG_ID  --protocol all --port -1 --source-group $K8S_NODES_SG_ID --profile=work --region=eu-central-1

Create user-data script (on local machine):

In order to have the worker node(s) register themselves with the master when they start up, we will create a startup script. This is a user-data script, which is executed immediately after an instance starts.

The script enables logging for troubleshooting, applies the systemd configuration that updates the hostname and connects to the master node, and finally runs the kubeadm join command that joins the worker node to the cluster. This command was printed at the end of the kubeadm init stage we executed earlier.

#!/bin/bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
date '+%Y-%m-%d %H:%M:%S'

sudo hostnamectl set-hostname $(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat << EOF >/etc/systemd/system/kubelet.service.d/20-aws.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) --cloud-provider=aws"
EOF

sudo systemctl daemon-reload
sudo systemctl restart kubelet

sudo kubeadm join 10.0.0.11:6443 --token kp9gfv.5g1x40k2dgz6oobx --discovery-token-ca-cert-hash sha256:c045d5deaddc565839e1046f084dbd969aa5b2c74774a92c02e4c260402731a7

echo END
date '+%Y-%m-%d %H:%M:%S'
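
Since a mistake in user data only surfaces after an instance boots, it is worth parsing the script locally first. A sketch, assuming the script was saved as /tmp/k8s-worker-user-data.sh (the file name is our choice; the body below is abbreviated):

```shell
# bash -n parses a script without executing it, catching syntax errors
# (e.g. an unterminated heredoc) before the script is baked into user data.
cat > /tmp/k8s-worker-user-data.sh <<'USERDATA'
#!/bin/bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
date '+%Y-%m-%d %H:%M:%S'
echo END
USERDATA

bash -n /tmp/k8s-worker-user-data.sh && echo "syntax OK"
```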

Create AWS::AutoScaling::LaunchConfiguration

The AWS::AutoScaling::LaunchConfiguration resource specifies the launch configuration that can be used by an Auto Scaling group to configure Amazon EC2 instances.

First, we create a launch configuration using the following command. This is like a template of the configuration that the autoscaling group will use to launch our worker nodes. Many of the arguments are similar to those that we would have passed to the EC2 run-instances command:

aws autoscaling create-launch-configuration --launch-configuration-name k8s-node-1.10.3-t2-medium-001 --image-id $K8S_AMI_ID --key-name makbanov-aws-ec2 --security-groups $K8S_NODES_SG_ID --user-data file://~/[user_data_script_file] --instance-type t2.medium --iam-instance-profile K8sNode --no-associate-public-ip-address --profile=work --region=eu-central-1

Create AWS::AutoScaling::AutoScalingGroup

The AWS::AutoScaling::AutoScalingGroup resource defines an Amazon EC2 Auto Scaling group, which is a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management.

Once we have created the launch configuration, we can create an autoscaling group, as follows:

aws autoscaling create-auto-scaling-group --auto-scaling-group-name monitoring-stand-t2-medium-nodes --launch-configuration-name k8s-node-1.10.3-t2-medium-001 --min-size 1 --max-size 1 --vpc-zone-identifier $PRIVATE_SUBNET_ID --tags Key=Name,Value=monitoring-stand-k8s-node,PropagateAtLaunch=true Key=kubernetes.io/cluster/monitoring-stand,Value=owned,PropagateAtLaunch=true --profile=work --region=eu-central-1

This step will automatically create a new AWS EC2 instance which will act as our worker node. Since we have defined a user-data script, that script will be executed immediately after the launch of the instance and will join it to the controller node.

Verify worker node status

Next, connect to your master node and check the status of the available nodes. You should see your worker node within a few minutes, once kubeadm join has completed:

root@ip-10-0-0-11:~# kubectl get nodes

Troubleshoot worker node startup

  1. If your worker node is not showing up in the previous command, SSH into the worker node via the bastion host and check /var/log/user-data.log (where the user-data script logs) as well as the kubelet logs (journalctl -u kubelet).
  2. If the error is that the kubelet process is not starting up and running, run the following commands:
    cat <<EOF | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    cat /etc/docker/daemon.json
    systemctl daemon-reload
    systemctl restart docker

Create a Pod to verify cluster (on Master node)

  1. Create a simple nginx web server Pod on the cluster (nginx.yaml):
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      namespace: default
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  2. Apply file:
    root@ip-10-0-0-11:~# kubectl create -f nginx.yaml
  3. Check the status of the Pod:
    root@ip-10-0-0-11:~# kubectl get pods
  4. Try to connect to the container inside the Pod:
    root@ip-10-0-0-11:~# kubectl exec -it nginx -- nginx -v