How to Install and Deploy Kubernetes on Ubuntu 16.04

Summary: In this tutorial, we are going to set up a multi-node Kubernetes cluster on Ubuntu 16.04 servers using Alibaba Cloud Elastic Compute Service (ECS).

By Hitesh Jethva, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

Kubernetes is a free, open-source container orchestration system. It provides a platform for automating the deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes gives you the freedom to take advantage of on-premises, hybrid, or public cloud infrastructure, freeing organizations from tedious deployment tasks.

Kubernetes was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). It is quickly becoming the new standard for deploying and managing software in the cloud. Kubernetes follows a master-slave architecture, in which a master node provides centralized control over all agent (worker) nodes. A Kubernetes cluster is built from several components, including etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, Docker, and more.

In this tutorial, we are going to set up a multi-node Kubernetes cluster on Ubuntu 16.04 servers.

Prerequisites

  • Two fresh Alibaba Cloud Elastic Compute Service (ECS) instances with Ubuntu 16.04 server installed.
  • A static IP address 192.168.0.103 configured on the first instance (Master) and 192.168.0.104 configured on the second instance (Slave).
  • A minimum of 2GB RAM per instance.
  • A root password set up on each instance.

Launch Alibaba Cloud ECS Instance

First, log in to your Alibaba Cloud ECS Console (https://ecs.console.aliyun.com/?spm=a3c0i.o25424en.a3.13.388d499ep38szx). Create a new ECS instance, choosing Ubuntu 16.04 as the operating system with at least 2GB RAM. Connect to your ECS instance and log in as the root user.

Once you are logged into your Ubuntu 16.04 instance, run the following command to update your base system with the latest available packages.

apt-get update -y

Configuring Your ECS Server

Before starting, you will need to configure the hosts file and the hostname on each server so that the servers can reach each other by hostname.

First, open the /etc/hosts file on the first server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node
192.168.0.104 slave-node

Save and close the file when you are finished, then set the hostname by running the following command:

hostnamectl set-hostname master-node

Next, open the /etc/hosts file on the second server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node
192.168.0.104 slave-node

Save and close the file when you are finished, then set the hostname by running the following command:

hostnamectl set-hostname slave-node
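
At this point, each node should be able to resolve the other by name. As a quick optional check (assuming ICMP traffic is allowed by your security group rules), ping each host from the other server:

ping -c 2 master-node
ping -c 2 slave-node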

Next, you will need to disable swap memory on each server, because the kubelet does not support swap and will not work if swap is active or even listed in your /etc/fstab file.

You can disable swap memory usage with the following command:

swapoff -a

You can disable swap permanently by commenting out the swap entry in /etc/fstab:

nano /etc/fstab

Comment out the swap line as shown below:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda4 during installation
UUID=6f612675-026a-4d52-9d02-547030ff8a7e /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda6 during installation
#UUID=46ee415b-4afa-4134-9821-c4e4c275e264 none            swap    sw              0       0
/dev/sda5 /Data               ext4   defaults  0 0

Save and close the file when you are finished.
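
As an optional check, you can confirm that swap is now fully disabled on each server; free should report 0 for swap and swapon should print no active devices:

free -m
swapon -s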

Install Docker

Before starting, you will need to install Docker on both the master and the slave server. By default, the latest version of Docker is not available in the Ubuntu 16.04 repository, so you will need to add the official Docker repository to your system.

First, install the packages required to add the Docker repository with the following command:

apt-get install apt-transport-https ca-certificates curl software-properties-common -y

Next, download and add Docker's GPG key with the following command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Next, add the Docker repository with the following command:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Next, update the repository and install Docker with the following commands:

apt-get update -y
apt-get install docker-ce -y
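
Once the installation finishes, it is a good idea to confirm that Docker is installed and the daemon is running before moving on:

docker --version
systemctl status docker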

Install Kubernetes

Next, you will need to install kubeadm, kubectl, and kubelet on both servers. First, download and add the Kubernetes GPG key with the following command:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Next, add the Kubernetes repository with the following command:

echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Finally, update the repository and install Kubernetes with the following commands:

apt-get update -y
apt-get install kubelet kubeadm kubectl -y
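
Optionally, you can verify the installed versions and hold the packages so that an unattended apt upgrade does not move your cluster to a newer, untested release:

kubeadm version
kubectl version --client
apt-mark hold kubelet kubeadm kubectl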

Configure Master Node

All the required packages are now installed on both servers, so it's time to configure the Kubernetes Master Node.

First, initialize your cluster using the Master Node's private IP address with the following command:

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.0.103

You should see the following output:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Note: Note down the complete kubeadm join command (including the token) from the above output. It will be used to join the Slave Node to the Master Node in a later step.
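
If you lose this output, you do not need to re-initialize the cluster. On recent kubeadm releases you can print a fresh join command (with a new token) from the Master Node at any time:

kubeadm token create --print-join-command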

Next, you will need to run the following commands to configure the kubectl tool:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Next, check the status of the Master Node by running the following command:

kubectl get nodes

You should see the following output:

NAME          STATUS     ROLES     AGE       VERSION
master-node   NotReady   master    14m       v1.9.4

In the above output, you can see that the Master Node is listed as NotReady. This is because the cluster does not yet have a Container Network Interface (CNI) plugin providing pod networking.

Let's deploy the Calico CNI plugin from the Master Node with the following command:

kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Make sure Calico was deployed correctly by running the following command:

kubectl get pods --all-namespaces

You should see the following output:

NAMESPACE     NAME                                      READY     STATUS              RESTARTS   AGE
kube-system   calico-etcd-p2gbx                         0/1       ContainerCreating   0          35s
kube-system   calico-kube-controllers-d554689d5-v5lb6   0/1       Pending             0          34s
kube-system   calico-node-667j2                         0/2       ContainerCreating   0          35s
kube-system   etcd-master-node                          1/1       Running             0          15m
kube-system   kube-apiserver-master-node                1/1       Running             0          15m
kube-system   kube-controller-manager-master-node       1/1       Running             0          15m
kube-system   kube-dns-6f4fd4bdf-7rl74                  0/3       Pending             0          15m
kube-system   kube-proxy-hqb98                          1/1       Running             0          15m
kube-system   kube-scheduler-master-node                1/1       Running             0          15m
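
The Calico and kube-dns pods can take a minute or two to reach the Running state. If you want to follow the progress, watch the pod list until everything settles (press Ctrl+C to stop watching):

kubectl get pods --all-namespaces -w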

Now, run the kubectl get nodes command again, and you should see that the Master Node is listed as Ready.

kubectl get nodes

Output:

NAME          STATUS    ROLES     AGE       VERSION
master-node   Ready     master    7m        v1.9.4

Add Slave Node to the Kubernetes Cluster

Next, you will need to log in to the Slave Node and add it to the cluster. Take the kubeadm join command from the output of the Master Node initialization and run it on the Slave Node as shown below:

kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Once the node has joined successfully, you should see the following output:

[discovery] Trying to connect to API Server "192.168.0.103:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.104:6443"
[discovery] Requesting info from "https://192.168.0.104:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.104:6443"
[discovery] Successfully established connection with API Server "192.168.0.103:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Now, go back to the Master Node and issue the command kubectl get nodes to see that the slave node is now ready:

kubectl get nodes

Output:

NAME          STATUS    ROLES     AGE       VERSION
master-node   Ready     master    35m       v1.9.4
slave-node    Ready     <none>    7m        v1.9.4
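
The ROLES column shows <none> for the Slave Node because kubeadm only labels the master by default. If you prefer the worker to display a role as well, you can add a label yourself from the Master Node; this is purely cosmetic, and the label value here is just an example:

kubectl label node slave-node node-role.kubernetes.io/worker=worker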

Deploy the Apache Container to the Cluster

The Kubernetes cluster is now ready, so it's time to deploy the Apache container.

On the Master Node, run the following command to create an Apache deployment:

kubectl create deployment apache --image=apache

Output:

deployment "apache" created

You can list out the deployments with the following command:

kubectl get deployments

Output:

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
apache    1         1         1            0           16s

You can see more information about the Apache deployment with the following command:

kubectl describe deployment apache

Output:

Name:                   apache
Namespace:              default
CreationTimestamp:      Mon, 19 Mar 2018 19:04:03 +0530
Labels:                 app=apache
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=apache
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=apache
  Containers:
   apache:
    Image:        apache
    Port:         <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   apache-5fcc8cd4bf (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  43s   deployment-controller  Scaled up replica set apache-5fcc8cd4bf to 1
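
Because the pods are managed by a Deployment, scaling the application later is a single command. As an optional example, you could run three replicas and list the resulting pods using the app=apache label shown above:

kubectl scale deployment apache --replicas=3
kubectl get pods -l app=apache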

Next, you will need to expose the Apache deployment to the network with a NodePort service by running the following command:

kubectl create service nodeport apache --tcp=80:80

Now, list out the current services by running the following command:

kubectl get svc

You should see the Apache service with an assigned NodePort (30267 in this example; the exact port is allocated from the NodePort range and will likely differ on your cluster):

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
apache       NodePort    10.107.95.29   <none>        80:30267/TCP   15s
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        37m
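
You can also verify the service from the command line. A NodePort is reachable on every node's IP address, so either node should answer; replace 30267 with the port shown in your own output:

curl -I http://192.168.0.103:30267
curl -I http://192.168.0.104:30267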

Now, open your web browser and go to http://192.168.0.104:30267 (the Slave Node IP). You should see the default Apache welcome page.


Congratulations! Your Apache container has been deployed on your Kubernetes Cluster.

Related Alibaba Cloud Products

After completing your Kubernetes cluster, it makes perfect sense to scale it for production; that is the whole point of using containers. To do this, we need to set up a database for our application. For production scenarios, I do not recommend running your own database. Instead, you can choose from Alibaba Cloud's suite of managed database products.

ApsaraDB for Redis is an automated and scalable tool for developers to manage data storage shared across multiple processes, applications or servers.

As a Redis-protocol-compatible service, ApsaraDB for Redis offers exceptional high-speed read-write performance by serving data from in-memory caches, and ensures data persistence by using both memory and hard disk storage.

Data Transmission Service (DTS) helps you migrate data between data storage types, such as relational databases, NoSQL, and OLAP. The service supports homogeneous migrations as well as heterogeneous migrations between different data storage types.

DTS can also be used for continuous data replication with high availability. In addition, DTS lets you subscribe to the data change feed of ApsaraDB for RDS. With DTS, you can easily implement scenarios such as data migration, remote real-time data backup, real-time data integration, and cache refresh.
