Kubernetes 1.8.5 cluster installation (with certificates)


一、Install Docker (required on every server in the cluster)

cd /data/tools/kubernetes/
yum -y install docker-ce-17.09.1.ce-1.el7.centos.x86_64.rpm

Check Docker's default storage location

docker info | grep "Docker Root Dir"
docker info | grep "Storage Driver"

Change Docker's default storage location. The docker service picks up drop-in configuration files under docker.service.d:

mkdir -pv /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --graph=/data/docker --storage-driver=overlay
EOF
vim /etc/systemd/system/docker.service.d/docker.conf

Start Docker

systemctl daemon-reload && \
systemctl start docker && \
systemctl -l status docker

View Docker logs and common service commands

journalctl -f -u docker
journalctl -xe
systemctl stop docker
systemctl restart docker
netstat -ntlp
systemctl enable docker

Set kernel parameters and enable IP forwarding

cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
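If the two bridge-nf sysctls above fail to apply (common on a fresh minimal install), the br_netfilter kernel module is probably not loaded yet; a minimal sketch, assuming a stock CentOS 7 kernel:

modprobe br_netfilter                                        # load the bridge netfilter module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load it automatically after reboots
sysctl -p /etc/sysctl.d/k8s.conf                             # re-apply the settings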

二、Create the CA certificate configuration and generate the CA certificate and private key

Note: both etcd and Kubernetes need certificates (the etcd and Kubernetes certificates are created separately below).
etcd certificate directory: /etc/etcd/ssl/
Kubernetes certificate directory: /etc/kubernetes/ssl

Create a temporary directory for the certificates (generate all etcd and Kubernetes certificates on one server first, then distribute them to the other servers):
mkdir -pv /data/ssl && \
cd /data/ssl/

Install cfssl (the tool used to generate the certificates)
curl -s -L -o /usr/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /usr/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /usr/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/bin/cfssl*
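A quick sanity check that the cfssl tools downloaded and run correctly (versions may differ slightly):

cfssl version                      # prints version and revision
which cfssljson cfssl-certinfo     # confirms the binaries are on PATH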

Generate the CA certificate and private key
config.json:

cat >> config.json << EOF
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry":"87600h"
            }
        }
    }
}
EOF

etcd-csr.json:

cat >> etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "10.10.175.3",
        "10.10.188.125",
        "10.10.121.199"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

EOF

csr.json (the CA CSR):

cat >> csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C":"CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

Note: adjust the hosts entries in etcd-csr.json to match your own environment.
Make sure the three files above are in the /data/ssl directory, then generate the CA:

cd /data/ssl/
cfssl gencert -initca csr.json | cfssljson -bare ca

Create the etcd certificate configuration and generate the etcd certificate and private key

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=config.json \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare etcd

[root@k8smaster01 ~]# ll /data/ssl
-rw-r--r-- 1 root root 1005 Dec 21 13:56 ca.csr
-rw------- 1 root root 1679 Dec 21 13:56 ca-key.pem
-rw-r--r-- 1 root root 1363 Dec 21 13:56 ca.pem
-rw-r--r-- 1 root root  385 Dec 21 13:56 config.json
-rw-r--r-- 1 root root  265 Dec 21 13:56 csr.json
-rw-r--r-- 1 root root 1066 Dec 21 13:56 etcd.csr
-rw-r--r-- 1 root root  375 Dec 21 13:56 etcd-csr.json
-rw------- 1 root root 1675 Dec 21 13:56 etcd-key.pem
-rw-r--r-- 1 root root 1440 Dec 21 13:56 etcd.pem

At this point the etcd certificate files have been generated.
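Before distributing anything, it can be worth confirming that the SANs in the etcd certificate match the hosts list from etcd-csr.json; a small check using the cfssl-certinfo tool installed earlier:

cfssl-certinfo -cert etcd.pem | grep -A6 '"sans"'    # the listed IPs should match your etcd members
cfssl-certinfo -cert ca.pem | grep not_after         # CA expiry (87600h is roughly ten years)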


Create the certificate configurations for kube-apiserver and the other Kubernetes components, and generate their certificates and private keys
First place all of the JSON files below in the /data/ssl directory.

kube-admin-csr.json:

cat >> kube-admin-csr.json << EOF
{
    "CN": "kube-admin",
    "hosts": [
        "10.10.175.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

kube-apiserver-csr.json:

cat >> kube-apiserver-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "10.10.175.3",
        "10.10.188.125",
        "10.10.121.199",
        "10.254.0.1",
        "localhost",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

kube-controller-manager-csr.json:

cat >> kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
        "10.10.175.3",
        "10.10.188.125"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "system:kube-controller-manager",
            "OU": "System"
        }
    ]
}
EOF

front-proxy-client-csr.json:

cat >> front-proxy-client-csr.json << EOF
{
    "CN": "front-proxy-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
EOF

kube-proxy-csr.json:

cat >> kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "system:kube-proxy",
            "OU": "System"
        }
    ]
}
EOF

kube-scheduler-csr.json:

cat >> kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
        "10.10.175.3",
        "10.10.188.125"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Shanghai",
            "L": "Shanghai",
            "O": "system:kube-scheduler",
            "OU": "System"
        }
    ]
}
EOF

kubelet-csr.json:

cat >> kubelet-csr.json << EOF
{
    "CN": "system:node:master01",
    "hosts": [
        "master01",
        "10.10.175.3"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Shanghai",
            "ST": "Shanghai",
            "O": "system:nodes",
            "OU": "Kubernetes-manual"
        }
    ]
}
EOF

Note: adjust the hosts entries in these five JSON files (kube-admin-csr.json, kube-apiserver-csr.json, kube-controller-manager-csr.json, kubelet-csr.json, kube-scheduler-csr.json) to match your own environment.

Create the kube-apiserver certificate configuration and generate the kube-apiserver certificate and private key

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=config.json \
-profile=kubernetes \
kube-apiserver-csr.json | cfssljson -bare kube-apiserver

Create the kube-controller-manager certificate configuration and generate the kube-controller-manager certificate and private key

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

Create the kube-scheduler certificate configuration and generate the kube-scheduler certificate and private key

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler

Create the kube-admin certificate configuration and generate the kube-admin certificate and private key

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=config.json \
-profile=kubernetes \
kube-admin-csr.json | cfssljson -bare kube-admin

Create the kube-proxy certificate configuration and generate the kube-proxy certificate and private key

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy

Generate the advanced audit policy

cat > audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

Generate token.csv

cd /data/ssl/
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /data/ssl/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
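The same token value must later match what bootstrap.kubeconfig and the apiserver's --token-auth-file use, so keep it at hand; a small sketch (the env file name is just an example):

echo ${BOOTSTRAP_TOKEN}                                             # note the value down
echo "BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN}" > /data/ssl/token.env     # optional copy for later shells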

List the generated certificate and configuration files to confirm everything is in place.
kubelet certificates can also be signed manually by the CA, but that only works for a small number of machines: each certificate has to be bound to the node's IP, so adding IPs becomes painful as the number of nodes grows. Instead we use TLS bootstrapping: the apiserver automatically signs certificates for nodes that meet the conditions and allows them to join the cluster.
On first start, the kubelet sends a TLS Bootstrapping request to kube-apiserver; kube-apiserver checks whether the token in the request matches its configured token, and if it does, it automatically issues a certificate and key for the kubelet.

三、Distribute the certificates

Both master and node servers need the etcd and Kubernetes certificates.
Before distributing, create the following two directories on each of the other servers:
mkdir -pv /etc/etcd/ssl && mkdir -pv /etc/kubernetes/ssl
Distribute the certificates:

cd /data/ssl/
rsync -avzP ca*.pem etcd*.pem root@10.10.188.125:/etc/etcd/ssl/
rsync -avzP ca*.pem kube-apiserver*.pem kube-controller-manager*.pem kube-scheduler*.pem kube-proxy*.pem root@10.10.188.125:/etc/kubernetes/ssl/
rsync -avzP bootstrap.kubeconfig kube-proxy.kubeconfig audit-policy.yaml token.csv root@10.10.188.125:/etc/kubernetes/
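With three servers it is easier to loop over the other two members; a sketch assuming the other nodes are 10.10.188.125 and 10.10.121.199 and that root SSH access works:

for node in 10.10.188.125 10.10.121.199; do
    rsync -avzP ca*.pem etcd*.pem root@${node}:/etc/etcd/ssl/
    rsync -avzP ca*.pem kube-apiserver*.pem kube-controller-manager*.pem kube-scheduler*.pem kube-proxy*.pem root@${node}:/etc/kubernetes/ssl/
    rsync -avzP bootstrap.kubeconfig kube-proxy.kubeconfig audit-policy.yaml token.csv root@${node}:/etc/kubernetes/
done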


四、Install the etcd cluster

An etcd cluster needs at least three servers to form a highly available cluster.
Download: https://github.com/coreos/etcd

cd /data/tools/kubernetes/
tar -zxvf etcd-v3.2.11-linux-amd64.tar.gz
cd etcd-v3.2.11-linux-amd64/ && \
cp etcd etcdctl /usr/bin/ && \
etcd --version

etcd configuration file (replace the IPs with the current server's IP on each node).
ETCD_INITIAL_CLUSTER_STATE: use new when bootstrapping the cluster; use existing when joining an existing cluster.
ETCD_NAME must match the corresponding entry in ETCD_INITIAL_CLUSTER.

cat > /etc/etcd/etcd.conf <<EOF
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/data/kubernetes/etcd/"
# ETCD_WAL_DIR="/data/kubernetes/etcd/wal"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://10.10.175.3:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.10.175.3:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
#ETCD_CORS=""

# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.175.3:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=https://10.10.175.3:2380,etcd2=https://10.10.188.125:2380,etcd3=https://10.10.121.199:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://10.10.175.3:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_STRICT_RECONFIG_CHECK="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"

# [proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"

# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_AUTO_TLS="true"

# [logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF

etcd systemd unit file

cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Service
After=network.target

[Service]
WorkingDirectory=/data/kubernetes/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
Type=notify
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/etcd
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
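The files above do not start anything by themselves. On each etcd node, create the data directory and bring the service up (start all three members in quick succession, since a new cluster waits for quorum):

mkdir -pv /data/kubernetes/etcd      # data directory referenced by ETCD_DATA_DIR
systemctl daemon-reload && \
systemctl enable etcd && \
systemctl start etcd && \
systemctl -l status etcd
journalctl -f -u etcd                # follow the logs if startup hangs or fails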

Check the cluster health (run the following on any member of the etcd cluster):
etcdctl --endpoints=https://10.10.175.3:2379 --cert-file=/etc/etcd/ssl/etcd.pem --ca-file=/etc/etcd/ssl/ca.pem --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
List the etcd cluster members:
etcdctl --endpoints=https://10.10.175.3:2379 --cert-file=/etc/etcd/ssl/etcd.pem --ca-file=/etc/etcd/ssl/ca.pem --key-file=/etc/etcd/ssl/etcd-key.pem member list
Browse the keys stored in etcd (a specific directory can follow ls):
etcdctl --endpoints=https://127.0.0.1:2379 --cert-file=/etc/etcd/ssl/etcd.pem --ca-file=/etc/etcd/ssl/ca.pem --key-file=/etc/etcd/ssl/etcd-key.pem ls

五、Install the Kubernetes cluster

1、Install the master nodes
Download: https://storage.googleapis.com/kubernetes-release/release/v1.8.5/kubernetes-server-linux-amd64.tar.gz
# Copy the binaries (the kubernetes-server tarball contains the server, client and node binaries)

cd /data/tools/kubernetes/v1.8.5/
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kube-controller-manager kube-scheduler /usr/local/bin/
cp kubelet kube-proxy /usr/local/bin/
cp kubectl /usr/local/bin/

Set this to the current master node's IP:
export KUBE_APISERVER="https://10.10.175.3:6443"

Create the kubelet bootstrapping kubeconfig (run these in /data/ssl/, with BOOTSTRAP_TOKEN still exported from the earlier step)

kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

cp ./bootstrap.kubeconfig /etc/kubernetes/

Create the kube-proxy kubeconfig

cd /data/ssl/
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

cp ./kube-proxy.kubeconfig /etc/kubernetes/

Create the kube-admin kubeconfig (admin.conf)

kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=admin.conf

kubectl config set-credentials kube-admin \
--client-certificate=kube-admin.pem \
--embed-certs=true \
--client-key=kube-admin-key.pem \
--kubeconfig=admin.conf

kubectl config set-context kube-admin@kubernetes \
--cluster=kubernetes \
--user=kube-admin \
--kubeconfig=admin.conf

kubectl config use-context kube-admin@kubernetes --kubeconfig=admin.conf

cp ./admin.conf /etc/kubernetes/

mkdir -p ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config

Configure the shared config file

cd /etc/kubernetes/
cat > /etc/kubernetes/config <<EOF
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=2"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
EOF

Configure the apiserver config file

cd /etc/kubernetes/
mkdir -pv /data/kubernetes/logs   
cat > /etc/kubernetes/apiserver <<EOF
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=10.10.175.3 --insecure-bind-address=127.0.0.1 --bind-address=10.10.175.3"

# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080 --secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://10.10.175.3:2379,https://10.10.188.125:2379,https://10.10.121.199:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS="--authorization-mode=RBAC,Node \\
               --anonymous-auth=false \\
               --kubelet-https=true \\
               --enable-bootstrap-token-auth \\
               --token-auth-file=/etc/kubernetes/token.csv \\
               --service-node-port-range=30000-50000 \\
               --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \\
               --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
               --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
               --service-account-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
               --etcd-quorum-read=true \\
               --storage-backend=etcd3 \\
               --etcd-cafile=/etc/etcd/ssl/ca.pem \\
               --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
               --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
               --enable-swagger-ui=true \\
               --apiserver-count=3 \\
               --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
               --audit-log-maxage=30 \\
               --audit-log-maxbackup=3 \\
               --audit-log-maxsize=100 \\
               --audit-log-path=/data/kubernetes/logs/audit.log \\
               --event-ttl=1h"
EOF

Note:
The secure port listens on 10.10.175.3 (the current server's IP) and is what the node components connect to.
The insecure port listens on 127.0.0.1 and is only used by kube-controller-manager and kube-scheduler on the same machine, which keeps things both secure and stable (the address can be changed to 0.0.0.0 if needed).


Configure the kube-apiserver systemd unit

cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/local/bin/kube-apiserver \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_ETCD_SERVERS \\
            \$KUBE_API_ADDRESS \\
            \$KUBE_API_PORT \\
            \$KUBELET_PORT \\
            \$KUBE_ALLOW_PRIV \\
            \$KUBE_SERVICE_ADDRESSES \\
            \$KUBE_ADMISSION_CONTROL \\
            \$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
vim /usr/lib/systemd/system/kube-apiserver.service

Start kube-apiserver

systemctl daemon-reload && \
systemctl start kube-apiserver && \
systemctl -l status kube-apiserver
journalctl -f -u kube-apiserver
journalctl -xe
systemctl stop kube-apiserver
netstat -ntlp
systemctl enable kube-apiserver

Check the master component status
kubectl get cs
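Beyond kubectl get cs, the health endpoints can be probed directly; a minimal sketch, assuming you are on the master where the certificates were generated and they are still under /data/ssl:

curl http://127.0.0.1:8080/healthz      # insecure port, local only; should print ok
# the secure port needs client certificates because anonymous auth is disabled
curl --cacert /data/ssl/ca.pem --cert /data/ssl/kube-admin.pem --key /data/ssl/kube-admin-key.pem https://10.10.175.3:6443/healthz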


Configure the controller-manager config file

cd /etc/kubernetes/
cat > /etc/kubernetes/controller-manager <<EOF
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--address=0.0.0.0 \\
                              --service-cluster-ip-range=10.254.0.0/16 \\
                              --cluster-name=kubernetes \\
                              --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
                              --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
                              --service-account-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \\
                              --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
                              --leader-elect=true \\
                              --node-monitor-grace-period=40s \\
                              --node-monitor-period=5s \\
                              --pod-eviction-timeout=5m0s"
EOF

Configure the kube-controller-manager systemd unit

cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
vim /usr/lib/systemd/system/kube-controller-manager.service

Start the kube-controller-manager service

systemctl daemon-reload && \
systemctl start kube-controller-manager && \
systemctl -l status kube-controller-manager
journalctl -f -u kube-controller-manager
journalctl -xe
systemctl stop kube-controller-manager
netstat -ntlp
systemctl enable kube-controller-manager

Check the master component status

kubectl get cs

Configure the scheduler config file

cd /etc/kubernetes/
cat > /etc/kubernetes/scheduler <<EOF
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--leader-elect=true --address=0.0.0.0"
EOF
vim /etc/kubernetes/scheduler

Configure the kube-scheduler systemd unit

cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/local/bin/kube-scheduler \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
vim /usr/lib/systemd/system/kube-scheduler.service

Start the kube-scheduler service

systemctl daemon-reload && \
systemctl start kube-scheduler && \
systemctl -l status kube-scheduler
journalctl -f -u kube-scheduler
journalctl -xe
systemctl stop kube-scheduler
netstat -ntlp
systemctl enable kube-scheduler

Check the master component status

kubectl get cs

Enable the services at boot

systemctl enable kube-apiserver
systemctl enable kube-controller-manager
systemctl enable kube-scheduler

2、Run the master as a node
Configure the kubelet config file (adjust the address, hostname and similar values for the current server):

mkdir -pv /data/kubernetes/kubelet

cd /etc/kubernetes/
cat > /etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.10.175.3"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8smaster01.test.com"

# location of the api-server
# KUBELET_API_SERVER=""

# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --logtostderr=false \
              --log-dir=/data/kubernetes/logs \
              --root-dir=/data/kubernetes/kubelet \
              --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
vim /etc/kubernetes/kubelet

When kubelet starts it needs the pause-amd64:3.0 image. Use the Aliyun mirror as configured above, or pull the image from gcr.io (requires ×××) and load it as shown below.

cd /data/tools/kubernetes/images/
docker images
docker load < gcr.io_google_containers_pause-amd64_3.0.tar

docker pull gcr.io/google_containers/pause-amd64:3.0
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

Do not add the network plugin flag yet; it is added later when the network component is deployed:
--network-plugin=cni \
gcr.io/google_containers may not be downloadable directly, in which case switch to a domestic mirror for:
--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0


Configure the kubelet systemd unit

cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/data/kubernetes/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBELET_ADDRESS \\
            \$KUBELET_PORT \\
            \$KUBELET_HOSTNAME \\
            \$KUBE_ALLOW_PRIV \\
            \$KUBELET_POD_INFRA_CONTAINER \\
            \$KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
vim /usr/lib/systemd/system/kubelet.service

Create the kubelet data directory
mkdir -pv /data/kubernetes/kubelet

Start the kubelet service

systemctl daemon-reload && \
systemctl start kubelet && \
systemctl -l status kubelet
journalctl -f -u kubelet
journalctl -xe
systemctl stop kubelet
systemctl restart kubelet
netstat -ntlp
systemctl enable kubelet

Authorize the Kubernetes nodes
Because we use TLS Bootstrapping, a ClusterRoleBinding must be created on the master first.
Run the following on any master node:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

This must be run after kube-apiserver is up and listening on port 8080, otherwise you will see:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
If it has already been run on one master, running it again on another host reports that the binding already exists:
Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "kubelet-bootstrap" already exists

Check the node certificate requests
A quick check on the master shows the node's CSR in Pending state:

kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr-5apDfsKujoNM61vlHpg1o3gboEYI5xaXsB54uniZLS8   7m        kubelet-bootstrap   Pending

Approve the CSRs with kubectl to let the nodes join the cluster (either form works):

kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
kubectl get csr | awk '/Pending/ {print $1}' | xargs kubectl certificate approve
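Once the CSR is approved the kubelet receives its signed certificate and registers itself; a quick check:

kubectl get csr                     # the request should now show Approved,Issued
kubectl get nodes -o wide           # the node should appear and become Ready
ls /etc/kubernetes/ssl/kubelet*     # certificate and key written by the kubelet via bootstrapping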



Configure the kube-proxy config file (adjust the values for the current server).
kube-proxy is the key component that implements Services: it runs on every node, watches the API server for changes to Service and Endpoints objects, and programs iptables accordingly to forward traffic.

cd /etc/kubernetes/
cat > /etc/kubernetes/proxy <<EOF
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS="--bind-address=10.10.175.3 \\
                 --hostname-override=k8smaster01.test.com \\
                 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
                 --cluster-cidr=10.254.0.0/16"
EOF

Configure the kube-proxy systemd unit

cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \\
            \$KUBE_LOGTOSTDERR \\
            \$KUBE_LOG_LEVEL \\
            \$KUBE_MASTER \\
            \$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
vim /usr/lib/systemd/system/kube-proxy.service

Start the kube-proxy service

systemctl daemon-reload && \
systemctl start kube-proxy && \
systemctl -l status kube-proxy
journalctl -f -u kube-proxy
journalctl -xe
systemctl stop kube-proxy
netstat -ntlp
systemctl enable kube-proxy

systemctl daemon-reload && \
systemctl stop kube-proxy && \
systemctl start kube-proxy

When the masters also act as nodes, every master needs the configuration files above. To set up the other masters, copy the configuration from this server (adjusting the values as needed) and install the same systemd units.


3、Configure the worker nodes
Note: make sure the etcd and Kubernetes certificates are already in the corresponding directories and that Docker is installed.
If you are using the kubernetes-server tarball, copy the binaries as shown below; if you downloaded kubernetes-node instead, copy the binaries from kubernetes/node/bin/ to /usr/local/bin/.

cd /data/tools/kubernetes/v1.8.5/
tar -xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kubelet kube-proxy kubectl /usr/local/bin/

Node configuration files
A node runs the kubelet and kube-proxy services; the installation and configuration steps are the same as in "Run the master as a node" above.
Copy audit-policy.yaml, bootstrap.kubeconfig, config, kubelet, kubelet.kubeconfig, kube-proxy.kubeconfig, proxy and token.csv from /etc/kubernetes on a master node to /etc/kubernetes on the node, then adjust:

bootstrap.kubeconfig: change server to https://127.0.0.1:6443
config: comment out the KUBE_MASTER line
kubelet.kubeconfig: change server to https://127.0.0.1:6443
kube-proxy.kubeconfig: change server to https://127.0.0.1:6443
(a sed sketch for these changes follows below)
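A small sed sketch for the kubeconfig changes listed above (assuming the copied files still point at https://10.10.175.3:6443; the .bak backups are optional):

cd /etc/kubernetes/
sed -i.bak 's#server: https://10.10.175.3:6443#server: https://127.0.0.1:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
sed -i.bak 's/^KUBE_MASTER=/# KUBE_MASTER=/' config    # comment out the KUBE_MASTER line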

Adjust the remaining settings to match the current server.
Create the Nginx proxy
The HA scheme is based on an Nginx reverse proxy, so every node runs a local Nginx that load-balances across the masters (see the HA Master overview for details).

Create the configuration directory
mkdir -pv /etc/nginx

Write the proxy configuration

cat > /etc/nginx/nginx.conf << EOF
error_log stderr notice;

worker_processes auto;
events {
    multi_accept on;
    use epoll;
    worker_connections 1024;
}

stream {
    upstream kube_apiserver {
        least_conn;
        server 10.10.175.3:6443;
        server 10.10.188.125:6443;
        server 10.10.121.199:6443;
    }

    server {
        listen        0.0.0.0:6443;
        proxy_pass    kube_apiserver;
        proxy_timeout 10m;
        proxy_connect_timeout 1s;
    }
}
EOF

Update permissions
chmod +r /etc/nginx/nginx.conf
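Before wiring Nginx into systemd, the configuration can be syntax-checked with a throwaway container (a sketch, assuming the nginx:1.13.5-alpine image can be pulled):

docker run --rm -v /etc/nginx:/etc/nginx nginx:1.13.5-alpine nginx -t -c /etc/nginx/nginx.conf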


Configure the nginx-proxy systemd unit (Nginx runs in Docker; the nginx:1.13.5-alpine image is pulled automatically)

cat > /etc/systemd/system/nginx-proxy.service << EOF
[Unit]
Description=kubernetes apiserver docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:6443:6443 \\
                              -v /etc/nginx:/etc/nginx \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:1.13.5-alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF
vim /etc/systemd/system/nginx-proxy.service

Start the Nginx proxy service

systemctl daemon-reload && \
systemctl start nginx-proxy && \
systemctl -l status nginx-proxy
journalctl -f -u nginx-proxy
journalctl -xe
netstat -ntlp
systemctl enable nginx-proxy

六、Deploy the Calico network component

Import the Calico Docker images

docker load < quay.io_calico_cni_v1.11.0.tar
docker load < quay.io_calico_kube-controllers_v1.0.0.tar
docker load < quay.io_calico_node_v2.6.1.tar

Create the Calico directories

mkdir -pv /etc/calico && \
mkdir -pv /data/kubernetes/calico/
cd /etc/calico/

Modify the Calico configuration
Calico is deployed in a "hybrid" fashion: systemd manages calico-node, while the CNI plugin and related pieces are installed by a Kubernetes DaemonSet.
For details see the Calico deployment pitfalls write-up; the commands follow directly below.
https://mritd.me/2017/07/31/calico-yml-bug/

Get calico.yaml
wget -c "https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/calico.yaml"

Replace the etcd endpoints
sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://10.10.175.3:2379,https://10.10.188.125:2379,https://10.10.121.199:2379\"@gi' calico.yaml

Replace the etcd certificates

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/ca.pem | base64 | tr -d '\n'`

sed -i "s@.*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "s@.*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "s@.*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml

Comment out the calico-node DaemonSet section (it is taken over by systemd); the line range below matches this version of calico.yaml, so verify it before running:
sed -i '103,197s@.*@#&@gi' calico.yaml

Create the Calico DaemonSet
cd /etc/calico/
First create the RBAC resources:
wget -c "https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/rbac.yaml"
kubectl apply -f rbac.yaml
Then create the Calico DaemonSet:
kubectl create -f calico.yaml
To remove it again if needed:
kubectl delete -f calico.yaml


Create the systemd unit
The previous step commented out the calico-node parts of calico.yaml; to avoid problems with automatic IP detection, calico-node is run by systemd instead.
The systemd service is configured as follows.
Every node needs the calico-node service installed; on other nodes adjust the IP and node name (the doubled backslashes \\ in the heredoc are intentional, try it and you'll see why).

cat > /usr/lib/systemd/system/calico-node.service <<EOF
[Unit]
Description=calico node
After=docker.service
Requires=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run --net=host --privileged --name=calico-node \\
                              -e ETCD_ENDPOINTS=https://10.10.175.3:2379,https://10.10.188.125:2379,https://10.10.121.199:2379 \\
                              -e ETCD_CA_CERT_FILE=/etc/etcd/ssl/ca.pem \\
                              -e ETCD_CERT_FILE=/etc/etcd/ssl/etcd.pem \\
                              -e ETCD_KEY_FILE=/etc/etcd/ssl/etcd-key.pem \\
                              -e NODENAME=k8smaster01.test.com \\
                              -e IP=10.10.175.3 \\
                              -e IP6= \\
                              -e AS= \\
                              -e CALICO_IPV4POOL_CIDR=10.254.0.0/16 \\
                              -e CALICO_IPV4POOL_IPIP=always \\
                              -e CALICO_LIBNETWORK_ENABLED=true \\
                              -e CALICO_NETWORKING_BACKEND=bird \\
                              -e CALICO_DISABLE_FILE_LOGGING=true \\
                              -e FELIX_IPV6SUPPORT=false \\
                              -e FELIX_DEFAULTENDPOINTTOHOSTACTION=ACCEPT \\
                              -e FELIX_LOGSEVERITYSCREEN=info \\
                              -v /etc/etcd/ssl/ca.pem:/etc/etcd/ssl/ca.pem \\
                              -v /etc/etcd/ssl/etcd.pem:/etc/etcd/ssl/etcd.pem \\
                              -v /etc/etcd/ssl/etcd-key.pem:/etc/etcd/ssl/etcd-key.pem \\
                              -v /var/run/calico:/var/run/calico \\
                              -v /lib/modules:/lib/modules \\
                              -v /run/docker/plugins:/run/docker/plugins \\
                              -v /var/run/docker.sock:/var/run/docker.sock \\
                              -v /var/log/calico:/var/log/calico \\
                              quay.io/calico/node:v2.6.1
ExecStop=/usr/bin/docker rm -f calico-node
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
vim /usr/lib/systemd/system/calico-node.service

Modify the kubelet configuration
Per the official documentation the kubelet must be started with --network-plugin=cni, so update the kubelet config:

cat > /etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=10.10.175.3"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8smaster01.test.com"
# location of the api-server
# KUBELET_API_SERVER=""
# Add your own!
KUBELET_ARGS="--cgroup-driver=cgroupfs \
              --network-plugin=cni \
              --cluster-dns=10.254.0.2 \
              --resolv-conf=/etc/resolv.conf \
              --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
              --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
              --fail-swap-on=false \
              --cert-dir=/etc/kubernetes/ssl \
              --cluster-domain=cluster.local. \
              --hairpin-mode=promiscuous-bridge \
              --serialize-image-pulls=false \
              --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"
EOF
vim /etc/kubernetes/kubelet

Restart kubelet

systemctl daemon-reload && \
systemctl stop kubelet && \
systemctl start kubelet

Start the calico-node service

systemctl enable calico-node
systemctl start calico-node
systemctl -l status calico-node
journalctl -f -u calico-node
journalctl -xe
systemctl stop calico-node
netstat -ntlp

Test cross-host communication
Create a test deployment

cat >> /data/kubernetes/calico/demo.deploy.yml <<EOF
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: mritd/demo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
EOF
vim /data/kubernetes/calico/demo.deploy.yml
kubectl create -f demo.deploy.yml

Install calicoctl
Download calicoctl and distribute it to each node

cd /data/tools/kubernetes/
wget -c https://github.com/projectcalico/calicoctl/releases/download/v1.6.1/calicoctl
chmod +x calicoctl
cp calicoctl /usr/bin/

rsync -avzP calicoctl root@10.10.188.125:/usr/local/bin/
rsync -avzP calicoctl root@10.10.121.199:/usr/local/bin/

Check Calico's status on a node

calicoctl node status

Check the result of the deployment
kubectl get pod
kubectl get pod -o wide

Verify Calico
kubectl get pods -n kube-system

kubectl get deployment

kubectl get svc

kubectl get pod -o wide -n kube-system
kubectl get svc,po -o wide --all-namespaces

Exec into one Pod and ping or curl another Pod's IP to test:
kubectl exec -it demo-deployment-5fc9c54fb4-gnh8b bash
ping 10.254.x.x
curl 10.254.x.x

Check whether the previously pending pods are now running
kubectl -n kube-system get po

七、Deploy DNS

DNS only needs to be deployed from one of the master servers.
Pull the kube-dns Docker images:

docker pull foxchan/k8s-dns-kube-dns-amd64:1.14.7
docker pull foxchan/k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker pull foxchan/k8s-dns-sidecar-amd64:1.14.7
docker pull registry.cn-hangzhou.aliyuncs.com/linkcloud/cluster-proportional-autoscaler-amd64:1.1.2

Get the corresponding YAML file
cd /etc/kubernetes/
wget -c https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.sed
mv kube-dns.yaml.sed kube-dns.yaml

Modify the configuration

sed -i 's/$DNS_DOMAIN/cluster.local/gi' kube-dns.yaml
sed -i 's/$DNS_SERVER_IP/10.254.0.2/gi' kube-dns.yaml
vim /etc/kubernetes/kube-dns.yaml
...omitted...
        - --domain=cluster.local.
        - --kube-master-url=http://10.10.175.3:8080
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
...omitted...

Replace the image registry path (a vim substitution):
:%s#gcr.io/google_containers#foxchan#g

Create it (or delete it if needed):

kubectl create -f kube-dns.yaml
kubectl delete -f kube-dns.yaml

Deploy the DNS horizontal autoscaler
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
Change the image path in dns-horizontal-autoscaler.yaml to registry.cn-hangzhou.aliyuncs.com/linkcloud/cluster-proportional-autoscaler-amd64:1.1.2

kubectl create -f dns-horizontal-autoscaler.yaml

# kubectl delete -f dns-horizontal-autoscaler.yaml

Check the result:
kubectl get pods -o wide -n kube-system
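To confirm that kube-dns actually resolves cluster services, a short-lived busybox pod can query the kubernetes service (a sketch; the pod name and image are arbitrary):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default
# expected: the name resolves to the service cluster IP 10.254.0.1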


八、Install the Kubernetes Dashboard

Import the kubernetes-dashboard image
Aliyun image registry: https://dev.aliyun.com/list.html - the corresponding image package can be found there.
docker load < kubernetes-dashboard-amd64.v1.8.0.tar

Dashboard is the web UI developed officially by the Kubernetes community. With it, administrators can manage a Kubernetes cluster through the browser; it also visualizes resources, making system information much easier to take in.
First we need to create kubernetes-dashboard-certs to provide TLS for the Dashboard, then fetch and adjust the manifest:

cd /etc/kubernetes/
wget -c https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

vim kubernetes-dashboard.yaml
Around line 126 of the manifest, uncomment and set the apiserver-host argument:
...omitted...
        # - --apiserver-host=http://my-address:port
        - --apiserver-host=http://10.10.175.3:8080
...omitted...

Create kubernetes-dashboard (delete with the second command if needed):
kubectl create -f kubernetes-dashboard.yaml
kubectl delete -f kubernetes-dashboard.yaml

Check the result:
kubectl get pods -n kube-system -o wide
kubectl get svc -n kube-system

Check the kubernetes-dashboard service
kubectl get svc,po -o wide --all-namespaces
kubectl get pods -n kube-system | grep dashboard

Here we use token authentication. Where does the token come from? Create a kubernetes-dashboard-rbac.yaml with the following content:

cd /etc/kubernetes/
cat > kubernetes-dashboard-rbac.yaml <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
vim /etc/kubernetes/kubernetes-dashboard-rbac.yaml

After creating the binding, we retrieve its token.
The ServiceAccount referenced above is the default one in kube-system, so we simply read the default secret in kube-system.

Apply the binding:
kubectl create -f kubernetes-dashboard-rbac.yaml
clusterrolebinding "dashboard-admin" created

(To remove it again: kubectl delete -f kubernetes-dashboard-rbac.yaml)

List the secrets and find the default-token-XXXX entry:
kubectl -n kube-system get secret

Finally, describe that secret to obtain the token, for example:
kubectl describe secret default-token-XXXX -n kube-system
kubectl describe secret default-token-w5htr -n kube-system
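If you prefer to grab the token in one step, a small sketch (it assumes the secret name starts with default-token-):

kubectl -n kube-system get secret | awk '/^default-token-/{print $1}' | xargs -I{} kubectl -n kube-system describe secret {} | awk '/^token:/{print $2}'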

kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
NAME                                       READY     STATUS    RESTARTS   AGE
po/kubernetes-dashboard-766666b68c-q2l28   1/1       Running   0          23h

NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes-dashboard   ClusterIP   10.254.8.65   <none>        443/TCP   23h

Once this is done, the Dashboard can be opened from a browser through the apiserver proxy:
https://10.10.175.3:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Access the Kubernetes dashboard via the insecure apiserver port:
http://10.10.175.3:8080/ui

View a container's logs with Docker
docker logs <CONTAINER ID>
docker logs 546a1f7e2153



This article was originally published on irow10's 51CTO blog: http://blog.51cto.com/irow10/2055064. Please contact the original author for reposting.
