1. Configure and enable the etcd cluster
A. Configure the etcd systemd unit and distribute it to the other nodes
# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd --config-file /etc/etcd/etcd.conf
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
# ansible node -m copy -a 'src=/usr/lib/systemd/system/etcd.service dest=/usr/lib/systemd/system/'
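The ansible commands in this walkthrough address the two remaining hosts through a host group named node. A minimal inventory sketch under that assumption (the default inventory path and the group contents are inferred from how ansible is used here, not shown in the original article):
# cat /etc/ansible/hosts
[node]
192.168.100.103
192.168.100.104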
B. Create the etcd working and data directories
# mkdir -p /var/lib/etcd/ && mkdir -p /etc/etcd
# ansible node -m file -a 'path=/var/lib/etcd state=directory'
# ansible node -m file -a 'path=/etc/etcd state=directory'
C. Configure etcd.conf and distribute it to the other nodes
# export ETCD_NAME=etcd1
# export INTERNAL_IP=192.168.100.102
# cat << EOF > etcd.conf
name: '${ETCD_NAME}'
data-dir: "/var/lib/etcd/"
listen-peer-urls: https://${INTERNAL_IP}:2380
listen-client-urls: https://${INTERNAL_IP}:2379,https://127.0.0.1:2379
initial-advertise-peer-urls: https://${INTERNAL_IP}:2380
advertise-client-urls: https://${INTERNAL_IP}:2379
initial-cluster: "etcd1=https://192.168.100.102:2380,etcd2=https://192.168.100.103:2380,etcd3=https://192.168.100.104:2380"
initial-cluster-token: 'etcd-cluster'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
client-transport-security:
  cert-file: /etc/kubernetes/ssl/etcd.pem
  key-file: /etc/kubernetes/ssl/etcd-key.pem
  trusted-ca-file: /etc/kubernetes/ssl/ca.pem
peer-transport-security:
  cert-file: /etc/kubernetes/ssl/etcd.pem
  key-file: /etc/kubernetes/ssl/etcd-key.pem
  trusted-ca-file: /etc/kubernetes/ssl/ca.pem
EOF
# mv etcd.conf /etc/etcd/
## Change the etcd name and IP for each node, then distribute to the corresponding node
# ansible 192.168.100.103 -m copy -a 'src=etcd.conf dest=/etc/etcd/etcd.conf'
# ansible 192.168.100.104 -m copy -a 'src=etcd.conf dest=/etc/etcd/etcd.conf'
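For example, the copy for the second member can be produced by re-running the heredoc with that node's values before the ansible copy (name and IP taken from the cluster layout above):
# export ETCD_NAME=etcd2
# export INTERNAL_IP=192.168.100.103
## re-run the cat << EOF ... EOF block above, then copy the result to 192.168.100.103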
D. Start the etcd cluster
# systemctl start etcd
# systemctl status etcd
# systemctl enable etcd
# ansible node -a 'systemctl start etcd'
# ansible node -a 'systemctl status etcd'
# ansible node -a 'systemctl enable etcd'
Note: the first etcd node may initially report a startup failure because it cannot yet see the other members; it recovers once the remaining nodes are started.
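If a member keeps failing after all three nodes are up, its log can be inspected with standard systemd tooling, for example:
# journalctl -u etcd --no-pager -n 50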
E. View the cluster members
# etcdctl --endpoints=https://192.168.100.102:2379 member list
32293bbc65784dda: name=etcd1 peerURLs=https://192.168.100.102:2380 clientURLs=https://192.168.100.102:2379 isLeader=true
703725a0e421bc44: name=etcd2 peerURLs=https://192.168.100.103:2380 clientURLs=https://192.168.100.103:2379 isLeader=false
78ac8de330c5272a: name=etcd3 peerURLs=https://192.168.100.104:2380 clientURLs=https://192.168.100.104:2379 isLeader=false
F. Check cluster health
# etcdctl --endpoints=https://192.168.100.102:2379 cluster-health
member 32293bbc65784dda is healthy: got healthy result from https://192.168.100.102:2379
member 703725a0e421bc44 is healthy: got healthy result from https://192.168.100.103:2379
member 78ac8de330c5272a is healthy: got healthy result from https://192.168.100.104:2379
cluster is healthy
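If etcdctl has not been pointed at the client certificates through environment variables, the same check can be run with the explicit etcdctl v2 TLS flags (certificate paths as used elsewhere in this setup):
# etcdctl --endpoints=https://192.168.100.102:2379 --ca-file=/etc/kubernetes/ssl/ca.pem --cert-file=/etc/kubernetes/ssl/etcd.pem --key-file=/etc/kubernetes/ssl/etcd-key.pem cluster-health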
2. Configure and enable flanneld
A. Configure the flanneld systemd unit and distribute it to the other nodes
# vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
# ansible node -m copy -a 'src=/usr/lib/systemd/system/flanneld.service dest=/usr/lib/systemd/system/'
# cat << 'EOF' > /usr/bin/flanneld-start
#!/bin/sh
exec /usr/bin/flanneld \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
  "$@"
EOF
# chmod 755 /usr/bin/flanneld-start
# ansible node -m copy -a 'src=/usr/bin/flanneld-start dest=/usr/bin/ mode=755'
B. Configure the flanneld configuration file and distribute it to the other nodes
# cat << EOF > /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="https://192.168.100.102:2379,https://192.168.100.103:2379,https://192.168.100.104:2379"
FLANNEL_ETCD_PREFIX="/kube/network"
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/etcd.pem -etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem"
EOF
# ansible node -m copy -a 'src=/etc/sysconfig/flanneld dest=/etc/sysconfig/'
C. Create the flannel directory in etcd and add the network configuration
# etcdctl --endpoints=https://192.168.100.102:2379 mkdir /kube/network
# etcdctl --endpoints=https://192.168.100.102:2379 set /kube/network/config '{ "Network": "10.254.0.0/16" }'
{ "Network": "10.254.0.0/16" }
D. Start flanneld
# systemctl start flanneld
# systemctl status flanneld
# systemctl enable flanneld
# ansible node -a 'systemctl start flanneld'
# ansible node -a 'systemctl status flanneld'
# ansible node -a 'systemctl enable flanneld'
E. View the subnet assigned to each node
# cat /var/run/flannel/subnet.env
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.80.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
# ansible node -a "cat /var/run/flannel/subnet.env"
192.168.100.104 | SUCCESS | rc=0 >>
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.95.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
192.168.100.103 | SUCCESS | rc=0 >>
FLANNEL_NETWORK=10.254.0.0/16
FLANNEL_SUBNET=10.254.59.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
F. Change the docker bridge subnet to the one allocated by flannel
# export FLANNEL_SUBNET=10.254.80.1/24
# cat << EOF > daemon.json
{
  "bip": "$FLANNEL_SUBNET"
}
EOF
# mkdir -p /etc/docker/ && mv daemon.json /etc/docker/
## Change FLANNEL_SUBNET to each node's own subnet, then distribute to the corresponding node
# ansible node -m file -a 'path=/etc/docker/ state=directory'
# ansible 192.168.100.103 -m copy -a 'src=daemon.json dest=/etc/docker/daemon.json'
# ansible 192.168.100.104 -m copy -a 'src=daemon.json dest=/etc/docker/daemon.json'
# systemctl daemon-reload
# systemctl restart docker
# ansible node -a "systemctl daemon-reload"
# ansible node -a "systemctl restart docker"
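Instead of editing daemon.json by hand for each node, the bip value could also be read from each node's own subnet.env; a rough one-liner sketch (illustrative only, not part of the original procedure):
# ansible node -m shell -a '. /run/flannel/subnet.env && printf "{ \"bip\": \"%s\" }\n" "$FLANNEL_SUBNET" > /etc/docker/daemon.json'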
G. Verify that the corresponding subnets have been assigned
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.2 0.0.0.0 UG 100 0 0 ens33
10.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
10.254.80.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
# ansible node -a 'route -n'
192.168.100.103 | SUCCESS | rc=0 >>
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.2 0.0.0.0 UG 100 0 0 ens33
10.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
10.254.59.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
192.168.100.104 | SUCCESS | rc=0 >>
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.2 0.0.0.0 UG 100 0 0 ens33
10.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
10.254.95.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.100.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
H. Use etcdctl to view the flannel-related entries
# etcdctl --endpoints=https://192.168.100.102:2379 ls /kube/network/subnets
/kube/network/subnets/10.254.80.0-24
/kube/network/subnets/10.254.59.0-24
/kube/network/subnets/10.254.95.0-24
# etcdctl --endpoints=https://192.168.100.102:2379 -o extended get /kube/network/subnets/10.254.80.0-24
Key: /kube/network/subnets/10.254.80.0-24
Created-Index: 10
Modified-Index: 10
TTL: 85486
Index: 12
{"PublicIP":"192.168.100.102"}
# etcdctl --endpoints=https://192.168.100.102:2379 -o extended get /kube/network/subnets/10.254.59.0-24
Key: /kube/network/subnets/10.254.59.0-24
Created-Index: 11
Modified-Index: 11
TTL: 85449
Index: 12
{"PublicIP":"192.168.100.103"}
# etcdctl --endpoints=https://192.168.100.102:2379 -o extended get /kube/network/subnets/10.254.95.0-24
Key: /kube/network/subnets/10.254.95.0-24
Created-Index: 12
Modified-Index: 12
TTL: 85399
Index: 12
{"PublicIP":"192.168.100.104"}
I. Test that the overlay network works
# ping -c 4 10.254.59.1
# ping -c 4 10.254.95.1
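The two pings above target the docker0 gateway addresses flannel assigned to the other nodes. To test from inside a container as well, something like the following should work, assuming a busybox image can be pulled:
# docker run --rm busybox ping -c 4 10.254.59.1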
3. Configure and enable the Kubernetes Master node
The Kubernetes Master node runs the following components:
kube-apiserver
kube-controller-manager
kube-scheduler
A. Configure the shared config file and distribute it to all nodes
# grep ^[A-Z] /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=true"
KUBE_MASTER="--master=http://192.168.100.102:8080"
# ansible node -m copy -a 'src=/etc/kubernetes/config dest=/etc/kubernetes/'
B. Configure the kube-apiserver systemd unit
# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_ETCD_SERVERS \
        $KUBE_API_ADDRESS \
        $KUBE_API_PORT \
        $KUBELET_PORT \
        $KUBE_ALLOW_PRIV \
        $KUBE_SERVICE_ADDRESSES \
        $KUBE_ADMISSION_CONTROL \
        $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
C. Configure the apiserver configuration file
# grep ^[A-Z] /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--advertise-address=192.168.100.102 --bind-address=192.168.100.102 --insecure-bind-address=192.168.100.102"
KUBE_ETCD_SERVERS="--etcd-servers=https://192.168.100.102:2379,https://192.168.100.103:2379,https://192.168.100.104:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota"
KUBE_API_ARGS="--authorization-mode=RBAC,Node --kubelet-https=true --service-node-port-range=30000-42767 --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/token.csv --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem --etcd-cafile=/etc/kubernetes/ssl/ca.pem --etcd-certfile=/etc/kubernetes/ssl/etcd.pem --etcd-keyfile=/etc/kubernetes/ssl/etcd-key.pem --enable-swagger-ui=true --event-ttl=1h --basic-auth-file=/etc/kubernetes/basic_auth_file"
D. Configure the access user (basic auth)
# echo admin,admin,1 > /etc/kubernetes/basic_auth_file
The file format is: password, username, UID.
E. Start kube-apiserver
# systemctl start kube-apiserver
# systemctl status kube-apiserver
# systemctl enable kube-apiserver
F. Bind the admin user to the cluster-admin ClusterRole and verify
# kubectl get clusterrole/cluster-admin -o yaml
# kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created
# kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2017-10-31T10:35:06Z
  name: login-on-dashboard-with-cluster-admin
  resourceVersion: "116"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/login-on-dashboard-with-cluster-admin
  uid: 292ae19a-be27-11e7-853b-000c297aff5d
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: admin
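With the admin user bound to cluster-admin, a quick end-to-end check against the secure port is possible using the basic-auth credentials created in step D (a sketch; -k skips verification of the self-signed CA):
# curl -k -u admin:admin https://192.168.100.102:6443/api/v1/namespaces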
G. Configure the kube-controller-manager systemd unit
# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
H. Configure the kube-controller-manager configuration file
# grep ^[A-Z] /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem"
I. Start kube-controller-manager
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager
J. Configure the kube-scheduler systemd unit
# vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
K. Configure the kube-scheduler configuration file
# grep ^[A-Z] /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=127.0.0.1"
L. Start kube-scheduler
# systemctl start kube-scheduler
# systemctl status kube-scheduler
M. Verify the Master node
# kubectl get cs
# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
4. Configure and enable the Kubernetes Node nodes
Each Kubernetes Node runs the following components:
kubelet
kube-proxy
A. Grant kubelet the bootstrap permission and create the kubelet data directory
When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user defined in the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests.
Master:
# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
# mkdir -p /var/lib/kubelet
# ansible node -m file -a 'path=/var/lib/kubelet state=directory'
B. Configure the kubelet systemd unit and distribute it to the Node nodes
# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
## Distribute to the other Node nodes
# ansible node -m copy -a 'src=/usr/lib/systemd/system/kubelet.service dest=/usr/lib/systemd/system/'
C. Configure the kubelet configuration file
# export KUBELET_ADDRESS=192.168.100.102
# export KUBELET_HOSTNAME=Master
# cat << EOF > kubelet
KUBELET_ADDRESS="--address=$KUBELET_ADDRESS"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=$KUBELET_HOSTNAME"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=hub.c.163.com/k8s163/pause-amd64:3.0"
KUBELET_ARGS="--cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --fail-swap-on=false --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --serialize-image-pulls=false"
EOF
# mv kubelet /etc/kubernetes/
## Change the kubelet node IP and hostname for each node, then distribute to the corresponding node
# ansible 192.168.100.103 -m copy -a 'src=kubelet dest=/etc/kubernetes/'
# ansible 192.168.100.104 -m copy -a 'src=kubelet dest=/etc/kubernetes/'
D. Start kubelet
# systemctl start kubelet
# systemctl status kubelet
# ansible node -a 'systemctl start kubelet'
# ansible node -a 'systemctl status kubelet'
E. Join the Nodes to the Kubernetes cluster
When kubelet starts for the first time, it sends a certificate signing request to kube-apiserver; a Node is added to the cluster only after the request has been approved on the Master.
# kubectl get nodes
No resources found.
# kubectl get csr     ### list the pending CSR requests
NAME                           AGE       REQUESTOR           CONDITION
node-csr-ZU39iUu-E9FuadeTq58   9s        kubelet-bootstrap   Pending
node-csr-sJXwbG8c9UUGS8iTrV8   15s       kubelet-bootstrap   Pending
node-csr-zpol8cIJZfrcU8fd7l4   15s       kubelet-bootstrap   Pending
# kubectl certificate approve node-csr-ZU39iphJAYDQfsLssAwMViUu-E9Fua2pKhELMdeTq58     ### approve the CSR (approve the other two nodes with the same command)
certificatesigningrequest "node-csr-ZU39iphJAYDQfsLssAwMViUu-E9Fua2pKhELMdeTq58" approved
# kubectl get csr
NAME                           AGE       REQUESTOR           CONDITION
node-csr-ZU39iUu-E9FuadeTq58   50s       kubelet-bootstrap   Approved,Issued
node-csr-sJXwbG8c9UUGS8iTrV8   1m        kubelet-bootstrap   Approved,Issued
node-csr-zpol8cIJZfrcU8fd7l4   1m        kubelet-bootstrap   Approved,Issued
# kubectl get nodes     ### list the nodes
NAME      STATUS    AGE       VERSION
master    Ready     1m        v1.8.2
node1     Ready     2m        v1.8.2
node2     Ready     2m        v1.8.2
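When several CSRs are pending, they can also be approved in a single pass; a convenience one-liner (not part of the original steps):
# kubectl get csr -o jsonpath='{.items[*].metadata.name}' | xargs kubectl certificate approve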
Note: after a CSR is approved, the kubelet kubeconfig and its key pair are generated automatically on the Node:
# ls /etc/kubernetes/kubelet.kubeconfig
/etc/kubernetes/kubelet.kubeconfig
# ls /etc/kubernetes/ssl/kubelet*
/etc/kubernetes/ssl/kubelet-client.crt
/etc/kubernetes/ssl/kubelet-client.key
/etc/kubernetes/ssl/kubelet.crt
/etc/kubernetes/ssl/kubelet.key
F. Configure the kube-proxy systemd unit and distribute it to the other Node nodes
# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
## Distribute to the other Node nodes
# ansible node -m copy -a 'src=/usr/lib/systemd/system/kube-proxy.service dest=/usr/lib/systemd/system/'
G. Adjust kernel parameters
# grep -v ^# /etc/sysctl.conf     ### parameters required for the kube-proxy proxy mode
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# sysctl -p     ### load the parameters
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# ansible node -m copy -a 'src=/etc/sysctl.conf dest=/etc/'
# ansible node -a 'sysctl -p'
192.168.100.104 | SUCCESS | rc=0 >>
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
192.168.100.103 | SUCCESS | rc=0 >>
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
Note: these parameters are needed because kube-proxy forwards traffic with iptables while Linux disables packet forwarding by default; I only noticed this when NodePort access did not work.
H. Configure the kube-proxy configuration file and distribute it to the other Node nodes
# export KUBE_PROXY=192.168.100.102
# cat << EOF > proxy
KUBE_PROXY_ARGS="--bind-address=$KUBE_PROXY --hostname-override=$KUBE_PROXY --cluster-cidr=10.254.0.0/16 --proxy-mode=iptables --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig"
EOF
# mv proxy /etc/kubernetes/proxy
## Change the kube-proxy node IP for each node, then distribute to the corresponding node
# ansible 192.168.100.103 -m copy -a 'src=proxy dest=/etc/kubernetes/'
# ansible 192.168.100.104 -m copy -a 'src=proxy dest=/etc/kubernetes/'
I. Start kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy
# ansible node -a 'systemctl start kube-proxy'
# ansible node -a 'systemctl status kube-proxy'
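To confirm that kube-proxy is actually programming iptables, the NAT chain it maintains in iptables mode can be listed (KUBE-SERVICES is the chain kube-proxy creates):
# iptables -t nat -nL KUBE-SERVICES | head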
J. View node-related information
# kubectl get nodes -o wide     ### view node details (note: CONTAINER-RUNTIME shows docker://Unknown because the docker version is relatively new)
NAME      STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
master    Ready     <none>    3m        v1.8.2    <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   docker://Unknown
node1     Ready     <none>    3m        v1.8.2    <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   docker://Unknown
node2     Ready     <none>    3m        v1.8.2    <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64   docker://Unknown
# kubectl get node --show-labels=true
# kubectl get nodes --show-labels     ### view node labels
NAME      STATUS    ROLES     AGE       VERSION   LABELS
master    Ready     <none>    4m        v1.8.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master
node1     Ready     <none>    5m        v1.8.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1
node2     Ready     <none>    5m        v1.8.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2
# kubectl version --short     ### view client and server versions
Client Version: v1.8.2
Server Version: v1.8.2
# curl     ### check health status
ok
# kubectl cluster-info     ### view cluster information
Kubernetes master is running at https://192.168.100.102:6443
# kubectl get ns     ### list all namespaces
# kubectl get namespace
NAME          STATUS    AGE
default       Active    29m
kube-public   Active    29m
kube-system   Active    29m
# kubectl get services     ### view the default service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   36m
# kubectl get services --all-namespaces     ### view all services
NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   37m
# kubectl get ep     ### view endpoints
# kubectl get endpoints
NAME         ENDPOINTS              AGE
kubernetes   192.168.100.102:6443   38m
# kubectl get sa     ### view service accounts
# kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         38m
Note: other commands can be explored with kubectl --help, and many long forms have short aliases (for example, kubectl get namespace can be shortened to kubectl get ns). A kubectl command reference is available at:
http://docs.kubernetes.org.cn/683.html
This article is reposted from the 51CTO blog of 结束的伤感; original link: http://blog.51cto.com/wangzhijian/2046124