Kubernetes Study Notes - Part.05 Base Environment Preparation


Contents
Part.01 Kubernetes and Docker
Part.02 Docker Versions
Part.03 Kubernetes Principles
Part.04 Resource Planning
Part.05 Base Environment Preparation
Part.06 Docker Installation
Part.07 Harbor Setup
Part.08 K8s Environment Installation
Part.09 K8s Cluster Construction
Part.10 Container Rollback

Chapter 5: Base Environment Preparation

5.1. Passwordless SSH Login

Generate a key pair on master01, master02, and master03, and configure passwordless login to all other nodes:

ssh-keygen -t rsa -f ~/.ssh/id_rsa -C username_root
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@192.168.111.1
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@192.168.111.2
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@192.168.111.3
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@192.168.111.11
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@192.168.111.12
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 root@192.168.111.20
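
As a quick sanity check (a minimal sketch, using the node IPs listed above), loop over all hosts and confirm that SSH no longer prompts for a password:

for ip in 192.168.111.1 192.168.111.2 192.168.111.3 192.168.111.11 192.168.111.12 192.168.111.20; do
  # BatchMode=yes makes ssh fail instead of prompting if key authentication is not working
  ssh -o BatchMode=yes root@${ip} hostname
done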

5.2. Ansible Configuration

On an Internet-connected server, download ansible and its dependency packages:

yum install -y epel-release
yumdownloader --resolve --destdir /opt/ansible/ ansible

Upload the packages to master01 and install them:

rpm -ivh /opt/ansible/*

After the installation completes, check the version:

[root@localhost ~]# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Aug  7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

Configure ansible (and, later, rhel-system-roles); create the configuration files:

mkdir /root/ansible
cd /root/ansible
cp /etc/ansible/ansible.cfg /root/ansible/

Edit the configuration file /root/ansible/ansible.cfg:

[defaults]
inventory      = /root/ansible/inventory
ask_pass      = false
remote_user = root

Create the inventory file /root/ansible/inventory:

[k8s:children]
master
worker
harbor
[master]
192.168.111.1 hostname=master01
192.168.111.2 hostname=master02
192.168.111.3 hostname=master03
[worker]
192.168.111.11 hostname=worker01
192.168.111.12 hostname=worker02
[harbor]
192.168.111.20 hostname=harbor01

Test connectivity:

[root@master01 ansible]# ansible all -m ping
192.168.111.3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.111.12 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.111.11 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.111.1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.111.2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.111.20 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

5.3. Changing Hostnames

Create the playbook /root/ansible/hostname.yml:

---
- name: modify hostname
  hosts: all
  tasks:
    - name: modify hostname permanently
      raw: "echo {{ hostname | quote }} > /etc/hostname"
    - name: modify hostname temporarily
      shell: hostname {{ hostname | quote }}
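
Ansible's built-in hostname module could set the name both persistently and for the running system in a single task; a minimal alternative sketch, assuming the same hostname inventory variable, would be:

---
- name: modify hostname (module-based alternative)
  hosts: all
  tasks:
    - name: set hostname persistently and immediately
      hostname:
        name: "{{ hostname }}"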

Run the playbook and verify:

[root@master01 ansible]# ansible-playbook hostname.yml

PLAY [modify hostname] ****************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************
ok: [192.168.111.11]
ok: [192.168.111.12]
ok: [192.168.111.1]
ok: [192.168.111.2]
ok: [192.168.111.3]
ok: [192.168.111.20]

TASK [modify hostname permanently] ****************************************************************************************************************************
changed: [192.168.111.2]
changed: [192.168.111.1]
changed: [192.168.111.11]
changed: [192.168.111.3]
changed: [192.168.111.12]
changed: [192.168.111.20]

TASK [modify hostname temporarily] ****************************************************************************************************************************
changed: [192.168.111.3]
changed: [192.168.111.11]
changed: [192.168.111.1]
changed: [192.168.111.2]
changed: [192.168.111.12]
changed: [192.168.111.20]

PLAY RECAP ****************************************************************************************************************************************************
192.168.111.1              : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.11             : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.12             : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.2              : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.20             : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.3              : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[root@master01 ansible]# ansible all -m shell -a 'hostname'
192.168.111.3 | CHANGED | rc=0 >>
master03
192.168.111.11 | CHANGED | rc=0 >>
worker01
192.168.111.1 | CHANGED | rc=0 >>
master01
192.168.111.2 | CHANGED | rc=0 >>
master02
192.168.111.12 | CHANGED | rc=0 >>
worker02
192.168.111.20 | CHANGED | rc=0 >>
harbor01

5.4. Updating the hosts File

On master01, edit the host list /etc/hosts:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.111.1 master01.k8s.local   master01
192.168.111.2 master02.k8s.local   master02
192.168.111.3 master03.k8s.local   master03
192.168.111.11 worker01.k8s.local   worker01
192.168.111.12 worker02.k8s.local   worker02
192.168.111.20 harbor01.k8s.local   harbor01

Distribute it to the other nodes:

ansible all -m template -a 'src=/etc/hosts dest=/etc/hosts'
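
As a hedged spot check, confirm the file landed on every node by resolving one of the new names through /etc/hosts:

# each node should return 192.168.111.1 for master01
ansible all -m shell -a 'getent hosts master01.k8s.local'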

5.5. Disabling firewalld and SELinux

Stop and disable firewalld:

ansible all -m service -a 'name=firewalld state=stopped enabled=no'

Verify the status:

[root@master01 ansible]# ansible all -m shell -a 'systemctl status firewalld | grep Active'
192.168.111.11 | CHANGED | rc=0 >>
   Active: inactive (dead)
192.168.111.12 | CHANGED | rc=0 >>
   Active: inactive (dead)
192.168.111.1 | CHANGED | rc=0 >>
   Active: inactive (dead)
192.168.111.3 | CHANGED | rc=0 >>
   Active: inactive (dead)
192.168.111.2 | CHANGED | rc=0 >>
   Active: inactive (dead)
192.168.111.20 | CHANGED | rc=0 >>
   Active: inactive (dead)

Disable SELinux:

ansible all -m selinux -a 'policy=targeted state=disabled'
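
The selinux module writes state=disabled to the configuration but only switches the running system to permissive; the change takes full effect after a reboot, which is why the nodes still report Permissive below. If the module is unavailable on a target (it needs libselinux-python), a hedged alternative is to edit the config file directly:

# write SELINUX=disabled into /etc/selinux/config on every node
ansible all -m lineinfile -a "path=/etc/selinux/config regexp='^SELINUX=' line='SELINUX=disabled'"
# switch the running kernel to permissive mode immediately
ansible all -m shell -a 'setenforce 0'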

Verify the status:

[root@localhost ansible]# ansible all -m shell -a 'getenforce'
192.168.111.1 | CHANGED | rc=0 >>
Permissive
192.168.111.11 | CHANGED | rc=0 >>
Permissive
192.168.111.3 | CHANGED | rc=0 >>
Permissive
192.168.111.2 | CHANGED | rc=0 >>
Permissive
192.168.111.12 | CHANGED | rc=0 >>
Permissive
192.168.111.20 | CHANGED | rc=0 >>
Permissive

5.6. Configuring the System Yum Repository

On master01, configure a Yum repository from the CentOS installation ISO:

mkdir /mnt/cdrom
mount /dev/cdrom /mnt/cdrom/
rm -f /etc/yum.repos.d/*

Create the repo file /etc/yum.repos.d/local.repo:

[centos]
name=centos
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1

Refresh the yum metadata:

yum clean all
yum makecache fast

Install and enable the httpd service:

yum install -y httpd
systemctl enable --now httpd

Point the HTTP service at the CentOS repository content:

mkdir /var/www/html/centos
umount /mnt/cdrom/
mount /dev/cdrom /var/www/html/centos/
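
Note that this mount does not survive a reboot. A hedged sketch to persist it in /etc/fstab (assuming the optical drive is /dev/sr0, the usual target of the /dev/cdrom symlink):

# mount the installation ISO under the web root at boot
echo '/dev/sr0  /var/www/html/centos  iso9660  defaults,ro  0 0' >> /etc/fstab
mount -a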

Remove the existing repo files on all nodes:

ansible all -m shell -a 'rm -f /etc/yum.repos.d/*.repo'

Configure the system Yum repository on all nodes:

ansible all -m yum_repository -a 'name="centos" description="centos" baseurl="http://master01.k8s.local/centos" enabled=yes gpgcheck=no'
ansible all -m shell -a 'yum clean all'
ansible all -m shell -a 'yum makecache fast'
ansible all -m shell -a 'yum update -y'
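
A quick way to confirm every node now sees the repository (a hedged check, using the repo name defined above):

# every node should list the "centos" repository with a non-zero package count
ansible all -m shell -a 'yum repolist'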

5.7. Installing Base Packages

Install vim and other base packages via /root/ansible/packages.yml:

---
- hosts: all
  tasks:
    - name: install packages
      yum:
        name:
          - pciutils
          - bash-completion
          - vim
          - chrony
          - net-tools
        state: present

Run the playbook and verify:

[root@master01 ansible]# ansible-playbook packages.yml

PLAY [all] ****************************************************************************************************************************************************

TASK [Gathering Facts] ****************************************************************************************************************************************
ok: [192.168.111.3]
ok: [192.168.111.1]
ok: [192.168.111.12]
ok: [192.168.111.11]
ok: [192.168.111.2]
ok: [192.168.111.20]

TASK [install packages] ***************************************************************************************************************************************
ok: [192.168.111.2]
ok: [192.168.111.11]
ok: [192.168.111.1]
ok: [192.168.111.12]
ok: [192.168.111.20]
changed: [192.168.111.3]

PLAY RECAP ****************************************************************************************************************************************************
192.168.111.1              : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.11             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.12             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.2              : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.20             : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
192.168.111.3              : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

5.8. NTP Time Synchronization

master01 serves as the time source; all other nodes synchronize their clocks from master01.
Server (master01)
Edit the configuration file /etc/chrony.conf:

# Do not specify any external NTP sources
# Allow other nodes on this subnet to connect as clients
allow 192.168.111.0/24
# If no time source is usable, serve the local clock as the reference at stratum 10
local stratum 10

Restart the service:

systemctl restart chronyd

Clients (master02/master03/worker01/worker02/harbor01)
On the Internet-connected server, download the ansible system roles package:

yumdownloader --resolve rhel-system-roles

Upload the packages to /opt/ansible/ on master01 and install them:

[root@localhost ~]# rpm -ivh /opt/ansible/python-netaddr-0.7.5-9.el7.noarch.rpm
warning: /opt/ansible/python-netaddr-0.7.5-9.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:python-netaddr-0.7.5-9.el7       ################################# [100%]
[root@localhost ~]# rpm -ivh /opt/ansible/rhel-system-roles-1.7.3-4.el7_9.noarch.rpm
warning: /opt/ansible/rhel-system-roles-1.7.3-4.el7_9.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:rhel-system-roles-1.7.3-4.el7_9  ################################# [100%]

Configure NTP synchronization with /root/ansible/timesync.yml:

---
- hosts: 192.168.111.2,192.168.111.3,worker,harbor
  vars:
    timesync_ntp_servers:
      - hostname: 192.168.111.1
        iburst: yes
  roles:
    - rhel-system-roles.timesync

Run the playbook:

ansible-playbook /root/ansible/timesync.yml
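
If rhel-system-roles cannot be installed, a minimal hedged alternative is to point each client's chrony at master01 directly and restart the daemon (this sketch simply rewrites the server lines in /etc/chrony.conf):

# comment out the default pool servers and add master01 as the only source
ansible 192.168.111.2,192.168.111.3,worker,harbor -m shell -a "sed -i 's/^server .*/#&/' /etc/chrony.conf"
ansible 192.168.111.2,192.168.111.3,worker,harbor -m lineinfile -a "path=/etc/chrony.conf line='server 192.168.111.1 iburst'"
ansible 192.168.111.2,192.168.111.3,worker,harbor -m service -a 'name=chronyd state=restarted'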

Verify clock synchronization:

[root@master01 ansible]# ansible 192.168.111.2,192.168.111.3,worker,harbor -m shell -a 'chronyc sources -v'
192.168.111.12 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master01.k8s.local           10   6   377    46  +5212ns[  +19us] +/-   73us
192.168.111.3 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master01.k8s.local           10   6    17    30   -261ns[  -62us] +/-  966us
192.168.111.11 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master01.k8s.local           10   6   377    35    -17us[  -20us] +/-  130us
192.168.111.20 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master01.k8s.local           10   6   377    25  -4152ns[-7463ns] +/-   96us
192.168.111.2 | CHANGED | rc=0 >>
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master01.k8s.local           10   6   377    27    -52us[  -50us] +/-  191us

5.9. Disabling Swap

Disable temporarily:

ansible all -m shell -a 'swapoff -a'

Disable permanently (comment out the swap entry in /etc/fstab):

ansible all -m shell -a 'sed -ri "s/.*swap.*/#&/" /etc/fstab'
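
To confirm swap is fully off (a hedged check), every node should report zero swap:

# the Swap line should show 0 total on every node
ansible all -m shell -a 'free -m | grep -i swap'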

5.10. Enabling IPVS Forwarding

In Kubernetes, kube-proxy supports two proxy modes for Services, one based on iptables and one based on IPVS; IPVS offers better forwarding performance.
Enable IPVS forwarding on master01-03:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

Make the script executable and run it:

chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
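
The commands above only take effect on the node where they are typed. A hedged sketch to push the same script to all three masters with ansible and confirm the modules are loaded:

# copy the module-load script to the master group and execute it there
ansible master -m copy -a 'src=/etc/sysconfig/modules/ipvs.modules dest=/etc/sysconfig/modules/ipvs.modules mode=0755'
ansible master -m shell -a '/bin/bash /etc/sysconfig/modules/ipvs.modules'
# the ip_vs and nf_conntrack_ipv4 modules should now be listed
ansible master -m shell -a 'lsmod | grep -e ip_vs -e nf_conntrack_ipv4'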

5.11. Enabling Bridge Filtering and Kernel Forwarding

The bridge-nf-call-iptables kernel parameter makes traffic forwarded at layer 2 by bridge devices also pass through the layer-3 iptables rules (including conntrack), so enabling it resolves same-node Service communication problems.
On master01, create /etc/sysctl.d/k8s.conf with the bridge-filtering and kernel-forwarding settings:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Distribute it to the other nodes and load the br_netfilter module:

ansible all -m template -a 'src=/etc/sysctl.d/k8s.conf dest=/etc/sysctl.d/'
ansible all -m shell -a 'modprobe br_netfilter'
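
modprobe does not persist across reboots, so the bridge sysctls would stop applying after a restart. A hedged sketch to load br_netfilter automatically at boot on every node, via a systemd modules-load.d drop-in:

# load br_netfilter at every boot so the bridge-nf sysctls keep applying
ansible all -m shell -a 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'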

Verify that the settings took effect:

[root@master01 ansible]# ansible all -m shell -a 'sysctl --system | grep -A3 k8s'
192.168.111.3 | CHANGED | rc=0 >>
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
192.168.111.1 | CHANGED | rc=0 >>
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
192.168.111.12 | CHANGED | rc=0 >>
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
192.168.111.11 | CHANGED | rc=0 >>
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
192.168.111.2 | CHANGED | rc=0 >>
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
192.168.111.20 | CHANGED | rc=0 >>
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1