Deploying Ceph Nautilus 14.2.22

Summary: a detailed walkthrough of deploying Ceph Nautilus 14.2.22 on CentOS 7, covering environment preparation, time synchronization, passwordless SSH, installation of the ceph-deploy tool, disk preparation, mon initialization, osd initialization, mgr initialization, cluster health checks, radosgw deployment, mds installation, and enabling the dashboard module.

Notes:
The highest Ceph release supported on CentOS 7 is 15.2.17 Octopus; if you need anything newer, look elsewhere.
For newer releases you must use Ubuntu 20.04 LTS or CentOS 8+.
However, in practice the MGR component of Ceph 15.2.17 Octopus was rewritten in Python 3, which causes deployment of the "ceph-mgr-dashboard" component to fail; the official recommendation is to deploy with cephadm instead.
Alternatively, you can drop to a lower Ceph release, e.g. 14.2.22 Nautilus, whose "ceph-mgr-dashboard" component still depends on the Python 2 environment.

  • Ceph base environment preparation
    1. Configure host name resolution
    cat >> /etc/hosts <<EOF
    10.0.0.141 ceph141
    10.0.0.142 ceph142
    10.0.0.143 ceph143
    EOF

2. Install common tools
yum -y install wget unzip chrony ntpdate

3. Configure time synchronization
3.1 ceph141 acts as the time server
[root@ceph141 ~]# vim /etc/chrony.conf

Comment out the original time servers, point the node at itself, and allow the 10.0.0.0/24 network to synchronize from it:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...
allow 10.0.0.0/24
local stratum 10
[root@ceph141 ~]#
[root@ceph141 ~]# echo "*/10 * * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root
[root@ceph141 ~]#
[root@ceph141 ~]# crontab -l
*/10 * * * * /usr/sbin/ntpdate ntp.aliyun.com
[root@ceph141 ~]#

3.2 ceph142 and ceph143 act as clients
[root@ceph142 ~]# vim /etc/chrony.conf

Comment out the original time servers and point this client at ceph141:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...

[root@ceph143 ~]# vim /etc/chrony.conf

Comment out the original time servers and point this client at ceph141:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...

3.3 Verify that time is synchronized
systemctl enable --now chronyd
timedatectl set-ntp true
timedatectl set-timezone Asia/Shanghai
chronyc activity -v
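
To confirm the clients are actually pulling time from ceph141 (a quick check, assuming chronyd is already enabled on every node), the following chrony queries can be run on ceph142 and ceph143:

# The source list should show ceph141 marked with ^* once it is selected as the sync source
chronyc sources -v
# "Reference ID" should resolve to ceph141 and "Stratum" should be 11 (one below the server's local stratum 10)
chronyc tracking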

4. Configure passwordless SSH from the ceph141 node
4.1 Generate the key pair
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P '' -q

4.2 Copy the key to the other nodes
for i in $(seq 1 3); do ssh-copy-id ceph14$i; done

4.3 Have all nodes share the same key pair
scp -rp ~/.ssh ceph142:~
scp -rp ~/.ssh ceph143:~
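
As a quick sanity check (a sketch, not part of the original steps), each of the following should print the remote hostname without prompting for a password:

# Passwordless login verification from ceph141
for i in $(seq 1 3); do ssh ceph14$i hostname; done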

5. Install the "ceph-deploy" tool on the "ceph141" node; it will be used later to deploy the Ceph cluster
5.1 Prepare domestic mirror repositories (the CentOS base repo and the EPEL repo)
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

5.2 Configure the Ceph repository
cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF

5.3 Install the ceph-deploy tool
yum -y install ceph-deploy
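
Optionally verify the installation; the Nautilus el7 noarch repo ships the 2.0.x series of ceph-deploy (the exact minor version may vary):

# Should print a 2.0.x version string
ceph-deploy --version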

  1. Prepare the disks
    Attach 4 * 2TB disks to each node. If you forgot to add them beforehand, that is fine: after attaching them, run the following command and the disks will be detected automatically.

for i in $(seq 0 2); do echo "- - -" > /sys/class/scsi_host/host${i}/scan; done
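
After the rescan, the new devices should show up as raw, unpartitioned disks; a quick way to confirm (the sdb-sde names assumed here match the OSD steps later on):

# sdb/sdc/sdd/sde should be listed with no partitions or mountpoints
lsblk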

2. Create the ceph-deploy working directory
mkdir -pv /yinzhengjie/softwares/ceph-cluster/ && cd /yinzhengjie/softwares/ceph-cluster/

3. Initialize the mon daemons
ceph-deploy install --no-adjust-repos ceph141 ceph142 ceph143
ceph-deploy new ceph141 ceph142 ceph143
ceph-deploy mon create-initial
ceph-deploy admin ceph141 ceph142 ceph143
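
Before moving on to the OSDs, it can be worth confirming that all three monitors have formed a quorum (run from ceph141, where the admin keyring was just pushed):

# Both commands should list ceph141, ceph142 and ceph143 in the quorum
ceph mon stat
ceph quorum_status --format json-pretty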

  • Initialize the OSDs. If this stage hangs for more than about a minute, try rebooting the operating system; that usually resolves it (the hang is suspected to be related to disk hot-plugging). An equivalent loop form is sketched after the full command list below.
    ceph-deploy osd create --data /dev/sdb ceph141
    ceph-deploy osd create --data /dev/sdc ceph141
    ceph-deploy osd create --data /dev/sdd ceph141
    ceph-deploy osd create --data /dev/sde ceph141

ceph-deploy osd create --data /dev/sdb ceph142
ceph-deploy osd create --data /dev/sdc ceph142
ceph-deploy osd create --data /dev/sdd ceph142
ceph-deploy osd create --data /dev/sde ceph142

ceph-deploy osd create --data /dev/sdb ceph143
ceph-deploy osd create --data /dev/sdc ceph143
ceph-deploy osd create --data /dev/sdd ceph143
ceph-deploy osd create --data /dev/sde ceph143
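
The twelve commands above can equivalently be collapsed into a loop; a minimal sketch, assuming every node exposes the same four device names (sdb-sde):

# Create one OSD per data disk on each node
for node in ceph141 ceph142 ceph143; do
    for dev in sdb sdc sdd sde; do
        ceph-deploy osd create --data /dev/${dev} ${node}
    done
done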

  • Initialize the mgr daemons; without them, the cluster's available storage capacity is not reported
    ceph-deploy mgr create ceph141 ceph142 ceph143
  • Check the Ceph cluster status
    [root@ceph141 ceph-cluster]# ceph -s
    cluster:
    id: 1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_WARN
    mons are allowing insecure global_id reclaim

services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 7m)
mgr: ceph141(active, since 32s), standbys: ceph142, ceph143
osd: 12 osds: 12 up (since 2m), 12 in (since 2m)

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 12 GiB used, 23 TiB / 23 TiB avail
pgs:

[root@ceph141 ceph-cluster]#

Fix for the warning:

ceph config set mon auth_allow_insecure_global_id_reclaim false

Check the cluster status again (now healthy):
[root@ceph141 ceph-cluster]# ceph -s
cluster:
id: 1a9394f4-5983-4818-aa2d-f81ba06702ab
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 7m)
mgr: ceph141(active, since 79s), standbys: ceph142, ceph143
osd: 12 osds: 12 up (since 3m), 12 in (since 3m)

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 12 GiB used, 23 TiB / 23 TiB avail
pgs:

[root@ceph141 ceph-cluster]#

  • Check the Ceph version
    [root@ceph141 ceph-cluster]# ceph -v
    ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
    [root@ceph141 ceph-cluster]#
  • Deploy the radosgw service. If this step hangs, reboot the VM; normally it completes within about 10 seconds.
    [root@ceph141 ceph-cluster]# ceph-deploy rgw create ceph141 ceph142 ceph143
    [root@ceph141 ceph-cluster]#
    [root@ceph141 ceph-cluster]# ceph -s
    cluster:
    id: 1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 55s)
mgr: ceph142(active, since 40s), standbys: ceph143, ceph141
osd: 12 osds: 12 up (since 49s), 12 in (since 11m)
rgw: 3 daemons active (ceph141, ceph142, ceph143)

task status:

data:
pools: 4 pools, 128 pgs
objects: 187 objects, 1.2 KiB
usage: 12 GiB used, 23 TiB / 23 TiB avail
pgs: 128 active+clean

io:
client: 89 KiB/s rd, 0 B/s wr, 88 op/s rd, 58 op/s wr

[root@ceph141 ceph-cluster]#
[root@ceph141 ceph-cluster]# ss -ntl | grep 7480
LISTEN 0 128 *:7480 *:*
LISTEN 0 128 [::]:7480 [::]:*

[root@ceph141 ceph-cluster]#
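
With the gateway listening on 7480, a quick smoke test (a sketch, not in the original steps) is an anonymous HTTP request; a healthy radosgw answers with an empty S3 ListAllMyBucketsResult XML document:

# Anonymous S3 request against the local radosgw instance
curl http://ceph141:7480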

  • Install the MDS daemons
    [root@ceph141 ceph-cluster]# ceph-deploy mds create ceph141 ceph142 ceph143
    [root@ceph141 ceph-cluster]#
    [root@ceph141 ceph-cluster]# ceph -s
    cluster:
    id: 1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 2m)
mgr: ceph142(active, since 2m), standbys: ceph143, ceph141
mds: 3 up:standby
osd: 12 osds: 12 up (since 2m), 12 in (since 13m)
rgw: 3 daemons active (ceph141, ceph142, ceph143)

task status:

data:
pools: 4 pools, 128 pgs
objects: 187 objects, 1.2 KiB
usage: 12 GiB used, 23 TiB / 23 TiB avail
pgs: 128 active+clean

[root@ceph141 ceph-cluster]#
[root@ceph141 ceph-cluster]# ceph mds stat
3 up:standby
[root@ceph141 ceph-cluster]# ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
[root@ceph141 ceph-cluster]# ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
[root@ceph141 ceph-cluster]# ceph fs new yinzhengjie-cephfs cephfs-metadata cephfs-data
new fs with metadata pool 5 and data pool 6
[root@ceph141 ceph-cluster]# ceph fs ls
name: yinzhengjie-cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
[root@ceph141 ceph-cluster]# ceph fs status yinzhengjie-cephfs

yinzhengjie-cephfs - 0 clients

+------+----------+---------+----------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+----------+---------+----------+-------+-------+
| 0 | creating | ceph143 | | 10 | 13 |
+------+----------+---------+----------+-------+-------+
+-----------------+----------+-------+-------+
| Pool | type | used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata | 256k | 7595G |
| cephfs-data | data | 0 | 7595G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| ceph142 |
| ceph141 |
+-------------+
MDS version: ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
[root@ceph141 ceph-cluster]#
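
With the filesystem created, it can be mounted from any host that can reach the monitors. The sketch below uses the kernel CephFS client and the admin key purely for a quick test; the mount point and secret-file path are made-up names, and in practice a dedicated cephx user is preferable:

# Extract the admin key into a secret file (test only)
ceph auth get-key client.admin > /tmp/admin.secret
mkdir -p /mnt/yinzhengjie-cephfs
# Kernel-client mount against the three monitors
mount -t ceph ceph141:6789,ceph142:6789,ceph143:6789:/ /mnt/yinzhengjie-cephfs -o name=admin,secretfile=/tmp/admin.secret
df -h /mnt/yinzhengjie-cephfs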

  • Enable the dashboard module

Install the dashboard dependency

yum -y install ceph-mgr-dashboard

Enable the module in the MGR

[root@ceph141 ~]# ceph mgr module enable dashboard --force
[root@ceph141 ~]# ceph mgr module ls | grep dashboard
"dashboard",
[root@ceph141 ~]#

Modify the default configuration

Syntax:
ceph dashboard ac-user-create {<username>} {<rolename>} -i {<password_file>}

Working example:
[root@ceph142 ~]# echo "yinzhengjie" > password.txt
[root@ceph142 ~]# ceph dashboard ac-user-create admin administrator -i password.txt
{"username": "admin", "lastUpdate": 1697075884, "name": null, "roles": ["administrator"], "password": "$2b$12$f9dU7BSemcwRL9kWnCS4z.f5iG5c0jnOufvuZ.mqScxxl5hepnsWG", "email": null}
[root@ceph142 ~]#

Check how the service is exposed

[root@ceph142 ~]# ceph mgr services
{
"dashboard": "http://ceph143:7000/"
}
[root@ceph142 ~]#
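
The URL above is plain HTTP on port 7000 rather than the Nautilus default of HTTPS on 8443, which implies the dashboard's SSL and port settings were changed beforehand. A minimal sketch of configuration that would produce such an endpoint (these exact values are an assumption, not shown in the steps above):

# Turn off TLS and move the dashboard to port 7000 (defaults: ssl=true, port 8443)
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
# Disable and re-enable the module so the new settings take effect
ceph mgr module disable dashboard
ceph mgr module enable dashboard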
