ceph rbd.ko via elrepo (Module rbd not found when mapping, CentOS 7)

Introduction:
As we know, Ceph supports block devices, but the client must load the rbd kernel module and must be able to reach every node of the Ceph cluster.

During testing, however, the following error appeared when mapping a block device with rbd:
[root@localhost ~]# rbd map img1 --pool pool1 -m 172.17.0.2 -k /etc/ceph/ceph.client.admin.keyring 
modinfo: ERROR: Module rbd not found.
modprobe: FATAL: Module rbd not found.
rbd: failed to load rbd kernel module (1)
rbd: sysfs write failed
rbd: map failed: (2) No such file or directory
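
Before fixing anything, you can confirm that the stock kernel really lacks the module (a quick check; the config file path assumes a default CentOS 7 install):
[root@localhost ~]# modinfo rbd
[root@localhost ~]# grep CONFIG_BLK_DEV_RBD /boot/config-$(uname -r)
On the stock 3.10.0-123.el7 kernel, modinfo reports the module is not found and the grep should show that CONFIG_BLK_DEV_RBD is not set.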

The cause of this error is that the rbd module is missing.
I am using CentOS 7 x64 here, and there are two ways to solve it:
1. Compile the kernel yourself and include the rbd module.
Reference:
2. Use a prebuilt kernel provided by elrepo.
Reference:
Here I take the second approach, i.e. install a prebuilt kernel:
[root@localhost ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@localhost ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
[root@localhost ~]# yum install -y yum-plugin-fastestmirror
[root@localhost ~]# yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel

You can see that the new kernel ships the rbd module:
[root@localhost ceph-kmod-rpm]# locate rbd.ko
/usr/lib/modules/3.18.1-1.el7.elrepo.x86_64/kernel/drivers/block/rbd.ko
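
(If locate prints nothing, run updatedb first, or search with find /usr/lib/modules -name rbd.ko.) The module file can also be inspected before rebooting, since modinfo accepts a path; it should show, among other things, that rbd depends on libceph:
[root@localhost ceph-kmod-rpm]# modinfo /usr/lib/modules/3.18.1-1.el7.elrepo.x86_64/kernel/drivers/block/rbd.ko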

Set the system to boot the new kernel:
[root@localhost ~]#  cat /boot/grub2/grub.cfg |grep menuentry
if [ x"${feature_menuentry_id}" = xy ]; then
  menuentry_id_option="--id"
  menuentry_id_option=""
export menuentry_id_option
menuentry 'CentOS Linux (3.18.1-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-123.el7.x86_64-advanced-5b6291c5-d316-467f-9a78-f4535ad76e6f' {
menuentry 'CentOS Linux, with Linux 3.10.0-123.el7.x86_64' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-123.el7.x86_64-advanced-5b6291c5-d316-467f-9a78-f4535ad76e6f' {
menuentry 'CentOS Linux, with Linux 0-rescue-0e18ebb9ae3d4dbfb4b95bbd4cf359ed' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-0e18ebb9ae3d4dbfb4b95bbd4cf359ed-advanced-5b6291c5-d316-467f-9a78-f4535ad76e6f' {
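
If you only want the entry titles, a cleaner alternative to the grep above is:
[root@localhost ~]# awk -F\' '/^menuentry /{print $2}' /boot/grub2/grub.cfg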

[root@localhost ~]# grub2-set-default 'CentOS Linux (3.18.1-1.el7.elrepo.x86_64) 7 (Core)'

[root@localhost ~]# grub2-editenv list
saved_entry=CentOS Linux (3.18.1-1.el7.elrepo.x86_64) 7 (Core)

[root@localhost ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.18.1-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-3.18.1-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-123.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-123.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-0e18ebb9ae3d4dbfb4b95bbd4cf359ed
Found initrd image: /boot/initramfs-0-rescue-0e18ebb9ae3d4dbfb4b95bbd4cf359ed.img
done

Reboot the system and confirm that the rbd module can now be used:
[root@localhost ~]# uname -r
3.18.1-1.el7.elrepo.x86_64
[root@localhost ~]# lsmod|grep rbd
[root@localhost ~]# modprobe rbd
[root@localhost ~]# lsmod|grep rbd
rbd                    73304  0 
libceph               235718  1 rbd
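
To have rbd loaded automatically on every boot instead of modprobing it by hand, a minimal sketch using systemd's modules-load.d mechanism (the file name rbd.conf is arbitrary):
[root@localhost ~]# echo rbd > /etc/modules-load.d/rbd.conf
systemd-modules-load will then load the module during boot.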


OK, now let's go back and demonstrate the whole process of using a Ceph block device. (The system using the block device must be able to connect to the Ceph cluster, i.e. to all nodes in the cluster.)
Since my environment is deployed under Docker, these containers need to be restarted and their IPs configured by hand:
docker start deploy
docker start mon1
docker start mon2
docker start mon3
docker start mon4
docker start mon5
docker start osd1
docker start osd2
docker start osd3
docker start osd4

Configure the IPs (a loop version of these commands is sketched after this block):
./pipework.sh docker0 -i eth0 deploy 172.17.0.1/16@172.17.42.1
./pipework.sh docker0 -i eth0 mon1 172.17.0.2/16@172.17.42.1
./pipework.sh docker0 -i eth0 mon2 172.17.0.3/16@172.17.42.1
./pipework.sh docker0 -i eth0 mon3 172.17.0.4/16@172.17.42.1
./pipework.sh docker0 -i eth0 osd1 172.17.0.5/16@172.17.42.1
./pipework.sh docker0 -i eth0 osd2 172.17.0.6/16@172.17.42.1
./pipework.sh docker0 -i eth0 osd3 172.17.0.7/16@172.17.42.1
./pipework.sh docker0 -i eth0 osd4 172.17.0.8/16@172.17.42.1
./pipework.sh docker0 -i eth0 mon4 172.17.0.9/16@172.17.42.1
./pipework.sh docker0 -i eth0 mon5 172.17.0.10/16@172.17.42.1
./pipework.sh docker0 -i eth1 deploy 172.18.0.1/16
./pipework.sh docker0 -i eth1 mon1 172.18.0.2/16
./pipework.sh docker0 -i eth1 mon2 172.18.0.3/16
./pipework.sh docker0 -i eth1 mon3 172.18.0.4/16
./pipework.sh docker0 -i eth1 osd1 172.18.0.5/16
./pipework.sh docker0 -i eth1 osd2 172.18.0.6/16
./pipework.sh docker0 -i eth1 osd3 172.18.0.7/16
./pipework.sh docker0 -i eth1 osd4 172.18.0.8/16
./pipework.sh docker0 -i eth1 mon4 172.18.0.9/16
./pipework.sh docker0 -i eth1 mon5 172.18.0.10/16
./pipework.sh docker0 -i eth2 deploy 172.19.0.1/16
./pipework.sh docker0 -i eth2 mon1 172.19.0.2/16
./pipework.sh docker0 -i eth2 mon2 172.19.0.3/16
./pipework.sh docker0 -i eth2 mon3 172.19.0.4/16
./pipework.sh docker0 -i eth2 osd1 172.19.0.5/16
./pipework.sh docker0 -i eth2 osd2 172.19.0.6/16
./pipework.sh docker0 -i eth2 osd3 172.19.0.7/16
./pipework.sh docker0 -i eth2 osd4 172.19.0.8/16
./pipework.sh docker0 -i eth2 mon4 172.19.0.9/16
./pipework.sh docker0 -i eth2 mon5 172.19.0.10/16
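
The 30 lines above follow a fixed pattern, so they can also be generated with a small loop; a sketch assuming the same pipework.sh wrapper and the container-to-address mapping used above:
i=1
for c in deploy mon1 mon2 mon3 osd1 osd2 osd3 osd4 mon4 mon5; do
    ./pipework.sh docker0 -i eth0 $c 172.17.0.$i/16@172.17.42.1   # eth0, with gateway 172.17.42.1
    ./pipework.sh docker0 -i eth1 $c 172.18.0.$i/16
    ./pipework.sh docker0 -i eth2 $c 172.19.0.$i/16
    i=$((i+1))
done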

Start the mons and osds:
service ceph start mon.mon1
...
ceph-mon -i mon4 --pid-file /var/run/ceph/mon.mon4.pid -c /etc/ceph/ceph.conf --cluster ceph --mon-data /data01/ceph/mon/ceph-4

service ceph start osd1

... (the rest omitted)

Now the Ceph cluster can be managed from the host. Note that if authentication is enabled, the configuration file and keyring must be copied to the host, and if their file names are not the defaults, they must be specified explicitly on the command line.
[root@localhost rbd0]# ceph osd lspools
0 rbd

Create a pool
Syntax:
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] \
     [crush-ruleset-name]

[root@localhost rbd0]# ceph osd pool create pool1 128 128 replicated
[root@localhost rbd0]# ceph osd lspools
0 rbd,1 pool1,
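
The placement-group and replication settings of the new pool can be verified with ceph osd pool get:
[root@localhost rbd0]# ceph osd pool get pool1 pg_num
[root@localhost rbd0]# ceph osd pool get pool1 size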

Create a block device image
[root@localhost rbd0]# rbd create --help

Create an image named img2 in pool pool1, 8192 MB in size, format 2 (which supports cloning):
[root@localhost rbd0]# rbd create --size 8192 img2 --image-format 2 -p pool1

Then create the remaining images:
rbd create --size 8192 img2 -p rbd
rbd create --size 8192 img1 -p rbd
rbd create --size 8192 img1 -p pool1
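
To double-check what exists now, you can list the images per pool and inspect one of them; rbd ls should show img1 and img2 in each of the two pools, and rbd info prints the size, order, and image format:
[root@localhost rbd0]# rbd ls -p pool1
[root@localhost rbd0]# rbd info img1 -p pool1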


Now the block device image can be mapped on the host (i.e. the machine whose kernel we just updated):
[root@localhost ~]# rbd map img1 --pool pool1 -m 172.17.0.2 -k /etc/ceph/ceph.client.admin.keyring 
/dev/rbd0
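
Mapped images and their device nodes can be listed at any time with:
[root@localhost ~]# rbd showmapped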

[root@localhost ~]# fdisk -c -u /dev/rbd0
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x2b2bb069.

Command (m for help): p

Disk /dev/rbd0: 8589 MB, 8589934592 bytes, 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
Disk label type: dos
Disk identifier: 0x2b2bb069

     Device Boot      Start         End      Blocks   Id  System

Create a file system on it:
[root@localhost ~]# mkfs.xfs /dev/rbd0
log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
meta-data=/dev/rbd0              isize=256    agcount=9, agsize=261120 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir /rbd0
[root@localhost ~]# mount /dev/rbd0 /rbd0
[root@localhost ~]# df -h
/dev/rbd0                  8.0G   33M  8.0G   1% /rbd0
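
If the mapping and mount should survive a reboot, Ceph ships an rbdmap service; a rough sketch, assuming default file locations (check the rbdmap man page of your Ceph version):
# /etc/ceph/rbdmap
pool1/img1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
# /etc/fstab (udev creates /dev/rbd/<pool>/<image> symlinks)
/dev/rbd/pool1/img1 /rbd0 xfs noatime,_netdev 0 0
Then enable the service (systemctl enable rbdmap, or chkconfig rbdmap on for the SysV script).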

A simple usage test:
[root@localhost rbd0]# dd if=/dev/zero of=./test.img bs=8192k count=512 oflag=direct
512+0 records in
512+0 records out
4294967296 bytes (4.3 GB) copied, 53.148 s, 80.8 MB/s

[root@localhost ~]# ceph -s
    cluster f649b128-963c-4802-ae17-5a76f36c4c76
     health HEALTH_OK
     monmap e8: 5 mons at {mon1=172.17.0.2:6789/0,mon2=172.17.0.3:6789/0,mon3=172.17.0.4:6789/0,mon4=172.17.0.9:6789/0,mon5=172.17.0.10:6789/0}, election epoch 50, quorum 0,1,2,3,4 mon1,mon2,mon3,mon4,mon5
     osdmap e59: 4 osds: 4 up, 4 in
      pgmap v21762: 256 pgs, 2 pools, 1773 MB data, 456 objects
            444 GB used, 1192 GB / 1636 GB avail
                 256 active+clean
  client io 105 MB/s wr, 420 op/s

Other tests:
[root@localhost rbd1]# rbd bench-write img2 -p rbd
bench-write  io_size 4096 io_threads 16 bytes 1073741824 pattern seq
  SEC       OPS   OPS/SEC   BYTES/SEC
    1     30097  30059.61  123124145.56
    2     52487  25616.02  104923229.04
    3     77155  25716.23  105333691.08
    4    102370  24461.42  100193961.33
    5    128880  25772.12  105562598.27
    6    150406  24981.22  102323056.68
    7    174073  24863.73  101841845.28
    8    192126  24012.55  98355418.43
    9    220293  24475.64  100252211.29
   10    241683  24168.03  98992238.19
elapsed:    11  ops:   262144  ops/sec: 23773.85  bytes/sec: 97377672.90


Snapshots:
[root@localhost rbd1]# rbd snap create img2 -p rbd --snap img2@123

