Purpose
Verify that VMs can attach to and use Ceph-backed volumes.
Create the VMs
Hosts 128030 and 129094 are freshly installed nova compute hosts provisioned via puppet.
The VM-to-Ceph test will be run on these two hosts.
nova boot --flavor b2c_web_1core --image Centos6.3_1.3 --security_group default --nic net-id=9106aee4-2dc0-4a6d-a789-10c53e2b88c1 ceph-test01.sh.vclound.com --availability-zone nova:sh-compute-128030.sh.vclound.com
+--------------------------------------+------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-0000020d |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | 5DCHoj8ihwN6 |
| config_drive | |
| created | 2015-06-25T03:49:14Z |
| flavor | b2c_web_1core (5) |
| hostId | |
| id | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| image | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name | - |
| metadata | {} |
| name | ceph-test01.sh.vclound.com |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | bb0b51d166254dc99bc7462c0ac002ff |
| updated | 2015-06-25T03:49:14Z |
| user_id | 226e71f1c1aa4bae85485d1d17b6f0ae |
+--------------------------------------+------------------------------------------------------+
nova boot --flavor b2c_web_1core --image Centos6.3_1.3 --security_group default --nic net-id=9106aee4-2dc0-4a6d-a789-10c53e2b88c1 ceph-test02.sh.vclound.com --availability-zone nova:sh-compute-129094.sh.vclound.com
+--------------------------------------+------------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-0000020f |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | wHddAW33sFBE |
| config_drive | |
| created | 2015-06-25T03:51:03Z |
| flavor | b2c_web_1core (5) |
| hostId | |
| id | b433b227-14ab-4157-8f08-362ad680e35e |
| image | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name | - |
| metadata | {} |
| name | ceph-test02.sh.vclound.com |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | bb0b51d166254dc99bc7462c0ac002ff |
| updated | 2015-06-25T03:51:03Z |
| user_id | 226e71f1c1aa4bae85485d1d17b6f0ae |
+--------------------------------------+------------------------------------------------------+
Instance status
[root@sh-controller-129022 ~(keystone_admin)]# nova list
+--------------------------------------+-----------------------------+--------+------------+-------------+---------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------------------------+--------+------------+-------------+---------------------------+
| 99d37977-a13a-4a8b-b8b1-e613a4959623 | ceph-test01.sh.vclound.com | ACTIVE | - | Running | SH_DEV_NET=10.198.192.254 |
| b433b227-14ab-4157-8f08-362ad680e35e | ceph-test02.sh.vclound.com | ACTIVE | - | Running | SH_DEV_NET=10.198.192.255 |
+--------------------------------------+-----------------------------+--------+------------+-------------+---------------------------+
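nova boot returns while the instance is still in BUILD; the ACTIVE/Running state above only appears once scheduling and spawning finish. A small polling helper can wait for that (a sketch; the retry count and interval are arbitrary choices, not from the original session):

```shell
# wait_for_status: poll a command until its output contains the wanted word.
# Usage: wait_for_status ACTIVE nova show 99d37977-a13a-4a8b-b8b1-e613a4959623
wait_for_status() {
    want="$1"; shift
    tries=0
    until "$@" | grep -q "$want"; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && return 1   # give up after ~60 s
        sleep 2
    done
}
```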
Create cloud volumes
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-06-25T06:11:58.840626 |
| display_description | None |
| display_name | None |
| encrypted | False |
| id | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
| metadata | {} |
| size | 50 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-06-25T06:12:07.151001 |
| display_description | None |
| display_name | None |
| encrypted | False |
| id | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
| metadata | {} |
| size | 50 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# cinder create 50
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2015-06-25T06:12:14.321030 |
| display_description | None |
| display_name | None |
| encrypted | False |
| id | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
| metadata | {} |
| size | 50 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
List the volumes
[root@sh-controller-129022 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 8516fb02-b578-4e57-9678-d30d2b0a6734 | available | None | 50 | None | false | |
| 9d8aa395-5e6a-411a-9f19-6375f29e9f9f | available | None | 50 | None | false | |
| a5751c38-01c0-4f25-a02c-7d2a05d6ea36 | available | None | 50 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
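All three volumes must leave the creating state before they can be attached. A throwaway parser for the table above (an illustrative helper, not part of the original session; it relies on Status being the third |-separated field):

```shell
# Count the rows of a `cinder list` table that are in the 'available' state.
count_available() {
    awk -F'|' '$3 ~ /available/ { n++ } END { print n + 0 }'
}
# Usage: cinder list | count_available    (expect 3 here)
```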
Attach the volumes
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach 99d37977-a13a-4a8b-b8b1-e613a4959623 8516fb02-b578-4e57-9678-d30d2b0a6734
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
| serverId | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| volumeId | 8516fb02-b578-4e57-9678-d30d2b0a6734 |
+----------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach 99d37977-a13a-4a8b-b8b1-e613a4959623 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdd |
| id | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
| serverId | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| volumeId | 9d8aa395-5e6a-411a-9f19-6375f29e9f9f |
+----------+--------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-attach b433b227-14ab-4157-8f08-362ad680e35e a5751c38-01c0-4f25-a02c-7d2a05d6ea36
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
| serverId | b433b227-14ab-4157-8f08-362ad680e35e |
| volumeId | a5751c38-01c0-4f25-a02c-7d2a05d6ea36 |
+----------+--------------------------------------+
Verify the attached volumes
[root@sh-controller-129022 ~(keystone_admin)]# nova show 99d37977-a13a-4a8b-b8b1-e613a4959623
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | sh-compute-128030.sh.vclound.com |
| OS-EXT-SRV-ATTR:hypervisor_hostname | sh-compute-128030.sh.vclound.com |
| OS-EXT-SRV-ATTR:instance_name | instance-0000020d |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2015-06-25T03:49:26.000000 |
| OS-SRV-USG:terminated_at | - |
| SH_DEV_NET network | 10.198.192.254 |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2015-06-25T03:49:14Z |
| flavor | b2c_web_1core (5) |
| hostId | 8b5b75df8b0271d739323f1373b7363d432bb9c68b079ab3e94e1c1a |
| id | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| image | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name | - |
| metadata | {} |
| name | ceph-test01.sh.vclound.com |
| os-extended-volumes:volumes_attached | [{"id": "8516fb02-b578-4e57-9678-d30d2b0a6734"}, {"id": "9d8aa395-5e6a-411a-9f19-6375f29e9f9f"}] | <- two volumes attached
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | bb0b51d166254dc99bc7462c0ac002ff |
| updated | 2015-06-25T03:49:26Z |
| user_id | 226e71f1c1aa4bae85485d1d17b6f0ae |
+--------------------------------------+--------------------------------------------------------------------------------------------------+
[root@sh-controller-129022 ~(keystone_admin)]# nova show b433b227-14ab-4157-8f08-362ad680e35e
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | sh-compute-129094.sh.vclound.com |
| OS-EXT-SRV-ATTR:hypervisor_hostname | sh-compute-129094.sh.vclound.com |
| OS-EXT-SRV-ATTR:instance_name | instance-0000020f |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2015-06-25T03:52:05.000000 |
| OS-SRV-USG:terminated_at | - |
| SH_DEV_NET network | 10.198.192.255 |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2015-06-25T03:51:03Z |
| flavor | b2c_web_1core (5) |
| hostId | a5239c63509fa00ab056ca701363538ecc0afe41d8f886f82b345b4d |
| id | b433b227-14ab-4157-8f08-362ad680e35e |
| image | Centos6.3_1.3 (7ec6eb66-b8a2-41e9-bbb5-b1e7ce1efed4) |
| key_name | - |
| metadata | {} |
| name | ceph-test02.sh.vclound.com |
| os-extended-volumes:volumes_attached | [{"id": "a5751c38-01c0-4f25-a02c-7d2a05d6ea36"}] | <- one volume attached
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | bb0b51d166254dc99bc7462c0ac002ff |
| updated | 2015-06-25T03:52:05Z |
| user_id | 226e71f1c1aa4bae85485d1d17b6f0ae |
+--------------------------------------+----------------------------------------------------------+
Check cinder status
[root@sh-controller-129022 ~(keystone_admin)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 8516fb02-b578-4e57-9678-d30d2b0a6734 | in-use | None | 50 | None | false | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| 9d8aa395-5e6a-411a-9f19-6375f29e9f9f | in-use | None | 50 | None | false | 99d37977-a13a-4a8b-b8b1-e613a4959623 |
| a5751c38-01c0-4f25-a02c-7d2a05d6ea36 | in-use | None | 50 | None | false | b433b227-14ab-4157-8f08-362ad680e35e |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
Testing
Test read/write on the cloud volumes
[root@ceph-test01 ~]# pvcreate /dev/vdc /dev/vdd
Physical volume "/dev/vdc" successfully created
Physical volume "/dev/vdd" successfully created
[root@ceph-test01 ~]# vgcreate myvg /dev/vdc /dev/vdd
Volume group "myvg" successfully created
[root@ceph-test01 ~]# lvcreate -i 2 -n mylv -l 100%FREE myvg
Using default stripesize 64.00 KiB
Logical volume "mylv" created
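With -i 2 the LV is striped across both PVs in the default 64 KiB chunks, so sequential I/O alternates between /dev/vdc and /dev/vdd, i.e. between two separate RBD images. The split is easy to sanity-check (plain arithmetic, not output from the session):

```shell
# With 2 stripes of 64 KiB each, a 1 MiB sequential write is split evenly:
stripe_kib=64
stripes=2
write_kib=1024
chunks=$((write_kib / stripe_kib))            # 16 chunks of 64 KiB
per_disk=$((chunks / stripes * stripe_kib))   # KiB landing on each PV
echo "${per_disk} KiB per disk"               # 512 KiB on vdc, 512 KiB on vdd
```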
[root@ceph-test01 ~]# yum install -y xfsprogs.x86_64 > /dev/null 2>&1
[root@ceph-test01 ~]# mkfs.xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv isize=256 agcount=16, agsize=1638256 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=26212096, imaxpct=25
= sunit=16 swidth=32 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=12800, version=2
= sectsz=512 sunit=16 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-test01 ~]# mount /dev/myvg/mylv /mnt
[root@ceph-test01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1 20G 1.1G 18G 6% /
tmpfs 939M 0 939M 0% /dev/shm
/dev/mapper/myvg-mylv
100G 33M 100G 1% /mnt
Check current Ceph usage
[root@sh-ceph-128213 ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
290T 290T 4615M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 99157G 0
volumes 1 37220k 0 99157G 28 <- note: before the test, the volumes pool holds only 28 objects
Run the write test on the VM
[root@ceph-test01 ~]# dd if=/dev/zero of=/mnt/1.img bs=1M count=700000
dd: writing `/mnt/1.img': No space left on device
102309+0 records in
102308+0 records out
107278180352 bytes (107 GB) copied, 982.231 s, 109 MB/s
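The ENOSPC is expected: dd asked for 700000 MiB but the striped LV only spans the two 50 GiB volumes. The gap between the raw LV size and the 102308 MiB dd managed to write is XFS overhead (the log alone is 12800 blocks of 4 KiB, per the mkfs output above):

```shell
# The striped LV holds 2 x 50 GiB, far less than the 700000 MiB dd requested.
lv_mib=$((2 * 50 * 1024))   # 102400 MiB of raw LV capacity
echo "$lv_mib"              # dd stopped at 102308 MiB; the ~92 MiB gap is
                            # filesystem metadata and log
```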
Monitor Ceph storage usage
[root@sh-ceph-128212 var]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
290T 290T 337G 0.11
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 99033G 0
volumes 1 102399M 0.03 99033G 25622 <- dd created 1 file on the client, but Ceph now holds 25622 - 28 = 25594 new objects, about 25k small objects
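The object count follows from RBD's striping: a volume is carved into fixed-size objects, 4 MiB by default, so the ~100 GiB written by dd maps to roughly 25600 objects. A back-of-the-envelope check (the 4 MiB default is an assumption about this cluster's configuration):

```shell
# Estimate the RADOS objects backing the dd data, assuming 4 MiB RBD objects.
written_mib=102308                  # from the dd output above
obj_mib=4
estimate=$((written_mib / obj_mib))
echo "$estimate"                    # 25577; observed 25622 - 28 = 25594 new
                                    # objects, the extra few being LVM/XFS metadata
```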
Monitor Ceph physical disk usage
[root@sh-ceph-128212 ceph-0]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 1.8G 49G 4% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 18M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda2 494M 123M 372M 25% /boot
/dev/mapper/centos-home 3.6T 33M 3.6T 1% /home
/dev/sdb1 3.7T 3.8G 3.7T 1% /var/lib/ceph/osd/ceph-0
/dev/sdc1 3.7T 4.2G 3.7T 1% /var/lib/ceph/osd/ceph-1
/dev/sdd1 3.7T 4.3G 3.7T 1% /var/lib/ceph/osd/ceph-2
/dev/sdf1 3.7T 4.2G 3.7T 1% /var/lib/ceph/osd/ceph-3
/dev/sdg1 3.7T 4.1G 3.7T 1% /var/lib/ceph/osd/ceph-4
/dev/sdh1 3.7T 4.2G 3.7T 1% /var/lib/ceph/osd/ceph-5
/dev/sde1 3.7T 4.2G 3.7T 1% /var/lib/ceph/osd/ceph-6
/dev/sdi1 3.7T 3.7G 3.7T 1% /var/lib/ceph/osd/ceph-7
/dev/sdj1 3.7T 3.9G 3.7T 1% /var/lib/ceph/osd/ceph-8
/dev/sdk1 3.7T 4.6G 3.7T 1% /var/lib/ceph/osd/ceph-9 <- note the used space on each OSD
This shows the file written by dd was broken up and spread across the different disks, and not necessarily evenly; no file with the original name can be found anywhere under /var/lib/ceph/osd/ceph*, only a pile of scattered small chunks.
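Those chunks are RADOS objects, not fragments of a regular file. For format-2 RBD images the objects are named rbd_data.&lt;image-prefix&gt;.&lt;index&gt; (an assumption about this cluster; older format-1 images use rb.0.* names instead), so they can be grouped per image. A sketch for counting them from rados output:

```shell
# Count RADOS objects per RBD image prefix in `rados -p volumes ls` output.
# Assumes format-2 object names: rbd_data.<prefix>.<index>
count_rbd_objects() {
    awk -F. '/^rbd_data\./ { n[$2]++ } END { for (p in n) print p, n[p] }'
}
# Usage: rados -p volumes ls | count_rbd_objects
```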
Unmount the cloud volume
[root@ceph-test01 ~]# rm -rf /mnt/1.img
[root@ceph-test01 ~]# umount /mnt
Check Ceph usage
[root@sh-controller-128022 cinder]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
290T 290T 304G 0.10
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 99044G 0
volumes 1 102399M 0.03 99044G 25622 <- deleting 1.img does not directly reduce Ceph usage: the guest filesystem frees the blocks without notifying Ceph
Detach the volumes in OpenStack
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-detach 99d37977-a13a-4a8b-b8b1-e613a4959623 8516fb02-b578-4e57-9678-d30d2b0a6734
[root@sh-controller-129022 ~(keystone_admin)]# nova volume-detach 99d37977-a13a-4a8b-b8b1-e613a4959623 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
Delete the OpenStack volumes
[root@sh-controller-129022 ~(keystone_admin)]# cinder delete 8516fb02-b578-4e57-9678-d30d2b0a6734
[root@sh-controller-129022 ~(keystone_admin)]# cinder delete 9d8aa395-5e6a-411a-9f19-6375f29e9f9f
Check Ceph storage usage
[root@sh-controller-128022 cinder]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
290T 290T 15961M 0
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
rbd 0 0 0 99131G 0
volumes 1 37220k 0 99131G 24 <- the objects produced earlier by 1.img have been deleted (the count even drops below the original 28, since two of the volumes were deleted too)
[root@sh-ceph-128212 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/centos-root 50G 1.8G 49G 4% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 18M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda2 494M 123M 372M 25% /boot
/dev/mapper/centos-home 3.6T 33M 3.6T 1% /home
/dev/sdb1 3.7T 58M 3.7T 1% /var/lib/ceph/osd/ceph-0 <- note: the disks are back to their earlier usage
/dev/sdc1 3.7T 62M 3.7T 1% /var/lib/ceph/osd/ceph-1
/dev/sdd1 3.7T 55M 3.7T 1% /var/lib/ceph/osd/ceph-2
/dev/sdf1 3.7T 61M 3.7T 1% /var/lib/ceph/osd/ceph-3
/dev/sdg1 3.7T 63M 3.7T 1% /var/lib/ceph/osd/ceph-4
/dev/sdh1 3.7T 56M 3.7T 1% /var/lib/ceph/osd/ceph-5
/dev/sde1 3.7T 56M 3.7T 1% /var/lib/ceph/osd/ceph-6
/dev/sdi1 3.7T 63M 3.7T 1% /var/lib/ceph/osd/ceph-7
/dev/sdj1 3.7T 59M 3.7T 1% /var/lib/ceph/osd/ceph-8
/dev/sdk1 3.7T 60M 3.7T 1% /var/lib/ceph/osd/ceph-9
Conclusion: deleting a cloud volume in OpenStack automatically releases the corresponding space in Ceph.