Ceph troubleshooting notes

Summary: troubleshooting a Ceph cluster (dc4f91c1-8792-4948-b68f-2fcea75f53b9) reporting HEALTH_WARN 15 requests are blocked > 32 sec; clock skew detected on mon.hh-yun-ceph-cinder025-128075.

Reference Ceph environment (ceph -s):

    cluster dc4f91c1-8792-4948-b68f-2fcea75f53b9
     health HEALTH_WARN 15 requests are blocked > 32 sec; clock skew detected on mon.hh-yun-ceph-cinder025-128075
     monmap e3: 5 mons at {hh-yun-ceph-cinder015-128055=240.30.128.55:6789/0,hh-yun-ceph-cinder017-128057=240.30.128.57:6789/0,hh-yun-ceph-cinder024-128074=240.30.128.74:6789/0,hh-yun-ceph-cinder025-128075=240.30.128.75:6789/0,hh-yun-ceph-cinder026-128076=240.30.128.76:6789/0}, election epoch 168, quorum 0,1,2,3,4 hh-yun-ceph-cinder015-128055,hh-yun-ceph-cinder017-128057,hh-yun-ceph-cinder024-128074,hh-yun-ceph-cinder025-128075,hh-yun-ceph-cinder026-128076
     osdmap e27430: 100 osds: 100 up, 100 in
      pgmap v11224834: 20544 pgs, 2 pools, 70255 GB data, 17678 kobjects
            205 TB used, 157 TB / 363 TB avail
               20540 active+clean
                   4 active+clean+scrubbing+deep
  client io 57251 kB/s rd, 44602 kB/s wr, 3797 op/s

Output of ceph health detail:

1.  mon.hh-yun-ceph-cinder025-128075 addr 240.30.128.75:6789/0 clock skew 0.919947s > max 0.05s (latency 0.000544046s)
2.  15 requests are blocked

There are two common problems here:
1. Clocks out of sync between nodes, causing the mon warning.
2. Hardware faults, network latency, or other causes making client requests to the Ceph cluster block (time out).

Solving the problem (time synchronization)
Current settings in the running system:

[root@hh-yun-ceph-cinder015-128055 ceph]# ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show  | grep clock
  "mon_clock_drift_allowed": "0.05",      <- a mon is flagged as skewed once its clock drifts more than 0.05 s
  "mon_clock_drift_warn_backoff": "5",    <- exponential backoff factor for repeated clock-drift warnings
  "clock_offset": "0",                    <- default clock offset applied to the mon's time

Problem localization
Checking the time on every node shows that the clock on hh-yun-ceph-cinder025-128075 has drifted; the sketch below is one way to compare the clocks.
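
One simple way to find the drifting node is to compare the clocks of all mon hosts side by side, for example over SSH (hostnames taken from the monmap above; assumes passwordless SSH from the admin node):

# Print each mon host's local timestamp; the skewed node stands out.
for h in hh-yun-ceph-cinder015-128055 hh-yun-ceph-cinder017-128057 \
         hh-yun-ceph-cinder024-128074 hh-yun-ceph-cinder025-128075 \
         hh-yun-ceph-cinder026-128076; do
    echo -n "$h: "; ssh "$h" date +%s.%N
done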
How to fix it:

systemctl stop chronyd
ntpdate 10.199.129.21
systemctl start chronyd
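
Before restarting the mon it is worth verifying that the node really is back in sync, for example (ntpdate -q only queries the server and does not step the clock; 10.199.129.21 is the NTP server used above):

# Query-only check of the remaining offset against the NTP server.
ntpdate -q 10.199.129.21
# Confirm chronyd is running again and tracking a source.
chronyc tracking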

Once the time has been synchronized and verified, the mon on that node needs to be restarted:

/etc/init.d/ceph stop mon
/etc/init.d/ceph start mon
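
After the restart, confirm that the mon has rejoined the quorum and that the clock-skew warning has cleared, for example:

# The restarted mon should be listed in quorum_names again.
ceph quorum_status --format json-pretty
# The clock skew entry should be gone from the warnings.
ceph health detail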

Since clients connect to the mons, make sure mon redundancy is preserved (the remaining mons keep quorum); under that condition a quick restart can be performed even during production hours.

Solving the problem (15 requests are blocked)

Reference output:

ceph health detail
HEALTH_WARN 14 requests are blocked > 32 sec; 11 osds have slow requests
7 ops are blocked > 536871 sec
2 ops are blocked > 268435 sec
2 ops are blocked > 67108.9 sec
3 ops are blocked > 33554.4 sec
1 ops are blocked > 536871 sec on osd.0
1 ops are blocked > 536871 sec on osd.10
2 ops are blocked > 536871 sec on osd.12
2 ops are blocked > 268435 sec on osd.18
1 ops are blocked > 536871 sec on osd.31
1 ops are blocked > 536871 sec on osd.38
1 ops are blocked > 67108.9 sec on osd.38
1 ops are blocked > 33554.4 sec on osd.48
1 ops are blocked > 67108.9 sec on osd.52
1 ops are blocked > 536871 sec on osd.63
1 ops are blocked > 33554.4 sec on osd.64
1 ops are blocked > 33554.4 sec on osd.69
11 osds have slow requests

The output above shows that requests are being blocked, and for a very long time; the sketch below shows one way to inspect them on a given OSD before taking action.
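
Before restarting anything, the blocked operations on one of the affected OSDs can be examined through its admin socket, and the OSD located on its host. A sketch for osd.0 (run the admin-socket commands on the node hosting osd.0; the socket path follows the same convention as the config query earlier):

# Which host does osd.0 live on?
ceph osd find 0
# Operations currently stuck in flight on osd.0.
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
# Recently completed slow operations, with their durations.
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops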
Solution

Restart the OSDs listed above one at a time. Important: after restarting one OSD, wait until data recovery has finished and the cluster is healthy again before restarting the next OSD.

/etc/init.d/ceph stop osd.0
/etc/init.d/ceph start osd.0

Only move on to the next OSD restart after data recovery has completed; a scripted version of this procedure is sketched below.
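
A rough sketch of that loop as a script, assuming the init-script style used above and that "ceph health" mentions states such as recovery/degraded/peering while data is still moving (adjust the OSD list and polling interval to the actual environment, and run the init-script commands on the host that owns each OSD or wrap them in ssh):

#!/bin/bash
# Restart the affected OSDs one by one, waiting for the cluster to settle
# before moving on to the next OSD.
for id in 0 10 12 18 31 38 48 52 63 64 69; do
    /etc/init.d/ceph stop  osd.$id
    /etc/init.d/ceph start osd.$id
    # Wait until ceph health no longer reports recovery-related states.
    while ceph health | grep -Eq 'recover|degraded|peering|down'; do
        sleep 30
    done
done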

After recovery the cluster is healthy again:

    cluster dc4f91c1-8792-4948-b68f-2fcea75f53b9
     health HEALTH_OK
     monmap e3: 5 mons at {hh-yun-ceph-cinder015-128055=240.30.128.55:6789/0,hh-yun-ceph-cinder017-128057=240.30.128.57:6789/0,hh-yun-ceph-cinder024-128074=240.30.128.74:6789/0,hh-yun-ceph-cinder025-128075=240.30.128.75:6789/0,hh-yun-ceph-cinder026-128076=240.30.128.76:6789/0}, election epoch 170, quorum 0,1,2,3,4 hh-yun-ceph-cinder015-128055,hh-yun-ceph-cinder017-128057,hh-yun-ceph-cinder024-128074,hh-yun-ceph-cinder025-128075,hh-yun-ceph-cinder026-128076
     osdmap e27495: 100 osds: 100 up, 100 in
      pgmap v11231620: 20544 pgs, 2 pools, 70294 GB data, 17688 kobjects
            206 TB used, 157 TB / 363 TB avail
               20539 active+clean
                   5 active+clean+scrubbing+deep
  client io 973 kB/s rd, 22936 kB/s wr, 1334 op/s