Hands-On State Management for a Ceph Reef (18.2.X) Cluster

Overview: This article is a hands-on guide to state management for a Ceph Reef (18.2.X) cluster. It covers how to check the cluster status, the OSD status, the MON monitor map, and the mapping between PGs and OSDs, as well as how to manage the cluster through its admin sockets and modify the cluster configuration.

                                             Author: 尹正杰 (Yin Zhengjie)
Copyright notice: This is an original work. Unauthorized reproduction is prohibited and will be pursued legally.

I. Checking the Ceph cluster status

1. Check the Ceph cluster information

[root@ceph141 ~]# ceph -s
  cluster:  # cluster ID and overall health status
    id:     c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4
    health: HEALTH_OK

  services:  # status of each component service (mon/mgr/mds/osd/rgw)
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 12h)
    mgr: ceph141.fuztcs(active, since 12h), standbys: ceph142.vdsfzv
    mds: 1/1 daemons up, 1 standby
    osd: 6 osds: 6 up (since 12h), 6 in (since 9d)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:  # data-related statistics, including pools, objects, and usage figures
    volumes: 1/1 healthy
    pools:   11 pools, 498 pgs
    objects: 325 objects, 32 MiB
    usage:   435 MiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     498 active+clean

[root@ceph141 ~]#
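
If the health field ever shows HEALTH_WARN or HEALTH_ERR instead of HEALTH_OK, the commands below list the specific reasons. This is a minimal sketch (output omitted); the messages you see will depend on your cluster:
[root@ceph141 ~]# ceph health           # one-line health summary
[root@ceph141 ~]# ceph health detail    # every active warning/error with an explanation
[root@ceph141 ~]# ceph -w               # stream cluster log events in real time (Ctrl+C to stop)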

2. Check pool status

[root@ceph141 ~]# ceph osd pool stats
pool .mgr id 1
  nothing is going on

pool yinzhengjie-rbd id 2
  nothing is going on

pool yinzhengjie id 3
  nothing is going on

pool .rgw.root id 4
  nothing is going on

pool default.rgw.log id 5
  nothing is going on

pool default.rgw.control id 6
  nothing is going on

pool default.rgw.meta id 7
  nothing is going on

pool default.rgw.buckets.index id 8
  nothing is going on

pool default.rgw.buckets.data id 9
  nothing is going on

pool cephfs_data id 10
  nothing is going on

pool cephfs_metadata id 11
  nothing is going on

[root@ceph141 ~]#
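
To limit the output to a single pool, the pool name can be passed as an argument; for example, using the yinzhengjie-rbd pool from this cluster (output omitted):
[root@ceph141 ~]# ceph osd pool stats yinzhengjie-rbd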

3. Check Ceph storage capacity

[root@ceph141 ~]# ceph df 
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    1.5 TiB  1.5 TiB  435 MiB   435 MiB       0.03
TOTAL  1.5 TiB  1.5 TiB  435 MiB   435 MiB       0.03

--- POOLS ---
POOL                       ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr                        1    1  705 KiB        2  2.1 MiB      0    475 GiB
yinzhengjie-rbd             2   16  8.1 MiB       17   24 MiB      0    475 GiB
yinzhengjie                 3   32   19 MiB       38   56 MiB      0    475 GiB
.rgw.root                   4   32  1.4 KiB        4   48 KiB      0    475 GiB
default.rgw.log             5   32  3.6 KiB      209  408 KiB      0    475 GiB
default.rgw.control         6   32      0 B        8      0 B      0    475 GiB
default.rgw.meta            7   32  1.5 KiB        9   84 KiB      0    475 GiB
default.rgw.buckets.index   8   32   17 KiB       11   50 KiB      0    475 GiB
default.rgw.buckets.data    9  256    983 B        3   36 KiB      0    475 GiB
cephfs_data                10   32    386 B        1   12 KiB      0    475 GiB
cephfs_metadata            11    1   15 KiB       23  144 KiB      0    475 GiB
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph df detail
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    1.5 TiB  1.5 TiB  435 MiB   435 MiB       0.03
TOTAL  1.5 TiB  1.5 TiB  435 MiB   435 MiB       0.03

--- POOLS ---
POOL                       ID  PGS   STORED   (DATA)   (OMAP)  OBJECTS     USED   (DATA)   (OMAP)  %USED  MAX AVAIL  QUOTA OBJECTS  QUOTA BYTES  DIRTY  USED COMPR  UNDER COMPR
.mgr                        1    1  705 KiB  705 KiB      0 B        2  2.1 MiB  2.1 MiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
yinzhengjie-rbd             2   16  8.1 MiB  8.1 MiB  2.8 KiB       17   24 MiB   24 MiB  8.3 KiB      0    475 GiB            N/A          N/A    N/A         0 B          0 B
yinzhengjie                 3   32   19 MiB   19 MiB  7.2 KiB       38   56 MiB   56 MiB   21 KiB      0    475 GiB            N/A          N/A    N/A         0 B          0 B
.rgw.root                   4   32  1.4 KiB  1.4 KiB      0 B        4   48 KiB   48 KiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
default.rgw.log             5   32  3.6 KiB  3.6 KiB      0 B      209  408 KiB  408 KiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
default.rgw.control         6   32      0 B      0 B      0 B        8      0 B      0 B      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
default.rgw.meta            7   32  1.5 KiB  1.5 KiB      0 B        9   84 KiB   84 KiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
default.rgw.buckets.index   8   32   17 KiB      0 B   17 KiB       11   50 KiB      0 B   50 KiB      0    475 GiB            N/A          N/A    N/A         0 B          0 B
default.rgw.buckets.data    9  256    983 B    983 B      0 B        3   36 KiB   36 KiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
cephfs_data                10   32    386 B    386 B      0 B        1   12 KiB   12 KiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
cephfs_metadata            11    1   15 KiB   15 KiB      0 B       23  144 KiB  144 KiB      0 B      0    475 GiB            N/A          N/A    N/A         0 B          0 B
[root@ceph141 ~]#
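
The ceph df output is aggregated per device class and per pool. To see how the raw usage is spread across individual OSDs (handy for spotting imbalance), the following complementary commands can be used; this is an extra check, not part of the original walkthrough:
[root@ceph141 ~]# ceph osd df          # per-OSD size, use, %use and PG count
[root@ceph141 ~]# ceph osd df tree     # the same data arranged along the CRUSH tree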

II. Commands for checking OSD status

1. Check basic OSD status

[root@ceph141 ~]# ceph osd stat
6 osds: 6 up (since 12h), 6 in (since 9d); epoch: e295
[root@ceph141 ~]#

2. Check the OSD attribute details

[root@ceph141 ~]# ceph osd dump 
epoch 295
fsid c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4
created 2024-08-21T12:56:27.504471+0000
modified 2024-08-31T09:41:42.368550+0000
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 28
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client luminous
min_compat_client luminous
require_osd_release reef
stretch_mode_enabled false
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 6.00
pool 2 'yinzhengjie-rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 63 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 1.88
pool 3 'yinzhengjie' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 89 lfor 0/0/69 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd read_balance_score 1.13
pool 4 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 118 lfor 0/0/114 flags hashpspool stripe_width 0 application rgw read_balance_score 1.50
pool 5 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 205 lfor 0/0/114 flags hashpspool stripe_width 0 application rgw read_balance_score 1.50
pool 6 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 118 lfor 0/0/116 flags hashpspool stripe_width 0 application rgw read_balance_score 2.06
pool 7 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 118 lfor 0/0/116 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 1.69
pool 8 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 225 lfor 0/0/223 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 1.50
pool 9 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode on last_change 229 lfor 0/0/227 flags hashpspool,bulk stripe_width 0 application rgw read_balance_score 1.24
pool 10 'cephfs_data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 291 lfor 0/0/287 flags hashpspool stripe_width 0 application cephfs read_balance_score 1.69
pool 11 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 291 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 6.00
max_osd 6
osd.0 up   in  weight 1 up_from 269 up_thru 289 down_at 268 last_clean_interval [261,264) [v2:10.0.0.141:6800/1990108235,v1:10.0.0.141:6801/1990108235] [v2:10.0.0.141:6802/1990108235,v1:10.0.0.141:6803/1990108235] exists,up d14909c7-044b-4a9f-bbb5-adb8faa6313a
osd.1 up   in  weight 1 up_from 271 up_thru 289 down_at 270 last_clean_interval [261,264) [v2:10.0.0.141:6808/2506390922,v1:10.0.0.141:6809/2506390922] [v2:10.0.0.141:6810/2506390922,v1:10.0.0.141:6811/2506390922] exists,up 9eb9c2b3-74c4-4498-b0f2-c7982806160a
osd.2 up   in  weight 1 up_from 272 up_thru 289 down_at 271 last_clean_interval [261,264) [v2:10.0.0.142:6808/1342057546,v1:10.0.0.142:6809/1342057546] [v2:10.0.0.142:6810/1342057546,v1:10.0.0.142:6811/1342057546] exists,up deac792b-ffaa-4c67-86ab-90945435d75d
osd.3 up   in  weight 1 up_from 271 up_thru 289 down_at 270 last_clean_interval [259,264) [v2:10.0.0.143:6808/3481127188,v1:10.0.0.143:6809/3481127188] [v2:10.0.0.143:6810/3481127188,v1:10.0.0.143:6811/3481127188] exists,up 0065bf70-6947-4b17-86ed-c1d902120512
osd.4 up   in  weight 1 up_from 272 up_thru 289 down_at 271 last_clean_interval [262,264) [v2:10.0.0.142:6800/1666605842,v1:10.0.0.142:6801/1666605842] [v2:10.0.0.142:6802/1666605842,v1:10.0.0.142:6803/1666605842] exists,up 6c26fe57-304a-4c4a-bf7b-737926fd4bb6
osd.5 up   in  weight 1 up_from 271 up_thru 289 down_at 270 last_clean_interval [257,264) [v2:10.0.0.143:6800/3676157786,v1:10.0.0.143:6801/3676157786] [v2:10.0.0.143:6802/3676157786,v1:10.0.0.143:6803/3676157786] exists,up 60ac4910-12bc-4f21-9d89-2cfd48ed0cb4
pg_upmap_items 5.b [0,1]
pg_upmap_items 9.c3 [0,1]
pg_upmap_items 9.d5 [0,1]
blocklist 10.0.0.143:6817/4143444342 expires 2024-09-01T08:54:17.751135+0000
blocklist 10.0.0.141:0/3629763328 expires 2024-08-31T14:15:09.741920+0000
blocklist 10.0.0.141:0/441401873 expires 2024-08-31T14:15:09.741920+0000
blocklist 10.0.0.141:6817/3968580519 expires 2024-08-31T14:15:09.741920+0000
blocklist 10.0.0.141:6816/3968580519 expires 2024-08-31T14:15:09.741920+0000
blocklist 10.0.0.141:0/1582300072 expires 2024-08-31T14:15:09.741920+0000
blocklist 10.0.0.141:0/1266853066 expires 2024-08-31T14:15:09.741920+0000
blocklist 10.0.0.141:0/4161651982 expires 2024-08-31T23:42:31.028220+0000
blocklist 10.0.0.141:0/1741713508 expires 2024-08-31T23:42:31.028220+0000
blocklist 10.0.0.141:6816/2507019140 expires 2024-08-31T23:42:31.028220+0000
blocklist 10.0.0.141:0/1583018183 expires 2024-08-31T23:42:31.028220+0000
blocklist 10.0.0.141:6817/2507019140 expires 2024-08-31T23:42:31.028220+0000
blocklist 10.0.0.141:0/3486898620 expires 2024-08-31T23:42:31.028220+0000
blocklist 10.0.0.143:6816/4143444342 expires 2024-09-01T08:54:17.751135+0000
[root@ceph141 ~]#

3. Check the OSD placement hierarchy (CRUSH tree)

[root@ceph141 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         1.46489  root default                               
-3         0.48830      host ceph141                           
 0    hdd  0.19530          osd.0         up   1.00000  1.00000
 1    hdd  0.29300          osd.1         up   1.00000  1.00000
-5         0.48830      host ceph142                           
 2    hdd  0.19530          osd.2         up   1.00000  1.00000
 4    hdd  0.29300          osd.4         up   1.00000  1.00000
-7         0.48830      host ceph143                           
 3    hdd  0.29300          osd.3         up   1.00000  1.00000
 5    hdd  0.19530          osd.5         up   1.00000  1.00000
[root@ceph141 ~]#
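
To locate a single OSD within the hierarchy or inspect its host and device details, the following commands are useful; osd.3 is used here purely as an example:
[root@ceph141 ~]# ceph osd find 3       # reports the host and CRUSH location of osd.3
[root@ceph141 ~]# ceph osd metadata 3   # detailed metadata: hostname, devices, BlueStore info, etc.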

III. Checking the MON monitor map

1. Check the mon component's summary information

[root@ceph141 ~]# ceph mon stat
e3: 3 mons at {ceph141=[v2:10.0.0.141:3300/0,v1:10.0.0.141:6789/0],ceph142=[v2:10.0.0.142:3300/0,v1:10.0.0.142:6789/0],ceph143=[v2:10.0.0.143:3300/0,v1:10.0.0.143:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 58, leader 0 ceph141, quorum 0,1,2 ceph141,ceph142,ceph143
[root@ceph141 ~]#

2. Check the mon component's detailed information

[root@ceph141 ~]# ceph mon dump 
epoch 3
fsid c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4
last_changed 2024-08-21T13:11:17.811485+0000
created 2024-08-21T12:56:24.217633+0000
min_mon_release 18 (reef)
election_strategy: 1
0: [v2:10.0.0.141:3300/0,v1:10.0.0.141:6789/0] mon.ceph141
1: [v2:10.0.0.142:3300/0,v1:10.0.0.142:6789/0] mon.ceph142
2: [v2:10.0.0.143:3300/0,v1:10.0.0.143:6789/0] mon.ceph143
dumped monmap epoch 3
[root@ceph141 ~]# 

Tip:
    Port 3300 is the ceph-mon messenger v2 port (used for monitor-to-monitor traffic such as leader elections and for v2-capable clients), while 6789 is the legacy messenger v1 port that serves older clients.
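
A quick way to confirm that both ports are listening on a monitor node (assuming the iproute2 ss utility is available, which it is on most modern distributions):
[root@ceph141 ~]# ss -tnlp | grep -E ':3300|:6789'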

3. Check the election state across the mon nodes

[root@ceph141 ~]# ceph quorum_status  # flat, single-line output
{"election_epoch":58,"quorum":[0,1,2],"quorum_names":["ceph141","ceph142","ceph143"],"quorum_leader_name":"ceph141","quorum_age":47184,"features":{"quorum_con":"4540138322906710015","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"]},"monmap":{"epoch":3,"fsid":"c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4","modified":"2024-08-21T13:11:17.811485Z","created":"2024-08-21T12:56:24.217633Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"ceph141","public_addrs":{"addrvec":[{"type":"v2","addr":"10.0.0.141:3300","nonce":0},{"type":"v1","addr":"10.0.0.141:6789","nonce":0}]},"addr":"10.0.0.141:6789/0","public_addr":"10.0.0.141:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"ceph142","public_addrs":{"addrvec":[{"type":"v2","addr":"10.0.0.142:3300","nonce":0},{"type":"v1","addr":"10.0.0.142:6789","nonce":0}]},"addr":"10.0.0.142:6789/0","public_addr":"10.0.0.142:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"ceph143","public_addrs":{"addrvec":[{"type":"v2","addr":"10.0.0.143:3300","nonce":0},{"type":"v1","addr":"10.0.0.143:6789","nonce":0}]},"addr":"10.0.0.143:6789/0","public_addr":"10.0.0.143:6789/0","priority":0,"weight":0,"crush_location":"{}"}]}}
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph quorum_status  -f json-pretty  # pretty-printed output

{
    "election_epoch": 58,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [  # 此处是参与选举的成员
        "ceph141",
        "ceph142",
        "ceph143"
    ],
    "quorum_leader_name": "ceph141",  # 当前集群ceph141为leader。
    "quorum_age": 47199,  # 选举的周期性,过了指定时间后会触发重新选举。
    "features": {
        "quorum_con": "4540138322906710015",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus",
            "octopus",
            "pacific",
            "elector-pinging",
            "quincy",
            "reef"
        ]
    },
    "monmap": {
        "epoch": 3,
        "fsid": "c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4",
        "modified": "2024-08-21T13:11:17.811485Z",
        "created": "2024-08-21T12:56:24.217633Z",
        "min_mon_release": 18,
        "min_mon_release_name": "reef",
        "election_strategy": 1,
        "disallowed_leaders: ": "",
        "stretch_mode": false,
        "tiebreaker_mon": "",
        "removed_ranks: ": "",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus",
                "octopus",
                "pacific",
                "elector-pinging",
                "quincy",
                "reef"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "ceph141",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.0.0.141:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.0.0.141:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.0.0.141:6789/0",
                "public_addr": "10.0.0.141:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 1,
                "name": "ceph142",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.0.0.142:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.0.0.142:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.0.0.142:6789/0",
                "public_addr": "10.0.0.142:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            },
            {
                "rank": 2,
                "name": "ceph143",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.0.0.143:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.0.0.143:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.0.0.143:6789/0",
                "public_addr": "10.0.0.143:6789/0",
                "priority": 0,
                "weight": 0,
                "crush_location": "{}"
            }
        ]
    }
}
[root@ceph141 ~]#
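
If only a single field from this JSON is needed, it can be extracted on the command line. This assumes the jq utility is installed, which is not part of a default Ceph deployment:
[root@ceph141 ~]# ceph quorum_status -f json | jq -r '.quorum_leader_name'   # prints the current leader, ceph141 in this cluster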

IV. The mapping between PGs and OSDs

1. Check PG status

[root@ceph141 ~]# ceph pg stat
498 pgs: 498 active+clean; 32 MiB data, 435 MiB used, 1.5 TiB / 1.5 TiB avail
[root@ceph141 ~]#
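
For a compact per-PG listing that shows every PG's state together with its up/acting OSD sets, the following can be used (output omitted here for brevity):
[root@ceph141 ~]# ceph pg dump pgs_brief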

2. Verify the mapping between PGs and OSDs

2.1 Create a pool

[root@ceph141 ~]# ceph osd pool create jasonyin 2 2
pool 'jasonyin' created
[root@ceph141 ~]#

2.2 Look up the pool's ID

[root@ceph141 ~]# ceph osd pool ls detail 
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 6.00
pool 4 'jasonyin' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 2 pgp_num 2 autoscale_mode on last_change 91 flags hashpspool stripe_width 0 read_balance_score 3.01

[root@ceph141 ~]# 

Tip:
    As shown above, the "jasonyin" pool has been assigned pool ID "4".

2.3 Check the mapping between the pool and its PGs

    1. Check PG information by pool ID
[root@ceph141 ~]# ceph pg ls 4  # directly list the PGs of the pool whose ID is 4
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  LOG_DUPS  STATE         SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                          
4.0        0         0          0        0      0            0           0    0         0  active+clean    21s      0'0     91:11  [1,4,3]p1  [1,4,3]p1  2024-08-26T15:53:14.519657+0000  2024-08-26T15:53:14.519657+0000                    0  periodic scrub scheduled @ 2024-08-28T02:37:30.352752+0000
4.1        0         0          0        0      0            0           0    0         0  active+clean    21s      0'0     91:11  [3,2,0]p3  [3,2,0]p3  2024-08-26T15:53:14.519657+0000  2024-08-26T15:53:14.519657+0000                    0  periodic scrub scheduled @ 2024-08-27T18:50:09.853957+0000

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph pg dump  | grep "^4\."  # list the PGs of pool 4 by dumping all PGs first and then filtering
dumped all
4.1            0                   0         0          0        0       0            0           0    0         0         0  active+clean  2024-08-26T16:04:30.988416+0000      0'0   255:276  [3,2,0]           3  [3,2,0]               3         0'0  2024-08-26T15:54:04.549067+0000              0'0  2024-08-26T15:53:14.519657+0000              0                    1  periodic scrub scheduled @ 2024-08-27T16:04:47.086740+0000                 0                0
4.0            0                   0         0          0        0       0            0           0    0         0         0  active+clean  2024-08-26T16:04:42.197497+0000      0'0   255:279  [1,4,3]           1  [1,4,3]               1         0'0  2024-08-26T15:54:03.280077+0000              0'0  2024-08-26T15:53:14.519657+0000              0                    1  periodic scrub scheduled @ 2024-08-27T19:17:36.857529+0000                 0                0
[root@ceph141 ~]# 


    2. Check PG information by pool name
[root@ceph141 ~]# ceph pg ls-by-pool jasonyin
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  LOG_DUPS  STATE         SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                          
4.0        0         0          0        0      0            0           0    0         0  active+clean    52s      0'0   255:274  [1,4,3]p1  [1,4,3]p1  2024-08-26T15:54:03.280077+0000  2024-08-26T15:53:14.519657+0000                    1  periodic scrub scheduled @ 2024-08-27T19:17:36.857529+0000
4.1        0         0          0        0      0            0           0    0         0  active+clean    63s      0'0   254:271  [3,2,0]p3  [3,2,0]p3  2024-08-26T15:54:04.549067+0000  2024-08-26T15:53:14.519657+0000                    1  periodic scrub scheduled @ 2024-08-27T16:04:47.086740+0000

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
[root@ceph141 ~]# 


Tip:
    As shown above, the pool with ID 4 has 2 PGs, numbered 4.0 and 4.1.
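
To check which PG and which OSDs a particular object name would map to (the object does not need to exist yet, since the mapping is computed by CRUSH), ceph osd map can be used. The object name test-obj below is just a hypothetical example:
[root@ceph141 ~]# ceph osd map jasonyin test-obj   # shows the PG id plus the up/acting OSD set for that object
[root@ceph141 ~]# ceph pg map 4.0                  # shows the OSD mapping of a specific PG id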

2.4 Disable PG autoscaling

[root@ceph141 ~]# ceph osd pool ls detail
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 6.00
pool 4 'jasonyin' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 95 lfor 0/0/93 flags hashpspool stripe_width 0 read_balance_score 1.50

[root@ceph141 ~]# 


As shown above, without any action on our part the pool's pg_num has been scaled up to 32 automatically. This is because the Ceph cluster enables the PG autoscaler by default.
[root@ceph141 ~]# ceph osd pool get jasonyin pg_autoscale_mode
pg_autoscale_mode: on
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set jasonyin pg_autoscale_mode off
set pool 4 pg_autoscale_mode to off
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool get jasonyin pg_autoscale_mode
pg_autoscale_mode: off
[root@ceph141 ~]# 


Now we can safely set the PG count back to 2 and watch how pg_num changes.
[root@ceph141 ~]# ceph osd pool get jasonyin pg_num 
pg_num: 32
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool set jasonyin pg_num 2
set pool 4 pg_num to 2
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool get jasonyin pg_num  # note: pg_num does not drop to 2 immediately; it is reduced gradually until it reaches 2
pg_num: 26
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool get jasonyin pg_num 
pg_num: 16
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool get jasonyin pg_num 
pg_num: 13
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph osd pool get jasonyin pg_num 
pg_num: 2
[root@ceph141 ~]#
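
The commands above only disable the autoscaler for this one pool. If newly created pools should not be autoscaled either, the cluster-wide default can be changed. This is a minimal sketch, assuming you really want the default off for all new pools:
[root@ceph141 ~]# ceph config set global osd_pool_default_pg_autoscale_mode off
[root@ceph141 ~]# ceph config dump | grep pg_autoscale_mode   # verify the stored default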

2.5 Check the PGs stored on a specific OSD

[root@ceph141 ~]# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME         STATUS  REWEIGHT  PRI-AFF
-1         1.46489  root default                               
-3         0.48830      host ceph141                           
 0    hdd  0.19530          osd.0         up   1.00000  1.00000
 1    hdd  0.29300          osd.1         up   1.00000  1.00000
-5         0.48830      host ceph142                           
 2    hdd  0.19530          osd.2         up   1.00000  1.00000
 4    hdd  0.29300          osd.4         up   1.00000  1.00000
-7         0.48830      host ceph143                           
 3    hdd  0.29300          osd.3         up   1.00000  1.00000
 5    hdd  0.19530          osd.5         up   1.00000  1.00000
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph pg ls-by-osd osd.3
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  LOG_DUPS  STATE         SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                          
4.0        0         0          0        0      0            0           0    0         0  active+clean     4m      0'0   255:275  [1,4,3]p1  [1,4,3]p1  2024-08-26T15:54:03.280077+0000  2024-08-26T15:53:14.519657+0000                    1  periodic scrub scheduled @ 2024-08-27T19:17:36.857529+0000
4.1        0         0          0        0      0            0           0    0         0  active+clean     4m      0'0   255:273  [3,2,0]p3  [3,2,0]p3  2024-08-26T15:54:04.549067+0000  2024-08-26T15:53:14.519657+0000                    1  periodic scrub scheduled @ 2024-08-27T16:04:47.086740+0000

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph pg ls-by-osd osd.1
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  LOG_DUPS  STATE         SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                          
4.0        0         0          0        0      0            0           0    0         0  active+clean     4m      0'0   255:275  [1,4,3]p1  [1,4,3]p1  2024-08-26T15:54:03.280077+0000  2024-08-26T15:53:14.519657+0000                    1  periodic scrub scheduled @ 2024-08-27T19:17:36.857529+0000

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
[root@ceph141 ~]# 
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph pg ls-by-osd osd.5
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES   OMAP_BYTES*  OMAP_KEYS*  LOG  LOG_DUPS  STATE         SINCE  VERSION  REPORTED  UP         ACTING     SCRUB_STAMP                      DEEP_SCRUB_STAMP                 LAST_SCRUB_DURATION  SCRUB_SCHEDULING                                          
1.0        2         0          0        0  590368            0           0  545         0  active+clean    45m   75'545  255:1040  [2,5,0]p2  [2,5,0]p2  2024-08-26T15:24:04.386961+0000  2024-08-21T13:15:49.160928+0000                    1  periodic scrub scheduled @ 2024-08-28T02:30:43.351375+0000

* NOTE: Omap statistics are gathered during deep scrub and may be inaccurate soon afterwards depending on utilization. See http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics for further details.
[root@ceph141 ~]#

V. Managing the Ceph cluster via admin sockets

1. Overview of Ceph's admin socket interface

Similar to MySQL, Ceph can be managed both by reading authentication information from its configuration files and through local admin sockets.

Ceph's admin socket interface is typically used to query the local daemons:
    - daemon data files are kept under the "/var/lib/ceph" directory;
    - the admin sockets are kept under the "/var/run/ceph" directory by default;

Dynamic management of Ceph daemon properties via the socket files:
    - command syntax
ceph --admin-daemon /var/run/ceph/socket-name

    - get help information
ceph --admin-daemon /var/run/ceph/socket-name help


Check the list of Ceph component socket files on the current node
[root@ceph141 ~]# ll /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/
total 0
drwxrwx--- 2  167  167 160 Aug 31 16:54 ./
drwxrwx--- 3 ceph ceph  60 Aug 31 07:41 ../
srwxr-xr-x 1 root root   0 Aug 31 07:42 ceph-client.ceph-exporter.ceph141.asok=
srwxr-xr-x 1  167  167   0 Aug 31 16:54 ceph-mds.yinzhengjie-cephfs.ceph141.ezrzln.asok=
srwxr-xr-x 1  167  167   0 Aug 31 07:42 ceph-mgr.ceph141.fuztcs.asok=
srwxr-xr-x 1  167  167   0 Aug 31 07:41 ceph-mon.ceph141.asok=
srwxr-xr-x 1  167  167   0 Aug 31 07:42 ceph-osd.0.asok=
srwxr-xr-x 1  167  167   0 Aug 31 07:42 ceph-osd.1.asok=
[root@ceph141 ~]#
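
As an alternative to spelling out the full socket path, "ceph daemon <daemon-name> <command>" resolves the socket automatically. On a cephadm-deployed cluster like this one the sockets live under the fsid subdirectory, so this shorthand is most reliable when run inside cephadm shell on the daemon's host; treat the following as a sketch rather than the canonical form used in this article:
[root@ceph141 ~]# cephadm shell -- ceph daemon osd.0 status   # equivalent to calling the .asok path directly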

2. Example: managing an OSD through the socket

    1. Get the help information for managing the OSD
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok help
{
    "bench": "OSD benchmark: write <count> <size>-byte objects(with <obj_size> <obj_num>), (default count=1G default size=4MB). Results in log.",
    "bluefs debug_inject_read_zeros": "Injects 8K zeros into next BlueFS read. Debug only.",
    "bluefs files list": "print files in bluefs",
    "bluefs stats": "Dump internal statistics for bluefs.",
    "bluestore allocator dump block": "dump allocator free regions",
    "bluestore allocator fragmentation block": "give allocator fragmentation (0-no fragmentation, 1-absolute fragmentation)",
    "bluestore allocator score block": "give score on allocator fragmentation (0-no fragmentation, 1-absolute fragmentation)",
    "bluestore bluefs device info": "Shows space report for bluefs devices. This also includes an estimation for space available to bluefs at main device. alloc_size, if set, specifies the custom bluefs allocation unit size for the estimation above.",
    "cache drop": "Drop all OSD caches",
    "cache status": "Get OSD caches statistics",
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "cluster_log": "log a message to the cluster log",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "counter dump": "dump all labeled and non-labeled counters and their values",
    "counter schema": "dump all labeled and non-labeled counters schemas",
    "cpu_profiler": "run cpu profiling on daemon",
    "debug dump_missing": "dump missing objects to a named file",
    "debug kick_recovery_wq": "set osd_recovery_delay_start to <val>",
    "deep_scrub": "Trigger a scheduled deep scrub ",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_blocked_ops_count": "show the count of blocked ops currently in flight",
    "dump_blocklist": "dump blocklisted clients and times",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_osd_network": "Dump osd heartbeat network ping times",
    "dump_pg_recovery_stats": "dump pg recovery statistics",
    "dump_pgstate_history": "show recent state history",
    "dump_pool_statfs": "Dump store's statistics for the given pool",
    "dump_recovery_reservations": "show recovery reservations",
    "dump_scrub_reservations": "show scrub reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_pg_stats": "flush pg stats",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectargs": "inject configuration arguments into running daemon",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "list_unfound": "list unfound objects on this pg, perhaps starting at an offset given in JSON",
    "log": "dump pg_log of a specific pg",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "mark_unfound_lost": "mark all unfound objects in this pg as lost, either removing or reverting to a prior version if one is available",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump non-labeled counters and their values",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump non-labeled counters schemas",
    "query": "show details of a specific pg",
    "reset_pg_recovery_stats": "reset pg recovery statistics",
    "reset_purged_snaps_last": "Reset the superblock's purged_snaps_last",
    "rmomapkey": "remove omap key",
    "rotate-key": "rotate live authentication key",
    "rotate-stored-key": "Update the stored osd_key",
    "scrub": "Trigger a scheduled scrub ",
    "scrub_purged_snaps": "Scrub purged_snaps vs snapmapper index",
    "scrubdebug": "debug the scrubber",
    "send_beacon": "send OSD beacon to mon immediately",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}
[root@ceph141 ~]# 

    2. Check the status of "osd.0" on the current node
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok status
{
    "cluster_fsid": "c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4",
    "osd_fsid": "d14909c7-044b-4a9f-bbb5-adb8faa6313a",
    "whoami": 0,
    "state": "active",
    "oldest_map": 1,
    "cluster_osdmap_trim_lower_bound": 1,
    "newest_map": 295,
    "num_pgs": 211
}
[root@ceph141 ~]# 

    3. Check the version information of "osd.0" on the current node
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok version
{
    "version": "18.2.4",
    "release": "reef",
    "release_type": "stable"
}
[root@ceph141 ~]#
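
Besides status and version, a few other subcommands from the help listing above are frequently useful when troubleshooting a single OSD (output omitted):
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok dump_ops_in_flight   # requests currently being processed
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok dump_historic_ops    # recent slow ops
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok perf dump            # performance counters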

3. Example: managing a mon through the socket

    1. Get the help information for managing the mon
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mon.ceph141.asok help
{
    "add_bootstrap_peer_hint": "add peer address as potential bootstrap peer for cluster bringup",
    "add_bootstrap_peer_hintv": "add peer address vector as potential bootstrap peer for cluster bringup",
    "compact": "cause compaction of monitor's leveldb/rocksdb storage",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "connection scores dump": "show the scores used in connectivity-based elections",
    "connection scores reset": "reset the scores used in connectivity-based elections",
    "counter dump": "dump all labeled and non-labeled counters and their values",
    "counter schema": "dump all labeled and non-labeled counters schemas",
    "dump_historic_ops": "show recent ops",
    "dump_historic_slow_ops": "show recent slow ops",
    "dump_mempools": "get mempool stats",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectargs": "inject configuration arguments into running daemon",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "mon_status": "report status of monitors",
    "ops": "show the ops currently in flight",
    "perf dump": "dump non-labeled counters and their values",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump non-labeled counters schemas",
    "quorum enter": "force monitor back into quorum",
    "quorum exit": "force monitor out of the quorum",
    "sessions": "list existing sessions",
    "smart": "Query health metrics for underlying device",
    "sync_force": "force sync of and clear monitor store",
    "version": "get ceph version"
}
[root@ceph141 ~]# 

    2. Check specific configuration options
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mon.ceph141.asok config get  public_network
{
    "public_network": "10.0.0.0/24"
}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mon.ceph141.asok config get  rgw_ops_log_file_path
{
    "rgw_ops_log_file_path": "/var/log/ceph/ops-log-ceph-mon.ceph141.log"
}
[root@ceph141 ~]#
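
The mon socket also exposes mon_status (it appears in the help listing above), which reports this monitor's rank, quorum membership, and monmap directly from the local daemon:
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mon.ceph141.asok mon_status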

4. Example: managing a mgr through the socket

    1. Get the help information for managing the mgr
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mgr.ceph141.fuztcs.asok help
{
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "counter dump": "dump all labeled and non-labeled counters and their values",
    "counter schema": "dump all labeled and non-labeled counters schemas",
    "dump_cache": "show in-memory metadata cache contents",
    "dump_mempools": "get mempool stats",
    "dump_osd_network": "Dump osd heartbeat network ping times",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "help": "list available commands",
    "injectargs": "inject configuration arguments into running daemon",
    "kick_stale_sessions": "kick sessions that were remote reset",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "mds_requests": "show in-progress mds requests",
    "mds_sessions": "show mds session state",
    "mgr_status": "Dump mgr status",
    "objecter_requests": "show in-progress osd requests",
    "perf dump": "dump non-labeled counters and their values",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump non-labeled counters schemas",
    "rotate-key": "rotate live authentication key",
    "status": "show overall client status",
    "version": "get ceph version"
}
[root@ceph141 ~]# 

    2. Check specific configuration options
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mgr.ceph141.fuztcs.asok config show | wc -l
1964
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mgr.ceph141.fuztcs.asok config get mon_host
{
    "mon_host": "[v2:10.0.0.141:3300/0,v1:10.0.0.141:6789/0] [v2:10.0.0.142:3300/0,v1:10.0.0.142:6789/0] [v2:10.0.0.143:3300/0,v1:10.0.0.143:6789/0]"
}
[root@ceph141 ~]#
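
The mgr socket likewise provides mgr_status (listed in the help output above), which shows whether this mgr instance is currently the active one or a standby:
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mgr.ceph141.fuztcs.asok mgr_status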

5. Example: managing an mds through the socket

    1. Get the help information for managing the mds
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mds.yinzhengjie-cephfs.ceph141.ezrzln.asok help
{
    "cache drop": "trim cache and optionally request client to release all caps and flush the journal",
    "cache status": "show cache status",
    "client config": "Config a CephFS client session",
    "client evict": "Evict client session(s) based on a filter",
    "client ls": "List client sessions based on a filter",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "counter dump": "dump all labeled and non-labeled counters and their values",
    "counter schema": "dump all labeled and non-labeled counters schemas",
    "cpu_profiler": "run cpu profiling on daemon",
    "damage ls": "List detected metadata damage",
    "damage rm": "Remove a damage table entry",
    "dirfrag ls": "List fragments in directory",
    "dirfrag merge": "De-fragment directory by path",
    "dirfrag split": "Fragment directory by path",
    "dump cache": "dump metadata cache (optionally to a file)",
    "dump dir": "dump directory by path",
    "dump inode": "dump inode by inode number",
    "dump loads": "dump metadata loads",
    "dump snaps": "dump snapshots",
    "dump tree": "dump metadata cache for subtree",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_blocked_ops_count": "show the count of blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show recent ops, sorted by op duration",
    "dump_mempools": "get mempool stats",
    "dump_ops_in_flight": "show the ops currently in flight",
    "exit": "Terminate this MDS",
    "export dir": "migrate a subtree to named MDS",
    "flush journal": "Flush the journal to the backing store",
    "flush_path": "flush an inode (and its dirfrags)",
    "force_readonly": "Force MDS to read-only mode",
    "get subtrees": "Return the subtree map",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectargs": "inject configuration arguments into running daemon",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "openfiles ls": "List the opening files and their caps",
    "ops": "show the ops currently in flight",
    "osdmap barrier": "Wait until the MDS has this OSD map epoch",
    "perf dump": "dump non-labeled counters and their values",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump non-labeled counters schemas",
    "respawn": "Respawn this MDS",
    "rotate-key": "rotate live authentication key",
    "scrub abort": "Abort in progress scrub operations(s)",
    "scrub pause": "Pause in progress scrub operations(s)",
    "scrub resume": "Resume paused scrub operations(s)",
    "scrub start": "scrub and inode and output results",
    "scrub status": "Status of scrub operations(s)",
    "scrub_path": "scrub an inode and output results",
    "session config": "Config a CephFS client session",
    "session evict": "Evict client session(s) based on a filter",
    "session kill": "Evict a client session by id",
    "session ls": "List client sessions based on a filter",
    "status": "high-level status of MDS",
    "tag path": "Apply scrub tag recursively",
    "version": "get ceph version"
}
[root@ceph141 ~]# 

    2. Check the MDS status, version, and cache information
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mds.yinzhengjie-cephfs.ceph141.ezrzln.asok status
{
    "cluster_fsid": "c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4",
    "whoami": 0,
    "id": 85140,
    "want_state": "up:active",
    "state": "up:active",
    "fs_name": "yinzhengjie-cephfs",
    "rank_uptime": 17302.831363436999,
    "mdsmap_epoch": 9,
    "osdmap_epoch": 295,
    "osdmap_epoch_barrier": 292,
    "uptime": 17303.537646723998
}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mds.yinzhengjie-cephfs.ceph141.ezrzln.asok version
{
    "version": "18.2.4",
    "release": "reef",
    "release_type": "stable"
}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mds.yinzhengjie-cephfs.ceph141.ezrzln.asok cache status
{
    "pool": {
        "items": 228,
        "bytes": 55008
    }
}
[root@ceph141 ~]#
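
To see which CephFS clients are currently connected to this MDS, the session ls (or client ls) subcommand from the help listing above can be used:
[root@ceph141 ~]# ceph --admin-daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-mds.yinzhengjie-cephfs.ceph141.ezrzln.asok session ls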

VI. Modifying the Ceph cluster configuration

1. Modification via the configuration file

After the Ceph software is installed, a dedicated home directory structure is created. Ceph's configuration file is stored at "/etc/ceph/ceph.conf" by default and follows the "ini" file syntax.

A ceph.conf file mainly consists of the following sections:
    [global]
        Global settings for the entire Ceph cluster.
    [osd]
        Settings affecting the ceph-osd daemons in the cluster; they override the same options in [global].
    [mon]
        Settings affecting the ceph-mon daemons in the cluster; they override the same options in [global].
    [client]
        Settings affecting Ceph clients, e.g. RBD block device clients, CephFS clients, and object storage (RGW) clients.
    [mgr]
        Settings affecting the ceph-mgr daemons in the cluster; they override the same options in [global].

ceph.conf mainly applies settings cluster-wide. To target a single daemon, a "TYPE.ID" section can be used instead, for example "[mon.ceph141]" or "[osd.2]"; a minimal example snippet follows after the list below.

Ceph's configuration is not limited to "/etc/ceph/ceph.conf"; other locations are supported as well, differing mainly in scope (global versus local):
    Global scope:
        - /etc/ceph/ceph.conf (the configuration file read by default)
        - $CEPH_CONF (optional; an environment variable pointing to the configuration file, e.g. export CEPH_CONF=....)
        - -c /path/to/ceph.conf (optional; a configuration file specified when starting Ceph)

    Local scope:
        - ~/.ceph/config (a per-user configuration file)
        - ./ceph.conf (a configuration file in the current working directory)
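
Below is a minimal, illustrative ceph.conf fragment showing both a per-component section and a TYPE.ID section. The option values are placeholders based on this cluster's details and are not meant to be applied verbatim:
# /etc/ceph/ceph.conf (illustrative fragment)
[global]
fsid = c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4
mon_host = 10.0.0.141,10.0.0.142,10.0.0.143
public_network = 10.0.0.0/24

[osd]
osd_max_backfills = 1        # applies to every OSD

[osd.2]
debug_osd = 5/5              # applies only to osd.2, overriding [osd] and [global]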

2. Modification via the command line

    Method 1: modify with "ceph daemon"
[root@ceph141 ~]# ceph daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config get debug_osd
{
    "debug_osd": "1/5"
}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config set debug_osd 2/5
{
    "success": "osd_delete_sleep = '' osd_delete_sleep_hdd = '' osd_delete_sleep_hybrid = '' osd_delete_sleep_ssd = '' osd_max_backfills = '' osd_pg_delete_cost = '' (not observed, change may require restart) osd_recovery_max_active = '' osd_recovery_max_active_hdd = '' osd_recovery_max_active_ssd = '' osd_recovery_sleep = '' osd_recovery_sleep_hdd = '' osd_recovery_sleep_hybrid = '' osd_recovery_sleep_ssd = '' osd_scrub_sleep = '' osd_snap_trim_sleep = '' osd_snap_trim_sleep_hdd = '' osd_snap_trim_sleep_hybrid = '' osd_snap_trim_sleep_ssd = '' "
}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config get debug_osd
{
    "debug_osd": "2/5"
}
[root@ceph141 ~]# 


    Method 2: modify with "ceph tell"
[root@ceph141 ~]# ceph daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config get debug_osd
{
    "debug_osd": "2/5"
}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph tell osd.0 injectargs '--debug-osd 1/5'
{}
[root@ceph141 ~]# 
[root@ceph141 ~]# ceph daemon /var/run/ceph/c0ed6ca0-5fbc-11ef-9ff6-cf3a9f02b0d4/ceph-osd.0.asok config get debug_osd
{
    "debug_osd": "1/5"
}
[root@ceph141 ~]#
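
Note that changes made with ceph daemon and ceph tell injectargs only live in the running daemon's memory and are lost when the daemon restarts. On Reef, the centralized configuration database managed by the monitors is the recommended way to make a change persistent; a short sketch using the same debug_osd option:
[root@ceph141 ~]# ceph config get osd.0 debug_osd           # value as stored in the monitors' config database
[root@ceph141 ~]# ceph config set osd.0 debug_osd 2/5       # persists across daemon restarts
[root@ceph141 ~]# ceph config dump | grep debug_osd         # list stored config entries matching debug_osd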