Ceph: Unknown lvalue 'TasksMax' in section 'Service'

Overview:

1. Check the Ceph service status

# systemctl status ceph\*.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-01-12 06:03:20 CST; 7h ago
 Main PID: 4686 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─4686 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jan 12 06:03:20 hz-01-ops-tc-ceph-01 ceph-osd[4686]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 12 06:03:20 hz-01-ops-tc-ceph-01 ceph-osd[4686]: 2018-01-12 06:03:20.610107 7f6b4111b800 -1 osd.0 0 log_to_monitors {default=true}
Jan 12 06:03:20 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 06:03:21 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 06:03:21 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
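Each journal entry points at line 18 of /usr/lib/systemd/system/ceph-osd@.service, which is where the unrecognized TasksMax= directive lives. A quick way to confirm is to grep the unit file for it. The snippet below reproduces the check on a scratch copy whose contents are an assumption for illustration; on a real node you would grep the actual unit file.

```shell
# On a real node: grep -n '^TasksMax' /usr/lib/systemd/system/ceph-osd@.service
# Scratch copy with assumed contents, so the check can be shown end to end:
cat > /tmp/ceph-osd-demo.service <<'EOF'
[Unit]
Description=Ceph object storage daemon

[Service]
TasksMax=infinity
EOF
# -n prints the line number alongside the match
grep -n '^TasksMax' /tmp/ceph-osd-demo.service
```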


2. Fix:

The warning means the running systemd is too old to recognize the TasksMax= directive (it was introduced upstream in systemd v227, and CentOS 7 only picked it up via a backport). Updating the systemd packages makes the directive known:

# sudo yum install systemd-*
# systemctl status ceph\*.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-01-12 06:03:20 CST; 7h ago
 Main PID: 4686 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─4686 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jan 12 06:03:20 hz-01-ops-tc-ceph-01 ceph-osd[4686]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 12 06:03:20 hz-01-ops-tc-ceph-01 ceph-osd[4686]: 2018-01-12 06:03:20.610107 7f6b4111b800 -1 osd.0 0 log_to_monitors {default=true}
Jan 12 06:03:20 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 06:03:21 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 06:03:21 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
Jan 12 13:53:07 hz-01-ops-tc-ceph-01 systemd[1]: [/usr/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue 'TasksMax' in section 'Service'
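If updating systemd is not an option, a common alternative workaround (not from the original post, sketched here under stated assumptions) is to comment out the TasksMax= line so that the old systemd simply skips it. The snippet demonstrates the edit on a scratch file; on a real node you would edit /usr/lib/systemd/system/ceph-osd@.service (ideally via a drop-in override) and reload systemd afterwards.

```shell
# Scratch file standing in for /usr/lib/systemd/system/ceph-osd@.service;
# its contents are an assumption for illustration:
printf '[Service]\nTasksMax=infinity\nExecStart=/usr/bin/ceph-osd -f\n' > /tmp/ceph-osd-workaround.service
# Comment out the directive the old systemd does not understand:
sed -i 's/^TasksMax=/#TasksMax=/' /tmp/ceph-osd-workaround.service
grep 'TasksMax' /tmp/ceph-osd-workaround.service
# On a real node, follow with:
#   systemctl daemon-reload && systemctl restart ceph-osd@0.service
```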

3. Verify the service:

# systemctl restart ceph\*.service
# systemctl status ceph\*.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-01-12 13:56:14 CST; 2s ago
  Process: 14682 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
 Main PID: 14733 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─14733 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

Jan 12 13:56:14 hz-01-ops-tc-ceph-01 systemd[1]: Starting Ceph object storage daemon...
Jan 12 13:56:14 hz-01-ops-tc-ceph-01 ceph-osd-prestart.sh[14682]: create-or-move updated item name 'osd.0' weight 0.0439 at location {host=hz-01-ops-tc-ceph-01,root=default} to crush map
Jan 12 13:56:14 hz-01-ops-tc-ceph-01 systemd[1]: Started Ceph object storage daemon.
Jan 12 13:56:14 hz-01-ops-tc-ceph-01 ceph-osd[14733]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 12 13:56:14 hz-01-ops-tc-ceph-01 ceph-osd[14733]: 2018-01-12 13:56:14.620901 7f0aee7fb800 -1 osd.0 19 log_to_monitors {default=true}

This article is reposted from the 51CTO blog of 冰冻vs西瓜; original link: http://blog.51cto.com/molewan/2060357. Please contact the original author for reprint permission.