【cluvfy】How to Use the Cluster Verification Utility (cluvfy)
What Is CVU
cluvfy (Cluster Verification Utility), CVU for short, is a verification tool shipped with the Oracle Clusterware software. It checks every stage of a cluster deployment, as well as the individual components, and verifies that they meet Oracle's requirements.
cluvfy performs a very broad range of cluster checks, covering OS and hardware configuration, kernel parameters, user resource limits, network settings, NTP settings, the health of RAC components, and more.
cluvfy never modifies the system configuration while checking, so it has no impact on the system. Its checks can be classified from two angles: by stage and by component.
Run cluvfy stage -list to see all stages.
Run cluvfy comp -list to see all components.
The CVU tool includes two scripts: runcluvfy.sh and cluvfy.
runcluvfy.sh is located on the Grid Infrastructure installation media; it is used to verify the system before Grid Infrastructure is installed.
cluvfy lives in the bin directory under the Grid Infrastructure home; it is used to verify the system before installing the Oracle database software or creating a cluster database.
The -pre option mainly checks whether the prerequisites for an installation are met,
while the -post option checks whether the components work correctly after installation.
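The choice between the two entry points can be sketched as a small wrapper script. This is a minimal illustration, not part of cluvfy itself; the media path (/stage/grid) and Grid home path (/u01/app/11.2.0/grid) are assumptions and must be adjusted to your environment.

```shell
#!/bin/sh
# Choose the right CVU entry point. Paths below are assumptions for
# illustration -- set MEDIA_DIR to wherever the Grid media is unpacked
# and GRID_HOME to the installed Grid Infrastructure home.
MEDIA_DIR=${MEDIA_DIR:-/stage/grid}
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0/grid}

if [ -x "$GRID_HOME/bin/cluvfy" ]; then
    # Grid Infrastructure is already installed: use cluvfy from the home
    CVU="$GRID_HOME/bin/cluvfy"
else
    # Not installed yet: fall back to runcluvfy.sh on the media
    CVU="$MEDIA_DIR/runcluvfy.sh"
fi
echo "Using CVU entry point: $CVU"
```

Before the first Grid installation only runcluvfy.sh exists, so the fallback branch is taken; after installation the script picks up the installed cluvfy automatically.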
[grid@orclalhr ~]$ cluvfy stage -list
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
Valid Stages are:
-pre cfs : pre-check for CFS setup
-pre crsinst : pre-check for CRS installation
-pre acfscfg : pre-check for ACFS Configuration.
-pre dbinst : pre-check for database installation
-pre dbcfg : pre-check for database configuration
-pre hacfg : pre-check for HA configuration
-pre nodeadd : pre-check for node addition.
-post hwos : post-check for hardware and operating system
-post cfs : post-check for CFS setup
-post crsinst : post-check for CRS installation
-post acfscfg : post-check for ACFS Configuration.
-post hacfg : post-check for HA configuration
-post nodeadd : post-check for node addition.
-post nodedel : post-check for node deletion.
The most common use is to run cluvfy to check the system before installing the clusterware, as shown below:
$ORACLE_HOME/bin/cluvfy stage -pre crsinst -n all -r 11gR2 -verbose -fixup
Here,
-n specifies the list of nodes to check. User equivalence must already be configured successfully among all of the listed nodes.
-r specifies the release of the software to be installed; use the help output to see the supported releases.
-verbose prints detailed information for each check.
-fixup generates fixup scripts, which must be executed as the root user.
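The -fixup workflow above can be sketched as follows. The node names and release string are examples, and the /tmp/CVU_&lt;version&gt;_&lt;user&gt;/ fixup location is an assumption based on typical 11gR2 behavior; cluvfy's own output reports the exact path of any script it generates.

```shell
#!/bin/sh
# Sketch of the pre-install check with fixup generation (node names and
# release are example values; adjust to your cluster).
NODES="rac1,rac2"
RELEASE="11gR2"

# -fixup asks CVU to generate correction scripts for any failed checks
# that it knows how to fix automatically.
CMD="cluvfy stage -pre crsinst -n $NODES -r $RELEASE -verbose -fixup"
echo "$CMD"

# If a fixable check fails, cluvfy prints the path of a generated fixup
# script (on 11gR2 typically under /tmp/CVU_<version>_<user>/); run that
# script as root on each affected node, then re-run the pre-check until
# it passes.
```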
I. How to Use the Cluster Verification Utility (cluvfy): stage
Each phase of an Oracle Clusterware and RAC installation is called a stage, and checks should be run both before entering and after completing every stage.
-pre: before entering a given stage, a predefined set of checks is run to make sure the cluster environment is ready for the next stage; this is the "pre-check".
-post: correspondingly, the "post-check" is a predefined set of checks that is run after a stage has been completed.
1. Getting help for the cluvfy verification tool
RACDB1@rac1 /home/oracle$ cluvfy -help
USAGE:
cluvfy [ -help ]
cluvfy stage { -list | -help }
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
cluvfy comp { -list | -help }
cluvfy comp <component-name> <component-specific options> [-verbose]
2. Listing the stages that can be verified
Use the "cluvfy stage -list" command to list the verifiable stages.
RACDB1@rac1 /home/oracle$ cluvfy stage -list
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
Valid stage options and stage names are:
-post hwos : post-check for hardware and operating system
-pre cfs : pre-check for CFS setup
-post cfs : post-check for CFS setup
-pre crsinst : pre-check for CRS installation
-post crsinst : post-check for CRS installation
-pre dbinst : pre-check for database installation
-pre dbcfg : pre-check for database configuration
Brief notes: hwos checks the hardware and operating system; cfs checks the Oracle Cluster File System setup; crsinst checks the CRS installation; dbinst and dbcfg check the database installation and configuration. Each is available as a pre- and/or post-check, as listed above.
3. Usage examples
3.1) Post-check the hardware and operating system on nodes rac1 and rac2
RACDB1@rac1 /home/oracle$ cluvfy stage -post hwos -n rac1,rac2
Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking node connectivity...
Node connectivity check passed for subnet "192.168.1.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.3.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.247.0" with node(s) rac2,rac1.
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
rac2 eth0:192.168.1.101
rac1 eth0:192.168.1.100
Suitable interfaces for the private interconnect on subnet "192.168.3.0":
rac2 eth0:192.168.3.101
rac1 eth0:192.168.3.100
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
rac2 eth1:192.168.2.101
rac1 eth1:192.168.2.100
Suitable interfaces for the private interconnect on subnet "192.168.247.0":
rac2 eth2:192.168.247.222
rac1 eth2:192.168.247.111
ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
Checking shared storage accessibility...
WARNING:
Package cvuqdisk not installed.
rac2,rac1
Shared storage check failed on nodes "rac2,rac1".
Post-check for hardware and operating system setup was unsuccessful on all the nodes.
3.2) Pre-check the CRS installation on nodes rac1 and rac2
RACDB1@rac1 /home/oracle$ cluvfy stage -pre crsinst -n rac1,rac2
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.1.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.3.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.247.0" with node(s) rac2,rac1.
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
rac2 eth0:192.168.1.101
rac1 eth0:192.168.1.100
Suitable interfaces for the private interconnect on subnet "192.168.3.0":
rac2 eth0:192.168.3.101
rac1 eth0:192.168.3.100
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
rac2 eth1:192.168.2.101
rac1 eth1:192.168.2.100
Suitable interfaces for the private interconnect on subnet "192.168.247.0":
rac2 eth2:192.168.247.222
rac1 eth2:192.168.247.111
ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
Checking system requirements for 'crs'...
No checks registered for this product.
Pre-check for cluster services setup was unsuccessful on all the nodes.
3.3) Post-check the CRS installation on nodes rac1 and rac2
RACDB1@rac1 /home/oracle$ cluvfy stage -post crsinst -n rac1,rac2
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "rac1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Post-check for cluster services setup was successful.
3.4) Pre-check the database installation on nodes rac1 and rac2
RACDB1@rac1 /home/oracle$ cluvfy stage -pre dbinst -n rac1,rac2
Performing pre-checks for database installation
Checking node reachability...
Node reachability check passed from node "rac1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Group existence check passed for "dba".
Membership check for user "oracle" in group "dba" passed.
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.1.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.3.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.247.0" with node(s) rac2,rac1.
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
rac2 eth0:192.168.1.101
rac1 eth0:192.168.1.100
Suitable interfaces for the private interconnect on subnet "192.168.3.0":
rac2 eth0:192.168.3.101
rac1 eth0:192.168.3.100
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
rac2 eth1:192.168.2.101
rac1 eth1:192.168.2.100
Suitable interfaces for the private interconnect on subnet "192.168.247.0":
rac2 eth2:192.168.247.222
rac1 eth2:192.168.247.111
ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
Checking system requirements for 'database'...
No checks registered for this product.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Pre-check for database installation was unsuccessful on all the nodes.
3.5) Pre-check the database configuration on nodes rac1 and rac2
RACDB1@rac1 /home/oracle$ cluvfy stage -pre dbcfg -n rac1,rac2 -d $ORACLE_HOME
Performing pre-checks for database configuration
Checking node reachability...
Node reachability check passed from node "rac1".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Group existence check passed for "dba".
Membership check for user "oracle" in group "dba" passed.
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "192.168.1.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.3.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
Node connectivity check passed for subnet "192.168.247.0" with node(s) rac2,rac1.
Suitable interfaces for the private interconnect on subnet "192.168.1.0":
rac2 eth0:192.168.1.101
rac1 eth0:192.168.1.100
Suitable interfaces for the private interconnect on subnet "192.168.3.0":
rac2 eth0:192.168.3.101
rac1 eth0:192.168.3.100
Suitable interfaces for the private interconnect on subnet "192.168.2.0":
rac2 eth1:192.168.2.101
rac1 eth1:192.168.2.100
Suitable interfaces for the private interconnect on subnet "192.168.247.0":
rac2 eth2:192.168.247.222
rac1 eth2:192.168.247.111
ERROR:
Could not find a suitable set of interfaces for VIPs.
Node connectivity check failed.
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Pre-check for database configuration was unsuccessful on all the nodes.
4. Summary
The stages verified by cluvfy's stage option cover every major step of an Oracle Clusterware and RAC installation; it is recommended to verify with cluvfy both before and after each step.
5. Practical usage:
---- Before installing CRS:
./runcluvfy.sh stage -pre crsinst -n dghpl2056,dghpl1902 -fixup -verbose >/home/grid/0421.txt
---- It can also be run routinely to check whether CRS is healthy:
[grid@dghpl1902 ~]$ cluvfy stage -post crsinst -n dghpl2056,dghpl1902 -verbose >/home/grid/cluvfy20211203.txt
II. How to Use the Cluster Verification Utility (cluvfy): comp
Part I covered stage verification; this part covers the tool's other major function, component verification (comp), which validates the availability and integrity of individual cluster components.
1. Components that cluvfy can verify
Use the "cluvfy comp -list" command to list the verifiable components.
RACDB1@rac1 /home/oracle$ cluvfy comp -list
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]
Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
Notes: nodereach and nodecon verify reachability and connectivity between the nodes; cfs and ssa verify the Oracle Cluster File System and shared storage accessibility; space and sys verify free space and minimum system requirements; clu, clumgr, ocr, and crs verify the integrity of the cluster, the cluster manager, the OCR, and CRS; nodeapp verifies that the node applications exist; admprv verifies administrative privileges; peer compares properties across peer nodes.
2. Usage examples
A few commonly used component checks are listed below for reference.
2.1) nodereach: check reachability between nodes
Syntax: cluvfy comp nodereach -n <node_list> [ -srcnode <node> ] [-verbose]
"node_list" is a comma-separated list of nodes; "all" can be used to include every node;
"srcnode" is the node from which the reachability test is initiated; if it is not specified, the current node is used as the source.
RACDB1@rac1 /home/oracle$ cluvfy comp nodereach -n all
Verifying node reachability
Checking node reachability...
Node reachability check passed from node "rac1".
Verification of node reachability was successful.
2.2) nodecon: check node connectivity
RACDB1@rac1 /home/oracle$ cluvfy comp nodecon -n all -i eth0,eth1
Verifying node connectivity
Checking node connectivity...
Check: Node connectivity for interface "eth0"
Node connectivity check passed for interface "eth0".
Check: Node connectivity for interface "eth1"
Node connectivity check passed for interface "eth1".
Node connectivity check passed.
Verification of node connectivity was successful.
2.3) clu: check cluster integrity
RACDB1@rac1 /home/oracle$ cluvfy comp clu
Verifying cluster integrity
Checking cluster integrity...
Cluster integrity check passed
Verification of cluster integrity was successful.
2.4) clumgr: check cluster manager integrity
RACDB1@rac1 /home/oracle$ cluvfy comp clumgr
Verifying cluster manager integrity
Checking Cluster manager integrity...
Checking CSS daemon...
Daemon status check passed for "CSS daemon".
Cluster manager integrity check passed.
Verification of cluster manager integrity was successful.
2.5) ocr: check OCR integrity
RACDB1@rac1 /home/oracle$ cluvfy comp ocr
Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations.
Uniqueness check for OCR device passed.
Checking the version of OCR...
OCR of correct Version "2" exists.
Checking data integrity of OCR...
Data integrity check for OCR passed.
OCR integrity check passed.
Verification of OCR integrity was successful.
2.6) crs: check CRS integrity
RACDB1@rac1 /home/oracle$ cluvfy comp crs
Verifying CRS integrity
Checking CRS integrity...
Checking daemon liveness...
Liveness check passed for "CRS daemon".
Checking daemon liveness...
Liveness check passed for "CSS daemon".
Checking daemon liveness...
Liveness check passed for "EVM daemon".
Checking CRS health...
CRS health check passed.
CRS integrity check passed.
Verification of CRS integrity was successful.
2.7) nodeapp: check node application existence
RACDB1@rac1 /home/oracle$ cluvfy comp nodeapp
Verifying node application existence
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check passed.
Verification of node application existence was successful.
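For the components not demonstrated above, the invocation pattern is the same. As a sketch, the admprv component can be driven over each of its operations; the -o operation names below (user_equiv, crs_inst, db_inst, db_config) are taken from the 10g help text and should be confirmed with "cluvfy comp admprv -help" on your release.

```shell
#!/bin/sh
# Sketch: print the admprv invocation for each administrative-privilege
# operation. The operation names are assumptions from the 10g help text;
# verify them with "cluvfy comp admprv -help" before running.
for OP in user_equiv crs_inst db_inst db_config; do
    echo "cluvfy comp admprv -n all -o $OP -verbose"
done
```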