Issues encountered during the recovery:
1. First, when running the rootdelete.sh script as root, it took a long time on both nodes, roughly half an hour.
2. When carving new LUNs for the RAC database, their names and logical rhdiskn mappings must be kept identical to those before the failure. On IBM AIX midrange servers, depending on the storage type, you must also set reserve_policy=no_reserve on both nodes after the LUNs are created (see the sketch after this list); otherwise, running root.sh on the second of the two RAC nodes fails with the following error:
cp: /dev/rhdiskn: The requested resource is busy.
3. After root.sh finishes on the second RAC node, you still need to register the VIPs, ASM (if the RAC uses ASM storage management), the database, and the instances in the OCR. Registering the VIPs requires a graphical desktop on the server, and you must log in as the oracle user; if you log in as root and su to oracle, vipca cannot bring up the graphical interface.
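A hedged sketch of the AIX step, assuming the new LUN shows up as hdisk2 (a hypothetical device name; confirm with lspv) and running as root on both nodes:
# query the disk's current SCSI reservation policy
lsattr -El hdisk2 -a reserve_policy
# allow both RAC nodes to open the disk concurrently
chdev -l hdisk2 -a reserve_policy=no_reserve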
The following simulates the voting-disk corruption and rebuild process for Oracle 10g RAC on Red Hat Enterprise Linux 5.
Some checks while the RAC is in a healthy state
[oracle@rac10gnode1 admin]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....0gdb.db application ONLINE ONLINE rac10gnode1
ora....b1.inst application ONLINE ONLINE rac10gnode1
ora....b2.inst application ONLINE ONLINE rac10gnode2
ora....SM1.asm application ONLINE ONLINE rac10gnode1
ora....E1.lsnr application ONLINE ONLINE rac10gnode1
ora....de1.gsd application ONLINE ONLINE rac10gnode1
ora....de1.ons application ONLINE ONLINE rac10gnode1
ora....de1.vip application ONLINE ONLINE rac10gnode1
ora....SM2.asm application ONLINE ONLINE rac10gnode2
ora....E2.lsnr application ONLINE ONLINE rac10gnode2
ora....de2.gsd application ONLINE ONLINE rac10gnode2
ora....de2.ons application ONLINE ONLINE rac10gnode2
ora....de2.vip application ONLINE ONLINE rac10gnode2
[oracle@rac10gnode1 admin]$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 2
onscfg[0]
{node = rac10gnode1, port = 6200}
Adding remote host rac10gnode1:6200
onscfg[1]
{node = rac10gnode2, port = 6200}
Adding remote host rac10gnode2:6200
ons is running ...
[oracle@rac10gnode1 admin]$ crsctl query css votedisk
0. 0 /dev/raw/raw2
located 1 votedisk(s).
[oracle@rac10gnode1 admin]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 200692
Used space (kbytes) : 3792
Available space (kbytes) : 196900
ID : 1874739684
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
Necessary backups before the experiment
[oracle@rac10gnode1 admin]$ dd if=/dev/raw/raw2 of=/oracle/app/oracle/votediskbak
996030+0 records in
996030+0 records out
509967360 bytes (510 MB) copied, 173.582 seconds, 2.9 MB/s
[oracle@rac10gnode1 admin]$ dd if=/dev/raw/raw1 of=/oracle/app/oracle/raocrbak
401562+0 records in
401562+0 records out
205599744 bytes (206 MB) copied, 70.1261 seconds, 2.9 MB/s
Also remember to take a full database backup with RMAN, so that if voting-disk recovery fails and data is lost, the RAC can still be rebuilt and restored from it.
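A minimal RMAN sketch of such a full backup, run as oracle on one node and assuming archivelog mode plus a hypothetical destination /backup with enough space:
[oracle@rac10gnode1 ~]$ rman target /
RMAN> backup database format '/backup/rac10gdb_%U' plus archivelog;
If the voting-disk rebuild fails completely, the dd images taken above can be written back the same way (swapping if= and of=), and the database restored from this RMAN backup.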
Simulating the failure
[oracle@rac10gnode1 ~]$ dd if=/dev/zero of=/dev/raw/raw2 bs=8k count=4k
Connection closed by foreign host.
When the voting disk is destroyed, both RAC cluster servers reboot automatically at the same time.
After the reboot, checking the cluster status shows that CRS and the votedisk are no longer reachable:
[oracle@rac10gnode1 bdump]$ crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
Next comes the votedisk rebuild.
First, remove the existing CRS and votedisk configuration on both nodes:
[root@rac10gnode1 ~]# cd /oracle/app/oracle/product/10.2.0.1/crs/install
[root@rac10gnode1 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr' # Worth noting: in a production environment this step is very slow
[root@rac10gnode2 ~]# cd /oracle/app/oracle/product/10.2.0.1/crs/install
[root@rac10gnode2 install]# ./rootdelete.sh
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is not running.
Oracle CRS stack is down now.
Removing script for Oracle Cluster Ready services
Updating ocr file for downgrade
Cleaning up SCR settings in '/etc/oracle/scls_scr' # Worth noting: in a production environment this step is very slow
[root@rac10gnode1 install]# ./rootdeinstall.sh
Removing contents from OCR device
2560+0 records in
2560+0 records out
10485760 bytes (10 MB) copied, 0.393353 seconds, 26.7 MB/s
Check that no ocssd, crsd, or evmd processes are still running (the bracketed grep patterns keep grep itself out of the results):
[root@rac10gnode1 install]# ps -e | grep -i 'ocs[s]d'
[root@rac10gnode1 install]# ps -e | grep -i 'cr[s]d.bin'
[root@rac10gnode1 install]# ps -e | grep -i 'ev[m]d.bin'
Next, rebuild the votedisk and OCR. This simulation reuses the original OCR and votedisk raw devices; in production you may have to carve new LUNs for the votedisk and OCR, in which case mind the reserve_policy setting discussed earlier. A quick pre-check is sketched below; then, as root on both nodes, change to the crs directory and run root.sh.
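Before running root.sh, it is worth confirming on both nodes that the raw bindings and ownership are back in place (a sketch, following this environment's raw1/raw2 layout):
[root@rac10gnode1 ~]# raw -qa                              # list all raw device bindings
[root@rac10gnode1 ~]# ls -l /dev/raw/raw1 /dev/raw/raw2    # OCR and votedisk devices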
[root@rac10gnode1 crs]# ./root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac10gnode1 rac10g1priv rac10gnode1
node 2: rac10gnode2 rac10g2priv rac10gnode2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac10gnode1
CSS is inactive on these nodes.
rac10gnode2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac10gnode2 crs]# ./root.sh
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product/10.2.0.1' is not owned by root
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
WARNING: directory '/oracle/app' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac10gnode1 rac10g1priv rac10gnode1
node 2: rac10gnode2 rac10g2priv rac10gnode2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac10gnode1
rac10gnode2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/oracle/app/oracle/product/10.2.0.1/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
The vipca(silent) step fails here. On RHEL 5 this libpthread error is the well-known LD_ASSUME_KERNEL problem in the 10.2.0.1 vipca/srvctl scripts, and on top of that the public/VIP network interfaces have not been configured yet; both need to be addressed before vipca can run.
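The widely documented workaround for the libpthread error is to edit $CRS_HOME/bin/vipca and $CRS_HOME/bin/srvctl (a sketch; the exact position of the line varies by script) and disable the variable right after the block that exports it:
# added in both vipca and srvctl, immediately after LD_ASSUME_KERNEL is set
unset LD_ASSUME_KERNEL
Next, configure the network interfaces: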
[root@rac10gnode1 crs]# su - oracle
[oracle@rac10gnode1 ~]$ oifcfg setif -global eth1/192.168.56.0:public
[oracle@rac10gnode1 ~]$ oifcfg setif -global eth0/10.10.10.0:cluster_interconnect
[oracle@rac10gnode1 ~]$ oifcfg iflist
eth0 10.10.10.0
eth1 192.168.56.0
[oracle@rac10gnode1 ~]$ oifcfg getif
eth1 192.168.56.0 global public
eth0 10.10.10.0 global cluster_interconnect
Then, as root, go to /oracle/app/oracle/product/10.2.0.1/crs/bin and run vipca to configure the VIPs.
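If no graphical desktop is available, the VIP registration can often be done from the command line as root instead (a sketch; the VIP addresses 192.168.56.21/22 and the /24 netmask on eth1 are assumptions about this environment's address plan, and whether modify applies depends on what root.sh already registered):
[root@rac10gnode1 bin]# ./srvctl modify nodeapps -n rac10gnode1 -A 192.168.56.21/255.255.255.0/eth1
[root@rac10gnode1 bin]# ./srvctl modify nodeapps -n rac10gnode2 -A 192.168.56.22/255.255.255.0/eth1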
Once the VIPs are configured, the RAC cluster status can be checked again:
[root@rac10gnode1 bin]# su - oracle
[oracle@rac10gnode1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....de1.gsd application ONLINE ONLINE rac10gnode1
ora....de1.ons application ONLINE ONLINE rac10gnode1
ora....de1.vip application ONLINE ONLINE rac10gnode1
ora....de2.gsd application ONLINE ONLINE rac10gnode2
ora....de2.ons application ONLINE ONLINE rac10gnode2
ora....de2.vip application ONLINE ONLINE rac10gnode2
Next, configure the database listeners; back up the existing network configuration first.
[oracle@rac10gnode1 db]$ cd network/
[oracle@rac10gnode1 network]$ ls
admin doc install jlib lib lib32 log mesg tools trace
[oracle@rac10gnode1 network]$ cp -R admin adminbak
[oracle@rac10gnode1 network]$ ls
admin adminbak doc install jlib lib lib32 log mesg tools trace
[oracle@rac10gnode1 network]$ netca
Oracle Net Services Configuration:
Configuring Listener:LISTENER
rac10gnode1...
rac10gnode2...
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0
Checking the RAC cluster status again shows the listener resources on both nodes ONLINE:
[oracle@rac10gnode1 network]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....E1.lsnr application ONLINE ONLINE rac10gnode1
ora....de1.gsd application ONLINE ONLINE rac10gnode1
ora....de1.ons application ONLINE ONLINE rac10gnode1
ora....de1.vip application ONLINE ONLINE rac10gnode1
ora....E2.lsnr application ONLINE ONLINE rac10gnode2
ora....de2.gsd application ONLINE ONLINE rac10gnode2
ora....de2.ons application ONLINE ONLINE rac10gnode2
ora....de2.vip application ONLINE ONLINE rac10gnode2
The listener on the current node has already been started automatically:
[oracle@rac10gnode1 network]$ lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 18-JUN-2015 19:56:25
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_RAC10GNODE1
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 18-JUN-2015 19:55:30
Uptime 0 days 0 hr. 0 min. 55 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/oracle/product/10.2.0.1/db/network/admin/listener.ora
Listener Log File /oracle/app/oracle/product/10.2.0.1/db/network/log/listener_rac10gnode1.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.22)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Add the ONS configuration
[oracle@rac10gnode1 network]$ racgons add_config rac10gnode1:6251 rac10gnode2:6251
[oracle@rac10gnode1 network]$ onsctl ping
Number of onsconfiguration retrieved, numcfg = 2
onscfg[0]
{node = rac10gnode1, port = 6251}
Adding remote host rac10gnode1:6251
onscfg[1]
{node = rac10gnode2, port = 6251}
Adding remote host rac10gnode2:6251
ons is running ...
Add the ASM configuration
[oracle@rac10gnode1 network]$ srvctl add asm -n rac10gnode1
[oracle@rac10gnode1 network]$ srvctl add asm -n rac10gnode2
Add the database configuration
[oracle@rac10gnode1 network]$ srvctl add database -d rac10gdb -o $ORACLE_HOME
Add the instance configuration
[oracle@rac10gnode1 network]$ srvctl add instance -d rac10gdb -i rac10gdb1 -n rac10gnode1
[oracle@rac10gnode1 network]$ srvctl add instance -d rac10gdb -i rac10gdb2 -n rac10gnode2
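To confirm the registrations took effect, the stored configuration can be queried (a quick check; output format varies slightly across 10.2 patch levels):
[oracle@rac10gnode1 network]$ srvctl config database -d rac10gdb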
Checking the RAC cluster status now shows the asm, instance, and database resources OFFLINE:
[oracle@rac10gnode1 network]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....0gdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....SM1.asm application OFFLINE OFFLINE
ora....E1.lsnr application ONLINE ONLINE rac10gnode1
ora....de1.gsd application ONLINE ONLINE rac10gnode1
ora....de1.ons application ONLINE ONLINE rac10gnode1
ora....de1.vip application ONLINE ONLINE rac10gnode1
ora....SM2.asm application OFFLINE OFFLINE
ora....E2.lsnr application ONLINE ONLINE rac10gnode2
ora....de2.gsd application ONLINE ONLINE rac10gnode2
ora....de2.ons application ONLINE ONLINE rac10gnode2
ora....de2.vip application ONLINE ONLINE rac10gnode2
Next, start the ASM and database resources:
[oracle@rac10gnode1 network]$ srvctl start asm -n rac10gnode1
[oracle@rac10gnode1 network]$ srvctl start asm -n rac10gnode2
[oracle@rac10gnode1 network]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....0gdb.db application OFFLINE OFFLINE
ora....b1.inst application OFFLINE OFFLINE
ora....b2.inst application OFFLINE OFFLINE
ora....SM1.asm application ONLINE ONLINE rac10gnode1
ora....E1.lsnr application ONLINE ONLINE rac10gnode1
ora....de1.gsd application ONLINE ONLINE rac10gnode1
ora....de1.ons application ONLINE ONLINE rac10gnode1
ora....de1.vip application ONLINE ONLINE rac10gnode1
ora....SM2.asm application ONLINE ONLINE rac10gnode2
ora....E2.lsnr application ONLINE ONLINE rac10gnode2
ora....de2.gsd application ONLINE ONLINE rac10gnode2
ora....de2.ons application ONLINE ONLINE rac10gnode2
ora....de2.vip application ONLINE ONLINE rac10gnode2
[oracle@rac10gnode1 network]$ srvctl start database -d rac10gdb
[oracle@rac10gnode1 network]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....0gdb.db application ONLINE ONLINE rac10gnode2
ora....b1.inst application ONLINE ONLINE rac10gnode1
ora....b2.inst application ONLINE ONLINE rac10gnode2
ora....SM1.asm application ONLINE ONLINE rac10gnode1
ora....E1.lsnr application ONLINE ONLINE rac10gnode1
ora....de1.gsd application ONLINE ONLINE rac10gnode1
ora....de1.ons application ONLINE ONLINE rac10gnode1
ora....de1.vip application ONLINE ONLINE rac10gnode1
ora....SM2.asm application ONLINE ONLINE rac10gnode2
ora....E2.lsnr application ONLINE ONLINE rac10gnode2
ora....de2.gsd application ONLINE ONLINE rac10gnode2
ora....de2.ons application ONLINE ONLINE rac10gnode2
ora....de2.vip application ONLINE ONLINE rac10gnode2
Next, verify the current state of the repaired RAC:
[oracle@rac10gnode1 network]$ lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 18-JUN-2015 20:09:35
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER_RAC10GNODE1
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 18-JUN-2015 19:55:30
Uptime 0 days 0 hr. 14 min. 5 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/oracle/product/10.2.0.1/db/network/admin/listener.ora
Listener Log File /oracle/app/oracle/product/10.2.0.1/db/network/log/listener_rac10gnode1.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.22)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.11)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status BLOCKED, has 1 handler(s) for this service...
Service "+ASM_XPT" has 1 instance(s).
Instance "+ASM1", status BLOCKED, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "rac10gdb" has 2 instance(s).
Instance "rac10gdb1", status READY, has 2 handler(s) for this service...
Instance "rac10gdb2", status READY, has 1 handler(s) for this service...
Service "rac10gdbXDB" has 2 instance(s).
Instance "rac10gdb1", status READY, has 1 handler(s) for this service...
Instance "rac10gdb2", status READY, has 1 handler(s) for this service...
Service "rac10gdb_XPT" has 2 instance(s).
Instance "rac10gdb1", status READY, has 2 handler(s) for this service...
Instance "rac10gdb2", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@rac10gnode1 network]$ ps -ef|grep ora_
oracle 21906 1 0 20:08 ? 00:00:00 ora_pmon_rac10gdb1
oracle 21908 1 0 20:08 ? 00:00:00 ora_diag_rac10gdb1
oracle 21910 1 0 20:08 ? 00:00:00 ora_psp0_rac10gdb1
oracle 21912 1 1 20:08 ? 00:00:01 ora_lmon_rac10gdb1
oracle 21914 1 0 20:08 ? 00:00:00 ora_lmd0_rac10gdb1
oracle 21916 1 0 20:08 ? 00:00:00 ora_lms0_rac10gdb1
oracle 21920 1 0 20:08 ? 00:00:00 ora_mman_rac10gdb1
oracle 21922 1 0 20:08 ? 00:00:00 ora_dbw0_rac10gdb1
oracle 21924 1 0 20:08 ? 00:00:00 ora_lgwr_rac10gdb1
oracle 21926 1 0 20:08 ? 00:00:00 ora_ckpt_rac10gdb1
oracle 21928 1 0 20:08 ? 00:00:00 ora_smon_rac10gdb1
oracle 21930 1 0 20:08 ? 00:00:00 ora_reco_rac10gdb1
oracle 21932 1 0 20:08 ? 00:00:00 ora_cjq0_rac10gdb1
oracle 21934 1 2 20:08 ? 00:00:02 ora_mmon_rac10gdb1
oracle 21936 1 0 20:08 ? 00:00:00 ora_mmnl_rac10gdb1
oracle 21938 1 0 20:08 ? 00:00:00 ora_d000_rac10gdb1
oracle 21940 1 0 20:08 ? 00:00:00 ora_s000_rac10gdb1
oracle 21992 1 0 20:08 ? 00:00:00 ora_lck0_rac10gdb1
oracle 21997 1 0 20:08 ? 00:00:00 ora_asmb_rac10gdb1
oracle 22001 1 0 20:08 ? 00:00:00 ora_rbal_rac10gdb1
oracle 22079 1 0 20:08 ? 00:00:00 ora_o000_rac10gdb1
oracle 22083 1 0 20:08 ? 00:00:00 ora_o001_rac10gdb1
oracle 22085 1 0 20:08 ? 00:00:00 ora_o002_rac10gdb1
oracle 22265 1 0 20:08 ? 00:00:00 ora_arc0_rac10gdb1
oracle 22267 1 0 20:08 ? 00:00:00 ora_arc1_rac10gdb1
oracle 22285 1 0 20:08 ? 00:00:00 ora_arc2_rac10gdb1
oracle 22387 1 0 20:08 ? 00:00:00 ora_qmnc_rac10gdb1
oracle 22640 1 0 20:08 ? 00:00:00 ora_q000_rac10gdb1
oracle 22642 1 0 20:08 ? 00:00:00 ora_pz99_rac10gdb1
oracle 22646 1 0 20:08 ? 00:00:00 ora_q002_rac10gdb1
oracle 22657 1 0 20:08 ? 00:00:00 ora_j000_rac10gdb1
oracle 24560 28572 0 20:10 pts/1 00:00:00 grep ora_
[oracle@rac10gnode1 network]$ ps -ef|grep asm_
oracle 18485 1 0 20:06 ? 00:00:00 asm_pmon_+ASM1
oracle 18487 1 0 20:06 ? 00:00:00 asm_diag_+ASM1
oracle 18489 1 0 20:06 ? 00:00:00 asm_psp0_+ASM1
oracle 18491 1 0 20:06 ? 00:00:01 asm_lmon_+ASM1
oracle 18493 1 0 20:06 ? 00:00:00 asm_lmd0_+ASM1
oracle 18495 1 0 20:06 ? 00:00:00 asm_lms0_+ASM1
oracle 18499 1 0 20:06 ? 00:00:00 asm_mman_+ASM1
oracle 18501 1 0 20:06 ? 00:00:00 asm_dbw0_+ASM1
oracle 18503 1 0 20:06 ? 00:00:00 asm_lgwr_+ASM1
oracle 18505 1 0 20:06 ? 00:00:00 asm_ckpt_+ASM1
oracle 18507 1 0 20:06 ? 00:00:00 asm_smon_+ASM1
oracle 18509 1 0 20:06 ? 00:00:00 asm_rbal_+ASM1
oracle 18511 1 0 20:06 ? 00:00:00 asm_gmon_+ASM1
oracle 18538 1 0 20:06 ? 00:00:00 asm_lck0_+ASM1
oracle 21723 1 0 20:08 ? 00:00:00 asm_asmb_+ASM1
oracle 21727 1 0 20:08 ? 00:00:00 asm_o000_+ASM1
oracle 24712 28572 0 20:10 pts/1 00:00:00 grep asm_
[oracle@rac10gnode1 network]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jun 18 20:10:29 2015
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> col file_name for a50
SQL> set lineisze 1000
SP2-0158: unknown SET option "lineisze"
SQL> set linesize 1000
SQL> select tablespace_name from dba_data_files;
TABLESPACE_NAME
------------------------------
USERS
SYSAUX
UNDOTBS1
SYSTEM
UNDOTBS2
TEST
6 rows selected.
SQL> select tablespace_name,file_name from dba_data_files;
TABLESPACE_NAME FILE_NAME
------------------------------ --------------------------------------------------
USERS +ORADATA/rac10gdb/datafile/users.259.856721895
SYSAUX +ORADATA/rac10gdb/datafile/sysaux.257.856721893
UNDOTBS1 +ORADATA/rac10gdb/datafile/undotbs1.258.856721895
SYSTEM +ORADATA/rac10gdb/datafile/system.256.856721893
UNDOTBS2 +ORADATA/rac10gdb/datafile/undotbs2.267.856722041
TEST +ORADATA/rac10gdb/datafile/test.dbf
6 rows selected.
At this point, the Oracle 10g voting-disk rebuild is fully complete!
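As a final sanity check, the stack can be verified once more (a sketch; in 10g, cluvfy is available under the CRS home's bin directory):
[oracle@rac10gnode1 ~]$ crsctl check crs
[oracle@rac10gnode1 ~]$ cluvfy stage -post crsinst -n rac10gnode1,rac10gnode2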