Yixin Group -- RAC environment follow-up
2. Reconfiguring Grid Infrastructure and ASM
#Re-running the Grid Infrastructure configuration does not remove the copied binaries;
#it only reverts the system to its state before CRS was configured. The steps are as follows.

a. As root, run the following command on every node except the last one:
# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

b. Again as root, run the following command on the last node. It clears the OCR configuration and the voting disk:
# perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

c. If ASM disks were used, continue with the following so the disks become ASM candidates again (this wipes all ASM disk groups):
# dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100
# /etc/init.d/oracleasm deletedisk DATA /dev/sdb1
# /etc/init.d/oracleasm createdisk DATA /dev/sdb1

#Author : Robinson
#Blog   : http://blog.csdn.net/robinson_0612
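Steps a and b can be driven from a single node. Below is a minimal sketch that, by default (DRYRUN=1), only prints the commands it would run instead of executing them. The node names linux1/linux2 and the grid home path come from these notes; passwordless ssh as root to each node is an assumption.

```shell
# Sketch: drive the rootcrs.pl deconfig across the cluster.
# DRYRUN=1 (default) only prints the plan; set DRYRUN=0 to really run it.
NODES="linux1 linux2"                  # all cluster nodes
LAST_NODE="linux2"                     # node to deconfigure with -lastnode
GRID_HOME=/u01/app/11.2.0/grid         # adjust as needed
DRYRUN=${DRYRUN:-1}

run() {
  if [ "$DRYRUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

plan=$(
  # every node except the last: plain deconfig
  for node in $NODES; do
    [ "$node" = "$LAST_NODE" ] && continue
    run ssh root@"$node" perl "$GRID_HOME/crs/install/rootcrs.pl" -verbose -deconfig -force
  done
  # last node: also clears OCR and the voting disk
  run ssh root@"$LAST_NODE" perl "$GRID_HOME/crs/install/rootcrs.pl" -verbose -deconfig -force -lastnode
)
echo "$plan"
```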
3. Completely removing Grid Infrastructure
#11g R2 Grid Infrastructure also provides a complete uninstall facility: the deinstall command, which replaces the OUI-based way of removing clusterware and ASM and restores the environment to its pre-grid-installation state.
#The command stops the cluster and removes the binaries together with all related configuration.
#Command location: $GRID_HOME/deinstall
#Below is a concrete example of running the command. It asks for some interactive input along the way, and some files under /tmp have to be cleaned up as root in a new session.
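One prerequisite is worth checking up front: deinstall refuses to run as root and must be started by the grid software owner. A small pre-flight sketch (the GRID_HOME value is the one used in these notes):

```shell
# Sketch: pre-flight check before running $GRID_HOME/deinstall/deinstall.
# deinstall must NOT be run as root, so verify the current user first.
GRID_HOME=/u01/app/11.2.0/grid   # adjust as needed
current_user=$(id -un)
if [ "$current_user" = "root" ]; then
  status="switch to the grid owner (e.g. su - grid) before running deinstall"
else
  status="ok: running $GRID_HOME/deinstall/deinstall as $current_user"
fi
echo "$status"
```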
[root@linux1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@linux1 bin]# cd ../deinstall/
[root@linux1 deinstall]# pwd
/u01/app/11.2.0/grid/deinstall
Delete the oracle installation directories on every node (do the same for the oracle user); after that, removing the remaining directories listed above is all that is needed.
Note: the VIP is the virtual IP, while the private IP carries the cluster interconnect ("heartbeat").
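The VIP/private-IP note above maps onto a typical two-node /etc/hosts layout. A sketch for reference only: the hostnames linux1/linux2 come from these notes, but all addresses below are hypothetical examples.

```text
# Public IPs (client-facing)
192.168.1.101   linux1
192.168.1.102   linux2
# Virtual IPs (VIPs -- fail over between nodes)
192.168.1.111   linux1-vip
192.168.1.112   linux2-vip
# Private IPs (interconnect / "heartbeat")
10.0.0.1        linux1-priv
10.0.0.2        linux2-priv
```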
[root@linux1 deinstall]# ./deinstall
You must not be logged in as root to run ./deinstall.
Log in as Oracle user and rerun ./deinstall.
[root@linux1 deinstall]# su grid
[grid@linux1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2013-07-16_05-54-03-PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################## CHECK OPERATION START ########################
Install check configuration START

Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for de-install is: CRS
Oracle Base selected for de-install is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: linux1,linux2

Install check configuration END

Traces log file: /tmp/deinstall2013-07-16_05-54-03-PM/logs//crsdc.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/netdc_check207506844451155733.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/asmcadc_check2698133635629979531.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/11.2.0/grid.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +DATA
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS
that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'.
Do you want to modify above information (y|n) [n]:

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)linux1,linux2
Oracle Home selected for de-install is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2013-07-16_05-54-03-PM/logs/deinstall_deconfig2013-07-16_05-54-37-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2013-07-16_05-54-03-PM/logs/deinstall_deconfig2013-07-16_05-54-37-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/asmcadc_clean3319637107726750003.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2013-07-16_05-54-03-PM/logs/netdc_clean9055263637610505743.log

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

---------------------------------------->

Run the following command as the root user or the administrator on node "linux2".

/tmp/deinstall2013-07-16_05-54-03-PM/perl/bin/perl -I/tmp/deinstall2013-07-16_05-54-03-PM/perl/lib
-I/tmp/deinstall2013-07-16_05-54-03-PM/crs/install /tmp/deinstall2013-07-16_05-54-03-PM/crs/install/rootcrs.pl -force
-delete -paramfile /tmp/deinstall2013-07-16_05-54-03-PM/response/deinstall_Ora11g_gridinfrahome1.rsp

Run the following command as the root user or the administrator on node "linux1".

/tmp/deinstall2013-07-16_05-54-03-PM/perl/bin/perl -I/tmp/deinstall2013-07-16_05-54-03-PM/perl/lib
-I/tmp/deinstall2013-07-16_05-54-03-PM/crs/install /tmp/deinstall2013-07-16_05-54-03-PM/crs/install/rootcrs.pl -force
-delete -paramfile /tmp/deinstall2013-07-16_05-54-03-PM/response/deinstall_Ora11g_gridinfrahome1.rsp -lastnode

Press Enter after you finish running the above commands

<----------------------------------------

Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linux2' : Done

Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'linux2' : Done

Delete directory '/u01/app/oraInventory' on the remote nodes 'linux2' : Done

Delete directory '/u01/app/grid' on the remote nodes 'linux2' : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

Oracle install clean START

Clean install operation removing temporary directory '/tmp/install' on node 'linux1'
Clean install operation removing temporary directory '/tmp/install' on node 'linux2'

Oracle install clean END

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Oracle Clusterware is stopped and successfully de-configured on node "linux2"
Oracle Clusterware is stopped and successfully de-configured on node "linux1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linux2'.
Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'linux2'.
Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'linux2'.
Successfully deleted directory '/u01/app/grid' on the remote nodes 'linux2'.
Oracle Universal Installer cleanup was successful.

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'linux1,linux2' at the end of the session.

Oracle install successfully cleaned up the temporary directories.
#######################################################################

############# ORACLE DEINSTALL & DECONFIG TOOL END #############
Summary:
1. As root, run the rootcrs.pl -verbose -deconfig -force script on each node (adding -lastnode on the final node).
2. As the grid user, run the ./deinstall script.
3. As the grid user, rm -rf the ORACLE_HOME directory on every node.
grid user environment:
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/grid
export ORACLE_HOME=/u01/app/11.2.3/grid
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
alias sqlplus='rlwrap sqlplus'
oracle user environment:
export ORACLE_SID=mes1
export ORACLE_UNQNAME=mes
export JAVA_HOME=/usr/local/java
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=$ORACLE_BASE/11.2.3/db
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"
export NLS_LANG=american_america.ZHS16GBK
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORA_NLS11=$ORACLE_HOME/nls/data
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
export THREADS_FLAG=native
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
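A quick sanity check (a sketch, not part of the original profile) can confirm that the key variables above are set and non-empty before starting tools such as sqlplus; the values mirror the oracle user profile shown above.

```shell
# Sketch: verify the key variables from the profile above are exported
# and non-empty. Values mirror the oracle user profile in these notes.
export ORACLE_SID=mes1
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=$ORACLE_BASE/11.2.3/db

missing=""
for v in ORACLE_SID ORACLE_BASE ORACLE_HOME; do
  eval "val=\$$v"                 # indirect lookup of the variable named $v
  [ -n "$val" ] || missing="$missing $v"
done
if [ -z "$missing" ]; then
  echo "environment ok"
else
  echo "missing:$missing"
fi
```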
Delete the partitions, then recreate them.
[root@m1 ~]#
for i in b c d e f g ;
do
echo "sd$i" "`scsi_id -g -u -s /block/sd$i` ";
done
sdb 36000c29a89de45f738ab0cfa02b9c79e
sdc 36000c295cfaf6508afb7635d7d212ea4
sdd 36000c2968c0330c628277dd9d434b227
sde 36000c29b24374ca1e1d72fb7cc4eeaeb
sdf 36000c29e6a3d2a6368deeaf7d0cd971b
sdg 36000c290212ef972a444f1036210823b
[root@m1 ~]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c29a89de45f738ab0cfa02b9c79e", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c295cfaf6508afb7635d7d212ea4", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c2968c0330c628277dd9d434b227", NAME="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c29b24374ca1e1d72fb7cc4eeaeb", NAME="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c29e6a3d2a6368deeaf7d0cd971b", NAME="asm-disk5", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c290212ef972a444f1036210823b", NAME="asm-disk6", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@m1 ~]# /sbin/udevcontrol reload_rules
[root@m1 ~]# /sbin/start_udev
Starting udev: [ OK ]
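Since every rule above follows the same template, generating the rules file from name/UUID pairs avoids hand-editing mistakes such as stray spaces. A sketch (the UUIDs are the first two captured earlier; extend the list for all six disks):

```shell
# Sketch: generate 99-oracle-asmdevices.rules lines from name/UUID pairs so
# every line comes from one template (stray spaces in hand-edited rules are
# a common cause of udev binding failures).
rule_for() {
  # $1 = device name to create (e.g. asm-disk1), $2 = scsi_id RESULT string
  printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %%p", RESULT=="%s", NAME="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$2" "$1"
}

rules=$(
  rule_for asm-disk1 36000c29a89de45f738ab0cfa02b9c79e
  rule_for asm-disk2 36000c295cfaf6508afb7635d7d212ea4
)
echo "$rules"
# On the real system:
#   echo "$rules" > /etc/udev/rules.d/99-oracle-asmdevices.rules
#   /sbin/udevcontrol reload_rules && /sbin/start_udev
```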
Make sure the kernel is 2.6.18.
If the asm devices do not come up, the likely causes are:
1. A problem in the rules file itself: no stray spaces are allowed anywhere inside 99-oracle-asmdevices.rules (tested: extra spaces break the match).
2. The disks had been partitioned: to test, delete the partitions and reboot.
3. Ownership had to be granted with chown grid.oinstall /dev/sdb* sdc* sdd* before the disks became visible.
If /dev/asm-sdb does not appear, the udev binding did not succeed.
So: partition first, then format.
INFO: Starting Output Reader Threads for process /tmp/OraInstall2013-11-17_08-01-12PM/ext/bin/kfod
INFO: Parsing KFOD-00311: Error scanning device /dev/asm-sdc
INFO: Parsing ORA-15087: disk '/dev/asm-sdc' is formatted as an ext2/ext3 or OCFS2 file system.
INFO: Parsing KFOD-00311: Error scanning device /dev/asm-sdb
INFO: The process /tmp/OraInstall2013-11-17_08-01-12PM/ext/bin/kfod exited with code 0
INFO: Parsing ORA-15087: disk '/dev/asm-sdb' is formatted as an ext2/ext3 or OCFS2 file system.
INFO: Waiting for output processor threads to exit.
INFO: Parsing KFOD-00311: Error scanning device /dev/asm-sdd
INFO: Parsing ORA-15087: disk '/dev/asm-sdd' is formatted as an ext2/ext3 or OCFS2 file system.
INFO: Parsing KFOD-00311: Error scanning device /dev/asm-sde
INFO: Parsing ORA-15087: disk '/dev/asm-sde' is formatted as an ext2/ext3 or OCFS2 file system.
INFO: Parsing KFOD-00311: Error scanning device /dev/asm-sdg
INFO: Parsing ORA-15087: disk '/dev/asm-sdg' is formatted as an ext2/ext3 or OCFS2 file system.
INFO: Parsing KFOD-00311: Error scanning device /dev/asm-sdf
INFO: Parsing ORA-15087: disk '/dev/asm-sdf' is formatted as an ext2/ext3 or OCFS2 file system.
INFO: Output processor threads exited.
INFO: ... discoveryString = /dev/*
INFO: Executing [/tmp/OraInstall2013-11-17_08-01-12PM/ext/bin/kfod, nohdr=true, verbose=true, disks=all, status=true, op=disks, asm_diskstring='/dev/*']
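The ORA-15087 errors in this log mean the devices still carry an ext2/ext3 filesystem signature, so kfod will not offer them as ASM candidates. A common fix, assuming the devices really are disposable, is to zero the start of each device. The sketch below demonstrates the pattern on a scratch file; on the real system the target would be a device such as /dev/asm-sdb, and the command is destructive, so double-check the target first.

```shell
# Sketch: zero the start of a device so kfod no longer detects an ext2/ext3
# signature. Demonstrated on a scratch file; on a real system the target
# would be e.g. /dev/asm-sdb (DESTRUCTIVE -- verify the device first!).
wipe_signature() {
  # Zero the first 10 MiB, where filesystem superblocks (and ASM headers) live.
  dd if=/dev/zero of="$1" bs=1024k count=10 conv=notrunc 2>/dev/null
}

target=$(mktemp)                       # stand-in for /dev/asm-sdb
echo "fake ext3 superblock" > "$target"
wipe_signature "$target"
# After the wipe, the former content at the start of the target is gone.
```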