How to restore ASM based OCR after complete loss of the CRS diskgroup on Linux/Unix systems [ID 1062983.1]

--------------------------------------------------------------------------------
 
  Modified 12-FEB-2012     Type HOWTO     Status PUBLISHED

In this Document
  Goal
  Solution
  References

 

--------------------------------------------------------------------------------

 

Applies to:
Oracle Server - Enterprise Edition - Version: 11.2.0.1.0 to 11.2.0.2 - Release: 11.2 to 11.2
Information in this document applies to any platform.

Goal
It is not possible to directly restore a manual or automatic OCR backup if the OCR is located in an ASM disk group, because the command 'ocrconfig -restore' requires ASM to be up and running in order to restore an OCR backup to an ASM disk group. However, for ASM to be available, the CRS stack must have been started successfully. For the restore to succeed, the OCR must also not be in use (r/w), i.e. no CRS daemon may be running while the OCR is being restored.

A description of the general procedure to restore the OCR can be found in the documentation; this document explains how to recover from a complete loss of the ASM disk group that held the OCR and Voting files in an 11gR2 Grid Infrastructure environment.

Solution
When using an ASM disk group for CRS there are typically 3 different types of files located in the disk group that potentially need to be restored/recreated:

•the Oracle Cluster Registry file (OCR)
•the Voting file(s)
•the shared SPFILE for the ASM instances

The following example assumes that the OCR was located in a single disk group used exclusively for CRS. The disk group has just one disk using external redundancy.

Since the CRS disk group has been lost the CRS stack will not be available on any node.

The following settings used in the example would need to be replaced according to the actual configuration:

GRID user:                       oragrid
GRID home:                       /u01/app/11.2.0/grid ($CRS_HOME)
ASM disk group name for OCR:     CRS
ASM/ASMLIB disk name:            ASMD40
Linux device name for ASM disk:  /dev/sdh1
Cluster name:                    rac_cluster1
Nodes:                           racnode1, racnode2

 

This document assumes that the name of the OCR diskgroup remains unchanged. If a different diskgroup name has to be used, the name of the OCR diskgroup must be modified in /etc/oracle/ocr.loc on all nodes before executing the following steps.


1. Locate the latest automatic OCR backup

When using a non-shared CRS home, automatic OCR backups can be located on any node of the cluster; consequently, all nodes need to be checked for the most recent backup:

$ ls -lrt $CRS_HOME/cdata/rac_cluster1/
-rw------- 1 root root 7331840 Mar 10 18:52 week.ocr
-rw------- 1 root root 7651328 Mar 26 01:33 week_.ocr
-rw------- 1 root root 7651328 Mar 29 01:33 day.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 day_.ocr
-rw------- 1 root root 7651328 Mar 30 01:33 backup02.ocr
-rw------- 1 root root 7651328 Mar 30 05:33 backup01.ocr
-rw------- 1 root root 7651328 Mar 30 09:33 backup00.ocr
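
To compare the backups on all nodes without logging on to each one, a small loop such as the following can be used (a sketch only; it assumes passwordless ssh between racnode1 and racnode2 and the GRID home from the settings above):

for node in racnode1 racnode2; do
    echo "### OCR backups on ${node}"
    ssh ${node} ls -lrt /u01/app/11.2.0/grid/cdata/rac_cluster1/
done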

2. Make sure the Grid Infrastructure is shutdown on all nodes

Given that the OCR diskgroup is missing, the GI stack will not be functional on any node; however, various daemon processes may still be running. On each node, shut down the GI stack using the force (-f) option:
# $CRS_HOME/bin/crsctl stop crs -f
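
A check along these lines can be used to confirm that no Grid Infrastructure daemons are left running on a node (a sketch; the list of process names is not exhaustive):

# ps -ef | egrep 'ohasd|crsd|ocssd|evmd|gpnpd|gipcd' | grep -v grep

No output should be returned before proceeding.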


3. Start the CRS stack in exclusive mode

On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without the presence of a Voting disk and without the CRS daemon process (crsd.bin) running.

11.2.0.1:
# $CRS_HOME/bin/crsctl start crs -excl
...
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded


Please note:
This document assumes that the CRS diskgroup was completely lost, in which case the CRS daemon (resource ora.crsd) will terminate again due to the inaccessibility of the OCR, even if the above message indicates that the start succeeded.
If this is not the case, i.e. if the CRS diskgroup is still present (but corrupt or incorrect), the CRS daemon needs to be shut down manually using:
# $CRS_HOME/bin/crsctl stop res ora.crsd -init
otherwise the subsequent OCR restore will fail.
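
Whether ora.crsd is actually down can be checked as root:

# $CRS_HOME/bin/crsctl status resource ora.crsd -init

The resource should report STATE=OFFLINE before continuing.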


11.2.0.2:
# $CRS_HOME/bin/crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
...
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode1'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode1'
CRS-2676: Start of 'ora.drivers.acfs' on 'racnode1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'racnode1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode1'
CRS-2676: Start of 'ora.asm' on 'racnode1' succeeded

IMPORTANT:
A new option '-nocrs' has been introduced with 11.2.0.2, which prevents the start of the ora.crsd resource. It is vital that this option is specified; otherwise the failure to start the ora.crsd resource will tear down ora.cluster_interconnect.haip, which in turn will cause ASM to crash.
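
Before continuing, it can be confirmed as root that the ASM instance stayed up:

# $CRS_HOME/bin/crsctl status resource ora.asm -init

The ora.asm resource should report STATE=ONLINE on the local node.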

 

4. Label the CRS disk for ASMLIB use

If using ASMLIB, the disk to be used for the CRS disk group needs to be stamped first. As the root user:
# /usr/sbin/oracleasm createdisk ASMD40 /dev/sdh1
Writing disk header: done
Instantiating disk: done
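
The newly labelled disk can be verified with:

# /usr/sbin/oracleasm listdisks

The disk ASMD40 should appear in the output.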


5. Create the CRS diskgroup via sqlplus

The disk group can now be (re-)created via sqlplus as the grid user. The compatible.asm attribute must be set to 11.2 in order for the disk group to be used by CRS; the environment sketch and the create diskgroup session follow below.
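
Before invoking sqlplus, the grid user's environment has to point at the local ASM instance. A minimal sketch (the instance name +ASM1 is an assumption; adjust it to the actual instance on this node):

$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ export ORACLE_SID=+ASM1
$ export PATH=$ORACLE_HOME/bin:$PATH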
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:47:24 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create diskgroup CRS external redundancy disk 'ORCL:ASMD40' attribute 'COMPATIBLE.ASM' = '11.2';

Diskgroup created.
SQL> exit
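
As an optional check, the grid user can confirm that the new disk group is mounted, e.g. using asmcmd (same environment settings as above):

$ asmcmd lsdg

The CRS disk group should be listed with state MOUNTED.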

6. Restore the latest OCR backup

Now that the CRS disk group is created and mounted, the OCR can be restored; this must be done as the root user:
# cd $CRS_HOME/cdata/rac_cluster1/
# $CRS_HOME/bin/ocrconfig -restore backup00.ocr
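
The integrity of the restored OCR can be checked as root:

# $CRS_HOME/bin/ocrcheck

ocrcheck should report the +CRS disk group as OCR location and a successful integrity check.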

7. Start the CRS daemon on the current node (11.2.0.1 only !)

Now that the OCR has been restored, the CRS daemon can be started; this is needed to recreate the Voting file. Skip this step for 11.2.0.2.0.
# $CRS_HOME/bin/crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'racnode1'
CRS-2676: Start of 'ora.crsd' on 'racnode1' succeeded


8. Recreate the Voting file

The Voting file needs to be initialized in the CRS disk group:
# $CRS_HOME/bin/crsctl replace votedisk +CRS
Successful addition of voting disk 00caa5b9c0f54f3abf5bd2a2609f09a9.
Successfully replaced voting disk group with +CRS.
CRS-4266: Voting file(s) successfully replaced
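
The result can be cross-checked with:

# $CRS_HOME/bin/crsctl query css votedisk

which should list one Voting file located in the CRS disk group.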


9. Recreate the SPFILE for ASM (optional)

 

Please note:

If you are
- not using an SPFILE for ASM,
- not using a shared SPFILE for ASM, or
- using a shared SPFILE not stored in ASM (e.g. on a cluster file system),
this step can be skipped.

Also use extra care with regard to the asm_diskstring parameter, as it impacts the discovery of the Voting disks.

Please verify the previous settings using the ASM alert log.


Prepare a pfile (e.g. /tmp/asm_pfile.ora) with the ASM startup parameters; these may vary from the example below. If in doubt, consult the ASM alert log, as the ASM instance startup should list all non-default parameter values. Please note that the last startup of ASM (in step 3, via the exclusive CRS start) will not have used an SPFILE, so a startup prior to the loss of the CRS disk group would need to be located.
*.asm_power_limit=1
*.diagnostic_dest='/u01/app/oragrid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'

Now the SPFILE can be created using this PFILE:
$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 30 11:52:39 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> create spfile='+CRS' from pfile='/tmp/asm_pfile.ora';

File created.
SQL> exit
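
After the full restart of the stack (steps 10 to 12), the grid user can optionally confirm that the new SPFILE is registered, e.g.:

$ asmcmd spget

which should return a path inside the +CRS disk group.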


10. Shutdown CRS

Since CRS is running in exclusive mode, it needs to be shut down to allow CRS to run on all nodes again. Use of the force (-f) option may be required:
# $CRS_HOME/bin/crsctl stop crs -f
...
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode1' has completed
CRS-4133: Oracle High Availability Services has been stopped.


11. Rescan ASM disks

If using ASMLIB, rescan all ASM disks on each node as the root user:
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMD40"


12. Start CRS
As the root user, start CRS on all cluster nodes:
# $CRS_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
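
If passwordless ssh as root is configured between the nodes, the startup can be submitted from a single node with a small loop (a sketch; adjust the node list as needed):

for node in racnode1 racnode2; do
    ssh ${node} /u01/app/11.2.0/grid/bin/crsctl start crs
done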


13. Verify CRS

To verify that CRS is fully functional again:
# $CRS_HOME/bin/crsctl check cluster -all
**************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

# $CRS_HOME/bin/crsctl status resource -t
...


References

--------------------------------------------------------------------------------
Products
--------------------------------------------------------------------------------

•Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition

Keywords
--------------------------------------------------------------------------------
11GR2; ASM; CRS; DISKGROUP; OCR; RESTORE

Errors
--------------------------------------------------------------------------------
CRS-4537; CRS-2672; CRS-4266; CRS-4529; CRS-2676; CRS-4533; CRS-2793; CRS-4133; CRS-4123


 


 
