--------------
Environment overview:
--------------
Node names: rac001 , rac002
Database name: racdb
Memory: 800M per node
IP addresses and SCAN IPs:
Public IP: 10.161.32.177 , 10.161.32.179
Private IP: 10.1.1.11, 10.1.1.12
VIP: 10.161.32.187 , 10.161.32.189
SCAN IP: 10.161.32.191 , 10.161.32.192
Grid Infrastructure OS user: grid, primary group oinstall, secondary groups asmadmin, asmdba, asmoper
Oracle RAC OS user: oracle, primary group oinstall, secondary groups dba, oper, asmdba
Grid Infrastructure installation directories (note: this is not GRID_HOME):
ORACLE_BASE=/u01/product/grid/crs
ORACLE_HOME=/u01/product/grid/11.2.0
Note: for the grid user, ORACLE_BASE and ORACLE_HOME must not be parent and child.
Oracle RDBMS installation directories:
ORACLE_BASE=/u01/product/oracle
ORACLE_HOME=/u01/product/oracle/11.2.0/db_1
For details see:
http://www.oracledatabase12g.com/archives/oracle-installation-os-user-groups.html
Also disable the Linux firewall, turn off unneeded system services, and set the correct timezone.
1. Groups and user accounts
1.1. As root, create the OS groups (run on every node)
Before creating the groups, check /etc/group and /etc/passwd to make sure the UIDs and
GIDs are identical on every node (you can also pin the GID at creation time: groupadd -g 501 oinstall).
Per the plan:
Grid Infrastructure OS user: grid, primary group oinstall, secondary groups asmadmin, asmdba, asmoper
Oracle RAC OS user: oracle, primary group oinstall, secondary groups dba, oper, asmdba
# groupadd oinstall
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# groupadd dba
# groupadd oper
# cat /etc/group     # verify the groups were created
.....
oinstall:x:501:
asmadmin:x:502:
asmdba:x:503:
asmoper:x:504:
dba:x:505:
oper:x:506:
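The GID consistency requirement above can be enforced directly at creation time. A minimal sketch, with the GID numbers assumed from the /etc/group listing above (501-506); adjust them if your nodes already use those IDs:

```shell
#!/bin/sh
# Sketch: create the six planned groups with fixed GIDs so every node ends up identical.
# The GID numbers are assumptions taken from the /etc/group listing in this guide.
for pair in oinstall:501 asmadmin:502 asmdba:503 asmoper:504 dba:505 oper:506; do
  grp=${pair%%:*}    # group name (before the colon)
  gid=${pair##*:}    # numeric GID (after the colon)
  getent group "$grp" >/dev/null 2>&1 || groupadd -g "$gid" "$grp"
done
```

Run it as root on each node, then compare the relevant /etc/group lines between the nodes.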
1.2. Create the users that will install Grid Infrastructure and the Oracle software (run on every node)
Per the plan:
Grid Infrastructure OS user: grid, primary group oinstall, secondary groups asmadmin, asmdba, asmoper
Oracle RAC OS user: oracle, primary group oinstall, secondary groups dba, oper, asmdba
# useradd -g oinstall -G asmadmin,asmdba,asmoper grid
# useradd -g oinstall -G dba,oper,asmdba oracle
1.3. Set passwords for the grid and oracle users (run on every node)
# passwd oracle
# passwd grid
------------------------------------------------------------------------
Note:
In Oracle 10g the ASM instance is still administered through the SYSDBA role. Oracle 11g
introduces a new role, SYSASM, dedicated to ASM administration; it is effectively the
SYSDBA role for ASM. In 11g RAC, after logging in as the grid user you can still connect
AS SYSDBA and view the ASM instance's state, but changes are only possible when connected
with the SYSASM role.
------------------------------------------------------------------------
2. Network setup
2.1 Define the cluster name. This is specific to 11g; the default is crs, and here it is set to rac.
2.2 Define each node's public hostname
This is simply the machine's hostname, e.g. rac001, rac002. Setting up NIC bonding is
recommended here (details omitted; active/passive mode).
2.3 Define the public virtual hostname, usually <public hostname>-vip or the name with a vip suffix.
2.4 Now edit /etc/hosts on all nodes (run on every node), adjusting the IP addresses accordingly.
[root@rac001 etc]# vi hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.161.32.161 rac001
10.1.1.11 pri001
10.161.32.173 vip001
10.161.32.162 rac002
10.1.1.12 pri002
10.161.32.179 vip002
# Single Client Access Name (SCAN IP)
10.161.32.182 racscan1
10.161.32.184 racscan2
3. Time synchronization between the nodes
(Set this up on all nodes. This is a test environment, so the two nodes sync with each other instead of using a dedicated time server.)
Use ntpdate or rdate to keep the nodes' clocks synchronized (mind the timezone).
[root@rac01 etc]# chkconfig time-stream on
[root@rac01 etc]# date
Tue Dec 28 13:23:40 CST 2010
Then on node 2, schedule periodic synchronization against node 1:
[root@rac02 etc]# crontab -e
*/2 * * * * rdate -s 10.161.32.161
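Before trusting the cron job, the current offset between the two nodes can be measured. A sketch, assuming node 1's IP from the crontab line above and working SSH access to the peer:

```shell
#!/bin/sh
# Sketch: report the clock offset against a peer node. PEER defaults to node 1's
# IP used in the rdate crontab above; pass another host as the first argument.
PEER=${1:-10.161.32.161}
local_s=$(date +%s)
remote_s=$(ssh -o ConnectTimeout=3 -o StrictHostKeyChecking=no "$PEER" date +%s 2>/dev/null \
           || echo "$local_s")
drift=$((local_s - remote_s))
abs=${drift#-}    # absolute value of the offset in seconds
if [ "$abs" -le 5 ]; then
  echo "clock offset ${drift}s: within tolerance"
else
  echo "clock offset ${drift}s: resynchronize with: rdate -s $PEER"
fi
```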
4. Configure Linux kernel parameters (set on all nodes)
[root@rac001 etc]# vi sysctl.conf
fs.aio-max-nr=1048576
fs.file-max=6815744
kernel.shmall=2097152
kernel.shmmax=1073741824
kernel.shmmni=4096
kernel.sem=250 32000 100 128
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
[root@rac001 etc]# sysctl -p     # apply the settings
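A quick way to confirm the values actually took effect after sysctl -p. A sketch covering the single-valued parameters from the list above (kernel.sem and the port range are multi-valued and need a string comparison, so they are omitted here):

```shell
#!/bin/sh
# Sketch: verify the single-valued kernel parameters from /etc/sysctl.conf are live.
for kv in fs.aio-max-nr=1048576 fs.file-max=6815744 kernel.shmmni=4096 \
          net.core.rmem_default=262144 net.core.rmem_max=4194304; do
  key=${kv%%=*}
  want=${kv##*=}
  have=$(sysctl -n "$key" 2>/dev/null || echo 0)
  if [ "$have" -ge "$want" ] 2>/dev/null; then
    echo "$key = $have (OK)"
  else
    echo "$key = $have (below required $want)"
  fi
done
```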
5. Set shell limits for the grid and oracle users (run on every node).
5.1 Edit /etc/security/limits.conf
[root@rac01 etc]# cd /etc/security/
[root@rac01 security]# vi limits.conf
grid soft nproc 2047
grid hard nproc 32768
grid soft nofile 1024
grid hard nofile 250000
oracle soft nproc 2047
oracle hard nproc 32768
oracle soft nofile 1024
oracle hard nofile 250000
5.2 Edit /etc/pam.d/login and add the following line if it is not already present:
session required pam_limits.so
5.3 Update the default shell startup file by adding the following to /etc/profile:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
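After the next login the limits can be verified per user. A small sketch; the expected minimums are the values written to /etc/profile above:

```shell
#!/bin/sh
# Sketch: check that the current session picked up the planned shell limits.
check_limit() {
  # $1 = actual value, $2 = expected minimum, $3 = label.
  # "unlimited" also counts as satisfying the minimum.
  if [ "$1" = unlimited ] || [ "$1" -ge "$2" ] 2>/dev/null; then
    echo "$3: $1 (OK)"
  else
    echo "$3: $1 (want >= $2)"
  fi
}
check_limit "$(ulimit -u)" 16384 "max user processes"
check_limit "$(ulimit -n)" 65536 "open files"
```

Run it in a fresh login shell as grid, and again as oracle.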
5.4 Disable SELinux (set on every node) by appending selinux=0 to the kernel line:
# vi /etc/grub.conf
default=3
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-128.el5xen)
root (hd0,0)
kernel /xen.gz-2.6.18-128.el5 selinux=0
6. Create the Oracle Inventory directory (run on every node) ---- this step may be optional ----
[root@rac01 u01]# mkdir -p /u01/product/oraInventory
[root@rac01 u01]# chown -R grid:oinstall /u01/product/oraInventory
[root@rac01 u01]# chmod -R 775 /u01/product/oraInventory/
7. Create the Oracle Grid Infrastructure home directories (on every node)
Planned directories:
Grid Infrastructure installation directories (note: this is not GRID_HOME):
ORACLE_BASE=/u01/product/grid/crs
ORACLE_HOME=/u01/product/grid/11.2.0
Note: for the grid user, ORACLE_BASE and ORACLE_HOME must not be parent and child.
Also note:
An 11g single-instance database that uses ASM also requires Grid Infrastructure, and there
its home must be placed under ORACLE_BASE. For 11g RAC the opposite holds: the grid home
must live somewhere else, e.g. /u01/grid.
# mkdir -p /u01/product/grid/crs
# mkdir -p /u01/product/grid/11.2.0
# chown -R grid.oinstall /u01/product/grid/crs
# chown -R grid.oinstall /u01/product/grid/11.2.0
# chmod -R 775 /u01/product/grid/crs
# chmod -R 775 /u01/product/grid/11.2.0
8. Create the Oracle Base directory (on every node)
Plan:
Oracle RDBMS installation directories:
ORACLE_BASE=/u01/product/oracle
ORACLE_HOME=/u01/product/oracle/11.2.0/db_1
# mkdir -p /u01/product/oracle
Skip this step for now: # mkdir /u01/product/oracle/cfgtoollogs
-- ensures dbca can run once the software is installed
# chown -R oracle.oinstall /u01/product/oracle
# chmod -R 775 /u01/product/oracle
9. Create the Oracle RDBMS home directory (run on all nodes)
# mkdir -p /u01/product/oracle/11.2.0/db_1
# chown -R oracle.oinstall /u01/product/oracle/11.2.0/db_1
# chmod -R 775 /u01/product/oracle/11.2.0/db_1
10. Stage the Oracle Grid Infrastructure and RDBMS software (on the primary node).
11. Check the required OS rpm packages (run on all nodes)
This guide uses Linux AS 5.4 32-bit. On a 64-bit system, check the packages below; where a
name appears twice, the entry marked "(32 bit)" is the 32-bit package and the other is the
64-bit one. On a 32-bit OS, only the 32-bit variant of each duplicated name is required.
Note that the version suffixes differ between Linux releases.
binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32 bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel-3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
How to check:
[root@rac02 grid]# rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
then continue entering:
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
expat \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
pdksh \
sysstat \
unixODBC \
unixODBC-devel
The result:
[root@rac01 u01]# rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
> compat-libstdc++-33 \
> elfutils-libelf \
> elfutils-libelf-devel \
> expat \
> gcc \
> gcc-c++ \
> glibc \
> glibc-common \
> glibc-devel \
> glibc-headers \
> ksh \
> libaio \
> libaio-devel \
> libgcc \
> libstdc++ \
> libstdc++-devel \
> make \
> pdksh \
> sysstat \
> unixODBC \
> unixODBC-devel
binutils-2.17.50.0.6-9.el5 (i386)
compat-libstdc++-33-3.2.3-61 (i386)
elfutils-libelf-0.137-3.el5 (i386)
elfutils-libelf-devel-0.137-3.el5 (i386)
expat-1.95.8-8.2.1 (i386)
gcc-4.1.2-44.el5 (i386)
gcc-c++-4.1.2-44.el5 (i386)
glibc-2.5-34 (i686)
glibc-common-2.5-34 (i386)
glibc-devel-2.5-34 (i386)
glibc-headers-2.5-34 (i386)
ksh-20080202-2.el5 (i386)
libaio-0.3.106-3.2 (i386)
libaio-devel-0.3.106-3.2 (i386)
libgcc-4.1.2-44.el5 (i386)
libstdc++-4.1.2-44.el5 (i386)
libstdc++-devel-4.1.2-44.el5 (i386)
make-3.81-3.el5 (i386)
package pdksh is not installed     <-- pdksh is missing
sysstat-7.0.2-3.el5 (i386)
unixODBC-2.2.11-7.1 (i386)
unixODBC-devel-2.2.11-7.1 (i386)
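Instead of reading the long listing by eye, the missing packages can be pulled out in one pass. A sketch using the same package list (pdksh is left out, as discussed next):

```shell
#!/bin/sh
# Sketch: report which required packages are absent. rpm prints nothing useful on
# success here, so only failures are echoed.
missing=0
for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel expat \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio \
         libaio-devel libgcc libstdc++ libstdc++-devel make sysstat unixODBC \
         unixODBC-devel; do
  if ! rpm -q "$p" >/dev/null 2>&1; then
    echo "MISSING: $p"
    missing=$((missing + 1))
  fi
done
echo "$missing required package(s) missing"
```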
According to the documentation, the pdksh package does not need to be installed (ksh is present).
Set up the environment variables for the grid and oracle users.
Grid Infrastructure installation directories (note: this is not GRID_HOME):
ORACLE_BASE=/u01/product/grid/crs
ORACLE_HOME=/u01/product/grid/11.2.0
Oracle RDBMS installation directories:
ORACLE_BASE=/u01/product/oracle
ORACLE_HOME=/u01/product/oracle/11.2.0/db_1
[root@rac01 init.d]# su - grid
[grid@rac01 ~]$ cd /home/grid/
[grid@rac01 ~]$ vi .bash_profile
The grid .bash_profile below is for reference only; further settings can be added as needed.
# ------------------------------------------
# .bash_profile
# ------------------------------------------
# OS User: grid
# Application: Oracle Grid Infrastructure
# Version: Oracle 11g release 2
# ------------------------------------------
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
alias ls="ls -FA"
# ------------------------------------------
# ORACLE_SID
# ------------------------------------------
# Specifies the Oracle system identifier (SID)
# for the Automatic Storage Management (ASM) instance
# running on this node.
# Each RAC node must have a unique ORACLE_SID.
# (i.e. +ASM1, +ASM2, ...)
# ------------------------------------------
ORACLE_SID=+ASM1; export ORACLE_SID
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/product/grid/crs ; export ORACLE_BASE
ORACLE_HOME=/u01/product/grid/11.2.0 ; export ORACLE_HOME
ORACLE_TERM=vt100 ; export ORACLE_TERM
NLS_DATE_FORMAT="DD-MON-YYYY HH24:MI:SS"; export NLS_DATE_FORMAT
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/sbin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/product/common/oracle/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Add the same profile to the grid user's .bash_profile on the other nodes, remembering to
change ORACLE_SID=+ASM1 accordingly (e.g. +ASM2 on node 2).
Likewise, the oracle user that installs the Oracle software needs its own .bash_profile:
[root@rac01 init.d]# su - oracle
[oracle@rac01 ~]$ cd /home/oracle/
[oracle@rac01 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
BASH_ENV=$HOME/.bashrc
export BASH_ENV
export TEMP=/tmp
export TMPDIR=/tmp
PATH=$PATH:$HOME/bin:/bin:/sbin:/usr/bin:/usr/sbin
PATH=${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
PATH=${PATH}:/u01/product/common/oracle/bin
export PATH
ORACLE_SID=racdb1 ; export ORACLE_SID
ORACLE_BASE=/u01/product/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/11.2.0/db_1; export ORACLE_HOME
ORACLE_TERM=vt100;export ORACLE_TERM
export PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib;export LD_LIBRARY_PATH
JAVA_HOME=/usr/local/java; export JAVA_HOME
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
set -u
PS1=`hostname`'<*$ORACLE_SID*$PWD>$';export PS1
EDITOR=/bin/vi; export EDITOR
alias ll='ls -l';
alias ls='ls --color';
alias his='history';
umask 022
Set this up on the other nodes too, adjusting the ORACLE_SID accordingly.
12. Prepare shared storage for Oracle RAC
At least 2 GB of disk space is required for the Oracle Clusterware files (Oracle Cluster
Registry and voting disk), spread across three failure groups on three separate physical disks.
- All of the devices in an Automatic Storage Management disk group
should be the same size and have the same performance characteristics.
- A disk group should not contain more than one partition on a single physical disk device.
- Using logical volumes as a device in an Automatic Storage Management disk group
is not supported with Oracle RAC.
- The user account with which you perform the installation (oracle) must have
write permissions to create the files in the path that you specify.
Shared disk layout (assuming no hardware RAID 1, so a mirrored disk group is planned):
Block Device ASMlib Name Size Comments
/dev/sdb1 OCR_VOTE01 10GB ASM Diskgroup for OCR and Voting Disks
/dev/sdc1 ASM_DATA01 10GB ASM Data Diskgroup
/dev/sdd1 ASM_DATA02 10GB ASM Data Diskgroup (mirror)
/dev/sde1 ASM_FRA 4GB ASM Flash Recovery Area Diskgroup
This RAC is being installed inside virtual machines, so shared disk files must be set up.
They can be created with the commands below or added through the VMware UI; when attaching
the disks, select SCSI 1:1, SCSI 1:2 and so on, one slot per disk. With physical storage
this step is not needed.
A. Create a folder to hold the shared disk files: E:\SharedDiskASM
B. Run the following commands to generate the shared disk files:
C:\>cd C:\Program Files\VMware\VMware Workstation
vmware-vdiskmanager.exe -c -s 10Gb -a lsilogic -t 2 "E:\SharedDiskASM"\ShareDiskOCR.vmdk
vmware-vdiskmanager.exe -c -s 10Gb -a lsilogic -t 2 "E:\SharedDiskASM"\ShareDiskData01.vmdk
vmware-vdiskmanager.exe -c -s 10Gb -a lsilogic -t 2 "E:\SharedDiskASM"\ShareDiskData02.vmdk
vmware-vdiskmanager.exe -c -s 4Gb -a lsilogic -t 2 "E:\SharedDiskASM"\ShareDiskFlash.vmdk
This creates the new virtual disks. Here -s 10Gb sets the disk size, -a lsilogic selects
the LSI Logic SCSI adapter, and -t 2 preallocates the disk space; -t 0 can be used instead
so that no space is reserved up front and the file only grows as it is actually used.
You should now see the generated files in E:\SharedDiskASM.
Shut down the Linux systems and the virtual machines.
Go to each virtual machine's directory, e.g. D:\VM\Linux5_Test_ASM, edit the *.vmx file
directly, and add the lines below (on every VM node):
disk.locking = "false"
scsi1.virtualDev = "lsilogic"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1:1.deviceType = "disk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "E:\SharedDiskASM\ShareDiskOCR.vmdk"
scsi1:1.mode = "independent-persistent"
scsi1:1.redo = ""
scsi1:2.deviceType = "disk"
scsi1:2.present = "TRUE"
scsi1:2.fileName = "E:\SharedDiskASM\ShareDiskData01.vmdk"
scsi1:2.mode = "independent-persistent"
scsi1:2.redo = ""
scsi1:3.deviceType = "disk"
scsi1:3.present = "TRUE"
scsi1:3.fileName = "E:\SharedDiskASM\ShareDiskData02.vmdk"
scsi1:3.mode = "independent-persistent"
scsi1:3.redo = ""
scsi1:4.deviceType = "disk"
scsi1:4.present = "TRUE"
scsi1:4.fileName = "E:\SharedDiskASM\ShareDiskFlash.vmdk"
scsi1:4.mode = "independent-persistent"
scsi1:4.redo = ""
Note: no line in this file may appear twice, or an error is raised. Also, do not change
the file's encoding: if the editor prompts to save in another encoding such as Unicode,
the pasted text is malformed and the lines must be typed in by hand.
Finally, restart the VMware application itself (this is required) and check the Devices
section of each virtual machine: the attached disks should already be visible before the
guests are powered on. Then boot and confirm again. If the disks do not show up while the
guests are off, the syntax written to the vmx file is wrong; type the lines in manually
instead of pasting.
The disks can of course also be created through the VMware GUI; note that shared disks
must go on SCSI controller 1, not 0. The details are not covered here.
13. Partition the shared disks (run fdisk on one node only)
[root@rac001 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6 48163+ 83 Linux
/dev/sda2 7 515 4088542+ 83 Linux
/dev/sda3 516 1759 9992430 83 Linux
/dev/sda4 1760 1958 1598467+ 5 Extended
/dev/sda5 1760 1958 1598436 82 Linux swap / Solaris
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sde doesn't contain a valid partition table
[root@rac001 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
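The same dialogue has to be repeated for the other three disks, which can be scripted. A sketch that feeds fdisk the exact keystrokes from the session above (destructive; double-check the device names first):

```shell
#!/bin/sh
# Sketch: create one whole-disk primary partition on each remaining shared disk,
# feeding fdisk the same keystrokes as the interactive session above:
# n (new), p (primary), 1 (partition number), two empty lines (default
# first/last cylinder), w (write).
for dev in /dev/sdc /dev/sdd /dev/sde; do
  printf 'n\np\n1\n\n\nw\n' | fdisk "$dev" || echo "fdisk $dev failed"
done
```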
Then partition /dev/sdc, /dev/sdd and /dev/sde the same way.
The final result:
[root@rac001 ~]# fdisk -l
Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 6 48163+ 83 Linux
/dev/sda2 7 515 4088542+ 83 Linux
/dev/sda3 516 1759 9992430 83 Linux
/dev/sda4 1760 1958 1598467+ 5 Extended
/dev/sda5 1760 1958 1598436 82 Linux swap / Solaris
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 1305 10482381 83 Linux
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 1305 10482381 83 Linux
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 1305 10482381 83 Linux
Disk /dev/sde: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sde1 1 522 4192933+ 83 Linux
[root@rac001 ~]#
Reboot nodes 1 and 2 and verify that the disks come up correctly on both.
14. Install and configure ASMLib (install on every node)
Download the ASMLib packages from the ASMLib page on Oracle OTN. Check the kernel version
with uname -a; the ASMLib driver version must match it.
[root@rac002 ~]# uname -a
Linux rac002 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:54 EDT 2009 i686 i686 i386 GNU/Linux
[root@rac001 etc]# cat redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)
Check whether it is already installed:
[root@rac01 ~]# rpm -qa | grep oracleasm
The matching ASMLib packages are as follows (which is also the installation order):
oracleasm-support-2.1.7-1.el5.i386.rpm
oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
[root@rac001 packages]# rpm -ivh oracleasm-support-2.1.7-1.el5.i386.rpm
warning: oracleasm-support-2.1.7-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [100%]
[root@rac001 packages]# rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm
warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasm-2.6.18-164.el########################################### [100%]
[root@rac001 packages]# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm
warning: oracleasmlib-2.0.4-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:oracleasmlib ########################################### [100%]
[root@rac001 packages]#
Run the same installation on the other nodes.
15. Configure ASMLib as root (run on every node)
If using user and group separation for the installation (as shown in this guide), the ASMLib driver
interface owner is grid and the group owning the driver interface is asmadmin (oracle and grid are
both members of the asmdba group). These groups were created in section 1.1. If a simpler
installation using only the oracle user is performed, the owner will be oracle and the group
owner will be dba.
[root@rac001 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac001 ~]#
Run the same command on the other nodes.
16. Use ASMLib to mark the shared disks as candidate disks; run this on one
node only.
A. Create the ASM disks with ASMLib. The syntax is:
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
where disk_name is the name you choose for the ASM disk (letters, digits and underscores
only), e.g. OCR01 or DATA01, and device_partition_name is the disk partition to mark for
ASM, e.g. /dev/sdb1, /dev/sdc1.
If you made a mistake or need to unmark a disk, run:
# /usr/sbin/oracleasm deletedisk disk_name
Now mark the shared disks.
Per the plan:
Block Device ASMlib Name Size Comments
/dev/sdb1 OCR_VOTE01 10GB ASM Diskgroup for OCR and Voting Disks
/dev/sdc1 ASM_DATA01 10GB ASM Data Diskgroup
/dev/sdd1 ASM_DATA02 10GB ASM Data Diskgroup (mirror)
/dev/sde1 ASM_FRA 4GB ASM Flash Recovery Area Diskgroup
# /usr/sbin/oracleasm createdisk OCR_VOTE01 /dev/sdb1
# /usr/sbin/oracleasm createdisk ASM_DATA01 /dev/sdc1
# /usr/sbin/oracleasm createdisk ASM_DATA02 /dev/sdd1
# /usr/sbin/oracleasm createdisk ASM_FRA /dev/sde1
The output is as follows:
[root@rac001 ~]# /usr/sbin/oracleasm createdisk OCR_VOTE01 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@rac001 ~]# /usr/sbin/oracleasm createdisk ASM_DATA01 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@rac001 ~]# /usr/sbin/oracleasm createdisk ASM_DATA02 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@rac001 ~]# /usr/sbin/oracleasm createdisk ASM_FRA /dev/sde1
Writing disk header: done
Instantiating disk: done
B. After all the ASM disks for the RAC have been created on node 1, confirm their availability with the listdisks command.
[root@rac001 ~]# /usr/sbin/oracleasm listdisks
ASM_DATA01
ASM_DATA02
ASM_FRA
OCR_VOTE01
Then, as root on all the other nodes, scan for the ASM disks that were just created with
the scandisks command; in other words, the ASM disks only need to be created on node 1.
[root@rac002 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR_VOTE01"
Instantiating disk "ASM_DATA01"
Instantiating disk "ASM_DATA02"
Instantiating disk "ASM_FRA"
Finally, check the ASM disks' availability on the other nodes with listdisks.
[root@rac002 ~]# /usr/sbin/oracleasm listdisks
ASM_DATA01
ASM_DATA02
ASM_FRA
OCR_VOTE01
17. Prepare to install Oracle Grid Infrastructure
Switch to the grid user to prepare for the Grid Infrastructure installation. First confirm
the grid user's .bash_profile on all nodes.
Install the cvuqdisk package for Linux
Install the OS package cvuqdisk on both Oracle RAC nodes. Without cvuqdisk, the Cluster
Verification Utility cannot discover shared disks, and whenever it runs (manually, or
automatically at the end of the Oracle Grid Infrastructure installation) you will receive
the error message "Package cvuqdisk not installed".
Use the cvuqdisk RPM that matches your hardware architecture (for example, x86_64 or i386).
The cvuqdisk RPM ships in the rpm directory of the Oracle Grid Infrastructure installation media.
Unzip linux_11gR2_grid.zip:
[root@rac001 packages]# unzip linux_11gR2_grid.zip
[root@rac001 rpm]# cd /home/packages/grid/rpm
[root@rac001 rpm]# ls
cvuqdisk-1.0.7-1.rpm
To install the cvuqdisk RPM, perform the following steps:
As the grid user, copy the cvuqdisk package from node 1 to node 2.
Log in to both nodes as root.
Set the environment variable CVUQDISK_GRP to the group that will own cvuqdisk (oinstall in this guide):
[root@rac001 rpm]# export CVUQDISK_GRP=oinstall
[root@rac001 rpm]# rpm -ihv cvuqdisk-1.0.7-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
[root@rac001 rpm]#
[root@rac002 home]# export CVUQDISK_GRP=oinstall
[root@rac002 home]# rpm -ivh cvuqdisk-1.0.7-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
Use the CVU to verify that the Oracle Clusterware requirements are met (optional); the text below is informational.
Running the Cluster Verification Utility before launching the Oracle installer is not
mandatory. Starting with Oracle Clusterware 11g Release 2, Oracle Universal Installer
(OUI) detects unmet minimum installation requirements and creates shell scripts, called
fixup scripts, to finish the outstanding system configuration steps. If OUI finds
incomplete tasks, it generates a fixup script (runfixup.sh), which can be run during the
Oracle Grid Infrastructure installation after clicking the Fix and Check Again button.
You can also have the CVU generate fixup scripts before installation.
If you decide to run the CVU yourself, remember to run it as the grid user on the node
where the Oracle installation will be performed (rac01). SSH user equivalence must also
be configured for the grid user; if you plan to let OUI configure SSH connectivity
instead, the CVU run will fail before it has a chance to perform any of its critical
checks and generate fixup scripts.
Here we first set up SSH connectivity between the nodes the traditional way (as the grid user).
On node 1:
[root@rac001 rpm]# su - grid
[grid@rac001 ~]$
[grid@rac001 ~]$ cd /home/grid/
[grid@rac001 ~]$ mkdir .ssh
mkdir: cannot create directory `.ssh': File exists
[grid@rac001 ~]$ chmod 700 .ssh/
[grid@rac001 ~]$
Likewise on node 2:
[root@rac002 home]# su - grid
[grid@rac002 ~]$ cd /home/grid/
[grid@rac002 ~]$ mkdir .ssh
[grid@rac002 ~]$ chmod 700 .ssh
[grid@rac002 ~]$
Back on node 1, generate the key pairs:
[grid@rac001 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
57:eb:a8:61:e2:5f:a0:ed:0e:7d:4a:3d:41:df:fb:b1 grid@rac001
[grid@rac001 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
08:7a:8c:74:4b:cd:34:fd:4a:eb:88:b4:fa:58:7d:0f grid@rac001
[grid@rac001 ~]$
Likewise on node 2:
[grid@rac002 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
d0:d2:58:0f:b5:93:f2:76:f0:9d:b5:d6:45:cc:72:f5 grid@rac002
[grid@rac002 ~]$
[grid@rac002 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
6b:4b:12:50:c9:00:35:24:ab:ce:6c:ba:98:49:7f:2b grid@rac002
On node 1, run:
[grid@rac001 ~]$ cd /home/grid/.ssh/
[grid@rac001 .ssh]$ ssh rac001 cat /home/grid/.ssh/id_rsa.pub >>authorized_keys
The authenticity of host 'rac001 (10.161.32.161)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac001,10.161.32.161' (RSA) to the list of known hosts.
grid@rac001's password:
[grid@rac001 .ssh]$ ssh rac001 cat /home/grid/.ssh/id_dsa.pub >>authorized_keys
[grid@rac001 .ssh]$ ssh rac002 cat /home/grid/.ssh/id_rsa.pub >>authorized_keys
The authenticity of host 'rac002 (10.161.32.162)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac002' (RSA) to the list of known hosts.
grid@rac002's password:
[grid@rac001 .ssh]$ ssh rac002 cat /home/grid/.ssh/id_dsa.pub >>authorized_keys
grid@rac002's password:
[grid@rac001 .ssh]$
Copy the generated authorized_keys from node 1 to node 2.
[grid@rac001 .ssh]$ cd /home/grid/.ssh/
[grid@rac001 .ssh]$ scp authorized_keys rac002:/home/grid/.ssh/
grid@rac002's password:
authorized_keys 100% 1988 1.9KB/s 00:00
[grid@rac001 .ssh]$
On both node 1 and node 2, chmod authorized_keys:
[grid@rac001 .ssh]$ cd /home/grid/.ssh/
[grid@rac001 .ssh]$ chmod 600 authorized_keys
[grid@rac002 .ssh]$ cd /home/grid/.ssh/
[grid@rac002 .ssh]$ chmod 600 authorized_keys
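At this point every host alias should be reachable without a password. A sketch that touches each alias once as the grid user, so no first-connection host-key prompt is left behind for the installer (aliases assumed from the /etc/hosts set up earlier):

```shell
#!/bin/sh
# Sketch: prime SSH to every cluster alias; accepting the host key here means
# later tools are never stopped by a confirmation prompt.
for h in rac001 rac002 pri001 pri002; do
  ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 "$h" date \
    || echo "ssh to $h failed"
done
```

Run it on each node; every line of output should be a date, not a failure message.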
Test SSH connectivity between nodes 1 and 2.
On node 1:
[root@rac001 ~]# su - grid
[grid@rac001 ~]$ ssh rac001 date
Thu Nov 10 16:47:25 CST 2011
[grid@rac001 ~]$ ssh rac002 date
Thu Nov 10 16:47:28 CST 2011
[grid@rac001 ~]$ ssh pri001 date
The authenticity of host 'pri001 (10.1.1.11)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pri001,10.1.1.11' (RSA) to the list of known hosts.
Thu Nov 10 16:47:37 CST 2011
[grid@rac001 ~]$ ssh pri002 date
The authenticity of host 'pri002 (10.1.1.12)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pri002,10.1.1.12' (RSA) to the list of known hosts.
Thu Nov 10 16:47:43 CST 2011
[grid@rac001 ~]$
On node 2:
[root@rac002 ~]# su - grid
[grid@rac002 ~]$ ssh rac001 date
The authenticity of host 'rac001 (10.161.32.161)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac001,10.161.32.161' (RSA) to the list of known hosts.
Thu Nov 10 16:48:40 CST 2011
[grid@rac002 ~]$ ssh rac002 date
The authenticity of host 'rac002 (10.161.32.162)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac002,10.161.32.162' (RSA) to the list of known hosts.
Thu Nov 10 16:48:45 CST 2011
[grid@rac002 ~]$ ssh pri001 date
The authenticity of host 'pri001 (10.1.1.11)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pri001,10.1.1.11' (RSA) to the list of known hosts.
Thu Nov 10 16:48:54 CST 2011
[grid@rac002 ~]$ ssh pri002 date
The authenticity of host 'pri002 (10.1.1.12)' can't be established.
RSA key fingerprint is bf:17:7b:22:7a:fe:31:67:5e:7d:b2:a2:15:66:cf:4e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'pri002,10.1.1.12' (RSA) to the list of known hosts.
Thu Nov 10 16:49:01 CST 2011
Run the CVU manually to verify the Oracle Clusterware requirements (on one node):
From the directory where the grid software was unzipped, run runcluvfy.sh:
[root@rac001 ~]# su - grid
[grid@rac001 ~]$ cd /home/packages/grid/
[grid@rac001 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac001,rac002 -fixup -verbose
Review the CVU report. With the configuration described here, it should find only one error:
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Comment
---------------- ------------ ------------ ------------ ----------------
rac02 yes yes no failed
rac01 yes yes no failed
.......
This check fails because role-separated groups and users were created following the job
role separation model, and the CVU does not recognize this type of configuration: it
assumes the grid user is always a member of the dba group. This failed check can safely
be ignored. All other checks performed by the CVU should report "passed" before you
continue with the Oracle Grid Infrastructure installation.
As the output shows, essentially everything is passed. If the NTP time synchronization
check fails, rename /etc/ntp.conf (for example to ntp.conf.bak, restoring it after the
installation); as long as the clocks are in sync, that is sufficient.
Then use the CVU on node 1 to verify the hardware and operating system setup.
Again, run the following on the rac01 node as the grid user (with user equivalence configured):
[root@rac001 ~]# su - grid
[grid@rac001 grid]$ cd /home/packages/grid/
[grid@rac001 grid]$ ./runcluvfy.sh stage -post hwos -n rac001,rac002 -verbose
Review the CVU report. All checks performed by the CVU should report "passed" before you
continue with the Oracle Grid Infrastructure installation.
18. 开始安装 Grid Infrastructure
需要图形界面安装 Grid Infrastructure 软件,这里使用 VNC。以 grid 用户登陆,进入到 grid
infrastructure 解压后的目录 , 运行$./runInstaller
A. 选择Install and Configure Grid Infrastructure for a Cluster . 下一步。
注意: 11g要使用ASM,即使单机也需要安装Grid Infrastructure.
B. 选择高级安装 - Advanced Installation .
C. 选择需要的语言,这里我们全选了。
D. 填写Cluster Name为rac, SCAN Name为racscan(与/etc/hosts中设置匹配),port=1521.
不配置GNS.
E. 配置节点信息: rac001, vip001 ; rac002, vip002 .
F. 设置网卡(按照/etc/hosts中IP对应设置,多余的都设置为Do Not Use).
G. Storage项选择ASM .
H. 创建ASM磁盘组:注意这里high表示5个镜像,Normal表示3个镜像,我们选择External,因为
我们测试只有一个磁盘组,没有设置failure group, 生产库一般最好选择Normal.
Disk Group Name : OCR_VOTE (自己起名字),选择External, Add Disks部分,选择
Candidate Disks及对应的ORCL: OCR_VOTE01 .
备注:
(1). 如果是用来放置ocr/vd的diskgroup,那么external,normal,high 对应的failgroup至少
为1,3,5个,也就是至少需要1,3,5个disk
(2). 如果是普通的ASM 用来放置data file的diskgroup,那么external,normal,high对应的
failgroup至少为1,2,3个,也就是至少需要1,2,3个disk
I. Set the passwords for the ASM instance's SYS and ASMSNMP users.
J. Choose not to use IPMI.
K. Operating system groups: asmdba, asmoper, asmadmin.
L. The grid user's ORACLE_BASE and ORACLE_HOME, as set in the grid user's .bash_profile:
/u01/product/grid/crs , /u01/product/grid/11.2.0
M. Oracle Inventory: /u01/product/grid/oraInventory; pay attention to the permissions on these directories.
O. Copy and install the files; run the orainstRoot.sh and root.sh scripts on both nodes in order.
Note: root.sh takes a long time to run, so it is best to run it from a VNC session on the local machine so the session is not disconnected.
If something goes wrong partway through and grid needs to be uninstalled, it can be cleaned out completely with:
[grid@rac001 ~]$ cd /u01/grid/
[grid@rac001 grid]$ cd deinstall/
[grid@rac001 deinstall]$ ./deinstall
If a problem during installation forces you to reinstall the grid infrastructure software, the area holding the OCR and voting disk (ASM) must also be wiped:
[root@rac001 ~]# dd if=/dev/zero of=/dev/sdb1 bs=10M count=10
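The dd command above zeroes the first 100 MB of the device, destroying its ASM header. The same command shape can be rehearsed safely against a scratch file first (only the target path differs):

```shell
# Demo of the header wipe against a throwaway file instead of /dev/sdb1.
# Substitute the real ASM device only when you truly intend to destroy it.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=10 2>/dev/null   # 10 x 1M here; 10 x 10M in the real case
```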
P. Once execution finishes, OK. Near the end of the installation an INS-20802 error may be reported; it can be skipped.
On nodes 1 and 2, check the status of the services with crs_stat -t (oc4j and gsd being OFFLINE is normal):
[root@rac002 oracle]# su - grid
[grid@rac002 ~]$
[grid@rac002 ~]$ crs_stat -t
Name Type Target State Host
-------------- -------------- ------- ------- ------
ora....ER.lsnr ora....er.type ONLINE ONLINE rac01
ora....N1.lsnr ora....er.type ONLINE ONLINE rac01
ora....VOTE.dg ora....up.type ONLINE ONLINE rac01
ora.asm ora.asm.type ONLINE ONLINE rac01
ora.eons ora.eons.type ONLINE ONLINE rac01
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac01
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac01
ora....SM1.asm application ONLINE ONLINE rac01
ora....01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application OFFLINE OFFLINE
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip ora....t1.type ONLINE ONLINE rac01
ora....SM2.asm application ONLINE ONLINE rac02
ora....02.lsnr application ONLINE ONLINE rac02
ora.rac02.gsd application OFFLINE OFFLINE
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip ora....t1.type ONLINE ONLINE rac02
ora....ry.acfs ora....fs.type ONLINE ONLINE rac01
ora.scan1.vip ora....ip.type ONLINE ONLINE rac01
Note:
In 11gR2 the oc4j and gsd resources are disabled by default.
oc4j is a resource used for WLM (Workload Management), which only becomes available in 11.2.0.2.
gsd is the CRS module used to communicate with 9i RAC, kept only for backward compatibility; it does not affect performance. Do not delete these resources and do not try to start them; simply ignore them.
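Since only gsd and oc4j are expected to be OFFLINE, the crs_stat -t output can be sanity-checked with a small filter (a sketch, shown here against captured sample rows; on a live cluster you would pipe crs_stat -t into the same awk):

```shell
# crs_stat -t columns: Name Type Target State Host ($4 is the State column).
# Print resources that are OFFLINE but are NOT the expected gsd/oc4j ones.
filter_offline() {
    awk '$4 == "OFFLINE" && $1 !~ /gsd|oc4j/ { print $1 }'
}

healthy='ora.gsd ora.gsd.type OFFLINE OFFLINE
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac01'

broken='ora.rac01.vip ora....t1.type ONLINE OFFLINE'

unexpected=$(printf '%s\n' "$healthy" | filter_offline)   # empty: all is well
caught=$(printf '%s\n' "$broken" | filter_offline)        # flags the dead VIP
```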
19. Install the Oracle RDBMS software
Create a VNC session as the oracle user so that no user switching is needed after login (switching to oracle from root or another user can cause display problems with the graphical installer).
[root@rac002 u01]# su - oracle
[oracle@rac002 ~]$
[oracle@rac001 database]$ pwd
/u01/packages/database
[oracle@rac001 database]$
[oracle@rac001 database]$ ./runInstaller
Once the graphical interface appears:
A. Enter an email address. Do not select the security option below it, and do not configure a proxy server.
B. Select Install database software only.
C. Select Real Application Clusters database installation; both node names should be visible. Click the SSH Connectivity button, enter the oracle user's password, click Setup, and after it succeeds click Test.
D. Select the languages.
E. Install the Enterprise Edition.
F. Set ORACLE_BASE and ORACLE_HOME according to the oracle user's .bash_profile.
G. Select the operating system groups: dba, oinstall.
H. Start the installation.
I. When prompted, run root.sh on each node in order.
J. The installer reports success.
20. Run ASMCA to create the disk groups.
Log in as the grid user to configure the disk groups with ASMCA; since this is a graphical tool, use the grid user's VNC session.
[grid@rac001 bin]$ pwd
/u01/grid/11.2.0/bin
[grid@rac001 bin]$ ./asmca
The graphical interface opens on Disk Groups, where the OCR_VOTE group configured earlier is already listed. Click Create below to build the ASM disk groups for the data files and the flash recovery area.
Disk group name ORADATA, redundancy External, select disk ORCL:ASMDATA01, click OK.
Click Create again:
Disk group name ORAFLASH, redundancy External, disk ORCL:ASMDATA02, click OK.
When everything is OK, click Quit to exit.
21. Run DBCA to create the database.
Log in as the oracle user and run dbca to create the database; here we log in through the VNC session created for the oracle user.
[oracle@rac001 bin]$ dbca
A. Create a RAC database.
B. Create a Database.
C. Select Custom Database.
D. Configuration type Admin-Managed; both the global database name and the SID prefix are racdb (as set in the oracle user's .bash_profile); in the node section, Select All.
E. Accept the defaults for the Enterprise Manager section.
F. This is a test setup, so use a single password for all accounts.
G. Storage type: ASM; select "Use Common Location for All Database Files"; Database Files Location: +ORADATA; a dialog pops up asking for the ASMSNMP password.
H. Select Specify Flash Recovery Area; the path is +ORAFLASH with a size of 4977M. Enable automatic archiving and set the archive destination to +ORADATA/RACDB/arch.
Tip: if only +ORADATA is specified, the archived logs are created as OMF files and do not follow the format set in the initialization parameters. It is better to create an ASM directory for the archived logs manually after the installation so that the file names follow the format rule. Ignore the warning message.
I. Accept the default database components.
J. For memory use AMM; on the Sizing tab use a 16K block size; character set AL32UTF8; connection mode Dedicated Server.
K. Configure the control files, tablespaces, and redo log groups.
L. Installation complete.
22. Check the cluster resources after the database is created.
[grid@rac001 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE rac01
ora....N1.lsnr ora....er.type ONLINE ONLINE rac01
ora....VOTE.dg ora....up.type ONLINE ONLINE rac01
ora.ORADATA.dg ora....up.type ONLINE ONLINE rac01
ora....LASH.dg ora....up.type ONLINE ONLINE rac01
ora.asm ora.asm.type ONLINE ONLINE rac01
ora.eons ora.eons.type ONLINE ONLINE rac01
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac01
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac01
ora....SM1.asm application ONLINE ONLINE rac01
ora....01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application OFFLINE OFFLINE
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip ora....t1.type ONLINE ONLINE rac01
ora....SM2.asm application ONLINE ONLINE rac02
ora....02.lsnr application ONLINE ONLINE rac02
ora.rac02.gsd application OFFLINE OFFLINE
ora.rac02.ons application ONLINE ONLINE rac02
ora.rac02.vip ora....t1.type ONLINE ONLINE rac02
ora.racdb.db ora....se.type ONLINE ONLINE rac01
ora....ry.acfs ora....fs.type ONLINE ONLINE rac01
ora.scan1.vip ora....ip.type ONLINE ONLINE rac01
Problem encountered and resolved:
Later testing showed that archived logs were not being written to the archive destination, probably because of a problem when that path was specified. The archive directory is recreated here with ASMCMD, and the archive destination in the initialization parameters is set again.
[root@rac001 ~]# su - grid
[grid@rac001 ~]$ id
uid=501(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),506(asmdba),507(asmoper)
[grid@rac001 ~]$ asmcmd
ASMCMD> help
ASMCMD> ls
OCR_VOTE/
ORADATA/
ORAFLASH/
ASMCMD> cd oradata (ASMCMD paths are case-insensitive)
ASMCMD> ls
RACDB/
ASMCMD> cd racdb
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
control01.ctl
control02.ctl
redo01.log
redo02.log
redo03.log
redo04.log
spfileracdb.ora
Now create an arch directory in the ASM disk group to hold the archived logs.
ASMCMD> pwd (check the current directory)
+oradata/racdb
ASMCMD> mkdir arch
ASMCMD> ls
[root@rac01 ~]# su - oracle
[oracle@rac01 ~]$ sqlplus "/as sysdba"
SQL> alter system set log_archive_dest_1='LOCATION=+ORADATA/RACDB/arch';
Archive test: check the archiving results under the ASM disk group with ASMCMD; the test is OK.