Installing Oracle 19c on VirtualBox and Upgrading to 19.8 (Oracle Restart and Database): Complete Steps

Introduction: This document was compiled by Yao Yuan (姚远) of DingJia Technology (鼎甲科技) from his work notes. It covers the complete steps for installing Oracle 19.3 on VirtualBox and upgrading it to 19.8 (Oracle Restart and database), including configuring the system environment, installing grid and oracle, creating the database, and downloading and applying the patches.



Configure the System Environment

Install the Software

Download the 19.8 patch bundle from My Oracle Support (MetaLink):


This package does not include the database upgrade package.

[root@rac191 install]# ll -h
total 5.1G
drwxrwxrwx 4 root root  100 7月  16 01:43 31326369
-rw-r--r-- 1 root root 2.6G 9月  14 10:49 oracle-database-ee-19c-1.0-1.x86_64.rpm
-rw-r--r-- 1 root root  18K 9月  14 10:49 oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
-rw-r--r-- 1 root root 2.6G 9月  14 10:52 p31326369_190000_Linux-x86-64.zip
-rw-rw-r-- 1 root root 524K 7月  28 03:17 PatchSearch.xml
[root@rac191 install]#

Disable the Firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

# edit the existing line in place rather than overwriting the whole file
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0


Verify the change with getenforce.


Set the Time Zone

timedatectl list-timezones | grep Shanghai
timedatectl set-timezone Asia/Shanghai


Install oracle-database-preinstall-19c

If yum cannot find a repository for the dependency packages, mount the installation ISO and use it as a local yum repository:


[root@rac191 install]# yum localinstall ./oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm 
......
Error downloading packages:
  1:xorg-x11-xauth-1.0.9-1.el7.x86_64: [Errno 256] No more mirrors to try.
......
[root@rac191 ~]# cat /etc/yum.repos.d/my.repo 
[local]
name=local-yum
baseurl=file:///mnt/
enabled=1
gpgcheck=0
[root@rac191 ~]# 
[root@rac191 ~]# mount -o loop -t iso9660 ./CentOS-7.5-x86_64-Everything-1804.iso /mnt
mount: /dev/loop0 is write-protected, mounting read-only
[root@rac191 ~]# yum localinstall /ofsdata/install/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm  -y
......
 Verifying  : 32:bind-utils-9.9.4-61.el7.x86_64                                                                                                                                                      29/29 
Installed:
  oracle-database-preinstall-19c.x86_64 0:1.0-1.el7                                                                                                                                                         
Dependency Installed:
  bc.x86_64 0:1.06.95-13.el7            bind-libs.x86_64 32:9.9.4-61.el7        bind-utils.x86_64 32:9.9.4-61.el7        ksh.x86_64 0:20120801-137.el7           libICE.x86_64 0:1.0.9-9.el7             
  libSM.x86_64 0:1.2.2-2.el7            libX11.x86_64 0:1.6.5-1.el7             libX11-common.noarch 0:1.6.5-1.el7       libXau.x86_64 0:1.0.8-2.1.el7           libXext.x86_64 0:1.3.3-3.el7            
  libXi.x86_64 0:1.7.9-1.el7            libXinerama.x86_64 0:1.1.3-2.1.el7      libXmu.x86_64 0:1.1.2-2.el7              libXrandr.x86_64 0:1.5.1-2.el7          libXrender.x86_64 0:0.9.10-1.el7        
  libXt.x86_64 0:1.1.5-3.el7            libXtst.x86_64 0:1.2.3-1.el7            libXv.x86_64 0:1.0.11-1.el7              libXxf86dga.x86_64 0:1.1.4-2.1.el7      libXxf86misc.x86_64 0:1.0.3-7.1.el7     
  libXxf86vm.x86_64 0:1.1.4-1.el7       libdmx.x86_64 0:1.1.3-3.el7             libxcb.x86_64 0:1.12-1.el7               mailx.x86_64 0:12.5-19.el7              psmisc.x86_64 0:22.20-15.el7            
  smartmontools.x86_64 1:6.5-1.el7      xorg-x11-utils.x86_64 0:7.5-22.el7      xorg-x11-xauth.x86_64 1:1.0.9-1.el7     
Complete!
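The repo file above can also be written in one shot with a here-document. A sketch (it writes to /tmp so it is safe to try anywhere; point it at /etc/yum.repos.d/my.repo on the real host):

```shell
# write a local-ISO yum repo definition; /tmp is used here for safety,
# use /etc/yum.repos.d/my.repo on the real system
repo=/tmp/my.repo
cat > "$repo" <<'EOF'
[local]
name=local-yum
baseurl=file:///mnt/
enabled=1
gpgcheck=0
EOF
cat "$repo"
```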


Create Users and Groups (oracle and grid)

The oracle-database-preinstall package has already created the oracle user; the following commands supplement it:



groupadd -g 54327 asmdba  
groupadd -g 54328 asmoper  
groupadd -g 54329 asmadmin   
usermod -a -G asmdba,backupdba,dgdba,kmdba,racdba,oper,vboxsf oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba,racdba,vboxsf grid 
[root@oledb sf_bigfiles]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),982(vboxsf),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba),54327(asmdba)
[root@oledb sf_bigfiles]# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),982(vboxsf),54322(dba),54330(racdba),54327(asmdba),54328(asmoper),54329(asmadmin)
# set passwords for the oracle and grid users
echo "dingjia" | passwd --stdin oracle
echo "dingjia" | passwd --stdin grid
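To confirm the memberships took effect, a small helper can compare the id -nG output against the expected group list. check_groups is a made-up name, not part of any Oracle tooling:

```shell
# check_groups USER GROUP...: succeed only if USER belongs to every listed group
check_groups() {
  user="$1"; shift
  have=" $(id -nG "$user") "          # space-padded for whole-word matching
  for g in "$@"; do
    case "$have" in
      *" $g "*) ;;                    # group is present
      *) echo "missing: $g"; return 1;;
    esac
  done
  echo "ok"
}

# on the real host: check_groups oracle oinstall dba asmdba
# demo: every user belongs to its own primary group
check_groups "$(id -un)" "$(id -gn)"
```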


Create Directories (oracle and grid)

mkdir -p /u01/app/19.0.0/grid 
mkdir -p /u01/app/grid 
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/product/19.0.0/db_1
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/


Set the Maximum Number of User Processes

vi /etc/security/limits.d/20-nproc.conf

For a normal account, the max user processes value reported by ulimit -u comes by default from /etc/security/limits.d/20-nproc.conf (90-nproc.conf on CentOS 6). In that file:

# comment out this line:
#*          soft    nproc    1024
# and add this line:
* soft nproc 16384


After the change, run ulimit -a and check max user processes (-u); this is the per-user limit on how many processes or threads may run.


Edit limits.conf

vi /etc/security/limits.conf
#ORACLE SETTING
grid  soft  nproc 16384
grid  hard  nproc 16384
grid  soft  nofile 1024
grid  hard  nofile 65536
grid  soft  stack 10240
grid  hard  stack 32768
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack  10240
oracle hard stack  32768
oracle hard memlock 3145728
oracle soft memlock 3145728
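The memlock values above are in KB; 3145728 KB is exactly 3 GB of lockable memory for the oracle user (typically sized for HugePages use). The arithmetic:

```shell
# memlock in limits.conf is expressed in KB: 3 GB = 3 * 1024 * 1024 KB
echo $((3 * 1024 * 1024))   # prints 3145728
```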


Disable the avahi-daemon Service

systemctl disable avahi-daemon.socket
systemctl disable avahi-daemon.service

Adjust kernel parameters (already done by the oracle-database-preinstall package)

Add NOZEROCONF=yes (already done by the oracle-database-preinstall package)

Install the packages RAC depends on (already done by the oracle-database-preinstall package)

Set the grid User's Environment Variables

export LANG=en_US
export ORACLE_SID=+ASM
export ORACLE_TERM=xterm
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.0.0/grid
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
THREADS_FLAG=native; export THREADS_FLAG
ulimit -u 16384 -n 65536
umask 022

Set the oracle User's Environment Variables

[root@rac1 ~]# su - oracle
vi ~/.bash_profile
export LANG=en_US
export ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/db_1
export ORACLE_SID=orcl
export ORACLE_TERM=xterm
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export THREADS_FLAG=native
ulimit -u 16384 -n 65536


source .bash_profile


Set the root User's Environment Variables

Add the grid user's $ORACLE_HOME/bin to root's PATH:

vi ~/.bash_profile


export PATH=$PATH:/u01/app/19.0.0/grid/bin:$HOME/bin


Disable Transparent Huge Pages

echo never > /sys/kernel/mm/transparent_hugepage/enabled 
echo never > /sys/kernel/mm/transparent_hugepage/defrag 
cat /sys/kernel/mm/transparent_hugepage/defrag
cat /sys/kernel/mm/transparent_hugepage/enabled 
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.d/rc.local 
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.d/rc.local
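When you cat those sysfs files to verify, the active THP mode is the one shown in square brackets, e.g. "always madvise [never]". A tiny helper (thp_mode is a made-up name) to extract the active mode from such a line:

```shell
# thp_mode: print the bracketed (active) THP mode from a sysfs status line
thp_mode() {
  echo "$1" | grep -o '\[[a-z]*\]' | tr -d '[]'
}

thp_mode "always madvise [never]"    # prints: never
thp_mode "[always] madvise never"    # prints: always
```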


Install Grid Infrastructure

From the Oracle documentation:

"Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and configuration of Oracle Grid Infrastructure software is simplified with image-based installation.

To install Oracle Grid Infrastructure, create the new Grid home with the necessary user group permissions, and then extract the image file into the newly-created Grid home, and run the setup wizard to register the Oracle Grid Infrastructure product."


In 19c, the grid software image must be unzipped directly into the grid user's ORACLE_HOME directory:

[root@oledb ~]# su - grid
[grid@oledb ~]$ cd $ORACLE_HOME
[grid@oledb grid]$ pwd
/u01/app/19.0.0/grid
[grid@oledb grid]$ ls
[grid@oledb grid]$ ll
total 0
[grid@oledb grid]$ unzip -q /media/sf_bigfiles/oracle/19c/LINUX.X64_193000_grid_home.zip 
[grid@oledb grid]$ ls
addnode     css        diagnostics   has            jdbc  network  ords   precomp   rhp             sdk       tomcat        wwg
assistants  cv         dmu           hs             jdk   nls      oss    QOpatch   root.sh         slax      ucp           xag
bin         dbjava     env.ora       install        jlib  OPatch   oui    qos       root.sh.old     sqlpatch  usm           xdk
cha         dbs        evm           instantclient  ldap  opmn     owm    racg      root.sh.old.1   sqlplus   utl
clone       deinstall  gpnp          inventory      lib   oracore  perl   rdbms     rootupgrade.sh  srvm      welcome.html
crs         demo       gridSetup.sh  javavm         md    ord      plsql  relnotes  runcluvfy.sh    suptools  wlm
[grid@oledb grid]$



Install the cvuqdisk Package

[root@rac191 ~]# cd /u01/app/19.0.0/grid/cv/rpm
[root@rac191 rpm]# ll
total 12
-rw-r--r-- 1 grid oinstall 11412 3月  13 2019 cvuqdisk-1.0.10-1.rpm
[root@rac191 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]



Install the Grid Software

[grid@dell ~]$ cd $ORACLE_HOME
[grid@dell grid]$ pwd    
/u01/app/19.0.0/grid
[grid@dell grid]$ unzip -q /u02/install/LINUX.X64_193000_grid_home.zip 
[grid@rac191 grid]$ ./gridSetup.sh



Creating ASM Disks with ASMFD Fails

[root@dell ~]# /u01/app/19.0.0/grid/bin/asmcmd  afd_state  
[root@dell ~]# /u01/app/19.0.0/grid/bin/asmcmd  afd_configure
[root@dell ~]# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdf                 8:80   0   1.8T  0 disk 
└─datavg-datalv   252:3    0   1.8T  0 lvm  /u02
sdd                 8:48   0   1.8T  0 disk 
└─vg_data-lv_data 252:2    0   5.5T  0 lvm  /u01
sdb                 8:16   0   1.8T  0 disk 
└─vg_data-lv_data 252:2    0   5.5T  0 lvm  /u01
sr0                11:0    1  1024M  0 rom  
sdg                 8:96   0   1.8T  0 disk 
sde                 8:64   0   1.8T  0 disk 
sdc                 8:32   0   1.8T  0 disk 
└─vg_data-lv_data 252:2    0   5.5T  0 lvm  /u01
sda                 8:0    0 110.8G  0 disk 
├─sda2              8:2    0 109.8G  0 part 
│ ├─ol-swap       252:1    0   9.9G  0 lvm  [SWAP]
│ └─ol-root       252:0    0   100G  0 lvm  /
└─sda1              8:1    0     1G  0 part /boot
sdh                 8:112  0   1.8T  0 disk 
[root@dell ~]# export ORACLE_HOME=/u01/app/19.0.0/grid
[root@dell ~]# export ORACLE_BASE=/tmp
[root@dell ~]# dd if=/dev/zero of=/dev/sdg bs=8192 count=100
100+0 records in
100+0 records out
819200 bytes (819 kB) copied, 0.00172029 s, 476 MB/s
[root@dell ~]# 
/u01/app/19.0.0/grid/bin/asmcmd afd_label DATA1 /dev/sdb --init
/u01/app/19.0.0/grid/bin/asmcmd afd_label DATA2 /dev/sdc --init
/u01/app/19.0.0/grid/bin/asmcmd afd_label DATA3 /dev/sdd --init
/u01/app/19.0.0/grid/bin/asmcmd afd_label DATA4 /dev/sde --init
/u01/app/19.0.0/grid/bin/asmcmd afd_label DATA5 /dev/sdf --init
/u01/app/19.0.0/grid/bin/asmcmd afd_label DATA6 /dev/sdg --init
[root@dell ~]# dd if=/dev/zero of=/dev/sdh bs=8192 count=100
100+0 records in
100+0 records out
819200 bytes (819 kB) copied, 0.00135287 s, 606 MB/s
[root@dell ~]# /u01/app/19.0.0/grid/bin/asmcmd afd_label DATA1 /dev/sdh --init
[root@dell ~]#  /u01/app/19.0.0/grid/bin/asmcmd  afd_lslbl /dev/sdg
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                 /dev/sdg
[root@dell ~]#  /u01/app/19.0.0/grid/bin/asmcmd  afd_lslbl /dev/sdh
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA2                                 /dev/sdh
[root@dell ~]#





ASMFD Is Not Supported; Switch to udev

[grid@oledb bin]$ asmcmd afd_configure
[grid@oledb bin]$ asmcmd afd_state    
[grid@oledb bin]$ afdroot install
AFD-620: AFD is not supported on this operating system version: '4.14.35-2025.400.9.1.el7uek.x86_64'


[grid@oledb bin]$

Create six 4 GB disks for ASM. cd into the virtual machine's directory, then:


VBoxManage createhd --filename asm1.vdi --size 4048 --format VDI --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 4048 --format VDI --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 4048 --format VDI --variant Fixed
VBoxManage createhd --filename asm4.vdi --size 4048 --format VDI --variant Fixed
VBoxManage createhd --filename asm5.vdi --size 4048 --format VDI --variant Fixed
VBoxManage createhd --filename asm6.vdi --size 4048 --format VDI --variant Fixed
# attach the asm disks to the oledb VM
VBoxManage storageattach oledb --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi 
VBoxManage storageattach oledb --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi
VBoxManage storageattach oledb --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi
VBoxManage storageattach oledb --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi
VBoxManage storageattach oledb --storagectl "SATA" --port 5 --device 0 --type hdd --medium asm5.vdi
VBoxManage storageattach oledb --storagectl "SATA" --port 6 --device 0 --type hdd --medium asm6.vdi
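The twelve near-identical VBoxManage commands can be generated instead of typed by hand. A dry-run sketch (gen_vbox_asm_cmds is a made-up helper; review its output before piping it to sh):

```shell
# gen_vbox_asm_cmds VM N: print the VBoxManage commands that create and
# attach N fixed-size ASM disks (dry run -- review, then pipe to sh)
gen_vbox_asm_cmds() {
  vm="$1"; n="$2"
  i=1
  while [ "$i" -le "$n" ]; do
    echo "VBoxManage createhd --filename asm$i.vdi --size 4048 --format VDI --variant Fixed"
    echo "VBoxManage storageattach $vm --storagectl \"SATA\" --port $i --device 0 --type hdd --medium asm$i.vdi"
    i=$((i + 1))
  done
}

gen_vbox_asm_cmds oledb 6
```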


After restarting the virtual machine, configure udev:

echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdb
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdc
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdd
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sde
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdf
echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdg
/usr/lib/udev/scsi_id -g -u -d /dev/sdb1
/usr/lib/udev/scsi_id -g -u -d /dev/sdc1
/usr/lib/udev/scsi_id -g -u -d /dev/sdd1
/usr/lib/udev/scsi_id -g -u -d /dev/sde1
/usr/lib/udev/scsi_id -g -u -d /dev/sdf1
/usr/lib/udev/scsi_id -g -u -d /dev/sdg1
[root@oledb ~]# cat /etc/udev/rules.d/99-my-asmdevices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB17b69d2e-6454158b", SYMLINK+="asmdisks/asmdisk01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB40507920-b939a0d8", SYMLINK+="asmdisks/asmdisk02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBb7cbc841-f2643626", SYMLINK+="asmdisks/asmdisk03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB36ec9719-9e6f401b", SYMLINK+="asmdisks/asmdisk04", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBc43bdaac-7419fc69", SYMLINK+="asmdisks/asmdisk05", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB6dd8a787-c33d4bdd", SYMLINK+="asmdisks/asmdisk06", OWNER="grid", GROUP="asmadmin", MODE="0660"
partprobe
/sbin/partprobe /dev/sdb1
/sbin/partprobe /dev/sdc1
/sbin/partprobe /dev/sdd1
/sbin/partprobe /dev/sde1
/sbin/partprobe /dev/sdf1
/sbin/partprobe /dev/sdg1
Reload the udev rules:
/sbin/udevadm trigger --type=devices --action=change
Check the disks mapped out by udev:
[root@oledb ~]# ll /dev/sd*
brw-rw----. 1 root disk     8,  0 Oct 12 13:39 /dev/sda
brw-rw----. 1 root disk     8,  1 Oct 12 13:39 /dev/sda1
brw-rw----. 1 root disk     8,  2 Oct 12 13:39 /dev/sda2
brw-rw----. 1 root disk     8, 16 Oct 12 13:39 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 17 Oct 12 13:39 /dev/sdb1
brw-rw----. 1 root disk     8, 32 Oct 12 13:39 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 33 Oct 12 13:39 /dev/sdc1
brw-rw----. 1 root disk     8, 48 Oct 12 13:39 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 49 Oct 12 13:39 /dev/sdd1
brw-rw----. 1 root disk     8, 64 Oct 12 13:39 /dev/sde
brw-rw----. 1 grid asmadmin 8, 65 Oct 12 13:39 /dev/sde1
brw-rw----. 1 root disk     8, 80 Oct 12 13:39 /dev/sdf
brw-rw----. 1 grid asmadmin 8, 81 Oct 12 13:39 /dev/sdf1
brw-rw----. 1 root disk     8, 96 Oct 12 13:39 /dev/sdg
brw-rw----. 1 grid asmadmin 8, 97 Oct 12 13:39 /dev/sdg1
[root@oledb ~]# ll /dev/asmdisks/asmdisk0*
lrwxrwxrwx. 1 root root 7 Oct 12 13:39 /dev/asmdisks/asmdisk01 -> ../sdb1
lrwxrwxrwx. 1 root root 7 Oct 12 13:39 /dev/asmdisks/asmdisk02 -> ../sdc1
lrwxrwxrwx. 1 root root 7 Oct 12 13:39 /dev/asmdisks/asmdisk03 -> ../sdd1
lrwxrwxrwx. 1 root root 7 Oct 12 13:39 /dev/asmdisks/asmdisk04 -> ../sde1
lrwxrwxrwx. 1 root root 7 Oct 12 13:39 /dev/asmdisks/asmdisk05 -> ../sdg1
lrwxrwxrwx. 1 root root 7 Oct 12 13:39 /dev/asmdisks/asmdisk06 -> ../sdf1
[root@oledb ~]#
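Each rule line in 99-my-asmdevices.rules differs only in the scsi_id serial and the disk index, so generating the lines reduces copy-paste mistakes such as a dropped closing quote. A sketch (make_asm_rule is a made-up helper; the serials shown are this VM's, yours will differ):

```shell
# make_asm_rule SERIAL INDEX: print one udev rule mapping the disk with the
# given scsi_id serial to /dev/asmdisks/asmdiskNN owned by grid:asmadmin
make_asm_rule() {
  serial="$1"; idx="$2"
  printf 'KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$parent", RESULT=="%s", SYMLINK+="asmdisks/asmdisk%02d", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$serial" "$idx"
}

# one call per disk, with the serial reported by /usr/lib/udev/scsi_id
make_asm_rule "1ATA_VBOX_HARDDISK_VB17b69d2e-6454158b" 1
make_asm_rule "1ATA_VBOX_HARDDISK_VB40507920-b939a0d8" 2
```

Redirect the calls into /etc/udev/rules.d/99-my-asmdevices.rules, then run partprobe and udevadm trigger as above.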


[grid@dell ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304   1907196  1907088                0         1907088              0             N  DATA/
ASMCMD> 
[grid@dell ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dell                     STABLE
ora.LISTENER.lsnr
               ONLINE  INTERMEDIATE dell                     Not All Endpoints Re
                                                             gistered,STABLE
ora.asm
               ONLINE  ONLINE       dell                     Started,STABLE
ora.ons
               OFFLINE OFFLINE      dell                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dell                     STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       dell                     STABLE
--------------------------------------------------------------------------------
[grid@dell ~]$ 
[grid@oledb ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     24264    24104                0           24104              0             N  DATA/
[grid@oledb ~]$


Install the Oracle Database Software

Note, from the Oracle documentation:

"Starting with Oracle Database 18c, installation and configuration of Oracle Database software is simplified with image-based installation. To install Oracle Database, create the new Oracle home, extract the image file into the newly-created Oracle home, and run the setup wizard to register the Oracle Database product."


Installation

[oracle@oledb db_1]$ unzip -q /media/sf_bigfiles/oracle/19c/LINUX.X64_193000_db_home.zip      
[oracle@oledb db_1]$ pwd
/u01/app/oracle/product/19.0.0/db_1
[oracle@oledb db_1]$ ./runInstaller




[root@dell ~]# /u02/install/db1930/root.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u02/install/db1930
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Oracle Trace File Analyzer (TFA - Standalone Mode) is available at :
    /u02/install/db1930/bin/tfactl
Note :
1. tfactl will use TFA Service if that service is running and user has been granted access
2. tfactl will configure TFA Standalone Mode only if user has no access to TFA Service or TFA is not installed



Create the Database

(screenshots omitted)


Apply the Patch (Patch 31305339)

Upgrade the OPatch Tool

Upgrade OPatch for both the oracle and grid users:

[grid@oledb ~]$ cd $ORACLE_HOME
[oracle@oledb db_1]$ ./OPatch/opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
[grid@oledb grid]$ mv OPatch OPatch.bk
[grid@oledb grid]$ unzip -q /media/sf_bigfiles/oracle/19c/patch/p6880880_190000_Linux-x86-64.zip 
[grid@oledb grid]$ ./OPatch/opatch version
OPatch Version: 12.2.0.1.21
OPatch succeeded.
[grid@oledb grid]$
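The same backup-and-replace has to be repeated in the database home. A sketch of the procedure (swap_opatch is a made-up helper; the demo runs on scratch directories in /tmp that stand in for the real homes):

```shell
# swap_opatch HOME NEWDIR: back up HOME/OPatch and install NEWDIR in its place
# (NEWDIR is the OPatch/ directory already extracted from the p6880880 zip)
swap_opatch() {
  home="$1"; newdir="$2"
  mv "$home/OPatch" "$home/OPatch.bk"   # keep the old tool as a fallback
  cp -r "$newdir" "$home/OPatch"
}

# demonstrate on scratch directories (stand-ins for the real Oracle homes)
mkdir -p /tmp/demo_home/OPatch /tmp/demo_new
echo "12.2.0.1.17" > /tmp/demo_home/OPatch/version.txt
echo "12.2.0.1.21" > /tmp/demo_new/version.txt
swap_opatch /tmp/demo_home /tmp/demo_new
cat /tmp/demo_home/OPatch/version.txt    # prints: 12.2.0.1.21
```

On the real system, run it once per home as the owning user (grid, then oracle).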


Use opatchauto to Patch Both the Oracle and Grid Homes in One Pass

Start the VM


[scutech@dell vm]$ vboxmanage startvm oledb --type headless
Waiting for VM "oledb" to power on...
VM "oledb" has been successfully started.

Check the inventory


[grid@oledb grid]$ ./OPatch/opatch lsinventory -detail -oh  $ORACLE_HOME
Oracle Interim Patch Installer version 12.2.0.1.21
Copyright (c) 2020, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/19.0.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19.0.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.21
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2020-10-12_16-40-28PM_1.log
......
   Patch Location in Storage area:
     /u01/app/19.0.0/grid/.patch_storage/29401763_Apr_11_2019_22_26_25
--------------------------------------------------------------------------------
OPatch succeeded.


Patch 31305339 - GI Release Update 19.8.0.0.200714: the GI Release Update 19.8.0.0.200714 includes updates for both the Clusterware home and the Database home that can be applied in a rolling fashion.

Check for conflicts as the grid user


$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 31305339/31281355
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 31305339/31305087
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 31305339/31304218
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 31305339/31335188

Check for conflicts as the oracle user


$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 31305339/31281355
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir 31305339/31305087


Current patches for the grid user


[grid@oledb 31326369]$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)
OPatch succeeded.

Current patches for the oracle user


[oracle@oledb 31326369]$  $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
OPatch succeeded.

Apply the Patches

To patch the GI home and all Oracle RAC database homes of the same version, add OPatch to root's PATH and run opatchauto:

[root@oledb ~]# export PATH=$PATH:/u01/app/19.0.0/grid/OPatch
[root@oledb ~]# cd /media/sf_bigfiles/oracle/19c/patch/31326369
[root@oledb 31326369]# opatchauto apply ./31305339
OPatchauto session is initiated at Mon Oct 12 17:13:05 2020
System initialization log file is /u01/app/19.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-12_05-13-12PM.log.
Session log file is /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-12_05-13-18PM.log
The id for this session is 7ME5
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/19.0.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/19.0.0/db_1
Verifying SQL patch applicability on home /u01/app/oracle/product/19.0.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/19.0.0/db_1
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.0.0/grid
Patch applicability verified successfully on home /u01/app/19.0.0/grid
Preparing to bring down database service on home /u01/app/oracle/product/19.0.0/db_1
Successfully prepared home /u01/app/oracle/product/19.0.0/db_1 to bring down database service
Bringing down database service on home /u01/app/oracle/product/19.0.0/db_1
Following database has been stopped and will be restarted later during the session: orcl
Database service successfully brought down on home /u01/app/oracle/product/19.0.0/db_1
Bringing down CRS service on home /u01/app/19.0.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/oledb/crsconfig/hapatch_2020-10-12_05-16-02PM.log
CRS service brought down successfully on home /u01/app/19.0.0/grid
Start applying binary patch on home /u01/app/oracle/product/19.0.0/db_1
Binary patch applied successfully on home /u01/app/oracle/product/19.0.0/db_1
Start applying binary patch on home /u01/app/19.0.0/grid
Binary patch applied successfully on home /u01/app/19.0.0/grid
Starting CRS service on home /u01/app/19.0.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/oledb/crsconfig/hapatch_2020-10-12_05-32-49PM.log
CRS service started successfully on home /u01/app/19.0.0/grid
Starting database service on home /u01/app/oracle/product/19.0.0/db_1
Database service successfully started on home /u01/app/oracle/product/19.0.0/db_1
Preparing home /u01/app/oracle/product/19.0.0/db_1 after database service restarted
No step execution required.........
Trying to apply SQL patch on home /u01/app/oracle/product/19.0.0/db_1
SQL patch applied successfully on home /u01/app/oracle/product/19.0.0/db_1
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:oledb
SIDB Home:/u01/app/oracle/product/19.0.0/db_1
Version:19.0.0.0.0
Summary:
==Following patches were SKIPPED:
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31304218
Reason: This patch is not applicable to this specified target type - "oracle_database"
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31335188
Reason: This patch is not applicable to this specified target type - "oracle_database"
==Following patches were SUCCESSFULLY applied:
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31281355
Log: /u01/app/oracle/product/19.0.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-12_17-16-33PM_1.log
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31305087
Log: /u01/app/oracle/product/19.0.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-12_17-16-33PM_1.log
Host:oledb
SIHA Home:/u01/app/19.0.0/grid
Version:19.0.0.0.0
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31281355
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-12_17-21-45PM_1.log
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31304218
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-12_17-21-45PM_1.log
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31305087
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-12_17-21-45PM_1.log
Patch: /media/sf_bigfiles/oracle/19c/patch/31326369/31305339/31335188
Log: /u01/app/19.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-12_17-21-45PM_1.log
OPatchauto session completed at Mon Oct 12 17:53:35 2020
Time taken to complete the session 40 minutes, 30 seconds


If opatchauto is not used, you must also run datapatch to apply the SQL changes to the database; this step requires both the CDB and the PDBs to be open.


Check after patching as the grid user


[grid@oledb 31326369]$ $ORACLE_HOME/OPatch/opatch lspatches
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
OPatch succeeded.

Check after patching as the oracle user


[oracle@oledb 31326369]$  $ORACLE_HOME/OPatch/opatch lspatches
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
OPatch succeeded.

Then check the Oracle database:

SQL>  select patch_id,action,status,SOURCE_VERSION,TARGET_VERSION from dba_registry_sqlpatch;
  PATCH_ID ACTION  STATUS   SOURCE_VERSION  TARGET_VERSION
---------- ------- -------- --------------- ---------------
  29517242 APPLY   SUCCESS  19.1.0.0.0      19.3.0.0.0
  31281355 APPLY   SUCCESS  19.3.0.0.0      19.8.0.0.0


opatchauto did not apply the SQL files in the PDB!

Checking the PDB shows the SQL files were not applied:

SQL> alter session set container=orclpdb ;
Session altered.
SQL> startup;
Pluggable Database opened.
SQL> select patch_id,action,status,SOURCE_VERSION,TARGET_VERSION from dba_registry_sqlpatch;
  PATCH_ID ACTION  STATUS   SOURCE_VERSION  TARGET_VERSION
---------- ------- -------- --------------- ---------------
  29517242 APPLY   SUCCESS  19.1.0.0.0      19.3.0.0.0
SQL> quit


Apply the SQL Patch Manually

[oracle@oledb OPatch]$ ./datapatch -verbose
SQL Patching tool version 19.8.0.0.0 Production on Mon Oct 12 18:01:42 2020
Copyright (c) 2012, 2020, Oracle.  All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_1193_2020_10_12_18_01_42/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
  No interim patches found
Current state of release update SQL patches:
  Binary registry:
    19.8.0.0.0 Release_Update 200703031501: Installed
  PDB CDB$ROOT:
    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 12-OCT-20 05.53.07.454706 PM
  PDB ORCLPDB:
    Applied 19.3.0.0.0 Release_Update 190410122720 successfully on 12-OCT-20 03.48.04.950054 PM
  PDB PDB$SEED:
    Applied 19.8.0.0.0 Release_Update 200703031501 successfully on 12-OCT-20 05.53.14.495111 PM
Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    No interim patches need to be rolled back
    No release update patches need to be installed
    No interim patches need to be applied
  For the following PDBs: ORCLPDB
    No interim patches need to be rolled back
    Patch 31281355 (Database Release Update : 19.8.0.0.200714 (31281355)):
      Apply from 19.3.0.0.0 Release_Update 190410122720 to 19.8.0.0.0 Release_Update 200703031501
    No interim patches need to be applied
Installing patches...
Patch installation complete.  Total patches installed: 1
Validating logfiles...done
Patch 31281355 apply (pdb ORCLPDB): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/31281355/23688465/31281355_apply_ORCL_ORCLPDB_2020Oct12_18_02_36.log (no errors)
SQL Patching tool complete on Mon Oct 12 18:08:46 2020



Check Again

[oracle@oledb OPatch]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Oct 12 18:11:15 2020
Version 19.8.0.0.0
SQL> set linesize 200
SQL> alter session set container=orclpdb;
Session altered.
SQL> select patch_id,action,status,SOURCE_VERSION,TARGET_VERSION from dba_registry_sqlpatch;
  PATCH_ID ACTION  STATUS   SOURCE_VERSION  TARGET_VERSION
---------- ------- -------- --------------- ---------------
  29517242 APPLY   SUCCESS  19.1.0.0.0      19.3.0.0.0
  31281355 APPLY   SUCCESS  19.3.0.0.0      19.8.0.0.0
SQL>