A Remediation Case After Running ORAchk on RAC

Summary: A two-node RAC database was checked with ORAchk, and the findings were remediated as follows.


One or more network card used by clusterware do not have HOTPLUG=NO

Fix: edit the interface configuration file, add the following line, and restart the network:


sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
HOTPLUG=no
[oracle@localhost ~]$ sudo service network restart
Restarting network (via systemctl):                        [  OK  ]
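The same check can be repeated for every clusterware interface file. A minimal sketch, assuming a helper of our own (`check_hotplug` is illustrative, not part of ORAchk; the temporary file stands in for a real ifcfg file):

```shell
# Hypothetical helper: report whether an ifcfg file pins HOTPLUG=no.
check_hotplug() {
  if grep -qi '^HOTPLUG=no' "$1"; then
    echo "OK: $1"
  else
    echo "FAIL: $1"
  fi
}

# Demo with a temporary file standing in for
# /etc/sysconfig/network-scripts/ifcfg-enp0s3.
tmp=$(mktemp)
printf 'DEVICE=enp0s3\nONBOOT=yes\nHOTPLUG=no\n' > "$tmp"
check_hotplug "$tmp"
rm -f "$tmp"
```

In practice you would loop the helper over `/etc/sysconfig/network-scripts/ifcfg-*` on each node.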



Verify AUD$ and FGA_LOG$ tables use Automatic Segment Space Management

Check and fix as follows:


SQL> set linesize 200;
SQL> select t.table_name,ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');
TABLE_NAME   SEGMEN
------------ ------
AUD$         MANUAL
FGA_LOG$     MANUAL
SQL> BEGIN
       DBMS_AUDIT_MGMT.set_audit_trail_location(
         audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,  -- this moves table AUD$
         audit_trail_location_value => 'SYSAUX');
     END;
     /



PL/SQL procedure successfully completed.


SQL> BEGIN
       DBMS_AUDIT_MGMT.set_audit_trail_location(
         audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,  -- this moves table FGA_LOG$
         audit_trail_location_value => 'SYSAUX');
     END;
     /
PL/SQL procedure successfully completed.
SQL> select t.table_name,ts.segment_space_management from dba_tables t, dba_tablespaces ts where ts.tablespace_name = t.tablespace_name and t.table_name in ('AUD$','FGA_LOG$');
TABLE_NAME   SEGMEN
------------ ------
FGA_LOG$     AUTO
AUD$         AUTO
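Flagging MANUAL rows in the query output can also be scripted. A sketch, where the `sample` text is a stand-in for the real sqlplus output above:

```shell
# Stand-in for the TABLE_NAME / SEGMENT_SPACE_MANAGEMENT query output.
sample='AUD$       MANUAL
FGA_LOG$   AUTO'

# Print every table whose tablespace is not ASSM-managed.
printf '%s\n' "$sample" | awk '$2 == "MANUAL" { print $1 " is not in an ASSM tablespace" }'
```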


Database parameter CLUSTER_INTERCONNECTS is not set to the recommended value

Node 1:


[grid@rac1 ~]$ oifcfg iflist
eth0  192.168.84.0
eth1  192.168.60.0
eth1  169.254.0.0
[grid@rac1 ~]$ oifcfg getif
eth0  192.168.84.0  global  public
eth1  192.168.60.0  global  cluster_interconnect
[grid@rac1 ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:94:66:D3  
          inet addr:192.168.84.172  Bcast:192.168.84.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe94:66d3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23795204 errors:0 dropped:0 overruns:0 frame:0
          TX packets:243349 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1467346168 (1.3 GiB)  TX bytes:300383686 (286.4 MiB)
eth0:3    Link encap:Ethernet  HWaddr 00:50:56:94:66:D3  
          inet addr:192.168.84.173  Bcast:192.168.84.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth1      Link encap:Ethernet  HWaddr 00:50:56:94:52:55  
          inet addr:192.168.60.172  Bcast:192.168.60.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe94:5255/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17894262 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9998672 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:10556885940 (9.8 GiB)  TX bytes:6025968766 (5.6 GiB)
eth1:1    Link encap:Ethernet  HWaddr 00:50:56:94:52:55  
          inet addr:169.254.206.87  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2523401 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2523401 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2504861593 (2.3 GiB)  TX bytes:2504861593 (2.3 GiB)
SQL> select * from gv$cluster_interconnects;
   INST_ID NAME            IP_ADDRESS       IS_ SOURCE
---------- --------------- ---------------- --- -------
         1 eth1:1          169.254.206.87   NO
         2 eth1:1          169.254.197.14   NO


Node 2:


[grid@rac2 ~]$ oifcfg iflist
eth0  192.168.84.0
eth1  192.168.60.0
eth1  169.254.0.0
[grid@rac2 ~]$ oifcfg getif
eth0  192.168.84.0  global  public
eth1  192.168.60.0  global  cluster_interconnect
[grid@rac2 ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:94:4B:3D  
          inet addr:192.168.84.174  Bcast:192.168.84.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe94:4b3d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25223377 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1045553 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4987636818 (4.6 GiB)  TX bytes:3630397073 (3.3 GiB)
eth0:1    Link encap:Ethernet  HWaddr 00:50:56:94:4B:3D  
          inet addr:192.168.84.175  Bcast:192.168.84.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth0:2    Link encap:Ethernet  HWaddr 00:50:56:94:4B:3D  
          inet addr:192.168.84.176  Bcast:192.168.84.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
eth1      Link encap:Ethernet  HWaddr 00:50:56:94:2D:5C  
          inet addr:192.168.60.174  Bcast:192.168.60.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe94:2d5c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14705063 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13132509 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6313522258 (5.8 GiB)  TX bytes:10220055693 (9.5 GiB)
eth1:1    Link encap:Ethernet  HWaddr 00:50:56:94:2D:5C  
          inet addr:169.254.197.14  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2858036 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2858036 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:8403308652 (7.8 GiB)  TX bytes:8403308652 (7.8 GiB)



Run the following commands to find the interconnect IP:


sqlplus> connect / as sysdba
         oradebug setmypid
         oradebug ipc
         exit


Node 1: IP 169.254.206.87

Node 2 (from the trace file written by oradebug ipc):


SSKGXPT 0x7fe7bf79c1b8 flags 0x5 { READPENDING } sockno 4 IP 169.254.197.14 UDP 31848 lerr 0
SKGXP:[7fe7bf79ac48.32]{ctx}:


Set CLUSTER_INTERCONNECTS.

Node 1:


alter system set cluster_interconnects="169.254.206.87" scope=spfile sid='orcl1';


Node 2:


alter system set cluster_interconnects="169.254.197.14" scope=spfile sid='orcl2';


The change takes effect after a database restart.
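Picking the HAIP address out of the interface listing can be scripted per node. A sketch, where the `sample` text stands in for the real ifconfig output on node 1:

```shell
# Stand-in for the relevant ifconfig lines on node 1.
sample='eth1      inet addr:192.168.60.172  Bcast:192.168.60.255  Mask:255.255.255.0
eth1:1    inet addr:169.254.206.87  Bcast:169.254.255.255  Mask:255.255.0.0'

# Extract the link-local HAIP (169.254.x.x) address, which is the value
# to use for cluster_interconnects on this node.
haip=$(printf '%s\n' "$sample" | awk -F'inet addr:' '/inet addr:169\.254\./ { split($2, a, " "); print a[1] }')
echo "$haip"
```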


Consider increasing the value of the session_cached_cursors database parameter

Query the usage of the SESSION_CACHED_CURSORS and OPEN_CURSORS parameters:

select
    'session_cached_cursors' parameter,
    lpad(value, 5) value,
    decode(value, 0, ' n/a', to_char(100 * used / value, '990') || '%') usage
from
    (select max(s.value) used
     from   v$statname n, v$sesstat s
     where  n.name = 'session cursor cache count'
     and    s.statistic# = n.statistic#),
    (select value
     from   v$parameter
     where  name = 'session_cached_cursors')
union all
select
    'open_cursors',
    lpad(value, 5),
    to_char(100 * used / value, '990') || '%'
from
    (select max(sum(s.value)) used
     from   v$statname n, v$sesstat s
     where  n.name in ('opened cursors current')
     and    s.statistic# = n.statistic#
     group by s.sid),
    (select value
     from   v$parameter
     where  name = 'open_cursors')
/


If the usage for either parameter is close to 100%, increase that parameter.
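The "close to 100%" judgement can be automated. A sketch that flags any parameter at or above a 90% threshold (both the threshold and the `sample` output are illustrative):

```shell
# Stand-in for the parameter / value / usage output of the query above.
sample='session_cached_cursors    50   98%
open_cursors                 300   31%'

# "$3 + 0" coerces "98%" to the number 98.
printf '%s\n' "$sample" | awk '{ if ($3 + 0 >= 90) print $1 " should be increased (usage " $3 ")" }'
```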


Linux Disk I/O Scheduler should be configured to Deadline

The default I/O scheduler is CFQ; for database servers it should be set to Deadline.


The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4 which is suitable for a wide variety of applications and provides a good compromise between throughput and latency. In comparison to the CFQ algorithm, the Deadline scheduler caps maximum latency per request and maintains a good disk throughput which is best for disk-intensive database
applications. Hence, the Deadline scheduler is recommended for database systems.


Run on each node, for each ASM disk device:


# echo deadline > /sys/block/${ASM_DISK}/queue/scheduler
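The active scheduler is the bracketed entry in the scheduler file, so the change can be verified by parsing it out. A sketch, where the `sample` string stands in for `cat /sys/block/${ASM_DISK}/queue/scheduler`:

```shell
# Stand-in for the contents of /sys/block/${ASM_DISK}/queue/scheduler.
sample='noop [deadline] cfq'

# The scheduler in square brackets is the one currently in effect.
current=$(printf '%s\n' "$sample" | sed -n 's/.*\[\(.*\)\].*/\1/p')
if [ "$current" = "deadline" ]; then
  echo "scheduler OK ($current)"
else
  echo "scheduler needs fixing ($current)"
fi
```

Note that the `echo deadline > …` change does not survive a reboot; making it permanent (for example via a udev rule or the `elevator=deadline` kernel boot parameter) is a separate step.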


vm.min_free_kbytes should be set as recommended.

vm.min_free_kbytes should be set to a value >= 524288 and <= 1048576. Check with either of:


/sbin/sysctl -n vm.min_free_kbytes
cat /proc/sys/vm/min_free_kbytes

If the value is out of range, edit /etc/sysctl.conf and set vm.min_free_kbytes to a value within the recommended range, for example "vm.min_free_kbytes = 524288", then reboot the node.
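The range test itself is simple arithmetic. A sketch (the hard-coded `kb` is a sample current value standing in for the sysctl query):

```shell
kb=524288   # stand-in for: kb=$(/sbin/sysctl -n vm.min_free_kbytes)

# ORAchk's recommended range: 524288 <= vm.min_free_kbytes <= 1048576.
if [ "$kb" -ge 524288 ] && [ "$kb" -le 1048576 ]; then
  echo "vm.min_free_kbytes OK ($kb)"
else
  echo "vm.min_free_kbytes out of range ($kb)"
fi
```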


Linux transparent huge pages are enabled

Linux has two kinds of large pages: standard HugePages and Transparent Huge Pages (THP). Standard HugePages are pre-allocated, while THP is allocated dynamically, and the two are often confused. THP currently interacts badly with conventional HugePages, causing performance problems and node restarts, so Oracle recommends disabling Transparent Huge Pages. In Oracle Linux 6.5, Transparent HugePages have been removed.

Check:


cat /proc/meminfo |grep AnonHugeP
AnonHugePages:    391168 kB

Fix: add the following lines to /etc/rc.local:

echo never > /sys/kernel/mm/transparent_hugepage/enabled 
echo never > /sys/kernel/mm/transparent_hugepage/defrag 
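Whether THP is actually off can be verified the same way as the I/O scheduler: the active mode is the bracketed entry. A sketch, where the `sample` string stands in for `cat /sys/kernel/mm/transparent_hugepage/enabled`:

```shell
# Stand-in for the contents of /sys/kernel/mm/transparent_hugepage/enabled.
sample='always madvise [never]'

# The bracketed mode is the one currently in effect.
mode=$(printf '%s\n' "$sample" | sed -n 's/.*\[\(.*\)\].*/\1/p')
if [ "$mode" = "never" ]; then
  echo "THP disabled"
else
  echo "THP still enabled ($mode)"
fi
```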

CSS log file size has not been increased from default (50 MB)

Check:

[grid@rac1 crs]$ crsctl get css logfilesize
CRS-4676: Successful get logfilesize 52428800 for Cluster Synchronization Services.
[grid@rac1 crs]$ 

Fix: run on each node:

$GRID_HOME/bin/crsctl set css logfilesize 157286400
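Both sizes are in bytes: the default 52428800 is 50 MB, and the new value 157286400 is 150 MB. A quick arithmetic check:

```shell
# Convert the CSS log file sizes from bytes to MB.
echo $((52428800 / 1024 / 1024))    # default size in MB
echo $((157286400 / 1024 / 1024))   # new size in MB
```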