By luocs (July 1, 2013, 12:42 PM)
Oracle Database 12c has been out for about a week now. Over the past few days I tried single-instance, Oracle Restart and RAC installations, with a few mishaps along the way: an Oracle 12c Restart install took more than four hours and ended with my laptop freezing, and a RAC install failed when I tried to use the HAIP feature.
Oracle 12c RAC also introduces the Flex Cluster concept, but I have not managed to get that working yet.
What follows is a walkthrough of a traditional Oracle 12c RAC installation.
OS: Oracle Enterprise Linux 6.4 (for the RAC nodes), Oracle Enterprise Linux 5.8 (for the DNS server), Openfiler 2.3 (for the SAN storage)
DB: GI and Database 12.1.0.1
linuxamd64_12c_database_1of2.zip
linuxamd64_12c_database_2of2.zip
linuxamd64_12c_grid_1of2.zip
linuxamd64_12c_grid_2of2.zip
– Only the Oracle software is listed here; prepare the operating system and other software yourself.
Operating system information
RAC node servers:
(node1 shown as the example)
[root@12crac1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@12crac1 ~]# uname -a
2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST
x86_64 x86_64 GNU/Linux
[root@12crac1 ~]# grep MemTotal /proc/meminfo
MemTotal:        2051748 kB
[root@12crac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:       5119996 kB
[root@12crac1 ~]# df -h
Filesystem
Used Avail Use% Mounted on
32% /dev/shm
Network configuration:
Note: as the output below shows, each node server has five NICs. eth0 is used for the PUBLIC network and eth1 through eth4 for the private interconnect, because I originally planned to use the HAIP feature.
During the installation, however, HAIP led to a problem where node 2 could not start its ASM instance, so in the end only the eth1 interface is used.
The HAIP issue is probably a bug and still needs proper troubleshooting.
[root@12crac1 ~]# ifconfig
Link encap:Ethernet
HWaddr 00:0C:29:75:36:ED
inet addr:192.168.1.150
Bcast:192.168.1.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe75:36ed/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:64 errors:0 dropped:0 overruns:0 frame:0
TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes: KiB)
TX bytes: KiB)
Link encap:Ethernet
HWaddr 00:0C:29:75:36:F7
inet addr:192.168.80.150
Bcast:192.168.80.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe75:36f7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:12 errors:0 dropped:0 overruns:0 frame:0
TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:720 (720.0 b)
TX bytes:720 (720.0 b)
Link encap:Ethernet
HWaddr 00:0C:29:75:36:01
inet addr:192.168.80.151
Bcast:192.168.80.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe75:3601/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:540 (540.0 b)
TX bytes:636 (636.0 b)
Link encap:Ethernet
HWaddr 00:0C:29:75:36:0B
inet addr:192.168.80.152
Bcast:192.168.80.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe75:360b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:5 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:300 (300.0 b)
TX bytes:636 (636.0 b)
Link encap:Ethernet
HWaddr 00:0C:29:75:36:15
inet addr:192.168.80.153
Bcast:192.168.80.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fe75:3615/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:60 (60.0 b)
TX bytes:566 (566.0 b)
Link encap:Local Loopback
inet addr:127.0.0.1
Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)
TX bytes:0 (0.0 b)
[root@12crac2 ~]# ifconfig
Link encap:Ethernet
HWaddr 00:0C:29:A1:81:7C
inet addr:192.168.1.151
Bcast:192.168.1.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fea1:817c/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:126 errors:0 dropped:0 overruns:0 frame:0
TX packets:62 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1 KiB)
TX bytes: KiB)
Link encap:Ethernet
HWaddr 00:0C:29:A1:81:86
inet addr:192.168.80.154
Bcast:192.168.80.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fea1:8186/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:23 errors:0 dropped:0 overruns:0 frame:0
TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes: KiB)
TX bytes: KiB)
Link encap:Ethernet
HWaddr 00:0C:29:A1:81:90
inet addr:192.168.80.155
Bcast:192.168.80.255
Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fea1:8190/64 Scope:Link
UP BROADCAST RUNNING MULTICAST
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:60 (60.0 b)
TX bytes:636 (636.0 b)
Link encap:Ethernet
HWaddr 00:0C:29:A1:81:9A
inet addr:192.168.80.156
Bcast:192.168.80.255
Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b)
TX bytes:0 (0.0 b)
Link encap:Ethernet
HWaddr 00:0C:29:A1:81:A4
inet addr:192.168.80.157
Bcast:192.168.80.255
Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b)
TX bytes:0 (0.0 b)
Link encap:Local Loopback
inet addr:127.0.0.1
Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b)
TX bytes:0 (0.0 b)
Confirm that the firewall and SELinux are disabled
(Node1 shown; both nodes are configured identically)
[root@12crac1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
If the firewall is not disabled, turn it off like this:
[root@12crac1 ~]# service iptables stop
[root@12crac1 ~]# chkconfig iptables off
[root@12crac1 ~]# getenforce
If SELinux is not disabled, change it like this:
[root@12crac1 ~]# cat /etc/selinux/config
-- change it to SELINUX=disabled
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
SELINUXTYPE=targeted
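Editing /etc/selinux/config only takes effect after a reboot. As a supplementary step not shown in the original session, the running kernel can also be switched to permissive mode immediately on both nodes (a minimal sketch):

setenforce 0     # permissive until the next reboot
getenforce       # should now report Permissive (Disabled after a reboot with the config above)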
DNS server:
[root@dns12c ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.8 (Tikanga)
[root@dns12c ~]# uname -a
2.6.32-300.10.1.el5uek #1 SMP Wed Feb 22 17:37:40 EST
x86_64 x86_64 GNU/Linux
[root@dns12c ~]# grep MemTotal /proc/meminfo
[root@dns12c ~]# grep SwapTotal /proc/meminfo
SwapTotal:       3277252 kB
[root@dns12c ~]# ifconfig
Link encap:Ethernet
HWaddr 00:0C:29:7A:FD:82
inet addr:192.168.1.158
Bcast:192.168.1.255
Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST
RX packets:114941 errors:0 dropped:0 overruns:0 frame:0
TX packets:6985 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:.5 MiB)
TX bytes:.0 MiB)
Link encap:Local Loopback
inet addr:127.0.0.1
Mask:255.0.0.0
UP LOOPBACK RUNNING
RX packets:104 errors:0 dropped:0 overruns:0 frame:0
TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes: KiB)
TX bytes: KiB)
iptables and SELinux are disabled here as well.
SAN server:
Deployed with Openfiler 2.3; three LUNs are allocated, one of 5G and two of 8G.
Deployment and installation
1. Configure the DNS service
The following steps are performed on the DNS server:
Install the three bind packages
[root@dns12c ~]# rpm -ivh /mnt/Server/bind-9.3.6-20.P1.el5.x86_64.rpm
[root@dns12c ~]# rpm -ivh /mnt/Server/bind-chroot-9.3.6-20.P1.el5.x86_64.rpm
[root@dns12c ~]# rpm -ivh /mnt/Server/caching-nameserver-9.3.6-20.P1.el5.x86_64.rpm
Configure the zones
[root@dns12c ~]# cd /var/named/chroot/etc
[root@dns12c etc]# cp -p named.caching-nameserver.conf named.conf
[root@dns12c etc]# cat named.conf
listen-on port 53 { };
listen-on-v6 port 53 { ::1; };
directory       "/var/named";
dump-file       "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
// Those options should be used carefully because they disable port
// randomization
// query-source
// query-source-v6 port 53;
allow-query
allow-query-cache { };
channel default_debug {
file "data/named.run";
view any_resolver {
match-clients
match-destinations { };
include "/etc/named.zones";
[root@dns12c etc]# cp -p named.rfc1912.zones named.zones
[root@dns12c etc]# cat named.zones
zone "" IN {
        file ".zone";
        allow-update { };
zone "1.168.192.in-addr.arpa" IN {
        file "1.168.192.local";
        allow-update { };
[root@dns12c ~]# cd /var/named/chroot/var/named
[root@dns12c named]# cp -p named..zone
[root@dns12c named]# cp -p named.local 1.168.192.local
[root@dns12c named]# cat .zone
42 serial (d. adams)
3H refresh
1D ) minimum
192.168.1.154
192.168.1.155
192.168.1.156
192.168.1.157
12crac1         IN      A       192.168.1.150
12crac2         IN      A       192.168.1.151
[root@dns12c named]# cat 1.168.192.local
28800 Refresh
14400 Retry
3600000 Expire
86400 ) Minimum
Verify with nslookup or dig
Configure DNS on both nodes
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# cat /etc/resolv.conf
#domain localdomain
search localdomain
nameserver 192.168.1.158
[root@12crac1 ~]# nslookup
Server:         192.168.1.158
Address:        192.168.1.158#53
Address: 192.168.1.156
Address: 192.168.1.154
Address: 192.168.1.155
[root@12crac1 ~]# nslookup 192.168.1.154
Server:         192.168.1.158
Address:        192.168.1.158#53
154.1.168.192.in-addr.arpa
[root@12crac1 ~]# nslookup 192.168.1.155
Server:         192.168.1.158
Address:        192.168.1.158#53
155.1.168.192.in-addr.arpa
[root@12crac1 ~]# nslookup 192.168.1.156
Server:         192.168.1.158
Address:        192.168.1.158#53
156.1.168.192.in-addr.arpa
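The SCAN name itself was lost when this page was captured, so the hostname below (12c-scan.localdomain) is only a placeholder for whatever SCAN name the zone actually defines. With that caveat, the round-robin SCAN resolution can also be checked with dig from either node:

# forward lookup: should rotate through the SCAN addresses 192.168.1.154-156
dig +short 12c-scan.localdomain
# reverse lookups straight against the DNS server
dig +short -x 192.168.1.154 @192.168.1.158
dig +short -x 192.168.1.155 @192.168.1.158
dig +short -x 192.168.1.156 @192.168.1.158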
2. Configure /etc/hosts
Edit /etc/hosts: leave the first two lines untouched and add the hostname entries.
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# For Public
192.168.1.150   12crac1
192.168.1.151   12crac2
192.168.1.152
192.168.1.153
# For Private IP
192.168.80.150  12crac1-priv1
192.168.80.151  12crac1-priv2
192.168.80.152  12crac1-priv3
192.168.80.153  12crac1-priv4
192.168.80.154  12crac2-priv1
192.168.80.155  12crac2-priv2
192.168.80.156  12crac2-priv3
192.168.80.157  12crac2-priv4
# For SCAN IP
# 192.168.1.154
# 192.168.1.155
# 192.168.1.156
# For DNS Server
192.168.1.158   dns12c
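With /etc/hosts in place on both nodes, basic reachability of the private interconnect addresses can be checked from node 1 with a quick sketch like this (node 2 mirrors it with the 12crac1-priv* names):

# ping each private address of the other node twice
for h in 12crac2-priv1 12crac2-priv2 12crac2-priv3 12crac2-priv4; do
  ping -c 2 "$h"
done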
3. System configuration
Edit /etc/sysctl.conf and add the following:
fs.file-max = 6815744
kernel.sem = 250 8
kernel.shmmni = 4096
kernel.shmall =
kernel.shmmax = 4
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range =
[root@12c ~]# sysctl -p
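Several of the values above were truncated when this post was captured (kernel.sem, kernel.shmall, kernel.shmmax and net.ipv4.ip_local_port_range). For reference, this is the set of starting values commonly shown in the Oracle 12c installation guides; treat it as a sketch rather than the exact figures used here, and size shmmax/shmall to your own memory:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576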
Edit /etc/security/limits.conf and add the following:
grid    soft    nofile    1024
grid    hard    nofile    65536
grid    soft    nproc     2047
grid    hard    nproc     16384
grid    soft    stack     10240
grid    hard    stack     32768
oracle  soft    nofile    1024
oracle  hard    nofile    65536
oracle  soft    nproc     2047
oracle  hard    nproc     16384
oracle  soft    stack     10240
oracle  hard    stack     32768
4. Configure a YUM repository and install the required packages
First remove or move aside the default yum configuration file, then create a new one.
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# cd /etc/yum.repos.d
[root@12crac1 yum.repos.d]# mkdir bk
[root@12crac1 yum.repos.d]# mv public-yum-ol6.repo bk/
[root@12crac1 yum.repos.d]# vi luocs.repo
-- add the following
name=OEL-$releasever - Media
baseurl=file:///mnt
gpgcheck=0
Mount the installation DVD
[root@12crac1 yum.repos.d]# mount /dev/cdrom /mnt
mount: block device /dev/sr0 is write-protected, mounting read-only
Now the required packages can be installed with yum:
[root@12crac1 yum.repos.d]# yum -y install binutils compat-libstdc++-33 elfutils-libelf \
elfutils-libelf-devel elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common \
glibc-devel kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel \
make numactl-devel sysstat unixODBC unixODBC-devel pdksh compat-libcap1
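To confirm that nothing was silently skipped, the same package list can be verified afterwards; a small sketch (any package reported as missing needs another look):

for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         elfutils-libelf-devel-static gcc gcc-c++ glibc glibc-common glibc-devel \
         kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ \
         libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel \
         pdksh compat-libcap1; do
  rpm -q "$p" > /dev/null 2>&1 || echo "missing: $p"
done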
5. Create users and groups
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# /usr/sbin/groupadd -g 54321 oinstall
[root@12crac1 ~]# /usr/sbin/groupadd -g 54322 dba
[root@12crac1 ~]# /usr/sbin/groupadd -g 54323 oper
[root@12crac1 ~]# /usr/sbin/groupadd -g 54324 backupdba
[root@12crac1 ~]# /usr/sbin/groupadd -g 54325 dgdba
[root@12crac1 ~]# /usr/sbin/groupadd -g 54327 asmdba
[root@12crac1 ~]# /usr/sbin/groupadd -g 54328 asmoper
[root@12crac1 ~]# /usr/sbin/groupadd -g 54329 asmadmin
Create the users:
[root@12crac1 ~]# /usr/sbin/useradd -u 54321 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
[root@12crac1 ~]# /usr/sbin/useradd -u 54322 -g oinstall -G dba,backupdba,dgdba,asmadmin oracle
Set the passwords:
[root@12crac1 ~]# passwd grid
[root@12crac1 ~]# passwd oracle
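A quick sanity check of the group memberships and of the shell limits configured in step 3 (a sketch; run it on both nodes):

id grid
id oracle
# a fresh login shell should pick up the values from /etc/security/limits.conf
su - grid   -c 'ulimit -a'
su - oracle -c 'ulimit -a'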
6. Create the installation directories and set ownership
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# mkdir -p /u01/app/grid
[root@12crac1 ~]# mkdir -p /u01/app/12.1.0/grid
[root@12crac1 ~]# mkdir -p /u01/app/oracle/product/12.1.0/db_1
[root@12crac1 ~]# chown -R grid.oinstall /u01
[root@12crac1 ~]# chown -R oracle.oinstall /u01/app/oracle
[root@12crac1 ~]# chmod -R 775 /u01
7. Configure environment variables
[root@12crac1 ~]# vi /home/grid/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_SID=+ASM1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias sqlplus='rlwrap sqlplus'
[root@12crac1 ~]# vi /home/oracle/.bash_profile
export PATH
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=
export ORACLE_UNQNAME=luocs12c1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export ORACLE_SID=luocs12c1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
[root@12crac2 ~]# vi /home/grid/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0/grid
export ORACLE_SID=+ASM2
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias sqlplus='rlwrap sqlplus'
[root@12crac2 ~]# vi /home/oracle/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=
export ORACLE_UNQNAME=luocs12c2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export ORACLE_SID=luocs12c2
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'
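A small sketch to confirm each profile resolves as intended (the login shell started by su sources .bash_profile, so the variables are expanded by the target user):

su - grid   -c 'echo $ORACLE_HOME $ORACLE_SID'
su - oracle -c 'echo $ORACLE_HOME $ORACLE_SID'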
8. Attach the iSCSI disks and configure UDEV
[root@12cr ~]# yum -y install iscsi-initiator-utils
[root@12cr ~]# service iscsid start
[root@12cr ~]# chkconfig iscsid on
[root@12cr ~]# iscsiadm -m discovery -t sendtargets -p 192.168.80.140:3260
Starting iscsid:                                           [  OK  ]
192.168.80.140:3260,1 iqn..openfiler:tsn.3a9cad78121d
[root@12cr ~]# service iscsi restart
Stopping iscsi:                                            [  OK  ]
Starting iscsi:                                            [  OK  ]
[root@12crac1 ~]# fdisk -l
Disk /dev/sda: 53.7 GB,
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00046ecd
Device Boot
Partition 1 does not end on cylinder boundary.
Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
Disk /dev/sdb: 2147 MB,
67 heads, 62 sectors/track, 1009 cylinders
Units = cylinders of 4154 * 512 = 2126848 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Disk /dev/sdc: 10.5 GB,
64 heads, 32 sectors/track, 10016 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Disk /dev/sdd: 6610 MB,
204 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 12648 * 512 = 6475776 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Disk /dev/sdf: 8388 MB,
64 heads, 32 sectors/track, 8000 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Disk /dev/sde: 8388 MB,
64 heads, 32 sectors/track, 8000 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Disk /dev/sdg: 5335 MB,
165 heads, 62 sectors/track, 1018 cylinders
Units = cylinders of 10230 * 512 = 5237760 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x
Here I only use sde, sdf and sdg; the others are for a different cluster.
[root@12crac1 ~]# for i in e f g;
> do
> echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
> done
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504edd544c4756", NAME="asm-diske", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504ed66744c", NAME="asm-diskf", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504ecd716dc76", NAME="asm-diskg", OWNER="grid", GROUP="asmadmin", MODE="0660"
Configure UDEV:
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
-- add the following:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504ed66744c", NAME="asm-data", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504eb2dd", NAME="asm-fra", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504ecd716dc76", NAME="asm-crs", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@12crac1 ~]# /sbin/start_udev
Starting udev:                                             [  OK  ]
[root@12crac1 ~]# ls -l /dev/asm*
brw-rw---- 1 grid asmadmin 8, 96 Jun 29 21:56 /dev/asm-crs
brw-rw---- 1 grid asmadmin 8, 64 Jun 29 21:56 /dev/asm-data
brw-rw---- 1 grid asmadmin 8, 80 Jun 29 21:56 /dev/asm-fra
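The rules file has to be identical on node 2. The original post simply notes that both nodes are configured the same way; assuming ssh access as root between the nodes, pushing the file and reloading udev could look like this (otherwise repeat the edit by hand on 12crac2):

scp /etc/udev/rules.d/99-oracle-asmdevices.rules 12crac2:/etc/udev/rules.d/
ssh 12crac2 /sbin/start_udev
ssh 12crac2 ls -l /dev/asm-crs /dev/asm-data /dev/asm-fra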
9. Disable the NTP service
(Node1 shown; both nodes are identical)
[root@12crac1 ~]# chkconfig ntpd off
[root@12crac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
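The running daemon should be stopped as well, so that the Grid Infrastructure Cluster Time Synchronization Service (CTSS) can take over in active mode; a small addition on both nodes:

service ntpd stop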
10. Unpack the installation media
[root@12crac1 ~]# chown -R grid.oinstall /install/
[root@12crac1 ~]# chown oracle.oinstall /install/linuxamd64_12c_database_*
[root@12crac1 ~]# chmod 775 /install
[root@12crac1 ~]# su - grid
[grid@12crac1 ~]$ cd /install/
[grid@12crac1 install]$ unzip linuxamd64_12c_grid_1of2.zip
[grid@12crac1 install]$ unzip linuxamd64_12c_grid_2of2.zip
[root@12crac1 ~]# su - oracle
[oracle@12crac1 install]$ unzip linuxamd64_12c_database_1of2.zip
Sizes after extraction:
[oracle@12crac1 install]$ du -sh grid
[oracle@12crac1 install]$ du -sh database/
Install the cvuqdisk rpm:
[root@12crac1 ~]# cd /install/grid/rpm/
[root@12crac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...
########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk
########################################### [100%]
Copy it to node 2 and install it there:
[root@12crac1 rpm]# scp cvuqdisk-1.0.9-1.rpm 12crac2:/install
[root@12crac2 install]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...
########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk
########################################### [100%]
Only the failed checks are pasted here. The first failure is insufficient physical memory: Oracle recommends at least 4GB per node and these machines only have 2GB. The second concerns the DNS configuration, and it can safely be ignored in this setup.
[grid@12crac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n 12crac1,12crac2 -verbose
Check: Total memory
  12crac1: 1.9567GB    failed (at least 4GB recommended)
  12crac2: 1.9567GB    failed (at least 4GB recommended)
Result: Total memory check failed
Result: Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: 12crac1,12crac2
Check for integrity of file "/etc/resolv.conf" failed
[root@12cr ~]# su - grid
[grid@12cr ~]$ cd /install/grid/
I open Xmanager - Passive, set DISPLAY, and launch the OUI through runInstaller.
[grid@12cr grid]$ export DISPLAY=192.168.1.1:0.0
[grid@12cr grid]$ ./runInstaller
A few prerequisite checks fail here; they can all be ignored.
Output of the root scripts:
[root@12crac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@12crac1 ~]# /u01/app/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=
/u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
00:30:25 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
00:31:22 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12crac1'
CRS-2677: Stop of 'ora.drivers.acfs' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on '12crac1'
CRS-2672: Attempting to start 'ora.mdnsd' on '12crac1'
CRS-2676: Start of 'ora.mdnsd' on '12crac1' succeeded
CRS-2676: Start of 'ora.evmd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '12crac1'
CRS-2676: Start of 'ora.gpnpd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12crac1'
CRS-2672: Attempting to start 'ora.gipcd' on '12crac1'
CRS-2676: Start of 'ora.cssdmonitor' on '12crac1' succeeded
CRS-2676: Start of 'ora.gipcd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12crac1'
CRS-2672: Attempting to start 'ora.diskmon' on '12crac1'
CRS-2676: Start of 'ora.diskmon' on '12crac1' succeeded
CRS-2676: Start of 'ora.cssd' on '12crac1' succeeded
ASM created and started successfully.
Disk Group RACCRS created successfully.
CRS-2672: Attempting to start 'ora.crf' on '12crac1'
CRS-2672: Attempting to start 'ora.storage' on '12crac1'
CRS-2676: Start of 'ora.storage' on '12crac1' succeeded
CRS-2676: Start of 'ora.crf' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12crac1'
CRS-2676: Start of 'ora.crsd' on '12crac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk d883c23a7bfc4fdcbf418c9f631bd0af.
Successfully replaced voting disk group with +RACCRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
File Universal Id                  File Name       Disk group
---------------------------------  --------------  ----------
d883c23a7bfc4fdcbf418c9f631bd0af   (/dev/asm-crs)  [RACCRS]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '12crac1'
CRS-2673: Attempting to stop 'ora.crsd' on '12crac1'
CRS-2677: Stop of 'ora.crsd' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on '12crac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on '12crac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on '12crac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12crac1'
CRS-2677: Stop of 'ora.storage' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on '12crac1'
CRS-2673: Attempting to stop 'ora.evmd' on '12crac1'
CRS-2673: Attempting to stop 'ora.asm' on '12crac1'
CRS-2677: Stop of 'ora.drivers.acfs' on '12crac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on '12crac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on '12crac1' succeeded
CRS-2677: Stop of 'ora.evmd' on '12crac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on '12crac1' succeeded
CRS-2677: Stop of 'ora.asm' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on '12crac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on '12crac1'
CRS-2677: Stop of 'ora.cssd' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on '12crac1'
CRS-2677: Stop of 'ora.crf' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on '12crac1'
CRS-2677: Stop of 'ora.gipcd' on '12crac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '12crac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on '12crac1'
CRS-2672: Attempting to start 'ora.evmd' on '12crac1'
CRS-2676: Start of 'ora.mdnsd' on '12crac1' succeeded
CRS-2676: Start of 'ora.evmd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '12crac1'
CRS-2676: Start of 'ora.gpnpd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on '12crac1'
CRS-2676: Start of 'ora.gipcd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12crac1'
CRS-2676: Start of 'ora.cssdmonitor' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12crac1'
CRS-2672: Attempting to start 'ora.diskmon' on '12crac1'
CRS-2676: Start of 'ora.diskmon' on '12crac1' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server '12crac1'
CRS-2676: Start of 'ora.cssd' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12crac1'
CRS-2672: Attempting to start 'ora.ctssd' on '12crac1'
CRS-2676: Start of 'ora.ctssd' on '12crac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on '12crac1'
CRS-2676: Start of 'ora.asm' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on '12crac1'
CRS-2676: Start of 'ora.storage' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on '12crac1'
CRS-2676: Start of 'ora.crf' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12crac1'
CRS-2676: Start of 'ora.crsd' on '12crac1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: 12crac1
CRS-6016: Resource auto-start has completed for server 12crac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
00:38:01 CLSRSC-343: Successfully started Oracle clusterware stack
CRS-2672: Attempting to start 'ora.asm' on '12crac1'
CRS-2676: Start of 'ora.asm' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.RACCRS.dg' on '12crac1'
CRS-2676: Start of 'ora.RACCRS.dg' on '12crac1' succeeded
00:39:51 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@12crac2 ~]# /u01/app/12.1.0/grid/root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=
/u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
00:42:51 CLSRSC-363: User ignored prerequisites during installation
OLR initialization - successful
00:43:18 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on '12crac2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on '12crac2'
CRS-2677: Stop of 'ora.drivers.acfs' on '12crac2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on '12crac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on '12crac2'
CRS-2672: Attempting to start 'ora.mdnsd' on '12crac2'
CRS-2676: Start of 'ora.evmd' on '12crac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on '12crac2'
CRS-2676: Start of 'ora.gpnpd' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on '12crac2'
CRS-2676: Start of 'ora.gipcd' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on '12crac2'
CRS-2676: Start of 'ora.cssdmonitor' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on '12crac2'
CRS-2672: Attempting to start 'ora.diskmon' on '12crac2'
CRS-2676: Start of 'ora.diskmon' on '12crac2' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server '12crac2'
CRS-2676: Start of 'ora.cssd' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on '12crac2'
CRS-2672: Attempting to start 'ora.ctssd' on '12crac2'
CRS-2676: Start of 'ora.ctssd' on '12crac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on '12crac2'
CRS-2676: Start of 'ora.asm' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on '12crac2'
CRS-2676: Start of 'ora.storage' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.crf' on '12crac2'
CRS-2676: Start of 'ora.crf' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on '12crac2'
CRS-2676: Start of 'ora.crsd' on '12crac2' succeeded
CRS-6017: Processing resource auto-start for servers: 12crac2
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on '12crac1'
CRS-2672: Attempting to start 'ora.ons' on '12crac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on '12crac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on '12crac1'
CRS-2677: Stop of 'ora.scan1.vip' on '12crac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on '12crac2'
CRS-2676: Start of 'ora.scan1.vip' on '12crac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on '12crac2'
CRS-2676: Start of 'ora.ons' on '12crac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on '12crac2' succeeded
CRS-6016: Resource auto-start has completed for server 12crac2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
00:48:43 CLSRSC-343: Successfully started Oracle clusterware stack
00:49:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Check the status:
[grid@12crac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ora.RACCRS.dg
Started,STABLE
Started,STABLE
ora.net1.network
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12crac1.vip
ora.12crac2.vip
ora.LISTENER_SCAN1.lsnr
ora.LISTENER_SCAN2.lsnr
ora.LISTENER_SCAN3.lsnr
ora.MGMTLSNR
169.254.88.173 192.168.80.150,STABLE
ora.mgmtdb
Open,STABLE
ora.scan1.vip
ora.scan2.vip
ora.scan3.vip
--------------------------------------------------------------------------------
2) Create the ASM disk groups
[grid@12crac1 ~]$ export DISPLAY=192.168.1.1:0.0
[grid@12crac1 ~]$ asmca
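asmca does this through a GUI. Purely for reference, here is a sketch of the equivalent SQL run as the grid user on node 1; the disk group names RACDATA and RACFRA and the /dev/asm-* devices appear later in this post, while external redundancy is my assumption:

# as the grid user on node 1 (+ASM1)
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP RACDATA EXTERNAL REDUNDANCY DISK '/dev/asm-data';
CREATE DISKGROUP RACFRA  EXTERNAL REDUNDANCY DISK '/dev/asm-fra';
EOF
# a disk group created this way is mounted only where it was created,
# so mount it on the second node as well
srvctl start diskgroup -g RACDATA -n 12crac2
srvctl start diskgroup -g RACFRA  -n 12crac2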
3) Install the RDBMS software
[root@12crac1 ~]# su - oracle
[oracle@12crac1 ~]$ cd /install/database/
[oracle@12crac1 database]$ export DISPLAY=192.168.1.1:0.0
[oracle@12crac1 database]$ ./runInstaller
Run the root script:
[root@12crac1 ~]# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME=
/u01/app/oracle/product/12.1.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
4) Create the database
[oracle@12crac1 ~]$ dbca
Click Browse and select the ASM disk group.
Do not enable archiving yet; it slows down database creation.
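For reference, the same database can also be created without the GUI. This is a rough, untested sketch of a dbca silent call using the names from this post; the PDB name and the passwords are placeholders, and the flags should be double-checked against dbca -help for 12.1 before relying on them:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName luocs12c -sid luocs12c \
  -createAsContainerDatabase true -numberOfPDBs 1 -pdbName pdb1 \
  -storageType ASM -diskGroupName RACDATA -recoveryGroupName RACFRA \
  -nodelist 12crac1,12crac2 \
  -sysPassword change_me -systemPassword change_me -pdbAdminPassword change_me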
13. Final results
Resource status:
[grid@12crac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ora.RACCRS.dg
ora.RACDATA.dg
ora.RACFRA.dg
Started,STABLE
Started,STABLE
ora.net1.network
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.12crac1.vip
ora.12crac2.vip
ora.LISTENER_SCAN1.lsnr
ora.LISTENER_SCAN2.lsnr
ora.LISTENER_SCAN3.lsnr
ora.MGMTLSNR
169.254.88.173 192.168.80.150,STABLE
ora.luocs12c.db
Open,STABLE
Open,STABLE
ora.mgmtdb
Open,STABLE
ora.scan1.vip
ora.scan2.vip
ora.scan3.vip
--------------------------------------------------------------------------------
RAC database configuration:
[grid@12crac1 ~]$ srvctl config database -d luocs12c
Database unique name: luocs12c
Database name: luocs12c
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +RACDATA/luocs12c/spfileluocs12c.ora
Password file: +RACDATA/luocs12c/orapwluocs12c
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: luocs12c
Database instances: luocs12c1,luocs12c2
Disk Groups: RACFRA,RACDATA
Mount point paths:
Start concurrency:
Stop concurrency:
Database is administrator managed
[grid@12crac1 ~]$ srvctl status database -d luocs12c
Instance luocs12c1 is running on node 12crac1
Instance luocs12c2 is running on node 12crac2
[grid@12crac1 ~]$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): 12crac1,12crac2
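The SCAN VIPs and SCAN listeners can be checked the same way, as the grid user:

srvctl config scan
srvctl status scan_listener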
Instance status:
sys@LUOCS12C> select instance_name, status from gv$instance;
INSTANCE_NAME
---------------- ------------
sys@LUOCS12C> col open_time for a25
sys@LUOCS12C> col name for a10
sys@LUOCS12C> select CON_ID, NAME, OPEN_MODE, OPEN_TIME, CREATE_SCN, TOTAL_SIZE from v$pdbs;
CON_ID NAME
CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- ------------------------- ---------- ----------
2 PDB$SEED
01-JUL-13 04.33.07.302 PM
READ WRITE 01-JUL-13 04.38.41.339 PM
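The name of the user PDB created by DBCA did not survive in the capture above, so pdb1 below is only a placeholder. Switching into the PDB and checking its open mode from either instance looks roughly like this (run as the oracle user):

sqlplus / as sysdba <<'EOF'
show pdbs
alter session set container = pdb1;
select name, open_mode from v$pdbs;
EOF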
Check whether archiving is enabled:
sys@LUOCS12C> archive log list
Database log mode
No Archive Mode
Automatic archival
Archive destination
USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence
Current log sequence
Now enable it manually:
[oracle@12crac1 ~]$ srvctl stop database -d luocs12c
[oracle@12crac1 ~]$ srvctl start database -d luocs12c -o mount
[oracle@12crac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 1 17:13:58 2013
Copyright (c) , Oracle.
All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
idle> alter database archivelog;
Database altered.
idle> alter database open;
Database altered.
[oracle@12crac2 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Mon Jul 1 17:17:07 2013
Copyright (c) , Oracle.
All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
idle> alter database open;
Database altered.
OK, the RAC database is now running in archivelog mode.
sys@LUOCS12C> archive log list
Database log mode
Archive Mode
Automatic archival
Archive destination
USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence
Next log sequence to archive
Current log sequence
That wraps up the Oracle 12c RAC installation.
