Installing an Oracle 11g RAC Cluster on CentOS 6.4
CentOS 6.4 x86_64 Oracle 11g R2 RAC Cluster Deployment Guide
By Eric Yan

Contents

I. Physical environment preparation
    Graphical desktop installation
II. System environment preparation
    Configuring the iSCSI server
    Configuring the iSCSI client
        Viewing the attached devices
        Determining the disk SNs (removed)
        Creating udev rules from the SNs (removed)
    Configuring /etc/hosts
    Configuring users and groups (script in appendix)
    Configuring the Oracle installation directories (script in appendix)
    Configuring limits parameters (script in appendix)
    Configuring pam.d authentication parameters (script in appendix)
    Configuring /etc/profile parameters (script in appendix)
    Configuring kernel parameters (script in appendix)
    Disabling the NTP service
    Installing the required packages
    Configuring ssh equivalence for the oracle and grid users
        oracle user ssh equivalence
        grid user ssh equivalence
    Configuring the ASM disks
        Installing the ASM RPM packages
        Configuring the ASM drive service
III. Installing Grid
    Pre-installation checks
    Installing Grid
IV. Installing the Oracle database
V. Creating the ASM disk groups
VI. Creating the database
VII. Post-installation checks
    Checking the data files
    Checking the floating IP addresses (Node1, Node2)
    Checking the database listeners (Node1, Node2)
VIII. Client configuration and connecting to the database
    Net Manager configuration
    PL/SQL connection
IX. Querying the Oracle cluster status
X. Oracle failover
XI. Appendix scripts
    Node1 scripts: preusers, predir, presysctl, prelimits, prelogin, preprofile
    Node2 scripts: preusers, predir, presysctl, prelimits, prelogin, preprofile
I. Physical environment preparation

RAC1 and RAC2 serve as the two cluster nodes, installed with CentOS 6.5 x86_64 using the desktop install (or a minimal install followed by adding the desktop environment). Shared storage is provided by an iSCSI NAS server built on RHEL 5.5.

No.  Server   OS                  Hostname   IP address
1    RAC1     CentOS 6.5 x86_64   RAC1       192.168.37.11
2    RAC2     CentOS 6.5 x86_64   RAC2       192.168.37.12
3    ISCSI    RHEL 5.5            ISCSI      192.168.37.200

Graphical desktop installation

Install the desktop groups from the network or a local yum repository, or select them during OS installation:

[root@RAC2 ~]# yum groupinstall "Desktop" "X Window System" "Chinese Support"

For local installation see:
http://blog.csdn.net/kimsoft/article/details/8020014
http://blog.163.com/tsee123@126/blog/static//

Oracle RAC installation documentation:
http://docs.oracle.com/cd/E11882_01/install.112/e22489/typinstl.htm#CWLIN164

RHEL 6.5: working around oracleasm not supporting the 6.0 kernel:
http://www.anbob.com/archives/2215.html

Google Translate:
http://translate.google.cn/?hl=en#en/zh-CN/

II. System environment preparation

Configuring the iSCSI server

An iSCSI server is built here on RHEL 5.5, exporting one 30 GB iSCSI disk.

[root@ISCSI ~]# rpm -qa | grep scsi
iscsi-initiator-utils-6.2.0.871-0.16.el5

vim /etc/tgt/targets.conf
<target iqn.2014-06.com.oracle:ASM>
    backing-store /dev/sdb
    write-cache off
</target>
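Before moving on to the clients, it is worth confirming that tgtd actually exports the target. The check below is my addition rather than part of the original text, and assumes the scsi-target-utils package that provides /etc/tgt/targets.conf:

# Start the target daemon and make it persistent across reboots
[root@ISCSI ~]# service tgtd restart
[root@ISCSI ~]# chkconfig tgtd on
# List every target, LUN and connected initiator
[root@ISCSI ~]# tgtadm --lld iscsi --mode target --op show
# tgt-admin -s prints the same information in summary form
[root@ISCSI ~]# tgt-admin -s

The iqn.2014-06.com.oracle:ASM target and its /dev/sdb backing store should appear in the listing.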
Configuring the iSCSI client

Viewing the attached devices

[root@node1 rules.d]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.37.10 --discover
192.168.37.10:3260,1 iqn.2014-06.com.oracle:ASM
[root@node1 rules.d]# service iscsi start
正在启动 iscsi:                                            [确定]

iSCSI delete commands; note that they must be run with the iscsi service stopped:
iscsiadm --mode discoverydb --type sendtargets --portal 192.168.37.10 --discover
iscsiadm -m node -o delete -T iqn.2014-06.com.oracle:OCR1 -p 192.168.37.10
iscsiadm -m node -o delete -T iqn.2014-06.com.oracle:OCR2 -p 192.168.37.10
iscsiadm -m node -o delete -T iqn.2014-06.com.oracle:DATA1 -p 192.168.37.10
iscsiadm -m node -o delete -T iqn.2014-06.com.oracle:DATA2 -p 192.168.37.10

[root@node1 rules.d]# fdisk -l
Disk /dev/sdb: 32.2 GB, 32212254720 bytes
64 heads, 32 sectors/track, 30720 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x…

[root@node2 rules.d]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.37.10 --discover
192.168.37.10:3260,1 iqn.2014-06.com.oracle:ASM
[root@node2 rules.d]# service iscsi start
正在启动 iscsi:                                            [确定]
[root@node2 ~]# fdisk -l
Disk /dev/sdb: 32.2 GB, 32212254720 bytes
64 heads, 32 sectors/track, 30720 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x…
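The sessions above were brought up by hand. If the nodes should log back in to the target after a reboot, the node records can be switched to automatic startup; an optional aside (not in the original document), using the portal and IQN discovered above:

# Log in to the discovered target and make the login automatic at boot
iscsiadm -m node -T iqn.2014-06.com.oracle:ASM -p 192.168.37.10 --login
iscsiadm -m node -T iqn.2014-06.com.oracle:ASM -p 192.168.37.10 \
         --op update -n node.startup -v automatic
chkconfig iscsi on
chkconfig iscsid on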
(The highlighted portion below was later removed: this setup uses a single disk with multiple partitions, so udev is not needed to pin the disks.)

Reference for pinning multiple iSCSI disks with multipath:
http://blog.csdn.net/staricqxyz/article/details/

Original disk ordering on each node

1. Node1 disk ordering

For testing, four disks of 4, 5, 6 and 7 GB are attached. Node1 recognizes them in this order:

No.  Device     Size
1    /dev/sdb   4 GB
2    /dev/sdc   6 GB
3    /dev/sdd   5 GB
4    /dev/sde   7 GB

Every disk reports 255 heads, 63 sectors/track, cylinders of 16065 * 512 = 8225280 bytes, and 512-byte logical/physical sectors. The fdisk -l details on Node1:

Device     Size      Cylinders   Disk identifier   Partition (Id 83, Linux)
/dev/sdb   4294 MB   522         0xd91583ae        /dev/sdb1, cylinders 1-522
/dev/sdc   6442 MB   783         0x034efb8b        /dev/sdc1, cylinders 1-783
/dev/sdd   5368 MB   652         0x3389db80        /dev/sdd1, cylinders 1-652
/dev/sde   7516 MB   913         0x025db8fe        /dev/sde1, cylinders 1-913

2. Node2 disk ordering

No.  Device     Size
1    /dev/sdb   7 GB
2    /dev/sdc   5 GB
3    /dev/sdd   6 GB
4    /dev/sde   4 GB

The fdisk -l details on Node2:

Device     Size      Cylinders   Disk identifier   Partition (Id 83, Linux)
/dev/sdb   7516 MB   913         0x025db8fe        /dev/sdb1, cylinders 1-913
/dev/sdd   6442 MB   783         0x034efb8b        /dev/sdd1, cylinders 1-783
/dev/sdc   5368 MB   652         0x3389db80        /dev/sdc1, cylinders 1-652
/dev/sde   4294 MB   522         0xd91583ae        /dev/sde1, cylinders 1-522

Note how the disk identifiers show the same physical disk appearing under different names; 0xd91583ae, for example, is /dev/sdb on Node1 but /dev/sde on Node2.
Based on the listings above, the sd[b-e] sizes on Node1 and Node2 do not correspond; in other words, the one-to-one mapping below does not hold:

No.  Device      Size on Node1   Size on Node2
1    /dev/sdb    4 GB            4 GB
2    /dev/sdc    5 GB            5 GB
3    /dev/sdd    6 GB            6 GB
4    /dev/sde    7 GB            7 GB

Because oracleasm needs the disks and disk partitions to correspond one to one on both nodes before the disk group information can be synchronized between them, we use multipath to achieve this.

Installing the multipath packages

yum install device-mapper*

Obtaining the unique identifier of each disk

1. Node1

[root@node1 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
1IET
[root@node1 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
1IET
[root@node1 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
1IET
[root@node1 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sde
1IET

2. Node2

[root@node2 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdb
1IET
[root@node2 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdc
1IET
[root@node2 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sdd
1IET
[root@node2 ~]# /lib/udev/scsi_id --whitelisted --device=/dev/sde
1IET
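Since the same command is repeated for every disk on every node, a small loop keeps the outputs easy to compare side by side; a convenience sketch of my own, assuming the RHEL 6 scsi_id path used above:

# Print the WWID of each candidate disk; run identically on node1 and node2
for d in sdb sdc sdd sde; do
    printf '%s: ' "$d"
    /lib/udev/scsi_id --whitelisted --device=/dev/"$d"
done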
Note:
Some sites suggest obtaining the ID with the command below, but when I tested in a virtual machine, the RHEL 6 form returned an ID of this shape, for example:

[root@node2 mapper]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
1IET_

In practice the IET_-style IDs above could not be mapped one to one either; the ordering stayed scrambled. This may be an artifact of the virtual machine, and it depends on the production environment; some production setups do not need multipath at all and can use the storage vendor's own multipathing software.

As for how I found the correct command: check the man page, which gives these command formats:
------------------------------------------------------------------------------------
On RHEL 5:
[root@db11g ~]# /sbin/scsi_id -g -u -s /block/sdb
SATA_VBOX_HARDDISK_VB3c0bb909-10aab3a0_
On RHEL 6:
scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
------------------------------------------------------------------------------------

Creating the multipath.conf configuration file

After multipathd is installed, CentOS 6.4 ships no configuration file. There is a multipath directory under /etc, and I expected the configuration file to live there, but after configuring and testing it is still read from /etc. Obtain a template as follows:

[root@node2 mapper]# find / -name multipath.conf
/usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
[root@node2 mapper]# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/

Edit the configuration file:

defaults {
    user_friendly_names  yes
    queue_without_daemon no
    flush_on_last_del    yes
    max_fds              max
}
blacklist {
    devnode "^hd[a-z]"
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^cciss.*"
}
devices {
    device {
        vendor               "OPNFILER "
        product              "LUN"
        path_grouping_policy group_by_prio
        features             "3 queue_if_no_path pg_init_retries 50"
        getuid_callout       "/sbin/scsi_id -g -u -s /block/%n"
        path_checker         tur
        path_selector        "round-robin 0"
        hardware_handler     "1 alua"
        failback             immediate
        rr_weight            uniform
        rr_min_io            128
    }
}
multipaths {
    multipath {
        wwid  "1IET"      # be sure to quote the wwid here;
                          # in my tests the alias is not created without the quotes
        alias asm-diska
        uid   1100
        gid   1200
    }
    multipath {
        wwid  "1IET"
        alias asm-diskb
        uid   1100
        gid   1200
    }
    multipath {
        wwid  "1IET"
        alias asm-diskc
        uid   1100
        gid   1200
    }
    multipath {
        wwid  "1IET"
        alias asm-diskd
        uid   1100
        gid   1200
    }
}

Starting the multipath service

Once done, start the multipathd service and add it to the boot services:

service multipathd restart
chkconfig multipathd on

Flush the existing records:
multipath -F
Rescan (verbose):
multipath -v3
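Before building anything on top of the aliases, it is worth confirming that both nodes present them identically. A quick check of my own, not in the original walkthrough, using standard device-mapper-multipath tools:

# Show the multipath topology: alias, WWID and the underlying sd* path
multipath -ll
# The aliases should exist under /dev/mapper on both nodes
ls -l /dev/mapper/asm-disk*
# List the multipath maps known to the device-mapper
dmsetup ls --target multipath

Run the same three commands on node1 and node2; the alias-to-WWID pairings printed by multipath -ll must match on both.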
Checking the device status

1. Node1

(The /dev/sdb system-disk information actually printed above the following output is omitted here.) All devices below report 512-byte logical/physical sectors; fdisk -l now shows both the raw paths and the multipath aliases:

Device                  Size      Heads/Sectors/Cylinders   Disk identifier
/dev/sdc                5368 MB   166/62/1018               0x…
/dev/sdd                6442 MB   199/62/1019               0x…
/dev/sdb                4294 MB   133/62/1017               0x8b2c71ca (empty partition table)
/dev/sde                7516 MB   232/62/1020               0x…
/dev/mapper/asm-diskb   5368 MB   255/63/652                0x…
/dev/mapper/asm-diskc   6442 MB   255/63/783                0x…
/dev/mapper/asm-diska   4294 MB   255/63/522                0x8b2c71ca (empty partition table)
/dev/mapper/asm-diskd   7516 MB   255/63/913                0x…

2. Node2

Device                  Size      Heads/Sectors/Cylinders   Disk identifier
/dev/sdb                5368 MB   166/62/1018               0x…
/dev/sdc                6442 MB   199/62/1019               0x…
/dev/sdd                7516 MB   232/62/1020               0x…
/dev/sde                4294 MB   133/62/1017               0x8b2c71ca (empty partition table)
/dev/mapper/asm-diskb   5368 MB   255/63/652                0x…
/dev/mapper/asm-diskc   6442 MB   255/63/783                0x…
/dev/mapper/asm-diskd   7516 MB   255/63/913                0x…
/dev/mapper/asm-diska   4294 MB   255/63/522                0x8b2c71ca

The asm-disk[a-d] aliases now refer to the same-sized disks (4294, 5368, 6442 and 7516 MB) on both nodes, so the two sides line up one to one, which is exactly what oracleasm needs.
Later, when configuring the oracleasm disks, these are the commands that will be used:

[root@RAC1 soft]# /usr/sbin/oracleasm createdisk VOL1 /dev/mapper/asm-diskap1
Writing disk header: done
Instantiating disk: done
[root@RAC1 soft]# /usr/sbin/oracleasm createdisk VOL1 /dev/mapper/asm-diskbp1
Disk "VOL1" already exists
[root@RAC1 soft]# /usr/sbin/oracleasm createdisk VOL2 /dev/mapper/asm-diskbp1
Writing disk header: done
Instantiating disk: done
[root@RAC1 soft]# /usr/sbin/oracleasm createdisk VOL3 /dev/mapper/asm-diskcp1
Writing disk header: done
Instantiating disk: done
[root@RAC1 soft]# /usr/sbin/oracleasm createdisk VOL4 /dev/mapper/asm-diskdp1
Writing disk header: done
Instantiating disk: done

Afterwards VOL[1-4] are visible on both Node1 and Node2. Once created they still need to be initialized and scanned; see the later part of this document.

Pinning multiple iSCSI disks with udev (removed; this did not work)

Determining the disk SNs (did not work)

[root@node1 ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
1IET_
[root@node1 ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
1IET_
[root@node1 ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
1IET_
[root@node1 ~]# scsi_id --whitelisted --replace-whitespace --device=/dev/sde
1IET_

Creating udev rules from the SNs (did not work)

Check the device sizes against the fdisk -l listing and confirm the mapping is correct before binding.

[root@node1 rules.d]# pwd
/etc/udev/rules.d
[root@node1 rules.d]# cat 99-oracle-asmdevices.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_", NAME="asm-diska", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_", NAME="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1IET_", NAME="asm-diskd", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@node2 rules.d]# cat 99-oracle-asmdevices.rules
(the same four rules, with node2's RESULT values)

After configuring, run start_udev to restart udev; then re-discover the iSCSI devices and restart the iscsi service to mount the iSCSI disks. If that still does not work, reboot the operating system. In the end, four disk devices asm-disk[a-d] appear under /dev.
[root@node1 ~]# ls -l /dev/asm*
brw-rw----. 1 grid asmadmin 8, 64 6月 17 00:16 /dev/asm-diska
brw-rw----. 1 grid asmadmin 8, 48 6月 17 00:16 /dev/asm-diskb
brw-rw----. 1 grid asmadmin 8, 16 6月 17 00:16 /dev/asm-diskc
brw-rw----. 1 grid asmadmin 8, 32 6月 17 00:16 /dev/asm-diskd
[root@node2 ~]# ls -l /dev/asm*
brw-rw----. 1 grid asmadmin 8, 64 6月 17 00:16 /dev/asm-diska
brw-rw----. 1 grid asmadmin 8, 32 6月 17 00:16 /dev/asm-diskb
brw-rw----. 1 grid asmadmin 8, 48 6月 17 00:16 /dev/asm-diskc
brw-rw----. 1 grid asmadmin 8, 16 6月 17 00:16 /dev/asm-diskd

Configuring /etc/hosts

[root@node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#Public ip
192.168.37.21   node1.localdomain       node1
192.168.37.22   node2.localdomain       node2
#private ip
172.16.1.1      node1-priv.localdomain  node1-priv
172.16.1.2      node2-priv.localdomain  node2-priv
#virtual ip
192.168.37.23   node1-vip.localdomain   node1-vip
192.168.37.24   node2-vip.localdomain   node2-vip
# scan-ip
192.168.37.200  scan-cluster.localdomain scan-cluster
[root@node2 ~]# cat /etc/hosts
(identical to node1)

Configuring users and groups (script in appendix)

[root@node1 node1]# ll
总用量 24
-rwxr--r--. 1 root root  659 2月 24 15:10 predir.sh
-rwxr--r--. 1 root root  857 2月 24 15:10 prelimits.sh
-rwxr--r--. 1 root root  464 2月 24 15:10 prelogin.sh
-rwxr--r--. 1 root root  652 2月 24 15:10 preprofile.sh
-rwxr--r--. 1 root root 1130 2月 24 15:10 presysctl.sh
-rwxr--r--. 1 root root 3521 2月 24 15:10 preusers.sh
[root@node1 node1]# ./preusers.sh
Now create 6 groups named 'oinstall','dba','asmadmin','asmdba','asmoper','oper'
Plus 2 users named 'oracle','grid',Also setting the Environment
更改用户 grid 的密码 。
passwd: 所有的身份验证令牌已经成功更新。
更改用户 oracle 的密码 。
passwd: 所有的身份验证令牌已经成功更新。
The Groups and users has been created
The Environment for grid,oracle also has been set successfully
[root@node2 node2]# ll
(same file listing as node1)
[root@node2 node2]# ./preusers.sh
(same output as node1)
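The full preusers.sh is reproduced in the appendix. As a rough sketch of what it does: the six groups and two users are created with fixed IDs so they match across nodes. The grid IDs below (uid 1100, gids 1000/1200/1201/1202) are the ones visible elsewhere in this document; the dba, oper and oracle IDs are my placeholders:

groupadd -g 1000 oinstall
groupadd -g 1200 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
groupadd -g 1300 dba         # placeholder GID
groupadd -g 1301 oper        # placeholder GID
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle    # placeholder UID
passwd grid                  # the script then sets both passwords,
passwd oracle                # producing the passwd messages shown above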
Configuring the Oracle installation directories (script in appendix)

[root@node1 node1]# ./predir.sh
Now create the necessary directory for oracle,grid users and change the authention to oracle,grid users...
The necessary directory for oracle,grid users and change the authention to oracle,grid users has been finished
[root@node2 node2]# ./predir.sh
(same output as node1)
[root@node1 u01]# tree
└── app
    ├── 11.2.0
    │   └── grid
    ├── grid
    └── oracle
5 directories, 0 files
[root@node2 u01]# tree
(same tree as node1)

Configuring limits parameters (script in appendix)

[root@node1 node1]# ls
predir.sh  prelimits.sh  prelogin.sh  preprofile.sh  presysctl.sh  preusers.sh
[root@node1 node1]# ./prelimits.sh
Now modify the /etc/security/limits.conf,but backup it named /etc/security/limits.conf.bak before
Modifing the /etc/security/limits.conf has been succeed.
[root@node2 node2]# ./prelimits.sh
(same output as node1)

Configuring pam.d authentication parameters (script in appendix)

[root@node1 node1]# ./prelogin.sh
Now modify the /etc/pam.d/login,but with a backup named /etc/pam.d/login.bak
Modifing the /etc/pam.d/login has been succeed.
[root@node2 node2]# ./prelogin.sh
(same output as node1)

Configuring /etc/profile parameters (script in appendix)

[root@node1 node1]# ./preprofile.sh
Now modify the /etc/profile,but with a backup named /etc/profile.bak
Modifing the /etc/profile has been succeed.
[root@node2 node2]# ./preprofile.sh
(same output as node1)
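preprofile.sh (see appendix) appends the usual per-user resource limits to /etc/profile. Presumably the block follows the standard 11g R2 preinstallation step, roughly:

cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF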
Configuring kernel parameters (script in appendix)

[root@node1 node1]# ./presysctl.sh
Now modify the /etc/sysctl.conf,but with a backup named /etc/sysctl.bak
Modifing the /etc/sysctl.conf has been succeed.
Now make the changes take effect.....
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
[root@node2 node2]# ./presysctl.sh
(same output as node1)
Disabling the NTP service

[root@node1 etc]# date
2014年 06月 17日 星期二 00:34:45 CST
[root@node2 node2]# date
2014年 06月 17日 星期二 00:34:48 CST
[root@node1 ~]# chkconfig ntpd off
[root@node1 ~]# service ntpd stop
关闭 ntpd:                                                 [确定]
[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak
[root@node2 ~]# service ntpd stop
关闭 ntpd:                                                 [确定]
[root@node2 ~]# chkconfig ntpd off
[root@node2 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

Installing the required packages

Packages needed by Grid:

yum install libstdc++-devel libaio-devel gcc-c++ glibc-headers elfutils-libelf-devel compat-libstdc++-33 glibc-devel -y

pdksh download:
http://www.cs.mun.ca/~michael/pdksh/
[root@node2 soft]# rpm -ivh pdksh-5.2.14-30.x86_64.rpm
warning: pdksh-5.2.14-30.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 73307de6: NOKEY
Preparing...                ########################################### [100%]
   1:pdksh                  ########################################### [100%]
[root@node1 grid]# cd rpm/
[root@node1 rpm]# ls
cvuqdisk-1.0.9-1.rpm
[root@node1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...                ########################################### [100%]
Using default group oinstall to install package
   1:cvuqdisk               ########################################### [100%]
[root@node1 rpm]# pwd
/soft/grid/rpm
Configuring ssh equivalence for the oracle and grid users

oracle user ssh equivalence

Node1:

[root@node1 ~]# su - oracle
node1-> mkdir ~/.ssh
node1-> chmod 755 ~/.ssh/
node1-> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
26:c6:8a:60:1b:78:c3:6d:09:b0:c0:f3:a3:e5:ff:ce oracle@node1
(randomart image omitted)
node1-> ssh-keygen -t dsa
Generating public/private dsa key pair.
The key fingerprint is:
d0:05:5d:0c:5b:72:42:87:f8:e8:04:66:e4:b5:bd:8c oracle@node1
(randomart image omitted)
node1-> cat ~/.ssh/id_rsa.pub >> .ssh/authorized_keys
node1-> cat ~/.ssh/id_dsa.pub >> .ssh/authorized_keys
node1-> ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
oracle@node2's password:
node1-> ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@node2's password:
node1-> scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
oracle@node2's password:
authorized_keys                                100%   2.0KB/s

Node2:

[root@node2 ~]# su - oracle
node2-> mkdir ~/.ssh
node2-> chmod 755 .ssh/
node2-> ssh-keygen -t rsa
Generating public/private rsa key pair.
The key fingerprint is:
92:05:4d:35:d2:c9:da:21:b4:06:43:bb:ef:4c:6e:45 oracle@node2
(randomart image omitted)
node2-> ssh-keygen -t dsa
Generating public/private dsa key pair.
The key fingerprint is:
99:00:7e:00:b1:a9:d8:45:68:03:4c:ce:ce:ef:30:0d oracle@node2
(randomart image omitted)
node2-> cat ~/.ssh/id_rsa.pub >> .ssh/authorized_keys
node2-> cat ~/.ssh/id_dsa.pub >> .ssh/authorized_keys
node2-> ssh node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'node1 (192.168.37.21)' can't be established.
RSA key fingerprint is 53:b8:a1:de:30:e3:d4:80:17:d3:7f:f5:b2:9d:11:45.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1,192.168.37.21' (RSA) to the list of known hosts.
node2-> ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
node2-> scp ~/.ssh/authorized_keys node1:~/.ssh/authorized_keys
authorized_keys                                100%   KB/s   00:00

Checking oracle user equivalence

Node1:

node1-> ssh node2 date
Tue Jun 17 12:35:32 CST 2014
node1-> ssh node1 date
(first connection: authenticity prompt for node1, fingerprint 53:b8:a1:de:30:e3:d4:80:17:d3:7f:f5:b2:9d:11:45, answered yes)
Tue Jun 17 12:35:39 CST 2014
node1-> ssh node1 date
Tue Jun 17 12:35:42 CST 2014
node1-> ssh node1-priv date
(first connection prompt for node1-priv, answered yes)
Tue Jun 17 12:42:40 CST 2014
node1-> ssh node1-priv date
Tue Jun 17 12:42:43 CST 2014
node1-> ssh node2-priv date
(first connection prompt for node2-priv, fingerprint c3:1c:e5:62:f2:35:87:4b:51:55:40:41:29:c7:bb:b1, answered yes)
Tue Jun 17 12:42:48 CST 2014
node1-> ssh node2-priv date
Tue Jun 17 12:42:50 CST 2014
Success!

Node2:

node2-> ssh node1 date
Tue Jun 17 12:43:06 CST 2014
node2-> ssh node2 date
(first connection prompt for node2, answered yes)
Tue Jun 17 12:43:11 CST 2014
node2-> ssh node2 date
Tue Jun 17 12:43:14 CST 2014
node2-> ssh node1-priv date
(first connection prompt for node1-priv, answered yes)
Tue Jun 17 12:43:22 CST 2014
node2-> ssh node1-priv date
Tue Jun 17 12:43:24 CST 2014
node2-> ssh node2-priv date
(first connection prompt for node2-priv, answered yes)
Tue Jun 17 12:43:28 CST 2014
node2-> ssh node2-priv date
Tue Jun 17 12:43:31 CST 2014
Success!
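The interactive yes answers above can be avoided by seeding known_hosts before the checks; an optional aside of my own, run as each of the oracle and grid users on both nodes:

# Record the host keys for every name the installer will connect to
for h in node1 node2 node1-priv node2-priv; do
    ssh-keyscan -t rsa "$h" >> ~/.ssh/known_hosts
done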
grid user ssh equivalence

Node1:

[root@node1 ~]# su - grid
node1-> mkdir ~/.ssh
node1-> chmod 755 ~/.ssh/
node1-> ssh-keygen -t rsa
Generating public/private rsa key pair.
The key fingerprint is:
99:cd:f4:44:a9:f1:23:16:40:d6:a9:b7:99:a0:e7:76 grid@node1
(randomart image omitted)
node1-> ssh-keygen -t dsa
Generating public/private dsa key pair.
The key fingerprint is:
03:60:03:14:2e:c1:77:ee:d2:55:46:00:4b:ac:bb:68 grid@node1
(randomart image omitted)
node1-> cat ~/.ssh/id_rsa.pub >> .ssh/authorized_keys
node1-> cat ~/.ssh/id_dsa.pub >> .ssh/authorized_keys
node1-> ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(first connection prompt for node2, fingerprint c3:1c:e5:62:f2:35:87:4b:51:55:40:41:29:c7:bb:b1, answered yes)
grid@node2's password:
node1-> ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid@node2's password:
node1-> scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
grid@node2's password:
authorized_keys                                100%   KB/s   00:00

Node2:

[root@node2 ~]# su - grid
node2-> mkdir ~/.ssh
node2-> chmod 755 ~/.ssh/
node2-> ssh-keygen -t rsa
Generating public/private rsa key pair.
The key fingerprint is:
ae:87:bc:30:60:98:98:d4:36:bb:60:d6:dd:df:3d:54 grid@node2
(randomart image omitted)
node2-> ssh-keygen -t dsa
Generating public/private dsa key pair.
The key fingerprint is:
63:49:2d:9b:55:54:8a:52:d8:5e:4a:c0:fc:ff:ff:91 grid@node2
(randomart image omitted)
node2-> cat ~/.ssh/id_rsa.pub >> .ssh/authorized_keys
node2-> cat ~/.ssh/id_dsa.pub >> .ssh/authorized_keys
node2-> ssh node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
(first connection prompt for node1, fingerprint 53:b8:a1:de:30:e3:d4:80:17:d3:7f:f5:b2:9d:11:45, answered yes)
node2-> ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
node2-> scp ~/.ssh/authorized_keys node1:~/.ssh/authorized_keys
authorized_keys                                100%   KB/s   00:00

Checking grid user equivalence

Node1:

node1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
node1-> ssh node1 date
(first connection prompt for node1, answered yes)
Tue Jun 17 13:10:05 CST 2014
node1-> ssh node1 date
Tue Jun 17 13:10:08 CST 2014
node1-> ssh node2 date
Tue Jun 17 13:10:12 CST 2014
node1-> ssh node1-priv date
(first connection prompt for node1-priv, answered yes)
Tue Jun 17 13:10:36 CST 2014
node1-> ssh node1-priv date
Tue Jun 17 13:10:37 CST 2014
node1-> ssh node2-priv date
(first connection prompt for node2-priv, answered yes)
Tue Jun 17 13:10:42 CST 2014
node1-> ssh node2-priv date
Tue Jun 17 13:10:45 CST 2014

Node2:

node2-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
node2-> ssh node1 date
Tue Jun 17 13:12:13 CST 2014
node2-> ssh node2 date
(first connection prompt for node2, answered yes)
Tue Jun 17 13:12:18 CST 2014
node2-> ssh node2 date
Tue Jun 17 13:12:20 CST 2014
node2-> ssh node1-priv date
(first connection prompt for node1-priv, answered yes)
Tue Jun 17 13:12:27 CST 2014
node2-> ssh node1-priv date
Tue Jun 17 13:12:29 CST 2014
node2-> ssh node2-priv date
(first connection prompt for node2-priv, answered yes)
Tue Jun 17 13:12:35 CST 2014
node2-> ssh node2-priv date
Tue Jun 17 13:12:37 CST 2014
Configuring the ASM disks

Installing the ASM RPM packages

First download the required ASM packages. Because Oracle 11g's oracleasm no longer supports the RHEL 6.0 kernel, the packages provided for RHEL are used here.

Node1:

[root@node1 soft]# pwd
/soft
[root@node1 soft]# ls
kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
oracle.11.2.0.3.linux.64.bit-1to2.zip
oracle.11.2.0.3.linux.64.bit-2to2.zip
oracle.11.2.0.3.linux.64.bit-grid.zip
oracleasmlib-2.0.4-1.el6.x86_64.rpm
oracleasm-support-2.1.8-1.el6.x86_64.rpm
pre-script
[root@node1 soft]# rpm -ivh oracleasm-support-2.1.8-1.el6.x86_64.rpm
warning: oracleasm-support-2.1.8-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@node1 soft]# rpm -ivh kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
warning: kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:kmod-oracleasm         ########################################### [100%]
[root@node1 soft]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
warning: oracleasmlib-2.0.4-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

Node2: the same three packages are installed in the same order, with the same output.
Configuring the ASM drive service

Check the ASM drive service status; it is not enabled by default:

[root@node1 soft]# /usr/sbin/oracleasm status
Checking if ASM is loaded: no
Checking if /dev/oracleasm is mounted: no

Configuring the ASM service on Node1:

[root@node1 soft]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Node2 is configured with the same answers.

Notes: when configuring with /usr/sbin/oracleasm configure -i, the user is set to grid and the group to asmadmin, the ASM library driver service is started, and it is set to start automatically with the operating system. After configuring, remember to run /usr/sbin/oracleasm init to load the oracleasm kernel module:

[root@node1 soft]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@node2 soft]# /usr/sbin/oracleasm init
(same output as node1)
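configure -i does nothing more than write /etc/sysconfig/oracleasm. After the answers above, the relevant keys should look roughly like this; only the user and group values are taken from the run, the rest is the usual layout of that file:

# /etc/sysconfig/oracleasm after oracleasm configure -i
ORACLEASM_ENABLED=true       # load the driver on boot
ORACLEASM_UID=grid           # owner of the /dev/oracleasm device nodes
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true      # scan for ASM disks on boot
ORACLEASM_SCANORDER=""       # becomes important later, once multipath is in play
ORACLEASM_SCANEXCLUDE=""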
Formatting and partitioning the disk

Partition on one node only; on the other node, run partx /dev/sdb to pick up the new partition table.

[root@node1 ~]# fdisk -l

Disk /dev/sdb: 32.2 GB, 32212254720 bytes
64 heads, 32 sectors/track, 30720 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd777d4fc

[root@node1 ~]# fdisk -l
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1           …           …   83  Linux
/dev/sdb2               …           …           …   83  Linux
/dev/sdb3               …           …           …   83  Linux
/dev/sdb4               …           …           …   83  Linux
[root@node2 ~]# partx /dev/sdb
# 1:       32-…         (… sectors,    …3 MB)
# 2:        …-…169919   (… sectors,  6443 MB)
# 3:        …-…949183   (… sectors,  8590 MB)
# 4:        …-…728447   (… sectors,  8590 MB)

Configuring the ASM disks

The whole point of installing the ASM RPM packages and configuring the ASM driver service is to create the ASM disks that will provide storage for the Grid installation and the Oracle database.

Note: the ASM disks only need to be created on one node. After creation, run /usr/sbin/oracleasm scandisks on the other node and the ASM disks become visible there too.

[root@node1 soft]# /usr/sbin/oracleasm listdisks
[root@node1 soft]#
[root@node2 soft]# /usr/sbin/oracleasm listdisks
[root@node2 soft]#

Troubleshooting: the failure below is caused by the SELinux restriction; setenforce 0, or disabling SELinux outright, fixes it.

[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done
[root@node1 ~]# getenforce
Enforcing

After the fix:

[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL2 /dev/sdb2
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL3 /dev/sdb3
Writing disk header: done
Instantiating disk: done
[root@node1 ~]# /usr/sbin/oracleasm createdisk VOL4 /dev/sdb4
Writing disk header: done
Instantiating disk: done

Rescanning the ASM disks on the other node

[root@node2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "VOL1"
Instantiating disk "VOL2"
Instantiating disk "VOL3"
Instantiating disk "VOL4"

Checking the ASM disk configuration

[root@node1 ~]# /usr/sbin/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
[root@node2 ~]# /usr/sbin/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
[root@node2 ~]#
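One caveat about the setenforce 0 workaround above: it only lasts until the next reboot. If a relaxed SELinux mode is acceptable on these nodes, it can be made persistent; my addition, not part of the original text:

# Keep SELinux permissive across reboots (assumes that is acceptable here)
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0     # relax the running system as well
getenforce       # should now report Permissive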
III. Installing Grid

Pre-installation checks

node1-> ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
Fixup information has been generated for following node(s):
node2,node1
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.3.0_grid/runfixup.sh'
Pre-check for cluster services setup was unsuccessful on all the nodes.
[root@node1 ~]# /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.3.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.3.0_grid/orarun.log
uid=1100(grid) gid=1000(oinstall) 组=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)
[root@node2 ~]# /tmp/CVU_11.2.0.3.0_grid/runfixup.sh
(same output as node1)

Installing Grid

(Installer screenshots omitted: OUI steps for the Grid Infrastructure installation.)

Troubleshooting: if network-connectivity errors appear, the firewall may be the cause; disabling it resolves them.

[root@node1 ~]# iptables -F
[root@node1 ~]# service iptables save
iptables:将防火墙规则保存到 /etc/sysconfig/iptables:     [确定]
[root@node1 ~]# service iptables stop
iptables:清除防火墙规则:                                 [确定]
iptables:将链设置为政策 ACCEPT:filter                    [确定]
iptables:正在卸载模块:                                   [确定]

The step shown above may also report an error; on investigation it is an Oracle bug (PRVF-5150) and can be ignored. (Error screenshot omitted.) See:
http://www.oradblife.com//oracle-error-prvf-5150/
http://blog.csdn.net/yfleng2002/article/details/7851249

(Further installer screenshots omitted. Password: asmadmin.)

[root@node1 u01]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node2 ~]# /u01/app/oraInventory/orainstRoot.sh
(same output as node1)
Troubleshooting root.sh:

[root@node1 u01]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Failed to create keys in the OLR, rc = 127, Message:
  /u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory

Failed to create keys in the OLR at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 7497.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

The problem above is caused by a missing package; install it. References:
http://blog.chinaunix.net/uid--id-3940857.html
http://blog.itpub.net//viewspace-1093784/

[root@node1 u01]# yum install compat-libcap1.x86_64 -y

Then deconfigure before rerunning root.sh:

[root@node1 ~]# find /u01/ -name roothas.pl
/u01/app/11.2.0/grid/crs/install/roothas.pl
[root@node1 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CLSU-00100: Operating System function: failed failed with error data: 2
CLSU-00101: Operating System error message: No such file or directory
CLSU-00103: error location: scrsearch3
CLSU-00104: additional error information: id doesnt exist scls_scr_setval
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Failure in execution (rc=-1, 0, 没有那个文件或目录) for command /etc/init.d/ohasd deinstall
Successfully deconfigured Oracle Restart stack
[root@node2 ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force
(same output as node1)
If multipath is used, the disks are pinned but the dm-* device names still do not correspond across nodes, so the oracleasm configuration file has to be modified, and this should be done before the installation.

Error when running root.sh on NODE2:

[root@node2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'node2'
CRS-2676: Start of 'ora.mdnsd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node2'
CRS-2676: Start of 'ora.gpnpd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node2'
CRS-2672: Attempting to start 'ora.gipcd' on 'node2'
CRS-2676: Start of 'ora.cssdmonitor' on 'node2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node2'
CRS-2672: Attempting to start 'ora.diskmon' on 'node2'
CRS-2676: Start of 'ora.diskmon' on 'node2' succeeded
CRS-2676: Start of 'ora.cssd' on 'node2' succeeded

DiskGroup GRIDDG creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0

Reference:
http://wenku.baidu.com/link?url=V2w4aY2TGe7osqTMIFMLAZytK-jd1iig9pjkdLJKjjRawDq8woznyhARw4c9A9nhXl9CRU9URL9b30TMn_u359p6Ty8OwvwlcYKP8HWeT3S

Solution:
In the end, Oracle's support site says this is a bug; if it is ignored, running root.sh on the second node raises the ORA-15072 and ORA-15018 errors above. The documented fix:

On all nodes,
1. Modify /etc/sysconfig/oracleasm with:
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
2. Restart asmlib:
# /etc/init.d/oracleasm restart
3. Run root.sh on the 2nd node

This resolves the problem.
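A scriptable way to apply Oracle's fix on both nodes; a sketch that assumes the two keys already exist in /etc/sysconfig/oracleasm (they are written by oracleasm configure -i):

# Scan dm (multipath) devices first and ignore the underlying sd* paths
sed -i -e 's/^ORACLEASM_SCANORDER=.*/ORACLEASM_SCANORDER="dm"/' \
       -e 's/^ORACLEASM_SCANEXCLUDE=.*/ORACLEASM_SCANEXCLUDE="sd"/' \
       /etc/sysconfig/oracleasm
/etc/init.d/oracleasm restart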
Rerun the root script (this takes quite a while; be patient):

[root@node1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
已成功创建并启动 ASM。
已成功创建磁盘组 GRIDDG。
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk ee1d4f9fbf0ba8bb6c9084b5.
Successfully replaced voting disk group with +GRIDDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                          File Name    Disk group
--  -----    -----------------                          ---------    ----------
 1. ONLINE   ee1d4f9fbf0ba8bb6c9084b5 (ORCL:VOL1)                    [GRIDDG]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.GRIDDG.dg' on 'node1'
CRS-2676: Start of 'ora.GRIDDG.dg' on 'node1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@node2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Troubleshooting: the error reported here is caused by scan-cluster being resolved through the /etc/hosts file; it can simply be skipped and has no impact. If DNS is used for resolution instead, this error should not appear. (Error screenshot omitted.) Reference:
http://blog.csdn.net/wuweilong/article/details/8506566

IV. Installing the Oracle database

(Installer screenshots omitted: OUI steps for the database software installation.)

[root@node1 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@node2 ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
