Installing 11g RAC on Virtualized RHEL5

  Create the Shared Disks

  This installation uses NFS to provide the shared storage for RAC. Adjust the commands below to suit your NAS or NFS server.

  If you are using a third Linux server to provide the NFS service, create the shared directories as follows:

  mkdir /shared_config

  mkdir /shared_crs

  mkdir /shared_home

  mkdir /shared_data

  Add the following lines to the /etc/exports file:

  /shared_config *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  /shared_crs *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  /shared_home *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  /shared_data *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  Run the following commands to export the NFS shares:

  chkconfig nfs on

  service nfs restart
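
  As an optional sanity check (not part of the original procedure), you can confirm that the four shares are exported and visible. Run exportfs on the NFS server itself and, assuming the server is reachable as nas1 (the host name used in the fstab entries below), run showmount from each RAC node:

  exportfs -v

  showmount -e nas1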

  If you are using a NAS or some other NFS-capable storage device, create the same four shares there.

  On both RAC1 and RAC2, create the directories in which the Oracle software will be installed:

  mkdir -p /u01/app/crs/product/11.1.0/crs

  mkdir -p /u01/app/oracle/product/11.1.0/db_1

  mkdir -p /u01/oradata

  mkdir -p /u01/shared_config

  chown -R oracle:oinstall /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

  chmod -R 775 /u01/app /u01/app/oracle /u01/oradata /u01/shared_config

  Add the following lines to the /etc/fstab file on each server. The mount options are based on the recommendations in Oracle MetaLink note 359515.1:

  nas1:/shared_config /u01/shared_config nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 0 0

  nas1:/shared_crs /u01/app/crs/product/11.1.0/crs nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0

  nas1:/shared_home /u01/app/oracle/product/11.1.0/db_1 nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0

  nas1:/shared_data /u01/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 0 0

  Log in to both servers as the root user and run the following commands to mount the NFS shares:

  mount /u01/shared_config

  mount /u01/app/crs/product/11.1.0/crs

  mount /u01/app/oracle/product/11.1.0/db_1

  mount /u01/oradata
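
  Optionally, verify that all four NFS shares are mounted with the intended options before continuing. A simple check (not part of the original procedure) is:

  df -h /u01/shared_config /u01/app/crs/product/11.1.0/crs /u01/app/oracle/product/11.1.0/db_1 /u01/oradata

  mount | grep nfs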

  Create the shared CRS configuration (OCR) and voting disk files:

  touch /u01/shared_config/ocr_configuration

  touch /u01/shared_config/voting_disk

  Log in to each server as root and run the following commands to make sure the permissions on the shared directories are set correctly:

  chown -R oracle:oinstall /u01/shared_config

  chown -R oracle:oinstall /u01/app/crs/product/11.1.0/crs

  chown -R oracle:oinstall /u01/app/oracle/product/11.1.0/db_1

  chown -R oracle:oinstall /u01/oradata
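
  A quick optional check that the ownership changes have taken effect on both nodes:

  ls -ld /u01/shared_config /u01/app/crs/product/11.1.0/crs /u01/app/oracle/product/11.1.0/db_1 /u01/oradata

  ls -l /u01/shared_config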

  Before starting the clusterware installation, use runcluvfy.sh from the clusterware root directory to check that the prerequisites have been met:

  /mountpoint/clusterware/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

  If you receive any failure messages, correct the problems before continuing with the installation.
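
  If, for example, only the node connectivity check fails, you can re-run just that component after fixing the network configuration. This is a sketch that assumes the comp syntax supported by runcluvfy.sh in 11g:

  /mountpoint/clusterware/runcluvfy.sh comp nodecon -n rac1,rac2 -verbose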

  Install the Clusterware Software

  Unzip the clusterware and database software:

  unzip linux_11gR1_clusterware.zip

  unzip linux_11gR1_database.zip

  Log in to RAC1 as the oracle user and start the installer:

  cd clusterware

  ./runInstaller

  On the "Welcome" screen, click the "Next" button.

  Accept the default inventory directory and click the "Next" button.

  Enter "/u01/app/crs/product/11.1.0/crs" as the Oracle home and click the "Next" button.

  Wait for the prerequisite checks to complete. Correct and re-test any failures so that all checks pass, then click the "Next" button.

  The "Specify Cluster Configuration" screen lists only the RAC1 node. Click the "Add" button to continue.

  Enter the details for the RAC2 node and click the "OK" button.

  Click the "Next" button to continue.

  The "Specify Network Interface Usage" screen defines how each network interface will be used. Select the "eth0" interface and click the "Edit" button.

  Set the "eth0" interface type to "Public" and click the "OK" button.

  Leave the "eth1" interface as "Private" and click the "Next" button.

  Select the "External Redundancy" option, enter "/u01/shared_config/ocr_configuration" as the OCR location and click the "Next" button. For greater redundancy we would need to define a mirror location on a separate shared disk.

  Select the "External Redundancy" option, enter "/u01/shared_config/voting_disk" as the voting disk location and click the "Next" button. For greater redundancy we would need to define additional voting disk locations on separate shared disks.

  On the "Summary" screen, click the "Install" button to continue.

  Wait for the installation to complete.

  Once the installation is complete, run the orainstRoot.sh and root.sh scripts shown on the screen on both nodes.

  The output from running orainstRoot.sh should look like the following.

  # cd /u01/app/oraInventory

  # ./orainstRoot.sh

  Changing permissions of /u01/app/oraInventory to 770.

  Changing groupname of /u01/app/oraInventory to oinstall.

  The execution of the script is complete

  #

  The output from root.sh depends on the node on which it is run. The following output is from the RAC1 node.

  # cd /u01/app/crs/product/11.1.0/crs

  # ./root.sh

  WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root

  WARNING: directory '/u01/app/crs/product' is not owned by root

  WARNING: directory '/u01/app/crs' is not owned by root

  WARNING: directory '/u01/app' is not owned by root

  Checking to see if Oracle CRS stack is already configured

  /etc/oracle does not exist. Creating it now.

  Setting the permissions on OCR backup directory

  Setting up Network socket directories

  Oracle Cluster Registry configuration upgraded successfully

  The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root

  The directory '/u01/app/crs/product' is not owned by root. Changing owner to root

  The directory '/u01/app/crs' is not owned by root. Changing owner to root

  The directory '/u01/app' is not owned by root. Changing owner to root

  Successfully accumulated necessary OCR keys.

  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

  node :

  node 1: rac1 rac1-priv rac1

  node 2: rac2 rac2-priv rac2

  Creating OCR keys for user 'root', privgrp 'root'..

  Operation successful.

  Now formatting voting device: /u01/shared_config/voting_disk

  Format of 1 voting devices complete.

  Startup will be queued to init within 30 seconds.

  Adding daemons to inittab

  Expecting the CRS daemons to be up within 600 seconds.

  Cluster Synchronization Services is active on these nodes.

  rac1

  Cluster Synchronization Services is inactive on these nodes.

  rac2

  Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.

  #

  The following output is from the RAC2 node.

  # cd /u01/app/crs/product/11.1.0/crs

  # ./root.sh

  WARNING: directory '/u01/app/crs/product/11.1.0' is not owned by root

  WARNING: directory '/u01/app/crs/product' is not owned by root

  WARNING: directory '/u01/app/crs' is not owned by root

  WARNING: directory '/u01/app' is not owned by root

  Checking to see if Oracle CRS stack is already configured

  /etc/oracle does not exist. Creating it now.

  Setting the permissions on OCR backup directory

  Setting up Network socket directories

  Oracle Cluster Registry configuration upgraded successfully

  The directory '/u01/app/crs/product/11.1.0' is not owned by root. Changing owner to root

  The directory '/u01/app/crs/product' is not owned by root. Changing owner to root

  The directory '/u01/app/crs' is not owned by root. Changing owner to root

  The directory '/u01/app' is not owned by root. Changing owner to root

  clscfg: EXISTING configuration version 4 detected.

  clscfg: version 4 is 11 Release 1.

  Successfully accumulated necessary OCR keys.

  Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.

  node :

  node 1: rac1 rac1-priv rac1

  node 2: rac2 rac2-priv rac2

  clscfg: Arguments check out successfully.

  NO KEYS WERE WRITTEN. Supply -force parameter to override.

  -force is destructive and will destroy any previous cluster

  configuration.

  Oracle Cluster Registry for cluster has already been initialized

  Startup will be queued to init within 30 seconds.

  Adding daemons to inittab

  Expecting the CRS daemons to be up within 600 seconds.

  Cluster Synchronization Services is active on these nodes.

  rac1

  rac2

  Cluster Synchronization Services is active on all the nodes.

  Waiting for the Oracle CRSD and EVMD to start

  Waiting for the Oracle CRSD and EVMD to start

  Oracle CRS stack installed and running under init(1M)

  Running vipca(silent) for configuring nodeapps

  Creating VIP application resource on (2) nodes...

  Creating GSD application resource on (2) nodes...

  Creating ONS application resource on (2) nodes...

  Starting VIP application resource on (2) nodes...

  Starting GSD application resource on (2) nodes...

  Starting ONS application resource on (2) nodes...

  Done.

  #

  As you can see here, some of the configuration steps are omitted because they were already performed on the first node. In addition, the final part of the script runs the Virtual IP Configuration Assistant (VIPCA) in silent mode.

  You should now return to the "Execute Configuration Scripts" screen on RAC1 and click the "OK" button.

  Wait for the configuration assistants to complete.

  When the installation is complete, click the "Exit" button to leave the installer.

  The clusterware installation is now complete.
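
  As a final optional sanity check, you can confirm that the clusterware stack is healthy on both nodes using the utilities shipped in the CRS home:

  /u01/app/crs/product/11.1.0/crs/bin/crsctl check crs

  /u01/app/crs/product/11.1.0/crs/bin/crs_stat -t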
