
Installing an Oracle 10g Two-Node Cluster (RAC)

2012-02-05 20:48
Host configuration notes (each node):

Each node must have two network cards and support TCP/IP; the servers on which the cluster software will be installed must also support UDP (a quick check is sketched below).
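A minimal sanity check for the interfaces (a sketch only; the exact device names depend on the hardware):

/sbin/ifconfig -a | grep eth     # two ethN devices should be listed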
Disable the firewall:

service iptables stop
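service iptables stop only disables the firewall until the next reboot; assuming the stock Red Hat init scripts, also keep it from starting at boot:

chkconfig iptables off
chkconfig ip6tables off     # only if the ip6tables service exists on the node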

Disable SELinux:

vi /etc/selinux/config

SELINUX=disabled

Use a static IP configuration (BOOTPROTO=static).

The gateway must be specified:

# Intel Corporation 82566MM Gigabit Network Connection

DEVICE=eth0

BOOTPROTO=static

IPADDR=10.1.1.135

NETMASK=255.255.255.0

GATEWAY=10.1.3.1

HWADDR=00:1E:37:D6:FA:44

ONBOOT=yes

Restart the network: service network restart

The hostname must not resolve to the loopback address!
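A quick way to verify this (a simple sketch using standard tools):

grep "$(hostname)" /etc/hosts     # the hostname must NOT appear on the 127.0.0.1 line
ping -c 1 "$(hostname)"           # should answer from the public IP, not from 127.0.0.1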

If a single-instance ASM service was ever started on the machine, stop/remove it first: $ORACLE_HOME/bin/localconfig delete

Uninstall any Oracle software installed in standalone (non-cluster) mode (uninstall with OUI first, then manually clean up leftover files: /etc/*.ora and $ORACLE_HOME).

If the Red Hat release is higher than 4, lower the version reported in the following file to 4 (a sketch follows):

/etc/redhat-release
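The 10g installer reads this file for its OS check, so the usual workaround is to back it up and make it report a release 4 string. A sketch (the exact release string is an assumption; adapt it to your distribution):

cp /etc/redhat-release /etc/redhat-release.bak
echo "Red Hat Enterprise Linux AS release 4 (Nahant)" > /etc/redhat-release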

----------------------------------------------------------------------------------------------------------

Configure /etc/hosts (all nodes)

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

# public Network - (eth0)

10.1.1.135 xie.uplooking1.com

10.1.1.132 xie.uplooking.com

# public virtual IP (eth0:#)

10.1.1.136 xie.uplooking1.com-vip

10.1.1.133 xie.uplooking.com-vip

# private Interconnect - (eth0:0)

10.1.2.135 xie.uplooking1.com-priv

10.1.2.132 xie.uplooking.com-priv

Configure ifcfg-eth0:0 (all nodes)

[root@xie network-scripts]# vi ifcfg-eth0:0

# Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

DEVICE=eth0:0

BOOTPROTO=static

ONBOOT=yes

IPADDR=10.1.2.132

NETMASK=255.255.255.0

Restart the network: service network restart

----------------------------------------------------------------------------------------

Configure the hangcheck-timer module, which monitors the Linux kernel for hangs:

vi /etc/modprobe.conf

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180

Load hangcheck-timer automatically at boot:

vi /etc/rc.local

modprobe hangcheck-timer

Check whether the hangcheck-timer module is loaded:

lsmod | grep hangcheck_timer

-------------------------------------------------------------------------------------------------------------------------------

Create the oracle user:

Run the following scripts:

1.install.sh

#!/bin/bash

. ./adduser.sh

. ./sysctl.sh

. ./limits.sh

. ./mkdir.sh

. ./chprofile.sh

2.adduser.sh

#!/bin/bash

ADDGROUPS="oinstall dba"

ADDUSERS="oracle"

for group in $ADDGROUPS ; do

if [ -z "$( awk -F: '{print $1}' /etc/group |grep $group)" ]; then

groupadd $group

echo " Add new group $group"

else

echo " Group $group already existed"

fi

done

for user in $ADDUSERS ; do

if [ -z "$( awk -F: '{print $1}' /etc/passwd |grep $user)" ]; then

useradd $user

echo " Add new user $user"

else

echo " User $user already existed"

fi

done

if usermod -g oinstall -G dba oracle ; then

echo " Modify user oracle account success"

else

echo " Modify user oracle account failure"

fi

3.sysctl.sh

#!/bin/bash

# echo 250 32000 100 128 > /proc/sys/kernel/sem

# echo 536870912 > /proc/sys/kernel/shmmax

# echo 4096 > /proc/sys/kernel/shmmni

# echo 2097152 > /proc/sys/kernel/shmall

# echo 65536 > /proc/sys/fs/file-max

# echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range

SYSCTL_FILE="/etc/sysctl.conf"

RCLOCAL_FILE="/etc/rc.local"

if [ -f "$SYSCTL_FILE" ] ; then

if [ -z "$(grep "Oracle" $SYSCTL_FILE)" ] ; then

cat >>$SYSCTL_FILE << END

#Oracle configure kernel parameters

kernel.shmmax = 2147483648

kernel.shmmni = 4096

kernel.shmall = 2097152

kernel.sem = 250 32000 100 128

fs.file-max = 65536

net.ipv4.ip_local_port_range = 1024 65000

net.core.rmem_default = 262144

net.core.rmem_max = 262144

net.core.wmem_default = 262144

net.core.wmem_max = 262144

END

/sbin/sysctl -p

echo " Add Oracle configure kernel parameters success"

else

echo " Oracle configure kernel parameters already existed"

fi

else

if [ -z "$(grep "Oracle" $RCLOCAL_FILE)" ] ; then

cat >>$RCLOCAL_FILE << END

#Oracle configure kernel parameters

echo 536870912 > /proc/sys/kernel/shmmax

echo 4096 > /proc/sys/kernel/shmmni

echo 2097152 > /proc/sys/kernel/shmall

echo 250 32000 100 128 > /proc/sys/kernel/sem

echo 65536 > /proc/sys/fs/file-max

echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range

END

. $RCLOCAL_FILE

echo " Add Oracle configure kernel parameters success"

else

echo " Oracle configure kernel parameters already existed"

fi

fi

4.limits.sh

#!/bin/bash

LIMITS_FILE="/etc/security/limits.conf"

if [ -f "$LIMITS_FILE" ] ; then

if [ -z "$(grep "Oracle" $LIMITS_FILE)" ] ; then

cat >>$LIMITS_FILE << END

#Oracle configure shell parameters

oracle soft nofile 65536

oracle hard nofile 65536

oracle soft nproc 16384

oracle hard nproc 16384

END

echo " Add Oracle configure shell parameters success"

else

echo " Oracle configure shell parameters already existed"

fi

else

echo "$0: $LIMITS_FILE not found "

fi

5.mkdir.sh

#!/bin/bash

ORACLE_FILE_BASE="/u01/app/oracle"

ORACLE_FILE_VAR="/var/opt/oracle"

ORACLE_FILE_HOME="$ORACLE_FILE_BASE/product/10.2.0/db_1"

for directory in $ORACLE_FILE_BASE $ORACLE_FILE_VAR $ORACLE_FILE_HOME ; do

if [ -d $directory ]; then

echo " Directory $directory already existed"

else

mkdir -p $directory

chown -R oracle.dba $directory

echo " Change directory $directory owner and group success"

fi

done

6.chprofile.sh

#!/bin/bash

PROFILES="/home/oracle/.bashrc"

for PROFILE in $PROFILES ; do

if [ -f "$PROFILE" ] ; then

if [ -z "$(grep "Oracle" $PROFILE)" ] ; then

cat >>$PROFILE << END

# Oracle configure profile parameters success

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=\$ORACLE_BASE/product/10.2.0/db_1

export CRS_HOME=/u01/crs_1

export PATH=\$ORACLE_HOME/bin:\$PATH

export ORACLE_OWNER=oracle

export ORACLE_SID=racdb1
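# on the second node the instance SID would typically be racdb2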

export ORACLE_TERM=vt100

export THREADS_FLAG=native

export LD_LIBRARY_PATH=\$ORACLE_HOME/lib:\$LD_LIBRARY_PATH

export PATH=\$ORACLE_HOME/bin:\$PATH

export SQLPATH=/home/oracle

export EDITOR=vi

alias sqlplus='rlwrap sqlplus'

alias lsnrctl='rlwrap lsnrctl'

alias rman='rlwrap rman'

alias asmcmd='rlwrap asmcmd'

#

# change this NLS settings to suit your country:

# example:

# german_germany.we8iso8859p15, american_america.we8iso8859p2 etc.

#

export LANG=en_US

END

echo " Add Oracle configure $PROFILE parameters success"

else

echo " Oracle configure $PROFILE parameters already existed"

fi

else

echo "$0: $PROFILE not found "

fi

done

Set the oracle user's password: oracle
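For example, on every node (the password here is simply the one used in this lab):

passwd oracle     # enter the password twice when prompted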

---------------------------------------------------------------

On all nodes, fix the ownership of /u01:

chown oracle.oinstall /u01 -R

----------------------------------------------------------------------------------------

Configure SSH user equivalence (trust) between the nodes:

On stu90 (10.1.1.132):

su - oracle

ssh-keygen -t rsa

ssh-keygen -t dsa

cd .ssh

cat *.pub > authorized_keys

On stu92 (10.1.1.135):

su - oracle

ssh-keygen -t rsa

ssh-keygen -t dsa

cd .ssh

cat *.pub > authorized_keys

On stu90 (10.1.1.132):

scp authorized_keys oracle@10.1.1.135:/home/oracle/.ssh/keys_dbs

On stu92 (10.1.1.135):

cat keys_dbs >> authorized_keys

scp authorized_keys oracle@10.1.1.132:/home/oracle/.ssh/

Test the trust relationship:

On xie.uplooking.com:

ssh xie.uplooking.com

ssh xie1.uplooking.com

ssh xie-priv.uplooking.com

ssh xie1-priv.uplooking.com

On xie1.uplooking.com:

ssh xie.uplooking.com

ssh xie1.uplooking.com

ssh xie-priv.uplooking.com

ssh xie1-priv.uplooking.com

---------------------------------------------------------------------------------------------------------------

Test time synchronization between the nodes (a minimal sketch follows):
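The original note gives no command here, so this is only a sketch: compare the clocks of both nodes over SSH, and optionally do a one-shot sync against an NTP server if one is reachable (the server address below is a placeholder).

for h in xie.uplooking.com xie1.uplooking.com ; do ssh $h date ; done
# ntpdate 10.1.1.1     # placeholder NTP server; use only if such a server exists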

---------------------------------------------------------------

Prepare the shared volume: iSCSI

iscsi server --> stu90

yum install scsi-target-utils

vi /etc/tgt/targets.conf

----------------------------------------

<target iqn.2011-01.com.oracle.blues:luns1>

backing-store /dev/sda5

initiator-address 10.1.1.0/24

</target>

----------------------------------------

vi /etc/udev/rules.d/55-openiscsi.rules

-----------------------------------------------

KERNEL=="sd*",BUS=="scsi",PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c"

-----------------------------------------------

vi /etc/udev/scripts/iscsidev.sh

----------------------------------------

#!/bin/bash

BUS=${1}

HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

if [ -z "${target_name}" ] ; then

exit 1

fi

echo "${target_name##*:}"

----------------------------------------

chmod +x /etc/udev/scripts/iscsidev.sh

chkconfig iscsi off

chkconfig iscsid off

chkconfig tgtd off

service iscsi start

service iscsid start

service tgtd start

tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

iscsiadm -m discovery -t sendtargets -p 10.1.1.xx

service iscsi start

fdisk -l

/*************************************************

Rescan the target server (if needed):

iscsiadm -m session -u

iscsiadm -m discovery -t sendtargets -p 10.1.1.103

**************************************************/

iscsi client:10.1.1.92

vi /etc/udev/rules.d/55-openiscsi.rules

-----------------------------------------------

KERNEL=="sd*",BUS=="scsi",PROGRAM="/etc/udev/scripts/iscsidev.sh %b",SYMLINK+="iscsi/%c"

-----------------------------------------------

vi /etc/udev/scripts/iscsidev.sh

----------------------------------------

#!/bin/bash

BUS=${1}

HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"

target_name=$(cat ${file})

if [ -z "${target_name}" ] ; then

exit 1

fi

echo "${target_name##*:}"

----------------------------------------

chmod +x /etc/udev/scripts/iscsidev.sh

service iscsi start

iscsiadm -m discovery -t sendtargets -p 10.1.1.xx -l

service iscsi start

fdisk -l

Partition the iSCSI shared disk:

fdisk /dev/sdb

On all nodes: partprobe /dev/sdb

On all nodes, map the iSCSI shared partitions to raw devices:

vi /etc/udev/rules.d/60-raw.rules

-------------------------------------

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"

ACTION=="add", KERNEL=="sdb5", RUN+="/bin/raw /dev/raw/raw3 %N"

ACTION=="add", KERNEL=="sdb6", RUN+="/bin/raw /dev/raw/raw4 %N"

KERNEL=="raw[1]", MODE="0660", GROUP="oinstall", OWNER="root"

KERNEL=="raw[2]", MODE="0660", GROUP="oinstall", OWNER="oracle"

KERNEL=="raw[3]", MODE="0660", GROUP="oinstall", OWNER="oracle"

KERNEL=="raw[4]", MODE="0660", GROUP="oinstall", OWNER="oracle"

Restart udev on all nodes:

start_udev

Check the raw devices on all nodes:

ll /dev/raw/
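Judging by the ownership in the udev rules above, raw1 (root-owned) is presumably meant for the OCR and raw2 for the voting disk, with raw3/raw4 left for ASM; this mapping is an assumption, not stated in the original. The kernel bindings can also be listed with:

raw -qa     # shows which /dev/raw/rawN is bound to which block-device major/minor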

-----------------------------------------------------------------------------------------

Cluster installation feasibility check (cluvfy):

cd /mnt

tar -zxvf clusterware10GR2_32.tar.gz

chown oracle.oinstall clusterware -R

su - oracle

cd /mnt/clusterware/cluvfy/

./runcluvfy.sh stage -pre crsinst -n xie,xie1 -verbose

A problem came up:

[oracle@xie1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n xie.uplooking.com,xie1.uplooking.com -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "xie1"

Destination Node Reachable?

------------------------------------ ------------------------

xie1 yes

xie yes

Result: Node reachability check passed from node "xie1"

Change /etc/hosts on all nodes to:

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

::1 localhost6.localdomain6 localhost6

# public Network - (eth0)

10.1.1.135 xie1.uplooking.com xie1

10.1.1.132 xie.uplooking.com xie

# public virtual IP (eth0:#)

10.1.1.136 xie1-vip

10.1.1.133 xie-vip

# private Interconnect - (eth0:0)

10.1.2.135 xie1-priv

10.1.2.132 xie-priv

Then re-test the trust relationship:

On xie.uplooking.com:

ssh xie

ssh xie1

ssh xie-priv

ssh xie1-priv

On xie1.uplooking.com:

ssh xie

ssh xie1

ssh xie-priv

ssh xie1-priv

The check now succeeds. (The following four packages cannot be installed on release 5; that is fine.)

Check: Package existence for "compat-gcc-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

xie missing failed

xie1 missing failed

Result: Package existence check failed for "compat-gcc-7.3-2.96.128".

Check: Package existence for "compat-gcc-c++-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

xie missing failed

xie1 missing failed

Result: Package existence check failed for "compat-gcc-c++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

xie missing failed

xie1 missing failed

Result: Package existence check failed for "compat-libstdc++-7.3-2.96.128".

Check: Package existence for "compat-libstdc++-devel-7.3-2.96.128"

Node Name Status Comment

------------------------------ ------------------------------ ----------------

xie missing failed

xie1 missing failed

Result: Package existence check failed for "compat-libstdc++-devel-7.3-2.96.128".

---------------------------------------------------------------

Install the Clusterware software (only needs to be run on one node, but the other nodes must be manually added to the cluster in the installer):

/mnt/clusterware/runInstaller

Before running the /u01/crs_1/root.sh script, modify vipca & srvctl on all nodes (see the sketch below):

su - oracle

cd $CRS_HOME/bin

vi +123 vipca

vi + srvctl

~~~~~~~~~~~~~~~~

unset LD_ASSUME_KERNEL
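This is the well-known 10.2.0.1 issue where vipca and srvctl export LD_ASSUME_KERNEL, which breaks on newer glibc. A sketch of applying the edit non-interactively with GNU sed (review both files afterwards, e.g. with grep -n LD_ASSUME_KERNEL vipca srvctl):

cd /u01/crs_1/bin
sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca srvctl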

/u01/crs_1/root.sh

If you get this error:

Running vipca(silent) for configuring nodeapps

Error 0(Native: listNetInterfaces:[3])

[Error 0(Native: listNetInterfaces:[3])]

Fix (only needs to be done on one node):

cd /u01/crs_1/bin

#./oifcfg iflist

#./oifcfg setif -global eth0/10.1.1.0:public

#./oifcfg setif -global eth0:0/10.1.2.0:cluster_interconnect

#./oifcfg getif

Run vipca manually to complete the work of the root.sh script.

Verify the status of the cluster background processes:

cd /u01/crs_1/bin

[oracle@xie bin]$ ./crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora.xie.gsd application ONLINE ONLINE xie

ora.xie.ons application ONLINE ONLINE xie

ora.xie.vip application ONLINE ONLINE xie

ora.xie1.gsd application ONLINE ONLINE xie1

ora.xie1.ons application ONLINE ONLINE xie1

ora.xie1.vip application ONLINE ONLINE xie1

Back up the OCR as root:

cd /u01/crs_1/bin

./ocrconfig -export /home/oracle/bk/ocr/ocr1.bk
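As an extra sanity check (not in the original note), ocrcheck from the same bin directory reports the OCR size, free space and an integrity check result; run it as root:

./ocrcheck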

------------------------------------------------------------------------------------------

Install the database software (only needs to be run on one node; the installer offers a multi-node selection). During installation choose software-only; do not create a database.

/mnt/database/runInstaller

Configure the cluster database network:

netca

--------------------------------------------------------

Apply the patch set:

1. Stop the cluster on all nodes as root:

[root@xie bin]# /etc/init.d/init.crs stop

[oracle@xie bin]$ ./crs_stat -t

CRS-0184: Cannot communicate with the CRS daemon.

2. Run ./runInstaller to apply the patch set:

1. Patch Clusterware first: run ./runInstaller and select the Clusterware home (afterwards run the two scripts as prompted).

After the patch is applied, the cluster services start automatically:

[oracle@xie1 bin]$ ./crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora....IE.lsnr application ONLINE ONLINE xie

ora.xie.gsd application ONLINE ONLINE xie

ora.xie.ons application ONLINE ONLINE xie

ora.xie.vip application ONLINE ONLINE xie

ora....E1.lsnr application ONLINE ONLINE xie1

ora.xie1.gsd application ONLINE ONLINE xie1

ora.xie1.ons application ONLINE ONLINE xie1

ora.xie1.vip application ONLINE ONLINE xie1

2. Then patch the database software: run ./runInstaller and select the database home.

[root@xie bin]# /etc/init.d/init.crs stop

[oracle@xie bin]$ ./crs_stat -t

CRS-0184: Cannot communicate with the CRS daemon.

After this install, the cluster services are not started automatically.

----------------------------------------------------

Create the database:

Create the database with dbca (run it on one node only).

After the database is created, check the cluster status:

[oracle@xie bin]$ ./crs_stat -t

Name Type Target State Host

------------------------------------------------------------

ora.racdb.db application ONLINE ONLINE xie

ora....b1.inst application ONLINE ONLINE xie

ora....b2.inst application ONLINE ONLINE xie1

ora....SM1.asm application ONLINE ONLINE xie

ora....IE.lsnr application ONLINE ONLINE xie

ora.xie.gsd application ONLINE ONLINE xie

ora.xie.ons application ONLINE ONLINE xie

ora.xie.vip application ONLINE ONLINE xie

ora....SM2.asm application ONLINE ONLINE xie1

ora....E1.lsnr application ONLINE ONLINE xie1

ora.xie1.gsd application ONLINE ONLINE xie1

ora.xie1.ons application ONLINE ONLINE xie1

ora.xie1.vip application ONLINE ONLINE xie1