
Oracle Study -- Oracle 11g RAC Add-Node Errors


System environment:
Operating system: RedHat EL5
Cluster: Oracle 11gR2 Grid Infrastructure
Oracle: Oracle 11gR2

Issue 1: The new node's clock is out of sync with the existing nodes, and adding the node fails

1. Run "root.sh" on the new node:
[root@wqy3 install]# /u01/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-06-03 14:05:38: Parsing the host name
2015-06-03 14:05:38: Checking for super user privileges
2015-06-03 14:05:38: User has super user privileges
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node wqy2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'wqy3'
CRS-2676: Start of 'ora.mdnsd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'wqy3'
CRS-2676: Start of 'ora.gipcd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wqy3'
CRS-2676: Start of 'ora.gpnpd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wqy3'
CRS-2676: Start of 'ora.cssdmonitor' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wqy3'
CRS-2672: Attempting to start 'ora.diskmon' on 'wqy3'
CRS-2676: Start of 'ora.diskmon' on 'wqy3' succeeded
CRS-2676: Start of 'ora.cssd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wqy3'
CRS-2674: Start of 'ora.ctssd' on 'wqy3' failed
CRS-4000: Command Start failed, or completed with errors.
Command return code of 1 (256) from command: /u01/11.2.0/grid/bin/crsctl start resource ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE
Start of resource "ora.ctssd -init -env USR_ORA_ENV=CTSS_REBOOT=TRUE" failed
Failed to start CTSS
Failed to start Oracle Clusterware stack
-- Starting the ctssd process failed

Check the octssd log:

[root@wqy3 ctssd]# tail -f octssd.log

2015-06-03 12:00:50.644: [    CTSS][2988764048]ctssslave_meh1: Master private node name [wqy1]
2015-06-03 12:00:50.644: [    CTSS][2988764048]ctssslave_msh: Connect String is (ADDRESS=(PROTOCOL=tcp)(HOST=wqy1)(PORT=14631))
2015-06-03 12:00:50.644: [    CTSS][2988764048]ctssslave_msh: Forming connection with CTSS master node [1]
2015-06-03 12:00:50.645: [    CTSS][2988764048]ctssslave_msh: Successfully connected to master [1]
2015-06-03 12:00:50.645: [    CTSS][2988764048]ctssslave_swm: The magnitude [7198995297417 usec] of the offset [7198995297417 usec] is larger than [86400000000 usec] sec which is the CTSS limit.
2015-06-03 12:00:50.645: [    CTSS][2988764048]ctsselect_mmg9_3: Failed in clsctsselect_select_mode [12]: Time offset is too much to be corrected
2015-06-03 12:00:51.552: [    CTSS][2978274192]ctss_checkcb: clsdm requested check alive. Returns [40000050]
2015-06-03 12:00:51.552: [    CTSS][2988764048]ctsselect_mmg: CTSS daemon exiting [12].
2015-06-03 12:00:51.552: [    CTSS][2988764048]CTSS daemon aborting
2015-06-03 12:00:51.552: [    CTSS][3046532800]ctss_init: Spawn completed. Waiting for threads to join
2015-06-03 14:07:28.085: [    CTSS][3046586048]Oracle Database CTSS Release 11.2.0.1.0 Production Copyright 2006, 2007 Oracle.  All rights reserved.
2015-06-03 14:07:28.085: [    CTSS][3046586048]ctss_scls_init: SCLs Context is 0x99bd618
2015-06-03 14:07:28.098: [    CTSS][3046586048]ctss_css_init: CSS Context is 0x99bd6c0
2015-06-03 14:07:28.099: [    CTSS][3046586048]ctss_clsc_init: CLSC Context is 0x99be000
2015-06-03 14:07:28.099: [    CTSS][3046586048]ctss_init: CTSS production mode
2015-06-03 14:07:28.099: [    CTSS][3046586048]ctss_init: CTSS_REBOOT=TRUE. Overriding 'reboot' argument as if 'octssd reboot' is executed. Turn on start up step sync.
2015-06-03 14:07:28.099: [    CTSS][3046586048]sclsctss_gvss2: NTP default pid file not found
2015-06-03 14:07:28.099: [    CTSS][3046586048]sclsctss_gvss8: Return [0] and NTP status [1].
2015-06-03 14:07:28.099: [    CTSS][3046586048]ctss_check_vendor_sw: Vendor time sync software is not detected. status [1].
2015-06-03 14:07:28.099: [    CTSS][3046586048]ctsscomm_init: The Socket name is [(ADDRESS=(PROTOCOL=tcp)(HOST=wqy3))]
2015-06-03 14:07:28.101: [    CTSS][3046586048]ctsscomm_init: Successful completion.
2015-06-03 14:07:28.101: [    CTSS][3046586048]ctsscomm_init: PORT = 18607
2015-06-03 14:07:28.102: [    CTSS][3020286864]CTSS connection handler started
[    CTSS][3009797008]clsctsselect_mm: Master Monitor thread started
[    CTSS][2999307152]ctsselect_msm: Slave Monitor thread started
2015-06-03 14:07:28.102: [    CTSS][2988817296]ctsselect_mmg: The local nodenum is 4
2015-06-03 14:07:28.103: [    CTSS][2988817296]ctsselect_mmg2_5: Pub data for member [1]. {Version [1] Node [1] Priv node name [wqy1] Port num [14631] SW version [186646784] Mode [0x40]}
2015-06-03 14:07:28.104: [    CTSS][2988817296]ctsselect_mmg2_5: Pub data for member [2]. {Version [1] Node [2] Priv node name [wqy2] Port num [42486] SW version [186646784] Mode [0x40]}
2015-06-03 14:07:28.106: [    CTSS][2988817296]ctsselect_mmg4: Successfully registered with [CTSSMASTER]
2015-06-03 14:07:28.107: [    CTSS][2988817296]ctsselect_mmg6: Receive reconfig event. Inc num[9] New master [3] members count[1]
2015-06-03 14:07:28.107: [    CTSS][2988817296]ctsselect_mmg8: Host [wqy1] Node num [1] is the master
2015-06-03 14:07:28.107: [    CTSS][2988817296]ctsselect_sm2: Node [1] is the CTSS master
2015-06-03 14:07:28.107: [    CTSS][2988817296]ctssslave_meh1: Master private node name [wqy1]
2015-06-03 14:07:28.108: [    CTSS][2988817296]ctssslave_msh: Connect String is (ADDRESS=(PROTOCOL=tcp)(HOST=wqy1)(PORT=14631))
[  clsdmt][2978327440]Listening to (ADDRESS=(PROTOCOL=ipc)(KEY=wqy3DBG_CTSSD))
2015-06-03 14:07:28.108: [  clsdmt][2978327440]PID for the Process [4654], connkey 11
2015-06-03 14:07:28.108: [  clsdmt][2978327440]Creating PID [4654] file for home /u01/11.2.0/grid host wqy3 bin ctss to /u01/11.2.0/grid/ctss/init/
2015-06-03 14:07:28.108: [  clsdmt][2978327440]Writing PID [4654] to the file [/u01/11.2.0/grid/ctss/init/wqy3.pid]
2015-06-03 14:07:28.109: [    CTSS][2988817296]ctssslave_msh: Forming connection with CTSS master node [1]
2015-06-03 14:07:28.110: [    CTSS][2988817296]ctssslave_msh: Successfully connected to master [1]
2015-06-03 14:07:28.110: [    CTSS][2988817296]ctssslave_swm: The magnitude [7198995304656 usec] of the offset [7198995304656 usec] is larger than [86400000000 usec] sec which is the CTSS limit.
2015-06-03 14:07:28.110: [    CTSS][2988817296]ctsselect_mmg9_3: Failed in clsctsselect_select_mode [12]: Time offset is too much to be corrected
2015-06-03 14:07:29.019: [    CTSS][2978327440]ctss_checkcb: clsdm requested check alive. Returns [40000050]
2015-06-03 14:07:29.019: [    CTSS][2988817296]ctsselect_mmg: CTSS daemon exiting [12].
2015-06-03 14:07:29.019: [    CTSS][2988817296]CTSS daemon aborting
2015-06-03 14:07:29.020: [    CTSS][3046586048]ctss_init: Spawn completed. Waiting for threads to join


Check the time on the new node:
[root@wqy3 install]# date

Wed Jun 3 14:09:38 CST 2015

Time on the other nodes:
[root@wqy1 ~]# date

Thu Mar 12 06:28:57 CST 2015

[root@wqy2 ~]# date

Thu Mar 12 06:29:31 CST 2015

--- The new node's clock is severely out of sync with the existing nodes. The offset reported in the log (7,198,995,297,417 µs, roughly 7,199,000 seconds or about 83 days) is far beyond the 86,400,000,000 µs (24-hour) limit that CTSS is willing to correct, so the daemon aborts.

2. Adjust the time on the new node:
[root@wqy3 install]# date 031206292015

Thu Mar 12 06:29:00 CST 2015



[root@wqy3 install]# date

Thu Mar 12 06:29:01 CST 2015

[root@wqy3 install]#

[root@wqy1 ~]# date

Thu Mar 12 06:28:57 CST 2015



[root@wqy2 ~]# date

Thu Mar 12 06:29:31 CST 2015
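
Setting the date by hand only lines the clocks up for the moment. To keep them aligned, the usual options are to point every node at the same NTP source, or to leave CTSS in active mode (as here, where no NTP is configured) and simply make sure the initial offset stays within the 24-hour limit. A minimal NTP sketch, assuming wqy1 (hypothetical) is reachable as a time source:

# One-shot sync of the new node against a time source (the server name is an assumption)
[root@wqy3 ~]# ntpdate -u wqy1
# For a permanent setup on RedHat EL5, run ntpd with the slewing option (-x),
# e.g. OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid" in /etc/sysconfig/ntpd, then:
[root@wqy3 ~]# service ntpd restart
# Verify cluster-wide clock synchronization with the Cluster Verification Utility
[grid@wqy3 ~]$ cluvfy comp clocksync -n wqy1,wqy2,wqy3 -verbose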


3. Deconfigure CRS and rerun root.sh:

[root@wqy3 install]# perl rootcrs.pl -deconfig -force

2015-06-03 14:08:35: Parsing the host name
2015-06-03 14:08:35: Checking for super user privileges
2015-06-03 14:08:35: User has super user privileges
Using configuration parameter file: ./crsconfig_params
PRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.eons is registered
Cannot communicate with crsd
ACFS-9200: Supported
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'wqy3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'wqy3'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'wqy3'
CRS-2677: Stop of 'ora.cssdmonitor' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'wqy3'
CRS-2677: Stop of 'ora.cssd' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'wqy3'
CRS-2673: Attempting to stop 'ora.diskmon' on 'wqy3'
CRS-2677: Stop of 'ora.mdnsd' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'wqy3'
CRS-2677: Stop of 'ora.gipcd' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'wqy3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'wqy3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node
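
Note that rootcrs.pl is invoked here from its own directory (the "Using configuration parameter file: ./crsconfig_params" line shows the working directory is the Grid home's crs/install). From anywhere else, the full path would be used; a sketch:

[root@wqy3 ~]# perl /u01/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force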


Rerun root.sh:

[root@wqy3 install]# /u01/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-03-12 06:30:01: Parsing the host name
2015-03-12 06:30:01: Checking for super user privileges
2015-03-12 06:30:01: User has super user privileges
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node wqy1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'wqy3'
CRS-2676: Start of 'ora.mdnsd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'wqy3'
CRS-2676: Start of 'ora.gipcd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wqy3'
CRS-2676: Start of 'ora.gpnpd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wqy3'
CRS-2676: Start of 'ora.cssdmonitor' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wqy3'
CRS-2672: Attempting to start 'ora.diskmon' on 'wqy3'
CRS-2676: Start of 'ora.diskmon' on 'wqy3' succeeded
CRS-2676: Start of 'ora.cssd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wqy3'
CRS-2676: Start of 'ora.ctssd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'wqy3'
CRS-2676: Start of 'ora.drivers.acfs' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'wqy3'
CRS-2676: Start of 'ora.asm' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'wqy3'
CRS-2676: Start of 'ora.crsd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'wqy3'
CRS-2676: Start of 'ora.evmd' on 'wqy3' succeeded
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
PRKC-1024 : Invalid netmask: eth0
add nodeapps on node=wqy3 ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware

Error in writing to directory /u01/tmp. Oracle Universal Installer needs you to have permission in this area to install. Installation cannot continue. Please make sure TEMP is set to a valid writable directory.
: Permission denied
Starting Oracle Universal Installer...
-- The run failed; the error indicates that a writable tmp directory needs to be created under /u01


Issue 2: Environment variables on the new node differ from the existing nodes, and running "root.sh" fails

1. Check the environment variables on an existing node, then create the matching directory on the new node:

[grid@wqy1 ~]$ env |grep tmp

TMPDIR=/u01/tmp

TMP=/u01/tmp

[root@wqy3 install]# mkdir /u01/tmp

[root@wqy3 install]# chmod 1777 /u01/tmp

[root@wqy3 install]# ls -l /u01

total 28

drwxr-xr-x 3 grid oinstall 4096 Jun 3 2015 11.2.0

drwxr-xr-x 4 root root 4096 Jun 3 2015 app

drwx------ 2 root root 16384 Jun 3 2015 lost+found

drwxrwxrwt 2 root root 4096 Mar 12 06:37 tmp
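
Creating /u01/tmp removes the immediate write error, but to fully match the existing nodes the same TMP/TMPDIR settings should also be carried into the grid user's environment on wqy3. A minimal sketch, assuming a bash login shell that reads ~/.bash_profile:

[root@wqy3 ~]# su - grid
[grid@wqy3 ~]$ cat >> ~/.bash_profile <<'EOF'
export TMP=/u01/tmp
export TMPDIR=/u01/tmp
EOF
[grid@wqy3 ~]$ . ~/.bash_profile
[grid@wqy3 ~]$ env | grep tmp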

2. Deconfigure CRS:

[root@wqy3 install]# perl rootcrs.pl -deconfig -force


2015-03-12 06:34:59: Parsing the host name
2015-03-12 06:34:59: Checking for super user privileges
2015-03-12 06:34:59: User has super user privileges
Using configuration parameter file: ./crsconfig_params
VIP exists.:wqy1
VIP exists.: /wqy1-vip/192.168.8.23/255.255.255.0/eth0
VIP exists.:wqy2
VIP exists.: /wqy2-vip/192.168.8.24/255.255.255.0/eth0
GSD exists.
ONS daemon exists. Local port 6100, remote port 6200
eONS daemon exists. Multicast port 18447, multicast IP address 234.140.234.80, listening port 2016
PRKO-2439 : VIP does not exist.

PRKO-2313 : VIP wqy3 does not exist.
ACFS-9200: Supported
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'wqy3'
CRS-2677: Stop of 'ora.registry.acfs' on 'wqy3' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'wqy3'
CRS-2673: Attempting to stop 'ora.crsd' on 'wqy3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'wqy3'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'wqy3'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'wqy3'
CRS-2677: Stop of 'ora.asm' on 'wqy3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'wqy3' has completed
CRS-2677: Stop of 'ora.crsd' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'wqy3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'wqy3'
CRS-2673: Attempting to stop 'ora.evmd' on 'wqy3'
CRS-2673: Attempting to stop 'ora.asm' on 'wqy3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'wqy3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'wqy3'
CRS-2677: Stop of 'ora.cssdmonitor' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.asm' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'wqy3'
CRS-2677: Stop of 'ora.cssd' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'wqy3'
CRS-2673: Attempting to stop 'ora.diskmon' on 'wqy3'
CRS-2677: Stop of 'ora.gpnpd' on 'wqy3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'wqy3'
CRS-2677: Stop of 'ora.gipcd' on 'wqy3' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'wqy3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'wqy3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node


3. Rerun the root.sh script:

[root@wqy3 install]# /u01/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

: y
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2015-03-12 06:38:57: Parsing the host name
2015-03-12 06:38:57: Checking for super user privileges
2015-03-12 06:38:57: User has super user privileges
Using configuration parameter file: /u01/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node wqy1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'wqy3'
CRS-2676: Start of 'ora.mdnsd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'wqy3'
CRS-2676: Start of 'ora.gipcd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wqy3'
CRS-2676: Start of 'ora.gpnpd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wqy3'
CRS-2676: Start of 'ora.cssdmonitor' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wqy3'
CRS-2672: Attempting to start 'ora.diskmon' on 'wqy3'
CRS-2676: Start of 'ora.diskmon' on 'wqy3' succeeded
CRS-2676: Start of 'ora.cssd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wqy3'
CRS-2676: Start of 'ora.ctssd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'wqy3'
CRS-2676: Start of 'ora.drivers.acfs' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'wqy3'
CRS-2676: Start of 'ora.asm' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'wqy3'
CRS-2676: Start of 'ora.crsd' on 'wqy3' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'wqy3'
CRS-2676: Start of 'ora.evmd' on 'wqy3' succeeded
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
PRKC-1024 : Invalid netmask: eth0
add nodeapps on node=wqy3 ... failed
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... failed
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
--- The script completed successfully!

4. Verify:

[root@wqy3 install]# crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

[root@wqy3 install]# crs_stat -t

Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    wqy1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    wqy1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    wqy1
ora....VOTE.dg ora....up.type ONLINE    ONLINE    wqy1
ora.RCV.dg     ora....up.type ONLINE    ONLINE    wqy1
ora.asm        ora.asm.type   ONLINE    ONLINE    wqy1
ora.eons       ora.eons.type  ONLINE    ONLINE    wqy1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    wqy1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    wqy1
ora.racdb.db   ora....se.type ONLINE    ONLINE    wqy1
ora....ry.acfs ora....fs.type ONLINE    ONLINE    wqy1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    wqy1
ora....SM1.asm application    ONLINE    ONLINE    wqy1
ora....Y1.lsnr application    ONLINE    ONLINE    wqy1
ora.wqy1.gsd   application    OFFLINE   OFFLINE
ora.wqy1.ons   application    ONLINE    ONLINE    wqy1
ora.wqy1.vip   ora....t1.type ONLINE    ONLINE    wqy1
ora....SM2.asm application    ONLINE    ONLINE    wqy2
ora....Y2.lsnr application    ONLINE    ONLINE    wqy2
ora.wqy2.gsd   application    OFFLINE   OFFLINE
ora.wqy2.ons   application    ONLINE    ONLINE    wqy2
ora.wqy2.vip   ora....t1.type ONLINE    ONLINE    wqy2
ora....SM4.asm application    ONLINE    ONLINE    wqy3
ora....Y3.lsnr application    OFFLINE   OFFLINE
ora.wqy3.gsd   application    OFFLINE   OFFLINE
ora.wqy3.ons   application    ONLINE    ONLINE    wqy3


Issue 3: The VIP resource for the new node is missing

1. Check the services on the new node:


[root@wqy3 install]# crs_stat -t

Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    wqy1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    wqy1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    wqy1
ora....VOTE.dg ora....up.type ONLINE    ONLINE    wqy1
ora.RCV.dg     ora....up.type ONLINE    ONLINE    wqy1
ora.asm        ora.asm.type   ONLINE    ONLINE    wqy1
ora.eons       ora.eons.type  ONLINE    ONLINE    wqy1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    wqy1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    wqy1
ora.racdb.db   ora....se.type ONLINE    ONLINE    wqy1
ora....ry.acfs ora....fs.type ONLINE    ONLINE    wqy1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    wqy1
ora....SM1.asm application    ONLINE    ONLINE    wqy1
ora....Y1.lsnr application    ONLINE    ONLINE    wqy1
ora.wqy1.gsd   application    OFFLINE   OFFLINE
ora.wqy1.ons   application    ONLINE    ONLINE    wqy1
ora.wqy1.vip   ora....t1.type ONLINE    ONLINE    wqy1
ora....SM2.asm application    ONLINE    ONLINE    wqy2
ora....Y2.lsnr application    ONLINE    ONLINE    wqy2
ora.wqy2.gsd   application    OFFLINE   OFFLINE
ora.wqy2.ons   application    ONLINE    ONLINE    wqy2
ora.wqy2.vip   ora....t1.type ONLINE    ONLINE    wqy2
ora....SM4.asm application    ONLINE    ONLINE    wqy3
ora....Y3.lsnr application    OFFLINE   OFFLINE
ora.wqy3.gsd   application    OFFLINE   OFFLINE
ora.wqy3.ons   application    ONLINE    ONLINE    wqy3
[root@wqy3 install]# crs_stat |grep vip

NAME=ora.scan1.vip

TYPE=ora.scan_vip.type

NAME=ora.wqy1.vip

TYPE=ora.cluster_vip_net1.type

NAME=ora.wqy2.vip

TYPE=ora.cluster_vip_net1.type
--- The VIP resource for node 3 (wqy3) is missing

2. Add the VIP service (nodeapps) for the new node:

[root@wqy3 install]# srvctl add nodeapps -h

Adds node application configuration to the specified node.

Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
-n <node_name>           Node name
-A <vip_name|ip>/<netmask>/[if1[|if2...]]  VIP address spec for node applications
-S <subnet>/<netmask>/[if1[|if2...]]  NET address spec for network
-m <multicast-ip-address>   The multicast IP address for eONS
-p <multicast-portnum>    The port number for eONS
-e <eons-listen-port>     Local listen port for eONS daemon (Default port number is 2016)
-l <ons-local-port>      ONS listening port for local client connections
-r <ons-remote-port>     ONS listening port for connections from remote hosts
-t <host>[:<port>][,<host>[:<port>]...]  List of remote host/port pairs for ONS daemons outside this cluster
-h                       Print usage
-v                       Verbose output


[root@wqy3 install]# srvctl add nodeapps -A 192.168.8.27/255.255.255.0/eth0 -n wqy3


[root@wqy3 install]# crs_stat |grep vip


NAME=ora.scan1.vip

TYPE=ora.scan_vip.type

NAME=ora.wqy1.vip

TYPE=ora.cluster_vip_net1.type

NAME=ora.wqy2.vip

TYPE=ora.cluster_vip_net1.type

NAME=ora.wqy3.vip

TYPE=ora.cluster_vip_net1.type

Start the VIP service:


[root@wqy3 install]# crs_start -f ora.wqy3.vip

Attempting to start `ora.wqy3.vip` on member `wqy3`

Start of `ora.wqy3.vip` on member `wqy3` succeeded.

[root@wqy3 install]# ip ad sh

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 08:00:27:70:1d:15 brd ff:ff:ff:ff:ff:ff
inet 192.168.8.26/24 brd 192.168.8.255 scope global eth0
inet 192.168.8.27/24 brd 192.168.8.255 scope global secondary eth0:1
inet6 fe80::a00:27ff:fe70:1d15/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 08:00:27:60:ee:a3 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.103/24 brd 10.10.10.255 scope global eth1
inet6 fe80::a00:27ff:fe60:eea3/64 scope link
valid_lft forever preferred_lft forever
4: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0


Start the listener on the new node:

[root@wqy3 install]# crs_stat |grep lsn

NAME=ora.LISTENER.lsnr

NAME=ora.LISTENER_SCAN1.lsnr

NAME=ora.wqy1.LISTENER_WQY1.lsnr

NAME=ora.wqy2.LISTENER_WQY2.lsnr

NAME=ora.wqy3.LISTENER_WQY3.lsnr

[root@wqy3 install]# crs_start -f ora.wqy3.LISTENER_WQY3.lsnr

Attempting to start `ora.LISTENER.lsnr` on member `wqy3`

Start of `ora.LISTENER.lsnr` on member `wqy3` succeeded.
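
For reference, the same node applications can also be started with the srvctl equivalents of the (deprecated in 11gR2) crs_start calls used above; a sketch, not taken from this session:

[grid@wqy3 ~]$ srvctl start vip -n wqy3
[grid@wqy3 ~]$ srvctl start listener -n wqy3
[grid@wqy3 ~]$ srvctl status nodeapps -n wqy3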

[root@wqy3 install]# crs_stat -t

Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    wqy1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    wqy1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    wqy1
ora....VOTE.dg ora....up.type ONLINE    ONLINE    wqy1
ora.RCV.dg     ora....up.type ONLINE    ONLINE    wqy1
ora.asm        ora.asm.type   ONLINE    ONLINE    wqy1
ora.eons       ora.eons.type  ONLINE    ONLINE    wqy1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    wqy1
ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE
ora.ons        ora.ons.type   ONLINE    ONLINE    wqy1
ora.racdb.db   ora....se.type ONLINE    ONLINE    wqy1
ora....ry.acfs ora....fs.type ONLINE    ONLINE    wqy1
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    wqy1
ora....SM1.asm application    ONLINE    ONLINE    wqy1
ora....Y1.lsnr application    ONLINE    ONLINE    wqy1
ora.wqy1.gsd   application    OFFLINE   OFFLINE
ora.wqy1.ons   application    ONLINE    ONLINE    wqy1
ora.wqy1.vip   ora....t1.type ONLINE    ONLINE    wqy1
ora....SM2.asm application    ONLINE    ONLINE    wqy2
ora....Y2.lsnr application    ONLINE    ONLINE    wqy2
ora.wqy2.gsd   application    OFFLINE   OFFLINE
ora.wqy2.ons   application    ONLINE    ONLINE    wqy2
ora.wqy2.vip   ora....t1.type ONLINE    ONLINE    wqy2
ora....SM4.asm application    ONLINE    ONLINE    wqy3
ora....Y3.lsnr application    ONLINE    ONLINE    wqy3
ora.wqy3.gsd   application    OFFLINE   OFFLINE
ora.wqy3.ons   application    ONLINE    ONLINE    wqy3
ora.wqy3.vip   ora....t1.type ONLINE    ONLINE    wqy3


--- At this point, the new node has been added successfully!
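
As a final sanity check (not part of the original session), cluster membership and the overall stack state can be confirmed from any node; a minimal sketch:

# List cluster nodes with their node numbers and status
[grid@wqy3 ~]$ olsnodes -n -s
# Check CSS/CRS/EVM on every node at once
[grid@wqy3 ~]$ crsctl check cluster -all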