
2 Administering Oracle Clusterware

2015-10-10 17:15 · 573 views



This chapter describes how to administer Oracle Clusterware and includes the following topics:

Policy-Based Cluster and Capacity Management

Role-Separated Management

Configuring Oracle Grid Infrastructure

Configuring IPMI for Failure Isolation

Cluster Time Management

Changing Network Addresses on Manually Configured Networks


Policy-Based Cluster and Capacity Management

Oracle Clusterware 11g release 2 (11.2) introduces a different method of managing nodes and resources used by a database, called policy-based management.
This section contains the following topics:

Overview of Server Pools and Policy-Based Management

Server Attributes Assigned by Oracle Clusterware

Understanding Server Pools

How Oracle Clusterware Assigns New Servers


Overview of Server Pools and Policy-Based Management

With Oracle Clusterware 11g release 2 (11.2) and later, resources
managed by Oracle Clusterware are contained in logical groups of servers called server
pools. Resources are hosted on a shared infrastructure and are contained within server pools. The resources are restricted with respect to their hardware resource (such as CPU and memory) consumption by policies, behaving as if they were deployed
in a single-system environment.
You can choose to manage resources dynamically using server pools to provide policy-based management of resources in the cluster, or you can choose to manage resources using the traditional method of physically
assigning resources to run on particular nodes.
Policy-based management:

Enables dynamic capacity assignment when needed to provide server capacity in accordance with the priorities you set with policies

Enables allocation of resources by importance, so that applications obtain the required minimum resources, whenever possible, and so that lower priority applications do not take resources from more important applications

Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases

Applications and databases running in server pools do not share resources. Because of this, server pools isolate resources where necessary, but enable dynamic capacity assignments as required. Together with role-separated
management, this capability addresses the needs of organizations that have standardized cluster environments, but allow multiple administrator groups to share the common cluster infrastructure.

See Also:
Appendix B, "Oracle Clusterware Resource Reference" for more information about resource attributes
Oracle Clusterware efficiently allocates different resources in the cluster. You need only to provide the minimum and maximum number of nodes on which a resource can run, combined with a level of importance for
each resource that is running on these nodes.


Server Attributes Assigned by Oracle Clusterware

Oracle Clusterware assigns each server a set of attributes as soon as you add a server to a cluster.
If you remove the server from the cluster, then Oracle Clusterware revokes those settings. Table 2-1 lists and describes server
attributes.

Table 2-1 Server Attributes
Attribute    Description
NAME

The node name of the server. A server name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server name cannot begin with a period, or with ora.
This attribute is required.
ACTIVE_POOLS

A space-delimited list of the names of the server pools to which a server belongs. Oracle Clusterware manages this list, automatically.
STATE

A server can be in one of the following states:
ONLINE

The server is a member of the cluster and is available for resource placement.
OFFLINE

The server is not currently a member of the cluster. Subsequently, it is not available for resource placement.
JOINING

When a server joins a cluster, Oracle Clusterware processes the server to ensure that it is valid for resource placement. Oracle Clusterware also checks the state of resources configured to run on
the server. Once the validity of the server and the state of the resources are determined, the server transitions out of this state.
LEAVING

When a planned shutdown for a server begins, the state of the server transitions to LEAVING, making it unavailable for resource placement.
VISIBLE

Servers that have Oracle Clusterware running, but not the Cluster Ready Services daemon (crsd), are put into the VISIBLE state. This usually indicates an intermittent issue or failure, while Oracle Clusterware tries to recover (restart) the daemon. Oracle Clusterware cannot manage resources on servers while the servers are in this state.
RECONFIGURING

When servers move between server pools due to server pool reconfiguration, a server is placed into this state if resources that ran on it in the current server pool must be stopped and relocated. This happens because resources running on the server may not be configured to run in the server pool to which the server is moving. As soon as the resources are successfully relocated, the server is put back into the ONLINE state.

Use the crsctl status server command to obtain server information.

STATE_DETAILS

This is a read-only attribute that Oracle Clusterware manages. The attribute provides additional details about the state of a server. Possible additional details about a server state are:

Server state: ONLINE:

AUTOSTARTING RESOURCES

Indicates that the resource autostart procedure (performed when a server reboots or the Oracle Clusterware stack is restarted) is in progress for the server.

AUTOSTART QUEUED

The server is waiting for the resource autostart to commence. Once that happens, the attribute value changes to AUTOSTARTING RESOURCES.

Server state: RECONFIGURING:

STOPPING RESOURCES

Resources that are restricted from running in a new server pool are stopping.

STARTING RESOURCES

Resources that can run in a new server pool are starting.

RECONFIG FAILED

One or more resources did not stop and thus the server cannot transition into the ONLINE state. At this point, manual intervention is required. You must stop or unregister resources that did not stop. After that, the server automatically transitions into the ONLINE state.

Server state: JOINING:

CHECKING RESOURCES

Whenever a server reboots, the Oracle Clusterware stack restarts, or crsd on a server restarts, the policy engine must determine the current state of the resources on the server. While that procedure is in progress, this value is returned.
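You can inspect these attributes for any cluster node with CRSCTL. The following is a sketch (the node name server1 is a placeholder; run the command from Grid_home/bin on a cluster node):

$ crsctl status server server1 -f

The -f option prints the full attribute list for the server, including the NAME, STATE, ACTIVE_POOLS, and STATE_DETAILS attributes described in Table 2-1.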


Understanding Server Pools

This section contains the following topics:

How Server Pools Work

The Free Server Pool

The Generic Server Pool


How Server Pools Work

Server pools divide the cluster into groups of servers hosting the same or similar resources. They distribute a uniform workload (a set of Oracle Clusterware resources) over several servers in the cluster. For
example, you can restrict Oracle databases to run only in certain server pools. When you enable role-separated management, you can explicitly grant permission to operating system users to change attributes of certain server pools.
Top-level server pools:

Logically divide the cluster

Are always exclusive, meaning that one server can only reside in one particular server pool at a certain point in time

Server pools each have three attributes that they are assigned when they are created:

MIN_SIZE: The minimum number of servers the server pool should contain. If the number of servers in a server pool is below the value of this attribute, then Oracle Clusterware automatically moves servers from elsewhere into the server pool until the number of servers reaches the attribute value.

MAX_SIZE: The maximum number of servers the server pool should contain.

IMPORTANCE: A number from 0 to 1000 (0 being least important) that ranks a server pool among all other server pools in a cluster.
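For example, a cluster administrator could create a server pool and set all three attributes in a single CRSCTL command (the pool name sp_test and the attribute values here are illustrative, not from the original text):

# crsctl add serverpool sp_test -attr "MIN_SIZE=1, MAX_SIZE=2, IMPORTANCE=100"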
Table 2-2 lists and describes all server pool attributes.

Table 2-2 Server Pool Attributes
Attribute    Values and Format    Description
ACL

String in the following format:
owner:user:rwx,pgrp:group:rwx,other::r--

Defines the owner of the server pool and which privileges are granted to various operating system users and groups. The server pool owner defines the operating system user of the owner, and which
privileges that user is granted.
The value of this optional attribute is populated at the time a server pool is created based on the identity of the process creating the server pool, unless explicitly overridden. The value can subsequently
be changed, if such a change is allowed based on the existing privileges of the server pool.

In the string:

owner: The operating system user of the server pool owner, followed by the privileges of the owner

pgrp: The operating system group that is the primary group of the owner of the server pool, followed by the privileges of members of the primary group

other: Followed by privileges of others

r: Read only

w: Modify attributes of the pool or delete it

x: Assign resources to this pool

By default, the identity of the client that creates the server pool is the owner. Also by default, root, and the user specified in owner have full privileges. You can grant required operating system users and operating system groups their privileges by adding the following lines to the ACL attribute:
user:username:rwx
group:group_name:rwx

ACTIVE_SERVERS

A string of server names in the following format:
server_name1 server_name2 ...

Oracle Clusterware automatically manages this attribute, which contains the space-delimited list of servers that are currently assigned to a server pool.
EXCLUSIVE_POOLS

String
This optional attribute indicates if servers assigned to this server pool are shared with other server pools. A server pool can explicitly state that it is exclusive of any other server pool that has the same value for this attribute. Two or more server pools are mutually exclusive when the sets of servers assigned to them do not have a single server in common. For example, server pools A and B must be exclusive if they both set the value of this attribute to foo_A_B.
Top-level server pools are mutually exclusive, by default.
IMPORTANCE

Any integer from 0 to 1000
Relative importance of the server pool, with 0 denoting the lowest level of importance and 1000, the highest. This optional attribute is used to determine how to reconfigure the server pools when a node joins or leaves the cluster. The default value is 0.
MAX_SIZE

Any nonnegative integer or -1 (no limit)
The maximum number of servers a server pool can contain. This attribute is optional and is set to -1 (no limit), by default.
Note: A value of -1 for this attribute spans the entire cluster.
MIN_SIZE

Any nonnegative integer
The minimum size of a server pool. If the number of servers contained in a server pool is below the number you specify in this attribute, then Oracle Clusterware automatically moves servers from other
pools into this one until that number is met.
Note: The value of this optional attribute does not set a hard limit. It governs the priority for server assignment whenever the cluster is reconfigured.
The default value is 0.
NAME

String
The name of the server pool, which you must specify when you create the server pool. Server pool names must be unique within the domain of names of user-created entities, such as resources, types,
and servers. A server pool name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server pool name cannot begin with a period nor with ora.
PARENT_POOLS

A string of space-delimited server pool names in the following format:
sp1 sp2 ...

Use of this attribute makes it possible to create nested server pools. Server pools listed in this attribute are referred to as parent server
pools. A server pool included in a parent server pool is referred to as a child server pool.
SERVER_NAMES

A string of space-delimited server names in the following format:
server1 server2 ...

A list of candidate node names that may be associated with a server pool. If this optional attribute is empty, Oracle Clusterware assumes that any server may be assigned to any server pool, to the extent allowed by values of other attributes, such as PARENT_POOLS.
The server names identified as candidate node names are not validated to confirm that they are currently active cluster members. Cluster administrators can use this attribute to define servers as
candidates that have not yet been added to the cluster.
You manage server pools that are managing Oracle RAC databases with the Server Control (SRVCTL) utility. Use the Oracle Clusterware Control (CRSCTL) utility to manage all other server pools. Only cluster administrators
have permission to create top-level server pools.
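For example, a cluster administrator might create a pool for a policy-managed Oracle RAC database with SRVCTL, and a general-purpose pool plus a nested child pool with CRSCTL (all names and values here are illustrative):

$ srvctl add srvpool -g dbpool -l 1 -u 3 -i 100
# crsctl add serverpool apppool -attr "MIN_SIZE=1, MAX_SIZE=2, IMPORTANCE=50"
# crsctl add serverpool apppool_sub -attr "PARENT_POOLS=apppool"

With SRVCTL, the -l and -u options set the minimum and maximum pool sizes, and -i sets the importance.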
When Oracle Clusterware is installed, two server pools are created automatically: Generic and Free.
All servers in a new installation are assigned to the Free server pool, initially. Servers move from Free to newly defined server pools automatically. When you upgrade Oracle Clusterware from a previous release, all nodes are assigned to the Generic server
pool, to ensure compatibility with database releases before Oracle Database 11g release 2 (11.2).


The Free Server Pool

The Free server pool contains servers that are not assigned to any other server pools. The attributes of the Free server pool are restricted, as follows:

SERVER_NAMES, MIN_SIZE, and MAX_SIZE cannot be edited by the user

IMPORTANCE and ACL can be edited by the user
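For example, to raise the relative importance of the Free server pool (the value 100 is illustrative; the user must hold the required privileges in the pool's ACL):

$ crsctl modify serverpool Free -attr "IMPORTANCE=100"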


The Generic Server Pool

The Generic server pool stores pre-11g release 2 (11.2) Oracle Databases and administrator-managed databases
that have fixed configurations. Additionally, the Generic server pool contains servers that match either of the following:

Servers that you specified in the HOSTING_MEMBERS resource attribute of all resources of the application resource type

See Also:

"HOSTING_MEMBERS" for more information about this attribute

"Resource Types" for more information about resource types

Servers with names you specified in the SERVER_NAMES attribute of the server pools that list the Generic server pool as a parent server pool

The Generic server pool's attributes are restricted, as follows:

No one can modify configuration attributes of the Generic server pool (all attributes are read-only)

When you specify a server name in the HOSTING_MEMBERS resource attribute, Oracle Clusterware only allows it if the server is:

Online and exists in the Generic server pool

Online and exists in the Free server pool, in which case Oracle Clusterware moves the server into the Generic server pool

Online and exists in any other server pool and the client is either a cluster administrator or is allowed to use the server pool's servers, in which case, the server is moved into
the Generic server pool

Offline and the client is a cluster administrator

When you register a child server pool with the Generic server pool, Oracle Clusterware only allows it if the server names pass the same requirements as previously specified for the resources.
Servers are initially considered for assignment into the Generic server pool at cluster startup time or when a server is added to the cluster, and only after that to other server pools.


How Oracle Clusterware Assigns New Servers

Oracle Clusterware assigns new servers to server pools in the following order:

Generic server pool

User-created server pool

Free server pool

Oracle Clusterware continues to assign servers to server pools until the following conditions are met:

Until all server pools are filled in order of importance to their minimum (MIN_SIZE).

Until all server pools are filled in order of importance to their maximum (MAX_SIZE).

By default, any servers not placed in a server pool go into the Free server pool.
You can modify the IMPORTANCE attribute for the Free server pool.


When a server joins a cluster, several things occur.
Consider the server pools configured in Table 2-3:

Table 2-3 Sample Server Pool Attributes Configuration

NAME     IMPORTANCE  MIN_SIZE  MAX_SIZE  PARENT_POOLS  EXCLUSIVE_POOLS
sp1      1           1         10
sp2      3           1         6
sp3      2           1         2
sp2_1    2           1         5         sp2           S123
sp2_2    1           1         5         sp2           S123
For example, assume that there are no servers in a cluster; all server pools are empty.
When a server, named server1, joins the cluster:

Server-to-pool assignment commences.

Oracle Clusterware processes only the top-level server pools (those that have no parent server pools) first. In this example, the top-level server pools are sp1, sp2, and sp3.

Oracle Clusterware lists the server pools in order of IMPORTANCE, as follows: sp2, sp3, sp1.


Oracle Clusterware assigns server1 to sp2 because sp2 has the highest IMPORTANCE value and its MIN_SIZE value has not yet been met.


Oracle Clusterware processes the remaining two server pools, sp2_1 and sp2_2. The sizes of both server pools are below the value of the MIN_SIZE attribute (both server pools are empty and have MIN_SIZE values of 1).


Oracle Clusterware lists the two remaining pools in order of IMPORTANCE, as follows: sp2_1, sp2_2.


Oracle Clusterware assigns server1 to sp2_1 but cannot assign server1 to sp2_2 because sp2_1 is configured to be exclusive with sp2_2.


After processing, the cluster configuration appears as follows:

Table 2-4 Post Processing Server Pool Configuration

Server Pool Name    Assigned Servers
sp1
sp2                 server1
sp3
sp2_1               server1
sp2_2
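You can verify such assignments with CRSCTL; for example:

$ crsctl status serverpool

For each pool, the command prints its NAME and ACTIVE_SERVERS attributes, so the output for this example would show server1 listed under both sp2 and sp2_1.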


Servers Moving from Server Pool to Server Pool

If the number of servers in a server pool falls below the value of the MIN_SIZE attribute for the server pool (such as when a server fails), then, based on the values you set for the MIN_SIZE and IMPORTANCE attributes for all server pools, Oracle Clusterware can move servers from other server pools into the server pool whose number of servers has fallen below the value for MIN_SIZE. Oracle Clusterware selects servers from other server pools to move into the deficient server pool that meet the following criteria:

For server pools that have a lower IMPORTANCE value than the deficient server pool, Oracle Clusterware can take servers from those server pools even if it means that the number of servers falls below the value for the MIN_SIZE attribute.


For server pools with equal or greater IMPORTANCE, Oracle Clusterware only takes servers from those server pools if the number of servers in a server pool is greater than the value of its MIN_SIZE attribute.



Role-Separated Management

This section contains the following topics:

About Role-Separated Management

Managing CRS Administrators in the Cluster

Configuring Horizontal Role Separation


About Role-Separated Management

Role-separated management is a feature you can implement that enables multiple resources to share the same cluster and hardware resources. This is done by
setting permissions on server pools or resources, and then using access control lists (ACLs) to provide access. By default, this feature is not enabled during installation. Resource allocation is controlled by a user assigned the CRS
Administrator role. You can implement role-separated management in one of the following ways:


Vertical implementation: Access permissions to server pools or resources are granted by assigning ownership
of them to different users for each layer in the enterprise architecture, and using ACLs assigned to those users. Oracle ASM provides an even more granular approach using groups. Careful planning is required to enable overlapping tasks.

See Also:
Oracle Grid Infrastructure Installation Guide for
Linux for more information about using groups

Horizontal implementation: Access permissions for resources are granted using ACLs assigned to server pools
and policy-managed databases or applications.


About the CRS Administrator

Caution:
To restrict the operating system users that have this privilege, Oracle strongly recommends that you add specific users to the CRS Administrators list.

The CRS Administrator is a predefined administrator role in Oracle Clusterware that controls the creation of server pools. Users to whom you grant the CRS Administrator role can grant or revoke access to system
resources only for server pools. The CRS Administrator role does not influence administrative rights on the server.
Additionally, the CRS Administrator can create resources with restricted placement, using the asterisk (*) as the value for the SERVER_POOLS attribute to control placement, and can grant and revoke access to those resources managed by Oracle Clusterware.

The set of users that have the CRS Administrator role is managed by a list of named CRS Administrators within Oracle Clusterware, as opposed to that set of users being members of an operating system group. Server pool creation enables the CRS Administrator to divide the cluster into groups of servers used by different groups of users in the organization (a horizontal implementation, as described in the preceding section), thereby enabling role-separated management.
By default, after installing Oracle Grid Infrastructure for a cluster, or after an upgrade, all users are CRS Administrators (as denoted by the asterisk (*) in the CRS Administrators list), assuming all users sharing the same infrastructure are equally privileged to manage the cluster. This default configuration allows any named operating system user to create server pools within Oracle Clusterware.
Restricting CRS Administrator privileges to the Grid user and root can prevent subsequently created policy-managed databases from being automatically created in newly created server pools. If you enable role-separated management, then a CRS Administrator must create the required server pools in advance.
The user (Grid user) that installed Oracle Clusterware in the Grid Infrastructure home (Grid home) and the system superuser (root on Linux and UNIX, or Administrator on Windows) are permanent CRS Administrators, and only these two users can add or remove users from the CRS Administrators list, enabling role-separated management.
If the cluster is shared by various users, then the CRS Administrator can restrict access to certain server pools and, consequently, to certain hardware resources to specific users in the cluster. The permissions are stored for each server pool in the ACL attribute, described in Table 2-2.


Managing CRS Administrators in the Cluster

Use the following commands to manage CRS Administrators in the cluster:

To query the list of users that are CRS Administrators:
$ crsctl query crs administrator


To enable role-separated management and grant privileges to non-permanent CRS Administrators, you must add specific users to the CRS Administrators list. As a permanent CRS Administrator, run the following command:
# crsctl add crs administrator -u user_name

The default asterisk (*) value is replaced by the user or users you add using this command.

To remove specific users from the group of CRS Administrators:
# crsctl delete crs administrator -u user_name


To make all users CRS Administrators, add the asterisk (*) value back to the list, as follows:
# crsctl add crs administrator -u "*"

The asterisk (*) value must be enclosed in double quotation marks (""). This value replaces any previously specified users in the CRS Administrators list.


Configuring Horizontal Role Separation

Use the crsctl setperm command to configure horizontal role separation using ACLs that are assigned to server pools, resources, or both. The CRSCTL utility is located in the path Grid_home/bin, where Grid_home is the Oracle Grid Infrastructure home.

The command uses the following syntax, where you can choose to set permissions on either a resource, a resource type, or a server pool:
crsctl setperm {resource | type | serverpool} name {-u acl_string |
-x acl_string | -o user_name | -g group_name}

The flag options are:

-u: Update the entity ACL

-x: Delete the entity ACL

-o: Change the entity owner

-g: Change the entity primary group

The ACL strings are:
{ user:user_name[:readPermwritePermexecPerm] |
group:group_name[:readPermwritePermexecPerm] |
other[::readPermwritePermexecPerm] }

where:

user: Designates the user ACL (access permissions granted to the designated user)

group: Designates the group ACL (permissions granted to the designated group members)

other: Designates the other ACL (access granted to users or groups not granted particular access permissions)

readperm: Location of the read permission (r grants permission and "-" forbids permission)

writeperm: Location of the write permission (w grants permission and "-" forbids permission)

execperm: Location of the execute permission (x grants permission, and "-" forbids permission)

For example, as the CRS Administrator, you can set permissions on a database server pool called testadmin for the oracle user and the oinstall group, where only the CRS Administrator (owner) has read, write, and execute privileges, and the user, as well as the members of the oinstall group, have only read and execute privileges. All other users outside of the group have no access. The following command, run as the CRS Administrator, shows how this is done:
# crsctl setperm serverpool ora.testadmin -u user:oracle:r-x,group:oinstall:r-x,other::---


Note:
The preceding example is an explicitly granted exception of using a CRSCTL command on an Oracle (ora.*) resource (the ora.testadmin server pool) for the purpose of enabling horizontal role separation.
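To verify the resulting permissions, you can display the ACL currently stored for the entity with the crsctl getperm command, shown here against the same ora.testadmin server pool from the example above:

# crsctl getperm serverpool ora.testadmin

The command prints the owner, primary group, and ACL string for the entity, so you can confirm that the oracle user and oinstall group entries were applied as intended.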


Configuring Oracle Grid Infrastructure

After performing a software-only installation of the Oracle Grid Infrastructure, you can configure the software using the Configuration Wizard. This wizard assists you with editing the crsconfig_params configuration file. Similar to the Oracle Grid Infrastructure installer, the Configuration Wizard performs various validations of the Grid home and inputs before and after you run through the wizard.

Using the Configuration Wizard, you can configure a new Grid Infrastructure on one or more nodes, or configure an upgraded Grid Infrastructure. You can also run the Configuration Wizard in silent mode.

Notes:

Before running the Configuration Wizard, ensure that the Grid Infrastructure home is current, with all necessary patches applied.

To launch the Configuration Wizard in the following procedures:
On Linux and UNIX, run the following command:
Oracle_home/crs/config/config.sh

On Windows, run the following command:
Oracle_home\crs\config\config.bat


This section includes the following topics:

Configuring a Single Node

Configuring Multiple Nodes

Upgrading Grid Infrastructure

Running the Configuration Wizard in Silent Mode

Configuring a Single Node

To use the Configuration Wizard to configure a single node:

Start the Configuration Wizard, as follows:
$ Oracle_home/crs/config/config.sh


On the Select Installation Option page, select Configure Grid Infrastructure for a Cluster.

On the Cluster Node Information page, select only the local node and corresponding VIP name.

Continue adding your information on the remaining wizard pages.

Review your inputs on the Summary page and click Finish.

Run the
root.sh
script as instructed by the Configuration Wizard.

Configuring Multiple Nodes

To use the Configuration Wizard to configure multiple nodes:

Start the Configuration Wizard, as follows:
$ Oracle_home/crs/config/config.sh


On the Select Installation Option page, select Configure Grid Infrastructure for a Cluster.

On the Cluster Node Information page, select the nodes you want to configure and their corresponding VIP names. The Configuration Wizard validates the nodes you select to ensure that they are ready.

Continue adding your information on the remaining wizard pages.

Review your inputs on the Summary page and click Finish.

Run the
root.sh
script as instructed by the Configuration Wizard.

Upgrading Grid Infrastructure

To use the Configuration Wizard to upgrade the Grid Infrastructure:

Start the Configuration Wizard, as follows:
$ Oracle_home/crs/config/config.sh


On the Select Installation Option page, select Upgrade Grid Infrastructure.

On the Grid Infrastructure Node Selection page, select the nodes you want to upgrade.

Continue adding your information on the remaining wizard pages.

Review your inputs on the Summary page and click Finish.

Run the
rootupgrade.sh
script as instructed by the Configuration Wizard.


Running the Configuration Wizard in Silent Mode

To use the Configuration Wizard in silent mode to configure or upgrade nodes, start the Configuration Wizard from the command line with -silent -responseFile file_name. The wizard validates the response file and proceeds with the configuration. If any of the inputs in the response file are found to be invalid, then the Configuration Wizard displays an error and exits. Run the root and configToolAllCommands scripts as prompted.
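As an illustrative sketch, a silent-mode run on Linux might look like the following, where the Grid home location and the response file path are assumptions for this example:

$ /u01/app/11.2.0/grid/crs/config/config.sh -silent -responseFile /u01/stage/grid_config.rsp

If the response file passes validation, the wizard proceeds without displaying any screens and prompts you to run the root scripts at the end.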


Configuring IPMI for Failure Isolation

This section contains the following topics:

About Using IPMI for Failure Isolation

Configuring Server Hardware for IPMI

Postinstallation Configuration of IPMI-based Failure Isolation Using
CRSCTL


About Using IPMI for Failure Isolation

Failure isolation is a process by which a failed node is isolated from the rest of the cluster to prevent the failed node from corrupting data. The ideal fencing involves an external mechanism capable of restarting
a problem node without cooperation either from Oracle Clusterware or from the operating system running on that node. To provide this capability, Oracle Clusterware 11g release 2 (11.2) supports the Intelligent
Management Platform Interface specification (IPMI) (also known as Baseboard Management Controller (BMC)), an industry-standard management protocol.

Typically, you configure failure isolation using IPMI during Grid Infrastructure installation, when you are provided with the option of configuring IPMI from the Failure Isolation Support screen. If you do not
configure IPMI during installation, then you can configure it after installation using the Oracle Clusterware Control utility (CRSCTL), as described in "Postinstallation
Configuration of IPMI-based Failure Isolation Using CRSCTL".
To use IPMI for failure isolation, each cluster member node must be equipped with an IPMI device running firmware compatible with IPMI version 1.5, which supports IPMI over a local area network (LAN). During database
operation, failure isolation is accomplished by communication from the evicting Cluster Synchronization Services daemon to the failed node's IPMI device over the LAN. The IPMI-over-LAN protocol is carried over an authenticated session protected by a user name
and password, which are obtained from the administrator during installation.
In order to support dynamic IP address assignment for IPMI using DHCP, the Cluster Synchronization Services daemon requires direct communication with the local IPMI device during Cluster Synchronization Services
startup to obtain the IP address of the IPMI device. (This is not true for HP-UX and Solaris platforms, however, which require that the IPMI device be assigned a static IP address.) This is accomplished using an IPMI probe command (OSD), which communicates
with the IPMI device through an IPMI driver, which you must install on each cluster system.
If you assign a static IP address to the IPMI device, then the IPMI driver is not strictly required by the Cluster Synchronization Services daemon. The driver is required, however, to use ipmitool or ipmiutil to configure the IPMI device, but you can also do this with management consoles on some platforms.
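For illustration only, the following ipmitool commands sketch how an IPMI device might be assigned a static address; the LAN channel number (1) and the addresses are assumptions that vary by hardware:

# ipmitool lan set 1 ipsrc static
# ipmitool lan set 1 ipaddr 192.168.10.45
# ipmitool lan set 1 netmask 255.255.255.0
# ipmitool lan print 1

The final command prints the channel's current LAN settings so you can confirm the address took effect.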

Configuring Server Hardware for IPMI

Install and enable the IPMI driver, and configure the IPMI device, as described in the Oracle
Grid Infrastructure Installation Guide for your platform.


Postinstallation Configuration of IPMI-based Failure Isolation Using CRSCTL

This section contains the following topics:

IPMI Postinstallation Configuration with Oracle Clusterware

Modifying IPMI Configuration Using CRSCTL

Removing IPMI Configuration Using CRSCTL


IPMI Postinstallation Configuration with Oracle Clusterware

When you install IPMI during Oracle Clusterware installation, you configure failure isolation in two
phases. Before you start the installation, you install and enable the IPMI driver in the server operating system, and configure the IPMI hardware on each node (IP address mode, admin credentials, and so on), as described in Oracle
Grid Infrastructure Installation Guide. When you install Oracle Clusterware, the installer collects the IPMI administrator user ID and password, and stores them in an Oracle Wallet in node-local storage, in OLR.

After you complete the server configuration, complete the following procedure on each cluster node to register IPMI administrators and passwords on the nodes.

Note:
If IPMI is configured to obtain its IP address using DHCP, it may be necessary to reset IPMI or restart the node to cause it to obtain an address.


Start Oracle Clusterware, which allows it to obtain the current IP address from IPMI. This confirms the ability of the clusterware to communicate with IPMI, which is necessary at startup.
If Oracle Clusterware was running before IPMI was configured, you can shut Oracle Clusterware down and restart it. Alternatively, you can use the IPMI management utility to obtain the IPMI IP address and then use
CRSCTL to store the IP address in OLR by running a command similar to the following:
crsctl set css ipmiaddr 192.168.10.45


Use CRSCTL to store the previously established user ID and password for the resident IPMI in OLR by running the crsctl set css ipmiadmin command, and supplying the password at the prompt. For example:
crsctl set css ipmiadmin administrator_name
IPMI BMC password: password

This command validates the supplied credentials and fails if another cluster node cannot access the local IPMI using them.
After you complete hardware and operating system configuration, and register the IPMI administrator on Oracle Clusterware, IPMI-based failure isolation should be fully functional.


Modifying IPMI Configuration Using CRSCTL

To modify an existing IPMI-based failure isolation configuration (for example, to change IPMI passwords, or to configure IPMI for failure isolation in an existing installation), use CRSCTL with the IPMI configuration tool appropriate to your platform. For example, to change the administrator password for IPMI, you must first modify the IPMI configuration as described in the Oracle Grid Infrastructure Installation Guide, and then use CRSCTL to change the password in OLR.
The configuration data needed by Oracle Clusterware for IPMI is kept in an Oracle Wallet in OCR. Because the configuration information is kept in a secure store, it must be written by the Oracle Clusterware installation
owner account (the Grid user), so you must log in as that installation user.
Use the following procedure to modify an existing IPMI configuration:

Enter the crsctl set css ipmiadmin administrator_name command. For example, with the user IPMIadm:

crsctl set css ipmiadmin IPMIadm

Provide the administrator password. Oracle Clusterware stores the administrator name and password for the local IPMI in OLR.
After storing the new credentials, Oracle Clusterware can retrieve the new credentials and distribute them as required.

Enter the crsctl set css ipmiaddr bmc_ip_address command. For example:

crsctl set css ipmiaddr 192.0.2.244

This command stores the new IPMI IP address of the local IPMI in OLR. After storing the IP address, Oracle Clusterware can retrieve the new configuration and distribute it as required.

Enter the crsctl get css ipmiaddr command. For example:

crsctl get css ipmiaddr

This command retrieves the IP address for the local IPMI from OLR and displays it on the console.

Remove the IPMI configuration information for the local IPMI from OLR and delete the registry entry, as follows:
crsctl unset css ipmiconfig


See Also:
"Oracle RAC Environment CRSCTL Commands" for descriptions of these CRSCTL commands


Removing IPMI Configuration Using CRSCTL

You can remove an IPMI configuration from a cluster using CRSCTL if you want to stop using IPMI completely or if IPMI was initially configured by someone other
than the user that installed Oracle Clusterware. If the latter is true, then Oracle Clusterware cannot access the IPMI configuration data and IPMI is not usable by the Oracle Clusterware software, and you must reconfigure IPMI as the user that installed Oracle
Clusterware.
To completely remove IPMI, perform the following steps. To reconfigure IPMI as the user that installed Oracle Clusterware, perform steps 3 and 4,
then repeat steps 2 and 3 in "Modifying
IPMI Configuration Using CRSCTL".

Disable the IPMI driver and eliminate the boot-time installation, as follows:
/sbin/modprobe -r


See Also:
Oracle Grid Infrastructure Installation Guide for your platform for
more information about the IPMI driver

Disable IPMI-over-LAN for the local IPMI using either ipmitool or ipmiutil, to prevent access over the LAN, or change the IPMI administrator user ID and password.


Ensure that Oracle Clusterware is running and then use CRSCTL to remove the IPMI configuration data from OLR by running the following command:
crsctl unset css ipmiconfig


Restart Oracle Clusterware so that it runs without the IPMI configuration by running the following commands as root:
# crsctl stop crs
# crsctl start crs



Cluster Time Management

The Cluster Time Synchronization Service (CTSS) is installed as part of Oracle Clusterware and runs in observer mode if it detects a time synchronization service or a time synchronization service configuration, valid or broken, on the system. If CTSS detects that there is no time synchronization service or time synchronization service configuration on any node in the cluster, then CTSS goes into active mode and takes over time management for the cluster.
When nodes join the cluster, if CTSS is in active mode, then it compares the time on those nodes to a reference clock located on one node in the cluster. If there is a discrepancy between the two times and the
discrepancy is within a certain stepping limit, then CTSS performs step time synchronization, which is to step the time of the nodes joining the cluster
to synchronize them with the reference.
When Oracle Clusterware starts, if CTSS is running in active mode and the time discrepancy is outside the stepping limit (the limit is 24 hours), then CTSS generates an alert in the alert log, exits, and Oracle
Clusterware startup fails. You must manually adjust the time of the nodes joining the cluster to synchronize with the cluster, after which Oracle Clusterware can start and CTSS can manage the time for the nodes.
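You can check which mode CTSS is currently running in with the crsctl check ctss command. For example, on a cluster where a vendor time service is still configured, the output resembles the following (the exact message text varies by release):

$ crsctl check ctss
CRS-4700: The Cluster Time Synchronization Service is in Observer mode.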
Clocks on the nodes in the cluster become desynchronized with the reference clock (a time CTSS uses as a basis and is on the first node started in the cluster) periodically for various reasons. When this happens,
CTSS performs slew time synchronization, which is to speed up or slow down the system time on the nodes until they are synchronized with the reference
system time. In this time synchronization method, CTSS does not adjust time backward, which guarantees monotonic increase of the system time.
When performing slew time synchronization, CTSS never runs time backward to synchronize with the reference clock. CTSS periodically writes alerts to the alert log containing information about how often it adjusts
time on nodes to keep them synchronized with the reference clock.
To activate CTSS in your cluster, you must stop and deconfigure the vendor time synchronization service on all nodes in the cluster. CTSS detects when this
happens and assumes time management for the cluster.
For example, to deconfigure NTP, you must remove or rename the ntp.conf file.
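On Linux, for example, deconfiguring NTP typically amounts to stopping the service and renaming its configuration file on every node as root; the service name shown here (ntpd) varies by distribution:

# /sbin/service ntpd stop
# mv /etc/ntp.conf /etc/ntp.conf.original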

Note:
In Windows, CTSS does not check whether the ntp.conf file exists; rather, it only detects whether there is a time synchronization service.

Similarly, if you want to deactivate CTSS in your cluster, then do the following:

Configure the vendor time synchronization service on all nodes in the cluster. CTSS detects this change and reverts back to observer mode.

Use the crsctl check ctss command to ensure that CTSS is operating in observer mode.


Start the vendor time synchronization service on all nodes in the cluster.

Use the cluvfy comp clocksync -n all command to verify that the vendor time synchronization service is operating.


See Also:
Oracle Grid Infrastructure Installation Guide for your platform for
information about configuring NTP for Oracle Clusterware, or disabling it to use CTSS


Changing Network Addresses on Manually Configured Networks

This section contains the following topics:

Understanding When You Must Configure Network Addresses

Understanding SCAN Addresses and Client Service Connections

Changing the Virtual IP Addresses

Changing Oracle Clusterware Private Network Configuration


Understanding When You Must Configure Network Addresses

An Oracle Clusterware configuration requires at least two interfaces:

A public network interface, on which users and application servers connect to access data on the database server

A private network interface for internode communication.

If you use Grid Naming Service and DHCP to manage your network connections, then you may not need to configure address information on the cluster. Using GNS allows public Virtual Internet Protocol (VIP) addresses
to be dynamic, DHCP-provided addresses. Clients submit name resolution requests to your network's Domain Name Service (DNS), which forwards the requests to the grid naming service (GNS), managed within the cluster. GNS then resolves these requests to nodes
in the cluster.
If you do not use GNS, and instead configure networks manually, then public VIP addresses must be statically configured in the DNS, VIPs must be statically configured in the DNS and hosts file, and private IP addresses
require static configuration.


Understanding SCAN Addresses and Client Service Connections

Public network addresses are used to provide services to clients. If your clients are connecting to the Single Client Access Name (SCAN) addresses, then you may need to change public and virtual IP addresses as
you add or remove nodes from the cluster, but you do not need to update clients with new cluster addresses.
SCANs function like a cluster alias. However, SCANs are resolved on any node in the cluster, so unlike a VIP address for a node, clients connecting to the SCAN no longer require updated VIP addresses as nodes are
added to or removed from the cluster. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.
The SCAN is a fully qualified name (host name+domain) that is configured to resolve to all the addresses allocated for the SCAN. The addresses resolve using Round Robin DNS either on the DNS server, or within the
cluster in a GNS configuration. SCAN listeners can run on any node in the cluster. SCANs provide location independence for the databases, so that client configuration does not have to depend on which nodes run a particular database.
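You can observe this round-robin behavior by resolving the SCAN name; in the following sketch, mycluster-scan.example.com is a hypothetical SCAN configured with three addresses:

$ nslookup mycluster-scan.example.com
Name:    mycluster-scan.example.com
Address: 192.0.2.101
Name:    mycluster-scan.example.com
Address: 192.0.2.102
Name:    mycluster-scan.example.com
Address: 192.0.2.103

Repeated lookups return the same set of addresses in rotating order, which is how client connections are spread across the SCAN listeners.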
Oracle Database 11g release 2 (11.2) and later instances only register with SCAN listeners as remote listeners. Upgraded databases register with SCAN listeners
as remote listeners, and also continue to register with all node listeners.

Note:
Because of the Oracle Clusterware installation requirement that you provide a SCAN name during installation, if you resolved at least one IP address using the server /etc/hosts file to bypass the installation requirement but you do not have the infrastructure required for SCAN, then, after the installation, you can ignore the SCAN and connect to the databases in the cluster using VIPs.

Oracle does not support removing the SCAN address.


Changing the Virtual IP Addresses

Clients configured to use public VIP addresses for Oracle Database releases before Oracle Database 11g release
2 (11.2) can continue to use their existing connection addresses. Oracle recommends that you configure clients to use SCANs, but it is not required that you use SCANs. When an earlier version of Oracle Database is upgraded, it is registered with the SCAN,
and clients can start using the SCAN to connect to that database, or continue to use VIP addresses for connections.
If you continue to use VIP addresses for client connections, you can modify the VIP address while Oracle Database and Oracle ASM continue to run. However, you must stop services while you modify the address. When
you restart the VIP address, services are also restarted on the node.
This procedure cannot be used to change a static public subnet to use DHCP. Only the srvctl add network -S command creates a DHCP network.


Note:
The following instructions describe how to change only a VIP address, and assume that the host name associated with the VIP address does not change. Note that you do not need to update VIP addresses manually if you are using GNS, and VIPs are assigned using
DHCP.


If you are changing only the VIP address, then update the DNS and the client hosts files. Also, update the server hosts files, if those are used for VIP addresses.

Perform the following steps to change a VIP address:

Stop all services running on the node whose VIP address you want to change using the following command syntax, where database_name is the name of the database, service_name_list is a list of the services you want to stop, and my_node is the name of the node whose VIP address you want to change:
srvctl stop service -d database_name  -s service_name_list -n my_node

This example specifies the database name (grid) using the -d option and specifies the services (sales,oltp) on the appropriate node (mynode):
$ srvctl stop service -d grid -s sales,oltp -n mynode


Confirm the current IP address for the VIP address by running the srvctl config vip command. This command displays the current VIP address bound to one of the network interfaces. The following example displays the configured VIP address:

$ srvctl config vip -n stbdp03
VIP exists.: /stbdp03-vip/192.168.2.20/255.255.255.0/eth0


Stop the VIP resource using the srvctl stop vip command:
$ srvctl stop vip -n mynode


Verify that the VIP resource is no longer running by running the ifconfig -a command on Linux and UNIX systems (or issue the ipconfig /all command on Windows systems), and confirm that the interface (in the example it was eth0:1) is no longer listed in the output.


Make any changes necessary to the /etc/hosts files on all nodes on Linux and UNIX systems, or the %windir%\system32\drivers\etc\hosts file on Windows systems, and make any necessary DNS changes to associate the new IP address with the old host name.
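As an illustration, if the node's VIP host name is mynode-vip (a hypothetical name), the corresponding /etc/hosts entry changes only in its address, for example from:

192.168.2.20    mynode-vip

to:

192.168.2.125   mynode-vip

The host name itself stays the same; only the address associated with it changes.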



To use a different subnet or NIC for the default network before you change any VIP resource, you must use the srvctl modify network -S subnet/netmask/interface command as root to change the network resource, where subnet is the new subnet address, netmask is the new netmask, and interface is the new interface. After you change the subnet, you must change each node's VIP to an IP address on the new subnet, as described in the next step.


Modify the node applications and provide the new VIP address using the following srvctl modify nodeapps syntax:

$ srvctl modify nodeapps -n node_name -A new_vip_address

The command includes the following flags and values:

-n node_name is the node name
-A new_vip_address is the node-level VIP address: name|ip/netmask/[if1[|if2|...]]

For example, issue the following command as the root user:
srvctl modify nodeapps -n mynode -A 192.168.2.125/255.255.255.0/eth0

Attempting to issue this command as the installation owner account may result in an error. For example, if the installation owner is oracle, then you may see the error PRCN-2018: Current user oracle is not a privileged user. To avoid the error, run the command as the root or system administrator account.

Start the node VIP by running the srvctl start vip command:
$ srvctl start vip -n node_name

The following command example starts the VIP on the node named mynode:
$ srvctl start vip -n mynode


Repeat the steps for each node in the cluster.
Because the SRVCTL utility is a clusterwide management tool, you can accomplish these tasks for any specific node from any node in the cluster, without logging in to each of the cluster nodes.

Run the following command to verify node connectivity between all of the nodes for which your cluster is configured. This command discovers all of the network interfaces available on the cluster nodes and verifies
the connectivity between all of the nodes by way of the discovered interfaces. This command also lists all of the interfaces available on the nodes which are suitable for use as VIP addresses.
$ cluvfy comp nodecon -n all -verbose



Changing Oracle Clusterware Private Network Configuration

This section contains the following topics:

About Private Networks and Network Interfaces

Redundant Interconnect Usage

Consequences of Changing Interface Names Using OIFCFG

Changing a Network Interface


About Private Networks and Network Interfaces

Oracle Clusterware requires that each node is connected through a private network (in addition to the public network). The private network connection is referred
to as the cluster interconnect. Table
2-5 describes how the network interface card (NIC) and the private IP address are stored.
Oracle only supports clusters in which all of the nodes use the same network interface connected to the same subnet (defined as a global interface with the oifcfg command). You cannot use different network interfaces for each node (node-specific interfaces). Refer to Appendix D, "Oracle Interface Configuration Tool (OIFCFG) Command Reference" for more information about global and node-specific interfaces.

Table 2-5 Storage for the Network Interface, Private IP Address, and Private Host Name

Entity: Network interface name
Stored in: Operating system (for example: eth1)
Comments: You can use wildcards when specifying network interface names (for example: eth*)

Entity: Private network interfaces
Stored in: Oracle Clusterware, in the Grid Plug and Play (GPnP) profile
Comments: Configure an interface for use as a private interface during installation by marking the interface as Private, or use the oifcfg setif command to designate an interface as a private interface. See Also: "OIFCFG Commands" for more information about the oifcfg setif command.


Redundant Interconnect Usage

You can define multiple interfaces for Redundant Interconnect Usage by classifying the interfaces as private either during installation or after installation using the oifcfg setif command. When you do, Oracle Clusterware creates from one to four (depending on the number of interfaces you define) highly available IP (HAIP) addresses, which Oracle Database and Oracle ASM instances use to ensure highly available and load-balanced communications.
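For example, you could classify an additional private interface after installation with oifcfg setif and then confirm the result with oifcfg getif; the interface names and subnets below are assumptions for this sketch:

$ oifcfg setif -global eth2/192.168.10.0:cluster_interconnect
$ oifcfg getif
eth0  10.204.0.0    global  public
eth1  192.168.1.0   global  cluster_interconnect
eth2  192.168.10.0  global  cluster_interconnect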
The Oracle software (including Oracle RAC, Oracle ASM, and Oracle ACFS, all 11g release 2 (11.2.0.2), or later), by default, uses these HAIP addresses for all
of its traffic, allowing for load balancing across the provided set of cluster interconnect interfaces. If one of the defined cluster interconnect interfaces fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding
HAIP address to one of the remaining functional interfaces.
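The exact per-interface count is not spelled out in the text above, but the commonly reported mapping for 11.2.0.2 is one HAIP address for a single private interface, two for two interfaces, and four for three or more. Treat the mapping below as an assumption rather than a guarantee; the sketch just makes it concrete:

```shell
# Assumed mapping (not stated explicitly in this section): one HAIP for
# one private interface, two for two, four for three or more.
haip_count() {
  case "$1" in
    1) echo 1 ;;
    2) echo 2 ;;
    *) echo 4 ;;
  esac
}

for n in 1 2 3 5; do
  echo "$n interface(s) -> $(haip_count "$n") HAIP address(es)"
done
```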

Note:
Oracle Clusterware uses at most four interfaces at any given point, regardless of the number of interfaces defined. If one of the interfaces fails, then the HAIP address moves to another one of the configured interfaces in the defined set.


When there is only a single HAIP address and multiple interfaces from which to select, the interface to which the HAIP address moves is no longer the original interface upon which it was configured. Oracle Clusterware
selects the interface with the lowest numerical subnet to which to add the HAIP address.
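As a sketch of that selection rule: given the subnets of the surviving private interfaces, the numerically lowest subnet wins. The interface names and subnet values below are invented for illustration; `sort -V` (GNU version sort) is used to order the dotted subnets numerically:

```shell
# Hypothetical surviving private interfaces and their subnets.
# Version sort on field 2 orders dotted subnets numerically, so the
# first line is the interface with the lowest subnet - the one the
# lone HAIP address would move to under the rule described above.
printf '%s\n' \
  'eth3 192.168.10.0' \
  'eth1 192.168.1.0' \
  'eth2 192.168.5.0' |
  sort -k2,2V | head -n 1
# -> eth1 192.168.1.0
```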

See Also:
Oracle Grid Infrastructure Installation Guide for your platform for
information about defining interfaces


Consequences of Changing Interface Names Using OIFCFG

The consequences of changing interface names depend on which name you are changing, and whether you
are also changing the IP address. In cases where you are only changing the interface names, the consequences are minor. If you change the name for the public interface that is stored in OCR, then you also must modify the node applications for the cluster.
Therefore, you must stop the node applications for this change to take effect.

See Also:
My Oracle Support (formerly OracleMetaLink) note 276434.1 for more details about changing the node applications to use a new public interface name, available at the following URL:
https://metalink.oracle.com



Changing a Network Interface

You can change a network interface and its associated subnet address using the following procedure.
You must perform this change on all nodes in the cluster.
This procedure changes the network interface and IP address on each node in the cluster used previously by Oracle Clusterware and Oracle Database.

Caution:
The interface that the Oracle RAC (RDBMS) interconnect uses must be the same interface that Oracle Clusterware uses with the host name. Do not configure the private interconnect for Oracle RAC on a separate interface that is not monitored by Oracle Clusterware.


Ensure that Oracle Clusterware is running on all of the cluster nodes by running the following command:
$ olsnodes -s

The command returns output similar to the following, showing that Oracle Clusterware is running on all of the nodes in the cluster:
./olsnodes -s
myclustera Active
myclusterc Active
myclusterb Active
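One way to script this precondition is to fail if any line of the `olsnodes -s` output reports a status other than Active. The sample below hard-codes the output shown above rather than calling olsnodes itself:

```shell
# Sample `olsnodes -s` output (hard-coded here; in practice, pipe the
# real command instead).
olsnodes_out='myclustera Active
myclusterc Active
myclusterb Active'

# Count lines whose second field is not "Active".
inactive=$(printf '%s\n' "$olsnodes_out" | awk '$2 != "Active" {n++} END {print n+0}')

if [ "$inactive" -eq 0 ]; then
  echo 'all nodes active'
else
  echo "warning: $inactive node(s) not active" >&2
fi
```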


Ensure that the replacement interface is configured and operational in the operating system on all of the nodes. Use the
ifconfig
command (or
ipconfig
on
Windows) for your platform. For example, on Linux, use:
$ /sbin/ifconfig


Add the new interface to the cluster as follows, providing the name of the new interface and the subnet address, using the following command:
$ oifcfg setif -global if_name/subnet:cluster_interconnect

You can use wildcards with the interface name. For example,
oifcfg setif -global "eth*/192.168.0.0:cluster_interconnect"
is valid syntax. However, be careful to avoid ambiguity
with other addresses or masks used with other cluster interfaces. If you use wildcards, then the command returns a warning similar to the following:
eth*/192.168.0.0 global cluster_interconnect
PRIF-29: Warning: wildcard in network parameters can cause mismatch
among GPnP profile, OCR, and system
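The wildcard behaves like shell-style glob matching against the node's concrete NIC names. The sketch below (with made-up NIC names) shows which interfaces a pattern such as eth* would resolve to:

```shell
# Hypothetical NIC names on a node; eth* matches the first two only.
for nic in eth0 eth1 en0 bond0; do
  case "$nic" in
    eth*) echo "$nic: matched by eth*" ;;
    *)    echo "$nic: not matched" ;;
  esac
done
```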


Note:
Legacy network configuration does not support wildcards; thus wildcards are resolved using current node configuration at the time of the update.


See Also:
Appendix D, "Oracle Interface Configuration Tool (OIFCFG) Command Reference" for more information about using OIFCFG commands

After the previous step completes, you can remove the former subnet, as follows, by providing the name and subnet address of the former interface:
oifcfg delif -global if_name/subnet

For example:
$ oifcfg delif -global eth1/10.10.0.0


Caution:
This step should be performed only after a replacement interface is committed into the Grid Plug and Play configuration. Simple deletion of cluster interfaces without providing a valid replacement can result in invalid cluster configuration.


Verify the current configuration using the following command:
oifcfg getif

For example:
$ oifcfg getif
eth2 10.220.52.0 global cluster_interconnect
eth0 10.220.16.0 global public


Stop Oracle Clusterware on all nodes by running the following command as
root
on each node:
# crsctl stop crs


Note:
With cluster network configuration changes, the cluster must be fully stopped; do not use rolling stops and restarts.


When Oracle Clusterware stops, deconfigure the deleted network interface in the operating system using the
ifconfig
command. For example:
$ ifconfig eth1 down

At this point, the IP address from network interfaces for the former subnet is deconfigured from Oracle Clusterware. This command does not affect the configuration of the IP address on the operating system.
You must update the operating system configuration changes, because changes made using
ifconfig
are not persistent.
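As one hedged example of making the change persistent: on RHEL-family Linux the counterpart of the ifconfig change is the interface's ifcfg file (typically under /etc/sysconfig/network-scripts/). The device name and addresses below are placeholders, and other distributions use different mechanisms entirely:

```shell
# Hypothetical persistent configuration for the new private interface;
# on a RHEL-family system this content would live in a file such as
# /etc/sysconfig/network-scripts/ifcfg-eth2 (path and keys vary by distro).
cat <<'EOF'
DEVICE=eth2
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes
EOF
```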

See Also:
Your operating system documentation for more information about how to make
ifconfig
commands persistent


Restart Oracle Clusterware by running the following command on each node in the cluster as the
root
user:
# crsctl start crs

The changes take effect when Oracle Clusterware restarts.
If you use the CLUSTER_INTERCONNECTS initialization parameter, then you must update it to reflect the changes.
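A typical way to update the parameter is per instance in SQL*Plus. The instance SIDs and addresses below are placeholders, not values from this procedure; SCOPE=SPFILE defers the change until the instance restarts:

```shell
# Hypothetical SQL*Plus statements for updating CLUSTER_INTERCONNECTS;
# 'orcl1'/'orcl2' and the addresses are example values only.
cat <<'EOF'
ALTER SYSTEM SET CLUSTER_INTERCONNECTS = '192.168.0.1' SCOPE=SPFILE SID='orcl1';
ALTER SYSTEM SET CLUSTER_INTERCONNECTS = '192.168.0.2' SCOPE=SPFILE SID='orcl2';
EOF
```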

