
ElasticSearch Series (5): Upgrading node versions without interrupting cluster service

2017-03-03 13:47
This applies to Elasticsearch minor version upgrades; the original documentation is at:
https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html
Upgrade steps (rolling upgrade):

Disable shard allocation

When you shut down a node, the allocation process will wait for one minute before starting to replicate the shards that were on that node to other nodes in the cluster, causing a lot of wasted I/O. This can be avoided by disabling allocation before shutting down a node:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
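
The same request can be issued from the shell with curl; a minimal sketch, assuming the cluster is reachable at localhost:9200:

# Disable shard allocation cluster-wide (the endpoint is an assumption)
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}'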


Stop non-essential indexing and perform a synced flush (Optional)

You may happily continue indexing during the upgrade. However, shard recovery will be much faster if you temporarily stop non-essential indexing and issue a synced-flush request:

POST _flush/synced


A synced flush request is a “best effort” operation. It will fail if there are any pending indexing operations, but it is safe to reissue the request multiple times if necessary.
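
Because the call is best effort, a small retry loop works well from the shell; a sketch, again assuming localhost:9200 (Elasticsearch answers 409 Conflict when some shards could not be synced, which curl -f turns into a non-zero exit status):

# Retry the synced flush a few times; in-flight indexing operations can make it fail
for attempt in 1 2 3; do
  curl -fsS -XPOST 'localhost:9200/_flush/synced' && break
  sleep 10
done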

Stop and upgrade a single node

Shut down one of the nodes in the cluster before starting the upgrade.
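
How you stop the node depends on how it was installed; for example, package installs are stopped through the service manager, while a tarball install is stopped by signalling the process (the pid file below assumes the node was started with -d -p pid):

# Debian/RPM installs on systemd-based systems
sudo systemctl stop elasticsearch.service

# Tarball installs started with: bin/elasticsearch -d -p pid
kill $(cat pid)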



When using the zip or tarball packages, the config, data, logs and plugins directories are placed within the Elasticsearch home directory by default.

It is a good idea to place these directories in a different location so that there is no chance of deleting them when upgrading Elasticsearch. These custom paths can be configured with the path.conf, path.logs, and path.data settings, and using ES_JVM_OPTIONS to specify the location of the jvm.options file.

The Debian and RPM packages place these directories in the appropriate place for each operating system.

To upgrade using a Debian or RPM package:

Use rpm or dpkg to install the new package. All files should be placed in their proper locations, and config files should not be overwritten.
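
For example (the package file names are placeholders for whichever version you downloaded):

# Debian-based systems
sudo dpkg -i elasticsearch-5.2.2.deb

# RPM-based systems
sudo rpm --upgrade elasticsearch-5.2.2.rpm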

To upgrade using a zip or compressed tarball:

Extract the zip or tarball to a new directory, to be sure that you don't overwrite the config or data directories.

Either copy the files in the config directory from your old installation to your new installation, or set the environment variable ES_JVM_OPTIONS to the location of the jvm.options file and use the -E path.conf= option on the command line to point to an external config directory.
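
Put together, starting the upgraded node against an external config directory might look like this (all paths and the version number are placeholders):

# Extract the new version alongside the old one
tar -xzf elasticsearch-5.2.2.tar.gz -C /opt

# Reuse the external config directory and jvm.options from the old install
export ES_JVM_OPTIONS=/etc/elasticsearch/jvm.options
/opt/elasticsearch-5.2.2/bin/elasticsearch -d -E path.conf=/etc/elasticsearch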

Either copy the files in the data directory from your old installation to your new installation, or configure the location of the data directory in the config/elasticsearch.yml file, with the path.data setting.
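
The corresponding entry in config/elasticsearch.yml might look like this (the path is a placeholder):

# Keep the data directory outside the Elasticsearch home directory
path.data: /var/data/elasticsearch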

Upgrade any plugins

Elasticsearch plugins must be upgraded when upgrading a node. Use the elasticsearch-plugin script to install the correct version of any plugins that you need.
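
For example, reinstalling a plugin so that its version matches the upgraded node (analysis-icu is only an illustrative plugin name):

# Remove the old plugin, then install the build matching the new Elasticsearch version
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu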

Start the upgraded node

Start the now upgraded node and confirm that it joins the cluster by checking the log file or by checking the output of this request:

GET _cat/nodes
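
Adding ?v prints column headers and h= selects columns, which makes it easy to confirm that the node rejoined with the new version; for example, via curl (the endpoint is an assumption):

curl 'localhost:9200/_cat/nodes?v&h=name,version'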


Reenable shard allocation

Once the node has joined the cluster, reenable shard allocation to start using the node:

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}


Wait for the node to recover

You should wait for the cluster to finish shard allocation before upgrading the next node. You can check on progress with the _cat/health request:

GET _cat/health


Wait for the status column to move from yellow to green. Status green means that all primary and replica shards have been allocated.
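
If you script the upgrade, the _cluster/health API can do this wait server-side via its wait_for_status parameter; a sketch, assuming localhost:9200:

# Block for up to five minutes until the cluster reports green
curl 'localhost:9200/_cluster/health?wait_for_status=green&timeout=5m'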



During a rolling upgrade, primary shards assigned to a node with the higher version will never have their replicas assigned to a node with the lower version, because the newer version may have a different data format which is not understood by the older version.

If it is not possible to assign the replica shards to another node with the higher version (e.g. if there is only one node with the higher version in the cluster), then the replica shards will remain unassigned and the cluster health will remain status yellow.

In this case, check that there are no initializing or relocating shards (the init and relo columns) before proceeding.
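
Both counters appear in the _cat/health output; selecting just the relevant columns, for example:

curl 'localhost:9200/_cat/health?v&h=status,init,relo'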

As soon as another node is upgraded, the replicas should be assigned and the cluster health will reach status green.

Shards that have not been sync-flushed may take some time to recover. The recovery status of individual shards can be monitored with the _cat/recovery request:

GET _cat/recovery
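
On a large cluster the full recovery listing can be long; recent versions of the _cat/recovery API accept an active_only flag to show only recoveries that are still running:

curl 'localhost:9200/_cat/recovery?v&active_only=true'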


If you stopped indexing, then it is safe to resume indexing as soon as recovery has completed.

Repeat
When the cluster is stable and the node has recovered, repeat the above steps for all remaining nodes.