
How to configure Red Hat Cluster using KVM fencing with two guest VMs running on IBM PowerKVM hosts

2017-06-29 11:21
Original post: https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/how_to_configure_red_hat_cluster_with_fencing_of_two_kvm_guests_running_on_two_ibm_powerkvm_hosts?lang=en

Source link: https://access.redhat.com/solutions/293183


 

Install the fencing packages on both PowerKVM hosts using the rpm/yum command:

[root@powerkvm01 RHCS]# rpm -ivh fence-virtd-0.2.3-13.el6.ppc64.rpm fence-virtd-libvirt-0.2.3-13.el6.ppc64.rpm
fence-virtd-serial-0.2.3-13.el6.ppc64.rpm fence-virtd-multicast-0.2.3-13.el6.ppc64.rpm fence-virt-0.2.3-13.el6.ppc64.rpm


 

warning: fence-virtd-0.2.3-13.el6.ppc64.rpm: Header V4 RSA/SHA1 Signature, key ID 4108a30e: NOKEY

Preparing...                            ################################# [100%]
Updating / installing...
   1:fence-virtd-0.2.3-13.el6           ################################# [ 20%]
   2:fence-virtd-libvirt-0.2.3-13.el6   ################################# [ 40%]
   3:fence-virtd-serial-0.2.3-13.el6    ################################# [ 60%]
   4:fence-virtd-multicast-0.2.3-13.el6 ################################# [ 80%]
   5:fence-virt-0.2.3-13.el6            ################################# [100%]
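
The same local RPMs can also be installed with yum, which additionally resolves any dependencies from your configured repositories; a minimal equivalent, assuming the packages are in the current directory:

[code]
# Equivalent install via yum; pulls in any missing dependencies from configured repos
# (assumes the fence-virt RPMs are in the current working directory)
yum localinstall -y fence-virt-0.2.3-13.el6.ppc64.rpm fence-virtd-*.ppc64.rpm
[/code]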

 

Note: I have compiled all of these source RPMs for IBM Power Systems; these RPMs are not publicly available right now.

 

After installation, check that the /etc/cluster directory exists on all PowerKVM hosts and guests; if it does not, create it:
[code][root@powerkvm01 RHCS]# mkdir -p /etc/cluster[/code]
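
The same directory is needed on both guests; a quick way to create it remotely (a sketch, assuming root SSH access to the guest hostnames used later in this article):

[code]
# Create /etc/cluster on both guests as well (assumes root SSH access)
for node in gpfsnode01 gpfsnode02; do
    ssh root@${node} "mkdir -p /etc/cluster"
done
[/code]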

Create "fence_xvm.key" on the PowerKVM host and copy it to the guests:

[root@powerkvm01 RHCS]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key
bs=4k count=1

1+0 records in

1+0 records out

4096 bytes (4.1 kB) copied, 0.00100394 s, 4.1 MB/s

 

[root@powerkvm01 RHCS]# scp /etc/cluster/fence_xvm.key root@gpfsnode01:/etc/cluster/fence_xvm.key

root@gpfsnode01's password:

fence_xvm.key 100% 4096 4.0KB/s 00:00

 

[root@powerkvm01 RHCS]# scp /etc/cluster/fence_xvm.key root@gpfsnode02:/etc/cluster/fence_xvm.key

root@gpfsnode02's password:

fence_xvm.key 100% 4096 4.0KB/s 00:00
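
Optionally, confirm that the key arrived intact; a minimal check, assuming the same hostnames and root SSH access:

[code]
# The checksum must match on the host and on both guests
md5sum /etc/cluster/fence_xvm.key
for node in gpfsnode01 gpfsnode02; do
    ssh root@${node} "md5sum /etc/cluster/fence_xvm.key"
done
[/code]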

 

Now we need to configure the "fence_virtd" daemon. To do that, run "fence_virtd -c" on the PowerKVM host:

 
At the prompts use the following values:

accept default search path

accept multicast as default

accept default multicast address

accept default multicast port

set the interface to your host bridge (bridge0 in this example; replace it with the bridge name configured on your hosts)

accept default fence_xvm.key path

set backend module to libvirt

accept default URI

enter "y" to write config

 

[root@powerkvm01 RHCS]# fence_virtd -c

Module search path [/usr/lib64/fence-virt]:

 

Available backends:

libvirt 0.1

Available listeners:

multicast 1.1

serial 0.4

 

Listener modules are responsible for accepting requests

from fencing clients.

 

Listener module [multicast]:

 

The multicast listener module is designed for use environments

where the guests and hosts may communicate over a network using

multicast.

 

The multicast address is the address that a client will use to

send fencing requests to fence_virtd.

 

Multicast IP Address [225.0.0.12]:

 

Using ipv4 as family.

 

Multicast IP Port [1229]:

 

Setting a preferred interface causes fence_virtd to listen only

on that interface. Normally, it listens on the default network

interface. In environments where the virtual machines are

using the host machine as a gateway, this *must* be set (typically to virbr0).

Set to 'none' for no interface.

 

Interface [none]: bridge0

 

The key file is the shared key information which is used to

authenticate fencing requests. The contents of this file must

be distributed to each physical host and virtual machine within

a cluster.

 

Key File [/etc/cluster/fence_xvm.key]:

 

Backend modules are responsible for routing requests to

the appropriate hypervisor or management layer.

 

Backend module [checkpoint]: libvirt

 

The libvirt backend module is designed for single desktops or

servers. Do not use in environments where virtual machines

may be migrated between hosts.

 

Libvirt URI [qemu:///system]:

 

Configuration complete.

 

 

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        interface = "bridge0";
        port = "1229";
        family = "ipv4";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===

 

Replace /etc/fence_virt.conf with the above [y/N]? y

 

[root@powerkvm01 RHCS]# /etc/init.d/fence_virtd start

Starting fence_virtd (via systemctl): [ OK ]

[root@powerkvm01 RHCS]# chkconfig fence_virtd on
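
If the fence_xvm tests below time out, the host firewall may be dropping traffic on the fence_virt port (1229). A minimal sketch of opening it, assuming iptables is the active firewall on the PowerKVM host (use the equivalent firewalld commands if firewalld is in use):

[code]
# Allow fence_virt traffic on port 1229 (assumes iptables is the active firewall)
iptables -I INPUT -p udp --dport 1229 -j ACCEPT
iptables -I INPUT -p tcp --dport 1229 -j ACCEPT
# Persist the rules with whatever mechanism your host uses, for example:
service iptables save
[/code]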

 

Test fencing from each guest by querying the status of the other guest through fence_virtd on the host:

[root@gpfsnode01 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H gpfsnode02 -o status

[root@gpfsnode02 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H gpfsnode01 -o status
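
In addition to the per-node status checks, fence_virtd can also return the full list of guest domains it can see; for example, from either guest:

[code]
# List all guest domains known to fence_virtd on the host
[root@gpfsnode01 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -o list
[/code]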

 

Then verify that a guest can actually be fenced: reboot gpfsnode01 from gpfsnode02, and confirm on the host that both guests are still running:

[root@gpfsnode02 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H gpfsnode01 -o reboot

[root@powerkvm01 RHCS]# virsh list --all

Id Name State

----------------------------------------------------

7 gpfsnode02 running

8 gpfsnode01 running

 

[root@gpfsnode01 ~]# uptime

07:10:22 up 0 min, 1 user, load average: 1.19, 0.31, 0.10

The near-zero uptime on gpfsnode01 confirms that the fencing request actually rebooted the guest.

Now we manually configure "cluster.conf". A simple configuration that is known to work looks as follows:
[code][root@gpfsnode01 ~]# cat /etc/cluster/cluster.conf

<?xml version="1.0"?>
<cluster config_version="1" name="kvm_cluster">
  <clusternodes>
    <clusternode name="gpfsnode01.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device domain="gpfsnode01" name="kvmfence"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="gpfsnode02.example.com" nodeid="2">
      <fence>
        <method name="1">
          <device domain="gpfsnode02" name="kvmfence"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="kvmfence" key_file="/etc/cluster/fence_xvm.key" multicast_address="225.0.0.12"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
[/code]
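
With cluster.conf in place on gpfsnode01, a reasonable next step (a sketch, assuming RHEL 6 guests running the cman/rgmanager stack) is to validate the file, copy it to the second node, start the cluster services on both nodes, and test fencing through the cluster stack:

[code]
# Validate the configuration syntax and schema
ccs_config_validate

# Copy the identical cluster.conf to the second node
scp /etc/cluster/cluster.conf root@gpfsnode02:/etc/cluster/cluster.conf

# Start and enable the cluster services (run on BOTH nodes)
service cman start && service rgmanager start
chkconfig cman on && chkconfig rgmanager on

# Confirm both nodes have joined, then test fencing via the cluster stack
clustat
fence_node gpfsnode02.example.com
[/code]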

For help: the link below will help you configure Red Hat Cluster on IBM Power Systems. Red Hat cluster software does not officially support IBM Power Systems; currently, the Red Hat Cluster related packages are only available for the x86_64 and x86 CPU architectures.


 

https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/red_hat_two_nodes_cluster_on_ibm_power_system?lang=en

 

As per the GPL licensing policy, you can download all the source RPMs for Red Hat Cluster and use them on your IBM Power System, but you need to recompile all the source RPMs on your Power System. Here is the source RPM download link:


http://ftp.scientificlinux.org/linux/scientific/
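
A hedged sketch of that recompilation step on a Power host (the package file name and build dependencies below are illustrative; adjust them to the versions you actually download):

[code]
# Illustrative only: rebuild a downloaded source RPM for ppc64
# (build dependencies vary by package; rpmbuild will report any that are missing)
yum install -y rpm-build gcc make
rpmbuild --rebuild fence-virt-0.2.3-13.el6.src.rpm
# Resulting binary RPMs land under ~/rpmbuild/RPMS/ppc64/
[/code]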