RHCS + MySQL Installation and Configuration
2015-12-16 17:01
I. Preliminary Planning
1. IP allocation

Hostname | IP | Software installed
Node1 | 192.168.52.10 | luci, ricci, gfs2-utils, rgmanager, lvm2-cluster, mysql, httpd, iscsi
Node2 | 192.168.52.11 | luci, ricci, gfs2-utils, rgmanager, lvm2-cluster, mysql, httpd, iscsi
Storage | 192.168.52.110 | Openfiler shared storage
Vip1 | 192.168.52.224 |
Vip2 | 192.168.52.250 |
II. RHCS Installation Preparation -- Base Environment
(Perform the following steps on both Node1 and Node2.)
1. Add hosts entries
# more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.52.10 node1 node1.kbson.com
192.168.52.11 node2 node2.kbson.com
2. Set up SSH mutual trust between the two nodes
# mkdir ~/.ssh
# chmod 700 ~/.ssh
# ssh-keygen -t rsa    (press Enter at each prompt to accept the defaults)
# ssh-keygen -t dsa    (press Enter at each prompt to accept the defaults)
Run on Node1:
# cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
# ssh node2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
(Type yes to accept node2's host key, then enter Node2's root password.)
# scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys
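The key-merge step above can be sketched locally. This is a throwaway demo, assuming only that ssh-keygen is installed; /tmp/rhcs-ssh-demo is an example scratch path, not part of the real setup:

```shell
# Demo of the authorized_keys merge done on node1 above.
# /tmp/rhcs-ssh-demo stands in for ~/.ssh on a real node.
demo=/tmp/rhcs-ssh-demo
rm -rf "$demo"; mkdir -p "$demo"; chmod 700 "$demo"
# Generate a keypair non-interactively (equivalent to pressing Enter at every prompt)
ssh-keygen -t rsa -N '' -f "$demo/id_rsa" -q
# Append every public key into authorized_keys, as done for node1 and node2
cat "$demo"/*.pub >> "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"
```

On the real nodes the same append runs against ~/.ssh, and the final scp pushes the merged authorized_keys to node2 so each node trusts both keys.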
3. Configure a local yum repository
# mount /dev/cdrom /media/
mount: block device /dev/sr0 is write-protected, mounting read-only
# more /etc/yum.repos.d/rhel-source.repo
[rhel_6_iso]
name=local iso
baseurl=file:///media
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=file:///media/HighAvailability
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[LoadBalancer]
name=LoadBalancer
baseurl=file:///media/LoadBalancer
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[ResilientStorage]
name=ResilientStorage
baseurl=file:///media/ResilientStorage
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///media/ScalableFileSystem
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
4. Openfiler iSCSI storage configuration
Configuration details omitted; disk space is planned as follows:
qdisk 256MB
data 30GB
5. Install the iSCSI initiator on Node1 and Node2 and attach the storage
# yum install iscsi-initiator-utils -y
# chkconfig iscsid on
# service iscsid start
# iscsiadm -m discovery -t sendtargets -p 192.168.52.110
Starting iscsid: [ OK ]
192.168.52.110:3260,1 iqn.2006-01.com.openfiler:tsn.raw
# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.raw -p 192.168.52.110 -l
6. Disable the iptables, SELinux, and NetworkManager services
# service iptables stop
# chkconfig iptables off
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# service NetworkManager stop
# chkconfig NetworkManager off
III. RHCS Installation
1. Install the RHCS packages on Node1 and Node2
# yum -y install cman ricci gfs2-utils rgmanager lvm2-cluster
2. Change the ricci user's password on each node
# passwd ricci
3. Configure the RHCS services to start at boot, then start them
# chkconfig ricci on
# chkconfig rgmanager on
# chkconfig cman on
# service ricci start
# service rgmanager start
# service cman start
With RHCS freshly installed, starting cman reports the following error:
Starting cman... xmlconfig cannot find /etc/cluster/cluster.conf [Failed]
This is because cluster.conf has not been created yet; install the luci web interface and use it to generate the configuration.
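For reference, a minimal two-node /etc/cluster/cluster.conf looks like the sketch below. The node names match this lab, and the cluster name Cluster matches the one used later in the mkfs.gfs2 step; luci generates an equivalent, more complete file:

```xml
<?xml version="1.0"?>
<cluster config_version="1" name="Cluster">
  <!-- two_node/expected_votes let a two-node cluster keep quorum with one node up -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
</cluster>
```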
IV. RHCS Configuration
1. Install the luci web management interface (it can be installed on a separate management node; in this lab luci is installed on Node1)
# yum -y install luci
# service luci start
Start luci... [ OK ]
Point your web browser to https://node1.kbson.com:8084 (or equivalent) to access luci
2. Open the web interface and configure RHCS
https://192.168.52.10:8084
3. Configure the cluster
Manage Clusters -> Create (the Password field takes the ricci user's password; it is also recommended to check "Reboot Nodes Before Joining Cluster")
4. Configure and test the fence device
1) Fence Devices -> Add: add the fence device
2) Nodes -> Node1, Node2 (add on both nodes) -> Add Fence Method -> Add Fence Instance
3) Test the fence device
- Check host status
fence_vmware_soap -a 192.168.52.254 -z -l root -p kbsonlong -n node1 -o status
If you hit the error below, query the VM status by UUID instead; the HighAvailability fence device also locates the VM by UUID:
Failed: Unable to obtain correct plug status or plug is not available
- List UUIDs
fence_vmware_soap -a 192.168.52.254 -z -l root -p kbsonlong -o list
The listing shows three VMs on the ESXi host with their UUIDs: RHCS_node1, RHCS_node2, and openfiler.
- Check host status by UUID
fence_vmware_soap -a 192.168.52.254 -z -l root -p kbsonlong \
  -U 564d3735-f600-8365-68f4-5918090580fa -o status
If it returns Status: ON, the fence device is working.
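Once the agent answers correctly, the fencing stanzas written into cluster.conf look roughly like this sketch. The device name vmware_fence is an illustrative label; the address, credentials, and UUID are this lab's example values:

```xml
<!-- sketch only: "vmware_fence" is a made-up device label -->
<fencedevices>
  <fencedevice agent="fence_vmware_soap" name="vmware_fence"
               ipaddr="192.168.52.254" login="root" passwd="kbsonlong" ssl="on"/>
</fencedevices>
<!-- and inside the <clusternode> entry for node1: -->
<fence>
  <method name="1">
    <device name="vmware_fence" uuid="564d3735-f600-8365-68f4-5918090580fa"/>
  </method>
</fence>
```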
5. Failover Domains -> Add: add a failover domain
- Prioritized: on failover, the node with the higher priority is preferred
- Restricted: the service may run only on the specified nodes
- No Failback: after a failed node recovers, the service does not switch back to it
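The three options above map to attributes on the <failoverdomain> element in cluster.conf. A sketch, with an illustrative domain name and priorities:

```xml
<failoverdomains>
  <!-- ordered/restricted/nofailback correspond to the three options above -->
  <failoverdomain name="web_domain" ordered="1" restricted="1" nofailback="1">
    <failoverdomainnode name="node1" priority="1"/>
    <failoverdomainnode name="node2" priority="2"/>
  </failoverdomain>
</failoverdomains>
```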
6. Resources -> Add
- Virtual IP: 192.168.52.50
- Apache script (you can add your own Script resource, or use the dedicated Apache resource type)
7. Service Groups -> Add: add the service
- Use Add Resource to add the IP and Script resources in order, and set the Recovery Policy to Relocate (move the service to the other node on failure)
8. Configure the GFS service
1) Enable the CLVM cluster lock service on node01 and node02
# lvmconf --enable-cluster
# chkconfig clvmd on
# service clvmd start
Activating VG(s): No volume groups found [ OK ]
2) On either node, partition the attached 30 GB Openfiler shared LUN (the fdisk step that creates /dev/sdc1 is not shown), then create the LVM volume
On node01:
# pvcreate /dev/sdc1
# pvs
# vgcreate gfsvg /dev/sdc1
# lvcreate -l +100%FREE -n data gfsvg
On node02:
# /etc/init.d/clvmd start
3) Create the GFS2 file system
On node01:
# mkfs.gfs2 -p lock_dlm -t Cluster:gfs2 -j 2 /dev/gfsvg/data
Notes:
In Cluster:gfs2, "Cluster" is the cluster name and must match the cluster name in cluster.conf; "gfs2" is an arbitrary lock-table label.
-j sets the number of journals, i.e. how many hosts may mount this file system; it defaults to 1. This lab has two nodes, hence -j 2.
4) Mount the GFS2 file system
Create the mount point on node01 and node02:
# mkdir /vmdata
(1) Mount manually on node01 and node02 to test; once mounted, create a file and verify that both nodes see it through the cluster file system.
# mount.gfs2 /dev/gfsvg/data /vmdata
(2) Configure mounting at boot
# vi /etc/fstab
/dev/gfsvg/data /vmdata gfs2 defaults 0 0
9. Configure the quorum disk (Qdisk)
# The quorum disk is a shared disk and does not need to be large; this example uses the 256 MB /dev/sdg.
[root@node2 ~]# fdisk -l /dev/sdg
Disk /dev/sdg: 268 MB, 268435456 bytes
9 heads, 57 sectors/track, 1022 cylinders
Units = cylinders of 513 * 512 = 262656 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
1) Create the quorum disk
# mkqdisk -c /dev/sdg -l myqdisk
mkqdisk v3.0.12.1
Writing new quorum disk label 'myqdisk' to /dev/sdg.
WARNING: About to destroy all data on /dev/sdg; proceed [N/y] ? y
Initializing status block for node 1...
Initializing status block for node 2...
...
Initializing status block for node 16...
2) View the quorum disk information
[root@node2 ~]# mkqdisk -L
mkqdisk v3.0.12.1
/dev/block/8:96:
/dev/disk/by-id/scsi-14f504e46494c45524a504a4b6e432d72636b492d6b44457a:
/dev/disk/by-path/ip-192.168.52.110:3260-iscsi-iqn.2006-01.com.openfiler:tsn.raw-lun-5:
/dev/sdg:
Magic: eb7a62c2
Label: RHCS_qdisk
Created: Tue Nov 24 00:24:29 2015
Host: node1
Kernel Sector Size: 512
Recorded Sector Size: 512
3) Configure the quorum disk (Qdisk)
In the web interface, go to Manage Clusters --> Cluster --> Configure --> QDisk.
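The QDisk form writes a <quorumd> stanza into cluster.conf. A sketch, where the interval/tko values and the heuristic's ping target are illustrative, and the label must match the one passed to mkqdisk:

```xml
<!-- sketch: one qdisk vote plus one node's vote outvote a failed node -->
<quorumd interval="2" tko="10" votes="1" label="myqdisk">
  <!-- heuristic: node must reach this (example) address to keep its qdisk vote -->
  <heuristic program="ping -c1 192.168.52.2" interval="2" score="1"/>
</quorumd>
```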