This article walks through building an HA NFS share backed by Ceph RBD, with corosync and pacemaker handling failover.
Two NFS server hosts with rbd support: 10.20.18.97 and 10.20.18.111
VIP: 10.20.18.123, on the same subnet as the nodes
# yum install pacemaker corosync cluster-glue resource-agents
# rpm -ivh crmsh-2.1-1.6.x86_64.rpm --nodeps
# vi /etc/hosts
10.20.18.97  SZB-L0005908
10.20.18.111 SZB-L0005469
# mv /etc/corosync/corosync.conf.example /etc/corosync/corosync.conf
# vi /etc/corosync/corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
    version: 2
    secauth: off
    threads: 0
    interface {
        ringnumber: 0
        bindnetaddr: 10.20.18.111
        mcastaddr: 226.94.1.1
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled
}
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
bindnetaddr is the node's IP address (set it to each node's own address).
mcastaddr can be any valid multicast address.
# service corosync start
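Once corosync is running on both nodes, it is worth confirming that the ring is healthy and both nodes have joined before configuring pacemaker. A quick check, assuming corosync 1.x as used in this walkthrough:

```shell
# Show the status of the totem ring on this node
# ("ring 0 active with no faults" indicates a healthy ring)
corosync-cfgtool -s

# List the current cluster membership; both node IPs should appear
corosync-objctl runtime.totem.pg.mrp.srp.members
```

If a node is missing from the membership, recheck bindnetaddr and the multicast settings before proceeding.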
# crm configure property stonith-enabled=false
# crm configure property no-quorum-policy=ignore
# crm_mon -1
Last updated: Fri May 22 15:56:37 2015
Last change: Fri May 22 13:09:33 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
0 Resources configured

Online: [ SZB-L0005469 SZB-L0005908 ]
Note: pacemaker's job is to manage resources. In this setup it manages the rbd map, the filesystem mount, the NFS export, and the VIP, so that the entire path from RBD image to NFS share fails over automatically.
(本實驗創(chuàng)建的鏡像為share/share2),只需在一個節(jié)點做一次。
# rados mkpool share
# rbd create share/share2 --size 1024
# rbd map share/share2
# rbd showmapped
# mkfs.xfs /dev/rbd1
# rbd unmap share/share2
(Copy the script src/ocf/rbd.in from the ceph source tree into the directory below. Do this on all nodes.)
# mkdir /usr/lib/ocf/resource.d/ceph
# cd /usr/lib/ocf/resource.d/ceph/
# chmod +x rbd.in
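Before defining resources that use the new agent, you can confirm that pacemaker can see it. A quick sanity check with crmsh (the exact output format depends on the crmsh version):

```shell
# List the agents in the ceph provider; rbd.in should appear
crm ra list ocf ceph

# Show the agent's parameters (user, pool, name, cephconf, ...)
crm ra info ocf:ceph:rbd.in
```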
注:下面配置單個節(jié)點做
(You can run crm configure edit and paste the contents below directly.)
# primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
# primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique="false" target-role="Started"
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
# crm configure edit
node SZB-L0005469
node SZB-L0005908
primitive p_export_rbd_1 exportfs \
    params directory="/mnt/share2" clientspec="10.20.0.0/24" options="rw,async,no_subtree_check,no_root_squash" fsid=1 \
    op monitor interval=10s timeout=20s \
    op start interval=0 timeout=40s
primitive p_fs_rbd_1 Filesystem \
    params directory="/mnt/share2" fstype=xfs device="/dev/rbd/share/share2" fast_stop=no \
    op monitor interval=20s timeout=40s \
    op start interval=0 timeout=60s \
    op stop interval=0 timeout=60s
primitive p_nfs_server lsb:nfs \
    op monitor interval=10s timeout=30s
primitive p_rbd_map_1 ocf:ceph:rbd.in \
    params user=admin pool=share name=share2 cephconf="/etc/ceph/ceph.conf" \
    op monitor interval=10s timeout=20s
primitive p_rpcbind lsb:rpcbind \
    op monitor interval=10s timeout=30s
primitive p_vip_1 IPaddr \
    params ip=10.20.18.123 cidr_netmask=24 \
    op monitor interval=5
group g_nfs p_rpcbind p_nfs_server
group g_rbd_share_1 p_rbd_map_1 p_fs_rbd_1 p_export_rbd_1 p_vip_1
clone clo_nfs g_nfs \
    meta globally-unique=false target-role=Started
location l_g_rbd_share_1 g_rbd_share_1 inf: SZB-L0005469
property cib-bootstrap-options: \
    dc-version=1.1.10-14.el6-368c726 \
    cluster-infrastructure="classic openais (with plugin)" \
    symmetric-cluster=true \
    stonith-enabled=false \
    no-quorum-policy=ignore \
    expected-quorum-votes=2
rsc_defaults rsc_defaults-options: \
    resource-stickiness=0 \
    migration-threshold=1
# service corosync restart
# crm_mon -1
Last updated: Fri May 22 16:55:14 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition with quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured

Online: [ SZB-L0005469 SZB-L0005908 ]

Resource Group: g_rbd_share_1
    p_rbd_map_1    (ocf::ceph:rbd.in):        Started SZB-L0005469
    p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005469
    p_export_rbd_1 (ocf::heartbeat:exportfs): Started SZB-L0005469
    p_vip_1        (ocf::heartbeat:IPaddr):   Started SZB-L0005469
Clone Set: clo_nfs [g_nfs]
    Started: [ SZB-L0005469 SZB-L0005908 ]
# showmount -e 10.20.18.123
Export list for 10.20.18.123:
/mnt/share2 10.20.0.0/24
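From any client inside the exported subnet, the share can now be mounted through the VIP. A minimal sketch (the mountpoint /mnt/nfs-test and the test filename are arbitrary choices, not from the original setup):

```shell
# On an NFS client in 10.20.0.0/24
mkdir -p /mnt/nfs-test
mount -t nfs 10.20.18.123:/mnt/share2 /mnt/nfs-test

# Verify read/write access through the HA share
echo hello > /mnt/nfs-test/test.txt
cat /mnt/nfs-test/test.txt
```

Because the client mounts via the VIP rather than a node address, the mount survives failover of the backing node.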
# service corosync stop    (run on SZB-L0005469)
# crm_mon -1               (run on SZB-L0005908)
Last updated: Fri May 22 17:14:31 2015
Last change: Fri May 22 16:52:04 2015 via crmd on SZB-L0005469
Stack: classic openais (with plugin)
Current DC: SZB-L0005908 - partition WITHOUT quorum
Version: 1.1.10-14.el6-368c726
2 Nodes configured, 2 expected votes
8 Resources configured

Online: [ SZB-L0005908 ]
OFFLINE: [ SZB-L0005469 ]

Resource Group: g_rbd_share_1
    p_rbd_map_1    (ocf::ceph:rbd.in):        Started SZB-L0005908
    p_fs_rbd_1     (ocf::heartbeat:Filesystem): Started SZB-L0005908
    p_export_rbd_1 (ocf::heartbeat:exportfs): Started SZB-L0005908
    p_vip_1        (ocf::heartbeat:IPaddr):   Started SZB-L0005908
Clone Set: clo_nfs [g_nfs]
    Started: [ SZB-L0005908 ]
    Stopped: [ SZB-L0005469 ]