This article takes a look at the standby_replay fast hot-standby state of ceph-mds through a hands-on example. It is meant as a practical reference for anyone running CephFS; hopefully you will take something useful away from it.
Ceph's MDS (metadata server) provides the metadata service for CephFS, Ceph's file storage service.
Once a CephFS filesystem is created, it is managed by the ceph-mds service. By default, Ceph assigns a single MDS to manage the filesystem, even when several MDS daemons have already been deployed, as shown below:
[root@ceph-admin my-cluster]# ceph-deploy mds create ceph-node01 ceph-node02
.......
[root@ceph-admin ~]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-admin=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean
At this point only the MDS on ceph-admin is in the active state; the others remain standby.
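The single active daemon corresponds to the filesystem's max_mds setting, which defaults to 1. If you want to confirm this on your own cluster, a quick check looks roughly like the following (a sketch only; the filesystem name mytest-fs is taken from the output above, and the exact fields printed vary between Ceph releases):

# Dump the filesystem map and pick out max_mds and the daemon states
[root@ceph-admin ~]# ceph fs get mytest-fs | grep -E 'max_mds|up'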
standby is already a form of disaster recovery, but in practice the failover is fairly slow, which will inevitably affect a real production workload.
A test run looks like this:
[root@ceph-admin my-cluster]# killall ceph-mds
[root@ceph-admin my-cluster]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_WARN
            1 filesystem is degraded

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-node02=up:rejoin}, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean

[root@ceph-admin my-cluster]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-node02=up:active}, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean

  io:
    client:   20 KiB/s rd, 3 op/s rd, 0 op/s wr
After the active MDS was killed, the standby MDS had to pass through the rejoin state before becoming active, which took roughly 3-5 seconds. In a production environment, where the metadata volume is much larger, the failover usually takes considerably longer.
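To put your own number on this, you can poll the MDS map once a second while killing the active daemon from another shell; the timestamps show how long rank 0 sits in replay/rejoin before returning to up:active. This is only an illustrative sketch, not part of the original test:

# Timestamped view of the MDS state, printed once per second
while true; do
    echo "$(date '+%H:%M:%S')  $(ceph mds stat)"
    sleep 1
done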
To make the MDS fail over faster, the MDS daemons need to run in the standby_replay state. The official documentation describes this state as follows:
The MDS is following the journal of another up:active MDS. Should the active MDS fail, having a standby MDS in replay mode is desirable as the MDS is replaying the live journal and will more quickly takeover. A downside to having standby replay MDSs is that they are not available to takeover for any other MDS that fails, only the MDS they follow.
In other words, a standby_replay MDS continuously replays the active MDS's metadata journal, which greatly speeds up the failover. So how do we get an MDS to run in the standby_replay state?
[root@ceph-node01 ~]# ps aufx | grep mds
root      700547  0.0  0.0 112704   976 pts/1    S+   13:45   0:00  \_ grep --color=auto mds
ceph      690340  0.0  0.5 451944 22988 ?        Ssl  10:09   0:03 /usr/bin/ceph-mds -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph
[root@ceph-node01 ~]# killall ceph-mds
[root@ceph-node01 ~]# /usr/bin/ceph-mds -f --cluster ceph --id ceph-node01 --setuser ceph --setgroup ceph --hot-standby 0
starting mds.ceph-node01 at -
Here we manually stopped the ceph-mds daemon and restarted it with the extra --hot-standby parameter (the trailing 0 is the rank of the active MDS it should follow).
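If you want to confirm the daemon's own view of its state before looking at the cluster as a whole, its admin socket on that node can be queried from a second shell. This is a sketch; the exact JSON fields differ slightly between releases:

# Ask the local MDS daemon what state it believes it is in
[root@ceph-node01 ~]# ceph daemon mds.ceph-node01 status
# Once it is following rank 0 you should see a line such as:  "state": "up:standby-replay"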
Next we can see that the MDS restarted with --hot-standby is now in the standby-replay state:
[root@ceph-admin my-cluster]# ceph -s
  cluster:
    id:     06dc2b9b-0132-44d2-8a1c-c53d765dca5d
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum ceph-admin,ceph-node01
    mgr: ceph-admin(active)
    mds: mytest-fs-1/1/1 up  {0=ceph-admin=up:active}, 1 up:standby-replay, 1 up:standby
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

  data:
    pools:   8 pools, 64 pgs
    objects: 299 objects, 137 MiB
    usage:   3.4 GiB used, 297 GiB / 300 GiB avail
    pgs:     64 active+clean

  io:
    client:   851 B/s rd, 1 op/s rd, 0 op/s wr
Of course, you can also adjust the ceph-mds startup configuration so that the daemon enters the standby-replay state automatically every time it starts; see the sketches below.
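On releases of this vintage (the output above looks like Mimic-era Ceph), the usual way to make this persistent is to set the standby-replay options in ceph.conf for the daemon in question rather than launching it by hand. The snippet below is a sketch under that assumption; the section name reuses the daemon id from the earlier examples:

[mds.ceph-node01]
    # Follow the journal of the active MDS instead of sitting idle
    mds_standby_replay = true
    # Only take over for rank 0 (the single active rank of mytest-fs)
    mds_standby_for_rank = 0

Alternatively, if you prefer to keep the --hot-standby flag from the manual test, a systemd drop-in can add it to the stock unit (again a sketch, assuming the default ceph-mds@.service shipped with the packages):

# /etc/systemd/system/ceph-mds@.service.d/hot-standby.conf
[Service]
ExecStart=
ExecStart=/usr/bin/ceph-mds -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph --hot-standby 0

# then: systemctl daemon-reload && systemctl restart ceph-mds@ceph-node01

In newer Ceph releases (Nautilus and later) both approaches are superseded by a per-filesystem switch: ceph fs set <fs_name> allow_standby_replay true.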