Daily operations for a Ceph cluster in Docker


This article walks through common day-to-day operations for a Ceph cluster deployed with Docker: managing daemons with systemd, checking monitor and OSD status, reading logs, and checking pool capacity.

List all Ceph daemon units

[root@k8s-node1 ceph]# systemctl list-unit-files |grep ceph
ceph-disk@.service                            static  
ceph-mds@.service                             disabled
ceph-mgr@.service                             disabled
ceph-mon@.service                             enabled 
ceph-osd@.service                             enabled 
ceph-radosgw@.service                         disabled
ceph-mds.target                               enabled 
ceph-mgr.target                               enabled 
ceph-mon.target                               enabled 
ceph-osd.target                               enabled 
ceph-radosgw.target                           enabled 
ceph.target                                   enabled
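
A single daemon unit can also be inspected or enabled on its own. The sketch below assumes the instance id osd.0, which exists on this node according to the OSD tree later in the article:

# Show the current state of one OSD instance
systemctl status ceph-osd@0
# Make sure it starts automatically on boot
systemctl enable ceph-osd@0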

Start all daemons of a given type on a Ceph node

systemctl start ceph-osd.target
systemctl start ceph-mon.target
systemctl start ceph-mds.target
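
The same target units can be stopped or restarted; ceph.target covers every Ceph daemon on the node. A minimal sketch:

# Stop or restart all daemons of one type on this node
systemctl stop ceph-osd.target
systemctl restart ceph-mon.target
# Restart every Ceph daemon on the node at once
systemctl restart ceph.target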

Start a specific daemon instance on a Ceph node

systemctl start ceph-osd@{id}
systemctl start ceph-mon@{hostname}
systemctl start ceph-mds@{hostname}
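
For example, on the cluster in this article (hosts k8s-node1..3, OSD ids 0..2), restarting individual instances might look like the following; substitute your own ids and hostnames:

systemctl restart ceph-osd@0          # OSD instances are addressed by numeric id
systemctl restart ceph-mon@k8s-node1  # mon instances are named after the host
systemctl restart ceph-mds@k8s-node1  # only if an MDS actually runs on this host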

Check monitor (mon) status

[root@k8s-node1 ceph]# ceph -s
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13640: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35913 MB used, 21812 MB / 57726 MB avail
                  64 active+clean
[root@k8s-node1 ceph]# ceph 
ceph> status
    cluster 2e6519d9-b733-446f-8a14-8622796f83ef
     health HEALTH_OK
     monmap e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}
            election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
        mgr active: k8s-node1 standbys: k8s-node3, k8s-node2
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
      pgmap v13670: 64 pgs, 1 pools, 0 bytes data, 0 objects
            35915 MB used, 21810 MB / 57726 MB avail
                  64 active+clean
ceph> health
HEALTH_OK
ceph> mon_status
{"name":"k8s-node1","rank":0,"state":"leader","election_epoch":26,"quorum":[0,1,2],"features":{"required_con":"9025616074522624","required_mon":["kraken"],"quorum_con":"1152921504336314367","quorum_mon":["kraken"]},"outside_quorum":[],"extra_probe_peers":["172.16.22.202:6789\/0","172.16.22.203:6789\/0"],"sync_provider":[],"monmap":{"epoch":4,"fsid":"2e6519d9-b733-446f-8a14-8622796f83ef","modified":"2018-10-28 21:30:09.197608","created":"2018-10-28 09:49:11.509071","features":{"persistent":["kraken"],"optional":[]},"mons":[{"rank":0,"name":"k8s-node1","addr":"172.16.22.201:6789\/0","public_addr":"172.16.22.201:6789\/0"},{"rank":1,"name":"k8s-node2","addr":"172.16.22.202:6789\/0","public_addr":"172.16.22.202:6789\/0"},{"rank":2,"name":"k8s-node3","addr":"172.16.22.203:6789\/0","public_addr":"172.16.22.203:6789\/0"}]}}

Ceph logging

By default, Ceph logs are written to /var/log/ceph/ceph.log on each node. You can use ceph -w to watch cluster log entries in real time.

When a node reports errors, log in to that node and watch the log with the command below.

[root@k8s-node1 ceph]# ceph -w
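
If ceph -w cannot reach the cluster (for example when the admin keyring is missing on a node), the log file can be read directly. A simple sketch, assuming the default log path:

# Follow the cluster log on disk
tail -f /var/log/ceph/ceph.log
# Show only recent warning and error entries
grep -E 'WRN|ERR' /var/log/ceph/ceph.log | tail -n 20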

The Ceph monitors also continuously run checks on their own state; when a check fails, the monitor writes that information to the cluster log.

[root@k8s-node1 ceph]# ceph mon stat
e4: 3 mons at {k8s-node1=172.16.22.201:6789/0,k8s-node2=172.16.22.202:6789/0,k8s-node3=172.16.22.203:6789/0}, election epoch 26, quorum 0,1,2 k8s-node1,k8s-node2,k8s-node3
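
Beyond ceph mon stat, the full monitor map can be dumped with a standard mon command:

# Show the monmap: epoch, fsid, and each monitor's rank and address
ceph mon dump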

Check OSDs

[root@k8s-node1 ceph]# ceph osd stat
     osdmap e31: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds,require_kraken_osds
[root@k8s-node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.05516 root default                                         
-2 0.01839     host k8s-node1                                   
 0 0.01839         osd.0           up  1.00000          1.00000 
-3 0.01839     host k8s-node2                                   
 1 0.01839         osd.1           up  1.00000          1.00000 
-4 0.01839     host k8s-node3                                   
 2 0.01839         osd.2           up  1.00000          1.00000
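
Per-OSD utilisation can be checked as well, and a problem OSD can be restarted on its host; a hedged sketch using the OSD ids from the tree above:

# Disk usage, weight and PG count per OSD
ceph osd df
# If osd.2 were down, restart it on its host (k8s-node3 here) and watch it rejoin
systemctl restart ceph-osd@2
ceph osd tree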

Check pool size and available capacity

[root@k8s-node1 ceph]#  ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    57726M     21811M       35914M         62.21 
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd      0         0         0         5817M           0
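
The MAX AVAIL column is computed per pool and already takes replication into account. The replica count of the rbd pool and per-pool object statistics can be checked with standard commands; a hedged example:

# Show the replica count of the rbd pool
ceph osd pool get rbd size
# Per-pool object counts and usage
rados df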

That covers the basic day-to-day operations for a Ceph cluster running in Docker: managing daemons through systemd targets and instances, checking monitor, OSD and pool status, and watching the cluster log. Hopefully this serves as a useful reference.


