
Ceph Getting Started: Deploying a Three-Node Ceph Distributed Storage Cluster on CentOS 7

Published: 2020-06-20 05:49:53  Source: web  Views: 4057  Author: 三石头  Category: server setup


1. Ceph cluster environment

   Three virtual machines are used; one of them also serves as the admin node, and all three act as both monitor nodes and OSD nodes (3 mons and 3 OSDs in total).

    The operating system is CentOS 7 Minimal. Download: http://124.205.69.134/files/4128000005F9FCB3/mirrors.zju.edu.cn/centos/7.4.1708/isos/x86_64/CentOS-7-x86_64-Minimal-1708.iso

 

2. Preparation: run the following on every host

# hostnamectl set-hostname ceph2 \\ set the hostname

# vi /etc/sysconfig/network-scripts/ifcfg-ens32  or nmtui  \\ configure the IP address

# systemctl restart network \\ restart the network service

\\ CentOS Minimal cannot tab-complete command arguments out of the box; installing bash completion is recommended (veterans can skip this)

# yum -y install bash-completion.noarch

# date  \\ check the system time and make sure it is consistent across all nodes

# echo '192.168.59.131  ceph2' >> /etc/hosts \\ edit the hosts file, adding mappings for all servers
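The single echo above adds only one mapping; every node needs entries for all three servers. A minimal sketch of the full edit (the IPs are the ones this article's cluster uses; adjust to your own network):

```shell
# Run on each of ceph2, ceph3 and ceph4: append all three mappings.
# IPs below match the ones shown later in this article's `ceph -s` output.
cat >> /etc/hosts <<'EOF'
192.168.59.131  ceph2
192.168.59.132  ceph3
192.168.59.133  ceph4
EOF
```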

# setenforce 0     \\ switch SELinux to permissive mode for the current boot

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config  \\ edit the config file so SELinux stays disabled after a reboot

# firewall-cmd --zone=public --add-port=6789/tcp --permanent

# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent  \\ add firewall rules (6789/tcp for the monitors, 6800-7100/tcp for the OSDs)

# firewall-cmd --reload    \\ make the firewall rules take effect

# ssh-keygen   \\ generate an SSH key pair

# ssh-copy-id root@ceph2    \\ the key must be copied between every pair of servers
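Copying keys pairwise by hand is tedious. A hedged sketch of a loop that pushes the local root key to every node (the hostnames are the ones used throughout this article; the `copy_keys` name and the `DRY_RUN` preview switch are conveniences invented here):

```shell
# Push the local root SSH key to each node in turn.
# Set DRY_RUN=1 to print the commands instead of running them.
copy_keys() {
    for host in ceph2 ceph3 ceph4; do
        if [ -n "${DRY_RUN:-}" ]; then
            echo "ssh-copy-id root@${host}"
        else
            ssh-copy-id "root@${host}"
        fi
    done
}
```

Run `copy_keys` once on every server; `DRY_RUN=1 copy_keys` previews what would be executed.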


3. Deploy ceph-deploy (only one of the machines needs this)

# vi /etc/yum.repos.d/ceph.repo    \\ add a Ceph yum repo with the following contents

[ceph-noarch]  

name=Ceph noarch packages

baseurl=http://download.ceph.com/rpm-luminous/el7/noarch   

enabled=1  

gpgcheck=1  

type=rpm-md  

gpgkey=https://download.ceph.com/keys/release.asc 

        

# yum update && reboot    \\ update the system and reboot

# yum install ceph-deploy -y    \\ install ceph-deploy


    a. An error occurred here:

Downloading packages:

(1/4): python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm    |  12 kB  00:00:00     

(2/4): python-backports-1.0-8.el7.x86_64.rpm      | 5.8 kB  00:00:02     

ceph-deploy-1.5.38-0.noarch.rp FAILED         ]  90 kB/s | 298 kB  00:00:04 ETA 

http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=ceph-noarch clean metadata

Trying other mirror.

(3/4): python-setuptools-0.9.8-7.el7.noarch.rpm         | 397 kB  00:00:05     



Error downloading packages:

ceph-deploy-1.5.38-0.noarch: [Errno 256] No more mirrors to try.

  Workaround: install the package directly with rpm:

#rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm

    b. This command produced another error:

            

Retrieving http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm

warning: /var/tmp/rpm-tmp.gyId2U: Header V4 RSA/SHA256 Signature, key ID 460f3994: NOKEY

error: Failed dependencies:

python-distribute is needed by ceph-deploy-1.5.38-0.noarch

    Fix: install the missing dependency:

# yum install python-distribute -y

 Then run the rpm install again:

#rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm

                        

4. Deploy the monitor service

# mkdir ~/ceph-cluster  && cd ~/ceph-cluster  \\ create a new cluster configuration directory

# ceph-deploy new ceph2 ceph3 ceph4    \\ this generates three files: a Ceph configuration file, a monitor keyring, and a log file

#ls -l    

-rw-r--r-- 1 root root    266 Sep 19 16:41 ceph.conf

-rw-r--r-- 1 root root 172037 Sep 19 16:32 ceph-deploy-ceph.log

-rw------- 1 root root     73 Sep 19 11:03 ceph.mon.keyring
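For reference, the generated ceph.conf usually contains little more than the cluster id, the initial monitors, and the cephx settings. A sketch of roughly what to expect (the fsid shown is the cluster id that appears in this article's `ceph -s` output; your values will differ):

```shell
cat ~/ceph-cluster/ceph.conf
# Typical contents (approximate, not verbatim):
#
# [global]
# fsid = e508bdeb-b986-4ee8-82c6-c25397a5f1eb
# mon_initial_members = ceph2, ceph3, ceph4
# mon_host = 192.168.59.131,192.168.59.132,192.168.59.133
# auth_cluster_required = cephx
# auth_service_required = cephx
# auth_client_required = cephx
```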

# ceph-deploy mon create-initial    \\ initialize the cluster (deploys the monitors and gathers the keys)

5. Install Ceph

 # ceph-deploy install ceph2 ceph3 ceph4    \\ install Ceph on ceph2, ceph3 and ceph4

    a. An error occurred here:

        

[ceph2][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm

[ceph2][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew

[ceph2][DEBUG ] Preparing...                          ########################################

[ceph2][DEBUG ] Updating / installing...

[ceph2][DEBUG ] ceph-release-1-1.el7                  ########################################

[ceph2][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority

[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'

   Workaround: remove the conflicting release package, then run the install again:

# yum remove ceph-release -y

# ceph-deploy install ceph2 ceph3 ceph4

6. Create the OSDs

 # ceph-deploy disk list ceph{2,3,4}   \\ list each server's disks

 # ceph-deploy --overwrite-conf osd prepare ceph2:sdc:/dev/sdb  ceph3:sdc:/dev/sdb  ceph4:sdc:/dev/sdb        \\ prepare the disks: sdb as the journal disk, sdc as the data disk
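Each positional argument above follows ceph-deploy's HOST:DATA:JOURNAL form. As a hedged convenience (the `osd_args` helper name is invented here), the argument strings can be generated instead of typed out:

```shell
# Build one HOST:DATA:JOURNAL argument per host, matching the layout
# used in this article: data on sdc, journal on /dev/sdb.
osd_args() {
    for host in "$@"; do
        printf '%s:sdc:/dev/sdb\n' "$host"
    done
}
```

Usage: `ceph-deploy --overwrite-conf osd prepare $(osd_args ceph2 ceph3 ceph4)` expands, via word splitting, to the same three arguments as the command above.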

 # ceph-deploy osd activate ceph2:sdc:/dev/sdb  ceph3:sdc:/dev/sdb  ceph4:sdc:/dev/sdb    \\ activate the OSDs

     An error appeared at this step (its cause and fix are covered in the "Problem resolved" section at the end). It did not affect the deployment: the lsblk output below shows the disks were mounted successfully.

[ceph2][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdc: Line is truncated: 

[ceph2][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc

 # lsblk 

NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT

sda           8:0    0   20G  0 disk 

├─sda1        8:1    0    1G  0 part /boot

└─sda2        8:2    0   19G  0 part 

  ├─cl-root 253:0    0   18G  0 lvm  /

  └─cl-swap 253:1    0    1G  0 lvm  [SWAP]

sdb           8:16   0   30G  0 disk 

└─sdb1        8:17   0    5G  0 part 

sdc           8:32   0   40G  0 disk 

└─sdc1        8:33   0   40G  0 part /var/lib/ceph/osd/ceph-0

sr0          11:0    1  680M  0 rom  

rbd0        252:0    0    1G  0 disk /root/rbddir


7. Deployment successful

# ceph -s

    cluster e508bdeb-b986-4ee8-82c6-c25397a5f1eb

     health HEALTH_OK

     monmap e2: 3 mons at {ceph2=192.168.59.131:6789/0,ceph3=192.168.59.132:6789/0,ceph4=192.168.59.133:6789/0}

            election epoch 10, quorum 0,1,2 ceph2,ceph3,ceph4

     osdmap e55: 3 osds: 3 up, 3 in

            flags sortbitwise,require_jewel_osds

      pgmap v13638: 384 pgs, 5 pools, 386 MB data, 125 objects

            1250 MB used, 118 GB / 119 GB avail

                 384 active+clean
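When scripting around a cluster like this, the health line above can be checked mechanically rather than by eye. A minimal sketch (the `ceph_is_healthy` name is an invention here; it simply greps the status dump piped into it):

```shell
# Succeeds (exit 0) only when the status piped in contains HEALTH_OK.
ceph_is_healthy() {
    grep -q 'HEALTH_OK'
}
```

Usage: `ceph -s | ceph_is_healthy && echo "cluster healthy"`.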


Problem resolved:

The OSD activation error above (ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdc) was caused by passing whole-disk names to `osd activate`: by the time activation runs, ceph-deploy has already partitioned the disks, so the journal device is /dev/sdb1 and the data partition is sdc1.

The correct command is:

# ceph-deploy osd activate ceph2:sdc1:/dev/sdb1  ceph3:sdc1:/dev/sdb1  ceph4:sdc1:/dev/sdb1



Original author: 三石头

All rights reserved by the author. Contact the author for permission before commercial reposting; for non-commercial reposting, please credit the source.

