How to Deploy the Distributed System Ceph on Ubuntu


This article explains how to deploy the distributed system Ceph on Ubuntu. The material is straightforward and easy to follow, so let's work through it step by step.

Ceph is a unified storage system that exposes three kinds of interfaces:
Object: a native API, plus compatibility with the Swift and S3 APIs
Block: supports thin provisioning, snapshots, and clones
File: a POSIX interface with snapshot support
Ceph is also a distributed storage system, characterized by:
High scalability: runs on commodity x86 servers, scales from about 10 to 1,000 servers and from terabytes to petabytes of data.
High reliability: no single point of failure, multiple data replicas, automatic management, and automatic repair.
High performance: data is distributed evenly and handled with a high degree of parallelism; object storage and block storage need no metadata server.

Architecture

At the bottom of Ceph sits RADOS, short for "A reliable, autonomous, distributed object store". RADOS consists of two components:
OSD: Object Storage Device, which provides the storage resources.
Monitor: maintains the global state of the entire Ceph cluster.
RADOS is highly scalable and programmable; on top of RADOS, Ceph builds Object Storage, Block Storage, and FileSystem. Ceph's other two components are:
MDS: stores the metadata for CephFS.
RADOS Gateway: exposes a REST interface compatible with the S3 and Swift APIs.

Mapping

Ceph's namespace is (Pool, Object). Each Object is mapped to a set of OSDs, and that OSD set stores the Object:
(Pool, Object) → (Pool, PG) → OSD set → Disk
A Pool in Ceph has these attributes:
the replica count for its Objects
the number of Placement Groups
the CRUSH Ruleset it uses
In Ceph an Object is first mapped to a PG (Placement Group), and the PG is then mapped to an OSD set. Each Pool has many PGs; an Object finds its PG by hashing its name and taking the result modulo the number of PGs. The PG is then mapped to a group of OSDs (the number of OSDs is set by the Pool's replica count); the first OSD is the Primary and the rest are Replicas.
The data placement scheme determines the performance and scalability of the storage system. The (Pool, PG) → OSD set mapping is determined by four factors:
The CRUSH algorithm: a pseudo-random placement algorithm.
The OSD Map: holds the current state of every Pool and every OSD.
The CRUSH Map: holds the hierarchy of disks, servers, and racks.
CRUSH Rules: the data placement policy. These rules give fine control over where objects are stored. For example, you can require that all objects in pool1 live on rack 1, with the first replica of every object on server A in rack 1 and the second replica on server B in rack 1, while all objects in pool2 are spread across racks 2, 3, and 4, with the first replica on a server in rack 2, the second on a server in rack 3, and the third on a server in rack 4 (an illustrative rule is sketched below).
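As a rough illustration only (the rule name is made up here and the exact syntax varies across Ceph releases), a rule in a decompiled CRUSH map that places each replica on a different host under the default root might look like this:

# hypothetical rule in a decompiled CRUSH map: one replica per host
rule replicated_by_host {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default                     # start from the root of the hierarchy
        step chooseleaf firstn 0 type host    # choose one OSD from each of N distinct hosts
        step emit
}

A rule edited this way would typically be recompiled with crushtool and injected back into the cluster before it takes effect.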


A Client obtains the CRUSH Map, OSD Map, and CRUSH Ruleset from the Monitors and then uses the CRUSH algorithm to compute an Object's OSD set itself. Ceph therefore needs no name server: the Client talks to the OSDs directly. The pseudocode looks like this:

The code is as follows:


 locator = object_name                  # the object's name is used as the locator
 obj_hash = hash(locator)               # stable hash of the locator
 pg = obj_hash % num_pg                 # map the object onto one of the pool's PGs
 osds_for_pg = crush(pg)                # CRUSH returns an ordered list of osds for this PG
 primary = osds_for_pg[0]               # the first OSD is the Primary
 replicas = osds_for_pg[1:]             # the remaining OSDs are Replicas

The advantages of this placement scheme are:
Objects are grouped into PGs, which reduces the amount of metadata to track and manage (globally we no longer need to track the metadata and placement of every single object, only the metadata of the PGs, and the number of PGs is orders of magnitude smaller than the number of objects).
Increasing the number of PGs balances the load across OSDs and raises parallelism.
Failure domains are kept separate, which improves data reliability.

Strong Consistency

Ceph's reads and writes follow a Primary-Replica model: the Client sends read and write requests only to the Primary of the Object's OSD set, which guarantees strong consistency.
Because each Object has exactly one Primary OSD, updates to an Object are applied in order and there is no synchronization problem.
When the Primary receives a write request for an Object, it is responsible for sending the data to the other Replicas; only once the data has been stored on all of the OSDs does the Primary acknowledge the write, which keeps the replicas consistent (a conceptual sketch follows).
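In the same spirit as the placement pseudocode above (and likewise not real Ceph code), the write path can be sketched as follows; store() and ack_to_client() are hypothetical helpers standing in for "persist locally" and "reply to the client":

 # conceptual sketch of the primary-copy write path (hypothetical helpers, not Ceph code)
 def handle_write(primary, replicas, obj, data):
     primary.store(obj, data)                        # the Primary persists its own copy
     acks = [r.store(obj, data) for r in replicas]   # and forwards the data to every Replica
     if all(acks):                                   # only when every copy is durable
         return ack_to_client()                      # does the Primary acknowledge the write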

Fault Tolerance

In a distributed system the common failures are network outages, power loss, server crashes, and disk failures. Ceph tolerates these failures and repairs itself automatically, preserving data reliability and system availability.
Monitors are Ceph's housekeepers and maintain the cluster's global state. They play a role similar to ZooKeeper, using a quorum and the Paxos algorithm to reach consensus on that global state.
OSDs repair themselves automatically, and they do so in parallel.
Failure detection:
OSDs exchange heartbeats. When OSD A detects that OSD B is not responding, it reports to the Monitors that OSD B is unreachable; the Monitors mark OSD B as down and update the OSD Map. If OSD B still cannot be reached after M seconds, the Monitors mark it as out (meaning OSD B is no longer serving data) and update the OSD Map again.
Note: the value of M is configurable in Ceph (see the snippet below).
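As an aside (this mapping is an assumption on my part, not stated in the article), the M-second grace period before an OSD is marked out most likely corresponds to the monitor option mon osd down out interval; its default of 300 seconds in this era matches the "down for 303.57s" messages that appear in the logs later on. A minimal ceph.conf fragment to change it might look like this:

[mon]
        mon osd down out interval = 600    # example value in seconds; the default here appears to be 300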
Failure recovery:
When one OSD in a PG's OSD set is marked down (if it is the Primary that is marked down, one of the Replicas becomes the new Primary and handles all read and write requests for the PG's objects), the PG enters the active+degraded state, meaning it currently has N-1 valid replicas.
If the OSD still cannot be reached after M seconds, it is marked out, and Ceph recomputes the PG's mapping to an OSD set (the mapping of all PGs to OSD sets is also recomputed when a new OSD joins the cluster), so that the PG again has N valid replicas.
The Primary of the new OSD set first collects the PG log from the old OSD set to build an Authoritative History (a complete, totally ordered sequence of operations) and gets the other Replicas to agree on it (that is, the Replicas reach consensus on the state of all objects in the PG); this process is called Peering.
Once Peering completes, the PG enters the active+recovering state and the Primary migrates and synchronizes the degraded objects to all Replicas, restoring N copies of those objects.

Now let's look at deployment and configuration.

System environment: Ubuntu 12.04.2

The code is as follows:


hostname:s1 osd.0/mon.a/mds.a ip:192.168.242.128
hostname:s2 osd.1/mon.b/mds.b ip:192.168.242.129
hostname:s3 osd.2/mon.c/mds.c ip:192.168.242.130
hostname:s4 client ip:192.168.242.131


Passwordless SSH:
Enable root on s1/s2/s3 and set up passwordless SSH between them.

The code is as follows:


cat id_rsa.pub_s* >> authorized_keys
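The article only shows the final step of concatenating the collected public keys into authorized_keys. One way to generate and distribute the keys in the first place, sketched here as an assumption (it uses ssh-copy-id rather than the manual concatenation above), is:

# on each of s1, s2 and s3, as root: generate a key pair with no passphrase
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# then push this host's public key to the other two, e.g. from s1:
ssh-copy-id root@s2
ssh-copy-id root@s3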


Installation:

The code is as follows:


apt-get install ceph ceph-common ceph-fs-common (ceph-mds)


Upgrade to the latest version:

The code is as follows:


wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install ceph


Partitioning and mounting (using btrfs):

The code is as follows:


root@s1:/data/osd.0# df -h|grep osd
/dev/sdb1 20G 180M 19G 1% /data/osd.0
root@s2:/data/osd.1# df -h|grep osd
/dev/sdb1 20G 173M 19G 1% /data/osd.1
root@s3:/data/osd.2# df -h|grep osd
/dev/sdb1 20G 180M 19G 1% /data/osd.2
root@s1:~/.ssh# mkdir -p /tmp/ceph/    # run on every server
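The article shows only the resulting mounts; the preparation of /dev/sdb1 itself is not shown. A plausible sketch, assuming a dedicated, empty /dev/sdb1 on each server and btrfs-tools installed (directory names follow the ceph.conf below), would be:

# on s1 (use osd.1 on s2 and osd.2 on s3); assumes /dev/sdb1 holds no data
mkfs.btrfs /dev/sdb1                 # format the OSD disk with btrfs
mkdir -p /data/osd.0 /data/mon       # data directories referenced by ceph.conf
mount /dev/sdb1 /data/osd.0          # mount the OSD data directory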


Configuration:

The code is as follows:


root@s1:/data/osd.0# vim /etc/ceph/ceph.conf
[global]
auth cluster required = none
auth service required = none
auth client required = none
[osd]
osd data = /data/$name
[mon]
mon data = /data/$name
[mon.a]
host = s1
mon addr = 192.168.242.128:6789
[mon.b]
host = s2
mon addr = 192.168.242.129:6789
[mon.c]
host = s3
mon addr = 192.168.242.130:6789
[osd.0]
host = s1
btrfs devs = /dev/sdb1
[osd.1]
host = s2
btrfs devs = /dev/sdb1
[osd.2]
host = s3
btrfs devs = /dev/sdb1
[mds.a]
host = s1
[mds.b]
host = s2
[mds.c]
host = s3

Synchronize the configuration:

The code is as follows:


root@s1:~/.ssh# scp /etc/ceph/ceph.conf s2:/etc/ceph/
ceph.conf 100% 555 0.5KB/s 00:00
root@s1:~/.ssh# scp /etc/ceph/ceph.conf s3:/etc/ceph/
ceph.conf 100% 555 0.5KB/s 00:00


Run on all servers:

The code is as follows:


rm -rf /data/$name/* /data/mon/*    # the data directories must be empty before initialization
root@s1:~/.ssh# mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
temp dir is /tmp/mkcephfs.qLmwP4Nd0G
preparing monmap in /tmp/mkcephfs.qLmwP4Nd0G/monmap
/usr/bin/monmaptool --create --clobber --add a 192.168.242.128:6789 --add b 192.168.242.129:6789 --add c 192.168.242.130:6789 --print /tmp/mkcephfs.qLmwP4Nd0G/monmap
/usr/bin/monmaptool: monmap file /tmp/mkcephfs.qLmwP4Nd0G/monmap
/usr/bin/monmaptool: generated fsid c26fac57-4941-411f-a6ac-3dcd024f2073
epoch 0
fsid c26fac57-4941-411f-a6ac-3dcd024f2073
last_changed 2014-05-08 16:08:06.102237
created 2014-05-08 16:08:06.102237
0: 192.168.242.128:6789/0 mon.a
1: 192.168.242.129:6789/0 mon.b
2: 192.168.242.130:6789/0 mon.c
/usr/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.qLmwP4Nd0G/monmap (3 monitors)
=== osd.0 ===
** WARNING: No osd journal is configured: write latency may be high.
If you will not be using an osd journal, write latency may be
relatively high. It can be reduced somewhat by lowering
filestore_max_sync_interval, but lower values mean lower write
throughput, especially with spinning disks.
2014-05-08 16:08:11.279610 b72cc740 created object store /data/osd.0 for osd.0 fsid c26fac57-4941-411f-a6ac-3dcd024f2073
creating private key for osd.0 keyring /tmp/mkcephfs.qLmwP4Nd0G/keyring.osd.0
creating /tmp/mkcephfs.qLmwP4Nd0G/keyring.osd.0
=== osd.1 ===
pushing conf and monmap to s2:/tmp/mkfs.ceph.5884
** WARNING: No osd journal is configured: write latency may be high.
If you will not be using an osd journal, write latency may be
relatively high. It can be reduced somewhat by lowering
filestore_max_sync_interval, but lower values mean lower write
throughput, especially with spinning disks.
2014-05-08 16:08:21.146302 b7234740 created object store /data/osd.1 for osd.1 fsid c26fac57-4941-411f-a6ac-3dcd024f2073
creating private key for osd.1 keyring /tmp/mkfs.ceph.5884/keyring.osd.1
creating /tmp/mkfs.ceph.5884/keyring.osd.1
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to s3:/tmp/mkfs.ceph.5884
** WARNING: No osd journal is configured: write latency may be high.
If you will not be using an osd journal, write latency may be
relatively high. It can be reduced somewhat by lowering
filestore_max_sync_interval, but lower values mean lower write
throughput, especially with spinning disks.
2014-05-08 16:08:27.264484 b72b3740 created object store /data/osd.2 for osd.2 fsid c26fac57-4941-411f-a6ac-3dcd024f2073
creating private key for osd.2 keyring /tmp/mkfs.ceph.5884/keyring.osd.2
creating /tmp/mkfs.ceph.5884/keyring.osd.2
collecting osd.2 key
=== mds.a ===
creating private key for mds.a keyring /tmp/mkcephfs.qLmwP4Nd0G/keyring.mds.a
creating /tmp/mkcephfs.qLmwP4Nd0G/keyring.mds.a
=== mds.b ===
pushing conf and monmap to s2:/tmp/mkfs.ceph.5884
creating private key for mds.b keyring /tmp/mkfs.ceph.5884/keyring.mds.b
creating /tmp/mkfs.ceph.5884/keyring.mds.b
collecting mds.b key
=== mds.c ===
pushing conf and monmap to s3:/tmp/mkfs.ceph.5884
creating private key for mds.c keyring /tmp/mkfs.ceph.5884/keyring.mds.c
creating /tmp/mkfs.ceph.5884/keyring.mds.c
collecting mds.c key
Building generic osdmap from /tmp/mkcephfs.qLmwP4Nd0G/conf
/usr/bin/osdmaptool: osdmap file '/tmp/mkcephfs.qLmwP4Nd0G/osdmap'
2014-05-08 16:08:26.100746 b731e740 adding osd.0 at {host=s1,pool=default,rack=unknownrack}
2014-05-08 16:08:26.101413 b731e740 adding osd.1 at {host=s2,pool=default,rack=unknownrack}
2014-05-08 16:08:26.101902 b731e740 adding osd.2 at {host=s3,pool=default,rack=unknownrack}
/usr/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.qLmwP4Nd0G/osdmap
Generating admin key at /tmp/mkcephfs.qLmwP4Nd0G/keyring.admin
creating /tmp/mkcephfs.qLmwP4Nd0G/keyring.admin
Building initial monitor keyring
added entity mds.a auth auth(auid = 18446744073709551615 key=AQB3O2tTwDNwLRAAofpkrOMqtHCPTFX36EKAMA== with 0 caps)
added entity mds.b auth auth(auid = 18446744073709551615 key=AQB8O2tT8H8nIhAAq1O2lh5IV/cQ73FUUTOUug== with 0 caps)
added entity mds.c auth auth(auid = 18446744073709551615 key=AQB9O2tTWIfsKRAAVYeueMToC85tRSvlslV/jQ== with 0 caps)
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQBrO2tTOLQpEhAA4MS83CnJRYAkoxrFSvC3aQ== with 0 caps)
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQB1O2tTME0eChAA7U4xSrv7MJUZ8vxcEkILbw== with 0 caps)
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQB7O2tT0FUKERAAQ/EJT5TclI2XSCLAWAZZOw== with 0 caps)
=== mon.a ===
/usr/bin/ceph-mon: created monfs at /data/mon for mon.a
=== mon.b ===
pushing everything to s2
/usr/bin/ceph-mon: created monfs at /data/mon for mon.b
=== mon.c ===
pushing everything to s3
/usr/bin/ceph-mon: created monfs at /data/mon for mon.c
placing client.admin keyring in /etc/ceph/ceph.keyring


The output above warns that no OSD journal has been configured.
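To avoid that warning, a journal could have been declared in ceph.conf before running mkcephfs. The fragment below is only an illustration; the journal path and the 1 GB size are assumptions, not values from this deployment:

[osd]
        osd data = /data/$name
        osd journal = /data/$name/journal    # file-backed journal inside the OSD data directory
        osd journal size = 1000              # journal size in MB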

Now start Ceph on all nodes. The code is as follows:


root@s1:~# /etc/init.d/ceph -a start
=== mon.a ===
Starting Ceph mon.a on s1…already running
=== mds.a ===
Starting Ceph mds.a on s1…already running
=== osd.0 ===
Starting Ceph osd.0 on s1…
** WARNING: Ceph is still under development. Any feedback can be directed **
** at ceph-devel@vger.kernel.org or http://ceph.newdream.net/. **
starting osd.0 at 0.0.0.0:6801/2264 osd_data /data/osd.0 (no journal)


Check the status:

The code is as follows:


root@s1:~# ceph -s
2014-05-09 09:37:40.477978 pg v444: 594 pgs: 594 active+clean; 38199 bytes data, 531 MB used, 56869 MB / 60472 MB avail
2014-05-09 09:37:40.485092 mds e23: 1/1/1 up {0=a=up:active}, 2 up:standby
2014-05-09 09:37:40.485601 osd e34: 3 osds: 3 up, 3 in
2014-05-09 09:37:40.486276 log 2014-05-09 09:36:25.843782 mds.0 192.168.242.128:6800/1053 1 : [INF] closing stale session client.4104 192.168.242.131:0/2123448720 after 302.954724
2014-05-09 09:37:40.486577 mon e1: 3 mons at {a=192.168.242.128:6789/0,b=192.168.242.129:6789/0,c=192.168.242.130:6789/0}

root@s1:~# for i in 1 2 3 ;do ceph health;done
2014-05-09 10:05:30.306575 mon <- [health]
2014-05-09 10:05:30.309366 mon.1 -> 'HEALTH_OK' (0)
2014-05-09 10:05:30.330317 mon <- [health]
2014-05-09 10:05:30.333608 mon.2 -> 'HEALTH_OK' (0)
2014-05-09 10:05:30.352617 mon <- [health]
2014-05-09 10:05:30.353984 mon.0 -> 'HEALTH_OK' (0)


并同時(shí)查看 s1、s2、s3 log可以看到,證明3個(gè)節(jié)點(diǎn)都正常:

代碼如下:


2014-05-09 09:39:32.316795 b4bfeb40 mon.a@0(leader) e1 handle_command mon_command(health v 0) v1
2014-05-09 09:39:40.789748 b4bfeb40 mon.a@0(leader).osd e35 e35: 3 osds: 3 up, 3 in
2014-05-09 09:40:00.796979 b4bfeb40 mon.a@0(leader).osd e36 e36: 3 osds: 3 up, 3 in
2014-05-09 09:40:41.781141 b4bfeb40 mon.a@0(leader) e1 handle_command mon_command(health v 0) v1
2014-05-09 09:40:42.409235 b4bfeb40 mon.a@0(leader) e1 handle_command mon_command(health v 0) v1


The logs also contain the following clock-skew warnings:

The code is as follows:


2014-05-09 09:43:13.485212 b49fcb40 log [WRN] : message from mon.0 was stamped 6.050738s in the future, clocks not synchronized
2014-05-09 09:43:13.861985 b49fcb40 log [WRN] : message from mon.0 was stamped 6.050886s in the future, clocks not synchronized
2014-05-09 09:43:14.012633 b49fcb40 log [WRN] : message from mon.0 was stamped 6.050681s in the future, clocks not synchronized
2014-05-09 09:43:15.809439 b49fcb40 log [WRN] : message from mon.0 was stamped 6.050781s in the future, clocks not synchronized


所以我們?cè)谧黾褐白詈媚茉诩簝?nèi)部做好ntp服務(wù)器,確保各節(jié)點(diǎn)之前時(shí)間一致。

Next, verify the deployment from the client machine s4:

The code is as follows:


root@s4:/mnt# mount -t ceph s1:6789:/ /mnt/s1fs/
root@s4:/mnt# mount -t ceph s2:6789:/ /mnt/s2fs/
root@s4:/mnt# mount -t ceph s3:6789:/ /mnt/s3fs/
root@s4:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 79G 1.3G 74G 2% /
udev 241M 4.0K 241M 1% /dev
tmpfs 100M 304K 99M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 248M 0 248M 0% /run/shm
192.168.242.130:6789:/ 60G 3.6G 56G 6% /mnt/s3fs
192.168.242.129:6789:/ 60G 3.6G 56G 6% /mnt/s2fs
192.168.242.128:6789:/ 60G 3.6G 56G 6% /mnt/s1fs

root@s4:/mnt/s2fs# touch aa
root@s4:/mnt/s2fs# ls -al /mnt/s1fs
total 4
drwxr-xr-x 1 root root 0 May 8 18:08 ./
drwxr-xr-x 7 root root 4096 May 8 17:28 ../
-rw-r--r-- 1 root root 0 May 8 18:08 aa
root@s4:/mnt/s2fs# ls -al /mnt/s3fs
total 4
drwxr-xr-x 1 root root 0 May 8 18:08 ./
drwxr-xr-x 7 root root 4096 May 8 17:28 ../
-rw-r--r-- 1 root root 0 May 8 18:08 aa

root@s4:/mnt/s2fs# rm -f aa
root@s4:/mnt/s2fs# ls -al /mnt/s1fs/
total 4
drwxr-xr-x 1 root root 0 May 8 2014 ./
drwxr-xr-x 7 root root 4096 May 8 17:28 ../
root@s4:/mnt/s2fs# ls -al /mnt/s3fs/
total 4
drwxr-xr-x 1 root root 0 May 8 18:07 ./
drwxr-xr-x 7 root root 4096 May 8 17:28 ../


Next we test what happens on a single-node failure:
stop all Ceph services on s1.

The code is as follows:


root@s1:~# /etc/init.d/ceph stop
=== mon.a ===
Stopping Ceph mon.a on s1...kill 965...done
=== mds.a ===
Stopping Ceph mds.a on s1...kill 1314...done
=== osd.0 ===
Stopping Ceph osd.0 on s1...kill 2265...done


The log on s2 immediately shows the following (heavily trimmed). The gist is that the monitors detect the failure, evict the failed node, fail over automatically, and the cluster recovers.

The code is as follows:


2014-05-09 10:16:44.906370 a5af0b40 -- 192.168.242.129:6802/1495 >> 192.168.242.128:6802/1466 pipe(0xb1e1b1a8 sd=19 pgs=3 cs=3 l=0).fault with nothing to send, going to standby
2014-05-09 10:16:44.906982 a68feb40 -- 192.168.242.129:6803/1495 >> 192.168.242.128:0/1467 pipe(0xa6e00d50 sd=17 pgs=1 cs=1 l=0).fault with nothing to send, going to standby
2014-05-09 10:16:44.907415 a63f9b40 -- 192.168.242.129:0/1506 >> 192.168.242.128:6803/1466 pipe(0xb1e26d50 sd=20 pgs=1 cs=1 l=0).fault with nothing to send, going to standby
2014-05-09 10:16:49.028640 b5199b40 mds.0.6 handle_mds_map i am now mds.0.6
2014-05-09 10:16:49.029018 b5199b40 mds.0.6 handle_mds_map state change up:reconnect --> up:rejoin
2014-05-09 10:16:49.029260 b5199b40 mds.0.6 rejoin_joint_start
2014-05-09 10:16:49.032134 b5199b40 mds.0.6 rejoin_done
==> /var/log/ceph/mon.b.log <==
2014-05-09 10:16:49.060870 b5198b40 log [INF] : mds.0 192.168.242.129:6804/1341 up:active
==> /var/log/ceph/mds.b.log <==
2014-05-09 10:16:49.073135 b5199b40 mds.0.6 handle_mds_map i am now mds.0.6
2014-05-09 10:16:49.073237 b5199b40 mds.0.6 handle_mds_map state change up:rejoin --> up:active
2014-05-09 10:16:49.073252 b5199b40 mds.0.6 recovery_done -- successful recovery!
2014-05-09 10:16:49.073871 b5199b40 mds.0.6 active_start
2014-05-09 10:16:49.073934 b5199b40 mds.0.6 cluster recovered.
==> /var/log/ceph/mon.b.log <==
2014-05-09 10:18:24.366217 b5198b40 mon.b@1(leader) e1 handle_command mon_command(health v 0) v1
2014-05-09 10:18:25.717589 b5198b40 mon.b@1(leader) e1 handle_command mon_command(health v 0) v1
2014-05-09 10:18:29.481811 b5198b40 mon.b@1(leader) e1 handle_command mon_command(health v 0) v1
2014-05-09 10:21:39.184889 b4997b40 log [INF] : osd.0 out (down for 303.572445)
2014-05-09 10:21:39.195596 b5198b40 mon.b@1(leader).osd e42 e42: 3 osds: 2 up, 2 in
2014-05-09 10:21:40.199772 b5198b40 mon.b@1(leader).osd e43 e43: 3 osds: 2 up, 2 in
root@s2:~# ceph -s
2014-05-09 10:24:18.075291 pg v501: 594 pgs: 594 active+clean; 47294 bytes data, 359 MB used, 37907 MB / 40315 MB avail
2014-05-09 10:24:18.093637 mds e27: 1/1/1 up {0=b=up:active}, 1 up:standby
2014-05-09 10:24:18.094047 osd e43: 3 osds: 2 up, 2 in
2014-05-09 10:24:18.094833 log 2014-05-09 10:21:39.185547 mon.1 192.168.242.129:6789/0 40 : [INF] osd.0 out (down for 303.572445)
2014-05-09 10:24:18.095606 mon e1: 3 mons at {a=192.168.242.128:6789/0,b=192.168.242.129:6789/0,c=192.168.242.130:6789/0}
root@s1:~# ceph health
2014-05-09 10:18:43.185714 mon <- [health]
2014-05-09 10:18:43.189028 mon.2 -> 'HEALTH_WARN 1/3 in osds are down; 1 mons down, quorum 1,2' (0)
root@s2:~# ceph health
2014-05-09 10:23:40.655548 mon <- [health]
2014-05-09 10:23:40.658293 mon.2 -> 'HEALTH_WARN 1 mons down, quorum 1,2' (0)
root@s3:~# ceph health
2014-05-09 10:23:28.058080 mon <- [health]
2014-05-09 10:23:28.061126 mon.1 -> 'HEALTH_WARN 1 mons down, quorum 1,2' (0)


Next, shut down s2 as well, leaving only s3 running.
The log on s3 then fills with messages like these:

The code is as follows:


==> /var/log/ceph/mds.c.log <==
2014-05-09 10:33:04.274503 b5180b40 mds.-1.0 ms_handle_connect on 192.168.242.130:6789/0

==> /var/log/ceph/osd.2.log <==
2014-05-09 10:33:04.832597 b4178b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:44.832568)
2014-05-09 10:33:05.084620 a7be9b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:45.084592)
2014-05-09 10:33:05.585583 a7be9b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:45.585553)
2014-05-09 10:33:05.834589 b4178b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:45.834559)
2014-05-09 10:33:06.086562 a7be9b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:46.086533)
2014-05-09 10:33:06.835683 b4178b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:46.835641)
2014-05-09 10:33:07.287766 a7be9b40 osd.2 43 heartbeat_check: no heartbeat from osd.1 since 2014-05-09 10:29:54.607954 (cutoff 2014-05-09 10:32:47.287737)


The heartbeat check on osd.2 reports that no heartbeat has been received from osd.1 on s2.
s1, s2, and s3 each run a mon, an mds, and an osd, but with only one node of the cluster left the monitors cannot form a majority quorum, so the cluster can no longer serve requests.

Thank you for reading. That concludes "How to Deploy the Distributed System Ceph on Ubuntu". Hopefully you now have a clearer picture of the topic; the details are best confirmed by trying the deployment yourself.
