Configuration Steps for Connecting OpenStack Pike to a Ceph RBD Cluster

Published: 2020-05-27 13:46:51  Source: Yisu Cloud (億速云)  Reads: 411  Author: 鴿子  Category: Cloud Computing

環(huán)境說明
openpstack-Pike對接cephRBD單集群,配置簡單,可參考openstack官網(wǎng)或者ceph官網(wǎng);
1.Openstack官網(wǎng)參考配置:
https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html
2.Ceph官網(wǎng)參考配置:
https://docs.ceph.com/docs/master/install/install-ceph-deploy/
由于物理環(huán)境和業(yè)務需求變更,當前配置云計算環(huán)境要求一套openstack對接后臺兩套不同版本的cephRBD存儲集群;
此處以現(xiàn)有以下正常運行環(huán)境展開配置;
1)openstack-Pike
2)Ceph Luminous 12.2.5
3)Ceph Nautilus 14.2.7
其中,openstack對接ceph Luminous配置完成,且正常運行。現(xiàn)在此套openstack+ceph環(huán)境基礎上,新增一套ceph Nautilus存儲集群,使openstack能夠同時調(diào)用兩套存儲資源。

Configuration Steps
1. Copy the configuration files
# Copy the new cluster's configuration file and the cinder account's keyring to the OpenStack cinder node:
/etc/ceph/ceph3.conf
/etc/ceph/ceph.client.cinder2.keyring
# The cinder2 account is used here, so only the cinder2 account's key needs to be copied.
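The copy itself might look like the following sketch. It stages a placeholder config locally so the snippet runs anywhere; the monitor hostname in the comments and the /tmp demo path are assumptions, not from the original:

```shell
# Sketch: stage the second cluster's config so the cinder service can read it.
# In practice these files come from a node of the new Ceph cluster, e.g.:
#   scp ceph-mon1:/etc/ceph/ceph.conf /etc/ceph/ceph3.conf
#   scp ceph-mon1:/etc/ceph/ceph.client.cinder2.keyring /etc/ceph/
# The demo below writes a placeholder config under /tmp (fsid/mon_host are fake).
mkdir -p /tmp/ceph-demo
cat > /tmp/ceph-demo/ceph3.conf <<'EOF'
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_host = 192.0.2.10,192.0.2.11,192.0.2.12
EOF
chmod 644 /tmp/ceph-demo/ceph3.conf   # config is world-readable; keyrings should stay restricted
ls -l /tmp/ceph-demo/ceph3.conf
```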

2. Create the storage pools
# After the OSDs have been added, create the pools, specify their pg/pgp counts, and enable the corresponding application mode:
ceph osd pool create volumes 512 512
ceph osd pool create backups 128 128
ceph osd pool create vms 512 512
ceph osd pool create images 128 128

ceph osd pool application enable volumes rbd
ceph osd pool application enable backups rbd
ceph osd pool application enable vms rbd
ceph osd pool application enable images rbd
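The pg/pgp counts above are cluster-dependent; a common rule of thumb targets roughly (OSD count × 100) / replica count placement groups in total, rounded down to a power of two, then split across the pools. A quick sketch of that calculation (the 48-OSD, 3-replica figures are illustrative assumptions):

```shell
# Estimate a total pg budget: largest power of two <= osds*100/replicas.
osds=48        # assumed OSD count
replicas=3     # assumed replication factor
target=$(( osds * 100 / replicas ))   # 1600 for these inputs
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # largest power of two not exceeding the target: 1024
```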

3. Create cluster access accounts
ceph auth get-or-create client.cinder2 mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.cinder2-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

4. Check the service processes
# List the current cinder component services in OpenStack:
source /root/keystonerc.admin
cinder service-list

5. Modify the configuration file
# Edit the cinder configuration file (/etc/cinder/cinder.conf):
[DEFAULT]
enabled_backends = ceph2,ceph3

[ceph2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph2
rbd_pool = volumes1
rbd_ceph_conf = /etc/ceph2/ceph2.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder1
rbd_secret_uuid = **

[ceph3]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph3
rbd_pool = volumes2
rbd_ceph_conf = /etc/ceph/ceph3/ceph3.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder2
rbd_secret_uuid = **
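The rbd_secret_uuid values (elided as ** above) must match the libvirt secrets defined later on the compute nodes. A fresh UUID can be generated as in this sketch (a Linux host is assumed; the kernel's UUID generator is used so no extra tools are needed):

```shell
# Generate a UUID for the new backend's libvirt secret and sanity-check its format.
uuid=$(cat /proc/sys/kernel/random/uuid)
echo "$uuid"
echo "$uuid" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' \
  && echo "uuid format ok"
```

The same value then goes into both cinder.conf (rbd_secret_uuid) and the secret XML defined on the compute nodes.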

6. Restart the services
# Restart the cinder-volume and cinder-scheduler services:
service openstack-cinder-volume restart
# Redirecting to /bin/systemctl restart openstack-cinder-volume.service
service openstack-cinder-scheduler restart
# Redirecting to /bin/systemctl restart openstack-cinder-scheduler.service

7. Check the services again
cinder service-list

8. Create volume types for testing
# Bind each volume type to its backend:
cinder type-create ceph2
cinder type-key ceph2 set volume_backend_name=ceph2
cinder type-create ceph3
cinder type-key ceph3 set volume_backend_name=ceph3
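The scheduler routes a new volume to whichever backend section declares a volume_backend_name matching the type's extra spec. That lookup can be illustrated against a sample config fragment (the /tmp path is only for the demo):

```shell
# Simulate the type -> backend match the cinder scheduler performs.
cat > /tmp/cinder-demo.conf <<'EOF'
[ceph2]
volume_backend_name = ceph2
[ceph3]
volume_backend_name = ceph3
EOF
wanted=ceph3   # value of the volume type's volume_backend_name extra spec
grep -q "volume_backend_name = $wanted" /tmp/cinder-demo.conf \
  && echo "backend $wanted available"
```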

9. Verify the bindings by creating volumes
cinder create --volume-type ceph2 --display-name {volume-name} {volume-size}
cinder create --volume-type ceph3 --display-name {volume-name} {volume-size}

Configuring libvirt
1. Add the second cluster's key to libvirt on the nova-compute nodes
# So that VMs can access volumes on the second Ceph RBD cluster, add the second cluster's cinder user key to libvirt on each nova-compute node:
ceph -c /etc/ceph/ceph3.conf -k /etc/ceph/ceph.client.cinder2.keyring auth get-key client.cinder2 | tee client.cinder2.key

# Bind the UUID of the second Ceph cluster used in cinder.conf (replace the *** below with that rbd_secret_uuid value):
cat > secret2.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>***</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF
# Run the whole block above as-is, substituting the UUID value.

sudo virsh secret-define --file secret2.xml

sudo virsh secret-set-value --secret ***** --base64 $(cat client.cinder2.key)
rm client.cinder2.key secret2.xml
# When rm asks for confirmation, answer Y.
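The secret-definition step can also be scripted end to end; a minimal sketch follows (the UUID is illustrative — use the rbd_secret_uuid from cinder.conf — and the virsh calls are shown as comments because they need a libvirt host):

```shell
# Build secret2.xml with the UUID substituted from a variable.
uuid="457eb676-33da-42ec-9a8c-9293d545c337"   # illustrative value only
cat > /tmp/secret2.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${uuid}</uuid>
  <usage type='ceph'>
    <name>client.cinder2 secret</name>
  </usage>
</secret>
EOF
grep -q "<uuid>${uuid}</uuid>" /tmp/secret2.xml && echo "secret2.xml ready"
# On a real compute node, you would then run:
#   virsh secret-define --file /tmp/secret2.xml
#   virsh secret-set-value --secret "$uuid" --base64 "$(cat client.cinder2.key)"
```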

2. Verify that the configuration works
# Attach a volume of each of the two types created earlier to an OpenStack VM:
nova volume-attach {instance-id} {volume1-id}
nova volume-attach {instance-id} {volume2-id}
