This post walks through building and configuring a small two-node Ceph cluster, step by step. The steps are straightforward and easy to follow along with.
Platform: VirtualBox 4.3.12
Virtual machines: CentOS 6.5 (Linux 2.6.32-504.3.3.el6.x86_64)
Set the hostname and configure the IP address
Note: perform the following steps on both store01 and store02, adjusting values for each host as appropriate.
[root@store01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=82e3956c-6850-426a-afd7-977a26a77dab
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.1.179
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
HWADDR=08:00:27:65:4B:DD
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
[root@store01 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:65:4B:DD
          inet addr:192.168.1.179  Bcast:192.168.127.255  Mask:255.255.128.0
          inet6 addr: fe80::a00:27ff:fe65:4bdd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:75576 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41422 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:88133010 (84.0 MiB)  TX bytes:4529474 (4.3 MiB)
[root@store01 ~]# cat /etc/hosts
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.179   store01
192.168.1.190   store02
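As a quick sanity check, the static IP in an ifcfg file can be extracted and compared against /etc/hosts before moving on. The snippet below is a hypothetical helper, not part of the original article; it demonstrates the idea on a temporary sample file rather than the real config:

```shell
# Write a sample ifcfg-style file (hypothetical; normally you would read
# /etc/sysconfig/network-scripts/ifcfg-eth0 directly).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.179
NETMASK=255.255.255.0
EOF
# Pull the IPADDR value out of the file.
ip=$(sed -n 's/^IPADDR=//p' "$cfg")
echo "configured IP: $ip"
rm -f "$cfg"
```

On a real node you would then grep /etc/hosts for "$ip" to confirm the two agree.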
Configure NTP time synchronization
[root@store01 ~]# yum install ntp ntpdate
[root@store01 ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@store01 ~]# chkconfig ntpd on
[root@store01 ~]# netstat -tunlp | grep 123
udp        0      0 192.168.1.179:123           0.0.0.0:*      12254/ntpd
udp        0      0 127.0.0.1:123               0.0.0.0:*      12254/ntpd
udp        0      0 0.0.0.0:123                 0.0.0.0:*      12254/ntpd
udp        0      0 fe80::a00:27ff:fe65:4bdd:123 :::*          12254/ntpd
udp        0      0 ::1:123                     :::*           12254/ntpd
udp        0      0 :::123                      :::*           12254/ntpd
[root@store01 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+gus.buptnet.edu 202.112.31.197   3 u    7   64  377  115.339    4.700  46.105
*dns2.synet.edu. 202.118.1.46     2 u   69   64  373   44.619    1.680   6.667
[root@store02 ~]# yum install ntp ntpdate
[root@store02 ~]# vim /etc/ntp.conf
server store01 iburst
[root@store02 ~]# service ntpd start
Starting ntpd:                                             [  OK  ]
[root@store02 ~]# chkconfig ntpd on
[root@store02 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 store01         202.112.10.36    4 u   56   64    1    0.412    0.354   0.000
[root@store02 ~]# netstat -tunlp | grep 123
udp        0      0 192.168.1.190:123           0.0.0.0:*      12971/ntpd
udp        0      0 127.0.0.1:123               0.0.0.0:*      12971/ntpd
udp        0      0 0.0.0.0:123                 0.0.0.0:*      12971/ntpd
udp        0      0 fe80::a00:27ff:fead:71b:123 :::*           12971/ntpd
udp        0      0 ::1:123                     :::*           12971/ntpd
udp        0      0 :::123                      :::*           12971/ntpd
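In `ntpq -p` output, the peer the daemon has actually selected for synchronization is marked with a leading `*`. A small check for that marker can be scripted; the sketch below parses a captured sample of the output above rather than querying a live daemon:

```shell
# Sample ntpq -p output (captured from the article; on a real host you would
# pipe `ntpq -p` instead of using this string).
sample='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+gus.buptnet.edu 202.112.31.197   3 u    7   64  377  115.339    4.700  46.105
*dns2.synet.edu. 202.118.1.46     2 u   69   64  373   44.619    1.680   6.667'
# The selected sync peer is the line beginning with '*'; strip the marker.
peer=$(printf '%s\n' "$sample" | awk '/^\*/ {print substr($1, 2)}')
echo "sync peer: $peer"
```

If `$peer` comes back empty, the daemon has not yet chosen a peer (common right after startup, as in the store02 output where `reach` is still 1).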
Disable SELinux and iptables
[root@store01 ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@store01 ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
[root@store01 ~]# chkconfig iptables off
[root@store01 ~]# chkconfig ip6tables off
[root@store01 ~]# setenforce 0
[root@store01 ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
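The same `SELINUX=disabled` edit can be done non-interactively with sed instead of vim, which is handy when preparing several nodes. This sketch applies it to a temporary stand-in for /etc/selinux/config so it is safe to run anywhere:

```shell
# Create a stand-in for /etc/selinux/config (on a real node, target the
# actual file instead of this temp copy).
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
# Flip whatever SELINUX= is currently set to over to disabled.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$tmp"
grep '^SELINUX=' "$tmp"
```

Remember that the file only takes effect at the next boot; `setenforce 0` covers the current session, as shown above.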
Set up passwordless SSH access for root between the nodes (covered in a separate post on this blog).
Add the Yum repository (Ceph version: 0.72, "Emperor")
# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-emperor/el6/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-emperor/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-emperor/el6/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
Install Ceph
[root@store01 ~]# yum install ceph ceph-deploy
[root@store01 ~]# ceph-deploy --version
1.5.11
[root@store01 ~]# ceph --version
ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
[root@store02 ~]# yum install ceph
Create the cluster

[root@store01 ~]# mkdir my-cluster
[root@store01 ~]# cd my-cluster/
[root@store01 my-cluster]# ls
[root@store01 my-cluster]# ceph-deploy new store01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.11): /usr/bin/ceph-deploy new store01
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host store01
[ceph_deploy.new][DEBUG ] Monitor store01 at 192.168.1.179
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph_deploy.new][DEBUG ] Monitor initial members are ['store01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.179']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[root@store01 my-cluster]# ls
ceph.conf  ceph.log  ceph.mon.keyring
[root@store01 my-cluster]# cat ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.1.179
mon_initial_members = store01
fsid = b45a03be-3abf-4736-8475-f238e1f2f479
[root@store01 my-cluster]# vim ceph.conf
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 192.168.1.179
mon_initial_members = store01
fsid = b45a03be-3abf-4736-8475-f238e1f2f479
osd pool default size = 2
[root@store01 my-cluster]# ceph-deploy mon create-initial
[root@store01 my-cluster]# ll
total 28
-rw-r--r-- 1 root root   72 Dec 29 10:34 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root   72 Dec 29 10:34 ceph.bootstrap-osd.keyring
-rw-r--r-- 1 root root   64 Dec 29 10:34 ceph.client.admin.keyring
-rw-r--r-- 1 root root  257 Dec 29 10:34 ceph.conf
-rw-r--r-- 1 root root 5783 Dec 29 10:34 ceph.log
-rw-r--r-- 1 root root   73 Dec 29 10:33 ceph.mon.keyring
[root@store01 my-cluster]# ceph-deploy disk list store01 store02
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.11): /usr/bin/ceph-deploy disk list store01 store02
[store01][DEBUG ] connected to host: store01
[store01][DEBUG ] detect platform information from remote host
[store01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.6 Final
[ceph_deploy.osd][DEBUG ] Listing disks on store01...
[store01][DEBUG ] find the location of an executable
[store01][INFO  ] Running command: /usr/sbin/ceph-disk list
[store01][DEBUG ] /dev/sda :
[store01][DEBUG ]  /dev/sda1 other, ext4, mounted on /boot
[store01][DEBUG ]  /dev/sda2 other, LVM2_member
[store01][DEBUG ] /dev/sdb other, unknown
[store01][DEBUG ] /dev/sdc other, unknown
[store01][DEBUG ] /dev/sr0 other, unknown
[store02][DEBUG ] connected to host: store02
[store02][DEBUG ] detect platform information from remote host
[store02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.6 Final
[ceph_deploy.osd][DEBUG ] Listing disks on store02...
[store02][DEBUG ] find the location of an executable
[store02][INFO  ] Running command: /usr/sbin/ceph-disk list
[store02][DEBUG ] /dev/sda :
[store02][DEBUG ]  /dev/sda1 other, ext4, mounted on /boot
[store02][DEBUG ]  /dev/sda2 other, LVM2_member
[store02][DEBUG ] /dev/sdb other, unknown
[store02][DEBUG ] /dev/sdc other, unknown
[store02][DEBUG ] /dev/sr0 other, unknown
[root@store01 my-cluster]# ceph-deploy disk zap store01:sd{b,c}
[root@store01 my-cluster]# ceph-deploy disk zap store02:sd{b,c}
[root@store01 my-cluster]# ceph-deploy osd create store01:sd{b,c}
[root@store01 my-cluster]# ceph-deploy osd create store02:sd{b,c}
[root@store01 my-cluster]# ceph status
    cluster e5c2f7f3-2c8a-4ae0-af26-ab0cf5f67343
     health HEALTH_OK
     monmap e1: 1 mons at {store01=192.168.1.179:6789/0}, election epoch 1, quorum 0 store01
     osdmap e18: 4 osds: 4 up, 4 in
      pgmap v28: 192 pgs, 3 pools, 0 bytes data, 0 objects
            136 MB used, 107 GB / 107 GB avail
                 192 active+clean
[root@store01 my-cluster]# ceph osd tree
# id    weight  type name       up/down reweight
-1      0.12    root default
-2      0.06            host store01
1       0.03                    osd.1   up      1
0       0.03                    osd.0   up      1
-3      0.06            host store02
3       0.03                    osd.3   up      1
2       0.03                    osd.2   up      1
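The zap/create sequence above generalizes naturally to a list of nodes and disks. The sketch below is a dry run that only prints the ceph-deploy commands it would issue (hostnames and disk names are the ones used in this article); on a real deployment you would execute each line instead of echoing it:

```shell
# Dry run: generate the ceph-deploy commands for every node/disk pair.
nodes="store01 store02"
disks="sdb sdc"
cmds=$(
  # Zap (wipe) every disk first, mirroring the order used in the article...
  for n in $nodes; do for d in $disks; do
    echo "ceph-deploy disk zap $n:$d"
  done; done
  # ...then create an OSD on each.
  for n in $nodes; do for d in $disks; do
    echo "ceph-deploy osd create $n:$d"
  done; done
)
printf '%s\n' "$cmds"
```

Disk zap is destructive, so printing the command list first and eyeballing it before execution is a cheap safety net.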
Create the storage pools

Syntax: ceph osd pool create {pool-name} {pg-num}
(For an existing pool, pg_num can be raised later with: ceph osd pool set {pool-name} pg_num {pg-num})

Note: guidelines for choosing pg_num:
- Fewer than 5 OSDs: set pg_num to 128
- Between 5 and 10 OSDs: set pg_num to 512
- Between 10 and 50 OSDs: set pg_num to 4096
- More than 50 OSDs: you need to understand the trade-offs and calculate the pg_num value yourself

[root@store01 my-cluster]# ceph osd pool create volumes 128
pool 'volumes' created
[root@store01 my-cluster]# ceph osd pool create images 128
pool 'images' created
[root@store01 my-cluster]# ceph osd lspools
0 data,1 metadata,2 rbd,3 volumes,4 images,

Create the client keys for OpenStack:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
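The pg_num rule of thumb above can be written as a tiny helper function (a sketch that simply mirrors the table in the text, nothing more):

```shell
# Map an OSD count to the recommended pg_num from the guidelines above.
pg_num_for() {
  osds=$1
  if   [ "$osds" -lt 5 ];  then echo 128
  elif [ "$osds" -le 10 ]; then echo 512
  elif [ "$osds" -le 50 ]; then echo 4096
  else echo "calculate manually"
  fi
}
pg_num_for 4   # this cluster has 4 OSDs
```

With 4 OSDs the helper returns 128, which is why both pools above are created with pg_num 128.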
This completes the Ceph setup and configuration.
The cluster can now serve as a storage backend for OpenStack's Cinder, Nova, and Glance services.
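For Cinder specifically, the wiring on the OpenStack side of that era looked roughly like the fragment below. This is a hedged sketch, not a verbatim config from the article: option names match the RBD driver settings of the Havana/Icehouse timeframe, and the secret UUID is a placeholder you must replace with the UUID of the libvirt secret holding the client.cinder key created above.

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# Placeholder: UUID of the libvirt secret that stores the client.cinder key
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```

Glance is analogous, pointing its RBD store at the images pool with the client.glance key.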