I. Environment Requirements
1. NICs
| host        | em1         | em2         | em3  | em4  |
| controller1 | 172.16.16.1 | 172.16.17.1 | none | none |
| controller2 | 172.16.16.2 | 172.16.17.2 | none | none |
| compute1    | 172.16.16.3 | 172.16.17.3 | none | none |
| compute2    | 172.16.16.4 | 172.16.17.4 | none | none |
| compute3    | 172.16.16.5 | 172.16.17.5 | none | none |
| …… |
2. Message queue
Use mirrored-queue mode; for detailed deployment steps, see the RabbitMQ cluster deployment document on ZenTao (禪道).
3. Database
Use MariaDB + InnoDB + Galera, version 10.0.18 or later; for detailed deployment steps, see the Galera cluster deployment document on ZenTao.
4. Middleware
Use memcached (not clustered). Edit /etc/sysconfig/memcached and replace 127.0.0.1 with the local hostname (or IP).
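For example, a minimal sketch, assuming the stock file still carries the default 127.0.0.1 value:
sed -i "s/127.0.0.1/$(hostname)/g" /etc/sysconfig/memcached
systemctl enable memcached.service && systemctl restart memcached.service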
II. Deployment Plan
This deployment uses controller1 as the authentication host name.
All service passwords use $MODULE + "manager", e.g. novamanager, glancemanager.
Database passwords use "dftc" + $MODULE; they are written as DB_PASS in the commands below.
Planned subnets: 172.16.16.0/24 for management, 172.16.17.0/24 for storage, and 172.16.18.0/23 for the external network.
Before starting, assign the variable MYIP=`ip add show em1|grep inet|head -1|awk '{print $2}'|awk -F'/' '{print $1}'`
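A quick sanity check that the variable holds this node's management address:
echo $MYIP    ### should print e.g. 172.16.16.1 on controller1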
This document uses a flat + vxlan network layout; if you need something different, research it yourself.
1. database
mysql -uroot -p****** -e "create database keystone;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database glance;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database nova;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database nova_api;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database neutron;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "create database cinder;"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p****** -e "FLUSH PRIVILEGES;"
2. keystone
### Install packages
yum install openstack-keystone httpd mod_wsgi
### Edit the configuration file
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:DB_PASS@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
### Sync the database and generate fernet keys
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
### /etc/httpd/conf/httpd.conf: set ServerName controller1
### Write the keystone vhost file:
cat > /etc/httpd/conf.d/wsgi-keystone.conf <<'EOF'
Listen 5000
Listen 35357
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
EOF
####
systemctl enable httpd.service && systemctl start httpd.service
###
export OS_TOKEN=749d6ead6be998642461
export OS_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
openstack service create --name keystone --description "DFTCIAAS Identity" identity
openstack endpoint create --region scxbxxzx identity public http://controller1:5000/v3
openstack endpoint create --region scxbxxzx identity internal http://controller1:5000/v3
openstack endpoint create --region scxbxxzx identity admin http://controller1:35357/v3
openstack domain create --description "Default Domain" default
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
######## create roles, projects and users
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
sed -i "/^pipeline/ s#admin_token_auth##g" /etc/keystone/keystone-paste.ini
unset OS_TOKEN OS_URL
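With the bootstrap token dropped, later openstack commands need real credentials. A sketch of an admin-openrc file; the password value is an assumption following the $MODULE + "manager" rule above:
cat > ~/admin-openrc <<'EOF'
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=adminmanager
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
source ~/admin-openrc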
openstack user create --domain default --password-prompt glance
echo ######## create image service and endpoint
openstack role add --project service --user glance admin
openstack service create --name glance --description "DFTCIAAS Image" image
openstack endpoint create --region scxbxxzx image public http://controller1:9292
openstack endpoint create --region scxbxxzx image internal http://controller1:9292
openstack endpoint create --region scxbxxzx image admin http://controller1:9292
openstack usercreate --domain default --password-prompt nova
echo ########create compute server and endpoint
openstack roleadd --project service --user nova admin
openstack servicecreate --name nova --description"DFTCIAAS Compute" compute
openstackendpoint create --region scxbxxzx compute publichttp://controller1:8774/v2.1/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx compute internal http://controller1:8774/v2.1/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx compute adminhttp://controller1:8774/v2.1/%\(tenant_id\)s
openstack usercreate --domain default --password-prompt neutron
echo ########create network server and endpoint
openstack roleadd --project service --user neutron admin
openstack servicecreate --name neutron --description"DFTCIAAS Networking" network
openstackendpoint create --region scxbxxzx network public http://controller1:9696
openstackendpoint create --region scxbxxzx network internal http://controller1:9696
openstackendpoint create --region scxbxxzx network admin http://controller1:9696
openstack usercreate --domain default --password-prompt cinder
echo ########create volume server and endpoint
openstack roleadd --project service --user cinder admin
openstack servicecreate --name cinder --description "DFTCIAAS Block Storage" volume
openstack servicecreate --name cinderv2 --description "DFTCIAAS Block Storage"volumev2
openstackendpoint create --region scxbxxzx volumepublic http://controller1:8776/v1/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx volumeinternal http://controller1:8776/v1/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx volumeadmin http://controller1:8776/v1/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx volumev2 public http://controller1:8776/v2/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s
openstackendpoint create --region scxbxxzx volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s
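At this point the whole service catalog can be verified with, for example:
openstack service list
openstack endpoint list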
3. glance
#### Install packages
yum install openstack-glance
#### Edit the configuration files
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:DB_PASS@controller1/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glancemanager
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:DB_PASS@controller1/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller1:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller1:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glancemanager
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
### Sync the database
su -s /bin/sh -c "glance-manage db_sync" glance
### Start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
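To verify the service, upload a test image; a sketch, assuming outbound access to download.cirros-cloud.net and that admin-openrc has been sourced:
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list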
4. nova
4.1 Controller node
# Install packages
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
# Edit the configuration file
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:DB_PASS@controller1/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:DB_PASS@controller1/nova
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password ******
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password novamanager
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $MYIP
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $MYIP
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
# Sync the databases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage db sync" nova
# Start the services
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
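Verify that the controller services registered, e.g.:
openstack compute service list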
4.2 Compute node
# Install packages
yum install openstack-nova-compute
# Edit the configuration file
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password ******
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password novamanager
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen $MYIP
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address $MYIP
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller1:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller1:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
# Start the services
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
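virt_type qemu above forces full software emulation; hosts with VT-x/AMD-V would normally use kvm instead. A quick way to check:
egrep -c '(vmx|svm)' /proc/cpuinfo    ### 0 means no hardware virtualization support; keep qemu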
5. neutron
5.1 Controller node
# Install packages
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
# Edit neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:DB_PASS@controller1/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password dftcpass
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutronmanager
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name scxbxxzx
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password novamanager
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
## Edit ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:500
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
## Edit linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings default:em3,public:em3
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $MYIP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
## Edit l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge `echo ' '`
## Edit dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
## Edit metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller1
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret metadatamanager
## Edit nova.conf so nova uses the networking service
openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name scxbxxzx
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutronmanager
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret metadatamanager
# Create the plugin symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# Sync the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# Start the services
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-l3-agent.service
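Confirm that all agents report alive, e.g.:
neutron agent-list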
5.2 Compute node
## Install packages
yum install openstack-neutron-linuxbridge ebtables ipset
## Edit neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password dftcpass
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutronmanager
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
## Edit linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings default:em3,public:em4
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip $MYIP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
## Edit nova.conf so nova uses the networking service
openstack-config --set /etc/nova/nova.conf neutron url http://controller1:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller1:35357/v3
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name scxbxxzx
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutronmanager
# Start the services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
6. dashboard
## Install packages
yum install openstack-dashboard
## Edit /etc/openstack-dashboard/local_settings and change the following:
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE ='django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND':'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL ="http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT =True
OPENSTACK_API_VERSIONS= {
"identity": 3,
"p_w_picpath": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN ="default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE ="user"
OPENSTACK_NEUTRON_NETWORK= {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_***': False,
'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Chongqing"
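Restart the web services so the new settings take effect:
systemctl restart httpd.service memcached.service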
7. cinder
## Edit the configuration file
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip $MYIP
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:DB_PASS@controller1/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller1:5000/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller1:35357/v3
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller1:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cindermanager
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_hosts controller1:5672
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_userid dftc
openstack-config --set /etc/cinder/cinder.conf oslo_messaging_rabbit rabbit_password dftcpass
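The volume and volumev2 endpoints were already created in section 2. The remaining steps are not recorded above; a sketch following the same pattern as the other services, assuming api, scheduler, and volume run on this node:
su -s /bin/sh -c "cinder-manage db sync" cinder
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service openstack-cinder-volume.service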
8. ceph
### Clean old ceph configuration files and packages
ceph-deploy purge controller1 compute1 compute2 compute3
ceph-deploy purgedata controller1 compute1 compute2 compute3
ceph-deploy forgetkeys
ssh compute1 sudo rm -rf /osd/osd0/*
ssh compute2 sudo rm -rf /osd/osd1/*
ssh compute3 sudo rm -rf /osd/osd2/*
### Install the new ceph cluster
su - dftc
mkdir cluster
cd cluster
# Initialize the mon node
ceph-deploy new controller1
## Adjust the configuration file
echo "osd pool default size = 2" >> ceph.conf
echo "public network = 172.16.16.0/24" >> ceph.conf
echo "cluster network = 172.16.17.0/24" >> ceph.conf
## Install ceph on the nodes
### ceph.x86_64 1:10.2.5-0.el7           ceph-base.x86_64 1:10.2.5-0.el7
### ceph-common.x86_64 1:10.2.5-0.el7    ceph-mds.x86_64 1:10.2.5-0.el7
### ceph-mon.x86_64 1:10.2.5-0.el7       ceph-osd.x86_64 1:10.2.5-0.el7
### ceph-radosgw.x86_64 1:10.2.5-0.el7   ceph-selinux.x86_64 1:10.2.5-0.el7
ceph-deploy install controller1 compute1 compute2 compute3
## Initialize ceph-mon
ceph-deploy mon create-initial
########### error message
[compute3][DEBUG ] detect platform information from remote host
[compute3][DEBUG ] detect machine type
[compute3][DEBUG ] find the location of an executable
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 4
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 3
[ceph_deploy.mon][WARNIN] waiting 10 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 2
[ceph_deploy.mon][WARNIN] waiting 15 seconds before retrying
[compute3][INFO  ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.compute3.asok mon_status
[ceph_deploy.mon][WARNIN] mon.compute3 monitor is not yet in quorum, tries left: 1
[ceph_deploy.mon][WARNIN] waiting 20 seconds before retrying
[ceph_deploy.mon][ERROR ] Some monitors have still not reached quorum:
[ceph_deploy.mon][ERROR ] compute1
[ceph_deploy.mon][ERROR ] compute3
[ceph_deploy.mon][ERROR ] compute2
######## resolution
Copied the remote configuration file to the local host and compared the two files; the contents were identical, so it was safe to proceed to the next step.
## Initialize the OSDs
ceph-deploy osd prepare compute1:/osd/osd0/ compute2:/osd/osd1 compute3:/osd/osd2
ceph-deploy osd activate compute1:/osd/osd0/ compute2:/osd/osd1 compute3:/osd/osd2
ceph-deploy admin controller1 compute1 compute2 compute3
chmod +r /etc/ceph/ceph.client.admin.keyring
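The caps and service configuration below assume four RBD pools exist; a sketch (the PG count of 128 is an assumption, size it for your OSD count):
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
ceph osd pool create vms 128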
####
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
####
ceph auth get-or-create client.glance | ssh controller1 sudo tee /etc/ceph/ceph.client.glance.keyring
ssh controller1 sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh compute1 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute1 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute2 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute2 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.cinder-backup | ssh compute3 sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh compute3 sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
### Run the commands below on the controller node #########################
ceph auth get-key client.cinder | ssh compute1 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute2 tee client.cinder.key
ceph auth get-key client.cinder | ssh compute3 tee client.cinder.key
### Run the following as the dftc user on each compute node ################
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>c2ad36f3-f184-48b3-81c3-49411cc6566f</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret c2ad36f3-f184-48b3-81c3-49411cc6566f --base64 AQAhhXhYL3ApHhAAYO5wYNEdz63pNxermCgjFg== && rm client.cinder.key secret.xml
###### or, when already running as root:
virsh secret-set-value --secret c2ad36f3-f184-48b3-81c3-49411cc6566f --base64 AQAhhXhYL3ApHhAAYO5wYNEdz63pNxermCgjFg==
##### OLD VERSION
openstack-config --set /etc/glance/glance-api.conf DEFAULT default_store rbd
##### NEW VERSION
openstack-config --set /etc/glance/glance-api.conf glance_store default_store rbd
openstack-config --set /etc/glance/glance-api.conf DEFAULT show_image_direct_url True
openstack-config --set /etc/glance/glance-api.conf glance_store stores rbd
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/glance/glance-api.conf glance_store rbd_store_chunk_size 8
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
## Image properties
### Recommended image properties:
### hw_scsi_model=virtio-scsi: add a virtio-scsi controller for better performance and discard support;
### hw_disk_bus=scsi: attach all cinder block devices to that controller;
### hw_qemu_guest_agent=yes: enable the QEMU guest agent;
### os_require_quiesce=yes: send fs-freeze/thaw calls through the QEMU guest agent.
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph volume_driver cinder.volume.drivers.rbd.RBDDriver
openstack-config --set /etc/cinder/cinder.conf ceph rbd_pool volumes
openstack-config --set /etc/cinder/cinder.conf ceph rbd_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf ceph rbd_flatten_volume_from_snapshot false
openstack-config --set /etc/cinder/cinder.conf ceph rbd_max_clone_depth 5
openstack-config --set /etc/cinder/cinder.conf ceph rbd_store_chunk_size 4
openstack-config --set /etc/cinder/cinder.conf ceph rados_connect_timeout -1
openstack-config --set /etc/cinder/cinder.conf ceph glance_api_version 2
openstack-config --set /etc/cinder/cinder.conf ceph rbd_user cinder
openstack-config --set /etc/cinder/cinder.conf ceph rbd_secret_uuid c2ad36f3-f184-48b3-81c3-49411cc6566f
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.ceph
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_conf /etc/ceph/ceph.conf
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_user cinder-backup
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_chunk_size 134217728
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_pool backups
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_stripe_unit 0
openstack-config --set /etc/cinder/cinder.conf DEFAULT backup_ceph_stripe_count 0
openstack-config --set /etc/cinder/cinder.conf DEFAULT restore_discard_excess_bytes true
openstack-config --set /etc/nova/nova.conf libvirt rbd_user cinder
openstack-config --set /etc/nova/nova.conf libvirt rbd_secret_uuid c2ad36f3-f184-48b3-81c3-49411cc6566f
############### Add to the [client] section of /etc/ceph/ceph.conf on the compute nodes
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
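After switching Glance, Cinder, and Nova to RBD, the affected services need a restart to pick up the new backends (a sketch; run each command on the node that hosts the service):
systemctl restart openstack-glance-api.service      ### controller
systemctl restart openstack-cinder-volume.service   ### volume nodes
systemctl restart openstack-nova-compute.service    ### compute nodes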
Note: Add open issues that you identify while writing or reviewing this document to the open issues section. As you resolve issues, move them to the closed issues section and keep the issue ID the same. Include an explanation of the resolution. When this deliverable is complete, any open issues should be transferred to the project- or process-level Risk and Issue Log (PJM.CR.040) and managed using a project-level Risk and Issue Form (PJM.CR.040); the open items should remain in the open issues section of this deliverable, flagged in the resolution column as transferred.
Open Issues
ID: 001
Issue: Implement DVR
Resolution: none
Tips: once east-west traffic is handled by openstack-openvswitch, DVR can be implemented;
ID: 002
Issue: Implement HA
Resolution:
Tips: use keepalived to provide virtual IPs, and haproxy for load balancing and port forwarding;
ID: 003
Issue: With a virtual IP, the glance port is unreachable and images cannot be uploaded; nova and neutron show the same problem
Resolution:
None yet.
……
Closed Issues
ID: 001
Issue: The keystone database needed to be reset
Resolution:
#### Clear the old database and old data ########
mysql -uroot -p**** -e "drop database keystone;"
mysql -uroot -p**** -e "create database keystone;"
mysql -uroot -p**** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'DB_PASS';"
mysql -uroot -p**** -e "GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'DB_PASS';"
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token 749d6ead6be998642461
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:DB_PASS@controller1/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
### Sync the database and set up fernet keys #######
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
ID: 002
Issue: A module's CLI reports "auth failed"
Resolution:
Reset all of the module's users, services, and endpoints: recreate the user, add it to the admin role, and rebuild the module's service and endpoints.
ID: 003
Issue: The VNC console cannot be opened
Resolution:
Run on the compute node:
MYIP=`ip add show em1|grep inet|head -1|awk '{print $2}'|awk -F'/' '{print $1}'`
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://$MYIP:6080/
ID: 004
Issue: glance with local storage could not upload images
Resolution:
Checked whether port 9292 was up and reachable via telnet; it was not. Re-checking the configuration file showed that the ceph integration cannot use the "virt_type" option: ceph itself uses the rbd format to tag and manage all objects uniformly.
ID: 005
Issue: Creating an instance failed; the UI reported that the connection to http://controller:9696 failed
Resolution:
Port 9696 was listening normally, but the actual local hostname is controller1. Updated /etc/nova/nova.conf to a correct, resolvable hostname:
[neutron]
url = http://controller1:9696
ID: 006
Issue: glance-api reported as running, but its port came up only once every 10 seconds and could not be connected to; the api log showed nothing unusual, and systemctl status threw a python error: ERROR: Store for schema file not found
Resolution:
In the ceph integration, default_store goes under [DEFAULT] in older releases but under [glance_store] in newer ones; after moving it, the service behaved normally:
default_store = rbd
ID: 007
Issue: openstack-nova-compute.service never finished starting and hung indefinitely
Resolution: Checked the configuration file: the service could not reach the message queue. An earlier file update had left the wrong rabbitmq port in place; after correcting it to 5672, the start succeeded.
ID: 008
Issue: openstack-nova-api.service failed to start with: ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN
Resolution:
The rabbitmq configuration was wrong: the username was misspelled; after correcting it, the service started normally.
ID: 009
Issue: ceph-deploy's dependencies could not be installed:
Processing Dependency: python-distribute for package: ceph-deploy-1.5.34-0.noarch
Package python-setuptools-0.9.8-4.el7.noarch is obsoleted by python2-setuptools-22.0.5-1.el7.noarch which is already installed
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.34-0.noarch (ceph-noarch)
       Requires: python-distribute
       Available: python-setuptools-0.9.8-4.el7.noarch (base)
       python-distribute = 0.9.8-4.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
The conflicting package ($ rpm -qa | grep setuptools):
python2-setuptools-22.0.5-1.el7.noarch
Uninstalling it was one option.
Resolution:
Installed via pip instead:
yum install python-pip
pip install ceph-deploy
ID: 010
Issue: After configuring the dashboard, the web UI could not be reached
Resolution:
memcached could not bind to the hostname's port.
ID: 011
Issue: The dashboard keeps throwing intermittent errors
Clicking around the openstack dashboard pops up error messages in the top-right corner; refreshing makes them disappear.
Resolution:
After MySQL is installed, the default maximum connection count is 100, which is far from enough once traffic picks up.
1. Edit the mariadb configuration and raise the maximum connections to 1500:
echo "max_connections=1500" >> /etc/my.cnf.d/server.cnf
2. Restart the database:
service mariadb restart
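Verify that the new limit took effect, e.g.:
mysql -uroot -p**** -e "SHOW VARIABLES LIKE 'max_connections';"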
……