This guide shows how to install OpenStack on Red Hat Enterprise Linux 7 and its derivatives using the EPEL repository.
Note: this walkthrough uses CentOS 7.3 throughout to install the OpenStack Liberty release; it was tested by creating and running virtual machines in a KVM environment.
systemctl stop iptables
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
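The sed substitution above can be dry-run first to confirm it does what you expect; this sketch applies the same edit to a scratch copy instead of the real /etc/sysconfig/selinux:

```shell
# Work on a scratch copy so the real config stays untouched
tmpcfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmpcfg"

# Same substitution as above, applied to the scratch file
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' "$tmpcfg"

grep '^SELINUX=' "$tmpcfg"   # prints: SELINUX=disabled
rm -f "$tmpcfg"
```

Note that setenforce 0 only disables SELinux until the next reboot; the file edit makes it permanent.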
yum install vim net-tools
Configure /etc/hosts on both the controller and compute nodes:
echo "192.168.0.231 controller" >> /etc/hosts
echo "192.168.0.232 compute1" >> /etc/hosts
Controller node (time server):
# yum install -y chrony
# vim /etc/chrony.conf
allow 192.168/16 # which hosts may sync time from this server
# systemctl enable chronyd.service # start at boot
# systemctl start chronyd.service
# timedatectl set-timezone Asia/Shanghai # set the time zone
# timedatectl status
Compute node (time client):
# yum install -y chrony
# vim /etc/chrony.conf
server 192.168.0.231 iburst # keep only this one server line, pointing at the controller
# systemctl enable chronyd.service
# systemctl start chronyd.service
# timedatectl set-timezone Asia/Shanghai
# chronyc sources
# vi /etc/yum.repos.d/CentOS-OpenStack-liberty.repo
[centos-openstack-liberty]
name=CentOS-7 - OpenStack liberty
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Centos-7
[centos-openstack-liberty-test]
name=CentOS-7 - OpenStack liberty Testing
baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-liberty/
gpgcheck=0
enabled=0
# Alternatively, use the release package that CentOS provides for OpenStack Liberty:
# yum install -y centos-release-openstack-liberty
Controller node packages:
#Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install -y centos-release-openstack-liberty
yum install -y python-openstackclient
##MySQL
yum install -y mariadb mariadb-server MySQL-python
##RabbitMQ
yum install -y rabbitmq-server
##Keystone
yum install -y openstack-keystone httpd mod_wsgi memcached python-memcached
##Glance
yum install -y openstack-glance python-glance python-glanceclient
##Nova
yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
##Neutron linux-node1.example.com
yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
##Dashboard
yum install -y openstack-dashboard
##Cinder
yum install -y openstack-cinder python-cinderclient
Compute node packages:
##Base
yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm
yum install centos-release-openstack-liberty
yum install python-openstackclient
##Nova
yum install -y openstack-nova-compute sysfsutils
##Neutron
yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset
##Cinder
yum install -y openstack-cinder python-cinderclient targetcli python-oslo-policy
[root@controller ~]# yum install mariadb mariadb-server MySQL-python
[root@controller ~]# vi /etc/my.cnf.d/mariadb_openstack.cnf
[mysqld]
bind-address = 192.168.0.231
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
max_connections=1000
[root@controller ~]# systemctl enable mariadb.service
[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# mysql_secure_installation
[root@controller ~]# vi /usr/lib/systemd/system/mariadb.service
Add the following two lines under the [Service] section:
LimitNOFILE=10000
LimitNPROC=10000
systemctl --system daemon-reload
systemctl restart mariadb.service
mysql -uroot -popenstack
SQL> show variables like 'max_connections';
# mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'openstack';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'openstack';
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'openstack';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'openstack';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'openstack';
FLUSH PRIVILEGES;
SHOW DATABASES;
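The five databases above follow an identical pattern, so the SQL can also be generated with a small loop and piped into mysql. This is a sketch; it assumes the shared password openstack used throughout this guide (including for the cinder user):

```shell
# Emit CREATE/GRANT statements for every OpenStack service database.
# Usage: pipe the output into  mysql -u root -p
for svc in keystone glance nova neutron cinder; do
  cat <<EOF
CREATE DATABASE ${svc};
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'localhost' IDENTIFIED BY 'openstack';
GRANT ALL PRIVILEGES ON ${svc}.* TO '${svc}'@'%' IDENTIFIED BY 'openstack';
EOF
done
echo "FLUSH PRIVILEGES;"
```

Generating the statements keeps the per-service users and passwords consistent, which avoids mismatches like a one-off cinder password.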
yum install rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
# netstat -tunlp | grep 5672
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 1694/beam.smp
tcp 0 0 0.0.0.0:25672 0.0.0.0:* LISTEN 1694/beam.smp
tcp6 0 0 :::5672 :::* LISTEN 1694/beam.smp
# rabbitmqctl add_user openstack openstack
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
# rabbitmq-plugins list
# rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Applying plugin configuration to rabbit@controller... started 6 plugins.
# rabbitmq-plugins list
# The following plugins are now listed as enabled:
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@controller
|/
[e*] amqp_client 3.6.5
[e*] mochiweb 2.13.1
[E*] rabbitmq_management 3.6.5
[e*] rabbitmq_management_agent 3.6.5
[e*] rabbitmq_web_dispatch 3.6.5
[e*] webmachine 1.10.3
# systemctl restart rabbitmq-server.service
The management web UI now listens on port 15672:
http://192.168.0.231:15672
Default administrator login: guest/guest
Note: to use the openstack/openstack account in the web UI, give it the administrator tag:
# rabbitmqctl set_user_tags openstack administrator
Keystone is installed on the controller node. To improve performance, Apache serves the web requests and memcached stores the token data.
# yum install openstack-keystone httpd mod_wsgi memcached python-memcached
Note: the default configuration may differ between keystone versions.
openssl rand -hex 10
c885b63d0ce5760ff23e
Use this random value as the admin_token setting below.
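`openssl rand -hex 10` always yields 20 hex characters (10 random bytes, two hex digits per byte); a quick sanity check:

```shell
# Generate an admin token and confirm its shape
token=$(openssl rand -hex 10)
echo "$token"       # a 20-character lowercase hex string
echo "${#token}"    # prints: 20
```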
cat /etc/keystone/keystone.conf |grep -v "^#" | grep -v "^$"
[DEFAULT]
admin_token = c885b63d0ce5760ff23e
[database]
connection = mysql://keystone:openstack@192.168.0.231/keystone
[memcache]
servers = 192.168.0.231:11211
[revoke]
driver = sql
[token]
provider = uuid
driver = memcache
# chown -R keystone:keystone /var/log/keystone
# su -s /bin/sh -c "keystone-manage db_sync" keystone
This creates /var/log/keystone/keystone.log, which keystone also writes to when it starts.
# mysql -h 192.168.0.231 -ukeystone -popenstack -e "use keystone;show tables;"
systemctl enable memcached.service
systemctl start memcached.service
# netstat -tunlp | grep 11211
tcp 0 0 127.0.0.1:11211 0.0.0.0:* LISTEN 3288/memcached
tcp6 0 0 ::1:11211 :::* LISTEN 3288/memcached
udp 0 0 127.0.0.1:11211 0.0.0.0:* 3288/memcached
udp6 0 0 ::1:11211 :::* 3288/memcached
# vi /etc/httpd/conf/httpd.conf
ServerName 192.168.0.231:80
# vi /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
systemctl enable httpd.service
systemctl start httpd.service
Verify:
[root@controller ~]# ss -ntl | grep -E "5000|35357"
LISTEN 0 128 *:35357 *:*
LISTEN 0 128 *:5000 *:*
[root@controller ~]#
Set temporary environment variables so the admin token can be used for bootstrapping:
export OS_TOKEN=c885b63d0ce5760ff23e
export OS_URL=http://192.168.0.231:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@controller ~]# openstack service create --name keystone --description "OpenStack Identity" identity
2.5.3 Register the keystone API endpoints: admin (management), public, and internal
[root@controller ~]# openstack endpoint create --region RegionOne identity public http://192.168.0.231:5000/v2.0
[root@controller ~]# openstack endpoint create --region RegionOne identity internal http://192.168.0.231:5000/v2.0
[root@controller ~]# openstack endpoint create --region RegionOne identity admin http://192.168.0.231:35357/v2.0
[root@controller ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
| 05a5e9b559664d848b45d353d12594c1 | RegionOne | keystone | identity | True | admin | http://192.168.0.231:35357/v2.0 |
| 9a240664c4dc438aa8b9f892c668cb27 | RegionOne | keystone | identity | True | internal | http://192.168.0.231:5000/v2.0 |
| e63642b80e4f45b69866825e9e1b9837 | RegionOne | keystone | identity | True | public | http://192.168.0.231:5000/v2.0 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------+
[root@controller ~]# openstack project create --domain default --description "Admin Project" admin
[root@controller ~]# openstack user create --domain default --password=openstack admin
[root@controller ~]# openstack role create admin
[root@controller ~]# openstack role add --project admin --user admin admin
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| e2bae88d31b54e4ab1a4cb2251da8a6a | admin |
+----------------------------------+-------+
[root@controller ~]# openstack project create --domain default --description "Demo Project" demo
[root@controller ~]# openstack user create --domain default --password=openstack demo
[root@controller ~]# openstack role create user
[root@controller ~]# openstack role add --project demo --user demo user
[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 4151e2b9b78842d282250d4cfb31ebba | demo |
| 508b377f6f3a478f80a5a019e2c5b10a | admin |
+----------------------------------+-------+
[root@controller ~]# openstack project create --domain default --description "Service Project" service
List the projects:
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID | Name |
+----------------------------------+---------+
| 184655bf46de4c3fbc0f8f13d1d9bfb8 | service |
| 3bfa1c4208d7482a8f21709d458f924e | demo |
| 77f86bae2d344a658f26f71d03933c45 | admin |
+----------------------------------+---------+
If an endpoint was created incorrectly, delete it by ID:
[root@controller ~]# openstack endpoint delete ID
To verify with username/password authentication instead of the admin token, first unset the token variables:
[root@controller ~]# unset OS_TOKEN OS_URL
Request a token for the admin user:
[root@controller ~]# openstack --os-auth-url http://192.168.0.231:35357/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name admin --os-username admin --os-auth-type password token issue
Password:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2017-05-10T03:12:15.764769Z |
| id | b28410f9c6314cd8aebeca0beb478bf9 |
| project_id | 79d295e81e5a4255a02a8ea26ae4606a |
| user_id | 4015e1151aee4ab7811f320378ce6031 |
+------------+----------------------------------+
Request a token for the demo user:
[root@controller ~]# openstack --os-auth-url http://192.168.0.231:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
+------------+----------------------------------+
| Field | Value |
+------------+----------------------------------+
| expires | 2017-05-10T03:12:59.252178Z |
| id | 110b9597c5fd49ac9ac3c1957648ede7 |
| project_id | ce0af495eb844e199db649d7f7baccb4 |
| user_id | afd908684eee42aaa7d73e22671eee24 |
+------------+----------------------------------+
[root@controller ~]# vim admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://192.168.0.231:35357/v3
export OS_IDENTITY_API_VERSION=3
[root@controller ~]# vim demo-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://192.168.0.231:5000/v3
export OS_IDENTITY_API_VERSION=3
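A quick way to confirm an openrc file loads cleanly is to source it in a subshell and echo one of the variables. This sketch writes a throwaway copy to a temp file rather than touching the real demo-openrc.sh:

```shell
# Build a minimal throwaway openrc file
rcfile=$(mktemp)
cat > "$rcfile" <<'EOF'
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_AUTH_URL=http://192.168.0.231:5000/v3
EOF

# Source it in a subshell so the current environment stays clean
( . "$rcfile" && echo "$OS_USERNAME@$OS_AUTH_URL" )
# prints: demo@http://192.168.0.231:5000/v3
rm -f "$rcfile"
```

In real use you simply run `source demo-openrc.sh` in the current shell, as the later sections do.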
Glance provides discovery, registration, and retrieval of virtual machine images. By default it stores images under /var/lib/glance/images/.
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password=openstack glance
[root@controller ~]# openstack role add --project service --user glance admin
[root@controller ~]# openstack service create --name glance --description "OpenStack Image service" image
[root@controller ~]# openstack endpoint create --region RegionOne image public http://192.168.0.231:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://192.168.0.231:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://192.168.0.231:9292
[root@controller ~]# yum install openstack-glance python-glance python-glanceclient
cat /etc/glance/glance-api.conf |grep -v "^#" | grep -v "^$"
[DEFAULT]
verbose=True
notification_driver = noop
[database]
connection = mysql://glance:openstack@192.168.0.231/glance
[glance_store]
default_store=file
filesystem_store_datadir=/var/lib/glance/images/
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=glance
password=openstack
[paste_deploy]
flavor=keystone
cat /etc/glance/glance-registry.conf |grep -v "^#" | grep -v "^$"
[DEFAULT]
verbose=True
notification_driver = noop
[database]
connection = mysql://glance:openstack@192.168.0.231/glance
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=glance
password=openstack
[paste_deploy]
flavor=keystone
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
No handlers could be found for logger "oslo_config.cfg"
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:450: Warning: Duplicate index `ix_image_properties_image_id_name`. This is deprecated and will be disallowed in a future release.
cursor.execute(statement, parameters)
This warning can be ignored. Use mysql to confirm the database login works and the tables were created:
# mysql -h 192.168.0.231 -uglance -popenstack -e "use glance;show tables;"
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
Verify:
[root@controller ~]# ss -ntl | grep -E "9191|9292"
LISTEN 0 128 *:9292 *:*
LISTEN 0 128 *:9191 *:*
Upload a system image to verify that glance is deployed correctly:
[root@controller ~]# echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
[root@controller ~]# glance image-create --name "CentOS-7-x86_64" --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare \
--visibility public --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 212b6a881800cad892347073f0de2117 |
| container_format | bare |
| created_at | 2017-05-22T10:13:24Z |
| disk_format | qcow2 |
| id | e7e2316a-f585-488e-9fd9-85ce75b098d4 |
| min_disk | 0 |
| min_ram | 0 |
| name | CentOS-7-x86_64 |
| owner | be420231d13848809da36178cbac4d22 |
| protected | False |
| size | 741539840 |
| status | active |
| tags | [] |
| updated_at | 2017-05-22T10:13:31Z |
| virtual_size | None |
| visibility | public |
+------------------+--------------------------------------+
[root@controller ~]# glance image-list
+--------------------------------------+-----------------+
| ID | Name |
+--------------------------------------+-----------------+
| 2ac90c0c-b923-43ff-8f99-294195a64ced | CentOS-7-x86_64 |
+--------------------------------------+-----------------+
Check the file on disk:
[root@controller ~]# ll /var/lib/glance/images/
total 12980
-rw-r-----. 1 glance glance 1569390592 Aug 26 12:50 2ac90c0c-b923-43ff-8f99-294195a64ced
This section covers deploying nova on the controller node.
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password=openstack nova
[root@controller ~]# openstack role add --project service --user nova admin
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://192.168.0.231:8774/v2/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://192.168.0.231:8774/v2/%\(tenant_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://192.168.0.231:8774/v2/%\(tenant_id\)s
# yum install openstack-nova-api openstack-nova-cert \
openstack-nova-conductor openstack-nova-console \
openstack-nova-novncproxy openstack-nova-scheduler \
python-novaclient
cat /etc/nova/nova.conf|grep -v "^#" | grep -v "^$"
[DEFAULT]
my_ip=192.168.0.231
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
allow_resize_to_same_host=True
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
firewall_driver=nova.virt.firewall.NoopFirewallDriver
verbose=true
rpc_backend=rabbit
[database]
connection=mysql://nova:openstack@192.168.0.231/nova
[glance]
host=192.168.0.231
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=nova
password=openstack
[libvirt]
virt_type=kvm
[neutron]
url=http://192.168.0.231:9696
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
region_name=RegionOne
project_name=service
username=neutron
password=openstack
service_metadata_proxy=true
metadata_proxy_shared_secret=METADATA_SECRET
lock_path=/var/lib/nova/tmp
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_rabbit]
rabbit_host=192.168.0.231
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
[vnc]
vncserver_listen=$my_ip
vncserver_proxyclient_address=$my_ip
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
No handlers could be found for logger "oslo_config.cfg"
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:450: Warning: Duplicate index `block_device_mapping_instance_uuid_virtual_name_device_name_idx`. This is deprecated and will be disallowed in a future release.
cursor.execute(statement, parameters)
/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py:450: Warning: Duplicate index `uniq_instances0uuid`. This is deprecated and will be disallowed in a future release.
cursor.execute(statement, parameters)
# mysql -h 192.168.0.231 -unova -popenstack -e "use nova;show tables;"
# systemctl enable openstack-nova-api.service \
openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-cert.service openstack-nova-consoleauth.service \
openstack-nova-scheduler.service openstack-nova-conductor.service \
openstack-nova-novncproxy.service
[root@controller ~]# openstack host list
+------------+-------------+----------+
| Host Name | Service | Zone |
+------------+-------------+----------+
| controller | consoleauth | internal | // consoleauth handles console authentication
| controller | conductor | internal | // conductor mediates database access for nova
| controller | cert | internal | // cert handles certificate management
| controller | scheduler | internal | // scheduler picks hosts for new instances
+------------+-------------+----------+
This section covers deploying nova on the compute node (compute1).
[root@compute1 ~]# yum install openstack-nova-compute sysfsutils
The compute node's /etc/nova/nova.conf differs from the controller's in these settings:
[DEFAULT]
my_ip=192.168.0.232
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.0.231:6080/vnc_auto.html
keymap=en-us
[glance]
host = 192.168.0.231
[libvirt]
virt_type=kvm
Check that the full configuration file is correct:
[root@compute1 ~]# cat /etc/nova/nova.conf |grep -v "^#" | grep -v "^$"
[DEFAULT]
my_ip=192.168.0.232
enabled_apis=osapi_compute,metadata
auth_strategy=keystone
allow_resize_to_same_host=True
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
security_group_api=neutron
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
firewall_driver=nova.virt.firewall.NoopFirewallDriver
verbose=true
rpc_backend=rabbit
[database]
connection=mysql://nova:openstack@192.168.0.231/nova
[glance]
host=192.168.0.231
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=nova
password=openstack
[libvirt]
virt_type=kvm
inject_password =true
inject_key = true
[neutron]
url=http://192.168.0.231:9696
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
region_name=RegionOne
project_name=service
username=neutron
password=openstack
service_metadata_proxy=true
metadata_proxy_shared_secret=METADATA_SECRET
lock_path=/var/lib/nova/tmp
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_rabbit]
rabbit_host=192.168.0.231
rabbit_port=5672
rabbit_userid=openstack
rabbit_password=openstack
[vnc]
novncproxy_base_url=http://192.168.0.231:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
enabled=true
Check whether the server supports hardware virtualization:
[root@compute1 ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
4
If it prints 0, the CPU does not support hardware virtualization; in that case set virt_type=qemu in the [libvirt] section.
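The check counts /proc/cpuinfo flag lines containing vmx (Intel VT-x) or svm (AMD-V). The same pattern can be exercised against sample text to see both outcomes (the flag strings below are shortened, hypothetical examples; grep -E is the modern spelling of egrep):

```shell
with_vt='flags : fpu vme ssse3 vmx lahf_lm'   # CPU exposing Intel VT-x
no_vt='flags : fpu vme ssse3 lahf_lm'         # CPU without VT-x/AMD-V

echo "$with_vt" | grep -Ec '(vmx|svm)'            # prints: 1 -> KVM is usable
echo "$no_vt"   | grep -Ec '(vmx|svm)' || true    # prints: 0 -> fall back to qemu
```

The `|| true` is needed only because grep exits non-zero when nothing matches.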
[root@compute1 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@compute1 ~]# scp controller:~/*openrc.sh .
root@controller's password:
admin-openrc.sh 100% 289 0.3KB/s 00:00
demo-openrc.sh 100% 285 0.3KB/s 00:00
[root@compute1 ~]# source admin-openrc.sh
[root@compute1 ~]# nova image-list
+--------------------------------------+-----------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+-----------------+--------+--------+
| 2ac90c0c-b923-43ff-8f99-294195a64ced | CentOS-7-x86_64 | ACTIVE | |
+--------------------------------------+-----------------+--------+--------+
[root@compute1 ~]# openstack host list
+------------+-------------+----------+
| Host Name | Service | Zone |
+------------+-------------+----------+
| controller | consoleauth | internal |
| controller | conductor | internal |
| controller | cert | internal |
| controller | scheduler | internal |
| compute1 | compute | nova |
+------------+-------------+----------+
[root@compute1 ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | controller | internal | enabled | up | 2017-05-10T09:17:29.000000 | - |
| 2 | nova-conductor | controller | internal | enabled | up | 2017-05-10T09:17:31.000000 | - |
| 4 | nova-cert | controller | internal | enabled | up | 2017-05-10T09:17:29.000000 | - |
| 5 | nova-scheduler | controller | internal | enabled | up | 2017-05-10T09:17:29.000000 | - |
| 6 | nova-compute | compute1 | nova | enabled | up | 2017-05-10T09:17:33.000000 | - |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
[root@compute1 ~]# nova endpoints
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack user create --domain default --password=openstack neutron
[root@controller ~]# openstack role add --project service --user neutron admin
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
[root@controller ~]# openstack endpoint create --region RegionOne network public http://192.168.0.231:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://192.168.0.231:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://192.168.0.231:9696
[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient ebtables ipset
The server component configuration covers the database, authentication, message queue, topology change notifications, and the plug-in:
vi /etc/neutron/neutron.conf
[DEFAULT]
state_path = /var/lib/neutron
core_plugin = ml2
service_plugins = router
rpc_backend=rabbit
auth_strategy=keystone
notify_nova_on_port_status_changes=True
notify_nova_on_port_data_changes=True
nova_url=http://192.168.0.231:8774/v2
verbose=True
[database]
connection = mysql://neutron:openstack@192.168.0.231/neutron
[oslo_messaging_rabbit]
rabbit_host = 192.168.0.231
rabbit_port = 5672
rabbit_userid = openstack
rabbit_password = openstack
[oslo_concurrency]
lock_path = $state_path/lock
[keystone_authtoken]
auth_uri=http://192.168.0.231:5000
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
project_name=service
username=neutron
password=openstack
# delete the stock %SERVICE_TENANT_NAME% / %SERVICE_USER% / %SERVICE_PASSWORD% placeholder lines
[nova]
auth_url=http://192.168.0.231:35357
auth_plugin=password
project_domain_id=default
user_domain_id=default
region_name=RegionOne
project_name=service
username=nova
password=openstack
vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# 注意:啟用ML2后,如果刪除了type_drivers的值將導(dǎo)致數(shù)據(jù)庫異常
type_drivers = flat,vlan,gre,vxlan,geneve
tenant_network_types = vlan,gre,vxlan,geneve
mechanism_drivers = openvswitch,linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = physnet1
[securitygroup]
enable_ipset = True
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = physnet1:eth0
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
auth_uri = http://192.168.0.231:5000
auth_url = http://192.168.0.231:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = openstack
nova_metadata_ip = 192.168.0.231
# must match metadata_proxy_shared_secret in nova.conf's [neutron] section
metadata_proxy_shared_secret = METADATA_SECRET
verbose = True
# delete the stock %SERVICE_TENANT_NAME% / %SERVICE_USER% / %SERVICE_PASSWORD% placeholder lines
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Sync the database:
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service:
[root@controller ~]# systemctl restart openstack-nova-api.service
Start the neutron services and enable them at boot:
[root@controller ~]# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# neutron agent-list
It can take 60 seconds or more for the agents to show up.
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 5d05a4fc-3a5e-49ef-b9da-28c7f4969532 | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
| 6e1979c0-c576-42d1-a7d7-5d28cfa74793 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
| f4af7059-0f36-430a-beee-f168ff55fd90 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
On the compute node, install the packages:
# yum install openstack-neutron openstack-neutron-linuxbridge ebtables ipset
The common network components (authentication, message queue, plug-in) share the controller's configuration, so copy the files over:
[root@controller ~]# scp /etc/neutron/neutron.conf 192.168.0.232:/etc/neutron/
[root@controller ~]# scp /etc/neutron/plugins/ml2/linuxbridge_agent.ini 192.168.0.232:/etc/neutron/plugins/ml2/
[root@controller ~]# scp /etc/neutron/plugins/ml2/ml2_conf.ini 192.168.0.232:/etc/neutron/plugins/ml2/
After the copy completes, create the plug-in symlink:
[root@compute1 ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
[root@compute1 ~]# vi /etc/neutron/neutron.conf
[database]
# comment out every option in this section; the compute node does not connect to the database directly
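Commenting out the connection line can be scripted with sed. A sketch against a scratch copy (on the real node the file is /etc/neutron/neutron.conf):

```shell
# Build a scratch copy containing the [database] section
cfg=$(mktemp)
printf '[database]\nconnection = mysql://neutron:openstack@192.168.0.231/neutron\n' > "$cfg"

# Prefix the connection line with '#' so the compute node never dials the DB
sed -i 's|^connection|#connection|' "$cfg"

grep 'connection' "$cfg"   # prints the line, now starting with #connection
rm -f "$cfg"
```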
Restart the compute service:
[root@compute1 ~]# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent and enable it at boot:
[root@compute1 ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute1 ~]# systemctl start neutron-linuxbridge-agent.service
Run the following commands on the controller node:
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| dns-integration | DNS Integration |
| ext-gw-mode | Neutron L3 Configurable external gateway mode |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| l3_agent_scheduler | L3 Agent Scheduler |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| net-mtu | Network MTU |
| quotas | Quota management support |
| l3-ha | HA Router extension |
| provider | Provider Network |
| multi-provider | Multi Provider Network |
| extraroute | Neutron Extra Route |
| router | Neutron L3 Router |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| security-group | security-group |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| rbac-policies | RBAC Policies |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| dvr | Distributed Virtual Router |
+-----------------------+-----------------------------------------------+
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 5d05a4fc-3a5e-49ef-b9da-28c7f4969532 | DHCP agent | controller | :-) | True | neutron-dhcp-agent |
| 6e1979c0-c576-42d1-a7d7-5d28cfa74793 | Metadata agent | controller | :-) | True | neutron-metadata-agent |
| f0aa7ff3-01c9-450f-bcc4-63ffee250bd7 | Linux bridge agent | compute1 | :-) | True | neutron-linuxbridge-agent |
| f4af7059-0f36-430a-beee-f168ff55fd90 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
The agent list above should show four agents: three on the controller node and one on compute1.
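If you want to verify the count from a script rather than by eye, you can grep the table for the alive marker. This sketch runs against a captured copy of the output above (IDs trimmed, saved to a placeholder path):

```shell
#!/bin/sh
# Save a trimmed copy of the neutron agent-list output shown above
cat > /tmp/agent-list.txt <<'EOF'
| 5d05a4fc | DHCP agent         | controller | :-) | True | neutron-dhcp-agent        |
| 6e1979c0 | Metadata agent     | controller | :-) | True | neutron-metadata-agent    |
| f0aa7ff3 | Linux bridge agent | compute1   | :-) | True | neutron-linuxbridge-agent |
| f4af7059 | Linux bridge agent | controller | :-) | True | neutron-linuxbridge-agent |
EOF
# Count rows whose alive column shows ":-)"
alive=$(grep -c ':-)' /tmp/agent-list.txt)
echo "alive agents: $alive"
```

In this deployment the count should be 4; a lower number usually means an agent failed to start or cannot reach RabbitMQ.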
[root@controller ~]# source admin-openrc.sh
[root@controller ~]# neutron net-create public --shared --provider:physical_network physnet1 --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 6759f3eb-a4c8-4503-b92b-da6daacf0ab4 |
| mtu | 0 |
| name | public |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | physnet1 |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 10952875490e43938d80d921337cb053 |
+---------------------------+--------------------------------------+
The --shared option allows all projects to use this network.
[root@controller ~]# neutron subnet-create public 192.168.0.0/24 --name public-subunet --allocation-pool start=192.168.0.200,end=192.168.0.210 \
--dns-nameserver 202.100.192.68 --gateway 192.168.0.253
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field | Value |
+-------------------+----------------------------------------------------+
| allocation_pools | {"start": "192.168.0.200", "end": "192.168.0.210"} |
| cidr | 192.168.0.0/24 |
| dns_nameservers | 202.100.192.68 |
| enable_dhcp | True |
| gateway_ip | 192.168.0.253 |
| host_routes | |
| id | da75b2db-56f4-45d2-b3f3-2ccf172f8798 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public-subunet |
| network_id | 2e098da8-70f9-40bc-a393-868ed9a446cf |
| subnetpool_id | |
| tenant_id | be420231d13848809da36178cbac4d22 |
+-------------------+----------------------------------------------------+
[root@controller ~]# neutron net-list
+--------------------------------------+--------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------+-----------------------------------------------------+
| 2e098da8-70f9-40bc-a393-868ed9a446cf | public | da75b2db-56f4-45d2-b3f3-2ccf172f8798 192.168.0.0/24 |
+--------------------------------------+--------+-----------------------------------------------------+
[root@controller ~]# neutron subnet-list
+--------------------------------------+----------------+----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+----------------+----------------+----------------------------------------------------+
| da75b2db-56f4-45d2-b3f3-2ccf172f8798 | public-subunet | 192.168.0.0/24 | {"start": "192.168.0.200", "end": "192.168.0.210"} |
+--------------------------------------+----------------+----------------+----------------------------------------------------+
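Before (or after) creating a subnet like this, it is worth sanity-checking that the allocation pool sits inside 192.168.0.0/24 and that the gateway is excluded from the pool, since Neutron will not hand out the gateway address. A minimal pure-shell sketch; the ip_to_int helper is written here just for this illustration:

```shell
#!/bin/sh
# ip_to_int: convert a dotted-quad IPv4 address to an integer (illustrative helper)
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( a * 16777216 + b * 65536 + c * 256 + d ))
}
net=$(ip_to_int 192.168.0.0)    # network address of 192.168.0.0/24
bcast=$(( net + 255 ))          # a /24 spans 256 addresses
start=$(ip_to_int 192.168.0.200)
end=$(ip_to_int 192.168.0.210)
gw=$(ip_to_int 192.168.0.253)
[ "$start" -gt "$net" ] && [ "$end" -lt "$bcast" ] && echo "pool inside subnet"
[ "$gw" -gt "$end" ] && echo "gateway outside pool"
```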
[root@controller ~]# source admin-openrc.sh
If you already have a key pair, you can skip regenerating one with ssh-keygen.
[root@controller ~]# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):
[root@controller ~]# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
[root@controller ~]# nova keypair-list
+-------+-------------------------------------------------+
| Name | Fingerprint |
+-------+-------------------------------------------------+
| mykey | bc:ca:8e:bb:61:01:7f:8a:ab:5e:d8:b2:2c:35:b7:83 |
+-------+-------------------------------------------------+
By default, the default security group is applied to every instance, and its firewall rules deny all remote access. Typically you will want to allow at least ICMP and SSH traffic.
[root@controller ~]# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
[root@controller ~]# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
[root@controller ~]# yum install openstack-dashboard
[root@controller ~]# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"volume": 2,
}
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_router': False,
'enable_quotas': False,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
'enable_fip_topology_check': False,
}
# Time zone setting
TIME_ZONE = "Asia/Shanghai"
# Allow setting a password when creating an instance
OPENSTACK_HYPERVISOR_FEATURES = {
'can_set_mount_point': True,
'can_set_password': True,
'requires_keypair': True,
}
[root@controller ~]# systemctl enable httpd.service memcached.service
[root@controller ~]# systemctl restart httpd.service memcached.service
Open http://192.168.0.231/dashboard in a browser.
Domain: default
User: admin or demo, with the password you created earlier.
Note: the official CentOS 7 cloud image ships without a known password, so you need to specify one when creating the instance. By default you can log in over SSH, but root cannot log in directly via SSH unless you lift that restriction when creating the instance.
If you want SSH password login to work, modify the sshd settings with a script such as the one below; testing with the cirros image is not recommended.
#!/bin/sh
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
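The substitution above only matches an uncommented `PasswordAuthentication no` line; stock cloud images often ship the directive commented out, which that sed would silently skip. A more tolerant sketch, demonstrated here against a sample file (the /tmp path is a placeholder; on a real instance you would target /etc/ssh/sshd_config and then restart sshd):

```shell
#!/bin/sh
# Sample sshd_config with both a commented and an uncommented directive
cfg=/tmp/sshd_config.sample
cat > "$cfg" <<'EOF'
#PasswordAuthentication yes
PasswordAuthentication no
EOF
# Normalize any PasswordAuthentication line, commented or not, to "yes"
# (\? is a GNU sed extension, fine on CentOS)
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' "$cfg"
grep 'PasswordAuthentication' "$cfg"
```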
Test the SSH login.
Working through an OpenStack installation like this helps you understand how the components fit together and how each one is implemented, and it gives you a foundation for exploring further topics.