
OpenStack Octavia: Introduction and Manual Installation

Published 2020-07-12 07:51:30 | Author: superbigsea | Views: 36311

OpenStack Octavia is one of the backends supported by OpenStack LBaaS; it provides load balancing for VM traffic. In essence it works much like Trove: it calls the Nova and Neutron APIs to boot a VM with haproxy and keepalived preinstalled, and attaches it to the target network. Octavia has four controller components (housekeeping, worker, api, health-manager) plus the octavia agent that runs inside the amphora VM.

The api needs no further explanation. The worker talks to nova, neutron and the other services; it handles VM scheduling and relays VM-related instructions down to the octavia agent. Reading octavia/controller/housekeeping/house_keeping.py shows that housekeeping has three jobs, SpareAmphora, DatabaseCleanup and CertRotation: maintaining the pool of spare amphora VMs, purging expired database records, and rotating certificates. The health-manager checks VM health and talks to the octavia agent inside each VM to update the status of the various objects. The octavia agent lives inside the VM: downward it takes commands and drives the haproxy underneath; upward it reports status back to the health-manager. See also the blog post http://lingxiankong.github.io/blog/2016/03/30/octavia/?utm_source=tuicool&utm_medium=referral, which covers all this in a bit more detail than I do.
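A rough sketch of the control channels, using the ports that appear later in this article (directions per the component descriptions above):

# worker / housekeeping ---- HTTPS :9443 ----> octavia agent (inside each amphora VM)
# octavia agent ------------ UDP   :5555 ----> health-manager (on the controller host)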




The official docs currently provide no installation guide, and some Googling suggests nobody has written up concrete steps either; the usual recommendation is to install via devstack. I worked the steps below out of the devstack install scripts and verified that they succeed. Corrections are welcome.

I. Installation

1. Create the database

mysql> CREATE DATABASE octavia;
mysql> GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'localhost' IDENTIFIED BY 'OCTAVIA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON octavia.* TO 'octavia'@'%' IDENTIFIED BY 'OCTAVIA_DBPASS';
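A quick check that the grants took effect (the hostname controller is assumed, matching the connection string in 6.1):

mysql -h controller -u octavia -pOCTAVIA_DBPASS -e 'SHOW DATABASES;'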

2. Create the user, role, and endpoints

openstack user create --domain default --password-prompt octavia
openstack role add --project service --user octavia admin
openstack endpoint create octavia public http://10.1.65.58:9876/ --region RegionOne 
openstack endpoint create octavia admin http://10.1.65.58:9876/ --region RegionOne 
openstack endpoint create octavia internal http://10.1.65.58:9876/ --region RegionOne
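Note that these endpoint create calls assume the octavia service object already exists in keystone. The devstack script registers it first, roughly like this (the service type octavia is an assumption), and the result can be verified afterwards:

openstack service create --name octavia --description "Octavia Load Balancing" octavia
openstack endpoint list --service octavia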

3. Install the packages

yum install openstack-octavia-api openstack-octavia-worker openstack-octavia-health-manager openstack-octavia-housekeeping python-octavia

4. Import the image (exported from a system built by devstack)

openstack image create amphora-x64-haproxy --public --container-format=bare --disk-format qcow2
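As written, the command only registers an empty image record; the exported qcow2 file still has to be uploaded, and step 6.6 expects the image to carry the amphora tag. A fuller sketch (the local filename is an assumption):

openstack image create amphora-x64-haproxy --public --container-format=bare --disk-format qcow2 --file amphora-x64-haproxy.qcow2 --tag amphora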

5. Create the management network, and create an OVS port on the host so that octavia-worker, octavia-housekeeping and octavia-health-manager can talk to the amphora instances

 5.1 Create the management network and subnet

openstack network create lb-mgmt-net
openstack subnet create --subnet-range 192.168.0.0/24 --allocation-pool start=192.168.0.2,end=192.168.0.200 --network lb-mgmt-net lb-mgmt-subnet
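Step 6.6 will need the network ID, so it is handy to capture it now, for example:

openstack network show lb-mgmt-net -f value -c id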


 5.2 Create the security groups and rules for the management ports

Port 5555 is the management-network (health reporting) port. Since the octavia components are still immature, port 22 is opened too; the image itself also has port 22 open. A gripe about trove here: an equally immature module, yet it does not open port 22 by default and you have to change the source to do so.

openstack security group create lb-mgmt-sec-grp
openstack security group create lb-health-mgr-sec-grp
openstack security group rule create --protocol udp --dst-port 5555 lb-health-mgr-sec-grp
openstack security group rule create --protocol tcp --dst-port 22 lb-mgmt-sec-grp
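The worker also has to reach the octavia agent on TCP 9443 (see the request URLs in 10.1). The devstack script opens that port in the amphora security group too, presumably like this:

openstack security group rule create --protocol tcp --dst-port 9443 lb-mgmt-sec-grp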


 5.3 Create a port on the management network to connect to the octavia health_manager on the host

neutron port-create --name octavia-health-manager-standalone-listen-port --security-group lb-health-mgr-sec-grp --device-owner Octavia:health-mgr --binding:host_id=controller lb-mgmt-net

 5.4 Create the host's OVS port and attach it to the network created in 5.1

ovs-vsctl  --may-exist add-port br-int o-hm0 -- set Interface o-hm0 type=internal -- set Interface o-hm0 external-ids:iface-status=active -- set Interface o-hm0 external-ids:attached-mac=fa:16:3e:6f:9f:9a -- set Interface o-hm0 external-ids:iface-id=457e4953-b2d6-49ee-908b-2991506602b2

Here iface-id and attached-mac are attributes of the port created in 5.3.

ip link set dev o-hm0 address fa:16:3e:6f:9f:9a
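The two values can be read off the port created in 5.3, for instance:

neutron port-show octavia-health-manager-standalone-listen-port -f value -c id
neutron port-show octavia-health-manager-standalone-listen-port -f value -c mac_address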

 5.5 Run DHCP on the host (why not the traditional dnsmasq approach?)

dhclient -v o-hm0 -cf /etc/octavia/dhcp/dhclient.conf
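For reference, the dhclient.conf shipped with the devstack octavia plugin is roughly the following (quoted from memory, so treat it as an assumption; the point is to request only the basics and skip DNS and routes):

request subnet-mask,broadcast-address,interface-mtu;
do-forward-updates false;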

6. Edit the configuration; this works much like any other OpenStack component (the file is /etc/octavia/octavia.conf)

  6.1 Database settings

[database]
connection = mysql+pymysql://octavia:OCTAVIA_DBPASS@controller/octavia

  6.2 Message queue settings

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

6.3 Keystone authentication settings

[keystone_authtoken]
auth_version = 2
admin_password = OCTAVIA_PASS
admin_tenant_name = octavia
admin_user = octavia
auth_uri = http://controller:5000/v2.0

6.4 The health_manager listen address; this IP is the address obtained by the port created in 5.3

[health_manager]
bind_port = 5555
bind_ip = 192.168.0.7
controller_ip_port_list = 192.168.0.7:5555

6.5 Certificates and keys for talking to the amphora VMs

[haproxy_amphora]
server_ca = /etc/octavia/certs/ca_01.pem
client_cert = /etc/octavia/certs/client.pem
key_path = /etc/octavia/.ssh/octavia_ssh_key
base_path = /var/lib/octavia
base_cert_dir = /var/lib/octavia/certs
connection_max_retries = 1500
connection_retry_interval = 1
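The certificates above are generated in devstack by a helper script in the octavia source tree, and the SSH key is an ordinary nova keypair. A sketch, with the source-tree path being an assumption:

source /opt/octavia/bin/create_certificates.sh /etc/octavia/certs /opt/octavia/etc/certificates/openssl.cnf
ssh-keygen -t rsa -N '' -f /etc/octavia/.ssh/octavia_ssh_key
openstack keypair create --public-key /etc/octavia/.ssh/octavia_ssh_key.pub 155   # "155" is the name used by amp_ssh_key_name in 6.6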

6.6 Settings used to boot the amphora instances

[controller_worker]
amp_boot_network_list = 826be4f4-a23d-4c5c-bff5-7739936fac76 # network id created in step 5.1
amp_image_tag = amphora # the tag defined on the image in step 4
amp_secgroup_list = d949202b-ba09-4003-962f-746ae75809f7 # security group id created in step 5.2
amp_flavor_id = dd49b3d5-4693-4407-a76e-2ca95e00a9ec
amp_image_id = b23dda5f-210f-40e6-9c2c-c40e9daa661a # image id created in step 4
amp_ssh_key_name = 155 # an existing nova keypair name
amp_active_wait_sec = 1
amp_active_retries = 100
network_driver = allowed_address_pairs_driver
compute_driver = compute_nova_driver
amphora_driver = amphora_haproxy_rest_driver
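amp_flavor_id must point at an existing nova flavor; devstack creates a small dedicated one. A sketch (name and sizing are assumptions):

openstack flavor create --id auto --ram 1024 --disk 2 --vcpus 1 m1.amphora
openstack flavor show m1.amphora -f value -c id   # paste this into amp_flavor_id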

7. Modify the neutron configuration

  7.1 Edit /etc/neutron/neutron.conf and add the LBaaS service plugin

 

service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

7.2 In the [service_providers] section, set octavia as the LBaaS service provider

 

 service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default

8. Start the services

If LBaaS v2 with an agent was previously enabled, stop it, and clean out the lbaas_loadbalancers and lbaas_loadbalancer_statistics tables in the neutron database, otherwise errors will follow.
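A minimal sketch of that cleanup, assuming nothing else still references the rows (statistics first, since it points at the loadbalancers):

mysql> use neutron;
mysql> delete from lbaas_loadbalancer_statistics;
mysql> delete from lbaas_loadbalancers;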

Sync the database:

 

octavia-db-manage   upgrade head

Restart neutron:

systemctl restart neutron-server

Start octavia:

systemctl restart  octavia-housekeeping  octavia-worker octavia-api octavia-health-manager
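A quick sanity check that the API answers on the endpoint registered in step 2 (it should return its version document):

curl http://10.1.65.58:9876/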

II. Verification

9.1 Create a loadbalancer

[root@controller ~]# neutron lbaas-loadbalancer-create --name test-lb-1 lbtest
Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 5af472bb-2068-4b96-bcb3-bef7ff7abc56 |
| listeners           |                                      |
| name                | test-lb-1                            |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9a4b2de78c2d45cfbf6880dd34877f7b     |
| vip_address         | 192.168.123.10                       |
| vip_port_id         | d163b73c-258a-4e03-90ad-5db31cfe23ac |
| vip_subnet_id       | 74aea53a-014a-4f9c-86f9-805a2a772a27 |
+---------------------+--------------------------------------+
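The loadbalancer starts out in PENDING_CREATE; wait until provisioning_status turns ACTIVE before creating the listener in 9.3, for example by polling:

neutron lbaas-loadbalancer-show test-lb-1 -f value -c provisioning_status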

 9.2 Inspect the VM. Note that the loadbalancer's address is the VIP, which is not the same as the VM's own address.

[root@controller ~]# openstack server list |grep 82b59e85-29f2-46ce-ae0b-045b7fceb5ca
| 82b59e85-29f2-46ce-ae0b-045b7fceb5ca | amphora-734da57c-e444-4b8e-a706-455230ae0803 | ACTIVE  | lbtest=192.168.123.9; lb-mgmt-net=192.168.0.6        | amphora-x64-haproxy 201610131607    |

 9.3 Create a listener

neutron lbaas-listener-create --name test-lb-tcp --loadbalancer test-lb-1 --protocol TCP  --protocol-port 22

 9.4 Set a security group on the VIP port

 neutron port-update  --security-group default d163b73c-258a-4e03-90ad-5db31cfe23ac

 9.5 Create a pool, boot three VMs, and add them to the pool (membership can be double-checked as shown after the loop below)

openstack server create  --flavor m1.small --nic net-id=22525640-297e-40eb-bd77-0a9afd861f8c --image "cirros for kvm raw"  --min 3 --max 3 test

[root@controller ~]# openstack server list |grep test-
| d8dc22d4-e657-4c54-96f9-3a53ca67533d | test-3                                       | ACTIVE  | lbtest=192.168.123.8                                 | cirros for kvm raw                  |
| c7926665-84c5-48a5-9de5-5e15e71baa5d | test-2                                       | ACTIVE  | lbtest=192.168.123.13                                | cirros for kvm raw                  |
| fcf60c23-b799-4d08-a5a7-2b0fc9f1905e | test-1                                       | ACTIVE  | lbtest=192.168.123.11                                | cirros for kvm raw                  |

neutron lbaas-pool-create   --name test-lb-pool-tcp  --lb-algorithm ROUND_ROBIN --listener test-lb-tcp --protocol TCP
 
for i in {8,13,11}
do
neutron lbaas-member-create --subnet lbtest  --address 192.168.123.${i}  --protocol-port 22  test-lb-pool-tcp
done
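The membership can then be double-checked with:

neutron lbaas-member-list test-lb-pool-tcp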

 9.6 Verify

[root@controller ~]# >/root/.ssh/known_hosts;ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 72:c4:11:41:53:51:f2:1b:b5:e6:1b:69:a8:c2:5b:d4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password: 
test-3
[root@controller ~]# >/root/.ssh/known_hosts;ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 3d:88:0f:4a:b1:77:c9:6a:fd:82:4d:31:0c:ca:82:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password: 
test-1
[root@controller ~]# >/root/.ssh/known_hosts;ip netns exec qrouter-4718cc34-68cc-47a7-9201-405d1fc09213 ssh cirros@192.168.123.10 "hostname"
The authenticity of host '192.168.123.10 (192.168.123.10)' can't be established.
RSA key fingerprint is 1c:03:f0:f9:92:a7:0f:5d:9d:09:22:14:94:62:e4:c4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.123.10' (RSA) to the list of known hosts.
cirros@192.168.123.10's password: 
test-2

III. Process Analysis

10.1 What the worker does

 Create the amphora instance and attach it to the management network:

REQ: curl -g -i -X POST http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}0f810ab0fdd5b92489f73a7f0988adfc9da4e517" -d '{"server": {"name": "amphora-4f22d55b-0680-4111-aef6-da98c9ccd1d4", "imageRef": "b23dda5f-210f-40e6-9c2c-c40e9daa661a", "key_name": "155", "flavorRef": "dd49b3d5-4693-4407-a76e-2ca95e00a9ec", "max_count": 1, "min_count": 1, "personality": [{"path": "/etc/octavia/amphora-agent.conf", "contents": ""}, {"path": "/etc/octavia/certs/client_ca.pem", "contents": "="}, {"path": "/etc/octavia/certs/server.pem", "contents": ""}], "networks": [{"uuid": "826be4f4-a23d-4c5c-bff5-7739936fac76"}], "security_groups": [{"name": "d949202b-ba09-4003-962f-746ae75809f7"}], "config_drive": true}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337

Once it detects that the target VM's management-network port has become ACTIVE, it moves on to the next step:

REQ: curl -g -i -X GET http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers/d3c97360-56b2-4f75-b905-2ef83870a342/os-interface -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}3f6ccac4cb8b70b06fb5e62b9db2272702d8ec67" _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337
2016-10-17 12:06:30.041 29993 DEBUG novaclient.v2.client [-] RESP: [200] Content-Length: 286 Content-Type: application/json Openstack-Api-Version: compute 2.1 X-Openstack-Nova-Api-Version: 2.1 Vary: OpenStack-API-Version, X-OpenStack-Nova-API-Version X-Compute-Request-Id: req-ccc07b37-e942-4a5b-a87a-b0e8d3887ba3 Date: Mon, 17 Oct 2016 04:06:30 GMT Connection: keep-alive
RESP BODY: {"interfaceAttachments": [{"port_state": "ACTIVE", "fixed_ips": [{"subnet_id": "4e3409e5-4e9a-4599-9b2e-f760b2fab380", "ip_address": "192.168.0.11"}], "port_id": "bbf99a69-0fb2-42a6-b7de-b7969bda9d73", "net_id": "826be4f4-a23d-4c5c-bff5-7739936fac76", "mac_addr": "fa:16:3e:01:04:2c"}]}
2016-10-17 12:06:30.078 29993 DEBUG octavia.controller.worker.tasks.amphora_driver_tasks [-] Finalized the amphora. execute /usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/amphora_driver_tasks.py:164

Allocate the VIP port that will serve outside traffic:

2016-10-17 12:06:30.226 29993 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.network_tasks.AllocateVIP' (af8ea5a0-42c8-4d30-9ffa-016668811fc8) transitioned into state 'RUNNING' from state 'PENDING' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:189
2016-10-17 12:06:30.227 29993 DEBUG octavia.controller.worker.tasks.network_tasks [-] Allocate_vip port_id c7d7b552-83ac-4e0c-84bf-0b9cae661eab, subnet_id 74aea53a-014a-4f9c-86f9-805a2a772a27,ip_address 192.168.123.31 execute /usr/lib/python2.7/site-packages/octavia/controller/worker/tasks/network_tasks.py:328

Create an actual port under that VIP and attach the port to the instance:

2016-10-17 12:06:32.662 29993 DEBUG octavia.network.drivers.neutron.allowed_address_pairs [-] Created vip port: 1627d28d-bf54-46eb-9d78-410c5d647bf4 for amphora: 3f6e22a1-e0b0-4098-ba20-daf47cfdae19 _plug_amphora_vip /usr/lib/python2.7/site-packages/octavia/network/drivers/neutron/allowed_address_pairs.py:97
2016-10-17 12:06:32.663 29993 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X POST http://controller:8774/v2.1/9a4b2de78c2d45cfbf6880dd34877f7b/servers/d3c97360-56b2-4f75-b905-2ef83870a342/os-interface -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.1" -H "X-Auth-Token: {SHA1}3f6ccac4cb8b70b06fb5e62b9db2272702d8ec67" -d '{"interfaceAttachment": {"port_id": "1627d28d-bf54-46eb-9d78-410c5d647bf4"}}' _http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:337

Create the listener:

2016-10-17 19:01:09.384 29993 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] request url https://192.168.0.9:9443/0.5/listeners/c3a1867c-b2e5-49a7-819b-7a7d39063dda/reload request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:248
2016-10-17 19:01:09.412 29993 DEBUG octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connected to amphora. Response: <Response [202]> request /usr/lib/python2.7/site-packages/octavia/amphorae/drivers/haproxy/rest_api_driver.py:270
2016-10-17 19:01:09.414 29993 DEBUG octavia.controller.worker.controller_worker [-] Task 'octavia.controller.worker.tasks.amphora_driver_tasks.ListenersUpdate' (0f588287-a383-4c70-9847-20187dd19f9f) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver /usr/lib/python2.7/site-packages/taskflow/listeners/logging.py:178

10.2 The octavia agent

It opens a listening port on 9443 for the worker and health-manager to access:

2016-10-17 12:10:41.344 1043 INFO werkzeug [-]  * Running on https://0.0.0.0:9443/ (Press CTRL+C to quit)

The octavia agent appears to have a bug: it does not print debug messages.
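Still, its REST endpoints sit under the /0.5/ prefix visible in the 10.1 logs, so it can be probed from the controller using the client certificate from 6.5; a hedged example (the exact info path is an assumption):

curl -k --cert /etc/octavia/certs/client.pem https://192.168.0.9:9443/0.5/info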

11. High-availability test

Changing loadbalancer_topology = SINGLE to ACTIVE_STANDBY in /etc/octavia/octavia.conf enables high-availability mode; dual-ACTIVE is not supported yet.
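The option lives in the [controller_worker] section; a sketch of the change (restart the octavia services afterwards):

[controller_worker]
loadbalancer_topology = ACTIVE_STANDBY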

After the loadbalancer is created, two VMs can be seen:

[root@controller octavia]# neutron lbaas-loadbalancer-create --name test-lb1238 lbtest2 

Created a new loadbalancer:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| description         |                                      |
| id                  | 4e43f3c7-c0f6-44c7-8dab-e2fc8ed16e0f |
| listeners           |                                      |
| name                | test-lb1238                          |
| operating_status    | OFFLINE                              |
| pools               |                                      |
| provider            | octavia                              |
| provisioning_status | PENDING_CREATE                       |
| tenant_id           | 9a4b2de78c2d45cfbf6880dd34877f7b     |
| vip_address         | 192.168.235.14                       |
| vip_port_id         | 42f72c9f-4623-4bf5-ae82-29f8cf588d2d |
| vip_subnet_id       | 52e93565-eab2-4316-a04c-3e554992c993 |
+---------------------+--------------------------------------+
[root@controller ~]# openstack server list |grep 192.168.235
| 736b8b76-2918-49a7-8477-995a168709bd | amphora-5379f109-01fa-429c-860b-c739e0c5ad5e | ACTIVE  | lb-mgmt-net=192.168.0.8; lbtest2=192.168.235.10  | amphora-x64-haproxy 201610131607    |
| bd867667-b8d2-49c5-bb1e-54f0753d33a3 | amphora-23540889-b07e-4c0e-ab9b-df0075fbb9c3 | ACTIVE  | lb-mgmt-net=192.168.0.25; lbtest2=192.168.235.19 | amphora-x64-haproxy 201610131607

Three IPs are visible: the VIP is 192.168.235.14, and the two amphorae's own egress IPs are 192.168.235.10 and 192.168.235.19.

Log in to the VMs to verify. Note that the login IP is the management-network IP:

[root@controller ~]# ssh 192.168.0.8 "ps -ef |grep keepalived; cat  /var/lib/octavia/vrrp/octavia-keepalived.conf"
root      1868     1  0 04:40 ?        00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      1869  1868  0 04:40 ?        00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      1870  1868  0 04:40 ?        00:00:00 /usr/sbin/keepalived -D -d -f /var/lib/octavia/vrrp/octavia-keepalived.conf
root      5448  5377  0 05:00 ?        00:00:00 bash -c ps -ef |grep keepalived; cat  /var/lib/octavia/vrrp/octavia-keepalived.conf
root      5450  5448  0 05:00 ?        00:00:00 grep keepalived
vrrp_script check_script {
  script /var/lib/octavia/vrrp/check_script.sh
  interval 5
  fall 2
  rise 2
}
vrrp_instance 4e43f3c7c0f644c78dabe2fc8ed16e0f {
 state MASTER
 interface eth2
 virtual_router_id 1
 priority 100
 nopreempt
 garp_master_refresh 5
 garp_master_refresh_repeat 2
 advert_int 1
 authentication {
  auth_type PASS
  auth_pass ee46125
 }
 unicast_src_ip 192.168.235.10
 unicast_peer {
       192.168.235.19
 }
 virtual_ipaddress {
  192.168.235.14
 }
 track_script {
    check_script
 }
}
[root@controller ~]# ssh 192.168.0.8 "ps -ef |grep haproxy; cat  /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg"
nobody    2195     1  0 04:43 ?        00:00:00 /usr/sbin/haproxy -f /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg -L jrwLnRhlvXcPd21JhvXEMStRHh0 -p /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/836053f0-ea72-46ae-9fae-8b80153ef593.pid -sf 2154
root      6745  6676  0 05:06 ?        00:00:00 bash -c ps -ef |grep haproxy; cat  /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593/haproxy.cfg
root      6747  6745  0 05:06 ?        00:00:00 grep haproxy
# Configuration for test-lb1238
global
    daemon
    user nobody
    group nogroup
    log /dev/log local0
    log /dev/log local1 notice
    stats socket /var/lib/octavia/836053f0-ea72-46ae-9fae-8b80153ef593.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

peers 836053f0ea7246ae9fae8b80153ef593_peers
    peer 3OduZJiPzm475Q7IgyshE5oq1Jk 192.168.235.19:1025
    peer jrwLnRhlvXcPd21JhvXEMStRHh0 192.168.235.10:1025


frontend 836053f0-ea72-46ae-9fae-8b80153ef593
    option tcplog
    bind 192.168.235.14:22
    mode tcp
    default_backend 457d4de5-3213-4969-8f20-1f2d3505ff1e

backend 457d4de5-3213-4969-8f20-1f2d3505ff1e
    mode tcp
    balance leastconn
    timeout check 5
    server fa28676f-a762-4a8e-91ab-7a83f071b62b 192.168.235.20:22 weight 1 check inter 5s fall 3 rise 3
    server 1ded44da-cba5-434c-8578-95153656c392 192.168.235.24:22 weight 1 check inter 5s fall 3 rise 3

The other amphora shows similar output.

Conclusion: Octavia's high availability is implemented with haproxy plus keepalived.

IV. Miscellaneous

1. services_lbaas.conf has an option:

[octavia]

request_poll_timeout = 200

This option sets how long neutron waits after creating a loadbalancer: if Octavia's status has not turned ACTIVE within this time, neutron marks the loadbalancer as ERROR. The default is 100, which was not enough for high-availability mode in my environment. The log looks like this:

2016-10-19 09:38:26.392 6256 DEBUG neutron_lbaas.drivers.octavia.driver [req-bee3619a-f9d4-4463-adcd-3cb99826b600 - - - - -] Octavia reports load balancer 2676dac6-c41d-4501-9c41-781a176c6baf has provisioning status of PENDING_CREATE thread_op /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:75
2016-10-19 09:38:29.393 6256 DEBUG neutron_lbaas.drivers.octavia.driver [req-bee3619a-f9d4-4463-adcd-3cb99826b600 - - - - -] Timeout has expired for load balancer 2676dac6-c41d-4501-9c41-781a176c6baf to complete an operation.  The last reported status was PENDING_CREATE thread_op /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:94

2. A small source-code modification example:

 Push to the alerting system when a neutron loadbalancer's status changes to ACTIVE or ERROR.

 Edit /usr/lib/python2.7/site-packages/neutron_lbaas/drivers/octavia/driver.py:

        # note: the urlopen() calls below also need "import urllib2" at the top of driver.py
        if prov_status == 'ACTIVE' or prov_status == 'DELETED':
            kwargs = {'delete': delete}
            if manager.driver.allocates_vip and lb_create:
                kwargs['lb_create'] = lb_create
                # TODO(blogan): drop fk constraint on vip_port_id to ports
                # table because the port can't be removed unless the load
                # balancer has been deleted.  Until then we won't populate the
                # vip_port_id field.
                # entity.vip_port_id = octavia_lb.get('vip').get('port_id')
                entity.vip_address = octavia_lb.get('vip').get('ip_address')
            manager.successful_completion(context, entity, **kwargs)
            if prov_status == 'ACTIVE':
                urllib2.urlopen('http://********')
                LOG.debug("report status to ******* {0}{1}".format(entity.root_loadbalancer.id, prov_status))
            return
        elif prov_status == 'ERROR':
            manager.failed_completion(context, entity)
            urllib2.urlopen('http://*******')
            LOG.debug("report status to ******* {0}{1}".format(entity.root_loadbalancer.id, prov_status))
            return

 



3. Octavia's database is not the same set of tables as neutron's, yet a lot of the data in the two must match. Keep the related rows in sync; out-of-sync data causes plenty of problems (personal experience).
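One quick way to spot drift is to compare the status columns on both sides directly (a sketch; the table and column names match the schemas of this era, but treat them as assumptions):

mysql> select id, provisioning_status from octavia.load_balancer;
mysql> select id, provisioning_status from neutron.lbaas_loadbalancers;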




