What to do when adding a new OpenStack node fails with "Failed to create resource provider"

Published: 2021-12-29 14:45:26 | Source: Yisu Cloud | Author: 小新 | Category: Cloud Computing

This article explains in detail what to do when adding a new OpenStack node fails with "Failed to create resource provider". The walkthrough below is shared as a practical reference; hopefully you will take something useful away from it.

1. Background

A compute node crashed and could no longer do its work, so it was first removed from the cluster. After the node recovered and was re-added to the cluster, nova-compute reported ResourceProviderCreationFailed: Failed to create resource provider.
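The removal commands themselves are not shown in this post; a minimal sketch of the usual steps for pulling a dead compute node out of the cluster (assuming the standard openstack and nova-manage CLIs; the service ID and cell UUID below are placeholders) looks roughly like this:

# openstack compute service list --service nova-compute --host bdc2
# openstack compute service delete <service-id>
# su -s /bin/sh -c "nova-manage cell_v2 delete_host --cell_uuid <cell-uuid> --host bdc2" nova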

2. Error messages

# vim nova-compute.log
2019-07-16 16:27:55.441 1166754 ERROR nova.scheduler.client.report [req-c50f65e8-ffd8-4a10-8d5e-0ec8d408a3c8 - - - - -] [req-9e5aad63-21d1-4297-be27-92ba9b8bfe9f] Failed to create resource provider record in placement API for UUID 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04. Got 409: {"errors": [{"status": 409, "request_id": "req-9e5aad63-21d1-4297-be27-92ba9b8bfe9f", "detail": "There was a conflict when trying to complete your request.\n\n Conflicting resource provider name: bdc2 already exists.  ", "title": "Conflict"}]}.
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager [req-c50f65e8-ffd8-4a10-8d5e-0ec8d408a3c8 - - - - -] Error updating resources for node bdc2.: ResourceProviderCreationFailed: Failed to create resource provider bdc2
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager Traceback (most recent call last):
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 7426, in update_available_resource_for_node
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     rt.update_available_resource(context, nodename)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 688, in update_available_resource
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     self._update_available_resource(context, resources)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     return f(*args, **kwargs)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 712, in _update_available_resource
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     self._init_compute_node(context, resources)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 561, in _init_compute_node
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     self._update(context, cn)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 886, in _update
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     inv_data,
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 68, in set_inventory_for_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid,
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     return getattr(self.instance, __name)(*args, **kwargs)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 1104, in set_inventory_for_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 665, in _ensure_resource_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     parent_provider_uuid=parent_provider_uuid)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 64, in wrapper
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     return f(self, *a, **k)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/scheduler/client/report.py", line 612, in _create_resource_provider
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager     raise exception.ResourceProviderCreationFailed(name=name)
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager ResourceProviderCreationFailed: Failed to create resource provider bdc2
2019-07-16 16:27:55.442 1166754 ERROR nova.compute.manager

3. Troubleshooting process

3.1 Problem 1: UUID conflict

The key part of the error is "Conflicting resource provider name: bdc2 already exists." When bdc2 was removed, its service and compute_nodes records in the nova database had definitely been cleared, metadata included, yet some metadata about it clearly still existed somewhere.

It turned out that the host had not been removed from the cell database:

# nova-manage cell_v2 list_hosts                          
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
+-----------+--------------------------------------+----------+
| Cell Name |              Cell UUID               | Hostname |
+-----------+--------------------------------------+----------+
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc1   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc2   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc3   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc4   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc5   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc6   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc7   |
|   cell1   | df0d7c04-52b3-454d-a295-4f4ad836526b |   bdc8   |
+-----------+--------------------------------------+----------+

So the host was deleted manually and re-discovered, but the error did not change:

# su -s /bin/sh -c "nova-manage cell_v2 delete_host --cell_uuid df0d7c04-52b3-454d-a295-4f4ad836526b --host bdc2 " nova
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:332: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': df0d7c04-52b3-454d-a295-4f4ad836526b
Found 0 unmapped computes in cell: df0d7c04-52b3-454d-a295-4f4ad836526b

The UUID mentioned in the error, 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04, conflicts with bdc2. Checking the metadata databases: besides the nova database, there is also the nova_api database.

MariaDB [nova_api]> select uuid,name from resource_providers where name='bdc2';
+--------------------------------------+------+
| uuid                                 | name |
+--------------------------------------+------+
| e131e7c4-f7db-4889-8c34-e750e7b129da | bdc2 |
+--------------------------------------+------+


MariaDB [nova_api]>  select uuid,host from nova.compute_nodes where host='bdc2';
+--------------------------------------+------+
| uuid                                 | host |
+--------------------------------------+------+
| 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04 | bdc2 |
+--------------------------------------+------+

Here is the crux: the UUIDs really do conflict. e131e7c4-f7db-4889-8c34-e750e7b129da should be the old bdc2's UUID, while the re-registered node uses 4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04. Manually update the uuid in the resource_providers table:

MariaDB [nova_api]> update resource_providers set uuid='4d9ed4b4-f3a2-4e5d-9d8e-2f657a844a04' where name='bdc2' and uuid='e131e7c4-f7db-4889-8c34-e750e7b129da';
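As an aside, the same cleanup can often be done through the Placement API instead of raw SQL, for example with the osc-placement client plugin. This is only a sketch and not what was done here; deleting the stale provider is refused while allocations still reference it, and nova-compute on bdc2 would need a restart afterwards so it can register a fresh provider:

# openstack resource provider list --name bdc2
# openstack resource provider delete e131e7c4-f7db-4889-8c34-e750e7b129da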

3.2 Problem 2: Allocation conflict

At this point the conflict error was gone, but the re-added compute node still misbehaved: new instances were never scheduled onto it, and while an instance with a small footprint could be migrated to it, instances occupying large amounts of resources could not. Meanwhile, nova-compute kept flooding the log with warnings:

2019-07-16 19:10:02.684 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f has been moved to another host bdc3(bdc3). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:02.738 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance e0d8d6df-4b48-402b-aa33-97c4a6166c5b has been moved to another host bdc6(bdc6). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 6, u'MEMORY_MB': 12288, u'DISK_GB': 50}}.
2019-07-16 19:10:02.791 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 9d860729-597a-4420-bb8f-e9415587d808 has been moved to another host bdc3(bdc3). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 4, u'MEMORY_MB': 8192, u'DISK_GB': 50}}.
2019-07-16 19:10:02.860 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 8e42328d-fd1c-4abc-acac-5c6e09623af6 has been moved to another host bdc5(bdc5). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:02.912 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 1d59e7db-bf1b-478c-a6bd-10287365cb65 has been moved to another host bdc3(bdc3). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.
2019-07-16 19:10:02.960 1192779 INFO nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 61223c2d-0b0c-4729-85e6-741c88e6e476 has allocations against this compute host but is not found in the database.
2019-07-16 19:10:03.014 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 has been moved to another host bdc6(bdc6). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.: InstanceNotFound_Remote: Instance 61223c2d-0b0c-4729-85e6-741c88e6e476 could not be found.
2019-07-16 19:10:03.068 1192779 WARNING nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Instance 0e7c21d4-a5fb-4059-aa47-bad47700e827 has been moved to another host bdc1(bdc1). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 16384, u'DISK_GB': 50}}.: InstanceNotFound_Remote: Instance 61223c2d-0b0c-4729-85e6-741c88e6e476 could not be found.
2019-07-16 19:10:03.069 1192779 INFO nova.compute.resource_tracker [req-7c022b9b-7659-4a4d-9b53-30366a7fd150 - - - - -] Final resource view: name=bdc2 phys_ram=131037MB used_ram=512MB phys_disk=115480GB used_disk=0GB total_vcpus=24 used_vcpus=0 pci_stats=[]

The warnings say that several instances still have records tied to this compute node that conflict with their records on other nodes. Check the metadata database:

MariaDB [nova_api]> select * from allocations where resource_provider_id=7;
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+-------+
| created_at          | updated_at | id   | resource_provider_id | consumer_id                          | resource_class_id | used  |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+-------+
| 2019-07-09 09:10:27 | NULL       | 1471 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 0 |     4 |
| 2019-07-08 14:58:36 | NULL       | 1444 |                    7 | 61223c2d-0b0c-4729-85e6-741c88e6e476 |                 0 |     6 |
| 2019-07-09 10:09:33 | NULL       | 1510 |                    7 | e0d8d6df-4b48-402b-aa33-97c4a6166c5b |                 0 |     6 |
| 2019-07-09 09:18:30 | NULL       | 1477 |                    7 | 1d59e7db-bf1b-478c-a6bd-10287365cb65 |                 0 |     8 |
| 2019-07-09 09:26:26 | NULL       | 1483 |                    7 | 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f |                 0 |     8 |
| 2019-07-09 09:36:40 | NULL       | 1486 |                    7 | 0e7c21d4-a5fb-4059-aa47-bad47700e827 |                 0 |     8 |
| 2019-07-09 09:46:02 | NULL       | 1492 |                    7 | 8e42328d-fd1c-4abc-acac-5c6e09623af6 |                 0 |     8 |
| 2019-07-09 10:02:57 | NULL       | 1504 |                    7 | 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 |                 0 |     8 |
| 2019-07-09 09:10:27 | NULL       | 1472 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 1 |  8192 |
| 2019-07-08 14:58:36 | NULL       | 1445 |                    7 | 61223c2d-0b0c-4729-85e6-741c88e6e476 |                 1 | 12288 |
| 2019-07-09 10:09:33 | NULL       | 1511 |                    7 | e0d8d6df-4b48-402b-aa33-97c4a6166c5b |                 1 | 12288 |
| 2019-07-09 09:18:30 | NULL       | 1478 |                    7 | 1d59e7db-bf1b-478c-a6bd-10287365cb65 |                 1 | 16384 |
| 2019-07-09 09:26:26 | NULL       | 1484 |                    7 | 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f |                 1 | 16384 |
| 2019-07-09 09:36:40 | NULL       | 1487 |                    7 | 0e7c21d4-a5fb-4059-aa47-bad47700e827 |                 1 | 16384 |
| 2019-07-09 09:46:02 | NULL       | 1493 |                    7 | 8e42328d-fd1c-4abc-acac-5c6e09623af6 |                 1 | 16384 |
| 2019-07-09 10:02:57 | NULL       | 1505 |                    7 | 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 |                 1 | 16384 |
| 2019-07-08 14:58:36 | NULL       | 1446 |                    7 | 61223c2d-0b0c-4729-85e6-741c88e6e476 |                 2 |    50 |
| 2019-07-09 09:10:27 | NULL       | 1473 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 2 |    50 |
| 2019-07-09 09:18:30 | NULL       | 1479 |                    7 | 1d59e7db-bf1b-478c-a6bd-10287365cb65 |                 2 |    50 |
| 2019-07-09 09:26:26 | NULL       | 1485 |                    7 | 6446a84d-cdfd-4cfe-bcd2-2d1d75db229f |                 2 |    50 |
| 2019-07-09 09:36:40 | NULL       | 1488 |                    7 | 0e7c21d4-a5fb-4059-aa47-bad47700e827 |                 2 |    50 |
| 2019-07-09 09:46:02 | NULL       | 1494 |                    7 | 8e42328d-fd1c-4abc-acac-5c6e09623af6 |                 2 |    50 |
| 2019-07-09 10:02:57 | NULL       | 1506 |                    7 | 50f71a07-306d-4d2c-8f4a-6eaa11fbd233 |                 2 |    50 |
| 2019-07-09 10:09:33 | NULL       | 1512 |                    7 | e0d8d6df-4b48-402b-aa33-97c4a6166c5b |                 2 |    50 |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+-------+
24 rows in set (0.00 sec)

MariaDB [nova_api]> select * from allocations where consumer_id='9d860729-597a-4420-bb8f-e9415587d808';
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+
| created_at          | updated_at | id   | resource_provider_id | consumer_id                          | resource_class_id | used |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+
| 2019-07-09 09:10:27 | NULL       | 1468 |                    6 | 9d860729-597a-4420-bb8f-e9415587d808 |                 0 |    4 |
| 2019-07-09 09:10:27 | NULL       | 1469 |                    6 | 9d860729-597a-4420-bb8f-e9415587d808 |                 1 | 8192 |
| 2019-07-09 09:10:27 | NULL       | 1470 |                    6 | 9d860729-597a-4420-bb8f-e9415587d808 |                 2 |   50 |
| 2019-07-09 09:10:27 | NULL       | 1471 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 0 |    4 |
| 2019-07-09 09:10:27 | NULL       | 1472 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 1 | 8192 |
| 2019-07-09 09:10:27 | NULL       | 1473 |                    7 | 9d860729-597a-4420-bb8f-e9415587d808 |                 2 |   50 |
+---------------------+------------+------+----------------------+--------------------------------------+-------------------+------+

As the output shows, resource_provider_id 7, i.e. the newly added bdc2, still carries allocation records. Picking one of the instance IDs and querying it shows allocations against both provider 7 and provider 6. Recall that the instances on the failed node had been evacuated earlier; by taking over the old UUID, the new node inherited the old node's allocations, which is exactly why the log keeps printing these conflict warnings. Furthermore, (8192 + 12288 + 12288 + 16384 + 16384 + 16384 + 16384 + 16384) / 1024 = 112 GB.

In other words, 112 GB of memory is already counted as allocated, close to this machine's physical memory limit (phys_ram=131037MB in the log above), so the scheduler does not favour this node for new instances, and only small instances can be migrated onto it.

Since the metadata has already been modified by hand, we might as well see it through and clear the stale allocations as well:

MariaDB [nova_api]> delete from allocations where resource_provider_id=7;
Query OK, 24 rows affected (0.00 sec)

MariaDB [nova_api]> select * from allocations where resource_provider_id=7;
Empty set (0.00 sec)
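If touching nova_api directly feels too risky, the stale allocations can also be inspected and removed per instance through the Placement API. This is only a sketch under two assumptions: the osc-placement plugin is installed, and the release is new enough (Rocky or later) to offer nova-manage placement heal_allocations, since deleting a consumer's allocations removes all of them, including the valid one on the instance's current host, which then has to be recreated:

# openstack resource provider allocation show 9d860729-597a-4420-bb8f-e9415587d808
# openstack resource provider allocation delete 9d860729-597a-4420-bb8f-e9415587d808
# su -s /bin/sh -c "nova-manage placement heal_allocations" nova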

4. Verification

Three virtual machines with large resource requirements were then created, and all of them landed on bdc2. nova-compute no longer printed warnings like the ones above, so the problem is solved.

5. Notes

For reference, these are the metadata tables of the core OpenStack components that actually contain data:

MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'nova' and table_rows<>0 ORDER BY table_rows DESC; 
+--------------------------+------------+
| table_name               | table_rows |
+--------------------------+------------+
| instance_actions_events  |       2335 |
| instance_system_metadata |       2324 |
| instance_actions         |       1959 |
| virtual_interfaces       |        474 |
| block_device_mapping     |        451 |
| instance_info_caches     |        267 |
| instances                |        267 |
| instance_id_mappings     |        260 |
| instance_faults          |        217 |
| instance_extra           |        206 |
| migrations               |        122 |
| s3_images                |         17 |
| services                 |         13 |
| compute_nodes            |          8 |
| security_groups          |          6 |
+--------------------------+------------+
15 rows in set (0.00 sec)

MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'nova_api' and table_rows<>0 ORDER BY table_rows DESC; 
+--------------------+------------+
| table_name         | table_rows |
+--------------------+------------+
| consumers          |        339 |
| instance_mappings  |        283 |
| request_specs      |        213 |
| allocations        |        195 |
| traits             |        164 |
| quotas             |         57 |
| inventories        |         23 |
| flavors            |          9 |
| projects           |          8 |
| key_pairs          |          8 |
| users              |          8 |
| resource_providers |          8 |
| host_mappings      |          8 |
| cell_mappings      |          2 |
+--------------------+------------+
14 rows in set (0.01 sec)

MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'nova_cell0' and table_rows<>0 ORDER BY table_rows DESC; 
+--------------------------+------------+
| table_name               | table_rows |
+--------------------------+------------+
| instance_system_metadata |        112 |
| instance_id_mappings     |         16 |
| block_device_mapping     |         16 |
| instance_faults          |         16 |
| instance_extra           |         16 |
| instances                |         16 |
| instance_info_caches     |         16 |
| s3_images                |          2 |
+--------------------------+------------+
MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'cinder' and table_rows<>0 ORDER BY table_rows DESC; 
+------------------------+------------+
| table_name             | table_rows |
+------------------------+------------+
| reservations           |        573 |
| volume_admin_metadata  |        478 |
| volume_attachment      |        451 |
| volumes                |        200 |
| volume_glance_metadata |         64 |
| quotas                 |         21 |
| quota_usages           |         15 |
| quota_classes          |          6 |
| services               |          2 |
| workers                |          1 |
+------------------------+------------+
10 rows in set (0.07 sec)

MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'glance' and table_rows<>0 ORDER BY table_rows DESC; 
+------------------+------------+
| table_name       | table_rows |
+------------------+------------+
| images           |         19 |
| image_locations  |         19 |
| image_properties |          9 |
| alembic_version  |          1 |
+------------------+------------+
4 rows in set (0.03 sec)

MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'keystone' and table_rows<>0 ORDER BY table_rows DESC; 
+-----------------+------------+
| table_name      | table_rows |
+-----------------+------------+
| endpoint        |         18 |
| assignment      |         17 |
| user            |         14 |
| password        |         14 |
| local_user      |         14 |
| project         |         12 |
| service         |          6 |
| migrate_version |          4 |
| role            |          2 |
+-----------------+------------+
9 rows in set (0.08 sec)

MariaDB [nova_api]> SELECT table_name,table_rows FROM information_schema.tables WHERE TABLE_SCHEMA = 'neutron' and table_rows<>0 ORDER BY table_rows DESC; 
+---------------------------+------------+
| table_name                | table_rows |
+---------------------------+------------+
| ml2_vxlan_allocations     |       1000 |
| standardattributes        |        149 |
| ports                     |         69 |
| ipamallocations           |         69 |
| ipallocations             |         69 |
| ml2_port_bindings         |         69 |
| portsecuritybindings      |         69 |
| securitygroupportbindings |         68 |
| ml2_port_binding_levels   |         66 |
| securitygrouprules        |         59 |
| quotas                    |         56 |
| quotausages               |         17 |
| agents                    |         11 |
| default_security_group    |          8 |
| segmenthostmappings       |          8 |
| securitygroups            |          8 |
| allowedaddresspairs       |          4 |
| provisioningblocks        |          4 |
| alembic_version           |          2 |
+---------------------------+------------+
19 rows in set (0.18 sec)

6. Summary

1. Operating on metadata directly is very dangerous; avoid it, or do as little of it as possible, and if you must, back up the database first (a minimal backup sketch follows this list);
2. For instances and volumes that refuse to be deleted, do not simply flip the deleted flag in the metadata: that is self-deception, since they merely disappear from the dashboard while the resources are never actually released and the files still sit on the backend storage;
3. Be careful with hardware operations; in short, double- and triple-check any risky operation.
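A minimal backup sketch for point 1, assuming local root access to the MariaDB instance and the database names used above:

# mysqldump -u root -p --databases nova nova_api nova_cell0 > nova_metadata_backup_$(date +%F).sql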

That is all for "What to do when adding a new OpenStack node fails with 'Failed to create resource provider'". Hopefully the content above is of some help; if you found the article useful, feel free to share it with others.
