I have recently been testing a highly available OpenStack control plane (three controller nodes). When one of the controller nodes is shut down, nova service-list shows every nova service as down, and the nova-compute log is flooded with errors like these:
2016-11-08 03:46:23.887 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.275 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
The exception above was traced to oslo_messaging/_drivers/impl_rabbit.py:
    def _heartbeat_thread_job(self):
        """Thread that maintains inactive connections
        """
        while not self._heartbeat_exit_event.is_set():
            with self._connection_lock.for_heartbeat():

                recoverable_errors = (
                    self.connection.recoverable_channel_errors +
                    self.connection.recoverable_connection_errors)

                try:
                    try:
                        self._heartbeat_check()
                        # NOTE(sileht): We need to drain event to receive
                        # heartbeat from the broker but don't hold the
                        # connection too much times. In amqpdriver a connection
                        # is used exclusivly for read or for write, so we have
                        # to do this for connection used for write drain_events
                        # already do that for other connection
                        try:
                            self.connection.drain_events(timeout=0.001)
                        except socket.timeout:
                            pass
                    except recoverable_errors as exc:
                        LOG.info(_LI("A recoverable connection/channel error "
                                     "occurred, trying to reconnect: %s"), exc)
                        self.ensure_connection()
                except Exception:
                    LOG.warning(_LW("Unexpected error during heartbeart "
                                    "thread processing, retrying..."))
                    LOG.debug('Exception', exc_info=True)

            self._heartbeat_exit_event.wait(
                timeout=self._heartbeat_wait_timeout)
        self._heartbeat_exit_event.clear()
The heartbeat check is there to verify that the connection between a component's service and the RabbitMQ server is still alive, and the heartbeat_check task in oslo_messaging starts running in the background as soon as the service starts. When a controller node is shut down, one of the RabbitMQ server nodes goes down with it. The thread then just keeps spinning in this loop, repeatedly raising the exceptions caught by recoverable_errors; the while loop only exits once self._heartbeat_exit_event.is_set() becomes true. Arguably some kind of timeout should be added here, so that the thread does not stay stuck in the loop and take several minutes to recover.
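To illustrate the kind of bound suggested above, here is a minimal, self-contained sketch of a reconnect loop limited by a deadline and exponential backoff. Every name in it (the connect callable, the error tuple, the timing values) is a hypothetical placeholder for illustration; this is not oslo.messaging code and not a proposed patch to it.

import time


def reconnect_with_deadline(connect, recoverable_errors,
                            deadline=60.0, initial_backoff=1.0):
    """Call connect() until it succeeds or `deadline` seconds have passed."""
    start = time.time()
    backoff = initial_backoff
    while True:
        try:
            return connect()
        except recoverable_errors as exc:
            # Stop retrying once the deadline is exceeded, instead of
            # spinning forever on a broker node that is gone.
            if time.time() - start > deadline:
                raise
            print('recoverable error, retrying in %.1fs: %s' % (backoff, exc))
            time.sleep(backoff)
            # Back off exponentially, capped at 30 seconds between attempts.
            backoff = min(backoff * 2, 30.0)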
Today I set up the three-controller HA environment in virtual machines and added the following parameters to nova.conf:
[oslo_messaging_rabbit]
rabbit_max_retries = 2           # maximum number of reconnection attempts
heartbeat_timeout_threshold = 0  # disable the heartbeat check
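As a quick sanity check that the file is being parsed as intended, the standalone sketch below reads those two values back with oslo.config. The config-file path and the default values are only placeholders for illustration, not authoritative defaults.

from oslo_config import cfg

# Register just the two options we care about; oslo.config ignores the rest
# of nova.conf. The defaults here are placeholders, not the library's own.
opts = [
    cfg.IntOpt('rabbit_max_retries', default=0),
    cfg.IntOpt('heartbeat_timeout_threshold', default=60),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts, group='oslo_messaging_rabbit')
conf(['--config-file', '/etc/nova/nova.conf'])

print('rabbit_max_retries = %s'
      % conf.oslo_messaging_rabbit.rabbit_max_retries)
print('heartbeat_timeout_threshold = %s'
      % conf.oslo_messaging_rabbit.heartbeat_timeout_threshold)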
測試,nova_compute 并不會一直拋出recoverable_errors捕獲到的異常,nova service-list并不會出現(xiàn)所有服務(wù)down的情況。
后續(xù)有待在物理機(jī)上測試。。。。。。