
Understanding Receiver Startup in Spark Streaming: A Source-Code Analysis

發(fā)布時(shí)間:2021-11-24 16:05:31 來源:億速云 閱讀:123 作者:柒染 欄目:云計(jì)算

This article explains how Receivers are started in Spark Streaming and walks through the relevant source code. Many readers may not be familiar with this mechanism, so the details are laid out step by step below.

Why does Spark Streaming need a Receiver?

A Receiver continuously ingests data from an external data source and reports it to the Driver. Every BatchDuration, the reported data is turned into a new Job that executes the corresponding RDD operations.

Receivers are started when the application starts.

Receivers and InputDStreams correspond one-to-one.

An RDD[Receiver] has only one partition, holding a single Receiver instance.

Spark Core is unaware that RDD[Receiver] is special and schedules its Job like that of any ordinary RDD. Several Receivers may therefore end up on the same Executor, causing load imbalance and Receiver startup failures.

There are two ways to start Receivers on Executors:

1. Start each Receiver from a different partition of a single RDD: each partition represents one Receiver, which at execution time is one Task, and the Receiver is started when its Task starts.

This approach is simple and clever, but fragile: if startup fails, or the Receiver fails at runtime, the Task is retried, and once the retry limit is exhausted the Job fails, taking the whole Spark application down with it. A Receiver fault thus becomes a Job failure, with no fault tolerance.
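The fragility of this first approach can be illustrated with a small simulation. This is hypothetical Python (the article's source code is Scala), not Spark code: a normal task is retried only a bounded number of times (controlled by spark.task.maxFailures; the exact limit and the constant below are assumptions for this sketch), after which the whole job, and with it the streaming application, fails.

```python
# Hypothetical sketch: a long-running receiver run as an ordinary task
# only survives as many crashes as the task-retry budget allows.
MAX_TASK_FAILURES = 4  # assumed retry limit for this illustration

def run_as_normal_task(attempts_until_success):
    """Return the outcome of running a receiver as a plain Spark task.

    attempts_until_success: how many attempts crash before one succeeds.
    """
    for attempt in range(MAX_TASK_FAILURES):
        if attempt >= attempts_until_success:
            return f"receiver up after {attempt + 1} attempts"
        # receiver crashed; the TaskScheduler retries the task
    return "job failed: receiver task exceeded maxFailures"

print(run_as_normal_task(2))   # recovers within the retry budget
print(run_as_normal_task(10))  # exhausts retries -> the whole job fails
```

An unlucky receiver (here, one needing more than four attempts) kills the job outright, which motivates the second design below.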

2. The second approach is the one Spark Streaming actually uses.

In ReceiverTracker's start method, the RPC endpoint ReceiverTrackerEndpoint is instantiated first, and then launchReceivers is called.

/** Start the endpoint and receiver execution thread. */
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}

In launchReceivers, each ReceiverInputDStream is first mapped to its corresponding Receiver, and then a StartAllReceivers message is sent. Each Receiver corresponds to one data source.

/**
 * Get the receivers from the ReceiverInputDStreams, distributes them to the
 * worker nodes as a parallel collection, and runs them.
 */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  runDummySparkJob()

  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}

When ReceiverTrackerEndpoint receives the StartAllReceivers message, it first determines which Executors each Receiver should run on, then calls startReceiver.

override def receive: PartialFunction[Any, Unit] = {
  // Local messages
  case StartAllReceivers(receivers) =>
    val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
    for (receiver <- receivers) {
      val executors = scheduledLocations(receiver.streamId)
      updateReceiverScheduledExecutors(receiver.streamId, executors)
      receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
      startReceiver(receiver, executors)
    }
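The key idea behind schedulingPolicy.scheduleReceivers is to spread receivers evenly across executors, rather than letting ordinary task scheduling pile several receivers onto one executor. The sketch below is a hypothetical Python simplification of that idea (the real ReceiverSchedulingPolicy in Scala also honors each receiver's preferredLocation and current executor load; this round-robin version ignores those details):

```python
# Hypothetical sketch of the load-balancing goal of scheduleReceivers:
# assign receivers to executors round-robin so no executor gets piled up.
def schedule_receivers(receiver_ids, executors):
    """Map each receiver (stream) id to a target executor, round-robin."""
    return {
        stream_id: executors[i % len(executors)]
        for i, stream_id in enumerate(receiver_ids)
    }

plan = schedule_receivers([0, 1, 2, 3], ["exec-1", "exec-2", "exec-3"])
# receivers 0..2 land on distinct executors; receiver 3 wraps to exec-1
```

With this Driver-side placement plan in hand, each receiver's target executors become the preferred locations of its dedicated RDD, as the next section shows.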

startReceiver specifies the TaskLocation at the Driver level itself, rather than letting Spark Core choose it. This design has several properties. Stopping a Receiver does not require restarting a Spark Job. A Receiver is only started on the first task attempt, never a second time. A dedicated Spark job is submitted to start each Receiver: every Receiver launch triggers its own Spark job, instead of each Receiver being one Task within a single shared job. When the job that launched a Receiver completes while the tracker is still running, a RestartReceiver message is sent to restart the Receiver.

/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  // We will keep restarting the receiver job until ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
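The future.onComplete handler above encodes a simple policy: whether the per-receiver job ends in Success or Failure, the tracker resubmits it (via RestartReceiver) as long as it is not stopping, so a receiver crash never kills the application. A hypothetical Python sketch of that decision logic (not Spark code; the function and its return values are invented for illustration):

```python
# Hypothetical sketch of the "keep restarting until the tracker stops"
# policy: every completion of the receiver job, successful or not,
# triggers a restart unless the tracker is shutting down.
def on_receiver_job_complete(tracker_stopping, outcomes):
    """Record the tracker's reaction to each receiver-job completion.

    outcomes: a sequence of "success" / "failure" job results.
    """
    actions = []
    for outcome in outcomes:
        if tracker_stopping:
            actions.append("mark receiver job finished")
        else:
            actions.append(f"restart receiver after {outcome}")
    return actions

print(on_receiver_job_complete(False, ["failure", "failure", "success"]))
# while the tracker runs, every completion leads to a restart
```

Contrast this with the first approach: here a receiver failure consumes no shared retry budget, because each restart is a fresh single-task job.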

That covers how Receivers are started in Spark Streaming and the source code behind the mechanism; hopefully the walkthrough above gives you a clearer picture of the design.


