Contents of this installment:
1. Ideas for how a Receiver could be started
2. A thorough source-code analysis of Receiver startup
Why do we need Receivers?
A Receiver continuously receives data from an external data source and reports the received data (as block metadata) back to the Driver side, so that every BatchDuration the reported blocks can be turned into Jobs that run operations on the resulting RDDs.
Receivers are started along with the application itself.
Receivers and InputDStreams correspond one to one.
The RDD[Receiver] has only one Partition and holds a single Receiver instance.
Spark Core knows nothing about what makes RDD[Receiver] special; it schedules the corresponding Job exactly like that of an ordinary RDD, so several Receivers may end up being launched on the same Executor, which causes load imbalance and can make Receiver startup fail.
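To make the one-to-one relationship concrete, here is a minimal, hedged sketch of an application with two ReceiverInputDStreams and therefore two Receivers. The object name, master setting, hosts and ports are illustrative placeholders, not anything taken from the source being analysed.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TwoReceiversSketch {
  def main(args: Array[String]): Unit = {
    // Need at least one core more than the number of Receivers, otherwise no core
    // is left for processing the generated batch Jobs.
    val conf = new SparkConf().setMaster("local[3]").setAppName("two-receivers-sketch")
    val ssc = new StreamingContext(conf, Seconds(5))        // BatchDuration = 5 seconds

    val lines1 = ssc.socketTextStream("host1", 9999)        // InputDStream #1 -> Receiver #1
    val lines2 = ssc.socketTextStream("host2", 9999)        // InputDStream #2 -> Receiver #2

    // Every 5 seconds the blocks reported by the Receivers become RDDs and a Job is generated.
    lines1.union(lines2).count().print()

    ssc.start()                                             // Receivers start with the application
    ssc.awaitTermination()
  }
}

Each socketTextStream call produces one ReceiverInputDStream, and each of those contributes exactly one Receiver that occupies an Executor core for the lifetime of the application.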
Possible schemes for starting Receivers on Executors:
1. Start the different Receivers as different Partitions of one RDD: each Partition represents one Receiver, which at the execution level means one Task, and the Receiver is started when its Task starts.
This approach is simple and clever, but it has drawbacks: startup may fail, and a Receiver failure at runtime triggers a Task retry; once the retries (3 by default) are exhausted the Job fails, and with it the entire Spark application. A Receiver fault bringing down the Job means there is no fault tolerance (see the sketch after this list).
2. The second scheme is the one Spark Streaming actually adopts.
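Before looking at the second scheme in the source, here is a hedged sketch of what the first, rejected scheme would look like. The receivers are modeled as plain integer ids rather than real Receiver objects, and every name here is illustrative.

import org.apache.spark.{SparkConf, SparkContext}

object NaiveReceiverLaunchSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[4]").setAppName("naive-receiver-launch"))

    val receiverIds = 0 until 3
    // One partition per receiver id: each "receiver" becomes one Task of a single ordinary job.
    val receiverRDD = sc.makeRDD(receiverIds, receiverIds.size)

    // All receivers live inside this one job. If any Task exhausts its retries,
    // the whole job fails and every receiver goes down with it -- the drawback
    // described above.
    receiverRDD.foreachPartition { iter =>
      iter.foreach(id => println(s"receiver $id would loop here, pulling data until stopped"))
    }

    sc.stop()
  }
}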
In ReceiverTracker's start method, the RPC message endpoint ReceiverTrackerEndpoint is instantiated first, and then launchReceivers is called.
/** Start the endpoint and receiver execution thread. */
def start(): Unit = synchronized {
  if (isTrackerStarted) {
    throw new SparkException("ReceiverTracker already started")
  }

  if (!receiverInputStreams.isEmpty) {
    endpoint = ssc.env.rpcEnv.setupEndpoint(
      "ReceiverTracker", new ReceiverTrackerEndpoint(ssc.env.rpcEnv))
    if (!skipReceiverLaunch) launchReceivers()
    logInfo("ReceiverTracker started")
    trackerState = Started
  }
}
In launchReceivers, a corresponding Receiver is first obtained from each ReceiverInputDStream, and then the StartAllReceivers message is sent to the endpoint. Each Receiver corresponds to one data source.
/**
 * Get the receivers from the ReceiverInputDStreams, distributes them to the
 * worker nodes as a parallel collection, and runs them.
 */
private def launchReceivers(): Unit = {
  val receivers = receiverInputStreams.map(nis => {
    val rcvr = nis.getReceiver()
    rcvr.setReceiverId(nis.id)
    rcvr
  })

  runDummySparkJob()

  logInfo("Starting " + receivers.length + " receivers")
  endpoint.send(StartAllReceivers(receivers))
}
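What nis.getReceiver() hands back depends on the concrete input stream; for socketTextStream it is a SocketReceiver, and user code can plug in its own via ssc.receiverStream. The sketch below is a minimal custom Receiver in the style of the Spark Streaming programming guide; the class name and the host/port parameters are illustrative.

import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Minimal line-based socket Receiver, modeled on the custom-receiver example
// in the Spark Streaming guide.
class LineReceiver(host: String, port: Int)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  // onStart must return quickly, so the blocking work goes into its own thread.
  override def onStart(): Unit = {
    new Thread("Line Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  // Nothing to do here: the receiving thread checks isStopped() and cleans up itself.
  override def onStop(): Unit = {}

  private def receive(): Unit = {
    try {
      val socket = new Socket(host, port)
      val reader = new BufferedReader(
        new InputStreamReader(socket.getInputStream, StandardCharsets.UTF_8))
      var line = reader.readLine()
      while (!isStopped() && line != null) {
        store(line)                          // hand data to the ReceiverSupervisor
        line = reader.readLine()
      }
      reader.close()
      socket.close()
      restart("Trying to connect again")     // ask the supervisor to restart this receiver
    } catch {
      case t: Throwable => restart("Error receiving data", t)
    }
  }
}

Such a receiver is wired in with ssc.receiverStream(new LineReceiver(...)), and it is exactly this object that launchReceivers collects and tags with setReceiverId.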
After ReceiverTrackerEndpoint receives the StartAllReceivers message, it first works out which Executors each Receiver should run on and then calls startReceiver.
override def receive: PartialFunction[Any, Unit] = {
  // Local messages
  case StartAllReceivers(receivers) =>
    val scheduledLocations = schedulingPolicy.scheduleReceivers(receivers, getExecutors)
    for (receiver <- receivers) {
      val executors = scheduledLocations(receiver.streamId)
      updateReceiverScheduledExecutors(receiver.streamId, executors)
      receiverPreferredLocations(receiver.streamId) = receiver.preferredLocation
      startReceiver(receiver, executors)
    }
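ReceiverSchedulingPolicy.scheduleReceivers returns, for each streamId, a list of candidate executor locations, taking preferredLocation and current load into account. The following is only a hedged, simplified stand-in that round-robins receivers over executors to show the shape of the result; it is not the real policy, and the names are illustrative.

// Simplified stand-in for ReceiverSchedulingPolicy.scheduleReceivers: it ignores
// preferredLocation and load-balancing weights and just round-robins receiver ids
// over the available executors, returning streamId -> candidate executor locations.
def roundRobinSchedule(
    receiverStreamIds: Seq[Int],
    executors: Seq[String]): Map[Int, Seq[String]] = {
  if (executors.isEmpty) {
    receiverStreamIds.map(id => id -> Seq.empty[String]).toMap
  } else {
    receiverStreamIds.zipWithIndex.map { case (id, i) =>
      id -> Seq(executors(i % executors.size))
    }.toMap
  }
}

// roundRobinSchedule(Seq(0, 1, 2), Seq("exec-1@host1", "exec-2@host2"))
//   == Map(0 -> Seq("exec-1@host1"), 1 -> Seq("exec-2@host2"), 2 -> Seq("exec-1@host1"))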
The startReceiver method lets the Driver specify the TaskLocation itself instead of leaving the choice to Spark Core. It has the following characteristics: terminating a Receiver does not require restarting a Spark Job; the Receiver is only started on the first Task attempt, never a second time inside a retried Task (the attemptNumber() == 0 check below); a Spark job is launched solely to start the Receiver, and one such Spark job starts exactly one Receiver. In other words, each Receiver's startup triggers its own Spark job, rather than every Receiver being started as one Task of a single Spark job. When the job that was submitted to start a Receiver fails, or completes while the tracker is still running, a RestartReceiver message is sent so that the Receiver is restarted.
/**
 * Start a receiver along with its scheduled executors
 */
private def startReceiver(
    receiver: Receiver[_],
    scheduledLocations: Seq[TaskLocation]): Unit = {
  def shouldStartReceiver: Boolean = {
    // It's okay to start when trackerState is Initialized or Started
    !(isTrackerStopping || isTrackerStopped)
  }

  val receiverId = receiver.streamId
  if (!shouldStartReceiver) {
    onReceiverJobFinish(receiverId)
    return
  }

  val checkpointDirOption = Option(ssc.checkpointDir)
  val serializableHadoopConf =
    new SerializableConfiguration(ssc.sparkContext.hadoopConfiguration)

  // Function to start the receiver on the worker node
  val startReceiverFunc: Iterator[Receiver[_]] => Unit =
    (iterator: Iterator[Receiver[_]]) => {
      if (!iterator.hasNext) {
        throw new SparkException(
          "Could not start receiver as object not found.")
      }
      if (TaskContext.get().attemptNumber() == 0) {
        val receiver = iterator.next()
        assert(iterator.hasNext == false)
        val supervisor = new ReceiverSupervisorImpl(
          receiver, SparkEnv.get, serializableHadoopConf.value, checkpointDirOption)
        supervisor.start()
        supervisor.awaitTermination()
      } else {
        // It's restarted by TaskScheduler, but we want to reschedule it again. So exit it.
      }
    }

  // Create the RDD using the scheduledLocations to run the receiver in a Spark job
  val receiverRDD: RDD[Receiver[_]] =
    if (scheduledLocations.isEmpty) {
      ssc.sc.makeRDD(Seq(receiver), 1)
    } else {
      val preferredLocations = scheduledLocations.map(_.toString).distinct
      ssc.sc.makeRDD(Seq(receiver -> preferredLocations))
    }
  receiverRDD.setName(s"Receiver $receiverId")
  ssc.sparkContext.setJobDescription(s"Streaming job running receiver $receiverId")
  ssc.sparkContext.setCallSite(Option(ssc.getStartSite()).getOrElse(Utils.getCallSite()))

  val future = ssc.sparkContext.submitJob[Receiver[_], Unit, Unit](
    receiverRDD, startReceiverFunc, Seq(0), (_, _) => Unit, ())
  // We will keep restarting the receiver job until ReceiverTracker is stopped
  future.onComplete {
    case Success(_) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
    case Failure(e) =>
      if (!shouldStartReceiver) {
        onReceiverJobFinish(receiverId)
      } else {
        logError("Receiver has been stopped. Try to restart it.", e)
        logInfo(s"Restarting Receiver $receiverId")
        self.send(RestartReceiver(receiver))
      }
  }(submitJobThreadPool)
  logInfo(s"Receiver ${receiver.streamId} started")
}
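The core mechanism here is SparkContext.submitJob over a one-partition RDD: the single Task of that job runs startReceiverFunc, and the returned future drives the restart loop. The sketch below exercises the same submitJob call with a trivial payload in local mode; the object name, RDD contents and the println are illustrative assumptions, not part of the Spark Streaming source.

import scala.concurrent.Await
import scala.concurrent.duration._

import org.apache.spark.{SparkConf, SparkContext}

object SubmitJobSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("submit-job-sketch"))

    // A one-partition RDD with a single element, mirroring the RDD[Receiver] above.
    val oneElementRDD = sc.makeRDD(Seq("pretend-this-is-a-receiver"), 1)

    // Plays the role of startReceiverFunc: it runs as the only Task of a dedicated job.
    // The real function blocks in supervisor.awaitTermination() for the receiver's lifetime.
    val runInTask = (iterator: Iterator[String]) => {
      val item = iterator.next()
      println(s"long-running work for '$item' would start here")
    }

    // Submit a job over partition 0 only; the future completes when that Task finishes,
    // which is exactly the hook ReceiverTracker uses to decide whether to restart.
    val future = sc.submitJob[String, Unit, Unit](
      oneElementRDD, runInTask, Seq(0), (_, _) => (), ())

    Await.ready(future, 1.minute)
    sc.stop()
  }
}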