
Spark Source Code Reading Notes: Broadcast (Part 1)

2015-08-13
Spark serializes the variables referenced by each task and ships them to the Executors. Since an Executor only receives a copy of each variable, any modification to it is visible only within that Executor. The size of a serialized task is also limited (determined by spark.akka.frameSize minus a 200 KB reserve, i.e. 10 MB - 200 KB by default); Spark checks this limit and discards tasks that exceed it. Therefore, when relatively large data needs to be shared, a Broadcast should be used.
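As a quick illustration (a minimal sketch of my own, not code from the source discussed here; the lookup table and all names are invented), a large read-only map can be shared via SparkContext.broadcast instead of being captured directly in each task's closure:

import org.apache.spark.{SparkConf, SparkContext}

object BroadcastUsageSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("broadcast-usage").setMaster("local[2]"))

    // A read-only lookup table that every task needs; imagine it being large.
    val countryNames = Map("CN" -> "China", "US" -> "United States")

    // Shipped once per Executor as a broadcast variable instead of once per task.
    val bc = sc.broadcast(countryNames)

    val codes = sc.parallelize(Seq("CN", "US", "CN"))
    // Tasks read the shared value through bc.value; modifying it would only
    // change the local copy on that Executor.
    val names = codes.map(code => bc.value.getOrElse(code, "unknown")).collect()
    names.foreach(println)

    sc.stop()
  }
}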

Spark implements two transport mechanisms for Broadcast: Http and Torrent (selected by the spark.broadcast.factory parameter, which defaults to org.apache.spark.broadcast.TorrentBroadcastFactory and can be changed to org.apache.spark.broadcast.HttpBroadcastFactory).
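For example (an illustrative snippet of mine, not taken from the source), the factory can be switched through SparkConf before the SparkContext is created:

import org.apache.spark.{SparkConf, SparkContext}

// Switch from the default TorrentBroadcastFactory to the HTTP implementation.
val conf = new SparkConf()
  .setAppName("http-broadcast")
  .set("spark.broadcast.factory", "org.apache.spark.broadcast.HttpBroadcastFactory")
val sc = new SparkContext(conf)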

Under the Http mechanism, the Driver starts an HTTP server and stores the variable to be shared as a file under the server's root directory. When an Executor needs the variable, it requests that file from the HTTP server, downloads it, and reads it. With this mechanism every Executor downloads from the Driver, so the Driver's network link easily becomes the bottleneck.

Torrent is a BitTorrent-like mechanism. To share a variable, the Driver first splits the serialized variable into chunks and stores them in the BlockManager. When an Executor needs the variable, it requests the chunks from the BlockManager in a shuffled order and reassembles them into the complete variable. While fetching, as soon as an Executor obtains a chunk it stores it in its own BlockManager, so other Executors that need that chunk can fetch it from that Executor rather than from the Driver. Because each Executor requests the chunks in its own random order, it is unlikely that all Executors request the same chunk at the same time and overload the Driver's network. This mechanism therefore keeps the Driver's network link from becoming the bottleneck (a toy sketch of the idea follows the quoted description below).

A BitTorrent-like implementation. The mechanism is as follows:

The driver divides the serialized object into small chunks and stores those chunks in the BlockManager of the driver.

On each executor, the executor first attempts to fetch the object from its BlockManager. If it does not exist, it then uses remote fetches to fetch the small chunks from the driver and/or other executors if available. Once it gets the chunks, it puts the chunks in its own BlockManager, ready for other executors to fetch from.

This prevents the driver from being the bottleneck in sending out multiple copies of the broadcast data (one per executor) as done by the [[org.apache.spark.broadcast.HttpBroadcast]].
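To make the chunking-and-shuffling idea concrete, here is a toy illustration of my own (not the actual TorrentBroadcast code): the Driver splits the serialized bytes into fixed-size blocks, and each Executor fetches the block indices in its own random order before reassembling them.

import scala.util.Random

// Split a serialized value into fixed-size chunks, as the Driver would.
def chunk(bytes: Array[Byte], chunkSize: Int): Array[Array[Byte]] =
  bytes.grouped(chunkSize).toArray

// Each Executor walks the chunk indices in its own shuffled order, so
// requests for any single chunk are spread across the cluster.
def fetchOrder(numChunks: Int): Seq[Int] =
  Random.shuffle((0 until numChunks).toList)

// Reassemble the chunks into the original byte array once all have arrived.
def reassemble(chunks: Array[Array[Byte]]): Array[Byte] =
  chunks.flatten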

Spark creates all Broadcasts through BroadcastManager (the broadcast function of SparkContext calls BroadcastManager's newBroadcast function; the BroadcastManager itself is created in SparkEnv). BroadcastManager wraps a BroadcastFactory and has four functions: initialize, newBroadcast, unbroadcast and stop. initialize creates the BroadcastFactory configured by spark.broadcast.factory and initializes it; newBroadcast calls the BroadcastFactory's newBroadcast; unbroadcast calls the BroadcastFactory's unbroadcast; stop calls the BroadcastFactory's stop.

BroadcastManager code:

private[spark] class BroadcastManager(
    val isDriver: Boolean,
    conf: SparkConf,
    securityManager: SecurityManager)
  extends Logging {

  private var initialized = false
  private var broadcastFactory: BroadcastFactory = null

  initialize()

  // Called by SparkContext or Executor before using Broadcast
  private def initialize() {
    synchronized {
      if (!initialized) {
        val broadcastFactoryClass =
          conf.get("spark.broadcast.factory", "org.apache.spark.broadcast.TorrentBroadcastFactory")

        broadcastFactory =
          Class.forName(broadcastFactoryClass).newInstance.asInstanceOf[BroadcastFactory]

        // Initialize appropriate BroadcastFactory and BroadcastObject
        broadcastFactory.initialize(isDriver, conf, securityManager)

        initialized = true
      }
    }
  }

  def stop() {
    broadcastFactory.stop()
  }

  private val nextBroadcastId = new AtomicLong(0)

  def newBroadcast[T: ClassTag](value_ : T, isLocal: Boolean) = {
    broadcastFactory.newBroadcast[T](value_, isLocal, nextBroadcastId.getAndIncrement())
  }

  def unbroadcast(id: Long, removeFromDriver: Boolean, blocking: Boolean) {
    broadcastFactory.unbroadcast(id, removeFromDriver, blocking)
  }
}
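For orientation, the user-facing SparkContext.broadcast is essentially a thin wrapper over this manager. The following is a simplified paraphrase based on the relationship described above, not the verbatim 1.x source; the ContextCleaner registration is an extra detail from that code base.

// Inside SparkContext (simplified): delegate to the BroadcastManager held in SparkEnv.
def broadcast[T: ClassTag](value: T): Broadcast[T] = {
  val bc = env.broadcastManager.newBroadcast[T](value, isLocal)
  // Register with the ContextCleaner so the broadcast's blocks can be removed
  // once the Broadcast object is garbage collected.
  cleaner.foreach(_.registerBroadcastForCleanup(bc))
  bc
}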


BroadcastFactory is a trait (interface) with four functions, initialize, newBroadcast, unbroadcast and stop, which BroadcastManager calls. It has two implementations, HttpBroadcastFactory and TorrentBroadcastFactory, corresponding to the two Broadcast transport mechanisms, Http and Torrent.

An interface for all the broadcast implementations in Spark (to allow multiple broadcast implementations). SparkContext uses a user-specified BroadcastFactory implementation to instantiate a particular broadcast for the entire Spark job.

BroadcastFactory code:

trait BroadcastFactory {

  def initialize(isDriver: Boolean, conf: SparkConf, securityMgr: SecurityManager): Unit

  /**
   * Creates a new broadcast variable.
   *
   * @param value value to broadcast
   * @param isLocal whether we are in local mode (single JVM process)
   * @param id unique id representing this broadcast variable
   */
  def newBroadcast[T: ClassTag](value: T, isLocal: Boolean, id: Long): Broadcast[T]

  def unbroadcast(id: Long, removeFromDriver: Boolean, blocking: Boolean): Unit

  def stop(): Unit
}
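To make the contract concrete, here is a hypothetical skeleton of a custom factory (my own sketch, not HttpBroadcastFactory or TorrentBroadcastFactory). It only shows where the four calls hook in, with a throw-away in-memory Broadcast so the example compiles.

import scala.reflect.ClassTag
import org.apache.spark.{SecurityManager, SparkConf}
import org.apache.spark.broadcast.{Broadcast, BroadcastFactory}

// Hypothetical factory backing broadcasts with some custom transport.
class TrivialBroadcastFactory extends BroadcastFactory {

  // Called on the Driver and on each Executor before the first broadcast is used,
  // e.g. to start a server or set up clients.
  override def initialize(isDriver: Boolean, conf: SparkConf, securityMgr: SecurityManager): Unit = { }

  // Wrap the value in our own Broadcast subclass (here an anonymous in-memory one;
  // the three hooks it overrides are discussed in the Broadcast section below).
  override def newBroadcast[T: ClassTag](value: T, isLocal: Boolean, id: Long): Broadcast[T] =
    new Broadcast[T](id) {
      override protected def getValue(): T = value
      override protected def doUnpersist(blocking: Boolean): Unit = { }
      override protected def doDestroy(blocking: Boolean): Unit = { }
    }

  // Remove the data kept for broadcast `id` on the Executors (and the Driver if asked).
  override def unbroadcast(id: Long, removeFromDriver: Boolean, blocking: Boolean): Unit = { }

  // Release whatever initialize() acquired.
  override def stop(): Unit = { }
}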


BroadcastFactory's newBroadcast produces a Broadcast. Broadcast is an abstract class that implements the Serializable interface and has three abstract methods: getValue, doUnpersist and doDestroy. Its other methods ultimately call these three, so a subclass needs to implement these three methods together with its own serialization behaviour.

Broadcast code:

abstract class Broadcast[T: ClassTag](val id: Long) extends Serializable with Logging {

  /**
   * Flag signifying whether the broadcast variable is valid
   * (that is, not already destroyed) or not.
   */
  @volatile private var _isValid = true

  private var _destroySite = ""

  /** Get the broadcasted value. */
  def value: T = {
    assertValid()
    getValue()
  }

  /**
   * Asynchronously delete cached copies of this broadcast on the executors.
   * If the broadcast is used after this is called, it will need to be re-sent to each executor.
   */
  def unpersist() {
    unpersist(blocking = false)
  }

  /**
   * Delete cached copies of this broadcast on the executors. If the broadcast is used after
   * this is called, it will need to be re-sent to each executor.
   * @param blocking Whether to block until unpersisting has completed
   */
  def unpersist(blocking: Boolean) {
    assertValid()
    doUnpersist(blocking)
  }

  /**
   * Destroy all data and metadata related to this broadcast variable. Use this with caution;
   * once a broadcast variable has been destroyed, it cannot be used again.
   * This method blocks until destroy has completed
   */
  def destroy() {
    destroy(blocking = true)
  }

  /**
   * Destroy all data and metadata related to this broadcast variable. Use this with caution;
   * once a broadcast variable has been destroyed, it cannot be used again.
   * @param blocking Whether to block until destroy has completed
   */
  private[spark] def destroy(blocking: Boolean) {
    assertValid()
    _isValid = false
    _destroySite = Utils.getCallSite().shortForm
    logInfo("Destroying %s (from %s)".format(toString, _destroySite))
    doDestroy(blocking)
  }

  /**
   * Whether this Broadcast is actually usable. This should be false once persisted state is
   * removed from the driver.
   */
  private[spark] def isValid: Boolean = {
    _isValid
  }

  /**
   * Actually get the broadcasted value. Concrete implementations of Broadcast class must
   * define their own way to get the value.
   */
  protected def getValue(): T

  /**
   * Actually unpersist the broadcasted value on the executors. Concrete implementations of
   * Broadcast class must define their own logic to unpersist their own data.
   */
  protected def doUnpersist(blocking: Boolean)

  /**
   * Actually destroy all data and metadata related to this broadcast variable.
   * Implementation of Broadcast class must define their own logic to destroy their own
   * state.
   */
  protected def doDestroy(blocking: Boolean)

  /** Check if this broadcast is valid. If not valid, exception is thrown. */
  protected def assertValid() {
    if (!_isValid) {
      throw new SparkException(
        "Attempted to use %s after it was destroyed (%s) ".format(toString, _destroySite))
    }
  }

  override def toString = "Broadcast(" + id + ")"
}
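Putting the pieces together: the public methods handle the validity check and bookkeeping and then delegate to the three protected hooks. From user code the contract looks roughly like this (an illustrative snippet assuming an active SparkContext named sc):

val bc = sc.broadcast(Seq(1, 2, 3))

bc.value        // assertValid() then getValue(): returns Seq(1, 2, 3)
bc.unpersist()  // doUnpersist(blocking = false): drops cached copies on the Executors;
                // the value is re-sent if the broadcast is used again
bc.destroy()    // marks the broadcast invalid, then doDestroy(blocking = true)
// bc.value     // would now throw SparkException: "Attempted to use Broadcast(...) after it was destroyed"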