
Lecture 4: Mastering Scala Pattern Matching and the Type System, with Spark Source Code Reading

Summary:

The main points of this article:

Scala pattern matching in depth

The Scala type system in depth

Spark source code reading and homework

1. Scala Pattern Matching in Depth

Pattern matching in Scala resembles Java's switch-case, but switch-case matches only on values. Scala can match not just on values but also on types, and even on the contents of collections such as Map and List.

1.1 Matching on Values

scala> def bigData(data:String){
| data match{
| case "Spark" => println("Wow...")
| case "Hadoop" => println("Ok")
// _ matches any value not covered by the cases above
| case _ => println("Something others")
| }
| }
// The return type here is Unit, because println returns Unit
bigData: (data: String)Unit
// Once a case matches, the match is done; execution does not fall through to later cases
scala> bigData("Hadoop")
Ok
// A case can carry a guard, an if condition that adds a second level of checking
case _ if data == "Flink" => println("Cools")
// You can also bind the matched value to a name: data_ receives data's value, and the guard then tests it
case data_ if data_ == "Flink" => println("Cools")

scala> bigData("Flink")
Cools
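
The two guard cases above are fragments; for bigData("Flink") to print Cools they must sit inside the match. A minimal complete version:

def bigData(data: String): Unit = {
  data match {
    case "Spark" => println("Wow...")
    case "Hadoop" => println("Ok")
    // guard: this case fires only when the bound value equals "Flink"
    case data_ if data_ == "Flink" => println("Cools")
    case _ => println("Something others")
  }
}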


1.2 Matching on Types

scala> import java.io._
import java.io._

scala> def exception(e:Exception){
|   e match{
|      case fileException:FileNotFoundException => println("File not Found!!!" + fileException)
|      case _ : Exception => println("Other exception: " + e)
|       }
|       }
exception: (e: Exception)Unit

scala> exception(new FileNotFoundException("Oos..."))
File not Found!!!java.io.FileNotFoundException: Oos...


1.3 Matching on Collections

scala> def data(array:Array[String]){
| array match{
// match an array containing exactly the given element
| case Array("Scala") => println("Scala")
// match on the number of elements, binding each one to a name; no element types needed
| case Array(spark,hadoop,flink) => println(spark + " " + hadoop + " " + flink)
// match any array that starts with the given element; _* matches the rest
| case Array("Spark",_*) => println("Spark....")
| case _ => println("Unknown")
| }
| }
data: (array: Array[String])Unit

scala> data(Array("Scala"))
Scala

scala> data(Array("Spark","Hadoop","Flink"))
Spark Hadoop Flink

scala> data(Array("Spark","Scala"))
Spark....
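
Cases are tried top to bottom, which is why Array("Spark","Hadoop","Flink") hit the three-element case before the Array("Spark",_*) case. The same style of matching works on other collections; a minimal sketch for List using the cons pattern (describe is an illustrative name):

def describe(list: List[Int]): String = list match {
  case Nil => "empty"
  // h :: t splits the list into its head and the remaining tail
  case h :: Nil => "one element: " + h
  case h :: t => "head " + h + ", tail " + t
}

describe(List(1, 2, 3)) // "head 1, tail List(2, 3)"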


1.4 Matching on Case Classes

Case classes are typically used to encapsulate messages for message-based communication in concurrent programming.

scala> case class Person(name:String)

defined class Person

You only declare the fields; at compile time the Scala compiler generates getter methods for them, and it also generates a companion object: behind class Person there is an object Person carrying a compiler-generated apply method. In Person("Spark"), the argument "Spark" is passed to apply, and apply constructs the actual case-class instance from it.

// Passing arguments to Person invokes the generated apply, which returns an instance of case class Person
scala> case class Person(name:String)
defined class Person

scala> Person("Spark")
res8: Person = Person(Spark)

scala> class Person
defined class Person
// In a case class's primary constructor the parameters need no val keyword:
// they become read-only members by default, as if val had been written.
scala> case class Worker(name:String,salary:Double) extends Person
defined class Worker

scala> case class Student(name:String,score:Double) extends Person
defined class Student

scala> def sayHi(person : Person){
| person match{
// the pattern binds the constructor parameters to names; the right-hand side uses them directly
| case Student(name,score) => println(name + " " + score)
| case Worker(name,salary) => println(name + salary)
| case _ => println("Unknown")
| }
| }
sayHi: (person: Person)Unit

scala> sayHi(Worker("Spark",6.5))
Spark6.5

scala> sayHi(Student("Spark",6.6))
Spark 6.6
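
If the parent type is declared sealed, the compiler additionally warns when a match is not exhaustive. A minimal sketch (sealed and the S-suffixed names are additions for illustration, not part of the session above):

sealed trait PersonS
case class WorkerS(name: String, salary: Double) extends PersonS
case class StudentS(name: String, score: Double) extends PersonS

// With a sealed parent, leaving out a case triggers a compile-time
// "match may not be exhaustive" warning.
def sayHi(person: PersonS): String = person match {
  case StudentS(name, score) => name + " " + score
  case WorkerS(name, salary) => name + salary
}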


2. Type Parameters: Generic Classes and Generic Functions

// class Person[T] is a generic class with type parameter T
scala> class Person[T](val content : T){
| def getContent(id : T) = id + "_" + content
| }
defined class Person
warning: previously defined object Person is not a companion to class Person.
Companions must be defined together; you may wish to use :paste mode for this.
// (The warning appears because the earlier case class Person generated a companion object Person in this session.)

// T is fixed to String here, so any argument passed later must be a String
scala> val p = new Person[String]("Spark")
p: Person[String] = Person@15e8f9b2

// "Scala" is a String, matching the declared type parameter
scala> p.getContent("Scala")
res11: String = Scala_Spark

// An Int is not a String, so the call is rejected
scala> p.getContent(100)
<console>:13: error: type mismatch;
 found   : Int(100)
 required: String
       p.getContent(100)
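
The section title also promises generic functions; a minimal sketch of one (getMiddle is an illustrative name):

// A generic function: the compiler infers T from the argument
def getMiddle[T](elems: Array[T]): T = elems(elems.length / 2)

getMiddle(Array("Spark", "Hadoop", "Flink")) // "Hadoop" (T = String)
getMiddle(Array(1, 2, 3))                    // 2       (T = Int)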


3. Upper Bounds and Lower Bounds

An analogy: suppose a company is hiring big-data engineers. "Big-data engineer" is itself a generic role covering many skills, so to narrow it you set a bound, e.g. the engineer must at least know Spark; whatever else they master is the subclass's business. Type parameters often need the same kind of restriction. If we give a type parameter an upper bound, every concrete type supplied must be that bound type or one of its subtypes, which guarantees that the parent type's methods are available wherever the parameter is used, just as a "Spark engineer" is guaranteed to know Spark, with any extra skills left to the subtype.

Upper bound: <:

Here _ is guaranteed to be CompressionCodec or one of its subtypes, which ensures that whatever methods CompressionCodec declares can certainly be called on the value.

def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec])


Lower bound: >: specifies that the type parameter must be a supertype of the given type, or that type itself.
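
A minimal sketch of both bounds (the class names are illustrative):

class Engineer { def work(): String = "engineering" }
class SparkEngineer extends Engineer { def spark(): String = "Spark" }

// Upper bound: T must be Engineer or a subtype, so t.work() is guaranteed to exist
def assign[T <: Engineer](t: T): String = t.work()

// Lower bound: T must be SparkEngineer itself or one of its supertypes
def promote[T >: SparkEngineer](t: T): T = t

assign(new SparkEngineer) // ok: SparkEngineer <: Engineer
promote(new Engineer)     // ok: Engineer >: SparkEngineer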

4. View Bounds

Syntax: <%, which admits any type that can be implicitly converted to the required one. Note that the example below actually uses a context bound, [T : Ordering], which instead requires an implicit Ordering[T] instance in scope:

scala> class Compare[T : Ordering](val n1:T , val n2 : T){
| def bigger(implicit ordered : Ordering[T]) = if(ordered.compare(n1,n2) > 0) n1 else n2
| }
defined class Compare

scala> new Compare[Int](8,3).bigger
res14: Int = 8

scala> new Compare[String]("Spark","Hadoop").bigger
res15: String = Spark

scala> Ordering[String]
res16: scala.math.Ordering[String] = scala.math.Ordering$String$@c262f2f

scala> Ordering[Int]
res17: scala.math.Ordering[Int] = scala.math.Ordering$Int$@1bb96449
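
For comparison, a true view bound written with <% relies on an implicit conversion T => Ordered[T]; it is deprecated in recent Scala versions, so take this as a sketch:

// T <% Ordered[T]: any T convertible to Ordered[T] is accepted,
// so > can be used directly on n1 and n2.
class CompareView[T <% Ordered[T]](val n1: T, val n2: T) {
  def bigger = if (n1 > n2) n1 else n2
}

new CompareView[Int](8, 3).bigger // 8, via the built-in Int => RichInt conversion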


T : ClassTag is another context bound: a ClassTag[T] carries T's runtime class past type erasure, which is what allows an Array[T] to be created inside generic code.

scala> import scala.reflect.ClassTag
import scala.reflect.ClassTag

scala> def mkArray[T : ClassTag](elems: T*) = Array[T](elems: _*)
mkArray: [T](elems: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T]

scala> mkArray(42, 13)
res0: Array[Int] = Array(42, 13)

scala> mkArray("Japan","Brazil","Germany")
res1: Array[String] = Array(Japan, Brazil, Germany)
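
Without the ClassTag context bound the same definition fails to compile, since erasure removes T's runtime class; a minimal sketch (fillArray is an illustrative name):

import scala.reflect.ClassTag

// Without [T: ClassTag] this would not compile ("No ClassTag available for T"),
// because creating the array needs the element's runtime class.
def fillArray[T: ClassTag](n: Int, elem: T): Array[T] = Array.fill(n)(elem)

fillArray(3, "spark") // Array(spark, spark, spark)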


Homework: read the Spark source for RDD, HadoopRDD, SparkContext, Master, and Worker, and analyze every use of pattern matching and type parameters in them.

RDD Source Reading

Some is a case class, so we can match on it directly. ReliableRDDCheckpointData[_] is just ReliableRDDCheckpointData[T] with the type parameter left as a wildcard. The final case _ matches anything; it runs only when none of the earlier cases matched.

checkpointData match {
  case Some(_: ReliableRDDCheckpointData[_]) => logWarning(
    "RDD was already marked for reliable checkpointing: overriding with local checkpoint.")
  case _ =>
}
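
A minimal standalone sketch of the same Option-matching idiom:

val checkpoint: Option[String] = Some("reliable")
checkpoint match {
  case Some(kind) => println("already checkpointed: " + kind)
  case None       => println("not checkpointed yet")
}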


case matching directly on values:

case 0 => Seq.empty
case 1 =>
  val d = rdd.dependencies.head
  debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]], true)
case _ =>


case matching on tuples, combining a typed binding with a literal:

case (desc: String, 0) => s"$partitionStr $desc"
case (desc: String, _) => s"$nextPrefix $desc"


case matching on an array:

case Array(t) => t
case _ => throw new UnsupportedOperationException("empty collection")


A generic method whose return type JavaRDD[T] carries the class's type parameter T, followed by a generic implicit conversion:

def toJavaRDD(): JavaRDD[T] = {
  new JavaRDD(this)(elementClassTag)
}

implicit def rddToAsyncRDDActions[T: ClassTag](rdd: RDD[T]): AsyncRDDActions[T] = {
  new AsyncRDDActions(rdd)
}
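
rddToAsyncRDDActions is an implicit conversion: when an AsyncRDDActions method is called on a plain RDD, the compiler inserts the conversion automatically. A minimal sketch of the same idiom with illustrative names:

import scala.language.implicitConversions

class Meters(val value: Double)
class RichMeters(m: Meters) {
  def inKm: Double = m.value / 1000
}

// When a Meters lacks a requested method, the compiler tries this conversion
implicit def metersToRich(m: Meters): RichMeters = new RichMeters(m)

new Meters(1500).inKm // 1.5 -- compiles thanks to the implicit conversion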


HadoopRDD Source Reading

case matching on exceptions:

case eof: EOFException =>
  finished = true

case e: Exception =>
  if (!ShutdownHookManager.inShutdown()) {
    logWarning("Exception in RecordReader.close()", e)
  }


SparkContext Source Reading

case can also match through a custom extractor object: NonFatal matches any Throwable that is safe to catch:

case NonFatal(e) =>
  logError("Error initializing SparkContext.", e)
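
A minimal standalone sketch of the same extractor:

import scala.util.control.NonFatal

try {
  throw new RuntimeException("boom")
} catch {
  // NonFatal.unapply matches any Throwable that is safe to catch
  case NonFatal(e) => println("non-fatal: " + e.getMessage)
}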


A case clause used as a pattern-matching anonymous function, destructuring each element in place:

val data = br.map { case (k, v) =>
  val bytes = v.getBytes
  assert(bytes.length == recordLength, "Byte array does not have correct length")
  // ... rest of the body elided in this excerpt
}
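
The same idiom standalone: the { case (k, v) => ... } block is a function literal that destructures each pair:

val pairs = Seq(("a", 1), ("b", 2))
val described = pairs.map { case (k, v) => k + " -> " + v }
// described == Seq("a -> 1", "b -> 2")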


case matching on strings:

case "local" =>  "file:" + uri.getPath
case _ =>


new ReliableCheckpointRDD[T] instantiates a generic class with type parameter T:

protected[spark] def checkpointFile[T: ClassTag](path: String): RDD[T] = withScope {
  new ReliableCheckpointRDD[T](this, path)
}


A generic method with two type parameters, U carrying a ClassTag context bound:

def runJob[T, U: ClassTag](
    rdd: RDD[T],
    func: Iterator[T] => U,
    partitions: Seq[Int]): Array[U] = {
  val cleanedFunc = clean(func)
  runJob(rdd, (ctx: TaskContext, it: Iterator[T]) => cleanedFunc(it), partitions)
}


Master Source Reading

Matching on case classes and case objects:

case RequestMasterState => {
  context.reply(MasterStateResponse(
    address.host, address.port, restServerBoundPort,
    workers.toArray, apps.toArray, completedApps.toArray,
    drivers.toArray, completedDrivers.toArray, state))
}

case BoundPortsRequest => {
  context.reply(BoundPortsResponse(address.port, webUi.boundPort, restServerBoundPort))
}

case RequestExecutors(appId, requestedTotal) =>
  context.reply(handleRequestExecutors(appId, requestedTotal))

case KillExecutors(appId, executorIds) =>
  val formattedExecutorIds = formatExecutorIds(executorIds)
  context.reply(handleKillExecutors(appId, formattedExecutorIds))


Worker Source Reading

Matching on specific tuple components:

case (executorId, _) => finishedExecutors.remove(executorId)
case (driverId, _) => finishedDrivers.remove(driverId)


The regex pattern binds the captured group to _result, and the handler then converts it:

case pattern(_result) => _result.toBoolean
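
A standalone sketch of the idiom, where pattern is a scala.util.matching.Regex as in the Worker source:

val pattern = "(true|false)".r
"true" match {
  // the captured group binds to _result
  case pattern(_result) => println(_result.toBoolean) // prints: true
  case _ => println("no match")
}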

