
"3000 Disciples" Internal Training Video (Leaked Edition), Lesson 4: Mastering Scala Pattern Matching and the Type System, with Spark Source Code Reading

2016-08-11 21:29


Pattern matching resembles Java's switch/case, but Scala's version is more powerful: besides literal values, it can match on types, collections, and classes (notably case classes).

Option indicates whether a value is present or not.

scala> def bigData(data: String): Unit = {
         data match {
           case "Spark" => println("Wow!")
           case "Hadoop" => println("Ok")
           case _ => println("Something else")
         }
       }

scala> def bigData(data: String): Unit = {
         data match {
           case "Spark" => println("Wow!")
           case "Hadoop" => println("Ok")
           case _ if data == "Flink" => println("Cool")
           case _ => println("Something else")
         }
       }

scala> def bigData(data: String): Unit = {
         data match {
           case "Spark" => println("Wow!")
           case "Hadoop" => println("Ok")
           case data_ if data_ == "Flink" => println("Cool " + data_)
           case _ => println("Something else")
         }
       }
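Calling the last variant shows each kind of case firing in turn (a quick sketch; the outputs in the comments are what the matches produce):

```scala
def bigData(data: String): Unit = {
  data match {
    case "Spark" => println("Wow!")                              // literal match
    case "Hadoop" => println("Ok")
    case data_ if data_ == "Flink" => println("Cool " + data_)   // bound variable + guard
    case _ => println("Something else")                          // wildcard
  }
}

bigData("Spark")   // prints: Wow!
bigData("Flink")   // prints: Cool Flink
bigData("Storm")   // prints: Something else
```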


Matching on types in Scala

import java.io.FileNotFoundException

def exception(e: Exception): Unit = {
  e match {
    case fileException: FileNotFoundException => println("File not found: " + fileException)
    case _: Exception => println("Exception getting thread dump from executor")
  }
}
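A minimal check of the type match above (the exception instances and messages are made up for illustration):

```scala
import java.io.FileNotFoundException

def exception(e: Exception): Unit = {
  e match {
    case fileException: FileNotFoundException => println("File not found: " + fileException)
    case _: Exception => println("Exception getting thread dump from executor")
  }
}

exception(new FileNotFoundException("demo.conf"))   // hits the typed case
exception(new IllegalArgumentException("oops"))     // falls through to the wildcard
```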


Matching on collections in Scala

scala> def data(array: Array[String]): Unit = {
         array match {
           case Array("Scala") => println("Scala")
           case Array(spark, hadoop, flink) => println(spark + " : " + hadoop + " : " + flink)
           case Array("Spark", _*) => println("Spark ...")
           case _ => println("Unknown")
         }
       }

scala> data(Array("Spark"))
Spark ...

scala> data(Array("Scala","Sfwewef","Kafaka"))
Scala : Sfwewef : Kafaka


Case classes in Scala

A case class is like a JavaBean whose members are read-only by default, which makes it a good fit for message passing in concurrent programming. The primary constructor's parameters are vals by default, so only getters are generated. The compiler also generates a companion object for each case class, and that companion provides an apply method: the arguments you pass at the call site become the arguments of apply.

After apply receives those arguments, it constructs the actual case class instance for us, so no `new` is needed.

scala> class Person
defined class Person

scala> case class Worker(name: String, salary: Double) extends Person
defined class Worker

scala> case class Student(name: String, score: Double) extends Person
defined class Student

scala> def sayHi(person: Person): Unit = {
         person match {
           case Student(name, score) => println("I'm a student " + name + score)
           case Worker(name, salary) => println("I'm a worker " + name + salary)
           case _ => println("Unknown")
         }
       }

scala> sayHi(Student("Spark", 6.6))
I'm a student Spark6.6

scala> sayHi(Worker("Spark", 6.5))
I'm a worker Spark6.5


In Option, Some means a value is present and None means there is no value.
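A small sketch of matching on Option; Map#get returns an Option, which makes a convenient source of Some/None values:

```scala
def show(result: Option[Int]): String = result match {
  case Some(value) => "Got: " + value   // a value is present
  case None        => "Nothing"         // no value
}

val m = Map("a" -> 1)
println(show(m.get("a")))   // prints: Got: 1
println(show(m.get("b")))   // prints: Nothing
```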

A case class produces a new instance each time it is used; a case object is itself an instance, and a globally unique one.
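The difference can be sketched with a hypothetical message protocol (the names are made up): each Register(...) call builds a fresh instance through apply, while Shutdown is one shared instance.

```scala
sealed trait Message
case class Register(name: String) extends Message   // new instance per use
case object Shutdown extends Message                // single, globally unique instance

assert(Register("spark") == Register("spark"))      // equal by structure...
assert(!(Register("spark") eq Register("spark")))   // ...but two distinct objects
assert(Shutdown eq Shutdown)                        // always the very same object
```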

Type parameters

Generics, i.e. type parameters: the parameters themselves also have their own types.

scala> class Person[T](val content: T) {
         def getContent(id: T) = id + " : " + content
       }
defined class Person

scala> val p = new Person[String]("Spark")
p: Person[String] = Person@69ffdaa8

scala> p.getContent("Scala")
res20: String = Scala : Spark

scala> p.getContent(100)
<console>:11: error: type mismatch;
found   : Int(100)
required: String
p.getContent(100)
^


isInstanceOf is similar to Java's instanceof (asInstanceOf performs the actual cast).

Type bounds: <: declares an upper bound, meaning the type argument must be a subtype of the bound, so all of the bound's methods can be called on it.
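A sketch of an upper bound: because T <: Comparable[T], compareTo is guaranteed to exist on T (java.lang.String satisfies the bound; the Pair class here is made up for illustration).

```scala
// T must implement Comparable[T], so compareTo is always available.
class Pair[T <: Comparable[T]](val first: T, val second: T) {
  def smaller: T = if (first.compareTo(second) < 0) first else second
}

println(new Pair("Spark", "Hadoop").smaller)   // prints: Hadoop
```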

>: declares a lower bound (less commonly used): the type argument must be a given class or one of its supertypes.
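A sketch of a lower bound with a made-up hierarchy: U >: Dog lets the compiler widen the list's element type up to a common supertype.

```scala
class Animal(val name: String)
class Dog(name: String) extends Animal(name)

// U must be Dog or a supertype of Dog; prepending an Animal widens U to Animal.
def prepend[U >: Dog](elem: U, rest: List[U]): List[U] = elem :: rest

val dogs: List[Dog] = List(new Dog("Rex"))
val animals: List[Animal] = prepend(new Animal("generic"), dogs)
println(animals.map(_.name))   // prints: List(generic, Rex)
```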

View bounds support an implicit conversion of the type itself. The syntax is <% : T <% V means T can be implicitly converted to V. (View bounds are deprecated in modern Scala; an implicit parameter of type T => V is the replacement.)
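A sketch of the idea: Int does not extend Ordered[Int], but the standard library provides an implicit conversion to Ordered, so a view bound admits Int. The class name is made up; the block shows the modern implicit-parameter form, which is what `T <% Ordered[T]` desugars to.

```scala
// Deprecated form:   class ViewPair[T <% Ordered[T]](val first: T, val second: T)
// Modern equivalent: an implicit conversion parameter T => Ordered[T]
class ViewPair[T](val first: T, val second: T)(implicit ev: T => Ordered[T]) {
  def smaller: T = if (first < second) first else second   // '<' via the conversion
}

println(new ViewPair(8, 3).smaller)   // prints: 3
```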

Context bounds, declared as [T : Ordering], inject an implicit value (here an Ordering[T]) from the surrounding context, and the injection happens automatically.

scala> class Compare[T : Ordering](val n1: T, val n2: T) {
     |   def bigger(implicit ordered: Ordering[T]) = if (ordered.compare(n1, n2) > 0) n1 else n2
     | }
defined class Compare
defined class Compare

scala> new Compare[Int](8,3).bigger
res23: Int = 8

scala> new Compare[String]("Spark","Hadoop").bigger
res24: String = Spark

scala> Ordering[String]
res25: scala.math.Ordering[String] = scala.math.Ordering$String$@550fa96f

scala> Ordering[Int]
res26: scala.math.Ordering[Int] = scala.math.Ordering$Int$@210635fd


[T : Manifest] classified types; Manifest later evolved into ClassTag, which preserves just enough class information to recover the erased type at runtime — needed, for example, to construct an Array[T].

scala> import scala.reflect.ClassTag
import scala.reflect.ClassTag

scala> def mkArray[T : ClassTag](elems: T*) = Array[T](elems: _*)
mkArray: [T](elems: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T]

scala> mkArray(42, 13)
res0: Array[Int] = Array(42, 13)

scala> mkArray("Japan","Brazil","Germany")
res1: Array[String] = Array(Japan, Brazil, Germany)


class Person[+T] declares covariance: when the element types have a parent/child relationship, the parameterized types inherit that relationship too. If the inheritance direction is the same, it is covariance (+T); if reversed, contravariance (-T).
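A sketch of covariance with a made-up container: because Box is declared with +T, Box[Dog] is a subtype of Box[Animal], mirroring Dog <: Animal.

```scala
class Animal
class Dog extends Animal

class Box[+T](val content: T)            // covariant: element subtyping flows through

val dogBox: Box[Dog] = new Box(new Dog)
val animalBox: Box[Animal] = dogBox      // compiles only because of +T

// With an invariant Box[T] the assignment above would not compile;
// with -T (contravariance) the allowed direction would be reversed.
println(animalBox.content.isInstanceOf[Dog])   // prints: true
```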

Dependency[_] is shorthand for the existential type Dependency[T] forSome { type T }: a Dependency of some unknown element type. Spark's RDD, for example, stores its dependencies as Seq[Dependency[_]].
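A sketch of the wildcard; Dependency here is a made-up stand-in for Spark's class. A Dependency[_] parameter accepts a Dependency of any element type, which is how an RDD can hold its dependencies without fixing T.

```scala
class Dependency[T](val id: Int)   // hypothetical stand-in for Spark's Dependency[T]

// Dependency[_] is the existential shorthand for Dependency[T] forSome { type T }.
def describe(dep: Dependency[_]): String = "dependency #" + dep.id

println(describe(new Dependency[String](1)))   // prints: dependency #1
println(describe(new Dependency[Int](2)))      // prints: dependency #2
```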
Tags: scala