
Spark MLlib Feature Processing: Binarizer (Binarization), Principle and Practice

2016-11-12 11:19

Principle

A continuous feature is binarized against a threshold: values greater than the threshold become 1.0, and values less than or equal to the threshold become 0.0.
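In code, the rule is just a per-value comparison against the threshold. A minimal illustrative sketch (the binarize helper below is hypothetical and only demonstrates the rule; it is not part of Spark's API):

def binarize(value: Double, threshold: Double): Double =
  if (value > threshold) 1.0 else 0.0

binarize(0.8, 0.5) // 1.0
binarize(0.2, 0.5) // 0.0
binarize(0.5, 0.5) // 0.0: values equal to the threshold map to 0.0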

Code Example

The complete example below applies a Binarizer with a threshold of 0.5 to the feature column of a small DataFrame:

import org.apache.spark.ml.feature.Binarizer
import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.{SparkContext, SparkConf}

object BinarizerExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("BinarizerExample").setMaster("local[8]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val data: Array[(Int, Double)] = Array((0, 0.1), (1, 0.8), (2, 0.2))
    // Convert the Array into a DataFrame with columns "label" and "feature"
    val dataFrame: DataFrame = sqlContext.createDataFrame(data).toDF("label", "feature")

    // Threshold of 0.5: values greater than 0.5 become 1.0, the rest 0.0
    val binarizer: Binarizer = new Binarizer()
      .setInputCol("feature")
      .setOutputCol("binarized_feature")
      .setThreshold(0.5)

    // transform applies the binarization to the input column
    // Spark source: udf { in: Double => if (in > td) 1.0 else 0.0 }
    val binarizedDataFrame = binarizer.transform(dataFrame)
    val binarizedFeatures = binarizedDataFrame.select("label", "feature", "binarized_feature")
    binarizedFeatures.show()

    sc.stop()
  }
}

// Output
//+-----+-------+-----------------+
//|label|feature|binarized_feature|
//+-----+-------+-----------------+
//|    0|    0.1|              0.0|
//|    1|    0.8|              1.0|
//|    2|    0.2|              0.0|
//+-----+-------+-----------------+
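The example above uses the Spark 1.x entry points (SparkContext and SQLContext). On Spark 2.x and later, SparkSession is the unified entry point; a minimal equivalent sketch, assuming Spark 2.x or later (the Binarizer API itself is unchanged):

import org.apache.spark.ml.feature.Binarizer
import org.apache.spark.sql.SparkSession

object BinarizerExample2 {
  def main(args: Array[String]): Unit = {
    // SparkSession replaces SparkContext/SQLContext as the entry point in Spark 2.x+
    val spark = SparkSession.builder()
      .appName("BinarizerExample2")
      .master("local[8]")
      .getOrCreate()

    val dataFrame = spark.createDataFrame(Seq((0, 0.1), (1, 0.8), (2, 0.2)))
      .toDF("label", "feature")

    // Same Binarizer usage: values > 0.5 become 1.0, otherwise 0.0
    val binarizer = new Binarizer()
      .setInputCol("feature")
      .setOutputCol("binarized_feature")
      .setThreshold(0.5)

    binarizer.transform(dataFrame)
      .select("label", "feature", "binarized_feature")
      .show()

    spark.stop()
  }
}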