Spark Streaming and Kafka integration: parameter settings and writing message offsets to MySQL
2018-02-05 16:05
Kafka is used as an advanced data source for Spark Streaming; offsets are managed by the application itself and written to MySQL with the help of scalikejdbc.
1. The following Maven dependencies need to be imported:
<dependency>
    <groupId>org.scalikejdbc</groupId>
    <artifactId>scalikejdbc_2.11</artifactId>
    <version>2.5.0</version>
</dependency>
<!-- scalikejdbc-config_2.11 -->
<dependency>
    <groupId>org.scalikejdbc</groupId>
    <artifactId>scalikejdbc-config_2.11</artifactId>
    <version>2.5.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.2.1</version>
</dependency>
2. The scalikejdbc configuration can be found on the official website.
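For reference, DBs.setup() (used in the code below) reads its connection settings from an application.conf on the classpath. A minimal sketch is shown here; the driver, URL, user, and password values are placeholders rather than values from the original post, so adjust them to your environment:

# src/main/resources/application.conf (values are assumptions)
db.default.driver="com.mysql.jdbc.Driver"
db.default.url="jdbc:mysql://localhost:3306/test?characterEncoding=utf-8"
db.default.user="root"
db.default.password="root"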
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scalikejdbc.config.DBs
import scalikejdbc.{DB, SQL}

/**
 * Reads data from Kafka, manages offsets manually, and stores the offsets in MySQL.
 * The computed results themselves can be persisted elsewhere (Redis is used here).
 */
object WCKafkaMysqlDB_offset {
  Logger.getLogger("org").setLevel(Level.WARN)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("xx")
      // maximum number of messages pulled per Kafka partition per second
      .set("spark.streaming.kafka.maxRatePerPartition", "100")
      // use Kryo serialization
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // enabling RDD compression is recommended
      .set("spark.rdd.compress", "true")
    val ssc = new StreamingContext(conf, Seconds(2))

    // Step 1: Kafka parameters
    val groupId = "1"
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "hdp01:9092,hdp02:9092,hdp03:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> groupId,
      "auto.offset.reset" -> "earliest",
      // offsets are maintained by the application, so disable auto commit
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("test")

    // Step 2: load previously saved offsets from MySQL
    DBs.setup()
    val fromdbOffset: Map[TopicPartition, Long] = DB.readOnly { implicit session =>
      SQL(s"select * from `offset` where groupId = '${groupId}'")
        .map(rs => (new TopicPartition(rs.string("topic"), rs.int("partition")), rs.long("untilOffset")))
        .list().apply()
    }.toMap

    // Start consuming from Kafka: subscribe from the beginning if no offsets were found,
    // otherwise resume from the stored offsets.
    val stream = if (fromdbOffset.size == 0) {
      KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
      )
    } else {
      KafkaUtils.createDirectStream(
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Assign[String, String](fromdbOffset.keys, kafkaParams, fromdbOffset)
      )
    }

    stream.foreachRDD({ rdd =>
      val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

      // process the data (word count)
      val resout: RDD[(String, Int)] = rdd.flatMap(_.value().split(" ")).map((_, 1)).reduceByKey(_ + _)
      resout.foreach(println)
      resout.foreachPartition({ it =>
        val jedis = RedisUtils.getJedis
        it.foreach({ va =>
          jedis.hincrBy("wc", va._1, va._2)
        })
        jedis.close()
      })

      // store the offsets in MySQL inside a scalikejdbc transaction
      DB.localTx { implicit session =>
        for (or <- offsetRanges) {
          SQL("replace into `offset`(groupId,topic,`partition`,untilOffset) values(?,?,?,?)")
            .bind(groupId, or.topic, or.partition, or.untilOffset).update().apply()
        }
      }
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
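The code assumes an `offset` table already exists in MySQL. The original post does not show its DDL; a minimal sketch consistent with the column names used in the SQL statements above (the column types and primary key are assumptions) would be the following. Note that REPLACE INTO only behaves as an upsert when there is a primary key or unique index on (groupId, topic, partition):

CREATE TABLE `offset` (
  `groupId`     VARCHAR(64)  NOT NULL,
  `topic`       VARCHAR(128) NOT NULL,
  `partition`   INT          NOT NULL,
  `untilOffset` BIGINT       NOT NULL,
  PRIMARY KEY (`groupId`, `topic`, `partition`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

RedisUtils is an external helper that the original post references but does not show. A minimal sketch backed by a Jedis connection pool (the Redis host and port are assumptions) could look like this:

import redis.clients.jedis.{Jedis, JedisPool, JedisPoolConfig}

// Hypothetical helper, not part of the original post: hands out pooled Jedis connections.
object RedisUtils {
  // Redis host/port are assumptions; adjust to your environment.
  private val pool = new JedisPool(new JedisPoolConfig, "hdp01", 6379)

  def getJedis: Jedis = pool.getResource
}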