Up and running with Apache Spark on Apache Kudu
2017-03-06 09:35
After the GA of Apache Kudu in Cloudera CDH 5.10, we take a look at the Apache Spark on Kudu integration, share code snippets, and explain how to get up and running quickly, as Kudu is already a first-class citizen in Spark’s ecosystem.
As the Apache Kudu development team celebrates the initial 1.0 release launched on September 19, and the most recent 1.2.0 version now GA as part of Cloudera's CDH 5.10 release, we take a look at Apache Spark and the capabilities already in place for working with Kudu.
The Spark integration with Kudu supports:
DDL operations (Create/Delete)
Native Kudu RDD
Native Kudu Data Source, for DataFrame integration
Reading from Kudu
Performing insert/update/upsert/delete from Kudu
Predicate pushdown
Schema mapping between Kudu and Spark SQL
Kudu IS:
a replicated and distributed storage engine for fast analytics and fast data
a storage engine that provides a balance between high throughput for large scans and low latency for random access and updates
a storage engine that provides database-like semantics and a relational data model
Kudu is NOT:
a file system
an application running on HDFS
a replacement for HDFS nor for HBase
Kudu is configured given directories on pre-defined, typical Linux file systems where the table data actually resides. You may dedicate file systems to Kudu alone, or even assign a directory for Kudu table data next to an already existing directory servicing HDFS. For example, HDFS may be assigned /data1/dfs for HDFS data while Kudu may be configured to store its data in /data1/kudu.
SQL access to Kudu tables is available through SQL engines that support Kudu as the storage layer; currently, Impala and Spark SQL provide that capability. Kudu is a complementary technology to HDFS and HBase: it provides both fast sequential scans and fast random access, though its scans are not as fast as scans over Parquet on HDFS, and its random access is not as fast as HBase's. It also does not provide the NoSQL ad hoc column creation capabilities of HBase, nor the variety of data formats that can be stored in HDFS, as Kudu mandates structure and strong typing for the content it stores.
Spark is a processing engine running on top of Kudu, allowing one to integrate various datasets, whether they be on HDFS, HBase, Kudu or other storage engines, into a single application providing a unified view of your data. Spark SQL in particular nicely aligns with Kudu as Kudu tables already contain a strongly-typed, relational data model.
Setting up your Application
Always refer to the latest documentation found in the Developing Applications with Apache Kudu online documentation. All examples in this blog post may be found on GitHub.
At a high level, start your Spark application development by defining the following in your pom.xml file, as we build our project using Maven.
Maven repository element
<repository>
  <id>cdh.repo</id>
  <name>Cloudera Repositories</name>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
  <snapshots>
    <enabled>false</enabled>
  </snapshots>
</repository>
Maven artifact dependencies
<dependency>
  <groupId>org.apache.kudu</groupId>
  <artifactId>kudu-client</artifactId>
  <version>1.2.0-cdh5.10.0</version>
</dependency>
<dependency>
  <groupId>org.apache.kudu</groupId>
  <artifactId>kudu-spark_2.10</artifactId>
  <version>1.2.0-cdh5.10.0</version>
</dependency>
Introducing the KuduContext
By now you may have heard about several contexts such as SparkContext, SQLContext, HiveContext, and SparkSession; with Kudu, we introduce the KuduContext. This is the primary serializable object that can be broadcast in your Spark application, and it interacts with the Kudu Java client on your behalf in your Spark executors.
KuduContext provides the methods needed to perform DDL operations, interface with the native Kudu RDD, perform updates/inserts/deletes on your data, convert data types from Kudu to Spark, and more.
You don't need to worry about much of the implementation behind this class. Just know that such a context exists, and that you will likely interact with it and the DataFrame APIs when working with Kudu in Spark.
Common preamble code
// Imports used throughout the examples
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.kudu.spark.kudu._

// Create a Spark and SQL context
// (sparkConf is your SparkConf, e.g. new SparkConf().setAppName("kudu-spark-example"))
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)

// Comma-separated list of Kudu masters with port numbers
val master1 = "ip-10-13-4-249.ec2.internal:7051"
val master2 = "ip-10-13-5-150.ec2.internal:7051"
val master3 = "ip-10-13-5-56.ec2.internal:7051"
val kuduMasters = Seq(master1, master2, master3).mkString(",")

// Create an instance of a KuduContext
val kuduContext = new KuduContext(kuduMasters)
Kudu DDL
We start with examples of how to define your Kudu tables via Spark. First, the following shows the simple, yet ever-useful, table 'exists' and 'delete' methods.
Table exists and delete
// Specify a table name
var kuduTableName = "spark_kudu_tbl"

// Check if the table exists, and drop it if it does
if (kuduContext.tableExists(kuduTableName)) {
  kuduContext.deleteTable(kuduTableName)
}
Creating a Kudu table from Spark then involves the following steps:
Provide the table name
Provide the schema
Provide the primary key
Define important options, such as your partitioning scheme
Call the create table API.
Be sure to refer to Apache Kudu Schema Design documentation for hints and tips on defining your table appropriately for your use case. Keep in mind that schema design is the single most important thing within your control to maximize the performance of your Kudu cluster.
Create table
// Imports for the Spark SQL schema types and the Kudu table options
import org.apache.spark.sql.types._
import org.apache.kudu.client.CreateTableOptions

// 1. Give your table a name
kuduTableName = "spark_kudu_tbl"

// 2. Define a schema
val kuduTableSchema = StructType(
  //          col name  type         nullable?
  StructField("name", StringType , false) ::
  StructField("age" , IntegerType, true ) ::
  StructField("city", StringType , true ) :: Nil)

// 3. Define the primary key
val kuduPrimaryKey = Seq("name")

// 4. Specify any further options
val kuduTableOptions = new CreateTableOptions()
kuduTableOptions.
  setRangePartitionColumns(List("name").asJava).
  setNumReplicas(3)

// 5. Call create table API
kuduContext.createTable(
  // Table name, schema, primary key and options
  kuduTableName, kuduTableSchema, kuduPrimaryKey, kuduTableOptions)
To make the “asJava” method available, remember to import the JavaConverters libraries.
Import JavaConverters to specify Java types
import scala.collection.JavaConverters._
Once the table is created, you can view it in the Kudu master web UI: there you can see the list of tablets representing this table, along with which host is currently acting as the leader for each tablet. Finally, if you do decide in the future to access this table using Impala, the CREATE TABLE statement is shown for you there as a reference.
DataFrames and Kudu
Kudu comes with a custom, native data source for Kudu tables, so the DataFrame APIs are tightly integrated. To demonstrate this, we first define a DataFrame to work with, then show the capabilities available through this API.
Define your DataFrame
DataFrames can be created from many sources, including an existing RDD, a Hive table, or Spark data sources. Here, we define a tiny dataset of Customers, convert it into an RDD, and from there get our DataFrame.
Creating a simple dataset, converting it into a DataFrame
// Define your case class *outside* your main method
case class Customer(name:String, age:Int, city:String)

// This allows us to implicitly convert an RDD to a DataFrame
import sqlContext.implicits._

// Define a list of customers based on the case class already defined above
val customers = Array(
  Customer("jane", 30, "new york"),
  Customer("jordan", 18, "toronto"))

// Create an RDD out of the customers Array
val customersRDD = sc.parallelize(customers)

// Now, using reflection, this RDD can easily be converted to a DataFrame.
// Ensure you do the `import sqlContext.implicits._` above to have the
// toDF() function available to you.
val customersDF = customersRDD.toDF()
DML – Insert, Insert-Ignore, Upsert, Update, Delete with KuduContext
Kudu supports a number of DML-type operations, several of which are included in the Spark on Kudu integration. Supported Spark operations on Kudu DataFrame objects include:
INSERT – Insert rows of the DataFrame into the Kudu table. Note that although the API fully supports INSERT, its use within Spark is discouraged: Spark tasks may be re-executed, which means rows that were already inserted may be submitted for insertion again, and INSERT fails if the rows already exist. Instead, we encourage the use of INSERT_IGNORE, described below.
INSERT-IGNORE – Insert rows of the DataFrame into the Kudu table. Ignore records if they already exist in the Kudu table.
DELETE – Delete rows found in the DataFrame from the Kudu table.
UPSERT – Rows in the DataFrame are updated in the Kudu table if they exist, otherwise they are inserted.
UPDATE – Rows in the DataFrame are updated in the Kudu table.
It is recommended to use the KuduContext for these operations, although as you will see later, many of these can also be done through the DataFrame API.
Insert data
// Define Kudu options used by various operations
val kuduOptions: Map[String, String] = Map(
  "kudu.table"  -> kuduTableName,
  "kudu.master" -> kuduMasters)

// 1. Specify your Kudu table name
kuduTableName = "spark_kudu_tbl"

// 2. Insert our customer DataFrame data set into the Kudu table
kuduContext.insertRows(customersDF, kuduTableName)

// 3. Read back the records from the Kudu table to see them dumped
sqlContext.read.options(kuduOptions).kudu.show

+------+---+--------+
|  name|age|    city|
+------+---+--------+
|  jane| 30|new york|
|jordan| 18| toronto|
+------+---+--------+
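Since plain inserts are discouraged when Spark may re-execute tasks, the insert-ignore variant is the safer choice in practice. The following is a minimal sketch of that path, reusing the same customersDF and table name and using KuduContext's insert-ignore method (insertIgnoreRows); confirm the method name against the API docs for your kudu-spark version.

// Insert the rows, silently skipping any row whose primary key already
// exists in the Kudu table (the re-execution-safe alternative to insertRows)
kuduContext.insertIgnoreRows(customersDF, kuduTableName)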
Delete data
// 1. Specify your Kudu table name
kuduTableName = "spark_kudu_tbl"

// 2. Register our customer DataFrame as a temporary table so we can
//    refer to it in Spark SQL
customersDF.registerTempTable("customers")

// 3. Filter and create a keys-only DataFrame to be deleted from our table
val deleteKeysDF = sqlContext.sql("select name from customers where age > 20")

// 4. Delete the rows from our Kudu table
kuduContext.deleteRows(deleteKeysDF, kuduTableName)

// 5. Read data from Kudu table
sqlContext.read.options(kuduOptions).kudu.show

+------+---+-------+
|  name|age|   city|
+------+---+-------+
|jordan| 18|toronto|
+------+---+-------+
Our customer Jordan just had a birthday, and we’ve onboarded a number of new customers. We want to perform an upsert now, where ‘jordan’ will get an updated record, and we’ll have several new customers inserted.
Upsert data
// 1. Specify your Kudu table name
kuduTableName = "spark_kudu_tbl"

// 2. Define the dataset we want to upsert
val newAndChangedCustomers = Array(
  Customer("michael", 25, "chicago"),
  Customer("denise" , 43, "winnipeg"),
  Customer("jordan" , 19, "toronto"))

// 3. Create our DataFrame
val newAndChangedRDD = sc.parallelize(newAndChangedCustomers)
val newAndChangedDF  = newAndChangedRDD.toDF()

// 4. Call upsert with our new and changed customers DataFrame
kuduContext.upsertRows(newAndChangedDF, kuduTableName)

// 5. Show contents of Kudu table
sqlContext.read.options(kuduOptions).kudu.show

+-------+---+--------+
|   name|age|    city|
+-------+---+--------+
| denise| 43|winnipeg|
| jordan| 19| toronto|
|michael| 25| chicago|
+-------+---+--------+
Update data
// 1. Specify your Kudu table name
kuduTableName = "spark_kudu_tbl"

// 2. Create a DataFrame of updated rows
val modifiedCustomers    = Array(Customer("michael", 25, "toronto"))
val modifiedCustomersRDD = sc.parallelize(modifiedCustomers)
val modifiedCustomersDF  = modifiedCustomersRDD.toDF()

// 3. Call update with our modified customers DataFrame
kuduContext.updateRows(modifiedCustomersDF, kuduTableName)

// 4. Show contents of Kudu table
sqlContext.read.options(kuduOptions).kudu.show

+-------+---+--------+
|   name|age|    city|
+-------+---+--------+
| denise| 43|winnipeg|
| jordan| 19| toronto|
|michael| 25| toronto|
+-------+---+--------+
Kudu Native RDD
The Spark integration with Kudu also provides a native Kudu RDD. Reading through it gives you an RDD[Row]; the only thing you need to supply is the list of columns you want to project from the underlying table, and away you go.
Reading with Native Kudu RDD
// 1. Specify a table name
kuduTableName = "spark_kudu_tbl"

// 2. Specify the columns you want to project
val kuduTableProjColumns = Seq("name", "age")

// 3. Read table, represented now as RDD
val custRDD = kuduContext.kuduRDD(sc, kuduTableName, kuduTableProjColumns)

// We get an RDD[Row] coming back to us. Let's send it through a map to pull
// out the name and age into the form of a tuple
val custTuple = custRDD.map { case Row(name: String, age: Int) => (name, age) }

// Print it on the screen just for fun
custTuple.collect().foreach(println(_))

(jordan,19)
(michael,25)
(denise,43)
Read and Write – using the DataFrame API
While we can perform a number of manipulations through the KuduContext as shown above, we also have the ability to call the read/write APIs directly on the data source itself. To set up a read, we specify options for the Kudu table: the name of the table we want to read, and the list of Kudu master servers of the cluster servicing that table.
DataFrame read
// Read our table into a DataFrame - reusing kuduOptions specified above
val customerReadDF = sqlContext.read.options(kuduOptions).kudu

// Show our table on the screen
customerReadDF.show()

+-------+---+--------+
|   name|age|    city|
+-------+---+--------+
| jordan| 19| toronto|
|michael| 25| toronto|
| denise| 43|winnipeg|
+-------+---+--------+
When writing, note that "append" mode with Kudu defaults to "upsert" behaviour: rows are updated if the key already exists, otherwise they are inserted into the table.
DataFrame write
// Create a small dataset to write (append) to the Kudu table
val customersAppend = Array(
  Customer("bob", 30, "boston"),
  Customer("charlie", 23, "san francisco"))

// Create our DataFrame out of our dataset
val customersAppendDF = sc.parallelize(customersAppend).toDF()

// Specify the table name
kuduTableName = "spark_kudu_tbl"

// Call the write method on our DataFrame directly in "append" mode
customersAppendDF.write.options(kuduOptions).mode("append").kudu

// See results of our append
sqlContext.read.options(kuduOptions).kudu.show()

+-------+---+-------------+
|   name|age|         city|
+-------+---+-------------+
|    bob| 30|       boston|
|charlie| 23|san francisco|
| denise| 43|     winnipeg|
| jordan| 19|      toronto|
|michael| 25|      toronto|
+-------+---+-------------+
Spark SQL INSERT
// Quickly prepare a Kudu table we will use as our source table in Spark SQL.
// First, some sample data
val srcTableData = Array(
  Customer("enzo", 43, "oakland"),
  Customer("laura", 27, "vancouver"))

// Create our DataFrame
val srcTableDF = sc.parallelize(srcTableData).toDF()

// Register our source table
srcTableDF.registerTempTable("source_table")

// Specify Kudu table name we will be inserting into
kuduTableName = "spark_kudu_tbl"

// Register your table as a Spark SQL table.
// Remember that kuduOptions stores the kuduTableName already as well as
// the list of Kudu masters.
sqlContext.read.options(kuduOptions).kudu.registerTempTable(kuduTableName)

// Use Spark SQL to INSERT (treated as UPSERT by default) into the Kudu table
sqlContext.sql(s"INSERT INTO TABLE $kuduTableName SELECT * FROM source_table")

// See results of our insert
sqlContext.read.options(kuduOptions).kudu.show()

+-------+---+-------------+
|   name|age|         city|
+-------+---+-------------+
|michael| 25|      toronto|
|    bob| 30|       boston|
|charlie| 23|san francisco|
| denise| 43|     winnipeg|
|   enzo| 43|      oakland|
| jordan| 19|      toronto|
|  laura| 27|    vancouver|
+-------+---+-------------+
Predicate pushdown
Pushing predicate evaluation down into the Kudu engine improves performance, as it reduces the amount of data that needs to flow back to the Spark engine for further evaluation and processing.
The predicates currently supported for pushdown through the Spark API are:
Equal to (=)
Greater than (>)
Greater than or equal (>=)
Less than (<)
Less than or equal (<=)
Hence, such statements in Spark SQL will push the predicate evaluation down into Kudu’s storage engine, improving overall performance.
Predicate pushdown
// Kudu table name
kuduTableName = "spark_kudu_tbl"

// Register Kudu table as a Spark SQL temp table
sqlContext.read.options(kuduOptions).kudu.
  registerTempTable(kuduTableName)

// Now refer to that temp table name in our Spark SQL statement
val customerNameAgeDF = sqlContext.
  sql(s"""SELECT name, age FROM $kuduTableName WHERE age >= 30""")

// Show the results
customerNameAgeDF.show()

+------+---+
|  name|age|
+------+---+
|   bob| 30|
|denise| 43|
|  enzo| 43|
+------+---+
One way to confirm the pushdown is to watch the scans in the Kudu web UI, but that would have to be reviewed while the query is running, which of course is not practical, especially when scans complete too quickly to spot in the UI.
Using Spark's explain() function, you can instead validate that your predicates are being pushed down, as shown below (continuing from the previous example with the predicate age >= 30):
customerNameAgeDF.explain()

== Physical Plan ==
Scan org.apache.kudu.spark.kudu.KuduRelation@682615b3[name#6,age#7] PushedFilters: [GreaterThanOrEqual(age,30)]
If additional supported predicates appear in the query, for example name > 'a', they are pushed down as well and show up alongside the first filter:

PushedFilters: [GreaterThanOrEqual(age,30), GreaterThan(name,a)]
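Pushdown is not limited to Spark SQL statements: filters expressed through the DataFrame API go through the same data source and are eligible for pushdown too. The following is a minimal sketch, reusing the kuduOptions defined earlier; the exact plan output will vary with your environment.

// Express the predicate with the DataFrame API instead of a SQL string;
// the supported comparison filters are still pushed down to Kudu
val olderCustomersDF = sqlContext.read.options(kuduOptions).kudu
  .select("name", "age")
  .filter("age >= 30")

// Inspect the physical plan to confirm the PushedFilters entry
olderCustomersDF.explain()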
Schema Mapping
Kudu and Spark SQL are altogether separate entities and engines: Spark is a processing framework, while Kudu is a storage engine, and each has its own data types and schemas. The integration maps Spark SQL schemas to Kudu schemas under the covers for you (with a few current limitations; see Spark Integration Known Issues and Limitations for details), so no additional work needs to be done on your end.
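One easy way to see the result of this mapping is to print the schema of a DataFrame read back from Kudu. A minimal sketch, reusing the kuduOptions from earlier; the output shown is roughly what you would expect for the example table, not captured verbatim.

// Print the Spark SQL schema derived from the Kudu table's schema
sqlContext.read.options(kuduOptions).kudu.printSchema()

// Expected to look roughly like:
// root
//  |-- name: string (nullable = false)
//  |-- age: integer (nullable = true)
//  |-- city: string (nullable = true)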
Conclusion
In this blog post, we've walked you through several aspects of the Apache Spark on Kudu integration, with examples ranging from setting up your application build properties, to defining your Kudu tables, to the various ways of interacting with Kudu tables through Spark. Kudu is now a first-class citizen in the Spark ecosystem, and hopefully you can start processing all your data through Spark, whether it lives in Kudu or in any other Hadoop storage engine.