
Java: Saving Spark SQL Query Results with Field Names (Header) to HDFS

2017-12-25 15:44
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.spark.sql.SparkSession;

public class SparkSQLJob {
    private static final Logger LOG = Logger.getLogger(SparkSQLJob.class);

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);
        if (args == null || args.length < 2) {
            LOG.error("Please input the querySQL and savePath parameters!");
            return;
        }
        String querySQL = args[0];
        String savePath = args[1];
        new SparkSQLJob().run(querySQL, savePath);
    }

    private void run(String querySQL, String savePath) {
        SparkSession spark = null;
        try {
            spark = SparkSession
                    .builder()
                    .enableHiveSupport()
                    .getOrCreate();
            LOG.info("applicationId = " + spark.sparkContext().applicationId());
            LOG.info("spark sql = " + querySQL);
            LOG.info("Start Spark Job, data file will be in " + savePath);
            // The "header" option writes the column names as the first line of
            // each output file; it only applies to text-based formats such as "csv".
            spark.sql(querySQL)
                 .write()
                 .format(Constants.WRITE_FORMAT)
                 .option("header", true)
                 .save(savePath);
            LOG.info("Finished Spark Job, data file in " + savePath);
        } catch (Exception error) {
            LOG.error("Spark Job Error", error);
        } finally {
            // Guard against NPE: spark stays null if getOrCreate() itself failed.
            if (spark != null) {
                try {
                    spark.close();
                } catch (Exception error) {
                    LOG.error("Spark Job Close Failed", error);
                }
            }
        }
    }
}
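With option("header", true) and a text format such as CSV, Spark prepends the column names as the first line of every part file it writes under savePath. As a plain-Java sketch of that output shape (no Spark dependency; the class name, column names, and rows are hypothetical, invented for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class HeaderCsvSketch {
    // Writes a CSV file laid out the way a Spark part file looks when
    // option("header", true) is set: field names first, then data rows.
    static Path writeResult() throws IOException {
        Path out = Files.createTempFile("part-00000", ".csv");
        Files.write(out, List.of(
                "id,name,score",   // header line: the field names
                "1,alice,90",      // data rows follow
                "2,bob,85"));
        return out;
    }

    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(writeResult());
        System.out.println(lines.get(0)); // prints "id,name,score"
    }
}
```

Because the header is repeated in every part file, tools that later read the directory (including Spark itself) should be told to expect it, e.g. by reading back with the same header option enabled.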