
HDFS Resumable Upload (Write File) Feature

2015-12-09 11:59
This is actually part of an HDFS utility class. First, the static setup and a whole-file write:
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static Configuration configuration = null;
public static FileSystem fileSystem = null;

static {
    try {
        if (null == configuration) {
            configuration = new Configuration();
        }
        if (null == fileSystem) {
            fileSystem = FileSystem.get(URI.create(RockyConstants.HDFS_PATH), configuration);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}

/**
 * Store a whole file into HDFS in one shot, overwriting any existing file.
 */
public static boolean putHDFS(String filePath, byte[] info) {
    // try-with-resources closes the stream even if write() throws.
    try (FSDataOutputStream writer = fileSystem.create(new Path(filePath), true)) {
        writer.write(info);
        writer.flush();
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
    return true;
}
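putHDFS expects the entire payload as a byte[]. A minimal sketch of producing that array from a local file with java.nio; since the putHDFS call itself needs a live cluster, it is shown commented out, and the utility class name and HDFS target path are assumptions:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PutExample {
    public static void main(String[] args) throws IOException {
        // Write a small sample file, then read it back into memory.
        Path local = Files.createTempFile("sample", ".txt");
        Files.write(local, "hello hdfs".getBytes(StandardCharsets.UTF_8));

        byte[] info = Files.readAllBytes(local);
        System.out.println("payload bytes: " + info.length); // prints "payload bytes: 10"

        // With a live cluster, the array would be handed to the utility:
        // HdfsUtil.putHDFS("/rocky/sample.txt", info);  // hypothetical class/path

        Files.deleteIfExists(local);
    }
}
```

Note that this loads the whole file into memory, which is only reasonable for small files; large files should go through the chunked path below.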
/**
 * Resumable upload: the first call creates the file, later calls append to it.
 */
public static void continueUpload(String targetPath, byte[] info) throws IOException {
    Path fsPath = new Path(targetPath);
    if (!fileSystem.exists(fsPath)) {
        // First chunk: create the file.
        putHDFS(targetPath, info);
    } else {
        // Subsequent chunks: append (the cluster must support append).
        try (FSDataOutputStream writer = fileSystem.append(fsPath)) {
            writer.write(info);
            writer.flush();
        }
    }
}
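continueUpload is meant to be called once per chunk. A sketch of the driver loop that slices input into fixed-size chunks, trimming the final short chunk; the chunk size, target path, and utility class name are assumptions, and the cluster call is commented out so the sketch runs locally:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class ChunkedUploadExample {
    static final int CHUNK_SIZE = 4; // tiny for demonstration; use 64 KB or more in practice

    public static void main(String[] args) throws IOException {
        byte[] data = "abcdefghij".getBytes("UTF-8"); // stands in for a local file
        try (InputStream in = new ByteArrayInputStream(data)) {
            byte[] buf = new byte[CHUNK_SIZE];
            int read;
            int chunk = 0;
            while ((read = in.read(buf)) != -1) {
                // Copy only the bytes actually read, so the last chunk is not padded.
                byte[] info = (read == buf.length) ? buf.clone() : Arrays.copyOf(buf, read);
                System.out.println("chunk " + chunk++ + ": " + info.length + " bytes");
                // With a live cluster:
                // HdfsUtil.continueUpload("/rocky/big.bin", info); // hypothetical class/path
            }
        }
    }
}
```

For 10 input bytes and a chunk size of 4 this emits three chunks (4, 4, and 2 bytes). On failure, a real resume would first check the existing HDFS file length with fileSystem.getFileStatus(...).getLen() and skip that many bytes of local input before continuing.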