Operating Hadoop HDFS from Java
2017-05-22 00:00
```java
package com.traveller.bumble.hadoop.hdfs;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;
import org.junit.Test;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

/**
 * Created by macbook on 2017/5/21.
 */
public class TestHDFS {

    // Read via java.net.URL: FsUrlStreamHandlerFactory teaches the URL class
    // the hdfs:// scheme. Note that setURLStreamHandlerFactory may be called
    // at most once per JVM, so only one URL-based test can run per process.
    // @Test
    public void testRead() {
        try {
            URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
            URL url = new URL("hdfs://master:8020/user/master/my.cnf");
            URLConnection conn = url.openConnection();
            InputStream inputStream = conn.getInputStream();
            IOUtils.copyBytes(inputStream, System.out, 1024);
            IOUtils.closeStream(inputStream);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Caution: the hdfs:// URL handler is read-only, so getOutputStream()
    // fails at runtime; use the FileSystem API (testFSWrite below) to write.
    // @Test
    public void testWrite() {
        try {
            URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
            URL url = new URL("hdfs://master:8020/user/slave1/my.cnf");
            URLConnection conn = url.openConnection();
            OutputStream outputStream = conn.getOutputStream();
            ByteArrayInputStream bis = new ByteArrayInputStream("woxihuanni".getBytes());
            IOUtils.copyBytes(bis, outputStream, 1024);
            IOUtils.closeStream(bis);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Read through the FileSystem API.
    // @Test
    public void testFSRead() {
        try {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://master:8020");
            FileSystem fileSystem = FileSystem.get(conf);
            FSDataInputStream open = fileSystem.open(new Path("/user/master/my.cnf"));
            IOUtils.copyBytes(open, System.out, 1024);
            IOUtils.closeStream(open);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Write through the FileSystem API; create() overwrites an existing file.
    // @Test
    public void testFSWrite() {
        try {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://master:8020");
            FileSystem fileSystem = FileSystem.get(conf);
            FSDataOutputStream fsDataOutputStream = fileSystem.create(new Path("/user/master/my.cnf"));
            ByteArrayInputStream bis = new ByteArrayInputStream("woxihuanni henjiule".getBytes());
            IOUtils.copyBytes(bis, fsDataOutputStream, 1024);
            IOUtils.closeStream(bis);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Caution: deleteOnExit does not delete immediately; it only marks the
    // path for deletion when this FileSystem instance is closed.
    @Test
    public void testRemove() {
        try {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://master:8020");
            FileSystem fileSystem = FileSystem.get(conf);
            boolean flag = fileSystem.deleteOnExit(new Path("/user/master/my.cnf"));
            System.out.println(flag);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
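One subtlety in the delete test: `deleteOnExit` only schedules the path for removal when the `FileSystem` instance is closed (or the JVM exits); it does not delete the file at the point of the call. For an immediate delete, `FileSystem.delete(Path, boolean recursive)` is the standard call. A minimal sketch, assuming the same `hdfs://master:8020` namenode and file path used above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteNow {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://master:8020");
        FileSystem fs = FileSystem.get(conf);
        // delete(path, recursive) removes the path right away and returns
        // true on success; recursive=false is enough for a plain file
        boolean deleted = fs.delete(new Path("/user/master/my.cnf"), false);
        System.out.println(deleted);
        fs.close();
    }
}
```

Running this requires `hadoop-client` on the classpath and a reachable namenode; with `recursive=true` the same call removes a directory and everything under it.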