Installing Hadoop on Mac OSX Yosemite Tutorial Part 1.
September 23, 2014, by Marek

Install HomeBrew
Installing Hadoop
SSH Localhost
Configuring Hadoop
Starting and Stopping Hadoop
Good to know
Additional Resources
Github Wordcount example
Install HomeBrew
Found here: http://brew.sh/ or simply paste this inside the terminal:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Install Hadoop
$ brew install hadoop
Hadoop will be installed in the following directory
/usr/local/Cellar/hadoop
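To double-check which version Homebrew installed (the version number appears in the paths used throughout this tutorial), you can run:

$ brew info hadoop
$ ls /usr/local/Cellar/hadoop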
Configuring Hadoop
Edit hadoop-env.sh
The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/hadoop-env.sh, where 2.6.0 is the Hadoop version. Find the line with
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
and change it to
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
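Depending on your Java setup you may also need to set JAVA_HOME in hadoop-env.sh. This line is not part of the original tutorial, but on OS X the bundled java_home utility resolves the correct path for you:

# assumption: lets OS X pick the active JDK instead of hardcoding a path
export JAVA_HOME="$(/usr/libexec/java_home)"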
Edit core-site.xml
The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
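Note that hadoop.tmp.dir points at a directory that may not exist yet. If the NameNode later complains that its storage directory is missing (see the Errors section below), creating it by hand is a safe first step:

$ mkdir -p /usr/local/Cellar/hadoop/hdfs/tmp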
Edit mapred-site.xml
The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/mapred-site.xml and by default will be blank.

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9010</value>
  </property>
</configuration>
Edit hdfs-site.xml
The file can be located at /usr/local/Cellar/hadoop/2.6.0/libexec/etc/hadoop/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
To simplify life, edit your ~/.profile using vim or your favorite editor and add the following two aliases:

alias hstart="/usr/local/Cellar/hadoop/2.6.0/sbin/start-dfs.sh;/usr/local/Cellar/hadoop/2.6.0/sbin/start-yarn.sh"
alias hstop="/usr/local/Cellar/hadoop/2.6.0/sbin/stop-yarn.sh;/usr/local/Cellar/hadoop/2.6.0/sbin/stop-dfs.sh"
and execute
$ source ~/.profile
in the terminal to update.

Before we can run Hadoop, we first need to format the HDFS using
$ hdfs namenode -format
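Homebrew normally links the Hadoop commands onto your PATH. If hdfs is not found, you can invoke the binaries from the Cellar directly (again assuming version 2.6.0):

# assumption: Homebrew keg layout for hadoop 2.6.0
$ /usr/local/Cellar/hadoop/2.6.0/bin/hdfs namenode -format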
SSH Localhost
Nothing needs to be done here if you have already generated ssh keys. To verify, just check for the existence of the ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub files. If not, the keys can be generated using

$ ssh-keygen -t rsa
Enable Remote Login
“System Preferences” -> “Sharing”. Check “Remote Login”
Authorize SSH Keys
To allow your system to accept login, we have to make it aware of the keys that will be used:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
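If ssh localhost still prompts for a password after this, file permissions are a common culprit, since sshd ignores an authorized_keys file that is writable by others. Tightening them is harmless and often fixes it:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys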
Let’s try to login.
$ ssh localhost
> Last login: Fri Mar 6 20:30:53 2015
$ exit
Running Hadoop
Now we can run Hadoop just by typing

$ hstart
and stopping using
$ hstop
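To confirm the daemons actually came up, jps (which ships with the JDK) lists the running Java processes. After a successful hstart you would typically expect entries along the lines of NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager (process IDs will differ):

$ jps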
Download Examples
To run examples, Hadoop needs to be started.

Hadoop Examples 1.2.1 (Old)
Hadoop Examples 2.6.0 (Current)

Test them out using:
$ hadoop jar <path to the hadoop-examples file> pi 10 100
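The same jar also contains the wordcount example mentioned at the top of this post. Here is a sketch of running it, assuming a local book.txt and that your /user/<username> home directory already exists in HDFS (the output directory name wc-out is arbitrary):

$ hdfs dfs -put book.txt
$ hadoop jar <path to the hadoop-examples file> wordcount book.txt wc-out
$ hdfs dfs -cat wc-out/part-r-00000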
Good to know
We can access the Hadoop web interfaces by connecting to:

NameNode (HDFS overview): http://localhost:50070
ResourceManager: http://localhost:8088
Specific Node Information: http://localhost:8042
These we can use to access the HDFS filesystem and inspect any resulting output files.

Errors
To resolve 'WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable', see Stackoverflow.com.

Connection Refused after installing Hadoop
$ hdfs dfs -ls
> 15/03/06 20:13:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> ls: Call From spaceship.local/192.168.1.65 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

The start-up scripts such as start-all.sh do not provide you with specifics about why the startups failed. Some of the time it won't even notify you that a startup failed... To troubleshoot the service that isn't functioning, execute it manually.

$ hdfs namenode
> 15/03/06 20:18:31 WARN namenode.FSNamesystem: Encountered exception loading fsimage org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/Cellar/hadoop/hdfs/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
> 15/03/06 20:18:31 FATAL namenode.NameNode: Failed to start namenode.
and the problem is... the HDFS storage directory was never formatted (or has gone missing):

$ hadoop namenode -format
To verify the problem is fixed, run

$ hstart
$ hdfs dfs -ls /
If 'hdfs dfs -ls' gives you an error

> ls: `.': No such file or directory
then we need to create the default directory structure Hadoop expects (i.e. /user/<whoami output>/):

$ whoami
> spaceship
$ hdfs dfs -mkdir -p /user/spaceship
> 15/03/06 20:31:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -ls
> 15/03/06 20:31:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -put book.txt
> 15/03/06 20:32:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ hdfs dfs -ls
> 15/03/06 20:32:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 1 items
> -rw-r--r--  1 marekbejda supergroup  29578 2015-03-06 20:32 book.txt

JPS and Nothing Works...
Seems like certain builds of Java 1.8 (e.g. 1.8.0_40) are missing a critical package that breaks Yarn. Check your logs:

$ jps
> 5935 Jps
$ vim /usr/local/Cellar/hadoop/2.6.0/libexec/logs/yarn-*
> 2015-03-07 16:21:32,934 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain java.lang.NoClassDefFoundError: sun/management/ExtendedPlatformComponent ..
> 2015-03-07 16:21:32,937 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2015-03-07 16:21:32,939 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
http://mail.openjdk.java.net/pipermail/core-libs-dev/2014-November/029818.html

Either downgrade to Java 1.7, or use a 1.8 build that still ships the package. I'm currently running 1.8.0_20:

$ java -version
> java version "1.8.0_20"
> Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
> Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
Reposted from: http://amodernstory.com/2014/09/23/installing-hadoop-on-mac-osx-yosemite/#hadoop

Hbase (reference: http://freddy.cellcore.org/post/52568231952/hadoop-hbase-on-osx-10-8-mountain-lion)
Downloading Hbase
Now that you have successfully set up and launched Hadoop, it's time to install Hbase. Similarly to Hadoop, you have two options to get Hbase. You can either go to the Hbase distribution site, choose a mirror close to your location and download it (then copy it to $HD_HOME), or execute the following commands:

cd ~/Downloads
curl http://apache.websitebeheerjd.nl/hbase/stable/hbase-0.94.8.tar.gz > hbase-0.94.8.tar.gz
mv hbase-0.94.8.tar.gz $HD_HOME/
cd $HD_HOME
tar xvzf hbase-0.94.8.tar.gz
ln -s hbase-0.94.8 hbase

Note: using Homebrew saves a lot of trouble here:

brew install hbase
Configuring Hbase is quite easy (for a very basic instance): you need to modify only two files, located under $HBASE_HOME/conf.

hbase-env.sh
The file hbase-env.sh sets the execution environment for Hbase. It works the same way as hadoop-env.sh does for Hadoop. Add the following lines to hbase-env.sh:

JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home
HBASE_OPTS="-Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"

hbase-site.xml
Hbase properties are governed by the file hbase-site.xml. The only configuration parameter that you need to specify to make Hbase work is hbase.rootdir, the Hbase root directory. This directory can be either a local file (file:///) or an HDFS instance (hdfs://). In this particular case we are pointing Hbase to our newly installed HDFS instance. Other properties that can be set in this file can be found here.
Hbase requires Zookeeper to work. By default Hbase comes with an embedded instance of Zookeeper, which relieves us from the task of setting one up ourselves. In case you want to know more about Zookeeper, its configuration, and its role in the Hbase architecture, check out this article.

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
</configuration>

Running Hbase
Now you are ready to launch Hbase. To start Hbase, just execute the following command:

$HBASE_HOME/bin/start-hbase.sh

Test it
In order to test your Hbase installation, launch the Hbase shell and play with it (heavily inspired by http://hbase.apache.org/book/quickstart.html). To launch the Hbase shell, execute the following command:

$HBASE_HOME/bin/hbase shell

You should be greeted by the Hbase interactive interpreter:

HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.94.8, r1485407, Wed May 22 20:53:13 UTC 2013

Create a new table and put new values in it:

hbase(main):003:0> create 'test', 'cf'
0 row(s) in 1.2200 seconds
hbase(main):003:0> list 'test'
..
1 row(s) in 0.0550 seconds
hbase(main):004:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.0560 seconds
hbase(main):005:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0370 seconds
hbase(main):006:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0450 seconds

Scan the table values:

hbase(main):007:0> scan 'test'
ROW          COLUMN+CELL
 row1        column=cf:a, timestamp=1288380727188, value=value1
 row2        column=cf:b, timestamp=1288380738440, value=value2
 row3        column=cf:c, timestamp=1288380747365, value=value3
3 row(s) in 0.0590 seconds

Get a value through its key:

hbase(main):008:0> get 'test', 'row1'
COLUMN       CELL
 cf:a        timestamp=1288380727188, value=value1
1 row(s) in 0.0400 seconds

Disable and drop (delete) the table:

hbase(main):012:0> disable 'test'
0 row(s) in 1.0930 seconds
hbase(main):013:0> drop 'test'
0 row(s) in 0.0770 seconds

If you could execute those commands successfully, then your Hbase instance is working properly.

Hbase web-interfaces
http://localhost:60010/ - Hbase master webui
http://localhost:60030/ - Hbase region server webui

Stopping Hbase
$HBASE_HOME/bin/stop-hbase.sh
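As a quick sanity check before (or after) stopping, you can look for the Hbase processes with jps and confirm that the root directory was created in HDFS. With the embedded Zookeeper you would typically expect an HMaster entry (plus HQuorumPeer for the bundled Zookeeper), though the exact set depends on your configuration:

$ jps
$ hdfs dfs -ls /hbase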