Hadoop: HDFS data loss after a VM reboot and the recurring need to reformat the NameNode
2014-03-24 21:22
1. After every VM reboot, the NameNode fails to start unless it is formatted again. The log shows:
INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-sylar/dfs/name does not exist.
ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-sylar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
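The root cause: dfs.name.dir defaults to ${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, a directory most Linux systems clear on reboot. A minimal sketch that simulates the problem (the paths and the `rm -rf` standing in for the reboot-time cleanup are illustrative only):

```shell
# Simulate why metadata under /tmp does not survive a reboot, while a
# relocated directory does. rm -rf plays the role of the OS wiping /tmp.
WORK=$(mktemp -d)
NAME_DIR="$WORK/tmp/hadoop-sylar/dfs/name"     # default-style, volatile location
SAFE_DIR="$WORK/home/hadoop/filesystem/name"   # relocated, persistent location
mkdir -p "$NAME_DIR" "$SAFE_DIR"
touch "$NAME_DIR/fsimage" "$SAFE_DIR/fsimage"  # stand-ins for NameNode metadata
rm -rf "$WORK/tmp"                             # simulate the OS clearing /tmp
[ -e "$NAME_DIR/fsimage" ] || echo "volatile metadata gone"
[ -e "$SAFE_DIR/fsimage" ] && echo "persistent metadata survived"
```

After the wipe, only the copy outside /tmp remains, which is exactly what the configuration change in step 3 achieves for the real NameNode.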
2. Workaround: reformat the NameNode:
~]$bin/hadoop namenode -format
3. To eliminate the error for good, so that the machine can reboot without reformatting, move the HDFS directories off /tmp by editing core-site.xml and hdfs-site.xml as follows. (The dfs.* properties conventionally belong in hdfs-site.xml, but Hadoop merges both files, so they also take effect when set in core-site.xml.)
vm1:~/hadoop/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.1.105:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/filesystem/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/filesystem/name</value>
<description>where on the local filesystem the DFS name node should store the name table</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/filesystem/data</value>
<description>where on the local filesystem a DFS data node should store its blocks.</description>
</property>
</configuration>
vm1:~/hadoop/conf$
~/hadoop/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
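A quick sanity check for listings like the ones above: each `<property>` element should contain exactly one `<name>`/`<value>` pair, because Hadoop's config loader keeps only the last pair inside a doubled-up block and silently drops the rest. This sketch counts tags line by line, so it assumes one tag per line as in the files shown; `check_conf` and the sample path are illustrative names, not part of Hadoop:

```shell
# Flag *-site.xml files where <property> blocks do not pair up one <name>
# with one <value>. Line-based counting; assumes one tag per line.
check_conf() {
  props=$(grep -c '<property>' "$1")
  names=$(grep -c '<name>' "$1")
  vals=$(grep -c '<value>' "$1")
  if [ "$props" -eq "$names" ] && [ "$props" -eq "$vals" ]; then
    echo "$1: OK ($props properties)"
  else
    echo "$1: MALFORMED (properties=$props names=$names values=$vals)"
  fi
}

# Example: a block holding two name/value pairs in one <property> is flagged.
cat > /tmp/bad-site.xml <<'EOF'
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
EOF
check_conf /tmp/bad-site.xml
```

Running the check on the sample prints a MALFORMED line (1 property, 2 names, 2 values), which is the shape of the original hdfs-site.xml bug fixed above.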