
Problems you may hit when deploying Impala with CM 5.2, and how to fix them

2015-03-14 03:46
1. Download the CM installer
2. Install the cluster and wait for the agents to start automatically

3. Deploying Hive
3.0 Ubuntu: MySQL refuses remote connections — /article/4177440.html

3.1 "... is not allowed to connect to this MySQL server" / "Connection closed by foreign host."
/article/11183904.html — grant the account remote access
3.2 "Unable to connect to database on host 'udms-101.lab.udms.org:3306' from host 'udms-101.lab.udms.org' using the credential provided"
The username and password must be correct. (If Cloudera Manager is to use a given username and password, that account must first be granted privileges in MySQL.)
3.3 The 'hive' metastore database must exist in MySQL, with the corresponding username and password granted access to it.
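A minimal sketch of 3.1–3.3 as SQL fed to the mysql client. The database/user name `hive` and the password `hive_password` are placeholders — substitute your own; `'hive'@'%'` allows connections from any host, which is what fixes the "not allowed to connect" error (the `IDENTIFIED BY` form is the MySQL 5.x syntax current when this was written):

```shell
# Placeholder account and password -- substitute your own values.
SQL="CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive_password';
FLUSH PRIVILEGES;"
# Feed this to the mysql client on the metastore host, e.g.:
#   echo "$SQL" | mysql -u root -p
echo "$SQL"
```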

3.4 If you hit a JDBC connector error
Download mysql-connector-java-5.1.22.tar.gz, extract it, and copy the
mysql-connector-java-5.1.22-bin.jar inside to /usr/lib/hive/lib.
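The extract-and-copy step above, sketched as a script (assumes the tarball has already been downloaded into the current directory and unpacks into a directory of the same name, which is how that release is laid out):

```shell
# Assumes the tarball was downloaded into the current directory first.
TARBALL=mysql-connector-java-5.1.22.tar.gz
if [ -f "$TARBALL" ]; then
    tar xzf "$TARBALL"
    # Put the driver where Hive's classpath picks it up:
    sudo cp mysql-connector-java-5.1.22/mysql-connector-java-5.1.22-bin.jar /usr/lib/hive/lib/
else
    echo "place $TARBALL in the current directory first"
fi
```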

4. HDFS
4.1 Error org.apache.hadoop.security.AccessControlException:
Permission denied: user=udms, access=udms1234
/article/5589775.html

As the udms user, writes to /user fail with a permission error:
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwx
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.jav
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:236)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:214)
at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)

Fix:
hadoop fs -chmod 777 /user/hadoop itself fails with a permission error; listing the HDFS directory tree shows that /user is owned by hdfs, so run the command as the hdfs user:
sudo -u hdfs hadoop fs -chmod 777 /user/hadoop
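The fix sketched as shell. Note that 777 is the article's quick fix and opens the directory to everyone; handing the directory to the failing user is the tighter alternative (the user name `udms` below comes from the error message):

```shell
# /user is owned by hdfs:supergroup, so the chmod must run as the hdfs superuser:
CMD="sudo -u hdfs hadoop fs -chmod 777 /user/hadoop"
echo "$CMD"
# Tighter alternative: give the failing user its own home directory instead
# of opening /user/hadoop to everyone:
#   sudo -u hdfs hadoop fs -mkdir -p /user/udms
#   sudo -u hdfs hadoop fs -chown udms /user/udms
```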

4.2 Exception in secureMain — java.lang.RuntimeException: Cannot start datanode because the configured max locked memory size (dfs.datanode.max.locked.memory) of 4294967296 bytes is more than the
datanode's available RLIMIT_MEMLOCK ulimit of 65536 bytes.
http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_ig_troubleshooting.html
http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v5-0-0/PDF/Cloudera-Manager-Installation-Guide.pdf
Cause: HDFS caching, which is enabled by default in CDH 5, requires new memlock functionality from Cloudera Manager 5 Agents.

Fix:
1. Stop all services, including the Cloudera Management Service.
2. On all hosts with Cloudera Manager Agents, run:
$ sudo service cloudera-scm-agent hard_restart
3. Start all services.
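A quick way to see the mismatch the error describes — the configured cache size (4 GiB) versus the memlock limit new processes inherit on the host (the restart command itself is the one from the Cloudera documentation linked above; note it is hard_restart, not restart):

```shell
# The DataNode wants to lock 4294967296 bytes (4 GiB), but RLIMIT_MEMLOCK
# here is 65536 bytes (64 KiB). Check the limit on this host (ulimit -l
# reports kbytes, or "unlimited"):
MEMLOCK=$(ulimit -l)
echo "RLIMIT_MEMLOCK (kbytes): $MEMLOCK"
# After the agent hard_restart on every host, DataNodes launched by the
# agent should see an adequate (or unlimited) memlock limit.
```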

4.3 If the NameNode cannot be formatted, delete the /data/dfs directory first.

4.4 If you see

Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to udms-101.lab.udms.org/172.21.1.101:8022. Exiting.
java.io.IOException: Incompatible clusterIDs in /dfs/dn: namenode clusterID = cluster18; datanode clusterID = cluster16

delete the /dfs directory on each DataNode.
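The per-DataNode cleanup, sketched as a script. Caution: this deletes the DataNode's block replicas, so it is only safe on a fresh or just-re-formatted cluster — which is when the clusterID mismatch typically appears:

```shell
# The DataNode's stored clusterID (cluster16) no longer matches the
# NameNode's (cluster18), usually after a NameNode re-format. Clearing the
# DataNode's data directory lets it re-register with the new clusterID.
DN_DATA_DIR=/dfs/dn    # dfs.datanode.data.dir, taken from the error message
if [ -d "$DN_DATA_DIR" ]; then
    sudo rm -rf "$DN_DATA_DIR"    # then restart the DataNode role in CM
else
    echo "$DN_DATA_DIR not present on this host"
fi
```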

4.5 Short-circuit reads reported as not enabled: "hdfs - dfs.client.use.legacy.blockreader.local is not enabled"

Cause: the HDFS service may be down; hdfs-site.xml may be missing on the HDFS host; or the hdfs-site.xml under fe/src/test/resources in the Impala source tree is missing the short-circuit property.

Fix:
If hdfs-site.xml is missing, copy it over from another machine.
If the short-circuit property is missing, add it to hdfs-site.xml with the value true.
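A sketch of what the added property can look like. The key shown is the standard short-circuit switch rather than the legacy key named in the error (dfs.client.use.legacy.blockreader.local, the older mechanism), and the socket path is the usual CDH default — both are assumptions here; add whichever key your setup actually checks:

```xml
<!-- hdfs-site.xml: enable short-circuit local reads -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<!-- required by the non-legacy short-circuit implementation -->
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hdfs-sockets/dn</value>
</property>
```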