5. Hive metastore in embedded mode fails with Error: FUNCTION 'NUCLEUS_ASCII' already exists
In Hive, the metastore can be configured in three modes: embedded, local, and remote. In embedded mode, running the schema-initialization command (its output reports "schema initialization to 2.3.0") generates the metastore directory metastore_db, and this step may fail with Error: FUNCTION 'NUCLEUS_ASCII' already exists, as follows:
Error: FUNCTION 'NUCLEUS_ASCII' already exists. (state=X0Y68,code=30000)
org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !!
Underlying cause: java.io.IOException : Schema script failed, errorcode 2
Use --verbose for detailed stacktrace.
This happens because a metastore_db directory already exists under the Hive installation path, so creating it again conflicts with the existing one.
Solution:
Rename or simply delete the metastore_db directory under the installation directory, then rerun the initialization.
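The fix can be sketched as follows. A scratch directory stands in for the Hive installation path (the real path varies by install), and the stale metastore_db is renamed out of the way; deleting it works equally well:

```shell
# Scratch directory standing in for the Hive installation path (hypothetical).
HIVE_HOME=$(mktemp -d)
mkdir "$HIVE_HOME/metastore_db"   # stale directory left by an earlier run

# Rename the stale directory out of the way (or: rm -rf metastore_db).
mv "$HIVE_HOME/metastore_db" "$HIVE_HOME/metastore_db.bak"
ls "$HIVE_HOME"

# Then re-run the embedded-mode initialization, e.g.:
#   schematool -dbType derby -initSchema
```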
6. Building Hue fails with /usr/bin/ld: cannot find -lcrypto and /usr/bin/ld: cannot find -lssl
Hue is a visualization framework for Hadoop. It is installed by compiling from source with the make install command, and the build may fail as follows:
/usr/bin/ld: cannot find -lssl
/usr/bin/ld: cannot find -lcrypto
collect2: error: ld returned 1 exit status
error: command 'gcc' failed with exit status 1
make[2]: *** [/opt/software/hue-release-4.3.0/desktop/core/build/python-ldap-2.3.13/egg.stamp] Error 1
make[2]: Leaving directory '/opt/software/hue-release-4.3.0/desktop/core'
make[1]: *** [.recursive-install-bdist/core] Error 2
make[1]: Leaving directory '/opt/software/hue-release-4.3.0/desktop'
make: *** [install-desktop] Error 2
Analysis:
The linker cannot find libssl and libcrypto. Yet yum info openssl shows that openssl is already installed. Inspecting further:
[root@node02 lib64]$ ll /usr/lib64/libssl*
-rwxr-xr-x. 1 root root 340976 Sep 27 2018 /usr/lib64/libssl3.so
lrwxrwxrwx. 1 root root 16 Aug 19 03:23 /usr/lib64/libssl.so.10 -> libssl.so.1.0.2k
-rwxr-xr-x. 1 root root 470360 Oct 31 2018 /usr/lib64/libssl.so.1.0.2k
As the listing shows, the root cause is that although versioned libssl shared-library files exist, there is no file named libssl.so, which is the name the linker searches for, so it cannot find the library.
Solution:
Add symbolic links pointing the names ld searches for at the versioned shared-library files:
[root@node02 lib64]$ ln -s /usr/lib64/libssl.so.1.0.2k /usr/lib64/libssl.so
[root@node02 lib64]$ ln -s /usr/lib64/libcrypto.so.1.0.2k /usr/lib64/libcrypto.so
[root@node02 lib64]$ ll /usr/lib64/libssl*
-rwxr-xr-x. 1 root root 340976 Sep 27 2018 /usr/lib64/libssl3.so
lrwxrwxrwx  1 root root 27 Oct 3 13:28 /usr/lib64/libssl.so -> /usr/lib64/libssl.so.1.0.2k
lrwxrwxrwx. 1 root root 16 Aug 19 03:23 /usr/lib64/libssl.so.10 -> libssl.so.1.0.2k
-rwxr-xr-x. 1 root root 470360 Oct 31 2018 /usr/lib64/libssl.so.1.0.2k
[root@node02 lib64]$ ll /usr/lib64/libcrypto*
lrwxrwxrwx 1 root root 19 Oct 3 13:32 /usr/lib64/libcrypto.so -> libcrypto.so.1.0.2k
lrwxrwxrwx 1 root root 19 Oct 3 13:32 /usr/lib64/libcrypto.so.10 -> libcrypto.so.1.0.2k
-rwxr-xr-x 1 root root 2520768 Dec 17 2020 /usr/lib64/libcrypto.so.1.0.2k
The symlinks are now in place, and recompiling succeeds.
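The mechanism behind the fix: when given -lssl, ld searches its library path for the bare name libssl.so, not for versioned names such as libssl.so.1.0.2k that the rpm installs. A minimal sketch in a scratch directory (rather than /usr/lib64) illustrates this:

```shell
# Scratch directory standing in for /usr/lib64.
D=$(mktemp -d)
touch "$D/libssl.so.1.0.2k"                  # versioned library, as shipped by the rpm

# The bare name the linker looks for, created as a symlink to the real file.
ln -s "$D/libssl.so.1.0.2k" "$D/libssl.so"
ls -l "$D"
```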
7. Building Hue fails with EnvironmentError: mysql_config not found
Building Hue requires MySQL, so compilation may fail as follows:
EnvironmentError: mysql_config not found
make[2]: *** [/opt/software/hue-release-4.3.0/desktop/core/build/MySQL-python-1.2.5/egg.stamp] Error 1
make[2]: Leaving directory '/opt/software/hue-release-4.3.0/desktop/core'
make[1]: *** [.recursive-install-bdist/core] Error 2
make[1]: Leaving directory '/opt/software/hue-release-4.3.0/desktop'
make: *** [install-desktop] Error 2
In this case mysql-devel needs to be installed: run yum -y install mysql-devel, then recompile, and the build succeeds.
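The failure can also be caught before the long build starts. A small sketch that checks whether the mysql_config helper script (which mysql-devel provides) is on PATH:

```shell
# Probe for the mysql_config helper before compiling Hue.
# command -v exits non-zero and prints nothing when the tool is absent.
if command -v mysql_config >/dev/null 2>&1; then
  MSG="mysql_config found at $(command -v mysql_config)"
else
  MSG="mysql_config missing - install it with: yum -y install mysql-devel"
fi
echo "$MSG"
```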
8. Starting Impala fails with Unit not found
Impala comprises three roles:
- impala-server
- impala-statestored
- impala-catalogd
After installation and configuration, each role must be started separately, and starting impala-state-store and impala-catalog may fail:
[root@node03 ~]$ service impala-state-store start
Redirecting to /bin/systemctl start impala-state-store.service
Failed to start impala-state-store.service: Unit not found.
[root@node03 ~]$ service impala-catalog start
Redirecting to /bin/systemctl start impala-catalog.service
Failed to start impala-catalog.service: Unit not found.
The error Unit not found. means the service unit cannot be located. Check what is actually installed:
[root@node03 ~]$ yum list | grep impala
impala.x86_64                2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
impala-catalog.x86_64        2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
impala-server.x86_64         2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
impala-shell.x86_64          2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
impala-state-store.x86_64    2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
hue-impala.x86_64            3.9.0+cdh5.7.6+1881-1.cdh5.7.6.p0.7.el7
impala-debuginfo.x86_64      2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
impala-udf-devel.x86_64      2.5.0+cdh5.7.6+0-1.cdh5.7.6.p0.7.el7
Clearly both services are installed, so something likely went wrong during installation and their units were never registered with systemd. The registered service units can be listed with systemctl list-unit-files --type=service.
Solution:
First remove the two services with yum remove impala-state-store.x86_64 -y and yum remove impala-catalog.x86_64 -y, then reinstall them with yum -y install impala-state-store and yum -y install impala-catalog. After the reinstall, starting them succeeds.
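The diagnostic step can be scripted. This sketch asks systemd which impala units it knows about; an empty result is the failure case described above (packages installed but units not registered). The fallback message keeps the sketch runnable on machines without the services or without systemd:

```shell
# List impala service units registered with systemd; empty means not registered.
UNITS=$(systemctl list-unit-files --type=service 2>/dev/null | grep impala || true)
echo "${UNITS:-no impala units registered with systemd}"
```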
9. Starting HDFS after installing Impala fails with java.io.IOException
Installing Impala requires Hadoop-related configuration changes, and after making them, problems can appear. For example, when HDFS is started, the DataNode may fail to come up; the DataNode log shows:
2021-10-10 20:44:40,037 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: The path component: '/var/lib/hadoop-hdfs' in '/var/lib/hadoop-hdfs/dn_socket' has permissions 0755 uid 993 and gid 1003. It is not protected because it is owned by a user who is not root and not the effective user: '0'. This might help: 'chown root /var/lib/hadoop-hdfs' or 'chown 0 /var/lib/hadoop-hdfs'. For more information: https://wiki.apache.org/hadoop/SocketPathSecurity
	at org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native Method)
	at org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:193)
	at org.apache.hadoop.hdfs.net.DomainPeerServer.<init>(DomainPeerServer.java:40)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:1171)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:1137)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1369)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:495)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2695)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2598)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2645)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2789)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2813)
2021-10-10 20:44:40,046 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: The path component: '/var/lib/hadoop-hdfs' in '/var/lib/hadoop-hdfs/dn_socket' has permissions 0755 uid 993 and gid 1003. It is not protected because it is owned by a user who is not root and not the effective user: '0'. This might help: 'chown root /var/lib/hadoop-hdfs' or 'chown 0 /var/lib/hadoop-hdfs'. For more information: https://wiki.apache.org/hadoop/SocketPathSecurity
2021-10-10 20:44:40,052 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node01/192.168.31.155
************************************************************/
Clearly, the problem is that /var/lib/hadoop-hdfs is not owned by root. When configuring Impala's short-circuit reads, this directory is created to hold the domain socket (dn_socket); if ownership is not assigned to root, the DataNode refuses to use the socket path and fails to start.
Solution:
Simply run chown root /var/lib/hadoop-hdfs on every node to set the owner.
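The fix and its verification can be sketched as follows, on a scratch directory standing in for the real path. On a real cluster the path is /var/lib/hadoop-hdfs, the chown must run as root on every DataNode host, and the required owner uid is 0:

```shell
# Scratch directory standing in for /var/lib/hadoop-hdfs.
D=$(mktemp -d)/hadoop-hdfs
mkdir -p "$D"

# On a real cluster, as root:  chown root /var/lib/hadoop-hdfs
# Verify ownership with stat; the DataNode requires the numeric owner uid 0.
OWNER=$(stat -c '%u' "$D")
echo "owner uid: $OWNER"
```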