Environment:
Linux: Ubuntu 14.04 LTS
JDK: jdk1.7.0_67
Hadoop: hadoop-2.0.0-cdh4.1.0.tar.gz
Impala: impala_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
Hadoop CDH download:http://archive.cloudera.com/cdh4/cdh/4/
Ubuntu Impala download:
http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala/pool/contrib/i/impala/
Tip: do not use too new a Hadoop release; CDH4 is the safe choice. I first tried Apache Hadoop 2.7 and Hadoop 2.6 (CDH5), and in both cases Impala failed at startup with a protobuf incompatibility that took me several days to track down.
For simplicity, everything below is done as root, i.e. root is the working user.
1. Install Hadoop
# apt-get install openssh-server
Set up passwordless SSH login:
# ssh-keygen -t rsa -P ""
# cat .ssh/id_rsa.pub >> .ssh/authorized_keys
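Before moving on, you can confirm key-based login actually works; BatchMode makes ssh fail instead of falling back to a password prompt:

```shell
# Prints "ok" only if passwordless login to localhost succeeds.
ssh -o BatchMode=yes -o StrictHostKeyChecking=no localhost 'echo ok'
```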
Download jdk-7u67-linux-x64.tar.gz, unpack it, and configure the environment variables:
# tar -vzxf jdk-7u67-linux-x64.tar.gz
# mkdir /usr/java
# mv jdk1.7.0_67 /usr/java/
# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_67
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
# tar -xvzf hadoop-2.0.0-cdh4.1.0.tar.gz
# mv hadoop-2.0.0-cdh4.1.0 /usr/local/
# vi /etc/profile
export HADOOP_HOME=/usr/local/hadoop-2.0.0-cdh4.1.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_LIB=$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# source /etc/profile
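After sourcing /etc/profile it is worth sanity-checking that the new variables took effect in this shell:

```shell
echo "$JAVA_HOME"   # should print /usr/java/jdk1.7.0_67
java -version       # should report 1.7.0_67
hadoop version      # should report 2.0.0-cdh4.1.0
which hadoop        # should resolve under /usr/local/hadoop-2.0.0-cdh4.1.0
```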
2. Configure Hadoop (pseudo-distributed)
# cd /usr/local/hadoop-2.0.0-cdh4.1.0
# cd etc/hadoop
# vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_67
# vi core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name> <!-- temp directory -->
<value>file:/root/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
# vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name> <!-- NameNode directory -->
<value>file:/root/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name> <!-- DataNode directory -->
<value>file:/root/hadoop/tmp/dfs/data</value>
</property>
</configuration>
# cd ~
# mkdir -p hadoop/tmp/dfs/name
# mkdir hadoop/tmp/dfs/data
Note: the user running Hadoop must own the hadoop-2.0.0-cdh4.1.0 directory as well as the NameNode and DataNode directories.
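If you ever run the daemons as a different (non-root) user, ownership can be fixed along these lines (a sketch; the paths follow the hdfs-site.xml values above):

```shell
# Re-create the NameNode/DataNode directories (harmless if they already
# exist) and hand them to the current user and group.
mkdir -p ~/hadoop/tmp/dfs/name ~/hadoop/tmp/dfs/data
chown -R "$(id -un):$(id -gn)" ~/hadoop
ls -ld ~/hadoop/tmp/dfs/name ~/hadoop/tmp/dfs/data
```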
3. Start Hadoop
Format the NameNode:
# hadoop namenode -format
# start-all.sh (this script lives in $HADOOP_HOME/sbin)
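After start-all.sh, the JDK's jps tool is a quick way to confirm the daemons came up:

```shell
# In pseudo-distributed mode you should see NameNode, DataNode,
# SecondaryNameNode, ResourceManager and NodeManager (plus Jps itself).
jps
```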
Test:
# hadoop fs -ls / #list the HDFS root directory
# hadoop fs -mkdir /user #create the directory /user in HDFS
# hadoop fs -put a.out /user #upload a.out to /user in HDFS
# hadoop fs -get /user/a.out #download a.out to the local filesystem
Stop Hadoop:
# stop-all.sh
4. Install Impala
Add the Cloudera APT sources:
# vi /etc/apt/sources.list.d/cloudera.list
deb [arch=amd64] http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib
deb-src http://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm trusty-cm5 contrib
deb [arch=amd64] http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala precise-impala1 contrib
deb-src http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala precise-impala1 contrib
# apt-get update
# apt-get install bigtop-utils
Downloading Impala with apt-get is very slow; instead you can fetch the packages from
http://archive.cloudera.com/impala/ubuntu/precise/amd64/impala/pool/contrib/i/impala/
and install them by hand.
# dpkg -i impala_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
# dpkg -i impala-server_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
# dpkg -i impala-state-store_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
# dpkg -i impala-catalog_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
# apt-get install python-setuptools
If dpkg reports dependency errors, fix them as indicated (e.g. apt-get -f install) and re-run the dpkg command.
# dpkg -i impala-shell_1.4.0-1.impala1.4.0.p0.7~precise-impala1.4.0_all.deb
Impala is now installed.
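A quick way to confirm what got installed:

```shell
# "ii" in the first column means the package is installed and configured.
dpkg -l | grep impala
```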
5. Configure Impala
# vi /etc/hosts
127.0.0.1 localhost
Copy core-site.xml and hdfs-site.xml from $HADOOP_HOME/etc/hadoop to /etc/impala/conf:
# cd /usr/local/hadoop-2.0.0-cdh4.1.0/etc/hadoop/
# cp core-site.xml hdfs-site.xml /etc/impala/conf
# cd /etc/impala/conf
# vi hdfs-site.xml
Add:
<property>
<name>dfs.client.read.shortcircuit</name>
<value>true</value>
</property>
<property>
<name>dfs.domain.socket.path</name>
<value>/var/run/hadoop-hdfs/dn._PORT</value>
</property>
<property>
<name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.client.use.legacy.blockreader.local</name>
<value>true</value>
</property>
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>750</value>
</property>
<property>
<name>dfs.block.local-path-access.user</name>
<value>impala</value>
</property>
<property>
<name>dfs.client.file-block-storage-locations.timeout</name>
<value>3000</value>
</property>
# mkdir /var/run/hadoop-hdfs
Note: /var/run/hadoop-hdfs must be owned by the user running the daemons.
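Since everything here runs as root, making the socket directory usable looks roughly like this (adjust the owner if you run the DataNode as another user):

```shell
# The DataNode creates its domain socket under this directory, so the
# user running it needs write access.
mkdir -p /var/run/hadoop-hdfs
chown root:root /var/run/hadoop-hdfs
chmod 755 /var/run/hadoop-hdfs
```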
6. Start Impala
# service impala-state-store start
# service impala-catalog start
# service impala-server start
Check that the daemons are running:
# ps -ef | grep impala
If anything failed, check the logs for the error message.
Start impala-shell:
# impala-shell -i localhost --quiet
[localhost:21000] > select version();
...
[localhost:21000] > select current_database();
...
For impala-shell usage, see
http://www.cloudera.com/documentation/enterprise/latest/topics/impala_tutorial.html#tutorial
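As a smoke test beyond the queries above, you can create and query a throwaway table (the table name t1 is made up for illustration):

```shell
# Each -q call runs one statement against the local impalad.
impala-shell -i localhost -q "CREATE TABLE IF NOT EXISTS t1 (id INT, name STRING)"
impala-shell -i localhost -q "INSERT INTO t1 VALUES (1, 'hello')"
impala-shell -i localhost -q "SELECT * FROM t1"
impala-shell -i localhost -q "DROP TABLE t1"
```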
7. Troubleshooting Impala startup errors
The Impala logs live in /var/log/impala.
Startup error 1:
Failed on local exception:
com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "database32/127.0.1.1"; destination host is: "localhost":9000;
Cause:
Hadoop 2.6 is built against protobuf 2.5, while this Impala build uses protobuf 2.4.
Fix:
Use a Hadoop version that matches Impala. Impala is installed here from binary packages and cannot be recompiled, so the only option is to pick a Hadoop build that matches the Impala build; here that is hadoop-2.0.0-cdh4.1.0 with impala_1.4.0.
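To see which protobuf a given Hadoop build actually bundles (CDH4.1 ships a 2.4.x jar, which is what makes it compatible here), you can look for the jar on disk:

```shell
# Print every bundled protobuf jar; the version is part of the filename.
find "$HADOOP_HOME" -name 'protobuf-java-*.jar'
```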
Startup error 2:
dfs.client.read.shortcircuit is not enabled because - dfs.client.use.legacy
.blockreader.local is not enabled
Cause:
hdfs-site.xml is misconfigured.
Fix:
In /etc/impala/conf/hdfs-site.xml, make sure dfs.client.use.legacy.blockreader.local (and dfs.datanode.hdfs-blocks-metadata.enabled) are set to true.
Startup error 3:
Impalad services did not start correctly, exiting. Error: Couldn't open
transport for 127.0.0.1:24000(connect() failed: Connection refused)
Cause:
impala-state-store and impala-catalog were not started before impala-server.
Fix:
# service impala-state-store start
# service impala-catalog start
# service impala-server start
CentOS section: