There are three machines in total, all running Ubuntu 14.04.2; one serves as the master and the other two as slaves.
1. Add the same user on every machine in the cluster
First create a regular user with the adduser command:
#adduser lq    //add a user named lq
#passwd lq     //set its password
Changing password for user lq.
New UNIX password:        //enter the new password here
Retype new UNIX password: //enter the new password again
passwd: all authentication tokens updated successfully.
2. Grant root privileges
Edit the /etc/sudoers file, find the line below, and add a line for lq directly under the root entry, as shown:
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
lq ALL=(ALL) ALL
Once the change is saved, you can log in as lq and run su - to gain root privileges whenever needed.
The user name is set to lq on all three machines.
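To confirm that the sudo entry works, a quick check like the following can be used (a sketch; editing /etc/sudoers through visudo is the safer route):
su - lq
sudo whoami    # should print root after entering lq's password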
3. Edit /etc/hostname on all three machines
/etc/hostname holds the host name. Edit the file, save it, and reboot; the new name takes effect after logging back in. Here the master is named RfidLabMaster and the other two machines are named RfidLabSlave1 and RfidLabSlave2.
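For example, on the master the file can be rewritten like this (a sketch; the two slaves are handled the same way with their own names):
echo "RfidLabMaster" | sudo tee /etc/hostname
hostname    # after the reboot below, this should print RfidLabMaster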
Reboot:
sudo reboot
Configure hosts: edit /etc/hosts on each of the three machines. For example, on the master:
lq@RfidLabMaster:~$ sudo vim /etc/hosts
Add entries of the following form:
<IP of the master machine>    RfidLabMaster
<IP of slave 1>               RfidLabSlave1
<IP of slave 2>               RfidLabSlave2
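For illustration, with made-up private addresses (substitute the real IPs of your three machines), the added lines would look like:
192.168.1.10    RfidLabMaster
192.168.1.11    RfidLabSlave1
192.168.1.12    RfidLabSlave2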
4. Configure passwordless SSH from the master to the slaves
On the master machine:
cd ~
cd .ssh/
ssh-keygen -t rsa
Press Enter at every prompt. Two new files appear in the .ssh directory:
the private key id_rsa and the public key id_rsa.pub.
Copy id_rsa.pub to a file named authorized_keys:
cp id_rsa.pub authorized_keys
Distribute the authorized_keys file to the RfidLabSlave1 and RfidLabSlave2 nodes:
scp authorized_keys lq@RfidLabSlave1:/home/lq/.ssh/
scp authorized_keys lq@RfidLabSlave2:/home/lq/.ssh/
Note: if there is no .ssh directory under the current user's home directory, create it yourself; its permissions should be set to 700 and authorized_keys should be set to 600.
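Concretely, on each slave that could look like this (a sketch matching the note above):
mkdir -p /home/lq/.ssh
chmod 700 /home/lq/.ssh
chmod 600 /home/lq/.ssh/authorized_keys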
Verify passwordless SSH login:
lq@RfidLabMaster:~$ ssh RfidLabSlave1
lq@RfidLabSlave1:~$
5. Configure the JDK environment
Java is already installed on the master under /usr/lib/jvm/jdk1.8.0_60, so that directory is copied straight to the slave nodes; if Java is not installed, download it from the official site and unpack it first.
sudo scp -r /usr/lib/jvm/jdk1.8.0_60 root@RfidLabSlave1:/usr/lib/jvm/
sudo scp -r /usr/lib/jvm/jdk1.8.0_60 root@RfidLabSlave2:/usr/lib/jvm/
Edit /etc/profile to set the Java environment variables:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_60
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
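To pick up the new variables and check them, something like the following should work:
source /etc/profile
java -version      # should report version 1.8.0_60
echo $JAVA_HOME    # should print /usr/lib/jvm/jdk1.8.0_60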
6. Install Hadoop
Download Hadoop; pick whichever version you need. hadoop-2.5.2 is used here.
First download it to /opt/tools on the master server; if that path does not exist, create the directory.
lq@RfidLabMaster:~$ cd /opt/tools/
lq@RfidLabMaster:/opt/tools$ wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
After the download finishes, unpack it:
lq@RfidLabMaster:/opt/tools$ tar -zxvf hadoop-2.5.2.tar.gz
Edit the Hadoop XML configuration files:
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ vim etc/hadoop/core-site.xml
etc/hadoop/core-site.xml is configured as follows:
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://RfidLabMaster:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/lq/hadoop/tmp</value>
    </property>
</configuration>
Edit etc/hadoop/mapred-site.xml as follows:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>RfidLabMaster:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>RfidLabMaster:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>RfidLabMaster:19888</value>
    </property>
</configuration>
Edit etc/hadoop/hdfs-site.xml as follows. Note that the file paths must not contain special characters such as dots or commas, and they must be written as complete paths beginning with the file: prefix.
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/lq/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/lq/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>RfidLabMaster:9000</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>134217728</value>
    </property>
</configuration>
Edit etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>RfidLabMaster:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>RfidLabMaster:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>RfidLabMaster:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>RfidLabMaster:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>RfidLabMaster:8088</value>
    </property>
</configuration>
Edit etc/hadoop/slaves:
RfidLabSlave1
RfidLabSlave2
Set the Java environment variable in etc/hadoop/hadoop-env.sh and etc/hadoop/yarn-env.sh:
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_60
Use scp to copy the configured directory as-is to the other nodes of the cluster:
lq@RfidLabMaster:/opt/tools$ scp -r hadoop-2.5.2 lq@RfidLabSlave1:/opt/tools
lq@RfidLabMaster:/opt/tools$ scp -r hadoop-2.5.2 lq@RfidLabSlave2:/opt/tools
Edit /etc/profile to set the Hadoop environment variables:
export HADOOP_HOME=/opt/tools/hadoop-2.5.2
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"
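As with the Java variables, reloading the profile and asking Hadoop for its version is a quick sanity check:
source /etc/profile
hadoop version    # should report Hadoop 2.5.2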
The distributed Hadoop environment is now set up.
7. Start and verify Hadoop
(1) Format the file system
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ bin/hdfs namenode -format
If formatting fails, the directory may need to be created manually first:
mkdir -p /home/lq/hadoop/dfs
On success the output includes: INFO common.Storage: Storage directory /home/lq/hadoop/dfs/name has been successfully formatted.
...
16/03/02 16:44:54 INFO namenode.NNConf: ACLs enabled? false
16/03/02 16:44:54 INFO namenode.NNConf: XAttrs enabled? true
16/03/02 16:44:54 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/03/02 16:44:54 INFO namenode.FSImage: Allocated new BlockPoolId: BP-677850346-120.25.162.238-1456908294436
16/03/02 16:44:54 INFO common.Storage: Storage directory /home/lq/hadoop/dfs/name has been successfully formatted.
16/03/02 16:44:54 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/03/02 16:44:54 INFO util.ExitUtil: Exiting with status 0
16/03/02 16:44:54 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at RfidLabMaster/120.25.162.238
************************************************************/
(2) Start Hadoop
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ sbin/start-all.sh
Output:
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [RfidLabMaster]
RfidLabMaster: starting namenode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-namenode-RfidLabMaster.out
RfidLabSlave2: starting datanode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-datanode-RfidLabSlave2.out
RfidLabSlave3: starting datanode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-datanode-RfidLabSlave3.out
RfidLabSlave1: starting datanode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-datanode-RfidLabSlave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/tools/hadoop-2.5.2/logs/hadoop-lq-secondarynamenode-RfidLabMaster.out
starting yarn daemons
starting resourcemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-resourcemanager-RfidLabMaster.out
RfidLabSlave1: starting nodemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-nodemanager-RfidLabSlave1.out
RfidLabSlave3: starting nodemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-nodemanager-RfidLabSlave3.out
RfidLabSlave2: starting nodemanager, logging to /opt/tools/hadoop-2.5.2/logs/yarn-lq-nodemanager-RfidLabSlave2.out
Run jps to list the Java processes:
lq@RfidLabMaster:/opt/tools/hadoop-2.5.2$ jps
25073 NameNode
25412 ResourceManager
25676 Jps
25262 SecondaryNameNode
If the startup hangs with output like the following, add StrictHostKeyChecking no to /etc/ssh/ssh_config and then restart the SSH service with /etc/init.d/ssh restart:
...
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 08:1d:db:e4:d2:e0:87:89:ed:ca:69:82:17:6a:83:57
...
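One way to apply that fix, as a sketch of the two steps described above:
echo "StrictHostKeyChecking no" | sudo tee -a /etc/ssh/ssh_config
sudo /etc/init.d/ssh restart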
Run jps on the other slave nodes to check them as well:
lq@RfidLabSlave1:~/hadoop$ jps
2646 NodeManager
2733 Jps
2526 DataNode
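Besides jps, the web interfaces give a quick picture of the cluster. The YARN port below matches the yarn-site.xml setting above; 50070 is the default NameNode HTTP port in this Hadoop version:
http://RfidLabMaster:50070    # HDFS NameNode web UI, lists the live datanodes
http://RfidLabMaster:8088     # YARN ResourceManager web UI, lists the node managers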
8. Problems encountered
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to master/xxx. Exiting. java.io.IOException: Incompatible clusterIDs
Where to look: every namenode directory, every datanode directory, and the temporary directories on the slave nodes.
Cause:
1) The namenode clusterID on the master does not match the datanode clusterID on the slaves.
2) This is the result of formatting the namenode more than once: each format generates a new ID while the slaves keep the old one, so the IDs diverge (they can be compared as shown below).
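With the directories configured in hdfs-site.xml above, the two IDs can be compared directly (a quick check, not part of the original steps):
grep clusterID /home/lq/hadoop/dfs/name/current/VERSION    # on the master (namenode)
grep clusterID /home/lq/hadoop/dfs/data/current/VERSION    # on a slave (datanode)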
Solution:
Before formatting again, stop all services first (stop-dfs.sh and stop-yarn.sh, or stop-all.sh). Once everything is confirmed stopped, go to the namenode directory, the datanode directories, and the temporary directories on every node and delete all of their contents, then start again. Deleting on each machine by hand is tedious, so here is a script that runs the deletions on all machines in one go, adapted from the blog post below.
Link: http://blog.csdn.net/nuaazdh/article/details/39643283
Create the script allcmd.sh:
if [ "$#" -ne 2 ] ; then
echo "USAGE: $0 -f server_list_file cmd"
exit -1
fi
file_name=$1
cmd_str=$2
cwd=$(pwd)
cd $cwd
serverlist_file="$cwd/$file_name"
cmdlist_file="$cwd/$cmd_str"
if [ ! -e $serverlist_file ] ; then
echo 'server.list not exist';
exit 0
fi
if [ ! -e $cmdlist_file ] ; then
echo 'cmd.list not exist';
exit 0
fi
while read line
do
#echo $line
if [ -n "$line" ] ; then
echo "DOING--->>>>>" $line "<<<<<<<"
while read cmd_str
do
ssh $line $cmd_str < /dev/null > /dev/null
if [ $? -eq 0 ] ; then
echo "$cmd_str done!"
else
echo "error: " $?
fi
done<$cmdlist_file
fi
done < $serverlist_file
After creating it, run chmod +x allcmd.sh.
Create the command file cmdList:
rm -r /home/lq/hadoop/dfs/*
rm -r /home/lq/hadoop/tmp/*
rm -r /opt/tools/hadoop-2.5.2/logs/*
Create the server list file serverList:
RfidLabMaster
RfidLabSlave1
RfidLabSlave2
RfidLabSlave3
Usage: create the cmdList and serverList files in the same directory as the script,
then run ./allcmd.sh serverList cmdList.
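Putting the whole recovery together, the sequence on the master looks roughly like this (assuming the script and list files live in the current directory):
/opt/tools/hadoop-2.5.2/sbin/stop-all.sh       # stop all daemons first
./allcmd.sh serverList cmdList                 # clear dfs, tmp and logs on every node
/opt/tools/hadoop-2.5.2/bin/hdfs namenode -format
/opt/tools/hadoop-2.5.2/sbin/start-all.sh      # start the cluster again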