Current environment:

Hostname          IP address     Role
hadoop1           192.168.3.65   namenode, jobtracker
hadoop2           192.168.3.66   datanode, tasktracker
hadoop3           192.168.3.67   datanode, tasktracker, secondarynamenode
hadoop4           192.168.3.64   datanode, tasktracker
A new datanode is to be added to the cluster. The machine's details are as follows:

Hostname          IP address     Role
stat.localdomain  172.16.7.164   datanode
Prerequisite for deployment:
The new datanode must be able to ping every node in the cluster, and vice versa! That is a networking matter, so I won't go into it here.
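One convenient way to check this from the new node is a quick loop over the cluster IPs (just a convenience check, not part of the original steps):
[root@stat ~]# for ip in 192.168.3.65 192.168.3.66 192.168.3.67 192.168.3.64; do ping -c 1 $ip; done
Then do the same from each cluster node against 172.16.7.164.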
1. Add the following entries to the new datanode's hosts file (and likewise add 172.16.7.164 stat.localdomain to the hosts file on each existing node, so the new hostname resolves everywhere):
192.168.3.65 hadoop1
192.168.3.66 hadoop2
192.168.3.67 hadoop3
192.168.3.64 hadoop4
2. Copy id_rsa.pub from the hadoop1 node and rename it authorized_keys:
[root@stat .ssh]# scp 192.168.3.65:/root/.ssh/id_rsa.pub authorized_keys
3. Copy id_rsa from the hadoop1 node to the new datanode:
[root@stat .ssh]# scp 192.168.3.65:/root/.ssh/id_rsa .
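One thing worth doing right after the copy: sshd refuses keys whose files are too permissive, so if the passwordless login test in step 4 fails, tighten the permissions (a common fix; the original steps don't mention it explicitly):
[root@stat .ssh]# chmod 700 /root/.ssh
[root@stat .ssh]# chmod 600 /root/.ssh/authorized_keys /root/.ssh/id_rsa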
4. Verify that hadoop1 can log in to the new datanode, and the new datanode can log in to hadoop1, without being asked for a password:
[root@hadoop1 .ssh]# ssh stat.localdomain
The authenticity of host 'stat.localdomain (172.16.7.164)' can't be established.
RSA key fingerprint is b5:50:2e:4a:1e:81:37:a2:4d:e3:6c:a0:cd:a8:1a:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'stat.localdomain,172.16.7.164' (RSA) to the list of known hosts.
Last login: Mon Jul 2 07:04:38 2012 from zengzhunzhun.ninetowns.cn
[root@stat .ssh]# ssh hadoop1
Last login: Mon Jul 2 10:49:34 2012 from zengzhunzhun.ninetowns.cn
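To be sure the logins are genuinely password-free, you can also force ssh into non-interactive mode; with BatchMode it fails instead of prompting (a standard ssh option, added here as an extra check):
[root@hadoop1 .ssh]# ssh -o BatchMode=yes stat.localdomain hostname
[root@stat .ssh]# ssh -o BatchMode=yes hadoop1 hostname
Each command should print the remote hostname with no prompt in between.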
5. Copy the JDK from the hadoop1 node to the new datanode, and set the corresponding environment variables:
[root@stat ~]# mkdir -p /usr/java
[root@stat ~]# cd /usr/java
[root@stat java]# scp -r hadoop1:/usr/java/jdk* .
Add the environment variables by appending the following to /root/.bash_profile:
export JAVA_HOME=/usr/java/jdk1.6.0_14
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$PATH
Run the following command to make the variables take effect:
[root@stat ~]# source .bash_profile
If the output looks like the following, the JDK is fine:
[root@stat ~]# java -version
java version "1.6.0_14"
Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
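As an extra check that the shell picked up the copied JDK rather than some other java on the PATH:
[root@stat ~]# echo $JAVA_HOME
[root@stat ~]# which java
Both should point into /usr/java/jdk1.6.0_14.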
6. Copy the hadoop installation from hadoop1 to the new datanode:
[root@stat ~]# scp -r hadoop1:/root/hadoop .
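Since the whole directory came from hadoop1, its configuration should already point at the master, but it costs nothing to double-check; fs.default.name and mapred.job.tracker are the standard Hadoop 1.x property names:
[root@stat ~]# grep -A 1 fs.default.name hadoop/conf/core-site.xml
[root@stat ~]# grep -A 1 mapred.job.tracker hadoop/conf/mapred-site.xml
Both values should reference hadoop1.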
7. Start the datanode and tasktracker processes on the new node:
[root@stat ~]# hadoop/bin/hadoop-daemon.sh start datanode
[root@stat ~]# hadoop/bin/hadoop-daemon.sh start tasktracker
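To confirm both daemons actually came up, jps (part of the JDK installed in step 5) should list them:
[root@stat ~]# jps
Look for DataNode and TaskTracker in the output; if either is missing, check the logs under hadoop/logs/ on the new node.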
8. Finally, testing: you can check in the browser, or run hadoop dfsadmin -report; either works. I won't test it here, there shouldn't be any problems! But one piece of advice: having added a datanode this way, next time you run start-all.sh the new node will not come up, unless you add it to the slaves configuration file and rsync that file to every node (a sketch follows below). Remember this!
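A minimal sketch of that, assuming the slaves file sits at /root/hadoop/conf/slaves as in a standard Hadoop 1.x layout; run it on hadoop1:
[root@hadoop1 ~]# echo stat.localdomain >> hadoop/conf/slaves
[root@hadoop1 ~]# for h in hadoop2 hadoop3 hadoop4 stat.localdomain; do rsync -av hadoop/conf/slaves $h:/root/hadoop/conf/; done
After that, start-all.sh on hadoop1 will bring the new node up along with the rest.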
9. One more point: many articles say you must rebalance blocks after adding a node, or else all future data will be placed on the new datanode; that is, run the following command:
[root@stat ~]# hadoop/bin/start-balancer.sh
But in my own experiments, even without rebalancing, subsequent block files were still distributed across nodes, not written only to the new datanode. Maybe my testing wasn't thorough enough; we'll know for sure once it goes to production!
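If you do run it, the balancer takes a threshold argument, the percentage of disk-usage deviation it tolerates before moving blocks (-threshold is a standard option of the Hadoop 1.x balancer), and its bandwidth can be capped with the dfs.balance.bandwidthPerSec property in hdfs-site.xml so it doesn't saturate the network:
[root@stat ~]# hadoop/bin/start-balancer.sh -threshold 5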