7.3 Rename the directory
[hd@master apps]$ mv hadoop-3.0.0 hadoop
[hd@master apps]$ ll
total 324644
drwxr-xr-x. 12 hd hd 192 Jul 11 00:09 hadoop
7.4 Edit the Hadoop configuration files
7.4.1 Edit hadoop-env.sh
[hd@master ~]$ cd /home/hd/apps/hadoop/etc/hadoop/
[hd@master hadoop]$ pwd
/home/hd/apps/hadoop/etc/hadoop
[hd@master hadoop]$ vi hadoop-env.sh
# At the end of the file (in vi, press "G" to jump to the end), add:
export JAVA_HOME=/home/hd/apps/java
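If you prefer a non-interactive edit, the same line can be appended from the shell and checked afterwards (a minimal sketch assuming the paths above; vi works just as well):
[hd@master hadoop]$ echo 'export JAVA_HOME=/home/hd/apps/java' >> hadoop-env.sh
[hd@master hadoop]$ tail -n 1 hadoop-env.sh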
7.4.2 Edit core-site.xml
<configuration>
    <!-- File system schema (URI) used by Hadoop: the address of the HDFS NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <!-- Base directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hd/apps/hadoop/tmpdata</value>
    </property>
</configuration>
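Once the installation is complete, hdfs getconf can read any configuration key back, which is a quick optional check that the file is being picked up; it should print hdfs://master:9000:
[hd@master hadoop]$ hdfs getconf -confKey fs.defaultFS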
7.4.3 Edit hdfs-site.xml
<configuration>
    <!-- Number of HDFS block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <!-- HTTP address of the NameNode web UI -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
    <!-- HTTP address of the SecondaryNameNode web UI -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <!-- Local path where the NameNode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hd/apps/hadoop/namenode</value>
    </property>
    <!-- Local path where DataNodes store block data -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hd/apps/hadoop/datanode</value>
    </property>
</configuration>
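Neither storage path needs to be created by hand: the format step in section 7.6 initializes dfs.namenode.name.dir, and each DataNode creates dfs.datanode.data.dir on first start (the transcript in 7.6 assumes you did not pre-create them). If you want them in place up front anyway, a pre-create is harmless:
[hd@master hadoop]$ mkdir -p /home/hd/apps/hadoop/namenode /home/hd/apps/hadoop/datanode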
7.4.4 Edit mapred-site.xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hd/apps/hadoop</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hd/apps/hadoop</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/home/hd/apps/hadoop</value>
    </property>
</configuration>
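The three *.env properties tell the MapReduce ApplicationMaster and the map/reduce tasks where Hadoop is installed; on Hadoop 3 they are a common fix for jobs that fail with "Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster". Once the cluster is running (section 7.7), the bundled example job is a quick end-to-end check (the jar version here matches the 3.0.0 install assumed in this guide):
[hd@master ~]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0.jar pi 2 10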
7.4.5 Edit yarn-site.xml
<configuration>
    <!-- Hostname of the YARN ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <!-- Auxiliary service that lets reducers fetch map output (shuffle) -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
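After the daemons are up (section 7.7), yarn node -list should report two NodeManagers, one per worker:
[hd@master ~]$ yarn node -list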
7.4.6 Edit workers
[hd@master hadoop]$ vi workers
slave01
slave02
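start-dfs.sh and start-yarn.sh log in to each host listed in workers over SSH to launch the DataNode and NodeManager daemons, so passwordless SSH from master to the workers must already be in place; a quick check for each worker:
[hd@master hadoop]$ ssh hd@slave01 hostname
[hd@master hadoop]$ ssh hd@slave02 hostname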
7.4.7 Edit environment variables
[hd@master hadoop]$ su root
Password:
[root@master hadoop]# vi /etc/profile
# Add:
export HADOOP_HOME=/home/hd/apps/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
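After saving, reload the profile and confirm the variable is set (the same reload is repeated on all machines in section 7.5):
[root@master hadoop]# source /etc/profile
[root@master hadoop]# echo $HADOOP_HOME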
7.5 Copy to the second and third machines
[root@master hadoop]# su hd
[hd@master hadoop]$ scp -r /home/hd/apps/hadoop hd@slave01:/home/hd/apps/
[hd@master hadoop]$ scp -r /home/hd/apps/hadoop hd@slave02:/home/hd/apps/
[hd@master hadoop]$ su root
Password:
[root@master hadoop]# scp /etc/profile root@slave01:/etc/
root@slave01's password:
profile 100% 1896 1.9KB/s 00:00
[root@master hadoop]# scp /etc/profile root@slave02:/etc/
profile 100% 1896 1.9KB/s 00:00
Reload the environment on each of the three machines (shown on master; repeat on slave01 and slave02):
[root@master hadoop]# source /etc/profile
[hd@master hadoop]$ hadoop version
Hadoop 3.0.0
7.6 Format the NameNode
The ls below confirms that the NameNode directory configured in hdfs-site.xml does not exist yet; formatting creates and initializes it:
[hd@master hadoop]$ ll /home/hd/apps/hadoop/namenode
ls: cannot access /home/hd/apps/hadoop/namenode: No such file or directory
[hd@master hadoop]$ hadoop namenode -format
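The hadoop namenode form still works in Hadoop 3.x but is deprecated in favor of:
[hd@master hadoop]$ hdfs namenode -format
Run the format exactly once, on master only: reformatting generates a new cluster ID, after which existing DataNodes refuse to register until their data directories are cleared.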
7.7 Start Hadoop
start-dfs.sh starts the HDFS distributed file system; stop-dfs.sh stops it
start-yarn.sh starts the YARN resource manager; stop-yarn.sh stops it
start-all.sh starts both HDFS and YARN; stop-all.sh stops both
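For example, bringing the cluster up in two steps from master (the scripts use the workers file and SSH to start the DataNode and NodeManager daemons on slave01 and slave02):
[hd@master ~]$ start-dfs.sh
[hd@master ~]$ start-yarn.sh
In Hadoop 3, start-all.sh still works but prints a deprecation warning recommending the two separate scripts.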
7.8 Check the processes with jps
[hd@master ~]$ jps
23668 SecondaryNameNode
23467 NameNode
23903 ResourceManager
24207 Jps
[hd@slave01 ~]$ jps
22341 DataNode
22649 Jps
22458 NodeManager
[hd@slave02 ~]$ jps
23367 Jps
23176 NodeManager
23051 DataNode
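If a DataNode is missing here, hdfs dfsadmin -report gives a cluster-wide view of which nodes have registered and how much capacity each reports:
[hd@master ~]$ hdfs dfsadmin -report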
7.9 Test
HDFS web UI: http://192.168.126.128:50070/dfshealth.html#tab-overview
YARN ResourceManager web UI: http://192.168.126.128:8088/cluster
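A minimal HDFS smoke test from the command line (the paths are arbitrary; /etc/hosts is just a convenient small file):
[hd@master ~]$ hdfs dfs -mkdir -p /test
[hd@master ~]$ hdfs dfs -put /etc/hosts /test
[hd@master ~]$ hdfs dfs -ls /test
The uploaded file should also be visible under Utilities > Browse the file system in the HDFS web UI.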