First, prepare the following three virtual servers:
Hadoop master server: 192.168.31.160
Node1: 192.168.31.161
Node2: 192.168.31.162
1. All three servers need JDK 8 installed, then the environment variables configured.
1) Install the JDK: rpm -ivh jdk-8u221-linux-x64.rpm
2) Configure the environment variables with vi /etc/profile, appending these three lines at the end:
export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
3) Apply the changes: source /etc/profile
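A quick sanity check that the JDK and the variables took effect (not in the original, but cheap to run on each server):

```
java -version
echo $JAVA_HOME
```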
2. Create a hadoop user on every node and set its password (I set it to hadoop on all three machines):
[root@Hadoop ~]# useradd -d /usr/hadoop hadoop
[root@Hadoop ~]# chmod 755 /usr/hadoop
[root@Hadoop ~]# passwd hadoop
3. Log in to each server as the hadoop user, then set up passwordless SSH on the master node so it can log in to all three nodes without a password:
su - hadoop
ssh-keygen
ssh-copy-id localhost
ssh-copy-id 192.168.31.161
ssh-copy-id 192.168.31.162
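Once the keys are copied, each of the following should print a hostname without prompting for a password (a quick verification, not in the original):

```
ssh localhost hostname
ssh 192.168.31.161 hostname
ssh 192.168.31.162 hostname
```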
4. Upload the Hadoop tarball to all three servers and, on each, extract it into /usr/hadoop:
tar -zxf hadoop-3.1.2.tar.gz -C /usr/hadoop --strip-components 1
Then edit ~/.bash_profile with vi and append:
export HADOOP_HOME=/usr/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Apply the changes: source ~/.bash_profile
5. Log in to the master node as the hadoop user and edit the Hadoop configuration files:
[hadoop@Hadoop ~]$ vi ~/etc/hadoop/hdfs-site.xml
1) hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///usr/hadoop/datanode</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///usr/hadoop/namenode</value>
</property>
2) core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://Hadoop:9000/</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/hadoop/tmp</value>
</property>
3) yarn-site.xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>Hadoop</value>
</property>
<property>
  <name>yarn.nodemanager.hostname</name>
  <value>Hadoop</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
4) mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/hadoop/</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/hadoop/</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/hadoop/</value>
</property>
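Note that the property blocks in each of these files must sit inside the file's single <configuration> root element. As an illustration, the complete mapred-site.xml from 4) would read:

```
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/hadoop/</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/hadoop/</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/usr/hadoop/</value>
  </property>
</configuration>
```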
5) hadoop-env.sh
Uncomment the export JAVA_HOME line and change it to:
export JAVA_HOME=/usr/java/jdk1.8.0_221-amd64
6) workers
Add the worker hostnames to this file, one per line (Node1 and Node2 in this cluster).
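For reference, in Hadoop 3.x the workers file (the renamed slaves file from 2.x) simply lists the hosts that should run the DataNode/NodeManager daemons, so for this cluster it can be generated like this (run from the etc/hadoop directory):

```shell
# Write the two worker hostnames, one per line, into the workers file.
printf 'Node1\nNode2\n' > workers
cat workers
```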
7) Next, switch to the root user and edit /etc/hosts with vi.
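Using the addresses from the top of this guide, the lines added to /etc/hosts on all three machines would be:

```
192.168.31.160 Hadoop
192.168.31.161 Node1
192.168.31.162 Node2
```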
8) Create the datanode and namenode directories (the paths configured in hdfs-site.xml above):
mkdir -p /usr/hadoop/datanode /usr/hadoop/namenode
Once the edits on the master node are done, push the configuration to Node1 and Node2 with scp:
scp ./etc/hadoop/* Node1:~/etc/hadoop/
scp ./etc/hadoop/* Node2:~/etc/hadoop/
9) Add the following to both start-dfs.sh and stop-dfs.sh:
HDFS_DATANODE_USER=hadoop
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=hadoop
HDFS_SECONDARYNAMENODE_USER=hadoop
Likewise, add the following to both start-yarn.sh and stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=hadoop
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=hadoop
6. Starting the Hadoop services and related operations
First format the NameNode (run once, on the master node):
hdfs namenode -format
Then start Hadoop with start-dfs.sh and start-yarn.sh, or simply with start-all.sh.
Check the running daemons with jps, or view the cluster status at http://192.168.31.160:9870.
7. MapReduce sample test
[hadoop@Hadoop ~]$ hadoop fs -mkdir /test
[hadoop@Hadoop ~]$ hadoop fs -put /usr/hadoop/test.log /test
hadoop fs -ls -R /
hadoop fs -cat /output01/part-r-00000
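The second command above reads job output from /output01, but the job submission itself is not shown. A typical way to produce it is the wordcount example bundled with the distribution (the jar path below assumes the stock hadoop-3.1.2 tarball extracted into /usr/hadoop as in step 4):

```
hadoop jar /usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.2.jar wordcount /test /output01
```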