1. Install the Java JDK
vi /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_251
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
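The vi edit above can also be scripted. A minimal sketch that appends the same exports with a heredoc; the JDK path is the one assumed throughout this guide, and PROFILE defaults to a local file here so the sketch is safe to try (point it at /etc/profile, as root, on a real machine):

```shell
# PROFILE defaults to a local scratch file; use /etc/profile for real.
PROFILE=${PROFILE:-./profile.local}
# Quoted 'EOF' keeps $JAVA_HOME and $PATH literal in the written file.
cat >> "$PROFILE" <<'EOF'
export JAVA_HOME=/opt/jdk1.8.0_251
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
EOF
```

After appending, `source "$PROFILE"` applies the variables to the current shell, just as `source /etc/profile` does above.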
2. Install Hadoop
Step 1: Download Hadoop from https://hadoop.apache.org/releases.html
Step 2: Configure environment variables
vi /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_251
export HADOOP_HOME=/opt/hadoop-2.9.2
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
Test: run hadoop version; if the configuration took effect, it prints the installed Hadoop version.
Step 3: Passwordless SSH login
(1) Generate a key pair
ssh-keygen -t dsa -f ~/.ssh/id_dsa
(2) Store the public key
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Test: ssh localhost should log in without prompting for a password.
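An idempotent sketch of steps (1)–(2), safe to re-run. The guide uses a DSA key; RSA is substituted here because recent OpenSSH builds refuse to generate DSA keys, and the key file name is a hypothetical one chosen to avoid clobbering an existing key:

```shell
SSH_DIR="$HOME/.ssh"
KEY="$SSH_DIR/id_rsa_hadoop"   # hypothetical name; the guide uses ~/.ssh/id_dsa
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
# Generate the key pair only if it does not exist yet (-N '' = no passphrase).
[ -f "$KEY" ] || ssh-keygen -q -t rsa -N '' -f "$KEY"
# Append the public key only if it is not already authorized.
grep -qxF "$(cat "$KEY.pub")" "$SSH_DIR/authorized_keys" 2>/dev/null \
  || cat "$KEY.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```

The permission fixes (700 on ~/.ssh, 600 on authorized_keys) matter: sshd silently ignores authorized_keys that are group- or world-writable.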
Step 4: Modify the configuration files. Pseudo-distributed Hadoop requires changes to 5 configuration files, all located in /opt/hadoop-2.9.2/etc/hadoop
(1)hadoop-env.sh
export JAVA_HOME=/opt/jdk1.8.0_251
(2) core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://127.0.0.1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
  </property>
</configuration>
(3) hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
(4) Modify mapred-site.xml. First remove the .template suffix: mv mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
(5) yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
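The four XML edits above can be written non-interactively with heredocs. A sketch using the same property names and values as this guide; CONF_DIR defaults to the current directory for safety, so point it at /opt/hadoop-2.9.2/etc/hadoop on a real node:

```shell
CONF_DIR=${CONF_DIR:-.}   # on a real node: /opt/hadoop-2.9.2/etc/hadoop

cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property><name>fs.default.name</name><value>hdfs://127.0.0.1:9000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/data/hadoop/tmp</value></property>
</configuration>
EOF

cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property><name>dfs.namenode.name.dir</name><value>file:/data/hadoop/hdfs/name</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:/data/hadoop/hdfs/data</value></property>
  <property><name>dfs.replication</name><value>1</value></property>
</configuration>
EOF

cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<configuration>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
</configuration>
EOF

cat > "$CONF_DIR/yarn-site.xml" <<'EOF'
<configuration>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
</configuration>
EOF
```

Note that this overwrites the files rather than merging, which matches a fresh install; hadoop-env.sh still needs its JAVA_HOME edit by hand.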
Step 5: Format the HDFS filesystem (run from /opt/hadoop-2.9.2/bin)
./hdfs namenode -format
Step 6: Start the daemons (run from /opt/hadoop-2.9.2/sbin)
./start-all.sh
Test: run jps; on success it lists NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager (plus Jps itself).
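The jps check can be automated. A hedged helper (the function name is an assumption) that takes a jps-style listing and reports which of the five expected daemons are up, returning non-zero if any is missing:

```shell
check_daemons() {
  # $1: output of `jps` ("PID Name" per line); prints one status line per
  # expected daemon and returns non-zero if any daemon is missing.
  local missing=0 d
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    # Anchored match so "NameNode" does not also match "SecondaryNameNode".
    if printf '%s\n' "$1" | grep -qE "^[0-9]+ $d$"; then
      echo "$d: up"
    else
      echo "$d: MISSING"
      missing=1
    fi
  done
  return $missing
}

# Intended usage on a running node:
#   check_daemons "$(jps)"
```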
Web UIs (Hadoop 2.x defaults): NameNode at http://localhost:50070, YARN ResourceManager at http://localhost:8088