Configure passwordless SSH login
Enable Remote Login on macOS
System Preferences -> Sharing -> Remote Login
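Remote Login can also be toggled from the terminal; this assumes the built-in macOS systemsetup utility (requires sudo):
# Check whether Remote Login (sshd) is enabled
sudo systemsetup -getremotelogin
# Enable it if it is off
sudo systemsetup -setremotelogin on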
Authorize passwordless login
# Generate an RSA key pair (skip if you already have one)
ssh-keygen -t rsa -P ''
# Add the public key to the authorized keys
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Verify that login works without a password prompt
ssh localhost
Install Hadoop
brew install hadoop
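As a quick sanity check that the install succeeded and the hadoop command is on the PATH (version 3.1.1 is assumed throughout this guide):
# Prints the installed Hadoop version
hadoop version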
Configuration
# Find the Hadoop install path
brew info hadoop
# Find the Java path
which java
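Note that which java usually points at the /usr/bin/java shim rather than the actual JDK directory; on macOS the java_home helper prints the real JDK home, which is the value JAVA_HOME should be set to below:
# Print the home directory of an installed 1.8 JDK
/usr/libexec/java_home -v 1.8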
Go to the configuration directory
/usr/local/Cellar/hadoop/3.1.1/libexec/etc/hadoop
1. Configure the Java environment
hadoop-env.sh
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_172.jdk/Contents/Home"
2. Configure the HDFS address and port
core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
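fs.default.name is the deprecated alias of fs.defaultFS in current Hadoop releases; both keys work. Once the hadoop binaries are on the PATH (see the environment variables below), the effective value can be read back as a check:
# Expected output: hdfs://localhost:8020
hdfs getconf -confKey fs.defaultFS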
3. Configure the JobTracker address and port
mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
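Hadoop 3.x no longer has a JobTracker, so mapred.job.tracker is effectively ignored there. If MapReduce jobs should run on YARN, the commonly used setting is the following (a sketch, not strictly required for a bare HDFS setup):
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>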
4. Set the HDFS replication factor
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/Cellar/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.http.address</name>
    <value>localhost:50070</value>
  </property>
</configuration>
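dfs.name.dir, dfs.data.dir and dfs.http.address are older aliases that Hadoop 3.x still accepts; the current property names are shown below as an optional, equivalent alternative:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/usr/local/Cellar/hadoop/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/usr/local/Cellar/hadoop/hdfs/data</value>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>localhost:50070</value>
</property>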
Add environment variables
vim ~/.bash_profile
export HADOOP_HOME=/usr/local/Cellar/hadoop/3.1.1
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
Apply the changes
source ~/.bash_profile
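A quick check that the new variables and PATH entries took effect:
# Should print the Hadoop prefix configured above
echo $HADOOP_HOME
# Should resolve to a script under $HADOOP_HOME/bin
which hadoop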
Format HDFS (only needed once, before the first start)
hdfs namenode -format
Start and stop
# Start / stop HDFS
start-dfs.sh
stop-dfs.sh
# Start / stop YARN
start-yarn.sh
stop-yarn.sh
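After start-dfs.sh and start-yarn.sh, the JDK's jps tool is a convenient way to confirm the daemons came up (process names can vary slightly between versions):
# Expected processes include NameNode, DataNode, SecondaryNameNode,
# ResourceManager and NodeManager
jps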
HDFS NameNode web UI: http://localhost:50070
YARN ResourceManager web UI: http://localhost:8088
List files in HDFS
hdfs dfs -ls /
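As a small smoke test (the paths and the file name test.txt below are placeholders), create a home directory in HDFS, upload a local file and list it back:
# Create a per-user home directory in HDFS
hdfs dfs -mkdir -p /user/$(whoami)
# Upload a local file (test.txt is a placeholder)
hdfs dfs -put test.txt /user/$(whoami)/
# List the uploaded file
hdfs dfs -ls /user/$(whoami)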
If a command does not respond, try prefixing it with sudo.