Main Contents
- Hadoop installation
Prerequisites
- ZooKeeper is up and running
- The JAVA_HOME environment variable is set
Installation Package
- Hadoop 2.7.7
Role Assignment

| Host | NN | DN | SNN |
| --- | --- | --- | --- |
| cluster-master | Yes | No | No |
| cluster-slave1 | No | Yes | Yes |
| cluster-slave2 | No | Yes | No |
| cluster-slave3 | No | Yes | No |

Note: SNN here means the HA standby NameNode (nn2 in hdfs-site.xml below), not the SecondaryNameNode.
1. Environment Preparation
Copy the installation package into the Docker container
docker cp hadoop-2.7.7.tar.gz cluster-master:/root/tar
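A quick sanity check that the tarball actually landed in the container (assuming /root/tar already exists there):

docker exec cluster-master ls -lh /root/tar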
Extract
tar xzvf hadoop-2.7.7.tar.gz -C /opt/hadoop
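The later steps invoke hadoop-daemon.sh, start-dfs.sh, etc. by name, so it helps to put Hadoop on the PATH. A minimal sketch, assuming the tarball unpacks to /opt/hadoop/hadoop-2.7.7 (the exact directory name is an assumption):

# Assumption: the archive extracts to /opt/hadoop/hadoop-2.7.7
export HADOOP_HOME=/opt/hadoop/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin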
2. Configuration Files
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://jinbill</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>cluster-master:2181</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>mr_jinbill</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>cluster-slave2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>cluster-slave3</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>192.168.11.46:12181</value>
  </property>
  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>
hadoop-env.sh
export JAVA_HOME=/opt/jdk/jdk1.8.0_221
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>jinbill</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.jinbill</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.jinbill.nn1</name>
    <value>cluster-master:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.jinbill.nn2</name>
    <value>cluster-slave1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.jinbill.nn1</name>
    <value>cluster-master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.jinbill.nn2</name>
    <value>cluster-slave1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://cluster-slave1:8485;cluster-slave2:8485;cluster-slave3:8485/jinbill</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.jinbill</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/data</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
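To confirm the HA settings are being picked up, hdfs getconf (part of Hadoop 2.x) can read back the effective configuration:

hdfs getconf -confKey fs.defaultFS   # expect hdfs://jinbill
hdfs getconf -namenodes              # expect cluster-master cluster-slave1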
Create a slaves file (or edit it directly if it already exists), one hostname per line:
cluster-slave1
cluster-slave2
cluster-slave3
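All four containers need identical configuration. A minimal sync sketch, assuming passwordless SSH between containers (which the sshfence setting above already requires) and the /opt/hadoop/hadoop-2.7.7 layout assumed earlier:

# Push the config directory to every slave container
for host in cluster-slave1 cluster-slave2 cluster-slave3; do
  scp -r /opt/hadoop/hadoop-2.7.7/etc/hadoop $host:/opt/hadoop/hadoop-2.7.7/etc/
done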
3. Initialization
Start the JournalNode on every JournalNode host (cluster-slave1 through cluster-slave3, per dfs.namenode.shared.edits.dir)
hadoop-daemon.sh start journalnode
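To verify, jps (bundled with the JDK) should show a JournalNode process on each of the three hosts:

jps | grep JournalNode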
Initialize the metadata on the NN (cluster-master); run this once only
hdfs namenode -format
Copy the formatted metadata to the SNN (with hadoop.tmp.dir set to /opt/hadoop, the metadata lives under /opt/hadoop/dfs; the bootstrapStandby step below copies the same namespace, so this scp is a belt-and-braces step)
scp -r /opt/hadoop/dfs cluster-slave1:/opt/hadoop
Start the NN on the master node
hadoop-daemon.sh start namenode
Run on the SNN
hdfs namenode -bootstrapStandby
Start the SNN
hadoop-daemon.sh start namenode
Initialize ZKFC on either the NN or the SNN
hdfs zkfc -formatZK
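To confirm the znode was created, query ZooKeeper with the stock zkCli.sh client; /hadoop-ha is the default parent znode that formatZK creates:

zkCli.sh -server cluster-master:2181 ls /hadoop-ha
# expect the nameservice to be listed, e.g. [jinbill]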
Stop the daemons started above
stop-dfs.sh
4. Startup
start-dfs.sh
start-yarn.sh
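Once both scripts return, the HA state can be checked with the stock haadmin/rmadmin tools (the nn1/nn2 and rm1/rm2 IDs come from the configs above):

jps                                  # daemons per node should match the role table
hdfs haadmin -getServiceState nn1    # expect active or standby
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2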
5. Verify the Installation
Because the Docker network is on a different subnet than the host, a route must be added before the containers are reachable (a reachability check follows the list):
- Open cmd with administrator privileges
- route -p add 172.15.0.0 mask 255.255.0.0 192.168.11.38
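With the route in place, the NameNode web UI should be reachable from the host. One way to find the container IP (the grep filter is just a convenience):

docker inspect cluster-master | grep IPAddress

Then open http://<container-ip>:50070 in a browser; the page should show one NameNode as active and the other as standby.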