[Lab] Hadoop-2.7.2 + ZooKeeper-3.4.6 Fully Distributed Setup (HDFS and YARN HA)



I. Versions

Component | Version | Notes
JRE | java version "1.7.0_67"; Java(TM) SE Runtime Environment (build 1.7.0_67-b01); Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode) |
Hadoop | hadoop-2.7.2.tar.gz | Main distribution
Zookeeper | zookeeper-3.4.6.tar.gz | Coordination service used for hot failover and for storing HDFS/YARN HA state

II. Host Planning

IP | Host & installed software | Deployed modules | Processes
172.16.101.55 | sht-sgmhadoopnn-01 (hadoop) | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager
172.16.101.56 | sht-sgmhadoopnn-02 (hadoop) | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager
172.16.101.58 | sht-sgmhadoopdn-01 (hadoop, zookeeper) | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
172.16.101.59 | sht-sgmhadoopdn-02 (hadoop, zookeeper) | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
172.16.101.60 | sht-sgmhadoopdn-03 (hadoop, zookeeper) | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain

III. Directory Planning

Name | Path
$HADOOP_HOME | /hadoop/hadoop-2.7.2
Data | $HADOOP_HOME/data
Log | $HADOOP_HOME/logs

IV. Common Scripts and Commands

1. Start the cluster

start-dfs.sh

start-yarn.sh

2. Stop the cluster

stop-yarn.sh

stop-dfs.sh

3. Monitor the cluster

hdfs dfsadmin -report

4. Start/stop a single process

hadoop-daemon.sh start|stop namenode|datanode|journalnode

yarn-daemon.sh start|stop resourcemanager|nodemanager

http://blog.chinaunix.net/uid-25723371-id-4943894.html

V. Environment Preparation

1. Set the IP address (all 5 machines)


[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
DNS1="172.16.101.63"
DNS2="172.16.101.64"
GATEWAY="172.16.101.1"
HWADDR="00:50:56:82:50:1E"
IPADDR="172.16.101.55"
NETMASK="255.255.255.0"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="257c075f-6c6a-47ef-a025-e625367cbd9c"


Run: service network restart

Verify: ifconfig

2. Stop the firewall (all 5 machines)

Run: service iptables stop

Verify: service iptables status

3. Disable the firewall at boot (all 5 machines)

Run: chkconfig iptables off

Verify: chkconfig --list | grep iptables

4. Set the hostname (all 5 machines)

Run:
(1) hostname sht-sgmhadoopnn-01

(2) vi /etc/sysconfig/network


[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=sht-sgmhadoopnn-01.telenav.cn
GATEWAY=172.16.101.1

5. Bind IPs to hostnames (all 5 machines)


[root@sht-sgmhadoopnn-01 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.101.55 sht-sgmhadoopnn-01.telenav.cn sht-sgmhadoopnn-01
172.16.101.56 sht-sgmhadoopnn-02.telenav.cn sht-sgmhadoopnn-02
172.16.101.58 sht-sgmhadoopdn-01.telenav.cn sht-sgmhadoopdn-01
172.16.101.59 sht-sgmhadoopdn-02.telenav.cn sht-sgmhadoopdn-02
172.16.101.60 sht-sgmhadoopdn-03.telenav.cn sht-sgmhadoopdn-03
Verify: ping sht-sgmhadoopnn-01

6. Set up passwordless SSH among the 5 machines (see http://blog.itpub.net/30089851/viewspace-1992210/); a minimal sketch follows.
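A minimal sketch, assuming root on every node and that /etc/hosts is already populated as in step 5:

# On each of the five machines, generate a key pair (accept the defaults)
ssh-keygen -t rsa
# Then push the public key to every machine, including the local one
for h in sht-sgmhadoopnn-01 sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
  ssh-copy-id root@$h
done
# Verify: this should log in without a password prompt
ssh sht-sgmhadoopnn-02 date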

7. Install the JDK (all 5 machines)


(1) Run:
[root@sht-sgmhadoopnn-01 ~]# cd /usr/java
[root@sht-sgmhadoopnn-01 java]# cp /tmp/jdk-7u67-linux-x64.gz ./
[root@sht-sgmhadoopnn-01 java]# tar -xzvf jdk-7u67-linux-x64.gz
(2) vi /etc/profile and append:
export JAVA_HOME=/usr/java/jdk1.7.0_67
export HADOOP_HOME=/hadoop/hadoop-2.7.2
export ZOOKEEPER_HOME=/hadoop/zookeeper
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
# HADOOP_HOME and ZOOKEEPER_HOME are configured here ahead of time
# the lab machines already have jdk1.7.0_67-cloudera installed
(3) Run: source /etc/profile
(4) Verify: java -version

8. Create the base directory (all 5 machines)

mkdir /hadoop

VI. Install Zookeeper

sht-sgmhadoopdn-01/02/03

1. Download and extract zookeeper-3.4.6.tar.gz


[root@sht-sgmhadoopdn-01 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-02 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-03 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-01 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-02 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-03 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-01 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
[root@sht-sgmhadoopdn-02 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
[root@sht-sgmhadoopdn-03 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper

2. Edit the configuration


[root@sht-sgmhadoopdn-01 tmp]# cd /hadoop/zookeeper/conf
[root@sht-sgmhadoopdn-01 conf]# cp zoo_sample.cfg zoo.cfg
[root@sht-sgmhadoopdn-01 conf]# vi zoo.cfg
# set dataDir
dataDir=/hadoop/zookeeper/data
# add the following three lines
server.1=sht-sgmhadoopdn-01:2888:3888
server.2=sht-sgmhadoopdn-02:2888:3888
server.3=sht-sgmhadoopdn-03:2888:3888
[root@sht-sgmhadoopdn-01 conf]# cd ../
[root@sht-sgmhadoopdn-01 zookeeper]# mkdir data
[root@sht-sgmhadoopdn-01 zookeeper]# touch data/myid
[root@sht-sgmhadoopdn-01 zookeeper]# echo 1 > data/myid
[root@sht-sgmhadoopdn-01 zookeeper]# more data/myid
1
## repeat the same configuration on sht-sgmhadoopdn-02/03; only the myid value differs:
[root@sht-sgmhadoopdn-02 zookeeper]# echo 2 > data/myid
[root@sht-sgmhadoopdn-03 zookeeper]# echo 3 > data/myid

VII. Install Hadoop (HDFS HA + YARN HA)

# For steps 3-7: if you paste text from Windows into a Linux session over SecureCRT and Chinese characters come out garbled, see http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html

1. Download and extract hadoop-2.7.2.tar.gz

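Download and extract on sht-sgmhadoopnn-01 only; the fully configured directory is copied to the other nodes in step 9. A minimal sketch, using the mirror listed in the references:

[root@sht-sgmhadoopnn-01 tmp]# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# tar -xzvf hadoop-2.7.2.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# mv hadoop-2.7.2 /hadoop/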

2. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh

export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"

3. Edit $HADOOP_HOME/etc/hadoop/core-site.xml


<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- YARN uses fs.defaultFS to locate the NameNode URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <!-- HDFS superuser group -->
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>root</value>
    </property>
    <!-- ============================== Trash ======================================= -->
    <property>
        <!-- How often the checkpointer running on the NameNode creates a checkpoint from Current; default 0 means the value of fs.trash.interval is used -->
        <name>fs.trash.checkpoint.interval</name>
        <value>0</value>
    </property>
    <property>
        <!-- Minutes after which checkpoint directories under .Trash are deleted; the server-side setting takes precedence over the client's; default 0 disables trash -->
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>


4. Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml



<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <!-- enable WebHDFS -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hadoop/hadoop-2.7.2/data/dfs/name</value>
        <description>Local directory where the NameNode stores the name table (fsimage) (adjust as needed)</description>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
        <description>Local directory where the NameNode stores the transaction files (edits) (adjust as needed)</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/hadoop/hadoop-2.7.2/data/dfs/data</value>
        <description>Local directory where the DataNode stores blocks (adjust as needed)</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- block size: 268435456 bytes = 256 MB -->
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>
    <!--======================================================================= -->
    <!-- HDFS HA configuration -->
    <!-- logical name of the nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <!-- NameNode IDs; this version supports at most two NameNodes -->
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>

    <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID] RPC address -->
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>sht-sgmhadoopnn-01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>sht-sgmhadoopnn-02:8020</value>
    </property>

    <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID] HTTP address -->
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>sht-sgmhadoopnn-01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>sht-sgmhadoopnn-02:50070</value>
    </property>

    <!-- ================== NameNode edit-log synchronization ============================================ -->
    <!-- ensures the edit log can be recovered -->
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <!-- JournalNode addresses; the QuorumJournalManager stores the edit log here -->
        <!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>; the port matches dfs.journalnode.rpc-address -->
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://sht-sgmhadoopdn-01:8485;sht-sgmhadoopdn-02:8485;sht-sgmhadoopdn-03:8485/mycluster</value>
    </property>

    <property>
        <!-- directory where the JournalNodes store their data -->
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop/hadoop-2.7.2/data/dfs/jn</value>
    </property>
    <!-- ================== Client failover ============================================ -->
    <property>
        <!-- Strategy DataNodes and clients use to identify the active NameNode -->
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!--==================Namenode fencing:=============================================== -->
    <!-- Prevents a NameNode that was failed over from coming back up and serving alongside the new active (split-brain) -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <!-- milliseconds after which fencing is considered to have failed -->
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

    <!--==================NameNode auto failover base ZKFC and Zookeeper====================== -->
    <!-- Enable automatic failover based on ZooKeeper and the ZKFC process, which monitors whether the NameNode has died -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
    </property>
    <property>
        <!-- ZooKeeper session timeout in milliseconds -->
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
    </property>
</configuration>
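A quick sanity check once the file is saved: hdfs getconf reads back the effective configuration, so the two commands below should print mycluster and nn1,nn2 respectively.

hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.ha.namenodes.mycluster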

5. Edit $HADOOP_HOME/etc/hadoop/yarn-env.sh

#Yarn Daemon Options

#export YARN_RESOURCEMANAGER_OPTS

#export YARN_NODEMANAGER_OPTS

#export YARN_PROXYSERVER_OPTS

#export HADOOP_JOB_HISTORYSERVER_OPTS

#Yarn Logs

export YARN_LOG_DIR="/hadoop/hadoop-2.7.2/logs"

6. Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml


[root@sht-sgmhadoopnn-01 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@sht-sgmhadoopnn-01 hadoop]# vi mapred-site.xml
<configuration>
    <!-- run MapReduce applications on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory Server ============================================================== -->
    <!-- MapReduce JobHistory Server address; default: 0.0.0.0:10020 -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>sht-sgmhadoopnn-01:10020</value>
    </property>
    <!-- MapReduce JobHistory Server web UI address; default: 0.0.0.0:19888 -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>sht-sgmhadoopnn-01:19888</value>
    </property>
</configuration>
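Note that start-yarn.sh does not start the JobHistory Server configured above; once the cluster is up, it can be started by hand on sht-sgmhadoopnn-01 (the script ships in $HADOOP_HOME/sbin in Hadoop 2.x):

mr-jobhistory-daemon.sh start historyserver

Verify with jps (look for JobHistoryServer) or at http://sht-sgmhadoopnn-01:19888/.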

7. Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml


<configuration>
    <!-- NodeManager configuration ================================================= -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <description>Address where the localizer IPC is.</description>
        <name>yarn.nodemanager.localizer.address</name>
        <value>0.0.0.0:23344</value>
    </property>
    <property>
        <description>NM Webapp address.</description>
        <name>yarn.nodemanager.webapp.address</name>
        <value>0.0.0.0:23999</value>
    </property>

    <!-- HA configuration =============================================================== -->
    <!-- Resource Manager Configs -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Enable embedded automatic failover; in an HA setup it works with the ZKRMStateStore to handle fencing -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <!-- Cluster name, so HA elections are scoped to the correct cluster -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Optionally pin this node's RM ID; if used, set the value separately on each ResourceManager:
    <property>
        <name>yarn.resourcemanager.ha.id</name>
        <value>rm2</value>
</property>
 -->
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
        <value>5000</value>
    </property>
    <!-- ZKRMStateStore configuration -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
    </property>
    <!-- RPC address clients use to reach the RM (applications manager interface) -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23140</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23140</value>
    </property>
    <!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23130</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23130</value>
    </property>
    <!-- RM admin interface -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23141</value>
    </property>
    <!-- RPC port NodeManagers use to reach the RM -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23125</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23125</value>
    </property>
    <!-- RM web application address -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>sht-sgmhadoopnn-01:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>sht-sgmhadoopnn-02:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23189</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23189</value>
    </property>
</configuration>

8. Edit slaves

[root@sht-sgmhadoopnn-01 hadoop]# vi slaves

sht-sgmhadoopdn-01

sht-sgmhadoopdn-02

sht-sgmhadoopdn-03

9. Distribute the installation directory to the other nodes

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopnn-02:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-01:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-02:/hadoop

[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-03:/hadoop

VIII. Start the Cluster

An alternative startup procedure: http://www.micmiu.com/bigdata/hadoop/hadoop2-cluster-ha-setup/

1. Start Zookeeper


command: ./zkServer.sh start|stop|status
[root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh start
JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@sht-sgmhadoopdn-01 bin]# jps
2073 QuorumPeerMain
2106 Jps
[root@sht-sgmhadoopdn-02 bin]# ./zkServer.sh start
JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@sht-sgmhadoopdn-02 bin]# jps
2073 QuorumPeerMain
2106 Jps
[root@sht-sgmhadoopdn-03 bin]# ./zkServer.sh start
JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@sht-sgmhadoopdn-03 bin]# jps
2073 QuorumPeerMain
2106 Jps

2. Start Hadoop (HDFS + YARN)

a. Before formatting, first start the JournalNode process on each JournalNode machine


[root@sht-sgmhadoopdn-01 ~]# cd /hadoop/hadoop-2.7.2/sbin
[root@sht-sgmhadoopdn-01 sbin]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-01.telenav.cn.out
[root@sht-sgmhadoopdn-01 sbin]# jps
16722 JournalNode
16775 Jps
15519 QuorumPeerMain
[root@sht-sgmhadoopdn-02 ~]# cd /hadoop/hadoop-2.7.2/sbin
[root@sht-sgmhadoopdn-02 sbin]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-02.telenav.cn.out
[root@sht-sgmhadoopdn-02 sbin]# jps
16722 JournalNode
16775 Jps
15519 QuorumPeerMain
[root@sht-sgmhadoopdn-03 ~]# cd /hadoop/hadoop-2.7.2/sbin
[root@sht-sgmhadoopdn-03 sbin]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
[root@sht-sgmhadoopdn-03 sbin]# jps
16722 JournalNode
16775 Jps
15519 QuorumPeerMain

b. Format the NameNode


[root@sht-sgmhadoopnn-01 bin]# hadoop namenode -format
16/02/25 14:05:04 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = sht-sgmhadoopnn-01.telenav.cn/172.16.101.55
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath =
……………..
………………
16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/02/25 14:05:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/02/25 14:05:07 INFO util.GSet: VM type = 64-bit
16/02/25 14:05:07 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/02/25 14:05:07 INFO util.GSet: capacity = 2^15 = 32768 entries
16/02/25 14:05:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1182930464-172.16.101.55-1456380308394
16/02/25 14:05:08 INFO common.Storage: Storage directory /hadoop/hadoop-2.7.2/data/dfs/name has been successfully formatted.
16/02/25 14:05:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/02/25 14:05:08 INFO util.ExitUtil: Exiting with status 0
16/02/25 14:05:08 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-01.telenav.cn/172.16.101.55
************************************************************/

c. Synchronize the NameNode metadata


Sync the metadata from sht-sgmhadoopnn-01 to sht-sgmhadoopnn-02.
This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared storage directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata.
[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# pwd
/hadoop/hadoop-2.7.2
[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# scp -r data/ root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2
seen_txid 100% 2 0.0KB/s 00:00
fsimage_0000000000000000000 100% 351 0.3KB/s 00:00
fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00
VERSION 100% 205 0.2KB/s 00:00
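Equivalently, the standard Hadoop 2.x alternative to copying the directory by hand is to let the standby pull the metadata itself:

[root@sht-sgmhadoopnn-02 ~]# hdfs namenode -bootstrapStandby

This contacts the active NameNode over RPC and downloads the latest fsimage into the local dfs.namenode.name.dir.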

d. Initialize the ZKFC znode in ZooKeeper


[root@sht-sgmhadoopnn-01 bin]# hdfs zkfc -formatZK
……………..
……………..
16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hadoop/hadoop-2.7.2/bin
16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=2000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5f4298a5
16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181. Will not attempt to authenticate using SASL (unknown error)
16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, initiating session
16/02/25 14:14:42 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, sessionid = 0x15316c965750000, negotiated timeout = 4000
16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Session connected.
16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
16/02/25 14:14:42 INFO zookeeper.ClientCnxn: EventThread shut down
16/02/25 14:14:42 INFO zookeeper.ZooKeeper: Session: 0x15316c965750000 closed
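To confirm the znode exists, connect with the ZooKeeper CLI from any ZooKeeper node; ls /hadoop-ha should list the mycluster entry created above:

[root@sht-sgmhadoopdn-01 bin]# ./zkCli.sh -server sht-sgmhadoopdn-01:2181
[zk: sht-sgmhadoopdn-01:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]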

e. Start HDFS

To start the cluster, run start-dfs.sh on sht-sgmhadoopnn-01.

To stop the cluster, run stop-dfs.sh on sht-sgmhadoopnn-01.

##### Cluster start ############


[root@sht-sgmhadoopnn-01 sbin]# start-dfs.sh
16/02/25 14:21:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
sht-sgmhadoopnn-01: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.telenav.cn.out
sht-sgmhadoopnn-02: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.telenav.cn.out
sht-sgmhadoopdn-01: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-01.telenav.cn.out
sht-sgmhadoopdn-02: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-02.telenav.cn.out
sht-sgmhadoopdn-03: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-03.telenav.cn.out
Starting journal nodes [sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03]
sht-sgmhadoopdn-01: journalnode running as process 6348. Stop it first.
sht-sgmhadoopdn-03: journalnode running as process 16722. Stop it first.
sht-sgmhadoopdn-02: journalnode running as process 7197. Stop it first.
16/02/25 14:21:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
sht-sgmhadoopnn-01: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-01.telenav.cn.out
sht-sgmhadoopnn-02: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-02.telenav.cn.out
You have mail in /var/spool/mail/root

#### Per-process start ###########

NameNode(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

hadoop-daemon.sh start namenode

DataNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

hadoop-daemon.sh start datanode

JournalNode(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):

hadoop-daemon.sh start journalnode

ZKFC(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):

hadoop-daemon.sh start zkfc

f. Verify namenode, datanode, zkfc

1) Processes


[root@sht-sgmhadoopnn-01 sbin]# jps
12712 Jps
12593 DFSZKFailoverController
12278 NameNode
[root@sht-sgmhadoopnn-02 ~]# jps
29714 NameNode
29849 DFSZKFailoverController
30229 Jps
[root@sht-sgmhadoopdn-01 ~]# jps
6348 JournalNode
8775 Jps
559 QuorumPeerMain
8509 DataNode
[root@sht-sgmhadoopdn-02 ~]# jps
9430 Jps
9160 DataNode
7197 JournalNode
2073 QuorumPeerMain
[root@sht-sgmhadoopdn-03 ~]# jps
16722 JournalNode
17369 Jps
15519 QuorumPeerMain
17214 DataNode

2) Web UIs

sht-sgmhadoopnn-01:

http://172.16.101.55:50070/

sht-sgmhadoopnn-02:

http://172.16.101.56:50070/
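The NameNode HA state can also be read from the command line; one of the two should report active and the other standby (which is which depends on the election):

[root@sht-sgmhadoopnn-01 ~]# hdfs haadmin -getServiceState nn1
[root@sht-sgmhadoopnn-01 ~]# hdfs haadmin -getServiceState nn2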

g. Start the YARN framework

##### Cluster start ############

1) Start YARN on sht-sgmhadoopnn-01; the script lives in $HADOOP_HOME/sbin


[root@sht-sgmhadoopnn-01 sbin]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-01.telenav.cn.out
sht-sgmhadoopdn-03: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-03.telenav.cn.out
sht-sgmhadoopdn-02: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-02.telenav.cn.out
sht-sgmhadoopdn-01: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-01.telenav.cn.out

2) Start the ResourceManager on the standby node sht-sgmhadoopnn-02


[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-02.telenav.cn.out

#### Per-process start ###########

1) ResourceManager(sht-sgmhadoopnn-01, sht-sgmhadoopnn-02)

yarn-daemon.sh start resourcemanager

2) NodeManager(sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03)

yarn-daemon.sh start nodemanager

###### Shutdown #############

[root@sht-sgmhadoopnn-01 sbin]# stop-yarn.sh

# this stops the resourcemanager process on the namenode host and the nodemanager process on each datanode

[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh stop resourcemanager

h. Verify resourcemanager, nodemanager

1) Processes


[root@sht-sgmhadoopnn-01 sbin]# jps
13611 Jps
12593 DFSZKFailoverController
12278 NameNode
13384 ResourceManager
[root@sht-sgmhadoopnn-02 sbin]# jps
32265 ResourceManager
32304 Jps
29714 NameNode
29849 DFSZKFailoverController
[root@sht-sgmhadoopdn-01 ~]# jps
6348 JournalNode
559 QuorumPeerMain
8509 DataNode
10286 NodeManager
10423 Jps
[root@sht-sgmhadoopdn-02 ~]# jps
9160 DataNode
10909 NodeManager
11937 Jps
7197 JournalNode
2073 QuorumPeerMain
[root@sht-sgmhadoopdn-03 ~]# jps
18031 Jps
16722 JournalNode
17710 NodeManager
15519 QuorumPeerMain
17214 DataNode

2) Web UIs

ResourceManager (Active): http://172.16.101.55:8088

ResourceManager (Standby): http://172.16.101.56:8088/cluster/cluster
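As with HDFS, the RM HA state can be checked from the command line; one RM should report active and the other standby:

[root@sht-sgmhadoopnn-01 ~]# yarn rmadmin -getServiceState rm1
[root@sht-sgmhadoopnn-01 ~]# yarn rmadmin -getServiceState rm2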

IX. Monitor the Cluster

[root@sht-sgmhadoopnn-01 ~]# hdfs dfsadmin -report
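For YARN, yarn node -list shows the registered NodeManagers. A quick end-to-end smoke test is to submit the bundled example job; the jar path below assumes the stock 2.7.2 layout:

[root@sht-sgmhadoopnn-01 ~]# yarn node -list
[root@sht-sgmhadoopnn-01 ~]# hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 2 10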

X. Attachments and References

#http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.5.2.tar.gz

#http://archive-primary.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.5.2.tar.gz

hadoop: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

zookeeper: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz

References:

Hadoop-2.3.0-cdh5.0.1 fully distributed setup (NameNode and ResourceManager HA):

http://blog.itpub.net/30089851/viewspace-1987620/

How to resolve errors like: The string "--" is not permitted within comments:

http://blog.csdn.net/free4294/article/details/38681095

Fixing garbled Chinese characters in a SecureCRT Linux terminal:

http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html


See also: http://blog.itpub.net/30089851/viewspace-1987620/




