5. Start the ZooKeeper cluster
On every node, run:
zkServer.sh start
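Since the command must be run on each ZooKeeper node, a small loop from one machine can start them all and then report each node's role. This is a sketch: the host list assumes the ZooKeeper ensemble runs on the bigdata12/13/14 nodes mentioned in this guide; adjust it to your actual ensemble.

```shell
#!/bin/sh
# Assumed ZooKeeper ensemble hosts; adjust to your cluster.
ZK_HOSTS="bigdata12 bigdata13 bigdata14"

for host in $ZK_HOSTS; do
  # Start ZooKeeper on each node, then print its role (leader/follower)
  # so you can confirm the ensemble elected a leader.
  ssh root@"$host" "zkServer.sh start && zkServer.sh status"
done
```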
6. Start the JournalNodes
Start the journalnode daemon on the bigdata12 and bigdata13 nodes:
hadoop-daemon.sh start journalnode
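You can confirm the daemon actually came up on each of the two nodes with jps, which lists the running Java processes:

```shell
# On bigdata12 and bigdata13: the JournalNode JVM should appear
# in the process list after the start command.
jps | grep JournalNode
```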
7. Format HDFS and ZooKeeper (run on bigdata12)
Format HDFS:
hdfs namenode -format
Copy /root/training/hadoop-2.7.3/tmp to /root/training/hadoop-2.7.3/tmp on bigdata13. The scp below is run from inside /root/training/hadoop-2.7.3/tmp on bigdata12, so it copies the dfs/ subdirectory created by the format:
scp -r dfs/ root@bigdata13:/root/training/hadoop-2.7.3/tmp
Format ZooKeeper:
hdfs zkfc -formatZK
Log: INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.
This log line shows that a /hadoop-ha/ns1 child node was created in ZooKeeper's file system to store the NameNode's state information.
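If you want to see the znode for yourself, the ZooKeeper CLI can list it. This assumes ZooKeeper is listening on its default client port 2181 on bigdata12; use whichever ensemble member is convenient.

```shell
# List the HA parent znode; after a successful format, the ns1 entry
# created by zkfc should appear in the output.
zkCli.sh -server bigdata12:2181 ls /hadoop-ha
```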
8. Start the Hadoop cluster (run on bigdata12)
Start the Hadoop cluster with:
start-all.sh
Log output:
Starting namenodes on [bigdata12 bigdata13]
bigdata12: starting namenode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-namenode-hadoop113.out
bigdata13: starting namenode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-namenode-hadoop112.out
bigdata14: starting datanode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop115.out
bigdata15: starting datanode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop114.out
bigdata13: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata13.out
bigdata12: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata12.out
On bigdata13, manually start a ResourceManager to act as YARN's standby master:
yarn-daemon.sh start resourcemanager
With that, the HA architecture of the Hadoop cluster is up and running.
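To confirm that failover is actually wired up, you can query which NameNode and ResourceManager are active versus standby. A sketch, assuming the HA service IDs are nn1/nn2 and rm1/rm2; the real IDs come from your hdfs-site.xml (dfs.ha.namenodes.ns1) and yarn-site.xml configuration.

```shell
# NameNode HA state: one should report "active", the other "standby".
# nn1/nn2 are assumed service IDs; match them to hdfs-site.xml.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# ResourceManager HA state; rm1/rm2 are assumed IDs from yarn-site.xml.
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
```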