[Lab] Pseudo-Distributed Installation of Hadoop 2.6.0

Overview: packages used in this lab
hadoop-2.6.0.tar.gz: http://apache.fayea.com/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
jdk-7u79-linux-x64.gz: http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

1 Set the IP address

[root@test1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:0c:29:51:cc:37
TYPE=Ethernet
NETMASK=255.255.255.0
IPADDR=192.168.23.131
GATEWAY=192.168.23.1
USERCTL=no
IPV6INIT=no
PEERDNS=yes
Run: service network restart
Verify: ifconfig
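A quick way to confirm the address took effect (a minimal check; eth0 is the interface configured above):

[root@test1 ~]# ifconfig eth0 | grep "inet addr"
# expect a line containing inet addr:192.168.23.131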

2 Stop the firewall
Run: service iptables stop
Verify: service iptables status

3 Disable the firewall at boot
Run: chkconfig iptables off
Verify: chkconfig --list | grep iptables
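Note: the iptables/chkconfig commands above assume a CentOS 6-style system. On distributions that ship firewalld instead (for example CentOS 7), the rough equivalents are (an assumption about your environment, not part of the original lab):

systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld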

4 Set the hostname
Run:
(1) hostname hadoop1
(2) vi /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=hadoop1
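A quick check (a small sketch: the hostname command changes the running name immediately, while the file edit makes it persist across reboots):

hostname    # should print hadoop1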

5 Bind the IP to the hostname
Run: (1) vi /etc/hosts
                        192.168.23.131    hadoop1.localdomain hadoop1

Verify: ping hadoop1

6 Set up passwordless SSH login
Run:
(1) ssh-keygen -t rsa
(2) cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Verify:
[root@test1 ~]# ssh hadoop1
The authenticity of host 'hadoop1 (192.168.23.131)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop1,192.168.23.131' (RSA) to the list of known hosts.
Last login: Thu Feb 11 20:54:11 2016 from 192.168.23.1
[root@hadoop1 ~]# ssh hadoop1
Last login: Thu Feb 11 20:57:56 2016 from hadoop1.localdomain
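If ssh still prompts for a password, the permissions on ~/.ssh are the usual cause, since sshd ignores group- or world-writable key files. A commonly needed fix (not part of the original steps):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys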

7 Install the JDK (reference: http://my.oschina.net/gaowm/blog/275184)
(1) Run:

[root@hadoop1 ~]# cd /usr/share/java
[root@hadoop1 java]# cp /tmp/jdk-7u79-linux-x64.gz ./
[root@hadoop1 java]# tar -xzvf jdk-7u79-linux-x64.gz
(2) Edit /etc/profile and append:
export JAVA_HOME=/usr/share/java/jdk1.7.0_79
export PATH=.:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
Verify: java -version
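The command should report the JDK unpacked above, roughly like this (exact build strings may differ):

java version "1.7.0_79"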

8 Install Hadoop
(1) Run:

[root@hadoop1 ~]# cd /usr/local/
[root@hadoop1 local]# cp /tmp/hadoop-2.6.0.tar.gz ./
[root@hadoop1 local]# tar -zxvf hadoop-2.6.0.tar.gz
[root@hadoop1 local]# mv hadoop-2.6.0 hadoop
(2) Edit /etc/profile so the additions now read:
export JAVA_HOME=/usr/share/java/jdk1.7.0_79
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
(3) source /etc/profile
(4) Edit the following configuration files under /usr/local/hadoop/etc/hadoop: hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml (a note on 2.x-style property names follows the listing)

[root@hadoop1 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/share/java/jdk1.7.0_79

[root@hadoop1 hadoop]# vi core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>

[root@hadoop1 hadoop]# vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

[root@hadoop1 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop1 hadoop]# vi mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop1:9001</value>
    </property>
</configuration>
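Note: fs.default.name and mapred.job.tracker are the Hadoop 1.x property names. Hadoop 2.6.0 still accepts fs.default.name as a deprecated alias of fs.defaultFS, but mapred.job.tracker is ignored by YARN, so MapReduce jobs run in local mode unless the framework is set explicitly. A sketch of the 2.x-style settings (optional, not part of the original lab):

[root@hadoop1 hadoop]# vi core-site.xml
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>

[root@hadoop1 hadoop]# vi mapred-site.xml
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>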

(5) hadoop namenode -format (the Hadoop 2.x form is hdfs namenode -format; reformatting an existing installation erases the data in HDFS)
(6) start-all.sh

[root@hadoop1 hadoop]# cd sbin
[root@hadoop1 sbin]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/02/11 21:40:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop1]
hadoop1: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.out
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is e9:9f:f2:ea:f2:aa:47:58:5f:12:ea:3c:50:3f:0d:1b.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-root-secondarynamenode-hadoop1.out
16/02/11 21:41:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-hadoop1.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop1.out
[root@hadoop1 sbin]# jps
7192 SecondaryNameNode
7432 NodeManager
7468 Jps
6913 NameNode
7333 ResourceManager
7036 DataNode



Verify:
(1) Run jps; you should see 5 new Java processes: NameNode, SecondaryNameNode, DataNode, ResourceManager, and NodeManager.
(2) Check the web consoles in a browser. Hadoop web console ports:
50070: HDFS file management   http://192.168.23.131:50070
8088:  ResourceManager        http://192.168.23.131:8088
8042:  NodeManager            http://192.168.23.131:8042
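As a final smoke test (a minimal sketch; the paths are arbitrary examples), copy a file into HDFS and list it:

[root@hadoop1 ~]# hdfs dfs -mkdir -p /user/root
[root@hadoop1 ~]# hdfs dfs -put /etc/profile /user/root
[root@hadoop1 ~]# hdfs dfs -ls /user/root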
   
9 Possible reasons the NameNode is missing after startup:
(1) The NameNode was never formatted
(2) Environment variables are set incorrectly
(3) The IP-to-hostname binding failed
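In each case the NameNode log usually states the exact cause. The file name follows the pattern seen in the startup output above (user and hostname may differ on your machine):

tail -n 50 /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.log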

References:
http://stark-summer.iteye.com/blog/2184123
http://www.aboutyun.com/thread-7513-1-1.html