[Hadoop] Hadoop Installation

Overview:

1. SSH

Hadoop's control scripts start daemons over SSH, so passwordless SSH login to localhost must work first. Reference post: [Hadoop] SSH passwordless login and troubleshooting (http://blog.csdn.net/sunnyyoona/article/details/51689041#t1)
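The linked post covers the details and failure cases; the core steps can be sketched as follows (a minimal sketch assuming OpenSSH is installed and no passphrase-protected key is wanted):

```shell
# Generate an RSA key pair with an empty passphrase (skipped if one already
# exists), then authorize it for logins to this machine.
mkdir -p "$HOME/.ssh"
test -f "$HOME/.ssh/id_rsa" || ssh-keygen -t rsa -P "" -f "$HOME/.ssh/id_rsa" -q
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh" && chmod 600 "$HOME/.ssh/authorized_keys"
```

After this, `ssh localhost` should log in without prompting for a password.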

2. Download

(1) Download directly from the official site: http://hadoop.apache.org/releases.html

(2) Or download from the command line:

  xiaosi@yoona:~$ wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
  --2016-06-16 08:40:07--  http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
  Resolving mirrors.hust.edu.cn (mirrors.hust.edu.cn)... 202.114.18.160
  Connecting to mirrors.hust.edu.cn (mirrors.hust.edu.cn)|202.114.18.160|:80... connected.
  HTTP request sent, awaiting response... 200 OK
  Length: 196015975 (187M) [application/octet-stream]
  Saving to: 'hadoop-2.7.3.tar.gz'
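Apache publishes checksum files alongside each release, so it is worth verifying the tarball before extracting it. A minimal sketch (the expected hash is passed in as an argument; take the real value from the release's checksum file on the Apache site, it is not shown here):

```shell
# verify_sha256 FILE EXPECTED_HASH -> exit 0 if the file's SHA-256 matches
verify_sha256() {
  local file="$1" expected="$2" actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Usage sketch:
#   verify_sha256 hadoop-2.7.3.tar.gz "<hash from the Apache checksum file>"
```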
3. Extract the Hadoop Archive

Extract the hadoop-2.7.3.tar.gz in the home directory into the ~/opt folder:

  xiaosi@yoona:~$ tar -zxvf hadoop-2.7.3.tar.gz -C opt/
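Optionally, so that the hadoop/hdfs commands work from any directory, the install location can be added to the shell profile. A sketch assuming the ~/opt layout used above (append these lines to ~/.bashrc and re-source it):

```shell
# Adjust the version/path to match your actual install location.
export HADOOP_HOME="$HOME/opt/hadoop-2.7.3"
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
```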

4. Configuration

The configuration files are located in the etc/hadoop folder under the installation directory:

  xiaosi@yoona:~/opt/hadoop-2.7.3/etc/hadoop$ ls
  capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
  configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
  container-executor.cfg      httpfs-signature.secret  mapred-site.xml
  core-site.xml               httpfs-site.xml          mapred-site.xml.template
  hadoop-env.cmd              kms-acls.xml             slaves
  hadoop-env.sh               kms-env.sh               ssl-client.xml.example
  hadoop-metrics.properties   kms-log4j.properties     ssl-server.xml.example
  hadoop-metrics2.properties  kms-site.xml             yarn-env.cmd
  hadoop-policy.xml           log4j.properties         yarn-env.sh
  hdfs-site.xml               mapred-env.cmd           yarn-site.xml
Each Hadoop component is configured through an XML file: core-site.xml configures the Common component, hdfs-site.xml configures HDFS, and mapred-site.xml configures MapReduce.

Note:

Early versions of Hadoop used a single configuration file, hadoop-site.xml, for the Common, HDFS, and MapReduce components. Since version 0.20.0 that file has been split into three, one per component.

4.1 Configuring core-site.xml

core-site.xml is configured as follows:

  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at
     http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License. See accompanying LICENSE file.
  -->
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
       <name>hadoop.tmp.dir</name>
       <value>/home/${user.name}/tmp/hadoop</value>
       <description>A base for other temporary directories.</description>
    </property>
    <property>
       <name>fs.defaultFS</name>
       <value>hdfs://localhost:9000</value>
    </property>
    <property>
       <name>hadoop.proxyuser.xiaosi.hosts</name>
       <value>*</value>
       <description>Allow the superuser xiaosi to connect from any host to impersonate a user</description>
    </property>
    <property>
       <name>hadoop.proxyuser.xiaosi.groups</name>
       <value>*</value>
       <description>Allow the superuser xiaosi to impersonate members of any group</description>
    </property>
  </configuration>
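Since hadoop.tmp.dir resolves ${user.name} to the login user (so for user xiaosi it becomes /home/xiaosi/tmp/hadoop), it does no harm to create that directory up front. A sketch using $HOME, on the assumption that the home directory lives under /home:

```shell
# Pre-create the directory named by hadoop.tmp.dir in core-site.xml
mkdir -p "$HOME/tmp/hadoop"
```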


4.2 Configuring hdfs-site.xml

hdfs-site.xml is configured as follows (the Apache license header, identical to the one above, is omitted here):

  <?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
       <name>dfs.replication</name>
       <value>1</value>
    </property>
    <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/home/xiaosi/tmp/hadoop/dfs/name</value>
    </property>
    <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:/home/xiaosi/tmp/hadoop/dfs/data</value>
    </property>
  </configuration>
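The name and data directories configured above can likewise be created ahead of time (again substituting $HOME for /home/xiaosi):

```shell
# NameNode metadata and DataNode block storage locations from hdfs-site.xml
mkdir -p "$HOME/tmp/hadoop/dfs/name" "$HOME/tmp/hadoop/dfs/data"
```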

4.3 Configuring mapred-site.xml

A fresh install ships only mapred-site.xml.template, so first copy it to mapred-site.xml, then configure it as follows (license header again omitted):

  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
    <property>
       <name>mapred.job.tracker</name>
       <value>localhost:9001</value>
    </property>
  </configuration>

When running Hadoop the JDK may not be found. In that case edit the hadoop-env.sh script; the only environment variable that must be set is JAVA_HOME, all other options are optional:

  export JAVA_HOME=/home/xiaosi/opt/jdk-1.8.0

5. Running Hadoop
5.1 Formatting HDFS

After configuration, and before running Hadoop for the first time, format the HDFS filesystem. From the installation directory, execute:

  xiaosi@yoona:~/opt/hadoop-2.7.3$ ./bin/hdfs namenode -format
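The format command prints a long log; one way to confirm it worked is to look for the success message near the end. This is an operational check against the configured install, so it is a sketch rather than something runnable standalone:

```shell
# Beware: formatting wipes existing HDFS metadata, so only re-run if intended.
./bin/hdfs namenode -format 2>&1 | grep "successfully formatted"
```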
5.2 Starting the Daemons

Start the NameNode and DataNode daemons:

  xiaosi@yoona:~/opt/hadoop-2.7.3$ ./sbin/start-dfs.sh
  Starting namenodes on [localhost]
  localhost: starting namenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-namenode-yoona.out
  localhost: starting datanode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-datanode-yoona.out
  Starting secondary namenodes [0.0.0.0]
  0.0.0.0: starting secondarynamenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-secondarynamenode-yoona.out

Use the jps command to check whether the NameNode and DataNode have started:

  xiaosi@yoona:~/opt/hadoop-2.7.3$ jps
  13400 SecondaryNameNode
  13035 NameNode
  13197 DataNode
  13535 Jps

As the startup messages show, the logs are stored under the hadoop-2.7.3/logs/ directory; if anything goes wrong during startup, check the logs there to find the cause.
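With the daemons up, a quick smoke test is to create a directory in HDFS and list the filesystem root. These are operational commands against the running cluster, so they are shown as a sketch:

```shell
# Create a home directory in HDFS for the current user, then list the root.
./bin/hdfs dfs -mkdir -p "/user/$USER"
./bin/hdfs dfs -ls /
```

The NameNode web UI at http://localhost:50070 should also respond once HDFS is up.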

