Compiling and Building a hadoop-2.6.4 Cluster on Alibaba Cloud and Tencent Cloud

Compiling and building a Hadoop cluster on Tencent Cloud and Alibaba Cloud

Environment preparation

Alibaba Cloud instance:

[hadoop@lizer_ali ~]$ uname -a  
Linux lizer_ali 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 03:35:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@lizer_ali ~]$ head -n 1 /etc/issue
CentOS release 6.5 (Final)
[hadoop@lizer_ali ~]$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 
      1  Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz
[hadoop@lizer_ali ~]$ getconf LONG_BIT 
64
[hadoop@lizer_ali ~]$ cat /proc/meminfo 
MemTotal:        1018508 kB
MemFree:          353912 kB

Tencent Cloud instance:

[hadoop@lizer_tx ~]$ uname -a  
Linux lizer_tx 2.6.32-573.18.1.el6.x86_64 #1 SMP Tue Feb 9 22:46:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
[hadoop@lizer_tx ~]$ head -n 1 /etc/issue
CentOS release 6.7 (Final)
[hadoop@lizer_tx ~]$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 
      1  Intel(R) Xeon(R) CPU E5-26xx v3
[hadoop@lizer_tx ~]$ getconf LONG_BIT 
64
[hadoop@lizer_tx ~]$ cat /proc/meminfo 
MemTotal:        1020224 kB
MemFree:          688488 kB

Create a user

useradd hadoop
passwd hadoop

Installing JDK 1.7:

Download: http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u80-oth-JPR

wget http://download.oracle.com/otn/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz?AuthParam=1469844164_7ce09e1f99570835183215c3510e95e0

mv jdk-7u80-linux-x64.tar.gz\?AuthParam\=1469844164_7ce09e1f99570835183215c3510e95e0 jdk-7u80-linux-x64.tar.gz
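The backslash-escaped mv works, but the rename can also be done generically with shell parameter expansion, which strips everything from the first `?` onward (a side note, not part of the original steps):

```shell
# Strip the ?AuthParam=... query suffix that wget keeps in the saved filename
f='jdk-7u80-linux-x64.tar.gz?AuthParam=1469844164_7ce09e1f99570835183215c3510e95e0'
clean="${f%%\?*}"       # remove the suffix starting at the first '?'
echo "$clean"           # jdk-7u80-linux-x64.tar.gz
# mv "$f" "$clean"      # on the real host, this performs the rename
```

Alternatively, `wget -O jdk-7u80-linux-x64.tar.gz '<url>'` saves the download under the intended name in the first place.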

Unpack the JDK
tar zxf jdk-7u80-linux-x64.tar.gz -C /opt/

Configure environment variables

vim /etc/profile
export JAVA_HOME=/opt/jdk1.7.0_80
export JRE_HOME=/opt/jdk1.7.0_80/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Apply the changes:
source /etc/profile
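A quick check that the prepend order took effect — the Oracle JDK's bin directory must come before any system OpenJDK already on the PATH (paths assumed from the profile above):

```shell
# Prepending JAVA_HOME/bin makes the Oracle JDK win the PATH lookup
export JAVA_HOME=/opt/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$PATH
echo "$PATH" | tr ':' '\n' | head -n 1   # prints /opt/jdk1.7.0_80/bin
# java -version                          # on the real host: should report 1.7.0_80
```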

Software required to compile Hadoop 2.6.4

yum install gcc cmake gcc-c++

Install Maven

wget http://www-eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
Maven installation reference: http://www.blogjava.net/caojianhua/archive/2011/04/02/347559.html

tar zxf apache-maven-3.3.9-bin.tar.gz -C /usr/local/
vim /etc/profile
export MAVEN_HOME=/usr/local/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin

source /etc/profile

[root@lizer_ali hadoop]# mvn -v
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-11T00:41:47+08:00)
Maven home: /usr/local/apache-maven-3.3.9
Java version: 1.7.0_80, vendor: Oracle Corporation
Java home: /opt/jdk1.7.0_80/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-573.22.1.el6.x86_64", arch: "amd64", family: "unix"

Install protobuf

Hadoop 2.6.4 requires protobuf-2.5.0:
wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz

tar zxf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0/
./configure --prefix=/usr/local/protobuf-2.5.0
make && make install

vim /etc/profile
export PROTOBUF=/usr/local/protobuf-2.5.0
export PATH=$PROTOBUF/bin:$PATH

source /etc/profile
protoc --version

Install Ant

wget http://www-eu.apache.org/dist//ant/binaries/apache-ant-1.9.7-bin.tar.gz

tar zxf apache-ant-1.9.7-bin.tar.gz -C /usr/local/
vim /etc/profile
export ANT_HOME=/usr/local/apache-ant-1.9.7
export PATH=$PATH:$ANT_HOME/bin

source /etc/profile

ant -version
Apache Ant(TM) version 1.9.7 compiled on April 9 2016

yum install autoconf automake libtool
yum install openssl-devel

Install FindBugs

http://findbugs.sourceforge.net/downloads.html

wget http://prdownloads.sourceforge.net/findbugs/findbugs-3.0.1.tar.gz?download
mv findbugs-3.0.1.tar.gz\?download findbugs-3.0.1.tar.gz

tar zxf findbugs-3.0.1.tar.gz -C /usr/local/
vim /etc/profile
export FINDBUGS_HOME=/usr/local/findbugs-3.0.1
export PATH=$FINDBUGS_HOME/bin:$PATH

source /etc/profile
findbugs -version

Compiling and installing Hadoop:

Download Hadoop: http://hadoop.apache.org/releases.html

wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.6.4/hadoop-2.6.4-src.tar.gz

tar zxf hadoop-2.6.4-src.tar.gz
cd hadoop-2.6.4-src

more BUILDING.txt
BUILDING.txt describes how to compile and install.

mvn clean package -Pdist,native,docs -DskipTests -Dtar
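On 1 GB instances like these two, it can also help to cap Maven's JVM heap before building; the -Xms/-Xmx values below are a suggestion to tune, not from the original article:

```shell
# Keep the build JVM's heap modest so it fits in ~1 GB of RAM
export MAVEN_OPTS="-Xms256m -Xmx512m"
echo "$MAVEN_OPTS"
# mvn clean package -Pdist,native,docs -DskipTests -Dtar   # then run the build as above
```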
The build downloads a large number of dependencies, so it takes quite a while. When every Hadoop module reports SUCCESS, the build has completed.

If a download stalls, re-run the command above, or fetch the artifact manually from the repository named in the error (https://repo.maven.apache.org/maven2) and place it in the indicated location.

Error 1:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-common: An Ant BuildException has occured: input file /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/findbugsXml.xml does not exist
[ERROR] around Ant part ...<xslt style="/usr/local/findbugs-3.0.1/src/xsl/default.xsl" in="/home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/findbugsXml.xml" out="/home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/site/findbugs.html"/>... @ 44:256 in /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-common/target/antrun/build-main.xml

Solution 1:
Reference: http://www.itnose.net/detail/6143808.html
Remove the docs profile from the command and rerun: mvn package -Pdist,native -DskipTests -Dtar

Error 2:
[INFO] Executing tasks
main:
[mkdir] Created dir: /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads
[get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz
[get] To: /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/apache-tomcat-6.0.41.tar.gz

Solution 2:
The build hangs here, most likely because the Tomcat tarball cannot be downloaded.
Download it manually and upload it to the path shown above.

Error 3:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-hdfs: MavenReportException: Error while creating archive:
[ERROR] ExcludePrivateAnnotationsStandardDoclet
[ERROR] Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000f31a4000, 130400256, 0) failed; error='Cannot allocate memory' (errno=12)
[ERROR] #
[ERROR] # There is insufficient memory for the Java Runtime Environment to continue.
[ERROR] # Native memory allocation (malloc) failed to allocate 130400256 bytes for committing reserved memory.
[ERROR] # An error report file with more information is saved as:
[ERROR] # /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs/target/hs_err_pid24729.log
[ERROR]
[ERROR] Error occurred during initialization of VM, try to reduce the Java heap size for the MAVEN_OPTS environnement variable using -Xms:<size> and -Xmx:<size>.
[ERROR] Or, try to reduce the Java heap size for the Javadoc goal using -Dminmemory=<size> and -Dmaxmemory=<size>.

Solution 3:
The machine ran out of memory and has no swap configured.
Add a 2 GB swap file.
Create or enlarge the swap space:
dd if=/dev/zero of=/home/swap bs=512 count=4096000
bs is the block size (bs=512 means 512 bytes per block); count is the number of blocks.
This creates a zero-filled file of roughly 2 GB at /home/swap; the of= path can be changed as needed.
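The size arithmetic is easy to verify: bs times count gives the file size in bytes, which here comes to about 2 GB:

```shell
# 512-byte blocks x 4,096,000 blocks
bytes=$((512 * 4096000))
echo "$bytes"                    # 2097152000 bytes
echo "$((bytes / 1024 / 1024))"  # 2000 MB, i.e. a ~2 GB swap file
```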

Check current memory and swap usage
free -m

Format and enable the swap file
mkswap /home/swap
swapon /home/swap

Verify it is active
swapon -s

Mount automatically at boot
vim /etc/fstab
/home/swap swap swap defaults 0 0

To deactivate the swap file later:
swapoff /home/swap

Error 4:
main:
[mkdir] Created dir: /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads
[get] Getting: http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.41/bin/apache-tomcat-6.0.41.tar.gz
[get] To: /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/apache-tomcat-6.0.41.tar.gz

Solution 4:
Same network problem as above; reuse the Tomcat tarball already downloaded for hadoop-kms:
cp /home/hadoop/hadoop-2.6.4-src/hadoop-common-project/hadoop-kms/downloads/apache-tomcat-6.0.41.tar.gz /home/hadoop/hadoop-2.6.4-src/hadoop-hdfs-project/hadoop-hdfs-httpfs/downloads/

Build succeeded

[INFO] Apache Hadoop Gridmix .............................. SUCCESS [  6.239 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [  4.070 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [  3.304 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [  4.653 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [  8.279 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [  7.736 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [06:22 min]
[INFO] Apache Hadoop Client ............................... SUCCESS [  9.608 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [  0.258 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [  6.721 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 15.171 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [  0.022 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 37.343 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 31:45 min
[INFO] Finished at: 2016-07-30T23:59:50+08:00
[INFO] Final Memory: 101M/241M
[INFO] ------------------------------------------------------------------------

The runnable distribution has been generated under hadoop-dist/target/

Copy it to the home directory
cp -r hadoop-dist/target/hadoop-2.6.4 ~/

Configuration

Passwordless SSH between the two machines

ssh-keygen
Put each machine's generated public key on the other machine:
cat id_rsa_else.pub >> authorized_keys
chmod 600 authorized_keys
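sshd silently ignores authorized_keys when the file or its directory is too permissive, which is why the chmod step matters. The required layout can be sketched self-contained (the scratch directory and dummy key below stand in for the real ~/.ssh and the other node's public key):

```shell
# Demonstrate the permissions sshd requires, in a scratch dir rather than the real ~/.ssh
dir=$(mktemp -d)
mkdir -p "$dir/.ssh"
chmod 700 "$dir/.ssh"                          # directory: owner-only
echo 'ssh-rsa AAAA-dummy hadoop@otherhost' >> "$dir/.ssh/authorized_keys"
chmod 600 "$dir/.ssh/authorized_keys"          # key file: owner read/write only
stat -c '%a' "$dir/.ssh"                       # prints 700
stat -c '%a' "$dir/.ssh/authorized_keys"       # prints 600
rm -rf "$dir"
# ssh hadoop1 hostname   # on the real hosts: should log in without a password prompt
```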

Network plan:

hadoop1 123.206.33.182 slave
hadoop0 114.215.92.77 master

Configure hosts

vim /etc/hosts
123.206.33.182 hadoop1 tx lizer_tx
114.215.92.77 hadoop0 ali lizer_ali

Configure environment variables

vim /etc/profile
export HADOOP_HOME=/home/hadoop/hadoop-2.6.4
export PATH=$HADOOP_HOME/bin:$PATH

Hadoop configuration

The configuration files live in $HADOOP_HOME/etc/hadoop/. Edit the following:

vim hadoop-env.sh
export JAVA_HOME=/opt/jdk1.7.0_80

vim yarn-env.sh
export JAVA_HOME=/opt/jdk1.7.0_80

vim slaves (this version no longer uses a masters configuration file)
hadoop1

vim core-site.xml

<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/hadoop/tmp</value>
                <description>A base for other temporary directories.</description>
        </property>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop0:9000</value>
        </property>
        <property>
                <name>io.file.buffer.size</name>
                <value>4096</value>
        </property>
</configuration>

vim hdfs-site.xml

<configuration>
        <property>
                <name>dfs.http.address</name>
                <value>hadoop0:50070</value>
        </property>
        <property>
                <name>dfs.secondary.http.address</name>
                <value>hadoop0:50090</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoop/hadoop/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoop/hadoop/data</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.nameservices</name>
                <value>hadoop0</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop0:50090</value>
        </property>
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
</configuration>

cp mapred-site.xml.template mapred-site.xml

vim mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
                <final>true</final>
        </property>
        <property>
                <name>mapreduce.jobtracker.http.address</name>
                <value>hadoop0:50030</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop0:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hadoop0:19888</value>
        </property>
        <property>
                <name>mapred.job.tracker</name>
                <value>hadoop0:9001</value>
        </property>
</configuration>

vim yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>hadoop0</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>hadoop0:8032</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>hadoop0:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>hadoop0:8031</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>hadoop0:8033</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>hadoop0:8088</value>
        </property>
</configuration>

vim master
hadoop0

scp -r /home/hadoop/hadoop-2.6.4/etc/hadoop/* tx:~/hadoop-2.6.4/etc/hadoop/

Starting and stopping Hadoop

Reference: http://my.oschina.net/penngo/blog/653049

bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/stop-dfs.sh
sbin/start-yarn.sh
sbin/stop-yarn.sh
sbin/mr-jobhistory-daemon.sh start historyserver
sbin/mr-jobhistory-daemon.sh stop historyserver
sbin/hadoop-daemon.sh start secondarynamenode
sbin/hadoop-daemon.sh stop secondarynamenode

[hadoop@lizer_ali hadoop-2.6.4]$ jps
3099 ResourceManager
3430 SecondaryNameNode
2879 NameNode
3470 Jps
3382 JobHistoryServer

[hadoop@lizer_tx ~]$ jps
9757 DataNode
9853 NodeManager
10064 Jps

Check cluster node status
bin/hdfs dfsadmin -report

YARN cluster web UI
http://114.215.92.77:8088/cluster

HDFS NameNode web UI
http://114.215.92.77:50070/dfshealth.html#tab-overview

Create a directory in HDFS

bin/hdfs dfs -mkdir -p input

Reference documentation:

http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/ClusterSetup.html

http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/SingleCluster.html
