Hadoop Pseudo-Distributed Installation

Overview: this guide walks through a pseudo-distributed Hadoop installation on a single CentOS node: installing the JDK, configuring passwordless SSH, editing the Hadoop configuration files, formatting HDFS, and starting the daemons.

Uninstalling the preinstalled JDK:

[root@localhost mzsx]# rpm -qa | grep java
java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64
tzdata-java-2012c-1.el6.noarch
[root@localhost mzsx]# rpm -e --nodeps tzdata-java-2012c-1.el6.noarch
[root@localhost mzsx]# rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.45.1.11.1.el6.x86_64
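After removing the packages, it is worth confirming that nothing Java-related remains. `grep` exits non-zero when it finds no match, so the `||` branch fires only on a clean system. A sketch with a simulated package list (on a real node, pipe `rpm -qa` itself):

```shell
# Simulated `rpm -qa` output with no Java packages left; on a real system run:
#   rpm -qa | grep java || echo "no java packages found"
printf 'tzdata-2012c-1.el6.noarch\nbash-4.1.2\n' | grep java \
    || echo "no java packages found"
```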

Installing the JDK:

[root@localhost mzsx]# chmod 777 jdk-7u45-linux-x64.rpm
[root@localhost mzsx]# rpm -ivh jdk-7u45-linux-x64.rpm
Preparing...                 ########################################### [100%]
    1:jdk                     ########################################### [100%]
Unpacking JAR files...
     rt.jar...
     jsse.jar...
     charsets.jar...
     tools.jar...
     localedata.jar...
     jfxrt.jar...

Configuring the JDK environment:

[root@srv-dfh526 bin]# vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export JAVA_BIN=/usr/local/jdk/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/usr/local/hadoop
export PATH=/usr/local/hadoop/bin:$PATH
export HADOOP_HOME_WARN_SUPPRESS=1
[root@srv-dfh526 bin]# source /etc/profile
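The variable expansions in /etc/profile can be sanity-checked before logging out; this sketch reproduces the CLASSPATH construction with the install paths assumed above:

```shell
# Rebuild the CLASSPATH exactly as /etc/profile does (paths as assumed in this guide).
JAVA_HOME=/usr/local/jdk
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
echo "$CLASSPATH"
# → .:/usr/local/jdk/lib/dt.jar:/usr/local/jdk/lib/tools.jar
```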

Disabling the firewall (note that chkconfig only changes the boot-time setting; to stop the running firewall immediately, also run `service iptables stop`):

Permanent (takes effect at boot):
Enable:  chkconfig iptables on
Disable: chkconfig iptables off
[root@localhost ~]# service network restart
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
[root@localhost ~]#

Passwordless SSH login (note: just press Enter at every prompt):

[root@localhost jdk1.7.0_45]# cd /root/
[root@localhost ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
51:19:ce:ed:ae:f1:57:d4:30:c8:a2:a3:d7:8d:e1:2d root@localhost
The key's randomart image is:
+--[ RSA 2048]----+
|          o+ .   |
|         +o.o o  |
|        ..o..  o.|
|        o...    o|
|       .S+ =.  . |
|      . . E.o   .|
|       .  ...  . |
|           +  .  |
|          . ..   |
+-----------------+
[root@localhost ~]# cd .ssh
[root@localhost .ssh]# ll -a
total 16
drwx------.  2 root root 4096 Nov  8 03:07 .
dr-xr-x---. 26 root root 4096 Nov  8 03:03 ..
-rw-------.  1 root root 1675 Nov  8 03:07 id_rsa
-rw-r--r--.  1 root root  408 Nov  8 03:07 id_rsa.pub
[root@localhost .ssh]# cp id_rsa.pub authorized_keys
[root@localhost .ssh]# service sshd restart
Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]
[root@localhost .ssh]#
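If `ssh localhost` still asks for a password after this, the usual culprit is file permissions: sshd (with its default StrictModes) ignores authorized_keys when ~/.ssh is group- or world-writable. The expected modes, demonstrated on a throwaway directory rather than the real home:

```shell
# sshd expects 700 on ~/.ssh and 600 on authorized_keys.
d=$(mktemp -d)
mkdir "$d/.ssh"
touch "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"
chmod 600 "$d/.ssh/authorized_keys"
stat -c %a "$d/.ssh"                   # → 700
stat -c %a "$d/.ssh/authorized_keys"   # → 600
rm -rf "$d"
```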

Hadoop configuration:

core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.197.131:9000</value>
    </property>
</configuration>
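A quick way to double-check what actually ended up in a config file is to pull the `<value>` elements back out with sed. The snippet writes a throwaway copy to /tmp so it runs anywhere; on a real node, point sed at your actual conf file instead:

```shell
# Write a sample core-site.xml and extract its <value> elements with sed.
cat > /tmp/core-site-check.xml <<'EOF'
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://192.168.197.131:9000</value>
    </property>
</configuration>
EOF
sed -n 's|.*<value>\(.*\)</value>.*|\1|p' /tmp/core-site-check.xml
# → hdfs://192.168.197.131:9000
```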

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>192.168.197.131:9001</value>
    </property>
</configuration>

hadoop-env.sh

# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/usr/local/jdk

Formatting the distributed filesystem:

[root@localhost bin]# ./hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
13/11/08 03:30:03 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.0
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May  6 06:59:37 UTC 2013
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
13/11/08 03:30:03 INFO util.GSet: Computing capacity for map BlocksMap
13/11/08 03:30:03 INFO util.GSet: VM type       = 64-bit
13/11/08 03:30:03 INFO util.GSet: 2.0% max memory = 1013645312
13/11/08 03:30:03 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/11/08 03:30:03 INFO util.GSet: recommended=2097152, actual=2097152
13/11/08 03:30:03 INFO namenode.FSNamesystem: fsOwner=root
13/11/08 03:30:04 INFO namenode.FSNamesystem: supergroup=supergroup
13/11/08 03:30:04 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/11/08 03:30:04 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/11/08 03:30:04 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/11/08 03:30:04 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/11/08 03:30:04 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/11/08 03:30:04 INFO common.Storage: Image file of size 110 saved in 0 seconds.
13/11/08 03:30:04 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/tmp/hadoop-root/dfs/name/current/edits
13/11/08 03:30:04 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/tmp/hadoop-root/dfs/name/current/edits
13/11/08 03:30:04 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
13/11/08 03:30:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

Note that the NameNode metadata was written under /tmp/hadoop-root, the default location. Since /tmp is typically cleared on reboot, this is exactly why core-site.xml should set hadoop.tmp.dir to a persistent path such as /home/hadoop/tmp.

Starting Hadoop:

[root@hadoop bin]# ./start-all.sh
starting namenode, logging to /home/mzsx/hadoop-0.20.2/bin/../logs/hadoop-root-namenode-hadoop.out
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 0f:68:b7:e0:2d:13:38:4b:44:ac:ec:a3:7f:b8:f6:7e.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /home/mzsx/hadoop-0.20.2/bin/../logs/hadoop-root-datanode-hadoop.out
localhost: starting secondarynamenode, logging to /home/mzsx/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-hadoop.out
starting jobtracker, logging to /home/mzsx/hadoop-0.20.2/bin/../logs/hadoop-root-jobtracker-hadoop.out
localhost: starting tasktracker, logging to /home/mzsx/hadoop-0.20.2/bin/../logs/hadoop-root-tasktracker-hadoop.out

Checking the running Java processes with jps:

[root@srv-dfh526 bin]# jps
14828 DataNode
15600 Bootstrap
23062 Bootstrap
15232 TaskTracker
18636 Jps
2935 Bootstrap
14982 SecondaryNameNode
23445 Bootstrap
14652 NameNode
15080 JobTracker
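A pseudo-distributed node is healthy when all five daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) show up; the Bootstrap entries above are unrelated JVMs (Bootstrap is commonly Tomcat's main class). A sketch that checks the list against a canned copy of the jps output (on a live node, capture `jps` itself into the variable):

```shell
# Verify the five expected Hadoop 1.x daemons appear in the jps output.
jps_out='14828 DataNode
14982 SecondaryNameNode
14652 NameNode
15080 JobTracker
15232 TaskTracker'                     # on a real node: jps_out=$(jps)
for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    echo "$jps_out" | grep -q " $d\$" && echo "$d up" || echo "$d MISSING"
done
```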

HDFS filesystem operations:

List directories and files:
[root@srv-dfh526 bin]# ./hadoop fs -lsr /
drwxr-xr-x   - root supergroup          0 2014-04-23 17:57 /tmp
drwxr-xr-x   - root supergroup          0 2014-04-23 17:57 /tmp/hadoop-root
drwxr-xr-x   - root supergroup          0 2014-04-23 18:38 /tmp/hadoop-root/mapred
drwxr-xr-x   - root supergroup          0 2014-04-23 18:38 /tmp/hadoop-root/mapred/staging
drwxr-xr-x   - root supergroup          0 2014-04-23 18:38 /tmp/hadoop-root/mapred/staging/root
drwx------   - root supergroup          0 2014-04-23 18:38 /tmp/hadoop-root/mapred/staging/root/.staging
drwx------   - root supergroup          0 2014-04-23 18:38 /tmp/hadoop-root/mapred/system
-rw-------   1 root supergroup          4 2014-04-23 17:57 /tmp/hadoop-root/mapred/system/jobtracker.info
Upload a file:
[root@srv-dfh526 bin]# ./hadoop fs -put /home/hadoop/aoman.txt ./
Run the wordcount example (the examples jar lives in $HADOOP_HOME, one level above bin/):
[root@srv-dfh526 bin]# ./hadoop jar ../hadoop-examples-1.2.1.jar wordcount /user/root/aoman.txt ./in
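For intuition, what the wordcount job computes can be reproduced locally with standard tools on a tiny input (the sample text here is made up; the real job reads aoman.txt from HDFS):

```shell
# A local equivalent of MapReduce wordcount: tokenize, sort, count.
printf 'hello world\nhello hadoop\n' \
    | tr -s ' ' '\n' | sort | uniq -c | awk '{print $2"\t"$1}'
# → hadoop  1
# → hello   2
# → world   1
```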

Download the result file:

[root@srv-dfh526 bin]# ./hadoop fs -get /user/root/in/part-r-00000 /home/hadoop/wordcount.txt



Originally published by 梦朝思夕 on the 51CTO blog: http://blog.51cto.com/qiangmzsx/1401341


