Hadoop Deployment and Usage

Overview:

1. Base environment

[hadoop@master ~]$ cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
[hadoop@master ~]$ getenforce 
Disabled
[hadoop@master ~]$ systemctl status firewalld 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[hadoop@master ~]$
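The checks above assume SELinux and firewalld are already off. If they are still active on a node, they can be disabled with something like the following sketch (run as root on stock CentOS 7; adjust to your site's security policy):

```shell
# Permanently disable SELinux (takes full effect after the next reboot)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0 2>/dev/null || true   # drop to permissive for the current boot

# Stop the firewall now and keep it from starting on boot
systemctl stop firewalld
systemctl disable firewalld
```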

2. IP addresses and node roles


IP              Hostname  Role    Hadoop daemons
192.168.56.100  master    master  NameNode, ResourceManager
192.168.56.101  slave1    slave   DataNode, NodeManager
192.168.56.102  slave2    slave   DataNode, NodeManager
192.168.56.103  slave3    slave   DataNode, NodeManager
[hadoop@master ~]$ cat /etc/hosts
192.168.56.100  Master
192.168.56.101  slave1
192.168.56.102  slave2
192.168.56.103  slave3
[hadoop@master ~]$

3. Create the hadoop user (all nodes)

useradd hadoop
echo hadoop | passwd --stdin hadoop

4. Install the JDK (all nodes)

[hadoop@slave1 application]$ ll
total 4
lrwxrwxrwx 1 root root   24 Jul 10 01:35 jdk -> /application/jdk1.8.0_60
drwxr-xr-x 8 root root 4096 Aug  5  2015 jdk1.8.0_60
[hadoop@slave1 application]$ pwd
/application
[hadoop@master ~]$ java -version 
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
[hadoop@master ~]$
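The listing above relies on a version-independent /application/jdk symlink. A sketch of how that layout might have been created (the tarball name is an assumption; run as root on each node):

```shell
mkdir -p /application
# Unpacks to /application/jdk1.8.0_60
tar xf jdk-8u60-linux-x64.tar.gz -C /application
# Stable path that hadoop-env.sh later points JAVA_HOME at
ln -s /application/jdk1.8.0_60 /application/jdk
```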

5. Set up passwordless SSH from the hadoop user on master (192.168.56.100) to the hadoop user on every slave node
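The original does not show commands for this step. A typical sketch, run as the hadoop user on master (assumes password authentication is still enabled so ssh-copy-id can push the key):

```shell
# Generate an RSA key pair without a passphrase
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# Append the public key to each slave's ~hadoop/.ssh/authorized_keys
for node in slave1 slave2 slave3; do
  ssh-copy-id hadoop@"$node"
done

# Verify: this should log in without prompting for a password
ssh hadoop@slave1 hostname
```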

6. Set the Hadoop installation path and environment variables (all nodes)

su - hadoop
tar xf hadoop-2.7.0.tar.gz        # unpacks to /home/hadoop/hadoop-2.7.0
vi /etc/profile                   # append the Hadoop environment variables:
export HADOOP_HOME=/home/hadoop/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile

7. Set the Java environment variable in Hadoop's own environment file

cd /home/hadoop/hadoop-2.7.0/etc/hadoop
vi hadoop-env.sh                  # add:
### JAVA_HOME
export JAVA_HOME=/application/jdk/

8. Edit the Hadoop configuration files

cd /home/hadoop/hadoop-2.7.0/etc/hadoop
1. ##############################
[hadoop@master hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/tmp</value>
</property>
</configuration>
[hadoop@master hadoop]$ 
2. ################################### (mapred-site.xml does not exist by default; copy it from the bundled template: cp mapred-site.xml.template mapred-site.xml)
[hadoop@master hadoop]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/tmp</value>
</property>
</configuration>
[hadoop@master hadoop]$ 
3. #########################################
[hadoop@master hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/name1,/home/hadoop/name2,/home/hadoop/name3</value>
  <description> </description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data1,/home/hadoop/data2,/home/hadoop/data3</value>
  <description> </description>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
</configuration>
[hadoop@master hadoop]$ 
[hadoop@master hadoop]$ cat masters 
master
[hadoop@master hadoop]$ cat slaves 
slave1
slave2
slave3
[hadoop@master hadoop]$

9. Distribute Hadoop to the slave nodes

scp -r /home/hadoop/hadoop-2.7.0 slave1:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave2:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave3:/home/hadoop/
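With passwordless SSH in place, the per-node copies can be collapsed into one loop; a sketch:

```shell
# Copy the whole Hadoop tree to each slave in turn
for node in slave1 slave2 slave3; do
  scp -r /home/hadoop/hadoop-2.7.0 "$node":/home/hadoop/
done
```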

10. Format the NameNode on master

Do not pre-create /home/hadoop/name1, /home/hadoop/name2, or /home/hadoop/name3: if they already exist, the format step stops and asks whether to re-format (reload) them.
 
cd /home/hadoop/hadoop-2.7.0
[hadoop@master hadoop-2.7.0]$ ./bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
 
17/07/10 02:57:34 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Master/192.168.56.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.0/etc/hadoop:/home/hadoop/hadoop-2.7.0/share/hadoop/common/lib/commons-lang-2.6.jar:...:/home/hadoop/hadoop-2.7.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = Unknown -r Unknown; compiled by  'root'  on 2015-05-27T13:56Z
STARTUP_MSG:   java = 1.8.0_60
************************************************************/
17/07/10 02:57:34 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/07/10 02:57:34 INFO namenode.NameNode: createNameNode [-format]
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-77e0896d-bda2-49f1-8127-c5343f1c52c9
17/07/10 02:57:35 INFO namenode.FSNamesystem: No KeyProvider found.
17/07/10 02:57:35 INFO namenode.FSNamesystem: fsLock is fair: true
17/07/10 02:57:35 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/07/10 02:57:35 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/07/10 02:57:35 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/07/10 02:57:36 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jul 10 02:57:36
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map BlocksMap
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/07/10 02:57:36 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/07/10 02:57:36 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/07/10 02:57:36 INFO blockmanagement.BlockManager: maxReplication             = 512
17/07/10 02:57:36 INFO blockmanagement.BlockManager: minReplication             = 1
17/07/10 02:57:36 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/07/10 02:57:36 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/07/10 02:57:36 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/07/10 02:57:36 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/07/10 02:57:36 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/07/10 02:57:36 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/07/10 02:57:36 INFO namenode.FSNamesystem: supergroup          = supergroup
17/07/10 02:57:36 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/07/10 02:57:36 INFO namenode.FSNamesystem: HA Enabled: false
17/07/10 02:57:36 INFO namenode.FSNamesystem: Append Enabled: true
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map INodeMap
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/07/10 02:57:36 INFO namenode.FSDirectory: ACLs enabled? false
17/07/10 02:57:36 INFO namenode.FSDirectory: XAttrs enabled? true
17/07/10 02:57:36 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/07/10 02:57:36 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map cachedBlocks
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/07/10 02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/07/10 02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/07/10 02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/07/10 02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/07/10 02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/07/10 02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/07/10 02:57:36 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/07/10 02:57:36 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/07/10 02:57:36 INFO namenode.FSImage: Allocated new BlockPoolId: BP-467031090-192.168.56.100-1499626656612
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name1 has been successfully formatted.
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name2 has been successfully formatted.
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name3 has been successfully formatted.
17/07/10 02:57:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/07/10 02:57:36 INFO util.ExitUtil: Exiting with status 0
17/07/10 02:57:37 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master/192.168.56.100
************************************************************/
[hadoop@master hadoop-2.7.0]$

11. Start the services

[hadoop@master sbin]$ pwd
/home/hadoop/hadoop-2.7.0/sbin
[hadoop@master sbin]$ ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-namenode-master.out
slave3: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave3.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-resourcemanager-master.out
slave3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave3.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave1.out
[hadoop@master sbin]$ netstat -lntup 
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 192.168.56.100:9000     0.0.0.0:*               LISTEN      4405/java           
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      4606/java           
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      4405/java           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 :::8088                 :::*                    LISTEN      4757/java           
tcp6       0      0 ::1:25                  :::*                    LISTEN      -                   
tcp6       0      0 :::8030                 :::*                    LISTEN      4757/java           
tcp6       0      0 :::8031                 :::*                    LISTEN      4757/java           
tcp6       0      0 :::8032                 :::*                    LISTEN      4757/java           
tcp6       0      0 :::8033                 :::*                    LISTEN      4757/java           
[hadoop@master sbin]$
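Besides netstat, the quickest health check is jps from the JDK, run on each node; a sketch (PIDs will differ, and the /application/jdk path matches the symlink set up earlier):

```shell
# On master: expect NameNode, SecondaryNameNode and ResourceManager to be listed
jps

# On a slave: expect DataNode and NodeManager
ssh slave1 /application/jdk/bin/jps
```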

NameNode web UI:          http://192.168.56.100:50070/dfshealth.html#tab-overview

NodeManager (slave3):     http://192.168.56.103:8042/node/allApplications

SecondaryNameNode status: http://192.168.56.100:50090/status.html










This article was originally published by 小小三郎1 on the 51CTO blog: http://blog.51cto.com/wsxxsl/1945709. Please contact the original author before republishing.