Hadoop Deployment and Usage

Overview:

1. Base environment

[hadoop@master ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[hadoop@master ~]$
[hadoop@master ~]$ getenforce
Disabled
[hadoop@master ~]$ systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[hadoop@master ~]$

2. IPs and corresponding nodes


IP              Hostname  Hadoop role  Hadoop processes
192.168.56.100  master    master       namenode, jobtracker
192.168.56.101  slave1    slave        datanode, tasktracker
192.168.56.102  slave2    slave        datanode, tasktracker
192.168.56.103  slave3    slave        datanode, tasktracker
[hadoop@master ~]# cat /etc/hosts
192.168.56.100  Master
192.168.56.101  slave1
192.168.56.102  slave2
192.168.56.103  slave3
[hadoop@master ~]#

3. Create the hadoop user (on all nodes)

useradd hadoop
echo hadoop | passwd --stdin hadoop
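Typing the two commands above on four machines by hand is error-prone; a short loop run from the master keeps them consistent. A sketch, assuming root can SSH to the slave hostnames from /etc/hosts — the commands are printed for review rather than executed, so pipe the output to sh to actually run them:

```shell
#!/bin/sh
# Build one user-creation command per slave node and print the list.
# Review the output, then pipe it to sh (as root, with SSH access) to run.
cmds=""
for node in slave1 slave2 slave3; do
  cmds="${cmds}ssh root@$node 'useradd hadoop; echo hadoop | passwd --stdin hadoop'
"
done
printf '%s' "$cmds"
```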

4. JDK (all nodes)

[hadoop@slave1 application]# ll
total 4
lrwxrwxrwx 1 root root   24 Jul 10 01:35 jdk -> /application/jdk1.8.0_60
drwxr-xr-x 8 root root 4096 Aug  5  2015 jdk1.8.0_60
[hadoop@slave1 application]# pwd
/application
[hadoop@slave1 application]#
[hadoop@master ~]# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
[hadoop@master ~]#

5. The hadoop user on master (192.168.56.100) must be able to SSH as the hadoop user into every slave node without a password
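The original post gives no commands for this step. A minimal sketch of the usual key-based setup, assuming the slave hostnames from /etc/hosts; the commands are printed for review, so pipe the output to sh (answering each slave's password prompt once) to run them as the hadoop user on master:

```shell
#!/bin/sh
# Print the commands that set up passwordless SSH from master's hadoop
# user to the hadoop user on each slave. Pipe to sh to execute.
cmds="ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
"
for node in slave1 slave2 slave3; do
  cmds="${cmds}ssh-copy-id hadoop@$node
"
done
printf '%s' "$cmds"
```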

6. Set the Hadoop install path and environment variables (all nodes)

su - hadoop
tar xf hadoop-2.7.0.tar.gz        # unpacks to /home/hadoop/hadoop-2.7.0
vi /etc/profile                   # append the Hadoop environment variables:
export HADOOP_HOME=/home/hadoop/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile

7. Set the Java environment variable in Hadoop's own environment file

cd /home/hadoop/hadoop-2.7.0/etc/hadoop
vi hadoop-env.sh                  # add:
###JAVA_HOME
export JAVA_HOME=/application/jdk/

8. Edit the Hadoop configuration files

cd /home/hadoop/hadoop-2.7.0/etc/hadoop
1. ##############################
[hadoop@master hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/tmp</value>
</property>
</configuration>
[hadoop@master hadoop]$
2. ################################### (does not exist by default; copy it from the mapred-site.xml.template shipped in the same directory)
[hadoop@master hadoop]$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- [Apache license header omitted; identical to the one in core-site.xml] -->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>
<property>
  <name>mapred.local.dir</name>
  <value>/home/hadoop/tmp</value>
</property>
</configuration>
[hadoop@master hadoop]$
3. #########################################
[hadoop@master hadoop]$ cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- [Apache license header omitted; identical to the one in core-site.xml] -->

<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/name1,/home/hadoop/name2,/home/hadoop/name3</value>
  <description> </description>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/hadoop/data1,/home/hadoop/data2,/home/hadoop/data3</value>
  <description> </description>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
</configuration>
[hadoop@master hadoop]$
[hadoop@master hadoop]$ cat masters
master
[hadoop@master hadoop]$ cat slaves
slave1
slave2
slave3
[hadoop@master hadoop]$

9. Distribute to the slave nodes

scp -r /home/hadoop/hadoop-2.7.0 slave1:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave2:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave3:/home/hadoop/
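The same three copies can be generated from a single node list, so a newly added slave is not forgotten. A sketch, again assuming the hostnames from /etc/hosts; the scp commands are printed for review and can be piped to sh as the hadoop user on master:

```shell
#!/bin/sh
# Generate one scp command per slave from a single node list.
src=/home/hadoop/hadoop-2.7.0
cmds=""
for node in slave1 slave2 slave3; do
  cmds="${cmds}scp -r $src $node:/home/hadoop/
"
done
printf '%s' "$cmds"
```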

10. Format and test on the master node

Do not create /home/hadoop/name1, /home/hadoop/name2 and /home/hadoop/name3 in advance; if they already exist, the format step will stop and ask whether to re-format (reload) them.

cd /home/hadoop/hadoop-2.7.0
[hadoop@master hadoop-2.7.0]$ ./bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
 
17/07/10 02:57:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Master/192.168.56.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.0/etc/hadoop:/home/hadoop/hadoop-2.7.0/share/hadoop/common/lib/commons-lang-2.6.jar: [... long list of bundled jars under share/hadoop/{common,hdfs,yarn,mapreduce} trimmed ...] :/home/hadoop/hadoop-2.7.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2015-05-27T13:56Z
STARTUP_MSG:   java = 1.8.0_60
************************************************************/
17/07/10 02:57:34 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/07/10 02:57:34 INFO namenode.NameNode: createNameNode [-format]
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-77e0896d-bda2-49f1-8127-c5343f1c52c9
17/07/10 02:57:35 INFO namenode.FSNamesystem: No KeyProvider found.
17/07/10 02:57:35 INFO namenode.FSNamesystem: fsLock is fair: true
17/07/10 02:57:35 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/07/10 02:57:35 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/07/10 02:57:35 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/07/10 02:57:36 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jul 10 02:57:36
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map BlocksMap
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/07/10 02:57:36 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/07/10 02:57:36 INFO blockmanagement.BlockManager: defaultReplication         = 3
17/07/10 02:57:36 INFO blockmanagement.BlockManager: maxReplication             = 512
17/07/10 02:57:36 INFO blockmanagement.BlockManager: minReplication             = 1
17/07/10 02:57:36 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/07/10 02:57:36 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
17/07/10 02:57:36 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/07/10 02:57:36 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/07/10 02:57:36 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/07/10 02:57:36 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
17/07/10 02:57:36 INFO namenode.FSNamesystem: supergroup          = supergroup
17/07/10 02:57:36 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/07/10 02:57:36 INFO namenode.FSNamesystem: HA Enabled: false
17/07/10 02:57:36 INFO namenode.FSNamesystem: Append Enabled: true
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map INodeMap
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/07/10 02:57:36 INFO namenode.FSDirectory: ACLs enabled? false
17/07/10 02:57:36 INFO namenode.FSDirectory: XAttrs enabled? true
17/07/10 02:57:36 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/07/10 02:57:36 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map cachedBlocks
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/07/10 02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/07/10 02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/07/10 02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/07/10 02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/07/10 02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/07/10 02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/07/10 02:57:36 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/07/10 02:57:36 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
17/07/10 02:57:36 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/07/10 02:57:36 INFO namenode.FSImage: Allocated new BlockPoolId: BP-467031090-192.168.56.100-1499626656612
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name1 has been successfully formatted.
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name2 has been successfully formatted.
17/07/10 02:57:36 INFO common.Storage: Storage directory /home/hadoop/name3 has been successfully formatted.
17/07/10 02:57:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/07/10 02:57:36 INFO util.ExitUtil: Exiting with status 0
17/07/10 02:57:37 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master/192.168.56.100
************************************************************/
[hadoop@master hadoop-2.7.0]$
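A quick way to confirm the format succeeded is that every directory listed in dfs.name.dir now contains current/VERSION (holding the clusterID shown in the log). A small helper sketch, assuming the paths from hdfs-site.xml above:

```shell
#!/bin/sh
# Report, for each dfs.name.dir path, whether the NameNode format
# left a current/VERSION file behind.
check_dirs() {
  for d in "$@"; do
    if [ -f "$d/current/VERSION" ]; then
      echo "OK      $d"
    else
      echo "MISSING $d"
    fi
  done
}
check_dirs /home/hadoop/name1 /home/hadoop/name2 /home/hadoop/name3
```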

11. Start the services

[hadoop@master sbin]$ pwd
/home/hadoop/hadoop-2.7.0/sbin
[hadoop@master sbin]$
[hadoop@master sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-namenode-master.out
slave3: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave3.out
slave2: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.0/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-resourcemanager-master.out
slave3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave3.out
slave2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/hadoop-2.7.0/logs/yarn-hadoop-nodemanager-slave1.out
[hadoop@master sbin]$ netstat -lntup
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 192.168.56.100:9000     0.0.0.0:*               LISTEN      4405/java
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      4606/java
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      4405/java
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 :::8088                 :::*                    LISTEN      4757/java
tcp6       0      0 ::1:25                  :::*                    LISTEN      -
tcp6       0      0 :::8030                 :::*                    LISTEN      4757/java
tcp6       0      0 :::8031                 :::*                    LISTEN      4757/java
tcp6       0      0 :::8032                 :::*                    LISTEN      4757/java
tcp6       0      0 :::8033                 :::*                    LISTEN      4757/java
[hadoop@master sbin]$
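Besides netstat, running jps on each node is the usual sanity check. On this layout, start-all.sh leaves NameNode, SecondaryNameNode and ResourceManager on the master and DataNode plus NodeManager on each slave (matching the startup messages above). A small sketch that prints the expected daemon list per node:

```shell
#!/bin/sh
# Print which Java daemons "jps" should list on each node
# after start-all.sh completes on this four-node layout.
expected_daemons() {
  case "$1" in
    master) echo "NameNode SecondaryNameNode ResourceManager" ;;
    *)      echo "DataNode NodeManager" ;;
  esac
}
for host in master slave1 slave2 slave3; do
  echo "$host: $(expected_daemons "$host")"
done
```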

NameNode web UI:          http://192.168.56.100:50070/dfshealth.html#tab-overview
NodeManager web UI:       http://192.168.56.103:8042/node/allApplications
SecondaryNameNode web UI: http://192.168.56.100:50090/status.html

This article was originally published by 小小三郎1 on the 51CTO blog: http://blog.51cto.com/wsxxsl/1945709. Please contact the original author for reprint permission.