Hadoop HDFS: resolving "The directory item limit is exceed: limit=1048576"

Problem description:

1. Files can no longer be written to the HDFS filesystem.

2. The NameNode log records the following error:

the directory item limit is exceed: limit=1048576

3. A single HDFS directory has accumulated more than 1,048,576 entries. The default value of dfs.namenode.fs-limits.max-directory-items is 1048576, so the limit has to be raised.
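To confirm which directory has hit the limit, the entry count can be checked with the standard HDFS shell commands (the path /data/logs below is a placeholder for your own suspect directory):

```shell
# Summarize the directory: DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
hdfs dfs -count /data/logs

# Count only the immediate children; this is the number that is compared
# against dfs.namenode.fs-limits.max-directory-items. The first line of
# `hdfs dfs -ls` output is a "Found N items" header, hence the tail.
hdfs dfs -ls /data/logs | tail -n +2 | wc -l
```

If the second number is at or near 1048576, this directory is the one triggering the error.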

Solution:

Add the following parameter to the hdfs-site.xml configuration file:

<property>
   <name>dfs.namenode.fs-limits.max-directory-items</name>
   <value>3200000</value>
   <description>Defines the maximum number of items that a directory may
       contain. Cannot set the property to a value less than 1 or more than
       6400000.</description>
</property>

Then push the configuration file to every node in the Hadoop cluster and restart the Hadoop services.
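The push-and-restart step can be scripted roughly as follows. The node list, user, and service names are illustrative assumptions; init-script names in particular vary by distribution (the one shown matches a CDH 5 package install like the one in the log below):

```shell
# Hypothetical node list and config path; adjust for your cluster layout.
NODES="name-01 name-02 data-01 data-02 data-03"
CONF=/etc/hadoop/conf/hdfs-site.xml

# Copy the updated hdfs-site.xml to every node.
for host in $NODES; do
    scp "$CONF" "root@${host}:${CONF}"
done

# Restart the NameNode so the new limit takes effect.
ssh root@name-01 'service hadoop-hdfs-namenode restart'
```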

Follow-up issue:

After the restart, the NameNode took a long time to come back up: it had to load an fsimage holding about 3.1 million inodes and logged a long series of JVM GC pauses while doing so:

2017-11-17 13:03:31,795 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-11-17 13:03:31,797 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name-01/10.0.0.101
************************************************************/
2017-11-17 13:09:45,016 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = name-01/10.0.0.101
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0-cdh5.7.4
STARTUP_MSG:   classpath = /etc/hadoop/conf:/usr/lib/hadoop/lib/commons-io-2.4.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:... (remainder of classpath trimmed for readability)
STARTUP_MSG:   build = http://github.com/cloudera/hadoop -r 2390c11b3cb7a741189f62797de0d9862f48e211; compiled by 'jenkins' on 2016-09-20T23:02Z
STARTUP_MSG:   java = 1.7.0_75
************************************************************/
2017-11-17 13:09:45,026 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-11-17 13:09:45,030 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2017-11-17 13:09:45,506 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-11-17 13:09:45,624 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-11-17 13:09:45,624 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-11-17 13:09:45,653 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://bmh
2017-11-17 13:09:45,654 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use bmh to access this namenode/service.
2017-11-17 13:09:46,001 INFO org.apache.hadoop.util.JvmPauseMonitor: Starting JVM pause monitor
2017-11-17 13:09:46,016 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://name-01:50070
2017-11-17 13:09:46,071 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-11-17 13:09:46,080 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-11-17 13:09:46,087 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2017-11-17 13:09:46,100 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-11-17 13:09:46,106 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-11-17 13:09:46,106 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-11-17 13:09:46,106 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-11-17 13:09:46,142 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-11-17 13:09:46,144 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-11-17 13:09:46,162 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2017-11-17 13:09:46,162 INFO org.mortbay.log: jetty-6.1.26.cloudera.4
2017-11-17 13:09:46,468 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@name-01:50070
2017-11-17 13:09:46,508 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-11-17 13:09:46,558 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2017-11-17 13:09:46,566 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair: true
2017-11-17 13:09:46,621 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2017-11-17 13:09:46,622 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2017-11-17 13:09:46,624 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2017-11-17 13:09:46,626 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2017 Nov 17 13:09:46
2017-11-17 13:09:46,628 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2017-11-17 13:09:46,628 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2017-11-17 13:09:46,630 INFO org.apache.hadoop.util.GSet: 2.0% max memory 958.5 MB = 19.2 MB
2017-11-17 13:09:46,630 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2017-11-17 13:09:46,641 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=true
2017-11-17 13:09:46,641 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 2
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2017-11-17 13:09:46,797 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2017-11-17 13:09:46,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hdfs (auth:SIMPLE)
2017-11-17 13:09:46,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2017-11-17 13:09:46,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2017-11-17 13:09:46,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: bmh
2017-11-17 13:09:46,803 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2017-11-17 13:09:46,805 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2017-11-17 13:09:46,852 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2017-11-17 13:09:46,852 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2017-11-17 13:09:46,852 INFO org.apache.hadoop.util.GSet: 1.0% max memory 958.5 MB = 9.6 MB
2017-11-17 13:09:46,852 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2017-11-17 13:09:46,854 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-11-17 13:09:46,865 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2017-11-17 13:09:46,865 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2017-11-17 13:09:46,865 INFO org.apache.hadoop.util.GSet: 0.25% max memory 958.5 MB = 2.4 MB
2017-11-17 13:09:46,865 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2017-11-17 13:09:46,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2017-11-17 13:09:46,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2017-11-17 13:09:46,867 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2017-11-17 13:09:46,871 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2017-11-17 13:09:46,871 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2017-11-17 13:09:46,871 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2017-11-17 13:09:46,872 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2017-11-17 13:09:46,873 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2017-11-17 13:09:46,875 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2017-11-17 13:09:46,875 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2017-11-17 13:09:46,876 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 958.5 MB = 294.5 KB
2017-11-17 13:09:46,876 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2017-11-17 13:09:46,879 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ACLs enabled? false
2017-11-17 13:09:46,879 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: XAttrs enabled? true
2017-11-17 13:09:46,879 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Maximum size of an xattr: 16384
2017-11-17 13:09:46,887 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /hdname/in_use.lock acquired by nodename 53363@name-01
2017-11-17 13:09:47,773 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Planning to load image: FSImageFile(file=/hdname/current/fsimage_0000000000025720921, cpktTxId=0000000000025720921)
2017-11-17 13:09:49,818 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 3109720 INodes.
2017-11-17 13:10:01,010 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2984ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=3433ms
GC pool 'PS Scavenge' had collection(s): count=1 time=40ms
2017-11-17 13:10:04,191 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2679ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=3064ms
2017-11-17 13:10:07,361 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2669ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=3044ms
2017-11-17 13:10:09,395 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1534ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=1904ms
2017-11-17 13:10:11,453 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1557ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=1934ms
2017-11-17 13:10:13,630 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1676ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2060ms
2017-11-17 13:10:15,817 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1686ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2085ms
2017-11-17 13:10:18,025 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1707ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2112ms
2017-11-17 13:10:20,272 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1746ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2156ms
2017-11-17 13:10:22,439 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1666ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2081ms
2017-11-17 13:10:24,630 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1691ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2102ms
2017-11-17 13:10:26,830 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1699ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2115ms
2017-11-17 13:10:29,056 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1725ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2146ms
2017-11-17 13:10:31,385 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1828ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2252ms
2017-11-17 13:10:35,134 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2748ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2830ms
2017-11-17 13:10:37,381 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 1246ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=1625ms
2017-11-17 13:10:40,385 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2503ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2599ms
2017-11-17 13:10:43,274 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2388ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2389ms
2017-11-17 13:10:46,499 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2224ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2574ms
2017-11-17 13:10:50,074 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2574ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=2992ms
2017-11-17 13:10:54,803 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 4228ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4334ms
2017-11-17 13:10:59,246 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3942ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4160ms
2017-11-17 13:11:03,330 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 3583ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=3908ms
2017-11-17 13:11:06,025 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 2194ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =2598ms
2017-11-17 13:11:08,504 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1978ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =2412ms
2017-11-17 13:11:10,137 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1132ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1583ms
2017-11-17 13:11:11,784 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1147ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1606ms
2017-11-17 13:11:13,642 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1357ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1828ms
2017-11-17 13:11:15,375 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1233ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1714ms
2017-11-17 13:11:18,098 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 2222ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =2708ms
2017-11-17 13:11:19,720 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1121ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1612ms
2017-11-17 13:11:21,282 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1061ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1556ms
2017-11-17 13:11:22,907 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1125ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1621ms
2017-11-17 13:11:24,460 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1052ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1550ms
2017-11-17 13:11:26,013 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1052ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1550ms
2017-11-17 13:11:27,557 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1043ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1542ms
2017-11-17 13:11:29,185 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1127ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1627ms
2017-11-17 13:11:30,955 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1269ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1769ms
2017-11-17 13:11:32,573 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1117ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1617ms
2017-11-17 13:11:35,833 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1077ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =1578ms
2017-11-17 13:12:08,946 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 1252ms
GC pool  'PS MarkSweep'  had collection(s): count=16  time =26562ms
2017-11-17 13:12:11,521 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 2074ms
GC pool  'PS MarkSweep'  had collection(s): count=6  time =10800ms
2017-11-17 13:12:14,285 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause  in  JVM or host machine (eg GC): pause of approximately 2264ms
GC pool  'PS MarkSweep'  had collection(s): count=1  time =2763ms
2017-11-17 13:12:14,338 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.OutOfMemoryError: GC overhead limit exceeded
         at org.apache.hadoop.hdfs.server.namenode.INodeMap.get(INodeMap.java:92)
         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getInode(FSDirectory.java:2357)
         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeDirectorySection(FSImageFormatPBINode.java:207)
         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:262)
         at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:181)
         at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:946)
         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:930)
         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:749)
         at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:680)
         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:292)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1096)
         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:778)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:609)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:670)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:838)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:817)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1538)
         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
2017-11-17 13:12:14,343 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-11-17 13:12:14,344 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at name-01/10.0.0.101
************************************************************/

The Hadoop HDFS NameNode JVM runs out of memory ("GC overhead limit exceeded") while loading the fsimage, and cannot start.

By default the NameNode JVM starts with -Xmx1000m (a 1000 MB heap).

Solution:

Because a CDH yum installation ships no hadoop-env.sh by default, create the hadoop-env.sh file and add the heap-size environment variable:

export HADOOP_HEAPSIZE=32000
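For context, HADOOP_HEAPSIZE is given in megabytes, and the Hadoop startup scripts translate it into the JVM's -Xmx flag, so 32000 yields roughly a 32 GB heap. A minimal sketch of that translation follows; the 32000 value comes from this article, but the function itself is only an illustration of the behavior, not the real CDH launcher script:

```shell
#!/bin/sh
# Sketch: how HADOOP_HEAPSIZE (in megabytes) becomes the NameNode's -Xmx flag.
# Illustrative only -- the actual logic lives in the Hadoop startup scripts.

HADOOP_HEAPSIZE=32000   # value used in this article

heap_opts() {
    # When HADOOP_HEAPSIZE is unset or empty, the scripts fall back to
    # 1000 MB, which is where the default -Xmx1000m comes from.
    size="${HADOOP_HEAPSIZE:-1000}"
    echo "-Xmx${size}m"
}

heap_opts   # prints -Xmx32000m
```

With the fsimage already too large for a 1 GB heap, any value you choose must comfortably exceed the fsimage size plus working memory, which is why the article jumps straight to 32000.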

Problem solved!
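Once the NameNode is back up, it helps to watch how close large directories are getting to the dfs.namenode.fs-limits.max-directory-items ceiling before writes start failing again. The real count would come from `hdfs dfs -count <dir>`; the guard below is a hypothetical sketch of the threshold check, with the 1048576 limit taken from this article:

```shell
#!/bin/sh
# Hypothetical guard: classify a directory's item count against the
# dfs.namenode.fs-limits.max-directory-items ceiling (default 1048576).
# In practice, feed it the FILE_COUNT column of: hdfs dfs -count <dir>

LIMIT=1048576

check_dir_items() {
    count="$1"
    if [ "$count" -ge "$LIMIT" ]; then
        echo "OVER LIMIT"                       # new creates in this dir will fail
    elif [ "$count" -ge $((LIMIT * 9 / 10)) ]; then
        echo "WARN"                             # within 10% of the ceiling
    else
        echo "OK"
    fi
}

check_dir_items 1048576   # prints OVER LIMIT
```

Raising the limit buys time, but splitting hot directories (for example, by date partitions) avoids hitting the ceiling at all.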


This article was reposted from YU文武貝's 51CTO blog; original link: http://blog.51cto.com/linuxerxy/1982873. For reprints, please contact the original author.





