HBase Exception 03: java.io.EOFException: Premature EOF: no length prefix available


1. Symptom

After changing the Hadoop configuration files core-site.xml and mapred-site.xml and then restarting Hadoop and HBase, the following exceptions showed up in the HBase log:

2018-03-22 15:56:09,948 WARN  [ResponseProcessor for block BP-792111345-192.168.1.102-1521639243869:blk_1073741858_1034] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-792111345-192.168.1.102-1521639243869:blk_1073741858_1034
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2294)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847)
2018-03-22 15:56:09,951 WARN  [ResponseProcessor for block BP-792111345-192.168.1.102-1521639243869:blk_1073741857_1033] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception  for block BP-792111345-192.168.1.102-1521639243869:blk_1073741857_1033
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2294)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:847)
2018-03-22 15:56:22,866 INFO  [regionserver/rayner/192.168.1.102:0.logRoller] wal.FSHLog: 
java.io.IOException: All datanodes DatanodeInfoWithStorage[127.0.0.1:50010,DS-44043b27-9b72-419d-9b17-372546490c57,DISK] are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1224)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:990)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:507)
2018-03-22 15:56:23,381 INFO  [RS_OPEN_META-rayner:48626-0-MetaLogRoller] wal.FSHLog: 
java.io.IOException: All datanodes DatanodeInfoWithStorage[127.0.0.1:50010,DS-44043b27-9b72-419d-9b17-372546490c57,DISK] are bad. Aborting...
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1224)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:990)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:507)
2018-03-22 15:56:25,296 WARN  [LeaseRenewer:ray@localhost:9000] hdfs.LeaseRenewer: Failed to renew lease for [DFSClient_NONMAPREDUCE_-1494745515_1] for 30 seconds.  Will retry shortly ...
java.net.ConnectException: Call From rayner/192.168.1.102 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy15.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:595)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy16.renewLease(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:307)
    at com.sun.proxy.$Proxy17.renewLease(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:307)
    at com.sun.proxy.$Proxy17.renewLease(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:892)
    at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:423)
    at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:448)
    at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
    at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:304)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 26 more

 

2. Cause

Judging from the log above, the failure unfolds in three stages:

1. The DFSOutputStream ResponseProcessor threads that collect acks for the RegionServer's WAL writes hit java.io.EOFException: Premature EOF: no length prefix available, meaning the DataNode closed the connection before returning a pipeline ack for the block being written.
2. When the log rollers then tried to recover the write pipeline, there was no usable DataNode left: "All datanodes DatanodeInfoWithStorage[127.0.0.1:50010,...] are bad. Aborting...". On this pseudo-distributed setup the pipeline contains only that single DataNode, so there is nothing to fall back to.
3. At the same time the LeaseRenewer could not reach the NameNode at localhost:9000 at all (java.net.ConnectException: Connection refused).

Taken together, this points to HDFS having been stopped and restarted (to pick up the new core-site.xml and mapred-site.xml) underneath a still-running HBase. The RegionServer's open WAL streams were cut off when the old DataNode process went away, and the NameNode address the client resolves from fs.defaultFS (hdfs://localhost:9000 here) was not yet accepting connections, so lease renewal failed as well.
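As a quick sanity check on the configuration change itself, a few lines of client code can print the NameNode URI the HDFS client will actually dial; it should match the hdfs://localhost:9000 seen in the log. This is only a sketch: the class name is arbitrary, and it assumes the Hadoop client jars and the active core-site.xml are on the classpath.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ShowDefaultFs {
    public static void main(String[] args) {
        // new Configuration() loads core-site.xml / hdfs-site.xml from the classpath
        // (HADOOP_CONF_DIR), i.e. the same files the HBase-embedded DFS client reads.
        Configuration conf = new Configuration();
        // The filesystem URI used for NameNode RPCs, e.g. hdfs://localhost:9000.
        URI defaultFs = FileSystem.getDefaultUri(conf);
        System.out.println("fs.defaultFS resolves to: " + defaultFs);
    }
}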

 

 

 

3. Resolution

Restart the two services in the right order and make sure HDFS is actually up before HBase touches it:

1. Stop HBase first (stop-hbase.sh).
2. Restart Hadoop with the new configuration and wait until HDFS is healthy: jps should show the NameNode and DataNode processes, and hdfs dfsadmin -report should list at least one live DataNode.
3. Only then start HBase again (start-hbase.sh), so the RegionServer opens fresh WAL streams against the running HDFS.

If a RegionServer has already given up because of the broken pipeline, restarting it once HDFS is healthy is sufficient. A quick way to confirm that the NameNode address from core-site.xml is reachable before starting HBase is sketched below.
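The check below is a minimal sketch, assuming the Hadoop client jars and the cluster's core-site.xml are on the classpath; the class name is arbitrary. Listing the HDFS root forces a real RPC to the NameNode, so it fails fast with the same java.net.ConnectException seen above if HDFS is not ready yet:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath, so this tests
        // the same NameNode address that HBase will use.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // listStatus("/") is an actual NameNode RPC; "Connection refused" here
            // means HDFS is not up yet and HBase should not be started.
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
            System.out.println("NameNode at " + FileSystem.getDefaultUri(conf) + " is reachable.");
        }
    }
}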

 

 

4. References

1. java.io.EOFException: Premature EOF: no length prefix available

 
