
Hadoop DataNode error: "DataXceiver error processing unknown operation"

When the Hadoop cluster starts, both the DataNode and NodeManager daemons come up normally, but the logs show:

2015-12-01 16:31:09,216 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-2032151256-192.168.18.139-1448951568590
2015-12-01 16:31:09,222 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2015-12-01 16:31:09,222 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-01 16:31:09,223 INFO org.apache.hadoop.util.GSet: 0.5% max memory 889 MB = 4.4 MB
2015-12-01 16:31:09,223 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
2015-12-01 16:31:09,224 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-2032151256-192.168.18.139-1448951568590
2015-12-01 16:31:09,231 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-2032151256-192.168.18.139-1448951568590 to blockPoolScannerMap, new size=1
2015-12-01 16:31:34,169 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: data2:50010:DataXceiver error processing unknown operation  src: /127.0.0.1:36068 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:212)
at java.lang.Thread.run(Thread.java:744)
2015-12-01 16:32:34,195 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: data2:50010:DataXceiver error processing unknown operation  src: /127.0.0.1:36093 dst: /127.0.0.1:50010
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:212)
at java.lang.Thread.run(Thread.java:744)

Three of the four nodes report this error; one node does not. What could be the cause?

爱吃鱼的程序员 2020-06-10 10:24:18

1 answer

  • https://developer.aliyun.com/profile/5yerqm5bn5yqg?spm=a2c6h.12873639.0.0.6eae304abcjaIB

    Are you running Ambari? If so, this is the cause: Ambari opens a "ping" connection to the DataNode every minute to confirm that the DataNode is working; otherwise it triggers an alert. The DataNode, however, has no logic for handling a connection with no payload, so it throws the exception directly.
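    The failure mode described above can be sketched in a few lines: the DataNode's `DataXceiver` calls `readShort()` to read a 2-byte operation code from each incoming connection, and a health-check that connects and closes without sending anything makes that read hit end-of-stream. This is a minimal stand-in, not Hadoop code; the port is chosen dynamically and all names are illustrative.

    ```python
    import socket
    import threading

    result = []

    # Stand-in for the DataNode's listening socket (any free port).
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def dataxceiver_like():
        # Mimics Receiver.readOp(): DataInputStream.readShort() needs
        # exactly 2 bytes; end-of-stream before that raises EOFException.
        conn, _ = srv.accept()
        data = conn.recv(2)
        if len(data) < 2:
            result.append("EOFException")  # what the DataNode log shows
        conn.close()

    t = threading.Thread(target=dataxceiver_like)
    t.start()

    # The monitor's "ping": connect, then close without writing anything.
    s = socket.create_connection(("127.0.0.1", port))
    s.close()

    t.join()
    srv.close()
    print(result)
    ```

    In other words, the ERROR entries are noise from the health check, not a data-transfer failure, which is consistent with the DataNode otherwise starting and running normally.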

    2020-06-10 10:24:33