
Too many HDFS file blocks (100 million), ~30 million replicas per DataNode (16 GB heap): why does memory suddenly spike?


Cluster reads and writes are very slow. The NameNode is currently allocated 30 GB of heap and each DataNode 16 GB. The symptom: a DataNode goes abnormal, its memory grows sharply at some moment, GC takes too long, and after a period of abnormality it recovers on its own.

The logs show many I/O exceptions involving other nodes:
2023-10-16 08:00:21,114 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679076_172988181 src: /172.16.21.35:40750 dest: /172.16.21.66:50010
2023-10-16 08:00:23,903 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.16.21.66:50010, datanodeUuid=00a88682-7ac7-48bf-a777-004657ef9efa, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-57;cid=cluster13;nsid=25978125;c=1598248794639) Starting thread to transfer BP-993521896-172.16.21.33-1598248794639:blk_1246678698_172988138 to 172.16.21.37:50010
2023-10-16 08:00:23,905 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DataTransfer, at cdh-slave-12:50010: Transmitted BP-993521896-172.16.21.33-1598248794639:blk_1246678698_172988138 (numBytes=83) to /172.16.21.37:50010
2023-10-16 08:00:26,730 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679081_172988186 src: /172.16.21.65:45258 dest: /172.16.21.66:50010
2023-10-16 08:00:33,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679084_172988189 src: /172.16.21.61:39274 dest: /172.16.21.66:50010
2023-10-16 08:00:38,898 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679088_172988195 src: /172.16.21.39:32818 dest: /172.16.21.66:50010
2023-10-16 08:00:40,173 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.16.21.39:32818, dest: /172.16.21.66:50010, bytes: 16135336, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1712840665_1, offset: 0, srvID: 00a88682-7ac7-48bf-a777-004657ef9efa, blockid: BP-993521896-172.16.21.33-1598248794639:blk_1246679088_172988195, duration(ns): 1273528075
2023-10-16 08:00:40,174 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246679088_172988195, type=LAST_IN_PIPELINE terminating
2023-10-16 08:00:42,561 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1633)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1568)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1481)
at java.lang.Thread.run(Thread.java:748)
Connection timeout exceptions:

2023-10-16 08:01:05,610 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679103_172988211 src: /172.16.21.66:46272 dest: /172.16.21.66:50010
2023-10-16 08:01:07,010 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.16.21.39:32848, dest: /172.16.21.66:50010, bytes: 38595290, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-1871563624_1, offset: 0, srvID: 00a88682-7ac7-48bf-a777-004657ef9efa, blockid: BP-993521896-172.16.21.33-1598248794639:blk_1246679100_172988208, duration(ns): 3514426343
2023-10-16 08:01:07,010 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246679100_172988208, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[172.16.21.40:50010] terminating
2023-10-16 08:01:17,512 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode{data=FSDataset{dirpath='[/hdfs1/dfs/dn, /hdfs2/dfs/dn, /hdfs3/dfs/dn, /hdfs4/dfs/dn, /hdfs5/dfs/dn, /hdfs6/dfs/dn, /hdfs7/dfs/dn, /hdfs8/dfs/dn]'}, localName='cdh-slave-12:50010', datanodeUuid='00a88682-7ac7-48bf-a777-004657ef9efa', xmitsInProgress=0}:Exception transfering block BP-993521896-172.16.21.33-1598248794639:blk_1246678862_172987966 to mirror 172.16.21.64:50010: java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.16.21.66:58034 remote=/172.16.21.64:50010]
2023-10-16 08:01:17,513 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-993521896-172.16.21.33-1598248794639:blk_1246678862_172987966 received exception java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.16.21.66:58034 remote=/172.16.21.64:50010]
2023-10-16 08:01:17,513 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: cdh-slave-12:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.16.21.62:51360 dst: /172.16.21.66:50010
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/172.16.21.66:58034 remote=/172.16.21.64:50010]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:537)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:846)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)
There are also assorted pipeline disconnections and premature-EOF (truncated stream) errors:

2023-10-16 08:01:17,765 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679111_172988219 src: /172.16.21.62:51570 dest: /172.16.21.66:50010
2023-10-16 08:01:21,668 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-993521896-172.16.21.33-1598248794639:blk_1246679121_172988229 src: /172.16.21.35:40862 dest: /172.16.21.66:50010
2023-10-16 08:01:21,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: A packet was last sent 65046 milliseconds ago.
2023-10-16 08:01:21,837 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: The downstream error might be due to congestion in upstream including this node. Propagating the error:
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:539)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1384)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 08:01:21,838 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run():
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:539)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1384)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 08:01:21,838 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246678914_172988018, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=2:[172.16.21.40:50010, 172.16.21.36:50010]
java.io.EOFException: Unexpected EOF while trying to read response from server
at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:539)
at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1384)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 08:01:21,837 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-993521896-172.16.21.33-1598248794639:blk_1246678914_172988018
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)

2023-10-16 09:19:47,101 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246718750_173027872, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[172.16.21.39:50010]: Thread is interrupted.
2023-10-16 09:19:47,101 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246718750_173027872, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[172.16.21.39:50010] terminating
2023-10-16 09:19:47,102 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-993521896-172.16.21.33-1598248794639:blk_1246718750_173027872 received exception java.io.IOException: Premature EOF from inputStream
2023-10-16 09:19:47,102 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: cdh-slave-12:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.16.21.36:57480 dst: /172.16.21.66:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 09:19:47,107 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-993521896-172.16.21.33-1598248794639:blk_1246718764_173027886
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 09:19:47,108 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-993521896-172.16.21.33-1598248794639:blk_1246718765_173027887
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 09:19:47,108 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246718764_173027886, type=LAST_IN_PIPELINE: Thread is interrupted.
2023-10-16 09:19:47,108 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246718764_173027886, type=LAST_IN_PIPELINE terminating
2023-10-16 09:19:47,108 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246718765_173027887, type=LAST_IN_PIPELINE: Thread is interrupted.
2023-10-16 09:19:47,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-993521896-172.16.21.33-1598248794639:blk_1246718765_173027887, type=LAST_IN_PIPELINE terminating
2023-10-16 09:19:47,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-993521896-172.16.21.33-1598248794639:blk_1246718764_173027886 received exception java.io.IOException: Premature EOF from inputStream
2023-10-16 09:19:47,109 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-993521896-172.16.21.33-1598248794639:blk_1246718765_173027887 received exception java.io.IOException: Premature EOF from inputStream
2023-10-16 09:19:47,109 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: cdh-slave-12:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.16.21.61:58040 dst: /172.16.21.66:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 09:19:47,109 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: cdh-slave-12:50010:DataXceiver error processing WRITE_BLOCK operation src: /172.16.21.39:43300 dst: /172.16.21.66:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)
2023-10-16 09:19:47,110 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-993521896-172.16.21.33-1598248794639:blk_1246718766_173027888
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:904)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
at java.lang.Thread.run(Thread.java:748)

Asked by 游客icwdnh6bpv5yg on 2023-10-16 14:15:09
1 answer
  • If HDFS has too many file blocks, with each DataNode holding roughly 30 million replicas, a sudden jump in memory usage is to be expected. Every file, directory, and block in HDFS is recorded in NameNode memory, and each record occupies roughly 150 bytes. When the block count grows very large, this metadata consumes a great deal of heap and memory usage climbs steeply. The DataNodes are affected as well: each replica a DataNode stores is tracked in its own heap, so tens of millions of replicas leave a 16 GB heap with little headroom and make long GC pauses likely.
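
    As a back-of-the-envelope check, here is a minimal sketch of the arithmetic (assuming the ~150 bytes/object rule of thumb above and, pessimistically, one small file per block):

# Rough estimate of NameNode heap consumed by filesystem metadata.
BYTES_PER_OBJECT = 150      # rule-of-thumb figure; varies by Hadoop version

blocks = 100_000_000        # ~1e8 blocks cluster-wide, per the question
files = 100_000_000         # pessimistic small-file case: one file per block

heap_gib = (blocks + files) * BYTES_PER_OBJECT / 2**30
print(f"~{heap_gib:.0f} GiB of NameNode heap for metadata alone")  # ~28 GiB

    At around 28 GiB of metadata, the 30 GB NameNode heap here has almost no headroom, which fits the slow, GC-heavy behavior described in the question.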
    In addition, when DataNode memory is insufficient, HDFS performance degrades: block reads and writes slow down, and long GC pauses can stall the write pipeline, which is consistent with the connection resets, socket timeouts, and premature-EOF errors in the logs above. In that case, consider increasing the DataNode heap, or adjusting HDFS parameters such as the block size and replication factor, to reduce memory usage and improve performance (a configuration sketch follows).
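
    A minimal sketch of what that tuning could look like. The property and variable names below are standard HDFS settings, but the values are illustrative assumptions, not measured recommendations for this cluster:

# hadoop-env.sh -- enlarge the DataNode heap. (Hadoop 3.x uses
# HDFS_DATANODE_OPTS instead of HADOOP_DATANODE_OPTS.)
export HADOOP_DATANODE_OPTS="-Xms24g -Xmx24g ${HADOOP_DATANODE_OPTS}"

<!-- hdfs-site.xml: larger blocks mean fewer block objects for the same
     amount of data; applies only to files written after the change. -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value> <!-- 256 MB -->
</property>

    Raising the block size only helps new writes; existing small files still have to be merged or archived before the cluster-wide block count actually falls. Lowering dfs.replication would also shrink the per-DataNode replica count, at the cost of durability.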
    In short, both an excessive block count and insufficient DataNode memory can drive memory usage up sharply; both should be addressed promptly to keep HDFS running stably.
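
    To confirm on the spot that a given freeze is GC-driven rather than a disk or network stall, a quick check on the affected DataNode host (standard JDK 8 tooling; <datanode_pid> is a placeholder for the DataNode process ID):

jstat -gcutil <datanode_pid> 1000
# Watch the FGC/FGCT columns: if the full-GC count and cumulative full-GC
# time climb sharply during an incident window, the stall is a GC pause.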

    Answered on 2023-10-23 11:49:28

