Summary: an HDFS DataNode keeps logging java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write from its DataXceiver threads while serving an HBase region server. The log excerpt and the hdfs-site.xml fix follow.
 2014-08-25 15:35:05,691 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:34264]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:37121, dest: /10.130.136.136:50010, bytes: 67108864, op: HDFS_WRITE, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 0, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_-3628597342762703578_40720686, duration: 6339411379
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block blk_-3628597342762703578_40720686 terminating
2014-08-25 15:35:06,465 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7509787569548089877_40720689 src: /10.130.136.136:37142 dest: /10.130.136.136:50010
2014-08-25 15:35:06,724 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:50010, dest: /10.130.136.136:33647, bytes: 5921280, op: HDFS_READ, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 388096, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_2616588945174162483_33797955, duration: 547889496646
2014-08-25 15:35:06,725 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):Got exception while serving blk_2616588945174162483_33797955 to /10.130.136.136:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)

2014-08-25 15:35:06,725 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
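
The 480000 ms in the exception is the DataNode's default write timeout, dfs.datanode.socket.write.timeout = 8 * 60 * 1000 ms: the reader (here an HBase region server on the same host) failed to drain the channel within that window. The clienttrace line above confirms it: the read of blk_2616588945174162483 reports duration: 547889496646, which is nanoseconds, i.e. roughly 548 seconds, well past the 480-second limit. As a quick check before changing anything, a minimal sketch like the following can print the timeouts currently in effect (the hdfs-site.xml path, the class name, and the fallback defaults are assumptions for a typical Hadoop 1.x deployment):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Minimal sketch: print the effective HDFS socket timeouts.
// The config path below is an assumption for a typical install; the
// fallback values passed to getLong() are the era's shipped defaults.
public class ShowHdfsTimeouts {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        System.out.println("read timeout  (ms): "
                + conf.getLong("dfs.socket.timeout", 60 * 1000));
        System.out.println("write timeout (ms): "
                + conf.getLong("dfs.datanode.socket.write.timeout", 8 * 60 * 1000));
    }
}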



Solution: raise the HDFS socket timeouts and handler counts in hdfs-site.xml:
  <!-- Socket read timeout, raised from the 60 s default to 15 minutes -->
  <property>
    <name>dfs.socket.timeout</name>
    <value>900000</value>
  </property>

  <!-- More DataNode IPC handler threads to absorb load spikes -->
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>20</value>
  </property>

  <!-- More NameNode RPC handler threads -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>30</value>
  </property>

  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>10800000</value>
    <description>Raised to 3 hours; the default of 8 * 60 * 1000 ms is exactly
    the "480000 millis timeout while waiting for channel to be ready for write"
    reported in the log.</description>
  </property>
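
These properties belong in hdfs-site.xml on every DataNode, followed by a rolling restart of the DataNodes so they take effect. For a one-off job where editing the cluster config is impractical, the timeouts can also be raised on the client side through the standard Hadoop FileSystem API; a minimal sketch (the class name is illustrative, and this does not change the DataNode side, so the hdfs-site.xml edit above remains the real fix):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Minimal sketch: raise the socket timeouts for a single HDFS client.
public class PatientHdfsClient {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.setLong("dfs.socket.timeout", 900000L);                  // 15 min read timeout
        conf.setLong("dfs.datanode.socket.write.timeout", 10800000L); // 3 h write timeout
        FileSystem fs = FileSystem.get(conf); // I/O via fs now uses the raised timeouts
        System.out.println("connected to " + fs.getUri());
    }
}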