
Summary: An HDFS DataNode (10.130.136.136) kept logging SocketTimeoutException errors in its DataXceiver threads while serving block reads and writes to a local HBase RegionServer client. The relevant DataNode log excerpt and the hdfs-site.xml changes that resolved the problem are shown below.
 2014-08-25 15:35:05,691 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:34264]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:37121, dest: /10.130.136.136:50010, bytes: 67108864, op: HDFS_WRITE, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 0, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_-3628597342762703578_40720686, duration: 6339411379
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block blk_-3628597342762703578_40720686 terminating
2014-08-25 15:35:06,465 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7509787569548089877_40720689 src: /10.130.136.136:37142 dest: /10.130.136.136:50010
2014-08-25 15:35:06,724 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:50010, dest: /10.130.136.136:33647, bytes: 5921280, op: HDFS_READ, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 388096, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_2616588945174162483_33797955, duration: 547889496646
2014-08-25 15:35:06,725 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):Got exception while serving blk_2616588945174162483_33797955 to /10.130.136.136:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)

2014-08-25 15:35:06,725 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
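For context, the "waiting for channel to be ready for write" message comes from the BlockSender / SocketOutputStream path visible in the stack trace: while sending block data, the DataNode waits up to dfs.datanode.socket.write.timeout milliseconds (default 8 * 60 * 1000 = 480000 ms) for the client's socket to become writable again; if the client (here the HBase RegionServer) does not drain the data in time, the transfer is aborted with SocketTimeoutException. The following is only a conceptual, hypothetical sketch of that wait-for-writable pattern using java.nio, not Hadoop's actual SocketIOWithTimeout code:

  import java.net.SocketTimeoutException;
  import java.nio.channels.SelectionKey;
  import java.nio.channels.Selector;
  import java.nio.channels.SocketChannel;

  public class WriteTimeoutSketch {
      // Wait up to timeoutMs for the socket to accept more data, as a DataNode does
      // while streaming a block to a slow client; give up with the same kind of error.
      static void waitUntilWritable(SocketChannel ch, long timeoutMs) throws Exception {
          try (Selector selector = Selector.open()) {
              ch.configureBlocking(false);
              ch.register(selector, SelectionKey.OP_WRITE);
              // select() returns 0 if the channel never became writable within the timeout.
              if (selector.select(timeoutMs) == 0) {
                  throw new SocketTimeoutException(
                          timeoutMs + " millis timeout while waiting for channel to be ready for write");
              }
          }
      }
  }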



Solution: raise the DataNode socket timeouts (the 480000 ms in the error is the default dfs.datanode.socket.write.timeout, i.e. 8 * 60 * 1000 ms) and increase the handler counts. Add the following to hdfs-site.xml and restart HDFS for the changes to take effect:
  <property>
    <name>dfs.socket.timeout</name>
    <value>900000</value>
  </property>

  <property>
    <name>dfs.datanode.handler.count</name>
    <value>20</value>
  </property>

  <property>
    <name>dfs.namenode.handler.count</name>
    <value>30</value>
  </property>

  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>10800000</value>
    <description>Raised from the default of 8 * 60 * 1000 ms (480000 ms) to 10800000 ms (3 hours), to avoid the "480000 millis timeout while waiting for channel to be ready for write" errors.</description>
  </property>
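These properties live in hdfs-site.xml on the DataNodes. The two timeout keys are also read by the HDFS client (DFSClient) from its Configuration, so a client such as the HBase RegionServer picks up the new values as long as the updated hdfs-site.xml is on its classpath. Below is a minimal, hypothetical sketch of setting them programmatically on a client-side Configuration; the class name is illustrative only, and newer Hadoop releases alias these keys to dfs.client.socket-timeout:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsClientTimeoutExample {
      public static void main(String[] args) throws Exception {
          // Loads core-site.xml / hdfs-site.xml found on the classpath.
          Configuration conf = new Configuration();

          // Same keys and values (in milliseconds) as the hdfs-site.xml snippet above.
          conf.set("dfs.socket.timeout", "900000");
          conf.set("dfs.datanode.socket.write.timeout", "10800000");

          FileSystem fs = FileSystem.get(conf);
          System.out.println("HDFS root exists: " + fs.exists(new Path("/")));
          fs.close();
      }
  }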
