
Summary: the DataNode log was filled with SocketTimeoutException errors like the following:
 2014-08-25 15:35:05,691 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:34264]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:37121, dest: /10.130.136.136:50010, bytes: 67108864, op: HDFS_WRITE, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 0, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_-3628597342762703578_40720686, duration: 6339411379
2014-08-25 15:35:06,464 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 2 for block blk_-3628597342762703578_40720686 terminating
2014-08-25 15:35:06,465 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7509787569548089877_40720689 src: /10.130.136.136:37142 dest: /10.130.136.136:50010
2014-08-25 15:35:06,724 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.130.136.136:50010, dest: /10.130.136.136:33647, bytes: 5921280, op: HDFS_READ, cliID: DFSClient_hb_rs_xx,60020,1388115177740_1837727868_26, offset: 388096, srvID: DS-1533727399-10.130.136.136-50010-1388038551296, blockid: blk_2616588945174162483_33797955, duration: 547889496646
2014-08-25 15:35:06,725 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):Got exception while serving blk_2616588945174162483_33797955 to /10.130.136.136:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)

2014-08-25 15:35:06,725 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.130.136.136:50010, storageID=DS-1533727399-10.130.136.136-50010-1388038551296, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.130.136.136:50010 remote=/10.130.136.136:33647]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:392)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:490)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:202)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
        at java.lang.Thread.run(Thread.java:724)
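All of these stack traces come from the block-send path: BlockSender.sendChunks blocks in SocketOutputStream.waitForWritable until the client socket becomes writable again, and gives up after dfs.datanode.socket.write.timeout (default 8 * 60 * 1000 = 480000 ms, exactly the value in the messages). The clienttrace line above for blk_2616588945174162483_33797955 records an HDFS_READ with duration 547889496646 ns, roughly nine minutes, so the HBase region server on the other end (cliID DFSClient_hb_rs_xx) was draining the block more slowly than the write timeout allows. As a rough illustration of the mechanism (a simplified sketch, not Hadoop's actual SocketIOWithTimeout code), the wait amounts to a selector poll for OP_WRITE:

import java.net.SocketTimeoutException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class WriteTimeoutSketch {
    // Simplified sketch: wait up to timeoutMillis for the channel to become
    // writable. If the peer stops draining its receive buffer, the local send
    // buffer stays full, OP_WRITE never fires, and the wait ends in the same
    // SocketTimeoutException seen in the log above.
    static void waitForWritable(SocketChannel channel, long timeoutMillis)
            throws Exception {
        channel.configureBlocking(false);
        try (Selector selector = Selector.open()) {
            channel.register(selector, SelectionKey.OP_WRITE);
            if (selector.select(timeoutMillis) == 0) {
                throw new SocketTimeoutException(timeoutMillis
                        + " millis timeout while waiting for channel to be ready for write");
            }
        }
    }
}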



Solution: raise the relevant timeouts and handler counts in hdfs-site.xml:
  <!-- DFSClient/DataNode read timeout; default 60000 ms, raised here to 15 minutes. -->
  <property>
    <name>dfs.socket.timeout</name>
    <value>900000</value>
  </property>

  <!-- DataNode service threads; default 3. -->
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>20</value>
  </property>

  <!-- NameNode RPC handler threads; default 10. -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>30</value>
  </property>

  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>10800000</value>
    <description>Raised to 3 hours; the default is 8 * 60 * 1000 = 480000 ms,
which is the "480000 millis timeout while waiting for channel to be ready
for write" seen in the log.</description>
  </property>
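The same keys can also be set programmatically on a client-side Configuration. Below is a minimal sketch assuming the Hadoop 1.x-era key names used above; the class name and the smoke-test path are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTimeoutConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Read/connect timeout, default 60000 ms; raised to 15 minutes.
        conf.setInt("dfs.socket.timeout", 900000);
        // Write timeout, default 8 * 60 * 1000 = 480000 ms; raised to 3 hours.
        conf.setInt("dfs.datanode.socket.write.timeout", 10800000);
        // Hypothetical smoke test: open the filesystem and probe the root path.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/")));
    }
}

After editing hdfs-site.xml, restart the DataNodes (and the NameNode, for dfs.namenode.handler.count) so the new values take effect; on releases that ship the getconf tool, a key can be checked with hdfs getconf -confKey dfs.datanode.socket.write.timeout.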