
Hive insert into a partitioned table fails with "File not found: File does not exist: reduce.xml" (400 error)

HDFS starts normally; the NameNode (NN), DataNode (DN), and SecondaryNameNode (SN) are all up.
After starting Hive there is only a single RunJar process, but queries, creating local (non-partitioned) tables, and selecting from them all work fine.

The error occurs when importing from the local table tb3 into the partitioned table tb4_p:

insert overwrite table tb4_p partition ( pid='p01', pname='pHive01' ) select id, name from tb3;
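The table definitions are not included in the post; a minimal DDL sketch that matches this query might look like the following (the column types and the field delimiter are assumptions, only the table and column names come from the question):

```sql
-- Hypothetical DDL for the two tables named in the question.
-- Column types and the '\t' delimiter are assumed, not from the post.
CREATE TABLE tb3 (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

CREATE TABLE tb4_p (id INT, name STRING)
PARTITIONED BY (pid STRING, pname STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```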

Log output:
15/05/16 16:32:44 [Thread-11]: DEBUG ipc.ProtobufRpcEngine: Call: getServerDefaults took 14ms
15/05/16 16:32:44 [Thread-11]: DEBUG sasl.SaslDataTransferClient: SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:50010,DS-9dce30df-cbfc-4f8b-bee8-9300a613b9af,DISK]
15/05/16 16:32:44 [DataStreamer for file /tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/map.xml block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DataStreamer block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020 sending packet packet seqno: 0 offsetInBlock: 0 lastPacketInBlock: false lastByteOffsetInBlock: 3062
15/05/16 16:32:45 [ResponseProcessor for block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DFSClient seqno: 0 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0
15/05/16 16:32:45 [DataStreamer for file /tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/map.xml block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DataStreamer block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020 sending packet packet seqno: 1 offsetInBlock: 3062 lastPacketInBlock: true lastByteOffsetInBlock: 3062
15/05/16 16:32:45 [ResponseProcessor for block BP-1539090635-127.0.0.1-1431686395752:blk_1073741844_1020]: DEBUG hdfs.DFSClient: DFSClient seqno: 1 reply: SUCCESS downstreamAckTimeNanos: 0 flag: 0
15/05/16 16:32:45 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #29
15/05/16 16:32:45 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #29
15/05/16 16:32:45 [main]: DEBUG ipc.ProtobufRpcEngine: Call: complete took 11ms
15/05/16 16:32:45 [main]: INFO log.PerfLogger: 
15/05/16 16:32:45 [main]: INFO Configuration.deprecation: mapred.submit.replication is deprecated. Instead, use mapreduce.client.submit.file.replication
15/05/16 16:32:45 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #30
15/05/16 16:32:45 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #30
15/05/16 16:32:45 [main]: DEBUG ipc.ProtobufRpcEngine: Call: setReplication took 17ms
15/05/16 16:32:45 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.LocalClientProtocolProvider
15/05/16 16:32:45 [main]: DEBUG mapreduce.Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
15/05/16 16:32:45 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
15/05/16 16:32:45 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
15/05/16 16:32:45 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
15/05/16 16:32:45 [main]: INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/05/16 16:32:45 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
15/05/16 16:32:45 [main]: DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
15/05/16 16:32:45 [main]: DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
15/05/16 16:32:45 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = 
15/05/16 16:32:46 [main]: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
15/05/16 16:32:46 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG exec.Utilities: Use session specified class loader
15/05/16 16:32:46 [main]: DEBUG fs.FSStatsPublisher: Initing FSStatsPublisher with : hdfs://localhost:9000/home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/-ext-10001
15/05/16 16:32:46 [main]: DEBUG hdfs.DFSClient: /home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/-ext-10001: masked=rwxr-xr-x
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #31
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #31
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 10ms
15/05/16 16:32:46 [main]: INFO fs.FSStatsPublisher: created : hdfs://localhost:9000/home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/-ext-10001
15/05/16 16:32:46 [main]: DEBUG hdfs.DFSClient: /home/hive-1.1.0/warehousedir/tb4_p/pid=p01/pname=pHive01/.hive-staging_hive_2015-05-16_16-32-35_780_6620657129532699458-1/_tmp.-ext-10002: masked=rwxr-xr-x
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #32
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #32
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: mkdirs took 13ms
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.connect(Job.java:1255)
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.LocalClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Cannot pick org.apache.hadoop.mapred.LocalClientProtocolProvider as the ClientProtocolProvider - returned null protocol
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Trying ClientProtocolProvider : org.apache.hadoop.mapred.YarnClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.mapred.ResourceMgrDelegate entered state INITED
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl entered state INITED
15/05/16 16:32:46 [main]: INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(RMProxy.java:136)
15/05/16 16:32:46 [main]: DEBUG ipc.YarnRPC: Creating YarnRPC for org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
15/05/16 16:32:46 [main]: DEBUG ipc.HadoopYarnProtoRPC: Creating a HadoopYarnProtoRpc proxy for protocol interface org.apache.hadoop.yarn.api.ApplicationClientProtocol
15/05/16 16:32:46 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.yarn.client.api.impl.YarnClientImpl is started
15/05/16 16:32:46 [main]: DEBUG service.AbstractService: Service org.apache.hadoop.mapred.ResourceMgrDelegate is started
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
15/05/16 16:32:46 [main]: DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path = 
15/05/16 16:32:46 [main]: DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
15/05/16 16:32:46 [main]: DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@3698f2a1
15/05/16 16:32:46 [main]: DEBUG sasl.DataTransferSaslUtil: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
15/05/16 16:32:46 [main]: DEBUG mapreduce.Cluster: Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Cluster.getFileSystem(Cluster.java:162)
15/05/16 16:32:46 [main]: DEBUG security.UserGroupInformation: PrivilegedAction as:root (auth:SIMPLE) from:org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
15/05/16 16:32:46 [main]: INFO exec.Utilities: PLAN PATH = hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/map.xml
15/05/16 16:32:46 [main]: DEBUG exec.Utilities: Found plan in cache for name: map.xml
15/05/16 16:32:46 [main]: INFO exec.Utilities: PLAN PATH = hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [main]: INFO exec.Utilities: ***************non-local mode***************
15/05/16 16:32:46 [main]: INFO exec.Utilities: local path = hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [main]: INFO exec.Utilities: Open file to read in plan: hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #33
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #33
15/05/16 16:32:46 [main]: INFO exec.Utilities: File not found: File does not exist: /tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:71)
	at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:61)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1803)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1774)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1710)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:586)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:365)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

15/05/16 16:32:46 [main]: INFO exec.Utilities: No plan file found: hdfs://localhost:9000/tmp/hive-1.1.0/scratchdir/root/d7559095-7bcd-43b8-a01b-911eedea4696/hive_2015-05-16_16-32-35_780_6620657129532699458-1/-mr-10004/f4f16c37-6793-4871-a713-10421a89075c/reduce.xml
15/05/16 16:32:46 [main]: DEBUG mapred.ResourceMgrDelegate: getStagingAreaDir: dir=/tmp/hadoop-yarn/staging/root/.staging
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #34
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #34
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 12ms
15/05/16 16:32:46 [IPC Parameter Sending Thread #2]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root sending #35
15/05/16 16:32:46 [IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root]: DEBUG ipc.Client: IPC Client (311355524) connection to localhost/127.0.0.1:9000 from root got value #35
15/05/16 16:32:46 [main]: DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 8ms
15/05/16 16:32:46 [main]: DEBUG ipc.Client: The ping interval is 60000 ms.
15/05/16 16:32:46 [main]: DEBUG ipc.Client: Connecting to localhost/127.0.0.1:8032
15/05/16 16:32:47 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:48 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:49 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:50 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:51 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:52 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/05/16 16:32:53 [main]: INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...and this retry message keeps printing indefinitely.

/tmp/hive-1.1.0/scratchdir/ is the temporary directory on HDFS. Checking it, only the map.xml file is there; reduce.xml indeed does not exist.

I looked at http://bbs.csdn.net/topics/390911781, but my environment is very simple, with no HBase and no MySQL, so I still haven't found the cause.

Any help would be appreciated!



爱吃鱼的程序员 2020-05-30 23:50:57
1 answer

    Solved: it was a field-delimiter problem.

    (Follow-up) May I ask how you solved it? I ran into the same situation, thanks.

    (Follow-up) How did you solve it? I hit the same thing too, this is bad...

    (Reply) I don't remember the details any more; see http://www.cuiweiyou.com/1454.html for reference, hope it helps.

    (Reply) For partition handling on the Hive side, you only need to start hiveserver2. Then when you check with jps there is just one RunJar process, and the log should contain no error messages.
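The "field-delimiter problem" mentioned above usually means the delimiter declared in the table's ROW FORMAT does not match the actual data file. A minimal way to check and fix it, assuming tab-separated data (the '\t' here is an assumption, not from the original thread):

```sql
-- Inspect the source table's SerDe properties (including field.delim):
DESCRIBE FORMATTED tb3;

-- If the declared delimiter does not match the data, recreate the table
-- with one that does ('\t' is an assumed example):
DROP TABLE tb3;
CREATE TABLE tb3 (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
```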

    2020-05-30 23:50:59