A DataWorks sync task writing to OSS keeps failing with the error below. How can this be fixed?
Caused by: com.aliyun.oss.ClientException: The target server failed to respond
at com.aliyun.oss.common.utils.ExceptionFactory.createNetworkException(ExceptionFactory.java:71)
at com.aliyun.oss.common.comm.DefaultServiceClient.sendRequestCore(DefaultServiceClient.java:127)
at com.aliyun.oss.common.comm.ServiceClient.sendRequestImpl(ServiceClient.java:133)
at com.aliyun.oss.common.comm.ServiceClient.sendRequest(ServiceClient.java:70)
at com.aliyun.oss.internal.OSSOperation.send(OSSOperation.java:83)
at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:145)
at com.aliyun.oss.internal.OSSOperation.doOperation(OSSOperation.java:102)
at com.aliyun.oss.internal.OSSMultipartOperation.initiateMultipartUpload(OSSMultipartOperation.java:226)
at com.aliyun.oss.OSSClient.initiateMultipartUpload(OSSClient.java:727)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.getUploadId(AliyunOSSFileSystemStore.java:641)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.uploadCurrentPart(AliyunOSSBlockOutputStream.java:177)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSBlockOutputStream.write(AliyunOSSBlockOutputStream.java:151)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
at parquet.bytes.ConcatenatingByteArrayCollector.writeAllTo(ConcatenatingByteArrayCollector.java:46)
at parquet.hadoop.ParquetFileWriter.writeDataPages(ParquetFileWriter.java:347)
at parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writeToFileWriter(ColumnChunkPageWriteStore.java:182)
at parquet.hadoop.ColumnChunkPageWriteStore.flushToFileWriter(ColumnChunkPageWriteStore.java:238)
at parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:155)
at parquet.hadoop.InternalParquetRecordWriter.checkBlockSizeReached(InternalParquetRecordWriter.java:131)
at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:123)
at parquet.hadoop.ParquetWriter.write(ParquetWriter.java:258)
at com.alibaba.datax.plugin.writer.hdfswriter.HdfsHelper.parquetFileStartWrite(HdfsHelper.java:1068)
... 4 more
This error means the target OSS server failed to respond to the request.

Is the OSS data source currently passing the Data Integration connectivity test? Looking at the task's historical instance runs, only this one execution failed, so the initial suspicion was a transient issue such as a concurrent-operation anomaly. After further investigation, the likely cause is network jitter during the early-morning hours: the write failed partway through, leaving the file in an abnormal state. The task already has auto-rerun configured, which mitigates failures caused by transient network jitter. In addition, try to avoid scheduling the task during the early-morning peak period.

This answer was compiled from the DingTalk group "DataWorks交流群(答疑@机器人)".
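The auto-rerun recommended above is essentially retry-with-backoff around a write that can fail transiently. Below is a minimal, self-contained sketch of that idea in plain Java; the helper name `withRetry`, the attempt count, and the delay values are illustrative assumptions, not DataWorks or OSS SDK APIs:

```java
import java.util.concurrent.Callable;

public class RetryUpload {

    // Retry a transient-failure-prone action with exponential backoff:
    // wait baseDelayMs, then 2x, 4x, ... between attempts.
    static <T> T withRetry(Callable<T> action, int maxAttempts, long baseDelayMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(baseDelayMs << (attempt - 1));
                }
            }
        }
        throw last;  // all attempts exhausted; surface the last failure
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky upload: fails twice, then succeeds,
        // mimicking transient network jitter during a multipart upload.
        final int[] calls = {0};
        String result = withRetry(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException("The target server failed to respond");
            }
            return "upload ok after " + calls[0] + " attempts";
        }, 5, 10);
        System.out.println(result);
    }
}
```

If you manage the OSS client yourself rather than going through DataWorks, the OSS Java SDK's client configuration also exposes a retry count (`setMaxErrorRetry`), which applies the same idea at the SDK request level.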