
Archiving fails when using DMS built-in storage to save archived data in DMS management — how should this be handled?

When using DMS built-in storage to save archived data in DMS management, the archive task fails. How should this be handled? Ticket number: 8137738. Archive task log:

2023-02-13 13:31:20[GMT+08:00] INFO - Resource Control is active!
2023-02-13 13:31:20[GMT+08:00] INFO - Starting job j_145267 at Mon Feb 13 13:31:20 CST 2023
2023-02-13 13:31:20[GMT+08:00] INFO - job JVM args: '-dms.flowid=f_44683' '-dms.execid=14441889' '-dms.jobid=j_145267'
2023-02-13 13:31:20[GMT+08:00] INFO - user.to.proxy property was not set, defaulting to submit user zhulong
2023-02-13 13:31:20[GMT+08:00] INFO - Building lindorm_backup job executor.
2023-02-13 13:31:20[GMT+08:00] INFO - Starting execution of spark backup job...
2023-02-13 13:31:20[GMT+08:00] INFO - Executing archive-to-inner_oss task, task version [f_44683], business time [1676178972531]
2023-02-13 13:31:20[GMT+08:00] INFO - build oss archive param
2023-02-13 13:31:20[GMT+08:00] INFO - wait for spark backup task complete. taskId:1123460
2023-02-13 13:31:20[GMT+08:00] INFO - spark task id 1123460 status: NEW
2023-02-13 13:31:20[GMT+08:00] INFO - Print task detail log.
2023-02-13 13:31:20[GMT+08:00] INFO - DG uri null:null invalid.
2023-02-13 13:32:30[GMT+08:00] INFO - spark task id 1123460 status: RUNNING
2023-02-13 13:32:30[GMT+08:00] INFO - Print task detail log.
2023-02-13 13:31:29[GMT+08:00] INFO - Starting to run job.
2023-02-13 13:32:24[GMT+08:00] INFO - retry !!!,schema:insclaim_core,table:ins_request_log
2023-02-13 13:32:29[GMT+08:00] INFO - start create temp table,temp table name:tmp_dms_8137738_20230212131612_ins_request_log
2023-02-13 13:33:40[GMT+08:00] INFO - spark task id 1123460 status: FAILED
2023-02-13 13:33:40[GMT+08:00] INFO - start get spark log task . taskId:1123460
2023-02-13 13:33:40[GMT+08:00] INFO - check spark response :https://datafactory-test.oss-cn-hangzhou.aliyuncs.com/runlog/production/open/1123460/driver.log?Expires=1676352820&OSSAccessKeyId=LTAI5tAZmejvKKAS9KY2ggPU&Signature=Q2Ydzm7xLb9RTtMC61dGvh3HgRE%3D
2023-02-13 13:33:41[GMT+08:00] INFO - check spark response :["https://datafactory-test.oss-cn-hangzhou.aliyuncs.com/runlog/production/open/1123460/driver.log?Expires=1676352821&OSSAccessKeyId=LTAI5tAZmejvKKAS9KY2ggPU&Signature=2ccrCfJmfv%2BrufL4c4u2XQvFttQ%3D"]
2023-02-13 13:33:41[GMT+08:00] INFO - finish get spark log task . taskId:1123460
2023-02-13 13:33:41[GMT+08:00] ERROR - Spark logs list:{"executorLogsPath":"["https://datafactory-test.oss-cn-hangzhou.aliyuncs.com/runlog/production/open/1123460/driver.log?Expires=1676352821&OSSAccessKeyId=LTAI5tAZmejvKKAS9KY2ggPU&Signature=2ccrCfJmfv%2BrufL4c4u2XQvFttQ%3D"]","driverLogPath":"https://datafactory-test.oss-cn-hangzhou.aliyuncs.com/runlog/production/open/1123460/driver.log?Expires=1676352820&OSSAccessKeyId=LTAI5tAZmejvKKAS9KY2ggPU&Signature=Q2Ydzm7xLb9RTtMC61dGvh3HgRE%3D"}
2023-02-13 13:33:41[GMT+08:00] ERROR - Job run failed! com.alibaba.datafactory.common.exception.TaskFlowRuntimeException: spark task id:1123460,Spark backup task failed. status:FAILED
    at com.alibaba.datafactory.zhulong.plugin.lindorm.service.impl.ExecuteJobServiceImpl.waitForSparkTaskComplete(ExecuteJobServiceImpl.java:509)
    at com.alibaba.datafactory.zhulong.plugin.lindorm.service.impl.ExecuteJobServiceImpl.startSparkJob(ExecuteJobServiceImpl.java:326)
    at com.alibaba.datafactory.zhulong.plugin.lindorm.service.impl.ExecuteJobServiceImpl.executeJob(ExecuteJobServiceImpl.java:231)
    at com.alibaba.datafactory.zhulong.plugin.lindorm.LindormBackupJob.run(LindormBackupJob.java:56)
    at zhulong.execapp.JobRunner.runJob(JobRunner.java:909)
    at zhulong.execapp.JobRunner.doRun(JobRunner.java:624)
    at zhulong.execapp.JobRunner.run(JobRunner.java:573)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:627)
    at java.lang.Thread.run(Thread.java:882)
2023-02-13 13:33:41[GMT+08:00] INFO - Finishing job j_145267 at 1676266421242 with status FAILED

Asked by solitude. on 2023-02-18 21:46:55
2 Answers
  • 妥善处理

    2023-02-22 20:47:41
  • 十分耕耘,一定会有一分收获!

Hi, OP. Judging from your error message, this looks like a task-failure monitoring error. You can add a listener to capture the specific error information and then fix the underlying problem accordingly.

    2023-02-19 08:17:07
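As a practical first step before wiring up any listener: the archive task log already prints the Spark task's final status and a signed `driver.log` URL, which usually contains the concrete failure cause. Below is a minimal, hypothetical Python sketch for pulling those two pieces out of a log dump like the one in the question. The regexes and the `summarize_spark_failure` helper are assumptions based solely on the log format shown above, not part of any DMS API, and the sample URL is a placeholder:

```python
import re

def summarize_spark_failure(log_text: str) -> dict:
    """Hypothetical helper: extract the last reported Spark task status
    and the first driver.log URL from an archive-task log dump."""
    # Matches lines like: "spark task id 1123460 status: FAILED"
    status_matches = re.findall(r"spark task id (\d+) status: (\w+)", log_text)
    # Matches the signed driver-log URL printed in the "Spark logs list" line
    driver_urls = re.findall(r'"driverLogPath":"(https://[^"]+)"', log_text)

    task_id, final_status = status_matches[-1] if status_matches else (None, None)
    return {
        "task_id": task_id,
        "final_status": final_status,
        "driver_log_url": driver_urls[0] if driver_urls else None,
    }

# Abbreviated sample modeled on the log in the question (URL is a placeholder)
sample = (
    "13:32:30 INFO - spark task id 1123460 status: RUNNING\n"
    "13:33:40 INFO - spark task id 1123460 status: FAILED\n"
    'ERROR - Spark logs list:{"driverLogPath":'
    '"https://example.oss-cn-hangzhou.aliyuncs.com/runlog/1123460/driver.log?Expires=1"}\n'
)
print(summarize_spark_failure(sample))
```

Once you have the `driver.log` URL, download it while the signed link is still valid and look at the Spark driver's stack trace; the `TaskFlowRuntimeException` in the task log only says the task failed, not why.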

