MaxCompute Error Troubleshooting Collection: How to Resolve the Error Reported When Binding the MaxCompute Engine in DataWorks

Summary: MaxCompute is Alibaba Cloud's large-scale offline data processing service, used for big data analytics, mining, and report generation. When processing data with MaxCompute, you may run into various errors. Below is a collection of common MaxCompute errors with their likely causes and remedies.

Question 1: What might be causing this error when binding the MaxCompute engine in DataWorks?

When binding the MaxCompute engine, DataWorks reports an error. What could the problem be?



Reference answer:

Is this name something you entered manually when creating the MaxCompute data source? With the older binding logic, a data source named odps_first was generated by default.



For more answers to this question, see:

https://developer.aliyun.com/ask/574807



Question 2: An error occurs while pulling dependencies when using Alibaba Flink in DataWorks to write to MaxCompute?

When writing to MaxCompute from Alibaba Flink via DataWorks, pulling the dependencies fails.

What is causing this?



Reference answer:

The error means the dependency 'com.aliyun.odps:flink-connector-odps:iar:113.0' cannot be resolved. A Maven coordinate has the form groupId:artifactId:packaging:version, so the 'iar' segment is almost certainly a garbled 'jar' packaging type, and '113.0' a garbled version number. Check the following:

  1. Make sure the dependency is declared correctly in your project's pom.xml. The <version> element must contain only a version string (never something like 'iar:113.0'):
<dependency>
    <groupId>com.aliyun.odps</groupId>
    <artifactId>flink-connector-odps</artifactId>
    <version><!-- a version that actually exists in the repository --></version>
</dependency>
  2. If you are using Maven, make sure the dependency can be resolved into your local repository; running mvn clean install will attempt to download it.
  3. If the problem persists, check which versions of flink-connector-odps are actually published and pin one of those. Note that Maven does not accept 'latest' as a literal version string.
  4. If none of the above helps, check your network connection and firewall settings to make sure you can reach Alibaba Cloud's Maven repository.



For more answers to this question, see:

https://developer.aliyun.com/ask/574773



Question 3: Open-source Spark 3.1.3 Structured Streaming write to MaxCompute fails

When writing data to MaxCompute with the open-source Spark connector from https://github.com/aliyun/aliyun-maxcompute-data-collectors/tree/master/spark-datasource-v3.1, errors occur in a fixed time window: writes succeed during the day, but in the early morning hours they fail with some probability. The error is:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 32.0 failed 4 times, most recent failure: Lost task 1.3 in stage 32.0 (TID 130) (10.233.122.167 executor 1): java.net.SocketException: Unexpected end of file from server
    at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(Unknown Source)
    at java.base/sun.net.www.http.HttpClient.parseHTTP(Unknown Source)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(Unknown Source)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at java.base/java.net.HttpURLConnection.getResponseCode(Unknown Source)
    at com.aliyun.odps.commons.transport.DefaultConnection.getResponse(DefaultConnection.java:132)
    at com.aliyun.odps.tunnel.io.TunnelRecordWriter.write(TunnelRecordWriter.java:75)
    at com.aliyun.odps.cupid.table.v1.tunnel.impl.TunnelWriter.write(TunnelWriter.java:62)
    at com.aliyun.odps.cupid.table.v1.tunnel.impl.TunnelWriter.write(TunnelWriter.java:19)
    at org.apache.spark.sql.odps.writer.DynamicPartitionWriter.write(DynamicPartitionWriter.scala:47)
    at org.apache.spark.sql.odps.writer.DynamicPartitionWriter.write(DynamicPartitionWriter.scala:30)
    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$1(WriteToDataSourceV2Exec.scala:416)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)
    at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:452)
    at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:360)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
    Suppressed: java.io.IOException: Stream is closed
        at java.base/sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(Unknown Source)
        at java.base/sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(Unknown Source)
        at java.base/java.util.zip.DeflaterOutputStream.deflate(Unknown Source)
        at java.base/java.util.zip.DeflaterOutputStream.write(Unknown Source)
        at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
        at com.google.protobuf.CodedOutputStream.refreshBuffer(CodedOutputStream.java:833)
        at com.google.protobuf.CodedOutputStream.writeRawByte(CodedOutputStream.java:892)
        at com.google.protobuf.CodedOutputStream.writeRawByte(CodedOutputStream.java:900)
        at com.google.protobuf.CodedOutputStream.writeRawVarint32(CodedOutputStream.java:1012)
        at com.google.protobuf.CodedOutputStream.writeTag(CodedOutputStream.java:994)
        at com.google.protobuf.CodedOutputStream.writeSInt64(CodedOutputStream.java:273)
        at com.aliyun.odps.commons.proto.ProtobufRecordStreamWriter.close(ProtobufRecordStreamWriter.java:371)
        at com.aliyun.odps.tunnel.io.TunnelRecordWriter.close(TunnelRecordWriter.java:85)
        at com.aliyun.odps.cupid.table.v1.tunnel.impl.TunnelWriter.close(TunnelWriter.java:71)
        at org.apache.spark.sql.odps.writer.DynamicPartitionWriter.abort(DynamicPartitionWriter.scala:62)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$6(WriteToDataSourceV2Exec.scala:448)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1484)
        ... 10 more
    Suppressed: java.lang.NullPointerException: Deflater has been closed
        at java.base/java.util.zip.Deflater.ensureOpen(Unknown Source)
        at java.base/java.util.zip.Deflater.deflate(Unknown Source)
        at java.base/java.util.zip.Deflater.deflate(Unknown Source)
        at java.base/java.util.zip.DeflaterOutputStream.deflate(Unknown Source)
        at java.base/java.util.zip.DeflaterOutputStream.write(Unknown Source)
        at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
        at com.google.protobuf.CodedOutputStream.refreshBuffer(CodedOutputStream.java:833)
        at com.google.protobuf.CodedOutputStream.writeRawByte(CodedOutputStream.java:892)
        at com.google.protobuf.CodedOutputStream.writeRawByte(CodedOutputStream.java:900)
        at com.google.protobuf.CodedOutputStream.writeRawVarint32(CodedOutputStream.java:1012)
        at com.google.protobuf.CodedOutputStream.writeTag(CodedOutputStream.java:994)
        at com.google.protobuf.CodedOutputStream.writeSInt64(CodedOutputStream.java:273)
        at com.aliyun.odps.commons.proto.ProtobufRecordStreamWriter.close(ProtobufRecordStreamWriter.java:371)
        at com.aliyun.odps.tunnel.io.TunnelRecordWriter.close(TunnelRecordWriter.java:85)
        at com.aliyun.odps.cupid.table.v1.tunnel.impl.TunnelWriter.close(TunnelWriter.java:71)
        at org.apache.spark.sql.odps.writer.DynamicPartitionWriter.close(DynamicPartitionWriter.scala:68)
        at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.$anonfun$run$9(WriteToDataSourceV2Exec.scala:452)
        at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1495)
        ... 10 more
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2303)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2252)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2251)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2251)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1124)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1124)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1124)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2490)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2432)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2421)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:902)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2196)
    at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:357)
    ... 49 more

Caused by: java.net.SocketException: Unexpected end of file from server
    ... (same stack trace and suppressed exceptions as in the executor failure above)

23/11/06 05:49:05 INFO ShutdownHookManager: Shutdown hook called
23/11/06 05:49:05 INFO ShutdownHookManager: Deleting directory /var/data/spark-d92fd15e-9117-485c-a426-29bb36269af6/spark-b2b68550-ac67-4daa-9ace-1796efe27dc2
23/11/06 05:49:05 INFO ShutdownHookManager: Deleting directory /tmp/spark-16859bbb-6a2a-43c1-aa11-32ca5ee840a2

Our current suspicion is that the many DataWorks scheduled jobs running in the early morning cause network congestion. Has anyone seen this problem, and is there a way to fix it?



Reference answer:

This is likely caused by an unstable network or a server-side issue. You can try the following:

  1. Increase the retry count: configure a larger number of retries for the write, so that transient failures are retried automatically up to the limit.
  2. Increase the timeout: use a longer connection/read timeout so the request has enough time to complete when the network is unstable.
  3. Check the server status: make sure the service is healthy and not in a failure or maintenance window.
  4. Try a different write path: if the problem persists, consider another way of writing to MaxCompute, such as the official ODPS connector, and see whether it behaves the same.
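The first suggestion above, retrying the write on transient socket errors, can be sketched as follows. This is a minimal illustration, not the connector's API: `write_fn` and the backoff parameters are hypothetical stand-ins for one batch write through the tunnel.

```python
import time

def write_with_retry(write_fn, max_retries=4, base_delay=1.0):
    """Call write_fn(), retrying with exponential backoff on socket errors.

    write_fn: hypothetical zero-argument callable wrapping one batch write
    (e.g. a call into the tunnel writer). OSError stands in for transient
    network failures like 'Unexpected end of file from server'.
    """
    last_err = None
    for attempt in range(max_retries):
        try:
            return write_fn()
        except OSError as err:
            last_err = err
            if attempt < max_retries - 1:
                # back off base_delay, 2*base_delay, 4*base_delay, ...
                time.sleep(base_delay * (2 ** attempt))
    raise last_err
```

In the real job the equivalent knobs would be the connector's/tunnel's retry and timeout settings rather than a hand-rolled wrapper; the sketch only shows the retry-with-backoff shape.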



For more answers to this question, see:

https://developer.aliyun.com/ask/574238



Question 4: Manual fixes in DataWorks are possible, but error-prone?

The goal is to sync data from MaxCompute back into ADB 3.0 for MySQL. Currently, one-click auto table creation maps MaxCompute decimal(38,18) to a bare MySQL decimal, which loses precision.

It can be corrected by hand, but that is error-prone?



Reference answer:

Modifying the default type mapping is not currently supported. For now you can either adjust the generated DDL manually or create the target table yourself in advance.
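To see why the bare decimal mapping loses data: in MySQL an unparameterized DECIMAL means DECIMAL(10,0), i.e. scale 0. A small Python sketch with a made-up value that fits MaxCompute decimal(38,18):

```python
from decimal import Decimal, ROUND_HALF_EVEN

# A made-up value within MaxCompute decimal(38,18): 20 integer digits,
# 18 fractional digits.
v = Decimal("12345678901234567890.123456789012345678")

# If the target column ends up with scale 0 (MySQL's bare DECIMAL default),
# all 18 fractional digits are rounded away; with only 10 digits of
# precision the integer part would not even fit.
as_scale0 = v.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)
```

Pre-creating the target table with an explicit DECIMAL(38,18) column, as the answer suggests, avoids this silent loss.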



For more answers to this question, see:

https://developer.aliyun.com/ask/573625



Question 5: When syncing from MaxCompute to MySQL in DataWorks, the MySQL table's id is auto-increment; is the error related to the configuration?

Syncing from MaxCompute to MySQL in DataWorks, where the MySQL table's id column is auto-increment: how should the field mapping be configured? I mapped the same number of fields on both sides and left id out, and got the error below. Is it configuration-related? Both tables exist. com.aliyun.odps.tunnel.TunnelException: RequestId=20231121182159c4e4ef0a054202be, ErrorCode=InvalidProjectTable, ErrorMessage=The specified project or table name is not valid or missing.



Reference answer:

The error indicates that the project name or table name is misconfigured. Configure it following the MaxCompute Reader documentation: https://help.aliyun.com/zh/dataworks/user-guide/maxcompute-data-source?spm=a2c4g.11186623.0.i0#task-2308965

Also run desc project_name.table_name to confirm that the table actually still exists.
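As a quick sanity check before re-running the sync, the `project.table` string can be validated structurally. This is only a rough sketch of MaxCompute's documented naming rules (identifiers start with a letter and contain letters, digits, and underscores); length limits and other details are omitted, so consult the official docs for the exact rules:

```python
import re

# Rough approximation of a MaxCompute identifier: starts with a letter,
# followed by letters, digits, or underscores. Length limits not enforced.
_NAME = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def looks_like_valid_qualified_name(qualified):
    """Return True if the string has the shape 'project.table' with a
    structurally plausible identifier on each side of the dot."""
    parts = qualified.split(".")
    return len(parts) == 2 and all(_NAME.match(p) for p in parts)
```

A check like this catches the common slips behind InvalidProjectTable, such as a missing project prefix, a stray dot, or leading digits, before the request ever reaches the tunnel service.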



For more answers to this question, see:

https://developer.aliyun.com/ask/573461
