
Why does syncing data from ClickHouse (CK) to MaxCompute often fail and only occasionally succeed?

真的很搞笑 2024-08-14 18:06:50
1 answer
  • Code:[DBUtilErrorCode-06], Description:[Failed to execute the database SQL. Please check the column/table/where/querySql you configured, or ask your DBA for help.]. - SQL executed: select id,first_id,second_id,deviceAdId,IDFV,app_competitor,bu_device_id from hp_sa_users_au where (-614871941269094976 <= id AND id < 614903867185428479) Detailed error: org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk - java.sql.SQLException: org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
    at ru.yandex.clickhouse.response.ClickHouseResultSet.hasNext(ClickHouseResultSet.java:146)
    at ru.yandex.clickhouse.response.ClickHouseResultSet.next(ClickHouseResultSet.java:182)
    at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.doReadOneSplitSlice(CommonRdbmsReader.java:638)
    at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.doStartRead(CommonRdbmsReader.java:493)
    at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:684)
    at com.alibaba.datax.plugin.reader.clickhousereader.ClickhouseReader$Task.startRead(ClickhouseReader.java:356)
    at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:98)
    at java.lang.Thread.run(Thread.java:853)
    Caused by: org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
    at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:255)
    at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186)
    at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at com.google.common.io.ByteStreams.read(ByteStreams.java:733)
    at com.google.common.io.ByteStreams.readFully(ByteStreams.java:641)
    at com.google.common.io.ByteStreams.readFully(ByteStreams.java:622)
    at com.google.common.io.LittleEndianDataInputStream.readFully(LittleEndianDataInputStream.java:66)
    at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.readNextBlock(ClickHouseLZ4Stream.java:101)
    at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.checkNext(ClickHouseLZ4Stream.java:74)
    at ru.yandex.clickhouse.response.ClickHouseLZ4Stream.read(ClickHouseLZ4Stream.java:60)
    at ru.yandex.clickhouse.response.StreamSplitter.readFromStream(StreamSplitter.java:92)
    at ru.yandex.clickhouse.response.StreamSplitter.next(StreamSplitter.java:64)
    at ru.yandex.clickhouse.response.ClickHouseResultSet.hasNext(ClickHouseResultSet.java:133)
    ... 7 more
    Could someone on the ClickHouse side explain what this error means? My guess is that it is a network issue (the error is the same org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk shown in the stack trace above).
    This answer was compiled from the DingTalk group “MaxCompute开发者社区2群”.

    2024-08-14 19:10:41
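
If the root cause is indeed an unstable network path or a timeout between the DataX reader and ClickHouse, a common mitigation is to raise the driver-side timeouts so that a long-running split query is not cut off before the chunked HTTP response has been fully read. The following is a minimal sketch only, assuming the legacy ru.yandex.clickhouse JDBC driver seen in the stack trace accepts socket_timeout and connection_timeout as URL parameters; the host, credentials, and id range are hypothetical placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ClickHouseReadSketch {
        public static void main(String[] args) throws Exception {
            // socket_timeout is in milliseconds; with a small value a long scan over a
            // wide id range can be dropped before the chunked response is fully read.
            String url = "jdbc:clickhouse://ck-host:8123/default"
                    + "?socket_timeout=600000"
                    + "&connection_timeout=60000";

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "select id, first_id, second_id from hp_sa_users_au "
                                 + "where (0 <= id AND id < 100000)")) {
                while (rs.next()) {
                    // A MalformedChunkCodingException would surface here if the server or an
                    // intermediate proxy closed the chunked HTTP stream prematurely.
                    long id = rs.getLong("id"); // consume the row; real code would hand it to the writer
                }
            }
        }
    }

In a DataX job the same parameters can typically be appended to the reader's jdbcUrl; if the interruption is transient, rerunning the failed synchronization task is another workaround.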
