Flink SQL Issues: How to Resolve Window Function Errors

Summary: Flink SQL errors are problems encountered when running data processing jobs through Apache Flink's SQL interface. This collection gathers common Flink SQL errors and their solutions to help users restore their data processing pipelines quickly.

Question 1: Following the quick-start guide, I created a table with Flink SQL and queried it. The table schema comes back, but there is a timeout error. What should I do?


(Screenshot of the query result and the timeout error; see the original post.)


Reference answer:

(The answer was provided as a screenshot; see the original post via the link below.)


For more answers to this question, see the original post: https://developer.aliyun.com/ask/440359?spm=a2c6h.14164896.0.0.671063bfD4aSq3


Question 2: Flink SQL TUMBLE window function throws an error


When using Flink SQL's TUMBLE window function and converting the result table to a stream, has anyone hit the following exception?

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(Lorg/apache/flink/api/common/functions/AggregateFunction;Lorg/apache/flink/streaming/api/functions/windowing/WindowFunction;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;)Lorg/apache/flink/streaming/api/datastream/SingleOutputStreamOperator;
    at org.apache.flink.table.plan.nodes.datastream.DataStreamGroupWindowAggregate.translateToPlan(DataStreamGroupWindowAggregate.scala:214)

*From the volunteer-compiled Flink mailing list archive


Reference answer:

Which Flink version are you on? Judging from the error, you are using the legacy planner; try switching to the blink planner.
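For reference, a minimal sketch of selecting the blink planner explicitly. This is the standard Flink 1.11/1.12 API; in Flink 1.14+ the blink planner is the only planner and no selection is needed:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

    public class BlinkPlannerSetup {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Request the blink planner instead of the legacy planner.
            EnvironmentSettings settings = EnvironmentSettings.newInstance()
                    .useBlinkPlanner()
                    .inStreamingMode()
                    .build();
            StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
            // TUMBLE aggregations and table-to-stream conversions now go through the
            // blink runtime rather than the legacy WindowedStream.aggregate path.
        }
    }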


For more answers to this question, see the original post: https://developer.aliyun.com/ask/370092?spm=a2c6h.14164896.0.0.671063bfD4aSq3


Question 3: Flink SQL lookup query against a Phoenix dimension table fails


Flink SQL consumes from Kafka and queries a dimension table through a custom Phoenix connector.

After the job has been running for a while (sometimes about a week), it dies. The log shows:

2020-11-24 00:52:38,534 ERROR com.custom.jdbc.table.JdbcRowDataLookupFunction [] - JDBC executeBatch error, retry times = 2
java.sql.SQLException: null
    at org.apache.calcite.avatica.Helper.createException(Helper.java:56) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.Helper.createException(Helper.java:41) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at com.custom.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:145) [sql-client-1.0-SNAPSHOT.jar:?]
    at LookupFunction$2.flatMap(Unknown Source) [flink-table-blink_2.11-1.11.1.jar:?]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82) [flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36) [flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:201) [flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: org.apache.calcite.avatica.NoSuchStatementException
    at org.apache.calcite.avatica.remote.RemoteMeta$15.call(RemoteMeta.java:349) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.remote.RemoteMeta$15.call(RemoteMeta.java:343) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.invokeWithRetries(AvaticaConnection.java:793) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.remote.RemoteMeta.execute(RemoteMeta.java:342) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:548) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    ... 20 more

2020-11-24 00:52:40,539 ERROR org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction [] - JDBC executeBatch error, retry times = 3
java.sql.SQLException: null
    (same stack trace as above, with the eval frame at com.custom.phoenix.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:145))
    ... 20 more

2020-11-24 00:52:40,635 WARN org.apache.flink.runtime.taskmanager.Task [] - Task switched from RUNNING to FAILED.
java.lang.RuntimeException: Execution of JDBC statement failed.

Could anyone help me figure out where the problem is? *From the volunteer-compiled Flink mailing list archive


Reference answer:

Judging from your stack trace, your custom com.custom.jdbc.table.JdbcRowDataLookupFunction references the wrong PreparedStatement package. For a concrete implementation, see: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunction.java My understanding is that if Phoenix speaks the standard JDBC protocol, you could also use the provided JdbcRowDataLookupFunction directly.
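As a sketch of that second suggestion: if Phoenix's thin JDBC driver behaves like a standard JDBC source, a lookup join could be declared roughly as below. The table names, URL, and connector options are illustrative assumptions, not taken from the thread, and note that Flink 1.11's built-in 'jdbc' connector only ships MySQL, PostgreSQL, and Derby dialects, so a Phoenix dialect (or the custom connector) may still be needed:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class PhoenixLookupSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

            // Hypothetical fact stream; datagen stands in for the real Kafka source.
            tEnv.executeSql(
                "CREATE TABLE fact_events (" +
                "  id STRING," +
                "  proc_time AS PROCTIME()" +
                ") WITH ('connector' = 'datagen')");

            // Hypothetical dimension table over Phoenix's JDBC query server endpoint.
            tEnv.executeSql(
                "CREATE TABLE dim_user (" +
                "  id STRING," +
                "  name STRING" +
                ") WITH (" +
                "  'connector' = 'jdbc'," +
                "  'url' = 'jdbc:phoenix:thin:url=http://phoenix-queryserver:8765'," +
                "  'table-name' = 'DIM_USER'" +
                ")");

            // Lookup join: each incoming row queries the dimension table at processing time.
            tEnv.executeSql(
                "SELECT f.id, d.name " +
                "FROM fact_events AS f " +
                "JOIN dim_user FOR SYSTEM_TIME AS OF f.proc_time AS d " +
                "ON f.id = d.id").print();
        }
    }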


For more answers to this question, see the original post: https://developer.aliyun.com/ask/370097?spm=a2c6h.14164896.0.0.671063bfD4aSq3


Question 4: Flink SQL 1.12.1 reports an error when accessing a field of a ROW; what to do?


Hi all, I defined a ScalarFunction:

    class Test extends ScalarFunction {
      @DataTypeHint("ROW<a STRING, b STRING, c STRING>")
      def eval(): Row = {
        Row.of("a", "b", "c")
      }
    }

When executing the statement select Test().a from taba1, it fails with the error below:

java.io.IOException: Fail to run stream sql job
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:172)
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:105)
    at org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callInnerSelect(FlinkStreamSqlInterpreter.java:89)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:494)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:257)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.internalInterpret(FlinkSqlInterrpeter.java:111)
    at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:852)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:744)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
    at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
    at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:46)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: scala.MatchError: Test() (of class org.apache.calcite.rex.RexCall)
    at org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.internalVisit$1(NestedProjectionUtil.scala:273)
    at org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.visitFieldAccess(NestedProjectionUtil.scala:283)
    at org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.visitFieldAccess(NestedProjectionUtil.scala:269)
    at org.apache.calcite.rex.RexFieldAccess.accept(RexFieldAccess.java:92)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:112)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:111)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$.build(NestedProjectionUtil.scala:111)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil.build(NestedProjectionUtil.scala)
    at org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.getUsedFieldsInTopLevelProjectAndWatermarkAssigner(ProjectWatermarkAssignerTransposeRule.java:155)
    at org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.matches(ProjectWatermarkAssignerTransposeRule.java:65)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:284)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:268)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:985)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1245)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:486)
    at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:309)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:286)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:1276)
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:161)
    ... 16 more

It looks like something goes wrong during plan optimization. Is this a bug?

For now, how should this problem be resolved?

Best Regards. *From the volunteer-compiled Flink mailing list archive


Reference answer:

Executing the function on its own works fine: a bare select Test().a succeeds, so the failure only appears when selecting from taba1.
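One workaround that is sometimes suggested for planner issues of this kind (untested against this exact bug; taba1 is the user's table, and the registration call assumes the Scala Test class above is on the classpath) is to materialize the ROW in a subquery and access the field in the outer query:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class RowFieldAccessWorkaround {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());
            // Register the Scala ScalarFunction from the question.
            tEnv.createTemporarySystemFunction("Test", Test.class);

            // Materialize the ROW in an inner query, then project its field outside,
            // so the optimizer sees a field access on a named column rather than on a RexCall.
            tEnv.executeSql(
                "SELECT t.r.a FROM (SELECT Test() AS r FROM taba1) t").print();
        }
    }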


For more answers to this question, see the original post: https://developer.aliyun.com/ask/359483?spm=a2c6h.14164896.0.0.4a6263bfEP5Bvd


Question 5: How to resolve an error when writing to HBase with a Flink SQL DDL statement?


When writing to HBase using a Flink SQL DDL statement, the following error is thrown:

java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
    at org.apache.flink.addons.hbase.HBaseUpsertTableSink.consumeDataStream(HBaseUpsertTableSink.java:87)
    at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:141)
    at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:50)

The project's Maven POM already includes the following dependencies: hbase-server, hbase-common, hadoop-common, and flink-hbase_2.11, and I can see that the HBaseConfiguration class is present in the jar. Why does the job fail when run on the server, while it works fine locally?

*From the volunteer-compiled Flink mailing list archive


Reference answer:

Is HADOOP_CLASSPATH configured on your server? When trying out the HBase connector, it is recommended to run with HADOOP_CLASSPATH set plus the flink-hbase jar; otherwise dependency problems get messy.
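As a quick aside, one way to tell whether this is a runtime-classpath problem on the server is to check for the class at runtime; a minimal sketch, where the class name comes from the error above and the export command follows the standard Flink-Hadoop integration setup:

    public class HBaseClasspathCheck {
        public static void main(String[] args) {
            try {
                // Succeeds only if HBase is actually on the runtime classpath of this JVM.
                Class.forName("org.apache.hadoop.hbase.HBaseConfiguration");
                System.out.println("HBaseConfiguration is visible at runtime.");
            } catch (ClassNotFoundException e) {
                // Typical fix on the server before starting the cluster / submitting the job:
                //   export HADOOP_CLASSPATH=`hadoop classpath`
                System.err.println("HBaseConfiguration is missing from the runtime classpath.");
            }
        }
    }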


For more answers to this question, see the original post: https://developer.aliyun.com/ask/370756?spm=a2c6h.14164896.0.0.4a6263bfEP5Bvd
