Flink SQL Issues: How to Resolve Window Function Errors

Summary: A Flink SQL error generally means a problem encountered while running data-processing jobs through Apache Flink's SQL interface. This collection gathers common Flink SQL errors and their solutions, to help users get their data pipelines back up quickly.

Question 1: A quick question. Following the quick-start guide, I created a table with Flink SQL and then queried it. The table schema comes back, but the query fails with a timeout error. What should I do?


[Screenshot: the timeout error message is an image in the original post]


Suggested answer:

[Screenshot: the answer is an image in the original post; see the link below]


For more answers to this question, see the original post: https://developer.aliyun.com/ask/440359?spm=a2c6h.14164896.0.0.671063bfD4aSq3


Question 2: Flink SQL errors when using the TUMBLE window function


When using Flink SQL's TUMBLE window function and converting the result table to a stream, has anyone hit the following exception?

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(Lorg/apache/flink/api/common/functions/AggregateFunction;Lorg/apache/flink/streaming/api/functions/windowing/WindowFunction;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;)Lorg/apache/flink/streaming/api/datastream/SingleOutputStreamOperator;
    at org.apache.flink.table.plan.nodes.datastream.DataStreamGroupWindowAggregate.translateToPlan(DataStreamGroupWindowAggregate.scala:214)

*From the volunteer-compiled Flink mailing list archive


Suggested answer:

Which Flink version are you on? Judging from the error (the failing class lives in org.apache.flink.table.plan.nodes.datastream, a legacy-planner package), you are using the legacy planner; try the Blink planner instead.
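For reference, here is a minimal sketch (Flink 1.11-era API) of selecting the Blink planner when creating the table environment, followed by an illustrative TUMBLE aggregation converted to a stream. The events table, its event-time column ts, and the query are assumptions for illustration, not taken from the question.

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class BlinkPlannerTumbleSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Pick the Blink planner instead of useOldPlanner().
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // Hypothetical TUMBLE aggregation; a table 'events' with an
        // event-time column 'ts' is assumed to be registered elsewhere.
        Table result = tEnv.sqlQuery(
                "SELECT TUMBLE_START(ts, INTERVAL '1' MINUTE) AS w_start, COUNT(*) AS cnt "
                + "FROM events "
                + "GROUP BY TUMBLE(ts, INTERVAL '1' MINUTE)");

        // Converting the result table to a stream now goes through the Blink
        // planner's translation path rather than the legacy
        // DataStreamGroupWindowAggregate that raised the NoSuchMethodError.
        tEnv.toRetractStream(result, Row.class).print();
        env.execute("blink-planner-tumble-sketch");
    }
}

Independently of the planner choice, a NoSuchMethodError usually indicates mismatched jar versions on the classpath, so it is also worth checking that all flink-* dependencies share a single version.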


For more answers to this question, see the original post: https://developer.aliyun.com/ask/370092?spm=a2c6h.14164896.0.0.671063bfD4aSq3


Question 3: Flink SQL errors when querying a dimension table through Phoenix


Flink SQL consumes from Kafka and looks up a dimension table through a custom Phoenix connector.

The job runs for a while (sometimes around a week) and then dies. The log shows:

2020-11-24 00:52:38,534 ERROR com.custom.jdbc.table.JdbcRowDataLookupFunction [] - JDBC executeBatch error, retry times = 2
java.sql.SQLException: null
    at org.apache.calcite.avatica.Helper.createException(Helper.java:56) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.Helper.createException(Helper.java:41) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at com.custom.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:145) [sql-client-1.0-SNAPSHOT.jar:?]
    at LookupFunction$2.flatMap(Unknown Source) [flink-table-blink_2.11-1.11.1.jar:?]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82) [flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36) [flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:201) [flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: org.apache.calcite.avatica.NoSuchStatementException
    at org.apache.calcite.avatica.remote.RemoteMeta$15.call(RemoteMeta.java:349) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.remote.RemoteMeta$15.call(RemoteMeta.java:343) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.invokeWithRetries(AvaticaConnection.java:793) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.remote.RemoteMeta.execute(RemoteMeta.java:342) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:548) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    ... 20 more

2020-11-24 00:52:40,539 ERROR org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction [] - JDBC executeBatch error, retry times = 3
java.sql.SQLException: null
    (same stack trace as above, entering through com.custom.phoenix.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:145))
Caused by: org.apache.calcite.avatica.NoSuchStatementException
    ... 20 more

2020-11-24 00:52:40,635 WARN  org.apache.flink.runtime.taskmanager.Task - switched from RUNNING to FAILED.
java.lang.RuntimeException: Execution of JDBC statement failed.

Could anyone help me figure out where the problem is? *From the volunteer-compiled Flink mailing list archive


Suggested answer:

Judging from your stack trace, the custom com.custom.jdbc.table.JdbcRowDataLookupFunction is referencing the wrong PreparedStatement package. For a concrete implementation, see: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunction.java My understanding is that if Phoenix speaks the standard SQL protocol, the provided JdbcRowDataLookupFunction should work directly as well?
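The Avatica NoSuchStatementException in the "Caused by" section suggests the server side has discarded the prepared statement while the custom function keeps reusing the stale handle. Below is a hedged sketch, in plain JDBC, of the retry pattern Flink's built-in JdbcRowDataLookupFunction follows: on failure it tears down the statement and connection so the next attempt re-prepares from scratch. All class and method names here are illustrative, not the actual Flink internals.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LookupRetrySketch {
    private static final int MAX_RETRY_TIMES = 3;

    private final String jdbcUrl;
    private final String query;
    private Connection connection;
    private PreparedStatement statement;

    public LookupRetrySketch(String jdbcUrl, String query) {
        this.jdbcUrl = jdbcUrl;   // e.g. a Phoenix thin-client URL (assumption)
        this.query = query;       // parameterized dimension-table lookup
    }

    public ResultSet lookup(String key) throws Exception {
        for (int retry = 1; retry <= MAX_RETRY_TIMES; retry++) {
            try {
                if (statement == null) {
                    // (Re-)establish the connection and prepare the statement.
                    connection = DriverManager.getConnection(jdbcUrl);
                    statement = connection.prepareStatement(query);
                }
                statement.setString(1, key);
                return statement.executeQuery();
            } catch (SQLException e) {
                if (retry >= MAX_RETRY_TIMES) {
                    throw new RuntimeException("Execution of JDBC statement failed.", e);
                }
                // Drop the possibly-stale statement and connection so the next
                // attempt re-prepares against a fresh server-side statement,
                // instead of replaying a handle the server no longer knows.
                if (statement != null) {
                    try { statement.close(); } catch (SQLException ignored) { }
                    statement = null;
                }
                if (connection != null) {
                    try { connection.close(); } catch (SQLException ignored) { }
                    connection = null;
                }
                Thread.sleep(1000L * retry);  // simple backoff between retries
            }
        }
        return null; // unreachable: the loop either returns or throws
    }
}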


For more answers to this question, see the original post: https://developer.aliyun.com/ask/370097?spm=a2c6h.14164896.0.0.671063bfD4aSq3


Question 4: Flink SQL 1.12.1 errors when accessing a field of a ROW; how to fix it?


Hi all, I define a ScalarFunction:

class Test extends ScalarFunction {
  @DataTypeHint("ROW<a STRING, b STRING, c STRING>")
  def eval(): Row = {
    Row.of("a", "b", "c")
  }
}

When executing select Test().a from taba1, the error below is thrown:

java.io.IOException: Fail to run stream sql job
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:172)
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:105)
    at org.apache.zeppelin.flink.FlinkStreamSqlInterpreter.callInnerSelect(FlinkStreamSqlInterpreter.java:89)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callSelect(FlinkSqlInterrpeter.java:494)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.callCommand(FlinkSqlInterrpeter.java:257)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.runSqlList(FlinkSqlInterrpeter.java:151)
    at org.apache.zeppelin.flink.FlinkSqlInterrpeter.internalInterpret(FlinkSqlInterrpeter.java:111)
    at org.apache.zeppelin.interpreter.AbstractInterpreter.interpret(AbstractInterpreter.java:47)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:110)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:852)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:744)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
    at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:132)
    at org.apache.zeppelin.scheduler.ParallelScheduler.lambda$runJobInScheduler$0(ParallelScheduler.java:46)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: scala.MatchError: Test() (of class org.apache.calcite.rex.RexCall)
    at org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.internalVisit$1(NestedProjectionUtil.scala:273)
    at org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.visitFieldAccess(NestedProjectionUtil.scala:283)
    at org.apache.flink.table.planner.plan.utils.NestedSchemaExtractor.visitFieldAccess(NestedProjectionUtil.scala:269)
    at org.apache.calcite.rex.RexFieldAccess.accept(RexFieldAccess.java:92)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:112)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:111)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$.build(NestedProjectionUtil.scala:111)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil.build(NestedProjectionUtil.scala)
    at org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.getUsedFieldsInTopLevelProjectAndWatermarkAssigner(ProjectWatermarkAssignerTransposeRule.java:155)
    at org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.matches(ProjectWatermarkAssignerTransposeRule.java:65)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:284)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:268)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:985)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1245)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:486)
    at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:309)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:286)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:1276)
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:161)
    ... 16 more

It looks like something goes wrong during plan optimization. Is this a bug?

How can this be worked around for now?

Best Regards. *From the volunteer-compiled Flink mailing list archive


Suggested answer:

Executing the function on its own works fine: a standalone select Test().a does not trigger the error.
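For context, here is a minimal Java sketch (Flink 1.12-style Table API) that registers an equivalent function and issues the standalone call the answer says works. The subquery rewrite at the end is a hypothetical workaround that routes the field access through a derived table rather than directly through the function call; the thread does not confirm it, and taba1 is the question's table, assumed registered elsewhere.

import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;
import org.apache.flink.types.Row;

public class RowFieldAccessSketch {

    // Java equivalent of the Scala ScalarFunction from the question.
    public static class Test extends ScalarFunction {
        @DataTypeHint("ROW<a STRING, b STRING, c STRING>")
        public Row eval() {
            return Row.of("a", "b", "c");
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.createTemporarySystemFunction("Test", Test.class);

        // Per the answer, the standalone call succeeds:
        tEnv.executeSql("SELECT Test().a").print();

        // Hypothetical workaround (unconfirmed): materialize the ROW in a
        // derived table first, then access the field on the outer level.
        tEnv.executeSql(
                "SELECT t.r.a FROM (SELECT Test() AS r FROM taba1) AS t").print();
    }
}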


For more answers to this question, see the original post: https://developer.aliyun.com/ask/359483?spm=a2c6h.14164896.0.0.4a6263bfEP5Bvd


Question 5: How to resolve an error when writing to HBase with a Flink SQL DDL statement?


When writing to HBase via a Flink SQL DDL statement, the following error is thrown:

java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
    at org.apache.flink.addons.hbase.HBaseUpsertTableSink.consumeDataStream(HBaseUpsertTableSink.java:87)
    at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:141)
    at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:50)

The project's Maven build already includes the following dependencies: hbase-server, hbase-common, hadoop-common, flink-hbase_2.11. And I can see that the HBaseConfiguration class is present in the jar. Why does it fail when run on the server, while running locally works fine?

*From the volunteer-compiled Flink mailing list archive


Suggested answer:

Is HADOOP_CLASSPATH configured on your server? When trying out HBase, the recommended setup is HADOOP_CLASSPATH plus the flink-hbase jar (typically by running export HADOOP_CLASSPATH=`hadoop classpath` in the shell that launches the Flink cluster or client); otherwise the dependency conflicts are hard to untangle. This would also explain the local-vs-server difference: locally the IDE puts the Hadoop/HBase jars on the classpath, while the server-side Flink classpath likely does not include them unless HADOOP_CLASSPATH is set.


For more answers to this question, see the original post: https://developer.aliyun.com/ask/370756?spm=a2c6h.14164896.0.0.4a6263bfEP5Bvd
