Question 1: Following the quick-start guide, I created a table with Flink SQL and queried it. The table schema comes back, but I get a timeout error. What should I do?
Reference answer:
For more answers to this question, see the original post: https://developer.aliyun.com/ask/440359?spm=a2c6h.14164896.0.0.671063bfD4aSq3
Question 2: Flink SQL reports an error when using the Tumble window function
When using Flink SQL's tumble function and converting the result table to a stream, has anyone hit the following exception?
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(Lorg/apache/flink/api/common/functions/AggregateFunction;Lorg/apache/flink/streaming/api/functions/windowing/WindowFunction;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;)Lorg/apache/flink/streaming/api/datastream/SingleOutputStreamOperator;
    at org.apache.flink.table.plan.nodes.datastream.DataStreamGroupWindowAggregate.translateToPlan(DataStreamGroupWindowAggregate.scala:214)
*From the volunteer-compiled Flink mailing-list archive*
Reference answer:
Which Flink version are you using? From the error, you are on the legacy planner; try the blink planner instead.
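The planner is chosen by which planner dependency is on the classpath. A minimal sketch of the Maven change the answer suggests, assuming Flink 1.11.x and Scala 2.11 (adjust the version and Scala suffix to your setup):

```xml
<!-- Sketch: depend on the blink planner instead of the legacy
     flink-table-planner. Version and Scala suffix are assumptions. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-blink_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
```

When building the TableEnvironment programmatically, EnvironmentSettings.newInstance().useBlinkPlanner().build() selects the blink planner explicitly.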
For more answers to this question, see the original post: https://developer.aliyun.com/ask/370092?spm=a2c6h.14164896.0.0.671063bfD4aSq3
Question 3: Flink SQL queries a dimension table through Phoenix and reports an error
Flink SQL consumes Kafka and uses a custom Phoenix connector to look up a dimension table.
After the job has been running for a while (sometimes around a week), it dies. The log shows:
2020-11-24 00:52:38,534 ERROR com.custom.jdbc.table.JdbcRowDataLookupFunction [] - JDBC executeBatch error, retry times = 2
java.sql.SQLException: null
    at org.apache.calcite.avatica.Helper.createException(Helper.java:56) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.Helper.createException(Helper.java:41) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:557) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaPreparedStatement.executeQuery(AvaticaPreparedStatement.java:137) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at com.custom.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:145) [sql-client-1.0-SNAPSHOT.jar:?]
    at LookupFunction$2.flatMap(Unknown Source) [flink-table-blink_2.11-1.11.1.jar:?]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:82) [flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.table.runtime.operators.join.lookup.LookupJoinRunner.processElement(LookupJoinRunner.java:36) [flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755) [flink-sql-connector-kafka_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63) [flink-dist_2.11-1.11.1.jar:1.11.1]
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:201) [flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: org.apache.calcite.avatica.NoSuchStatementException
    at org.apache.calcite.avatica.remote.RemoteMeta$15.call(RemoteMeta.java:349) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.remote.RemoteMeta$15.call(RemoteMeta.java:343) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.invokeWithRetries(AvaticaConnection.java:793) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.remote.RemoteMeta.execute(RemoteMeta.java:342) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    at org.apache.calcite.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:548) ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
    ... 20 more
2020-11-24 00:52:40,539 ERROR org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction [] - JDBC executeBatch error, retry times = 3
java.sql.SQLException: null
    (same stack trace as above, via com.custom.phoenix.jdbc.table.JdbcRowDataLookupFunction.eval)
    ... 20 more
2020-11-24 00:52:40,635 WARN org.apache.flink.runtime.taskmanager.Task switched from RUNNING to FAILED.
java.lang.RuntimeException: Execution of JDBC statement failed.
Could anyone help me see where the problem is? *From the volunteer-compiled Flink mailing-list archive*
Reference answer:
From your stack trace, the PreparedStatement referenced by your custom "com.custom.jdbc.table.JdbcRowDataLookupFunction" comes from the wrong package. For a reference implementation, see: https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/table/JdbcRowDataLookupFunction.java My understanding is that if Phoenix supports the standard SQL (JDBC) protocol, the built-in JdbcRowDataLookupFunction might work directly.
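Following that suggestion, the dimension table could be declared with the built-in JDBC connector instead of a custom one. A sketch of the DDL, assuming the Phoenix driver accepts standard JDBC (the table, columns, and URL below are placeholders):

```sql
-- Hypothetical dimension table backed by Phoenix via the built-in JDBC connector
CREATE TABLE dim_user (
  id BIGINT,
  name STRING
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:phoenix:...',          -- placeholder Phoenix JDBC URL
  'table-name' = 'DIM_USER',
  'lookup.cache.max-rows' = '10000',   -- optional lookup cache
  'lookup.cache.ttl' = '10min',
  'lookup.max-retries' = '3'
);
```

The lookup join itself would then use the usual FOR SYSTEM_TIME AS OF syntax against this table.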
For more answers to this question, see the original post: https://developer.aliyun.com/ask/370097?spm=a2c6h.14164896.0.0.671063bfD4aSq3
Question 4: Flink SQL 1.12.1 reports an error when accessing a field of a ROW; what should I do?
hi, all. Define a ScalarFunction:
class Test extends ScalarFunction {
  @DataTypeHint("ROW<a STRING, b STRING, c STRING>")
  def eval(): Row = {
    Row.of("a", "b", "c")
  }
}
When executing the statement select Test().a from taba1, it fails with the error below:
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:112)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$$anonfun$build$1.apply(NestedProjectionUtil.scala:111)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil$.build(NestedProjectionUtil.scala:111)
    at org.apache.flink.table.planner.plan.utils.NestedProjectionUtil.build(NestedProjectionUtil.scala)
    at org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.getUsedFieldsInTopLevelProjectAndWatermarkAssigner(ProjectWatermarkAssignerTransposeRule.java:155)
    at org.apache.flink.table.planner.plan.rules.logical.ProjectWatermarkAssignerTransposeRule.matches(ProjectWatermarkAssignerTransposeRule.java:65)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:284)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.matchRecurse(VolcanoRuleCall.java:411)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.match(VolcanoRuleCall.java:268)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.fireRules(VolcanoPlanner.java:985)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1245)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:84)
    at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:268)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1132)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.changeTraits(VolcanoPlanner.java:486)
    at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:309)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkVolcanoProgram.optimize(FlinkVolcanoProgram.scala:64)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
    at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.optimizeTree(StreamCommonSubGraphBasedOptimizer.scala:163)
    at org.apache.flink.table.planner.plan.optimize.StreamCommonSubGraphBasedOptimizer.doOptimize(StreamCommonSubGraphBasedOptimizer.scala:79)
    at org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:77)
    at org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:286)
    at org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:165)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1329)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.translateAndClearBuffer(TableEnvironmentImpl.java:1321)
    at org.apache.flink.table.api.internal.TableEnvironmentImpl.execute(TableEnvironmentImpl.java:1276)
    at org.apache.zeppelin.flink.sql.AbstractStreamSqlJob.run(AbstractStreamSqlJob.java:161)
    ... 16 more
Best Regards. *From the volunteer-compiled Flink mailing-list archive*
For more answers to this question, see the original post: https://developer.aliyun.com/ask/359483?spm=a2c6h.14164896.0.0.4a6263bfEP5Bvd
Question 5: How do I resolve an error when writing to HBase with a Flink SQL DDL statement?
When writing to HBase via a Flink SQL DDL statement, the following error is thrown:
java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
    at org.apache.flink.addons.hbase.HBaseUpsertTableSink.consumeDataStream(HBaseUpsertTableSink.java:87)
    at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:141)
    at org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:50)
The project's Maven build already includes these dependencies: hbase-server, hbase-common, hadoop-common, flink-hbase_2.11. I can also see the HBaseConfiguration class inside the jar, so why does it fail when run on the server but work fine locally?
*From the volunteer-compiled Flink mailing-list archive*
Reference answer:
Is HADOOP_CLASSPATH configured on your server? When trying out HBase, it is recommended to run with HADOOP_CLASSPATH set plus the flink-hbase jar; otherwise the dependency problems get messy.
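One way to set this up (a sketch; it assumes a Hadoop client, and optionally an HBase client, is installed on the server):

```shell
# Expose the cluster's Hadoop jars to Flink at runtime instead of
# bundling hadoop-common etc. into the job's fat jar:
export HADOOP_CLASSPATH=$(hadoop classpath)

# If an HBase client is installed, its jars can be appended the same way:
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$(hbase classpath)"

# Then submit the job with only the flink-hbase connector jar packaged with it.
```

Relying on the machine's Hadoop installation this way keeps the job jar free of Hadoop classes, which is usually why a job works locally (where the IDE classpath has everything) but fails on the server.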
For more answers to this question, see the original post: https://developer.aliyun.com/ask/370756?spm=a2c6h.14164896.0.0.4a6263bfEP5Bvd