What is the cause of this Flink error? INTERNAL: Occur FlinkServerException or FlinkSQLException during submitting preview: org.apache.flink.table.api.TableException: Failed to execute sql
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:894)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:756)
at org.apache.flink.table.sqlserver.execution.OperationExecutorImpl.executeModifyOperations(OperationExecutorImpl.java:489)
at org.apache.flink.table.sqlserver.preview.InsertPreviewJob.doSubmitJob(InsertPreviewJob.java:123)
at org.apache.flink.table.sqlserver.preview.AbstractPreviewJob.submitJob(AbstractPreviewJob.java:40)
at org.apache.flink.table.sqlserver.execution.OperationExecutorImpl.runPreview(OperationExecutorImpl.java:660)
at org.apache.flink.table.sqlserver.execution.DelegateOperationExecutor.lambda$runPreview$16(DelegateOperationExecutor.java:194)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at org.apache.flink.table.sqlserver.context.SqlServerSecurityContext.runSecured(SqlServerSecurityContext.java:72)
at org.apache.flink.table.sqlserver.execution.DelegateOperationExecutor.wrapClassLoader(DelegateOperationExecutor.java:311)
at org.apache.flink.table.sqlserver.execution.DelegateOperationExecutor.lambda$wrapExecutor$35(DelegateOperationExecutor.java:333)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
at java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: Couldn't retrieve standalone cluster
at org.apache.flink.client.deployment.StandaloneClusterDescriptor.lambda$retrieve$0(StandaloneClusterDescriptor.java:53)
at org.apache.flink.client.deployment.executors.AbstractSessionClusterExecutor.execute(AbstractSessionClusterExecutor.java:79)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2133)
at org.apache.flink.table.planner.delegation.DefaultExecutor.executeAsync(DefaultExecutor.java:110)
at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:877)
... 16 more
Caused by: org.apache.flink.util.FlinkException: Could not create the client ha services from the instantiated HighAvailabilityServicesFactory org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomClientHAServices(HighAvailabilityServicesUtils.java:314)
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createClientHAService(HighAvailabilityServicesUtils.java:165)
at org.apache.flink.runtime.highavailability.DefaultClientHighAvailabilityServicesFactory.create(DefaultClientHighAvailabilityServicesFactory.java:35)
at org.apache.flink.client.program.rest.RestClusterClient.&lt;init&gt;(RestClusterClient.java:220)
at org.apache.flink.client.program.rest.RestClusterClient.&lt;init&gt;(RestClusterClient.java:180)
at org.apache.flink.client.program.rest.RestClusterClient.&lt;init&gt;(RestClusterClient.java:174)
at org.apache.flink.client.deployment.StandaloneClusterDescriptor.lambda$retrieve$0(StandaloneClusterDescriptor.java:51)
... 20 more
Caused by: java.lang.IllegalArgumentException: Configuration option 'kubernetes.cluster-id' is not set.
at org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.lambda$new$0(Fabric8FlinkKubeClient.java:101)
at java.util.Optional.orElseThrow(Optional.java:290)
at org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.&lt;init&gt;(Fabric8FlinkKubeClient.java:98)
at org.apache.flink.kubernetes.kubeclient.FlinkKubeClientFactory.fromConfiguration(FlinkKubeClientFactory.java:102)
at org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.createHAServices(KubernetesHaServicesFactory.java:50)
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesFactory.createClientHAServices(HighAvailabilityServicesFactory.java:49)
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createCustomClientHAServices(HighAvailabilityServicesUtils.java:311)
... 26 more
This error indicates that Flink ran into a problem while executing the SQL statement: it could not retrieve the standalone cluster. That is usually caused by a network problem or a communication issue between the Flink client and the server.
Here are some possible fixes:
Check the network connection between the Flink client and the server, and make sure no firewall rule blocks communication between them.
Make sure the Flink server is running and correctly configured.
Try restarting the Flink server and the client.
Check the Flink configuration file and make sure every parameter is set correctly.
If you use YARN or another resource manager, make sure it is also configured correctly and running.
If you run Flink in local mode, make sure the JVM memory settings are large enough.
This error is caused by an exception thrown while Flink was submitting the preview job; the exact cause can be found by analyzing the stack trace. The key lines are:
The detailed cause of the exception is:
Caused by: org.apache.flink.util.FlinkException: Could not create the client ha services from the instantiated HighAvailabilityServicesFactory org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory.
Caused by: java.lang.IllegalArgumentException: Configuration option 'kubernetes.cluster-id' is not set.
Based on the exception, the likely cause is that the kubernetes.cluster-id option is missing from the configuration. Check the relevant entries in the configuration file and make sure they are set correctly.
Also check for other possible configuration errors or dependency problems, for example network or permission issues, and whether the Flink cluster and the Kubernetes-related settings are configured correctly.
If these checks do not reveal the problem, enable more detailed logging to investigate further.
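For reference, a minimal sketch of the configuration entries that Kubernetes-based HA normally relies on (typically in flink-conf.yaml). The storage path and cluster id below are placeholders and must match your actual deployment, and the exact key names can differ between Flink versions:

high-availability: kubernetes
high-availability.storageDir: oss://my-bucket/flink/ha    # placeholder storage path
kubernetes.cluster-id: my-flink-cluster                   # placeholder, must match the running cluster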
This error is thrown because a FlinkServerException or FlinkSQLException occurred while submitting the data preview, which in turn caused the TableException. The message shows that the underlying problem is that the standalone cluster could not be retrieved.
Following the stack trace, the executeInternal method of TableEnvironmentImpl failed while executing the SQL statement because StandaloneClusterDescriptor could not retrieve the cluster, and that in turn failed because FlinkKubeClientFactory could not create a FlinkKubeClient: the configuration option kubernetes.cluster-id is not set. The fix is to check whether kubernetes.cluster-id is set and, if it is not, set it to the correct value. Also make sure the Kubernetes environment itself is configured correctly and running.
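As an illustration only, not the platform's official procedure: if you build the execution environment yourself on the client, the same options can also be supplied programmatically through Flink's Configuration API. The cluster id below is a made-up placeholder and must match the id of the running Kubernetes session cluster; on a managed service these keys are usually set in the deployment configuration instead.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KubernetesHaClientConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Tell the client to use Kubernetes-based HA services.
        conf.setString("high-availability", "kubernetes");
        // Placeholder: must match the cluster-id of the target Kubernetes session cluster.
        conf.setString("kubernetes.cluster-id", "my-flink-cluster");

        // When a job is submitted against a Kubernetes HA session cluster, the
        // client-side HA services need kubernetes.cluster-id to locate the
        // JobManager; passing the configuration here satisfies that requirement.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        // ... define the job and call env.execute() here ...
    }
}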