
Flink 1.11.1 startup problem

I used the following command to submit a Flink job to YARN, and it failed. If I change the job jar path to a local one, it works fine. I have already placed flink-oss-fs-hadoop-1.12.0.jar in Flink's lib directory and configured the OSS parameters in flink-conf.yaml. Does Flink really not support a job jar that lives on a remote distributed file system?

./bin/flink run-application -t yarn-application \
-Dyarn.provided.lib.dirs="oss://odps-prd/rtdp/flinkLib" \
oss://odps-prd/rtdp/flinkJobs/TopSpeedWindowing.jar


The program finished with the following exception:

org.apache.flink.client.deployment.ClusterDeploymentException: Couldn't deploy Yarn Application Cluster
    at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:443)
    at org.apache.flink.client.deployment.application.cli.ApplicationClusterDeployer.run(ApplicationClusterDeployer.java:64)
    at org.apache.flink.client.cli.CliFrontend.runApplication(CliFrontend.java:207)
    at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:974)
    at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1047)
Caused by: java.io.IOException: No FileSystem for scheme: oss
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.flink.yarn.Utils.lambda$getQualifiedRemoteSharedPaths$1(Utils.java:577)
    at org.apache.flink.configuration.ConfigUtils.decodeListFromConfig(ConfigUtils.java:127)
    at org.apache.flink.yarn.Utils.getRemoteSharedPaths(Utils.java:585)
    at org.apache.flink.yarn.Utils.getQualifiedRemoteSharedPaths(Utils.java:573)
    at org.apache.flink.yarn.YarnClusterDescriptor.startAppMaster(YarnClusterDescriptor.java:708)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployInternal(YarnClusterDescriptor.java:558)
    at org.apache.flink.yarn.YarnClusterDescriptor.deployApplicationCluster(YarnClusterDescriptor.java:436)
    ... 9 more

From the volunteer-compiled Flink mailing-list archive.
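For reference, the "OSS parameters" the question mentions are typically the flink-oss-fs-hadoop settings in flink-conf.yaml. A minimal sketch; the endpoint and credential values below are placeholders, not taken from the thread:

```yaml
# Alibaba Cloud OSS settings for Flink's flink-oss-fs-hadoop support
# (placeholder values - substitute your own endpoint and credentials)
fs.oss.endpoint: oss-cn-hangzhou.aliyuncs.com
fs.oss.accessKeyId: <your-access-key-id>
fs.oss.accessKeySecret: <your-access-key-secret>
```

Note that these settings register the `oss://` scheme only with Flink's own FileSystem abstraction; the stack trace above fails inside Hadoop's `org.apache.hadoop.fs.FileSystem`, which resolves schemes independently.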

JACKJACK 2021-12-08 16:17:12
1 Answer
  • 对的是我!

    Currently the user jar can be remote, but only with a Hadoop-compatible schema, because the remote user jar is never downloaded to the Flink client; it is registered directly as a YARN local resource.

    So the error you are seeing is expected; this is not supported yet.
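    Given that constraint, one workaround is to stage both the provided lib dirs and the job jar on a filesystem Hadoop understands natively, such as HDFS. A sketch with illustrative paths (not from the thread):

    ```shell
    # Stage the Flink distribution jars and the job jar on HDFS first:
    #   hdfs dfs -mkdir -p /flink/lib /flink/jobs
    #   hdfs dfs -put lib/* /flink/lib
    #   hdfs dfs -put TopSpeedWindowing.jar /flink/jobs

    # Then submit, pointing at the HDFS locations:
    ./bin/flink run-application -t yarn-application \
      -Dyarn.provided.lib.dirs="hdfs:///flink/lib" \
      hdfs:///flink/jobs/TopSpeedWindowing.jar
    ```

    Alternatively, keep the job jar on the local client, which the question already confirmed works.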

    2021-12-08 16:54:14