
Why does submitting a Flink job to YARN fail?

Flink version: Flink 1.12. Hadoop environment: CDH 6.1.1. Submitting in yarn-per-job mode fails. From the stack trace, the error seems to occur while initializing the HDFS connection. It looks like the correct hdfs-client jar is being used, so why is there still a "Bad return type" (VerifyError) problem?

Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Cannot support file system for 'hdfs' via Hadoop, because Hadoop is not in the classpath, or some classes are missing from the classpath.
    at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:184) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:117) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:309) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:272) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:212) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:173) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) ~[cloud-flinkAppCrashAnalysis-1.0.0-encodetest-RELEASE.jar:?]
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:172) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    ... 2 more
Caused by: java.lang.VerifyError: Bad return type
Exception Details:
  Location:

    org/apache/hadoop/hdfs/DFSClient.getQuotaUsage(Ljava/lang/String;)Lorg/apache/hadoop/fs/QuotaUsage; @157: areturn
  Reason:
    Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[0]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage' (from method signature)
  Current Frame:
    bci: @157
    flags: { }
    locals: { 'org/apache/hadoop/hdfs/DFSClient', 'java/lang/String', 'org/apache/hadoop/ipc/RemoteException', 'java/io/IOException' }
    stack: { 'org/apache/hadoop/fs/ContentSummary' }
  Bytecode:
    0x0000000: 2ab6 00b5 2a13 01f4 2bb6 00b7 4d01 4e2a
    0x0000010: b400 422b b901 f502 003a 042c c600 1d2d
    0x0000020: c600 152c b600 b9a7 0012 3a05 2d19 05b6
    0x0000030: 00bb a700 072c b600 b919 04b0 3a04 1904
    0x0000040: 4e19 04bf 3a06 2cc6 001d 2dc6 0015 2cb6
    0x0000050: 00b9 a700 123a 072d 1907 b600 bba7 0007
    0x0000060: 2cb6 00b9 1906 bf4d 2c07 bd00 d459 0312
    0x0000070: d653 5904 12e0 5359 0512 e153 5906 1301
    0x0000080: f653 b600 d74e 2dc1 01f6 9900 14b2 0023
    0x0000090: 1301 f7b9 002b 0200 2a2b b601 f8b0 2dbf
    0x00000a0:
  Exception Handler Table:
    bci [35, 39] => handler: 42
    bci [15, 27] => handler: 60
    bci [15, 27] => handler: 68
    bci [78, 82] => handler: 85
    bci [60, 70] => handler: 68
    bci [4, 57] => handler: 103
    bci [60, 103] => handler: 103
  Stackmap Table:
    full_frame(@42,{Object[#751],Object[#774],Object[#829],Object[#799],Object[#1221]},{Object[#799]})
    same_frame(@53)
    same_frame(@57)
    full_frame(@60,{Object[#751],Object[#774],Object[#829],Object[#799]},{Object[#799]})
    same_locals_1_stack_item_frame(@68,Object[#799])
    full_frame(@85,{Object[#751],Object[#774],Object[#829],Object[#799],Top,Top,Object[#799]},{Object[#799]})
    same_frame(@96)
    same_frame(@100)
    full_frame(@103,{Object[#751],Object[#774]},{Object[#854]})
    append_frame(@158,Object[#854],Object[#814])

    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:167) ~[hadoop-hdfs-client-3.0.0-cdh6.1.1.jar:?]
    at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:164) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:487) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:389) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.core.fs.Path.getFileSystem(Path.java:292) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:100) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:89) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:117) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:309) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:272) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:212) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:173) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]
    at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_181]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746) ~[cloud-flinkAppCrashAnalysis-1.0.0-encodetest-RELEASE.jar:?]
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:172) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    ... 2 more

*From the volunteer-curated Flink mailing-list archive
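One detail worth noticing in the trace above: `org.apache.hadoop.security.UserGroupInformation` was loaded from the job jar (`cloud-flinkAppCrashAnalysis-1.0.0-encodetest-RELEASE.jar`), not from a Hadoop jar. That suggests Hadoop classes are shaded into the fat jar and are clashing with the cluster's `hadoop-hdfs-client-3.0.0-cdh6.1.1` (the VerifyError on `getQuotaUsage` is typical when an older `DFSClient` meets a newer `QuotaUsage`-returning signature). A quick diagnostic sketch, run where the job jar lives (jar name taken from the trace; `jar` comes with the JDK):

```shell
# Taken as an assumption: the job jar sits in the current directory.
JOB_JAR=cloud-flinkAppCrashAnalysis-1.0.0-encodetest-RELEASE.jar

# List any Hadoop classes bundled inside the fat jar.
# Non-empty output means they can shadow the cluster's Hadoop 3 classes
# and should be excluded or marked "provided" when building the jar.
jar tf "$JOB_JAR" | grep '^org/apache/hadoop/'
```

If this prints Hadoop class entries, rebuilding the job jar with Hadoop dependencies excluded (or scoped as provided) should resolve the VerifyError.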

彗星halation · 2021-12-02 11:39:48
1 answer
  • Did you set HADOOP_CLASSPATH? *From the volunteer-curated Flink mailing-list archive

    2021-12-02 11:54:36
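For context on that answer: since Flink 1.11, the distribution no longer bundles a shaded Hadoop, so the official docs have you export `HADOOP_CLASSPATH` before submitting to YARN. A minimal sketch of the submission, assuming the Hadoop and Flink CLIs are on the PATH and using a placeholder example jar:

```shell
# Make the cluster's Hadoop jars visible to Flink (required for YARN
# deployments since Flink 1.11, which dropped flink-shaded-hadoop).
export HADOOP_CLASSPATH=$(hadoop classpath)

# Submit in per-job mode (Flink 1.12 syntax). The jar path is illustrative.
./bin/flink run -t yarn-per-job ./examples/streaming/WordCount.jar
```

Note that this fixes the "Hadoop is not in the classpath" symptom; if the VerifyError persists, the conflict is likely Hadoop classes shaded inside the job jar itself.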