
A self-built Hadoop cluster reports an error when using Alibaba Cloud OSS-HDFS

Background: our company runs a self-built Hadoop cluster and uses Alibaba Cloud services. We applied for an OSS-HDFS bucket and configured it according to the official documentation (https://help.aliyun.com/document_detail/415019.html), but we still ran into the following problem:

HDFS operations work: we have generated test data and uploaded it to OSS-HDFS. Hive partially works as well: we can create a database with its location pointing at OSS-HDFS, and select * from xxx succeeds. But any operation that involves MapReduce fails, with the following error:

hive.out:

Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
Vertex failed, vertexName=Map 1, vertexId=vertex_1672735353826_0098_1_00, diagnostics=[Vertex vertex_1672735353826_0098_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: lineitem initializer failed, vertex=vertex_1672735353826_0098_1_00 [Map 1],
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.aliyun.jindodata.oss.JindoOssFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2597)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3269)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:98)
    at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:81)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:216)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:325)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:524)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:781)
    at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
    at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
    at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
    at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: Class com.aliyun.jindodata.oss.JindoOssFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2501)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2595)
    ... 27 more ]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1672735353826_0098_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1672735353826_0098_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
Vertex killed, vertexName=Reducer 3, vertexId=vertex_1672735353826_0098_1_02, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1672735353826_0098_1_02 [Reducer 3] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:2 (state=08S01,code=2)

hiveserver2.log:

2023-01-13 15:16:47,507 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#initialize(93) - Initializing JindoOssFileSystem for szzt-sjzt-bigdata-test
2023-01-13 15:16:47,509 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#initDelegationTokenService(118) - Using delegation tokens
2023-01-13 15:16:47,509 DEBUG [pool-8-thread-184] auth.DelegationTokenService#serviceInit(162) - Filesystem oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region is using delegation tokens of kind OssDelegationToken/Auth
2023-01-13 15:16:47,510 DEBUG [pool-8-thread-184] auth.DelegationTokenService#lookupToken(557) - Looking for token for service oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region in credentials
2023-01-13 15:16:47,510 DEBUG [pool-8-thread-184] auth.DelegationTokenService#lookupToken(577) - No token for oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region found
2023-01-13 15:16:47,510 DEBUG [pool-8-thread-184] auth.DelegationTokenService#deployUnbonded(220) - No delegation tokens present: using direct authentication
2023-01-13 15:16:47,510 DEBUG [pool-8-thread-184] auth.DelegationTokenService#serviceStart(177) - OSS Delegation support token (none) with Token binding OssDelegationToken/Auth
2023-01-13 15:16:47,510 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#initDelegationTokenService(131) - No delegation token for this instance
2023-01-13 15:16:47,510 DEBUG [pool-8-thread-184] util.AuthUtils#propagateBucketOptions(615) - Propagating entries under fs.oss.bucket.szzt-sjzt-bigdata-test.
2023-01-13 15:16:47,512 DEBUG [pool-8-thread-184] auth.HadoopLoginUserInfo#(39) - User: gtair, authMethod: KERBEROS
2023-01-13 15:16:47,513 DEBUG [pool-8-thread-184] util.AuthUtils#createCredentialProvider(544) - Credential provider class is com.aliyun.jindodata.oss.auth.SimpleCredentialsProvider
2023-01-13 15:16:47,520 DEBUG [pool-8-thread-184] util.AuthUtils#createCredentialProvider(544) - Credential provider class is com.aliyun.jindodata.oss.auth.EnvironmentVariableCredentialsProvider
2023-01-13 15:16:47,520 DEBUG [pool-8-thread-184] util.AuthUtils#createCredentialProvider(544) - Credential provider class is com.aliyun.jindodata.oss.auth.CommonCredentialsProvider
2023-01-13 15:16:47,520 DEBUG [pool-8-thread-184] impl.OssAuthUtils#createCredentialProviderSet(177) - For URI oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com, using credential providers JindoCredentialProviderList[refcount= 1: [SimpleCredentialsProvider, EnvironmentVariableCredentialsProvider, CommonCredentialsProvider]
2023-01-13 15:16:47,521 DEBUG [pool-8-thread-184] auth.JindoCredentialProviderList#getCredentials(185) - Using credentials from SimpleCredentialsProvider
2023-01-13 15:16:47,529 INFO [pool-8-thread-184] common.JindoHadoopSystem#initializeCore(141) - Initialized native file system:
2023-01-13 15:16:47,597 INFO [pool-8-thread-184] common.FsStats#logStats(18) - cmd=getFileStatus, src=oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region, dst=null, size=0, parameter=null, time-in-ms=68, version=4.5.0
2023-01-13 15:16:47,598 INFO [pool-8-thread-184] common.FsStats#logStats(18) - cmd=checkPermission, src=oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region, dst=null, size=0, parameter=null, time-in-ms=0, version=4.5.0
2023-01-13 15:16:47,598 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#close(294) - JindoOssFilesystem oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region is closed
2023-01-13 15:16:47,598 DEBUG [pool-8-thread-184] common.JindoHadoopSystem#close(973) - FileSystem [ oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com ] closed
2023-01-13 15:16:47,599 DEBUG [pool-8-thread-184] auth.DelegationTokenService#serviceStop(198) - Stopping delegation tokens
2023-01-13 15:16:47,683 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#initialize(93) - Initializing JindoOssFileSystem for szzt-sjzt-bigdata-test
2023-01-13 15:16:47,683 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#initDelegationTokenService(118) - Using delegation tokens
2023-01-13 15:16:47,683 DEBUG [pool-8-thread-184] auth.DelegationTokenService#serviceInit(162) - Filesystem oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region is using delegation tokens of kind OssDelegationToken/Auth
2023-01-13 15:16:47,684 DEBUG [pool-8-thread-184] auth.DelegationTokenService#lookupToken(557) - Looking for token for service oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region in credentials
2023-01-13 15:16:47,684 DEBUG [pool-8-thread-184] auth.DelegationTokenService#lookupToken(577) - No token for oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region found
2023-01-13 15:16:47,684 DEBUG [pool-8-thread-184] auth.DelegationTokenService#deployUnbonded(220) - No delegation tokens present: using direct authentication
2023-01-13 15:16:47,684 DEBUG [pool-8-thread-184] auth.DelegationTokenService#serviceStart(177) - OSS Delegation support token (none) with Token binding OssDelegationToken/Auth
2023-01-13 15:16:47,684 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#initDelegationTokenService(131) - No delegation token for this instance
2023-01-13 15:16:47,684 DEBUG [pool-8-thread-184] util.AuthUtils#propagateBucketOptions(615) - Propagating entries under fs.oss.bucket.szzt-sjzt-bigdata-test.
2023-01-13 15:16:47,685 DEBUG [pool-8-thread-184] auth.HadoopLoginUserInfo#(39) - User: gtair, authMethod: KERBEROS
2023-01-13 15:16:47,685 DEBUG [pool-8-thread-184] util.AuthUtils#createCredentialProvider(544) - Credential provider class is com.aliyun.jindodata.oss.auth.SimpleCredentialsProvider
2023-01-13 15:16:47,690 DEBUG [pool-8-thread-184] util.AuthUtils#createCredentialProvider(544) - Credential provider class is com.aliyun.jindodata.oss.auth.EnvironmentVariableCredentialsProvider
2023-01-13 15:16:47,690 DEBUG [pool-8-thread-184] util.AuthUtils#createCredentialProvider(544) - Credential provider class is com.aliyun.jindodata.oss.auth.CommonCredentialsProvider
2023-01-13 15:16:47,690 DEBUG [pool-8-thread-184] impl.OssAuthUtils#createCredentialProviderSet(177) - For URI oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com, using credential providers JindoCredentialProviderList[refcount= 1: [SimpleCredentialsProvider, EnvironmentVariableCredentialsProvider, CommonCredentialsProvider]
2023-01-13 15:16:47,690 DEBUG [pool-8-thread-184] auth.JindoCredentialProviderList#getCredentials(185) - Using credentials from SimpleCredentialsProvider
2023-01-13 15:16:47,694 INFO [pool-8-thread-184] common.JindoHadoopSystem#initializeCore(141) - Initialized native file system:
2023-01-13 15:16:47,698 INFO [pool-8-thread-184] common.FsStats#logStats(18) - cmd=getFileStatus, src=oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region, dst=null, size=0, parameter=null, time-in-ms=3, version=4.5.0
2023-01-13 15:16:47,698 INFO [pool-8-thread-184] common.FsStats#logStats(18) - cmd=checkPermission, src=oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region, dst=null, size=0, parameter=null, time-in-ms=0, version=4.5.0
2023-01-13 15:16:47,699 DEBUG [pool-8-thread-184] oss.JindoOssFileSystem#close(294) - JindoOssFilesystem oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com/warehouse/ods.db/region is closed
2023-01-13 15:16:47,699 DEBUG [pool-8-thread-184] common.JindoHadoopSystem#close(973) - FileSystem [ oss://szzt-sjzt-bigdata-test.cn-beijing.oss-dls.aliyuncs.com ] closed
2023-01-13 15:16:47,699 DEBUG [pool-8-thread-184] auth.DelegationTokenService#serviceStop(198) - Stopping delegation tokens

Has anyone run into the same problem? How did you solve it in the end?

3p6ebcm5bet24  2023-01-13 16:16:44
2 replies
  • GitHub https://github.com/co63oc/cloud

    Lookup of the class com.aliyun.jindodata.oss.JindoOssFileSystem failed, which means the dependency (the JindoSDK JAR) is not on the classpath of the failing task; see the classpath sketch below.

    2023-01-14 09:12:46
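    The stack trace shows the class lookup failing inside the Tez input initializer (HiveSplitGenerator / RootInputInitializerManager), that is, in the Tez application master, while hiveserver2.log shows HiveServer2 itself loading JindoOssFileSystem without trouble. That points to the JindoSDK JARs being missing from the classpath of the Tez/MapReduce containers rather than from HiveServer2. Below is a minimal sketch of one way to get them there, assuming JindoSDK 4.5.0 (the version printed in the logs), an assumed HDFS upload directory hdfs:///apps/jindosdk/, and an assumed local install directory /opt/jindosdk/lib/ on every node; adjust the paths and JAR names to your installation.

    <!-- tez-site.xml: append the JindoSDK JARs to whatever tez.lib.uris already contains,
         so the Tez AM and its task containers can load JindoOssFileSystem.
         The HDFS paths and JAR names here are assumptions, not the questioner's actual layout. -->
    <property>
      <name>tez.lib.uris</name>
      <value>hdfs:///apps/tez/tez.tar.gz,hdfs:///apps/jindosdk/jindo-core-4.5.0.jar,hdfs:///apps/jindosdk/jindo-sdk-4.5.0.jar</value>
    </property>

    <!-- mapred-site.xml: for plain MapReduce jobs, a local directory that exists on every
         NodeManager works as well; /opt/jindosdk/lib/ is again only an assumed location. -->
    <property>
      <name>mapreduce.application.classpath</name>
      <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,/opt/jindosdk/lib/*</value>
    </property>

    Copying the two JARs into a directory that is already on every node's Hadoop classpath (for example $HADOOP_HOME/share/hadoop/hdfs/lib/) and restarting the YARN and Hive services achieves the same effect.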
  • This error means the Hadoop cluster ran into a problem while using Alibaba Cloud OSS-HDFS, most likely an incorrect Hadoop configuration or a broken OSS-HDFS connection. It is recommended to check the Hadoop configuration, make sure the OSS-HDFS connection works, and look at the logs for more details; a sample core-site.xml sketch follows below.

    2023-01-13 16:33:17
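    If the JARs are in place and the error persists, it is worth re-checking the base OSS-HDFS settings from the linked documentation against core-site.xml on every node. A minimal sketch, using the bucket endpoint visible in the logs above; the property names follow the JindoSDK documentation and the AccessKey values are placeholders, so verify everything against your own setup and the official guide.

    <!-- core-site.xml: basic OSS-HDFS (JindoSDK) wiring; values below are illustrative -->
    <property>
      <name>fs.oss.impl</name>
      <value>com.aliyun.jindodata.oss.JindoOssFileSystem</value>
    </property>
    <property>
      <name>fs.AbstractFileSystem.oss.impl</name>
      <value>com.aliyun.jindodata.oss.OSS</value>
    </property>
    <property>
      <name>fs.oss.endpoint</name>
      <value>cn-beijing.oss-dls.aliyuncs.com</value>
    </property>
    <property>
      <name>fs.oss.accessKeyId</name>
      <value>YOUR_ACCESS_KEY_ID</value>
    </property>
    <property>
      <name>fs.oss.accessKeySecret</name>
      <value>YOUR_ACCESS_KEY_SECRET</value>
    </property>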
