
Creating a new Hive table on OSS fails with: No FileSystem for scheme: oss

My cluster is self-built and managed by Ambari, not E-MapReduce. Can it still connect to OSS?
2017-11-22 19:38:24,939 INFO [pool-8-thread-198]: metastore.HiveMetaStore (HiveMetaStore.java:create_table_core(1447)) - create_table_core default.tests
2017-11-22 19:38:24,939 INFO [pool-8-thread-198]: metastore.HiveMetaStore (HiveMetaStore.java:create_table_core(1478)) - create_table_core preEvent default.tests
2017-11-22 19:38:24,944 INFO [pool-8-thread-198]: metastore.HiveMetaStore (HiveMetaStore.java:create_table_core(1554)) - create_table_core rdbms listeners default.tests
2017-11-22 19:38:24,944 INFO [pool-8-thread-198]: metastore.HiveMetaStore (HiveMetaStore.java:create_table_core(1561)) - create_table_core rdbms listeners done default.tests
2017-11-22 19:38:24,945 ERROR [pool-8-thread-198]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(199)) - MetaException(message:java.io.IOException: No FileSystem for scheme: oss)

at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.metaException(AuthorizationPreEventListener.java:411)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeCreateTable(AuthorizationPreEventListener.java:272)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.onEvent(AuthorizationPreEventListener.java:140)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.firePreEvent(HiveMetaStore.java:2131)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1479)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1579)
at sun.reflect.GeneratedMethodAccessor53.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at com.sun.proxy.$Proxy17.create_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9399)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9383)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: No FileSystem for scheme: oss

at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.hiveException(StorageBasedAuthorizationProvider.java:436)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:355)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:193)
at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeCreateTable(AuthorizationPreEventListener.java:265)
... 23 more

Caused by: java.io.IOException: No FileSystem for scheme: oss

at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2786)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:371)
at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:348)
... 25 more

Has anyone run into this problem before?

泪三国云无痕 2017-11-22 19:48:06
Alibaba Cloud AI Assistant's answer

The problem you are seeing is that when Hive tries to access OSS (Alibaba Cloud Object Storage Service), it does not recognize the oss filesystem scheme. This means Hadoop/Hive has not been configured with OSS filesystem support. To fix it, you need to add OSS support to your Hadoop installation, which is done by installing and configuring Alibaba Cloud's Hadoop OSS connector (plugin).

The steps to resolve it are as follows:

  1. Download the Alibaba Cloud Hadoop plugin: consult the official Alibaba Cloud documentation or go to the Alibaba Cloud website and download the OSS plugin that matches your Hadoop version. This plugin provides the OSS filesystem implementation.

  2. Install the plugin: unpack the downloaded package and copy the jar files to $HADOOP_HOME/share/hadoop/common/lib/ on every Hadoop node. Make sure these jars are visible to all Hadoop processes; note that the stack trace above is thrown by the Hive Metastore, so the Metastore host in particular needs them on its classpath.

  3. Configure Hadoop: add the OSS-related settings to the Hadoop configuration (typically core-site.xml), including the AccessKey ID, AccessKey Secret, and endpoint. For example:

    <property>
       <name>fs.oss.accessKeyId</name>
       <value>your_access_key_id</value>
    </property>
    <property>
       <name>fs.oss.accessKeySecret</name>
       <value>your_access_key_secret</value>
    </property>
    <property>
       <name>fs.oss.endpoint</name>
       <value>your_endpoint</value>
    </property>
    <property>
       <name>fs.defaultFS</name>
       <value>oss://your_bucket_name</value>
    </property>
    

    Here, your_access_key_id, your_access_key_secret, and your_endpoint must be replaced with your actual Alibaba Cloud credentials and region endpoint, and your_bucket_name is the OSS bucket you plan to use. fs.defaultFS only needs to point at oss:// if you want OSS to become the cluster's default filesystem; if HDFS stays the default, leave fs.defaultFS unchanged and reference oss:// paths in the table's LOCATION instead. This particular error also usually means that no implementation class is registered for the oss scheme, so fs.oss.impl has to be set as well; see the sketch after this list.

  4. Restart the Hadoop services: after finishing the configuration above, restart all related services, including HDFS, YARN, and Hive (Metastore and HiveServer2), so that the changes take effect.

  5. Test the connection: from the Hive command line or the Ambari UI, try creating a table on OSS to verify that OSS can be reached and written to; a minimal DDL sketch is given at the end of this answer.
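
A note on the error itself: "No FileSystem for scheme: oss" means Hadoop has no FileSystem class registered for the oss:// scheme, so on a self-built (non-E-MapReduce) cluster you normally also have to declare the implementation class in core-site.xml alongside the credentials. A minimal sketch, assuming the connector jar from step 2 ships the Apache hadoop-aliyun class org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem; the Aliyun E-MapReduce SDK jar uses a different class name, so match the value to whichever jar you actually deployed:

    <property>
       <name>fs.oss.impl</name>
       <!-- class name depends on the OSS connector jar installed in step 2;
            this value is the one shipped by the Apache hadoop-aliyun module -->
       <value>org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem</value>
    </property>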

After completing these steps, Hive should be able to recognize and use OSS as a storage system. If the problem persists, check the network configuration, the permission settings, and the detailed error messages in the log files to narrow it down further.
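
For step 5, a hypothetical smoke test you could run from the Hive CLI once the services are back up; the table name, columns, and OSS path below are placeholders, not values taken from this thread:

    -- hypothetical check: create an external table whose data lives on OSS
    CREATE EXTERNAL TABLE oss_smoke_test (id INT, name STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 'oss://your_bucket_name/hive/oss_smoke_test/';

    -- if this DDL succeeds instead of failing with
    -- "No FileSystem for scheme: oss", the connector is being picked up
    DROP TABLE oss_smoke_test;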
