
When running a storm-hive topology in local mode, the Hive folder is created on my local machine instead of on the cluster, and the topology errors out

I'm new to the storm-hive connector and ran into the following problem while trying it out:



Code:

The topology connects to a remote cluster, yet the files end up being created on my local machine instead. How should I fix this?
After rebuilding the Maven project and running the topology, I get this error:
java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
    at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:690) ~[hive-exec-2.1.0.jar:2.1.0]
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:622) ~[hive-exec-2.1.0.jar:2.1.0]
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:550) ~[hive-exec-2.1.0.jar:2.1.0]
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) ~[hive-exec-2.1.0.jar:2.1.0]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.createPartitionIfNotExists(HiveEndPoint.java:445) ~[hive-hcatalog-streaming-2.1.0.jar:2.1.0]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:314) ~[hive-hcatalog-streaming-2.1.0.jar:2.1.0]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:278) ~[hive-hcatalog-streaming-2.1.0.jar:2.1.0]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:215) ~[hive-hcatalog-streaming-2.1.0.jar:2.1.0]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:192) ~[hive-hcatalog-streaming-2.1.0.jar:2.1.0]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:122) ~[hive-hcatalog-streaming-2.1.0.jar:2.1.0]
    at org.apache.storm.hive.common.HiveWriter$5.call(HiveWriter.java:229) ~[storm-hive-0.10.0.jar:0.10.0]
    at org.apache.storm.hive.common.HiveWriter$5.call(HiveWriter.java:226) ~[storm-hive-0.10.0.jar:0.10.0]
    at org.apache.storm.hive.common.HiveWriter$9.call(HiveWriter.java:332) ~[storm-hive-0.10.0.jar:0.10.0]
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[?:1.7.0_80]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[?:1.7.0_80]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[?:1.7.0_80]
    at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80]
**** RESENDING FAILED TUPLE
24347 [hive-bolt-0] INFO  o.a.h.h.q.Driver - Compiling command(queryId=Administrator_20170906152102_275e25b7-8d79-4f0b-a449-3128b4e7fbee): use default
FAILED: NullPointerException Non-local session path expected to be non-null

The files should have been created under /user/hive/warehouse/ on the cluster.
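The `/tmp/hive` permission error and the locally created warehouse folder both point to the same root cause: the Hive/HCatalog client running inside the local-mode topology is not picking up the cluster configuration, so `SessionState` falls back to the local filesystem. A common fix is to put the cluster's `hive-site.xml` (and `core-site.xml`/`hdfs-site.xml`) on the topology's classpath. A minimal sketch, with placeholder hostnames that you would replace with your own cluster addresses:

```xml
<!-- hive-site.xml, placed on the topology classpath (e.g. src/main/resources).
     Hostnames and ports below are placeholders, not values from the original post. -->
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```

Separately, if the scratch directory really is unwritable on HDFS, the usual remedy is `hdfs dfs -chmod -R 777 /tmp/hive`, run as a user with rights over that directory.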

爱吃鱼的程序员 2020-06-08 11:15:03
1 answer

    FAILED: Error in acquiring locks: Error communicating with the metastore
    14:20:20.090 [hive-bolt-0] ERROR org.apache.hadoop.hive.ql.Driver - FAILED: Error in acquiring locks: Error communicating with the metastore
    org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore

    Caused by: org.apache.thrift.TApplicationException: Internal error processing lock

    14:20:20.540 [Thread-8-user_bolt] ERROR com.test.bolt.HiveBolt - Failed to create HiveWriter for endpoint: {metaStoreUri='thrift://192.168.1.105:9083', database='default', table='user_test10', partitionVals=[sunnyvale]}
    com.test.common.HiveWriter$ConnectFailure: Failed connecting to EndPoint {metaStoreUri='thrift://192.168.1.105:9083', database='default', table='user_test10', partitionVals=[sunnyvale]}

    Caused by: org.apache.hive.hcatalog.streaming.StreamingException: partition values=[sunnyvale]. Unable to get path for end point: [sunnyvale]
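    For reference, the endpoint settings that fail in the log above are normally wired up through storm-hive's `HiveOptions`. A minimal sketch of that wiring, using the metastore URI, database, and table from the log, with hypothetical column and partition names that must match your actual table DDL (this is configuration scaffolding requiring the storm-hive and hive-hcatalog-streaming dependencies, not a standalone runnable fix):

    ```java
    // Storm 0.10.x, matching the storm-hive-0.10.0.jar in the stack trace.
    import backtype.storm.tuple.Fields;
    import org.apache.storm.hive.bolt.HiveBolt;
    import org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper;
    import org.apache.storm.hive.common.HiveOptions;

    public class HiveBoltWiring {
        public static HiveBolt buildHiveBolt() {
            // Column and partition fields are hypothetical; align them with the
            // columns and partition of the target table (here user_test10).
            DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
                    .withColumnFields(new Fields("id", "name"))
                    .withPartitionFields(new Fields("city"));
            // Metastore URI, database, and table taken from the error log above.
            HiveOptions options = new HiveOptions(
                    "thrift://192.168.1.105:9083", "default", "user_test10", mapper)
                    .withTxnsPerBatch(10)
                    .withBatchSize(100)
                    .withIdleTimeout(10);
            return new HiveBolt(options);
        }
    }
    ```

    The "Error communicating with the metastore" / "Internal error processing lock" pair usually means the metastore is reachable but cannot serve the streaming request, e.g. because transactions are not enabled on it or the target table is not transactional and bucketed as Hive streaming ingest requires.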
     

     

    2020-06-08 11:15:15