1. Installation (note version compatibility):
Hadoop 2.7.2, Hive 1.3, Spark 1.6
2. Configuration:
- Configure hive-site.xml: copy Hive's hive-site.xml into Spark's conf directory and add the following property:
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://192.168.234.128:9083</value>
</property>
Note: 192.168.234.128 is the host running the Hive metastore.
- Copy core-site.xml and hdfs-site.xml from the Hadoop configuration directory (hadoop2.7.2/etc/hadoop/) into Spark's conf directory.
- Copy mysql-connector-java-5.1.31.jar from Hive's lib directory into Spark's jars directory; a consolidated sketch of these copy steps follows this list.
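All three copy steps can be scripted. A minimal sketch, assuming Hive, Hadoop, and Spark live under /home/hadoop/apps (the paths used by the spark-shell command in step 3; adjust to your layout):

# Assumed install locations, taken from the spark-shell command in step 3.
HIVE_HOME=/home/hadoop/apps/hive
HADOOP_CONF=/home/hadoop/apps/hadoop2.7.2/etc/hadoop
SPARK_HOME=/home/hadoop/apps/spark

# Metastore client config for Spark.
cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/
# HDFS client configs so Spark can reach the warehouse files.
cp $HADOOP_CONF/core-site.xml $HADOOP_CONF/hdfs-site.xml $SPARK_HOME/conf/
# MySQL JDBC driver for the metastore; the destination follows the step above
# (on Spark 1.x, passing it via --driver-class-path as in step 3 also works).
cp $HIVE_HOME/lib/mysql-connector-java-5.1.31.jar $SPARK_HOME/jars/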
3. Start the services:
- On the Hive host, start the metastore service:
nohup hive --service metastore > metastore.log 2>&1 &
- On the Spark host, launch the shell:
/home/hadoop/apps/spark/bin/spark-shell \
--master spark://hadoop01:7077 \
--executor-memory 512m \
--total-executor-cores 2 \
--driver-class-path /home/hadoop/apps/hive/lib/mysql-connector-java-5.1.31.jar
Tip: if you run this from the Spark home directory, bin/spark-shell is enough.
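Before launching the shell, it is worth confirming that the metastore started in the previous step is actually listening. A quick check on the Hive host, assuming the port 9083 configured in hive.metastore.uris above:

# The standalone metastore shows up as a RunJar process.
jps | grep RunJar
# It should be listening on the port from hive.metastore.uris.
netstat -nltp | grep 9083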
4. Run a query:
sqlContext.sql("show databases").show
Note: on Spark 1.6 the shell exposes sqlContext (a HiveContext when Hive support is built in); the spark.sql(...) entry point only exists from Spark 2.0 on.
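The same smoke test can be run non-interactively by piping the statement into spark-shell, which is handy for scripting; a sketch reusing the launch options from step 3:

echo 'sqlContext.sql("show databases").show' | /home/hadoop/apps/spark/bin/spark-shell \
--master spark://hadoop01:7077 \
--driver-class-path /home/hadoop/apps/hive/lib/mysql-connector-java-5.1.31.jar

If the databases created in Hive appear in the output, the integration works; if only default shows up, Spark has most likely fallen back to a local Derby metastore, which usually means hive-site.xml was not picked up from conf.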