This article describes how to use QuickSQL 0.7.0. First, upload the installation package qsql-0.7.0-install to the server.
Configuring the runtime environment
Enter the conf directory and edit the runtime configuration file quicksql-env.sh as shown below, setting the Java and Spark environment variables to match your environment:
#!/bin/bash
# This file is sourced when running quicksql programs.
# Copy it as quicksql-env.sh and edit it to configure quicksql.
# Options read when launching programs:
# export SPARK_HOME=   # [Required] - SPARK_HOME, to set spark home for quicksql running. quicksql needs spark 2.0 or above.
# export JAVA_HOME=    # [Required] - JAVA_HOME, to set java home for quicksql running. quicksql needs java 1.8 or above.
# Set the Java environment variable
export JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
# Set the Spark environment variable
export SPARK_HOME=/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/spark
# export FLINK_HOME=       # [Required] - FLINK_HOME, to set flink home for quicksql running. quicksql needs flink 1.9.0 or above.
# export QSQL_CLUSTER_URL= # [Required] - QSQL_CLUSTER_URL, to set hadoop file system url.
# export QSQL_HDFS_TMP=    # [Required] - QSQL_HDFS_TMP, to set hadoop file system tmp url.
# Options read when using command line "quicksql.sh -e" with the "Jdbc", "Spark" or "Dynamic" runner.
# They all have default values, but we recommend setting them here to better suit your site.
# These are the defaults used when a quicksql program does not set the properties itself.
# export QSQL_DEFAULT_WORKER_NUM=20      # [Optional] - QSQL_DEFAULT_WORKER_NUM, default worker_num for quicksql programs; if not set, defaults to 20.
# export QSQL_DEFAULT_WORKER_MEMORY=1G   # [Optional] - QSQL_DEFAULT_WORKER_MEMORY, default worker_memory for quicksql programs; if not set, defaults to 1G.
# export QSQL_DEFAULT_DRIVER_MEMORY=3G   # [Optional] - QSQL_DEFAULT_DRIVER_MEMORY, default driver_memory for quicksql programs; if not set, defaults to 3G.
# export QSQL_DEFAULT_MASTER=yarn-client # [Optional] - QSQL_DEFAULT_MASTER, default master for quicksql programs; if not set, defaults to yarn-client.
# export QSQL_DEFAULT_RUNNER=DYNAMIC     # [Optional] - QSQL_DEFAULT_RUNNER, default runner for quicksql programs; if not set, defaults to dynamic.
The finished configuration is shown in the figure below:
Importing source database table metadata
QuickSQL currently supports importing metadata from six kinds of data sources: Hive, MySQL, Kylin, Elasticsearch, Oracle, and MongoDB. Imports use the bin/metadata-extract.sh script, with the following syntax:
# <SCHEMA-JSON> contains the data source's JDBC driver class, URL, user, and password
# <DATA-SOURCE> is the data source type (e.g. mysql, hive)
# <TABLE-NAME-REGEX> is the table-name pattern
$ ./bin/metadata-extract -p "<SCHEMA-JSON>" -d "<DATA-SOURCE>" -r "<TABLE-NAME-REGEX>"
The -r parameter accepts LIKE syntax ('%': match everything, '_': match a single character, '?': optional match).
This article uses MySQL and Hive imports as examples.
Import MySQL data: the student table of the test database at 192.168.112.1:3306.
# Import the student table of the test database at 192.168.112.1:3306
./metadata-extract.sh -p "{\"jdbcDriver\": \"com.mysql.jdbc.Driver\", \"jdbcUrl\": \"jdbc:mysql://192.168.112.1:3306/test\", \"jdbcUser\": \"root\",\"jdbcPassword\": \"root\"}" -d "mysql" -r "student"
A successful import is shown in the figure below:
Import Hive data. If Hive uses MySQL as its metastore, use the following command:
# MySQL holds the Hive metadata: metastore is the Hive metadata database, dbName is the business database to import, and student_ext is the business table. Fill in the correct database connection URL, user, and password.
./metadata-extract.sh -p "{\"jdbcDriver\": \"com.mysql.jdbc.Driver\", \"jdbcUrl\": \"jdbc:mysql://192.168.112.180:3306/metastore\", \"jdbcUser\": \"hive\",\"jdbcPassword\": \"123456789\",\"dbName\": \"test2\"}" -d "hive" -r "student_ext"
A successful import is shown in the figure below:
More import JSON templates:
## MySQL
{
  "jdbcDriver": "com.mysql.jdbc.Driver",
  "jdbcUrl": "jdbc:mysql://localhost:3306/db",
  "jdbcUser": "USER",
  "jdbcPassword": "PASSWORD"
}
## Oracle
{
  "jdbcDriver": "oracle.jdbc.driver.OracleDriver",
  "jdbcUrl": "jdbc:oracle:thin:@localhost:1521/namespace",
  "jdbcUser": "USER",
  "jdbcPassword": "PASSWORD"
}
## Elasticsearch
{
  "esNodes": "192.168.1.1",
  "esPort": "9000",
  "esUser": "USER",
  "esPass": "PASSWORD",
  "esIndex": "index/type"
}
## Hive (Hive metadata stored in MySQL)
{
  "jdbcDriver": "com.mysql.jdbc.Driver",
  "jdbcUrl": "jdbc:mysql://localhost:3306/db",
  "jdbcUser": "USER",
  "jdbcPassword": "PASSWORD",
  "dbName": "hive_db"
}
## Hive-Jdbc (Hive metadata accessed via JDBC)
{
  "jdbcDriver": "org.apache.hive.jdbc.HiveDriver",
  "jdbcUrl": "jdbc:hive2://localhost:7070/learn_kylin",
  "jdbcUser": "USER",
  "jdbcPassword": "PASSWORD",
  "dbName": "default"
}
## Kylin
{
  "jdbcDriver": "org.apache.kylin.jdbc.Driver",
  "jdbcUrl": "jdbc:kylin://localhost:7070/learn_kylin",
  "jdbcUser": "ADMIN",
  "jdbcPassword": "KYLIN",
  "dbName": "default"
}
## MongoDB
{
  "host": "192.168.1.1",
  "port": "27017",
  "dataBaseName": "test",
  "authMechanism": "SCRAM-SHA-1",
  "userName": "admin",
  "password": "admin",
  "collectionName": "products"
}
Note: double quotes are special characters in the shell, so they must be escaped when passing the JSON parameter!
Running cross-source queries with the shell script
Use the bin/quicksql.sh script to run queries, with the following syntax:
./quicksql.sh -e "YOUR SQL";
Query the single table student in MySQL:
./quicksql.sh -e "select * from student";
The query result is shown in the figure below:
Query the single table student_ext in Hive:
./quicksql.sh -e "select * from student_ext";
A cross-source left join between the two tables:
./quicksql.sh -e "select * from student as a left join student_ext as b on a.id = b.id";
Query result:
Calling QuickSQL over JDBC from a Java client
Start the quicksql-server service by running bin/quicksql-server.sh:
./quicksql-server.sh start
A successful start is shown in the figure below:
The server communicates on port 5888. You can also use ./quicksql-server.sh start | restart | status | stop for the other operations.
Write the Java program, starting with the Maven dependencies:
<!-- qsql-client is referenced as a local system-scope jar; copy qsql-client-0.7.0.jar from qsql-0.7.0-install/lib -->
<dependency>
  <groupId>com.qihoo.qsql</groupId>
  <artifactId>qsql</artifactId>
  <version>0.7.0</version>
  <scope>system</scope>
  <systemPath>${project.basedir}/qsql-client-0.7.0.jar</systemPath>
</dependency>
<dependency>
  <groupId>org.apache.calcite.avatica</groupId>
  <artifactId>avatica-server</artifactId>
  <version>1.12.0</version>
</dependency>
As shown in the figure below:
Configure the JDBC connection URL according to your environment.
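The client program itself appears only as a screenshot, so here is a minimal sketch of what such a client could look like. It assumes the driver class shipped in qsql-client-0.7.0.jar is com.qihoo.qsql.client.Driver and that the URL scheme is jdbc:quicksql: — verify both against the jar you copied (quicksql-server is built on Apache Calcite Avatica, so the standard Avatica remote driver is another option). The host address and queried columns are examples.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QsqlJdbcExample {
    public static void main(String[] args) throws Exception {
        // Assumed driver class from qsql-client-0.7.0.jar; check the jar if it differs.
        Class.forName("com.qihoo.qsql.client.Driver");
        // quicksql-server listens on port 5888 by default; replace the host with your own.
        String url = "jdbc:quicksql:url=http://192.168.112.180:5888";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             // The same cross-source SQL used with quicksql.sh also works over JDBC.
             ResultSet rs = stmt.executeQuery("select * from student")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getString(2));
            }
        }
    }
}
```

Because the connection goes through the server, no Spark or Hadoop configuration is needed on the client side; only the jar and the server address are required.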
The result of running the program is shown below: