Introduction to Hive
Hive is a data warehouse tool built on top of Hadoop. It maps structured data files to database tables, provides a simple SQL query capability, and translates SQL statements into MapReduce jobs for execution. Its main advantage is a low learning curve: simple MapReduce-style statistics can be produced quickly with SQL-like statements, without writing a dedicated MapReduce application, which makes it a good fit for statistical analysis in a data warehouse.
How Hive works
Diagram:
Suppose that at the Hive command-line client I create a database called myhive, and then create a table emp inside it:
create database myhive;
use myhive;
create table emp(id int, name string);
Hive then stores the metadata for these objects in a database. Hive metadata includes the table name, the table's columns and partitions with their properties, table-level properties (whether it is an external table, and so on), and the directory where the table's data lives.
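Once a table exists, you can look at this metadata directly from the Hive client. A minimal sketch, using the emp table above (the exact layout of the output varies by Hive version):

describe formatted emp;   -- columns, table properties, storage format and the HDFS location of the table
show create table emp;    -- the CREATE statement Hive has recorded for the table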
Since Hive is built on Hadoop, databases and tables show up as directories on HDFS, and the data itself is of course stored on HDFS as well.
For the database and table above, Hive creates a directory structure such as /user/hive/warehouse/myhive.db on HDFS, and the table's data can simply be a file you upload yourself, for example the emp.data file in the diagram, placed under the table's directory inside /user/hive/warehouse/myhive.db. After that you can query it with SQL (note: the query is written against the table, not against the file emp.data, e.g. select * from emp, yet the rows returned are the contents of emp.data; the table definition plus its data file together behave like a table in a traditional database). The metadata itself is stored in an external database such as MySQL (the embedded Derby database also works, but it is not recommended, since an embedded store cannot be shared).
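Pointing the metastore at MySQL is configured in hive-site.xml. The snippet below is only a sketch: the host mini1, the database name hive, and the credentials are placeholders that must match your own MySQL instance.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://mini1:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
</property>

The MySQL JDBC driver jar also needs to be on Hive's classpath (typically dropped into Hive's lib directory) for this to work.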
Now suppose I write a query:
select id,name from emp where id>2 order by id desc;
How does it get executed? The query is handed to Hive, which uses its parser and optimizer (the Compiler in the diagram) to build a plan on top of MapReduce templates. The generated query plan is stored on HDFS, then picked up by the MapReduce framework and submitted as a job to run on YARN.
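If you want to see the plan the Compiler produces without actually running the job, Hive's explain statement prints it. A small sketch (which stages appear depends on the query and the Hive version):

explain select id,name from emp where id>2 order by id desc;            -- print the stage plan (map/reduce stages)
explain extended select id,name from emp where id>2 order by id desc;   -- same, with more detail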
The relationship between Hive and MapReduce
How Hive stores data
1. All Hive data is stored on HDFS; there is no special Hive-only storage format (Text, SequenceFile, ParquetFile, RCFile and others are supported).
2. You only need to tell Hive the column delimiter and row delimiter when creating a table, and Hive can parse the data.
3. Hive has the following data model: DB, Table, External Table, Partition, and Bucket.
db: appears on HDFS as a folder under the ${hive.metastore.warehouse.dir} directory
table: appears on HDFS as a folder under the directory of the db it belongs to
external table: similar to table, except that its data can be stored at any path you specify
managed (ordinary) table: when the table is dropped, its files on HDFS are deleted as well
external table: when the table is dropped, the files on HDFS are left untouched; only the metadata is removed
partition: appears on HDFS as a subdirectory under the table directory
bucket: appears on HDFS as multiple files under the same table directory, produced by hashing; rows are routed to different files according to the hash value (see the sketch after this list)
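Buckets are the one item above that the walkthrough below does not demonstrate, so here is a minimal sketch; the table name stu_buck and the bucket count 4 are arbitrary choices for illustration, and the source table emp is the one created earlier.

-- hash rows on id into 4 files under the table's directory
create table stu_buck(id int, name string)
clustered by (id) into 4 buckets
row format delimited fields terminated by ',';
-- on older Hive versions, enforce one reducer per bucket before inserting
set hive.enforce.bucketing = true;
insert into table stu_buck select id, name from emp;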
All this theory can be dizzying; the hands-on introduction below should make it clear.
Using Hive
You can start Hive as an interactive shell:
[root@mini1 ~]# cd apps/hive/bin
[root@mini1 bin]# ll
总用量 32
-rwxr-xr-x. 1 root root 1031 4月  30 2015 beeline
drwxr-xr-x. 3 root root 4096 10月 17 12:38 ext
-rwxr-xr-x. 1 root root 7844 5月   8 2015 hive
-rwxr-xr-x. 1 root root 1900 4月  30 2015 hive-config.sh
-rwxr-xr-x. 1 root root  885 4月  30 2015 hiveserver2
-rwxr-xr-x. 1 root root  832 4月  30 2015 metatool
-rwxr-xr-x. 1 root root  884 4月  30 2015 schematool
[root@mini1 bin]# ./hive
hive>
But that interface is fairly bare. Hive can also be exposed as a service (the Hive Thrift service, hiveserver2) and then connected to with the bundled beeline client, as follows.
Window 1: start the service
[root@mini1 ~]# cd apps/hive/bin
[root@mini1 bin]# ll
总用量 32
-rwxr-xr-x. 1 root root 1031 4月  30 2015 beeline
drwxr-xr-x. 3 root root 4096 10月 17 12:38 ext
-rwxr-xr-x. 1 root root 7844 5月   8 2015 hive
-rwxr-xr-x. 1 root root 1900 4月  30 2015 hive-config.sh
-rwxr-xr-x. 1 root root  885 4月  30 2015 hiveserver2
-rwxr-xr-x. 1 root root  832 4月  30 2015 metatool
-rwxr-xr-x. 1 root root  884 4月  30 2015 schematool
[root@mini1 bin]# ./hiveserver2
Window 2: connect as a client
[root@mini1 bin]# ./beeline
Beeline version 1.2.1 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000
Connecting to jdbc:hive2://localhost:10000
Enter username for jdbc:hive2://localhost:10000: root
Enter password for jdbc:hive2://localhost:10000: ******
Connected to: Apache Hive (version 1.2.1)
Driver: Hive JDBC (version 1.2.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000>
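Instead of typing !connect by hand, beeline can also take the connection details on the command line; assuming the same root user, the one-liner below does the same thing:

[root@mini1 bin]# ./beeline -u jdbc:hive2://localhost:10000 -n root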
You may hit this error:
Error: Failed to open new session: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop3:supergroup:drwx------
The fix is to open up the permissions of the /tmp directory on HDFS:
./hadoop dfs -chmod -R 777 /tmp
Now for some basic usage; enjoy how comfortable plain SQL feels.
1. List the databases
0: jdbc:hive2://localhost:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (1.456 seconds)
2. Create and switch to a database, list the tables
0: jdbc:hive2://localhost:10000> create database myhive;
No rows affected (0.576 seconds)
0: jdbc:hive2://localhost:10000> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| default        |
| myhive         |
+----------------+--+
0: jdbc:hive2://localhost:10000> use myhive;
No rows affected (0.265 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
+-----------+--+
3. Create a table
0: jdbc:hive2://localhost:10000> create table emp(id int,name string);
No rows affected (0.29 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| emp       |
+-----------+--+
1 row selected (0.261 seconds)
The table corresponds to a directory that we will upload data into; viewed from the HDFS web page it looks like this:
(screenshot of the table directory in the HDFS web UI)
There are no files in it yet, so of course there is no data; we need to upload a file into that directory.
[root@mini1 ~]# cat sz.data
1,zhangsan
2,lisi
3,wangwu
4,furong
5,fengjie
[root@mini1 ~]# hadoop fs -put sz.data /user/hive/warehouse/myhive.db/emp
4. Query the table again
0: jdbc:hive2://localhost:10000> select * from emp;
+---------+-----------+--+
| emp.id  | emp.name  |
+---------+-----------+--+
| NULL    | NULL      |
| NULL    | NULL      |
| NULL    | NULL      |
| NULL    | NULL      |
| NULL    | NULL      |
+---------+-----------+--+
The results are all NULL, naturally, because the table was created without declaring ',' as the field delimiter, while the fields in the file are separated by commas. So drop the table, recreate it with the statement below, and upload the file again:
0: jdbc:hive2://localhost:10000> drop table emp;
No rows affected (1.122 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
+-----------+--+
0: jdbc:hive2://localhost:10000> create table emp(id int,name string)
0: jdbc:hive2://localhost:10000> row format delimited
0: jdbc:hive2://localhost:10000> fields terminated by ',';
No rows affected (0.265 seconds)

[root@mini1 ~]# hadoop fs -put sz.data /user/hive/warehouse/myhive.db/emp

0: jdbc:hive2://localhost:10000> select * from emp;
+---------+-----------+--+
| emp.id  | emp.name  |
+---------+-----------+--+
| 1       | zhangsan  |
| 2       | lisi      |
| 3       | wangwu    |
| 4       | furong    |
| 5       | fengjie   |
+---------+-----------+--+
6. Conditional query
0: jdbc:hive2://localhost:10000> select id,name from emp where id>2 order by id desc;
INFO  : Number of reduce tasks determined at compile time: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1508216103995_0004
INFO  : The url to track the job: http://mini1:8088/proxy/application_1508216103995_0004/
INFO  : Starting Job = job_1508216103995_0004, Tracking URL = http://mini1:8088/proxy/application_1508216103995_0004/
INFO  : Kill Command = /root/apps/hadoop-2.6.4/bin/hadoop job -kill job_1508216103995_0004
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2017-10-18 00:35:39,865 Stage-1 map = 0%, reduce = 0%
INFO  : 2017-10-18 00:35:46,275 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.33 sec
INFO  : 2017-10-18 00:35:51,487 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.34 sec
INFO  : MapReduce Total cumulative CPU time: 2 seconds 340 msec
INFO  : Ended Job = job_1508216103995_0004
+-----+----------+--+
| id  |   name   |
+-----+----------+--+
| 5   | fengjie  |
| 4   | furong   |
| 3   | wangwu   |
+-----+----------+--+
3 rows selected (18.96 seconds)
Seeing this makes the picture clear: the SQL we write is ultimately compiled into a MapReduce program that runs on YARN; Hive essentially provides a large set of MapReduce templates.
7. Create an external table
0: jdbc:hive2://localhost:10000> create external table emp2(id int,name string)
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ','   -- comma as the field delimiter
0: jdbc:hive2://localhost:10000> stored as textfile                              -- plain text storage
0: jdbc:hive2://localhost:10000> location '/company';                            -- data lives under /company
No rows affected (0.101 seconds)
0: jdbc:hive2://localhost:10000> dfs -ls /;
+----------------------------------------------------------------------------------------+--+
|                                        DFS Output                                       |
+----------------------------------------------------------------------------------------+--+
| Found 16 items                                                                          |
| -rw-r--r--   2 angelababy mygirls          7 2017-10-01 20:22 /canglaoshi_wuma.avi      |
| -rw-r--r--   2 root       supergroup      22 2017-10-03 21:12 /cangmumayi.avi           |
| drwxr-xr-x   - root       supergroup       0 2017-10-18 00:55 /company                  |
| drwxr-xr-x   - root       supergroup       0 2017-10-13 04:44 /flowcount                |
| drwxr-xr-x   - root       supergroup       0 2017-10-17 03:44 /friends                  |
| drwxr-xr-x   - root       supergroup       0 2017-10-17 06:19 /gc                       |
| drwxr-xr-x   - root       supergroup       0 2017-10-07 07:28 /liushishi.log            |
| -rw-r--r--   3 12706      supergroup      60 2017-10-04 21:58 /liushishi.love           |
| drwxr-xr-x   - root       supergroup       0 2017-10-17 07:32 /logenhance               |
| -rw-r--r--   2 root       supergroup      26 2017-10-16 20:49 /mapjoin                  |
| drwxr-xr-x   - root       supergroup       0 2017-10-16 21:16 /mapjoincache             |
| drwxr-xr-x   - root       supergroup       0 2017-10-13 13:15 /mrjoin                   |
| drwxr-xr-x   - root       supergroup       0 2017-10-16 23:35 /reverse                  |
| drwx------   - root       supergroup       0 2017-10-17 13:10 /tmp                      |
| drwxr-xr-x   - root       supergroup       0 2017-10-17 13:13 /user                     |
| drwxr-xr-x   - root       supergroup       0 2017-10-14 01:33 /wordcount                |
+----------------------------------------------------------------------------------------+--+
0: jdbc:hive2://localhost:10000> create external table t_sz_ext(id int,name string)
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by '\t'
0: jdbc:hive2://localhost:10000> stored as textfile
0: jdbc:hive2://localhost:10000> location '/company';
No rows affected (0.135 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| emp       |
| emp2      |
| t_sz_ext  |
+-----------+--+
You can see the new /company directory and the two new tables; at this point there is nothing in /company yet.
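This is also a convenient place to see the managed-versus-external difference described earlier. The sketch below is not actually run in this walkthrough (t_sz_ext still appears in the later listings); dropping an external table removes only its metadata:

drop table t_sz_ext;     -- metadata gone, files untouched
dfs -ls /company;        -- the /company directory and anything in it are still on HDFS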
8. Load a file into a table
Earlier we used a hadoop command to upload a file into the table's directory, but data files can also be loaded directly from the command line:
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table emp2;
INFO  : Loading data to table myhive.emp2 from file:/root/sz.data
INFO  : Table myhive.emp2 stats: [numFiles=0, totalSize=0]
No rows affected (0.414 seconds)
0: jdbc:hive2://localhost:10000> select * from emp2;
+----------+------------+--+
| emp2.id  | emp2.name  |
+----------+------------+--+
| 1        | zhangsan   |
| 2        | lisi       |
| 3        | wangwu     |
| 4        | furong     |
| 5        | fengjie    |
+----------+------------+--+
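load data local inpath copies a file from the local filesystem of the machine running HiveServer2. Two related variants, sketched here with a hypothetical HDFS path /data/sz.data:

load data inpath '/data/sz.data' into table emp2;                   -- source is on HDFS: the file is moved into the table directory, not copied
load data local inpath '/root/sz.data' overwrite into table emp2;   -- overwrite replaces whatever the table already holds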
9. Partitioned table: the partition column is school, and we load data into two different partitions
0: jdbc:hive2://localhost:10000> create table stu(id int,name string)
0: jdbc:hive2://localhost:10000> partitioned by(school string)
0: jdbc:hive2://localhost:10000> row format delimited fields terminated by ',';
No rows affected (0.319 seconds)
0: jdbc:hive2://localhost:10000> show tables;
+-----------+--+
| tab_name  |
+-----------+--+
| emp       |
| emp2      |
| stu       |
| t_sz_ext  |
+-----------+--+
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz.data' into table stu partition(school='scu');
INFO  : Loading data to table myhive.stu partition (school=scu) from file:/root/sz.data
INFO  : Partition myhive.stu{school=scu} stats: [numFiles=1, numRows=0, totalSize=46, rawDataSize=0]
No rows affected (0.607 seconds)
0: jdbc:hive2://localhost:10000> select * from stu;
+---------+-----------+-------------+--+
| stu.id  | stu.name  | stu.school  |
+---------+-----------+-------------+--+
| 1       | zhangsan  | scu         |
| 2       | lisi      | scu         |
| 3       | wangwu    | scu         |
| 4       | furong    | scu         |
| 5       | fengjie   | scu         |
+---------+-----------+-------------+--+
5 rows selected (0.286 seconds)
0: jdbc:hive2://localhost:10000> load data local inpath '/root/sz2.data' into table stu partition(school='hfut');
INFO  : Loading data to table myhive.stu partition (school=hfut) from file:/root/sz2.data
INFO  : Partition myhive.stu{school=hfut} stats: [numFiles=1, numRows=0, totalSize=46, rawDataSize=0]
No rows affected (0.671 seconds)
0: jdbc:hive2://localhost:10000> select * from stu;
+---------+-----------+-------------+--+
| stu.id  | stu.name  | stu.school  |
+---------+-----------+-------------+--+
| 1       | Tom       | hfut        |
| 2       | Jack      | hfut        |
| 3       | Lucy      | hfut        |
| 4       | Kitty     | hfut        |
| 5       | Lucene    | hfut        |
| 6       | Sakura    | hfut        |
| 1       | zhangsan  | scu         |
| 2       | lisi      | scu         |
| 3       | wangwu    | scu         |
| 4       | furong    | scu         |
| 5       | fengjie   | scu         |
+---------+-----------+-------------+--+
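The benefit of partitioning is that a query which filters on the partition column only has to read that partition's subdirectory; a quick sketch against the table just created:

-- only the school=scu subdirectory under the stu table directory is scanned
select id, name from stu where school='scu';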
Note: Hive does not follow the relational normal forms, so don't go looking for primary keys.
10. Add a partition
0: jdbc:hive2://localhost:10000> alter table stu add partition (school='Tokyo');
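You can confirm the new (still empty) partition from the client as well; the drop statement below is only a sketch of the reverse operation:

show partitions stu;                                -- should now list school=hfut, school=scu and school=Tokyo
alter table stu drop partition (school='Tokyo');    -- removes the partition (and its data, for a managed table)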
To see it more intuitively, check the HDFS web UI.
Hive metastore tables explained