[Hadoop Big Data] Hive JOIN Usage Examples Explained

Introduction:

In SQL, JOIN is the usual way to connect two tables for complex queries. For example, joining a users table with an orders table lets you find the products a given user bought, or the group of people who bought a given product.

Hive supports the same operations, and because Hive runs on Hadoop, there is plenty of room for optimization: joining small tables to large tables, caching the small table in memory, streaming the large table instead of caching it, and so on.

Let's look at Hive's join operations. They work much like joins in ordinary SQL.
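To make the small-table optimization concrete: a map join builds an in-memory hash table from the small table and streams the large table past it, so no shuffle is needed. Here is a minimal Python sketch of that idea, using made-up user/order data (not the sample tables used later in this article, and not how Hive actually implements it):

```python
# Map-side (hash) join sketch: cache the small table as a hash map,
# then stream the large table and probe the map row by row.
small = {"1": "alice", "2": "bob"}               # user_id -> name (small table, cached)
big = [("o1", "1"), ("o2", "2"), ("o3", "1")]    # (order_id, user_id) (large table, streamed)

joined = [(oid, uid, small[uid]) for (oid, uid) in big if uid in small]
```

Each large-table row costs one hash lookup, which is why Hive's local "map join" task in the logs below first dumps the small side into a hashtable file.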

Data preparation: create the data --> create the tables --> load the data

First, create two raw data files. Each file has three columns: the first is an id, the second a name, and the third an id referencing the other table. The second column makes it easy to see which rows line up in the join results:

[xingoo@localhost tmp]$ cat aa.txt 
1 a 3
2 b 4
3 c 1
[xingoo@localhost tmp]$ cat bb.txt 
1 xxx 2
2 yyy 3
3 zzz 5

Next, create the two tables. Note that the field delimiter is a space; the second table can be created directly from the first table's definition with LIKE.

hive> create table aa
    > (a string,b string,c string)
    > row format delimited
    > fields terminated by ' ';
OK
Time taken: 0.19 seconds
hive> create table bb like aa;
OK
Time taken: 0.188 seconds

Check the structure of both tables:

hive> describe aa;
OK
a                       string                                      
b                       string                                      
c                       string                                      
Time taken: 0.068 seconds, Fetched: 3 row(s)
hive> describe bb;
OK
a                       string                                      
b                       string                                      
c                       string                                      
Time taken: 0.045 seconds, Fetched: 3 row(s)

Now load the data from the local files:

hive> load data local inpath '/usr/tmp/aa.txt' overwrite into table aa;
Loading data to table test.aa
OK
Time taken: 0.519 seconds
hive> load data local inpath '/usr/tmp/bb.txt' overwrite into table bb;
Loading data to table test.bb
OK
Time taken: 0.321 seconds

Inner Join

An inner join, driven by the ON clause, returns only the rows from table 1 and table 2 that satisfy the join condition.

hive> select * from aa a join bb b on a.c=b.a;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20160824161233_f9ecefa2-e5d7-416d-8d90-e191937e7313
Total jobs = 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2016-08-24 16:12:44 Starting to launch local task to process map join;  maximum memory = 518979584
2016-08-24 16:12:47 Dump the side-table for tag: 0 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-12-33_145_337836390845333215-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile00--.hashtable
2016-08-24 16:12:47 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-12-33_145_337836390845333215-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile00--.hashtable (332 bytes)
2016-08-24 16:12:47 End of local task; Time Taken: 3.425 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2016-08-24 16:12:50,222 Stage-3 map = 100%,  reduce = 0%
Ended Job = job_local944389202_0007
MapReduce Jobs Launched: 
Stage-Stage-3:  HDFS Read: 1264 HDFS Write: 90 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
3   c   1   1   xxx 2
1   a   3   3   zzz 5
Time taken: 17.083 seconds, Fetched: 2 row(s)
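The semantics of this inner join can be sketched in a few lines of Python over the same six sample rows (an illustration of the logic only, not how Hive executes it):

```python
aa = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
bb = [("1", "xxx", "2"), ("2", "yyy", "3"), ("3", "zzz", "5")]

# Inner join on aa.c = bb.a: keep only the pairs that satisfy the condition.
inner = [x + y for x in aa for y in bb if x[2] == y[0]]
```

The two surviving rows match Hive's output above; only the row order may differ.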

Left Outer Join

A left outer join returns every row from the left table. Where a matching row exists in the right table it is shown; otherwise the right-side columns are NULL.

hive> select * from aa a left outer join bb b on a.c=b.a;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20160824161637_6d540592-13fd-4f59-a2cf-0a91c0fc9533
Total jobs = 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2016-08-24 16:16:48 Starting to launch local task to process map join;  maximum memory = 518979584
2016-08-24 16:16:51 Dump the side-table for tag: 1 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-16-37_813_4572869866822819707-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile11--.hashtable
2016-08-24 16:16:51 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-16-37_813_4572869866822819707-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile11--.hashtable (338 bytes)
2016-08-24 16:16:51 End of local task; Time Taken: 2.634 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2016-08-24 16:16:53,843 Stage-3 map = 100%,  reduce = 0%
Ended Job = job_local1670258961_0008
MapReduce Jobs Launched: 
Stage-Stage-3:  HDFS Read: 1282 HDFS Write: 90 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1   a   3   3   zzz 5
2   b   4   NULL    NULL    NULL
3   c   1   1   xxx 2
Time taken: 16.048 seconds, Fetched: 3 row(s)
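The left outer join semantics, sketched the same way in Python (illustration only):

```python
aa = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
bb = [("1", "xxx", "2"), ("2", "yyy", "3"), ("3", "zzz", "5")]

left = []
for x in aa:
    matches = [x + y for y in bb if x[2] == y[0]]
    # Every left row survives; an unmatched row is padded with NULLs.
    left += matches or [x + (None, None, None)]
```

Row ("2", "b", "4") has no partner in bb, so it comes out padded with None, mirroring the NULLs in Hive's output.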

Right Outer Join

The mirror image of the left outer join: every row from the right table is returned, with NULLs for the left-side columns when there is no match.

hive> select * from aa a right outer join bb b on a.c=b.a;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20160824162227_5d0f0090-1a9b-4a3f-9e82-e93c4d180f4b
Total jobs = 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2016-08-24 16:22:37 Starting to launch local task to process map join;  maximum memory = 518979584
2016-08-24 16:22:40 Dump the side-table for tag: 0 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-22-27_619_7820027359528638029-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile20--.hashtable
2016-08-24 16:22:40 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-22-27_619_7820027359528638029-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile20--.hashtable (332 bytes)
2016-08-24 16:22:40 End of local task; Time Taken: 2.368 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2016-08-24 16:22:43,060 Stage-3 map = 100%,  reduce = 0%
Ended Job = job_local2001415675_0009
MapReduce Jobs Launched: 
Stage-Stage-3:  HDFS Read: 1306 HDFS Write: 90 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
3   c   1   1   xxx 2
NULL    NULL    NULL    2   yyy 3
1   a   3   3   zzz 5
Time taken: 15.483 seconds, Fetched: 3 row(s)
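And the mirror-image sketch for the right outer join (illustration only):

```python
aa = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
bb = [("1", "xxx", "2"), ("2", "yyy", "3"), ("3", "zzz", "5")]

right = []
for y in bb:
    matches = [x + y for x in aa if x[2] == y[0]]
    # Every right row survives; an unmatched row is padded with NULLs on the left.
    right += matches or [(None, None, None) + y]
```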

Full Outer Join

All rows from both tables are returned; wherever one side has no match, its columns are filled with NULL.

hive> select * from aa a full outer join bb b on a.c=b.a;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20160824162252_c71b2fae-9768-4b9a-b5ad-c06d7cdb60fb
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2016-08-24 16:22:54,111 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_local1766586034_0010
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 4026 HDFS Write: 270 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
3   c   1   1   xxx 2
NULL    NULL    NULL    2   yyy 3
1   a   3   3   zzz 5
2   b   4   NULL    NULL    NULL
Time taken: 1.689 seconds, Fetched: 4 row(s)
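The full outer join combines both behaviors: matched pairs, plus NULL-padded leftovers from each side. A Python sketch of the logic (illustration only):

```python
aa = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
bb = [("1", "xxx", "2"), ("2", "yyy", "3"), ("3", "zzz", "5")]

full = []
matched = set()                        # keys of bb rows that found a partner
for x in aa:
    pairs = [x + y for y in bb if x[2] == y[0]]
    matched.update(y[0] for y in bb if x[2] == y[0])
    full += pairs or [x + (None, None, None)]
for y in bb:                           # append right rows that never matched
    if y[0] not in matched:
        full.append((None, None, None) + y)
```

Four rows come out, matching Hive's output above (the row order may differ).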

Left Semi Join

This one is special: a LEFT SEMI JOIN returns only the columns of table 1, the left table. It can be more efficient than an ordinary join because, for each left-table row, Hive probes the right table and emits the row as soon as the first match is found, instead of scanning for further matches.

hive> select * from aa a left semi join bb b on a.c=b.a;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20160824162327_e7fc72a7-ef91-4d39-83bc-ff8159ea8816
Total jobs = 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2016-08-24 16:23:37 Starting to launch local task to process map join;  maximum memory = 518979584
2016-08-24 16:23:41 Dump the side-table for tag: 1 with group count: 3 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-23-27_008_3026796648107813784-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile31--.hashtable
2016-08-24 16:23:41 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-23-27_008_3026796648107813784-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile31--.hashtable (317 bytes)
2016-08-24 16:23:41 End of local task; Time Taken: 3.586 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2016-08-24 16:23:43,798 Stage-3 map = 100%,  reduce = 0%
Ended Job = job_local521961878_0011
MapReduce Jobs Launched: 
Stage-Stage-3:  HDFS Read: 1366 HDFS Write: 90 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1   a   3
3   c   1
Time taken: 16.811 seconds, Fetched: 2 row(s)
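The semi-join semantics amount to an existence check, sketched in Python (illustration only):

```python
aa = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
bb = [("1", "xxx", "2"), ("2", "yyy", "3"), ("3", "zzz", "5")]

# Left semi join: only left-table columns come out, each left row at most
# once, and probing can stop at the first match (hence the speed-up).
keys = {y[0] for y in bb}
semi = [x for x in aa if x[2] in keys]
```

Note the output has only aa's three columns, unlike the six-column rows of the other joins.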

Cartesian Product

A Cartesian product joins every row of table 1 with every row of table 2.

hive> select * from aa join bb;
Warning: Map Join MAPJOIN[9][bigTable=?] in task 'Stage-3:MAPRED' is a cross product
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20160824162449_20e4b5ec-768f-48cf-a840-7d9ff360975f
Total jobs = 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2016-08-24 16:25:00 Starting to launch local task to process map join;  maximum memory = 518979584
2016-08-24 16:25:02 Dump the side-table for tag: 0 with group count: 1 into file: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-24-49_294_2706432574075169306-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile40--.hashtable
2016-08-24 16:25:02 Uploaded 1 File to: file:/usr/hive/tmp/xingoo/a69078ea-b7d5-4a78-9342-05a1695e9f98/hive_2016-08-24_16-24-49_294_2706432574075169306-1/-local-10004/HashTable-Stage-3/MapJoin-mapfile40--.hashtable (305 bytes)
2016-08-24 16:25:02 End of local task; Time Taken: 2.892 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2016-08-24 16:25:05,677 Stage-3 map = 100%,  reduce = 0%
Ended Job = job_local2068422373_0012
MapReduce Jobs Launched: 
Stage-Stage-3:  HDFS Read: 1390 HDFS Write: 90 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
1   a   3   1   xxx 2
2   b   4   1   xxx 2
3   c   1   1   xxx 2
1   a   3   2   yyy 3
2   b   4   2   yyy 3
3   c   1   2   yyy 3
1   a   3   3   zzz 5
2   b   4   3   zzz 5
3   c   1   3   zzz 5
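With no ON condition, every pairing survives, which the following Python one-liner captures (illustration only):

```python
aa = [("1", "a", "3"), ("2", "b", "4"), ("3", "c", "1")]
bb = [("1", "xxx", "2"), ("2", "yyy", "3"), ("3", "zzz", "5")]

# Cartesian product: every row of aa paired with every row of bb, 3 x 3 = 9 rows.
cross = [x + y for x in aa for y in bb]
```

Nine rows, as in Hive's output above (row order may differ); this is why cross products are dangerous on large tables.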

That covers Hive's join queries; they behave essentially the same as joins in standard SQL.

This article is reposted from xingoo's blog on cnblogs (博客园); the original post is [Hadoop大数据]——Hive连接JOIN用例详解. Please contact the original author before reprinting.