In the previous post we walked through analyzing the Taobao big-data case in a Hadoop environment. The results were reasonable, and the experiment showed off the "large dataset" side of big data. This time we assume a practical scenario: you are handed a province's student course-selection data to analyze. Faced with a dataset that is both large and complex, is Hadoop still practical? "Complex" here means there are several tables to collect, and the analysis requires multi-table joins.
As for Hive inside Hadoop, I find it less smart and convenient than MySQL. So why use Hive at all? Because of the sheer size of the data. Later we will bring in the Spark engine, which will make the next leg of the journey far easier!
Project overview
Imagine an enormous student course-selection dataset. Here we only simulate one for convenience, generating some mock data in Python across four tables: a student table, an electives (course-selection) table, a course table, and a course-class table. In real life, a task like this would easily involve tens of millions of rows.
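A minimal sketch of what that generator might look like in Python follows. The row counts, value ranges, and naming patterns are all illustrative assumptions; only each table's column order is fixed by the Hive schemas created later. Write the CSVs outside the Flume spool directory and copy each one in while its matching agent is running.

# -*- coding: utf-8 -*-
# Minimal mock-data generator (sizes and value ranges are assumptions).
# Writes one headerless, comma-delimited file per Hive table.
import csv
import random

random.seed(42)  # reproducible mock data

N_STUDENTS, N_COURSES, N_CLASSES = 1000, 20, 60  # illustrative sizes

# tb_course: code, name, period, credit
courses = [("C%03d" % i, "课程%d" % i, str(random.choice([32, 48, 64])),
            str(random.choice([2, 3, 4]))) for i in range(N_COURSES)]

# tb_course_class: code, semester, teacher_id, course_code
classes = [("K%04d" % i, "2023-1", "T%03d" % random.randrange(100),
            random.choice(courses)[0]) for i in range(N_CLASSES)]

# tb_student: id, name, gender, birthdate, phonenumber, major_class
students = [("S%06d" % i, "学生%d" % i, random.choice(["男", "女"]),
             "%04d-%02d-%02d" % (random.randint(2000, 2005),
                                 random.randint(1, 12), random.randint(1, 28)),
             "1%010d" % random.randrange(10 ** 10),
             "班级%d" % random.randrange(30)) for i in range(N_STUDENTS)]

# tb_electives: course_class_code, student_id, score (a few classes per student)
electives = [(random.choice(classes)[0], s[0], str(random.randint(30, 100)))
             for s in students for _ in range(random.randint(3, 6))]

for name, rows in [("tb_course", courses), ("tb_course_class", classes),
                   ("tb_student", students), ("tb_electives", electives)]:
    with open("%s.csv" % name, "w", newline="", encoding="utf-8") as f:
        csv.writer(f).writerows(rows)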
Analysis tasks:
a. Male-to-female ratio
b. Pass rate
c. Average score per course, showing each course's Chinese name
d. Students who failed two courses
e. Create a result table in MySQL, write the results into it with Sqoop, and display them.
The approach is similar to last time:
The basic setup and configuration were covered in the Taobao case, so I won't repeat them here; if you need them, please go back to the previous post.
One thing to note: last time, Chinese characters showed up as ??? when we queried after import/export. This time we adjust a few connection parameters so they display correctly. The steps are as follows:
Just append the following parameters after useSSL=false (inside hive-site.xml the ampersands must be escaped as &amp;):
&amp;allowPublicKeyRetrieval=true&amp;useUnicode=true&amp;characterEncoding=UTF-8
If you are worried about typos, you can simply select everything below and overwrite your existing hive-site.xml:
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <configuration> <property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://localhost:3306/hive?useSSL=false&allowPublicKeyRetrieval=true&useUnicode=true&characterEncoding=UTF-8</value> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>root</value> </property> <property> <name>javax.jdo.option.ConnectionPassword</name> <value>root</value> </property> <property> <name>hive.support.concurrency</name> <value>true</value> </property> <property> <name>hive.exec.dynamic.partition.mode</name> <value>nonstrict</value> </property> <property> <name>hive.txn.manager</name> <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value> </property> <property> <name>hive.compactor.initiator.on</name> <value>true</value> </property> <property> <name>hive.compactor.worker.threads</name> <value>1</value> <!--这里的线程数必须大于0 :理想状态和分桶数一致--> </property> <property> <name>hive.enforce.bucketing</name> <value>true</value> </property> </configuration>
Create the working directories for later steps (same as before):
mkdir -p /home/hadoop/xsxk/data
mkdir -p /home/hadoop/xsxk/tmp/point
Create four Flume agent configuration files, one per table, to load the data into Hive:
vi tb_course.properties
# Name the agent's source, channel and sink
agent3.sources = source3
agent3.channels = channel3
agent3.sinks = sink3
# Source: watch a spooling directory
agent3.sources.source3.type = spooldir
agent3.sources.source3.spoolDir = /home/hadoop/xsxk/data
agent3.sources.source3.fileHeader=false
# Channel: file-backed (on disk)
agent3.channels.channel3.type = file
# Path for the file channel's checkpoint files
agent3.channels.channel3.checkpointDir=/home/hadoop/xsxk/tmp/point
# Path for the file channel's data files
agent3.channels.channel3.dataDirs=/home/hadoop/xsxk/tmp
# Sink: write into the Hive table
agent3.sinks.sink3.type = hive
agent3.sinks.sink3.hive.metastore = thrift://hadoop:9083
agent3.sinks.sink3.hive.database = xsxk
agent3.sinks.sink3.hive.table = tb_course
agent3.sinks.sink3.serializer = DELIMITED
agent3.sinks.sink3.serializer.delimiter = ","
agent3.sinks.sink3.serializer.serdeSeparator = ','
agent3.sinks.sink3.serializer.fieldnames = code,name,period,credit
agent3.sinks.sink3.batchSize = 90
# Wire source, channel and sink together
agent3.sources.source3.channels = channel3
agent3.sinks.sink3.channel = channel3
vi tb_course_class.properties
# Name the agent's source, channel and sink
agent3.sources = source3
agent3.channels = channel3
agent3.sinks = sink3
# Source: watch a spooling directory
agent3.sources.source3.type = spooldir
agent3.sources.source3.spoolDir = /home/hadoop/xsxk/data
agent3.sources.source3.fileHeader=false
# Channel: file-backed (on disk)
agent3.channels.channel3.type = file
# Path for the file channel's checkpoint files
agent3.channels.channel3.checkpointDir=/home/hadoop/xsxk/tmp/point
# Path for the file channel's data files
agent3.channels.channel3.dataDirs=/home/hadoop/xsxk/tmp
# Sink: write into the Hive table
agent3.sinks.sink3.type = hive
agent3.sinks.sink3.hive.metastore = thrift://hadoop:9083
agent3.sinks.sink3.hive.database = xsxk
agent3.sinks.sink3.hive.table = tb_course_class
agent3.sinks.sink3.serializer = DELIMITED
agent3.sinks.sink3.serializer.delimiter = ","
agent3.sinks.sink3.serializer.serdeSeparator = ','
agent3.sinks.sink3.serializer.fieldnames = code,semester,teacher_id,course_code
agent3.sinks.sink3.batchSize = 90
# Wire source, channel and sink together
agent3.sources.source3.channels = channel3
agent3.sinks.sink3.channel = channel3
vi tb_student.properties
# Name the agent's source, channel and sink
agent3.sources = source3
agent3.channels = channel3
agent3.sinks = sink3
# Source: watch a spooling directory
agent3.sources.source3.type = spooldir
agent3.sources.source3.spoolDir = /home/hadoop/xsxk/data
agent3.sources.source3.fileHeader=false
# Channel: file-backed (on disk)
agent3.channels.channel3.type = file
# Path for the file channel's checkpoint files
agent3.channels.channel3.checkpointDir=/home/hadoop/xsxk/tmp/point
# Path for the file channel's data files
agent3.channels.channel3.dataDirs=/home/hadoop/xsxk/tmp
# Sink: write into the Hive table
agent3.sinks.sink3.type = hive
agent3.sinks.sink3.hive.metastore = thrift://hadoop:9083
agent3.sinks.sink3.hive.database = xsxk
agent3.sinks.sink3.hive.table = tb_student
agent3.sinks.sink3.serializer = DELIMITED
agent3.sinks.sink3.serializer.delimiter = ","
agent3.sinks.sink3.serializer.serdeSeparator = ','
agent3.sinks.sink3.serializer.fieldnames = id,name,gender,birthdate,phonenumber,major_class
agent3.sinks.sink3.batchSize = 90
# Wire source, channel and sink together
agent3.sources.source3.channels = channel3
agent3.sinks.sink3.channel = channel3
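The fourth file follows exactly the same template; only the target table and the field list change, with the field names taken from the tb_electives table created below:

vi tb_electives.properties

# Name the agent's source, channel and sink
agent3.sources = source3
agent3.channels = channel3
agent3.sinks = sink3
# Source: watch a spooling directory
agent3.sources.source3.type = spooldir
agent3.sources.source3.spoolDir = /home/hadoop/xsxk/data
agent3.sources.source3.fileHeader=false
# Channel: file-backed (on disk)
agent3.channels.channel3.type = file
# Path for the file channel's checkpoint files
agent3.channels.channel3.checkpointDir=/home/hadoop/xsxk/tmp/point
# Path for the file channel's data files
agent3.channels.channel3.dataDirs=/home/hadoop/xsxk/tmp
# Sink: write into the Hive table
agent3.sinks.sink3.type = hive
agent3.sinks.sink3.hive.metastore = thrift://hadoop:9083
agent3.sinks.sink3.hive.database = xsxk
agent3.sinks.sink3.hive.table = tb_electives
agent3.sinks.sink3.serializer = DELIMITED
agent3.sinks.sink3.serializer.delimiter = ","
agent3.sinks.sink3.serializer.serdeSeparator = ','
agent3.sinks.sink3.serializer.fieldnames = course_class_code,student_id,score
agent3.sinks.sink3.batchSize = 90
# Wire source, channel and sink together
agent3.sources.source3.channels = channel3
agent3.sinks.sink3.channel = channel3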
For how to work inside vi, see the previous post if you're unsure. With these four files in place, we only need to start four separate agents later to load all four tables' data.
Create the corresponding four tables in Hive
Start Hive:
hive
create database xsxk;
use xsxk;
create table `xsxk`.`tb_course` (
  `code` varchar(255),
  `name` varchar(255),
  `period` varchar(255),
  `credit` varchar(255)
)
clustered by (code) into 3 buckets
row format delimited fields terminated by ','
stored as orc
tblproperties ('transactional'='true');

create table `xsxk`.`tb_course_class` (
  `code` varchar(255),
  `semester` varchar(255),
  `teacher_id` varchar(255),
  `course_code` varchar(255)
)
clustered by (code) into 3 buckets
row format delimited fields terminated by ','
stored as orc
tblproperties ('transactional'='true');

create table `xsxk`.`tb_electives` (
  `course_class_code` varchar(255),
  `student_id` varchar(255),
  `score` varchar(255)
)
clustered by (course_class_code) into 3 buckets
row format delimited fields terminated by ','
stored as orc
tblproperties ('transactional'='true');

create table `xsxk`.`tb_student` (
  `id` varchar(255),
  `name` varchar(255),
  `gender` varchar(255),
  `birthdate` varchar(255),
  `phonenumber` varchar(255),
  `major_class` varchar(255)
)
clustered by (id) into 3 buckets
row format delimited fields terminated by ','
stored as orc
tblproperties ('transactional'='true');
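Optionally, a quick sanity check that the four tables exist with the expected columns:

show tables;
describe tb_student;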
Create the result table in Hive:
create table `xsxk`.`xsxk_result` (`key` varchar(255), `value` varchar(255));
Create the receiving table in MySQL:
CREATE DATABASE xsxk;
create table `xsxk`.`xsxk_result` (
  `key` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL,
  `value` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
Load the data: prepare the four Flume agents
First, run this command to start the Hive metastore service:
hive --service metastore -p 9083
Then run the following commands one at a time. After each import succeeds, stop the agent and close that terminal before starting the next (or use several terminals). Since all four agents watch the same spool directory, keep only the file for the currently running agent in /home/hadoop/xsxk/data:
flume-ng agent --conf conf --conf-file tb_course.properties --name agent3 -Dflume.root.logger=INFO,console
flume-ng agent --conf conf --conf-file tb_course_class.properties --name agent3 -Dflume.root.logger=INFO,console
flume-ng agent --conf conf --conf-file tb_electives.properties --name agent3 -Dflume.root.logger=INFO,console
flume-ng agent --conf conf --conf-file tb_student.properties --name agent3 -Dflume.root.logger=INFO,console
That loads all four tables. Now let's query them in Hive to check that the imports really succeeded:
select count(*) from tb_course;
select count(*) from tb_course_class;
select count(*) from tb_electives;
select count(*) from tb_student;
Data analysis
This is the most important step. These queries were already tested in MySQL, so we insert the results straight into the Hive result table.
Note: it is best to run the statement below before the queries. Hive's strict mode rejects the comma-style (Cartesian-product) joins used here, where tables are joined through equality conditions in the WHERE clause:
set hive.mapred.mode=nonstrict;
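For reference, strict mode accepts explicit equi-joins even without this setting; the comma joins below could equivalently be written as, for example:

select e.student_id, c.`name`
from tb_electives as e
join tb_course_class as cc on cc.`code` = e.course_class_code
join tb_course as c on c.`code` = cc.course_code
limit 10;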
Male-to-female ratio
INSERT INTO xsxk_result
SELECT '男女比例', c.`男女比例`
FROM (
  SELECT a.`男生人数` / b.`女生人数` AS `男女比例`
  FROM (SELECT count(*) AS `男生人数` FROM tb_student WHERE gender = '男') AS a,
       (SELECT count(*) AS `女生人数` FROM tb_student WHERE gender = '女') AS b
) AS c;
The ratio comes out close to 1:1, which suggests the simulated data is fairly reasonable: there are more boys than girls, but not by much.
As for today's skewed gender ratio, I'll just say this: the bigger your magnetic field, the bigger your pull. A frog in slowly warming water can only watch the sky from the bottom of its well; what you want, only you can give yourself.
Pass rate
INSERT INTO xsxk_result
SELECT '及格率', c.`及格率`
FROM (
  SELECT b.`及格人数` / a.`总数` AS `及格率`
  FROM (SELECT count(*) AS `总数` FROM tb_electives) AS a,
       -- score is stored as varchar, so cast before comparing numerically
       (SELECT count(*) AS `及格人数` FROM tb_electives WHERE cast(score AS double) >= 60) AS b
) AS c;
Query what we just inserted into the result table:
select * from xsxk_result;
Average score per course, showing the course's Chinese name
One way is to compute each class's average first and then average those per course (every class mean weighted equally):

SELECT q.`name`, ff.`课程平均分`
FROM tb_course AS q,
     (SELECT ee.`课程编号`, AVG(ee.`分数`) AS `课程平均分`
      FROM (SELECT cc.course_code AS `课程编号`, dd.`平均分` AS `分数`
            FROM tb_course_class AS cc,
                 (SELECT course_class_code AS `班号`, AVG(score) AS `平均分`
                  FROM tb_electives
                  GROUP BY course_class_code) AS dd
            WHERE cc.`code` = dd.`班号`) AS ee
      GROUP BY ee.`课程编号`) AS ff
WHERE q.`code` = ff.`课程编号`;
The version we actually insert instead averages all raw scores per course, weighting every elective record equally (so the two results can differ slightly):

INSERT INTO xsxk_result
SELECT c.`name` AS `课程名称`, AVG(score) AS `平均分`
FROM tb_course AS c, tb_electives AS e, tb_course_class AS cc
WHERE cc.`code` = e.course_class_code
  AND cc.course_code = c.`code`
-- group by code and name so the selected name is a group key
GROUP BY c.`code`, c.`name`
ORDER BY `平均分` DESC;
Students who failed two courses
INSERT INTO xsxk_result
SELECT s.`name` AS `姓名`, '挂科两门'
FROM tb_student AS s
JOIN tb_electives AS e ON s.id = e.student_id
WHERE cast(e.score AS double) < 60
-- group by id as well as name, in case two students share a name
GROUP BY s.id, s.`name`
HAVING count(*) = 2;
Export the results to MySQL
The export directory must point at the result table's storage; assuming the default Hive warehouse location, that is /user/hive/warehouse/xsxk.db/xsxk_result. The table was created with Hive's default field delimiter, so Sqoop also needs --input-fields-terminated-by '\001', and the charset parameters on the JDBC URL keep the Chinese keys from turning into ???:

sqoop export --connect 'jdbc:mysql://localhost:3306/xsxk?useUnicode=true&characterEncoding=UTF-8' --username root -P --table xsxk_result --export-dir /user/hive/warehouse/xsxk.db/xsxk_result --input-fields-terminated-by '\001'
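To display the exported results (task e), a quick check from the shell might look like:

mysql -u root -p -e "SELECT * FROM xsxk.xsxk_result;"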
Hive operations: deleting tables (drop, truncate)
Hive deletions fall into a few categories: deleting data while keeping the table, dropping tables and databases, and dropping partitions.
1. Delete only the data in a table, keeping its structure
hive> truncate table table_name;
truncate deletes every row in the given table; it is equivalent to delete from table_name where 1=1.
Note: truncate cannot delete an external table! For a table created with EXTERNAL, Hive only manages the metadata in the metastore; the data files themselves live in HDFS, so even after the table or its partitions are dropped, the HDFS data remains. To really delete an external table's data, either convert it to an internal (managed) table first, or delete the HDFS files directly.
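A sketch of both options (the table name and path here are hypothetical):

-- Option 1: convert the external table to a managed one, then drop it
alter table ext_table set tblproperties ('EXTERNAL'='FALSE');
drop table ext_table;
-- Option 2: remove the HDFS files directly from the hive shell
dfs -rm -r /path/to/ext_table_data;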
2. Drop a table
hive> drop table if exists table_name;
3. Drop a database
hive> drop database if exists database_name;
Note: this fails with an error if the database still contains tables.
There are two ways around the error: the simple one is to drop all the tables first and then the database. The other is to force the drop with the cascade keyword:
drop database if exists database_name cascade;
4. Drop a Hive partition
alter table table_name drop partition (partition_column='partition_value');