[Big Data Development & Operations Solutions] Sqoop Incremental Sync of Oracle Data to Hive: A Closer Look at merge-key

Summary: A follow-up to the previous article in this series, which ended with a question: if only the most recent slice of a large table was initially loaded into Hive, and historical rows that were never imported are later updated in Oracle, will the merge-key incremental import fail? This came up as a real testing need, so below we modify the data on the Oracle side to simulate exactly that scenario.

Preface

The parameters for Sqoop incremental sync of Oracle data to Hive, and how to build a self-updating incremental job, were tested in detail in the previous articles. This article extends the one linked above, which ended with a question: suppose a table is very large, and the first load only initialized the most recent portion of its data into the Hive table. If historical rows that were never imported are updated today, will the merge-key incremental import fail? I raise this because I genuinely had this test requirement, so let's first modify the data on the Oracle side to simulate the scenario.
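
For context, the auto_job executed later in this article is the saved Sqoop job built in the earlier article: a lastmodified-mode incremental import with a merge key. A minimal sketch of what such a job definition looks like is below; the JDBC URL, credentials, and directory paths are placeholders rather than the exact values from the original environment:

sqoop job --create auto_job \
  -- import \
  --connect jdbc:oracle:thin:@<oracle_host>:1521:orcl \
  --username scott \
  --password-file /user/sqoop/sqoop.pwd \
  --table INR_JOB \
  --target-dir /user/hive/warehouse/inr_job \
  --incremental lastmodified \
  --check-column ETLTIME \
  --merge-key EMPNO \
  -m 1

With --incremental lastmodified, each run imports only the rows whose ETLTIME falls between the job's saved last-value and the current time, and --merge-key EMPNO makes Sqoop run a follow-up merge job that folds those rows into the existing target directory by primary key instead of simply appending a new file.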


1. First, insert a row of data

The current time is:

SQL> select sysdate from dual;

SYSDATE
-------------------
2019-03-25 18:20:26

To simulate having a portion of historical data that was never imported into the Hive table, I first insert a historical row into the Oracle table:

SQL> select * from inr_job;

     EMPNO ENAME      JOB              SAL ETLTIME
---------- ---------- --------- ---------- -------------------
         1 er         CLERK            800 2019-03-22 17:24:42
         2 ALLEN      SALESMAN        1600 2019-03-22 17:24:42
         3 WARD       SALESMAN        1250 2019-03-22 17:24:42
         4 JONES      MANAGER         2975 2019-03-22 17:24:42
         5 MARTIN     SALESMAN        1250 2019-03-22 17:24:42
         6 zhao       DBA             1000 2019-03-22 17:24:42
         7 yan        BI               100 2019-03-22 17:24:42
         8 dong       JAVA             400 2019-03-22 17:24:42

8 rows selected.


SQL> insert into inr_job values(9,'test','test',200,sysdate-20);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from inr_job;

     EMPNO ENAME      JOB              SAL ETLTIME
---------- ---------- --------- ---------- -------------------
         1 er         CLERK            800 2019-03-22 17:24:42
         2 ALLEN      SALESMAN        1600 2019-03-22 17:24:42
         3 WARD       SALESMAN        1250 2019-03-22 17:24:42
         4 JONES      MANAGER         2975 2019-03-22 17:24:42
         5 MARTIN     SALESMAN        1250 2019-03-22 17:24:42
         6 zhao       DBA             1000 2019-03-22 17:24:42
         7 yan        BI               100 2019-03-22 17:24:42
         8 dong       JAVA             400 2019-03-22 17:24:42
         9 test       test             200 2019-03-05 18:53:23  -- simulates historical data never initialized into Hive

9 rows selected.

2. Update the historical data

Next, update this historical row by hand:

SQL> update inr_job set sal=999,etltime=sysdate where empno=9;

1 row updated.

SQL> commit;

Commit complete.

Query the table data again:

SQL> select * from inr_job;

     EMPNO ENAME      JOB              SAL ETLTIME
---------- ---------- --------- ---------- -------------------
         1 er         CLERK            800 2019-03-22 17:24:42
         2 ALLEN      SALESMAN        1600 2019-03-22 17:24:42
         3 WARD       SALESMAN        1250 2019-03-22 17:24:42
         4 JONES      MANAGER         2975 2019-03-22 17:24:42
         5 MARTIN     SALESMAN        1250 2019-03-22 17:24:42
         6 zhao       DBA             1000 2019-03-22 17:24:42
         7 yan        BI               100 2019-03-22 17:24:42
         8 dong       JAVA             400 2019-03-22 17:24:42
         9 test       test             999 2019-03-25 18:54:39

9 rows selected.

Now that the data has changed, execute the incremental job:

[root@hadoop hadoop]# sqoop job --exec auto_job
Warning: /hadoop/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
19/03/25 18:55:49 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hbase/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
19/03/25 18:55:51 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
19/03/25 18:55:51 INFO manager.SqlManager: Using default fetchSize of 1000
19/03/25 18:55:51 INFO tool.CodeGenTool: Beginning code generation
19/03/25 18:55:52 INFO manager.OracleManager: Time zone has been set to GMT
19/03/25 18:55:52 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM INR_JOB t WHERE 1=0
19/03/25 18:55:52 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /hadoop
Note: /tmp/sqoop-root/compile/f64e34273a58459369885b96fe46a1ad/INR_JOB.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
19/03/25 18:55:56 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/f64e34273a58459369885b96fe46a1ad/INR_JOB.jar
19/03/25 18:55:56 INFO manager.OracleManager: Time zone has been set to GMT
19/03/25 18:55:56 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM INR_JOB t WHERE 1=0
19/03/25 18:55:56 INFO tool.ImportTool: Incremental import based on column ETLTIME
19/03/25 18:55:56 INFO tool.ImportTool: Lower bound value: TO_TIMESTAMP('2019-03-25 18:50:07.0', 'YYYY-MM-DD HH24:MI:SS.FF')
19/03/25 18:55:56 INFO tool.ImportTool: Upper bound value: TO_TIMESTAMP('2019-03-25 18:55:56.0', 'YYYY-MM-DD HH24:MI:SS.FF')
19/03/25 18:55:56 INFO manager.OracleManager: Time zone has been set to GMT
19/03/25 18:55:56 INFO mapreduce.ImportJobBase: Beginning import of INR_JOB
19/03/25 18:55:56 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
19/03/25 18:55:56 INFO manager.OracleManager: Time zone has been set to GMT
19/03/25 18:55:56 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
19/03/25 18:55:56 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.66:8032
19/03/25 18:55:59 INFO db.DBInputFormat: Using read commited transaction isolation
19/03/25 18:55:59 INFO mapreduce.JobSubmitter: number of splits:1
19/03/25 18:56:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1553503985304_0013
19/03/25 18:56:00 INFO impl.YarnClientImpl: Submitted application application_1553503985304_0013
19/03/25 18:56:00 INFO mapreduce.Job: The url to track the job: http://hadoop:8088/proxy/application_1553503985304_0013/
19/03/25 18:56:00 INFO mapreduce.Job: Running job: job_1553503985304_0013
19/03/25 18:56:10 INFO mapreduce.Job: Job job_1553503985304_0013 running in uber mode : false
19/03/25 18:56:10 INFO mapreduce.Job:  map 0% reduce 0%
19/03/25 18:56:19 INFO mapreduce.Job:  map 100% reduce 0%
19/03/25 18:56:20 INFO mapreduce.Job: Job job_1553503985304_0013 completed successfully
19/03/25 18:56:20 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=144777
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=87
        HDFS: Number of bytes written=38
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Other local map tasks=1
        Total time spent by all maps in occupied slots (ms)=5870
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=5870
        Total vcore-milliseconds taken by all map tasks=5870
        Total megabyte-milliseconds taken by all map tasks=6010880
    Map-Reduce Framework
        Map input records=1
        Map output records=1
        Input split bytes=87
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=100
        CPU time spent (ms)=3220
        Physical memory (bytes) snapshot=189059072
        Virtual memory (bytes) snapshot=2147303424
        Total committed heap usage (bytes)=102236160
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=38
19/03/25 18:56:20 INFO mapreduce.ImportJobBase: Transferred 38 bytes in 23.7426 seconds (1.6005 bytes/sec)
19/03/25 18:56:20 INFO mapreduce.ImportJobBase: Retrieved 1 records.
19/03/25 18:56:20 INFO tool.ImportTool: Final destination exists, will run merge job.
19/03/25 18:56:20 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
19/03/25 18:56:20 INFO client.RMProxy: Connecting to ResourceManager at /192.168.1.66:8032
19/03/25 18:56:22 INFO input.FileInputFormat: Total input paths to process : 2
19/03/25 18:56:23 INFO mapreduce.JobSubmitter: number of splits:2
19/03/25 18:56:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1553503985304_0014
19/03/25 18:56:23 INFO impl.YarnClientImpl: Submitted application application_1553503985304_0014
19/03/25 18:56:23 INFO mapreduce.Job: The url to track the job: http://hadoop:8088/proxy/application_1553503985304_0014/
19/03/25 18:56:23 INFO mapreduce.Job: Running job: job_1553503985304_0014
19/03/25 18:56:37 INFO mapreduce.Job: Job job_1553503985304_0014 running in uber mode : false
19/03/25 18:56:37 INFO mapreduce.Job:  map 0% reduce 0%
19/03/25 18:56:46 INFO mapreduce.Job:  map 100% reduce 0%
19/03/25 18:56:56 INFO mapreduce.Job:  map 100% reduce 100%
19/03/25 18:56:57 INFO mapreduce.Job: Job job_1553503985304_0014 completed successfully
19/03/25 18:56:57 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=614
        FILE: Number of bytes written=435819
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=657
        HDFS: Number of bytes written=361
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=11103
        Total time spent by all reduces in occupied slots (ms)=7376
        Total time spent by all map tasks (ms)=11103
        Total time spent by all reduce tasks (ms)=7376
        Total vcore-milliseconds taken by all map tasks=11103
        Total vcore-milliseconds taken by all reduce tasks=7376
        Total megabyte-milliseconds taken by all map tasks=11369472
        Total megabyte-milliseconds taken by all reduce tasks=7553024
    Map-Reduce Framework
        Map input records=9
        Map output records=9
        Map output bytes=590
        Map output materialized bytes=620
        Input split bytes=296
        Combine input records=0
        Combine output records=0
        Reduce input groups=9
        Reduce shuffle bytes=620
        Reduce input records=9
        Reduce output records=9
        Spilled Records=18
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=263
        CPU time spent (ms)=3980
        Physical memory (bytes) snapshot=670138368
        Virtual memory (bytes) snapshot=6394978304
        Total committed heap usage (bytes)=508559360
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=361
    File Output Format Counters 
        Bytes Written=361
19/03/25 18:56:57 INFO tool.ImportTool: Saving incremental import state to the metastore
19/03/25 18:56:57 INFO tool.ImportTool: Updated data for job: auto_job
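
The last two log lines show Sqoop persisting the new upper bound (2019-03-25 18:55:56.0) to the metastore as the job's last-value, so the next run will only pick up rows changed after that point. As a quick sanity check, sqoop job --show prints the saved job parameters, including the stored last-value (property names can differ slightly between Sqoop versions):

[root@hadoop hadoop]# sqoop job --show auto_job | grep -i incremental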

It ran without errors. Now look at the Hive table:

hive> select * from inr_job;
OK
1    er    CLERK    800.0    2019-03-22 17:24:42.0
2    ALLEN    SALESMAN    1600.0    2019-03-22 17:24:42.0
3    WARD    SALESMAN    1250.0    2019-03-22 17:24:42.0
4    JONES    MANAGER    2975.0    2019-03-22 17:24:42.0
5    MARTIN    SALESMAN    1250.0    2019-03-22 17:24:42.0
6    zhao    DBA    1000.0    2019-03-22 17:24:42.0
7    yan    BI    100.0    2019-03-22 17:24:42.0
8    dong    JAVA    400.0    2019-03-22 17:24:42.0
9    test    test    999.0    2019-03-25 18:54:39.0
Time taken: 0.336 seconds, Fetched: 9 row(s)

Historical data that never made it into the initial load, once it changes recently enough to satisfy the incremental condition, simply gets picked up and merged in without any error, which is exactly what I needed. A look at how the merge-key parameter works makes it clear why this is fine: the increment is merged using the primary key plus the last-modified time, so a key that exists only on the Oracle side is just treated as a new row.
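
Under the hood, the second MapReduce job in the log above (job_1553503985304_0014, the one with 2 input paths and a reduce phase) performs the same work as the standalone sqoop merge tool: the freshly imported delta is joined with the existing dataset on the merge key, the newer record wins where a key exists on both sides, and keys present only in the delta are written through as plain inserts, which is why the never-imported row 9 causes no error. A rough sketch of the equivalent manual invocation, with placeholder directories (the jar and class come from the code-generation step visible in the log):

sqoop merge \
  --new-data /user/hive/warehouse/inr_job_delta \
  --onto /user/hive/warehouse/inr_job \
  --target-dir /user/hive/warehouse/inr_job_merged \
  --jar-file /tmp/sqoop-root/compile/f64e34273a58459369885b96fe46a1ad/INR_JOB.jar \
  --class-name INR_JOB \
  --merge-key EMPNO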


Summary

The above is another round of in-depth testing of merge-key. It genuinely comes up in real work, so it was worth getting to the bottom of it.
