[Original] A Brief Discussion of Exporting and Importing (i.e. Backing Up) Data in an HBase Table

Recently I needed to export the data of a certain table in our production HBase environment to the test environment (not a lot of data, roughly 2 million rows). Importing it by calling the application's interface would have been too slow, so I decided to use HBase's own export/import facility instead. Since this was an experiment, I created a small table in the production environment with only two rows; the goal is to import it into a new table (empty, but with the same structure).
hbase(main):004:0> scan 'xyz'
ROW                   COLUMN+CELL                                              
 10000                column=cf1:val, timestamp=1345598242644, value=china     
 20000                column=cf1:val, timestamp=1345598283332, value=zengzhunzhun
2 row(s) in 0.0350 seconds
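For reference, the general form of the two MapReduce jobs used below, as invoked through the Driver class shipped with this HBase 0.92 setup, is roughly the following (the versions and time-range arguments are optional; the angle-bracket placeholders are mine):

hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver import <tablename> <inputdir>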
Start the export:
[hadoop@master ~]$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver export xyz file:///home/hadoop/xyz
12/08/22 10:12:07 INFO mapreduce.Export: verisons=1, starttime=0, endtime=9223372036854775807
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:12:08 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:host.name=master
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_14
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_14/jre
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hbase/bin/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-1.0.0.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:/home/hadoop/hadoop/libexec/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hadoop/libexec/..:/home/hadoop/hadoop/libexec/../hadoop-core-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/asm-3.2.jar:/home/hadoop/hadoop/libexec/../lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-cli-1.2.jar:/home/hadoop/hadoop/libexec/../lib/commons-codec-1.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-configuration-1.6.jar:/home/hadoop/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-digester-1.8.jar:/home/hadoop/hadoop/libexec/../lib/commons-el-1.0.jar:/home/hadoop/hadoop/libexec/..
/lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-io-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-lang-2.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-math-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-net-1.4.1.jar:/home/hadoop/hadoop/libexec/../lib/core-3.1.1.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hbase-0.92.1.jar:/home/hadoop/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jdeb-0.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-core-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-json-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-server-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jets3t-0.6.1.jar:/home/hadoop/hadoop/libexec/../lib/jetty-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jsch-0.1.42.jar:/home/hadoop/hadoop/libexec/../lib/junit-4.5.jar:/home/hadoop/hadoop/libexec/../lib/kfs-0.2.2.jar:/home/hadoop/hadoop/libexec/../lib/log4j-1.2.15.jar:/home/hadoop/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop/libexec/../lib/oro-2.0.8.jar:/home/hadoop/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/xmlenc-0.52.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop/libexec/../lib/native/Linux-amd64-64:/home/hadoop/hbase/lib/native/Linux-amd64-64
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.9-89.ELsmp
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
12/08/22 10:12:09 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=slave2:2222,slave1:2222,slave3:2222 sessionTimeout=180000 watcher=hconnection
12/08/22 10:12:09 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.15.132:2222
12/08/22 10:12:09 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 23606@master
12/08/22 10:12:09 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: 无法定位登录配置 occurred when trying to find JAAS configuration.
12/08/22 10:12:09 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/08/22 10:12:09 INFO zookeeper.ClientCnxn: Socket connection established to slave3/192.168.15.132:2222, initiating session
12/08/22 10:12:09 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
12/08/22 10:12:09 INFO zookeeper.ClientCnxn: Session establishment complete on server slave3/192.168.15.132:2222, sessionid = 0x33943bafeb90005, negotiated timeout = 40000
12/08/22 10:12:09 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@28bb494b; serverName=slave2,60020,1345461138645
12/08/22 10:12:09 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is slave3:60020
12/08/22 10:12:09 DEBUG client.MetaScanner: Scanning .META. starting at row=xyz,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@28bb494b
12/08/22 10:12:09 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for xyz,,1340764906812.6aa4cb2fb4c9eb34f360953acdb1e21c. is slave2:60020
12/08/22 10:12:09 DEBUG client.MetaScanner: Scanning .META. starting at row=xyz,,00000000000000 for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@28bb494b
12/08/22 10:12:09 DEBUG mapreduce.TableInputFormatBase: getSplits: split -> 0 -> slave2:,
12/08/22 10:12:10 INFO mapred.JobClient: Running job: job_201208201908_0002
12/08/22 10:12:11 INFO mapred.JobClient:  map 0% reduce 0%
12/08/22 10:12:30 INFO mapred.JobClient:  map 100% reduce 0%
12/08/22 10:12:35 INFO mapred.JobClient: Job complete: job_201208201908_0002
12/08/22 10:12:35 INFO mapred.JobClient: Counters: 18
12/08/22 10:12:35 INFO mapred.JobClient:   Job Counters
12/08/22 10:12:35 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=16203
12/08/22 10:12:35 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/08/22 10:12:35 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/08/22 10:12:35 INFO mapred.JobClient:     Launched map tasks=1
12/08/22 10:12:35 INFO mapred.JobClient:     Data-local map tasks=1
12/08/22 10:12:35 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/08/22 10:12:35 INFO mapred.JobClient:   File Output Format Counters
12/08/22 10:12:35 INFO mapred.JobClient:     Bytes Written=255
12/08/22 10:12:35 INFO mapred.JobClient:   FileSystemCounters
12/08/22 10:12:35 INFO mapred.JobClient:     HDFS_BYTES_READ=58
12/08/22 10:12:35 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=31370
12/08/22 10:12:35 INFO mapred.JobClient:   File Input Format Counters
12/08/22 10:12:35 INFO mapred.JobClient:     Bytes Read=0
12/08/22 10:12:35 INFO mapred.JobClient:   Map-Reduce Framework
12/08/22 10:12:35 INFO mapred.JobClient:     Map input records=2
12/08/22 10:12:35 INFO mapred.JobClient:     Physical memory (bytes) snapshot=77606912
12/08/22 10:12:35 INFO mapred.JobClient:     Spilled Records=0
12/08/22 10:12:35 INFO mapred.JobClient:     CPU time spent (ms)=1830
12/08/22 10:12:35 INFO mapred.JobClient:     Total committed heap usage (bytes)=31850496
12/08/22 10:12:35 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=488656896
12/08/22 10:12:35 INFO mapred.JobClient:     Map output records=2
12/08/22 10:12:35 INFO mapred.JobClient:     SPLIT_RAW_BYTES=58
The counter output above (Map output records=2) already shows that the two rows were exported. Because I have three datanodes and the data set is so small, the export file is bound to end up on just one of them. If there is a lot of data, every datanode may end up holding export files, so you need to check which nodes have an xyz directory under /home/hadoop, for example with a sweep like the sketch below.
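A minimal sketch for that sweep, assuming passwordless ssh as the hadoop user and the hostnames of this cluster (slave1 through slave3; adjust both to your environment):

for host in slave1 slave2 slave3; do
  echo "== $host =="
  # list the export directory if it exists on this node
  ssh hadoop@"$host" 'ls /home/hadoop/xyz 2>/dev/null'
done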
In my case I found the files as follows:
[hadoop@slave2 ~]$ cd xyz/
[hadoop@slave2 xyz]$ ls
part-m-00000  _SUCCESS
Now create a new table with the same structure as xyz:
hbase(main):001:0> create 'zzz','cf1'
0 row(s) in 2.0490 seconds
Then start the import. Here I simply ran the import on the node where the export file landed; of course you could also copy this part-m-00000 file to one of the other datanodes and import it there. A friendly reminder: if the exported data is large, do not put all the part-m-0000* files into one directory and import them in a single run; that is bound to fail. You have to import the part-m-0000* files one at a time, for example with a loop like the sketch below.
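A minimal sketch of such a one-at-a-time loop, assuming the part files sit under /home/hadoop/xyz on the node you run it from and that hbase/bin/hbase resolves the same way as in the commands above:

for f in /home/hadoop/xyz/part-m-0000*; do
  # pass each part file as its own input path to the Import job
  hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver import zzz "file://$f"
done

Here is the import I actually ran, pointing at the whole xyz directory since it holds only one part file: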
[hadoop@slave2 ~]$ hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver import zzz file:///home/hadoop/xyz/
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
12/08/22 10:30:42 DEBUG mapreduce.TableMapReduceUtil: New JarFinder: org.apache.hadoop.util.JarFinder.getJar not available.  Using old findContainingJar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:host.name=slave2
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_14
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.6.0_14/jre
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/hbase/bin/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hbase:/home/hadoop/hbase/hbase-0.92.1.jar:/home/hadoop/hbase/hbase-0.92.1-tests.jar:/home/hadoop/hbase/lib/activation-1.1.jar:/home/hadoop/hbase/lib/asm-3.1.jar:/home/hadoop/hbase/lib/avro-1.5.3.jar:/home/hadoop/hbase/lib/avro-ipc-1.5.3.jar:/home/hadoop/hbase/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hbase/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hbase/lib/commons-cli-1.2.jar:/home/hadoop/hbase/lib/commons-codec-1.4.jar:/home/hadoop/hbase/lib/commons-collections-3.2.1.jar:/home/hadoop/hbase/lib/commons-configuration-1.6.jar:/home/hadoop/hbase/lib/commons-digester-1.8.jar:/home/hadoop/hbase/lib/commons-el-1.0.jar:/home/hadoop/hbase/lib/commons-httpclient-3.1.jar:/home/hadoop/hbase/lib/commons-lang-2.5.jar:/home/hadoop/hbase/lib/commons-logging-1.1.1.jar:/home/hadoop/hbase/lib/commons-math-2.1.jar:/home/hadoop/hbase/lib/commons-net-1.4.1.jar:/home/hadoop/hbase/lib/core-3.1.1.jar:/home/hadoop/hbase/lib/guava-r09.jar:/home/hadoop/hbase/lib/hadoop-core-1.0.0.jar:/home/hadoop/hbase/lib/high-scale-lib-1.1.1.jar:/home/hadoop/hbase/lib/httpclient-4.0.1.jar:/home/hadoop/hbase/lib/httpcore-4.0.1.jar:/home/hadoop/hbase/lib/jackson-core-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-jaxrs-1.5.5.jar:/home/hadoop/hbase/lib/jackson-mapper-asl-1.5.5.jar:/home/hadoop/hbase/lib/jackson-xc-1.5.5.jar:/home/hadoop/hbase/lib/jamon-runtime-2.3.1.jar:/home/hadoop/hbase/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hbase/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hbase/lib/jaxb-api-2.1.jar:/home/hadoop/hbase/lib/jaxb-impl-2.1.12.jar:/home/hadoop/hbase/lib/jersey-core-1.4.jar:/home/hadoop/hbase/lib/jersey-json-1.4.jar:/home/hadoop/hbase/lib/jersey-server-1.4.jar:/home/hadoop/hbase/lib/jettison-1.1.jar:/home/hadoop/hbase/lib/jetty-6.1.26.jar:/home/hadoop/hbase/lib/jetty-util-6.1.26.jar:/home/hadoop/hbase/lib/jruby-complete-1.6.5.jar:/home/hadoop/hbase/lib/jsp-2.1-6.1.14.jar:/home/hadoop/hbase/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/hbase/lib/libthrift-0.7.0.jar:/home/hadoop/hbase/lib/log4j-1.2.16.jar:/home/hadoop/hbase/lib/netty-3.2.4.Final.jar:/home/hadoop/hbase/lib/protobuf-java-2.4.0a.jar:/home/hadoop/hbase/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/hbase/lib/servlet-api-2.5.jar:/home/hadoop/hbase/lib/slf4j-api-1.5.8.jar:/home/hadoop/hbase/lib/slf4j-log4j12-1.5.8.jar:/home/hadoop/hbase/lib/snappy-java-1.0.3.2.jar:/home/hadoop/hbase/lib/stax-api-1.0.1.jar:/home/hadoop/hbase/lib/velocity-1.7.jar:/home/hadoop/hbase/lib/xmlenc-0.52.jar:/home/hadoop/hbase/lib/zookeeper-3.4.3.jar:/home/hadoop/hadoop/libexec/../conf:/usr/java/jdk1.6.0_14/lib/tools.jar:/home/hadoop/hadoop/libexec/..:/home/hadoop/hadoop/libexec/../hadoop-core-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/asm-3.2.jar:/home/hadoop/hadoop/libexec/../lib/aspectjrt-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/aspectjtools-1.6.5.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop/libexec/../lib/commons-cli-1.2.jar:/home/hadoop/hadoop/libexec/../lib/commons-codec-1.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-configuration-1.6.jar:/home/hadoop/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-digester-1.8.jar:/home/hadoop/hadoop/libexec/../lib/commons-el-1.0.jar:/home/hadoop/hadoop/libexec/..
/lib/commons-httpclient-3.0.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-io-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-lang-2.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/home/hadoop/hadoop/libexec/../lib/commons-math-2.1.jar:/home/hadoop/hadoop/libexec/../lib/commons-net-1.4.1.jar:/home/hadoop/hadoop/libexec/../lib/core-3.1.1.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-fairscheduler-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hadoop-thriftfs-1.0.3.jar:/home/hadoop/hadoop/libexec/../lib/hbase-0.92.1.jar:/home/hadoop/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/home/hadoop/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/home/hadoop/hadoop/libexec/../lib/jdeb-0.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-core-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-json-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jersey-server-1.8.jar:/home/hadoop/hadoop/libexec/../lib/jets3t-0.6.1.jar:/home/hadoop/hadoop/libexec/../lib/jetty-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop/libexec/../lib/jsch-0.1.42.jar:/home/hadoop/hadoop/libexec/../lib/junit-4.5.jar:/home/hadoop/hadoop/libexec/../lib/kfs-0.2.2.jar:/home/hadoop/hadoop/libexec/../lib/log4j-1.2.15.jar:/home/hadoop/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop/libexec/../lib/oro-2.0.8.jar:/home/hadoop/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-api-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar:/home/hadoop/hadoop/libexec/../lib/xmlenc-0.52.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/home/hadoop/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop/libexec/../lib/native/Linux-amd64-64:/home/hadoop/hbase/lib/native/Linux-amd64-64
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.9-89.ELsmp
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
12/08/22 10:30:44 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=slave2:2222,slave1:2222,slave3:2222 sessionTimeout=180000 watcher=hconnection
12/08/22 10:30:44 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.15.72:2222
12/08/22 10:30:44 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 30654@slave2
12/08/22 10:30:44 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: 无法定位登录配置 occurred when trying to find JAAS configuration.
12/08/22 10:30:44 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
12/08/22 10:30:44 INFO zookeeper.ClientCnxn: Socket connection established to slave1/192.168.15.72:2222, initiating session
12/08/22 10:30:44 WARN zookeeper.ClientCnxnSocket: Connected to an old server; r-o mode will be unavailable
12/08/22 10:30:44 INFO zookeeper.ClientCnxn: Session establishment complete on server slave1/192.168.15.72:2222, sessionid = 0x13943ba912f0007, negotiated timeout = 40000
12/08/22 10:30:44 DEBUG client.HConnectionManager$HConnectionImplementation: Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1c23f1bb; serverName=slave2,60020,1345461138645
12/08/22 10:30:44 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for .META.,,1.1028785192 is slave3:60020
12/08/22 10:30:44 DEBUG client.MetaScanner: Scanning .META. starting at row=zzz,,00000000000000 for max=10 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1c23f1bb
12/08/22 10:30:44 DEBUG client.HConnectionManager$HConnectionImplementation: Cached location for zzz,,1345602149536.dbeb5fc388bcc537d40b5602b60798ff. is slave3:60020
12/08/22 10:30:44 INFO mapreduce.TableOutputFormat: Created table instance for zzz
12/08/22 10:30:44 INFO input.FileInputFormat: Total input paths to process : 1
12/08/22 10:30:45 INFO mapred.JobClient: Running job: job_201208201908_0004
12/08/22 10:30:46 INFO mapred.JobClient:  map 0% reduce 0%
12/08/22 10:31:23 INFO mapred.JobClient: Task Id : attempt_201208201908_0004_m_000000_0, Status : FAILED
java.io.FileNotFoundException: File file:/home/hadoop/xyz/part-m-00000 does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
        at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:796)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
        at org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.initialize(SequenceFileRecordReader.java:50)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:522)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)
12/08/22 10:31:35 INFO mapred.JobClient:  map 100% reduce 0%
12/08/22 10:31:40 INFO mapred.JobClient: Job complete: job_201208201908_0004
12/08/22 10:31:40 INFO mapred.JobClient: Counters: 19
12/08/22 10:31:40 INFO mapred.JobClient:   Job Counters
12/08/22 10:31:40 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=22927
12/08/22 10:31:40 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
12/08/22 10:31:40 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
12/08/22 10:31:40 INFO mapred.JobClient:     Rack-local map tasks=2
12/08/22 10:31:40 INFO mapred.JobClient:     Launched map tasks=2
12/08/22 10:31:40 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/08/22 10:31:40 INFO mapred.JobClient:   File Output Format Counters
12/08/22 10:31:40 INFO mapred.JobClient:     Bytes Written=0
12/08/22 10:31:40 INFO mapred.JobClient:   FileSystemCounters
12/08/22 10:31:40 INFO mapred.JobClient:     FILE_BYTES_READ=255
12/08/22 10:31:40 INFO mapred.JobClient:     HDFS_BYTES_READ=99
12/08/22 10:31:40 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=31054
12/08/22 10:31:40 INFO mapred.JobClient:   File Input Format Counters
12/08/22 10:31:40 INFO mapred.JobClient:     Bytes Read=255
12/08/22 10:31:40 INFO mapred.JobClient:   Map-Reduce Framework
12/08/22 10:31:40 INFO mapred.JobClient:     Map input records=2
12/08/22 10:31:40 INFO mapred.JobClient:     Physical memory (bytes) snapshot=72753152
12/08/22 10:31:40 INFO mapred.JobClient:     Spilled Records=0
12/08/22 10:31:40 INFO mapred.JobClient:     CPU time spent (ms)=260
12/08/22 10:31:40 INFO mapred.JobClient:     Total committed heap usage (bytes)=18350080
12/08/22 10:31:40 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=491810816
12/08/22 10:31:40 INFO mapred.JobClient:     Map output records=2
12/08/22 10:31:40 INFO mapred.JobClient:     SPLIT_RAW_BYTES=99
From the output above you can see that 2 records were imported, but the job still reports an error saying the file does not exist, and I am not sure why. My guess, going by the counters, is that two map attempts were launched (Launched map tasks=2) while the input is a file:// path that exists only on slave2's local filesystem, so an attempt scheduled on another node cannot find /home/hadoop/xyz/part-m-00000 and fails, and the retry on a node that does have the file succeeds. In any case, the data did get imported.
Check the data in the zzz table:
hbase(main):003:0> scan 'zzz'
ROW                   COLUMN+CELL                                              
 10000                column=cf1:val, timestamp=1345598242644, value=china     
 20000                column=cf1:val, timestamp=1345598283332, value=zengzhunzhun
2 row(s) in 0.0410 seconds
With that, the exercise is basically complete: HBase table data can be exported and imported as MapReduce jobs, and of course the same mechanism can be used for backups.
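For a real backup it is usually easier to write the export to an HDFS path rather than a node-local file:// path, so every map task can read and write the data no matter where it is scheduled (which also avoids the file-not-found retries seen above). A minimal sketch, with /backup/xyz as an illustrative HDFS directory:

hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver export xyz /backup/xyz
hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.Driver import zzz /backup/xyz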