HBase Programming API Primer Series: HTable Pool (6)


HTable is a fairly heavyweight object: creating one loads configuration files, connects to ZooKeeper, queries the meta table, and so on. Under high concurrency this drags down system performance, which is why the notion of a connection "pool" was introduced.

The purpose of an HBase connection pool is to improve the program's concurrency and access speed.

The usage pattern is simple: take a connection from the pool, use it, and put it back when you are done.
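The borrow-and-return idea can be sketched independently of HBase with a tiny generic pool built on a `BlockingQueue` (an illustrative sketch only; `SimplePool` and its method names are made up for this example and are not part of the HBase API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A minimal object pool: borrow() takes an item out, giveBack() returns it.
// If the pool is empty, borrow() blocks until someone returns an item.
public class SimplePool<T> {
    private final BlockingQueue<T> items;

    public SimplePool(Iterable<T> initial) {
        items = new ArrayBlockingQueue<>(16);
        for (T t : initial) {
            items.add(t);
        }
    }

    public T borrow() throws InterruptedException {
        return items.take();
    }

    public void giveBack(T t) throws InterruptedException {
        items.put(t);
    }

    public int available() {
        return items.size();
    }
}
```

Because `borrow()` blocks when the pool is empty, a caller that arrives while everything is checked out simply waits, which is the same behavior the HBase connection's thread pool gives us.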

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {

    private TableConnection() {
    }

    private static HConnection connection = null;

    // synchronized so concurrent callers cannot race and create two connections
    public static synchronized HConnection getConnection() {
        if (connection == null) {
            // create a fixed-size thread pool for the connection to use
            ExecutorService pool = Executors.newFixedThreadPool(10);
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum",
                    "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try {
                // hand the configuration and the thread pool to the connection
                connection = HConnectionManager.createConnection(conf, pool);
            } catch (IOException e) {
                // don't swallow the error: without a connection nothing else works
                throw new RuntimeException("Failed to create HBase connection", e);
            }
        }
        return connection;
    }
}

How do we use this pool in a program? TableConnection is a shared, ready-built pool; from here on it can serve as a reusable template.

1. Using the pool to improve on the put approach from "HBase Programming API Primer Series: put (client-side) (1)"

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // The commented-out put/get/delete/scan examples from earlier posts
        // in this series are omitted here; this run only exercises insertValue().
        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception {
        // Borrow a table handle from the shared connection ("pool")
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_04"));  // row key row_04
        // column family f, qualifier name, value 北京
        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("北京"));
        table.put(put);
        table.close();  // releases the handle; the underlying connection stays pooled
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        configuration.set("hbase.zookeeper.quorum",
                "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478096702098, value=Andy1 
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC 
4 row(s) in 0.5970 seconds

hbase(main):037:0>

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // Earlier commented-out examples omitted; this run again calls insertValue(),
        // now writing a different row through the same pooled connection.
        HBaseTest hbasetest = new HBaseTest();
        hbasetest.insertValue();
    }

    public void insertValue() throws Exception {
        // Borrow a table handle from the shared connection ("pool")
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Put put = new Put(Bytes.toBytes("row_05"));  // row key row_05
        // family f, qualifier address; "beijng" is the value actually stored
        // in this run, as the scan output below confirms
        put.add(Bytes.toBytes("f"), Bytes.toBytes("address"), Bytes.toBytes("beijng"));
        table.put(put);
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        configuration.set("hbase.zookeeper.quorum",
                "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:22:14,784 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x19d12e87 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 14:22:14,796 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-a
pp-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare
\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 14:22:14,797 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 14:22:14,798 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 14:22:14,799 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 14:22:14,801 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x19d12e870x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:22:14,853 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:22:14,855 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:22:14,960 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001c, negotiated timeout = 40000

hbase(main):035:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478096702098, value=Andy1 
4 row(s) in 0.1190 seconds

hbase(main):036:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097220790, value=\xE5\x8C\x97\xE4\xBA\xAC 
4 row(s) in 0.5970 seconds

hbase(main):037:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC 
row_05 column=f:address, timestamp=1478097364649, value=beijng 
5 row(s) in 0.2630 seconds

hbase(main):038:0>

This is exactly the "pool" concept: the shared connection is created once and kept alive across calls.

A closer look

Here I configured the pool with 10 threads. The mechanics are simple: you take one to use, someone else takes another, and when you are all done you return them, much like borrowing books from a library.

A natural question: if all 10 threads in the fixed pool are taken and an 11th request arrives, is it simply out of luck? No: it waits until one is returned. The principle is the same as a queue.
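That queueing behavior can be demonstrated with plain `java.util.concurrent`, no HBase required (a standalone sketch; the task counts and sleep duration are arbitrary): a fixed pool runs at most `poolSize` tasks at once, and any extra tasks wait in the executor's internal queue until a worker frees up.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {

    // Submit 'tasks' jobs to a pool of 'poolSize' threads and wait for all
    // of them to finish; jobs beyond poolSize queue up instead of failing.
    public static int runTasks(int poolSize, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(50); // simulate work that holds a pooled thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                completed.incrementAndGet();
            });
        }
        pool.shutdown();                            // accept no new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS); // let queued tasks drain
        return completed.get();                      // every queued task still ran
    }

    public static void main(String[] args) throws InterruptedException {
        // 11 tasks against 10 threads: the 11th waits its turn, nothing is lost.
        System.out.println(runTasks(10, 11));
    }
}
```

The 11th task is never rejected; it simply sits in the queue until one of the 10 workers becomes free, which is the waiting behavior described above.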
The payoff is simple: with the pooled connection, we no longer have to load the configuration and connect to ZooKeeper by hand on every call, because TableConnection.java does it once.

2. Using the pool to improve on the get approach from "HBase Programming API Primer Series: get (client-side) (2)"

To give a deeper feel for the appeal of the pool: this is also the recommended practice, and strongly encouraged, in real-world company development.

 

hbase(main):038:0> scan 'test_table'
ROW COLUMN+CELL 
row_01 column=f:col, timestamp=1478095650110, value=maizi 
row_01 column=f:name, timestamp=1478095741767, value=Andy2 
row_02 column=f:name, timestamp=1478095849538, value=Andy2 
row_03 column=f:name, timestamp=1478095893278, value=Andy3 
row_04 column=f:name, timestamp=1478097227253, value=\xE5\x8C\x97\xE4\xBA\xAC 
row_05 column=f:address, timestamp=1478097364649, value=beijng 
5 row(s) in 0.2280 seconds

hbase(main):039:0>

package zhouls.bigdata.HbaseProject.Pool;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
        // Earlier commented-out examples omitted; this run reads through the
        // same pooled connection instead of writing.
        HBaseTest hbasetest = new HBaseTest();
        // hbasetest.insertValue();
        hbasetest.getValue();
    }

    public void getValue() throws Exception {
        // Borrow a table handle from the shared connection ("pool")
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Get get = new Get(Bytes.toBytes("row_03"));
        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        org.apache.hadoop.hbase.client.Result rest = table.get(get);
        System.out.println(rest.toString());
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = new Configuration();
        configuration.set("hbase.zookeeper.quorum",
                "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

2016-12-11 14:37:12,030 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x7660aac9 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 14:37:12,040 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 14:37:12,041 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.class.path=D:\Code\MyEclipseJavaCode\HbaseProject\bin;D:\SoftWare\hbase-1.2.3\lib\activation-1.1.jar;D:\SoftWare\hbase-1.2.3\lib\aopalliance-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-i18n-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\apacheds-kerberos-codec-2.0.0-M15.jar;D:\SoftWare\hbase-1.2.3\lib\api-asn1-api-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\api-util-1.0.0-M20.jar;D:\SoftWare\hbase-1.2.3\lib\asm-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\avro-1.7.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-1.7.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-beanutils-core-1.8.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-cli-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-codec-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\commons-collections-3.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-compress-1.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-configuration-1.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-daemon-1.0.13.jar;D:\SoftWare\hbase-1.2.3\lib\commons-digester-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\commons-el-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\commons-httpclient-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-io-2.4.jar;D:\SoftWare\hbase-1.2.3\lib\commons-lang-2.6.jar;D:\SoftWare\hbase-1.2.3\lib\commons-logging-1.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math-2.2.jar;D:\SoftWare\hbase-1.2.3\lib\commons-math3-3.1.1.jar;D:\SoftWare\hbase-1.2.3\lib\commons-net-3.1.jar;D:\SoftWare\hbase-1.2.3\lib\disruptor-3.3.0.jar;D:\SoftWare\hbase-1.2.3\lib\findbugs-annotations-1.3.9-1.jar;D:\SoftWare\hbase-1.2.3\lib\guava-12.0.1.jar;D:\SoftWare\hbase-1.2.3\lib\guice-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\guice-servlet-3.0.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-annotations-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-auth-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-hdfs-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-a
pp-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-core-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-jobclient-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-mapreduce-client-shuffle-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-api-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-client-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hadoop-yarn-server-common-2.5.1.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-annotations-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-client-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-common-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-examples-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-external-blockcache-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop2-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-hadoop-compat-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-it-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-prefix-tree-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-procedure-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-protocol-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-resource-bundle-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-rest-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-server-1.2.3-tests.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-shell-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\hbase-thrift-1.2.3.jar;D:\SoftWare\hbase-1.2.3\lib\htrace-core-3.1.0-incubating.jar;D:\SoftWare\hbase-1.2.3\lib\httpclient-4.2.5.jar;D:\SoftWare\hbase-1.2.3\lib\httpcore-4.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-core-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-jaxrs-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-mapper-asl-1.9.13.jar;D:\SoftWare\hbase-1.2.3\lib\jackson-xc-1.9.13.jar;D:\SoftWare
\hbase-1.2.3\lib\jamon-runtime-2.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-compiler-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\jasper-runtime-5.5.23.jar;D:\SoftWare\hbase-1.2.3\lib\javax.inject-1.jar;D:\SoftWare\hbase-1.2.3\lib\java-xmlbuilder-0.4.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-api-2.2.2.jar;D:\SoftWare\hbase-1.2.3\lib\jaxb-impl-2.2.3-1.jar;D:\SoftWare\hbase-1.2.3\lib\jcodings-1.0.8.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-client-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-core-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-guice-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-json-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jersey-server-1.9.jar;D:\SoftWare\hbase-1.2.3\lib\jets3t-0.9.0.jar;D:\SoftWare\hbase-1.2.3\lib\jettison-1.3.3.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-sslengine-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\jetty-util-6.1.26.jar;D:\SoftWare\hbase-1.2.3\lib\joni-2.1.2.jar;D:\SoftWare\hbase-1.2.3\lib\jruby-complete-1.6.8.jar;D:\SoftWare\hbase-1.2.3\lib\jsch-0.1.42.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\jsp-api-2.1-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\junit-4.12.jar;D:\SoftWare\hbase-1.2.3\lib\leveldbjni-all-1.8.jar;D:\SoftWare\hbase-1.2.3\lib\libthrift-0.9.3.jar;D:\SoftWare\hbase-1.2.3\lib\log4j-1.2.17.jar;D:\SoftWare\hbase-1.2.3\lib\metrics-core-2.2.0.jar;D:\SoftWare\hbase-1.2.3\lib\netty-all-4.0.23.Final.jar;D:\SoftWare\hbase-1.2.3\lib\paranamer-2.3.jar;D:\SoftWare\hbase-1.2.3\lib\protobuf-java-2.5.0.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5.jar;D:\SoftWare\hbase-1.2.3\lib\servlet-api-2.5-6.1.14.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-api-1.7.7.jar;D:\SoftWare\hbase-1.2.3\lib\slf4j-log4j12-1.7.5.jar;D:\SoftWare\hbase-1.2.3\lib\snappy-java-1.0.4.1.jar;D:\SoftWare\hbase-1.2.3\lib\spymemcached-2.11.6.jar;D:\SoftWare\hbase-1.2.3\lib\xmlenc-0.52.jar;D:\SoftWare\hbase-1.2.3\lib\xz-1.0.jar;D:\SoftWare\hbase-1.2.3\lib\zookeeper-3.4.6.jar
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.library.path=C:\Program Files\Java\jdk1.7.0_51\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\ProgramData\Oracle\Java\javapath;C:\Python27\;C:\Python27\Scripts;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;D:\SoftWare\MATLAB R2013a\runtime\win64;D:\SoftWare\MATLAB R2013a\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare;C:\Program Files\Java\jdk1.7.0_51\bin;C:\Program Files\Java\jdk1.7.0_51\jre\bin;D:\SoftWare\apache-ant-1.9.0\bin;HADOOP_HOME\bin;D:\SoftWare\apache-maven-3.3.9\bin;D:\SoftWare\Scala\bin;D:\SoftWare\Scala\jre\bin;%MYSQL_HOME\bin;D:\SoftWare\MySQL Server\MySQL Server 5.0\bin;D:\SoftWare\apache-tomcat-7.0.69\bin;%C:\Windows\System32;%C:\Windows\SysWOW64;D:\SoftWare\SSH Secure Shell;.
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 14:37:12,042 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 14:37:12,044 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x7660aac90x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 14:37:12,091 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 14:37:12,094 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 14:37:12,162 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c5001d, negotiated timeout = 40000
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}

3.1、Using the "pool" to rework

HBase Programming: API Primer — delete (from the client side) (3)

HBase Programming: API Primer — the difference between delete.deleteColumn and delete.deleteColumns (from the client side) (4)

  the approach used above.

    From oldest to newest, the timestamp versions are Andy2 -> Andy1 -> Andy0
               (Andy2 was written first, Andy0 last)

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// the table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// the row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// column family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// column family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// if no column is specified, every column is returned by default
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every timestamp version of the column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// the start row key is inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// the stop row key is exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// iterate over the whole result set
//        System.out.println(rst.toString());
//        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()){
//            for(Cell cell : next.rawCells()){// loop over the cells of one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
        hbasetest.delete();
    }

//    public void insertValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// the row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

//    public void getValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

    public void delete() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of the column
        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every timestamp version of the column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = HBaseConfiguration.create();
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn deletes only the newest timestamp version of the column.

    deleteColumns deletes every timestamp version of the column.
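Because this distinction is easy to get wrong, here is a small cluster-free sketch that models a column's timestamp versions the way the two delete calls treat them. The class `VersionedColumn` is hypothetical, invented just for illustration (a plain `TreeMap` stands in for the column's version store; no HBase dependency):

```java
import java.util.TreeMap;

// Hypothetical model of one column: timestamp -> value.
class VersionedColumn {
    private final TreeMap<Long, String> versions = new TreeMap<>();

    void put(long ts, String value) { versions.put(ts, value); }

    // Like Delete.deleteColumn: removes only the newest timestamp version.
    void deleteLatest() {
        if (!versions.isEmpty()) versions.remove(versions.lastKey());
    }

    // Like Delete.deleteColumns: removes every timestamp version.
    void deleteAll() { versions.clear(); }

    String latest() { return versions.isEmpty() ? null : versions.lastEntry().getValue(); }
    int size() { return versions.size(); }
}

public class DeleteSemanticsDemo {
    public static void main(String[] args) {
        VersionedColumn col = new VersionedColumn();
        col.put(1L, "Andy2"); // written first, oldest version
        col.put(2L, "Andy1");
        col.put(3L, "Andy0"); // written last, newest version

        col.deleteLatest();               // deleteColumn-style
        System.out.println(col.latest()); // the older version resurfaces: Andy1

        col.deleteAll();                  // deleteColumns-style
        System.out.println(col.size());   // 0 — nothing left
    }
}
```

This is why a deleteColumn-style delete can appear to "bring back" an older value: only the top version is masked, and the next-newest version becomes visible again.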
3.2、Using the "pool" to rework

HBase Programming: API Primer — delete (from the client side)

HBase Programming: API Primer — the difference between delete.deleteColumn and delete.deleteColumns (from the client side)

  the approach used above.

    From oldest to newest, the timestamp versions are Andy2 -> Andy1 -> Andy0
               (Andy2 was written first, Andy0 last)
package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// the table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// the row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// column family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// column family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// if no column is specified, every column is returned by default
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every timestamp version of the column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// the start row key is inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// the stop row key is exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// iterate over the whole result set
//        System.out.println(rst.toString());
//        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()){
//            for(Cell cell : next.rawCells()){// loop over the cells of one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
        hbasetest.delete();
    }

//    public void insertValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// the row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

//    public void getValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

    public void delete() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Delete delete = new Delete(Bytes.toBytes("row_01"));
        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every timestamp version of the column
        table.delete(delete);
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = HBaseConfiguration.create();
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

    From oldest to newest, the timestamp versions are Andy2 -> Andy1 -> Andy0
               (Andy2 was written first, Andy0 last)

The difference between delete.deleteColumn and delete.deleteColumns:

    deleteColumn deletes only the newest timestamp version of the column.

    deleteColumns deletes every timestamp version of the column.
4、Using the "pool" to rework

HBase Programming: API Primer — scan (from the client side)

  the approach used above.
package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {
    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));// the table name is test_table
//        Put put = new Put(Bytes.toBytes("row_04"));// the row key is row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));// column family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));// column family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));// if no column is specified, every column is returned by default
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of the column
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every timestamp version of the column
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));// the start row key is inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));// the stop row key is exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);// iterate over the whole result set
//        System.out.println(rst.toString());
//        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()){
//            for(Cell cell : next.rawCells()){// loop over the cells of one row key
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
//        hbasetest.delete();
        hbasetest.scanValue();
    }

//    public void insertValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));// the row key is row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

//    public void getValue() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

//    public void delete() throws Exception{
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumn deletes only the newest timestamp version of the column
////        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));// deleteColumns deletes every timestamp version of the column
//        table.delete(delete);
//        table.close();
//    }

    public void scanValue() throws Exception{
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02"));// the start row key is inclusive
        scan.setStopRow(Bytes.toBytes("row_04"));// the stop row key is exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan);// iterate over the whole result set
        System.out.println(rst.toString());
        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()){
            for(Cell cell : next.rawCells()){// loop over the cells of one row key
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        table.close();
    }

    public static Configuration getConfig(){
        Configuration configuration = HBaseConfiguration.create();
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}
2016-12-11 15:14:56,940 INFO [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper] - Process identifier=hconnection-0x278a676 connecting to ZooKeeper ensemble=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:host.name=WIN-BQOBV63OBNM
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.version=1.7.0_51
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.vendor=Oracle Corporation
2016-12-11 15:14:56,954 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.home=C:\Program Files\Java\jdk1.7.0_51\jre
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:java.compiler=<NA>
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.name=Windows 7
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.arch=amd64
2016-12-11 15:14:56,956 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:os.version=6.1
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.name=Administrator
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.home=C:\Users\Administrator
2016-12-11 15:14:56,957 INFO [org.apache.zookeeper.ZooKeeper] - Client environment:user.dir=D:\Code\MyEclipseJavaCode\HbaseProject
2016-12-11 15:14:56,958 INFO [org.apache.zookeeper.ZooKeeper] - Initiating client connection, connectString=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181 sessionTimeout=90000 watcher=hconnection-0x278a6760x0, quorum=HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181, baseZNode=/hbase
2016-12-11 15:14:57,015 INFO [org.apache.zookeeper.ClientCnxn] - Opening socket connection to server HadoopMaster/192.168.80.10:2181. Will not attempt to authenticate using SASL (unknown error)
2016-12-11 15:14:57,018 INFO [org.apache.zookeeper.ClientCnxn] - Socket connection established to HadoopMaster/192.168.80.10:2181, initiating session
2016-12-11 15:14:57,044 INFO [org.apache.zookeeper.ClientCnxn] - Session establishment complete on server HadoopMaster/192.168.80.10:2181, sessionid = 0x1582556e7c50024, negotiated timeout = 40000
org.apache.hadoop.hbase.client.ClientScanner@4362f2fe
keyvalues={row_02/f:name/1478095849538/Put/vlen=5/seqid=0}
family:f
col:name
valueAndy2
keyvalues={row_03/f:name/1478095893278/Put/vlen=5/seqid=0}
family:f
col:name
valueAndy3
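The output above shows the half-open interval exactly: row_02 (the start row) is returned, row_03 falls inside the range, and row_04 (the stop row) is excluded. As a cluster-free sketch of the same semantics, a plain `TreeMap` can stand in for the table (a made-up substitute, for illustration only); `subMap` uses the same inclusive-start, exclusive-stop convention as `Scan.setStartRow`/`setStopRow`:

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ScanRangeDemo {
    public static void main(String[] args) {
        // Hypothetical in-memory stand-in for the table: row key -> f:name value.
        TreeMap<String, String> rows = new TreeMap<>();
        rows.put("row_01", "Andy1");
        rows.put("row_02", "Andy2");
        rows.put("row_03", "Andy3");
        rows.put("row_04", "Andy4");

        // Like Scan.setStartRow/setStopRow: start row inclusive, stop row exclusive.
        SortedMap<String, String> scanned = rows.subMap("row_02", "row_04");
        scanned.forEach((k, v) -> System.out.println(k + " -> " + v));
        // prints row_02 and row_03 only; row_04 is excluded
    }
}
```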

  Good — I'll leave the remaining operations for you to explore on your own.

Finally, to sum up:

  In real-world development, make sure you master the connection pool and its thread pool!!!
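One caveat worth knowing: the `getConnection()` in `TableConnection` above is not synchronized, so two threads racing through the `connection == null` check could each create a connection. A common fix is the initialization-on-demand holder idiom, which the JVM guarantees runs the initializer exactly once. The sketch below uses a made-up `PooledConnection` class standing in for `HConnection` so it runs without a cluster — the pattern, not the HBase API, is the point:

```java
public class ConnectionHolderDemo {
    // Hypothetical stand-in for the pooled HConnection; in real code this slot
    // would hold the result of HConnectionManager.createConnection(conf, pool).
    static class PooledConnection {
        final String quorum;
        PooledConnection(String quorum) { this.quorum = quorum; }
    }

    // Initialization-on-demand holder: INSTANCE is created lazily, exactly
    // once, when Holder is first loaded — no explicit locking needed.
    private static class Holder {
        static final PooledConnection INSTANCE =
            new PooledConnection("HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
    }

    public static PooledConnection getConnection() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every caller gets the same shared instance, even under concurrency.
        System.out.println(getConnection() == getConnection()); // true
    }
}
```

An alternative with the same effect is simply declaring `getConnection()` as `synchronized`; the holder idiom just avoids taking a lock on every call.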


Appendix: the full code

package zhouls.bigdata.HbaseProject.Pool;

import zhouls.bigdata.HbaseProject.Pool.TableConnection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTest {

    public static void main(String[] args) throws Exception {
//        HTable table = new HTable(getConfig(), TableName.valueOf("test_table"));    // table name: test_table
//        Put put = new Put(Bytes.toBytes("row_04"));    // row key: row_04
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy1"));    // family f, qualifier name, value Andy1
//        put.add(Bytes.toBytes("f2"), Bytes.toBytes("name"), Bytes.toBytes("Andy3"));    // family f2, qualifier name, value Andy3
//        table.put(put);
//        table.close();

//        Get get = new Get(Bytes.toBytes("row_04"));
//        get.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("age"));    // if no column is specified, all columns are returned by default
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_2"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("email"));
//        delete.deleteColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"));
//        table.delete(delete);
//        table.close();

//        Delete delete = new Delete(Bytes.toBytes("row_04"));
////        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));    // deleteColumn removes only the newest version of the cell
//        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));    // deleteColumns removes all versions of the cell
//        table.delete(delete);
//        table.close();

//        Scan scan = new Scan();
//        scan.setStartRow(Bytes.toBytes("row_01"));    // start row is inclusive
//        scan.setStopRow(Bytes.toBytes("row_03"));    // stop row is exclusive
//        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        ResultScanner rst = table.getScanner(scan);    // scans the whole range
//        System.out.println(rst.toString());
//        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()) {
//            for (Cell cell : next.rawCells()) {    // loop over the cells of one row
//                System.out.println(next.toString());
//                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
//                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
//                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
//            }
//        }
//        table.close();

        HBaseTest hbasetest = new HBaseTest();
//        hbasetest.insertValue();
//        hbasetest.getValue();
//        hbasetest.delete();
        hbasetest.scanValue();
    }

    // In production code, go through the connection pool like this
//    public void insertValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Put put = new Put(Bytes.toBytes("row_01"));    // row key: row_01
//        put.add(Bytes.toBytes("f"), Bytes.toBytes("name"), Bytes.toBytes("Andy0"));
//        table.put(put);
//        table.close();
//    }

    // In production code, go through the connection pool like this
//    public void getValue() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Get get = new Get(Bytes.toBytes("row_03"));
//        get.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
//        org.apache.hadoop.hbase.client.Result rest = table.get(get);
//        System.out.println(rest.toString());
//        table.close();
//    }

    // In production code, go through the connection pool like this
//    public void delete() throws Exception {
//        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
//        Delete delete = new Delete(Bytes.toBytes("row_01"));
//        delete.deleteColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));    // deleteColumn removes only the newest version of the cell
////        delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("name"));    // deleteColumns removes all versions of the cell
//        table.delete(delete);
//        table.close();
//    }

    // In production code, go through the connection pool like this
    public void scanValue() throws Exception {
        HTableInterface table = TableConnection.getConnection().getTable(TableName.valueOf("test_table"));
        Scan scan = new Scan();
        scan.setStartRow(Bytes.toBytes("row_02"));    // start row is inclusive
        scan.setStopRow(Bytes.toBytes("row_04"));    // stop row is exclusive
        scan.addColumn(Bytes.toBytes("f"), Bytes.toBytes("name"));
        ResultScanner rst = table.getScanner(scan);    // scans the whole range
        System.out.println(rst.toString());
        for (org.apache.hadoop.hbase.client.Result next = rst.next(); next != null; next = rst.next()) {
            for (Cell cell : next.rawCells()) {    // loop over the cells of one row
                System.out.println(next.toString());
                System.out.println("family:" + Bytes.toString(CellUtil.cloneFamily(cell)));
                System.out.println("col:" + Bytes.toString(CellUtil.cloneQualifier(cell)));
                System.out.println("value" + Bytes.toString(CellUtil.cloneValue(cell)));
            }
        }
        table.close();
    }

    public static Configuration getConfig() {
        Configuration configuration = HBaseConfiguration.create();    // load HBase defaults, then override below
//        configuration.set("hbase.rootdir", "hdfs://HadoopMaster:9000/hbase");
        configuration.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
        return configuration;
    }
}

 

 

 

 

 

 

package zhouls.bigdata.HbaseProject.Pool;

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class TableConnection {

    private TableConnection() {
    }

    private static HConnection connection = null;

    // synchronized so that concurrent callers cannot create two connections
    public static synchronized HConnection getConnection() {
        if (connection == null) {
            ExecutorService pool = Executors.newFixedThreadPool(10);    // fixed-size thread pool
            Configuration conf = HBaseConfiguration.create();
            conf.set("hbase.zookeeper.quorum", "HadoopMaster:2181,HadoopSlave1:2181,HadoopSlave2:2181");
            try {
                connection = HConnectionManager.createConnection(conf, pool);    // the connection shares the config and the pool
            } catch (IOException e) {
                e.printStackTrace();    // don't swallow the failure silently
            }
        }
        return connection;
    }
}
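A lazy check-then-create singleton like `getConnection` needs synchronization to be safe when several threads race to initialize it. In plain Java, the initialization-on-demand holder idiom gets the same laziness without any locking on the read path. A minimal, HBase-free sketch (the class name is illustrative):

```java
public class LazyHolder {

    private LazyHolder() {
    }

    // Holder idiom: the JVM guarantees Holder.INSTANCE is created
    // exactly once, on first access, with full thread safety.
    private static class Holder {
        static final LazyHolder INSTANCE = new LazyHolder();
    }

    public static LazyHolder getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Two lookups must return the exact same object
        System.out.println(LazyHolder.getInstance() == LazyHolder.getInstance()); // prints "true"
    }
}
```

This variant is attractive when the singleton's constructor cannot throw; the try/catch in `getConnection` is one reason the article's version uses an explicit null check instead.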

 

 



This article was reposted from the 大数据躺过的坑 blog on cnblogs. Original link: http://www.cnblogs.com/zlslch/p/6159427.html. Please contact the original author before republishing.

