Copyright belongs to the author. For reproduction in any form, please contact the author for permission and credit the source.
Before trying out the steps in this article yourself, it is recommended to read 蒋老湿's two earlier articles.
mycat-dockerfile
```dockerfile
############################################
# version : debugman007/c7-mycat:v1
# desc    : MyCat installed on CentOS 7
############################################
# inherit from the official CentOS 7 image
FROM centos:7

RUN echo "root:root" | chpasswd
RUN yum -y install net-tools

# install java
ADD http://mirrors.linuxeye.com/jdk/jdk-7u80-linux-x64.tar.gz /usr/local/
RUN cd /usr/local && tar -zxvf jdk-7u80-linux-x64.tar.gz && rm -f jdk-7u80-linux-x64.tar.gz
ENV JAVA_HOME /usr/local/jdk1.7.0_80
ENV CLASSPATH ${JAVA_HOME}/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $PATH:${JAVA_HOME}/bin

# install mycat
ADD http://dl.mycat.io/1.6-RELEASE/Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz /usr/local
RUN cd /usr/local && tar -zxvf Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz && rm -f Mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz

# VOLUME only declares a mount point; the host directory is supplied with -v at docker run time
VOLUME /usr/local/mycat/conf

EXPOSE 8066 9066
CMD ["/usr/local/mycat/bin/mycat", "console"]
```
For the MySQL Docker image, see:
mysql5.7-dockerfile
- First, build the MyCat Dockerfile above into an image:

```bash
docker build -t mycat -f mycat-dockerfile .
```
For MySQL you can build the mysql5.7-dockerfile the same way, or simply `docker pull` the official image from Docker Hub. Then start two MySQL instances, named one-mysql and two-mysql (each one must be published on a different host port, so the second uses 3308 here):

```bash
docker run --name one-mysql -e MYSQL_ROOT_PASSWORD=123456 -d -p 3307:3306 mysql:5.7
docker run --name two-mysql -e MYSQL_ROOT_PASSWORD=123456 -d -p 3308:3306 mysql:5.7
```
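To make sure both instances actually came up, you can check their status and run a throwaway query inside each container; a minimal sketch, assuming the root password 123456 set above:

```bash
# list the running containers and their port mappings
docker ps --filter "name=mysql"

# give the containers a few seconds to initialize, then confirm each instance accepts connections
docker exec one-mysql mysql -uroot -p123456 -e "SELECT VERSION();"
docker exec two-mysql mysql -uroot -p123456 -e "SELECT VERSION();"
```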
2. Start a container from the mycat image we just built:

```bash
docker run -d --name mycat mycat
```
3. Copy the conf folder under /usr/local/mycat/ out of the mycat container to the current host directory (mine is on the E: drive, but any drive will do; the trailing `.` means the current path):

```bash
docker cp mycat:/usr/local/mycat/conf .
```

The reason for copying it out is that we will use this directory for a bind mount in a moment. If we edited the configuration files directly inside the mycat container and restarted it, a mistake in the configuration would stop the container from starting at all, and at that point it becomes very hard to get back inside to fix the MyCat configuration. Remember to enable drive sharing in Docker's settings, otherwise the bind mount will not work; see Docker的持久化存储和数据共享(四) for details.
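Before moving on, a quick sanity check that the copy worked; the directory should contain MyCat's main configuration files (a sketch, file names based on the MyCat 1.6 release layout):

```bash
ls conf
# expected, among others: schema.xml  server.xml  rule.xml  autopartition-long.txt
```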
4. Next, remove the mycat container we just created and start a new one with the conf directory bind-mounted and the ports published:

```bash
docker rm -f mycat
docker run -d -v e:/mycat/conf:/usr/local/mycat/conf -p 8066:8066 -p 9066:9066 --name mycat mycat
docker ps
```
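It is worth confirming that the bind mount was picked up by the new container; a quick check:

```bash
# the Source of the mount should be the host directory and the Destination /usr/local/mycat/conf
docker inspect -f '{{ json .Mounts }}' mycat
```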
Now we no longer need to run `winpty docker exec -it mycat bash` to get into the container every time we want to change the MyCat configuration. We can edit and save the files directly under e:/mycat/conf and then run `docker restart mycat && docker attach mycat` in the terminal to restart MyCat. Attaching to the container prints the startup log to the console, which makes it easy to spot errors in real time.
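If you would rather not attach (keystrokes such as Ctrl+C are forwarded to the attached process and will usually stop the container), following the log stream gives the same visibility; a sketch:

```bash
# restart MyCat and follow its console output without attaching
docker restart mycat && docker logs -f mycat
```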
5. Use Navicat for MySQL to connect to the mycat and mysql instances we just created.
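Any MySQL client works, not just Navicat. For example from the host, a sketch that assumes the default user root / password 123456 / logical schema TESTDB from MyCat's stock server.xml (and 127.0.0.1 as the Docker host; use the Docker machine IP instead if you are on Docker Toolbox):

```bash
# MyCat's SQL port 8066 was published to the host in the docker run command above
mysql -h 127.0.0.1 -P 8066 -u root -p123456 TESTDB
```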
6. Modify MyCat's schema.xml as shown below. How should the writeHost url be filled in? Since I deployed all three Docker containers on one machine, the container addresses can be looked up with `docker network inspect bridge`:
```
$ docker network inspect bridge
[
    {
        "Name": "bridge",
        ...
        "Containers": {
            "53a16a6e2d62...": {
                "Name": "two-mysql",
                "IPv4Address": "172.17.0.4/16",
                ...
            },
            "8edcc2f059af...": {
                "Name": "one-mysql",
                "IPv4Address": "172.17.0.3/16",
                ...
            },
            "d5ba1aa2ed64...": {
                "Name": "mycat",
                "IPv4Address": "172.17.0.2/16",
                ...
            }
        },
        ...
    }
]
```
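If you only need the addresses, you can ask Docker for them directly instead of reading through the whole JSON; a minimal sketch:

```bash
# print the bridge-network IP of each container
for c in one-mysql two-mysql mycat; do
  echo -n "$c -> "
  docker inspect -f '{{ .NetworkSettings.IPAddress }}' "$c"
done
```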
```xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

    <schema name="TESTDB" checkSQLschema="false" sqlMaxLimit="100">
        <!-- auto sharding by id (long) -->
        <table name="tb_test" dataNode="dn1,dn2,dn3,dn4,dn5,dn6" rule="auto-sharding-long-jp" />
        <!-- a global table is automatically cloned to all defined data nodes,
             so it can be joined with any table that lives on the same data node -->
        <table name="company" primaryKey="ID" type="global" dataNode="dn1,dn2,dn3,dn4,dn5,dn6" rule="auto-sharding-long"/>
        <!-- random sharding using the mod-long rule -->
    </schema>

    <dataNode name="dn1" dataHost="localhost1" database="db1" />
    <dataNode name="dn2" dataHost="localhost1" database="db2" />
    <dataNode name="dn3" dataHost="localhost1" database="db3" />
    <dataNode name="dn4" dataHost="localhost2" database="db1" />
    <dataNode name="dn5" dataHost="localhost2" database="db2" />
    <dataNode name="dn6" dataHost="localhost2" database="db3" />

    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="0" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <!-- there can be more than one write host -->
        <writeHost host="hostM1" url="172.17.0.3:3306" user="root" password="123456">
            <!-- and more than one read host
            <readHost host="hostS2" url="192.168.1.200:3306" user="root" password="xxx" />
            -->
        </writeHost>
        <!--<writeHost host="hostS1" url="localhost:3316" user="root" password="123456" /> -->
    </dataHost>

    <dataHost name="localhost2" maxCon="1000" minCon="10" balance="0" writeType="0"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="hostM1" url="172.17.0.4:3306" user="root" password="123456"/>
    </dataHost>

</mycat:schema>
```
- For a detailed explanation of these settings, see Mycat之——配置文件schema.xml.
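One prerequisite that is easy to miss: MyCat only routes SQL to the backends, it does not create the physical databases, so db1, db2 and db3 referenced by the dataNode elements must already exist in both MySQL instances. A sketch using the containers started earlier (root password 123456 as above):

```bash
# create the physical databases that the dataNode definitions point at
for host in one-mysql two-mysql; do
  docker exec "$host" mysql -uroot -p123456 -e \
    "CREATE DATABASE IF NOT EXISTS db1; CREATE DATABASE IF NOT EXISTS db2; CREATE DATABASE IF NOT EXISTS db3;"
done
```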
- With that, the setup is basically complete. Let's try an example: execute the following SQL statements through MyCat:
```sql
CREATE TABLE `company` (
  `id`   int(11)     NOT NULL,
  `name` varchar(20) DEFAULT NULL,
  `age`  int(11)     DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

insert into company (id, NAME, age) values (1, 'jp', 25);
```
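To see the global-table behaviour for yourself, query the backends directly rather than going through MyCat; a sketch:

```bash
# the company table and its single row should show up in every dbN on both instances
for host in one-mysql two-mysql; do
  docker exec "$host" mysql -uroot -p123456 -e \
    "SELECT * FROM db1.company; SELECT * FROM db2.company; SELECT * FROM db3.company;"
done
```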
7. You can see that every db database in both one-mysql and two-mysql now has a company table, and each copy contains the row we just inserted. This looks more like full replication than sharding, which is because in schema.xml

```xml
<table name="company" primaryKey="ID" type="global" dataNode="dn1,dn2,dn3,dn4,dn5,dn6" rule="auto-sharding-long"/>
```

sets type="global", so every database received the data. Remove the global setting and the table will be sharded instead. Here is another demonstration.
- Custom sharding rule. The relevant tableRule and function entries in conf/rule.xml look like this (the additions are auto-sharding-long-jp and its rang-long-jp function):
```xml
<tableRule name="sharding-by-intfile">
    <rule>
        <columns>sharding_id</columns>
        <algorithm>hash-int</algorithm>
    </rule>
</tableRule>
<tableRule name="auto-sharding-long">
    <rule>
        <columns>id</columns>
        <algorithm>rang-long</algorithm>
    </rule>
</tableRule>
<tableRule name="auto-sharding-long-jp">
    <rule>
        <columns>id</columns>
        <algorithm>rang-long-jp</algorithm>
    </rule>
</tableRule>
<tableRule name="mod-long">
    <rule>
        <columns>id</columns>
        <algorithm>mod-long</algorithm>
    </rule>
</tableRule>

<function name="hash-int" class="io.mycat.route.function.PartitionByFileMap">
    <property name="mapFile">partition-hash-int.txt</property>
</function>
<function name="rang-long" class="io.mycat.route.function.AutoPartitionByLong">
    <property name="mapFile">autopartition-long.txt</property>
</function>
<function name="rang-long-jp" class="io.mycat.route.function.AutoPartitionByLong">
    <property name="mapFile">autopartition-long-jp.txt</property>
</function>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <!-- how many data nodes to shard across; must be specified -->
    <property name="count">2</property>
</function>
```
The rang-long-jp function points at a new map file, conf/autopartition-long-jp.txt. The default file only covers three ranges; make sure the number of ranges matches the number of dataNodes for the table, otherwise MyCat reports an error about missing nodes (K = 1000 records, M = 10000 records):

```
# range start-end, data node index
# K=1000, M=10000
0-5K=0
5K-10K=1
10K-15K=2
15K-20K=3
20K-25K=4
25K-30K=5
```
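Changes to rule.xml and the map file are only picked up after MyCat reloads its configuration. You can either restart the container, or use the 9066 management port, which is meant for exactly this kind of task; a sketch with the same default credentials:

```bash
# option 1: restart the container and watch the startup log
docker restart mycat && docker logs -f mycat

# option 2: ask the running MyCat to reload its configuration through the management port
mysql -h 127.0.0.1 -P 9066 -u root -p123456 -e "reload @@config_all;"
```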
- Execute the following SQL:
```sql
CREATE TABLE tb_test (
  id    BIGINT(20)   NOT NULL,
  title VARCHAR(100) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=INNODB DEFAULT CHARSET=utf8;

INSERT INTO tb_test(ID,TITLE) VALUES(1,'蒋老湿1');
INSERT INTO tb_test(ID,TITLE) VALUES(6000,'蒋老湿2');
INSERT INTO tb_test(ID,TITLE) VALUES(10001,'蒋老湿3');
INSERT INTO tb_test(ID,TITLE) VALUES(15001,'蒋老湿4');
INSERT INTO tb_test(ID,TITLE) VALUES(20001,'蒋老湿5');
INSERT INTO tb_test(ID,TITLE) VALUES(25001,'蒋老湿6');
INSERT INTO tb_test(ID,TITLE) VALUES(30001,'蒋老湿error');
```
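As before, you can look at the backends directly to watch how the rows were spread across the six dataNodes; a sketch:

```bash
# each id range should appear in exactly one of the six databases
for host in one-mysql two-mysql; do
  docker exec "$host" mysql -uroot -p123456 -e \
    "SELECT * FROM db1.tb_test; SELECT * FROM db2.tb_test; SELECT * FROM db3.tb_test;"
done
```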
This time the result is different: depending on which range the id falls into, the row ends up in a different MySQL instance and a different database.
- Inserting an id beyond the maximum produces the error below, because the largest id range defined in our rule is 30K = 30000:
```
[SQL] INSERT INTO tb_test(ID,TITLE) VALUES(30001,'蒋老湿error');
[Err] 1064 - can't find any valid datanode :TB_TEST -> ID -> 30001
```
That's it. As for master-slave replication and read/write splitting, once you have worked through this article they are not difficult at all, so I will not demonstrate them here.