
1. Create a user group (mind the group's permissions)
2. Add users
3. Configure alert media types
4. Configure Actions
Note: after adding a new host, confirm that permissions have been assigned.
nginx-side setup

Add the following to nginx.conf:
#vi nginx.conf
...
location /nginx-status {
    stub_status on;
    access_log off;
    allow 192.168.233.0/24;
    deny all;
}
...
Check the modified configuration file:
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Reload:
# nginx -s reload
Then open the status page in a browser to verify.

zabbix-side setup

Add a monitoring item:
# pwd
/etc/zabbix/zabbix_agentd.d
# vi nginx.conf
UserParameter=nginx.active,curl -s "http://192.168.233.131:8080/nginx-status" | grep 'Active' | awk '{print $NF}'
Restart the zabbix agent:
# systemctl restart zabbix-agent
Install zabbix-get:
# yum install zabbix-get -y
Verify the item:
# zabbix_get -s 192.168.233.131 -p 10050 -k "nginx.active"
1
Add the item and graphs in the frontend, then generate some load (run ab inside a screen session, since it takes a while):
ab -c 20 -n 10000000 http://192.168.233.131:8080/
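The same pattern extends to nginx's other stub_status counters, which sit on the third line of the status page. A minimal sketch of additional agent items (the key names nginx.accepts, nginx.handled, and nginx.requests are my own, not from the original setup):

# hypothetical extra items for /etc/zabbix/zabbix_agentd.d/nginx.conf
UserParameter=nginx.accepts,curl -s "http://192.168.233.131:8080/nginx-status" | awk 'NR==3{print $1}'
UserParameter=nginx.handled,curl -s "http://192.168.233.131:8080/nginx-status" | awk 'NR==3{print $2}'
UserParameter=nginx.requests,curl -s "http://192.168.233.131:8080/nginx-status" | awk 'NR==3{print $3}'

After another agent restart, each key can be verified the same way, e.g. zabbix_get -s 192.168.233.131 -p 10050 -k "nginx.requests".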
Server-side installation

Install the yum repository (here we use the repo provided by Aliyun):
# rpm -ivh http://mirrors.aliyun.com/zabbix/zabbix/3.2/rhel/7/x86_64/zabbix-release-3.2-1.el7.noarch.rpm
Install the packages:
yum install zabbix-server zabbix-web zabbix-server-mysql zabbix-web-mysql mariadb-server mariadb -y
Note: if the server itself should also be monitored, install zabbix-agent as well.

Edit the PHP settings for the zabbix web frontend:
#vi /etc/httpd/conf.d/zabbix.conf
<IfModule mod_php5.c>
    php_value max_execution_time 300
    php_value memory_limit 128M
    php_value post_max_size 16M
    php_value upload_max_filesize 2M
    php_value max_input_time 300
    php_value always_populate_raw_post_data -1
    php_value date.timezone Asia/Shanghai
</IfModule>

Initialize the database:
shell> mysql -uroot -p<root_password>
mysql> create database zabbix character set utf8 collate utf8_bin;
mysql> grant all privileges on zabbix.* to zabbix@localhost identified by '<password>';
mysql> quit;
Import the schema and data:
# zcat /usr/share/doc/zabbix-server-mysql-3.2.3/create.sql.gz | mysql -uzabbix -p<password> zabbix

Edit the zabbix configuration file and fill in the database settings:
# vi /etc/zabbix/zabbix_server.conf
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=<password>

Start zabbix server and enable it at boot:
# systemctl start zabbix-server
# systemctl enable zabbix-server
Start httpd:
# systemctl start httpd
Open a browser and run the installer:
http://ipaddr:80/zabbix
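Before running the web installer, a quick sanity check that the services are up and the schema import worked can save a round trip; a minimal sketch (<password> is the placeholder from above, and the log path is the RPM packages' default):

# systemctl status zabbix-server httpd
# tail -n 20 /var/log/zabbix/zabbix_server.log
# mysql -uzabbix -p<password> zabbix -e "show tables;" | head

It is also worth running systemctl enable httpd so the frontend comes back after a reboot.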
Original link

MySQL InnoDB supports four transaction isolation levels; the default is REPEATABLE READ.

READ UNCOMMITTED: another transaction has modified data but not yet committed, and a SELECT in this transaction sees the uncommitted data (dirty read).
READ COMMITTED: this transaction reads the latest data (whatever other transactions have committed). The problem: within one transaction, two identical SELECTs can return different results (non-repeatable read).
REPEATABLE READ: within one transaction, SELECT returns the state as of the moment the transaction started, so the same SELECT always reads consistent results. Phantom reads can still occur, though (explained below).
SERIALIZABLE: reads implicitly acquire shared locks, guaranteeing mutual exclusion between transactions.

The four levels are progressively stronger; each one solves one problem.

Dirty read: the easiest to understand. Another transaction modified data but has not committed, and this transaction's SELECT reads the uncommitted data.
Non-repeatable read: once dirty reads are solved, the next problem appears — while one transaction runs, another transaction commits new data, so two successive reads in the first transaction return different results.
Phantom read: once non-repeatable reads are solved, a transaction always sees the state as of its start (consistency). But if another transaction commits new rows concurrently, this transaction's next update "astonishingly" discovers them, as if the data read earlier had been a ghostly illusion.

A humorous analogy, borrowed and adapted:
Dirty read: at lunchtime you see a seat occupied by classmate Q, assume it is taken, and go look for another one. But Q then gets up and walks away — he only sat down for a moment and never "committed".
Non-repeatable read: you see an empty seat, happily go off to fetch your food, and come back to find classmate Q sitting in it.
Phantom read: you see empty seats, go fetch your food, come back, and the seats are still empty (repeatable read) — you gloat. But just as you are about to sit down, a fearsome stranger materializes in the seat, ruining your appetite. The empty seat you saw was a "phantom".

Some articles claim that InnoDB's REPEATABLE READ avoids phantom reads; that claim is not accurate.

Let's experiment (for all experiments below, mind the storage engine and the isolation level):

mysql> show create table t_bitfly\G;
CREATE TABLE `t_bitfly` (
  `id` bigint(20) NOT NULL default '0',
  `value` varchar(32) default NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=gbk

mysql> select @@global.tx_isolation, @@tx_isolation;
+-----------------------+-----------------+
| @@global.tx_isolation | @@tx_isolation  |
+-----------------------+-----------------+
| REPEATABLE-READ       | REPEATABLE-READ |
+-----------------------+-----------------+

Experiment 1 (time flows downward):
Session A: START TRANSACTION;
Session B: START TRANSACTION;
Session A: SELECT * FROM t_bitfly;               -- empty set
Session B: INSERT INTO t_bitfly VALUES (1, 'a');
Session A: SELECT * FROM t_bitfly;               -- empty set
Session B: COMMIT;
Session A: SELECT * FROM t_bitfly;               -- empty set
Session A: INSERT INTO t_bitfly VALUES (1, 'a');
           -- ERROR 1062 (23000): Duplicate entry '1' for key 1
           -- (it just told me that row did not exist!)

That is a phantom read: the table looked empty, but the data was already there, and the naive insert runs straight into a duplicate-key conflict.

Experiment 2 (time flows downward):
Session A: START TRANSACTION;
Session B: START TRANSACTION;
Session A: SELECT * FROM t_bitfly;               -- returns (1, 'a')
Session B: INSERT INTO t_bitfly VALUES (2, 'b');
Session A: SELECT * FROM t_bitfly;               -- still only (1, 'a')
Session B: COMMIT;
Session A: SELECT * FROM t_bitfly;               -- still only (1, 'a')
Session A: UPDATE t_bitfly SET value='z';
           -- Rows matched: 2  Changed: 2  Warnings: 0
           -- (where did the extra row come from?)
Session A: SELECT * FROM t_bitfly;               -- now returns (1, 'z') and (2, 'z')

The transaction first read one row; after a single UPDATE, data committed by another transaction appeared. That can also be viewed as a kind of phantom read.

So what does InnoDB mean when it says it can avoid phantoms?
http://dev.mysql.com/doc/refman/5.0/en/innodb-record-level-locks.html
By default, InnoDB operates in REPEATABLE READ transaction isolation level and with the innodb_locks_unsafe_for_binlog system variable disabled. In this case, InnoDB uses next-key locks for searches and index scans, which prevents phantom rows (see Section 13.6.8.5, “Avoiding the Phantom Problem Using Next-Key Locking”).

The accurate reading is: at REPEATABLE READ, with innodb_locks_unsafe_for_binlog disabled, the next-key locks used for searches and index scans prevent phantoms.

The key question is whether InnoDB takes next-key locks for an ordinary query by default, or whether the application must lock explicitly. Reading only that sentence, you might think plain queries are locked too — but if so, what would distinguish REPEATABLE READ from SERIALIZABLE?

The MySQL manual has another passage:
13.2.8.5. Avoiding the Phantom Problem Using Next-Key Locking (http://dev.mysql.com/doc/refman/5.0/en/innodb-next-key-locking.html)
To prevent phantoms, InnoDB uses an algorithm called next-key locking that combines index-row locking with gap locking.
You can use next-key locking to implement a uniqueness check in your application: If you read your data in share mode and do not see a duplicate for a row you are going to insert, then you can safely insert your row and know that the next-key lock set on the successor of your row during the read prevents anyone meanwhile inserting a duplicate for your row. Thus, the next-key locking enables you to “lock” the nonexistence of something in your table.

My understanding: InnoDB provides next-key locks, but the application must take them itself. The manual gives an example:
SELECT * FROM child WHERE id > 100 FOR UPDATE;
InnoDB then locks the rows with id greater than 100 (say the child table contains a row with id 102), as well as the gaps 100–102 and beyond 102.

You can use SHOW ENGINE INNODB STATUS (SHOW INNODB STATUS on older versions) to check which locks have been taken on the table.

Another experiment — note that id is the primary key of t_bitfly. Experiment 3 (time flows downward):
Session A: START TRANSACTION;
Session B: START TRANSACTION;
Session A: SELECT * FROM t_bitfly WHERE id<=1 FOR UPDATE;   -- returns (1, 'a')
Session B: INSERT INTO t_bitfly VALUES (2, 'b');
           -- Query OK, 1 row affected
Session A: SELECT * FROM t_bitfly;                          -- still only (1, 'a')
Session B: INSERT INTO t_bitfly VALUES (0, '0');
           -- (waiting for lock ... then timeout)
           -- ERROR 1205 (HY000): Lock wait timeout exceeded;
           -- try restarting transaction
Session A: SELECT * FROM t_bitfly;                          -- (1, 'a')
Session A: COMMIT;
Session A: SELECT * FROM t_bitfly;                          -- (1, 'a')

As you can see, the lock taken with id<=1 covers only the id<=1 range: inserting a record with id 2 succeeds, while inserting one with id 0 has to wait for the lock to be released.

The manual's detailed explanation of locking under REPEATABLE READ:
http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html#isolevel_repeatable-read
For locking reads (SELECT with FOR UPDATE or LOCK IN SHARE MODE), UPDATE, and DELETE statements, locking depends on whether the statement uses a unique index with a unique search condition, or a range-type search condition. For a unique index with a unique search condition, InnoDB locks only the index record found, not the gap before it. For other search conditions, InnoDB locks the index range scanned, using gap locks or next-key (gap plus index-record) locks to block insertions by other sessions into the gaps covered by the range.

Consistent reads versus committed reads — first, experiment 4 (time flows downward):
Session A: START TRANSACTION;
Session B: START TRANSACTION;
Session A: SELECT * FROM t_bitfly;                        -- (1, 'a')
Session B: INSERT INTO t_bitfly VALUES (2, 'b');
Session B: COMMIT;
Session A: SELECT * FROM t_bitfly;                        -- (1, 'a')
Session A: SELECT * FROM t_bitfly LOCK IN SHARE MODE;     -- (1, 'a') and (2, 'b')
Session A: SELECT * FROM t_bitfly FOR UPDATE;             -- (1, 'a') and (2, 'b')
Session A: SELECT * FROM t_bitfly;                        -- (1, 'a')

A plain read returns consistent results; a locking read sees the "freshest" committed data.

Repeatable read and committed read are inherently in conflict. Within one transaction, guaranteeing repeatable reads means not seeing other transactions' commits, which violates committed read; guaranteeing committed reads means two successive reads may differ, which violates repeatable read.

Put differently: InnoDB offers a mechanism whereby, at the default REPEATABLE READ isolation level, you can use locking reads to query the latest data.
http://dev.mysql.com/doc/refman/5.0/en/innodb-consistent-read.html
If you want to see the “freshest” state of the database, you should use either the READ COMMITTED isolation level or a locking read:
SELECT * FROM t_bitfly LOCK IN SHARE MODE;

Conclusion: MySQL InnoDB's REPEATABLE READ does not guarantee the absence of phantom reads; the application must use locking reads to guarantee that, and the mechanism those locking reads rely on is next-key locks.
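The uniqueness-check pattern quoted from the manual can be exercised end to end from the shell. A minimal sketch, assuming the t_bitfly table above lives in a database named test (an assumption) and REPEATABLE READ is in effect:

mysql -uroot -p test <<'SQL'
START TRANSACTION;
-- a locking read on a non-existent row takes a next-key lock on the gap,
-- "locking the nonexistence" of id=5
SELECT * FROM t_bitfly WHERE id = 5 LOCK IN SHARE MODE;
-- no duplicate was seen, so this insert is safe: a concurrent session
-- trying to insert id=5 now blocks on the next-key lock until we commit
INSERT INTO t_bitfly VALUES (5, 'e');
COMMIT;
SQL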
Python's dictionary type is unordered (prior to Python 3.7, where insertion order became guaranteed), so when we want a dict to iterate in insertion order we can use collections.OrderedDict.

Example:

from collections import OrderedDict

favorite_languages = OrderedDict()
favorite_languages['jen'] = 'python'
favorite_languages['sarah'] = 'c'
favorite_languages['edward'] = 'ruby'
favorite_languages['phil'] = 'python'

for name, language in favorite_languages.items():
    print(name.title() + "'s favorite language is " + language.title() + ".")

Output:

Jen's favorite language is Python.
Sarah's favorite language is C.
Edward's favorite language is Ruby.
Phil's favorite language is Python.
Download the source package:
# wget http://nchc.dl.sourceforge.net/project/zabbix/ZABBIX%20Latest%20Stable/3.0.4/zabbix-3.0.4.tar.gz

Install zabbix server

Create the user:
# groupadd zabbix
# useradd -gzabbix zabbix

Compile the source. Option meanings:
--enable-server: build the zabbix server
--enable-agent: build the zabbix agent
--with-mysql: use mysql as the database
--prefix: zabbix installation directory
--with-net-snmp: enable snmp monitoring
# ./configure --enable-server --enable-agent --with-mysql --with-net-snmp --with-libcurl --with-libxml2 --with-openipmi --with-unixodbc --prefix=/usr/local/zabbix
...
#this banner indicates configure succeeded
***********************************************************
*            Now run 'make install'                       *
*                                                         *
*            Thank you for using Zabbix!                  *
*              <http://www.zabbix.com>                    *
***********************************************************
Note: packages that may turn out to be missing during the build:
yum install libxml2-devel
yum install unixODBC-devel
yum install net-snmp-devel
yum install OpenIPMI-devel
yum install curl-devel
yum install mysql-devel

Install:
#make install

Database setup:
mysql> create database zabbix character set utf8 collate utf8_bin;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on zabbix.* to zabbix_user@'192.168.204.100' identified by 'XXX';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
# mysql -uroot -p zabbix < /usr/local/zabbix-3.0.4/database/mysql/schema.sql
# mysql -uroot -p zabbix < /usr/local/zabbix-3.0.4/database/mysql/images.sql
# mysql -uroot -p zabbix < /usr/local/zabbix-3.0.4/database/mysql/data.sql

Edit the configuration file:
vi zabbix_server.conf
DBName=zabbix
DBUser=zabbix_user
DBPassword=XXXX
DBSocket=/tmp/mysql.sock
......

Run zabbix server:
# /usr/local/zabbix/sbin/zabbix_server

Install the zabbix web frontend

Install PHP and a web server:
# yum install php55w
# yum install httpd
Note: PHP dependency packages:
yum install php55w-mysql
yum install php55w-bcmath
yum install php55w-mbstring
yum install php55w-gd
yum install php55w-xml

Edit the configuration:
# vi /etc/httpd/conf/httpd.conf
DocumentRoot "/usr/local/zabbix/php"
<Directory "/usr/local/zabbix/php">
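Once zabbix_server is running it should be listening on its trapper port; a minimal sketch of a quick check (10051 is the server's default port and 10050 the agent's; the log location depends on the LogFile setting in zabbix_server.conf, so the path below is an assumption):

# ss -lntp | grep -E '10050|10051'
# tail -n 20 /tmp/zabbix_server.log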
Operations architecture
Hardware standardization (servers, memory, OS versions)
Software standardization (application versions)
Operations automation (monitoring, deployment, CMDB)
Monitoring and alerting (monitor system data and application metrics, and alert promptly on errors)
Release system (code deployment, post-release checks, rollback, canary releases)
Server standardization (cobbler provisioning plus puppet, to achieve hardware and software standardization)
CMDB (configuration management database; stores all operations data, including server hardware details, domain-to-server mappings, IDC capacity, etc.)

The role of a monitoring system

Metric collection and visualization:
The monitoring system should support custom monitoring content; you can write your own scripts to collect the data you need
Data is kept in a database and can be used for analysis and computation
Adding monitoring to a server should be quick and easy
Visualization should be intuitive and usable

Alerting on anomalies:
Complex alert logic can be defined, including alerts correlated across items
Alerts need acknowledgment, so operators know how many alerts have been claimed and are being handled
Alert channels are customizable: email and SMS
Alert content is configurable, e.g. include a brief analysis in the email
Commands can run automatically after an alert, either to gather information or to auto-remediate

Working with other systems:
A strong API that other systems can call
Open monitoring data; the database schema should not be overly complex
Charts that are easy to embed, rather than a pile of javascript
Two log formats

Physical logging: records the complete page, or records the modified content within a page.
Logical logging: records the SQL statement that was executed.

A physical log is comparatively large — an index split, for example, affects many records and produces a large volume of log. A logical log is small, but suffers from partial execution and data consistency problems.

Physical log
Pros: during recovery the log can simply be applied over the page, without depending on the page's current contents; and it is idempotent.
Cons: very large log volume.

Logical log
Pros: small log volume.
Cons:
When inserting a record into a table with multiple indexes, the record must be inserted into several B-trees; the first insert may succeed while the second does not.
An index split may need to move half of page A's data to page B; a crash can occur after A has been written to disk but before B has.

InnoDB's logging approach
A combination of logical and physical: physical down to the page, logical within the page — (space id, page no, operation code, data).
This approach solves the partial-execution problem, but still cannot fully solve data consistency: because the in-page information is recorded as a logical log, a corrupted block cannot be repaired from the log.
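The volume difference is easy to observe from the redo byte counter. A minimal sketch, assuming a scratch InnoDB table test.t you can write to (the table is an assumption); the delta in Innodb_os_log_written across a statement is roughly the redo volume it generated:

# redo bytes written so far
mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';"
# run some DML against the scratch table
mysql -uroot -p -e "INSERT INTO test.t VALUES (1, 'x');"
# read the counter again and compare
mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_os_log_written';"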
redo
Redo is a vital part of database instance recovery. Its purpose is to guarantee that when the instance or server fails, the database is not corrupted and committed data is not lost. The WAL (write-ahead logging) rule guarantees that before data is changed, the redo for that change must first be written to the log buffer; and before a transaction commits, the redo associated with it must first be written from the log buffer to the redo log files.
With this mechanism, after a crash and restart, the buffer pool contents that had not yet been written to the data files, and inconsistent uncommitted transaction state, are recovered correctly, and the database returns to the consistent state it had before the crash.
The redo log is divided into multiple groups; when one group is full, logging can switch to a new group without waiting for the dirty data covered by the old group to be flushed before it is rewritten.

undo
Besides transaction rollback, undo logs also implement MVCC.
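The knobs governing this behavior can be inspected directly; a minimal sketch (innodb_flush_log_at_trx_commit=1 is the fully durable setting in which redo is flushed at every commit, and innodb_log_file_size / innodb_log_files_in_group size the circularly reused redo space):

mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_log_buffer_size';"
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_log_file%';"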
MySQL checkpoints come in two kinds:
sharp checkpoint
fuzzy checkpoint

sharp checkpoint
A sharp checkpoint flushes all dirty pages of committed transactions to disk and records the LSN of the most recently committed transaction. The dirty pages it flushes form a consistent snapshot as of a single point in time. A sharp checkpoint occurs when the database shuts down.

fuzzy checkpoint
A fuzzy checkpoint is far more complex. It flushes dirty pages to disk bit by bit until it achieves the same end state as a sharp checkpoint (all dirty pages of committed transactions flushed). It flushes the dirty pages between two LSNs, but cannot guarantee that the data between those LSNs is consistent as of one point in time — hence the name fuzzy checkpoint.

InnoDB uses the buffer pool to avoid writing data straight to disk; a page can be modified many times in the buffer pool before finally being written out, which reduces disk IO. The buffer pool maintains several important lists: the free list, the LRU list, and the flush list. When new data is read into the buffer pool, pages are allocated from the free list. When the free list has no free pages, the server would have to wait for data on the flush list to be flushed to disk, which is slow — so InnoDB periodically flushes the oldest data on the flush list to disk.

Moreover, the InnoDB redo log files are used in a circular fashion, so before a log file is rewritten, all the dirty buffer-pool data it covers must be flushed to disk; otherwise that data would be lost after a crash. Because the log records changes in modification order, the older dirty data gets flushed to disk first — which is exactly the fuzzy checkpoint's job. And because the old data covered by the log has already been flushed, crash recovery starts from the LSN of the fuzzy checkpoint. Fuzzy checkpoints run while the database operates normally.
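The fuzzy checkpoint's progress is visible in the LOG section of the InnoDB status output; a minimal sketch (the gap between "Log sequence number" and "Last checkpoint at" is the checkpoint age, i.e. how much redo crash recovery would have to replay):

mysql -uroot -p -e "SHOW ENGINE INNODB STATUS\G" | grep -E 'Log sequence number|Last checkpoint at'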
Environment

mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.31-log |
+------------+
1 row in set (0.00 sec)

READ COMMITTED

Check the current isolation level:
mysql> select @@global.tx_isolation,@@tx_isolation;
+-----------------------+----------------+
| @@global.tx_isolation | @@tx_isolation |
+-----------------------+----------------+
| READ-COMMITTED        | READ-COMMITTED |
+-----------------------+----------------+
1 row in set (0.00 sec)

Check the test table:
mysql> show create table t\G;
*************************** 1. row ***************************
Table: t
Create Table: CREATE TABLE `t` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `comment` varchar(30) DEFAULT NULL,
  `time` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)

READ COMMITTED solves the dirty read, but non-repeatable reads appear (time flows downward):
Session 1: begin;
Session 2: begin;
Session 1: select * from t;    -- Empty set
Session 2: insert into t(comment,time) values ('r1',now());
Session 2: commit;
Session 1: select * from t;
           -- now returns: (1, 'r1', '2016-09-23 15:56:17')
Session 1: commit;

REPEATABLE READ

Set the global isolation level to REPEATABLE-READ:
mysql> set global tx_isolation='REPEATABLE-READ';
Query OK, 0 rows affected (0.00 sec)
Clear the test data:
mysql> truncate table t;
Query OK, 0 rows affected (0.03 sec)

Repeating the experiment above shows that at REPEATABLE READ, the non-repeatable read is gone:
Session 1: begin;
Session 2: begin;
Session 1: select * from t;    -- Empty set
Session 2: insert into t(comment,time) values ('r2',now());
Session 2: commit;
Session 1: select * from t;    -- still Empty set
Session 1: commit;

Clear the test data:
mysql> truncate table t;
Query OK, 0 rows affected (0.03 sec)

Going one step further, the phantom read appears:
Session 1: begin;
Session 2: begin;
Session 1: select * from t;    -- Empty set
Session 2: insert into t(comment,time) values ('r3',now());
Session 2: commit;
Session 1: select * from t;    -- still Empty set
Session 1: update t set comment='c3';
           -- Query OK, 1 row affected; Rows matched: 1  Changed: 1  Warnings: 0
Session 1: select * from t;
           -- now returns: (1, 'c3', '2016-09-23 16:53:28')
Session 1: commit;
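To make an isolation level survive restarts, instead of setting it with set global each time, it can go into the server configuration; a minimal sketch:

# /etc/my.cnf
[mysqld]
transaction-isolation = REPEATABLE-READ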
Basics

List the available character sets:
mysql> show character set;
+----------+-----------------------------+---------------------+--------+
| Charset  | Description                 | Default collation   | Maxlen |
+----------+-----------------------------+---------------------+--------+
| big5     | Big5 Traditional Chinese    | big5_chinese_ci     |      2 |
| dec8     | DEC West European           | dec8_swedish_ci     |      1 |
| cp850    | DOS West European           | cp850_general_ci    |      1 |
| hp8      | HP West European            | hp8_english_ci      |      1 |
...
+----------+-----------------------------+---------------------+--------+
40 rows in set (0.00 sec)

#or query character_sets
mysql> desc information_schema.character_sets;
+----------------------+-------------+------+-----+---------+-------+
| Field                | Type        | Null | Key | Default | Extra |
+----------------------+-------------+------+-----+---------+-------+
| CHARACTER_SET_NAME   | varchar(32) | NO   |     |         |       |
| DEFAULT_COLLATE_NAME | varchar(32) | NO   |     |         |       |
| DESCRIPTION          | varchar(60) | NO   |     |         |       |
| MAXLEN               | bigint(3)   | NO   |     | 0       |       |
+----------------------+-------------+------+-----+---------+-------+
4 rows in set (0.00 sec)

A MySQL character set comprises the charset itself and its collations. The charset defines how MySQL stores strings; a collation defines how strings are compared.
Use show collation like ... or information_schema.collations to view a charset's collations:
mysql> show collation like 'gbk%';
+----------------+---------+----+---------+----------+---------+
| Collation      | Charset | Id | Default | Compiled | Sortlen |
+----------------+---------+----+---------+----------+---------+
| gbk_chinese_ci | gbk     | 28 | Yes     | Yes      |       1 |
| gbk_bin        | gbk     | 87 |         | Yes      |       1 |
+----------------+---------+----+---------+----------+---------+
2 rows in set (0.00 sec)

#or query collations
mysql> desc information_schema.collations;
+--------------------+-------------+------+-----+---------+-------+
| Field              | Type        | Null | Key | Default | Extra |
+--------------------+-------------+------+-----+---------+-------+
| COLLATION_NAME     | varchar(32) | NO   |     |         |       |
| CHARACTER_SET_NAME | varchar(32) | NO   |     |         |       |
| ID                 | bigint(11)  | NO   |     | 0       |       |
| IS_DEFAULT         | varchar(3)  | NO   |     |         |       |
| IS_COMPILED        | varchar(3)  | NO   |     |         |       |
| SORTLEN            | bigint(3)   | NO   |     | 0       |       |
+--------------------+-------------+------+-----+---------+-------+
6 rows in set (0.00 sec)

Collation naming:
_ci   case insensitive
_cs   case sensitive
_bin  binary (comparison based on character code values, independent of language)

mysql> select case when 'A' collate utf8_general_ci = 'a' collate utf8_general_ci then 1 else 0 end;
+---------------------------------------------------------------------------------------+
| case when 'A' collate utf8_general_ci = 'a' collate utf8_general_ci then 1 else 0 end |
+---------------------------------------------------------------------------------------+
|                                                                                     1 |
+---------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select case when 'A' collate utf8_bin = 'a' collate utf8_bin then 1 else 0 end;
+-------------------------------------------------------------------------+
| case when 'A' collate utf8_bin = 'a' collate utf8_bin then 1 else 0 end |
+-------------------------------------------------------------------------+
|                                                                       0 |
+-------------------------------------------------------------------------+
1 row in set (0.00 sec)

Setting the character set

It can be set in my.cnf:
[mysqld]
character-set-server=utf8
Or at startup:
# mysqld --character-set-server=utf8
Check the current character set and collation:
mysql> show variables like 'character_set_server';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| character_set_server | utf8  |
+----------------------+-------+
1 row in set (0.02 sec)
mysql> show variables like 'collation%';
+----------------------+-----------------+
| Variable_name        | Value           |
+----------------------+-----------------+
| collation_connection | utf8_general_ci |
| collation_database   | utf8_general_ci |
| collation_server     | utf8_general_ci |
+----------------------+-----------------+
3 rows in set (0.00 sec)

Connection character set and collation

For client/server interaction, MySQL provides three parameters:
character_set_client      the client
character_set_connection  the connection
character_set_results     the returned results
In most cases the three values should be identical, to ensure data is written and read back correctly.
They can be set with the following command, but this applies per connection only:
mysql> set names utf8;
Query OK, 0 rows affected (0.00 sec)
Simpler is to set it in my.cnf, so that all connections default to utf8:
[mysql]
default-character-set=utf8

Steps for changing the character set

After an application has been running for a while, if you want to change the character set, you cannot use "alter database ... character set" or "alter table ... character set" to do it: those commands only take effect for new data and do nothing for existing data.
step 1: dump the table definitions
#--default-character-set specifies the charset to connect with; -d dumps structure only
# mysqldump -uroot -p --default-character-set=utf8 -d dbname > createtab.sql
step 2: edit createtab.sql and change the charset in the table definitions to the new charset
step 3: make sure records are no longer being updated, then dump the data
#--quick: for dumping large tables; forces mysqldump to retrieve rows from the server one at a time instead of fetching everything and buffering it in memory before output
#--extended-insert: multi-row insert syntax, which speeds up the import
#--no-create-info: do not write the CREATE TABLE statement for each dumped table
#--default-character-set: export in the original charset, preventing mojibake
# mysqldump -uroot -p --quick --no-create-info --extended-insert --default-character-set=gbk dbname > data.sql
step 4: change set names gbk to set names utf8 in data.sql
step 5: create a new database with the new charset
mysql> create database dbname default charset utf8;
step 6: create the tables
# mysql -uroot -p dbname < createtab.sql
step 7: import the data
# mysql -uroot -p dbname < data.sql
Note: the target charset should ideally be a superset of the source charset; otherwise the result may be mojibake.
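The seven steps chain together naturally in a script. A minimal sketch of the gbk-to-utf8 case above (dbname, the target name dbname_utf8, and passwordless root access are assumptions; step 2's manual edit is approximated with sed):

#!/bin/bash
mysqldump -uroot --default-character-set=utf8 -d dbname > createtab.sql
sed -i 's/CHARSET=gbk/CHARSET=utf8/g' createtab.sql                 # step 2
mysqldump -uroot --quick --no-create-info --extended-insert \
          --default-character-set=gbk dbname > data.sql             # step 3
sed -i 's/SET NAMES gbk/SET NAMES utf8/g' data.sql                  # step 4
mysql -uroot -e "create database dbname_utf8 default charset utf8"  # step 5
mysql -uroot dbname_utf8 < createtab.sql                            # step 6
mysql -uroot dbname_utf8 < data.sql                                 # step 7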
Original link

InnoDB’s checkpoint algorithm is not well documented. It is too complex to explain in even a long blog post, because to understand checkpoints, you need to understand a lot of other things that InnoDB does. I hope that explaining how InnoDB does checkpoints in high-level terms, with simplifications, will be helpful. A lot of the simplifications are because I do not want to explain the complexities of how the simple rules can be tweaked for optimization purposes, while not violating the ACID guarantees they enforce.

A bit of background: Gray and Reuter’s classic text on transaction processing introduced two types of checkpoints beginning on page 605. There is a sharp checkpoint, and there is a fuzzy checkpoint. A sharp checkpoint is accomplished by flushing all modified pages for committed transactions to disk, and writing down the log sequence number (LSN) of the most recent committed transaction. Modified pages for uncommitted transactions should not be flushed – that would violate the rule of write-ahead logging. (This is a deliberate and gross over-simplification; I will not draw attention to further simplifications I make.) Upon recovery, log REDO can start from the LSN at which the checkpoint took place. A sharp checkpoint is called “sharp” because everything that is flushed to disk for the checkpoint is consistent as of a single point in time – the checkpoint LSN.

A fuzzy checkpoint is more complex. It flushes pages as time passes, until it has flushed all pages that a sharp checkpoint would have done. It completes by writing down two LSNs: when the checkpoint started and when it ended. But the pages it flushed might not all be consistent with each other as of a single point in time, which is why it’s called “fuzzy.” A page that got flushed early might have been modified since then, and a page that got flushed late might have a newer LSN than the starting LSN. A fuzzy checkpoint can conceptually be converted into a sharp checkpoint by performing REDO from the starting LSN to the ending LSN. Upon recovery, then, REDO can begin from the LSN at which the checkpoint started.

It is often said that InnoDB does fuzzy checkpointing. The truth is, it does both types. When the database shuts down, it performs a sharp checkpoint. During normal operation, it performs fuzzy checkpoints. And InnoDB’s implementation of fuzzy checkpoints is not exactly the same as that described in Gray & Reuter.

Here is where the weeds get deep: I will try to explain some of the subtleties that let InnoDB provide uniform quality of service by performing checkpoints almost constantly, instead of checkpoints being significant events that occur periodically. It can be said that InnoDB actually never does a checkpoint during normal operation. Instead, the state of the database on disk is a constantly advancing fuzzy checkpoint. The advances are performed by regular flushing of dirty pages as a normal part of the database’s operation. The details are far too many and complex to write here, in part because they have changed significantly as new versions have been released, but I will try to sketch the outline.

InnoDB maintains a large buffer pool in memory with many database pages, and doesn’t write modifications to disk immediately. Instead, it keeps dirty pages in memory, hoping that they will be modified many times before they are written to disk. This is called write combining, and is a performance optimization.
InnoDB keeps track of the pages in the buffer pool through several lists: the free list notes which pages are available to be used, the LRU list notes which pages have been used least recently, and the flush list contains all of the dirty pages in LSN order, least-recently-modified first. These lists are all important, but for the simplified explanation here, I will focus on the flush list.

InnoDB has limited space in the buffer pool, and if there aren’t any free spots to store a page that InnoDB needs to read from disk, it must flush and free a dirty page to make room. This is slow, so InnoDB tries to avoid the need for this by flushing dirty pages continually, keeping a reserve of clean pages that can be replaced without having to be flushed. It flushes the oldest-modified pages from the flush list on a regular basis, trying to keep from hitting certain high-water marks. It chooses the pages based on their physical locations on disk and their LSN (which is their modification time).

In addition to avoiding the high-water marks, InnoDB must avoid a very important low-water mark as well. The transaction logs (aka REDO logs, WAL logs) in InnoDB are fixed-size, and are written in a circular fashion. But spaces in the logs cannot be overwritten if they contain records of changes to a dirty page that hasn’t been flushed yet. If that happened and the server crashed, all records of those changes would be lost. Therefore, InnoDB has a limited amount of time to write out a given page’s modifications, because the ongoing transaction logging is hungry for space in the logs. The size of the logs imposes the limit. If the log writing activity wraps around in a circle and bumps into its own tail, it will cause a very bad server stall while InnoDB scrambles to free up some room in the logs.

This is why InnoDB generally chooses to flush in oldest-modification order: the oldest-modified pages are the furthest behind in the logs, and will be bumped into first. The oldest unflushed dirty page’s LSN is the low-water mark in the transaction logs, and InnoDB tries to raise that low-water mark to keep as much room available in the transaction logs as it can. Making the logs larger reduces the urgency of freeing up log space and permits various performance optimizations to do the flushing more efficiently.

And now, with that simplified explanation in place, we can understand how InnoDB actually makes a fuzzy checkpoint. When InnoDB flushes dirty pages to disk, it finds the oldest dirty page’s LSN and treats that as the checkpoint low-water mark. It then writes this to the transaction log header. You can see this in the functions log_checkpoint_margin() and log_checkpoint(). Therefore, every time InnoDB flushes dirty pages from the head of the flush list, it is actually making a checkpoint by advancing the oldest LSN in the system. And that is how continual fuzzy checkpointing is implemented without ever “doing a checkpoint” as a separate event. If there is a server crash, then recovery simply proceeds from the oldest LSN onwards.

When InnoDB shuts down, it does some additional work. First, it stops all updates to data; then it flushes all dirty buffers to disk; then it writes the current LSN to the transaction logs. This is the sharp checkpoint. Additionally, it writes the LSN to the first page of each data file in the database, which serves as a signal that they have been checkpointed up to that LSN. This permits further optimizations during recovery and when opening these data files.
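How close a server runs to those marks can be watched with a few counters; a minimal sketch (Innodb_buffer_pool_pages_dirty is the current dirty-page count, innodb_max_dirty_pages_pct the flushing high-water mark, and innodb_io_capacity the flushing-rate hint discussed below):

mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';"
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_max_dirty_pages_pct';"
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity';"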
There is a lot more to study if you want to learn how it’s really done in detail; there are many fine points to the process. This is one area where the usually excellent manual is a bit lacking. Some of the best resources are as follows:
Gray and Reuter’s book
Mark Callaghan’s note on fuzzy checkpoints
Peter Zaitsev’s post on why the flushing algorithm in older InnoDB used to cause spikes
Mark Callaghan’s slides from Percona Performance Conference 2009

A related topic that is equally as complex is how InnoDB flushes dirty pages at the right speed to keep up with the database’s workload. Too fast and the server does too much work; too slow and it gets behind and hurries to catch up, causing spikes of furious flushing activity and degraded quality of service. Percona Server has arguably the most advanced and effective algorithms for this, in the XtraDB storage engine (a variant of InnoDB). Percona Server calls it “adaptive checkpointing.” InnoDB followed suit by implementing something similar, but harder to tune correctly. InnoDB calls it “adaptive flushing,” which is a more accurate name. Much (and I do mean much!) has been written about this. I know that Vadim has done hundreds of benchmarks to analyze how flushing and checkpointing works, some of them many hours long to study long-term performance characteristics. I will point you to a couple of pages that I think are the most succinct summaries of the implementation and how it performs:
Dimitri Kravtchuk’s blog post about adaptive flushing and the innodb_io_capacity variable
Vadim’s benchmarks of Percona Server and MySQL 5.5.8, showing how to tune so that “adaptive flushing” works well
Percona Server documentation for adaptive checkpointing
My own blog post about balancing dirty page flushing and write combining

If these types of topics interest you, you should attend Percona Live in San Francisco in a couple of weeks. Both Peter Zaitsev and Mark Callaghan will be speaking there on topics such as InnoDB internals, along with a variety of other speakers; there is a several-hour class on InnoDB internals.

I’m Baron Schwartz, the founder and CEO of VividCortex. I am the author of High Performance MySQL and many open-source tools for performance analysis, monitoring, and system administration. I contribute to various database communities such as Oracle, PostgreSQL, Redis and MongoDB.
String functions

mysql> select concat('a','b','c'),concat('a',null);
+---------------------+------------------+
| concat('a','b','c') | concat('a',null) |
+---------------------+------------------+
| abc                 | NULL             |
+---------------------+------------------+
1 row in set (0.00 sec)

mysql> select insert('beijing2008you',12,3,'me');
+------------------------------------+
| insert('beijing2008you',12,3,'me') |
+------------------------------------+
| beijing2008me                      |
+------------------------------------+
1 row in set (0.01 sec)

mysql> select lower('BEIJING'),upper('beijing');
+------------------+------------------+
| lower('BEIJING') | upper('beijing') |
+------------------+------------------+
| beijing          | BEIJING          |
+------------------+------------------+
1 row in set (0.00 sec)

mysql> select left('beijing2008',7),left('beijing2008',null),right('beijing2008',4);
+-----------------------+--------------------------+------------------------+
| left('beijing2008',7) | left('beijing2008',null) | right('beijing2008',4) |
+-----------------------+--------------------------+------------------------+
| beijing               | NULL                     | 2008                   |
+-----------------------+--------------------------+------------------------+
1 row in set (0.00 sec)

mysql> select lpad('2008',20,'beijing'),rpad('beijing',20,'2008');
+---------------------------+---------------------------+
| lpad('2008',20,'beijing') | rpad('beijing',20,'2008') |
+---------------------------+---------------------------+
| beijingbeijingbe2008      | beijing2008200820082      |
+---------------------------+---------------------------+
1 row in set (0.00 sec)

mysql> select ltrim(' |beijing'),rtrim('beijing| ');
+--------------------+--------------------+
| ltrim(' |beijing') | rtrim('beijing| ') |
+--------------------+--------------------+
| |beijing           | beijing|           |
+--------------------+--------------------+
1 row in set (0.00 sec)

mysql> select repeat('mysql ',3);
+--------------------+
| repeat('mysql ',3) |
+--------------------+
| mysql mysql mysql  |
+--------------------+
1 row in set (0.00 sec)

mysql> select replace('beijing_2010','_2010','2008');
+----------------------------------------+
| replace('beijing_2010','_2010','2008') |
+----------------------------------------+
| beijing2008                            |
+----------------------------------------+
1 row in set (0.00 sec)

mysql> select strcmp('a','b'),strcmp('b','b'),strcmp('c','b');
+-----------------+-----------------+-----------------+
| strcmp('a','b') | strcmp('b','b') | strcmp('c','b') |
+-----------------+-----------------+-----------------+
|              -1 |               0 |               1 |
+-----------------+-----------------+-----------------+
1 row in set (0.00 sec)

mysql> select trim(' $ beijing $ ');
+-----------------------+
| trim(' $ beijing $ ') |
+-----------------------+
| $ beijing $           |
+-----------------------+
1 row in set (0.00 sec)

mysql> select substring('beijing2008',8,4),substring('beijing2008',1,7);
+------------------------------+------------------------------+
| substring('beijing2008',8,4) | substring('beijing2008',1,7) |
+------------------------------+------------------------------+
| 2008                         | beijing                      |
+------------------------------+------------------------------+
1 row in set (0.00 sec)

Numeric functions

mysql> select abs(-0.8),abs(0.8);
+-----------+----------+
| abs(-0.8) | abs(0.8) |
+-----------+----------+
| 0.8       | 0.8      |
+-----------+----------+
1 row in set (0.01 sec)

mysql> select ceil(-0.8),ceil(0.8);
+------------+-----------+
| ceil(-0.8) | ceil(0.8) |
+------------+-----------+
|          0 |         1 |
+------------+-----------+
1 row in set (0.02 sec)
mysql> select floor(-0.8),floor(0.8);
+-------------+------------+
| floor(-0.8) | floor(0.8) |
+-------------+------------+
|          -1 |          0 |
+-------------+------------+
1 row in set (0.00 sec)

mysql> select mod(15,10),mod(1,11),mod(null,10);
+------------+-----------+--------------+
| mod(15,10) | mod(1,11) | mod(null,10) |
+------------+-----------+--------------+
|          5 |         1 | NULL         |
+------------+-----------+--------------+
1 row in set (0.01 sec)

mysql> select rand(),rand();
+---------------------+---------------------+
| rand()              | rand()              |
+---------------------+---------------------+
| 0.14735915059909147 | 0.09535266654133114 |
+---------------------+---------------------+
1 row in set (0.00 sec)

mysql> select ceil(100*rand()),ceil(100*rand());
+------------------+------------------+
| ceil(100*rand()) | ceil(100*rand()) |
+------------------+------------------+
|                4 |               89 |
+------------------+------------------+
1 row in set (0.01 sec)

mysql> select round(1.1),round(1.1,2),round(1,2);
+------------+--------------+------------+
| round(1.1) | round(1.1,2) | round(1,2) |
+------------+--------------+------------+
|          1 | 1.10         |          1 |
+------------+--------------+------------+
1 row in set (0.01 sec)

mysql> select round(1.235,2),truncate(1.235,2);
+----------------+-------------------+
| round(1.235,2) | truncate(1.235,2) |
+----------------+-------------------+
| 1.24           | 1.23              |
+----------------+-------------------+
1 row in set (0.00 sec)

Date and time functions

mysql> select curdate();
+------------+
| curdate()  |
+------------+
| 2016-09-21 |
+------------+
1 row in set (0.00 sec)

mysql> select curtime();
+-----------+
| curtime() |
+-----------+
| 10:22:39  |
+-----------+
1 row in set (0.00 sec)

mysql> select now();
+---------------------+
| now()               |
+---------------------+
| 2016-09-21 10:22:48 |
+---------------------+
1 row in set (0.00 sec)

mysql> select unix_timestamp(now());
+-----------------------+
| unix_timestamp(now()) |
+-----------------------+
|            1474424612 |
+-----------------------+
1 row in set (0.00 sec)

mysql> select from_unixtime(1474424612);
+---------------------------+
| from_unixtime(1474424612) |
+---------------------------+
| 2016-09-21 10:23:32       |
+---------------------------+
1 row in set (0.00 sec)

mysql> select from_unixtime(1);
+---------------------+
| from_unixtime(1)    |
+---------------------+
| 1970-01-01 08:00:01 |
+---------------------+
1 row in set (0.00 sec)

mysql> select week(now()),year(now());
+-------------+-------------+
| week(now()) | year(now()) |
+-------------+-------------+
|          38 |        2016 |
+-------------+-------------+
1 row in set (0.00 sec)

mysql> select hour(now()),minute(now());
+-------------+---------------+
| hour(now()) | minute(now()) |
+-------------+---------------+
|          10 |            27 |
+-------------+---------------+
1 row in set (0.00 sec)

mysql> select monthname(now());
+------------------+
| monthname(now()) |
+------------------+
| September        |
+------------------+
1 row in set (0.02 sec)

MySQL date and time formats

mysql> select date_format(now(),'%M,%D,%Y');
+-------------------------------+
| date_format(now(),'%M,%D,%Y') |
+-------------------------------+
| September,21st,2016           |
+-------------------------------+
1 row in set (0.01 sec)

MySQL date interval types

mysql> select now() current, date_add(now(),interval 31 day) after31days;
+---------------------+---------------------+
| current             | after31days         |
+---------------------+---------------------+
| 2016-09-21 11:08:33 | 2016-10-22 11:08:33 |
+---------------------+---------------------+
1 row in set (0.00 sec)
mysql> select now() current, date_add(now(),interval -31 day) before31days,date_add(now(),interval '-1_-2' year_month) before_oneyear_twomonth;
+---------------------+---------------------+-------------------------+
| current             | before31days        | before_oneyear_twomonth |
+---------------------+---------------------+-------------------------+
| 2016-09-21 11:10:34 | 2016-08-21 11:10:34 | 2015-07-21 11:10:34     |
+---------------------+---------------------+-------------------------+
1 row in set (0.00 sec)

mysql> select datediff('2015-07-06',now());
+------------------------------+
| datediff('2015-07-06',now()) |
+------------------------------+
|                         -443 |
+------------------------------+
1 row in set (0.00 sec)

Control flow functions

Test data:
mysql> create table salary(userid int,salary decimal(9,2));
Query OK, 0 rows affected (0.01 sec)
mysql> insert into salary values (1,1000),(2,2000),(3,3000),(4,4000),(5,5000),(1,null);
Query OK, 6 rows affected (0.01 sec)
Records: 6  Duplicates: 0  Warnings: 0
mysql> select * from salary;
+--------+---------+
| userid | salary  |
+--------+---------+
|      1 | 1000.00 |
|      2 | 2000.00 |
|      3 | 3000.00 |
|      4 | 4000.00 |
|      5 | 5000.00 |
|      1 |    NULL |
+--------+---------+
6 rows in set (0.00 sec)

mysql> select userid, if(salary>2000,'high','low') from salary;
+--------+------------------------------+
| userid | if(salary>2000,'high','low') |
+--------+------------------------------+
|      1 | low                          |
|      2 | low                          |
|      3 | high                         |
|      4 | high                         |
|      5 | high                         |
|      1 | low                          |
+--------+------------------------------+
6 rows in set (0.00 sec)

mysql> select userid,ifnull(salary,0) from salary;
+--------+------------------+
| userid | ifnull(salary,0) |
+--------+------------------+
|      1 |          1000.00 |
|      2 |          2000.00 |
|      3 |          3000.00 |
|      4 |          4000.00 |
|      5 |          5000.00 |
|      1 |             0.00 |
+--------+------------------+
6 rows in set (0.00 sec)

mysql> select userid, case when salary<=2000 then 'low' else 'high' end from salary;
+--------+---------------------------------------------------+
| userid | case when salary<=2000 then 'low' else 'high' end |
+--------+---------------------------------------------------+
|      1 | low                                               |
|      2 | low                                               |
|      3 | high                                              |
|      4 | high                                              |
|      5 | high                                              |
|      1 | high                                              |
+--------+---------------------------------------------------+
6 rows in set (0.00 sec)

mysql> select userid,case salary when 1000 then 'low' when 2000 then 'mid' else 'high' end from salary;
+--------+------------------------------------------------------------------------+
| userid | case salary when 1000 then 'low' when 2000 then 'mid' else 'high' end |
+--------+------------------------------------------------------------------------+
|      1 | low                                                                    |
|      2 | mid                                                                    |
|      3 | high                                                                   |
|      4 | high                                                                   |
|      5 | high                                                                   |
|      1 | high                                                                   |
+--------+------------------------------------------------------------------------+
6 rows in set (0.00 sec)

Other functions

mysql> select database();
+------------+
| database() |
+------------+
| test       |
+------------+
1 row in set (0.00 sec)

mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.31-log |
+------------+
1 row in set (0.00 sec)

mysql> select user();
+----------------+
| user()         |
+----------------+
| root@localhost |
+----------------+
1 row in set (0.00 sec)

mysql> select inet_aton('192.168.0.7');
+--------------------------+
| inet_aton('192.168.0.7') |
+--------------------------+
|               3232235527 |
+--------------------------+
1 row in set (0.00 sec)

mysql> select inet_ntoa(3232235527);
+-----------------------+
| inet_ntoa(3232235527) |
+-----------------------+
| 192.168.0.7           |
+-----------------------+
1 row in set (0.00 sec)
Use inet_aton to count the IP addresses between 192.168.1.3 and 192.168.1.20:
mysql> create table ipaddr (ip varchar(15));
Query OK, 0 rows affected (0.00 sec)
mysql> insert into ipaddr values ('192.168.1.1'),('192.168.1.3'),('192.168.1.6'),('192.168.1.10'),('192.168.1.20'),('192.168.1.30');
Query OK, 6 rows affected (0.00 sec)
Records: 6  Duplicates: 0  Warnings: 0
mysql> select * from ipaddr;
+--------------+
| ip           |
+--------------+
| 192.168.1.1  |
| 192.168.1.3  |
| 192.168.1.6  |
| 192.168.1.10 |
| 192.168.1.20 |
| 192.168.1.30 |
+--------------+
6 rows in set (0.00 sec)
mysql> select * from ipaddr where inet_aton(ip)>=inet_aton('192.168.1.3') and inet_aton(ip)<=inet_aton('192.168.1.20');
+--------------+
| ip           |
+--------------+
| 192.168.1.3  |
| 192.168.1.6  |
| 192.168.1.10 |
| 192.168.1.20 |
+--------------+
4 rows in set (0.00 sec)

password() is used for MySQL system accounts; md5() is for use by applications:
mysql> select password('123456');
+-------------------------------------------+
| password('123456')                        |
+-------------------------------------------+
| *6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9 |
+-------------------------------------------+
1 row in set (0.00 sec)
mysql> select md5('123456');
+----------------------------------+
| md5('123456')                    |
+----------------------------------+
| e10adc3949ba59abbe56e057f20f883e |
+----------------------------------+
1 row in set (0.00 sec)
Check the MySQL version:
mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.27-log |
+------------+
1 row in set (0.01 sec)

Numeric types

Integer types
Create test table t1:
mysql> create table t1 (id1 int, in2 int(5));
Query OK, 0 rows affected (0.02 sec)
mysql> desc t1;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| id1   | int(11) | YES  |     | NULL    |       |
| in2   | int(5)  | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
2 rows in set (0.01 sec)
Insert test data:
mysql> insert into t1 values(1,1);
Query OK, 1 row affected (0.02 sec)
mysql> select * from t1;
+------+------+
| id1  | in2  |
+------+------+
|    1 |    1 |
+------+------+
1 row in set (0.01 sec)
Change the column types to add zerofill; the values are now padded with the character 0 up to the remaining display width:
mysql> alter table t1 modify id1 int zerofill;
Query OK, 1 row affected (0.04 sec)
Records: 1  Duplicates: 0  Warnings: 0
mysql> alter table t1 modify in2 int(5) zerofill;
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0
mysql> select * from t1;
+------------+-------+
| id1        | in2   |
+------------+-------+
| 0000000001 | 00001 |
+------------+-------+
1 row in set (0.01 sec)
Insert a value wider than the display width; evidently the width limit does not affect how the data is stored:
mysql> insert into t1 values(1,1111111);
Query OK, 1 row affected (0.00 sec)
mysql> select * from t1;
+------------+---------+
| id1        | in2     |
+------------+---------+
| 0000000001 |   00001 |
| 0000000001 | 1111111 |
+------------+---------+
2 rows in set (0.00 sec)

Fractional types
MySQL represents fractional numbers as floating-point or fixed-point values. Fixed-point values are stored internally as strings and are more precise than floats, which makes them suitable for high-precision data such as money. Both can be declared as (M,D): M is the precision, D the scale. By default, a floating-point column declared without a precision displays the actual precision of the value, while a fixed-point column defaults to 10 integer digits and 0 decimal digits.
Create test table t1:
mysql> create table t1 (
    -> id1 float(5,2) default null,
    -> id2 double(5,2) default null,
    -> id3 decimal(5,2) default null);
Query OK, 0 rows affected (0.03 sec)
Insert 1.23; all columns display it as expected:
mysql> insert into t1 values (1.23,1.23,1.23);
Query OK, 1 row affected (0.00 sec)
mysql> select * from t1;
+------+------+------+
| id1  | id2  | id3  |
+------+------+------+
| 1.23 | 1.23 | 1.23 |
+------+------+------+
1 row in set (0.00 sec)
Insert 1.234: due to the scale limit, id1 and id2 drop the last digit; id3 reports that the value was truncated:
mysql> insert into t1 values (1.234,1.234,1.234);
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> show warnings;
+-------+------+------------------------------------------+
| Level | Code | Message                                  |
+-------+------+------------------------------------------+
| Note  | 1265 | Data truncated for column 'id3' at row 1 |
+-------+------+------------------------------------------+
1 row in set (0.00 sec)
mysql> select * from t1;
+------+------+------+
| id1  | id2  | id3  |
+------+------+------+
| 1.23 | 1.23 | 1.23 |
| 1.23 | 1.23 | 1.23 |
+------+------+------+
2 rows in set (0.00 sec)
Now remove the precision and scale from all columns and insert 1.234 again; id1 and id2 keep the full value, while id3 is truncated:
mysql> alter table t1 modify id1 float;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql> alter table t1 modify id2 double;
Query OK, 0 rows affected (0.00 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql> alter table t1 modify id3 decimal;
Query OK, 2 rows affected, 2 warnings (0.03 sec)
Records: 2  Duplicates: 0  Warnings: 2
mysql> show warnings;
+-------+------+------------------------------------------+
| Level | Code | Message                                  |
+-------+------+------------------------------------------+
| Note  | 1265 | Data truncated for column 'id3' at row 1 |
| Note  | 1265 | Data truncated for column 'id3' at row 2 |
+-------+------+------------------------------------------+
2 rows in set (0.00 sec)
mysql> desc t1;
+-------+---------------+------+-----+---------+-------+
| Field | Type          | Null | Key | Default | Extra |
+-------+---------------+------+-----+---------+-------+
| id1   | float         | YES  |     | NULL    |       |
| id2   | double        | YES  |     | NULL    |       |
| id3   | decimal(10,0) | YES  |     | NULL    |       |
+-------+---------------+------+-----+---------+-------+
3 rows in set (0.00 sec)
mysql> insert into t1 values (1.234,1.234,1.234);
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> select * from t1;
+-------+-------+------+
| id1   | id2   | id3  |
+-------+-------+------+
| 1.23  | 1.23  |    1 |
| 1.23  | 1.23  |    1 |
| 1.234 | 1.234 |    1 |
+-------+-------+------+
3 rows in set (0.00 sec)
From the example above: a float without precision and scale is displayed with its actual precision, and with precision and scale it is rounded. A fixed-point number declared without precision and scale uses the default decimal(10,0), and if the data exceeds the precision or scale the server complains.

Date and time types

Choose the smallest date type that meets the actual need. If only the year must be recorded, the year type suffices.
If you need year-month-day hour:minute:second and the years reach far back (or ahead), prefer datetime, because datetime has a wider range than timestamp.
If the dates will be used by users in different time zones, use timestamp.
Testing shows that datetime is the combination of date and time:
mysql> create table t (
    -> d date,
    -> t time,
    -> dt datetime);
Query OK, 0 rows affected (0.02 sec)
mysql> desc t;
+-------+----------+------+-----+---------+-------+
| Field | Type     | Null | Key | Default | Extra |
+-------+----------+------+-----+---------+-------+
| d     | date     | YES  |     | NULL    |       |
| t     | time     | YES  |     | NULL    |       |
| dt    | datetime | YES  |     | NULL    |       |
+-------+----------+------+-----+---------+-------+
3 rows in set (0.00 sec)
mysql> insert into t values (now(),now(),now());
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> select * from t;
+------------+----------+---------------------+
| d          | t        | dt                  |
+------------+----------+---------------------+
| 2016-09-20 | 14:51:08 | 2016-09-20 14:51:08 |
+------------+----------+---------------------+
1 row in set (0.00 sec)
The timestamp type can also express dates:
mysql> create table t (id1 timestamp,id2 datetime);
Query OK, 0 rows affected (0.00 sec)
mysql> insert into t values(now(),now());
Query OK, 1 row affected (0.00 sec)
mysql> select * from t;
+---------------------+---------------------+
| id1                 | id2                 |
+---------------------+---------------------+
| 2016-09-20 15:07:55 | 2016-09-20 15:07:55 |
+---------------------+---------------------+
1 row in set (0.00 sec)
Change the time zone; timestamp shows the local wall-clock time:
#currently the system time zone (UTC+8)
mysql> show variables like 'time_zone';
+---------------+--------+
| Variable_name | Value  |
+---------------+--------+
| time_zone     | SYSTEM |
+---------------+--------+
1 row in set (0.00 sec)
#change the time zone
mysql> set time_zone='+9:00';
Query OK, 0 rows affected (0.00 sec)
mysql> select * from t;
+---------------------+---------------------+
| id1                 | id2                 |
+---------------------+---------------------+
| 2016-09-20 16:07:55 | 2016-09-20 15:07:55 |
+---------------------+---------------------+
1 row in set (0.00 sec)

String types

varchar, char
Create test table vc:
mysql> create table vc (v varchar(4), c char(4));
Query OK, 0 rows affected (0.05 sec)
Insert test data:
mysql> insert into vc values ('ab  ','ab  ');
Query OK, 1 row affected (0.01 sec)
Query results:
mysql> select length(v), length(c) from vc;
+-----------+-----------+
| length(v) | length(c) |
+-----------+-----------+
|         4 |         2 |
+-----------+-----------+
1 row in set (0.01 sec)
mysql> select concat(v,'+'),concat(c,'+') from vc;
+---------------+---------------+
| concat(v,'+') | concat(c,'+') |
+---------------+---------------+
| ab  +         | ab+           |
+---------------+---------------+
1 row in set (0.06 sec)
As you can see, the char type automatically strips trailing spaces.

text, blob
Main differences:
text stores character data only, such as a diary entry
blob stores binary data, such as a photo
blob and text can cause performance problems, particularly with mass deletes. Deletes leave holes in the table; OPTIMIZE TABLE is recommended for defragmentation.
mysql> create table t (id varchar(100),context text);
Query OK, 0 rows affected (0.00 sec)
mysql> insert into t values (1,repeat('haha',100));
Query OK, 1 row affected (0.00 sec)
mysql> insert into t values (2,repeat('haha',100));
Query OK, 1 row affected (0.00 sec)
mysql> insert into t values (3,repeat('haha',100));
Query OK, 1 row affected (0.00 sec)
mysql> insert into t select * from t;
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0
......
mysql> insert into t select * from t;
Query OK, 393216 rows affected (4.05 sec)
Records: 393216  Duplicates: 0  Warnings: 0
Check the file sizes:
# du -sh t.*
12K   t.frm
365M  t.ibd
Delete part of the data; the file size does not change:
mysql> delete from t where id=1;
Query OK, 262144 rows affected (1.29 sec)
# du -sh t.*
12K   t.frm
365M  t.ibd
Run OPTIMIZE on the table:
mysql> optimize table t;
+--------+----------+----------+-------------------------------------------------------------------+
| Table  | Op       | Msg_type | Msg_text                                                          |
+--------+----------+----------+-------------------------------------------------------------------+
| test.t | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| test.t | optimize | status   | OK                                                                |
+--------+----------+----------+-------------------------------------------------------------------+
2 rows in set (8.34 sec)
[root@db3 test]# du -sh t.*
12K   t.frm
237M  t.ibd
The holes have been reclaimed.

For query performance on blob and text columns, synthetic (hash) indexes and prefix indexes can be used as optimizations.
Synthetic index example:
mysql> create table t (
    -> id varchar(100),
    -> context blob,
    -> hash_value varchar(40));
Query OK, 0 rows affected (0.00 sec)
mysql> insert into t values (1,repeat('beijing',2),md5(context));
Query OK, 1 row affected (0.01 sec)
mysql> insert into t values (2,repeat('beijing',2),md5(context));
Query OK, 1 row affected (0.00 sec)
mysql> insert into t values (3,repeat('beijing 2008',2),md5(context));
Query OK, 1 row affected (0.00 sec)
mysql> select * from t;
+------+--------------------------+----------------------------------+
| id   | context                  | hash_value                       |
+------+--------------------------+----------------------------------+
| 1    | beijingbeijing           | 09746eef633dbbccb7997dfd795cff17 |
| 2    | beijingbeijing           | 09746eef633dbbccb7997dfd795cff17 |
| 3    | beijing 2008beijing 2008 | 1c0ddb82cca9ed63e1cacbddd3f74082 |
+------+--------------------------+----------------------------------+
3 rows in set (0.00 sec)
mysql> select * from t where hash_value=md5(repeat('beijing 2008',2));
+------+--------------------------+----------------------------------+
| id   | context                  | hash_value                       |
+------+--------------------------+----------------------------------+
| 3    | beijing 2008beijing 2008 | 1c0ddb82cca9ed63e1cacbddd3f74082 |
+------+--------------------------+----------------------------------+
1 row in set (0.00 sec)
A synthetic index only works for exact matches. For fuzzy (prefix) matching, use a prefix index:
mysql> create index idx_blob on t(context(100));
Query OK, 0 rows affected (0.03 sec)
Records: 0  Duplicates: 0  Warnings: 0
mysql> desc select * from t where context like 'beijing%'\G;
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t
         type: ALL
possible_keys: idx_blob
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 3
        Extra: Using where
1 row in set (0.00 sec)
Avoid full scans of tables with blob and text columns: use WHERE clauses and select only the columns you need, to avoid heavy network transfers. In some cases, consider splitting blob and text columns out into a separate table.
relay-log-info
Records the position in the master's binlog that the SQL thread has executed up to; after a slave crash, the SQL thread resumes from the position recorded in this file.
master-info
Records the master binlog position the IO thread has already read; after a slave crash, the IO thread re-pulls binlog from the position recorded in this file.
sync_relay_log_info: after how many transactions relay-log-info is synced to disk.
sync_master_info: after how many transactions master-info is synced to disk.

So here is the problem. If the two sync parameters are set high, then on a crash:
a high sync_master_info causes binlog to be pulled again redundantly;
a high sync_relay_log_info causes binlog to be executed again redundantly.

Does setting both parameters to 1
sync_master_info=1
sync_relay_log_info=1
solve the problem?

Consider the following scenario: a normally running slave suddenly loses power. The last transaction committed successfully, but relay_log_info may not yet have been synced to disk — the file sync is not part of the transaction, so it cannot be guaranteed. When the slave recovers, it reads relay_log_info and replays the last transaction once more, leaving master and slave inconsistent.

The crux is that the sync is not inside the transaction, so MySQL provides two parameters:
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | TABLE |
| relay_log_info_repository | TABLE |
+---------------------------+-------+
With these, the SQL and IO threads' progress is recorded in tables (slave_relay_log_info, slave_master_info), and transactional atomicity is guaranteed.

On the relationship between sync_relay_log_info and relay_log_info_repository: when relay_log_info_repository is TABLE, the sync_relay_log_info parameter no longer matters for transactions.

To simplify configuration further, one more parameter:
relay_log_recovery
Meaning: when the slave restarts, it rebuilds a relay log file based on slave_relay_log_info; the SQL thread recovers replication from that file, and the IO thread reads the SQL thread's position and requests data from the master starting at that point.

So in the end only two settings are needed:
relay_log_info_repository = TABLE
relay_log_recovery = ON
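In my.cnf form, the recommended end state looks like this (a minimal sketch; master_info_repository is included as well, matching the variable output above):

[mysqld]
master_info_repository    = TABLE
relay_log_info_repository = TABLE
relay_log_recovery        = ON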
Installation:
# yum -y install pciutils
Check the RAID controller:
# lspci|grep RAID
03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02)
How MySQL 5.6 semi-synchronous replication works
As the original diagram shows, when the master commits a transaction it does not wait for the slave's acknowledgment, so there is no guarantee that the slave's transaction also commits successfully (a duplicate key error, for example). Data committed on the master may therefore be invisible on the slave.

How MySQL 5.7 semi-synchronous replication works
As the diagram shows, 5.7 moves the slave's ACK earlier. Problems can still arise, however:
When the slave's ACK times out (rpl_semi_sync_master_timeout), semi-sync degrades to asynchronous replication; if the master then crashes, consistency with the slave is not guaranteed.
If the master crashes after the slave returns its ACK but before the master commits, the data can also end up inconsistent.
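For reference, a minimal sketch of enabling semi-sync with the stock plugins (plugin names and variables as shipped with MySQL; on 5.7, rpl_semi_sync_master_wait_point defaults to AFTER_SYNC, which corresponds to the earlier ACK in the second diagram):

# on the master
mysql -uroot -p -e "INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';"
mysql -uroot -p -e "SET GLOBAL rpl_semi_sync_master_enabled = 1;"
mysql -uroot -p -e "SET GLOBAL rpl_semi_sync_master_timeout = 10000;"  # ms before degrading to async
# on the slave
mysql -uroot -p -e "INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';"
mysql -uroot -p -e "SET GLOBAL rpl_semi_sync_slave_enabled = 1;"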
BIOS settings
#set the server to run in maximum performance mode (very important)
Power Management → Power Profile: Maximum Performance
#Intel hyperthreading option
Intel Hyperthreading Options
IPMI configuration (remote management)

Disks
Mechanical hard disks
SSD (MLC recommended)
SLC: best performance, highest price, longest lifespan
MLC: the middle ground
TLC: worst performance, cheapest, shortest lifespan
Disk interfaces
SATA
SAS (SATA compatible)
PCIe interface: make sure the card type matches the motherboard slot
Disk comparison

RAID
For the disks holding binlog and redo log, enabling the write cache improves performance a great deal.
Disable read-ahead.
Set NoCachedBadBBU to protect data during battery charge/discharge cycles (a RAID card's backup power is either a battery or a capacitor; configure this if it is a battery).
step 1: full backup
# innobackupex --defaults-file=/etc/my.cnf --no-timestamp /home/ssd/ali_backup/full_xtra_3306_20160826 --user root --password beijing --socket=/home/ssd/ali_data/my3306.sock
Check the xtrabackup_checkpoints file:
# more full_xtra_3306_20160826/xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 12222652952
last_lsn = 12222652952
compact = 0
recover_binlog_info = 0
Test data:
mysql> create database miles;
Query OK, 1 row affected (0.01 sec)
mysql> use miles;
Database changed
mysql> create table t( id int, name varchar(30));
Query OK, 0 rows affected (0.05 sec)

step 2: level-1 incremental backup
# innobackupex --defaults-file=/etc/my.cnf --no-timestamp --incremental /home/ssd/ali_backup/full_xtra_3306_20160826_i_01 --incremental-basedir=/home/ssd/ali_backup/full_xtra_3306_20160826/ --user root --password beijing --socket=/home/ssd/ali_data/my3306.sock
Check the xtrabackup_checkpoints file:
# more full_xtra_3306_20160826_i_01/xtrabackup_checkpoints
backup_type = incremental
from_lsn = 12222652952
to_lsn = 12222657366
last_lsn = 12222657366
compact = 0
recover_binlog_info = 0
Test data:
mysql> insert into t values (1, 'i1'),(2,'i2'),(3,'i3');
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    1 | i1   |
|    2 | i2   |
|    3 | i3   |
+------+------+
3 rows in set (0.01 sec)

step 3: level-2 incremental backup
# innobackupex --defaults-file=/etc/my.cnf --no-timestamp --incremental /home/ssd/ali_backup/full_xtra_3306_20160826_i_02 --incremental-basedir=/home/ssd/ali_backup/full_xtra_3306_20160826_i_01/ --user root --password beijing --socket=/home/ssd/ali_data/my3306.sock
Check the xtrabackup_checkpoints file:
# more full_xtra_3306_20160826_i_02/xtrabackup_checkpoints
backup_type = incremental
from_lsn = 12222657366
to_lsn = 12222659058
last_lsn = 12222659058
compact = 0
recover_binlog_info = 0
Test data:
mysql> delete from t where id=2;
Query OK, 1 row affected (0.02 sec)
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    1 | i1   |
|    3 | i3   |
+------+------+
2 rows in set (0.00 sec)
mysql> drop database miles;
Query OK, 1 row affected (0.02 sec)

step 4: check the current binlog position
mysql> show master status\G;
*************************** 1. row ***************************
             File: 3306-mysql-bin.000003
         Position: 1319
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set: 87fd24be-683d-11e6-ba97-1418774c98d8:49-53,
cdf80832-6a74-11e6-890b-1418774c98d8:1-13
1 row in set (0.00 sec)
mysql> flush logs;
Query OK, 0 rows affected (0.02 sec)

step 5: shut down the instance
# mysqladmin -uroot shutdown --socket=/data/mysql.sock -p

step 6: set aside the old data directory
# mv ali_data ali_data_bak

step 7: prepare the full backup
# innobackupex --apply-log --redo-only /home/ssd/ali_backup/full_xtra_3306_20160826/

step 8: prepare the level-1 incremental backup
# innobackupex --apply-log --redo-only /home/ssd/ali_backup/full_xtra_3306_20160826/ --incremental-dir=/home/ssd/ali_backup/full_xtra_3306_20160826_i_01/

step 9: prepare the level-2 incremental backup
# innobackupex --apply-log --redo-only /home/ssd/ali_backup/full_xtra_3306_20160826/ --incremental-dir=/home/ssd/ali_backup/full_xtra_3306_20160826_i_02/

step 10: prepare the full backup once more
# innobackupex --apply-log /home/ssd/ali_backup/full_xtra_3306_20160826/

step 11: restore the database
# innobackupex --defaults-file=/etc/my.cnf --copy-back /home/ssd/ali_backup/full_xtra_3306_20160826
# chown -R mysql:mysql ali_data

step 12: start the instance
# mysqld_safe --defaults-file=/etc/my.cnf --user=mysql --datadir=/home/ssd/ali_data/ &
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    1 | i1   |
|    2 | i2   |
|    3 | i3   |
+------+------+
3 rows in set (0.00 sec)

step 13: check the backup's binlog position
# more full_xtra_3306_20160826/xtrabackup_binlog_info
3306-mysql-bin.000003 933 87fd24be-683d-11e6-ba97-1418774c98d8:49-53,
cdf80832-6a74-11e6-890b-1418774c98d8:1-11

step 14: inspect the binlog to determine start-position and stop-position
# mysqlbinlog -vv --base64-output=decode-rows 3306-mysql-bin.000003
...
SET @@SESSION.GTID_NEXT= 'cdf80832-6a74-11e6-890b-1418774c98d8:11'/*!*/;
# at 722
#160826 14:00:28 server id 201983306 end_log_pos 795 CRC32 0x98f6ac81 Query thread_id=27 exec_time=0 error_code=0
SET TIMESTAMP=1472191228/*!*/;
BEGIN
/*!*/;
# at 795
#160826 14:00:28 server id 201983306 end_log_pos 843 CRC32 0x98615829 Table_map: `miles`.`t` mapped to number 77
# at 843
#160826 14:00:28 server id 201983306 end_log_pos 902 CRC32 0x44246525 Write_rows: table id 77 flags: STMT_END_F
### INSERT INTO `miles`.`t`
### SET
###   @1=1 /* INT meta=0 nullable=1 is_null=0 */
###   @2='i1' /* VARSTRING(90) meta=90 nullable=1 is_null=0 */
### INSERT INTO `miles`.`t`
### SET
###   @1=2 /* INT meta=0 nullable=1 is_null=0 */
###   @2='i2' /* VARSTRING(90) meta=90 nullable=1 is_null=0 */
### INSERT INTO `miles`.`t`
### SET
###   @1=3 /* INT meta=0 nullable=1 is_null=0 */
###   @2='i3' /* VARSTRING(90) meta=90 nullable=1 is_null=0 */
# at 902
#160826 14:00:28 server id 201983306 end_log_pos 933 CRC32 0x47fad9c0 Xid = 239
COMMIT/*!*/;
# at 933
#160826 14:08:31 server id 201983306 end_log_pos 981 CRC32 0xd5f3269b GTID [commit=yes]
SET @@SESSION.GTID_NEXT= 'cdf80832-6a74-11e6-890b-1418774c98d8:12'/*!*/;
# at 981
#160826 14:08:31 server id 201983306 end_log_pos 1054 CRC32 0x993def1d Query thread_id=32 exec_time=0 error_code=0
SET TIMESTAMP=1472191711/*!*/;
BEGIN
/*!*/;
# at 1054
#160826 14:08:31 server id 201983306 end_log_pos 1102 CRC32 0x5fe76bad Table_map: `miles`.`t` mapped to number 78
# at 1102
#160826 14:08:31 server id 201983306 end_log_pos 1145 CRC32 0x98128279 Delete_rows: table id 78 flags: STMT_END_F
### DELETE FROM `miles`.`t`
### WHERE
###   @1=2 /* INT meta=0 nullable=1 is_null=0 */
###   @2='i2' /* VARSTRING(90) meta=90 nullable=1 is_null=0 */
# at 1145
#160826 14:08:31 server id 201983306 end_log_pos 1176 CRC32 0xa2617fb2 Xid = 261
COMMIT/*!*/;
# at 1176
#160826 14:08:40 server id 201983306 end_log_pos 1224 CRC32
0x5cda357a GTID [commit=yes] SET @@SESSION.GTID_NEXT= 'cdf80832-6a74-11e6-890b-1418774c98d8:13'/*!*/; # at 1224 #160826 14:08:40 server id 201983306 end_log_pos 1319 CRC32 0x5de28a5d Query thread_id=32 exec_time=0 error_code=0 SET TIMESTAMP=1472191720/*!*/; drop database miles /*!*/; SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/; # at 1319 #160826 14:09:05 server id 201983306 end_log_pos 1371 CRC32 0xee498579 Rotate to 3306-mysql-bin.000004 pos: 4 DELIMITER ; # End of log file ROLLBACK /* added by mysqlbinlog */; /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/; /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/; step 15:基于binlog日志恢复 # mysqlbinlog --start-position=933 --stop-position=1176 3306-mysql-bin.000003 | mysql -uroot -p --socket=/home/ssd/ali_data/my3306.sock 查看恢复情况 mysql> select * from t; +------+------+ | id | name | +------+------+ | 1 | i1 | | 3 | i3 | +------+------+ 2 rows in set (0.00 sec) step 16:确定数据无误后,重新做一份全备 # innobackupex --defaults-file=/etc/my.cnf --user root --password beijing --no-timestamp /home/ssd/ali_backup/full_xtra_3306_20160826_1
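The prepare chain in steps 7-10 is easy to get wrong: the incrementals must be applied in order, with --redo-only everywhere except the final prepare. Below is a minimal shell sketch that automates it; the directory names are the ones used above, and the loop simply assumes the incremental directories sort in creation order.

#!/bin/bash
# Sketch: prepare a full backup plus all of its incrementals in order.
# Assumes the directory layout used in this post (BASE, BASE_i_01, BASE_i_02, ...).
BASE=/home/ssd/ali_backup/full_xtra_3306_20160826

# 1) redo-only prepare of the base backup
innobackupex --apply-log --redo-only "$BASE/"

# 2) apply each incremental (redo-only), lowest level first
for inc in "$BASE"_i_*; do
    innobackupex --apply-log --redo-only "$BASE/" --incremental-dir="$inc/"
done

# 3) final prepare without --redo-only, which rolls back uncommitted transactions
innobackupex --apply-log "$BASE/"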
STEP 1: Take a backup and prepare it

Take a full backup on the master; a "completed OK" at the end means the backup succeeded:

# innobackupex --defaults-file=/etc/my.cnf --user root --password XXX --no-timestamp /home/ssd/ali_backup/full_xtra_3306_20160825_1
...
160825 14:57:41 completed OK!

To guarantee the consistency of the backed-up data, it must be prepared:

# innobackupex --defaults-file=/etc/my.cnf --user root --password XXX --apply-log /home/ssd/ali_backup/full_xtra_3306_20160825_1
...
160825 15:15:04 completed OK!

STEP 2: Ship the backup to the standby

Compress and pack it:

# zip -r full_xtra_3306_20160825_1.zip full_xtra_3306_20160825_1/*

Copy the backup from the master to the slave:

# scp full_xtra_3306_20160825_1.zip db2:/home/oradata/ali_data_candicate

Fix the ownership of the datadir on the slave:

# chown -R mysql:mysql ali_data_candicate

STEP 3: Create the replication user on the master

Create the user used for master-slave replication and grant it the appropriate privilege:

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'db2' identified by 'XXX';

Verify the user from the slave:

# mysql -hdb1 -urepl -pXXX
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.6.31-log MySQL Community Server (GPL)
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show grants;
+-------------------------------------------------------------------------------------------+
| Grants for repl@192.168.201.99                                                             |
+-------------------------------------------------------------------------------------------+
| GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.201.99' IDENTIFIED BY PASSWORD <secret>  |
+-------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

STEP 4: Configure the slave node

Copy the configuration file over to the slave:

# scp /etc/my.cnf db2:/etc/

Edit the configuration file on the slave and adjust the relevant paths; make sure server_id differs from the master's:

server_id=201993306

Start mysql on the slave.

STEP 5: Start replication

Read xtrabackup_binlog_info to find the position replication should start from:

# more xtrabackup_binlog_info
3306-mysql-bin.000002	191	87fd24be-683d-11e6-ba97-1418774c98d8:49-53

Run the CHANGE MASTER statement on the slave node:

CHANGE MASTER TO MASTER_HOST = 'db1', MASTER_PORT = 3306, MASTER_USER = 'repl', MASTER_PASSWORD = 'repl', MASTER_AUTO_POSITION = 1;

Skip the transactions that are already contained in the backup:

mysql> set global gtid_purged='87fd24be-683d-11e6-ba97-1418774c98d8:49-53';
Query OK, 0 rows affected (0.00 sec)

Start replication:

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

STEP 6: Check

mysql> show slave status\G
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
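Rather than copying the GTID set by hand, you can derive the gtid_purged value directly from xtrabackup_binlog_info. A minimal sketch, assuming the three-column file format shown above (binlog file, position, GTID set) and a placeholder socket path on the slave; note that gtid_purged can only be set while gtid_executed is empty, which RESET MASTER guarantees on a freshly restored instance:

#!/bin/bash
# Sketch: set gtid_purged on the slave from the backup's xtrabackup_binlog_info.
# The socket path is a placeholder; adjust to your slave instance.
GTIDS=$(awk '{ $1=""; $2=""; print }' xtrabackup_binlog_info | tr -d ' ')

# RESET MASTER clears gtid_executed so gtid_purged can be set; skip it if the
# restored instance already starts with an empty GTID state.
mysql -uroot -p --socket=/home/ssd/ali_data/my3306.sock \
      -e "RESET MASTER; SET GLOBAL gtid_purged='${GTIDS}';"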
To recover to a specific point in time, combine innobackupex with the database's binlog.

First we need a snapshot of the database; use innobackupex to take a full backup:

# innobackupex --defaults-file=/etc/my.cnf --user root --password XXXX --no-timestamp /home/ssd/ali_backup/full_xtra_3306_20160825/

Simulate some test data:

mysql> create database miles;
Query OK, 1 row affected (0.00 sec)
mysql> use miles;
Database changed
mysql> create table t (
-> id int,
-> name varchar(30));
Query OK, 0 rows affected (0.01 sec)
mysql> insert into t values (1,'m1'),(2,'m2'),(3,'m3');
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    1 | m1   |
|    2 | m2   |
|    3 | m3   |
+------+------+
3 rows in set (0.00 sec)
mysql> update t set name='c2' where id=2;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    1 | m1   |
|    2 | c2   |
|    3 | m3   |
+------+------+
3 rows in set (0.00 sec)
mysql> delete from t where id=1;
Query OK, 1 row affected (0.00 sec)
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    2 | c2   |
|    3 | m3   |
+------+------+
2 rows in set (0.00 sec)

# Accidentally drop the database
mysql> drop database miles;
Query OK, 1 row affected (0.01 sec)

Check the current binlog files:

mysql> show binary logs;
+-----------------------+-----------+
| Log_name              | File_size |
+-----------------------+-----------+
| 3306-mysql-bin.000001 |      6622 |
| 3306-mysql-bin.000002 |      4533 |
+-----------------------+-----------+
2 rows in set (0.00 sec)

Check the current binlog and its position:

mysql> show master status\G
*************************** 1. row ***************************
File: 3306-mysql-bin.000002
Position: 4533
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 87fd24be-683d-11e6-ba97-1418774c98d8:1-54
1 row in set (0.00 sec)

mysql> flush logs;
Query OK, 0 rows affected (0.01 sec)

Shut down the instance:

# mysqladmin --defaults-file=/home/ssd/ali_data/my.cnf shutdown -uroot -p --socket=/home/ssd/ali_data/my3306.sock

Protect the directory structure of the dropped database:

# mv ali_data ali_data_bak

Apply the log to the snapshot:

# innobackupex --defaults-file=/etc/my.cnf --apply-log /home/ssd/ali_backup/full_xtra_3306_20160825/

In the backup directory, read xtrabackup_binlog_info for the snapshot's position:

# more xtrabackup_binlog_info
3306-mysql-bin.000002	3325	87fd24be-683d-11e6-ba97-1418774c98d8:1-48

Copy the snapshot back into the datadir and fix the ownership:

# innobackupex --defaults-file=/etc/my.cnf --copy-back /home/ssd/ali_backup/full_xtra_3306_20160825/
# chown -R mysql:mysql ali_data

Inspect the binlog to find the starting position and the position of the DROP statement:

# mysqlbinlog -vv --base64-output=decode-rows 3306-mysql-bin.000002
...
# at 3325
#160825 10:59:10 server id 201983306 end_log_pos 3373 CRC32 0xf555e0a5 GTID [commit=yes]
SET @@SESSION.GTID_NEXT= '87fd24be-683d-11e6-ba97-1418774c98d8:49'/*!*/;
# at 3373
#160825 10:59:10 server id 201983306 end_log_pos 3470 CRC32 0x594a3bbd Query thread_id=51 exec_time=0 error_code=0
SET TIMESTAMP=1472093950/*!*/;
create database miles
/*!*/;
...
# at 4147
#160825 11:02:10 server id 201983306 end_log_pos 4195 CRC32 0x453242f4 GTID [commit=yes]
SET @@SESSION.GTID_NEXT= '87fd24be-683d-11e6-ba97-1418774c98d8:53'/*!*/;
# at 4195
#160825 11:02:10 server id 201983306 end_log_pos 4268 CRC32 0xca99e1e1 Query thread_id=51 exec_time=0 error_code=0
SET TIMESTAMP=1472094130/*!*/;
BEGIN
/*!*/;
# at 4268
#160825 11:02:10 server id 201983306 end_log_pos 4316 CRC32 0x3ea968f2 Table_map: `miles`.`t` mapped to number 284
# at 4316
#160825 11:02:10 server id 201983306 end_log_pos 4359 CRC32 0x6e6e79a5 Delete_rows: table id 284 flags: STMT_END_F
### DELETE FROM `miles`.`t`
### WHERE
###   @1=1 /* INT meta=0 nullable=1 is_null=0 */
###   @2='m1' /* VARSTRING(90) meta=90 nullable=1 is_null=0 */
# at 4359
#160825 11:02:10 server id 201983306 end_log_pos 4390 CRC32 0x8a88bf8a Xid = 1371
COMMIT/*!*/;
# at 4390
#160825 11:02:42 server id 201983306 end_log_pos 4438 CRC32 0xe4a75c72 GTID [commit=yes]
SET @@SESSION.GTID_NEXT= '87fd24be-683d-11e6-ba97-1418774c98d8:54'/*!*/;
# at 4438
#160825 11:02:42 server id 201983306 end_log_pos 4533 CRC32 0x5f2c1fe0 Query thread_id=51 exec_time=0 error_code=0
SET TIMESTAMP=1472094162/*!*/;
drop database miles

Replay the log (from the snapshot position 3325 up to 4390, the last commit before the DROP):

# mysqlbinlog --start-position=3325 --stop-position=4390 3306-mysql-bin.000002 | mysql -uroot --socket=/home/ssd/ali_data/my3306.sock -p

Log in and confirm the recovered data:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| miles              |
| mysql              |
| performance_schema |
| tmp                |
| ywcf               |
+--------------------+
6 rows in set (0.00 sec)

mysql> use miles;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from t;
+------+------+
| id   | name |
+------+------+
|    2 | c2   |
|    3 | m3   |
+------+------+
2 rows in set (0.00 sec)

Once the data is confirmed correct, take a fresh snapshot:

# innobackupex --defaults-file=/etc/my.cnf --user root --password beijing --no-timestamp /home/ssd/ali_backup/full_xtra_3306_20160825_1
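If you do not know the exact byte positions, mysqlbinlog can also slice by time. A hedged sketch of the same replay; the datetimes are illustrative and should be taken from the #160825 ... lines in the dump above. Note that --stop-datetime excludes events at or after the given time, so stopping at the DROP's timestamp leaves it out:

# Sketch: time-based variant of the replay above.
mysqlbinlog --start-datetime="2016-08-25 10:59:10" \
            --stop-datetime="2016-08-25 11:02:42" \
            3306-mysql-bin.000002 | mysql -uroot -p --socket=/home/ssd/ali_data/my3306.sock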
Under MySQL GTID mode, running SET GLOBAL SQL_SLAVE_SKIP_COUNTER = n produces the following error:

ERROR 1858 (HY000): sql_slave_skip_counter can not be set when the server is running with @@GLOBAL.GTID_MODE = ON. Instead, for each transaction that you want to skip, generate an empty transaction with the same GTID as the transaction

As the error message suggests, a replication error can be skipped by committing an empty transaction with the same GTID:

STOP SLAVE;
SET GTID_NEXT="7d72f9b4-8577-11e2-a3d7-080027635ef5:5";
BEGIN;
COMMIT;
SET GTID_NEXT="AUTOMATIC";
START SLAVE;

When many transactions have to be skipped, this gets tedious. The mysqlslavetrx tool from MySQL Utilities can skip a whole GTID range instead.

mysqlslavetrx usage:

mysqlslavetrx --gtid-set=87fd24be-683d-11e6-ba97-1418774c98d8:3-40 --slaves=root:beijing@localhost:/home/mysql/my3306.sock

Option reference:

--dryrun
Execute the utility in dry-run mode: show the transactions (GTIDs) that would be skipped for each slave, without actually skipping them. Useful to verify that the correct transactions will be skipped.

--gtid-set=<gtid-set>
Set of Global Transaction Identifiers (GTID) to skip.

--help
Display a help message and exit.

--license
Display license information and exit.

--slaves=<slaves_connections>
Connection information for slave servers. List multiple slaves in a comma-separated list. To connect to a server, it is necessary to specify connection parameters such as the user name, host name, password, and either a port or socket. MySQL Utilities provides a number of ways to supply this information. All of the methods require specifying your choice via a command-line option such as --server, --master, --slave, etc. The methods include the following, in order of most secure to least secure:
use login-paths from your .mylogin.cnf file (encrypted, not visible), e.g. <login-path>[:<port>][:<socket>];
use a configuration file (unencrypted, not visible; available since release 1.5.0), e.g. <configuration-file-path>[:<section>];
specify the data on the command line (unencrypted, visible), e.g. <user>[:<passwd>]@<host>[:<port>][:<socket>].

--ssl-ca
The path to a file that contains a list of trusted SSL CAs.

--ssl-cert
The name of the SSL certificate file to use for establishing a secure connection.

--ssl-key
The name of the SSL key file to use for establishing a secure connection.

--ssl
Specifies if the server connection requires use of SSL. If an encrypted connection cannot be established, the connection attempt fails. Default setting is 0 (SSL not required).

--verbose, -v
Specify how much information to display. Use this option multiple times to increase the amount of information: -v = verbose, -vv = more verbose, -vvv = debug.

--version
Display version information and exit.

You can also skip the already-purged part with

set global gtid_purged='87fd24be-683d-11e6-ba97-1418774c98d8:3-40';

and then restart replication (note that gtid_purged can only be set while gtid_executed is empty). After these operations, if you have any concerns about data consistency, run pt-table-checksum to verify it.
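What mysqlslavetrx does can be approximated with a small loop that injects one empty transaction per GTID in a range. A minimal sketch, assuming a single-UUID range and a socket login; adjust the connection settings to your slave:

#!/bin/bash
# Sketch: inject empty transactions for UUID:START-END, mimicking mysqlslavetrx.
UUID=87fd24be-683d-11e6-ba97-1418774c98d8
START=3
END=40
{
  echo "STOP SLAVE;"
  for n in $(seq "$START" "$END"); do
    # Committing an empty transaction marks this GTID as executed.
    echo "SET GTID_NEXT='${UUID}:${n}'; BEGIN; COMMIT;"
  done
  echo "SET GTID_NEXT='AUTOMATIC'; START SLAVE;"
} | mysql -uroot -p --socket=/home/mysql/my3306.sock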
Software version

mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.27-log |
+------------+
1 row in set (0.00 sec)

Official documentation link

ROW mode

Advantages: in row-level mode, the binlog does not need to record the execution context of each SQL statement; it only records which rows were modified and what they were changed to. The log therefore captures the details of every single row change very clearly, and there are none of the cases where stored procedure, function, or trigger invocations fail to replicate correctly.

Disadvantages: in row-level mode, every executed statement is logged as per-row modifications, which can produce a very large amount of log. Take a statement such as update product set owner_member_id = 'b' where owner_member_id = 'a': what gets logged is not the event corresponding to the UPDATE statement itself (MySQL records the binlog as events) but the change made to every row the statement updated, i.e. many row-change events. The binlog volume grows accordingly. This is especially dramatic for statements like ALTER TABLE: MySQL handles such table-structure changes by effectively rebuilding the whole table, so every row of the table ends up in the log.

binlog status:

mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000028
Position: 1082
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

Test data:

mysql> set autocommit=0;
Query OK, 0 rows affected (0.00 sec)
mysql> SET SESSION binlog_format = 'ROW';
Query OK, 0 rows affected (0.00 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from t1 where id >5;
Query OK, 2 rows affected (0.01 sec)
mysql> insert into t1 values (6,'name6',6),(7,'name7',7);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> update t1 set name='nameX' where id=2;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql> commit;
Query OK, 0 rows affected (0.00 sec)

View the binlog:

# mysqlbinlog -v --base64-output=decode-rows -v --start-position=1082 --database test mysql-bin.000028
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 1082
#160729 21:01:49 server id 168030 end_log_pos 1154 CRC32 0x8731195e Query thread_id=7 exec_time=1 error_code=0
SET TIMESTAMP=1469797309/*!*/;
SET @@session.pseudo_thread_id=7/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1075838976/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=33/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 1154
#160729 21:01:49 server id 168030 end_log_pos 1203 CRC32 0x295e098d Table_map: `test`.`t1` mapped to number 72
# at 1203
#160729 21:01:49 server id 168030 end_log_pos 1268 CRC32 0xcb0bab01 Delete_rows: table id 72 flags: STMT_END_F
### DELETE FROM `test`.`t1`
### WHERE
###   @1=6 /* INT meta=0 nullable=0 is_null=0 */
###   @2='name6' /* VARSTRING(30) meta=30 nullable=0 is_null=0 */
###   @3=6 /* INT meta=0 nullable=0 is_null=0 */
### DELETE FROM `test`.`t1`
### WHERE
###   @1=7 /* INT meta=0 nullable=0 is_null=0 */
###   @2='name7' /* VARSTRING(30) meta=30 nullable=0 is_null=0 */
###   @3=7 /* INT meta=0 nullable=0 is_null=0 */
# at 1268
#160729 21:01:56 server id 168030 end_log_pos 1317 CRC32 0x8b73fec3 Table_map: `test`.`t1` mapped to number 72
# at 1317
#160729 21:01:56 server id 168030 end_log_pos 1382 CRC32 0x8df0f07c Write_rows: table id 72 flags: STMT_END_F
### INSERT INTO `test`.`t1`
### SET
###   @1=6 /* INT meta=0 nullable=0 is_null=0 */
###   @2='name6' /* VARSTRING(30) meta=30 nullable=0 is_null=0 */
###   @3=6 /* INT meta=0 nullable=0 is_null=0 */
### INSERT INTO `test`.`t1`
### SET
###   @1=7 /* INT meta=0 nullable=0 is_null=0 */
###   @2='name7' /* VARSTRING(30) meta=30 nullable=0 is_null=0 */
###   @3=7 /* INT meta=0 nullable=0 is_null=0 */
# at 1382
#160729 21:02:31 server id 168030 end_log_pos 1431 CRC32 0x32c40fac Table_map: `test`.`t1` mapped to number 72
# at 1431
#160729 21:02:31 server id 168030 end_log_pos 1497 CRC32 0x2ec069d8 Update_rows: table id 72 flags: STMT_END_F
### UPDATE `test`.`t1`
### WHERE
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2='name2' /* VARSTRING(30) meta=30 nullable=0 is_null=0 */
###   @3=10 /* INT meta=0 nullable=0 is_null=0 */
### SET
###   @1=2 /* INT meta=0 nullable=0 is_null=0 */
###   @2='nameX' /* VARSTRING(30) meta=30 nullable=0 is_null=0 */
###   @3=10 /* INT meta=0 nullable=0 is_null=0 */
# at 1497
#160729 21:04:27 server id 168030 end_log_pos 1528 CRC32 0xe9081ac6 Xid = 170
COMMIT/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

STATEMENT mode

Advantages: statement level removes row level's main drawback. It does not record every row change, which keeps the binlog small, saves I/O, and improves performance; it only records the statement executed on the master, plus the context information needed to execute it.

Disadvantages: since statements are logged as text, the slave must be given enough context (session settings and so on) to reproduce exactly the same result as the master. MySQL evolves quickly and replicating ever more complex features is challenging, so bugs appear more easily. At statement level, a number of cases are already known to break replication, mostly when a statement uses certain functions or features: for example, sleep() does not replicate correctly in some versions, and last_insert_id() inside a stored procedure can produce an id on the slave that differs from the master. Because row level records the actual per-row changes, it does not suffer from these issues.

binlog status:

mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000028
Position: 1528
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

Test data:

mysql> SET SESSION binlog_format = 'STATEMENT';
Query OK, 0 rows affected (0.00 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from t1 where id >5;
Query OK, 2 rows affected (0.01 sec)
mysql> insert into t1 values (6,'name6',6),(7,'name7',7);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> update t1 set name='nameX' where id=2;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0
mysql> commit;
Query OK, 0 rows affected (0.00 sec)

View the binlog:

# mysqlbinlog -v --start-position=1528 --database test mysql-bin.000028
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#700101 8:00:00 server id 168030 end_log_pos 120 CRC32 0x3562200b Start: binlog v 4, server v 5.6.27-log created 700101 8:00:00
# Warning: this binlog is either in use or was not closed properly.
BINLOG '
AAAAAA9ekAIAdAAAAHgAAAABAAQANS42LjI3LWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXAAEGggAAAAICAgCAAAACgoKGRkAAQsg
YjU=
'/*!*/;
# at 1528
#160729 21:17:24 server id 168030 end_log_pos 1607 CRC32 0xf0b1a6ec Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1469798244/*!*/;
SET @@session.pseudo_thread_id=7/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1075838976/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=33/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
BEGIN
/*!*/;
# at 1607
#160729 21:17:24 server id 168030 end_log_pos 1708 CRC32 0x91b80f3a Query thread_id=7 exec_time=0 error_code=0
use `test`/*!*/;
SET TIMESTAMP=1469798244/*!*/;
delete from t1 where id >5
/*!*/;
# at 1708
#160729 21:17:34 server id 168030 end_log_pos 1831 CRC32 0x281b83d4 Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1469798254/*!*/;
insert into t1 values (6,'name6',6),(7,'name7',7)
/*!*/;
# at 1831
#160729 21:17:40 server id 168030 end_log_pos 1942 CRC32 0xdc3b8c4e Query thread_id=7 exec_time=0 error_code=0
SET TIMESTAMP=1469798260/*!*/;
update t1 set name='nameX' where id=2
/*!*/;
# at 1942
#160729 21:17:45 server id 168030 end_log_pos 1973 CRC32 0x43ef8f69 Xid = 178
COMMIT/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

MIXED mode

Mixed mode can be understood as a combination of the two. In mixed mode, MySQL examines each SQL statement and decides which form to log, choosing between statement and row. Statements that are deterministic and safe to replicate (including plain data-modifying statements such as the UPDATE and DELETE above) are logged in statement format, exactly as in statement level; MySQL switches to row format for statements it cannot replicate safely, such as those using non-deterministic functions. DDL such as table-structure changes is always logged as statements. The dump below confirms this: the simple DELETE, INSERT, and UPDATE appear as statement events.

binlog status:

mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000028
Position: 1973
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set:
1 row in set (0.00 sec)

Test data:

mysql> SET SESSION binlog_format = 'MIXED';
Query OK, 0 rows affected (0.00 sec)
mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)
mysql> delete from t1 where id >5;
Query OK, 2 rows affected (0.01 sec)
mysql> insert into t1 values (6,'name6',6),(7,'name7',7);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Duplicates: 0 Warnings: 0
mysql> update t1 set name='nameX' where id=2;
Query OK, 0 rows affected (0.00 sec)
Rows matched: 1 Changed: 0 Warnings: 0
mysql> commit;
Query OK, 0 rows affected (0.00 sec)

View the binlog:

# mysqlbinlog -v --start-position=1973 --database test mysql-bin.000028
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#700101 8:00:00 server id 168030 end_log_pos 120 CRC32 0x3562200b Start: binlog v 4, server v 5.6.27-log created 700101 8:00:00
# Warning: this binlog is either in use or was not closed properly.
BINLOG ' AAAAAA9ekAIAdAAAAHgAAAABAAQANS42LjI3LWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXAAEGggAAAAICAgCAAAACgoKGRkAAQsg YjU= '/*!*/; # at 1973 #160729 21:21:20 server id 168030 end_log_pos 2052 CRC32 0xa7bda1f9 Query thread_id=7 exec_time=0 error_code=0 SET TIMESTAMP=1469798480/*!*/; SET @@session.pseudo_thread_id=7/*!*/; SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/; SET @@session.sql_mode=1075838976/*!*/; SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=33/*!*/; SET @@session.lc_time_names=0/*!*/; SET @@session.collation_database=DEFAULT/*!*/; BEGIN /*!*/; # at 2052 #160729 21:21:20 server id 168030 end_log_pos 2153 CRC32 0xd9247bfd Query thread_id=7 exec_time=0 error_code=0 use `test`/*!*/; SET TIMESTAMP=1469798480/*!*/; delete from t1 where id >5 /*!*/; # at 2153 #160729 21:21:24 server id 168030 end_log_pos 2276 CRC32 0x8e0e641c Query thread_id=7 exec_time=0 error_code=0 SET TIMESTAMP=1469798484/*!*/; insert into t1 values (6,'name6',6),(7,'name7',7) /*!*/; # at 2276 #160729 21:21:28 server id 168030 end_log_pos 2387 CRC32 0x03317cda Query thread_id=7 exec_time=0 error_code=0 SET TIMESTAMP=1469798488/*!*/; update t1 set name='nameX' where id=2 /*!*/; # at 2387 #160729 21:21:31 server id 168030 end_log_pos 2418 CRC32 0x21f01c0d Xid = 186 COMMIT/*!*/; DELIMITER ; # End of log file ROLLBACK /* added by mysqlbinlog */; /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/; /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
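A quick way to see MIXED mode switch to row logging is to make a statement non-deterministic. A minimal sketch against the same test.t1 table used above; UUID() is classified as unsafe for statement-based logging, so MIXED should fall back to row events for this insert (the socket path is a placeholder):

# Sketch: trigger MIXED mode's row fallback with a non-deterministic function.
mysql -uroot -p --socket=/data/mysql.sock <<'SQL'
SET SESSION binlog_format = 'MIXED';
USE test;
-- UUID() is non-deterministic, so this statement cannot be replicated
-- safely as text and should be logged as Table_map + Write_rows events.
INSERT INTO t1 VALUES (99, LEFT(UUID(), 10), 99);
SQL
# Then inspect the tail of the binlog with mysqlbinlog: the insert should
# appear as row events rather than as the SQL text.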
System environment

# cat /etc/redhat-release
CentOS release 6.5 (Final)
# uname -a
Linux miles 2.6.32-431.el6.x86_64 #1 SMP Fri Nov 22 03:15:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Software version

mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.27-log |
+------------+
1 row in set (0.00 sec)

Enable the general_log:

mysql> set global general_log=1;
Query OK, 0 rows affected (0.04 sec)
mysql> show global variables like '%gen%';
+------------------+-----------------------+
| Variable_name    | Value                 |
+------------------+-----------------------+
| general_log      | ON                    |
| general_log_file | /data/mysql/miles.log |
+------------------+-----------------------+
2 rows in set (0.01 sec)

Create test data:

mysql> create database test;
Query OK, 1 row affected (0.03 sec)
mysql> use test
Database changed
mysql> create table t1 (
-> id int primary key auto_increment,
-> name varchar(10) not null,
-> age int not null);
Query OK, 0 rows affected (0.07 sec)
mysql> insert into t1(name,age) values('name1',10),('name2',10),('name3',10),('name4',10),('name5',10),('name6',10),('name7',10),('name8',10),('name9',10);
Query OK, 9 rows affected (0.01 sec)
Records: 9 Duplicates: 0 Warnings: 0
mysql> create table t2(id int not null primary key auto_increment,name varchar(10) not null,age int not null);
Query OK, 0 rows affected (0.02 sec)
mysql> insert into t2(name,age) values('name1',10),('name2',10),('name3',10),('name4',10),('name5',10),('name6',10),('name7',10),('name8',10),('name9',10);
Query OK, 9 rows affected (0.01 sec)
Records: 9 Duplicates: 0 Warnings: 0

Run a backup without --single-transaction:

# mysqldump -uroot -p --socket=/data/mysql.sock -B test

Check the corresponding general_log:

160728 21:28:00
5 Connect root@localhost on
5 Query /*!40100 SET @@SQL_MODE='' */
5 Query /*!40103 SET TIME_ZONE='+00:00' */
5 Query SHOW VARIABLES LIKE 'gtid\_mode'
5 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('test'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE_GROUP_NAME
5 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('test')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
5 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
5 Init DB test
5 Query SHOW CREATE DATABASE IF NOT EXISTS `test`
5 Query show tables
5 Query LOCK TABLES `t1` READ /*!32311 LOCAL */,`t2` READ /*!32311 LOCAL */
5 Query show table status like 't1'
5 Query SET SQL_QUOTE_SHOW_CREATE=1
5 Query SET SESSION character_set_results = 'binary'
5 Query show create table `t1`
5 Query SET SESSION character_set_results = 'utf8'
5 Query show fields from `t1`
5 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `t1`
5 Query SET SESSION character_set_results = 'binary'
5 Query use `test`
5 Query select @@collation_database
5 Query SHOW TRIGGERS LIKE 't1'
5 Query SET SESSION character_set_results = 'utf8'
5 Query show table status like 't2'
5 Query SET SQL_QUOTE_SHOW_CREATE=1
5 Query SET SESSION character_set_results = 'binary'
5 Query show create table `t2`
5 Query SET SESSION character_set_results = 'utf8'
5 Query show fields from `t2`
5 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `t2`
5 Query SET SESSION character_set_results = 'binary'
5 Query use `test`
5 Query select @@collation_database
5 Query SHOW TRIGGERS LIKE 't2'
5 Query SET SESSION character_set_results = 'utf8'
5 Query UNLOCK TABLES

How it works:

Take table-level read locks on all tables being dumped, so the backup stays consistent:
LOCK TABLES `t1` READ /*!32311 LOCAL */,`t2` READ /*!32311 LOCAL */
Fetch the table definition to generate the CREATE TABLE statement:
show create table `t1`
Adjust the session character set to avoid garbled results:
SET SESSION character_set_results = 'utf8'
Fetch the columns and rows to turn them into INSERT statements:
show fields from `t1`
SELECT /*!40001 SQL_NO_CACHE */ * FROM `t1`
Repeat the t1 steps for the other tables.
Release the read locks:
UNLOCK TABLES

Run a backup with --single-transaction:

# mysqldump -uroot -p --socket=/data/mysql.sock -B test --single-transaction

Check the corresponding general_log:

160728 21:28:41
6 Connect root@localhost on
6 Query /*!40100 SET @@SQL_MODE='' */
6 Query /*!40103 SET TIME_ZONE='+00:00' */
6 Query SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
6 Query START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
6 Query SHOW VARIABLES LIKE 'gtid\_mode'
6 Query UNLOCK TABLES
6 Query SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IN (SELECT DISTINCT LOGFILE_GROUP_NAME FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('test'))) GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE ORDER BY LOGFILE_GROUP_NAME
6 Query SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS WHERE TABLE_SCHEMA IN ('test')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
6 Query SHOW VARIABLES LIKE 'ndbinfo\_version'
6 Init DB test
6 Query SHOW CREATE DATABASE IF NOT EXISTS `test`
6 Query SAVEPOINT sp
6 Query show tables
6 Query show table status like 't1'
6 Query SET SQL_QUOTE_SHOW_CREATE=1
6 Query SET SESSION character_set_results = 'binary'
6 Query show create table `t1`
6 Query SET SESSION character_set_results = 'utf8'
6 Query show fields from `t1`
6 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `t1`
6 Query SET SESSION character_set_results = 'binary'
6 Query use `test`
6 Query select @@collation_database
6 Query SHOW TRIGGERS LIKE 't1'
6 Query SET SESSION character_set_results = 'utf8'
6 Query ROLLBACK TO SAVEPOINT sp
6 Query show table status like 't2'
6 Query SET SQL_QUOTE_SHOW_CREATE=1
6 Query SET SESSION character_set_results = 'binary'
6 Query show create table `t2`
6 Query SET SESSION character_set_results = 'utf8'
6 Query show fields from `t2`
6 Query SELECT /*!40001 SQL_NO_CACHE */ * FROM `t2`
6 Query SET SESSION character_set_results = 'binary'
6 Query use `test`
6 Query select @@collation_database
6 Query SHOW TRIGGERS LIKE 't2'
6 Query SET SESSION character_set_results = 'utf8'
6 Query ROLLBACK TO SAVEPOINT sp
6 Query RELEASE SAVEPOINT sp

How it works:

Set the session isolation level to REPEATABLE READ:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ
Start a transaction (and generate a consistent snapshot):
START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */
Release table locks:
UNLOCK TABLES
Set a savepoint:
SAVEPOINT sp
List the tables in the database:
show tables
For t1, fetch the table information, DDL, and data:
show create table `t1`
SET SESSION character_set_results = 'utf8'
show fields from `t1`
SELECT /*!40001 SQL_NO_CACHE */ * FROM `t1`
Roll back to the savepoint:
ROLLBACK TO SAVEPOINT sp
Repeat the process for the other tables, and finally release the savepoint:
RELEASE SAVEPOINT sp

During the whole dump, no table locks are taken. The REPEATABLE READ isolation level guarantees the transaction only reads data consistent with the moment it started, and ROLLBACK TO SAVEPOINT sp ensures the dump leaves no effect on the data in the database.
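For InnoDB-only schemas, --single-transaction is commonly paired with --master-data=2 to also record the binlog coordinates of the snapshot, which is useful when seeding a slave as in the replication post above. A hedged sketch of a typical invocation; note --master-data briefly takes FLUSH TABLES WITH READ LOCK to read the coordinates, then releases it once the snapshot transaction has started:

# Sketch: consistent InnoDB dump with binlog coordinates recorded as a
# commented CHANGE MASTER line at the top of the file (--master-data=2).
mysqldump -uroot -p --socket=/data/mysql.sock \
          --single-transaction --master-data=2 \
          -B test > test_consistent.sql

# Find the recorded coordinates:
head -n 30 test_consistent.sql | grep "CHANGE MASTER"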
SHOW BINLOG EVENTS
[IN 'log_name']
[FROM pos]
[LIMIT [offset,] row_count]

Shows events in the binary log. If 'log_name' is not specified, the first binary log is shown. The LIMIT clause has the same syntax as in a SELECT statement.

Experiment:

mysql> create table tbl (test TEXT);
Query OK, 0 rows affected (0.00 sec)
mysql> insert into tbl values ("hello,Miles");
Query OK, 1 row affected (0.00 sec)
mysql> flush logs;
Query OK, 0 rows affected (0.00 sec)

The default output is clearly not what we want:

mysql> show binlog events\G
*************************** 1. row ***************************
Log_name: mysql-bin.000001
Pos: 4
Event_type: Format_desc
Server_id: 1
End_log_pos: 107
Info: Server ver: 5.5.49-log, Binlog ver: 4
*************************** 2. row ***************************
Log_name: mysql-bin.000001
Pos: 107
Event_type: Query
Server_id: 1
End_log_pos: 276
Info: grant replication slave,replication client on *.* to repl@'192.168.%' identified by 'beijing'
*************************** 3. row ***************************
Log_name: mysql-bin.000001
Pos: 276
Event_type: Rotate
Server_id: 1
End_log_pos: 319
Info: mysql-bin.000002;pos=4
3 rows in set (0.00 sec)

Find the current binlog; because the statements above ended with a flush logs, the commands we executed are recorded in the previous log:

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000007 |      107 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

mysql> show binlog events in 'mysql-bin.000006'\G
......
Log_name: mysql-bin.000006
Pos: 623
Event_type: Query
Server_id: 1
End_log_pos: 708
Info: create database miles
*************************** 7. row ***************************
Log_name: mysql-bin.000006
Pos: 708
Event_type: Query
Server_id: 1
End_log_pos: 800
Info: use `miles`; create table tbl (test TEXT)
*************************** 8. row ***************************
Log_name: mysql-bin.000006
Pos: 800
Event_type: Query
Server_id: 1
End_log_pos: 869
Info: BEGIN
*************************** 9. row ***************************
Log_name: mysql-bin.000006
Pos: 869
Event_type: Query
Server_id: 1
End_log_pos: 971
Info: use `miles`; insert into tbl values ("hello,Miles")
*************************** 10. row ***************************
Log_name: mysql-bin.000006
Pos: 971
Event_type: Xid
Server_id: 1
End_log_pos: 998
Info: COMMIT /* xid=1992 */
*************************** 11. row ***************************
Log_name: mysql-bin.000006
Pos: 998
Event_type: Rotate
Server_id: 1
End_log_pos: 1041
Info: mysql-bin.000007;pos=4
11 rows in set (0.00 sec)
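When a log is large, the FROM and LIMIT clauses from the syntax above keep the output manageable. A small sketch, with the position taken from the listing above (FROM must point at the start of a valid event):

# Sketch: page through events starting at a known position, five at a time.
mysql -uroot -p -e "SHOW BINLOG EVENTS IN 'mysql-bin.000006' FROM 623 LIMIT 0, 5\G"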
Reference documentation

Installation

Step 1. Install the MariaDB Enterprise repository configuration package:

# rpm -i http://downloads.mariadb.com/enterprise/hakg-cyew/generate/10.1/mariadb-enterprise-repository.rpm

Step 2. Install MaxScale:

# yum install maxscale

Start MaxScale:

# service maxscale start
Starting MaxScale: maxscale (pid 7023) is running... [ OK ]
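A quick sanity check after starting; this sketch relies only on standard tools, and since the listening ports depend on your maxscale.cnf, the grep is illustrative:

# Sketch: confirm the maxscale process is up and see which ports it listens on.
ps -ef | grep [m]axscale
netstat -tlnp | grep maxscale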
For single-instance installation, refer to that document; this post builds on top of it.

Advantages of multi-instance deployment:

Full use of system resources. mysqld is a single-process, multi-threaded model that does not use multiple cores very well and cannot fully exploit the hardware on its own, so deploying several instances per server is worth considering.

Resource isolation. If different businesses share one instance, connections, caches, and other resources are shared too; if one business comes under heavy load, it can easily disturb the normal operation of another.

Business and module isolation. For example, business A needs to support mobile clients and therefore requires a database upgrade to the utf8mb4 character set, while business B does not; if A and B share a deployment, upgrading the database for A inevitably affects B.

Installation steps

Copy the existing configuration file:

# pwd
/etc/mysql
# cp -p mysql3306.cnf mysql3307.cnf
# ll
total 8
-rw-r--r--. 1 mysql mysql 2368 Jul 19 06:59 mysql3306.cnf
-rw-r--r--. 1 mysql mysql 2368 Jul 19 06:59 mysql3307.cnf

Adjust the configuration file (replace every 3306 with 3307):

# sed -i '1,$s/3306/3307/g' mysql3307.cnf
# more mysql3307.cnf
[mysqld]
# GENERAL #
user = mysql
port = 3307
socket = /data1/db3307/my3307.sock
pid_file = /data1/db3307/mysql.pid
datadir = /data1/db3307/
tmpdir = /data1/tmp
log_bin = /data1/db3307/3307-mysql-bin
relay-log = /data1/db3307/3307-relay-bin
log_error = /data1/db3307/error.log
slow_query_log_file = /data1/db3307/slow-queries.log
long_query_time=1
sync_binlog = 0
expire_logs_days = 7
back_log=1024
skip-name-resolve
skip-slave-start
skip-external-locking
skip-character-set-client-handshake
explicit_defaults_for_timestamp=true
default_storage_engine = InnoDB
bind-address=0.0.0.0
#lower_case_table_names = 0
myisam_recover = FORCE,BACKUP
transaction-isolation = READ-COMMITTED
table_definition_cache = 4096
table_open_cache = 4096
# connection #
max_connections = 1100
max_user_connections = 1000
max_connect_errors = 1000
# timeout #
wait_timeout = 100
interactive_timeout = 100
lock_wait_timeout = 3
connect_timeout = 20
slave-net-timeout = 30
# character #
character-set-server=utf8
init-connect='SET NAMES utf8'
# disabled query cache #
query_cache_type = 0
query_cache_size = 0
# replication #
server_id=71493307
gtid_mode=ON
enforce-gtid-consistency
log-slave-updates
binlog-format=row
slave-parallel-workers=6
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync_master_info = 10000
slave_sql_verify_checksum=1
skip-slave-start
# session #
key_buffer_size = 128M
tmp_table_size = 32M
max_heap_table_size = 32M
max_allowed_packet = 32M
bulk_insert_buffer_size = 32M
sort_buffer_size = 128K
read_buffer_size = 1M
read_rnd_buffer_size = 1M
join_buffer_size = 128K
myisam_sort_buffer_size = 32M
tmp_table_size = 32M
max_heap_table_size = 64M
thread_cache_size = 64
#thread_concurrency = 32
thread_stack = 192K
# INNODB #
innodb_flush_method = O_DIRECT
innodb_data_home_dir = /data1/db3307/
innodb_data_file_path = ibdata1:10M:autoextend
#redo log
innodb_log_group_home_dir=/data1/db3307/
innodb_log_files_in_group = 3
innodb_log_file_size = 1G
#innodb performance
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table = 1
innodb_buffer_pool_instances = 8
innodb_io_capacity = 2000
innodb_lock_wait_timeout = 30
binlog_error_action = ABORT_SERVER
innodb_buffer_pool_size = 256M
innodb_max_dirty_pages_pct=90
innodb_file_format=Barracuda
innodb_support_xa = 0
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup = 1

Create the directories and fix their ownership:

# mkdir -p /data1/db3307
# chown -R mysql:mysql /data1/db3307

Initialize the instance:

# ./mysql_install_db --basedir=/usr/local/mysql --defaults-file=/etc/mysql/mysql3307.cnf
Installing MySQL system tables...2016-07-19 08:49:04 0 [Note] /usr/local/mysql/bin/mysqld (mysqld 5.6.24-log) starting as process 5438 ...
OK
Filling help tables...2016-07-19 08:50:33 0 [Note] /usr/local/mysql/bin/mysqld (mysqld 5.6.24-log) starting as process 5484 ...
OK
......

# pwd
/data1/db3307
# ll
total 3222684
-rw-rw----. 1 mysql mysql      65420 Jul 19 08:50 3307-mysql-bin.000001
-rw-rw----. 1 mysql mysql    1099687 Jul 19 08:50 3307-mysql-bin.000002
-rw-rw----. 1 mysql mysql         72 Jul 19 08:50 3307-mysql-bin.index
-rw-rw----. 1 mysql mysql       5077 Jul 19 08:50 error.log
-rw-rw----. 1 mysql mysql        962 Jul 19 08:50 ib_buffer_pool
-rw-rw----. 1 mysql mysql   77594624 Jul 19 08:50 ibdata1
-rw-rw----. 1 mysql mysql 1073741824 Jul 19 08:50 ib_logfile0
-rw-rw----. 1 mysql mysql 1073741824 Jul 19 08:50 ib_logfile1
-rw-rw----. 1 mysql mysql 1073741824 Jul 19 08:50 ib_logfile2
drwx------. 2 mysql mysql       4096 Jul 19 08:50 mysql
drwx------. 2 mysql mysql       4096 Jul 19 08:50 performance_schema
drwx------. 2 mysql mysql       4096 Jul 19 08:49 test

Start the instance:

# mysqld_safe --defaults-file=/etc/mysql/mysql3307.cnf &

Check the related processes:

# ps -ef |grep mysql
root 3551 2818 0 07:39 pts/1 00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/mysql/mysql3306.cnf
mysql 4604 3551 0 07:39 pts/1 00:00:04 /usr/local/mysql/bin/mysqld --defaults-file=/etc/mysql/mysql3306.cnf --basedir=/usr/local/mysql --datadir=/data1/db3306/ --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data1/db3306/error.log --pid-file=/data1/db3306/mysql.pid --socket=/data1/db3306/my3306.sock --port=3306
root 4630 2818 0 07:40 pts/1 00:00:00 mysql -uroot --socket=/data1/db3306/my3306.sock
root 5513 5341 0 08:53 pts/3 00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/mysql/mysql3307.cnf
mysql 6566 5513 15 08:53 pts/3 00:00:01 /usr/local/mysql/bin/mysqld --defaults-file=/etc/mysql/mysql3307.cnf --basedir=/usr/local/mysql --datadir=/data1/db3307/ --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data1/db3307/error.log --pid-file=/data1/db3307/mysql.pid --socket=/data1/db3307/my3307.sock --port=3307

Account security hardening:

mysql> select user,host,password from mysql.user;
+------+-----------+----------+
| user | host      | password |
+------+-----------+----------+
| root | localhost |          |
| root | miles     |          |
| root | 127.0.0.1 |          |
| root | ::1       |          |
|      | localhost |          |
|      | miles     |          |
+------+-----------+----------+
6 rows in set (0.01 sec)
mysql> delete from mysql.user where user='';
Query OK, 2 rows affected (0.01 sec)
mysql> delete from mysql.user where host <>'localhost';
Query OK, 3 rows affected (0.00 sec)
mysql> select user,host,password from mysql.user;
+------+-----------+----------+
| user | host      | password |
+------+-----------+----------+
| root | localhost |          |
+------+-----------+----------+
1 row in set (0.00 sec)
mysql> set password for root@'localhost' = password('XXXXXX');
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

You can see that the test database is a security risk: any user can operate in it.

mysql> select * from mysql.db \G
*************************** 1. row ***************************
Host: %
Db: test
User:
Select_priv: Y
Insert_priv: Y
Update_priv: Y
Delete_priv: Y
Create_priv: Y
Drop_priv: Y
Grant_priv: N
References_priv: Y
Index_priv: Y
Alter_priv: Y
Create_tmp_table_priv: Y
Lock_tables_priv: Y
Create_view_priv: Y
Show_view_priv: Y
Create_routine_priv: Y
Alter_routine_priv: N
Execute_priv: N
Event_priv: Y
Trigger_priv: Y
*************************** 2. row ***************************
Host: %
Db: test\_%
User:
Select_priv: Y
Insert_priv: Y
Update_priv: Y
Delete_priv: Y
Create_priv: Y
Drop_priv: Y
Grant_priv: N
References_priv: Y
Index_priv: Y
Alter_priv: Y
Create_tmp_table_priv: Y
Lock_tables_priv: Y
Create_view_priv: Y
Show_view_priv: Y
Create_routine_priv: Y
Alter_routine_priv: N
Execute_priv: N
Event_priv: Y
Trigger_priv: Y
2 rows in set (0.00 sec)
mysql> delete from mysql.db;
Query OK, 2 rows affected (0.00 sec)
mysql> drop database test;
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
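The cp/sed/mkdir steps above generalize easily into a small script for adding the N-th instance. A minimal sketch, assuming the same template file and path layout; note the sed also rewrites server_id (71493306 becomes 71493307, etc.), which is exactly what we want for unique server ids:

#!/bin/bash
# Sketch: clone the 3306 template config for a new port and bring the instance up.
# Usage: ./new_instance.sh 3308
PORT=$1
sed "s/3306/${PORT}/g" /etc/mysql/mysql3306.cnf > /etc/mysql/mysql${PORT}.cnf
mkdir -p /data1/db${PORT}
chown -R mysql:mysql /data1/db${PORT} /etc/mysql/mysql${PORT}.cnf
/usr/local/mysql/scripts/mysql_install_db --basedir=/usr/local/mysql \
    --defaults-file=/etc/mysql/mysql${PORT}.cnf
mysqld_safe --defaults-file=/etc/mysql/mysql${PORT}.cnf &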
Things to consider for a production deployment:

Version choice: 5.1, 5.5 or 5.6?
5.1 is no longer maintained upstream and is not recommended.
5.5 is the version most widely used in production today.
5.6 is the latest stable release; it has been out for more than three years and is also widely used.

Distribution choice: official community edition? Percona Server? MariaDB?
Official edition: recommended.
Percona Server: improved manageability and performance, fully compatible with MySQL.
MariaDB: developed by the original MySQL team.

Installation method: package, binary tarball, or source build?
The binary tarball is recommended for production.

Standardize paths and parameter configuration (template them as much as possible).

One instance with multiple schemas, or multiple instances with one schema each?
With only one instance deployed, mysqld (a single process) cannot fully use multiple cores, wasting resources.
Multi-instance deployment is the mainstream production layout today.

Check whether the system already contains MySQL components:

# rpm -qa |grep mysql
mysql-libs-5.1.71-1.el6.x86_64

Remove them:

# yum remove mysql-libs*

Download the MySQL binary tarball:

# wget http://120.52.72.24/cdn.mysql.com/c3pr90ntc0td/archives/mysql-5.6/mysql-5.6.24-linux-glibc2.5-x86_64.tar.gz

Install the dependency package:

# yum install libaio

Create the mysql user and group:

# groupadd mysql
# useradd -g mysql mysql

Unpack the binary tarball:

# tar xzf mysql-5.6.24-linux-glibc2.5-x86_64.tar.gz
# pwd
/opt/mysql-5.6.24-linux-glibc2.5-x86_64

Create a symlink:

# pwd
/usr/local
# ln -s /opt/mysql-5.6.24-linux-glibc2.5-x86_64/ mysql

Make mysql the owner of the files:

# chown -R mysql:mysql /opt/mysql-5.6.24-linux-glibc2.5-x86_64
# chown -R mysql:mysql /usr/local/mysql

Create the required directories.

Configuration file directory:
# mkdir -p /etc/mysql
# chown -R mysql:mysql /etc/mysql

Data file directory:
# mkdir -p /data1/db3306

Temporary file directory:
# mkdir -p /data1/tmp
# chown -R mysql:mysql /data1

Set the environment variables (append):

# vi /etc/profile
...
export MYSQL_HOME=/usr/local/mysql
export PATH=$MYSQL_HOME/bin:$PATH
export C_INCLUDE_PATH=$MYSQL_HOME/include
export LIBDIR=/usr/local/lib
export LD_LIBRARY_PATH=/usr/local/lib:$MYSQL_HOME/lib
export LD_RUN_PATH=/usr/local/lib

# source /etc/profile
# mysql --version
mysql Ver 14.14 Distrib 5.6.24, for linux-glibc2.5 (x86_64) using EditLine wrapper

Edit the configuration file:

# vi /etc/mysql/mysql3306.cnf
[mysqld]
# GENERAL #
user = mysql
port = 3306
socket = /data1/db3306/my3306.sock
pid_file = /data1/db3306/mysql.pid
datadir = /data1/db3306/
tmpdir = /data1/tmp
log_bin = /data1/db3306/3306-mysql-bin
relay-log = /data1/db3306/3306-relay-bin
log_error = /data1/db3306/error.log
slow_query_log_file = /data1/db3306/slow-queries.log
long_query_time=1
sync_binlog = 0
expire_logs_days = 7
back_log=1024
skip-name-resolve
skip-slave-start
skip-external-locking
skip-character-set-client-handshake
explicit_defaults_for_timestamp=true
default_storage_engine = InnoDB
bind-address=0.0.0.0
#lower_case_table_names = 0
myisam_recover = FORCE,BACKUP
transaction-isolation = READ-COMMITTED
table_definition_cache = 4096
table_open_cache = 4096
# connection #
max_connections = 1100
max_user_connections = 1000
max_connect_errors = 1000
# timeout #
wait_timeout = 100
interactive_timeout = 100
lock_wait_timeout = 3
connect_timeout = 20
slave-net-timeout = 30
# character #
character-set-server=utf8
init-connect='SET NAMES utf8'
# disabled query cache #
query_cache_type = 0
query_cache_size = 0
# replication #
server_id=71493306
gtid_mode=ON
enforce-gtid-consistency
log-slave-updates
binlog-format=row
slave-parallel-workers=6
master-info-repository=TABLE
relay-log-info-repository=TABLE
sync_master_info = 10000
slave_sql_verify_checksum=1
skip-slave-start
# session #
key_buffer_size = 128M
tmp_table_size = 32M
max_heap_table_size = 32M
max_allowed_packet = 32M
bulk_insert_buffer_size = 32M
sort_buffer_size = 128K
read_buffer_size = 1M
read_rnd_buffer_size = 1M
join_buffer_size = 128K
myisam_sort_buffer_size = 32M
tmp_table_size = 32M
max_heap_table_size = 64M
thread_cache_size = 64
#thread_concurrency = 32
thread_stack = 192K
# INNODB #
innodb_flush_method = O_DIRECT
innodb_data_home_dir = /data1/db3306/
innodb_data_file_path = ibdata1:10M:autoextend
#redo log
innodb_log_group_home_dir=/data1/db3306/
innodb_log_files_in_group = 3
innodb_log_file_size = 1G
#innodb performance
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table = 1
innodb_buffer_pool_instances = 8
innodb_io_capacity = 2000
innodb_lock_wait_timeout = 30
binlog_error_action = ABORT_SERVER
innodb_buffer_pool_size = 256M
innodb_max_dirty_pages_pct=90
innodb_file_format=Barracuda
innodb_support_xa = 0
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup = 1

Initialize:

# pwd
/usr/local/mysql/scripts

Check the help (some of the important options):

# ./mysql_install_db -h
--basedir=path       The path to the MySQL installation directory.
--datadir=path       The path to the MySQL data directory. If missing, the directory will be created, but its parent directory must already exist and be writable.
--defaults-file=name Only read default options from the given file name.
--user=user_name     The login username to use for running mysqld. Files and directories created by mysqld will be owned by this user. You must be root to use this option. By default mysqld runs using your current login name and files and directories that it creates will be owned by you.

You should see two OKs:

# ./mysql_install_db --defaults-file=/etc/mysql/mysql3306.cnf --basedir=/usr/local/mysql
Installing MySQL system tables...2016-07-19 07:36:49 0 [Note] /usr/local/mysql/bin/mysqld (mysqld 5.6.24-log) starting as process 3499 ...
OK
Filling help tables...2016-07-19 07:37:09 0 [Note] /usr/local/mysql/bin/mysqld (mysqld 5.6.24-log) starting as process 3521 ...
OK
...

Start mysql:

# mysqld_safe --defaults-file=/etc/mysql/mysql3306.cnf &
# ps -ef |grep mysql
root 3551 2818 0 07:39 pts/1 00:00:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --defaults-file=/etc/mysql/mysql3306.cnf
mysql 4604 3551 3 07:39 pts/1 00:00:00 /usr/local/mysql/bin/mysqld --defaults-file=/etc/mysql/mysql3306.cnf --basedir=/usr/local/mysql --datadir=/data1/db3306/ --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=/data1/db3306/error.log --pid-file=/data1/db3306/mysql.pid --socket=/data1/db3306/my3306.sock --port=3306

Account security hardening:

# mysql -uroot --socket=/data1/db3306/my3306.sock
mysql> select user,host,password from mysql.user;
+------+-----------+----------+
| user | host      | password |
+------+-----------+----------+
| root | localhost |          |
| root | miles     |          |
| root | 127.0.0.1 |          |
| root | ::1       |          |
|      | localhost |          |
|      | miles     |          |
+------+-----------+----------+
6 rows in set (0.01 sec)
mysql> delete from mysql.user where user='';
Query OK, 2 rows affected (0.01 sec)
mysql> delete from mysql.user where host <>'localhost';
Query OK, 3 rows affected (0.00 sec)
mysql> select user,host,password from mysql.user;
+------+-----------+----------+
| user | host      | password |
+------+-----------+----------+
| root | localhost |          |
+------+-----------+----------+
1 row in set (0.00 sec)
mysql> set password for root@'localhost' = password('XXXXXX');
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

You can see that the test database is a security risk: any user can operate in it.

mysql> select * from mysql.db \G
*************************** 1. row ***************************
Host: %
Db: test
User:
Select_priv: Y
Insert_priv: Y
Update_priv: Y
Delete_priv: Y
Create_priv: Y
Drop_priv: Y
Grant_priv: N
References_priv: Y
Index_priv: Y
Alter_priv: Y
Create_tmp_table_priv: Y
Lock_tables_priv: Y
Create_view_priv: Y
Show_view_priv: Y
Create_routine_priv: Y
Alter_routine_priv: N
Execute_priv: N
Event_priv: Y
Trigger_priv: Y
*************************** 2. row ***************************
Host: %
Db: test\_%
User:
Select_priv: Y
Insert_priv: Y
Update_priv: Y
Delete_priv: Y
Create_priv: Y
Drop_priv: Y
Grant_priv: N
References_priv: Y
Index_priv: Y
Alter_priv: Y
Create_tmp_table_priv: Y
Lock_tables_priv: Y
Create_view_priv: Y
Show_view_priv: Y
Create_routine_priv: Y
Alter_routine_priv: N
Execute_priv: N
Event_priv: Y
Trigger_priv: Y
2 rows in set (0.00 sec)
mysql> delete from mysql.db;
Query OK, 2 rows affected (0.00 sec)
mysql> drop database test;
Query OK, 0 rows affected (0.01 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
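For unattended restarts you may want an init script. MySQL binary tarballs ship a template under support-files; a hedged sketch of installing it (the service name is an assumption, and on multi-instance hosts mysqld_multi is the usual alternative):

# Sketch: install the bundled init script so the instance starts on boot.
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql3306
chkconfig --add mysql3306
chkconfig mysql3306 on
# The script reads [mysqld] settings from /etc/my.cnf by default; since we use
# /etc/mysql/mysql3306.cnf, edit /etc/init.d/mysql3306 and set basedir and
# datadir (basedir=/usr/local/mysql, datadir=/data1/db3306) accordingly.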
Original article link

Nginx is well-known for its ability to act as a reverse-proxy with a small memory footprint. It usually sits in the front-end web tier to redirect connections to available backend services, provided these have passed some health checks. Using a reverse-proxy is common when you are running a critical application or service that requires high availability. It also distributes the load equally among the backend services.

Recently, nginx 1.9 introduced support for TCP load balancing - similar to what HAProxy is capable of. The one major drawback is that it does not support advanced backend health checks. This is required when running MySQL Galera Cluster, as we'll explain in the next section. Note that this limitation is removed in the paid-only edition called NGINX Plus.

In this blog post, we are going to play around with nginx as a reverse-proxy for MySQL Galera Cluster services to achieve higher availability. We had a Galera cluster up and running, deployed using ClusterControl on CentOS 7.1. We are going to install nginx on a fresh new host, as illustrated in the following diagram:

Backend Health Checks

With the existence of a synchronously-replicated cluster like Galera or NDB, it has become quite popular to use a TCP reverse-proxy as load balancer. All MySQL nodes are treated equally, as one can read from or write to any node. No read-write splitting is required, as you would need with MySQL master-slave replication. Nginx is not database-aware, so some additional steps are required to configure the health checks for Galera Cluster backends so that they return something understandable.

If you are running HAProxy, a health check script on each MySQL server in the load balancing set should be able to return an HTTP response status. For example, if the backend MySQL server is healthy, the script returns a simple HTTP 200 OK status code; otherwise, it returns 503 Service unavailable. HAProxy can then update the routing table to exclude the problematic backend servers from the load balancing set and redirect the incoming connections only to the available servers. This is well explained in this webinar on HAProxy. Unfortunately, the HAProxy health check uses xinetd to daemonize and listen on a custom port (9200), which is not configurable in nginx yet. The following flowchart illustrates the process to report the health of a Galera node for a multi-master setup:

At the time of this writing, NGINX Plus (the paid release of nginx) also supports advanced backend health checks, but it does not support a custom backend monitoring port.

**Update - 20th January 2016: NGINX Plus just released nginx-plus-r8 which supports custom backend monitoring port. Details at http://nginx.org/r/health_check.

Using clustercheck-iptables

To overcome this limitation, we've created a health check script called clustercheck-iptables. It is a background script that checks the availability of a Galera node, and adds a redirection port using iptables if the Galera node is healthy (instead of returning an HTTP response). This allows other TCP load balancers with limited health check capabilities to monitor the backend Galera nodes correctly. Besides HAProxy, you can now use your favorite reverse proxy like nginx (>1.9), IPVS, keepalived, piranha, distributor, balance or pen to load balance requests across Galera nodes.

So how does it work? The script performs a health check every second on each Galera node.
If the node is healthy (wsrep_local_state_comment=Synced and read_only=OFF) or (wsrep_local_state_comment=Donor and wsrep_sst_method=xtrabackup/xtrabackup-v2), a port redirection will be set up using iptables (default: 3308 redirects to 3306) with the following command:

$ iptables -t nat -A PREROUTING -s 0.0.0.0/0 -p tcp --dport 3308 -j REDIRECT --to-ports 3306

Otherwise, the rule is removed from the iptables PREROUTING chain. On the load balancer, define the designated redirection port (3308) instead. If the backend node is "unhealthy", port 3308 will be unreachable because the corresponding iptables rule is removed on the database node, and the load balancer will then exclude it from the load balancing set.

Let's install the script and see how it works in practice.

1. On the database servers, run the following commands to install the script:

$ git clone https://github.com/ashraf-s9s/clustercheck-iptables
$ cp clustercheck-iptables/mysqlchk_iptables /usr/local/sbin

2. By default, the script uses a MySQL user called "mysqlchk_user" with password "mysqlchk_password". We need to ensure this MySQL user exists with the corresponding password before the script is able to perform health checks. Run the following DDL statements on one of the DB nodes (Galera should replicate the statement to the other nodes):

mysql> GRANT PROCESS ON *.* TO 'mysqlchk_user'@'localhost' IDENTIFIED BY 'mysqlchk_password';
mysql> FLUSH PRIVILEGES;

** If you would like to run as a different user/password, specify the -u and/or -p argument on the command line. See the examples on the Github page.

3. The script requires iptables to be running. In this example, we ran on CentOS 7, which comes with firewalld by default, so we have to install iptables-services beforehand:

$ yum install -y iptables-services
$ systemctl enable iptables
$ systemctl start iptables

Then, set up basic rules for MySQL Galera Cluster so iptables won't affect the database communication:

$ iptables -I INPUT -m tcp -p tcp --dport 3306 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 3308 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 4444 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 4567:4568 -j ACCEPT
$ service iptables save
$ service iptables restart

4. Once the basic rules are added, verify them with the following command:

$ iptables -L -n

5. Test mysqlchk_iptables:

$ mysqlchk_iptables -t
Detected variables/status:
wsrep_local_state: 4
wsrep_sst_method: xtrabackup-v2
read_only: OFF
[11-11-15 08:33:49.257478192] [INFO] Galera Cluster Node is synced.

6. Looks good. Now we can daemonize the health check script:

$ mysqlchk_iptables -d
/usr/local/sbin/mysqlchk_iptables started with PID 66566.

7. Our PREROUTING rules will look something like this:

$ iptables -L -n -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
REDIRECT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:3308 redir ports 3306
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination

8. Finally, add the health check command into /etc/rc.local so it starts automatically on boot:

echo '/usr/local/sbin/mysqlchk_iptables -d' >> /etc/rc.local

In some distributions, you also need to make sure rc.local has permission to execute on boot:

$ chmod +x /etc/rc.local

From the application side, verify that you can connect to MySQL through port 3308. Repeat the above steps (except step #2) for the remaining DB nodes.
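Since CentOS 7 is a systemd distribution, a unit file is a more robust alternative to rc.local for keeping the health-check daemon running. A minimal sketch; the unit name and the pkill-based stop command are assumptions on top of the original script, which daemonizes itself with -d:

# Sketch: run mysqlchk_iptables under systemd instead of rc.local.
cat > /etc/systemd/system/mysqlchk_iptables.service <<'EOF'
[Unit]
Description=Galera health check driving an iptables port redirect
After=network.target iptables.service

[Service]
# -d makes the script fork into the background, hence Type=forking.
Type=forking
ExecStart=/usr/local/sbin/mysqlchk_iptables -d
ExecStop=/usr/bin/pkill -f mysqlchk_iptables
EOF
systemctl daemon-reload
systemctl enable --now mysqlchk_iptables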
Now that the backend health checks are configured correctly, let's set up our MySQL load balancer as described in the next section.

Setting Up nginx as MySQL Load Balancer

1. On the load balancer node, install the required packages:

$ yum -y install pcre-devel zlib-devel

2. Install nginx 1.9 from source with the TCP proxy module (--with-stream):

$ wget http://nginx.org/download/nginx-1.9.6.tar.gz
$ tar -xzf nginx-1.9.6.tar.gz
$ cd nginx-1.9.6
$ ./configure --with-stream
$ make
$ make install

3. Add the following lines into the nginx configuration file located at /usr/local/nginx/conf/nginx.conf:

stream {
    upstream stream_backend {
        zone tcp_servers 64k;
        server 192.168.55.201:3308;
        server 192.168.55.202:3308;
        server 192.168.55.203:3308;
    }

    server {
        listen 3307;
        proxy_pass stream_backend;
        proxy_connect_timeout 1s;
    }
}

4. Start nginx:

$ /usr/local/nginx/sbin/nginx

5. Verify that nginx is listening on port 3307 as we defined. MySQL connections come in through this port and are forwarded to one of the available backends on port 3308; the iptables rule on the respective DB node then redirects the connection to port 3306, where MySQL is listening:

$ netstat -tulpn | grep 3307
tcp 0 0 0.0.0.0:3307 0.0.0.0:* LISTEN 5348/nginx: master

Great. We have now set up our nginx instance as a MySQL Galera Cluster load balancer. Let's test it out!

Testing

Let's perform some tests to verify that our Galera cluster is correctly load balanced. We performed various exercises to see nginx and the Galera cluster at work, running the following actions consecutively:

Set read_only=ON on g1.local, then back to read_only=OFF.
Kill the mysql service on g1.local and force an SST on startup.
Kill the other two database nodes so g1.local becomes non-primary.
Bootstrap g1.local from the non-primary state.
Rejoin the other two nodes back to the cluster.

The screencast below contains several terminal outputs, which are explained as follows:

Terminal 1 (top left): iptables PREROUTING chain output
Terminal 2 (top right): MySQL error log for g1.local
Terminal 3 (middle left): Application output when connecting to the nginx load balancer. It reports date, hostname, wsrep_last_committed and wsrep_local_state_comment
Terminal 4 (middle right): Output of /var/log/mysqlchk_iptables
Terminal 5 (bottom left): Output of read_only and wsrep_sst_method on g1.local
Terminal 6 (bottom right): Action console

The following asciinema recording shows the result:

Summary

Galera node health checks by a TCP load balancer used to be limited to HAProxy, thanks to its ability to use a custom port for backend health checks. With this script, it is now possible for any TCP load balancer/reverse-proxy to monitor Galera nodes correctly. You are welcome to fork, pull request and extend the capabilities of this script.
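A quick way to confirm the balancing from a client: repeatedly connect through the nginx port and print which backend answered. A minimal sketch; <nginx_host> and the account are placeholders, and @@wsrep_node_name is a standard Galera variable:

# Sketch: verify connections through nginx (port 3307) spread across backends.
for i in 1 2 3 4 5 6; do
  mysql -h <nginx_host> -P 3307 -u myuser -pmypass -N \
        -e "select @@hostname, @@wsrep_node_name" 2>/dev/null
done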
原文链接 本文实例讲述了MySQL实现树状所有子节点查询的方法。分享给大家供大家参考,具体如下: 在Oracle 中我们知道有一个 Hierarchical Queries 通过CONNECT BY 我们可以方便的查了所有当前节点下的所有子节点。但很遗憾,在MySQL的目前版本中还没有对应的功能。 在MySQL中如果是有限的层次,比如我们事先如果可以确定这个树的最大深度是4, 那么所有节点为根的树的深度均不会超过4,则我们可以直接通过left join 来实现。 但很多时候我们无法控制树的深度。这时就需要在MySQL中用存储过程来实现或在你的程序中来实现这个递归。本文讨论一下几种实现的方法。 样例数据: mysql> create table treeNodes -> ( -> id int primary key, -> nodename varchar(20), -> pid int -> ); Query OK, 0 rows affected (0.09 sec) mysql> select * from treenodes; +----+----------+------+ | id | nodename | pid | +----+----------+------+ | 1 | A | 0 | | 2 | B | 1 | | 3 | C | 1 | | 4 | D | 2 | | 5 | E | 2 | | 6 | F | 3 | | 7 | G | 6 | | 8 | H | 0 | | 9 | I | 8 | | 10 | J | 8 | | 11 | K | 8 | | 12 | L | 9 | | 13 | M | 9 | | 14 | N | 12 | | 15 | O | 12 | | 16 | P | 15 | | 17 | Q | 15 | +----+----------+------+ 17 rows in set (0.00 sec) 树形图如下 1:A +-- 2:B | +-- 4:D | +-- 5:E +-- 3:C +-- 6:F +-- 7:G 8:H +-- 9:I | +-- 12:L | | +--14:N | | +--15:O | | +--16:P | | +--17:Q | +-- 13:M +-- 10:J +-- 11:K 方法一:利用函数来得到所有子节点号。 创建一个function getChildLst, 得到一个由所有子节点号组成的字符串. mysql> delimiter // mysql> mysql> CREATE FUNCTION `getChildLst`(rootId INT) -> RETURNS varchar(1000) -> BEGIN -> DECLARE sTemp VARCHAR(1000); -> DECLARE sTempChd VARCHAR(1000); -> -> SET sTemp = '$'; -> SET sTempChd =cast(rootId as CHAR); -> -> WHILE sTempChd is not null DO -> SET sTemp = concat(sTemp,',',sTempChd); -> SELECT group_concat(id) INTO sTempChd FROM treeNodes where FIND_IN_SET(pid,sTempChd)>0; -> END WHILE; -> RETURN sTemp; -> END -> // Query OK, 0 rows affected (0.00 sec) mysql> mysql> delimiter ; 使用我们直接利用find_in_set函数配合这个getChildlst来查找 mysql> select getChildLst(1); +-----------------+ | getChildLst(1) | +-----------------+ | $,1,2,3,4,5,6,7 | +-----------------+ 1 row in set (0.00 sec) mysql> select * from treeNodes -> where FIND_IN_SET(id, getChildLst(1)); +----+----------+------+ | id | nodename | pid | +----+----------+------+ | 1 | A | 0 | | 2 | B | 1 | | 3 | C | 1 | | 4 | D | 2 | | 5 | E | 2 | | 6 | F | 3 | | 7 | G | 6 | +----+----------+------+ 7 rows in set (0.01 sec) mysql> select * from treeNodes -> where FIND_IN_SET(id, getChildLst(3)); +----+----------+------+ | id | nodename | pid | +----+----------+------+ | 3 | C | 1 | | 6 | F | 3 | | 7 | G | 6 | +----+----------+------+ 3 rows in set (0.01 sec) 优点: 简单,方便,没有递归调用层次深度的限制 (max_sp_recursion_depth,最大255) ; 缺点:长度受限,虽然可以扩大 RETURNS varchar(1000),但总是有最大限制的。 MySQL目前版本( 5.1.33-community)中还不支持function 的递归调用。 方法二:利用临时表和过程递归 创建存储过程如下。createChildLst 为递归过程,showChildLst为调用入口过程,准备临时表及初始化。 mysql> delimiter // mysql> mysql> # 入口过程 mysql> CREATE PROCEDURE showChildLst (IN rootId INT) -> BEGIN -> CREATE TEMPORARY TABLE IF NOT EXISTS tmpLst -> (sno int primary key auto_increment,id int,depth int); -> DELETE FROM tmpLst; -> -> CALL createChildLst(rootId,0); -> -> select tmpLst.*,treeNodes.* from tmpLst,treeNodes where tmpLst.id=treeNodes.id order by tmpLst.sno; -> END; -> // Query OK, 0 rows affected (0.00 sec) mysql> mysql> # 递归过程 mysql> CREATE PROCEDURE createChildLst (IN rootId INT,IN nDepth INT) -> BEGIN -> DECLARE done INT DEFAULT 0; -> DECLARE b INT; -> DECLARE cur1 CURSOR FOR SELECT id FROM treeNodes where pid=rootId; -> DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; -> -> insert into tmpLst values (null,rootId,nDepth); -> -> OPEN cur1; -> -> FETCH cur1 INTO b; -> WHILE done=0 DO -> CALL createChildLst(b,nDepth+1); -> FETCH cur1 INTO b; -> END WHILE; -> -> CLOSE cur1; -> END; -> // Query OK, 0 rows affected (0.00 sec) mysql> delimiter ; 调用时传入结点 mysql> call 
showChildLst(1);
+-----+------+-------+----+----------+------+
| sno | id   | depth | id | nodename | pid  |
+-----+------+-------+----+----------+------+
|   4 |    1 |     0 |  1 | A        |    0 |
|   5 |    2 |     1 |  2 | B        |    1 |
|   6 |    4 |     2 |  4 | D        |    2 |
|   7 |    5 |     2 |  5 | E        |    2 |
|   8 |    3 |     1 |  3 | C        |    1 |
|   9 |    6 |     2 |  6 | F        |    3 |
|  10 |    7 |     3 |  7 | G        |    6 |
+-----+------+-------+----+----------+------+
7 rows in set (0.13 sec)

Query OK, 0 rows affected, 1 warning (0.14 sec)

mysql>
mysql> call showChildLst(3);
+-----+------+-------+----+----------+------+
| sno | id   | depth | id | nodename | pid  |
+-----+------+-------+----+----------+------+
|   1 |    3 |     0 |  3 | C        |    1 |
|   2 |    6 |     1 |  6 | F        |    3 |
|   3 |    7 |     2 |  7 | G        |    6 |
+-----+------+-------+----+----------+------+
3 rows in set (0.11 sec)

Query OK, 0 rows affected, 1 warning (0.11 sec)

depth is the node depth, so the application can do display formatting with it, much like Oracle's LEVEL pseudo column. sno exists only to control ordering. You can also join the temporary table tmpLst against other tables in the database.

The system variable max_sp_recursion_depth controls the upper limit on recursive calls. The following example sets it to 12:

mysql> set max_sp_recursion_depth=12;
Query OK, 0 rows affected (0.00 sec)

Advantages: more flexible, provides the depth, and returns rows in tree-traversal order.
Disadvantages: recursion is limited to 255 levels.

Method 3: an ordinary intermediate table plus a procedure.

Create the stored procedure as follows. Since MySQL does not allow a temporary table to be referenced more than once in the same statement, an ordinary table tmpLst has to be used; your application is responsible for cleaning it up afterwards.

delimiter //

drop PROCEDURE IF EXISTS showTreeNodes_yongyupost2000//

CREATE PROCEDURE showTreeNodes_yongyupost2000 (IN rootid INT)
BEGIN
  DECLARE Level int;
  drop TABLE IF EXISTS tmpLst;
  CREATE TABLE tmpLst (
    id int,
    nLevel int,
    sCort varchar(8000)
  );

  Set Level=0;
  INSERT into tmpLst SELECT id,Level,ID FROM treeNodes WHERE PID=rootid;
  WHILE ROW_COUNT()>0 DO
    SET Level=Level+1;
    INSERT into tmpLst
      SELECT A.ID,Level,concat(B.sCort,A.ID) FROM treeNodes A,tmpLst B
      WHERE A.PID=B.ID AND B.nLevel=Level-1;
  END WHILE;
END;
//

delimiter ;

CALL showTreeNodes_yongyupost2000(0);

Execution produces a tmpLst table; nLevel is the node depth, sCort is the sort field.

Usage:

SELECT concat(SPACE(B.nLevel*2),'+--',A.nodename)
FROM treeNodes A,tmpLst B
WHERE A.ID=B.ID
ORDER BY B.sCort;
+--------------------------------------------+
| concat(SPACE(B.nLevel*2),'+--',A.nodename) |
+--------------------------------------------+
| +--A                                       |
|   +--B                                     |
|     +--D                                   |
|     +--E                                   |
|   +--C                                     |
|     +--F                                   |
|       +--G                                 |
| +--H                                       |
|   +--J                                     |
|   +--K                                     |
|   +--I                                     |
|     +--L                                   |
|       +--N                                 |
|       +--O                                 |
|         +--P                               |
|         +--Q                               |
|     +--M                                   |
+--------------------------------------------+
17 rows in set (0.00 sec)

Advantages: provides the depth, returns rows in tree-traversal order, and has no recursion limit.
Disadvantages: because of MySQL's temporary-table restriction, an ordinary table must be used and cleaned up afterwards.

These are a few relatively simple stored-procedure implementations of this query in MySQL.
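One note beyond the original article: MySQL 8.0 later added recursive common table expressions, which solve this problem without functions or procedures. A minimal sketch against the treeNodes table above (requires MySQL 8.0+):

mysql> WITH RECURSIVE subtree AS (
    ->   SELECT id, nodename, pid, 0 AS depth
    ->   FROM treeNodes WHERE id = 1
    ->   UNION ALL
    ->   SELECT t.id, t.nodename, t.pid, s.depth + 1
    ->   FROM treeNodes t JOIN subtree s ON t.pid = s.id
    -> )
    -> SELECT * FROM subtree;

This returns the same rows as getChildLst(1)/showChildLst(1), with the depth included.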
Insert buffer
Doublewrite
Adaptive hash index
Asynchronous IO
Flush neighbor page

Insert buffer

For inserts and updates on non-clustered (secondary) indexes, the InnoDB storage engine does not insert into the index page directly every time. It first checks whether the target secondary index page is in the buffer pool: if so, it inserts directly; if not, it first places the change in an insert buffer object. To the database it looks as if the secondary index entry has already been inserted into its leaf node, while in fact it has not; it is stored in another location. The insert buffer and the secondary index leaf pages are then merged at a certain frequency and under certain conditions, which combines many inserts into a single merge operation.

Conditions for using the insert buffer:
- the index is a secondary index
- the index is not unique

Use show engine innodb status to inspect the insert buffer:

mysql> show engine innodb status\G;
......
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 33, seg size 35, 2118 merges
merged operations:
 insert 2296, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
......

seg size is the current size of the insert buffer: 35*16KB.
free list len is the length of the free list.
size is the number of merged record pages.
merges is the number of merges.
merged operations and discarded operations count each kind of change buffer operation:
- merged operations insert: records inserted
- merged operations delete mark: records delete-marked
- merged operations delete: records purged
- discarded operations: operations discarded because the table had already been dropped when its merge came due
- discarded operations insert: insert merges discarded this way

change buffer

InnoDB introduced the change buffer in 1.0.x; it lets InnoDB buffer all DML operations.

The innodb_change_buffering parameter selects which buffers are enabled. Possible values: inserts, deletes, purges, changes, all, none. changes enables inserts and deletes, all enables everything, none disables everything. The default is all.

mysql> show variables like 'innodb_change_buffering';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| innodb_change_buffering | all   |
+-------------------------+-------+

The innodb_change_buffer_max_size parameter limits the change buffer's maximum share of memory. The default is 25, meaning at most 1/4 of the buffer pool; the maximum valid value is 50.

mysql> show variables like 'innodb_change_buffer_max_size';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| innodb_change_buffer_max_size | 25    |
+-------------------------------+-------+
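Both parameters are dynamic, so they can be adjusted at runtime without a restart; a minimal sketch (the values are illustrative, not recommendations):

mysql> SET GLOBAL innodb_change_buffering = 'inserts';  -- buffer only insert operations
mysql> SET GLOBAL innodb_change_buffer_max_size = 30;   -- allow up to 30% of the buffer pool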
Checkpoints exist to solve three problems:
- shorten database recovery time
- flush dirty pages to disk when the buffer pool runs short
- flush dirty pages when the redo log would otherwise become unusable

So when the database crashes it does not need to replay all of the redo log: pages covered by the checkpoint have already been flushed to disk, and recovery only needs the redo log written after the checkpoint.

When the buffer pool runs short, the least recently used dirty pages (per the LRU algorithm) are forced through a checkpoint and flushed to disk.

The reusable part of the redo log is the part that is no longer needed, i.e. the part a crash recovery would not have to read. If redo log that is still needed must be overwritten, a checkpoint is forced so that the buffer pool pages are flushed at least up to the current position in the redo log.

The LSNs can be inspected with show engine innodb status:

mysql> show engine innodb status\G;
......
LOG
---
Log sequence number 17383720925
Log flushed up to   17383720925
Last checkpoint at  17383720535

InnoDB has two kinds of checkpoint:
- sharp
- fuzzy

A sharp checkpoint happens at database shutdown: all dirty pages are flushed to disk. This is the default behavior, i.e. innodb_fast_shutdown=1.

A fuzzy checkpoint is used while the database is running: only a portion of the dirty pages is flushed at a time.

Fuzzy checkpoints occur in these forms:
- master thread checkpoint
- flush_lru_list checkpoint
- async/sync flush checkpoint
- dirty page too much checkpoint

The master thread checkpoint flushes a certain percentage of the dirty page list to disk every second or every ten seconds. This process is asynchronous.

The flush_lru_list checkpoint exists because InnoDB must keep roughly 100 free pages available in the LRU list. If there are too few, pages at the tail of the LRU list are evicted, and any dirty ones among them are checkpointed. Since 5.6 this work is done by the page cleaner thread, and the parameter innodb_lru_scan_depth controls the number of available pages in the LRU list, default 1024:

mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.30-log |
+------------+
1 row in set (0.00 sec)

mysql> show variables like 'innodb_lru_scan_depth'\G;
*************************** 1. row ***************************
Variable_name: innodb_lru_scan_depth
        Value: 1024
1 row in set (0.10 sec)

The async/sync flush checkpoint covers the case where the redo log is about to become unusable and some pages must be forcibly flushed from the flush list. Let redo_lsn be the LSN already written to the redo log and checkpoint_lsn the LSN already flushed back to disk, and define:

checkpoint_age   = redo_lsn - checkpoint_lsn
async_water_mark = 0.75 * total_redo_log_file_size
sync_water_mark  = 0.9  * total_redo_log_file_size

When checkpoint_age < async_water_mark, no dirty pages need to be flushed.
When async_water_mark < checkpoint_age < sync_water_mark, an async flush is triggered: enough dirty pages are flushed from the flush list so that afterwards checkpoint_age < async_water_mark.
checkpoint_age > sync_water_mark rarely happens, unless the redo log files are set too small and a bulk insert such as load data is in progress; this triggers a sync flush, again flushing enough dirty pages from the flush list so that afterwards checkpoint_age < async_water_mark.

The async/sync flush checkpoint guarantees that the redo log can keep being reused cyclically.

The dirty page too much checkpoint fires when there are too many dirty pages, forcing InnoDB to checkpoint, mainly to keep enough usable pages in the buffer pool. It is controlled by the parameter innodb_max_dirty_pages_pct:

mysql> show variables like 'innodb_max_dirty_pages_pct'\G;
*************************** 1. row ***************************
Variable_name: innodb_max_dirty_pages_pct
        Value: 75
1 row in set (0.08 sec)
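As a worked example using the LOG section above: redo_lsn ("Log flushed up to") is 17383720925 and checkpoint_lsn ("Last checkpoint at") is 17383720535, so

checkpoint_age = 17383720925 - 17383720535 = 390

a mere 390 bytes of redo not yet covered by the checkpoint, far below either water mark on any realistically sized redo log.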
Besides the buffer pool, InnoDB's memory area also includes the redo log buffer. InnoDB first writes redo log records into this buffer and then flushes them to the redo log files at a certain frequency. Its size is controlled by the parameter innodb_log_buffer_size:

mysql> show variables like 'innodb_log_buffer_size'\G;
*************************** 1. row ***************************
Variable_name: innodb_log_buffer_size
        Value: 8388608
1 row in set (0.00 sec)

An 8MB redo log buffer satisfies most applications, because the buffer is flushed to the redo log files under the following conditions:
- the master thread flushes the redo log buffer to the redo log files once per second
- every transaction commit flushes it to the redo log files
- when less than half of the redo log buffer remains free, it is flushed to the redo log files
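innodb_log_buffer_size can only be set at startup, so if a write-heavy workload does need a larger buffer, it goes into the server configuration file; a minimal sketch (16M is an arbitrary illustrative value):

[mysqld]
innodb_log_buffer_size = 16M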
LRU list

Pages newly read into InnoDB are not placed at the head of the LRU list but at the midpoint position, by default at 5/8 of the length of the LRU list. The midpoint is controlled by the parameter innodb_old_blocks_pct:

mysql> show variables like 'innodb_old_blocks_pct'\G;
*************************** 1. row ***************************
Variable_name: innodb_old_blocks_pct
        Value: 37
1 row in set (0.00 sec)

The value 37 means new pages are inserted at a point 37% from the tail of the list (i.e. 5/8 from the head).

A large query may evict hot pages from the LRU list. To avoid this, the parameter innodb_old_blocks_time can be adjusted; it specifies how long a page must wait at the mid position before it may be moved to the hot end of the LRU list.

mysql> show variables like 'innodb_old_blocks_time'\G;
*************************** 1. row ***************************
Variable_name: innodb_old_blocks_time
        Value: 1000
1 row in set (0.00 sec)

The usage and running state of the LRU list and the Free list can be seen in innodb status:

mysql> show engine innodb status\G;
......
Buffer pool size   8191
Free buffers       7699
Database pages     491
Old database pages 0
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 449, created 42, written 101
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 788 / 1000, young-making rate 0 / 1000 not 0 / 1000
......

Buffer pool size shows 8191 pages in total, i.e. 8191*16K, roughly a 128M buffer pool.
Free buffers is the number of pages in the free list.
Database pages is the number of pages in the LRU list.
Database pages plus Free buffers does not equal Buffer pool size, because buffer pool pages may also be allocated to the adaptive hash index, lock information, the insert buffer, and so on.

Because this is a local environment with no update activity, not young stays 0 even though innodb_old_blocks_time is set.

Buffer pool hit rate is the buffer pool hit ratio; normally it should not fall below 95%. If it is low, check whether full table scans are polluting the LRU list.

The buffer pool state can also be queried through innodb_buffer_pool_stats:

mysql> select pool_id,hit_rate,
    -> pages_made_young,pages_not_made_young
    -> from information_schema.innodb_buffer_pool_stats\G;
*************************** 1. row ***************************
             pool_id: 0
            hit_rate: 0
    pages_made_young: 0
pages_not_made_young: 0
1 row in set (0.00 sec)

innodb_buffer_page_lru shows the details of each page in the LRU list:

mysql> select table_name,space,page_number,page_type
    -> from information_schema.innodb_buffer_page_lru where space=1;
+------------------------------+-------+-------------+-------------+
| table_name                   | space | page_number | page_type   |
+------------------------------+-------+-------------+-------------+
| `mysql`.`innodb_table_stats` |     1 |           3 | INDEX       |
| NULL                         |     1 |           1 | IBUF_BITMAP |
+------------------------------+-------+-------------+-------------+
2 rows in set (0.13 sec)

InnoDB supports compressed pages, which compress the original 16K page to 1K, 2K, 4K or 8K. Non-16K pages are managed through the unzip_LRU list:

mysql> show engine innodb status\G;
......
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 223, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
......

Here the LRU list holds 223 pages and the unzip_LRU list is empty. The LRU page count includes the unzip_LRU pages. Pages in the unzip_LRU list can be observed via innodb_buffer_page_lru:

mysql> select
    -> table_name,space,page_number,compressed_size
    -> from information_schema.innodb_buffer_page_lru
    -> where compressed_size <> 0;

Flush list

When data in the LRU list is modified, the page becomes dirty. The database flushes dirty pages back to disk through the checkpoint mechanism; the flush list is the list of dirty pages. A dirty page exists both in the LRU list and in the flush list: the LRU list manages the availability of pages in the buffer pool, while the flush list manages flushing pages back to disk.

mysql> show engine innodb status\G;
......
Modified db pages 2456
......

Modified db pages shows the number of dirty pages.
Dirty page details can be queried via innodb_buffer_page_lru:

mysql> select table_name,space,page_number,page_type from information_schema.innodb_buffer_page_lru where oldest_modification>0;
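Both midpoint parameters are dynamic and can be adjusted at runtime; a minimal sketch (the values are illustrative):

mysql> SET GLOBAL innodb_old_blocks_pct = 20;    -- insert new pages at the last 20% of the list
mysql> SET GLOBAL innodb_old_blocks_time = 1000; -- a page must survive 1000ms before moving to the hot end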
Here the InnoDB buffer pool is set to 20GB (21474836480 bytes):

mysql> show variables like 'innodb_buffer_pool_size'\G;
*************************** 1. row ***************************
Variable_name: innodb_buffer_pool_size
        Value: 21474836480
1 row in set (0.00 sec)

The InnoDB memory layout allows multiple buffer pool instances. Each page is assigned to an instance by hash value, which reduces resource contention and increases concurrency:

mysql> show variables like 'innodb_buffer_pool_instances'\G;
*************************** 1. row ***************************
Variable_name: innodb_buffer_pool_instances
        Value: 4
1 row in set (0.00 sec)

Set innodb_buffer_pool_instances to a value greater than 1 in the configuration file. Then inspect innodb status:

......
---BUFFER POOL 0
Buffer pool size   327680
Free buffers       327562
Database pages     118
Old database pages 0
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 118, created 0, written 0
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 118, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 1
Buffer pool size   327679
Free buffers       327658
Database pages     21
Old database pages 0
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 21, created 0, written 0
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 21, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
......

In 5.6 the state of each buffer pool instance can be viewed with:

mysql> select pool_id,pool_size,
    -> free_buffers,database_pages
    -> from information_schema.innodb_buffer_pool_stats\G;
*************************** 1. row ***************************
       pool_id: 0
     pool_size: 327680
  free_buffers: 327562
database_pages: 118
*************************** 2. row ***************************
       pool_id: 1
     pool_size: 327679
  free_buffers: 327658
database_pages: 21
*************************** 3. row ***************************
       pool_id: 2
     pool_size: 327679
  free_buffers: 327675
database_pages: 4
*************************** 4. row ***************************
       pool_id: 3
     pool_size: 327679
  free_buffers: 327596
database_pages: 83
4 rows in set (0.00 sec)
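A minimal configuration-file sketch for the two settings discussed, matching the values in this example:

[mysqld]
innodb_buffer_pool_size = 20G
innodb_buffer_pool_instances = 4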
MySQL is composed mainly of the following parts:
- connection pool component
- management services and utilities component
- SQL interface component
- query parser component
- optimizer component
- cache and buffer component
- pluggable storage engines
- physical files
master thread

The core background thread. It is mainly responsible for asynchronously flushing buffer pool data to disk to guarantee consistency, including flushing dirty pages, merging the insert buffer, and recycling undo pages.

IO thread

Mainly responsible for callback handling of IO requests. There are write, read, insert buffer and log IO threads. The thread counts can be adjusted through parameters:

mysql> show variables like 'innodb_version'\G;
*************************** 1. row ***************************
Variable_name: innodb_version
        Value: 5.5.49
1 row in set (0.00 sec)

mysql> show variables like 'innodb_%io_threads'\G;
*************************** 1. row ***************************
Variable_name: innodb_read_io_threads
        Value: 4
*************************** 2. row ***************************
Variable_name: innodb_write_io_threads
        Value: 4
2 rows in set (0.00 sec)

The IO threads can be observed in innodb status:

mysql> show engine innodb status\G;
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
160620 13:08:00 INNODB MONITOR OUTPUT
=====================================
......
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)

purge thread

Responsible for recycling undo pages that have been used and allocated. By default the purge operation is performed in the master thread; a separate purge thread relieves the master thread, improving CPU usage and storage engine performance. Enable a standalone purge thread by adding the following to the parameter file:

[mysqld]
innodb_purge_threads=1

Starting with InnoDB 1.2 multiple purge threads are supported, which further speeds up undo page recycling; and since purge threads read undo pages discretely, this also makes better use of the disk's random read performance.

mysql> select version()\G;
*************************** 1. row ***************************
version(): 5.6.30-log
1 row in set (0.00 sec)

mysql> show variables like 'innodb_purge_threads';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| innodb_purge_threads | 4     |
+----------------------+-------+

page cleaner thread

Performs the flushing of dirty pages.
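A configuration-file sketch combining the thread parameters shown above (these are startup parameters; the counts are illustrative):

[mysqld]
innodb_read_io_threads = 4
innodb_write_io_threads = 4
innodb_purge_threads = 4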
TCP/IP

When the client and the mysql instance are not on the same server, the two machines connect over a TCP/IP network:

# mysql -h192.168.74.128 -uroot -p

Note that for TCP/IP connections, mysql first checks a permissions table to decide whether the client IP is allowed to connect to the instance:

mysql> select host,user,password from mysql.user\G
*************************** 1. row ***************************
    host: localhost
    user: root
password: *63D90C8BB77C99F7F5D836C5AE2D1E9BE5CE43C2
*************************** 2. row ***************************
    host: 192.168.74.128
    user: root
password: *63D90C8BB77C99F7F5D836C5AE2D1E9BE5CE43C2
*************************** 3. row ***************************
    host: 192.168.10.246
    user: root
password: *63D90C8BB77C99F7F5D836C5AE2D1E9BE5CE43C2
*************************** 4. row ***************************
    host: 192.168.74.1
    user: root
password: *63D90C8BB77C99F7F5D836C5AE2D1E9BE5CE43C2
*************************** 5. row ***************************
    host: %
    user: root
password: *63D90C8BB77C99F7F5D836C5AE2D1E9BE5CE43C2
5 rows in set (0.00 sec)

Named pipes and shared memory

On Windows, if the two communicating processes are on the same machine, named pipes can be used; mysql must be started with the --enable-named-pipe option in the configuration file. MySQL also offers shared-memory connections, enabled by adding --shared-memory to the configuration file; to use a shared-memory connection, the client must use the --protocol=memory option.

UNIX domain sockets

On Linux and Unix, when the instance and the client are on the same server, a socket can be used.

Socket file path:

mysql> show variables like 'socket';
+---------------+-----------------+
| Variable_name | Value           |
+---------------+-----------------+
| socket        | /tmp/mysql.sock |
+---------------+-----------------+
1 row in set (0.00 sec)

Access:

# mysql -uroot -p -S /tmp/mysql.sock
The following command shows where a mysql instance looks for configuration files at startup:

# mysql --help | grep my.cnf
order of preference, my.cnf, $MYSQL_TCP_PORT,
/etc/my.cnf /etc/mysql/my.cnf /usr/local/mysql/etc/my.cnf /usr/local/mysql5.6.27/my.cnf ~/.my.cnf

When several configuration files define the same parameter, mysql uses the value from the last configuration file read.
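To see the merged result after all option files have been read, the standard --print-defaults switch works with any MySQL program; a sketch (the option values shown are illustrative):

# mysqld --print-defaults
mysqld would have been started with the following arguments:
--port=3306 --datadir=/usr/local/mysql/data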
Original link

This article presents a working implementation of a dynamic row-to-column pivot. Below I explain it with a simple example database.

Table structure

I use a fairly simple example here, the classic row-to-column case: student scores.

Three tables: students, courses, scores.

Student table: kept simple, just two fields, student id and student name.

CREATE TABLE `student` (
  `stuid` VARCHAR(16) NOT NULL COMMENT '学号',
  `stunm` VARCHAR(20) NOT NULL COMMENT '学生姓名',
  PRIMARY KEY (`stuid`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB;

Course table: course number and course name.

CREATE TABLE `courses` (
  `courseno` VARCHAR(20) NOT NULL,
  `coursenm` VARCHAR(100) NOT NULL,
  PRIMARY KEY (`courseno`)
)
COMMENT='课程表'
COLLATE='utf8_general_ci'
ENGINE=InnoDB;

Score table: student id, course number, score.

CREATE TABLE `score` (
  `stuid` VARCHAR(16) NOT NULL,
  `courseno` VARCHAR(20) NOT NULL,
  `scores` FLOAT NULL DEFAULT NULL,
  PRIMARY KEY (`stuid`, `courseno`)
)
COLLATE='utf8_general_ci'
ENGINE=InnoDB;

That is the table structure. No foreign keys are defined here, but from the structure it is clear that stuid and courseno in the score table reference the student and course tables respectively.

Data preparation

/* student data */
Insert Into student (stuid, stunm) Values('1001', '张三');
Insert Into student (stuid, stunm) Values('1002', '李四');
Insert Into student (stuid, stunm) Values('1003', '赵二');
Insert Into student (stuid, stunm) Values('1004', '王五');
Insert Into student (stuid, stunm) Values('1005', '刘青');
Insert Into student (stuid, stunm) Values('1006', '周明');
/* course data */
Insert Into courses (courseno, coursenm) Values('C001', '大学语文');
Insert Into courses (courseno, coursenm) Values('C002', '新视野英语');
Insert Into courses (courseno, coursenm) Values('C003', '离散数学');
Insert Into courses (courseno, coursenm) Values('C004', '概率论与数理统计');
Insert Into courses (courseno, coursenm) Values('C005', '线性代数');
Insert Into courses (courseno, coursenm) Values('C006', '高等数学(一)');
Insert Into courses (courseno, coursenm) Values('C007', '高等数学(二)');
/* score data */
Insert Into score(stuid, courseno, scores) Values('1001', 'C001', 67);
Insert Into score(stuid, courseno, scores) Values('1002', 'C001', 68);
Insert Into score(stuid, courseno, scores) Values('1003', 'C001', 69);
Insert Into score(stuid, courseno, scores) Values('1004', 'C001', 70);
Insert Into score(stuid, courseno, scores) Values('1005', 'C001', 71);
Insert Into score(stuid, courseno, scores) Values('1006', 'C001', 72);
Insert Into score(stuid, courseno, scores) Values('1001', 'C002', 87);
Insert Into score(stuid, courseno, scores) Values('1002', 'C002', 88);
Insert Into score(stuid, courseno, scores) Values('1003', 'C002', 89);
Insert Into score(stuid, courseno, scores) Values('1004', 'C002', 90);
Insert Into score(stuid, courseno, scores) Values('1005', 'C002', 91);
Insert Into score(stuid, courseno, scores) Values('1006', 'C002', 92);
Insert Into score(stuid, courseno, scores) Values('1001', 'C003', 83);
Insert Into score(stuid, courseno, scores) Values('1002', 'C003', 84);
Insert Into score(stuid, courseno, scores) Values('1003', 'C003', 85);
Insert Into score(stuid, courseno, scores) Values('1004', 'C003', 86);
Insert Into score(stuid, courseno, scores) Values('1005', 'C003', 87);
Insert Into score(stuid, courseno, scores) Values('1006', 'C003', 88);
Insert Into score(stuid, courseno, scores) Values('1001', 'C004', 88);
Insert Into score(stuid, courseno, scores) Values('1002', 'C004', 89);
Insert Into score(stuid, courseno, scores) Values('1003', 'C004', 90);
Insert Into score(stuid, courseno, scores) Values('1004', 'C004', 91);
Insert Into score(stuid, courseno, scores) Values('1005', 'C004', 92);
Insert Into score(stuid, courseno, scores) Values('1006', 'C004', 93);
Insert Into score(stuid, courseno, scores) Values('1001', 'C005', 77);
Insert Into score(stuid, courseno, scores) Values('1002', 'C005', 78);
Insert Into score(stuid, courseno, scores) Values('1003', 'C005', 79);
Insert Into
score(stuid, courseno, scores) Values('1004', 'C005', 80);
Insert Into score(stuid, courseno, scores) Values('1005', 'C005', 81);
Insert Into score(stuid, courseno, scores) Values('1006', 'C005', 82);
Insert Into score(stuid, courseno, scores) Values('1001', 'C006', 77);
Insert Into score(stuid, courseno, scores) Values('1002', 'C006', 78);
Insert Into score(stuid, courseno, scores) Values('1003', 'C006', 79);
Insert Into score(stuid, courseno, scores) Values('1004', 'C006', 80);
Insert Into score(stuid, courseno, scores) Values('1005', 'C006', 81);
Insert Into score(stuid, courseno, scores) Values('1006', 'C006', 82);

Why pivot rows to columns

The vertical, one-row-per-score list is what a plain score query returns, but usually we want the wide, one-row-per-student layout shown in the original post's screenshot. To get that result we need a row-to-column pivot.

How to pivot

To get the result above, an ordinary pivot only needs this:

Static pivot

Select st.stuid, st.stunm,
MAX(CASE c.coursenm WHEN '大学语文' THEN s.scores ELSE 0 END ) '大学语文',
MAX(CASE c.coursenm WHEN '新视野英语' THEN ifnull(s.scores,0) ELSE 0 END ) '新视野英语',
MAX(CASE c.coursenm WHEN '离散数学' THEN ifnull(s.scores,0) ELSE 0 END ) '离散数学',
MAX(CASE c.coursenm WHEN '概率论与数理统计' THEN ifnull(s.scores,0) ELSE 0 END ) '概率论与数理统计',
MAX(CASE c.coursenm WHEN '线性代数' THEN ifnull(s.scores,0) ELSE 0 END ) '线性代数',
MAX(CASE c.coursenm WHEN '高等数学(一)' THEN ifnull(s.scores,0) ELSE 0 END ) '高等数学(一)',
MAX(CASE c.coursenm WHEN '高等数学(二)' THEN ifnull(s.scores,0) ELSE 0 END ) '高等数学(二)'
From Student st
Left Join score s On st.stuid = s.stuid
Left Join courses c On c.courseno = s.courseno
Group by st.stuid

From the statement above you can see that once the fixed set of courses is known, we can use expressions like

MAX(CASE c.coursenm WHEN '线性代数' THEN ifnull(s.scores,0) ELSE 0 END ) '线性代数',

to pivot rows into columns. But of course there are more than just these few courses. Written this way, we would first have to determine how many courses there are and list every course name, making the query very long. So: can the pivot be done dynamically? The answer is yes!

Dynamic pivot

How do we do a dynamic pivot? First we need to generate expressions like

MAX(CASE c.coursenm WHEN '大学语文' THEN s.scores ELSE 0 END ) '大学语文',
MAX(CASE c.coursenm WHEN '线性代数' THEN ifnull(s.scores,0) ELSE 0 END ) '线性代数',
MAX(CASE c.coursenm WHEN '离散数学' THEN ifnull(s.scores,0) ELSE 0 END ) '离散数学'

dynamically instead of writing them out one by one. How? By SQL string concatenation, specifically:

SELECT
  GROUP_CONCAT(DISTINCT
    CONCAT(
      'MAX(IF(c.coursenm = ''',
      c.coursenm,
      ''', s.scores, 0)) AS ''',
      c.coursenm, ''''
    )
  )
FROM courses c;

The result is:

MAX(IF(c.coursenm = '大学语文', s.scores, 0)) AS '大学语文',
MAX(IF(c.coursenm = '新视野英语', s.scores, 0)) AS '新视野英语',
MAX(IF(c.coursenm = '离散数学', s.scores, 0)) AS '离散数学',
MAX(IF(c.coursenm = '概率论与数理统计', s.scores, 0)) AS '概率论与数理统计',
MAX(IF(c.coursenm = '线性代数', s.scores, 0)) AS '线性代数',
MAX(IF(c.coursenm = '高等数学(一)', s.scores, 0)) AS '高等数学(一)',
MAX(IF(c.coursenm = '高等数学(二)', s.scores, 0)) AS '高等数学(二)'

Exactly the expressions we need for the pivot query. This way we no longer need to know how many courses there are or their names; a few lines of code give us the dynamic columns.

Now that we have the dynamic columns, how do we combine them with the query? One caveat: because a concatenation function is involved, simply substituting the subquery into the outer query, like this:

Select st.stuid, st.stunm,
(
  SELECT
    GROUP_CONCAT(DISTINCT
      CONCAT(
        'MAX(IF(c.coursenm = ''',
        c.coursenm,
        ''', s.scores, NULL)) AS ',
        c.coursenm
      )
    )
  FROM courses c
)
From Student st
Left Join score s On st.stuid = s.stuid
Left Join courses c On c.courseno = s.courseno
Group by st.stuid;

does not work: the subquery returns the generated string itself as a single column, not the pivoted data. I will not belabor the point; the reason should be clear. Since that does not work, what should we do?
Right: as with other dynamic statements, we declare a variable, build the complete statement, and then execute it, like this:

SET @sql = NULL;
SELECT
  GROUP_CONCAT(DISTINCT
    CONCAT(
      'MAX(IF(c.coursenm = ''',
      c.coursenm,
      ''', s.scores, 0)) AS ''',
      c.coursenm, ''''
    )
  ) INTO @sql
FROM courses c;

SET @sql = CONCAT('Select st.stuid, st.stunm, ', @sql,
                  ' From Student st
                    Left Join score s On st.stuid = s.stuid
                    Left Join courses c On c.courseno = s.courseno
                    Group by st.stuid');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Running these statements directly gives the same result as the fully written-out query at the beginning, so the dynamic pivot works, and we need to know neither how many courses there are nor their names.

Of course, the query inside the concatenation can take conditions. For example, to query the scores of student 1003:

SET @sql = NULL;
SET @stuid = '1003';
SELECT
  GROUP_CONCAT(DISTINCT
    CONCAT(
      'MAX(IF(c.coursenm = ''',
      c.coursenm,
      ''', s.scores, 0)) AS ''',
      c.coursenm, ''''
    )
  ) INTO @sql
FROM courses c;

SET @sql = CONCAT('Select st.stuid, st.stunm, ', @sql,
                  ' From Student st
                    Left Join score s On st.stuid = s.stuid
                    Left Join courses c On c.courseno = s.courseno
                    Where st.stuid = ''', @stuid, '''
                    Group by st.stuid');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Compared with the previous statements, the part after the Left Joins changes, and the variable @stuid is added up front. [Note: the @ prefix is the customary way of marking user-defined variables in SQL statements; I understand it as a means of distinguishing them.]

Now, the pivot query works, so why does the title also mention stored procedures? Right: stored procedures!

Statements like the above are fine when run directly in MySQL, but in a project they are hard to use. This project uses MyBatis, and as everyone knows you write your own SQL in MyBatis XML files, but statements like these obviously cannot be put into an XML file.

Most importantly, an IF condition cannot be used here, say to test whether the student id is empty before adding the WHERE condition; outside a stored program it is simply not supported:

SET @sql = NULL;
SET @stuid = '1003';
SET @courseno = 'C002';
SELECT
  GROUP_CONCAT(DISTINCT
    CONCAT(
      'MAX(IF(c.coursenm = ''',
      c.coursenm,
      ''', s.scores, 0)) AS ''',
      c.coursenm, ''''
    )
  ) INTO @sql
FROM courses c;

SET @sql = CONCAT('Select st.stuid, st.stunm, ', @sql,
                  ' From Student st
                    Left Join score s On st.stuid = s.stuid
                    Left Join courses c On c.courseno = s.courseno');

IF @stuid is not null and @stuid != 0 then
  SET @sql = CONCAT(@sql, ' Where st.stuid = ''', @stuid, '''');
END IF;

SET @sql = CONCAT(@sql, ' Group by st.stuid');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Yes: as soon as the if is added, MySQL refuses it. That's just how it is.

So we need a stored procedure. The benefits: it is convenient to call, works like a function, similar queries elsewhere need no duplicated code (just call the procedure), and we can add if conditions as we please. Wonderful.

So how is the stored procedure written? I will not go over CREATE PROCEDURE syntax; putting the query above between BEGIN and END is enough, as follows:

DELIMITER &&
drop procedure if exists SP_QueryData;
Create Procedure SP_QueryData(IN stuid varchar(16))
READS SQL DATA
BEGIN
  SET @sql = NULL;
  SET @stuid = NULL;
  SELECT
    GROUP_CONCAT(DISTINCT
      CONCAT(
        'MAX(IF(c.coursenm = ''',
        c.coursenm,
        ''', s.scores, 0)) AS ''',
        c.coursenm, '\''
      )
    ) INTO @sql
  FROM courses c;

  SET @sql = CONCAT('Select st.stuid, st.stunm, ', @sql,
                    ' From Student st
                      Left Join score s On st.stuid = s.stuid
                      Left Join courses c On c.courseno = s.courseno');

  IF stuid is not null and stuid <> '' then
    SET @stuid = stuid;
    SET @sql = CONCAT(@sql, ' Where st.stuid = \'', @stuid, '\'');
  END IF;

  SET @sql = CONCAT(@sql, ' Group by st.stuid');

  PREPARE stmt FROM @sql;
  EXECUTE stmt;
  DEALLOCATE PREPARE stmt;
END &&
DELIMITER ;

Compared with the plain SQL above, an if statement is used here to test the student id.

Note one thing: the if statement here is not like the usual Java style, i.e.

if (condition) { statements }

In SQL, if needs no braces; it reads like plain English:

IF @stuid is not null and @stuid != 0 then
  SET @sql = CONCAT(@sql, ' Where st.stuid = ''', @stuid, '''');
END IF;

Simple and clear: if the condition holds, do something, then end.

Then we can call the SP with a parameter:

CALL `SP_QueryData`('1001');

which gives the expected result. We can also pass an empty string:

CALL `SP_QueryData`('');

and still get the full pivoted result.

That is the whole implementation of a dynamic row-to-column pivot in MySQL.

Summary and problems

At first I only set out to pivot rows to columns; while writing it I realized it had to be dynamic, because I could not know in advance how many columns there would be. I searched all over the web for material and could not quite understand what I found!
Later I got it working by referring to "Pivot table with dynamic columns in MySQL". Then came a series of problems, starting with the if condition in the SQL. I wrote it the way I would in Java, which did not work; what the web suggested was this form:

IF(stuid is not null && stuid <> '') then
  SET @stuid = stuid;
  SET @sql = CONCAT(@sql, ' Where st.stuid = \'', @stuid, '\'');
END IF;

I wrote it that way and it still failed. After more digging I found that this was not the right syntax either, and corrected it. Even then it did not work, and I realized that an if cannot be used there at all outside a function or procedure, so I wrote a CREATE PROCEDURE statement.

After that rework, the procedure was created successfully, and calling it produced the result.

This process taught me a lot: row-to-column pivots in MySQL, stored procedures, and the places where SQL usage differs from what I was used to. With the pivot working, the data handling in this project is basically in good shape.

That is everything from my pivot implementation. I believe it is laid out clearly enough that, read patiently and studied carefully, it will be of real benefit for your own pivots.

PS: if the generated string is long and the concatenated SQL gets truncated, adjust the following variable:

set group_concat_max_len=4000;
Ways to convert a table from one storage engine to another

Method 1: the simplest

alter table test engine=innodb;

But if the table is large, this takes a very long time: MySQL copies the data row by row from the old table into a new one, the copy may consume all of the system's IO, and the old table holds a read lock during the copy, so be careful with this operation on busy tables.

Method 2: dump and reload
- export with mysqldump
- edit the storage engine option of the CREATE TABLE statement in the dump file, and change the table name (careful: mysqldump writes a DROP TABLE before CREATE TABLE; overlooking this can lose data)
- import the file

Method 3: create the table with the new engine first, then load it with INSERT ... SELECT

For small data volumes:

mysql> create table test_innodb like test_myisam;
mysql> alter table test_innodb engine=innodb;
mysql> insert into test_innodb select * from test_myisam;

For large volumes, work in batches to avoid large transactions producing excessive undo:

mysql> start transaction;
mysql> insert into test_innodb select * from test_myisam where id between a and b;
mysql> commit;

Method 4: use the pt-online-schema-change tool from Percona Toolkit to perform an online schema change:

# pt-online-schema-change -u root -p beijing --alter "ENGINE=MyISAM" --execute D=miles,t=actor

For detailed usage of the tool see https://www.percona.com/doc/percona-toolkit/2.2/pt-online-schema-change.html
InnoDB implements MVCC by storing two hidden columns with every row: one holds the row's creation time, the other the row's expiration time, both stored as system version numbers. Every time a new transaction starts, the system version number is incremented. Under the repeatable read isolation level, MVCC works as follows:

select: InnoDB checks each row against two conditions:
- InnoDB only returns rows whose version is earlier than the current transaction's version.
- the row's deletion version is either undefined or greater than the current transaction's version.

insert: InnoDB records the current system version number as the row version for every newly inserted row.

delete: InnoDB records the current system version number as the deletion mark for every deleted row.

update: InnoDB inserts a new row with the current system version number as its row version, and records the current system version number as the deletion mark of the original row.

Storing these two extra system version numbers lets most read operations avoid taking locks, at the cost of extra storage space, more row checks, and additional maintenance work.

MVCC operates only at the repeatable read and read committed isolation levels: read uncommitted always reads the newest row data, and serializable locks every row it reads.
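A minimal two-session sketch of these visibility rules under repeatable read (table t and its contents are hypothetical):

Session A                                 Session B
mysql> START TRANSACTION;
mysql> SELECT v FROM t WHERE id=1;       mysql> START TRANSACTION;
       -- returns 'old'                  mysql> UPDATE t SET v='new' WHERE id=1;
                                         mysql> COMMIT;
mysql> SELECT v FROM t WHERE id=1;
       -- still returns 'old': B's new row version carries a
       -- higher system version than A's transaction, and the
       -- old version's delete mark is also higher, so A keeps
       -- reading the old version without any locking
mysql> COMMIT;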
Configuring the Percona yum repository

1. Install the dependencies first:

yum install perl-DBI
yum install perl-DBD-MySQL
yum install perl-Time-HiRes
yum install perl-IO-Socket-SSL

2. Configure the yum source

Option 1: install Percona's yum repository package automatically (x86_64 and i386 respectively):

#rpm -ivh http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
#rpm -ivh http://www.percona.com/redir/downloads/percona-release/percona-release-0.0-1.i386.rpm

Option 2: configure the repository by hand

Create the file /etc/yum.repos.d/Percona.repo with the following content:

[percona]
name = CentOS $releasever - Percona
baseurl=http://repo.percona.com/centos/$releasever/os/$basearch/
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-percona
gpgcheck = 1

Fetch and install the key:

#wget http://www.percona.com/redir/downloads/percona-release/RPM-GPG-KEY-percona
#cp RPM-GPG-KEY-percona /etc/pki/rpm-gpg/RPM-GPG-KEY-percona

Installing Percona XtraBackup

Now the whole Percona tool family can be installed.

Install xtrabackup:

#yum -y install percona-xtrabackup

Install the toolkit:

#yum -y install percona-toolkit
#!/bin/bash
# When free memory drops below ~22.4GB (23500000 KB), log the memory state
# and drop the kernel page cache, dentries and inodes.
freemem=$(grep "MemFree" /proc/meminfo | awk '{print $2}')   # value is in KB
if [ "$freemem" -le 23500000 ]; then
    date >> /tmp/mem.log
    free -m >> /tmp/mem.log            # memory state before dropping caches
    sync                               # flush dirty pages to disk first
    sync
    echo 3 > /proc/sys/vm/drop_caches  # 3 = page cache + dentries + inodes
    free -m >> /tmp/mem.log            # memory state afterwards
fi
#!/bin/sh
# Monitor a Tomcat instance: probe its test page and restart Tomcat
# (clearing the work cache) if the process is gone or the page is unhealthy.
PATH=/usr/kerberos/sbin:/usr/kerberos/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
export JAVA_HOME=/usr/java/jdk1.6.0_37
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin

TomcatID=$(ps -ef | grep tomcat | grep -w 'tomcat-7.0.42' | grep -v grep | awk '{print $2}')
StartTomcat=/opt/soft/tomcat-7.0.42-linux-pro/bin/startup.sh
TomcatCache=/opt/soft/tomcat-7.0.42-linux-pro/work
WebUrl=https://localhost/
GetPageInfo=/tmp/TomcatMonitor.Info
TomcatMonitorLog=/tmp/TomcatMonitor.log

Monitor()
{
    echo "[info] Starting tomcat check... [$(date +'%F %H:%M:%S')]"
    if [ -n "$TomcatID" ]; then
        # process exists; probe the test page and capture the HTTP status code
        TomcatServiceCode=$(curl -o $GetPageInfo -s -m 10 --connect-timeout 10 -w %{http_code} -k $WebUrl)
        if [ "$TomcatServiceCode" -eq 200 ]; then
            echo "[info] Page returned $TomcatServiceCode; tomcat is up and the test page is healthy......"
        else
            echo "[error] Tomcat page problem, status code $TomcatServiceCode; response saved to $GetPageInfo"
            echo "[error] Page check failed; restarting tomcat"
            kill -9 $TomcatID
            sleep 3
            rm -rf $TomcatCache   # clear the work cache before restarting
            $StartTomcat
        fi
    else
        echo "[error] Tomcat process not found! Restarting tomcat automatically..."
        echo "[info] Running $StartTomcat, please wait......"
        rm -rf $TomcatCache
        $StartTomcat
    fi
}
Monitor >> $TomcatMonitorLog
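A watchdog like this is typically driven from cron; a sketch (the script path is hypothetical):

# run the tomcat monitor every 5 minutes
*/5 * * * * /opt/scripts/tomcat_monitor.sh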
When exporting from Oracle 11G with EXPORT, empty tables are not exported. 11G R2 has a new feature: when a table has no data, no segment is allocated, to save space.

Workarounds:

1. Insert a row and roll back; this creates a segment.
This method inserts data into the empty table and then removes it, which produces a segment; the empty table can then be exported.

2. Set the deferred_segment_creation parameter.
The default is TRUE; when changed to FALSE, segments are allocated for empty and non-empty tables alike. The SQL statement:

alter system set deferred_segment_creation=false scope=both;

Note that this setting has no effect on previously imported empty tables; they still cannot be exported. It only affects tables created afterwards. To export the earlier empty tables, only the first method works.

3. Find the empty tables with this query:

select 'alter table '||table_name||' allocate extent;' from user_tables where num_rows=0;

Export the query result, execute the generated statements to force segment allocation, and then the empty tables can be exported.
Note: setting the 11g R2 parameter before inserting any data allows empty tables to be exported.

4. From Oracle 10g onward there are the expdp and impdp tools, which can also export empty tables.

Oracle expdp/impdp usage in detail

1) Create a logical directory. This command does not create a real directory on the OS; run it as system or another administrator:

create directory db_bak as 'd:\test\dump';

2) Check the administrator directories, and verify that the path exists on the OS, because Oracle does not care whether the directory exists; if it does not, errors follow:

select * from dba_directories;

3) Grant the system user permission to operate on the directory (as system or another administrator):

grant read,write on directory db_bak to system;

4) Export data

By schema:
expdp system/manager@orcl schemas=system dumpfile=expdp.dmp DIRECTORY=db_bak

With parallel processes:
expdp system/manager@orcl directory=db_bak dumpfile=system3.dmp parallel=40 job_name=system3

By table name:
expdp system/manager@orcl TABLES=emp,dept dumpfile=expdp.dmp DIRECTORY=db_bak;

By query condition:
expdp system/manager@orcl directory=db_bak dumpfile=expdp.dmp Tables=emp query='WHERE deptno=20';

By tablespace:
expdp system/manager DIRECTORY=db_bak DUMPFILE=tablespace.dmp TABLESPACES=temp,example;

Whole database:
expdp system/mtmadmin DIRECTORY=db_bak DUMPFILE=full.dmp FULL=y

5) Restoring data

1) into a specified schema:
impdp system/manager DIRECTORY=db_bak DUMPFILE=expdp.dmp SCHEMAS=system;
2) changing the table owner:
impdp system/manager DIRECTORY=db_bak DUMPFILE=expdp.dmp TABLES=system.dept REMAP_SCHEMA=system:system;
3) importing a tablespace:
impdp system/manager DIRECTORY=db_bak DUMPFILE=tablespace.dmp TABLESPACES=example;
4) importing the database:
impdp system/mtmadmin DIRECTORY=db_bak DUMPFILE=full.dmp FULL=y;
5) appending data:
impdp system/manager DIRECTORY=db_bak DUMPFILE=expdp.dmp SCHEMAS=system TABLE_EXISTS_ACTION=append;

EXPDP/IMPDP parameters

EXPDP command-line options

1. ATTACH
Associates a client session with an existing export job. Syntax:
ATTACH=[schema_name.]job_name
schema_name is the schema name, job_name the export job name. Note: when ATTACH is used, no options other than the connect string and ATTACH may appear on the command line, e.g.:
Expdp system/manager ATTACH=system.export_job

2. CONTENT
Specifies what to export. Default ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
ALL exports object definitions and all their data; DATA_ONLY exports only object data; METADATA_ONLY exports only object definitions.
Expdp system/manager DIRECTORY=dump DUMPFILE=a.dump CONTENT=METADATA_ONLY

3. DIRECTORY
The directory for dump and log files.
DIRECTORY=directory_object
directory_object is a directory object name. Note that a directory object is created with CREATE DIRECTORY; it is not an OS directory.
Expdp system/manager DIRECTORY=dump DUMPFILE=a.dump
Create the directory: CREATE DIRECTORY dump as 'd:\dump';
List existing directories: SELECT * FROM dba_directories;

4. DUMPFILE
Name(s) of the dump file(s), default expdat.dmp.
DUMPFILE=[directory_object:]file_name [, ...]
directory_object is a directory object name, file_name the dump file name. If directory_object is omitted, the directory object given by the DIRECTORY option is used.
Expdp system/manager DIRECTORY=dump1 DUMPFILE=dump2:a.dmp

5. ESTIMATE
How to estimate the disk space occupied by the exported tables. Default BLOCKS.
ESTIMATE={BLOCKS | STATISTICS}
BLOCKS estimates by multiplying the number of data blocks of the target objects by the block size; STATISTICS estimates from the most recent statistics.
Expdp system/manager TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE=a.dump

6. ESTIMATE_ONLY
Whether to only estimate the disk space an export job would use. Default N.
ESTIMATE_ONLY={Y | N}
With Y, the job only estimates the space without performing the export; with N it estimates and also exports.
Expdp system/manager ESTIMATE_ONLY=y NOLOGFILE=y

7.
EXCLUDE
Excludes object types or specific objects from the operation.
EXCLUDE=object_type[:name_clause] [, ...]
object_type is the object type to exclude, name_clause identifies the specific objects. EXCLUDE and INCLUDE cannot be used together.
Expdp system/manager DIRECTORY=dump DUMPFILE=a.dup EXCLUDE=VIEW

8. FILESIZE
Maximum size of each dump file. Default 0 (no size limit).

9. FLASHBACK_SCN
Exports the table data as of a specific SCN.
FLASHBACK_SCN=scn_value
scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used together.
Expdp system/manager DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523

10. FLASHBACK_TIME
Exports the table data as of a specific point in time.
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
Expdp system/manager DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

FULL
Full-database export. Default N.
FULL={Y | N}
With Y, a full database export is performed.

HELP
Whether to show help for the EXPDP command-line options. Default N.
With Y, help for the export options is displayed.
Expdp help=y

INCLUDE
Object types and related objects to include in the export.
INCLUDE=object_type[:name_clause] [, ...]

JOB_NAME
Name of the export job. Default SYS_XXX.
JOB_NAME=jobname_string

LOGFILE
Name of the export log file. Default export.log.
LOGFILE=[directory_object:]file_name
directory_object is a directory object name, file_name the log file name. If directory_object is omitted, the value of the DIRECTORY option is used.
Expdp system/manager DIRECTORY=dump DUMPFILE=a.dmp logfile=a.log

NETWORK_LINK
Database link name; required when exporting remote database objects into a local instance's dump file.

NOLOGFILE
Suppresses generation of the export log file. Default N.

PARALLEL
Number of parallel processes performing the export. Default 1.

PARFILE
Name of the export parameter file.
PARFILE=[directory_path]file_name

QUERY
A WHERE condition for filtering the exported data.
QUERY=[schema.][table_name:]query_clause
schema is the schema name, table_name the table name, query_clause the restricting condition. QUERY cannot be combined with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, or TRANSPORT_TABLESPACES.
Expdp system/manager directory=dump dumpfile=a.dmp Tables=emp query='WHERE deptno=20'

SCHEMAS
Schema-mode export; defaults to the current user's schema.

STATUS
Interval for displaying the detailed status of the export job. Default 0.

TABLES
Table-mode export.
TABLES=[schema_name.]table_name[:partition_name] [, ...]
schema_name is the schema name, table_name the table to export, partition_name the partition to export.

TABLESPACES
List of tablespaces to export.

TRANSPORT_FULL_CHECK
How to check the dependencies between transported and non-transported tablespaces. Default N.
With Y, the export checks the full two-way dependencies between tablespaces: if a table's tablespace or its index's tablespace is transported while the other is not, an error is reported. With N, only one-way dependencies are checked: transporting the index's tablespace without the table's tablespace raises an error, but transporting the table's tablespace without the index's does not.

TRANSPORT_TABLESPACES
Transportable-tablespace-mode export.

VERSION
Database version of the exported objects. Default COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
COMPATIBLE generates the object metadata per the COMPATIBLE initialization parameter; LATEST per the actual database version; version_string specifies an explicit database version string.

Invoking EXPDP
When using the EXPDP tool, dump files can only be stored in the OS directories that DIRECTORY objects point to; an OS directory cannot be given directly. Therefore, before using EXPDP, a DIRECTORY object must be created, and database users must be granted privileges on that DIRECTORY object.
Original link

imp/exp: users, tablespaces, and the users tablespace

After exporting user pwgh's data with exp and importing it into another database with imp, I hit a problem: some of the exporting user pwgh's tables had been created in the users tablespace, while I wanted all the data imported into a specific tablespace of the target database, e.g. pwgh_fs_tablespace. Even though the importing user was created with pwgh_fs_tablespace as its default tablespace, some tables were still imported into the users tablespace.

Some research showed that the following steps import all the data into pwgh_fs_tablespace:

1. As a DBA user, revoke the unlimited tablespace privilege from user pwgh_fs.
-- revoke the privilege that removes all quota limits on tablespaces
revoke unlimited tablespace from pwgh_fs;

2. Remove user pwgh_fs's quota on the users tablespace.
-- set the user's usable quota on a given tablespace
alter user pwgh_fs quota 0 on users;
-- to prevent the user's quota on pwgh_fs_tablespace from running out, grant an unlimited quota there:
alter user pwgh_fs quota unlimited on PWGH_FS_TABLESPACE;

Importing again now places all the data in pwgh_fs_tablespace. In effect, the point is to make pwgh_fs_tablespace the only tablespace user pwgh_fs can use.

The exp and imp statements used:

exp pwgh/pwgh_test@FSPWGH_CS file="d:\oracle_temp\pwgh_fs_table20111103.dmp" owner=(pwgh)
imp pwgh_fs/pwgh_fs@pwgh file="d:\oracle_temp\pwgh_fs_table20111103.dmp" full=y
Original link

Hidden parameters are used internally by Oracle and begin with '_'.

All hidden parameters can be listed in either of the following two ways:

SELECT i.ksppinm name,
       i.ksppdesc description,
       CV.ksppstvl VALUE,
       CV.ksppstdf isdefault,
       DECODE (BITAND (CV.ksppstvf, 7), 1, 'MODIFIED', 4, 'SYSTEM_MOD', 'FALSE') ismodified,
       DECODE (BITAND (CV.ksppstvf, 2), 2, 'TRUE', 'FALSE') isadjusted
  FROM sys.x$ksppi i, sys.x$ksppcv CV
 WHERE i.inst_id = USERENV ('Instance')
   AND CV.inst_id = USERENV ('Instance')
   AND i.indx = CV.indx
   AND i.ksppinm LIKE '/_%' ESCAPE '/'
 ORDER BY REPLACE (i.ksppinm, '_', '');

SELECT ksppinm, ksppstvl, ksppdesc
  FROM x$ksppi x, x$ksppcv y
 WHERE x.indx = y.indx
   AND TRANSLATE (ksppinm, '_', '#') LIKE '#%';

Example: to look up the value of _db_block_hash_buckets:

SELECT ksppinm, ksppstvl, ksppdesc
  FROM x$ksppi x, x$ksppcv y
 WHERE x.indx = y.indx
   AND ksppinm = '_db_block_hash_buckets';
DBWR

DBWR writes the data block buffers to the data files. DBWR writes dirty blocks to disk in these situations:
- A server process moves a buffer onto the dirty list; when the dirty list reaches a critical length, the server process signals DBWR to write. The critical length is half the value of the hidden parameter _DB_BLOCK_WRITE_BATCH.
- A server process scans the LRU list for a free buffer; if it has scanned the number of buffers defined by _DB_BLOCK_MAX_SCAN_CNT without finding one, it stops searching and signals DBWR to write.
- DBWR times out after 3 seconds of inactivity. On each timeout, DBWR scans a given number of buffers on the LRU list and writes any dirty ones to disk; the number scanned is twice the value of _DB_BLOCK_WRITE_BATCH.
- At a checkpoint, LGWR designates a list of modified buffers to be written to disk, and DBWR performs the writes.

LGWR

LGWR writes the redo log buffer to the redo log files. LGWR writes are triggered when:
- a user process commits a transaction (a commit record is written)
- every 3 seconds, the log buffer is flushed
- the log buffer becomes 1/3 full
- DBWR writes modified buffers to disk (the log buffer is flushed first)
- the log buffer reaches 1MB

CKPT
When a checkpoint occurs, CKPT updates the headers of all data files and records the checkpoint in the control file.

SMON
SMON performs instance recovery at instance startup and cleans up temporary segments that are no longer in use. From 9i onward, transaction rollback is also handled by SMON. SMON does not perform the recovery work itself; it mainly plays an overall coordinating role.

PMON
PMON performs process recovery when a user process fails; it cleans up the memory areas and releases the resources the process was using.

RECO
RECO exists only when the distributed option is enabled; it can automatically resolve failures in distributed transactions.

MMAN
MMAN implements automatic shared memory management, automatically adjusting the sizes of the shared memory components.

ARCH
ARCH copies filled online redo log files to the designated storage device.

LCKn
LCKn is used in environments with the Parallel Server option; up to 10 of these processes handle locking between instances.

CJQ0 and Jxxx
CJQ0 is the job queue coordinator process: it finds jobs to execute in the job$ table and assigns them to job (Jxxx) processes, spawning new job processes as needed (within the JOB_QUEUE_PROCESSES limit). If the CJQ0 process is killed it restarts, so Jxxx processes may be killed as well. When job processes consume large amounts of system resources and cause database performance problems, killing them can be considered; but analyze first: if a job process is in the middle of a large modifying transaction, killing it triggers massive rollback and makes the performance problem worse.

QMNC and Qxxx
QMNC is the queue monitor coordinator process; Qxxx are the queue server processes.

PMON, SMON, DBWR, LGWR, RECO, CKPT, MMAN and PSP0 are indispensable core processes of an Oracle database; if any of them fails, the database instance crashes.