PostgreSQL Streaming Replication


This article is based mainly on the following URL:

http://www.rassoc.com/gregr/weblog/2013/02/16/zero-to-postgresql-streaming-replication-in-10-mins/

Prepare two machines:

master: 10.10.10.2

slave:    10.10.10.1

 

First, on the master, create a user named replicator:

psql -c "CREATE USER replicator REPLICATION LOGIN ENCRYPTED PASSWORD 'thepassword';"

Configure postgresql.conf on the master like this:

listen_addresses =         # make sure we're listening as appropriate
wal_level = hot_standby
max_wal_senders = 3
checkpoint_segments = 8    
wal_keep_segments = 8 

 

Then edit pg_hba.conf on the master and add a line that allows the replicator user to connect from the slave:

host  replication     replicator      10.10.10.1/32              md5

Then start PostgreSQL on the master.
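For example, assuming the master uses the same install layout shown later for the slave (an assumption on my part), and remembering that wal_level and max_wal_senders only take effect after a restart if the server was already running:

cd /usr/local/pgsql
./bin/pg_ctl -D ./data start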

 

Next, on the slave:

With PostgreSQL stopped on the slave, delete the data directory as the postgres user:

 rm -rf /usr/local/pgsql/data
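If the slave instance happens to still be running at this point, stop it before deleting the directory (path assumed to match the layout above):

/usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data stop -m fast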

 

Then, on the slave, run pg_basebackup:

pg_basebackup -h 10.10.10.2 -D /usr/local/pgsql/data -U replicator -v -P
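The -v and -P options only add verbose output and a progress report. As a side note, on 9.2 pg_basebackup can also carry the WAL needed to start the standby on its own, for example with the streaming method; this variant is my addition, not part of the original post:

pg_basebackup -h 10.10.10.2 -D /usr/local/pgsql/data -U replicator -v -P -X stream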

 

After pg_basebackup finishes, the slave has a /usr/local/pgsql/data directory copied over from the master.

Edit the postgresql.conf in that directory and set hot_standby to on (standby_mode is a recovery.conf setting, not a postgresql.conf one; it is configured in the next step).
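That is, in the slave's postgresql.conf:

hot_standby = on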

 

Next, on the slave, create a /usr/local/pgsql/data/recovery.conf file with the following contents:

  standby_mode = 'on'
  primary_conninfo = 'host=10.10.10.2 port=5432 user=replicator password=thepassword sslmode=require'
  trigger_file = '/tmp/postgresql.trigger'
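The trigger_file setting means that creating that file later will end recovery and promote the slave to a standalone master, for example:

touch /tmp/postgresql.trigger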

 

Then start PostgreSQL on the slave:

[postgres@pg200 pgsql]$ ./bin/pg_ctl -D ./data start
pg_ctl: another server might be running; trying to start server anyway
server starting
[postgres@pg200 pgsql]$ LOG:  database system was interrupted while in recovery at log time 2013-09-27 17:28:27 CST
HINT:  If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
LOG:  entering standby mode
LOG:  consistent recovery state reached at 0/5012F78
LOG:  redo starts at 0/5012EE0
LOG:  record with zero length at 0/5012F78
LOG:  database system is ready to accept read only connections
LOG:  streaming replication successfully connected to primary

 

The log shows that PostgreSQL streaming replication is up and running.
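On the master side, the walsender connection can also be confirmed from the pg_stat_replication view (not shown in the original post):

postgres=# select pid, client_addr, state, sent_location from pg_stat_replication;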

 

Now for a simple verification.

On the master, insert a new row:

[postgres@pg200 ~]$ cd /usr/local/pgsql/
[postgres@pg200 pgsql]$ ./bin/psql
psql (9.2.4)
Type "help" for help.

postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

postgres=# \d
        List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------
 public | test | table | postgres
(1 row)

postgres=# select * from test;
 id 
----
  1
  2
  3
(3 rows)

postgres=# insert into test values(4);
INSERT 0 1
postgres=# 

 

On the slave, the new row is visible:

[postgres@pg200 ~]$ cd /usr/local/pgsql/bin
[postgres@pg200 bin]$ ./psql
psql (9.2.4)
Type "help" for help.

postgres=# select * from test;
 id 
----
  1
  2
  3
  4
(4 rows)

postgres=# 
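You can also check that the slave is really running as a read-only standby, for example:

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)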

 

The official documentation describes pg_basebackup as follows:

http://www.postgresql.org/docs/current/static/app-pgbasebackup.html

pg_basebackup is used to take base backups of a running PostgreSQL database cluster. These are taken without affecting other clients to the database, and can be used both for point-in-time recovery (see Section 24.3) and as the starting point for a log shipping or streaming replication standby server (see Section 25.2).

pg_basebackup makes a binary copy of the database cluster files, while making sure the system is put in and out of backup mode automatically. Backups are always taken of the entire database cluster; it is not possible to back up individual databases or database objects. For individual database backups, a tool such as pg_dump must be used.

The backup is made over a regular PostgreSQL connection, and uses the replication protocol. The connection must be made with a superuser or a user having REPLICATION permissions (see Section 20.2), and pg_hba.conf must explicitly permit the replication connection. The server must also be configured with max_wal_senders set high enough to leave at least one session available for the backup.

 

One issue deserves attention, though: the streaming replication setup above does not use WAL archiving (archive mode), and archiving is indeed not strictly required for it to work.

But if the master is very busy, for example with a load like this:

 

create table test01(id integer, val char(1024)); 
insert into test01 values(generate_series(1,1228800),repeat( chr(int4(random()*26)+65),1024));
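While such a load is running, the WAL write position on the master advances quickly; you can watch it with the 9.2-era function pg_current_xlog_location:

postgres=# select pg_current_xlog_location();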

 

The master then generates online WAL segments rapidly, and older segments are removed as newer ones are written, sometimes before the standby has received them.

At that point the slave reports the following error:

[postgres@pg200 ~]$ cd /usr/local/pgsql
[postgres@pg200 pgsql]$ ./bin/pg_ctl -D ./data start
server starting
[postgres@pg200 pgsql]$ LOG:  database system was shut down in recovery at 2013-09-30 14:51:27 CST
LOG:  entering standby mode
LOG:  consistent recovery state reached at 0/5013A48
LOG:  redo starts at 0/50139B0
LOG:  record with zero length at 0/5013A48
LOG:  database system is ready to accept read only connections
LOG:  streaming replication successfully connected to primary
FATAL:  could not receive data from WAL stream: FATAL:  requested WAL segment 000000010000000000000011 has already been removed

LOG:  invalid magic number 0000 in log file 0, segment 17, offset 14467072
LOG:  streaming replication successfully connected to primary
FATAL:  could not receive data from WAL stream: FATAL:  requested WAL segment 000000010000000000000011 has already been removed

In that sense, using archived WAL (archive mode) really is necessary.
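A minimal sketch of what that could look like, assuming an archive directory the slave can read (here /usr/local/pgsql/archive, an example path of my own, e.g. on shared storage). On the master, in postgresql.conf:

archive_mode = on
archive_command = 'cp %p /usr/local/pgsql/archive/%f'

and on the slave, in recovery.conf:

restore_command = 'cp /usr/local/pgsql/archive/%f %p'

Raising wal_keep_segments on the master is a simpler alternative when disk space allows, but it only delays the problem rather than removing it.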







This article was reposted from the 健哥的数据花园 blog on cnblogs; original link: http://www.cnblogs.com/gaojian/p/3347203.html. Please contact the original author before reprinting.
