Enable point-in-time-recovery in PostgreSQL


Reposted from: https://dev.to/pythonmeister/enable-point-in-time-recovery-in-postgresql-4d19

I really love PostgreSQL.

It has really good performance, a large ecosystem and a multitude of extensions for every special case, and - what I love the most - it is rock-solid and offers an almost complete implementation of the SQL standard.

When you start using (and developing with) it, you most likely use
pg_dump or pg_dumpall for backups.

Once started, they create a single SQL file containing both DDL and DML statements to restore or copy your database or a complete cluster (PostgreSQL terminology: the sum of all databases under the control of a single PostgreSQL service, e.g. all databases on one machine).
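For reference, a minimal dump and restore round trip looks roughly like this (database name, user and file paths are placeholders, not taken from the original post):

# Dump a single database to a plain SQL file
pg_dump -U postgres -d mydb -f /backup/mydb.sql

# Dump every database in the cluster, including roles and tablespaces
pg_dumpall -U postgres -f /backup/cluster.sql

# Restore a plain-format dump with psql
psql -U postgres -d mydb -f /backup/mydb.sql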

Sooner or later you go into production and you want to reduce downtime - and thus data loss - to the absolute minimum.

With pg_dump you can only run it more often (from once a day to every hour), but sooner or later your database size will prevent one backup from finishing before the next run starts.

One escape from this is to enable point-in-time-recovery.

After a default installation, PostgreSQL runs in non-archive mode.
That means that all changes are written to the WAL (Write-Ahead Log) before they finally make it to the database files.

This ensures integrity in the event of an outage or filesystem error. When the root cause is cleared and you restart the PostgreSQL server, it reads the WAL and applies all changes or discards unfinished transactions.

So the first step to enable PITR is to enable WAL archiving. That basically means the finished WAL files are copied to an archive destination.

To get an idea, assume an archive directory structure like this, to which the postgres user has full access:

mkdir /archive/base
mkdir /archive/wal
chown postgres:postgres /archive/base /archive/wal

/archive/
|-- base
`-- wal

Enable the following settings in postgresql.conf:

wal_level = replica
archive_mode = on
archive_command = 'cp %p /archive/wal/%f'
max_wal_senders = 1
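The archive_command above is the simplest possible variant. A slightly safer sketch, following the pattern recommended in the PostgreSQL documentation, refuses to overwrite a file that has already been archived:

archive_command = 'test ! -f /archive/wal/%f && cp %p /archive/wal/%f'

Note that changing archive_mode requires a server restart, while a changed archive_command only needs a configuration reload.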

Add the following line to pg_hba.conf:

local   replication     postgres                                peer

After doing so, PostgreSQL will copy all finished WAL files to the archive directory.
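You can verify that archiving actually works; a quick sketch (the pg_stat_archiver view is available since PostgreSQL 9.4):

# archived_count should grow over time, failed_count should stay at 0
psql -U postgres -c "SELECT * FROM pg_stat_archiver;"

# and the archive directory should start to fill up with 16 MB segment files
ls -lh /archive/wal/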

Now you can take a full database backup any time by issuing commands like these:

mkdir /archive/base/$(date +%Y-%m-%d)
pg_basebackup -D /archive/base/$(date +%Y-%m-%d)
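pg_basebackup has a few options worth knowing about; a sketch (short options as in current releases, some long names changed slightly between major versions):

# Plain-format backup with progress reporting, streaming the needed WAL alongside it
pg_basebackup -D /archive/base/$(date +%Y-%m-%d) -P -X stream

# Or a compressed tar-format backup
pg_basebackup -D /archive/base/$(date +%Y-%m-%d) -F t -z -P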

Once pg_basebackup completes you have a complete backup of the PostgreSQL cluster data:

tree /archive/base/ -L 2
/archive/base/
`-- 2019-03-02
    |-- backup_label
    |-- base
    |-- global
    |-- pg_clog
    |-- pg_commit_ts
    |-- pg_dynshmem
    |-- pg_logical
    |-- pg_multixact
    |-- pg_notify
    |-- pg_replslot
    |-- pg_serial
    |-- pg_snapshots
    |-- pg_stat
    |-- pg_stat_tmp
    |-- pg_subtrans
    |-- pg_tblspc
    |-- pg_twophase
    |-- PG_VERSION
    |-- pg_xlog
    `-- postgresql.auto.conf

18 directories, 3 files

Also, you can see that there are WAL archive files, with one of them marked as a special file for restores:

tree /archive/wal/ -L 2
/archive/wal/
|-- 000000010000000000000001
|-- 000000010000000000000002
|-- 000000010000000000000003
|-- 000000010000000000000004
|-- 000000010000000000000005
|-- 000000010000000000000006
`-- 000000010000000000000006.00000028.backup

The one that ends in .backup contains information about when the backup was started and when it ended, so that a restore does not need to read and apply all WAL archive files in the archive, but only those written during and after it.
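It is a small plain-text file, so you can simply inspect it (the file name matches the listing above and will differ on your system); it lists, among other things, the start and stop WAL locations and timestamps of the base backup:

cat /archive/wal/000000010000000000000006.00000028.backup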

Now, let's create some sample data to see that we have successfully enabled PITR and are able to use it.

CREATE DATABASE SAMPLE4 TEMPLATE template0;
-- connect to the new database (unquoted identifiers are folded to lowercase)
\c sample4
CREATE TABLE INFO (ID INTEGER, TITLE VARCHAR(20));
INSERT INTO INFO (ID, TITLE) VALUES (1, '23:10');
SELECT pg_switch_wal(); -- force a WAL switch
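(Aside: the recovered table at the end of this post contains a second row, '23:11', so another insert presumably happened before the shutdown. It is not shown in the original; if you want to reproduce it, something along these lines would do:)

# Hypothetical second insert, inferred from the two rows seen after recovery
psql -d sample4 -c "INSERT INTO INFO (id, title) VALUES (1, '23:11');"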

After that you should see a new WAL segment in the archive directory:

tree /archive/wal/ -L 2
/archive/wal/
|-- 000000010000000000000001
|-- 000000010000000000000002
|-- 000000010000000000000003
|-- 000000010000000000000004
|-- 000000010000000000000005
|-- 000000010000000000000006
|-- 000000010000000000000006.00000028.backup
`-- 000000010000000000000007

P.S.: You should not use pg_switch_wal without a reason. Consult the PostgreSQL manual for information about WAL configuration and switch timings!
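A related knob: on a quiet system a partially filled WAL segment may sit unarchived for a long time. The archive_timeout setting in postgresql.conf forces a switch after a given interval; a sketch with an example value (each forced switch still produces a full 16 MB segment file, so do not set it too low):

archive_timeout = 300   # seconds; switch and archive the current WAL segment at least every 5 minutes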

Next, we're going to kill the database:

[postgres@localhost data]$ pg_ctl -D /usr/local/pgsql/data/ stop
waiting for server to shut down.... done
server stopped
[postgres@localhost data]$ 
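The clean stop alone does not destroy anything, so to actually simulate the disaster you would also wipe the data directory. The original post implies rather than shows this step; assuming the same data directory as above, and strictly on a test system:

# DANGER: irreversibly removes the live cluster - test systems only
rm -rf /usr/local/pgsql/data/*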

At this point the database cluster is gone and you will see error messages when you try to start it.

So, time to recover!

Copy the base backup back to the cluster's data directory:

cp -ar /archive/base/2019-03-02/* /usr/local/pgsql/data/

Create a file /usr/local/pgsql/data/recovery.conf with this content:

restore_command = 'cp /archive/wal/%f "%p"'
recovery_target_timeline = 'latest'

P.S.: The recovery.conf file will be renamed to recovery.done by the PostgreSQL process once the restore is finished.
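The recovery.conf above simply replays everything up to the latest archived WAL. The actual "point-in-time" part comes from the recovery target settings; a sketch with a hypothetical timestamp (committed transactions after that moment are not replayed):

# added to the recovery.conf shown above; the timestamp is just an example
recovery_target_time = '2019-03-02 23:10:30'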

Then start the server; once the recovery process is done, you can query some data.
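Starting mirrors the stop command from earlier (same data directory assumed):

pg_ctl -D /usr/local/pgsql/data/ start

Watch the server log during startup: it reports every WAL segment it restores from the archive. Once recovery has finished, connect with psql and query the table: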

\c sample4
select * from info;
 id | title
----+-------
  1 | 23:10
  1 | 23:11
(2 rows)

If you take a look at the cluster's log file you will notice that it automatically recovered the WAL archives and that the most recent data is present.

This was a very simple setup; trust me, production environments usually have more complexity (archiving WAL files to remote servers, restoring from tape, making full backups via filesystem snapshots, etc.).

To make this example complete: add a cron job entry which creates a full backup every day, twice a day or even every hour, and you should be able to recover in no time.
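A minimal sketch of such an entry for the postgres user's crontab (schedule and paths are examples; cleanup of old base backups and WAL files is deliberately left out, and note that % must be escaped in crontab):

# take a fresh base backup every night at 02:00
0 2 * * * mkdir -p /archive/base/$(date +\%Y-\%m-\%d) && pg_basebackup -D /archive/base/$(date +\%Y-\%m-\%d)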
