Enable point-in-time-recovery in PostgreSQL


Reprinted from: https://dev.to/pythonmeister/enable-point-in-time-recovery-in-postgresql-4d19

I really love PostgreSQL.

It has really good performance, a large ecosystem, and a multitude of extensions for every special case, and - what I love most - it is rock-solid and offers an almost complete implementation of the SQL standard.

When you start using (and developing with) it, you most likely use
pg_dump or pg_dumpall for backups.

Once started, they create a single SQL file containing both DDL and DML statements to restore or copy your database or a complete cluster (PostgreSQL terminology: the sum of all databases under the control of a single PostgreSQL service, e.g. all databases on one machine).

Sooner or later you go into production and you want to reduce downtime - and thus data loss - to the absolute minimum possible.

With pg_dump you can only run it more often (from once a day to every hour), but sooner or later your database size will prevent one backup from finishing before the next run starts.

One escape from this is to enable point-in-time-recovery.

After a default installation, PostgreSQL runs in non-archive mode.
That means all changes are written to the WAL, the Write-Ahead Log, before they finally make it to the database files.

This ensures integrity in the event of an outage or filesystem error. When the root cause is cleared and you restart the PostgreSQL server, it reads the WAL and applies all changes or discards unfinished transactions.

So the first step to enable PITR is to enable WAL archiving. That basically means the finished WAL files are copied to an archive destination.

To get an idea, assume an archive directory structure like this, to which the postgres user has full access:

mkdir /archive/base
mkdir /archive/wal
chown postgres:postgres /archive/base /archive/wal
/archive/
|-- base
|-- wal

Enable the following settings in postgresql.conf:

wal_level = replica
archive_mode = on
archive_command = 'cp %p /archive/wal/%f'
max_wal_senders = 1
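A note on the archive_command above: the PostgreSQL documentation recommends a command that refuses to overwrite an existing archive file, so that a retried or misconfigured archiver can never clobber a good copy. A safer variant (same archive path as above) might look like this:

```
archive_command = 'test ! -f /archive/wal/%f && cp %p /archive/wal/%f'
```

If the target file already exists, `test ! -f` fails, the command returns non-zero, and PostgreSQL keeps the WAL segment and retries later instead of silently overwriting it.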

Add the following line to pg_hba.conf:

local   replication     postgres                                peer

After doing so, PostgreSQL will copy all finished WAL files to the archive directory.
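To verify that archiving actually works, PostgreSQL (9.4 and later) exposes the pg_stat_archiver view; a quick sanity check from psql might be:

```sql
-- archived_count should grow over time; failed_count should stay at 0
SELECT archived_count, last_archived_wal, failed_count
FROM pg_stat_archiver;
```

A growing failed_count usually points at permissions or a typo in archive_command.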

Now you can take a full database backup any time by issuing commands like these:

mkdir /archive/base/$(date +%Y-%m-%d)
pg_basebackup -D /archive/base/$(date +%Y-%m-%d)

Once pg_basebackup completes you have a complete backup of the PostgreSQL cluster data:

tree /archive/base/ -L 2
/archive/base/
`-- 2019-03-02
    |-- backup_label
    |-- base
    |-- global
    |-- pg_clog
    |-- pg_commit_ts
    |-- pg_dynshmem
    |-- pg_logical
    |-- pg_multixact
    |-- pg_notify
    |-- pg_replslot
    |-- pg_serial
    |-- pg_snapshots
    |-- pg_stat
    |-- pg_stat_tmp
    |-- pg_subtrans
    |-- pg_tblspc
    |-- pg_twophase
    |-- PG_VERSION
    |-- pg_xlog
    `-- postgresql.auto.conf

18 directories, 3 files

Also, you can see that there are WAL archive files, one of them marked as a special file for restores:

tree /archive/wal/ -L 2
/archive/wal/
|-- 000000010000000000000001
|-- 000000010000000000000002
|-- 000000010000000000000003
|-- 000000010000000000000004
|-- 000000010000000000000005
|-- 000000010000000000000006
`-- 000000010000000000000006.00000028.backup

The one ending in .backup contains information about when the backup started and ended, so a restore does not need to read and apply every WAL archive file in the archive, only those written during and after the backup.
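Those 24-hex-digit file names are not opaque: the first 8 digits are the timeline ID, the next 8 the log file number, and the last 8 the segment within that log file. A small shell sketch to pick one apart (file name hard-coded for illustration):

```shell
# Split a WAL segment file name into its three 8-hex-digit parts:
# timeline ID, log file number, and segment number within that log file.
fname=000000010000000000000006
timeline=$(echo "$fname" | cut -c1-8)
log=$(echo "$fname" | cut -c9-16)
seg=$(echo "$fname" | cut -c17-24)
echo "timeline=$timeline log=$log seg=$seg"
# prints: timeline=00000001 log=00000000 seg=00000006
```

The timeline part is why the restore later uses recovery_target_timeline: each recovery that finishes starts a new timeline.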

Now, let's create some random data to verify that we have successfully enabled PITR and are able to use it.

CREATE DATABASE sample4 TEMPLATE template0;
\c sample4
CREATE TABLE info (id INTEGER, title VARCHAR(20));
INSERT INTO info (id, title) VALUES (1, '23:10');
SELECT pg_switch_wal(); -- force a WAL switch

After that you should see a new WAL in the archive directory:

tree /archive/wal/ -L 2
/archive/wal/
|-- 000000010000000000000001
|-- 000000010000000000000002
|-- 000000010000000000000003
|-- 000000010000000000000004
|-- 000000010000000000000005
|-- 000000010000000000000006
|-- 000000010000000000000006.00000028.backup
`-- 000000010000000000000007

P.S.: You should not use pg_switch_wal without a reason. Consult the PostgreSQL manual for information about WAL configuration and switch timing!
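One relevant knob here: instead of forcing switches by hand, archive_timeout in postgresql.conf bounds how long a partially filled WAL segment may wait before being switched and archived - at the cost of archiving mostly empty 16 MB segments if set too low. A sketch:

```
# postgresql.conf: force a WAL switch at most every 5 minutes (of activity)
archive_timeout = 300
```

This caps the window of changes that can be lost between the last archived segment and a crash.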

Next, we're going to destroy the database: stop the server, then wipe the data directory to simulate a disaster:

[postgres@localhost data]$ pg_ctl -D /usr/local/pgsql/data/ stop
waiting for server to shut down.... done
server stopped
[postgres@localhost data]$ rm -rf /usr/local/pgsql/data/*

At this point the database cluster is gone, and you will see error messages when you try to start it.

So, time to recover!

Copy the base backup back to the cluster's data directory:

cp -ar /archive/base/2019-03-02/* /usr/local/pgsql/data/

Create a file /usr/local/pgsql/data/recovery.conf (it must live in the data directory) with this content:

restore_command = 'cp /archive/wal/%f "%p"'
recovery_target_timeline = 'latest'

P.S.: The recovery.conf file will be renamed to recovery.done by the PostgreSQL process once the restore is finished.
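The recovery.conf above replays to the end of the archive; the "point-in-time" part of PITR comes from adding a recovery target. A sketch with a hypothetical timestamp (it must lie after the base backup's end time; recovery_target_action needs PostgreSQL 9.5+):

```
restore_command = 'cp /archive/wal/%f "%p"'
recovery_target_time = '2019-03-02 23:10:30'
recovery_target_action = 'promote'
```

WAL replay stops at the first commit after that timestamp and the server is promoted, which is how you roll back to just before an accidental DELETE or DROP.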

Then start the server and, once the recovery process is done, query some data (a second row, '23:11', had been inserted before the shutdown):

\c sample4
SELECT * FROM info;
 id | title
----+-------
  1 | 23:10
  1 | 23:11
(2 rows)

If you take a look at the cluster's log file, you will notice that it automatically recovered the WAL archives and that the most recent data is present.

This was a very simple setup; trust me, production environments usually have more complexity (archiving WAL files to remote servers, restoring from tape, making full backups via filesystem snapshots, etc.).

To make this example complete: add a cron job that creates a full backup every day, twice a day, or even every hour, and you should be able to recover in no time.
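For instance, a crontab entry for the postgres user along these lines (note that % is special in crontab and must be escaped as \%):

```
# m h dom mon dow  command
0 2 * * * mkdir -p /archive/base/$(date +\%Y-\%m-\%d) && pg_basebackup -D /archive/base/$(date +\%Y-\%m-\%d)
```

This takes a dated full base backup every night at 02:00; combined with continuous WAL archiving, any point in time since the oldest retained backup is recoverable.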
