PostgreSQL on ECS: Deployment on Multiple Cloud Disks, Snapshot Backup, and Recovery


Tags

PostgreSQL, ECS, cloud disk, snapshot, consistent backup, point-in-time recovery, zfs, lvm, raid, parallel computing


Background

As Alibaba Cloud's cloud disks have evolved, their performance keeps getting better: IOPS can now exceed 100,000 and read/write throughput exceeds 1 GB/s. As networks continue to improve, SSD cloud disks breaking 400,000 IOPS and 4 GB/s of read/write throughput is probably not far away.

However, these IOPS and throughput figures describe concurrent I/O capacity; the latency of a single I/O still cannot match a local disk (RDMA networking may eventually close that gap).

How PostgreSQL works around the higher I/O latency of SSD cloud disks

Some workloads (small write transactions that require very fast response times) are sensitive to single-I/O latency, but PostgreSQL has several ways to mitigate this drawback.

1. Background I/O (write syscall)

Most PostgreSQL I/O is background I/O (bgwriter and backend writes), so flushing shared buffers is not sensitive to I/O latency.

2. Asynchronous commit

At commit time the WAL is flushed asynchronously. This never causes data inconsistency, but if the database crashes, recently committed transactions whose WAL is still in the WAL buffer may be lost (at most about 10 milliseconds' worth).

This is the most effective of the three techniques.
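
For example, asynchronous commit can be enabled instance-wide like this (a minimal sketch; the same synchronous_commit setting also appears in the postgresql.conf listing later in this article, and wal_writer_delay bounds the loss window mentioned above):

psql -c "alter system set synchronous_commit = off;"  
psql -c "alter system set wal_writer_delay = '10ms';"   # bound how much unflushed WAL a crash can lose  
psql -c "select pg_reload_conf();"  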

3. Group commit

Group commit addresses the WAL-write bottleneck: the WAL fsync of multiple concurrently committing transactions is merged into a single flush, reducing the number of fsync calls and raising the TPS of small write transactions under high concurrency.
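
PostgreSQL groups concurrent WAL flushes automatically; the two knobs below can optionally increase batching under very high concurrency (a hedged sketch, the values are illustrative only):

psql -c "alter system set commit_delay = 10;"       # microseconds to wait before the WAL flush so more commits can share it  
psql -c "alter system set commit_siblings = 16;"    # apply the delay only when at least this many other transactions are active  
psql -c "select pg_reload_conf();"  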

How PostgreSQL makes full use of concurrent I/O capacity

Since the advertised IOPS and throughput of cloud disks are concurrency figures, how does the database take advantage of them? For highly concurrent small transactions this is not a problem; they will certainly use the capacity. But for low-concurrency, long-running (analytical) workloads, how can the IOPS and read/write throughput of the cloud disks be exploited?

PostgreSQL's parallel query feature can make full use of the concurrent IOPS and read/write bandwidth of cloud disks (the parallel read test later in this article shows the relevant settings).

Multi-disk volume groups

A single cloud disk has caps on IOPS, capacity, and read/write bandwidth. Fortunately ECS supports multiple cloud disks; currently up to 16 cloud disks can be attached to a single ECS instance.

Striping multiple cloud disks into one volume increases read/write bandwidth. Taking Linux RHEL/CentOS 7.x as an example, the volume-management and striping options are:

1. Logical volume (LVM)

yum install lvm2  

2. Software RAID (a striping sketch follows below)

yum install -y mdadm  
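
A hedged striping sketch with mdadm (the device names match the 16-disk example used later; the 8 KB chunk mirrors the LVM stripe size below and is an assumption, not from the original article):

mdadm --create /dev/md0 --level=0 --raid-devices=16 --chunk=8 /dev/vd[b-q]  
mdadm --detail /dev/md0  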

3. ZFS (a striping sketch follows below)

http://zfsonlinux.org/

Using CentOS 7.3 as an example:  
  
wget http://download.zfsonlinux.org/epel/zfs-release.el7_3.noarch.rpm  
rpm -ivh zfs-release.el7_3.noarch.rpm   
yum install -y zfs  
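
A hedged striping sketch with ZFS (the pool and dataset names are placeholders; an 8 KB recordsize is chosen to match the 8 KB database block size discussed below):

zpool create zp1 vdb vdc vdd vde vdf vdg vdh vdi vdj vdk vdl vdm vdn vdo vdp vdq  
zfs create -o mountpoint=/data01 -o recordsize=8K zp1/pgdata  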

Logical volume (LVM) example

Assume the environment has 16 SSD cloud disks.

1. Create the PVs

pvcreate /dev/vd[b-q]  

2. Create the VG

vgcreate -A y -s 128M vgdata01 /dev/vd[b-q]  
-s, --physicalextentsize PhysicalExtentSize[bBsSkKmMgGtTpPeE]  
       Sets  the physical extent size on physical volumes of this volume group.  A size suffix   
       (k for kilobytes up to t for terabytes) is optional, megabytes is the default if no suffix is present.   
       For LVM2 format, the value  
       must be a power of 2 of at least 1 sector (where the sector size is the largest sector size of the   
       PVs currently used in the VG) or, if not a power of 2, at least 128KiB.  For the older LVM1 format, it must be a power  
       of  2  of at least 8KiB.  The default is 4 MiB.  Once this value has been set, it is difficult to   
       change it without recreating the volume group which would involve backing up and restoring data on any logical volumes.  
       However, if no extents need moving for the new value to apply, it can be altered using vgchange -s.  
  
       If the volume group metadata uses lvm1 format, extents can vary in size from 8KiB to 16GiB and   
       there is a limit of 65534 extents in each logical volume.  The default of 4 MiB leads to a maximum logical volume size  of  
       around 256GiB.  
  
       If  the  volume group metadata uses lvm2 format those restrictions do not apply, but having a   
       large number of extents will slow down the tools but have no impact on I/O performance to the logical volume.  The smallest  
       PE is 1KiB  
  
       The 2.4 kernel has a limitation of 2TiB per block device.  

3. Create the LV with striping

16 disks, with a stripe unit of 8 KB per disk.

lvcreate -A y -i 16 -I 8 -l 100%VG -n lv01 vgdata01  
-i|--stripes Stripes  
       Gives the number of stripes.  This is equal to the number of physical volumes to scatter the logical volume data.    
       When creating a RAID 4/5/6 logical volume, the extra devices which are necessary for parity are internally accounted for.    
       Specifying -i 3 would cause 3 devices for striped and RAID 0 logical volumes, 4 devices for   
       RAID 4/5, 5 devices for RAID 6 and 6 devices for RAID 10.  Alternatively, RAID 0 will stripe  across  2  
       devices,  RAID  4/5  across  3  PVs,  RAID  6  across  5  PVs and RAID 10 across 4 PVs in the volume group   
       if the -i argument is omitted.  In order to stripe across all PVs of the VG if the -i argument is omitted, set  
       raid_stripe_all_devices=1 in the allocation section of lvm.conf (5) or add  
       --config allocation/raid_stripe_all_devices=1  
       to the command.  
  
       Note the current maxima for stripes depend on the created RAID type.  For raid10, the maximum of stripes is 32,   
       for raid0, it is 64, for raid4/5, it is 63 and for raid6 it is 62.  
  
       See the --nosync option to optionally avoid initial synchronization of RaidLVs.  
  
       Two implementations of basic striping are available in the kernel.  The original device-mapper implementation   
       is the default and should normally be used.  The alternative implementation using MD, available since version 1.7  
       of the RAID device-mapper kernel target (kernel version 4.2) is provided to facilitate the   
       development of new RAID features.  It may be accessed with --type raid0[_meta], but is best avoided at present  
       because of assorted restrictions on resizing and converting such devices.  
  
-I|--stripesize StripeSize  
       Gives the number of kilobytes for the granularity of the stripes.  
       StripeSize must be 2^n (n = 2 to 9) for metadata in LVM1 format.  For metadata in LVM2 format,   
       the stripe size may be a larger power of 2 but must not exceed the physical extent size.  

4. Create the filesystem with striping

If the database block size is 32 KB: chunk size 32 KB (each disk reads/writes 32 KB before moving to the next disk), full stripe size 512 KB (32 KB * 16).  
mkfs.ext4 /dev/mapper/vgdata01-lv01 -m 0 -O extent,uninit_bg -E lazy_itable_init=1,stride=8,stripe_width=128 -b 4096 -T largefile -L lv01  
  
or  
  
If the database block size is 8 KB: chunk size 8 KB (each disk reads/writes 8 KB before moving to the next disk), full stripe size 128 KB (8 KB * 16).  
mkfs.ext4 /dev/mapper/vgdata01-lv01 -m 0 -O extent,uninit_bg -E lazy_itable_init=1,stride=2,stripe_width=32 -b 4096 -T largefile -L lv01  
-b block-size  
       Specify the size of blocks in bytes.  Valid block-size values are 1024, 2048 and 4096 bytes per block.    
       If omitted, block-size is heuristically determined by the filesystem size and the expected usage of the filesystem  
       (see the -T option).  If block-size is preceded by a negative sign ('-'), then mke2fs will use   
       heuristics to determine the appropriate block size, with the constraint that the block size will be at least  
       block-size bytes.  This is useful for certain hardware devices which require that the blocksize be a multiple of 2k.  
  
stride=stride-size  (unit: filesystem blocks)  
       Configure the filesystem for a RAID array with stride-size filesystem blocks. This is the number   
        of blocks read or written to disk before moving to the next disk, which is  sometimes  referred  to  as  the  
       chunk  size.   This mostly affects placement of filesystem metadata like bitmaps at mke2fs   
        time to avoid placing them on a single disk, which can hurt performance.    
        It may also be used by the block allocator.  
  
stripe_width=stripe-width  (unit: filesystem blocks)  
       Configure the filesystem for a RAID array with stripe-width filesystem blocks per stripe.   
        This is typically stride-size * N, where N is the number of data-bearing disks in the RAID   
        (e.g. for RAID  5  there  
       is  one  parity disk, so N will be the number of disks in the array minus 1).    
        This allows the block allocator to prevent read-modify-write of the parity in a   
        RAID stripe if possible when the data is written.  
  
lazy_itable_init[= <0 to disable, 1 to enable>]  
      If enabled and the uninit_bg feature is enabled, the inode table will not be fully   
      initialized by mke2fs.  This speeds up filesystem initialization noticeably,   
      but it requires the kernel to finish initializing the filesystem in the background when the filesystem is first mounted.    
      If the option value is omitted, it defaults to 1 to enable lazy inode table zeroing.  

5. Mount the filesystem

If you do not use cloud-disk snapshot backups, use this mount mode:  
mount -o defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback LABEL=lv01 /data01  
  
or  
  
If you use cloud-disk snapshot backups, use this mount mode:  
mount -o defaults,noatime,nodiratime,nodelalloc,barrier=1,data=ordered LABEL=lv01 /data01  
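
To make the mount persistent across reboots, a hedged sketch of a matching /etc/fstab entry (using the second, snapshot-friendly set of options):

echo "LABEL=lv01 /data01 ext4 defaults,noatime,nodiratime,nodelalloc,barrier=1,data=ordered 0 0" >> /etc/fstab  
mount -a  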

6. FIO test

yum install -y fio  
  
fio -filename=/data01/testdir -direct=1 -thread -rw=write -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_write.log 2>&1  
fio -filename=/data01/testdir -direct=1 -thread -rw=read -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_read.log 2>&1  
fio -filename=/data01/testdir -direct=1 -thread -rw=randwrite -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_randwrite.log 2>&1  
fio -filename=/data01/testdir -direct=1 -thread -rw=randread -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_randread.log 2>&1  
Sequential  
READ: io=72621MB, aggrb=1210.3MB/s, minb=1210.3MB/s, maxb=1210.3MB/s, mint=60003msec, maxt=60003msec  
WRITE: io=36845MB, aggrb=628743KB/s, minb=628743KB/s, maxb=628743KB/s, mint=60007msec, maxt=60007msec  
  
Random  
READ: io=53390MB, aggrb=911160KB/s, minb=911160KB/s, maxb=911160KB/s, mint=60002msec, maxt=60002msec  
WRITE: io=26078MB, aggrb=445032KB/s, minb=445032KB/s, maxb=445032KB/s, mint=60004msec, maxt=60004msec  

Database deployment

1. Install the software

See 《PostgreSQL on Linux 最佳部署手册》 (the PostgreSQL on Linux deployment best-practices guide).

2. Initialize the database

initdb -E SQL_ASCII -U postgres --locale=C -D $PGDATA  

3. Configure postgresql.conf

port = 1921                               
max_connections = 1000                    
superuser_reserved_connections = 13       
unix_socket_directories = '.'     
shared_buffers = 64GB                     
maintenance_work_mem = 1GB                
dynamic_shared_memory_type = posix        
vacuum_cost_delay = 0                     
bgwriter_delay = 10ms                     
bgwriter_lru_maxpages = 500               
bgwriter_lru_multiplier = 5.0             
bgwriter_flush_after = 0                  
effective_io_concurrency = 16             
max_worker_processes = 128                
max_parallel_workers_per_gather = 64      
max_parallel_workers = 128                
backend_flush_after = 0           
wal_level = minimal                       
synchronous_commit = off                  
wal_sync_method = fsync           
full_page_writes = on                     
wal_buffers = 1GB                         
wal_writer_delay = 10ms           
wal_writer_flush_after = 0                
checkpoint_timeout = 35min                
max_wal_size = 128GB  
min_wal_size = 8GB  
checkpoint_completion_target = 0.5        
checkpoint_flush_after = 0                
checkpoint_warning = 30s                  
max_wal_senders = 0               
random_page_cost = 1.2                    
parallel_tuple_cost = 0           
parallel_setup_cost = 0   
min_parallel_table_scan_size = 0   
min_parallel_index_scan_size = 0  
effective_cache_size = 200GB  
log_destination = 'csvlog'                
logging_collector = on            
log_directory = 'log'                     
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'   
log_file_mode = 0600                      
log_truncate_on_rotation = on             
log_checkpoints = on  
log_connections = on  
log_disconnections = on  
log_error_verbosity = verbose   
log_timezone = 'PRC'  
log_autovacuum_min_duration = 0   
autovacuum_max_workers = 8                
autovacuum_vacuum_cost_delay = 0          
autovacuum_vacuum_cost_limit = 0          
datestyle = 'iso, mdy'  
timezone = 'PRC'  
lc_messages = 'C'                         
lc_monetary = 'C'                         
lc_numeric = 'C'                          
lc_time = 'C'                             
default_text_search_config = 'pg_catalog.english'  
max_locks_per_transaction = 6400  
pg_ctl start  

4. Configure WAL archiving. The archive directory must not sit on the data cloud disks; put it on a different cloud disk or on OSS.

For demonstration purposes it is placed on a local cloud disk here (just make sure it is not one of the data cloud disks).

mkdir /archivedir  
chown digoal:digoal /archivedir  
vi postgresql.conf  
  
wal_level = replica  
archive_mode = on  
archive_command = 'test ! -f /archivedir/%f && cp %p /archivedir/%f'  
pg_ctl restart -m fast  

5. Stress tests

5.1. High-throughput write test

Initialization

-- create the tables  
do language plpgsql $$   
declare   
begin   
  for i in 0..65535 loop    
    execute 'create table IF NOT EXISTS test_'||i||'(id int, info text, crt_time timestamp) with (autovacuum_enabled=off, toast.autovacuum_enabled=off)';  
  end loop;  
end;  
$$;  
  
-- drop the tables  
do language plpgsql $$   
declare   
begin   
  for i in 0..65535 loop    
    execute 'drop table IF EXISTS test_'||i;  
  end loop;  
end;  
$$;  
  
-- write into the tables  
create or replace function batch_ins(int) returns void as $$  
declare  
begin  
  execute 'insert into test_'||$1||' select generate_series(1,10000), md5(random()::text), now()';  
end;  
$$ language plpgsql strict;  

High-throughput write test: the write rate approaches 2 million rows/s (each transaction inserts 10,000 rows, so roughly 173 tps * 10,000 ≈ 1.7 million rows/s).

vi test.sql  
  
\set id random(0,65535)  
select batch_ins(:id);  
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
  
transaction type: ./test.sql  
scaling factor: 1  
query mode: prepared  
number of clients: 32  
number of threads: 32  
duration: 120 s  
number of transactions actually processed: 20801  
latency average = 184.678 ms  
latency stddev = 57.522 ms  
tps = 173.218345 (including connections establishing)  
tps = 173.233151 (excluding connections establishing)  
script statistics:  
 - statement latencies in milliseconds:  
         0.003  \set id random(0,65535)  
       184.684  select batch_ins(:id);  

The bottleneck is actually WAL writes. With UNLOGGED TABLEs (replace `create table` with `create unlogged table` in the loop above), the write rate approaches 10 million rows/s (roughly 977 tps * 10,000 rows).

transaction type: ./test.sql  
scaling factor: 1  
query mode: prepared  
number of clients: 32  
number of threads: 32  
duration: 120 s  
number of transactions actually processed: 117367  
latency average = 32.731 ms  
latency stddev = 23.047 ms  
tps = 977.228590 (including connections establishing)  
tps = 977.331826 (excluding connections establishing)  
script statistics:  
 - statement latencies in milliseconds:  
         0.002  \set id random(0,65535)  
        32.727  select batch_ins(:id);  

5.2. OLTP test on a 1-billion-row data set

pgbench -i -s 10000  
pgbench -M prepared -n -r -P 1 -c 64 -j 64 -T 120  
  
transaction type: <builtin: TPC-B (sort of)>  
scaling factor: 10000  
query mode: prepared  
number of clients: 64  
number of threads: 64  
duration: 120 s  
number of transactions actually processed: 580535  
latency average = 13.229 ms  
latency stddev = 30.073 ms  
tps = 4829.300799 (including connections establishing)  
tps = 4829.989470 (excluding connections establishing)  
script statistics:  
 - statement latencies in milliseconds:  
         0.003  \set aid random(1, 100000 * :scale)  
         0.001  \set bid random(1, 1 * :scale)  
         0.001  \set tid random(1, 10 * :scale)  
         0.001  \set delta random(-5000, 5000)  
         0.097  BEGIN;  
         5.650  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;  
         0.153  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;  
         3.122  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;  
         1.631  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;  
         1.418  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);  
         1.148  END;  

5.3. Parallel read test

postgres=# \dt+ e  
                   List of relations  
 Schema | Name | Type  |  Owner   | Size  | Description   
--------+------+-------+----------+-------+-------------  
 public | e    | table | postgres | 24 GB |   
(1 row)  
  
-- 32 parallel workers: scanning the 24 GB table takes about 0.3 seconds.  
postgres=# alter table e set (parallel_workers =32);  
postgres=# set max_parallel_workers_per_gather =32;  
postgres=# set min_parallel_table_scan_size =0;  
postgres=# set min_parallel_index_scan_size =0;  
postgres=# set parallel_setup_cost =0;  
postgres=# set parallel_tuple_cost =0;
postgres=# set force_parallel_mode =on;
  
postgres=# select count(*) from e;  
  count     
----------  
 28766181  
(1 row)  
Time: 993.876 ms  
  
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*) from e;  
                                                                           QUERY PLAN                                                                              
-----------------------------------------------------------------------------------------------------------------------------------------------------------------  
 Finalize Aggregate  (cost=178479.59..178479.60 rows=1 width=8) (actual time=305.796..305.796 rows=1 loops=1)  
   Output: count(*)  
   Buffers: shared hit=27422  
   ->  Gather  (cost=178479.50..178479.51 rows=32 width=8) (actual time=304.868..305.784 rows=33 loops=1)  
         Output: (PARTIAL count(*))  
         Workers Planned: 32  
         Workers Launched: 32  
         Buffers: shared hit=27422  
         ->  Partial Aggregate  (cost=178479.50..178479.51 rows=1 width=8) (actual time=291.483..291.483 rows=1 loops=33)  
               Output: PARTIAL count(*)  
               Buffers: shared hit=23934  
               Worker 0: actual time=281.287..281.287 rows=1 loops=1  
                 Buffers: shared hit=718  
..........  
               Worker 30: actual time=299.304..299.304 rows=1 loops=1  
                 Buffers: shared hit=438  
               Worker 31: actual time=300.227..300.227 rows=1 loops=1  
                 Buffers: shared hit=460  
               ->  Parallel Index Only Scan using uk_e on public.e  (cost=0.44..176232.78 rows=898689 width=0) (actual time=0.085..178.852 rows=871702 loops=33)  
                     Heap Fetches: 0  
                     Buffers: shared hit=23934  
                     Worker 0: actual time=0.078..173.121 rows=1014806 loops=1  
                       Buffers: shared hit=718  
..........  
                     Worker 30: actual time=0.089..184.251 rows=635007 loops=1  
                       Buffers: shared hit=438  
                     Worker 31: actual time=0.070..184.000 rows=649729 loops=1  
                       Buffers: shared hit=460  
 Planning time: 0.092 ms  
 Execution time: 314.527 ms  
(144 rows)  

Backup based on cloud-disk snapshots

Backup procedure

1. Start the backup

select pg_start_backup('xx');  
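
To confirm the instance is in backup mode, a sketch (both functions exist in the PostgreSQL 10 series used in this article):

psql -c "select pg_is_in_backup(), pg_backup_start_time();"  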

2. Take cloud-disk snapshots

Invoke the snapshot API or do it from the Alibaba Cloud ECS console, as sketched below.
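
A hedged sketch using aliyun-cli and the ECS CreateSnapshot API (the disk IDs and snapshot name are placeholders; verify the exact flags against the current CLI documentation):

for disk in d-diskid0001 d-diskid0002; do  
  aliyun ecs CreateSnapshot --DiskId "$disk" --SnapshotName "pg-backup-$(date +%Y%m%d)"  
done  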

3. Run the stress tests

While the snapshots are being taken, generate some database changes.

Use the earlier examples: the high-throughput write test (with logged tables) and the OLTP test, mixed with creating and dropping tables.

4. Wait for the cloud-disk snapshots to finish

5. Stop the stress tests

6. Stop the backup

select pg_stop_backup();  

7. Switch the WAL

checkpoint;  
select pg_switch_wal();  
checkpoint;  
select pg_switch_wal();  

8. Verify that archiving is healthy

Make sure every WAL file produced during the snapshot window has been archived successfully, as sketched below.
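
One way to check, a sketch using the pg_stat_archiver view and the archive directory:

psql -c "select archived_count, last_archived_wal, last_archived_time, failed_count from pg_stat_archiver;"  
ls -lrt /archivedir | tail  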

9. Compute a CHECKSUM for every test table

do language plpgsql $$  
declare  
  n name;  
  res int8;  
begin  
  for n in select tablename from pg_tables where schemaname='public'    
  loop  
    execute 'select sum(hashtext(t.*::text)) from '||n||' t' into res;  
    raise notice 'checksum %: %', n, res;  
  end loop;  
end;  
$$;  
  
NOTICE:  checksum pgbench_history: -422046586146  
NOTICE:  checksum pgbench_tellers: 215006661683  
NOTICE:  checksum pgbench_branches: 158568443210  
......  

Recovery based on cloud-disk snapshots

PostgreSQL itself already supports incremental and block-level incremental backups, but since cloud-disk snapshots are available, you can back up only the WAL archives plus the disk snapshots, so the data crosses the network one less time.

Backups based on PostgreSQL's own mechanisms are briefly covered in the references at the end of this article.

Recovery procedure

1. Create a new ECS instance (it can be created from an image)

2. If the instance was created from the database ECS image, this step is unnecessary.

Install the PostgreSQL software; it must match the primary's software version, extensions, and so on.

3. Create cloud disks from the snapshots

4. Restore the logical volume

pvscan  
  
vgscan  
  
lvscan  
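
If the volume group is found but its logical volume is not active yet, a sketch:

vgchange -a y vgdata01  
lvs -o lv_name,vg_name,lv_size,lv_attr  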

5. Check the filesystem

fsck.ext4 -y /dev/mapper/vgdata01-lv01  

6. Mount the filesystem

mount -o defaults,noatime,nodiratime,nodelalloc,barrier=1,data=ordered LABEL=lv01 /data01  

7. Clear out the WAL files. (Because the snapshots span multiple disks, they cannot all be taken at exactly the same instant, so the WAL files may contain partial writes. WAL is the key to repairing the data files and must be intact, so we take WAL from the archive instead.)

rm -f $PGDATA/pg_wal/*  

Note: if only a single disk is involved, it is enough to delete just the last WAL file.

8. Configure recovery

cp $PGHOME/share/recovery.conf.sample $PGDATA/recovery.conf  
vi $PGDATA/recovery.conf     # (auto copy archive to $PGDATA/pg_wal)  
  
restore_command = 'cp /archivedir/%f %p'           # e.g. 'cp /mnt/server/archivedir/%f %p'  
standby_mode = on  
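
If recovery should stop at a specific point in time rather than replay every archived WAL file, recovery targets can be added (a hedged sketch: targets take effect in archive recovery, i.e. with standby_mode = off, and the timestamp is a placeholder):

cat >> $PGDATA/recovery.conf <<'EOF'  
standby_mode = off  
recovery_target_time = '2017-07-01 12:00:00+08'  
recovery_target_action = 'promote'  
EOF  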

9. Start the database

pg_ctl start  

10. Wait for recovery to complete; it is done once the last archived WAL file has been replayed. Replay progress can be watched as sketched below.
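
A sketch using the built-in recovery functions:

psql -c "select pg_is_in_recovery(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();"  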

11. Promote the database

pg_ctl promote  

12. Verify the CHECKSUMs

do language plpgsql $$  
declare  
  n name;  
  res int8;  
begin  
  for n in select tablename from pg_tables where schemaname='public'    
  loop  
    execute 'select sum(hashtext(t.*::text)) from '||n||' t' into res;  
    raise notice 'checksum %: %', n, res;  
  end loop;  
end;  
$$;  

13. Check that every data block is readable (the vacuum freeze below rewrites all rows, forcing every block to be read).

set vacuum_freeze_min_age =0;  
vacuum freeze;  

Summary

1. With WAL and full-page writes, PostgreSQL can recover from inconsistent data files: partial writes captured in the data files at backup time are repaired by the full-page images written to the WAL after the checkpoint, so the restored cluster ends up consistent.

2. Snapshotting multiple cloud disks in parallel speeds up backups.

3. Multiple cloud disks raise the overall read/write IOPS and bandwidth.

4. Through group commit, asynchronous commit, and asynchronous writes of data files, PostgreSQL hides the slightly higher single-I/O latency of cloud disks compared with local disks; in other words, it is largely insensitive to I/O latency.

5. With multi-core parallel query, a single SQL statement can fully exploit the high IOPS and read/write bandwidth offered by multiple cloud disks.

6. Logical volumes, software RAID, ZFS, and similar tools make it possible to use the full capacity of multiple cloud disks.

References

1. 《PostgreSQL on Linux 最佳部署手册》

2. ZFS

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)验证 - recovery test script for zfs snapshot clone + postgresql stream replication + archive》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)双机HA与块级备份部署》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)单个数据库采用多个zfs卷(如表空间)时如何一致性备份》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)备份集自动校验》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)方案与实战》

3. man lvm

4. man mdadm

5. Backup principles

《PostgreSQL 最佳实践 - 块级别增量备份(pg_rman baseon LSN)源码浅析与使用》

6. Point-in-time recovery principles and best practices

《PostgreSQL 最佳实践 - 任意时间点恢复源码分析》

《PostgreSQL 最佳实践 - 在线增量备份与任意时间点恢复》

7. https://github.com/pgbackrest/pgbackrest
