PostgreSQL on ECS: Multi-Cloud-Disk Deployment, Snapshot Backup, and Recovery


Tags

PostgreSQL, ECS, cloud disk, snapshot, consistent backup, point-in-time recovery, zfs, lvm, raid, parallel computing


Background

Alibaba Cloud disks keep getting better: IOPS already exceeds 100,000 and read/write throughput exceeds 1 GB/s. As networks improve, SSD cloud disks reaching 400,000 IOPS and 4 GB/s of throughput is probably not far off.

Note, however, that those IOPS and throughput figures describe concurrent IO capacity; single-IO latency still cannot match local disks (RDMA networking may eventually close that gap).

How PostgreSQL works around the higher IO latency of SSD cloud disks

Workloads built on small write transactions that demand fast response times are sensitive to single-IO latency, but PostgreSQL has several ways to mitigate this weakness.

1. Background IO (write syscall)

Most PostgreSQL IO is background IO (bgwriter, backend writes), so flushing shared buffers is not sensitive to IO latency.

2. Asynchronous commit

With asynchronous commit, the WAL of a committing transaction is written asynchronously. This cannot cause data inconsistency, but if the database crashes, recently committed transactions whose WAL is still in the WAL buffer may be lost (at most roughly 10 ms worth, given the wal_writer_delay = 10ms setting used below).

This is the most effective of the three methods.
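As a sketch (these are stock PostgreSQL GUCs; the cluster-wide value matches the postgresql.conf shown later in this article), asynchronous commit can be enabled globally or only for latency-sensitive sessions:

psql -U postgres -c "alter system set synchronous_commit = off;"   # cluster-wide
psql -U postgres -c "select pg_reload_conf();"

# or per session, leaving everything else synchronous:
# postgres=# set synchronous_commit = off;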

3. Group commit

Group commit addresses the WAL write bottleneck: the WAL fsyncs of multiple concurrently committing transactions are merged into a single flush, reducing the number of fsyncs and raising the TPS of small write transactions under high concurrency.
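PostgreSQL groups commits automatically; the optional commit_delay/commit_siblings GUCs can widen the batching window at the cost of a little added commit latency. A hedged sketch (the values are illustrative, not from the original article):

psql -U postgres -c "alter system set commit_delay = 10;"     # wait up to 10 microseconds for co-committing transactions
psql -U postgres -c "alter system set commit_siblings = 5;"   # only wait when at least 5 other transactions are active
psql -U postgres -c "select pg_reload_conf();"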

PostgreSQL 如何充分利用并发IO的能力

由于我们看到的云盘IOPS和读写吞吐指标是并发指标,数据库如何利用好这么好的指标呢?对高并发小事务不是问题,肯定是能将它用起来的,但是对于低并发,长事务(分析型业务),如何利用云盘的IOPS能力和读写吞吐能力呢?

PostgreSQL的并行计算特性可以充分利用云盘的并发IOPS和读写带宽。

Multi-disk volume groups

A single cloud disk has upper limits on IOPS, capacity, and read/write bandwidth. Fortunately, ECS supports attaching multiple cloud disks, currently up to 16 per instance.

Striping a volume across multiple disks raises read/write bandwidth. Using Linux RHEL/CentOS 7.x as an example, the volume-management and striping options are:

1. Logical volumes (LVM)

yum install lvm2  

2. Software RAID

yum install -y mdadm  
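For completeness, a hedged mdadm RAID 0 sketch over the same 16 disks (the article itself continues with LVM below; the 8KB chunk mirrors the LVM stripe size chosen there):

mdadm --create /dev/md0 --level=0 --raid-devices=16 --chunk=8 /dev/vd[b-q]   # 8KB chunk per disk
mdadm --detail --scan >> /etc/mdadm.conf                                     # persist the array definition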

3. ZFS

http://zfsonlinux.org/

Using CentOS 7.3 as an example:
  
wget http://download.zfsonlinux.org/epel/zfs-release.el7_3.noarch.rpm  
rpm -ivh zfs-release.el7_3.noarch.rpm   
yum install -y zfs  
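A minimal ZFS sketch under the same assumptions (the pool and dataset names are hypothetical; recordsize is matched to an 8KB database block):

zpool create zp01 /dev/vd[b-q]   # dynamic striping across all 16 disks
zfs create -o mountpoint=/data01 -o recordsize=8K -o atime=off -o compression=lz4 zp01/pgdata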

LVM example

Assume 16 SSD cloud disks, /dev/vdb through /dev/vdq.

1. Create the PVs

pvcreate /dev/vd[b-q]  

2. Create the VG

vgcreate -A y -s 128M vgdata01 /dev/vd[b-q]  
-s, --physicalextentsize PhysicalExtentSize[bBsSkKmMgGtTpPeE]  
       Sets  the physical extent size on physical volumes of this volume group.  A size suffix   
       (k for kilobytes up to t for terabytes) is optional, megabytes is the default if no suffix is present.   
       For LVM2 format, the value  
       must be a power of 2 of at least 1 sector (where the sector size is the largest sector size of the   
       PVs currently used in the VG) or, if not a power of 2, at least 128KiB.  For the older LVM1 format, it must be a power  
       of  2  of at least 8KiB.  The default is 4 MiB.  Once this value has been set, it is difficult to   
       change it without recreating the volume group which would involve backing up and restoring data on any logical volumes.  
       However, if no extents need moving for the new value to apply, it can be altered using vgchange -s.  
  
       If the volume group metadata uses lvm1 format, extents can vary in size from 8KiB to 16GiB and   
       there is a limit of 65534 extents in each logical volume.  The default of 4 MiB leads to a maximum logical volume size  of  
       around 256GiB.  
  
       If  the  volume group metadata uses lvm2 format those restrictions do not apply, but having a   
       large number of extents will slow down the tools but have no impact on I/O performance to the logical volume.  The smallest  
       PE is 1KiB  
  
       The 2.4 kernel has a limitation of 2TiB per block device.  

3. Create the LV with striping

16 disks, with a stripe unit of 8KB:

lvcreate -A y -i 16 -I 8 -l 100%VG -n lv01 vgdata01  
-i|--stripes Stripes  
       Gives the number of stripes.  This is equal to the number of physical volumes to scatter the logical volume data.    
       When creating a RAID 4/5/6 logical volume, the extra devices which are necessary for parity are internally accounted for.    
       Specifying -i 3 would cause 3 devices for striped and RAID 0 logical volumes, 4 devices for   
       RAID 4/5, 5 devices for RAID 6 and 6 devices for RAID 10.  Alternatively, RAID 0 will stripe  across  2  
       devices,  RAID  4/5  across  3  PVs,  RAID  6  across  5  PVs and RAID 10 across 4 PVs in the volume group   
       if the -i argument is omitted.  In order to stripe across all PVs of the VG if the -i argument is omitted, set  
       raid_stripe_all_devices=1 in the allocation section of lvm.conf (5) or add  
       --config allocation/raid_stripe_all_devices=1  
       to the command.  
  
       Note the current maxima for stripes depend on the created RAID type.  For raid10, the maximum of stripes is 32,   
       for raid0, it is 64, for raid4/5, it is 63 and for raid6 it is 62.  
  
       See the --nosync option to optionally avoid initial synchronization of RaidLVs.  
  
       Two implementations of basic striping are available in the kernel.  The original device-mapper implementation  
       is the default and should normally be used.  The alternative implementation using MD, available since version  
       1.7 of the RAID device-mapper kernel target (kernel version 4.2), is provided to facilitate the development  
       of new RAID features.  It may be accessed with --type raid0[_meta], but is best avoided at present  
       because of assorted restrictions on resizing and converting such devices.  
  
-I|--stripesize StripeSize  
       Gives the number of kilobytes for the granularity of the stripes.  
       StripeSize must be 2^n (n = 2 to 9) for metadata in LVM1 format.  For metadata in LVM2 format,   
       the stripe size may be a larger power of 2 but must not exceed the physical extent size.  
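To sanity-check the resulting layout (a quick verification, not part of the original article; field names as reported by lvs -o help):

lvs --segments -o lv_name,seg_size,stripes,stripe_size vgdata01   # expect 16 stripes with an 8.00k stripe size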

4. Create the filesystem with matching stripe parameters

With a 32KB database block, the chunk size is 32KB (each disk reads/writes 32KB before moving to the next), so stride = 32KB / 4KB filesystem block = 8, and the full stripe is 512KB (32KB × 16 disks), i.e. stripe_width = 128 blocks.  
mkfs.ext4 /dev/mapper/vgdata01-lv01 -m 0 -O extent,uninit_bg -E lazy_itable_init=1,stride=8,stripe_width=128 -b 4096 -T largefile -L lv01  
  
or  
  
With an 8KB database block, the chunk size is 8KB, so stride = 2 and the full stripe is 128KB (8KB × 16 disks), i.e. stripe_width = 32 blocks.  
mkfs.ext4 /dev/mapper/vgdata01-lv01 -m 0 -O extent,uninit_bg -E lazy_itable_init=1,stride=2,stripe_width=32 -b 4096 -T largefile -L lv01  
-b block-size  
       Specify the size of blocks in bytes.  Valid block-size values are 1024, 2048 and 4096 bytes per block.  
       If omitted, block-size is heuristically determined by the filesystem size and the expected usage of the  
       filesystem (see the -T option).  If block-size is preceded by a negative sign ('-'), then mke2fs will use  
       heuristics to determine the appropriate block size, with the constraint that the block size will be at least  
       block-size bytes.  This is useful for certain hardware devices which require that the blocksize be a multiple of 2k.  
  
stride=stride-size  (unit: filesystem blocks)  
       Configure the filesystem for a RAID array with stride-size filesystem blocks.  This is the number  
       of blocks read or written to disk before moving to the next disk, which is sometimes referred to as the  
       chunk size.  This mostly affects placement of filesystem metadata like bitmaps at mke2fs  
       time to avoid placing them on a single disk, which can hurt performance.  
       It may also be used by the block allocator.  
  
stripe_width=stripe-width  (unit: filesystem blocks)  
       Configure the filesystem for a RAID array with stripe-width filesystem blocks per stripe.  
       This is typically stride-size * N, where N is the number of data-bearing disks in the RAID  
       (e.g. for RAID 5 there is one parity disk, so N will be the number of disks in the array minus 1).  
       This allows the block allocator to prevent read-modify-write of the parity in a  
       RAID stripe if possible when the data is written.  
  
lazy_itable_init[= <0 to disable, 1 to enable>]  
      If enabled and the uninit_bg feature is enabled, the inode table will not be fully  
      initialized by mke2fs.  This speeds up filesystem initialization noticeably, but it requires  
      the kernel to finish initializing the filesystem in the background when the filesystem is first mounted.  
      If the option value is omitted, it defaults to 1 to enable lazy inode table zeroing.  

5. Mount the filesystem

If you will not rely on cloud-disk snapshot backups, this mount mode trades crash safety for speed:  
mount -o defaults,noatime,nodiratime,nodelalloc,barrier=0,data=writeback LABEL=lv01 /data01  
  
or  
  
If you will use cloud-disk snapshot backups, use this mount mode:  
mount -o defaults,noatime,nodiratime,nodelalloc,barrier=1,data=ordered LABEL=lv01 /data01  
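To keep the mount across reboots, it can be persisted in /etc/fstab (a sketch matching the snapshot-friendly mode above):

echo "LABEL=lv01 /data01 ext4 defaults,noatime,nodiratime,nodelalloc,barrier=1,data=ordered 0 0" >> /etc/fstab
mount -a   # verify that the entry parses and the filesystem mounts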

6. FIO test

yum install -y fio  
  
fio -filename=/data01/testdir -direct=1 -thread -rw=write -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_write.log 2>&1  
fio -filename=/data01/testdir -direct=1 -thread -rw=read -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_read.log 2>&1  
fio -filename=/data01/testdir -direct=1 -thread -rw=randwrite -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_randwrite.log 2>&1  
fio -filename=/data01/testdir -direct=1 -thread -rw=randread -ioengine=libaio -bs=8K -size=16G -numjobs=128 -runtime=60 -group_reporting -name=mytest >/tmp/fio_randread.log 2>&1  
Sequential:  
READ: io=72621MB, aggrb=1210.3MB/s, minb=1210.3MB/s, maxb=1210.3MB/s, mint=60003msec, maxt=60003msec  
WRITE: io=36845MB, aggrb=628743KB/s, minb=628743KB/s, maxb=628743KB/s, mint=60007msec, maxt=60007msec  
  
Random:  
READ: io=53390MB, aggrb=911160KB/s, minb=911160KB/s, maxb=911160KB/s, mint=60002msec, maxt=60002msec  
WRITE: io=26078MB, aggrb=445032KB/s, minb=445032KB/s, maxb=445032KB/s, mint=60004msec, maxt=60004msec  

Database deployment

1. Install the software

See 《PostgreSQL on Linux 最佳部署手册》 (listed in the references).

2. Initialize the database

initdb -E SQL_ASCII -U postgres --locale=C -D $PGDATA  
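The initdb call above assumes $PGDATA points at a directory on the striped volume, owned by the database OS user (digoal in this article). A minimal preparation sketch (the path is hypothetical):

export PGDATA=/data01/pg_root   # hypothetical data directory on the striped volume
mkdir -p $PGDATA
chown digoal:digoal $PGDATA
chmod 700 $PGDATA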

3. Configure postgresql.conf

port = 1921                               
max_connections = 1000                    
superuser_reserved_connections = 13       
unix_socket_directories = '.'     
shared_buffers = 64GB                     
maintenance_work_mem = 1GB                
dynamic_shared_memory_type = posix        
vacuum_cost_delay = 0                     
bgwriter_delay = 10ms                     
bgwriter_lru_maxpages = 500               
bgwriter_lru_multiplier = 5.0             
bgwriter_flush_after = 0                  
effective_io_concurrency = 16             
max_worker_processes = 128                
max_parallel_workers_per_gather = 64      
max_parallel_workers = 128                
backend_flush_after = 0           
wal_level = minimal                       
synchronous_commit = off                  
wal_sync_method = fsync           
full_page_writes = on                     
wal_buffers = 1GB                         
wal_writer_delay = 10ms           
wal_writer_flush_after = 0                
checkpoint_timeout = 35min                
max_wal_size = 128GB  
min_wal_size = 8GB  
checkpoint_completion_target = 0.5        
checkpoint_flush_after = 0                
checkpoint_warning = 30s                  
max_wal_senders = 0               
random_page_cost = 1.2                    
parallel_tuple_cost = 0           
parallel_setup_cost = 0   
min_parallel_table_scan_size = 0   
min_parallel_index_scan_size = 0  
effective_cache_size = 200GB  
log_destination = 'csvlog'                
logging_collector = on            
log_directory = 'log'                     
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'   
log_file_mode = 0600                      
log_truncate_on_rotation = on             
log_checkpoints = on  
log_connections = on  
log_disconnections = on  
log_error_verbosity = verbose   
log_timezone = 'PRC'  
log_autovacuum_min_duration = 0   
autovacuum_max_workers = 8                
autovacuum_vacuum_cost_delay = 0          
autovacuum_vacuum_cost_limit = 0          
datestyle = 'iso, mdy'  
timezone = 'PRC'  
lc_messages = 'C'                         
lc_monetary = 'C'                         
lc_numeric = 'C'                          
lc_time = 'C'                             
default_text_search_config = 'pg_catalog.english'  
max_locks_per_transaction = 6400  
  
Then start the database:  
  
pg_ctl start  

4. Configure WAL archiving. The archive directory must not live on the data cloud disks; put it on another cloud disk or on OSS.

For demonstration convenience it is placed on a local cloud disk here (just make sure it is not one of the data disks).

mkdir /archivedir  
chown digoal:digoal /archivedir  
vi postgresql.conf  
  
wal_level = replica  
archive_mode = on  
archive_command = 'test ! -f /archivedir/%f && cp %p /archivedir/%f'  
pg_ctl restart -m fast  

5. Benchmarks

5.1 High-throughput ingest benchmark

Initialization:

-- create the tables  
do language plpgsql $$   
declare   
begin   
  for i in 0..65535 loop    
    execute 'create table IF NOT EXISTS test_'||i||'(id int, info text, crt_time timestamp) with (autovacuum_enabled=off, toast.autovacuum_enabled=off)';  
  end loop;  
end;  
$$;  
  
-- drop the tables  
do language plpgsql $$   
declare   
begin   
  for i in 0..65535 loop    
    execute 'drop table IF EXISTS test_'||i;  
  end loop;  
end;  
$$;  
  
-- insert into the tables  
create or replace function batch_ins(int) returns void as $$  
declare  
begin  
  execute 'insert into test_'||$1||' select generate_series(1,10000), md5(random()::text), now()';  
end;  
$$ language plpgsql strict;  

High-throughput ingest test: the write rate approaches 2 million rows/s (173 tps × 10,000 rows per call).

vi test.sql  
  
\set id random(0,65535)  
select batch_ins(:id);  
  
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
  
transaction type: ./test.sql  
scaling factor: 1  
query mode: prepared  
number of clients: 32  
number of threads: 32  
duration: 120 s  
number of transactions actually processed: 20801  
latency average = 184.678 ms  
latency stddev = 57.522 ms  
tps = 173.218345 (including connections establishing)  
tps = 173.233151 (excluding connections establishing)  
script statistics:  
 - statement latencies in milliseconds:  
         0.003  \set id random(0,65535)  
       184.684  select batch_ins(:id);  

The real bottleneck is WAL writing: with UNLOGGED TABLEs, the write rate approaches 10 million rows/s.

transaction type: ./test.sql  
scaling factor: 1  
query mode: prepared  
number of clients: 32  
number of threads: 32  
duration: 120 s  
number of transactions actually processed: 117367  
latency average = 32.731 ms  
latency stddev = 23.047 ms  
tps = 977.228590 (including connections establishing)  
tps = 977.331826 (excluding connections establishing)  
script statistics:  
 - statement latencies in milliseconds:  
         0.002  \set id random(0,65535)  
        32.727  select batch_ins(:id);  
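For reference (not part of the original scripts), the unlogged variant only changes the DDL in the create-table loop; unlogged tables skip WAL entirely and are truncated after a crash:

psql <<'SQL'
do language plpgsql $$
begin
  for i in 0..65535 loop
    execute 'create unlogged table IF NOT EXISTS test_'||i||'(id int, info text, crt_time timestamp) with (autovacuum_enabled=off, toast.autovacuum_enabled=off)';
  end loop;
end;
$$;
SQL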

5.2 OLTP benchmark with 1 billion rows

pgbench -i -s 10000  
pgbench -M prepared -n -r -P 1 -c 64 -j 64 -T 120  
  
transaction type: <builtin: TPC-B (sort of)>  
scaling factor: 10000  
query mode: prepared  
number of clients: 64  
number of threads: 64  
duration: 120 s  
number of transactions actually processed: 580535  
latency average = 13.229 ms  
latency stddev = 30.073 ms  
tps = 4829.300799 (including connections establishing)  
tps = 4829.989470 (excluding connections establishing)  
script statistics:  
 - statement latencies in milliseconds:  
         0.003  \set aid random(1, 100000 * :scale)  
         0.001  \set bid random(1, 1 * :scale)  
         0.001  \set tid random(1, 10 * :scale)  
         0.001  \set delta random(-5000, 5000)  
         0.097  BEGIN;  
         5.650  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;  
         0.153  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;  
         3.122  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;  
         1.631  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;  
         1.418  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);  
         1.148  END;  

5.3 Parallel read benchmark

postgres=# \dt+ e  
                   List of relations  
 Schema | Name | Type  |  Owner   | Size  | Description   
--------+------+-------+----------+-------+-------------  
 public | e    | table | postgres | 24 GB |   
(1 row)  
  
-- 32 parallel workers: processing the 24GB table takes ~0.3 seconds.
postgres=# alter table e set (parallel_workers =32);  
postgres=# set max_parallel_workers_per_gather =32;  
postgres=# set min_parallel_table_scan_size =0;  
postgres=# set min_parallel_index_scan_size =0;  
postgres=# set parallel_setup_cost =0;  
postgres=# set parallel_tuple_cost =0;
postgres=# set force_parallel_mode =on;
  
postgres=# select count(*) from e;  
  count     
----------  
 28766181  
(1 row)  
Time: 993.876 ms  
  
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*) from e;  
                                                                           QUERY PLAN                                                                              
-----------------------------------------------------------------------------------------------------------------------------------------------------------------  
 Finalize Aggregate  (cost=178479.59..178479.60 rows=1 width=8) (actual time=305.796..305.796 rows=1 loops=1)  
   Output: count(*)  
   Buffers: shared hit=27422  
   ->  Gather  (cost=178479.50..178479.51 rows=32 width=8) (actual time=304.868..305.784 rows=33 loops=1)  
         Output: (PARTIAL count(*))  
         Workers Planned: 32  
         Workers Launched: 32  
         Buffers: shared hit=27422  
         ->  Partial Aggregate  (cost=178479.50..178479.51 rows=1 width=8) (actual time=291.483..291.483 rows=1 loops=33)  
               Output: PARTIAL count(*)  
               Buffers: shared hit=23934  
               Worker 0: actual time=281.287..281.287 rows=1 loops=1  
                 Buffers: shared hit=718  
..........  
               Worker 30: actual time=299.304..299.304 rows=1 loops=1  
                 Buffers: shared hit=438  
               Worker 31: actual time=300.227..300.227 rows=1 loops=1  
                 Buffers: shared hit=460  
               ->  Parallel Index Only Scan using uk_e on public.e  (cost=0.44..176232.78 rows=898689 width=0) (actual time=0.085..178.852 rows=871702 loops=33)  
                     Heap Fetches: 0  
                     Buffers: shared hit=23934  
                     Worker 0: actual time=0.078..173.121 rows=1014806 loops=1  
                       Buffers: shared hit=718  
..........  
                     Worker 30: actual time=0.089..184.251 rows=635007 loops=1  
                       Buffers: shared hit=438  
                     Worker 31: actual time=0.070..184.000 rows=649729 loops=1  
                       Buffers: shared hit=460  
 Planning time: 0.092 ms  
 Execution time: 314.527 ms  
(144 rows)  

Backup based on cloud-disk snapshots

Backup procedure

1. Start the backup

select pg_start_backup('xx');  

2. Snapshot the cloud disks

Invoke the API or run it from the Alibaba Cloud ECS console.
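A hedged sketch using the aliyun command-line tool (the disk IDs below are hypothetical placeholders; the call maps onto the ECS CreateSnapshot API):

# one snapshot per data disk; list all 16 data-disk IDs here (hypothetical values)
for diskid in d-xxx01 d-xxx02; do
  aliyun ecs CreateSnapshot --DiskId "$diskid" --SnapshotName "pg-backup-$diskid"
done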

3. Apply load

While the snapshots are being taken, generate database changes.

Reuse the earlier tests: the high-throughput ingest test (with logged tables) and the OLTP test, mixed with table creation and deletion.

4. Wait for the cloud-disk snapshots to finish

5. Stop the load

6. End the backup

select pg_stop_backup();  

7. Switch the WAL

checkpoint;  
select pg_switch_wal();  
checkpoint;  
select pg_switch_wal();  

8. Verify that archiving succeeded

Make sure every WAL segment produced while the snapshots were being taken has been archived.
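One way to check (a sketch; pg_stat_archiver is a stock statistics view, and the segments switched out above should appear in the archive directory):

psql -c "select archived_count, last_archived_wal, last_archived_time, failed_count from pg_stat_archiver;"
ls -lt /archivedir | head   # the most recent WAL segments should be listed here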

9. Compute a checksum of every test table

do language plpgsql $$  
declare  
  n name;  
  res int8;  
begin  
  for n in select tablename from pg_tables where schemaname='public'    
  loop  
    execute 'select sum(hashtext(t.*::text)) from '||n||' t' into res;  
    raise notice 'checksum %: %', n, res;  
  end loop;  
end;  
$$;  
  
NOTICE:  checksum pgbench_history: -422046586146  
NOTICE:  checksum pgbench_tellers: 215006661683  
NOTICE:  checksum pgbench_branches: 158568443210  
......  

Recovery from cloud-disk snapshots

PostgreSQL itself already supports incremental and block-level incremental backup, but since cloud-disk snapshots are available, it is enough to back up only the WAL archive plus the disk snapshots, saving the data one trip over the network.

PostgreSQL-native backup is also covered briefly; see the references at the end of this article.

Recovery procedure

1. Create a new ECS instance (it can be created from an image)

2. If the instance was created from the database server's ECS image, skip this step.

Otherwise, install the PostgreSQL software; the binaries and extensions must match the primary's.

3. Create cloud disks from the snapshots

4. Reassemble the logical volume

pvscan  
  
vgscan  
  
lvscan  
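If the volume group or logical volume shows up inactive after scanning, activate it first (a likely extra step, not spelled out in the original):

vgchange -a y vgdata01   # activate all logical volumes in the volume group
lvscan                   # lv01 should now be listed as ACTIVE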

5. Check the filesystem

fsck.ext4 -y /dev/mapper/vgdata01-lv01  

6. Mount the filesystem

mount -o defaults,noatime,nodiratime,nodelalloc,barrier=1,data=ordered LABEL=lv01 /data01  

7. Clear the WAL files. (The snapshots span multiple disks and cannot all be taken at exactly the same instant, so the WAL files on the restored volume may contain partial writes. WAL is the key to repairing the data files and must be intact, so we take the WAL from the archive instead.)

rm -f $PGDATA/pg_wal/*  

Note: with a single disk, it would be enough to delete only the last WAL segment.

8. Configure recovery

cp $PGHOME/share/recovery.conf.sample $PGDATA/recovery.conf  
vi $PGDATA/recovery.conf     # (auto copy archive to $PGDATA/pg_wal)  
  
restore_command = 'cp /archivedir/%f %p'           # e.g. 'cp /mnt/server/archivedir/%f %p'  
standby_mode = on  

9. Start the database

pg_ctl start  

10. Wait for recovery to complete; it finishes once the last archived WAL file has been replayed.
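Progress can be watched with the stock recovery-status functions (a sketch):

psql -c "select pg_is_in_recovery(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();"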

11. Promote the database

pg_ctl promote  

12. Verify the checksums

do language plpgsql $$  
declare  
  n name;  
  res int8;  
begin  
  for n in select tablename from pg_tables where schemaname='public'    
  loop  
    execute 'select sum(hashtext(t.*::text)) from '||n||' t' into res;  
    raise notice 'checksum %: %', n, res;  
  end loop;  
end;  
$$;  

13. Check that every data block is readable (vacuum freeze forces a full scan and rewrite of all blocks).

set vacuum_freeze_min_age =0;  
vacuum freeze;  

Summary

1. With WAL and full page writes, PostgreSQL can recover from inconsistent data files: partial writes in the backed-up data files are repaired by the full-page images written to WAL after each checkpoint, ultimately reaching a consistent state.

2. Snapshotting multiple cloud disks in parallel speeds up backup.

3. Multiple cloud disks raise overall read/write IOPS and bandwidth.

4. Through group commit, asynchronous commit, and asynchronous data-file writes, PostgreSQL works around the slightly higher single-IO latency of cloud disks compared to local disks; in other words, it is largely insensitive to IO latency.

5. With multi-core parallel query, a single SQL statement can fully exploit the high IOPS and bandwidth that multiple cloud disks provide.

6. Logical volumes, software RAID, or ZFS make it easy to harness the capacity of multiple cloud disks.

References

1. 《PostgreSQL on Linux 最佳部署手册》

2. ZFS

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)验证 - recovery test script for zfs snapshot clone + postgresql stream replication + archive》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)双机HA与块级备份部署》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)单个数据库采用多个zfs卷(如表空间)时如何一致性备份》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)备份集自动校验》

《PostgreSQL 最佳实践 - 块级增量备份(ZFS篇)方案与实战》

3. man lvm

4. man mdadm

5. Backup internals

《PostgreSQL 最佳实践 - 块级别增量备份(pg_rman baseon LSN)源码浅析与使用》

6. Point-in-time recovery internals and best practices

《PostgreSQL 最佳实践 - 任意时间点恢复源码分析》

《PostgreSQL 最佳实践 - 在线增量备份与任意时间点恢复》

7. https://github.com/pgbackrest/pgbackrest
