How large is the PostgreSQL asynchronous message (LISTEN/NOTIFY) queue?


Tags

PostgreSQL , async messages , notify , listen , queue


Background

Some applications of PostgreSQL's asynchronous messaging feature (LISTEN/NOTIFY):

《PostgreSQL 流式处理应用实践 - 二手商品实时归类》

《PostgreSQL 事件触发器应用 - DDL审计记录 + 异步通知(notify)》

《从电波表到数据库小程序之 - 数据库异步广播(notify/listen)》

《use PostgreSQL async Notification as a chat group》

《PostgreSQL Notify/Listen Like ESB》

So how large can a single asynchronous message be, and when the consuming client is blocked, how many messages (or how much data) can the database side hold?

Maximum size of a single asynchronous message

/*  
 * Maximum size of a NOTIFY payload, including terminating NULL.  This  
 * must be kept small enough so that a notification message fits on one  
 * SLRU page.  The magic fudge factor here is noncritical as long as it's  
 * more than AsyncQueueEntryEmptySize --- we make it significantly bigger  
 * than that, so changes in that data structure won't affect user-visible  
 * restrictions.  
 */  
#define NOTIFY_PAYLOAD_MAX_LENGTH       (BLCKSZ - NAMEDATALEN - 128)  
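With stock build settings (assuming the defaults BLCKSZ = 8192 and NAMEDATALEN = 64), this works out to 8192 - 64 - 128 = 8000 bytes including the terminating NUL, so a payload can be at most 7999 characters. A minimal sketch of the arithmetic:

#include <stdio.h>

/* Stock build-time values (assumptions; adjust if your build differs) */
#define BLCKSZ      8192                    /* default page/block size */
#define NAMEDATALEN 64                      /* max channel-name length */

/* Same formula as async.c */
#define NOTIFY_PAYLOAD_MAX_LENGTH (BLCKSZ - NAMEDATALEN - 128)

int main(void)
{
    /* prints 8000: the limit includes the terminating NUL,
     * so the longest payload string is 7999 characters */
    printf("NOTIFY_PAYLOAD_MAX_LENGTH = %d bytes\n", NOTIFY_PAYLOAD_MAX_LENGTH);
    return 0;
}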

Structure of an asynchronous message (queue entry)

/*  
 * Struct representing an entry in the global notify queue  
 *  
 * This struct declaration has the maximal length, but in a real queue entry  
 * the data area is only big enough for the actual channel and payload strings  
 * (each null-terminated).  AsyncQueueEntryEmptySize is the minimum possible  
 * entry size, if both channel and payload strings are empty (but note it  
 * doesn't include alignment padding).  
 *  
 * The "length" field should always be rounded up to the next QUEUEALIGN  
 * multiple so that all fields are properly aligned.  
 */  
typedef struct AsyncQueueEntry  
{  
        int                     length;                 /* total allocated length of entry */  
        Oid                     dboid;                  /* sender's database OID */  
        TransactionId xid;                      /* sender's XID */  
        int32           srcPid;                 /* sender's PID */  
        char            data[NAMEDATALEN + NOTIFY_PAYLOAD_MAX_LENGTH];  
} AsyncQueueEntry;  
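Each queued notification is this small fixed header followed by the channel name and the payload, each NUL-terminated and rounded up to the queue alignment. The sketch below estimates how much queue space one entry occupies; the channel and payload strings are hypothetical, and the alignment macro QUEUEALIGN is approximated here as rounding up to a 4-byte multiple.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef unsigned int Oid;
typedef unsigned int TransactionId;
typedef int int32;

#define BLCKSZ      8192
#define NAMEDATALEN 64
#define NOTIFY_PAYLOAD_MAX_LENGTH (BLCKSZ - NAMEDATALEN - 128)

typedef struct AsyncQueueEntry
{
    int           length;
    Oid           dboid;
    TransactionId xid;
    int32         srcPid;
    char          data[NAMEDATALEN + NOTIFY_PAYLOAD_MAX_LENGTH];
} AsyncQueueEntry;

/* approximation of async.c's QUEUEALIGN: round up to a 4-byte multiple */
static size_t queue_align(size_t len) { return (len + 3) & ~(size_t) 3; }

int main(void)
{
    const char *channel = "my_channel";   /* hypothetical channel name */
    const char *payload = "hello";        /* hypothetical payload      */

    /* header + channel string + NUL + payload string + NUL, aligned */
    size_t used = queue_align(offsetof(AsyncQueueEntry, data)
                              + strlen(channel) + 1
                              + strlen(payload) + 1);

    printf("queue space used by this entry: %zu bytes\n", used);
    return 0;
}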

How many asynchronous messages can the database side hold?

/*  
 * Define SLRU segment size.  A page is the same BLCKSZ as is used everywhere  
 * else in Postgres.  The segment size can be chosen somewhat arbitrarily;  
 * we make it 32 pages by default, or 256Kb, i.e. 1M transactions for CLOG  
 * or 64K transactions for SUBTRANS.  
 *  
 * Note: because TransactionIds are 32 bits and wrap around at 0xFFFFFFFF,  
 * page numbering also wraps around at 0xFFFFFFFF/xxxx_XACTS_PER_PAGE (where  
 * xxxx is CLOG or SUBTRANS, respectively), and segment numbering at  
 * 0xFFFFFFFF/xxxx_XACTS_PER_PAGE/SLRU_PAGES_PER_SEGMENT.  We need  
 * take no explicit notice of that fact in slru.c, except when comparing  
 * segment and page numbers in SimpleLruTruncate (see PagePrecedes()).  
 */  
#define SLRU_PAGES_PER_SEGMENT  32  

/*
 * (from the header comment of src/backend/commands/async.c)
 *
 * The amount of shared memory used for notify management (NUM_ASYNC_BUFFERS)
 * can be varied without affecting anything but performance.  The maximum
 * amount of notification data that can be queued at one time is determined
 * by slru.c's wraparound limit; see QUEUE_MAX_PAGE below.
 */
/*  
 * slru.c currently assumes that all filenames are four characters of hex  
 * digits. That means that we can use segments 0000 through FFFF.  
 * Each segment contains SLRU_PAGES_PER_SEGMENT pages which gives us  
 * the pages from 0 to SLRU_PAGES_PER_SEGMENT * 0x10000 - 1.  
 *  
 * It's of course possible to enhance slru.c, but this gives us so much  
 * space already that it doesn't seem worth the trouble.  
 *  
 * The most data we can have in the queue at a time is QUEUE_MAX_PAGE/2  
 * pages, because more than that would confuse slru.c into thinking there  
 * was a wraparound condition.  With the default BLCKSZ this means there  
 * can be up to 8GB of queued-and-not-read data.  
 *  
 * Note: it's possible to redefine QUEUE_MAX_PAGE with a smaller multiple of  
 * SLRU_PAGES_PER_SEGMENT, for easier testing of queue-full behaviour.  
 */  
#define QUEUE_MAX_PAGE                  (SLRU_PAGES_PER_SEGMENT * 0x10000 - 1)  

With the default block size (BLCKSZ = 8KB), the database can hold at most about 8GB of queued-and-not-yet-read notification data (QUEUE_MAX_PAGE/2 pages, as the comment above explains); a small verification sketch follows.

8KB * QUEUE_MAX_PAGE / 2 = 8KB * (32 * 0x10000 - 1) / 2 ≈ 8GB  
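A minimal sketch of that arithmetic, assuming the default 8KB block size:

#include <stdio.h>

#define BLCKSZ                 8192                 /* default block size */
#define SLRU_PAGES_PER_SEGMENT 32
#define QUEUE_MAX_PAGE         (SLRU_PAGES_PER_SEGMENT * 0x10000 - 1)

int main(void)
{
    /* at most QUEUE_MAX_PAGE/2 pages may be queued at one time */
    long long max_pages = QUEUE_MAX_PAGE / 2;            /* 1048575 pages */
    long long max_bytes = max_pages * (long long) BLCKSZ;

    printf("max queued pages: %lld\n", max_pages);
    printf("max queued data : %.2f GB\n", max_bytes / (1024.0 * 1024 * 1024));
    return 0;
}

If the queue starts to fill because a listener stops consuming, pg_notification_queue_usage() (available since PostgreSQL 9.5) reports the fraction of this space currently in use.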

Where are the held asynchronous messages stored?

digoal@iZbp13nu0s9j3x3op4zpd4Z-> ll  
total 300K  
-rw------- 1 digoal digoal  193 Nov  8  2017 backup_label.old  
drwx------ 9 digoal digoal 4.0K Dec 15  2017 base  
-rw------- 1 digoal digoal   30 Jun 16 11:37 current_logfiles  
drwx------ 2 digoal digoal 4.0K Jun 16 11:37 global  
drwx------ 2 digoal digoal 4.0K Nov 13  2017 log  
drwx------ 2 digoal digoal 4.0K Nov  7  2017 pg_commit_ts  
drwx------ 2 digoal digoal 4.0K Nov  7  2017 pg_dynshmem  
-rw------- 1 digoal digoal 4.5K Nov  7  2017 pg_hba.conf  
-rw------- 1 digoal digoal 1.6K Nov  7  2017 pg_ident.conf  
drwx------ 4 digoal digoal 4.0K Jun 16 11:37 pg_logical  
drwx------ 4 digoal digoal 4.0K Nov  7  2017 pg_multixact  
drwx------ 2 digoal digoal  36K Jun 16 11:37 pg_notify  
drwx------ 4 digoal digoal 4.0K Dec 27  2017 pg_replslot  
drwx------ 2 digoal digoal 4.0K Nov  7  2017 pg_serial  
drwx------ 2 digoal digoal 4.0K Nov  7  2017 pg_snapshots  
drwx------ 2 digoal digoal 4.0K Jun 16 11:37 pg_stat  
drwx------ 2 digoal digoal 4.0K Jun 16 11:37 pg_stat_tmp  
drwx------ 2 digoal digoal 132K Dec 27  2017 pg_subtrans  
drwx------ 2 digoal digoal 4.0K Nov  9  2017 pg_tblspc  
drwx------ 2 digoal digoal 4.0K Nov  7  2017 pg_twophase  
-rw------- 1 digoal digoal    3 Nov  7  2017 PG_VERSION  
lrwxrwxrwx 1 digoal digoal   22 Nov  7  2017 pg_wal -> /data02/pg/pg_wal_1999  
drwx------ 2 digoal digoal  20K Dec 27  2017 pg_xact  
-rw------- 1 digoal digoal 2.5K Jan 11  2018 postgresql.auto.conf  
-rw------- 1 digoal digoal  23K Jan 11  2018 postgresql.conf  
-rw------- 1 digoal digoal   34 Jun 16 11:37 postmaster.opts  
  
digoal@iZbp13nu0s9j3x3op4zpd4Z-> cd pg_notify/  
digoal@iZbp13nu0s9j3x3op4zpd4Z-> ll  
total 8.0K  
-rw------- 1 digoal digoal 8.0K Jun 16 11:37 0000  
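The pg_notify directory holds the SLRU segment files that back the notification queue (the 0000 file above), so a backlog of unconsumed notifications eventually shows up here as growing segment files; per the PostgreSQL documentation, once the queue is full, transactions executing NOTIFY fail at commit. Below is a minimal libpq consumer, sketched under the assumption of a channel named my_channel and a local connection; a session that never drains its notifications like this is exactly the kind of blocked consumer that lets the queue grow.

/* build: cc -o drain drain.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int main(void)
{
    /* hypothetical connection string; adjust for your environment */
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* subscribe to a hypothetical channel */
    PGresult *res = PQexec(conn, "LISTEN my_channel");
    PQclear(res);

    for (;;)
    {
        /* wait for data on the connection's socket */
        int    sock = PQsocket(conn);
        fd_set input;
        FD_ZERO(&input);
        FD_SET(sock, &input);
        if (select(sock + 1, &input, NULL, NULL, NULL) < 0)
            break;

        /* read whatever arrived and hand out the notifications */
        PQconsumeInput(conn);
        PGnotify *n;
        while ((n = PQnotifies(conn)) != NULL)
        {
            printf("channel=%s pid=%d payload=%s\n",
                   n->relname, n->be_pid, n->extra);
            PQfreemem(n);
        }
    }

    PQfinish(conn);
    return 0;
}

The loop follows the standard libpq pattern: wait on the connection's socket, call PQconsumeInput(), then hand out everything PQnotifies() returns.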

References

src/include/access/slru.h

src/backend/commands/async.c
