Configure pgcat as a read/write-splitting connection pooler for PolarDB

Abstract: Open-source PolarDB's one-writer, multi-reader architecture still lacks a connection pooler for application-transparent read/write splitting; pgcat, which supports connection pooling, sharding, read/write load balancing and more, is a good fit. See https://github.com/levkk/pgcat

Background

Open-source PolarDB uses a cloud-native distributed architecture with compute-storage separation: a single copy of storage serves multiple compute nodes, currently in a one-writer, multi-reader layout. The kernel is already powerful; what is still missing for application-transparent read/write splitting is a connection pooler, and pgcat is a good choice.

pgcat supports connection pooling, sharding, read/write load balancing, and more; see its project page for details: https://github.com/levkk/pgcat

pgcat deployment example

This example deploys pgcat directly inside the PolarDB container.

For PolarDB deployment, see the references at the end of this article.

Enter the PolarDB environment

docker exec -it 67e1eed1b4b6 bash  

Install Rust

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh  
  
source "$HOME/.cargo/env"  

Download pgcat

git clone --depth=1 https://github.com/levkk/pgcat  

Build pgcat

cd pgcat/  
  
  
cargo build --release  
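The build produces a release binary at target/release/pgcat (the same path the startup log shows later). Assuming the default behavior of reading pgcat.toml from the working directory, it can also be started without cargo:

```shell
# Run the release binary directly instead of via `cargo run --release`
./target/release/pgcat
```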

Configure pgcat: change the server entries to the PolarDB database connection info

vi pgcat.toml  
  
Contents of pgcat.toml:  
#  
# PgCat config example.  
#  
  
#  
# General pooler settings  
[general]  
# What IP to run on, 0.0.0.0 means accessible from everywhere.  
host = "0.0.0.0"  
  
# Port to run on, same as PgBouncer used in this example.  
port = 6432  
  
# Whether to enable prometheus exporter or not.  
enable_prometheus_exporter = true  
  
# Port at which prometheus exporter listens on.  
prometheus_exporter_port = 9930  
  
# How long to wait before aborting a server connection (ms).  
connect_timeout = 5000  
  
# How much time to give the health check query to return with a result (ms).  
healthcheck_timeout = 1000  
  
# How long to keep connection available for immediate re-use, without running a healthcheck query on it  
healthcheck_delay = 30000  
  
# How much time to give clients during shutdown before forcibly killing client connections (ms).  
shutdown_timeout = 60000  
  
# For how long to ban a server if it fails a health check (seconds).  
ban_time = 60 # seconds  
  
# If we should log client connections  
log_client_connections = false  
  
# If we should log client disconnections  
log_client_disconnections = false  
  
# Reload config automatically if it changes.  
autoreload = false  
  
# TLS  
# tls_certificate = "server.cert"  
# tls_private_key = "server.key"  
  
# Credentials to access the virtual administrative database (pgbouncer or pgcat)  
# Connecting to that database allows running commands like `SHOW POOLS`, `SHOW DATABASES`, etc.  
admin_username = "admin_user"  
admin_password = "admin_pass"  
  
# pool  
# configs are structured as pool.<pool_name>  
# the pool_name is what clients use as database name when connecting  
# For the example below a client can connect using "postgres://sharding_user:sharding_user@pgcat_host:pgcat_port/sharded_db"  
[pools.sharded_db]  
# Pool mode (see PgBouncer docs for more).  
# session: one server connection per connected client  
# transaction: one server connection per client transaction  
pool_mode = "transaction"  
  
# If the client doesn't specify, route traffic to  
# this role by default.  
#  
# any: round-robin between primary and replicas,  
# replica: round-robin between replicas only without touching the primary,  
# primary: all queries go to the primary unless otherwise specified.  
default_role = "any"  
  
# Query parser. If enabled, we'll attempt to parse  
# every incoming query to determine if it's a read or a write.  
# If it's a read query, we'll direct it to a replica. Otherwise, if it's a write,  
# we'll direct it to the primary.  
query_parser_enabled = true  
  
# If the query parser is enabled and this setting is enabled, the primary will be part of the pool of databases used for  
# load balancing of read queries. Otherwise, the primary will only be used for write  
# queries. The primary can always be explicitly selected with our custom protocol.  
primary_reads_enabled = true  
  
# So what if you wanted to implement a different hashing function,  
# or you've already built one and you want this pooler to use it?  
#  
# Current options:  
#  
# pg_bigint_hash: PARTITION BY HASH (Postgres hashing function)  
# sha1: A hashing function based on SHA1  
#  
sharding_function = "pg_bigint_hash"  
  
# Automatically parse this from queries and route queries to the right shard!  
automatic_sharding_key = "id"  
  
# Credentials for users that may connect to this cluster  
[pools.sharded_db.users.0]  
username = "sharding_user"  
password = "sharding_user"  
# Maximum number of server connections that can be established for this user  
# The maximum number of connections from a single pgcat process to any database in the cluster  
# is the sum of pool_size across all users.  
pool_size = 9  
  
# Maximum query duration. Dangerous, but protects against DBs that died in a non-obvious way.  
statement_timeout = 0  
  
[pools.sharded_db.users.1]  
username = "other_user"  
password = "other_user"  
pool_size = 21  
statement_timeout = 15000  
  
# Shard 0  
[pools.sharded_db.shards.0]  
# [ host, port, role ]  
servers = [  
    [ "127.0.0.1", 5432, "primary" ],  
    [ "localhost", 5433, "replica" ],  
    [ "localhost", 5434, "replica" ]  
]  
# Database name (e.g. "postgres")  
database = "shard0"  
  
[pools.sharded_db.shards.1]  
servers = [  
    [ "127.0.0.1", 5432, "primary" ],  
    [ "localhost", 5433, "replica" ],  
    [ "localhost", 5434, "replica" ]  
]  
database = "shard1"  
  
[pools.sharded_db.shards.2]  
servers = [  
    [ "127.0.0.1", 5432, "primary" ],  
    [ "localhost", 5433, "replica" ],  
    [ "localhost", 5434, "replica" ]  
]  
database = "shard2"  
  
  
[pools.simple_db]  
pool_mode = "transaction"  
default_role = "primary"  
query_parser_enabled = true  
primary_reads_enabled = true  
sharding_function = "pg_bigint_hash"  
  
[pools.simple_db.users.0]  
username = "simple_user"  
password = "simple_user"  
pool_size = 5  
statement_timeout = 0  
  
[pools.simple_db.shards.0]  
servers = [  
    [ "127.0.0.1", 5432, "primary" ],  
    [ "localhost", 5433, "replica" ],  
    [ "localhost", 5434, "replica" ]  
]  
database = "some_db"  
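Beyond automatic routing, pgcat exposes custom SET commands for explicit shard selection. The commands below come from the pgcat documentation; treat the exact syntax as version-dependent and verify against your build:

```sql
-- Route by sharding key: pgcat hashes the value with the configured
-- sharding_function and picks the shard that owns it
SET SHARDING KEY TO '1234';

-- Or address a shard directly by index (0..2 in this config)
SET SHARD TO '0';
```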

Connect directly to the PolarDB primary and create the databases, users, and routing setup referenced in the pgcat config.

psql -h 127.0.0.1 -f tests/sharding/query_routing_setup.sql  
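tests/sharding/query_routing_setup.sql ships with the pgcat repository and may change between versions. Conceptually, it creates the users and databases that pgcat.toml references; a hand-written sketch (not the actual script) would look like:

```sql
-- Users from pgcat.toml (passwords match the config above)
CREATE ROLE sharding_user LOGIN PASSWORD 'sharding_user';
CREATE ROLE other_user    LOGIN PASSWORD 'other_user';
CREATE ROLE simple_user   LOGIN PASSWORD 'simple_user';

-- Databases referenced by the shard definitions
CREATE DATABASE shard0;
CREATE DATABASE shard1;
CREATE DATABASE shard2;
CREATE DATABASE some_db;
```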

Start pgcat

[postgres@67e1eed1b4b6 pgcat]$ RUST_LOG=info cargo run --release  
    Finished release [optimized] target(s) in 0.12s  
     Running `target/release/pgcat`  
[2022-12-02T06:30:23.280411Z INFO  pgcat] Welcome to PgCat! Meow. (Version 0.6.0-alpha1)  
[2022-12-02T06:30:23.284527Z INFO  pgcat] Running on 0.0.0.0:6432  
[2022-12-02T06:30:23.284572Z INFO  pgcat::config] Ban time: 60s  
[2022-12-02T06:30:23.284580Z INFO  pgcat::config] Healthcheck timeout: 1000ms  
[2022-12-02T06:30:23.284585Z INFO  pgcat::config] Connection timeout: 5000ms  
[2022-12-02T06:30:23.284590Z INFO  pgcat::config] Log client connections: false  
[2022-12-02T06:30:23.284594Z INFO  pgcat::config] Log client disconnections: false  
[2022-12-02T06:30:23.284599Z INFO  pgcat::config] Shutdown timeout: 60000ms  
[2022-12-02T06:30:23.284603Z INFO  pgcat::config] Healthcheck delay: 30000ms  
[2022-12-02T06:30:23.284608Z INFO  pgcat::config] TLS support is disabled  
[2022-12-02T06:30:23.284613Z INFO  pgcat::config] [pool: sharded_db] Maximum user connections: 30  
[2022-12-02T06:30:23.284618Z INFO  pgcat::config] [pool: sharded_db] Pool mode: Transaction  
[2022-12-02T06:30:23.284624Z INFO  pgcat::config] [pool: sharded_db] Connection timeout: 5000ms  
[2022-12-02T06:30:23.284629Z INFO  pgcat::config] [pool: sharded_db] Sharding function: pg_bigint_hash  
[2022-12-02T06:30:23.284634Z INFO  pgcat::config] [pool: sharded_db] Primary reads: true  
[2022-12-02T06:30:23.284804Z INFO  pgcat::config] [pool: sharded_db] Query router: true  
[2022-12-02T06:30:23.284879Z INFO  pgcat::config] [pool: sharded_db] Number of shards: 3  
[2022-12-02T06:30:23.285156Z INFO  pgcat::config] [pool: sharded_db] Number of users: 2  
[2022-12-02T06:30:23.285171Z INFO  pgcat::config] [pool: sharded_db][user: sharding_user] Pool size: 9  
[2022-12-02T06:30:23.285204Z INFO  pgcat::config] [pool: sharded_db][user: sharding_user] Statement timeout: 0  
[2022-12-02T06:30:23.285345Z INFO  pgcat::config] [pool: sharded_db][user: other_user] Pool size: 21  
[2022-12-02T06:30:23.285353Z INFO  pgcat::config] [pool: sharded_db][user: other_user] Statement timeout: 15000  
[2022-12-02T06:30:23.285359Z INFO  pgcat::config] [pool: simple_db] Maximum user connections: 5  
[2022-12-02T06:30:23.285364Z INFO  pgcat::config] [pool: simple_db] Pool mode: Transaction  
[2022-12-02T06:30:23.285369Z INFO  pgcat::config] [pool: simple_db] Connection timeout: 5000ms  
[2022-12-02T06:30:23.285374Z INFO  pgcat::config] [pool: simple_db] Sharding function: pg_bigint_hash  
[2022-12-02T06:30:23.285379Z INFO  pgcat::config] [pool: simple_db] Primary reads: true  
[2022-12-02T06:30:23.285383Z INFO  pgcat::config] [pool: simple_db] Query router: true  
[2022-12-02T06:30:23.285388Z INFO  pgcat::config] [pool: simple_db] Number of shards: 1  
[2022-12-02T06:30:23.285392Z INFO  pgcat::config] [pool: simple_db] Number of users: 1  
[2022-12-02T06:30:23.285397Z INFO  pgcat::config] [pool: simple_db][user: simple_user] Pool size: 5  
[2022-12-02T06:30:23.285402Z INFO  pgcat::config] [pool: simple_db][user: simple_user] Statement timeout: 0  
[2022-12-02T06:30:23.284609Z INFO  pgcat::prometheus] Exposing prometheus metrics on http://0.0.0.0:9930/metrics.  
[2022-12-02T06:30:23.285516Z INFO  pgcat::pool] [pool: sharded_db][user: sharding_user] creating new pool  
[2022-12-02T06:30:23.285883Z INFO  pgcat::pool] Creating a new server connection Address { id: 0, host: "127.0.0.1", port: 5432, shard: 0, database: "shard0", role: Primary, replica_number: 0, address_index: 0, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.292142Z INFO  pgcat::pool] Creating a new server connection Address { id: 1, host: "localhost", port: 5433, shard: 0, database: "shard0", role: Replica, replica_number: 0, address_index: 1, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.316973Z INFO  pgcat::pool] Creating a new server connection Address { id: 2, host: "localhost", port: 5434, shard: 0, database: "shard0", role: Replica, replica_number: 1, address_index: 2, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.337804Z INFO  pgcat::pool] Creating a new server connection Address { id: 3, host: "127.0.0.1", port: 5432, shard: 1, database: "shard1", role: Primary, replica_number: 0, address_index: 0, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.342340Z INFO  pgcat::pool] Creating a new server connection Address { id: 4, host: "localhost", port: 5433, shard: 1, database: "shard1", role: Replica, replica_number: 0, address_index: 1, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.357989Z INFO  pgcat::pool] Creating a new server connection Address { id: 5, host: "localhost", port: 5434, shard: 1, database: "shard1", role: Replica, replica_number: 1, address_index: 2, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.369388Z INFO  pgcat::pool] Creating a new server connection Address { id: 6, host: "127.0.0.1", port: 5432, shard: 2, database: "shard2", role: Primary, replica_number: 0, address_index: 0, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.374910Z INFO  pgcat::pool] Creating a new server connection Address { id: 7, host: "localhost", port: 5433, shard: 2, database: "shard2", role: Replica, replica_number: 0, address_index: 1, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.394223Z INFO  pgcat::pool] Creating a new server connection Address { id: 8, host: "localhost", port: 5434, shard: 2, database: "shard2", role: Replica, replica_number: 1, address_index: 2, username: "sharding_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.405927Z INFO  pgcat::pool] [pool: sharded_db][user: other_user] creating new pool  
[2022-12-02T06:30:23.406268Z INFO  pgcat::pool] Creating a new server connection Address { id: 9, host: "127.0.0.1", port: 5432, shard: 0, database: "shard0", role: Primary, replica_number: 0, address_index: 0, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.412445Z INFO  pgcat::pool] Creating a new server connection Address { id: 10, host: "localhost", port: 5433, shard: 0, database: "shard0", role: Replica, replica_number: 0, address_index: 1, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.417602Z INFO  pgcat::pool] Creating a new server connection Address { id: 11, host: "localhost", port: 5434, shard: 0, database: "shard0", role: Replica, replica_number: 1, address_index: 2, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.422787Z INFO  pgcat::pool] Creating a new server connection Address { id: 12, host: "127.0.0.1", port: 5432, shard: 1, database: "shard1", role: Primary, replica_number: 0, address_index: 0, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.429330Z INFO  pgcat::pool] Creating a new server connection Address { id: 13, host: "localhost", port: 5433, shard: 1, database: "shard1", role: Replica, replica_number: 0, address_index: 1, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.438427Z INFO  pgcat::pool] Creating a new server connection Address { id: 14, host: "localhost", port: 5434, shard: 1, database: "shard1", role: Replica, replica_number: 1, address_index: 2, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.445699Z INFO  pgcat::pool] Creating a new server connection Address { id: 15, host: "127.0.0.1", port: 5432, shard: 2, database: "shard2", role: Primary, replica_number: 0, address_index: 0, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.452223Z INFO  pgcat::pool] Creating a new server connection Address { id: 16, host: "localhost", port: 5433, shard: 2, database: "shard2", role: Replica, replica_number: 0, address_index: 1, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.458212Z INFO  pgcat::pool] Creating a new server connection Address { id: 17, host: "localhost", port: 5434, shard: 2, database: "shard2", role: Replica, replica_number: 1, address_index: 2, username: "other_user", pool_name: "sharded_db" }  
[2022-12-02T06:30:23.464638Z INFO  pgcat::pool] [pool: simple_db][user: simple_user] creating new pool  
[2022-12-02T06:30:23.464848Z INFO  pgcat::pool] Creating a new server connection Address { id: 18, host: "127.0.0.1", port: 5432, shard: 0, database: "some_db", role: Primary, replica_number: 0, address_index: 0, username: "simple_user", pool_name: "simple_db" }  
[2022-12-02T06:30:23.472276Z INFO  pgcat::pool] Creating a new server connection Address { id: 19, host: "localhost", port: 5433, shard: 0, database: "some_db", role: Replica, replica_number: 0, address_index: 1, username: "simple_user", pool_name: "simple_db" }  
[2022-12-02T06:30:23.484090Z INFO  pgcat::pool] Creating a new server connection Address { id: 20, host: "localhost", port: 5434, shard: 0, database: "some_db", role: Replica, replica_number: 1, address_index: 2, username: "simple_user", pool_name: "simple_db" }  
[2022-12-02T06:30:23.496559Z INFO  pgcat] Config autoreloader: false  
[2022-12-02T06:30:23.496816Z INFO  pgcat] Waiting for clients  
[2022-12-02T06:30:23.496611Z INFO  pgcat::stats] Events reporter started  
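The startup log confirms the Prometheus exporter enabled in the config is listening on port 9930; the metrics endpoint can be checked from another shell:

```shell
# Scrape pgcat's Prometheus metrics (endpoint shown in the startup log)
curl -s http://127.0.0.1:9930/metrics
```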

Open a new bash session into the container and test connecting through pgcat

docker exec -it 67e1eed1b4b6 bash  
  
[postgres@67e1eed1b4b6 pgcat]$ export PGPASSWORD=simple_user  
[postgres@67e1eed1b4b6 pgcat]$ psql -h 127.0.0.1 -p 6432 -U simple_user simple_db  
psql (11.9)  
Type "help" for help.  
  
simple_db=> \q  
  
pgbench -i -s 10  -h 127.0.0.1 -p 6432 -U simple_user simple_db  
  
pgbench -M prepared -n -r -P 1 -c 4 -j 4 -T 120 -S -h 127.0.0.1 -p 6432 -U simple_user simple_db  
  
pgbench -M extended -n -r -P 1 -c 12 -j 12 -T 120 -S -h 127.0.0.1 -p 6432 -U simple_user simple_db  
progress: 1.0 s, 15276.8 tps, lat 0.782 ms stddev 0.388  
progress: 2.0 s, 14739.9 tps, lat 0.814 ms stddev 0.417  
progress: 3.0 s, 13652.3 tps, lat 0.879 ms stddev 0.483  
progress: 4.0 s, 13377.7 tps, lat 0.897 ms stddev 0.483  
  
pgbench -M extended -n -r -P 1 -c 120 -j 1200 -T 1200 -S -h 127.0.0.1 -p 6432 -U simple_user simple_db  
progress: 1.0 s, 14764.8 tps, lat 7.635 ms stddev 4.248  
progress: 2.0 s, 15562.4 tps, lat 7.704 ms stddev 4.211  
progress: 3.0 s, 15672.1 tps, lat 7.646 ms stddev 5.242  
progress: 4.0 s, 16104.7 tps, lat 7.448 ms stddev 5.672  
progress: 5.0 s, 15078.6 tps, lat 7.965 ms stddev 6.022  
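Both runs above use -S (SELECT-only), so every query is eligible for replica routing. Because the query parser sends writes to the primary, a mixed read/write workload can be tested simply by dropping -S (pgbench's default tpcb-like script):

```shell
# Mixed read/write benchmark through pgcat; UPDATE/INSERT statements
# should be routed to the primary, SELECTs may go to replicas
pgbench -M extended -n -r -P 1 -c 12 -j 12 -T 120 -h 127.0.0.1 -p 6432 -U simple_user simple_db
```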

Read/write splitting test. Outside a transaction the SELECTs go to replicas (pg_is_in_recovery() returns t); once a transaction is open, pgcat routes the session to the primary (it returns f):

psql -h 127.0.0.1 -p 6432 -U simple_user simple_db
psql (11.9)
Type "help" for help.

simple_db=> select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)

simple_db=> select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 t
(1 row)

simple_db=> begin;
BEGIN
simple_db=> select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 f
(1 row)

simple_db=> select pg_is_in_recovery();
 pg_is_in_recovery 
-------------------
 f
(1 row)
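After the transaction ends (COMMIT or ROLLBACK), routing reverts to the replicas. pgcat's custom protocol (per the pgcat documentation; verify the exact syntax against your build) also lets a session force a role explicitly:

```sql
end;
-- Force subsequent queries in this session to the primary
SET SERVER ROLE TO 'primary';
select pg_is_in_recovery();  -- should now return f (primary)
```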

References

《如何用 PolarDB 证明巴菲特的投资理念 - 包括PolarDB简单部署》

https://github.com/levkk/pgcat

《PostgreSQL sharding+pool+读写分离 中间件 - PGcat》

https://www.rust-lang.org/tools/install
