PolarDB-X Centralized Three-Node High-Availability Cluster Deployment & a Java CRUD Application

Abstract: This article walks through deploying a PolarDB-X three-node high-availability cluster on CentOS 7.9, openEuler 20.03, and Kylin V10: environment preparation, configuration files, cluster initialization and startup, and verification of CRUD operations from a Java application. The cluster performs automatic leader/follower failover to keep the service highly available, making it a useful reference for building and studying production database architectures.


This article builds on the three earlier single-node deployment articles.

Starting from those single-node setups, you can assemble the nodes into a cluster.

1. Environment: 3 nodes

  • node 1
[polarx@c79-model ~]$ hostname -I
192.168.122.58
[polarx@c79-model ~]$ cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
[polarx@c79-model ~]$ arch
x86_64
[polarx@c79-model ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:           3.7G        1.5G        781M        8.6M        1.5G        2.0G
Swap:            0B          0B          0B
[polarx@c79-model ~]$ lscpu |grep "^CPU(s)"
CPU(s):                2
[polarx@c79-model ~]$
  • node 2
[polarx@openeuler-model ~]$ hostname -I
192.168.122.10
[polarx@openeuler-model ~]$ cat /etc/openEuler-release
openEuler release 20.03 (LTS)
[polarx@openeuler-model ~]$ arch
x86_64
[polarx@openeuler-model ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.3Gi       753Mi       5.7Gi       0.0Ki       840Mi       6.1Gi
Swap:         4.0Gi          0B       4.0Gi
[polarx@openeuler-model ~]$ lscpu |grep "^CPU(s)"
CPU(s):                          2
[polarx@openeuler-model ~]$
  • node 3
[polarx@kylinv10sp1 ~]$ hostname -I
192.168.122.225
[polarx@kylinv10sp1 ~]$ cat /etc/kylin-release
Kylin Linux Advanced Server release V10 (Tercel)
[polarx@kylinv10sp1 ~]$ arch
x86_64
[polarx@kylinv10sp1 ~]$ free -h
              total        used        free      shared  buff/cache   available
Mem:           15Gi       1.7Gi        12Gi       7.0Mi       1.2Gi        13Gi
Swap:            0B          0B          0B
[polarx@kylinv10sp1 ~]$ lscpu |grep "^CPU(s)"
CPU(s):                          2
[polarx@kylinv10sp1 ~]$

2. Basic configuration

  • install the RPM package (all 3 nodes)
yum install ./t-polardbx-engine-8.4.19-20250825_17558853.el7.x86_64.rpm -y
  • create the base directories (all 3 nodes)
[polarx@c79-model ~]$ mkdir -p polardbx-engine/{log,mysql,run,data,tmp}
  • node 1 my.cnf
[polarx@c79-model ~]$ cat polardbx-engine/my.cnf
[mysqld]
basedir = /opt/polardbx_engine
log_error_verbosity = 2
default_authentication_plugin = mysql_native_password
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-binlog
binlog_format = row
binlog_row_image = FULL
master_info_repository = TABLE
relay_log_info_repository = TABLE

# change me if needed
datadir = /home/polarx/polardbx-engine/data
tmpdir = /home/polarx/polardbx-engine/tmp
socket = /home/polarx/polardbx-engine/tmp.mysql.sock
log_error = /home/polarx/polardbx-engine/log/alert.log
port = 4886
cluster_id = 1234
cluster_info = 127.0.0.1:14886@1
server_id = 1

[mysqld_safe]
pid_file = /home/polarx/polardbx-engine/run/mysql.pid
[polarx@c79-model ~]$
  • node 2 my.cnf
[polarx@openeuler-model ~]$ cat polardbx-engine/my.cnf
[mysqld]
basedir = /opt/polardbx_engine
log_error_verbosity = 2
default_authentication_plugin = mysql_native_password
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-binlog
binlog_format = row
binlog_row_image = FULL
master_info_repository = TABLE
relay_log_info_repository = TABLE

# change me if needed
datadir = /home/polarx/polardbx-engine/data
tmpdir = /home/polarx/polardbx-engine/tmp
socket = /home/polarx/polardbx-engine/tmp.mysql.sock
log_error = /home/polarx/polardbx-engine/log/alert.log
port = 4886
cluster_id = 1234
cluster_info = 127.0.0.1:14886@2
server_id = 2

[mysqld_safe]
pid_file = /home/polarx/polardbx-engine/run/mysql.pid
[polarx@openeuler-model ~]$
  • node 3 my.cnf
[polarx@kylinv10sp1 ~]$ cat polardbx-engine/my.cnf
[mysqld]
basedir = /opt/polardbx_engine
log_error_verbosity = 2
default_authentication_plugin = mysql_native_password
gtid_mode = ON
enforce_gtid_consistency = ON
log_bin = mysql-binlog
binlog_format = row
binlog_row_image = FULL
master_info_repository = TABLE
relay_log_info_repository = TABLE

# change me if needed
datadir = /home/polarx/polardbx-engine/data
tmpdir = /home/polarx/polardbx-engine/tmp
socket = /home/polarx/polardbx-engine/tmp.mysql.sock
log_error = /home/polarx/polardbx-engine/log/alert.log
port = 4886
cluster_id = 1234
cluster_info = 127.0.0.1:14886@3
server_id = 3

[mysqld_safe]
pid_file = /home/polarx/polardbx-engine/run/mysql.pid
[polarx@kylinv10sp1 ~]$
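The three my.cnf files above are identical except for the @N suffix in cluster_info and the server_id. As a minimal sketch (values taken from this article), the two per-node fields can be stamped from one shared template:

```shell
#!/usr/bin/env bash
# Render the per-node my.cnf fragment; only cluster_info's "@N" suffix
# and server_id differ between the three nodes in this article.
render_node_cnf() {   # usage: render_node_cnf <node-index 1..3>
  sed -e "s/@N/@$1/" -e "s/^server_id = N/server_id = $1/" <<'EOF'
cluster_id = 1234
cluster_info = 127.0.0.1:14886@N
server_id = N
EOF
}

render_node_cnf 2
```

For brevity the template holds only the per-node lines; in practice the shared [mysqld] settings would sit alongside them.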

3. Initialize the cluster, node by node

  • node 1 init
[polarx@c79-model ~]$ cd polardbx-engine
[polarx@c79-model polardbx-engine]$ cat init-cm.sh

/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1' \
  --initialize-insecure

[polarx@c79-model polardbx-engine]$ sh init-cm.sh
  • node 2 init
[polarx@openeuler-model ~]$ cd polardbx-engine
[polarx@openeuler-model polardbx-engine]$ cat init-om.sh

/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2' \
  --initialize-insecure

[polarx@openeuler-model polardbx-engine]$ sh init-om.sh
  • node 3 init
[polarx@kylinv10sp1 ~]$ cd polardbx-engine
[polarx@kylinv10sp1 polardbx-engine]$ cat init-km.sh

/opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3' \
  --initialize-insecure

[polarx@kylinv10sp1 polardbx-engine]$ sh init-km.sh
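All three init scripts pass the same member list and differ only in the trailing @N, the node's own 1-based position in that list. A small helper that assembles the --cluster-info string (IPs and consensus port as used in this article):

```shell
#!/usr/bin/env bash
# Build the --cluster-info value passed to mysqld at init/start time.
# Format, as used throughout this article:
#   "ip1:port;ip2:port;ip3:port@<1-based local node index>"
NODES=(192.168.122.58 192.168.122.10 192.168.122.225)  # this article's three nodes
PAXOS_PORT=14886

build_cluster_info() {   # usage: build_cluster_info <local-node-index>
  local out="" ip
  for ip in "${NODES[@]}"; do
    out="${out:+$out;}$ip:$PAXOS_PORT"   # join members with ';'
  done
  printf '%s@%s\n' "$out" "$1"
}

build_cluster_info 2
# -> 192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2
```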

4. Start the cluster, node by node

  • node 1 start
[polarx@c79-model ~]$ cd polardbx-engine
[polarx@c79-model polardbx-engine]$ cat start-cm.sh

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1' \
  &

[polarx@c79-model polardbx-engine]$ sh start-cm.sh
  • node 2 start
[polarx@openeuler-model ~]$ cd polardbx-engine
[polarx@openeuler-model polardbx-engine]$ cat start-om.sh

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2' \
  &

[polarx@openeuler-model polardbx-engine]$ sh start-om.sh
  • node 3 start
[polarx@kylinv10sp1 ~]$ cd polardbx-engine
[polarx@kylinv10sp1 polardbx-engine]$ cat start-km.sh

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3' \
  &

[polarx@kylinv10sp1 polardbx-engine]$ sh start-km.sh
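mysqld can take a few seconds after startup before it listens on its ports (4886 for SQL, 14886 for consensus, per the my.cnf above). A generic bash sketch, not part of the PolarDB-X tooling, that polls a port before moving on to the next step:

```shell
#!/usr/bin/env bash
# Poll a TCP port until it accepts connections or we give up.
# Uses bash's /dev/tcp pseudo-device; prints "up" or "down".
wait_for_port() {   # usage: wait_for_port <host> <port> <max_tries>
  local i
  for (( i = 0; i < $3; i++ )); do
    # the subshell opens and immediately closes the probe connection
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
      echo up; return 0
    fi
    sleep 1
  done
  echo down; return 1
}
```

For example, `wait_for_port 127.0.0.1 4886 30` would block for up to roughly 30 seconds waiting for the local SQL port.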

5. Process & port checks

5.1 node 1

  • process
[polarx@c79-model polardbx-engine]$ ps -ef|grep mysql
polarx    4870     1  0 Jan12 pts/0    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1
polarx    5323  5322  0 Jan12 pts/2    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx    5436  4870  1 Jan12 pts/0    00:16:50 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx   13637  1791  0 11:32 pts/0    00:00:00 grep --color=auto mysql
[polarx@c79-model polardbx-engine]$
  • ports
[polarx@c79-model polardbx-engine]$ netstat -antp|grep 4886
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 192.168.122.58:34896    192.168.122.225:14886   ESTABLISHED 5436/mysqld
tcp        0      0 192.168.122.58:37996    192.168.122.10:14886    ESTABLISHED 5436/mysqld
tcp6       0      0 :::4886                 :::*                    LISTEN      5436/mysqld
tcp6       0      0 :::14886                :::*                    LISTEN      5436/mysqld
tcp6       0      0 192.168.122.58:14886    192.168.122.10:36194    ESTABLISHED 5436/mysqld
tcp6       0      0 192.168.122.58:14886    192.168.122.225:10094   ESTABLISHED 5436/mysqld
[polarx@c79-model polardbx-engine]$

5.2 node 2

  • process
[polarx@openeuler-model polardbx-engine]$ ps -ef|grep mysql
polarx     10446       1  0 Jan12 pts/0    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2
polarx     10783   10446  0 Jan12 pts/0    00:10:47 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     12036    2671  0 11:36 pts/0    00:00:00 grep --color=auto mysql
[polarx@openeuler-model polardbx-engine]$
  • ports
[polarx@openeuler-model polardbx-engine]$ netstat -antp|grep 4886
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 192.168.122.10:36194    192.168.122.58:14886    ESTABLISHED 10783/mysqld
tcp        0      0 192.168.122.10:33736    192.168.122.225:14886   ESTABLISHED 10783/mysqld
tcp6       0      0 :::4886                 :::*                    LISTEN      10783/mysqld
tcp6       0      0 :::14886                :::*                    LISTEN      10783/mysqld
tcp6       0      0 192.168.122.10:14886    192.168.122.225:19530   ESTABLISHED 10783/mysqld
tcp6       0      0 192.168.122.10:14886    192.168.122.58:37996    ESTABLISHED 10783/mysqld
[polarx@openeuler-model polardbx-engine]$

5.3 node 3

  • process
[polarx@kylinv10sp1 polardbx-engine]$ ps -ef|grep mysql
polarx     14703       1  0 Jan12 pts/1    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3
polarx     15035   14703  0 Jan12 pts/1    00:03:49 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     18356    3987  0 11:37 pts/1    00:00:00 grep mysql
[polarx@kylinv10sp1 polardbx-engine]$
  • ports
[polarx@kylinv10sp1 polardbx-engine]$ netstat -antp|grep 4886
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 192.168.122.225:19530   192.168.122.10:14886    ESTABLISHED 15035/mysqld
tcp        0      0 192.168.122.225:10094   192.168.122.58:14886    ESTABLISHED 15035/mysqld
tcp6       0      0 :::14886                :::*                    LISTEN      15035/mysqld
tcp6       0      0 :::4886                 :::*                    LISTEN      15035/mysqld
tcp6       0      0 192.168.122.225:14886   192.168.122.10:33736    ESTABLISHED 15035/mysqld
[polarx@kylinv10sp1 polardbx-engine]$

6. Client connection tests

  • node 1
[polarx@c79-model ~]$ cat login_polardb-x.sh
/opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
[polarx@c79-model ~]$
[polarx@c79-model ~]$ sh login_polardb-x.sh
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.32-X-Cluster-8.4.19-20250825 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
  • node 2
[polarx@openeuler-model ~]$ cat login_polardb-x.sh
/opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
[polarx@openeuler-model ~]$ sh login_polardb-x.sh
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 14
Server version: 8.0.32-X-Cluster-8.4.19-20250825 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
  • node 3
[polarx@kylinv10sp1 ~]$ cat login_polardb-x.sh
/opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
[polarx@kylinv10sp1 ~]$ sh login_polardb-x.sh
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.32-X-Cluster-8.4.19-20250825 Source distribution

Copyright (c) 2000, 2023, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

7. Leader/follower role change

7.1 Before the change

  • node 1: 192.168.122.58
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 1
       CURRENT_TERM: 6
     CURRENT_LEADER: 192.168.122.225:14886
       COMMIT_INDEX: 8
      LAST_LOG_TERM: 6
     LAST_LOG_INDEX: 8
               ROLE: Follower
          VOTED_FOR: 3
   LAST_APPLY_INDEX: 8
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: Yes
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>
  • node 2: 192.168.122.10
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 2
       CURRENT_TERM: 6
     CURRENT_LEADER: 192.168.122.225:14886
       COMMIT_INDEX: 8
      LAST_LOG_TERM: 6
     LAST_LOG_INDEX: 8
               ROLE: Follower
          VOTED_FOR: 0
   LAST_APPLY_INDEX: 8
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: Yes
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>
  • node 3: 192.168.122.225
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 3
       CURRENT_TERM: 6
     CURRENT_LEADER: 192.168.122.225:14886
       COMMIT_INDEX: 8
      LAST_LOG_TERM: 6
     LAST_LOG_INDEX: 8
               ROLE: Leader
          VOTED_FOR: 3
   LAST_APPLY_INDEX: 8
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: No
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>
  • check all 3 nodes from the leader
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
| SERVER_ID | IP_PORT               | MATCH_INDEX | NEXT_INDEX | ROLE     | HAS_VOTED | FORCE_SYNC | ELECTION_WEIGHT | LEARNER_SOURCE | APPLIED_INDEX | PIPELINING | SEND_APPLIED | INSTANCE_TYPE | DISABLE_ELECTION | SERVER_IP | SERVER_PORT |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
|         1 | 192.168.122.58:14886  |           8 |          9 | Follower | Yes       | No         |               5 |              0 |             8 | Yes        | No           | Normal        | No               |           |           0 |
|         2 | 192.168.122.10:14886  |           8 |          9 | Follower | No        | No         |               5 |              0 |             8 | Yes        | No           | Normal        | No               |           |           0 |
|         3 | 192.168.122.225:14886 |           8 |          0 | Leader   | Yes       | No         |               5 |              0 |             8 | No         | No           | Normal        | No               |           |           0 |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
3 rows in set (0.00 sec)

mysql>
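The \G output above is convenient to post-process in health-check scripts. A small awk sketch that extracts ROLE and CURRENT_LEADER from the ALISQL_CLUSTER_LOCAL output (demonstrated here on a snippet captured above; in practice the mysql client's output would be piped in):

```shell
#!/usr/bin/env bash
# Extract ROLE and CURRENT_LEADER from
# `SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G` output.
parse_cluster_local() {
  awk '/^ *CURRENT_LEADER:/ { sub(/^ *CURRENT_LEADER: */, ""); leader = $0 }
       /^ *ROLE:/           { sub(/^ *ROLE: */, "");           role   = $0 }
       END                  { print role, leader }'
}

# Demo with a snippet captured from node 3 above:
parse_cluster_local <<'EOF'
     CURRENT_LEADER: 192.168.122.225:14886
               ROLE: Leader
EOF
# -> Leader 192.168.122.225:14886
```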

7.2 Trigger the change

  • kill the leader (node 3); mysqld_safe is killed first so it cannot restart mysqld
[polarx@kylinv10sp1 polardbx-engine]$ ps -ef|grep mysql
polarx     14703       1  0 Jan12 pts/1    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3
polarx     15035   14703  0 Jan12 pts/1    00:03:52 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     18460   18459  0 11:40 pts/3    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     18489    3987  0 11:47 pts/1    00:00:00 grep mysql
[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$ kill -9 14703
[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$ ps -ef|grep mysql
polarx     15035       1  0 Jan12 pts/1    00:03:52 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     18460   18459  0 11:40 pts/3    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     18537    3987  0 11:48 pts/1    00:00:00 grep mysql
[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$ kill -9 15035
[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$ ps -ef|grep mysql
polarx     18460   18459  0 11:40 pts/3    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     18612    3987  0 11:48 pts/1    00:00:00 grep mysql
[polarx@kylinv10sp1 polardbx-engine]$

7.3 After the change

  • node 3
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:4886' (111)
ERROR:
Can't connect to the server

mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:4886' (111)
ERROR:
Can't connect to the server

mysql>
  • node 2
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 2
       CURRENT_TERM: 9
     CURRENT_LEADER: 192.168.122.10:14886
       COMMIT_INDEX: 9
      LAST_LOG_TERM: 9
     LAST_LOG_INDEX: 9
               ROLE: Leader
          VOTED_FOR: 2
   LAST_APPLY_INDEX: 9
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: No
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>
  • node 1
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 1
       CURRENT_TERM: 9
     CURRENT_LEADER: 192.168.122.10:14886
       COMMIT_INDEX: 9
      LAST_LOG_TERM: 9
     LAST_LOG_INDEX: 9
               ROLE: Follower
          VOTED_FOR: 0
   LAST_APPLY_INDEX: 9
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: Yes
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.01 sec)

mysql>
  • check the cluster from the new leader
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
| SERVER_ID | IP_PORT               | MATCH_INDEX | NEXT_INDEX | ROLE     | HAS_VOTED | FORCE_SYNC | ELECTION_WEIGHT | LEARNER_SOURCE | APPLIED_INDEX | PIPELINING | SEND_APPLIED | INSTANCE_TYPE | DISABLE_ELECTION | SERVER_IP | SERVER_PORT |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
|         1 | 192.168.122.58:14886  |           9 |         10 | Follower | Yes       | No         |               5 |              0 |             9 | Yes        | No           | Normal        | No               |           |           0 |
|         2 | 192.168.122.10:14886  |           9 |          0 | Leader   | Yes       | No         |               5 |              0 |             9 | No         | No           | Normal        | No               |           |           0 |
|         3 | 192.168.122.225:14886 |           0 |          9 | Follower | No        | No         |               5 |              0 |             0 | Yes        | No           | Normal        | No               |           |           0 |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
3 rows in set (0.00 sec)

mysql>

7.4 Re-add the follower

  • start node 3
[polarx@kylinv10sp1 polardbx-engine]$ ps -ef|grep mysql
polarx     18460   18459  0 11:40 pts/3    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     18644    3987  0 11:54 pts/1    00:00:00 grep mysql
[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$ cat start-km.sh

/opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf \
  --cluster-info='192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3' \
  &

[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$ sh start-km.sh
[polarx@kylinv10sp1 polardbx-engine]$ 2026-01-13T03:54:48.214727Z mysqld_safe Logging to '/home/polarx/polardbx-engine/log/alert.log'.
2026-01-13T03:54:48.229089Z mysqld_safe Starting mysqld daemon with databases from /home/polarx/polardbx-engine/data

[polarx@kylinv10sp1 polardbx-engine]$ ps -ef|grep mysql
polarx     18460   18459  0 11:40 pts/3    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     18697       1  0 11:54 pts/1    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3
polarx     19034   18697 16 11:54 pts/1    00:00:00 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@3 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     19119    3987  0 11:54 pts/1    00:00:00 grep mysql
[polarx@kylinv10sp1 polardbx-engine]$
[polarx@kylinv10sp1 polardbx-engine]$
  • check node 3
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
No connection. Trying to reconnect...
Connection id:    13
Current database: *** NONE ***

*************************** 1. row ***************************
          SERVER_ID: 3
       CURRENT_TERM: 9
     CURRENT_LEADER: 192.168.122.10:14886
       COMMIT_INDEX: 9
      LAST_LOG_TERM: 9
     LAST_LOG_INDEX: 9
               ROLE: Follower
          VOTED_FOR: 0
   LAST_APPLY_INDEX: 9
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: Yes
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>
  • check the cluster from the leader (node 2)
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
| SERVER_ID | IP_PORT               | MATCH_INDEX | NEXT_INDEX | ROLE     | HAS_VOTED | FORCE_SYNC | ELECTION_WEIGHT | LEARNER_SOURCE | APPLIED_INDEX | PIPELINING | SEND_APPLIED | INSTANCE_TYPE | DISABLE_ELECTION | SERVER_IP | SERVER_PORT |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
|         1 | 192.168.122.58:14886  |           9 |         10 | Follower | Yes       | No         |               5 |              0 |             9 | Yes        | No           | Normal        | No               |           |           0 |
|         2 | 192.168.122.10:14886  |           9 |          0 | Leader   | Yes       | No         |               5 |              0 |             9 | No         | No           | Normal        | No               |           |           0 |
|         3 | 192.168.122.225:14886 |           9 |         10 | Follower | No        | No         |               5 |              0 |             9 | Yes        | No           | Normal        | No               |           |           0 |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
3 rows in set (0.00 sec)

mysql>

8. Database/table tests

8.1 First data test: only the leader can insert

  • initialize data on the leader (node 2)
CREATE DATABASE db1;
USE db1;
CREATE TABLE tb1 (id int);
INSERT INTO db1.tb1 VALUES (0), (1), (2);
SELECT * FROM db1.tb1;
  • node 2 query
mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
+------+
3 rows in set (0.00 sec)

mysql>
  • node 1 query
mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
+------+
3 rows in set (0.00 sec)

mysql>
  • node 3 query
mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
+------+
3 rows in set (0.00 sec)

mysql>
  • inserting on node 1 (a follower) fails; nothing is written to the table
mysql> INSERT INTO db1.tb1 VALUES (3), (4), (5);
ERROR 7557 (HY000): The consensus follower is not allowed to to do current operation.
mysql>
  • inserting on node 3 (a follower) fails; nothing is written to the table
mysql> INSERT INTO db1.tb1 VALUES (3), (4), (5);
ERROR 7557 (HY000): The consensus follower is not allowed to to do current operation.
mysql>
  • inserting on node 2 (the leader) succeeds
mysql> INSERT INTO db1.tb1 VALUES (3), (4), (5);
Query OK, 3 rows affected (0.05 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql>
  • then node 2 (leader) query
mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
+------+
6 rows in set (0.01 sec)

mysql>
  • then node 1 query
mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
+------+
6 rows in set (0.00 sec)

mysql>
  • then node 3 query
mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
+------+
6 rows in set (0.01 sec)

mysql>
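Because followers reject DML with ERROR 7557, write traffic has to target whichever node currently reports ROLE = Leader. A sketch of that selection step, operating on "endpoint role" lines such as could be assembled from the per-node queries above (the endpoints are this article's; the listing format itself is an assumption of the sketch):

```shell
#!/usr/bin/env bash
# Pick the write endpoint from "<endpoint> <role>" lines:
# print the Leader, fail if none is present (e.g. mid-election).
pick_leader() {
  awk '$2 == "Leader" { print $1; found = 1 } END { exit !found }'
}

pick_leader <<'EOF'
192.168.122.58:4886 Follower
192.168.122.10:4886 Leader
192.168.122.225:4886 Follower
EOF
# -> 192.168.122.10:4886
```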

8.2 Second data test

When leadership moves to another node, the new leader can accept inserts.

  • kill the old leader's mysqld (node 2); mysqld_safe restarts it, but leadership moves in the meantime
[polarx@openeuler-model polardbx-engine]$ ps -ef|grep mysql
polarx     10446       1  0 Jan12 pts/0    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2
polarx     10783   10446  0 Jan12 pts/0    00:11:26 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     12089   12088  0 11:40 pts/2    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     12237    2671  0 14:15 pts/0    00:00:00 grep --color=auto mysql
[polarx@openeuler-model polardbx-engine]$
[polarx@openeuler-model polardbx-engine]$ kill -9 10783
[polarx@openeuler-model polardbx-engine]$ /opt/polardbx_engine/bin/mysqld_safe: line 205: 10783 Killed                  env MYSQLD_PARENT_PID=10446 nohup /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58\:14886\;192.168.122.10\:14886\;192.168.122.225\:14886\@2 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886 < /dev/null >> /home/polarx/polardbx-engine/log/alert.log 2>&1
2026-01-13T06:16:01.346463Z mysqld_safe Rename corefile from  to
2026-01-13T06:16:01.354783Z mysqld_safe Number of processes running now: 0
2026-01-13T06:16:01.358661Z mysqld_safe mysqld restarted

[polarx@openeuler-model polardbx-engine]$ ps -ef|grep mysql
polarx     10446       1  0 Jan12 pts/0    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2
polarx     12089   12088  0 11:40 pts/2    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx     12301   10446  5 14:16 pts/0    00:00:00 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@2 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx     12386    2671  0 14:16 pts/0    00:00:00 grep --color=auto mysql
[polarx@openeuler-model polardbx-engine]$
  • check new leader

node 2

mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 2
       CURRENT_TERM: 10
     CURRENT_LEADER: 192.168.122.58:14886
       COMMIT_INDEX: 10
      LAST_LOG_TERM: 10
     LAST_LOG_INDEX: 10
               ROLE: Follower
          VOTED_FOR: 1
   LAST_APPLY_INDEX: 10
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: Yes
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>

node 3

mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 3
       CURRENT_TERM: 10
     CURRENT_LEADER: 192.168.122.58:14886
       COMMIT_INDEX: 10
      LAST_LOG_TERM: 10
     LAST_LOG_INDEX: 10
               ROLE: Follower
          VOTED_FOR: 3
   LAST_APPLY_INDEX: 10
SERVER_READY_FOR_RW: No
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: Yes
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>

node 1

mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 1
       CURRENT_TERM: 10
     CURRENT_LEADER: 192.168.122.58:14886
       COMMIT_INDEX: 10
      LAST_LOG_TERM: 10
     LAST_LOG_INDEX: 10
               ROLE: Leader
          VOTED_FOR: 1
   LAST_APPLY_INDEX: 10
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: No
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.00 sec)

mysql>

So the new leader is node 1.

  • check cluster by node 1 (leader)
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL;
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
| SERVER_ID | IP_PORT               | MATCH_INDEX | NEXT_INDEX | ROLE     | HAS_VOTED | FORCE_SYNC | ELECTION_WEIGHT | LEARNER_SOURCE | APPLIED_INDEX | PIPELINING | SEND_APPLIED | INSTANCE_TYPE | DISABLE_ELECTION | SERVER_IP | SERVER_PORT |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
|         1 | 192.168.122.58:14886  |          10 |          0 | Leader   | Yes       | No         |               5 |              0 |            10 | No         | No           | Normal        | No               |           |           0 |
|         2 | 192.168.122.10:14886  |          10 |         11 | Follower | Yes       | No         |               5 |              0 |            10 | Yes        | No           | Normal        | No               |           |           0 |
|         3 | 192.168.122.225:14886 |          10 |         11 | Follower | No        | No         |               5 |              0 |            10 | Yes        | No           | Normal        | No               |           |           0 |
+-----------+-----------------------+-------------+------------+----------+-----------+------------+-----------------+----------------+---------------+------------+--------------+---------------+------------------+-----------+-------------+
3 rows in set (0.00 sec)

mysql>
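The `ALISQL_CLUSTER_GLOBAL` rows above make the leader easy to locate programmatically. A small sketch that scans simplified `{SERVER_ID, IP_PORT, ROLE}` tuples (the real result set has more columns; during an election no row may carry `Leader`, hence the `Optional`):

```java
import java.util.List;
import java.util.Optional;

// Finds the leader endpoint in rows shaped like the ALISQL_CLUSTER_GLOBAL
// output, with each row simplified to {SERVER_ID, IP_PORT, ROLE}.
public class LeaderLookup {
    public static Optional<String> leaderOf(List<String[]> rows) {
        return rows.stream()
                .filter(r -> "Leader".equals(r[2]))   // ROLE column
                .map(r -> r[1])                        // IP_PORT column
                .findFirst();
    }
}
```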
  • then run insert tests

node 2 (follower)

mysql> INSERT INTO db1.tb1 VALUES (6), (7), (8);
ERROR 7557 (HY000): The consensus follower is not allowed to to do current operation.
mysql>

node 3 (follower)

mysql> INSERT INTO db1.tb1 VALUES (6), (7), (8);
ERROR 7557 (HY000): The consensus follower is not allowed to to do current operation.
mysql>

node 1 (leader)

mysql> INSERT INTO db1.tb1 VALUES (6), (7), (8);
Query OK, 3 rows affected (0.06 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql>
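Because followers reject writes with error 7557, a client that only knows the node list can simply walk it until one node accepts the write. The sketch below abstracts the JDBC call behind a predicate; real code would catch `SQLException` and check `getErrorCode() == 7557` before moving on (helper name and shape are mine):

```java
import java.util.List;
import java.util.function.Predicate;

// Walks the node list until a node accepts the write. The predicate stands
// in for "executeUpdate succeeded on this host".
public class WriteFailover {
    public static String writeOnLeader(List<String> hosts, Predicate<String> acceptsWrite) {
        for (String h : hosts) {
            if (acceptsWrite.test(h)) {
                return h; // write landed here: this node is the leader
            }
            // follower refused (error 7557) -> try the next node
        }
        throw new IllegalStateException("no writable node found");
    }
}
```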
  • then query from all 3 nodes

node 1 (leader) query

mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
|    6 |
|    7 |
|    8 |
+------+
9 rows in set (0.00 sec)

mysql>

node 2 (follower) query

mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
|    6 |
|    7 |
|    8 |
+------+
9 rows in set (0.00 sec)

mysql>

node 3 (follower) query

mysql> select * from db1.tb1;
+------+
| id   |
+------+
|    0 |
|    1 |
|    2 |
|    3 |
|    4 |
|    5 |
|    6 |
|    7 |
|    8 |
+------+
9 rows in set (0.00 sec)

mysql>
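All three nodes return identical rows for this query, so read traffic can be spread across them. A minimal round-robin picker (a sketch only; a production client should additionally skip nodes that fail health checks):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Cycles through the node list for read queries, one node per call.
public class ReadBalancer {
    private final List<String> nodes;
    private final AtomicInteger next = new AtomicInteger();

    public ReadBalancer(List<String> nodes) {
        this.nodes = new ArrayList<>(nodes);
    }

    public String pick() {
        return nodes.get(Math.floorMod(next.getAndIncrement(), nodes.size()));
    }
}
```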

9. set PolarDB-X root password

  • set it only on the leader node; it auto-syncs to the other 2 nodes
mysql> select host,user from mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
| localhost | root             |
+-----------+------------------+
4 rows in set (0.00 sec)

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.05 sec)

mysql>
mysql> update mysql.user set host = '%' where user = 'root';
Query OK, 1 row affected (0.05 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.02 sec)

mysql> select host,user from mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| %         | root             |
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
+-----------+------------------+
4 rows in set (0.00 sec)

mysql>
  • check on the other 2 nodes

node 2

mysql> select host,user from mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
| localhost | root             |
+-----------+------------------+
4 rows in set (0.00 sec)

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
ERROR 7557 (HY000): The consensus follower is not allowed to to do current operation.
mysql>
mysql> select host,user from mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| %         | root             |
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
+-----------+------------------+
4 rows in set (0.00 sec)

mysql>

node 3

mysql> select host,user from mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
| localhost | root             |
+-----------+------------------+
4 rows in set (0.00 sec)

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
ERROR 7557 (HY000): The consensus follower is not allowed to to do current operation.
mysql>
mysql> select host,user from mysql.user;
+-----------+------------------+
| host      | user             |
+-----------+------------------+
| %         | root             |
| localhost | mysql.infoschema |
| localhost | mysql.session    |
| localhost | mysql.sys        |
+-----------+------------------+
4 rows in set (0.00 sec)

mysql>

10. App tests against the PolarDB-X cluster

JDK 8 + Maven 3

Create a new Maven project in IDEA and add these dependencies:

    <dependencies>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>8.0.33</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba.polardbx</groupId>
            <artifactId>polardbx-connector-java</artifactId>
            <version>2.1.1</version>
        </dependency>
    </dependencies>

10.1 query data

  • select 1
package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL      = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER     = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL      = "SELECT 1";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(SQL)) {

            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();

            StringJoiner head = new StringJoiner("\t");
            for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
            System.out.println(head);

            while (rs.next()) {
                StringJoiner row = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                System.out.println(row);
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

1    
1    

Process finished with exit code 0
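The multi-host URL used in `Main` can be assembled from the node list; the `jdbc:polardbx://` scheme routes the connection to the current leader. A tiny helper (the `UrlBuilder` name is mine, not part of the driver):

```java
import java.util.List;

// Joins host:port pairs into a jdbc:polardbx:// multi-host URL.
public class UrlBuilder {
    public static String polardbxUrl(List<String> hostPorts, String database) {
        return "jdbc:polardbx://" + String.join(",", hostPorts) + "/" + database;
    }
}
```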
  • query cluster
package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL      = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER     = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL      = "SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(SQL)) {

            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();

            StringJoiner head = new StringJoiner("\t");
            for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
            System.out.println(head);

            while (rs.next()) {
                StringJoiner row = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                System.out.println(row);
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

SERVER_ID    IP_PORT    MATCH_INDEX    ROLE    APPLIED_INDEX    
1    192.168.122.58:14886    14    Leader    14    
2    192.168.122.10:14886    14    Follower    14    
3    192.168.122.225:14886    14    Follower    14    

Process finished with exit code 0
  • query table
package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL      = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER     = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL      = "SELECT id FROM db1.tb1";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(SQL)) {

            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();

            StringJoiner head = new StringJoiner("\t");
            for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
            System.out.println(head);

            while (rs.next()) {
                StringJoiner row = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                System.out.println(row);
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

id    
0    
1    
2    
3    
4    
5    
6    
7    
8    

Process finished with exit code 0

10.2 insert data & query

  • insert data & query
package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL1 = "SELECT id FROM db1.tb1";
    private static final String SQL2 = "INSERT INTO db1.tb1 VALUES (9)";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement()) {

            int rows = stmt.executeUpdate(SQL2);
            System.out.println("已插入 " + rows + " 行\n");

            try (ResultSet rs = stmt.executeQuery(SQL1)) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();

                StringJoiner head = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
                System.out.println(head);

                while (rs.next()) {
                    StringJoiner row = new StringJoiner("\t");
                    for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                    System.out.println(row);
                }
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

已插入 1 行

id
0
1
2
3
4
5
6
7
8
9

Process finished with exit code 0

10.3 kill old leader then insert data & query

  • kill old leader (node 1)
[polarx@c79-model polardbx-engine]$ ps -ef|grep mysql
polarx    4870     1  0 Jan12 pts/0    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1
polarx    5436  4870  1 Jan12 pts/0    00:19:58 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx   13699 13698  0 11:40 pts/2    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx   15752  1791  0 16:08 pts/0    00:00:00 grep --color=auto mysql
[polarx@c79-model polardbx-engine]$
[polarx@c79-model polardbx-engine]$ kill -9 5436
[polarx@c79-model polardbx-engine]$ /opt/polardbx_engine/bin/mysqld_safe: line 205:  5436 Killed                  env MYSQLD_PARENT_PID=4870 env MYSQLD_PARENT_PID=4870 nohup /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58\:14886\;192.168.122.10\:14886\;192.168.122.225\:14886\@1 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886 < /dev/null >> /home/polarx/polardbx-engine/log/alert.log 2>&1 >> /home/polarx/polardbx-engine/log/alert.log 2>&1
2026-01-13T08:08:10.948631Z mysqld_safe Rename corefile from core.5182 to
2026-01-13T08:08:10.956310Z mysqld_safe Number of processes running now: 0
2026-01-13T08:08:10.958417Z mysqld_safe mysqld restarted

[polarx@c79-model polardbx-engine]$ ps -ef|grep mysql
polarx    4870     1  0 Jan12 pts/0    00:00:00 /bin/sh /opt/polardbx_engine/bin/mysqld_safe --defaults-file=my.cnf --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1
polarx   13699 13698  0 11:40 pts/2    00:00:00 /opt/polardbx_engine/bin/mysql -h127.0.0.1 -P4886 -uroot
polarx   15777  4870 23 16:08 pts/0    00:00:00 /opt/polardbx_engine/bin/mysqld --defaults-file=my.cnf --basedir=/opt/polardbx_engine --datadir=/home/polarx/polardbx-engine/data --plugin-dir=/opt/polardbx_engine/lib/plugin --cluster-info=192.168.122.58:14886;192.168.122.10:14886;192.168.122.225:14886@1 --log-error=/home/polarx/polardbx-engine/log/alert.log --pid-file=/home/polarx/polardbx-engine/run/mysql.pid --socket=/home/polarx/polardbx-engine/tmp.mysql.sock --port=4886
polarx   15833  1791  0 16:08 pts/0    00:00:00 grep --color=auto mysql
[polarx@c79-model polardbx-engine]$
  • check new leader
mysql> SELECT * FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_LOCAL \G
*************************** 1. row ***************************
          SERVER_ID: 2
       CURRENT_TERM: 11
     CURRENT_LEADER: 192.168.122.10:14886
       COMMIT_INDEX: 16
      LAST_LOG_TERM: 11
     LAST_LOG_INDEX: 16
               ROLE: Leader
          VOTED_FOR: 2
   LAST_APPLY_INDEX: 16
SERVER_READY_FOR_RW: Yes
      INSTANCE_TYPE: Normal
   DISABLE_ELECTION: No
      APPLY_RUNNING: No
          LEADER_IP:
        LEADER_PORT: 0
1 row in set (0.01 sec)

mysql>

new leader is node 2

  • then test from code

check PolarDB-X Cluster

package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL0 = "SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement()) {

            try (ResultSet rs = stmt.executeQuery(SQL0)) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();

                StringJoiner head = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
                System.out.println(head);

                while (rs.next()) {
                    StringJoiner row = new StringJoiner("\t");
                    for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                    System.out.println(row);
                }
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

SERVER_ID    IP_PORT    MATCH_INDEX    ROLE    APPLIED_INDEX
1    192.168.122.58:14886    19    Follower    19
2    192.168.122.10:14886    19    Leader    19
3    192.168.122.225:14886    19    Follower    19

Process finished with exit code 0
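While the leader switch is in progress a client can hit transient failures until the new leader reports `SERVER_READY_FOR_RW: Yes`. A bounded-retry sketch (the supplier stands in for "connect and run the statement", and any `RuntimeException` counts as a transient failure; names are mine):

```java
import java.util.function.Supplier;

// Retries an operation up to maxAttempts times, rethrowing the last
// failure if none of the attempts succeeds.
public class Retry {
    public static <T> T withRetries(Supplier<T> op, int maxAttempts) {
        RuntimeException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return op.get();
            } catch (RuntimeException e) {
                last = e; // transient failure: retry
            }
        }
        throw last;
    }
}
```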

insert data again

package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL0 = "SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL";
    private static final String SQL1 = "SELECT id FROM db1.tb1";
    private static final String SQL2 = "INSERT INTO db1.tb1 VALUES (666)";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement()) {

            int rows = stmt.executeUpdate(SQL2);
            System.out.println("已插入 " + rows + " 行\n");

            try (ResultSet rs = stmt.executeQuery(SQL1)) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();

                StringJoiner head = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
                System.out.println(head);

                while (rs.next()) {
                    StringJoiner row = new StringJoiner("\t");
                    for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                    System.out.println(row);
                }
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

已插入 1 行

id
0
1
2
3
4
5
6
7
8
9
666

Process finished with exit code 0

10.4 update data

package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL0 = "SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL";
    private static final String SQL1 = "SELECT id FROM db1.tb1";
    private static final String SQL2 = "INSERT INTO db1.tb1 VALUES (666)";
    private static final String SQL3 = "UPDATE db1.tb1 SET id=99 WHERE id=9";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement()) {

            int rows = stmt.executeUpdate(SQL3);
            System.out.println("已更新 " + rows + " 行\n");

            try (ResultSet rs = stmt.executeQuery(SQL1)) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();

                StringJoiner head = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
                System.out.println(head);

                while (rs.next()) {
                    StringJoiner row = new StringJoiner("\t");
                    for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                    System.out.println(row);
                }
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

已更新 1 行

id
0
1
2
3
4
5
6
7
8
99
666

Process finished with exit code 0

10.5 delete data

package org.example;

import java.sql.*;
import java.util.StringJoiner;

public class Main {

    private static final String URL = "jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/";
    private static final String USER = "root";
    private static final String PASSWORD = "123456";
    private static final String SQL0 = "SELECT SERVER_ID,IP_PORT,MATCH_INDEX,ROLE,APPLIED_INDEX FROM INFORMATION_SCHEMA.ALISQL_CLUSTER_GLOBAL";
    private static final String SQL1 = "SELECT id FROM db1.tb1";
    private static final String SQL2 = "INSERT INTO db1.tb1 VALUES (666)";
    private static final String SQL3 = "UPDATE db1.tb1 SET id=99 WHERE id=9";
    private static final String SQL4 = "DELETE FROM db1.tb1 WHERE id=666";

    public static void main(String[] args) {

        try {
            Class.forName("com.alibaba.polardbx.Driver");
        } catch (ClassNotFoundException e) {
            System.err.println("PolarDB-X 驱动未找到,请检查依赖");
            e.printStackTrace();
            return;
        }

        try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
             Statement stmt = conn.createStatement()) {

            int rows = stmt.executeUpdate(SQL4);
            System.out.println("已删除 " + rows + " 行\n");

            try (ResultSet rs = stmt.executeQuery(SQL1)) {
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();

                StringJoiner head = new StringJoiner("\t");
                for (int i = 1; i <= cols; i++) head.add(md.getColumnName(i));
                System.out.println(head);

                while (rs.next()) {
                    StringJoiner row = new StringJoiner("\t");
                    for (int i = 1; i <= cols; i++) row.add(String.valueOf(rs.getObject(i)));
                    System.out.println(row);
                }
            }
        } catch (SQLException e) {
            System.err.println("DB error: " + e.getMessage());
            e.printStackTrace();
        }
    }
}

result

已删除 1 行

id
0
1
2
3
4
5
6
7
8
99

Process finished with exit code 0

11. references

https://www.modb.pro/db/1866671346403524608

Driver class: com.alibaba.polardbx.Driver
JDBC URL example: jdbc:polardbx://192.168.122.58:4886,192.168.122.10:4886,192.168.122.225:4886/
Protocol scheme: polardbx
Host/port list: a single IP:port (or VIP:port) of one of the standard edition's three nodes, or several comma-separated entries IP0:port0,IP1:port1 (all must belong to the same cluster); on connect the client is routed to the leader node.
