Compiling and Deploying Open-Source PolarDB-X on Ubuntu 24

Summary: About the author: Wei Wei, distributed-storage development engineer at Zhejiang Uniview Technologies Co., Ltd. Uniview is a global provider of AIoT products, solutions, and full-stack capabilities, built around "ABCI" technologies (AI, BigData, Cloud, IoT).

Background

To make things easier for developers on Ubuntu and similar operating systems, and at the community's invitation, this article provides a source-compilation guide for Ubuntu.

PolarDB Distributed Edition (PolarDB-X for short) is built on storage-compute separation with a unified centralized/distributed architecture. Its multi-replica distributed storage nodes (DN) can also serve on their own as the Standard Edition (the centralized form). It is 100% compatible with MySQL syntax and features and supports multiple MySQL versions, including 5.7 and 8.0 (open-source PolarDB-X is compatible with MySQL 8.0 only). Its architecture is shown in the figure below; for more on the product's features, see the official Alibaba documentation.

This article compiles and deploys PolarDB-X from the open-source V2.4.2 source code, covering the DN, CN, and glue builds. If you only need the centralized form of PolarDB-X, compiling the DN source code is enough.

Open-source PolarDB-X code repositories: https://github.com/polardb

Environment Preparation

Operating system:

This build uses Ubuntu 24.04.3 LTS; the system details are as follows:

root@iZbp1b2hnsd51fdnun2vwzZ:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.3 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo

Compilation environment

The build depends on:

  • GCC/G++: 10
  • CMake: 3.28
  • Java: OpenJDK 11

Install the dependencies:

sudo apt update
# Install CMake
sudo apt install -y cmake
# Install OpenJDK 11
sudo apt install -y openjdk-11-jdk
# Install other libraries that may be needed
sudo apt install -y git make wget automake bison libssl-dev libncurses5-dev libaio-dev mysql-client libsnappy-dev liblz4-dev libbz2-dev autoconf libarchive-dev
# Install GCC 10
sudo apt install -y gcc-10 g++-10
# Verify the installation
# Check CMake; it should print "cmake version 3.28.3"
cmake --version
# Check Java; both should report OpenJDK 11
java --version
javac --version
# Check the key dependencies
apt list --installed | grep -E "libssl-dev|libncurses5-dev|libaio-dev|libsnappy-dev|liblz4-dev|libbz2-dev|libarchive-dev"
# Tip: if gcc/g++ --version does not report 10, switch to version 10 as follows:
# 1. Confirm gcc-10/g++-10 are installed (if not, see the installation commands above)
which gcc-10 g++-10
# 2. Register gcc/g++ 10 as alternatives
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 100
# 3. Select GCC 10 explicitly
sudo update-alternatives --set gcc /usr/bin/gcc-10
sudo update-alternatives --set g++ /usr/bin/g++-10
# 4. Verify the switch
gcc --version  # must report gcc-10
g++ --version  # must report g++-10
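The version checks above can be collected into a small script that reports every mismatch at once (a minimal sketch; the expected major versions are the ones listed in this section, and the helper names are illustrative):

```shell
#!/usr/bin/env bash
# Quick toolchain sanity check before building PolarDB-X.

# Extract the major version number from a tool's --version banner.
major_of() {
    grep -oE '[0-9]+(\.[0-9]+)*' <<<"$1" | head -n1 | cut -d. -f1
}

# check <command> <expected-major>: warns instead of failing so every tool gets reported.
check() {
    if ! command -v "$1" >/dev/null 2>&1; then
        echo "MISSING: $1"
        return 0
    fi
    local got
    got=$(major_of "$("$1" --version 2>/dev/null | head -n1)")
    if [ "$got" = "$2" ]; then
        echo "OK: $1 (major version $got)"
    else
        echo "WARN: $1 major version is '$got', this guide assumes $2"
    fi
}

check gcc 10
check g++ 10
check java 11
check cmake 3
```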

Compile the PolarDB-X DN (Data Node) Source Code

To build a Standard Edition (centralized) PolarDB-X package yourself, you only need to compile the open-source PolarDB-X DN source code.

Open-source PolarDB-X DN source: https://github.com/polardb/polardbx-engine

# 1. Create a dedicated user
    useradd -ms /bin/bash polarx
    echo "polarx:polarx" | sudo chpasswd
# 2. Grant the polarx user sudo privileges
    echo "polarx    ALL=(ALL)    NOPASSWD: ALL" >> /etc/sudoers
    su - polarx
# 3. Clone the source (clone as the polarx user to avoid permission problems later)
    git clone https://github.com/polardb/polardbx-engine.git
    cd polardbx-engine

    # If you hit permission problems, change the owner and group of the cloned files to polarx:
    sudo chown -R polarx:polarx /home/polarx/polardbx-*

    # Reassemble the split Boost archive volumes into a single complete tarball
    cat extra/boost/boost_1_77_0.tar.bz2.*  > extra/boost/boost_1_77_0.tar.bz2
# 4. Build with bash
    bash build.sh -t release
    # The following libraries were missing during the build; install them in advance
    sudo apt install -y pkg-config
    sudo apt install -y libtirpc-dev
# 5. Move the build output. It lands in /home/polarx/tmp_run (or tmp_run under the home directory of whichever user ran the build); move it to another directory:
    mkdir -p /home/polarx/run
    mv /home/polarx/tmp_run /home/polarx/run/polardbx-engine

# Tip: if the build fails, clean up before rebuilding
    rm -rf build/* 2>/dev/null
    rm -rf tmp_run 2>/dev/null

Compile the PolarDB-X CN (Compute Node) Source Code

This step builds and installs the polardbx-sql and polardbx-glue code; glue is a submodule of the CN.

Open-source PolarDB-X CN source: https://github.com/polardb/polardbx-sql

# 1. Install Maven 3 first. The compute layer is written in Java, and Maven is the build and dependency-management tool for Java projects
    sudo apt install -y maven

# 2. Initialize the submodules
    cd polardbx-sql
    # This makes sure the submodules polardbx-sql depends on (such as polardbx-glue) are initialized and downloaded. Without this, the polardbx-sql build fails on missing dependencies
    git submodule update --init
# 3. Build and package. The Java version used here must match the one used later to run the CN, otherwise it will fail
    mvn install -DskipTests -D env=release 
# 4. Unpack for running (adjust the target path as needed)
    mkdir -p /home/polarx/run/polardbx-sql
    cp target/polardbx-server-*.tar.gz /home/polarx/run/polardbx-sql/
    cd /home/polarx/run/polardbx-sql
    tar xzvf polardbx-server-*.tar.gz
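Before moving on, it can help to confirm that the unpacked tree contains the files the later steps rely on (a minimal sketch; `check_layout` is an illustrative helper, and the two paths are the ones this guide uses below):

```shell
#!/usr/bin/env bash
# check_layout <install-root>: confirm the files the startup steps rely on are present.
check_layout() {
    local root="$1" missing=0 f
    for f in bin/startup.sh conf/server.properties; do
        if [ -f "$root/$f" ]; then
            echo "found:   $root/$f"
        else
            echo "missing: $root/$f"
            missing=1
        fi
    done
    return "$missing"
}

# Usage (path from this guide):
# check_layout /home/polarx/run/polardbx-sql
```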

Start the DN

  • This step starts a mysql process that serves as both the metadb and the DN
  • my.cnf is the mysqld configuration file and needs to be edited; see the example my.cnf in the appendix
  • /home/polarx/data is used as the mysql data directory by default; you can change it to another directory
  • Note: the DN must be started as a non-root account; use the polarx user created earlier
# 1. Create the data directories
    mkdir -p /home/polarx/data/{data,log,run,tmp,mysql}
    touch /home/polarx/data/log/mysqld_safe.err
    # Assume my.cnf lives at /home/polarx/data/my.cnf
    # Copy the configuration from the appendix at the bottom of this article into my.cnf
# 2. Start the DN
    /home/polarx/run/polardbx-engine/bin/mysqld --defaults-file=/home/polarx/data/my.cnf --initialize-insecure
    (/home/polarx/run/polardbx-engine/bin/mysqld_safe --defaults-file=/home/polarx/data/my.cnf &)
    # The DN is now running:
    root@iZbp1b2hnsd51fdnun2vwzZ:~# ps -ef | grep mysql
    polarx    438826       1  0 20:30 pts/2    00:00:00 /bin/sh /home/polarx/run/polardbx-engine/bin/mysqld_safe --defaults-file=/home/polarx/data/my.cnf
    polarx    443213  438826  2 20:30 pts/2    00:00:00 /home/polarx/run/polardbx-engine/bin/mysqld --defaults-file=/home/polarx/data/my.cnf --basedir=/home/polarx/run/polardbx-engine --datadir=/home/polarx/data/data --plugin-dir=/home/polarx/run/polardbx-engine/lib/plugin --log-error=/home/polarx/data/log/alert.log --open-files-limit=65535 --pid-file=/home/polarx/data/run/mysql.pid --socket=/home/polarx/data/run/mysql.sock --port=4886
    root      443925  443730  0 20:30 pts/3    00:00:00 grep --color=auto mysql

Start the PolarDB-X CN

Once the mysql process is running, you can initialize PolarDB-X. Prepare the following:

  • metadb user: my_polarx is used below
  • metadb database: create the metadb database; polardbx_meta_db_polardbx is used below
  • password encryption key (dnPasswordKey): asdf1234ghjk5678 is used below
  • PolarDB-X default username: polardbx_root by default
  • PolarDB-X default password: 123456 by default; it can be changed with the -S option

Note: the CN must be started as a non-root account

# 1. Edit /home/polarx/run/polardbx-sql/conf/server.properties and replace the following settings one by one:
    # PolarDB-X service port
    serverPort=8527
    # PolarDB-X RPC port
    rpcPort=9090
    # MetaDB address
    metaDbAddr=127.0.0.1:4886
    # MetaDB private-protocol (X Protocol) port
    metaDbXprotoPort=34886
    # MetaDB user
    metaDbUser=my_polarx
    metaDbName=polardbx_meta_db_polardbx
    # PolarDB-X instance name
    instanceId=polardbx-polardbx
    galaxyXProtocol=2
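Replacing the settings by hand is easy to get wrong; a small idempotent helper can apply them in one pass (a minimal sketch; `set_prop` is an illustrative helper, and the keys and values are the ones listed above):

```shell
#!/usr/bin/env bash
# set_prop <file> <key> <value>: replace key=... if present, append otherwise.
set_prop() {
    local file="$1" key="$2" value="$3"
    if grep -q "^${key}=" "$file"; then
        sed -i "s|^${key}=.*|${key}=${value}|" "$file"
    else
        echo "${key}=${value}" >> "$file"
    fi
}

CONF=/home/polarx/run/polardbx-sql/conf/server.properties  # path from this guide
if [ -f "$CONF" ]; then
    set_prop "$CONF" serverPort 8527
    set_prop "$CONF" rpcPort 9090
    set_prop "$CONF" metaDbAddr 127.0.0.1:4886
    set_prop "$CONF" metaDbXprotoPort 34886
    set_prop "$CONF" metaDbUser my_polarx
    set_prop "$CONF" metaDbName polardbx_meta_db_polardbx
    set_prop "$CONF" instanceId polardbx-polardbx
    set_prop "$CONF" galaxyXProtocol 2
fi
```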

Initialize PolarDB-X:

  • -I: enter initialization mode
  • -P: the dnPasswordKey prepared earlier
  • -d: the DataNode address list; in single-host mode this is the address and port of the mysql process started earlier
  • -r: the password for connecting to the metadb
  • -u: the root user to create for PolarDB-X
  • -S: the password of that root user
# 2. Initialize the CN; this must be done before the CN can be started
    su - polarx
    cd /home/polarx/run/polardbx-sql/bin
    bash startup.sh \
        -I \
        -P asdf1234ghjk5678 \
        -d 127.0.0.1:4886:34886 \
        -r "" \
        -u polardbx_root \
        -S "123456"

    # You should see output similar to:
    Generate password for user: my_polarx && M8%V5%K9^$5%oY0%yC0+&1!J7@8+R6)
    Encrypted password: DB84u4UkU/OYlMzu3aj9NFdknvxYgedFiW9z59bVnoc=
    Root user for polarx with password: polardbx_root && 123456
    Encrypted password for polarx: H1AzXc2NmCs61dNjH5nMvA==
    ======== Paste following configurations to conf/server.properties ! ======= 
    metaDbPasswd=HMqvkvXZtT7XedA6t2IWY8+D7fJWIJir/mIY1Nf1b58= 
    
# 3. Edit conf/server.properties
    # Add the metaDbPasswd=HMqvkvXZtT7XedA6t2IWY8+D7fJWIJir/mIY1Nf1b58= line printed above to conf/server.properties
    vim /home/polarx/run/polardbx-sql/conf/server.properties
# 4. Start the CN
    cd /home/polarx/run/polardbx-sql/bin
    bash startup.sh -P asdf1234ghjk5678

    # Output similar to the following means the CN started successfully
        TDDL_OPTS : -DinitializeGms=false -DforceCleanup=false -DappName=tddl -Dio.grpc.netty.shaded.io.netty.transport.noNative=true -Dio.netty.transport.noNative=true -Dcom.alibaba.wisp.threadAsWisp.black=name:logback-* -Dlogback.configurationFile=/home/polarx/run/polardbx-sql/bin/../conf/logback.xml -Dtddl.conf=/home/polarx/run/polardbx-sql/bin/../conf/server.properties
    start polardb-x
    cd to /home/polarx/run/polardbx-sql/bin for continue
    
    # Check that the CN is running; ps should show polardbx-sql output
    ps -ef | grep polardbx-sql

# 5. Wait a moment, then connect to PolarDB-X
    mysql -h127.1 -P8527 -upolardbx_root
    
    # The following output means the connection succeeded
        ~/run/polardbx-sql/bin$ mysql -h127.1 -P8527 -upolardbx_root
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 65
    Server version: 5.6.29 Tddl Server (ALIBABA)
    Copyright (c) 2000, 2025, Oracle and/or its affiliates.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
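The "wait a moment" in step 5 can be replaced with a poll that returns as soon as the CN port accepts TCP connections (a minimal sketch using bash's /dev/tcp pseudo-device; port 8527 comes from server.properties above, and the timeout value is an arbitrary choice):

```shell
#!/usr/bin/env bash
# wait_for_port <host> <port> <timeout-seconds>: return 0 once the TCP port accepts connections.
wait_for_port() {
    local host="$1" port="$2" timeout="$3" waited=0
    # /dev/tcp is a bash feature; the subshell closes the probe socket on exit
    until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
        waited=$((waited + 1))
        if [ "$waited" -gt "$timeout" ]; then
            echo "timed out waiting for ${host}:${port}" >&2
            return 1
        fi
        sleep 1
    done
    echo "${host}:${port} is ready"
}

# Usage (port from this guide):
# wait_for_port 127.0.0.1 8527 60 && mysql -h127.1 -P8527 -upolardbx_root
```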

Appendix

my.cnf configuration file

[mysqld]
datadir = /home/polarx/data/data
general_log_file = /home/polarx/data/log/general.log
innodb_data_home_dir = /home/polarx/data/mysql
innodb_log_group_home_dir = /home/polarx/data/mysql
log-bin-index = /home/polarx/data/mysql/mysql-bin.index
log_bin = /home/polarx/data/mysql/mysql-bin.log
log_error = /home/polarx/data/log/alert.log
master_info_file = /home/polarx/data/mysql/master.info
relay_log = /home/polarx/data/mysql/slave-relay.log
relay_log_index = /home/polarx/data/mysql/slave-relay-log.index
relay_log_info_file = /home/polarx/data/mysql/slave-relay-log.info
slave_load_tmpdir = /home/polarx/data/tmp
slow_query_log_file = /home/polarx/data/mysql/slow_query.log
socket = /home/polarx/data/run/mysql.sock
tmpdir = /home/polarx/data/tmp
innodb_buffer_pool_size = 1073741824
loose_rpc_port = 34886
port = 4886
loose_cluster-id = 1234
loose_cluster-info = 127.0.0.1:14886@1
auto_increment_increment = 1
auto_increment_offset = 1
autocommit = ON
automatic_sp_privileges = ON
avoid_temporal_upgrade = OFF
back_log = 3000
binlog_cache_size = 1048576
binlog_checksum = CRC32
binlog_order_commits = OFF
binlog_row_image = full
binlog_rows_query_log_events = ON
binlog_stmt_cache_size = 32768
binlog_transaction_dependency_tracking = WRITESET
block_encryption_mode = "aes-128-ecb"
bulk_insert_buffer_size = 4194304
character_set_server = utf8
concurrent_insert = 2
connect_timeout = 10
default_authentication_plugin = mysql_native_password
default_storage_engine = InnoDB
default_time_zone = +8:00
default_week_format = 0
delay_key_write = ON
delayed_insert_limit = 100
delayed_insert_timeout = 300
delayed_queue_size = 1000
disconnect_on_expired_password = ON
div_precision_increment = 4
end_markers_in_json = OFF
enforce_gtid_consistency = ON
eq_range_index_dive_limit = 200
event_scheduler = OFF
expire_logs_days = 0
explicit_defaults_for_timestamp = OFF
flush_time = 0
ft_max_word_len = 84
ft_min_word_len = 4
ft_query_expansion_limit = 20
general_log = OFF
group_concat_max_len = 1024
gtid_mode = ON
host_cache_size = 644
init_connect = ''
innodb_adaptive_flushing = ON
innodb_adaptive_flushing_lwm = 10
innodb_adaptive_hash_index = OFF
innodb_adaptive_max_sleep_delay = 150000
innodb_autoextend_increment = 64
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_chunk_size = 33554432
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_dump_pct = 25
innodb_buffer_pool_instances = 8
innodb_buffer_pool_load_at_startup = ON
innodb_change_buffer_max_size = 25
innodb_change_buffering = none
innodb_checksum_algorithm = crc32
innodb_cmp_per_index_enabled = OFF
innodb_commit_concurrency = 0
innodb_compression_failure_threshold_pct = 5
innodb_compression_level = 6
innodb_compression_pad_pct_max = 50
innodb_concurrency_tickets = 5000
innodb_data_file_purge = ON
innodb_data_file_purge_interval = 100
innodb_data_file_purge_max_size = 128
innodb_deadlock_detect = ON
innodb_disable_sort_file_cache = ON
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_flush_neighbors = 0
innodb_flush_sync = ON
innodb_ft_cache_size = 8000000
innodb_ft_enable_diag_print = OFF
innodb_ft_enable_stopword = ON
innodb_ft_max_token_size = 84
innodb_ft_min_token_size = 3
innodb_ft_num_word_optimize = 2000
innodb_ft_result_cache_limit = 2000000000
innodb_ft_sort_pll_degree = 2
innodb_ft_total_cache_size = 640000000
innodb_io_capacity = 20000
innodb_io_capacity_max = 40000
innodb_lock_wait_timeout = 50
innodb_log_buffer_size = 16777216
innodb_log_checksums = ON
innodb_log_file_size = 134217728
innodb_lru_scan_depth = 8192
innodb_max_dirty_pages_pct = 75
innodb_max_dirty_pages_pct_lwm = 0
innodb_max_purge_lag = 0
innodb_max_purge_lag_delay = 0
innodb_max_undo_log_size = 1073741824
innodb_monitor_disable =
innodb_monitor_enable =
innodb_old_blocks_pct = 37
innodb_old_blocks_time = 1000
innodb_online_alter_log_max_size = 134217728
innodb_open_files = 20000
innodb_optimize_fulltext_only = OFF
innodb_page_cleaners = 4
innodb_print_all_deadlocks = ON
innodb_purge_batch_size = 300
innodb_purge_rseg_truncate_frequency = 128
innodb_purge_threads = 4
innodb_random_read_ahead = OFF
innodb_read_ahead_threshold = 0
innodb_read_io_threads = 4
innodb_rollback_on_timeout = OFF
innodb_rollback_segments = 128
innodb_sort_buffer_size = 1048576
innodb_spin_wait_delay = 6
innodb_stats_auto_recalc = ON
innodb_stats_method = nulls_equal
innodb_stats_on_metadata = OFF
innodb_stats_persistent = ON
innodb_stats_persistent_sample_pages = 20
innodb_stats_transient_sample_pages = 8
innodb_status_output = OFF
innodb_status_output_locks = OFF
innodb_strict_mode = ON
innodb_sync_array_size = 16
innodb_sync_spin_loops = 30
innodb_table_locks = ON
innodb_tcn_cache_level = block
innodb_thread_concurrency = 0
innodb_thread_sleep_delay = 0
innodb_write_io_threads = 4
interactive_timeout = 7200
key_buffer_size = 16777216
key_cache_age_threshold = 300
key_cache_block_size = 1024
key_cache_division_limit = 100
lc_time_names = en_US
local_infile = OFF
lock_wait_timeout = 1800
log_bin_trust_function_creators = ON
log_bin_use_v1_row_events = 0
log_error_verbosity = 2
log_queries_not_using_indexes = OFF
log_slave_updates = 0
log_slow_admin_statements = ON
log_slow_slave_statements = ON
log_throttle_queries_not_using_indexes = 0
long_query_time = 1
loose_ccl_max_waiting_count = 0
loose_ccl_queue_bucket_count = 4
loose_ccl_queue_bucket_size = 64
loose_ccl_wait_timeout = 86400
loose_consensus_auto_leader_transfer = ON
loose_consensus_auto_reset_match_index = ON
loose_consensus_election_timeout = 10000
loose_consensus_io_thread_cnt = 8
loose_consensus_large_trx = ON
loose_consensus_log_cache_size = 536870912
loose_consensus_max_delay_index = 10000
loose_consensus_max_log_size = 20971520
loose_consensus_max_packet_size = 131072
loose_consensus_prefetch_cache_size = 268435456
loose_consensus_worker_thread_cnt = 8
loose_implicit_primary_key = 1
loose_information_schema_stats_expiry = 86400
loose_innodb_buffer_pool_in_core_file = OFF
loose_innodb_commit_cleanout_max_rows = 9999999999
loose_innodb_doublewrite_pages = 64
loose_innodb_lizard_stat_enabled = OFF
loose_innodb_log_compressed_pages = ON
loose_innodb_log_write_ahead_size = 4096
loose_innodb_parallel_read_threads = 1
loose_innodb_undo_retention = 1800
loose_innodb_undo_space_reserved_size = 1024
loose_innodb_undo_space_supremum_size = 102400
loose_internal_tmp_mem_storage_engine = TempTable
loose_new_rpc = ON
loose_optimizer_switch = index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on,index_condition_pushdown=on,mrr=on,mrr_cost_based=on,block_nested_loop=on,batched_key_access=off,materialization=on,semijoin=on,loosescan=on,firstmatch=on,subquery_materialization_cost_based=on,use_index_extensions=on
loose_optimizer_trace = enabled=off,one_line=off
loose_optimizer_trace_features = greedy_search=on,range_optimizer=on,dynamic_range=on,repeated_subselect=on
loose_performance-schema_instrument = 'wait/lock/metadata/sql/mdl=ON'
loose_performance_point_lock_rwlock_enabled = ON
loose_performance_schema-instrument = 'memory/%%=COUNTED'
loose_performance_schema_accounts_size = 10000
loose_performance_schema_consumer_events_stages_current = ON
loose_performance_schema_consumer_events_stages_history = ON
loose_performance_schema_consumer_events_stages_history_long = ON
loose_performance_schema_consumer_events_statements_current = OFF
loose_performance_schema_consumer_events_statements_history = OFF
loose_performance_schema_consumer_events_statements_history_long = OFF
loose_performance_schema_consumer_events_transactions_current = OFF
loose_performance_schema_consumer_events_transactions_history = OFF
loose_performance_schema_consumer_events_transactions_history_long = OFF
loose_performance_schema_consumer_events_waits_current = OFF
loose_performance_schema_consumer_events_waits_history = OFF
loose_performance_schema_consumer_events_waits_history_long = OFF
loose_performance_schema_consumer_global_instrumentation = OFF
loose_performance_schema_consumer_statements_digest = OFF
loose_performance_schema_consumer_thread_instrumentation = OFF
loose_performance_schema_digests_size = 10000
loose_performance_schema_error_size = 0
loose_performance_schema_events_stages_history_long_size = 0
loose_performance_schema_events_stages_history_size = 0
loose_performance_schema_events_statements_history_long_size = 0
loose_performance_schema_events_statements_history_size = 0
loose_performance_schema_events_transactions_history_long_size = 0
loose_performance_schema_events_transactions_history_size = 0
loose_performance_schema_events_waits_history_long_size = 0
loose_performance_schema_events_waits_history_size = 0
loose_performance_schema_hosts_size = 10000
loose_performance_schema_instrument = '%%=OFF'
loose_performance_schema_max_cond_classes = 0
loose_performance_schema_max_cond_instances = 10000
loose_performance_schema_max_digest_length = 0
loose_performance_schema_max_digest_sample_age = 0
loose_performance_schema_max_file_classes = 0
loose_performance_schema_max_file_handles = 0
loose_performance_schema_max_file_instances = 1000
loose_performance_schema_max_index_stat = 10000
loose_performance_schema_max_memory_classes = 0
loose_performance_schema_max_metadata_locks = 10000
loose_performance_schema_max_mutex_classes = 0
loose_performance_schema_max_mutex_instances = 10000
loose_performance_schema_max_prepared_statements_instances = 1000
loose_performance_schema_max_program_instances = 10000
loose_performance_schema_max_rwlock_classes = 0
loose_performance_schema_max_rwlock_instances = 10000
loose_performance_schema_max_socket_classes = 0
loose_performance_schema_max_socket_instances = 1000
loose_performance_schema_max_sql_text_length = 0
loose_performance_schema_max_stage_classes = 0
loose_performance_schema_max_statement_classes = 0
loose_performance_schema_max_statement_stack = 1
loose_performance_schema_max_table_handles = 10000
loose_performance_schema_max_table_instances = 1000
loose_performance_schema_max_table_lock_stat = 10000
loose_performance_schema_max_thread_classes = 0
loose_performance_schema_max_thread_instances = 10000
loose_performance_schema_session_connect_attrs_size = 0
loose_performance_schema_setup_actors_size = 10000
loose_performance_schema_setup_objects_size = 10000
loose_performance_schema_users_size = 10000
loose_rds_audit_log_buffer_size = 16777216
loose_rds_audit_log_enabled = OFF
loose_rds_audit_log_event_buffer_size = 8192
loose_rds_audit_log_row_limit = 100000
loose_rds_audit_log_version = MYSQL_V1
loose_replica_read_timeout = 3000
loose_session_track_system_variables = "*"
loose_session_track_transaction_info = OFF
loose_slave_parallel_workers = 32
low_priority_updates = 0
lower_case_table_names = 1
master_info_repository = TABLE
master_verify_checksum = OFF
max_allowed_packet = 1073741824
max_binlog_cache_size = 18446744073709551615
max_binlog_stmt_cache_size = 18446744073709551615
max_connect_errors = 65536
max_connections = 5532
max_error_count = 1024
max_execution_time = 0
max_heap_table_size = 67108864
max_join_size = 18446744073709551615
max_length_for_sort_data = 4096
max_points_in_geometry = 65536
max_prepared_stmt_count = 16382
max_seeks_for_key = 18446744073709551615
max_sort_length = 1024
max_sp_recursion_depth = 0
max_user_connections = 5000
max_write_lock_count = 102400
min_examined_row_limit = 0
myisam_sort_buffer_size = 262144
mysql_native_password_proxy_users = OFF
net_buffer_length = 16384
net_read_timeout = 30
net_retry_count = 10
net_write_timeout = 60
ngram_token_size = 2
open_files_limit = 65535
opt_indexstat = ON
opt_tablestat = ON
optimizer_prune_level = 1
optimizer_search_depth = 62
optimizer_trace_limit = 1
optimizer_trace_max_mem_size = 1048576
optimizer_trace_offset = -1
performance_schema = ON
preload_buffer_size = 32768
query_alloc_block_size = 8192
query_prealloc_size = 8192
range_alloc_block_size = 4096
range_optimizer_max_mem_size = 8388608
read_rnd_buffer_size = 442368
relay_log_info_repository = TABLE
relay_log_purge = OFF
relay_log_recovery = OFF
replicate_same_server_id = OFF
loose_rotate_log_table_last_name =
server_id = 1234
session_track_gtids = OFF
session_track_schema = ON
session_track_state_change = OFF
sha256_password_proxy_users = OFF
show_old_temporals = OFF
skip_slave_start = OFF
slave_exec_mode = strict
slave_net_timeout = 4
slave_parallel_type = LOGICAL_CLOCK
slave_pending_jobs_size_max = 1073741824
slave_sql_verify_checksum = OFF
slave_type_conversions =
slow_launch_time = 2
slow_query_log = OFF
sort_buffer_size = 868352
sql_mode = NO_ENGINE_SUBSTITUTION
stored_program_cache = 256
sync_binlog = 1
sync_master_info = 10000
sync_relay_log = 1
sync_relay_log_info = 10000
table_open_cache_instances = 16
temptable_max_ram = 1073741824
thread_cache_size = 100
thread_stack = 262144
tls_version = TLSv1.2,TLSv1.3
tmp_table_size = 2097152
transaction_alloc_block_size = 8192
transaction_isolation = REPEATABLE-READ
transaction_prealloc_size = 4096
transaction_write_set_extraction = XXHASH64
updatable_views_with_limit = YES
wait_timeout = 28800
loose_optimizer_switch=index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on,index_condition_pushdown=on,mrr=on,mrr_cost_based=on,block_nested_loop=on,batched_key_access=off,materialization=on,semijoin=on,loosescan=on,firstmatch=on,subquery_materialization_cost_based=on,use_index_extensions=on,skip_scan=off
xa_detach_on_prepare = OFF
[mysqld_safe]
pid_file = /home/polarx/data/run/mysql.pid
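The data directories created in step 1 of "Start the DN" can also be derived from my.cnf itself, which keeps the two in sync if you relocate the data directory (a minimal sketch; `mkdirs_from_mycnf` is an illustrative helper that only scans the handful of path-valued keys used in the file above):

```shell
#!/usr/bin/env bash
# mkdirs_from_mycnf <my.cnf>: create the directories behind every path-valued
# option, so mysqld --initialize-insecure does not fail on missing directories.
mkdirs_from_mycnf() {
    local cnf="$1" key val
    for key in datadir tmpdir socket log_error log_bin relay_log \
               innodb_data_home_dir innodb_log_group_home_dir slow_query_log_file; do
        # take the first "key = value" line; strip the key, '=' and surrounding blanks
        val=$(sed -nE "s/^[[:space:]]*${key}[[:space:]]*=[[:space:]]*//p" "$cnf" | head -n1)
        [ -n "$val" ] || continue
        case "$key" in
            datadir|tmpdir|innodb_data_home_dir|innodb_log_group_home_dir)
                mkdir -p "$val" ;;               # the value itself is a directory
            *)
                mkdir -p "$(dirname "$val")" ;;  # the value is a file path
        esac
    done
}

# Usage (path assumed from this guide):
# mkdirs_from_mycnf /home/polarx/data/my.cnf
```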
