Compiling and Deploying Open-Source PolarDB-X on Ubuntu 24

About the author: Wei Wei, distributed-storage development engineer at Zhejiang Uniview Technologies Co., Ltd. Uniview is a global provider of AIoT products, solutions, and full-stack capabilities, centered on "ABCI" technologies (AI, Big Data, Cloud, IoT).

Background

At the community's invitation, this article provides a source-compilation guide based on the Ubuntu operating system, to make it easier for developers to use PolarDB-X on Ubuntu and similar systems.

PolarDB for Distributed (PolarDB-X) is built on compute-storage separation with an integrated centralized/distributed architecture. Its multi-replica storage nodes (DN) can be offered standalone as the Standard Edition (the centralized form). It is 100% compatible with MySQL syntax and features, supporting multiple MySQL versions such as 5.7 and 8.0 (the open-source PolarDB-X is compatible with MySQL 8.0 only). For the product architecture diagram and more on its features, see the official Alibaba Cloud documentation.

This article compiles and deploys PolarDB-X from the open-source V2.4.2 source code, covering the DN, CN, and glue builds. If you only need the centralized form of PolarDB-X, compiling the DN source alone is sufficient.

Open-source PolarDB-X repositories: https://github.com/polardb

Environment Preparation

Operating system environment:

This build uses Ubuntu 24.04.3 LTS; the system details are as follows:

root@iZbp1b2hnsd51fdnun2vwzZ:~$ cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.3 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo

Build environment

The toolchain used for this build:

  • GCC/G++: 10
  • CMake: 3.28
  • Java: OpenJDK 11

Installing the dependencies:

sudo apt update
# Install CMake
sudo apt install -y cmake
# Install JDK 11
sudo apt install -y openjdk-11-jdk
# Install other dependency libraries that may be needed
sudo apt install -y git make wget automake bison libssl-dev libncurses5-dev libaio-dev mysql-client libsnappy-dev liblz4-dev libbz2-dev autoconf libarchive-dev
# Install GCC 10
sudo apt install -y gcc-10 g++-10
# Verify the installation
# Check CMake; it should print cmake version 3.28.3
cmake --version
# Check Java; it should print openjdk 11
java --version
javac --version
# Check the key dependencies
apt list --installed | grep -E "libssl-dev|libncurses5-dev|libaio-dev|libsnappy-dev|liblz4-dev|libbz2-dev|libarchive-dev"
# Tip: if gcc/g++ --version does not show 10, switch to 10 as follows:
# 1. Confirm gcc-10/g++-10 are installed; if not, see the installation commands in [Environment Preparation]
which gcc-10 g++-10
# 2. Register gcc/g++ 10 as alternatives
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 100
# 3. Manually select GCC 10
sudo update-alternatives --set gcc /usr/bin/gcc-10
sudo update-alternatives --set g++ /usr/bin/g++-10
# 4. Verify the switch
gcc --version  # must show gcc 10
g++ --version  # must show g++ 10
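If you script the environment setup, a small preflight check can catch toolchain drift before a long build. Below is a minimal sketch (the expected version strings are assumptions based on the environment described above; adjust them to your system):

    # preflight check: abort early if the toolchain does not match this article's versions
    gcc --version   | head -n1 | grep -q ' 10\.'      || { echo "gcc is not 10.x"; exit 1; }
    g++ --version   | head -n1 | grep -q ' 10\.'      || { echo "g++ is not 10.x"; exit 1; }
    cmake --version | head -n1 | grep -q ' 3\.28'     || { echo "cmake is not 3.28.x"; exit 1; }
    java --version  | head -n1 | grep -q 'openjdk 11' || { echo "java is not openjdk 11"; exit 1; }
    echo "toolchain OK"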

Compiling the PolarDB-X DN (Data Node) Source

As noted above, if you want to build the PolarDB-X Standard Edition package yourself, you only need to compile the open-source PolarDB-X DN source.

Open-source PolarDB-X DN source: https://github.com/polardb/polardbx-engine

# 1. Create a dedicated user
    useradd -ms /bin/bash polarx
    echo "polarx:polarx" | sudo chpasswd
# 2. Grant the polarx user sudo privileges
    echo "polarx    ALL=(ALL)    NOPASSWD: ALL" >> /etc/sudoers
    su - polarx
# 3. git clone (clone as the polarx user to avoid permission problems later)
    git clone https://github.com/polardb/polardbx-engine.git
    cd polardbx-engine
    
    # If you run into permission problems, change the owner and group of the cloned files to polarx:
    sudo chown -R polarx:polarx /home/polarx/polardbx-*
    
    # Reassemble the split Boost archive parts into a single complete tarball
    cat extra/boost/boost_1_77_0.tar.bz2.*  > extra/boost/boost_1_77_0.tar.bz2
# 4. Build with bash
    # The following packages turned out to be missing during compilation; install them up front
    sudo apt install -y pkg-config
    sudo apt install -y libtirpc-dev
    bash build.sh -t release
# 5. Move the build output. It lands in /home/polarx/tmp_run (or in tmp_run under the corresponding user's home directory if you built as another user); move it to its runtime location
    mkdir -p /home/polarx/run
    mv /home/polarx/tmp_run /home/polarx/run/polardbx-engine
    
# Tip: if the build fails, clean up before rebuilding
    rm -rf build/* 2>/dev/null
    rm -rf tmp_run 2>/dev/null
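Before moving on, it is worth confirming that the build produced a runnable server binary (a quick optional check, assuming the move in step 5 has been done):

    /home/polarx/run/polardbx-engine/bin/mysqld --version
    # The output should report the MySQL 8.0 base version that the PolarDB-X engine is built on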

Compiling the PolarDB-X CN (Compute Node) Source

This step compiles and installs the polardbx-sql and polardbx-glue code; glue is a submodule of the CN.

Open-source PolarDB-X CN source: https://github.com/polardb/polardbx-sql

# 1. Install Maven 3 in advance. The compute layer here is written in Java, and Maven is a build and dependency-management tool for Java projects
    sudo apt install -y maven
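    # (Optional sanity check, a suggested addition: confirm Maven is on the PATH and note which JDK it uses)
    mvn -v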
    
# 2. Initialize the polardbx-rpc submodule
    cd polardbx-sql
    # This ensures that the submodules polardbx-sql depends on, such as polardbx-glue, are correctly initialized and downloaded; without them the polardbx-sql build fails with missing dependencies
    git submodule update --init
# 3. Build and package. The Java version used here must match the one used to run the CN later, otherwise errors will occur.
    mvn install -DskipTests -D env=release
# 4. Unpack for running (adjust the target path as needed)
    mkdir -p /home/polarx/run/polardbx-sql
    cp target/polardbx-server-*.tar.gz /home/polarx/run/polardbx-sql/
    cd /home/polarx/run/polardbx-sql
    tar xzvf polardbx-server-*.tar.gz
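After unpacking, the CN package should contain at least the bin/ and conf/ directories used in the following sections. A quick look:

    ls /home/polarx/run/polardbx-sql
    # Expect to see bin/ (with startup.sh) and conf/ (with server.properties), among others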

Starting the DN

  • This step starts a mysql process that serves as both the metadb and the DN
  • my.cnf is the mysql startup configuration file and needs to be adjusted; see the sample my.cnf in the appendix
  • By default /home/polarx/data is used as the mysql data directory; you can change it to another location
  • Note: the DN must be started as a non-root account; the polarx user created earlier works for this
# 1. Create the data directories
    mkdir -p /home/polarx/data/{data,log,run,tmp,mysql}
    touch /home/polarx/data/log/mysqld_safe.err
    # Assume my.cnf is stored at /home/polarx/data/my.cnf
    # Copy the configuration from the appendix at the bottom of this article into my.cnf
# 2. Start the DN
    /home/polarx/run/polardbx-engine/bin/mysqld --defaults-file=/home/polarx/data/my.cnf --initialize-insecure
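    # (Optional, a suggested check: if initialization fails, inspect the error log configured in my.cnf)
    # tail -n 50 /home/polarx/data/log/alert.log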
    (/home/polarx/run/polardbx-engine/bin/mysqld_safe --defaults-file=/home/polarx/data/my.cnf &)
    # You can see that the DN has started successfully
    root@iZbp1b2hnsd51fdnun2vwzZ:~# ps -ef | grep mysql
    polarx    438826       1  0 20:30 pts/2    00:00:00 /bin/sh /home/polarx/run/polardbx-engine/bin/mysqld_safe --defaults-file=/home/polarx/data/my.cnf
    polarx    443213  438826  2 20:30 pts/2    00:00:00 /home/polarx/run/polardbx-engine/bin/mysqld --defaults-file=/home/polarx/data/my.cnf --basedir=/home/polarx/run/polardbx-engine --datadir=/home/polarx/data/data --plugin-dir=/home/polarx/run/polardbx-engine/lib/plugin --log-error=/home/polarx/data/log/alert.log --open-files-limit=65535 --pid-file=/home/polarx/data/run/mysql.pid --socket=/home/polarx/data/run/mysql.sock --port=4886
    root      443925  443730  0 20:30 pts/3    00:00:00 grep --color=auto mysql
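You can also connect to the DN directly to confirm it accepts connections. Because the instance was initialized with --initialize-insecure, the local root account starts with an empty password (a quick optional check, using the socket path from my.cnf):

    mysql -uroot -S /home/polarx/data/run/mysql.sock -e "SELECT VERSION();"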

Starting the PolarDB-X CN

After the mysql process is up, you can initialize PolarDB-X. Prepare the following configuration items:

  • metadb user: my_polarx is used below
  • metadb database: create the metadb database; polardbx_meta_db_polardbx is used below
  • password encryption key (dnPasswordKey): asdf1234ghjk5678 is used below
  • PolarDB-X default username: polardbx_root by default
  • PolarDB-X default password: 123456 by default; it can be changed with the -S option

Note: the CN must be started as a non-root account

#  1. Edit the configuration file /home/polarx/run/polardbx-sql/conf/server.properties, replacing the following items one by one:
    # PolarDB-X server port
    serverPort=8527
    # PolarDB-X RPC port
    rpcPort=9090
    # MetaDB address
    metaDbAddr=127.0.0.1:4886
    # MetaDB private-protocol (X-protocol) port
    metaDbXprotoPort=34886
    # MetaDB user
    metaDbUser=my_polarx
    metaDbName=polardbx_meta_db_polardbx
    # PolarDB-X instance name
    instanceId=polardbx-polardbx
    galaxyXProtocol=2
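If you prefer to apply these edits non-interactively, a sed sketch like the following replaces the values in place (assuming each key already exists in the shipped server.properties, which is the usual case):

    cd /home/polarx/run/polardbx-sql/conf
    sed -i \
        -e 's/^serverPort=.*/serverPort=8527/' \
        -e 's/^rpcPort=.*/rpcPort=9090/' \
        -e 's/^metaDbAddr=.*/metaDbAddr=127.0.0.1:4886/' \
        -e 's/^metaDbXprotoPort=.*/metaDbXprotoPort=34886/' \
        -e 's/^metaDbUser=.*/metaDbUser=my_polarx/' \
        -e 's/^metaDbName=.*/metaDbName=polardbx_meta_db_polardbx/' \
        -e 's/^instanceId=.*/instanceId=polardbx-polardbx/' \
        -e 's/^galaxyXProtocol=.*/galaxyXProtocol=2/' \
        server.properties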

Initialize PolarDB-X:

  • -I: enter initialization mode
  • -P: the dnPasswordKey prepared earlier
  • -d: the DataNode address list; in single-machine mode this is the address and ports of the mysql process started earlier
  • -r: the password used to connect to the metadb
  • -u: the root user to create for PolarDB-X
  • -S: the password of the root user to create for PolarDB-X
# 2. Initialize the CN; the CN must be initialized before it can be started
    su - polarx
    cd /home/polarx/run/polardbx-sql/bin
    bash startup.sh \
        -I \
        -P asdf1234ghjk5678 \
        -d 127.0.0.1:4886:34886 \
        -r "" \
        -u polardbx_root \
        -S "123456"
        
    # You will see output similar to:
    Generate password for user: my_polarx && M8%V5%K9^$5%oY0%yC0+&1!J7@8+R6)
    Encrypted password: DB84u4UkU/OYlMzu3aj9NFdknvxYgedFiW9z59bVnoc=
    Root user for polarx with password: polardbx_root && 123456
    Encrypted password for polarx: H1AzXc2NmCs61dNjH5nMvA==
    ======== Paste following configurations to conf/server.properties ! ======= 
    metaDbPasswd=HMqvkvXZtT7XedA6t2IWY8+D7fJWIJir/mIY1Nf1b58= 
    
# 3. Edit conf/server.properties
    # Add the metaDbPasswd=HMqvkvXZtT7XedA6t2IWY8+D7fJWIJir/mIY1Nf1b58= line from the output above to conf/server.properties
    vim /home/polarx/run/polardbx-sql/conf/server.properties
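    # Alternatively, append the line directly (substitute the metaDbPasswd value from your own init output):
    # echo "metaDbPasswd=<value-from-your-init-output>" >> /home/polarx/run/polardbx-sql/conf/server.properties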
# 4. Start the CN
    cd /home/polarx/run/polardbx-sql/bin
    bash startup.sh -P asdf1234ghjk5678
    
    # Output like the following indicates a successful start
        TDDL_OPTS : -DinitializeGms=false -DforceCleanup=false -DappName=tddl -Dio.grpc.netty.shaded.io.netty.transport.noNative=true -Dio.netty.transport.noNative=true -Dcom.alibaba.wisp.threadAsWisp.black=name:logback-* -Dlogback.configurationFile=/home/polarx/run/polardbx-sql/bin/../conf/logback.xml -Dtddl.conf=/home/polarx/run/polardbx-sql/bin/../conf/server.properties
    start polardb-x
    cd to /home/polarx/run/polardbx-sql/bin for continue
    
    # Check whether the startup succeeded; there should be polardbx-sql processes in the output
    ps -ef | grep polardbx-sql
    
# 5. Wait a moment, then connect to PolarDB-X
    mysql -h127.1 -P8527 -upolardbx_root
    
    # Output like the following means the connection succeeded
        ~/run/polardbx-sql/bin$ mysql -h127.1 -P8527 -upolardbx_root
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 65
    Server version: 5.6.29 Tddl Server (ALIBABA)
    Copyright (c) 2000, 2025, Oracle and/or its affiliates.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
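Once connected, a short smoke test confirms the instance can create objects and serve queries end to end (a minimal sketch; the database and table names are arbitrary, and the password is the one set with -S during initialization):

    mysql -h127.1 -P8527 -upolardbx_root -p123456 -e "
        CREATE DATABASE IF NOT EXISTS smoke_test;
        USE smoke_test;
        CREATE TABLE t1 (id INT PRIMARY KEY, name VARCHAR(32));
        INSERT INTO t1 VALUES (1, 'hello');
        SELECT * FROM t1;
        DROP DATABASE smoke_test;"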

Appendix

The my.cnf configuration file

[mysqld]
datadir = /home/polarx/data/data
general_log_file = /home/polarx/data/log/general.log
innodb_data_home_dir = /home/polarx/data/mysql
innodb_log_group_home_dir = /home/polarx/data/mysql
log-bin-index = /home/polarx/data/mysql/mysql-bin.index
log_bin = /home/polarx/data/mysql/mysql-bin.log
log_error = /home/polarx/data/log/alert.log
master_info_file = /home/polarx/data/mysql/master.info
relay_log = /home/polarx/data/mysql/slave-relay.log
relay_log_index = /home/polarx/data/mysql/slave-relay-log.index
relay_log_info_file = /home/polarx/data/mysql/slave-relay-log.info
slave_load_tmpdir = /home/polarx/data/tmp
slow_query_log_file = /home/polarx/data/mysql/slow_query.log
socket = /home/polarx/data/run/mysql.sock
tmpdir = /home/polarx/data/tmp
innodb_buffer_pool_size = 1073741824
loose_rpc_port = 34886
port = 4886
loose_cluster-id = 1234
loose_cluster-info = 127.0.0.1:14886@1
auto_increment_increment = 1
auto_increment_offset = 1
autocommit = ON
automatic_sp_privileges = ON
avoid_temporal_upgrade = OFF
back_log = 3000
binlog_cache_size = 1048576
binlog_checksum = CRC32
binlog_order_commits = OFF
binlog_row_image = full
binlog_rows_query_log_events = ON
binlog_stmt_cache_size = 32768
binlog_transaction_dependency_tracking = WRITESET
block_encryption_mode = "aes-128-ecb"
bulk_insert_buffer_size = 4194304
character_set_server = utf8
concurrent_insert = 2
connect_timeout = 10
default_authentication_plugin = mysql_native_password
default_storage_engine = InnoDB
default_time_zone = +8:00
default_week_format = 0
delay_key_write = ON
delayed_insert_limit = 100
delayed_insert_timeout = 300
delayed_queue_size = 1000
disconnect_on_expired_password = ON
div_precision_increment = 4
end_markers_in_json = OFF
enforce_gtid_consistency = ON
eq_range_index_dive_limit = 200
event_scheduler = OFF
expire_logs_days = 0
explicit_defaults_for_timestamp = OFF
flush_time = 0
ft_max_word_len = 84
ft_min_word_len = 4
ft_query_expansion_limit = 20
general_log = OFF
group_concat_max_len = 1024
gtid_mode = ON
host_cache_size = 644
init_connect = ''
innodb_adaptive_flushing = ON
innodb_adaptive_flushing_lwm = 10
innodb_adaptive_hash_index = OFF
innodb_adaptive_max_sleep_delay = 150000
innodb_autoextend_increment = 64
innodb_autoinc_lock_mode = 2
innodb_buffer_pool_chunk_size = 33554432
innodb_buffer_pool_dump_at_shutdown = ON
innodb_buffer_pool_dump_pct = 25
innodb_buffer_pool_instances = 8
innodb_buffer_pool_load_at_startup = ON
innodb_change_buffer_max_size = 25
innodb_change_buffering = none
innodb_checksum_algorithm = crc32
innodb_cmp_per_index_enabled = OFF
innodb_commit_concurrency = 0
innodb_compression_failure_threshold_pct = 5
innodb_compression_level = 6
innodb_compression_pad_pct_max = 50
innodb_concurrency_tickets = 5000
innodb_data_file_purge = ON
innodb_data_file_purge_interval = 100
innodb_data_file_purge_max_size = 128
innodb_deadlock_detect = ON
innodb_disable_sort_file_cache = ON
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
innodb_flush_neighbors = 0
innodb_flush_sync = ON
innodb_ft_cache_size = 8000000
innodb_ft_enable_diag_print = OFF
innodb_ft_enable_stopword = ON
innodb_ft_max_token_size = 84
innodb_ft_min_token_size = 3
innodb_ft_num_word_optimize = 2000
innodb_ft_result_cache_limit = 2000000000
innodb_ft_sort_pll_degree = 2
innodb_ft_total_cache_size = 640000000
innodb_io_capacity = 20000
innodb_io_capacity_max = 40000
innodb_lock_wait_timeout = 50
innodb_log_buffer_size = 16777216
innodb_log_checksums = ON
innodb_log_file_size = 134217728
innodb_lru_scan_depth = 8192
innodb_max_dirty_pages_pct = 75
innodb_max_dirty_pages_pct_lwm = 0
innodb_max_purge_lag = 0
innodb_max_purge_lag_delay = 0
innodb_max_undo_log_size = 1073741824
innodb_monitor_disable =
innodb_monitor_enable =
innodb_old_blocks_pct = 37
innodb_old_blocks_time = 1000
innodb_online_alter_log_max_size = 134217728
innodb_open_files = 20000
innodb_optimize_fulltext_only = OFF
innodb_page_cleaners = 4
innodb_print_all_deadlocks = ON
innodb_purge_batch_size = 300
innodb_purge_rseg_truncate_frequency = 128
innodb_purge_threads = 4
innodb_random_read_ahead = OFF
innodb_read_ahead_threshold = 0
innodb_read_io_threads = 4
innodb_rollback_on_timeout = OFF
innodb_rollback_segments = 128
innodb_sort_buffer_size = 1048576
innodb_spin_wait_delay = 6
innodb_stats_auto_recalc = ON
innodb_stats_method = nulls_equal
innodb_stats_on_metadata = OFF
innodb_stats_persistent = ON
innodb_stats_persistent_sample_pages = 20
innodb_stats_transient_sample_pages = 8
innodb_status_output = OFF
innodb_status_output_locks = OFF
innodb_strict_mode = ON
innodb_sync_array_size = 16
innodb_sync_spin_loops = 30
innodb_table_locks = ON
innodb_tcn_cache_level = block
innodb_thread_concurrency = 0
innodb_thread_sleep_delay = 0
innodb_write_io_threads = 4
interactive_timeout = 7200
key_buffer_size = 16777216
key_cache_age_threshold = 300
key_cache_block_size = 1024
key_cache_division_limit = 100
lc_time_names = en_US
local_infile = OFF
lock_wait_timeout = 1800
log_bin_trust_function_creators = ON
log_bin_use_v1_row_events = 0
log_error_verbosity = 2
log_queries_not_using_indexes = OFF
log_slave_updates = 0
log_slow_admin_statements = ON
log_slow_slave_statements = ON
log_throttle_queries_not_using_indexes = 0
long_query_time = 1
loose_ccl_max_waiting_count = 0
loose_ccl_queue_bucket_count = 4
loose_ccl_queue_bucket_size = 64
loose_ccl_wait_timeout = 86400
loose_consensus_auto_leader_transfer = ON
loose_consensus_auto_reset_match_index = ON
loose_consensus_election_timeout = 10000
loose_consensus_io_thread_cnt = 8
loose_consensus_large_trx = ON
loose_consensus_log_cache_size = 536870912
loose_consensus_max_delay_index = 10000
loose_consensus_max_log_size = 20971520
loose_consensus_max_packet_size = 131072
loose_consensus_prefetch_cache_size = 268435456
loose_consensus_worker_thread_cnt = 8
loose_implicit_primary_key = 1
loose_information_schema_stats_expiry = 86400
loose_innodb_buffer_pool_in_core_file = OFF
loose_innodb_commit_cleanout_max_rows = 9999999999
loose_innodb_doublewrite_pages = 64
loose_innodb_lizard_stat_enabled = OFF
loose_innodb_log_compressed_pages = ON
loose_innodb_log_write_ahead_size = 4096
loose_innodb_parallel_read_threads = 1
loose_innodb_undo_retention = 1800
loose_innodb_undo_space_reserved_size = 1024
loose_innodb_undo_space_supremum_size = 102400
loose_internal_tmp_mem_storage_engine = TempTable
loose_new_rpc = ON
loose_optimizer_switch = index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on,index_condition_pushdown=on,mrr=on,mrr_cost_based=on,block_nested_loop=on,batched_key_access=off,materialization=on,semijoin=on,loosescan=on,firstmatch=on,subquery_materialization_cost_based=on,use_index_extensions=on
loose_optimizer_trace = enabled=off,one_line=off
loose_optimizer_trace_features = greedy_search=on,range_optimizer=on,dynamic_range=on,repeated_subselect=on
loose_performance-schema_instrument = 'wait/lock/metadata/sql/mdl=ON'
loose_performance_point_lock_rwlock_enabled = ON
loose_performance_schema-instrument = 'memory/%%=COUNTED'
loose_performance_schema_accounts_size = 10000
loose_performance_schema_consumer_events_stages_current = ON
loose_performance_schema_consumer_events_stages_history = ON
loose_performance_schema_consumer_events_stages_history_long = ON
loose_performance_schema_consumer_events_statements_current = OFF
loose_performance_schema_consumer_events_statements_history = OFF
loose_performance_schema_consumer_events_statements_history_long = OFF
loose_performance_schema_consumer_events_transactions_current = OFF
loose_performance_schema_consumer_events_transactions_history = OFF
loose_performance_schema_consumer_events_transactions_history_long = OFF
loose_performance_schema_consumer_events_waits_current = OFF
loose_performance_schema_consumer_events_waits_history = OFF
loose_performance_schema_consumer_events_waits_history_long = OFF
loose_performance_schema_consumer_global_instrumentation = OFF
loose_performance_schema_consumer_statements_digest = OFF
loose_performance_schema_consumer_thread_instrumentation = OFF
loose_performance_schema_digests_size = 10000
loose_performance_schema_error_size = 0
loose_performance_schema_events_stages_history_long_size = 0
loose_performance_schema_events_stages_history_size = 0
loose_performance_schema_events_statements_history_long_size = 0
loose_performance_schema_events_statements_history_size = 0
loose_performance_schema_events_transactions_history_long_size = 0
loose_performance_schema_events_transactions_history_size = 0
loose_performance_schema_events_waits_history_long_size = 0
loose_performance_schema_events_waits_history_size = 0
loose_performance_schema_hosts_size = 10000
loose_performance_schema_instrument = '%%=OFF'
loose_performance_schema_max_cond_classes = 0
loose_performance_schema_max_cond_instances = 10000
loose_performance_schema_max_digest_length = 0
loose_performance_schema_max_digest_sample_age = 0
loose_performance_schema_max_file_classes = 0
loose_performance_schema_max_file_handles = 0
loose_performance_schema_max_file_instances = 1000
loose_performance_schema_max_index_stat = 10000
loose_performance_schema_max_memory_classes = 0
loose_performance_schema_max_metadata_locks = 10000
loose_performance_schema_max_mutex_classes = 0
loose_performance_schema_max_mutex_instances = 10000
loose_performance_schema_max_prepared_statements_instances = 1000
loose_performance_schema_max_program_instances = 10000
loose_performance_schema_max_rwlock_classes = 0
loose_performance_schema_max_rwlock_instances = 10000
loose_performance_schema_max_socket_classes = 0
loose_performance_schema_max_socket_instances = 1000
loose_performance_schema_max_sql_text_length = 0
loose_performance_schema_max_stage_classes = 0
loose_performance_schema_max_statement_classes = 0
loose_performance_schema_max_statement_stack = 1
loose_performance_schema_max_table_handles = 10000
loose_performance_schema_max_table_instances = 1000
loose_performance_schema_max_table_lock_stat = 10000
loose_performance_schema_max_thread_classes = 0
loose_performance_schema_max_thread_instances = 10000
loose_performance_schema_session_connect_attrs_size = 0
loose_performance_schema_setup_actors_size = 10000
loose_performance_schema_setup_objects_size = 10000
loose_performance_schema_users_size = 10000
loose_rds_audit_log_buffer_size = 16777216
loose_rds_audit_log_enabled = OFF
loose_rds_audit_log_event_buffer_size = 8192
loose_rds_audit_log_row_limit = 100000
loose_rds_audit_log_version = MYSQL_V1
loose_replica_read_timeout = 3000
loose_session_track_system_variables = "*"
loose_session_track_transaction_info = OFF
loose_slave_parallel_workers = 32
low_priority_updates = 0
lower_case_table_names = 1
master_info_repository = TABLE
master_verify_checksum = OFF
max_allowed_packet = 1073741824
max_binlog_cache_size = 18446744073709551615
max_binlog_stmt_cache_size = 18446744073709551615
max_connect_errors = 65536
max_connections = 5532
max_error_count = 1024
max_execution_time = 0
max_heap_table_size = 67108864
max_join_size = 18446744073709551615
max_length_for_sort_data = 4096
max_points_in_geometry = 65536
max_prepared_stmt_count = 16382
max_seeks_for_key = 18446744073709551615
max_sort_length = 1024
max_sp_recursion_depth = 0
max_user_connections = 5000
max_write_lock_count = 102400
min_examined_row_limit = 0
myisam_sort_buffer_size = 262144
mysql_native_password_proxy_users = OFF
net_buffer_length = 16384
net_read_timeout = 30
net_retry_count = 10
net_write_timeout = 60
ngram_token_size = 2
open_files_limit = 65535
opt_indexstat = ON
opt_tablestat = ON
optimizer_prune_level = 1
optimizer_search_depth = 62
optimizer_trace_limit = 1
optimizer_trace_max_mem_size = 1048576
optimizer_trace_offset = -1
performance_schema = ON
preload_buffer_size = 32768
query_alloc_block_size = 8192
query_prealloc_size = 8192
range_alloc_block_size = 4096
range_optimizer_max_mem_size = 8388608
read_rnd_buffer_size = 442368
relay_log_info_repository = TABLE
relay_log_purge = OFF
relay_log_recovery = OFF
replicate_same_server_id = OFF
loose_rotate_log_table_last_name =
server_id = 1234
session_track_gtids = OFF
session_track_schema = ON
session_track_state_change = OFF
sha256_password_proxy_users = OFF
show_old_temporals = OFF
skip_slave_start = OFF
slave_exec_mode = strict
slave_net_timeout = 4
slave_parallel_type = LOGICAL_CLOCK
slave_pending_jobs_size_max = 1073741824
slave_sql_verify_checksum = OFF
slave_type_conversions =
slow_launch_time = 2
slow_query_log = OFF
sort_buffer_size = 868352
sql_mode = NO_ENGINE_SUBSTITUTION
stored_program_cache = 256
sync_binlog = 1
sync_master_info = 10000
sync_relay_log = 1
sync_relay_log_info = 10000
table_open_cache_instances = 16
temptable_max_ram = 1073741824
thread_cache_size = 100
thread_stack = 262144
tls_version = TLSv1.2,TLSv1.3
tmp_table_size = 2097152
transaction_alloc_block_size = 8192
transaction_isolation = REPEATABLE-READ
transaction_prealloc_size = 4096
transaction_write_set_extraction = XXHASH64
updatable_views_with_limit = YES
wait_timeout = 28800
loose_optimizer_switch=index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on,index_condition_pushdown=on,mrr=on,mrr_cost_based=on,block_nested_loop=on,batched_key_access=off,materialization=on,semijoin=on,loosescan=on,firstmatch=on,subquery_materialization_cost_based=on,use_index_extensions=on,skip_scan=off
xa_detach_on_prepare = OFF
[mysqld_safe]
pid_file = /home/polarx/data/run/mysql.pid
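Every path in this my.cnf lives under /home/polarx/data, matching the directories created in the "Starting the DN" section. If you relocate the data directory, the sketch below keeps the filesystem and the config in sync (DATA_DIR is a placeholder of your choosing):

    DATA_DIR=/home/polarx/data   # change this if you use a different location
    mkdir -p "$DATA_DIR"/{data,log,run,tmp,mysql}
    touch "$DATA_DIR"/log/mysqld_safe.err
    # then rewrite every /home/polarx/data prefix in my.cnf to match, e.g.:
    # sed -i "s#/home/polarx/data#$DATA_DIR#g" "$DATA_DIR"/my.cnf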
