High-availability MongoDB cluster (sharding + replica sets): rebuilding the shard2 replica


■■ The shard2 replica refuses to start

■ The project team reported that the shard2 replica in the cluster would not start

A quick look showed that the shard2 replica does come up normally, but goes down again a short while later. The shard2 primary's log shows the replica entering the ROLLBACK state almost immediately after startup:
2022-09-14T15:58:08.833+0800 I REPL [replexec-1] Member node3:27002 is now in state ROLLBACK
About three minutes later, the shard2 primary's log shows the replica as DOWN:
2022-09-14T16:01:09.246+0800 I REPL [replexec-2] Member node3:27002 is now in state RS_DOWN
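
In practice, the quickest way to watch these state transitions is to follow the shard2 primary's log on node2. A minimal sketch, assuming node2's log path mirrors the shard2.conf options shown further down:

# on node2 (shard2 primary): watch the state changes reported for node3:27002
tail -f /data/mongodb/shard2/log/shard2.log | grep --line-buffered 'node3:27002 is now in state'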

■ I read the replica's own log carefully and can confirm it contains no useful error message; the node simply goes DOWN, inexplicably. Yes, inexplicably! You would expect at least a reasonably clear reason. I have serious doubts about the logging design of this ancient version, 3.6.23; someone should carve its epitaph.

To back this up, here is the node's entire log, from startup to its demise, as evidence:

2022-09-14T15:54:08.443+0800 I CONTROL  [main] ***** SERVER RESTARTED *****
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] MongoDB starting : pid=417227 port=27002 dbpath=/data/mongodb/shard2/data 64-bit host=mongodb3
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] db version v3.6.23
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] git version: d352e6a4764659e0d0350ce77279de3c1f243e5c
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1k  FIPS 25 Mar 2021
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] allocator: tcmalloc
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] modules: none
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] build environment:
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten]     distmod: rhel80
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten]     distarch: x86_64
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten]     target_arch: x86_64
2022-09-14T15:54:08.494+0800 I CONTROL  [initandlisten] options: { config: "/data/mongodb/conf/shard2.conf", net: { bindIp: "0.0.0.0,::", ipv6: true, maxIncomingConnections: 20000, port: 27002 }, processManagement: { fork: true, pidFilePath: "/data/mongodb/shard2/log/shard2.pid" }, replication: { replSet: "shard2" }, security: { keyFile: "/data/mongodb/conf/mongo.keyfile" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/shard2/data" }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/shard2/log/shard2.log" } }
2022-09-14T15:54:08.494+0800 W -        [initandlisten] Detected unclean shutdown - /data/mongodb/shard2/data/mongod.lock is not empty.
2022-09-14T15:54:08.517+0800 I -        [initandlisten] Detected data files in /data/mongodb/shard2/data created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2022-09-14T15:54:08.517+0800 W STORAGE  [initandlisten] Recovering data from the last clean checkpoint.
2022-09-14T15:54:08.517+0800 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=31547M,cache_overflow=(file_max=0M),session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2022-09-14T15:54:09.476+0800 I STORAGE  [initandlisten] WiredTiger message [1663142049:476530][417227:0x7f2826c01a40], txn-recover: Main recovery loop: starting at 6440/5376
2022-09-14T15:54:09.477+0800 I STORAGE  [initandlisten] WiredTiger message [1663142049:477008][417227:0x7f2826c01a40], txn-recover: Recovering log 6440 through 6441
2022-09-14T15:54:09.541+0800 I STORAGE  [initandlisten] WiredTiger message [1663142049:541921][417227:0x7f2826c01a40], txn-recover: Recovering log 6441 through 6441
2022-09-14T15:54:09.597+0800 I STORAGE  [initandlisten] WiredTiger message [1663142049:597459][417227:0x7f2826c01a40], txn-recover: Set global recovery timestamp: 0
2022-09-14T15:54:09.645+0800 I STORAGE  [initandlisten] Starting WiredTigerRecordStoreThread local.oplog.rs
2022-09-14T15:54:09.645+0800 I STORAGE  [initandlisten] The size storer reports that the oplog contains 9616 records totaling to 53636431979 bytes
2022-09-14T15:54:09.645+0800 I STORAGE  [initandlisten] Scanning the oplog to determine where to place markers for truncation
2022-09-14T15:58:06.081+0800 I STORAGE  [initandlisten] WiredTiger record store oplog processing took 236435ms
2022-09-14T15:58:06.090+0800 I CONTROL  [initandlisten]
2022-09-14T15:58:06.090+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2022-09-14T15:58:06.090+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2022-09-14T15:58:06.090+0800 I CONTROL  [initandlisten]
2022-09-14T15:58:06.163+0800 I STORAGE  [initandlisten] Found drop-pending namespace flowtest.system.drop.1662704001i5t8.usertable with drop optime { ts: Timestamp(1662704001, 5), t: 8 }
2022-09-14T15:58:06.163+0800 I SHARDING [initandlisten] initializing sharding state with: { _id: "shardIdentity", configsvrConnectionString: "configs/node1:21000,node2:21000,node3:21000", shardName: "shard2", clusterId: ObjectId('62e894ed3d73f307c53b4461') }
2022-09-14T15:58:06.163+0800 I SHARDING [initandlisten] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
2022-09-14T15:58:06.164+0800 I NETWORK  [initandlisten] Starting new replica set monitor for configs/node1:21000,node2:21000,node3:21000
2022-09-14T15:58:06.164+0800 I SHARDING [thread1] creating distributed lock ping thread for process mongodb3:27002:1663142286:-365603539022679667 (sleeping for 30000ms)
2022-09-14T15:58:06.164+0800 I SHARDING [initandlisten] initialized sharding components for secondary node.
2022-09-14T15:58:06.179+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node3:21000 (1 connections now open to node3:21000 with a 5 second timeout)
2022-09-14T15:58:06.180+0800 I NETWORK  [initandlisten] Successfully connected to node2:21000 (1 connections now open to node2:21000 with a 5 second timeout)
2022-09-14T15:58:06.180+0800 I NETWORK  [shard registry reload] Successfully connected to node1:21000 (1 connections now open to node1:21000 with a 5 second timeout)
2022-09-14T15:58:06.180+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node2:21000
2022-09-14T15:58:06.180+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node3:21000
2022-09-14T15:58:06.180+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node2:21000
2022-09-14T15:58:06.181+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node3:21000
2022-09-14T15:58:06.182+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node3:21000, took 2ms (2 connections now open to node3:21000)
2022-09-14T15:58:06.182+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node3:21000, took 2ms (2 connections now open to node3:21000)
2022-09-14T15:58:06.182+0800 I SHARDING [NetworkInterfaceASIO-ShardRegistry-0] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1663142285, 1), t: 15 }
2022-09-14T15:58:06.182+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node2:21000, took 2ms (2 connections now open to node2:21000)
2022-09-14T15:58:06.182+0800 I NETWORK  [shard registry reload] Starting new replica set monitor for shard1/node1:27001,node2:27001
2022-09-14T15:58:06.182+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node2:21000, took 2ms (2 connections now open to node2:21000)
2022-09-14T15:58:06.182+0800 I NETWORK  [shard registry reload] Starting new replica set monitor for shard2/node2:27002,node3:27002
2022-09-14T15:58:06.182+0800 I NETWORK  [shard registry reload] Starting new replica set monitor for shard3/node1:27003,node3:27003
2022-09-14T15:58:06.188+0800 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
2022-09-14T15:58:06.198+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node1:27001 (1 connections now open to node1:27001 with a 5 second timeout)
2022-09-14T15:58:06.213+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node2:27001 (1 connections now open to node2:27001 with a 5 second timeout)
2022-09-14T15:58:06.222+0800 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/mongodb/shard2/data/diagnostic.data'
2022-09-14T15:58:06.229+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node2:27002 (1 connections now open to node2:27002 with a 5 second timeout)
2022-09-14T15:58:06.230+0800 W NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Failed to connect to 10.105.2.9:27002, in(checking socket for error after poll), reason: Connection refused
2022-09-14T15:58:06.245+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node1:27003 (1 connections now open to node1:27003 with a 5 second timeout)
2022-09-14T15:58:06.249+0800 I REPL     [initandlisten] Rollback ID is 1
2022-09-14T15:58:06.258+0800 I REPL     [initandlisten] Starting recovery oplog application at the appliedThrough: { ts: Timestamp(1662705355, 24), t: 8 }
2022-09-14T15:58:06.258+0800 I REPL     [initandlisten] No oplog entries to apply for recovery. appliedThrough is at the top of the oplog.
2022-09-14T15:58:06.259+0800 I NETWORK  [initandlisten] listening via socket bound to 0.0.0.0
2022-09-14T15:58:06.259+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node1:21000
2022-09-14T15:58:06.259+0800 I NETWORK  [initandlisten] listening via socket bound to ::
2022-09-14T15:58:06.259+0800 I NETWORK  [initandlisten] listening via socket bound to /tmp/mongodb-27002.sock
2022-09-14T15:58:06.259+0800 I NETWORK  [initandlisten] waiting for connections on port 27002
2022-09-14T15:58:06.260+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node3:27003 (1 connections now open to node3:27003 with a 5 second timeout)
2022-09-14T15:58:06.261+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node1:21000, took 2ms (1 connections now open to node1:21000)
2022-09-14T15:58:06.261+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node2:27002
2022-09-14T15:58:06.263+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node2:27002, took 2ms (1 connections now open to node2:27002)
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0]
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] ** WARNING: This replica set has a Primary-Secondary-Arbiter architecture, but readConcern:majority is enabled
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] **          for this node. This is not a recommended configuration. Please see
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] **          https://dochub.mongodb.org/core/psa-disable-rc-majority-3.6
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0]
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] New replica set config in use: { _id: "shard2", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "node1:27002", arbiterOnly: true, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "node2:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "node3:27002", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('62e895986d02eb140daedcb8') } }
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] This node is node3:27002 in the config
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] transition to STARTUP2 from STARTUP
2022-09-14T15:58:06.276+0800 I REPL     [replexec-0] Starting replication storage threads
2022-09-14T15:58:06.276+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to node1:27002
2022-09-14T15:58:06.277+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Connecting to node2:27002
2022-09-14T15:58:06.277+0800 I REPL     [replexec-0] transition to RECOVERING from STARTUP2
2022-09-14T15:58:06.277+0800 I REPL     [replexec-0] Starting replication fetcher thread
2022-09-14T15:58:06.277+0800 I REPL     [replexec-0] Starting replication applier thread
2022-09-14T15:58:06.277+0800 I REPL     [replexec-0] Starting replication reporter thread
2022-09-14T15:58:06.278+0800 I REPL     [rsSync] transition to SECONDARY from RECOVERING
2022-09-14T15:58:06.278+0800 I REPL     [rsSync] Resetting sync source to empty, which was :27017
2022-09-14T15:58:06.287+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to node1:27002, took 11ms (1 connections now open to node1:27002)
2022-09-14T15:58:06.287+0800 I REPL     [replexec-1] Member node1:27002 is now in state ARBITER
2022-09-14T15:58:06.288+0800 I ASIO     [NetworkInterfaceASIO-Replication-0] Successfully connected to node2:27002, took 11ms (1 connections now open to node2:27002)
2022-09-14T15:58:06.288+0800 I REPL     [replexec-0] Member node2:27002 is now in state PRIMARY
2022-09-14T15:58:06.819+0800 I NETWORK  [listener] connection accepted from 10.105.2.8:44144 #1 (1 connection now open)
2022-09-14T15:58:06.819+0800 I NETWORK  [conn1] received client metadata from 10.105.2.8:44144 conn1: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.23" }, os: { type: "Linux", name: "Red Hat Enterprise Linux release 8.6 (Ootpa)", architecture: "x86_64", version: "Kernel 4.18.0-372.9.1.el8.x86_64" } }
2022-09-14T15:58:06.833+0800 I ACCESS   [conn1] Successfully authenticated as principal __system on local from client 10.105.2.8:44144
2022-09-14T15:58:07.278+0800 I REPL     [rsBackgroundSync] sync source candidate: node2:27002 [here the node picks node2:27002 as its sync source]
2022-09-14T15:58:07.285+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to node2:27002
2022-09-14T15:58:07.286+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to node2:27002, took 1ms (1 connections now open to node2:27002)
2022-09-14T15:58:07.287+0800 I REPL     [rsBackgroundSync] Changed sync source from empty to node2:27002
2022-09-14T15:58:07.287+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Connecting to node2:27002
2022-09-14T15:58:07.289+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Successfully connected to node2:27002, took 2ms (2 connections now open to node2:27002)
2022-09-14T15:58:07.425+0800 I REPL     [rsBackgroundSync] Starting rollback due to OplogStartMissing: Our last op time fetched: { ts: Timestamp(1662705355, 24), t: 8 }. source's GTE: { ts: Timestamp(1662706338, 1), t: 9 } hashes: (-4559377292940263073/5327713539284899481)
2022-09-14T15:58:07.425+0800 I REPL     [rsBackgroundSync] Replication commit point: { ts: Timestamp(0, 0), t: -1 }
2022-09-14T15:58:07.425+0800 I REPL     [rsBackgroundSync] Rollback using the 'rollbackViaRefetch' method because UUID support is feature compatible with featureCompatibilityVersion 3.6.
2022-09-14T15:58:07.425+0800 I REPL     [rsBackgroundSync] transition to ROLLBACK from SECONDARY
2022-09-14T15:58:07.425+0800 I NETWORK  [rsBackgroundSync] Skip closing connection for connection # 1
2022-09-14T15:58:07.425+0800 I ROLLBACK [rsBackgroundSync] Starting rollback. Sync source: node2:27002 [again, node2:27002 is confirmed as the sync source]
2022-09-14T15:58:07.427+0800 I ROLLBACK [rsBackgroundSync] Finding the Common Point
2022-09-14T15:58:07.431+0800 I ROLLBACK [rsBackgroundSync] our last optime:   Timestamp(1662705355, 24)
2022-09-14T15:58:07.431+0800 I ROLLBACK [rsBackgroundSync] their last optime: Timestamp(1663142280, 1)
2022-09-14T15:58:07.431+0800 I ROLLBACK [rsBackgroundSync] diff in end of log times: -436925 seconds [this is the difference between the two optimes above]
2022-09-14T15:58:07.451+0800 I SH_REFR  [ShardServerCatalogCacheLoader-0] Refresh for collection config.system.sessions took 1190 ms and failed :: caused by :: PrimarySteppedDown: Error waiting for optime { ts: Timestamp(1663142280, 1), t: -1 }, current relevant optime is { ts: Timestamp(1662705355, 24), t: 8 }. :: caused by :: operation was interrupted
2022-09-14T15:58:07.451+0800 I CONTROL  [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Error waiting for optime { ts: Timestamp(1663142280, 1), t: -1 }, current relevant optime is { ts: Timestamp(1662705355, 24), t: 8 }. :: caused by :: operation was interrupted
2022-09-14T15:58:07.451+0800 I CONTROL  [LogicalSessionCacheReap] Sessions collection is not set up; waiting until next sessions reap interval: Error waiting for optime { ts: Timestamp(1663142280, 1), t: -1 }, current relevant optime is { ts: Timestamp(1662705355, 24), t: 8 }. :: caused by :: operation was interrupted
2022-09-14T15:58:08.194+0800 I NETWORK  [listener] connection accepted from 10.105.2.7:37336 #2 (2 connections now open)
2022-09-14T15:58:08.194+0800 I NETWORK  [conn2] received client metadata from 10.105.2.7:37336 conn2: { driver: { name: "NetworkInterfaceASIO-Replication", version: "3.6.23" }, os: { type: "Linux", name: "Red Hat Enterprise Linux release 8.6 (Ootpa)", architecture: "x86_64", version: "Kernel 4.18.0-372.9.1.el8.x86_64" } }
2022-09-14T15:58:08.208+0800 I ACCESS   [conn2] Successfully authenticated as principal __system on local from client 10.105.2.7:37336
2022-09-14T15:58:09.801+0800 I NETWORK  [listener] connection accepted from 10.105.2.8:44146 #3 (3 connections now open)
...[log entries for conn3 through conn14 omitted here for brevity]
2022-09-14T15:58:36.231+0800 I NETWORK  [conn15] received client metadata from 10.105.2.9:43116 conn15: { driver: { name: "MongoDB Internal Client", version: "3.6.23" }, os: { type: "Linux", name: "Red Hat Enterprise Linux release 8.6 (Ootpa)", architecture: "x86_64", version: "Kernel 4.18.0-372.9.1.el8.x86_64" } }
2022-09-14T15:58:36.245+0800 I ACCESS   [conn15] Successfully authenticated as principal __system on local from client 10.105.2.9:43116
2022-09-14T15:58:36.245+0800 I NETWORK  [ReplicaSetMonitor-TaskExecutor-0] Successfully connected to node3:27002 (1 connections now open to node3:27002 with a 5 second timeout)
2022-09-14T15:59:06.182+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Ending idle connection to host node3:21000 because the pool meets constraints; 1 connections to that host remain open
2022-09-14T15:59:06.183+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Ending idle connection to host node2:21000 because the pool meets constraints; 1 connections to that host remain open
2022-09-14T15:59:07.289+0800 I ASIO     [NetworkInterfaceASIO-RS-0] Ending idle connection to host node2:27002 because the pool meets constraints; 1 connections to that host remain open
2022-09-14T16:44:50.844+0800 I CONTROL  [main] ***** SERVER RESTARTED *****

[As the last line above shows, there is no further log output at all until the next restart!]
The only thing that looks even remotely suspicious is:
Error waiting for optime { ts: Timestamp(1663142280, 1), t: -1 }
But whatever the cause, the log carries no decisive error right before the process died, which is what makes it so baffling.
The "pool meets constraints" messages are not the culprit either; the arithmetic sketch below at least accounts for the rollback itself.
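
For what it is worth, the rollback is explainable from the two optimes printed in the ROLLBACK section above; only the silent exit afterwards remains a mystery. A minimal arithmetic sketch (the mongo command at the end is illustrative; the user name is a placeholder, not something taken from this cluster):

# their last optime (primary node2): 1663142280  -> 2022-09-14 ~15:58 +0800
# our last optime   (this node)    : 1662705355  -> 2022-09-09 ~14:35 +0800
echo $(( 1663142280 - 1662705355 ))   # 436925 seconds
echo $(( 436925 / 86400 ))            # about 5 days behind the primary
# OplogStartMissing means node2's oplog no longer reaches back to this node's last
# applied op, so oplog tailing cannot resume and the node drops into rollback.
# The primary's remaining oplog window can be checked with:
mongo --host node2 --port 27002 -u <adminUser> -p --authenticationDatabase admin \
      --eval 'rs.printReplicationInfo()'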

■■ Rebuilding the shard2 replica

■ With the node refusing to start and no identifiable cause, I decided to simply delete this replica and rebuild it. That is where I hit an authentication problem: the shards also have keyFile security enabled, so logging in to the shard directly fails authentication.

Remember that user accounts are stored on the config servers; they are not kept on the data shards.
I edited the shard config file to remove keyFile authentication, but then the shard would not start properly at all, so I had to restore the original configuration.
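
For the record, a commonly used alternative in this situation (not the route taken here) is to restart the member temporarily as a standalone for local maintenance: point mongod at the same dbpath but drop the replication, sharding and keyFile options and listen only on localhost on a spare port. A rough sketch, with all values assumed from this cluster's shard2.conf:

# maintenance-mode restart on node3 (mongod is already down at this point)
mongod --dbpath /data/mongodb/shard2/data \
       --port 37002 --bind_ip 127.0.0.1 --fork \
       --logpath /data/mongodb/shard2/log/shard2-standalone.log
# ...inspect locally with: mongo --port 37002
# then shut down and go back to the original configuration:
mongod --dbpath /data/mongodb/shard2/data --shutdown
mongod -f /data/mongodb/conf/shard2.conf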

■ In the end it clicked: since this is only a replica, just wipe its data directory and let it resync from scratch:

node2 is known to be the shard2 primary. With the shard2 replica on node3 unable to start, clear out node3's shard2 data directory and start the process again; it will resynchronize automatically (see the sketch below).
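
A minimal sketch of the rebuild on node3, assuming the paths from the shard2.conf shown in the logs and that mongod is started the same way as before:

# node3: mongod is already down; keep the old data around until the resync succeeds
mv /data/mongodb/shard2/data /data/mongodb/shard2/data.bak.$(date +%F)
mkdir -p /data/mongodb/shard2/data
# chown -R mongod:mongod /data/mongodb/shard2/data   # only if mongod runs as a dedicated user

# start with the unchanged config; an empty member performs initial sync
# from the shard2 primary (node2:27002) automatically
mongod -f /data/mongodb/conf/shard2.conf

# follow the initial sync progress
tail -f /data/mongodb/shard2/log/shard2.log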

2022-09-14T16:44:50.844+0800 I CONTROL  [main] ***** SERVER RESTARTED *****
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] MongoDB starting : pid=588866 port=27002 dbpath=/data/mongodb/shard2/data 64-bit host=mongodb3
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] db version v3.6.23
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] git version: d352e6a4764659e0d0350ce77279de3c1f243e5c
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1k  FIPS 25 Mar 2021
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] allocator: tcmalloc
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] modules: none
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] build environment:
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten]     distmod: rhel80
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten]     distarch: x86_64
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten]     target_arch: x86_64
2022-09-14T16:44:50.918+0800 I CONTROL  [initandlisten] options: { config: "/data/mongodb/conf/shard2.conf", net: { bindIp: "0.0.0.0,::", ipv6: true, maxIncomingConnections: 20000, port: 27002 }, processManagement: { fork: true, pidFilePath: "/data/mongodb/shard2/log/shard2.pid" }, replication: { replSet: "shard2" }, security: { keyFile: "/data/mongodb/conf/mongo.keyfile" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/shard2/data" }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/shard2/log/shard2.log" } }
...[omitted]
A long stream of log lines then shows the data sync getting under way.
...[omitted]
Cloning flowtest.processDataObj:
2022-09-14T17:26:28.635+0800 I -        [repl writer worker 0]   flowtest.processDataObj collection clone progress: 19111465/20174625 94% (documents copied)
2022-09-14T17:27:22.842+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Connecting to node1:21000
2022-09-14T17:27:22.844+0800 I ASIO     [NetworkInterfaceASIO-ShardRegistry-0] Successfully connected to node1:21000, took 2ms (1 connections now open to node1:21000)
2022-09-14T17:27:37.478+0800 I -        [repl writer worker 0]   flowtest.processDataObj collection clone progress: 19455863/20174625 96% (documents copied)
2022-09-14T17:28:44.143+0800 I -        [repl writer worker 0]   flowtest.processDataObj collection clone progress: 19880883/20174625 98% (documents copied)
2022-09-14T17:29:23.690+0800 I REPL     [repl writer worker 0] CollectionCloner ns:flowtest.processDataObj finished cloning with status: OK
2022-09-14T17:29:39.001+0800 I -        [repl writer worker 0]   Index: (2/3) BTree Bottom Up Progress: 16277900/20174625 80%
...[omitted]
Cloning flowtest.usertable:
2022-09-14T17:42:37.954+0800 I -        [repl writer worker 15]   flowtest.usertable collection clone progress: 16896/20000 84% (documents copied)
2022-09-14T17:43:43.274+0800 I -        [repl writer worker 15]   flowtest.usertable collection clone progress: 18432/20000 92% (documents copied)
2022-09-14T17:44:44.984+0800 I -        [repl writer worker 15]   flowtest.usertable collection clone progress: 19968/20000 99% (documents copied)
2022-09-14T17:44:46.189+0800 I REPL     [repl writer worker 15] CollectionCloner ns:flowtest.usertable finished cloning with status: OK
...[omitted]
Cloning flowtest.processDataObjInit:
2022-09-14T18:09:28.528+0800 I -        [repl writer worker 9]   flowtest.processDataObjInit collection clone progress: 15248762/16596007 91% (documents copied)
2022-09-14T18:10:30.535+0800 I -        [repl writer worker 9]   flowtest.processDataObjInit collection clone progress: 15863728/16596007 95% (documents copied)
2022-09-14T18:11:39.482+0800 I -        [repl writer worker 9]   flowtest.processDataObjInit collection clone progress: 16459081/16596007 99% (documents copied)
2022-09-14T18:11:54.826+0800 I REPL     [repl writer worker 3] CollectionCloner ns:flowtest.processDataObjInit finished cloning with status: OK
2022-09-14T18:12:11.779+0800 I INDEX    [repl writer worker 3]   done building bottom layer, going to commit
2022-09-14T18:12:26.001+0800 I -        [repl writer worker 3]   Index: (2/3) BTree Bottom Up Progress: 15723000/16596008 94%
...[omitted]
Finally, the log reports the total elapsed time:
2022-09-14T18:13:00.122+0800 I REPL     [replication-49] initial sync done; took 5287s.

The data directory pulled over 328 GB of data, which works out to roughly 60 MB/s of sustained throughput across the 5287-second initial sync.
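
To confirm the rebuilt member is healthy, it is enough to check that node3:27002 is back in SECONDARY state with negligible lag. A small sketch run against the shard2 primary (user and credentials are placeholders):

mongo --host node2 --port 27002 -u <adminUser> -p --authenticationDatabase admin \
      --eval 'rs.status().members.forEach(function (m) { print(m.name, m.stateStr, m.optimeDate); })'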

Note: the whole exercise took a full day to handle, and writing it up took another two hours. It is recorded here for future reference; if you found it useful, feel free to give it a like.
