Performing Data Write Operations with MongoDB


Who Writes Data First in MongoDB: the Journal or the Oplog?

Introduction

This article discusses the steps involved in performing data write operations with MongoDB, focusing on the roles of the journal and the oplog. The journal is a concept at the MongoDB storage engine layer, while the oplog is a capped collection at the MongoDB replication layer.

MongoDB Journal

All data read and write operations in MongoDB go through the interfaces of the storage engine layer. The journal is an auxiliary mechanism the storage engine uses to make data durable. MongoDB currently supports MMAPv1, WiredTiger, MongoRocks, and other storage engines, all of which can be configured with a journal.

To illustrate this, consider how WiredTiger behaves. Without the journal, WiredTiger does not make written data durable immediately. Instead, it takes a full-data checkpoint (controlled by the storage.syncPeriodSecs configuration item) once every minute by default to persist all the data. If the server goes down between checkpoints, only the data up to the most recent checkpoint can be recovered; writes made after it are lost.

This is why enabling the journal is considered imperative. With the journal enabled, every write operation is also recorded in an operation log from which the written data can be reconstructed. If a fault then occurs on the server, WiredTiger restores data to the most recent checkpoint on restart and replays the journal operation logs written after that checkpoint to recover the remaining data.

Two parameters control the journal's behavior in MongoDB. The storage.journal.enabled parameter determines whether the journal is enabled, and the storage.journal.commitIntervalMs parameter determines how often the journal is flushed to disk, with a default value of 100 ms. You can also set the writeConcern to {j: true} on a write to force the journal to be flushed to disk for that write.
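
As a minimal sketch of the {j: true} write concern (assuming a local mongod and the pymongo driver; the test database and orders collection are placeholder names), a journaled write acknowledgment can be requested like this:

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

# Connect to a local mongod; the URI is a placeholder.
client = MongoClient("mongodb://localhost:27017")

# Acknowledge the insert only after its journal entry has been flushed
# to disk (the driver-level equivalent of writeConcern {j: true}).
collection = client["test"].get_collection(
    "orders", write_concern=WriteConcern(j=True)
)

result = collection.insert_one({"item": "demo", "qty": 1})
print(result.inserted_id)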

MongoDB Oplog

The oplog is how data is synchronized between the nodes of a replica set. The client writes data to the primary node, the primary node records an oplog entry after applying the write, and the secondary nodes pull the oplog from the primary node (or from other secondary nodes) so that every node in the replica set ends up with the same data. To the storage engine, the oplog is just ordinary data.
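
Because the oplog is an ordinary capped collection to the storage engine, you can inspect it directly. A minimal sketch (assuming the pymongo driver and a replica set member reachable on localhost) that prints the most recent operations:

from pymongo import MongoClient, DESCENDING

# Connect to a replica set member; the URI is a placeholder.
client = MongoClient("mongodb://localhost:27017")

# The oplog lives in the capped collection local.oplog.rs.
oplog = client["local"]["oplog.rs"]

# Show the five most recent entries: ts (timestamp), op (operation type,
# e.g. "i" insert, "u" update, "d" delete), and ns (namespace).
for entry in oplog.find().sort("$natural", DESCENDING).limit(5):
    print(entry["ts"], entry["op"], entry["ns"])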

One-Time Write with MongoDB

When a document is written to a MongoDB replica set, MongoDB performs the following steps:

  • Write the document data to the corresponding set
  • Update the set's index information
  • Write an oplog for synchronization

These steps must either all succeed or all fail, to avoid the following situations:

  • If the data write succeeds but the index write fails, the document may be readable through a full-collection scan but unreachable through the index.
  • If the data and index writes succeed but the oplog write fails, the write cannot be synchronized to the secondary nodes, leaving the primary and secondary nodes with inconsistent data.

When MongoDB writes data, it wraps the above three operations in a single WiredTiger transaction to guarantee their atomicity:

beginTransaction();
writeDataToCollection();   // write the document to the collection
writeCollectionIndex();    // update the collection's indexes
writeOplog();              // record the corresponding oplog entry
commitTransaction();
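
To see the effect of this atomic write from the client side, here is a rough sketch (assuming the pymongo driver, a replica set primary on localhost, and the same placeholder test.orders namespace as above): after a single insert, the oplog already contains a matching entry carrying the inserted document.

from pymongo import MongoClient

# Placeholder URI; it must point at a replica set primary so an oplog exists.
client = MongoClient("mongodb://localhost:27017")

doc_id = client["test"]["orders"].insert_one({"item": "demo"}).inserted_id

# The same write that stored the document also produced an oplog entry;
# for an insert, the entry has op "i" and the full document in the "o" field.
entry = client["local"]["oplog.rs"].find_one(
    {"op": "i", "ns": "test.orders", "o._id": doc_id}
)
print(entry)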


When WiredTiger commits the transaction, all the changes it contains are written to a single journal operation log record. In the background, WiredTiger periodically takes checkpoints to make the changes durable and to remove journal records that are no longer needed.

In terms of the data layout, the relationship between oplog and journal is as follows:

[Figure: the relationship between the oplog and the journal in the data layout]

Conclusion

In this article, we discussed how MongoDB performs data write operations, specifically looking at the roles of the oplog and the journal in the process. The oplog and the journal belong to different layers of MongoDB. Because the oplog is an ordinary collection in MongoDB, writing to the oplog works the same way as writing to any other collection. A single write changes the corresponding collection data, index, and oplog, and all of these changes are captured in one journal operation log record.
