
MonetDB: table-level write lock, or repeatable read (or higher) isolation?

While testing inserts into a single table, I found that when the transactions run in parallel, every transaction except the one that commits first gets rolled back.

Let's look at an example:
sql>\d d
CREATE TABLE "sys"."d" (
        "id"   INTEGER,
        "info" VARCHAR(64),
        "c1"   VARCHAR(64)
);


Table d currently holds one row.
sql>select * from d;
+------+------+------+
| id   | info | c1   |
+======+======+======+
|    1 | test | test |
+------+------+------+
1 tuple (1.698ms)

Open session 1 and insert a row; within this session a query now shows 2 rows.
sql>start transaction;
auto commit mode: off
sql>insert into d values (2,'test','test');
1 affected rows (0.726ms)
sql>select * from d;
+------+------+------+
| id   | info | c1   |
+======+======+======+
|    1 | test | test |
|    2 | test | test |
+------+------+------+
2 tuples (1.595ms)


Open session 2; it sees only 1 row, so the isolation level is at least read committed.
sql>start transaction;
auto commit mode: off
sql>select * from d;
+------+------+------+
| id   | info | c1   |
+======+======+======+
|    1 | test | test |
+------+------+------+
1 tuple (1.299ms)

In session 2, delete that row. Querying again shows no rows left.
sql>delete from d;
1 affected rows (0.684ms)
sql>select * from d;
+----+------+----+
| id | info | c1 |
+====+======+====+
+----+------+----+
0 tuples (1.152ms)


Session 1 still sees 2 rows.
sql>select * from d;
+------+------+------+
| id   | info | c1   |
+======+======+======+
|    1 | test | test |
|    2 | test | test |
+------+------+------+
2 tuples (0.926ms)

Session 2 commits successfully.
sql>commit;
auto commit mode: on


Session 1 still sees 2 rows, which now looks like repeatable read or SSI isolation.
sql>select * from d;
+------+------+------+
| id   | info | c1   |
+======+======+======+
|    1 | test | test |
|    2 | test | test |
+------+------+------+
2 tuples (1.065ms)

Committing session 1 fails.
sql>commit;
COMMIT: failed

Querying table d again in session 1 shows no rows, because session 2 deleted every row in d.
sql>select * from d;
+----+------+----+
| id | info | c1 |
+====+======+====+
+----+------+----+
0 tuples (1.399ms)


From this behavior, no lock conflict occurs while the transactions are running; the conflict is only detected at commit time, and the losing transaction is rolled back.
So the transaction that commits first succeeds, and whatever commits later fails.
This also means there is no deadlock problem, since transactions never wait on each other.
MonetDB's documentation on transactions, quoted below, confirms this: it uses optimistic concurrency control (OCC) and detects conflicts at commit time, so it is not well suited to long, parallel transactions with conflicting INSERT/UPDATE/DELETE operations.
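
Since a conflict only surfaces at COMMIT, the usual application-side strategy is to retry the whole transaction when the commit fails. Here is a minimal retry sketch in Python, assuming the pymonetdb DB-API driver; the connection parameters, backoff policy and error handling are illustrative rather than part of the original test.

import time
import pymonetdb   # assumption: MonetDB's Python DB-API driver

def insert_with_retry(max_attempts=5):
    # Illustrative connection parameters.
    conn = pymonetdb.connect(username="monetdb", password="monetdb",
                             hostname="localhost", database="demo")
    try:
        for attempt in range(1, max_attempts + 1):
            cur = conn.cursor()
            try:
                cur.execute("INSERT INTO d VALUES (3, 'test', 'test')")
                conn.commit()              # OCC: a conflict surfaces here, not earlier
                return True
            except pymonetdb.Error:
                try:
                    conn.rollback()        # clear the aborted transaction
                except pymonetdb.Error:
                    pass                   # server may already have rolled it back
                time.sleep(0.1 * attempt)  # back off, then redo the whole transaction
        return False
    finally:
        conn.close()

insert_with_retry()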

Transactions

MonetDB/SQL supports a multi-statement transaction scheme marked by START TRANSACTION and closed with either COMMIT or ROLLBACK. The session variable auto_commit can be set to true if each SQL statement should be considered an independent transaction.

WARNING. The transaction management scheme is based on optimistic concurrency control. It provides each transaction with a consistent view on the database, but updates are collected in an addendum processed on transaction commit. If at commit time it can be assured that the data prepared for update in the affected tables has not changed in the meantime, the results are merged.
This optimistic concurrency scheme is particularly useful for query dominant environments. It negatively affects long running transactions which concurrently are affected by updates on their underlying tables.

WARNING. Optimistic concurrency control may be confusing for those who built online-transaction applications, because the granularity of the concurrency control scheme will show higher than expected transaction failures. There is no locking scheme to avoid this. Applications may have to resort to serial execution.

WARNING. The tuples being deleted are only marked as such. They do not reduce the table size. It calls for a vacuum cleaning algorithm.
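
The "serial execution" the documentation falls back on can be approximated in the application by serializing all writers of the same table. A minimal single-process sketch, again assuming pymonetdb; the per-table locks and the helper name are illustrative:

import threading
import pymonetdb   # assumption: MonetDB's Python DB-API driver

# One lock per table: writers of the same table never overlap, so their
# commits cannot conflict. Only works within a single process.
table_locks = {"d": threading.Lock(), "c": threading.Lock()}

def serialized_write(table, sql):
    with table_locks[table]:
        conn = pymonetdb.connect(username="monetdb", password="monetdb",
                                 hostname="localhost", database="demo")  # illustrative credentials
        try:
            conn.cursor().execute(sql)
            conn.commit()
        finally:
            conn.close()

serialized_write("d", "INSERT INTO d VALUES (4, 'test', 'test')")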

Why call this a table-level conflict? Because DML on different tables does not conflict.
For example:
Session 1
sql>start transaction;
auto commit mode: off
sql>insert into d values(100,'test','test');
1 affected rows (0.678ms)
Session 2
sql>start transaction;
auto commit mode: off
sql>insert into c values(100,'test','test');
1 affected rows (0.678ms)
Session 1
sql>commit;
auto commit mode: on
Session 2
sql>commit;
auto commit mode: on

Both commits succeed.
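
The two cases above can also be reproduced programmatically by interleaving two connections, mirroring the manual sessions. A sketch assuming pymonetdb, that table c has the same definition as d, and that the driver surfaces the failed COMMIT as an exception:

import pymonetdb   # assumption: MonetDB's Python DB-API driver

def connect():
    # Illustrative connection parameters.
    return pymonetdb.connect(username="monetdb", password="monetdb",
                             hostname="localhost", database="demo")

s1, s2 = connect(), connect()

# Case 1: both sessions write table d -> only the first commit succeeds.
s1.cursor().execute("INSERT INTO d VALUES (100, 'test', 'test')")
s2.cursor().execute("INSERT INTO d VALUES (101, 'test', 'test')")
s1.commit()                          # first committer wins
try:
    s2.commit()                      # expected to fail: conflicting write on d
except pymonetdb.Error as e:
    print("session 2 commit failed:", e)

# Case 2: the sessions write different tables -> both commits succeed.
s1.cursor().execute("INSERT INTO d VALUES (102, 'test', 'test')")
s2.cursor().execute("INSERT INTO c VALUES (102, 'test', 'test')")
s1.commit()
s2.commit()

s1.close()
s2.close()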
