The previous two posts benchmarked single-threaded Python inserts into MongoDB and PostgreSQL. Compared with PostgreSQL's pgbench, neither of the two Python drivers used there exposes an asynchronous interface, which hurts performance badly.
This post uses the threading module to test multi-threaded performance.
PostgreSQL test script, using 8 threads :
$ vi test.py
import threading
import time
import postgresql

conn = {"user": "postgres",
        "database": "postgres",
        "unix": "/data01/pgdata/pg_root/.s.PGSQL.1921"}

db = postgresql.open(**conn)
db.execute("drop table if exists tt")
db.execute("create table tt(id int, username name, age int2, email text, qq text)")
print(db.query("select count(1) as a from tt"))

class n_t(threading.Thread):  # worker thread, derived from threading.Thread
    def __init__(self, num):
        threading.Thread.__init__(self)
        self.thread_num = num

    def run(self):  # override run() with the per-thread workload
        # each thread opens its own connection; py-postgresql connections
        # are not safe to share across threads
        conn = {"user": "postgres",
                "database": "postgres",
                "unix": "/data01/pgdata/pg_root/.s.PGSQL.1921"}
        db = postgresql.open(**conn)
        ins = db.prepare("insert into tt values($1,$2,$3,$4,$5)")
        start_t = time.time()
        print("TID:" + str(self.thread_num) + " " + str(start_t))
        for i in range(0, 125000):
            ins(1, 'digoal.zhou', 32, 'digoal@126.com', '276732431')
        stop_t = time.time()
        print("TID:" + str(self.thread_num) + " " + str(stop_t))
        print(stop_t - start_t)

def test():
    t_names = dict()
    for i in range(0, 8):
        t_names[i] = n_t(i)
        t_names[i].start()
    return

if __name__ == '__main__':
    test()
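Note that the script above never join()s its threads, so the main thread returns while the workers are still running and there is no single wall-clock figure for the whole run; each thread only prints its own elapsed time. A minimal sketch of the same pattern with join() added, using a dummy loop as a stand-in for the per-row INSERT (the Worker name and iteration count are illustrative, not from the benchmark):

```python
import threading
import time

class Worker(threading.Thread):
    def __init__(self, num, iterations):
        threading.Thread.__init__(self)
        self.num = num
        self.iterations = iterations
        self.elapsed = None

    def run(self):
        start = time.time()
        for _ in range(self.iterations):
            pass  # stand-in for the per-row INSERT
        self.elapsed = time.time() - start

threads = [Worker(i, 100000) for i in range(8)]
wall_start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every worker before reading its timing
wall = time.time() - wall_start

print("slowest thread:", max(t.elapsed for t in threads))
print("wall clock:", wall)
```

With join() in place, the wall-clock time of the whole run can be compared directly against pgbench's single total, instead of eyeballing eight per-thread printouts.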
Test results :
Even slower than the single-threaded run's 187 seconds, by tens of seconds — the synchronous driver plus CPython's GIL are the likely culprits.
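The slowdown versus a single thread is consistent with CPython's GIL: pure-Python bytecode from different threads never runs in parallel, and the lock hand-offs add overhead. A quick way to see this, with a CPU-bound loop standing in for the driver-side work (all names and counts here are illustrative):

```python
import threading
import time

def spin(n, out, idx):
    # pure-Python CPU-bound loop; holds the GIL while it runs
    s = 0
    for i in range(n):
        s += i
    out[idx] = s

N = 2_000_000
out = [0, 0]

t0 = time.time()
spin(N, out, 0)
spin(N, out, 1)
sequential = time.time() - t0

t0 = time.time()
workers = [threading.Thread(target=spin, args=(N, out, i)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
threaded = time.time() - t0

# under CPython the threaded run is typically no faster than the sequential one
print("sequential:", sequential, "threaded:", threaded)
```

Threads only help when the work releases the GIL (e.g. while blocking on the socket); with a fast local Unix socket there is little such idle time to overlap.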
postgres@localhost-> python test.py
[(0,)]
TID:0 1423065305.517401
TID:3 1423065305.5209844
TID:1 1423065305.52123
TID:5 1423065305.5240796
TID:4 1423065305.5249543
TID:6 1423065305.5266497
TID:2 1423065305.5301073
TID:7 1423065305.533195
TID:5 1423065526.6725013
221.14842176437378
TID:7 1423065528.599816
223.06662106513977
TID:6 1423065529.8911068
224.36445713043213
TID:2 1423065530.6830883
225.15298104286194
TID:4 1423065531.1566184
225.63166403770447
TID:1 1423065531.4046018
225.88337182998657
TID:3 1423065531.4168346
225.8958501815796
TID:0 1423065531.5486302
226.03122925758362
MongoDB test script, again with 8 threads :
# vi test.py
import threading
import time
import pymongo

c = pymongo.MongoClient('/tmp/mongodb-5281.sock')
db = c.test_database
db.drop_collection('test_collection')
collection = db.test_collection
print(collection.count())

class n_t(threading.Thread):  # worker thread, derived from threading.Thread
    def __init__(self, num):
        threading.Thread.__init__(self)
        self.thread_num = num

    def run(self):  # override run() with the per-thread workload
        # each thread creates its own client over the Unix socket
        c = pymongo.MongoClient('/tmp/mongodb-5281.sock')
        db = c.test_database
        collection = db.test_collection
        start_t = time.time()
        print("TID:" + str(self.thread_num) + " " + str(start_t))
        for i in range(0, 125000):
            collection.insert({'id': 1, 'username': 'digoal.zhou', 'age': 32,
                               'email': 'digoal@126.com', 'qq': '276732431'})
        stop_t = time.time()
        print("TID:" + str(self.thread_num) + " " + str(stop_t))
        print(stop_t - start_t)

def test():
    t_names = dict()
    for i in range(0, 8):
        t_names[i] = n_t(i)
        t_names[i].start()
    return

if __name__ == '__main__':
    test()
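Much of the per-document cost here is one client-server round trip per insert. pymongo can batch documents instead — older versions accept a list passed to insert(), and pymongo 3.x adds insert_many() — which cuts the round-trip count dramatically. The chunking logic can be sketched without a running server (the chunks helper and the batch size of 1000 are illustrative):

```python
def chunks(docs, size):
    """Yield successive batches of at most `size` documents."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

docs = [{'id': 1, 'username': 'digoal.zhou'} for _ in range(125000)]
batches = list(chunks(docs, 1000))
print(len(batches))  # 125 batches instead of 125000 round trips
# each batch would then go to something like collection.insert_many(batch)
```

This was not used in the benchmark above, which deliberately measures one round trip per row to stay comparable with the PostgreSQL script.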
The results are roughly the same as the single-threaded test :
[root@localhost ~]# python test.py
0
TID:0 1423066038.8190722
TID:1 1423066038.819762
TID:2 1423066038.8214562
TID:3 1423066038.8254952
TID:4 1423066038.827397
TID:5 1423066038.8303092
TID:6 1423066038.8326738
TID:7 1423066038.8461218
TID:5 1423066400.8412485
362.0109393596649
TID:3 1423066402.4937685
363.6682732105255
TID:2 1423066402.8351183
364.01366209983826
TID:4 1423066402.9675741
364.14017701148987
TID:1 1423066403.0420506
364.222288608551
TID:0 1423066403.284279
364.465206861496
TID:6 1423066403.6458826
364.81320881843567
TID:7 1423066403.6860046
364.839882850647
# ./mongo 127.0.0.1:5281/test_database
MongoDB shell version: 3.0.0-rc7
connecting to: 127.0.0.1:5281/test_database
> db.test_collection.count()
1000000
Finally, the pgbench result for PostgreSQL with 8 client connections :
The equivalent workload takes only about 16 seconds, far faster than any of the Python tests. (pgbench ran 8,000,000 transactions here, so divide by 8 to compare with the 1,000,000-row runs above.)
postgres@localhost-> vi test.sql
insert into tt values (1,'digoal.zhou',32,'digoal@126.com','276732431');
postgres@localhost-> pgbench -M prepared -n -r -f ./test.sql -c 8 -j 4 -t 1000000
transaction type: Custom query
scaling factor: 1
query mode: prepared
number of clients: 8
number of threads: 4
number of transactions per client: 1000000
number of transactions actually processed: 8000000/8000000
tps = 64215.539716 (including connections establishing)
tps = 64219.452898 (excluding connections establishing)
statement latencies in milliseconds:
0.118040 insert into tt values (1,'digoal.zhou',32,'digoal@126.com','276732431');
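The roughly-16-second figure can be re-derived from the tps reported in the output above — simple arithmetic on the printed numbers:

```python
transactions = 8_000_000
tps = 64215.539716                  # pgbench, including connections establishing

total_seconds = transactions / tps  # whole 8M-transaction run, ~124.6 s
per_million = total_seconds / 8     # comparable to the 1M-row Python runs, ~15.6 s

print(round(total_seconds, 1))
print(round(per_million, 1))
```

So pgbench pushes the same 1,000,000 rows in about 16 seconds, versus roughly 226 seconds for threaded Python against PostgreSQL and 365 seconds against MongoDB.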