1. Redis with 10 million keys

The stored value is "0", which is small; if the values were larger, memory usage would grow accordingly.

With 10 million keys, Redis uses 865 MB of memory.
# Keyspace
db0:keys=11100111,expires=0,avg_ttl=0

Memory usage:

# Memory
used_memory:907730088
used_memory_human:865.68M
used_memory_rss:979476480
used_memory_rss_human:934.10M
used_memory_peak:1258244232
used_memory_peak_human:1.17G
used_memory_peak_perc:72.14%
used_memory_overhead:580102896
used_memory_startup:765664
used_memory_dataset:327627192
used_memory_dataset_perc:36.12%
total_system_memory:8365256704
total_system_memory_human:7.79G
used_memory_lua:37888
used_memory_lua_human:37.00K
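The article does not show how the test keys were generated. Below is a minimal Python sketch using redis-py that loads this kind of data set; the key pattern key:<i>, the batch size, and the use of a pipeline are assumptions, not the author's actual script.

import redis

# Hypothetical loader: writes N small string keys, each with value "0".
# The key pattern "key:<i>" and the batch size are assumptions.
r = redis.Redis(host="127.0.0.1", port=6379)

N = 10_000_000          # 10 million keys, as in the measurement above
BATCH = 10_000

pipe = r.pipeline(transaction=False)
for i in range(N):
    pipe.set(f"key:{i}", "0")
    if (i + 1) % BATCH == 0:
        pipe.execute()          # flush a batch of SET commands
pipe.execute()                  # flush whatever is left

print(r.info("memory")["used_memory_human"])   # compare with the INFO output above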
2. Redis with 15 million keys

With 15 million keys, Redis uses 1.13 GB of memory.
# Keyspace
db0:keys=15100031,expires=0,avg_ttl=0

# Memory
used_memory:1211733288
used_memory_human:1.13G
used_memory_rss:1247817728
used_memory_rss_human:1.16G
used_memory_peak:1258244232
used_memory_peak_human:1.17G
used_memory_peak_perc:96.30%
used_memory_overhead:740104496
used_memory_startup:765664
used_memory_dataset:471628792
used_memory_dataset_perc:38.95%
total_system_memory:8365256704
total_system_memory_human:7.79G
used_memory_lua:37888
used_memory_lua_human:37.00K
3. GET benchmark with 15 million keys
redis-benchmark -h 127.0.0.1 -p 6379 -c 1000 -n 10000 -t get -q

GET: 34364.26 requests per second
4. Sharding the keys across hashes (15 million fields in total)

When the data is sharded with HSET into 1,024 hash keys, memory usage is 921 MB, about 200 MB less than storing the keys directly.
# Memory
used_memory:966758968
used_memory_human:921.97M
used_memory_rss:1002913792
used_memory_rss_human:956.45M
used_memory_peak:1749456304
used_memory_peak_human:1.63G
used_memory_peak_perc:55.26%
used_memory_overhead:1929880
used_memory_startup:765664
used_memory_dataset:964829088
used_memory_dataset_perc:99.88%
total_system_memory:8365256704
total_system_memory_human:7.79G
used_memory_lua:37888
used_memory_lua_human:37.00K

# Keyspace
db0:keys=1024,expires=0,avg_ttl=0
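The bucketing itself is not shown in the article. Below is a minimal sketch of how 15 million logical keys could be spread over 1,024 hashes with HSET, assuming redis-py; the bucket naming bucket:<n> and the CRC32-based bucket function are assumptions. The saving comes from the keyspace now holding only 1,024 top-level keys, as the db0:keys=1024 line above shows.

import zlib
import redis

BUCKETS = 1024    # number of hash keys to spread the data over

def bucket_of(key: str) -> str:
    # Assumed bucketing function: CRC32 of the logical key, modulo BUCKETS.
    return f"bucket:{zlib.crc32(key.encode()) % BUCKETS}"

r = redis.Redis(host="127.0.0.1", port=6379)

N = 15_000_000
pipe = r.pipeline(transaction=False)
for i in range(N):
    key = f"key:{i}"
    pipe.hset(bucket_of(key), key, "0")    # field = logical key, value = "0"
    if (i + 1) % 10_000 == 0:
        pipe.execute()
pipe.execute()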
5. Sharding with HSET into 256 hash keys

Memory usage is 1.09 GB, about 40 MB less than storing the keys directly (1.13 GB).
used_memory:1170356864
used_memory_human:1.09G
used_memory_rss:1190223872
used_memory_rss_human:1.11G
used_memory_peak:1749456304
used_memory_peak_human:1.63G
used_memory_peak_perc:66.90%
used_memory_overhead:33759246
used_memory_startup:765664
used_memory_dataset:1136597618
used_memory_dataset_perc:97.18%
total_system_memory:8365256704
total_system_memory_human:7.79G
6. HGET benchmark
redis-benchmark -h 127.0.0.1 -p 6379 -c 1000 -n 10000 -t hget myhash rand_int rand_int rand_int

====== myhash rand_int rand_int rand_int ======
  10000 requests completed in 0.22 seconds
  1000 parallel clients
  3 bytes payload
  keep alive: 1

46511.63 requests per second
7. Summary

As these results show, when the data volume is very large, sharding keys into Redis hashes reduces memory usage, and even with a very large number of keys, Redis read performance remains high (tens of thousands of GET/HGET requests per second in the benchmarks above).
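One practical note: after sharding, every read has to go through the same bucketing function to find the right hash. A minimal lookup sketch, under the same assumptions as the loading sketch above:

import zlib
import redis

BUCKETS = 1024    # must match the value used when the data was written

def bucket_of(key: str) -> str:
    # Same assumed CRC32-based bucketing as on the write path.
    return f"bucket:{zlib.crc32(key.encode()) % BUCKETS}"

r = redis.Redis(host="127.0.0.1", port=6379)

def shard_get(key: str):
    # HGET <bucket> <logical key>
    return r.hget(bucket_of(key), key)

print(shard_get("key:42"))    # b'0' if the key was loaded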