MyBatis Series Index -- 5. MyBatis First-Level and Second-Level Caches (Redis Implementation)




Please credit the original source when reposting: http://carlosfu.iteye.com/blog/2238662


0. Background
Query caching: the vast majority of systems are read-heavy and write-light.
Purpose of caching: reduce database load and improve access speed.

  

1. First-level cache test case

(1) The first-level cache is enabled by default; no configuration is required.

(2) Diagram

(3) Test code:

package com.sohu.tv.cache;
import org.apache.ibatis.session.SqlSession;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import com.sohu.tv.bean.Player;
import com.sohu.tv.mapper.PlayerDao;
import com.sohu.tv.test.mapper.BaseTest;
/**
 * First-level cache test
 * 
 * @author leifu
 * @Date 2015-8-3
 * @Time 9:51:00 PM
 */
public class FirstCacheTest extends BaseTest {
    private SqlSession sqlSession;
    private SqlSession sqlSessionAnother;

    
    @Before
    public void before() {
        sqlSession = sessionFactory.openSession(false);
        sqlSessionAnother = sessionFactory.openSession(false);
    }
    @After
    public void after() {
        sqlSession.close();
        sqlSessionAnother.close();
    }
    @Test
    public void test1() throws Exception {
        // first query: goes to the database and populates the first-level cache
        PlayerDao playerDao = sqlSession.getMapper(PlayerDao.class);
        Player player = playerDao.getPlayerById(1);
        System.out.println(player);
        
        // second query on the same SqlSession: served from the first-level cache, no SQL issued
        playerDao = sqlSession.getMapper(PlayerDao.class);
        player = playerDao.getPlayerById(1);
        System.out.println(player);
        
        // query on a different SqlSession: the first-level cache is per-session, so this hits the database again
        playerDao = sqlSessionAnother.getMapper(PlayerDao.class);
        player = playerDao.getPlayerById(1);
        System.out.println(player);
        
    }
    
    @Test
    public void test2() throws Exception {
        PlayerDao playerDao = sqlSession.getMapper(PlayerDao.class);
        Player player = playerDao.getPlayerById(1);
        System.out.println(player);
        
        // 1. Clearing the cache or committing the session invalidates the first-level cache
//        sqlSession.commit();
//        sqlSession.clearCache();
        
        // 2. Any insert/update/delete through the session invalidates it as well
//        playerDao.savePlayer(new Player(-1, "abcd", 13));
//        playerDao.updatePlayer(new Player(4, "abcd", 13));
        playerDao.deletePlayer(4);
        
        // the cache was invalidated above, so this query hits the database again
        player = playerDao.getPlayerById(1);
        System.out.println(player);
        
    }
    
    
}

 

2. Second-level cache (the built-in PerpetualCache)

(0) Diagram

(1) The second-level cache must be enabled

In the global configuration file, the second-level cache is already on by default (cacheEnabled defaults to true), so the following setting is optional:

<setting name="cacheEnabled" value="true"/>
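
For context, a minimal sketch of where this setting sits in the global configuration file (the surrounding file layout is an assumption, not taken from the original project):

<!-- mybatis-config.xml (excerpt, layout assumed) -->
<configuration>
    <settings>
        <setting name="cacheEnabled" value="true"/>
    </settings>
    <!-- typeAliases, environments, mappers, ... -->
</configuration>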

The mapper-level cache does need to be enabled explicitly; add the following to the corresponding mapper.xml:

<!-- enable the second-level cache for this mapper -->
<cache/>

(2) Entities placed in the second-level cache must be serializable, so every entity class needs to implement Serializable.
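
As a reference, here is a minimal sketch of what the Player entity could look like; the fields, constructor, and toString format are inferred from the SQL, the test code, and the log output in this article, so the real class in the project may differ:

package com.sohu.tv.bean;

import java.io.Serializable;

/**
 * Minimal sketch of the Player entity, assuming the id/name/age columns
 * seen in the SQL and logs; the original class may have more members.
 */
public class Player implements Serializable {

    private static final long serialVersionUID = 1L;

    private int id;
    private String name;
    private int age;

    public Player() {
    }

    public Player(int id, String name, int age) {
        this.id = id;
        this.name = name;
        this.age = age;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    @Override
    public String toString() {
        return "Player [id=" + id + ", name=" + name + ", age=" + age + "]";
    }
}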

(3) Example:

package com.sohu.tv.cache;
import org.apache.ibatis.session.SqlSession;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import com.sohu.tv.bean.Player;
import com.sohu.tv.mapper.PlayerDao;
import com.sohu.tv.test.mapper.BaseTest;
/**
 * Second-level cache test
 * 
 * @author leifu
 * @Date 2015-8-3
 * @Time 10:10:34 PM
 */
public class SecondCacheTest extends BaseTest {
    private SqlSession sqlSession1;
    
    private SqlSession sqlSession2;
    
    private SqlSession sqlSession3;
    
    private PlayerDao playerDao1;
    
    private PlayerDao playerDao2;
    
    private PlayerDao playerDao3;
    
    @Before
    public void before() {
        sqlSession1 = sessionFactory.openSession(false);
        sqlSession2 = sessionFactory.openSession(false);
        sqlSession3 = sessionFactory.openSession(false);
        
        playerDao1 = sqlSession1.getMapper(PlayerDao.class);
        playerDao2 = sqlSession2.getMapper(PlayerDao.class);
        playerDao3 = sqlSession3.getMapper(PlayerDao.class);
    }
    @After
    public void after() {
        sqlSession1.close();
        sqlSession2.close();
        sqlSession3.close();
    }
     
    @Test
    public void test1() throws Exception {
        int targetId = 1;
        
        // session1: query, then commit so the result is flushed to the second-level cache
        Player player1 = playerDao1.getPlayerById(targetId);
        System.out.println("player1: " + player1);
        sqlSession1.commit();
        
        // session2: cache hit, then an update + commit clears the mapper's second-level cache
        Player player2 = playerDao2.getPlayerById(targetId);
        System.out.println("player2: " + player2);
        player2.setAge(15);
        playerDao2.updatePlayer(player2);
        sqlSession2.commit();
        
        // session3: cache miss, because the update invalidated the cache, so this hits the database
        Player player3 = playerDao3.getPlayerById(targetId);
        System.out.println("player3: " + player3);
    }
    
    @Test
    public void test2() throws Exception {
        int one = 1;
        int two = 2;
        
        // session1: query two players and commit, populating the second-level cache
        Player player1 = playerDao1.getPlayerById(one);
        playerDao1.getPlayerById(two);
        System.out.println("player1: " + player1);
        sqlSession1.commit();
        
        // session2: cache hit, then an update + commit clears the whole mapper cache
        Player player2 = playerDao2.getPlayerById(one);
        System.out.println("player2: " + player2);
        player2.setAge(15);
        playerDao2.updatePlayer(player2);
        sqlSession2.commit();
        
        // session3: cache miss even for id=2, because the update cleared the entire namespace cache
        Player player3 = playerDao3.getPlayerById(two);
        System.out.println("player3: " + player3);
    }
    
    
}

(4) Key log output:

22:24:37.191 [main] DEBUG com.sohu.tv.mapper.PlayerDao - Cache Hit Ratio [com.sohu.tv.mapper.PlayerDao]: 0.0
22:24:37.196 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Opening JDBC Connection
22:24:37.460 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Created connection 1695520324.
22:24:37.460 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Setting autocommit to false on JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:24:37.463 [main] DEBUG c.s.t.mapper.PlayerDao.getPlayerById - ==> Preparing: select id,name,age from players where id=? 
22:24:37.520 [main] DEBUG c.s.t.mapper.PlayerDao.getPlayerById - ==> Parameters: 1(Integer)
22:24:37.541 [main] DEBUG c.s.t.mapper.PlayerDao.getPlayerById - <== Total: 1
player1: Player [id=1, name=kaka, age=60]
22:24:37.549 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Resetting autocommit to true on JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:24:37.549 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Closing JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:24:37.549 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Returned connection 1695520324 to pool.
22:29:13.203 [main] DEBUG com.sohu.tv.mapper.PlayerDao - Cache Hit Ratio [com.sohu.tv.mapper.PlayerDao]: 0.5
player3: Player [id=1, name=kaka, age=60]
22:29:13.204 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Opening JDBC Connection
22:29:13.204 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Checked out connection 1695520324 from pool.
22:29:13.204 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Setting autocommit to false on JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.205 [main] DEBUG c.s.tv.mapper.PlayerDao.updatePlayer - ==> Preparing: update players set name=?,age=? where id=? 
22:29:13.207 [main] DEBUG c.s.tv.mapper.PlayerDao.updatePlayer - ==> Parameters: kaka(String), 60(Integer), 1(Integer)
22:29:13.208 [main] DEBUG c.s.tv.mapper.PlayerDao.updatePlayer - <== Updates: 1
22:29:13.210 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Committing JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.210 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Resetting autocommit to true on JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.211 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Closing JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.211 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Returned connection 1695520324 to pool.
22:29:13.211 [main] DEBUG com.sohu.tv.mapper.PlayerDao - Cache Hit Ratio [com.sohu.tv.mapper.PlayerDao]: 0.3333333333333333
22:29:13.211 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Opening JDBC Connection
22:29:13.212 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Checked out connection 1695520324 from pool.
22:29:13.212 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Setting autocommit to false on JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.212 [main] DEBUG c.s.t.mapper.PlayerDao.getPlayerById - ==> Preparing: select id,name,age from players where id=? 
22:29:13.213 [main] DEBUG c.s.t.mapper.PlayerDao.getPlayerById - ==> Parameters: 1(Integer)
22:29:13.214 [main] DEBUG c.s.t.mapper.PlayerDao.getPlayerById - <== Total: 1
player2: Player [id=1, name=kaka, age=60]
22:29:13.215 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Resetting autocommit to true on JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.216 [main] DEBUG o.a.i.t.jdbc.JdbcTransaction - Closing JDBC Connection [com.mysql.jdbc.JDBC4Connection@650f9644]
22:29:13.216 [main] DEBUG o.a.i.d.pooled.PooledDataSource - Returned connection 1695520324 to pool.
 

 

 

3. Second-level cache (Redis implementation)

(1) A simple standalone Redis instance is used as the backing store.

Add the Jedis and Protostuff dependencies to the POM:

<properties>
    <jedis.version>2.8.0</jedis.version>
    <protostuff.version>1.0.8</protostuff.version>
</properties>

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>${jedis.version}</version>
</dependency>
<dependency>
    <groupId>com.dyuproject.protostuff</groupId>
    <artifactId>protostuff-runtime</artifactId>
    <version>${protostuff.version}</version>
</dependency>
<dependency>
    <groupId>com.dyuproject.protostuff</groupId>
    <artifactId>protostuff-core</artifactId>
    <version>${protostuff.version}</version>
</dependency>

 

Jedis access utility (using a JedisPool):

package com.sohu.tv.redis;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
/**
 * JedisPool access utility
 * 
 * @author leifu
 * @Date August 4, 2015
 * @Time 9:01:45 AM
 */
public class RedisStandAloneUtil {
    private final static Logger logger = LoggerFactory.getLogger(RedisStandAloneUtil.class);
    /**
     * Jedis connection pool
     */
    private static JedisPool jedisPool;
     
    /**
     * redis-host
     */
    private final static String REDIS_HOST = "10.10.xx.xx";
     
    /**
     * redis-port
     */
    private final static int REDIS_PORT = 6384;
     
    static {
        try {
            jedisPool = new JedisPool(new GenericObjectPoolConfig(), REDIS_HOST, REDIS_PORT);
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
    }
    public static JedisPool getJedisPool() {
        return jedisPool;
    }
      
    public static void main(String[] args) {
        // quick smoke test: print the Redis INFO output
        try (Jedis jedis = getJedisPool().getResource()) {
            System.out.println(jedis.info());
        }
    }
}

 

(2) To implement your own MyBatis second-level cache, you need to implement the org.apache.ibatis.cache.Cache interface; MyBatis already ships with a number of implementations out of the box.
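
For reference, this is the shape of the Cache interface that the custom cache below implements (the built-in implementations include the base PerpetualCache plus decorators such as LruCache, FifoCache, SoftCache, WeakCache and LoggingCache):

package org.apache.ibatis.cache;

import java.util.concurrent.locks.ReadWriteLock;

public interface Cache {

    /** identifier of the cache, normally the mapper namespace */
    String getId();

    /** store a query result under the generated cache key */
    void putObject(Object key, Object value);

    /** look up a cached query result, or null on a miss */
    Object getObject(Object key);

    /** remove a single entry */
    Object removeObject(Object key);

    /** clear the whole cache, invoked when a flushing statement is committed */
    void clear();

    /** number of entries, used for reporting only */
    int getSize();

    /** lock used by some cache decorators; a plain ReentrantReadWriteLock is fine */
    ReadWriteLock getReadWriteLock();
}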


 

 

Serialization utility code:

package com.sohu.tv.redis.serializable;



import com.dyuproject.protostuff.LinkedBuffer;
import com.dyuproject.protostuff.ProtostuffIOUtil;
import com.dyuproject.protostuff.Schema;
import com.dyuproject.protostuff.runtime.RuntimeSchema;

import java.util.concurrent.ConcurrentHashMap;

/**
 * Protostuff-based serializer: values are wrapped in a VO<T> envelope so a single
 * runtime schema can handle arbitrary payloads, and schemas are cached per class.
 */
public class ProtostuffSerializer {

    private static ConcurrentHashMap<Class<?>, Schema<?>> cachedSchema = new ConcurrentHashMap<Class<?>, Schema<?>>();

    public <T> byte[] serialize(final T source) {
        VO<T> vo = new VO<T>(source);

        final LinkedBuffer buffer = LinkedBuffer.allocate(LinkedBuffer.DEFAULT_BUFFER_SIZE);
        try {
            final Schema<VO> schema = getSchema(VO.class);
            return serializeInternal(vo, schema, buffer);
        } catch (final Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        } finally {
            buffer.clear();
        }
    }

    public <T> T deserialize(final byte[] bytes) {
        try {
            Schema<VO> schema = getSchema(VO.class);
            VO vo = deserializeInternal(bytes, schema.newMessage(), schema);
            if (vo != null && vo.getValue() != null) {
                return (T) vo.getValue();
            }
        } catch (final Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
        return null;
    }

    private <T> byte[] serializeInternal(final T source, final Schema<T> schema, final LinkedBuffer buffer) {
        return ProtostuffIOUtil.toByteArray(source, schema, buffer);
    }

    private <T> T deserializeInternal(final byte[] bytes, final T result, final Schema<T> schema) {
        ProtostuffIOUtil.mergeFrom(bytes, result, schema);
        return result;
    }

    private static <T> Schema<T> getSchema(Class<T> clazz) {
        @SuppressWarnings("unchecked")
        Schema<T> schema = (Schema<T>) cachedSchema.get(clazz);
        if (schema == null) {
            schema = RuntimeSchema.createFrom(clazz);
            cachedSchema.put(clazz, schema);
        }
        return schema;
    }

}

 

package com.sohu.tv.redis.serializable;


import java.io.Serializable;

public class VO<T> implements Serializable {

    private T value;

    public VO(T value) {
        this.value = value;
    }

    public VO() {
    }

    public T getValue() {
        return value;
    }

    @Override
    public String toString() {
        return "VO{" +
                "value=" + value +
                '}';
    }
}
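
A quick round-trip check of the serializer might look like the following sketch; the Player constructor arguments are taken from the test code above, and the demo class itself is not part of the original project:

package com.sohu.tv.redis.serializable;

import com.sohu.tv.bean.Player;

public class ProtostuffSerializerDemo {

    public static void main(String[] args) {
        ProtostuffSerializer serializer = new ProtostuffSerializer();

        // serialize a sample entity to bytes, then read it back
        Player source = new Player(1, "kaka", 60);
        byte[] bytes = serializer.serialize(source);
        Player copy = serializer.deserialize(bytes);

        System.out.println("bytes length: " + bytes.length);
        System.out.println("round-tripped: " + copy);
    }
}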

 
 

The Redis-backed cache has to be implemented by hand; the code is as follows:

package com.sohu.tv.redis;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.apache.ibatis.cache.Cache;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import redis.clients.jedis.Jedis;
import com.sohu.tv.redis.serializable.ProtostuffSerializer;
/**
 * Redis-backed implementation of the MyBatis Cache interface
 * 
 * @author leifu
 * @Date August 4, 2015
 * @Time 9:12:37 AM
 */
public class MybatisRedisCache implements Cache {
    private static Logger logger = LoggerFactory.getLogger(MybatisRedisCache.class);
    private String id;
    private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private final ProtostuffSerializer protostuffSerializer = new ProtostuffSerializer();
    public MybatisRedisCache(final String id) {
        if (logger.isInfoEnabled()) {
            logger.info("============ MybatisRedisCache id {} ============", id);
        }
        if (id == null) {  
            throw new IllegalArgumentException("Cache instances require an ID");  
        }  
        this.id = id;  
    } 
     
    @Override
    public String getId() {
        return this.id;
    }
    @Override
    public int getSize() {
        Jedis jedis = null;
        int size = -1;
        try {
            jedis = RedisStandAloneUtil.getJedisPool().getResource();
            // dbSize counts every key in the current Redis database, not just this mapper's entries
            size = jedis.dbSize().intValue();
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (jedis != null) {
                jedis.close();
            }
        }
        return size;
    }
    @Override
    public void putObject(Object key, Object value) {
        if (logger.isInfoEnabled()) {
            logger.info("============ putObject key: {}, value: {} ============", key, value);
        }
        Jedis jedis = null;
        try {
            jedis = RedisStandAloneUtil.getJedisPool().getResource();
            byte[] byteKey = protostuffSerializer.serialize(key);
            byte[] byteValue = protostuffSerializer.serialize(value);
            jedis.set(byteKey, byteValue);
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (jedis != null) {
                jedis.close();
            }
        }
    }
    @Override
    public Object getObject(Object key) {
        if (logger.isInfoEnabled()) {
            logger.info("============ getObject key: {}============", key);
        }
        Object object = null;
        Jedis jedis = null;
        try {
            jedis = RedisStandAloneUtil.getJedisPool().getResource();
            byte[] bytes = jedis.get(protostuffSerializer.serialize(key));
            if (bytes != null) {
                object = protostuffSerializer.deserialize(bytes);
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (jedis != null) {
                jedis.close();
            }
        }
        return object;
    }
    @Override
    public Object removeObject(Object key) {
        if (logger.isInfoEnabled()) {
            logger.info("============ removeObject key: {}============", key);
        }
        String result = "success";
        Jedis jedis = null;
        try {
            jedis = RedisStandAloneUtil.getJedisPool().getResource();
            // delete using the same serialized form of the key that putObject stored
            jedis.del(protostuffSerializer.serialize(key));
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (jedis != null) {
                jedis.close();
            }
        }
        return result;
    }
    @Override
    public void clear() {
        if (logger.isInfoEnabled()) {
            logger.info("============ start clear cache ============");
        }
        String result = "fail";
        Jedis jedis = null;
        try {
            jedis = RedisStandAloneUtil.getJedisPool().getResource();
            // NOTE: flushAll wipes the entire Redis instance; acceptable for this demo,
            // but too coarse if the Redis server is shared with other applications
            result = jedis.flushAll();
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        } finally {
            if (jedis != null) {
                jedis.close();
            }
        }
        if (logger.isInfoEnabled()) {
            logger.info("============ end clear cache result is {}============", result);
        }
    }
    @Override
    public ReadWriteLock getReadWriteLock() {
        return readWriteLock;
    }
}

 

(3) Add the custom Redis second-level cache to the mapper configuration:

 

<cache type="com.sohu.tv.redis.MybatisRedisCache"/>
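
For context, a hedged sketch of a complete mapper file with the Redis cache plugged in; the file name, namespace, and statements are inferred from the DAO interface and the SQL seen in the logs above, so the real mapper may differ:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.sohu.tv.mapper.PlayerDao">

    <!-- use the Redis-backed cache instead of the default PerpetualCache -->
    <cache type="com.sohu.tv.redis.MybatisRedisCache"/>

    <select id="getPlayerById" parameterType="int" resultType="com.sohu.tv.bean.Player">
        select id,name,age from players where id=#{id}
    </select>

    <update id="updatePlayer" parameterType="com.sohu.tv.bean.Player">
        update players set name=#{name},age=#{age} where id=#{id}
    </update>
</mapper>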

(4) The unit tests are the same as in Section 2.
