Implementing a "smooth data upgrade" in Java

Summary: implementing a "smooth data upgrade" in Java


@[toc]

I. Abstract

What we call a "smooth data upgrade": suppose the old system code is version V4.6, where some of the tables involved have only 5 columns, while the same tables in the new version V4.7 have 10 columns. "Smooth data upgrade" here only means that, without stopping the new system, we can run a script that carries the old V4.6 table data over into the new V4.7 schema, so that the V4.7 system keeps working with the V4.6 data.

II. Preconditions and scenario notes:

  • Pain point 1: the V4.6 databases involve far too many tables and too much data. Exporting every table to SQL and restoring it into the new database is clearly impractical, so we want a script that, once run, synchronizes the old-version data into the new system's tables.
  • Pain point 2: our system uses views (a view is the displayed result of a join over several tables), and some view columns, such as the network-element ID (neId) and the node ID (nodeId), are generated dynamically: they follow an auto-increment sequence at installation time, just like an auto-increment table index, and cannot be controlled. As a result, a record in V4.6 and the same record in V4.7 can have identical content but different neId and nodeId values, so neId and nodeId have to be matched and updated dynamically.

Question: why do we need the neId values to stay consistent at all? Isn't that redundant in the design?
Answer: several services need to communicate with each other, and the related requests, queries and interface calls are only triggered when both sides use the same neId. In short, neId is a required parameter for inter-service communication.

  • Pain point 3: for pain point 2, a script alone cannot do the job quickly or reliably, so we write Java business logic that generates a file of the old and new neId/nodeId values for each table (the CSV mapping files below), and then let the script read that file, loop over the tables and update the neId values, achieving the match-and-update effect (a minimal sketch of this matching idea follows at the end of this section).
  • Pain point 4: the script has to cover three steps in total: back up the old data, restore it into a temporary database, and update the production database.
  • Note 5: we do not synchronize every table; only part of the business data is migrated, such as alarm data and statistics data, and only selected tables under the lte_ems and olap databases.
  • Note 6: before using this, read《4.6升级到4.7说明.docx》together with 部分数据平滑升级4.6升4.7升级流程图.png.
  • Note 7: the project is installed under /home/ems/ on the server, so the paths hard-coded in the code follow that layout.
  • Note 8: the two CSV files generated by the upgrade script, neIdMapping.csv and nodeIdMapping.csv, look roughly like the sample below.
    (screenshot: sample rows of neIdMapping.csv and nodeIdMapping.csv)

For detailed instructions on using the upgrade script, see《4.6升级到4.7说明.docx》and 部分数据平滑升级4.6升4.7升级流程图.png; the document and the upgrade package can be downloaded for free from my resources.
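
Before diving into the code, here is a minimal sketch of the matching idea from pain points 2 and 3. The real logic lives in the CreateCsv class further down; the class name and helper method shown here are illustrative only. The idea: build a business-key-to-id map for the 4.6 data and for the 4.7 data, then pair the ids that share the same key.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of the neId matching idea (see CreateCsv below for the real implementation).
// The business key TYPE_CODE + "@" + IP identifies the same network element in both versions
// even though the auto-generated neId differs.
public class NeIdMatchSketch {

    public static List<String[]> buildMapping(Map<String, String> old46, Map<String, String> new47) {
        // old46 / new47: business key -> neId, read from ne_resources of each database
        List<String[]> rows = new ArrayList<>();
        for (Map.Entry<String, String> e : old46.entrySet()) {
            String neId47 = new47.getOrDefault(e.getKey(), ""); // empty when no 4.7 match exists
            rows.add(new String[] {e.getValue(), neId47});       // one row of neIdMapping.csv
        }
        return rows;
    }
}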

III. Scripts and code used in the project

1. Project directory layout

(screenshot: project directory structure)

2. Java code, configuration files, and selected scripts

EmsDTO entity class

public class EmsDTO {
   
    private String ID;
    private String NAME;
    private String NAME_KEY;
    private String TYPE;
    private String IP;
    private String TYPE_CODE;
    public String getID() {
   
        return ID;
    }
    public void setID(String iD) {
   
        ID = iD;
    }
    public String getNAME() {
   
        return NAME;
    }
    public void setNAME(String nAME) {
   
        NAME = nAME;
    }
    public String getNAME_KEY() {
   
        return NAME_KEY;
    }
    public void setNAME_KEY(String nAME_KEY) {
   
        NAME_KEY = nAME_KEY;
    }
    public String getTYPE() {
   
        return TYPE;
    }
    public void setTYPE(String tYPE) {
   
        TYPE = tYPE;
    }
    public String getIP() {
   
        return IP;
    }
    public void setIP(String iP) {
   
        IP = iP;
    }
    public String getTYPE_CODE() {
   
        return TYPE_CODE;
    }
    public void setTYPE_CODE(String tYPE_CODE) {
   
        TYPE_CODE = tYPE_CODE;
    }
    public EmsDTO(String iD, String nAME, String nAME_KEY, String tYPE, String iP, String tYPE_CODE) {
   
        super();
        ID = iD;
        NAME = nAME;
        NAME_KEY = nAME_KEY;
        TYPE = tYPE;
        IP = iP;
        TYPE_CODE = tYPE_CODE;
    }
    public EmsDTO() {
   
        super();
        // TODO Auto-generated constructor stub
    }    
}
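
Since Lombok is already on the classpath (see the pom.xml later in this post), the same DTO could optionally be written with generated boilerplate. This is just an equivalent sketch, not what the project ships:

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// Optional, equivalent form: Lombok generates the getters, setters, constructors and toString
// for the same six columns of ne_resources. The hand-written class above is what the project uses.
@Data
@NoArgsConstructor
@AllArgsConstructor
public class EmsDTO {
    private String ID;
    private String NAME;
    private String NAME_KEY;
    private String TYPE;
    private String IP;
    private String TYPE_CODE;
}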

KeyValueString entity class

public class KeyValueString {
       
    private String key;
    private String value;

    public String getKey() {
   
        return key;
    }
    public void setKey(String key) {
   
        this.key = key;
    }
    public String getValue() {
   
        return value;
    }
    public void setValue(String value) {
   
        this.value = value;
    }

    @Override
    public String toString() {
   
        return "KeyValueString [key=" + key + ", value=" + value + "]";
    }
}

KeyValueString2 entity class

public class KeyValueString2 {
       
    private String neId_4_6;
    private String neId_4_7;
    public String getNeId_4_6() {
   
        return neId_4_6;
    }
    public void setNeId_4_6(String neId_4_6) {
   
        this.neId_4_6 = neId_4_6;
    }
    public String getNeId_4_7() {
   
        return neId_4_7;
    }
    public void setNeId_4_7(String neId_4_7) {
   
        this.neId_4_7 = neId_4_7;
    }
    @Override
    public String toString() {
   
        return "KeyValueString2 [neId_4_6=" + neId_4_6 + ", neId_4_7=" + neId_4_7 + "]";
    }
}

TableColumns entity class, mapping rows of lte_ems_tables.csv and olap_tables.csv

public class TableColumns {
   
    private String tableName;
    private String nodeId;
    private String neId;
    private String equipmentId;

    public String getTableName() {
   
        return tableName;
    }
    public void setTableName(String tableName) {
   
        this.tableName = tableName;
    }
    public String getNodeId() {
   
        return nodeId;
    }
    public void setNodeId(String nodeId) {
   
        this.nodeId = nodeId;
    }
    public String getNeId() {
   
        return neId;
    }
    public void setNeId(String neId) {
   
        this.neId = neId;
    }
    public String getEquipmentId() {
   
        return equipmentId;
    }
    public void setEquipmentId(String equipmentId) {
   
        this.equipmentId = equipmentId;
    }
    @Override
    public String toString() {
   
        return "TableColumns [tableName=" + tableName + ", nodeId=" + nodeId + ", neId=" + neId + ", equipmentId="
                + equipmentId + "]";
    }
}

Config configuration class

public class Config {

    /**
     * Temporary column added while updating the database; it marks rows that have already
     * been updated, so a NEID that was just written cannot match a pending mapping entry
     * and be updated a second time.
     */
    public static final int CUSTOM_TAG_VALUE = 9999999;

    /**
     * Mapping of 4.6 neId values to their 4.7 counterparts
     */
    public static final String NE_ID_MAPPING_FILE_PATH = "/home/ems/upgrade/neIdMapping.csv";

    /**
     * Mapping of 4.6 nodeId values to their 4.7 counterparts
     */
    public static final String NODE_ID_MAPPING_FILE_PATH = "/home/ems/upgrade/nodeIdMapping.csv";

    /**
     * lte_ems: table names and the columns to update in each
     */
    public static final String LTE_EMS_TABLES_FILE_PATH = "/home/ems/upgrade/lte_ems_tables.csv";

    /**
     * olap: table names and the columns to update in each
     */
    public static final String OLAP_TABLES_FILE_PATH = "/home/ems/upgrade/olap_tables.csv";

    public static final String[] CSV_HEADER = new String[] {"neId_4_6", "neId_4_7"};

    public static final String[] TABLE_COLUMN_CSV_HEADER = new String[] {"tableName", "nodeId", "neId", "equipmentId"};

    public static final String[] TABLE_COLUMN_OLAP_HEADER = new String[] {"tableName", "neId"};
}

DBToolKit database utility class

import java.sql.*;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;

/**
 * Database utility class
 *
 * @author 211145187
 * @date 2022-04-27
 */
public class DBToolKit {

    private static final Map<String, String> DRIVERS;

    private static final String MYSQL;

    private static final String POSTGRES;

    private static final String MSSQL;

    private static final String ORACLE;

    static {
   
        MYSQL = "MYSQL";
        POSTGRES = "POSTGRESQL";
        MSSQL = "SQLSERVER";
        ORACLE = "ORACLE";
        DRIVERS = new HashMap<>(8);
        DRIVERS.put(MYSQL, "com.mysql.jdbc.Driver");
        DRIVERS.put(POSTGRES, "org.postgresql.Driver");
        DRIVERS.put(MSSQL, "com.microsoft.sqlserver.jdbc.SQLServerDriver");
        DRIVERS.put(ORACLE, "oracle.jdbc.OracleDriver");
    }

    public static Connection getConnection(Properties properties, String dbName) {
   
        Connection con = null;
        String ipAndPort = properties.getProperty("poc9000.db.ipAndPort");
        String username = properties.getProperty("poc9000.db.username");
        String password = properties.getProperty("poc9000.db.password");
        String driver = properties.getProperty("poc9000.db.driver");

        try {
   
            Class.forName(driver);
            String url = "jdbc:mysql://" + ipAndPort + "/" + dbName + "?useUinicode=true&characterEcoding=utf-8";
            con = DriverManager.getConnection(url, username, password);
        } catch (ClassNotFoundException | SQLException e) {
   
            e.printStackTrace();
        }
        return con;
    }


    /**
     * @param dbType    MYSQL/POSTGRES/MSSQL
     * @param ip        database ip
     * @param port      database port
     * @param spareIp   database spare ip
     * @param sparePort database spare port
     * @param dbName    database name
     * @param username  database username
     * @param password  database password
     * @return Connection database connection
     */
    @Deprecated
    public static Connection getConnectionWithRetry(final String dbType, final String ip, final int port, final String spareIp, final int sparePort,
                                                    final String dbName, final String username, final String password) {
   
        Optional<Connection> conn = DBToolKit.getConnectionWithRetry(dbType, ip, port, dbName, username, password);
        return conn.orElseGet(() -> DBToolKit.getConnectionWithRetry(dbType, spareIp, sparePort, dbName, username, password).orElse(null));
    }

    /**
     * @param dbType   MYSQL/POSTGRES/MSSQL
     * @param ip       database ip
     * @param port     database port
     * @param dbName   database name
     * @param username database username
     * @param password database password
     * @return Connection database connection
     */
    @Deprecated
    private static Optional<Connection> getConnectionWithRetry(final String dbType, final String ip, final int port,
                                                               final String dbName, final String username, final String password) {
   
        // try at most three times
        int max = 3;
        Connection connection = null;
        for (int i = 0; i < max; i++) {
   
            try {
   
                connection = getConnection(dbType, ip, port, dbName, username, password);
                if (connection != null) {
   
                    break;
                }
            } catch (ClassNotFoundException | SQLException e) {
   
                e.printStackTrace();
            }
        }

        return Optional.ofNullable(connection);
    }

    public static Connection getConnection(final String dbType, final String ip, final int port,
                                            final String dbName, final String username, final String password)
            throws ClassNotFoundException, SQLException {
   

        // database connection timeout, in seconds
        int requestTimeOut = 2;

        // load the JDBC driver
        Class.forName(DRIVERS.get(dbType.toUpperCase()));
        // build the URL and obtain the connection
        final String url = buildUrl(dbType.toUpperCase(), ip, port, dbName, username);
        DriverManager.setLoginTimeout(requestTimeOut);

        if (ORACLE.equals(dbType.toUpperCase())) {
   
            // for Oracle, the service name is stored in the username field and the user name in the dbName field
            return DriverManager.getConnection(url, dbName, password);
        } else {
   
            return DriverManager.getConnection(url, username, password);
        } 
    }

    private static String buildUrl(final String dbType, final String ip, final int port, final String dbName, final String username) {
   
        StringBuilder url = new StringBuilder();
        url.append("jdbc:").append(dbType.toLowerCase());
        if (MYSQL.equals(dbType)) {
   
            url.append("://").append(ip).append(":").append(port).append("/").append(dbName);
        } else if (POSTGRES.equals(dbType)) {
   
            url.append("://").append(ip).append(":").append(port).append("/").append(dbName);
        } else if (MSSQL.equals(dbType)) {
   
            url.append("://").append(ip).append(":").append(port).append(";").append("DatabaseName").append("=").append(dbName);
        } else if (ORACLE.equals(dbType)) {
   
            // for Oracle, the service name is stored in the username field and the user name in the dbName field
            url.append(":thin:@").append(ip).append(":").append(port).append(":").append(username);
        }
        return url.toString();
    }


    /**
     * Close the database resources used by a query
     *
     * @param conn database Connection
     * @param ps   database PreparedStatement
     * @param rs   database ResultSet
     */
    public static void close(Connection conn, PreparedStatement ps, ResultSet rs) {
   
        closeResultSet(rs);
        closeStatement(ps);
        closeConnection(conn);
    }

    public static void close(PreparedStatement ps, ResultSet rs) {
   
        closeResultSet(rs);
        closeStatement(ps);
    }

    public static void closeConnection(Connection conn) {
   
        if (conn != null) {
   
            try {
   
                conn.close();
            } catch (SQLException e) {
   
                e.printStackTrace();
            }
        }
    }

    public static void closeStatement(PreparedStatement ps) {
   
        if (ps != null) {
   
            try {
   
                ps.close();
            } catch (SQLException e) {
   
                e.printStackTrace();
            }
        }
    }

    public static void closeResultSet(ResultSet rs) {
   
        if (rs != null) {
   
            try {
   
                rs.close();
            } catch (SQLException e) {
   
                e.printStackTrace();
            }
        }
    }
}
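
A quick usage sketch of the Properties-based getConnection overload above. The property keys mirror config.properties shown later; the values and the database name here are placeholders:

import java.sql.Connection;
import java.util.Properties;

public class DBToolKitUsageSketch {

    public static void main(String[] args) {
        // placeholder values; in the real tool they are read from config.properties
        Properties props = new Properties();
        props.setProperty("poc9000.db.ipAndPort", "127.0.0.1:3306");
        props.setProperty("poc9000.db.username", "xx");
        props.setProperty("poc9000.db.password", "xx");
        props.setProperty("poc9000.db.driver", "com.mysql.jdbc.Driver");

        Connection con = DBToolKit.getConnection(props, "lte_ems_temp");
        try {
            // ... execute statements against the temp database ...
        } finally {
            DBToolKit.closeConnection(con);
        }
    }
}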

SuperCsvUtil utility class for creating and reading the CSV files


import com.hytera.poc9000.beans.KeyValueString2;
import com.hytera.poc9000.beans.TableColumns;
import com.hytera.poc9000.config.Config;
import org.supercsv.cellprocessor.Optional;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.io.CsvBeanReader;
import org.supercsv.io.CsvBeanWriter;
import org.supercsv.io.ICsvBeanReader;
import org.supercsv.io.ICsvBeanWriter;
import org.supercsv.prefs.CsvPreference;

import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
/**
 * 
 * @author 211145187
 * @date 2022-05-13
 * https://super-csv.github.io/super-csv/examples_writing.html
 *
 */
public class SuperCsvUtil {
   

    public static List<KeyValueString2> getList(String filePath) throws IOException {
   
        List<KeyValueString2> emps = new ArrayList<KeyValueString2>();
        ICsvBeanReader beanReader = null;
        try {
   
            beanReader = new CsvBeanReader(new FileReader(filePath), CsvPreference.STANDARD_PREFERENCE);
            beanReader.getHeader(true);

            final CellProcessor[] processors = getProcessors();
            KeyValueString2 emp;
            while ((emp = beanReader.read(KeyValueString2.class, Config.CSV_HEADER, processors)) != null) {
   
                emps.add(emp);
            }
        } finally {
   
            if (beanReader != null) {
   
                try {
   
                    beanReader.close();
                } catch (IOException ex) {
   

                }
            }
        }
        return emps;
    }

    public static List<TableColumns> getTableColumnList(String filePath) throws IOException {
   
        List<TableColumns> emps = new ArrayList<TableColumns>();
        ICsvBeanReader beanReader = null;
        try {
   
            beanReader = new CsvBeanReader(new FileReader(filePath), CsvPreference.STANDARD_PREFERENCE);
            beanReader.getHeader(true);
//            final String[] header = beanReader.getHeader(true);
            final CellProcessor[] processors = new CellProcessor[] {
    
                    new Optional(),
                    new Optional(),
                    new Optional(),
                    new Optional()
            };

            TableColumns emp;
            while ((emp = beanReader.read(TableColumns.class, Config.TABLE_COLUMN_CSV_HEADER, processors)) != null) {
   
                emps.add(emp);
            }
        } finally {
   
            if (beanReader != null) {
   
                try {
   
                    beanReader.close();
                } catch (IOException ex) {
   

                }
            }
        }

        return emps;
    }


    public static List<TableColumns> getTableColumnOlapList(String filePath) throws IOException {
   
        List<TableColumns> emps = new ArrayList<TableColumns>();
        ICsvBeanReader beanReader = null;
        try {
   
            beanReader = new CsvBeanReader(new FileReader(filePath), CsvPreference.STANDARD_PREFERENCE);
            beanReader.getHeader(true);
            final CellProcessor[] processors = new CellProcessor[] {
    
                    new Optional(),
                    new Optional()
            };

            TableColumns emp;
            while ((emp = beanReader.read(TableColumns.class, Config.TABLE_COLUMN_OLAP_HEADER, processors)) != null) {
   
                emps.add(emp);
            }
        } finally {
   
            if (beanReader != null) {
   
                try {
   
                    beanReader.close();
                } catch (IOException ex) {
   

                }
            }
        }
        return emps;
    }

    public void writeCsv(List<KeyValueString2> list, String filePath) throws IOException {
   
        ICsvBeanWriter beanWriter = null;
        try {
   
            beanWriter = new CsvBeanWriter(new FileWriter(filePath), 
                    CsvPreference.STANDARD_PREFERENCE);
            beanWriter.writeHeader(Config.CSV_HEADER);
            for (KeyValueString2 keyValueString : list) {
   
                beanWriter.write(keyValueString, Config.CSV_HEADER, getProcessors());
            }
        } finally {
   
            if (beanWriter != null) {
   
                beanWriter.close();
            }
        }
    }

    private static CellProcessor[] getProcessors() {
   
        final CellProcessor[] processors = new CellProcessor[] {
    
                //new UniqueHashCode(),
                //new NotNull(),
                new Optional(),
                new Optional()
        };
        return processors;
    }
}
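
A short usage sketch of the reader and writer above. Note that getList is static while writeCsv is an instance method; the output path here is made up:

import java.io.IOException;
import java.util.List;

public class SuperCsvUtilUsageSketch {

    public static void main(String[] args) throws IOException {
        // read the neId mapping (CSV header: neId_4_6,neId_4_7)
        List<KeyValueString2> mapping = SuperCsvUtil.getList(Config.NE_ID_MAPPING_FILE_PATH);
        mapping.forEach(System.out::println);

        // write the same rows back out to an illustrative path
        new SuperCsvUtil().writeCsv(mapping, "/tmp/neIdMapping-copy.csv");
    }
}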

CreateCsv business logic that generates the CSV files


import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.Set;
import com.hytera.poc9000.beans.EmsDTO;
import com.hytera.poc9000.beans.KeyValueString2;
import com.hytera.poc9000.config.Config;
import com.hytera.poc9000.utils.DBToolKit;
import com.hytera.poc9000.utils.SuperCsvUtil;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class CreateCsv {
   

    private static String DB_NAME_4_6 = "lte_ems_temp";
    private static String DB_NAME_4_7 = "lte_ems";

    public static void createCsv(Properties properties) throws SQLException, IOException {
   
        log.warn("begin create /home/ems/upgrade/neIdMapping.csv and nodeIdMapping.csv");
        List<EmsDTO> emsList46 = getEmsList(properties, DB_NAME_4_6);
        List<EmsDTO> emsList47 = getEmsList(properties, DB_NAME_4_7);

        Map<String, String> neIdMap46 = getNeIdMap(emsList46);
        Map<String, String> neIdMap47 = getNeIdMap(emsList47);

        Map<String, String> nodeIdMap46 = getNodeIdMap(emsList46);
        Map<String, String> nodeIdMap47 = getNodeIdMap(emsList47);

        mapHandler(nodeIdMap46, nodeIdMap47, Config.NODE_ID_MAPPING_FILE_PATH);

        log.warn("End create /home/ems/upgrade/nodeIdMapping.csv");

        mapHandler(neIdMap46, neIdMap47, Config.NE_ID_MAPPING_FILE_PATH);

        log.warn("End create /home/ems/upgrade/neIdMapping.csv");

    }

    private static void mapHandler(Map<String, String> neIdMap46, 
            Map<String, String> neIdMap47, String filePath) throws IOException {
   
        List<KeyValueString2> neIdMappingList = new ArrayList<>();

        KeyValueString2 ks = null;
        Set<Map.Entry<String, String>> entries = neIdMap46.entrySet();
        for (Map.Entry<String, String> entry : entries) {
   
            ks = new KeyValueString2();
            if (neIdMap47.containsKey(entry.getKey())) {
   
                ks.setNeId_4_6(entry.getValue()); // the 4.6 neId
                ks.setNeId_4_7(neIdMap47.get(entry.getKey())); // the matching 4.7 neId
                neIdMappingList.add(ks);
            } else {
   
                ks.setNeId_4_6(entry.getValue()); // the 4.6 neId
                ks.setNeId_4_7(""); // no matching 4.7 neId
                neIdMappingList.add(ks);
            }

        }

        Set<Map.Entry<String, String>> entries7 = neIdMap47.entrySet();
        for (Map.Entry<String, String> entry : entries7) {
   
            ks = new KeyValueString2();
            if (!neIdMap46.containsKey(entry.getKey())) {
   
                ks.setNeId_4_6(""); // 4.6 的neID
                ks.setNeId_4_7(entry.getValue()); // 4.7的neId
                neIdMappingList.add(ks);
            }

        }

        // write the mapping file
        SuperCsvUtil sp = new SuperCsvUtil();
        sp.writeCsv(neIdMappingList, filePath);
    }

    public static List<EmsDTO> getEmsList(Properties properties ,String dbName) throws SQLException {
   
        Connection con = null;
        PreparedStatement pstmt = null;
        ResultSet rs = null;
        List<EmsDTO> list = new ArrayList<>();
        String ipAndPort = properties.getProperty("poc9000.db.ipAndPort");
        String username = properties.getProperty("poc9000.db.username");
        String password = properties.getProperty("poc9000.db.password");
        String driver = properties.getProperty("poc9000.db.driver");

        try {
   
            Class.forName(driver);
            String url = "jdbc:mysql://" + ipAndPort + "/" + dbName + "?useUinicode=true&characterEcoding=utf-8";
            con = DriverManager.getConnection(url, username, password);

            // query SQL: read all non-cluster network elements
            String sql = "select ID,NAME,TYPE_CODE,NAME_KEY,IP,TYPE from ne_resources WHERE TYPE != 'cluster'";
            pstmt = con.prepareStatement(sql);
            rs = pstmt.executeQuery();
            EmsDTO ems = null;
            while (rs.next()) {
   
                ems = new EmsDTO();

                ems.setID(null == rs.getString("ID") ? "" : rs.getString("ID"));
                ems.setIP(null == rs.getString("IP") ? "" : rs.getString("IP"));
                ems.setNAME(null == rs.getString("NAME") ? "" : rs.getString("NAME"));
                ems.setNAME_KEY(null == rs.getString("NAME_KEY") ? "" : rs.getString("NAME_KEY"));
                ems.setTYPE(null == rs.getString("TYPE") ? "" : rs.getString("TYPE"));
                ems.setTYPE_CODE(null == rs.getString("TYPE_CODE") ? "" : rs.getString("TYPE_CODE"));
                list.add(ems);
            }

        } catch (ClassNotFoundException e) {
   
            e.printStackTrace();
        } finally {
   
            DBToolKit.close(con, pstmt, rs);
        }
        return list;
    }
    /**
     * nodeIdMapping.csv: rows are matched on the five columns TYPE_CODE, NAME_KEY, NAME, TYPE and IP
     * @param list
     * @return
     */
    private static Map<String, String> getNodeIdMap(List<EmsDTO> list) {
   
        Map<String, String> map40 = new HashMap<>();
        for (EmsDTO es : list) {
   
            String key = es.getNAME().concat("@").concat(es.getNAME_KEY()).concat("@").concat(es.getTYPE()).concat("@")
                    .concat(es.getTYPE_CODE()).concat("@").concat(es.getIP());
            map40.put(key, es.getID());
        }
        return map40;
    }

    /**
     * neIdMapping.csv: rows are matched on TYPE_CODE, IP and TYPE, where TYPE is fixed to "DP"
     * @param list
     * @return
     */
    private static Map<String, String> getNeIdMap(List<EmsDTO> list) {
   
        Map<String, String> map40 = new HashMap<>();
        for (EmsDTO es : list) {
   
            String type = es.getTYPE();
            if(!Objects.equals(type, "DP")) {
   
                continue;
            }
            String key = es.getTYPE_CODE().concat("@").concat(es.getIP());
            map40.put(key, es.getID());
        }
        return map40;
    }
}
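
For clarity, this is what the two matching keys look like for a hypothetical ne_resources row. The values are invented, but the key layouts follow getNeIdMap and getNodeIdMap above:

public class MatchingKeySketch {

    public static void main(String[] args) {
        // hypothetical ne_resources row (ID, NAME, NAME_KEY, TYPE, IP, TYPE_CODE); all values are made up
        EmsDTO dto = new EmsDTO("12", "BaseStation-01", "bs01", "DP", "10.0.0.5", "1001");

        // getNeIdMap key: TYPE_CODE@IP, built only for rows whose TYPE is "DP"
        String neIdKey = dto.getTYPE_CODE() + "@" + dto.getIP();                 // 1001@10.0.0.5

        // getNodeIdMap key: NAME@NAME_KEY@TYPE@TYPE_CODE@IP
        String nodeIdKey = dto.getNAME() + "@" + dto.getNAME_KEY() + "@" + dto.getTYPE()
                + "@" + dto.getTYPE_CODE() + "@" + dto.getIP();                  // BaseStation-01@bs01@DP@1001@10.0.0.5

        System.out.println(neIdKey + " | " + nodeIdKey);
    }
}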

UpdateDB main entry point (contains the main method)

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.Objects;
import java.util.Properties;
import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.support.PropertiesLoaderUtils;
import lombok.extern.slf4j.Slf4j;

/**
 * @author 211145187
 * @date 2022/4/26
 */
@Slf4j
public class UpdateDB {
   

    private static Properties properties;

    public static void main(String[] args) throws InterruptedException, SQLException {
   
        log.warn("main methed commit, args:{}", Arrays.toString(args));
        if(args == null || args.length == 0) {
   
            args[0] = "updateneid";
        }
        loadConfig();
        if(Objects.equals("createcsv", args[0].trim().toLowerCase())) {
   
            try {
   
                CreateCsv.createCsv(properties);
            } catch (SQLException e) {
   
                log.error("SQLException:{}", e.getMessage());
            } catch (IOException e) {
   
                log.error("IOException:{}", e.getMessage());
            }
            return;
        }
        // update the values in the temp database; after updating, dump it to SQL and restore it into the production database
        if(Objects.equals("updateneid", args[0].trim().toLowerCase())) {
   
            try {
   
                UpdateNeId.updateNeId(properties);
            } catch (Exception e) {
   
                log.error("Exception:{}", e.getMessage());
            }
            return;
        }
        // update the neSofVersion column in the production database
        if(Objects.equals("updatenesofversion", args[0].trim().toLowerCase())) {
   
            try {
   
                UpdateNeSofVersion.selectStaticMainInfo(properties);
            } catch (Exception e) {
   
                log.error("Exception:{}", e.getMessage());
            }
            return;
        }

    }

    /**
     * Load the configuration file
     */
    private static void loadConfig() {
   
        try {
   
            String filePath = System.getProperty("user.dir") + "/config.properties";
            InputStream in = new BufferedInputStream(new FileInputStream(filePath));
            properties = new Properties();
            properties.load(in);
            System.out.println("System.getProperty(\"user.dir\") "+System.getProperty("user.dir") );
        } catch (IOException e) {
   
            log.error("Failed to read main configuration file:{}", e.getMessage());
            ClassPathResource classPathResource = new ClassPathResource("config.properties");
            try {
   
                properties = PropertiesLoaderUtils.loadProperties(classPathResource);
            } catch (IOException e1) {
   
                log.error("Failed to read path configuration file:{}", e1.getMessage());
            }
        }
    }
}

UpdateNeId updates the neId and nodeId values


import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import java.util.Properties;
import org.springframework.util.StringUtils;
import com.hytera.poc9000.beans.KeyValueString2;
import com.hytera.poc9000.beans.TableColumns;
import com.hytera.poc9000.config.Config;
import com.hytera.poc9000.utils.DBToolKit;
import com.hytera.poc9000.utils.SuperCsvUtil;
import lombok.extern.slf4j.Slf4j;

/**
 * 
 * @author 211145187
 *
 */
@Slf4j
public class UpdateNeId {
   

    public static void updateNeId(Properties properties) {
   
        log.warn("begin update NeId");
        try {
   
            List<KeyValueString2> neIdMappingList = SuperCsvUtil.getList(Config.NE_ID_MAPPING_FILE_PATH);
            if(!checkCsvKeyValue(neIdMappingList)) {
   
                log.error("neId csv file format is incorrect, please check neId csv file.");
                return;
            }
            List<KeyValueString2> nodeIdMappingList = SuperCsvUtil.getList(Config.NODE_ID_MAPPING_FILE_PATH);
            if(!checkCsvKeyValue(nodeIdMappingList)) {
   
                log.error("nodeId csv file format is incorrect, please check nodeId csv file.");
                return;
            }
            updateLteEmsTempNeIdAndNodeId(properties, neIdMappingList, nodeIdMappingList);
            updateOlapNeId(properties, neIdMappingList);

        } catch (IOException e) {
   
            log.error("IOException:{}", e.getMessage());
        } catch (SQLException e) {
   
            log.error("SQLException:{}", e.getMessage());
        } catch (Exception e) {
   
            log.error("Exception:{}", e.getMessage());
        }
        log.warn("update complete!");
    }

    private static boolean checkCsvKeyValue(List<KeyValueString2> neIdMappingList) {
   
        if(neIdMappingList == null || neIdMappingList.size() == 0) {
   
            return false;
        }
        for (KeyValueString2 keyValueString2 : neIdMappingList) {
   
            if(StringUtils.isEmpty(keyValueString2.getNeId_4_6()) 
                    || StringUtils.isEmpty(keyValueString2.getNeId_4_7())
                    || keyValueString2.getNeId_4_6().trim().equals("")
                    || keyValueString2.getNeId_4_7().trim().equals("")) {
   
                return false;
            }
        }
        return true;
    }

    private static void updateLteEmsTempNeIdAndNodeId(Properties properties, List<KeyValueString2> neIdMappingList
            , List<KeyValueString2> nodeIdMappingList) throws IOException, SQLException {
   
        log.warn("begin update lte ems alarm tables");
        Connection con = DBToolKit.getConnection(properties, "lte_ems_temp");

        // load the list of tables to update
        List<TableColumns> lteEmsTableList = SuperCsvUtil.getTableColumnList(Config.LTE_EMS_TABLES_FILE_PATH);

        for (TableColumns tableColumns : lteEmsTableList) {
   
            // update the neId column
            if(!StringUtils.isEmpty(tableColumns.getNeId())) {
   
                updateOneTable(con, tableColumns.getTableName(), tableColumns.getNeId(),
                        neIdMappingList);
            }
            // update the nodeId column
            if(!StringUtils.isEmpty(tableColumns.getNodeId())) {
   
                updateOneTable(con, tableColumns.getTableName(), tableColumns.getNodeId(),
                        nodeIdMappingList);
            }

        }
        DBToolKit.closeConnection(con);
        log.warn("update lte ems alarm tables complete!");
    }

    private static void updateOlapNeId(Properties properties, List<KeyValueString2> neIdMappingList) throws IOException, SQLException {
   
        log.warn("begin update olap tables");
        // load the list of tables to update
        List<TableColumns> olapTableList = SuperCsvUtil.getTableColumnOlapList(Config.OLAP_TABLES_FILE_PATH);
        Connection con = DBToolKit.getConnection(properties, "olap_temp");
        for (TableColumns tableColumns : olapTableList) {
   
            if(StringUtils.isEmpty(tableColumns.getNeId())) {
   
                continue;
            }
            updateOneTable(con, tableColumns.getTableName(), tableColumns.getNeId(),
                    neIdMappingList);
        }
        DBToolKit.closeConnection(con);
        log.warn("update olap tables complete!");
    }


    private static void updateOneTable(Connection con, String tableName, String updateColumnName, 
            List<KeyValueString2> mappingList) throws SQLException, IOException {
   
        log.info("update table:{}, Column:{}", tableName, updateColumnName);
        // Add a temporary column to the table. Rows that have been updated get the flag, so a neId that was just written cannot match a later mapping entry and be updated a second time.
        String updatedTagAddSql = "ALTER TABLE "+ tableName +" ADD updated_tag INT DEFAULT 0 NOT NULL";

        PreparedStatement pstmt = con.prepareStatement(updatedTagAddSql);
        pstmt.execute();
        DBToolKit.closeStatement(pstmt);

        String sql = "UPDATE "+tableName+" SET "+updateColumnName+" = ?, updated_tag = 1 " + 
                "    WHERE "+updateColumnName+" = ? and updated_tag = 0" ;

        for (KeyValueString2 keyValueString2 : mappingList) {
   
            pstmt = con.prepareStatement(sql);
            String ne47 = null == keyValueString2.getNeId_4_7() ? "-1" :keyValueString2.getNeId_4_7();
            String ne46 = null == keyValueString2.getNeId_4_6() ? "-1" :keyValueString2.getNeId_4_6();
            pstmt.setLong(1, Long.parseLong(ne47));
            pstmt.setLong(2, Long.parseLong(ne46));
            pstmt.execute();
            DBToolKit.closeStatement(pstmt);
        }

        // drop the temporary column
        String updatedTagDelSql = "ALTER TABLE "+ tableName +" DROP COLUMN updated_tag";
        pstmt = con.prepareStatement(updatedTagDelSql);
        pstmt.execute();
        DBToolKit.closeStatement(pstmt);        
    }
}
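
Why the temporary updated_tag column matters: with chained mappings, a row that has just been updated could match a later mapping entry and be rewritten again. A tiny hypothetical illustration (the ids are made up and the table/column names are only for illustration; the printed statements mirror what updateOneTable prepares):

import java.util.Arrays;
import java.util.List;

public class UpdatedTagSketch {

    public static void main(String[] args) {
        // hypothetical mapping: 5 -> 7 and 7 -> 9. Without the guard, a row just changed
        // from 5 to 7 would also match the second entry and end up as 9;
        // "updated_tag = 0" in the WHERE clause prevents that second hit.
        List<long[]> mapping = Arrays.asList(new long[] {5, 7}, new long[] {7, 9});
        for (long[] pair : mapping) {
            String sql = "UPDATE fmalarm_historical SET neId = " + pair[1]
                    + ", updated_tag = 1 WHERE neId = " + pair[0] + " AND updated_tag = 0";
            System.out.println(sql);
        }
    }
}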

UpdateNeSofVersion updates the neSofVersion values


import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Properties;
import com.hytera.poc9000.utils.DBToolKit;
import com.mysql.jdbc.JDBC4PreparedStatement;
import lombok.extern.slf4j.Slf4j;

/**
 * Entry point: UpdateDB (the "updateNeSofVersion" command)
 * @author 211145187
 *
 */
@Slf4j
public class UpdateNeSofVersion {
   

    /**
     * Tables to update (table name -> column holding the alarm id)
     */
    private static final Map<String, String> updateTableAndColumnMap;

    static {
   
        updateTableAndColumnMap = new HashMap<String, String>();
        updateTableAndColumnMap.put("fmalarm_current_event", "eventId");
        updateTableAndColumnMap.put("fmalarm_current_to_history_buf", "alarmId");
        updateTableAndColumnMap.put("fmalarm_redefine_info", "alarmId");
        updateTableAndColumnMap.put("fmalarm_syn_buf", "alarmId");
        updateTableAndColumnMap.put("fmalarm_historical", "alarmId");
    }

    public static void selectStaticMainInfo(Properties properties) {
   
        log.warn("select static main info");
        Map<Integer, String> alarmIdVersionMap = new HashMap<Integer, String>(32);
        try {
   
            Connection con = DBToolKit.getConnection(properties, "lte_ems");
            String sql = "select alarmId ,max(neSofVersion) as neSofVersion from fmalarm_static_main_info group by alarmId";
            PreparedStatement pstmt = con.prepareStatement(sql);
            ResultSet rs = pstmt.executeQuery();
            while (rs.next()) {
   
                alarmIdVersionMap.put(rs.getInt("alarmId"), rs.getString("neSofVersion"));
            }
            DBToolKit.close(pstmt, rs);
            updateNeSofVersion(con, alarmIdVersionMap);
        } catch (SQLException e) {
   
            log.error("SQLException:{}", e.getMessage());
        } catch (Exception e) {
   
            log.error("Exception:{}", e.getMessage());
        }
        log.warn("Update NeSofVersion complete!");
    }


    /**
     * fmalarm_current has a neSofVersion column but is not handled here; the upgrade document asks the user to move current alarms to the history table first.
     * fmalarm_historical has a neSofVersion column and is handled: query the distinct alarmId values and update the matching neSofVersion for each alarmId.
     * @param con
     * @throws SQLException
     * @throws IOException
     */
    private static void updateNeSofVersion(Connection con, Map<Integer, String> alarmIdVersionMap) throws SQLException, IOException {
   
        PreparedStatement pstmt = null;
        PreparedStatement pstmt2 = null;
        ResultSet rs;
        for (Entry<String, String> entry : updateTableAndColumnMap.entrySet()) {
   
            String tableName = entry.getKey();
            String columnName = entry.getValue();
            String sql = "select distinct(" + columnName + ") as alarmId from " + tableName;
            pstmt = con.prepareStatement(sql);
            rs = pstmt.executeQuery();
            while (rs.next()) {
   
                int thisColumnAlarmId = rs.getInt("alarmId");
                // update the neSofVersion column for this alarmId
                String updateSql = "UPDATE " + tableName + " SET neSofVersion = ? "
                        + "    WHERE " + columnName + " = ? " ;
                pstmt2 = con.prepareStatement(updateSql);
                pstmt2.setString(1, alarmIdVersionMap.get(thisColumnAlarmId));
                pstmt2.setInt(2, thisColumnAlarmId);
                log.info("updateSql:{}", ((JDBC4PreparedStatement)pstmt2).asSql());
                pstmt2.execute();
                DBToolKit.closeStatement(pstmt2);
            }
            DBToolKit.close(pstmt, rs);
        }
        DBToolKit.closeConnection(con);
    }    
}
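
One caveat: the cast to com.mysql.jdbc.JDBC4PreparedStatement only compiles against Connector/J 5.1.x (the version declared in the pom). A driver-neutral alternative, sketched here as a possible drop-in for that log line, is simply to log the SQL template together with the bound values:

// sketch of a driver-neutral replacement for ((JDBC4PreparedStatement) pstmt2).asSql():
// log the template and the values bound to it instead of the driver-rendered SQL
log.info("updateSql:{}, neSofVersion:{}, {}:{}",
        updateSql, alarmIdVersionMap.get(thisColumnAlarmId), columnName, thisColumnAlarmId);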

config.properties configuration file

poc9000.db.ipAndPort=127.0.0.1:3306
poc9000.db.username=xx
poc9000.db.password=xx
poc9000.db.driver=com.mysql.jdbc.Driver

buildJars.bat script that collects the dependencies (copied to target/dependency, later placed into the UpgradePackage folder)

mvn dependency:copy-dependencies -DoutputDirectory=target/dependency

pom.xml Maven dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.hytera</groupId>
    <artifactId>updateDB</artifactId>
    <version>1.0</version>
    <packaging>jar</packaging>

    <name>updateDB</name>
    <description>poc 9000 4.6 to 4.7, alarm and olap db update</description>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-jdbc</artifactId>
            <version>2.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.46</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
            <version>1.18.6</version>
        </dependency>
        <dependency>
            <groupId>net.sf.supercsv</groupId>
            <artifactId>super-csv</artifactId>
            <version>2.4.0</version>
        </dependency>
    </dependencies>

    <build>  
        <finalName>updateDB</finalName>
        <plugins>
            <plugin>  
                <groupId>org.apache.maven.plugins</groupId>  
                <artifactId>maven-jar-plugin</artifactId>  
                <version>2.4</version>  
                <configuration>  
                    <archive>
                        <manifest>  
                            <addClasspath>true</addClasspath>  
                            <classpathPrefix>dependency/</classpathPrefix>
                            <mainClass>com.hytera.poc9000.UpdateDB</mainClass>  
                        </manifest>  
                    </archive>
                </configuration>  
            </plugin>  
        </plugins>  
    </build>
</project>

start.bat used to verify that the createCsv command in the jar runs and generates the CSV files

jre\bin\java -jar updateDB.jar createCsv >>log.log 2>&1 &

开发说明.txt (development notes)

1. In the IDE, run mvn install
2. Run buildJars.bat to download the dependencies into the target/dependency directory
3. Copy updateDB.jar, the dependency directory under target, and classes\config.properties into the UpgradePackage folder

3. Selected files from the upgrade package

change_olap_temp.sql describes the table changes in V4.7 relative to V4.6, including column additions/deletions/modifications, newly added tables, and so on

-- secondary daily statistics table for total call duration
DROP TABLE IF EXISTS `secondary_call_duration_month`;
CREATE TABLE `secondary_call_duration_month` (
         `ne_id` int(11) DEFAULT NULL COMMENT '所属网元ID',
         `ts_day` datetime DEFAULT NULL COMMENT '统计的月份',
         `total_time` bigint(20) DEFAULT NULL COMMENT '呼叫总时长',
         `op_date` datetime DEFAULT NULL COMMENT '操作时间',
         `caller_model` varchar(255) DEFAULT NULL COMMENT '主叫终端型号',
         `model_records_number` int(11) DEFAULT NULL COMMENT '终端记录数',
         INDEX(ts_day)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

lte_ems_tables.csv lists the tables involved under the lte_ems database

tableName,nodeId,neId,equipmentId
fmalarm_config_alarmtype,,,
fmalarm_config_alarmtype_lang,,,
...

olap_tables.csv lists the tables involved under the olap database

tableName,neId
base_account,ne_id
base_cycle_statistics,
base_digital_0101,ne_id
base_digital_0102,ne_id
...

upgradeShell the upgrade shell script

#!/bin/sh
#
# Upgrade script
# Run /home/ems/upgrade/upgradeShell dbBackup   to back up the 4.6 databases
# Run /home/ems/upgrade/upgradeShell dbRestore  to restore the 4.6 dumps into the temp databases and create the CSV files
# Run /home/ems/upgrade/upgradeShell update     to perform the upgrade
# @date 2022-06-01
#

file_path=/home/ems/upgrade/
log=$file_path/log.log

commond=$1

host='127.0.0.1'
#10.161.42.145
# port 18753
port='18753'
# user
user='xx'
# password
pwd='xx'
# names of the databases to back up (array)
dbs_name=("lte_ems" "olap")
# directory where the database dump files are stored
data_dir='/home/ems/upgrade/dbBackup'

# if no args specified, show usage
if [ $# -le 0 ]; then
  echo '请指定参数'
  exit 1
fi

# create the directory that stores the database dump files
# date prints the current timestamp
mkdir -p $data_dir

date_now=$(date "+%Y-%m-%d %H:%M:%S")

if [ $1 = "dbBackup" ]
  then
  echo "-----------------------begin dbBackup--------------------------">>$log
  # back up each database to the dump directory ------------------------------------------------------------


for db_item in ${dbs_name[*]}
do
 /home/ems/3rdparty/mysql/bin/mysqldump -h$host -P$port -u$user -p$pwd -R --databases $db_item  > $data_dir/$db_item.sql
  if [ $? -eq 0 ]  # -eq means "equals"
  then
       echo -e "$db_item数据库4.6版本备份成功~\n"
  else
       echo -e "$db_item数据库备份失败~\n"
  fi
done

  #-------------------------------------------------------------------------------
  echo "end dbBackup">>$log
  elif [ $1 = "dbRestore" ]
  then
  echo "---------------------begin dbRestore--------------------------">>$log
  # first restore the databases into the temp schemas -------------------------------------------------------------------
  #

sed -i '/lte_ems/d' $data_dir/lte_ems.sql
sed -i '/olap/d' $data_dir/olap.sql

echo "---------------------1 还原4.6temp库--------------------------">>$log
echo -e "准备进行数据库temp 临时库4.6 还原。。。。。。。\n"

/home/ems/3rdparty/mysql/bin/mysql -h$host -P$port -u$user -p$pwd <<EOF

SET FOREIGN_KEY_CHECKS = 0;

create database IF NOT EXISTS lte_ems_temp;
use lte_ems_temp;
source /home/ems/upgrade/dbBackup/lte_ems.sql;

create database IF NOT EXISTS olap_temp;
use olap_temp;
source /home/ems/upgrade/dbBackup/olap.sql;
EOF
echo "---------------------2 完成还原4.6--------------------------">>$log
echo -e "数据库还原 olap_temp  lte_ems_temp  4.6版本完成 。。。。。。。\n"
  # then create the CSV files ------------------------------------------------------------------
echo -e "创建CSV 导出文件开始。。。。。。。\n"
  /home/ems/3rdparty/java/bin/java -jar updateDB.jar createCsv >>log.log 2>&1
  # apply schema changes: change_lte_ems.sql -> lte_ems_temp, change_olap_temp.sql -> olap_temp
  #------------------------------------------------------------------------------------

echo "---------------------3 CSV导出4.6--------------------------">>$log
echo -e "创建CSV 导出文件完成。。。。。。。\n"

echo "---------------------4 更新4.6->4.7库字段--------------------------">>$log
echo -e "更新olap_temp /lte_ems_temp 字段更新4.6开始。。。。。。。\n"
 /home/ems/3rdparty/mysql/bin/mysql -h$host -P$port -u$user -p$pwd <<EOF

use olap_temp;
source /home/ems/upgrade/change_olap_temp.sql;
EOF
echo -e "更新olap_temp /lte_ems_temp 升级4.7脚本完成。。。。。。。\n"
echo "----------------End dbRestore----------------">>$log

  elif [ $1 = "update" ]
  then

  echo "---------------------1 变更updateNeId--------------------------">>$log
  /home/ems/3rdparty/java/bin/java -jar updateDB.jar updateNeId >>log.log 2>&1

 echo "更新4.7 temp临时库完成">>$log

  # import data from the temp databases back into the production (non-temp) databases
 echo "---------------------2 变更备份--------------------------">>$log

  /home/ems/3rdparty/mysql/bin/mysqldump -h$host -P$port -u$user -p$pwd -R  lte_ems_temp fm_export_task fmalarm_config_alarmtype fmalarm_config_alarmtype_lang fmalarm_config_brdtype fmalarm_config_brdtype_lang fmalarm_config_clearsta fmalarm_config_clearsta_lang fmalarm_config_confirm fmalarm_config_confirm_lang fmalarm_config_levelcol fmalarm_config_levelcol_lang fmalarm_current_event fmalarm_historical fmalarm_nms_status fmalarm_notification_alarm fmalarm_notification_alarmlevel fmalarm_notification_main fmalarm_notification_method fmalarm_notification_ne fmalarm_redefine_info fmalarm_shieldrule_alarm fmalarm_shieldrule_main fmalarm_shieldrule_ne fmalarm_static_info fmalarm_static_main_info fmalarm_syn_buf fmalarm_tone fmalarm_tone_cycle fmalarm_tone_switch fmalarm_user_def_info fmalarm_user_def_ne_info > /home/ems/upgrade/changeEms.sql

  /home/ems/3rdparty/mysql/bin/mysqldump -h$host -P$port -u$user -p$pwd -R  olap_temp > /home/ems/upgrade/changeOlap.sql

 echo "---------------------3 变更还原正式库--------------------------">>$log

/home/ems/3rdparty/mysql/bin/mysql -h$host -P$port -u$user -p$pwd <<EOF

SET FOREIGN_KEY_CHECKS = 0;

use lte_ems;
source /home/ems/upgrade/changeEms.sql;

use olap;
source /home/ems/upgrade/changeOlap.sql;
EOF

  elif [ $1 = "updateNeSofVersion" ]
  then
  echo "---------------------4 更改 neSofVersion 字段值--------------------------">>$log
  /home/ems/3rdparty/java/bin/java -jar updateDB.jar updateNeSofVersion >>log.log 2>&1

  ##
  else
  echo "commond error">>$log
fi



echo "----------End--------------">>$log
echo "----------Good bye !!!----------">>$log
echo "">>$log