PostgreSQL Database Upgrade Procedure (pg_dumpall and pg_upgrade)


I. Upgrading the database from 9.4 to 9.5


1. Export and import with pg_dumpall (suitable when the data volume is small)

Log in as the postgres user and export the entire cluster:

$ pg_dumpall > db_export.out


$ env|grep LANG

LANG=en_US.UTF-8



If the database encoding is en_US.UTF-8 but the operating system locale is zh_CN.UTF-8, set LANG before importing.

The import steps, run as the postgres user, are:

$ export LANG=en_US.UTF-8

$ psql -f db_export.out postgres
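
pg_dumpall exports the entire cluster, including roles and tablespace definitions. If only a single database needs to be moved, pg_dump/pg_restore can be used instead; a minimal sketch, where mydb is a placeholder database name:

$ pg_dump -Fc mydb > mydb.dump     # custom-format dump of a single database
$ pg_restore -d mydb mydb.dump     # restore the dump into an existing database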


2. Upgrade with pg_upgrade

Upgrade path: 9.4 -> 9.5

Prerequisites: the 9.4 cluster holds the complete data set; after the 9.5 binaries are installed, the new cluster only needs to be initialized with initdb (see the sketch below).
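
A minimal sketch of initializing the new 9.5 cluster, assuming the paths used in this article and an encoding/locale matching the old cluster:

$ /opt/pg/9.5/bin/initdb -D /pgdata --encoding=UTF8 --locale=en_US.UTF-8   # create the new, empty 9.5 cluster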


1) Pre-upgrade consistency check


Log in as the postgres user and run the following commands:

[root@pgdb01 pgdata]# su - postgres

[postgres@pgdb01 ~]$ pwd

/home/postgres


Run the check with the pg_upgrade binary from the new 9.5 bin directory.


[postgres@pgdb01 ~]$ /opt/pg/9.5/bin/pg_upgrade -c -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin

Performing Consistency Checks

-----------------------------

Checking cluster versions                                   ok

Checking database user is the install user                  ok

Checking database connection settings                       ok

Checking for prepared transactions                          ok

Checking for reg* system OID user data types                ok

Checking for contrib/isn with bigint-passing mismatch       ok

Checking for presence of required libraries                 ok

Checking database user is the install user                  ok

Checking for prepared transactions                          ok


*Clusters are compatible*

[postgres@pgdb01 ~]$

 

2) Perform the upgrade


Upgrade modes: in the default copy mode, the data files in the old 9.4 PGDATA are copied into the new 9.5 PGDATA; this is slower, but the two data directories stay independent and two copies of the data are kept.

In link mode (add --link to the command line), hard links are created instead of copying the data; this is much faster but keeps only a single copy of the data (see the sketch below).
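
A minimal sketch of the same upgrade invoked in link mode, using the paths shown below; note that once the new cluster has been started after a --link upgrade, the old cluster can no longer be used:

$ /opt/pg/9.5/bin/pg_upgrade --link -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin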


(Note: both clusters must be shut down while this step is performed; see the sketch below if either instance is still running.)
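
If either instance is still running, it can be stopped with each version's own pg_ctl; a minimal sketch using the paths from this example:

$ /opt/pg/9.4/bin/pg_ctl -D /pgdata94 stop -m fast   # stop the old 9.4 cluster
$ /opt/pg/9.5/bin/pg_ctl -D /pgdata stop -m fast     # stop the new 9.5 cluster, if started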


[postgres@pgdb01 ~]$ /opt/pg/9.5/bin/pg_upgrade -d /pgdata94 -D /pgdata -b /opt/pg/9.4/bin -B /opt/pg/9.5/bin

Performing Consistency Checks

-----------------------------

Checking cluster versions                                   ok

Checking database user is the install user                  ok

Checking database connection settings                       ok

Checking for prepared transactions                          ok

Checking for reg* system OID user data types                ok

Checking for contrib/isn with bigint-passing mismatch       ok

Creating dump of global objects                             ok

Creating dump of database schemas

                                                            ok

Checking for presence of required libraries                 ok

Checking database user is the install user                  ok

Checking for prepared transactions                          ok


If pg_upgrade fails after this point, you must re-initdb the

new cluster before continuing.


Performing Upgrade

------------------

Analyzing all rows in the new cluster                       ok

Freezing all rows on the new cluster                        ok

Deleting files from new pg_clog                             ok

Copying old pg_clog to new server                           ok

Setting next transaction ID and epoch for new cluster       ok

Deleting files from new pg_multixact/offsets                ok

Copying old pg_multixact/offsets to new server              ok

Deleting files from new pg_multixact/members                ok

Copying old pg_multixact/members to new server              ok

Setting next multixact ID and offset for new cluster        ok

Resetting WAL archives                                      ok

Setting frozenxid and minmxid counters in new cluster       ok

Restoring global objects in the new cluster                 ok

Restoring database schemas in the new cluster

                                                            ok

Creating newly-required TOAST tables                        ok

Copying user relation files

                                                            ok

Setting next OID for new cluster                            ok

Sync data directory to disk                                 ok

Creating script to analyze new cluster                      ok

Creating script to delete old cluster                       ok


Upgrade Complete

----------------

Optimizer statistics are not transferred by pg_upgrade so,

once you start the new server, consider running:

    ./analyze_new_cluster.sh


Running this script will delete the old cluster's data files:

    ./delete_old_cluster.sh

[postgres@pgdb01 ~]$


[postgres@pgdb01 ~]$ ll

total 1208

-rwx------. 1 postgres postgres     749 Apr  9 18:52 analyze_new_cluster.sh

-rwx------. 1 postgres postgres      30 Apr  9 18:52 delete_old_cluster.sh

[postgres@pgdb01 ~]$ 


3) Start the database with the new version's startup script


[postgres@pgdb01 ~]$ exit

logout

[root@pgdb01 pgdata]# /etc/init.d/postgresql start

Starting PostgreSQL: ok

[root@pgdb01 pgdata]# su - postgres

[postgres@pgdb01 ~]$ psql

psql (9.5.2)

Type "help" for help.


postgres=# \l

                                      List of databases

        Name        |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   

--------------------+----------+----------+-------------+-------------+-----------------------

 a_authentication   | vincent  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

 a_resources        | vincent  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

 postgres           | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 

 a_server           | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres+

                    |          |          |             |             | =Tc/postgres         +

                    |          |          |             |             | acent=CTc/postgres

 template0          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +

                    |          |          |             |             | postgres=CTc/postgres

 template1          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres+

                    |          |          |             |             | =c/postgres

 a_security         | uccc     | UTF8     | en_US.UTF-8 | en_US.UTF-8 | a=CTc/uccc           +

                    |          |          |             |             | =Tc/uccc             +

                    |          |          |             |             | a=CTc/uccc

(7 rows)


postgres=# \q
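
The database above was started through the packaged init script. If no service script is installed, the new cluster can also be started directly with pg_ctl from the 9.5 bin directory; a minimal sketch, where the log file path is only an example:

$ /opt/pg/9.5/bin/pg_ctl -D /pgdata -l /pgdata/pg95_start.log start   # start the new cluster and log to a file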


4) Generate new optimizer statistics


[postgres@pgdb01 ~]$ ./analyze_new_cluster.sh 

This script will generate minimal optimizer statistics rapidly

so your system is usable, and then gather statistics twice more

with increasing accuracy.  When it is done, your system will

have the default level of optimizer statistics.


If you have used ALTER TABLE to modify the statistics target for

any tables, you might want to remove them and restore them after

running this script because they will delay fast statistics generation.


If you would like default statistics as quickly as possible, cancel

this script and run:

    "/opt/pg/9.5/bin/vacuumdb" --all --analyze-only


vacuumdb: processing database "a_authentication": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_resources": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_server": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_security": Generating minimal optimizer statistics (1 target)

vacuumdb: processing database "a_authentication": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_resources": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_server": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_security": Generating medium optimizer statistics (10 targets)

vacuumdb: processing database "a_authentication": Generating default (full) optimizer statistics

vacuumdb: processing database "a_resources": Generating default (full) optimizer statistics

vacuumdb: processing database "postgres": Generating default (full) optimizer statistics

vacuumdb: processing database "a_server": Generating default (full) optimizer statistics

vacuumdb: processing database "template1": Generating default (full) optimizer statistics

vacuumdb: processing database "a_security": Generating default (full) optimizer statistics


Done

[postgres@pgdb01 ~]$ 


5) Clean up the old cluster's data

After the new cluster has been running for some time without problems, run the delete_old_cluster.sh script to remove the old PGDATA directory:

[postgres@pgdb01 ~]$ more delete_old_cluster.sh 

#!/bin/sh


rm -rf '/pgdata94'

[postgres@pgdb01 ~]$


This article was originally published on pgmia's 51CTO blog: http://blog.51cto.com/heyiyi/1762076

