Celerra (Part 10) -- Checkpoint

Introduction:

Previous posts in this series covered CIFS, NFS, iSCSI, Dedup (file-level or block-level?), Replication, and Migration. This post shows how to create a point-in-time checkpoint of a file system, and then restore the file system from that checkpoint.

The following is an excerpt about checkpoints from the manual:

Before you begin

◆ When creating checkpoints, do not exceed the system limit. VNX for File permits 96 read-only and 16 writeable checkpoints per PFS, regardless of whether the PFS is replicated, for all systems except the Model 510 Data Mover, which permits 32 checkpoints with PFS replication and 64 without. This limit counts existing checkpoints and those already created in a schedule. It may also count two internal checkpoints created by certain replication operations on either the PFS or SFS. If you are at the limit, delete existing checkpoints to make room for new ones, or decide not to create new checkpoints if the existing ones are more important.

◆ Allow at least 15 minutes between the creation or refresh of SnapSure checkpoints of the same PFS. This applies to checkpoints created with the CLI, those created or refreshed by an automated schedule, and schedules that run on the same PFS.

◆ Be aware that when you start to replicate a file system, the facility must be able to create two checkpoints; otherwise, replication fails to start. For example, if you have 95 checkpoints and want to start a replication, the 96th checkpoint gets created, but replication fails when the system tries to create the 97th checkpoint because the limit is breached.

  1. Open the checkpoint page and create a new checkpoint of the test file system, named test_ckpt.

clip_image002
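The same checkpoint can also be created from the Control Station CLI with `fs_ckpt`; a minimal sketch (the exact option set may vary by code level, so treat this as illustrative):

```shell
# Create a read-only checkpoint of file system "test", named "test_ckpt".
# SnapSure allocates a SavVol automatically if none is specified.
/nas/bin/fs_ckpt test -name test_ckpt -Create
```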

  2. After creation, both test_ckpt and test_ckpt_baseline are listed, each with its creation time, permissions, and state.

clip_image004

  3. The checkpoint information can also be viewed from the command line:

clip_image006
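The listing in the screenshot can be produced with `fs_ckpt` on the Control Station (a sketch, assuming the file system is named test):

```shell
# List all checkpoints of the "test" file system
/nas/bin/fs_ckpt test -list
```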

  4. The test file system on the VNX is exported over NFS; the directory contains a hadoop* package that occupies 65 MB.

clip_image008 
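On the NFS client, the export can be mounted and the file verified with standard Linux commands (the server name and mount point below are illustrative assumptions):

```shell
# Mount the VNX export (server name and paths are assumptions)
mount -t nfs vnx-server:/test /mnt/test

# Verify the hadoop* package and its size (about 65 MB in this example)
ls -lh /mnt/test/hadoop*
du -sh /mnt/test/hadoop*
```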

  5. Now restore from the checkpoint created earlier. First, delete the hadoop* package:

clip_image010

  6. Click Restore and choose to restore test_ckpt:

clip_image012
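The restore can likewise be driven from the CLI; note that `fs_ckpt -Restore` takes the checkpoint name rather than the PFS name (a sketch, subject to the same code-level caveat as above):

```shell
# Roll the production file system back to the state captured in test_ckpt.
# SnapSure automatically takes a new checkpoint of the current state first,
# so the pre-restore contents remain recoverable.
/nas/bin/fs_ckpt test_ckpt -Restore
```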

  7. The test file system has been restored successfully, and the hadoop* package is "back":

clip_image014

  8. Listing the checkpoints from the command line again shows a new checkpoint, test_ckpt2. SnapSure creates this checkpoint automatically at restore time, so that the pre-restore state of the file system is preserved and can itself be recovered if needed.

clip_image016





This article is reposted from taojin1240's 51CTO blog. Original link: http://blog.51cto.com/taotao1240/735151. Please contact the original author for reprint permission.
