How Rsync Works

Introduction

The original rsync documents, the Rsync technical report and Andrew Tridgell's PhD thesis (pdf), are excellent descriptions of the rsync algorithm itself. They concentrate on the algorithm, however, and say little about how it is implemented.

This article analyses the implementation of the rsync utility on Linux/Unix and covers the following:

  • A non-mathematical overview of the rsync algorithm;
  • How the algorithm is implemented in the rsync utility;
  • The protocol used by the rsync utility;
  • The identifiable roles the rsync processes play.

Its main purpose is to lay a foundation on which readers can better understand:

  • How rsync works
  • Rsync's limitations
  • Why a requested feature is unsuited to the code-base.

It may, of course, also help programmers read the rsync code.

 

Processes and Roles: Terminology

When discussing rsync we use terms for the roles and processes that the rsync utility takes on during the different phases of its work. The following terms are used below:

client (role): The client initiates the synchronisation.
server (role): The server is the remote rsync process, or the system, that the client connects to via a remote shell or a socket.

Server is a generic term; take care not to confuse it with daemon.

    Once the connection from the client to the server is established, the client/server distinction is replaced by the sender/receiver roles.
daemon (role and process): A daemon is an rsync process that waits for connections from clients. On some platforms it is called a service.
remote shell (role and set of processes): One or more processes that provide connectivity between the client and a server on a remote system.
sender (role and process): The rsync process that has access to the source files being synchronised.
receiver (role and process): As a role, the receiver is the destination system; as a process, it is the process on that system that receives updated data and writes it to disk.
generator (process): The generator process identifies changed files and manages the file-level logic.

 

Process Startup

When the rsync client starts, it first establishes a connection with a server process, either over pipes or over the network.

How the client proceeds depends on how that remote connection is established.

When the remote side is a non-daemon server reached over a remote shell, the client forks the remote shell, which in turn starts a server on the remote system. Client and server then communicate through the remote shell's pipes; as far as the rsync processes are concerned, no networking is involved. In this mode the server's rsync options are passed on the command line used to start the remote shell.

When rsync communicates with a daemon, it talks directly over the network. In this mode the rsync options must be sent over the socket, as follows:

At the start of the communication, the client and the server each send the other their protocol version number, and the lower of the two is chosen as the standard for the transfer. If the server is running in daemon mode, the client then sends it the rsync options, followed by the exclude list.
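A minimal Python sketch of the handshake idea (the real exchange is a binary integer over the pipe or socket; the send/recv callables here merely stand in for that transport):

    # Each side announces its protocol version; both adopt the lower one.
    def negotiate(my_version, send, recv):
        send(my_version)
        peer_version = recv()
        return min(my_version, peer_version)

From this point on, both ends format everything they send according to the agreed protocol version.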

Local Rsync jobs (when the source and destination are both on locally mounted filesystems) are done exactly like a push. The client, which becomes the sender, forks a server process to fulfill the receiver role. The client/sender and server/receiver communicate with each other over pipes.

The File List

The file list includes not only the pathnames but also ownership, mode, permissions, size and modtime. If the --checksum option has been specified it also includes the file checksums.

The first thing that happens once the startup has completed is that the sender will create the file list. While it is being built, each entry is transmitted to the receiving side in a network-optimised way.

When this is done, each side sorts the file list lexicographically by path relative to the base directory of the transfer. (The exact sorting algorithm varies depending on what protocol version is in effect for the transfer.) Once that has happened all references to files will be done by their index in the file list.
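The effect can be sketched in a few lines of Python (a plain lexicographic sort stands in for rsync's exact comparison, which, as noted, varies by protocol version):

    # Both ends sort the shared file list identically, so an integer index
    # is enough to identify a file from here on.
    def index_file_list(entries):
        entries.sort(key=lambda e: e["path"])  # path relative to the base dir
        return entries

    files = index_file_list([{"path": "b"}, {"path": "a/c"}, {"path": "a"}])
    # Both sides can now say "file 1" instead of re-sending the path "a/c".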

If necessary the sender follows the file list with id→name tables for users and groups, which the receiver will use to do an id→name→id translation for every file in the file list.

After the file list has been received by the receiver, it will fork to become the generator and receiver pair completing the pipeline.

The Pipeline

Rsync is heavily pipelined. This means that it is a set of processes that communicate in a (largely) unidirectional way. Once the file list has been shared the pipeline behaves like this:

generator → sender → receiver

The output of the generator is input for the sender and the output of the sender is input for the receiver. Each process runs independently and is delayed only when the pipelines stall or when waiting for disk I/O or CPU resources.

The Generator

The generator process compares the file list with its local directory tree. Prior to beginning its primary function, if --delete has been specified, it will first identify local files not on the sender and delete them on the receiver.

The generator will then start walking the file list. Each file will be checked to see if it can be skipped. In the most common mode of operation files are not skipped if the modification time or size differs. If --checksum was specified a file-level checksum will be created and compared. Directories, device nodes and symlinks are not skipped. Missing directories will be created.
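A sketch of that quick check, with a hypothetical entry layout; full_checksum is a stand-in helper for the --checksum computation:

    import os

    # 'entry' carries the sender's metadata for one regular file.
    def can_skip(entry, local_path, use_checksum=False, full_checksum=None):
        try:
            st = os.stat(local_path)
        except FileNotFoundError:
            return False                 # no local file: cannot skip
        if use_checksum:                 # --checksum: compare file checksums
            return full_checksum(local_path) == entry["checksum"]
        return (st.st_size == entry["size"]
                and int(st.st_mtime) == entry["mtime"])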

If a file is not to be skipped, any existing version on the receiving side becomes the "basis file" for the transfer, and is used as a data source that will help to eliminate matching data from having to be sent by the sender. To effect this remote matching of data, block checksums are created for the basis file and sent to the sender immediately following the file's index number. An empty block checksum set is sent for new files and if --whole-file was specified.

The block size and, in later versions, the size of the block checksum are calculated on a per file basis according to the size of that file.
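A sketch of both steps in Python. Later rsync versions scale the block size roughly with the square root of the file size, but the rounding and bounds shown here are illustrative:

    import math

    MIN_BLOCK, MAX_BLOCK = 700, 1 << 17   # illustrative bounds

    def block_size(file_len):
        return max(MIN_BLOCK, min(MAX_BLOCK, int(math.sqrt(file_len))))

    # Generator side: one (weak, strong) checksum pair per block of the
    # basis file. 'weak' and 'strong' are checksum callables passed in.
    def block_signatures(data, blen, weak, strong):
        return [(weak(data[i:i + blen]), strong(data[i:i + blen]))
                for i in range(0, len(data), blen)]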

The Sender

The sender process reads the file index numbers and associated block checksum sets one at a time from the generator.

For each file id the generator sends it will store the block checksums and build a hash index of them for rapid lookup.

Then the local file is read and a checksum is generated for the block beginning with the first byte of the local file. This block checksum is looked for in the set that was sent by the generator, and if no match is found, the non-matching byte will be appended to the non-matching data and the block starting at the next byte will be compared. This is what is referred to as the "rolling checksum".

If a block checksum match is found, it is considered a matching block. Any accumulated non-matching data is sent to the receiver, followed by the offset and length of the matching block in the receiver's file, and the block checksum generator is advanced to the next byte after the matching block.

Matching blocks can be identified in this way even if the blocks are reordered or at different offsets. This process is the very heart of the rsync algorithm.
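The following Python sketch shows the weak checksum, its rolling update, and the sender's matching loop. The checksum formulas follow the rsync technical report; MD5 stands in for rsync's MD4, and the ('data'/'copy') token format is invented for illustration:

    import hashlib

    MOD = 1 << 16

    # Weak checksum from the rsync technical report: a is the byte sum and
    # b the positionally weighted byte sum, both mod 2^16.
    def weak_checksum(block):
        a = b = 0
        n = len(block)
        for i, x in enumerate(block):
            a += x
            b += (n - i) * x
        a %= MOD
        b %= MOD
        return (b << 16) | a, a, b

    # O(1) slide of the window: drop out_byte on the left, add in_byte on
    # the right; n is the window length.
    def roll(a, b, out_byte, in_byte, n):
        a = (a - out_byte + in_byte) % MOD
        b = (b - n * out_byte + a) % MOD
        return (b << 16) | a, a, b

    def strong_checksum(block):
        return hashlib.md5(block).digest()   # rsync proper uses MD4

    # Sender side: 'signatures' is the generator's list of (weak, strong)
    # pairs for the basis file's blocks. Emits ('copy', block_index) for
    # matched blocks and ('data', bytes) for unmatched literal runs.
    def delta(new, signatures, blen):
        index = {}
        for i, (w, s) in enumerate(signatures):
            index.setdefault(w, []).append((i, s))
        tokens, literal, pos, weak = [], bytearray(), 0, None
        while pos + blen <= len(new):
            if weak is None:
                weak, a, b = weak_checksum(new[pos:pos + blen])
            match = next((i for i, s in index.get(weak, ())
                          if s == strong_checksum(new[pos:pos + blen])), None)
            if match is not None:
                if literal:
                    tokens.append(("data", bytes(literal)))
                    literal = bytearray()
                tokens.append(("copy", match))
                pos += blen
                weak = None                  # restart window after the block
            else:
                literal.append(new[pos])     # no match: first byte is literal
                if pos + blen < len(new):    # roll the window one byte right
                    weak, a, b = roll(a, b, new[pos], new[pos + blen], blen)
                pos += 1
        literal.extend(new[pos:])            # tail shorter than one block
        if literal:
            tokens.append(("data", bytes(literal)))
        return tokens

Note how a mismatch costs only one roll() update per byte, while a match lets the window jump a whole block ahead.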

In this way, the sender will give the receiver instructions for how to reconstruct the source file into a new destination file. These instructions detail all the matching data that can be copied from the basis file (if one exists for the transfer), and include any raw data that was not available locally. At the end of each file's processing a whole-file checksum is sent and the sender proceeds with the next file.

Generating the rolling checksums and searching for matches in the checksum set sent by the generator require a good deal of CPU power. Of all the rsync processes it is the sender that is the most CPU intensive.

The Receiver

The receiver will read from the sender the data for each file, identified by its file index number. It will open the local file (called the basis) and will create a temporary file.

The receiver will expect to read non-matched data and/or to match records all in sequence for the final file contents. When non-matched data is read it will be written to the temp-file. When a block match record is received the receiver will seek to the block offset in the basis file and copy the block to the temp-file. In this way the temp-file is built from beginning to end.
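A sketch of that reconstruction, consuming the hypothetical token stream from the delta() sketch in The Sender section (and reusing block_signatures, weak_checksum and strong_checksum from the earlier sketches):

    # Receiver side: build the temp file front to back, writing literal
    # data as it arrives and copying matched blocks out of the basis file.
    def patch(basis, tokens, blen):
        out = bytearray()
        for kind, value in tokens:
            if kind == "data":
                out.extend(value)                    # unmatched literal bytes
            else:                                    # "copy": block reference
                out.extend(basis[value * blen:value * blen + blen])
        return bytes(out)

    # Round trip using the earlier sketches:
    old = b"The quick brown fox jumps over the lazy dog. " * 50
    new = old.replace(b"lazy", b"sleepy")
    blen = 16
    sigs = block_signatures(old, blen,
                            lambda blk: weak_checksum(blk)[0], strong_checksum)
    assert patch(old, delta(new, sigs, blen), blen) == new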

The file's checksum is generated as the temp-file is built. At the end of the file, this checksum is compared with the file checksum from the sender. If the file checksums do not match the temp-file is deleted. If the file fails once it will be reprocessed in a second phase, and if it fails twice an error is reported.

After the temp-file has been completed, its ownership and permissions and modification time are set. It is then renamed to replace the basis file.

Copying data from the basis file to the temp-file makes the receiver the most disk-intensive of all the rsync processes. Small files may still be in disk cache, mitigating this, but for large files the cache may thrash as the generator has moved on to other files and the sender causes further latency. As data is read, possibly at random, from one file and written to another, if the working set is larger than the disk cache, then what is called a seek storm can occur, further hurting performance.

The Daemon

The daemon process, like many daemons, forks for every connection. On startup, it parses the rsyncd.conf file to determine what modules exist and to set the global options.

When a connection is received for a defined module, the daemon forks a new child process to handle it. That child process then reads the rsyncd.conf file to set the options for the requested module, which may chroot to the module path and may drop setuid and setgid privileges for the process. After that it behaves just like any other rsync server process, adopting either a sender or a receiver role.
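A skeleton of that fork-per-connection structure (POSIX-only; module lookup, option parsing and the privilege drop are reduced to comments, and the handler body is only a placeholder):

    import os
    import signal
    import socket

    def handle(conn):
        # Placeholder for the per-module negotiation, after which this
        # process behaves as an ordinary rsync server (sender or receiver).
        conn.sendall(b"@RSYNCD: 31\n")   # daemon greeting; version illustrative
        conn.close()

    def serve(port=873):                  # rsync's registered port
        signal.signal(signal.SIGCHLD, signal.SIG_IGN)   # auto-reap children
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", port))
            srv.listen()
            while True:
                conn, _ = srv.accept()
                if os.fork() == 0:        # child: handle this connection
                    srv.close()
                    # re-read rsyncd.conf for the requested module, then
                    # optionally os.chroot(module_path) and drop uid/gid
                    handle(conn)
                    os._exit(0)
                conn.close()              # parent: loop for the next client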

The Rsync Protocol

A well-designed communications protocol has a number of characteristics.

  • Everything is sent in well defined packets with a header and an optional body or data payload.
  • In each packet's header a type and/or command is specified.
  • Each packet has a definite length.

In addition to these characteristics, protocols have varying degrees of statefulness, inter-packet independence, human readability, and the ability to reestablish a disconnected session.

Rsync's protocol has none of these good characteristics. The data is transferred as an unbroken stream of bytes. With the exception of the unmatched file-data, there are no length specifiers nor counts. Instead the meaning of each byte is dependent on its context as defined by the protocol level.

As an example, when the sender is sending the file list it simply sends each file list entry and terminates the list with a null byte. Within the file list entries, a bitfield indicates which fields of the structure to expect and those that are variable length strings are simply null terminated. The generator sending file numbers and block checksum sets works the same way.
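To make the flavour concrete, here is an illustration of this style of encoding in Python. The bitfield layout and field sizes are invented; the real wire format differs and changes between protocol versions:

    # One file-list entry: a flags byte says which fields follow, strings
    # are null-terminated, and nothing carries an overall length.
    HAS_SIZE, HAS_MTIME, HAS_OWNER = 1, 2, 4

    def encode_entry(path, size=None, mtime=None, owner=None):
        flags = ((HAS_SIZE if size is not None else 0)
                 | (HAS_MTIME if mtime is not None else 0)
                 | (HAS_OWNER if owner is not None else 0))
        out = bytes([flags]) + path + b"\x00"
        if size is not None:
            out += size.to_bytes(8, "little")
        if mtime is not None:
            out += mtime.to_bytes(4, "little")
        if owner is not None:
            out += owner + b"\x00"
        return out

    # A single null byte ends the whole list. A reader can only parse the
    # stream by knowing the protocol version and interpreting each flags byte.
    stream = encode_entry(b"a.txt", size=42, mtime=1700000000) + b"\x00"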

This method of communication works quite well on reliable connections and it certainly has less data overhead than the formal protocols. It unfortunately makes the protocol extremely difficult to document, debug or extend. Each version of the protocol will have subtle differences on the wire that can only be anticipated by knowing the exact protocol version.

Notes

This document is a work in progress. The author expects that it has some glaring oversights and some portions that may be more confusing than enlightening for some readers. It is hoped that this could evolve into a useful reference.

Specific suggestions for improvement are welcome, as would be a complete rewrite.

 

 

Sync Algorithm: RSync vs. RDC

 

Data synchronisation (sync) is a problem many network applications need to solve, file mirroring for example. Take file synchronisation as the model problem: two hosts on a network, Host-A and Host-B, each hold a copy of the same file, File-Old. The copy on Host-A has since been changed into File-New, and Host-B should obtain File-New through synchronisation.
 
Consider how to handle this. The simplest method is to transfer all the data. That is simple but wasteful: File-New differs from File-Old only by some small changes, so a full copy costs too much. It would be far better to transfer only the changed parts, that is, the inserted, deleted and modified portions of the file. The problem then becomes: how do we obtain the difference between File-Old and File-New?
 
If Host-A kept a copy of File-Old, an ordinary diff against File-New would suffice, but in practice Host-A usually does not keep File-Old. Alternatively, if the file format had strong built-in version control, Host-B could tell Host-A which version it holds and Host-A could compute the difference. More often, though, a file is just a string of bytes with no version information and no historical copies. Rsync and RDC solve synchronisation for exactly this case.
 
The RSync algorithm was invented by the Australian Andrew Tridgell. My first reaction after understanding it was: "Hey, I could have come up with this!" Indeed, in Andrew Tridgell's own words, the algorithm takes only half an hour to understand, yet it took him years of research to develop.
 
Here is a rough outline of the Rsync algorithm:
1) Host-B splits File-Old into non-overlapping blocks of K bytes each (padding the tail if it is shorter than K bytes) and computes a weak hash and a strong hash for every block. A weak hash is one where two different blocks may well share the same value, but it is fast to compute, and here it must also be rollable: knowing the hash of bytes 1 through K, the hash of bytes 2 through K+1 can be computed very cheaply by rolling the window forward one byte. A strong hash is one where different blocks can be assumed to produce different values; Rsync uses MD4. Let WH denote the weak hash and SH the strong hash.
2) Host-B sends the WH and SH values of every block to Host-A.
3) Now Host-A takes the stage, and it bears the heavier computation. For the K-byte block starting at every byte of File-New, Host-A computes the WH and matches it against the WHs sent by Host-B. On a hit it also computes that block's SH and matches again; if that matches too, the block exists in File-Old as well. If File-New is N bytes long, Host-A has to process roughly (N-K) blocks, which shows the point of using two hash algorithms: WH is a first-pass filter, and because it rolls it quickly rejects most non-matches, while anything that slips through cannot escape the SH check.
4) From this computation Host-A knows which blocks of File-New also appear in File-Old, and therefore which parts differ. Host-A encodes those differences and sends them to Host-B.
5) Host-B receives the data from Host-A and decodes it, obtaining the changes of File-New relative to File-Old, and thereby File-New itself.
 
The whole process takes only one round trip and yields an exact, byte-level difference; the trade-off is that Host-A's computational load is comparatively heavy.
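A compact, self-contained Python sketch of steps 1) through 5). MD5 stands in for the MD4 that Rsync actually uses, K is kept tiny so the toy strings below produce matches, and for brevity the weak hash is recomputed at every offset instead of being rolled:

    import hashlib

    K = 4                                    # toy block size

    def wh(block):                           # weak hash (rollable in real rsync)
        return sum(block) & 0xFFFF

    def sh(block):                           # strong hash (MD4 in real rsync)
        return hashlib.md5(block).digest()

    file_old = b"I Love Playing Basketball"  # on Host-B
    file_new = b"I Like Playing Football"    # on Host-A

    # Steps 1-2: Host-B hashes File-Old's fixed blocks, sends (WH, SH) pairs.
    sigs = {(wh(file_old[i:i + K]), sh(file_old[i:i + K])): i
            for i in range(0, len(file_old), K)}

    # Steps 3-4: Host-A checks the K-byte block starting at every byte of
    # File-New; matches become block references, the rest stays literal.
    ops, pos, lit = [], 0, bytearray()
    while pos < len(file_new):
        blk = file_new[pos:pos + K]
        hit = sigs.get((wh(blk), sh(blk))) if len(blk) == K else None
        if hit is not None:
            if lit:
                ops.append(("lit", bytes(lit)))
                lit = bytearray()
            ops.append(("ref", hit))
            pos += K
        else:
            lit.append(file_new[pos])
            pos += 1
    if lit:
        ops.append(("lit", bytes(lit)))

    # Step 5: Host-B decodes, pulling referenced blocks out of File-Old.
    rebuilt = b"".join(v if k == "lit" else file_old[v:v + K] for k, v in ops)
    assert rebuilt == file_new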
 
Rsync's implementation has become an important tool on *nix systems, so when Microsoft introduced DFSR (Distributed File System Replication) in Windows 2003 Server, there was some booing from the open source community. In fact DFSR uses the RDC (Remote Differential Compression) algorithm, which differs greatly from RSync; it is no copy of RSync.
 
To my mind, RSync has an academic flavour (the algorithm was, after all, Andrew Tridgell's doctoral work). The result is perfect: every differing byte between File-New and File-Old is found. But the computation of Host-A and Host-B is unequal, with most of it concentrated on Host-A. RDC heads in a somewhat different direction: it does not pursue a byte-level diff, but instead uses less computation to produce a block-level diff.
 
The RDC algorithm requires Host-A and Host-B to split File-New and File-Old respectively using the same rule, then compute an SH for each block. Host-B sends the SH values of its blocks to Host-A, and Host-A diffs the two sets of SHs to learn which blocks differ, which were deleted and which were added. The key to RDC is the chunking rule, which also uses a WH: applying the same rule to File-Old and File-New should produce blocks that expose the differences as sharply as possible.
 
For example, File-Old contains "I Love Playing Basketball" and
File-New is "I Like Playing Football".
With the RSync algorithm, Host-A computes the exact difference within "I Like Playing Football", character by character, distinguishing the characters that were changed from those that were inserted. Host-A essentially tells Host-B: "replace characters 4-6 with 'ike', delete characters 16-21, and insert 'Foot'".
 
With the RDC algorithm, the result might look like this:
File-Old is split into 3 chunks:
"I Love | Playing | Basketball"
File-New is split into 3 chunks:
"I Like | Playing | Football"
Comparing them, Host-A finds that only chunk 2 of File-Old matches chunk 2 of File-New, so it tells Host-B: "replace your chunk 1 with 'I Like' and your chunk 3 with 'Football'".
 
As seen above, RDC is comparatively wasteful: it transfers somewhat more data than RSync, but the computation is spread fairly evenly between Host-A and Host-B. For RDC to perform well, a good chunking scheme is essential, one that lets the chunks containing a diff carry as little unchanged data as possible. How is that achieved? Again with the WH: a rolling checksum is used to quickly mine structural properties out of the data.
 
Note that RSync's chunking policy produces fixed-length blocks, whereas RDC's chunks may vary in length.
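A sketch of the content-defined chunking idea that variable-length blocks imply: a rolling hash over a small window picks the cut points, so identical content tends to yield identical chunks even when its offset shifts. The window size, mask and hash here are illustrative, not the parameters of Microsoft's RDC:

    # Cut a chunk wherever the rolling hash of the last W bytes matches a
    # pattern; boundaries then depend on content, not on fixed offsets.
    W, MASK, BASE, M = 16, 0x3F, 257, 1 << 32   # ~one cut per 64 bytes

    def chunks(data):
        out, start, h = [], 0, 0
        top = pow(BASE, W - 1, M)
        for i, x in enumerate(data):
            if i >= W:                           # slide: drop the oldest byte
                h = (h - data[i - W] * top) % M
            h = (h * BASE + x) % M
            if i >= W - 1 and (h & MASK) == MASK:
                out.append(data[start:i + 1])    # boundary found: cut here
                start = i + 1
        if start < len(data):
            out.append(data[start:])             # final partial chunk
        return out

    # Inserting a prefix shifts every byte, yet the cut points re-align
    # after the first boundary, so most chunks typically stay identical:
    a = chunks(bytes(range(256)) * 4)
    b = chunks(b"XYZ" + bytes(range(256)) * 4)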
 
Although RDC is somewhat more wasteful, most of what it transfers is still delta data, and its computation is both balanced and smaller overall. DFS in Windows 2003 Server R2 uses the RDC algorithm, and another application is the Shared Folder feature of Live Messenger; try it and you will find the efficiency is not bad :)

Note:
The first half of this article is a translation; the original is available from the rsync official website. For lack of time the translation was never finished, and the parts that were translated sometimes fail to convey the original meaning; they will be revised when time allows. The second half is a repost of another author's article; the original was linked here.


