An analysis of PostgreSQL pg_clog fsync frequency

Introduction:

This article analyzes when pg_clog needs to call fsync.

Quoting an introduction to pg_clog from the PostgreSQL wiki:
https://wiki.postgresql.org/wiki/Hint_Bits

Some details here are in src/backend/access/transam/README:
1. "pg_clog records the commit status for each transaction that has been assigned an XID."
2. "Transactions and subtransactions are assigned permanent XIDs only when/if they first do something that requires one --- typically, insert/update/delete a tuple, though there are a few other places that need an XID assigned."

pg_clog is updated only at sub or main transaction end. When the transactionid is assigned the page of the clog that contains that transactionid is checked to see if it already exists and if not, it is initialised.
pg_clog is allocated in pages of 8kB apiece (in fact the page size equals BLCKSZ, so it is not necessarily 8KB; see the code analysis below). 
Each transaction needs 2 bits, so on an 8 kB page there is space for 4 transactions/byte * 8k bytes = 32k transactions.
On allocation, pages are zeroed, which is the bit pattern for "transaction in progress". 
So when a transaction starts, it only needs to ensure that the pg_clog page that contains its status is allocated, but it need not write anything to it. 
In 8.3 and later, this happens not when the transaction starts, but when the Xid is assigned (i.e. when the transaction first calls a read-write command). 
In previous versions it happens when the first snapshot is taken, normally on the first command of any type with very few exceptions.

This means that one transaction in every 32K writing transactions *does* have to do extra work when it assigns itself an XID, namely create and zero out the next page of pg_clog. 
And that doesn't just slow down the transaction in question, but the next few guys that would like an XID but arrive on the scene while the zeroing-out is still in progress. 
This probably contributes to reported behavior that the transaction execution time is subject to unpredictable spikes.
Every 32K transactions, a CLOG page has to be extended; each extension zero-fills the new page and may call PG_FSYNC. This should be lightweight compared with fsyncing the XLOG, but it can still produce unpredictable response-time spikes: if a backend blocks while extending the CLOG page, every session waiting for that clog page is affected.

The following refers to the case where the CLOG buffer has no empty slot: a dirty page is then selected from the CLOG buffer slots and flushed out, and only at that point does pg_fsync occur.
CLOG pages don't make their way out to disk until the internal CLOG buffers are filled, at which point the least recently used buffer there is evicted to permanent storage.

Below, let's trace through the code to see how pg_clog ends up calling pg_fsync to flush dirty pages.

Every allocation of a new transaction ID calls ExtendCLOG. If the CLOG page computed from the transaction ID does not exist yet, it has to be extended, but not every extension calls pg_fsync: checkpoints flush the clog buffers to disk, so a backend only has to pick a page itself and pg_fsync the corresponding pg_clog file when, at the moment it needs a new CLOG page, no clog buffer slot is empty or clean. The call path analyzed below is summarized right after this paragraph.
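
The call chain, from XID allocation down to the fsync:

GetNewTransactionId
  -> ExtendCLOG
    -> ZeroCLOGPage
      -> SimpleLruZeroPage
        -> SlruSelectLRUPage        (pg_fsync path taken only when no empty/clean slot exists)
          -> SlruInternalWritePage
            -> SlruPhysicalWritePage
              -> pg_fsync
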
src/backend/access/transam/varsup.c

/*
 * Allocate the next XID for a new transaction or subtransaction.
 *
 * The new XID is also stored into MyPgXact before returning.
 *
 * Note: when this is called, we are actually already inside a valid
 * transaction, since XIDs are now not allocated until the transaction
 * does something.  So it is safe to do a database lookup if we want to
 * issue a warning about XID wrap.
 */
TransactionId
GetNewTransactionId(bool isSubXact)
{
......
        /*
         * If we are allocating the first XID of a new page of the commit log,
         * zero out that commit-log page before returning. We must do this while
         * holding XidGenLock, else another xact could acquire and commit a later
         * XID before we zero the page.  Fortunately, a page of the commit log
         * holds 32K or more transactions, so we don't have to do this very often.
         *
         * Extend pg_subtrans too.
         */
        ExtendCLOG(xid);
        ExtendSUBTRANS(xid);
......

ExtendCLOG(xid) extends the clog page. It uses TransactionIdToPgIndex to compute the XID modulo CLOG_XACTS_PER_PAGE; a non-zero remainder means no extension is needed. (With BLCKSZ = 8KB, CLOG_XACTS_PER_PAGE = 32768, so only every 32768th XID actually extends a page.)
src/backend/access/transam/clog.c

#define TransactionIdToPgIndex(xid) ((xid) % (TransactionId) CLOG_XACTS_PER_PAGE)

/*
 * Make sure that CLOG has room for a newly-allocated XID.
 *
 * NB: this is called while holding XidGenLock.  We want it to be very fast
 * most of the time; even when it's not so fast, no actual I/O need happen
 * unless we're forced to write out a dirty clog or xlog page to make room
 * in shared memory.
 */
void
ExtendCLOG(TransactionId newestXact)
{
        int                     pageno;

        /*
         * No work except at first XID of a page.  But beware: just after
         * wraparound, the first XID of page zero is FirstNormalTransactionId.
         */
        if (TransactionIdToPgIndex(newestXact) != 0 &&    // non-zero remainder: no extension needed
                !TransactionIdEquals(newestXact, FirstNormalTransactionId))
                return;

        pageno = TransactionIdToPage(newestXact);

        LWLockAcquire(CLogControlLock, LW_EXCLUSIVE);

        /* Zero the page and make an XLOG entry about it */
        ZeroCLOGPage(pageno, true);

        LWLockRelease(CLogControlLock);
}

ZeroCLOGPage(pageno, true) calls SimpleLruZeroPage to extend and initialize the CLOG page, and writes an XLOG record about it.

/*
 * Initialize (or reinitialize) a page of CLOG to zeroes.
 * If writeXlog is TRUE, also emit an XLOG record saying we did this.
 *
 * The page is not actually written, just set up in shared memory.
 * The slot number of the new page is returned.
 *
 * Control lock must be held at entry, and will be held at exit.
 */
static int
ZeroCLOGPage(int pageno, bool writeXlog)
{
        int                     slotno;

        slotno = SimpleLruZeroPage(ClogCtl, pageno);

        if (writeXlog)
                WriteZeroPageXlogRec(pageno);

        return slotno;
}

SimpleLruZeroPage(ClogCtl, pageno) calls SlruSelectLRUPage(ctl, pageno) to choose a slot in the clog shared buffer.
src/backend/access/transam/slru.c

/*
 * Initialize (or reinitialize) a page to zeroes.
 *
 * The page is not actually written, just set up in shared memory.
 * The slot number of the new page is returned.
 *
 * Control lock must be held at entry, and will be held at exit.
 */
int
SimpleLruZeroPage(SlruCtl ctl, int pageno)
{
        SlruShared      shared = ctl->shared;
        int                     slotno;

        /* Find a suitable buffer slot for the page */
        slotno = SlruSelectLRUPage(ctl, pageno);
        Assert(shared->page_status[slotno] == SLRU_PAGE_EMPTY ||
                   (shared->page_status[slotno] == SLRU_PAGE_VALID &&
                        !shared->page_dirty[slotno]) ||
                   shared->page_number[slotno] == pageno);

        /* Mark the slot as containing this page */
        shared->page_number[slotno] = pageno;
        shared->page_status[slotno] = SLRU_PAGE_VALID;
        shared->page_dirty[slotno] = true;
        SlruRecentlyUsed(shared, slotno);

        /* Set the buffer to zeroes */
        MemSet(shared->page_buffer[slotno], 0, BLCKSZ);

        /* Set the LSNs for this new page to zero */
        SimpleLruZeroLSNs(ctl, slotno);

        /* Assume this page is now the latest active page */
        shared->latest_page_number = pageno;

        return slotno;
}

SlruSelectLRUPage(SlruCtl ctl, int pageno) looks for a usable slot in the clog buffer. If there is no free slot, it has to call SlruInternalWritePage(ctl, bestvalidslot, NULL) to write out a shared buffer page.

/*
 * Select the slot to re-use when we need a free slot.
 *
 * The target page number is passed because we need to consider the
 * possibility that some other process reads in the target page while
 * we are doing I/O to free a slot.  Hence, check or recheck to see if
 * any slot already holds the target page, and return that slot if so.
 * Thus, the returned slot is *either* a slot already holding the pageno
 * (could be any state except EMPTY), *or* a freeable slot (state EMPTY
 * or CLEAN).
 *
 * Control lock must be held at entry, and will be held at exit.
 */
static int
SlruSelectLRUPage(SlruCtl ctl, int pageno)
{
......
        /* See if page already has a buffer assigned */  // if the target page is already buffered, return that slot -- no pg_fsync needed
        for (slotno = 0; slotno < shared->num_slots; slotno++)
        {
            if (shared->page_number[slotno] == pageno &&
                shared->page_status[slotno] != SLRU_PAGE_EMPTY)
                return slotno;
        }
...... 
        /* If the target page is not in a buffer, pick a victim: the least recently used page. Note that the latest page is never chosen, and I/O-busy slots are avoided when possible.
         * If we find any EMPTY slot, just select that one. Else choose a
         * victim page to replace.  We normally take the least recently used
         * valid page, but we will never take the slot containing
         * latest_page_number, even if it appears least recently used.  We
         * will select a slot that is already I/O busy only if there is no
         * other choice: a read-busy slot will not be least recently used once
         * the read finishes, and waiting for an I/O on a write-busy slot is
         * inferior to just picking some other slot.  Testing shows the slot
         * we pick instead will often be clean, allowing us to begin a read at
         * once.
         *  
         * Normally the page_lru_count values will all be different and so
         * there will be a well-defined LRU page.  But since we allow
         * concurrent execution of SlruRecentlyUsed() within
         * SimpleLruReadPage_ReadOnly(), it is possible that multiple pages
         * acquire the same lru_count values.  In that case we break ties by
         * choosing the furthest-back page.
         *
         * Notice that this next line forcibly advances cur_lru_count to a
         * value that is certainly beyond any value that will be in the
         * page_lru_count array after the loop finishes.  This ensures that
         * the next execution of SlruRecentlyUsed will mark the page newly
         * used, even if it's for a page that has the current counter value.
         * That gets us back on the path to having good data when there are
         * multiple pages with the same lru_count.
         */
        cur_count = (shared->cur_lru_count)++;
        for (slotno = 0; slotno < shared->num_slots; slotno++)
        {
            int            this_delta;
            int            this_page_number;

            if (shared->page_status[slotno] == SLRU_PAGE_EMPTY)  // if an empty slot has appeared in the meantime, return it
                return slotno;
            this_delta = cur_count - shared->page_lru_count[slotno];
            if (this_delta < 0)
            {
                /*
                 * Clean up in case shared updates have caused cur_count
                 * increments to get "lost".  We back off the page counts,
                 * rather than trying to increase cur_count, to avoid any
                 * question of infinite loops or failure in the presence of
                 * wrapped-around counts.
                 */
                shared->page_lru_count[slotno] = cur_count;
                this_delta = 0;
            }
            this_page_number = shared->page_number[slotno];
            if (this_page_number == shared->latest_page_number)
                continue;
            if (shared->page_status[slotno] == SLRU_PAGE_VALID)  // a valid page not currently doing I/O -- a candidate dirty page
            {
                if (this_delta > best_valid_delta ||
                    (this_delta == best_valid_delta &&
                     ctl->PagePrecedes(this_page_number,
                                       best_valid_page_number)))
                {
                    bestvalidslot = slotno;
                    best_valid_delta = this_delta;
                    best_valid_page_number = this_page_number;
                }
            }
            else
            {
                if (this_delta > best_invalid_delta ||
                    (this_delta == best_invalid_delta &&
                     ctl->PagePrecedes(this_page_number,
                                       best_invalid_page_number)))
                {
                    bestinvalidslot = slotno;  // when every page is I/O-busy, we have no choice but to pick one of the busy slots
                    best_invalid_delta = this_delta;
                    best_invalid_page_number = this_page_number;
                }
            }
        }

        /*
         * If all pages (except possibly the latest one) are I/O busy, we'll
         * have to wait for an I/O to complete and then retry.  In that
         * unhappy case, we choose to wait for the I/O on the least recently
         * used slot, on the assumption that it was likely initiated first of
         * all the I/Os in progress and may therefore finish first.
         */
        if (best_valid_delta < 0)  // no SLRU_PAGE_VALID page was found: every page is I/O-busy
        {
            SimpleLruWaitIO(ctl, bestinvalidslot);
            continue;
        }

        /*
         * If the selected page is clean, we're set.
         */
        if (!shared->page_dirty[bestvalidslot])  // if the page is no longer dirty (e.g. a checkpoint flushed it), just return it
            return bestvalidslot;

......
Only when all of the steps above fail to find an EMPTY or clean slot does the backend flush a dirty page itself (SlruInternalWritePage, which leads to pg_fsync).
                /*
                 * Write the page.  Note the third argument (fdata) is NULL
                 */
                SlruInternalWritePage(ctl, bestvalidslot, NULL);
......

SlruInternalWritePage(SlruCtl ctl, int slotno, SlruFlush fdata) calls SlruPhysicalWritePage to perform the write.

/*
 * Write a page from a shared buffer, if necessary.
 * Does nothing if the specified slot is not dirty.
 *
 * NOTE: only one write attempt is made here.  Hence, it is possible that
 * the page is still dirty at exit (if someone else re-dirtied it during
 * the write).  However, we *do* attempt a fresh write even if the page
 * is already being written; this is for checkpoints.
 *
 * Control lock must be held at entry, and will be held at exit.
 */
static void
SlruInternalWritePage(SlruCtl ctl, int slotno, SlruFlush fdata)
{
......
        /* Do the write */
        ok = SlruPhysicalWritePage(ctl, pageno, slotno, fdata);
......

The SLRU page status codes:

/*
 * Page status codes.  Note that these do not include the "dirty" bit.
 * page_dirty can be TRUE only in the VALID or WRITE_IN_PROGRESS states;
 * in the latter case it implies that the page has been re-dirtied since
 * the write started.
 */
typedef enum
{
    SLRU_PAGE_EMPTY,            /* buffer is not in use */
    SLRU_PAGE_READ_IN_PROGRESS, /* page is being read in */
    SLRU_PAGE_VALID,            /* page is valid and not being written */
    SLRU_PAGE_WRITE_IN_PROGRESS /* page is being written out */
} SlruPageStatus;

SlruPhysicalWritePage(ctl, pageno, slotno, fdata). For pg_clog, the SlruCtlData structure involved here has do_fsync=true.

/*
 * Physical write of a page from a buffer slot
 *
 * On failure, we cannot just ereport(ERROR) since caller has put state in
 * shared memory that must be undone.  So, we return FALSE and save enough
 * info in static variables to let SlruReportIOError make the report.
 *
 * For now, assume it's not worth keeping a file pointer open across
 * independent read/write operations.  We do batch operations during
 * SimpleLruFlush, though.
 *
 * fdata is NULL for a standalone write, pointer to open-file info during
 * SimpleLruFlush.
 */
static bool
SlruPhysicalWritePage(SlruCtl ctl, int pageno, int slotno,
                                          SlruFlush fdata);
......
        int                     fd = -1;
......
//   if the file does not exist, create it automatically
        if (fd < 0)
        {
                /*
                 * If the file doesn't already exist, we should create it.  It is
                 * possible for this to need to happen when writing a page that's not
                 * first in its segment; we assume the OS can cope with that. (Note:
                 * it might seem that it'd be okay to create files only when
                 * SimpleLruZeroPage is called for the first page of a segment.
                 * However, if after a crash and restart the REDO logic elects to
                 * replay the log from a checkpoint before the latest one, then it's
                 * possible that we will get commands to set transaction status of
                 * transactions that have already been truncated from the commit log.
                 * Easiest way to deal with that is to accept references to
                 * nonexistent files here and in SlruPhysicalReadPage.)
                 *
                 * Note: it is possible for more than one backend to be executing this
                 * code simultaneously for different pages of the same file. Hence,
                 * don't use O_EXCL or O_TRUNC or anything like that.
                 */
                SlruFileName(ctl, path, segno);
                fd = OpenTransientFile(path, O_RDWR | O_CREAT | PG_BINARY,
                                                           S_IRUSR | S_IWUSR);
......
        /*
         * If not part of Flush, need to fsync now.  We assume this happens
         * infrequently enough that it's not a performance issue.
         */
        if (!fdata)  // fdata was passed as NULL and ctl->do_fsync is true, so pg_fsync below gets called
        {
                if (ctl->do_fsync && pg_fsync(fd))  // do_fsync is true for pg_clog and multixact
                {
                        slru_errcause = SLRU_FSYNC_FAILED;
                        slru_errno = errno;
                        CloseTransientFile(fd);
                        return false;
                }

                if (CloseTransientFile(fd))
                {
                        slru_errcause = SLRU_CLOSE_FAILED;
                        slru_errno = errno;
                        return false;
                }
        }

The code behind ctl->do_fsync && pg_fsync(fd):
src/include/access/slru.h

/*
 * SlruCtlData is an unshared structure that points to the active information
 * in shared memory.
 */
typedef struct SlruCtlData
{
        SlruShared      shared;

        /*
         * This flag tells whether to fsync writes (true for pg_clog and multixact
         * stuff, false for pg_subtrans and pg_notify).
         */
        bool            do_fsync;

        /*
         * Decide which of two page numbers is "older" for truncation purposes. We
         * need to use comparison of TransactionIds here in order to do the right
         * thing with wraparound XID arithmetic.
         */
        bool            (*PagePrecedes) (int, int);

        /*
         * Dir is set during SimpleLruInit and does not change thereafter. Since
         * it's always the same, it doesn't need to be in shared memory.
         */
        char            Dir[64];
} SlruCtlData;
typedef SlruCtlData *SlruCtl;

src/backend/access/transam/slru.c

......
void
SimpleLruInit(SlruCtl ctl, const char *name, int nslots, int nlsns,
                          LWLock *ctllock, const char *subdir)
......
        ctl->do_fsync = true;           /* default behavior */  // do_fsync defaults to true when an SLRU is initialized
......

Below is the clog SLRU initialization. Note that it does not override do_fsync, so it stays true.
src/backend/access/transam/clog.c

/*
 * Number of shared CLOG buffers.
 *
 * Testing during the PostgreSQL 9.2 development cycle revealed that on a
 * large multi-processor system, it was possible to have more CLOG page
 * requests in flight at one time than the number of CLOG buffers which existed
 * at that time, which was hardcoded to 8.  Further testing revealed that
 * performance dropped off with more than 32 CLOG buffers, possibly because
 * the linear buffer search algorithm doesn't scale well.
 *
 * Unconditionally increasing the number of CLOG buffers to 32 did not seem
 * like a good idea, because it would increase the minimum amount of shared
 * memory required to start, which could be a problem for people running very
 * small configurations.  The following formula seems to represent a reasonable
 * compromise: people with very low values for shared_buffers will get fewer
 * CLOG buffers as well, and everyone else will get 32.
 *
 * It is likely that some further work will be needed here in future releases;
 * for example, on a 64-core server, the maximum number of CLOG requests that
 * can be simultaneously in flight will be even larger.  But that will
 * apparently require more than just changing the formula, so for now we take
 * the easy way out.
 */
Size
CLOGShmemBuffers(void)
{
        return Min(32, Max(4, NBuffers / 512));
}
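
As a worked example (assuming the default 8KB block size): with shared_buffers = 128MB, NBuffers = 16384, so CLOGShmemBuffers() = Min(32, Max(4, 16384 / 512)) = 32. Any configuration with shared_buffers of 128MB or more therefore gets the full 32 CLOG buffers, and very small configurations bottom out at the floor of 4.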

void
CLOGShmemInit(void)
{
        ClogCtl->PagePrecedes = CLOGPagePrecedes;
        SimpleLruInit(ClogCtl, "CLOG Ctl", CLOGShmemBuffers(), CLOG_LSNS_PER_PAGE,
                                  CLogControlLock, "pg_clog");
}

Below is the subtrans SLRU initialization; it sets do_fsync=false, so extending a subtrans page does not call pg_fsync.
src/backend/access/transam/subtrans.c

void
SUBTRANSShmemInit(void)
{
        SubTransCtl->PagePrecedes = SubTransPagePrecedes;
        SimpleLruInit(SubTransCtl, "SUBTRANS Ctl", NUM_SUBTRANS_BUFFERS, 0,
                                  SubtransControlLock, "pg_subtrans");
        /* Override default assumption that writes should be fsync'd */
        SubTransCtl->do_fsync = false;
}

multixact.c does not override do_fsync either, so multixact writes also need fsync.
MultiXactShmemInit(void)@src/backend/access/transam/multixact.c

The pg_fsync code:
src/backend/storage/file/fd.c

/*
 * pg_fsync --- do fsync with or without writethrough
 */
int
pg_fsync(int fd)
{
        /* #if is to skip the sync_method test if there's no need for it */
#if defined(HAVE_FSYNC_WRITETHROUGH) && !defined(FSYNC_WRITETHROUGH_IS_FSYNC)
        if (sync_method == SYNC_METHOD_FSYNC_WRITETHROUGH)
                return pg_fsync_writethrough(fd);
        else
#endif
                return pg_fsync_no_writethrough(fd);
}

/*
 * pg_fsync_no_writethrough --- same as fsync except does nothing if
 *      enableFsync is off
 */
int
pg_fsync_no_writethrough(int fd)
{
        if (enableFsync)
                return fsync(fd);
        else
                return 0;
}

/*
 * pg_fsync_writethrough
 */
int
pg_fsync_writethrough(int fd)
{
        if (enableFsync)
        {
#ifdef WIN32
                return _commit(fd);
#elif defined(F_FULLFSYNC)
                return (fcntl(fd, F_FULLFSYNC, 0) == -1) ? -1 : 0;
#else
                errno = ENOSYS;
                return -1;
#endif
        }
        else
                return 0;
}

From the code analysis above: when extending a clog page, if there is no EMPTY slot in the CLOG buffer, the backend process must flush a CLOG page itself, and that is when pg_fsync gets called.

A clog page is the same size as the database block size (BLCKSZ), which defaults to 8KB (unless changed when compiling the server) and can be set as large as 32KB.
Each transaction needs 2 bits in pg_clog to record its status (in progress, committed, aborted, or subcommitted).
So an 8KB clog page stores the status of 32K transactions; in other words, a clog page must be extended once every 32K transactions.

Below are some of the commonly used clog macros.
src/backend/access/transam/clog.c

/*
 * Defines for CLOG page sizes.  A page is the same BLCKSZ as is used
 * everywhere else in Postgres.
 *
 * Note: because TransactionIds are 32 bits and wrap around at 0xFFFFFFFF,
 * CLOG page numbering also wraps around at 0xFFFFFFFF/CLOG_XACTS_PER_PAGE,
 * and CLOG segment numbering at
 * 0xFFFFFFFF/CLOG_XACTS_PER_PAGE/SLRU_PAGES_PER_SEGMENT.  We need take no
 * explicit notice of that fact in this module, except when comparing segment
 * and page numbers in TruncateCLOG (see CLOGPagePrecedes).
 */

/* We need two bits per xact, so four xacts fit in a byte */
#define CLOG_BITS_PER_XACT      2
#define CLOG_XACTS_PER_BYTE 4
#define CLOG_XACTS_PER_PAGE (BLCKSZ * CLOG_XACTS_PER_BYTE)
#define CLOG_XACT_BITMASK       ((1 << CLOG_BITS_PER_XACT) - 1)

#define TransactionIdToPage(xid)         ((xid) / (TransactionId) CLOG_XACTS_PER_PAGE)
#define TransactionIdToPgIndex(xid)     ((xid) % (TransactionId) CLOG_XACTS_PER_PAGE)
#define TransactionIdToByte(xid)          (TransactionIdToPgIndex(xid) / CLOG_XACTS_PER_BYTE)
#define TransactionIdToBIndex(xid)       ((xid) % (TransactionId) CLOG_XACTS_PER_BYTE)
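
As a standalone sketch (this is not PostgreSQL code; it assumes BLCKSZ = 8192), the following program shows how these macros map an XID to its clog page, byte, and bit index, and which XIDs force a page extension:

/* Standalone sketch of the clog address arithmetic; assumes BLCKSZ = 8192. */
#include <stdio.h>
#include <stdint.h>

typedef uint32_t TransactionId;

#define BLCKSZ                  8192
#define CLOG_XACTS_PER_BYTE     4
#define CLOG_XACTS_PER_PAGE     (BLCKSZ * CLOG_XACTS_PER_BYTE)

#define TransactionIdToPage(xid)        ((xid) / (TransactionId) CLOG_XACTS_PER_PAGE)
#define TransactionIdToPgIndex(xid)     ((xid) % (TransactionId) CLOG_XACTS_PER_PAGE)
#define TransactionIdToByte(xid)        (TransactionIdToPgIndex(xid) / CLOG_XACTS_PER_BYTE)
#define TransactionIdToBIndex(xid)      ((xid) % (TransactionId) CLOG_XACTS_PER_BYTE)

int main(void)
{
        TransactionId xids[] = {3, 32767, 32768, 65536};

        for (int i = 0; i < 4; i++)
        {
                TransactionId xid = xids[i];

                /* A zero page index means this XID is the first on its page,
                 * i.e. ExtendCLOG would have to extend a new clog page. */
                printf("xid %u -> page %u, byte %u, bit-index %u, extends page? %s\n",
                       (unsigned) xid,
                       (unsigned) TransactionIdToPage(xid),
                       (unsigned) TransactionIdToByte(xid),
                       (unsigned) TransactionIdToBIndex(xid),
                       TransactionIdToPgIndex(xid) == 0 ? "yes" : "no");
        }
        return 0;
}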

Check the database block size:

postgres@digoal-> pg_controldata |grep block
Database block size:                  8192
WAL block size:                       8192
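
block_size is also exposed as a read-only parameter, so the same value can be checked from SQL:

postgres=# show block_size;
 block_size
------------
 8192
(1 row)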

We can use stap (SystemTap) to trace whether pg_fsync is called. To observe backend processes actively flushing clog dirty pages, enlarge the checkpoint interval and keep the number of clog shared buffer pages small (recall that CLOGShmemBuffers caps it at 32). You will then see the backend processes flushing clog dirty pages themselves.

Size
CLOGShmemBuffers(void)
{
    return Min(32, Max(4, NBuffers / 512));
}

Functions to trace:

src/backend/access/transam/slru.c
SlruPhysicalWritePage
......
                SlruFileName(ctl, path, segno);
                fd = OpenTransientFile(path, O_RDWR | O_CREAT | PG_BINARY,
                                                           S_IRUSR | S_IWUSR);
......

src/backend/storage/file/fd.c
OpenTransientFile
pg_fsync(fd)

The stap script:

[root@digoal ~]# cat trc.stp
global f_start[999999]

probe process("/opt/pgsql/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c").call {
f_start[execname(), pid(), tid(), cpu()] = gettimeofday_ms()
printf("%s <- time:%d, pp:%s, par:%sn", thread_indent(-1), gettimeofday_ms(), pp(), $$parms$$)
# printf("%s -> time:%d, pp:%sn", thread_indent(1), f_start[execname(), pid(), tid(), cpu()], pp() )
}

probe process("/opt/pgsql/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c").return {
t=gettimeofday_ms()
a=execname()
b=cpu()
c=pid()
d=pp()
e=tid()
if (f_start[a,c,e,b]) {
printf("%s <- time:%d, pp:%s, par:%sn", thread_indent(-1), t - f_start[a,c,e,b], d, $return$$)
# printf("%s <- time:%d, pp:%sn", thread_indent(-1), t - f_start[a,c,e,b], d)
}
}

probe process("/opt/pgsql/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c").call {
f_start[execname(), pid(), tid(), cpu()] = gettimeofday_ms()
printf("%s <- time:%d, pp:%s, par:%sn", thread_indent(-1), gettimeofday_ms(), pp(), $$parms$$)
# printf("%s -> time:%d, pp:%sn", thread_indent(1), f_start[execname(), pid(), tid(), cpu()], pp() )
}

probe process("/opt/pgsql/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c").return {
t=gettimeofday_ms()
a=execname()
b=cpu()
c=pid()
d=pp()
e=tid()
if (f_start[a,c,e,b]) {
printf("%s <- time:%d, pp:%s, par:%sn", thread_indent(-1), t - f_start[a,c,e,b], d, $return$$)
# printf("%s <- time:%d, pp:%sn", thread_indent(-1), t - f_start[a,c,e,b], d)
}
}

probe process("/opt/pgsql/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c").call {
f_start[execname(), pid(), tid(), cpu()] = gettimeofday_ms()
printf("%s <- time:%d, pp:%s, par:%sn", thread_indent(-1), gettimeofday_ms(), pp(), $$parms$$)
# printf("%s -> time:%d, pp:%sn", thread_indent(1), f_start[execname(), pid(), tid(), cpu()], pp() )
}

probe process("/opt/pgsql/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c").return {
t=gettimeofday_ms()
a=execname()
b=cpu()
c=pid()
d=pp()
e=tid()
if (f_start[a,c,e,b]) {
printf("%s <- time:%d, pp:%s, par:%sn", thread_indent(-1), t - f_start[a,c,e,b], d, $return$$)
# printf("%s <- time:%d, pp:%sn", thread_indent(-1), t - f_start[a,c,e,b], d)
}
}

Start a pgbench that runs the txid_current() function, so each call allocates a new transaction ID:

postgres@digoal-> cat 7.sql
select txid_current();

About 32K requests per second:

postgres@digoal-> pgbench -M prepared -n -r -P 1 -f ./7.sql -c 1 -j 1 -T 100000
progress: 240.0 s, 31164.4 tps, lat 0.031 ms stddev 0.183
progress: 241.0 s, 33243.3 tps, lat 0.029 ms stddev 0.127
progress: 242.0 s, 32567.3 tps, lat 0.030 ms stddev 0.179
progress: 243.0 s, 33656.6 tps, lat 0.029 ms stddev 0.038
progress: 244.0 s, 33948.1 tps, lat 0.029 ms stddev 0.021
progress: 245.0 s, 32996.8 tps, lat 0.030 ms stddev 0.046
progress: 246.0 s, 34156.7 tps, lat 0.029 ms stddev 0.015
progress: 247.0 s, 33259.5 tps, lat 0.029 ms stddev 0.074
progress: 248.0 s, 32979.6 tps, lat 0.030 ms stddev 0.043
progress: 249.0 s, 32892.6 tps, lat 0.030 ms stddev 0.039
progress: 250.0 s, 33090.7 tps, lat 0.029 ms stddev 0.020
progress: 251.0 s, 33238.3 tps, lat 0.029 ms stddev 0.017
progress: 252.0 s, 32341.3 tps, lat 0.030 ms stddev 0.045
progress: 253.0 s, 31999.0 tps, lat 0.030 ms stddev 0.167
progress: 254.0 s, 33332.6 tps, lat 0.029 ms stddev 0.056
progress: 255.0 s, 30394.6 tps, lat 0.032 ms stddev 0.027
progress: 256.0 s, 31862.7 tps, lat 0.031 ms stddev 0.023
progress: 257.0 s, 31574.0 tps, lat 0.031 ms stddev 0.112

Attach to the backend process:

postgres@digoal-> ps -ewf|grep postgres
postgres 2921 1883 29 09:37 pts/1 00:00:05 pgbench -M prepared -n -r -P 1 -f ./7.sql -c 1 -j 1 -T 100000
postgres 2924 1841 66 09:37 ? 00:00:13 postgres: postgres postgres [local] SELECT

[root@digoal ~]# stap -vp 5 -DMAXSKIPPED=9999999 -DSTP_NO_OVERLOAD -DMAXTRYLOCK=100 ./trc.stp -x 2924 >./stap.log 2>&1

pg_clog-related trace records extracted from the log:
 0 postgres(2924): -> time:1441503927731, pp:process("/opt/pgsql9.4.4/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c:699").call, par:ctl={.shared=0x7f74a9fe39c0, .do_fsync='\001', .PagePrecedes=0x4b1960, .Dir="pg_clog"} pageno=12350 slotno=10 fdata=ERROR
31 postgres(2924): -> time:1441503927731, pp:process("/opt/pgsql9.4.4/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:1710").call, par:fileName="pg_clog/0181" fileFlags=66 fileMode=384
53 postgres(2924): <- time:0, pp:process("/opt/pgsql9.4.4/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:1710").return, par:14

102 postgres(2924): -> time:1441503927731, pp:process("/opt/pgsql9.4.4/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:315").call, par:fd=14
1096 postgres(2924): <- time:1, pp:process("/opt/pgsql9.4.4/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:315").return, par:0
1113 postgres(2924): <- time:1, pp:process("/opt/pgsql9.4.4/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c:699").return, par:'001'

1105302 postgres(2924): -> time:1441503928836, pp:process("/opt/pgsql9.4.4/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c:699").call, par:ctl={.shared=0x7f74a9fe39c0, .do_fsync='001', .PagePrecedes=0x4b1960, .Dir="pg_clog"} pageno=12351 slotno=11 fdata=ERROR
1105329 postgres(2924): -> time:1441503928836, pp:process("/opt/pgsql9.4.4/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:1710").call, par:fileName="pg_clog/0181" fileFlags=66 fileMode=384
1105348 postgres(2924): <- time:0, pp:process("/opt/pgsql9.4.4/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:1710").return, par:14
1105405 postgres(2924): -> time:1441503928836, pp:process("/opt/pgsql9.4.4/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:315").call, par:fd=14
1106440 postgres(2924): <- time:1, pp:process("/opt/pgsql9.4.4/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:315").return, par:0
1106452 postgres(2924): <- time:1, pp:process("/opt/pgsql9.4.4/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c:699").return, par:'001'

2087891 postgres(2924): -> time:1441503929819, pp:process("/opt/pgsql9.4.4/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c:699").call, par:ctl={.shared=0x7f74a9fe39c0, .do_fsync='001', .PagePrecedes=0x4b1960, .Dir="pg_clog"} pageno=12352 slotno=12 fdata=ERROR
2087917 postgres(2924): -> time:1441503929819, pp:process("/opt/pgsql9.4.4/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:1710").call, par:fileName="pg_clog/0182" fileFlags=66 fileMode=384
2087958 postgres(2924): <- time:0, pp:process("/opt/pgsql9.4.4/bin/postgres").function("OpenTransientFile@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:1710").return, par:14
2088013 postgres(2924): -> time:1441503929819, pp:process("/opt/pgsql9.4.4/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:315").call, par:fd=14
2089250 postgres(2924): <- time:1, pp:process("/opt/pgsql9.4.4/bin/postgres").function("pg_fsync@/opt/soft_bak/postgresql-9.4.4/src/backend/storage/file/fd.c:315").return, par:0
2089265 postgres(2924): <- time:1, pp:process("/opt/pgsql9.4.4/bin/postgres").function("SlruPhysicalWritePage@/opt/soft_bak/postgresql-9.4.4/src/backend/access/transam/slru.c:699").return, par:'001'

An fsync occurs roughly once per second:

postgres=# select 1441503928836-1441503927731;
 ?column?
----------
     1105
(1 row)

postgres=# select 1441503929819-1441503928836;
 ?column?
----------
      983
(1 row)

The pgbench output above shows roughly 32,000 transactions per second, which is exactly the number of transactions one clog page holds (the block size in this example is 8KB).
(Each transaction needs 2 bits, so each byte stores the status of 4 transactions.)
8192*4=32768
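
To tie the numbers together, an illustrative check in psql (using the roughly 33K tps observed above):

postgres=# select round(32768 / 33000.0, 2) as seconds_per_clog_page;
 seconds_per_clog_page
-----------------------
                  0.99
(1 row)

At this rate one clog page fills about every second, matching the fsync interval seen in the trace.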

If you want to observe the backend process not flushing clog dirty pages itself, make the checkpoint interval smaller (or issue CHECKPOINT manually) and enlarge the number of clog buffer pages, for example:

Size
CLOGShmemBuffers(void)
{
        return Min(1024, Max(4, NBuffers / 2));
}

With the same stap script, you will no longer observe the backend process actively flushing clog dirty pages.

Based on the analysis above, if you find backend processes frequently fsyncing pg_clog, a few optimizations are possible.
1. Every extension of a pg_clog file changes the file size, so even a pg_fdatasync from the backend process will also write the filesystem metadata journal (taking EXT4 as an example, assuming the data mount option is not writeback). That journal write is serialized across the whole filesystem and easily causes stalls.
So having the backend avoid the most recent page numbers when it picks a victim clog page helps somewhat (ideally, avoid pages in the most recent clog file altogether).
Another approach is to first call sync_file_range with SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER, which does not write metadata, and only then call pg_fsync; this reduces the time spent waiting for the data portion of the fsync (see the sketch after this list).
2. Preallocate pg_clog files. A pg_clog segment file is SLRU_PAGES_PER_SEGMENT (32) pages, i.e. 32 times BLCKSZ (256KB by default); the file could be preallocated at its full size instead of being extended page by page, so its size never changes.
3. Defer the backend process's fsync requests to the checkpoint.
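
A minimal Linux-only sketch of the idea in point 1. flush_clog_page() is a hypothetical helper, not PostgreSQL code; the flags are the real sync_file_range(2) flags:

/* Sketch: push page data to disk with sync_file_range() (no metadata
 * journaling), then fsync() for durability.  Hypothetical helper, not
 * PostgreSQL code. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

static int
flush_clog_page(int fd, off_t offset, size_t len, const char *buf)
{
        if (pwrite(fd, buf, len, offset) != (ssize_t) len)
                return -1;

        /* Start and wait for data writeback of just this range; unlike
         * fsync(), this does not touch filesystem metadata. */
        if (sync_file_range(fd, offset, len,
                            SYNC_FILE_RANGE_WAIT_BEFORE |
                            SYNC_FILE_RANGE_WRITE |
                            SYNC_FILE_RANGE_WAIT_AFTER) != 0)
                return -1;

        /* A final fsync() is still needed for metadata durability
         * (e.g. the new file size after an extension), but most of the
         * data wait has already been paid above. */
        return fsync(fd);
}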

[References]
https://wiki.postgresql.org/wiki/Hint_Bits
http://blog.163.com/digoal@126/blog/static/1638770402015840480734/
src/backend/access/transam/varsup.c
src/backend/access/transam/clog.c
src/backend/access/transam/slru.c
src/include/access/slru.h
src/backend/access/transam/subtrans.c