Write Combining

Modern CPUs employ lots of techniques to counteract the latency cost of going to main memory. These days CPUs can process hundreds of instructions in the time it takes to read or write data to the DRAM memory banks.


The major tool used to hide this latency is multiple layers of SRAM cache. In addition, SMP systems employ message passing protocols to achieve coherence between caches. Unfortunately CPUs are now so fast that even these caches cannot keep up at times. So to further hide this latency a number of less well known buffers are used.


This article explores “write combining store buffers” and how we can write code that uses them effectively.

CPU caches are effectively unchained hash maps where each bucket is typically 64 bytes in size. This is known as a “cache line”. The cache line is the effective unit of memory transfer. For example, an address A in main memory hashes to a given cache line C.


If a CPU needs to work with an address which hashes to a line that is not already in cache, then the existing line that matches that hash needs to be evicted so the new line can take its place. For example if we have two addresses which both map via the hashing algorithm to the same cache line, then the old one must make way for the new cache line.
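
To make this mapping concrete, here is a minimal sketch (my own illustration, not part of the original article) of how a simple direct-mapped cache with 64-byte lines could map an address to a bucket. The class name, the line count and the example addresses are assumptions chosen purely for illustration.

public final class CacheLineMapping
{
    private static final int LINE_SIZE = 64;   // bytes per cache line
    private static final int LINE_COUNT = 64;  // illustrative number of lines in the cache

    // The low 6 bits of the address are the offset within the line;
    // the next bits select which line (bucket) the address maps to.
    static int lineIndex(final long address)
    {
        return (int)((address / LINE_SIZE) % LINE_COUNT);
    }

    public static void main(final String[] args)
    {
        long a = 0x1040L;
        long b = a + (LINE_SIZE * LINE_COUNT); // a different address that maps to the same bucket
        System.out.println(lineIndex(a) + " == " + lineIndex(b)); // collision: the old line must be evicted
    }
}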


When a CPU executes a store operation it will try to write the data to the L1 cache nearest to the CPU. If a cache miss occurs at this stage, the CPU goes out to the next layer of cache. At this point, on Intel and many other CPUs, a technique known as “write combining” comes into play.


While the request for ownership of the L2 cache line is outstanding, the data to be stored is written to one of a number of cache-line-sized store buffers on the processor itself. These on-chip buffers allow the CPU to continue processing instructions while the cache sub-system gets ready to receive and process the data. The biggest advantage comes when the data is not present in any of the other cache layers.


These buffers become very interesting when subsequent writes happen to require the same cache line. The subsequent writes can be combined into the buffer before it is committed to the L2 cache. These 64-byte buffers maintain a 64-bit field with one bit per byte; the corresponding bit is set for each byte that is updated, indicating which data is valid when the buffer is transferred to the outer caches.
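
As a rough mental model (again my own sketch, not hardware documentation), you can picture such a buffer as 64 data bytes plus a 64-bit valid mask with one bit per byte. The class and method names below are hypothetical.

public final class WriteCombiningBufferModel
{
    private final byte[] data = new byte[64]; // one cache line's worth of pending stores
    private long validMask;                   // bit i set -> data[i] holds a byte waiting to be transferred

    public void store(final int offsetInLine, final byte value)
    {
        data[offsetInLine] = value;
        validMask |= 1L << offsetInLine;      // subsequent writes to the same line combine into this buffer
    }

    public boolean isFull()
    {
        return validMask == -1L;              // all 64 bytes written: the whole line can be transferred at once
    }

    public static void main(final String[] args)
    {
        WriteCombiningBufferModel buffer = new WriteCombiningBufferModel();
        for (int i = 0; i < 64; i++)
        {
            buffer.store(i, (byte)i);
        }
        System.out.println("line fully combined: " + buffer.isFull());
    }
}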


Hang on, I hear you say: what happens if the program wants to read some of the data that has been written to a buffer? Well, our hardware friends have thought of that, and they will snoop the buffers before they read the caches.


What does all this mean for our programs?

If we can fill these buffers before they are transferred to the outer caches then we will greatly improve the effective use of the transfer bus at every level. How do we do this? Well, programs spend most of their time in loops doing work.


There are a limited number of these buffers, and they differ by CPU model. For example on an Intel CPU you are only guaranteed to get 4 of them at one time. What this means is that within a loop you should not write to more than 4 distinct memory locations at one time or you will not benefit from the write combining effect.


What does this look like in code?


import static java.lang.System.out;

public final class WriteCombining
{
    private static final int ITERATIONS = Integer.MAX_VALUE;
    private static final int ITEMS = 1 << 24;
    private static final int MASK = ITEMS - 1;

    private static final byte[] arrayA = new byte[ITEMS];
    private static final byte[] arrayB = new byte[ITEMS];
    private static final byte[] arrayC = new byte[ITEMS];
    private static final byte[] arrayD = new byte[ITEMS];
    private static final byte[] arrayE = new byte[ITEMS];
    private static final byte[] arrayF = new byte[ITEMS];

    public static void main(final String[] args)
    {
        for (int i = 1; i <= 3; i++)
        {
            out.println(i + " SingleLoop duration (ns) = " + runCaseOne());
            out.println(i + " SplitLoop  duration (ns) = " + runCaseTwo());
        }

        int result = arrayA[1] + arrayB[2] + arrayC[3] +
                     arrayD[4] + arrayE[5] + arrayF[6];
        out.println("result = " + result);
    }

    // Case one: a single loop writing to 6 distinct arrays (6 memory streams) per iteration
    public static long runCaseOne()
    {
        long start = System.nanoTime();

        int i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayA[slot] = b;
            arrayB[slot] = b;
            arrayC[slot] = b;
            arrayD[slot] = b;
            arrayE[slot] = b;
            arrayF[slot] = b;
        }

        return System.nanoTime() - start;
    }

    // Case two: the same stores split into two loops, each writing to only 3 distinct arrays
    public static long runCaseTwo()
    {
        long start = System.nanoTime();

        int i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayA[slot] = b;
            arrayB[slot] = b;
            arrayC[slot] = b;
        }

        i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayD[slot] = b;
            arrayE[slot] = b;
            arrayF[slot] = b;
        }

        return System.nanoTime() - start;
    }
}

This program on my Windows 7 64-bit Intel Core i7 860 @ 2.8 GHz system produces the following output:


1 SingleLoop duration (ns) = 14019753545
1 SplitLoop  duration (ns) = 8972368661
2 SingleLoop duration (ns) = 14162455066
2 SplitLoop  duration (ns) = 8887610558
3 SingleLoop duration (ns) = 13800914725
3 SplitLoop  duration (ns) = 7271752889

To spell it out: if we write to 6 array locations (memory addresses) inside one loop, the program takes significantly longer than if we split the work up and write first to 3 array locations, then to the other 3 locations, sequentially.

By splitting the loop we execute more loop iterations, yet the program completes in much less time! Welcome to the magic of “write combining”. By using our knowledge of the CPU architecture to fill those buffers properly we can use the underlying hardware to accelerate our code by roughly a factor of two.

Don’t forget that with hyper-threading you can have 2 threads in competition for these buffers on the same core.


This article is reproduced from the concurrent programming site ifeve.com.
