Eager thick vs Lazy thick disk performance

Introduction:


Write performance of the VMware Eager Zeroed Thick disk versus the Lazy Zeroed Thick disk.

What is the potential write performance difference between the VMware virtual disk types: Thick Lazy Zeroed, Thick Eager Zeroed and Thin provisioned? This has been discussed for many years and there are many opinions, both in terms of test versus real-life write behavior and in terms of test methods. There are also important factors such as storage efficiency, migration times and the like; however, in this article I will try to make the potential “first write” impact easier to evaluate.

Before the virtual machine guest operating system can actually use a virtual disk, some preparations have to be made by the ESXi host. For each writable part of a virtual disk, two main tasks have to be done: the space has to be allocated on the datastore, and the corresponding disk sectors on the storage array have to be safely cleared (“zeroed”) of any previous content.

In short, this is done in the following way:

Thin: Allocate and zero on first write
Thick Lazy: Allocate in advance and zero on first write
Thick Eager: Allocate and zero in advance
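
For reference, all three types can be created directly with vmkfstools on the ESXi host. A minimal sketch follows; the datastore path and file names are assumptions, not taken from the original test:

```python
# Minimal sketch: create the three disk types with vmkfstools on an
# ESXi host. The datastore path and .vmdk file names are assumptions.
import subprocess

DISK_FORMATS = {
    "thin.vmdk": "thin",               # allocate and zero on first write
    "lazy.vmdk": "zeroedthick",        # allocate in advance, zero on first write
    "eager.vmdk": "eagerzeroedthick",  # allocate and zero in advance
}

for vmdk, fmt in DISK_FORMATS.items():
    subprocess.run(
        ["vmkfstools", "-c", "40G", "-d", fmt,
         f"/vmfs/volumes/datastore1/testvm/{vmdk}"],
        check=True,
    )
```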

There are some published performance tests between these three disk types, often using the standard tool IOmeter. There is, however, a potential flaw in these tests: before IOmeter starts the actual test it creates a file (iobw.tst) and writes data to every part of that file, which at the same time causes ESXi to zero out those blocks on the storage array. This means that it is impossible to use IOmeter output data to spot any write performance differences between the three VMware virtual disk types, since the potential difference in write performance will already have been nullified by the time the IOmeter test actually begins.
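
A minimal sketch of that flaw, using a plain Python file write in place of IOmeter (the file path and size are assumptions): the setup pass below already touches every block, so any zeroing cost lands there rather than in the timed pass.

```python
# Sketch: why a pre-written test file hides the zeroing cost.
# IOmeter fully writes its test file (iobw.tst) during setup, so the
# timed phase only re-writes blocks that ESXi has already zeroed.
import time

PATH = "E:\\iobw.tst"   # hypothetical file on the virtual disk under test
SIZE = 1024**3          # 1 GB test file (assumption)
BLOCK = 1024 * 1024     # 1 MiB per write

def write_pass(path, mode, pattern):
    buf = pattern * BLOCK
    start = time.time()
    with open(path, mode, buffering=0) as f:
        for _ in range(SIZE // BLOCK):
            f.write(buf)
    return time.time() - start

setup = write_pass(PATH, "wb", b"\x00")   # setup: first-writes, zeroing happens here
timed = write_pass(PATH, "r+b", b"\xAA")  # "benchmark": re-writes only
print(f"setup pass (absorbs the zeroing): {setup:.1f} s")
print(f"timed pass (what the benchmark reports): {timed:.1f} s")
```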

Since the difference lies only in the very first write from the virtual machine to each virtual disk sector, a way to simulate it is to force a massive amount of writes over the whole disk area and note the time differences. This is of course not how most applications work, in the sense that it is uncommon to do all writes in one continuous stream; instead the “first-writes” with ESXi zeroing are likely to be spread over a longer period of time. But sooner or later each sector used by the guest operating system has to be zeroed.


A way to generate large amounts of writes is to use the standard Windows format tool which, despite some popular belief, actually erases the whole disk area when a “full” (non-quick) format is selected. In real life there is not much interest in how fast a partition format is in itself; in this test the format tool is used just to create a massive amount of “first-writes”.


This test case used a VM with Windows Server 2012 R2 that was given three new virtual hard disks of 40 GB each: one Eager Zeroed Thick, one Lazy Zeroed Thick and one Thin. Each disk was then formatted with NTFS, default allocation unit size, no compression, but with the non-quick (full) option.
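
A sketch of how the format runs can be timed from inside the guest. This uses PowerShell's Format-Volume cmdlet with its -Full switch rather than the interactive format dialog; the drive letters are assumptions:

```python
# Sketch: time a full (non-quick) NTFS format of each test disk from
# inside the guest. Drive letters E:, F:, G: are assumptions for the
# Eager, Lazy and Thin disks; requires an elevated prompt.
import subprocess
import time

for letter, disk_type in [("E", "Eager"), ("F", "Lazy"), ("G", "Thin")]:
    start = time.time()
    subprocess.run(
        ["powershell", "-Command",
         f"Format-Volume -DriveLetter {letter} -FileSystem NTFS -Full"],
        check=True,
    )
    print(f"{disk_type}: full format took {time.time() - start:.0f} s")
```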

[Figure: ESXTOP during a full format of an Eager Zeroed Thick disk]

A first observation was made in ESXTOP, looking at the ratio between the writes that the virtual machine actually commits and the number of writes being sent from ESXi to the LUN.

Above we can see ESXTOP during a full format of an Eager Zeroed Thick disk. The important point here is that the numbers are very close: the writes being done at the LUN are only the writes the VM itself issues, i.e. there are no extra ESXi-introduced writes, since the zeroing was already done in advance.

[Figure: ESXTOP during a full format of a Lazy Zeroed Thick disk]

Above, a Lazy Zeroed Thick disk is being full-formatted from inside the VM.

What can be noticed is that the number of write IOs being sent from ESXi to the LUN is much higher than the number of write IOs coming from the virtual machine. This is the actual zeroing taking place “in real time”, and it makes VM write performance lower than with the Eager version when new areas of the virtual disk are accessed for the first time.


The actual time results for a full format of a 40 GB virtual disk were:

Eager Zeroed Thick Disk: 537 seconds
Lazy Zeroed Thick Disk: 667 seconds
Thin Disk: 784 seconds

The Eager Zeroed Thick Disk was almost 25 % faster in first-write performance compared to the Lazy Zeroed.

The Eager Zeroed Thick Disk was almost 45 % faster in first-write performance compared to the Thin Disk.
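
For clarity, the percentages above are computed relative to the Eager disk's time. A quick check:

```python
# Sketch: derive the quoted percentages from the measured format times.
times = {"eager": 537, "lazy": 667, "thin": 784}  # seconds, from the test above
for name in ("lazy", "thin"):
    extra = (times[name] - times["eager"]) / times["eager"] * 100
    print(f"{name}: {extra:.0f}% more time than eager")
# lazy: 24% more time than eager  -> "almost 25 %"
# thin: 46% more time than eager  -> "almost 45 %"
```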

This is obvious when doing a full format, which forces the VM to write to all sectors. In a real environment the “first-writes” will naturally be spread over a longer period of time, but sooner or later the zeroing hit will take place for each part of the disk, and it might or might not be noticeable to the user. For a typical virtual machine that does the majority of its “first-writes” at OS installation this is likely to be of lesser interest, but for VMs with databases, log files or other write-intensive applications the impact can be higher.

[Figure: Format times for three consecutive full formats of a 5 GB VMDK]

One important thing to notice is that after the first write is done, the write performance is the same for all three disk types. This can be demonstrated by doing several full formats of the same VMDK disk. On the first format the zeroing is done (for the Lazy and Thin types), but once the whole disk is zeroed the write performance is identical.

Above, a 5 GB VMDK disk was formatted three times. The Eager disk was the fastest on the first format, since its zeroing is done before the disk is even visible to the VM, while the Lazy and Thin disks were slower on the initial format run, as expected.

However, after the zeroing from ESXi is done the write performance is identical, which is visible above where all three disks perform the same on format runs 2 and 3 of the same disk.
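
A sketch of that repeated-format check, again assuming PowerShell's Format-Volume and a hypothetical drive letter:

```python
# Sketch: full-format the same 5 GB volume three times; only the first
# run should pay the zeroing cost on the Lazy and Thin disks.
# Drive letter H: is an assumption.
import subprocess
import time

for run in (1, 2, 3):
    start = time.time()
    subprocess.run(
        ["powershell", "-Command",
         "Format-Volume -DriveLetter H -FileSystem NTFS -Full"],
        check=True,
    )
    print(f"format run {run}: {time.time() - start:.0f} s")
```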

The Eager Zeroed Thick disk is faster on the first write IO to each new part of the disk, where Lazy and Thin are slower; however, after the disk blocks are zeroed, the write performance of all three types is the same.

This article was reproduced from the Xuehaiwuya blog on 51CTO. Original link: http://blog.51cto.com/549687/1924158. Please contact the original author if you wish to republish it.
