Prior to PVS 7, the most common write cache methods were Cache on device HDD and the much better-performing Cache in device RAM. Both of these legacy cache methods had their positives and negatives:
Cache on device HDD:
- Performance is at best equal to that of the underlying storage. For physical targets, this could be local physical disks; more commonly, for VMs, it was whatever storage contained the virtual disk.
- The amount of storage available for caching was equal to the amount of free space, less any persistent data (event logs, print spooler, etc.)
Cache in device RAM:
- Amazing performance – upwards of 10K IOPS or so per VM
- The amount of cache space is limited to the size of the RAM allocated to cache – that RAM is hardware reserved, so it is not available to the machine
Both of these legacy cache methods share one major problem: once space is used, it cannot be reclaimed. For example, as users log on, all of their profiles are written to the C:\ drive – which, thanks to the magic of the PVS filter driver, actually means they are written to the write cache. If 100 users log on with 20 MB profiles each, that's approximately 2 GB used – and when those 100 users log off, that 2 GB of space is still taken up. The cache doesn't shrink.
For Cache on device HDD, that's not too big of a deal because there is usually 10-20 GB or so of HDD write cache. When it does fill up, though, you will start corrupting profiles and seeing other 'bad' things, because that same filter driver lies to Windows during disk write operations and says "Oh sure… that's been written successfully [no it hasn't]".
When you fill up the cache in RAM, the system simply halts [BSOD].
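The grow-only accounting described above is easy to model. Here's a toy sketch in Python (purely illustrative – the class and sizes are made up for this post; no real PVS write cache is involved):

```python
# Toy model of the legacy PVS write cache accounting: writes consume
# space, deletes never give it back. Sizes are in MB; no real disk I/O.

class LegacyWriteCache:
    """Grow-only cache: once space is used, it cannot be reclaimed."""

    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.used_mb = 0

    def write(self, size_mb):
        if self.used_mb + size_mb > self.capacity_mb:
            # Cache full: corrupted profiles (HDD) or a halted system (RAM).
            raise RuntimeError("write cache is full")
        self.used_mb += size_mb

    def delete(self, size_mb):
        # Deleting data frees space inside the guest's file system,
        # but the cache file never shrinks.
        pass

cache = LegacyWriteCache(capacity_mb=15 * 1024)  # ~15 GB HDD write cache
for _ in range(100):       # 100 users log on...
    cache.write(20)        # ...each writing a ~20 MB profile
for _ in range(100):       # ...then all 100 log off
    cache.delete(20)
print(cache.used_mb)       # still 2000 MB - the space is never reclaimed
```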
Enter Provisioning Services 7 – Cache in device RAM with overflow to disk.
This caching method is essentially the best of both worlds – the performance of RAM with the storage space of HDD. As an added bonus, filling the cache doesn't halt the system, AND space on disk is reclaimed when objects are deleted. Plus, this is NOT hardware-reserved memory, so when it is not in use, it is available to the system.
Let’s take a look. The ‘overflow’ file is a vdiskdif.vhdx file – very different from the legacy .vdiskcache file.
After booting, you can see that this file is about 600 MB in size – so let's copy some big files to the C: drive and see what happens.
We're copying a 1.4 GB file to the C: drive – notice that the copy has already completed about 1.2 GB worth, but the vdiskdif file is only about 200 MB larger.
After a few moments, the vdiskdif file clearly shows the growth from the 1.4 GB file just copied to the C: drive. The file continues to grow for a while even after the copy completes, because data is first written to the RAM cache and then flushed out to disk – you can watch the committed memory grow and then shrink during and after the file copy.
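If you want to watch the overflow file grow yourself, a few lines of Python will poll its size. The path below is an assumption – point it at wherever your write-cache disk exposes vdiskdif.vhdx:

```python
# Poll a file's size so you can watch vdiskdif.vhdx grow during a copy.
import os
import time

CACHE_FILE = r"D:\vdiskdif.vhdx"  # hypothetical path; adjust for your cache disk

def watch(path, interval_s=2.0, samples=10):
    """Print the file's size in MB every interval_s seconds; return the samples."""
    sizes = []
    for _ in range(samples):
        size_mb = os.path.getsize(path) / (1024 * 1024)
        sizes.append(size_mb)
        print(f"{time.strftime('%H:%M:%S')}  {size_mb:,.1f} MB")
        time.sleep(interval_s)
    return sizes

if os.path.exists(CACHE_FILE):
    watch(CACHE_FILE)
```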
Now the real test – let’s delete that 1.4GB file and copy some more files. In the legacy cache world, we would expect the cache file to grow by the size of every file written to the C: drive.
But thanks to the magic of this amazing new caching method, that is not the case. Space in the cache file is reclaimed when the original 1.4 GB file is deleted, and it is simply overwritten by the 1.1 GB of new stuff – thus not growing the cache file at all!
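That reclaim behavior can be sketched with the same kind of toy accounting (again, purely illustrative – the class below is made up for this post and just tracks sizes in MB):

```python
# Toy model of the new cache's reclaim behavior: deleted space is
# reused for new writes before the cache file grows. Sizes in MB.

class ReclaimingCache:
    """Models RAM-with-overflow: deleted blocks can be overwritten in place."""

    def __init__(self):
        self.used_mb = 0   # size of the cache file
        self.free_mb = 0   # space reclaimed from deleted objects

    def write(self, size_mb):
        reused = min(size_mb, self.free_mb)  # overwrite reclaimed space first
        self.free_mb -= reused
        self.used_mb += size_mb - reused     # only the remainder grows the file

    def delete(self, size_mb):
        self.free_mb += size_mb              # space becomes reusable

cache = ReclaimingCache()
cache.write(1400)     # copy the 1.4 GB file
cache.delete(1400)    # delete it
cache.write(1100)     # copy ~1.1 GB of new stuff
print(cache.used_mb)  # still 1400 MB - the cache file did not grow
```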
There is a ton of data available to help with sizing the cache, along with more details on how it works, here:
And don’t forget about the performance benefit – cache in RAM with overflow is truly the only option for PVS deployments.