Prior to PVS 7, the most common write cache methods were Cache on device HDD and the much better-performing Cache in device RAM. Both of these legacy cache methods had some positives and negatives:
Cache on device HDD:
- Performance less than or equal to the underlying storage. For physical targets, this could be physical disks, but more commonly, for VMs, this was whatever storage contained the virtual disk.
- The amount of storage available for caching was equal to the amount of free space, less any persistent data (event logs, print spooler, etc.).
Cache in device RAM:
- Amazing performance – upwards of 10K IOPS or so, per VM
- The amount of cache space is limited to the size of RAM allocated to the cache – that RAM is hardware-reserved, so it is not available to the machine.
Both of these legacy cache methods have one major problem: once space is used, it cannot be reclaimed. For example, as users log on, all of their profiles are written to the C:\ drive – and thanks to the magic of the PVS filter driver, those writes actually land in the write cache. If 100 users log on with 20 MB profiles each, that's approximately 2 GB used – and when those 100 users log off, that 2 GB of space stays taken up; the cache doesn't shrink.
For cache to HDD, that's not too big of a deal because there is usually 10-20 GB or so of HDD write cache. When it fills up, though, you will start corrupting profiles and seeing other 'bad' things, because that same filter driver lies to Windows during disk write operations and says "Oh sure, that's been written successfully" [no it hasn't].
When you fill up the cache in RAM, the system simply halts [BSOD].
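To put rough numbers on that, here is a quick back-of-the-envelope sketch of how legacy cache consumption piles up. The user count and profile size are the illustrative figures from above; the cycle count is a placeholder, not a measurement from any real environment.

```python
# Back-of-the-envelope math for legacy write cache consumption.
# All figures are illustrative placeholders; substitute your own.

users = 100
profile_mb = 20        # average profile written at each logon
logon_cycles = 5       # logon/logoff cycles since the last reboot

# Legacy cache methods never reclaim space, so every cycle adds
# to the running total until the target reboots.
consumed_gb = users * profile_mb * logon_cycles / 1024
print(f"Cache consumed after {logon_cycles} cycles: {consumed_gb:.1f} GB")
# -> Cache consumed after 5 cycles: 9.8 GB
```

With a typical 10-20 GB cache drive, profile churn alone can exhaust the cache between reboots.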
Enter Provisioning Services 7 – Cache in device RAM with overflow to disk.
This caching method is essentially the best of both worlds – the performance of RAM with the storage space of HDD. As an added bonus, you don't halt the system when the RAM cache fills, AND the space on disk is reclaimed when objects are deleted. Plus, this is NOT hardware-reserved memory, so when not in use, it is available to the system.
Let’s take a look. The ‘overflow’ file is a vdiskdif.vhdx file – very different from the legacy .vdiskcache file.
After booting, you can see that this file is about 600 MB in size – so let's copy some big files to the C: drive and see what happens.
We’re copying a 1.4 GB file to the C: drive – notice that the copy has already completed about 1.2 GB worth, but the vdiskdif file is only about 200 MB larger.
After a few moments, the vdiskdif file clearly shows growth from the 1.4 GB file just copied to the C: drive. The file keeps growing for a bit after the copy completes because data is first written to the RAM cache and then flushed out to disk – and you can watch the committed memory grow and then shrink during and after the file copy.
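If you want to watch this happen without staring at Explorer, here's a minimal Python sketch that polls the overflow file's size once a second. The drive letter is an assumption – vdiskdif.vhdx lives on whatever volume you configured as the write cache drive – and committed memory is easiest to watch in Task Manager alongside this.

```python
import os
import time

# Path to the overflow file; adjust the drive letter to wherever
# your write cache volume is mounted (D: is just an assumption).
CACHE_FILE = r"D:\vdiskdif.vhdx"

prev_mb = os.path.getsize(CACHE_FILE) / (1024 * 1024)
print(f"vdiskdif.vhdx starting at {prev_mb:,.0f} MB")

# Poll once a second so you can watch the RAM cache flush to disk
# during and after a large file copy.
while True:
    time.sleep(1)
    size_mb = os.path.getsize(CACHE_FILE) / (1024 * 1024)
    if size_mb != prev_mb:
        print(f"vdiskdif.vhdx: {size_mb:,.0f} MB ({size_mb - prev_mb:+,.0f} MB)")
        prev_mb = size_mb
```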
Now the real test – let’s delete that 1.4GB file and copy some more files. In the legacy cache world, we would expect the cache file to grow by the size of every file written to the C: drive.
But thanks to the magic of this amazing new caching method, that is not the case. Space in the cache file is reclaimed when the original 1.4GB file is deleted and is overwritten by 1.1GB of new stuff – thus not growing the cache file at all!
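Here's a rough sketch that automates the same copy/delete/copy experiment. The source paths and cache drive letter are placeholders, and the 60-second settle time is just a guess at how long the RAM cache takes to flush – tune both for your environment.

```python
import os
import shutil
import time

CACHE_FILE = r"D:\vdiskdif.vhdx"                 # placeholder path
SOURCE_A = r"\\fileserver\share\big-1.4gb.bin"   # any ~1.4 GB file
SOURCE_B = r"\\fileserver\share\big-1.1gb.bin"   # any ~1.1 GB file
TARGET_DIR = r"C:\temp"

def cache_mb():
    return os.path.getsize(CACHE_FILE) / (1024 * 1024)

os.makedirs(TARGET_DIR, exist_ok=True)
print(f"Baseline:              {cache_mb():,.0f} MB")

first_copy = shutil.copy(SOURCE_A, TARGET_DIR)
time.sleep(60)                     # let the RAM cache flush to disk
print(f"After 1.4 GB copy:     {cache_mb():,.0f} MB")

os.remove(first_copy)              # delete the first file...
shutil.copy(SOURCE_B, TARGET_DIR)  # ...then write 1.1 GB of new data
time.sleep(60)
# If deleted blocks are reclaimed, this should be roughly unchanged.
print(f"After delete + 1.1 GB: {cache_mb():,.0f} MB")
```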
There is a ton of data available to help with sizing the cache, along with more details on how it works, here:
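Since the linked sizing data isn't reproduced here, a hedged back-of-the-envelope sketch: the per-session write figure below is a placeholder you'd replace with numbers from your own monitoring, and the RAM figure stands in for whatever RAM cache size you configure on the vDisk.

```python
# Rough sizing sketch for RAM cache with overflow. The per-session
# write figure is an assumption -- measure your own workload.

sessions = 50
write_mb_per_session = 300     # assumed average writes per session
ram_cache_mb = 4096            # RAM cache size configured on the vDisk

total_writes_mb = sessions * write_mb_per_session
overflow_mb = max(0, total_writes_mb - ram_cache_mb)

print(f"Expected writes: {total_writes_mb / 1024:.1f} GB")
print(f"Held in RAM:     {min(total_writes_mb, ram_cache_mb) / 1024:.1f} GB")
print(f"Spills to disk:  {overflow_mb / 1024:.1f} GB")
```

Whatever the RAM cache can't hold spills to the vdiskdif.vhdx file, so the cache drive needs to be sized for the overflow, not the total.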
And don’t forget about the performance benefit – cache in RAM with overflow is truly the only option for PVS deployments.
Hi Jacob, just came across this article and was interested in your description of how the cache drive works (ignoring the RAM option for now). I was under the impression that the cache file maps data blocks on the “real” filesystem to blocks in the cache file and therefore would re-use deleted blocks. So whilst I wouldn’t expect it to shrink when data is deleted, I also wouldn’t expect it to grow when those same disk blocks are subsequently re-written to. So provided my cache disk is correctly sized to match the “free” space on my primary disk, I should “never” run out of cache storage. Your article suggests that my assumption is incorrect. I’m wondering – do you have a technical reference that backs up your assertion of how the cache drive works?
Nick,
That is exactly what the post says: when using cache to RAM with overflow, the cache file will re-use deleted blocks. Note that this is not the case with legacy cache to hard disk.
Hi Nick, I know this is an older post, but I think it should still be valid.
But I’m not seeing the same behavior when I run the same test above. I’m using cache in RAM with overflow on hard disk on PVS 7.15.9. I copy a couple of large files from the network to the local drive of the XenApp target and see the write cache increase accordingly. Then I delete the two files and copy another large file, but the write cache continues to grow. Is there some way to configure this? I’m not sure why we’re getting different results.