
“My VAAI is Better Than Yours” – The file side of things

I have to admit it: I stole, or rather “borrowed”, part of this title from a blog post by a colleague of mine, Erik Zandboer. He just published a post on the mindset behind VAAI and on what the actual effect is, both on the array itself and on your vSphere infrastructure.

VAAI was already available in vSphere 4.1, and the switch to vSphere 5 introduced some new features, which means that as of this release we have the following situation:

Block:                     File:
HW accelerated Zeroing     NFS – Full Copy
HW accelerated Copy        NFS – Extended Statistics
HW accelerated Locking     NFS – Space Reservation

Some folks will say that I left out Thin Provision Stun, which is true. While it does help to resolve some issues, I don’t really view it as a hardware offload, and hardware offloads are what I’m trying to focus on here.

I took the hardware in our lab – an EMC VNX5300 – for a spin in our vSphere 5 setup to show the same thing Erik showed in his blog, but this time showing off some of the File/NFS accelerations.

To get the VNX to actually support NAS VAAI offloading, and to get the results you expect, you need to meet the following prerequisites:

  • vSphere 5 – You need vSphere 5 installed with an Enterprise or Enterprise Plus license
  • VNX OE for File 7.0.35.x – Your VNX Operating Environment for File needs to be at version 7.0.35.x or newer
  • NFSv3 – The offloads only work on NFSv3-based datastores
  • The vSphere NFS VAAI offload plugin which is referenced here – see the install sketch right after this list
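
A quick word on that last item: the plugin install itself is a standard VIB installation. As a rough sketch – the bundle path below is just a placeholder for whatever EMC actually ships, not the real file name – it looks something like this:

esxcli software vib install -d /tmp/EMCNasPlugin.zip

followed by a reboot of the host so the plugin gets loaded.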

If all those prerequisites are met, you should be able to go into your vSphere Client and see Hardware Acceleration listed as “Supported” for the NFS datastore.

You can also enable SSH for your ESXi host (go to the individual host, click on the “Configuration” tab, select “Security Profile”, and start the SSH service) and check the support from the command line. For block devices you can enter the following command:

esxcli storage core device vaai status get

and get back a result that shows you the NAA ID, the VAAI plugin name, and the primitives with their support state.
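
On a block device the result looks roughly like the snippet below – treat it as an illustration rather than a literal capture from my lab, since the NAA ID is shortened here and the plugin name and support states depend on your array and its firmware:

naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx
   VAAI Plugin Name: VMW_VAAIP_CX
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: unsupported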
Using the following command:

esxcli storage core device list

you get a similar output, but again, this only works for block devices and won’t really help you when checking the support for NFS. I haven’t found any way so far to get a reliable statement back via SSH, but I’ll keep looking and update this post if I find something.
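
One thing you can at least verify from the shell is that the NAS offload plugin itself made it onto the host. Assuming the EMC plugin is delivered as a VIB (as it was in my setup), a simple

esxcli software vib list | grep -i nas

should list it. If the plugin shows up there but the datastore still reports “Not supported”, the cause is more likely on the array or version side of the prerequisites above.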

In the case of the VNX, we can actually check on the array itself to see if we are using the primitives, so I’m showing you the output from the array itself, captured with the following command on the VNX (-type accu accumulates the counters, and -i 1 samples once per second):

server_stats server_2 -monitor nfs.v3.vstorage -type accu -i 1
First off, I went back into my ESXi host and changed into the NFSv3 datastore that was hosting my virtual machine, in this case a Windows Server 2008 machine running an SAP Enterprise Portal, and I used vmkfstools to create a clone:

vmkfstools -i GI-C-SAP-EPBW.vmdk CLONE-GI-C-SAP-EPBW.vmdk

and I set off a snap using a similar command, this time with a different target file:

vmkfstools -i GI-C-SAP-EPBW.vmdk SNAP-GI-C-SAP-EPBW.vmdk

All the while, I had the VNX command that I posted before running in a different window. The output from the VNX showed that we are actually using the VAAI NFS offloading functions:

server_2    NFS VAAI op          VAAI Op   VAAI Op       VAAI Op     VAAI Op
Timestamp                        Calls     Total uSecs   Max uSecs   Average uSec/Op
09:07:14
09:07:15
09:07:16
09:07:17
09:07:18    vaaiFastClone        1         0             0           0
            vaaiVxAttrs          3         0             1           0
            vaaiRegister         5         0             0           0
09:07:19
.......
09:08:27    vaaiOffloadStatus    1         0             0           0
            vaaiVxAttrs          7         1             1           0
            vaaiRegister         10        0             0           0
09:08:28
09:08:29
09:08:30
09:08:31
09:08:32    vaaiOffloadStatus    2         0             0           0

server_2    NFS VAAI op          VAAI Op   VAAI Op       VAAI Op     VAAI Op
Summary                          Calls     Total uSecs   Max uSecs   Average uSec/Op
Minimum     vaaiFullClone        0         0             83308       -
            vaaiFastClone        0         0             0           0
            vaaiOffloadStatus    0         0             0           0
            vaaiOffloadAbort     0         0             0           -
            vaaiVxAttrs          0         0             1           0
            vaaiReserveSpace     0         0             0           -
            vaaiRegister         0         0             0           0
Average     vaaiFullClone        0         0             83308       -
            vaaiFastClone        1         0             0           0
            vaaiOffloadStatus    0         0             0           0
            vaaiOffloadAbort     0         0             0           -
            vaaiVxAttrs          3         0             1           0
            vaaiReserveSpace     0         0             0           -
            vaaiRegister         5         0             0           0
Maximum     vaaiFullClone        0         0             83308       -
            vaaiFastClone        1         0             0           0
            vaaiOffloadStatus    2         0             0           0
            vaaiOffloadAbort     0         0             0           -
            vaaiVxAttrs          7         1             1           0
            vaaiReserveSpace     0         0             0           -
            vaaiRegister         10        0             0           0

Once the files are created, use

vmkfstools --extendedstat GI-C-SAP-EPBW.vmdk

on the source file, or on the snap or clone, to display the extended statistics. “Capacity bytes” shows the allocated space for the virtual disk; “Used bytes” shows the blocks used for the virtual disk, which in the case of our snapshot means the fast clone and its parent; “Unshared bytes” shows the usage of the actual fast clone itself, without the parent.
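
To give you an idea of the shape of that output (the numbers below are illustrative, not the actual values from my lab):

Capacity bytes: 53687091200
Used bytes: 21474836480
Unshared bytes: 1073741824

Comparing “Used bytes” and “Unshared bytes” between the parent and the fast clone is an easy way to see how little space the snap actually consumes on its own.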

I should point out that the offload did speed up my full clone operation, but “only” in the range of 20%. That may not sound like a great deal, but both esxtop and the vSphere Client performance graphs showed that the ESXi server was busy doing what it is supposed to do: virtualizing my resources! And that’s the most important thing, isn’t it?
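
If you want to watch that for yourself, kick off the clone and keep esxtop open in a second SSH session. Since NFS traffic runs over a VMkernel port, the network view is the place to look (this is my own quick-and-dirty check, nothing official):

esxtop (then press ‘n’ for the network view)

With the offload doing its job, the MbTX/s and MbRX/s values on the vmkernel NIC carrying your NFS traffic should stay close to idle, because the array is doing the actual data movement.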