Clustering, GestaltIT, Nutanix, Storage, VMware, vSphere

Nutanix – What do you mean: “You are not a storage company”…?

Image copyright of the Davis Museum
“You are a black guy, you must be great at dancing and basketball”. “You’re a blonde? Let me explain that joke to you once more”.

Stereotypes. We all know them, we all apply them in some form or the other. We put things in boxes after a quick look, and every drawer has a different label and content to separate the stereotypes. But what if it doesn’t work that way?

Since I joined Nutanix, I’ve been in several customer and partner meetings. Some of the people I’ve met got the idea right away: we are doing something new. Others put us into a familiar box or drawer. “You are a storage company” is one of the classic pieces of feedback. Or, “So you do virtual desktop infrastructure?”.

But there’s more to it. We combine commodity hardware with a piece of software, and sell that as a solution. And while virtual desktops are a great use case, we can also run things like Splunk, Hadoop and classic server virtualization workloads.

And while we offer the benefits of a shared storage approach to run workloads, we’re not a storage company. We utilize features offered by shared storage to make your life easier. Each node performs its operations on local storage, but the “Nutanix Distributed File System”, or NDFS, creates an abstraction layer that offers many of the shared storage benefits. An example would be a shared container for my virtual machines that is accessible to all of the hosts, enabling features like live migration between hosts.

While that works out really well with our customers, and it gives you the feeling that you have a SAN or NAS under the hood, Nutanix’s goal is not to replace your SAN or NAS. We want to offer you a “Virtual Computing Platform”: a way to make your life easier when installing, configuring and deploying virtualized workloads and solutions.

That works great, and we’ve received great feedback. There seems to be a slight disconnect though. That begins when people start asking questions like:

What do you mean: “You are not a storage company”…?

A fair question by all means, but the simple answer is: No, we are not.

A simple example that seems to come up as of late is the following: how do I share disk space from your file system directly into a virtual machine? While there is a way to export the storage directly into a VM (for example via NFS), this bypasses some of the concepts we utilize. By default, we mount a datastore using the NFS IP address 192.168.5.1, which runs over a virtual switch that has no uplinks. Since this traffic stays within the same vSwitch, it can run at blazing speeds that are not limited by the speed of the physical NIC.
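Just to illustrate, this is roughly what that internal mount looks like from an ESXi shell; the container name here is made up:

# list the NAS datastores known to this host
esxcfg-nas -l
# example output for a (hypothetical) Nutanix container:
# ctr1 is /ctr1 from 192.168.5.1 mounted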

If I were to mount the NFS share from a virtual machine (or a different host), we could use the external IP of a Controller VM. The problem is that, since the external IPs differ between Controller VMs, migrating your NFS client VM to a different host would send everything over the regular network. Also, if the Controller VM you connect to as an NFS server goes offline, your NFS share is no longer accessible.

The thing is, the Nutanix block is designed to work this way. It offers great flexibility when it comes to running virtualized workloads, but it is not a 100% distributed storage system, and we never intended it to be one.

It then boils down to design. Is there a way around this? Certainly.

If you want to create a distributed CIFS file share, take a look at solutions like Microsoft’s DFS. You can run multiple VMs inside a container/datastore and simply pass the VMs’ disk space through. If you need more space, add more VMs on a different node, add capacity, and off you go. And if you run out of space on your cluster? Just add another Nutanix node, get a VM up and running, and follow the same procedure.

That way, you are actually utilizing the distributed nature of our Virtual Computing Platform, and running your storage services in a distributed manner. GlusterFS could be a way to achieve the same thing with NFS on Linux.

And like I said, if this sounds like we are not a storage company? You are absolutely right, we are not. So you might want to categorize us under a different label, put us in a different box, or create an entirely new stereotype. 😉

Nutanix, NX-2400, Virtualization, VMware, vSphere

Upgrading your Nutanix NX-2400 block from ESXi 5.0 to ESXi 5.1 using a USB thumb drive.

At the moment, I’m lucky enough to have a Nutanix block at home that I use for demos (it’s coming along to Switzerland with me tomorrow). It’s not the model with the highest specs, but it helps in giving customers a chance to actually see the kit, and gives partners some hands-on time. In case you are wondering, I’m working with an NX-2400, which is a 4-node NX-2000 cluster, hence the 2400.

The thing is, it was running an older version of the Nutanix Operating System (NOS), which I upgraded to the latest version (NOS 3.1) without a hitch, and it was running ESXi 5.0. To play with some of the latest features, I decided to upgrade to ESXi 5.1, and I figured I might as well share how that worked out for me.

The steps are relatively simple, but I figured I’ll document them here anyway. One word of caution though:

    This was done with the latest info from the Nutanix knowledge base. Be sure to check if there are updated instructions available prior to upgrading your own block.

So, step one is to get the installation media for ESXi 5.1. In the case of the NX-3000, you can use the standard ESXi 5.1 image. For the NX-2000 systems, you need an image that is customized by Nutanix; contact me or your local systems engineer for the download location.

Next, create a bootable USB stick using the image. The easiest way I found is to format the stick with FAT32 as the filesystem. I recommend using a Windows system or a Windows VM, since no matter what I tried, I couldn’t get a stick created on a Mac to boot.
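If you’re formatting the stick on Windows, a minimal diskpart sketch would look something like this (the disk number is just an example; double-check which disk is your USB stick before you clean it):

diskpart
list disk
select disk 1
clean
create partition primary
format fs=fat32 quick
active
assign
exit

Once the drive is formatted, I used UNetbootin: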

UNetbootin ESXi 5.1 Nutanix

Click on “Disc Image” and select the ISO file. Make sure “USB Drive” is selected, and point it to the correct drive. Then click on “OK”, and watch it go to work. If it gives you a message stating that menu.c32 is already present, click on the “Yes” button.

We’ll also need to edit the NFS heartbeat timeout settings. To do that, log on to vCenter, select the node and go to “Software” -> “Advanced Settings”. There, go to the NFS entries and change the “NFS.HeartbeatTimeout” setting to 30 seconds. Do that for each host.
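If you prefer the command line, the same setting can be changed from an SSH session on each host; a quick sketch (verify the option path against your build first):

esxcfg-advcfg -s 30 /NFS/HeartbeatTimeout
esxcfg-advcfg -g /NFS/HeartbeatTimeout

The second command just reads the value back so you can confirm the change.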

Next, we need to make sure the multiextent module is loaded. Add the following lines to /etc/rc.local.d/local.sh on each host (if not already there):
#added to support multiextent
localcli system module load -m multiextent
#end of adding

Then restart the host.
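After the reboot, you can verify that the module actually loaded. Assuming localcli mirrors the esxcli namespaces here (which it should on 5.x), something like this will confirm it:

localcli system module list | grep multiextent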

Once you are done, it is time to start the upgrade. Go into the BIOS (using the Delete key) on the node you want to upgrade, and change the boot order so that the node actually boots from your USB stick. Once you save the config and restart, you will be presented with a menu where you select the second option:

Unetbootin - Nutanix ESXi 5.1 upgrade menu

After that, you should see the trusty ESXi boot sequence:
ESXi 5.1 boot screen

At the installation screen, just hit the “Enter” key to continue with the installation. Read the license agreement, and continue with F11. Next, you are asked where the installation should reside. Normally, you should see that the Intel SSD already has a VMFS partition, indicated by the small asterisk in front of the disk. Select that disk and press “Enter” to continue:
ESXi 5.1 upgrade VMFS

Next, a prompt should show up asking if you want to upgrade. Select that option, and press “Enter” once more:
Upgrade VMFS ESXi 5.1

The final step is to confirm your upgrade by pressing the “F11” key. Once the upgrade is done, remove the USB thumb drive, and reboot the server by hitting the “Enter” key. Let the node reboot, change the boot order back to the original sequence, and, tadaaaaaa:
Nutanix - ESXi 5.1 upgrade complete

Now, obviously this would be easier using vSphere Update Manager, but since I only installed the vCenter virtual appliance, this was the solution I used. Not pretty, but it works.

One key thing left to do is to re-register the Controller VMs on your ESXi hosts. You can do this via the vSphere Client by connecting directly to the ESXi host. Just right-click the VM and select “Remove from inventory”. Then browse the datastore, go to the folder named “ServiceVM-1.24_Ubuntu” and add the VM to the inventory using the VMX file. You can now start your VM after you confirm that you moved it. 🙂

The other alternative is to re-register your VM using vim-cmd via an SSH session to your ESXi host. First, check which VMs are registered:

vim-cmd vmsvc/getallvms

Vmid   Name                                     File                                                        Guest OS        Version   Annotation
190    NTNX-TRAIN2-S11317022510746-A-CVM--2-    [NTNX-local-ds-S11317022510746-A] ServiceVM/ServiceVM.vmx   ubuntu64Guest   vmx-07

Remember the Vmid and de-register the VM:

vim-cmd vmsvc/unregister [vmid of controller VM]

Now simply re-register the VM:

vim-cmd solo/register [/full/file/path/to/the/controller_vm_name.vmx]

You might want to rename the Controller VM once you have registered it.

Should you have any issues starting the VM, make sure that the .vmx file of your VM does not contain the following line:

sched.mem.prealloc = "TRUE"

If this line is present, remove it, and re-register your VM.
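If you’d rather not edit the file by hand, you can strip the line from an ESXi shell; a rough one-liner, with a hypothetical datastore path, and a backup copy first:

cp /vmfs/volumes/NTNX-local-ds/ServiceVM/ServiceVM.vmx /tmp/ServiceVM.vmx.bak
sed -i '/sched.mem.prealloc/d' /vmfs/volumes/NTNX-local-ds/ServiceVM/ServiceVM.vmx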

Nutanix, Storage, VMware, VMworld

Nutanix – 2013 vExpert gift

So, this is something I found out just after my first day at Nutanix. There is a Facebook post by Nutanix, stating the following:

Nutanix would like to congratulate all #vExpert winners with a personalized pint glass at #VMWorld! Winners- reach out to us if interested.

I sent out a tweet, and got back a couple of replies. Some folks don’t use Facebook, some won’t be visiting VMworld in the US (or Europe for that matter), and it wasn’t quite clear what info was needed.

In an effort to consolidate this a bit more, I set up a Google spreadsheet that asks for some basic info: your first name, last name, Twitter handle, and whether you will be visiting VMworld in the US or Europe. You don’t have to sign in; editing is possible when accessing the document using the direct link.

Should you not be able to visit, I think we can arrange for the personalized pint glass to be shipped to you, and we will follow up with you regarding the shipping details. Just make sure that you follow either the Nutanix Twitter account or my Twitter account, so that we can send you a direct message should we need your shipping information.

The link to the document is: http://bit.ly/Nutanix_vExpert_2013

And in case you are wondering, I took the liberty of filling out the info of the people who had already replied to me via Twitter. And yes, we will be checking if you are on the official list. 😉

EMC, VMUG, VMware

VMUG for Germany west (Schwalbach am Taunus)

Just a small reminder for the people that live in my area. On Friday, June 7th, the German VMware User Group (VMUG) West will be meeting at the EMC office in Schwalbach (click here for a PDF with the address and route). In case you don’t know what the VMUG is about, here’s a quick summary:

The VMware User Group (VMUG) is an independent, global, customer-led organization, created to maximize members’ use of VMware and partner solutions through knowledge sharing, training, collaboration, and events.

The beauty of it? It’s something set up by users for other users. That means that people come to these events to get vendor-neutral information, and can talk freely to others without having to fear that someone is just giving them a marketing pitch. Or at least, that’s what it should be like.

So, the Germany West VMUG meeting is on Friday, June 7, 2013, at the following address and time:

09:30 – 16:15

EMC Deutschland GmbH
Am Kronberger Hang 2a
65824 Schwalbach/Taunus

You can use this link to register for the event, free of charge, and get to see talks on VMware Nicira, “VMware Network & Security” and other security-related topics.

And one important thing to note: the VMUG is a community set up by VMware customers for VMware customers, to exchange ideas, share common issues or worries, learn, and get to know others in the community. If you feel like you can contribute, submit a proposal for a talk, or suggest a topic for the next VMUG. The more people participate, the better a VMUG gets!

I’ll be there, and I’m looking forward to seeing you there!

EMC, Symmetrix, V-MAX, vCenter Operations, Virtualization, VMware

Setting up the EMC Symmetrix adapter on the vCenter Operations Manager 5 vApp

I present a lot on vCenter Operations Manager, a pretty neat monitoring tool from VMware. I like this tool a lot because getting started with it is easy enough, yet you have a plethora of features once you dive deeper into it. And the best part? If you use the “big” version (Enterprise or Enterprise Plus, that is), you can even monitor your applications and non-virtualized infrastructure.

To monitor things beyond your virtual machines, you can install so-called “adapters”. In a nutshell, such an adapter is nothing more than a piece of software that tells vCenter Operations how to connect to things, and how to interpret the results it gets back. Now, EMC has created such an adapter for its VMAX and Symmetrix storage arrays, and has created a document that tells you how to set up and configure the adapter. That way, you can get loads of information from your storage system inside of vCenter Operations. Great stuff, right?

Yeah, OK, maybe not so great. The biggest problem is that the documentation seems to have been created for the normal installable version of vCenter Operations. However, VMware has also created a version in the form of an appliance, a so-called vApp. You download the files, deploy the vApp, enter the IP addresses of the two virtual machines contained in the vApp, and away you go. Wonderfully easy to install, and besides certain limits in scalability, it offers pretty much the same functionality as the normal installer.

This is where the problem starts if you want to use the EMC Symmetrix adapter. You can find almost all adapters on the Integrien FTP site, and there’s a folder containing all the files you need to get started with the Symmetrix adapter right here. My teammate Matt Cowger actually wrote a nice blog post on how to configure and set up the Symmetrix adapter. This works like a charm, except for one tiny thing that you will run into when using the vCenter Operations vApp.

When you go to create an adapter instance, you need to give it a name, indicate if you want to auto-discover everything, and input a path to the “EMC Symmetrix Main Input Folder”. This is the folder where you archive all of the performance and configuration data from your storage system. The documentation tells you that this should be:

  • If the main input folder is on a remote Windows machine, you must share the folder before you add the adapter instance. Do not map the main input folder. Windows services do not work with mapped drives.
  • If the main input folder is on a remote Linux machine, you must mount the folder to the Collector server before you add the adapter instance.

The problem being: if you actually have your Solutions Enabler host running on Windows, you need to input a UNC path in the format of \\servername\sharename. But the virtual machines inside the vApp do not come with any access methods for Windows shares. You won’t find tools like mount.cifs or smbclient, and you don’t even have the option to specify smbfs as the type of file system to mount. And that means what? Well, you have two options to overcome this situation. You can either install Services for Unix/Services for NFS on your Windows host and set up an NFS share on your Windows machine, or you can migrate your Solutions Enabler host to a Linux machine and set up everything there.

OK, so how do I configure this stuff under Linux? Glad you asked. You can follow some of the steps from the post that Matt created, but I’m going to write them down here anyway so you will have one page with all the steps you need. I’m going to assume that you have already set up your Linux machine, and that you have installed the Solutions Enabler package. Go into the following file: /usr/emc/API/symapi/config and add the following lines at the end of the file, then make sure you save your changes (creating a backup of the original first is always a good idea):

storstpd:dmn_run_spa = disable
storstpd:dmn_run_smc = disable
storstpd:dmn_run_ttp = enable
storstpd:dmn_run_ttp_on_sp = disable
storstpd:dmn_run_rtc = disable
storstpd:ttp_collection_interval = 5
storstpd:ttp_rdflnk_metrics = enable
storstpd:ttp_se_tcp_metrics = enable
storstpd:ttp_se_nw_metrics = enable
storstpd:ttp_dev_metrics = disable
storstpd:ttp_disk_metrics = disable
storstpd:ttp_dgdev_metrics = enable
storstpd:ttp_se_nwi_metrics = enable
storstpd:ttp_re_sg_metrics = enable
storstpd:ttp_re_nwc_metrics = enable
storstpd:use_compression = enable

Next, restart the storstpd daemon:

/opt/emc/SYMCLI/bin/stordaemon shutdown storstpd
/opt/emc/SYMCLI/bin/stordaemon start storstpd

Check if the daemon is up and running again by issuing the following command. The first line should show the Daemon State as “Running”:

/opt/emc/SYMCLI/bin/stordaemon show storstpd

Now, since the Analytics VM will actually be collecting the information from the adapter, it needs to be able to access the files from your Solutions Enabler host. The Analytics VM runs the collection process as a user called “admin”, and on the vCenter Operations appliance that admin user runs with a user ID (UID) of 1000 and a group ID (GID) of 1003. That means that we should either install our Solutions Enabler using a user with the same user ID and group ID (see the sketch after the list below), or we need to map some things so that the admin user can actually access the files later on.

In order to export the directory with the required files for the Symmetrix adapter, we add the following line to /etc/exports:

/usr/emc/API/symapi/stp *(rw,insecure,all_squash,anonuid=0,anongid=0)

Obviously, this isn’t the best you can do from a security perspective, so feel free to change these options as needed for your environment, but basically what we are doing here is this:

  • The * means that all IP addresses have access. You can change this to, for example, the IP of the Analytics VM.
  • rw means that the export is created with read and write access.
  • insecure means that clients can use non-reserved ports.
  • all_squash means that all users get mapped to the anonymous user account.
  • anonuid=0 means that the anonymous user ID gets mapped to user ID 0. Be careful, since this is the root account!
  • anongid=0 means that the anonymous group ID gets mapped to group ID 0. Again, this is the root group!
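As an alternative to squashing everyone to root, you could create a user on the Solutions Enabler host that matches the appliance’s admin user before installing Solutions Enabler; a rough sketch (the user and group names are made up, only the numerical IDs matter):

groupadd -g 1003 vcopsadmin
useradd -u 1000 -g 1003 -m vcopsadmin

In that case, you would export with anonuid=1000,anongid=1003 instead of 0.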

If you did install your Solutions Enabler as a different user, make sure that you map the anonuid and anongid to the respective numerical IDs, to allow access to the files we are going to export. Now, we simply restart the NFS server, or have it re-read its config should it already be online, using:

/etc/init.d/nfsserver restart

or

exportfs -ra

We can check if the export is working, using the following command:

showmount -e localhost
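If everything went well, the output should look something along these lines:

Export list for localhost:
/usr/emc/API/symapi/stp *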

Now, we create a scheduled job to archive the Solutions Enabler data. To do that, add the following line to your crontab:

2-57/5 * * * * /opt/emc/SYMCLI/bin/stordaemon action storstpd -cmd archive

This will cause the job to start at 2 minutes past the hour, and run in 5-minute intervals. Check under /usr/emc/API/symapi/stp/ttp to see if you have a new directory. Normally, the directory should be named after the serial number of your storage array, and contain compressed files with the information the Symmetrix adapter will need.

The final thing to do is to log on to the Analytics VM, and create a folder where we will mount the required files, for example /media/VMAX. Once you have created the directory, edit /etc/fstab to contain the following line:

10.10.10.10:/usr/emc/API/symapi/stp /media/VMAX nfs rw,lock 0 0

Make sure you change the IP address to match that of your Solutions Enabler host, and then mount the directory using the following command:

mount /media/VMAX

If you don’t have a firewall blocking communication, you should now be able to traverse the subdirectories and access the files. Finally, you can configure the adapter, and input the directory you just mounted as the “EMC Symmetrix Main Input Folder”. So, in the text field, simply enter the following as the path:

/media/VMAX/ttp

If you test the adapter now, you should see it come back successfully, and after giving it a bit of time, start working with the data you are now importing from your VMAX/Symmetrix system. 🙂

Clustering, EMC, Storage, Virtualization, VMware, VPLEX, vSphere

VMware HA demo using vMSC with EMC VPLEX Metro

That’s a mouth full of abbreviations for a title, isn’t it?

So, let me give you some background info. VMware introduced something called the vSphere Metro Storage Cluster, and Duncan Epping talks about this feature here.

What the vMSC allows us to do is create a regular stretched vSphere cluster, but now also stretch the storage between the two sites. This can be done in two ways (to quote from Duncan’s article):

I want to briefly explain the concept of a metro / stretched cluster, which can be carved up in to two different type of solutions. The first solution is where a synchronous copy of your datastore is available on the other site, this mirror copy will be read-only. In other words there is a read-write copy in Datacenter-A and a read-only copy in Datacenter-B. This means that your VMs in Datacenter-B located on this datastore will do I/O on Datacenter-A since the read-write copy of the datastore is in Datacenter-A. The second solution is which EMC calls “write anywhere”. In this case VMs always write locally. The key point here is that each of the LUNs / datastores has a “preferred site” defined, this is also sometimes referred to as “site bias”. In other words, if anything happens to the link in between then the storage system on the preferred site for a given datastore will be the only one left who can read-write access it.

The last scenario described here is something that can obviously cause some issues. EMC addresses this by introducing an “independent 3rd party”, in the form of the VPLEX Witness. Some documentation states that this witness should run in a 3rd site, but I would recommend running it in a separate failure domain.

In essence, we have created the following setup:

© VMware

Awesome stuff, because we can do things that weren’t quite possible before. Since VPLEX is one of EMC’s key storage virtualization solutions that allows active/active disk access, we can perform a vMotion between the two sites, and due to the nature of VPLEX, a sort of Storage vMotion happens on the underlying disks at the same time. All that without you having to shut down the VM. Pretty neat!

Now, as Chad describes here, a new disk connectivity state was introduced with vSphere 5, called “Permanent Device Loss” or PDL. This was a great feature to communicate to your infrastructure that a target was intentionally removed. You could unmount the disk, and remove the paths to your target in a proper way.

It was also useful to indicate an unexpected loss of your target, for example when your cluster is in a partitioned state. The problem here was that a PDL state and VMware HA didn’t work so well together: when the notification came in, HA didn’t “kill” your VM, and your virtual machine would usually continue to respond to pings, but that was about it.

Then along came vSphere 5 Update 1, which allows us to set a flag on each of the hosts inside our cluster, and a different flag on our HA cluster. Now, we can actually have HA terminate the VMs hit by a PDL, and restart those virtual machines on the hosts in our cluster that still have access to their datastores in their respective preferred sites.
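For reference, these are the two settings in question as documented for vSphere 5.0 Update 1 (double-check the documentation for your exact version before applying them). On each ESXi host, add this to /etc/vmware/settings:

disk.terminateVMOnPDLDefault = "True"

And on the HA cluster, set the following advanced option:

das.maskCleanShutdownEnabled = "True"

The first one tells the host to kill VMs that hit a PDL state; the second one allows HA to restart the VMs that were powered off that way.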

I’ve created a short (OK, 8 minutes) video that shows exactly this scenario. You’ll get a quick view of the VPLEX setup. You’ll see the Brocade switches being switched from a config with the normal full zoneset to a zoneset that disables the inter-switch links between both VPLEX clusters. And you’ll see the settings inside my vSphere lab setup, with the resulting behavior of the hosts and virtual machines.

Since I’m quite new to creating videos like this, I hope the output is acceptable, and the video is clear enough. If you have any questions, feedback or would like to see more, please leave me a comment and I’ll see what I can do. 🙂


Just a quick modification to my post, since it wasn’t actually VM-HA (or VM monitoring) responding to the PDL event, but HA terminating the VM when running in to the PDL state, as Duncan pointed out to me on Twitter. Sorry for any confusion I may have caused!

EMC, Storage, VAAI, VMware, VNX, vSphere

“My VAAI is Better Than Yours” – The file side of things

I have to admit it. I stole, or rather “borrowed”, part of this title from a blog post by a colleague of mine, Erik Zandboer. He just published a post on the mindset behind VAAI, and on what the actual effect is on the array itself and on your vSphere infrastructure.

VAAI was already available in vSphere 4.1, and with the switch to vSphere 5 some new features were introduced, which means that as of this release, we now have the following situation:

Block:
  • HW accelerated Zeroing
  • HW accelerated Copy
  • HW accelerated Locking

File:
  • NFS – Full Copy
  • NFS – Extended Statistics
  • NFS – Space Reservation

Some folks will say that I left out Thin Provision Stun, which is true. And while it does help to resolve some issues, I left it out because I don’t really view it as a hardware offload, which is what I’m trying to focus on.

I took the hardware in our lab, an EMC VNX 5300, for a spin in our vSphere 5 setup to show the same thing as Erik showed in his blog, but this time showing off some of the File/NFS accelerations.

To get the VNX to actually support NAS VAAI offloading and get the results you expect, you need to meet the following prerequisites:

  • vSphere 5 – You need vSphere 5 installed with an Enterprise or Enterprise Plus license
  • VNX OE for File 7.0.35.x – Your VNX Operating Environment for File needs to be at version 7.0.35.x or newer
  • NFSv3 – The offloads only work on NFSv3-based datastores
  • The vSphere NFS VAAI offload plugin which is referenced here (a sketch of the install follows below)
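Installing the plugin itself is typically a one-liner from an SSH session on the host; the depot filename below is illustrative, so use whatever version EMC supplies, and note that the host may need to go into maintenance mode and be rebooted afterwards:

esxcli software vib install -d /vmfs/volumes/datastore1/EMCNasPlugin-1.0-10.zip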

If all those prerequisites are met, you should normally be able to go into your vSphere Client and see Hardware Acceleration as Supported:

You could also enable SSH for your ESXi host (do this by going to the individual host, clicking the “Configuration” tab, selecting “Security Profile” and starting the SSH service) and check the support from the command line. For block devices, you could enter the following command:

esxcli storage core device vaai status get

and get back a result that shows you the NAA ID, the VAAI plugin name, and the primitives with their support state. The following command:

esxcli storage core device list

gives you a similar output, but again, this only works for block devices and won’t really help you when checking the support for NFS. I haven’t found any way so far to get a reliable statement back via SSH, but I’ll keep looking and update this post if I find something.

In the case of the VNX, we can check on the array itself to see if we are using the primitives, so I’m showing you the output straight from the array, using the following command on the VNX:

server_stats server_2 -monitor nfs.v3.vstorage -type accu -i 1
First off, I went back into my ESXi host and into the NFSv3 datastore that was hosting my virtual machine, in this case a Windows 2008 server running an SAP Enterprise Portal. I used vmkfstools to create a clone:

vmkfstools -i GI-C-SAP-EPBW.vmdk CLONE-GI-C-SAP-EPBW.vmdk

and I set off a snap using a similar command. All the while, I had the VNX command that I posted before running in a different window. The output from the VNX showed that we are actually using the VAAI NFS offloading functions:

server_2     NFS VAAI op          VAAI Op   VAAI Op       VAAI Op     VAAI Op
Timestamp                         Calls     Total uSecs   Max uSecs   Average uSec/Op
09:07:14
09:07:15
09:07:16
09:07:17
09:07:18     vaaiFastClone              1             0           0           0
             vaaiVxAttrs                3             0           1           0
             vaaiRegister               5             0           0           0
09:07:19
.......
09:08:27     vaaiOffloadStatus          1             0           0           0
             vaaiVxAttrs                7             1           1           0
             vaaiRegister              10             0           0           0
09:08:28
09:08:29
09:08:30
09:08:31
09:08:32     vaaiOffloadStatus          2             0           0           0

server_2     NFS VAAI op          VAAI Op   VAAI Op       VAAI Op     VAAI Op
Summary                           Calls     Total uSecs   Max uSecs   Average uSec/Op
Minimum      vaaiFullClone              0             0       83308           -
             vaaiFastClone              0             0           0           0
             vaaiOffloadStatus          0             0           0           0
             vaaiOffloadAbort           0             0           0           -
             vaaiVxAttrs                0             0           1           0
             vaaiReserveSpace           0             0           0           -
             vaaiRegister               0             0           0           0
Average      vaaiFullClone              0             0       83308           -
             vaaiFastClone              1             0           0           0
             vaaiOffloadStatus          0             0           0           0
             vaaiOffloadAbort           0             0           0           -
             vaaiVxAttrs                3             0           1           0
             vaaiReserveSpace           0             0           0           -
             vaaiRegister               5             0           0           0
Maximum      vaaiFullClone              0             0       83308           -
             vaaiFastClone              1             0           0           0
             vaaiOffloadStatus          2             0           0           0
             vaaiOffloadAbort           0             0           0           -
             vaaiVxAttrs                7             1           1           0
             vaaiReserveSpace           0             0           0           -
             vaaiRegister              10             0           0           0

Once the files are created, use:

vmkfstools --extendedstat GI-C-SAP-EPBW.vmdk

on the source file, the snap, or the clone to display the extended statistics. “Capacity bytes” shows the allocated space for the virtual disk. “Used bytes” displays the blocks used for the virtual disk, which in the case of our snapshot means the fast clone plus its parent. “Unshared bytes” displays the usage of the actual fast clone itself, without the parent.
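To give you an idea of the format, the output looks along these lines (the numbers here are made up):

Capacity bytes: 26843545600
Used bytes: 15109469184
Unshared bytes: 1073741824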

I should point out that the offload did speed up my full clone operation, but “only” in the range of 20%. That may not sound like a great deal, but both esxtop and the vSphere Client performance graphs showed that the ESXi server was busy doing what it is supposed to do: virtualizing my resources! And that’s the most important thing, isn’t it?