I spent some days in Cork, Ireland this week presenting to a customer. Besides the fact that I’m now almost two months into my new job, and I’m loving every part of it, there is one part of my job that is extremely cool.
I get to talk to customers about very cool and new technology that can help them get their job done! And while it’s in the heart of every techno-loving geek to get caught up in bits and bytes, I’ve noticed one thing very quickly. The technology is usually not the part that is limiting the customer from doing new things.
Everybody knows about that last part. Sometimes you will actually run into a problem where some new piece of kit is wreaking havoc and we can’t seem to put our finger on what the problem is. But most of the time, we get caught up in entirely different problems altogether. Things like processes, certifications (think of ISO, SOX, ITIL), compliance, security, or something as “simple” as people who don’t want to learn something new or feel threatened because their role might be changing.
And this is where technology comes in again. I had the ability to talk about several things to this customer, but one of the key points was that technology should help make my life easier. One of the cool new things that will actually help me in that area was a topic that was part of my presentation.
Some of the VMware admins already know about this technology, and I would say that most of the folks that read blogs have already heard about it in some form. But when talking to people at conventions or in customer briefings, I get to introduce folks over and over to a new technology called VAAI (vStorage API for Array Integration), and I want to explain again in this blog post what it is, and how it might be able to help you.
So where does it come from?
Well, you might think that it is something new. And you would be wrong. VAAI was introduced as a part of the vStorage API during VMworld 2008, even though the release of the VAAI functionality to the customers was part of the vSphere 4.1 update (4.1 Enterprise and Enterprise Plus). But VAAI isn’t the entire vStorage API, since that consists of a family of APIs:
- vStorage API for Site Recovery Manager
- vStorage API for Data Protection
- vStorage API for Multipathing
- vStorage API for Array Integration
Now, the “only API” that was added with the update from vSphere 4.0 to vSphere 4.1 was the last API, called VAAI. I haven’t seen any of the roadmaps yet that contain more info about future vStorage APIs, but personally I would expect to see even more functionality coming in the future.
And how does VAAI make my life easier?
If you read back a couple of lines, you will notice that I said that technology should make my life easier. Well, with VAAI this is actually the case. Basically what VAAI allows you to do is offload operations on data to something that was made to do just that: the array. And it does that at the ESX storage stack.
As an admin, you don’t want your ESX(i) machines to be busy copying blocks or creating clones. You don’t want your network being clogged up with storage vMotion traffic. You want your host to be busy with compute operations and with the management of your memory, and that’s about it. You want as much reserve as you can on your machine, because that allows you to leverage virtualization more effectively!
So, this is where VAAI comes in. Using the API that was created by VMware, you can now use a set of SCSI commands:
- ATS: This command helps you out with hardware assisted locking, meaning that you don’t have to lock an entire LUN anymore but can now just lock the blocks that are allocated to the VMDK. This can be of benefit, for example when you have multiple machines on the same datastore and would like to create a clone.
- XCOPY: This one is also called “full copy” and is used to copy data and/or create clones, avoiding sending all of the data back and forth to your host. After all, why would your host need the data if everything is stored on the array already?
- WRITE-SAME: This is one that is also known as “bulk zero” and will come in handy when you create a VM. The array takes care of writing zeroes on your thin and thick VMDKs, and helps out at creation time for eager zeroed thick (EZT) guests.
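As a quick illustration of the “bulk zero” primitive, you can create an eager zeroed thick disk from the ESX(i) shell and watch how quickly it completes on a VAAI-capable array. The datastore and VM paths below are just examples; adjust them to your environment:

```shell
# Create a 10 GB eager-zeroed-thick disk on a VMFS datastore.
# With WRITE-SAME offloaded, the array writes the zeroes itself,
# so this completes much faster than host-side zeroing.
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testvm-ezt.vmdk
```

Try the same command on a datastore backed by a non-VAAI array and compare how long the host spends pushing zeroes over the fabric.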
Sounds great, but how do I notice this in reality?
Well, I’ve seen several scenarios where, for example during a storage vMotion, you would see a reduction in CPU utilization of 20% or even more. In other scenarios, you should normally also see a reduction in the time it takes to complete an operation, and in the resources that are allocated to perform such an operation (usually CPU).
Does that mean that VAAI always reduces my CPU usage? Well, in a sense: yes. You won’t always notice a CPU reduction, but one of the key criteria is that with VAAI enabled, all of the SCSI operations mentioned above should always perform faster than without VAAI enabled. That means that even when you don’t see an outright reduction in CPU usage, the operations are faster, so you get your CPU power back more quickly.
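If you want to actually watch the offloads happen, newer releases (vSphere 5.0 and later) expose VAAI counters in esxtop; the exact field names below are from those releases and may differ on your build:

```shell
# Start esxtop, switch to the disk-device view with 'u', then press 'f'
# and enable the VAAI stats fields (VAAISTATS). You should see counters
# such as CLONE_RD/CLONE_WR (full copy), ATS (locking) and ZERO
# (write-same) increase while a clone or storage vMotion is running.
esxtop
```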
Ok, so what do I need, how do I enable it, and what are the caveats?
Let’s start off with the caveats, because some of these are easy to overlook. VAAI hardware offload will not be used when:
- The source and destination VMFS volumes have different block sizes
- The source file type is RDM and the destination file type is non-RDM (regular file)
- The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
- The source or destination VMDK is any sort of sparse or hosted format
- The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
- The VMFS has multiple LUNs/extents and they are all on different arrays
Or short and simple: “Make sure your source and target are the same”.
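One caveat that is easy to check up front is the block-size mismatch: you can compare the VMFS block size of the source and destination datastores from the ESX(i) shell. The datastore name below is just an example:

```shell
# Show the VMFS properties (including the file block size) of a datastore.
# If the source and destination datastores report different block sizes,
# the copy falls back to the host-based datamover instead of the offload.
vmkfstools -Ph /vmfs/volumes/datastore1
```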
Key criteria to use VAAI are the use of vSphere 4.1 and an array that supports VAAI. If you have those two prerequisites set up, you should be good to go. And if you want to be certain you are leveraging VAAI, check these things:
- In the vSphere Client inventory panel, select the host
- Click the Configuration tab, and click Advanced Settings under Software
- Check that the options DataMover.HardwareAcceleratedMove, DataMover.HardwareAcceleratedInit and VMFS3.HardwareAcceleratedLocking are set to 1 (enabled)
Note that these are enabled by default. And if you need more info, please make sure that you check out VMware knowledge base article 1021976.
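If you have shell access to the host, you can also query those same settings directly with esxcfg-advcfg; a value of 1 means the primitive is enabled:

```shell
# Query the three VAAI-related advanced settings on an ESX(i) 4.1 host.
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
```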
Also, one last word on this. I really feel that this is a technology that will make your life as a VMware admin easier, so talk to your storage admins (if that person isn’t you in the first place) or your storage vendor and ask if their arrays support VAAI. If not, ask them when they will support it. Not because it’s cool technology, but because it’s cool technology that makes your job easier.
And, if you have any questions or comments, please hit me up in the remarks. I would love to see your opinions on this.
VMware guru and Yellow Bricks mastermind Duncan Epping was kind enough to point me to a post of his from earlier this week, that went in to more detail on some of the upcoming features. Make sure you check it out right here.
15 thoughts on “What is VAAI, and how does it add spice to my life as a VMware admin?”
Just read your post. Really a good piece. Enjoyed reading it.
Thanks Roy, glad you enjoyed it! 🙂
[…] Dell Storage team has one more big event to get through before VMworld though. Tech Field Day 7 is going to be held in Austin next month, and Dell is one of the stops for the delegates! As we […]
Under which two conditions can vStorage APIs for Array Integration (VAAI) provide a performance benefit?
A. When a virtual disk has VMDK files stored on an NFS datastore.
B. When a virtual disk is created using the New Virtual Machine wizard.
C. When cloning a virtual machine with snapshots.
D. When a virtual disk is deleted.
what is the answer to this question?
Almost looks like you are asking the answer to a braindump question?
is it BC?
You can do better. Give me a reason why you think it’s b and c, and I might just answer your question.
Referring to Amardeep’s question, I was initially inclined to go with C and D, but then I read in the following VMware KB article
that cloning with snapshots is not supported. I still think answer D is correct due to the block delete.
I would go with A and D, on the basis that NFS datastores are generally going to be thin provisioned (by default, unless the array states that the datastore can support thick), and within VAAI there is thin provisioning support that allows the host to tell the array to reclaim space when VMs are deleted/migrated.
Still not sure though. What do you think?
The assumptions seem correct. But let’s take a look at some things:
A) With NFS, the default is always thin provisioned. Provisioning there should not have much benefit from VAAI offloads, except when you are not creating a thin disk. In that case, you utilize the “space reservation” function of VAAI, which allows you to create an EZT disk, but that doesn’t really mean I am creating the disk a lot faster. Actually, I’m not offloading the writing of zeroes at all via NFS, so performance is not the key part of this VAAI function.
B) The answer doesn’t tell me which option is selected, but regardless of the option I choose, I can use features like ATS or WRITE-SAME, depending on which type of disk I’m provisioning (obviously this applies to block storage).
C) VMware KB 1021976 clearly states “VAAI hardware offload cannot be used when: [..] Cloning a virtual machine that has snapshots (or doing a View replica or recompose), because this process involves consolidating the snapshots into the virtual disks of the target virtual machine”. So this answer isn’t correct in my opinion.
D) You should distinguish between the deletion of an entire disk, and the space reclaim function for thin provisioning (TP Unmap) which is covered by Cormac here: http://cormachogan.com/2012/11/08/vaai-comparison-block-versus-nas/. The Unmap primitive was actually introduced with vSphere 5.0, and did include the feature that, when a VM was svMotioned to a different disk, or a VM was deleted, Unmap would reclaim the space. Unfortunately, they noticed that this could cause a performance problem, and they disabled the primitive in vSphere 5.0 Patch 2. They then re-enabled it, but as a manual process, with vSphere 5.0U1. See: http://blogs.vmware.com/vsphere/2012/03/vaai-thin-provisioning-block-reclaimunmap-is-back-in-50u1.html
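For completeness, this is roughly what that manual reclaim from vSphere 5.0U1 looks like from the ESX(i) shell. The datastore name and percentage below are just examples:

```shell
# Manually reclaim dead space on a thin-provisioned VMFS datastore
# (vSphere 5.0U1). Run from inside the datastore; the argument is the
# percentage of free space to reclaim. The command creates a temporary
# balloon file, so be careful with high percentages on nearly full
# datastores.
cd /vmfs/volumes/datastore1
vmkfstools -y 60
```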
So, in my opinion, the correct options would be answer B and answer D, although the questions aren’t ideal to start with. But I look forward to your feedback to see if you agree/disagree. 🙂
Wow! Let’s just say that was a detailed answer! Nice. Very nice! The problem I have with D is that it’s not specifically performance (speed) related but more specifically dead-space-reclamation-specific. B works for me because of the zeroing that’s involved in creating a thick provisioned disk. That process will be offloaded from the host to the array and is therefore likely to be completed quicker, hence yielding a performance gain. So, I think the least bad (best) choice would have to be B & D. Not an ideal question, but an interesting one that I think would not come up in the real world in a hurry.