VAAI, VMware, vSphere

vBrownbag – VAAI on NFS session during VMworld

Well, after being in Barcelona for a week for VMworld Europe, and after some other things I had going on, I wanted to take some time to throw out a quick blog post on something that I have been getting positive feedback on.

If you aren’t familiar already with the vBrownbag initiative, make sure to check it out at http://professionalvmware.com/brownbags/. To quote from the site:

The ProfessionalVMware #vBrownBag is a series of online webinars held using GotoMeeting and covering various Virtualization & VMware Certification topics.

While VMworld was going on, some of the vBrownbag crew were visiting, and they set up short 10-minute sessions in which presenters could come by and discuss various topics, ranging from VCDX certification to "unsupported" sessions that showed off some neat unsupported tricks.

Fellow blogger Julian Wood actually put up a great blog post over on wooditwork.com that directly shows you all of the recorded sessions, including my own, which is titled: "VAAI tips, specifics, common pitfalls and caveats on NFS".

It's good that the recording adds video and audio commentary, but I thought I'd also add the slides, which you can find here:

http://app.sliderocket.com:80/app/fullplayer.aspx?id=34d7923b-017c-435b-8ea7-043ab0a895da

I hope that it’s of use to you, and look forward to your feedback.

EMC, Storage, VAAI, VMware, VNX, vSphere

“My VAAI is Better Than Yours” – The file side of things

I have to admit it. I stole, or rather "borrowed", part of this title from a blog post by a colleague of mine, Erik Zandboer. He just published a post on the mindset behind VAAI, and on what the actual effect is, both on the array itself and on your vSphere infrastructure.

VAAI was already available in vSphere 4.1, and with the switch to vSphere 5 some new features were introduced. As of this release, we now have the following situation:

Block:
  • HW accelerated Zeroing
  • HW accelerated Copy
  • HW accelerated Locking

File:
  • NFS – Full Copy
  • NFS – Extended Statistics
  • NFS – Space Reservation

Some folks will say that I left out Thin Provision Stun, which is true. And while it does help to resolve some issues, I left it out because I don’t really view it as a hardware offload, which is what I’m trying to focus on.

I took the hardware in our lab, an EMC VNX 5300, for a spin in our vSphere 5 setup to show the same thing Erik showed in his blog, but instead showing off some of the File/NFS accelerations.

To get the VNX to actually support NAS VAAI offloading and get the results you expect, you need to meet the following prerequisites:

  • vSphere 5 – You need vSphere 5 installed with an Enterprise or Enterprise Plus license
  • VNX OE for File 7.0.35.x – Your VNX Operating Environment for File needs to be at version 7.0.35.x or newer
  • NFSv3 – The offloads only work on NFSv3-based datastores
  • The vSphere NFS VAAI offload plugin which is referenced here
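
As a quick illustration, installing such a plugin typically boils down to a single esxcli call from the host (a sketch only; the depot path and file name below are placeholders for whatever package EMC actually provides, and the host will usually need a reboot afterwards):

esxcli software vib install -d /tmp/EMCNasPlugin.zip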

If all those prerequisites are met, you should normally be able to go into your vSphere Client and see Hardware Acceleration listed as Supported:

You can also enable SSH for your ESXi host (do this by going to the individual host, clicking the "Configuration" tab, selecting "Security Profile" and starting the SSH service) and check the support from the command line. For block devices you can enter the following command:

esxcli storage core device vaai status get

and get back a result that shows you the NAA ID, the VAAI plugin name, and the primitives with their support state. By using the following command:

esxcli storage core device list

you get a similar output, but again this only works for block devices, and won't really help you when checking the support for NFS. I haven't found any way so far to get a reliable statement back via SSH, but I'll keep looking and update this post if I find something.
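
If you want to check one specific block device, you can narrow the first command down like this (a sketch: I'm assuming the -d/--device flag here, and the naa identifier below is a made-up placeholder):

esxcli storage core device vaai status get -d naa.60060160a0b01234567890abcdef1234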

In the case of the VNX, we can check on the array itself to see if we are using the primitives, so I'm showing you the output straight from the array, using the following command on the VNX:

server_stats server_2 -monitor nfs.v3.vstorage -type accu -i 1
First off, I went back into my ESXi host and into the NFSv3 datastore that was hosting my virtual machine, in this case a Windows 2008 server running an SAP Enterprise Portal, and I used vmkfstools to create a clone:

vmkfstools -i GI-C-SAP-EPBW.vmdk CLONE-GI-C-SAP-EPBW.vmdk

and I set off a snap using a similar vmkfstools -i invocation with a different destination name. All the while, I had the VNX command that I posted before running in a different window. The output from the VNX showed that we were actually using the VAAI NFS offloading functions:

server_2     NFS VAAI Op         VAAI Op   VAAI Op       VAAI Op     VAAI Op Average
Timestamp                        Calls     Total uSecs   Max uSecs   uSec/Op
09:07:14
09:07:15
09:07:16
09:07:17
09:07:18     vaaiFastClone       1         0             0           0
             vaaiVxAttrs         3         0             1           0
             vaaiRegister        5         0             0           0
09:07:19
.......
09:08:27     vaaiOffloadStatus   1         0             0           0
             vaaiVxAttrs         7         1             1           0
             vaaiRegister        10        0             0           0
09:08:28
09:08:29
09:08:30
09:08:31
09:08:32     vaaiOffloadStatus   2         0             0           0

server_2     NFS VAAI Op         VAAI Op   VAAI Op       VAAI Op     VAAI Op Average
Summary                          Calls     Total uSecs   Max uSecs   uSec/Op
Minimum      vaaiFullClone       0         0             83308       -
             vaaiFastClone       0         0             0           0
             vaaiOffloadStatus   0         0             0           0
             vaaiOffloadAbort    0         0             0           -
             vaaiVxAttrs         0         0             1           0
             vaaiReserveSpace    0         0             0           -
             vaaiRegister        0         0             0           0
Average      vaaiFullClone       0         0             83308       -
             vaaiFastClone       1         0             0           0
             vaaiOffloadStatus   0         0             0           0
             vaaiOffloadAbort    0         0             0           -
             vaaiVxAttrs         3         0             1           0
             vaaiReserveSpace    0         0             0           -
             vaaiRegister        5         0             0           0
Maximum      vaaiFullClone       0         0             83308       -
             vaaiFastClone       1         0             0           0
             vaaiOffloadStatus   2         0             0           0
             vaaiOffloadAbort    0         0             0           -
             vaaiVxAttrs         7         1             1           0
             vaaiReserveSpace    0         0             0           -
             vaaiRegister        10        0             0           0

Once the files are created, use:

vmkfstools --extendedstat GI-C-SAP-EPBW.vmdk

on the source file, or on the snap or clone, to display the extended statistics. "Capacity bytes" shows the allocated space for the virtual disk. "Used bytes" shows the blocks used by the virtual disk, which in the case of our snapshot means the fast clone plus its parent. "Unshared bytes" shows the usage of the actual fast clone itself, without the parent.
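
As an illustration, here is what those three fields could look like for a hypothetical 40GB disk backing a fast clone (the numbers are made up; only the field names come from the command output):

Capacity bytes: 42949672960
Used bytes: 8589934592
Unshared bytes: 1073741824

In other words, the difference between "Used bytes" and "Unshared bytes" is the part that the fast clone still shares with its parent.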

I should point out that the offload did speed up my full clone operation, but "only" in the range of 20%. That isn't a great deal, but both esxtop and the vSphere Client performance graphs showed that the ESXi server was busy doing what it is supposed to do: virtualizing my resources! And that's the most important thing, isn't it?

GestaltIT, VAAI, vCloud Director, Virtualization, VMware, VMworld, vSphere

vSphere 5, it’s here! What’s new?

It’s here, it’s here. 😉

VMware just announced their new version of vSphere 5, and as you have probably found out, general availability is targeted toward August this year. There is a whole bunch, and I mean a whole bunch, of new stuff coming out, and everyone knows what we can expect at VMworld this year.

Let me be clear that this post is in no way trying to sum up all the new things introduced with vSphere 5; it is meant to give you a quick and easy-to-consume overview of some of the major new features.

Key stuff that is new or has changed in vSphere 5:

  • Virtual hardware limits. We can now address 32 virtual CPUs and a maximum of 1TB of RAM per virtual machine (note that virtual machine hardware version 8 is required). We see people running larger and larger workloads, and more and more people moving their tier 1 applications to a virtualized environment. Anyone who has tried to virtualize a large database or business warehouse system will know what I mean.

    One word of caution though. Even though we can now create very large installations, be careful. This is not a sensible size for all applications, and you should check on an application specific basis if you really need something this big, and are able to leverage all of the resources it offers.

  • VMFS version 5. With the updated version of VMFS, some modifications have been made. For one, you no longer need to use extents to create volumes over 2TB in size, and support has been added for physical RDMs that are over 2TB. Existing VMFS-3 volumes can also be upgraded in place (see the sketch after this list).
  • The service console is missing. Well, it's not really missing, but there is no service console anymore, because ESXi is now the only hypervisor you will find. Although some people might miss parts of the traditional ESX service console, this does offer some advantages, like having only a single vSphere package, hardened security and fewer patches. This should probably be one of the changes that almost everyone saw coming, so I'm not going to go into the depths of the pros and cons of this choice.
  • VAAI has again been enhanced. With vSphere 5, there are enhancements for both block and file based storage:
    • For block:
      • Thin provision stun has been added, which is basically an option to get feedback when a thinly provisioned LUN is full. You will now get a message back from the array, and the affected guests are “stunned”. This allows the admin to add some more free space to the LUN, after which the guests can resume normal operation.
      • Space reclaim is the second feature that has been added. One caveat is that this hardware offload is dependent on VMFS version 5; anything prior to that won't do the job. If that prerequisite is in place, any blocks that are freed up by VMFS operations, things like VM deletion or snapshot deletion, will now be returned to the pool of free blocks.
    • For file:
      • You can now use NFS full copy. Somewhat similar to the block version, copying of files can now be offloaded to the array, which of course should speed up things like clone creation.
      • Extended stats adds the ability to get extended information from files. Information about actual space allocation, or whether a file is deduplicated, can now be retrieved.
      • We can now use space reservation, to actually pre-initialize a disk and allocate all of the required space right off the bat.
  • VMware has redesigned HA. The new architecture should help people who want to work with stretched clusters.

    Basically, VMware has moved away from the underlying EMC Autostart-based construct to an entirely new model. The HA agent is now called the FDM (Fault Domain Manager), and one of the nodes in the cluster takes on the role of master. All of the remaining nodes in the cluster are slaves to this master, which means that we are no longer using the primary/secondary concept that was common with the previous version of HA. During normal operation, we should only see one master node per cluster.

    Benefits of the new construct are that we are no longer as susceptible to DNS issues. VMware has also added additional communication paths (we can now also leverage so-called "heartbeat datastores") that aid in the detection of failures. And, as a bonus, VMware has added support for IPv6.

    Since the entire HA stack has been rewritten, there are a lot of changes coming. I'm planning on getting down to the nitty-gritty in a future post, and I'm sure that my friend Duncan will also be explaining this in great detail on his blog.

  • VASA, or "vSphere Storage API for Storage Awareness", is basically a way for the storage array to tell vSphere what it can do, or what it is currently doing. Imagine getting feedback on whether your storage is capable of VAAI, or something simpler, like the array telling vSphere what RAID level a datastore has. Sounds sensible, right? Now combine that with the new Storage DRS in vSphere 5, and you get a pretty good picture of what VASA can help you with.
  • Storage DRS. The DRS feature in vSphere is already pretty well known, and it’s something that I see in use a lot at customer sites.

    Well, now you can also use DRS for your storage. To enable this feature, you create a so-called "datastore cluster", which is in essence nothing more than several datastores combined. When you create a new VM, it is placed inside a datastore cluster, and Storage DRS balances everything out based on key criteria like space utilization and I/O latency. More to follow in a different post.
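
Speaking of VMFS version 5: here is a minimal sketch of the in-place upgrade of an existing VMFS-3 datastore from the ESXi 5 shell (the datastore label is a placeholder, and you should double-check the exact syntax against your build before running it):

esxcli storage vmfs upgrade -l datastore1

Keep in mind that the upgrade is one-way; there is no going back to VMFS-3 afterwards.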

Now, this is by no means a complete overview, and I'll be going into these and other new features in upcoming posts. I don't want to flood you with information that can also be found on plenty of other blogs out there, but this should give you a good start. Look out for posts on the things mentioned up here, but also on things like the added support for software-based FCoE initiators, APD/PDL handling, the vSphere Storage Appliance, the new SRM 5 and vCloud Director 1.5.

GestaltIT, Performance, Storage, VAAI, Virtualization, VMware, vSphere

What is VAAI, and how does it add spice to my life as a VMware admin?

[Image: EMC EBC Cork]
I spent some days in Cork, Ireland this week presenting to a customer. I'm now almost two months into my new job and loving every part of it, but there is one part of the job that is extremely cool.

I get to talk to customers about very cool, new technology that can help them get their job done! And while it's in the heart of every techno-loving geek to get caught up in bits and bytes, I've noticed one thing very quickly: the technology is usually not the part that limits the customer from doing new things.

Everybody knows about that last part. Sometimes you will actually run into a problem where some new piece of kit is wreaking havoc and we can't seem to put our finger on what the problem is. But most of the time, we get caught up in entirely different problems altogether: things like processes, certifications (think of ISO, SOX, ITIL), compliance, security, or something as "simple" as people who don't want to learn something new or who feel threatened because their role might be changing.

And this is where technology comes in again. I had the opportunity to talk about several things with this customer, but one of the key points was that technology should help make my life easier. One of the cool new things that will actually help me in that area was a topic that was part of my presentation.

Some of the VMware admins already know about this technology, and I would say that most of the folks that read blogs have already heard about it in some form. But when talking to people at conventions or in customer briefings, I get to introduce folks over and over to a new technology called VAAI (vStorage API for Array Integration), and I want to explain again in this blog post what it is, and how it might be able to help you.

So where does it come from?

Well, you might think that it is something new, and you would be wrong. VAAI was introduced as part of the vStorage API during VMworld 2008, even though the release of the VAAI functionality to customers came with the vSphere 4.1 update (4.1 Enterprise and Enterprise Plus). But VAAI isn't the entire vStorage API, since that consists of a family of APIs:

  • vStorage API for Site Recovery Manager
  • vStorage API for Data Protection
  • vStorage API for Multipathing
  • vStorage API for Array Integration

Now, the only API that was added with the update from vSphere 4.0 to vSphere 4.1 was the last one, VAAI. I haven't seen any roadmaps yet that contain more info about future vStorage APIs, but personally I would expect to see even more functionality coming in the future.

And how does VAAI make my life easier?

If you read back a couple of lines, you will notice that I said that technology should make my life easier. Well, with VAAI this is actually the case. Basically, VAAI allows you to offload operations on data to something that was made to do just that: the array. And it does that at the ESX storage stack.

As an admin, you don’t want your ESX(i) machines to be busy copying blocks or creating clones. You don’t want your network being clogged up with storage vMotion traffic. You want your host to be busy with compute operations and with the management of your memory, and that’s about it. You want as much reserve as you can on your machine, because that allows you to leverage virtualization more effectively!

So, this is where VAAI comes in. Through the API that VMware created, the host can now leverage a set of SCSI commands:

  • ATS (Atomic Test & Set): This command helps you out with hardware-assisted locking, meaning that you no longer have to lock an entire LUN but can just lock the blocks that are allocated to the VMDK. This can be of benefit, for example, when you have multiple machines on the same datastore and would like to create a clone.
  • XCOPY: This one is also called "full copy" and is used to copy data and/or create clones, avoiding all of that data being sent back and forth to your host. After all, why would your host need the data if everything is stored on the array already?
  • WRITE SAME: This one is also known as "bulk zero" and will come in handy when you create a VM. The array takes care of writing zeroes on your thin and thick VMDKs, and helps out at creation time for eager zeroed thick (EZT) guests.
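
To make that last one tangible, creating an eager zeroed thick disk is exactly the kind of operation that WRITE SAME accelerates. A minimal sketch (the size and datastore path are placeholders):

vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testdisk.vmdk

With VAAI in place, the array handles zeroing all of those blocks; without it, the host has to push every zeroed block over the wire itself.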

Sounds great, but how do I notice this in reality?

Well, I've seen several scenarios where, for example during a storage vMotion, you would see a reduction in CPU utilization of 20% or even more. In other scenarios, you should normally also see a reduction in the time it takes to complete an operation, and in the resources allocated to perform it (usually CPU).

Does that mean that VAAI always reduces my CPU usage? In a sense, yes. You won't always notice a CPU reduction, but one key criterion is that with VAAI enabled, all of the SCSI operations mentioned above should perform faster than without it. That means that even when you don't see a reduction in CPU usage, you will see that, because the operations complete faster, you get your CPU power back more quickly.

Ok, so what do I need, how do I enable it, and what are the caveats?

Let's start off with the caveats, because some of these are easy to overlook. The hardware offload will not be used in any of the following cases:

  • The source and destination VMFS volumes have different block sizes
  • The source file type is RDM and the destination file type is non-RDM (regular file)
  • The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
  • The source or destination VMDK is any sort of sparse or hosted format
  • The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
  • The VMFS has multiple LUNs/extents and they are all on different arrays

Or short and simple: “Make sure your source and target are the same”.
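
As an example for that first caveat, you can query a datastore's block size (and VMFS version) from the host with vmkfstools (a sketch; the datastore path is a placeholder):

vmkfstools -Ph /vmfs/volumes/datastore1

Compare the output for your source and destination datastores before counting on the full copy offload.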

Key criteria for using VAAI are vSphere 4.1 and an array that supports VAAI. If you have those two prerequisites in place, you should be set to go. And if you want to be certain you are leveraging VAAI, check these things:

  • In the vSphere Client inventory panel, select the host
  • Click the Configuration tab, and click Advanced Settings under Software
  • Check that these options are set to 1 (enabled):
    • DataMover/HardwareAcceleratedMove
    • DataMover/HardwareAcceleratedInit
    • VMFS3/HardwareAcceleratedLocking
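
If you'd rather check this from the host's command line, here is a minimal sketch of the same verification (the setting names come straight from the list above; I'm assuming you have shell or SSH access to the host):

esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

Each command prints the current value of the option; a value of 1 means the primitive is enabled.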

Note that these are enabled by default. And if you need more info, please make sure that you check out VMware knowledge base article 1021976.

Also, one last word on this. I really feel that this is a technology that will make your life as a VMware admin easier, so talk to your storage admins (if that person isn't you in the first place) or your storage vendor and ask if their arrays support VAAI. If not, ask them when they will support it. Not because it's cool technology, but because it's cool technology that makes your job easier.

And, if you have any questions or comments, please hit me up in the remarks. I would love to see your opinions on this.

Update: 2010-11-30
VMware guru and Yellow Bricks mastermind Duncan Epping was kind enough to point me to a post of his from earlier this week that goes into more detail on some of the upcoming features. Make sure you check it out right here.