VMworld 2013 – Link collection

26 08 2013

As most of you will know, VMworld is going on right now, and they kicked off this morning with the general keynote. There were some new announcements, such as the introduction of NSX, the public beta of VSAN, and the vCloud Suite 5.5.

As always, you’ll be flooded with blog posts and articles, so like the last couple of years, I’ll be trying to give you an overview with links. If you feel like something is missing, please leave a note in the comments, or send me a direct message on Twitter and I’ll try to get it added pronto.

So, here goes:





Upgrading your Nutanix NX-2400 block from ESXi 5.0 to ESXi 5.1 using a USB thumb drive.

16 07 2013

At the moment, I’m lucky enough to have a Nutanix block at home that I use for demos (it’s coming along to Switzerland with me tomorrow). It’s not the model with the highest specs, but it helps in giving customers a chance to actually see the kit, and gives partners some hands-on time. In case you are wondering, I’m working with an NX-2400, which is a 4-node NX-2000 cluster, hence the 2400.

The thing is, though, that it was running an older version of the Nutanix Operating System (NOS), which I upgraded to the latest version (NOS 3.1) without a hitch, and it was still on ESXi 5.0. To play with some of the latest features, I decided to upgrade to ESXi 5.1, and I figured I might as well share how that worked out for me.

The steps are relatively simple, but I figured I’d document them here anyway. One word of caution though:

    This was done with the latest info from the Nutanix knowledge base. Be sure to check if there are updated instructions available prior to upgrading your own block.

So, step one is to actually get the installation media for ESXi 5.1. In the case of the NX-3000, you can use the standard ESXi 5.1 image. For the NX-2000 systems, you need an image that has been customized by Nutanix; contact me or your local systems engineer to get the download location.

Next, create a bootable USB stick using the image. The easiest way I found is to format the stick with FAT32 as the filesystem (a quick diskpart sketch follows after the UNetbootin step below). I recommend using a Windows system, or a Windows VM, since no matter what I tried, I couldn’t get it to boot using a Mac. Once the drive is formatted, I used UNetbootin:

[Screenshot: UNetbootin with the Nutanix ESXi 5.1 ISO selected]

Click on “Disc Image” and select the ISO file. Make sure “USB Drive” is selected, and point it to the correct drive. Then click on “OK”, and watch it go to work. If it gives you a message stating that menu.c32 is already present, click on the “Yes” button.
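
Going back to the formatting step for a moment: if you need a quick way to get the stick formatted as FAT32 on Windows, a diskpart session along these lines should do it. This is just a minimal sketch; the disk number is an assumption, so double-check it with “list disk” first, because “clean” wipes the selected disk, and keep in mind that Windows will only format FAT32 volumes up to 32 GB this way.

diskpart
list disk
select disk 1
clean
create partition primary
format fs=fat32 quick
active
exit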

We’ll also need to edit the NFS heartbeat timeout settings. To do that, log on to vCenter, select the node and go to “Software” -> “Advanced settings”. There, go to the NFS entries, and modify the “NFS.HeartbeatTimeout” setting to 30 seconds. Do that for each host.
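
If you’d rather do this from the command line, the same advanced setting can be changed per host over SSH. A minimal sketch, assuming SSH is enabled on the node:

# set the NFS heartbeat timeout to 30 seconds on this host
esxcli system settings advanced set -o /NFS/HeartbeatTimeout -i 30
# check that the new value took
esxcli system settings advanced list -o /NFS/HeartbeatTimeout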

Next, we need to make sure the multiextent module is loaded. Add the following lines to /etc/rc.local.d/local.sh on each host (if not already there):
#added to support multiextent
localcli system module load -m multiextent
#end of adding

Then restart the host.
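
To double-check that the module actually loaded after the reboot, something along these lines over SSH should work (just a sanity check, not part of the official procedure):

# list the loaded modules and look for multiextent
esxcli system module list | grep -i multiextent
# if it isn't loaded yet, load it by hand for the current boot
esxcli system module load -m multiextent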

Once you are done, it is time to start the upgrade. Go into the BIOS (using the Delete key) on the node you want to upgrade, and change the boot order so that the node boots from your USB stick. Once you save the config and restart, you will be presented with a menu where you select the second option:

[Screenshot: UNetbootin boot menu for the Nutanix ESXi 5.1 upgrade]

After that you should be able to see the trusty ESXi boot sequence:
[Screenshot: ESXi 5.1 boot screen]

At the installation screen, just hit the “Enter” key to continue with the installation. Read the license agreement, and continue with F11. Next, you are asked where the installation should reside. Normally you should see that the Intel SSD already has a VMFS partition, indicated by the small asterisk in front of the disk. Select that disk and press “Enter” to continue:
[Screenshot: ESXi 5.1 installer showing the disk with the existing VMFS partition]

Next, a prompt should show up asking if you want to upgrade. Select that option, and press “Enter” once more:
[Screenshot: ESXi 5.1 upgrade prompt]

The final step is to confirm your upgrade by pressing the “F11” key. Once the upgrade is done, remove the USB thumb drive, and reboot the server by again hitting the “Enter” key. Let the node reboot, change the boot order to the original sequence, and, tadaaaaaa:
[Screenshot: Nutanix node after the completed ESXi 5.1 upgrade]

Now, obviously this would be easier using the vSphere Update Manager, but this was the solution I used, since I only installed the vCenter virtual appliance. Not pretty but it works.

One key thing left to do is to re-register the controller VMs on your ESXi hosts. You can do this via the vSphere Client by connecting directly to the ESXi host. Just right-click the VM and select “Remove from inventory”. Then browse the datastore, go to the folder named “ServiceVM-1.24_Ubuntu”, and add the VM to the inventory using the VMX file. When asked whether you moved or copied the VM, confirm that you moved it, and you can then start the VM. 🙂

The other alternative is to re-register your VM using vim-cmd via an SSH session to your ESXi host. Just check which VMs are registered:
vim-cmd vmsvc/getallvms

Vmid Name File Guest OS Version Annotation
190 NTNX-TRAIN2-S11317022510746-A-CVM--2- [NTNX-local-ds-S11317022510746-A] ServiceVM/ServiceVM.vmx ubuntu64Guest vmx-07
Remember the VMID and de-register the VM:
vim-cmd vmsvc/unregister [vmid of controller VM]

Now simply re-register the VM:
vim-cmd solo/register [/full/file/path/to/the/controller_vm_name.vmx]

You might want to rename the controller VM once you have registered it.

Should you have any issues starting the VM, make sure that there is no line saying:
sched.mem.prealloc = "TRUE"
in the .vmx file of your VM. If this line is present, remove it, and re-register your VM.
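
A quick way to check for that line is to grep the controller VM’s .vmx file from an SSH session. The path below is only an example and will differ per node, so adjust the datastore and folder names to your environment; if the line shows up, remove it with vi and re-register the VM as described above.

# example path only - adjust to your local datastore and controller VM folder
grep sched.mem.prealloc /vmfs/volumes/NTNX-local-ds-*/ServiceVM/ServiceVM.vmx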





vSphere Design: CARR – How do you know if you have them correct?

10 12 2012

(Image linked from http://justcreative.com/2009/10/11/classic-elegant-serif-fonts/)

I’m a techie. I like technology, and if you ask me to solve a problem that involves something with a computer, I’ll usually get it solved. My boss seems to know that, and it’s one of the reasons why I get pulled into projects that require hands-on work.

I like to talk about technology. It’s one of the reasons that I enjoy being in my current pre-sales role so much. I enjoy taking a technology, trying to simplify what it does, and then talking to a customer to see if a technology can add value in their setup, or solve one or more specific problems they might be having.

The one doesn’t work without the other for me. I need stick time with something before I’m really able to effectively communicate about it. I’m not the kind of guy to go over a PowerPoint presentation and then deduce how a product works in real life. I can do that up to a certain degree, but I won’t feel really confident without having some form of hands-on.

In comes the design part

In one of my previous posts, I asked how you learn to speak design. There are design methodologies that can help you uncover goals, and it will be up to you to identify the CARR, or written out:

  • Constraints
  • Assumptions
  • Risks
  • Requirements

And this is where the hard part is for me. I don’t deal with this terminology in a design environment on a day-to-day basis, and that makes it hard to categorize these correctly without a lot of practice.

There is a good document on the VMware Community page that goes into detail on “Functional versus Non-functional” requirements. The document states the following:

Functional requirements specify specific behavior or functions, for example:
“Display the heart rate, blood pressure and temperature of a patient connected to the patient monitor.”

…..

Non-functional requirements specify all the remaining requirements not covered by the functional requirements. They specify criteria that judge the operation of a system, rather than specific behaviors, for example: “Display of the patient’s vital signs must respond to a change in the patient’s status within 2 seconds.”

Which makes it relatively simple. Those are simple examples, and when you keep in mind that a non-functional requirement usually is a design constraint, you should be all set to identify constraints and requirements, right?

Maybe not so much?

Along comes something in a different form, and then the over-thinking starts:

  • “You must re-use existing server hardware”.
  • That’s great. I “must” do something, so it’s a requirement, right? But does this change the way that “my heart rate is displayed”? Well, since I’m a techie, depending on the server model, this might influence the way it’s displayed. Do I need to change my design because I’m re-using the hardware? Well, you may need to. But normally your design shouldn’t depend on the hardware you are re-using. But what if it’s not allowing me to create a cluster, or run certain workloads, or is so old that it won’t allow me to use certain features?

And the rambling goes on, and on, and on… At least, I think this is where a lot of folks can go wrong. My gut feeling is that we perhaps over-think what is being said/asked. If we know nothing about the environment at all, but the customer tells us that we need to re-use the hardware that is already in place, then is that a:

  • Requirement? We are after all required to re-use the hardware?
  • Constraint? We are constrained from bringing in any other hardware?

What would be your take on this? And what do you use to actually differentiate and remember what is what?





vBrownbag – VAAI on NFS session during VMworld

22 10 2012

Well, after being in Barcelona for a week for VMworld Europe, and after some other things that I had going on, I wanted to take some time to throw out a quick blog post on something that I have been getting positive feedback on.

If you aren’t familiar already with the vBrownbag initiative, make sure to check it out at http://professionalvmware.com/brownbags/. To quote from the site:

The ProfessionalVMware #vBrownBag is a series of online webinars held using GotoMeeting and covering various Virtualization & VMware Certification topics.

While VMworld was going on, some of the vBrownbag crew were visiting, and set up short 10-minute sessions in which presenters could come by and discuss various topics, ranging from VCDX certification to “unsupported” sessions showing off some neat unsupported tricks.

Fellow blogger Julian Wood put up a great blog post over on wooditwork.com that directly shows you all of the recorded sessions, including my own, which is titled: “VAAI tips, specifics, common pitfalls and caveats on NFS”.

The recording adds video and audio commentary, but I thought I’d also share the slides, which you can find here:

http://app.sliderocket.com:80/app/fullplayer.aspx?id=34d7923b-017c-435b-8ea7-043ab0a895da

I hope that it’s of use to you, and look forward to your feedback.





VMware releases vSphere 5.1

27 08 2012

Today, at VMworld in San Francisco, VMware released a new version of their virtualization platform, namely vSphere 5.1.

To anyone who has been working with vSphere for some time, the version number won’t be that big of a surprise. Also, just before the weekend, the new version number actually showed up in the VMware Compatibility Guide (and was taken offline again over the weekend). But, as little surprise as the version number was, there was one quite big surprise that went along with all of the sparkly new features: A change in the licensing model.

Rumors were already circulating a week before the convention, and this change certainly wasn’t an easy decision for VMware. I was in an early partner briefing, and while we were receiving it, there were still mails going around inside of VMware; the change in licensing policy was actually communicated internally during that briefing. Keep in mind that most people didn’t really like the licensing change that came with vSphere 5, and VMware had already adjusted that policy just a month after releasing vSphere 5.

So, what changed in the licensing department?

It has all become much easier to deal with, since VMware dropped the vRAM licensing model. Yes, you read that right: VMware is no longer charging for virtualized RAM. The short version is this:

vRAM licensing is no longer used; licensing is now per CPU socket.

There are other changes, like the VMware vCloud Suite, but I will cover those in a different post.

And what is new in the technology department?

Well, the usual upgrade in terms of bumped maximums:

  • Up to 64 virtual CPUs
  • Up to 1TB of vRAM
  • Up to 32 hosts can now access a linked clone
  • 16Gb FC HBA support

And more things that will put vSphere and Hyper-V on par from a maximums standpoint. But I don’t really think that those limits are interesting. To find the latest maximums we can have a look at the configuration maximums guide. So, what else has changed?

  • Virtual Shared Graphics Acceleration (vSGA)
    “vSGA expands upon existing non‐hardware accelerated graphics capabilities for basic 3D workloads, by supporting accelerating VDI workloads using physical GPU resources. With this new capability, it is now possible to virtualize physical GPU resources, sharing them across virtual desktops. This functionality supports an array of graphically rich and intense applications such as full motion video, rich media services, and more demanding 3D graphics.”
    To actually use this feature, you currently need an NVIDIA GPU based on the GF100 (Fermi) architecture, such as the Quadro 4000, Quadro 5000 or Quadro 6000 series. For people who like to dig around, this is also what the X server in the installation media is for.
  • No more reboots to upgrade the VMware tools
    You read that right. You can now perform an online upgrade of the VMware tools for any VM that is running Windows Vista, or a later Windows release.
  • Network Health Check
    “Assures proper physical and virtual operation by providing health monitoring for physical network setups including VLAN, MTU or Teaming. Today, vSphere administrators have no tools to verify whether the physical setup is capable of deploying virtual machines correctly. Even simple physical setups can be frustrating to debug end to end. Network health check eases these issues by giving you a window into the operation of your physical and virtual network”
  • Single Sign On (SSO)
    “The vSphere 5.1 SSO feature simplifies the login process for the Cloud Infrastructure Suite. You can log into the management infrastructure a single time through the vSphere Web Client or the API and be able to perform operations across all components of the Cloud Infrastructure Suite without having to log into the components separately. SSO operates across all Cloud Infrastructure Suite components that support this feature.”
    This is a great addition. You can now use your vCenter installation as an SSO source, or you can integrate directly into existing OpenLDAP and Active Directory sources. Scheme support is present for LDAP, LDAPS and NIS.
  • vMotion
    You can now simultaneously vMotion memory and storage. I hear you thinking that “you could do that for a while now”, and you are correct. But with vSphere 5.1, you can finally do it online. Additionally, there is no need for shared storage to perform a vMotion. This means that you can use the local disks inside your hosts, and migrate running virtual machines between hosts without centralised storage, using NBD/NFC in the background. In my book, this is a great feature when working with a home lab.

Those are some pretty neat things, and there are even more out there, but there is one major change that I wanted to save for last. VMware announced the vSphere Web Client about a year ago (David Davis has done a nice writeup of it here), and set the tone for the future interface. Now, with vSphere 5.1, they made it very clear:

To use new things like the newer VM hardware version, the shared nothing vMotion, or any of the other new features, you have to use the new vSphere Web Client.

And that’s ok. The Web Client works like a charm. It did have some smaller bugs during testing, but to me it proved to be quite reliable and easy to use. Plus, it allows you to search for objects from any location, adds features like custom tags that you can attach to resources, and has modifications that make life easier. An example of that last point is when you add a new datastore to a host: if you re-use the name, the Web Client will detect this and ask you if you would like to use the same settings, which saves a bit of time.

There is one problem with this strategy though. You won’t be able to completely switch to the Web Client. For four tasks, you will still need the classic vSphere Client:

  • Import and export host profiles. You cannot import or export host profiles with the vSphere Web Client.
  • vSphere Update Manager. vSphere Update Manager isn’t available in the vSphere Web Client.
  • Datastore Browser inflate thin disk option. The Datastore Browser in the vSphere Client has an option to inflate a thin disk to a thick disk; the vSphere Web Client does not offer this option (see the command-line sketch after this list).
  • vSphere Authentication Proxy Server.
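
On that thin disk point: if you really need to inflate a disk without the classic client at hand, vmkfstools on the host can do it from the command line. A minimal sketch with a made-up path, to be run with the VM powered off:

# inflate a thin-provisioned disk to its full provisioned size (example path only)
vmkfstools --inflatedisk /vmfs/volumes/datastore1/myvm/myvm.vmdk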

That might change once the final version is available for download though. Also, with the Web Client, the way that vCenter plugins work has changed. This means that if you rely on any plugins for your daily operation, now is the time to contact your software/hardware vendor and ask whether they are planning to release a new plugin that works in the Web Client.

All in all?

All in all, I would say that with vSphere 5.1, the maximum configurations were aligned with what other hypervisors offer, and we again see some nice additions in functionality. A lot of folks will welcome the change in licensing policy, and all of those Mac users will welcome the fact that they can now perform their daily administration, without having to install a VM or connect to a remote desktop.

Some things aren’t entirely logical, like the fact that not all of the functionality was ported to the Web Client (yet), but I think it’s safe to say that there is more good than bad in this release. If you want to learn more about the technical side, or about the rest of the VMware vCloud Suite, make sure to check back every now and then, since I’ll be posting follow-ups with exactly that info. In terms of the software being released, we are still waiting for an official release date, but I’ll update this post once the date has been announced.

Update – August 31st:

The Dutch VMware Twitter account (VMware_NL) just gave me an expected release date for the vCloud Suite, and for vSphere 5.1: September 11th 2012. Keep in mind that this may change though.

Update – September 18th:
You can now actually download the release. It went live during my holiday, so I didn’t update the post. Download it from the VMware website. Also, the configuration maximums guide got released. Download it at: http://www.vmware.com/pdf/vsphere5/r51/vsphere-51-configuration-maximums.pdf.





VMworld 2012 – Call for voting and a jiffy?!

30 05 2012

(vote! by smallcaps, on Flickr)

The Twitter world has been slightly abuzz. The reason? Well, a couple of weeks ago people were allowed to submit session proposals on VMworld.com. Basically, the call for papers is a way for folks to say “Hey, this is a cool idea for a session I have. This is what I would like to talk about.”. You submitted that on the site, a first selection was made of the submissions, and those sessions have now been put online.

What do you need to do now? Well, you need to vote! If you go to VMworld.com you can click on the “Call for Papers Public Voting” link, and then cast a vote for the sessions you would like to see at VMworld. The only thing you need is a registered account at VMworld.com, and if you don’t have an account, you can create one here.

Once you are on the site, just browse through the sessions, and click on the thumb symbol in front of a session to cast your vote. It’s as easy as that, and you can vote for all the sessions that seem interesting to you (and others).

And while you are browsing, why not also take a quick look at session number 1665? This was submitted by my colleague Jonas Rosland and myself, and is titled:

Automagically Set-up Your Private Cloud Lab Environment: From Empty Box to Infrastructure as a Service in a Jiffy!

After casting your vote, it should look like this:

In the session, we will cover setting up a fully automated vCloud Director deployment in your lab environment. We start off with an empty server and teach you how to automate the installation of a full cloud infrastructure with ESXi, vCenter, vCloud Director and vApps, combined with the power of vCenter Orchestrator. With all of this combined, you’ll be done in a jiffy!

If you think it would be interesting, we are both thankful for your vote! 🙂





VMware HA demo using vMSC with EMC VPLEX Metro

5 04 2012

That’s a mouthful of abbreviations for a title, isn’t it?

So, let me give you some background info. VMware introduced something called the vSphere Metro Storage Cluster, and Duncan Epping talks about this feature here.

What the vMSC allows us to do is to create a regular stretched vSphere cluster, but now also stretch out the storage between the two sites. This can be done in two ways (to quote from Duncan’s article):

I want to briefly explain the concept of a metro / stretched cluster, which can be carved up in to two different type of solutions. The first solution is where a synchronous copy of your datastore is available on the other site, this mirror copy will be read-only. In other words there is a read-write copy in Datacenter-A and a read-only copy in Datacenter-B. This means that your VMs in Datacenter-B located on this datastore will do I/O on Datacenter-A since the read-write copy of the datastore is in Datacenter-A. The second solution is which EMC calls “write anywhere”. In this case VMs always write locally. The key point here is that each of the LUNs / datastores has a “preferred site” defined, this is also sometimes referred to as “site bias”. In other words, if anything happens to the link in between then the storage system on the preferred site for a given datastore will be the only one left who can read-write access it.

The last scenario described here is something that can obviously cause some issues. EMC tried to address this by introducing an “independent 3rd party”, in the form of the VPLEX Witness. Some documentation states that this witness should run in a 3rd site, but I would recommend running it in a separate failure domain.

In essence, we have created the following setup:

(Diagram © VMware)

Awesome stuff, because we can do new things that weren’t quite possible before. Since VPLEX is one of the key storage virtualization solutions from EMC that allows active/active disk access, we can perform a vMotion between the two sites, and due to the nature of VPLEX, we also perform a sort of storage vMotion on the underlying disks, all without having to shut down the VM. Pretty neat!

Now, as Chad describes here, a new disk connectivity state was introduced with vSphere 5, called “Permanent Device Loss” or PDL. This was a great feature to communicate to your infrastructure that a target was intentionally removed. You could unmount the disk, and remove the paths to your target in a proper way.

It was also useful to signal an unexpected loss of your target, indicating that your cluster is in a partitioned state. The problem here was that a PDL state and VMware HA didn’t work so well together. When you had an APD notification, HA didn’t “kill” your VM, and your virtual machine would usually continue to respond to pings, but that was about it.

Then along came vSphere 5 Update 1, which allows us to set a flag on each of the hosts inside our cluster, and a different flag for our HA cluster. Now, we can actually have HA terminate the affected VMs and restart them on the hosts in our cluster that still have access to their datastores in their respective preferred sites.
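
For reference, and heavily hedged since you should verify this against the current vMSC documentation: as far as I recall, the two settings involved at the time were a per-host entry in /etc/vmware/settings and an advanced option on the HA cluster, roughly like this:

# on each ESXi host, added to /etc/vmware/settings:
disk.terminateVMOnPDLDefault = True
# on the HA cluster, set as an advanced option:
das.maskCleanShutdownEnabled = True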

I’ve created a short (ok, 8 minute) video that shows exactly this scenario. You’ll get a quick view of the VPLEX setup. You’ll see the Brocade switches being switched from a config with the normal full zoneset to a zoneset that disables the inter-switch links between both VPLEX clusters. And you’ll see the settings inside of my vSphere lab setup, along with the behavior of the hosts and virtual machines.

Since I’m quite new to creating videos like this, I hope the output is acceptable, and the video is clear enough. If you have any questions, feedback or would like to see more, please leave me a comment and I’ll see what I can do. 🙂


Update: Just a quick modification to my post, since it wasn’t actually VM-HA (or VM monitoring) responding to the PDL event, but HA terminating the VM when running into the PDL state, as Duncan pointed out to me on Twitter. Sorry for any confusion I may have caused!







