GestaltIT, Performance, Storage, VAAI, Virtualization, VMware, vSphere

What is VAAI, and how does it add spice to my life as a VMware admin?

EMC EBC Cork
I spent a few days in Cork, Ireland this week presenting to a customer. Besides the fact that I’m now almost two months into my new job, and loving every part of it, there is one thing about it that is extremely cool.

I get to talk to customers about very cool and new technology that can help them get their job done! And while it’s in the heart of every techno-loving geek to get caught up in bits and bytes, I’ve noticed one thing very quickly. The technology is usually not the part that is keeping the customer from doing new things.

Everybody knows that last part. Sometimes you will actually run into a problem where some new piece of kit is wreaking havoc and we can’t seem to put our finger on the cause. But most of the time, we get caught up in entirely different problems altogether. Things like processes, certifications (think ISO, SOX, ITIL), compliance, security, or something as “simple” as people who don’t want to learn something new, or who feel threatened because their role might be changing.

And this is where technology comes in again. I had the ability to talk about several things to this customer, but one of the key points was that technology should help make my life easier. One of the cool new things that will actually help me in that area was a topic that was part of my presentation.

Some of the VMware admins already know about this technology, and I would say that most of the folks that read blogs have already heard about it in some form. But when talking to people at conventions or in customer briefings, I get to introduce folks over and over to a new technology called VAAI (vStorage API for Array Integration), and I want to explain again in this blog post what it is, and how it might be able to help you.

So where does it come from?

Well, you might think that it is something new, and you would be wrong. VAAI was introduced as part of the vStorage API during VMworld 2008, even though the release of the VAAI functionality to customers came with the vSphere 4.1 update (4.1 Enterprise and Enterprise Plus). But VAAI isn’t the entire vStorage API, since that consists of a family of APIs:

  • vStorage API for Site Recovery Manager
  • vStorage API for Data Protection
  • vStorage API for Multipathing
  • vStorage API for Array Integration

Now, the only API that was added with the update from vSphere 4.0 to vSphere 4.1 was the last one, VAAI. I haven’t seen any roadmaps yet that contain more info about future vStorage APIs, but personally I would expect to see even more functionality coming in the future.

And how does VAAI make my life easier?

If you read back a couple of lines, you will notice that I said technology should make my life easier. Well, with VAAI this is actually the case. Basically, VAAI allows you to offload operations on data to something that was made to do just that: the array. And it does this at the level of the ESX storage stack.

As an admin, you don’t want your ESX(i) machines to be busy copying blocks or creating clones. You don’t want your network being clogged up with storage vMotion traffic. You want your host to be busy with compute operations and with the management of your memory, and that’s about it. You want as much reserve as you can on your machine, because that allows you to leverage virtualization more effectively!

So, this is where VAAI comes in. Using the API that was created by VMware, you can now use a set of SCSI commands:

  • ATS: This command provides hardware-assisted locking, meaning that you no longer have to lock an entire LUN but can lock just the blocks that are allocated to the VMDK. This is a benefit, for example, when you have multiple machines on the same datastore and would like to create a clone.
  • XCOPY: This one is also called “full copy” and is used to copy data and/or create clones, avoiding all of the data being sent back and forth to your host. After all, why would your host need the data if everything is already stored on the array?
  • WRITE-SAME: Also known as “bulk zero”, this one comes in handy when you create a VM. The array takes care of writing zeroes on your thin and thick VMDKs, and helps out at creation time for eager zeroed thick (EZT) guests.

Sounds great, but how do I notice this in reality?

Well, I’ve seen several scenarios where, for example during a storage vMotion, you would see a reduction in CPU utilization of 20% or more. In the other scenarios you should normally also see a reduction in the time it takes to complete an operation, and in the resources (usually CPU) allocated to perform it.

Does that mean that VAAI always reduces my CPU usage? Well, in a sense: yes. You won’t always notice a CPU reduction, but one of the key criteria is that with VAAI enabled, all of the SCSI operations mentioned above should perform faster than without it. That means that even when you don’t see a reduction in CPU usage, the operations complete sooner, so you get your CPU cycles back more quickly.

Ok, so what do I need, how do I enable it, and what are the caveats?

Let’s start off with the caveats, because some of these are easy to overlook. Hardware offload falls back to regular software data movement when any of the following is true:

  • The source and destination VMFS volumes have different block sizes
  • The source file type is RDM and the destination file type is non-RDM (regular file)
  • The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
  • The source or destination VMDK is any sort of sparse or hosted format
  • The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
  • The VMFS has multiple LUNs/extents and they are all on different arrays

Or short and simple: “Make sure your source and target are the same”.
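As an illustration of the alignment caveat, here is a minimal sketch (Python, with a made-up 4 KB minimum alignment; the real value depends on the storage device) of the check that caveat describes:

```python
def vaai_offload_allowed(offset_bytes: int, length_bytes: int, alignment: int) -> bool:
    """True if the logical address and transfer length meet the device's
    minimum alignment; otherwise the host falls back to software data movement."""
    return offset_bytes % alignment == 0 and length_bytes % alignment == 0

# Hypothetical 4 KB minimum alignment:
print(vaai_offload_allowed(1048576, 8388608, 4096))        # aligned request
print(vaai_offload_allowed(1048576 + 512, 8388608, 4096))  # misaligned offset
```

Since datastores created with the vSphere Client are aligned automatically, in practice this caveat mostly affects manually created or legacy volumes.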

Key criteria for using VAAI are vSphere 4.1 and an array that supports VAAI. If you have those two prerequisites covered, you should be good to go. And if you want to be certain you are leveraging VAAI, check these things:

  • In the vSphere Client inventory panel, select the host
  • Click the Configuration tab, and click Advanced Settings under Software
  • Check that these options are set to 1 (enabled):
    • DataMover/HardwareAcceleratedMove
    • DataMover/HardwareAcceleratedInit
    • VMFS3/HardwareAcceleratedLocking

Note that these are enabled by default. And if you need more info, please make sure that you check out VMware knowledge base article 1021976.

Also, one last word on this. I really feel that this is a technology that will make your life as a VMware admin easier, so talk to your storage admins (if that person isn’t you in the first case) or your storage vendor and ask if their arrays support VAAI. If not, ask them when they will support it. Not because it’s cool technology, but because it’s cool technology that makes your job easier.

And, if you have any questions or comments, please hit me up in the remarks. I would love to see your opinions on this.

Update: 2010-11-30
VMware guru and Yellow Bricks mastermind Duncan Epping was kind enough to point me to a post of his from earlier this week, which went into more detail on some of the upcoming features. Make sure you check it out right here.

GestaltIT, vCloud Director, Virtualization, VMware

Shorts: VMware vCloud Director installation tips

So folks, I helped a colleague install VMware vCloud Director. In case you are not aware of what vCloud Director is, here’s a very rough description.

Think about how you deploy virtual machines. Usually you will deploy one machine at a time, which is fine if you only need one server. But in larger environments you will find that applications or application systems are not based on a single server; they consist of multiple servers that segregate functions. For example, your landscape could consist of a DB server, an application server, and one or more proxies that provide access to your application servers.

If you are lucky, the folks installing everything will only request one virtual machine at a time. Usually that isn’t the case though. This is where vCloud Director comes in: it allows you to roll out a set of virtual machines at a time as a landscape. But it doesn’t stop there, since you can do a lot more: you can pool things like storage and networks, and you get tight integration with vShield to secure your environment. This should give you a very rough idea of what you can do with vCloud Director. For a more comprehensive overview, take a look at Duncan’s post here.

Anyway, let’s dig in to the technical part.

There are plenty of blog posts that cover how to set up the CentOS installation, so I won’t cover that at great length. If you are looking for that info, take a peek here. If you want to install the Oracle DB on CentOS, take a look here to see how it’s done.

Here are some tips that might come in useful during the install:

  • Use the full path to the keytool. There is a slight difference between /usr/bin/keytool, /usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre/bin/keytool and /opt/vmware/cloud-director/jre/bin/keytool. Be sure to use one of those, and if the commands to create and import your self-signed certificates are not working for some reason be sure to try a different one.

If you simply create a database and only browse through the installation guide, you might have a hard time once you run the installer. Basically, you run the “dbca” tool to create an empty database. If you by any chance forget to create the database files and run the installation binary (or the vCD configuration tool, for that matter), you will receive an error while the .sql database initialization scripts under /opt/vmware/cloud-director/db/oracle are run. The error message will tell you that there was an error creating the database.

Well, if only you had read the installation guide properly. Basically, what you do is start up the database:

sqlplus "/ as sysdba"
startup

Make sure that the path you use in the “create tablespace” command actually exists. If it doesn’t, you need to run “mkdir $ORACLE_HOME/oradata” first. Then create the tablespaces and corresponding files:

Create Tablespace CLOUD_DATA datafile '$ORACLE_HOME/oradata/cloud_data01.dbf' size 1000M autoextend on;
Create Tablespace CLOUD_INDX datafile '$ORACLE_HOME/oradata/cloud_indx01.dbf' size 500M autoextend on;

Now create a separate user that we will grant rights on the database. The password for the user is whatever you type after “identified by”:

create user vcloud identified by vcloud default tablespace CLOUD_DATA;

Make sure that you give the user the correct rights to perform all the DB operations:

grant CONNECT, RESOURCE, CREATE TRIGGER, CREATE TYPE, CREATE VIEW, CREATE MATERIALIZED VIEW, CREATE PROCEDURE, CREATE SEQUENCE, EXECUTE ANY PROCEDURE to vcloud;

Now run the setup script, or run the configure script and you should be set to go.

Clariion, CX4, EMC, FLARE, Storage

What’s new in EMC Clariion CX4 FLARE 30

CLARiiON CX4 UltraFlex I/O module - Copyright: EMC Corporation.

A little while back, EMC released a new version of its CLARiiON Fibre Logic Array Runtime Environment, or in short “FLARE”, operating environment. This release brings us to version 04.30 and again packs some enhancements that might interest you, so once more, here’s a short overview of what this update contains:

Let’s start off with some basics. Along with this update you will find updated firmware versions for the following:

    Enclosure: DAE2		- FRUMon: 5.10
    Enclosure: DAE2-ATA	- FRUMon: 1.99
    Enclosure: DAE2P	- FRUMon: 6.71
    Enclosure: DAE3P	- FRUMon: 7.81

Major changes:

  • With version 04.30.000.5.507 you get support for FCoE. Prerequisite is using a 10 Gigabit Ethernet I/O module on CX4-120, CX4-240, CX4-480, and CX4-960 arrays.
  • SATA EFD support.
  • Following that point, you can now use Fibre Channel EFD and SATA EFD in the same DAE.
  • And, you can now also mix Fibre Channel and SATA EFDs in the same RAID group.
  • VMware vStorage API support in the form of “vStorage full copy acceleration” (basically, the array takes care of copying all the blocks instead of everything being sent to and from the host) and in the form of “Compare and Swap” (an enhancement to the LUN locking mechanism).
  • Rebuild avoidance. This feature will change the routing of I/O to the service processor that still has access to all the drives in the RAID group. You do need write caching to be enabled if you want to be able to use this feature.
  • Virtual provisioning, basically EMC’s name for thin provisioning on the array.

There are some nice features in there, but for me personally the virtual provisioning, the FCoE support and the vStorage API support are the main ones.

One thing that caught my eye was in the section called limitations for FLARE version 04.30.000.5.507. In the release notes you will find the following statement:

Host attach support – Supported host attached systems are limited to the following operating systems: Windows, VMWare, and Linux

Which would mean that you have a problem if you are using something else, like Solaris or HP-UX. I’m trying to get some confirmation, and I’ll update this post as soon as I have more info.

Update

The statement has changed in the meantime:

Host attach support – Supported hosts that can be attached over an FCoE connection are limited to the following operating systems: Windows, VMWare, and Linux

Which means that this is just related to FCoE connected hosts.


After some feedback on Twitter from, among others, Andrew Sharrock, I thought it might be wise to spend a few sentences on the Virtual Provisioning feature.

In short, Virtual Provisioning was already introduced with FLARE 28. The problem was that, at the time, you could only use the feature with thin pools. Basically, with this update you get support for a newer version of the feature. Things that were added are:

  • Thick LUNs
  • LUN expand and shrink
  • Tiering preference (storage allocation from pools with mixed drives and different performance characteristics)
  • Per-tier tracking support of pool usage
  • RAID 1/0 support for pools
  • Increased limits for drive usage in pools

General, VMworld

Lack of updates and VMworld Copenhagen 2010

Hey folks,

first of all, I need to apologize. There have been no updates for quite some time now. Things were hectic with me wrapping things up with my previous employer, and with getting organized at my new spot. Things are slowly coming together, but it’s been quite time intensive, which has left me with little time to actually write much for my blog.

But, things are hopefully changing. I’m headed for VMworld 2010 in Copenhagen, Denmark on Sunday, and I’m bringing along my digital notebooks. Since I’m still fairly new in my new role, I won’t have quite the same schedule as my colleagues, and I hope that this will allow me some time to visit some of the sessions and create some notes that I’m able to share with you all.

So, keep your eyes open for things to come in this space!

General

Shorts: Get your free ESTA while it’s hot! – No longer valid

Word got out this morning that starting September 8th 2010, you will need to pay USD 14 when you apply for a travel approval for the US. You can do that by filling out the so-called ESTA form. Most people without a visa for the USA remember the green I-94W visa waiver form that you needed to fill out. This has mostly been replaced by the online version of the form, which can be found on the ESTA website and can be requested by travelers from the Visa Waiver Program countries.

What most people don’t seem to know is that you can create a request on the website that is valid for two years. Most people I know (and I was one of them) used to fill out an ESTA application on the website prior to each visit to the US. Basically, there’s nothing wrong with that, and the approach is still valid. But starting September 8th 2010, you will simply have to pay USD 14 each time you fill out the form. However, if you fill out the form prior to this date, you can create a request that is valid for two years once it has passed all checks.

How do you do that? It’s actually quite simple. The fields that state “Address While In The United States” and “Travel Information” are not mandatory. So, the simplest way is to fill out the form on the ESTA website and leave those items blank. If you are granted access rights with your request, your approval will not just be valid for one trip to the USA, but for all trips in the upcoming two years, without having to pay for additional requests. Depending on your travel frequency, this might just save you a bit of money.

VMware, VMworld

UPDATE! Contest: Get away to VMworld with Gestalt IT / Pay it forward!

Some of you may have already read about the contest over at the Gestalt IT website, but I thought this contest was nice enough to give you an overview here and link back to the contest.

Now, I’m guessing that most of the folks reading here will be familiar with the event called VMworld, but for those that aren’t, here’s a short overview:

The annual VMworld gathering in San Francisco has become the central event for virtualization-related companies. Although focused on VMware, the conference draws many companies. And the labs and sessions are really awesome!

So, what is Gestalt IT doing? Because most people can’t afford to attend if their boss is not allowing them to go, Gestalt IT decided to set up a contest that will cover the following (if you should win):

  • One conference ticket.
  • One roundtrip air ticket from one of the major airports near you to SFO or another airport in the San Francisco area.
  • Three nights at a hotel within 1 mile of the Moscone Center in San Francisco (VMworld runs August 30 through September 2).

Now the final question would be what you need to do to enter, right? Well, we decided that VMworld was created for the community of VMware customers, users and partners. So, what we want to know from you is what you will do for our and/or your community by attending VMworld. Will you take notes from sessions and try to help people back home? Are you going to try and get some video interviews that will answer the burning questions your community may have? We want to know how you plan on “paying it forward”!

So, what are you waiting for? Get on over to the contest page to read all of the details and to enter the contest. We look forward to seeing your entries!

Update!

I received word that the contest has been extended. The winners will be announced Friday, August 13th. Yes, you read that right, winners. We were lucky enough to find some additional sponsors, which means that we will now give away two trips to VMworld. Check out the details here!


Also, a special thank you to Symantec and Xsigo for their help as sponsors. And a thank you to two wonderful additional sponsors, Zetta and Veeam, who made it possible to pick two winners.

Uncategorized, Virtualization, VMware, vSphere

Shorts: What is it about cpuid.corespersocket on vSphere?

Time for another short! The Google searches leading to this blog show people searching for the cpuid.corespersocket setting. In this short I’ll try to explain what this setting is for and how it behaves. So, let’s dig right in!

The cpuid.corespersocket setting

In a sense, you would assume that the name of the setting says it all. And in a sense, it does. In the physical world, you have a number of sockets on your motherboard. This number of sockets is normally also the number of physical CPUs that you have on said motherboard (at least in an ideal world), and each CPU will have one or more cores on it.

Wikipedia describes this in the following way:

One can describe it as an integrated circuit to which two or more individual processors (called cores in this sense) have been attached.

…..

The amount of performance gained by the use of a multi-core processor depends very much on the software algorithms and implementation. In particular, the possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously; this effect is described by Amdahl’s law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or beyond even that if the problem is split up enough to fit within each processor’s or core’s cache(s) due to the fact that the much slower main memory system is avoided.
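The Amdahl’s law formula mentioned in the quote can be put into a few lines (a generic sketch, not VMware-specific):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup per Amdahl's law: 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 90% of the workload parallelizable, extra cores flatten out quickly:
for n in (2, 4, 8):
    print(n, round(amdahl_speedup(0.9, n), 2))  # 1.82, 3.08, 4.71
```

Even at 90% parallelizable, eight cores get you less than a 5x speedup, which is exactly the “limited by the fraction of the software that can be parallelized” point above.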

Now, this sounds quite good, but some of you may ask what kind of influence this has on your virtualized systems. The most obvious answer would be “none at all”. This is because, by default, your virtualized system will see the cores as physical CPUs and be done with it.

So, now you are probably wondering why VMware would even distinguish between cores and sockets. The answer is quite simple: it’s due to licensing. Not so much by VMware, but by the software or operating system that you would like to virtualize. You see, some of that software is licensed per core, and some is licensed by the number of sockets (some even combine the two).

So how do I use it?

As with all things computer related… it depends. When you are using ESX 3.5, you have no chance of using it. With ESX 4, you can actually use this feature, but it is not officially supported (someone please point me in the right direction if this is incorrect). And starting with ESX 4.1, the setting is officially supported and even documented in the VMware Knowledge Base as KB article 1010184.

Simply put, you can now create a virtual machine with, for example, 4 vCPUs and set cpuid.coresPerSocket to 2. This will make your operating system assume that you have two CPUs, and that each CPU has two cores. If you create a machine with 8 vCPUs and again select a cpuid.coresPerSocket of 2, your operating system will report 4 dual-core CPUs.

You can actually set this value by either going this route:

  1. Power off the virtual machine.
  2. Right-click on the virtual machine and click Edit Settings.
  3. Click Hardware and select CPUs.
  4. Choose the number of virtual processors.
  5. Click the Options tab.
  6. Click General, in the Advanced options section.
  7. Click Configuration Parameters.
  8. Include cpuid.coresPerSocket in the Name column.
  9. Enter a value (try 2, 4, or 8) in the Value column.

    Note: This must hold:

    #VCPUs for the VM / cpuid.coresPerSocket = An integer

    That is, the number of vCPUs must be divisible by cpuid.coresPerSocket. So if your virtual machine is created with 8 vCPUs, coresPerSocket can only be 1, 2, 4, or 8.

    The virtual machine now appears to the operating system as having multi-core CPUs with the number of cores per CPU given by the value that you provided in step 9.

  10. Click OK.
  11. Power on the virtual machine.
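The divisibility rule from the note above can be sanity-checked with a small sketch (Python used purely for illustration):

```python
def sockets_presented(num_vcpus: int, cores_per_socket: int) -> int:
    """Sockets the guest OS will see; the vCPU count must be evenly
    divisible by cpuid.coresPerSocket (see KB 1010184)."""
    if num_vcpus % cores_per_socket != 0:
        raise ValueError(f"{num_vcpus} vCPUs not divisible by {cores_per_socket}")
    return num_vcpus // cores_per_socket

print(sockets_presented(8, 2))  # 8 vCPUs as dual-core sockets -> 4 sockets
```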

If the setting isn’t shown, for example for those who want to experiment with it under ESX 4.0, you can create the value in the following way:

  1. Power off the virtual machine.
  2. Right-click on the virtual machine and click Edit Settings.
  3. Click the Options tab.
  4. Click General, under the Advanced options section.
  5. Click Configuration Parameters.
  6. Click Add Row.
  7. Enter “cpuid.coresPerSocket” in the Name column.
  8. Enter a value (try 2, 4, or 8) in the Value column.
  9. Click OK.
  10. Power on the virtual machine.

To check if your settings actually worked, you can use the Sysinternals tool called Coreinfo on your Windows systems, and on Linux you can perform a simple “cat /proc/cpuinfo” to see if everything works.
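If you want to script the Linux check, a rough sketch of parsing /proc/cpuinfo-style output could look like this (the sample text below is made up for illustration):

```python
SAMPLE = """\
processor : 0
physical id : 0
core id : 0

processor : 1
physical id : 0
core id : 1

processor : 2
physical id : 1
core id : 0

processor : 3
physical id : 1
core id : 1
"""

def topology(cpuinfo_text: str) -> dict:
    """Count distinct sockets ('physical id') and cores per socket ('core id')."""
    sockets = {}
    phys = None
    for line in cpuinfo_text.splitlines():
        if line.startswith("physical id"):
            phys = line.split(":")[1].strip()
            sockets.setdefault(phys, set())
        elif line.startswith("core id") and phys is not None:
            sockets[phys].add(line.split(":")[1].strip())
    return {"sockets": len(sockets),
            "cores_per_socket": max((len(c) for c in sockets.values()), default=0)}

print(topology(SAMPLE))  # {'sockets': 2, 'cores_per_socket': 2}
```

A 4-vCPU VM with cpuid.coresPerSocket set to 2 should report exactly this kind of 2-socket, dual-core layout.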

as a Service, GestaltIT, Virtualization

Virtualization makes me say “Who cares about your hardware or operating system?!”

When you come to think about it, people who work in the IT sector are all slightly nuts. We all work in an area that is notorious for trying to make itself not needed. When we find repetitive tasks, we try to automate them. When we have a feeling that we can improve something, we do just that. And by doing that, we try to remove ourselves from the equation where we possibly can. In a sense, we try to make ourselves invisible to the people working with our infrastructure, because a happy customer is one that doesn’t even notice that we are there or did something to allow him to work.

Traditional IT shops were loaded with departments that were responsible for storage, for networking, for operating systems and loads more. The one thing that each department has in common? They tried to make everything as easy and smooth as possible. Usually you will find loads of scripts that perform common tasks, automated installations and processes that intend to remove the effort from the admins.

In comes a new technology that allows me to automate even more, that removes the hassle of choosing the right hardware. That helps me reduce downtimes because of (un)planned maintenance. It also helps me reduce worrying about operating system drivers and stuff like that. It’s a new technology that people refer to as server virtualization. It’s wonderful and helps me automate yet another layer.

All of the people who are into tech will now say “Cool! This can help me make my life easier”, and your customer will thank you, because it’s an additional service you can offer and it helps your customer work. But the next question your customer is going to ask is probably something along the lines of “Why can’t I virtualize the rest?”, or perhaps even “Why can’t I virtualize my application?”. And you know what? Your customer is absolutely right. Companies like VMware are already sensing this, as can be read in an interview at GigaOM.

The real question your customer is asking is more along the lines of “Who cares about your hardware or operating system?!”. And as much as it pains me to say it (being a person who loves technology), it’s a valid question. When it comes to true virtualization, why should it bother me if I am running on Windows, Unix, Mac or Linux? Who cares if there is an array in the background that uses “one point twenty-one jiggawatts” to transport my synchronously mirrored historic data back to the future?

In the long run, I as a customer don’t really care about either software or hardware. As a customer I only care about getting the job done, in a way that I expected to, and preferably as cheap as possible with the availability I need. In an ideal world, the people and the infrastructure in the back are invisible, because that means they did a good job, and I’m not stuck wondering what my application runs on.

This is the direction we are working towards in IT. It’s nothing new, and the preference for doing this in a centralized or decentralized fashion seems to change from decade to decade, but the one thing that has remained constant is that your customer only cares about getting the job done. So, it’s up to us. Let’s get the job done and try to automate the heck out of it. Let’s remove ourselves from the equation, because time that your customer spends talking to you is time spent not using his application.

GestaltIT, High Availability

How do you define high availability and disaster recovery?

A while back I was on a call with someone who asked me the difference between high availability (HA) and disaster recovery (DR), saying that there are so many different solutions out there and that a lot of people seem to use the terminology but are unable to explain anything more about these two descriptions. So, here’s an attempt to demystify things.

First of all, let’s take a look at the individual terms:

High Availability:

According to Wikipedia, you can define availability in the following ways:

The degree to which a system, subsystem, or equipment is operable and in a committable state at the start of a mission, when the mission is called for at an unknown, i.e., a random, time. Simply put, availability is the proportion of time a system is in a functioning condition.

The ratio of (a) the total time a functional unit is capable of being used during a given interval to (b) the length of the interval.

And most online dictionaries seem to have a similar definition of availability. When we talk about HA, we imply that we want to increase the proportion of time the system is in a functioning condition.

Going by the above, you will also notice that there is no fixed definition of availability. Simply put, this means you need to put your own definition in place when talking about HA: you need to define what HA means in your environment. I’ve had customers that needed HA and defined it as the system having a certain amount of uptime, which is one way to measure it.

On the other hand, you would be hard pressed to call a system available if you were able to work with it, but the data you were working with was corrupted because one of your power users made an error during a copy job and wrote an older data set to the wrong spot. The system is, in itself, available. You can log on to it, you can work with it, but the output you are going to get will be wrong.

To me, such a scenario would mean that your system isn’t available. After all, it’s not about everything being online. It’s about using a system in the way you would expect it to work. But when you ask most people in IT about availability, the first thing you will likely hear is something related to uptime or downtime. So, my tip to you is once again:

Define what “available” means to you and your organization/customer!

Disaster Recovery:

Let’s do the same thing as before and turn to some general definitions. Wikipedia defines disaster the following way:

A disaster is a perceived tragedy, being either a natural calamity or man-made catastrophe. It is a hazard which has come to fruition. A hazard, in turn, is a situation which poses a level of threat to life, health, property, or that may deleteriously affect society or an environment.

And recovery is defined the following way (when it comes to health):

Healing, or Cure, the process of recovering from an injury or illness.

So, in a nutshell, this is about bouncing back to your feet once disaster strikes. Now again, it’s important to define what you would call a disaster, but at least there seems to be some sort of common understanding that anything that gets you back up and running after an entire site goes down usually falls under the label of a DR solution.

It all boils down to definitions!

When you talk to other companies or vendors about HA and/or DR, you will soon notice that most have a different understanding of what HA and DR are. Your main focus should be to have a clear definition for yourself. Try to find out the importance and value of your solution and base your requirements on that. Ask yourself simple questions like for example:

  • What is the maximum downtime I can cope with before I need to start working again? 8 hours per year? 1 hour per year? 4 hours per month? What are my RPO and RTO?
  • How do I handle planned maintenance? Can I bring everything down or do I need to distribute my maintenance across independent entities?
  • Can I afford the loss of any data at all? Can I afford the partial loss of data?
  • What if we see a city-wide power outage? Do I need a failover site, or are all my users in the same spot and won’t be able to work anyway?

Questions like these will help you realize that not everything you have running has the same value. Your development system with 6,000 people working on it worldwide might need better protection than your production system that is only being used by 500 people spread across the Baltic region.
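To put numbers on downtime budgets like the ones in the questions above, the arithmetic is simple (a sketch; pick the period that matches your own SLA definition):

```python
def availability_pct(allowed_downtime_hours: float, period_hours: float = 365 * 24) -> float:
    """Availability implied by a downtime budget over a period (default: one year)."""
    return 100.0 * (1.0 - allowed_downtime_hours / period_hours)

print(round(availability_pct(8), 3))   # 8 hours/year  -> 99.909
print(round(availability_pct(1), 4))   # 1 hour/year   -> 99.9886
```

Going from an 8-hour budget to a 1-hour budget only moves the percentage from roughly “three nines” to “four nines”, yet the cost of the solution typically grows much faster than that, which is exactly why the definition matters.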

Or, in short:

Knowing what kind of protection you need is key. The fact is that HA and DR solutions never come cheap. If you need the certainty that your solution is available and able to recover from a disaster, you will notice that the price tag quickly skyrockets. Which is another reason to make sure you know exactly what kind of protection you need; creating that definition is the most important starting point. Once you have your own definition, make sure that you communicate those definitions and requirements so that all parties are on the same page. It should make your life a little easier in the end.

Cisco, EMC, General, VMware

It’s all about change and passion

Some of you who read the title of this post will already have a hunch what this is all about. Heraclitus seems to be the person who first stated:

Nothing endures but change.

And I can only agree with that. I remember reading a post from Nick Weaver about an important change in his professional life, and I love this quote:

By taking this position I am intentionally moving myself from the top man on the totem pole to the lowest man on the rung.

And I think that most people who have read Nick’s blog know that this wasn’t entirely the truth, especially when looking at what he has been able to do until now.

Well, Nick can rest assured now. There’s actually one person on the team who is “lower on the rung”. That person would be me.

Time for a change!

I am joining EMC and taking on the role of vSpecialist, or as my new contract says “Technical Consultant VCE”.

I am also going to be leaving my comfort zone and leave a team of people behind that have been great to work with. I have been working at SAP for seven years now, and the choice to leave wasn’t easy. I was lucky enough to have worked with a multitude of technologies in an environment that was high paced and stressful, but very rewarding, and I want to thank all of my colleagues for making the journey interesting! Even so, it’s time for me to make a change.

I was lucky enough to get to know several people who already work in a similar role, and if there’s one thing that distinguishes them in my mind, it would be the passion they have for their job. This was actually the main reason for me to make the switch to EMC. It’s not about making big bucks, it’s not about being a mindless drone in the Evil Machine Company or drinking the Kool-Aid; it’s about getting a chance to work with people who share a passion and are experts at what they do. It’s about the chance to prove myself and perhaps one day join their ranks as an expert.

So, while I wrap things up here at SAP, if all goes well I will be joining the vSpecialist team on October 1st, and hopefully you will bear with me while I find my way going through this change, and I do hope you drop by every now and then to read some new posts from me.

See you on the other side!