Certification, General, Virtualization

How do I get to be “that good”?

This is a post that I’ve been struggling with for quite some time now. Did you ever get that feeling of seeing folks around you achieve things that you envisioned for yourself? People seem to reach a certain level of knowledge, and you strive to get to that level yourself, asking the question: how can I reach their level, how do I get to be “that good”?

I joined EMC just over 2 years ago in my current role as a vSpecialist. When I joined the team, I always felt like I was the dumbest guy on it. Since then, I’ve learned so much about all kinds of topics, and I think I’ve achieved a pretty decent level of knowledge surrounding virtualization and many of the encompassing technologies. I’ve been lucky enough to be awarded the vExpert title twice, and I was able to work on my certifications (VCP, VCAP, EMC Cloud Architect, EMC IT-as-a-Service Expert).

Still, you see folks around you working on stuff, and the more you learn, the more you learn about what you don’t know. As for myself, I still need to work on my networking knowledge; I realize more and more that it needs brushing up. The role of vSpecialist inside the company is evolving, and while we still support the basic virtualization stack, we are now starting to shift our focus, since a lot of folks are realizing that the hypervisor itself isn’t that “thrilling” anymore. Most hypervisors will perform their basic function at a good level. That means we need to start looking at what we can do with the technologies that build upon the features and functions enabled by using a hypervisor.

And then, there is the part about where you would like to go as an individual. I don’t perform designs on a daily basis for my work. I’ve been involved in roughly 4 very large design projects in my time as a vSpecialist, but that doesn’t qualify me as a landscape designer or architect. I still have a personal goal though, to attain the VCDX certification.

Why? Yeah, the title sounds nice and all. But I feel like it’s an important skill set to have. And it’s confirmation from a select group of peers that you have attained a certain level. You understand how things interconnect and are able to form a holistic view. It shows that, given the time needed, you are able to understand the customer’s requirements and map them to a blueprint that will actually help the customer achieve a set goal.

For me the challenge is the way I learn (I absolutely need hands-on work to make stuff stick in my head and make the logical link), and finding the time to actually learn both what I need and what I want to learn.

In the end, I guess that we get to be “that good”, by looking at examples of people who we see as being “that good”, trying to learn from them in ways that help us enable ourselves. We spend the time because we don’t have any other choice. I want to learn, it’s in my DNA. The biggest problem in actually achieving the next level is more of a mental challenge as I see it, since that next level is a moving target. Usually we reach that next level without even knowing, just by being dedicated and motivated.

I know this isn’t a real technical post, and I’m not even 100% sure this post is of use to anyone besides myself, but it’s something that I needed to write down to clear my own head. So here goes, off to the next level, and maybe one day I’ll actually be that good. And I promise, the next post will be more technical in nature again. And if you should have any comments, I’m looking forward to reading them. 🙂

as a Service, General

How those SLA metals are losing their value

I was on a call earlier, and a thought struck me again. I’ve been seeing people all over the globe use precious metals to describe their service levels, resources, and/or properties. And you know what? It doesn’t work!

I see examples every day. I’ve created a service offering, and it goes by the name “Platinum”. You get 4 of the fastest servers out there, 512GB of RAM per server, and we’ll throw in some SSDs.

So, what do I do next year?

Since platinum is still platinum, what happens when the servers that I ordered don’t have the same CPU frequency? Or people would expect double the amount of RAM for the server? Maybe the price for the Solid State Disks went down, and I can now get double or triple the capacity for the same amount of money (well, maybe not next year, but what about the year after)?

When you actually offer an internal service, it’s key to think about what you are actually offering. Are you describing your service? If so, a general name might not be bad. Car manufacturers have been doing this for ages – “Get the new XYZ executive edition!” – and while the model name rarely changes when a revision comes out, they’ve added a year or an internal version number to distinguish between revisions. And you ordered your car just prior to the new launch? Well, you’re out of luck, but we’ll gladly sell you the newer version.

Now change places and take on the role of the car manufacturer. Would you still call your currently fastest model “Platinum” when you know that in two weeks’ time you’ll be working on an even faster engine?

No you wouldn’t!

You would pick something that describes the product (or service) you are going to offer. If you want to offer that server class I mentioned before, pick something sensible. Describe what the service does, perhaps add a revision number or a time stamp. Instead of calling it “Jumbo-servers Platinum”, call it “Jumbo-servers Q1 2012, 4x XYZ virtualization server, quad core, 512GB RAM, 1 SSD 120GB”.
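If your service catalog is generated from data anyway, this naming rule is easy to automate. Here is a minimal sketch of the idea; the `ServerOffering` class and the short-code format are made up for illustration, not part of any real catalog tool:

```python
from dataclasses import dataclass

@dataclass
class ServerOffering:
    family: str      # e.g. "Jumbo-servers"
    quarter: str     # revision stamp, e.g. "Q1 2012"
    count: int       # number of servers in the bundle
    model: str       # server model designation
    cores: str       # core count per CPU, e.g. "quad"
    ram_gb: int      # RAM per server in GB
    ssd: str         # SSD description

    def catalog_name(self) -> str:
        # The long, descriptive name suggested above.
        return (f"{self.family} {self.quarter}, {self.count}x {self.model} "
                f"virtualization server, {self.cores} core, "
                f"{self.ram_gb}GB RAM, {self.ssd}")

    def short_code(self) -> str:
        # A compact code that still encodes the revision stamp.
        stamp = self.quarter.replace(" ", "")
        return f"{self.family[:4].upper()}-{stamp}-{self.count}x{self.ram_gb}G"

offer = ServerOffering("Jumbo-servers", "Q1 2012", 4, "XYZ", "quad", 512, "1 SSD 120GB")
print(offer.catalog_name())
print(offer.short_code())
```

Order the same bundle next year and the quarter stamp changes with it, so “Platinum” never silently means two different things.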

And if you can’t make the name that long, think of useful shorter codes. Spread the word: show people that using gold, silver and bronze as your service levels isn’t a good start for new projects, and tell them that gold isn’t going to be gold in one year.

Oh, and before someone on Twitter says something. Unobtanium is cool, but I wouldn’t want it as a name. Nor would I prefer Yuan Renminbi, atomic weights or Apple specs. Although those gave me a good chuckle!

Virtualization, VMware, VMworld

VMware is raising its bar, still time to register!

I’m sure most of you will already know this, but some might have forgotten to register, or wanted to but never got around to it, so here’s a reminder.

Tomorrow, July 12th, VMware is having an online event called “Raising the Bar, Part V”, where VMware is going to be “presenting on the next generation of cloud infrastructure”, or as VMware has put it:

Register now for this online event

July 12, 2011
9am-Noon Pacific Time

VMware CEO Paul Maritz and CTO Steve Herrod will be presenting on the next generation of cloud infrastructure. Join us and experience how the virtualization journey is helping transform IT and ushering in the era of Cloud Computing. 

9:00-9:45 Paul and Steve present – live online streaming
10:00-12:00 five tracks of deep dive breakout sessions
10:00-12:00 live Q&A with VMware cloud and virtualization experts

The event is free — if you sign up today you'll get an email reminder. This is a not-to-miss event!

These vExperts will be on-site at the event in San Francisco and will be covering the event live! (Also watch for live-tweeting from @VMwareEvents and @jtroyer with the #vmwarecloud hashtag)

After the event, you'll still be able to ask questions on Twitter. And on Wednesday, we'll be recapping the event on our VMware Community Roundtable — join us for an hour of live Q&A.

See you there!
John 

Some people say they already know what’s coming. For me, that would be all the more reason to register and see what kind of cool stuff VMware is showing during the event. And, if you register for the event, you’ll automatically be entered into the free drawing for a ticket to VMworld. More info about the drawing can be found here.

as a Service, Certification, Virtualization

How about them cloud architects?

I attended EMC training last week. To be more specific, I attended the “Virtualized Data Center and Cloud Infrastructure” training, and took the “E20-018 – Virtualized Infrastructure Specialist for Cloud Architects” exam afterwards.

First off, I passed the exam, which I’m happy about. Second, I’m still not sure how happy I am about the training and the exam itself.

The training

When it comes to the training itself, we are talking about a week-long, instructor-led training. It covers several aspects of what would or could be considered elements of a virtualized data center, as well as cloud technologies.

You will start off with an overview of some of the definitions that make up cloud computing. One reference is the definition of cloud computing as defined by NIST. There is a reference to the three service models of cloud computing, as well as the phases that you usually see when building a cloud infrastructure and the five key characteristics of a cloud delivery model.

You will also get an overview of the technologies that can be used to deliver a cloud-like infrastructure. That includes things like different synchronization models, hypervisor types, link speeds (think of dark fibre as an example) and technologies used within a virtualized environment, ranging from live migration to the ability to offer services and service catalogs in a self-service environment.

You also take a stab at things like governance, risk and compliance. You will get an idea of the things you can run into when you create or even work with such an environment. You will get references to things like the Sarbanes–Oxley Act, and business-driven frameworks and best practices like ITIL.

Mix that up with some labs that focus on giving you food for thought when designing a cloud-like infrastructure (there is no hands-on work in the labs, just paper and teamwork), and it sounds like you have a pretty decent training.

Or does it?

I have two main problems with the course itself. For one, I think that it’s based too much on the standards and concerns of the US. I think that people absolutely need to be aware of things like the Sarbanes–Oxley Act, but I also feel that models and concerns should be highlighted for companies that are not bound by this act. Or at least tell these people why the rationale behind these acts might be useful to implement, beyond the “you legally have to” way of explaining.

Don’t get me wrong, you need to consider these kinds of things, especially in a cloud-like environment where you can more quickly cross global and legal boundaries, but then also focus on things that you might encounter outside of the United States.

The second concern is much harder to address, since one might get the feeling that you are not being taught anything new. And I think this is the much bigger issue. In a sense, anyone who has been working on the “* as a Service” environments will not really encounter anything new, and they might feel that EMC is just stating the obvious in their course. Techies that attend the training will probably leave with a feeling of “not enough examples and hands on, too much fluffy stuff”.

I think that they are partially right. For me it was largely a repeat of things I have already encountered more than once. On the other hand, it’s good to see that people are working on standards, and the course itself brings together a lot of the things that you encounter but might not knowingly consider. It’s “food for thought” if you will. And as with all such classes, you get a chance to exchange ideas and perspectives with your classmates, and that might be the most valuable thing about this class, especially with a topic that is harder to grasp (from an actual-technology-used / where-do-I-start standpoint).

The exam

Read carefully! That’s the biggest piece of advice I can give you, since sometimes the wording of the questions is just plain odd. Also, make sure you have an idea of the technology used in data centers, and of virtualization across different tiers. Examples of the latter could be things like storage and network virtualization.

Spend some time getting to know what governance, risk and compliance is all about. Have some sort of insight into the driving factors behind GRC.

The certification can be achieved even without the training, but I feel the way some questions are worded is perhaps the biggest issue you will face in passing the exam if you already have some experience designing or working in cloud-like environments.

And let me wish you good luck if you attempt the certification! One important note: the EMC E20-001 is a mandatory prerequisite before attempting the certification, so make sure that you have that one if you want to be an EMC certified cloud architect.

as a Service, GestaltIT, Virtualization

Virtualization makes me say “Who cares about your hardware or operating system?!”

When you come to think about it, people who work in the IT sector are all slightly nuts. We all work in an area that is notorious for trying to make itself not needed. When we find repetitive tasks, we try to automate them. When we have a feeling that we can improve something, we do just that. And by doing that, we try to remove ourselves from the equation where we possibly can. In a sense, we try to make ourselves invisible to the people working with our infrastructure, because a happy customer is one that doesn’t even notice that we are there or did something to allow him to work.

Traditional IT shops were loaded with departments that were responsible for storage, for networking, for operating systems and loads more. The one thing that each department has in common? They tried to make everything as easy and smooth as possible. Usually you will find loads of scripts that perform common tasks, automated installations and processes that intend to remove the effort from the admins.

In comes a new technology that allows me to automate even more, that removes the hassle of choosing the right hardware. That helps me reduce downtimes because of (un)planned maintenance. It also helps me reduce worrying about operating system drivers and stuff like that. It’s a new technology that people refer to as server virtualization. It’s wonderful and helps me automate yet another layer.

All of the people who are into tech will now say “cool! This can help me make my life easier”, and your customer will thank you because it’s an additional service you can offer, and it helps your customer work. But the next question your customer is going to ask is probably going to be something along the lines of “Why can’t I virtualize the rest?”, or perhaps even “Why can’t I virtualize my application?”. And you know what? Your customer is absolutely right. Companies like VMware are already sensing this, as can be read in an interview at GigaOM.

The real question your customer is asking is more along the lines of “Who cares about your hardware or operating system?!”. And as much as it pains me to say it (being a person who loves technology), it’s a valid question. When it comes to true virtualization, why should it bother me if I am running on Windows, Unix, Mac or Linux? Who cares if there is an array in the background that uses “one point twenty-one jiggawatts” to transport my synchronously mirrored historic data back to the future?

In the long run, I as a customer don’t really care about either software or hardware. As a customer I only care about getting the job done, in the way I expect, and preferably as cheap as possible with the availability I need. In an ideal world, the people and the infrastructure in the back are invisible, because that means they did a good job, and I’m not stuck wondering what my application runs on.

This is the direction we are working towards in IT. It’s nothing new, and the concept of doing this in a centralized or decentralized fashion seems to change from decade to decade, but the one thing that remained constant is that your customer only cared about getting the job done. So, it’s up to us. Let’s get the job done and try to automate the heck out of it. Let’s remove ourselves from the equation, because time that your customer spends talking to you is time spent not using his application.

EMC, Virtualization, VMware, VPLEX

EMC VPLEX – Introduction and link overview

I’m currently visiting the Boston area because I’m attending EMC World. One of the bigger introductions made here yesterday was actually a new appliance called the VPLEX. In short, the VPLEX is all about virtualizing the access to your block based storage.

Let me give you a quick overview of what I mean by virtualized access to block based storage. With VPLEX, you can take (almost) any block based storage device on a local and a remote site, and allow active reads and writes on both sides. It’s an active/active setup that allows you to access any storage device via any port when you need to.

You can get two versions right now, the VPLEX Local and the VPLEX Metro. Two other versions, the VPLEX Geo and the VPLEX Global, are planned for early next year. And since there is so much information that can be found online about the VPLEX, I figured I’d create a post here that will help me find the links when I return, and also give you one spot that can help you find the info you need.

An overview with links to more information on the EMC VPLEX:

Official links / EMC company bloggers / VMware company bloggers

Blogs and media coverage:

Now, if I missed one or more links, please just send me a tweet or leave a comment and I will make sure that the link is added to this post.

GestaltIT, Networking, Stack, Storage, Virtualization

My take on the stack wars

As some of you might have read, the stack wars have started. One of the bigger coalitions announced in November 2009 was that between VMware, Cisco and EMC, aptly named VCE. Hitachi Data Systems announced something similar and partnered up with Microsoft, but left everyone puzzled about the partner that will be providing the networking technology in its stack. Companies like IBM have been able to provide customers with a complete solution stack for some time now, and IBM will be sure to tell its customers that they did so and offered the management tools in the form of anything branded Tivoli. To me, IBM’s main weakness is not so much the stack that they offer as the sheer number of solutions and the lack of one tool to manage it all, let alone getting an overview of all possible combinations.

So, what is this thing called the stack?

Actually the stack is just that, a stack. A stack of what, you say? A stack of solutions, bound together by one or more management tools, offered to you as a happy meal that allows you to run the desired workloads on this stack. Or to put things more simply and quote from the Gestalt IT stack wars post:

  • Standard hardware configurations are specified for ease of purchasing and support
  • The hardware stack includes blade servers, integrated I/O technology, Ethernet networking for connectivity, and SAN or NAS storage
  • Unifying software is included to manage the hardware components in one interface
  • A joint services organization is available to help in selection, architecture, and deployment
  • Higher-level software, from the virtualization hypervisor through application platforms, will be included as well

Until now, we have usually seen a standardized form of hardware, including storage and connectivity. Vendors mix that up with one or multiple management tools and tend to target some form of virtualization. Finally a service offering is included to allow the customer to get service and support from one source.

This strategy has its advantages.

Compatibility is one of my favorites. You no longer need to work through compatibility guides that are 1400 pages long and will burn you for installing a firmware version that was just one digit off and is now no longer supported in combination with one of your favorite storage arrays. You no longer have to juggle different release notes from your business warehouse provider, your hardware provider, your storage and network provider, your operating system and tomorrow’s weather forecast. Trying to find the lowest common denominator through all of this is still something magical. It’s actually a form of dark magic that usually means working long hours to find out if your configuration is even supported by all the vendors you are dealing with.

This is no longer the case with these stacks. Usually they are purpose or workload built, and you have one central source where you get your support from. This source will tell you that you need at least firmware version X.Y on these parts to be eligible for support, and you are pretty much set after that. And because you are working with a federated solution and received management tools for the entire stack, your admins can pretty much manage everything from this one console or GUI and be done with it. Or, if you don’t want to do that, you can use the service offering and have it done for you.

So far so good, right?

Yes, but things get more complicated from here on. For one, there is a major problem, and that is flexibility. One of the bigger concerns came up during the Gestalt IT Tech Field Day vBlock session at Cisco. With the vBlock, I have a fixed configuration, and it will run smoothly and within certain performance boundaries as long as I stick to the specifications. The vBlock is a quite obvious example: if I add more RAM to a server blade than is specified, I no longer have a vBlock and basically no longer have those advantages previously stated.

Solution stacks force me to think about the future. I might be an Oracle shop now as far as my database goes. And Oracle will run fine on my newly purchased stack. But what if I want to switch to Microsoft SQL Server in 3 years, because Mr. Ellison decided that he needs a new yacht and I no longer want to use Oracle? Is my stack also certified to run a different SQL server, or am I no longer within my stack boundaries, having lost my single service source or the guaranteed workload it could hold?

What about updates for features that are important to me as a single customer? Or what about the fact that these solution stacks work great for new landscapes, or in a highly homogeneous environment? But what about those other Cisco switches that I would love to manage from the tools that are offered within my vBlock, but are outside of the vBlock scope, even if they are the same models?

What about something as simple as “stack lock-in”? I don’t really have a vendor lock-in, since only very few companies have the option of offering everything first hand. Microsoft doesn’t make server blades, Cisco doesn’t make SAN storage, and that list goes on and on. But with my choice of stack, I am now locked into a set of vendors, and while I certainly have some tools to migrate into that stack, migrating out is an entirely different story.

The trend is the stack, it’s as simple as that. But for how long?

We can see the trend clearly. Every vendor seems to be working on a stack offering. I’m still missing Fujitsu as a big hardware vendor in this area, but I am absolutely certain we will see something coming from them. Smaller companies will probably offer part of their portfolio under some sort of OEM license or perhaps features will just be re-branded. And if they are successful enough, they will most likely be swallowed by the bigger vendors at some point.

But as with everything in IT, this is just a trend. Anyone who has been in the business longer than me can probably confirm this. We started with centralized systems, then moved towards decentralized environments. Now we are on the move again, centralizing everything.

I’m actually much more interested to see how long this trend will continue. I am certain that we will see more companies offer a complete solution stack, or join coalitions to offer said stack. I still think that Oracle was one of the first to point in this direction, but they were not the first to offer the complete stack.

So, how do you think this is going to continue? Do you agree with us? What companies do you think are likely to be swallowed, or will we see more coalitions from smaller companies? What are your takes on the advantages and disadvantages?

I’m curious to hear your take on this so let me know. I’m looking forward to what you have to say!

as a Service, General, GestaltIT

Jack of all trades, master of… the solution stack?

Stevie Chambers wrote something in a tweet last night. He stated the following:

The problem with an IT specialist is that he only gets to do the things he’s already good at, like building a coffin from the inside.

And my first thought was that he’s absolutely right. A lot of the people I know are absolute aces or specialists in their own area. I’ll talk to the colleagues over in the Windows team, and they can tell you everything about the latest version of Windows and know each nook and cranny of their system. I’ll talk to the developers and they can write impossible stuff for their ABAP Web Dynpro installations.

But then I ask my developers what effect a certain OS parameter will have on their installation. Or perhaps how the read and write response times from the storage array in the back-end might influence the overall time an end user spends while he’s waiting for his batch job to complete. And you know what answer you get a lot of times? Just a blank stare, or if you are lucky some shoulders being shrugged. They’ll tell you that you need to talk to the experts in that area. It’s not their thing, and they don’t have the time, knowledge, interest or just simply aren’t allowed to help you in other areas.

So what about our changing environment? In environments where multiple tenants are common? Where we virtualize, thin provision and dedupe our installations and create pointer based copies of our systems? Where oversubscription might affect performance levels? Fact is that we are moving away from isolated solutions and moving toward a solution stack. We no longer care about the single installation of Linux on a piece of hardware, but need to troubleshoot how the database in our Linux VM interacts with our ESX installation and the connected thin provisioned disks.

In order to be an effective administrator I will need to change. I can’t be the absolute expert in all areas. The amount of information would just be overwhelming, and I wouldn’t have the time to master all of this. But being an expert in only one area will definitely not make my job easier in the future. We will see great value in generalists that have the ability to comprehend the interactions of the various components that make up a stack, and are able to do a deep dive when needed or can gather expertise for specific problems or scenarios when they need to.

Virtualization and the whole “* as a Service” model isn’t changing the way any of the individual components work, but they change the interconnect behavior. Since we are delivering new solutions as a stack, we also need to focus on troubleshooting the stack, and this can’t always be done in the classical approach. In a way this is a bigger change for the people supporting the systems than it is for the people actually using those systems.

GestaltIT, Virtualization, VMware

Virtualization and the challenge of knowing what your customer understands

Customers only hear what is useful to them. There, I’ve said it out loud and now I’ll just wait for my lapidation to begin. Or perhaps you will bear with me while I try to explain?

As I wrote in previous blog posts, we deliver a lot of our systems virtualized. You could call our VM implementation level a big hit, but we keep running into the same problem, and it’s something that comes close to selective hearing. We have dozens of presentations, slides and info sessions that explain the benefits of a virtual server. And there are several benefits. Things like vMotion or Site Recovery Manager can help you achieve a certain service level and add value by automating failovers or minimizing downtime due to planned maintenance.

The problem is the translation that is made by the customer. In our case the various internal teams are our customer, and the knowledge around the operating system or the features that a virtualized landscape offers them is not always fully there. You can tell a customer what these features do and how virtualization adds redundancy to the hardware stack. Usually all that arrives is “Less downtime because I can vMotion the virtual machine to a different piece of hardware”, and the customer will automatically link this to a higher availability of the application that is running on this host.

The problem is that this higher availability will not cascade through the entire stack. Sure, a vMotion might save you from downtime due to hardware maintenance, but it won’t save you from the proverbial blue screen or kernel panic due to a defunct driver. You can write everything down in an SLA, but the thing that stuck in the customer’s head is the thing he deduced from the presentation.

Fellow blogger Storagebod already wrote a piece on “Asking the right questions”, and I fully agree with him that we as service providers need to start asking the right questions of our customers. We have no other way to find out what service the customer wants and to offer it.

But we also need to find out whether the customer is certain that he understands the services we are offering. It’s not about leaning back in your chair and saying “we only offered 99.5% uptime, it was there in black and white”. It’s about going the extra mile and making it clear that “We offer an infrastructure that can give you 99.999% uptime, but your application isn’t laid out for that same uptime”. It’s about asking the right questions, but it’s also about making sure that your customer heard the entire message, not just the parts he wanted to hear.
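To make the gap between those two numbers concrete, here is a small sketch. The downtime conversion is just arithmetic; the end-to-end figure assumes the infrastructure and the application fail independently, which is a simplification:

```python
# Allowed downtime per (non-leap) year for a given availability percentage.
def downtime_hours_per_year(availability_pct: float) -> float:
    return (1 - availability_pct / 100) * 365 * 24

print(f"99.5%   -> {downtime_hours_per_year(99.5):.1f} hours/year")
print(f"99.999% -> {downtime_hours_per_year(99.999):.2f} hours/year")

# For a serial stack, end-to-end availability is roughly the product of the
# layers, so a 99.999% infrastructure under a 99.5% application still leaves
# the user with about 99.5%.
infra, application = 0.99999, 0.995
print(f"End-to-end: {infra * application * 100:.3f}%")
```

Run as-is, this shows roughly 43.8 hours of allowed downtime per year at 99.5% versus around 5 minutes at 99.999% — and that the weakest layer dominates what the customer actually experiences.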

Virtualization is a tool or a way you can help your customer obtain higher uptimes. It can enable, but won’t guarantee availability of the entire stack.