Clustering, EMC, Storage, Virtualization, VMware, VPLEX, vSphere

VMware HA demo using vMSC with EMC VPLEX Metro

That's a mouthful of abbreviations for a title, isn't it?

So, let me give you some background info. VMware introduced something called the vSphere Metro Storage Cluster, and Duncan Epping talks about this feature here.

What the vMSC allows us to do is create a regular stretched vSphere cluster, but now also stretch the storage between the two sites. This can be done in two ways (to quote from Duncan's article):

I want to briefly explain the concept of a metro / stretched cluster, which can be carved up in to two different type of solutions. The first solution is where a synchronous copy of your datastore is available on the other site, this mirror copy will be read-only. In other words there is a read-write copy in Datacenter-A and a read-only copy in Datacenter-B. This means that your VMs in Datacenter-B located on this datastore will do I/O on Datacenter-A since the read-write copy of the datastore is in Datacenter-A. The second solution is which EMC calls “write anywhere”. In this case VMs always write locally. The key point here is that each of the LUNs / datastores has a “preferred site” defined, this is also sometimes referred to as “site bias”. In other words, if anything happens to the link in between then the storage system on the preferred site for a given datastore will be the only one left who can read-write access it.

The last scenario described here is something that obviously can cause some issues. EMC tried to address this by introducing an "independent 3rd party", in the form of the VPLEX Witness. Some documentation states that this witness should run in a 3rd site, but I would recommend running it in a separate failure domain.

In essence, we have created the following setup:

[Diagram of the stretched vMSC setup – © VMware]

Awesome stuff, because we can do new things that weren't quite possible before. Since VPLEX is one of EMC's key storage virtualization solutions that allows for active/active disk access, we can perform a vMotion between the two sites, and due to the nature of VPLEX we also get a sort of storage vMotion on the underlying disks, all without having to shut down the VM to do both things at the same time. Pretty neat!

Now, as Chad describes here, a new disk connectivity state was introduced with vSphere 5, called “Permanent Device Loss” or PDL. This was a great feature to communicate to your infrastructure that a target was intentionally removed. You could unmount the disk, and remove the paths to your target in a proper way.

It was also useful to indicate an unexpected loss of your target, for example when your cluster ends up in a partitioned state. The problem here was that a PDL state and VMware HA didn't work so well together: when you ran into it, HA didn't "kill" your VM, and your virtual machine would usually continue to respond to pings, but that was about it.

Then along came vSphere 5.0 Update 1, which allows us to set a flag on each of the hosts inside our cluster, and a different flag for our HA cluster. Now we can actually have HA terminate the affected VMs and restart them on the hosts in our cluster that still have access to their datastores in their respective preferred sites.
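For reference, these are the two settings I'm talking about, at least as documented for vSphere 5.0 Update 1; double-check the exact names against the VMware documentation for your build before relying on them:

    On each ESXi host in the stretched cluster (added to /etc/vmware/settings):
        disk.terminateVMOnPDLDefault = TRUE

    As an advanced option on the vSphere HA cluster itself:
        das.maskCleanShutdownEnabled = true

The first setting tells a host to terminate the VMs sitting on a datastore that has gone into a PDL state, and the second one allows HA to restart those VMs even though they were powered off outside of its control.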

I've created a short (ok, 8-minute) video that shows exactly this scenario. You'll get a quick view of the VPLEX setup. You'll see the Brocade switches being switched from a config with the normal, full zone set to one that disables the inter-switch links between both VPLEX clusters. And you'll see the settings inside my vSphere lab setup, along with the behavior of the hosts and virtual machines.

Since I’m quite new to creating videos like this, I hope the output is acceptable, and the video is clear enough. If you have any questions, feedback or would like to see more, please leave me a comment and I’ll see what I can do. 🙂


Just a quick modification to my post, since it wasn't actually VM-HA (or VM monitoring) responding to the PDL event, but HA terminating the VM when running into the PDL state, as Duncan pointed out to me on Twitter. Sorry for any confusion I may have caused!

GestaltIT, Performance, Storage, VAAI, Virtualization, VMware, vSphere

What is VAAI, and how does it add spice to my life as a VMware admin?

EMC EBC Cork
I spent some days in Cork, Ireland this week presenting to a customer. Besides the fact that I'm now almost two months into my new job, and I'm loving every part of it, there is one part of my job that is extremely cool.

I get to talk to customers about very cool and new technology that can help them get their job done! And while it's in the heart of every techno-loving geek to get caught up in bits and bytes, I noticed one thing very quickly. The technology is usually not the part that is limiting the customer from doing new things.

Everybody knows about that last part. Sometimes you will actually run into a problem where some new piece of kit is wreaking havoc and we can't seem to put our finger on what the problem is. But most of the time, we get caught up in entirely different problems altogether. Things like processes, certifications (think of ISO, SOX, ITIL), compliance, security, or something as "simple" as people who don't want to learn something new or feel threatened because their role might be changing.

And this is where technology comes in again. I had the chance to talk about several things with this customer, but one of the key points was that technology should help make my life easier. One of the cool new things that can actually help in that area happened to be a topic in my presentation.

Some of the VMware admins already know about this technology, and I would say that most of the folks that read blogs have already heard about it in some form. But when talking to people at conventions or in customer briefings, I get to introduce folks over and over to a new technology called VAAI (vStorage API for Array Integration), and I want to explain again in this blog post what it is, and how it might be able to help you.

So where does it come from?

Well, you might think that it is something new. And you would be wrong. VAAI was introduced as a part of the vStorage API during VMworld 2008, even though the release of the VAAI functionality to the customers was part of the vSphere 4.1 update (4.1 Enterprise and Enterprise Plus). But VAAI isn’t the entire vStorage API, since that consists of a family of APIs:

  • vStorage API for Site Recovery Manager
  • vStorage API for Data Protection
  • vStorage API for Multipathing
  • vStorage API for Array Integration

Now, the “only API” that was added with the update from vSphere 4.0 to vSphere 4.1 was the last API, called VAAI. I haven’t seen any of the roadmaps yet that contain more info about future vStorage APIs, but personally I would expect to see even more functionality coming in the future.

And how does VAAI make my life easier?

If you read back a couple of lines, you will notice that I said that technology should make my life easier. Well, with VAAI this is actually the case. Basically, what VAAI allows you to do is offload operations on data to something that was made to do just that: the array. And it does this right at the ESX storage stack level.

As an admin, you don’t want your ESX(i) machines to be busy copying blocks or creating clones. You don’t want your network being clogged up with storage vMotion traffic. You want your host to be busy with compute operations and with the management of your memory, and that’s about it. You want as much reserve as you can on your machine, because that allows you to leverage virtualization more effectively!

So, this is where VAAI comes in. Using the API that was created by VMware, you can now use a set of SCSI commands:

  • ATS: This command helps you out with hardware-assisted locking, meaning that you don't have to lock an entire LUN anymore but can now just lock the blocks that are allocated to the VMDK. This can be of benefit, for example, when you have multiple machines on the same datastore and would like to create a clone.
  • XCOPY: This one is also called "full copy" and is used to copy data and/or create clones without all of the data being sent back and forth to your host. After all, why would your host need the data if everything is stored on the array already?
  • WRITE-SAME: This one is also known as "bulk zero" and will come in handy when you create a VM. The array takes care of writing zeroes on your thin and thick VMDKs, and helps out at creation time for eager zeroed thick (EZT) guests.

Sounds great, but how do I notice this in reality?

Well, I've seen several scenarios where, for example during a storage vMotion, you would see a reduction in CPU utilization of 20% or even more. In other scenarios, you should normally also see a reduction in the time it takes to complete an operation, and in the resources that are allocated to perform it (usually CPU).

Does that mean that VAAI always reduces my CPU usage? Well, in a sense: yes. You won't always notice a CPU reduction, but one of the key criteria is that with VAAI enabled, all of the SCSI operations mentioned above should always perform faster than without VAAI. That means that even when you don't see a reduction in CPU usage (which is normally the case), the operations complete faster, so you get your CPU power back more quickly.

Ok, so what do I need, how do I enable it, and what are the caveats?

Let's start off with the caveats, because some of these are easy to overlook. Hardware offloading will not be used in the following cases:

  • The source and destination VMFS volumes have different block sizes
  • The source file type is RDM and the destination file type is non-RDM (regular file)
  • The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
  • The source or destination VMDK is any sort of sparse or hosted format
  • The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
  • The VMFS has multiple LUNs/extents and they are all on different arrays

Or short and simple: “Make sure your source and target are the same”.

The key prerequisites for using VAAI are vSphere 4.1 and an array that supports VAAI. If you have those two in place, you should be good to go. And if you want to be certain you are leveraging VAAI, check these things:

  • In the vSphere Client inventory panel, select the host
  • Click the Configuration tab, and click Advanced Settings under Software
  • Check that these options are set to 1 (enabled):
    • DataMover/HardwareAcceleratedMove
    • DataMover/HardwareAcceleratedInit
    • VMFS3/HardwareAcceleratedLocking

Note that these are enabled by default. And if you need more info, please make sure that you check out VMware knowledge base article 1021976.
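And if you would rather script that check than click through the vSphere Client, something along these lines should do the trick. This is just a sketch using the pyVmomi Python bindings for the vSphere API; the vCenter hostname and credentials are placeholders for your own environment:

    # Report the three VAAI-related advanced settings for every host in the
    # inventory. A value of 1 means the primitive is enabled on that host.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local",
                      user="administrator", pwd="password")
    try:
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True).view
        keys = ("DataMover.HardwareAcceleratedMove",
                "DataMover.HardwareAcceleratedInit",
                "VMFS3.HardwareAcceleratedLocking")
        for host in hosts:
            for key in keys:
                option = host.configManager.advancedOption.QueryOptions(key)[0]
                print("%s: %s = %s" % (host.name, key, option.value))
    finally:
        Disconnect(si)

The vSphere Client route above obviously gets you the same answer; scripting it just becomes quicker once the number of hosts grows.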

Also, one last word on this. I really feel that this is a technology that will make your life as a VMware admin easier, so talk to your storage admins (if that person isn't you in the first place) or your storage vendor and ask if their arrays support VAAI. If not, ask them when they will support it. Not because it's cool technology, but because it's cool technology that makes your job easier.

And, if you have any questions or comments, please hit me up in the remarks. I would love to see your opinions on this.

Update: 2010-11-30
VMware guru and Yellow Bricks mastermind Duncan Epping was kind enough to point me to a post of his from earlier this week that goes into more detail on some of the upcoming features. Make sure you check it out right here.

Clariion, CX4, EMC, FLARE, Storage

What’s new in EMC Clariion CX4 FLARE 30

[Image: CLARiiON CX4 UltraFlex I/O module – © EMC Corporation]

A little while back, EMC released a new version of its CLARiiON Fibre Logic Array Runtime Environment, or "FLARE" for short. This release brings us to version 04.30 and once again has some enhancements that might interest you, so here's a short overview of what this update packs:

Let’s start off with some basics. Along with this update you will find updated firmware versions for the following:

    Enclosure: DAE2		- FRUMon: 5.10
    Enclosure: DAE2-ATA	- FRUMon: 1.99
    Enclosure: DAE2P	- FRUMon: 6.71
    Enclosure: DAE3P	- FRUMon: 7.81

Major changes:

  • With version 04.30.000.5.507 you get support for FCoE. Prerequisite is using a 10 Gigabit Ethernet I/O module on CX4-120, CX4-240, CX4-480, and CX4-960 arrays.
  • SATA EFD support.
  • Following that point, you can now use Fibre Channel EFD and SATA EFD in the same DAE.
  • And, you can now also mix Fibre Channel and SATA EFDs in the same RAID group.
  • VMware vStorage API support in the form of "vStorage full copy acceleration" (basically, the array takes care of copying all the blocks instead of sending everything back and forth to the host) and in the form of "Compare and Swap" (an enhancement to the LUN locking mechanism).
  • Rebuild avoidance. This feature will change the routing of I/O to the service processor that still has access to all the drives in the RAID group. You do need write caching to be enabled if you want to be able to use this feature.
  • Virtual provisioning, basically EMC’s name for thin provisioning on the array.

There are some nice features in there, but for me personally the virtual provisioning, the FCoE support and the vStorage API support are the main ones.

One thing that caught my eye was in the "Limitations" section for FLARE version 04.30.000.5.507. In the release notes you will find the following statement:

Host attach support – Supported host attached systems are limited to the following operating systems: Windows, VMWare, and Linux

Which would mean that you have a problem when you are using something else like Solaris or HP-UX. I’m trying to get some confirmation, and I’ll update this post as soon as I have more info.

Update

The statement has changed in the meantime:

Host attach support – Supported hosts that can be attached over an FCoE connection are limited to the following operating systems: Windows, VMWare, and Linux

Which means that this is just related to FCoE connected hosts.


After some feedback on Twitter from, among others, Andrew Sharrock, I thought it might be wise to say a few words about the Virtual Provisioning feature.

In short, Virtual Provisioning was already introduced with FLARE 28. The problem was that at the time, you could only use the feature with thin pools. Basically, with this update, you also get support for a newer version of the feature. Things that were added are:

  • Thick LUNs
  • LUN expand and shrink
  • Tiering preference (storage allocation from pools with mixed drives and different performance characteristics)
  • Per-tier tracking support of pool usage
  • RAID 1/0 support for pools
  • Increased limits for drive usage in pools

General, Networking, Storage, Virtualization

My “Follow, even if it’s not Friday” list

There's a meme on Twitter that can be witnessed each Friday. It's called "Follow Friday" and can be found by searching for the #FollowFriday hashtag, sometimes simply abbreviated to #FF to save space in a tweet.

The problem with a lot of those Follow Friday tweets is that most of the time you have no idea why you are being advised to follow these people. If you are lucky you will see a remark in the tweet saying why you should follow someone, but in most cases it's a matter of clicking on a person, going to their timeline and hoping you can find a common denominator that gives you an indication of why you would want to follow them.

In an attempt to do things a bit differently, I decided to create this post and list some of the folks that I think are worth following. I'll try to add a description of what someone (or a group of people) does that makes them worth following in my opinion. And if you are not on this list, please don't be offended; I will try to update it every now and then, but it would be impossible for me to pick out every single one of you on the first attempt.

So here goes nothing! I'm starting off this post with people that offer a great deal of info on things related to VMware, and I will try to follow up with other topics as time goes by. Check back every now and then to see some new people to follow.

Focus on VMware:

  • @sakacc – Besides being the VP for the VMware Technology Alliance at EMC, Chad is still a true geek and a great source of knowledge when it comes to all things VMware and EMC. He's also very helpful when people have questions in those areas. Be sure to check out his blog, as it is a great source of information!
  • @Kiwi_Si – Simon is a great guy, and can tell you a lot about VMware and home labs. Because of the home labs he is also very strong when it comes to finding out more about HP’s x86 platform, and once again I highly recommend reading his TechHead blog.
  • @alanrenouf – This French-sounding guy is actually hiding in the UK and is considered by many to be a PowerCLI demi-god. Follow his tweets and you will find out why people think of him that way.
  • @stevie_chambers – You want to find out more about Cisco UCS? Steve is the man to follow on Twitter, also for finding out more about UCS combined with VMware.
  • @DuncanYB – Duncan started the Yellow Bricks blog, which focuses on all things VMware, and he is also a great source of info on VMware HA.
  • @scott_lowe – Scott is an ace when it comes to VMware.
  • @jtroyer – John is the online evangelist and enterprise community builder at VMware. For anything new regarding VMware and its community you should follow John.
  • @lynxbat – I would call it something else, but Nick is a genius. He started tweaking the EMC Celerra VSA and has worked wonders with it. I highly recommend following him!
  • @jasonboche – Virtualization evangelist extraordinaire. Jason has the biggest home lab setup that I know of; I'd like to see someone trump that setup.
  • @gabvirtualworld – Gabrie is a virtualization architect and has a great blog with lots of resources on VMware.
  • @daniel_eason – Daniel is an architect for a large British airline and knows his way around VMware quite well, but is also quite knowledgeable in other areas.
  • @SimonLong_ – With a load of certifications and an excellent blog, Simon is definitely someone to follow on Twitter.

Focus on storage:

  • @StorageNerve – Devang is the go-to-guy on all things EMC Symmetrix.
  • @storageanarchy – Our friendly neighborhood storage anarchist is known to have an opinion, but Barry is also great when it comes to finding out more about EMC’s storage technology.
  • @valb00 – Val is a great source of info on things NetApp, and you can find a lot of good retweets with useful information from him.
  • @storagebod – If you want someone to tell it to you like it is, you should follow Martin.
  • @Storagezilla – Mark is an EMC guy with great storage knowledge. Also, if you find any videos of him cursing, tell me about it because I could just listen to him go on and on for hours with that accent he has.
  • @nigelpoulton – Nigel is the guy to talk to when you want to know more about data centre, storage and I/O virtualisation. He’s also great on all areas Hitachi/HDS.
  • @esignoretti – If you are (planning on) using Compellent storage, be sure to add Enrico to your list.
  • @chrismevans – The storage architect, or just Chris, knows his way around most storage platforms, and I highly recommend you read his blog for all things storage, virtualization and cloud computing.
  • @HPStorageGuy – For all things related to HP and their storage products you should follow Calvin.
  • @ianhf – “Don’t trust any of the vendors” is almost how I would sum up Ian’s tweets. Known to be grumpy at times, but a great source when it comes to asking the storage vendors the right questions.
  • @rootwyrm – As with Ian, rootwyrm also knows how to ask the hard questions. He also isn't afraid to fire up big Bertha to put the numbers given by a vendor to the test.
  • @sfoskett – Stephen is an original blogger and can probably be placed under any of the categories here. Lots of good information, and he's the founder of Gestalt IT.
  • @Alextangent – Alex is located in the office of the CTO at NetApp. As such, you can expect deep technical knowledge on all things NetApp when you follow him.
  • @StorageMojo – I was lucky to have met Robin in person. A great guy working as an analyst, and you will find refreshing takes and articles by following his tweets. A definite recommendation!
  • @mpyeager – Since Matthew is working for IT service provider Computacenter, he has a lot of experience with different environments and has great insight on various storage solutions as well as a concern about getting customers more bang for their buck.

Focus on cloud computing:

  • @Beaker – Christofer Hoff is the director of Cloud & Virtualization Solutions at Cisco and has a strong focus on all things cloud related. His tweets can be a bit noisy, but I would consider his tweets worth the noise in exchange for the good info you get by following him. Oh, and by the way… Squirrel!!
  • @ruv – Reuven is one of the people behind CloudCamp and is a good source of information on cloud and on CloudCamp.
  • @ShlomoSwidler – Good cloud stuff is being (re)tweeted and commented on by Shlomo.

So, this is my list for now, but be sure to check back every once in a while to see what new people have been added!


Created: May 27th 2010
Updated: May 28th 2010 – Added storage focused bloggers
Updated: July 23rd 2010 – Added some storage focused bloggers and some folks that center on cloud computing

GestaltIT, Networking, Stack, Storage, Virtualization

My take on the stack wars

As some of you might have read, the stack wars have started. One of the bigger coalitions, announced in November 2009, was that between VMware, Cisco and EMC, aptly named VCE. Hitachi Data Systems announced something similar and partnered up with Microsoft, but left everyone puzzled about the partner that would be providing the networking technology in its stack. Companies like IBM have been able to provide customers with a complete solution stack for some time now, and IBM will be sure to tell its customers that they did so and offered the management tools in the form of anything branded Tivoli. To me, IBM's main weakness is not so much the stack that they offer as the sheer number of solutions and the lack of one tool to manage it all, let alone getting an overview of all possible combinations.

So, what is this thing called the stack?

Actually, the stack is just that: a stack. A stack of what, you say? A stack of solutions, bound together by one or more management tools, offered to you as a happy meal that allows you to run the desired workloads on it. Or, to put things more simply and quote from the Gestalt IT stack wars post:

  • Standard hardware configurations are specified for ease of purchasing and support
  • The hardware stack includes blade servers, integrated I/O technology, Ethernet networking for connectivity, and SAN or NAS storage
  • Unifying software is included to manage the hardware components in one interface
  • A joint services organization is available to help in selection, architecture, and deployment
  • Higher-level software, from the virtualization hypervisor through application platforms, will be included as well

Until now, we have usually seen a standardized form of hardware, including storage and connectivity. Vendors mix that up with one or multiple management tools and tend to target some form of virtualization. Finally a service offering is included to allow the customer to get service and support from one source.

This strategy has its advantages.

Compatibility is one of my favorites. You no longer need to work through compatibility guides that are 1400 pages long and will burn you for installing a firmware version that was just one digit off and is now no longer supported in combination with one of your favorite storage arrays. You no longer have to juggle different release notes from your business warehouse provider, your hardware provider, your storage and network provider, your operating system and tomorrow's weather forecast. Trying to find the lowest common denominator through all of this is still something magical. It's actually a form of dark magic that usually means working long hours to find out if your configuration is even supported by all the vendors you are dealing with.

This is no longer the case with these stacks. Usually they are purpose- or workload-built, and you have one central source for your support. This source will tell you that you need at least firmware version X.Y on these parts to be eligible for support, and you are pretty much set after that. And because you are working with a federated solution and received management tools for the entire stack, your admins can pretty much manage everything from this one console or GUI and be done with it. Or, if you don't want to do that yourself, you can use the service offering and have it done for you.

So far so good, right?

Yes, but things get more complicated from here on. For one, there is a major problem, and that is flexibility. One of the bigger concerns came up during the Gestalt IT Tech Field Day vBlock session at Cisco. With the vBlock, I have a fixed configuration and it will run smoothly and within certain performance boundaries as long as I stick to the specifications. The vBlock offered a quite obvious example: if I add more RAM to a server blade than is specified, I no longer have a vBlock and basically no longer have the advantages previously stated.

Solution stacks force me to think about the future. I might be an Oracle shop now as far as my database goes, and Oracle will run fine on my newly purchased stack. But what if I want to switch to Microsoft SQL Server in three years, because Mr. Ellison decided that he needs a new yacht and I no longer want to use Oracle? Is my stack also certified to run a different SQL server, or am I no longer within my stack boundaries and have I lost my single service source or the guaranteed workload it could hold?

What about updates for features that are important to me as a single customer? Or what about the fact that these solution stacks work great for new landscapes, or in a highly homogeneous environment? But what about those other Cisco switches that I would love to manage from the tools that are offered within my vBlock, but are outside of the vBlock scope, even if they are the same models?

What about something as simple as "stack lock-in"? I don't really have a vendor lock-in, since only very few companies have the option of offering everything first hand. Microsoft doesn't make server blades, Cisco doesn't make SAN storage, and that list goes on and on. But with my choice of stack, I am now locked into a set of vendors, and while I certainly have some tools to migrate into that stack, migrating out is an entirely different story.

The trend is the stack, it’s as simple as that. But for how long?

We can see the trend clearly. Every vendor seems to be working on a stack offering. I’m still missing Fujitsu as a big hardware vendor in this area, but I am absolutely certain we will see something coming from them. Smaller companies will probably offer part of their portfolio under some sort of OEM license or perhaps features will just be re-branded. And if they are successful enough, they will most likely be swallowed by the bigger vendors at some point.

But as with everything in IT, this is just a trend. Anyone who has been in the business longer than me can probably confirm this. We started out with centralized systems, then moved towards decentralized environments. Now we are on the move again, centralizing everything.

I'm actually much more interested to see how long this trend will continue. I am certain that we will be seeing more companies offer a complete solution stack, or join coalitions to offer said stack. I still think that Oracle was one of the first that pointed in this direction, but they were not the first to offer the complete stack.

So, how do you think this is going to continue? Do you agree with us? What companies do you think are likely to be swallowed, or will we see more coalitions from smaller companies? What are your takes on the advantages and disadvantages?

I’m curious to hear your take on this so let me know. I’m looking forward to what you have to say!

GestaltIT, Performance, Storage, Tiering

“Storage tiering is dying.” But purple unicorns exist.

Chris Mellor over at the Register put an interview online with NetApp CEO Tom Georgens.

To quote from the Register piece:

He is dismissive of multi-level tiering, saying: “The simple fact of the matter is, tiering is a way to manage migration of data between Fibre Channel-based systems and serial ATA based systems.”

He goes further: “Frankly I think the entire concept of tiering is dying.”

Now, for those who are not familiar with the concept of tiering, it's basically moving data between faster and slower media in the background. Classically, tiering is something that every organization is already doing. You consider the value of the information, and based on that you decide whether this data should be instantly accessible from your more expensive hardware. Even at home you will see that as the value decreases, you archive that data to a medium with a different type of performance, like your USB archiving disk, or for example by burning it to a DVD.

For companies the more interesting part in tiering comes with automation. To put it simply, you want your data to be available on a fast drive when you need it, and it can remain on slower drives if you don’t require it at that moment. Several vendors each have their own specific implementation of how they tier their storage, but you find this kind of technology coming from almost any vendor.

Apparently, NetApp has a different definition of tiering, since according to their CEO tiering is limited to the "migration of data between Fibre Channel-based systems and serial ATA based systems". And this is where I heartily disagree with him. I purposely picked the example of home users who are also using different tiers, and it's no different for any of the storage vendors.

The major difference? They remove the layer of fibre channel drives in between the flash and SATA drives. They still tier their data to the medium that is most fitting. They will try to do that automatically (and hopefully succeed in doing so), but just don't call it tiering anymore.

As with all vendors, NetApp is also trying to remove the fibre channel drive layer, and I am convinced that this will be possible as soon as the prices of flash drives become comparable to those of regular fibre channel drives, and tiering is automated to the point that any actions performed are transparent to the connected system.

But if NetApp doesn't want to call it tiering, that's fine by me; I just hope they don't honestly expect customers to fall for it. The rest of the world will continue to call it tiering, while they try to sell you a purple unicorn that moves data around disk types as if by magic.

Compellent, Storage, Storage Center

Compellent just introduced their new Storage Center 5

So, as of today 17:00 (German time), Compellent introduced their new Storage Center in version 5. Storage Center is essentially a SAN solution that, similar to EMC's V-Max, is based on industry-standard hardware. It's effectively an Intel-based server with a custom OS that runs from flash memory.

Now then, one of the main technologies used by Compellent in these arrays is something called "Dynamic Block Architecture" or DBA, which is basically a storage virtualization technology that tracks each block in the array independently. Since the metadata contains all relevant information like RAID level, volume and disk location, the data can be stored anywhere. This allowed for features like automated storage tiering or thin provisioning. Zero block reclaim was also an option, in the form of "thin import".

Today version 5 is released which offers the following improvements and new features:

  • Portable Volume
  • Scalable SAS
  • Automated Tiered Storage with RAID 6
  • Virtual Ports
  • Server Mapping
  • ConsistencyGroups

Now, some of these speak for themselves, like the support for RAID 6 with automated tiered storage, or virtual ports that allow you to share multiple virtual ports on one physical port by using N_Port ID virtualization (NPIV). Some of the others are less obvious, so I'm going to take a closer look at them.

Portable Volume
Compellent stated that a portable volume is just that, "a way to move data around". A primary use could for example be the initial synchronization between a primary and a backup site that both contain Storage Centers: "Customers don't want to purchase a 'big pipe' for just that first synchronization." Basically, you plug a portable volume into a controller via USB. After that, the system copies the data by creating a snapshot and synchronizing it to the USB disk. Once that is done you can physically move the data over (there's even a James Bond-style suitcase or "travel container"), connect the portable volume to your second array, and the replication automatically takes place. Currently the biggest portable volume is a 2TB drive, but you can combine multiple drives. Filling a drive can take up to around 15 hours; speeds are mostly limited by the USB 2.0 connection. Compellent is currently also "looking to support portable FC attached drives", but I wouldn't hold my breath for that one in the coming weeks.

Scalable SAS

Not that spectacular, but this feature allows you to connect cheaper SAS disks to the array. You have a choice between 450GB drives at 15,000 RPM or 1TB drives at 7,200 RPM. This will scale from a minimum of 6 drives up to 384 drives, where 384 is the current limit for SAS drives. Disk shelves are being sold that offer capacity for 24 disks. For solid state disks and FC disks the maximum is 1008 drives.

Server Mapping

The main focus here is the virtualized environment. You can now create groups of servers called "clusters" that can be moved or configured all at once. As with the default configuration, the LUNs created will be thin, and a nice touch is that they have an "OS-aware mapping".

ConsistencyGroups

Basically, these are consistent recovery points from the array. Up to 40 different volumes can be combined in one group, which gives you the option to create groups for the various applications or landscapes within the array. Compellent is looking to provide an alternative to Microsoft's Volume Shadow Copy Service (VSS) with this new feature.

So, that’s it for the announcement. Let’s see what this new array will show us in production environments. One small detail I should mention is that Compellent is also looking at implementing FCoE as an interconnect in their arrays, but the jury is still out on when this is going to be launched.

