GestaltIT, High Availability

How do you define high availability and disaster recovery?

A while back I was on a call with someone who asked me the difference between high availability (HA) and disaster recovery (DR), saying that there are so many different solutions out there and that a lot of people seem to use the terminology without being able to explain much more about these two terms. So, here’s an attempt to demystify things.

First of all, let’s take a look at the individual terms:

High Availability:

According to Wikipedia, you can define availability in the following ways:

The degree to which a system, subsystem, or equipment is operable and in a committable state at the start of a mission, when the mission is called for at an unknown, i.e., a random, time. Simply put, availability is the proportion of time a system is in a functioning condition.

The ratio of (a) the total time a functional unit is capable of being used during a given interval to (b) the length of the interval.

And most online dictionaries seem to have a similar definition of availability. When we talk about HA, we imply that we want to increase the proportion of time the system is in a functioning condition.

Going by the above, you will also notice that there is no fixed definition of availability. In other words, you need to put your own definition in place when talking about HA: define what HA means in your environment. I’ve had customers that needed HA and defined it as the system having a certain amount of uptime, which is one way to measure it.

On the other hand, consider a scenario where you can work with your system, but the data you are working with is corrupted because one of your power users made an error during a copy job and wrote an older data set to the wrong spot. Strictly speaking, the system is in itself available. You can log on to it, you can work with it, but the output you are going to get will be wrong.

To me, such a scenario would mean that your system isn’t available. After all, it’s not about everything being online. It’s about using a system in the way you would expect it to work. But when you ask most people in IT about availability, the first thing you will likely hear is something related to uptime or downtime. So, my tip to you is once again:

Define what “available” means to you and your organization/customer!

Disaster Recovery:

Let’s do the same thing as before and turn to some general definitions. Wikipedia defines disaster the following way:

A disaster is a perceived tragedy, being either a natural calamity or a man-made catastrophe. It is a hazard which has come to fruition. A hazard, in turn, is a situation which poses a level of threat to life, health, property, or that may deleteriously affect society or an environment.

And recovery is defined the following way (when it comes to health):

Healing, or Cure, the process of recovering from an injury or illness.

So, in a nutshell this is about getting back on your feet once disaster strikes. Now again, it’s important to define what you would call a disaster, but at least there seems to be some sort of common understanding that anything that gets you back up and running after an entire site goes down usually falls under the label of a DR solution.

It all boils down to definitions!

When you talk to other companies or vendors about HA and/or DR, you will soon notice that most have a different understanding of what HA and DR are. Your main focus should be to have a clear definition for yourself. Try to find out the importance and value of your solution and base your requirements on that. Ask yourself simple questions such as:

  • What is the maximum downtime I can cope with before I need to start working again? 8 hours per year? 1 hour per year? 4 hours per month? What are my RPO and RTO? (A quick way to put numbers on this is sketched right after this list.)
  • How do I handle planned maintenance? Can I bring everything down or do I need to distribute my maintenance across independent entities?
  • Can I afford the loss of any data at all? Can I afford the partial loss of data?
  • What if we see a city-wide power outage? Do I need a failover site, or are all my users in the same spot and won’t be able to work anyway?
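To make that first question a bit more concrete, here is a small back-of-the-envelope sketch. It is purely illustrative Python with example targets, not a recommendation for your environment: it translates a tolerated amount of downtime into an availability percentage and states an RPO/RTO pair as explicit numbers.

```python
HOURS_PER_YEAR = 365 * 24   # 8760
HOURS_PER_MONTH = 30 * 24   # 720, close enough for a rough estimate

def availability_for_downtime(downtime_hours: float, period_hours: float) -> float:
    """Availability implied by tolerating the given downtime per period."""
    return 1 - downtime_hours / period_hours

# The example targets from the question above:
print(f"8 hours per year  -> {availability_for_downtime(8, HOURS_PER_YEAR):.3%}")   # ~99.909%
print(f"1 hour per year   -> {availability_for_downtime(1, HOURS_PER_YEAR):.3%}")   # ~99.989%
print(f"4 hours per month -> {availability_for_downtime(4, HOURS_PER_MONTH):.3%}")  # ~99.444%

# RPO and RTO are separate targets: how much data you may lose, and how long
# recovery may take. Example values only:
rpo_minutes = 15  # at most 15 minutes of data loss
rto_hours = 4     # service restored within 4 hours
```

Putting the targets down as plain numbers like this tends to make the follow-up discussion about cost a lot easier.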

Questions like these will help you realize that not everything you have running has the same value. Your development system with 6,000 people working on it worldwide might need better protection than your production system that is only being used by 500 people spread across the Baltic region.

Or, in short:

Knowing what kind of protection you need is key. The fact is that HA and DR solutions never come cheap. If you need the certainty that your solution is both highly available and able to recover from a disaster, you will notice that the price tag quickly skyrockets, which is another reason to make sure you know exactly what kind of protection you need; creating that definition is the most important starting point. Once you have your own definition, make sure you communicate those definitions and requirements so that all parties are on the same page. It should make your life a little easier in the end.

General, GestaltIT, Stack

This vendor is locking me in!

Or so I’m told. Not just once or twice, but it’s something that gets written down at least once every time a vendor introduces something new or rolls out a revision of an existing product.

Now, you could say that this is the pot calling the kettle black and I would agree with you. It’s a thing I mentioned in my UCS post, and also in my recent post on the stack wars. And today a tweet from @Zaxstor got me thinking about it some more. I asked the following on Twitter:

I hear this argument about vendor lock in all of the time. Open question: How do I avoid a vendor lock in? By going heterogeneous?

Because, when you think about it, we all are subject to vendor lock-in all of the time. As soon as I decide to purchase a new mobile phone, I am usually tied to either the phone manufacturer or the carrier that I use. Sometimes I am even tied to both; just think of the iPhone as an example of this kind of lock-in.

The same goes for the car I drive. When I buy it from the dealer, I get an excellent package that is guaranteed to work. That is, until I take it in for an inspection at a garage that is not part of the authorized network. My car will still drive, and will probably work great, but I no longer have a large part of the guarantees that came with it when I bought it, and that would have remained intact if I had taken it to an authorized dealer.

Now, I know my analogy is slightly flawed here since we are talking about things that work on a different scale and use entirely different technologies, but what I am trying to say is that we make decisions that lock us in with a certain vendor on an almost daily basis. Apparently the guys in and around the data center just like to talk about that problem a bit more.

One remark, however, was made by fellow blogger Dimitris Krekoukias and confirmed by several others:

“It’s not how you get in to the lock, but how you get out of it.”

And I do think that this is probably the key, but fortunately we have some help there from the competition. But it’s not all down to the others! All vendors are guilty of trying to sell you something. It’s not their fault, it’s just something that “comes with the territory”. They will try to pitch you their product and make your head dizzy with what this new product can do. It’s all good, and it’s all grand according to them.

And yes, it is truly grand what this shiny new toy can do, but the question is whether you really need it. Try to ask what kind of value a feature will offer in your specific setup. Try to judge if you really need this feature, and ask yourself what you are going to do if the feature proves to be less useful than expected.

Remember that not all is lost if you do lock yourself in with that vendor. Usually others will be quick to follow with new features, and this is where the help from the competition comes in. Take the example of the mobile phone. Even if you will not receive any help from your current provider, you can bet that the provider that now offers the same package will try to help you become its customer. If NetApp is not providing you with an option to migrate out of that storage array, you can bet your pants that Hitachi will try and help you migrate to their arrays.

Now, I’m not saying that this is the best solution. Exchanging solutions is usually also accompanied by a loss of knowledge and of investments already made. But it’s all on you to factor that in before you take the plunge, and in the end that lock you have with your current vendor might be hard and expensive to break, but it’s almost never mission impossible.


P.S. Just as a side note, I’m not saying NetApp will not allow or help you to migrate out of an array, I’m just using these names as an example. Replace them with any vendor you like.

P.P.S. As part of the discussion, fellow blogger Storagebod posted something quite similar; be sure to read it here.

GestaltIT, Networking, Stack, Storage, Virtualization

My take on the stack wars

As some of you might have read, the stack wars have started. One of the bigger coalitions announced in November 2009 was that between VMware, Cisco and EMC, aptly named VCE. Hitachi Data Systems announced something similar and partnered up with Microsoft, but left everyone puzzled about the partner that will be providing the networking technology in its stack. Companies like IBM have been able to provide customers with a complete solution stack for some time now, and IBM will be sure to tell its customers that they did so and offered the management tools in the form of anything branded Tivoli. To me, IBM’s main weakness is not so much the stack they offer as the sheer number of solutions and the lack of a single tool to manage it all, let alone one that gives an overview of all possible combinations.

So, what is this thing called the stack?

Actually, the stack is just that, a stack. A stack of what, you say? A stack of solutions, bound together by one or more management tools, offered to you as a happy meal that allows you to run the desired workloads on it. Or, to put things more simply and quote from the Gestalt IT stack wars post:

  • Standard hardware configurations are specified for ease of purchasing and support
  • The hardware stack includes blade servers, integrated I/O technology, Ethernet networking for connectivity, and SAN or NAS storage
  • Unifying software is included to manage the hardware components in one interface
  • A joint services organization is available to help in selection, architecture, and deployment
  • Higher-level software, from the virtualization hypervisor through application platforms, will be included as well

Until now, we have usually seen a standardized form of hardware, including storage and connectivity. Vendors mix that up with one or multiple management tools and tend to target some form of virtualization. Finally a service offering is included to allow the customer to get service and support from one source.

This strategy has its advantages.

Compatibility is one of my favorites. You no longer need to work through compatibility guides that are 1400 pages long and will burn you for installing a firmware version that was just one digit off and is now no longer supported in combination with one of your favorite storage arrays. You no longer have to juggle different release notes from your business warehouse provider, your hardware provider, your storage and network provider, your operating system and tomorrow’s weather forecast. Trying to find the lowest common denominator through all of this is still something magical. It’s actually a form of dark magic that usually means working long hours to find out if your configuration is even supported by all the vendors you are dealing with.
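To illustrate what that “lowest common denominator” exercise boils down to, here is a tiny sketch. The component names and version strings are invented, and real matrices are far messier, but the principle is simply intersecting everyone’s supported lists:

```python
# Each vendor publishes the firmware levels it supports; the configurations you
# may actually run are the intersection of all of them.
# Component names and version strings below are made up for illustration.
support_matrices = {
    "storage_array": {"6.1.2", "6.1.3", "6.2.0"},
    "san_switch":    {"6.1.3", "6.2.0", "6.2.1"},
    "server_hba":    {"6.1.3", "6.1.4", "6.2.1"},
}

supported_everywhere = set.intersection(*support_matrices.values())
print(supported_everywhere or "no combination is supported by all vendors")
# -> {'6.1.3'}: the one configuration nobody will burn you for running.
```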

This is no longer the case with these stacks. Usually they are purpose- or workload-built, and you have one central source to get your support from. This source will tell you that you need at least firmware version X.Y on these parts to be eligible for support, and you are pretty much set after that. And because you are working with a federated solution and received management tools for the entire stack, your admins can pretty much manage everything from this one console or GUI and be done with it. Or, if you don’t want to do that, you can use the service offering and have it done for you.

So far so good, right?

Yes, but things get more complicated from here on. For one, there is a major problem, and that is flexibility. One of the bigger concerns came up during the Gestalt IT Tech Field Day vBlock session at Cisco. With the vBlock, I have a fixed configuration, and it will run smoothly and within certain performance boundaries as long as I stick to the specifications. The vBlock made for a quite obvious example: if I add more RAM to a server blade than is specified, I no longer have a vBlock and basically no longer have the advantages stated previously.

Solution stacks force me to think about the future. I might be an Oracle shop now as far as my database goes. And Oracle will run fine on my newly purchased stack. But what if I want to switch to Microsoft SQL Server in 3 years, because Mr. Ellison decided that he needs a new yacht and I no longer want to use Oracle? Is my stack also certified to run a different SQL server, or am I no longer within my stack boundaries, losing my single source of service or the guaranteed workload it could handle?

What about updates for features that are important to me as a single customer? And these solution stacks work great for new landscapes or in a highly homogeneous environment, but what about those other Cisco switches that I would love to manage from the tools offered within my vBlock, yet that fall outside of the vBlock scope, even if they are the same models?

What about something as simple as a “stack lock-in”? I don’t really have a vendor lock-in, since only very few companies have the option of offering everything first hand. Microsoft doesn’t make server blades, Cisco doesn’t make SAN storage, and that list goes on and on. But with my choice of stack, I am now locked in to a set of vendors, and while I certainly have some tools to migrate into that stack, migrating out is an entirely different story.

The trend is the stack, it’s as simple as that. But for how long?

We can see the trend clearly. Every vendor seems to be working on a stack offering. I’m still missing Fujitsu as a big hardware vendor in this area, but I am absolutely certain we will see something coming from them. Smaller companies will probably offer part of their portfolio under some sort of OEM license or perhaps features will just be re-branded. And if they are successful enough, they will most likely be swallowed by the bigger vendors at some point.

But as with everything in IT, this is just a trend. Anyone who has been in the business longer than me can probably confirm this. We started with centralized systems, then moved towards decentralized environments. Now we are on the move again, centralizing everything.

I’m actually much more interested to see how long this trend will continue. I am certain that we will be seeing some more companies offer a complete solution stack, or join coalitions to offer said stack. I still think that Oracle was one of the first that pointed in this direction, but they were not the first to offer the complete stack.

So, how do you think this is going to continue? Do you agree with us? What companies do you think are likely to be swallowed, or will we see more coalitions from smaller companies? What are your takes on the advantages and disadvantages?

I’m curious to hear your take on this so let me know. I’m looking forward to what you have to say!

Cisco, GestaltIT, Tech Field Day, UCS, Virtualization

Gestalt IT Tech Field Day – On Cisco and UCS

There are a couple of words that are high on my list as being the buzzwords for 2010. The previous year brought us things like “green computing”, but the new hip terms seem to be “federation” and “unification”. And let’s not forget the one that seems to last longer than just one year: the problem-solving term “cloud”.

Last Friday (April 9th), the rest of the Gestalt IT Tech Field Day delegates and I were invited by Cisco to get a briefing on Cisco’s Unified Computing System, or in short “UCS”. Basically this is Cisco’s view that we should stop seeing a server as being tied to the application, and instead see the server as a resource that allows us to run that application.

Anyone in marketing will know that the next question being asked is “What is your suggestion to change all that?”, and Cisco’s marketing department didn’t disappoint us and tried to answer that question for us. The key, in their opinion, is a system consisting of building blocks that allows them to give customers a solution stack.

As the trend is clearly moving towards commodity hardware, Cisco is following suit by using industry standard servers that are equipped with Intel Xeon processors. Other key elements are virtualization of services, a focus on automated provisioning, and unification of the fabric by means of FCoE.

What this basically means is that you order building blocks from Cisco in the form of blade servers, blade chassis, fabric interconnects and virtual adapters. But instead of connecting this stuff up and expanding my connectivity like I would in a standard scenario, I wire my hardware depending on the bandwidth requirements, and that’s pretty much it. Once I am done with that, I can assign virtual interfaces as I need them on a per-blade basis, which in turn removes the hassle of plugging in physical adapters and cabling all that stuff up. In a sense it reminded me of the take that Xsigo offered with their I/O Director, but with the difference that Cisco uses FCoE instead of InfiniBand, and with Cisco you add the I/O virtualization to a more complete management stack.

The management stack

This is, in my opinion, the key difference. I can bolt together my own pieces of hardware and use the Xsigo I/O Director in combination with VMware and have a similar set-up, but I would be missing out on one important element: a central management utility.

This UCS unified management offers me some advantages that I have not seen from other vendors. I can now tie properties to the resources I want, meaning that I can set up properties tied to a blade, but can also tie them to the VM or application running on that blade, in the form of service profiles. Things like MAC addresses, WWNs or QoS policies are defined inside these service profiles in an XML format and then applied to my resources as I see fit.
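To give a feel for the idea, here is a rough conceptual sketch in Python. The field names are my own invention and this is not Cisco’s actual XML schema or API; it only illustrates the point that identity and policy live in the profile rather than in the physical blade.

```python
from dataclasses import dataclass

# Conceptual illustration only; the attributes are invented and do not
# reflect the real UCS service profile format.
@dataclass
class ServiceProfile:
    name: str
    mac_address: str   # LAN identity the workload keeps, regardless of blade
    wwn: str           # fibre channel identity used for SAN zoning/masking
    qos_policy: str    # e.g. a bandwidth class for the virtual interfaces

def apply_profile(profile: ServiceProfile, blade_slot: int) -> None:
    """Associate a profile with a physical blade. Re-applying the same profile
    to another slot later moves the identities with it, so the LAN and SAN
    keep seeing the same 'server'."""
    print(f"Blade {blade_slot} now presents {profile.mac_address} / {profile.wwn} "
          f"with QoS policy '{profile.qos_policy}'")

web_profile = ServiceProfile("web-esx-01", "00:25:B5:00:00:1A",
                             "20:00:00:25:B5:00:00:1A", "gold")
apply_profile(web_profile, blade_slot=3)
```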

Sounds good, but…?

There is always a but; that’s almost impossible to avoid. Even though Cisco offers a solution that seems to have some technical advantages, there are some potential drawbacks.

  • Vendor lock-in:
    This one is quite easy to see. The benefit of getting everything from one vendor also means that my experience is only as good as the vendor’s support in case of trouble. The same thing applies when ordering new hardware and there are unexpected problems somewhere in the ordering/delivery chain.
  • The price tag:
    Cisco is not known to be cheap. Some would even say that Cisco is very expensive, and it will all boil down to one thing: is the investment that I need to make for a UCS solution going to give me a return on investment? And is it going to do that anytime soon? Sure it can reduce my management overhead and complexity, sure it can lower my operational expenses, but I want to see something in return for the money I gave Cisco, and preferably today, not tomorrow.
  • Interoperability with my existing environment:
    This sort of stuff works great when you are lucky enough to create something new: a new landscape, a new data center or something along those lines. The truth is that usually we will end up adding something new to our existing environment. It’s great that I can manage all of my UCS stack with one management interface. But what about the other stuff? What if I already have other Cisco switches that are not connected to this new UCS landscape? Can I manage those using the built-in UCS features? Or is this another thing that my admins have to learn?
  • The fact that UCS is unified does not mean that my company is:
    In smaller companies, you have a couple of sysadmins that do everything. They install hardware, configure the operating system, upload firewall policies to their routers and zone some new storage. So far so good: I give them my new UCS gear, they usually know what goes where, and they will get going. Now I end up in the enterprise segment, where I talk to one department to change my kernel parameters, a different one to configure my switch port to auto-negotiate, and a third one to check the WWN of my fibre channel HBA to see if it matches the one configured on the storage side. Now I need to get all of them together to work on creating the service profiles, although not all of them will be able to work outside of their knowledge silo. The alternative would be to create a completely new team that just does UCS, but do I want that?

Those points aside, most of which are fairly obvious and not necessarily Cisco’s fault, I think that Cisco was actually one of the first companies to go this way and one of the first to show an actual example of a federated and consolidated solution. Because that is what this is all about: it’s not about offering a piece of hardware, it’s about offering a solution. Initiatives like VCE and VCN only show us that Cisco is moving forward and is actually pushing towards offering complete solution stacks.

My opinion? I like it. I think Cisco has delivered a usable showcase, and although I unfortunately have not been able to actually test it so far, I do really like the potential it offers and the way it was designed. If I ever get the chance to do some testing on a complete UCS stack, I’ll be sure to let you know more, but until then I at least hope that this post has made things a bit clearer and removed some of the questions you might have. And if that’s not the case, leave a comment and I will be sure to ask some more questions on your behalf.

Disclaimer:
The sponsors are each paying their share for this non-profit event. We, the delegates, are not paid to attend. Most of us will take some days off from our regular job to attend. What is paid for us is the flight, something to eat and the stay at a hotel. However as stated in the above post, we are not forced to write about anything that happens during the event, or to only write positive things.

Data Robotics, Drobo FS, GestaltIT, Storage, Tech Field Day

Drobo announces their new Drobo FS

In November 2009, Data Robotics Inc. released two new products, the Drobo S and the Drobo Elite. Yesterday I was lucky enough to be invited to a closed session with the folks from Data Robotics as they had some interesting news about a new product they are announcing today called the Drobo FS.

When we visited the Data Robotics premises with the entire Tech Field Day crew last November, one of the biggest gripes about the Drobo was that it relied on the DroboShare to provide an ethernet connection to the storage presented by the Drobo. The newly introduced Drobo S added an eSATA port, but it didn’t solve this limitation either, since it wasn’t even compatible with the DroboShare. The DroboShare itself would not have been a bad solution, were it not for the fact that it connects to the Drobo via a USB 2.0 connection, thus limiting the maximum speed one can achieve when accessing the disks.

[Image: front of the new Drobo FS]

Well, that part changes today with the introduction of the Drobo FS. Basically this model offers the same number of drives as the Drobo S, namely a maximum of 5, and exchanges the eSATA port for a gigabit ethernet port. The folks from Data Robotics said that you should see an estimated 4x performance improvement when comparing the Drobo FS to the DroboShare, and you also get the option of single or dual drive redundancy to ensure that no data is lost when one or two drives fail.

Included with all configurations, you will receive a CAT 6 ethernet cable, an external power supply (100v-240v) with a fitting power cord for your region, a user guide and quick start card (in print), and a Drobo resource CD with the Drobo Dashboard application, help files, and electronic documentation. The only thing that will change, depending on your configuration, is the number of drives that are included with the Drobo FS. You can order the enclosure without any drives at all, which would set you back $699.- (€519,- / £469,-), or you can get the version that includes a total of 10 terabytes of disk space for a total of $1499.- (€1079,- / £969,-).

As with the other Drobos, you are able to enhance the functionality of your Drobo with so-called DroboApps. This option will, for example, allow you to extend the two default protocols (CIFS/SMB and AFP) with additional ones such as NFS. Unfortunately we won’t be seeing iSCSI on this model, since according to the folks from Data Robotics they are aiming more towards a file-level solution than a block-level solution.

[Image: back of the new Drobo FS]

One of the newer applications on the Drobo FS is something that caught my eye. This application is targeted towards the private cloud and uses “Oxygen Cloud” as a service provider to provide file access to shared storage. This means that you can link your Drobos together (up to a current limit of 256 Drobo units) and allow these to share their files and shares. This will include options like access control and even features such as remote wipe, but a more complete feature list will follow today’s release.

One feature that was requested by some users hasn’t made it yet. The Drobo Dashboard, which is used to control the Drobo, is still an application that needs to be installed, but Data Robotics is looking at the option of changing this into something that might be controlled via a browser-based interface. However, no comments were made regarding a possible release date for such a web interface. Also under development is an SDK that will allow the creation of custom DroboApps. Again, a release date was not mentioned in the call.

I will try to get my hands on a review unit and post some tests once I have the chance. Also, I am looking forward to finding out more about the device when I meet the Drobo folks in person later this week during the Gestalt IT Tech Field Days in Boston, so keep your eye on this space for more to come.

GestaltIT, Tech Field Day

Getting ready for the Gestalt IT Tech Field Day 2010 – Boston

Last year in November I was fortunate enough to be invited to the Gestalt IT Tech Field Day which took place in San Jose. A recap of what happened there can be found here.

Some of you might not be aware of the concept of the tech field days, so let me give you an overview.

The origin can be found in an event called “Tech Day” that was initiated by HP. Basically, HP invited several bloggers from around the globe and offered them a technical discussion and a more in-depth view of several of their products.

Gestalt IT’s Stephen Foskett was one of the bloggers invited to this event, and he felt that this might be a good basis for what now makes up the Tech Field Day.

So, in a nutshell, the Tech Field Day brings together a group of independent people who are present in the various social media (think of Twitter, blogs, podcasts, the works) and have a technical background. These good folks are then treated to two packed days of presentations, discussions and hands-on sessions from the sponsors of this event.

You might want to think of this as vendor love, but you wouldn’t be quite right. First of all, there is no obligation to communicate about any of the things that are presented to you. What is even more important: when you decide to actually report on what happened, you can give your honest opinion, be it good or bad. Secondly, since the group of people that are invited have a very broad background, the services or products presented will usually get a very broad array of questions fired at them. These will range from very detailed questions, which could be about the choice of an algorithm, to something more general, such as the value of deduplication in a virtualized environment.

Because we are talking about people with backgrounds in (among others) storage, virtualization, operating systems, hardware, networking and analysis, you will find that the questions asked are usually tough on the presenters. These are people that know their stuff, and this is also why presenters get the recommendation not to turn this into a marketing show.

This is an event for the community, and the people who attend are very aware of that fact. Looking at the first event, you will see a lot of feedback coming from the attendees, and it is not just limited to the on-site discussions. We had discussions put on video in the pub, and there were dynamic conversations in the hotel lobby where delegates discussed ideas or even took the time to explain concepts to other delegates who were not experts in the same area.

So, here’s a list of the delegates that will be attending the event:

  • Jason Boche: Boche.net (@JasonBoche)
  • Carlo Costanzo: VMware Info (@CCostan)
  • David Davis: VMwareVideos (@DavidMDavis)
  • Greg Ferro: EtherealMind, Gestalt IT (@EtherealMind)
  • Edward Haletky: The Virtualization Practice (@Texiwill)
  • Robin Harris: Storage Mojo, ZDNet Storage Bits (@StorageMojo)
  • Greg Knieriemen: Storage Monkeys, iKnerd (@Knieriemen)
  • Simon Long: The SLOG, Gestalt IT (@SimonLong_)
  • Scott D. Lowe: Tech Republic, SearchCIO (@ScottDLowe)
  • John Obeto: Absolutely Windows (@JohnObeto)
  • Devang Panchigar: StorageNerve, Gestalt IT (@StorageNerve)
  • Bas Raayman: Technical Diatribe (@BasRaayman)
  • Simon Seagrave: TechHead (@Kiwi_Si)
  • Matt Simmons: Standalone Sysadmin (@StandaloneSA)
  • Gabrie van Zanten: Gabe’s Virtual World (@GabVirtualWorld)

If you check out the profiles of the attendees, you will see that these people should make for an interesting mix. What’s more, I am certain that these folks are able to ask questions that are not always easy to answer.

The sponsors for this event are:

  • Cisco Systems
  • Data Robotics
  • EMC Corporation
  • Hewlett-Packard Company
  • VKernel

So, look for some interesting posts coming from the delegates and on Gestalt IT. You can follow what happens online on Twitter by using the hashtag #TechFieldDay, and be on the lookout for lots of interesting things to come on April 8th and 9th.

One final thing that should be said.

Disclaimer:
The sponsors are each paying their share for this non-profit event. We, the delegates, are not paid to attend. Most of us will take some days off from our regular job to attend. What is paid for us is the flight, something to eat and the stay at a hotel. However as stated in the above post, we are not forced to write about anything that happens during the event, or to only write positive things.

GestaltIT, Performance, Storage, Tiering

“Storage tiering is dying.” But purple unicorns exist.

Chris Mellor over at the Register put an interview online with NetApp CEO Tom Georgens.

To quote from the Register piece:

He is dismissive of multi-level tiering, saying: “The simple fact of the matter is, tiering is a way to manage migration of data between Fibre Channel-based systems and serial ATA based systems.”

He goes further: “Frankly I think the entire concept of tiering is dying.”

Now, for those who are not familiar with the concept of tiering, it’s basically moving data between faster and slower media in the background. Classically, tiering is something that every organization is already doing: you consider the value of the information and, based on that, you decide whether the data should be instantly accessible from your more expensive hardware. Even at home you will see that, as the value decreases, you archive data to a medium with a different type of performance, like your USB archiving disk, or for example by burning it to a DVD.

For companies the more interesting part in tiering comes with automation. To put it simply, you want your data to be available on a fast drive when you need it, and it can remain on slower drives if you don’t require it at that moment. Several vendors each have their own specific implementation of how they tier their storage, but you find this kind of technology coming from almost any vendor.
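As a purely conceptual illustration of that automation, and not any vendor’s actual implementation, a tiering policy in its most naive form boils down to something like the Python sketch below. The thresholds and extent names are made up; real arrays work with much finer-grained statistics such as per-block heat maps and scheduled migration windows.

```python
import time

# Made-up policy: promote data touched within the last day, demote data
# untouched for a month, leave everything in between where it is.
PROMOTE_AFTER = 24 * 3600           # seconds
DEMOTE_AFTER = 30 * 24 * 3600

def choose_tier(last_access: float, current_tier: str, now: float) -> str:
    idle = now - last_access
    if idle < PROMOTE_AFTER:
        return "flash"
    if idle > DEMOTE_AFTER:
        return "sata"
    return current_tier             # lukewarm data stays put

now = time.time()
extents = [
    {"id": "lun1/ext042", "tier": "sata",  "last_access": now - 3600},
    {"id": "lun1/ext043", "tier": "flash", "last_access": now - 90 * 24 * 3600},
]
for ext in extents:
    target = choose_tier(ext["last_access"], ext["tier"], now)
    if target != ext["tier"]:
        print(f"migrating {ext['id']} from {ext['tier']} to {target}")
        ext["tier"] = target
```

The interesting part, and where the vendors differentiate themselves, is how transparently and how intelligently this movement happens.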

Apparently, NetApp has a different definition of tiering, since according to its CEO tiering is limited to the “migration of data between Fibre Channel-based systems and serial ATA based systems”. And this is where I heartily disagree with him. I purposely picked the example of home users, who are also using different tiers, and it’s no different for storage vendors.

The major difference? They remove the layer of fibre channel drives in between the flash and SATA drives. They still tier their data to the medium that is most fitting. They will try to do that automatically (and hopefully succeed in doing so), they just don’t call it tiering anymore.

As with all vendors, NetApp is also trying to remove the fibre channel drive layer, and I am convinced that this will be possible as soon as the prices of flash drives are comparable to those of regular fibre channel drives, and tiering is automated to the point that any actions performed are transparent to the connected system.

But if NetApp doesn’t want to call it tiering, that’s fine by me; I just hope they don’t honestly expect customers to fall for it. The rest of the world will continue to call it tiering, while NetApp tries to sell you a purple unicorn that moves data around disk types as if by magic.