Clariion, CX4, EMC, FLARE, Storage

What’s new in EMC Clariion CX4 FLARE 30

CLARiiON CX4 UltraFlex I/O module - Copyright: EMC Corporation.

A little while back, EMC released a new version of its CLARiiON Fibre Logic Array Runtime Environment, or in short "FLARE", operating environment. This release brings us to version 04.30 and again has some enhancements that might interest you, so once more here's a short overview of what this update packs:

Let’s start off with some basics. Along with this update you will find updated firmware versions for the following:

    Enclosure: DAE2		- FRUMon: 5.10
    Enclosure: DAE2-ATA	- FRUMon: 1.99
    Enclosure: DAE2P	- FRUMon: 6.71
    Enclosure: DAE3P	- FRUMon: 7.81

Major changes:

  • With version 04.30.000.5.507 you get support for FCoE. The prerequisite is a 10 Gigabit Ethernet I/O module in your CX4-120, CX4-240, CX4-480, or CX4-960 array.
  • SATA EFD (Enterprise Flash Drive) support.
  • Following that point, you can now use Fibre Channel EFD and SATA EFD in the same DAE.
  • And, you can now also mix Fibre Channel and SATA EFDs in the same RAID group.
  • VMware vStorage API support in the form of "vStorage full copy acceleration" (basically the array takes care of copying all the blocks, instead of sending everything to and from the host) and in the form of "Compare and Swap" (an enhancement to the LUN locking mechanism).
  • Rebuild avoidance. This feature reroutes I/O to the service processor that still has access to all the drives in the RAID group. Write caching needs to be enabled if you want to use this feature.
  • Virtual provisioning, basically EMC’s name for thin provisioning on the array.

There are some nice features in there, but for me personally the virtual provisioning, the FCoE support and the vStorage API support are the main ones.

One thing that caught my eye was in the section called "Limitations" for FLARE version 04.30.000.5.507. In the release notes you will find the following statement:

Host attach support – Supported host attached systems are limited to the following operating systems: Windows, VMWare, and Linux

That would mean you have a problem when you are using something else, like Solaris or HP-UX. I'm trying to get some confirmation, and I'll update this post as soon as I have more info.

Update

The statement has changed in the meantime:

Host attach support – Supported hosts that can be attached over an FCoE connection are limited to the following operating systems: Windows, VMWare, and Linux

This means that the limitation only applies to FCoE-connected hosts.


After some feedback on Twitter from, among others, Andrew Sharrock, I thought it might be wise to spend a few sentences on the Virtual Provisioning feature.

In short, Virtual Provisioning was already introduced with FLARE 28. The problem was that, at the time, you could only use the feature with thin pools. Basically, with this update you get support for a newer version of the feature. The things that were added are listed below, followed by a short CLI sketch:

  • Thick LUNs
  • LUN expand and shrink
  • Tiering preference (storage allocation from pools with mixed drives and different performance characteristics)
  • Per-tier tracking support of pool usage
  • RAID 1/0 support for pools
  • Increased limits for drive usage in pools
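
To give you a feel for what this looks like from the command line, here is a rough sketch of listing a pool and creating a pool-based LUN with naviseccli. Treat the exact switches as an assumption on my part – the syntax differs slightly between FLARE releases, so check the naviseccli reference for your version – and the SP address, pool ID, LUN number and capacity below are just placeholders:

    # List the existing storage pools, including how much space is in use
    naviseccli -h <SP IP> storagepool -list

    # Create a 100 GB thick LUN (LUN 50) in pool 0;
    # use "-type Thin" instead of "-type NonThin" for a thin LUN
    naviseccli -h <SP IP> lun -create -type NonThin -capacity 100 -sq gb -poolId 0 -l 50
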
Clariion, FLARE

Shorts: How to check the FLARE version of your CLARiiON?

I decided to introduce something new on my blog. It's something I've decided to call "shorts". In these shorts I will pick some fairly simple and common questions that come up in the searches leading to my blog, and try to give a short, descriptive answer to help you out.

So, in this short:

How to check the FLARE version of your CLARiiON?

There are two simple ways to check the release of your FLARE operating environment.

  1. Use the Navisphere GUI and right-click on the array icon inside Navisphere. Select Properties from the menu and go to the "Software" tab. This will give you an overview of all licensed software that is enabled on your array. Should you be in engineering mode, you will find all the software that was pre-loaded on the array, but only those items that have a dash/minus sign in front of them are enabled. In that list of items you should find something like this:
    FLARE-Operating-Environment 03.26.010.5.016
  2. You can also use navicli or naviseccli and enter the command "navicli ndu -list -isactive" to get a list of all active software on your array (a full invocation example follows after this list). The entry for your FLARE version would look similar to this:
    Name of the software package:        FLARE-Operating-Environment
    Revision of the software package:    03.26.010.5.016
    Commit Required:                     NO
    Revert Possible:                     NO
    Active State:                        YES
    Required packages:                   FA_MIB 260, AnalyzerProvider 260, RPSplitterEngine 260, MVAEngine 260, OpenSANCopy 260, MirrorView 260, SnapView 260, EMCRemoteNG 260, SANCopyProvider 260, SnapViewProvider 260, SnapCloneProvider 260, MirrorProvider 260, CLIProvider 260, APMProvider 260, APMUI 260, AnalyzerUI 260, MirrorViewUI 260, SANCopyUI 260, SnapViewUI 260, ManagementUI 260, ManagementServer 260, Navisphere 260, Base 263
    Is installation completed:           YES
    Is this System Software:             NO
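
For completeness, a full invocation from a management host usually looks something like the line below. The address and credentials are placeholders, and the grep at the end is just a convenience (on a Unix/Linux management station) to pull out the FLARE entry:

    naviseccli -h 192.168.1.10 -user admin -password mypassword -scope 0 ndu -list -isactive | grep -A 1 "FLARE-Operating-Environment"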

As you can see, finding out which version of FLARE you have is actually quite simple. Good luck, and let me know if this works for you.

Clariion, CX3, EMC, GestaltIT, Storage

The Asymmetrical Logical Unit Access (ALUA) mode on CLARiiON

I’ve noticed that I have been getting a lot of search engine hits relating to the various features, specifications and problems on the EMC CLARiiON array. One of the searches was related to a feature that has been around for a bit. It was actually introduced in 2001, but in order to give a full explanation I’m just going to start at the beginning.

The beginning is actually somewhere in 1979, when the founder of Seagate Technology, Alan Shugart, created the Shugart Associates System Interface (SASI). This was the early predecessor of SCSI and had a very rudimentary set of capabilities: only a few commands were supported and speeds were limited to 1.5 Mb/s. In 1981, Shugart Associates was able to convince the NCR Corporation to team up, and thereby convinced ANSI to set up a technical committee to standardize the interface. This was realized in 1982 in the form of the "X3T9.2 technical committee", and resulted in the name being changed to SCSI.

The committee published its first interface standard in 1986, and would grow into the group now known as the "International Committee for Information Technology Standards", or INCITS, which is responsible for many of the standards used by storage devices, such as T10 (SCSI), T11 (Fibre Channel) and T13 (ATA).

Now, in July 2001 the second revision of the SCSI Primary Commands (SPC-2) was published. It included a feature called Asymmetrical Logical Unit Access mode, or in short ALUA mode, and further changes were made to it in the newer revisions of the primary command set.

Are you with me so far? Good.

On Logical Unit Numbers

Since you came here to read this article I will just assume that I don't have to explain the concept of a LUN. But what I might need to explain is that it's common to have multiple connections to a LUN in environments that are concerned with the availability of their disks. Depending on the fabric and the number of Fibre Channel cards you have connected, you can have multiple paths to the same LUN. And if you have multiple paths you might as well use them, right? It's no good having the additional bandwidth lying around and then not using it.

Since you have multiple paths to the same disk, you need a tool that will somehow merge these paths and tell your operating system that this is the same disk. This tool might even help you achieve a higher throughput since it can balance the reads and writes over all of the paths.

As you might already have guessed there are multiple implementations of this, usually called Multipathing I/O, MPIO or just plainly Multipath, and you will be able to find a solution natively or as an additional piece of software for most modern operating systems.
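
For example, on a Linux host using the native device-mapper-multipath solution, a quick way to see how the paths to a LUN are grouped is shown below; take it as a generic illustration rather than a CLARiiON-specific recipe:

    # Show all multipath devices, their path groups and the state of
    # each individual path (active, enabled, failed, ...)
    multipath -ll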

What might be less obvious is that the connections to these LUNs don't have to behave in the same way. Depending on what you are connecting to, you have several states for such a connection. Or, to draw the analogy to the CX4, some paths are active and some paths are passive.

Normally a path to a CLARiiON is considered active when we are connected to the service processor that is currently serving you the LUN. CLARiiON arrays are so-called "active/passive" arrays, meaning that only one service processor is in charge of a LUN, and the secondary service processor is just waiting for a signal to take over ownership in case of a failure. The array will normally receive a signal that tells it to switch from one service processor to the other. This routine is called a "trespass" and happens so fast that you usually don't really notice such a failover.

When we go back to the host, the connection state will be shown as active for the connection that is routed to the active service processor, and as something like "standby" or "passive" for the connection that goes to the service processor that is not serving you that LUN. Also, since you have multiple connections, it's not unlikely that the paths differ in other properties as well. Things like bandwidth (you may have added a faster HBA later) or latency can be different. Due to these characteristics, the target ports might need to indicate how efficient a path is. And if a failure should occur, the link status might change, causing a path to go offline.

You can check the status of a path to a LUN by asking the port on the storage array, the so-called "target port". For example, you can check the access characteristics of a path by sending the following SCSI command:

  • REPORT TARGET PORT GROUPS (RTPG)

Similar commands exist to actually set the state of a target port.
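
On a Linux host you can actually send that command yourself using the sg3_utils package. This is just meant as an illustration and is nothing CLARiiON-specific; the device name is a placeholder, and if your sg3_utils version behaves differently, check "sg_rtpg --help":

    # Report the target port groups and decode the asymmetric access state
    # (active/optimized, active/non-optimized, standby, ...) for the path
    # behind /dev/sdc
    sg_rtpg -d /dev/sdc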

So where does ALUA come in?

What the ALUA interface does is allow an initiator (your server, or the HBA in your server) to discover target port groups: simply put, a group of ports that provide a common failover behavior for your LUN(s). By using the SCSI INQUIRY response, we find out to what standard the LUN adheres, whether the LUN provides symmetric or asymmetric access, and whether the LUN uses explicit or implicit failover.

To put it more simply, ALUA allows me to reach my LUN via the active and the inactive service processor. Oversimplified this just means that all traffic that is directed to the non-active service processor will be routed internally to the active service processor.

On a CLARiiON that is using ALUA mode this will result in the host seeing paths that are in an optimal state, and paths that are in a non-optimal state. The optimal path is the path to the active storage processor; it is ready to perform I/O and will give you the best performance. The non-optimal path is also ready to perform I/O but won't give you the best performance, since you are taking a detour.

The ALUA mode is available on the CX-3 and CX-4, but the results you get can vary between both arrays. For example, if you want to use ALUA with your vSphere installation you will need to use the CX-4 with FLARE 26 or newer and change the failover mode to "4". Once you have changed the failover mode you will see a slightly different trespass behavior, since you can now either manually initiate a trespass (explicit) or the array itself can perform a trespass once it notices that the non-optimal path has received at least 128,000 more I/Os than the optimal path (implicit).
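
If you want to verify from the vSphere side that your devices were actually claimed in ALUA mode, something along these lines should help. The syntax below is the vSphere 4.x flavor of esxcli, so treat it as a sketch and adjust it for newer releases:

    # List the NMP devices together with the SATP that claimed them
    esxcli nmp device list

    # For a CLARiiON LUN presented with failover mode 4 you would expect
    # to see VMW_SATP_ALUA_CX as the storage array type plugin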

Depending on which software you use – PowerPath or, for example, the native solution – you will find that ALUA is either supported or not. You can take a look at Primus ID: emc187614 in Powerlink to get more details on supported configurations. Please note that you need a valid Powerlink account to access that Primus entry.

Clariion, EMC, Storage

Downloading the EMC CLARiiON CX / Navisphere simulator

I just wanted to write a really short post to share this tip with you. A lot of people seem to stumble on this site while they are looking to do some tests. Now, as always, you will most likely not have a full-blown storage array sitting around that is just waiting to be a guinea pig while also serving your production environment.

A partial solution is to test things in a simulator. For people who want to test things on their Cisco switches there is an open source “Internetwork Operating System” or IOS simulator that gives you a taste of the real thing. Admittedly it’s not the same as having a full environment, but it might just help you in testing a scenario or routine that you have in mind.

Now, you will find that there is also a simulator for the CLARiiON environment, called the "Navisphere simulator", and a CX simulator. The problem is that the simulator can't be downloaded with any old Powerlink account. Partners and employees can use a simple download in Powerlink (Home => Products => Software E-O => Navisphere Management Suite => Demos), but if you don't fall under that category you will have a hard time actually finding a download.

Normally, to get the simulator you would need to order some CLARiiON training. The Navisphere and CX simulators are actually packaged with the Foundations course, and you can also find them in one of the video instructor-led trainings. The problem is that you or your boss will pay quite a bit for said trainings, and this is not great if you just want to perform a quick test.

Now for my tip… Buy the "Information Storage and Management" book (ISBN-13: 978-0-470-29421-5 / ISBN-10: 0-470-29421-3) from your favorite book supplier. Besides being a good read, it also allows you to register on a special site created for the book, where you can find some learning aids that include the Navisphere simulator and the CX simulator. You can find the book starting at around $40, and there's also a version available for the Kindle if you are into e-books. You don't need any special information to register the book on the EMC site, so it's quite a quick way to get the simulators and check whether you can actually simulate the scenario you have in mind.

Clariion, FLARE, Storage

Is it possible to downgrade the Clariion CX FLARE to a lower version?

After checking the searches that lead to my blog, one came up that was interesting to me, so I decided to answer the question in a fairly short post, since it might be useful to some. The question was whether it is possible to downgrade from one major version of the FLARE operating environment to a lower version.

The short answer is: Yes. It is possible to downgrade, but there are some situations that you need to consider, and here are some scenarios with the matching answer:

Major versions:

So, let’s say you want to downgrade from FLARE 29 to FLARE 28.

  • If you have upgraded, but not yet "committed" the new version, you are all set. You can downgrade without any problems. However, it is most unlikely that this is a situation you will actually encounter. With newer versions that bring you features such as spinning down drives or shrinking of thin LUNs, you actually need to commit the newer version. If you don't, you won't be able to use these new features, which is why it is most unlikely to find people running an uncommitted major FLARE version for a longer period of time.
  • If you have upgraded and committed the new major version you can still downgrade. The drawback is that you can't do it yourself. In such a case you need to consider how much the downgrade brings you, because you need to contact one of EMC's engineering teams. They can install an older version, but keep in mind that this is not something that is easily done.

Minor versions:

You want to downgrade from one of the higher patch versions, for example from patch .018 to .010.

  • Again, if you have upgraded, but not yet "committed" the new patch, you are all set. You can downgrade without any problems.
  • If you have committed the new patch you can still downgrade, but this involves engineering mode, and I would still recommend contacting one of EMC's engineering teams so that they can help you. It's not an option that is recommended or supported as a self-service scenario, but the procedure is not as intrusive as it is when downgrading a major release.

So, to sum it up: you can always downgrade from both a major and a minor release. If you haven't committed the changes yet, you are always good to go. If you have committed, then just contact EMC and they can actually help you downgrade, but keep in mind that even if they help you, there will be limits to how far you can downgrade.
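
By the way, checking whether a package has been committed is something you can do yourself. As shown in the short about checking your FLARE version, the "ndu -list" output contains the fields you need; the SP address below is a placeholder:

    naviseccli -h <SP IP> ndu -list -isactive

    # In the output, "Commit Required: YES" means the package has not been
    # committed yet, and "Revert Possible: YES" means you can still go back
    # without involving EMC's engineering teams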

Clariion, CX4, EMC, FLARE

What’s new in EMC Clariion CX4 FLARE 29

CLARiiON CX4 UltraFlex I/O module - Copyright: EMC Corporation.

Along with the release of FAST, EMC also released a new version of its CLARiiON Fibre Logic Array Runtime Environment, or in short "FLARE", operating environment. This release brings us to version 04.29 and offers some interesting enhancements, so I thought I'd give you an overview of what's in there:

Let’s start off with some basics. Along with this update you will find updated firmware versions for the following:

    Enclosure: DAE2		- FRUMon: 5.10
    Enclosure: DAE2-ATA	- FRUMon: 1.99
    Enclosure: DAE2P	- FRUMon: 6.69
    Enclosure: DAE3P	- FRUMon: 7.79

Major changes:

  • VLAN tagging for 1Gb/s and 10Gb/s iSCSI interfaces.
  • Support for 10Gb/s dual port optical I/O modules.
  • Spin down support for the storage system and/or RAID group. Once enabled, drives spin down automatically if no user or system I/O has been recognized for 30 minutes. The following SATA drives support spin down:
    • 00548797
    • 00548853
    • 00548829
  • Shrinking of FLARE LUNs and metaLUNs. Note that this is only supported on Windows hosts that are capable of shrinking logical disks (see the diskpart sketch after this list).
  • Upgrade of UltraFlex I/O modules to versions with increased performance, more specifically 8Gb FC and 10Gb iSCSI. Note that only an upgrade is supported; a downgrade from, for example, 8Gb FC to 4Gb FC will not work.
  • Rebuild logging is now supported on RAID6 LUNs, which means that a drive that may have been issuing timeouts will have its I/O logged, and only the pending writes will be rebuilt.
  • The maximum number of LUNs per storage group has been increased from 256 for all CX4 models with FLARE 28 to the following:
    • CX4-120 – 512
    • CX4-240 – 512
    • CX4-480 – 1024
    • CX4-960 – 1024
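
As a quick illustration of the host-side part of the LUN shrink mentioned above: shrinking a logical disk on a Windows Server 2008 host looks roughly like this in diskpart. The volume number is a placeholder, and the "desired" value is in MB (so 10240 frees up roughly 10 GB):

    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> shrink desired=10240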

You can find an overview with the supported connectivity options and front-end and back-end ports right here.