There’s no beating about the bush on that fact. And it’s not because there aren’t any cool new things out there. I have roughly 20 posts in draft, and a lot of cool things have happened and been released in the meantime. Examples would be the new VNX and VNXe from EMC. I have a take on the NetApp FlexPod, and there were a lot of things that I learned or had to (re)consider after talking to customers, and it’s all good stuff.
And still you haven’t seen any updates here. But why?
Well, the truth of the matter is that my new job is great! It’s actually so great that I am constantly busy, and that has changed my ability to finish my drafts and rough blog posts.
To give you an idea, let me give you an overview of the week. It started with conference calls with colleagues to finish a Vblock draft configuration for a large service provider proof of concept. The afternoon was filled with preparation for a workshop I gave on EMC’s IONIX Unified Infrastructure Manager, which is basically a management and orchestration tool for the Vblock.
Think of the UIM as a tool that allows you to predefine flexible hardware configurations, and then roll out those configurations. For example, roll out between 1 and 4 blades and 20–400 GB of storage of a certain self-defined grade, like a large class that has the fastest blades and only SSD storage. Your admin just needs to decide how many blades he wants to deploy and how much storage he needs. He clicks the button, and 45 minutes later he has those blades up and running with the amount of storage he selected and a fresh ESX 4 installed, without having to run to a storage admin or a network admin, perhaps even multiple times.
So, the day after that I was at a large partner, actually giving that workshop. That went so smoothly that we even had a chance to finish early and spend time on some different topics that the participants were interested in.
The next two days were spent in a workshop for a large service provider that wants to create a private cloud offering. I had the pleasure of working in a team of roughly 35 people, including Cisco, VMware, EMC, VCE and customer representatives, who were all top at what they do. I was lucky enough to work on a high-level architecture for the vSphere and vCloud Director part of it, together with Richard Damoser. This went so well that, although the first rough draft still needs to be written up in template form, it was able to set a basis for a design and its interfaces. Thanks once more for your amazing work, Richard!
That being done, I got in my car and drove down to CeBIT, the world’s largest IT convention, to help support my colleagues on booth duty.
Now, add in some conference calls, emails, colleagues calling for support plus the regular stuff that needs to get done, and you have a working week that goes well beyond the regular 40 hours. The week before I even got an email asking me if I could “briefly” fly down to South Africa, which unfortunately wasn’t possible due to my full calendar. This is not a complaint, since I’m having an absolute blast, but it means that stuff like blogging just gets a lower priority.
But, dear readers, I’ll try to improve!
And for now I want to thank you for continuing to read, and I hope that the insight into a week’s worth of work was somewhat interesting. Oh, and while I’m at it, I need to apologize to Steve Chambers for not sending out the presentation he requested. I was just swamped, sorry for that Steve!
EMC EBC, Cork
I spent some days in Cork, Ireland this week presenting to a customer. Besides the fact that I’m now almost two months into my new job, and I’m loving every part of it, there is one part that is extremely cool about my job.
I get to talk to customers about very cool, new technology that can help them get their job done! And while it’s in the heart of every techno-loving geek to get caught up in bits and bytes, I’ve noticed one thing very quickly: the technology is usually not the part that is limiting the customer from doing new things.
Everybody knows about that last part. Sometimes you will actually run into a problem where some new piece of kit is wreaking havoc and we can’t seem to put our finger on what the problem is. But most of the time, we get caught up in entirely different problems altogether. Things like processes, certifications (think of ISO, SOX, ITIL), compliance, security, or something as “simple” as people who don’t want to learn something new or feel threatened because their role might be changing.
And this is where technology comes in again. I had the opportunity to talk about several things with this customer, but one of my key points was that technology should help make my life easier. One of the cool new things that will actually help me in that area was a topic that was part of my presentation.
Some of the VMware admins already know about this technology, and I would say that most of the folks that read blogs have already heard about it in some form. But when talking to people at conventions or in customer briefings, I get to introduce folks over and over to a new technology called VAAI (vStorage API for Array Integration), and I want to explain again in this blog post what it is, and how it might be able to help you.
So where does it come from?
Well, you might think that it is something new. And you would be wrong. VAAI was introduced as a part of the vStorage API during VMworld 2008, even though the release of the VAAI functionality to the customers was part of the vSphere 4.1 update (4.1 Enterprise and Enterprise Plus). But VAAI isn’t the entire vStorage API, since that consists of a family of APIs:
vStorage API for Site Recovery Manager
vStorage API for Data Protection
vStorage API for Multipathing
vStorage API for Array Integration
Now, the “only API” that was added with the update from vSphere 4.0 to vSphere 4.1 was the last API, called VAAI. I haven’t seen any of the roadmaps yet that contain more info about future vStorage APIs, but personally I would expect to see even more functionality coming in the future.
And how does VAAI make my life easier?
If you read back a couple of lines, you will notice that I said that technology should make my life easier. Well, with VAAI this is actually the case. Basically what VAAI allows you to do is offload operations on data to something that was made to do just that: the array. And it does that at the ESX storage stack.
As an admin, you don’t want your ESX(i) machines to be busy copying blocks or creating clones. You don’t want your network being clogged up with storage vMotion traffic. You want your host to be busy with compute operations and with the management of your memory, and that’s about it. You want as much reserve as you can on your machine, because that allows you to leverage virtualization more effectively!
So, this is where VAAI comes in. Using the API that was created by VMware, you can now use a set of SCSI commands:
ATS: This command helps you out with hardware assisted locking, meaning that you don’t have to lock an entire LUN anymore but can now just lock the blocks that are allocated to the VMDK. This can be of benefit, for example when you have multiple machines on the same datastore and would like to create a clone.
XCOPY: This one is also called “full copy” and is used to copy data and/or create clones, avoiding sending all of the data back and forth through your host. After all, why would your host need the data if everything is stored on the array already?
WRITE-SAME: This one is also known as “bulk zero” and will come in handy when you create a VM. The array takes care of writing zeroes on your thin and thick VMDKs, and helps out at creation time for eager zeroed thick (EZT) guests.
Sounds great, but how do I notice this in reality?
Well, I’ve seen several scenarios where, for example during a Storage vMotion, you would see a reduction in CPU utilization of 20% or even more. In other scenarios, you should also see a reduction in the time it takes to complete an operation, and in the resources that are allocated to perform it (usually CPU).
Does that mean that VAAI always reduces my CPU usage? Well, in a sense: yes. You won’t always notice a CPU reduction, but one of the key criteria is that with VAAI enabled, all of the SCSI operations mentioned above should always perform faster than without VAAI. That means that even when you don’t see a reduction in CPU usage (which is normally the case), you will see that since the operations are faster, you get your CPU power back more quickly.
Ok, so what do I need, how do I enable it, and what are the caveats?
Let’s start off with the caveats, because some of these are easy to overlook. The hardware offload will not be used in any of the following cases:
The source and destination VMFS volumes have different block sizes
The source file type is RDM and the destination file type is non-RDM (regular file)
The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
The source or destination VMDK is any sort of sparse or hosted format
The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
The VMFS has multiple LUNs/extents and they are all on different arrays
Or short and simple: “Make sure your source and target are the same”.
Key criteria to use VAAI are the use of vSphere 4.1 and an array that supports VAAI. If you have those two prerequisites set up you should be set to go. And if you want to be certain you are leveraging VAAI, check these things:
In the vSphere Client inventory panel, select the host
Click the Configuration tab, and click Advanced Settings under Software
Check that these options are set to 1 (enabled):
DataMover/HardwareAcceleratedMove
DataMover/HardwareAcceleratedInit
VMFS3/HardwareAcceleratedLocking
Note that these are enabled by default. And if you need more info, please make sure that you check out VMware knowledge base article 1021976.
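If you would rather check this from the service console than through the vSphere Client, the same settings can also be queried on the command line. This is a sketch based on the esxcfg-advcfg tool; double-check the exact syntax in KB 1021976 for your ESX version:

```shell
# Query the three VAAI-related advanced settings on an ESX 4.1 host
# (a returned value of 1 means the primitive is enabled)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking
```

The same tool with the `-s` flag can be used to toggle a setting, should you ever need to disable one of the primitives for troubleshooting.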
Also, one last word on this. I really feel that this is a technology that will make your life as a VMware admin easier, so talk to your storage admins (if that person isn’t you in the first place) or your storage vendor and ask if their arrays support VAAI. If not, ask them when they will support it. Not because it’s cool technology, but because it’s cool technology that makes your job easier.
And, if you have any questions or comments, please hit me up in the remarks. I would love to see your opinions on this.
Update: 2010-11-30
VMware guru and Yellow Bricks mastermind Duncan Epping was kind enough to point me to a post of his from earlier this week that went into more detail on some of the upcoming features. Make sure you check it out right here.
So folks, I helped a colleague install the VMware vCloud Director. In case you are not aware of what the vCloud Director is I can give you a very rough description.
Think about how you deploy virtual machines. Usually you will deploy one machine at a time, which is a good thing if you only need one server. But usually in larger environments, you will find that applications or application systems are not based on a single server. You will find larger environments that consist of multiple servers that will segregate functions, so for example, your landscape could consist of a DB server, an application server, and one or more proxies that provide access to your application servers.
If you are lucky, the folks installing everything will only request one virtual machine at a time. Usually that isn’t the case though. Now, this is where vCloud Director comes in. It allows you to roll out a set of virtual machines at a time, as a landscape. But it doesn’t stop there, since you can do a lot more: you can pool things like storage and networks, and you get tight integration with vShield to secure your environment. But this should give you a very rough idea of what you can do with the vCloud Director. For a more comprehensive overview, take a look at Duncan’s post here.
Anyway, let’s dig in to the technical part.
There are plenty of blog posts that cover how to set up the CentOS installation, so I won’t cover that at great length. If you are looking for that info, take a peek here. If you want to install the Oracle DB on CentOS, take a look here to see how it’s done.
Here are some tips that might come in useful during the install:
Use the full path to the keytool. There is a slight difference between /usr/bin/keytool, /usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre/bin/keytool and /opt/vmware/cloud-director/jre/bin/keytool. Be sure to use one of those, and if the commands to create and import your self-signed certificates are not working for some reason be sure to try a different one.
If you just create a database and only browse through the installation guide, you might have a hard time once you run the installation binary. Basically, you run the “dbca” tool to create an empty database. If you by any chance forget to create the database files before running the installation binary (or the vCD configuration tool, for that matter), you will receive an error while the .sql database initialization scripts under /opt/vmware/cloud-director/db/oracle are run. The error message will tell you that there was an error creating the database.
Well, if only you had read the installation guide properly. Basically, what you do is start up the database:
sqlplus "/ as sysdba"
startup
Make sure that the path you use in the “create tablespace” command actually exists. If it doesn’t, you need to perform “mkdir $ORACLE_HOME/oradata” first. Then create the tablespaces and corresponding files:
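As a rough sketch of what that looks like (the tablespace names, file names and sizes here are examples, not a verbatim copy of the install guide, so check the guide for the values that match your vCD version):

```sql
-- Run from the sqlplus "/ as sysdba" session started above.
-- vCD expects a data and an index tablespace; adjust paths and sizes
-- to your environment.
CREATE TABLESPACE cloud_data
  DATAFILE '/opt/oracle/oradata/cloud_data01.dbf'
  SIZE 1000M AUTOEXTEND ON;

CREATE TABLESPACE cloud_indx
  DATAFILE '/opt/oracle/oradata/cloud_indx01.dbf'
  SIZE 500M AUTOEXTEND ON;
```

After that, create the vCD database user with these tablespaces as its defaults, as described in the installation guide, and re-run the configuration tool.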
First of all, I need to apologize. There have been no updates for quite some time now. Things were hectic with me wrapping up things with my previous employer, and with getting things organized at my new spot. Things are slowly coming together, but it’s been quite time intensive, which left me with little time to actually write much for my blog.
But, things are hopefully changing. I’m headed for VMworld 2010 in Copenhagen, Denmark on Sunday, and I’m bringing along my digital notebooks. Since I’m still fairly new in my new role, I won’t have quite the same schedule as my colleagues, and I hope that this will allow me some time to visit some of the sessions and create some notes that I’m able to share with you all.
So, keep your eyes open for things to come in this space!
Some of you may have already read about the contest over at the Gestalt IT website, but I thought this contest was nice enough to give you an overview here and link back to the contest.
Now, I’m guessing that most of the folks reading here will be familiar with the event called VMworld, but for those that aren’t, here’s a short overview:
The annual VMworld gathering in San Francisco has become the central event for virtualization-related companies. Although focused on VMware, the conference draws many companies. And the labs and sessions are really awesome!
So, what is Gestalt IT doing? Because most people can’t afford to attend if their boss is not allowing them to go, Gestalt IT decided to set up a contest that will cover the following (if you should win):
One conference ticket.
One roundtrip air ticket from one of the major airports near you to SFO or another airport in the San Francisco area.
Three nights at a hotel within 1 mile of the Moscone Center in San Francisco (VMworld runs August 30 through September 2).
Now the final question would be what you need to do to enter, right? Well, we decided that VMworld was created for the community of VMware customers, users and partners. So, what we want to know from you is what you will do for our and/or your community by attending VMworld. Will you take notes from sessions and try to help people back home? Are you going to try and get some video interviews that will answer the burning questions your community may have? We want to know how you plan on “paying it forward”!
So, what are you waiting for? Get on over to the contest page to read all of the details and to enter the contest. We look forward to seeing your entries!
Update!
I received word that the contest has been extended. The winners will be announced Friday, August 13th. Yes, you read that right, winners. We were lucky enough to find some additional sponsors, which means that we will now give away two trips to VMworld. Check out the details here!
Also, a special thank you to Symantec and Xsigo for their help as a sponsor. And a thank you to two wonderful additional sponsors, Zetta and Veeam, that made it possible to pick two winners.
Time for another short! The Google searches leading to this blog show queries coming in based on the cpuid.corespersocket setting. In this short I’ll try to explain what this setting is for and how it behaves. So, let’s dig right in!
The cpuid.corespersocket setting
In a sense, you would assume that the name of the setting says it all. And in a sense, it does. In the physical world, you have a number of sockets on your motherboard. This number of sockets is normally also the number of physical CPUs on said motherboard (at least in an ideal world), and each CPU will have one or more cores on it.
One can describe it as an integrated circuit to which two or more individual processors (called cores in this sense) have been attached.
…..
The amount of performance gained by the use of a multi-core processor depends very much on the software algorithms and implementation. In particular, the possible gains are limited by the fraction of the software that can be parallelized to run on multiple cores simultaneously; this effect is described by Amdahl’s law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or beyond even that if the problem is split up enough to fit within each processor’s or core’s cache(s) due to the fact that the much slower main memory system is avoided.
Now, this sounds quite good, but some of you may ask what kind of influence this has on your virtualized systems. The most obvious answer would be “none at all”. This is because, by default, your virtualized system will see the cores as physical CPUs and be done with it.
So, now you are probably wondering why VMware would even distinguish between cores and sockets. The answer is quite simple: it’s due to licensing. Not so much by VMware, but by the software or operating system that you would like to virtualize. You see, some of that software is licensed per core, and some is licensed by the number of sockets (some even combine the two).
So how do I use it?
As with all things computer related… it depends. With ESX 3.5 you have no chance of using it. With ESX 4.0, you can actually use this feature, but it is not officially supported (someone please point me in the right direction if this is incorrect). And starting with ESX 4.1 the setting is officially supported, and even documented in the VMware Knowledge Base as KB article 1010184.
Simply put, you can now create a virtual machine with, for example, 4 vCPUs and set cpuid.coresPerSocket to 2. This will make your operating system assume that you have two CPUs, and that each CPU has two cores. If you create a machine with 8 vCPUs and again select a cpuid.coresPerSocket of 2, your operating system will report 4 dual-core CPUs.
You can actually set this value by either going this route:
Power off the virtual machine.
Right-click on the virtual machine and click Edit Settings.
Click Hardware and select CPUs.
Choose the number of virtual processors.
Click the Options tab.
Click General, in the Advanced options section.
Click Configuration Parameters.
Include cpuid.coresPerSocket in the Name column.
Enter a value ( try 2, 4, or 8 ) in the Value column.
Note: This must hold:
#VCPUs for the VM / cpuid.coresPerSocket = An integer
That is, the number of vCPUs must be divisible by cpuid.coresPerSocket. So if your virtual machine is created with 8 vCPUs, coresPerSocket can only be 1, 2, 4, or 8.
The virtual machine now appears to the operating system as having multi-core CPUs with the number of cores per CPU given by the value that you provided in step 9.
Click OK.
Power on the virtual machine.
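The divisibility rule from the note above is easy to sanity-check before you touch a VM. This is just an illustrative helper, not part of any VMware tooling:

```python
def valid_cores_per_socket(vcpus: int) -> list[int]:
    """Return all cpuid.coresPerSocket values that divide the vCPU count evenly."""
    return [c for c in range(1, vcpus + 1) if vcpus % c == 0]

# An 8-vCPU VM can only use a coresPerSocket of 1, 2, 4 or 8
print(valid_cores_per_socket(8))  # → [1, 2, 4, 8]
```

Any value outside that list would leave the guest with a partial socket, which is why the setting is rejected.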
If the setting isn’t shown, for example for those who want to experiment with it under ESX 4.0, you can create the values in the following way:
Power off the virtual machine.
Right-click on the virtual machine and click Edit Settings.
Click the Options tab.
Click General, under the Advanced options section.
Click Configuration Parameters.
Click Add Row.
Enter “cpuid.coresPerSocket” in the Name column.
Enter a value ( try 2, 4, or 8 ) in the Value column.
Click OK.
Power on the virtual machine.
To check if your settings actually worked, you can use the Sysinternals tool called Coreinfo on your Windows systems, and on Linux you can perform a simple “cat /proc/cpuinfo” to see if everything worked.
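On Linux, the socket and core counts can be pulled out of /proc/cpuinfo with a couple of greps. The sketch below runs against a sample file (a hypothetical 2-socket, 2-cores-per-socket guest) so the logic is easy to follow; point it at the real /proc/cpuinfo inside your VM instead:

```shell
# Build a sample cpuinfo for a guest with 4 vCPUs and coresPerSocket=2
cat > /tmp/sample_cpuinfo <<'EOF'
processor	: 0
physical id	: 0
cpu cores	: 2
processor	: 1
physical id	: 0
cpu cores	: 2
processor	: 2
physical id	: 1
cpu cores	: 2
processor	: 3
physical id	: 1
cpu cores	: 2
EOF

# Sockets = number of unique "physical id" values
sockets=$(grep "physical id" /tmp/sample_cpuinfo | sort -u | wc -l)
# Cores per socket as reported by the guest
cores=$(grep -m1 "cpu cores" /tmp/sample_cpuinfo | awk '{print $NF}')
echo "sockets=$sockets cores_per_socket=$cores"
```

If the numbers reported by the guest don’t match what you configured, re-check the cpuid.coresPerSocket value and the divisibility rule above.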
When you come to think about it, people who work in the IT sector are all slightly nuts. We all work in an area that is notorious for trying to make itself not needed. When we find repetitive tasks, we try to automate them. When we have a feeling that we can improve something, we do just that. And by doing that, we try to remove ourselves from the equation where we possibly can. In a sense, we try to make ourselves invisible to the people working with our infrastructure, because a happy customer is one that doesn’t even notice that we are there or did something to allow him to work.
Traditional IT shops were loaded with departments that were responsible for storage, for networking, for operating systems and loads more. The one thing that each department has in common? They tried to make everything as easy and smooth as possible. Usually you will find loads of scripts that perform common tasks, automated installations and processes that intend to remove the effort from the admins.
In comes a new technology that allows me to automate even more, that removes the hassle of choosing the right hardware. That helps me reduce downtimes because of (un)planned maintenance. It also helps me reduce worrying about operating system drivers and stuff like that. It’s a new technology that people refer to as server virtualization. It’s wonderful and helps me automate yet another layer.
All of the people who are into tech will now say “Cool! This can help me make my life easier”, and your customer will thank you because it’s an additional service you can offer, and it helps your customer work. But the next question your customer is going to ask you is probably going to be something along the lines of “Why can’t I virtualize the rest?”, or perhaps even “Why can’t I virtualize my application?”. And you know what? Your customer is absolutely right. Companies like VMware are already sensing this, as can be read in an interview at GigaOM.
The real question your customer is asking is more along the lines of “Who cares about your hardware or operating system?!”. And as much as it pains me to say it (being a person who loves technology), it’s a valid question. When it comes to true virtualization, why should it bother me if I am running on Windows, Unix, Mac or Linux? Who cares if there is an array in the background that uses “one point twenty-one jiggawatts” to transport my synchronously mirrored historic data back to the future?
In the long run, I as a customer don’t really care about either software or hardware. As a customer I only care about getting the job done, in the way that I expected to, preferably as cheaply as possible and with the availability I need. In an ideal world, the people and the infrastructure in the back are invisible, because that means they did a good job, and I’m not stuck wondering what my application runs on.
This is the direction we are working towards in IT. It’s nothing new, and the concept of doing this in a centralized/decentralized fashion seems to change from decade to decade, but the one thing that remained constant was that your customer only cared about getting the job done. So, it’s up to us. Let’s get the job done and try to automate the heck out of it. Let’s remove ourselves from the equation, because time that your customer spends talking to you is time not spent using his application.
Some of you who read the title of this post will already have a hunch what this is all about. Heraclitus seems to be the person who first stated:
Nothing endures but change.
And I can only agree with that. I remember reading a post from Nick Weaver about an important change in his professional life, and I love this quote:
By taking this position I am intentionally moving myself from the top man on the totem pole to the lowest man on the rung.
And I think that most people who have read Nick’s blog know that this wasn’t entirely the truth, especially when looking at what he has been able to do until now.
Well, Nick can be assured now. There’s actually one person on the team who is “lower on the rung”. That person would be me.
Time for a change!
I am joining EMC and taking on the role of vSpecialist, or as my new contract says “Technical Consultant VCE”.
I am also going to be leaving my comfort zone, and leaving behind a team of people that have been great to work with. I have been working at SAP for seven years now, and the choice to leave wasn’t easy. I was lucky enough to have worked with a multitude of technologies in an environment that was high-paced and stressful, but very rewarding, and I want to thank all of my colleagues for making the journey interesting! Even so, it’s time for me to make a change.
I was lucky enough to get to know several people who already work in a similar role, and if there’s one thing that distinguishes them in my mind, then it would be the passion they have for their job. This was actually the main reason for me to make the switch to EMC. It’s not about making big bucks, it’s not about being a mindless drone in the Evil Machine Company or drinking the Kool-Aid, it’s about getting a chance to work with people that share a passion and are experts at what they do. It’s about the chance to prove myself and perhaps one day joining their ranks as experts.
So, while I wrap things up here at SAP, if all goes well I will be joining the vSpecialist team on October 1st, and hopefully you will bear with me while I find my way going through this change, and I do hope you drop by every now and then to read some new posts from me.
There’s a meme on Twitter that can be witnessed each Friday. It’s called “Follow Friday” and can be found by searching for the #FollowFriday hash tag, or sometimes just simply abbreviated to #FF to save on space in a tweet.
The problem with a lot of those Follow Friday tweets is that most of the time you have no idea why you are being given the advice to follow these people. If you are lucky you will see a remark in the tweet saying why you should follow someone, but in most cases it’s a matter of clicking on a person, going to their timeline and hoping that you can find a common denominator that gives you an indication of why you would want to follow them.
In an attempt to do some things differently, I decided to create this post and list some of the folks that I think are worth following. And I’ll try and add a description of what someone (or a list of people) do that make them worth following in my opinion. And if you are not on this list please don’t be offended, I will try to update it every now and then, but it would be impossible for me to pick out every single one of you on the first attempt.
So here goes nothing! I’m starting off this post with people that offer a great deal of info on things related to VMware, and I will try to follow up with other topics as time goes by. Check back every now and then to see some new people to follow.
Focus on VMware:
@sakacc – Besides being the VP for the VMware Technology Alliance at EMC, Chad is still a true geek and a great source of knowledge when it comes to things VMware and EMC. He is also very helpful to people who have questions in those areas. Be sure to check out his blog, as it is a great source of information!
@Kiwi_Si – Simon is a great guy, and can tell you a lot about VMware and home labs. Because of the home labs he is also very strong when it comes to finding out more about HP’s x86 platform, and once again I highly recommend reading his TechHead blog.
@alanrenouf – This French-sounding guy is actually hiding in the UK and is considered by many to be a PowerCLI demi-god. Follow his tweets and you will find out why people think of him that way.
@stevie_chambers – You want to find out more about Cisco UCS? Steve is the man to follow on Twitter, also for finding out more about UCS combined with VMware.
@DuncanYB – Duncan started the Yellow Bricks blog, which focuses on all things VMware, and is also a great source of info on VMware HA.
@scott_lowe – Scott is an ace when it comes to VMware.
@jtroyer – John is the online evangelist and enterprise community builder at VMware. For anything new regarding VMware and its community you should follow John.
@lynxbat – I would call it something else, but Nick is a genius. He started tweaking the EMC Celerra VSA and has worked wonders with it. I highly recommend following him!
@jasonboche – Virtualization evangelist extraordinaire. Jason has the biggest home lab setup that I know of, I’d like to see someone trump that setup.
@gabvirtualworld – Gabrie is a virtualization architect and has a great blog with lots of resources on VMware.
@daniel_eason – Daniel is an architect for a large British airline and knows his way around VMware quite well, but is also quite knowledgeable in other areas.
@SimonLong_ – With a load of certifications and an excellent blog, Simon is definitely someone to follow on Twitter.
Focus on storage:
@StorageNerve – Devang is the go-to-guy on all things EMC Symmetrix.
@storageanarchy – Our friendly neighborhood storage anarchist is known to have an opinion, but Barry is also great when it comes to finding out more about EMC’s storage technology.
@valb00 – Val is a great source of info on things NetApp, and you can find a lot of good retweets with useful information from him.
@storagebod – If you want someone to tell it to you like it is, you should follow Martin.
@Storagezilla – Mark is an EMC guy with great storage knowledge. Also, if you find any videos of him cursing, tell me about it because I could just listen to him go on and on for hours with that accent he has.
@nigelpoulton – Nigel is the guy to talk to when you want to know more about data centre, storage and I/O virtualisation. He’s also great on all areas Hitachi/HDS.
@esignoretti – If you are (planning on) using Compellent storage, be sure to add Enrico to your list.
@chrismevans – The storage architect, or just Chris, knows his way around most storage platforms, and I highly recommend you read his blog for all things storage, virtualization and cloud computing.
@HPStorageGuy – For all things related to HP and their storage products you should follow Calvin.
@ianhf – “Don’t trust any of the vendors” is almost how I would sum up Ian’s tweets. Known to be grumpy at times, but a great source when it comes to asking the storage vendors the right questions.
@rootwyrm – As with Ian, rootwyrm also knows how to ask hard questions. Also, he isn’t afraid to fire up big Bertha to put the numbers to the test that were given by a vendor.
@sfoskett – Stephen is an original blogger and could probably be placed under any of the categories here. Lots of good information, and he’s the founder of Gestalt IT.
@Alextangent – The office of the CTO is where Alex is located inside of NetApp. As such you can expect deep technical knowledge on all things NetApp when you follow him.
@StorageMojo – I was lucky to have met Robin in person. A great guy working as an analyst, and you will find refreshing takes and articles by following his tweets. A definite recommendation!
@mpyeager – Since Matthew is working for IT service provider Computacenter, he has a lot of experience with different environments and has great insight on various storage solutions as well as a concern about getting customers more bang for their buck.
Focus on cloud computing:
@Beaker – Christofer Hoff is the director of Cloud & Virtualization Solutions at Cisco and has a strong focus on all things cloud related. His tweets can be a bit noisy, but I would consider his tweets worth the noise in exchange for the good info you get by following him. Oh, and by the way… Squirrel!!
@ruv – Reuven is one of the people behind CloudCamp and is a good source of information on cloud and on CloudCamp.
@ShlomoSwidler – Good cloud stuff is being (re)tweeted and commented on by Shlomo.
So, this is my list for now, but be sure to check back every once in a while to see what new people have been added!
Created: May 27th 2010. Updated: May 28th 2010 – Added storage focused bloggers. Updated: July 23rd 2010 – Added some storage focused bloggers and some folks that center on cloud computing.
I’m currently visiting the Boston area because I’m attending EMC World. One of the bigger introductions made here yesterday was a new appliance called the VPLEX. In short, the VPLEX is all about virtualizing access to your block-based storage.
Let me give you a quick overview of what I mean by virtualized access to block-based storage. With VPLEX, you can take (almost) any block-based storage device on a local and a remote site, and allow active reads and writes on both sides. It’s an active/active setup that allows you to access any storage device via any port when you need to.
You can get two versions right now: VPLEX Local and VPLEX Metro. Two other versions, VPLEX Geo and VPLEX Global, are planned for early next year. And since there is so much information about VPLEX to be found online, I figured I’d create a post here that will help me find the links when I return, and also give you one spot that can help you find the info you need.
An overview with links to more information on the EMC VPLEX:
Official links / EMC company bloggers / VMware company bloggers