Apple, General, OSX

Shorts: How to set up your BlackBerry as a UMTS/GPRS modem on Snow Leopard with T-Mobile in Germany

After being on the road in a high-speed train without any WLAN connection, I decided to do some searching on how to set up my BlackBerry as a modem. Since the current firmware on my BlackBerry 9700 seems to have a somewhat flaky Bluetooth stack (I’m currently running firmware v5.0.0.545), I wanted to do this via USB, but most of the settings should be the same for a device connected via Bluetooth.

One note before we start: I set this up for a connection on T-Mobile Germany, so the settings are most likely different for your provider, but this should give you a rough idea of how to set everything up. So let’s get started:

  • Start by downloading the BlackBerry Desktop Software for Mac. Right now you should be able to get a copy of it right here.
  • Install the software and connect your BlackBerry to it. The steps here should be pretty self-explanatory.
  • Now, open your network preferences. To do so, go to “System Preferences” and click on “Network”, which can be found in the row with the header “Internet & Wireless”.
  • You should find a new device there called “RIM Composite Device”. If it’s not there, click on the plus sign at the lower left, and select the “RIM Composite Device” from the “Interface” drop down list. You can give it any name, for example “BlackBerry USB Internet Connection” might be a name that gives you a better idea of what this is. Then click on the “create” button.
  • Now, for the telephone number you will enter “*99#” (without the quotes). This is the same number you would use when setting up dial-in info directly on your BlackBerry, where you can also extend it to tell your smartphone which APN entry to use: “*99*1#” or “*99***1#” forces the first APN, and “*99*4#” or “*99***4#” the fourth entry. In my case I just wanted the first one and used the short form “*99#”.
  • You can enter anything you want as a user name and password, but the fields cannot be left blank. I used “tm” in my setup.
  • Once you have done that, you can click on the “Advanced” button and go to the tab “Modem”. There, change the “Vendor” to “Research in Motion”, and select “Blackberry IP Modem (CDMA)” as the model.
  • Leave the CID as it is (it should be “1”), and enter “internet.t-mobile” or “dynamic” as the APN.
  • Click on the tab “DNS” and enter “193.254.160.1” as the DNS server.
  • Go to the tab “PPP” and deselect all of the checks.
  • Now, click on “OK” and after that select “Save”.
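
The APN-selection pattern in the dial string is simple enough to sketch; the helper function below is hypothetical (my own name for it) and just illustrates the “*99***<cid>#” format described in the steps above:

```shell
# Hypothetical helper illustrating the GSM/UMTS dial-string format:
# "*99***<cid>#" selects APN entry <cid>, while the short form "*99#"
# uses the default entry.
make_dial_string() {
  printf '*99***%s#\n' "$1"
}

make_dial_string 1   # prints *99***1#
make_dial_string 4   # prints *99***4#
```

This is purely illustrative; macOS builds the dial string for you from the “Telephone Number” field, so you never have to type the long form unless you need a non-default APN.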

Now, you should be able to connect to the internet using your phone. You can check “Show modem status in menu bar” to get a small phone symbol in the top menu bar, making it easier to track the status of your connection and slightly easier to connect and disconnect.

Two small notes to finish up this short. One: these are the settings that worked for me. If you are not in Germany, you will likely need to change the APN, DNS server and username/password to match the carrier you are using. It is also possible that some of the settings under “PPP” could differ and the connection would still work; these are just the settings I wanted to share.

Second, check your data plan! Surfing via your phone is no problem once you get the connection up and running, but your data usage may accumulate quicker than you initially thought, and exceeding the amount of data in your plan could get expensive really quickly.

Last but not least: Let me know if this works for you, or if it doesn’t and you managed to get it working in a different way, let me know about it and I’ll make sure that I update the post.

General, Networking, Storage, Virtualization

My “Follow, even if it’s not Friday” list

There’s a meme on Twitter that can be witnessed each Friday. It’s called “Follow Friday” and can be found by searching for the #FollowFriday hash tag, or sometimes just simply abbreviated to #FF to save on space in a tweet.

The problem with a lot of those Follow Friday tweets is that most of the time you have no idea why you are being advised to follow these people. If you are lucky, the tweet includes a remark on why you should follow someone, but in most cases it’s a matter of clicking on a person, going to their timeline and hoping you can find a common denominator that indicates why they are worth following.

In an attempt to do things differently, I decided to create this post and list some of the folks that I think are worth following, along with a description of what makes them (in my opinion) worth following. And if you are not on this list, please don’t be offended; I will try to update it every now and then, but it would be impossible for me to pick out every single one of you on the first attempt.

So here goes nothing! I’m starting off this post with people that offer a great deal of info on things related to VMware, and I will try to follow up with other topics as time goes by. Check back every now and then to see some new people to follow.

Focus on VMware:

  • @sakacc – Besides being the VP for the VMware Technology Alliance at EMC, Chad is still a true geek and a great source of knowledge when it comes to things VMware and EMC. He’s also very helpful when people have questions in those areas. Be sure to check out his blog as it is a great source of information!
  • @Kiwi_Si – Simon is a great guy, and can tell you a lot about VMware and home labs. Because of the home labs he is also very strong when it comes to finding out more about HP’s x86 platform, and once again I highly recommend reading his TechHead blog.
  • @alanrenouf – This French-sounding guy is actually hiding in the UK and is considered by many to be a PowerCLI demi-god. Follow his tweets and you will find out why people think of him that way.
  • @stevie_chambers – You want to find out more about Cisco UCS? Steve is the man to follow on Twitter, also for finding out more about UCS combined with VMware.
  • @DuncanYB – Duncan started the Yellow Bricks blog, which focuses on all things VMware; he is also a great source of info on VMware HA.
  • @scott_lowe – Scott is an ace when it comes to VMware.
  • @jtroyer – John is the online evangelist and enterprise community builder at VMware. For anything new regarding VMware and its community you should follow John.
  • @lynxbat – I would call it something else, but Nick is a genius. He started tweaking the EMC Celerra VSA and has worked wonders with it. I highly recommend following him!
  • @jasonboche – Virtualization evangelist extraordinaire. Jason has the biggest home lab setup that I know of, I’d like to see someone trump that setup.
  • @gabvirtualworld – Gabrie is a virtualization architect and has a great blog with lots of resources on VMware.
  • @daniel_eason – Daniel is an architect for a large British airline and knows his way around VMware quite well, but is also quite knowledgeable in other areas.
  • @SimonLong_ – With a load of certifications and an excellent blog, Simon is definitely someone to follow on Twitter.

Focus on storage:

  • @StorageNerve – Devang is the go-to-guy on all things EMC Symmetrix.
  • @storageanarchy – Our friendly neighborhood storage anarchist is known to have an opinion, but Barry is also great when it comes to finding out more about EMC’s storage technology.
  • @valb00 – Val is a great source of info on things NetApp, and you can find a lot of good retweets with useful information from him.
  • @storagebod – If you want someone to tell it to you like it is, you should follow Martin.
  • @Storagezilla – Mark is an EMC guy with great storage knowledge. Also, if you find any videos of him cursing, tell me about it because I could just listen to him go on and on for hours with that accent he has.
  • @nigelpoulton – Nigel is the guy to talk to when you want to know more about data centre, storage and I/O virtualisation. He’s also great on all areas Hitachi/HDS.
  • @esignoretti – If you are (planning on) using Compellent storage, be sure to add Enrico to your list.
  • @chrismevans – The storage architect, or just Chris, knows his way around most storage platforms, and I highly recommend you read his blog for all things storage, virtualization and cloud computing.
  • @HPStorageGuy – For all things related to HP and their storage products you should follow Calvin.
  • @ianhf – “Don’t trust any of the vendors” is almost how I would sum up Ian’s tweets. Known to be grumpy at times, but a great source when it comes to asking the storage vendors the right questions.
  • @rootwyrm – As with Ian, rootwyrm also knows how to ask hard questions. Also, he isn’t afraid to fire up big Bertha to put the numbers to the test that were given by a vendor.
  • @sfoskett – Stephen is an original blogger and can probably be placed under any of the categories here. Lots of good information, and he’s the founder of Gestalt IT.
  • @Alextangent – The office of the CTO is where Alex is located inside of NetApp. As such you can expect deep technical knowledge on all things NetApp when you follow him.
  • @StorageMojo – I was lucky to have met Robin in person. A great guy working as an analyst, and you will find refreshing takes and articles by following his tweets. A definite recommendation!
  • @mpyeager – Since Matthew is working for IT service provider Computacenter, he has a lot of experience with different environments and has great insight on various storage solutions as well as a concern about getting customers more bang for their buck.

Focus on cloud computing:

  • @Beaker – Christofer Hoff is the director of Cloud & Virtualization Solutions at Cisco and has a strong focus on all things cloud related. His tweets can be a bit noisy, but I would consider his tweets worth the noise in exchange for the good info you get by following him. Oh, and by the way… Squirrel!!
  • @ruv – Reuven is one of the people behind CloudCamp and is a good source of information on cloud and on CloudCamp.
  • @ShlomoSwidler – Good cloud stuff is being (re)tweeted and commented on by Shlomo.

So, this is my list for now, but be sure to check back every once in a while to see what new people have been added!


Created: May 27th 2010
Updated: May 28th 2010 – Added storage focused bloggers
Updated: July 23rd 2010 – Added some storage focused bloggers and some folks that center on cloud computing

EMC, Storage, Symmetrix

Shorts: Trouble with symapi_db.bin causing erratic behavior

Usually when you are connected to an EMC Symmetrix array you will install the Solutions Enabler package on your system. Solutions Enabler is both a set of tools to help you manage your Symmetrix arrays and an API. It creates a small database that records which Symmetrix arrays are connected to the host you are running the software on: the so-called SYMAPI database, which you will find on your system as a file called “symapi_db.bin”.

Normally, you run a discover process to initially scan and fill the database with entries. To do that, issue the command:

symcfg discover

This will start the scan operation, which can take anywhere from under a minute up to several minutes, depending on the number of arrays and their configuration. Once the file has been created, you can search for strings inside it, and you will find a lot of information about devices, device paths, disk IDs and lots more.
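
Pulling readable strings out of the database works with standard tools. The snippet below is a sketch that uses a stand-in file so the commands can be tried anywhere; on the real host you would point `strings` at the actual database file instead (on Unix hosts it usually lives under /var/symapi/db, but check your install):

```shell
# Sketch: extract readable strings from a binary database file and filter
# them, the same way you would inspect symapi_db.bin. The demo file below
# is a stand-in containing a fake device name and device path.
printf 'garbage\0DEV001\0/dev/rdsk/c1t0d0\0' > /tmp/demo_db.bin
strings /tmp/demo_db.bin | grep -i 'dev'
```

On a real SYMAPI database you can grep for device names, disk IDs or paths in exactly this way.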

Now, in some situations after your array configuration has changed, it is useful to refresh the database file. Under normal circumstances this should all be easily done and without any issues.

However, in some cases your database file might develop problems without manifesting them in any obvious way. I have seen cases where new devices would simply not show up; other examples are error messages about disks that cannot be reached because of access control list errors.

If you see erratic behavior on one of your hosts, there is one thing worth trying before creating a service request on Powerlink: make a backup copy of your database, remove the original and then perform a new discover. Some steps to help you do just that:

  • Create a backup of your device and/or composite groups using the symdg/symcg commands.
  • Rename your old symapi_db.bin to something else.
  • Issue a “symcfg discover” to create a new symapi_db.bin
  • Import your device and/or composite groups from the backup file(s) you created.
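
Put together as a shell session, the steps above might look like the sketch below. The group name and file paths are examples of mine, and the database location is the usual Unix default; double-check the symdg/symcg syntax against your Solutions Enabler version:

```shell
# Hedged sketch of the recovery steps; adjust paths and group names
# for your environment.
cd /var/symapi/db                     # usual Unix location of the SYMAPI db

symdg export MyDG -f /tmp/MyDG.bak    # 1. back up a device group
                                      #    (use symcg export for composite groups)
mv symapi_db.bin symapi_db.bin.old    # 2. move the suspect database aside
symcfg discover                       # 3. rebuild the database from scratch
symdg import MyDG -f /tmp/MyDG.bak    # 4. restore the group from the backup
```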

This won’t help you in all situations, but it helped me solve several cases where we were seeing erratic behavior on our hosts, and it might do the trick for you.

General

Something fun. The Linux alternative to Edgar Allan Poe’s “The Raven”

I was talking to some colleagues this morning and I mentioned a parody of Edgar Allan Poe’s poem “The Raven”. Since my colleagues didn’t know this parody existed, I figured there might be more people out there who are not aware of it, and I like what they did. So without any further ado I present to you:

The Penguin – by Rob Flynn and Jeramey Crawford

Once upon a term’nal dreary, while I hack’ed, weak and weary,
Over many a quaint and curious volume of forgotten code–
While I nodded, nearly napping, suddenly there came a beeping,
As of some one gently feeping, feeping using damn talk mode.
“‘Tis some hacker,” I muttered, “beeping using damn talk mode–
Only this. I hate talk mode.”

Ah, distinctly I remember it was in the bleak semester,
And college life wrought its terror as the school year became a bore.
Eagerly I wished for privledges;–higher access I sought to borrow
For my term’nal, unceasing sorrow–sorrow for a file called core–
For the rare and radiant files of .c the coders call the core–
Access Denied. Chown me more.

“Open Source,” did all mutter, when, with very little flirt and flutter,
In there stepped a stately Penguin of the saintly days of yore.
Quite a bit obese was he; having eaten lots of fish had he,
But, by deign of Finnish programmer, he sat in the middle of my floor–
Looking upon my dusty term’nal in the middle of my floor–
Came, and sat, and nothing more.

Then the tubby bird beguiling my sad code into shining,
By the free and open decorum of the message that it bore,
“Though thy term’nal be dusty and slow,” he said, “Linux be not craven!”
And thus I installed a new OS far from the proprietary shore–
The kernel code open but documentation lacking on this shore.
Quoth the Penguin, “pipe grep more!”

Much I marvelled this rotund fowl to hear discourse so plainly,
Though its answer little meaning–little relevancy bore;
For we cannot help believing that no living human being
Ever yet was blessed with seeing bird in the middle of his floor–
Bird or beast sitting in the middle of his cluttered floor,
With such instructions as “pipe grep more.”

But the Penguin, sitting lonely in that cluttered floor, spoke only
Those words, as if its soul in that instruction he did outpour.
Nothing more did he need utter; understood did I among that clutter–
Understood his command as I could scarcely do a few moments before–
I typed as furious as was willed me, understanding just a minute before.
Again the bird said “pipe grep more!”

“Amazing!” said I, “Penguin we will conquor the world if you will!
By the Network that interconnects us–by that Finn we both adore–
We’ll take this very world by storm!” For now grasped I what he’d meant,
The thing I do while searching
/usr/doc/* for that wond’rous lore–
Those compendiums of plaintext documentation and descriptive lore.
Quoth the Penguin, “pipe grep more!”

And the Penguin, never waddling, still is sitting, still is sitting
In the middle of my room and still very cluttered floor;
And his eyes have all the seeming of the free beer I am drinking
And the term’nal-light o’er him glowing throws his shadows on the floor;
And this OS from out the shadows that is pow’ring my term’nal on the floor
Shall be dominating–“Pipe grep more!”

EMC, Virtualization, VMware, VPLEX

EMC VPLEX – Introduction and link overview

I’m currently visiting the Boston area because I’m attending EMC World. One of the bigger introductions made here yesterday was actually a new appliance called the VPLEX. In short, the VPLEX is all about virtualizing the access to your block based storage.

Let me give you a quick overview of what I mean by virtualized access to block based storage. With VPLEX, you can take (almost) any block based storage device on a local and a remote site, and allow active reads and writes on both sides. It’s an active/active setup that allows you to access any storage device via any port when you need to.

You can get two versions right now, the VPLEX Local and the VPLEX Metro. Two other versions, the VPLEX Geo and the VPLEX Global, are planned for early next year. And since there is so much information about the VPLEX available online, I figured I’d create a post here that will help me find the links when I return, and also give you one spot to find the info you need.

An overview with links to more information on the EMC VPLEX:

Official links / EMC company bloggers / VMware company bloggers

Blogs and media coverage:

Now, if I missed one or more links, please just send me a tweet or leave a comment and I will make sure that the link is added to this post.

General, GestaltIT, Stack

This vendor is locking me in!

Or so I’m told. Not just once or twice, but at least once every time a vendor introduces something new or rolls out a revision of an existing product.

Now, you could say that this is the pot calling the kettle black and I would agree with you. It’s a thing I mentioned in my UCS post, and also in my recent post on the stack wars. And today a tweet from @Zaxstor got me thinking about it some more. I asked the following on Twitter:

I hear this argument about vendor lock in all of the time. Open question: How do I avoid a vendor lock in? By going heterogeneous?

Because, when you think about it, we are all subject to vendor lock-in all of the time. As soon as I decide to purchase a new mobile phone, I am usually tied to either the phone manufacturer or the carrier that I use. Sometimes I am even tied to both; just think of the iPhone as an example of this kind of lock-in.

The same goes for the car I drive. When I buy it from the dealer, I get an excellent package that is guaranteed to work. Until I take it to an inspection with a garage that is not part of the authorized network. My car will still drive, and will probably work great, but I no longer have a large part of the guarantees that came with it when I bought it, and would have been intact if I had taken it to an authorized dealer.

Now, I know my analogy is slightly flawed here since we are talking about things that work on a different scale and use entirely different technologies, but what I am trying to say is that we make decisions that lock us in with a certain vendor on an almost daily basis. Apparently the guys in and around the data center just like to talk about that problem a bit more.

One remark was made however by fellow blogger Dimitris Krekoukias and confirmed by several others:

“It’s not how you get in to the lock, but how you get out of it.”

And I do think that this is probably the key, but fortunately we have some help there from the competition. But it’s not all down to the others! All vendors are guilty of trying to sell something. It’s not their fault, it’s just something that “comes with the territory”. They will try to pitch you their product and make your head dizzy with what this new product can do. It’s all good, and it’s all grand, according to them.

And yes, it is truly grand what this shiny new toy can do, but the question is whether you really need it. Try to ask what kind of value a feature will offer in your specific setup. Try to judge if you really need this feature, and ask yourself what you are going to do if the feature proves to be less useful than expected.

Remember that not all is lost if you do lock yourself in with that vendor. Usually others will be quick to follow with new features, and this is where the help from the competition comes in. Take the mobile phone example: even if you will not receive any help from your current provider, you can bet that the provider that now also offers the same package will try to help you become their customer. If NetApp is not providing you with an option to migrate out of that storage array, you can bet your pants that Hitachi will try and help you migrate to their arrays.

Now, I’m not saying that this is the best solution. Usually exchanging solutions is accompanied by a loss of knowledge and of investments that were made. But it’s on you to factor that in before you take the plunge. In the end, the lock you have with your current vendor might be hard and expensive to break, but it’s usually never mission impossible.


P.S. Just as a side note, I’m not saying NetApp will not allow or help you to migrate out of an array, I’m just using these names as an example. Replace them with any vendor you like.

P.P.S. As part of the discussion, fellow blogger Storagebod posted something quite similar; be sure to read it here

GestaltIT, Networking, Stack, Storage, Virtualization

My take on the stack wars

As some of you might have read, the stack wars have started. One of the bigger coalitions, announced in November 2009, was that between VMware, Cisco and EMC, aptly named VCE. Hitachi Data Systems announced something similar and partnered up with Microsoft, but left everyone puzzled about the partner that will be providing the networking technology in its stack. Companies like IBM have been able to provide customers with a complete solution stack for some time now, and IBM will be sure to tell its customers that they did so and offered the management tools in the form of anything branded Tivoli. To me, IBM’s main weakness is not so much the stack that they offer as the sheer number of solutions and the lack of one tool to manage it all, let alone getting an overview of all possible combinations.

So, what is this thing called the stack?

Actually, the stack is just that: a stack. A stack of what, you say? A stack of solutions, bound together by one or more management tools, offered to you as a happy meal that allows you to run the desired workloads on it. Or, to put it more simply and quote from the Gestalt IT stack wars post:

  • Standard hardware configurations are specified for ease of purchasing and support
  • The hardware stack includes blade servers, integrated I/O technology, Ethernet networking for connectivity, and SAN or NAS storage
  • Unifying software is included to manage the hardware components in one interface
  • A joint services organization is available to help in selection, architecture, and deployment
  • Higher-level software, from the virtualization hypervisor through application platforms, will be included as well

Until now, we have usually seen a standardized form of hardware, including storage and connectivity. Vendors mix that up with one or multiple management tools and tend to target some form of virtualization. Finally a service offering is included to allow the customer to get service and support from one source.

This strategy has its advantages.

Compatibility is one of my favorites. You no longer need to work through compatibility guides that are 1400 pages long and will burn you for installing a firmware version that was just one digit off and is now no longer supported in combination with one of your favorite storage arrays. You no longer have to juggle different release notes from your business warehouse provider, your hardware provider, your storage and network providers, your operating system and tomorrow’s weather forecast. Trying to find the lowest common denominator through all of this is still something magical. It’s actually a form of dark magic that usually means working long hours to find out whether your configuration is even supported by all the vendors you are dealing with.

This is no longer the case with these stacks. Usually they are purpose- or workload-built, and you have one central source for your support. This source will tell you that you need at least firmware version X.Y on these parts to be eligible for support, and you are pretty much set after that. And because you are working with a federated solution and received management tools for the entire stack, your admins can pretty much manage everything from this one console or GUI and be done with it. Or, if you don’t want to do that, you can use the service offering and have it done for you.

So far so good, right?

Yes, but things get more complicated from here on. For one, there is a major problem, and that is flexibility. One of the bigger concerns came up during the Gestalt IT Tech Field Day vBlock session at Cisco. With the vBlock, I have a fixed configuration, and it will run smoothly and within certain performance boundaries as long as I stick to the specifications. The vBlock makes for a quite obvious example: if I add more RAM to a server blade than is specified, I no longer have a vBlock and basically no longer have the advantages previously stated.

Solution stacks force me to think about the future. I might be an Oracle shop now as far as my database goes, and Oracle will run fine on a newly purchased stack. But what if I want to switch to Microsoft SQL Server in 3 years, because Mr. Ellison decided that he needs a new yacht and I no longer want to use Oracle? Is my stack also certified to run a different SQL server, or am I no longer within my stack boundaries, having lost my single service source or the guaranteed workload it could hold?

What about updates for features that are important to me as a single customer? And while these solution stacks work great for new landscapes or in a highly homogeneous environment, what about those other Cisco switches that I would love to manage from the tools offered within my vBlock, but that are outside of the vBlock scope, even if they are the same models?

What about something as simple as a “stack lock-in”? I don’t really have a vendor lock-in, since only very few companies can offer everything first hand. Microsoft doesn’t make server blades, Cisco doesn’t make SAN storage, and that list goes on and on. But with my choice of stack I am now locked in to a set of vendors, and while I certainly have some tools to migrate into that stack, migrating out is an entirely different story.

The trend is the stack, it’s as simple as that. But for how long?

We can see the trend clearly. Every vendor seems to be working on a stack offering. I’m still missing Fujitsu as a big hardware vendor in this area, but I am absolutely certain we will see something coming from them. Smaller companies will probably offer part of their portfolio under some sort of OEM license or perhaps features will just be re-branded. And if they are successful enough, they will most likely be swallowed by the bigger vendors at some point.

But as with everything in IT, this is just a trend. Anyone who has been in the business longer than I have can probably confirm this. We started with centralized systems, then moved towards de-centralized environments. Now we are on the move again, centralizing everything.

I’m actually much more interested to see how long this trend will continue. I am certain that we will be seeing more companies offer a complete solution stack, or join coalitions to offer said stack. I still think that Oracle was one of the first that pointed in this direction, but they were not the first to offer the complete stack.

So, how do you think this is going to continue? Do you agree with us? What companies do you think are likely to be swallowed, or will we see more coalitions from smaller companies? What are your takes on the advantages and disadvantages?

I’m curious to hear your take on this so let me know. I’m looking forward to what you have to say!

General

The scam that is called e-books?

Amazon made a big splash when they introduced the Kindle back in 2007. It was one of the first e-book readers out there, sported an E Ink brand electronic paper display showing 16 shades of gray, and was connected to the Sprint network, which enabled Amazon’s own Whispernet and allowed its owner to download books over the air. We saw several iterations of the device, and at a certain point Amazon saw fit to release the Kindle as an application for different platforms, allowing books to be synchronized to other devices and read from any platform that could run the Kindle application.

Other vendors soon picked up this trend and introduced their own e-book readers. Examples are the Nook from Barnes & Noble, or Sony’s Reader. All of them have the basic function of reading books in their respective supported formats, some sporting unique features to set them apart from the rest. Apart from the introduction of these fairly similar devices, the e-book market was quiet, with Amazon dominating it.

Things pretty much stayed that way, that is, until Apple made a splash by entering the e-book reader game with their iBooks application on the recently introduced iPad.

Now, I don’t want to turn this into a review of the iPad; I’ll try to follow up on the iPad itself in a separate post. But I do want to take a closer look at the e-books that I am reading on my iPad, because some of you might not be aware of these things when considering the purchase of an e-book reader.

“The journey of a thousand miles begins beneath one’s feet.” – Laozi

Several of the people I talked to lately were considering the purchase of an e-book reader. Their considerations were all quite valid, thinking of things like the reduced weight while traveling but still being able to take several books along for the trip. Heck, some of them would be happy to have an e-book reader because it meant that they would still be getting their favorite newspaper synchronized to the device while they were abroad.

As I just wrote, I can fully comprehend what they mean. But the journey towards digital print usually ends up being a bit more than just reading your books on the road. Even though that seems like such a simple request, you get much more than you bargained for when you finally decide to make the switch.

First, there’s the choice of your device. You could use a Kindle, which is great when it comes to battery life; you can even read it in direct sunlight, and you can read your books or magazines on a lot of different systems now that the Kindle application is pretty widespread. But then you might remember that Amazon actually remotely wiped copies of George Orwell’s novel “1984”.

To me as a simple user, this means that I did not purchase the book itself; it seems I just purchased a license to read that book via my Kindle account. Things like the proprietary Kindle document format (.AZW) and the Terms of Use that forbid transferring Amazon e-books to another user or a different type of device only make this feeling stronger.

I could go with a different reader, for example the Sony Reader. With that one, the ties to my single user account are not that strong. The Reader can handle e-books that have some form of DRM (Digital Rights Management), but at least I can transfer my purchases to and from the Reader. I also get some added features (depending on the model) like a touch-based screen, support for a lot of open formats, which in turn saves me time converting my documents, or even Sony’s partnership with OverDrive that lets me get e-books from the library. On the other hand, I miss out on the connectivity that a Kindle offers me for synchronizing books, or on continuing to read the same book from where I left off by using digital bookmarks. And lending a book to a friend is going to be hard no matter which type of e-book reader I purchase, although there are plenty of people who wouldn’t loan the book anyway.

Second, you will never be able to replicate the emotions you might have with a printed book. Depending on who you are, you will feel differently about handling an actual book: the feel of the covers, the choice between a hardcover and a pocket edition, the weight in your hands, the look of your bookcase or even the touch of the paper that is used. Even something as simple as the way a collection of books looks on a shelf.

Reading is all about emotions and knowledge. You usually read because you need or want to learn something, you read to pass the time at the barber shop, or you read to get lost in a different world where you yourself get to decide just exactly how green the jacket is that the hero in the latest part of that sci-fi trilogy is wearing. It’s all about feeling it, and it’s just the same when it comes to reading something that is no longer on paper. Some people will like the color screen of the iPad just because it makes magazines or books about photography come to life. Others loathe the fact that the iPad’s screen is glossy or that its battery life is much lower than that of the Sony Reader. Then there are those folks who say that looking at an e-ink or iPad screen is way too tiring on the eyes when compared to paper, but who like the option of having a back-lit screen so that they don’t have to keep the light on in bed when their partner is trying to get some sleep.

All in all

When you look at the previous two points, you will have some things to make you wonder whether going the e-book route is right for you. Well, if that made you wonder, hang on to your socks, because the third one will probably be even simpler but way harder to comprehend. It’s as simple as…

the price of the e-book!

Not some sort of cool feature like making fonts larger or smaller, or something like a preview option, or a handy thing like having my book read to me by a cool computer voice. It’s just the simple pricing mechanism that sort of ruins the e-book experience. And the problem gets far worse when you take a look at books that are not considered mainstream.

Let me just give you some simple examples:

  • VMware vSphere and Virtual Infrastructure Security: Securing the Virtual Environment – Edward L. Haletky
    • Amazon – Paperback: $34.98
    • Amazon – Kindle: $39.09
    • Sony – Reader: $39.99
    • Hotbooksale – Paperback: $35.12
  • Angelology: A Novel – Danielle Trussoni
    • Amazon – Hardcover: $17.97
    • Amazon – Kindle: $13.79
    • Sony – Reader: $9.99
    • Hotbooksale – Hardcover: $12.95

Now, that’s just two books. From all of the stores combined you will probably have around 600,000 to 650,000 books to choose from. But how can it be that only the popular books are cheaper as e-books? These two examples are not that far apart, but another example would be “Solaris Internals: Solaris 10 and OpenSolaris Kernel Architecture”, where the price difference is a whopping $26.69. I could buy a different book just from that price difference.
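To make the gap concrete, here is a small sketch that computes the e-book premium or discount per title, using the example prices listed above (a 2010 snapshot from this post):

```python
# Compare print vs. e-book prices for the two example titles above.
# Prices are the 2010 snapshot quoted in this post.
prices = {
    "VMware vSphere and Virtual Infrastructure Security (Haletky)": {
        "print": 34.98, "ebook": 39.09,
    },
    "Angelology: A Novel (Trussoni)": {
        "print": 17.97, "ebook": 13.79,
    },
}

for title, p in prices.items():
    diff = round(p["ebook"] - p["print"], 2)
    label = "premium" if diff > 0 else "discount"
    print(f"{title}: e-book {label} of ${abs(diff):.2f}")
```

For the first title the e-book actually costs $4.11 more than the paperback; for the second it is $4.18 cheaper, which illustrates how inconsistent the pricing is.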

And that’s the part that I don’t understand. With some digging you will find that printing costs are only about 10% of the total cost of publishing a book. That’s not that huge a difference, and let’s also take into account that the publisher actually has to do some extra work to create a separate electronic edition of the book.

But then I also need to take a look at the costs that fall away with an electronic edition: things like the trees that are not cut down, the whole carbon footprint that comes with creating the paper and shipping the books, or even something as simple as the space it takes to display and stock the book.

In the end

I do truly think that distributing or selling books in digital form would be cheaper than doing the same in printed form. And even if that’s not the case, why are publishing companies not experimenting with these new types of media? Why not have a pricing model that allows me to loan a book for a certain amount of time? Some folks on Twitter mentioned the option of using a ticker to display ads and create additional revenue that way.

The new media open the way to interactive books. Could you imagine a Winnie the Pooh that mixes the story with animations that pick up where the text stops? It could be a different way of teaching your kids to read. And advertisers could combine that with small games, or publishers could offer the option to purchase different endings to a story. It all boils down to how creative the publishing companies are, but so far there seems to be little motivation to actually achieve a pricing model where e-books are cheaper than their printed counterparts. It almost seems that the publishing companies are as dusty and static as an old printed book.

As for myself, I have made the switch to reading digitally where I can. I’m just starting that journey, and there are still some bumps along the way. In a certain way it’s a shame that I will try to reduce the amount of printed material I purchase. On the other hand, I’m on the road enough to appreciate having just that one device with me and reducing the weight of my bags a bit. In the end it all boils down to your own preferences, but if you do decide to go down that digital road, be prepared for some surprises and try to educate yourself before you find yourself pinned down in a corner you didn’t even consider when you started the trip.

Cisco, GestaltIT, Tech Field Day, UCS, Virtualization

Gestalt IT Tech Field Day – On Cisco and UCS

There are a couple of words that are high on my list of buzzwords for 2010. The previous year brought us things like “green computing”, but the new hip terms seem to be “federation” and “unification”. And let’s not forget the one that seems to last longer than just one year, the problem-solving term “cloud”.

Last Friday (April 9th), I and the rest of the Gestalt IT Tech Field Day delegates were invited by Cisco for a briefing on Cisco’s Unified Computing System, or “UCS” for short. Basically this is Cisco’s take on the notion that we currently view a server as being tied to the application, instead of seeing the server as a resource that allows us to run that application.

Anyone in marketing will know that the next question is “What is your suggestion to change all that?”, and Cisco’s marketing department didn’t disappoint and tried to answer it for us. The key, in their opinion, is a system consisting of building blocks that allows them to give customers a solution stack.

As the trend goes towards commodity hardware, Cisco is following suit by using industry-standard servers equipped with Intel Xeon processors. Other key elements are virtualization of services, a focus on automated provisioning, and unification of the fabric by means of FCoE.

What this basically means is that you order building blocks from Cisco in the form of blade servers, blade chassis, fabric interconnects and virtual adapters. But instead of connecting this stuff up and expanding my connectivity like I would in a standard scenario, I wire my hardware depending on the bandwidth requirements, and that’s pretty much it. Once I am done with that, I can assign virtual interfaces as I need them on a per-blade basis, which in turn removes the hassle of plugging in physical adapters and cabling all that stuff up. In a sense it reminded me of the take that Xsigo offered with their I/O Director, but with the difference that Cisco uses FCoE instead of InfiniBand, and that Cisco adds the I/O virtualization to a more complete management stack.

The management stack

This is, in my opinion, the key difference. I can bolt together my own pieces of hardware and use the Xsigo I/O Director in combination with VMware to get a similar set-up, but I will be missing out on one important element: a central management utility.

This unified UCS management offers me some advantages that I have not seen from other vendors. I can now tie properties to the resources I want, meaning that I can set up properties tied to a blade, but I can also tie them to the VM or application running on that blade in the form of service profiles. Things like MAC addresses, WWNs or QoS settings are defined inside these service profiles in an XML format and then applied to my resources as I see fit.
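To get a feel for the idea of identity living in a profile rather than on the hardware, here is a toy sketch that builds such an XML document. To be clear: the element names below are invented for illustration and do not match the real UCS Manager schema.

```python
# Illustrative sketch only: the element and attribute names below are
# made up for this example and do NOT match the real UCS Manager schema.
import xml.etree.ElementTree as ET

def build_service_profile(name, mac, wwn, qos_policy):
    """Build a toy XML 'service profile' tying identity settings to a blade."""
    profile = ET.Element("serviceProfile", name=name)
    ET.SubElement(profile, "macAddress").text = mac
    ET.SubElement(profile, "wwn").text = wwn
    ET.SubElement(profile, "qosPolicy").text = qos_policy
    return ET.tostring(profile, encoding="unicode")

xml_doc = build_service_profile(
    "esx-host-01", "00:25:B5:00:00:01", "20:00:00:25:B5:00:00:01", "gold"
)
print(xml_doc)
```

The point of the design is that the MAC, WWN and QoS settings belong to the profile, not the blade, so moving the profile to another blade moves the identity with it.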

Sounds good, but…..?

There is always a but, that’s something that is almost impossible to avoid. Even though Cisco offers a solution that seems to offer some technical advantages, there are some potential drawbacks.

  • Vendor lock-in:
    This is something that is quite easy to see. The benefit of getting everything from one vendor also means that my experience is only as good as the vendor’s support in case of trouble. The same applies when ordering new hardware and there are unexpected problems somewhere in the ordering/delivery chain.
  • The price tag:
    Cisco is not known to be cheap. Some would even say that Cisco is very expensive, and it all boils down to one thing: is the investment I need to make in a UCS solution going to give me a return on investment, and is it going to do so anytime soon? Sure, it can reduce my management overhead and complexity, and sure, it can lower my operational expenses, but I want to see something in return for the money I gave Cisco, and preferably today, not tomorrow.
  • Interoperability with my existing environment:
    This sort of stuff works great when you are lucky enough to build something new: a new landscape, a new data center or something along those lines. The truth is that we will usually end up adding something new to our existing environment. It’s great that I can manage the entire UCS stack with one management interface, but what about the other stuff? What if I already have other Cisco switches that are not connected to this new UCS landscape? Can I manage those using the built-in UCS features? Or is this another thing that my admins have to learn?
  • The fact that UCS is unified does not mean that my company is:
    In smaller companies, you have a couple of sysadmins that do everything. They install hardware, configure the operating system, upload firewall policies to their routers and zone new storage. So far so good; I’ll give them my new UCS gear and they will usually know what goes where and get going. But in the enterprise segment I talk to one department to change my kernel parameters, a second to configure my switch port to auto-negotiate, and a third that will check the WWN of my Fibre Channel HBA to see if it matches the one configured on the storage side. Now I need to get all of them together to work on creating the service profiles, although not all of them will be able to work outside of their knowledge silo. The alternative would be to create a completely new team that just does UCS, but do I want that?

Those drawbacks aside, which are fairly obvious and not necessarily Cisco’s fault, I think that Cisco was actually one of the first companies to go this way and one of the first to show an actual example of a federated and consolidated solution. Because that is what this is all about: it’s not about offering a piece of hardware, it’s about offering a solution. Initiatives like VCE and VCN only show us that Cisco is moving forward and is actually pushing towards offering complete solution stacks.

My opinion? I like it. I think Cisco has delivered something that is a usable showcase, and although unfortunately I have not been able to actually test it so far, I really like the potential it offers and the way it was designed. If I ever get the chance to do some testing on a complete UCS stack, I’ll be sure to let you know more, but until then I at least hope that this post has made things a bit clearer and removed some of the questions you might have. And if that’s not the case, leave a comment and I will be sure to ask some more questions on your behalf.

Disclaimer:
The sponsors are each paying their share for this non-profit event. We, the delegates, are not paid to attend. Most of us will take some days off from our regular job to attend. What is paid for us is the flight, something to eat and the stay at a hotel. However as stated in the above post, we are not forced to write about anything that happens during the event, or to only write positive things.

Data Robotics, Drobo FS, GestaltIT, Storage, Tech Field Day

Drobo announces their new Drobo FS

In November 2009, Data Robotics Inc. released two new products, the Drobo S and the Drobo Elite. Yesterday I was lucky enough to be invited to a closed session with the folks from Data Robotics as they had some interesting news about a new product they are announcing today called the Drobo FS.

When we visited the Data Robotics premises with the entire Tech Field Day crew last November, one of the biggest gripes about the Drobo was that it relied on the Drobo Share to provide an ethernet connection to the storage presented by the Drobo. The newly introduced Drobo S added an eSATA port, but didn’t solve this limitation either, since it wasn’t even compatible with the Drobo Share. And the Drobo Share was not the best solution to begin with, if only for the fact that it connects to the Drobo via a USB 2.0 connection, thus limiting the maximum speed one could achieve when accessing the disks.

[Image: front of the new Drobo FS]

Well, that part changes today with the introduction of the Drobo FS. Basically this model offers the same number of drives as the Drobo S, namely a maximum of 5, and exchanges the eSATA port for a gigabit ethernet port. According to the folks from Data Robotics this means you will see an estimated 4x performance improvement when comparing the Drobo FS to the Drobo Share, and you also get the option of single or dual drive redundancy to ensure that no data is lost when one or two drives fail.

Included with all configurations you will receive a CAT 6 ethernet cable, an external power supply (100V-240V) with a fitting power cord for your region, a printed user guide and quick start card, and a Drobo resource CD with the Drobo Dashboard application, help files and electronic documentation. The only thing that changes, depending on your configuration, is the number of drives included with the Drobo FS. You can order the enclosure without any drives at all, which will set you back $699 (€519 / £469), or you can get the version that includes a total of 10 terabytes of disk space for $1,499 (€1,079 / £969).

As with the other Drobos, you are able to enhance the functionality of your Drobo with so-called DroboApps. This option will, for example, allow you to extend the two default protocols (CIFS/SMB and AFP) with additional ones such as NFS. Unfortunately we won’t be seeing iSCSI on this model, since according to the guys from Data Robotics they are aiming more towards a file-level solution than a block-level solution.
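Once an NFS DroboApp is enabled, mounting the export from a Linux client would look roughly like this. Note that the hostname and export path below are made-up placeholders, not guaranteed paths on a real Drobo FS; check the Drobo Dashboard for the actual share name:

```shell
# Illustrative only: "drobo-fs" and "/mnt/DroboFS/Shares/Public" are
# hypothetical placeholders for your device's hostname and NFS export.
sudo mkdir -p /mnt/drobo
sudo mount -t nfs drobo-fs:/mnt/DroboFS/Shares/Public /mnt/drobo
ls /mnt/drobo   # the files on the share should now be visible
```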

[Image: back of the new Drobo FS]

One of the newer applications on the Drobo FS is something that caught my eye. This application is targeted towards the private cloud and uses “Oxygen Cloud” as a service provider to provide file access to shared storage. This means that you can link your Drobos together (up to a current limit of 256 Drobo units) and allow them to share their files and shares. This includes options like access control and even features such as remote wipe, but a more complete feature list will follow today’s release.

One feature that was requested by some users hasn’t made it yet. The Drobo Dashboard, which is used to control the Drobo, is still an application that needs to be installed, but Data Robotics is looking into changing this into something that can be controlled via a browser-based interface. However, no comments were made regarding a possible release date for such a web interface. Also under development is an SDK that will allow the creation of custom DroboApps; again, a release date was not mentioned in the call.

I will try to get my hands on a review unit and post some tests once I have the chance. Also, I am looking forward to finding out more about the device when I meet the Drobo folks in person later this week during the Gestalt IT Tech Field Days in Boston, so keep your eye on this space for more to come.