I’ve been thinking about this post for a few days now, because it’s a question I receive all the time. I’d been jotting notes on all the things to remember when, yesterday, one of my sales reps sent me this article by Steve Kaplan and Mondy Carpio. The title alone got my attention: "Virtualization and the Cisco Nexus Switch Combine to Kill Blade Servers." The article starts with the three classic reasons to deploy blades: space, hot-swap, and cable management. Those were some of the same reasons I had written on my little notepad during my brainstorming. In my experience with SMB customers, however, I’ve only ever seen one real deciding factor.
Space, the final frontier.
In my conversations with customers, the decision to go with blades has not come down to a hot-swap requirement or a cable management issue. I agree that those can be factors in the decision, but I don’t have a single customer for whom they were the deciding one. Space has always been the deciding factor when my customers have chosen blades. Sometimes it was space in their own datacenter; in other cases it was space in a co-location facility. The density that blades provide is very difficult to match with standalone servers.
Show me the money.
Another thing to remember about blades is that your purchasing costs are not linear. You start with a large chassis purchase, followed by smaller individual blade purchases. Those small purchases continue until the chassis fills and you need to buy the next chassis. IT is then hit with a sudden spike in cost for a new chassis just to house that one additional blade, and that cost must be absorbed somewhere.
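To make that cost spike concrete, here is a quick back-of-envelope sketch. The prices and the 8-slot chassis size are made-up illustrative numbers, not quotes from any vendor:

```python
def total_cost(num_blades, chassis_price=6000, blade_price=3000, slots_per_chassis=8):
    """Total hardware cost: one chassis for every group of up to slots_per_chassis blades."""
    chassis_needed = -(-num_blades // slots_per_chassis)  # ceiling division
    return chassis_needed * chassis_price + num_blades * blade_price

# Watch the step when blade number 9 forces a second chassis purchase:
for n in (7, 8, 9):
    print(n, total_cost(n))  # 9 blades cost $9,000 more than 8, not $3,000
```

Blade 8 costs you the usual incremental price; blade 9 costs that plus an entire empty chassis.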
We’re gonna rock down to, Electric Avenue.
Another thing to remember: power. A fully loaded chassis draws a large amount of power and puts out vast amounts of heat (in some cases with an equivalent amount of thrust). VMware Virtual Infrastructure includes a feature called Distributed Power Management (DPM). If you have 8 physical hosts in a VMware cluster, utilization during nights and weekends is typically very low. The infrastructure views that as a less than optimal use of resources, so VI3 will vMotion (live migrate) the workloads onto a subset of hosts and power off the ones that are not needed. This is beneficial in a server environment, but it really hits home with virtual desktops (i.e., why are we running 8 physical hosts for our virtual desktop farm on Saturday when 4 people are logged in?). How about this scenario: for desktop farms we reduce the running hosts on weekends to 2, and when users start to log in on Monday morning, we automatically start up the additional hosts in the cluster to add capacity on the fly. With standalone servers this works great. When I go from 8 standalone hosts to 2 standalone hosts in the cluster, I can reduce my host power consumption by up to 75%. With 8 blades in a chassis, I cannot reduce power consumption as efficiently, because so much of the power goes to the chassis backplane. I can shut down 6 of the 8 blades in the cluster and reduce my power consumption by only about 40%, due to the constant chassis power draw. Bottom line: with blades, I do not have as much granular control over my power usage.
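The arithmetic behind those two percentages can be sketched out. The wattages below are assumptions I picked to match the 75% and 40% figures above, not measured numbers from any particular hardware:

```python
def savings_pct(total_before, total_after):
    """Percentage reduction in power draw after DPM powers hosts off."""
    return 100 * (total_before - total_after) / total_before

# Standalone hosts (assumed ~400 W each): DPM powers 6 of 8 off.
standalone_before = 8 * 400
standalone_after = 2 * 400
print(savings_pct(standalone_before, standalone_after))  # 75.0

# Blades (assumed ~250 W each, chassis backplane/fans a constant ~1750 W):
# powering off 6 blades leaves the chassis draw untouched.
blade_before = 1750 + 8 * 250
blade_after = 1750 + 2 * 250
print(savings_pct(blade_before, blade_after))  # 40.0
```

The fixed chassis draw is the whole story: the bigger the backplane’s share of total power, the less DPM can save you.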
A related scenario mirrors the purchasing costs above: power requirements take a large leap with every chassis you buy but only a small step with each blade. I have met customers who have to unplug something old in the datacenter every time they plug in something new.
Don’t cut the red wire.
Some of the VMware networking best practices are to put the service console on a separate private network and to dedicate a network to the VMkernel port for vMotion. Then you also need additional ports for the virtual machines to uplink to the physical networks, and those should be redundant and aggregated together for maximum performance. This starts to add up to quite a number of connections per host. Something to think about with blades is the number of network uplinks they can provide: many blades are limited to 2, 4, or 6 physical network ports. Keep that in mind and think about how many physical networks your virtual machines will need to connect to. That determination will help you figure out how many network ports each physical host needs, and whether blades or standalones can meet the requirement.
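A quick tally shows how fast the port count climbs. The breakdown below is a hypothetical single-VM-network host following the redundancy guidance above; add two more ports for every additional physical network your VMs must reach:

```python
# Hypothetical per-host NIC tally for a VI3 host (assumed layout, not a spec):
port_needs = {
    "service console": 2,    # redundant pair on a private network
    "VMkernel/vMotion": 2,   # dedicated, redundant
    "VM uplinks": 2,         # aggregated redundant pair, one VM network
}
total = sum(port_needs.values())
print(total)  # 6 ports, already at the ceiling of many blade NIC configurations
```

One extra VM network pushes the total to 8, beyond what many blades can physically offer.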
Just some food for thought. Remember, I work with SMBs, so these are the most common points that come up in our conversations. You may have different reasons to choose one or the other. If a factor I haven’t mentioned was the deciding one in your choice, please post it in the comments section.