I just completed some consulting work for my former employer, Novell, and had the opportunity to work once again with Sal Darji and Ed Murphy. During the course of this work, I again ran into something that has long baffled me. Why is anyone buying proprietary Unix servers anymore?
To see why I say this, perform a simple comparison for yourself. Take the list price of a proprietary Unix server from Sun, IBM or HP and divide it by the server's SPEC CPU benchmark score. Do the same for a commodity Dell x86/64 server. Compare.
In my analyses, I use SPEC's SPECint_rate2006 benchmark. It measures integer processing, which is typical of business workloads. It also measures throughput rather than peak speed, since most datacenters care more about the total number of transactions processed than about minimizing the time for any single transaction. (The peak speed benchmark, SPECint2006, would be more appropriate for real-time computing.)
The SPECint_rate2006 score for a given server can be treated as a measure of its capacity: when we purchase that server, we are adding that many units of capacity to our datacenter. The cost per SPECint_rate2006, then, is the amount we're paying for each unit of additional capacity.
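As a minimal sketch of the arithmetic, here is a short Python snippet that computes cost per SPECint_rate2006 for a couple of servers. The prices and benchmark scores below are purely illustrative placeholders, not actual published figures; plug in the list prices and SPEC results for the machines you want to compare.

```python
# Cost per unit of capacity: list price divided by SPECint_rate2006 score.
# The entries below are illustrative placeholders, not real published figures;
# substitute actual list prices and SPEC results for the servers you compare.

servers = [
    # (name, list price in USD, SPECint_rate2006 score)
    ("Hypothetical proprietary Unix server", 120000, 300),
    ("Hypothetical commodity x86/64 server", 6000, 120),
]

for name, price, spec_rate in servers:
    cost_per_unit = price / spec_rate
    print(f"{name}: ${cost_per_unit:,.0f} per SPECint_rate2006")
```

With these made-up numbers, the x86 box delivers a unit of capacity for roughly an eighth of the cost, which is in line with the gap the actual list prices show below.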
I realize that this analysis only holds if all we really care about is throughput and integer performance. Every workload is different, and the results may be different if a particular workload's characteristics differ significantly from this approximation of a typical one. But still, the results are startling.
Based on publicly available list prices, Solaris servers range from $300/SPECint_rate2006 to over $1,000. In comparison, Dell x86/64 servers range from $30/SPECint_rate2006 to no higher than $170. At the low end, this is an order of magnitude difference in cost/performance! And it’s not just Dell — IBM, HP and Sun x86 servers have significant cost/performance advantages over their own Unix servers (although not as large).
This cost advantage could be eaten up by operating system licensing costs. If Windows were the only alternative on x86 architectures, that would certainly be true. But not only is Linux cheaper than Windows, it is typically as cheap as or cheaper than the licenses for the various Unixes.
At the high end, Unix servers are much larger than the largest available x86 servers. But the Unix world is moving towards virtualization, with single servers running multiple workloads. In that scenario, a scale-up architecture offers no real advantage over scale-out: the same consolidated workloads could just as easily be spread across several smaller x86 servers. Besides which, with quad-core x86 chips and ever more processors available on high-end x86 servers, the gap is steadily shrinking. I can see that occasionally an x86 server may not be able to meet the HPC needs of a particular app, but that should be the rare exception rather than the rule.
I have heard anecdotally of significant discounts off list for Unix servers, much more so than for Dell servers, and it's easy to see why: even a 50 percent discount, say, only brings a $300/SPECint_rate2006 Unix server down to $150, which merely matches the most expensive Dell configurations and is still five times the cheapest. Perhaps the Unix vendors, when they have to, discount deeply enough that the cost of switching eats up any remaining savings from moving to x86. And of course application and middleware compatibility can be an issue, but these days almost any vendor offering software for Unix also supports one Linux distribution or another.
So I remain baffled. Given these economics, for the vast majority of workloads, why would anyone stick with Unix? I realize the market has been steadily moving away from Unix and towards Linux and Windows, but plenty of shops are still buying new Unix servers for brand new applications. Why?
I am entirely open to the idea that I've overlooked something, so if anyone has thoughts about additional factors to consider, please leave them in the comments. But until proven otherwise, my recommendation to any datacenter is to avoid proprietary Unix except in rare cases and move to Linux on x86/64 instead.