Contrary to a lot of assumptions, large servers tend to work best at temperatures between 42 and 68 degrees Fahrenheit. Any colder and condensation can set in. Any hotter and all kinds of bad things happen:
- Processors overheat and sometimes melt down
- Hard drive lubricant starts to vaporize
- Electrons actually start to "jump" across circuits rather than following their designed pathways (not good)
- Motherboards get too hot and begin to warp and bend
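To make the temperature window above concrete, here is a minimal monitoring sketch using the 42-68 °F range quoted in this article. The thresholds and function names are illustrative assumptions; real facilities follow their hardware vendor's guidance, not these exact numbers.

```python
# Minimal sketch of a temperature check using the 42-68 F operating
# window described above. Thresholds are the article's numbers, used
# here purely for illustration.

LOW_F, HIGH_F = 42.0, 68.0  # thresholds quoted in the article

def check_inlet_temp(temp_f: float) -> str:
    """Classify a server-room temperature reading in Fahrenheit."""
    if temp_f < LOW_F:
        return "too cold: condensation risk"
    if temp_f > HIGH_F:
        return "too hot: thermal damage risk"
    return "ok"

print(check_inlet_temp(55.0))  # ok
print(check_inlet_temp(75.0))  # too hot: thermal damage risk
```

In practice a script like this would read from an environmental sensor and page someone; the point is simply that both ends of the range matter.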
More recently, hardware vendors have been turning to a chip architecture first introduced in the 1980s: ARM (Advanced RISC Machine). For a mind-numbingly dry explanation of the technical specs behind ARM technology, go here: http://en.wikipedia.org/wiki/ARM_architecture
The main difference between a regular CPU chip and an ARM chip is fairly simple. On regular chips, companies like Intel and AMD pack billions of transistors onto each silicon die; broadly speaking, more transistors means more computing power, though the relationship is far from linear. ARM designs, on the other hand, are much simpler: the original ARM chip had only about 25,000 transistors. ARM chips are not as powerful as their "regular" counterparts, but not disproportionately so. The way the chips work in concert with the motherboard and core software allows them to be a lot more powerful than one might ordinarily expect.
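One way to picture why an ARM-style (RISC) design can get away with so few transistors: where a complex-instruction chip might offer a single "add this memory location to that one" instruction, a RISC chip does the same work as a few simple load/add/store steps, each cheap to build in hardware. The sketch below is a toy illustration of that idea, not real ARM or x86 semantics.

```python
# Toy illustration of the RISC idea behind ARM. A CISC-style chip can
# offer one complex memory-to-memory add; a RISC design composes the
# same work from simple steps (load, add, store). Teaching sketch only,
# not actual instruction-set behavior.

memory = {"a": 3, "b": 4}

def cisc_add(dst: str, src: str) -> None:
    """One complex instruction: memory[dst] += memory[src]."""
    memory[dst] += memory[src]

def risc_add(dst: str, src: str) -> None:
    """The same work done as three simple RISC-style steps."""
    r1 = memory[dst]      # LOAD  r1, dst
    r2 = memory[src]      # LOAD  r2, src
    r1 = r1 + r2          # ADD   r1, r1, r2
    memory[dst] = r1      # STORE r1, dst

risc_add("a", "b")
print(memory["a"])  # 7: same result, built from simpler hardware steps
```

The simple-step approach means each individual instruction needs far less silicon, which is exactly where ARM's power savings come from.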
(A Standard 64-bit "Regular" Chip - billions of transistors)
The ARM chips work wonderfully in mobile devices where the conservation of power is a huge consideration. In fact, over 95% of mobile phones and tablets use ARM chips. Whenever you hear about the new Apple A4, A5, or A6 chips, those are built on the ARM architecture.
(An ARM Chip - Small and compact, just as you would expect)
(Picture courtesy of Wikipedia)
Given the success and resulting proliferation of ARM chips in the mobile market, innovators have been looking into whether enterprise servers could use the same technology. If viable data-center-class servers could be built with ARM chips, it would ease many of the power and cooling constraints that exist today. All of that could add up to a reduced need for robust data centers and greatly lessen monthly power bills.
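A back-of-envelope calculation shows why those power bills matter. Every number below (server wattages, electricity rate) is a hypothetical assumption for illustration only; real figures vary widely by server model, utilization, and utility rates.

```python
# Back-of-envelope power-bill comparison between a hypothetical x86
# server and a hypothetical ARM-based server. All figures are assumed
# for illustration, not measured data.

X86_WATTS = 400.0       # assumed draw of a typical x86 server
ARM_WATTS = 150.0       # assumed draw of an ARM-based server
RATE_PER_KWH = 0.12     # assumed electricity price in dollars
HOURS_PER_MONTH = 24 * 30

def monthly_cost(watts: float) -> float:
    """Electricity cost for one server running flat out all month."""
    return watts / 1000.0 * HOURS_PER_MONTH * RATE_PER_KWH

savings = monthly_cost(X86_WATTS) - monthly_cost(ARM_WATTS)
print(f"Savings per server per month: ${savings:.2f}")  # $21.60
```

Around twenty dollars per server per month sounds small, but multiplied across thousands of servers (plus the cooling load that no longer has to be removed), it becomes real money.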
Recent tests have shown some promise for the use of ARM technology in the data center, especially as the chips have become faster and more efficient with each new generation. However, ARM chips still lose out to regular chips when raw processing power is the primary consideration.
If you have a data center full of mid-size servers, you might very well be able to take advantage of the cost savings that ARM-equipped servers are being designed to provide. Be careful, though: like supercharged or turbocharged cars, ARM chips have their limits when it comes to raw processing power.
Finally, don't forget that the arguments over which chips to use may be a moot point. These days, smart CIOs are looking to get out of the business of owning and managing data centers altogether. Rather than the buy-and-own model of old IT practices, more and more CIOs are looking to move as much into the "Cloud" as possible. In other words, we are looking for ways to leverage the data centers of companies that do only that. That way, we can focus on the services we are providing rather than having to worry about a physical data center going down in the middle of the night, being swept away by a tsunami, or swallowed up by an earthquake.
For me, the less hardware I have in my portfolio, the better I sleep at night...