45 thoughts on “Facebook's Insatiable Hunger for Hardware”

  1. Isn’t Google employing a strategy of 100 cheap machines instead of 1 costly powerful server? It would be interesting to confirm, because it’s working for Google, and maybe Facebook should go that route. (Although it’s a different story that Google loads up those cheap machines with its own version of Linux and its own file system for better performance.)

    1. No, Google doesn’t use 100 cheap computers instead of 1 powerful server. They *might* use a couple of slightly underpowered computers instead of 1 high-end server. Google uses cost-effective Intel and AMD processors. Google uses standard PC disk drives spinning at 5400 to 7200 RPM instead of faster, more reliable SCSI. Google’s real strategy is that they customize the hardware and software for the problem at hand. They design and build their own motherboards, customize the OS, and supply their own cluster software: file system, map-reduce, database, remote procedure call, distributed locking, distributed cluster managers; the whole kit and caboodle.

  2. That strategy only works when power and space are cheap and processing power is expensive. We’re starting to see that shift in a major way. Big iron is making a comeback in power-efficient packaging.

  3. “between 1,200 and 1,500 servers from EMC Corp. and Force 10 Networks.”

    EMC makes storage systems and Force 10 makes Gigabit Ethernet switches…not really “servers” in my opinion. Was this sentence supposed to read differently?

  4. Klaus: I usually find Facebook and most big US sites fine; sadly, though, I’ve often found US/Canadian transit providers with little or no global transit bandwidth: technically, they provide “transit”, but with shockingly high latency by very inefficient routes. I have a server colocated in California on the end of a Peer1 connection – which routes UK traffic via Amsterdam, of all places, resulting in atrocious performance! (The really stupid thing is that IIRC they and the UK ISPs I was testing from have direct connections into LINX, which should give far better performance than the detour via the Netherlands. Maybe they’re trying to meet a traffic quota at AMSIX?)

  5. I hope that I can come up with a simple business concept like Facebook and make it worth as much. Is Facebook really worth its current multi-billion-dollar valuation, even though there are plenty of valid reasons it could be worthless?

  6. I have never been happy with Facebook’s latencies and load times, no matter where I access from, although I have noticed some modest improvements to date. It may be that Facebook’s questionable utility to serious businesses, coupled with its weak ability to monetize, will be a serious drag on its growth heading into the weakening economy.

    Also, all of this heavy iron, and the investment therein, might just be one boat anchor of a liability (sucking up cash that could be put to better use). There are many more creative strategies for building capacity.

  7. It’s much better to run larger servers than smaller servers. This is because, I can bet, synchronizing memcached across 805 servers is much more difficult than synchronizing across, let’s say, 200 servers of 4x the power. Apache and other web servers can also scale very well on high-powered machines. Running Apache across many servers requires more code, generates much more heat, and consumes more space.
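
    For what it’s worth, memcached clusters don’t usually synchronize state between servers at all: clients shard the keyspace, typically with consistent hashing, so each key lives on exactly one server regardless of cluster size. A minimal sketch of the idea (the class and node names here are hypothetical, not Facebook’s actual setup):

    ```python
    import hashlib
    from bisect import bisect

    class ConsistentHashRing:
        """Minimal consistent-hash ring: each client maps a key to one
        cache server, so servers never need to talk to each other."""

        def __init__(self, nodes, replicas=100):
            # Place several virtual points per node on the ring so keys
            # spread evenly and only ~1/N of keys move when a node changes.
            self.ring = []
            for node in nodes:
                for i in range(replicas):
                    h = int(hashlib.md5(f"{node}:{i}".encode()).hexdigest(), 16)
                    self.ring.append((h, node))
            self.ring.sort()

        def node_for(self, key):
            # Walk clockwise to the first virtual point at or after the key's hash.
            h = int(hashlib.md5(key.encode()).hexdigest(), 16)
            idx = bisect(self.ring, (h,)) % len(self.ring)
            return self.ring[idx][1]
    ```

    With this scheme, adding servers mostly changes how many keys each one holds, not how much coordination is needed, so the 805-vs-200 trade-off is more about per-request overhead and operational complexity than synchronization.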

  8. 10,000 servers is ridiculous. For crying out loud, you can build a nice quad-socket, quad-core server with 64 GB of RAM for $10k. With 20 of those I could power Facebook easily; that’s 1.28 TB of RAM. Or buy 100 of them and you’ve got 6.4 TB of RAM and 1,600 cores. If you can’t power Facebook with that, then you don’t know how to architect. Can’t do it with 100 servers? OK, buy 1,000 of them and you’ve got 64 TB of RAM and 16,000 cores. If you can’t power it with that, then the architecture itself is the problem.
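
    The commenter’s back-of-envelope sizing can be checked directly (the per-server specs are the commenter’s assumptions, not Facebook’s actual hardware):

    ```python
    # Hypothetical sizing from the comment above:
    # a 4-socket, quad-core server with 64 GB of RAM.
    cores_per_server = 4 * 4        # 16 cores
    ram_gb_per_server = 64

    def fleet_totals(servers):
        """Return (total cores, total RAM in TB) for a fleet size."""
        total_cores = servers * cores_per_server
        total_ram_tb = servers * ram_gb_per_server / 1000
        return total_cores, total_ram_tb

    print(fleet_totals(20))    # 320 cores, 1.28 TB
    print(fleet_totals(100))   # 1600 cores, 6.4 TB
    print(fleet_totals(1000))  # 16000 cores, 64.0 TB
    ```

    Raw totals aside, this ignores why large sites run many servers: redundancy, fault isolation, rolling deploys, and the fact that working sets and request fan-out rarely pack neatly into a few big boxes.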

  9. Strange — why copy Google, if Google’s strategy has been targeted at success in the US, and times have changed? The US is less and less of a leader (if at all; search for “food rationing” — yes, for the first time in US history). Do they have the capability to think for themselves at FB at all?

  10. @A.T.
    Please expand on why the US no longer being a leader would in any way invalidate following Google’s proven success. I’m just wondering what other examples of mega-success you’ve come across on the Internet that have somehow escaped the attention of the rest of the Internet.

  11. What about the number of SysAdmins? Do you have any figures on this by any chance? Would be interesting to find out…

  12. The original article mentions 2 DBAs for 1,800 MySQL servers…. Ouch…. I don’t know how legit this info is, but if it is actually the case, it is damn impressive!

  13. Facebook is doing its best to scale up, and those hardships are going to be a hard juggle. I think they started off with their feet facing the wrong direction.

  14. With that amount of server power running, I hope Facebook considers energy saving and environmental issues a top priority. Siting is definitely one of the most important factors that affect the energy consumption and environmental effects of a server farm. There are cooler climates than in the continental US and with more green electricity available. I would recommend anybody considering siting a datacenter to take a look at the Finnish website on these issues: http://www.fincloud.freehostingcloud.com/
