Men at Work At MySpace

6 thoughts on “Men at Work At MySpace”

  1. I found it a little humorous that a site that large didn’t have decentralized data centers. But then again, MySpace has never been a bastion of technical expertise.

  2. Why are people running data centers out of a place with high real estate costs, in a state with long-term, chronic power issues? You’d think everyone would be building their data centers in the Rust Belt.

  3. Decentralised, redundant data centers are possible, but they are expensive, especially with the load MySpace would put on them.
    But maybe they just decided that, given their demographic and requirements, it would be too expensive.
    And never mind the trouble they seem to be having just keeping up with growth while building everything once; doing it all twice would be that much harder.

  4. If they indeed want true redundancy, that means matching clusters of databases replicated across the US with very little latency, which absolutely adds complexity to the equation. Simply doing site A/B with network hardware, geoIP, global load balancing, and web servers is a no-brainer (a rough sketch follows at the end of this comment), but when you throw in massive databases whose changes have to replicate across the country within seconds, having duplicate sites becomes much more of a challenge.

    I do agree that moving out of LA would be their best bet, and Equinix has a great (and massive!) facility in Ashburn, Virginia that would suit their needs (FYI: you’ll likely need LX SFPs, not SX, in some of those Ashburn facilities). They’ve had power outages in LA before that did take down the MySpace site, and one would have hoped they learned their lesson after that happened, but here we are.

    Getting the space is not the big deal for MySpace: cages and cabinets are pennies compared with the monthly recurring price for bandwidth. Think about it: they pay for CDN from Limelight, they have their streaming music hosted at VitalStream, and who knows how many transit providers and peering arrangements they have. Getting all of that set up in parity means renegotiating their transit, CDN, and peering agreements all over again, and figuring out how much headcount to put at the second site, or whether to simply use expensive remote hands and contractors for installation and maintenance so far from the mothership. It’s going to be a fun game of math for the MySpace folks, but I’m sure their operations people are more than up to the task.
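
    To make that concrete, here’s a rough Python sketch of the “easy half”: a geo/health-aware routing rule with writes pinned to a single master site. The site names, regions, and routing logic are made-up assumptions for illustration, not MySpace’s actual topology.

        # Hypothetical two-site routing sketch: reads go to the nearest healthy
        # site, writes go to wherever the master database lives.
        SITES = {
            "LAX": {"region": "west", "healthy": True},
            "IAD": {"region": "east", "healthy": True},  # e.g. Equinix Ashburn
        }

        PRIMARY_DB_SITE = "LAX"  # single write master; the other site serves replicas

        def pick_site(client_region):
            """Prefer a healthy site in the client's region, else any healthy site."""
            for name, site in SITES.items():
                if site["healthy"] and site["region"] == client_region:
                    return name
            for name, site in SITES.items():
                if site["healthy"]:
                    return name
            raise RuntimeError("no healthy site available")

        def route(request_type, client_region):
            # Reads can be served anywhere; writes must land on the master site,
            # which is exactly the part geo load balancing alone doesn't solve.
            if request_type == "write":
                return PRIMARY_DB_SITE
            return pick_site(client_region)

        print(route("read", "east"))   # IAD
        print(route("write", "east"))  # LAX, and the data still has to replicate back

    The routing itself is a few lines; keeping the databases behind those two sites in sync is where the real work is.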

  5. They’re already in a bunch of other Equinix sites. The issue isn’t the network or the servers; it’s the software. Getting their software to work in a distributed fashion is apparently quite difficult. Freakout, they’ve already done all the math and negotiated all the deals. It’s just a matter of coding: most coders don’t think about the need for a distributed app, and when they do, they don’t consider greater-than-LAN latencies. How many coders have you ever met who thought about writing an app that performed well with 80 ms of latency separating two plesiochronous, constantly replicating databases?
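
    To put a number on why that 80 ms matters, here’s a toy Python sketch of two asynchronously replicating key/value stores. The site names and the fixed replication delay are illustrative assumptions, not a model of MySpace’s databases.

        import threading
        import time

        REPLICATION_LAG = 0.080  # assume ~80 ms one-way delay between the two sites

        class Site:
            """A toy database site: a dict plus fire-and-forget async replication."""
            def __init__(self, name):
                self.name = name
                self.data = {}

            def write(self, key, value, peer=None):
                self.data[key] = value
                if peer is not None:
                    # The peer only sees this write after the replication delay.
                    threading.Timer(REPLICATION_LAG, peer.data.__setitem__, (key, value)).start()

            def read(self, key):
                return self.data.get(key)

        la, ashburn = Site("LA"), Site("Ashburn")

        # Code written against one LAN-local database assumes the read below
        # returns the value just written; against a coast-to-coast replica it doesn't.
        la.write("profile:42", "new headline", peer=ashburn)
        print(ashburn.read("profile:42"))  # None -- the replica hasn't caught up yet

        time.sleep(0.1)
        print(ashburn.read("profile:42"))  # "new headline", once replication lands

    Any app built against a single LAN-local database silently bakes in the assumption that the first read already sees the write; that assumption is exactly what breaks once the second database is a coast away.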
