Are You Ready for Open-Source Hardware?

29 thoughts on “Are You Ready for Open-Source Hardware?”

  1. Market disruption aside, Backblaze has effected a marketing coup. Did you ever hear about them before? I had no inkling I could get unlimited online storage for $5 a month. WOW and more WOW! If my instincts are right, people will burst their door jambs out to get in as customers.

  2. During my storage-networking days a few years back, I remember working on a storage product called the “Cube” – similar in concept, in that storage comes in modular, self-contained blocks, although not in the business model….
    The difference between raw storage cost & EMC pricing is astronomical – certainly an opportunity for the model to permeate. Maybe it could be a spin on the Freemium model (a post from Om earlier talked about the Freemium model – http://gigaom.com/2009/09/01/how-freemium-can-work-for-your-startup/)

  3. Is this chart on an apples-to-apples basis?

    My understanding is that when you buy a ‘gigabyte’ on S3 you are actually getting multiple redundant copies of that gigabyte, whereas if you buy a gigabyte of RAID, it is only 1 gigabyte.

    So you would need to multiply the cost of any of the build-your-own storage solutions by 2x to 3x to even it up.

    Then there are labor costs, etc., but those are footnoted correctly in the chart.
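    The multiplier argument above is easy to sanity-check with a little arithmetic. The sketch below uses hypothetical per-gigabyte figures (not numbers from the article's chart) just to show how a replication factor changes the comparison:

    ```python
    def effective_cost_per_gb(raw_cost_per_gb: float, replication_factor: int) -> float:
        """Cost per *usable* gigabyte once redundant copies are counted.

        If a managed service like S3 already keeps 2-3 copies of your data,
        a fair comparison multiplies the DIY raw cost by the same factor.
        """
        return raw_cost_per_gb * replication_factor

    diy_raw = 0.12  # hypothetical $/GB for build-your-own hardware
    for r in (2, 3):
        print(f"{r}x replication: DIY effective cost = "
              f"${effective_cost_per_gb(diy_raw, r):.2f}/GB")
    ```

    Even at 3x, a DIY pod may still come out ahead of enterprise arrays, but the gap narrows considerably once redundancy is priced in.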

  4. This is really an interesting topic. Specifically, I find it fascinating to witness the adoption curve of these sorts of things.

    I did some work for Vyatta (enterprise-grade routing & security on Intel hardware) on business development into the ISPs and service providers (similarly, large consumers of networking as well as storage). It quickly comes down to a few things:

    – Customers will not become early adopters simply because of disruptive economics; the entire solution needs to be there before they jump… as in all the esoteric features that make the products truly deployable & manageable.

    – Despite closed hardware/systems being ridiculously expensive on a $$/horsepower basis, users have grown accustomed to their “form” and have a hard time getting past this sort of “if it walks like a duck, it must be a duck” mentality. For example, because servers typically use hard drives for storage (even if RAID/redundant), people have a hard time considering them for use as a networking utility… because routers don’t have drives! It’s all psychological, as often a router sits next to a server with an equally mission-critical function running on it.

    – Not surprisingly, this “nobody gets fired for using Cisco, EMC, NetApp” phenomenon leads “bigger” customers to be slower… which opens the door for groups like Backblaze to disrupt…

    From what I have seen firsthand @ big customers, there is a ton of opportunity to build “closed-open hardware appliances” for these sorts of open solutions, i.e., adaptations of Intel hardware into form factors that more closely resemble their closed-system counterparts.

    1. Hahnfield

      It is clear that Backblaze expects some ODMs in Asia or other new hardware start ups to build on their design so that more people will have a chance to buy big storage cheaper.

      The way the company described it to me, they are giving away the whole shebang, including the code and SDK, so people can build on the whole system. I think that is what makes this more disruptive than usual.

      Thanks for your awesome comment. Enjoyed learning from you.

      1. I just wanted to clarify that we are giving away the complete design for the Backblaze Storage Pods (which is the hardware plus the software stack that brings it online) but not an SDK or the code for the online backup service itself.

        Appreciate the perspectives and comments,

        Gleb Budman
        CEO, Backblaze

    2. The big part missing from the analysis is support: NetApp, Sun (with ZFS and storage products) and others will all deliver bugfixes with some sort of SLA, as well as hardware support, none of which you get with a build-your-own approach. Backblaze is providing its own hardware and software support, which makes sense as its volumes are so enormous – those with less need for storage may find commercial offerings are better.

      This is also why people pay for Red Hat Enterprise Linux (RHEL) support when they can get virtually identical software (down to the bugs and fixes) from CentOS, who rebuild every release of RHEL from source. Part of the support payments to Red Hat goes to fund Linux kernel developers. Novell does something similar, of course, with SUSE Linux.

      Depending on volume of storage and in-house guruhood, it can pay to in-source support of hardware and/or software.

  5. Interesting hardware design and it’s good they are publishing the details. However, it would be a lot more useful if they would also distribute the open-source stack they are using, including its configuration: Linux, JFS, etc.

    And of course they are not giving away the real secret sauce referred to at the end: the higher level software stack that maps a backup request into specific encrypted blocks on a storage server, including de-duplication, incremental storage, etc.
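    Backblaze has not published that higher-level stack, but the general shape of block-level de-duplication is well understood. Purely as a generic illustration (none of this is Backblaze's actual design), a content-addressed block store keys each fixed-size block by its hash, so identical blocks are stored only once:

    ```python
    import hashlib

    # Illustrative sketch of block-level de-duplication: split data into
    # fixed-size blocks, key each by SHA-256, and store each unique block once.
    # BLOCK_SIZE and the dict-backed store are assumptions for this example.
    BLOCK_SIZE = 4096

    def dedup_store(data: bytes, store: dict) -> list:
        """Return the ordered list of block keys; `store` keeps one copy
        of each unique block, keyed by its hash."""
        keys = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            key = hashlib.sha256(block).hexdigest()
            store.setdefault(key, block)  # only new blocks consume space
            keys.append(key)
        return keys

    store = {}
    keys = dedup_store(b"A" * 8192 + b"B" * 4096, store)
    print(len(keys), len(store))  # 3 block references, 2 unique blocks stored
    ```

    Incremental backup falls out of the same idea: a changed file produces mostly-identical block lists, so only the blocks whose hashes are new need to travel over the wire.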

  6. I’ve read this article as well as their blog and had a few questions:

    * Even with 6 fans, I wonder how much heat each box generates and whether 6 fans are enough to cool the system.
    * From the Vimeo video as well as the Flickr snapshots, I noticed that the disks are very close to one another. I wonder how easy maintenance is for this type of setup. I also didn’t notice a way for the drives to be ejected easily (in case of drive failure).

    Great article and product/idea, though. I’d be very interested to learn more about how they handle outages (if any).

  7. I wonder what the power and cooling requirements for these shelves are. How “green” are they? I don’t see how you’d replace a defective drive without taking a whole shelf offline, either.
