12 thoughts on “The storage vs bandwidth debate”

  1. Close, but no cigar. The idea that cloud computing depends on “bandwidth” is a symptom of a fundamentally warped perspective on networking that became normative during a period when two things happened together: The opening of the Internet to commercial use and the development of fiber-based networking.

    The Internet’s design is actually quite poor along several dimensions of network efficiency and innovation-friendliness, but the rapid ascent of fiber optics papered over those flaws from the mid-90s until the rise of mobile data.

    What cloud users actually need from their network isn’t “bandwidth,” it’s “quality.” That is, they need reliable connections that don’t introduce arbitrary delays when packets are dropped or when neighbors on shared links generate little bursts of load. What a SATA interface gives you that best-effort IP never can is predictable, high-quality, low-delay, deterministic access.

    There will never be enough bandwidth in the world to provide an analogous service over a mobile network running under classless IP.

    The debate we need to have isn’t storage vs. bandwidth, it’s about networks optimized for the cloud rather than optimized for web browsing as the Internet currently is.

    1. You telecom geezers crack me up. IP is evil because it destroyed all the jobs for the wizards who maintained the arcane and overly complicated legacy signaling protocols.

      Video over IP exists, and video over GSM/LTE/EPC/ATM/SS7/ISDN/SONET/ITU-XXX407.33x never did, because IP has great economics.

      IP is cheap because you don’t need an army of technocrats to cobble together a convoluted network which barely works. Sure, IP has its issues. So does Ethernet. But they work and companies can afford to deploy them in the real world to paying customers. The alternative to occasionally choppy video is $200/month ATM/SONET/GPON delivering proprietary telco walled-garden services which nobody would want.

      Which carrier did you work for, anyhow?

      1. I’ve never worked for a carrier. When I was doing engineering, I worked for companies like Tandem, 3Com, Compression Labs, HP, and Cisco developing standards for Ethernet over UTP, Wi-Fi, and UWB and fancy apps like video conferencing.

        Thanks for asking; I appreciate insightful comments like yours.

  2. Finally, someone spells it out. Cloud is for computing and backup, not streaming data. In a few years, a $10 stick will hold more HD video than you could watch in a lifetime – and a 10c chip will decode it.

    Already, you can download the whole of Wikipedia to a smartphone. Seven times over, if you’ve got a 32GB model.

    As storage tends towards zero cost, you may as well keep a local cache of pretty much everything. Lots of caches, in fact.

    It’s also an energy issue. Just receiving a streaming HD video uses so much power on a mobile device that it may as well fire up a GPU and decode a locally stored copy. The chips will get even cheaper and more efficient, but you’ll always notice the difference if you leave the Wi-Fi and Bluetooth radios on.

    Besides anything else, cloud computing depends on economies of scale. Those mega-datacentres need cheap commodity hardware – and that means Intel & friends will continue to stamp chips out like Coke cans.

    The real cloud is only just beginning.

    1. It occurs to me that lower-price, higher-capacity storage creates more demand for networking. If every handheld in the world had a complete copy of all the world’s information on it, we would need furious network capacity to propagate the updates and revisions. The idea that storage and networking are an either/or is fundamentally wrong, like the idea that the world would only need six computers. It turned out that computers themselves create the need for more computers, and the same dynamic exists for storage and networks. The real deal is information, which always increases in proportion to technology’s ability to store it, analyze it, and communicate it.

      1. The point is that storage (for most people) is already not a problem, and it will become even less of one – at a rate that networking cannot match.

        The most important graph on the chart is the last one.

        Syncing some media files, sharing documents and keeping online backups? Cloud is all good.

        Instantly tapping 1000 extra machines when you have a peak demand for computation or bandwidth? Cloud is all good.

        Migrating to it entirely, including all your REALLY private data and processes, with no offline capability whatsoever – just to save $50-worth of CPU and hard drive?

        Not going to happen.

        http://www.infoworld.com/d/mobile-technology/whatever-you-do-dont-buy-chromebook-377?page=0,1

  3. Thanks to Om for highlighting this and the B-Blaze team for sharing their thoughts – this is the conversation that matters as we contemplate the future of cloud data.

  4. The insight that should come out of this observation is that caching of data should be aggressively pushed out to the edges of the network.

    Broadband providers build out residential IP networks to serve the peak demand that occurs during the evenings. Their aggregation networks are often congested for 2-3 hours each day, and nearly empty overnight. Streaming video, and Netflix in particular, is a major contributor to on-peak consumption.

    Imagine that instead of streaming HD video on-demand to individual consumers, taking up megabits of capacity for each household, Netflix movies were downloaded to your set-top box overnight when the network is uncongested. When terabytes of storage in your STB are cheap, it’s more cost-effective to add the storage than to add network capacity.

    All the requisite technology to do this exists today. RSS already manages downloading of audio and video podcasts in many media players (like iTunes). Authentication can control the distribution of podcasts to paying subscribers only. Encryption can ensure only the Netflix player can access the content. So there’s no reason your Netflix list, which not long ago was used to decide which DVDs were mailed to your house, could not control which movies were pre-loaded to your player (PS3, AppleTV, or just a PC).

    All Netflix needs to do is add an overnight download feature. All carriers need to do is classify those download streams as low-priority and throttle them when the network is busy. Because there is no expectation for real-time streaming, nobody minds the traffic management. And virtually overnight, the single biggest headache for broadband providers disappears.
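
    A minimal sketch of what that client-side piece could look like, in Python, assuming a hypothetical podcast-style queue feed, an invented 1-6 a.m. off-peak window, and a scavenger-class DSCP hint that a carrier may or may not honor (none of these names, URLs, or numbers come from Netflix):

    ```python
    import datetime
    import socket
    import urllib.request
    import xml.etree.ElementTree as ET

    # Assumed off-peak window: 01:00-06:00 local time (illustrative, not a real policy).
    OFF_PEAK_START_HOUR = 1
    OFF_PEAK_END_HOUR = 6

    # DSCP CS1 ("scavenger"/low-priority) shifted into the IP TOS byte.
    LOW_PRIORITY_TOS = 8 << 2

    def in_off_peak(now=None):
        hour = (now or datetime.datetime.now()).hour
        return OFF_PEAK_START_HOUR <= hour < OFF_PEAK_END_HOUR

    def queued_titles(feed_url):
        """Read a podcast-style RSS feed of the subscriber's queue and yield
        (title, enclosure_url) pairs, exactly as a podcatcher would."""
        with urllib.request.urlopen(feed_url) as resp:
            tree = ET.parse(resp)
        for item in tree.iterfind(".//item"):
            enclosure = item.find("enclosure")
            if enclosure is not None:
                yield item.findtext("title", default="untitled"), enclosure.get("url")

    def low_priority_socket():
        """How a client could mark its transfer packets as low priority; shown
        separately because urlretrieve below does not expose its socket."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LOW_PRIORITY_TOS)
        return s

    def overnight_sync(feed_url, dest_dir):
        """Fetch everything in the queue, but only inside the off-peak window.
        A real player would wrap this fetch in authentication and DRM."""
        if not in_off_peak():
            return
        for title, url in queued_titles(feed_url):
            urllib.request.urlretrieve(url, f"{dest_dir}/{title}.bin")
    ```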

    To make this happen, you might need an incentive for consumers to switch away from the streaming model. One incentive could be quality: overnight downloads can be full-bitrate HD, while prime-time streaming is constrained by congestion and the adaptive codecs that work around it. Another incentive is economic: carriers moving to usage-based billing could decide that this best-effort traffic doesn’t count towards a subscriber’s monthly total, or costs less per GB.
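
    As a toy illustration of that second, economic incentive (all prices and the peak/off-peak split are invented, not any carrier’s actual plan):

    ```python
    # Hypothetical usage-based plan: peak-hour gigabytes bill in full,
    # off-peak gigabytes at a steep discount. All numbers are made up.
    PEAK_CENTS_PER_GB = 50
    OFF_PEAK_CENTS_PER_GB = 5

    def monthly_bill_cents(peak_gb, off_peak_gb):
        return peak_gb * PEAK_CENTS_PER_GB + off_peak_gb * OFF_PEAK_CENTS_PER_GB

    # Shifting a 60 GB month of movie traffic from prime time to overnight:
    print(monthly_bill_cents(peak_gb=60, off_peak_gb=0))   # 3000 cents = $30
    print(monthly_bill_cents(peak_gb=0, off_peak_gb=60))   # 300 cents = $3
    ```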

    The most pressing challenge in residential broadband is peak usage. And the application which drives the majority of that usage is now Netflix. With a little effort, both parties can effectively resolve the last-mile congestion issue. What’s stopping them?

    1. What’s stopping them? How much time have you got?

      Piracy and the whimsical nature of consumer taste are high on the list, and there’s also that whole time travel thing. I can’t download tomorrow’s baseball game tonight, even though for many teams the result might be predictable.

      1. Baseball games are overwhelmingly watched live over the existing broadcast TV infrastructure.

        Even if you decided to reclaim all that broadcast bandwidth and allocate it to the data network, people would still watch sports content live. And live content streaming is relatively efficient if you do it with multicast.

        The problem in last-mile networks comes from the rise of on-demand streaming, where the 100 households on a street each decide to watch a different streaming Netflix title over their shared cable/DSL/PON infrastructure, during the same 2-3 hours of prime time.
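
        Rough numbers and a minimal receiver-side multicast join make the contrast concrete; the 4 Mbit/s per stream and the group address below are assumptions for illustration, not measurements:

        ```python
        import socket
        import struct

        # Back-of-envelope: 100 households each pulling a *different* on-demand title,
        # versus everyone watching one live event delivered as a single multicast stream.
        HOUSEHOLDS = 100
        MBPS_PER_HD_STREAM = 4                            # assumed average bitrate

        unicast_load = HOUSEHOLDS * MBPS_PER_HD_STREAM    # 400 Mbit/s on the shared link
        multicast_load = MBPS_PER_HD_STREAM               # 4 Mbit/s, however many viewers
        print(unicast_load, multicast_load)

        # On the receiver side, joining a multicast group is all it takes
        # (239.1.2.3:5004 is just an example in the administratively scoped range).
        GROUP, PORT = "239.1.2.3", 5004
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        chunk, _ = sock.recvfrom(2048)                    # one packet of the shared stream
        ```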

        The “podcasting” model doesn’t work for everything consumers want to watch, but it takes care of the “nightly movie” phenomenon that is the biggest problem with residential broadband today.

        As for piracy, I don’t see how in-home caching makes it materially easier. The folks who upload pirated TV shows to torrent sites don’t do it by decrypting the hard drives in their Scientific Atlanta PVRs. They record the raw stream over the air.

        Sadly, the folks at Netflix, Apple, Comcast, Verizon, etc. won’t read this. But if they did, it would offer a solution to their common problem.

  5. “A pilot gigabit project initiated by the government is under way, with 1,500 households in five South Korean cities wired. Each customer pays about 30,000 won a month, or less than $27,”

    http://www.huffingtonpost.com/2011/02/23/south-korea-gigabit-internet-2012_n_827145.html

    So $27 for 1,000 Mbit/s is, yes, about three CENTS per megabit per second. So yes, bandwidth is cheaper than storage according to your chart – but only in countries with ISPs that don’t charge ridiculous amounts for their services.
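
    Spelling that arithmetic out, using only the figures quoted from the article:

    ```python
    # From the quote above: roughly $27/month for a 1 Gbit/s (1,000 Mbit/s) pilot connection.
    monthly_price_usd = 27
    capacity_mbps = 1000

    cost_per_mbps = monthly_price_usd / capacity_mbps
    print(f"${cost_per_mbps:.3f} per Mbit/s per month")   # $0.027, i.e. about 3 cents
    ```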
