Why It's the Megabits, Not the MIPs, That Matter

34 thoughts on “Why It's the Megabits, Not the MIPs, That Matter”

    1. Yeah, but it can’t be the only focus. The processor needs to be smart enough and big enough. Bandwidth is the key, and graphics are the second most important key for future experiences. I think most of us are forgetting that little bit.

      1. Om, who says that the processor is the only focus? Eli Harari says that because denying Moore’s Law suits his company, just as it suits Apple and its product, the iPhone. Microsoft’s, Google’s, Yahoo’s, and Facebook’s people are focused on something else, which can be software and communication. And as jasonspalace said, the two had better go hand in hand, so that the demand for better, clearer, and quicker communication can be met by quicker processors.

        You are a bit delirious here.

  1. I agree and disagree with your point. For the vast majority of the applications out there, bandwidth will be the limiting factor. However, as the memory in cell phones increases and processor capabilities grow, application power is going to increase, so the MIPS will become important. These applications will be able to do more with the data they get. In the end we will have to find a holistic balance between the speed of getting the data and the ability to manipulate it.

    1. So we are not really disagreeing. When I say comm-puting, I am saying exactly that: computing and communications come into sync. Computing at line speed is what matters, not the other way around. So I am actually arguing for a worldview that is communications-centric, with the processor as part of the show. Without the communication fabric, a processor is just a processor, nothing more, in a world where every app and service is going to be network-enabled.

  2. Bandwidth at the periphery doesn’t grow as fast as storage and computing.

    It has been like that for years, and it is even more of a problem in the US than in Europe.

    A good and geeky read on what this means is Accelerando by Stross, a sci-fi book that pushes this historical fact (bandwidth at the edge doesn’t grow fast enough) to the day when the whole solar system is a computer. That computer can exchange just a tiny fraction of its data with any given peer elsewhere, because long-distance bandwidth is much more constrained than anything local.

    The whole book is about bandwidth, storage, and downloads, actually! That, and a bunch of other Singularity nerdiness 🙂

    OK, but what’s the point? Huge and powerful local computing and storage may always be necessary versus purely remote access, because local access is faster, and network-edge upload/download speeds are stuck at a very low level everywhere on the globe.

    Maybe we WANT 10x the bandwidth, but it’s much easier to get a 10x more spacious and powerful computer.

  3. Bandwidth, especially upload speed, and latency are usually missing from the discussion. In large part, that is because there is competition in the CPU space, where we have choices, whereas bandwidth is in the hands of the infernal cable-telco duopoly, which would rather manage artificially created scarcity and nickel-and-dime you for every dollar of value you get from the network, even if that means stillbirth for new technologies.
    The fact that the cable/telco industry is one of the largest campaign contributors to a venal Congress does not inspire hope either. This is not going to change until we have an administration willing to confront the oligopoly the way the French did with theirs. Julius Genachowski has been saying all the right things, but will he be able to deliver against such entrenched vested interests?

  4. I think you’re missing a couple of big points, Om. For one, wireless speed depends to a huge degree on the processing power of the DSP-ish device that encodes and decodes the bits. So while the speed of the general-purpose CPU doesn’t need to grow like wildfire, it does need to keep pace with the lower-level system components that make speed happen. And don’t forget that the end-to-end architecture of the current Internet invests a great deal of infrastructure in the end system’s CPU.

    But the real limiting factor of performance in hand-held devices is power. It doesn’t matter how fast you can encode bits for MIMO and OFDM and manage packets at the TCP level if your battery dies after 45 minutes, so the real deal today is something like MIPS/Watt. Adding parts to a die doesn’t improve that metric, so your Sun guy is clearly showing his stripes as a datacenter-oriented dude, which is what he is, of course.
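    A rough sketch of the MIPS-per-watt point, with invented illustrative numbers (these are not real part specs):

    ```python
    # Hypothetical chips, invented numbers: adding cores scales raw MIPS,
    # but to a first approximation it scales power draw with it, so the
    # MIPS/watt figure of merit barely moves.
    single_core = {"mips": 500.0, "watts": 0.5}
    quad_core = {"mips": 4 * 500.0, "watts": 4 * 0.5}

    for name, chip in [("single", single_core), ("quad", quad_core)]:
        print(name, chip["mips"] / chip["watts"], "MIPS/W")  # both: 1000.0

    # Battery life is energy divided by power: a 5 Wh battery lasts
    # 10 hours at 0.5 W but only 2.5 hours at 2.0 W, no matter how many
    # extra MIPS that power bought you.
    battery_wh = 5.0
    for watts in (0.5, 2.0):
        print(watts, "W ->", battery_wh / watts, "hours")
    ```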

  5. I’m all for taking shots, veiled or otherwise, at AT&T and everyone else’s crummy data network, but I think you are misrepresenting John Gruber to set this up. Gruber’s point is that a faster CPU matters because it will let the iPhone better keep up with even the mediocre speeds of AT&T’s network.

    1. A more relevant fact is that the upgraded iPhone just barely surpasses the CPU power of the Blackberry 9000 series, which is more or less the standard for high-end smartphones these days.

      For reference, the BlackBerry has more than 2,000 times the CPU power and memory of the first node on the ARPANET. But the AT&T wireless network is roughly 1,500 times faster than the first ARPANET connection, so these things are more or less in proportion.
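      A quick back-of-the-envelope using only the multipliers in this comment (the absolute baseline figures aren’t needed):

      ```python
      # Growth ratios taken directly from the comment above.
      cpu_growth = 2000.0  # BlackBerry vs. first ARPANET node
      net_growth = 1500.0  # AT&T wireless vs. first ARPANET link

      # "More or less in proportion": the compute/bandwidth balance has
      # drifted by only about 1.33x over roughly four decades.
      print(cpu_growth / net_growth)  # -> 1.333...
      ```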

      1. Richard,

        First of all, thank you for the historical perspective. It is amazing; I have been putting together a presentation around this whole notion, and now I can go back to the days of the ARPANET. 🙂

        I totally agree: the Bold is so much better than any other smartphone I have used, and the Curve 2 (8900) is even better. With UMA it totally rocks. I just think people have to redefine their thinking around the whole notion of what computing is today. I think Moore’s Law is morphing into more of a Moore’s Theorem, though I am not smart enough to make that assertion. I would love to hear your thoughts here.

      2. Om,

        I’m writing a white paper that deals in part with the progression of CPU power, network speeds, protocol sophistication, and regulatory models, so I don’t want to reveal too much of my thinking around all of this until I’m ready to go public with it. For the current discussion I think we do have to accept that we’re into the last legs of easy upgrades for both CPU power and network speeds, so it behooves us to pay more attention to efficiency in both the network and the system than we have in the past, at least for mobile devices where battery power is such a limiting factor.

        If you’d like to comp me for your event, we can talk some more about these dynamics.

      3. “We are into the last legs of easy upgrades for both CPU power and network speeds”?

        Thank you for raising this point. Finally I have something to say 🙂 It is now up to us software and application people to push the limit by building more efficient software!

    2. Eas,

      You are perhaps misreading the post. I am not misrepresenting John, and neither am I taking a pot shot at him. AT&T’s network is pretty pathetic, and you know it as well as I do. Others have experienced that too.

      On keeping up with AT&T’s mediocre network: the point of my post is that we need to think about processor and network speeds in tandem, not as standalone metrics.

      We might be saying the same thing.

  6. Particularly in the cloud, with dense computing, more symmetry in the network is needed.
    My emerging, though simplistic, rule of thumb is 1 GHz of compute processing = 1 Gigabit of I/O; a rough sketch of what that implies follows below.
    It all depends on workloads and apps, of course.

    More in my blog later this summer…
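    A minimal sketch of that rule of thumb applied to a hypothetical rack (the server counts and clock speeds below are invented for illustration):

    ```python
    # Sizing a hypothetical cluster under the "1 GHz of compute = 1 Gbps
    # of I/O" rule of thumb from the comment above. Numbers are invented.
    servers = 40
    cores_per_server = 8
    ghz_per_core = 2.5

    total_ghz = servers * cores_per_server * ghz_per_core
    required_gbps = total_ghz * 1.0  # 1 Gbps of I/O per GHz of compute

    print(f"{total_ghz:.0f} GHz of compute -> {required_gbps:.0f} Gbps of I/O")
    # 800 GHz -> 800 Gbps: an argument for a symmetric, non-blocking
    # fabric rather than thin, oversubscribed uplinks.
    ```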

    1. Jayshree,

      I totally agree with you on this. I think the bump in the speed of the iPhone is about keeping up with the upgraded speed of the AT&T network. They have been promising us speeds of around 700 kbps and higher, though my view is that there will be more symmetry between the compute and communicate worlds.

      1. They’re actually promising 7.2 Mbps in the shared downstream direction this summer for the new iPhone. It’s just a firmware upgrade to their existing plant, but it will do some interesting things to the backhaul in places where they still use copper.
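        A back-of-the-envelope on that copper backhaul point, assuming classic T1 circuits at 1.544 Mbps (the circuit type is my assumption, not the commenter’s):

        ```python
        import math

        # 7.2 Mbps shared downstream per the comment; 1.544 Mbps is the
        # standard T1 rate (the T1 assumption is mine).
        peak_downstream_mbps = 7.2
        t1_mbps = 1.544

        # Carrying the advertised peak for even one sector over copper:
        print(math.ceil(peak_downstream_mbps / t1_mbps), "bonded T1s")  # -> 5
        ```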

  7. What is the constraint in handheld networked computing?

    1. CPU speed
    2. Bandwidth

    The PC was at the same stage a few years back, and we see that we still need more bandwidth and more computing power.
    The handheld revolution is on, and we don’t really know what is possible at this moment.

    Ignore any of it at your own peril.

  8. I agree with you here, Om. Actually, you can summarize the current IT trend (apps and data in the cloud) with Sun’s slogan: “The network is the computer.” Five years ago it was true for IT geeks building massively parallel supercomputers, but now, with the advent of cloud apps, it has become relevant to normal users. Why download a song when you can stream it? You can access data from anywhere using cloud devices like Pogoplug or cloud synchronization services, and even core office apps are now cloud based, like Google’s office suite and Zoho. The fact of life today is that you can work very productively with a good network connection plus cloud apps on netbooks or old hardware.

  9. I would add operating system efficiency and browser page-rendering speed to the SIP + higher-bandwidth equation.

    Chrome and Android running on that new Qualcomm “all-in-one” chip over WiMAX is what we want, and the result would be way faster than merely doubling (quadrupling?) the CPU speed.

  10. Outside of the tech/geek crowd, people are far less willing to pay for bandwidth than for compute speed. I sell computers, and when I ask a customer why they want to upgrade, the number one answer is either “faster downloads” or “not waiting for the internet all the time.” I usually suggest that they may be better off upgrading their connection from the basic DSL plan to a faster one, or from mobile broadband to a wired one. The usual response is that they want to know what they can spend all at once to speed up their connection; they are unwilling to take on any higher recurring monthly charges for the sake of speed.

    In addition to fatter pipes, we need to be looking at better compression algorithms and some standard for sending heavily compressed data from the cloud to user clients. This should all be totally transparent to the users.
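    One standard that already does something like this, transparently, is HTTP gzip content-encoding; a minimal sketch (the URL is hypothetical):

    ```python
    import gzip
    import urllib.request

    # The client advertises gzip support; a browser does this invisibly.
    req = urllib.request.Request(
        "http://example.com/data.json",  # hypothetical endpoint
        headers={"Accept-Encoding": "gzip"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        # If the server compressed the response, undo it here; the user
        # never sees this step.
        if resp.headers.get("Content-Encoding") == "gzip":
            body = gzip.decompress(body)
    print(len(body), "bytes after transparent decompression")
    ```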

  11. PC processing is extremely important. It is true that when you have fast access to the internet, you will definitely enjoy that much more than faster processing in a localized environment. But faster internet access is also related to CPU speeds.
    So in my opinion, CPU speeds and bandwidth are both equally important.

  12. I just read your entry through the link at http://technologycritics.wordpress.com/
    Though I agree with your article as well as Technologycritics’ article, I could not deny that Windowslog has a strong point. When we are talking about mobility, the MIPS of the terminal is as relevant as it was before. However, it is totally different
    A) to think of challenging the MIPS of a data center by accelerating the speed of each processor than by multiplying the number of processors and managing their connectivity in a very productive manner;
    B) to think of increasing the MIPS of the terminal as one way to make the connectivity factor efficient, rather than keeping MIPS as the main challenge and the performance of the network (not just bandwidth) as a secondary one.

    Manuel No

  13. Om,

    Though both connectivity and computing matter, their relative importance does change depending on time, application workload, device, etc. The ping-ponging between the two will continue unabated… There are times when I crave a much faster CPU and more memory (e.g., when running VMs on my PC and Mac) and other times when I crave much faster bandwidth (e.g., when surfing from a coffee shop).

    Interestingly, both computing and connectivity equipment depend on processors, ASICs, and memories (MIPS, Mbps, Mpps, MB), which are bound by the fundamental laws of precious electrons…

    PG.
