Why Apple Should Buy Nuance

17 thoughts on “Why Apple Should Buy Nuance”

  1. Hi Om: I appreciate your sentiment, but I don’t think an Apple acquisition of Nuance is in the cards. I’m assuming that Steve is referring to Loquendo (an R&D business unit of Telecom Italia), but I’m not sure who the second one might be (though there are other subsidiaries of larger companies that might be spun off).

    Don’t forget that Apple and Nuance already have a pretty good business relationship, with Nuance “powering” both Siri and Voice Control on the iPhone. Apple could quickly add APIs for “native” voice in the iPhone SDK based on the tight integration of both recognition and text-to-speech software from Nuance.

    I don’t think either company wants an exclusive relationship at this point. Apple has a bunch of patents of its own to support a multi-modal user interface, including multi-touch. Ditto for Nuance surrounding predictive texting and speech recognition. As you correctly note, there’s quite a bit of common interest here to be leveraged. But Nuance is also working closely with IBM to define the next generation of mobile interface that includes speech. How Big Blue and Apple would work together is an issue as well.

    It’s ironic that all the speculation started with a poor choice of words by Steve Wozniak. We could all dream of more “speechable moments” on the iPhone thanks to the acquisition of Nuance but, as you noted, divesting or shuttering the unwanted elements, which is more than half the business right now, will probably preclude an acquisition.

    1. Dan

      Thanks for the insight. It looks like speech is among the key technologies that go into the core of the OS/platform, and if that is indeed the case, Apple should think about buying Nuance and making its patents/technology part of the Apple SDK. Sort of like location-based services.

      On the IBM/Nuance connection, well Apple has a history of working with IBM in the past. 🙂

  2. honestly … we aren’t socially at the point yet where high levels of voice recognition/transcription make sense on mobile devices

    one of the prime uses of mobile devices is privacy: you can use them anywhere without people really noticing … note the massive increase in text and other data, and the corresponding decrease in MOU (minutes of use)

    in order to effectively use voice, the user would have to speak out loud exactly what they want done in a public setting … sure, it’ll be useful in your car or in private, but then it requires the user to master both a verbal and a non-verbal skill set …

    the other issue is that true voice recognition is also semantic recognition … which means recognizing not just the wording but the intent of the query … if you look at how people converse, everyone gives instructions in a different way; will everyone learn a single voice method? … i personally doubt it

    voice queries tend to be much more free-flowing than text ones … the limitations of the returned results are an issue … try asking for “best place to eat in vancouver” and having a program verbally list 100 results a la google … the results must be absolutely concise … thus semantic recognition not just of the intent of the query but also of the best result

    you need look no further than your favorite IVR for the pitfalls …

  3. Recognition != understanding
    Understanding requires context, and context includes more than a monotone command; after all, one can program with punch cards too.

    In other words, the machine has to be situation-aware to make sense of a command and apply the right action. Has anybody really solved that? IBM has some nice research and even commercial products in that area, but no combined effort. Apple?

  4. Pardon me Om, but wouldn’t this be as bizarre as Intel acquiring McAfee?

    Speech recognition technologies span more than mobile or Mac-based computing. If Apple does acquire Nuance, the latter’s speech processing innovations will probably be limited to Apple-based platforms or apps only. That would be too bad, because although Apple would increase Nuance’s adoption by sheer numbers, I don’t personally see Apple working with banks to incorporate speech recognition technologies in their ATMs, or pursuing gesture-speech detection possibilities, voice-based authentication services, etc.

    Needless to say, future roadmaps would be highly tailored to Apple’s requirements, and Apple couldn’t care less about any speech applications other than the ones it needs.
