Camera sales are continuing to fall off a cliff, and they will keep falling as the capabilities of camera phones increase. Later this week, Apple is likely to announce a three-camera iPhone 11, just like Google, Huawei, Samsung, and every other phone maker. In 2020, Apple’s iPhone is likely to gain cameras that can see how far away things are, thanks to a new “time of flight” sensor. This will essentially give the phone super sight: the phones will be great for augmented reality and will make computer vision even more powerful.
The 2020 phone’s sensors use a VCSEL (vertical-cavity surface-emitting laser), which can accurately measure how far away objects are. A VCSEL is a semiconductor-based laser diode that emits an optical beam vertically from its top surface; by comparison, LEDs (light-emitting diodes) emit light from the side and top surfaces. VCSELs were originally used in communications equipment, but Apple has been using a version of this technology for its front-facing Face ID camera. The emitted laser light bounces off an object and returns to the sensor; the round-trip time reveals the distance between the object and the device, and measuring that time across the scene produces a depth map. That process is called “time of flight,” and it is becoming more popular thanks to the growing processing capabilities of the silicon inside phones.
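The round-trip arithmetic behind time of flight is simple enough to sketch in a few lines. The following is a minimal illustration, not Apple’s actual pipeline; the per-pixel timing values are hypothetical, chosen only to show how flight times become a depth map:

```python
# Minimal time-of-flight sketch: a ToF sensor measures the round-trip
# time of emitted light per pixel, and the one-way distance is
#   distance = (speed of light * round-trip time) / 2

SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_round_trip(seconds: float) -> float:
    """Convert a round-trip flight time into a one-way distance in meters."""
    return SPEED_OF_LIGHT * seconds / 2

def depth_map(round_trip_times):
    """Turn a 2-D grid of per-pixel round-trip times into depths in meters."""
    return [[distance_from_round_trip(t) for t in row] for row in round_trip_times]

# Hypothetical per-pixel timings: light returning after ~6.67 nanoseconds
# has traveled about 2 meters round-trip, so the object is ~1 meter away.
times = [[6.67e-9, 13.34e-9],
         [20.0e-9, 6.67e-9]]
print(depth_map(times))
```

A real sensor does this for every pixel at once, many times per second, which is why the beefier silicon mentioned above matters: the raw timings are only useful if the phone can turn them into a dense, real-time depth map.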
Apple isn’t the only company imagining such cameras on its devices. Phone makers are spending billions of dollars on camera capabilities because, as Xiaomi co-founder and CEO Lei Jun said in an internal document, “camera functions have become a decisive factor for smartphone purchases among many consumers.” Xiaomi set up a separate camera division and gave it substantial resources to compete in the market. Why not? It is up against giants that keep putting out bigger and better devices. Samsung is said to be rethinking lenses and the whole idea of the phone camera itself. These companies are spending heavily at a time when the revenues of traditional camera companies are shrinking, and shrinking fast.
You don’t have to look far to imagine the future of cameras: alongside interchangeable lenses, it will also include interchangeable algorithms. Light’s first multi-lens camera may have been a commercial disaster and a letdown from a usage standpoint, but it is a good mile marker on the way to that future. Looking ahead, it wouldn’t be a surprise to see multiple lenses on the back of our phones, with beefier graphics and machine-learning chips turning out even more amazing photos.
A lot of traditionalists dismiss my arguments, but in reality, if a generation or two grows up on a steady diet of cameras-on-phones and consumes visual data on digital screens, they will have little use for dedicated cameras. Today, programmers don’t think about spinning up data-center infrastructure when developing an app; they sign up for AWS. Photography is going through the same natural progression, in which complexity and hardware are abstracted away. I would also argue that Apple, Samsung, Google, and Huawei are vastly outspending the traditional camera makers. That is why we will continue to see massive gains in computational photography and camera-phone technologies relative to traditional cameras.
Photography-industry purists recoil when told that their idea of the future is wrong. When I suggest that the lifecycle of a photo will start and end with a digital screen, many dismiss it as unartistic. Samsung’s The Frame TV might not meet their exacting standards, but imagine a future in which you don’t buy a single photo of mine, but a lifetime subscription to my best photos.
As I keep saying again and again, camera phones FTW.