Apple’s iPhone Pro & the Uncanny Valley of Computational Photography

Krishna Sankar
Mar 21, 2022

An interesting article in The New Yorker on the “Pro” cameras in the latest iPhones.

Kyle Chayka has done a good job [here] of capturing what isn’t quite right with software-defined photography! I’m looking forward to reading his new book, “Filterworld”.

The question he asks is “Have iPhone cameras become too smart?” This broader question will reverberate through many parts of our lives, including autonomous vehicles, recommendation engines, and even everyday “smart” IoT appliances.

The new iPhone promises “next level” photography with push-button ease. But the results look odd and uncanny. “Make it less smart — I’m serious,” users lament!

One example of the visual glitches caused by the device’s intelligent photography is the erasure of bridge cables in a landscape shot!

“Its complex, interwoven set of ‘smart’ software components don’t fit together quite right.”

On the iPhone 12 Pro, the digital manipulations are aggressive and unsolicited. “They bring details back in the highlights and in the shadows that often are more than what you see in real life. It looks over-real.”

The term “computational photography” describes imagery formed as much from digital data and processing as from optical information.

  • Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal. The device “sees the things one is trying to photograph as a problem to solve”: for example, the image processing tries to eliminate digital noise by smoothing it into a soft blur, but the “fix” ends up creating a distortion more noticeable than whatever perceived mistake was in the original (the toy sketch below illustrates the trade-off).
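
To make this concrete, here is a minimal, hypothetical sketch (plain NumPy/SciPy, unrelated to Apple’s actual pipeline; the synthetic scene and filter strength are made-up assumptions) of how an over-eager noise filter can erase thin detail such as bridge cables along with the noise:

```python
# A toy illustration, not Apple's pipeline: naive denoising by Gaussian
# smoothing erases fine structure (thin "cables") along with the noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic scene: a bright sky crossed by one-pixel-wide dark cables.
scene = np.full((64, 64), 200.0)
scene[8::16, :] = 40.0                               # cable rows

noisy = scene + rng.normal(0.0, 10.0, scene.shape)   # add sensor noise
smoothed = gaussian_filter(noisy, sigma=2.0)         # aggressive "fix"

def cable_contrast(img):
    """Brightness gap between sky rows and cable rows."""
    return img[::16, :].mean() - img[8::16, :].mean()

print(f"contrast before denoise: {cable_contrast(noisy):.0f}")    # about 160
print(f"contrast after denoise:  {cable_contrast(smoothed):.0f}") # about 30, cables washed out
```

The noise is gone, but so is most of the detail the photo existed to capture.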

Apple’s “Deep Fusion” feature is a good example of how photos land in this uncanny valley.

  • For every photograph, the camera creates as many as nine frames with different levels of exposure, and then merges the clearest parts of all those frames together, pixel by pixel, forming a single composite image (a simplified sketch follows after this list).
  • The iPhone camera also analyzes each image semantically, with the help of a graphics-processing unit, which picks out specific elements of a frame — faces, landscapes, skies — and exposes each one differently.
  • This technique, like salt, should be applied very judiciously. Now every photo we take on our iPhones has had the salt applied generously, whether it is needed or not.
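
For intuition, here is a hedged sketch of the two ideas above: exposure-bracketed frames merged per pixel by a crude “well-exposedness” rule, plus a toy segmentation-driven exposure step. Deep Fusion itself is proprietary, so every function name, gain, and frame count below is an illustrative assumption, not Apple’s method:

```python
# Illustrative sketch only; Deep Fusion is proprietary. This shows
# (1) per-pixel merging of exposure-bracketed frames and (2) a toy
# segmentation-based exposure step. All parameters are made up.
import numpy as np

def capture(scene, exposure):
    """Simulate one frame: scale scene radiance by exposure, clip to sensor range."""
    return np.clip(scene * exposure, 0.0, 1.0)

def fuse(frames):
    """Keep each pixel from the frame where it is best exposed
    (closest to mid-gray): a crude 'well-exposedness' rule."""
    stack = np.stack(frames)                  # (n_frames, H, W)
    badness = np.abs(stack - 0.5)             # distance from mid-gray
    best = np.argmin(badness, axis=0)         # winning frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

def semantic_expose(img, sky_mask, sky_gain=0.85, fg_gain=1.25):
    """Toy stand-in for semantic processing: brighten the foreground
    and pull down 'sky' pixels, guided by a segmentation mask."""
    return np.clip(np.where(sky_mask, img * sky_gain, img * fg_gain), 0.0, 1.0)

# Scene with deep shadows (0.05) and bright highlights (0.9).
scene = np.array([[0.05, 0.05, 0.9, 0.9]])
frames = [capture(scene, e) for e in (0.5, 1.0, 4.0)]   # bracketed exposures

merged = fuse(frames)
print(merged)   # shadows lifted by the long exposure, highlights kept by the short one

sky = np.array([[False, False, True, True]])            # pretend segmentation
print(semantic_expose(merged, sky))
```

Applying region-by-region gains like these to every single photo, needed or not, is exactly the “generous salt” problem described above.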

While we get “perfect” images, “they are coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning”!

You see, machines still don’t have an eye for photography; they can only render images!
