Facebook Explains Auto-Enhancing of Photos Uploaded via iOS App

Facebook indirectly confirmed Tuesday’s report on auto-enhancing photos uploaded via its iOS application with a post on its engineering blog Wednesday explaining the thinking behind the new feature.


TechCrunch reported on the auto-enhance feature Tuesday, saying that it would soon be extended to Android users, as well.

For users who don’t want their photos to be auto-enhanced, Lifehacker provided the following instructions to turn the feature off:

  • Tap the “More” button in the Facebook app.
  • Scroll down to “Settings > Videos and Photos.”
  • Disable “Enhance Automatically.”

Following are highlights of the engineering blog post explaining why Facebook launched the feature:

Creating a software solution that helps people capture images as vibrant as the moment itself required breaking down what it means to shoot images as we saw them. There are three main reasons images captured with your phone look different than they do in real life.

First, a digital camera doesn’t capture as much light information about the scene as your eyes do. A real-life scene often contains a very wide dynamic range of light, from 10 lux (a dark bar scene) to 100,000 lux (a bright outdoor scene). This represents an 80-decibel dynamic range, whereas most smartphone cameras deliver around 9.25 bits, or 55 dB, of dynamic range, much less than we see. The net result is that digital images appear somewhat flat and not as vibrant and tonally rich as the real world.
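The decibel figures quoted above follow from the standard convention for optical dynamic range, 20·log10 of the light ratio. A quick sanity check in Python (the 10 lux, 100,000 lux and 9.25-bit numbers come from the text; the 20·log10 convention is an assumption about how they were derived):

```python
import math

def dynamic_range_db(ratio):
    """Dynamic range in decibels for a linear light ratio: 20 * log10(ratio)."""
    return 20 * math.log10(ratio)

# Scene: 10 lux (dark bar) to 100,000 lux (bright outdoors) is a 10,000:1 ratio.
scene_db = dynamic_range_db(100_000 / 10)

# Sensor: 9.25 usable bits cover a ratio of 2 ** 9.25.
sensor_db = dynamic_range_db(2 ** 9.25)

print(f"scene:  {scene_db:.0f} dB")   # 80 dB
print(f"sensor: {sensor_db:.1f} dB")  # ~55.7 dB, matching the ~55 dB cited
```

The roughly 24 dB gap between the two figures is the light information the sensor simply never records.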

Second, the human visual system dynamically adapts to these huge swings in dynamic range through a process of “image memory” and physiological adaptations (e.g., your iris dilating and contracting depending on the scene and where you look). Pragmatically, this means you can see detail in both shadow and bright light when the two appear in the same scene. Even though your eye adapts as it moves around the scene, you remember the whole scene as if it were evenly lit.

Third, the world is a noisy place. I don’t mean loud, but the light itself and the sensors collecting the light can generate “noise,” or graininess, in an image. Light-gathering CMOS sensors are inherently noisy, especially in low-light scenarios. Fascinatingly, this is one of the few places where the strange world of quantum mechanics impinges on our daily life. Photons are emitted randomly from light sources and thus reach us with a certain amount of randomness. Our visual system, which accumulates a mental image of the scene over time, effectively removes much of our perception of this noise, letting our brain assemble an image largely free of it. Not only does light arrive randomly at the silicon sensor, but the sensor itself adds noise as it turns photons into electrons and amplifies the signal. This conversion process is also confounded by quantum uncertainty and thus is somewhat random. Unfortunately, the digital sensor captures all of this noise, which conflicts with our sense of the scene.
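The accumulation the blog attributes to our visual system can be illustrated with a standard toy model of sensor noise. This is a hypothetical sketch, not Facebook's noise-suppression code: noise is modeled as Gaussian, and averaging N independent readings shrinks its standard deviation by roughly the square root of N.

```python
import random
import statistics

random.seed(0)  # deterministic illustration

TRUE_SIGNAL = 100.0  # the "ideal" brightness of one pixel

def noisy_sample(sigma=10.0):
    """One sensor reading: shot + read noise modeled as Gaussian for simplicity."""
    return random.gauss(TRUE_SIGNAL, sigma)

# Compare single readings to the mean of 16 readings per pixel.
singles = [noisy_sample() for _ in range(1000)]
averaged = [statistics.mean(noisy_sample() for _ in range(16)) for _ in range(1000)]

# Averaging 16 samples cuts the noise by about sqrt(16) = 4x.
print(f"single-shot noise: {statistics.stdev(singles):.2f}")
print(f"16-frame average:  {statistics.stdev(averaged):.2f}")
```

A software noise suppressor cannot average frames it never captured, so in practice it trades spatial detail for smoothness instead, but the underlying statistics are the same.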

At first blush, it seems like an intractable problem: How do we recover light in the dynamic range missed by the camera to create a noise-free image? It turns out this is an age-old problem in photography, one that confronted 20th century masters such as Ansel Adams and Ernst Haas. Photographic film and paper had these same problems, arguably to a greater degree. The masters evolved a set of darkroom techniques that managed local and global tone through dodging-and-burning, along with chemical recipes and various colored filters. These techniques required a huge amount of time to execute and years to master. Later, in the digital age, desktop tools followed suit, providing similar techniques in the digital domain. While not quite as time-consuming, they still required a level of mastery and patience that inhibited all but the most avid enthusiasts and professionals.

Our approach was to adapt ideas from the masters and devise automated algorithms — collectively known as computational imaging — that apply these techniques in the right amount and at the right time. We developed three computational imaging technologies drawn directly from these historical techniques: adaptive Global Tone Mapping (GTM), Local Tone Mapping (LTM) and Noise Suppression (NS). Applied together, these manage dynamic range the way our visual system remembers it and the way the 20th century masters brought images to life.
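As an intuition for the global half of that toolkit, here is a toy global tone-mapping curve: a single gamma adjustment applied uniformly to every pixel. This is a minimal sketch, not Facebook's adaptive GTM, which chooses its curve per image; the fixed gamma value and 8-bit pixel range here are illustrative assumptions.

```python
def global_tone_map(pixels, gamma=0.6):
    """Toy global tone mapping: one gamma curve applied to every 8-bit pixel.
    A gamma below 1 lifts shadows while compressing highlights, a digital
    cousin of darkroom dodging applied to the whole frame at once."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# A dark row of pixels: shadows rise the most, highlights barely move.
row = [10, 40, 120, 200, 255]
print(global_tone_map(row))
```

Local tone mapping (LTM) would instead vary the curve by image region, and an adaptive GTM would pick the curve from the image's histogram rather than using a fixed constant.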

Each technique is applied to every image differently based on the content of the image itself, lending the image a richness that more closely models the way we saw the moment. And, in this case, the tool brings to life the brightness on my daughters’ faces that I fondly remember.

You can use the auto-enhance filter on iOS now. Check the settings to control the strength or turn it on and off. We hope you enjoy the new tool.

iOS users: Have you tried the auto-enhance feature yet? What are your initial thoughts?

david.cohen@adweek.com David Cohen is editor of Adweek's Social Pro Daily.
Publish date: December 17, 2014 https://stage.adweek.com/digital/auto-enhancing-photos-ios-app/ © 2020 Adweek, LLC. - All Rights Reserved and NOT FOR REPRINT