Deep Fusion Feature on iPhone Explained

The introduction of the new iPhone 11 line of smartphones and the new A13 Bionic chip by Apple on 10 September 2019 also brought a new photography feature the company calls Deep Fusion. Essentially, this feature demonstrates how the company approaches photography powered by artificial intelligence.

Note that the new iPhone 11 smartphones come with new and better cameras. They also include improved Neural Engine hardware integrated within the A13 Bionic chip, which enables a new image processing method called Deep Fusion. Apple has called this method “computational photography mad science” because it is fundamentally based on artificial intelligence and machine learning.

So What Exactly Is Deep Fusion? What Does It Do and How Does It Work?

The principle behind this new AI-powered photography feature found in the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max is fairly simple. Deep Fusion works by taking a total of nine images and fusing them together to generate a clearer final image.

More specifically, before the user presses the shutter button, the phone has already shot four short images and four secondary images. Upon pressing the shutter button, the phone takes one long exposure.
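As a rough illustration of this capture sequence, the Python sketch below keeps a rolling buffer of the frames captured before the shutter press and then bundles them with the long exposure. It is purely hypothetical and not Apple's camera pipeline: the frame counts mirror Apple's description, but the function names and data types are assumptions made for the example.

```python
from collections import deque

import numpy as np

# Frame counts follow Apple's description of Deep Fusion; everything else is assumed.
SHORT_FRAMES = 4
SECONDARY_FRAMES = 4

short_buffer = deque(maxlen=SHORT_FRAMES)          # most recent short-exposure frames
secondary_buffer = deque(maxlen=SECONDARY_FRAMES)  # most recent secondary frames


def on_new_preview_frame(short_frame: np.ndarray, secondary_frame: np.ndarray) -> None:
    """Called continuously while the camera preview is live, before the shutter press."""
    short_buffer.append(short_frame)
    secondary_buffer.append(secondary_frame)


def on_shutter_pressed(long_exposure: np.ndarray) -> list[np.ndarray]:
    """Return the nine frames a Deep Fusion-style pipeline would fuse:
    four short frames, four secondary frames, and one long exposure."""
    return list(short_buffer) + list(secondary_buffer) + [long_exposure]
```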

The Neural Engine then analyzes the nine images pixel by pixel, which Apple says amounts to roughly 24 million pixels. This neural network hardware combines the best parts of each image to create a single photo optimized for as much detail and dynamic range, and as little noise, as possible. According to Apple, the entire process takes only about a second.
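To make the fusion step more concrete, here is a toy pixel-by-pixel merge in Python that weights each frame by a simple local-detail measure and averages the rest away as noise. This is only a heuristic stand-in: the real Deep Fusion pipeline relies on a trained neural network running on the Neural Engine, and the gradient-based detail score and weighting scheme below are assumptions made for illustration.

```python
import numpy as np


def fuse_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Toy pixel-by-pixel fusion: weight each frame by its local detail
    (gradient magnitude) so the result keeps fine detail while averaging
    noise down. Apple's Deep Fusion uses a trained neural network rather
    than this hand-written heuristic."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # shape (N, H, W)
    gy, gx = np.gradient(stack, axis=(1, 2))                   # per-frame spatial gradients
    detail = np.sqrt(gx ** 2 + gy ** 2) + 1e-6                 # per-pixel detail score
    weights = detail / detail.sum(axis=0, keepdims=True)       # normalize across the N frames
    return (weights * stack).sum(axis=0)                       # weighted per-pixel blend


# Example: fuse nine noisy captures of the same synthetic scene.
rng = np.random.default_rng(0)
scene = rng.random((480, 640))
frames = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(9)]
fused = fuse_frames(frames)
```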

Based on the explanation above, Deep Fusion marks a mass-market application of computational photography based on artificial intelligence and, more specifically, machine learning.

A Further Note on Computational Photography by Apple and Other Competitors

The new Night Mode feature of the iPhone 11 line of smartphones also takes advantage of the localized, built-in machine learning capabilities of the A13 Bionic chip. Specifically, through the Neural Engine hardware, the feature captures multiple frames at multiple shutter speeds and then combines them to create better images.

Because of the multiple shots at varying shutter speeds, there is less motion blur and more image detail even in low light. The feature also kicks in automatically when the phone detects scenes or environments with below-average light, as sketched below.
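The simplified Python sketch below illustrates that idea: a brightness check that decides when a night mode should activate, and an exposure-weighted merge of bracketed frames. The threshold value, the function names, and the omission of frame alignment are all assumptions for illustration; Apple has not published the details of its Night Mode pipeline.

```python
import numpy as np

LOW_LIGHT_THRESHOLD = 0.15  # assumed average-luminance cutoff, not Apple's value


def should_enable_night_mode(preview_frame: np.ndarray) -> bool:
    """Toy auto-trigger: turn the mode on when the preview is dark on average."""
    return float(preview_frame.mean()) < LOW_LIGHT_THRESHOLD


def merge_bracketed_frames(frames: list[np.ndarray],
                           exposure_times: list[float]) -> np.ndarray:
    """Toy low-light merge: normalize each frame by its shutter time, then
    average. A real pipeline would also align frames and reject motion-blurred
    regions; both steps are omitted here for brevity."""
    normalized = [f.astype(np.float64) / t for f, t in zip(frames, exposure_times)]
    return np.mean(normalized, axis=0)
```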

Note that Apple is not the first company to integrate AI-powered photography into a smartphone. Google has been taking advantage of artificial intelligence for several years with its Pixel line of Android smartphones. Google's AI prowess has made the cameras on the Pixel 3 and Pixel 3a among the best on the market.