Updated on 11 December 2024
September is one of the most highly anticipated months of the year, at least for Apple iPhone enthusiasts. After all, it is the month when the tech giant unleashes a new generation of iPhones, which boast features like better cameras, new colorways, and more.
As expected, this year welcomes the arrival of the iPhone XS, which boasts a slightly larger sensor and significantly more computing power. The latter comes courtesy of the A12 Bionic chip, which greatly enhances the device's image signal processing.
While it is true that the iPhone can now process 5 trillion operations per second, that alone does not let it surpass, or even match, modern interchangeable-lens cameras. According to PetaPixel, true exposure, which is essentially the number of photons captured, is directly related to image quality in photography.
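To put rough numbers on that photon argument, here is a minimal sketch comparing the light-gathering area of a full-frame sensor against a typical smartphone sensor; the smartphone dimensions are an assumption chosen for illustration, not a figure from Apple or PetaPixel.

```python
# Rough comparison of light-gathering area: full-frame vs. a typical
# smartphone sensor. The smartphone dimensions below are assumed, not official.
FULL_FRAME_MM = (36.0, 24.0)     # standard full-frame sensor, mm
PHONE_SENSOR_MM = (5.6, 4.2)     # roughly 1/2.55"-class smartphone sensor, mm (assumed)

def area(dims):
    return dims[0] * dims[1]

ratio = area(FULL_FRAME_MM) / area(PHONE_SENSOR_MM)
print(f"Full-frame sensor area is roughly {ratio:.0f}x larger")
# At the same exposure settings, the larger sensor collects on the order of
# `ratio` times more photons, which is the "true exposure" the article refers to.
```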
It is true that Apple's success in upgrading its devices' computing power offers some powerful workarounds. In photography, however, dedicated cameras are still king compared to smartphones.
Portrait Mode is More Like a Mask for Smartphone Cameras
One of the new features in modern iPhones is the ability to alter the depth of field after the image is captured. This is quite an upgrade, considering that the first-generation Portrait Mode only offered a single predetermined setting.
The user interface employs an f-stop dial that borrows full-frame aperture values as a reference for the degree of depth of field. Judging by how it functions, it is essentially an exercise in skeuomorphic design.
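As a rough illustration of why that dial can only be a reference, the sketch below converts an assumed physical aperture and crop factor into a full-frame-equivalent f-number for depth of field; none of these numbers are Apple specifications.

```python
# Sketch of the "full-frame equivalent" idea behind the f-stop dial.
# The aperture and crop factor below are assumptions for illustration.
phone_f_number = 1.8     # assumed physical aperture of the wide camera
crop_factor = 7.0        # assumed full-frame diagonal / phone sensor diagonal

# Depth of field roughly tracks the full-frame-equivalent f-number:
equivalent_f_number = phone_f_number * crop_factor
print(f"f/{phone_f_number} on this sensor gives depth of field similar to "
      f"~f/{equivalent_f_number:.0f} on full frame")
# A simulated f/1.4-f/16 dial therefore cannot come from the optics alone;
# the different apertures have to be synthesized in software.
```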
A good number of smartphones these days have fixed apertures and sensors that are small relative to the focal length of the lens. To work around this, they create depth maps in one of two ways: with two cameras (i.e. the iPhone) or with a dual-pixel sensor design (i.e. the Pixel 2). That depth estimate is then combined with a neural network so the process can properly separate foreground and background elements; a rough sketch of the two-camera approach follows.
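Here is a minimal sketch of the two-camera idea, using OpenCV block matching on an assumed pair of already rectified frames; the actual phone pipelines are far more sophisticated and lean heavily on learned segmentation.

```python
# Minimal two-camera depth sketch. "left.png" and "right.png" are placeholder
# names for a rectified stereo pair; this stands in for the dual-camera plus
# neural-network pipeline the phones actually run.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching: disparity between the views is inversely
# proportional to depth, which is all a depth map needs to encode.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # undo fixed-point scale

# Crude foreground mask: anything closer than the median valid depth is "subject".
# Real pipelines refine this with learned segmentation (hair, eyes, glasses).
foreground_mask = (disparity > np.median(disparity[disparity > 0])).astype(np.uint8) * 255
cv2.imwrite("foreground_mask.png", foreground_mask)
```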
The Light Field Makes the Difference
While the iPhone XS can now alter the depth of field in post, it is not really a light field or plenoptic camera. So, unlike the latter, it still cannot capture both light intensity and direction.
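For contrast, a true light field makes refocusing after the fact almost trivial: each angular view is shifted in proportion to its offset and the views are summed. The toy sketch below assumes a small synthetic 4D light field and is only meant to show what recording direction as well as intensity buys you.

```python
# Toy shift-and-sum refocusing over a 4D light field L[u, v, y, x].
# The light field here is random stand-in data, purely for illustration.
import numpy as np

def refocus(light_field: np.ndarray, alpha: float) -> np.ndarray:
    """Shift each angular view by its (u, v) offset times alpha, then average."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # alpha picks the synthetic focal plane; views farther from the
            # center of the aperture are shifted more.
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(9, 9, 64, 64)       # 9x9 angular views of a 64x64 scene
near_focus = refocus(lf, alpha=1.5)     # refocus on a nearer plane
far_focus = refocus(lf, alpha=-1.5)     # refocus on a farther plane
```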
So, how does the iPhone conjure a workaround? It is fairly simple: the answer lies in its fancy processor. Put simply, the processor builds the after-the-fact depth of field by separating foreground and background elements. What many users do not realize is that the effect is essentially a masking trick.
This is also why the device is able to contextually handle facial features such as hair and eyes with ease. The aperture dial, combined with the processor, lets it increase the blur radius of everything outside the mask in real time, roughly as shown in the sketch below.
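A bare-bones version of that masking trick might look like this: everything outside an assumed subject mask is blurred, with the radius tied to the position of the simulated f-stop dial. File names and the f-stop-to-radius mapping are placeholders, not Apple's implementation.

```python
# Synthetic depth of field via masking: keep the subject sharp and blur the
# rest, with the blur radius driven by a simulated f-number.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")                                   # placeholder input
mask = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE)    # 255 = subject

def fake_aperture(img, subject_mask, f_number: float) -> np.ndarray:
    # Wider simulated apertures (smaller f-numbers) get a bigger blur radius.
    radius = max(1, int(round(32.0 / f_number)))                  # assumed mapping
    kernel = 2 * radius + 1                                       # Gaussian kernel must be odd
    blurred = cv2.GaussianBlur(img, (kernel, kernel), 0)
    alpha = (subject_mask.astype(np.float32) / 255.0)[..., None]
    # Composite: sharp pixels inside the mask, blurred pixels outside it.
    return (alpha * img + (1.0 - alpha) * blurred).astype(np.uint8)

cv2.imwrite("portrait_f2.png", fake_aperture(image, mask, f_number=2.0))
cv2.imwrite("portrait_f8.png", fake_aperture(image, mask, f_number=8.0))
```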
This is where the light field becomes a topic of concern. A few years from now, Apple and other tech companies may succeed in improving a device's computing power enough to generate a more camera-like result.
Since the iPhone does not use an interchangeable-lens system (and it is hard to imagine the company building one), relying instead on flexible computing hardware paired with light-capturing optics, users are likely to run into errors that they would never experience in purely optical captures.