I have always thought of AI as an entity with two eyes: a camera and the Internet. With that pair comes a fresh visual perspective, alongside a plethora of applications.
The talk of AI has always been intriguing. It has proved itself a disruptive tool across diverse industries and has made its way into the lives of everyday consumers. Artificial Intelligence, they say, forces people to break repetitive habits and to grow both in skill and in talent. It has been around for a while and has become ubiquitous, yet its relevance as a catalyst for innovation has yet to fade. Although not apparent to the average consumer, AI has actually made its way into our everyday devices, especially those equipped with cameras.
Born of the IT industry, AI is a branch of computer science that attempts to let a device function, think, and learn like a human. Rather than simply executing automated tasks, AI goes beyond, integrating machine learning into the process. Like a human, it adapts and learns by analyzing different patterns. Enhancement is the key promise of AI: it helps individuals, in this case photographers and videographers, focus on what matters most, composition, rather than waste time tweaking camera settings or spend hours in post-production software.
AI and phone cameras: the future is now!
Before delving into photography or videography, let's glance at one of the most practical pairings of AI and cameras: phone security. Back in 2017, Apple released the iPhone X and, with it, a new biometric security feature, Face ID. Three main components make unlocking your phone both seamless and secure: the Dot Projector, which flashes 30,000 invisible dots onto a face to create a unique visual map; the Flood Illuminator, which casts infrared light onto a face to aid facial recognition in the dark; and the Infrared Camera, which reads the dot pattern, captures an infrared image, and sends the data to Apple's Bionic chip. Technicalities aside, Face ID is an amazing feat. A natural security measure that uses your face as a password is certainly a game changer, is it not? Combined with AI, Face ID becomes attention- and history-aware: it adapts over time and will work even if you put glasses on, grow a beard, or naturally age. And although face unlocking has been around for a while, nothing with this level of safety and security had been brought to a mass-produced, user-friendly product. Because it meets the standards of high-security biometrics, many institutions, including banks, have integrated it into their respective mobile applications.
Now let’s talk Google. For a company known for collating and storing data for easy accessibility, it’s interesting what it has done in combining data and cameras. The application Google Lens, for instance, lets users simply point their phone’s camera at something and receive relevant results and information in an instant. And it doesn’t matter where you point it. Point it at a restaurant or storefront and instantly be greeted with the menu or reviews. Point it at an item for sale and find out how much it costs. The applications are practically infinite, and the fact that you can use your camera as a smart “third eye” is remarkable in itself.
Although Apple, with its iPhone XR, as well as many other Chinese brands, has mimicked the bokeh effect with a single lens, it was Google with its Pixel phone that began the trend. Simply put, sophisticated AI is a good enough substitute for hardware that is absent, in this case an expensive DSLR/mirrorless lens or a dual-lens camera phone. Google has done it again with ultra-smart HDR. Ever tried taking a phone picture of a subject against a bright landscape? Tricky, isn’t it? Either you expose for the background and get a dark subject, or you expose for the subject and get blown-out highlights. With smart HDR, every part of the photo is well exposed. Taking it a step further, Google has used the same concept to enable users to take photos in extremely dark conditions. Computational photography made possible by AI in phones is, without a doubt, an innovation leaps and bounds beyond what was initially possible, sometimes even running circles around far more expensive cameras; the level of AI integrated into camera phones has yet to be matched by what is natively offered in higher-tier devices.
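For the curious, the core idea behind HDR blending can be sketched in a few lines. This is a hypothetical toy example, not Google's actual pipeline: two exposures of the same one-row scene are merged, with each pixel weighted by how close it sits to well-exposed mid-gray.

```python
# Toy sketch of exposure fusion, the idea behind smartphone HDR.
# Hypothetical example only; real pipelines are far more elaborate.
under = [10, 20, 200, 240]   # exposed for the bright background (subject dark)
over  = [90, 120, 255, 255]  # exposed for the subject (highlights clipped)

def weight(p, mid=127.5):
    """Pixels near mid-gray are well exposed; weight them higher (0..1)."""
    return 1.0 - abs(p - mid) / mid

def fuse(a, b):
    """Blend two exposures pixel by pixel using exposure weights."""
    out = []
    for pa, pb in zip(a, b):
        wa, wb = weight(pa), weight(pb)
        total = wa + wb
        if total == 0:  # both pixels badly exposed; fall back to a plain mean
            out.append(round((pa + pb) / 2))
        else:
            out.append(round((pa * wa + pb * wb) / total))
    return out

print(fuse(under, over))  # → [82, 106, 200, 240]
```

Note how the clipped highlight (255) gets zero weight, so the fused pixel comes entirely from the better-exposed frame.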
Now let’s talk about professional cameras. There’s a product out there called Arsenal, and using it almost feels like cheating. Why? Because the Kickstarter product unlocks a camera’s full potential by optimizing its settings with AI. Essentially giving the camera a smarter brain, the device analyzes subject motion, hyperfocal distance, diffraction, and many other factors, automatically adjusting the settings so the user can take the perfect photo without any hassle. For landscape photographers, there’s focus stacking to get the entire scene in focus. Smart HDR is also present in Arsenal, letting users automate photo-stacking techniques to capture images with balanced shadows, midtones, and highlights. Another feature landscape photographers will love is the ability to capture creamy-looking long exposures guided by averaged pixel values.
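That pixel-averaging trick is simple enough to illustrate. The sketch below is a hypothetical toy, not Arsenal's proprietary algorithm: several short frames of the same pixel row are averaged, so static detail stays sharp while moving elements (water, clouds) blur into a smooth value.

```python
# Toy sketch of simulating a long exposure by averaging short frames.
# Hypothetical example; the static rock pixel stays constant at 50,
# while the moving-water pixels vary from frame to frame.
frames = [
    [50, 180, 90],
    [50, 140, 130],
    [50, 160, 110],
    [50, 120, 150],
]

def average_frames(frames):
    """Average each pixel position across all frames: static detail is
    preserved, motion smooths into a 'creamy' long-exposure look."""
    n = len(frames)
    return [round(sum(col) / n) for col in zip(*frames)]

print(average_frames(frames))  # → [50, 150, 120]
```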
If you’ve ever held a DSLR/mirrorless camera fitted with a zoom lens, you’ll know that this kind of zooming is better because it physically adjusts the lens’s glass to move in or out of a subject, a.k.a. optical zoom. If you zoom on a single-lens camera phone, you’re zooming digitally, which means you’re essentially enlarging and cropping the image, and therefore losing quality. Now, if you’ve held a relatively recent Sony camera, you may have come across Sony’s Clear Image Zoom, a feature similar to digital zoom, only smarter. It allows you to zoom digitally up to 2x without any loss of quality. How? The technology compares the patterns of adjacent pixels and creates new pixels to match those patterns seamlessly. Think of it this way: you’re assembling a puzzle and some pieces are missing, yet through AI the whole picture is still completed. Why does this matter? If you’re carrying a single 50 mm prime lens, you can effectively turn it into a 100 mm lens without any hassle, and without losing quality.
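The "creating new pixels" idea can be made concrete with the simplest possible version: linear interpolation. This toy stands in for the far smarter pattern-matching that Clear Image Zoom performs (Sony's actual algorithm is proprietary); it doubles a row of pixels by inventing each new pixel from its two neighbors.

```python
# Toy sketch of 2x digital zoom by synthesizing in-between pixels.
# Hypothetical stand-in for Sony-style pattern-aware upscaling.
def zoom_2x(row):
    """Double a row of pixels, inventing each new pixel as the
    average of its two neighbors (linear interpolation)."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append(round((a + b) / 2))  # synthesized in-between pixel
    out.append(row[-1])
    return out

print(zoom_2x([10, 30, 50]))  # → [10, 20, 30, 40, 50]
```

A plain crop-and-enlarge just repeats pixels and looks blocky; interpolation, and especially pattern-aware interpolation, fills the gaps so the enlarged image stays smooth.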
Google Glass didn’t take off because the technology was raw, and although its appeal was intriguing, many relegated it to novelty use. Yet Epson, with a different approach, continues to make a mark. By targeting businesses rather than consumers, its technology has allowed professionals to work more productively. Almost like the robotic vision featured in sci-fi films such as Terminator, RoboCop, or Transformers, Epson’s Moverio glasses fuse AI and cameras to create an augmented world overlaid with easily viewable data. That kind of enhanced vision changes things, especially for professionals looking to accelerate their workflow.
Also published in GADGETS MAGAZINE February 2019 Issue.
Words by Gerry Gaviola