Tuesday, October 4, 2022

Why are iPhone portraits so beautiful since the iPhone XS?


In the beginning, the smartphone camera was only a supplementary feature, not a serious camera. But as the technology evolved by leaps, the camera became so capable that it is now one of each brand's main selling points, used as the spearhead of its products. Today, every brand ships a portrait mode as a standard camera feature.

This article explains the three techniques behind today's showdown: how smartphones shoot a sharp face against a blurred background. If you're ready, let's go.

1. Simulating human depth perception with a dual lens

Every camera on the planet tries to simulate the three-dimensional way humans see. Briefly: when we look at a nearby object, it appears large and sharp, while objects far from our eyes blur into the background. Compare that with a camera image and the effect is clear: a portrait keeps the face sharp while the background goes soft.

However, no camera sensor can fully match human vision. A smartphone's sensor in particular is relatively small and paired with a short, wide-angle lens. The result is an image in which nearly everything is in focus, that is, a deep depth of field.
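The effect of sensor size on depth of field can be sketched with the standard thin-lens depth-of-field formulas. The focal lengths and circle-of-confusion values below are illustrative assumptions, not figures from the article:

```python
def depth_of_field(focal_mm, f_number, subject_m, coc_mm):
    """Approximate near/far limits of acceptable sharpness
    using the standard hyperfocal-distance formulas."""
    f = focal_mm / 1000.0            # focal length in metres
    c = coc_mm / 1000.0              # circle of confusion in metres
    hyperfocal = f * f / (f_number * c) + f
    s = subject_m
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near, far

# Subject 2 m away, both lenses at f/1.8:
# full-frame 50 mm lens (coc ~0.030 mm) vs. phone 4 mm lens (coc ~0.002 mm).
near_ff, far_ff = depth_of_field(50, 1.8, 2.0, 0.030)
near_ph, far_ph = depth_of_field(4, 1.8, 2.0, 0.002)
# The phone's zone of sharpness is metres wide; the full-frame
# camera's is only centimetres, which is what blurs the background.
```

Running this shows the phone keeps several metres of the scene in focus at the same f-number, which is why its background does not blur optically.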

So many brands (Apple, Huawei, Vivo, Oppo, etc.) make smartphones shoot portraits that can look even better than a pro camera by simulating human depth perception with a dual lens (today some phones carry as many as four). The sensors are the same size, but the two cameras work differently because their lenses have different focal lengths.

The first camera uses a wide-angle lens with a short focal length to capture the sharp subject in front; the second uses a telephoto lens with a longer focal length to record the scene's depth information: shadows, the background, and so on. Once both cameras have captured their frames, software processes and merges them into a single image.
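The merge step described above can be sketched as follows. This is a minimal, hypothetical pipeline assuming the two cameras have already been turned into one sharp image plus a per-pixel depth map; real implementations use graduated, depth-weighted blurs rather than a hard in-focus band:

```python
import numpy as np

def box_blur(image, r):
    """Naive box blur: average all shifted copies within radius r."""
    h, w = image.shape[:2]
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    acc = np.zeros_like(image, dtype=float)
    n = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += padded[r + dy:r + dy + h, r + dx:r + dx + w]
            n += 1
    return acc / n

def synthetic_bokeh(image, depth, focus_depth, tolerance, blur_radius=2):
    """Keep pixels near the focus depth sharp; blur the rest.
    `image` is HxWx3 float, `depth` is HxW in metres (assumed given)."""
    blurred = box_blur(image, blur_radius)
    in_focus = np.abs(depth - focus_depth) <= tolerance
    mask = in_focus[..., None].astype(image.dtype)
    return mask * image + (1.0 - mask) * blurred
```

The in-focus subject passes through untouched while everything outside the depth band is replaced by its blurred version, producing the portrait look in software.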

But there is a drawback. The background of a portrait has to be reconstructed and blurred in software, and the depth information comes from the telephoto lens; since that lens must also be used to frame the shot, the effect can only blur the portion of the scene where the two lenses' fields of view overlap.

2. TrueDepth Technology (Infrared Sensor)

TrueDepth technology projects more than 30,000 infrared dots onto the subject to measure depth and proportion. The face is scanned in three dimensions, which is how we can use our face to unlock the phone.

But in fact, Apple also applies this technology to portrait shooting. By scanning the face, it recognizes the depth and shape of the face and can separate the subject from the background.
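Separating the subject from the background with such a depth map can be sketched very simply. This is a hypothetical illustration assuming the subject is the nearest surface to the camera, which is usually true for a selfie:

```python
import numpy as np

def foreground_matte(depth, margin=0.3):
    """Binary subject mask from a depth map (HxW, metres), like the
    one a dot-projector system such as TrueDepth could provide.
    Assumes the subject is the nearest surface; `margin` (metres)
    sets how much depth variation the face is allowed to have."""
    nearest = depth.min()
    return (depth <= nearest + margin).astype(np.uint8)
```

The resulting 0/1 matte marks which pixels belong to the subject, and the background pixels can then be blurred as in the dual-lens case.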

But using TrueDepth for portraits has a drawback too. If we shoot in a place with strong sunlight, the quality of the background separation is degraded, because the infrared dots projected from the front of the phone are washed out by sunlight, which contains similar infrared wavelengths.

3. Using Artificial Intelligence with Dual Pixel Autofocus

The Google Pixel 2 does not use a dual lens. Instead, it relies on Dual Pixel Autofocus (already present on most smartphones), a technology that focuses quickly and accurately by splitting the light at each pixel into two halves; the processor compares the two views and uses the difference between them to calculate the correct focus point. This is blended with artificial intelligence that separates the person from the background using a variety of cues, such as skin color and the skeletal lines of the human body.
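The "difference between the two views" is a phase shift along one axis: when the scene is in focus the left- and right-half pixel signals line up, and when it is not, one is displaced relative to the other. A minimal sketch of estimating that shift by brute-force matching (real sensors do this in hardware with sub-pixel precision):

```python
import numpy as np

def phase_shift(left, right, max_shift=8):
    """Estimate the integer disparity between two 1-D pixel signals
    by testing shifts and picking the minimum mean absolute error.
    Returns s ~ (feature position in `left`) - (position in `right`);
    on a dual-pixel sensor this is ~0 when the scene is in focus."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s):len(left) + min(0, s)]
        b = right[max(0, -s):len(right) + min(0, -s)]
        err = np.abs(a - b).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

The sign and size of the shift tell the autofocus which way and how far to move the lens, and per-region shifts double as a rough depth cue for portrait segmentation.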

Apple's newer smartphones, starting with the iPhone XR, render portraits in a way similar to the Google Pixel 2, but use an artificial-intelligence feature called Portrait Effects Matte (PEM) to find the person in the frame in real time. It is trained on both 2D color photos and 3D depth images, and the software uses its output to mask out the person's body, including things on it such as hair and glasses, so they are no longer blurred.
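What makes a soft matte like this better than a hard depth cutoff is that it can take fractional values around fine detail such as hair, so strands blend between sharp and blurred instead of being chopped off. A minimal compositing sketch, assuming the matte itself comes from a trained network:

```python
import numpy as np

def composite_with_matte(sharp, blurred, matte):
    """Alpha-composite a sharp frame over its blurred version using a
    soft person matte (HxW, values in 0..1, assumed network output).
    Fractional matte values around hair keep fine strands partially
    sharp instead of hard-cutting them into the background blur."""
    alpha = np.clip(matte, 0.0, 1.0)[..., None]
    return alpha * sharp + (1.0 - alpha) * blurred
```

A pixel with matte 1.0 stays fully sharp, 0.0 is fully blurred, and anything in between mixes the two proportionally.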

Source: petapixel, blog.halide, AI.googleblog

