UK Passport Photo Checker Shows Bias Against Dark-Skinned Women

According to an investigation by the BBC, photos of women with darker skin are more than twice as likely to be rejected by the United Kingdom’s automated online passport photo checker as photos of light-skinned men.

The United Kingdom offers an online service that lets you submit your own photo for use on a passport, which can theoretically get you your passport more quickly. If you have the means to photograph yourself at home and follow a set of guidelines, you can also avoid paying to have a photo taken. Those guidelines include having a neutral expression, keeping your mouth closed, and looking directly at the camera. If a submitted photo does not meet all of the criteria, it is rejected as being “poor quality.”

According to the BBC, a student named Elaine Owusu found that the automated online portal rejected her image for having an “open mouth,” which is clearly not the case if you look at the photo yourself. Owusu did eventually get the image approved after challenging the verdict, but she had to write a note arguing that her mouth was indeed closed.
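To make that failure mode concrete, here is a minimal, purely illustrative sketch of how a rules-based checker might turn face measurements into a pass/fail verdict. The FaceMetrics fields, the check_photo helper, and every threshold below are assumptions made for the example, not the government’s actual system; the point is that a hard cutoff applied to an upstream face measurement is exactly where biased measurement becomes biased rejection.

```python
# Hypothetical sketch of a rules-based photo check, loosely modelled on the
# published guidelines (neutral expression, closed mouth, facing the camera).
# Every field and threshold here is illustrative, not the real checker.

from dataclasses import dataclass

@dataclass
class FaceMetrics:
    mouth_gap: float         # lip separation, normalised to face height
    yaw_degrees: float       # head rotation left/right
    expression_score: float  # 0.0 = neutral, 1.0 = strongly expressive

def check_photo(metrics: FaceMetrics) -> list[str]:
    """Return the reasons a photo would be flagged; an empty list means it passes."""
    problems = []
    if metrics.mouth_gap > 0.03:              # threshold chosen for illustration only
        problems.append("open mouth")
    if abs(metrics.yaw_degrees) > 10:
        problems.append("not looking directly at the camera")
    if metrics.expression_score > 0.4:
        problems.append("expression is not neutral")
    return problems

# If the upstream landmark model systematically over-estimates mouth_gap for
# darker-skinned faces, perfectly valid photos land on the wrong side of the cutoff.
print(check_photo(FaceMetrics(mouth_gap=0.05, yaw_degrees=2.0, expression_score=0.1)))
# -> ['open mouth']
```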

Though she did win, she wasn’t happy about it. “I shouldn’t have to celebrate overriding a system that wasn’t built for me,” she told the BBC.

To determine whether there was a systemic problem, the BBC fed more than 1,000 photographs of politicians (selected based on the Gender Shades study) through the system to look for patterns. Dark-skinned men were told their image was of poor quality 15% of the time, compared to 9% of the time for light-skinned men. For women it was worse: dark-skinned women’s images were rejected 22% of the time, while light-skinned women were told their images were of poor quality 14% of the time.
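For anyone who wants to run a similar audit on their own data, the core calculation is straightforward: bucket submissions by demographic group and compare rejection rates. The sketch below is a minimal illustration of that idea; the rejection_rates helper and the sample records are assumptions for demonstration, not the BBC’s data or methodology.

```python
# Minimal disparity check: group results by label, compare rejection rates.
from collections import defaultdict

def rejection_rates(results):
    """results: iterable of (group_label, was_rejected) pairs."""
    totals = defaultdict(int)
    rejected = defaultdict(int)
    for group, was_rejected in results:
        totals[group] += 1
        if was_rejected:
            rejected[group] += 1
    return {group: rejected[group] / totals[group] for group in totals}

# Toy records, standing in for the 1,000+ real submissions used by the BBC.
sample = [
    ("dark-skinned women", True), ("dark-skinned women", False),
    ("light-skinned women", False), ("light-skinned women", False),
    ("dark-skinned men", True), ("light-skinned men", False),
]
for group, rate in rejection_rates(sample).items():
    print(f"{group}: {rate:.0%} of submissions rejected")
```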

Computers are only as biased as the information they are given. In 2019, The New York Times published a detailed article explaining the history of racial bias built into the basics of photography, an issue that continues to show itself in newer technologies like the UK’s automated photo checker.

“The accuracy of face detection systems partly depends on the diversity of the data they were trained on,” David Leslie of the Alan Turing Institute wrote in response to the BBC investigation. “The labels we use to classify racial, ethnic and gender groups reflect cultural norms, and could lead to racism and prejudice being built into automated systems.”

When a system like this doesn’t work for everyone, the designer of the software would normally be asked to explain. Unfortunately, the government declined to name the external company that provided the automated checker.

As a result, a solution to the problem uncovered by this investigation – where the system in place fails for a disproportionate number of dark-skinned people – is not immediately apparent.

(Via BBC)

This AI Can Transform Regular Footage Into Slow Motion with No Artifacts

Earlier this year, researchers from two universities and Google published a new AI-powered technique they developed called “Depth-Aware Video Frame Interpolation” or DAIN, and it’s simply mind-blowing. The tech can interpolate a 30fps video all the way to 120fps or even 480fps with almost no visible artifacts.
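To put those numbers in perspective, multiplying the frame rate by a factor of N means the model has to synthesize N - 1 brand-new frames between every pair of original frames, at evenly spaced intermediate time steps. The snippet below is just illustrative arithmetic and assumes the target rate is an integer multiple of the source rate.

```python
def intermediate_timesteps(src_fps: int, dst_fps: int) -> list[float]:
    """Time positions (between 0 and 1) of the frames that must be synthesized."""
    factor = dst_fps // src_fps  # assumes an integer multiple, as in the article
    return [k / factor for k in range(1, factor)]

print(intermediate_timesteps(30, 120))       # [0.25, 0.5, 0.75] -> 3 new frames per pair
print(len(intermediate_timesteps(30, 480)))  # 15 new frames per pair
```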

The team behind this breakthrough was led by Wenbo Bao of Shanghai Jiao Tong University, and included computer scientists from the University of California, Merced, and Google. Together, they used deep convolutional neural networks to significantly improve the quality and capability of video frame interpolation, to the point where you’d be hard-pressed to spot any artifacts.

You can see the technology at work in the stop motion video up top, which has been up-framed from 15fps to 60fps without any visible artifacts whatsoever.

For a more extreme example, check out the video below. The original footage (left) is just 30fps. Using DAIN, it’s been transformed to 120fps (middle) and even 480fps (right), turning normal footage into super-slow motion using nothing more than AI to generate the intervening frames from scratch.

The method works by using a “depth-aware flow projection layer,” which estimates a depth map alongside the optical flow and considers both as it decides how to create the intervening frames. This lets the algorithm predict the motion of objects more accurately based on where they sit in the scene, and handle occlusions better as well.
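To give a rough sense of the idea, here is a toy NumPy sketch of depth-weighted flow projection, not the paper’s actual CUDA layer: when the flows of several source pixels land on the same pixel of the intermediate frame, their contributions are averaged with weights proportional to inverse depth, so nearer objects win at occlusion boundaries. The function name, the nearest-pixel rounding, and the plain Python loop are all simplifications assumed for illustration.

```python
import numpy as np

def project_flow(flow_0_to_1, depth0, t=0.5):
    """Toy depth-weighted projection of frame-0 flow to intermediate time t.

    flow_0_to_1: (H, W, 2) forward optical flow in pixels.
    depth0:      (H, W) estimated depth of frame 0 (smaller = closer).
    Returns an (H, W, 2) approximate flow from time t back to frame 0.
    """
    H, W, _ = flow_0_to_1.shape
    acc_flow = np.zeros((H, W, 2))
    acc_weight = np.zeros((H, W))
    weights = 1.0 / (depth0 + 1e-6)  # closer pixels get larger weights

    ys, xs = np.mgrid[0:H, 0:W]
    # Where does each source pixel land at time t?
    tx = np.round(xs + t * flow_0_to_1[..., 0]).astype(int)
    ty = np.round(ys + t * flow_0_to_1[..., 1]).astype(int)
    valid = (tx >= 0) & (tx < W) & (ty >= 0) & (ty < H)

    for y, x in zip(ys[valid], xs[valid]):
        j, i = ty[y, x], tx[y, x]
        acc_flow[j, i] += weights[y, x] * (-t) * flow_0_to_1[y, x]
        acc_weight[j, i] += weights[y, x]

    covered = acc_weight > 0
    acc_flow[covered] /= acc_weight[covered, None]
    return acc_flow

# Tiny demo: a 4x4 frame where everything moves 2px to the right.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 2.0
depth = np.ones((4, 4))
print(project_flow(flow, depth, t=0.5)[0, 3])  # points back ~1px toward frame 0
```

In the full method, flows projected this way then feed a warping of both input frames and a synthesis network that produces the final intermediate image; the sketch above covers only the projection step.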

The result, as the researchers put it, “performs favorably against state-of-the-art frame interpolation methods on a wide variety of datasets.”

Here’s one more sample video posted by Bao himself, in which DAIN is compared to other state-of-the-art frame interpolation methods when converting a video from 12fps to 24fps to 48fps. The clip includes a combination of camera motion, a fast-moving object, and slower-moving objects as well:

Sure, if you watch closely enough you may see the occasional artifact or spot an imperfection in the interpolation, but they’re shockingly rare, even when the frame rate is being tripled, quadrupled, or more.

Admittedly, this paper was published at the very beginning of 2020, and we’ve actually already shared samples that took advantage of this technique to add frames to classic footage—see here, here, and here. But we’ve never dived into the technique itself or shown the results that are possible when you really crank this up to create super-slow motion video.

Check out the samples above to learn more about exactly how this method works and see the results for yourself. And if you want to dive even deeper, you can read the full research paper or download the latest DAIN build and try it out from this link.

(via Reddit)