If 2016 was about disinformation, 2020 is the year of the deepfake.
- Microsoft has released a new software tool that detects deepfakes ahead of the 2020 Presidential election.
- “Microsoft Video Authenticator” provides confidence scores that show how likely it is a third party has manipulated a given piece of media.
- Microsoft is partnering with a coalition of news organizations to pilot the new tech.
With two months to go until the 2020 U.S. Presidential election, Microsoft has introduced a new tool to help spot deepfakes, or media that has been manipulated with artificial intelligence. These kinds of doctored photos, videos, and audio recordings fuel disinformation campaigns, posing a real problem for politicians, regulators, and the public.
In a blog post this week, Microsoft said “in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”
Microsoft Video Authenticator searches images and videos for tiny discrepancies that only computers can “see.” In visual deepfakes, for example, the system detects “blending boundaries”: subtle fading or grayscale elements that are usually invisible to the human eye.
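Microsoft hasn’t published the Authenticator’s internals, but the general idea of hunting for blending artifacts is easy to sketch. The snippet below is a loose illustration, not Microsoft’s method: it uses OpenCV’s Laplacian filter to flag abrupt grayscale transitions of the kind a crude face swap can leave behind, and the file name `frame.png` is a placeholder.

```python
import cv2
import numpy as np

# Placeholder input: one video frame saved as an image.
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The Laplacian filter responds strongly to abrupt intensity
# changes, which can expose the "blending boundary" where a
# synthesized face was composited onto a real frame.
edges = cv2.Laplacian(gray, cv2.CV_64F)

# A crude per-frame artifact score: the variance of the filter
# response. Real detectors rely on trained neural networks,
# not a single hand-built filter like this one.
score = float(np.var(edges))
print(f"artifact score: {score:.2f}")
```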
A deepfake is a piece of media engineered to make it look or sound like a person said things they never said, or appeared in places they have never been.
The underlying technology dates to 2014, when Ian Goodfellow, then a Ph.D. student and now a researcher at Apple, invented the generative adversarial network, or GAN. GANs push algorithms beyond the simple task of classifying data and into the arena of creating it, in this case images. A GAN pits two neural networks against each other: a generator fabricates images, a discriminator tries to spot the fakes, and each improves by trying to outwit the other. Using as little as one image, a well-trained GAN can create a video clip of, say, Richard Nixon.
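To make the generator-versus-discriminator idea concrete, here’s a minimal GAN in PyTorch, trained to mimic a simple one-dimensional bell curve rather than human faces. It’s a toy sketch of the technique Goodfellow introduced; the layer sizes and training settings are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (0 to 1).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from a bell curve centered at 4.
    real = torch.randn(64, 1) + 4.0
    fake = G(torch.randn(64, 8))

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near 4.
print(f"mean of generated samples: {G(torch.randn(1000, 8)).mean().item():.2f}")
```

The same tug-of-war, scaled up to convolutional networks and millions of face images, is what lets a deepfake generator produce footage that neither the discriminator nor a human viewer can easily distinguish from the real thing.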
It’s easy to see how bad actors can use manipulated media, deepfake or otherwise, to sway political opinion. Last year, for example, a viral video of Nancy Pelosi, the Democratic Speaker of the House, circulated around social media. In it, she appeared to slur her way through a nonsensical, drunken soliloquy. President Donald Trump shared that video on Twitter, but it wasn’t real: the footage had been slowed and doctored to make her speech sound impaired.
Microsoft Video Authenticator analyzes these kinds of images and videos and assigns each a confidence score: a percentage estimate of how likely it is that someone has manipulated that piece of media.
“In the case of a video, it can provide this percentage in real-time on each frame as the video plays,” Microsoft said in its blog post.
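In practice, “a percentage in real-time on each frame” amounts to running a detector over the video frame by frame. The loop below shows the shape of that process using OpenCV; `score_frame` is a hypothetical stand-in for a trained model, since Microsoft hasn’t released the Authenticator itself, and `clip.mp4` is a placeholder file name.

```python
import cv2

def score_frame(frame) -> float:
    """Hypothetical stand-in for a trained deepfake detector.
    A real system would run a neural network here; this just
    returns a dummy constant so the loop runs end to end."""
    return 0.5

cap = cv2.VideoCapture("clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # One manipulation-confidence score per frame, as the video plays.
    confidence = score_frame(frame)
    print(f"frame {frame_idx}: {confidence:.0%} chance of manipulation")
    frame_idx += 1
cap.release()
```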
Still, despite the technical prowess, this is only a Band-Aid solution. Inevitably, as these detection tools roll out, bad actors will come up with new ways to build deepfakes that fly beneath the radar.