Did you fall for this fake video of Joe Biden at the G7 in Italy?

2024-06-18 20:23:36+00:00


Before there were deepfakes, there were cheapfakes; now there are both. Last week, social media was suddenly awash in videos, pushed by unscrupulous Republican accounts, edited to play up stereotypes about President Joe Biden's age. Media outlets promoted the clips, too: the New York Post recently claimed to have footage showing Biden wandering off in a daze during the G7 summit in Italy. In reality, Biden was congratulating a skydiver who had just landed but was not visible in the frame. A week later, the Post published a similarly deceptive edit claiming to show Biden frozen onstage at a fundraiser. The full video shows this did not happen.

Fakes of all kinds are being used to influence elections, both here in the U.S. and abroad. But while some rules and regulations are being developed to fight deepfakes, we may be less prepared to mitigate the risks of their easier-to-create and harder-to-detect cousins.

Deepfakes are images and videos created or modified almost entirely by AI-powered technologies. Cheapfakes are real images or videos that are simply misattributed or deceptively edited. Their power is in their simplicity. As rumors swirled that former President Donald Trump was falling asleep at his criminal trial in New York City, photos circulated purportedly showing him asleep in the courtroom. Many of those images appeared to be repurposed from a different setting. Although secondhand reports confirm that Trump looked sleepy at his trial, the photo was still deceptive, and it highlights why cheapfakes can pose a larger challenge than deepfakes.

I have spent the past 25 years as an academic researcher developing techniques to detect all forms of deceptive content, from Photoshop manipulation to AI generation. I subjected the Trump courtroom photo to several forensic techniques to determine whether it was authentic. Each technique confidently classified the image as real.
I only realized it was likely a cheapfake after an observant member of my team noticed that Trump's chair didn't match the shape or color of contemporaneous photos we knew to have come from the courtroom.

The first challenge of cheapfakes is that they are harder to spot as obviously deceptive: there are no misshapen hands or gravity-defying background objects, the telltale signs often found in deepfakes. The second challenge is that cheapfakes can be easier to create: in the case of the G7 photos and videos of Biden, a simple crop can make it appear as if Biden is staring off into a void. And the third challenge is that social media platforms have what can, at best, be described as incoherent policies when it comes to these types of deceptive posts.

After a cheapfake video circulated on Facebook claiming to show Biden inappropriately touching his adult granddaughter, Facebook refused to take the video down, claiming it didn't violate the company's "manipulated media" policy, which placed limits on deepfakes but not cheapfakes. In February of this year, Meta's oversight board stated that this policy "is lacking in persuasive justification, is incoherent and confusing to users, and fails to clearly specify the harms it is seeking to prevent," and suggested that Meta update its policies to be consistent regardless of how deceptive content was created. Meta said "it will respond publicly to their recommendations within 60 days in accordance with the bylaws," but Mark Zuckerberg's company is under no obligation to follow the board's recommendations.
How do we move forward when lies and distortions of all forms are so easy to create, when some media outlets are less than scrupulous, and when social media platforms continue to turn a blind eye to the harms caused by their services?

To begin, we are going to need more coherent policies from social media companies. New technologies can also play a role and help readers sort truth from lies. Earlier this year, BBC News announced a new "content credentials" feature that lets readers know the source of an image or video, how it was authenticated and what, if any, modifications the content has undergone. These credentials are embedded into the content, regardless of where it is shared online.

Coherent policies and content credentials will not, of course, eliminate deception, but they will help. Policies only work if they are enforced, and no technology will be able to protect us from a photographer who frames a shot to exclude important context (a trick as old as photography itself). The internet delivered on its promise to democratize access to information, but it did so without discriminating between truthful information and deceptive information. Unfortunately, that means we will have to remain ever more vigilant as we consume information online.

Jordan Peele, as a (transparently) deepfaked President Obama, may have said it best all the way back in 2018: "How we move forward in the age of information is gonna be the difference between whether we survive or whether we become some kind of f---ed up dystopia." The jury's still out.