Deepfake videos of Biden in drag promoting Bud Light, Trump in ‘Breaking Bad’ go viral

Recent deepfake videos of President Joe Biden and former President Donald Trump suggest that the 2024 presidential election will be unlike any election before it, according to those familiar with the technology.

One particularly fascinating deepfake video of Biden that emerged this week shows what appears to be a woman (or perhaps a transgender woman) with the president’s face and an effeminate version of his voice promoting Bud Light.

From the looks of it, the video creator meant to portray Biden as Dylan Mulvaney, the infamous “transgender woman” known for promoting Bud Light.

“Experts” who spoke with the Daily Mail said that while it’s pretty easy to spot these deepfakes at the moment, doing so may become virtually impossible in the near future.

“It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deepfake[s],” Cayce Myers, a professor at Virginia Tech’s School of Communication, said.

“Spotting this disinformation is going to require users to have more media literacy and savvy in examining the truth of any claim. The cost barrier for generative AI is also so low that now almost anyone with a computer and internet has access to AI,” he added.

Myers further emphasized the role that everybody — including tech companies and everyday citizens — will have to play to prevent deepfakes from disrupting the 2024 presidential election, not to mention future elections.

“Examining sources, understanding warning signs of disinformation, and being diligent in what we share online is one personal way to combat the spread of disinformation. However, that is not going to be enough,” he said.

“Companies that produce AI content and social media companies where disinformation is spread will need to implement some level of guardrails to prevent the widespread disinformation from being spread,” he added.

Otherwise, there could be problems — much like what happened this week, in fact.

“A falsified photograph of an explosion near the Pentagon spread widely on social media Monday morning, briefly sending US stocks lower in possibly the first instance of an AI-generated image moving the market,” according to Bloomberg.

“It soon spread on Twitter accounts that reach millions of followers, including the Russian state-controlled news network RT and the financial news site ZeroHedge, a participant in the social-media company’s new Twitter Blue verification system.”

“We’re not quite to the stage where we are seeing deepfakes weaponized, but that moment is coming,” Robert Chesney, a University of Texas law professor who has researched the topic, told AFP back in 2019.

A year earlier, writing for the Council on Foreign Relations alongside Danielle K. Citron of the University of Maryland Francis King Carey School of Law, Chesney had warned specifically about the effect deepfake videos could have on elections.

“A well-timed and thoughtfully scripted deepfake or series of deepfakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society,” he’d written.

“The opportunities for the sabotage of rivals are legion—for example, sinking a trade deal by slipping to a foreign leader a deepfake purporting to reveal the insulting true beliefs or intentions of U.S. officials,” he’d added.

In fairness, software applications like Adobe Photoshop have allowed people to create doctored images for years. But something has changed in recent times thanks to the arrival of powerful, widely accessible artificial intelligence (AI).

“Photoshop allows for fake images, but AI can create altered videos that are very compelling. Given that disinformation is now a widespread source of content online, this type of fake news content can reach a much wider audience, especially if the content goes viral,” Myers explained.

The one thing that could theoretically help is government regulation, but no such regulations appear to be on the way quite yet. Indeed, just this week, former Google CEO Eric Schmidt expressed doubt that the U.S. government will launch a new regulatory agency to handle AI anytime soon.

“The issue is that lawmakers do not want to create a new law regulating AI before we know where the technology is going,” he reportedly said.

Below is another deepfake. This one shows Trump as a character from the popular TV series “Better Call Saul.”

To avoid getting duped by deepfakes, keep an eye out for unnatural eye movements and facial expressions, awkward body movements, a lack of emotion, unnatural colors, and fake-looking teeth and hair. Also, use your common sense …
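For readers who want to go beyond eyeballing a clip, here is a minimal, hypothetical Python sketch that automates just one of the cues above: unnaturally infrequent blinking. The file name and threshold are made up, and this is nowhere near a real deepfake detector; it simply uses OpenCV’s stock Haar cascades to estimate how often a detected face appears with no visible open eyes.

```python
# Hypothetical, illustrative sketch only -- a rough blink-frequency heuristic,
# not a real deepfake detector. Requires opencv-python.
import cv2

def estimate_no_eye_ratio(video_path: str, max_face_frames: int = 300) -> float:
    """Return the fraction of detected-face frames in which no open eyes were found."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(video_path)
    face_frames = no_eye_frames = 0

    while face_frames < max_face_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            face_frames += 1
            # Look for eyes only inside the detected face region.
            roi = gray[y:y + h, x:x + w]
            if len(eye_cascade.detectMultiScale(roi)) == 0:
                no_eye_frames += 1
    cap.release()
    return no_eye_frames / face_frames if face_frames else 0.0

# Usage (hypothetical file name): a ratio near zero over a long clip means the
# subject almost never appears with closed eyes, which can be one weak hint that
# blinking is unnaturally rare -- a signal, not proof, of manipulation.
# print(estimate_no_eye_ratio("suspect_clip.mp4"))
```

Treat any such check as one weak signal among many; the visual cues listed above, the source of the clip, and plain common sense still matter more.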

Vivek Saxena
