
In Too Deep

“You are successful, and I am successful, and I’m wondering: Are you happy?” said Anthony Bourdain, the American chef, author, and travel documentarian, in the final moments of his posthumous documentary, “Roadrunner: A Film About Anthony Bourdain.” Those words ended the film on a bittersweet, seemingly perfect note.

However, it is not actually Bourdain speaking. 

Without the knowledge of his family, and initially of the world, filmmaker Morgan Neville used artificial intelligence (AI), in what is known as “deepfake” technology, to digitally recreate Bourdain’s voice and have it read three quotes from an email Bourdain had sent to a friend.

The use of deepfake technology, named for its creation of fake video and audio through deep learning, was meant to be a creative and heartfelt way to end the documentary. Instead, it ignited an intense ethical controversy: Neville suggested he had spoken with people close to Bourdain, including his ex-wife, Ottavia Busia-Bourdain, to make sure they were comfortable with the use of the technology, a claim Busia-Bourdain took to Twitter to dispute.

“I certainly was NOT the one who said Tony would have been cool with that,” Busia-Bourdain wrote in a tweet.

Fans of Bourdain took to Twitter to express their disdain for Neville’s choice, writing “Well, this is ghoulish”; “This is awful”; “WTF?!”; and “this seems like the opposite of what he would want. this is so upsetting and gross. let him be.” Film critic Sean Burns, who reviewed the project negatively, tweeted, “I feel like this tells you all you need to know about the ethics of the people behind this project.”

The biggest issue here may not even be that the technology was used at all; rather, it stems from Neville’s decision not to disclose its use to viewers, which made the lines seem like something Bourdain had actually said aloud.

This criticism, and other examples like it, raises two questions: Where do creatives draw the line with AI deepfake technology, and what ethical measures are necessary to use it without causing harm?

When thinking of deepfake technology, the first thing that might come to mind is the social media filters that let one superimpose someone else’s face on one’s own. Those are a step in the right direction, but the technology in question is far more advanced, sometimes convincing viewers that a fake video is the real thing. More often than not, people use it to lampoon high-profile figures, to create pornography, or to weaponize it for political attacks or revenge.

John Bowditch, associate professor and director of the Game Research and Immersive Design (GRID) lab at Ohio University, first learned about the research behind creating deepfakes in Los Angeles at SIGGRAPH 2017, an annual conference focused on computer graphics and interactive technologies.

“The researchers demonstrated former President Obama delivering a speech that was artificially created,” said Bowditch. “It was brilliantly executed and believable. It wasn’t difficult to see the possible uses of such technology, particularly for nefarious purposes. I initially worried how easily it might be exploited for disinformation campaigns, especially in politics.”

Bowditch explained that anyone’s likeness can end up in a deepfake, especially with public social media profiles and work-related websites right at our fingertips. Enforcing copyright or consent violations over someone’s likeness is difficult because anonymous creators can build deepfakes from publicly available content.

Though celebrities’ likenesses are commonly used without permission, celebrities are not the only targets. Non-celebrities have also fallen victim to the technology, as open-source software packages make it easy for people with limited-to-intermediate technical experience to experiment with deepfakes. There are even smartphone apps that let people upload a photo and convert it into a crude, lip-synched deepfake. Apps like these will not necessarily trick anyone into thinking the result is real, but they may be just the beginning, as such software grows more sophisticated and more widely available to the general public.

Bowditch knew some tech companies were working to find beneficial uses of deepfaking, including the American multinational tech company NVIDIA Corporation. After most workplaces moved to a virtual format in response to the coronavirus pandemic, NVIDIA began working on ways to ease the heavy bandwidth demands of video conferencing. One bandwidth-conserving innovation takes a few photographs of each conference participant and recreates synchronized moving images from only their live audio, preserving visual quality while transmitting far less data.
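The bandwidth math is what makes this approach attractive. Below is a minimal, hypothetical sketch of the idea as described: a reference photo is sent once, and thereafter only a small per-frame feature stream drives a generative model on the receiver’s side. The names (extract_features, FaceRenderer) and the numbers are illustrative assumptions, not NVIDIA’s actual API.

```python
import numpy as np

FRAME_SHAPE = (720, 1280, 3)   # a raw 720p RGB frame
FEATURES_PER_FRAME = 128       # small vector assumed to drive reconstruction

def extract_features(audio_chunk: np.ndarray) -> np.ndarray:
    """Stand-in for a learned model mapping live audio to animation features."""
    return np.zeros(FEATURES_PER_FRAME, dtype=np.float32)

class FaceRenderer:
    """Stand-in for a receiver-side generative model."""
    def __init__(self, reference_photo: np.ndarray):
        self.reference = reference_photo  # sent once, when the call starts

    def render(self, features: np.ndarray) -> np.ndarray:
        # A real model would synthesize the participant's face here.
        return np.zeros(FRAME_SHAPE, dtype=np.uint8)

# Per-frame payload, uncompressed, for intuition only:
raw_bytes = int(np.prod(FRAME_SHAPE))     # ~2.76 MB per raw frame
feature_bytes = FEATURES_PER_FRAME * 4    # 512 bytes of float32 features
print(f"raw: {raw_bytes:,} B/frame, features: {feature_bytes:,} B/frame, "
      f"roughly {raw_bytes // feature_bytes:,}x less data")
```

Even before codec compression, replacing raw frames with small feature vectors cuts the per-frame payload by several thousandfold; the trade-off is that the “video” viewers see is, in effect, synthesized.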

On the audio side, Josh Antonuccio, associate professor and director of the School of Media Arts and Studies at Ohio University, found there were some benefits in using an algorithm to recreate voices.

“It’s pretty wild, but they created an interesting market for where technology that can manipulate other people’s voices can also be used to help people that have lost their own,” said Antonuccio.

However, Bowditch and Antonuccio believe, and research suggests, that most uses of this technology are not so benign. That is exactly why Bowditch expects the technology to do more harm than good.

“To be honest, I really struggle seeing many beneficial uses of this technology outside of entertainment. It is so easy to abuse,” said Bowditch.

Antonuccio cited political warfare as one of the trickiest arenas where deepfakes come into play. He said he saw the most deepfake activity during the 2020 election cycle between Donald Trump and Joe Biden, and he emphasized how dangerous that type of content can be in an election already driven by voters’ emotional stakes.

“It’s in the consciousness of the public, so it becomes very hard to address it because video and audio are such a powerful medium, especially when it’s something that’s very politically charged,” said Antonuccio.

On the other side of the technology’s negatives, Bowditch cited pornography as one of its most troubling applications.

“This has often been used to create realistic revenge porn,” said Bowditch. “I believe laws do not punish those that use this technology to create revenge porn harshly enough. It has and will continue to severely impact the mental health of victims.”

As Bowditch pointed out, the more nefarious uses of deepfake technology can have a devastating impact on victims’ mental health. Antonuccio observed that a single well-edited video or audio clip can ruin a reputation built over a lifetime of work, and for that person, he believed the mental toll would be horrific.

“I think it’d be excruciating,” said Antonuccio. “It doesn’t take a genius to figure out what happens if you have a vengeful teenager with technology like that at their fingertips – or at any age level.”

Beyond the emotional impact, deepfake technology has already begun to change how the entertainment industry runs as a whole, and that influence will only deepen.

From a technical standpoint, Antonuccio noted that even when an actor is alive, parts of media production can be expensive, such as dialogue replacement in postproduction, where filmmakers must bring actors back into the studio to re-record lines that match the picture. With deepfake technology, that work can become a one-step edit at a keyboard, streamlining the whole process.

“To me, I think there’s a lot of things where, in terms of the speed of things like postproduction, that’s going to increase,” said Antonuccio. “And I think it creates interesting possibilities for storytelling. You fast forward 20 years into the Marvel world, and you can pull characters from 2008 as they were then and integrate them into stories at that time. So there’s all kinds of ways you can extend storylines, beyond the age of the actor or actress.” 

Both Bowditch and Antonuccio, however, found the ethics surrounding deepfake technology and deceased public figures difficult to navigate. Digital rights protection laws exist for high-profile people, governing who owns their likeness when they die. But often, as in the case of Bourdain’s AI-generated audio, those rights are violated through loopholes.

“I am a strong advocate for digital rights protections,” said Bowditch. “I believe this violates dead actors’ digital rights if used without their estate’s consent. If the family has given permission, I have no problem with it. Maybe future wills can stipulate how digital likenesses may be used. I think it will be interesting to see if someone’s likeness is challenged as public domain soon. Many Charlie Chaplin films are out of copyright and public domain. Can someone commercially use his likeness sourced from public domain films in deepfakes?”

Some celebrities and their families take precautions to keep their likenesses from being used or exploited through loopholes in the law. For instance, Robin Williams, who died by suicide in 2014, restricted the use of his image for 25 years after his death. Until that pre-agreed date in 2039, no one can deepfake him into any form of media.

Williams’ daughter, Zelda, shared her thoughts on deepfake technology in an August 2020 Twitter thread, writing that watching the technology advance had solidified her fear of a future marked by a severe lack of trust in what is real.

“If deepfakes come out to defame or destroy, who are we to believe? The person swearing up and down that they didn’t do it, or what USED to count as irrefutable proof?” wrote Williams in the Twitter thread. “And what if those people are no longer around to defend themselves? The dead cannot tell you ‘that’s not me’.”

Like Bowditch and Antonuccio, Zelda Williams also pointed out how easily accessible this technology can be. The fact of the matter is that people create deepfakes quite often, and sometimes the public is none the wiser.

“Especially since the beginning of the pandemic, we have witnessed that many people are easily susceptible to disinformation and conspiracy theories,” said Bowditch. “I think deepfakes will always seem convincing to some. Not too long ago, it was easy to differentiate a fake video from real by watching how often a subject’s eyes blinked. Historically, deepfakes often didn’t blink their eyes enough to be convincing. Assuredly, deepfake creators are starting to account for that and make appropriate adjustments.”
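The blink tell Bowditch describes maps to one of the earliest published detection heuristics: measure how often the subject’s eyes close. Below is a minimal sketch of that idea using the standard eye-aspect-ratio (EAR) formula; the landmark input, threshold, and blink-rate baseline are assumptions for illustration, not a production detector.

```python
import numpy as np

EAR_THRESHOLD = 0.2          # below this, the eye is treated as closed
NORMAL_BLINKS_PER_MIN = 15   # rough human average, for comparison only

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmarks ordered around the eye contour,
    as produced by any off-the-shelf face-landmark detector."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(ear_per_frame: list[float], fps: float) -> float:
    """Count closed-to-open transitions and convert to blinks per minute."""
    closed = [ear < EAR_THRESHOLD for ear in ear_per_frame]
    blinks = sum(1 for a, b in zip(closed, closed[1:]) if a and not b)
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# A clip whose subject scores far below NORMAL_BLINKS_PER_MIN would have
# been flagged as a likely deepfake by this early heuristic.
```

As Bowditch notes, this particular tell is being patched: once creators account for blinking, a detector built on it stops working, which is why detection keeps chasing subtler artifacts.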

Antonuccio believed the struggle to determine the legitimacy of video and audio content is only heightened by social media.

“The speed of which we have sharing via social media now, it just bypasses any filtration system,” said Antonuccio. “And I think that’s only going to get worse as we move forward.”

The verdict on deepfake technology? Seemingly subjective, depending on whom one asks. However, two points are almost always agreed upon: there must be a way to vet which video and audio use deepfake technology, so that salacious or malicious material can be caught, and if deepfake technology is going to be used after a celebrity’s passing, it must be approved by members of the family.

More than anything, Bowditch and Antonuccio encouraged people to be wary of deepfake technology as it continues to circulate through the public. As seen in everything from social media edits to professional projects like Bourdain’s documentary, some people work to blur the ethical lines around the technology’s use and its disclosure, leaving everyone susceptible to deepfake deception.

“If there’s one historical photograph, video, or audio recording or a combination, and you’ve got five deepfakes that are as convincing, you spend all this time sorting through what’s actually real as it disseminates through any number of social networks or otherwise,” said Antonuccio. “That, I think, is the bigger concern; it’s not just the propaganda itself, but it’s the system noise level of what it creates in terms of distorting reality.”
