Seeing something can no longer mean believing it. A decade ago, the idea of Artificial Intelligence (AI) manipulating human perception sounded far-fetched. Today, deepfake technology makes it a plausible reality.
Deepfakes are synthetic videos generated by deep learning models that convincingly manipulate facial features and voices. The danger of this AI lies in how easily it can be abused: the public consumes a plethora of information from videos on social platforms and news outlets, channels through which deepfakes could rapidly spread false information.
With the 2024 elections approaching, deepfake technology could have major implications for political campaigns. During a volatile election season, a flood of AI-generated content can sway public opinion and tarnish public figures’ reputations. In 2018, a fake video of former President Barack Obama, constructed by American actor and director Jordan Peele, surfaced. The realistic deepfake of Obama mirrored every one of Peele’s facial movements with surgical precision. While this video was created without hostile intent, malicious versions soon followed. Last year, a hacker posted a video of the Ukrainian president calling on his soldiers to lay down their weapons and return to their families. Viewers quickly recognized the deepfake and social media platforms removed it, but the hoax still caused mass confusion on Facebook, YouTube and Twitter.
Most recently, President Biden has been the target of many deepfake programmers. In early February, a doctored video surfaced of him calling for a mandatory draft of American troops to fight in Ukraine, enraging the general public. Once debunked, the video shed light on the growing problem of deepfake technology. President of Programming Club Joe Li (12) underscores the importance of validating information in these changing times.
“On several social media platforms, there are community notes or other mechanisms where the audience can tell whether a video is real or fake,” Joe said. “If the community collectively agrees that a video is fake, then we can counter the spread of misinformation.”
AI-generated media is becoming dangerously difficult to distinguish from real content. Paradoxically, research conducted to detect deepfakes often helps them improve. As soon as a well-intentioned researcher publishes a study exposing a flaw in the deep learning networks, deepfake creators promptly correct it. In 2018, researchers published a paper describing the intricacies of human blinking, intending to help the public spot fake videos by their unnatural blinking patterns. The approach backfired: instead of protecting people from misinformation, the study allowed programmers to improve their models, and subsequent deepfakes blinked far more realistically.
In addition to growing more sophisticated, deepfakes are becoming easier to create. In the past, building one required assembling a face-detection algorithm and training an autoencoder by hand. Now, anyone can produce a reasonably realistic generated video through integrated software and applications; all a user has to do is provide some training data, such as photos or video clips of the target.
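The autoencoder approach mentioned above can be sketched in a few lines. This is only a toy illustration, not a working deepfake: the weights below are random placeholders rather than trained parameters, and the dimensions (a 64×64 image, a 128-number latent code) are hypothetical. The key idea it shows is the classic face-swap architecture: one shared encoder compresses any face into a compact code, and a separate decoder per identity reconstructs a face from that code.

```python
import numpy as np

rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM = 64 * 64, 128  # hypothetical image and latent sizes

# One encoder, shared by both identities, compresses a face into a latent code
# capturing pose and expression. (Random weights stand in for trained ones.)
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

# One decoder per identity learns to rebuild that person's face from the code.
W_dec_a = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    """Compress a flattened face image into a latent code."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a flattened face image from a latent code."""
    return W_dec @ latent

# The "swap": encode person A's face, then decode it with person B's decoder.
# After real training, this would render B's likeness in A's pose.
face_a = rng.random(FACE_DIM)
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # a full-size face image, e.g. (4096,)
```

Because only the decoder is identity-specific, swapping faces is as simple as routing one person's encoded expression through another person's decoder, which is why collecting enough training footage of the target is often the only hard part.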
The increased accessibility of deepfake technology makes it an even greater threat. A single compromising video has the potential to destroy someone’s career. Joe believes that people will inevitably abuse that power.
“I don’t see too many benefits of deepfake because you’re creating something that’s not actually real,” Joe said. “But I think it does show how far AI has come. If we use our knowledge of AI in other fields, then we can bring a lot more advancements.”
The emergence of deepfake technology also has some potential to help society. It is increasingly utilized in the film industry, giving directors the chance to explore new realms. Popular media franchises like the Marvel Cinematic Universe have begun testing early forms of this AI to de-age actors or create hyper-realistic visual effects. As deepfake technology continues to reshape filmmaking, it could eliminate the need for actors to be present at every shoot. There has also been speculation that deceased actors could be ‘re-cast’ in new movies through these deep generative models. Of course, legality and ethics will determine how practical these applications become. Actors have grown increasingly concerned about being “replaced” by the technology, a worry that helped prompt the SAG-AFTRA strike against the Alliance of Motion Picture and Television Producers.
AI has come a long way in recent years. Now the question remains: is this innovation for better or for worse?