First, let’s get the basics out of the way. The term “deepfake” combines “deep”, from the AI technique of deep learning, with “fake”, signifying potential inauthenticity. It most likely originated in late 2017 with a Reddit user known as “deepfakes”, although the underlying technology predates those posts.
A deepfake is created when someone uses an artificial-intelligence tool, especially deep learning, to manipulate or generate a face, a voice or – with the rise of large language models like GPT, which underlies ChatGPT – conversational language. Increasingly, it is also possible to manipulate or generate entire events and situations.
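To make this concrete, here is a minimal sketch of what generating a wholly synthetic portrait looks like today with the open-source Hugging Face diffusers library. This is an illustration rather than an endorsement of any particular tool; the model checkpoint and prompt are assumptions chosen for the example, and many comparable tools exist.

```python
# Minimal sketch: generating a fully synthetic portrait with an
# off-the-shelf text-to-image diffusion model. Assumes the `diffusers`
# and `torch` packages are installed; the checkpoint name below is one
# widely used public example, not a recommendation.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision for GPU memory savings
)
pipe = pipe.to("cuda")  # requires a GPU; for CPU-only use, omit torch_dtype above

# A short text prompt is all it takes to produce a photorealistic face
# of a person who does not exist.
image = pipe("studio portrait photo of a smiling middle-aged man").images[0]
image.save("synthetic_portrait.png")
```

The point is not the specific library but the workflow: a few lines of consumer-grade code now accomplish what, a decade ago, required genuine technical expertise.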
Although the technology for deepfakes predates 2017, they rose to notoriety that year through their use in producing non-consensual pornography depicting female celebrities, ex-girlfriends and former spouses (a problem that continues today). Since then, they have been quickly adopted for troubling political and military purposes, as various actors around the world, some known, some unknown, seek to influence the course of historical events by deceiving electorates and militaries.
The big question among experts, policymakers and the general public is whether there can be positive, or at least neutral, use cases for deepfakes, or whether the technology is inherently harmful. For example, there is a debate happening right now about the ethics of using deepfakes to help people mourn the loss of loved ones, or to present a youthful Luke Skywalker long after the actor who plays him, Mark Hamill, has aged. In fact, the recent strikes by the Screen Actors Guild and the Writers Guild of America were in part a preemptive move to gain a measure of control over how deepfakes will be used in film and television productions.
Hollywood writers recently went on strike to influence how deepfakes will be used in the creative industry.
The question going forward will be whether deepfakes can and should be regulated, and if so, how. Answers to these questions will depend in part on our philosophical instincts about this technology. For example, if you believe strongly that deepfakes are inherently inhuman and inauthentic, you will probably be inclined to ban them wholesale. If, however, you believe that deepfake technology is just another way to make content, then deepfakes would have an appropriate place in culture, and it will be up to content creators and their audiences to experiment and figure out what is and is not appropriate.
Any policy will require a definition of “deepfake.” The definition does not need to be perfect, but it needs to be viable, and this is where the world is currently hitting a stumbling block.
For example, scholars often refer to deepfakes as “synthetic media” to express the role played by AI in the content’s creation. There is some sense to this, yet the term “synthetic media” is by no means straightforward. Is not all media in some sense synthetic, insofar as it is made with tools? One could just as well argue that no media is synthetic, insofar as all content ultimately comes from human minds. AI art itself demonstrates this, for its training data contains the work of real artists.
Additionally, deepfakes have traditionally been distinguished from “cheapfakes,” which are essentially the same thing but produced with non-AI tools. For example, manipulating photos with Photoshop used to be a common cheapfake technique. Yet Photoshop now has AI built into it – a harbinger of things to come. Once AI is ubiquitous in media software, will it still be meaningful to distinguish between deepfakes and cheapfakes? Perhaps the term “cheapfake” will come to designate consumer-grade deepfakes, i.e., those produced with AI tools that the average person or small enterprise can afford.
The fate of the term “cheapfake” is more than a matter of slang; it is something policymakers should seriously reckon with. Defining deepfakes by the role AI plays in their creation seems like the obvious thing to do; even this blog post started by defining them that way. Yet if policy does the same, it could create a legal slippery slope in which virtually all media eventually becomes classified as deepfakes.
Is fake always false?
Now let’s consider one of the biggest philosophical challenges that deepfakes present: whether fakeness always implies falseness. Both fakeness and falseness concern whether what is depicted in a piece of content corresponds to reality, but reality itself has two dimensions: the subjective and the objective. Put another way, there is the inner world and the outer world. The two are connected, but their difference is where the difficulties begin.
A couple holding a newborn baby. This image is a deepfake, completely generated by AI for the purposes of this blog post.
Take, for example, a married couple who desire a child and make a deepfake of themselves holding a baby. The reality depicted in this deepfake is that of their inner world, but it does not correspond to their outer world. The image is therefore fake, but is it warranted to call it false? Perhaps instead it is aspirational.
Even so, this does not eliminate the problem: fakeness usually does imply falseness, and for good reason. Whether a piece of content is false depends on context. Whatever the couple’s personal motivations for creating the deepfake, the empirical fact remains that they do not yet have a baby. This gap between their subjective and objective realities introduces an epistemological hazard. One of their relatives may see the deepfake on one of their social media profiles and mistakenly believe that they have finally become parents.
The epistemological hazard here is limited to the couple’s social circle. Now consider a different situation: an activist publishes a deepfake depicting a politician accepting a bribe. The activist may be perfectly aware that no accusations of corruption have ever been raised against the politician, but they feel in their innermost heart that this person is part of a conspiracy systematically undermining their community. Soon, the deepfake is being amplified by social media influencers and by fake accounts controlled by a faction seeking to undermine the politician for its own ends.
A Stoic solution
The key issue here is ultimately not technological but epistemological: the assumption of objective reality – in other words, the tendency of an audience to uncritically interpret content as accurately depicting the outer world. And not only this, but also the tendency of an audience to want to believe that content is factual.
The good news is that we are not entirely in the dark, as this is a very old problem. In the 19th century, mere illustrations, not photographs, were enough to spark violence. Sometimes the audience had genuinely mistaken an illustration for a depiction of objective reality, but sometimes they had an ulterior motive for insisting it was true, typically a class grievance or racial bias.
It is probably cold comfort that the fundamental problem is people’s continuing failure to think critically and responsibly about media. Still, at least we more or less know what to do about it, not necessarily as a society but as individuals.
The solution is not to go full-tilt skeptical and believe nothing you see and hear online. You just need to take a page from the Stoics. Whenever you encounter content that is in some way being presented as true – perhaps by a news organization, perhaps by your relative on social media, perhaps just because a lot of people are sharing it online – before you react to it, pause. Do you have good reason to believe it is true? And crucially, what are the feelings it is arousing in you? Do you want it to be true?
The irony is that the challenge posed by deepfakes is not really about whether you should believe everything you see, but whether you should believe everything you feel.
Don’t lose your mind over deepfakes. This photograph of a headless statue of a Greek philosopher is licensed under the Creative Commons Attribution 2.0 Generic license.