Decoding Authenticity in a Deepfake World


This article is part of a series called Decoding Authenticity. In the initial part of the series, I explored Authenticity in the Age of AI & MR, delving into the cultural discussions surrounding authenticity, framed within the context of ‘The Curse.’ Now, let’s tackle the pervasive issue of deepfakes — a substantial catalyst for the surge in disingenuous content, deception, and, frequently, fraudulent activities.

Deepfakes, powered by artificial intelligence, manipulate and replace a person’s likeness in computer-generated images, videos, or audio recordings, making it seem like they’re saying or doing things they never did. Where creating convincing synthetic content once required real technical skill, advances in AI, cloud computing, and 3D content engines have made it dramatically easier to produce and disseminate.

Opening the Door to a Creative Revolution

Before we delve into the ominous implications of deepfakes, I want to take a moment to appreciate the awe and boundless creativity propelled by GenAI tools (e.g. Midjourney, OpenAI’s DALL-E and ChatGPT, RunwayML, and many more). We’re amidst a creative renaissance, with industries exploring and experimenting at an unprecedented pace, especially in media, fashion, and entertainment. This surge has birthed novel forms of aesthetic design and collaborative content creation.

There is real value in this emerging content, and in other digital forms, when it is used ethically, sourced responsibly, and appropriately credited. However, in light of the creative revolution facilitated by GenAI tools, the time has come to reimagine our approach to all forms of content. Implementing measures that safeguard its authenticity and value is imperative. This shift ensures that the content produced is not only innovative but also protected from manipulation and misuse. Content, instead of being viewed merely as a commodity, should be understood as a form of currency susceptible to counterfeiting.

Backlash of Misinformation and the Erosion of Truth

Unfortunately, the same technology that fuels creativity turns sinister when wielded maliciously to fabricate scenarios that never happened. What makes deepfakes particularly insidious is their non-consensual nature. Sumsub’s Identity Fraud Report indicates a staggering 1740% surge in deepfake use in North America from 2022 to 2023.

High-profile instances, such as the deepfake images of superstar Taylor Swift and fabricated robocalls mimicking President Joe Biden’s voice circulating on social media in early 2024, underscore the gravity of the issue. The Brennan Center for Justice warns that the early 2020s might be recorded as the dawn of the deepfake era in elections, as Generative AI gains the ability to convincingly emulate elected leaders and public figures, potentially eradicating trust across society.

While the political arena grapples with the rise of deepfakes, the issue extends beyond, with a staggering 98% of all deepfake videos online dedicated to non-consensual deepfake pornography. As these manipulations blur the boundaries between reality and fantasy, the urgency to address deepfakes lies in the immediate and long-term threats they pose to individual security, privacy, and societal trust at large.

Establishing a Unified Approach to Counter Deepfakes

As we struggle to keep pace with the rapid development of these new tools, it is imperative that legislation is enacted to hold bad actors accountable. While the federal landscape remains bereft of explicit laws addressing deepfakes, a smattering of states — California, Texas, Michigan, Washington, and Minnesota — have taken it upon themselves to enact regulations.

On the national stage, just this week, the Senate Judiciary Committee introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, legislation that would allow victims to sue people who create and distribute sexually explicit deepfakes under certain circumstances. Several other bills have also been introduced, including the Deepfakes Accountability Act of 2023, the No AI FRAUD Act, which guards identities against the malicious misuse of AI, and the Preventing Deepfakes of Intimate Images Act, which specifically targets non-consensual pornographic deepfakes.

The Coalition for Content Provenance and Authenticity (C2PA) emerged in early 2021 with broad support from tech giants like Adobe, Microsoft, the BBC, and numerous others. Aimed at combating AI-generated misinformation and promoting media transparency, C2PA is formulating new standards that incorporate safeguards such as PKI-based (public key infrastructure) digital certificates, signatures, encryption keys, and certificate authorities, establishing a secure chain of trust. The open technical standard set by C2PA helps individuals discern between real and fake media: by authenticating and validating digital files with a tamper-evident record, it offers a necessary tool in the battle against misinformation. However, for this approach to be effective, the specifications outlined by C2PA must be universally adopted by all stakeholders involved in the various stages of image creation and editing.
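To make the chain-of-trust idea concrete, here is a minimal Python sketch of a tamper-evident provenance record. This is not the C2PA format: real C2PA manifests use COSE signatures backed by X.509 certificate chains, whereas this sketch uses an HMAC with a shared key as a stand-in for the certificate-backed signature, and the function and field names (`make_manifest`, `verify_manifest`, `asset_sha256`) are illustrative inventions. The core mechanism it demonstrates is real, though: the asset’s hash and its provenance claims are bound together by a signature, so any edit to either is detectable.

```python
import hashlib
import hmac
import json

def make_manifest(asset_bytes: bytes, claims: dict, signing_key: bytes) -> dict:
    """Bind a content hash and provenance claims into a tamper-evident record.

    Real C2PA manifests use COSE signatures and X.509 certificates; here an
    HMAC over the canonical JSON stands in for that signature.
    """
    record = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,  # e.g. creator, capture device, edit history
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(asset_bytes: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Check the signature AND that the asset still matches its recorded hash."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]
    )

key = b"demo-key"  # in C2PA this trust role is played by certificate authorities
image = b"\x89PNG...original pixels..."
manifest = make_manifest(image, {"creator": "Alice", "tool": "ExampleCam"}, key)
```

Verifying `image` against `manifest` with the same key succeeds, while verifying an edited asset (or a manifest with altered claims) fails, which is precisely the tamper-evidence property the C2PA standard is built around.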

Our Human Imperative in a Tech-driven Era

The challenge posed by deepfakes extends beyond technology — it’s a distinctly human issue. While regulatory and technical guidelines are necessary, cultivating discerning minds to withstand the impending onslaught of fake content is urgent. Clear actions are needed to educate the public about the immediate danger of deepfakes, as demonstrated by Canada’s recent nationwide campaign on disinformation. We need to foster a digitally literate society equipped with the guidelines and toolkits to critically analyze content. In the digital age, content is more than a commodity; it’s currency demanding careful scrutiny. The question is: are we up to a challenge that demands significant investment and regulation? Thankfully, many other people are thinking about these issues as well. I recommend the article “An FAQ from the future — how we struggled and defeated deepfakes.”

In the next article in this series, I will delve into the application and ethics of virtual humans, exploring their evolving role in pivotal industries such as healthcare and media, and emphasizing the imperative for authenticity in navigating this dynamic landscape.

Originally published on Medium. All rights reserved by the author.
