Getting digitally cloned was easier than Devin Finley expected it to be. The voice-over artist, who also works as a model and bar manager, entered a studio in Manhattan last spring and read a script from a teleprompter.
Across the room, a man with a large camera working for Hour One, a Tel Aviv–based video agency specializing in providing clients with lifelike virtual humans, filmed Finley from the waist up. Over Zoom, a director offered instructions about how much to move his hands. He was done in less than an hour.
When Finley first learned that he could license his virtual twin, he had reservations. He was skeptical that a digital double, as the nascent sub-industry calls it, could capture his personality. He also disapproved of A.I.–generated avatars eventually taking jobs from people. Then again, the situation enabled the impossible. “You can be in two places at one time,” he said, and the new income stream meant that “as you get older or you’re on vacation, your avatar is still virtually making money for you,” possibly speaking a language he has never learned, without ever aging beyond 36.
And so Finley accepted an initial payment of $500 and signed a contract giving Hour One a certain number of credits to book his twin in videos used for marketing, product tutorials, online training, employee onboarding, and more. If he was in high demand, Hour One, which lists Nvidia, Microsoft, and DreamWorks among its clients, would buy more credits from him.
“It’s a new technology—either you hate it or you love it,” Finley said. So long as the company kept its promise to keep him out of political, sexual, and malicious content, he was open to loving it.
Not so long ago, concerns about digital doubles seemed like fantastical sci-fi matters. But at the moment, it’s looking increasingly plausible that disagreements over how to handle them could contribute to one of the largest Hollywood labor strikes in decades. A couple of weeks ago, leaders of SAG-AFTRA, the primary union representing actors and other performers, asked its 160,000 members to vote by June 5 on whether to authorize a strike if studios and streaming companies waver on the union’s contract demands. Among other nonnegotiables, the union is determined to pin down how much performers should get paid, and when they must be consulted, before studios can use A.I.–generated simulations of their voice or likeness. The Writers Guild of America is already on strike (and citing its own A.I.–replacement fears). If the actors join the writers, it would be the first time in more than 60 years that the two unions have simultaneously halted work.
At the same time, the moment when anyone can easily make a convincing video featuring any celebrity or public figure is rapidly approaching. Within “the coming weeks,” the buzzy startup Runway is going to release a tool that turns text and image prompts into videos, a spokeswoman told me. Promotional materials imply that Runway makes it possible to take a photo of anyone and turn them into a semi-realistic 3D character. Google has invested $75 million in the endeavor, the Information reported last week. Every day, it seems, ever-improving deepfakes wreak new forms of havoc and spawn novel attempts to capitalize on the technology. Among other notable moments of the past few weeks: a deepfake of former Ukrainian President Petro Poroshenko tricked activist Bill Browder into a video call.
So are we on the cusp of major studios forcing Meryl Streep into a sequel by casting her digital double? Not quite. It’s not that studios wouldn’t want to. But, unions aside, A.I. can’t yet replicate top-notch acting from scratch.
Amid the frequently confusing discussion about A.I. replacing actors, here’s a look at some concrete ways that A.I. doppelgangers are already being used (on union and non-union productions alike). What becomes clear is that the future that actors are warning about is already here. You just might have missed it.
Concrete Use No. 1: Face Swapping Celebs Into TV Ads

Face swapping your way into an ad is on its way to becoming the celebrity-hustle equivalent of Zooming into a meeting—or so I concluded after talking to Alexandra Shangin, a co-founder of Brask Doubles, an A.I. content agency. The big names that brands want are often busy. But there is no need for French soccer star Kylian Mbappé or Argentinian soccer phenomenon Lionel Messi—two athletes who have worked with Brask—to show up on the set.
Though this appears to be Mbappé sprinting across a stadium at the 15-second mark in this ad for Mengniu, a Chinese dairy company, it’s not:
Rather, the runner is a human double, who shares Mbappé’s build and skin tone, wearing what Shangin calls an “A.I.–generated digital mask.” Brask, which has offices in Delaware, Dubai, and beyond, trained its A.I. tool on videos and photos of Mbappé, studying how the French forward’s face looks at various angles and in different lighting scenarios, and then superimposed it onto the double.
Long before easy-to-use generative A.I. models entered the equation, the public was exposed to a vast array of impressive and disturbing digital doubles. A hologram of Tupac, who died in 1996, performed at Coachella in 2012. A resurrected Peter Cushing, who died in 1994, appeared in the 2016 Star Wars movie Rogue One. In 2018 the New York Times detailed some of the terrifying ways that deepfake technology was being used online, including placing a woman who looks like Michelle Obama in a porn video. Other similarly off-putting uses prompted think tanks and op-ed writers to warn that deepfakes were on the cusp of sparking a war or manipulating an election.
Fast-forward five years. Generative A.I. has gotten exponentially better. And what we get is a doctored but benign clip of a soccer star that looks nearly identical to thousands of other clips of that same star? Yes, indeed. Perhaps the true sign of A.I. takeover is that the technology has evolved from novelty to production hack. Brands may pay the star roughly as much as they would for an in-person appearance. But by removing the celebrity and his team from the shoot, scheduling conflicts and travel costs disappear. None of this made sense back when hyperrealistic 3D face swapping required a labor-intensive animation process. But now weeks of work take days. The cost-to-hassle calculus is increasingly pointing to digital doubles.
And unless the star does something totally out of character, no one will suspect that it’s A.I. Speaking Russian was the giveaway when Brask’s sister company Deepcake (yes, that’s its name) placed “Bruce Willis” in an ad for MegaFon, a Russian mobile phone company, two years ago. The ad generated incorrect reports that Willis had sold his digital life rights. Shangin told me her company focuses on hiring celebrities for specific projects. “We just provide the service so the talent can make more of their personality.”
Concrete Use No. 2: Building Synthespians on Top of Body Doubles
How long will it be before A.I. actors are competing for awards? If the technology can wow us with its ability to create realistic photos of popes in puffer jackets and convincingly conjure up a Drake and the Weeknd collaboration, surely we’ll eventually get an A.I.–generated actor worthy of an Oscar, right?
Wrong, say some of the people most knowledgeable about on-screen A.I. Matt Panousis, the chief operating officer of the company Monsters Aliens Robots Zombies (or MARZ), spent an hour telling me about the many ways that A.I. is a visual-effects “paradigm changer” enabling artists to dramatically speed up their work. But when we got to casting A.I., his tone changed. “The idea that in the near future we’re going to have entirely A.I. actors that will be able to perform and look perfectly real and they will negate the need for real actors—I think it’s a little pie-in-the-sky.”
Others gave it two years. But for now, A.I. can only generate face makeup of a sort, which goes over a body double, who does the acting. The double’s voice may later get altered to sound like the star. But producers are still reliant on these human stand-ins for intonation and facial expressions. This is the case whether you want to revive a dead actor or place twin siblings in a single shot. As with the TV ads, the body doubles will ideally share a hairline and face shape with the actual actor, as you can see in this clip from MARZ:
Advances in this tech have already begun boosting demand for body doubles who can act. Does this eliminate concerns about how studios will use A.I. to exploit actors? Of course not. Just because you can’t yet program a wryly funny Jason Bateman look-alike doesn’t mean you can’t create an eighth season of the 1980s sitcom Family Ties by placing his sister’s face on a body double. That’s what she says, anyway.
“No film is locked,” director and producer Justine Bateman, who studied computer science and rose to fame as Mallory Keaton on Family Ties, told me. I reached out after reading her widely shared Twitter thread predicting the ways A.I. would ruin Hollywood. She’s particularly concerned about producers placing actors’ likenesses in scenes they never shot.
Even if SAG-AFTRA gets its way, this is bound to happen, but it will require more pleading and money. “We want to see consent, fair compensation, and appropriate limits on any use of digitally created performances,” a union representative told me. Now they get to hash it out at the bargaining table.
Concrete Use No. 3: Syncing Lips to Dubbed Dialogue

Even the most gifted actor can act in only one language at a time; a dubbed film’s mismatch between lips and words is just something audiences have to deal with. Or it was. Last week, MARZ released a video showing off its new lip-dubbing feature. In the video, actor Adam Brody speaks what appears to be fluent French in the movie The Kid Detective. “We just need the audio for the original footage and we run it through the lip dub,” Panousis told me.
Several other production companies have recently released—or are on the cusp of unveiling—similar tools. A downside of this is that it will be easier to pull a Kendall Roy. In this season’s Succession, the power-hungry son asks an editor to modify a video of his deceased father to misrepresent a new business endeavor. A desire to misuse these technologies is far from hypothetical. Shangin said Brask has had to turn down requests to create videos of an actual politician doing drugs and other people saying compromising things.
Is allowing a company to place a moving, speaking avatar of you in an A.I.–facilitated real estate ad all that different from appearing in traditional ads? As we talked on the phone, it became clear that Finley was still working through the answer to this question. What he seemed certain of is that it’s better than signing up for a stock photo shoot, something he did during the pandemic, while work was slow. That particular gig paid even less than the digital-double gig, promised zero residuals, and similarly forced him to relinquish control of how his likeness is used.
This job was faster and easier. “I don’t feel like I was used or abused,” Finley told me, when it became clear that I was horrified that he was paid only $500 to clone himself. Still, he’d like to see some sort of universal guidelines emerge eventually—beyond just the unions. (Finley is one of many performers who are not eligible to be in SAG-AFTRA.)
For now, the career of Finley’s digital double is advancing slowly. One of his main gigs has been playing a futuristic cop in Burner Face, a sci-fi audio movie. The only video he has appeared in so far is a sort of promotional video for that project. In the video, his lips don’t entirely sync with his words. And because Hour One trained its system only on his likeness—not his deep, dramatic voice—we hear someone else’s A.I.-generated words.
Sooner or later, other offers should come. Berlitz, the language learning company, hired eight “virtual instructors” to pump out 20,000 videos, Natalie Monbiot, Hour One’s head of strategy, told me. The site also lists numerous major companies as clients, though it’s not clear whether they hired one of the 150 or so “virtual talents” Hour One offers or cloned their own employees, another suggested use. “It can help you scale your presence and look polished every time,” said Monbiot. Synthesia, a similar A.I. video company, told me it’s now working with “more than 15,000 corporate clients, including 35% of the Fortune 100.”
Ian Beacraft, a trend consultant and self-identified “gonzo futurist,” was similarly positive about joining Hour One’s talent pool. Beacraft’s digital double now hosts The Future Report, a tech news segment on DeFiance Media, which covers “decentralized culture, finance, and technology.” That company also uses an A.I. rendition of another actor’s voice. Beacraft still writes the scripts, but “I don’t have to get stage makeup or do the lights,” he said. Ultimately, he believes that in the not-so-distant future, after people become popular online, they will, rather than get offered a book deal, be asked to license their digital twins. “The real danger is when you have a marketplace where I can just take anyone and make a random person and get close to them” without actually being them, he told me. Then companies can avoid paying humans altogether, since their creations will cease to qualify as doubles.
That’s a possible future, but it’s not quite here yet. So where does this leave actors now? I found myself thinking about the first time I learned that a friend took compensation from a brand for an Instagram post. How fake of her, I thought. By the time she leveraged a similar deal to get a flight to Iceland, I was impressed. It’s tempting to think that, like so many seismic changes before, we’ll just get used to this coming phase of the talent economy.
But digital cloning requires ceding control in a different kind of way. Regardless of whether you license your avatar or someone else surreptitiously creates it, once it exists you’re no longer in charge of what you say, where you go, or how you move. The battle for that control will affect almost every aspect of the A.I.-twin era we’re now entering, no matter how specific anyone’s contracts get.