With measures to stem the spread of COVID-19 putting a chokehold on their filming capabilities, advertising agencies are enhancing old content with new tech, including deepfakes.
Deepfakes typically blend one person’s likeness, or parts thereof, with the image of another person. For example, a recent commercial for State Farm insurance superimposed the mouth of ESPN anchor Kenny Mayne in 2020 over footage of Mayne from 1998, making it appear as if the younger Mayne were predicting events in 2020.
Ad agencies are so restricted in how they can generate content that they’ll explore anything that can be computer-generated, a New York Times report suggested.
“Effective advertising is built on novelty and surprise,” noted Josh Crandall, CEO of NetPop Research, a market research and strategy consulting firm in San Francisco.
“Deepfakes allow creative people to come up with the seemingly unbelievable right in front of the audience,” he told TechNewsWorld. “It’s very powerful.”
Creating the unbelievable by mixing the old and new in advertising isn’t new. Campaigns in the past have found ways to sneak post-mortem appearances of stars into commercials. For example, a Diet Coke ad paired Paula Abdul with Gene Kelly, Cary Grant and Groucho Marx.
“It’s not entirely new, but the technology is much better than it used to be,” observed John Carroll, a media analyst for WBUR in Boston.
The conditions are a little different now than they were when Abdul and Kelly were hoofing it for Diet Coke.
“We have a sort of recycling situation now because of the inability to create new ads. We need to repurpose existing material,” Carroll told TechNewsWorld.
“Part of the appeal of this kind of creative approach is the buzz that it creates. State Farm was all over Twitter as soon as its deepfake ad ran. That gives your ad an extra bump; it expands the universe of people exposed to your commercial,” he said.
“In a situation like State Farm’s, there’s no harm and virtually no downside to it,” Carroll added, “but when you translate that technology to political advertising or public policy advertising, that certainly is a more fraught situation than what you had with State Farm.”
When Fakery Leads to Deception
Advertising is just the beginning for deepfakes, said Crandall.
“Political operators, strategists and lobbyists often leverage advertising and marketing tactics for their own objectives. Online video and social media platforms are relatively inexpensive and easy targets for these groups to distribute their deepfakes and influence the social dialogue,” he explained.
There are legitimate uses for deepfake technology, including in advertising, maintained Daniel Castro, vice president of the Information Technology and Innovation Foundation, a research and public policy organization in Washington, D.C.
“Many companies already use CGI when producing video, as well as other editing tools,” he told TechNewsWorld. “Deepfake technology is a way of automating some of this process.”
Deepfakes become a problem when they’re used to deceive people — to make them believe something happened that did not happen, or that someone said something that they did not say, Castro said.
Another concern is the use of deepfakes to create media resembling someone’s likeness without their permission — or permission from their estate, if the individual is deceased, he added.
Difficult to Detect
The primary issue is one of intent and impact, Castro argued. Are people being manipulated or deceived?
A number of projects have been launched to detect deepfakes. Some states, notably Texas and California, have passed laws to regulate their use in elections, he pointed out.
“But detecting deepfakes may be difficult over the long term,” Castro said. “In that case, the focus will likely be on authenticating legitimate content — this will require both technical solutions, such as digital watermarking, and non-technical solutions, such as digital literacy campaigns.”
Deepfakes are creating issues for social networking platforms, Carroll added.
“Facebook, Twitter, Instagram — all of them have to come up with some kind of policy to deal with this — either some kind of labeling system or guidelines to remove ads that are particularly deceptive,” he said.
“Those platforms are always reluctant to get into something like that,” Carroll added.
Advertising and public policy aren’t the only areas where deepfakes will make an impact. Information security pros are concerned about the technology, too.
“As deepfakes become more convincing and easier for attackers to make with commodity hardware, it’s likely we’ll see a whole new category of social engineering attack emerge,” predicted Chris Clements, vice president of solutions architecture at a cybersecurity consulting and penetration testing company located in Scottsdale, Arizona.
“Imagine getting an ‘emergency call’ from someone who sounds exactly like your CEO by a deepfake voice trained from her frequent public speaking engagements — or a technical support department receiving a Zoom video call with a deepfake constructed to look identical to a CFO asking to reset their password,” he suggested.
“The potential damage of a convincing deepfake could have a devastating impact on organizations that fall victim to the attack,” Clements added.
One of the most significant threats in modern information security is social engineering — pretending to be someone else to trick people into making poor decisions or performing actions that are detrimental to their organization, noted Erich Kron, security awareness advocate at KnowBe4, a security awareness training provider located in Clearwater, Florida.
“Deepfakes are a powerful tool that can make it tougher for employees to determine whether a request to transfer a large amount of money or to purchase goods through the company is legitimately from their leadership,” he told TechNewsWorld.
No Truth, No Consensus
“Our society is being bombarded by fake — fake news, fake likes, fake realities,” observed Crandall. “We are seeing an erosion of what people consider to be a shared truth.”
“As deepfake technology is used by more companies and organizations, private and public, a person’s ability to decipher fact from fiction will be severely hampered,” he continued. “The results will increase interpersonal friction and political difficulty in building consensus to address the looming problems of climate change, future pandemics, and other global crises.”
Meanwhile, advertisers may reap rewards from deepfakes now, but the technology could have diminishing returns for them in the future, Carroll pointed out.
“It’s possible deepfakes will make people suspicious of everything,” he said. “Then the innate suspicion of advertising will be magnified. That will hurt the whole industry.”