A new form of online disinformation has some government officials uneasy about its potential effects on upcoming political campaigns and elections, but policy efforts to address it are sparse.
“Deepfakes” — videos altered with the help of AI that can make people (typically celebrities or politicians) appear to do and say things they actually did not — are not only weird, uncanny manifestations of a new era of technological progress; according to some, they are also a national security threat.
In November, the Council on Foreign Relations hosted a public roundtable discussion of the new online phenomenon, where panelists lamented the potential these videos have for deployment by hostile foreign actors. Similarly, the Pentagon and its research agency, the Defense Advanced Research Projects Agency (DARPA), recently announced their commitment to researching various ways to combat the new phenomenon.
Deepfakes are created by AI algorithms that learn the movement patterns of a subject’s face from real video and then simulate those patterns to make the subject appear to do or say something they never did. The concern among many is that such videos will be used to manipulate public perceptions of candidates and other public figures during elections.
With the globalized reach of “fake news,” disinformation has gone from being a mostly federal issue to one with state and local relevance as well. Still, states have been slow to adopt legislation that could combat these potential forms of election interference, in no small part because the technology is so new and its ability to sow confusion among voters remains untested.
As a result, while a number of federal bills targeting deepfakes have been introduced, only a handful of state bills have emerged, namely in California, Texas and Massachusetts.
A bill that sought to combat deepfakes was recently introduced in the California Legislature but failed to pass.
AB 1280 was sponsored by the Organization for Social Media Safety (OSMS), a relatively new nonprofit that describes itself as committed to combating online bullying and other hazards of social media. Though the bill was voted down, it was granted an opportunity for reconsideration next year.
“Even though it’s still early, we feel that there should be some sort of legislative response,” said Marc Berkman, executive director for OSMS.
While Berkman’s bill focused heavily on the potential for such videos to be used as a form of bullying or ostracism, it would also have made it a felony or a misdemeanor to “prepare, produce, or develop, any deepfake [within 60 days of an election] with the intent that the deepfake coerce or deceive any voter into voting for or against a candidate or measure in that election.”
Opponents of the bill argued that deepfakes remain a largely theoretical threat to the integrity of elections, and that laws already exist to assist potential victims of such videos. Civil liberties groups such as the ACLU also opposed the bill, seeing it as an infringement on the right to freedom of expression.
“It was helpful to see where everyone’s at and there were some good words spoken about it at the hearing. Some more work needs to be done,” Berkman said.