Iron Hand on AI: FCC Steps Up Against AI-Voiced Robocalls, but Old-School Deception Dominates
Just days after a robocall using a synthesized voice that imitated President Joe Biden reached New Hampshire voters, the Federal Communications Commission took swift action, prohibiting the use of AI-generated voices in robocalls. The moment was significant: it marked a direct confrontation between advancing technology and regulation, with the 2024 elections fast approaching and AI generators capable of manipulating images, audio, and video freely available to the public.
In the face of this new reality, numerous institutions sprang into action to thwart AI-powered transgressions. Sixteen states passed laws specifically regulating the use of AI in election campaigns. A common element in many of these new laws was a requirement that AI-generated media published close to an election carry clear disclaimers. Legislators were quick to identify the threats inherent in such uses of AI and acted accordingly.
The Election Assistance Commission, a federal agency that supports election administrators, contributed to the effort by releasing an AI toolkit. The resource gave election officials recommendations on how to communicate election information reliably in an environment increasingly shaped by misinformation and deception. At the same time, several states rolled out instructional webpages designed to help voters recognize and scrutinize AI-generated content.
The potential perils of AI were not lost on experts, who warned that the technology could fabricate deepfakes depicting political candidates saying things they never said or doing things they never did. They anticipated a significant domestic impact: such artificial content could sway voters' choices, manipulate their decision-making, or even discourage them from participating in the election altogether.
Yet against this backdrop of apprehension, the expected onslaught of AI-produced disinformation never came to pass. Distorted claims about vote counts, mail-in ballots, and voting machines did take center stage, but the deception relied largely on old, well-trodden methods: text-based assertions on social media and misleading videos or images taken out of context.
The absence of generative AI as a significant factor in the misinformation landscape was no accident. Tech and public policy experts credited preventative measures and legislative action with mitigating the potential abuse of AI in crafting harmful political narratives. So while the threat of AI-driven fake news lurked in the background, old-school deception techniques remained the preferred tools for those determined to spread misinformation.
Meta, the parent company of Facebook, Instagram, and Threads, did its part to limit the misuse of AI by requiring advertisers to disclose any use of AI in ads about politics or social issues. TikTok followed suit, implementing a mechanism that automatically labeled certain AI-generated content.
Even so, deception-fueled distrust found its way into circulation. Trump was at the forefront, asserting repeatedly in speeches, media interviews, and social media posts that illegal immigrants were being ushered into the country to vote. Despite being debunked as baseless, his claim struck a chord with a considerable portion of Americans who shared his concerns about noncitizens voting in the 2024 election.
While instances of AI-enabled misinformation did appear in PolitiFact's fact-checks, most viral media fell into what experts classify as 'cheap fakes': genuine content that has been deceptively edited or recontextualized without any AI involvement. In an unanticipated twist, some politicians chose to deride or blame AI rather than use it for their own ends.
Analysts had predicted in 2023 that AI would let foreign entities carry out influence operations cheaply and quickly. But the Foreign Malign Influence Center, the federal office that monitors foreign influence activity targeting the US, reported in late September that AI had not 'revolutionized' those efforts.
To jeopardize U.S. elections using AI, foreign entities would need to overcome the restrictions built into AI tools, evade detection, and strategically craft and distribute the manipulated content. Notably, when several intelligence agencies jointly flagged foreign influence operations, those operations more often relied on actors in staged videos than on AI.
One fabricated video showed a woman implicating Vice President Kamala Harris in a hit-and-run car crash. Investigators traced the video to a Russian network known as Storm-1516, which used similar tactics in attempts to undermine trust in the election in battleground states such as Pennsylvania and Georgia. Social media and AI platforms also tried to make it harder for such harmful political content to thrive, adding watermarks, labels, and fact-checks to posts.
