
AI could be used in the 2024 election, from disinformation to ads

When OpenAI last year unleashed ChatGPT, it banned political campaigns from using the artificial intelligence-powered chatbot — a recognition of the potential election risks posed by the tool.

But in March, OpenAI updated its website with a new set of rules limiting only what the company considers the most risky applications. These rules ban political campaigns from using ChatGPT to create materials targeting specific voting demographics, a capability that could be abused to spread tailored disinformation at an unprecedented scale.

Yet an analysis by The Washington Post shows that OpenAI for months has not enforced its ban. ChatGPT generates targeted campaigns almost instantly, given prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

It told the suburban women that Trump’s policies “prioritize economic growth, job creation, and a safe environment for your family.” In the message to urban dwellers, the chatbot rattled off a list of 10 of President Biden’s policies that might appeal to young voters, including the president’s climate change commitments and his proposal for student loan debt relief.

Kim Malfacini, who works on product policy at OpenAI, told The Post in a statement in June that the messages violate its rules, adding that the company was “building out greater … safety capabilities” and is exploring tools to detect when people are using ChatGPT to generate campaign materials.

But more than two months later, ChatGPT can still be used to generate tailored political messages, an enforcement gap that comes ahead of the Republican primaries and amid a critical year for global elections.

AI-generated images and videos have triggered a panic among researchers, politicians and even some tech workers, who warn that fabricated photos and videos could mislead voters, in what a United Nations AI adviser called in one interview the “deepfake election.” The concerns have pushed regulators into action. Leading tech companies recently promised the White House they would develop tools to allow users to detect whether media is made by AI.


But generative AI tools also allow politicians to target and tailor their political messaging at an increasingly granular level, amounting to what researchers call a paradigm shift in how politicians communicate with voters. OpenAI CEO Sam Altman in congressional testimony cited this use as one of his greatest concerns, saying the technology could spread “one-on-one interactive disinformation.”

Using ChatGPT and other similar models, campaigns could generate thousands of campaign emails, text messages and social media ads, or even build a chatbot that could hold one-to-one conversations with potential voters, researchers said.

The flood of new tools could be a boon for small campaigns, making robust outreach, micro-polling or message testing easy. But it could also open a new era in disinformation, making it faster and cheaper to spread targeted political falsehoods — in campaigns that are increasingly difficult to track.

“If it’s an ad that’s shown to a thousand people in the country and nobody else, we don’t have any visibility into it,” said Bruce Schneier, a cybersecurity expert and lecturer at the Harvard Kennedy School.

Congress has yet to pass any laws regulating the use of generative AI in elections. The Federal Election Commission is reviewing a petition filed by the left-leaning advocacy group Public Citizen asking the agency to ban politicians from deliberately misrepresenting their opponents in ads generated by AI. Commissioners from both parties have expressed concern that the agency may not have the authority to weigh in without direction from Congress, and any effort to create new AI rules could confront political hurdles.

In a signal of how campaigns may embrace the technology, political firms are seeking a piece of the action. Higher Ground Labs, which invests in start-ups building technology for liberal campaigns, has published blog posts touting how its companies are already using AI. One company — Swayable — uses AI to “measure the impact of political messages and help campaigns optimize messaging strategies.” Another, Synesthesia, can turn text into videos with avatars in more than 60 languages.


Silicon Valley companies have spent more than half a decade battling political scrutiny over the power and influence they wield over elections. The industry was rocked by revelations that Russian actors abused their advertising tools in the 2016 election to sow chaos and attempt to sway Black voters. At the same time, conservatives have long accused liberal tech employees of suppressing their views.

Politicians and tech executives are preparing for AI to supercharge those worries — and create new problems.

Altman recently tweeted that he was “nervous” about the impact AI is going to have on future elections, writing that “personalized 1:1 persuasion, combined with high-quality generated media, is going to be a powerful force.” He said the company is curious to hear ideas about how to address the issue and teased upcoming election-related events.

He wrote, “although not a complete solution, raising awareness of it is better than nothing.”

OpenAI has hired former workers from Meta, Twitter and other social media companies to develop policies that address the unique risks of generative AI and help the company avoid the same pitfalls as their former employers.

Lawmakers are also trying to stay ahead of the threat. In a May hearing, Sen. Josh Hawley (R-Mo.) grilled Altman and other witnesses about the ways ChatGPT and other forms of generative AI could be used to manipulate voters, citing research that showed large language models, the mathematical programs that back AI tools, can sometimes predict human survey responses.

Altman struck a proactive tone in the hearing, calling the kind of manipulation Hawley described one of his greatest fears.

But OpenAI and many other tech companies are just in the early stages of grappling with the ways political actors might abuse their products — even while racing to deploy them globally. In an interview, Malfacini explained that OpenAI’s current rules reflect an evolution in how the company thinks about politics and elections.


“The company’s thinking on it previously had been, ‘Look, we know that politics is an area of heightened risk,’” said Malfacini. “We as a company simply don’t want to wade into those waters.”

But Malfacini called that blanket approach “exceedingly broad.” So OpenAI set out to create new rules to block only the most worrying ways ChatGPT could be used in politics, a process that involved reviewing novel political risks created by the chatbot. The company settled on a policy that prohibits “scaled uses” for political campaigns or lobbying.

For instance, a political candidate can use ChatGPT to revise a draft of a stump speech. But it would be against the rules to use ChatGPT to create 100,000 different political messages that would be individually emailed to 100,000 different voters. It’s also against the rules to use ChatGPT to create a conversational chatbot representing a candidate. However, political groups could use the model to build a chatbot that would encourage voter turnout.

But the “nuanced” nature of these rules makes enforcement difficult, according to Malfacini.

“We want to ensure we are developing appropriate technical mitigations that aren’t unintentionally blocking helpful or useful (non-violating) content, such as campaign materials for disease prevention or product marketing materials for small businesses,” she said.

A host of smaller companies that are involved in generative AI do not have policies on the books and are likely to fly under the radar of D.C. lawmakers and the media.

Nathan Sanders, a data scientist and affiliate of the Berkman Klein Center at Harvard University, warned that no one company could be responsible for developing policies to govern AI in elections, especially as the number of large language models proliferates.

“They’re no longer governed by any one company’s policies,” he said.




