The company also introduced the latest iteration of its large language model, Llama 3, a move that puts Meta’s AI tools squarely in competition with the leading AI chatbots, including OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and Anthropic’s Claude. Zuckerberg touted the revamped Meta AI product as “the most intelligent AI assistant” that is free to use.
But experts warn that the broad use of the AI chatbot may amplify problems that have long plagued Meta’s social networks, including harmful misinformation, hate speech and extremist content. The company’s image generator is also likely to spark debates about how it chooses to depict race and gender when drumming up imaginary scenarios.
“There was a general fear about how LLMs would interact with social [media] and exacerbate misinformation, hate speech, etc.,” said Anika Collier Navaroli, a senior fellow at Columbia’s Tow Center for Digital Journalism and a former senior Twitter policy official. “And it feels like they just keep making it easier for the bad predictions to come true.”
Meta spokesman Kevin McAlister said in a statement that it is “new technology and it may not always return the response we intend, which is the same for all generative AI systems.
“Since we launched, we’ve constantly released updates and improvements to our models and we’re continuing to work on making them better,” he added.
While Meta AI will be available on a new stand-alone website, it will also populate search boxes on WhatsApp, Instagram, Facebook and Messenger. Meta has also experimented with putting the AI assistant into groups on Facebook, where it automatically chimes in to answer questions if no one has responded within an hour.
Meta has long faced scrutiny from activists and regulators about how it handles dicey content about politics, social issues and current events. AI-powered chatbots, which are known to “hallucinate” and give responses that are false or not grounded in reality, could deepen these controversies.
Including the chatbots is “inviting these tools to opine on topics from education to health, housing to local politics — all domains where developers of AI technology should be treading carefully,” said Miranda Bogen, the director of the AI Governance Lab at the think tank Center for Democracy and Technology and a former AI policy manager at Meta. “If developers fail to think through the contexts in which AI tools will be deployed, these tools will not only be ill-suited for their intended tasks but also risk causing confusion, disruption and harm.”
On Wednesday, Princeton computer science and public affairs professor Aleksandra Korolova posted screenshots on X of Meta AI speaking up in a Facebook group for thousands of New York City parents. Responding to a question about gifted and talented programs, Meta AI claimed to be a parent with experience in the city’s school system, and it went on to recommend a specific school.
McAlister said that the product is evolving and that some people may see some of Meta AI’s responses “replaced with a new response that says ‘This answer wasn’t useful and was removed. We’ll continue to improve Meta AI.’”
Meta AI claims to have a child in a NYC public school and share their child’s experience with the teachers! The reply is in response to a question looking for personal feedback in a private Facebook group for parents. Also, Meta’s algorithm ranks it as the top comment! @AIatMeta
— Aleksandra Korolova (@korolova) April 17, 2024
This week, an entrepreneur experimenting with Meta AI in WhatsApp found that it made up a blog post accusing him of plagiarism — even offering a formal citation for the post, which does not exist.
Image generators such as Meta’s have also come with their own problems. Earlier this month, a Verge reporter struggled to get Meta AI to generate images of an Asian person with a white person in a couple or as friends, despite giving the service repeated and specific prompts. In February, Google blocked the ability to generate images of people on its artificial intelligence tool Gemini after some users accused it of anti-White bias.
Now, Navaroli said she worries that biases baked into AI tools “will be fed back into social timelines,” potentially reinforcing those biases in a “feedback loop to hell.”
Korolova, the Princeton professor, said Meta AI’s potentially false claims in Facebook groups are probably “only a tip of the iceberg of harms Meta didn’t anticipate.”
“Just because the technology is new, should we be accepting a lower bar for potential harm?” Korolova asked. “This sounds like ‘Move fast and break things’ again.”