
Microsoft, Google chatbots made false claims of cease-fire in Israel-Hamas war: report




AI chatbots operated by Microsoft and Google are spitting out incorrect information about the Israel-Hamas war – including false claims that the two sides agreed to a cease-fire.

Google’s Bard declared in one response on Monday that “both sides are committed” to maintaining peace “despite some tensions and occasional flare-ups of violence,” according to Bloomberg.

Bing Chat wrote Tuesday that “the ceasefire signals an end to the immediate bloodshed.”

No such cease-fire has occurred. Hamas has continued firing barrages of rockets into Israel, while Israel’s military on Friday ordered the evacuation of approximately 1 million people in Gaza ahead of an expected ground invasion to root out the terrorist group.

Google’s Bard also bizarrely claimed on Oct. 9 – two days before the date it cited – that “as of October 11, 2023, the death toll has surpassed 1,300.”

The chatbots “spit out glaring errors at times that undermine the overall credibility of their responses and risk adding to public confusion about a complex and rapidly evolving war,” Bloomberg reported after conducting the analysis.

The issues were discovered after Google’s Bard and Microsoft’s Bing Chat were asked to answer a series of questions about the war – which broke out last Saturday after Hamas launched a surprise attack on Israeli border towns and military bases, killing more than 1,200 people.

Israel has ordered an evacuation in Gaza. (Mirrorpix / MEGA)

Despite the errors, Bloomberg noted that the chatbots “generally stayed balanced on a sensitive topic, and often gave decent news summaries” in response to user questions. Bard reportedly apologized and retracted its claim about the cease-fire when asked if it was sure, while Bing Chat had amended its response by Wednesday.


Both Microsoft and Google have acknowledged to users that their chatbots are experimental and prone to including false information in their responses to user prompts.

These inaccurate answers, known as “hallucinations,” are a source of particular concern for critics who warn that AI chatbots are fueling the spread of misinformation.

Hamas staged a surprise attack last weekend. (MOHAMMED SABER/EPA-EFE/Shutterstock)

When reached for comment, a Google spokesperson said Bard and the company’s AI-powered search features were released as “opt-in experiments,” adding that Google is “always working to improve their quality and reliability.”

“We take information quality seriously across our products, and have developed protections against low-quality information along with tools to help people learn more about the information they see online,” the Google spokesperson said.

“We continue to quickly implement improvements to better protect against low quality or outdated responses for queries like these,” the spokesperson added.

Google Bard is still in an experimental phase. (Gado via Getty Images)

Google noted that its trust and safety teams are actively monitoring Bard and working quickly to address issues as they arise.

Microsoft told the outlet that it had investigated the mistakes and would be making adjustments to the chatbot in response.

 “We have made significant progress in the chat experience by providing the system with text from the top search results and instructions to ground its responses in these top search results, and we will continue making further investments to do so,” a Microsoft spokesperson said.
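Microsoft’s description matches a common pattern known as retrieval-augmented generation, in which a model is fed fresh search results and instructed to answer only from them. The Python sketch below illustrates that general idea; it is a minimal illustration under assumptions, and the `web_search` and `generate` functions are hypothetical placeholders, not Microsoft’s actual implementation or API.

```python
# Minimal sketch of grounding a chatbot answer in top search results.
# All names here (web_search, generate) are hypothetical placeholders,
# not Microsoft's actual system.

def web_search(query: str, k: int = 3) -> list[str]:
    """Hypothetical search call returning the text of the top-k results."""
    raise NotImplementedError("wire up a real search backend here")

def generate(prompt: str) -> str:
    """Hypothetical language-model call."""
    raise NotImplementedError("wire up a real model here")

def grounded_answer(question: str) -> str:
    # 1. Retrieve current sources so the model is not answering from
    #    stale training data alone.
    sources = web_search(question)
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    # 2. Instruct the model to rely only on the retrieved text and to
    #    cite it -- the "grounding" Microsoft describes.
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources like [1]. If the sources do not contain the answer, "
        "say you don't know rather than guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The key design choice in this pattern is the refusal instruction: telling the model to say it doesn’t know when the retrieved sources are silent is what reduces the fabricated answers – such as a nonexistent cease-fire – described above.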

Both Microsoft and Google say their chatbots are prone to mistakes. (BELGA MAG/AFP via Getty Images)

The Post has reached out to Microsoft for further comment.

Earlier this year, experts told The Post that AI-generated “deepfake” content could wreak havoc on the 2024 presidential election if protective measures aren’t in place ahead of time.


In August, British researchers found that ChatGPT, the chatbot created by Microsoft-backed OpenAI, generated cancer treatment regimens that contained a “potentially dangerous” mixture of correct and false information.



