
Apple to 'update' malfunctioning AI news alert feature as journalists call for its removal

Matt Stuttard


There have been further calls from journalists for Apple to withdraw its artificial intelligence news alerts, after a string of recent false headlines was wrongly attributed to major news organizations. On Monday, Apple promised to update the feature, the first time the tech giant has commented on a weeks-long row which has seen a formal complaint issued by the BBC.

The AI feature was rolled out towards the end of last year to users of the latest iPhone models, as well as some iPads and Macs. Several inaccurate alerts have since been issued, most of them attributed to, and visually branded as coming from, the BBC. They include one which said that Luigi Mangione, the man accused of killing UnitedHealthcare CEO Brian Thompson, had shot himself.

Screengrab of recent Apple AI news alert / Handout

Further false alerts claimed that Luke Littler had won the PDC World Darts final before the match had even begun, and that the Spanish tennis star Rafael Nadal had come out as gay, while also wrongly describing him as Brazilian.

The alerts are not generated centrally or distributed by Apple. Instead, AI running on the user's device trawls through incoming notifications from multiple apps and summarises them. Results vary from user to user, depending on which apps are installed and on individual settings.

Apple says it's in the process of developing an update that will clarify when notifications are AI summaries. In its statement on Monday, Apple said: 

"Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback. A software update in the coming weeks will further clarify when the text being displayed is summarisation provided by Apple Intelligence. We encourage users to report a concern if they view an unexpected notification summary.”

On the same day the BBC said:

"These AI summarisations by Apple do not reflect – and in some cases completely contradict – the original BBC content. "It is critical that Apple urgently addresses these issues as the accuracy of our news is essential in maintaining trust.”

Screengrab of recent Apple AI news alert / Handout

In an earlier AI notification sent to some users in November, the New York Times was incorrectly cited as reporting that Israeli Prime Minister Benjamin Netanyahu had been arrested. The paper has not commented.

The BBC is not alone in its criticism of the new tool. 

Reporters Without Borders (RSF) has called on Apple to drop the feature completely. In a statement on its website, it said:

"This accident illustrates that generative AI services are still too immature to produce reliable information for the public, and should not be allowed on the market for such uses."

RSF's head of technology and journalism, Vincent Berthier, said: "The automated production of false information attributed to a media outlet is a blow to the outlet's credibility."

On Tuesday, the UK's National Union of Journalists echoed the call, its general secretary Laura Davison saying: 

"Apple must act swiftly, removing Apple Intelligence to ensure it plays no role in contributing to the misinformation already prevalent and causing harm to journalism online. At a time where access to accurate reporting has never been more important, the public must not be placed in a position of second-guessing the accuracy of news they receive.

"Editorial integrity is crucial to our public service broadcaster, and AI-generated summaries falsely attributing information risk harm to the reputation of journalists reporting ethically. We will continue to engage with the BBC, supporting their concerns about Apple's feature, whilst making clear our calls to Apple for action that goes beyond an update to Apple Intelligence."

Other companies have faced serious teething problems while introducing artificial intelligence. Last year, several comical Google AI search summaries were widely shared on social media. They included recipes for gasoline spaghetti and glue pizza, and a false recommendation attributed to geologists that humans should eat at least one rock per day.

Google has called them "isolated examples." 

A McDonald's restaurant in Arlington, Virginia, U.S. /Joshua Roberts/Reuters

Last June, McDonald's terminated a three-year trial of an AI ordering system, developed with IBM, after customers complained of mistakes such as bacon-topped ice cream and hundreds of dollars' worth of chicken nuggets.

Elsewhere, Microsoft's MyCity chatbot was found to have given entrepreneurs advice that would break the law; Air Canada paid damages to a customer after its virtual assistant provided false information; and Elon Musk's Grok AI chatbot falsely reported that NBA star Klay Thompson had been accused of throwing bricks through windows of houses in Sacramento.

And the 'founding father' of AI chatbots, OpenAI's ChatGPT, has been shown to have 'hallucinated' nonexistent court cases. Lawyers in both the U.S. and the UK were found to have used ChatGPT to search for previous rulings to support their cases; several of the rulings it cited had never happened.

Source(s): AP, Reuters, AFP