Unless you’ve been hiding under a tin-foil-covered rock, you’ll know that Apple has entered the game with its own AI – Apple Intelligence. Apple has spent much of the time since the WWDC 2024 keynote reveal on June 10 highlighting how its AI is better than the competition, even though it’ll be teaming up with ChatGPT, and eventually Gemini. But there’s one common AI complaint even Apple cannot solve: accuracy.
In an interview with The Washington Post (the piece is behind a paywall), CEO Tim Cook admitted that while “[Apple] has done everything that we know to do,” he can’t say Apple Intelligence will be perfectly accurate, adding: “I am confident it will be very high quality, but I’d say in all honesty that’s short of 100 percent.”
That seems like a realistic assertion, given some of the high-profile AI gaffes we’ve seen in recent months. First, there was Google’s AI Overviews – a still-experimental feature that served up plenty of wrong or downright bizarre results following its wider US rollout.
Then there was the time in February when ChatGPT seemed to have an off day and became even creepier than usual. Apple’s play with Apple Intelligence seems to be to separate itself from those high-profile mistakes by handling more limited AI processing on-device – and then sending you to ChatGPT for queries that need cloud servers. That could be a smart move, but it seems even Apple can’t promise that its AI-powered features won’t occasionally go off the rails.
Opinion: Built-in humility is a must-have feature
While some of us may read this statement as an admission of failure from Tim Cook, I see it as yet another success of Apple’s approach to AI – and I seriously hope this honest appraisal of AI’s flaws is reflected in the responses Apple Intelligence provides.
I’ve been testing the Meta AI built into its Ray-Ban smart glasses collaboration a fair bit recently, and my favorite response to hear is “I don’t know” – or some variant of it.
Listening to some AI press conferences, you’d come away believing it’s a technological messiah here to solve all of life’s woes – but it’s not.
AI is a mere mortal, with all the same flaws you’d find in the human-made content it has been trained on. That means there will be gaps in the AI’s knowledge, misunderstandings rooted in inaccuracies in the data it ingests, and biases baked into its core interpretations of the world – because they’re baked into ours.
The perfect AI isn’t a virtual assistant that confidently spouts error-ridden answers – it’s a system that is able to unapologetically hold its digital hands up and admit it doesn’t have a good answer.
This might be a challenge to implement, but one form could simply be an automatic version of the ‘Double-check response’ feature in Google Gemini, where the AI fact-checks itself using Google Search. If it finds supporting articles for its points from a reliable source, it highlights the sentence in green and adds a hyperlink; if Google Search returns no reliable support, or conflicting results, the sentence gets a red highlight and an explanation.
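To make that concrete, here’s a minimal sketch of how sentence-level self-verification along those lines might work. This isn’t Gemini’s actual implementation – the search_for_support() helper and its hard-coded known_facts lookup are hypothetical stand-ins for a real search API:

```python
# A minimal sketch of sentence-level self-verification, in the spirit of
# Gemini's 'Double-check response' feature. search_for_support() is a
# hypothetical stand-in for a real web-search call.

from dataclasses import dataclass

@dataclass
class VerifiedSentence:
    text: str
    status: str              # "supported" (green) or "unverified" (red)
    source_url: str | None   # link to a supporting article, if any

def search_for_support(claim: str) -> str | None:
    """Hypothetical search step: return a URL from a reliable source
    that supports the claim, or None if nothing trustworthy turns up."""
    known_facts = {
        "Apple announced Apple Intelligence at WWDC 2024.":
            "https://www.apple.com/newsroom/",
    }
    return known_facts.get(claim)

def double_check(answer: str) -> list[VerifiedSentence]:
    """Split an AI answer into sentences and flag each one as
    supported (with a citation) or unverified (needing a caveat)."""
    results = []
    for sentence in answer.split(". "):
        sentence = sentence.rstrip(".") + "."
        url = search_for_support(sentence)
        status = "supported" if url else "unverified"
        results.append(VerifiedSentence(sentence, status, url))
    return results

if __name__ == "__main__":
    answer = ("Apple announced Apple Intelligence at WWDC 2024. "
              "It is guaranteed to be 100 percent accurate.")
    for item in double_check(answer):
        marker = "GREEN" if item.status == "supported" else "RED"
        print(f"[{marker}] {item.text} -> "
              f"{item.source_url or 'no reliable support found'}")
```

The point of the design is that every sentence leaves the pipeline with an explicit trust label, so the interface can attach a citation or a caveat rather than presenting everything with unqualified confidence.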
Automatic citations and clear indications of when the AI is and isn’t confident in its answer should come as standard – especially because seeing these admissions doesn’t make me trust the AI less; it makes me trust it more.
Apple Intelligence hasn’t even launched in beta yet, so we haven’t had a chance to see how it works in practice, but I seriously hope Siri follows Tim Cook’s lead and isn’t afraid to proudly admit when it’s not 100% confident or accurate.