Apple AI’s Platform Pivot Potential
Description
— Charles Dickens, A Tale of Two Cities
Apple’s Bad Week
Apple has had the worst of weeks when it comes to AI. Consider this commercial which the company was running incessantly last fall:
<figure class="wp-block-jetpack-videopress jetpack-videopress-player" style="" >
</figure>
In case you missed the fine print in the commercial, it reads:
Apple Intelligence coming fall 2024 with Siri and device language set to U.S. English. Some features and languages will be coming over the next year.
“Next year” is doing a lot of work, now that the specific feature detailed in this commercial — Siri’s ability to glean information from sources like your calendar — is officially delayed. Here is the statement Apple gave to John Gruber at Daring Fireball:
Siri helps our users find what they need and get things done quickly, and in just the past six months, we’ve made Siri more conversational, introduced new features like type to Siri and product knowledge, and added an integration with ChatGPT. We’ve also been working on a more personalized Siri, giving it more awareness of your personal context, as well as the ability to take action for you within and across your apps. It’s going to take us longer than we thought to deliver on these features and we anticipate rolling them out in the coming year.
It was a pretty big surprise, even at the time, that Apple, a company renowned for its secrecy, was so heavily advertising features that did not yet exist; I also, in full disclosure, thought it was all an excellent idea. From my post-WWDC Update:
The key part here is the “understanding personal context” bit: Apple Intelligence will know more about you than any other AI, because your phone knows more about you than any other device (and knows what you are looking at whenever you invoke Apple Intelligence); this, by extension, explains why the infrastructure and privacy parts are so important.
What this means is that Apple Intelligence is by-and-large focused on specific use cases where that knowledge is useful; that means the problem space that Apple Intelligence is trying to solve is constrained and grounded — both figuratively and literally — in areas where it is much less likely that the AI screws up. In other words, Apple is addressing a space that is very useful, that only they can address, and which also happens to be “safe” in terms of reputation risk. Honestly, it almost seems unfair — or, to put it another way, it speaks to what a massive advantage there is for a trusted platform. Apple gets to solve real problems in meaningful ways with low risk, and that’s exactly what they are doing.
Contrast this to what OpenAI is trying to accomplish with its GPT models, or Google with Gemini, or Anthropic with Claude: those large language models are trying to incorporate all of the available public knowledge to know everything; it’s a dramatically larger and more difficult problem space, which is why they get stuff wrong. There is also a lot of stuff that they don’t know because that information is locked away — like all of the information on an iPhone. That’s not to say these models aren’t useful: they are far more capable and knowledgable than what Apple is trying to build for anything that does not rely on personal context; they are also all trying to achieve the same things.
So is Apple more incompetent than these companies, or was my evaluation of the problem space incorrect? Much of the commentary this week assumes point one, but as Simon Willison notes, you shouldn’t discount point two:
I have a hunch that this delay might relate to security. These new Apple Intelligence features involve Siri responding to requests to access information in applications and then performing actions on the user’s behalf. This is the worst possible combination for prompt injection attacks! Any time an LLM-based system has access to private data, tools it can call, and exposure to potentially malicious instructions (like emails and text messages from untrusted strangers) there’s a significant risk that an attacker might subvert those tools and use them to damage or exfiltrating a user’s data.
Willison links to a previous piece of his on the risk of prompt injections; to summarize the problem, if your on-device LLM is parsing your emails, what happens if one of those emails contains malicious text perfectly tuned to make your on-device AI do something you don’t want it to? We intuitively get why code injections are bad news; LLMs expand the attack surface to text generally; Apple Intelligence, by being deeply interwoven into the system, expands the attack surface to your entire device, and all of that precious content it has unique access to.
Needless to say, I regret not raising this point last June, but I’m sure my regret pales in comparison to Apple executives and whoever had to go on YouTube to pull that commercial over the weekend.
Apple’s Great Week
Apple has had the best of weeks when it comes to AI. Consider their new hardware announcements, particularly the Mac Studio and its available M3 Ultra; from the company’s press release:
Apple today announced M3 Ultra, the highest-performing chip it has ever created, offering the most powerful CPU and GPU in a Mac, double




