This week, Apple announced its plan to bring artificial intelligence capabilities to iPhones and other Apple devices. That means photos, text messages, Notes, and other content will be able to take advantage of new AI features. Imagine removing the background of a photo on your phone with Apple Intelligence. Cool, right?

AI processing and Apple security, however, don’t inherently go together. The privacy we have come to expect from products like iMessage derives from Apple’s philosophy of processing everything on the device rather than in the cloud or elsewhere. This limits potential points of weakness and allows Apple to say that even it doesn’t have access to a user’s iPhone and iMessage content.

Yet even the iPhone 16, with Apple’s most advanced proprietary processors, doesn’t have the capacity to handle the heavyweight tasks we’ve come to expect from generative AI. Inevitably, Apple must process some of this information elsewhere. If you want to remove the background from a photo on your phone, the photo must leave your device.

Leaving your device means going out into the scary world of the internet at large. And it means storing your data on a server somewhere, even if only temporarily. Do you trust this server? Does Apple trust this server? Does the company that owns the server even trust the server?

Ideally, we wouldn’t have to answer these questions. In fact, Apple has taken pains to avoid them with its AI products so far. OCR and subject recognition in your photos still happen on your device, as does predictive text in iMessage. Alas, that approach can’t keep up with what we now expect AI to do.

In response, Apple has brought its particular brand of security to off-device processing with Private Cloud Compute (PCC). With PCC, Apple processes as much of the data as possible on-device and sends only the remainder to dedicated Apple servers. Those servers carry hardened security guarantees that are designed to be independently verifiable. Interested readers can dig into the specifics in the Apple security blog article, Private Cloud Compute: A new frontier for AI privacy in the cloud.
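To make that division of labor concrete, here is a minimal Swift sketch of the on-device-first routing idea. The type names, cost threshold, and functions are hypothetical stand-ins for illustration only, not Apple’s actual API.

```swift
import Foundation

// Hypothetical request type standing in for an Apple Intelligence task,
// e.g. "remove the background from this photo."
struct AITask {
    let payload: Data
    let estimatedCost: Int   // rough measure of how heavy the model work is
}

enum AIResult {
    case onDevice(Data)
    case privateCloud(Data)
}

// Placeholders standing in for the two execution paths.
func runLocally(_ data: Data) async throws -> Data { data }
func sendToPrivateCloudCompute(_ data: Data) async throws -> Data { data }

// Sketch of the routing idea behind PCC: keep the work local when the
// on-device models can handle it, and ship only the minimum necessary
// data to Apple's hardened servers when they can't.
func process(_ task: AITask) async throws -> AIResult {
    let onDeviceBudget = 1_000   // assumed capacity of the local models

    if task.estimatedCost <= onDeviceBudget {
        return .onDevice(try await runLocally(task.payload))
    } else {
        // Only the data needed for this single request is sent; the PCC
        // node is expected to discard it once the response comes back.
        return .privateCloud(try await sendToPrivateCloudCompute(task.payload))
    }
}
```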

For our purposes, Apple’s PCC means that no one, Apple included, has administrative, emergency, or backdoor access to the data processed on those servers. That cuts off one of the largest avenues of exploitation on any computer. It also makes Apple Intelligence different from the vast majority of products on the market.

Additionally, Apple has multiple layers of deletion, and of verifying that deletion, for data processed on its servers. So not only is Apple not training its AI models on your information, it isn’t even keeping logs of your prompts for debugging purposes.
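As a loose illustration of what that stateless promise looks like, here is a hypothetical server-side handler sketched in Swift. None of these names come from Apple’s implementation; the shape of the code is an assumption made for the sake of the example.

```swift
import Foundation

// Hypothetical server-side handler illustrating the "stateless node" idea:
// the request is decrypted, processed, and the response returned, with no
// copy of the user data or the prompt written to logs or to disk.
struct EphemeralRequest {
    let encryptedPayload: Data
}

// Placeholders standing in for real decryption and inference steps.
func decrypt(_ data: Data) throws -> Data { data }
func runModel(on data: Data) throws -> Data { data }

func handle(_ request: EphemeralRequest) throws -> Data {
    var plaintext = try decrypt(request.encryptedPayload)
    defer {
        // Zero out the buffer once the response is built; nothing about
        // this request survives the function call, and nothing is logged.
        plaintext.resetBytes(in: 0..<plaintext.count)
    }
    return try runModel(on: plaintext)
}
```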

As with all artificial intelligence products out there, users will want to be wary of hallucinations, bad source data, and copyright issues, problems that cut across every AI model. And as always, we suggest that users read the terms of service and only give access to data they have vetted. But at first glance, it appears that Apple Intelligence has avoided some of the major pitfalls of other AI products on the market.


Last updated September 13th, 2024