After months of speculation, Apple Intelligence took center stage at WWDC 2024 in June. The platform was announced following a torrent of generative AI news from companies like Google and OpenAI, sparking concerns that the famously tight-lipped tech giant had missed the boat on the latest technological craze.
Contrary to such speculation, however, Apple had a team in place, working on what turned out to be a very Apple approach to artificial intelligence. There was still some showmanship amid the demos (Apple always likes to put on a show), but Apple Intelligence is ultimately a pragmatic take on the category.
Apple Intelligence (yes, AI for short) is not a standalone feature. Rather, it’s about integrating with existing offerings. Although there is a very real branding exercise at work here, large language model (LLM) technology operates behind the scenes. For the consumer, the technology will mostly arrive as new features for existing apps.
We learned more at Apple’s iPhone 16 event, held on September 9. During the event, Apple touted a number of AI-powered features coming to its devices, from translation on the Apple Watch Series 10 and visual search on iPhones to a number of upgrades to Siri’s capabilities. The first wave of Apple Intelligence arrives at the end of October as part of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1. A second wave of features is available as part of the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 developer betas.
The features first launched in US English. Apple has since added Australian, Canadian, New Zealand, South African, and British English localizations.
Support for Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish and Vietnamese will arrive in 2025. Notably, users in China and the EU may not have access to Apple Intelligence features, due to regulatory hurdles.
What is Apple Intelligence?
Cupertino marketing executives have branded Apple Intelligence “AI for the rest of us.” The platform is designed to leverage the things generative AI already does well, like generating text and images, to improve existing features. Like other platforms, including ChatGPT and Google Gemini, Apple Intelligence was trained on large data sets. These systems use deep learning to form connections, whether across text, images, video, or music.
The LLM-powered text features show up as Writing Tools. The feature is available across various Apple apps, including Mail, Messages, Pages, and Notifications. It can be used to summarize long text, proofread, and even write messages for you, using content and tone prompts.
Image generation has been integrated in a similar fashion, albeit a bit less seamlessly. Users can prompt Apple Intelligence to generate custom emojis (Genmojis) in an Apple house style. Image Playground, meanwhile, is a standalone image generation app that uses prompts to create visual content that can be used in Messages and Keynote or shared via social media.
Apple Intelligence also marks a long-awaited facelift for Siri. The smart assistant was early to the game but has been largely neglected in recent years. Siri is now integrated much more deeply into Apple’s operating systems; for example, instead of the familiar icon, users will see a glowing light around the edge of their iPhone screen when the assistant is doing its thing.
More importantly, the new Siri works across applications. That means, for example, you can ask Siri to edit a photo and then insert it directly into a text message, a frictionless experience the assistant previously lacked. Onscreen awareness means Siri uses the context of the content you’re currently working with to provide an appropriate response.
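For developers, the plumbing that lets Siri reach into third-party apps is Apple’s existing App Intents framework. As a rough illustration only (the intent and parameter names below are hypothetical, not Apple’s), an app can declare an action that Siri can invoke by name:

```swift
import AppIntents

// Hypothetical example: an app exposes a "apply a filter to a photo"
// action via App Intents so Siri can trigger it. Names are illustrative.
struct ApplyFilterIntent: AppIntent {
    static var title: LocalizedStringResource = "Apply Filter"

    // The filter the user asks for, e.g. "black and white".
    @Parameter(title: "Filter Name")
    var filterName: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would perform the photo edit here.
        return .result(dialog: "Applied the \(filterName) filter.")
    }
}
```

This is just a sketch of the general pattern; how deeply Apple Intelligence can drive such actions depends on the features Apple ships.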
Who gets Apple Intelligence, and when?
The first wave of Apple Intelligence arrives in October via the iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 updates. It includes integrated Writing Tools, image Clean Up, article summaries, and typed input for the redesigned Siri experience.
A second wave of features is available as part of the iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2 developer betas. That list includes Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.
The features will be free to use, provided you have one of the following pieces of hardware:
- All iPhone 16 models
- iPhone 15 Pro Max (A17 Pro)
- iPhone 15 Pro (A17 Pro)
- iPad Pro (M1 and later)
- iPad Air (M1 and later)
- iPad mini (A17 Pro)
- MacBook Air (M1 and later)
- MacBook Pro (M1 and later)
- iMac (M1 and later)
- Mac mini (M1 and later)
- Mac Studio (M1 Max and later)
- Mac Pro (M2 Ultra)
Notably, only the Pro versions of the iPhone 15 have access, owing to shortcomings in the standard model’s chipset. The entire iPhone 16 line, however, is able to run Apple Intelligence.
Private Cloud Compute
Apple took a small-model, bespoke approach to training. Rather than relying on the kind of kitchen-sink approach that powers platforms like GPT and Gemini, the company compiled data sets in-house for specific tasks like, say, composing an email. The biggest advantage of this approach is that many of these tasks become far less resource-intensive and can be performed on-device.
But this doesn’t apply to everything. More complex queries will use the new Private Cloud Compute offering. The company now operates remote servers running on Apple Silicon, which it says allows it to offer the same level of privacy as its consumer devices. Whether an action is performed locally or via the cloud will be invisible to the user unless their device is offline, in which case remote queries will generate an error.
Apple Intelligence with third-party apps
There was a lot of talk about the ongoing partnership between Apple and OpenAI ahead of WWDC. Ultimately, however, it turned out that the deal was less about powering Apple Intelligence and more about offering an alternative platform for things it’s not really designed for. This is a tacit acknowledgment that building a small-model system has its limitations.
Apple Intelligence is free. So, too, is access to ChatGPT. However, those with paid ChatGPT accounts will have access to premium features free users don’t, including unlimited queries.
The ChatGPT integration, which debuts in iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, has two primary roles: supplementing Siri’s knowledge base and adding to existing writing tool options.
Once the service is enabled, certain questions will prompt the new Siri to ask the user to approve accessing ChatGPT. Recipes and travel planning are examples of questions that may surface the option. Users can also directly prompt Siri to “ask ChatGPT.”
Compose is the other main ChatGPT feature available through Apple Intelligence. Users can access it in any app that supports the new Writing Tools feature. Compose adds the ability to write content based on a prompt, joining existing writing tools like Style and Summary.
We know that Apple plans to partner with additional generative AI services. The company has all but said that Google Gemini is next on the list.