
Apple’s Biggest AI Challenge? Making It Behave


Jun 10, 2024 7:39 PM


Apple Intelligence will make apps and services smarter. But Apple’s most notable innovations focus on ensuring the technology doesn’t disappoint, annoy, or offend.


Craig Federighi, senior VP of software engineering at Apple, delivers remarks at the start of the Apple Worldwide Developers Conference on June 10, 2024. Photograph: Justin Sullivan/Getty Images

Apple has a history of arriving late to a market and succeeding anyway: the iPhone, the Apple Watch, and AirPods, to name a few. Now the company hopes to show that the same approach will work with generative artificial intelligence, announcing today an Apple Intelligence initiative that bakes the technology into just about every device and application Apple offers.

Apple unveiled its long-awaited AI strategy at the company’s Worldwide Developer Conference (WWDC) today. “This is a moment we've been working towards for a long time,” said Apple CEO Tim Cook at the event. “We're tremendously excited about the power of generative models.”

That may be so, but Apple also seems to understand that generative AI must be handled with care since the technology is notoriously data hungry and error prone. The company showed Apple Intelligence infusing its apps with new capabilities, including a more capable Siri voice assistant, a version of Mail that generates complex email responses, and Safari summarizing web information. The trick will be doing those things while minimizing hallucinations, potentially offensive content, and other classic pitfalls of generative AI—while also protecting user privacy.

Apple Intelligence will still tap into private user information to make its models more useful by better understanding a person’s interests, habits, and schedule, for instance. But those insights typically require privacy trade-offs that Apple is trying to avoid.

“We think about what it means for intelligence to be really useful, it has to be centered on you,” Craig Federighi, Apple's senior vice president of software engineering, said at a briefing after the WWDC keynote presentation. “That requires some really deep thoughts about privacy.”

While many generative AI programs, including ChatGPT, run in the cloud, Apple says Apple Intelligence will primarily use AI models running locally on its devices. It has also developed a way to determine whether a query needs to be handed off to a more powerful AI model in the cloud, and a technology called Private Cloud Compute that it says will keep personal data secure should it be sent off-device.
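Apple has not published the routing logic, but the basic idea of deciding on-device whether a request can be handled locally or must escalate to a server-side model can be sketched roughly as follows. This is a hypothetical illustration: the function names, fields, and threshold are invented for the example and are not Apple's API.

```python
# Hypothetical sketch of on-device vs. cloud routing for an AI request.
# All names and thresholds here are illustrative, not Apple's implementation.

from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_world_knowledge: bool  # e.g., open-ended questions beyond local data

ON_DEVICE_TOKEN_BUDGET = 512  # assumed capacity of the smaller local model

def route(request: Request) -> str:
    """Return 'on_device' when the local model should suffice, or
    'private_cloud' when the query must escalate to server-side models."""
    too_long = len(request.prompt.split()) > ON_DEVICE_TOKEN_BUDGET
    if request.needs_world_knowledge or too_long:
        return "private_cloud"  # sent, encrypted, to Private Cloud Compute
    return "on_device"          # handled locally; data never leaves the device

route(Request("Summarize my last three emails", False))   # -> "on_device"
route(Request("Explain this week's market news", True))   # -> "private_cloud"
```

The design choice the sketch captures is that escalation is decided on the device itself, so nothing is sent off-device unless the local model is judged insufficient.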

In a blog post outlining the technology, Apple said Private Cloud Compute is designed to prevent the information used in a query from being retained by a model or elsewhere on the server, and will not allow developers or Apple to access sensitive information. It said the system runs on new server hardware built on Apple silicon that keeps data in secure storage areas, and will employ end-to-end encryption to ensure data cannot be spied on.

“I think it solves a necessary, profound challenge,” Federighi said. “Cloud computing typically comes with some real compromises when it comes to privacy assurances. Even if a company makes some promise, ‘We’re not going to do anything with your data,’ you have no way to verify that.”

Keeping that data private shouldn’t compromise Apple Intelligence’s capabilities, said John Giannandrea, senior vice president of machine learning and AI strategy, at the same briefing. In another blog post, Apple revealed that it has developed its own AI models using a framework called AXLearn, which it made open source in 2023. It said that it has employed several techniques to reduce the latency and boost the efficiency of its models.

Giannandrea said that Apple had focused on reducing hallucinations in its models partly by using curated data. “We have put considerable energy into training these models very carefully,” he said. “So we're pretty confident that we're applying this technology responsibly.”

That training-wheels approach to AI applies across Apple’s offerings. If it works as promised, it should mean that Apple Intelligence is less prone to fabricating information or suggesting something inappropriate. In its blog post, Apple claimed that testers rated its models more useful, and less harmful, more often than competing on-device models from OpenAI, Microsoft, and Google. "We're not taking this teenager and sort of telling him to go fly an airplane," Federighi said.

Apple’s hotly anticipated tie-in with OpenAI will also keep ChatGPT at arm’s length, with Siri and a new writing assistant called Writing Tools tapping it only for certain tricky queries, and only with a user’s permission. “We'll ask you before you go to ChatGPT,” Federighi said. “From a privacy point of view, you're always in control and have total transparency with that experience that you leave Apple's privacy realm and go out and use that other model.”

Apple’s deal with OpenAI would have once seemed highly unlikely. The startup has experienced a meteoric rise, thanks to the brilliance of its chatbot, but it has also repeatedly courted controversy with legal battles, boardroom drama, and its relentless promotion of a powerful but unreliable technology. Federighi said that Apple may incorporate Google’s flagship Gemini model at a future date, without offering further information.

Apple has been derided for moving slower than its competitors in building generative AI, and it has not yet revealed anything as powerful as OpenAI’s ChatGPT or Google’s Gemini, but the company has published some notable AI research, including details of its own multimodal models that run on devices.

Apple once seemed to have a lead in leveraging AI for personal computing, after launching Siri in 2011. The assistant made use of recent AI breakthroughs at the time to recognize speech more reliably, and sought to turn a limited range of voice commands into useful actions on the iPhone.

Competitors like Amazon, Google, and Microsoft soon followed suit with voice assistants of their own, but their utility was fundamentally limited by the challenge of parsing meaning from complex and ambiguous language. The large language models that power programs like ChatGPT represent a significant advance in machines’ ability to handle language, and Apple and others hope to use AI to upgrade their personal assistants in a number of ways. LLMs could make helpers like Siri better able to understand complex commands and hold relatively sophisticated conversations. They could also provide a way for assistants to use software by writing code on the fly.

“They came through with a commitment to personal, private, and context-aware AI,” says Tom Gruber, an AI entrepreneur who cofounded the company that developed Siri, which was acquired by Apple in 2010. Gruber says he was happy to see the company demo use cases that emphasized those features.

Other observers say that Apple’s announcements amount to an effort to match the competition without risking too many gaffes. “What Apple is great at is offering great new capabilities and showing us new ways to do things,” says David Yoffie, a professor at Harvard Business School. “None of the things announced seem like that, which isn’t surprising because they’re playing catch-up.”

Yoffie says Apple’s focus on data privacy and security was unsurprising given the worries people have about sharing data with programs like ChatGPT. “Generative AI is a complement for the iPhone,” he says. “I think it’s important that they show they aren’t behind the Android world, which I think they did today.”

Still, generative AI is definitionally unpredictable. Apple Intelligence may have behaved in testing, but there’s no way to account for every output once it’s unleashed on millions of iOS and macOS users. To live up to its WWDC promises, Apple will need to imbue AI with a feature no one else has yet managed. It needs to make it behave.

Will Knight is a senior writer for WIRED, covering artificial intelligence.