7 AI features the iPhone 17 needs to embrace from Google, OpenAI, and others

Apple Intelligence on iPhone 16 Pro Max

Jason Hiner/ZDNET



ZDNET’s key takeaways

  • The release of the Google Pixel 10 phones with deeply integrated AI features that offer impressive new capabilities has revealed how vulnerable the iPhone 17 could be.
  • Many of the smartest AI services in the world are already on the iPhone as apps, so Apple could partner with them for deeper integrations.
  • The new AI camera features in the Pixel 10 could be the biggest differentiator between it and the iPhone 17.

While the iPhone has virtually all of the smartest AI apps available from the latest AI trailblazers, it lacks the kind of deep integration of AI features that are only available at the intersection of the operating system and the latest hardware. That’s what we’ve seen with the rollout of Google’s Pixel 10 lineup. 

Here are seven features from AI leaders that would make a huge impact if they were seamlessly embedded at the system level in the iPhone 17.

1. ChatGPT’s Voice Mode

OpenAI’s Voice Mode in ChatGPT essentially works the way I’ve always wanted Siri to work on the iPhone. You just fire it up and start talking to it in natural language, and it can answer questions, pull up information, and even carry out a few actions. ZDNET’s Sabrina Ortiz has explained how to assign Voice Mode to the iPhone’s Action Button to use it like a Siri replacement. 

Also: How to use ChatGPT’s Voice Mode (and why you’ll want to)

But Voice Mode, which is being renamed ChatGPT Voice and is soon rolling out to free users, is still limited in the commands it can carry out on your iPhone. An Apple version of this feature, or a partnership with OpenAI, could allow much deeper integration across calendar, email, text messages, notes, settings, and other operating system tasks (with Apple privacy protections in place). Google already offers Gemini Live and Microsoft offers Copilot Voice, so Apple needs to move decisively to help the iPhone keep up.

2. Pixel 10’s Pro Res Zoom

Google Pixel 10 Telephoto Camera

Kerry Wan/ZDNET

I’ve written before about how much I love zoom photography, and how it’s the one area where phone cameras still fall short, forcing me to regularly turn to my Sony mirrorless camera and 70-200mm zoom lens. However, Google has taken a big step toward closing that gap with the Pixel 10 Pro. With its new Pro Res Zoom feature, the Pixel 10 Pro fills in missing data and automatically processes a digital zoom image up to 100x to make it more usable. 

Also: Pixel just zoomed ahead of iPhone in the camera photography race

This brings up a number of questions about what makes a photo, and I still need to try it out on the Pixel 10 Pro to report back on how well it works, but this feels like a worthy use of computational photography. And the only smartphone maker that’s going to compete with Google on computational photography is Apple.

3. Google’s Magic Cue

Last year at WWDC 2024, Apple made a big deal about its Personal Intelligence feature that could understand your questions and requests because it had information about you from your calendar, mail, text messages, and other data stored privately in the Apple ecosystem. In the WWDC keynote, Apple used examples like “Pull up the files Joz shared with me last week” and a real-time alert that a meeting you’re about to reschedule could conflict with giving your kid a ride to a regular activity. 

Of course, Apple has never shipped this feature, but now Google has. In the Pixel 10, Google launched Magic Cue, which can save you from jumping between apps by knowing enough about you to surface relevant info. In one example Google provided, someone asked via text message what time dinner reservations were; Magic Cue pulled the time from a Gmail confirmation message and surfaced it right in the messaging app, so the user simply had to tap it to send a response. 

Apple Intelligence: AI for the rest of us

Jason Hiner/ZDNET

Google says this kind of action can now happen locally on the device because of the Tensor G5 chip in the Pixel 10. Still, I think more people would trust Apple with their privacy on a feature like this because Apple doesn’t make money off of using your data in opportunistic ways. 

4. Deep Research from Anthropic

One of the biggest ways generative AI saves me time is as a research assistant. Several AI apps now offer a Deep Research feature, where you can ask a question about a complex topic and give the AI extra time (usually 5-30 minutes) to scour available sources and come back with an answer that includes clearly marked links to where the info came from.

Also: Anthropic wants to stop AI models from turning evil – here’s how

I prefer to use Deep Research from Anthropic’s Claude app because of its focus on accuracy. There have been many reports that Apple has been in talks with Anthropic about various collaboration opportunities. Integrating Claude’s Deep Research into Siri so that you could trigger it quickly from a voice or text prompt would be a powerful option. 

5. Best Take from Google Photos

Google first launched its Best Take feature on the Pixel 8 in 2023 and recently gave it another big upgrade on the Pixel 10. The feature came about from a collaboration by the Google Pixel, Google Photos, and Google Research teams working together to solve the “group shot dilemma.” 

It takes multiple photos shot back-to-back of a group in which someone has their eyes closed, is looking away from the camera, or is making an awkward expression, and then combines everyone’s best take into a single, more usable photo. The new Auto Best Take on the Pixel 10 does this in the background and produces the Best Take photo for you.

Also: I tried every new AI feature on the Google Pixel 10 series

Similarly, there’s the Add Me feature (launched on the Pixel 9), which uses AR and AI in clever ways to let the photographer be added to the group shot by essentially combining two photos, guided by the camera app. It’s reasonable to expect that Apple has the computational photography chops to pull this off, or the relationship with Google to license the technology, especially since it’s based in the Google Photos app that’s already available on iOS.

6. Much broader language support

Oakley vs Ray-Ban Meta

Sabrina Ortiz/ZDNET

One of the most advanced capabilities of large language models is translating between languages, and we’ve seen not only smartphones take advantage of this but smart glasses as well, including the Ray-Ban Meta glasses, Solos AirGo 3, Even Realities G1, and The Frame from Brilliant Labs. Some of these smart glasses, along with several phone apps, can now translate dozens of languages (Google Translate supports over 100). 

Apple still lags behind by only supporting 20 languages in Apple Translate. By tapping into the power of LLMs, Apple should boost the number of supported languages considerably and integrate them into Siri and other AI features, such as Live Translation in phone calls and text messages, and Visual Intelligence. 

7. Conversational photo editing from Google

Perhaps the biggest surprise feature in the new Pixel 10 phones is Conversational Editing in Google Photos. It allows you to describe the changes you’d like to make to a photo, and the AI automatically makes them. For example, you could have it move the subject in a scene, remove glare or reflections, re-center an object, replace the background, add clouds to a blue sky, increase or decrease background blur, and more. 

Also: Google Pixel 10 series hands-on: I did not expect this model to be my favorite

Of course, altering photos can be sensitive. On LinkedIn, Google’s product lead for computational photography noted, “We have tuned our models to be hypersensitive to small details in the photo so that it reflects the context you want to keep with the changes you want to make.”

I suspect this is going to be a very popular feature, since it’s easy to access and doesn’t require the advanced technical skills these kinds of photo edits used to demand. 

Final word

Apple has a lot of work to do to catch up with the features that the leading AI companies are bringing to their iPhone apps — let alone the deep AI integration that Google is now bringing to key features on its Pixel phones. 

While the delay in rolling out Apple Intelligence features may not have seemed to hurt the iPhone over the past year, Apple will need to close the gap to avoid the iPhone 17 feeling like a device that’s a step behind. Right now, Google can make a pretty strong case that it has the smartest phone in the industry.
