Apple's approach to promoting artificial intelligence (AI) comes down to "doing real things, not blowing smoke."
On Monday, June 5, local time, at Apple's annual Worldwide Developers Conference (WWDC), AI itself was not the centerpiece of the day's announcements, even though outside observers had hoped Apple would unveil more ambitious AI applications. Apple preferred to highlight the concrete, practical features the technology makes possible.
At the conference, Apple presented its AI and machine learning work in pragmatic terms, including an improved iPhone autocorrect feature built on machine learning. The feature uses a transformer language model, the same underlying technology that powers ChatGPT, and Apple said it will even learn from users' texts and typing styles to refine their experience.
Apple's software chief, Craig Federighi, joked that the iPhone's autocorrect tends to replace a common expletive with the meaningless word "ducking." "When you just want to type 'ducking,'" he said, "the keyboard will learn it too."
In fact, as a product company, Apple does not like to talk about "artificial intelligence," preferring the more academic term "machine learning" or simply describing what the technology can do. This lets Apple keep the spotlight on the features themselves and on the improvements still to come.
For example, at the conference Apple demonstrated an improvement to the AirPods Pro earbuds, which can automatically switch off noise cancellation when the wearer enters a conversation. Apple did not bill this as a machine learning feature, but it is a hard problem to solve, and the solution rests on artificial intelligence models.
Another practical idea from Apple is the new Digital Persona feature, which scans a user's face and body in 3D and then virtually recreates their appearance when they wear the Vision Pro headset for video calls with others.
Apple also mentioned several other new features that draw on its progress in neural networks, such as the ability to identify the fields to be filled in on a PDF.
For most viewers, the Vision Pro mixed reality headset was undoubtedly the star of this year's developer conference: a heavyweight product Apple has spent years carefully polishing. Some in the industry have called it Apple's new "iPhone moment," and Apple CEO Tim Cook has said it will transform the way humans interact with machines.
Analysts note that Apple has never been an industry follower; it sets its own agenda at its own pace. How to maintain its lead in what may be the next era of human-machine interaction is the question that Apple, unique in this industry, truly cares about.