Artificial Intelligence (AI) is subtly transforming Apple's latest gadgets, enhancing their basic functions without the fanfare often associated with AI in the tech industry. The iPhone 15 Pro and Series 9 Watch, powered by advanced semiconductor designs, showcase these AI improvements.
Apple's subtle AI integration
While several tech companies shout from the rooftops about their AI transformations, Apple is quietly folding the technology into its latest devices, the iPhone 15 Pro and Series 9 Watch. Rather than touting groundbreaking AI applications, Apple is using the technology to improve basic functions like taking calls and capturing better photos. The company has developed advanced semiconductor designs that power the new AI features in these devices.
Series 9 Watch's 'Neural Engine'
Apple has equipped the Series 9 Watch with a new chip featuring improved data-crunching capabilities. A significant addition is the four-core 'Neural Engine,' which can process machine learning tasks up to twice as fast as previous models. 'Neural Engine' is Apple's name for the building blocks in its chips that accelerate AI functions. According to Apple, these AI components also make Siri, its voice assistant, 25% more accurate.
Apple's use of machine learning in the Series 9 Watch has enabled a new interaction method: the 'double tap.' Users can tap their index finger and thumb together twice to execute functions such as answering or ending phone calls and pausing music. The feature lets users control their Apple Watch even when their other hand is occupied: the new chip runs machine learning models that detect the subtle movements and changes in blood flow produced when users tap their fingers together.
Improved image capture in iPhones
The latest iPhones showcase Apple's AI capabilities in image capture as well. Apple's 'portrait mode,' which uses computational power to simulate the shallow depth of field of a large camera lens and blur the background, is now more automatic: the camera recognizes when a person is in the frame and captures the depth data needed to blur the background later. This improvement further exemplifies Apple's strategy of using AI to enhance basic functionality.