Here is a narrative by Alex Wawro from Tom’s Guide:
01. Apple Prioritizes User Privacy More Than Microsoft Does
As a loyal Windows user, I have always been hesitant about switching to Mac, despite Microsoft’s aggressive integration of artificial intelligence into everything. However, after attending WWDC 2024 and hearing Apple’s discussion of AI and privacy, I am seriously considering moving to the Apple ecosystem when macOS Sequoia is released later this year.
Honestly, my inclination towards this shift is not necessarily because I am fond of Apple products, but more because of Microsoft’s relentless push of “AI” into all of its products. I use an iPhone because I write about gaming and the App Store matters enormously to mobile game developers, but I am not particularly fond of Apple’s design philosophy.
At times, I feel restricted by Apple’s lack of customization options and control over crucial aspects of the device, which can be frustrating. The way Windows Copilot and other AI features permeate computers is even more off-putting to me.
Since Microsoft introduced Bing with ChatGPT in 2023, I have experimented with various forms of “artificial intelligence.” Yet, apart from cluttering the internet with junk data, I have not found any compelling reason to use them.
While Microsoft insists that integrating Copilot enhances user productivity, the capabilities I have seen amount to hit-or-miss assistance: summarizing text or rewriting it. The generated images may add some fun to chats, but they are not particularly useful, as they are obviously AI-generated.
Undeniably, AI technology is still in its early stages, and both companies have plenty of time to refine their integration of AI into operating systems. Though Apple has been a late starter in the AI field, I am drawn to its Apple Intelligence marketing that prioritizes privacy.
I appreciate Apple’s emphasis on privacy as the key selling point of its AI products and hope that Microsoft follows suit.
Apple has positioned privacy as a crucial pillar of its 2024 marketing, releasing a press release highlighting how its new operating system respects user privacy in every aspect.
While this press release aims to enhance Apple’s image, Microsoft has yet to make a similar declaration, and without a serious overhaul of its data infrastructure, it may struggle to do so.
For instance, when Siri underwent a major revamp in iOS 18, it integrated ChatGPT (powered by GPT-4o), which is said to handle much more complex requests than before. Siri can now access internal information about applications on the device and take actions within them to fulfill user requests.
During WWDC, Apple demonstrated how the new Siri could locate photos of specific individuals and places, allowing Apple’s AI assistant to edit the images creatively.
Apple assures that the new Siri strives to process all tasks on the device, so that user requests and images are not sent elsewhere or seen by anyone else. If Siri requires additional processing power to fulfill a request, it contacts Apple’s Private Cloud Compute servers for additional processing.
These servers, running on Apple silicon, offer the same security features as the iPhone or MacBook, such as the Secure Enclave and Secure Boot.
Apple pledges that no one, including Apple employees, can access your data on these servers, and once your request is fulfilled, the data is promptly deleted. Apple also promises to allow “independent experts” to review the code running these privacy computing servers and verify their claims.
However, if Siri needs to use ChatGPT, it seeks the user’s consent before sending requests, images, or files to ChatGPT. This small step in the process reassures me, because I will always know when my personal data is being sent to third-party servers for processing.
In contrast, Microsoft seems to seamlessly integrate AI into all computer operations, which makes me uncomfortable about their lack of emphasis on user privacy.
While the company has released guidelines on how Copilot uses your data, it effectively states that you must pay to guarantee your data stays private.
Although the basic version of Copilot provided to Windows users for free has an option to enable or disable its ability to read data from Microsoft Edge, that is about the extent of the control free users have over their privacy. To ensure that requests and data sent to Copilot stay private and are not shared with anyone, users must pay a monthly subscription fee of $30 for Copilot for Microsoft 365.
So if you are a paying Microsoft user, Copilot displays a reminder stating “Your personal and company data is protected in this chat.” If you do not see this reminder, your data is not protected.
Frankly, I do not believe Microsoft would sell my data to advertising companies or other data-driven enterprises. But if you use the Outlook application in the EU or UK, you will find the company is at least open to doing so: in certain regions, possibly under regulatory pressure, Microsoft has added an option that lets users opt out of having their data sold to third parties.
On the other hand, Apple seems committed to not profiting from user data through Apple Intelligence while integrating AI into everyday life.
Apple’s efforts in the field of artificial intelligence appear to be helping Siri fulfill its promise as a virtual assistant while enhancing features like Spotlight search and voice input in Notes.
Overall, Apple’s privacy-first approach in integrating AI into its products is the best I have seen among major tech companies.
While some may view Apple’s response in the AI field as slow compared to competitors, as a user, it actually gives me a sense of security.
Seeing Microsoft rapidly introduce AI into all its products, I am somewhat apprehensive, and I believe that this rush to market may have more disadvantages than advantages in the long run.
02. How Does Apple’s AI Handle User Data?
Last week, Apple made a grand announcement at the Worldwide Developers Conference (WWDC) about integrating artificial intelligence into its products and partnering with OpenAI, the developer of ChatGPT.
While Apple has introduced a suite of internally developed AI models, it has also integrated ChatGPT into its devices and software. Naturally, people are curious about how these two companies will handle users’ personal information, especially considering Apple’s reputation for security and privacy.
Apple Intelligence is the collective term for Apple’s internally developed AI tools, primarily focused on personal assistants and emphasizing the “personal” aspect. It gathers specific information about users’ relationships, contacts, messages sent, emails, events attended, meetings on the calendar, and other data closely related to users’ lives.
Apple aims to use this data to make users’ lives easier, such as helping users find photos from a concert several years ago, identifying the correct attachment to include in an email, or sorting mobile notifications based on priority and urgency.
However, Apple Intelligence may lack what some tech experts call “world knowledge,” which includes historical events, current affairs, and other information indirectly related to users. This is where ChatGPT comes into play.
Users can choose to have Siri forward questions and prompts to ChatGPT or allow ChatGPT to assist in writing documents within Apple applications.
Apple plans to eventually integrate other third-party AI models as well, so users will not be limited to ChatGPT, providing a more seamless experience for Apple users.
Due to the distinct purposes of Apple Intelligence and ChatGPT, the volume and type of information users send to these two AIs may vary.
The Apple Intelligence system can access a vast amount of personal data from users, ranging from written communications, photos, and videos to calendars, event logs, and more. Aside from refraining from using its features, there seems to be no other way to prevent Apple Intelligence from accessing this information. An Apple spokesperson did not immediately respond to inquiries about this issue.
If users opt to use ChatGPT through Apple, they may end up sharing somewhat more data with OpenAI, but ChatGPT does not necessarily or automatically gain access to users’ most personal information. In the demonstration at the conference, Apple showed Siri asking for the user’s permission before sending a prompt to ChatGPT.
As part of the agreement with Apple, OpenAI has made a significant concession: OpenAI agrees not to store any request information from Apple users or collect their IP addresses.
Given that we have established how OpenAI will handle user data, what about Apple?
While Apple users must send personal information and AI requests to OpenAI to use ChatGPT, Apple states that most of the time, Apple Intelligence does not send user data anywhere. Apple aims to process AI requests directly on devices using smaller AI models.
This approach is similar to how Apple handles Face ID and other sensitive data, emphasizing that processing data on the device limits the risk of exposure. If user data never actually leaves the device, it cannot be intercepted or targeted by hackers elsewhere.
If a user’s AI task requires more processing power, Apple Intelligence will send the user’s queries and data to Apple’s controlled cloud computing platform for completion by more powerful AI models.
This is where Apple claims to have made significant strides in privacy protection.
Apple announced this development during the conference keynote, where it seemed to receive minimal attention; yet it is a significant advancement that Apple has clearly planned meticulously.
On Monday, Apple announced that it has developed a new cloud computing method that lets it process sensitive data while ensuring that no one, not even the company itself, can see which data is being processed or what computations are being performed.
Apple’s new architecture, known as “Private Cloud Compute,” draws on certain hardware and security concepts from the iPhone, including the secure enclave that safeguards sensitive data on Apple mobile devices.
Craig Federighi, Apple’s Senior Vice President of Software Engineering, stated in the keynote address, “With Private Cloud Compute, user data is never stored and cannot be accessed by Apple. After fulfilling a user’s AI request, Private Cloud Compute automatically deletes any user data involved in the process.”
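Taken together, the routing policy described here follows a clear order of preference: on-device processing first, Private Cloud Compute when more power is needed, and ChatGPT only after an explicit consent prompt. A rough sketch of that decision flow might look like the following; the function names, threshold, and complexity measure are hypothetical illustrations of the policy as the article describes it, not Apple’s actual implementation:

```python
# Hypothetical sketch of the AI-request routing described in the article:
# on-device first, then Apple's Private Cloud Compute, and a third-party
# model (ChatGPT) only with explicit user consent. All names and the
# threshold below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    complexity: int               # rough measure of compute the task needs
    needs_world_knowledge: bool   # e.g. history or current events, not personal data

ON_DEVICE_LIMIT = 10  # illustrative capacity of the smaller on-device model

def route(request: Request, user_consents_to_chatgpt: bool) -> str:
    """Return where the request would be processed under the described policy."""
    if request.needs_world_knowledge:
        # Third-party models are only used after asking the user each time.
        return "chatgpt" if user_consents_to_chatgpt else "declined"
    if request.complexity <= ON_DEVICE_LIMIT:
        return "on-device"           # data never leaves the device
    return "private-cloud-compute"   # Apple-controlled servers; data deleted after use
```

The key design point the article emphasizes is the asymmetry: escalation to Apple’s own servers is automatic, while escalation to a third party requires a fresh user decision.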
03. Apple’s Training Data
Apple’s artificial intelligence models do not appear out of thin air. They require training, much like models offered by other companies. This raises questions about whose data Apple uses and how it is used.
In a technical document released last week, Apple stated that its models are trained on “licensed data,” including data selected to enhance specific features.
Apple further explained, “We never use users’ private personal data or user interactions during the training of base models. We apply filters to remove personal identifiers, such as Social Security numbers and credit card numbers publicly available on the internet, when training base models.”
However, Apple does acknowledge scraping data from the public internet and using it to train its in-house models, similar to other AI companies. Some of these companies have faced copyright lawsuits, sparking a debate about whether AI startups are unfairly profiting from human labor.
Apple did not disclose which internet data it scrapes, but the company does state that publishers can add code to their websites to prevent Apple’s web crawlers from collecting their data. It is worth noting, however, that this shifts the responsibility for protecting intellectual property entirely onto publishers rather than Apple itself.
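In practice, that opt-out works through the standard robots.txt mechanism. Apple has said its `Applebot-Extended` user agent controls whether crawled content may be used for AI training; a publisher wanting to opt out entirely might add something like the following (the directives here are one possible configuration, chosen by the publisher):

```
# robots.txt — opt this site's content out of use for Apple's AI training
User-agent: Applebot-Extended
Disallow: /
```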
Original sources:
1. https://www.tomsguide.com/ai/apple-is-handling-ai-so-much-better-than-microsoft-i-may-ditch-windows-for-macos-sequoia
2. https://edition.cnn.com/2024/06/13/tech/apple-ai-data-openai-artificial-intelligence/index.html
Chinese translation by the MetaverseHub team; please contact us regarding reprints.