Apple Partners With OpenAI For ChatGPT On Phones: What It May Mean

By Hera Rizwan

Apple, on Monday, introduced "Apple Intelligence", a collection of new AI features for its popular devices, alongside a partnership with OpenAI, as it aims to keep pace with competitors advancing rapidly in the AI field.

The move will elevate the experience of Apple products to "new heights", CEO Tim Cook said while opening the annual Worldwide Developers Conference at the company's headquarters in Cupertino, California.

The iPhone maker plans to enhance its Siri voice assistant and operating systems by integrating OpenAI's ChatGPT. Refreshed with a new interface and a chattier approach, the voice assistant is touted to help users navigate their devices and apps more seamlessly. Updates to the iPhone and Mac operating systems will also give users access to ChatGPT through the collaboration with OpenAI.

ChatGPT will also be leveraged to improve various tools, such as text and content generation. The test version is expected to be available in the autumn.

In a post on X, Sam Altman reflected on the Apple-OpenAI partnership, saying, “Very happy to be partnering with Apple to integrate ChatGPT into their devices later this year! Think you will really like it.”

Apple executives also emphasised that privacy safeguards are integrated into Apple Intelligence, ensuring that its Siri digital assistant and other products become smarter without compromising user data.


Elon Musk threatens Apple ban over OpenAI alliance

However, not everyone welcomed the announcement. Elon Musk has threatened to prohibit iPhones at his companies, citing "data security" concerns. He objected to the extensive integration of OpenAI's ChatGPT into Apple devices such as the iPhone, iPad, and Mac, warning that such devices would be banned from his companies if the integration went ahead.

Musk said on X, "Apple has no clue what's actually going on once they hand your data over to OpenAI." He further added, "They're selling you down the river."

Musk also said that “visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage.” A Faraday cage, invented by the scientist Michael Faraday, is a shield or enclosure that blocks electromagnetic signals, including cellular signals, from passing in or out.


Musk also took to X to share an Indian meme taking a dig at Apple. Captioned "How intelligence works", it featured an image of a man and a woman sharing coconut water, suggesting the privacy concerns that could arise if Apple were to share user data with OpenAI.

Concerns around Apple's AI overhaul

Although Musk opposes the deeper OS-level integration of ChatGPT on Apple devices, Apple has assured users that the GPT-4o-driven Siri and other native apps on iOS 18, iPadOS 18, and macOS Sequoia will request permission each time before sharing questions, photos, documents, presentations, or PDFs with ChatGPT. Users will also be able to access ChatGPT without creating an account.

Apple stated that "the request and information will not be logged," and users of paid ChatGPT accounts can link their accounts to access premium features. These features will be available on certain iPhones, iPads, and Macs later this year. Apple also confirmed plans to introduce support for additional AI models in the future.

Other privacy measures from Apple include a new hybrid cloud system called Private Cloud Compute. The company said it aims to run the majority of AI processing on-device, with additional privacy protections for more complex tasks that require the cloud. Despite these assurances, Apple's foray into AI, aided by OpenAI, remains dogged by concerns.

The announcement also comes on the heels of current and former employees of OpenAI, Google DeepMind, and Anthropic joining the ongoing debate about the potential impact of generative AI on humanity, signing an open letter on June 4 that cautioned about the dangers ahead.

The group of 13 signatories raised concerns that AI companies working on next-generation systems have not been transparent in sharing information as they compete for a slice of a market for chatbots and other AI technologies projected to reach $1.3 trillion by 2032. They called on the AI companies to be more open with the public about "the risk levels of different kinds of harms", since those companies have "strong financial incentives to avoid effective oversight".

Furthermore, the open letter follows the departure of several prominent executives from OpenAI, who cited safety concerns and claimed that the company had reduced resources for teams researching AI's long-term risks. Helen Toner, a former OpenAI board member, also recently voiced her concerns, stating that the board's lack of confidence in CEO Sam Altman last year stemmed from his lack of transparency in communications with them.

The company was also embroiled in a recent controversy when Hollywood actress Scarlett Johansson alleged that OpenAI had developed a voice for ChatGPT, named 'Sky', that bore a striking resemblance to her own, after she had refused to lend her voice to the chatbot.

OpenAI said it would remove the voice "out of respect for Ms Johansson", but insisted that it was not meant to be an "imitation" of the star.


© BOOM Live