Apple restricts use of OpenAI’s ChatGPT

Wall Street Journal reports

Apple Inc has restricted the use of ChatGPT and other external artificial intelligence tools for its employees as it develops similar technology of its own, the Wall Street Journal reported on Thursday, citing a document and sources.

Apple is concerned that employees could leak confidential data through the AI programs and has also advised its employees not to use Microsoft-owned GitHub's Copilot, which automates the writing of software code, the report said.

Scrutiny has been growing over how ChatGPT and the chatbots it has inspired handle the data of hundreds of millions of users, data that is commonly used to improve, or "train," AI models.

Earlier Thursday, OpenAI introduced the ChatGPT app for Apple’s iOS in the United States.

Apple, OpenAI and Microsoft did not respond to Reuters' requests for comment.

Why is Apple so cautious?

Apple is cautious because when people use these models, their data is sent back to the developer to improve the models further, creating the risk of unintentionally sharing private or confidential information. OpenAI temporarily took ChatGPT offline in March after a bug allowed some users to see the titles of other users' chat histories. OpenAI has since introduced a feature that lets users turn off their chat history so their conversations are not used to train the AI model.

Apple is developing its own AI products. Its AI efforts are led by John Giannandrea, who was hired from Google in 2018, and the company has acquired several AI startups under his leadership. During Apple's most recent earnings call, CEO Tim Cook expressed some concerns about generative AI and emphasized the need for a careful and thoughtful approach to such advancements. Apple has also been closely reviewing new App Store software that uses generative AI. When a developer tried to update an email app with a ChatGPT feature, Apple temporarily blocked the update over concerns that it could show inappropriate content to children; the app was approved after content filtering was added.

Using ChatGPT in incognito mode

OpenAI, the company behind ChatGPT, recently announced an "incognito mode" for the chatbot. This mode ensures that users' conversation history is neither stored nor used to improve the AI model. The move came in response to mounting scrutiny over how AI systems like ChatGPT and the chatbots derived from it handle the vast amounts of user data used for training and improvement.
