Apple is unlikely to act against ChatGPT apps, but new rules could come into play later
Apple CEO Tim Cook speaks during the Apple Worldwide Developer Conference (WWDC) at the San Jose Convention Center on Monday, June 4, 2018, in San Jose, California.
Josh Edelson | AFP | Getty Images
Large language models such as ChatGPT can produce entire blocks of text that read as if written by a human. Companies including Microsoft, Snap and Shopify are racing to integrate ChatGPT into their applications. However, the trend could come to a screeching halt if Apple decides to restrict ChatGPT-based apps from the App Store, which is the only way to install software on the iPhone.
Blix, the email app company that has regularly clashed with Apple over App Store rules, says it hit that roadblock this week.
Co-founder Ben Volach told the Wall Street Journal that Apple rejected an update to the BlueMail app because it integrated ChatGPT to help compose emails and didn’t include content filtering on the chatbot’s output. Volach also claimed on Twitter that Apple is “blocking” an AI update.
Apple said that without content filtering, BlueMail's chatbot could produce content inappropriate for children, and according to the report, it told the email app to raise its recommended age rating to 17+.
Apple said it is investigating and that developers may appeal the decision, a spokesperson told CNBC.
Regardless, the BlueMail episode isn't a sign of an impending Apple crackdown on AI apps.
In fact, ChatGPT-based features are already found in Snapchat and Microsoft's Bing app, both currently distributed through the App Store. Other AI apps such as Lensa have also been distributed and have flourished there.
There is no official AI or chatbot policy in Apple's App Store guidelines, the document that outlines what the company allows on its store. Employees of Apple's App Review department download and briefly test all apps and updates before approving them.
Apple may add AI-specific policies in the future, as it has done for other technologies. In a 2018 update, for example, it introduced a dedicated cryptocurrency section into its policies, allowing wallet apps and banning on-device mining, and last year it introduced new rules for NFTs. The company often updates its policies in June and October.
But the BlueMail episode reflects the App Store's strict scrutiny of mass-generated content, whether produced by users (for example, in social media apps) or, more recently, by AI.
If an app can display content that violates intellectual property rights, or messages that amount to cyberbullying, the app must provide a way to filter that material and give users a way to report it, Apple says.
The content moderation rule was likely at the center of Apple's friction with Elon Musk's Twitter late last year, and it was the reason Apple pulled Parler from the App Store in 2021. Apple allowed Parler back into the App Store after it added content moderation.
Before appearing in the Bing app on the iPhone, the ChatGPT-based AI had unsettling conversations with Bing users, in some cases even issuing threats.
However, Bing has built-in content moderation and filtering tools. Microsoft's AI lets users rate harmful responses negatively and includes a "safety system" with content filtering and abuse detection. Microsoft has also updated Bing's chatbot in recent weeks to curb these unsettling exchanges, and it now often refuses to engage with topics that might cause it to go off the rails.