Google is going after potentially troublesome generative AI apps with a new policy that takes effect early next year. It requires Android developers who publish on the Play Store to include a mechanism for users to report or flag offensive AI-generated content. According to the company, the reporting and flagging must happen in-app, and developers should use those reports to inform their filtering and moderation practices.
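Google has not published a reference implementation for this. Purely as an illustration, the required flow could be as simple as a “Report” control attached to each generated item that records the complaint for the app’s moderation pipeline; every name in this Kotlin sketch (ContentReport, ReportStore, reportGeneration) is hypothetical:

```kotlin
// Hypothetical in-app report flow; the policy only mandates that some such
// control exist in-app, not any particular API or data model.
data class ContentReport(
    val generationId: String, // id of the AI-generated item being flagged
    val reason: String,
    val timestampMs: Long
)

class ReportStore {
    private val reports = mutableListOf<ContentReport>()

    // Called when the user taps "Report" next to a piece of generated content.
    fun reportGeneration(generationId: String, reason: String) {
        reports += ContentReport(generationId, reason, System.currentTimeMillis())
        // In a real app: persist the report and feed it into the filtering
        // and moderation pipeline, as the policy intends.
    }
}
```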
The policy change follows a proliferation of AI apps, some of which users have tricked into producing NSFW content, as happened with Lensa last year. Others struggle with subtler problems: the AI headshot app Remini, which went viral this summer, was found to noticeably slim some women’s figures while enlarging their breasts or cleavage. Then there were the more recent problems with AI tools from Meta and Microsoft, where users bypassed safeguards to create images such as a pregnant Sonic the Hedgehog or fictional characters carrying out the 9/11 attacks.
Of course, AI image generators raise even more serious problems, as pedophiles have been found using open-source AI tools to generate child sexual abuse material (CSAM) at scale. There are also concerns about AI being used to produce “deepfakes,” or faked images, to deceive or mislead voters ahead of upcoming elections.
The new policy offers examples of apps with AI-generated content, including apps that generate images “based on text, image, or voice prompts” and “text-to-text conversational generative AI chatbots, in which interacting with the chatbot is a central feature of the app.” The latter category covers ChatGPT and similar apps.
In its announcement, Google also reminded developers that all apps, including those that generate AI content, must comply with its existing developer policies, which prohibit restricted content such as CSAM and anything that enables deceptive behavior.
Google says that in addition to the policy change cracking down on apps with AI content, the Google Play team will re-review certain app permissions, particularly requests for broad access to photos and videos. Under the new policy, apps may access photos and videos only when doing so is directly related to their functionality. Apps with a one-time or infrequent need, such as AI apps that ask users to upload a set of selfies, must instead use a system picker like the new Android photo picker, as in the sketch below.
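In practice, that means a flow along these lines: a minimal Kotlin sketch using the AndroidX photo picker (ActivityResultContracts.PickVisualMedia), which grants access only to the items the user selects, with no broad media permission. The SelfieUploadActivity and uploadSelfie names are illustrative:

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class SelfieUploadActivity : AppCompatActivity() {

    // The system photo picker runs in a separate process, so the app never
    // needs broad READ_MEDIA_IMAGES / READ_MEDIA_VIDEO permissions here.
    private val pickSelfie = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri: Uri? ->
        uri?.let(::uploadSelfie)
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Launch the picker restricted to images, e.g. when the user starts
        // an "upload selfies" step in an AI headshot app.
        pickSelfie.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun uploadSelfie(uri: Uri) {
        // Hypothetical: hand the selected image off to the app's AI pipeline.
    }
}
```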
The new policy will also limit disruptive, full-screen notifications to cases of genuine urgency. Many apps have abused the ability to show full-screen notifications to upsell users on premium subscriptions or other offers, even though the feature is meant for high-priority, real-world use cases such as incoming phone or video calls. Google says it will change the rules so that a special app access permission is required: for apps targeting Android 14 and above, this “Full-Screen Intent permission” will be granted only to apps that genuinely need full-screen capabilities.
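On the developer side, Android 14 (API 34) pairs the manifest-declared USE_FULL_SCREEN_INTENT permission with a runtime check, NotificationManager.canUseFullScreenIntent(), so an app can fall back to an ordinary heads-up notification when the special access hasn’t been granted. A sketch of that check for a hypothetical incoming-call notification (the channel id and intent target are assumptions):

```kotlin
import android.app.NotificationManager
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.os.Build
import androidx.core.app.NotificationCompat

// Assumes <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />
// is declared in the manifest; on Android 14+ that alone no longer guarantees access.
fun postIncomingCallNotification(context: Context, callScreen: Intent) {
    val manager = context.getSystemService(NotificationManager::class.java)

    val fullScreenIntent = PendingIntent.getActivity(
        context, 0, callScreen,
        PendingIntent.FLAG_IMMUTABLE or PendingIntent.FLAG_UPDATE_CURRENT
    )

    val builder = NotificationCompat.Builder(context, "calls") // hypothetical channel id
        .setSmallIcon(android.R.drawable.sym_call_incoming)
        .setContentTitle("Incoming call")
        .setCategory(NotificationCompat.CATEGORY_CALL)
        .setPriority(NotificationCompat.PRIORITY_HIGH)

    // Only attach the full-screen intent if the special app access has been
    // granted; otherwise the notification shows as a regular heads-up one.
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.UPSIDE_DOWN_CAKE ||
        manager.canUseFullScreenIntent()
    ) {
        builder.setFullScreenIntent(fullScreenIntent, true)
    }

    manager.notify(1, builder.build())
}
```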
Surprisingly, Google beat Apple to publishing a policy on AI apps and chatbots. Usually it is Apple that introduces new rules to curb undesirable app behavior, with Google following later. And while Apple has tightened restrictions in other areas, such as apps that request data to identify the user or device (a practice known as “fingerprinting”) and apps that imitate others, its App Store Guidelines still have no official AI or chatbot policy.
Google Play is rolling out the policy changes now, though AI app developers have until early 2024 to implement the flagging mechanisms and submit updates to their apps.