Google has made it clear to developers that all applications, including those that generate content with AI, must comply with its existing developer policies, which prohibit restricted content such as material that enables “deceptive behavior” and child sexual abuse material (CSAM).
The company announced the policy updates as part of an effort to improve the quality of apps available on Google Play.
In keeping with its commitment to responsible AI practices, Google said it wants to help ensure that AI-generated content is safe for users and that their feedback is taken into account.
“We’ll be requiring developers to provide the ability to report or flag offensive AI-generated content without needing to exit the app,” the company said in a statement, adding that the requirement takes effect early next year.
Google also reminded developers that apps using AI to create content must comply with all of its other developer policies.
To protect user privacy, certain app permissions carry additional restrictions and require review by the Google Play team.
“We’re expanding these requirements, including a new policy to reduce the types of apps allowed to request broad photo and video permissions,” the company said. “We’ve found this has been an effective strategy to protect people’s privacy.”
Under the new policy, apps will only be allowed to access photos and videos for purposes directly related to their functionality.
Apps that need only occasional or one-time access to these files are asked to use a system picker, such as the Android photo picker, instead of requesting broad media permissions.
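For developers, adopting the photo picker is a small change. Below is a minimal Kotlin sketch of launching it through the Jetpack Activity Result API, which returns a content URI for the selected item without the app holding READ_MEDIA_IMAGES or READ_MEDIA_VIDEO; the activity name and callback body are illustrative.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class PickerDemoActivity : ComponentActivity() {

    // Register the system photo picker; no broad media permission is required.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                // The app receives temporary read access to just this one item.
                contentResolver.openInputStream(uri)?.use { /* read the selected file */ }
            }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Launch the picker, limited here to images
        // (ImageAndVideo or VideoOnly are also available).
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```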
Google is also tightening restrictions on full-screen intent notifications, which convey urgent messages and demand the user’s immediate attention.
For apps targeting Android 14 and above, Google said the full-screen intent permission will be granted by default only to applications whose core functionality requires full-screen notifications; all other apps will have to ask the user to grant it.
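On Android 14 and above, apps can check at runtime whether they still hold this permission and, if not, send the user to the system settings screen where it can be granted. Here is a minimal Kotlin sketch using NotificationManager.canUseFullScreenIntent(), added in API level 34; the activity and method names are illustrative.

```kotlin
import android.app.NotificationManager
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings
import androidx.activity.ComponentActivity

class AlarmSetupActivity : ComponentActivity() {

    // Before posting a notification built with setFullScreenIntent(), verify
    // the app is still allowed to use full-screen intents on Android 14+.
    private fun ensureFullScreenIntentAllowed() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
            val nm = getSystemService(NotificationManager::class.java)
            if (!nm.canUseFullScreenIntent()) {
                // Open the system settings page where the user can grant
                // the full-screen intent permission for this app.
                startActivity(
                    Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT)
                        .setData(Uri.parse("package:$packageName"))
                )
            }
        }
    }
}
```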