Snapchat is improving its AI chatbot with age filters and parent insights
The Washington Post reported that My AI, Snapchat’s GPT-powered chatbot for Snapchat+ subscribers, gave unsafe and inappropriate responses within days of its launch.
Snap said users had been trying to “trick the chatbot into giving replies that do not adhere to our criteria,” and that the new features are designed to keep the AI’s responses in check.
The company said the new age filter supplies the chatbot with users’ birth dates so that its answers stay age-appropriate.
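Snap has not published how the age filter works; the following is a hypothetical sketch of the general idea, in which a birth date is used to compute a user's age and gate certain reply topics. The topic list, threshold, and function names are all illustrative assumptions, not Snap's implementation.

```python
from datetime import date

# Hypothetical sketch only -- not Snap's actual age-filter code.
# An assumed, illustrative set of topics restricted to adult users.
ADULT_ONLY_TOPICS = {"alcohol", "gambling"}

def age_from_birth_date(birth_date: date, today: date) -> int:
    """Compute a user's age in whole years as of `today`."""
    years = today.year - birth_date.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def is_reply_allowed(topic: str, birth_date: date, today: date) -> bool:
    """Block adult-only topics for users under 18; allow everything else."""
    if topic in ADULT_ONLY_TOPICS:
        return age_from_birth_date(birth_date, today) >= 18
    return True
```

For example, `is_reply_allowed("gambling", date(2010, 1, 1), date(2024, 1, 1))` returns `False` because the user is a minor, while a neutral topic passes regardless of age.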
Snap also promises to give parents more visibility into their children’s chatbot interactions in the coming weeks via its Family Center, introduced last August. The new feature will show parents how often their children interact with the chatbot. These parental controls require both guardians and minors to opt in to Family Center.
In a blog post, Snap noted that the My AI chatbot is not a “genuine buddy” and uses conversation history to improve its replies.
According to the company, only 0.01% of the bot’s replies were “non-conforming.” Snap counts as non-conforming any reply that references violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or discriminatory remarks, racism, misogyny, or the marginalization of underrepresented groups.
The company said most inappropriate responses came from the bot parroting what users had said. It added that it will also temporarily restrict access to the bot for users who misuse it.
Snap said these findings will guide My AI’s development and inform a new misuse-prevention mechanism. The company is adding OpenAI’s moderation technology to its toolset to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service.
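Snap has not described the mechanism in detail; below is a hypothetical sketch of how severity scores (such as those a moderation service might assign to a message) could drive temporary access restrictions. The threshold, strike count, and timeout values are all assumed for illustration.

```python
import time

# Hypothetical sketch only -- not Snap's or OpenAI's actual code.
SEVERITY_THRESHOLD = 0.8           # assumed cutoff for a "severe" violation
STRIKES_BEFORE_RESTRICTION = 3     # assumed strikes allowed before a timeout
RESTRICTION_SECONDS = 24 * 60 * 60 # assumed 24-hour restriction window

class MisuseTracker:
    """Count severe violations per user and impose temporary restrictions."""

    def __init__(self):
        self._strikes = {}           # user_id -> current strike count
        self._restricted_until = {}  # user_id -> unix timestamp

    def record_message(self, user_id, severity, now=None):
        """Register one message; a high-severity score counts as a strike."""
        now = time.time() if now is None else now
        if severity >= SEVERITY_THRESHOLD:
            self._strikes[user_id] = self._strikes.get(user_id, 0) + 1
            if self._strikes[user_id] >= STRIKES_BEFORE_RESTRICTION:
                self._restricted_until[user_id] = now + RESTRICTION_SECONDS
                self._strikes[user_id] = 0  # reset strikes after restricting

    def is_restricted(self, user_id, now=None):
        """True while the user's restriction window is still active."""
        now = time.time() if now is None else now
        return self._restricted_until.get(user_id, 0) > now
```

A strike-based design like this lets occasional false positives from the moderation layer pass without penalty, while repeated severe violations trigger a time-limited lockout that expires on its own.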
Snap remains optimistic about generative AI: a few weeks ago, Snapchat+ subscribers got an AI-powered backdrop generator.
AI-powered products have raised broader concerns about safety and privacy. The Center for Artificial Intelligence and Digital Policy petitioned the U.S. Federal Trade Commission last week to halt the distribution of OpenAI’s GPT-4 model, calling it “biased, misleading, and a risk to privacy and public safety.”
Last month, Colorado Democratic Senator Michael Bennet wrote to OpenAI, Meta, Google, Microsoft, and Snap regarding adolescents’ use of generative AI tools.
These AI models can be manipulated into producing undesirable results. Tech firms may be eager to ship such tools quickly, but they must ensure proper safeguards are in place.