Snapchat adds new parental controls to prevent teenagers from viewing ‘sensitive’ and ‘suggestive’ content
Last year, Snapchat launched parental controls through a new “Family Center” feature in its app. Today, in a post on its online privacy and safety hub, the company announced that it is adding content filtering tools that let parents limit teens’ exposure to content deemed sensitive or suggestive.
To enable the feature, parents can turn on the “Restrict Sensitive Content” filter in Snapchat’s Family Center. Once it’s enabled, teens will no longer see the blocked content in Stories or Spotlight, the platform’s short-form video section. Text below the toggle notes that turning on the filter does not affect content shared in Chat, Snaps, or Search.
With this change, Snapchat is also publishing its content guidelines for the first time, giving creators on Stories and Spotlight more insight into which posts can be promoted on the platform and what content is now considered “sensitive” under its community guidelines. The company said it had already shared these guidelines with Snap Stars creators and media partners as part of the Snap Stars program, but it is now making them available to everyone on its website.
The company already prohibits content on its platform such as hateful content, terrorism and violent extremism, illegal activity, harmful false or misleading information, harassment and bullying, and threats of violence. The guidelines now also spell out what counts as sensitive in each category: content that may be eligible for recommendation but can be blocked for teenage users under the new controls, or hidden from other users in the app based on their age, location, or personal preferences.
For example, in the sexual content category, Snap explains that content is considered “sensitive” if it “contains any nudity and any depiction of sexual activity, even if clothed, and even if the image is not real” (such as AI-generated images), as well as “explicit language” describing sexual acts, and other sex-related material including sex work, taboos, genitalia, sex toys, “overtly suggestive imagery,” “indecent or degrading sexual content,” and “manipulated media.”
The guidelines also address what is considered sensitive in other categories, including harassment, disturbing or violent content, false or misleading information, illegal or regulated activities, hateful content, terrorism and violent extremism, and commercial content (overt solicitations to buy from non-approved creators). That covers a range of material, such as depictions of drugs, engagement bait (“wait for it”), self-harm, body modification, gore, violence in the news, graphic depictions of human physical ailments, animal suffering, sensationalized coverage of incidents such as violent or sexual crimes, dangerous behavior, and more.
The changes come well after a 2021 congressional hearing in which Snap was pressed over adult content surfacing in the app’s Discover feed, such as invites to sexualized video games and articles about going to bars or porn. As the senators correctly pointed out, Snap’s app was rated 12+ on the App Store, yet the content it surfaced was clearly intended for a more mature audience. In some cases, even the video games it promoted were rated for older users.
“We hope these new tools and guidelines help parents, caregivers, trusted adults and teens not only personalize their Snapchat experience, but empower them to have productive conversations about their online experiences,” the social media company said in a blog post.
However, while the new filter significantly restricts sensitive content for teens in some areas, it doesn’t address the one Congress flagged: the Discover feed. There, Snap surfaces content from publishers, including some who post material considered “sensitive” under its own policies. Honestly, a lot of it is clickbait. Yet this section is not covered by the new controls.
What’s more, the feature requires parents to take action by flipping a toggle they may not even know exists.
In short, it’s yet another example of how the absence of legislation and regulation for social media companies has led to self-regulation that doesn’t go far enough to protect young users from harm.
In addition to content controls, Snap said it is working on adding tools to give parents more “visibility and control” over how teenagers use the new My AI chatbot.
Last month, the social network launched the chatbot, powered by OpenAI’s GPT technology, as part of its Snapchat+ subscription. Notably, Snapchat’s announcement came after the chatbot went off the rails while chatting with a Washington Post columnist posing as a teenager. The bot reportedly advised the columnist on how to mask the smell of pot and alcohol at a birthday party. Researchers at the Center for Humane Technology also found that the bot gave sexual advice to a user pretending to be 13 years old.
Additional tools targeting the chatbot have not yet been introduced.