Meta is stepping up efforts to enhance safety on Instagram, particularly for accounts operated by adults that spotlight children. In a move announced recently, these accounts will now default to the platform’s strictest messaging settings to ward off unwanted interactions. Additionally, the “Hidden Words” feature will be active, filtering out offensive comments. Alongside these changes, new safety features are being introduced for teen accounts.
The tighter message settings target accounts managed by adults who frequently post about their children, as well as accounts overseen by parents or talent managers representing kids.
Meta acknowledged that although most of these accounts are used appropriately, they can be misused, drawing inappropriate comments or requests for explicit content that violate Instagram’s policies. The company is taking proactive measures to counter this by restricting potentially suspicious adults from finding and interacting with accounts featuring children. Suspicious users, especially those already blocked by teens, will also be kept out of Instagram’s recommendations and search results for these accounts.
This announcement aligns with Meta’s ongoing efforts to address social media-related mental health concerns, a topic gaining attention from the U.S. Surgeon General and various states, some of which require parental consent for social media access.
These updates will likely affect family vloggers and parents of “kidfluencers,” who face scrutiny over the potential risks of showcasing children’s lives online. A New York Times investigation revealed troubling connections between parent-run accounts and male followers, underscoring the importance of these changes.
Accounts subjected to these new settings will receive notifications prompting them to review and update their privacy settings. Meta has already taken action against 135,000 Instagram accounts and 500,000 associated accounts for inappropriate conduct regarding child-centered content.
In parallel, Meta is refining safety features in Instagram’s teen accounts. Teen users will now see safety tips urging careful examination of profiles and cautious sharing. New messaging features will display the account creation date at the top of chats and provide a simultaneous block and report option for enhanced security.
These measures aim to give teens better insight into the accounts they interact with and help them identify potential scams. Meta highlighted the impact of these updates, noting that in June alone, teens blocked a million accounts after seeing safety notices and reported another million.
Furthermore, Meta shared insights into its nudity protection filter, reporting that 99% of users, including teens, have kept it enabled. Last month, over 40% of the images the filter blurred stayed blurred rather than being revealed. These steps underline Meta’s commitment to creating a safer online environment for its younger users.