In a move aimed at protecting its adolescent users, Meta Platforms has announced new safeguards to shield teens from unsolicited direct messages on its widely used platforms, Instagram and Facebook.
The decision follows Meta's recent commitment to hide more content from teenage users, a response to regulatory pressure demanding greater protection for children from potentially harmful material within its apps.
Regulatory scrutiny intensified after a former Meta employee testified before the U.S. Senate, alleging that the company was aware of harassment and other threats faced by teens on its platforms but failed to take decisive action.
Under the revamped safety protocols, teens on Instagram will by default no longer receive direct messages from people they do not follow or are not connected to. In addition, changes to certain app settings will now require parental approval, adding a further layer of protection.
On Messenger, users under 16, and those under 18 in certain regions, will only receive messages from Facebook friends or phone contacts. Notably, adults aged 19 and above will be barred from messaging teens who do not follow them, reinforcing Meta's stated commitment to a safer online environment for younger users.
The changes underscore Meta's effort to address long-standing concerns about the safety of its teenage user base. As regulatory and public pressure on social media companies continues to grow, Meta is positioning itself as taking a more responsible approach to protecting young users.