Instagram is set to trial the use of artificial intelligence to verify user ages, addressing concerns about minors misrepresenting their age on the platform. The initiative, introduced by parent company Meta Platforms, is designed to identify teen accounts that may have been registered with incorrect birthdates. Should the AI detect an age discrepancy, the account will be automatically switched to teen settings, which offer greater privacy and tighter restrictions than adult accounts.
Teen accounts on Instagram are private by default. Messaging is limited so that teenagers can receive direct messages only from users they already follow or are connected with. Additionally, content deemed “sensitive,” such as videos of altercations or posts related to cosmetic procedures, is restricted.
To further enhance safety, Meta has implemented notifications to alert teens if they exceed 60 minutes of usage. A “sleep mode” will also activate from 10 p.m. to 7 a.m., disabling notifications and sending automated replies to direct messages during these hours.
Meta’s AI system analyzes various signals to estimate a user’s age, including account activity, profile details, and when the account was created. These measures come amid increasing scrutiny of social media’s impact on the mental health of younger users. Several states have also pushed to enact age verification laws, although those efforts have encountered legal challenges.
In response to criticism that social media companies aren’t doing enough to ensure child safety, Meta and other platforms have argued that app stores should bear responsibility for age verification. Instagram also plans to send parents information about discussing the importance of providing an accurate age online, along with guidance on keeping teens safe.