
What you need to know
- YouTube states it's going to begin rolling out its AI-based protections for teenagers, which will help identify users younger than 18.
- Its AI will reportedly take into account how long your account's been active, the type of content you search for, and content categories you watch.
- YouTube states that flagged accounts will have some limiters go into effect; if you're wrongfully flagged, you must provide a credit card or a government-issued ID to prove you're over 18.
- In June, YouTube raised the minimum age for livestreaming from 13 to 16.
YouTube's had a busy week, detailing an upcoming update to identify teen users on its platform and a separate change that affects creators.
YouTube published a blog post about its upcoming "built-in" protections for teenagers on its platform, which will begin rolling out "over the next few weeks." To break things down, YouTube begins by stating its protections are AI-based, and that the software uses a "variety of signals" to help determine whether a user is under or over 18.
The kinds of "signals" YouTube's AI looks for include the type of videos you're searching for, video categories you've watched, and the longevity of your account. If YouTube's AI deems an account to be under the age of 18, it will put forth a few limiters.
YouTube states such accounts will have digital wellbeing tools activated, a limit placed on "repetitive views" of "some kinds of content," and personalized advertising deactivated. Considering AI can get things wrong, YouTube states users can provide either a credit card or a government-issued ID to prove they're over 18. Only a small set of U.S.-based users will see the AI-based protections roll out in the next few weeks; the company states it will monitor the rollout before expanding availability.
The Guidelines Change

YouTube is also changing how it handles profanity in content creators' videos (via 9to5Google). In a Creator Insider video, YouTube's Head of Monetization Policy Experience, Conor Kavanagh, states that creators who use moderate or strong profanity within the "first seven seconds" of their content will be eligible for full ad revenue. This reverses a policy from two years ago, which YouTube states garnered a wealth of feedback.
Kavanagh goes into a little more depth, stating that "high frequency" swearing in videos will still violate its rules, as will profanity in a video's title or comments.
YouTube's upcoming U.S. rollout for younger audiences is in a similar vein to an update from June, when the company raised the minimum age for livestreaming from 13 to 16. If you're under 16, you'll need an adult visibly present with you for the entirety of your livestream; if not, YouTube states your stream's chat will be disabled, along with "other" features.