The Guardian - US
Johana Bhuiyan

‘Fundamentally against their safety’: the social media insiders fearing for their kids

‘When it comes to child safety, it’s always an afterthought.’ Illustration: Jinhwa Jang/The Guardian

Arturo Bejar would not have let his daughter use Instagram at the age of 14 if he’d known then what he knows now.

Bejar spent six years at Facebook making it easier for users to report problems on the platform, before leaving the company in 2015. But it wasn’t until after his departure that he witnessed what he described in recent congressional testimony as the “true level of harm” the products his former employer built are inflicting on children and teens – his own included.

Bejar discovered his then 14-year-old daughter and her friends were routinely subjected to unwanted sexual advances, harassment and misogyny on Instagram, according to his testimony.

“I really can’t imagine a world where, as things stand today, these things are safe for a 13-year-old to use,” Bejar told the Guardian. “At that age, you’re still developing in so many ways, and you experience the world so intensely. And then you have an environment that really doesn’t provide the right safeguards on time spent or what happens when someone makes a comment that makes you uncomfortable, or what happens if you get bullied.”

But it wasn’t his daughter’s experience on Instagram alone that convinced Bejar that the social network is unsafe for kids younger than 16; it was the company’s meager response to his concerns. Ultimately, he concluded, companies like Meta will need to be “compelled by regulators and policymakers to be transparent about these harms and what they are doing to address them”.

‘Sympathy but no concrete action’

After seeing his daughter’s experience on the platform, Bejar returned to Facebook’s parent company Meta as a consultant for two years. That’s when he said he discovered that much of the work he and his team once did was no longer in place, including support tools for teens who said they were being bullied or harassed.

Initial data he examined from the research team showed that 21.8% of 13- to 15-year-olds said they had been bullied, and just over 13% said they had received unwanted advances in the previous seven days.

He wrote to the executives Mark Zuckerberg, Sheryl Sandberg and Adam Mosseri with his findings. Their response was “not constructive”, he testified. “Sheryl Sandberg expressed empathy for my daughter but offered no concrete ideas or action. Adam Mosseri responded with a request for a follow-up meeting; Mark Zuckerberg never replied,” he said.

It’s been two years since Bejar left Instagram, and he has little hope that the company will voluntarily make its social network safer for teens. He’s certainly not alone in thinking so. Several parents working at large tech companies said they did not trust their employers, or the industry, to prioritize child safety without public and legal pressure. Their opinions on screen time, and on whether their children are allowed to use social media at all, varied widely: some permit limited use of screens; others ban them entirely.

“The foundation of what drives this industry is fundamentally against the safety of our children,” said one now former TikTok employee and parent of a four-year-old. The worker, who left the company in the months after this interview, asked not to be named for fear of retaliation from his employer. “When it comes to child safety, it’s always an afterthought. It’s always after a million mistakes happen and a ton of pressure gets put on you by regulators and parent groups. So you have to come up with solutions to keep children ‘safe’, but it’s never foolproof.”

How tech workers supervise their kids’ screen use

The battle over teenagers’ use of social media and screens is raging at the highest levels of government in the US and UK. In the US, federal children’s online safety bills are being hotly debated, efforts to ban TikTok have alternately picked up steam and waned over the past year, and dozens of states have sued Meta for allegedly designing its platforms to be deliberately addictive for kids.

But long before those battles turned into bills on desks in Congress, parents in tech had been grappling with whether to let their own kids use the very products their industry has unleashed. Even those parents, many of whom may be equipped with a bit more awareness of the inner workings of technology and the companies building it, are still figuring it out.

Beyond bullying and harassment, parents in tech have the same concerns any other parent might have about their kids’ use of social media and screens. They wrestle with whether their children are becoming addicted to screens, what type of content algorithms are pushing to them and whether technology might be harming their children’s development. And while they may get a first-hand look at how the tech industry works and the forces that drive executives’ decisions, their approaches to those concerns vary just as widely as those of any other group of parents.

The ex-TikTok employee said his four-year-old was too young to be using TikTok. The father is not sure yet whether he’ll let his son use the platform in the future either. He has yet to explore what parental controls TikTok has in place because he doesn’t think there’s enough content on it that would be suitable for his child at four. But he is generally aware of how kids behave on various platforms, he said. His 12-year-old nephew, for instance, uses Discord – a platform that has gotten the teenager in a bit of trouble. “When all these middle-schoolers get together on an online platform and there’s no faces involved, they can say whatever they want,” he said. “So I’m definitely wary of what kids can do when parents aren’t really around.”

Today, he allows his child to use YouTube with some supervision during meals. He pays for the Premium service to avoid any ads disrupting the content and has auto-play shut off – so a random video he didn’t choose for his child doesn’t start playing. But that’s as far as his planning around his kid’s use of social platforms has gone.

Another parent of three, who works at a major payments company, said he didn’t let his kids use tech at all. His children, who are three, six and 11 years old, are home-schooled, and he worries tech will get in the way of their learning and development.

“We don’t see any benefit of screen time right now,” he said. “Even in our social circle, you see adults who are hooked to their phones and laptops. Then there are other factors like how social platforms are developing all these algorithms, which can expose kids to negative things.”

But, as a tech worker, he understands the importance of his kids learning about data. He realized doing that without screens was a challenge and found few books to help him teach his children, so after a bit of experimentation he wrote and published his own book on data literacy for kids. “We do data visualization or data analysis of board games that we play or other things that we see in our daily lives,” he said. “And it’s been a good success with my kids, so that’s why I decided to write a book and share it.”

Parents in tech struggle to trust their own employers

While the two parents take very different approaches to their kids’ use of tech, both said the irony of working in an industry whose products raise the safety concerns they describe was not lost on them. But their ultimate hope is to make some sort of difference from the inside. “It’s something I think about often as someone who’s working in tech and I’m like, ‘oh, I might be a sellout,’” said the ex-TikTok worker. “But at the same time, I’m learning about it and getting familiar with it so that maybe one day I can make an impact and change and be an advocate for what’s right or what’s wrong.”

However, Bejar’s experience at Meta shows the limits parents in tech might face when trying to effect change from inside a company. When he was working at Meta as a consultant, Bejar recalled trying to get his team to prioritize reducing the number of unwanted sexual advances teens received on Meta-owned platforms. “We could not get that goal adopted by the team,” he said.

“That’s why I reached out to [Zuckerberg] and [the Meta chief product officer, Chris Cox] and [the Instagram CEO, Mosseri],” he continued. “I genuinely thought they don’t know how bad it is. But it’s been two years and they didn’t do anything.”

The Meta spokesperson Liza Crenshaw said it was “absurd” to suggest the company only started conducting user perception surveys in 2019, when Bejar returned as a consultant. Crenshaw said the company conducts qualitative surveys on users’ experience of the platform in addition to collecting quantitative data.

“Prevalence metrics and user perception surveys measure two different things,” Crenshaw said in a statement. “We take actions based on both and work on both continues to this day.”

Bejar knows many well-meaning Meta employees who are working hard to make a difference from the inside. “But the company does not listen to them,” he said.

“I cannot tell you how many parents I’ve met who are working in these places because they believe in their hearts that they make it better, and they work on integrity or trust and safety. And there’s just so many wonderful, committed people willing to do the hardest work in those areas.”

Just last week, months after Bejar’s congressional testimony, Meta announced a slate of new safety features for users under 18 that included hiding “age inappropriate content” even when shared by someone the user follows. The company said it will also prompt teenagers to update their privacy settings. Bejar said he was not convinced these new features were as restrictive as Meta is making them out to be and, he pointed out, there was still no way “for a teen to easily flag an unwanted advance in the product. It is as simple as a button.”

Meta pointed to several Instagram features, including one that prevents users over 19 from contacting teens who don’t follow them. The company also said it allows users to send only one message to a person who doesn’t follow them until that person accepts the message request.

Ultimately, Bejar’s congressional testimony served as his retirement from the tech industry. Now he said his goals were “to help regulators, policymakers, academics, journalists and the public better understand how companies think about these problems and what it would take to address them”.

Is screen time bad for kids?

Complicating matters for parents attempting to navigate how much their kids should use social media or screens generally is that the jury is still out on how harmful screen time actually is for them.

The surgeon general of the US issued an oft-cited warning in May 2023 that “there is growing evidence that social media use is associated with harm to young people’s mental health”. But the warning came with a qualifier: there is not enough evidence to say definitively whether social media is safe. A recent Oxford University study of 2 million people concluded that, without tech companies providing more detailed data, there was not enough evidence to support popular claims that social media and internet use harm mental health. And in 2020, researchers who combed through 40 studies on the relationship between social media use and depression and anxiety found the link was small and inconsistent.

Parents’ fears are not unfounded, however. On screen time, research published in the journal JAMA Pediatrics found that one-year-olds who had two or more hours of screen time a day were 61% more likely to have developmental delays. And whistleblowers such as Frances Haugen and Bejar have shared internal data from companies like Meta that point to a harmful relationship between teenage girls’ use of Instagram and their mental health.

Whatever the research ultimately proves about screens and tech, parents in tech who spoke to the Guardian said they felt companies needed to provide better controls.

“I will certainly lean away from technology platforms that don’t have enough controls that parents can at least have access to,” Frank Padikkala, a tech worker and parent of two, said. “I’m not saying that it just needs to be completely locked down or restricted but at least give us the choice to make those decisions.”

For Bejar, the controls in place on social networks like Instagram are not sufficient because they turn “inherently human interactions into an objective assessment”. There are too few options for users to hide content, or to flag comments and DMs and explain why they made them uncomfortable, even when the material doesn’t violate Meta’s specific policies, he said. “There’s a question of how clearly bad does the content need to be to warrant removal? And that means you set a line somewhere and have to define a criterion where either a computer system or a human can evaluate a piece of content,” Bejar said.

Meta said it has rolled out more than 30 features to improve the experience of teens on its platforms. For example, teens can identify words or emojis they’d like automatically hidden in comments or DMs. A company spokesperson also said users can manage their recommendations by selecting “not interested” on certain posts, or by listing specific topics they want to see less of, after which the system will try to surface less content with those topics in captions or hashtags.

Bejar pointed to the “don’t recommend to me” and “not interested” buttons on Meta’s social networks as an example of a protective measure that isn’t helpful on its own, because “they don’t allow people to say don’t show me this because of this reason”, so similar content may continue to appear.

“You don’t have to be the judge and jury on everything in the world,” he said. “You have to create a human mechanism that helps people have the content that they want and not get the content that they don’t want.”

If Instagram did implement that sort of a feature, Bejar continued, “I would be so happy for my daughter to be on Instagram.”
