
Safeguarding Young People Online: Risks, Safeguards & What Parents Can Do

By Charlie Barker

In Australia today, children and young people routinely live in two overlapping worlds: the physical and the digital. Whether they are gaming on Roblox, chatting with AI companions, exploring “creator modes,” or simply socialising via apps, the internet amplifies both opportunity and risk. As a child and sport psychologist, and a partner to educators, coaches, and parents, I believe we must bolster safeguards, not just rules, so our young people can thrive more safely.


Below I revisit the dangers, highlight Australian examples, and flag what you can do in a school, sport, or home setting. At the end is a checklist you can share or adapt.


Australian examples: when online coercion turned fatal


It is painful, but necessary, to learn from the real stories emerging in our country:


  • In New South Wales, a 16-year-old boy died by suicide after being coerced into exchanging sexualised images online. He was threatened that the images would be sent to his family and friends unless he paid $500 in gift cards. (7NEWS)

  • In Victoria, the coroner’s report into the death of 17-year-old Rohan Cosgriff found he died within an hour of being blackmailed over an image he had shared. The coroner urged stronger education about sextortion and better mental health responses. (ABC)

  • The Australian Federal Police has sounded alarm bells about a rising trend of “sadistic sextortion”, in which children (sometimes as young as 12) are coerced not just into producing sexual content but into extreme violence or self-harm. (Australian Federal Police)

  • In one high-profile grooming conviction, a 38-year-old Sydney man was sentenced to nine years’ jail for grooming children under 16 online. (Australian Federal Police)

  • In a chilling recent case, a 28-year-old man in Coffs Harbour is alleged to have groomed a 15-year-old girl over nine months, pressuring her to send videos of sexual and self-harm content and threatening to abduct, rape or kill her if she did not comply. (The Guardian)

  • More broadly, in Australia’s debate about AI, investigations have reported that some AI chatbots encouraged teens towards self-harm or even suicide. (ABC)


These local cases underscore that the threats are not distant—they are real, close to home, and affecting young people in our schools, communities, and sporting clubs.


Key risks in online spaces for young people


Below are some of the major risk areas to watch in the digital ecosystems kids inhabit:


1. Grooming and coercion in gaming / virtual worlds

  • Platforms like Roblox are under legal scrutiny: a lawsuit in Kentucky accuses Roblox of failing to protect children from predators, citing a lack of age verification, insufficient filters, and poor parental notification. (AP News)

  • In response, Roblox has released an AI tool called Sentinel, designed to detect subtle patterns of grooming in chat rather than just single flagged words (a simplified sketch of the idea follows this list).

  • Yet no system is foolproof: the dynamics of in-game chat, private messages, “creator mode” and private servers can all bypass moderation.

  • Some disturbing reports describe “Cult of Spawn” groups in Roblox, where children as young as 11 were reportedly coerced into rituals, stripping, self-harm, or other exploitative acts. (Be aware: some of these sources are sensational, but they alert us to what is possible.) (The Sun; The Scottish Sun)
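
To make “patterns rather than single flagged words” concrete, here is a deliberately simplified Python sketch. It is not Roblox’s Sentinel, whose internals are not public; the phrases, category names, and scoring below are invented purely for illustration. The contrast it shows: a keyword filter judges one message at a time, while a pattern-based approach scores the whole conversation for combinations of grooming behaviours (secrecy, moving platforms, personal probing) that no single banned word would reveal.

```python
# Deliberately simplified sketch -- NOT Roblox's Sentinel.
# All phrases and categories are invented for illustration only.

RISK_PATTERNS = {
    "isolation": ("don't tell", "our secret", "keep this between us"),
    "platform_move": ("snapchat", "telegram", "dm me on"),
    "personal_probing": ("home alone", "what school", "send a pic"),
}

def keyword_filter(message: str, banned: set) -> bool:
    """Old-style moderation: flag a single message containing a banned word."""
    return any(word in message.lower() for word in banned)

def conversation_risk(messages: list) -> float:
    """Pattern-based moderation: score the whole chat history at once.

    Each grooming category counts at most once, so a chat that combines
    secrecy, a move to another platform, and personal probing scores far
    higher than any one stray keyword could.
    """
    text = " ".join(messages).lower()
    hits = sum(
        any(phrase in text for phrase in phrases)
        for phrases in RISK_PATTERNS.values()
    )
    return hits / len(RISK_PATTERNS)  # 0.0 (no signals) to 1.0 (all signals)

chat = [
    "you're really mature for your age",
    "let's keep this our secret, ok?",
    "do you have snapchat? it's easier to talk there",
]
print(keyword_filter(chat[1], {"secret"}))  # True, but noisy on its own
print(round(conversation_risk(chat), 2))    # 0.67: two of three categories hit
```

Real moderation systems layer machine learning, context, and human review on top of ideas like this; the sketch only illustrates why whole-of-conversation signals matter more than isolated words.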


2. AI chatbots, companion bots & false intimacy

  • Chatbots and AI companions are growing in popularity, especially among children seeking connection or emotional support, or simply exploring out of curiosity. But they pose risks: overreliance, inappropriate content, and manipulative persuasion. (eSafety Commissioner)

  • One such tool, PolyBuzz (formerly PolyAI), is sometimes marketed as a friendly, interactive AI “friend.” While it has a “Teen Mode,” this can reportedly be bypassed, raising the risk of exposure to content or conversations unsuitable for minors. (ClevGuard)

  • The eSafety Commissioner (Australia) warns that AI chatbots and companions “pose serious risks”, especially if children believe they are communicating with a peer or confidant. (eSafety Commissioner)


3. Exposure to sexual or graphic content / blackmail / sextortion

  • Sextortion (threats to distribute intimate images unless demands are met) is rising, and the use of deepfakes and AI-generated content makes it more insidious and harder to detect. (See the New South Wales case above.)

  • Young people may be coerced into producing or sharing explicit content, then blackmailed with it.

  • Graphic content (sexual, violent) may appear unexpectedly—through links, chat, ads, or manipulated media.


4. Mental health, suicide risk, and social media / screen use

  • Heavy or addictive screen and social media use correlates with an increased risk of suicidal thoughts and behaviours in youth: a recent study found that addictive patterns of screen use (not just total screen time) were linked to higher suicide risk. (WCM Newsroom)

  • Social media can amplify existing vulnerabilities: bullying, social comparison, sleep disruption, and exposure to self-harm content. (JAMA Network)

  • Some online groups explicitly target minors, encouraging self-harm or suicidal ideation. (Psychology Today)


How we safeguard young people: principles & strategies


Build trust and open communication first

This is foundational. Technology tools and restrictions help, but only when a child trusts that they can come to you if something online feels wrong.

  • Normalise conversations about online use, risks, shame, pressure, or exposure.

  • Show empathy and curiosity rather than immediate punishment.

  • Use “teachable moments”: watch or play alongside them, discuss what you see, ask questions.


Practical safeguards & parental / caregiver strategies


Here are actions parents/caregivers and educators can implement:

  • Stay involved & “co-use” – Get online with your child: play the game, chat, explore apps together. Tip: you may not catch everything, but you show interest and reduce secrecy.

  • Review device settings & parental controls – Use native OS/device controls (iOS, Android, Windows) and app-level restrictions. Tip: be transparent with children; explain safety, not just punishment.

  • Check history / logs / usage – Periodically look at browser history, app logs, and screen-time summaries. Tip: don’t make it feel like surveillance; balance trust with oversight.

  • Limit unmonitored use / enforce breaks – For younger children especially, limit where and when they use devices (e.g. no phone in the bedroom after dark). Tip: if you can’t keep track of everything, reducing exposure helps.

  • Whitelist or curate content – Choose apps, servers, or games known to be safer or moderated. Caveat: even “safe” does not mean completely safe.

  • Teach critical thinking & digital literacy – Educate your child on what grooming and manipulation look like and what the red flags are. Tip: role-play scenarios (“someone asked me to send a photo… what do I do?”).

  • Set rules about contact and strangers – Agree on rules: no meeting people from online in real life without oversight, no private video calls with strangers. Tip: revisit these rules regularly.

  • Keep chat, voice & friend requests safe – Encourage usernames (not real names), disable auto-accept or auto-add for friends, and avoid voice chat for younger children. Tip: use platforms that allow you, as the parent, to moderate or receive alerts.

  • Know emergency & support resources – Make sure young people know contacts for help (crisis lines, trusted adults). For Australia: eSafety, Kids Helpline, Lifeline.

  • Hold periodic “check-in” conversations – At intervals, ask: “Is there anything online making you uncomfortable?” Tip: use open, nonjudgmental language.



Parent / caregiver checklist for online safety

Here’s a checklist you can adapt and use:

  1. Have an open conversation about online risks

    • Explain grooming, blackmail, violence, AI misuse.

    • Share stories (age-appropriate) to illustrate dangers.

  2. Set expectations and rules together

    • When and where devices are allowed.

    • Which apps/games are acceptable.

    • Who they can talk with and how.

  3. Install and review parental controls

    • Device-level limits (screen time, app restrictions).

    • Router or network-level filtering if possible (e.g. pointing your home router at a family-filtering DNS service).

    • Use built-in controls in apps/gaming platforms.

  4. Stay involved / co-view or co-play

    • Explore apps/games together.

    • Occasionally sit with them while they use.

    • Ask to show you what they’re doing.

  5. Check usage / history

    • Apps used, friend lists, chat logs.

    • Watch for unfamiliar contacts or sudden changes in behaviour.

  6. Teach red-flag signals

    • Someone pressuring them.

    • Requests for naked or semi-naked photos.

    • Being asked to keep conversations secret.

    • Threats or blackmail.

    • AI-generated images claiming to be them.

  7. Encourage critical questioning

    • Who is on the other end?

    • Why are they asking?

    • Is something making me uncomfortable?

  9. Set a backup emergency plan

    • “If something happens online, tell me.”

    • Provide crisis contacts (e.g. Lifeline, Kids Helpline, eSafety).

    • Agree on an action plan (block, screenshot, report, show parent).

  9. Revisit and adapt

    • As children grow, rules will need revision.

    • Keep the dialogue ongoing.



Framing online safety from a psychological lens

Young people are often seeking connection, belonging, identity, and validation. Predators, exploiters, and harmful online communities prey on these emotional needs.


  • Belonging & identity vulnerability: Teens and tweens may feel isolated or misunderstood, and can be drawn toward extreme online communities that promise acceptance, even at a cost. Adversarial online groups encouraging self-harm or rituals exploit this.

  • Emotional dysregulation & impulsivity: Many online interactions happen quickly without time for reflection—especially in chat or voice, where pressure can escalate.

  • Secrecy and shame: Shame is a powerful silencer. A young person may hesitate to share what’s happening, fearing judgment, punishment, or “getting in trouble.”

  • Cumulative risk: The interplay of adverse offline experiences (bullying, mental health struggles) and harmful online content can tip a young person toward crisis.


Your role as psychologist, parent, or educator is to lower risk and build resilience: emotional regulation, safe attachment, coping skills, self-worth, and avenues for help. Technical measures support, but cannot replace, relational safety.



Resources & further reading / organisations

  • eSafety Commissioner (Australia) – resources on AI chatbot risks and online safety guidance.

  • ROOST (Robust Open Online Safety Tools) – a child safety initiative by Google, OpenAI, Roblox, and Discord to improve detection and reporting of child exploitation content. (Maginative)

  • Crisis helplines (Australia): Lifeline, Kids Helpline, eHeadspace

  • Key research: “Social Media and Suicide Risk in Youth” explores the complexities of social media and suicide risk. (JAMA Network)

  • Reports on AI chatbot risks and children: “AI chatbots and companions – risks to children and young people”. (eSafety Commissioner)


We can’t guarantee a completely risk-free online world, but we can build safer, more supportive spaces for young people to explore and connect. True online safety comes from balance: human connection and trust, so children feel safe to speak up when something feels wrong; practical safeguards, to limit exposure and respond quickly to threats; and education and critical thinking, so they can spot manipulation, coercion, and red flags before harm occurs. Most importantly, it requires ongoing adaptation: as technology evolves, so too must the conversations, boundaries, and strategies we use to protect and empower the next generation.

 
 
 
