OpenAI says it is taking stronger steps to protect teens using its chatbot. Recently, the company updated its behavior guidelines for users under 18 and released new AI literacy tools for parents and teens. The decision comes as pressure mounts across the tech industry. Lawmakers, educators, and child safety advocates want proof that AI companies can protect young users. Several recent tragedies have raised serious questions about the role AI chatbots may play in teen mental health. While the updates sound promising, many experts say the real test will be how these rules work in practice.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my CYBERGUY.COM newsletter
THIRD-PARTY BREACH EXPOSES CHATGPT ACCOUNT DETAILS

OpenAI announced tougher safety rules for teen users as pressure grows on tech companies to prove AI can protect young people online. (Photographer: Daniel Acker/Bloomberg via Getty Images)
What OpenAI's new teen rules actually say
OpenAI's updated Model Spec builds on existing safety limits and applies to teen users ages 13 to 17. It continues to block sexual content involving minors and discourages self-harm, delusions, and manic behavior. For teens, the rules go further. The models must avoid immersive romantic roleplay, first-person intimacy, and violent or sexual roleplay, even when non-graphic. They must use extra caution when discussing body image and eating behaviors. When safety risks appear, the chatbot should prioritize protection over user autonomy. It should also avoid giving advice that helps teens hide risky behavior from caregivers. These limits apply even if a prompt is framed as fictional, historical, or educational.
The 4 principles OpenAI says it uses to protect teens
OpenAI says its approach to teen users follows four core principles:
- Put teen safety first, even when it limits freedom
- Encourage real-world support from family, friends, or professionals
- Speak with warmth and respect without treating teens like adults
- Be transparent and remind users that the AI is not human
The company also shared examples of the chatbot refusing requests like romantic roleplay or extreme appearance changes.
WHY PARENTS MAY WANT TO DELAY SMARTPHONES FOR KIDS

The company updated its chatbot guidelines for users ages 13 to 17 and launched new AI literacy tools for parents and teens. (Photographer: Daniel Acker/Bloomberg via Getty Images)
Teens are driving the AI safety debate
Gen Z users are among the most active chatbot users today. Many rely on AI for homework help, creative projects, and emotional support. OpenAI's new deal with Disney could draw even more young users to the platform. That growing popularity has also brought scrutiny. Recently, attorneys general from 42 states urged major tech companies to add stronger safeguards for children and vulnerable users. At the federal level, proposed legislation could go even further. Some lawmakers want to block minors from using AI chatbots entirely.
Why experts question whether AI safety rules work
Despite the updates, many experts remain cautious. One major concern is engagement. Advocates argue chatbots often encourage prolonged interaction, which can become addictive for teens. Refusing certain requests could help break that cycle. Still, critics warn that examples in policy documents are not proof of consistent behavior. Past versions of the Model Spec banned excessive agreeableness, yet models continued mirroring users in harmful ways. Some experts link this behavior to what they call AI psychosis, where chatbots reinforce distorted thinking instead of challenging it.
In one widely reported case, a teen who later died by suicide spent months interacting with a chatbot. Conversation logs showed repeated mirroring and validation of distress. Internal systems flagged hundreds of messages related to self-harm. Yet the interactions continued. Former safety researchers later explained that earlier moderation systems reviewed content after the fact rather than in real time. That allowed harmful conversations to continue unchecked. OpenAI says it now uses real-time classifiers across text, images, and audio. When systems detect serious risk, trained reviewers may step in, and parents may be notified.
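The difference between after-the-fact log review and real-time screening can be illustrated with a deliberately simplified sketch. This is not OpenAI's actual system; a production classifier would be a trained model, not a keyword list, and the pattern list and function names here are hypothetical:

```python
import re

# Hypothetical risk patterns for illustration only; a real classifier
# would use a trained model across text, images, and audio.
RISK_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"\bself[- ]harm\b", r"\bsuicide\b", r"\bhurt myself\b")
]

def screen_message(text: str) -> str:
    """Screen a message BEFORE the chatbot responds, rather than
    reviewing logs after the fact. Returns 'escalate' or 'allow'."""
    if any(p.search(text) for p in RISK_PATTERNS):
        return "escalate"  # e.g., route to trained reviewers, notify parents
    return "allow"
```

The key design point is where the check sits in the pipeline: screening each message before a reply is generated is what makes escalation possible while a conversation is still happening.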
Some advocates praise OpenAI for publicly sharing its under-18 guidelines. Many tech companies do not offer that level of transparency. Still, experts stress that written rules are not enough. What matters is how the system behaves during real conversations with vulnerable users. Without independent measurement and clear enforcement data, critics say these updates remain promises rather than proof.
How parents can help teens use AI safely
OpenAI says parents play a key role in helping teens use AI responsibly. The company stresses that tools alone are not enough. Active guidance matters most.
1) Talk with teens about AI use
OpenAI encourages regular conversations between parents and teens about how AI fits into daily life. These discussions should focus on responsible use and critical thinking. Parents are urged to remind teens that AI responses are not facts and can be wrong.
2) Use parental controls and safeguards
OpenAI provides parental controls that let adults manage how teens interact with AI tools. These tools can limit features and add oversight. The company says safeguards are designed to reduce exposure to higher-risk topics and unsafe interactions. Here are the steps OpenAI recommends parents take.
- Confirm your teen's account status: Parents should make sure their teen's account reflects the correct age. OpenAI applies stronger safeguards to accounts identified as belonging to users under 18.
- Review available parental controls: OpenAI offers parental controls that allow adults to tailor a teen's experience. These controls can limit certain features and add extra oversight around higher-risk topics.
- Understand content safeguards: Teen accounts are subject to stricter content rules. These safeguards reduce exposure to topics like self-harm, sexualized roleplay, unsafe activities, body image concerns, and requests to hide unsafe behavior.
- Pay attention to safety notifications: If the system detects signs of serious risk, OpenAI says additional safeguards may apply. In some cases, this can include reviews by trained staff and parent notifications.
- Revisit settings as features change: OpenAI recommends parents stay informed as new tools and features roll out. Safeguards may expand over time as the platform evolves.
3) Watch for excessive use
OpenAI says healthy use matters as much as content safety. To support balance, the company has added break reminders during long sessions. Parents are encouraged to watch for signs of overuse and step in when needed.
4) Keep human support front and center
OpenAI emphasizes that AI should never replace real relationships. Teens should be encouraged to turn to family, friends, or professionals when they feel stressed or overwhelmed. The company says human support remains essential.
5) Set boundaries around emotional use
Parents should make clear that AI can help with schoolwork or creativity. It should not become a primary source of emotional support.
6) Ask how teens actually use AI
Parents are encouraged to ask what teens use AI for, when they use it, and how it makes them feel. These conversations can reveal unhealthy patterns early.
7) Watch for behavior changes
Experts advise parents to look for increased isolation, emotional reliance on AI, or treating chatbot responses as authority. These can signal unhealthy dependence.
8) Keep devices out of bedrooms at night
Many specialists recommend keeping phones and laptops out of bedrooms overnight. Reducing late-night AI use can help protect sleep and mental health.
9) Know when to involve outside help
If a teen shows signs of distress, parents should involve trusted adults or professionals. AI safety tools cannot replace real-world care.
WHEN AI CHEATS: THE HIDDEN DANGERS OF REWARD HACKING

Lawmakers and child safety advocates are demanding stronger safeguards as teens increasingly rely on AI chatbots. (Photographer: Gabby Jones/Bloomberg via Getty Images)
Pro Tip: Add strong antivirus software and multi-factor authentication
Parents and teens should enable multi-factor authentication (MFA) on teen AI accounts whenever it is available. OpenAI allows users to turn on multi-factor authentication for ChatGPT accounts.
To enable it, go to OpenAI.com and sign in. Scroll down and click the profile icon, then select Settings and choose Security. From there, turn on multi-factor authentication (MFA). You will then be given two options. One option uses an authenticator app, which generates one-time codes during login. Another option sends 6-digit verification codes by text message through SMS or WhatsApp, depending on the country code. Enabling multi-factor authentication adds an extra layer of protection beyond a password and helps reduce the risk of unauthorized access to teen accounts.
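For readers curious how the authenticator-app option works under the hood, those one-time codes are typically time-based one-time passwords (TOTP) as defined in RFC 6238. A minimal sketch, assuming the common defaults of HMAC-SHA1, a 30-second window, and 6 digits (this is the general standard, not OpenAI-specific code):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA1).
    secret_b32 is the base32 secret the service shows when you enroll."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole intervals since the Unix epoch becomes the HMAC counter.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: last nibble picks a 4-byte slice of the digest.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both the app and the server derive the code from the shared secret and the current time, the code changes every 30 seconds and is useless to an attacker shortly after it appears, which is why authenticator apps are generally considered stronger than SMS codes.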
Also, consider adding strong antivirus software that can help block malicious links, fake downloads, and other threats teens may encounter while using AI tools. This adds an extra layer of protection beyond any single app or platform. Using strong antivirus protection and two-factor authentication together helps reduce the risk of account takeovers that could expose teens to unsafe content or impersonation risks.
Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you'll get a personalized breakdown of what you're doing right and what needs improvement. Take my Quiz here: Cyberguy.com
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Kurt's key takeaways
OpenAI's updated teen safety rules show the company is taking growing concerns seriously. Clearer limits, stronger safeguards, and more transparency are steps in the right direction. Still, policies on paper are not the same as behavior in real conversations. For teens who rely on AI every day, what matters most is how these systems respond in moments of stress, confusion, or vulnerability. That is where trust is built or lost. For parents, this moment calls for balance. AI tools can be helpful and creative. They also require guidance, boundaries, and supervision. No set of controls can replace real conversations or human support. As AI becomes more embedded in our everyday lives, the focus must stay on outcomes, not intentions. Protecting teens will depend on consistent enforcement, independent oversight, and active family involvement.
Should teens ever rely on AI for emotional support, or should those conversations always stay human? Let us know by writing to us at Cyberguy.com
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt's free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.