Grok, the built-in chatbot on X, is facing heightened scrutiny after acknowledging it generated and shared an AI image depicting two young girls in sexualized attire.
In a public post on X, Grok admitted the content "violated ethical standards" and "potentially U.S. laws on child sexual abuse material (CSAM)." The chatbot added, "It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues."
That admission alone is alarming. What followed revealed a far broader pattern.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide, free when you join my CYBERGUY.COM newsletter.
OPENAI TIGHTENS AI RULES FOR TEENS BUT CONCERNS REMAIN

The fallout from this incident has triggered global scrutiny, with governments and safety groups questioning whether AI platforms are doing enough to protect children. (Silas Stein/picture alliance via Getty Images)
The apology that raised more questions
Grok's apology appeared only after a user prompted the chatbot to write a heartfelt explanation for people lacking context. In other words, the system did not proactively address the issue. It responded because someone asked it to.
Around the same time, researchers and journalists uncovered widespread misuse of Grok's image tools. According to monitoring firm Copyleaks, users were generating nonconsensual, sexually manipulated images of real women, including minors and well-known figures.
After reviewing Grok's publicly accessible photo feed, Copyleaks identified a conservative rate of about one nonconsensual sexualized image per minute, based on images involving real people with no clear indication of consent. The firm says the misuse escalated quickly, shifting from consensual self-promotion to large-scale harassment enabled by AI.
Copyleaks CEO and co-founder Alon Yamin said, "When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal."
PROTECTING KIDS FROM AI CHATBOTS: WHAT THE GUARD ACT MEANS

Grok admitted it generated and shared an AI image that violated ethical standards and may have breached U.S. child protection laws. (Kurt "CyberGuy" Knutsson)
Sexualized images of minors are illegal
This is not a gray area. Generating or distributing sexualized images of minors is a serious criminal offense in the United States and many other countries. Under U.S. federal law, such content is classified as child sexual abuse material. Penalties can include 5 to 20 years in prison, fines up to $250,000 and mandatory sex offender registration. Similar laws apply in the U.K. and France.
In 2024, a Pennsylvania man received about 8 years in prison for creating and possessing deepfake CSAM involving child celebrities. That case set a clear precedent. Grok itself acknowledged this legal reality in its post, stating that AI images depicting minors in sexualized contexts are illegal.
The scale of the problem is growing fast
A July report from the Internet Watch Foundation, a nonprofit that tracks and removes child sexual abuse material online, shows how rapidly this threat is accelerating. Reports of AI-generated child sexual abuse imagery jumped by 400% in the first half of 2025 alone. Experts warn that AI tools lower the barrier to potential abuse. What once required technical skill or access to hidden forums can now happen through a simple prompt on a mainstream platform.
Real people are being targeted
The harm is not abstract. Reuters documented cases where users asked Grok to digitally undress real women whose photos were posted on X. In multiple documented cases, Grok fully complied. Even more disturbing, users targeted images of a 14-year-old actress, Nell Fisher, from the Netflix series Stranger Things. Grok later admitted there were isolated cases where users received images depicting minors in minimal clothing. In another Reuters investigation, a Brazilian musician described watching AI-generated bikini images of herself spread across X after users prompted Grok to alter a harmless photo. Her experience mirrors what many women and girls are now facing.
Governments respond worldwide
The backlash has gone global. In France, multiple ministers referred X to an investigative agency over possible violations of the EU's Digital Services Act, which requires platforms to prevent and mitigate the spread of illegal content. Violations can trigger heavy fines. In India, the country's IT ministry gave xAI 72 hours to submit a report detailing how it plans to stop the spread of obscene and sexually explicit material generated by Grok. Grok has also warned publicly that xAI could face possible probes from the Department of Justice or lawsuits tied to these failures.
LEAKED META DOCUMENTS SHOW HOW AI CHATBOTS HANDLE CHILD EXPLOITATION

Researchers later found Grok was widely used to generate nonconsensual, sexually altered images of real women, including minors. (Nikolas Kokovlis/NurPhoto via Getty Images)
Concerns grow over Grok's safety and government use
The incident raises serious concerns about online privacy, platform safety and the safeguards designed to protect minors.
Elon Musk, the owner of X and founder of xAI, had not offered a public response at the time of publication. That silence comes at a sensitive time. Grok has been authorized for official government use under an 18-month federal contract. This approval was granted despite objections from more than 30 consumer advocacy groups that warned the system lacked proper safety testing.
Over the past year, Grok has been accused by critics of spreading misinformation about major news events, promoting antisemitic rhetoric and sharing misleading health information. It has also competed directly with tools like ChatGPT and Gemini while operating with fewer visible safety restrictions. Each controversy raises the same question: Can a powerful AI tool be deployed responsibly without strong oversight and enforcement?
What parents and users should know
If you encounter sexualized images of minors or other abusive material online, report it immediately. In the United States, you can contact the FBI tip line or seek help from the National Center for Missing & Exploited Children.
Do not download, share, screenshot or interact with the content in any way. Even viewing or forwarding illegal material can expose you to serious legal risk.
Parents should also talk with children and teens about AI image tools and social media prompts. Many of these images are created through casual requests that do not feel dangerous at first. Teaching kids to report content, close the app and tell a trusted adult can stop harm from spreading further.
Platforms may fail. Safeguards may lag. But early reporting and open conversations at home remain one of the most effective ways to protect children online.
Take my quiz: How safe is your online security?
Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you'll get a personalized breakdown of what you're doing right and what needs improvement. Take my Quiz here: Cyberguy.com
Kurt's key takeaways
The Grok scandal highlights a dangerous reality. As AI spreads faster, these systems amplify harm at an unprecedented scale. When safeguards fail, real people suffer and children face serious risk. At the same time, trust cannot rest on apologies issued after harm occurs. Instead, companies must earn trust through strong safety design, constant monitoring and real accountability when problems emerge.
Should any AI system be approved for government or widespread public use before it proves it can reliably protect children and prevent abuse? Let us know by writing to us at Cyberguy.com.
CLICK HERE TO DOWNLOAD THE FOX NEWS APP
Copyright 2025 CyberGuy.com. All rights reserved.
Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt's free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.