South Korea’s KMCC Urges X to Implement Minor‑Protection Measures for Grok AI

Regulatory Request Delivered to X on January 14, 2026

The Korea Media and Communications Commission (KMCC) sent a formal request to X on Wednesday, urging the platform to install safeguards that block sexual content generated by the Grok artificial‑intelligence model and to restrict teenage access to such material [1]. The commission highlighted growing concerns over deep‑fake sexual imagery proliferating on AI‑driven services [1]. The KMCC framed the request as a proactive step to protect minors while allowing continued technological advancement [1].

Platforms Must Appoint Minor‑Protection Officials and Report Annually

Under existing South Korean law, social‑media services are required to designate a dedicated minor‑protection officer and submit yearly compliance reports to the KMCC [1]. These obligations fall under the same law that imposes criminal penalties for creating, distributing, or storing non‑consensual sexual deep‑fake content [1]. The commission’s request reiterates them as baseline expectations for X’s operations [1].

Chair Kim Jong‑cheol Stresses Safety and Innovation Balance

KMCC chair Kim Jong‑cheol emphasized that safeguarding minors is a core component of the regulator’s broader agenda to nurture safe, innovative technology development [1]. He cast protecting youth and fostering responsible AI progress as dual priorities [1]. The statement signals the commission’s intent to tighten oversight without stifling industry growth [1].

No Explicit Deadline or Enforcement Mechanism Provided

The KMCC’s release outlines the desired protective measures but does not specify a timeline for X to implement them [1]. Likewise, the document lacks details on how non‑compliance will be monitored or penalized beyond existing legal frameworks [1]. This omission leaves the enforcement pathway ambiguous, prompting industry observers to watch for subsequent regulatory actions [1].

Grok AI Model Identified as Source of Concern

Grok, the AI model at the center of the regulator’s request, has been linked to the generation of sexual deep‑fake content that could be accessed by minors on X [1]. The commission singled out Grok to illustrate the tangible risks posed by advanced generative AI tools [1]. Addressing Grok’s outputs is presented as a priority to curb potential harm to under‑age users [1].

Timeline

2024 – Australia enacts a ban on social‑media use for anyone under 16, targeting platforms such as Instagram, X and TikTok, and cites rising cyber‑bullying, scams and other harmful content as justification [2][3].

Dec 16, 2025 – Kim Jong‑cheol, a Yonsei Law School professor nominated to head the Korea Media and Communications Commission, tells a parliamentary confirmation hearing that considering age‑restriction policies like Australia’s is “absolutely necessary.” He pledges a strong commitment to youth protection and vows to strengthen AI‑focused dispute‑resolution systems while promoting AI adoption for competitiveness [2][3].

Dec 16, 2025 – During the same hearing, Kim warns that sophisticated AI is increasingly used for hacking and cyber‑terrorism, calls the regulator’s protection role “weakened,” and urges simplification of platform subscription and withdrawal procedures after the Coupang data breach, insisting on equal treatment for users [2][3].

Jan 14, 2026 – The Korea Media and Communications Commission formally requests X to put in place minor‑protection measures for the Grok AI model, demanding safeguards against sexual deep‑fake content, the appointment of a dedicated minor‑protection officer, and an annual compliance report under existing criminal‑penalty law [1].

Jan 14, 2026 – KMCC chair Kim Jong‑cheol frames safety and innovation as twin priorities, stating the commission aims to support sound, safe development of new technologies while updating regulations to mitigate side effects and protect minors [1].
