South Korea Enacts First Global AI Safety Law, Sets Watermark and Penalty Rules
Law Takes Immediate Effect and Establishes Regulatory Framework
South Korea’s AI Basic Act came into force on 22 January 2026, making it the world’s first comprehensive AI safety statute [1][2]. The act creates a nationwide regulatory structure aimed at curbing misinformation, deepfakes, and other AI-related hazards, and it places accountability on both developers and users of AI systems across sectors. Officials present the legislation as a government-wide adoption of AI rules not yet seen anywhere else [1][2].
High-Risk AI Defined and Transparency Obligations Imposed
The act classifies “high-risk” (or “high-impact”) AI as models whose outputs can significantly affect daily life, such as employment decisions, loan assessments, or medical advice [1][2][3]. Entities deploying such systems must clearly inform users that the service is AI-based and must ensure safety measures are in place. All AI-generated content must carry a visible watermark indicating its artificial origin, which officials describe as the minimum safeguard against abuse [1][2][3]. Failure to comply can trigger enforcement action.
Global Tech Firms Must Appoint Korean Representatives
Companies that exceed thresholds for global revenue, domestic sales, or number of users in Korea must designate a local representative to liaise with regulators [1][2]. OpenAI and Google are cited as examples that currently fall under this requirement, bringing their Korean operations under direct oversight. The representative serves as the point of contact for investigations and compliance inquiries, extending the law’s reach to major multinational AI providers.
Penalties, Grace Period, and Ongoing Policy Blueprint
Violations can draw fines of up to 30 million won, with a one-year grace period intended to give businesses time to adjust to the new obligations [1][2]. The science ministry will issue an AI policy blueprint every three years to guide future regulatory updates [1][2]. A support desk will answer general queries within three days and complete in-depth reviews within 14 days. These mechanisms are meant to balance enforcement with industry assistance.
Start-up Community Warns of Ambiguity and Compliance Costs
The law establishes a National AI Committee and an AI Safety Institute to oversee compliance, but it defines “high-impact” AI with vague, value-based criteria rather than precise technical thresholds [3]. In a survey of AI startups, only 2 percent said they feel prepared for enforcement, citing limited resources and unclear obligations [3]. The ambiguity may force firms to redesign models after deployment, risking significant financial strain, especially for early-stage companies. Analysts argue that clearer, industry-specific guidelines are needed to keep the law from stifling domestic innovation while still protecting public safety [3].
Sources (3 articles)
- [1] Yonhap: South Korea enacts world's first comprehensive AI safety law: details the act’s immediate effect, framework, penalties, watermark mandate, and support desk for businesses.
- [2] Yonhap: South Korea becomes first nation to enact comprehensive AI safety law: reiterates core provisions, emphasizes developer and user accountability, and outlines the three-year policy blueprint requirement.
- [3] Yonhap: South Korea's pioneering AI law aims to set safeguards but risks hindering startups: highlights ambiguous high-impact definitions, low startup preparedness, regulatory asymmetry, and calls for clearer thresholds to protect innovation.
All related articles (7 articles)
- Yonhap: South Korea enacts world's first comprehensive AI safety law
- Yonhap: South Korea becomes first nation to enact comprehensive AI safety law
- Yonhap: South Korea's pioneering AI law aims to set safeguards but risks hindering startups
- Yonhap: Korean creators push for full review of AI action plan over copyright concerns
- Yonhap: South Korea passes AI framework bill to take effect Jan 22
- Yonhap: South Korea to enforce comprehensive AI framework act in January amid trust-innovation balance
- Yonhap: AI law set to be implemented next month amid biz concerns