SCGP Newsroom


AI Regulation: Check Out AI Laws around the World


After the EU AI Act, the European Union’s key legislation on artificial intelligence, came into force on 1 August 2024, many countries have been rolling out their own rules and frameworks for AI governance. The goal is to strike the right balance between protecting citizens’ rights and fostering innovation and fair competition. But as technology advances at lightning speed, will these regulatory fences be able to keep pace with AI’s rapid evolution? Let’s take a closer look.

AI Regulation Abroad

While countries worldwide share the same goal of effective AI governance, the details and flexibility of enforcement vary according to each nation’s vision.

Europe

The European Union has taken a proactive stance with the EU AI Act, which categorizes AI systems into four risk levels:

1) Minimal risk – Systems at this level may be used freely, without regulatory oversight.

2) Limited risk – Systems at this level are subject to transparency obligations; for example, users must be informed when they are interacting with a chatbot to avoid confusion.

3) High risk – Systems at this level could impact health, safety, or fundamental rights and are therefore required to obtain authorization before deployment.

4) Unacceptable risk – Systems at this level are considered harmful and are strictly prohibited, such as real-time remote biometric identification in public spaces for law enforcement, which infringes on privacy rights.

The EU AI Act applies across all 27 EU member states. Outside the bloc, approaches differ. The United Kingdom, for example, emphasizes fostering innovation: instead of a single overarching law, it delegates regulatory responsibilities to sector-specific agencies, making the framework more flexible. The UK is currently working toward drafting future legislation to expand AI governance.

The United States (US)

The US does not have a federal AI law, leaving regulation to individual states. The Colorado AI Act, for example, will take effect in 2026 and apply only within Colorado. The law focuses on preventing bias and discrimination in AI systems, holding both developers and deployers accountable, in response to public debate over high-profile cases. The COMPAS algorithm, for one, has been widely criticized for disproportionately labeling Black defendants as high risk of reoffending compared with white defendants. Meanwhile, Workday, a finance and HR software company, has faced lawsuits alleging that its AI-powered recruitment tools discriminated against applicants over the age of 40.

Asia

China has imposed strict measures to curb content manipulation, requiring AI-generated material on social media to be clearly labeled. Singapore, on the other hand, takes a more flexible approach by updating existing laws. Key initiatives include the Model AI Governance Framework and the AI Verify program, which is designed to provide guidance on ethics and transparency in AI use.

Thailand’s AI Law Development

At present, Thailand regulates AI through soft law in the form of guidelines while working toward its first draft AI Act. Led by the Electronic Transactions Development Agency (ETDA) under the Ministry of Digital Economy and Society (DE), the drafting process incorporates an AI Sandbox at the conceptualization stage: a testing ground where stakeholders can co-design and lay the foundations of a governance framework, with the ultimate goal of protecting users' rights without stifling innovation or fair competition.

Developing AI laws is a delicate task, as each country must consider its own context. For example, the EU AI Act takes a strict approach to safeguard civil rights, while many other nations adopt more flexible frameworks to avoid hindering innovation. This is why keeping up with AI regulations is no longer a distant concern. Whether you are a user or an investor, understanding these developments helps ensure safer AI adoption, uncover new opportunities, and foster sustainable growth for the future.
