AI regulation has become a primary focus of leaders across the globe. Singapore, the EU, China and the USA have adopted varying regulatory approaches to oversee AI deployment and use, mitigate potential risks, encourage innovation and protect relevant stakeholders' rights and interests.
The Singapore government has promoted the responsible use of AI through non-binding, voluntary guidelines for individuals and businesses that use, develop and deploy AI. For instance, through the Model Artificial Intelligence Governance Framework, the government has released guidance on best governance practices for private sector organisations that deploy AI solutions at scale.
The Singapore government has also actively promoted AI research and development through the AI Verify Foundation, a not-for-profit organisation launched to create a neutral platform for open collaboration on testing and governing AI. Recently, the foundation and the Info-communications Media Development Authority issued the Model AI Governance Framework for Generative AI, which seeks to foster a trusted AI ecosystem in a practical and holistic manner. They also launched the Generative AI Evaluation Sandbox to develop new baseline benchmarks and tests for generative AI systems in collaboration with global AI model developers, app deployers and external testers.
In addition, Singapore has developed a governance testing framework and toolkit, AI Verify, to enable organisations to determine if their AI systems are consistent with internationally recognised AI governance principles such as safety, fairness, accountability, human agency and oversight. After testing, AI Verify generates a report which organisations can use to demonstrate transparency about their AI systems and build trust with their stakeholders.
(See also our article on Overview of Singapore’s AI Regulatory Landscape.)
On 13 March 2024, the European Parliament passed the Artificial Intelligence Act (AI Act), which establishes a tiered, risk-based approach to regulating AI systems made available in the EU (whether or not their providers are established in the EU), ranging from prohibited unacceptable-risk practices, through high-risk systems subject to strict obligations, to limited-risk and minimal-risk systems with lighter or no requirements.
Engaging in a prohibited AI practice could result in an administrative fine of up to EUR 35 million or, for a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
To complement the AI Act, the European Commission has also proposed the AI Liability Directive, which would enable consumers to claim compensation for non-contractual damage caused by AI products and services. Among other things, the directive establishes a rebuttable presumption of a causal link between a defendant's breach of a duty of care and the AI system output (or failure to produce an output) that caused the damage. The directive aims to build trust in AI systems, reduce businesses' legal uncertainty over AI-related liability claims and avoid fragmentation of the liability rules adopted by EU Member States.
Unlike the EU's AI Act, which applies a risk-based approach, China currently regulates AI systems by type (although a more comprehensive AI law is in the works). For instance, the Interim Measures for the Management of Generative Artificial Intelligence Services (Interim Measures) regulate generative AI services that provide content (e.g., text, images, audio) to the public in China, imposing obligations to protect national interests and image, prevent discrimination in the design and training of AI, respect others' personal rights and intellectual property rights, and maintain content reliability. A breach of the Interim Measures may attract penalties under other laws (e.g., the Cybersecurity Law, the Data Security Law). Furthermore, Chinese regulatory agencies are authorised to take action against persons outside China who provide generative AI services to persons in China without complying with the Interim Measures.
In addition, China has separate regulations on deep synthesis services (i.e., the Administrative Provisions on Deep Synthesis in Internet-based Information Services) and algorithm recommendation services (i.e., the Administrative Provisions on Recommendation Algorithms in Internet-based Information Services). Among other requirements, the former prohibits the generation of fake news and requires certain synthetically generated content to be labelled, while the latter requires providers of algorithmic recommendation services with public opinion properties or social mobilisation capabilities to report and complete filing formalities. Both sets of provisions prohibit the use of AI in a manner that harms national security, the public interest, economic order or others' lawful rights and interests.
The USA currently regulates AI development and use through sector-specific and state-specific laws. However, non-binding guidelines have also been issued at the federal level.
There is also a push to enact a federal AI framework, the SAFE Innovation Framework, to promote AI innovation while requiring AI systems to be secure and transparent and AI developers to be accountable.
Regulatory frameworks governing AI will continue to develop as the role, use and risks of AI technology evolve. Though Singapore, the EU, China and the USA take varying approaches to AI regulation, all notably promote similar principles of transparent, ethical, accountable and responsible AI, and all support AI research and innovation. As the regulatory landscape changes, organisations that wish to commercialise their products and services across jurisdictions should stay up to date with their compliance obligations.
OrionW regularly advises clients on artificial intelligence matters. For more information about responsible development and deployment of artificial intelligence, or if you have questions about this article, please contact us at info@orionw.com.
Disclaimer: This article is for general information only and does not constitute legal advice.