ASEAN has published a guide on AI governance and ethics to set guidelines for developing and launching AI-powered products and services.


ASEAN Publishes AI Governance and Ethics Guide

Date: February 17, 2024
Author: OrionW

The Association of Southeast Asian Nations (ASEAN) has published the ASEAN Guide on AI Governance and Ethics (AI Guide), designed as a practical guide for organisations looking to design, develop, and deploy traditional AI technologies in commercial and non-military or dual-use applications.  Adoption of the AI Guide is voluntary, and the guide is meant to be updated periodically.

The AI Guide promotes the following guiding principles:

  • Transparency and Explainability – The use of an AI system, and the purpose of that use, should be disclosed to individuals.  An AI system’s decision-making process should be explained in simple terms or, alternatively, information that could build trust in the AI system’s outcomes (such as demonstrating the system’s repeatability and auditability) should be shared.
  • Fairness and Equity – Effort must be made during the development and monitoring of an AI system to ensure that it does not propagate bias or result in discrimination.
  • Security and Safety – Before an AI system is deployed and used, appropriate risk assessments should be carried out and measures to mitigate the identified risks should be implemented.  Vulnerability assessments and penetration testing should also be performed to ensure the safety of AI systems.
  • Human-centricity – AI systems should be developed to pursue benefits for human society and to protect humans from potential harms.
  • Privacy and Data Governance – Protocols should be in place for responsible data collection and processing in accordance with applicable data protection regulations.  Governance frameworks should set out who can access data, when data can be accessed, and for what reasonable purposes.
  • Accountability and Integrity – Processes should be in place to ensure compliance with legislation, internal policies, and ethical guidelines.  Deployers should be accountable for decisions made by their AI systems and, in case of malfunction or misuse, should implement measures to prevent similar future incidents.
  • Robustness and Reliability – AI systems should be subject to rigorous testing before deployment to ensure consistent and robust results in an environment similar to live use cases.


With these principles in mind, the AI Guide recommends that organisations adopt the following governance framework:

  • Internal governance structures and measures – Organisations should implement an internal governance structure, such as a multi-disciplinary and broad-based oversight body and an escalation procedure, to assess and manage the risks of AI systems.  The governance structure should clearly define roles for effective governance and should be agile to keep pace with AI development.  Overall awareness of AI ethics within the organisation should also be raised.    
  • Human involvement in AI-augmented decision-making – Organisations should carefully determine the appropriate level of human involvement in AI-augmented decision-making.  In choosing among human-in-the-loop (where AI systems only provide recommendations that humans can use to make decisions), human-over-the-loop (where humans act as supervisors who can override an AI system’s decisions), and human-out-of-the-loop (where an AI system has full control that cannot be overridden by humans), organisations should consider their purpose for using AI and the risks associated with it.
  • Operations management – AI governance should be embedded into AI systems by design, by applying the guiding principles discussed above when designing and developing AI systems.  There should be processes to assess and mitigate the risks of AI systems, and mechanisms to monitor and evaluate AI performance from development to deployment for further improvement and refinement, including on matters relating to data management and provenance, data storage, and model evaluation.
  • Stakeholder interaction and communication – Stakeholder trust should be developed throughout the lifecycle of an AI system, and transparency with stakeholders should be paramount.  In addition, because the deployment of AI systems may cause a major shift in employment patterns and role allocation, deployers should be aware of the extent of this shift, be able to communicate these changes to employees, and redesign roles where necessary.  There should also be feedback mechanisms regarding an AI system’s performance.

Conclusion

The AI Guide, while not mandatory, reflects what are deemed to be best practices across the ASEAN region for AI development and deployment, and it offers a window into how ASEAN governments may look to regulate AI in the future.  Organisations deploying, or intending to deploy, AI systems should therefore strive to comply with the guiding principles and recommendations set out in the AI Guide.

For More Information

OrionW regularly advises clients on artificial intelligence matters.  For more information about artificial intelligence, or if you have questions about this article, please contact us at info@orionw.com.

Disclaimer: This article is for general information only and does not constitute legal advice.
