Singapore launches world’s first Artificial Intelligence governance self-test


I. Launch of A.I. Verify

At the World Economic Forum Annual Meeting held in Davos in May 2022, Minister for Communications and Information Josephine Teo announced Singapore's launch of A.I. Verify, the world's first AI governance testing framework and toolkit, which provides a means for companies to measure and demonstrate how safe and reliable their artificial intelligence (AI) products and services are.

Singapore’s launch of A.I. Verify follows its launch of the Model AI Governance Framework in 2020 and the National AI Strategy in 2019. A.I. Verify seeks to promote transparency on the use of AI between companies and their stakeholders through self-conducted technical tests and process checks. Developed by the Infocomm Media Development Authority and the Personal Data Protection Commission, A.I. Verify puts Singapore at the forefront of international discourse concerning the ethical use of AI.

A.I. Verify has been launched as a Minimum Viable Product (the MVP) which will undergo further product development. Organisations can participate in the piloting of the MVP, obtaining early access to it and using it to conduct self-testing of their AI systems and models. Pilot feedback will also help shape an internationally applicable MVP that reflects industry needs and contributes to the development of international standards.

II. AI Governance Testing Framework and Toolkit

Products and services are increasingly using AI to provide greater personalisation and make autonomous predictions. There is a strong public interest for AI systems to be fair, explainable and safe, and for companies which utilise AI to be transparent and accountable.

The MVP comprises a “Testing Framework” and a “Toolkit”. Together, they allow developers to validate the claimed performance of their AI systems against standardised tests. However, the MVP does not define ethical standards; instead, it provides a way for AI system developers and owners to demonstrate their claims about the performance of their AI systems vis-à-vis the AI ethics principles.

A. The Testing Framework

The Testing Framework addresses five major areas of concern for AI systems (the 5 Pillars), which together cover 11 internationally recognised AI ethics principles (the AI Ethics Principles).

The 5 Pillars are:

  1. transparency on the use of AI and AI systems;
  2. understanding how an AI model reaches a decision;
  3. ensuring the safety and resilience of AI systems;
  4. ensuring fairness and the absence of unintended discrimination by AI; and
  5. ensuring proper management and oversight of AI systems.

The AI Ethics Principles are:

  1. Transparency;
  2. Explainability;
  3. Repeatability or Reproducibility;
  4. Safety;
  5. Security;
  6. Robustness;
  7. Fairness;
  8. Data governance;
  9. Accountability;
  10. Human agency and oversight; and
  11. Inclusive growth, societal and environmental well-being.

The Testing Framework defines the AI Ethics Principles and ascribes a set of testable criteria to each principle. The Testing Framework provides testing processes, which are actionable steps to be carried out to ascertain if each testable criterion has been satisfied. The Testing Framework also sets out well-defined parameters for metrics to be measured, and thresholds that define acceptable values or benchmarks for the selected metrics.
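The relationship between a testable criterion, its metric, and its threshold can be sketched in code. This is a purely illustrative example, not the actual A.I. Verify implementation: the metric (accuracy), the benchmark value, and the function names are all assumptions chosen to show how a testing process might ascertain whether a criterion is satisfied.

```python
# Illustrative sketch (not A.I. Verify's implementation): a testable criterion
# pairs a measurable metric with a threshold defining an acceptable value.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def criterion_satisfied(metric_value, threshold):
    """The testing process checks the measured metric against its benchmark."""
    return metric_value >= threshold

# Hypothetical ground-truth labels and predictions from a binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

score = accuracy(y_true, y_pred)              # 6 of 8 correct -> 0.75
passed = criterion_satisfied(score, 0.7)      # the 0.7 benchmark is assumed
print(f"accuracy={score:.2f}, criterion satisfied: {passed}")
```

In practice the framework's parameters would determine which metrics are measured and what values count as acceptable; the pattern of "measure, then compare against a benchmark" is the same.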

B. The Toolkit

The Toolkit covers technical testing for fairness, explainability and robustness. It provides a user interface to guide users in the testing process, supports certain binary classification and regression models, and produces a summary report to help system developers and owners interpret the test results. It is packaged into a Docker container to be easily deployed in the user’s environment.
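To make the idea of technical fairness testing concrete, the following is a minimal sketch of one common fairness metric for a binary classifier: the demographic parity difference, i.e. the gap in positive-prediction rates between two groups. The metric choice, group labels, and function name are illustrative assumptions and do not reflect the Toolkit's actual API or report format.

```python
# Minimal sketch of a fairness check on a binary classifier's outputs.
# The metric (demographic parity difference) is an assumed example, not
# necessarily the one the Toolkit computes.

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    low, high = sorted(rates.values())
    return high - low

# Hypothetical predictions, split by a protected attribute with values A and B.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(y_pred, groups)  # |0.75 - 0.25| = 0.5
print(f"demographic parity difference: {gap:.2f}")
```

A summary report of the kind the Toolkit produces would present such measured values alongside context to help system developers and owners interpret them.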

III. The development of international standards on AI governance

AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS, Standard Chartered Bank, UCARE.AI and X0PA.AI have tested and/or provided feedback on the MVP. Going forward, Singapore aims to work with AI system owners or developers globally to collate and build industry benchmarks for the development of international standards on AI governance. For the interoperability of AI governance frameworks and the development of international standards on AI, Singapore has participated in ISO/IEC JTC1/SC 42 on AI, and is working with the US Department of Commerce and other like-minded countries and partners.

IV. Concluding thoughts

Developments in the digital space are moving rapidly, and regulations must be capable of keeping pace. Rule-setting will in turn require policymakers and technology leaders to engage in a dynamic and collaborative way, with the ultimate aim of harnessing new technologies while guarding against the accompanying risks.

Dentons Rodyk thanks and acknowledges Practice Trainee Tan Wei En for his contributions to this article.