Whale is part of AI Verify Foundation

Whale is proud to be a part of the AI Verify Foundation, a newly established open-source community in Singapore announced at the Asia Tech x Singapore conference. The community is devoted to developing and promoting testing tools for the ethical use of AI.

Whale is one of 60 members of the foundation, which was launched by the Singapore Government's Infocomm Media Development Authority (IMDA). AI Verify itself was first released as a pilot in 2022 and is now available to the open-source community. The foundation's members include Adobe, AWS, Meta, Microsoft, and Zoom.

About AI Verify

IMDA's AI Verify provides organisations with an AI Governance Testing Framework and Toolkit to validate the performance of their AI systems. What's more, AI Verify is extensible, enabling the development of additional toolkits, such as sector-specific governance frameworks, on top of it. To facilitate open collaboration for the governance of AI, the AI Verify Foundation will work to create and promote AI testing frameworks, code base, standards, and best practices.

AI Verify's testing process comprises technical tests on three principles: Fairness, Explainability, and Robustness. Process checks are applied to all 11 identified principles. The testing framework is consistent with internationally recognised AI governance principles, such as those from the EU, the OECD, and Singapore.
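As an illustration of the kind of technical test the Fairness principle covers, the sketch below computes a demographic-parity gap: the difference in positive-prediction rates between demographic groups. This is a generic example under assumed inputs, not the AI Verify toolkit's actual API; the function name and the sample data are hypothetical.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates across groups.

    preds:  array of 0/1 model predictions
    groups: array of group labels (e.g. 0 and 1 for two demographic groups)
    """
    preds = np.asarray(preds)
    groups = np.asarray(groups)
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical predictions for eight applicants in two groups
preds  = [1, 1, 0, 1,  1, 0, 0, 0]
groups = [0, 0, 0, 0,  1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A fairness process check might then compare this gap against a threshold the organisation has justified for its use case, with results recorded in the test report.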

AI Verify is a single integrated software toolkit that operates within the user organisation's enterprise environment, facilitating the conduct of technical tests on the user's AI models and the recording of process checks. User organisations can share the resulting test reports with their stakeholders to be more transparent about their AI.

“The immense potential of AI led us to the vision of creating AI for the public good and its full potential will only be realised when we foster greater alignment on how it should be used to serve wider communities,” said Josephine Teo, Singapore's Minister for Communications and Information, at Asia Tech x Singapore. “Singapore recognises that the government is no longer simply a regulator or experimenter. We are a full participant of the AI revolution.”

The responsible use of AI has become a key issue for organisations utilising AI systems and for regulators. In particular, the rise of generative AI across a wide range of use cases in various industries has raised concerns about the resulting risks, including: (a) mistakes and hallucinations; (b) privacy and confidentiality; (c) disinformation, toxicity and cyber-threats; (d) copyright challenges; (e) embedded bias; and (f) values and alignment.

The increasing implementation of AI in the workplace and in business has highlighted the need for comprehensive risk assessments of AI systems. The launch of the foundation and the ongoing development of AI Verify are essential components of creating an effective testing framework.

Empower Business Growth