Fact Sheet
Here you’ll find more information on the AIM Toolkit.
Background
1. The AIM toolkit was developed by the Competition and Consumer Commission of Singapore (“CCS”) in collaboration with IMDA and builds upon IMDA’s AI Verify toolkit, relying on a set of principles based on the competition and consumer protection framework in Singapore.
2. The AIM toolkit allows both AI model developers and AI model deployers to self-assess whether their AI models and business practices comply with the Competition Act and the Consumer Protection (Fair Trading) Act (“CPFTA”). The toolkit also highlights beneficial General Good Practices for businesses adopting AI. It can be used either after the development of an AI model or before its deployment. The AIM toolkit consists of two main components:
a. A series of self-assessment process checks: These are questions relating to the business’s AI models and to the practices of businesses planning to develop or use AI that could potentially be anti-competitive or constitute unfair practices. The business can answer Yes/No/Not Applicable and may provide further elaboration for each answer. The questions are categorised into eight principles as follows:
| Principle | Brief overview of principle |
| --- | --- |
| Pro-competitive algorithms (Competition) | Ensures AI algorithms do not facilitate anti-competitive practices such as price collusion or predatory pricing. This includes avoiding algorithmic designs that could lead to coordination between competitors, the sharing or replication of commercially sensitive information, or the manipulation of market conditions in a way that restricts competition. |
| Accessibility (Competition) | Prevents the unfair restriction of essential resources and mandatory data collection practices, while ensuring fair market access to AI products and services. To facilitate competitive markets, businesses should be able to gain access, on reasonable terms, to the key inputs they need to develop and/or deploy AI models. In addition, a dominant business should avoid mandating data collection from users as a prerequisite for using a product or service where such collection is not necessary for its provision, particularly when the data collected is intended to improve the business’s AI models. |
| Flexibility (Competition) | Promotes interoperability between AI systems and prevents high switching costs or anti-competitive bundling. The market will tend towards more positive outcomes if AI systems are interoperable with one another and developers or deployers introduce few or no barriers that impede users from uploading, downloading, extracting and porting their information to rival services. For the market to be more competitive, products and services should compete on their merits and not be restricted by anti-competitive tying or bundling. Additionally, vertical integration and partnerships should not be a means of insulating firms from competition. |
| Fairness (Competition/Consumer Protection) | Prevents discrimination against specific groups and the favouring of certain products or brands without objective reason. This includes ensuring that AI decision-making processes are free from bias, that recommendations and rankings are based on transparent and relevant criteria, and that all users and market participants are treated equally. Fairness in AI models’ predictions is crucial for building consumer confidence that the best products and services will win out, as firms are perceived to be playing by the rules. Fairness in predictions, supported by representative training data, also helps ensure equal treatment of consumers in AI-driven recommendations and that certain groups of consumers are not taken advantage of. |
| Transparency (Consumer Protection) | Ensures clear and truthful communication about AI capabilities and prevents misleading claims or deceptive practices. This involves providing accurate descriptions of the AI system’s capabilities and limitations, and offering explanations that are understandable to users. The market would trend towards positive outcomes if models were themselves reliable and accurate, and if consumers had the right information about them to make informed decisions. Where AI models have limitations, users should be appropriately informed of these constraints. |
| Diversity (General good practice) | Promotes variety in AI model offerings and encourages open-source engagement to drive innovation. Sufficient diversity in the AI models being developed, in how they are released to consumers and businesses, and in the business models that firms employ can help increase competition in the market to the benefit of consumers. |
| Accountability (General good practice) | Establishes clear responsibility and redress mechanisms for AI-related issues. This includes documenting decision-making processes and ensuring that there are effective channels for individuals or businesses to seek remedies if harmed by AI outputs. The market would trend towards positive outcomes, such as the bolstering of innovation and competitive dynamics, if there were mechanisms to determine the proper allocation of accountability and responsibility. Proper allocation of accountability also incentivises firms to improve consumer outcomes, and consumers can more easily seek redress when things go wrong, leading to consumer confidence, trust and adoption. |
| Accuracy (General good practice) | Ensures AI models produce reliable, consistent and reproducible results to maintain consumer trust. This includes regular testing, validation and retraining of models to address performance degradation or model drift over time, as well as documentation of information about the models, such as source code and configurations. The market would trend towards positive outcomes if AI models were more accurate (e.g. reducing issues such as hallucinations, where models produce inaccurate or misleading results, and ensuring that predictions are replicable with the same input). Inaccurate or misleading results undermine output reliability and consumer trust. |
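As a hypothetical sketch only (the AIM toolkit’s internal data model is not published here), the Yes/No/Not Applicable responses to the process checks might be recorded and tallied per principle along these lines:

```python
from dataclasses import dataclass
from typing import Optional

# The eight principles listed in the table above.
PRINCIPLES = [
    "Pro-competitive algorithms", "Accessibility", "Flexibility", "Fairness",
    "Transparency", "Diversity", "Accountability", "Accuracy",
]

@dataclass
class ProcessCheck:
    """One self-assessment question and the business's response (hypothetical structure)."""
    principle: str                      # one of the eight principles above
    question: str
    answer: str                         # "Yes" | "No" | "Not Applicable"
    elaboration: Optional[str] = None   # optional further elaboration

def summarise(checks):
    """Count answers per principle, e.g. to flag principles with 'No' responses."""
    summary = {p: {"Yes": 0, "No": 0, "Not Applicable": 0} for p in PRINCIPLES}
    for c in checks:
        summary[c.principle][c.answer] += 1
    return summary
```

A report generator could then surface any principle with one or more “No” answers for follow-up. The field names and helper are illustrative, not the toolkit’s actual API.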
b. Two technical tests: These are the Explainability and Fairness tests, which can be conducted on the business’s AI model by the AIM toolkit (see the section on technical tests for a more detailed explanation). These tests serve as a guide for businesses to better understand their AI models. A business that wishes to run the technical tests will be prompted by the toolkit to upload (i) the AI model to be tested, (ii) the testing dataset, and (iii) the ground truth dataset. The technical tests currently support only supervised learning AI models out of the box (binary classification, multiclass classification and regression). The two technical tests are as follows:
| Technical Test | Brief overview |
| --- | --- |
| Explainability | The Explainability test uses SHAP (SHapley Additive exPlanations) analysis to examine how different features influence an AI model’s predictions. By aggregating and ranking these contributions across many predictions, the test can highlight which factors consistently drive the model’s outputs. This deeper understanding of feature importance enables businesses to detect whether their AI model relies on variables in ways that could raise competition or consumer protection concerns, even when such patterns are not immediately obvious from the model’s design. For example, in a competition context, if a business’s decision-making AI model is designed to coordinate with a competitor and its predictions are heavily influenced by that competitor’s actions, the business may be engaging in algorithmic collusion. In a consumer protection context, if a feature relating to a customer’s age heavily influences a business’s AI pricing decisions, the algorithm could be engaging in discriminatory treatment that takes advantage of vulnerable consumers. |
| Fairness | The Fairness test evaluates how accurately a model predicts outcomes across different user-selected features, using different fairness metrics. The test aims to identify any significant disparities in predictions between different groups. By analysing model performance separately for each group, the test helps uncover whether certain groups are systematically disadvantaged by the model’s predictions. For example, in a competition context, a business’s AI model predictions should not result in a higher number of false positives for downstream competitors than for its own subsidiary. On an e-commerce platform, for instance, a deployed classification model may exhibit bias by classifying products from third-party sellers as less relevant to searches than the platform’s own products, even though customers would consider the former relevant. This results in lower search rankings for third-party products and directs customer traffic away from competitors towards the platform’s own retail offerings, potentially leveraging its upstream market power to advantage its subsidiary. In a consumer protection context, any such disparities should be clearly and accurately disclosed to users, so that they are not misled by the results. Any false or misleading claims made to consumers in this regard could constitute an unfair practice. |
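In practice the Explainability test would use the `shap` library; purely as an illustration of the underlying Shapley idea (not the toolkit’s implementation), the sketch below computes exact Shapley values for a toy two-feature pricing model by enumerating feature subsets. The model, feature values and baseline are all hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction. Exponential in the number of
    features, so illustration only; SHAP approximates this efficiently.
    Features absent from a coalition are replaced by their baseline value."""
    n = len(x)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi += weight * (predict(with_i) - predict(without_i))
        values.append(phi)
    return values

# Toy pricing model whose output is driven mostly by a competitor's price
# (feature 0) rather than the business's own cost (feature 1). A dominant
# contribution from the competitor-price feature is the kind of pattern the
# Explainability test could surface as a possible coordination concern.
model = lambda f: 0.9 * f[0] + 0.1 * f[1]
contributions = shapley_values(model, x=[100.0, 50.0], baseline=[80.0, 40.0])
```

For this additive model the contributions are simply each weight times the feature’s deviation from baseline; real models are not this transparent, which is why the attribution step matters.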
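The group-wise comparison behind the Fairness test can be sketched with a single metric; the function names, metric choice and data below are hypothetical, not the toolkit’s API. Here a false negative rate (relevant items the model missed) is computed per seller group, mirroring the e-commerce example above.

```python
def false_negative_rate(preds, truth):
    """Among truly relevant items (truth == 1), the share predicted irrelevant (0)."""
    relevant = [p for p, t in zip(preds, truth) if t == 1]
    return relevant.count(0) / len(relevant) if relevant else 0.0

def fnr_by_group(preds, truth, groups):
    """Compute the false negative rate separately for each group, so that
    systematic disparities between groups become visible."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_negative_rate([preds[i] for i in idx], [truth[i] for i in idx])
    return rates

# Hypothetical relevance classifier outputs (1 = relevant) with ground truth:
preds  = [0, 0, 1, 1, 1, 1]
truth  = [1, 1, 1, 1, 1, 1]
groups = ["third_party", "third_party", "third_party", "own", "own", "own"]
rates = fnr_by_group(preds, truth, groups)
# A much higher miss rate for third-party products than for own-brand products
# is the kind of disparity the Fairness test is designed to surface.
```

A real assessment would use larger samples and several fairness metrics, since different metrics can point in different directions for the same model.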
3. Upon completion of the above, the AIM toolkit automatically generates a report for the business based on its responses to the process checks and the technical tests (if applicable). The report consists of a summary of the results and recommendations based on the answers given during the process checks and the model uploaded for the technical tests (if applicable).
4. The AIM toolkit can be used by businesses that develop or deploy any type of AI model, including both traditional and generative AI. However, the built-in technical tests currently only support supervised learning AI models (specifically those performing binary classification, multiclass classification, and regression tasks). If the business's AI models are not currently supported by the technical tests, the business may choose to skip these tests and complete the rest of the toolkit.
5. The AIM toolkit is intended to form part of a voluntary competition and consumer compliance programme for businesses. Implementing the AIM toolkit would minimise the risk of inadvertently engaging in anti-competitive or unfair trading practices in the development or deployment of AI. Use of the AIM toolkit could also be a mitigating factor in the calculation of financial penalties in the event of an infringement.
6. For the avoidance of doubt, CCS will not endorse businesses that have completed the assessment under the toolkit; the AIM toolkit serves as a self-assessment tool for businesses developing or deploying AI to assess their compliance with the Competition Act and the CPFTA.