LawFlash

Singapore Expands Its AI Governance Approach to Include Generative AI

02. Juli 2024

Singapore’s Infocomm Media Development Authority (IMDA) launched the Model AI Governance Framework for Generative AI (Generative AI Framework) on 30 May 2024. The framework, developed by the IMDA and its wholly owned and not-for-profit subsidiary the AI Verify Foundation, aims to establish a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.

The Generative AI Framework launched by the IMDA expands on the existing Model AI Governance Framework that covers traditional AI. Singapore released the first edition of the Model AI Governance Framework in January 2019 and a revised second edition in January 2020. With the recent advent of generative AI reinforcing some of the known AI risks (e.g., bias, misuse, and lack of explainability) and simultaneously introducing new ones (e.g., hallucinations, copyright infringement, and value alignment), there was a need to update the earlier Model AI Governance Framework.

Between 16 January and 15 March 2024, the IMDA, under the Singapore Ministry of Communications, conducted a public consultation on a proposed framework to govern generative AI.

Traditional AI and Generative AI

For context, traditional AI refers to AI models that make predictions by leveraging insights derived from historical data. Typical traditional AI models include logistic regression, decision trees, and conditional random fields. In contrast, generative AI comprises AI models capable of generating text, images, or other media types. They learn the patterns and structure of their input training data and generate new data with similar characteristics.

The Nine Dimensions of the Generative AI Framework

The Generative AI Framework comprises nine dimensions to be looked at in totality to foster a comprehensive and trusted AI ecosystem. These nine dimensions are summarized below.

Accountability

This aspect of the framework underscores the importance of putting in place the right incentive structure for different players in the AI system development life cycle (including model developers, application deployers, and cloud service providers) to be responsible to end users.

The Generative AI Framework states that there should be consideration for how responsibility is allocated upfront in the development process (ex-ante) as best practice, and guidance on how redress can be obtained if issues are discovered thereafter (ex-post).

It provides that responsibility can be allocated based on the level of control that each stakeholder has in the generative AI development chain, so that the able party takes necessary action to protect end users. In this regard, there is value in extending the cloud industry’s approach of allocating responsibility upfront through shared responsibility models to AI development. These models allocate responsibility by explaining the controls and measures that cloud service providers (who provide the base infrastructure layer) and their customers (who host applications on the layer above) respectively undertake.

To better cover end users, the Generative AI Framework also states that it will be worth considering the implementation of additional measures, including concepts around indemnity and insurance, to act as safety nets.

As for residual issues that may potentially fall through the cracks, alternative solutions such as no-fault insurance could be considered as well.

Data

The framework conveys the importance of ensuring data quality and addressing potentially contentious training data in a pragmatic way, as data is core to model development.

A large corpus of data is needed to train robust and reliable AI models. In this regard, businesses require clarity and certainty on how they can use data, including personal data and copyright material, in model development. According to the Generative AI Framework, as personal data operates within existing legal regimes, a useful starting point is for policymakers to articulate how existing personal data laws apply to generative AI.

Privacy enhancing technologies (PETs), such as anonymization techniques, are an emerging group of technologies which have the potential to allow data to be used in the development of AI models while protecting data confidentiality and privacy. The understanding of how PETs can be applied to AI will therefore be an important area to advance.

From a model development perspective, the use of copyright material in training datasets and the issue of consent from copyright owners are starting to raise concerns, particularly as to remuneration and licensing to facilitate such uses. AI models are also increasingly being used for generating creative output—some of which mimic the styles of existing creators and give rise to considerations of whether this would constitute fair use.

Given the various interests at stake, the Generative AI Framework recommends that policymakers should foster open dialogue among all relevant stakeholders to understand the impact of the fast-evolving generative AI technology. It also recommends that policymakers ensure potential solutions are balanced and in line with market realities.

At an organizational level, it would be a good practice for AI developers to undertake data quality control measures and adopt general best practices in data governance, including annotating training datasets consistently and accurately and using data analysis tools to facilitate data cleaning (e.g., debiasing and removing inappropriate content).

Trusted Development and Deployment

This aspect of the framework aims to enhance transparency around baseline safety and hygiene measures based on industry best practices in development, disclosure, and evaluation.

  • Development: The Generative AI Framework notes that safety measures are developing rapidly, and model developers and application deployers are best placed to determine what to use. Nevertheless, industry practices are starting to coalesce around some common safety practices. For example, fine-tuning techniques such as Reinforcement Learning from Human Feedback (RLHF) can guide the model to generate “safer” output that is more aligned with human preferences and values. Techniques like Retrieval-Augmented Generation (RAG) and few-shot learning are also commonly used to reduce hallucinations and improve accuracy.
  • Disclosure: The Generative AI Framework states that relevant information should be disclosed to downstream users to enable them to make more informed decisions. Areas of disclosure may include data used, training infrastructure, evaluation results, mitigations and safety measures, risks and limitations, intended use, and user data protection. The level of detail disclosed can be calibrated based on the need to be transparent vis-à-vis protecting proprietary information. Greater transparency to the government will also be needed for models that could pose high risks, such as advanced models that have national security or societal implications.
  • Evaluation: The Generative AI Framework adds that there is a need for a more comprehensive and systematic approach to safety evaluations where generative AI is concerned, beyond the current main approaches of benchmarking (which tests models against datasets of questions and answers to assess performance and safety) and red teaming (where a red team acts as an adversarial user to “break” the model and induce safety, security, and other violations). Industry and sectoral policymakers will need to jointly improve evaluation benchmarks and tools, while still maintaining coherence between baseline and sector-specific requirements.

Incident Reporting

The framework underscores the importance of implementing an incident management system for timely notification, remediation, and continuous improvements of AI systems.

Before an incident occurs, software product owners adopt vulnerability reporting as part of an overall proactive security approach. They co-opt and support white hats or independent researchers to discover vulnerabilities in their software, sometimes through a curated bug-bounty program. The Generative AI Framework suggests that AI developers can apply this same approach by allowing reporting channels for uncovered safety vulnerabilities in their AI systems.

After an incident, organizations need internal processes to report the incident for timely notification and remediation. Depending on the impact of the incident and how extensively AI was involved, this could include notifying both the public and the government.

Reporting should be proportionate, which means striking a balance between comprehensive reporting and practicality.

Testing and Assurance

This aspect emphasizes the need for providing external validation, fostering trust through third-party testing, and developing common AI testing standards for consistency.

The Generative AI Framework provides that fostering the development of a third-party testing ecosystem involves the following two steps:

  • Defining a testing methodology that is reliable and consistent and specifying the scope of testing to complement internal testing
  • Identifying the entities to conduct testing that ensures independence

Established audit practices can also be drawn upon and adapted to grow the AI third-party testing ecosystem.

Security

Security is a necessary consideration in order to address new threat vectors and risks that arise through generative AI models.

Existing frameworks for information security need to be adapted, and new testing tools developed to address these risks. The Generative AI Framework suggests that such new tools may include:

  • Input moderation tools to detect unsafe prompts (e.g., blocking malicious code); the tools need to be tailored to understand domain-specific risks
  • Digital forensics tools for generative AI, which are used to investigate and analyze digital data (e.g., file contents) to reconstruct a cybersecurity incident; new forensics tools should be explored to help enhance the ability to identify and extract malicious codes that might be hidden within a generative AI model

Content Provenance

Providing transparency about where content comes from can provide useful signals for end users.

The Generative AI Framework notes that the rise of generative AI, which has enabled the rapid creation of realistic synthetic content at scale, has made it harder for consumers to distinguish between AI-generated and original content. There is therefore recognition across governments, industry, and society on the need for technical solutions, such as digital watermarking and cryptographic provenance, to catch up with the speed and scale of AI-generated content. Digital watermarking and cryptographic provenance both aim to label and provide additional information and are used to flag content created with or modified by AI.

Nevertheless, the Generative AI Framework also recognizes that technical solutions alone may not be sufficient and will likely have to be complemented by enforcement mechanisms. There is also a need to work with key parties in the content life cycle, such as publishers, to support the embedding and display of digital watermarks and provenance details.

Safety and Alignment Research and Development

The acceleration of research and development through global cooperation among AI safety institutes is necessary to improve model alignment with human intention and values.

AI for Public Good

Responsible AI includes harnessing AI to benefit the public by democratizing access to AI systems, improving public sector adoption, upskilling workers, and developing AI systems sustainably.

The Generative AI Framework is expected to be further developed through, among other things, the implementation of additional guidelines and resources.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

Authors
Wai Ming Yap (Singapore)*
Kristian Lee (Singapore)*

*A solicitor of Morgan Lewis Stamford LLC, a Singapore law corporation affiliated ‎with Morgan, Lewis & Bockius LLP