Mistakes, false stories: Singapore flags 6 key risks of AI in paper, launches foundation to boost governance

  • The Infocomm Media Development Authority (IMDA) and global tech company Aicadium published a discussion paper on Singapore’s approach to building an ecosystem for safe adoption of generative artificial intelligence (AI) 
  • The paper highlighted six risks of generative AI that policymakers should take note of to ensure safe and responsible AI use
  • The risks range from AI mistakes to spreading of misinformation and copyright infringement
  • IMDA will continue to introduce and update targeted regulatory measures to uphold safety in future AI developments
  • A new AI Verify Foundation has also been launched to encourage the development of responsible AI testing tools

SINGAPORE — Generative artificial intelligence (AI) models can produce erroneous responses and false stories of sexual harassment when they make mistakes and “hallucinate”, according to a discussion paper released on Wednesday (June 7) that flagged this as one of several key risks of AI tech such as ChatGPT.

It also highlighted how AI models can pose privacy risks by leaking details keyed in by other users, or perpetuate biases based on race, among other concerns.

The paper, jointly published by the Infocomm Media Development Authority (IMDA) and global tech company Aicadium, lays out Singapore’s approach to building an ecosystem for trusted and responsible adoption of generative AI.

The two were also part of seven organisations that launched the AI Verify Foundation, a platform for open collaboration and idea-sharing to encourage the development of responsible AI testing tools.

With more than 60 general members, the foundation also hopes to nurture a network of AI advocates to drive broad adoption of AI testing through education and outreach.

This builds on Singapore’s previous efforts in AI governance, including the Model AI Governance Framework released in 2019 and the AI Verify tool launched last year, which allows companies to conduct technical tests on their AI models in line with global ethical standards.

Speaking at the Asia Tech x Artificial Intelligence Conference, Minister for Communications and Information Josephine Teo said that a strong desire for AI safety need not mean pulling up the drawbridge on innovation and adoption.

“In AI, safety mechanisms and shared standards will equally instill confidence in its use but effective safeguards will take time and good research to discover,” she said.

IMDA’s discussion paper contains ideas for senior leaders in the Government and businesses on how to build an ecosystem for the responsible adoption of generative AI.

It also identifies six key risks that policymakers should note to ensure the safe use of AI.

MISTAKES AND FALSE RESPONSES

Because they are modelled on how people use language, generative AI models like ChatGPT have a more challenging time with tasks that require logic and common sense.

This makes them prone to producing false responses, commonly known as “hallucinations”, delivered with a confidence that can convince users to believe them.

Early this year, ChatGPT was found to have misrepresented key facts in convincing but erroneous responses to medical questions, created false stories of sexual harassment involving real individuals, and generated software code susceptible to vulnerabilities.

LOSS OF CONFIDENTIALITY

Known for its ability to memorise, generative AI essentially retains data and replicates it when prompted. 

A particularly worrying finding is AI’s ability to quickly memorise parts of sentences, such as nouns, pronouns and numerals, which may be sensitive information.

The discussion paper states that generative AI also poses copyright and confidentiality issues. For instance, Samsung employees were reported to have unintentionally leaked confidential information when they used ChatGPT to check for errors.

SPREADING MISINFORMATION

The dissemination of fake news has become increasingly rampant in recent years, and it risks occurring at a far larger scale with the use of generative AI.

The negative impact of such interactive models is greater because they tap into human reactions and use language found on the internet, further spreading toxic content.

Impersonation and reputation attacks have also become easier to carry out with generative AI, as it can generate images in an individual’s likeness.

In the wrong hands, generative AI makes it possible to generate phishing emails, create malicious computer code, set up dark web marketplaces and disrupt the traffic of a targeted server.

COPYRIGHT INFRINGEMENT

Current generative AI models require massive amounts of training data, and this has raised concerns over the use of copyrighted materials.

IMDA cited the example of Getty Images suing Stability AI, the maker of the Stable Diffusion image generator, over an alleged copyright violation for using the photo agency’s watermarked photo collection.

This concern is also growing in the creative community, as this form of AI is capable of creating high-quality images in the style of other artists.

INHERITING BIASES

AI models are likely to inherit the stereotypical biases of the internet if left unchecked.

Some image generators have been reported to lighten the image of a black man when prompted to create the image of an “American person”, and to depict individuals in ragged clothes when prompted for an image of an “African worker”.

DIFFICULTY OF GOOD INSTRUCTIONS

AI safety work often involves aligning AI systems with human values and goals to prevent them from harming their human creators.

However, this task has proven challenging, as the objectives outlined by AI scientists and designers are often misinterpreted by the models, even when instructions are simple.

For example, instructing an AI system to place more importance on being helpful to the user can lead it to stop filtering its output and generate toxic responses that might cause harm.

FUTURE AI REGULATION

IMDA said that while Singapore does not currently intend to implement general AI regulation, the discussion paper itself is an example of how the country has taken concrete action to develop technical tools, standards and technology, which in turn lays the groundwork for clear and effective regulation in the future.

IMDA said careful deliberation and a calibrated approach are needed, alongside investment in capabilities and the development of standards and tools.

It added that it will continue to introduce and update targeted regulatory measures to uphold safety in future digital and AI developments.

Professor Hahn Jungpil from the School of Computing at the National University of Singapore told TODAY that there is a “tricky trade-off” between allowing for technological innovation and ensuring the safe use of AI.

He suggested three approaches to balance the two.

The first is a risk-based approach, which focuses on regulating high-risk AI applications while allowing more freedom for low-risk AI innovations.

Another is a contained or measured approach, where innovation and experimentation can occur but the plug can be pulled when risks are identified or materialise.

The third is a transparency-oriented approach, in which some aspects of AI data are made accessible to researchers so they can experiment with the technology and identify hidden risks.

Source: TodayOnline