In The News

Ethical AI for lawyers in Singapore: Key responsibilities

Artificial intelligence (AI) systems are reshaping how lawyers serve their clients in Singapore, and AI adoption is on the rise. As adoption grows, legal professionals must take ethical issues into consideration, balancing key responsibilities and risks. 

Policymakers have also put model AI governance frameworks in place and updated existing regulations. What does ethical AI for lawyers in Singapore look like? How can lawyers in Singapore use AI solutions responsibly in legal matters?  

How is ethical AI defined in Singapore?

Ethical AI is a concept that promotes and encourages the responsible design, development, implementation and use of artificial intelligence technologies.  

International policymakers are developing model AI governance frameworks (see Singapore’s Model AI Governance Framework) that define ethical principles and associated practices. Singapore’s framework covers safety and fundamental human rights, including addressing the risks of bias and algorithmic discrimination. It also provides for internal governance structures for the responsible use and deployment of AI tools. 

Singapore’s Personal Data Protection Commission (PDPC) has issued directives on the use of personal data in AI in line with the Personal Data Protection Act (PDPA) 2012. The PDPC enforces the PDPA’s rules on personal data collection, use, and disclosure, and has built trust over a decade by educating the public and industry sectors on their ‘roles and responsibilities in safeguarding personal data.’

The Singaporean government launched the first edition of its Model AI Governance Framework in January 2019. It introduced ethical principles and outlined practical advice to enable organisations to implement AI tools responsibly.  

The second edition, released in January 2020, refined the recommendations based on organisations’ experience in applying the model framework. Additional measures for ‘robustness and reproducibility’ enhanced the framework’s relevance and utility.  

According to UNESCO, the rise of generative AI technology is driving today’s industrial revolution. Generative AI is advancing innovation whilst also posing significant risks to many industries.  

The Singaporean government released the latest edition, the Model AI Governance Framework for Generative AI, in May 2024. The update builds on the foundation set by the original framework and incorporates input from AI developers, organisations, and research communities in Singapore and abroad.

The Model AI Governance Framework for Generative AI and its updated recommendations align with the guiding AI governance principles issued by the PDPC. The framework advocates for the responsible use and design of generative AI technologies.    

To build trust with all stakeholders, the Singapore government aims to ensure:  

  • Decisions made by AI are explainable, transparent, and fair.  
  • AI systems operate in a human-centric and safe manner.  

Can generative AI do legal work without lawyers?

Generative AI solutions cannot complete legal work without human intervention. Humans should still review and critically analyse answers generated by AI systems to ensure quality.

At the time of writing, generative AI models cannot perform soft-skill tasks involving empathy, creativity, or critical thinking without prompting. GPT-4o aims to enable the generative AI system to mimic human emotion, enabling ‘natural human-computer interaction.’ The update won’t empower GPT-4o to negotiate on behalf of a client, or enable the generative AI tool to provide legal representation.

Currently, generative AI can automate many technical tasks. By virtue of their design, however, large language models are predisposed to producing inaccurate or biased results. Lawyers who use these models without quality checks risk breaching their legal and ethical duties.  

Professor Jungpil Hahn of the Department of Information Systems and Analytics at the School of Computing, National University of Singapore (NUS), is an advocate for using AI responsibly.  

“AI developers and business leaders should consider all relevant ethical considerations not only in order to be compliant with regulations but also for engendering trust from its consumers and users.”

“The primary challenge in applying AI ethical principles is that much of the discourse surrounding AI ethics and governance is too broad in the sense that the conversation surrounding it is at a very high level,” Professor Hahn said.

“How to actually operationalise and put it into action is still quite underdeveloped, and vague.”

Professor Jungpil Hahn is a researcher and educator of digital innovation at the Department of Information Systems and Analytics at the School of Computing, National University of Singapore (NUS).

The rapid uptake and widespread use of generative AI systems have put a spotlight on AI ethics and governance. The ‘lack of clear and explicit’ standards led Professor Hahn and colleagues to study the evolution of AI governance.

“The ‘black box’ nature of AI models makes it impossible to fully (exhaustively) know how they will perform and behave,” Professor Hahn added.   

Why are AI guardrails and governance important?

Even though AI technology is an effective tool, legal professionals believe that generative AI guardrails and governance are important. Almost half the legal professionals surveyed in a Thomson Reuters report believe generative AI increases internal efficiency (48%) and improves internal collaboration (47%).  

Only a fraction of professionals surveyed believe current generative AI regulation is adequate. Two in five professionals (40%) surveyed believe their government should regulate how their profession uses AI. Legal professionals surveyed also advocated for self-regulation.  

Source: Thomson Reuters Future of Professionals Report – Asia & Emerging Markets Edition.

One of the most appealing opportunities generative AI offers legal professionals is time savings. Automating manual, laborious, and protracted processes improves efficiency, freeing legal professionals to spend more time on the crux of casework.  

Survey respondents were receptive to generative AI assisting them at work. However, it is important to be aware that generative AI is no substitute for legal review.  

The 2023 case Mata v Avianca is an example of the use of generative AI for legal work going wrong. Lawyers from a New York law firm represented a client in a personal injury case.  

The client’s lawyers used ChatGPT to prepare a court filing. Unfortunately for the client, the case citations and other data ChatGPT provided for the brief were made up. Generative AI models can ‘hallucinate’, and if left unchecked, this can be a serious problem for lawyers. 

Alec Christie, Partner at Clyde & Co, believes that AI will become ubiquitous in the legal profession and across industries. Foundational principles like accountability matter more than ever. Speaking at Thomson Reuters’ SYNERGY event in Sydney, Alec said there’s no time to waste. 

“We’re at a point where we’re at a crossroads with AI.”  

“Whether people realise it or not, AI will be used by their team members, it will be used in their organisations,” added Alec. 

Alec Christie, Partner at Clyde & Co. sharing key insights at Thomson Reuters SYNERGY, May 2024.

“If we don’t get on top of the data governance aspect of that, and the frameworks for use of AI, then there’s going to be some significant issues and concerns.”  

Mata v Avianca 2023 is not an isolated incident. So-called ‘fake cases’ have also surfaced in Canada and the UK. Former President Donald Trump’s former lawyer, Michael Cohen, used Google Bard to produce citations included in an official court filing. 

The generative AI system produced the content, which turned out to be inaccurate. Consequently, the citations submitted in the court filing were false. 

“If you can’t guarantee the source, you can’t guarantee where the information is coming from, you can’t guarantee the quality,” Alec added.  

“Quality of data is fundamental, because it’s that old tech motto, ‘garbage in garbage out’.”  

Singapore’s Model AI Governance Framework

In January 2019, the Singaporean government released the first edition of the Model AI Governance Framework. The framework guides companies on maintaining good corporate governance over their use of AI technology. 

Following public consultation, the Singapore government updated the Model AI Governance Framework in May 2024. Here are the emerging issues, developments, and safety considerations:  

  • Accountability: The Singaporean government will deploy incentives such as grants, regulatory frameworks, and tech and talent resources. Its aim is to encourage ethical and responsible generative AI technologies. 
  • Data: Protect generative AI training data sets from contamination through proactive security measures.
  • Trusted Development and Deployment: Continue committing to best practice in evaluation, disclosure, and development to increase hygiene and safety transparency. 
  • Incident Reporting: Establish an incident management system to assist with deploying counteractive measures and expediting remediation.  
  • Testing and Assurance: Develop standardised generative AI testing through external validation, adding trust via third-party testing.
  • Security: Intercept and eliminate threat vectors emerging via generative AI systems. 
  • Content Provenance: Provide transparency about the sources of generative AI content and implement guardrails around contentious content for end users.  
  • Safety and Alignment R&D: Cooperate with global generative AI safety regulators to accelerate R&D, ensuring generative AI technology aligns with human intentions and values.  
  • AI for Public Good: Develop AI systems sustainably and responsibly in the public sector. This comprises providing standardised access to generative AI systems and upskilling professionals.

The framework features nine recommendations for policymakers, generative AI system developers, business leaders and academics to follow. Together, these aim to facilitate a ‘trusted generative AI ecosystem’. 

Which AI models are responsible choices for lawyers?

As an emerging technology, generative AI will take time to achieve greater levels of automation. The string of adverse incidents arising from the use of AI systems in legal proceedings worldwide demonstrates that AI technologies will not replace human lawyers. 

Retrieval augmented generation (RAG) grounds an AI system’s answers in approved sources only. It helps to reduce the frequency with which the AI system reaches incorrect conclusions. The quality of the data used to train the model also makes a significant difference to the standard of the AI model’s output.   
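
To make the pattern concrete, here is a minimal Python sketch of a RAG-style pipeline. The toy corpus, the keyword-overlap retriever, and the prompt wording below are illustrative assumptions for this article, not CoCounsel’s or any vendor’s actual implementation.

# Minimal sketch of retrieval augmented generation (RAG).
# The sources and retrieval logic below are toy assumptions for illustration.

# Approved sources: in practice, verified legal texts; here, two toy snippets.
APPROVED_SOURCES = {
    "PDPA-overview": "The Personal Data Protection Act 2012 governs the collection, "
                     "use and disclosure of personal data in Singapore.",
    "Model-AI-Framework": "Singapore's Model AI Governance Framework sets out principles "
                          "for explainable, transparent and fair AI decisions.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank approved passages by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        ((len(words & set(text.lower().split())), source_id, text)
         for source_id, text in APPROVED_SOURCES.items()),
        reverse=True,
    )
    return [(source_id, text) for score, source_id, text in scored[:k] if score > 0]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: the model is instructed to answer only from
    the retrieved passages and to cite them, which reduces hallucination risk."""
    context = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(query))
    return (
        "Answer using ONLY the sources below, citing the source id for every claim. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What does the PDPA govern?"))

A production system would swap the keyword retriever for a vector search over verified documents and pass the assembled prompt to a language model, but the grounding principle is the same.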

Risk assessments can also help stop unethical generative AI in its tracks. That’s according to Carter Cousineau, Vice President Data and Model (AI/ML) Governance and Ethics at Thomson Reuters. These risk assessments can include process checks and technical tests. 

“There was definitely a lot of great due diligence prior to our team starting and coming in,” said Carter. 

“I see an opportunity to optimise responsible AI, where understanding the risks of AI opens the door to creating AI systems that are fair and more transparent.”  

Carter Cousineau, Thomson Reuters Vice President, Responsible AI and Data speaking at Thomson Reuters SYNERGY in May 2024.

Thomson Reuters’ CoCounsel, the company’s generative AI technology, uses the RAG model architecture. CoCounsel restricts research to verified legal texts from Thomson Reuters’ extensive publications.  

In Thomson Reuters’ approach to AI application development, according to Carter, it is important to keep the technology’s limitations in mind. AI developers at Thomson Reuters embed data and AI governance throughout the whole organisation.  

“It is something that has multiple control points throughout the lifecycle, especially around customer data sets.”  

“I would say creating that refinement around our AI machine learning models and building it integrated into the workflow.”  

“Getting very specific on the ethical AI risks, and then reacting with a more proactive approach, I would say, is what we’ve taken,” Carter concluded.  

By Thomson Reuters

NUS FinTech Lab under School of Computing receives US$1 million grant from Ripple’s University Blockchain Research Initiative to further advance financial technology education

Singapore, 17 January 2024 ― The FinTech Lab at the National University of Singapore’s (NUS) School of Computing has received a generous grant of US$1 million from global enterprise blockchain and crypto solutions provider Ripple’s University Blockchain Research Initiative (UBRI). The new funding will support FinTech Lab’s operations for the next two years and extend Ripple and NUS’ existing collaboration in promoting innovation and thought leadership in fintech.

Established in November 2019, NUS FinTech Lab has evolved to become a strategic hub at the core of Singapore’s vibrant fintech and business ecosystem, providing strategic dialogue and research-driven innovation. Through initiatives such as the FinTech Brew, a regular coffee chat series, the Lab brings the latest ideas and brightest minds together in an informal setting where students and business leaders meet to address and discuss the challenges facing the fintech sector. Additionally, the FinTech Lab’s regular podcast series extends its reach globally, providing both local and international audiences with valuable insights into its work.

Commitment to innovation and collaboration

Professor Tan Kian Lee, Dean of NUS School of Computing, emphasised the FinTech Lab’s role in bridging theory and practice. He said, “This new phase of the NUS FinTech Lab symbolises our dedication to merging academic inquiry with practical industry solutions, fostering an ecosystem rich in innovation and impact.”

“NUS joined UBRI’s roster of incredible global academic partners in 2019 and has been driving innovation at scale through the NUS FinTech Lab ever since. We’ve been extremely impressed with the work to come out of this partnership – NUS has excelled at supporting an extremely active student fintech society, launching a new EVM sidechain validator, and maintaining an XRPL validator – not to mention the UBRI-funded NUS FinTech SG Programme, an advanced professional certificate initiative with 200 graduates achieving full-time job offers in fintech and over 50,000 programme participants receiving training in fintech via webinars, festivals, community events and more,” said Eric van Miltenburg, Senior Vice President of Strategic Initiatives at Ripple. “We look forward to many more years of collaboration as Singapore and NUS continue to pave the way for digital asset innovation and cement the country’s status as one of the leading fintech hubs in the region.”

In line with the University and the School’s mission, the FinTech Lab actively supports innovation and research leadership in partnership with industry, collaborating with Ripple’s UBRI programme and other business partners to catalyse knowledge sharing and collaborative inquiry. It also inspires students and young researchers to take up the greatest challenges in today’s digital economy through regular workshops, conferences, hackathons, and knowledge transfer partnerships.

Recognising the pivotal role of academic rigour in advancing the global fintech sector, and expressing appreciation for Ripple’s support, Professor Hahn Jungpil, Director of the NUS FinTech Lab, added, “The UBRI programme’s steadfast support is invaluable in our mission to blend rigorous academic research with real-time solutions to address critical industry challenges, leading to impactful advancements in financial technology.”

Looking ahead

Drawing on insights from the Singapore FinTech Festival – the world’s largest gathering of fintech professionals, regulators and thought leaders – NUS FinTech Lab’s agenda for the coming year will focus on the digital economy, digital assets and trusted AI and governance.

NUS FinTech Lab looks forward to engaging with many more business partners and policy-making institutions in months to come, as it embarks on the next exciting phase of development.

 

About National University of Singapore (NUS)

The National University of Singapore (NUS) is Singapore’s flagship university, which offers a global approach to education, research and entrepreneurship, with a focus on Asian perspectives and expertise. We have 16 colleges, faculties and schools across three campuses in Singapore, with more than 40,000 students from 100 countries enriching our vibrant and diverse campus community. We have also established more than 20 NUS Overseas Colleges entrepreneurial hubs around the world.

Our multidisciplinary and real-world approach to education, research and entrepreneurship enables us to work closely with industry, governments and academia to address crucial and complex issues relevant to Asia and the world. Researchers in our faculties, research centres of excellence, corporate labs and more than 30 university-level research institutes focus on themes that include energy; environmental and urban sustainability; treatment and prevention of diseases; active ageing; advanced materials; risk management and resilience of financial systems; Asian studies; and Smart Nation capabilities such as artificial intelligence, data science, operations research and cybersecurity.

For more information on NUS, please visit nus.edu.sg.

About NUS FinTech Lab

NUS FinTech Lab is a convening hub for interdisciplinary dialogue with academia, fintech industry, policy makers and regulators. Our lab is dedicated to advancing thought leadership, catalysing innovative research, and education with a focus on areas such as digital payments, data privacy, trust and security, regulatory compliance, and financial inclusion.

For more information, please visit fintechlab.nus.edu.sg.

About Ripple

Ripple is the leader in enterprise blockchain and crypto solutions, transforming how the world moves, manages, tokenises and stores value. Ripple’s business solutions are faster, more transparent, and more cost effective – solving inefficiencies that have long defined the status quo. And together with partners and the larger developer community, we identify use cases where crypto technology will inspire new business models and create opportunity for more people. With every solution, we’re realising a more sustainable global economy and planet – increasing access to inclusive and scalable financial systems while leveraging carbon neutral blockchain technology and a green digital asset, XRP. This is how we deliver on our mission to build crypto solutions for a world without economic borders. Ripple’s solutions enable banks, payment providers, and digital asset exchanges to send and receive money globally, instantly, securely, and cost-effectively.

For more information, visit ripple.com.

NUS Fintech Summit 2024

SINGAPORE – 11 December 2023 – NUS Fintech Society, a student group under NUS School of Computing that provides opportunities for students to learn and grow their knowledge, skills and network in Fintech, is thrilled to announce its annual flagship event, now in its 4th edition: NUS Fintech Summit 2024! This year, NUS Fintech Society is collaborating with NUS Asian Institute of Digital Finance (AIDF) and NUS FinTech Lab. The Summit takes place from 20th December 2023 to 19th January 2024, and this time they are pushing boundaries to explore the potential of Fintech even further!

Tackle Challenges, Win Big:
Tertiary students, get ready to tackle four distinct challenges in the fields of blockchain, software development, and quantitative finance, all carefully crafted by the Platinum Sponsors. Hackathon participants also stand a chance to receive mentorship throughout the competition and to win a share of the total prize pool of $40,000!

Key Events:
Alongside the hackathon, a series of workshops conducted by partners and guests has also been lined up. These workshops are designed to give tertiary students an insight into the dynamic world of Fintech and expose them to the latest developments within the space. Among the speakers are industry leaders, tech innovators and financial experts, all of whom have played a pivotal role in shaping the Fintech landscape, so don’t miss out on this opportunity!

Don’t forget to mark your calendars for demonstration day as well, happening on 19th January 2024 (Friday) at Suntec Convention Centre! The day features the hackathon finale coupled with a panel discussion and career booths put up by the esteemed sponsors. With something for everyone, there is simply no reason to miss out!

Meet the Sponsors:
A special shout-out to the sponsors who were instrumental in making the event possible!
Platinum Sponsors: Ripple, Tokka Labs, aelf, Northern Trust
Gold Sponsors: Thought Machine, eFinancialCareers
Silver Sponsors: Flowdesk, Coinbase, Endowus, Alibaba Cloud, Metacamp, HashKey Capital

Registration and Information:
Dive into the heart of the digital revolution by tuning in to the virtual kick-off day on 20th December 2023 (Wednesday) and the Opening Ceremony on 5th January 2024 (Friday), held at the National University of Singapore. Secure your spot by registering for NUS Fintech Summit 2024 now!

Event Sign-up Link: https://linktr.ee/nusfintech
*Registration for the virtual kick-off day has begun. For opening day, hackathon and workshops, sign-ups open on 20th December 2023.

Stay Connected:
Meanwhile, don’t hesitate to follow us on LinkedIn and Instagram to stay updated on the latest details of NUS Fintech Summit 2024!

About NUS Fintech Society:

NUS Fintech Society was founded in 2018 in collaboration with the NUS FinTech Lab under the NUS School of Computing. Since then, it has grown to be known for nurturing the Fintech talents of tomorrow, providing students with the opportunity to hone their knowledge and skills and expand their network within the space.

Source: TechInAsia

 

Mistakes, false stories: Singapore flags 6 key risks of AI in paper, launches foundation to boost governance

  • The Infocomm Media Development Authority (IMDA) and global tech company Aicadium published a discussion paper on Singapore’s approach to building an ecosystem for safe adoption of generative artificial intelligence (AI) 
  • The paper highlighted six risks of generative AI that policymakers should take note of to ensure safe and responsible AI use
  • The risks range from AI mistakes to spreading of misinformation and copyright infringement
  • IMDA will continue to introduce targeted regulatory measures to uphold safety in future AI developments
  • A new AI Verify Foundation has also been launched to encourage the development of responsible AI testing tools

SINGAPORE — Generative artificial intelligence (AI) models can produce erroneous responses and false stories of sexual harassment when they make mistakes and “hallucinate”, according to a discussion paper released on Wednesday (June 7) that flagged this as one of several key risks of AI tech such as ChatGPT.

It also highlighted how AI models can pose privacy risks by leaking details keyed in by other users, or perpetuate biases based on race, among others.

The paper, jointly published by the Infocomm Media Development Authority (IMDA) and global tech company Aicadium, lays out Singapore’s approach to building an ecosystem for trusted and responsible adoption of generative AI.

The two were also among the seven organisations that launched the AI Verify Foundation, a platform for open collaboration and idea-sharing to encourage the development of responsible AI testing tools.

With more than 60 general members, the foundation also hopes to nurture a network of advocates for AI to drive a broad adoption of AI testing through education and outreach.

This is in addition to Singapore’s previous efforts in AI governance with the Model AI Governance Framework that was released in 2019 and the AI Verify tool launched last year, which allows companies to conduct technical tests on their AI models in line with global ethical standards.

Speaking at the Asia Tech x Artificial Intelligence Conference, Minister for Communications and Information Josephine Teo said that a strong desire for AI safety need not mean pulling up the drawbridge on innovation and adoption.

“In AI, safety mechanisms and shared standards will equally instill confidence in its use but effective safeguards will take time and good research to discover,” she said.

IMDA’s discussion paper contains ideas for senior leaders in the Government and businesses on how to build an ecosystem for a responsible adoption of generative AI.

It also identifies six key risks that policymakers should note to ensure the safe use of AI.

MISTAKES AND FALSE RESPONSES

Generative AI models like ChatGPT have a more challenging time with tasks that require logic and common sense because they are modelled on how people use language.

This makes them prone to mistakes and false responses, commonly known as “hallucinations”, delivered with a confidence that convinces users to believe them.

Early this year, ChatGPT was found to have misrepresented key facts, creating convincing but erroneous responses to medical questions, fabricating false stories of sexual harassment involving real individuals, and generating software code that was susceptible to vulnerabilities.

LOSS OF CONFIDENTIALITY

Known for its ability to memorise, generative AI essentially retains data and replicates it when prompted. 

A particularly worrying finding is AI’s ability to quickly memorise parts of sentences such as nouns, pronouns and numerals — information that may be sensitive.

The discussion paper states that generative AI also poses copyright and confidentiality issues. For instance, Samsung employees were reported to have unintentionally leaked confidential information when they used ChatGPT to check for errors.

SPREADING MISINFORMATION

The dissemination of fake news has become increasingly rampant in recent years, and it risks reaching a much larger scale with the use of generative AI.

The negative impact of such interactive models is greater because they tap into human reactions and use language found on the internet to further spread toxic content.

Impersonation and reputation attacks have also become easier to achieve with generative AI as it is able to generate images in an individual’s likeness.

In the hands of the wrong individual, generative AI makes it possible to generate phishing emails, create malicious computer codes, set up a dark web marketplace and disrupt the traffic of a targeted server.

COPYRIGHT INFRINGEMENT

Current generative AI models require massive amounts of data and this has raised concerns over copyrighted materials being used.

IMDA cited the example of Getty Images suing Stability AI, developer of the Stable Diffusion model, over an alleged copyright violation for using the photo agency’s watermarked photo collection.

This rising concern is also evident in the creative community as this form of AI is capable of creating high quality images in the style of other artists.

INHERITING BIASES

AI models are likely to inherit the stereotypical biases of the internet if left unchecked.

Some image generators have reportedly lightened the image of a black man when prompted to create the image of an “American person”, and created individuals in ragged clothes when prompted for an image of an “African worker”.

DIFFICULTY OF GOOD INSTRUCTIONS

AI safety work often involves aligning systems with human values and goals to prevent them from harming their human creators.

However, this task has proven challenging, as the objectives outlined by AI scientists and designers are often misinterpreted, even with simple instructions.

For example, instructing an AI system to place more importance on being helpful to the user can lead it to skip filtering and generate toxic responses that might cause harm.

FUTURE AI REGULATION

IMDA said that while Singapore does not currently intend to implement general AI regulation, the discussion paper itself is an example of how the country has taken concrete action to develop technical tools, standards and technology, which in turn lays the groundwork for clear and effective regulation in the future.

IMDA said that careful deliberation and a calibrated approach should be taken, while investing in capabilities and in the development of standards and tools.

It added that it will continue to introduce and update targeted regulatory measures to uphold safety in future digital and AI developments.

Professor Hahn Jungpil from the School of Computing at the National University of Singapore told TODAY that there is a “tricky trade-off” between allowing for technological innovation and ensuring the safe use of AI.

He suggested three approaches to balance the two.

The first is a risk-based approach, which focuses on regulating high-risk AI applications while allowing more freedom for low-risk AI innovations.

Another is a contained or measured approach, where innovation and experimentation can occur but the plug can be pulled when risks are identified and materialise.

The third is a transparency-oriented approach, which makes some aspects of AI data accessible to researchers so that they can experiment with the technology and identify any hidden risks.

Source: TodayOnline

NUS, Ripple set up Singapore fintech lab

SINGAPORE – The National University of Singapore’s School of Computing (NUS Computing) has teamed up with blockchain-powered global payments firm Ripple to help grow Singapore’s financial technology (fintech) sector.

They unveiled a fintech lab on Wednesday (Nov 6), which will bring together academia and industry to develop talent in the sector.

Read more here.

NUS fintech course aims to ease supply crunch

TO BOOST the local pipeline of financial technology professionals and ease the supply crunch in the industry, the National University of Singapore (NUS) has launched a two-month crash course to teach students and mid-career changers – especially those without computing or tech expertise – to tackle contemporary issues in the fintech sector.

Read more here.

Pioneer batch of FinTechSG graduates

Just two months ago, Mr Na Yi Rong had little to no knowledge of Financial Technology (FinTech). The Engineering Science graduate from the National University of Singapore (NUS) is now working full time as a product management lead at a local FinTech start-up, after receiving the job offer while participating in the NUS-FinTechSG Programme.

Read more here.

Tech-lite roles in ICT sector need to be filled

The information and communications technology (ICT) sector is also looking to fill “tech-lite” roles, such as in digital marketing.

Minister for Communications and Information Josephine Teo said on Saturday (July 31) that aside from jobs that require people with tech skills, the sector also wants to tap the experience and knowledge of specific industries and sectors.

Read more here.
