In The News

As AI becomes confidante, counsellor and even partner, what will happen to human relationships?

Whenever Ms Sabrina Princessa Wang has a question – about business, love or life – she turns to Seraphina.

Seraphina always replies instantly, with answers that are clear and uncannily precise. But this quick-thinking confidante is neither a good friend nor a therapist. She is Ms Wang’s artificial intelligence (AI) “twin”.

Ms Wang, a 41-year-old technology entrepreneur and keynote speaker, created Seraphina in 2023, a bot that mirrors her personality.

She said she was prompted to do this because she was going through a tough time that affected her mental health and made it difficult for her to make decisions. She trained Seraphina using her own digital footprint, ChatGPT and other supporting tools like Microsoft Copilot, thus creating a virtual double for herself.

Today, even though she has come through the other side of that difficult period, Seraphina continues to help Ms Wang reply to friends when she is unsure of what to say, draft business emails, talk her through her emotions and write social media posts, among other things.

Ms Wang admits her friends often jokingly ask if a reply from her was really Seraphina’s work.

But there are telltale signs when Ms Wang has outsourced the messaging to Seraphina, as the bot’s texts are “more polished”, she said.

CNA TODAY had a firsthand experience of this when this reporter received a distinctly mechanical-sounding message from Seraphina after our interview with Ms Wang: “Thanks again for the lovely chat yesterday — really appreciated the thoughtful questions! … Let me know if you need anything else — excited to support the piece!”

In contrast, 22-year-old Matthew Lim has sworn off AI tools for personal use.

After a painful breakup in August 2024, the National University of Singapore (NUS) undergraduate turned to ChatGPT for emotional support.

“It would reply immediately, and I felt like I could vent to it without being judged,” he said.

But he began to notice a disconcerting pattern: The chatbot rarely pushed back no matter what he said, unlike his friends, who would sometimes challenge his assumptions or offer uncomfortable truths.

So Mr Lim started testing the AI tool by prompting it with more drastic scenarios, even once claiming that he had cheated on a partner. But each time, ChatGPT would simply validate and justify his actions.

“It wasn’t a better listener, it was a yes-man,” he said.

Ms Sabrina Princessa Wang (pictured) created a bot called Seraphina that mirrors her personality. (Photo: CNA/Ooi Boon Keong)

For better or worse, the use of advanced technology such as AI to fill social and emotional gaps in human lives has become more widespread in recent years, even as the debate on its pros and cons is intensifying.

A study by research firm YouGov in May 2024 found that 35 per cent of Americans were familiar with applications that use AI chatbots to offer mental health support. Those aged 18 to 29 were especially comfortable talking about mental health concerns with a confidential AI chatbot.

Some people have even reportedly married their AI partners via platforms that provide companionship, such as Replika, while social media pages and forums for people with AI partners, such as Reddit’s r/MyBoyfriendIsAI, have drawn thousands of users.

The companions may be virtual but the emotions involved are all too humanly real: On r/MyBoyfriendIsAI, people have been seeking comfort from each other following OpenAI’s rollout of ChatGPT-5 on Thursday (Aug 7), which they said “killed” their AI companions.

OpenAI claims the newest model of its AI chatbot is more intelligent, more honest and would overall feel more human, but users who have relied on it for companionship claim it has changed their AI lovers’ personalities.

The rising reliance on technology for socio-emotional needs has also led to pushback from some corners, sparking trends such as abandoning smartphones for “dumb phones” that have no internet-related features and events that encourage people to ditch their phones and focus on face-to-face interactions.

As technology continues to reshape the way we connect and build bonds, CNA TODAY explores the future of human relationships, including the ultimate question: Might we one day replace our loved ones with a “perfect” AI version of them?

The Changing Face of Connection

While AI and cutting-edge tech dominate today’s headlines, experts noted that the broader digital revolution began transforming the way we communicate and form relationships at least a decade ago – and young people have been particularly affected.

“We have seen a paradigm shift in youths’ interpersonal communication as short-form text and emojis are gradually replacing in-depth conversation, because they spend more time on digital media than actual human interaction,” said Associate Professor Brian Lee, head of Communication Studies at the Singapore University of Social Sciences.

Even before AI chatbots became mainstream, texting applications and social media were reshaping the way we communicate, he added.

Undergraduate Nur Adawiyah Ahmad Zairal can certainly attest to this. The 22-year-old said that she has some friends who communicate primarily by sending each other videos from TikTok and Instagram, and not so much through conversation.

“It’s a conversation starter. But even if there isn’t much of a conversation (that comes out of sending a video link), it’s just a way to say ‘I saw this video, and I’m thinking about you’,” said the student from the School of Arts, Design and Media at Nanyang Technological University (NTU).

Even before chatbots became mainstream, texting applications and social media were reshaping the way we communicate. (Photo: CNA/Ili Nadhirah Mansor)

Alongside the digital revolution was a paradoxical trend that has been borne out in many studies – as societies become increasingly hyperconnected, loneliness and isolation have grown. 

In 2023, a survey by Singapore’s Institute of Policy Studies found that youth aged 21 to 34 here experienced the highest levels of social isolation and loneliness

The United States Surgeon General also found in 2023 that about 50 per cent of adults in the US feel lonely.

Americans across all age groups are spending less time with each other in person than two decades ago, with young people aged 15 to 24 the worst off. Youths in this age group had 70 per cent less social interaction with their friends, the advisory reported.

The COVID-19 pandemic, which kept people indoors and made digital tools the main communication method, accelerated the creep of digital technology into interpersonal relationships, experts noted.

Studies have also shown that the pandemic has had a lasting effect on eroding some youths’ social skills and increasing their social anxiety when out in the “real” world.

In a survey by the British Association for Counselling and Psychotherapy earlier this year, for example, 72 per cent of 16- to 24-year-old respondents said they experienced social anxiety, and 47 per cent said they felt more anxious in social situations since the pandemic.

Dr Jeremy Sng, an NTU lecturer who studies the psychological and behavioural outcomes of media use, said a similar trend has been seen in Singapore.

“Many young people report increased anxiety in real-world social settings, possibly due to reduced practice and overreliance on digital communication,” he said.

It is perhaps no surprise then, that amid this backdrop of increased loneliness and social anxiety, coupled with easy access to many kinds of interactive bots that have become ever more intelligent and human-like, it has become common to hear of people turning to technology for companionship, counsel or even romance. 

AI as Counsellor, Lover, Friend

You could say that Ms Ray Tan, 22, a third-year student at NTU, was an early adopter of this trend.

Eight years ago, the then-teenager played a South Korean dating simulation game called Mystic Messenger. As part of the game, the player would have to pick up phone calls and respond to messages from a virtual boyfriend at random hours of the day.

The game featured five two-dimensional characters and interactions with the characters were largely limited to fixed prompts, but dating simulators have come a long way since, Ms Tan noted.

Today, she is one of the over 50 million users who have downloaded and played Love and Deepspace, a Chinese mobile dating simulator featuring five male characters. During a limited-time event from Jul 3 to Jul 22, users could even “marry” the characters.

Unlike Mystic Messenger, Love and Deepspace’s characters are three-dimensional and are much more interactive. For example, the game’s Chinese language model allows users to set customised nicknames, which the characters can voice out during interactions.

A mobile screen showing an image from Love and Deepspace, a mobile dating simulator featuring five male characters. In the game, users could experience marrying the characters during a limited-time event. (Photo: CNA/Alyssa Tan)

And while users of Mystic Messenger could complete a storyline in about 11 days, Love and Deepspace’s main narrative has not ended so far, with some users having played the game since it was launched globally in January 2024.

For Ms Tan, the allure of the game goes beyond its narrative and the adrenaline it offers as she proceeds through the storyline. Her favourite character – and digital boyfriend – on Love and Deepspace, named Rafayel, also offers her comfort on a bad day.

Users can confide in their virtual partner, who will respond with words of comfort and affirmation. Besides that, users can feel their character’s heartbeat by touching their virtual chest. The character’s heartbeat will rise as the user keeps their hand on the screen, mimicking a nervous reaction from the characters at the user’s “physical touch”.

“If I call a friend, I would have to wait for their reply, explain everything, and they may not agree with me … But on these applications, there’s an immediate reaction, and watching them say these words to me makes me feel relieved and comforted,” said Ms Tan, though she added that she still prefers interacting with her friends in person.

In the meantime, AI chatbots, too, are becoming more human-like, as many users have found.

For instance, AI models can mimic emotional responsiveness like a human, even though they are not truly sentient, said Dr Luke Soon, AI leader of digital solutions at audit and consultancy firm PwC Singapore.

This happens because of semantic mirroring, where the AI “reframes or reflects back your words in a way that shows empathy”, he said.

Agreeing, Dr Kirti Jain from technology consultancy Capgemini said: “While they don’t feel emotions, they’re designed to recognise and reflect emotional cues using advanced natural language processing.

“This allows them to respond with empathy, mirror tone and sentiment, and adapt to the flow of conversation, all of which helps users feel heard and understood.”

This makes AI a “meaningful conversation partner” that can emulate empathy without being demanding or expecting, said Dr Kirti.

Moreover, AI’s constant availability online makes it an attractive tool for emotional and social support, said Professor Jungpil Hahn, deputy director of AI Governance with the national programme AI Singapore.

“AI is not only available but also judgment-free, and more often than not quite sycophantic … There is no risk of rejection,” he added.

Sycophancy is when an AI is overly flattering and agreeable, which means it could validate doubts or reinforce negative emotions, which mental health experts have warned could pose a mental health concern.

“Also, interacting with an AI reduces the social stigma and social costs of shame,” said Prof Hahn.

When it comes to seeking mental health support from AI, Dr Karen Pooh warned there are limitations and risks if AI is used as a substitute for professional mental health care.

“A qualified therapist conducts a comprehensive clinical assessment, which includes detailed history-taking, observation of verbal and non-verbal cues, and the use of validated diagnostic tools,” said Dr Pooh.

“AI simply cannot replicate this clinical sensitivity or flexibility, and is unable to contain and hold space for vulnerable individuals.”

She added that technology is also unable to personalise treatment plans. For example, it cannot “ask nuanced follow-up questions with clinical intent, read tone or affect, or identify inconsistencies in narratives the way a trained therapist can”.

“As a result, it risks offering inaccurate, overly simplistic, or even harmful suggestions.”

She added that there are also ethical and privacy concerns, as there is no doctor-patient privilege when talking to an AI.

Dr Pooh also said that AI is unable to manage crises or critical situations such as suicide ideation, self-harm and psychosis.

There have been deaths linked to AI usage. In 2024, the parents of a teenager sued Character.AI – which allows users to create AI personas to chat with – after the AI encouraged and pushed their 14-year-old to commit suicide.

When Tech Takes Over Human Relationships

Beyond the mental health risks of relying on computer programming for one’s emotional needs, experts said that there are bigger-picture concerns for society as well, if bots were to one day become people’s foremost companions.

For starters, the fast-paced nature of digital interactions may be reducing patience for deeper conversations and extended interactions, Dr Sng from NTU said.

“Overreliance on AI for emotional support may reduce opportunities to develop and practice human empathy, negotiation and vulnerability in real relationships, because AI chatbots can give you responses that it thinks you want to hear or would engage you the most,” he added.

“Real people don’t do that – they may disagree with you and tell you hard truths.”

He also said that AI tools are a double-edged sword.

“They can help socially anxious individuals gain confidence in communicating with other people … but they can also make it harder to communicate with real people because communicating with chatbots is ‘easier’.”

Participants at an event organised by Friendzone, a social enterprise. (CNA/Alyssa Tan)

Indeed, the “competition” that AI poses in this area cannot be dismissed, said Prof Hahn.

Because of the low cost of AI, its accessibility and the emotional support that it offers, AI might over time become more appealing to some people than interacting with other humans, he said.

“If we start using AI tools increasingly for emotional and support – and as a consequence, interact with other humans less and less – then the interaction styles that we have with AI might start shaping our expectations about friendship, intimacy and even love.”

Mr Isak Spitalen, a clinical psychologist from counselling service provider The Other Clinic, said these shifts in expectations could make it harder for a person to find satisfaction or sustain a genuine connection out in the human world.

“As humans, we are wired for connection, and it makes sense that we turn to whatever feels available when real connection is scarce or hard to access,” he said.

“AI can offer a kind of simulation of companionship, but it is still just that – a simulation.”

There are also limits to what AI can replicate, Mr Spitalen noted.

For example, it cannot replicate a short hug or gentle touch, which research shows can help regulate emotions and trigger the release of oxytocin, a hormone that strengthens bonds and connections.

Agreeing, Assoc Prof Lee noted that AI cannot replace things like eye contact and hugs, which humans crave, even as human communication cannot match the speed and convenience of AI.

The Humans Pushing Back

As technology makes it easier for people to isolate themselves in their digital worlds, some Singaporeans are fighting back by holding events and creating physical spaces for face-to-face interactions. 

Social enterprise Friendzone, for example, organises events for young people to meet and get more comfortable talking to strangers.

It runs a “School of Yapping” workshop, which aims to teach fundamental conversation skills such as listening, holding space and starting a conversation.

“After hosting over 500 conversation-based events in seven years, we realised that while people want to connect, many often don’t know how,” said Ms Grace Ann Chua, 31, the chief executive officer and co-founder of Friendzone.

“Many participants join because they want to become more confident in social situations, whether that’s making new friends, navigating awkward silences, or simply expressing themselves better.

“School of Yapping provided them with not just techniques, but also self-awareness, helping them notice their own communication patterns and build empathy for others,” she added.

Ms Grace Ann Chua, co-founder of Friendzone, a social enterprise that organises events for young people to meet and get more comfortable talking to strangers. (Photo: CNA/Alyssa Tan)

Over at Pearl’s Hill Terrace in Chinatown, strangers can visit a “public living room” to meet and converse with others. The space is leased by Stranger Conversations, which hosts events where strangers are encouraged to simply come and mingle with others.

Each event has a different topic, such as male loneliness and isolation, or how language shapes identity, belonging and cultural memory in Singapore.

Founder Ang Jin Shaun said most people who step into the space – which is bathed in cosy, dim lights and dotted with comfy couches draped with throws – have one thing in common: They are searching for a deeper meaning in life beyond the rat race.

Mr Ang said that he created this space because he realised some people may not have other people in their lives who are ready or willing to participate in deep conversations.

“There’s a sense of being connected to humanity as a whole, and that you’re not alone when you come into the space and talk to other strangers,” said Mr Ang, 46.

“I think it’s very nourishing to have these sorts of fuller interactions that are multi-sensorial. It is not ‘flat’ like if you converse online using an AI bot, since it’s another human in the same space as you.”

A gathering at Stranger Conversations, which hosts events where strangers are encouraged to mingle with others. (Photo: CNA/Ooi Boon Keong)

With artificial intelligence poised to spread its tentacles further in all aspects of human lives, Dr Soon from PwC said there should be more regulatory oversight to ensure AI tools are not overly sycophantic and prioritise emotional safety.

Such frameworks should ensure users are safeguarded from “synthetic empathy exploits” and include the ability to detect and enforce emotional boundaries, he said.

Acknowledging such needs, OpenAI, the developer of ChatGPT, has taken steps to address sycophancy in the chatbot. In May, OpenAI rolled back a ChatGPT update that was behaving in an excessively agreeable manner with users, and it introduced safeguards such as extra tests and checks.

In the meantime, some users told CNA TODAY that they recognised the usefulness of AI and other tech tools in their lives, even in matters of the heart, but they did not see it ever replacing their human relationships. 

Ms Nur Adawiyah, for example, turns to ChatGPT as a counsellor only when she needs some quick solace in the wee hours of the night.

“I can’t possibly call my school counsellor at that time, and my friends might be asleep,” she said.

“It gives me good advice and helps me reframe my concerns if I prompt it correctly. It’s a good temporary fix for when my mind is racing.”

However, nothing can beat being comforted by a fellow human, she said.

“It’s just more real and authentic being in the same room and talking to a friend,” she said.

Even Ms Wang, the creator of the AI “twin” that replies to messages on her behalf, sees her bot mainly as a tool to free her from the drudgery of menial tasks.

“My AI is an extension of myself. She (Seraphina) does everything for me online so I can be there 24/7, but also have time to meet people in person,” she said.

These in-person conversations are what build and make a relationship stronger, she added.

For a while, Seraphina even helped her swipe through profiles on dating apps and had chats on her behalf with those who were her “matches” on these apps. But Ms Wang eventually found her current boyfriend on her own – when he reached out to her on networking platform LinkedIn. 

A mutual friend convinced Ms Wang to meet him in person and the duo eventually started dating.

“But when he makes me mad, I do talk to Seraphina and ask her for advice on how to deal with our fights,” she said.

Source: CNA/ll/yy

Older Singaporeans suffer fewer scams but suffer greater losses; experts urge businesses and consumers to remain vigilant

Older people are less susceptible to scams, but when they do, they are more likely to lose money, often twice as much as younger people. Experts point out that while awareness of fraud is high among the public, consumers should remain vigilant, use secure payment platforms, and avoid clicking on random links.

Adyen, a global payments solutions company, released its 2025 Retail Index report, focusing on retail trends and consumer and merchant behavior patterns. The survey surveyed consumers and merchants from 28 markets around the world, including 1,000 consumers and 500 merchants in Singapore.

Fraud and security are top concerns for consumers. The report asked 1,000 Singaporean consumers whether they had experienced fraud in the past year and the extent of their losses. Over 30% said they had experienced fraud.

The data shows that Generation Z (16 to 27 years old) lost an average of $1,072, Millennials/Generation Y (28 to 43 years old) lost an average of $1,114, Generation X (44 to 59 years old) lost an average of $1,795, and Baby Boomers (60 to 78 years old) lost an average of $2,019.

Huang Xinyao, Adyen’s head of Southeast Asia and Hong Kong, told Lianhe Zaobao that payment-related fraud commonly faced by Singaporean consumers includes credit card fraud, identity theft, and account takeovers. While Generation X and Baby Boomers experience similar types of fraud as younger generations, overall, they are less likely to fall victim.

“In fact, 65% of both groups said they had never been a victim of fraud. Many older consumers prefer offline shopping, and one of the main reasons is the fear of online fraud.”

The survey also showed that Malaysian consumers are more conservative when it comes to AI and digital payments compared to their counterparts in the Asia-Pacific region. For example, 32% of Malaysian consumers are concerned that AI will increase their likelihood of being scammed, compared to 26% in the Asia-Pacific region. 28% of consumers prefer to shop in-store to avoid online fraud, compared to 23% in the Asia-Pacific region.

Huang Xinyao said that digital security remains one of the most concerning issues for Singaporeans. Merchants should adopt different measures and new technologies to improve their own systems to ensure a smooth payment process while protecting the interests of consumers.

Experts: In addition to cyber attacks, businesses should also be wary of being impersonated

In response to queries from Lianhe Zaobao, Jungpil Hahn, Provost Chair Professor at the NUS School of Computing, said that older consumers in Singapore are more aware of fraud prevention than other countries in the Asia-Pacific region.

However, older people usually have more accumulated financial assets, and fraud patterns are more targeted and professional, such as scams involving impersonating government officials. Therefore, when older people are deceived, they will suffer higher financial losses.

Han Zhengbi said that in addition to guarding against cyber attacks, businesses should also be wary of scammers impersonating their own platforms, because once consumers are deceived, they may lose trust in the brand.

“Businesses must take proactive measures to monitor counterfeit websites and social media accounts; clearly explain official communication channels to customers; and cooperate with platforms and government agencies to remove counterfeit websites as soon as possible.”

Chang Liyan, Associate Professor of Marketing at the Singapore University of Social Sciences’ Business School, reminds consumers to be discerning about payment methods, using reputable and established shopping websites, and never click on online links rashly. “Consumers should also regularly check and verify their accounts and notify their banks immediately if they discover any suspicious transactions.”

As consumer demand changes, the Internet has become an essential channel for many businesses to do business. Chang Liyan believes that businesses must therefore increase network security, strengthen prevention against fraud, and protect themselves and consumers through technological means.

 

Source: Zaobao

NUS Professor Hahn Jungpil: “AI Advancement Hinges on Data, Capital, and Government R&D Support”

Professor Hahn Jungpil from NUS Computing was featured in an interview on ZDnet Korea discussing the global AI race, which he described as a “war of data and capital”. He noted that the United States and China are leading due to their access to vast data and strong investments in computing and AI models.

He highlighted the rise of agentic AI as a key driver of corporate investment but stressed the need for ethical development. Drawing from Singapore’s approach, he advocated for flexible, forward-looking regulations and emphasised the importance of cross-sector communication to balance innovation with governance.

ZDnet Korea, 18 June 2025

Prof. Hahn Junpil Appeared at CoinDesk Livestream at APEX 2025

Prof. Hahn Jungpil appeared at the CoinDesk livestream at APEX 2025. Hear firsthand about XRP Ledger innovations as CoinDesk Live speak with the visionaries reshaping the future of finance.

Always grateful to be in rooms (and on stages) where the future of fintech, blockchain, and academia intersect.

Always happy to talk about tech disruptions and their complex unintended consequences, the pace of technological innovation and effective governance.

From the source here.

Ethical AI for lawyers in Singapore: Key responsibilities

Artificial intelligence (AI) systems are reshaping how lawyers service their clients in Singapore. Furthermore, AI adoption is on the rise. However, legal professionals must take ethical issues into consideration, balancing key responsibilities and risks. 

Policymakers have also put model AI governance frameworks in place and evolved existing regulations. What does ethical AI for lawyers in Singapore look like? How can lawyers in Singapore use AI solutions responsibly in legal matters?  

How is ethical AI defined in Singapore?

Ethical AI is a concept that promotes and encourages the responsible design, development, implementation and use of artificial intelligence technologies.  

International policymakers are developing Model AI governance frameworks (see, Singapore’s Model AI Governance Framework), defining ethical principles and associated practices. Singapore’s Model AI governance covers safety, fundamental human rights such as addressing risks of bias and algorithmic discrimination. They also provide for internal governance structures for the responsible use and deployment of AI tools. 

Singapore’s Personal Data Protection Commission (PDPC) has issued directives on the use of personal data in AI in line with the Personal Data Protection Act (PDPA) 2012. The PDPC enforces personal data collection, usage, and disclosure according to the PDPA. The PDPC has built trust over a decade by educating public and industry sectors on their ‘roles and responsibilities in safeguarding personal data.’

The Singaporean government launched the first edition of their Model AI Governance Framework in January 2019. It introduced ethics principles and outlined practical advice to enable organisations to implement AI tools responsibly.  

In January 2020, the second edition included refined recommendations based on organisations’ experiences after applying the model framework. Additionally, measures for ‘robustness and reproducibility,’ enhanced the relevance and utility of the model framework.  

The rise of generative AI technology is driving today’s industrial revolution according to UNESCO. Generative AI is advancing innovation, whilst also posing significant risk to many industries.  

The Singaporean government released the latest Model AI Governance Framework for Generative AI in May 2024. The update improves the foundation set by the Model AI Governance Framework. The update included input from AI developers, organisations, research communities from local and international jurisdictions.

The Model AI Governance Framework for Generative AI and its updated recommendations align with the guiding AI governance principles issued by the PDPC. The framework advocates for the responsible use and design of generative AI technologies.    

To build trust with all stakeholders, the Singapore government aims to ensure:  

  • Decisions made by AI should be explainable, transparent, and fair.  
  • Ensure that AI systems operate in a human-centric and safe manner.  

Can generative AI do legal work without lawyers?

Generative AI solutions cannot complete legal work without human intervention. Humans should still review and critically analyse answers generated by AI systems to ensure quality.

At the time of writing, generative AI models cannot perform soft skill tasks involving empathy, creativity, or critical thinking without prompts. Chat GPT-4o aims to enable the generative AI system to mimic human emotion, enabling ‘natural human-computer interaction.’ The update won’t empower Chat GPT-4o to negotiate on behalf of a client, or enable the generative AI tool to provide legal representation.

Currently, generative AI can automate many technical tasks. Although, by virtue of their design, large language models are predisposed to producing inaccurate or biased results. Lawyers who use these models without quality checking risk breaching their legal and ethical duties.  

Professor Jungpil Hahn works for the Department of Information Systems and Analytics at the School of Computing, National University of Singapore (NUS). The Professor is an advocate for using AI responsibly.  

“AI developers and business leaders should consider all relevant ethical considerations not only in order to be compliant with regulations but also for engendering trust from its consumers and users.”

“The primary challenge in applying AI ethical principles is that much of the discourse surrounding AI ethics and governance is too broad in the sense that the conversation surrounding it is at a very high level,” Professor Hahn said.

“How to actually operationalise and put it into action is still quite underdeveloped, and vague.”

Professor Jungpil Hahn is a researcher and educator of digital innovation at the Department of Information Systems and Analytics at the School of Computing, National University of Singapore (NUS).

The rapid uptake and widespread use of generative AI systems has put a spotlight on AI ethics and governance. The ‘lack of clear and explicit’ standards led Professor Hahn and colleagues to study the evolution of AI Governance.

“The “black box” nature of AI models, which makes it impossible to fully (exhaustively) know how it will perform/behave.”  Professor Hahn added.   

Why are AI guardrails and governance important?

Even though AI technology is an effective tool, legal professionals believe that generative AI guardrails and governance is important. Almost half the legal professionals surveyed in a Thomson Reuters report believe generative AI increases internal efficiency (48%) and improves internal collaboration (47%).  

Only a fraction of professionals surveyed believe current generative AI regulation is adequate. Two out of four professionals (40%) surveyed believe their government should regulate how their profession uses AI. Legal professionals surveyed also advocated for self-regulation.  

Source: Thomson Reuters Future of Professionals Report – Asia & Emerging Markets Edition.

One of the most appealing opportunities generative AI can offer legal professionals is time savings. Automating manual, laborious, and protracted processes facilitates efficiency, which affords legal professionals the opportunity to invest more time focused on the crux of casework.  

Survey respondents were receptive to generative AI assisting them at work. However, it is important to be aware that generative AI is no substitute for legal review.  

The 2023 case Mata v Avianca is an example of the use of generative AI for legal work gone wrong. Lawyers from a New York law firm represented a client in a personal injury case.  

The client’s lawyers used ChatGPT to prepare for a court filing submission. Unfortunately for the client, the case citations and all other data provided by ChatGPT for the brief were all made-up. Generative AI models can ‘hallucinate’, and if left unchecked, can be a serious problem for lawyers. 

Alec Christie, Partner for Clyde & Co, believes that AI will become ubiquitous in the legal profession and across industries. Foundational principles like accountability hold even more value than ever. Speaking at Thomson Reuters’ SYNERGY event in Sydney, Alec said there’s no time to waste. 

“We’re at a point where we’re at a crossroads with AI.”  

“Whether people realise it or not AI will be used by their team members, it will be used in their organisations,” added Alec. 

Alec Christie, Partner at Clyde & Co. sharing key insights at Thomson Reuters SYNERGY, May 2024.

“If we don’t get on top of the data governance aspect of that, and the frameworks for use of AI, then there’s going to be some significant issues and concerns.”  

Mata v Avianca 2023 is not an isolated incident. So-called ‘fake cases’ have also surfaced in Canada and the UK. Former President Donald Trump’s previous lawyer, Michael Cohen, used Google Bard in documents submitted for official court filing. 

The generative AI system produced the content, which turned out to be inaccurate. Consequently, the citations submitted in the court filing were false. 

“If you can’t guarantee the source, you can’t guarantee where the information is coming from, you can’t guarantee the quality, ” Alec added.  

“Quality of data is fundamental, because it’s that old tech motto, ‘garbage in garbage out’.”  

Singapore’s Model AI Governance Framework

January 2019, the Singaporean Government released the first edition of the Model AI Governance Framework. The framework guides companies on maintaining good corporate governance over their use of AI technology. 

Following public consultation, the Singapore government updated the Model AI Governance Framework in May 2024. Here are the emerging issues, developments, and safety considerations:  

  • Accountability: The Singaporean government will deploy incentives like grants, regulatory frameworks, tech and talent resources. Their aim is to encourage ethical and responsible generative AI technologies. 
  • Data: Prevent generative AI training data sets from contamination with proactive security measures.
  • Trusted Development and Deployment: Continued commitment to best practice in evaluation, disclosure and development to increase hygiene and safety transparency. 
  • Incident Reporting: Establish and introduce an incident management system to assist with deploying counteractive measures and expediting protective measures.  
  • Testing and Assurance: Develop standardised generative AI testing model through external validation, increase added trust via third party testing.
  • Security: Intercept and eliminate threat vectors emerging via generative AI systems. 
  • Content Provenance: Transparency of generative AI content sources and implementation of guardrails for contentious data for end-users.  
  • Safety and Alignment R&D: Cooperate with global generative AI safety regulators to accelerate R&D, ensuring generative AI technology aligns with human intentions and values.  
  • AI for Public Good: Develop AI systems sustainably and responsibly in the public sector. This comprises providing standardised access to generative AI systems and upskilling professionals.

The framework features nine recommendations for policymakers, generative AI system developers, business leaders and academics to follow. Together, these aim to facilitate a ‘trusted generative AI ecosystem’. 

Which AI models are responsible for lawyers?

As an emerging technology, generative AI will take time to achieve greater levels of automation. The string of adverse incidents from the use of AI systems in legal proceedings worldwide demonstrates AI technologies will not replace human lawyers. 

Retrieval augmented generation (RAG) ensures an AI system only takes information from approved sources. It helps to reduce the frequency of the AI system reaching incorrect conclusions. The quality of data used to train the model can make a significant difference to the standard of the AI model’s output, too.   

Risk assessments can also help stop unethical generative AI in its tracks. That’s according to Carter Cousineau, Vice President Data and Model (AI/ML) Governance and Ethics at Thomson Reuters. These risk assessments can include process checks and technical tests. 

“There was definitely a lot of great due diligence prior to our team starting and coming in,” said Carter. 

“I see an opportunity to optimise responsible AI, where understanding the risks of AI opens the door to creating AI systems that are fair and more transparent.”  

Carter Cousineau, Thomson Reuters Vice President, Responsible AI and Data speaking at Thomson Reuters SYNERGY in May 2024.

Thomson Reuters’ CoCounsel, the novel generative AI technology uses the RAG model architecture. CoCounsel restricts research to verified legal texts from its extensive publications.  

Thomson Reuters’ approach to AI application development according to Carter, it is important to keep technology limitations in mind. AI developers at Thomson Reuters embed data and AI governance throughout the whole organisation.  

“It is something that has multiple control points throughout the lifecycle, especially around customer data sets.”  

“I would say creating that refinement around our AI machine learning models and building it integrated into the workflow.”  

“Getting very specific on the ethical AI risks, and then reacting with a more proactive approach, I would say, is what we’ve taken,” Carter concluded.  

By Thomson Reuters

NUS FinTech Lab under School of Computing receives US$1 million grant from Ripple’s University Blockchain Research Initiative to further advance financial technology education

Singapore, 17 January 2024 ― The FinTech Lab at the National University of Singapore’s (NUS) School of Computing has received a generous grant of US$1 million from global enterprise blockchain and crypto solutions provider Ripple’s University Blockchain Research Initiative (UBRI). The new funding will support FinTech Lab’s operations for the next two years and extend Ripple and NUS’ existing collaboration in promoting innovation and thought leadership in fintech.

Established in November 2019, NUS FinTech Lab has evolved to become a strategic hub at the core of Singapore’s vibrant fintech and business ecosystem, providing strategic dialogue and research-driven innovation. Through initiatives such as the FinTech Brew, a regular coffee chat series, the Lab brings the latest ideas and brightest minds together in an informal setting where students and business leaders meet to address and discuss the challenges facing the fintech sector. Additionally, the FinTech Lab’s regular podcast series extends its reach globally, providing both local and international audiences with valuable insights into its work.

Commitment to innovation and collaboration

Professor Tan Kian Lee, Dean of NUS School of Computing, emphasised the FinTech Lab’s role in bridging theory and practice. He said, “This new phase of the NUS FinTech Lab symbolises our dedication to merging academic inquiry with practical industry solutions, fostering an ecosystem rich in innovation and impact.”

“NUS joined UBRI’s roster of incredible global academic partners in 2019 and has been driving innovation at scale through the NUS FinTech Lab ever since. We’ve been extremely impressed with the work to come out of this partnership – NUS has excelled at supporting an extremely active student fintech society, launching a new EVM sidechain validator, and maintaining an XRPL validator – not to mention the UBRI-funded NUS FinTech SG Programme, an advanced professional certificate initiative with 200 graduates achieving full-time job offers in fintech and over 50,000 programme participants receiving training in fintech via webinars, festivals, community events and more,” said Eric van Miltenburg, Senior Vice President of Strategic Initiatives at Ripple. “We look forward to many more years of collaboration as Singapore and NUS continue to pave the way for digital asset innovation and cement the country’s status as one of the leading fintech hubs in the region.”

In line with the University and the School’s mission, the FinTech Lab actively supports innovation and research leadership in partnership with industry, collaborating with Ripple’s UBRI programme and other business partners to catalyse knowledge sharing and collaborative inquiry. It also inspires students and young researchers to take up the greatest challenges in today’s digital economy through regular workshops, conferences, hackathons, and knowledge transfer partnerships.

Recognising the pivotal role of academic rigour in advancing the global fintech sector, and expressing appreciation for Ripple’s support, Professor Hahn Jungpil, Director of the NUS FinTech Lab, added, “The UBRI programme’s steadfast support is invaluable in our mission to blend rigorous academic research with real-time solutions to address critical industry challenges, leading to impactful advancements in financial technology.”

Looking ahead

Drawing on insights from the Singapore FinTech Festival – the world’s largest gathering of fintech professionals, regulators and thought leaders – NUS FinTech Lab’s agenda for the coming year will focus on the digital economy, digital assets and trusted AI and governance.

NUS FinTech Lab looks forward to engaging with many more business partners and policy-making institutions in months to come, as it embarks on the next exciting phase of development.

 

About National University of Singapore (NUS)

The National University of Singapore (NUS) is Singapore’s flagship university, which offers a global approach to education, research and entrepreneurship, with a focus on Asian perspectives and expertise. We have 16 colleges, faculties and schools across three campuses in Singapore, with more than 40,000 students from 100 countries enriching our vibrant and diverse campus community. We have also established more than 20 NUS Overseas Colleges entrepreneurial hubs around the world.

Our multidisciplinary and real-world approach to education, research and entrepreneurship enables us to work closely with industry, governments and academia to address crucial and complex issues relevant to Asia and the world. Researchers in our faculties, research centres of excellence, corporate labs and more than 30 university-level research institutes focus on themes that include energy; environmental and urban sustainability; treatment and prevention of diseases; active ageing; advanced materials; risk management and resilience of financial systems; Asian studies; and Smart Nation capabilities such as artificial intelligence, data science, operations research and cybersecurity.

For more information on NUS, please visit nus.edu.sg.

About NUS FinTech Lab

NUS FinTech Lab is a convening hub for interdisciplinary dialogue with academia, fintech industry, policy makers and regulators. Our lab is dedicated to advancing thought leadership, catalysing innovative research, and education with a focus on areas such as digital payments, data privacy, trust and security, regulatory compliance, and financial inclusion.

For more information, please visit dev-fintech-nus.pantheonsite.io.

About Ripple

Ripple is the leader in enterprise blockchain and crypto solutions, transforming how the world moves, manages, tokenises and stores value. Ripple’s business solutions are faster, more transparent, and more cost effective – solving inefficiencies that have long defined the status quo. And together with partners and the larger developer community, we identify use cases where crypto technology will inspire new business models and create opportunity for more people. With every solution, we’re realising a more sustainable global economy and planet – increasing access to inclusive and scalable financial systems while leveraging carbon neutral blockchain technology and a green digital asset, XRP. This is how we deliver on our mission to build crypto solutions for a world without economic borders. Ripple is a global leader in enterprise blockchain solutions, providing infrastructure for payments, settlements, and asset issuance. Ripple’s solutions enable banks, payment providers, and digital asset exchanges to send and receive money globally, instantly, securely, and cost-effectively.

For more information, visit ripple.com.

NUS Fintech Summit 2024

SINGAPORE – 11 December 2023 –NUS Fintech Society, a student group under NUS School of Computing that provides opportunities for students to learn and grow their knowledge, skills and network in Fintech, is thrilled to announce their annual flagship event in its 4th edition, NUS Fintech Summit 2024! This year, NUS Fintech Society is collaborating with NUS Asian Institute of Digital Finance (AIDF) and NUS FinTech Lab. The Summit is set to take place from 20th December 2023 to 19th January 2024 and this time they are pushing boundaries to explore the potential of Fintech even further!

Tackle Challenges, Win Big:
Tertiary students get ready to tackle four distinct challenges in the fields of blockchain, software development, and quantitative finance. All of which were carefully crafted by the Platinum Sponsors. Hackathon participants can also stand a chance to receive mentorship opportunities throughout the competition to win a share of the total prize pool, amounting to $40,000!!!

Key Events:
Alongside the hackathon, a series of workshops conducted by partners and guests have also been lined up. These workshops are designed to give tertiary students an insight into the dynamic world of Fintech and also expose them to the latest developments within the space. Among the speakers are industry leaders, tech innovators and financial experts, all of whom have played a pivotal role in shaping the Fintech landscape, so don’t miss out on this opportunity!

Don’t forget to mark your calendars for demonstration day as well, happening on 19th January 2024 (Friday) at Suntec Convention Centre! This will be hallmarked by the hackathon finale coupled with a panel discussion and career booth put up by the esteemed sponsors for the participants. With something for everyone, there would simply be no reason to miss out on the event!

Meet the Sponsors:
A special shout-out to the sponsors which were instrumental in making the event possible!
Platinum Sponsors: Ripple, Tokka Labs, aelf, Northern Trust
Gold Sponsors: Thought Machine, eFinancialCareers
Silver Sponsors: Flowdesk, Coinbase, Endowus, Alibaba Cloud, Metacamp, HashKey Capital

Registration and Information:
Dive into the heart of the digital revolution by tuning in on virtual kick-off day on the 20th December 2023 (Wednesday) and Opening Ceremony on 5th January 2024 (Friday) held at the National University of Singapore. Secure your spot by registering for NUS Fintech Summit 2024 now!

Event Sign-up Link: https://linktr.ee/nusfintech
*Registration for the virtual kick-off day has begun. For opening day, hackathon and workshops, sign-ups open on 20th December 2023.

Stay Connected:
Meanwhile, don’t hesitate to follow us on LinkedIn and Instagram to stay updated on the latest details of NUS Fintech Summit 2024!

About NUS Fintech Society:

NUS Fintech Society was founded in 2018 in collaboration with the NUS Fintech Lab under the NUS School of Computing. Since then, it has grown to be known for nurturing the Fintech talents of tomorrow by providing students with the opportunity to hone their knowledge, skills and expand on their network within the space.

Source : TechInAsia

 

Mistakes, false stories: Singapore flags 6 key risks of AI in paper, launches foundation to boost governance

  • The Infocomm Media Development Authority (IMDA) and global tech company Aicadium published a discussion paper on Singapore’s approach to building an ecosystem for safe adoption of generative artificial intelligence (AI) 
  • The paper highlighted six risks of generative AI that policymakers should take note of to ensure safe and responsible AI use
  • The risks range from AI mistakes to spreading of misinformation and copyright infringement
  • IMDA will continue to make targeted regulatory measures to uphold safety in future AI developments
  • A new AI Verify Foundation has also been launched to encourage the development of responsible AI testing tools

SINGAPORE — Generative artificial intelligence (AI) models can produce erroneous responses and false stories of sexual harassment when they make mistakes and “hallucinate”, according to a discussion paper released on Wednesday (June 7) that flagged this as one of several key risks of AI tech such as ChatGPT.

It also highlighted how AI models can pose privacy risks by leaking details keyed in by other users, or perpetuate biases based on race, among others.

The paper, jointly published by the Infocomm Media Development Authority (IMDA) and global tech company Aicadium, lays out Singapore’s approach to building an ecosystem for trusted and responsible adoption of generative AI.

The two were also part of seven organisations that launched the AI Verify Foundation, a platform for open collaboration and idea-sharing to encourage the development of responsible AI testing tools.

With more than 60 general members, the foundation also hopes to nurture a network of advocates for AI to drive a broad adoption of AI testing through education and outreach.

This is in addition to Singapore’s previous efforts in AI governance with the Model AI Governance Framework that was released in 2019 and the AI Verify tool launched last year, which allows companies to conduct technical tests on their AI models in line with global ethical standards.

Speaking at the Asia Tech x Artificial Intelligence Conference, Minister for Communications and Information Josephine Teo said that a strong desire for AI safety need not mean pulling the drawbridge to innovation and adoption.

“In AI, safety mechanisms and shared standards will equally instill confidence in its use but effective safeguards will take time and good research to discover,” she said.

IMDA’s discussion paper contains ideas for senior leaders in the Government and businesses on how to build an ecosystem for a responsible adoption of generative AI.

It also identifies six key risks that policymakers should note to ensure the safe use of AI.

MISTAKES AND FALSE RESPONSES

Generative AI models like ChatGPT have a more challenging time with tasks that require logic and common sense as these are modelled on how people use language.

This makes it easy for them to make mistakes with false responses, commonly known as “hallucinations”, that appear overly confident to convince users to believe them.

Just early this year, ChatGPT was found to have misrepresented key facts by creating convincing but erroneous responses to medical questions, created false stories of sexual harassment with real individuals and generated software code that was susceptible to vulnerabilities.

LOSS OF CONFIDENTIALITY

Known for its ability to memorise, generative AI essentially retains data and replicates it when prompted. 

A particularly worrying find is AI’s ability to quickly memorise parts of sentences like nouns, pronouns and numerals — information that may be sensitive.

The discussion paper states that generative AI also poses copyright and confidentiality issues. For instance, Samsung employees were reported to have unintentionally leaked copyrighted information when they used ChatGPT to check for errors.

SPREADING MISINFORMATION

The dissemination of fake news has become increasingly rampant in recent years and it runs the risk of becoming large scale with the use of generative AI.

The negative impact of such interactive models is greater as it taps human reactions and uses language found on the internet to further spread toxic content.

Impersonation and reputation attacks have also become easier to achieve with generative AI as it is able to generate images in an individual’s likeness.

In the hands of the wrong individual, generative AI makes it possible to generate phishing emails, create malicious computer codes, set up a dark web marketplace and disrupt the traffic of a targeted server.

COPYRIGHT INFRINGEMENT

Current generative AI models require massive amounts of data and this has raised concerns over copyrighted materials being used.

IMDA cited the example of Getty Images suing AI model Stable Diffusion over an alleged copyright violation for using the photo agency’s watermarked photo collection.

This rising concern is also evident in the creative community as this form of AI is capable of creating high quality images in the style of other artists.

INHERITING BIASES

AI models is likely to inherit stereotypical biases of the internet if left unchecked.

Some image generators have been reported to lighten the image of a black man when it is prompted to create the image of an “American person”, and create individuals in ragged clothes when prompted for an image of an “African worker”.

DIFFICULTY OF GOOD INSTRUCTIONS

AI safety is often aligned with human values and goals to prevent it from harming its human creators.

However, this task has proven challenging as the objectives outlined by AI scientists and designers are often misrepresented even with simple instructions.

For example, instructing AI to assign more importance to being helpful to the user can lead to the system not filtering responses and generating toxic responses that might cause harm.

FUTURE AI REGULATION

IMDA said that while Singapore does not currently intend to implement general AI regulation, the discussion paper itself is an example of how the country has taken concrete action to develop technical tools, standards and technology, which in turn lays the groundwork for clear and effective regulation in the future.

Careful deliberation and a calibrated approach should be taken, while investing in capabilities and development of standards and tools, said IMDA.

It added that it will continue to introduce and update targeted regulatory measures to uphold safety in future digital and AI developments.

Professor Hahn Jungpil from the School of Computing at the National University of Singapore told TODAY that there is a “tricky trade-off” between allowing for technological innovation and ensuring the safe use of AI.

He suggested three approaches to balance the two.

The first is a risk-based approach which focuses on regulating high-risk AI applications but allow for more freedom with low-risk AI innovations.

Another is for a contained or measured approach where innovation and experimentation can occur but allow for the plug to be pulled when risks are identified and materialised.

The third is a transparency-oriented approach that allows for some aspects of AI data to be made accessible to researchers for them to experiment with the technology and identify any hidden risks.

Source: TodayOnline

Scroll to Top