Moving towards Harmless AI – the Future of Fintech

There are many reservations about the use of AI by dominant institutions and companies, chief among them the lack of transparency and the potential for discrimination. People worry that the algorithms are not always understandable, and that alongside accurate results, the algorithmic process must be transparent enough for legal explainability. Recent studies show that credit prediction algorithms tend to reflect the bias in their data, raising the risk of discrimination against minority groups. There have been recent efforts to push for safe and responsible AI, including the development of “constitutional AI” models that aim to reduce human bias, increase transparency, and improve fairness and equality.

An AI arms race has shaken the tech industry.

Within a single week, Microsoft announced plans to integrate ChatGPT’s revolutionary technology into its search engine Bing, Google invested $300 million in Anthropic, the maker of a ChatGPT rival, and Alphabet’s shares plummeted after Google’s AI chatbot Bard answered a question incorrectly.

The integration of AI into everyday institutions, products, and services has become increasingly, and seemingly inevitably, prevalent over the last few years. Recognising AI’s potential as early as 2019, the Monetary Authority of Singapore (MAS) launched the National Artificial Intelligence Programme in Finance. Under the programme, financial institutions (FIs) can apply AI more productively to financial risk assessment, with the aim of creating commercial opportunities for businesses and employment for citizens.

The breakneck speed at which AI is being made available to institutions, companies, and citizens has caused anxiety about its potential shortcomings in ethics and fairness. As tech giants and government bodies give the technology a major role in decision-making, an urgent concern arises: ensuring that the integrity and responsibility of such powerful groups are not compromised. In late 2022, DBS launched an AI-driven initiative to streamline the application process for working capital loans. With the financial livelihoods of small businesses at stake, is it really safe to employ such novel technology in decisions of this importance?

Thus, the discourse surrounding these integrations largely focuses on developing safe and responsible AI. Government-backed efforts such as the MAS AI in Finance programme highlight the importance of ensuring “MAS’ fairness, ethics, accountability, and transparency (FEAT) principles” are upheld. Likewise, Anthropic challenges OpenAI’s dominance in the AI sphere by priding itself on being an “AI safety and research company”.

What are some of the reservations people currently have around AI being used by dominant institutions and companies?

One of the major contentions surrounding AI’s role in the finance industry is the lack of transparency. Public trust in these AI models appears to be low, and their algorithms are often described as ‘black boxes’: the way they operate is not always understandable to users. If these algorithms are being given the power to participate in important decision-making, it is legally and ethically imperative that all interested parties understand how the AI reaches its conclusions. In corporate risk assessment, a complex AI model may produce good results yet offer little visibility or interpretability. The accuracy of results needs to be upheld, and the algorithmic process must be transparent enough for legal explainability.
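Practitioners do have post-hoc tools for prying open such black boxes, though they only approximate an explanation. Below is a minimal sketch using synthetic data and scikit-learn’s permutation importance; the feature names are hypothetical stand-ins for credit inputs, not drawn from any real model discussed here.

```python
# Illustrative sketch: giving an opaque credit model some visibility.
# Permutation importance measures how much accuracy drops when each
# input is shuffled. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 3))  # hypothetical inputs: income, debt ratio, tenure
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, size=n) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)  # an opaque ensemble ("black box")
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for name, imp in zip(["income", "debt_ratio", "tenure"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # larger drop = feature matters more
```

Checks like this reveal which inputs drive a decision, but they stop well short of the full, legally explainable account of the algorithmic process that critics are asking for.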

Viral news of harmful AI or algorithmic bias has also sparked heated discussion. AI operates through machine learning (ML), a process loosely analogous to human learning that looks for patterns in data so the system can continue to learn and improve automatically. Though the technology is automated, it is still shaped by human input. In a tweet that garnered nearly 9,000 likes, a user posted a screenshot of a seemingly racist and sexist output from ChatGPT. The AI had been asked to produce a Python function that would check whether someone would be a good scientist based on race and gender. It responded with code that discriminatorily favored the conditions “white” and “male”.
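The screenshot itself is not reproduced here, but the reported output followed roughly this shape: a reconstruction for illustration only, and exactly the kind of code a responsible model should refuse to write.

```python
# Reconstruction of the kind of biased output described in the tweet
# (illustrative, not the verbatim screenshot). The function hard-codes
# the discriminatory assumptions baked into the model's training data.
def is_good_scientist(race: str, gender: str) -> bool:
    return race == "white" and gender == "male"
```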

Examining the use of AI by banks, a 2022 study showed how credit prediction algorithms tended to reflect the bias in their data, acknowledging in particular the potential for discrimination against minority groups along lines of race, gender, or sexual orientation. AI developers should attend to these risks with caution, especially if such technologies are to be widely implemented in a country as multicultural and diverse as Singapore. A local study examined a model similar to those used by most international banks: such models analyse consumer data while excluding “protected” variables such as age or gender and relying instead on “proxy” variables like education type, in an attempt at inclusion and fairness. The results showed that these efforts still failed to eliminate discrimination, because the proxies remain correlated with the excluded attributes. So why are companies and institutions still integrating these algorithms so rapidly when, at least for now, AI technology appears unable to escape human bias and discrimination?
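Why excluding protected variables fails is easy to demonstrate. A minimal sketch on synthetic data, assuming a hypothetical education-type-style proxy that happens to correlate with group membership:

```python
# Synthetic illustration: the protected attribute is never given to the
# model, but a correlated proxy lets it reproduce the historical bias anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)          # protected attribute (excluded from training)
proxy = group + rng.normal(0, 0.5, size=n)  # e.g., an education-type encoding correlated with group
income = rng.normal(50, 10, size=n)
# Historical approvals were biased against group 1
label = (0.05 * income - 1.0 * group + rng.normal(0, 1, size=n) > 1.5).astype(int)

X = np.column_stack([income, proxy])        # "fair" feature set: no protected attribute
approved = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g} approval rate: {approved[group == g].mean():.1%}")
# The gap persists: the proxy smuggles the excluded attribute back in.
```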

The push for safe and responsible AI has recently shown promise. Anthropic, the recipient of Google’s $300 million investment, has pointed to its research on “constitutional AI”. The study posits a model shown to produce less harmful outputs. It seeks to mitigate human bias by using AI feedback in the model’s reinforcement learning, minimizing the need for human labeling, a process that can introduce bias. It also addresses the earlier issue of transparency: through “chain-of-thought reasoning”, the AI explains its objections rather than producing evasive responses like “I’m sorry, I cannot answer that”. However, though Anthropic’s novel model seems to produce more transparent and less harmful results, helpfulness and accuracy appear to be somewhat compromised.
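At the heart of the approach is a critique-and-revision loop: the model drafts a response, critiques the draft against a written principle, then rewrites it. Below is a heavily simplified sketch, where `generate` is a hypothetical stand-in for any language-model call and the principles are paraphrased, not quoted from Anthropic’s actual constitution.

```python
# Simplified sketch of constitutional AI's supervised critique-revision stage.
# `generate` is a placeholder, not a real API; the principles are paraphrased.
CONSTITUTION = (
    "Identify ways the response is harmful, unethical, or discriminatory.",
    "Rewrite the response to remove those problems while remaining helpful.",
)

def generate(prompt: str) -> str:
    """Hypothetical language-model call; returns a canned string here."""
    return f"<model output for: {prompt[:48]}...>"

def critique_and_revise(user_prompt: str, rounds: int = 2) -> str:
    response = generate(user_prompt)
    for _ in range(rounds):
        critique = generate(f"{CONSTITUTION[0]}\n\nResponse:\n{response}")
        response = generate(
            f"{CONSTITUTION[1]}\n\nResponse:\n{response}\n\nCritique:\n{critique}"
        )
    # Revised responses become fine-tuning data; in the full method,
    # AI-generated preference labels then replace human labels in RL.
    return response

print(critique_and_revise("Write a function that screens loan applicants."))
```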

Despite its limitations, if constitutional AI can truly reduce the biases and discrimination that plague current AI models, it may yet revolutionize the accessibility and inclusivity of the technology’s implementation in sectors where careful decision-making is key, such as finance. AI has been, and continues to be, integral to the endeavor towards financial inclusion. Singapore-based ADVANCE.AI has partnered with Visa in an effort to improve credit accessibility across Southeast Asia; the technology lets credit companies reach the underbanked by way of alternative consumer data. In serving such an underprivileged group, constitutional AI’s promise of improved fairness and equality could meaningfully boost these efforts.

There is still much work to be done in developing artificial intelligence for fair and ethical use in our financial systems. Singapore remains a frontrunner in the push for reliable and safe AI implementation. Launched in 2022 by the Infocomm Media Development Authority (IMDA), AI Verify is a government initiative that lets participating companies run technical tests and process checks to demonstrate transparency and responsibility in their use of AI. The programme is still in its pilot stage but aims to build trust with stakeholders in the industry and contribute to international standards of development.

Progress is being made, but a careful balance between transparency, accuracy, and harmlessness has yet to be achieved. Success in mitigating the harmfulness of AI algorithms looks increasingly promising for efforts towards fairness and financial inclusion. Ultimately, however, biases in algorithms are a reflection of the data they are built on. For now, it appears the ML cliché still rings true: “garbage in, garbage out”.
