As generative AI continues to capture the world’s imagination, governments are in a race to regulate AI.
One of the challenges lies in balancing the scale and speed of Gen AI-enabled threats against the cost of compliance, said John Bateman, Vice President of Systems Engineering, Asia Pacific and Japan, with cybersecurity solutions provider Radware.
He was speaking at a panel “Keeping Government Apps Safe: Navigating Tech and Data Security of AI” at GovInsider’s Festival of Innovation 2024 on 26 March in Singapore.
Other speakers included Juan Kanggrawan from Jakarta Smart City, Brigadier-General (BG) Edward Chen from the Singapore Armed Forces (SAF), and Prof Hahn Jungpil from AI Singapore (AISG) and the Centre for Technology, Robotics, AI and the Law. The panel was moderated by Prof Mark Findlay from the University of Edinburgh.
Keeping up with the speed and scale of AI-enabled threats
It is difficult for governments to make decisions when the proportion of unknown threats is much larger than before and those threats evolve faster, especially with the advent of Gen AI and large language models (LLMs), said Prof Jungpil with AISG. Kanggrawan from Jakarta Smart City shared the same perspective.
Most of the speakers agreed that a preventative approach – whether on the technology or the human front – is needed to manage AI-enabled threats.
For instance, Bateman from Radware proposed a hybrid governance model for AI: using AI-powered tools to automate compliance, with human governance to monitor the process.
One example of an AI-enabled cybersecurity tool is Radware’s Cloud DDoS Protection Service, which uses advanced algorithms to protect organisations against today’s most damaging DDoS threats, as previously reported by GovInsider.
On cybersecurity training to address AI threats, BG Chen from the SAF said that the Cybersecurity Agency of Singapore brings government stakeholders together for drills that expose them to AI-enabled threats and train them to respond accordingly.
Innovation vs. regulation: Reaping AI benefits, while countering AI risks
“The risks will remain, but the benefits will grow,” said Prof Findlay with the University of Edinburgh.
Prof Jungpil highlighted that while the Singapore government invests heavily in AI R&D projects, it still faces the challenge of aligning AI governance with geopolitics, as the rest of the world looks to what the three big players – China, the US, and the EU – are doing.
Speaking about AI governance from a private sector player’s point of view, Bateman pointed out that Gen AI has greatly democratised cybersecurity threats.
This has made it easier for malicious actors to develop and carry out cyber attacks, such as the recent surge in a new type of Distributed Denial of Service (DDoS) attack on web applications known as the Web DDoS Tsunami attack. Some of the major Gen AI tools, such as ChatGPT and Google Gemini, would reject prompts that they recognise as explicitly illegal or nefarious.
However, there is a newer generation of GPT tools that is designed without safety guardrails and available on open-source developer platforms like GitHub, said Bateman. Malicious actors can use these tools to create undetectable malware, write malicious code, and find vulnerabilities in systems.
“The new tools have increased the level of threat, as they enable unskilled people to become very sophisticated and powerful actors and generate [malicious] programs,” he explained. Some of these threats include malware, business email compromise, and spear phishing campaigns.
As organisations create their own GPT models by tapping open-source developer platforms, BG Chen added, they run the risk of downloading malicious code onto their devices.
Government and business collaborations needed to align innovation and regulation
“The role of people in governance is paramount [to regulating AI], and technology is just a tool at the end of the day,” said Bateman.
He added that it is important to bring together different public and private communities to keep up to date on AI developments and emerging threats, and to share detection and mitigation best practices based on the latest research, experience and technology.
BG Chen’s position was that there is no one-size-fits-all regulation. Critical information infrastructures, such as public utilities, would require tighter regulations to prevent disruption to citizens.
In general, whether it is Gen AI or other technologies, regulations tend to lag behind innovations, said Kanggrawan. In Indonesia’s case, data privacy laws were only rolled out in 2022, setting the foundation for other regulations around technology use.
As AI use remains in its infancy in Indonesia, Kanggrawan underlined the importance of a culture of innovation within organisations: trialling new innovations in relatively low-risk use cases, or at the sandbox, provincial and local levels, before rolling them out nationally.