
Asian Governments Tackle AI Misuse as Global Adoption Surges

Government Concerns Over AI-Related Crime, Bias and Deepfakes in the Spotlight

Indian Prime Minister Narendra Modi on Friday warned that AI-generated content such as deepfake videos could cause societal friction. He is among a growing number of Asian leaders concerned about the potential misuse of artificial intelligence technology.


"Because of AI, especially deepfake technology, a new crisis has emerged," Modi said. "People at large do not have access to a verification mechanism for video or pictorial content, so the dissemination of deepfake content can create instability within the populace and even create large-scale discontent."

Addressing a gathering of media representatives at the Bharatiya Janata Party headquarters in New Delhi, Modi said the government and the media must educate the masses about the perils of AI-generated content. "Recently, someone made a video where I was seen dancing to a folk song, and it was a hilarious one and was widely shared on social media. But this capability also gives rise to a major concern. We must make people understand the deep impact deepfakes can have on social order," he said.

Modi said he met with OpenAI, the maker of ChatGPT, and advised the company to include warnings on AI-generated content, similar to health warnings on cigarette packets. He said company representatives "understand the perils but do not know how the menace can be stopped."

Modi's comments followed an edict by Indian authorities directing social media intermediaries to identify disinformation and deepfakes on their platforms and remove them within 36 hours of receiving a report from a user or a government authority (see: India Orders Removal of Deepfakes Within 36 Hours).

The adoption of generative AI technology has exploded in the Asia-Pacific region since 2022 and is expected to keep growing rapidly through 2030. AI tools give anyone the ability to create high-quality audiovisual, text-based and graphical content in a matter of seconds.

Nasscom, the leading association of India's $245 billion technology industry, said generative AI investments in India grew more than twelvefold in 2022. More than $19 billion in investments has been committed toward the technology so far. Inc42 estimates that India's generative AI market will grow more than 15-fold over the next seven years, driven by fierce competition among generative AI startups in code generation and data analysis.

AI Misuse Concerns Grow

Digital change has given rise to financial scams, cyberespionage and digital crimes, and experts fear the rise of generative AI could trigger a wave of opportunistic misuse of the technology. AI algorithms have also suffered from biases and inaccuracies, making governments wary of embracing AI without establishing checks and balances.

South Korean President Yoon Suk Yeol expressed his concerns during a recent speech at the United Nations General Assembly. "If we fail to curb the spread of fake news resulting from the misuse of AI and digital technologies, our freedom will be at risk, the market economy anchored in liberal democracy will be in peril, and our very future will be under threat," he said.

Japanese Prime Minister Fumio Kishida, a recent victim of a deepfake video, has advocated a middle path - funding AI development while implementing international standards and guidelines for AI developers.

Governments and regulators must address two key challenges: ensuring that tech firms keep their AI tools free of errors, inaccuracies and bias, and ensuring that criminal use of AI, in the form of deepfakes, voice cloning or phishing, is mitigated before it becomes a crisis.

Recorded Future's Insikt Group said in a report that cybercriminals have mastered the use of voice-cloning technology in conjunction with other AI technologies such as deepfake video, text-based large language models and generative art, and that open-source or freemium AI platforms have lowered the barrier to entry for low-skilled threat actors.

"These platforms' ease-of-use and out-of-the-box functionality enable threat actors to streamline and automate cybercriminal tasks that they may not be equipped to act upon otherwise. Threat actors have begun to monetize voice cloning services, including developing their own cloning tools that are available for purchase on Telegram, leading to the emergence of voice-cloning-as-a-service," the firm said.

"It goes beyond saying that security and AI go hand in hand," Vivek Mahajan, Fujitsu's chief technology officer, told Information Security Media Group. "Organizations won't use AI without security processes in place. We set up a lab in Israel with Ben-Gurion University where we're doing research on how users of AI platforms can better detect the origin of IP addresses where data is coming from. That is an essential part of what we do as part of AI."

Tackling AI Bias and Inaccuracies

Governments and organizations expect AI systems to help them form future strategies, plan their operations, identify risks and opportunities and monitor performance, among other potential tasks. But the effectiveness of an AI model rests on the quality of the datasets used to train the algorithm and the effort developers make to curb bias or assumptions.

Governments are already making it clear that running AI systems with unresolved inaccuracies or biases could result in regulatory action. India's Minister of State for Electronics and IT Rajeev Chandrasekhar signaled the government's intentions on Thursday after an X user reported that Google's Bard had refused to summarize a media article, claiming the news portal spread biased and false information.

"Search bias, algorithmic bias and AI models with bias are real violations of the safety and trust obligations placed on platforms under Rule 3(1)(b) of IT rules under regulatory framework in India. Those who are aggrieved by this can file FIRs against such platforms and safe harbor/immunity under Sec79 will not apply to these cases," Chandrasekhar tweeted.

Reva Schwartz, principal investigator for AI bias at NIST, said AI systems do not operate in isolation and their decision-making abilities must be based on context. "Context is everything," she said. "If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI."

Fujitsu's Head of AI Amitkumar Shrivastava told ISMG that possibly the only way to make an AI system less prone to bias and inaccuracies is to expose it to a set of people in a particular geography or region and use their feedback and context to refine it before it is launched on a global scale.

"If you're not basically making your technology available to the people, I think it will be really difficult for you to get appropriate feedback. And if you're not getting appropriate feedback, you're not going to understand the actual loopholes of the technology," he said, adding that open-source tools such as ChatGPT evolved over time based on feedback from millions of users.

According to Gartner, AI development requires a coordinated approach within an organization for AI adoption, business goals and user acceptance. "AI requires new forms of trust, risk and security management that conventional controls don't provide," said Mark Horvath, Gartner vice president and analyst. "Chief information security officers need to champion AI TRiSM to improve AI results, by, for example, increasing the speed of AI model-to-production, enabling better governance or rationalizing AI model portfolio, which can eliminate up to 80% of faulty and illegitimate information."

"The implementation of AI TRiSM enables organizations to understand what their AI models are doing, how well they align with the original intentions and what can be expected in terms of performance and business value," Horvath said. "Without a robust AI TRiSM program, AI models can work against the business introducing unexpected risks, which causes adverse model outcomes, privacy violations, substantial reputational damage and other negative consequences."

Federated Learning Can Improve AI Models

Shrivastava said that to develop and train an AI model, developers need access to vast amounts of live data that may include sensitive information. Storing that information centrally is not only complex but also endangers the data security and privacy of millions in the event of a breach or inadvertent exposure.

Risks can be mitigated if organizations intent on building AI systems choose the federated learning approach, he said. This approach involves developers training an AI model on a data repository without actually seeing or accessing the data.

"I feel that technologies like federated learning as a new way of doing machine learning in the future could create a scenario where organizations will choose not to store data at the centralized location but train the AI models on the client system itself," he said.

"AI models trained on the client system can be aggregated to create a global model which is free of bias, hallucinations or inaccuracies, and organizations can then use the model to apply to specific geographies or countries."


About the Author

Jayant Chakravarti

Senior Editor, APAC

Chakravarti covers cybersecurity developments in the Asia-Pacific region. He has been writing about technology since 2014, including for Ziff Davis.



