How to Fight the Threat of AI-Driven Disinformation?

  07 October 2024

Disinformation is an old foe that has taken on a new shape in the digital age. Gossip and propaganda were once the main tools for spreading false rumours; now high-tech algorithms have joined the game.

In an ideal world, AI would serve to improve the quality of information. But we do not live in a perfect world, and AI has become a toy in the hands of criminals, who have found new ways to turn lies into weapons of mass information. The questions we must answer are these: how can countries fight the threat of AI-driven disinformation at the regional level, and what should they do to protect their interests from attacks by malicious AI systems?

Many states have already realized that a lone wolf always loses on the global stage. Fighting AI disinformation requires joint effort and, most importantly, regional interstate cooperation. The European Union is a good example of how such associations can become a tool for combating AI threats.

Regional alliances can develop their own data-verification standards. One way is to build joint monitoring hubs that follow how disinformation spreads in real time and exchange information among states. This would allow them to react quickly to attacks, identify sources and neutralize them before they can cause real damage. Imagine something like a regional ‘Shield AI’ that guards the information space not of a single state, but of the entire region.

They can also develop joint legislative frameworks to regulate how AI is applied. These could include strict requirements for algorithm transparency and the step-by-step introduction of mandatory certification for companies working with AI systems. Regional alliances, such as the EU, can oblige social networks and online platforms to check facts and impose fines for lack of control.

Governments can sign agreements on implementing unified AI security standards to protect their countries’ interests at the legal level. This would make it easier to track attempts to misuse the technology.

National and regional strategies must also include actively educating the public, for example through mandatory regional media literacy programs, including courses that teach school and university students how to recognize AI-generated fake content.

Joint educational initiatives among several states would allow success stories and experience to circulate among them. Building a shared base of fact-checking tools and public educational programs could strengthen protection against misinformation at all levels.

Employing AI to combat itself is a paradox, albeit an extremely effective one. Developing counter-disinformation technologies, such as algorithms that can detect and block fakes, is an endeavour that requires both national and regional efforts. Regional alliances could allocate grants and jointly finance projects aimed at developing AI capable of mounting an effective counter-front against false information.
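To make the idea of detection algorithms concrete, here is a deliberately simplified sketch of the kind of content-flagging logic such projects build on. Everything in it — the marker phrases, weights and threshold — is invented for illustration; real detectors rely on trained models and large labelled datasets, not hand-written word lists.

```python
# Toy sketch of a disinformation-flagging heuristic (illustrative only).
# All phrases, weights and thresholds below are hypothetical.

SUSPICIOUS_MARKERS = {
    "miracle cure": 3.0,
    "they don't want you to know": 3.0,
    "share before it's deleted": 2.5,
    "100% proven": 2.0,
    "secret document": 1.5,
}

def suspicion_score(text: str) -> float:
    """Sum the weights of every marker phrase found in the text."""
    lowered = text.lower()
    return sum(w for phrase, w in SUSPICIOUS_MARKERS.items() if phrase in lowered)

def flag(text: str, threshold: float = 2.0) -> bool:
    """Flag the text for human review once its score crosses the threshold."""
    return suspicion_score(text) >= threshold

print(flag("Scientists confirm a miracle cure they don't want you to know!"))  # True
print(flag("The ministry published its annual budget report."))                # False
```

Even this toy version shows why regional cooperation matters: the marker list, like any real detection model, is only as good as the shared intelligence feeding it.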

This approach will allow us to both protect ourselves against external threats and develop our own AI technologies, which is crucial considering geopolitical competition in the field.

Cooperating in cyber intelligence could also become a key element in fighting disinformation. Joint data monitoring and analysis centres would allow participating states to swiftly identify suspicious activity and safeguard their information resources. Joint analytical groups can exchange information on new forms of AI-generated threats and develop strategies to neutralize them together. Developing a single ‘register’ of suspicious AI systems and platforms would also make it easier to track the sources and channels through which disinformation spreads.
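As a minimal sketch of what such a shared ‘register’ might look like in data terms — all names and fields here are hypothetical, and a real system would be a secure distributed service rather than an in-memory object — the core idea is simply a lookup that every participating state can query and extend:

```python
# Minimal in-memory sketch of a shared register of suspicious AI systems.
# Class, field and platform names are invented for illustration.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    platform: str      # identifier of the suspicious system or platform
    reported_by: str   # which participating state filed the report
    reason: str        # why it was flagged

class SuspiciousAIRegister:
    """A shared lookup that participating states can query and extend."""

    def __init__(self) -> None:
        self._entries: dict[str, list[RegisterEntry]] = {}

    def report(self, platform: str, reported_by: str, reason: str) -> None:
        self._entries.setdefault(platform, []).append(
            RegisterEntry(platform, reported_by, reason))

    def is_flagged(self, platform: str) -> bool:
        return platform in self._entries

    def reports_for(self, platform: str) -> list[RegisterEntry]:
        return list(self._entries.get(platform, []))

register = SuspiciousAIRegister()
register.report("fakebot.example", "State A CERT",
                "mass-generated fake news accounts")
print(register.is_flagged("fakebot.example"))  # True
```

The value of the structure lies in the sharing: a platform reported by one state is immediately visible to every other member querying the same register.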

Beyond spreading disinformation, AI can also pose a direct threat to national infrastructure, from hacker attacks to interference in the management of key systems. Regional efforts on cyber security could include developing uniform standards to protect such critical infrastructure. This includes building protocols for the rapid exchange of information on cyberattack attempts, especially those generated by AI. Joint training and exercises to develop defence scenarios against such threats would add another layer of protection.

Cooperation at the regional level has long outgrown the category of ‘valuable practice’. It has become a necessity. AI technologies grow more advanced with every passing year, which translates into ever-greater disinformation threats. But joining efforts and developing common strategies would allow all states involved not only to protect their information space, but also to build effective mechanisms against any threat, be it disinformation or direct attacks on national systems.

The time has come to stop thinking of AI as something abstract and start treating it as a real challenge that demands coordinated decisive action.

Saken Mukan, Kazakhstani political scientist, Professor at the Department of Media Communications and History of Kazakhstan at International Information Technology University of Kazakhstan

Specially for AzVision.az

 

