
How do we govern the complex intersection between AI and cybersecurity?

Co-authors: Yolanda Lannquist, Jia Yuan Loke, Cyrus Hodes and Roman V. Yampolskiy

It’s easy to imagine a not-so-distant future where suburban commuters enjoy a peaceful ride to work in a self-driving vehicle that is either privately owned or belongs to a regional transportation system. Many studies have envisioned this utopian scenario of time well spent in a small mobile living space – configured to reduce traffic, pollution and accidents – with passengers preparing for work or enjoying some form of leisurely entertainment.

But what if a cyber attacker were to take over the vehicle’s operating system and transform this scenario into a social protest, a hostage situation or even a terrorist attack? In a less sensational scenario, someone might take control of the vehicle to try to steal it. Beyond these hypothetical situations, the truth is that cyber attacks are already here and very real.

The stakes of cyber attacks are increasing with digitalization

As digitalization increases, so do the stakes and risks of cyber attacks. As the number of digital devices grows, more actors and systems find themselves at the intersection of AI and cybersecurity. And as individuals, businesses and governments rely increasingly on IT, they are more likely to fall victim to cyber attacks.

The automobile industry is a good example of how digitalization increases risk. Given the ever-increasing level of digitalization in cars, a cyber attacker has many more hacking options today than 20 years ago, and these options will only multiply if self-driving cars become the norm.

Cyber attacks are vast but attackers’ goals are narrow

The cyber attack surface is expanding in step with the growing number of interconnected personal devices, ‘smart’ IoT and home devices, cloud computing systems and databases, supply chains, and more. While cyber attacks are numerous and varied, they can be categorized according to just three information security goals: confidentiality, integrity, and availability.

Confidentiality attacks gain entry into a computer network to monitor or extract information, such as the September 2020 breach of Norway’s Labour Party members’ email accounts.

Integrity attacks involve entering a system and altering information. In August 2020, Russian hackers compromised Polish, Lithuanian and Latvian news sites and replaced legitimate articles with falsified posts that used fabricated quotes from military and political officials to discredit NATO.

Availability attacks block access to a network, whether by overwhelming it with traffic, denying service, or shutting down physical or virtual operations. In August 2020, New Zealand’s stock exchange was disrupted for several days after a denial-of-service attack by unknown actors.

Cyber attackers misappropriate technologies while victims weigh options

Cyber attackers using AI frequently rely on ‘dual-use’ technologies: tools and methods developed by well-meaning actors that can be misappropriated to do harm.

Attribution is very difficult in cyberspace, so tracing culprits can be tricky: cyberspace is interconnected, has no geographical limits, and is largely anonymous. While it might be possible to identify the computer used in an attack, it is difficult to know whether it was being remotely controlled.

Measuring the costs of cyber attacks, and in turn the appropriate response, can also be challenging. Many attacks lead not to financial or physical harm but to loss of privacy, confidentiality and other costs that are hard to quantify.

Finally, it is often said that cyberspace is an offense-persistent strategic environment: defence has to be continuous because systems are under constant stress.

How vulnerabilities in AI systems can impact cybersecurity

AI-enabled systems are vulnerable to a vast range of attacks, such as integrity attacks that alter information. For instance, attackers may tamper with a model’s training data (i.e. data poisoning) to degrade its performance or plant a hidden back door.
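To make the mechanism concrete, below is a minimal sketch of label-flipping data poisoning, assuming a toy scikit-learn classifier and a synthetic dataset; the model, data and poisoning rate are illustrative assumptions, not a description of any real incident.

```python
# Hypothetical sketch: label-flipping data poisoning against a simple classifier.
# The dataset, model and poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean training data the defender believes is trustworthy.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker silently flips the labels of 20% of the training examples.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("accuracy with clean labels   :", clean_model.score(X_test, y_test))
print("accuracy after data poisoning:", dirty_model.score(X_test, y_test))
```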

Adversarial examples are a common risk: small, carefully chosen changes to a machine learning model’s inputs that lead to misclassification. For an image classification model, where machines learn to identify classes of images, an attacker overlays pixels on an existing object or image. The resulting perturbation is imperceptible to humans but enough to ‘fool’ the model. This can have real-world implications, for example when an autonomous vehicle relies on computer vision to identify stop signs.
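The mechanism can be sketched in a few lines. The example below applies a fast-gradient-sign-style perturbation to a toy logistic-regression classifier built with NumPy; the data, model and perturbation budget are illustrative assumptions and deliberately far simpler than any deployed vision system.

```python
# Hypothetical sketch: a fast-gradient-sign-style adversarial perturbation
# against a tiny logistic-regression classifier. All data and parameters
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Train a toy classifier on random "images" (flattened pixel vectors).
X = rng.normal(size=(500, 64))
w_true = rng.normal(size=64)
y = (X @ w_true > 0).astype(float)

w = np.zeros(64)
for _ in range(1000):                       # plain gradient descent
    grad = X.T @ (sigmoid(X @ w) - y) / len(X)
    w -= 0.1 * grad

# Take one input and craft a small, structured perturbation.
x, label = X[0], y[0]
pred_clean = sigmoid(x @ w)

# Gradient of the logistic loss w.r.t. the input is (p - y) * w.
input_grad = (pred_clean - label) * w
epsilon = 0.1                               # perturbation budget (assumed)
x_adv = x + epsilon * np.sign(input_grad)   # imperceptible per-pixel change

pred_adv = sigmoid(x_adv @ w)
print("clean prediction      :", round(float(pred_clean), 3))
print("adversarial prediction:", round(float(pred_adv), 3))
```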

AI can scale up labour-intensive attacks (e.g. spear phishing), exploit human vulnerabilities (e.g. image synthesis for impersonation), search for vulnerabilities in IT systems, and enable new and more advanced attack techniques.

By nature, cyber operations make it difficult to know if human labour, simple automation, or AI systems are behind attacks. For example, ‘deep fakes’, or realistic photos, videos or audio generated by AI, can be used to access biometric systems or impersonate individuals to gain access to personal or professional networks. 

AI can fill the talent gap in cyber defence for governments and companies

AI can also be used for defence. First, AI can be used to find vulnerabilities in software. Some argue that this will asymmetrically benefit defenders, because they can find and patch vulnerabilities before the software is even released.
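AI-assisted vulnerability discovery typically builds on automated testing techniques such as fuzzing, with learned components deciding which inputs are worth trying next. The sketch below shows only the non-AI core of that loop, randomly mutating inputs to a deliberately buggy, hypothetical parse_record function and recording crashes; every name and value in it is an illustrative assumption.

```python
# Hypothetical sketch: the core loop of mutation-based fuzzing, the kind of
# automated testing that AI-assisted vulnerability discovery builds on.
# parse_record and the seed input are illustrative assumptions.
import random

def parse_record(data: bytes) -> int:
    """Toy parser with a deliberate bug: it trusts a length byte."""
    if len(data) < 2:
        raise ValueError("too short")
    length = data[0]
    payload = data[1:1 + length]
    # Bug: IndexError when the claimed length exceeds the actual payload.
    return payload[length - 1]

def fuzz(seed: bytes, rounds: int = 10_000) -> list[bytes]:
    """Randomly mutate the seed and collect inputs that crash the parser."""
    random.seed(0)
    crashes = []
    for _ in range(rounds):
        mutated = bytearray(seed)
        pos = random.randrange(len(mutated))
        mutated[pos] = random.randrange(256)    # flip one random byte
        try:
            parse_record(bytes(mutated))
        except ValueError:
            pass                                # graceful rejection, expected
        except IndexError:
            crashes.append(bytes(mutated))      # unexpected crash: a finding
    return crashes

crashes = fuzz(b"\x04ABCD")
print(f"found {len(crashes)} crashing inputs out of 10,000 mutations")
if crashes:
    print("example crashing input:", crashes[0])
```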

Machine learning models may also be trained on data from past attacks to detect malicious activity, and a growing number of cybersecurity providers use machine learning to detect anomalies.
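As a minimal sketch of what such anomaly detection can look like, the example below trains scikit-learn’s IsolationForest on synthetic “normal” connection features and flags an outlier; the feature set, data and contamination rate are assumptions made purely for illustration.

```python
# Hypothetical sketch: flagging anomalous network connections with an
# Isolation Forest. The features and synthetic "traffic" are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Assumed features per connection: [bytes sent, bytes received, duration (s)]
normal_traffic = rng.normal(loc=[500, 1500, 2.0],
                            scale=[100, 300, 0.5],
                            size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# New observations: two ordinary connections and one that exfiltrates data.
new_connections = np.array([
    [480.0, 1450.0, 1.8],
    [530.0, 1600.0, 2.3],
    [50000.0, 200.0, 45.0],   # unusually large upload, long duration
])

# predict() returns 1 for inliers and -1 for anomalies.
for features, flag in zip(new_connections, detector.predict(new_connections)):
    status = "ANOMALY" if flag == -1 else "ok"
    print(status, features)
```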

AI may also be used for knowledge consolidation. Cybersecurity professionals currently struggle to keep up with the latest research, blogs, articles and reports in the field. IBM Watson for Cyber Security is an ongoing effort to use AI to process large amounts of cybersecurity data and predict which pieces of information are most relevant to professionals.

Finally, AI may enable new defensive techniques by improving actors’ ability to detect attacks, identify attackers, and respond. For example, in the 2016 DARPA Cyber Grand Challenge, seven AI systems competed to identify and patch their own vulnerabilities while probing and exploiting their opponents’ weaknesses.

The cyberspace governance landscape

Cyberspace governance is the subject of several international and multi-stakeholder agreements, industry initiatives and community norms. Governance efforts focus on offence and active defence, but more laws, norms and legal protocols for cyber operations are needed, especially at the international level.

The United Nations Institute for Disarmament Research (UNIDIR) has summarized national cyber policies and profiles across 193 UN Member States in its online interactive ‘Cyber Policy Portal’, showing a wide range of diversity in approaches. The Portal allows for analysis and comparison across key legislation and documents, responsible agencies, and even multilateral agreements.

International law

Few international agreements address the development and use of cyber weapons. In fact, the extent to which international law applies to cyber weapons is a key debate. The US has argued that cyber defence regulations should be built on existing international law. Other nations, including Cuba, Russia and China, have disagreed, arguing that applying existing laws to cyberspace will militarize it and impede peaceful resolution.

Some international bodies have considered a human rights based approach toward issues of cybersecurity. The Human Rights Council (HRC) has done just that in its Resolutions on the Promotion, Protection and Enjoyment of Human Rights on the Internet. This follows the HRC’s 2012 affirmation “that the same rights that people have offline must also be protected online, in particular freedom of expression”.

Multi-stakeholder and international agreements

A number of multi-stakeholder and international agreements aim to bring global actors together for security in cyberspace. This is important because the boundaries between the private and public sectors, and of course national borders, continue to blur when it comes to responsibility for cybersecurity and cyber attack attribution.

To respond to this new dynamic, governments and intergovernmental organizations need to build stronger formal mechanisms to equip institutions and incentivize actors to report cyber breaches. This would contribute to the well-being of cyberspace and empower citizen-users through protection and awareness.

At the 2018 Internet Governance Forum (IGF), hosted at UNESCO in Paris, France’s President Macron launched the Paris Call for Trust and Security in Cyberspace. It is a non-binding agreement that calls on supporters to adhere to a set of common principles, including agreeing to stop cyber attacks on critical infrastructure like electrical grids and hospitals. As of its one-year anniversary in November 2019, the Paris Call had been endorsed by 74 states, 24 public and local government authorities, 333 organizations and members of civil society, and 604 companies and private sector entities, including major technology companies such as Microsoft and Facebook. Notably, the US, Russia and China have not endorsed the initiative.

Industry approaches

In 2018, a coalition of technology companies led by Microsoft created the Cybersecurity Tech Accord to improve the security, stability and resilience of cyberspace. In addition to spearheading this accord, Microsoft has also proposed six norms for international cybersecurity. However, these norms are challenged when national IT infrastructure is tied to the private sector, such as when US companies compete for a USD 10 billion contract to build a cloud computing platform for the Pentagon.

Building secure foundations in the computing community

As the creators of potentially vulnerable programmes and data systems, members of the computing community have a critical role to play in cybersecurity. Once design choices have been integrated into systems, their path-dependent effects are hard to rectify without drastically changing the whole system. For example, two well-known classes of security vulnerability stem from features of the widely used C programming language: the null pointer and the gets function. Both have been exploited by malware, resulting in billions of dollars in damage. Progress might have been slower if these early programming languages had been designed with more security in mind, but computers would have been less vulnerable.

It is better to build AI systems that are secure and resilient by design than to grapple with actor behaviour after the fact. Ethics training in computer science curricula could also help prevent harmful behaviour, whether intentional or otherwise.

Creating cybersecurity awareness to prevent attacks

Educating the public to avoid cyber threats

It is important to educate and empower those who use and depend on digital products and services. A survey by Kaspersky found that 52% of businesses are at risk from human error, such as opening emails from unknown senders, clicking on unsafe links, or entering confidential information into seemingly friendly accounts.

Governments and businesses can educate citizens and employees to identify basic attacks and to adopt good cyber security habits. From a business perspective, if individuals, businesses and governments demand security, then service providers will have more incentive to meet that demand.

One awareness campaign about cyber threats from an insurance provider shows how real-world hacker techniques work. Standard-setting bodies may help consumers cut through technical and complex issues, and guidelines such as the NIST Cybersecurity Framework help businesses improve their cybersecurity.

AI and cybersecurity communities can work together

Many existing cybersecurity guidelines and principles, such as the Tallinn Manual, focus on human-led cyber operations; security experts should expand their focus to the implications of AI for cybersecurity. The AI community, for its part, should prioritize securing AI models and adopt the cybersecurity community’s best practices, including red teaming, formal verification and bug bounty programs. How AI will affect the attacker-defender balance remains a key question.

Both communities should take the dual-use nature of their work seriously, which includes thinking about how to responsibly disclose and publish research. Members of the Association for Computing Machinery (ACM) have proposed leveraging peer review as a gatekeeping function to compel researchers to consider the negative implications of their work.

Policymakers should focus on prevention

While it is important to build systems that are resilient to cyber attacks, it is also important to prevent attacks in the first place. This means setting boundaries on actors’ behaviour. The question of how international law applies to cyberspace remains contentious, and there is a need for more multilateral conversations on cybersecurity, such as the UN Group of Governmental Experts (GGE) on cybersecurity. Overall, policymakers should collaborate closely with technical experts to develop cybersecurity solutions informed by the technical realities at hand.

There will always be threats

As a ‘general purpose technology’, AI offers paradigm-shifting opportunities to boost productivity and innovation globally. However, as a dual-use technology it carries important safety and security risks that must be managed, including attacks on AI systems. Mitigating these risks requires governance by a wide range of actors across sectors, intervening both pre- and post-deployment: businesses, policymakers, non-profits, academia, civil society and individuals. Education and awareness, incentives and mechanisms that encourage safe AI, and multi-stakeholder partnerships and collaborations can help turn recommendations and ideas into action.

Learn more about The Future Society’s work on ‘The Intersection and Governance of Artificial Intelligence and Cybersecurity’.




Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.