Article | 12 March 2026

AI Weapons and Global Security: Rethinking Non-Proliferation in the Age of Autonomous Systems


Beatrice Nicolini


Artificial Intelligence and the Limits of the Existing Non-Proliferation Framework

The study examines the implications of artificial intelligence for international security and the global non-proliferation regime, with particular attention to the limitations of United Nations Security Council Resolution 1540 in addressing emerging AI-enabled weapons. Adopted unanimously in 2004, Resolution 1540 established a binding international framework designed to prevent non-state actors from acquiring nuclear, chemical, and biological weapons and their means of delivery. It obliges states to adopt domestic legislation, enforce export controls, secure sensitive materials, and cooperate internationally in order to limit the proliferation of weapons of mass destruction. However, the resolution was drafted at a time when artificial intelligence, autonomous systems, and digital warfare technologies were not yet central to global security concerns. As a consequence, its language reflects the technological assumptions of the early 2000s and does not account for the strategic challenges posed by contemporary AI-enabled military systems.

Over the past decade, artificial intelligence has evolved from a relatively specialised technological field into a crucial component of modern military infrastructures. AI applications now support intelligence analysis, surveillance operations, target identification, logistics management, cyber operations, and autonomous platforms such as drones. Although many AI technologies originate in civilian or commercial sectors, their dual-use nature enables them to be rapidly adapted for military purposes. This dual-use character complicates traditional arms-control approaches because digital technologies can be transferred, copied, or modified far more easily than physical weapons or materials associated with nuclear, chemical, or biological arsenals. Unlike fissile materials or toxic agents, AI systems cannot be secured through conventional methods of storage or inspection. Algorithms, datasets, and software architectures can circulate globally through digital networks, making monitoring and regulation significantly more difficult.

The study argues that AI-enabled weapons represent a new domain of proliferation risk that is largely unaddressed by the existing non-proliferation architecture. These systems do not fall neatly within the traditional categories of weapons of mass destruction. Nevertheless, they can amplify the effectiveness of military operations by improving targeting accuracy, accelerating decision-making processes, and coordinating complex battlefield activities. In this sense, AI-enabled systems may have destabilising effects comparable to those associated with more traditional strategic weapons. The growing availability of autonomous drones, AI-supported surveillance technologies, and algorithmic targeting systems demonstrates that such technologies are no longer theoretical possibilities but are already being deployed in contemporary conflicts.

One of the central challenges discussed in the study concerns the increasing accessibility of AI technologies to non-state actors. Resolution 1540 was designed primarily to prevent terrorist groups and other non-state entities from acquiring nuclear, chemical, or biological weapons. However, the technological barriers associated with these weapons remain extremely high, requiring specialised materials, complex industrial infrastructures, and significant financial resources. AI-enabled capabilities, by contrast, can often be developed using commercially available technologies. Consumer drones, open-source machine-learning models, cloud-computing platforms, and publicly accessible datasets can all be repurposed for military or violent purposes. As a result, non-state actors may acquire AI-supported operational capabilities without the need for state-level technological infrastructures.


The Libyan Conflict as an Empirical Illustration of Emerging Technological Warfare

The transformation of warfare through digital technologies is illustrated by the case of the Libyan conflict. By 2020, the civil war in Libya had become one of the most internationally influenced conflicts of the decade, characterised by extensive foreign intervention and the widespread use of advanced military technologies. Two main political and military blocs emerged during this phase of the conflict. On one side stood the internationally recognised Government of National Accord, based in Tripoli; on the other was the Libyan National Army led by Khalifa Haftar, which controlled large parts of eastern and central Libya.

The confrontation between these factions was intensified by the involvement of external powers that supplied military equipment, advisors, and personnel. Armed drones played a particularly important role in shaping the dynamics of the conflict. Turkish unmanned aerial vehicles provided significant support to the Government of National Accord, while the United Arab Emirates supplied comparable systems to Haftar’s forces. These technologies altered the balance of power on the battlefield by enabling remote targeting and sustained aerial surveillance. At the same time, the conflict demonstrated the limits of international mechanisms designed to regulate arms transfers. Although Libya has been subject to a United Nations arms embargo since 2011, multiple states repeatedly violated these restrictions by supplying weapons and military support to their preferred factions.

The European Union launched Operation Irini in 2020 to monitor maritime arms transfers, but the operation lacked the mandate and resources to effectively monitor land and air supply routes, which were the primary channels through which military equipment entered the country. The Libyan case also reveals the long-term consequences of earlier international decisions related to non-proliferation. In December 2003 Libya announced the renunciation of its programmes for weapons of mass destruction and sought reintegration into the international community. This decision was welcomed by the United Nations Security Council and followed by the gradual removal of certain sanctions, including restrictions on conventional arms transfers. The lifting of these restrictions allowed the Libyan government to rebuild its conventional military arsenals between 2004 and 2011.

After the collapse of the Qaddafi regime, many of these weapons circulated among militias and armed groups, contributing to widespread militarisation across the country and the broader region. Although Resolution 1540, adopted in 2004, addressed only weapons of mass destruction and not conventional weapons, the subsequent flow of military equipment illustrates how regulatory frameworks can produce unintended long-term consequences.


Conceptual and Regulatory Challenges of AI-Enabled Weapons

Beyond the empirical case study, the research develops a conceptual framework for understanding AI-enabled weapons within the context of international security governance. Artificial intelligence is defined broadly as computational systems capable of performing tasks that normally require human cognitive abilities, including pattern recognition, predictive analysis, decision-making, and data interpretation. Within military environments, AI technologies include machine-learning algorithms, computer-vision systems, natural-language-processing tools, data-fusion platforms, and autonomous decision-support systems.

These technologies can enhance operational efficiency, improve situational awareness, and accelerate the processing of large volumes of data. An important conceptual distinction is drawn between automated systems and autonomous systems. Automated systems follow predetermined rules and execute tasks according to fixed programming instructions. For example, a missile guided by GPS coordinates without the ability to adjust its behaviour would fall into this category. Autonomous systems, by contrast, employ artificial intelligence to perceive their environment and adapt their behaviour accordingly. Loitering munitions that use onboard AI to identify and engage targets represent one example of this type of system. This distinction is crucial because most existing arms-control frameworks were designed to regulate conventional automated weapons and do not adequately address systems capable of independent decision-making.
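To make this distinction concrete, consider the following minimal sketch. It is purely illustrative: the class names, the waypoint representation, and the obstacle-handling behaviour are assumptions introduced here for exposition, not systems described in the study, and a deliberately non-weaponised navigation example is used.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Waypoint:
    lat: float
    lon: float


class AutomatedNavigator:
    """Automated system: executes a fixed, predetermined plan.

    Sensor input is ignored, much as a munition guided purely by preset
    GPS coordinates cannot adjust its behaviour in flight."""

    def __init__(self, route: List[Waypoint]):
        self.route = route

    def next_step(self, sensor_data: dict) -> Waypoint:
        # Behaviour is fully determined by the programmed route.
        return self.route.pop(0)


class AutonomousNavigator:
    """Autonomous system: perceives its environment and adapts.

    The action taken depends on what the system senses at run time,
    analogous to a platform that chooses its own course of action."""

    def __init__(self, goal: Waypoint):
        self.goal = goal

    def next_step(self, sensor_data: dict) -> Waypoint:
        # Perception changes the plan: a detected obstacle triggers a detour.
        if sensor_data.get("obstacle_ahead"):
            return Waypoint(self.goal.lat + 0.01, self.goal.lon)
        return self.goal


# The automated system returns the same step regardless of sensing;
# the autonomous one does not.
fixed = AutomatedNavigator([Waypoint(32.9, 13.2)])
adaptive = AutonomousNavigator(Waypoint(32.9, 13.2))
print(fixed.next_step({"obstacle_ahead": True}))     # plan unchanged
print(adaptive.next_step({"obstacle_ahead": True}))  # plan revised
```

The regulatory significance lies in the second pattern: engagement-relevant choices are made at run time by the system itself, which is precisely the behaviour that frameworks built around fixed-function weapons struggle to capture.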

The dual-use nature of artificial intelligence presents an additional challenge for regulation. Many of the technological components required for advanced AI applications—such as graphics processing units, cloud-computing infrastructure, and widely used programming libraries—are essential to both civilian and military innovation. Artificial intelligence research is driven largely by private companies and academic institutions rather than government defence agencies. As a result, technological developments that have potential military applications often originate outside traditional regulatory structures. This creates difficulties for export-control regimes, which were designed primarily to monitor the transfer of tangible goods rather than digital software architectures or algorithmic models. In order to analyse the risks associated with AI-enabled weapons, the study identifies several categories of indicators.

Proliferation indicators measure the likelihood that a technology will diffuse beyond state control, taking into account factors such as the availability of open-source code, the accessibility of cloud-based computing services, and the declining cost of commercial drones. Autonomy indicators evaluate the degree to which artificial intelligence contributes to a weapon system’s operation, including its ability to perceive environmental conditions, classify targets, and execute engagement decisions. Supply-chain indicators examine the digital infrastructures that support AI development, including commercial hardware, software libraries, and data sources. Human-oversight indicators assess the extent to which human operators remain involved in operational decision-making.

Systems operating without human supervision pose significantly greater risks in terms of accountability and legal responsibility. Finally, vulnerability indicators consider the possibility that AI systems may produce harmful outcomes due to algorithmic errors, cyber manipulation, or unintended operational consequences.
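One hypothetical way to operationalise these five indicator categories is as a simple scoring structure. The sketch below is an editorial construction: the 0-to-1 scales, the field names, and the unweighted average are assumptions made for illustration, not a scoring method proposed by the study.

```python
from dataclasses import dataclass


@dataclass
class AIWeaponRiskProfile:
    proliferation: float    # ease of diffusion beyond state control (0 = low, 1 = high)
    autonomy: float         # degree of AI involvement in perception and engagement
    supply_chain: float     # reliance on commercial hardware, libraries, and data
    human_oversight: float  # 0 = continuous human control, 1 = no human supervision
    vulnerability: float    # exposure to algorithmic error or cyber manipulation

    def aggregate_risk(self) -> float:
        """Naive unweighted mean of the five indicator scores."""
        scores = (self.proliferation, self.autonomy, self.supply_chain,
                  self.human_oversight, self.vulnerability)
        return sum(scores) / len(scores)


# Example: a commercially derived loitering munition with onboard target
# recognition and limited operator involvement would score high overall.
profile = AIWeaponRiskProfile(proliferation=0.8, autonomy=0.9,
                              supply_chain=0.7, human_oversight=0.9,
                              vulnerability=0.6)
print(f"aggregate risk: {profile.aggregate_risk():.2f}")  # -> 0.78
```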

Methodologically, the study adopts a qualitative approach that combines doctrinal legal analysis, comparative analysis of regulatory frameworks, and case-study investigation. The legal analysis examines the wording of Resolution 1540 and related United Nations documents in order to determine whether AI-enabled weapons could be interpreted as falling within its existing provisions. Comparative analysis explores how other regulatory frameworks, including NATO strategies, international humanitarian law, and export-control regimes, address emerging technologies. The Libyan conflict provides an empirical example illustrating how advanced technologies influence real-world conflicts and expose regulatory gaps.

The study concludes that Resolution 1540 is structurally ill-equipped to address the challenges posed by artificial intelligence. Because the resolution focuses specifically on nuclear, chemical, and biological threats, it does not account for the proliferation of digital technologies, autonomous systems, or algorithm-driven weapons.

At the same time, AI-enabled systems are likely to become increasingly important in future conflicts as technological innovation accelerates and the costs of advanced computing continue to decline. In light of these developments, the research suggests that international law must adapt to new technological realities. Future non-proliferation frameworks will need to address the regulation of intangible digital capabilities, the monitoring of software supply chains, and the involvement of private-sector actors in technological development. Without such reforms, the proliferation of AI-enabled weapons may continue largely unchecked, creating new forms of instability and undermining the preventive logic of the existing global non-proliferation regime.


Structural Limits of UNSCR 1540 in the Context of Artificial Intelligence

United Nations Security Council Resolution 1540, adopted in 2004, represents one of the most important instruments in the global architecture of non-proliferation. It was designed to address the risk that non-state actors, particularly terrorist organisations, might acquire nuclear, chemical, or biological weapons. The Resolution established three principal obligations for states: refraining from assisting non-state actors in obtaining weapons of mass destruction, adopting domestic legislation criminalising such proliferation, and implementing regulatory and security measures to control materials, equipment, and technologies associated with the development or delivery of these weapons. The political and security environment in which the Resolution emerged was shaped by the post-9/11 context and by growing concern over clandestine proliferation networks capable of trafficking nuclear materials or other dangerous substances. Consequently, the Resolution was constructed around the regulation of physical objects and traceable materials, such as fissile elements, toxic chemicals, specialised industrial equipment, and delivery systems including missiles and dispersal devices. This material focus reflects the technological realities of the early twenty-first century.

At that time, artificial intelligence was not considered a significant element of international security debates, and digital technologies did not yet occupy the central role in military operations that they hold today. The conceptual architecture of the Resolution therefore assumes that proliferation risks are primarily tangible and detectable. It refers to materials, components, equipment, and delivery mechanisms that can be monitored through export controls, customs inspections, intelligence surveillance, and physical accounting procedures. Such an approach presupposes that proliferation involves the transfer of identifiable objects moving through traceable supply chains.

The emergence of AI-enabled weapons fundamentally challenges these assumptions. Artificial intelligence is a largely intangible technology composed of algorithms, software architectures, training datasets, and computational infrastructures. Unlike nuclear materials or chemical precursors, AI systems can be reproduced instantly, transmitted digitally across borders, and integrated into a wide range of technological platforms. These characteristics blur the traditional distinction between material and immaterial capabilities. While UNSCR 1540 regulates the physical means of producing or delivering weapons of mass destruction, it does not address the digital technologies that increasingly shape modern military capabilities.

The Resolution contains no references to artificial intelligence, machine-learning systems, autonomous platforms, cyber capabilities, digital supply chains, or the civilian innovation ecosystems in which most AI technologies are developed. Its conceptual vocabulary reflects a security paradigm centred on weapons that can be physically detected and controlled. In contrast, AI-enabled weapons operate through code and data rather than through specialised materials. As a result, the Resolution’s existing categories cannot easily accommodate them. Attempts to interpret artificial intelligence as implicitly covered by the Resolution encounter significant legal and conceptual difficulties. Some commentators have suggested that AI could fall within the category of “means of delivery” mentioned in the Resolution.

However, artificial intelligence does not constitute a delivery system in itself. Instead, it functions as an enabling technology that enhances the performance of delivery systems by improving targeting accuracy, navigation, surveillance, or decision-making processes. Similarly, efforts to classify AI under the category of “related materials” are problematic because this term has historically referred to items such as chemical precursors, biological agents, or specialised equipment necessary for weapons production. Artificial intelligence does not fit within these definitions. The intangible nature of AI also creates practical regulatory difficulties. Traditional non-proliferation mechanisms rely on monitoring physical objects through export-control regimes, customs inspections, satellite imagery, or laboratory oversight. AI-enabled systems, by contrast, exist largely within digital environments that can be transferred across borders without triggering these detection mechanisms. Software libraries, open-source machine-learning frameworks, and cloud-based computing resources can circulate globally through ordinary digital communication channels.

Even if the Resolution were interpreted broadly, it would lack the operational tools necessary to regulate such technological diffusion. In this sense, the rise of artificial intelligence exposes a structural gap between the existing non-proliferation regime and the technological realities of contemporary warfare. While UNSCR 1540 remains effective in regulating traditional weapons of mass destruction, it was not designed to address digital technologies capable of enhancing or transforming military capabilities. This tension between legal frameworks and technological developments forms the central analytical problem addressed by the study.


AI-Enabled Weapons in Contemporary Conflicts: Lessons from Libya and Cyber Warfare

The practical implications of these regulatory gaps become evident when examining how artificial intelligence is already being integrated into contemporary conflicts. One of the most widely discussed examples concerns the Libyan civil war. During the intensification of hostilities in 2020, various external actors supplied advanced military technologies to the opposing factions involved in the conflict. Among these technologies were unmanned aerial systems equipped with increasingly sophisticated targeting capabilities. A particularly notable incident involved the Turkish-produced Kargu-2 loitering munition. According to a report by a United Nations Panel of Experts released in 2021, this drone may have autonomously identified and engaged targets during combat operations.

Although the precise level of autonomy involved remains contested, the incident is frequently cited as one of the earliest documented cases in which a lethal autonomous system may have operated without direct human control. The drone used machine-learning-based image recognition to detect potential targets and employed onboard sensors to guide its engagement process. Its operational behaviour was described as resembling a “hunt and kill” function, indicating a degree of autonomous target selection. The significance of this episode lies not only in the technology itself but also in the broader circumstances surrounding its deployment. The system was reportedly used by Libyan militias, that is, by non-state actors, after being supplied through state-level channels. This sequence illustrates how advanced military technologies can move from state control into the hands of irregular armed groups. The event also occurred outside the regulatory framework established by UNSCR 1540, demonstrating how AI-enabled weapons may operate beyond existing non-proliferation mechanisms.

Beyond autonomous drones, artificial intelligence is also becoming increasingly integrated into cyber warfare. AI-driven cyber tools can automate the identification of software vulnerabilities, facilitate the propagation of malware across digital networks, and conduct rapid reconnaissance of targeted infrastructures. These capabilities enable attackers to analyse large datasets, identify weaknesses in security systems, and launch adaptive attacks that evolve in response to defensive countermeasures. Artificial intelligence can therefore significantly accelerate the speed and scale of cyber operations. Such technologies are particularly difficult to regulate because they are inherently dual-use. Many AI-driven cyber tools rely on commercially available software libraries and widely accessible computing resources. These tools may be used legitimately for cybersecurity research, data analysis, or software development, but they can also be repurposed for offensive operations. Within the framework of UNSCR 1540, which focuses on weapons of mass destruction and their delivery systems, such technologies fall almost entirely outside regulatory oversight.

The Libyan case and the broader emergence of AI-driven cyber capabilities reveal several important trends. First, artificial intelligence is already being deployed in real conflict environments rather than remaining confined to experimental research. Second, increasing autonomy in weapons systems reduces the level of human expertise required to conduct complex military operations. Third, private-sector companies play a crucial role in developing and integrating AI technologies into military systems. Fourth, commercially available drones and other digital technologies can be adapted for use by non-state armed groups. Finally, AI complicates traditional mechanisms of accountability because autonomous systems blur the chain of responsibility between human operators, software developers, and military commanders. Together, these developments highlight the growing urgency of developing regulatory frameworks capable of addressing AI-enabled weapons. Without such frameworks, the diffusion of these technologies may continue to outpace the ability of international law to govern their use.


Governance Challenges and Policy Options for Regulating AI-Enabled Weapons

Given the limitations of existing legal frameworks, the study explores how other regulatory regimes address emerging technologies and what lessons they may offer for updating international governance mechanisms. Several existing arrangements provide partial models, although none fully address the challenges posed by artificial intelligence. The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies has attempted to regulate certain digital technologies, including intrusion software and surveillance systems. However, the Arrangement focuses primarily on hardware and specialised equipment rather than on software algorithms themselves. It also does not effectively regulate open-source AI models or cloud-based computational infrastructures. As a result, it illustrates the limitations of traditional export-control models when applied to rapidly diffusing digital technologies.

Similarly, the Missile Technology Control Regime regulates systems capable of delivering weapons of mass destruction. While certain AI-enhanced navigation systems might indirectly fall within its scope, the regime does not explicitly address artificial intelligence. Highly mobile and commercially available systems such as drones often evade the control mechanisms designed for large and specialised missile technologies. The European Union’s Dual-Use Regulation represents one of the most advanced efforts to govern intangible technologies. It includes provisions covering certain surveillance tools and cyber capabilities. Nevertheless, even this relatively sophisticated framework faces limitations. It cannot effectively regulate open-source AI models trained on publicly available datasets, and its implementation varies across different member states. International humanitarian law also offers relevant principles, including the requirements of distinction, proportionality, and precaution in the conduct of hostilities. These principles apply to all weapons systems, including those incorporating artificial intelligence.

However, humanitarian law primarily governs the use of weapons during armed conflict rather than their proliferation. It therefore complements but does not replace non-proliferation frameworks. Recognising these limitations, the study outlines two possible policy pathways. The first involves updating UNSCR 1540 to incorporate artificial intelligence explicitly within its regulatory scope. Such an amendment would require redefining the categories used in the Resolution to include AI-enabled weapons, machine-learning systems, and cyber-AI tools. It would also require expanding national reporting obligations and extending the mandate of the 1540 Committee to address digital technologies. This approach would benefit from building upon an existing and widely recognised framework, but it might encounter political resistance from technologically advanced states.

The second option involves drafting an entirely new United Nations resolution dedicated specifically to emerging technologies. Such an instrument could define AI-enabled weapons, establish rules governing dual-use software and algorithms, regulate access to cloud-based training resources, and create mechanisms for oversight of private-sector AI development. Although more ambitious, this approach would allow international law to address contemporary technological realities without stretching the conceptual structure of UNSCR 1540 beyond its intended purpose. In addition to legal reforms, effective regulation will require cooperation with private technology companies, which play a central role in AI innovation. Mechanisms such as transparency agreements, mandatory risk assessments, and oversight of high-risk AI models could form part of a broader governance framework.

Monitoring AI proliferation will also require new verification tools, including software audits, digital supply-chain tracking, and oversight of access to advanced computing infrastructures; a minimal sketch of what such an audit might look like appears at the end of this section.

Ultimately, the study concludes that the existing non-proliferation regime no longer fully corresponds to the technological environment in which contemporary warfare operates. AI-enabled weapons derive their capabilities from code, data, and computational infrastructures rather than from physical materials that can be easily monitored. Without new regulatory mechanisms capable of addressing these digital realities, the proliferation of autonomous weapons and AI-driven military systems may continue to expand beyond the reach of existing international law.
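As a closing illustration of the kind of software audit and digital supply-chain tracking mentioned above, the sketch below verifies a distributed model artifact against a registry of trusted cryptographic fingerprints. Every element, from the registry format to the file name, is a hypothetical assumption rather than an existing mechanism.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of trusted fingerprints, as might be published
# by an oversight body; the entry below is a placeholder, not real data.
TRUSTED_REGISTRY = {
    "vision-model-v1.bin": "0" * 64,  # expected SHA-256 digest (placeholder)
}


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a model artifact, streamed in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def audit(artifact: Path) -> bool:
    """Return True only if the artifact matches its registered fingerprint."""
    expected = TRUSTED_REGISTRY.get(artifact.name)
    return expected is not None and sha256_of(artifact) == expected


# Usage: audit(Path("vision-model-v1.bin")) flags any artifact that has
# been modified, substituted, or was never registered in the first place.
```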


Beatrice Nicolini is a full professor of African history. She teaches African History and Institutions; Religions, Conflicts, and Slavery; and The Indian Ocean World at Università Cattolica del Sacro Cuore.
