Brain-computer interfaces (BCIs) are no longer speculative technologies of future warfare – they are being field-tested by countries such as the United States and China. As BCI technologies transition from the laboratory to the battlefield, they bring both significant risks and potential advantages for future warfare.
In this post, Dr. Anna M. Gielas, an affiliated researcher with the Centre for Global Knowledge Studies at the University of Cambridge, explores how BCI may challenge international humanitarian law (IHL) and international human rights law, requiring closer scrutiny and deeper debate on the development of national and international BCI regulations.
As brain-computer interface (BCI) technologies advance rapidly, their integration into military systems is no longer speculative but an impending reality. These devices, which translate neural signals into digital commands, enable users to control drones and other military platforms through thought – offering unprecedented cognitive integration on the battlefield. BCIs can be worn on the head, for instance as helmets or caps, or implanted in the brain. Integrating human brains with military technologies – including weapon systems – positions BCIs as a novel means of warfare and warrants closer scrutiny under international humanitarian law and international human rights law.
In the United States, military investment in BCIs has accelerated as part of a broader push to maintain technological superiority over perceived geopolitical competitors. This effort extends beyond any single military branch or program – interest in neurotechnologies is present across the Department of Defense, with BCIs increasingly featured in research and development solicitations, defense innovation initiatives, and AI-human teaming strategies. China has similarly prioritized BCIs as a critical technology, integrating civilian research, military applications, and state-backed development under a unified national strategy. In March 2025, the Chinese National Healthcare Security Administration (NHSA) added new categories to its guideline for neural care services – including fees for the implantation and removal of invasive BCIs. While the measure is aimed at accelerating clinical adoption, it also reflects Beijing’s broader effort to normalize BCI technology across society. Analysts suggest that steps like this are likely to generate civilian–military spillovers, further blurring the line between medical innovation and strategic capability.
The convergence of neuroscience and warfare in China and the United States reflects a new kind of arms race – one focused not only on weapon systems but on the human brain itself. This intensifying competition has spurred interest among other technologically advanced militaries. Seeking to avoid strategic disadvantage, countries such as Israel and Russia are also developing BCI technologies for military purposes.
The private sector plays a central role in BCI development. Companies like Neuralink and Synchron have achieved major breakthroughs in BCI applications, underscoring how rapidly the field is evolving. In 2024, Neuralink’s implantation of its “Telepathy” device in human participants – enabling control of digital systems through thought – marked a milestone. The company, valued at nine billion dollars, is scaling up its efforts this year. Synchron, offering a less invasive, vascular-based approach, has demonstrated real-time BCI integration with platforms like ChatGPT, Apple’s Vision Pro, and Amazon Alexa. This compatibility with widely adopted consumer devices lowers integration barriers, suggesting a potentially shorter path to deployment in military settings, where leveraging existing digital ecosystems can, for example, simplify application and reduce training needs. As corporate investment surges, the risk grows that battlefield neurotechnologies will evolve without sufficient public oversight, regulatory clarity, and ethical guardrails.
BCIs detect signals from the brain, usually through non-invasive sensors placed on the scalp or through electrodes implanted on the brain surface or deeper in the brain tissue. Once brain activity is recorded, algorithms analyze the patterns and translate them into specific commands, such as moving a drone upwards. Precise commands become increasingly possible for two main reasons: advances in understanding the human brain – for example, how activity in different brain areas relates to specific intentions – and advances in machine-learning techniques that can decode ever-finer patterns of neural activity.
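To make this pipeline concrete, the following is a minimal, purely illustrative Python sketch of how a non-invasive BCI might decode recorded brain activity into discrete platform commands. The command set, the simulated signals, and the simple power-based features are assumptions for the sake of illustration; operational systems rely on far more sophisticated signal processing and on decoding models validated for each user.

```python
# Minimal sketch of a BCI decoding pipeline (hypothetical data and command set).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
COMMANDS = ["ascend", "descend", "hold"]  # illustrative drone commands

def extract_features(epoch: np.ndarray) -> np.ndarray:
    """Reduce one multi-channel EEG epoch (channels x samples) to a feature
    vector; here simply log signal power per channel (band power in practice)."""
    return np.log(np.mean(epoch ** 2, axis=1))

# Simulated training data: 300 epochs of 8-channel EEG (250 samples each),
# standing in for recordings labelled with the user's intended command.
epochs = rng.normal(size=(300, 8, 250))
labels = rng.integers(0, len(COMMANDS), size=300)

X = np.array([extract_features(e) for e in epochs])
decoder = LogisticRegression(max_iter=1000).fit(X, labels)

# At run time, each newly recorded epoch is decoded into a platform command.
new_epoch = rng.normal(size=(8, 250))
command = COMMANDS[decoder.predict(extract_features(new_epoch)[None, :])[0]]
print("decoded command:", command)
```

On random data the decoder is of course meaningless; the point is only to show the structure of the brain-to-command chain to which the accountability questions discussed below attach.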
Improving survivability in warfare often comes down to speed: specifically, how quickly a combatant can perceive a threat, process it, and act. In conventional operations, this chain is limited by the lag between brain activity and physical execution, such as reaching for a weapon, shouting a warning, or operating a control interface. BCIs enable direct brain-to-system communication, potentially shaving critical milliseconds, or even seconds, off response times. For example, a BCI-based system could detect a soldier’s intent to maneuver before the action is physically executed, enabling automated evasion systems to initiate lifesaving measures more rapidly.
Potential to subvert and to strengthen IHL
A foundational challenge with military BCIs is accountability and the attribution of responsibility – two central elements of the IHL framework. Conventional weapon systems, such as missiles, already embed complex software. However, BCIs add an additional layer – the interpretive neural decoder – which amplifies uncertainty about the individual’s intent. With BCIs, the process involves translating sometimes ambiguous neural activity into commands, raising questions about how reliably the machine interprets those signals and how much control the human truly has. This makes it harder to pinpoint whether an allegedly accidental killing results from user intention, device malfunction, or flaws in the decoding algorithms, complicating the chain of accountability in ways traditional weapon systems do not. Similarly, legal scholars raise the question of accountability for subconscious acts, since a BCI system could react to brain activity of which the soldier is not aware. Such potential blurring of agency may not only challenge existing legal doctrines of intent and culpability but also require a rethinking of individual and state responsibility under IHL.
Bidirectional BCIs, another type of BCI technology, raise additional challenges. A bidirectional BCI not only translates neural activity into digital commands for technological devices; it can also translate digital information into neural signals. A BCI-based arm prosthetic offers an illustrative example: the individual thinks about a specific movement and the BCI system translates it into the corresponding command for the smart arm prosthetic. The bidirectional BCI also enables the prosthetic to send sensory feedback to the brain. By stimulating the somatosensory brain area in specific ways, it conveys to the individual details such as the grip strength of the prosthetic arm. Theoretically, bidirectional BCIs could cognitively enhance healthy human beings.
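The loop described above can be sketched schematically. In the illustrative Python snippet below, all signal values, mappings, and stimulation parameters are hypothetical; it simply shows the two directions of a bidirectional BCI, decoding motor intent into a prosthetic grip command and encoding the measured grip force back into a stimulation pattern for the somatosensory cortex.

```python
# Toy sketch of a bidirectional BCI loop for a prosthetic hand. All values,
# mappings, and stimulation parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def decode_grip_intent(motor_epoch: np.ndarray) -> float:
    """Map motor-cortex activity to a desired grip force in [0, 1].
    Stand-in for a trained regression model."""
    return float(np.clip(np.tanh(np.mean(motor_epoch)), 0.0, 1.0))

def encode_sensory_feedback(grip_force: float, n_electrodes: int = 4) -> np.ndarray:
    """Convert the measured grip force into per-electrode stimulation amplitudes
    (microamps), so the user 'feels' how hard the prosthetic is gripping."""
    max_amplitude_ua = 60.0  # assumed upper bound for illustration only
    return np.full(n_electrodes, grip_force * max_amplitude_ua)

for step in range(3):  # three cycles of the brain-machine-brain loop
    motor_epoch = rng.normal(loc=0.2, size=(96, 30))       # simulated neural features
    desired_force = decode_grip_intent(motor_epoch)         # brain -> machine
    measured_force = desired_force * 0.9                    # prosthetic sensor reading
    stimulation = encode_sensory_feedback(measured_force)   # machine -> brain
    print(f"step {step}: command {desired_force:.2f}, stim {stimulation.round(1)}")
```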
Noam Lubell and Katya Al-Khateeb note that through “the bidirectional flow of data between the brain and a computer system, BCIs can effectively integrate AI and human capacities.” The integration of artificial intelligence (AI) in a BCI would enable the analysis of large volumes of data – in the best-case scenario vastly improving a soldier’s situational awareness, thereby enabling more discriminate targeting and improved protection of civilians. At the same time, bidirectional BCIs may blur the legal protections of soldiers. As these neurotechnologies become increasingly able to enhance human cognition and performance, surgically implanted BCIs risk recasting combatants as components of weapon systems, as weapons in their own right or, even more ethically troubling, as biological “platforms” for warfare. Should military BCI applications outpace legal safeguards, the protections enshrined in the Geneva Conventions and fundamental human rights may not be guaranteed for enhanced military personnel.
Alongside these potential risks, there may also be benefits. Passive BCIs – devices that monitor brain activity without requiring user input – offer one promising avenue. By tracking cognitive states like attention, stress, or fatigue in military personnel, these systems could support real-time adjustments to workload and pacing in combat environments. In high-stakes scenarios, passive BCIs could identify mental overload in one soldier and redirect decision-making to another soldier who is exhibiting less stress, thereby potentially enhancing the reliability of human judgment. Such mechanisms could help safeguard the principles of distinction and proportionality not by automating ethical decisions, but by reinforcing the conditions necessary for humans to make lawful ones, even under combat duress.
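As a rough illustration of the idea, the Python sketch below computes a toy workload index per operator and routes the next decision to the least-loaded one. The index, the threshold, and the simulated signals are assumptions for illustration; real passive BCIs rely on validated neurophysiological markers and per-user calibration.

```python
# Illustrative sketch of passive-BCI-informed task allocation: a hypothetical
# workload index decides which operator receives the next decision task.
import numpy as np

rng = np.random.default_rng(2)

def workload_index(eeg_epoch: np.ndarray) -> float:
    """Toy cognitive-load estimate: ratio of high- to low-frequency power.
    Real systems use validated markers such as theta/alpha band ratios."""
    spectrum = np.abs(np.fft.rfft(eeg_epoch.mean(axis=0))) ** 2
    return float(spectrum[40:].sum() / (spectrum[:40].sum() + 1e-9))

OVERLOAD_THRESHOLD = 1.5  # assumed calibration value

# Simulated 8-channel EEG epochs for three operators.
operators = {name: rng.normal(size=(8, 256)) for name in ["A", "B", "C"]}
loads = {name: workload_index(epoch) for name, epoch in operators.items()}

# Route the next decision to the least-loaded operator, and flag overload.
assignee = min(loads, key=loads.get)
for name, load in loads.items():
    status = "OVERLOADED" if load > OVERLOAD_THRESHOLD else "ok"
    print(f"operator {name}: load {load:.2f} ({status})")
print("next decision routed to operator", assignee)
```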
Passive BCIs may also help address the challenge of accountability in unidirectional BCIs. By continuously tracking neural states such as stress and cognitive load, passive BCIs could document a soldier’s mental state and decision-making capacity at critical moments. Such data could provide some evidence to determine whether actions were deliberate, informed, and voluntary – key criteria for accountability under IHL. In cases of disputed incidents, neural data logs could help distinguish genuine human errors or accidents from intentional misconduct, shedding light on individual responsibility and supporting transparent, accountable military operations.
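One way to picture such documentation is a simple time-stamped record of estimated neural state alongside the action taken, as in the hypothetical sketch below. The field names and indices are illustrative; no standard for such accountability logs currently exists, and any real deployment would require integrity protection and strict access controls.

```python
# Sketch of a time-stamped neural-state record for after-action review.
# Field names and index values are assumptions for illustration.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class NeuralStateRecord:
    timestamp: str          # UTC time of the decision or action
    operator_id: str        # pseudonymized identifier
    stress_index: float     # normalized 0-1 estimate from the passive BCI
    cognitive_load: float   # normalized 0-1 estimate from the passive BCI
    action_taken: str       # the command issued at that moment

def log_record(record: NeuralStateRecord, path: str = "neural_log.jsonl") -> None:
    """Append one record as a JSON line; a real log would also need
    tamper protection and tightly controlled access."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_record(NeuralStateRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    operator_id="op-042",
    stress_index=0.31,
    cognitive_load=0.58,
    action_taken="engage_target_declined",
))
```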
While passive BCIs hold promise for enhancing accountability in warfare, their use also raises significant privacy and human rights concerns for military personnel. Continuous monitoring of brain activity could lead to unprecedented intrusion into soldiers’ private mental states, emotions, and intentions – domains traditionally protected by rights to privacy and bodily integrity. The routine recording, analysis, and storage of neural data could open the door to misuse, including unauthorized surveillance and coercion. These challenges demand nuanced consent procedures as well as strong and functioning legal safeguards, balancing operational needs with essential human rights protection.
Emerging research suggests that BCIs could offer new ways to support the alignment of autonomous systems with human intent. The human brain generates immediate, often unconscious responses to social norm violations that can be measured. It also produces rapid responses to errors and unexpected outcomes. For example, observing a system’s mistake can elicit a so-called error-related potential. Experimental BCI studies have harnessed such signals to guide or correct the behavior of AI and robotic systems through reinforcement learning. While these methods are currently at an early stage, they point to the potential of BCIs as an additional feedback channel for shaping autonomous behavior in sensitive contexts. In the best case, such neurotechnology could help align military systems more closely with social and moral norms.
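The underlying idea can be illustrated with a toy reinforcement-learning loop in which a simulated error-related potential detector supplies negative feedback whenever the system takes an action the observer considers wrong. Everything in the Python sketch below (the detector accuracy, the bandit-style update, the action set) is an assumption for illustration rather than a description of any fielded system.

```python
# Minimal sketch of ErrP-based reinforcement: a simulated error-related
# potential detector provides negative feedback, steering a simple bandit
# agent toward the action the (hypothetical) human observer approves of.
import numpy as np

rng = np.random.default_rng(3)

N_ACTIONS = 3
CORRECT_ACTION = 1                      # the action the observer approves of
values = np.zeros(N_ACTIONS)            # running value estimate per action
alpha, epsilon = 0.2, 0.1               # learning rate, exploration rate

def errp_detected(action: int) -> bool:
    """Stand-in for an EEG classifier: detects an error-related potential
    (with some noise) when the observed action looks wrong to the user."""
    is_error = action != CORRECT_ACTION
    return rng.random() < (0.85 if is_error else 0.1)  # assumed detector accuracy

for trial in range(200):
    # epsilon-greedy action selection
    action = int(rng.integers(N_ACTIONS)) if rng.random() < epsilon else int(values.argmax())
    # the ErrP acts as an implicit reward: -1 if an error response is detected
    reward = -1.0 if errp_detected(action) else 0.0
    values[action] += alpha * (reward - values[action])

print("learned action values:", values.round(2))
print("preferred action:", int(values.argmax()), "(target:", CORRECT_ACTION, ")")
```

After a few hundred trials the agent converges on the action that rarely triggers an error response, illustrating how involuntary neural signals could, in principle, serve as a corrective feedback channel.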
Yet again, promising BCI applications are accompanied by troubling ones. Some authors, such as Charles N. Munyon, refer to “disruptive BCIs”, describing BCIs that could be used in an offensive manner. Jean-Marc Rickli and Marcello Ienca suggest that such technologies could, in principle, be weaponized to manipulate or degrade the cognitive, sensory, and motor neural activity of adversaries. The targeted stimulation of specific brain regions could inflict psychological distress and pain, rendering these devices potential tools of torture without physical contact. In the future, disruptive BCIs could challenge established norms of acceptable conduct in warfare, pointing to the importance of robust preventive regulation.
Regulatory approaches
Since BCI technologies hold immense promise for treating and managing individuals with neurodegenerative, psychiatric, and motor disorders – and have been discussed for numerous other medical applications – a global ban on military BCIs has been considered a disproportionate policy response. Such a global ban “would prevent any spillover effect into civilian applications and could delay technological innovation for people in need including older people and patients with neurological disorders.” The 1995 Protocol IV to the Convention on Certain Conventional Weapons (CCW) may offer a precedent for regulation. It preemptively outlawed laser weapons specifically designed to cause permanent blindness, demonstrating that states can adopt narrow, humanitarian-driven prohibitions without halting innovation in military technologies. Applying a similar approach to neurotechnologies would not ban BCIs, but rather establish targeted prohibitions on especially high-risk types, such as invasive BCIs. Another option involves imposing limits on the technologies that BCIs connect to the human brain – for example, prohibiting their use to directly control weaponry.
Alongside international approaches, policymakers could take proactive steps to guide domestic BCI development in line with humanitarian principles. One approach is to tie military BCI funding to ethical benchmarks. For instance, development grants and procurement could be contingent on requirements such as rigorous testing and features designed to help distinguish combatants from civilians. Dedicated oversight boards could then certify BCI technologies that demonstrably reduce civilian harm and enhance compliance with the laws of war. Such certification could be incentivized through expedited regulatory approval or preferential export status. These and similar steps could offer a path to embed meaningful guardrails and incentives into the BCI development process. While such military BCI certification boards do not currently exist, the individual components – expert legal reviews, mandated testing, independent verification, and strong incentives for compliance – already operate with regard to military weapon systems. Leveraging those precedents would offer tangible rewards to developers for building neuro-interfaces that make warfighting safer for civilians and more consistent with IHL.
Other innovative approaches continue to surface. For example, R. Roland Nadler, Tade M. Spranger, and colleagues suggest integrating patent offices into the regulatory oversight of neurotechnology as an approach to early risk detection. By evaluating patent applications not only for novelty and utility but also for potential social harms, patent examiners could serve as a first institutional checkpoint for identifying problematic trajectories. This early-stage scrutiny could enable timely alerts to regulatory agencies or public stakeholders, creating a more anticipatory model of governance. However, patent offices would require expanded expertise in security, ethics, and social impact assessment – domains traditionally outside their purview. Nevertheless, this model may inspire similar ideas, ultimately leading to a proactive layer of (military) BCI risk governance.
The regulation of emerging technologies, especially in military contexts, is often constrained by the Collingridge dilemma: in the early stages, technologies are still malleable but poorly understood, making it difficult to anticipate their consequences and guide their trajectory; in later stages, they are better understood but have become entrenched, rendering meaningful intervention difficult. However, this dilemma should not be mistaken for inevitability or treated as a reason for inaction. Proactive measures – such as anticipatory ethical oversight, adaptive regulatory frameworks, and incentive-based certification regimes – can meaningfully shape the integration of BCIs into the armed forces in ways that reinforce compliance with IHL.
See also:
- Ruben Stewart, The shifting battlefield: technology, tactics, and the risk of blurring lines in warfare, May 22, 2025.
- Joanna Wilson, AI, war and (in)humanity: the role of human emotions in military decision-making, February 20, 2025.
- Elke Schwarz, The (im)possibility of responsible military AI governance, December 12, 2024.
- Matthias Klaus, Transcending weapon systems: the ethical challenges of AI in military decision support systems, September 24, 2024.
- Gilles Doucet and Stuart Eves, Reducing the civilian cost of military counterspace operations, August 17, 2023.