The military decision-making process is challenged by the increasing number of interconnected sensors capturing information on the battlefield. This abundance of information offers advantages for operational planning – if it can be processed and acted upon rapidly. This is where AI-assisted decision-support systems (DSS) enter the picture. They are meant to empower military commanders to make faster and more informed decisions, thus accelerating and improving the decision-making process. Although they are meant only to assist – and not replace – human decision-makers, they pose several ethical challenges that need to be addressed.
In this post, Matthias Klaus, who has a background in AI ethics, risk analysis and international security studies, explores the ethical challenges associated with a military AI application often overshadowed by the dominant concern about autonomous weapon systems (AWS). He highlights a number of ethical challenges associated specifically with DSS, which are often portrayed as bringing more objectivity, effectiveness and efficiency to military decision-making. However, they could foster forms of bias, infringe upon human autonomy and dignity, and effectively undermine military moral responsibility through peer pressure and deskilling.
Autonomous weapon systems (AWS) steal the limelight in current military AI debates, but they are far from the only ethically challenging use case. Here, I want to discuss AI-based decision support systems (DSS), which collect, aggregate and make sense of incoming intelligence, surveillance and reconnaissance (ISR) reports. Due to the proliferation of sensors, drones and the Internet of Things in military organizations, large amounts of data need to be processed at various military command levels. While this influx of data can result in better intelligence and coordination of one's own forces, its sheer volume poses challenges to analyzing and acting upon it in a timely manner. AI-based DSS are meant to cope with this issue and aid command staff in building common operational pictures, developing courses of action and supporting the execution of orders, all within a fraction of the time needed by human planners.
However, while these systems are meant only to support and enable – rather than replace – human planners, they could instead undermine moral responsibility and foster unethical behavior and outcomes. And while AI-based DSS have no physical embodiment and do not kinetically engage in warfighting themselves, they could have an even greater impact than AWS. After all, AI-based DSS could shape the military decisions made about the employment of AWS and human fighters alike. Thus, they deserve equal or even greater scrutiny, as their influence could be at once more subtle and more far-reaching.
Expanding the ethical viewpoint
In the context of military operations influenced by AI, ethics plays a special role: it is concerned with how these technologies affect human values, human accountability and responsibility, and military virtues.
Military virtues such as courage, responsibility, and duty are fundamentally rooted in human judgment and decision-making. AI-based DSS, while valuable in processing large amounts of data, risk overshadowing these virtues. By taking on more of the cognitive load in military operations, AI-based DSS could dilute the human element of moral and ethical decision-making. A shift from human-led to AI-assisted judgment could erode the capacity of commanders to fully assume moral responsibility for their decisions, ultimately compromising military virtues.
The development and eventual deployment of AI-based DSS involve complex interactions between technology and human operators, going far beyond mere questions of functionality. Instead, they require an evaluation of whether such systems can be aligned with fundamental principles such as human dignity and autonomy.
The ethical challenges associated with AI-based DSS are inherently linked with the technical aspects of these systems. For example, biases in training data can lead to unwanted discriminatory outcomes, potentially violating the principles of justice and fairness. Opacity in AI systems can prevent humans from understanding or challenging the systems’ suggestions, compromising the principles of transparency and accountability. These challenges not only affect operational efficiency but also undermine the ethical principles at the core of military decision-making. This issue becomes even more pressing when it comes to maintaining meaningful human control over AI-based DSS, which is often regarded as essential to ethical and responsible AI use in military operations.
The rundown
The following examples represent a selection of ethical challenges related to AI-based DSS. They are deeply interconnected with concerns over human responsibility, military virtues and meaningful human control in military decision-making. These challenges not only affect the operational efficiency of decision-support systems but also raise critical ethical questions about the oversight, autonomy and accountability of human decision-makers in high-stakes environments.
Biases existing in the training data of AI systems can be unintentionally perpetuated or even amplified by the systems themselves. This can result in certain individuals or groups being discriminated against based on characteristics like sex, race, class, age, nationality, and many other factors. For AI-based DSS, this could play a role if their data is faulty, for example, if automated target recognition systems, which contribute to the creation of a common operational picture, misclassify certain individuals or groups as legitimate targets. There have been reports of this in drone warfare, where biased data labelling and interpretation have contributed to people being targeted largely due to their apparent affiliation with certain tribes, for example.


AI-based DSS analyze data and suggest actions quickly and often more accurately than humans, which fosters a natural trust in their recommendations. This can cause users to disregard their own training and intuition and rely on AI-based DSS outputs even when it is inappropriate to do so. The effect is exacerbated if the system aligns with users’ preferences, as they are less likely to question comfortable suggestions. Additionally, a lack of understanding of how these systems work can lead to over-trusting them, especially if their limitations and biases are not apparent due to their opaqueness. This automation bias risks collateral damage and destruction on the battlefield by causing operators to accept AI-based DSS suggestions uncritically, potentially resulting in unnecessary suffering and harm.

Comprehensive AI-based DSS could foster a form of virtual remote command and control, reducing soldiers to executing orders displayed on their devices without critically engaging with these systems’ outputs. This scenario risks soldiers not questioning orders, even if they have insights suggesting alternative actions. If soldiers receive commands about enemy positions via AI-based DSS, they might act without verifying the situation. This challenges the military self-perception of conscious decision-making in the spirit of “Auftragstaktik” and could, in the worst case, result in soldiers “only following orders”.

AI-based DSS, while reducing the cognitive workload, risk deskilling command staff by taking over planning and decision-making tasks. Without regular practice, staff may lose proficiency in these tasks, and that proficiency can be crucial during system failures. For example, automated target recognition and threat assessments may erode the ability to “manually” assess ISR reports. Ineffective human decision-making would expose troops, civilians and adversaries to unnecessary risks, leading to wrongful attacks, fratricide or collateral damage. Additionally, deskilling can erode military virtues like courage and mercy, which likewise require constant practice.


While, for example, collateral damage estimation (CDE) is already performed with the aid of basic computer tools, the extent to which AI-based DSS could change the process is cause for ethical concern. Such systems remove decision-makers even further from the human element of warfare. Traditionally, these decisions are made by human commanders who bear the moral responsibility for weighing the potential harm to their soldiers and to civilians against the value of their objectives. In contrast, when such calculations are automated by AI-based DSS, the risk is that human lives could be reduced to mere data points in an algorithmic equation, obscuring the moral significance of the choices being made.
Looking ahead
The integration of AI-based DSS in military operations promises more efficient and faster decision-making. However, as illustrated, these systems also present significant ethical challenges that must be addressed to ensure their responsible use.
To mitigate these risks, it is crucial to develop robust ethical frameworks and guidelines for the use of AI-based DSS. Continuous training and education for military personnel on the limitations and potential biases of these systems are essential. This should include fostering critical thinking and a healthy dose of skepticism or caution towards DSS in order to enable responsible and meaningful human control.
The assumption that AI will inevitably change military operations, often referred to as “technological determinism”, should be critically examined. Instead of simply accepting this notion, a comprehensive risk analysis is in order. This would allow for a careful consideration of the ethical challenges raised above, including weighing potential risks against benefits. Such an approach could indeed foster (responsible) innovation by providing guardrails that developers and users could follow.
It is essential to draw more attention to AI-based DSS. Even if they are meant to merely assist human decision-makers, they will inevitably shape their decisions through how they process data, present information and propose courses of action. By regarding AI-based DSS as socio-technical systems, we need to raise awareness of their moral impact. While they may lack moral agency, they are tools with moral impact that will be deployed in armed conflicts. Therefore, we need to advance the discussion on how these systems should be designed, developed, used and overseen.
See also:
- Jimena Sofía Viveros Álvarez, The risks and inefficacies of AI systems in military targeting support, September 4, 2024
- Ingvild Bode, Ishmael Bhila, The problem of algorithmic bias in AI-based military decision support systems, September 3, 2024
- Wen Zhou, Anna Rosalie Greipl, Artificial intelligence in military decision-making: supporting humans, not replacing them, August 29, 2024



