
Is arms control over emerging technologies just a peacetime luxury? Lessons learned from the First World War  


At the turn of the twentieth century, many engineers with fertile imaginations—from France’s Gustave Gabet to America’s Orville Wright—hoped that their inventions would make wars impossible, because the fighting would cost too many lives. The four years of the First World War scuppered those hopes and forged a new era of intensive use of science and technology in support of military action. The scientific community spent those years busily designing the most lethal weapons imaginable, and doubt spread as to whether a country at war, even a democratic one, could remain true to its values and principles.[1] But is that the lesson to be drawn? Does the First World War show that moral objections to new weapons technologies and their uses—including those enshrined in international treaties (Saint Petersburg in 1868, The Hague in 1899 and 1907)—are bound to evaporate when a country is fighting for its survival?

1914–1918 and the theory of relativity applied to acceptance of new weapons

The First World War represents a history lesson, teaching us to ‘be realistic’ in our expectations of upholding legal and ethical standards in future technological wars. That, at least, is the view of Peter W. Singer, the author of several bestsellers about wars of the twenty-first century, who draws a parallel between the ‘killer robots’ of the future and the Great War’s submarines.[2] In an interview quoted in the New York Times, he recalls that all-out attacks by German U-boats prompted the United States to join the war in 1917. In the aftermath of the war, some statesmen opposed the spread of new weapons that ran manifestly counter to the principles of humanity and pushed for an international treaty banning that kind of submarine warfare. The plan was strongly supported by Elihu Root,[3] a member of the U.S. delegation to the Washington Naval Conference of 1921–1922.[4]

Singer looks at the diplomatic efforts of the interwar period and notes that the broad moral principles they established, defended for a quarter of a century, lasted only a few hours after the attack on Pearl Harbor, which prompted the United States to launch all-out attacks on Japanese merchant ships on the evening of 7 December 1941.[5] Singer concludes: ‘The point is, what happens once submarines are no longer a new technology, and we’re losing?’, and adds: ‘Think about robots, things we say we wouldn’t do now, in a different kind of war.’ Would such a situation be a case of ‘necessity knows no law’?

Did moral relativism really start with the outbreak of the war?

At first glance, this ‘natural law’ under which all weapons that are initially forbidden eventually gain acceptance seems to be confirmed by the experience of the First World War. For the first time in the history of humanity, war was waged under the sea and in the air (as high as the stratosphere in the case of the shells fired by the Paris Guns in March 1918)—places where war had previously been avoided or prohibited by international conventions. However, it would be wrong to think that it was the outbreak of war that prompted the willingness to overcome reluctance to use certain weapons or methods of warfare.

In 1901, Admiral Arthur Wilson (who became First Sea Lord in 1910) is said to have declared that submarine warfare was ‘un-English’ and that captured submariners should be hanged as pirates. That, however, did not prevent the Royal Navy from acquiring its first submarine that same year. By 1914, it had more submarines than the German Navy.

It is often forgotten that, as early as the second Hague Conference of 1907, France and Germany were the only countries to refuse to renew the commitment they had made eight years earlier not to discharge ‘projectiles and explosives from balloons, or by other new methods of a similar nature’. In 1911, the Michelin brothers unapologetically created two ‘aéro-cible’ (air target) prizes rewarding pilots who could drop projectiles accurately.[6] The two French industrialists wanted to show their government that aircraft ‘could soon become a formidable tool of war’. Their aim was not to ban the use of these new weapons, but rather to encourage massive investment in them.

A challenged but genuine resilience of pre-war moral principles among belligerents

Irrespective of whether the lifting of the taboo on aerial bombing in the First World War can be directly linked to the outbreak of hostilities, what is of note here is that the indiscriminate nature of such bombing in urban areas remained controversial throughout the war. Pre-war moral principles were deeply challenged during the conflict, yet the conduct of the belligerents also offers evidence of their resilience.[7]

Looking closely at the propaganda of the time, we see that the use of new means and methods of warfare was always accompanied by arguments justifying it. For example, when the French Air Force carried out the first large-scale bombing of German factories on 26 May 1915, it justified the risk of killing workers and civilians by arguing that Ludwigshafen am Rhein was not a harmless industrial site, but the location of factories producing the asphyxiating gases that Germany had recently begun using on the battlefield. Although the need for justification faded as the war went on, it never disappeared.

It would also be wrong to believe that combatants had no difficulty ignoring commitments signed in peacetime.[8] Take the example of the 1868 Saint Petersburg Declaration, which banned phosphorus-based incendiary munitions deemed to be ‘contrary to the laws of humanity … which uselessly aggravate the sufferings of disabled men, or render their death inevitable’. In 1914 the countries that had ratified that declaration faced a test of conscience, because incendiary projectiles were very effective at destroying the observation balloons used to direct artillery fire. German pilots were the first to receive a written order stating that the incendiary projectiles they carried were to be used only against balloons and in no event against other aircraft. British and French pilots demanded something similar in writing, fearing that they would be summarily executed if captured carrying such munitions.

A moral compass disturbed, but not shaken

The feeling of being watched from abroad also acted, in a way, as a North Star guiding the moral standards of some of the nations at war. In this truly global war, it was just as important for a country to shape how it was seen in allied and neutral countries as it was to secure the support of its own population. Although the length and severity of the conflict probably dulled public sensitivities in the combatant nations, people further from the battlefield remained keen to see the laws of war upheld. When the United States joined the war in 1917, for instance, President Woodrow Wilson refused to allow the American air service to take part in ‘messy bombing of industrial and commercial sites, or of the people of enemy countries, which would not meet a demonstrated military need’.

Another source of moral bearing, perhaps one of the most undervalued, is the judgment of a person’s innermost circle. History is often blind to the role played by family and kin in encouraging individuals to uphold a higher standard of moral behaviour. But we also know of cases where it failed. A rather extreme example is Fritz Haber, who supervised the production and use of German asphyxiating gases. His wife Clara Immerwahr—herself a talented chemist—committed suicide in 1915 during a party held in Haber’s honour to celebrate the success of the first poison gas attack at Ypres, which claimed more than 5,000 victims.

Is there a pattern for social acceptance of controversial weapons?

In 2014, French historian François Cochet published a book about the many mistaken ideas people hold about the First World War,[11] ideas that went largely unchallenged during the centenary commemorations. The collective view remains focused on the ‘slaughter in the trenches’, and people struggle to believe that ethics could have played any role in such a deadly conflict.

This simplified version of history would be of little consequence if it did not foster a cynical attitude towards the future, inviting people to look at military history and ‘get real’ about what is likely to come. Looking closely at the First World War, however, we see that the shift in people’s perception of the nature or use of certain weapons—morally reprehensible at first, but later tolerated—was not straightforward. We need to take an interest in what made those weapons acceptable or unacceptable then if we are to draw parallels for today.

In the interwar period, moral objections to chemical weapons led the international community to strengthen the prohibition on their use—through the Geneva Protocol of 1925. But it is often forgotten that the arms control efforts of the 1920s also involved a struggle to restrain contested means of warfare in the air and under water. A member of the U.S. delegation at the Washington Naval Conference, Elihu Root still considered submarines incapable of complying with the law of the sea, because they did not allow the obligation to assist shipwrecked crews to be met. More than their aerial counterparts, underwater vessels remained ‘outliers’ throughout the twentieth century, until this military resource gained unforeseen ethical legitimacy as a deterrent of war, particularly through its association with nuclear weapons.

History shows that new uses of science and technology in the military field do not follow a linear path from initial rejection to inevitable acceptance. International relations involve an ongoing trade-off in which compliance with international treaties and moral values can be subject to compromises based on perceived military usefulness, financial and political costs, but also the potential for concealment and diplomatic denials.[12]

What lesson can be drawn today for emerging military technologies?

Being realistic about the future should not lead us to the conclusion that pre-war international commitments automatically lapse the moment a country is fighting for its survival. On the contrary, the First World War should remind us that the process by which new weapons become socially acceptable is subtle and complex. This knowledge can inform contemporary debates, such as those regarding the terrible reappearance of chemical and biological weapons, or the ‘fully autonomous weapons’ of the future.

A century ago, the first unmanned systems were developed—from the Gabet torpedo to the radio-controlled aircraft flown by Captain Max Boucher. Remarkably, these capabilities are now central to discussions about military ethics. What would the 1912 Nobel Peace Prize winner Elihu Root have made of the prospect of ‘killer robots’? He would certainly have supported a multilateral initiative like the one that has been under way for the last five years in Geneva regarding certain conventional weapons.[13] But he would surely have regarded diplomatic efforts as sterile if the public were not informed about the real societal challenges at stake.

This duty to inform, enlighten and, in his words, forge ‘worldwide public opinion’, particularly on weapons control issues, is what prompted Root to launch the magazine Foreign Affairs in 1922.[14] His generation came through a war that was global not just in its scale but also in the interweaving of its various battlegrounds (physical, informational, emotional, etc.), which could no longer be considered separately. The lessons they drew from this first technological war suggest to our generation that it would be unwise to separate the questions raised by the prospect of global algorithmic warfare from the broader societal issues raised by civilian uses of ubiquitous digital technology.

As Henry Kissinger recently observed about the rise of artificial intelligence, governments are not only responsible for preparing military assets for future warfare; they are also engaging their nations in a ‘transformation of the human condition that it has begun to produce’.[15] In that sense, today as a century ago, it is not naive to challenge the inevitability of the uncontrolled spread of coercive technologies (whether civilian or military), or of their unlimited use. In the aftermath of the First World War, Elihu Root was among the visionary minds who foresaw that instruments of international humanitarian law would be of little use unless coupled with a well-informed worldwide public debate. Today, discussions on algorithmic warfare should not remain confined, as they currently are, to a small circle of people. That may be the lesson to learn a century on.


This article is a contribution solely in the author’s academic and personal capacity.



[1] All references not given in this article can be found in ‘Out of sight, out of reach: Moral issues in the globalization of the battlefield’, published in issue No. 900 of the International Review of the Red Cross. A first outline of the present article forms the concluding chapter of the richly illustrated collective volume Les armes de la Grande Guerre : Histoire d’une révolution industrielle et scientifique, éditions Pierre de Taillac & Ministère des Armées, 2018.

[2] Interview with Peter W. Singer, quoted by Matthew Rosenberg and John Markoff, The Pentagon’s ‘terminator conundrum’: Robots that could kill on their own, New York Times, 25 October 2016.

[3] Elihu Root (1845-1937) was U.S. Secretary of War from 1899 to 1904, the first president of the Carnegie Endowment for International Peace and the founder of the Council on Foreign Relations and its well-known magazine Foreign Affairs. See The Nobel Prize, Elihu Root Biographical.

[4] Lawrence H. Douglas, The submarine and the Washington conference of 1921, International Law Studies, vol. 62, 1980, pp. 477–490.

[5] Singer fails to mention that, at the time, it was the greater immorality of Japan’s attacking without first declaring war, and not simple military necessity, that justified this change of attitude.

[6] Antoine Champeaux, Michelin et l’aviation, 1896–1945 : Patriotisme industriel et innovation, Lavauzelle, 2006.

[7] Historian Isabel V. Hull shows that the decision to comply with international law was not an easy one for any combatant nation in her book, A Scrap of Paper: Breaking and Making International Law during the Great War, Cornell University Press, Ithaca, 2014.

[8] In her article The ICRC in the First World War: Unwavering belief in the power of law?, Lindsey Cameron shows how, during the First World War, the ICRC engaged in a legal dialogue with States to uphold the 1906 Convention for the Wounded and Sick and the 1907 Hague Convention on Maritime Warfare.

[9] Daniel Bellet and Will Darvillé, La Guerre Moderne et ses nouveaux procédés, Hachette, Paris, 1916, p. 235.

[10] Ibid., p. 253.

[11] François Cochet, Idées reçues sur la Première Guerre mondiale, Le Cavalier Bleu, 2014.

[12] This is the fine analysis made by Paul Schulte in the case of chemical weapons; P. Schulte, When chemical weapons killed 90,000, CNN, 9 November 2018.

[13] The United Nations Convention on Certain Conventional Weapons (CCW), which started to discuss the subject of ‘emerging technologies in the area of lethal autonomous weapons systems (LAWS)’ in autumn 2013.

[14] The magazine was launched the very year the Five-Power Treaty limiting naval armaments was drafted.

[15] Henry A. Kissinger, How the enlightenment ends, The Atlantic, June 2018.


DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.

