Does artificial intelligence make wars better or worse? That was the provocative question we asked participants at an AI Expo hosted by the Special Competitive Studies Project earlier this summer. SCSP is a non-profit whose mission is to help strengthen U.S. competitiveness in artificial intelligence and other emerging technologies that shape national security, the economy, and the way the U.S. fights wars. We dig into the question with drone enthusiasts, tech CEOs, and Mark Montgomery from the Foundation for Defense of Democracies.

Images from ICRC’s SCSP AI Expo booth in June 2025. (Dominique Maria Bonessi/ICRC)

Additional Reading:
Artificial intelligence | International Committee of the Red Cross
AI in Military Decision Making: A Dialogue on How to Enhance IHL Compliance
Artificial Intelligence in Military Decision Making: Legal and Humanitarian Implications
Transcript of the show for the hearing impaired.
[BONESSI] Does artificial intelligence make wars better or worse?
That was the provocative question we asked participants at an AI Expo hosted by the Special Competitive Studies Project earlier this summer. SCSP is a non-profit whose mission is to help strengthen U.S. competitiveness in artificial intelligence and other emerging technologies that shape our national security, economy, and the way the U.S. fights wars.
We’ll dig into the question with drone enthusiasts, tech CEOs, and Mark Montgomery from the Foundation for Defense of Democracies.
I’m Dominique Maria Bonessi and this is Intercross, a conversation about conflict and the people caught in the middle of it.
[INTERCROSS INTERLUDE MUSIC]
[Ambient drone sound]
[BONESSI] The sound of drones racing through the air in a netted-off section at the AI Expo was meant to draw a younger crowd.
[DOMINICK ROMANO] “I think we left off where the high school kids were beating the military teams.”
[BONESSI] That’s Dominick Romano talking to our summer intern Cormac Thorpe. Dominick is chair of the AI steering committee at the US National Drone Association, the organization running the drone racing championship at the expo.
[DOMINICK] “It’s kind of like playing a video game from behind the screens. Right. So really just driving awareness. This is why we’re here and you know, bringing these kinds of competitions to the general public.”
[BONESSI] While there is a levity in raising awareness and getting others to participate in these drone races, Dominick also recognizes the harm drones and new technologies can have on civilians and civilian infrastructure.
[DOMINICK] “Drones are without a doubt a dual-use technology. Where there’s war taking place, there is this element of where you’re gonna be able to get supplies and logistics and food. But then at the same time, you also have this added threat, where drones are making the world a, a far more dangerous place today than they ever were.”
[BONESSI] Others at the expo had their own take on the question we posed.
[DANIEL KOFMAN] “I think AI is an inevitability in war.”
[BONESSI] Daniel Kofman is the CEO and co-founder of an autonomous drone company called Mara. They’re building a counter drone swarm system using fiber optic and AI guided drones to protect high-value targets in conflicts.
[DANIEL] “I think in a lot of ways it makes the threat to a soldier on the front line worse, but it can also potentially be used by both parties in a conflict to create higher precision systems. So it’s frightening and it’s inhuman in a way, as is war in general, as is artillery. But I think if used correctly, AI enables the creation of weapons of less destruction.”
[BONESSI] Daniel says his company is working to minimize the chances of civilians being hurt by their system.
[DANIEL] “We don’t wanna leave around unexploded ordnance, even if it’s a very small charge. So we’re looking at several payload options that enable us to operate in entirely civilian areas with the only risk coming from the enemy drone, which may have a large explosive payload.”
[BONESSI] Aaron Brown is the founder and CEO of Lumbra, a company working to unify AI tools to improve intelligence analysis and decision making. Aaron says a drone attack on Russian military targets in June demonstrated how central the technology has become in the conflict.
[AARON] “The Ukrainians executed an operation that was a year in the making to attack Russia’s Air Force. Mm-hmm. Uh, in some cases decapitating a huge portion of their strategic bombing capability with a very low-cost drone swarm, uh, effectively, and a rudimentary one at that.”
[BONESSI] He said that instance was one of drones making warfare more lethal…
[AARON] “ You wouldn’t have been able to be a small country and take out a big country’s bombers on the ground before this. But at the same time now those bombers are no longer in play. So, war has been made more safer.”
[BONESSI] ICRC has its own thoughts on AI-guided drones and other digital technology issues, which you can find on our blog at intercrossblog.icrc.org. To get a different perspective on the future of AI in war, and this question of whether it makes wars better or worse, we turn to our guest, Mark Montgomery from the Foundation for Defense of Democracies, a non-partisan research institute based in Washington, D.C. focused on national security and foreign policy.
Mark is a Navy veteran who worked throughout Indo-Pacific Command. He has also worked for the late Senator John McCain and more recently ran the Cyberspace Solarium Commission. The commission worked for three years to develop and implement recommendations on how the US approaches cybersecurity. So, when we met on the floor of the AI expo, I asked him if we needed a commission to develop and implement recommendations on AI.
[MARK] Eric Schmidt, the chairman, has done a lot of work with Ylli Bajraktari, the executive director, to set up the Special Competitive Studies Project, and it is going even deeper and farther. So they’re not only trying to implement their recommendations from a few years ago, but they’re also studying the impact of artificial intelligence on the military, the economy, foreign policy, and technology, and I think they’re doing a fantastic job delving into and drawing this issue out.
[BONESSI] So, going to the speed of things: there are already a lot of cyber vulnerabilities that we know of on the battlefield. We’ve been aware of these for years due to the commission. Now we have AI sort of embedded into that system. What do you foresee as the risks for people caught in conflict?
[MARK] I think that AI, like any tool, can be a force for good. Now, on the flip side, it’s also used for targeting. There’s a potentiality, not with US forces for sure, but maybe with others, that you begin to remove the human from the loop on decision making. So there’s risk in that. I would say the reward that I described earlier, of better situational awareness, can only benefit people who have no standing, right? IDPs, refugees. No one’s looking out for them intrinsically; no one’s actively defending them. They need that. They need the intelligence to be exquisite so that there’s a great understanding of where they’re at. And I think artificial intelligence and machine learning can really help facilitate that.
[BONESSI] And you’ve mentioned the US military being one of those actors that would never not have a human decision maker behind it. I’m just curious, where do you come out on that?
[MARK] Well, I’m saying that the US military is one that does not now. I mean, there are three standings here, right? You could have a human in the loop, where almost every step of the decision-making process is managed by a human. You can have a human on the loop, where the decision-making process is observed by a human. And you can have a human out of the loop, which, like it sounds, means there isn’t a human there. I believe we currently are not authorized to operate in the final condition. We’re authorized to operate with precautions in the loop, obviously, and on the loop. When we’re building weapon systems now, we build them almost exclusively to be able to at least have a human on the loop. The challenge in AI is gonna be that the speed at which this is happening is gonna make that commander’s ability to assess the righteousness of what you’re doing more challenging.
[BONESSI] So now that we’ve talked about some of the risks involved for civilians on the ground in conflict, I’m curious to hear your thoughts on how AI might lessen the effects of conflict for civilians. You touched on it a little bit; I just wanted to go into that a little bit more.
[MARK] Well, listen, artificial intelligence has incredible opportunity. In the crisis humanitarian assistance world, that could be really beneficial. It is extremely hard to manage disparate supply chains with uneven distribution networks and hard-to-predict IDP and refugee populations. All of that can be much better managed by artificial intelligence and machine learning; there’s no doubt about it. The problem is creating an environment where all the proper data is flowed in, so the humanitarian in charge can make the right decisions, get distributions going, and delegate down. Humanitarian support is not best micromanaged from the top; it’s best pushed down to the lowest possible level. To do that, AI could be extremely beneficial, so I do believe there is a long-term really good thing coming. Again, the military things are things we’re gonna have to balance risks on. On this, I think it’s push it down. I don’t think you’re introducing risk by introducing improved knowledge sharing and improved decision making.
[BONESSI] Absolutely. You know, I’ve had a couple of people at this convention try to sell us on new software and new AI technology we can use to improve our data-led analytics on how to keep track of, uh, you know, refugee movements and such. And I’m like, you know what? This is way above my pay grade.
[MARK] You know, I’d say, um, look, you gotta be careful. People throw the word AI in front of whatever the heck they were selling the day before AI. Or quantum; throw it in and you’re like, well, listen. But I do believe that there are tools developing, and I do believe that there is gonna be opportunity for the humanitarian disaster response community to become more efficient and more effective. And, uh, that’s coming. But just the uncertainty of knowledge…
[BONESSI] What about offense and defense in cyber? I mean, here’s my nightmare scenario: we’re gonna have bots in the future that can just have that AI cyber war back and forth, and there’s not even a need for a human involved. Is that realistic? Is that the future? Or am I imagining things?
[MARK] You’re imagining things? Uh, no, I mean, the robots aren’t coming. AI will help both cyber offense and defense. Right now, in my mind, defense is losing. Now, defense isn’t losing ’cause its technology’s worse; defense is losing ’cause it’s strapped to us humans. We have invariably decided that our password should be mark1234, right? Or we invariably decide that multifactor authentication is too much work for me as CEO of this company. We invariably say, you’re kidding me, a Nigerian prince is leaving me $5 million? Let me hit that link, right? Our personal cyber hygiene is the number one culprit in all of this. So to the degree that AI can help you, the defense needs help. AI is gonna help the defense in cyber, and it’s gonna help the offense in cyber. What I’d say gently is the offense is operating at a slightly higher level of efficiency. So if the defense can get AI to overcome that level of efficiency, it’s gonna overall, I think, come down to the benefit of the defender. But, you know, this is a case-by-case thing.
[BONESSI] We’re in a huge room of government and industry actors that are going to be making investment, procurement, and regulatory decisions in the next 10 years. What do we do to improve the decision making?
[MARK] Listen, first we’ve had machine learning, which to me is kind of like the calculator that takes boring things, you know, that takes lots of data and produces readable results. I think artificial intelligence allows you to put some extra thought into that product. So I think what we’re gonna find is that we’ve always had the ability to have a lot of data at our fingertips. We haven’t always had the ability to have it organized in a way that illuminates the right decision. I think AI is gonna help with that light, you know, bring the illumination to all that data you have. And I can’t think of, as I said, a more disparate, dysfunctional, data-driven-yet-not-available operation than humanitarian assistance and disaster response, ’cause by definition stuff is broken. Yeah. And with stuff broken, you know, you don’t have all the information you need. So I think AI will help kind of repair that data, get it right. So I do think it will make decision making more seamless, and it’ll make your likelihood of an error lesser, and that’s gonna be really good for all of us.
[BONESSI] And going back to the question that we have at this booth today: does AI, um, help or harm in war? Does it make wars better or worse? I’m getting a lot that it’s both; I think I’m drawing that conclusion. Is that sort of what you draw from your knowledge?
[MARK] It’s both. And I think, if you add in the idea of war plus the recovery from war, it’s gonna be a general positive, ’cause the recovery from war is not a negative; it is a positive. And so I think AI overall is a slight positive in this regard. And really, on the military side, it has a lot to do with the intent, the criminality, the recklessness of the operator of the tools. So if they’re reckless or have ill intent, AI can do some bad stuff.
[BONESSI] I mean, there was a professor here from Purdue University earlier talking about the massive challenges that AI poses: just the ability to hack it, the way it’s so malleable by anyone who’s able to sort of get their hands on the code to write the AI. It’s pretty much saying to people, you know, here’s an open check to do whatever you want, and it’s very hard to protect against that. To finish his argument, he was saying, you know, we need to really work on these challenges before it has further mass deployment.
[MARK] No, he’s right, and I agree with that. But here’s what I’d say. Like I argued earlier, I’m for kind of an entrepreneurial approach to this. What I would say is, I very much caveat that with: we must have physical and cyber security guardrails on the AI foundries, right? And on the model weights, on the access to the LLMs, large language models, so that we can control this access he’s right to be concerned about. And I am afraid that the big AI companies do not want those guardrails, and they’ve worked hard. I’ve worked inside Congress, working with Congress, to try to get those guardrails in place, and I would not call the AI companies my friend in that, right? They have been effective in making sure we have not been successful. Yeah, but we need those guardrails. We need to put the kind of physical and cybersecurity around that intellectual property the same way we would around a weapons laboratory or an energy laboratory inside the government. And we don’t do that. And the companies say, don’t worry, we’ve got it taken care of. Yet they simultaneously report to their shareholders or to their boards that they’ve lost access on occasion, and people have stolen things. Right. So they simultaneously say, don’t give us any standards, don’t enforce security on us. And by the way, we had a security problem.
[BONESSI] Right. That’s one of the things at our booth that we’re talking about with folks. If you don’t care about the ethical considerations for AI and what that might do to a civilian in conflict, because they’re a million miles away from you and you don’t understand where they’re coming from, think about your own business. Think about your bottom line. Are your assets, your employees, those who are in situations of conflict and violence, at risk of potentially being liable for some sort of military action, or the target of an adversary’s military action? Is that going to catch up with you? Is your business going to survive that? And I actually was talking with a company today, and I was bringing up these questions, and I said, you’re working in this space. Do you have this risk assessment? Are you ensuring that you’re gonna be sort of cushioned from that? And they said, yeah, we have our software engineer who’s in charge. I said, what about your legal risk? And they didn’t really have a good answer for that.
[MARK] No, you’re right to bring that up. Look, this is about risk. I’m glad you brought up risk here, because this is a 3D, three-level chess risk management game, maybe four-level. It’s a lot, because you’ve got to absolutely understand unintended consequences. You’ve gotta understand how much exposure you have with the loss of a tool, things like that. So from my perspective, like I said earlier, any tool can be ethical or non-ethical. The difference with the AI ones is I think that the risk embedded in them is much higher. So I do think we need some guardrails on the security, and I think we need them now. Yeah, because I think we’re getting to the point now where theft of a current model begins to have serious security implications.
[BONESSI] Absolutely. Well, thank you so much, Mark Montgomery.
[MARK] Thank you very much for having me, Dominique.
—————————————————————————————————————
[OUTRO]
[BONESSI] That was Mark Montgomery from the Foundation for Defense of Democracies speaking with me on the floor of the AI expo.
If you liked this episode, please rate, review, and subscribe to Intercross wherever you get your podcasts.
And if you’d like to learn more about our booth, our work on artificial intelligence, and the latest in everything warfare and digital technologies, please visit us at intercrossblog.icrc.org or follow us on X.com @ICRC_DC.
See you next time on Intercross.
[Intercross Interlude Music]
