Fighting wars on factory settings
24 Mar 2025

Critical industries’ emergence as a target for hostile states reflects the increasingly digital aspects of modern warfare. Strategic insights director at corporate finance advisers Heligan Group Will Ashford-Brown throws light on how advances in AI and digitalisation in the wider economy impact modern weaponry and vice versa…
Has tech extended warfare beyond the battlefield and overt conflict significantly more than before – to the point where critical infrastructure is always on the front line?
The evolution of technology has allowed the inception of 'grey zone' tactics, carried out by our adversaries, often in the digital space. The proliferation of activities such as cyber-attacks has provided our adversaries (both state and non-state actors) with a cost-effective way to disrupt our critical institutions and infrastructure, such as the NHS. This puts our critical national infrastructure in harm's way from adversaries stationed thousands of miles away (often acting anonymously), who are able to carry out activities below the threshold of war, disrupting our way of life.
How will AI-driven autonomous weapons influence military strategy and human decision-making in conflict?
AI systems, with use cases in targeting, for example, will substantially reduce the time from sensor, to processing and analysis, to action. However, the ethical implications of AI potentially handling the whole 'kill chain' must be considered - is it ethical to allow AI to push the button to carry out a strike? Our current position is no, with Western militaries preferring to keep humans in the chain for now. But as technology evolves, and our adversaries test the moral compass, can we afford to keep the status quo?
Can truly autonomous systems ever be trusted to make life-or-death decisions?
The problem we currently have with AI is that it lacks the reasoning capability that we as humans take for granted. If advancements in AI reasoning fail to materialise, we will, in my opinion, be unlikely to trust such systems to make life-or-death decisions.
How vulnerable are AI-based systems to hacking, spoofing, or other forms of cyber warfare?
I think the main risk or threat when it comes to AI models is the quality of the data such models have been trained on. There is a real risk that adversaries may infiltrate training datasets, planting false information to compromise model integrity. This can then have significant down-the-line implications as we come to use these models for critical (potentially dangerous) tasks.
Will AI-powered surveillance and predictive analytics make insurgencies and guerrilla warfare impossible?
Such systems will certainly help militaries deal with insurgencies - for example, they'll be able to sift through reams and reams of intelligence and transcriptions (potentially in another language) from insurgent communications - but they'll likely play a supporting role, rather than eliminate insurgencies altogether. We will still see human soldiers carry out kinetic operations, as well as human-centric influence and reassurance activities.
How are technical developments influencing offensive and defensive tactics?
It's a bit of a 'cat and mouse game' where we, and our adversaries, develop new ways to carry out offensive actions on each other, with the other party soon scrambling to develop ways to counter these new threats. It's what forces us to continually innovate.
Bio warfare traditionally focused on germs, viruses and toxins but are we at the stage where biotech can become a component of military hardware and systems?
Biological warfare has significant ethical and moral implications, and is illegal under international law - this is not something I contend the West will ever consider using, although we must be prepared should our adversaries use it. But that's not to say innovative technologies being developed in biology and the human sciences won't be beneficial to our military capability development.
If technologies reduce the need for human soldiers, do wars become more or less frequent?
Potentially more, due to the reduced need for governments to put their citizens in harm's way. However, I would argue that we are a very long way from completely removing the human from warfare, and I still have doubts that it will ever happen.
As private military contractors and tech giants develop advanced autonomous systems, could corporations start dictating the terms of warfare more than governments?
I don't think so; governments will still hold significant resources to bring them under control, should there be another Wagner mutiny situation. Tech giants also have much to gain from developing products and selling these into government.
Will companies such as SpaceX, Palantir or even STEM start-ups play a more significant role in defence strategy than traditional contractors?
I would add Anduril and Helsing to that list - these companies have the potential to challenge the established 'defence primes'. Their specialism in areas such as AI, and its application to defence, positions them to be key players in future defence procurement projects, which will likely require the development and deployment of such capabilities.