https://robertskidelsky.substack.com/p/robots-as-weapons-of-war
Robots as Weapons of War by Robert Skidelsky
Some of you may know the famous scene at the start of Kubrick’s 2001: A Space Odyssey, when one of our fur-covered ancestors picks up a bone from a skeleton lying on the ground and realises that it can be used to fight off enemies. Having killed the leader of a marauding group, this humanoid throws up the bone in the air in triumph, where it transforms before our eyes into a slender spaceship speeding towards Jupiter.
This scene is a timely reminder that technology started off as a weapon of war. My theme is what happens to AI when it is seen mainly as a weapon of war — when we view AI development, that is, through the lens of geopolitics.
By geopolitics, I mean quite simply the view that international relations is inherently warlike, whether or not war is actually waged or avoided. This leads readily to the view that we must ensure that our AI is better than your AI. And that makes controlling its development in the interests of a shared humanity extremely difficult.
Technology has always had its champions and detractors, both claiming to speak for a shared human future. On the one side there are those who, emphasising its power for good, demand the removal of obstacles to technological innovation; others, warning of its risks to lives and livelihoods, urge caution and restraint.
The problem has always been to get agreement on what technology is good and what is bad. But once geopolitics is introduced into the argument, we get the additional complication that no government will act to control or limit the potential harms of technology if that means ceding a technological advantage to a potential enemy. So, digital technology is set to be doubly out of control.
II.
Current discussion of AI is dominated by five topics.
First and most familiar is its impact on jobs. Automation has been a fraught matter ever since the Luddites, early 19th century British handloom weavers, started smashing the power looms which were destroying their jobs. Over forty years, handloom weaving was extinguished as a profession, but until quite recently it could be plausibly claimed that the spread of machinery has increased the total quantity of employment by progressively reducing unit labour costs. This has created an additional demand for goods and services, which not only provided replacement jobs at higher wages but supported a growing population.
However the advent of generative AI has greatly accelerated the potential scale and speed of job losses. It is now claimed that up to eight million UK workers could be replaced by AI within the next ten years.
The optimists tell us not to be alarmed by this. They foresee a steady ascent in the quality of jobs as their routine parts are farmed out to robots, and humans are freed for higher value (more creative) work.
Pessimists like Martin Ford and Daniel Susskind argue that the new jobs created will be fewer in number and worse in quality than the jobs they replace. So the current debate is between tech-enthusiasts who promote AI as ‘enhancing’ human performance and those who want to slow it down to avoid replacing human with robotic performance.
Second is the impact of technology on health. ‘A paradigm shift: How AI could be used to predict people’s health issues’, screamed a Guardian headline a few days ago. An AI trained on 5 years of data and 10bn events such as hospital admissions, diagnoses, and deaths will be able to predict the onset of 1000 diseases, allowing doctors to offer ‘more focussed’ screening tests and preventive medicines.
The medical dream is that AI will be able to reverse, at least partly, the ageing process. We are on the brink of a generation of technologically enabled centenarians. Surely this is pure gain? To which the right answer is that it is the quality of life that matters more than the quantity of years - ‘better to die gloriously than live uselessly’, as the proverb has it.
A third topic of current discussion concerns the impact of technology on society. The great benefit of the social media is said to be the empowerment of ordinary people through unprecedented access to information. By overturning the authority of professional and religious gatekeepers, they release a pent-up flood of democratic activity, political and creative.
But with this go the atomised, solipsistic relationships of humans with the internet, which replace person-to-person relationships and lead to the growth of internet diseases - alienation, isolation, pornography addiction, and so on. The more digital technology promises to free its users from the constraints of authority, the more the demand grows for restriction of access and control of content to guard against these addictions.
A fourth, related focus is the impact of AI on politics. Social media are said to undermine democracy by spreading disinformation and conspiracy theories.
Finally, there is the impact of generative AI on the human essence. It is asserted by some that within a decade or so, AI will be able to outperform humans in everything it does. This prompts the obvious question: what then is the value-added of being human? The traditional answer, that humans uniquely have a soul or consciousness, is unconvincing to materialists, who believe, with Descartes, that the soul is located somewhere in the brain. The human mind is only a complicated kind of brain, and there is in principle no obstacle to building artificial brains with souls.
The main thread in this debate is that as technology is applied to an ever wider range of human activities, much more effort will be needed to ensure it remains safe and healthy. Prominent leaders in the AI field like Stuart Russell, Max Tegmark, and Yuval Noah Harari have called for pauses in research and deployment to allow time for reflection on the existential risks technology poses. The ideal end of such a pause might be a global agreement to ban, or at least slow down, certain types of research or development, on the ground that it is too dangerous to allow it to go forward unchecked.
But geopolitics cuts across any such prospect. Take Isaac Asimov’s first ethical rule for robots: ‘A robot may not injure a human being, or through inaction, allow a human being to come to harm’. For the age of geopolitics this may be rewritten: ‘A robot may not injure any human being on our side, but should be programmed to inflict the maximum injury on the enemy’. The logic is clear: if we - the good guys - slow down our own technological innovation, they - the bad guys - will develop weapons able to destroy us.
III.
War and war preparation have always been unacknowledged parents of technology. The computer was not born in scientific institutes working for the common good but in the UK’s Bletchley Park and the USA’s DARPA programmes, the first designed to break Germany’s wartime code, the second to keep the USA ahead of the Soviet Union in the Cold War.
The dominant view today is that we have returned to the Cold War situation, or even worse. Fiona Hill, adviser to the UK Government’s Strategic Defence Review, is reported as saying the third world war has already started.
In such a world AI research and development is part of the arms race; AI policy becomes a matter of making sure that our AI development stays ahead of that of our potential enemies. This from the Charleston Festival Programme of 2025: ‘The race to shape our technological future is on. With the USA and China battling it out for dominance, AI and advanced technologies are starting to close in and redefine our lives’. Or the following headline in the Daily Mail: ‘Cyber chiefs warn of China ‘spy hub’ threat’. Scarcely a day passes without such warnings. In other words, whatever kind of AI we might think good for us, the AI we get will be defined by security requirements.
Even today, weak privacy laws are overruled by governments’ claims to protect their citizens from various kinds of harm. When the ‘malign actors’ are said to be foreign countries, the demands of national security become deafening.
The optimists will say that even countries at war or potentially at war will still be able to reach ‘functional’ agreements to stop the development and deployment of weapons which would cripple or destroy them all. They cite the Geneva Protocol of 1925 banning the use of poison gas, and the various non-proliferation and arms control agreements which have sought, with some success, to limit possession and development of nuclear and chemical and biological weapons.
These were notable achievements at the time. But such weapons were specific and identifiable, so their development was subject to inspection and control. The threat of AI weaponry is more diffuse, since it penetrates nearly every domain of military operations.
AI-powered weaponry includes autonomous weapons systems like drones and robots; intelligence, surveillance and reconnaissance; cyber warfare, hacking and disinformation; and command and control enhancement.
It is not surprising, nor especially comforting, to learn that debate is ‘ongoing at the UN and elsewhere’ over barring or regulating lethal autonomous weapons systems (LAWS). But no progress is reported.
A number of countries now possess tactical nuclear weapons designed for use on the battlefield. During the Ukraine war Russia has repeatedly threatened to use tactical nuclear weapons in some contingencies. Unlike with strategic nuclear weapons, there are no arms control treaties banning their use.
What conclusion might one draw? The most obvious one is the urgent need to challenge the geopolitical perspective itself. Not all accounts of international relations take a zero-sum view. Free trade promises a harmony of interests; balance-of-power theories are built on compromises and compensations; the United Nations, as its name implies, was set up as a peacemaking and peacekeeping institution. We need to distinguish between genuine threats to our national security and fake ones conjured up mainly to channel resources into the development of ever more harmful forms of AI.
In concrete terms, how much of a threat to our own security is posed by China and Russia? If they do pose a threat, what measures short of an arms race can be taken to reduce it?
So to end where we began: We can have either an arms race or safe AI development, but not both at the same time.