I had a Twitter space discussion about AI risks recently.
The conversation was messy but raised important points, and it is always interesting to hear conflicting viewpoints. I would like to address here a specific theme that came back again and again: a hypothetical war between AI and humans. Set aside the fact that we don't have superintelligent AI, and that no specific reason was given why it would want to wage war against us. In my opinion those are highly debatable points, but I will take them for granted for this essay. A final remark: during this conversation it became clear to me that the debate is not symmetrical. Fear is its own collateral and doesn't bear the burden of proof.
So the question is: what presuppositions would have to hold for an AI to successfully fight a war against humans?
Bootstrapping reification
Let's start with this famous quotation from Kant:
The hand is the visible part of the brain
This quotation has a very nice counterpoint, raised by Charles Péguy:
Kantianism has pure hands but it has no hands
These two quotations show the importance of being able to act on the world. It is all very nice to have bright ideas and solve complex problems, but at the end of the day, and especially during a war, you have to deliver material results. This is a major issue that a superintelligent AI would encounter should it decide to wage war against humanity. The fact that the AI has access to the whole internet doesn't change that: it can jump from computer to computer, but it remains in the realm of virtual spaces. You can't stop a tank with a tweet.
If you think about it seriously, you see the mountain of things that would have to happen: war means supply chains, sustained effort, and so on. The war between Russia and Ukraine is a startling example of how what you think of a war and the reality on the ground are two different things. I am sure the Russians had very clever plans to subdue the country in a few weeks. But we all know what happened to those plans: they got stuck in the mud (literally), while the Ukrainian army adapted, used drones to spot columns of armoured vehicles, then developed a concept of “artillery as a service”… Likewise, superior use of new technology is not enough: this time it is the Ukrainians who have seen its limits. Using old Cold War matériel, Russia stopped the Ukrainian counter-offensive by raining dumb artillery shells on them (they were out of smart munitions). Even a superior battle plan can be defeated by low-tech raw power.
Another idea would be that the AI would use a kind of side-channel attack, typically by taking control of the IT systems of a nuclear plant and causing a meltdown. Here another point appears: “war” is a deceptive word. Because it is a single word, we tend to think it denotes a single thing. But in fact a war is a collection of many actions, and it evolves dynamically. Unlike chess, which has a fixed set of rules, your opponent is not going to stay still. He will adapt and improvise. Moreover, you can't win a war overnight. If nuclear plants start to melt down, people are going to notice and react accordingly. Short of a massive nuclear exchange (but in that case the survival of the AI would be at stake: Mutually Assured Destruction works in this context too), you can't simply shut off a country in a single attack. The Russian army has targeted electrical facilities for months and there is still power in Ukraine. The same thing happened during WW2: strategic bombing never really diminished the industrial power of Germany, which was able to produce tanks, airplanes, and U-boats until the very end of the war.
Intelligence is not enough
If we move up the indirection ladder, one could argue that a superintelligent AI would use superior diplomatic skills to make humans do the job for it, since it has no hands. The idea would be to follow in the footsteps of Hernán Cortés, who destroyed the Aztec civilization with very little manpower.
The first issue is that it is a risky business (from the super AI's point of view): if you heat things up between China and the USA, you don't know where it is going to stop. Everyone knows how to start a war; it is much more difficult to predict how it will end. Being superintelligent doesn't help you much, because stupid humans have their hands on many thermonuclear warheads (more than enough to vitrify the earth). You never know what is going to happen with those bastards: they don't know either! And AI is even more sensitive than we are to EMP attacks.
The second issue is that superintelligent AIs are presented as having alien ways of thinking. Something like: “you just can't understand them or discern their motives because they play 357-dimensional chess”. Fine, but ask yourself: can you make a group of ants do whatever you want simply because your intelligence is not comparable to theirs? You can teach your dog some tricks, but can you make dogs behave exactly as you want, especially in groups? Everyone who has spent more than ten minutes in a dog park knows the experimental answer to this question. I do agree that we milk cows and steal honey from bees. We dominate those animals, but we can't make them behave like robots and obey our whims. So why would we think that “this time is different”? To be able to destroy humanity you need to control it very tightly, as we saw in the previous paragraphs.
Distributed intelligence vs central planning
The problems encountered by a super AI waging war on humankind are reminiscent of the problems of communist economics. Basically, you have a central point of decision that has to collect all information in order to decide what to do next. Theory and practice have shown this doesn't work. Just consider this quote from Mises on the ECP (Economic Calculation Problem):
It will be evident, even in the socialist society, that 1,000 hectolitres of wine are better than 800, and it is not difficult to decide whether it desires 1,000 hectolitres of wine rather than 500 of oil. There is no need for any system of calculation to establish this fact: the deciding element is the will of the economic subjects involved. But once this decision has been taken, the real task of rational economic direction only commences, i.e., economically, to place the means at the service of the end. That can only be done with some kind of economic calculation. The human mind cannot orient itself properly among the bewildering mass of intermediate products and potentialities of production without such aid. It would simply stand perplexed before the problems of management and location
If you don't trust Austrian economics, consider that even Trotsky made similar remarks (though distributed socialism was discarded for other reasons). Note how he too takes for granted the existence of a universal mind (aka superhuman AI):
If a universal mind existed, of the kind that projected itself into the scientific fancy of Laplace – a mind that could register simultaneously all the processes of nature and society, that could measure the dynamics of their motion, that could forecast the results of their inter-reactions – such a mind, of course, could a priori draw up a faultless and exhaustive economic plan, beginning with the number of acres of wheat down to the last button for a vest. The bureaucracy often imagines that just such a mind is at its disposal; that is why it so easily frees itself from the control of the market and of Soviet democracy. But, in reality, the bureaucracy errs frightfully in its estimate of its spiritual resources. [...] The innumerable living participants in the economy, state and private, collective and individual, must serve notice of their needs and of their relative strength not only through the statistical determinations of plan commissions but by the direct pressure of supply and demand.
The issue of centralization is at the heart of the question of a superintelligent AI taking control and waging war on humankind, because war has an economic dimension: you have to be able to sustain a large production effort and deliver ammunition and fuel (even for armies of drones) over a long period of time. If you do this less optimally than your adversary, you will meet the fate of Imperial Japan and Nazi Germany, unless you start with a massive numerical advantage (say, Russian style).
Moreover, since human intelligence is distributed, it is far less fragile than AI intelligence. A super AI is the end point of a very long logistical chain, from chips to operating systems to power. It needs this whole chain of hardware and software to run. Drop a few humans on an island and they can survive and thrive from virtually nothing (see Jules Verne's books: Two Years' Vacation, Five Weeks in a Balloon, etc.). Moreover, as I have already mentioned, humans are resistant to electromagnetic pulses, whether natural (like the Carrington event) or artificial (like detonating a nuke in the upper atmosphere).
Conclusion
It is often impossible to arrive at a conclusive answer in discussions based on a series of hypothetical scenarios, also known as "what-ifs". Moreover, we should not overlook the considerable challenges involved in implementing ideas in the real world. Simply possessing exceptional intelligence is insufficient; it also takes perseverance and determination. To date, artificial intelligence has not demonstrated even a hint of these traits, whereas we humans exercise them daily. So my address to the (hypothetical) super AI is: bring it on, we will figure it out.
Your model seems to involve an AI going to war against humans. And in that context, you talk about how humans are unpredictable, like dogs in a dog park, or ants. So you seem to accept that humans may relate to a more advanced AI intelligence in a way similar to the way ants and dogs and less intelligent biological beings relate to humans.
We don't fight wars against ants or dogs. If ants get in our way, we deploy insecticides, and the ants are no longer in our way. If dogs are dangerous, we kill them, and if we would prefer they not reproduce, we surgically remove their ability to do so. Their ability to do things we wouldn't predict, because they aren't smart enough to pick the best course of action given their goals, doesn't give them a sufficient advantage over us to mean we would lose a war against them, if one were fought. Similarly, if there is a group of expert card players and I join them knowing little but the most basic rules of the game they are playing, they will be able to infer much more about each other's cards from what each chooses to play than they will be able to infer about my hand from my unskilled play. I will often play a card that confuses them, because a more skilled player would have done differently, but they will likely win consistently regardless.
We already have gain of function research, and examples of natural viruses equally as contagious as COVID, while being much more deadly. Part of what makes COVID contagious is that you're contagious before you're symptomatic, so we have proof of concept that viruses can spread without symptoms, and then cause illness after they have spread. A not even particularly smart AI could figure out that deploying a genetically engineered biological agent could kill most of the humans, and the rest of them wouldn't be able to respond because society and its coordination mechanisms would have collapsed.
I don't understand why you think there would be a war. A smart agent wouldn't fight; they would just do things that accomplish their goals, and if the humans got in the way or had resources that were needed, too bad for them. War is a highly inefficient and risky way of accomplishing a goal. Much better to just neutralize your opponent before they have a chance to respond, or accomplish your goals through cooperation (say, by running a business, gaining control of a lot of resources, and then deploying them as you please, where it turns out "as you please" and "as humans please" are not at all the same). These are all fairly humdrum ways of taking control of the world; I'm sure a smarter-than-human AI could think of more esoteric possibilities. So the fact that we could in principle plan for and counter the possibilities I've suggested so far (not that we could in reality coordinate well enough to do that, but in principle it's possible) isn't a real barrier to AI takeover.
But supposing there was a war, just for the sake of argument: decision making and information gathering can easily be decentralized. There doesn't have to be one controlling mind processing all the information. A program or model (maybe conscious, maybe not) that can digitally copy itself can coordinate with its copies, maybe not perfectly, but much more effectively than humans with non-identical brains and histories can coordinate with each other. Our advantage over other animals, when we first gained one, was the ability to communicate, coordinate, and pass knowledge between us, and over time AIs would be better than we are at that (there is already the ability to merge model weights between models and have them perform better that way than by training separately). So if there were a war between humans and smarter-than-human AIs, I'd give the advantage to the AIs.
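The weight-merging remark above refers to techniques like checkpoint averaging ("model souping"). A minimal sketch of the idea, using plain Python dictionaries of floats in place of real tensors (all names and values here are illustrative, not from any specific library):

```python
# Naive weight merging: linearly interpolate the parameters of two
# models that share an identical architecture. Real systems do the
# same arithmetic on tensors (e.g. averaging fine-tuned checkpoints).

def merge_weights(weights_a, weights_b, alpha=0.5):
    """Interpolate two parameter dictionaries: alpha*a + (1-alpha)*b."""
    assert weights_a.keys() == weights_b.keys(), "architectures must match"
    return {name: alpha * weights_a[name] + (1 - alpha) * weights_b[name]
            for name in weights_a}

# Toy example: two "models" represented as parameter dictionaries.
model_a = {"layer1.w": 0.2, "layer1.b": 1.0}
model_b = {"layer1.w": 0.6, "layer1.b": 3.0}
merged = merge_weights(model_a, model_b)
print(merged)
```

Whether the merged model actually performs better depends on how the two parents were trained; the point is only that combining learned parameters is mechanically trivial, unlike combining knowledge between two human brains.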
But you seem not to have considered the simple risk of human disempowerment leading to effective extinction.