
Saturday, October 07, 2017

General’s Intelligence Vs Artificial Intelligence – Can Military Strategy become an Algorithm?


11th May 1997 is a significant day. Besides being exactly a year before India conducted its second nuclear tests, it is the day IBM’s Deep Blue computer made Garry Kasparov, the human world champion, concede defeat in less than 20 moves in the sixth game of their chess match. Reflecting today, 20 years later, in his book “Deep Thinking – Where Machine Intelligence Ends and Human Creativity Begins”, Kasparov writes that even if he had won, it was only a matter of time before computers started winning. The supporters of Artificial Intelligence – the so-called hard AI camp – were delighted then, proclaiming a day, not too far in the future, when machines would be able to replicate the human decision-making process. In contrast, soft AI proponents believe that intelligence cannot be created artificially; it can at best be simulated at an appropriate level of detail to create solutions for some human decision-making problems.

Winning a game of chess, of course, cannot be considered a comprehensive test of intelligence. The alternatives in chess can be enumerated in advance, and winning depends to a considerable extent on how many moves ahead a player can analyze from the present board position. Supercomputers have more look-ahead capability than the best human chess players, Kasparov included. But do they have vision? Can they create, invent or innovate? More important, and perhaps more interesting, is the question: can machines with artificial intelligence – systems with a mind, if one may – conduct war operations? Can they assist in military tactics, operations and strategy? Can an artificial intelligence replace our military commanders and Generals?
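To make the notion of look-ahead concrete, here is a minimal, illustrative sketch of fixed-depth minimax search. It is not how Deep Blue actually worked (Deep Blue combined alpha-beta search, handcrafted evaluation and specialized hardware); the game interface and evaluation function below are hypothetical placeholders supplied by the caller.

```python
# Illustrative fixed-depth minimax look-ahead over a hypothetical game interface.
# Shows only the core idea of "how many moves ahead can be analyzed", nothing more.

def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    """Return the best achievable score from `position`, looking `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:          # search horizon reached, or game over
        return evaluate(position)        # static evaluation of the board
    if maximizing:
        return max(minimax(apply_move(position, m), depth - 1, False,
                           evaluate, legal_moves, apply_move) for m in moves)
    return min(minimax(apply_move(position, m), depth - 1, True,
                       evaluate, legal_moves, apply_move) for m in moves)
```

The depth parameter is exactly the “look-ahead” discussed above: a machine that can afford a larger depth simply sees further down the game tree than its human opponent.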

Military Competence as a test case of Intelligence
Human intelligence, or at least a particularly demonstrable version of it, is arguably best studied in war scenarios, where it is highly pronounced under stress and crisis. Wars and military situations produce a set of intelligence traits under extremely stressful conditions with the greatest stakes, even though war itself can be considered a most foolish act of human intelligence. Each of the two world wars of the last century resulted in rapid advancements of science and technology, driven by the intense competition for combat superiority. The wars ended, but the technology competition continued through the Cold War. Advanced technologies started to overpower classical warfare doctrines and strategies so much that a new term was coined in the erstwhile USSR: Military Technology Revolution (MTR). By the mid-1990s the term had mutated into Revolution in Military Affairs (RMA). The technological armed forces of the US, its allies and other high-tech powers, however, were confronted with the low-tech warfare of a different type of actor – guerillas, insurgents, terrorists, freedom fighters – who usually embed themselves in urban, rural or hill populations, or in dense jungles. The vapor army of these actors – non-state as well as state – used methods and techniques that conventional militaries, despite their cutting-edge technologies, are unable to fathom clearly, let alone respond to with finesse.

In 1975, Norman Dixon published a fascinating and provocative book titled “On the Psychology of Military Incompetence” that brought to the fore the inherent human traits that military Generals displayed in the wars of the past, traits that can only be termed incompetence. He writes, “One thing is certain: the ways of conventional militarism are ill suited to ‘low intensity operations’.” Clausewitz says, “Although our intellect always longs for clarity and certainty, our nature often finds uncertainty fascinating.” Given Clausewitz’s fog of war and the military incompetence inherent in the psychology of human Generals, a case for AI-based military generals can be made.

Can war outcomes be predicted or can operations be modeled mathematically?
In a war situation like the 1991 Gulf War, for example, the various factors affecting the outcome and conduct of war are uncertain, unquantifiable and usually unique to the war’s context. Military historian Col. T.N. Dupuy attempted to list the key factors and identified 73 of them. He developed the Quantified Judgement Method of Analysis (QJMA), based on analysis of historical war data, which he published in his book “Numbers, Predictions and War”. On 13 December 1990, about a month before Desert Storm began on 16 January 1991, Col. Dupuy analyzed and presented the military options in the Gulf War to the House Armed Services Committee. The five options he discussed were named Colorado Springs, Bulldozer, Leavenworth, Razzle-Dazzle and Siege. His models calculated potential US casualties at D+40 days ranging from 680 to 10,479 (dead and wounded) across the options; Iraqi casualties were estimated at 118,500. He published his options in a 1991 book titled “If War Comes: How to Defeat Saddam Hussein”.
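Dupuy’s actual QJMA rests on detailed operational lethality indices and dozens of environmental and behavioural multipliers derived from historical data. The toy sketch below is not his model; it only illustrates, with invented factor names and weights, the general idea of multiplying a base force strength by judgement-based modifiers and comparing the resulting combat power of two sides.

```python
# Toy illustration of a Dupuy-style "quantified judgement" comparison.
# Factor names and numbers are invented; the real QJMA uses Operational
# Lethality Indices and far more variables.

def combat_power(base_strength, factors):
    """Multiply a base force strength by a set of judgement-based modifiers."""
    power = base_strength
    for value in factors.values():
        power *= value
    return power

blue = combat_power(100_000, {"surprise": 1.2, "air_superiority": 1.4, "terrain": 0.9})
red  = combat_power(350_000, {"surprise": 1.0, "air_superiority": 0.6, "terrain": 1.1})
print(f"force ratio (blue/red): {blue / red:.2f}")
```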

It must be noted that the five options discussed by Dupuy were perceived and conceived by him through his experience as a retired service officer, not by a computer. The computer, though, helped in the analysis of these five options based on the QJMA and the associated theory developed by Dupuy. Incidentally, other such models have been built by various actors – the RAND Corporation, for example, developed a model named Situational Force Scoring (SFS) that uses certain expert-judgement factors besides assigning a firepower score to each weapon. Unlike chess, war is what can be termed an open system, where many imponderables exist. To successfully model a war is not only difficult but may fall in the realm of systems not amenable to modeling, because of the chaotic regions into which the non-linear effects of war may drive the interactions – what Clausewitz termed the fog of war.

What, then, can computers do?
Computers can help in evaluating various options, particularly for problems which, though complicated, are well defined. Before and during the 1991 Gulf War, the allied forces needed to schedule several lakhs of troops and hundreds of thousands of tons of cargo. Manually this was a daunting task, since each mission required a three-day round trip, visiting seven or more different airfields, under the command of up to four different air crews and consuming almost a million pounds of fuel. The challenge was met by a team of computer and operational-research scientists who helped schedule more than 100 missions each day – a task humanly impossible to plan without computers. A toy sketch of this kind of scheduling follows.
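The sketch below is a deliberately simple greedy scheduler, not the actual Gulf War airlift planning software; the fleet size, mission list and three-day round-trip figure are placeholders used for illustration.

```python
# Toy greedy scheduler: assign each cargo mission to the earliest-available aircraft.
# The real Desert Shield/Desert Storm airlift planning was far more elaborate.

import heapq

def schedule_missions(num_aircraft, missions, round_trip_days=3):
    """Assign each mission to whichever aircraft frees up first."""
    fleet = [(0, tail) for tail in range(num_aircraft)]   # (available_day, aircraft id)
    heapq.heapify(fleet)
    plan = []
    for mission in missions:
        available_day, tail = heapq.heappop(fleet)
        plan.append((mission, tail, available_day))
        heapq.heappush(fleet, (available_day + round_trip_days, tail))
    return plan

for mission, tail, day in schedule_missions(3, ["fuel", "ammo", "rations", "spares"]):
    print(f"day {day}: aircraft {tail} flies {mission}")
```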
Consider another example: selecting and attacking different targets. For each target – depending on its classification as a land, air or naval target, fixed or mobile, defended or undefended – a strike package of aircraft had to be created and chosen to achieve maximum damage. This required identification, assessment and location of targets using reconnaissance photographs. For mobile targets such as Scud launchers, movement from one location to another had to be tracked continuously, and whether a target needed another attack had to be reassessed on a continuous basis. This planning could not have been achieved without computer support and the related technology; a toy prioritization sketch is given below.
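Again purely as an illustration: the attributes, weights and targets below are invented, and real strike-package planning involves weaponeering, routing, tanker and escort support. The sketch only shows the flavour of scoring and ranking targets by value, mobility and defences.

```python
# Toy target-prioritization sketch: score targets, then attack in descending order.
# All attributes and weights are invented for illustration.

targets = [
    {"name": "fixed radar site", "value": 7, "mobile": False, "defended": True},
    {"name": "Scud launcher",    "value": 9, "mobile": True,  "defended": False},
    {"name": "supply depot",     "value": 5, "mobile": False, "defended": False},
]

def priority(t):
    score = t["value"]
    if t["mobile"]:
        score += 3        # mobile targets are perishable: strike while located
    if t["defended"]:
        score -= 1        # defended targets need larger, costlier strike packages
    return score

for t in sorted(targets, key=priority, reverse=True):
    print(f"{priority(t):>2}  {t['name']}")
```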

The 1991 Gulf War is already more than a quarter century old. Today’s wars have evolved into a peculiar mix of hybrids and multi-dimensional mutants that the RMA-based strategic thinkers of the 1990s did not predict. Generals in future wars will face more complicated decision-making scenarios. The progress being made in the field of Artificial Intelligence is no doubt substantial, but we are still far from a scenario in which military commanders are replaced by computers. No computer program has yet passed the critical “Turing test”, the threshold for assigning “intelligence” to a computer. The Turing test, proposed by the famous British mathematician Alan Turing, considers a scenario in which a human being talks to a computer through a network or some other means without being aware of its identity. If, after many questions and answers, the human starts believing that he is talking to another human being rather than a computer, then the computer can be considered intelligent. Currently, computers are essentially data-processing and information-processing machines. Their role applies predominantly to the first three levels of the intelligence pyramid of data, information, knowledge, intelligence and wisdom; the current state of use is limited to knowledge-processing systems that aid intelligent decisions. Wars continue to be led by Generals or military commanders, with computers used for information processing and as an aid to decision-making by investigating the outcomes of various alternatives. The evolution of strategies, which requires creativity and innovativeness, continues to depend upon the wisdom and experience of Generals. Computers are supposed to assist the Generals in determining optimal strategies, despite the psychology of military incompetence that plagues them.

Technology, however, is evolving and is now making inroads into realms hitherto unthought of. As per the law of increasing intelligence of technical systems (a PDF on the law can be downloaded at http://aitriz.org/articles/InsideTRIZ/323031322D31312D6E61766E6565742D627573686D616E.pdf), dumb/unguided systems became guided systems, then smart systems, brilliant systems and genius systems. Today, we are already in the era of smart munitions. Brilliant munitions are emerging. Genius munitions will be the next stage.

A recent report indicates the possible use cases of Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) in the defence and security of a nation. The Pentagon has already established an Algorithmic Warfare Cross-Functional Team that will fight ISIS by applying machine learning and deep learning to the rapid infusion of real-time satellite and drone data, images and video feeds. The immediate task is for the machines to learn to recognize 38 critical classes of objects in the video and image feeds. Once they have learned, the machines would prefer – and so would their controllers – to let them decide what to do with, or against, the identified targets: a scary “automated kill” intelligence built into the machines. The recent incident of an AI creating its own language, resulting in the shutdown of the system (for the time being) at Facebook, clearly indicates both the potential and the potential pitfalls of the road we are on.
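For a sense of what “machines learning to recognize objects in imagery” looks like in practice, here is a minimal sketch using a generic COCO-pretrained detector from the open-source torchvision library. This is emphatically not the Pentagon team’s pipeline; the file name is hypothetical, the model is an off-the-shelf placeholder, and any decision about the detections is left to humans.

```python
# Minimal sketch: run an off-the-shelf object detector over an image frame.
# Illustrative only; not the actual Project Maven pipeline.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_path, score_threshold=0.7):
    """Return (label, score, box) triples above the confidence threshold."""
    frame = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([frame])[0]
    return [(label.item(), score.item(), box.tolist())
            for label, score, box in zip(output["labels"], output["scores"], output["boxes"])
            if score >= score_threshold]

# Example usage (hypothetical file name):
# print(detect("drone_frame_001.jpg"))
```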

AI for Generals

Although we are some way from replacing military commanders with AI, we reckon technology is evolving fast enough that we must take a serious call on how much of a General’s intelligence we should replace with artificial intelligence, when, and for what tasks. Giving the last point to Kasparov – the human chess master who gave in to artificial intelligence two decades back – despite military incompetence, human stupidity and increasing machine intelligence, human creativity and imagination will be needed by the machines going forward, not only for controlling them but also for enabling them. AI will continue to require the human mind, with intelligence or otherwise, since thinking – specifically reflective or critical thinking – is still not mechanizable. The way forward indeed will be AI supplementing and enhancing the General’s intelligence and, of course, suppressing their military incompetence.

*****
