A Putin Nuclear Strike on Ukraine? A Chinese Attack Against Taiwan? How the US Prepares for Global Nightmares
By John McLaughlin - December 12, 2022
For
many of the dangerous situations confronting the United States today,
there is precious little precedent and little guidance on how to
respond. Look no further than the possibility that Russian President
Vladimir Putin might use a nuclear weapon in Ukraine or that China might
invade Taiwan. Add the prospects of a seventh nuclear test by North Korea, Iran acquiring a nuclear capability and widespread unrest in China … well, you get the idea.
The potential for global surprises has rarely been greater. And surprise is the enemy of any nation’s foreign policy.
The
challenge for U.S. policymakers is to prepare for all these traumas and
game out how the U.S. would respond — not just in the moment, but in
prolonged and escalating circumstances. For example, not just whether
Russia would go nuclear in Ukraine, but what the U.S. would do and what
next steps Russia might take. All this to avoid relying on improvisation
and potentially chaotic responses, if and when the moment comes.
The
good news? Such planning is happening now when it comes to the Taiwan
and Russian situations. The bad news? Those are as complex and dangerous
as any scenarios in recent memory.
How does this scenario planning work? And who does it? My experience in government taught me that there are many ways to prepare for such uncertainties, and that there are entire teams of people whose jobs might best be described as “preparing for nightmares.”
War games
The concept dates to the early 19th century, when a Prussian army officer brought actual games to his superiors and suggested that they be used to simulate conditions on the battlefield. Today, war gaming in the U.S.
happens inside government agencies and private think tanks, and it takes
many forms.
There is the geopolitical war game — a kind of chess match of diplomatic moves and countermoves — and then there is the more kinetic variety, in which U.S. military officers, divided into teams, play through a series of military “moves” on a constantly changing battlefield.
For years, the U.S. has used war games to
simulate a U.S. war with China over Taiwan. As Grid’s Joshua Keating has
reported, in classified Air Force war games held since 2018, “blue”
teams representing the U.S. have repeatedly lost to Chinese “red” teams.
That’s partly by design — these games are meant to expose vulnerabilities — but the simulations have also highlighted specific
issues involving China’s geographic advantages and its rapid development
of certain weapons (in particular anti-ship ballistic missiles, capable
of precision strikes on U.S. ships at a range of more than 900 miles).
The 2021 Air Force game reportedly showed improvement for the U.S. side, though the game’s commander noted that many of the necessary U.S. military assets were not yet in development or production. His conclusion: “If we change, we can win.” More broadly,
these war games have shown the grave damage China could inflict against
Taiwan in the early days of a conflict, but also the likelihood of a
long and drawn-out war once the U.S. was involved — a war that could be
devastating to the U.S. and China both.
Other war games take into
consideration both military capabilities and political factors,
sometimes using a team that mixes government players with outside
experts. This was the case in a war game in March 2022, run by West Point’s Modern War Institute, which simulated the Russia-Ukraine war just weeks after the actual Russian invasion.
That game opened with the
U.S. players overestimating Russian capabilities but quickly coming to
the prescient view that over time Russia could not sustain the combat
power necessary to take and hold a major city. The game also foresaw the eventual need for a Russian mobilization, the political tensions that would follow in Russia and a long-lasting stalemate on the battlefield.
What’s gained in these games? In the Ukraine case, those conclusions helped underline some of Ukraine’s strengths and, more importantly, expose weaknesses in Russia’s position.
More broadly, the
games help the U.S. “team” take the measure of itself, expose resource
and coordination problems among U.S. agencies and with allies, and test
how a range of responses might work under the pressure of time and
surprise.
Red teaming
“Red teaming,” which also originated with the 19th-century Prussian military, is an invaluable
variation on this — and one that I’ve seen work effectively. Call it war
gaming with a twist.
A team of experts is asked to “become” the
country or group whose actions you are trying to anticipate. And
“experts” is the key word. The team must consist of people with two
qualities: deep expertise on the adversary, and an ability to challenge
conventional wisdom and avoid “mirror imaging” (the tendency to assume
the adversary will behave as Americans would). The team members must be
expert enough to enter the enemy’s social, cultural and ideological
milieus and think as they would.
Red teaming differs from war
gaming in that there is no opposing side; you don’t want this group
reacting to others — you just want them to replicate and channel the
thinking, logic and planning of your adversary. In colloquial terms, to
get in their heads.
So in my Taiwan example, this team would
consist of people schooled in Chinese and Taiwanese culture and history,
and ideally with fluency in Mandarin. They might be given two kinds of
tasks: playing Chinese policymakers to game out how Beijing would pursue
its aims on Taiwan, or playing Chinese officials reacting to setbacks
in their strategy. In the Putin-nukes scenario, the same idea — but with
the expertise focused on Russia, nuclear weapons and Putin himself.
I
know red teaming can work, based on the CIA’s use of the technique
after the terrorist attacks of 9/11, when the agency felt acutely
responsible for ensuring that another attack did not take place. We took
our most adventurous and unconventional thinkers and formed a red team
with a very specific task: We told them to “become” the terrorists — to
imagine how and where they might plot their next attacks. Combined with
raw intelligence we collected, the red team’s work guided many of the
steps the administration took to harden vulnerable targets in the U.S.
and abroad.
This proved important because, in my experience, the
U.S. tendency even after 9/11 was to rely almost passively on
intelligence to warn policymakers; in other words, wait for a CIA
warning and then react. Our message was that we would do our best but —
against an enemy that played by no rules — there would always be a
chance something might be missed, and someone would get through. Better
to augment intelligence warnings with proactive protection of potential
targets that we could identify.
It was hard to convince Americans — government officials and ordinary citizens alike — of what an open target the U.S. was at that moment. In the immediate aftermath of 9/11,
watch lists were not yet systematic and effective; people could board
planes with knives; and in the entire aviation system, there were only
33 air marshals. The red team exercise helped expose these and other
vulnerabilities.
Devil’s advocacy
It’s a simple,
well-known concept, one that appears to have had its origins in a centuries-old Catholic Church practice of testing the arguments for and
against conferring sainthood. In its modern, secular form, it’s another
tool that helps government decision-makers narrow the potential for
surprise or error.
Devil’s advocacy is most useful when leaders
have arrived at a conclusion with confidence but must test that
conclusion relentlessly, given the disastrous consequences should they
turn out to be wrong.
Unlike war gaming or red teaming, the key
here isn’t military or cultural expertise; rather than getting into the
adversary’s mind, the idea for a devil’s advocacy team is to clinically
evaluate and challenge the mainline judgment — to test it by arraying
the data in ways that arrive at a different conclusion, or to detect
some missing piece that is distorting the judgment.
This may
sound like an academic exercise, but it can have profound impact. When
the CIA concluded that Osama bin Laden was in Abbottabad, Pakistan, in
2011, it used this technique to test its judgment. It asked another
agency to look at the case and provide a competing assessment to see if
it differed from the mainline view — in other words, devil’s advocacy.
The agency also conducted what’s known as “analysis of competing hypotheses” — asking what other conclusions could have been consistent with the same evidence. Could the Abbottabad resident have been someone other
than bin Laden? A different terrorist? A man who resembled him?
Finally, several other individuals who had not been involved in the case were brought in to do an additional cold review of the data and conclusions.
You might say that there were three rounds of devil’s advocacy, all looking for differences or weak spots in the analysis.
The
precise results of those exercises remain classified, but I can say
this much: The multiple competing assessments offered the Obama
administration a 360-degree view of the information and analysis, and
ultimately gave the president and his top advisers confidence that the
data had been thoroughly scrubbed and tested as they wrestled with the
decision to launch the special forces raid on Abbottabad.
It was
a different case, Iraq in 2003, that encouraged the increased and more
systematic use of such techniques — given the flawed assessments of
Iraq’s weapons of mass destruction and the assumption that the U.S.
could overthrow Saddam Hussein without the trauma that followed. As for
the Iraq War itself, it’s not clear that any prewar devil’s advocacy
exercise would have made a difference given the momentum for going ahead
that had built up in the Bush administration and Congress.
“What if” analysis
Here
again, the words seem simple, but in practice the technique is
important. The idea in a “what-if” analysis is to shift thinking from
“How likely is it?” (say, that Putin might use a nuclear weapon in
Ukraine) to “How could it actually come about?” You start by assuming
that the hypothetical has already happened and work back from there; you
assume, for example, that China has attacked Taiwan, Putin has used a
nuclear weapon or Iran’s regime has collapsed, and so forth. And as you
work your way back, you consider what must have occurred at each step.
What would be the indicators or tripwires to watch for? What would we
expect to see if the nightmare scenario were coming? And how would we
know?
And then you gear your technical tools and best agents to watch for those things.
Scenarios analysis
Sometimes
a global crisis or hot zone will defy nearly all the other tools. An
outcome may be too uncertain, complex or controversial to place
confidence in any single prediction. Here’s where “scenarios analysis”
comes in. This was a technique I relied on often when I was managing
what are called National Intelligence Estimates (NIEs) from 1995 to 1997
and wrestling with questions about North Korea, Russia, Iran, and
nuclear and missile proliferation.
This is not about predicting
the future or gaming out moves, but about mapping the range of possible
outcomes. One of the most intelligence-savvy policymakers I worked with,
Gen. Brent Scowcroft, national security adviser for President George
H.W. Bush, often said the real role of intelligence was to “narrow the
range of uncertainty when difficult decisions have to be made.”
Scenarios analysis can help you do that.
Here, North Korea may
be the most relevant — and nightmarish — example. Policymakers have
worried for years about the possibility that a nuclear-armed North Korea
could collapse under the combined weight of economic and social
problems, grinding poverty, and a brittle dictatorship — and the
nightmares that might follow. But the potential ways in which a North
Korean collapse might play out are almost limitless; put differently,
you’d have to conduct hundreds of war games or red-team exercises to
plan appropriate responses.
So rather than war gaming, you start
with the forces and events that could determine the outcome. You
typically generate at least three or four scenarios — “best-case,”
“worst-case,” and one or two in between, along with the indicators that
each of those is coming to pass. Ideally, you make a judgment about the
relative probability of these outcomes. This gives the government some
guidelines for planning against various contingencies. In the case of
Ukraine, your best case might be that Ukraine succeeds in pushing Russian forces off all the territory they have taken, including Crimea.
Your worst case could be that Russia regroups and roars back to occupy
most of the country. And you focus your energies on those that are most
likely.
When it works — and when it doesn’t
At the CIA
in the spring and summer of 2001, we had strong evidence that al-Qaeda
was planning a major attack on the U.S. This was the result of a huge spike in intelligence reporting more than of any conscious application of the foregoing techniques — although I think it’s fair to say that we were in a nonstop “scenarios analysis” exercise, weighing different possibilities in terms of potential targets and methods. We knew it was coming and that it was coming to the U.S. But we were unable to identify the timing and specific targets.
In the aftermath of 9/11, however,
the response benefited from an elaborate “what if” exercise that had
been carried out in the year prior. In its final months, the Clinton
administration had given the CIA a task in the form of a question, based
on the possibility of a major al-Qaeda strike on U.S. soil: What if the
CIA and other agencies were unconstrained by resources and given
special authorities in the aftermath of such an attack? In those
circumstances, what would we do to destroy al-Qaeda?
Our response
was to develop what we called a “Blue Sky” plan, a term that reflected
the unconstrained conditions the administration had posed. We delivered
the plan in December 2000; it came off the shelf at Camp David, where
President George W. Bush had assembled the national security team for
the first full discussion of response strategies four days after the
9/11 attacks. Two days later, the president told us to put our plan into
action. Within weeks, CIA teams on the ground in Afghanistan had
prepared the way for U.S. Special Forces — and with the combined effort
of CIA and the U.S. military, Kabul and the Taliban fell by November of
2001.
In the case of the Putin-launches-a-nuke scenario, I
imagine a mix of all these techniques would be needed: war gaming for
the chess match of moves and countermoves; red teaming, to be sure that
decisions are made with a sophisticated understanding of the current
Kremlin mindset (no small task); and what might be called the mother of
all what-if exercises — using that calendar-in-reverse approach to do
everything possible to ensure that it never comes to that.
None
of these exercises guarantees perfection — or anything close. But using
them — and using the results wisely — may narrow the chances of
confusion or failure in national security emergencies. If we can answer
those questions about Putin, or game out all the military and political
steps that might play out should President Xi Jinping move with force
against Taiwan, we have a better chance of success if and when the
nightmares come.
Perhaps the clearest wisdom on this comes from
then-Gen. Dwight Eisenhower. Having commanded the largest amphibious
military operation in history, the D-Day invasion of Normandy,
Eisenhower was acutely aware of the many things that could surprise a
decision-maker. He had no illusions that he could design a perfect plan
that would hold up under the pressures and chaos of battle, but he also
knew that not planning would leave him even more exposed to surprise and
disaster.
As he put it: “In preparing for battle I have always found that plans are useless … but planning is indispensable.”