The Science of War: Defense Budgeting, Military Technology, Logistics, and Combat Outcomes

Oct 13, 2009

Michael O'Hanlon explains how military modeling and planning are done, taking as examples Desert Storm, the Iraq War, and the decisions to be made now about Afghanistan.

Introduction

JOANNE MYERS: Good afternoon. I'm Joanne Myers, Director of Public Affairs Programs, and on behalf of the Carnegie Council, I want to thank you all for joining us.

I'm very pleased to be welcoming Michael E. O'Hanlon to our program this afternoon. He is someone whose writings I have personally been following for a while, so I'm delighted that you're here with us today.

Today he will be discussing his most recent book, The Science of War: Defense Budgeting, Military Technology, Logistics, and Combat Outcomes.

These are extraordinary times in our nation's history. We are facing some of the most daunting and complicated national security challenges in more than a generation. Homeland security concerns still abound in the wake of the September 11th attacks and we are fighting two wars, in Iraq and Afghanistan.

To protect our country and to ensure our position as the most sophisticated, high-tech military power in the world, we are spending more on defense than at any time since the end of the Second World War.
President Obama, as we have seen, will have to devote a great deal of time and money to national security, hard power, and war. Some of the questions this raises are: How should these competing demands be prioritized; how much money will be needed; how much will be available; and how much should we spend?

In The Science of War our speaker draws on a broad range of sources, along with his considerable expertise as a defense analyst and teacher, to provide a brief but excellent primer on defense matters. He addresses such issues as our country's defense budget; he helps us to understand military logistics and scientific issues that arise when we discuss defense policy. This includes space warfare, missile defense, and nuclear weapons development.

The complexity of the U.S. military can be daunting, and the way we spend our money, choose how to employ our forces, and allocate our resources has enormous implications for national defense and the economy. Accordingly, I believe The Science of War is one book you will want to read in order to have a better understanding of these issues.

So please join me in welcoming our speaker, Michael E. O'Hanlon. Thank you.

Remarks

MICHAEL O'HANLON: Thank you, Joanne. That's a very kind introduction.

Thanks to you all for being here, especially for a topic that is potentially too broad, potentially dry, potentially depressing. I really appreciate your interest, although I guess, as Carnegie Council members, given the responsibility you feel towards the public policy process, it's no great surprise that you would be here.

This book is, in part, and really in large part, a textbook. So I am especially thrilled and honored that a group like this would be willing to hear me talk about a textbook. I'm not going to play up the textbook part, but I do want you to know a little bit about some of the broad motivation of what I was trying to do here before I really get into Afghanistan. I'm going to talk a little bit about what the book says about Afghanistan as the main thing I'd like to do to prime the discussion, and then I think it will be a lot of fun to go back and forth on that.

When I first talked to Chuck Myers from Princeton Press about the idea of doing this, we realized that there were a number of potential good reasons to have such a book out. There really isn't that much of a defense textbook of this type on the streets, or even at the war colleges. There are typically readers; people will put out compendia of articles and that sort of thing.

There have been some authors in the past who have written books—like James Dunnigan wrote a book, How to Make War, that was partly a primer on how defense establishments operate, and he had some calculations and numbers and so forth as well.

I wasn't going to try to do that. I was going to leave it to others to write the primer on how the U.S. military is structured and some of the history of various recent operations. I just wanted to distill some of the methods that, at Brookings for 15 years and at the Congressional Budget Office for five years before that, I have been trying to use in my own work—what you might call sort of the technical side of military policy.

But when I said that, I knew that, even though you're all being very kindly attentive, I could lose people very fast by talking about the technical side of defense policy, that people would either be daunted by the complexity of things, or just uninterested because it didn't seem up their alley. So let me say a few words of reassurance and also of what the scope and guiding philosophy of the book is.

You know, I'm not trying in these 200 pages to create some elaborate computer program that predicts the outcomes of wars; I'm not trying to create a 1,000-page-long defense appropriations bill that gives every element of detail; and I'm not trying to in any way solve the debates about space weapons or nuclear testing or other scientific matters, which are the last chapter of the book, the sort of true science of war. "Science of War" is, I think, a good title for the budget part as well, and it's really the last chapter that has the technological component.

But what I'm trying to do is to get at the intermediate level of analysis. This is often the level where policymakers either get stuck or want help or are trying to operate.

For example, I remember Paul Wolfowitz once saying that when he came into the Pentagon he was frustrated because either he got no detail on questions he was asking about, or he got a 1,000-page data dump on issues like how much money do we spend doing X, Y, or Z.

If you want to conceptualize new ways of doing business—and I realize invoking Wolfowitz's name may or may not be the right way to start a conversation—but nonetheless, if you want to try to do that, how do you begin to get traction in this incredibly difficult arena of policy?

You need some sort of guiding lights that are more detailed, more analytically meaningful than just broad observations or anecdotes, and yet more manageable and more accessible than all the incredible detail that is in some of these models, some of these budgets.

Let me give you an example of why I also think we can sometimes do pretty well in this field at this level of detail.

Back in the prelude to Operation Desert Storm in 1990, when, as you know, we were sending a lot of forces to Saudi Arabia to get ready for the liberation of Kuwait, there was a lot of talk about whether we should do this or not, how many casualties would be incurred, whether Saddam would use chemical weapons, whether it would be like World War I trench warfare, and a lot of these kinds of issues were very much in the press and very much on the minds of military planners.

For those of you who were, I'm sure, following this in many cases just as closely as I was, you may recall the Pentagon was trying to do a lot of war games and combat models to figure out what could happen. The Pentagon models sometimes predicted 10,000-30,000 American fatalities.

Meanwhile, my professors at Princeton and elsewhere, where I had been learning this stuff, were doing back-of-the-envelope calculations at the level of detail that I try to capture in this book—maybe a little more than an envelope, maybe a notepad—and basically asking: What are the key determinants of what's going to happen here? What are the best historical analogies we can think of that are relevant? What are the big wild cards that we have to keep in mind, because we don't want to get too precise?

The problem with some of these Pentagon models and war games is they're so elaborate that one group of people has to go build a model, another group of people operates the model, and a third group of people is the consumers of the model, and the consumers don't actually even know what assumptions were built into the way the thing worked.

What we try to do with these models in the book is keep the basic political and strategic assumptions visible.

And so my advisors and other academics went through and did some of these calculations. Some of them were curly-haired Ph.D.s like me. Others were retired colonels who had gotten into the business of defense data analysis. In other words, this is not a cultural issue about academia versus the military; it's almost more of a methodological issue. But the people who were trying to do things on a notepad did much better in predicting the outcomes than the people who were using the 10,000- or 100,000-line computer code.

Now, that's not always going to be the case. But we still have to ask why it was the case here. The reason is that the people who were doing things a little more simply kept thinking strategically—you know, what's my best historical parallel; what things is Saddam probably pretty good at, and what things is he probably pretty bad at?

If you're making the computer model, you have to actually make all those decisions when you start, because you've got to write that into the code the way a lot of these models are done. And then the user doesn't necessarily have the flexibility to change those inputs.
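To make that concrete, here is a minimal sketch of what keeping the assumptions visible can look like. Lanchester's square law is used purely as an illustrative stand-in for a simple attrition model—the talk does not endorse any particular equations—and every number is a hypothetical assumption.

```python
# A "notepad-sized" attrition model whose strategic assumptions are
# explicit, named inputs rather than values buried deep in the code.

def fight(blue, red, blue_effect, red_effect, dt=0.1):
    """Lanchester square law: each side's losses per time step are
    proportional to the enemy's size and per-unit effectiveness."""
    while blue > 0 and red > 0:
        blue, red = (blue - red_effect * red * dt,
                     red - blue_effect * blue * dt)
    return max(blue, 0), max(red, 0)

# The assumptions sit right here, where a consumer of the model can
# see and challenge them (all values hypothetical):
blue_left, red_left = fight(
    blue=100, red=120,
    blue_effect=0.06,  # assumed: well-trained crews, good sensors
    red_effect=0.01,   # assumed: poor dug-in discipline, no forward observers
)
print(f"Survivors -- blue: {blue_left:.0f}, red: {red_left:.0f}")
```

Change either effectiveness assumption and the predicted exchange ratio swings dramatically—which is exactly the sensitivity a policymaker needs to be able to see.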

Now, after the war—if I could stick with this Desert Storm analogy or lesson just for a moment before I weave my way towards Afghanistan—after the war, a couple of people, like my friend and fellow think-tanker Steve Biddle at the Council on Foreign Relations, went back to these models and said: "Okay, is there any way that we can correct the model in retrospect—in other words, adjust it so we get the right answer now that we know what the right answer is? If we can figure that out, that will tell us something about what the model should have done but failed to do."

So Steve went back and he looked at the so-called Battle of 73 Easting, which was one of the big encounters where the Republican Guard was moving in one direction and we found them. They had had a little bit of time to dig in. But it was one of the maneuver encounters, as opposed to the dug-in trench lines by the Kuwaiti-Saudi border where Saddam had put many of his forces.

Steve said: "Okay, the model predicted this would be a very tough fight and that we would lose a lot of American equipment, a lot of American soldiers. What happened was we actually didn't lose any American equipment, and I forget if there were a couple of people wounded—but basically we decimated the Iraqi formation there."

So Steve went back and he said: "What did the model do wrong?"

Well, the model assumed that because the Iraqis were owning and deploying Soviet equipment—which was roughly comparable to ours, we thought at the time, especially if you're doing conservative defense planning and you don't want to presume advantages that you can't prove—we assumed they would operate it correctly as well.

So if you are following proper techniques and you've got all this nice T-72/T-80 technology, the modern Soviet tanks of the era, and you've got a day or two to prepare, you should dig in very well; you should have your tank covered up by hard, firm soil; you should take all the dirt away so that people can't see where you've done the digging; you should—it's the desert, so it's a little bit hard to use foliage—minimize the visible signs of digging and patch up the holes in the ground once you're done; and then you should deploy some teams two to three miles ahead of your main formation, so you can see the Americans coming—this now being from the Iraqi perspective.

So Steve went back and he said: "Well, you know what happened? We've done a lot of prisoner interviews. We found out that the Iraqis forgot to put those observers up in their advanced posts, so they got no warning we were coming. That was mistake number one.

"Second, because they were afraid of American air strikes, they were not spending the time in their equipment unless they knew that there was a ground assault coming. So, given that they had no forward observers, they also were not in their equipment at the time we got close, so they couldn't get off a first shot.

"And third"—and this one is really interesting—"they actually didn't dig in very well, and so we could actually see their tanks, partly because we could see the dirt that they left in place right in front of the tank lines, so it actually gave away their position, and then left the vehicles exposed enough that we could shoot directly at them with no packed earth to protect them from the incoming tank rounds."

So they did everything wrong. Make the assumptions match that reality, and the model finally gives you the right answer.

And so, lo and behold, in 1993 the Pentagon's best computer models accurately predicted that we would have a 100:1 advantage in the loss rate in the Battle of 73 Easting; whereas some of the professors who did it on a notepad back in 1990 had more or less gotten it right at the time. Now, they were wrong as well; they were too pessimistic by a factor of two or three.

But one more theme of this book: when you're doing combat modeling, that ain't bad. And that's about as well as the field is going to allow you to do. Because there are so many human uncertainties in war, as we're all aware, the idea that you can distill this down to a science is a little bit misleading—which may make you wonder why I called it "The Science of War."

Well, obviously, I was trying to make it punchy and play off Sun Tzu a little bit [author of The Art of War]. But in the opening pages I almost begin with an apology to Sun Tzu and Clausewitz, because I think they're mostly right about the nature of war, that it is mostly about human endeavor and enterprise and effort and courage and tactics and performance under stress and all these intangibles that we cannot easily quantify or pretend to be scientific.

And yet, if that's the only way you look at war, how do you decide how big the defense budget should be; how do you decide which wars you are likely to prevail in quickly and which ones are maybe too hard; how do you decide if you're winning or losing in Iraq or Afghanistan; how do you decide if we should sign and ratify—we've signed already, but should we ratify—the Comprehensive Nuclear Test Ban Treaty?

There is a science of war. That's important—not because it resolves issues, not because it answers questions definitively, but because it at least constrains and grounds the debate in reality.

And then, I think, what I tell my students and what I try to say in the book is that this is grist for policy debates. The Science of War does not answer very many questions. It provides information to have a healthy debate. But it keeps that debate from being pure speculation or fiction, and it allows you to test assumptions, explore hypotheses, and maybe think of new hypotheses you didn't think of before.

I'll give you one more anecdote back from the Iraq experience and then move over to Afghanistan.

Back in 2002, when we were thinking of invading Iraq, I was asked to estimate the casualties from an invasion. I wrote an article on it, which appeared in Orbis magazine, and some of it is summarized in the book.

Of course, all of my professors who had done this for Operation Desert Storm were much smarter than I was, and they stayed away from that issue with a ten-foot pole. I was actually very proud of all of them—they were only off by a factor of two or three, as I said, in their estimates of how many people we would lose. The Pentagon models were off by a factor of 10 or 20 or 30. And certainly, there was no doubt about the outcome or the duration of the war. So I thought military modeling did have something to offer. So I said, "Sure, I'll take a crack at this."

Now, my methodology for predicting this war and its outcome was wrong. I got it wrong. In other words, I did not think the war would lead to a four- or six-year-long insurgency.

However, the discipline of using analytical methods to try to think through plausible scenarios and try to think through a relatively optimistic outcome and a relatively pessimistic outcome did lead me to estimate that we could easily lose several thousand people in the course of the war. Now, I thought that would be within a few months, not within six years.

I did not get the basic dynamics of the war correct. And so I am not trying to sit up here and say "I told you so." I did not tell you so. I did not understand what was coming, and I share the guilt of the 95 percent of defense analysts who didn't really anticipate it.

However, I did wind up with two observations that I thought were important: one, we could suffer several thousand fatalities in this war, and we had better be ready for that; and two, we had better have a fair amount of force and capability for all the things that could go wrong. So on those two fundamental policy points I think I got it right.

The methods that are in this book are part of what forced me to keep going back and asking: "Well, what's a reasonable, plausible, pessimistic case?" So I didn't just jump on the Ken Adelman/Richard Perle "cakewalk" bandwagon, which I was never a believer in and was criticizing at the time.

But nonetheless, if you just do this as sort of anecdote, impression, gut instinct, cocktail party conversation, bring in your favorite historical analogy, you can lead yourself astray—and I could have easily led myself astray. But the methodology of doing proper defense analytics says you've got to do a reasonable optimistic case and also a reasonable pessimistic case whenever you deal with something as inherently uncertain as war.

So by trying to be precise and scientific, I was forced to remember that war is unpredictable and unscientific. That led me to do the same calculation three different ways, and it kept me a little more grounded than most of the other people, who were saying it was going to be really fast and easy.
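As a toy illustration of that discipline, here is how the same estimate might be run three ways. Every figure below is a hypothetical stand-in, not a number from the Orbis article:

```python
# Run one casualty estimate under an optimistic, a baseline, and a
# pessimistic scenario instead of producing a single number. All
# troop counts and loss rates are hypothetical illustrations.

scenarios = {
    # scenario: (troops committed, assumed fatality rate)
    "optimistic":  (150_000, 0.001),  # resistance collapses quickly
    "baseline":    (150_000, 0.010),  # contested fighting for key cities
    "pessimistic": (150_000, 0.030),  # chemical use, protracted urban sieges
}

for name, (troops, rate) in scenarios.items():
    print(f"{name:>12}: ~{troops * rate:,.0f} fatalities")
```

The point is not any one output but the spread between them: a pessimistic case in the thousands is what forces planners to prepare for things going wrong.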

So again, the last thing I'm trying to do is claim personal clairvoyance on this. And I have certainly been beaten up a lot by people who said I supported the war too much, and they're probably right, that I should have been a little bit more worried that we were not preparing adequately.

But it did lead me to make two points throughout: this could be very ugly and we better have a big force. The methods guided me to those judgments.

Now let me get to something that is more immediately on all of our minds in today's policy conversation, which is Afghanistan.

There is a section in this book that talks about counterinsurgency. There are actually two sections. One of them is how you size forces for these sorts of missions—and that's certainly relevant to the whole Stanley McChrystal proposal we're all discussing and thinking about today. The other is how you know if you're winning or have a chance to win, or how you will know if you've lost. Both concepts are covered in Chapter 2, the section on force sizing, combat modeling, and outcome prediction.

For those of you who missed the Cold War, there's a brief section on nuclear exchange calculations. There's also a section on amphibious assault, if you like to think about Normandy or Taiwan or other scenarios like that. But there's a fair amount that is on the issue of counterinsurgency and stabilization efforts.

Let me just say, as I ease into the Q&A and discussion period—and I very much look forward to your thoughts on all these issues—that, by the way, we can easily go into nuclear testing, which I think is going to be on the radar screen quite a bit next year as well.

But let me just try to get the Afghanistan issue squarely on the table. These are not going to be definitive judgments or arguments that I've somehow calculated the right answer to any of these things.

Again, the philosophy of this book is—there's sort of a yin and yang to it—there's a healthy skepticism, that we better try to be precise and analytical and bring evidence to bear in these issues because they're so important, and we don't have the alternative of just trusting some great wise man from on high who's going to tell us the right answer; and yet we better not believe we ever have the methods quite down either because war is so inherently unpredictable.

Of course, we all remember Vietnam, where we thought we had the four or five right metrics—body count; crossover rate; how many tons of supplies the Vietcong needed to bring down the Ho Chi Minh Trail. All these metrics, which we were convinced were definitive in some sense and telling us when the war would be won, actually proved to be counterproductive, I would argue—some of you may disagree, and we can talk about that too. They were counterproductive because they reinforced our tendencies to use certain tactics that were very inappropriate for counterinsurgency, like a heavy use of firepower—far worse than we have ever done in Afghanistan, even before Stanley McChrystal arrived. There was way too much reliance on what Andy Krepinevich, the Army historian and former Army officer, described as the concept the Army wanted to employ in Central Europe, which it then simply transplanted to Vietnam, applying a lot of the same firepower-intensive concepts, with the Air Force doing the same in much of the carpet bombing we used to try to shut down the Ho Chi Minh Trail.

It turned out that the Vietcong could recruit faster than we could kill them anyway, and we were helping them recruit by the kind of tactics we were using. It turned out as well that the Ho Chi Minh Trail didn't need to carry very much in the way of supplies because the Vietcong got most of what they needed locally. And so all of our assumptions about how if we bombed the Ho Chi Minh Trail we would end the war turned out to be wrong and led to counterproductive tactics as well.

So again, there's no real science to this in the sense of creating a definitive answer, and we have to be wary and skeptical that any given set of metrics is going to be the right way to think about it. But what's the alternative—just to make things up, just to reason by analogy, just to wait for the secretary of defense to tell us what the right answer is? I don't think so.

So we've got to have this yin and yang, this tension of trying to make things more analytical and precise, always being aware that we could be doing it wrong, we could be leading ourselves astray, re-questioning our assumptions, keeping the calculations simple enough that we don't forget what the assumptions were, so we come back and reexamine them. That's the philosophy of the book.

Now let me apply it to Afghanistan.

On force sizing, one thing we could say, I think, right off the bat is that neither you nor I nor Stanley McChrystal knows the right number of U.S. troops for Afghanistan. Now, I'm a supporter of his proposal—let me just admit that up front and I'll come back to it. But this business is not so precise that we can know exactly what the right number is.

If it were so precise, do you think that David Petraeus, who was already in command of Central Command back in the winter, would have blessed a review process that concluded 68,000 Americans would be enough—or maybe he wanted 75,000; there were reports that Obama didn't give everything they wanted, but he gave most of it? Would Petraeus have approved a plan that he knew was 40,000 troops short without at least voicing that objection?

In the wintertime we didn't know that 68,000 troops, roughly the number that we've now got in Afghanistan, would be inadequate. We really thought it might be enough, because we were hoping that we could concentrate these forces in the right places, the right key strategic zones, even though Petraeus' manual had said back in 2006 when he wrote it at Fort Leavenworth that you need one counterinsurgent for every 50 people in the population, and that implies that you need 600,000 for Afghanistan.

Even though there was no way we were going to get close to that number, Petraeus said: "Okay, we're going to have 100,000 NATO troops and a couple hundred thousand Afghans—that's about half of 600,000. That's a pretty good start. Let's start in the south and the east; that's where the insurgency is strongest. We've got about half the number we're supposed to have for the whole country. We'll concentrate them in the south and the east. That is at least moving us in the right direction." From all I can tell, General Petraeus was very comfortable with that basic logic.
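The arithmetic behind that logic fits on a notepad—or in a few lines of code. A minimal sketch, with the 1-per-50 ratio from the manual as cited above, and the Afghan population figure as my own rounded assumption:

```python
# Force-sizing benchmark: one counterinsurgent per 50 inhabitants,
# per the 2006 counterinsurgency manual cited in the talk.

def required_force(population: int, people_per_counterinsurgent: int = 50) -> int:
    return population // people_per_counterinsurgent

population = 30_000_000                 # assumed (~30 million), for illustration
benchmark = required_force(population)  # 600,000

nato, afghan = 100_000, 200_000         # rough figures from the talk
available = nato + afghan
print(f"Benchmark: {benchmark:,}; available: {available:,} "
      f"({available / benchmark:.0%} of the benchmark)")
```

That 50 percent shortfall is exactly the gap the concentrate-in-the-south-and-east logic was meant to bridge.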

Lo and behold, six months later the guy that Petraeus himself has helped appoint to the new position, General McChrystal, winds up saying: "You know what? The logic was fine, the basic idea was fine, but the arithmetic was off a little bit. Turns out 68,000 Americans isn't quite enough. I might need 90,000 or 100,000."

That's the quality of the methods we've got in this field. Part of what McChrystal is saying is: "You know what? Now I've had six more months of data that Dave Petraeus and David McKiernan and Bob Gates did not have when they did the review back in the winter."

We put down those additional forces—the ones Obama approved in March. They all arrived in the spring and summer and went to different parts of the south and east of Afghanistan. McChrystal didn't want to move them around. He wanted them to go in, clear, and then stay, because that's the way you win over the population. That's also the way you protect your own troops: if you're constantly re-clearing the same place you already cleared because you get up and go and then come back, you're going to run into a lot more IEDs [improvised explosive devices]. So he wanted to put them down and keep them there.

Well, he did that and he found he didn't have enough people. He had isolated pockets where we had cleared and established some control, but we didn't have enough of a contiguous zone even for the south and the east of the country to get the economy going, to prevent the Taliban from having sanctuaries in the next town over or in the next village over.

So he said: "I've redone the math." Even though the numbers are still apparently in some flux and he's got a range, it looks like he's asking for 90,000-100,000 total troops—roughly 20,000-30,000 more—just six months after Dave Petraeus, who kindly blurbed my book, and I love the guy, and I was lucky to go to school with him, and I think he's fantastic, had to suffer with the same methods that the rest of us have to suffer with in this business, and basically say: "You know what? The math was a little bit off. Turns out we need about 30,000 more people than we thought just six months ago for the exact same job."

Again, this is not a criticism—and I'm being a little flip just to drive home the point—but it is more or less what has happened.

So I support McChrystal because I know what they've done. They've learned a lot more. They've put down these forces and they've learned that we don't have enough for the south and the east, and it's clear we don't have enough, and the Taliban are maintaining sanctuaries in the places in between where we've put down these initial NATO reinforcements.

If we could clear them out throughout the whole south and east, we would begin to allow the basis for an economy to regenerate, for that ring road to be safe, for the populations not to be intimidated by Taliban sneaking in at night from the next town over—we would have a little better chance of creating a zone of positive energy and government control.

So he has some additional information Petraeus did not have six months ago. But nonetheless it's sobering about the limitations of our methodology.

So I personally give Obama some space on this. I think the president deserves some time to understand why the devil these same generals who asked him for maybe 70,000-75,000 troops last winter and promised that would be enough now seem to want 100,000 or more for the same job. I think he's entitled to understand a little more about the uncertainties and imprecisions of this methodology.

Again, I think there are good answers, and ultimately I support McChrystal and support Petraeus. But it has to raise your eyebrows a little bit when just six or eight months later you're being told that, "Oops, we were off by about a factor of two in the number of additional troops that we thought would be needed for this war."

Remember, just to give you a little bit of a benchmark in case you've forgotten, under President Bush about this time last year we had about 30,000 U.S. soldiers, making for a total of about 65,000 NATO soldiers. President Bush was adding modest numbers of additional forces in the last six months of his presidency. Then, when Obama came in, he approved an increase to 68,000. We're almost there now, we're almost up to that 68,000. McChrystal appears to want something closer to 100,000 total, so maybe another 30,000—although the numbers in the press, as we're all aware, are all over the place.

So President Obama is entitled, recognizing the imprecisions of this business, to take some time. What I encourage my friends in uniform to do is: "Don't rely entirely on these broad rules of thumb. Go in and explain to doubting members of Congress and the administration on a more micro level, where we have forces and where they're not yet in position and why we need to fill in those gaps."

I think that argument works better than some broad mathematical formula about how we need one counterinsurgent for every 50 Afghans. I don't think people are going to buy that kind of crude, almost think-tanky kind of simple algorithm—and, again, they shouldn't, because the methods of this business are too imprecise.

So we should tell more of a story of what's happening on the ground. The methods do not give us the definitive answer. Until we've looked in more detail at what's on the ground, what's going on in those places that we've cleared and what's happening in between, we're still a little unsure of what to do next. So that's point number one.

Second point—and then I'll be done—is how do we know if we're winning—or at least if we're not losing, which perhaps is the more realistic short-term goal—and, unfortunately, how might we know if we've lost? There is no short answer.

The Executive Branch just sent the Congress 46 different metrics on this war. We had our New York Times op-ed today—for those of you who don't get bored by the op-charts we do every three months—covering Pakistan, Iraq, and Afghanistan. We showed about six or eight indicators for each country. We maintain about 50-75 for each country in our online databases.

We always say part of the reason we're giving you 50-75 is that we don't know which ones are the most important. We don't know which ones are the most accurately measured. We also are probably forgetting a few. In fact, we are always trying to think of what would be a better way to understand this dilemma.

Right now, the Taliban are infiltrating Kandahar without a lot of violence. They have reduced the number of attacks—I mean, they had a couple of big car bombs, but for the most part they don't use car bombs; they use assassination and night letters and much more sophisticated techniques. Frankly, they're a more formidable enemy than al-Qaeda was, in terms of how al-Qaeda fought in Iraq.

They're tougher to beat, because they are not alienating the population as much. Now, the good news is the population does remember what they do when they're in power. And so if you ask Afghans "Do you like the Taliban?" they almost all say "No." But the Taliban is learning, and they have figured out they've got to be a little kinder and gentler in the way they treat some of the populations they're close to.
They have also managed to create a sense of battlefield momentum and a little bit of intimidation, a little bit of fear, on the part of people.

So how do you measure these things? How do you find the right metrics to know if people are silently giving in to the Taliban, even if the trendlines in the violence don't look all that terrible? The trendlines are very bad for our troops—we've cleared a lot of IEDs and been hit by a lot of them—but the violence against the Afghan population is not getting worse right now, and in fact it's not nearly as bad as it was in Iraq three or four years ago. Not nearly as bad.

The level of violence in Afghanistan—and this will sound a little crass, but let me just say it—is basically tolerable. The average Afghan citizen is not in great danger of being killed by this war day-in and day-out. They are in danger of living in a country that is mired in poverty and corruption, and the Taliban has figured out how to tap into that frustration and actually try to create a sense of a little bit cruder, more rapid justice.

We know this as analysts. So how do we apply the science of war methods to such an environment? Well, I'm sure you'll have some ideas for me in discussion, so I won't go on too long.

But we do look at a lot of public opinion polling. Now, when you do polling you have to worry whether people are really answering honestly. Do they feel intimidated—that maybe the questioner is a member of the Taliban, so they had better give the answer the Taliban wants? Do they feel intimidated that the questioner is a NATO official, so they had better give the answer the NATO people want to hear? So you have to look at a number of different polls done by different people, look for trends over time, not trust the data you get at any one moment, and then find other ways to test your assumptions as well. That's one way we try to approach this problem.
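A minimal sketch of that cross-poll sanity check, with entirely hypothetical numbers: compare the direction of change across independent polling series rather than trusting any single reading.

```python
from statistics import mean

# Hypothetical quarterly "confidence in government" results (percent)
# from three independent polling organizations.
polls = {
    "poll_A": [45, 43, 40, 38],
    "poll_B": [52, 49, 47, 44],
    "poll_C": [41, 40, 37, 36],
}

# Per-poll trend: change from first to last reading.
trends = {name: series[-1] - series[0] for name, series in polls.items()}

# Levels disagree across polling houses, but if the direction of change
# agrees, the trend is more credible than any one number.
falling = [t < 0 for t in trends.values()]
consistent = all(falling) or not any(falling)

print(f"Average trend: {mean(trends.values()):+.1f} points; "
      f"direction consistent across polls: {consistent}")
```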

We also are trying to create metrics like: Does the average Afghan farmer know how much he is going to have to pay in bribes to get food to market? Now, when the Taliban run a place, they tax the farmer, but they only tax him once, and then they let him take his food or his opium to market. So they manage to create this huge drug trade, but with a veneer of civility and predictability to it. When the government runs an area, you've got the police hitting up the farmer for bribes every five miles. So we're trying to figure out indicators that will allow us to assess whether the Afghan police force is becoming less corrupt, on the assumption that if we can't figure out a way to begin to make a dent in that—and I appreciate the laugh, because I know you're right; I hear your implied message, which is don't get your hopes up.

But we don't have to expect this place to be Nirvana—or Valhalla, to use Secretary Gates' line—but we do have to begin to allow the government to look more credible in the eyes of its own people than the Taliban. If we can't get over that hump, then we're probably in some pretty bad trouble.
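As an aside, the farmer-bribe indicator mentioned a moment ago can be made concrete with a toy comparison—every number below is hypothetical:

```python
# Predictability of extortion: one Taliban tax at the point of origin
# versus a government-held route with a bribe at every checkpoint.

crop_value = 100.0                  # assumed value of goods taken to market
taliban_tax_rate = 0.10             # assumed: a single 10% tax
bribe_per_checkpoint = 3.0          # assumed flat bribe
trip_miles, miles_per_checkpoint = 60, 5

taliban_cost = crop_value * taliban_tax_rate
government_cost = (trip_miles // miles_per_checkpoint) * bribe_per_checkpoint

print(f"Taliban route: {taliban_cost:.0f}; government route: {government_cost:.0f}")
# The farmer can predict the first cost up front; the second he cannot.
```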

So one of the reasons why I do tend to believe Afghan public opinion polls is that there seems to be a certain logic to them. For example, the police are totally unpopular but the army is reasonably popular, and that's what you hear from other people who work with both institutions as well—the Afghan army is showing promise, the police are still riddled with corruption.

So the combination of making reforms in the police, maybe getting their salaries a little higher so they don't need the bribes as much, having a few people fired so there's a message sent that you better not engage in corruption because we're watching or the Afghan watchdogs are watching—these are the kind of things that need to happen.

But as you can probably already see—and I'm sure many of you have thought this problem through at least as much as I have, so you perhaps reached this conclusion even before tonight—this is at least as much art as science, and that again is a theme of my book. I really do believe in analytics, in data, in trying to be precise; and yet at the end of the day, I think you wind up in a policy debate based on politics and strategy and history, and you can't get away from those realities.

You cannot make war overly scientific or precise. You ultimately have to bring it back to people, to politics, to the human spirit—just what Clausewitz and Sun Tzu always said. So I pivoted off those guys to write this book, and I apologize to their memories for in some sense playing off the notion of The Art of War, or profiting from the chance to counter it a little bit. But I ultimately agree with Sun Tzu and Carl von Clausewitz—at least as much as I disagree; I just don't believe you can avoid being specific and factual in these debates.

And so on Afghanistan my bottom line is that right now we're losing—McChrystal is right—but we have begun to level things off. We haven't turned them around. I don't know—when McChrystal says we've got 12 months, as he wrote in his strategic assessment, I think that is, again, more of a guess than anything precise. It doesn't feel wrong to me, but I also cannot back it up.

But I do think in the course of 2010, if we add more troops and if the strategy works, we should begin to see progress, because that is the timeframe over which we should be able to do a lot of the clearing and holding of these areas and then begin to build up the kind of Afghan institutions that are needed.

So if we don't at least see pockets of progress in 2010 with the new strategy, then I become worried that the police force is corrupt beyond reformability, that the government of President Karzai is corrupt beyond reformability, and all this has seeped so much into the consciousness of the population that they have indeed begun to simply concede or accept fatalistically the likelihood of a Taliban recovery and return to power.

But again, that's notional, that's debatable. And so let's have the debate.

Thanks for your patience. I look forward to your comments and questions.

Questions and Answers

QUESTION: Two things. First, my son served in both Iraq and Afghanistan, most recently in Afghanistan. He was working with the police in both places—a very big difference, by the way. But one of the things that he inquired about was "Why all this corruption?" The answer was, "Because we can't put food on our table." That is going to continue unless we figure out where to put some money to supply the basics for these people; otherwise they have no choice in the matter.

My question is one about mathematics. If in Desert Storm we had 500,000 troops in Saudi Arabia, why did we think that 130,000 troops could do it in Iraq? That's an amazing thought. The government decided that General Shinseki didn't know what he was talking about and Colin Powell didn't know what he was talking about and they had this smaller number. How was that possible?

MICHAEL O'HANLON: Thank you. I'm sure all of us would agree. And thank you for your son's service as well. I know it's a sacrifice for the families, just as much as for the soldiers really, because you miss him when he's gone and you worry about him.

It's not really kosher in Washington to say this, and so I doubt it's kosher on the Upper East Side, but in partial defense of Rumsfeld, let me say the following, and with apologies also to any soldiers in the room. The U.S. Army is historically known for "when in doubt add 50 percent more force." That's the American way of war.

It's usually not a bad idea, because where's the problem if you have a little too much? In Vietnam and maybe a couple of other places it was arguably the wrong idea, but for the most part it's better to err on the side of safety.

But Rumsfeld said, "There are a couple of problems with doing it the full Army way. One of them is that we're going to win the invasion part of the war fast anyway"—and he was right about that.

He was looking too much to the next debate, and I do not have any sympathy for him for that. But he was thinking: "Okay, the Army, they're always asking for too much, and if we let them bring 500,000 people they're going to keep telling us that you need to have 500,000 for any such future war, and they're going to be clamoring for all this budget share which they don't really need, and it's because they're just ultra-conservative and they haven't realized how warfare is changing."

You know, I have to say I don't agree with where Rumsfeld wound up, but some of his criticisms of the Army I think were correct. I don't mean everyone in the Army, but I do mean a certain element of traditional Army thinking.

By the way, the invasion force turned out not to be too far off. The problem was, of course, the stabilization force, and that's where there's no defense for Rumsfeld.

One thing I feel vindicated on in my role in the debate is I was saying at that point and before the war "you've got to have plenty of people to stabilize." So I was fully with Shinseki.

Shinseki arguably did not do enough. People say Shinseki was fired. Shinseki was not fired. He was going to leave the position anyway by the summer of 2003 because that's when his tour was ending.

He arguably could have pushed the conversation further and done additional studies to figure out what would have been needed to properly stabilize Iraq. So I give him a lot of credit, but he could have done even more, given that he was on the way out. But still, given that he was the only one who really spoke out, obviously he had some courage, and I do commend him.

But there was a debate there, and Rumsfeld was somewhere between 25 and 40 percent right in his thinking. Of course, I'm not going to really defend him, because what he obviously totally misunderstood was that this was not the right moment for that debate; and moreover, there's no excuse for not having a plan for stabilizing the country with whatever number of troops winds up on the ground. I mean, Rumsfeld didn't want to give Saddam umpteen months of warning to allow Saddam to do whatever else he might have come up with. So there were some strategic arguments in favor of invading relatively fast with a somewhat smaller force. But I cannot begin to understand why you would ever discard the plans for stabilizing the country.

I just saw General Zinni the other day—we were on CNN together—and as Central Command combatant commander he had a plan in the late 1990s for stabilizing Iraq, and Rumsfeld threw it away. I mean he actually did net negative work on the stabilization effort. He actually walked us back from work we had already done and said to people, "You can't do that." That's where he made his mistake.

But to understand the mistake I think you have to understand the 25 percent of his thinking which was correct. I'm a Democrat. I was not in the Clinton Administration, but I was a defense planner in the 1990s. We're the ones who created that Army. I mean the Army of 2003 was the Army that was purchased in the 1980s and 1990s. Rumsfeld had only been Secretary for a year and a half.

Now, you could argue, fairly, that that would have been an argument for postponing the invasion, since it was not a war of necessity, to use Richard Haass' term. But if there is blame on that one, I think it's a more widely shared blame.

On the issue, however, of discarding the stabilization plans, that's Rumsfeld through and through, and that's to my mind why I really think he probably was our worst Secretary of Defense. I say that with sadness, because there was a part of him and a part of his way of thinking that I liked. But unfortunately, he made his biggest mistake on what will probably be his most important legacy.

On the issue of Afghan police, I think you're right. There is movement. We have increased their salaries—it probably has to go farther. Also, when Afghan police get killed, we've got to take care of their families better; we've got to have a program of survivors' benefits and things like that. And we've got to give them better equipment. Some of this stuff is still too slow. The salary part has gotten a lot better, but I still tend to think it's on the lower side.

You know, through last year there was still this argument in American policy circles that said, "We've got to keep the Afghan army and police small and cheap because they cannot afford a big or expensive one, and someday it's going to be theirs, and therefore we shouldn't build something that they cannot afford."

That sounds right, and it's the way development people are always inclined to think about development in a normal country. But here we've got a war to win. If we don't get the Afghan army and police to help win the war, we're going to have to do all the fighting ourselves indefinitely.

So I'm with you. I think that if in doubt on this one, even if it costs the United States $8-10 billion a year for the next two decades to sustain this force—and let's hope that we can get some allied help—but even if it does cost us that much, it's better than the alternatives.

QUESTION: I appreciated your analysis and your comments at the end where you take into account that there are other factors apart from the metrics. But I'm concerned that by putting the accent on these mathematical models we tend to obviate issues like history, like culture, like language, like hostility towards any type of foreign occupation.

So my question is: Does the existence of these models tend to promote and encourage war? Do these models at some point yield results that say we should pull out, or is that essentially a political judgment at the end of the day? I'm concerned that by treating war as a scientific phenomenon we become blind to the human intelligence side of things, which might argue against those military interventions in the first place.

MICHAEL O'HANLON: That's a very fair question and well put.

I hope I gave enough caveats to what I believe about the science of war that you know I'm at least partly sympathetic to where you're coming from.

Let me just say that when I got into this business in the first place, in the 1980s—and with apologies to Republican friends; part of all of us loves Ronald Reagan—the Reagan buildup troubled the heck out of me. What I saw was a president who was extremely good at wrapping himself in the imagery of national patriotism and national security, and at supporting generals who wanted larger budgets, to some extent justifiably. What troubled me was that the atmospherics of that day, if unchallenged by analysis, pushed a lot of our defense policy towards more and more and more, because the emotions of it made people want to do more of this kind of stuff. The analytics helped challenge some of the ideas—say, on nuclear policy, the missile defense debate, and the Star Wars concept.

If you went through any of the mathematics, you very quickly realized there is simply no way that any kind of a missile defense system is ever going to meaningfully blunt a Soviet attack for any time into the foreseeable future, if in our lifetimes at all. So there may be arguments for missile defense, but they are not about blunting a Soviet attack.

You may or may not have believed that the Soviets had a nuclear advantage over us. But when you actually examined the amount of damage that could be done by either side's arsenal against the other, and tried to do so in a somewhat quantitative, scientific way, you realized the degree of overkill we had already purchased.

So it made me a little more sober and a little more restrained about the need for these big defense buildups, in some areas at least. And so I came into this motivated by the idea that if you don't use analytics that you actually allow that kind of phenomenon to run amok.

People get intimidated by the imagery of a popular president riding a national security slogan to effective political leadership. They get intimidated by generals with four stars saying, "My military experience tells me this, so don't challenge me." But I find generals are actually very good at this kind of give-and-take. If you make an error in your analysis or you try to push the science of war too far, they'll pull you back.

But they do not reject the basic concept, because they have to, at some point, plan themselves. How do you think war plans are created? How do you think budgets are made? How do you think estimates on how long a war might last are created; what kind of ammunition stockpiles we might require; what military base is needed in the Persian Gulf; if some are in jeopardy, how hard do we have to work to keep others? These are questions of defense analytics. I just don't see the alternative.

So you may not like the fact that I'm writing a book about this because it tends to give a little bit more of a nudge towards thinking of war scientifically. But don't worry. I'm not getting too many invitations to appear on The Today Show with this volume, so it's not going to become a mass national movement in all likelihood.

On top of that, the first page of my book sets out the philosophy here: that I don't believe you can do any kind of precise prediction. And certainly, when it comes to calculations on casualties—even though they show up as numbers on a page—doing them still makes me sick to my stomach. So I don't think that trying to force yourself to think about this stuff systematically makes you impervious to the risks of war or inclined to want to fight.

So the short answer to your question is no, I don't think that. But the longer answer is what I just said.

QUESTION: I doubt that most people would question the wisdom of analytics when you're trying to assess "how many troops do I need to go in and occupy an area, establish a beachhead; what ammunition do I need for that?" But I question the wisdom of relying on defense analytics to go much beyond that, into the area of what is necessary to win over the population. As the previous questioner suggested, although it's very much related to the military outcome, you're then getting into the province of sociology, history, all sorts of soft issues, where I don't believe military analysts are really equipped to have particularly deep views.

This is not to denigrate them. They're very good at understanding what the fire power should be, what kind of equipment I need, what my logistical requirements are. But all these other areas, which are so crucial for the long-term success, are not areas that we should reasonably expect them to have a deep understanding of. They don't necessarily understand the history of the Sunnis and the Shiites and the Kurds; they don't understand what's gone on in the Ottoman Empire—some may do, but the average, even very distinguished, general has no reason to know about those things and doesn't proclaim himself to be an expert in it.

So when you talk about the science of war, the real question is: Are you just talking about giving a model to predict that initial phase of getting in there, occupying, and securing the area; or are you going much beyond that, to what you call stabilizing, because stabilizing to my mind is a much more woolly and a much, much vaster subject?

MICHAEL O'HANLON: That's extremely well said. I hope it's not inconsistent in the least with anything I say in the book, because I actually, probably less eloquently, try to say some of those same things.

What I believe you can do with metrics on a counterinsurgency campaign of the type you're talking about is not so much decide whether the mission is fundamentally feasible in the first place. It's more a matter of, once you're in, how it's going; and also, if you're going to go in, what you realistically need to have a decent chance at doing the job right.

I would basically concede every other point you made. So I think if you were to read that part of the book you would not find me going around and picking candidate countries for our next stabilization mission. That's not my purpose and—which you implied but were kind enough not to say explicitly—not my expertise or ability.

If I have any background, any feel, for a place, it's Congo, where I was a Peace Corps volunteer for two years, and that's about the only place I would claim enough—and that was 25 years ago, so even there I better be careful.

So I take every point you said. In fact, it's a good pitch for bringing in and including many other disciplines in these kinds of debates, which is part of what we try to do at Brookings. We have regionalists and functionalists, and we try to actually interact.

I realize Ken Pollack is not universally popular for his role in the Iraq war debate, but he's a very good regionalist, and I've done a lot of my work with him; and I've done a lot of my Northeast Asia work with experts on Japan and Korea, et cetera, because I agree with you that you need to have that, and even that just begins to scratch the surface of what's needed.

So yes, I think we basically agree. I just hope I haven't done any disservice to that point of view in the book. You can feel free to let me know. If you peruse it later and find I have, I would actually appreciate you telling me.

QUESTION: Following up on the two gentlemen, it seems to me that your methodology lacks two things: enough information from our CIA, or whatever, before you make your plan or while you're making your plan; and secondly, enough information from our State Department on the culture of the country involved. It's a little late now for Afghanistan or Pakistan, but I think that they ought to be boning up on Iran and North Korea and other places that we may end up some day fighting. I think we were terribly lacking in both items in even Vietnam. So I wonder if we're going to get that changed.

MICHAEL O'HANLON: I think that's convincing. I would agree with you too.

Not in this book, because this is a book about defense analytics, but in the other book I've done recently I argued for a lot of what Hillary Clinton is trying to do in beefing up State Department capability. We were talking earlier about what her legacy could be. That may be her most important effort so far in the Department of State. I think she's being successful, and I think she should be. There are good reasons why State needs to be a much stronger arm of the government.

And again, I did not in this book either endorse the Iraq war or endorse the Afghanistan war. For one thing, the book is published in 2009 and they're both well underway; but also because I agree with the previous points that have been made. So yes, I think we're on the same page.

QUESTION: Have you tried to use your analytics to study the possibilities of taking out the Iranian nuclear establishment; and, if so, what have you concluded?

MICHAEL O'HANLON: That's a great question too.

I do not talk about that in this book. We actually just did a Brookings study where we looked at nine different options for how to handle Iran. We did it in the same spirit of starting with the analytics and helping people reach their own conclusions and just sort of provide our best estimates of how different options would work. Of course, my job was the military options, because that's the job I was given, not because I support them. In fact I don't; I'm more of a containment guy.

I think Iran is going to get a nuclear weapon. I think the question is going to be: How do we try to put enough pressure on them that someday, under a different leader, they perhaps rethink that? That's not a very optimistic way to think about the problem, but that is what I've come to in my own conclusions.

Having said that, we did in the other book use some of these methods. And yes, I think that you're familiar—there's no huge mystery here, except that we don't know what we don't know about other sites. In the last three weeks we've learned something new about other sites. That was always going to be the biggest uncertainty. That's a fundamental intelligence uncertainty that defense analytics don't resolve.

Defense analytics tell you that it's easy for us to destroy the nuclear reactor, the one that could produce plutonium for a plutonium bomb. They tell us that we can probably get any kind of uranium enrichment facility that is within 10-30 meters of the surface of the earth and hardened by no more than 6-8 feet of concrete.

But if it's deep enough, those same weapons won't get it. We can perhaps then try to cut off access to that facility, or use commandos to go in and do something to it. But you still can't attack what you don't know about.

And then the question becomes: How fast can they recover and can you keep hitting them as they are recovering? The answer there is probably no.

And so I tend to think the estimates that are out there are about right—yes, we could probably delay Iran one to three additional years with this kind of air strike campaign. Even once they get back to building up uranium centrifuges—which for them is the way to do it, because we can destroy the reactors as often as they build them—we can probably restrain the rate at which their arsenal grows compared to what would have happened in the absence of military action. But having said that, I don't find that a good enough accomplishment, given what it's likely going to do to Iranian public opinion and politics.

So I can't prove this—it's a judgment, it's using analytics to inform a broader policy debate, which is all I think analytics can ever do—but my instinct is to say that without any military strikes, Iran probably has ten bombs in 2015-2018; with military strikes, maybe they only have one or two, and then a slower growth path after that.
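To show how rough that judgment is, here is a toy projection of the trade-off. The only anchors from the talk are the endpoints—roughly ten bombs without strikes versus one or two with them; the start year, delay, and build rates are my own illustrative assumptions:

```python
# Crude linear build-up of an arsenal after a program (re)start.
# All parameters are illustrative assumptions, not estimates.

def bombs_by(year, start_year, delay_years, bombs_per_year):
    productive_years = max(0, year - start_year - delay_years)
    return productive_years * bombs_per_year

no_strike = bombs_by(2016, start_year=2011, delay_years=0, bombs_per_year=2.0)
with_strike = bombs_by(2016, start_year=2011, delay_years=2, bombs_per_year=0.5)

print(f"No strikes: ~{no_strike:.0f} bombs; with strikes: ~{with_strike:.0f}")
# No strikes: ~10 bombs; with strikes: ~2
```

Whether that difference justifies the backlash is the policy debate the numbers can inform but not settle.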

Then we can all debate, first of all, whether my numbers are right; and then, secondly, whether that's an important enough difference to be worth the backlash effect from the military strike.

JOANNE MYERS: I would just like to end by asking you a question. You said that we were losing the war in Afghanistan. I wonder what does winning the war in Afghanistan mean?

MICHAEL O'HANLON: That's a great question.

I think the key to it is getting the Afghans in a position where they can keep fighting themselves, less and less with us, more and more on their own. In other words, we don't have to stop the drug trade, we don't have to stop the current level of violence, we don't have to see the Taliban eviscerated. If the current level of violence could be capped and then the Afghan institutions could be given the capacity to maintain the fight on their own, that would be close enough to an exit strategy for me.

JOANNE MYERS: How can you cap them? I mean the Taliban just keeps growing and growing.

MICHAEL O'HANLON: I'm just giving you a sort of definition of how I would see it. To get to that point, I agree I have not given an easy answer. That is my answer, but I would acknowledge that it is not an easy answer or an easily attainable goal.

To get to that point you still have to do most of what McChrystal wants, because you still have to—you cannot sort of freeze things where they are. The Taliban are on the march, they're doing better, they have momentum, people know they have momentum. And so, in the absence of changing that dynamic, we're going to gradually lose this thing, and we're probably not going to train up the Afghans fast enough to counter that trend.

And so in the abstract, if we give them the tools to carry on the fight, we're okay. But in practice we're also going to have to help them turn the tide of battle ourselves, which is why I ultimately agree with McChrystal that we need more troops: I don't think we have the ability to wait for the Afghans to get ready to do it primarily on their own.

JOANNE MYERS: I'd like to thank you for giving us the tools to look at the military more effectively.
